Document
stringlengths
395
24.5k
Source
stringclasses
6 values
PCI (PRI, BRI, FXO, FXS) This section is meant for connecting physical PCI gateway to the system. To add a new device, go to "PCI (PRI, BRI, FXO, FXS)" section and click "New Gateway" button. Configure the following settings: Name - the name of the gateway Device - allows you to select the device port number (Span) b-channel - voice data channels d-channel - Alarm channel. Zone - Area (region) Timing source - Select the source to synchronize the timer. 0 - master, 1 or more - the master is the remote side. The higher the number - the lower the priority. Lbo - Attenuation depending on the distance to the far side. Framing - Telephone signaling type. For E1, ccs or cas is selected. Coding - Coding in the line. For E1, select ami or hdb3. Yellow - Checking and generating crc4. Switch - Type of signalization (national, ni1...) - national: National ISDN type2 (American) - ni1: National ISDN type 1 - dms100: Nortel DMS100 - 4ess: AT&T 4ESS - 5ess: Lucent 5ESS - euroisdn: EuroISDN - qsig: Minimal functionality protocol used to build a "network" between two or more PBXs from different vendors. Mode - Alarm Mode - cpe: Client side of the equipment (user) - net: network side. Opts - Additional options - Omit display: Deactivates sending the caller name - Omit redirecting number: Deactivates number redirection - AOC: Enables some Advice of Charge bits, supposed to handle incoming AOC messages and output some details at NOTICE loglevel (experimental). Ton - Type-of-Number by default for outgoing calls: "international", "national", "local", "private", "unknown" (default). Layer1 - B-Channel Layer1 protocol, "alaw" или "ulaw". Overlap dial - If the value "yes" is set up, the dialed digits will be immediately sent to the stream. The default value is "no". Digit timeout - The waiting time for the entire digital combination. This may be necessary in case of using the Overlap mode, to avoid the last digit being lost. Idle restart interval - The parameter sets the time in seconds between the restart of unused channels. The default setting is 3600. The minimum value is 60 seconds. Some PBXs do not like restarting for channels. As a solution, you can set a very large interval for them, for example: 100000000 or set the value never to completely disable the restarts of channels. Restart timeout - The time interval after which restart attempt will be made. Restart attempts - The number of restart attempts. Service message support - Activation of service message support for the channel. Local number - The system will not answer calls when the called number does not match the specified number.
OPCFW_CODE
Javascript/React cannot find module (for custom modules defined within the project) Hi guys, I'm trying to use hydrogen with a project that has a lot of custom dependencies, and when importing them I get the following: "Error: cannot find module 'bundles/page/components/TrackedButton'; Here's a screenshot: https://www.dropbox.com/s/i2we272w9mqmenu/Screenshot 2017-09-24 14.41.12.png?dl=0 Is there a way to use this with custom imports? I noticed that I get the same error if I try running the jp-babel kernel on jupyter and running that same line in the notebook. Might be something to do with how jp-babel is being initialized, here's the kernel specs: { "kernelspecs": { "babel": { "spec": { "display_name": "jp-Babel", "argv": [ "jp-babel-kernel", "--hide-undefined", "--protocol=5.0", "{connection_file}" ], "language": "Babel ES6 JavaScript" } } } } Is bundles an npm package or is it a local package? If it's local, you'll need to import from ./bundles It's a local package, the ./ worked with the caveat that i also needed to add the parent directory to bundles "static", but this can be omitted when running the app that's why it wasn't added before. I'll close this. Thanks Hey guys, I'm running into something similar. Can't get Hydrogen w/ jp-babel to successfully import modules. Ideally, I'd love to be able to do: import React, { Component } from "react"; import Button from "./components/button" However, both of these throw the same cannot find module @giovannipbonin linked above. No amount of fiddling with paths, adding parent dirs in and everything seems to help. Seems like pretty basic functionality. Have I messed something up with my install? Here's my kernel specs { "kernelspecs": { "babel": { "resource_dir": "/home/dano/.local/share/jupyter/kernels/babel", "spec": { "argv": [ "jp-babel-kernel", "--hide-undefined", "{connection_file}", "--protocol=5.0" ], "env": {}, "display_name": "jp-Babel (Node.js)", "language": "babel" } } Hi @jdpigeon, I'm not very familiar with the jp-babel kernel but here are some general notes: The first time you run code Hydrogen starts a new kernel in the directory specified via the Hydrogen settings. Depending on your configuration Hydrogen will start the kernel either in the first project's directory opened in the tree view (default), the project directory relative to the file or the current directory of the file. Can you double check if the kernel is started in the correct directory to make the imports work? Aha! That's it. Of course. I always set my root in this one project one directory above where all the node_modules are. Now I seem to be in the situation giovanni was in, though. I can import local stuff, but I have to use the path from the project root rather than the local path. I can temporarily change things in order to test with Hydrogen, but do you think there's a way of having Hydrogen figure those local paths based on which file you're running it in?
GITHUB_ARCHIVE
"a different take" (interpretation/viewpoint) versus "a different tack" (approach/alternative/direction) I found myself writing "a different take" and wondering if I didn't mean "a different tack".  It got me wondering what the difference, if any, is between these two phrases. Initially, my view was that "a different tack" is the correct phrase and "a different take" was just a common mistake caused by the spelling similarity between tack and take, and people's familiarity with using take in phrases such as "What's your take on all of this?". I googled this and only found results about tact versus tack – nothing about take versus tack. There is a dictionary meaning of take as a noun which is relevant: ”a particular version of or approach to something”. Tack is a reference to "change course by turning a boat's head into and through the wind", although some dictionaries actually have a more relevant definition which I suspect is derived from the use of the sailing meaning as metaphor: "a way in which you do something or try to do something". As noted, I could find nothing on my particular question of take versus tack. I have come to believe these are two separate, legitimate phrases. While hard to pin down, I would choose take where I mean someone's point of view (e.g., "Donald always has a unique take on things") whereas I would use tack to mean something more action oriented, such as "Let's try a different tack to solve this problem." I think there is substantial overlap when the desired meaning is approach, as both words cover this. In those cases, either phrase can work. Lastly, I will say that when I started I did not understand why people got confused between take and tact but I now understand this better. I think that many people are making up a shorthand version of tactic which, as of yet, does not exist in English. It's also very common to hear "a different tact". similar - both acceptable While you do see “a different ‘tact’” this seems like a clear mistake to me, although understandable given the similarity to tactic. I am new to this stackexchange. Why did I get downvotes? The downvotes are most likely for the same reason as the closevotes: you haven't included your attempts to find an answer (aka "research"). In this case, you should look in a dictionary to see what you can find and include this in your question. (Googling the phrases will be the best way to find dictionaries with relevant definitions.) Thank you @Laurel. I did google beforehand and only found results about tack versus tact - nothing about take. There is a dictionary meaning of take as a noun”a particular version of or approach to something”. I could find nothing on my particular question of take versus tack. You should link to the source for all the definitions you quote and state their source visibly (i.e., not requiring the reader to hover over the link).   It’s OK to use abbreviations like “MW”; see List of common abbreviations and acronyms. They're the sort of pair that could eventually become conflated, but they're independent. One's take on a situation: think of a "take" when filming a movie scene. "Scene 15, take 2." A director might call for more takes after requesting changes to elements of the scene or the shooting of it. A different take on a situation is a different way of looking at it. In sailing, "tacking" is a technique for heading askew to the direction in which one wants to go when that direction is heading into the wind. "Take" is a relative overview, a snapshot or focal point. It may lack more details and content than a "tack". "Tack" also refers to tailoring, "a long stitch used to fasten fabrics together temporarily, prior to permanent sewing." "Tack" in tailoring is to temporarily 'pull together'. And since "tact" was brought up, it means a plan, or strategy, towards a goal, as in 'tactic'. They all fit, though each can be asking for a different summation; one an overview, one a point of connection, and one a method, or plan. I can see the logic of tact as it relates to tactic but as far as I know that is not one of the accepted definitions of tact. I was also surprised to see your definition of tack. While it is legitimate I have always thought tack in this context was the meaning related to sailing. @PeterVermont Tack: Transitive verb; to join or add in a slight or hasty manner. Intransitive verb; b : to modify one's policy or attitude abruptly. Merriam-Webster, In asking for someone's 'tack' it could apply as either meaning to put together a hasty summation, or, asking for how a plan might alter with additional information. And, I have always understood that way.
STACK_EXCHANGE
How do I prefill a formview textbox in insert mode if the text property is already set to bind? I have a formview control that I use to upload data to an attached gridview controller. At the top of my gridview I use a asp:linkbutton to set my formview to insert mode with FormView1.ChangeMode(FormViewMode.Insert); Here is my textbox in the insert template of the formview: <asp:TextBox ID="Date_Position_AvailableTextBox" runat="server" Text='<%# Bind("Date_Position_Available") %>' /> Here is my code in the formLoad. this is where I believe I am prefilling the value of the textbox control: if (!IsPostBack) { FillDefaultVaueInFormView(); } And finally here is my FillDefaultVaueInFormView(); function. public void FillDefaultVaueInFormView() { if (FormView1.CurrentMode == FormViewMode.Insert) { TextBox txtPositionAvailable = FormView1.FindControl("Date_Position_AvailableTextBox") as TextBox; if (txtPositionAvailable != null) { txtPositionAvailable.Text = DateTime.Now.ToShortDateString(); } } } When I enter the Insert Mode the field is Blank. What am I doing wrong here?!? You need to move your FillDefaultValueInFormView() to occur right after you set the FormView to Insert mode. When you attempt to fill the fields at the form load, the textboxes don't exist (when you're not in insert or edit mode, the "textboxes" are actually labels). This is causing your if statement to come back false and thus is not filling the values. Are you saying to add FillDefaultValueInFormView() to the Linkbutton's Click command. When I do that I still get blank values In my projects, I have also put it in the FormView_DataBound method. You can wrap your command in an if block like this if(FormView.CurrentMode == FormViewMode.Insert) to make sure it fires only after you've set it to insert mode. You may want to put a breakpoint at or inside the if to make sure your process is even running. It's not uncommon to have your code somewhere you think is running but then find out that event is never triggered. I tried that earlier and it did not work...working now though...thanks guys! I am not finding any documentation on the usage of FillDefaultValueInFormView(). I'm working with a similar issue to the one posted here and this sounds like it would work if I could just see the syntax for using it.
STACK_EXCHANGE
How can I run a CGI::Application run mode from the command line? I have a run mode in my CGI::Application web-app that I would like to be able to trigger from the command line so i can automate it. From the web-app's perspective it does some processing then sends the results in an email. When called from the web interface it passes in a set of parameters (email address, which query to run, date, etc) so these need to be passed in. How can I construct a call to the CGI::Application app that will be the same as if I ran it from the web? Upon further digging through the CGI::App and the CGI documentation, it appeared to be more straightforward than I thought. The simplest case (no real argument handling or dealing with the output from the webapp run call) is: #!/usr/bin/perl use strict; use warnings; use CGI; use WebApp; my $cgi = new CGI( \%{@ARGV} ); my $webapp = WebApp->new( QUERY => $cgi ); $webapp->run(); It just takes a series of space separated name value pairs to create the CGI. You need to pass in the run mode and all the arguments. True, the 'your_script.pl name1=value1 name2=value2' form works for running the basic CGI::App .cgi file, however I would lose any ability to control input, set defaults etc. This example would still print to STDOUT, including the HTTP headers, which is not desirable. Further, there's not need to involve CGI.pm. It just confuses things when you pass and process arguments directly from the command line with a standard argument processing tool like Getopt::Long. The original CGI specification makes it easy to run things from the command line and was fully intended not as a specific HTTP-only interface but something that could handle FTP and gopher as well as new top-level URL schemes. I know what I wanted when I helped specify it. The spec I referenced should give you all you need, but for the most part it is just a collection of environment variables. If you see a request for: http://some.server.com/some/path?a=b&c=d The environment variables come out looking like this: SERVER_PROTOCOL=http REQUEST_METHOD=GET HTTP_HOST=some.server.com SERVER_PORT=80 PATH_INFO=/some/path QUERY_INFO=a=b&c=d To reverse the polarity of that in Perl would go something like this: $ENV{'SERVER_PROTOCOL'} = 'http'; $ENV{'REQUEST_METHOD'} = 'GET'; $ENV{'SERVER_PORT'} = 80; $ENV{'PATH_INFO'} = '/some/path'; $ENV{'QUERY_INFO'} = 'a=b&c=d'; system("perl your-CGI-script.pl"); Things get a bit more complicated in handling POST queries and there are more possible environment variables that may be required. Worst case you can enumerate them all with a quick CGI script something like: print "Content-Type: text/plain\r\n\r\n"; foreach (keys(%ENV)) { print "$_=$ENV{$_}\r\n"; } Now put that on the web server in place of your CGI script and you'll see all the environment that gets passed in (and the original environment so you'll need to make a few judgement calls). When I do that, I normally use an exec. No big whoop though. Cool, you helped specify CGI. :) I'm the maintainer of CGI::Application, and I do this all the time-- I have dozen of cron scripts built with CGI::Application because it's convenient to share the infrastructure with the application. The simplest approach is this: # There is no browser to return results to. $ENV{CGI_APP_RETURN_ONLY} = 1; my $app = WebApp->new; $app->direct_run_mode_method; In that example, you bypass the normal flow and call a method directly. Be sure you don't need any of the "setup" or "teardown" actions to happen in that case. If you just have one run mode you are calling, you can also just set the "start_mode", and call run(), so then the default run mode is called by default. Another idea: you can use a module like Getopt::Long and pass in values through the PARAM hash to new(), or completely replace the run-mode selection process. Here's an example where command line flags are used to determine the run mode: sub setup { my $self = shift; $self->start_mode('send_error_digests'); $self->run_modes([qw/ send_error_digests help /]); my ($dry_run, $help); GetOptions( 'dry-run' => \$dry_run, 'help' => \$help ); $self->param('dry_run' => $dry_run); $self->mode_param(sub { return 'help' if $help; return $self->start_mode(); }); } Thusly: $ perl yourscript.pl field1=value1 field2=value2 Perl's CGI library takes care of the magic for you, and it appears that CGI::Application relies on CGI (judging from their example code). You'll also need to set various environment variables to simulate the stuff the web server would set up. CGI.pm looks for those so it knows what to do. @brian d foy: not unless your code for some reason depends on those; CGI.pm does fine without them. @ysth: if this run mode has side effects, it should be using POST since its not an idempotent request. In that case, you have to add -debug to CGI.pm's import to allow this stuff to work. That's a change to the code which has side effects too. It's much easier to do it as George says. Instead of having to go through CGI::Application every time you want to get something done, enforce a proper separation of concerns, perhaps using an MVC setup. All of the functionality should exist outside of the CGI::Application stuff since that should only work as a controller. Once you separate out those bits, you can easily write other controllers for other input methods. Don't write a web application; write an an application that happens to have a web interface. When you have that, you can easily give your application other sorts of interfaces. I completely agree, there is a library that does the actual work generating the result and i've tried to limit the amount of code in the 'Controller' portion of the CGI::A to the point where its just formatting. Its just the 'Laziness' part of me that thinks I could use the CGI::App to send the email rather than write another script to call the library. Well, Laziness with a capital L allows you to easily do other tasks without future work: that Laziness is not the avoidance of work but the upfront work to save time later. (obligatory reference to YAGNI, although I tend to operate in capital L mode most of the time) It's ironic to mention YAGNI in a case where he's actually asking for it. You could automate by calling the web app using curl, wget, or an LWP GET-script with the appropriate parameters. I've used a similar system for cron-driven tasks with a Catalyst application. That deals with all the environment variables for you.. This solution adds an additional problem to solve: Now you have to make sure the URI of the cron job is protected, so that it can only be accessed from the cron job.
STACK_EXCHANGE
Cyclic groups generators, understanding example $\mathbb{Z}_6$ I am studying group theory but I am having hard time understanding the cyclic group $\mathbb{Z}_n$. On the material I am studying I have the following example: $\mathbb{Z}_6=\{0_6,1_6,...,5_6\}$ and $1_6=7_6=13_6=-5_6=...=\{...,-11,-5,1,7,13,...\}$ I know that this equality $1_6=7_6=13_6=-5_6$ holds because each of the remainder is $1$. But I do not understand what ($\mathbb{Z}_6)=\{0_6,1_6,...,5_6\}$ means. Question: 1) Considering $\mathbb{Z}_6=\{0_6,1_6,...,5_6\}$. Is $\mathbb{Z}_6=\{0_6,1_6,...,5_6\}$ all the generators of $6$? 2) How can I generate $\mathbb{Z}_6$ from $5$? I am thinking each one of this numbers is a cycle to attain $\mathbb{Z}_6$. However if we consider $\langle 5 \rangle=5+5+5...+5$ as a cyclic subgroup of $(\mathbb{Z},+)$. How can $5_6$ generate $6$? How do I read $5_6$? Thanks in advance! Downvoting without explaining is a recognized way of conducting business here – get used to it. Anyway, it's best to interpret $a_b$ as the coset of $b{\bf Z}$ containing $a$, that is, as the set ${,a,a\pm b,a\pm2b,\dots,}$. That should dispel much of your confusion. But I don't know what you mean by "generators of 6". Numbers don't have generators – groups have generators. @GerryMyerson This is the only example I have on the material I am studying. I have not covered "cosets". So I cannot understand what you mean. I am studying Group Theory for the first time. The parameters that society follows are not necessarily the best for the reason I have pointed out. Why then should I get used to downvoting without explanation? Do not answer my last question please. I am not looking for a discussion here of this matter. Thanks for the reply! You are hoping that somebody will help you with a very elementary problem in group theory. So you are relying on the generosity and helpfulness of people whom you don't know. So in your position you should be willing to accept the conventions of the forum. There are good reasons why people who up- and downvote may prefer to remain anonymous. As Gerry Myerson says "generators of 6" does not make sense. Also "How can $5_6$ generate $6$" makes no sense. $6$ is not a member of the group. Since $6_6 = 0_6$, the group element corresponding to $6$ is $0_6$, and $5_6+5_6+5_6+5_6+5_6+5_6=0_6$. You have covered cosets, Pedro, you just haven't used that word. $\bf Z$ is a group, $6{\bf Z}={,0,\pm6,\pm12,\dots,}$ is a subgroup, and $1_6=1+6{\bf Z}={,\dots,-11,-5,1,7,13,\dots,}$ is the coset of $6{\bf Z}$ in $\bf Z$ containing 1. Please try to engage with the comments, and let us know if you need further clarifications. Oh, and if you want to know more about "downvoting without explanation", I encourage you to go to the meta site and look for discussion tagged "downvoting". Let's assume your teacher and/or your text are using the (non-standard) notation $n_m$ to mean $n\pmod m$. The equation $\mathbb{Z}_6=\{0_6,1_6,...,5_6\}$ is an abuse of notation which is usually interpreted to mean that $\mathbb{Z}_6$ is a group with elements $\{0_6,1_6,...,5_6\}$ and group operation defined by $a_6 \star b_6 = (a+b)_6$. You can check that this is a group. It is not a subgroup of $\mathbb{Z}$ because its elements are not elements of $\mathbb{Z}$, its elements are in fact subsets of $\mathbb{Z}$. One of those subsets, $0_6$, is a subgroup of $\mathbb{Z}$. The others are not, they are called the "cosets of $0_6$ in $\mathbb{Z}$". But we don't need to deal with them here. Your question 1: A set, $S$, of generators for a group, $G$, is a set with the property that every element of $G$ can be written a product of elements of $S$ and their inverses. Thus it looks like you may have a typo in this question since $6$ is not a group. However, you can show that $6$ generates the subgroup $0_6$ of $\mathbb{Z}$. Your question 2: Observe that $5_6 \star 5_6 = 4_6$ and $5_6 \star 4_6 = 3_6$ and $5_6 \star 3_6 = 2_6$ and $5_6 \star 2_6 = 1_6$ and $5_6 \star 1_6 = 0_6$. So every element of $\mathbb{Z}_6$ can be written as a product of the element of the single element set $\{5_6\}$ and thus it generates $\mathbb{Z}_6$. By an abuse of language we say that the element $5_6$ generates $\mathbb{Z}_6$. Only one other element of $\mathbb{Z}_6$ generates it. Thanks! I have been reading your answer and finally understood it. I was getting so upset about this.
STACK_EXCHANGE
A common trait of successful online news and magazine sites is, surprisingly, a developer blog. Think of a developer blog as a look into the minds of the people building the site: what limitations they have, what they’re working on, what they believe their readers want or need, success stories of how they built interesting things, and even day-to-day tidbits that remind readers that the site is built by thinking, feeling people instead of a faceless entity. - I’ve heard many excuses for not wanting to have a developer blog: “Who would update the thing? Our team is busy!” “No one wants to read stuff like this, they want to read the news.” “We absolutely cannot publish this information, it’s secret. What if someone were to copy us?!” “We’re developers, not writers. We wouldn’t know what to say.” “Taking the time to write blog posts takes us away from being able to build the technology our team needs.” The list goes on and on. But for teams who do make the effort to create and update developer blogs, the rewards are great. I’m going to walk through some of the benefits of creating a developer blog for your site, using excellent existing blogs as examples of how to do this well. I believe very strongly that the best way to learn something yourself is to teach others and share your knowledge. This has become apparent to me from many directions including mentoring, teaching, writing tutorials, giving talks and training others. I always learn more each time I share with others. This industry moves so quickly. Suggestions on things that work and things that don’t as well as best practices and “how to” articles are invaluable for people. A solid “why we did it this way” or “the fastest way to do x” type of article can save other developers a great deal of time and make them eternally grateful to you. Google recently changed their Maps API Terms of Service, causing a lot of confusion. Chris Keller from Madison.com wrote about the changes and narrowed down the important bits for others affected by the change. At The Chicago Tribune, the team is not just interested in educating itself and its blog readers, but also the community. Joe Germuska blogs about his presentation to Hacks/Hackers Chicago in October, posting his slides and sites he referenced throughout his talk. ADVERTISE TECHNOLOGY YOU CAN LICENSE / SELL / GIVE AWAY It might not happen all the time, but occasionally your team may create new applications or methods of doing things which are so valuable they’re worth selling or licensing. In 2005, The Lawrence Journal-World newspaper from Kansas released an open source tool called the Django web framework, and they ended up spinning out a software division to sell their customized CMS now called Ellington CMS. A CMS coming from a media organization is a huge deal, since every media team I talk to vehemently hates their CMS. The ProPublica News Apps team released a new feature earlier this month called DocDiver, and they announced this on their “ProPublica Nerd Blog.” The blog post included how it works, why they built it, and nerdy details on how it works. The project was built on top of the NYT DocumentViewer app and expands on that open source project. KUDOS AND RECOGNITION FOR YOUR TEAM MEMBERS Recognition and respect are two of the most important things you can help your team members achieve. Developers and technologists who feel appreciated are more likely to stick around, work harder and be more loyal employees. Industry recognition for your team circles back to help your organization improve its image as well. Last week, Poynter.org published an article by Matt Thompson on why journalists should be ‘showing their work’ while they create and learn. He mentions paying it forward, building data literacy, increasing the impact of your work and more. GETTING TO CHALLENGE THE STATUS QUO The worst thing that can happen to an industry is that it stagnates and no innovation occurs. Developer blogs are the perfect way to share your disruptive ideas with others who might be interested in doing something similar or building off of your idea. My favourite example of this this year is from a Maine newspaper, The Bangor Daily News. Tired of a typically clunky workflow which involved a lot of cutting-and-pasting, the team built a new workflow out of Google Docs and WordPress. The Bangor Daily News dev blog is here: http://dev.bangordailynews.com/. You can read more about their new workflow here: http://www.mediabistro.com/10000words/how-to-run-a-news-site-and-newspaper-using-wordpress-and-google-docs_b4781. And here’s a short video showing the process: BUILD A COMMUNITY What if you had a whole community of individuals you could get to give you input, suggestions, or even build things with your data and resources? Think of how much more you could achieve. The Guardian’s Data Blog has done exactly that. A very active blog, The Guardian Data Blog releases new sets of data constantly in raw form. Sometimes they’ve been able to build charts or interactions to tell a story with it, and sometimes they simply provide the data. At the end of each article, they ask “Can you do something with this data?” and ask people to contact them or post visualizations on their Flickr page. The result is a fascinating body of work, which is much more diverse having community input, and is definitely larger than what The Guardian could have produced on its own. That kind of interaction and dedication by a community makes your site and publication much more interesting and valuable. SHOW OFF YOUR CRAZY IDEAS My dad used to tell me, “It ain’t bragging if you’ve done it.” If your team has built something amazing, solved a really tough problem, or tried something crazy (even if it was a colossal failure!), why not tell the world? The New York Times launched its “beta620” labs project this year, and the site is specifically for trying out wacky ideas and experimenting. So far they have created some projects which are simply experiments they’ve learned from. But they’ve also created products like the Times Skimmer, which end up as full-fledged products in the main site or in their mobile apps. HIRE BETTER EMPLOYEES We all know hiring good developers, designers, UX designers, content strategists and other technology positions is tough and getting tougher. People want to work for respected organizations doing interesting things. Advertising for free on your developer blog that you’re using new technology or being creative is a wonderful way to help the right people find their way to you. At The Guardian, they have been hosting “Guardian Hack Days” and “Developer drop-ins” this year, both of which help expose their team and technology to potentially excellent candidates for future hiring. A developer looking for his or her next role would find articles like these very telling about office culture, priorities and work ethic, all things which are near impossible to discover in an interview. If you’re considering creating a developer blog for your news or magazine application, be sure to keep an eye on the following blogs which are great examples of how to write, teach, influence and share well: ProPublica Nerd Blog :: http://www.propublica.org/nerds Bangor Daily News Dev Blog :: http://dev.bangordailynews.com/ Data Journalism Blog :: http://www.datajournalismblog.com/blog/ LA Times Data Desk :: http://projects.latimes.com/index/ Madison.com Labs Blog :: http://labs.madison.com/blog/ beta620 from The New York Times :: http://beta620.nytimes.com/ The Guardian Data Blog :: http://www.guardian.co.uk/news/datablog The Guardian Developer Blog :: http://www.guardian.co.uk/info/developer-blog Chicago Tribune News Apps :: http://blog.apps.chicagotribune.com/
OPCFW_CODE
Consider adding AnnotatedServiceConfig AnnotatedService provides specialized information for annotated services such as Method, service instance, the default status, and so on. However, AnnotatedService is an internal API so users should access it unsafely via ctx.config().service().as(AnnotatedService.class) I propose to add AnnotatedServiceConfig to provide annotated service-specific information. public final class AnnotatedServiceConfig extends ServiceConfig { ... Object serviceObject() { return object; } Method method() { return method; } HttpStatus defaultStatus() { ... } ... } Hello, @ikhoon nim. I've come up with an idea for a scalable API, although it might be a bit rough. By using enums and Maps together, I expect that it can be extended in various ServiceConfig's Children Classes. How about this way? public abstract class ServiceConfig { // Add this abstract method. public <E, T> T getSpecificValue(E field, Class<T> type); ... } Add abstract method to ServiceConfig. all children classes of ServiceConfig should implement this method. This method is to produce common interface to get their specific values. (For example, AnnotatedService's method) E means kind of enum. T means type of values. // Add this enum public enum AnnotedServiceField { Method, ServiceObject, DefaultStatus, ... } Add enum for each concrete class of ServiceConfig to prevent to forget some fields. public class AnnotatedService { ... // Add this method. public Map<AnnotatedServiceField, Object> fieldsToMap(){ final Map<AnnotatedServiceField, Object> maps = Arrays.stream(AnnotatedServiceField.values()) .collect(Collectors.toMap(v -> v, v -> null)); ... maps.put(AnnotatedServiceField.HttpStatus, this.httpStatus); maps.put(AnnotatedServiceField.Method, this.method); ... return maps; } ... } In case of AnnotatedService, i propose that add fieldsToMap() method to return Map their specific fields. public class AnnotatedServiceConfig extends ServiceConfig { ... // add this methods. private final Map<AnnotatedServiceField, Object> specificConfig; public T getSpecificValue(AnnotatedServiceField field, Class<T> type){ Object field = specifigConfig.get(field); return (type) field; } ... } AnnotatedServiceConfig should implement abstract method T getSpecificValue(AnnotatedServiceField field, Class<T> type). public class ServiceConfigBuilder implements ServiceConfigSetters { ServiceConfig build(...) { final Map<AnnotatedServiceField, Object> specificFields = service.fieldsToMap() // Add this code. ServiceErrorHandler errorHandler = serviceErrorHandler != null ? serviceErrorHandler.orElse(defaultServiceErrorHandler) : defaultServiceErrorHandler; ... } } In case of AnnotatedServiceConfig, service.fieldsToMap() can be added to ServiceConfigBuilder#build(...) for specificFields to be included AnnotatedServiceConfig. IMHO, The key aspects are as follows: Declare an enum for each Concrete ServiceConfig class to specify certain values. Create a Map that uses the enum as a Key and stores values. The ServiceConfig interface provides a Generic method to retrieve values from the map. Through this approach, I expect to ServiceConfig can produce API that can be extended to multiple concrete class of ServiceConfig. What do you think? When you have some free time, please take a look 🙇‍♂️ I forgot to leave the original motivation for this issue. https://discord.com/channels/1087271586832318494/1087272728177942629/1195732884544299139 On second thought, it may be simpler just to make AnnotatedService interface like GrpcService and expose it as a public API. public interface AnnotatedService { Object object(); Method method(); HttpStatus defaultStatus(); Route route(); ... } @ikhoon nim, thanks for your comments. Sound like refactoring, is it right?
GITHUB_ARCHIVE
Leaving a Master's Degree Early for a Doctorate Degree? I am currently enrolled in my fourth semester of my master's degree program in psychology and this should be my final semester. Due to issues stemming from myself and my relationship with my mentor, my thesis has been lagging behind the rest of my school work and I am just now preparing to defend my prospectus. While I am currently awaiting to hear back from Ph.D. programs, I am starting to consider an alternative plan to completing my graduate education and was wondering if anyone could help shed some light on this matter? On the off chance that I get accepted into a doctorate program I am wondering if it is possible to leave my current graduate program without completing my thesis and proceed to enroll in said doctorate program? Are Ph.D. programs going to frown upon this decision? Will they still allow me to join the program? My present thesis is really not related to anything remotely close to what I want to study and I have had a very strained and borderline emotionally abusive mentorship experience at my present program and would prefer not to continue working with my current mentor if possible. I am assuming I will have to complete a new master's thesis on top of a dissertation while completing this possible Ph.D. but I just wanted to know if the option to leave one program and enroll in another is available? I would much rather spend more time completing a new thesis with research I care about and a mentor who will (hopefully) not be as abusive rather than finishing up a thesis late at my present program. Thanks! What country are you in? Will your PhD also be in psychology? Not getting a degree because of a supervisor is not wise. Complete the thesis ASAP (if possible) or change supervisors. \ If you're in your last semester, the best path forward is to finish your MA thesis. Your dissertation does not need to follow directly from it, particularly if you're studying in the social sciences. Finish the thesis for a few reasons. First, finish it for the MA degree. You've (presumably) paid for 3 semesters of coursework. Failing to finish would mean that, essentially, you've wasted the past three semesters. Second, finish the thesis for yourself. In 20 years, you won't care a whit about your strained relationship with your mentor, but you will care that you saw the program through, even when the going got tough. Third, finish the thesis so that you don't have to explain it in the future. People will wonder why you failed to finish your thesis, and it might cast doubt on your ability to stick through tough/challenging projects. Academia is a small place, and you don't want to be known as the guy who didn't finish his master's degree.
STACK_EXCHANGE
Google Sheets / Excel: How to find most common value in a list with formula depending on dropdown list This formula in the sheet works: =ARRAYFORMULA(INDEX(B2:B10,MATCH(MAX(COUNTIF(A2:A10,A2:A10)),COUNTIF(A2:A10,A2:A10),0))) The issue is connecting it to the dropdown menu. It only returns the first mentioned Sales Person of a day. It should return the Sales Person with their name appearing the most for a given day depending on which day in the dropdown menu is selected. Example of issue: The desired output should be Bill for 1/1/2021, Bob for 1/2/2021, and Ben for 1/3/2021. Any help or advice is much appreciated! Try this for your E2 formula: =QUERY(SORTN(QUERY(FILTER(B2:B,A2:A=D2),"Select Col1, COUNT(Col1) WHERE Col1 Is Not Null GROUP BY Col1 LABEL COUNT(Col1) ''"),1,1,2,0),"Select Col1") FILTER isolates only the names that fall on the D2 date. The innermost QUERY aggregates those by count. SORTN sorts them highest to lowest and returns only the highest, but with a tie mode setting that will return all matches if there is a tie. The outermost QUERY returns the name only (eliminating the no-longer-needed count). Both your formula and Harun24HR's formula work great! Thank you! Glad you got the help you needed. Just keep in mind that, if you want to show all matches in case of a tie for the given day, you'll need the formula I provided above. Erik, your formula works wonders! There is much forethought in it I recognize now. The formula solves issues before I knew I had them. Thank you again! By the way, what is a good resource for getting to this level of sophistication with formulas? I say quite often that anyone who knows anything essentially learned it the same way: from scratch and one piece at a time. No one is born an expert. And there is no one-stop shop (even college education) that teaches all there is to know about a thing. I myself am a mentor, author and public speaker with degrees in Psychology. But I'm now an "expert" (who's still learning) in information design, web design, graphic design, several languages, singing and vocal coaching, composing, editing, etc.—all of which I learned on my own, "from scratch and one piece at a time." You can too. As for resources, forums like this are a great place to "spy on" questions and solutions: to try your hand at how you'd solve them, fail (or partially succeed, or succeed clunkily), tear apart the solutions of others until you understand the parts and the whole, etc. You could literally memorize every single function listed in the "official documentation" from Google, and still not be good. A spreadsheet is a canvas, and functions are paintbrushes and paint. Those things alone don't make art. Art is made by people practicing, experimenting and using things in new ways. Give a try on below formula- =ArrayFormula(INDEX(SORT(SPLIT(B2:B10&"@"&COUNTIFS(A2:A10,$D$2,B2:B10,B2:B10),"@"),2,FALSE),1,1)) Your answer is perfect! By any chance, would you know how to display two or more Sales Persons' names if there was a tie? For example, if Bill and Bob's name came up 2 times for 1/4/2021.
STACK_EXCHANGE
How do I resolve package conflicts between Ubuntu and GIMP? In Ubuntu 18.04 (Cinnamon), the package cpp-7 depends precisely on version 7.3.0-27ubuntu1~18.04 of package gcc-7-base: $ aptitude why gcc-7-base i cpp-7 Depends gcc-7-base (= 7.3.0-27ubuntu1~18.04) Meanwhile, the package libgfortran4 depends precisely on version 7.3.0-16ubuntu3 of the same package gcc-7-base: $ apt-cache show libgfortran4 Package: libgfortran4 ... Depends: gcc-7-base (= 7.3.0-16ubuntu3), libc6 (>= 2.27), libgcc1, libquadmath0 and libgfortran4 won't install if I have the other version of the package already installed: $ sudo apt-get install libgfortran4 ... The following packages have unmet dependencies: libgfortran4 : Depends: gcc-7-base (= 7.3.0-16ubuntu3) but 7.3.0-27ubuntu1~18.04 is to be installed Depends: libquadmath0 but it is not going to be installed cpp-7 is in the dependency graph of ubuntu-desktop. libgfortran4 is in the dependency graph of gimp. Doesn't this imply that no one can ever install GIMP from the repositories on Ubuntu 18.04? Please correct me if I'm wrong, but I certainly can't. To make the matter even more maddening, apt-cache showpkg shows that the two different versions of gcc-7-base come from the same repository and have the same MD5 hash: $ apt-cache showpkg gcc-7-base Package: gcc-7-base Versions: 7.3.0-27ubuntu1~18.04 (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages MD5: b6e93638a6d08ea7a18929d7cf078e5d ... 7.3.0-16ubuntu3 (/var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages) Description Language: File: /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages MD5: b6e93638a6d08ea7a18929d7cf078e5d meaning (again, correct me if I'm wrong) that they're the exact same code. So, there isn't an actual dependency conflict here, only one of labels. How does this happen and how do I fix it? For example, is there a way for me to tell either cpp-7 or libgfortran4 that it's okay to accept the other version of gcc-7-base, because it's the exact same code? Do I need to get the package maintainer(s) involved? Edit: A few days ago I posted a question on this topic. The current question is the narrowed-down result of work I've done on it in the meantime. Edit: These are my active sources: $ grep -Ev '(^#|^ *$|deb-src)' /etc/apt/sources.list /etc/apt/sources.list.d/* /etc/apt/sources.list:deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted /etc/apt/sources.list:deb http://us.archive.ubuntu.com/ubuntu/ bionic universe /etc/apt/sources.list:deb http://us.archive.ubuntu.com/ubuntu/ bionic multiverse /etc/apt/sources.list:deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main /etc/apt/sources.list.d/keybase.list:deb http://prerelease.keybase.io/deb stable main /etc/apt/sources.list.d/keybase.list.save:deb http://prerelease.keybase.io/deb stable main /etc/apt/sources.list.d/vscode.list~:deb [arch=amd64] http://packages.microsoft.com/repos/vscode stable main There's a commented deb-src for updates, # deb-src http://us.archive.ubuntu.com/ubuntu/ bionic-updates main restricted but nothing commented or uncommented for updates that's just deb. Should I add a line deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates main restricted ? Edit: Adding deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates main restricted to my /etc/apt/sources.list file, then $ sudo apt-get update worked. GIMP installed as expected with $ sudo apt-get install gimp. Thank you all! I think this may solve your problem @DKBose Yes, I posted that one, too. I edited my question to reference it. Thanks. @GabrielZiegler It does not, but thanks. aptitude suggests downgrading a bunch of packages to 16.04 versions, which I think is an unreasonable solution that's bound to cause more problems than it solves, especially since there's apparently no actual conflict in the code of 18.04 package versions in this case. I have cpp-7, gfortran, and gimp installed on 18.04 and I didn't jump through any hoops to do it. Something else must be going on. I run both Gimp 2.8.22 (from PPA) and 2.10.8 (flatpak) on 16.04 and neither requires libgfortran4. This kind of issue is usually fixed by a simple apt update. Let's see why by querying the madison database for the available 18.04 versions of gcc-7-base and libgfortran4. $ rmadison gcc-7-base gcc-7-base | 7.3.0-16ubuntu3 | bionic | amd64, arm64, armhf, i386, ppc64el, s390x gcc-7-base | 7.3.0-27ubuntu1~18.04 | bionic-updates | amd64, arm64, armhf, i386, ppc64el, s390x $ rmadison libgfortran4 libgfortran4 | 7.3.0-16ubuntu3 | bionic | amd64, arm64, armhf, i386, ppc64el, s390x libgfortran4 | 7.3.0-27ubuntu1~18.04 | bionic-updates | amd64, arm64, armhf, i386, ppc64el, s390x You can see that libgfortran4 is the bionic (non-updates) version, and has the bionic (non-updates) dependency. gcc-7-base, on the other hand uses the newer bionic-updates dependency. There are two common reasons for this kind of de-sync between a base repository and it's corresponding -updates repository on a single system. The user has recently disabled the -updates repository. Easy enough to check and fix in /etc/apt/sources.list or your Software and Sources control panel. The user simply hasn't run apt update in a while to refresh (update) apt's local database of available packages from both sources. That's an easy fix, too. Your file /etc/apt/sources.list should have some combination of mirrors and sources that adds up to: deb [mirror URL] bionic main deb [mirror URL] bionic-updates main deb [mirror URL] bionic-security main The optional universe, multiverse, and restricted repos can be included on the same lines. I'm pretty sure you've nailed it. In the linked question, the OP says "but the only "updates" repos in my /etc/apt/sources.list are commented out, and I've never knowingly had them enabled." Don't know why someone would comment out "updates" repo, but there you have it. @OrganicMarble What I meant was, I've never knowingly done anything to the "updates" repos at all. So, either it came commented out when I did a fresh install of 18.04 a few weeks ago, or it got commented out as an unexpected consequence of something else I did. Is it supposed to be enabled? The default on my installs has always been to have the security and updates repos enabled, but Linux is so configurable I hate to say what is "supposed" to be. @DarienMarks if you want updates (most do, you do, it's the default), then go into the file (or control panel) and enable -updates and -security. Then run an apt update. Guessing as to how or why -updates was disabled is just speculation, and seems rather wasted effort. @OrganicMarble Follow-up question. My /etc/apt/sources.list only has a main-repo "updates" listing that begins with deb-src, which in my understanding is for source files that I want to compile myself. Should there be one that starts with just deb, or is the deb-src one the one I want to uncomment? My sources.list is highly customized because I have my own local mirror. It wouldn't be a good thing to compare against. I don't include any deb-src though. Edited the answer to include sources.list help. @DKBose Sources info added to question. In my case, I just have to check "Canonical Partners" in Software & Updates. If you are unable to check it, see if "Recommended Updates (bionic-updates)" is checked in Updates tab. @Gqqnbig yours is one of the very rare cases not covered by this answer. Please expand your comment into a separate answer. Since the question is specific to Gimp, which is not in the Canonical Partners repository, your answer should explain why adding that particular repo is necessary. Another option is to install the snap version of GIMP. I realize some folks don't like snaps, but this may be a more palatable solution for the casual Linux user. sudo snap install gimp In addition to the accepted answer: If you have only default sources enabled and the latest package list (apt update) and still encounter such conflicts on basic packages like gimp, then report a bug. This is something, what should be fixed by the package maintainer and what probably is rather easy to fix for him. It also may affect more people than just you. Such bugs happen from time to time, but the maintainers need to know. Of course you should make sure, that your configuration and package status is not the problem before, because if you e.g. have got gimp (or some of its dependencies) from a third-party repository, the ubuntu maintainers cannot help you. The second paragraph is very very important. Ubuntu's Continuous Integration (CI) system weeds out almost all of these kinds of packaging mistakes before they reach the repos. So if you discover one, be really really sure it's not your setup before filing the bug.
STACK_EXCHANGE
This action might not be possible to undo. Are you sure you want to continue? Sukkur Institute of Business Administration Introduction to Programming 1. Determine the value of the following expressions, assuming a = 5, b = 2, c = 4, d= 6, and e= 3 (Do it manually): (a) a > b (b) a != b (c) d % b==c % b (d) a*c!=d*b (e) d*b==c*e (f) !(a * b) (g) !(a % b*c) (h) !(c % b*a) (i) b % c*a 2. Using parentheses, rewrite the following expressions to indicate their order of evaluation correctly. Then evaluate each expression, assuming a = 5, b = 2, and c = 4. (a) a % b * c && c % b * a (b) a%b ∗ c||c%b ∗ a (c) b % c*a && a % c*b (d) b%c ∗ a||a%c ∗ b 3. Write C++ code sections to make the following decisions: (a) Ask for two integer temperatures. If their values are equal, display the temperature; otherwise, do nothing. (b) Ask for character values letter1 and letter2, representing uppercase letters of the alphabet, and display them in alphabetical order. (c) Ask for three integer values, num1,num2 , and num3, and display them in decreasing order. 4. (Probability) The probability that a telephone call will last less than t minutes can be approximated by the exponential probability function: Probability that a call lasts less than t minutes = 1 − e−t/a a is the average call length. e is Eulers number (2.71828). For example, assuming the average call length is 2 minutes, the probability that a call will last less than 1 minute is calculated as 1 − e−1/2 = 0.3297. Using this probability equation, write a C++ program that calculates and displays a list of probabilities of a call lasting less than 1 minute to less than 10 minutes, in 1-minute increments. Using suitable repetition structure generate a fibonacci series up to n terms. 3. . and test a C++ program that finds and prints all the prime numbers less than 100. You may need to use nested loop(s). 1. 5. run. Using this information. 9. The output of your program should appear as given. where n ranges from 2 to sqrt(number). For each number from 2 to 100. and test a C++ program that displays how much the king must pay the beggar on each day. the first two terms are 0 and 1.000. namely 1 and the number itself. and each term thereafter is the sum of the two previous terms that is. 2. For example.) 8. The Fibonacci sequence is 0. find Remainder = Number % n. Using this information. An old Arabian legend has it that a fabulously wealthy but unthinking king agreed to give a beggar one cent and double the amount for 64 days. the number is not equally divisible by n. where the user enters n into the program interactively. Write. the number is not a prime number. your program should give: 0 1 1 2 3 5 8 13 7. Why? If any Remainder equals 0. user wants to generate first 8 fibonacci terms so. 1. your program should determine on which day the king will have paid the beggar a total of one million(1. Modify the Fibonacci sequence program in the previous question so that now it should ask the user how many fibonacci terms n to generate. the program should display the value 5. 2 . write. 8. . 13. Write a program Triangle. Also. For example. Fib[n] = Fib[n . .000) dollars. run. 6. if n = 6. (Hint: 1 is a prime number.1] + Fib[n .java that takes an integer N and prints an N x N triangular pattern like the one below. If n is greater than sqrt(number). write a C++ program that calculates the nth number in a Fibonacci sequence.2].Spring 2012 Nahdia Majeed LAB: 06 5. A prime integer number is one that has exactly two different divisors. . Write a program Ex.java that takes an input N and prints a (2N + 1) x (2N + 1) ex(X) like the one below. 3 .Spring 2012 Nahdia Majeed LAB: 06 10. Write a program Diamond. Use two for-loops and one if-else statement.java that takes an input N and prints a (2N +1)x(2N +1) bowtie like the one below.java that takes an input N and prints a (2N +1)x(2N +1) diamond like the one below. 12. Use two for-loops and one if-else statement. 11. Write a program BowTie.
OPCFW_CODE
We have an important update to provide on MS Graph Device Registration Policy resource type currently in preview and available in beta API version. We are making some changes to resource type properties that introduce breaking changes. These changes are expected to happen in the week of September 25, 2023. To ensure continued support and functionality, and minimize impact, it’s very important that all customers take note of these changes and prioritize modifying their applications that depend on this resource type accordingly. Why and when are we making this change? Before we make the devcieRegistrationPolicy resource type generally available in our v1.0 API version, we need to align to MS Graph REST API best practices and design patterns. This change will be made to beta endpoint in the week of September 25, 2023, and then generally available to v1.0 endpoint later this year. What are the Required Actions? - If you’re using Entra ID portal to configure device registration policy settings then no action is required. - If you’re crafting your own MS Graph API requests to configure deviceRegistrationPolicy resource type, then you need to immediately update your application to start configuring the resource type with both the new and deprecated properties. - Once the deviceRegistrationPolicy resource type with new properties is deployed in the week of September 25, 2023, verify using GET call that you see the new properties and their values as being configured by your application. - At a later point in time of convenience, remove the deprecated properties from your application. What are the updates to MS Graph deviceRegistrationPolicy resource type? - The “multiFactorAuthConfiguration” property is changing from an integer to a string value. The old integer value of “0” represented “notRequired” and “1” represented “required”. The new string property will now support the values of “notRequired” and “required”. - The “appliesTo”, “allowedUsers” and “allowedGroups” properties within “azureADJoin” and “azureADRegistration” are being deprecated. Instead, these will be replaced by the “allowedToJoin” and “allowedToRegister” properties, which are of the type microsoft.graph.deviceRegistrationMembership and contain one of the following values for “@odata.type”: - “#microsoft.graph.allDeviceRegistrationMembership”: Indicates that all users are allowed to join or register devices. - “#microsoft.graph.noDeviceRegistrationMembership”: Indicates that no users are allowed to join or register devices. - “#microsoft.graph.enumeratedDeviceRegistrationMembership”: Indicates that a selected group or users and groups are allowed to join or register devices. Only for this value, the “allowedToJoin” or “allowedToRegister” values contain two additional properties, “users” and “groups”, each being an array of user and group IDs which are allowed to join or register devices. - The changes will be deployed to MS Graph beta endpoint the week of September 25, 2023, at which point the deprecated properties of the resource type will stop working. - Customers should prepare to update their applications and start using the new properties of the resource type as soon as possible. What happens to applications if they don’t use the new properties of deviceRegistrationPolicy resource type the week of September 25, 2023? The applications will encounter an error (Bad Request) as new properties will be expected when configuring the deviceRegistrationPolicy resource type. Can I do something now to prepare my application for this change without waiting until the week of September 25, 2023? We recommend you modify your application immediately to configure deviceRegistrationPolicy resource type with both new and deprecated properties. The resource type available in beta endpoint today will honor both the deprecated and new properties. It will stop honoring deprecated properties during the week of September 25, 2023. Here’s an example of how you’ll use PUT to configure both new and deprecated properties. “displayName”: “Device Registration Policy”, “description”: “Tenant-wide policy that manages initial provisioning controls using quota restrictions, additional authentication and authorization checks”, - “multiFactorAuthConfiguration” should always be sent as a string value (“required” or “notRequired”). - users and groups list are only needed when you set microsoft.graph.deviceRegistrationMembership data type to enumerated. When should I remove configuring old properties from my application? If you follow our above recommendation to configure deviceRegistrationPolicy resource type with both new and deprecated properties, you can remove deprecated properties at any future time of convenience. Once the deviceRegistrationPolicy resource type is deployed with the new properties during the week of September 25, 2023, deprecated properties will be ignored by the resourceType. Can I selectively configure the new properties from my application? Not currently. The API supports PUT operation for update, which means you need to configure all properties of deviceRegistrationPolicy resource type. What will the GET call return? The deviceRegistrationPolicy resource type will return the deprecated properties until the week of September 25, 2023, after which the resource type will return the new properties. Sandeep Deo (@MsftSandeep) Principal Product Manager Microsoft Identity Division Learn more about Microsoft Entra:
OPCFW_CODE
As I've been doing all these distributed ray-tracing extensions, I've been thinking how it compares to Monte Carlo pricing in finance. Theoretically, they're taking the same approach (paths with randomised variations) for rather different integration problems, in order to deal with the same limitation: The curse of dimensionality. In finance, if you've got a path-dependent exotic option where PDE doesn't work, you've got not much choice but to simulate price changes across each of the relevant dates. If there are a lot of dates, this is suddenly a high-dimension problem, and MC is the way to go. In practice, for quite a lot of cases, there are actually some rather more important dimensions, and you can get quick convergence using Quasi-MC and Brownian bridges. The only problem is to make sure you're not screwing it up and getting the price wrong, which is... quite a big concern. Anyway, assuming you are doing standard MC, what you really want out of your MC pricing is stability. If you make small bumps to the input, or move forward a day, or tweak a parameter, you don't want the price to change unpredictably. MC pricing noise is a fact of life, as MC convergence is slow. However, if this noise is consistent, you have a hope of getting stables greeks out, and being able to explain what happened with your profit-and-loss from day to day. You don't want noise. In ray-tracing, I think the curse of dimensionality is rather less prevalent (maybe integration dimensions for anti-aliasing, for depth-of-field, for motion blur, soft shadows, etc. - less than ten in total). Just like with quasi-MC, we can also take some of the dimensions and simplify them with more traditional integration techniques (e.g. anti-aliasing by rendering on a finer resolution, using frequency-limited/anti-aliased shaders (the RenderMan shader books are quite good on this) etc.). However, the MC/distributed ray-tracing approach simplifies this by putting them all under the same umbrella. What an MC approach does, though, is give us noise. The results are grainy. People are used to grainy images. They look like photographs, or dithered images. The noise means that, over wider areas of the image, there is convergence to the right value, and locally you've got approximately right values whose inaccuracy is covered by the noise across pixels. If the algorithm were adjusted to, for example, repeatedly use the same points in an area light source for soft shadow calculations, the shadows would be steppy and ugly. We are deliberately not looking for spatial stability in MC behaviour, and it's a remarkably different approach. On the other hand, moving into an area I know less about, I'm not sure what you want to do with animation. Presumably intra-frame noise is something you're happy with. In a static scene the pixels could mildly flicker, in the way that they simulate with that overlay on static images in YouTube. On the other hand, if you want the image to remain stable, you'd end up in a complex situation where you want the image to retain temporal coherence while being spatially noisy. I guess that could be a bit tricky to implement... but I suspect that no-one cares about that? Anyway, it surprised me to realise that such different approaches to noise could be taken in two different domains associated with Monte Carlo techniques.
OPCFW_CODE
The guide explains all sequence in quite some detail (see pages 88 to 97), but sometimes a diagram is more helpful, so here’s a sequence diagram that describes all interactions: - An un-authenticated user browses a protected resource, say the “Shipping” page (which translates into a method in a Controller). - The Controller is decorated with the AuthenticateAndAuthorizeAttribute which implements MVC’s IAuthorizationFilter interface. Because the user is not authenticated yet, it will issue a SignInRequest to the configured issuer (this results in a redirection to the issuer). Among other tings it passes the user original URL in the context (wctx) of the request (in this example is the “Shipping” page). - The Issuer authenticates the user with whatever means and if successful, it issues a token for the user for that application. - The Token and the the context information is passed back to the app to a specific destination (indicated in the realm). In the MVC application, this is just another controller (“Home” in our sample, method “FederationResult”). This controller is not decorated by the AuthenticateAndAuthorizeAttribute. - The request however, does go through the WIF Module (the “FAM” in the diagram above). WIF will validate the token and create a ClaimsPrincipal that is eventually passed on to the controller. - The Home Controller inspects the context parameter, extracts the original URL (remember the context is preserved throughout all hops) and then redirects the user there. - The redirect will go again through the filter, but this time the user is authenticated. - Any authorization rules are performed (in our example, we checked for specific Roles) and if all checks pass… - The specific controller is finally called (e.g. Shipping). A couple notes: Everything within the green box above happens only when there’s no session established or when the session expires. Once there’s a session, the requests only go through the filter. In our sample, there’re actually two Issuers. This is because the sample deals with “Multiple Partners” each one with its own Identity Provider, a scenario that makes convenient to have another intermediate Issuer (a.k.a. “Federation Provider”). I didn’t add it in the diagram above just to keep things simple and focus on the specifics of MVC and WIF. Because the protocol uses redirections, interactions in the diagram above are “logical”. Whenever you see an arrow with “redirection” label, what actually happens is that the response is sent to the browser and then the browser initiates the interaction with whatever you are redirected to: In our sample we chose to use “roles” as a way of providing access: But it should be clear that you could use anything. Repeat: “roles are claims, but not every claim is a role” 🙂 Also, this declarative model might not always work. You might have to make decisions on the parameters of the call, and since you have access to the claims collection (through the principal), you can programmatically use them for more advanced behavior. Using roles is just convenient for an examples.
OPCFW_CODE
should do the Bond theme - though, they'd probably miss the deadline. May be they can do the next one. 7 posts • joined 4 Jul 2007 (1) Get design [takes x hours] (2) Get working in Firefox [takes y hours] (3) Find any silly bugs using a validator, ignoring most warnings [z minutes] (4) Get working in IE [takes x+y+z] So then the client sez, "l I only want IE6, you have wasted half the time!". "You can't actually *develop* using IE - how do I know which bugs are mine..." I have to admit IE7 is better. But by now I always use *nix (Posted mostly for the civvies that read El Reg, this is a moot point for most here) Like others I'm sure, I knew (from the Radiohead site I think) that a 'normal release' was going to happen in the New Year. Therefore, I bought the download for a quid, and I'll get the CD now. These guys are not stupid. They, like all true professionals, don't assume everyone else is stupid. For instance, if you got the album off bit torrent - well done. You saved them some bandwidth. 'Course, you might have downloaded a virus. McDonalds makes money from good real estate. Microsoft makes money from good lawyers. GNU/Linux is an attempt to make good software rather than good money. Artists make art rather than money. If they make something that is judged as good, then the market kicks in and they make money. What I don't understand is where a GNU/Linux programer gets food from. So I see no alternative than using the moral right of governments to slap down an abusive company, as I don't understand how the market will ever give something like GNU/Linux the power to do it for itself. DVDRW failed - replaced under warrantee battery failed - replaced under warrantee HD failed unfortunately just after a year :-( SuperDuper! is good for backups, saved my skin. Crashes more in a day than my old Linux/Sony used to in a year (this IS NOT AN EXAGERATION), so I'll be back to another Vaio when I can afford it. Though for those that can't manage linux, which is most people, then I still recommend apple, as it is more stable and sane than Windoze. From reading the original (German) article I think the price was the newspaper's guess. $100 markup seems a bit low, but then the US$ is pretty weak. Of course US price quotes don't include tax, so if the speculation is correct it would be fair. I'd still be pleasantly surprised. Biting the hand that feeds IT © 1998–2019
OPCFW_CODE
Smooth scaling when adding/removing subviews to/from a UIView I'm using a subclass of a UIView to (programmatically) contain and manage numerous subviews (mostly UIImageViews) that are to be positioned adjacent to each other such that none are overlapping. It's safe to assume that the subviews are all the same height (when originally drawn), but are varying widths. The container might be resized, in which case I want the subviews to be scaled proportionally. Moreover, there are times when I need to add/remove/edit any given subview (which might change its width). I'm had some success in using: [self setAutoresizesSubviews:YES]; [self setAutoresizingMask:(UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleBottomMargin)]; to have the container automatically resize its subviews when its frame changes. Unfortunately, I'm having a lot of trouble dealing with the case when the contents of a subview changes, causing it to widen or contract. Simply setting 'setNeedsLayout' and defining 'layoutSubviews' for the subview doesn't seem to do the trick because, at the beginning of 'layoutSubviews' the subview's frame hasn't been adjusted. If I force it, then the current contents are stretched or contracted, which looks terrible. I'd certainly appreciate it if someone could explain how sizeToFit, layoutSubviews, sizeThatFits:, setAutoresizesSubviews:, setAutoresizingMask:, and setContentMode: should be used in a case like this. I mean, if I want to adjust the contents of a subview (and widen it), then how do I do it such that: the subview is widened (without adversely stretching or autoresizing anything else within it) the container is widened a proportional amount none of the other subviews are affected ? Thanks in advance! I ended up going the manual route, using sizeThatFits: and layoutSubviews to explicitly control how everything was redrawn and organized. I found that setAutoresizesSubviews: and setAutoresizingMask: helped for simple layouts, but that it was hard to debug if I tried anything even slightly atypical. My advice: the first time you're experimenting with programmatic layouts define sizeThatFits for all of your (sub) views and define layoutSubviews for each. Use debugging statements to log the placement of everything. Once you're happy, consider incorporating autoresizing to simply your code (if possible). For animating the resizing of a view do: [UIView beginAnimations:@"resizeAnimation" context:nil]; youView.frame = newFrame; [UIView commitAnimations]; If you don't want an element to resize automatically, just set its autoresize to UIViewAutoresizingNone. If I get your case correctly, you don't want the containter and its subviews to resize automatically, except the subview you mention. That view should have some autoresizing options enabled (I don't know exactly the ones you want, just try some and see their effect). That subview should also have its subviews resizing disabled.
STACK_EXCHANGE
August 7, 2010 - 15:00, by Dostalek, Kevin Wrapped up another DevLink today with my session this morning: Building a SharePoint 2010 Development Environment. It was a full room and a great audience. Thanks very much to all who made it in for the Saturday morning session (I know how rough that can be after Friday night shenanigans :) Anyway, I've posted below a hires picture version of the mindmap used in the presentation as well as a PDF version of the basic installation walkthrough. Below that is a list of all the places we talked about on the internet so that you have all the links in once place. I'd love to hear any additional feedback from the folks that attended (since at least at the moment they appear to have misplaced the speaker evals from this morning-- I'm sure they will turn up though). Click one of the two pictures below to download the presentation resources: April 12, 2010 - 11:30, by Dostalek, Kevin was in the first slot (8:30AM) and I presented my "Leveraging SharePoint 2010 as a Social Computing Development Platform". Now even under normal circumstances that's a pretty heavy way to start off a morning (it's a 300 level developer presentation). However, after I polled the audience at the start, I found I had a full room with only a handful of developers. So I kind of changed my presentation on the fly and did quite a bit more demo'ing of the social computing features (from an end-user perspective) and completely cut out my 3rd demo (creating an activity feed custom gatherer for twitter). All in all I think it went quite well! For those that were at the session (or my other ones in the past) know that I had an aversion to slide based presentations. However, posted below is both the mindmap version of the presentation (just click on the thumbnail to get a very large pannable image) as well as a slide deck version (just pretend the SPSIndy logo is SPSCLT). And finally, as promised, the icon to the right will get you a zip of all the code samples I used in the presentation. If you are going to try and actually make this code work your self, you should probably go read this post , as it has more information about each VS project and what you may need to do to get it all working (read the comments too). I want to thank the SPSCLT organizers, especially Brian Gough and Dan Lewis, the speakers, sponsors, and attendees for a great event! I'm very happy I made the trip down and hope to see some of you at other events very soon! March 26, 2010 - 09:48, by Dostalek, Kevin I'm honored to be speaking at the SharePoint Saturday in Charlotte, NC on April 10th. I'm doing a slightly tweaked and expanded version of my "Leveraging SharePoint 2010 as a Social Development Platform" presentation which is probably my current favorite session to do (although it doesn't have quite the wow factor and broad audience appeal as my End-User Social Computing w/ SP2010 session). My only big decision point is whether to do the PPT version or the MindMap version of the slide content (but really it's 80% demos anyway). Anyway, looks to be a great time and I will certainly enjoy meeting lots of new people (it's my first "east coast gig") as well as reconnecting with lots of others. You can find out more information about this event on their official website: See ya there! August 19, 2009 - 13:10, by Dostalek, Kevin Well I attended DevLink down in Nashville, TN last week. It was my first time, but certainly won't be my last. John Kellar and all the volunteers did a great job pulling off a super conference that felt more like a huge extended community code camp than what I traditional think of as an "industry conference" (e.g. TechEd, SxSW, PDC, etc...). Don't read that as a negative in any way- it was a great experience, and by far one of the best values all year (it was $100). The sessions were great and they had even added a SharePoint track this year, so there was no shortage of good stuff happening each day. One thing that I do regret is not checking out the Open Spaces stuff (sorry Alan , the timing just never worked out). As is often the case with conferences though, it was the networking that happens informally in the evenings that really provides the incalculable value. From the quiet lobby-bar chats to the loud parties out at the Honky Tonks (Tootsie's Orchid Lounge was a favorite) I made a ton of new friends that I'm sure I'll keep in touch with and see again and again. Lastly, let me post a quick soundbite from the closing panel. It's Richard Campbell telling his Goliath story . Seriously, this guy is a great story-teller- I can just imagine him on NPR or something listening to this. December 10, 2008 - 11:45, by Dostalek, Kevin I just wanted to dump some ideas out there surrounding one of the trickiest pieces of using agile development methodologies in a consulting / outsourced environment: Agile Contracts. Much of the musings below come from other sources with my own thoughts mixed in, but one source I'd like to specifically call out is the PDC 2008 session I attended given by Mary Poppendieck and Grigori Melnik The Problem with Two Party Interactions So the first thing to look at is why we even need contracts in the first place. The conventional wisdom is as follows: - Companies inevitably look out for their own interests - Contracts are needed to limit opportunistic behavior What Mary points out though is that really at the core the problem is that there potentially exists conflicts of interest which drive the paranoia of opportunistic behavior. In an ideal setting though we: - Assume other party will act in good faith (so this requires a level of trust) - Let the relationship limit opportunism (again, requires trust, but also some basis for the relationship) - Use contracts instead to do these things: - Align the best interests of each party with the best interests of the joint venture - Eliminate conflicts of interest I'll come back around to how this type of relationship and contract are formed up in a moment, but first lets look at how the two types of traditional contracts fall short of meeting the ideals above. Problems with Fixed Price Contracts - Supplier is at greatest risk - Customer has little incentive to accept the work as complete - Generally does not give the lowest cost - Competent suppliers will include cost of risk in the bid - Creates the game of low bid with expensive change orders (which blows a hole in the primary reason CFO's like fixed-bids, which is budget predictability) - Generally does not give the lowest risk - Selection favors the most optimistic (desperate) supplier - Least likely to understand project's complexity - Most likely to need financial rescue - Most likely to abandon the contract - Customers are least likely to get what they really want. Remember, that the "protection" that a fixed-bid contract seemingly provides (if we don't like it, we don't have to pay for it) is all illusory. This is because the value that the project is projected to provide is greater than the price of the effort otherwise the project would not move forward. If the effort is "not-accepted", then the vendor is out the costs of the resources employed on the project. However, the customer is out both the costs of their resources involved in the project as well as the anticipated return of the project (which we already said is greater than even the PRICE, much less the vendor's actual COST). So who is the biggest loser here? Obviously depending on the relative sizes of the vendor and customer it may still "hurt" the vendor more, but clearly the customer has more at stake, and so I would contend that fixed-bid "protection" is a fabrication. Problems with Time and Materials Contracts While there are many useful scenarios where T&M projects make sense (staff augmentation, etc...) in general there are quite a few problems with them in an outsourced project model as well: - Customer is at greatest risk - Supplier has little incentive to complete the work - Therefore we believe we need to control supplier opportunism - ENTER: Project Control Processes - Detailed oversight generally provided by the least knowledgeable party - Supplier must justify every action - LIKELY LEADS TO: - Increased costs - Artifact creation that does not add direct business value - An assumption that the original plan is the optimal plan (the one created at a time of lowest knowledge/information about the project) Candidate Solution: Target Cost Contracts Circling back to our idealistic world now that we know some of the problems with traditional contracts, let's look at a different kind of contract. How can we build a contract that has the following properties? - Target Cost defined and includes all changes - Target is the joint responsibility of both parties - Target cost is clearly communicated to workers - Negotiations occur if target cost is exceeded (or projected to) - Neither party should benefit under this scenario (it's a failure scenario) - Primary goal of contract is to remove conflict of interest. In order for such a contract to work there are a few assumptions that probably need to pre-exist: - We have some basis for relationship and trust. - This means we may have to start off with a small project using a traditional contract. - We are probably using an agile development methodology that utilizes fixed-time, fixed-budget, and prioritized variable-scope mechanisms (backlogs,etc...) The structure of this contract includes the following: - An unbrella or framework contract with the legal stuff in it. - Establishment of a target cost - Work themes defined in stages (prioritized) - Stages should be small to limit risk for both parties and to provide everyone with frequent points to revisit the value-proposition of the relationship - Scope beyond the current stage remains fluid and negotiable - Contact should describe the relationship, not the deliverables - Contract should set up a framework for future agreements - Contract should clearly define a means for mediation if no agreement can be reached. (this is important!) So what I've found in my almost 15 years in this industry, is that our best customers always seem to end up in this type of contract model anyway (after perhaps a few projects using a traditional contract). But wouldn't it be better if we could actually LEAD into a relationship with this idea in mind and use it as a means to better define our value-proposition and distinguish ourselves from competitors? (or at the very minimum, get to this model sooner than later so that everyone can be more productive). Those are my thoughts, please share yours!
OPCFW_CODE
How to change sketch support in CATIA using vba? I want to Change a sketch Support from one plane to another in macro. I tried with StartCommand but that did not work. How can this be done without user Input? I have tried the following code but it did not work. CATIA.StartCommand "Change Sketch Support" selection1.Add sketch3 SendKeys "{ENTER}", True selection1.Add Plane_a SendKeys "{ENTER}", True part1.Update This link says to select the sketch then select the plane and run StartCommand "Change Sketch Support" 'Get the part object (Assume the part is open in it’s own window) Set objPart = CATIA.ActiveDocument.Part 'Get the first sketch in the first geometrical set Set objSketch = objPart.HybridBodies.Item(1).HybridSketches.Item(1) 'Get the plane called Plane.1 in the first geometrical set Set objPlane = objPart.HybridBodies.Item(1).HybridShapes.Item(“Plane.1”) 'Select the sketch first then the new support plane Set objSel = CATIA.ActiveDocument.Selection objSel.Clear objSel.Add objSketch objSel.Add objPlane 'Call the Change Sketch Support command CATIA.StartCommand “Change Sketch Support” https://v5vb.wordpress.com/2010/01/20/startcommand/ You are trying by winapi and this is not the easiest way. You have two alternatives: Or you use copy and paste method Dim osel As Selection = CATIA.ActiveDocument.Selection osel.Clear() osel.Add(sketch3) osel.Copy() osel.Clear() osel.Add(Plane_a) osel.Paste() Dim RsltSketch As Sketch = osel.Item2(1).Value osel.Clear() 'You can delete the first one if you want osel.Add(sketch3) osel.Delete() Or you define the precise vectors Dim arrayOfVariantOfDouble(8) arrayOfVariantOfDouble(0) = OriginPointX arrayOfVariantOfDouble(1) = OriginPointY arrayOfVariantOfDouble(2) = OriginPointZ arrayOfVariantOfDouble(3) = DirectionHorizontalX arrayOfVariantOfDouble(4) = DirectionHorizontalY arrayOfVariantOfDouble(5) = DirectionHorizontalZ arrayOfVariantOfDouble(6) = DirectionVerticalX arrayOfVariantOfDouble(7) = DirectionVerticalY arrayOfVariantOfDouble(8) = DirectionVerticalZ sketch3.SetAbsoluteAxisData(arrayOfVariantOfDouble)
STACK_EXCHANGE
Properly return value and print in html after a form submit How do I return the result from a post in the original html? For example in my form, I submit an email address. After the email address is submited I would like to show that email back in a div and hide the form. How can I do that using angularjs and php? simple HTML form <form name="EmailForm" class="form-horizontal" method="post" action="mail_handler.php"> <div class="form-group form-group-lg"> <input id="email" type="email" name="email" class="form-control size" ng-model="email.text"<EMAIL_ADDRESS>required> </div> <div class="form-group"> <label ng-show="EmailForm.email.$valid"><input ng-model="IAgree" name="tos" required="required" type="checkbox" /> I have read and agree to the Terms of Service</label> </div> <div class="form-group"> <button type="submit" name="submit" ng-show="IAgree" class="btn btn-lg btn-primary"> <span class="glyphicon glyphicon-play-circle" aria-hidden="true"></span> Spam me now! </button> </div> </form> and a simple php <?php if(isset($_POST['submit'])){ $email = $_POST['email']; header('Location: index.html'); } ?> replace header('Location: index.html') with echo "<div>".$email."</div>"; @mplungjan hi, and thank you for your comment. That will only echo a , i want to display a div in my html, the html where i do the post.. @stefan your comment sounds a bit confusing: the div you want to display in simple.php or index.html ? if the later you have 2 problems 1.) header('.......... changes at which url you are but it does NOT retransmit the $_POST. 2.) it is a html page thus it won't be able to do php commands or process them hi @Thomas, yes, I want to display the div in index.html, sorry for the confusion, english is not my native language. Then try $email = $_POST['email']; header('Location: index.html?email='.urlencode($email)); and use location.search in index.html to grab and show it @stefan That is no problem that is why comments are also there to ask for clarifications :). But that aside as its a .html you have the problem that it can't do php code inside (you would to have to rename it to .php) and you can't use a post variable in there. So either you have to reanme it to a .php and give it email as post OR you ahve to do a workaround like mplungjan mentions which would let you ahve index as .html still @mplungjan, I tried your solution, and worked for me, thanks. If you decide to post your answer, i will accept it If you have $email = $_POST['email']; header('Location: index.html?email='.urlencode($email)); you can in index.html do window.onload=function() { var email = location.search?location.search.split("email=")[1]:""; document.getElementById("emailId").innerHTML=decodeURIComponent(email); } You have two variants: You should change the action of the form to "index.php" and put all the code for the form in index.php 2.In the other variant you will use everything you have done just change header('Location: index.html'); with header('Location: index.php/?email=' . urlencode($email)); And then in index.php: $email = $_GET['email']; echo '<div class="email">' . $email . '</div>'; You should change the homepage to a php extension in order it to work. Is this not what I commented, except using index.html instead of php ? @mplungjan, yes it is You can write both html and php in single php file. i.e form_submit.php After create php file you can write following code in form_submit file. <form name="EmailForm" class="form-horizontal" method="post" action="mail_handler.php"> <div class="form-group form-group-lg"> <input id="email" type="email" name="email" class="form-control size" ng-model="email.text"<EMAIL_ADDRESS>required> </div> <div class="form-group"> <label ng-show="EmailForm.email.$valid"><input ng-model="IAgree" name="tos" required="required" type="checkbox" /> I have read and agree to the Terms of Service</label> </div> <div class="form-group"> <button type="submit" name="submit" ng-show="IAgree" class="btn btn-lg btn-primary"> <span class="glyphicon glyphicon-play-circle" aria-hidden="true"></span> Spam me now! </button> </div> </form> <?php if(isset($_POST['submit'])) { ?> <div> <?php $email = $_POST['email']; ?> </div> <style> .form-horizontal{display:none} <!-- above statement will hide form after submission of form --> </style> } ?> After submit your form you will see email on same page in particular div.
STACK_EXCHANGE
Welcome to MacJournal 4! This is the fourth developmental release. The biggest thing going on here is the all-new blogging system. The old stuff has been thrown out (or will be eventually) in favor of a much more integrated approach. Now you can configure many different blog servers of all different kinds and associate them with journals (or even entries) to which to send entries automatically. There is now support for the MetaWeblog and Atom protocols. MetaWeblog is great for talking to WordPress and Movable Type, and Atom is great for talking to Blogger. LiveJournal support has been rewritten as well. The configuration process should be fairly easy: all you need is the URL to your blog, not the URL to some obscure XML-RPC endpoint. MacJournal will do all it can to figure out the details for you based on the front page URL. Please let me know of any failures in this area and be sure to include the URL that you tried. In addition, the entries will remember details about how it was sent to the server and will only update the entry the next time you send it there, instead of creating a new post. There is now support for sending images to an FTP server and including those images in the blog entry. There is also limited support for downloading entries from the server. Some of the protocols do this better than others though. It turns out Atom- and LiveJournal-based servers handle the best. The Blogger and MetaWeblog API don’t really have facilities for this (MetaWeblog descends from Blogger). Keep in mind though that Blogger.com blogs are now handled by the Atom protocol, so this isn’t a problem there. Also, there is also support for creating entries based on an Atom feed found at the site (regular RSS might be added in the future). Note: This blogging stuff is still very new and can change a lot as I improve it. For this reason you not should expect that blog configurations will carry through to the final release, or even to the next developmental release. Other changes in this release: - Performance improvements for the Individual Files backup - Cleanup option for e-mail quotes - Appearance tweaks to Full Screen mode: the cursor is larger to make it easier to see and selected text will look better. - Added a new Full Screen preference to disable editing if you just want to read it. - Full Screen mode also now has a Find panel. Use Command-F to get at it. - The Full Screen prefs now have controls for the margins. - You can now auto-complete keywords in the Inspector as you type them - Ordered and unordered lists in HTML form in imported text files are now converted (along with bolds and italics as before). - You can now import text clippings. - The list in the drawer will now have a soft background color if a search is in progress (this is a little experimental). - Click and hold the Browse toolbar item to show a history of entries that you've been to. - A few crash fixes and a few other behavioral fixes. Keep in mind that this is developmental software: there are probably bugs lurking somewhere that could cause crashes and/or data loss. MacJournal 4 has a lot of new data being stored and I can't guarantee the future of that. I may need to change how it is stored and I can't guarantee that everything will work. That being said, it works pretty well for me in normal usage. You should definitely read the Version History to see what is new. Here are the top-tier things: - New Inspector for manipulating attributes of entries and journals - New per-object attributes • background color • entry template - All-new blogging architecture - A real implementation of tabs - Links, smileys, and words are recognized as you type now (not just when you save), including a live word count field. - AppleScript support - Improved Full Screen mode But that's just the really top stuff; there is a lot of good stuff (not just bug fixes) in the Version History. It will do you well to read it. The Preferences as been reorganized and split up and will continue to change. I added a few new panes and I think there's one too many now. Expect to see lots of change there. The good news with all the new data types (like labels and background colors) is that the recently released 3.2 supports them insofar as it won't discard them when saving the data. So you can add labels in 4.0, go back to 3.2 for a while, and when you come back to 4.0 the labels will still be there. The exception here is per-journal sorting: this was added after 3.2 was released and will be lost if you save your data with 3.2. MacJournal no longer supports Jaguar. At this point, it might not support Panther either. Development is being done on Tiger and there might be some lurking bugs on Panther that will be weeded out later. This is a developmental release so things are still very much in flux. For that reason, reporting bugs isn't as important as normal. There are a lot of areas that are still changing a lot and will continue to change for some time. Some new icons are temporary and will be replaced later. I would appreciate comments about the general direction of the release though.
OPCFW_CODE
Learn the new Oracle 12c topics that are covered in the Oracle certification upgrade test 1Z0-060. In this course, we will take an in-depth look at the following topics: Enterprise Manager and other tools, the basics of a Multitenant Container Database (CDB), configuring and creating CDBs and PDBs, managing CDBs and PDBS, and managing tablespaces, common and local users, privileges, and roles. As a teenager, Tim found a love for teaching, learning, writing, and computers. He believes that everyone should be a lifelong learner. Tim has been teaching for nearly 21 years, either full or part-time. Tim is an Oracle Database Administrator with over 17 years of experience. He works out of Pittsburgh PA and lives in West Virginia with his wife and kids. Examining the Management of Local and Common Users In the last module, I reviewed the creation of container databases and the pluggable databases. The next topic will be the creation of the users. With a multitenant architecture comes the introduction of new types of users, the common user and the local user. This section will review the attributes of the common and local users, creating the users and the assigning of common and local privileges, as well as the common and local roles. The common users belong to the root container data dictionary and are known to all the pluggable databases that belong to the CDB. Once created, these common users are known to all current containers and will be known to any future containers in the CDB. There are also local users that are defined in a specific PDB and can only connect to that PDB. These are very rudimentary definitions, and you probably have similar questions as our Globomantics DBA, Mark. He wonders is there any difference in how a user will log into a PDB than how they currently log in to a non-container database. Is there a benefit of defining administrators as a common user compared to having them defined on each local database? If you have a common administrator or a user, can their access be limited to a subset of pluggable databases within the container database? What tasks can a common administrator perform that a local administrator cannot perform? And how does a common user know which system their work is going to affect? By the end of this module, you'll have a basic understanding of the answers to these questions. Managing CDBs and PDBs In the last module, I presented new attributes and areas of consideration for the creation of users, roles, and limiting their privileges on the system. In this module, I'm going to highlight the management of container databases and pluggable databases. One of the areas that Oracle claims as an advantage of the multitenant architecture is the ability to have separation of duties for database administration. You can have a container database administrator and a pluggable database administrator. The container database administrators will usually have areas of responsibility concerning the entire instance, such as performance, administration of common users, and the administration of common objects where pluggable database administrators are more focused on the management of the data, the users, the application objects. This module I'll dive a little bit deeper into the administration of the databases within the multitenant architecture. The goal of this module is to show how the operational procedures can be affected when switching to a multitenant architecture and using pluggable databases. I've touched on some of these topics already in previous modules. I'll do a little bit further investigation into areas such as connecting to a container, how you connect to root versus how you connect to a pluggable database. We'll also discuss in more detail the current container and what it means. We'll talk about switching containers and how to perform an administrative tasks on a container database and a pluggable database. Managing Tablespaces in the Multitenant Architecture This module will relate the basic management of tablespaces in the multitenant architecture to the tablespace management in a non-container database environment. I'll investigate some of the tablespace topics that are specific to the container database instance and pluggable databases. In the multitenant architecture, it's often stated that pluggable databases share the same resources such as memory and system processes of the container database instance. This seems to be a good aspect of the system, but what about tablespace management in the container database? Is everything truly separated, or are there shared tablespaces that might cause issues for the entire database if they're not managed appropriately? Mark, our Globomantics DBA, has divided his research into four areas. He wants to know if changes have been made to the views he uses to monitor tablespaces and if there are any views that he should be using. He wants to know if there is any sharing of tablespaces between pluggable databases. He wants to see if there are any specific approaches to tablespace management for the container database and the pluggable databases. Finally, he wants to know if a container database administrator can limit the storage resources utilized by a pluggable database. Identifying the Benefits of a Multitenant Container Database This module gives an overview of the Benefits of a Multitenant Container Database. Oracle promotes the multitenant architecture of the container database as achieving the goals of higher utilization of resources and simplified management of databases. I'm going to review some of the research our friendly Globomantics senior DBA, Mark, has found in regards to the benefits that Oracle proclaims. Mark has finished his initial research into the container database in the multitenant architecture, and it does hold promise for many of the development and non-enterprise business systems. He has built a good foundational knowledge of the capabilities such as the compatibility guarantee, cost reduction, easier monitoring and management of systems, easier performance tuning, and separation of duties and separation of applications. Let us take a little closer look at these areas and review what Mark has found.
OPCFW_CODE
package com.izettle.cassandra; import com.netflix.astyanax.Keyspace; import com.netflix.astyanax.MutationBatch; import com.netflix.astyanax.connectionpool.OperationResult; import com.netflix.astyanax.connectionpool.exceptions.ConnectionException; import com.netflix.astyanax.model.Column; import com.netflix.astyanax.model.ColumnFamily; import com.netflix.astyanax.model.ColumnList; import java.util.ArrayList; import java.util.Date; import java.util.List; import java.util.UUID; /** * A utility to handle time series in Cassandra. * * Every time series has a unique key and consists of events in time. An event is a piece of data with an UUID and time * associated with it. * * @param <K> Type of time series key. */ public class TimeSeries<K> { private final Keyspace keyspace; private final ColumnFamily<K, UUID> columnFamily; public TimeSeries(Keyspace keyspace, ColumnFamily<K, UUID> columnFamily) { this.keyspace = keyspace; this.columnFamily = columnFamily; } /** * Adds event to a time series. * * @param key Time series key. * @param uuid Event UUID. * @param date Event time. * @param value Event data. * @throws ConnectionException Failed to store event in Cassandra. */ public void add(K key, UUID uuid, Date date, String value) throws ConnectionException { MutationBatch mutationBatch = keyspace.prepareMutationBatch(); mutationBatch .withRow(columnFamily, key) .putColumn(DeterministicTimeUUIDFactory.create(uuid, date), value); mutationBatch.execute(); } /** * Adds event to a time series. * * @param key Time series key. * @param eventString Event string. (Instead of seed UUID) * @param date Event time. * @param value Event data. * @throws ConnectionException Failed to store event in Cassandra. */ public void add(K key, String eventString, Date date, String value) throws ConnectionException { MutationBatch mutationBatch = keyspace.prepareMutationBatch(); mutationBatch .withRow(columnFamily, key) .putColumn(DeterministicTimeUUIDFactory.create(eventString, date), value); mutationBatch.execute(); } /** * Get events for a specific time period. * * @param key Time series key. * @param begin From time (inclusive). * @param end To time (exclusive). * @param reversed If true, the order of the results will be reversed. * @param count Max number of events to return. * @return Time series events. * @throws ConnectionException Failed to retrieve time series events from Cassandra. */ public List<String> get(K key, Date begin, Date end, boolean reversed, int count) throws ConnectionException { List<String> events = new ArrayList<>(); OperationResult<ColumnList<UUID>> operationResult = keyspace .prepareQuery(columnFamily) .getKey(key) .withColumnRange( DeterministicTimeUUIDFactory.createFirst(begin), DeterministicTimeUUIDFactory.createLast(end), reversed, count ) .execute(); ColumnList<UUID> result = operationResult.getResult(); for (Column<UUID> column : result) { if (column.hasValue()) { events.add(column.getStringValue()); } } return events; } }
STACK_EDU
var typeFactory = require('type-factory'); var mitty = require('mitty'); var validateTypes = require('validate-types'); var utils = require('./utils'); var viewCounter = 0; var View = typeFactory({ removesElement: true, constructor: function(props) { var element = props && props.el; this.cid = 'view' + (++viewCounter); this.$eventRegistry = []; this.$views = {}; if (this.props) { this.writeProps(props); } if (element) { this.el = typeof element === 'string' ? this.find(element) : element; } this.$runMixins('initialize'); if (this.initialize) { this.initialize(); } this.setupEvents(); }, $runMixins: function(name, processor) { var self = this; if (this.$mixins) { this.$mixins[name].forEach(processor || function(item) { item.call(self); }); } }, validateData: function(schema, data) { return validateTypes(schema, data, this); }, writeProps: function(props) { var validationData = this.validateData(this.props, props); if (validationData.hasErrors) { this.handlePropValidationErrors(validationData.errors); } utils.each(validationData.data, function(value, key) { if (typeof this[key] !== 'undefined') { this.handlePropValidationErrors([{ key: key, message: 'Prop "' + key + '" overwrites instance member' }]); } else { this[key] = value; } }, this); return this; }, handlePropValidationErrors: function(errors) { errors.forEach(function(errorObj) { throw new Error(errorObj.message); }); }, setupEvents: function() { var self = this; var element = this.el; var specialSelectors = {'window': window, 'document': window.document}; var eventProcessor = function(handler, eventString) { var isOneEvent = eventString.indexOf('one:') === 0; var splitEventString = (isOneEvent ? eventString.slice(4) : eventString).split(' '); var eventName = self.normalizeEventName(splitEventString[0]); var eventSelector = splitEventString.slice(1).join(' '); var eventHandler = typeof handler === 'function' ? handler : self[handler]; var eventMethod = isOneEvent ? 'addOneEvent' : 'addEvent'; if (eventSelector && specialSelectors[eventSelector]) { element = specialSelectors[eventSelector]; eventSelector = undefined; } self[eventMethod](element, eventName, eventSelector, eventHandler); }; var processEventList = function(provider) { var list = typeof provider === 'function' ? provider.call(self) : provider; list && element && utils.each(list, eventProcessor); }; this.$runMixins('events', processEventList); processEventList(this.events); return this; }, addEvent: function(element, eventName, selector, handler) { var self = this; var proxyHandler = function(e) { if (selector) { var matchingEl = utils.getMatchingElement(e.target, selector, element); matchingEl && handler.call(self, self.normalizeEvent(e, { currentTarget: matchingEl })); } else { handler.call(self, e); } }; element.addEventListener(eventName, proxyHandler, false); this.$eventRegistry.push({ element: element, eventName: eventName, selector: selector, handler: handler, proxyHandler: proxyHandler }); return this; }, addOneEvent: function(element, eventName, selector, handler) { var self = this; var proxyHandler = function(e) { handler.call(self, e); self.removeEvent(element, eventName, selector, proxyHandler); }; return self.addEvent(element, eventName, selector, proxyHandler); }, removeEvent: function(element, eventName, selector, handler) { this.$eventRegistry = this.$eventRegistry.filter(function(item) { if ( item.element === element && item.eventName === eventName && (item.selector ? item.selector === selector : true) && item.handler === handler ) { item.element.removeEventListener(item.eventName, item.proxyHandler); return false; } else { return true; } }); return this; }, normalizeEvent: function(e, params) { var normalizedEvent = {}; for (var key in e) { normalizedEvent[key] = e[key]; } utils.assign(normalizedEvent, { preventDefault: function() { e.preventDefault(); }, stopPropagation: function() { e.stopPropagation(); }, originalEvent: e }); utils.assign(normalizedEvent, params); return normalizedEvent; }, normalizeEventName: function(name) { return name; }, removeEvents: function() { this.$eventRegistry.forEach(function(item) { item.element.removeEventListener(item.eventName, item.proxyHandler); }); this.$eventRegistry = []; return this; }, remove: function() { this.$runMixins('beforeRemove'); this.beforeRemove && this.beforeRemove(); this.trigger('beforeRemove'); this.removeEvents().removeViews(); if (this.removesElement && this.el) { this.removeElement(); } this.$runMixins('afterRemove'); this.afterRemove && this.afterRemove(); this.trigger('afterRemove'); this.off().stopListening(); return this; }, removeElement: function() { utils.removeNode(this.el); return this; }, addView: function(view) { this.$views[view.cid] = view; this.listenTo(view, 'afterRemove', function() { delete this.$views[view.cid]; }); return view; }, mapView: function(selector, View, params) { var self = this; var element = typeof selector === 'string' ? this.find(selector) : selector ; if (View.isViewComponent) { return element ? this.addView(new View(this.$buildElementProps(element, params))) : undefined; } else { return element ? this.$resolveViewProvider(View).then(function(ViewComponent) { return self.mapView(element, ViewComponent, params); }) : Promise.resolve(undefined) ; } }, mapViews: function(selector, View, params) { var self = this; var elements = typeof selector === 'string' ? this.findAll(selector) : selector ; if (View.isViewComponent) { return elements.map(function(element) { return self.addView(new View(self.$buildElementProps(element, params))); }); } else { return elements && elements.length ? this.$resolveViewProvider(View).then(function(ViewComponent) { return self.mapViews(elements, ViewComponent, params); }) : Promise.resolve([]) ; } }, $resolveViewProvider: function(provider) { return Promise.resolve(provider()).then(function(importedModule) { return importedModule.__esModule ? importedModule.default : importedModule; }); }, $buildElementProps: function(element, props) { return utils.assign({el: element}, typeof props === 'function' ? props.call(this, element) : props ); }, removeViews: function() { utils.each(this.$views, function(view) { view.remove(); }); return this; }, find: function(selector, params) { var context = params && params.context || this.el || document; return context.querySelector(selector); }, findAll: function(selector, params) { var context = params && params.context || this.el || document; return [].slice.call(context.querySelectorAll(selector)); } }, { isViewComponent: true, create: function(props) { var ViewType = this; return new ViewType(props); } }); var factoryExtend = View.extend; View.extend = function(protoProps, staticProps) { if (protoProps.mixins) { protoProps = utils.assign({}, protoProps); protoProps.$mixins = { initialize: [], events: [], beforeRemove: [], afterRemove: [] }; protoProps.mixins.forEach(function(mixin) { utils.each(mixin, function(method, key) { if (protoProps.$mixins[key]) { protoProps.$mixins[key].push(method); } else { protoProps[key] = method; } }); }); } return factoryExtend.call(this, protoProps, staticProps); }; mitty(View.prototype); module.exports = View;
STACK_EDU
import sys import cv2 # import numpy as np IMAGE_WIDTH = 500 def resized_frame(frame): height, width = frame.shape[0: 2] desired_width = IMAGE_WIDTH desired_to_actual = float(desired_width) / width new_width = int(width * desired_to_actual) new_height = int(height * desired_to_actual) return cv2.resize(frame, (new_width, new_height)) class BasicMotionDetector(object): def __init__(self, file_to_read): self.file_to_read = file_to_read self.capture = cv2.VideoCapture(self.file_to_read) self.video_writer = None self.frames_per_sec = 25 self.codec = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G') self.frame_number = 0 def _generate_working_frames(self): while True: success, frame_from_video = self.capture.read() if not success: break frame_from_video = resized_frame(frame_from_video) yield frame_from_video def _generate_motion_detection_frames(self): previous_frame = None previous_previous_frame = None for frame in self._generate_working_frames(): motion_detection_frame = None if previous_previous_frame is not None: motion_detection_frame = self._get_motion_detection_frame(previous_previous_frame, previous_frame, frame) previous_previous_frame = previous_frame previous_frame = frame if motion_detection_frame is not None: yield motion_detection_frame def _get_motion_detection_frame(self, previous_previous_frame, previous_frame, frame): d1 = cv2.absdiff(frame, previous_frame) d2 = cv2.absdiff(previous_frame, previous_previous_frame) motion_detection_frame = cv2.bitwise_xor(d1, d2) return motion_detection_frame def create(self, output_filename): for motion_detection_frame in self._generate_motion_detection_frames(): height, width = motion_detection_frame.shape[0: 2] self.video_writer = self.video_writer or cv2.VideoWriter(output_filename, self.codec, self.frames_per_sec, (width, height)) self.video_writer.write(motion_detection_frame) self.frame_number += 1 print "Writing %s" % self.frame_number if self.video_writer is not None: self.video_writer.release() if __name__ == "__main__": file_to_read = "../sample_videos/Piano/VID_20161102_204909.mp4" BasicMotionDetector(file_to_read).create("basic_motion.avi")
STACK_EDU
Bulk mail sending through amazon ses I am sending bulk mails by using amazon SES. But I have certain issues and please clarify some doubts regarding SES 1)while I am sending bulk mails as groups (max send limit is 50 at a time) I am getting json response with single unique message-id for that 50 mails by using notification (SNS).How can I get 50 unique message id's for that 50 delivery mails.For further future reference I can check that unique id regarding that mail. 2)My second doubt is that while I am out of the sandbox I can send mails to recipients with out verifying "TO" email address.In that scenario if I add 50 mails to destination what if my 50th mail is not a valid mail or that mail doesn't exist.Does my 49 mails reach to the destinations or that full process stops. please clarify me on those things.Thanks in advance Please check the sample json response: { "notificationType":"Delivery", "mail":{ "timestamp":"2014-05-28T22:40:59.638Z", "messageId":"0000014644fe5ef6-9a483358-9170-4cb4-a269-f5dcdf415321-000000", <EMAIL_ADDRESS> "sourceArn": "arn:aws:ses:us-west-2:888888888888:identity/example.com", "sendingAccountId":"123456789012", "destination":[ <EMAIL_ADDRESS><EMAIL_ADDRESS> ] }, "delivery":{ "timestamp":"2014-05-28T22:41:01.184Z", <EMAIL_ADDRESS> "processingTimeMillis":546, "reportingMTA":"a8-70.smtp-out.amazonses.com", "smtpResponse":"250 ok: Message 64111812 accepted" } } 1) How did you configure your SNS Delivery Topic? How are you sending your emails? I use Sendy or a tool called Postman to send my emails and I'm getting unique message-id. 2)The sending process does not stop if, lets say, the email bounces or has an invalid email address. It will send a SNS message to either Complaints or Bounces topic. 1)I have configured SNS through http That is strange. Maybe you are getting only one id because you are sending the 50 emails simultaneously? yeah,I am doing it simultaneously.suppose if i have to send email to one million receipents.I dont want to call the send mail() api call one million times.That is the reason I am sending max 50 mails for each api call.Am I doing anything wrong I think that is the reason why you get only one id. I send them one by one, specially because you have a sending limit per second (mine is 4 emails per second, 70.000 emails daily). Thanks nick.So, if you are sending 70k emails daily you need to do 70 k api calls.Can we do it in other way because if we do 70k api calls we have to use lot of threads.In my scenario atleast i have to send 4 lakh mails.And how to increase sending limit per second. To increase sending limit per second you submit a request to AWS support. Indeed, contact support. I understand, maybe this tool will help you? https://github.com/zachlatta/postman Also, check that your email database is correct. Amazon is very strict with bounces and complaints. Thanks for your suggestion nick I will keep that in mind. You are welcome! If you found my answer useful please accept it. @NickG OP is using SES, not SNS. @Michael-sqlbot yes, and how does SES handle its message notifications? With SNS Hahahaha yes, my bad. I read "SNS Delivery Topic" as the path to the email recipients rather than as the delivery notification topic and the entire conversation took a wrong turn in my head after that. The correct answer, of course, is as you suggested -- if you want one message-id per recipient, you send to one recipient per message... which, incidentally, is the same cost as sending one to many. 1 message to 50 is the same price as 50 to 1 each. No problem :) @Michael-sqlbot
STACK_EXCHANGE
Selective text extraction in Python based on certain topics or keywords I have a quite long text document describing behaviours of different animals. I want to extract text about a specific animal and haven't figured out how this can be done. So for example, if the document descibes 15 different animals, I want my alorithm to output all information from the input file that related to lions. Lions described and discussed in several different places of the document - how do I do "selective extraction" for text that is only related to lions, does anyone know? EDIT - inputs and outputs Inputs: (1) Text file (e.g. "document.txt") (2) Key word(s) (e.g. "lion") Output (example): "Lions are large felines that are traditionally depicted as the 'king of the jungle.' These big cats once roamed Africa, Asia and Europe. [...] Males are generally larger than females and have a distinctive mane of hair around their heads [...] Asiatic lions eat large animals as well, such as goats, nilgai, chital, sambhar and buffaloes. [...] Females have a gestation period of around four months. She will give birth to her young away from others and hide the cubs for the first six weeks of their lives." Input and output example please! Any reproducible code for us to debug? I assume your document expresses some natural structure in the text like paragraphs – perhaps with newlines after each paragraph, or blank lines between paragraphs. So a simple baseline would be: return every paragraph that contains the word 'lion'. If the text already has one paragraph per line, this could be as simple as using the command-line egrep utility, as for example here to find 'lion' or 'lions' bracketed by word-boundaries: egrep "\wlions?\w" document.txt If that's insufficient, you should expand your question with more precise examples of the ways you'd like to do better than that simple baseline. For example, if it's important to return text segments smaller than a paragraph: can you provide examples of your input documents where this is important, and exactly which segments of a paragraph should/should-not bet selected? (Your current example only shows what you want, and not the kind of non-relevant text you'd like the approach to reject.) would the similarly-simple approach of just every sentence with 'lion' in it work better for your purposes? Or, every such sentence plus the sentence before and after? (Or, if you wanted to get really fancy, you could look into libraries for "anaphora resolution" – determining what nearby subjects, in the same or other sentences, words like 'she' or 'they' or 'it' refer to – and then expand the core set of obvious sentences with any other nearby sentences using such pronouns.) if some of your "tough cases" for the simple-strategy involve understanding synonyms for query words – for example, your query is 'dog' but you want to catch sentences about 'canines' or 'mutts' or individual breeds – then maybe you need to expand the query-word with synonyms or hyponyms (words for more-specific variants), using either pre-built lexicons (like WordNet) or fuzzier word-similarities that could be learned from your domain text (like word2vec word-vectors). But, you'd need to show more of your example input (source documents in detail, with good and bad text ranges, and more example queries), desirable output, and challenging cases where a simple keyword-grep doesn't work.
STACK_EXCHANGE
[Haskell-cafe] Haskell for Physicists alexey.skladnoy at gmail.com Sat Dec 5 01:57:32 EST 2009 2009/12/4 Roman Salmin <roman.salmin at gmail.com>: > On Fri, Dec 04, 2009 at 01:43:42PM +0000, Matthias Görgens wrote: >> > _So my strong opinion that solution is only DSL not EDSL_ >> Why do you think they will learn your DSL, if they don't learn any >> other language? > I didn't said that they didn't learn any language. They learn languages, > only part that is necessary to do particular task. > f.e. ROOT CINT(C++ interpreter) didn't distinguish object from pointer to > object, i.e. > statement h.ls(); works as well as h->ls(); independently of either h has > type TH1F or TH1F*, > so beginning ROOT user didn't need know what is pointer, memory management > helps him. > But early or latter one need to write more complicated code, > then one need to spend months to reading big C++ books, and struggling with > compilers errors, segfaults etc..^(1) (instead of doing assigned task!) or, > what is more usually, trying Ad hoc methods for writing software. > So people will learn DSL because: > 1. DSL is simpler than general purpose language > 2. DSL describe already known domain for user, (one probably don't need > monads, continuations, virtual methods, template instantiation etc...etc...) > so learning is easy, and didn't consume much time. >> And if your DSL includes general purpose stuff, like >> functions, control structures, data structures, you'll re-invent the >> wheel. Probably porly. > You didn't need to reinvent the wheel, because you DSL compiler can > produce Haskell code: > DSL -> General Purpose Language -> Executable > And ever if you do, it saves allot of time of experts. > (1) In Haskell this probably will sound like: reading allot of small > tutorials and articles, grokking monads, > struggling with type-check errors, infinite loops, laziness, etc... There is other side. As Matthias Görgens mentioned earlier 1. One have to reinvent control structures. Multiple times. Lets assume that compiler would translate DSL to haskell code. But DSL's expressions which convert into haskell control structures are DSL's control structures. 2. There would be more than one DSL. If they are all EDSL there is no real problems with combining them in one program if necessity arises. Probably there would be ways to combine them if they are DSL but they will require expertise and most likely dirty hacks. 2.1 And all of them will have different conrol structures, abstraction 3. Turing tarpit. Users will constantly require more power and features in DSL. Most likely DSL designers wouldn't be great language designers so DSL will turn into utter mess. 4. One have all power (and libraries of host languages). Of course if he is able to utilize it. This is tradeoff between power and simplicity. If one have too much simplicity he is not able to solve difficult problems. If one have too much power at expense of simplicity he has to struggle with tool to have thing done. And it's possible to sacrifice simplicity and don't gain any power. More information about the Haskell-Cafe
OPCFW_CODE
# -*- coding: utf-8 -*- import sys import datetime from utils.hash_verification import Hash sys.path.append('.') from model import User class Token(object): def __init__(self): self._user = None self.token_value = "" # Token valid for max one hour def generate_token(self, nick, email): exact_time = datetime.datetime.now() self.token_value = Hash.hash_info(nick + email +str(datetime.date.today()) + str(exact_time.hour)) return self.token_value def verify_token(self, exist_token, list_users): for user in list_users: exact_time = datetime.datetime.now() aux_generator_token = user["nick"] + user["email"] + str(datetime.date.today()) + str(exact_time.hour) if Hash.verify_two_hash(exist_token, aux_generator_token): self._user = User(user["name"], user["email"], user["nick"], user["hashed_passwd"]) return True return False def getUser(self): return self._user
STACK_EDU
Decentralized Finance (“DeFi”), is the idea that traditional financial service offerings such as banks, markets, and other investment services can be recreated or improved upon using applications created on the blockchain. ~DeFI Adoption 2020: A Definitive Guide to Entering the Industry, Cointelegraph Consulting Much has been made of the rising price of Bitcoin (BTC) and some of the factors in the current macroeconomic environment that may be driving that incredible story. Here at Avicenna, we are following developments in the space very closely, but we also believe that the mainstream attention on Bitcoin masks a much more fundamental shift that is happening around the Ethereum ecosystem, and a broader movement that is commonly referred to as DeFi. Ethereum can be viewed in many ways. Many already consider ETH, the first digital asset built on the Ethereum blockchain, a credible store of value, similar to BTC, and posit that it is poised for a similar value trajectory: That said, the drivers behind its value are viewed differently, for good reason - less digital gold than Bitcoin, with some similar characteristics in this respect, but increasingly valuable from the perspective of a platform or protocol that enables value to be created on top of it. Interoperability continues to be a challenge on the mainstream internet - despite the fact that over half of the world now has access to it, closed ecosystems exist everywhere. The Ethereum and broader alternative finance community is creating a set of capabilities that could potentially address this challenge significantly, in real-time and at material scale. To be clear, there are a number of DeFi protocols, the largest of which is Ethereum, with many others being actively developed. DeFi would not exist without stablecoins, which are pegged to a fiat currency such as the USD or the Chinese Yuan. Recreating lending contracts and other financial products in a volatile asset is impractical, therefore most DeFi contracts incorporate stablecoins at the core of their functionality. Common types of stablecoins in the market today include USDT, USDC, and DAI. Examples of DeFi use cases include borrowing and lending (e.g., Compound, Aave, Maker), decentralized exchanges (e.g., Curve, Uniswap, Bancor), and synthetic asset bridges (e.g., BitGo, REN, Keep Network), to name a few. This vibrant world of DeFi has already emerged, and represents billions of dollars in daily transaction volume. Think about that. Financial ecosystems worth billions of dollars, generating fees, facilitating trade, and backed by… the users themselves. It’s pretty compelling stuff, and some believe that it is the only financial ecosystem that will ultimately matter given the momentum associated with these network effects. Here's a conversation to listen to next if you'd like to learn more about the Ethereum ecosystem:
OPCFW_CODE
Measuring and Combining Edges (Part 2) In the last post I covered a generic overview of some of the issues involved in time series prediction. However there are more basic issues to consider before I start covering some examples. When modeling time series, the two key problems are noise and non-stationarity. The noise is a function of the lack of complete information from the past behaviour of the time series to fully capture the dependency between the future and the past. The noise in the data could lead to a persistent bias towards over-fitting or under-fitting the data. As a consequence the obtained model will have a poor level of performance when applied to new data patterns. The non-stationarity aspect of time series data implies that dynamics can change over time. This will lead to gradual changes in the measured relationship between the input and output variables. This is one of the reasons why academics favor ARCH and GARCH processes to address these issues specifically. That said, in general it is hard for a single prediction model to capture such a dynamic input–output relationship inherent in the data. One of the key problems facing a single model to learn the data is that there exists inconsistencies in the level of noise in different regions of the dependent variable output. This leads to a situation that penalizes certain regions at the expense of others. This is often a key reason why academics fail to see an effect because the “baby is thrown out with the bathwater”- highly profitable regions are smoothed in the prediction function with regions containing nothing but noise or perhaps even an opposing effect. In the converse situation the distinct regions may be overfitted by a function that does not generalize to the rest of the variable output, and as a consequence the predictions are unstable. The only way to adequately capture the non-linearities that exist in the data is to: 1) use non-linear functions that are robust 2) use multiple linear, non-linear, or discrete models using historical situational returns in combination such that the underlying data is more accurately represented. The use of “Zones” by CSS was one method of parsing the data to capture non-linearities, other methods include using using multiple “setups” framed in a historical context, neural networks, indicators and systems, and of course linear quadratic programming (ie optimization). Lest one be deluded into thinking that simple data mining is sufficient for extracting relationships, they are sorely mistaken. The path towards the balance between robustness and rigor and creating a model of sufficient complexity that is also time-varying (ie adaptive) is one without a good map. The only way to find your way through the woods of the financial wilderness is with a compass, trial and error, common sense, and an open mind.
OPCFW_CODE
Eximiousfiction Birth of the Demonic Sword novel – Chapter 2030 2030. Immense daily argument recommend-p2 Journalism for Women Novel–Birth of the Demonic Sword–Birth of the Demonic Sword Chapter 2030 2030. Immense nest juice Axia stayed stunned since she misplaced connection with her invasion after it entered Duanlong’s jaws. She didn’t determine what got occured, though the dragon got had been able to take in her ma.s.sive electrical power confident and without indicating any response. It absolutely was just as if her process possessed vanished. Axia remained surprised since she misplaced connection with her strike after it came into Duanlong’s mouth. She didn’t determine what possessed happened, although the dragon obtained been able to take in her ma.s.sive power at ease and without exhibiting any reaction. It was just like her process obtained disappeared. Snore introduced its beams. Its assault clashed along with the hidden power and caused an explosion so brutal that Evening were required to release a number of black product lines toward wide open a path one of the intense ability that flew in each course. Chapter 2030 2030. Huge A wave of energy manufactured Noah’s instincts scream in fear handled him at high-speed, but he didn’t slow down. Nigh showed up ahead of him, Duanlong stood at his aspect, plus the recommendations of his swords handled because he stretched them frontward. Axia was proper just as before. Noah experienced always fought against more robust professionals, which had pressured him to learn to use his power correctly. His opponents’ exceptional cultivation point permitted these to exhaust him, with his fantastic physique acquired struggled to make up for the lack of strength lately. put oneself in his place A river of info filled up Noah’s mind. That remarkable state survived more than ahead of since his facility of energy obtained increased, though the cognitive coma eventually made an effort to turn up. Nevertheless, the workshop’s items soon showed up in the mouth and unveiled their restorative healing electricity to stop that celebration. Even now, Axia didn’t know that, so Noah could make-believe that Duanlong obtained turned into the right s.h.i.+eld in the meantime. He was aware that tricking the expert was difficult, in particular considering that the dragon didn’t use that capability in the past exchanges. Still, he didn’t head making his rival to doubt herself. Noah ceased himself from dropping in the mental health coma, even so the gentle radiated by his eye didn’t mature dimmer. Brutal ideas loaded his thoughts and drove his steps, doing him capture forward even though his buddies observed him. Axia was correct once again. Noah acquired always fought against much stronger specialists, which had forced him to learn how to use his vigor correctly. His opponents’ excellent farming level enabled those to exhaust him, with his fantastic physique acquired battled to compensate to the weakness not too long ago. Noah was aware why Axia possessed quit attacking. She didn’t drop anything at all in that problem, even though Noah would even now throw away his valuable time below the results his aspirations. It almost appeared that she wished to switch to a protective tactic, however the ma.s.sive body that stuffed her eye solved her question and filled up her imagination having a unsafe feeling. Froude’s History of England Axia stayed stunned since she dropped exposure to her infiltration after it moved into Duanlong’s mouth. She didn’t understand what got taken place, however the dragon experienced had been able to consume her ma.s.sive ability confident and without demonstrating any effect. It had been just as if her strategy got faded. Axia stayed stunned since she misplaced connection with her strike after it inserted Duanlong’s oral cavity. She didn’t figure out what obtained occurred, although the dragon had had been able to actually eat her ma.s.sive strength confident and without indicating any response. It was subsequently just as if her procedure experienced faded. “Just how do you anticipate to beat me?” Axia shouted as she pass on her biceps and triceps and absorbed the energy in their atmosphere to restore her physique. “Your a.s.sets have barely affected my community, and also you can’t share this strength for too much time. I will feel that your restricts are about to show up.” A wave of energy that made Noah’s intuition scream in panic approached him at high-speed, but he didn’t reduce. Nigh made an appearance before him, Duanlong withstood at his side, and the suggestions of his swords touched while he stretched them ahead. Night time didn’t think twice to take flight toward Noah and allow the dark make a difference protect its traumas. The Pterodactyl could nevertheless deal with, nonetheless its cuts would inevitably spread out when the conflict lasted for days on end. Snore and also the other folks ended up high-quality since Duanlong obtained taken in the majority of the harmful power hovering in the track, but Night possessed struggled traumas. The empowerment distributed by his aspirations had made it possible for the Pterodactyl to slice with the shockwave. Even now, element of its electrical power experienced landed on the companion’s body and had seriously hurt its wings. camp venture location Noah quit himself from plunging in the mental coma, even so the light radiated by his eye didn’t expand dimmer. Violent ideas crammed his intellect and drove his activities, generating him photograph forwards when his companions put into practice him. The undetectable episode brought the same amount of electrical power who had forced almost everything back prior to. Theoretically, Duanlong couldn’t withstand it alone, nevertheless the dragon didn’t keep on being a simple being equipped with a pushing power after the breakthrough. Nevertheless, Axia didn’t recognize that, so Noah could imagine that Duanlong got turned into the right s.h.i.+eld for the moment. He was conscious of tricking the pro was unattainable, primarily since the dragon didn’t use that capacity while in the prior swaps. Still, he didn’t mind compelling his rival to question herself. Noah recognized why Axia acquired ended assaulting. She didn’t lose nearly anything for the reason that situation, while Noah would continue to spend his precious time below the negative effects of his ambition. It almost seemed she wished to move to a protective tactic, even so the ma.s.sive body that loaded her eyeballs solved her query and crammed her mind with a risky feel. Snore loudly produced its beams. Its assault clashed together with the undetectable vigor and induced an blast so brutal that Night-time simply had to start a number of black product lines to wide open a pathway among the list of brutal electrical power that flew in each route. Snore introduced its beams. Its attack clashed using the unseen power and brought on an explosion so aggressive that Night-time was required to kick off a handful of dark colored outlines forward to open a way among the list of strong ability that flew in every path. Axia was correct yet again. Noah acquired always fought against more powerful experts, that have forced him to learn how to use his power proficiently. His opponents’ exceptional farming amount permitted those to exhaust him, and the body system obtained fought to make up to the weak point of late. Chapter 2030 2030. Huge Noah ended himself from plunging into your emotional coma, but the lightweight radiated by his eye didn’t improve dimmer. Brutal thought processes stuffed his thoughts and drove his steps, producing him take ahead although his companions adhered to him. Wings as vast as entire parts along with a entire body that can obscure extra tall mountain range shown up in front of Axia. She was aware that which was occurring. Noah obtained finally produced Shafu, and its dimensions possessed enhanced in the last time Heaven and Earth possessed a chance to inspect it. the sea bride tel aviv Noah did actually have missing his brain, but Axia didn’t dare to undervalue him. She aimed her fingers toward him just before snapping her hands and shattering the whiteness looking at her. The light radiated by the very atmosphere looked can not withstand the strength discharged by her attack. Noah had a plan, but that concerned conquering Axia. Her passing away could solve every thing, also the issues the result of his latest site. However, he couldn’t kick off any other thing at her. All of those other fight will be a a few working experience, durability, and ruthlessness. Wings as great as complete areas and a entire body that can obscure taller mountain ranges appeared ahead of Axia. She understood what was occurring. Noah had finally unveiled Shafu, and its measurement got enhanced in the before Paradise and Earth acquired the opportunity to check it. Only Noah could be aware of facts behind that event. Duanlong’s strength didn’t allow it to deal with these kinds of powerful strikes using its typical innate power, even so the creature possessed established new things once the cutting-edge. It could now raise the strength of its devouring capabilities by way of a good deal, which allowed it to handle blows that its levels wouldn’t usually have the capacity to withstand. The darkish world unfolded from Noah’s determine when he initialized the work shop. Axia snorted and clapped her hands to produce an concealed current of power that wished to explode in the approach. Even now, Duanlong exited the dark topic and attained a far away spot to diverge the invasion using its natural capability. Axia was ideal. Her understanding of Noah’s potential was strong, so she could view the drawbacks of employing a great deal ambition. His buddies could harmonize in reference to his legislation and enhance its consequences, which inevitably brought it nearer to its restricts much faster. Noah didn’t even dare to guess what selling price he would be required to pay money for his current prowess, but those concerns didn’t are able to reach his intellect. “What were definitely you attempting to accomplish by turning up listed here?” Axia requested. “I really hope you didn’t need to depend on brute power to overcome my weapon expertise.” Nighttime didn’t pause to fly toward Noah and permit the black make a difference take care of its injuries. The Pterodactyl could however fight, nonetheless its cuts would inevitably distribute should the battle survived for too much time. Noah gotten to Axia in an instant, though the expert already had her palms pointed at him. However, she didn’t free up any invasion since she discovered that this companions ended up all set to manage it. riddle where the answer is stone “Come on!” Axia shouted again though waving her fingers to launch shockwaves that ended Noah’s atmosphere from scattering within the environment. “You must be employed to this presently. Come at me, throw the things you can, and wish to thrive. Will not imagine to acquire time.”
OPCFW_CODE
An ancient martial arts dogma states that one should not envy those who practice 10.000 punches once, but respect those that practice one punch 10.000 times. It is in a way not different in science because as data gets generated at incredible rates, it is often overlooked as to how valuable a repeated observation of an outcome for an experiment is. This phenomenon is not even bound to a certain field of expertise. Indeed there is no possible research that has not dealt with that one magical word that is often swept under the rug in an attempt to hide it and make the overall picture look better. Dear reader, please allow me to once more seek for acknowledgement of a keystone in science and what should become a mantra to anyone that wants to make a change through science : global reproducibility. Assuming that one has mastered the first level of local reproducibility, there is a lot more to be covered. The term itself has an inherent connotation of transparency and this is what is unfortunately still lacking in modern day science. Research in general has to evolve alongside the means and data to become “open science”. Resting aside the near mathematical formulation and experimental setup, I would like to devote this essay on what I believe is of equal if not larger importance. Reproducibility is often regarded as a purely statistical concept, which is without a shadow of a doubt valuable and ensures that results are consistent and interpretable by anyone in the same objective fashion, even when the results are not in line with what was expected. Historically, this concept has grown to be an integrated part in any lab and has produced many highly valued quality checks and standard operating procedures. This is what I would like to call a local reproducibility, a way to quantify the ability of an experiment or a methodology to be repeated and will deliver the same outcome within a certain margin under equal circumstances by an individual. The clearly defined importance of such local reproducibility has yet to move from a moderately kept secret during research to an open presentation and inviting for not only repeated studies, but also for follow-up investigations. It has to be the intention to be able to pursuit reproducibility outside of the lab, perhaps even outside the topic itself in which the experiment has been performed, hopefully culminating in the actual reuse of results. Aside from repeating an experiment, it is possible to draw new, possibly unrelated, conclusions from existing data, however in order to repurpose these data it needs to be reproducible in its own right. Then and only then can the full potential of data be exploited and can novel science be uncovered. At this point, the reproducibility will have moved on to become global. The follow-up on existing result-sets are no longer to be deemed “meta-science”, especially considering that there is a lot of valuable information in any experiment that is beyond the original conductor’s intention. Ensuring that these data can be inspected, verified and eventually reused by anyone that seeks knowledge obtainable through the conduct experiment will lead to true, meaningful reproducibility. There is a genuine benefit to releasing data to the outside world, but off course there are limits to which data can be exposed and which facts and figures should remain protected behind the scenes. It is clear that in a world dominated by industry driven research, it is near impossible to convince a board of directors or a head of research to make all results worth millions if not billions available to John or Jane Doe and to allow him or her to exploit the potential within. Patents are not up for grabs and it is a utopian mind-set to assume science is always for reserved for the greater good. Aside from the economical aspect, it might prove quite difficult to avoid certain ethical topics. For example, releasing private data (such as patient derived data) is and has been up for debate since the earliest forms of science known from history. On one hand it could help large scale research and bring forth huge leaps in innovation and evolution of demographic studies. These studies now often occur behind closed doors and are traditionally kept secret in house, heavily protected by a plethora of juridical guards. But does this mean that data is reproducible? Despite these ominous ideas, I want to highlight the other options to improve reproducibility of data beyond the actual numbers. There are three keystones that can lead to having a reproducible experiment that will in the end even. At first there is the traditional good laboratory practices and good data practices, they lead to the first tier of reproducibility as is described in many textbooks. The second one revolves around persistency, which is often overlooked post publication. In modern day research it is near mandatory to have data stored in a persistent way, preferably using some kind of versioning. In fact, many journals already instated a policy that mandates the availability of the discussed data upon publication of research. This is true even for intermediate data, which can definitely prove useful, especially during optimisation studies. It is not a farfetched idea that there is someone out there (or will be in the future) that might be working on a similar problem and could benefit from repeating the experiments, perhaps even in silico. Therefore, keeping in mind the potential repurposing of data, online platforms and repositories such as GitHub are to become the bread and butter of scientist in the modern age. Having data available and resistant to time is but one aspect that should be managed. To achieve an ascended form of reproducibility, one needs to ensure that as much detail as possible is exposed. Having a log of parameters that could influence the experimental outcome is vital to the repurposing of data. Not surprisingly, this is also what hampers meta-science the most. Indeed, the lack of metadata is often prohibiting a successful reproduction of research, which inevitably will lead to incorrect conclusions. To meaningful, reproducible science, nothing is worse than the latter as it undeniably can influence future research and cost funding agencies handfuls of money, which means other research become financially unappealing. Ensuring access to both the experimental data and the related metadata data would also help for reviewers during the peer-review process of manuscripts, as it will definitely aid in understanding the applied science. There to, a tight coupling between results and data is desirable. So in conclusion, reproducibility will forever be linked to pure performance characteristics. But there is another level that plays a key role, namely the availability to repurpose the data and metadata with awareness of the original experimental setup in terms of a meticulous enumeration of experiment bound parameters. Perhaps it is time after decades of desperately clinging on to data and keeping it stored in a locked box in a private office cabinet, fearing that one might steal it along with the blood, sweat and tears that have been sacrificed, that science moves on to an open world format where the reuse of data is most welcome and over time becomes almost trivial. As a cognitive, scientific community, we would all benefit from this evolution. Novel discoveries and relevant knowledge is hidden beyond what researchers envision when they have set up and conducted an experiment, with the caveat of corporate or clinically managed sensitive data remaining under an economical or ethical veil. Perhaps there is a need to redefine what reproducibility truly is. Starting today, let it be an assessment of how well an experiment can be repeated, not only by researchers themselves but also by the scientific community as a whole, even when the intentions of the latter is beyond the envisioned goal. This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.
OPCFW_CODE
Make StableHashEq visible for improved crate documentation, and add some more trait implementations. Without this change it is impossible for the user to know which types evmap::new works with. Edit: Also added additional implementations for standard library types and ?Sized parameters. Edit2: Added implementations for Cow and Pin, the latter requiring an extra trait for deterministic Deref. One open question for this PR is whether the top level is an appropriate place for StableHashEq or if it should be put in a module. Especially if you also accept my latest commits with an extra StableDeref trait, those two traits could be bundled. (I have this on my radar, but it may take me a little while to review it) FYI, the trait implementations follow the order of the impls listed in the standard library docs for Hash and Deref. I just noticed that the keys are cloned on insertion, which means that the Clone implementation has to be trusted, too. I’ve updated the documentation of StableHashEq and the *assert_stable functions accordingly. Codecov Report Merging #4 (edab35c) into master (ba690e6) will not change coverage. The diff coverage is n/a. Impacted Files Coverage Δ src/lib.rs 58.62% <ø> (ø) src/write.rs 55.66% <0.00%> (ø) This looks great! My plan is to move evmap to be single-value again going forward, in which case we will no longer need the bound to hold for the V as well, which will be nice. I also think that with that change, Pin (and thus Deref) is probably no longer necessary -- my guess is people won't be using Pin in keys anyway. I do think that given the addition of Clone, we probably want to rename the trait a bit and have it include Clone. How about CheckedStable or KnownStable or KnowDeterministic? This looks great! My plan is to move evmap to be single-value again going forward, in which case we will no longer need the bound to hold for the V as well, which will be nice. I also think that with that change, Pin (and thus Deref) is probably no longer necessary -- my guess is people won't be using Pin in keys anyway. My main reason for adding Pin was that I simply added all the types from the standard library that implement Hash without questioning their utility as key or value types in a hashmap (and while paying attention that all their hash/eq/clone implementations are in fact deterministic). I do think that given the addition of Clone, we probably want to rename the trait a bit and have it include Clone. How about CheckedStable or KnownStable or KnowDeterministic? Having Hash and Eq in the name has the benefit that Eq and Hash being supertraits is easy to remember and this allows for shorter trait bounds StableHashEq instead of StableHashEq + Hash + Eq. This is not to say that I’m against changing the name. Also, even though StableHashEq has requirements about Clone now, these requirements are still very Hash+Eq-centric in the sense that clone() only has to be deterministic with regards to the hashing and equality-comparison behavior of the different clones. My main reason for adding Pin was that I simply added all the types from the standard library that implement Hash without questioning their utility as key or value types in a hashmap (and while paying attention that all their hash/eq/clone implementations are in fact deterministic). Yup, that makes sense, though I'd still then like to exclude Pin since it adds an additional requirement and is unlikely to be used in key types. Also, even though StableHashEq has requirements about Clone now, these requirements are still very Hash+Eq-centric in the sense that clone() only has to be deterministic with regards to the hashing and equality-comparison behavior of the different clones. Ah, that's a good point. Okay, let's keep it this way then. Yup, that makes sense, though I'd still then like to exclude Pin since it adds an additional requirement and is unlikely to be used in key types. Alright, makes sense ^^ @jonhoo Pin is removed now ;-) Thank you — great stuff! Now I just need to trim us back to single-value (and maybe add Set :thinking:), and then we're in business! I did manage to cut my finger pretty bad yesterday, so my coding will be a little slow for the next few days, but trust that I am going to do this :sweat_smile:
GITHUB_ARCHIVE
The new Spring 2015 Release of Microsoft Dynamics CRM Online has been officially released. This Dynamics CRM release preview guide refers to the Spring 2015 update as “2015 Update 1” As a CRM system administrator, you may control when to install Microsoft Dynamics CRM Online service updates for your organization. To update to the latest release of CRM Online, complete these two steps: - Review the information on the Manage all CRM Online instances page (on the CRM Online Administration Center) to find out what instances are ready to update and what is the schedule. - Approve the update. |Your instance will not be updated unless you approve it. This means your organization will go without the latest features and functionality until you explicitly give approval for the update to happen.If you chose to not apply the Microsoft Dynamics CRM Online Spring ‘14 update, it will be applied when you update to Microsoft Dynamics CRM Online 2015 Update. Be sure you understand what changes will be applied to your online instance from Microsoft Dynamics CRM Online Spring ‘14 and Microsoft Dynamics CRM Online 2015 Update. More information: Microsoft Dynamics CRM 2013 Service Pack 1 and CRM Online Spring ’14, Product Updates Installation Information for Microsoft Dynamics CRM Spring Wave ’14, and What’s new for administrators and customizers in Microsoft Dynamics CRM 2015 and CRM Online| There are a lot of major user changes and below is a quick rundown of the major updates. One of the biggest areas of improvement is the UI navigation. Microsoft listened to the community and one of first areas you will notice is that main menu has been updated. The new menu is easier to navigate with less scrolling and clicking. Here is a screen shot of the new menu. One area you might notice when glancing at these screen shots is that the normal Microsoft Dynamics CRM logo that appears in the top left hand section can be replaced with your own company logo and the color scheme can also be replace with your company’s colors. You also might notice the new symbol next to the plus button in the top left of the main menu. This is a “recently viewed” button that has your most recent views and recent records plus anything you pin will appear when clicked on. Another area is that Microsoft provides the ability to conduct analysis in Excel, directly within Microsoft Dynamics CRM. This eliminates the frustration, time and effort required to switch between applications in the middle of completing a business process. For instance, salespeople can now view sales data in familiar Excel spreadsheets, perform what-if analysis, and upload all the changes with one click, all while maintaining the sales workflow. Some of the other interesting updates include: - Ability to connect OneNote to CRM which allows you to take notes on OneNote and they will appear in CRM. - You can also track emails through your phone now. You can create a folder on your phone email and any email you drag into that folder, will be automatically uploaded into CRM. - You can also now use the Dynamics CRM App for the Outlook Web App. These are just some of the major updates. We will be going in depth in future blogs, so be sure to check back later. As always, if you have questions, don’t hesitate to contact the Dynamics CRM experts at Affiliated. Jonathan Fortner, Affiliated, Dynamics CRM Implementation & Support, Ohio
OPCFW_CODE
I’m looking to connect two devices together via ethernet, but I’m having trouble with the steps needed. The devices are one PC and one microcontroller. Both devices will have static IP’s, the microcontroller at the very least. I’m running LWIP on the microcontroller. What I’m stuck with is the steps I need to do to get both devices to communicate. Do I need the microcontroller to do an ARP broadcast or something so both devices can see each other and communicate? E: Auto-negotiate is enabled on the controller This question might be really dumb. But can I safely connect the audio output from one pc to the audio/microphone input on another pc, using an aux cable? I want to record audio from one pc on another pc using audacity. What I mean by “safe” is like will it short out or anything crazy? Also can I connect the audio out to the audio in on the same pc without problems? I know sound is just a wave, but I don’t know if that applies here. Forgive me, if this is the wrong fourm for this question, or if this question is really stupid. This is just a random image I found online. However, just in case I don’t know the proper names, these are the ports I’m talking about. I ain’t smart when it comes to computers, when I turn it on it bleeps and every light comes on but my Samsung monitor goes through analog and hdmi and nothing works except pulling plugs for over an hour and it’s getting progressively worse over the days. I am using usb otg cable to Android phone connect esp8266. For this, I am using usb-micro cable, and usb/otg cable. Working well with: Android phone -> OTG cable -> USB/micro cable -> esp8266 Not working with: Android phone -> USB/Micro cable -> OTG cable -> esp8266 I cannot find why. - USB micro cable: - OTG cable: I have Apple 27 in Thunderbolt Display. But for some reasons I have to unplug power cable for 30 sec or more every time to turn it on! Why its happens every time, after I detach my MacBookAir 2018/MacBook Pro 2018 Thunderbolt USB-C adapter? I read some articles for troubleshooting, where this is a step to fix the issue, but why I have to do this constantly? Is it normal? For an arrangement having N stations what is the number of cable links required? I think it is N+1 because one backbone and N taps Thanks in advance…….. I have a weird scenario. An older computer wolfdale1333-D667. The power supply it came with had a 24 pin power connector. Using a newer but still old enermax liberty 620W power supply, it comes with a 20pin plus separated but in the same cable jacket a four pin jobbie. When I plug both into the 24pin receptacle NOTHING HAPPENS. When I remove the four pin jobbie and only use the main 20 pin connector, the computer turns on lights, beeps fans etc……… There are two other facts that may be pertinent. (1) I should state that a separate 12v connector is connected to the CPU plug in point. (the enermax provided two options (diff plug configs) and thus easy to choose the correct one (and cable for these two states for CPU). (2) The enermax has a separate pCi-e power supply cable which is attached to the video card. Can I just ignore the four pins for the MOBO 24 pin connection and leave the 20 pin connector in place????????????? A small example use case will make this clearer. I have a really long lightning cable that I use to charge my iPhone on my bedside table at night (there’s no wall socket close). I’m thinking of buying AirPods and naturally I would like to charge them with a cable also on my bedside table next to my iPhone. However, I have no interest in using another USB wall adapter or buying/using another really long cable. So, is there any small lightning cable power splitter for charging two devices from one cable? What I mean is a small adapter where on one side there is a female lightning port (to connect to the already existing cable) and on the other side the cable splits into two small cables each with a male lightning connector. I imagine this would be quite easy to do by manually splicing the cables. Since I’m only going to use this for charging (not syncing), it is just a question of splicing two male lightning connectors together by putting power and ground in parallel. But I’d be willing to pay a few bucks for something fabricated. Does anyone know of any product like this? Note: This is NOT a power/headphone lightning splitter for iPhone7+. I’ve searched the internet with all the search terms I can think of and all I can find are those splitters. I currently have the requirement to encrypt network traffic between two buildings. A cable can be drawn between two switches/firewalls/etc. IPsec is rather not a solution, because we use many VLANs. From your point of view, what would be the best/simplested/most cost-effective solution? Thank you in advance. Best Fiber or Cable Internet provider in Miami Florida with best peering to Europe. I’m not interested in US peering just Europe peering…. | Read the rest of http://www.webhostingtalk.com/showthread.php?t=1760952&goto=newpost
OPCFW_CODE
mikegrant (20th September 2011) Has anyone else had this issue? It seems to be happening more with powerpoing documents? After the user clicks on a document, the get a windows security prompt that says "connecting to moodle.cadcol.ac.uk" mikegrant (20th September 2011) One of the colleges I work with has this problem and I'm just investigating it. By any chance are the machines affected Windows 7 and I'm assuming your on server 2008? Yes mate, server 2008 R2 and win 7 machines. Have you found out anything interesting? Yea it's to do with IE using a different method than any other browser when opening Office files. Authentication requests when you open Office documents Some of those fixes may help you depening on how your Moodle server is hosted. Looks likely I'm going to have to hack around some Moodle code to block IE from doing it's stuff, if that's even possible. I've had the same problems, I reported them a while ago but got no reply here. I don't know the cause though, but it affects IE and Firefox and started happening when the users passwords had needed changing (150 day limit on them). I have tried a few fixes for it, including grace logons etc, but not had to much success. We have had Moodle (on ubuntu), Windows 7 and Server 2003/ 2008 and 2008 R2 for about a year and never encountered these issues before. Im looking along two lines of investigation, some kind of cached passwords or remembered passwords that aren't being overwritten (maybe in my Ubuntu webserver) and Office 2010, as this seems to have made the problem even worse. Though with that said, I have had reports that staff using the same accounts (in Moodle) from home, do not have these issues, this would imply in is an issue to do with our network (e.g password), but I cannot confirm this as most staff have Office 2003 on their laptops. Another reason i feel it may be more network based (e.g password or maybe a policy) is on accounts with none expiring passwords, this never seems to happen no matter what version of Office we are using nor what browser. But again these accounts tend to also be affected by different group policies so these cannot be ruled out. So many variables and so little time! It also happens on some shockwave "SCORM" content for us. This is definately an Office 2010 Issue, I am determined to solve it. Let me know if you do as this is plaguing a customer of mine and as a Moodle host, it's next to impossible for me to fix without going down to their site. Plus it's also out of my remit. I have tested, on my one desktop disabling the trusted locations function in Word, Excel and powerpoint 2010, this has to be done on each program individually. I'll check the GPO options or this later. I use to have massive delays and problems login boxes with powerpoint files more then anything else (probably pictures from external sites causing this). I have disabled this feature and am currently testing opening documents from multiple causes, all excel, word, powerpoint or pdf documents. When using Internet Explorer, most documents ask for your credentials a second time, but if you hit cancel it works fine. When using Firefox though, all documents download and open straight away, no credentials are asked for. I'm going to try more changes though today if I have time, and may force this change out as it seems to have stopped the credentials boxed being asked for more then once like it use to. Last edited by Achandler; 12th May 2011 at 12:27 PM. keep us posted mate! Create a DWORD registry key in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\9.0\C ommon\Internet called ForceShellExecute with value 1. Restart your Office application. Worked for a customer of mine recently to resolve the issue. Thanks for the suggestion, I believe that is a a regkey referencing an older version fo Office though, as for me, using Office 2010, the actual version is 14.0. Although the common folder does exist, the Internet within it does not. Edit: However after further reading, Microsoft actually having a post recommending creating this subkey to fix a different issue, i'll create one and hope it resolves the issue. Last edited by Achandler; 18th May 2011 at 04:32 PM. Reason: See above text This fix hasnt worked for me i'm afraid No luck here either. I'm suprised there aren't more people having this issue! There are currently 1 users browsing this thread. (0 members and 1 guests)
OPCFW_CODE
View All Tasks Assigned To a Single Person Currently tasks can be viewed by Assigned To only within a particular Project. It would be helpful to search globally in Planner by Assigned To rather than having to review assignments Plan by Plan. This change would make evaluation of workloads much easier. Thanks for sharing your feedback. We would like to hear more about how you would utilize this view. If you have a particular scenario or set of teammates for which you’d like to view tasks for, please share below! The Planner Team As a manager, it's vital that I have the ability to see all of the tasks assigned to my people so I can tell if I'm over/under-loading resources. It would help me in allocating new assignments. It would also help in determining trade-offs between projects. We generally have 12-20 client projects going concurrently with as many as a half dozen on deck. Stefan Reiter commented This would be an incredible benefit for me as a Team-Leader of a development team working an a LOT of different projects in parallel. A global overview of all assigned task to a specific person would be absolutely #1 priority. #2 would be an overview off ALL team-members on one page #3 would be graphical "dashboard" ... something like Toggle or TeamWeek is showing Following this thread! I need all managers, not just the PMs to be able to see what is on someone's list of things to do. I have people in multiple "teams" with their own planners so I'd have to go into each one to see it all in one place. Leo You commented Hi, Planner Team. Did you ask how one would utilize "View all tasks that I need to complete in chronological order" two years ago, and still not have an answer? You don't think that a human being might have more than one project, so clicking through each one to see what needs to be done is incredibly inefficient? Now I'm learning that I can only get a global view on my tasks, but not my team member's tasks. Basically this means that I cannot use Planner but must find some other tool. Please provide a schedule when a global filtering functionality could be available. I'm desperate. Erik Unangst commented I would also like the option of seeing all tasks across multiple channels for specific team members. Justin Jeffries commented We have been testing using Planner Combined with Microsoft Teams and Sharepoint as a project and task management platform. It works well for individuals but has not at all worked well for managing a team of people. The comments on this and other treads tell the story. As a manager you need to be able to: - Have primary task owner(s) and others that have visibility to the task/project. - Be able to see all tasks from your team or that you have assigned in a specific view. - Be able to see a view of individual reports where they are the primary task owner. If this were implemented Microsoft would see user adoption and satisfaction for the tool increase significantly. It is a big need in terms of project management. Dominique Aboudaram commented As a project manager I need to see the workloads of my team and follow up accross various projects view all tasks assigned to people accross a time frame ( ex: next week) Scenario 2 : select a set of people and see all their taks Scenarion 3 : select a set of plans and see all the tasks per assigned to Dana Brewer commented I would poll all admins to see who I could give a new assignment to. I would also be able to see the load across my tech team. Please, please, please give us this update. Alicia Crowder commented Definitely need this capability!! My organization need a single view of tasks assigned to each of us across multiple planners in Teams and channels (at least viewable to Team owners). In the same way that I can view all tasks assigned to me (across multiple planners in multiple teams and channels. Hope this can be sorted quickly. Yun Cho commented This works great for a team member who will be able to see all the tasks assigned for that member across multiple projects. However, as a manager, this tool is not the best because there is no way for me to be able to see all the tasks assigned for each team member across all the projects. I have to open each project and sort it by team member and then manually aggregate. It would be great to have this function up and running. Otherwise, the Ms Planner is becomeing less favorabl option compared to other poducts . Thanks! Jeffrey van der Stad commented Another uservoice topic with no active feedback. Please flag this as an issue with the next team-meeting. People are taking the effort to send feedback, leaving votes. It's not very rewarding to give feedback if we don't get any feedback back. It's annoying. 4.5k upvotes, 700 comments from users who actually responded. We know you are working hard, but please review your communication policy. Please get this feature up and running as soon as possible! Much needed! l hall commented Using MS Teams for school work. My child has to select individual 'classes' to see an overview of pending Assignments. There needs to be a list of pending Assignments across all classes/subjects. It really is a fundamental flaw in the software. Schools will be put off using MS Teams for remote learning if this isn't resolved post-haste. Thank you. Managers need to be able to easily see all the tasks of their subordinates across multiple plans. As a startup, we have various functions in the business where we leverage different channels. However, we have a finite number of people working on these tasks across channels. So, we need a way to aggregate all tasks by user (across channels) and see those to ensure the users are focused on the highest priority overall...and not just within that myopic view of a channel. Otherwise, the personnel simply click from channel to channel in search of all of their tasks and trying to decipher which is the highest priority or the most imminent to complete. Thank you. I need a single view of all tasks assigned to colleague X, just in the same way I can view all tasks assigned to me (across multiple Planners) I would use this view when new requests come up, so that I can see resource load and assess how to prioritize the new requests and/or reprioritize existing work. It can also be difficult for me to see what other people have assigned to my team members to do, and this also leads to overload. Last, even I have been known to put too much on my team members. Being able to look at a group of people or a single person, and see all of their assignments would help me keep them from being overloaded, and keep deliverables from falling behind. Hi admin, this original request was posted in 2016, the latest admin comment is from March 2018... And the function is still not rolled out. Can't be that hard - surely? It's the little things like this that make or break whether we can use Teams and Planner. Every time I introduce one of the MS products, someone points out that it is missing one or two key functions that are a standard with other products - makes it very hard to bring people over. As with all other comments - I need a single view of tasks assigned to X colleague, just in the same way I can view all tasks assigned to me (across multiple Planners). Thank you.
OPCFW_CODE
Why so many custom data types in C? Why are there so many custom data types like socklen_t, ssize_t, size_t, uint16_t? I don't understand the real need for them. To me, they’re just a bunch of new variable names to be learned. Many types show intent, which is important to make code easy to read, understand and maintain. There are no custom data types in C. typedef does not create a new data type, but an alias. And a type is not a variable. Note that socklen_t and ssize_t are not standard C, they are defined by non-standard libraries. If you find the names in C too many" already, you should never look at the language and standard libraries of Python, C++ or most other languages. Instead of learning every name, it's better to understand the idea and (if any) naming convention. Expecially for fixed-/least-/… width names it's very easy. socklen_t and ssize_t are defined in the POSIX.1 standard. It is different to the C standard, but its System Interfaces section describes features and facilities provided by a POSIX-compatible standard C library. This is the standard that defines sockets, signals, directory tree scanning, and so on. This means that OP needs to add the [tag:posix] tag, to get a proper answer. Intent and portability. For example, let's say I have a variable unsinged n. An unsigned integer can represent many things, so it's intent is not clear. But when I write size_t n, it is clear that n represents size of something. When I write socklen_t n it is clear that n represents the length of something related to socket. Second reason is portability. For example, socklen_t is guaranteed to be at least 32 bits. Now if we just write unsigned n then size of n may be less than 32 bits. size_t can hold the size of any object, but the actual value is implementation defined. When we use plain integer it may happen that sizeof(int) can't hold the size of largest object that is theoretically possible. But using size_t doesn't have such portability issue. uint16_t clearly says that it is unsigned integer of 16 bits which is both clear and portable than using unsigned int or unsigned short. I don't understand "Now if we just write unsigned n then size of n may be less than 32 bits." Firstly, It provides more readability. Secondly, while programming for embedded systems or cross platform or when portability is important, it is necessary of be explicit about the size of the data type that you are using - using these specific data types avoids confusion and guaranteed to be having the defined data width.
STACK_EXCHANGE
from typing import List import pygame from ColorTheories.tools import * from ColorTheories.constants import * from ColorTheories import constants from ColorTheories.Generator import Generator from ColorTheories.Temporary import Temporary class Pallete(): def __init__(self, level): elements = level['rows']+level['columns'] keys = constants.entities self.entities: list[Generator] = [] entity_list: list[Entity] = [getattr(constants, keys[i]) for i in range(len(keys)) if getattr(constants, keys[i]) in elements] palleteHeight=800 self.image = pygame.surface.Surface((700,palleteHeight)) self.window = pygame.surface.Surface((200,400)) self.window_rect = self.window.get_rect(topleft=(500,100)) self.pickSound = load_sound('pick.wav',0.3) whiteBackground=pygame.surface.Surface((200,palleteHeight)) whiteBackground.fill(pygame.Color('white')) self.image.blit(whiteBackground,(500,0)) for i in range(len(entity_list)): entity = Generator((505+(i%3)*65,105+(i//3)*85),60,80,entity_list[i]) self.entities.append(entity) def update(self): for entity in self.entities: if entity.alive: entity.update() def draw(self): for entity in self.entities: if entity.alive: entity.draw(self.image) subsurf = self.image.subsurface(self.window_rect) return subsurf def handleEvent(self,event): if event.type == pygame.MOUSEBUTTONDOWN: for entity in self.entities: if (entity.handleEvent(event)): newEntity = Temporary((event.pos[0]-30,event.pos[1]-30),60,60,entity.entity) newEntity.selected=True self.pickSound.play() return newEntity if event.type==pygame.MOUSEWHEEL: if (self.window.get_rect(topleft=(500,100)).collidepoint(pygame.mouse.get_pos())): if self.window_rect.y>100 and event.y>0: self.window_rect.y+=-10 for entity in self.entities: entity.pos=(entity.pos[0],entity.pos[1]+10) elif self.window_rect.y<self.image.get_height()-self.window_rect.height and event.y<0: self.window_rect.y+=10 for entity in self.entities: entity.pos=(entity.pos[0],entity.pos[1]-10)
STACK_EDU
Running fn in VirtualBox Ubuntu image fails with Permission denied - Could not mount /sys/kernel/security Running fn (0.4.11) in a Virtual Box Ubuntu 16 VM with Docker 17.09 (created and managed through Vagrant on Windows 10). When I either try to run fn directly: fn start or use the indirect way of running: docker run --rm --name functions -it -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/data:/app/data -p 8080:8080 fnproject/functions I run into the error messages shown below: mount: mounting none on /sys/kernel/security failed: Permission denied Could not mount /sys/kernel/security. AppArmor detection and --privileged mode might break. mount: mounting none on /tmp failed: Permission denied INFO[0000] datastore dialed datastore=sqlite3 max_idle_connections=256 INFO[0000] no docker auths from config files found (this is fine) error="open /root/.dockercfg: no such file or directory" FATA[0000] Cannot get the proper memory information to size server. You must specify the maximum available memory by passing the -m command with docker run when starting the server via docker, eg: docker run -m 2G ... error="Didn't find MemAvailable in /proc/meminfo, kernel is probably < 3.14" Do any special prerequisites apply to the user as which fn is executed? Is running fn inside an Ubuntu VirtualBox in anyway special (and perhaps unsupported?)? it's possibly just a small parsing error as I don't think our memory parser was tested on too many platforms and likely isn't very robust. if you don't mind, it would be very helpful if you could paste the contents of your /proc/meminfo as well as a kernel version (lsb_release on ubuntu should do). sorry for the issue, and thanks for filing a bug. fwiw I run fn inside of a debian VM without issue, I'm betting it's a parser issue. Here is the contents of /proc/meminfo: vagrant@vagrant-ubuntu-trusty-64:/proc$ cat meminfo MemTotal: 4048172 kB MemFree: 1775300 kB Buffers: 72468 kB Cached: 1588960 kB SwapCached: 0 kB Active: 946420 kB Inactive: 1122812 kB Active(anon): 408136 kB Inactive(anon): 1268 kB Active(file): 538284 kB Inactive(file): 1121544 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 16 kB Writeback: 0 kB AnonPages: 407800 kB Mapped: 62656 kB Shmem: 1604 kB Slab: 159324 kB SReclaimable: 136188 kB SUnreclaim: 23136 kB KernelStack: 1664 kB PageTables: 5884 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 2024084 kB Committed_AS: 1024728 kB VmallocTotal:<PHONE_NUMBER>7 kB VmallocUsed: 54968 kB VmallocChunk:<PHONE_NUMBER>2 kB HardwareCorrupted: 0 kB AnonHugePages: 245760 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 44992 kB DirectMap2M: 4149248 kB The Ubuntu version: vagrant@vagrant-ubuntu-trusty-64:/proc$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.5 LTS Release: 14.04 Codename: trusty Thanks for your quick response. Maybe I will try Debian as alternative to Ubuntu for now. thanks for your help. ubuntu 16 should do the trick, as well. the kernel in 14.04 is too old to have the value we're looking for. we might could change the parser to calculate MemAvailable from the old values to fix this :) If we can decide that: Ubuntu 14.04 is not supported Ubuntu 16 will work Debian 8 certainly works (as I have experienced) I believe we can close this issue. Is that how this works? Or should it remain open because for example you want to fix the parse error for Ubuntu 14.04? I believe we can close this issue. Is that how this works? 👍 that works for me. if more issues over 14.04 pop up can look into the parser, will keep it in mind. thanks for filing this issue and glad you got it working. You can't deduce MemAvailable from a version that doesn't have it. The solution is pretty clear in the message, ie: pass in the -m flag. But the "Could not mount /sys/kernel/security" issue is still there in ubuntu 16.04. fn start mount: permission denied (are you root?) Could not mount /sys/kernel/security. AppArmor detection and --privileged mode might break. mount: permission denied (are you root?) time="2017-11-17T09:20:08Z" level=info msg="datastore dialed" datastore=sqlite3 max_idle_connections=256 time="2017-11-17T09:20:09Z" level=info msg="started tracer" url= time="2017-11-17T09:20:09Z" level=info msg="no docker auths from config files found (this is fine)" error="open /root/.dockercfg: no such file or directory" time="2017-11-17T09:20:09Z" level=info msg="available memory" ram=13428609024 ... v0.3.186 ... time="2017-11-17T09:20:09Z" level=info msg="Serving Functions API on address :8080"
GITHUB_ARCHIVE
In previous tutorial article series we have seen about “Mobile Testing“. In today’s article we are going to cover the Testing Checklist that you should go through once before start testing your project. During SDLC (Software Development Life Cycle) while software is in testing phase, it is advised to make a list of all the required documents and tasks to avoid last minute hassle. This way tester will not miss any important step and will keep a check on quality too. If tester doesn’t make any checklist or forgets to include any task in it then it is possible that he may miss some of the important defects. Testing Checklist is divided into number of categories which are listed as follows: 1) Resource Assignment and Training - To make sure that testing project has sufficient budget allocated at project level. - We have sufficient staffing or human resources allocated for testing project. - Analyze skills and competencies of test team to make sure whether they are competent enough or required more grooming to meet required skill set. - All required testing tools are installed at workstation with appropriate software licence. - All resources are well trained on required testing tools and project business. - Required responsibilities are assigned to team member and respective leads. - All required sign off are procured from senior management for staffing and training. 2) Software Testing Documentation - Make sure that all the functional documents and design documents are completed before testing team can start writing test cases. - Test plan is created covering all the required test cases. - Test cases are created covering all the required business use cases. - Review of test cases and test plan following maker and checker policy. - Setting up of Bug reporting portal to log the defects. - Creation of tractability matrix with the functional team to make sure functions are mapped to test cases. - Make sure, project weekly status report format is well defined. - Sign off or approval from QA manager to execute the test cases. 3) Software Testing Checklist - Regression suite is executed successfully when testing with new test phase or new project release. - Make sure each tester is filling the time sheet and logging defect in defect portal on daily basis. - Keeping a check on total test cases executed on daily basis and hence project work progress. - Test weekly status reports is circulated on weekly basis with correct format and to required recipients. - Open bugs are addressed timely by development team, requirement gathering team and senior management. - Make sure, there are no roadblocks in testing area related to technologies, management and client behavior. - Make sure before declaring the testing status as complete or providing testing sign off, all major or minor open bugs or defects are either closed or deferred for future release. - Make sure all system compatibility checks are done, e.g. an application working on IE explorer should also work on chrome, Mozilla, etc. - Review of test plan to make sure project complies with the required design methods. - Evaluation of project goal statement with the business use cases. - Project should comply with all required legal compliance’s. - Identify the priorities items in the project which are necessary for organization compliance’s and execute those items. - Examine project plan for strategic compliance with objective of business organization. - Verify that project deliverable are in compliance with the client requirements. - Evaluate project capabilities meets the desired outcome to accomplish predefined project goal. - All necessary project compliance sign offs are procured from senior management. 5) Measurability and Monitoring - Evaluation of project activities and processes that are well measurable to set the desired level of performance. - Verification of System process measures for reliability and accuracy. - Requesting the client to assign accessibility of project system to project team for review. - Scheduling check point call, test phase completion call and daily scrum call to monitor the progress of test project. - Make sure test project deliverable has predefined acceptance criteria which is approved, this will help to measure project deliverable. - Make sure, proper escalation contact details are well communicated to client and project team members. 6) Project Flexibility - Make sure that project is flexible and has ability to make desired amendments timely. - Evaluate project has risk mitigation plan after analyzing all the possible project risk factors. - Make sure project control system is reliable and effective. - Make sure that project has contingency plan to address exception and unforeseen events. - Be confident that project goal completely address problems defined by the business use cases. - Creation of self-regulatory feedback loop for project to make sure every problem is reported related to test project work. These are some of the main terms should be included in the Testing Checklist, however every organization has different software and application Testing Checklist may vary. It is always a good practice to make a checklist so that testing can be done in a proper way and no important point should be missed. If you are not regular reader of this website then highly recommends you to sign up for our free email newsletter!! Sign up just providing your email address below:
OPCFW_CODE
""" This module is full of little recursive functions that help with converting string-path notations into nested dicts, and vice versa. A string-path, or "flattened" dict might look like this: {'foo.bar.bat': 'asdfhjklkjhfdsa'} As an "expanded" or "nested" dict, the same data would be: {'foo': {'bar': {'bat': 'lkjhgfdsasdfghjkl'}}} """ from collections.abc import Mapping from typing import Dict from cfitall import ConfigValueType def add_keys(destdict: dict, srclist: list, value: ConfigValueType = None) -> dict: """ Nests keys from srclist into destdict, with optional value set on the final key. :param destdict: dictionary to add keys to :param srclist: list to add keys from :param value: final key's value """ if len(srclist) > 1: destdict[srclist[0]] = {} destdict[srclist[0]] = destdict.get(srclist[0], {}) add_keys(destdict[srclist[0]], srclist[1:], value) else: destdict[srclist[0]] = value return destdict def expand_flattened_path( flattened_path: str, value: ConfigValueType = None, separator: str = "." ) -> dict: """ Expands a dotted path into a nested dict; if value is set, the final key in the path will be set to value. :param flattened_path: the dotted path to expand to a nested dict :param value: final key's value :param separator: separator between dict keys in flattened_path """ split_list = flattened_path.split(separator) return add_keys({}, split_list, value) def flatten_dict(nested: dict) -> dict: """ Flattens a deeply nested dictionary into a flattened dictionary. For example `{'foo': {'bar': 'baz'}}` would be flattened to `{'foo.bar': 'baz'}`. :param nested: dictionary to flatten """ flattened = {} for key, value in nested.items(): if isinstance(value, Mapping): for subkey, subval in value.items(): newkey = ".".join([key, subkey]) flattened[newkey] = subval flatten_dict(flattened) else: flattened[key] = value mappings = [isinstance(value, Mapping) for key, value in flattened.items()] if len(set(mappings)) == 1 and set(mappings).pop() is False: return flattened elif len(set(mappings)) > 0: return flatten_dict(flattened) return {} def merge_dicts(source: Mapping, destination: dict) -> dict: """ Performs a deep merge of two nested dicts by expanding all Mapping objects until they reach a non-mapping value (e.g. a list, string, int, etc.) and copying these from the source to the destination. :param source: source dictionary to copy from :param destination: destination dict to merge into """ for key, value in source.items(): key = key.lower() if isinstance(key, str) else key if isinstance(value, Mapping): node = destination.setdefault(key, {}) merge_dicts(value, node) else: destination[key] = value return destination def expand_flattened_dict(flattened: dict, separator: str = ".") -> dict: """ Expands a flattened dict into a nested dict, e.g. {'foo.bar': 'baz'} to {'foo': {'bar': 'baz'}}. :param flattened: dictionary with flattened keys to expand :param separator: separator between dict keys in flattened_path """ merged: Dict[str, ConfigValueType] = {} for key, value in flattened.items(): expanded = expand_flattened_path(key, value=value, separator=separator) merged = merge_dicts(merged, expanded) return merged
STACK_EDU
I am trying to understand how Python works (because I use it all the time!). To my understanding, when you run something like python script.py, the script is converted to bytecode and then the interpreter/VM/CPython–really just a C Program–reads in the python bytecode and executes the program accordingly. How is this bytecode read in? Is it similar to how a text file is read in C? I am unsure how the Python code is converted to machine code. Is it the case that the Python interpreter (the python command in the CLI) is really just a precompiled C program that is already converted to machine code and then the python bytecode files are just put through that program? In other words, is my Python program never actually converted into machine code? Is the python interpreter already in machine code, so my script never has to be? Yes, your understanding is correct. There is basically (very basically) a giant switch statement inside the CPython interpreter that says “if the current opcode is so and so, do this and that”. Other implementations, like Pypy, have JIT compilation, i.e. they translate Python to machine codes on the fly. If you want to see the bytecode of some code (whether source code, a live function object or code object, etc.), the dis module will tell you exactly what you need. For example: 'i/3') 1 0 LOAD_NAME 0 (i) 3 LOAD_CONST 0 (3) 6 BINARY_TRUE_DIVIDE 7 RETURN_VALUEdis.dis( dis docs explain what each bytecode means. For example, Pushes the value associated with co_names[namei]onto the stack. To understand this, you have to know that the bytecode interpreter is a virtual stack machine, and what co_names is. The inspect module docs have a nice table showing the most important attributes of the most important internal objects, so you can see that co_names is an attribute of code objects which holds a tuple of names of local variables. In other words, LOAD_NAME 0 pushes the value associated with the 0th local variable (and dis helpfully looks this up and sees that the 0th local variable is named And that’s enough to see that a string of bytecodes isn’t enough; the interpreter also needs the other attributes of the code object, and in some cases attributes of the function object (which is also where the locals and globals environments come from). inspect module also has some tools that can help you further in investigating live code. This is enough to figure out a lot of interesting stuff. For example, you probably know that Python figures out at compile time whether a variable in a function is local, closure, or global, based on whether you assign to it anywhere in the function body (and on any global statements); if you write three different functions and compare their disassembly (and the relevant other attributes) you can pretty easily figure out exactly what it must be doing. (The one bit that’s tricky here is understanding closure cells. To really get this, you will need to have 3 levels of functions, to see how the one in the middle forwards things along for the innermost one.) To understand how the bytecode is interpreted and how the stack machine works (in CPython), you need to look at the ceval.c source code. The answers by thy435 and eyquem already cover this. pyc files are read only takes a bit more information. Ned Batchelder has a great (if slightly out-of-date) blog post called The structure of .pyc files, that covers all of the tricky and not-well-documented parts. (Note that in 3.3, some of the gory code related to importing has been moved from C to Python, which makes it much easier to follow.) But basically, it’s just some header info and the module’s code object, serialized by To understand how source gets compiled to bytecode, that’s the fun part. Design of CPython’s Compiler explains how everything works. (Some of the other sections of the Python Developer’s Guide are also useful.) For the early stuff—tokenizing and parsing—you can just use the ast module to jump right to the point where it’s time to do the actual compiling. Then see compile.c for how that AST gets turned into bytecode. The macros can be a bit tough to work through, but once you grasp the idea of how the compiler uses a stack to descend into blocks, and how it uses those compiler_addop and friends to emit bytecodes at the current level, it all makes sense. One thing that surprises most people at first is the way functions work. The function definition’s body is compiled into a code object. Then the function definition itself is compiled into code (inside the enclosing function body, module, etc.) that, when executed, builds a function object from that code object. (Once you think about how closures must work, it’s obvious why it works that way. Each instance of the closure is a separate function object with the same code object.) And now you’re ready to start patching CPython to add your own statements, right? Well, as Changing CPython’s Grammar shows, there’s a lot of stuff to get right (and there’s even more if you need to create new opcodes). You might find it easier to learn PyPy as well as CPython, and start hacking on PyPy first, and only come back to CPython once you know that what you’re doing is sensible and doable. Having read the answer of thg4535, I am sure you will find interesting the following explanations on ceval.c : Hello, ceval.c! This article is part of a series written by Yaniv Aknin whose I’m sort of a fan: Python’s Innards
OPCFW_CODE
Today the //BUILD/Windows conference started with a day full of main sessions. A two and a half hour keynote followed by three “Big Picture” sessions after lunch introduced all present to the new Windows 8 OS and platform. Personally, I am still catching my breath over the amazing amount of information that was given to us. Lots of people have asked me since then what I thought of the first day. I have told them: “Fantastic”. Here’s why, in my words and with a personal interpretation of the information that is available to everyone. Merging the web with Windows What I saw Microsoft present today is the new vision for the Windows 8 operating system. Windows 8 merges with the web in terms of devices and applications. It also brings together all your devices and makes them a mesh that is constantly in sync with one another. This mainly goes for data, be it application data, application settings or identities. The Cloud (Windows Live in particular, complemented by Windows Azure) acts as the conduit to flow and store information across devices and online storage. Web of applications Equally important is the fact that all applications are not only interconnected with the OS and the cloud, but also with one another. Searching and exchanging data has become a smooth continuum across OS and applications alike. I like to think of it as a new opportunity to create Windows-based mashups, just like it has become common place for Web 2.0 applications. This style of creating applications will now come to native Windows applications. The possibilities will be endless. All you need to do is reimagine your apps, just like Microsoft has reimagined Windows. A new family of applications Metro, great for users and developers Whether you are going for the XAML or HTML5 approach, Metro apps have the same distinct look and feel, backed up with styling guidelines. The Visual Studio 2011 development environment embraces Metro and offers a whole slew of templates to facilitate in creating the right Metro views for your app. The Metro style will help users get a consistent feel across devices (desktop PC, tablet or phone) and form factors (small, medium, large). Whether they pick up a phone with WP7.5 or a tablet with Windows 8, it all looks and feels the same. For now, Metro style applications are built targeted at either Windows 8 or Phone 7.5 Opportunities for everyone With all of this come great opportunities for everyone: users, designers, developers, architects and business, you name it. Since this is a developer conference, I would like to point out that the next couple of years will be very, very interesting for developers. The shift in programming paradigm from rich-client, high-end client applications to interconnected, cloud- and os-integrated applications are a new ball-game and a game changer to boot. This will give lots of change, chance and learning for all to enjoy. The extra gift As if all of this goodness and announcements weren’t enough there was the unexpected gift. Secretly anticipated and living up to speculation all attendees of //BUILD/Windows were giving a new Developer Preview Tablet: a Samsung tablet with amazing specs. The line for picking it up was unbelievable. I chose to wait until tomorrow to pick it up. I still need to do so. Everybody is excited about it. Check the specs to see why.
OPCFW_CODE
Using glass mapper to map droplink field in Sitecore Has anyone ever used GlassMapper to map complex field types in Sitecore? Glass seems to work well with string fields but Droplinks, Droplists or other types in Sitecore dont map in the model. There is no field type of DropLink. There is a LookupField but it doesn't work with droplinks or droplists. In your use case it is actually quite straightforward to achieve this in GlassMapper. A droplist will only store the name of the item that has been selected - so it would be best mapped to a string. A Droplink stores an ID of the item being linked. You can use a type that you have already created to represent the linked item, and Glass is smart enough to find the item in Sitecore by ID and then cast it to whatever type you had in place. If no item is selected in the droplink, it will return null. As an example to illustrate: [SitecoreType(TemplateId = "{149FA0C9-1111-1111-1111-FD9194EAC887}", AutoMap = true)] public class MyLinkingItem { public Guid Id { get; set; } //Should be the Name of your Droplink field public MyLinkedItem LinkedItem { get; set; } } [SitecoreType(TemplateId = "{149FA0C9-2222-2222-2222-FD9194EAC887}", AutoMap = true)] public class MyLinkedItem { public Guid Id { get; set; } } You can use similar tactics for other complex field types. For example, any kind of Multilist field e.g. Treelist can be represented by an IEnumerable<MyLinkedItem>. I'm not clear on how this will provide the properties of the droplink field? It appears to be a GUID only. Thank you for the input. The droplink field will store the ID of the item that you have linked, therefore all Glass needs to have on the linked type is the Guid, from which it can get the linked item. To access additional fields of the item to which you are linking, you would need to add those fields to the Linked Item's class in the example above. This solution worked for me. Be sure to not that when you switch from a DropList field type to a DropLink field type don't forget to reselect the dropdown option in the CMS, otherwise it will complain with a "The field contains a value that is not in the selection list." and your fields won't map correctly.
STACK_EXCHANGE
When we’re talking about cloud migration, lift-and-shift, using Azure virtual machines is the most common Infrastructure as a service (IaaS) offering in Azure to make your projects “go cloud.” However, many real-world projects don’t fit so neatly into one category. When it comes to projects that require flexible scenarios and come with lots of specific requirements, I’ve found Azure Virtual Machines is the most flexible option. In this post, we will look at deploying multiple virtual machines (VMs) on Azure to improve availability and scalability using an Azure Resource Manager ( ARM ) Template or otherwise via Virtual machine scale sets ( VMSS ). Azure Virtual Machines types Azure Virtual Machines provides scalable computing resources offered by Azure, there are multiple options: - On-demand or Pay-as-you-go ( PAYG ), you can choose the appropriate SKU for your VM and will be invoiced monthly ( calculated on an hourly basis ) - Reserved Instances ( RIs) allows reserved infrastructure from 1 year to 3 years, for now, while choosing it you may save up to 72% - Low priority VMs are allocated from surplus compute capacity from Azure which enables certain types of workloads to run for a significantly reduced cost or allowing users to do much more for the same cost, specifically widely used whiling working with batch processing on Azure with Azure Batch. To know more about it, you can refer the blog : Use low-priority VMs with Batch and Batch computing at a fraction of the price In general, Azure VMs provide you with an operating system, storage, and networking capabilities, and they can run a wide range of applications. Using Azure VMs can be an evolution than what we have done on-premises before. Compared to a traditional on-premises virtual machine, Azure VM allows to deploy a VM within minutes, and which can be quickly scaled up and down. You can pay only for what you use without buying and maintaining the physical hardware in compared to on-premises servers. Although users still need to configure, patch, and maintain OS and applications on the VM. Classic Azure VMs vs. ARM Azure VMs Today, it still exists two types of Azure VMs for some reason : - Classic Azure VMs: Azure VMs deployed in classic deployment model. When users interact with classic mode resources from a command line such as Azure PowerShell, you are using Azure Service Management API calls (ASM). ASM is an old way of accessing Azure resources. - ARM Azure VMs: Azure VMs deployed in Azure Resource Manager ( ARM ) model. ARM allows you to deploy Azure resources in groups called Resource Groups. When users interact with ARM using command-line tools such as Azure PowerShell, they are using ARM API calls. ARM model allows to deploy, manage, and monitor Azure resources as a logical group which is known as Azure resource group, it can be deployed also via ARM templates. The templates can be created to include a set of resources to be deployed as part of a cloud solution. Role-based access control (RBAC) policies can be applied to all resources in a Resource Group, it is also possible to specify “tags” to resources and help users keep track the usage of these resources. Azure VMs in classic and arm model can be created via the Azure portal as shown in the following : Creating Azure VMs via the Azure portal Workloads can be deployed with Azure VMs There is some workloads better fit for hosting on Azure VMs for example : - Highly available service workloads with unpredictable growth workloads such as e-commerce websites with short-term increased sales of fad items, for example, black Friday. - Steady workload scenarios where organizations simply want to offload their infrastructure to the cloud to cut down cost. However, regulated environment workloads might be suitable candidates for a hybrid solution where only some highly available data is hosted in Azure and the more sensitive, regulated data is kept on-premises. All Microsoft software installed in the Azure virtual machine environment must be licensed correctly. The Microsoft server software support for Microsoft Azure virtual machines page lists the currently supported products and versions. Azure is the most cost-effective cloud for Windows Server and SQL Server, users can achieve the lowest cost of ownership when users combine the Azure Hybrid Benefit, to get more information, please check here. Keep yourself up to date with the latest SKU Virtual machines are available in several different size families. You can configure virtual machines with a variety of options for CPU, memory, and IOPS. To keep yourself update to date, please check the following page regularly : In this post, we’ll also talk about some special VMs : Dv3 and Ev3 for Nested Virtualization In May 2017, Microsoft announced that Azure VMs such as Azure VM sizes Dv3 and Ev3 & will support nested virtualization. These new sizes introduce Hyper-Threading Technology running on the Intel® Broadwell E5-2673 v4 2.3GHz processor, and the Intel® Haswell 2.4 GHz E5-2673 v3. The shift from physical cores to virtual CPU’s (vCPU) is a key architectural change that enables us to unlock the full potential of the latest processors to support even larger VM sizes. To know more about Introducing the new Dv3 and Ev3 VM sizes. Nested virtualization is a feature allows you to install the Hyper-V feature inside a Windows Server running on Azure VMs, making it very useful for training and demo scenarios. To know more about Nested Virtualization in Azure. Dv3 and Ev3 for Nested Virtualization “Burstable” B-Series VMs One of the most common complaints about Azure Virtual Machine pricing is that it’s too expensive for small workloads. Its “Burstable” B-Series as an even more cost-effective and affordable cloud for lightweight workloads. This is for smaller workloads that do not utilize the full CPUs allocated, such as for web servers, small databases, or development, test, or QA environments. The B-Series VMs work much differently than the other options in the VM Series. You pay for a baseline of vCPU performance utilization with the number of vCPU cores allocated. The new B-Series is $5 per month for the smallest size; the Standard B1s and the Standard B8ms offer 8 “burstable” vCPU cores for approximately $139 per month with a baseline of 135%. This means that the B8ms at its base is similar to other Single vCPU Core VM sizes, but it can burst to 800% vCPU performance utilization and use all 8 cores at 100% each for short periods. Burstable B VM sizes How to create an Azure Virtual Machines Because virtual machines are suited for different users and purposes, Azure offers the following ways to create a virtual machine : - Azure portal: You can check how to create a Linux VM and windows VM here via Azure portal. - Azure PowerShell: You can check how to create a Linux VM and windows VM here . - ARM Template: You can find the ARM template to create a simple Windows VM here. - Azure CLI: Azure CLI is used to create and manage Azure resources such as Azure Virtual Machine from the command line or in scripts. Azure Cloud Shell is a web-based shell that is preconfigured to simplify using Azure tools. With Cloud Shell, you always have the most up-to-date version of the tools available, and you don’t have to install, update, or separately log in. To learn more, refer to the official documentation for creating Create an Azure virtual machine with Azure CLI and CloudShell. Deploying Multiple Virtual Machines on Azure VM Scale Sets help you deploy and manage a set of identical VMs. There are two basic ways to configure VMs deployed in a VM Scale Set: - Use extensions to configure the VM after it is provisioned. - Create VMSS via the latest custom image. Take note that VM Scale sets support up to 1,000 VM instances, however, if you create and upload your own custom VM images, the limit is up to 300 instances. For some additional considerations, please refer to Designing VM Scale Sets for Scale. Let’s take a look at how to create a Virtual Machine Scale Set (VMSS). When we are in the Azure Portal, we can click ‘ Create’ then ‘ Compute’, you can find the option VMSS as shown in the following : Creating VMSS via the Azure portal You can find more how to create a VMSS from Azure documentation Quickstart: Create a virtual machine scale set in the Azure portal. Configuring scale out and scale up after creating VMSS After the deployment, we can see that there are some related resources in the same resource group of the VMSS: We have created a public IP for the load balancer and a virtual network for our VMSS. Even after the creation of VMSS, we can scale it up and out, and we can add a scale condition for the VM in the same VMSS. Scale Sets support auto-scaling based on performance metrics. As the load on the VMs increases, additional VMs are automatically added to the load balancer. Consider Scale Sets if you need to quickly scale out VMs or if you need to autoscale. For more information on Scale Sets, please refer to What are virtual machine scale sets in Azure? As companies transition to the cloud, knowing how to deploy multiple virtual machines will continue to be useful as you add flexibility and customization to your deployments, and especially as you think about topics like disaster recovery and high availability. Over time, I believe that we’ll turn to more and more serverless, which is what I’m working on now. The cloud is our future, and making it “normal” and easy to use and apply is our mission. I hope you’ll continue learning with me! Published on 17 October 2017.
OPCFW_CODE
/** * @file * @author Rafal Chojna <rafalc@wolfram.com> * @brief Definition and implementation of ImageView and ImageTypedView. */ #ifndef LLU_CONTAINERS_VIEWS_IMAGE_HPP #define LLU_CONTAINERS_VIEWS_IMAGE_HPP #include "LLU/Containers/Generic/Image.hpp" #include "LLU/Containers/Interfaces.h" #include "LLU/Containers/Iterators/IterableContainer.hpp" #include "LLU/ErrorLog/ErrorManager.h" namespace LLU { /** * @brief Simple, light-weight, non-owning wrappper over MImage. * * Intended for use in functions that only need to access MImage metadata, where it can alleviate the need for introducing template parameters * for MImage passing mode (like in GenericImage) or data type (like in Image class). */ class ImageView : public ImageInterface { public: ImageView() = default; /** * Create a ImageView from a GenericImage * @param gIm - a GenericImage */ ImageView(const GenericImage& gIm) : m {gIm.getContainer()} {} // NOLINT: implicit conversion to a view is useful and harmless /** * Create a ImageView from a raw MImage * @param mi - a raw MImage */ ImageView(MImage mi) : m {mi} {} // NOLINT /// @copydoc ImageInterface::colorspace() colorspace_t colorspace() const override { return LibraryData::ImageAPI()->MImage_getColorSpace(m); } /// @copydoc ImageInterface::rows() mint rows() const override { return LibraryData::ImageAPI()->MImage_getRowCount(m); } /// @copydoc ImageInterface::columns() mint columns() const override { return LibraryData::ImageAPI()->MImage_getColumnCount(m); } /// @copydoc ImageInterface::slices() mint slices() const override { return LibraryData::ImageAPI()->MImage_getSliceCount(m); } /// @copydoc ImageInterface::channels() mint channels() const override { return LibraryData::ImageAPI()->MImage_getChannels(m); } /// @copydoc ImageInterface::alphaChannelQ() bool alphaChannelQ() const override { return LibraryData::ImageAPI()->MImage_alphaChannelQ(m) == True; } /// @copydoc ImageInterface::interleavedQ() bool interleavedQ() const override { return LibraryData::ImageAPI()->MImage_interleavedQ(m) == True; } /// @copydoc ImageInterface::is3D() bool is3D() const override { return LibraryData::ImageAPI()->MImage_getRank(m) == 3; } /// @copydoc ImageInterface::getRank() mint getRank() const override { return LibraryData::ImageAPI()->MImage_getRank(m); } /// @copydoc ImageInterface::getFlattenedLength() mint getFlattenedLength() const override { return LibraryData::ImageAPI()->MImage_getFlattenedLength(m); } /// @copydoc ImageInterface::type() imagedata_t type() const final { return LibraryData::ImageAPI()->MImage_getDataType(m); } /// @copydoc ImageInterface::rawData() void* rawData() const override { return LibraryData::ImageAPI()->MImage_getRawData(m); } private: MImage m = nullptr; }; template<typename T> class ImageTypedView : public ImageView, public IterableContainer<T> { public: ImageTypedView() = default; /** * Create a ImageTypedView from a GenericImage. * @param gIm - a GenericImage * @throws ErrorName::ImageTypeError - if the actual datatype of \p gIm is not T */ ImageTypedView(const GenericImage& gIm) : ImageView(gIm) { // NOLINT: implicit conversion to a view is useful and harmless if (ImageType<T> != type()) { ErrorManager::throwException(ErrorName::ImageTypeError); } } /** * Create a ImageTypedView from a ImageView. * @param iv - a ImageView * @throws ErrorName::ImageTypeError - if the actual datatype of \p iv is not T */ ImageTypedView(ImageView iv) : ImageView(std::move(iv)) { // NOLINT if (ImageType<T> != type()) { ErrorManager::throwException(ErrorName::ImageTypeError); } } /** * Create a ImageTypedView from a raw MImage. * @param mi - a raw MImage * @throws ErrorName::ImageTypeError - if the actual datatype of \p mi is not T */ ImageTypedView(MImage mi) : ImageView(mi) { // NOLINT if (ImageType<T> != type()) { ErrorManager::throwException(ErrorName::ImageTypeError); } } private: T* getData() const noexcept override { return static_cast<T*>(rawData()); } mint getSize() const noexcept override { return getFlattenedLength(); } }; /** * Take a Image-like object \p img and a function \p callable and call the function with a ImageTypedView created from \p img * @tparam ImageT - a Image-like type (GenericImage, ImageView or MNumericAray) * @tparam F - any callable object * @param img - Image-like object on which an operation will be performed * @param callable - a callable object that can be called with a ImageTypedView of any type * @return result of calling \p callable on a ImageTypedView over \p img */ template<typename ImageT, typename F> auto asTypedImage(ImageT&& img, F&& callable) { switch (img.type()) { case MImage_Type_Bit: return std::forward<F>(callable)(ImageTypedView<std::int8_t>(std::forward<ImageT>(img))); case MImage_Type_Bit8: return std::forward<F>(callable)(ImageTypedView<std::uint8_t>(std::forward<ImageT>(img))); case MImage_Type_Bit16: return std::forward<F>(callable)(ImageTypedView<std::uint16_t>(std::forward<ImageT>(img))); case MImage_Type_Real32: return std::forward<F>(callable)(ImageTypedView<float>(std::forward<ImageT>(img))); case MImage_Type_Real: return std::forward<F>(callable)(ImageTypedView<double>(std::forward<ImageT>(img))); default: ErrorManager::throwException(ErrorName::ImageTypeError); } } /// @cond // Specialization of asTypedImage for MImage template<typename F> auto asTypedImage(MImage img, F&& callable) { return asTypedImage(ImageView {img}, std::forward<F>(callable)); } /// @endcond } // namespace LLU #endif // LLU_CONTAINERS_VIEWS_IMAGE_HPP
STACK_EDU
What's wrong with this custom block's loot table? It's supposed to choose drop by testing for silk touch, but non-enchanted pick drops both options I have a custom block that is supposed to be only obtainable using a Silk Touch pickaxe. If mined with a pickaxe without Silk Touch, it should drop end stone instead, and if mined with the wrong tool, it should drop nothing. Instead, it drops the correct loot only when a Silk Touch pickaxe is used. When a regular pickaxe is used, it drops both itself and an end stone block; when mined with the wrong tool, the same thing happens. From the official documentation, "Applying a condition to a pool allows you execute the entire pool based on the conditions defined." In my custom block's loot table, each pool has its own (mutually exclusive) condition, so I'm not sure what's causing multiple drops (which would imply matching both or possibly all 3 conditions, which should be impossible). Here is the loot table .json file: { "pools": [ // Give rhizome block if mined using a Silk Touch pickaxe { "condition": "query.equipped_item_any_tag('slot.weapon.mainhand', 'minecraft:is_pickaxe') && query.has_silk_touch", "rolls": 1, "entries": [ { "type": "item", "name": "end:rhizome" } ] }, // Give end stone if mined with a pickaxe without Silk Touch { "condition": "query.equipped_item_any_tag('slot.weapon.mainhand', 'minecraft:is_pickaxe') && !query.has_silk_touch", "rolls": 1, "entries": [ { "type": "item", "name": "minecraft:end_stone" } ] }, // Drop nothing if mined with anything other than a pickaxe { "condition": "!query.equipped_item_any_tag('slot.weapon.mainhand', 'minecraft:is_pickaxe')", "rolls": 1, "entries": [ { "type": "empty" } ] } ] } Rearranging the three pools does not change the result. So, it doesn't matter whether the "has silk touch" condition comes first or third: it's still the only one that works as intended. I also tested swapping out the condition strings with condition arrays of the following form: "conditions": [ { "condition": "same molang queries here" } ] but the same problem was still happening. @pppery If there's a rule that all questions tagged with [tag:minecraft-addons] must also have the [tag:minecraft-bedrock-edition] tag, it seems like that bit of information ought to be in the tag wiki (it currently isn't). So feel free to suggest it. You have just as much rights as I do there. It's been standard practice here since forever that every question is tagged with the game it applies to, which is Minecraft: Bedrock Edition in this case. @pppery Oh, I didn't realize that. I only looked at the description of the addons tag and saw that it already mentioned Minecraft Bedrock Edition (but not that tag), so thought it might be redundant to have both. Thanks for the info and the retag! Here is a different way of accomplishing the same thing, without using conditions in the loot table: Loot table for the block: (set to a single, empty entry) { "pools": [ // A placeholder; the actual drop will be determined in main.js { "rolls": 1, "entries": [ { "type": "empty" } ] } ] } Script file (main.js; uses @minecraft/server version 1.10.0-beta): -- This is where we'll test for use of silk touch pickaxe vs. unenchanted pickaxe vs. no pickaxe import { world, system, ItemStack, ItemComponentTypes } from "@minecraft/server" // For use inside the .playerBreakBlock event listener, for getting the broken block's former coordinates // Properties will be player names (type String); values will be Block objects var lastBlockHitByPlayer = {}; world.afterEvents.entityHitBlock.subscribe(event => { // Keep a record of the last block the player hit if (event.damagingEntity.typeId === "minecraft:player") { lastBlockHitByPlayer[event.damagingEntity.name] = event.hitBlock; } }); world.afterEvents.playerBreakBlock.subscribe(event => { if (event.brokenBlockPermutation.type.id === "end:rhizome") { const stack = event.itemStackAfterBreak; const player = event.player; // Block will drop rhizome if mined with a silk touch pickaxe (this happens automatically) if (stack !== undefined && stack.typeId.includes("_pickaxe") && !stack.getComponent(ItemComponentTypes.Enchantable).hasEnchantment("silk_touch")) { // Drop regular end stone if the block is mined with a non-silk-touch pickaxe (async () => { // For .playerBreakBlock, there's no direct way to access the block's coordinates (because it isn't there anymore?). Instead, the coordinates // of the block that has just been broken should be the same as those of the most recent block which the player hit (since hitting the block - left click - is a prerequisite to breaking it). await player.dimension.spawnItem(new ItemStack("minecraft:end_stone"), {x: lastBlockHitByPlayer[player.name].x, y: lastBlockHitByPlayer[player.name].y, z: lastBlockHitByPlayer[player.name].z}); })(); } // Otherwise, drop nothing } }); With this method, the custom block drops the correct loot for each case.
STACK_EXCHANGE
So you want to create an interactive map? And why wouldn’t you? Interactive maps are a great way to share your data in a meaningful way. In this article, we will look at how to create an interactive map using Maptive’s mapping software. Maptive is powered by Google Maps and can take up to 100,000 addresses and transform them into an interactive map. From there, you have a number of tools that allow you to view and analyze your data. Furthermore, you can collaborate with others on a map in real-time or make changes to an embedded map without re-uploading it. This is the core of how the Maptive mapping software works. Let’s zoom in and examine its features in detail. Features of Maptive - Filtering Tool Maptive’s filter tool lets you group and refine your geographic data so you can display different information on your map. The filtering options include categories, number ranges, dates, and more. - Multi-Stop Route Planner and Optimization Tool A map will get you where you want to go, but to get there efficiently, it’s best to use a route-planner, especially if you’re planning on making multiple stops. Maptive offers a multi-stop and route optimization tool so you can design the most efficient way to travel. It allows you to plot a course with over 20 locations and up to 70 stops. Additionally, you can drag and drop stops on your route to customize your drive and get turn-by-turn directions. - Heat Mapping Tool A heat map allows you to plot the density of your data. In doing so, you get a better understanding of where you have large concentrations of data and where data is sparse. Maptive lets you apply this tool to sales data, customer locations, competitor locations, and other geographic data types. - Territory Drawing Tool The territory drawing tool helps you organize your map by creating visual boundaries based on predefined areas such as cities, states, zip codes, territories, districts, etc. You can also create boundary fills to color-code each area depending on the marker density, demographic data, or your own numeric information. This will allow you to analyze the data within your territories or create sales territories so that customers are distributed evenly. - Demographics/Census Mapping Tool To strengthen the content of your maps, Maptive employs population data from the U.S. census. This means that in addition to your data, you can also see things like population age, density, race, education, median household income, etc. - Store Finder Tool If your business has multiple physical locations, it can help to create a store finder map for your website. This will allow users to view the distance from their current location to your business locations. Now that we’ve reviewed the tools Maptive offers, let’s look at the process for creating your map. How to Build an Interactive Map with Maptive - Create a Maptive Free Trial Account Although Maptive requires that you purchase a paid subscription, they offer a 10-day free, no-risk trial. To create your free trial account, you will need to go to Maptive.com and click on the Create Your Free Map Now button. This will redirect you to a form. Simply fill out the information requested and click the Create Account button. You will then be sent an activation link by email. Click on this link to access your new account. - Transfer or Input Your Data To create your map, you need to click the Create New Map button. A pop-up will appear, prompting you to name and describe your map. Once you’ve done this, click Continue. You will then be prompted to enter a data source. You can either upload your data from an Excel or Google spreadsheet, copy and paste in your data, or enter your data manually. - View Your Map Once your data has been uploaded, click Create Map, and your data will transform into an interactive map which you can then analyze using the tools mentioned above. Creating an interactive map with Maptive is relatively simple. However, it’s not so much the creation of the map that makes Maptive a helpful tool. Instead, it’s Maptive’s many features that allow you to bring your data to life.
OPCFW_CODE
Was the superstition in Vikings historically accurate? In season 1 of Vikings, most of the content was filled with superstitions and prayers. It seemed as if everything was linked with gods and omens, which seemed unusual to me. I am trying to ask, were vikings actually so much superstitious or was this aspect depicted incorrectly in the season? If not, then why was it emphasized that much in contrast to a more historically accurate depiction of vikings. I haven't seen the series but it is a general trope to depict the medieval as a time of high belief and superstition where it in reality was not for most of the time. You need to be able to afford such, most of the misconceptions root in the early modern times (like witch hunts or highly religious people). Especially with the Vikings most texts we have about them are Christian propaganda (e.g. Adam von Bremen) and completely unreliable; the Vikings didn't write long text themselves. It is highly improbable from archaeological sources to assume that belief was much more than a social biding act especially as religion wasn't centralized, the contact to the gods was personal, there were no religious hierarchies, and the belief system was rooted in individual responsibility and deeds instead of earning absolution from higher instances. The vikings' culture had not yet reached a point where it was a national thing. Their religion still relied on verbal passing down and local differences, as opposed to having centralized churches like the English kingdoms have. When they meet the English, they see a religion that is far better organized than theirs. Which is foreign to them, and probably screams "cult" in their minds as they cannot comprehend how all the English they meet are so in tune with their religion (as opposed to having regional differing opinions). To the vikings, that makes the English seem like mere mouthpieces, spouting the same religious nonsense (as they see it), and turns their perception of the English to mindless religious zombies as opposed to individuals with their own opinions of their religion. They're also not used to a monotheistic religion, and if I recall correctly openly mock having a single god in a particular scene, with Floki (I think) saying something like "how can he be the god of everything?" And then there's just the general dislike of them because they are an enemy ripe for raiding. Most men other than Ragnar have raiding on their minds, so they demonize their enemy. What better way to demonize them than by calling on religious superstition? Floki seems to be the instigator for these types of events, often using Norse religion to band together the Vikings, and adding a hate for the English/Catholic at the same time. He never forgave Rollo for getting baptized (even though Rollo mocked the baptism and never took it seriously), and never got over his dislike for Athelstan. He often starts the fighting. Think back on the first meet between the vikings (just landed) and the English. The English guard takes off his medallion and gives it to Ragnar/Rollo (I forget?) as a sign of peace and trust (which is a big thing, given you have armed soldiers landing on your beach). Floki then lunges forward and forcibly takes another trinket for his own. That changes what was a friendly encounter into one where the Vikings are seen as greedy.
STACK_EXCHANGE
PoS (proof of stake) v. PoW (proof of work) If you're on this page, you've probably seen the terms "PoS" or "PoW" and are wondering what the heck they mean. PoW means "proof of work" and the prototypical PoW coin is Bitcoin. PoS means "proof of stake" and there is currently a large push in many cryptos to switch from a PoW to a PoS system. Some of the big names thinking about PoS are Ethereum (with their "Casper" protocol) and Cardano (Ouroboros protocol). In order to understand the nuances between PoS and PoW protocols, it is necessary to explain what they are and how they work. PoW protocols rely on miners to secure the network and ensure transactions are processed. The miners use electricity inputs (with hardware / mining equipment) and the resultant outputs is "hash rate". Although this explanation is overly simplistic, it conveys the basic process well. Miners, in a PoW system, compete to solve blocks and the reward for one who solves a block is paid in the coin being mined according to a predefined pay schedule. The algorithms used to solve blocks tend to vary in difficulty based on the amount of aggregate hash rate being used to secure the network. The variance in difficulty is necessary to ensure the blocks are not solved too quickly or slowly. Bitcoin blocks are solved 4 times slower than Litecoin blocks. One is not better just because it is faster, although blocks that are solved more often does result in a quicker overall transaction speed for the network. In a pure PoW system, the coin economy will die if there are no miners supporting it. Likewise, as the overall hash rate of a coin increases, the more secure the network becomes. PoS, on the other hand, completely eliminates the need for miners. A PoS system relies on a connected network of nodes (again, this is drastically overly simplistic) to secure the network. In order to support a PoS coin, one must download a 'client' for the coin (these can either be full-nodes, which download the entire network and every block ever mined, or lite-clients, which do not download the entire block but instead process ongoing transactions and use less space on one's computer). The PoS system relies on these nodes, and without them, it will fail. In order to encourage people to keep their node active, PoS systems reward users who 'stake' their coins every so often by paying staking users with processing fees. When coins are 'staking', they are weighted based on the amount of coins being used to stake. The idea is that users who stake the most coins are less likely to want to try and attack the network and are thereby the most trustworthy. When coins are staking, they cannot be spent (although you can end the staking process at any time with the click of a button and can resume staking just as quickly). There are a few interesting side-effects of a pure PoS system, namely: (1) the people with the most coins benefit the most from a staking system because the payout is weighted based on the amount of coins that are staking; and (2) the network has no inherent value because there are no actual costs (as there are with mining) which means the network can be a lot more volatile and subject to the whims of consumer demand and speculation. There are also hybrid PoS and PoW systems, which we believe to be the best system if implemented properly. One example of a PoS / PoW hybrid system is DeepOnion (although they may be switching to a pure PoS system in the next year or so if the community votes to do so via the platform's VoteCentral feature). Learn more about PoS v. PoW and the best arguments for and against either system: 3. Vulnerable? Ethereum's Casper Tech Takes Criticism at Curacao Event: https://www.coindesk.com/fundamentally-vulnerable-ethereums-casper-tech-takes-criticism-curacao/ 4. Why PoS is Better than PoW: https://decentralize.today/why-pos-is-better-than-pow-2dc3cd9881a7 5. Proof-of-Work (PoW) is not very secure and is and electricity drain: https://email@example.com/proof-of-work-pow-is-not-secure-has-no-future-and-its-electricity-usage-is-a-cancer-that-would-ce8ecffbe474
OPCFW_CODE
import { IMessage, IRoom } from '../../../api/src'; import { chattyService } from '../app'; import { IAppModel, UpdateStream } from './meiosis'; export interface IRoomState { rooms: { /** Message you are currently editing */ message?: IMessage, /** Currently active room */ current?: IRoom; /** All rooms */ all?: IRoom[]; /** Rooms of the current user */ user?: IRoom[]; }; } export interface IRoomService { getRooms: () => void; getRoomsOfCurrentUser: () => void; getRoom: (id: number) => void; setCurrentRoom: (room: IRoom) => void; saveRoom: (room: IRoom) => void; clearRoom: (room: IRoom) => void; getMessages: (currentRoom: IRoom) => void; /** Save message in the current room */ saveMessage: (room: IRoom, message: IMessage) => void; /** Delete message in the current room */ deleteMessage: (room: IRoom, message: IMessage) => void; /** Update current message */ updateMessage: (message: IMessage) => void; } export const roomService = { initial: { rooms: { message: undefined as undefined | IMessage, /** Currently active room */ current: undefined as undefined | IRoom, /** All rooms */ all: undefined as undefined | IRoom[], /** Rooms of the current user */ user: undefined as undefined | IRoom[], }, } as IRoomState, actions: (us: UpdateStream) => { const getRooms = async () => { const rooms = await chattyService.getRooms(); if (rooms) { const all = rooms.sort((a, b) => a.name && b.name ? a.name > b.name ? 1 : -1 : 0); us({ rooms: { all } }); } }; const getRoomsOfCurrentUser = async () => { const state = us() as Partial<IAppModel>; const user = state?.users?.current; if (user && user.$loki) { const rooms = await chattyService.getRoomsOfUser(user.$loki); console.log(rooms); if (rooms) { us({ rooms: { user: rooms }}); } } }; const getMessages = async (currentRoom: IRoom) => { if (currentRoom) { const msgs = await chattyService.getMessages(currentRoom.name) || []; currentRoom.messages = msgs; us({ rooms: { current: currentRoom } }); } }; return { getRoom: async (id: number) => { const current = await chattyService.getRoom(id); if (current) { us({ rooms: { current } }); await getMessages(current); } }, getRooms, getRoomsOfCurrentUser, setCurrentRoom: async (current: IRoom) => { us({ rooms: { current } }); await getMessages(current); }, saveRoom: async (room: IRoom) => { const current = await chattyService.saveRoom(room); if (current) { us({ rooms: { current } }); } await getRooms(); }, clearRoom: async (room: IRoom) => { await chattyService.clearRoom(room.name); us({ rooms: { current: undefined } }); await getRooms(); }, updateMessage: async (msg: IMessage) => us({ rooms: { message: msg }}), /** Get the messages in the current room */ getMessages, saveMessage: async (current: IRoom, msg: IMessage) => { if (current) { const savedMsg = await chattyService.saveMessage(current.name, msg); if (typeof current.messages === 'undefined') { current.messages = []; } if (savedMsg) { current.messages = [savedMsg, ...current.messages]; } us({ rooms: { current, message: undefined } }); } }, deleteMessage: async (room: IRoom, msg: IMessage) => { if (room) { const isDeleted = await chattyService.deleteMessage(room.name, msg); if (isDeleted) { room.messages = room.messages.filter(m => m.$loki !== msg.$loki); } us({ rooms: { current: room } }); } }, } as IRoomService; }, };
STACK_EDU
Errata, Last updated 16 January 2021 - PP. 3-4,6,9: the roles of `tilde p` and `tilde p ^ **` should be made clear. `tilde p` represents any approximation (generally calculated using floating-point arithmetic). `tilde p ^ **` exclusively represents an approximation calculated using exact arithmetic. - P. 9 exercise 24 part b: (1) the sum should be from 1 to 9; and (2) 0.2071647018159241499410798569 should be 0.3550449718187918680449850763. - P. 18 exercise 27 part g: `T_2(x)` should be `T_3(x)`. - P. 24 end of Crumpet 8: In the last line, "not" should be "no". - P. 38 exercise 20: A better demonstration of the instability of calculating `<< c_n >>` would be to calculate `c_50`. - Section 2.1: Reference to maximum number of iterations should be deleted. It has been removed from the pseudo-code. - P. 47 exercise 20b: `f` should be `g`. - P. 48 exercise 29: Step 4 should return `i`, not `N_0`. - P. 48 exercise 30a: The equation should be `x-2^x+.95=0`. - P. 58 exercise 3b: Should read `f(x)=ln(2e^x)/2`. - P. 58 exercise 17: The roles of `f` and `g` should be swapped. - P. 59 exercise 20: "any fixed point" should read "any nonzero fixed point". - P. 59 exercise 22: The assumption `g'(hat x) ne 0` should be added. - P. 67 exercise 3: The Octave logo should be added. - P. 75 exercise 14: One of the "the"s should be stricken. - P. 249 solution 26 part f: The inequality `64/625 le 64/1125 le 64/6561` should be `64/6561 le 64/1125 le 64/625`. - P. 250 solution 30b: The derivative of `g` is missing the exponential factor, `e^(-xi^2)`, in two separate places. - P. 334 sec 2.2 answer 4c: `root(5)((4-3x^2)/2)` should be replaced by `root(5)((4-6x^2)/3)`. - P. 334 sec 2.2 answer 4f: Each instance of `(x^2-5x+1)` should be replaced by `-(x^2-5x+1)`. - P. 335 sec 2.3 answer 18: Should read, "Yes. Aitken's delta-squared method is designed to speed up linearly convergent sequences and `<< a_n >>` is linearly convergent." - P. 336 sec 2.4 answer 20: Is wrong. Needs to be corrected. A PDF including these changes can be found here. You might consider this a preview of the next edition. Suggestions welcome!
OPCFW_CODE
From: Ulf Hansson <firstname.lastname@example.org> To: Kishon Vijay Abraham I <email@example.com>, firstname.lastname@example.org Cc: "Rafael J . Wysocki" <email@example.com>, firstname.lastname@example.org, Yoshihiro Shimoda <email@example.com>, Geert Uytterhoeven <firstname.lastname@example.org>, email@example.com, Ulf Hansson <firstname.lastname@example.org>, Jonathan Corbet <email@example.com>, firstname.lastname@example.org Subject: [PATCH v2 3/3] phy: core: Update the runtime PM section in the docs to reflect changes Date: Wed, 20 Dec 2017 15:09:20 +0100 [thread overview] Message-ID: <email@example.com> (raw) In-Reply-To: <firstname.lastname@example.org> Let's update and clarify he phy documentation, to reflect the latest changes around the runtime PM deployment in the phy core. Cc: Jonathan Corbet <email@example.com> Cc: firstname.lastname@example.org Signed-off-by: Ulf Hansson <email@example.com> --- Documentation/phy.txt | 29 ++++++++++++++++------------- 1 file changed, 16 insertions(+), 13 deletions(-) diff --git a/Documentation/phy.txt b/Documentation/phy.txt index 457c3e0..1c2c761 100644 --- a/Documentation/phy.txt +++ b/Documentation/phy.txt @@ -160,19 +160,22 @@ associated with this PHY. PM Runtime ========== -This subsystem is pm runtime enabled. So while creating the PHY, -pm_runtime_enable of the phy device created by this subsystem is called and -while destroying the PHY, pm_runtime_disable is called. Note that the phy -device created by this subsystem will be a child of the device that calls -phy_create (PHY provider device). - -So pm_runtime_get_sync of the phy_device created by this subsystem will invoke -pm_runtime_get_sync of PHY provider device because of parent-child relationship. -It should also be noted that phy_power_on and phy_power_off performs -phy_pm_runtime_get_sync and phy_pm_runtime_put respectively. -There are exported APIs like phy_pm_runtime_get, phy_pm_runtime_get_sync, -phy_pm_runtime_put, phy_pm_runtime_put_sync, phy_pm_runtime_allow and -phy_pm_runtime_forbid for performing PM operations. +This subsystem deploys runtime PM support. More precisely, calls to +pm_runtime_get_sync() and to pm_runtime_put() surrounds calls to the phy +provider callbacks, ->init|exit(), in phy_init|exit(). At phy_power_on(), the +runtime PM usage count is raised again, via pm_runtime_get_sync(). The usage +count remain raised, until the internal phy power on count reaches zero in +phy_power_off(), at which point pm_runtime_put() is called to restore the +runtime PM usage count. In this way, the device is guranteed to stay runtime +resumed as long as the phy is powered on. + +In regards to the runtime PM deployment in the phy core, it should also be +noted that it's deployed for the phy provider device, which is the parent of +the phy child device. In other words, the phy device created by the phy core +remains runtime PM disabled. Of course, whether runtime PM is really used or +not, depends on whether the phy provider driver has enabled runtime PM for its +provider device. More exactly, pm_runtime_enable() needs to be called prior +calling phy_create() or devm_phy_create(). PHY Mappings ============ -- 2.7.4 prev parent reply other threads:[~2017-12-20 14:09 UTC|newest] Thread overview: 26+ messages / expand[flat|nested] mbox.gz Atom feed top 2017-12-20 14:09 [PATCH v2 0/3] phy: core: Re-work runtime PM deployment and fix an issue Ulf Hansson 2017-12-20 14:09 ` [PATCH v2 1/3] phy: core: Move runtime PM reference counting to the parent device Ulf Hansson 2017-12-21 1:39 ` Rafael J. Wysocki 2017-12-21 10:50 ` Ulf Hansson 2017-12-23 1:35 ` Rafael J. Wysocki 2017-12-23 1:50 ` Rafael J. Wysocki 2017-12-23 12:37 ` Ulf Hansson 2017-12-23 12:47 ` Rafael J. Wysocki 2017-12-23 12:39 ` Rafael J. Wysocki 2017-12-23 15:09 ` Ulf Hansson 2017-12-24 12:00 ` Rafael J. Wysocki 2018-01-02 13:28 ` Ulf Hansson 2017-12-20 14:09 ` [PATCH v2 2/3] phy: core: Drop unused runtime PM APIs Ulf Hansson 2017-12-21 10:33 ` Yoshihiro Shimoda 2017-12-21 10:33 ` Yoshihiro Shimoda 2017-12-21 10:57 ` Ulf Hansson 2017-12-21 10:57 ` Ulf Hansson 2017-12-21 12:24 ` Yoshihiro Shimoda 2017-12-21 12:24 ` Yoshihiro Shimoda 2017-12-21 14:23 ` Ulf Hansson 2017-12-21 14:23 ` Ulf Hansson 2017-12-23 9:55 ` kbuild test robot 2017-12-23 9:55 ` kbuild test robot 2017-12-23 10:08 ` kbuild test robot 2017-12-23 10:08 ` kbuild test robot 2017-12-20 14:09 ` Ulf Hansson [this message] Reply instructions: You may reply publicly to this message via plain-text email using any one of the following methods: * Save the following mbox file, import it into your mail client, and reply-to-all from there: mbox Avoid top-posting and favor interleaved quoting: https://en.wikipedia.org/wiki/Posting_style#Interleaved_style * Reply using the --to, --cc, and --in-reply-to switches of git-send-email(1): git send-email \ --firstname.lastname@example.org \ --email@example.com \ --firstname.lastname@example.org \ --email@example.com \ --firstname.lastname@example.org \ --email@example.com \ --firstname.lastname@example.org \ --email@example.com \ --firstname.lastname@example.org \ --email@example.com \ --firstname.lastname@example.org \ --subject='Re: [PATCH v2 3/3] phy: core: Update the runtime PM section in the docs to reflect changes' \ /path/to/YOUR_REPLY https://kernel.org/pub/software/scm/git/docs/git-send-email.html * If your mail client supports setting the In-Reply-To header via mailto: links, try the mailto: link This is an external index of several public inboxes, see mirroring instructions on how to clone and mirror all data and code used by this external index.
OPCFW_CODE
Recognize Handwritten Digits Using MNIST Data Set on Android Device This example shows you how to recognize images of handwritten digits captured on your Android® device using Simulink® Support Package for Android Devices. On deployment, the Simulink model in this example builds an Android application on the device. Using the camera of the device to capture an image of any digit from 0 to 9, the application recognizes the digit and then outputs a label for the digit along with the prediction probability. This example uses the pretrained network, originalMNIST.mat, for prediction. The network has been trained using the Modified National Institute of Standards and Technology database (MNIST) data set. MNIST is a commonly used data set in the field of neural networks. This data set comprises of 60,000 training and 10,000 testing greyscale images for machine learning models. Each image is of 28-by-28 pixel. Complete the Getting Started with Android Devices example. Step 1: Configure Digit Classification Model 1. Open the androidDigitClassification Simulink model. 2. On the Modeling tab, select Model Settings to open the Configuration Parameters dialog box. 3. In the Configuration Parameters dialog box, select Hardware Implementation. Verify that the Hardware board parameter is set to 4. Go to Hardware board settings > Target hardware resources > Groups and select Device options. 5. From the Device list, select your Android device. If your device is not listed, click Refresh. Note: If your device is not listed even after clicking Refresh, ensure that you have enabled the USB debugging option on your device. To enable USB debugging, enter androidhwsetup in the MATLAB® Command Window and follow the onscreen instructions. Step 2: Predict Handwritten Digit on Android Device 1. On the Hardware tab of the Simulink model, in the Mode section, select Run on board and then click Build, Deploy & Start. This action builds, downloads, and runs the model as a standalone application on the Android device. The application continues to run even if the device is disconnected from the computer. 2. The application opens the device camera. You will see a region of interest (ROI) marked as a red box inside the camera frame. Only the image inside the ROI is used for prediction. 3. Draw a digit on a white board. 4. Capture the digit in the camera frame of your device. Ensure that the digit is enclosed inside the ROI. On capturing the digit, the algorithm processes the image as explained here. a. The Camera block accepts the digit captured using the camera of your Android device. The image obtained is of size 640x480. The image is passed to the Concatenate block to perform multidimensional concatenation of R, G, and B pixels. The Draw Region of Interest and Digit Predictor subsystems accept the image and ROI as inputs. Note: For a change in camera resolution, ensure to update the size and position of the Region of Interest Constant block to place the red rectangle within the camera frame. b. The Draw Region of Interest subsystem draws the ROI starting from (120,240) to (200,240) pixels. To draw the ROI, this image is converted to single and then converted back to RGB. c. In the Digit Predictor subsystem, the RGB2bin block converts the image into its binary equivalent and then extracts the ROI from the input image. The block complements the image and resizes the image to 28-by-28 pixel. The 28-by-28 image is then passed to the Extract Image Features block to extract the Histogram of Oriented Gradients (HOG) features. The extracted features are passed to the Predict Digit block. The block loads the compact trained model, originalMNIST.mat, to predict the digit from the extracted features. For information on how the originalMNIST.mat is trained, see Digit Classification Using HOG Features on MNIST Database. The predicted output is then given to the Data Display, Predicted Digit and Confidence(0-1) blocks to display the predicted digit along with the probability of the prediction.
OPCFW_CODE
using System.Net.Http; using Easypost.Internal; using Newtonsoft.Json; namespace EasyPost.Model { /// <summary> /// Parcel objects represent the physical container being shipped. /// Please provide either the length, width, and height dimensions, or a predefined package. /// </summary> public class Parcel : EasyPostBase, IEncodable { [JsonProperty("weight")] public decimal WeightOunces { get; set; } [JsonProperty("height")] public decimal? HeightInches { get; set; } [JsonProperty("width")] public decimal? WidthInches { get; set; } [JsonProperty("length")] public decimal? LengthInches { get; set; } [JsonProperty("predefined_package")] public ParcelType? PredefinedPackage { get; set; } public FormUrlEncodedContent AsFormUrlEncodedContent() { var collection = new CollectionBuilder().AddParcel("parcel", this); return collection.AsFormUrlEncodedContent(); } } /// <summary> /// Predefined package sizes for USPS, UPS and FedEx /// </summary> public enum ParcelType { // USPS Card, Letter, Flat, Parcel, LargeParcel, IrregularParcel, FlatRateEnvelope, FlatRateLegalEnvelope, FlatRatePaddedEnvelope, FlatRateGiftCardEnvelope, FlatRateWindowEnvelope, FlatRateCardboardEnvelope, SmallFlatRateEnvelope, SmallFlatRateBox, MediumFlatRateBox, LargeFlatRateBox, RegionalRateBoxA, RegionalRateBoxB, RegionalRateBoxC, LargeFlatRateBoardGameBox, // UPS UPSLetter, UPSExpressBox, UPS25kgBox, UPS10kgBox, Tube, Pak, Pallet, SmallExpressBox, MediumExpressBox, LargeExpressBox, // FedEx FedExEnvelope, FedExBox, FedExPak, FedExTube, FedEx10kgBox, FedEx25kg, } }
STACK_EDU
Has SO become the trash dump of Programmers? The specific question certainly isn't... stellar, but you are over-reacting. In fact, of the 258 questions we've migrated to SO (in the last 90 days), only 13% have been rejected. Conversely, the numbers aren't as good the other way around. We've rejected 16% of the questions SO has send us (but you don't see us ... I am not sure that this is a situation that can be settled authoritatively or not. It is clearly a contreversial subject as it has been closed, reopened, and then closed again by the community. Even after Yannis expressed his opinion on the question and cleared the previous close votes, it still received 5 more votes to close it. As of right now there are ... Migration for questions older than 60 days is disabled (even for mods). It's not that we don't want to move it, we can't. If you feel it's a good question for the Workplace, you should re-ask it there (but please search for existing duplicates first). I'm Tim Post, a community manager for Stack Exchange. I'd like to take a few moments to answer this, and explain a bit about migration paths. Migration paths between sites are something that we're very cautious to establish, we put them in place only when we're certain that the conduit will: Help ease confusion for users that ask a question on a site that ... Close voters at Programmers would better memorize the criteria for debugging questions to be acceptable for Stack Overflow: Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Questions without a clear ... (special edition for folks coming from Math.SE, reposted from Math meta) Hint: Software Engineering Stack Exchange doesn't do coding help and expect research before asking Sometimes, we at Software Engineering get a stream of troublesome questions from folks with linked accounts at Math.SE. One of them was kind enough to explain why they get there: Software Recommendations moderator here. We're technically a non-beta site now - which means moderators aren't discouraged from migrating questions to us any more. I would be against adding SR to Programmers' migration target list. Not that I don't trust y'all, but we have some really strict requirements for questions. That said, if you take the time to ... I'd like (and have asked for) mods on both sites to chime in, but this isn't looking like a good idea at this point. Stats are pretty clear, 52% of all moderator initiated migrations from PSE to The Workplace (as obtained on The Workplace) have been rejected. That's moderator initiated, which often entails a bit of collaboration and sanity checking with ... Please note that there is now an official policy against migrating questions older than 60 days to another site: Disable migration for questions older than 60 days If the question is off topic for us then it should be closed. However, note that QA questions aren't completely off topic for us, so there's no need to do anything with these. If a question has ... I'm with Tim on this. We haven't had a good record when it's come to migrations to The Workplace and opening it up to more people is only going to make it worse. Part of the problem is that the questions that could be sent really aren't good questions full stop. This is actually a problem with most sites on the network. People see the topic and think "that'... I would like to present this chart from Checklist for how to write a good Code Review question first: As you can see, the metaphorical distance in scope between Programmers and Code Review is significant. From a CR regular standpoint, I can say for myself that we do get, on occasion, a question which is off-topic at CR but could be on-topic at Programmers (... SQA Stack Exchange is a beta site, which means that there's no guarantee it will stay around, and until the site graduates this is a non issue. We do mention one beta site in our FAQ, The Workplace, but for questions that are extremely off topic on Programmers. If SQA graduates, we'll have to examine whether we want software testing questions on Programmers ... Migrating to beta sites and sites that are not in our migration target list Showing the full list of sites in the off topic dialog has been asked for a ton of times, and has been declined each time. maybe one that is dependant on reputation, only people who have over 10000 rep can see beta sites or a comment block for where else it should go Well, no, ... There's not much that we can do until The Workplace is in at least Public Beta. Unless they explicitly ask for a question from us, we probably won't be migrating anything until the community on The Workplace has a good definition of what a good question is and a nice description of what is on-topic (and what is off-topic). Until then, I'd be careful about ... Given that at least some kinds of testing questions are legitimately of interest to programmers, I think a blanket policy of migrating test questions to SQA would rob the Programmers site of valuable content. That said, it may make sense to migrate to SQA if there are no productive responses on Programmers. I just thought of something novel and decided to provide a competing answer to my own question. What if we restricted the ability of an SO user to vote for migration to Programmers to only those that: Have an account on Programmers Have 300 reputation or more on Programmers? This way the person voting for migration may be significantly more likely to be ... Please take this up with the Stack Overflow moderators. There's nothing we can do at this end. After further reflection I don't think that that specific question should be migrated. It's old and the question isn't sufficiently unique to software developers for it to be 100% on topic here. Search Programmers for any similar questions and if there ... Flag it for moderator attention - but still vote to close as "off topic". We'll double check with The Workplace mods and if they agree we can migrate it. If not we'll simply close as Off Topic. This has to be the policy for beta sites as there is always the chance that the beta site might be closed down. Your question was migrated from Programmers by a Programmers moderator. Their reputation on the target site is irrelevant (to the migration process). As for the migration itself, your question is off topic on Programmers. Disabling the "red spell check squigglies" on Outlook isn't really what I'd call a "conceptual question on software development". If you ... It's standard policy to not have beta sites as migration targets. This is to allow the site to develop by itself during beta and define a solid community. If there are exceptional questions that are off-topic here and on-topic there, moderators can migrate to any site on the network, so you can flag it. I wouldn't expect too many migrations, though.
OPCFW_CODE
Like most Microsofties, I flirt with information overload almost on a daily basis. One of the things that you must learn how to do when you start at Microsoft is how to determine what information is relevant and important, what's relevant but not necessarily something you need to deal with right now, what's informational and what's noise. The most common information source is almost always email. It's drastically reduced the cost and increased the ease of communication. But this a double-edged sword. You get the information you need but you also get a lot of noise. How you manage the email you receive is key to not drowning in information overload. For me, that's where categories, inbox rules and tasks come in. I get lots of mail. On a daily basis I typically get about 700-800 messages. Luckily that number consists of various types of mail that range from stuff I need to look at right away to things that are simply discussions that might be interesting but I don't need to see. Like pretty much everyone else, I use inbox rules to route email to folders rather than my inbox when I don't need to see that mail right away (I also classify my folders into Exchange related, business related and 'other'). That deals with about 80% of the mail I get. For mail that I do need to see and triage directly (sent either to me or to groups that are highly relevant), I don't redirect to folders but rather leave it in my inbox and use categories to color-code those messages so I can easily triage that mail. Finally for mail that I've read and need to take action on, I use tasks to ensure that I don't lose track of them. Most people I know use inbox rules and tasks quite extensively so I'm not going to talk about them much. Rules pre-triage mail by sorting it into folders you specify (among other things) and tasks enable you to keep track of things you need to do. However they don't help you with the mail that's sitting in your Inbox waiting to be triaged by you. That's where categories come in. I'm writing this post because very few people I know use categories. They're immensely useful for helping to triage the mail that I receive. Categories in Outlook 2007 are essentially (they way I think about it) a color-coding and tagging system to help you organize messages. You can use categories: - as a visual cue to easily and quickly determine who messages are from or to, what project they belong to, whether they're personal or business related or anything else you feel is important to you - to perform quick searches for all messages assigned a given category - to set up dedicated search folders that auto-populate with the categories you specify In my mail I have configured my categories as follows: - from my manager - I'm in the To or Cc line - I'm the only recipient - they're personal - they're part of a product specification review - to my team's distribution group - to the Exchange product group distribution group - to the User Education (my group) distribution group - a few other things As messages arrive in my mailbox, Inbox rules run to not only determine which folder a messages should be placed in but also to determine whether messages should be given a category. Below is an example of an Inbox rule that categorizes messages based on a rule condition (click on the image for the full size version): The rule looks to see whether my name appears on the To line of the message and if so, assigns the corresponding category. The screen shot below shows the categories I've defined (click on the image for the full size version). Sorry for the blurred text - the categories contain internal group and employee names. But they line up to the bulleted list above. As you can see, the "I'm in the To line" category matches up to the same value in the inbox rule screen shot. No matter what folder a message gets placed into, if my name appears on the To line Outlook 2007 will place that category on the message. What's better is that messages can be assigned multiple categories. For example, if my manager sends a message to our team distribution group, that message will be assigned both the category for messages from my boss (red) and for messages sent to my team (purple). The visual cue will contain both colors and the message will have both tags so I can search on them. Outlook will show up to three categories in the 'mini' category view that I use but if you use the expanded view by increasing the width of the message list in Outlook, all of the categories will be visible. The screen shot below shows messages in my Deleted Items (my Inbox isn't very full since it's the weekend and I keep it very tidy) with categories assigned to them (click on the image for the full size version). Again, sorry for the blurred text. This is what my Inbox would look like on a weekday. Messages that make it here are given categories based on who sent them, where they were sent and what their content is. The level of blue (dark, medium and light) shows whether I'm the only recipient, on the To line, or the Cc line. At Microsoft how you address someone is quite important. Many people, including myself, will prioritize messages sent only to them and where they're on the To line higher than those where they're in the Cc line. This allows me at a quick glance to focus on those messages. Messages in red are from my manager so those are generally highest priority :). I assign other colors relative priorities as they relate to various groups. Basically, the closer to my group or team or feature area, the higher the priority. Messages that aren't assigned a category are typically general messages that aren't directly to me and don't relate directly to my product, group or team. These typically get read last. For messages that are sent to other folders, categories are still applied although not as often as I may not be that active in a given distribution group. However, if I do participate in a group, replies to my messages will generally have my name somewhere in the header. This will trigger the appropriate category to be applied to those messages. This enables me to reply to a group, even a high-volume one, and easily scan it later for messages that may be a reply to my own. Saves lots of time if I want to participate not only in a given discussion, but a sub-branch of that discussion while ignoring the rest. Categories aren't only applied to messages. They're also applied to calendar items, notes, contacts and tasks. The same principle applies although rather than priorities (because if i accept meeting or task requests I've committed to them), they're simply a visual cue of the scope of the meeting or task. Will they include my manager, team, group, etc? Who organized it? Below is an example of a week in my calendar (click on the image for the full size version): Notice that as with messages, multiple categories can be applied to calendar items. I've set it up so that if I've specified that a category be applied if a certain sender sends the message, the meeting request is the color of their category. Red is assigned to meetings arranged by my manager for example. Other items are colored based on who the request was sent to or if, in the case of my manager sending one, the recipient category shows in the bottom right). Other items are based on their content (green ones are for spec reviews and high priority). In addition to simple visual cues, Outlook 2007 enables you to search specific folders or your entire mailbox for messages that are assigned to specified categories. To search for a category, select the Search field, and click the down chevron next to the text box. From the Add Criteria list, select Categories. Select the category you want to search on and click the search icon. You can also configure search folders based on categories. Whenever you open that search folder, Outlook 2007 will automatically populate that folder with all of the messages that match that category. To create a category based search folder, click the icon in the main Outlook 2007 window. Select the Create Category Search Folder option. Make sure Categorized Mail is selected, click the Choose button and select the category or categories you want to apply. Click OK, select where you want the search folder to look for messages, then click OK. If you want to set up categories for yourself (and I hope you do :), the first thing you should do is determine what categories you want and add them. Click on the icon in the Outlook 2007 main window's toolbar and select All Categories. This will open the Color Categories window. From here you can customize the default categories and add new ones. Once you're happy with your categories, save them. Now you can assign categories manually by right-clicking on the little square next to the message date in the message list (see the Outlook main window screen shot above) and selecting the category you want. If you want to assign more than one, just repeat it the process. You can also single-left click the category square on a message. If you do this, Outlook will assign the default category to the message (you can configure the default category by right-clicking on the icon and selecting Set Quick Click). If you want to use inbox rules to assign categories automatically to your messages, create a new inbox rule using the Rules Wizard (Tools -> Rules and Alerts). Create a new rule, select the conditions you want to use (such as who a message is from or sent to, subject or message body key words, etc) and then select the "assign it to the category" action. Once you save the rule, all new messages that meet the rule's conditions will be assigned the new category. For more information about categories in Outlook 2007, see the following links:
OPCFW_CODE
package com.trickl.oanda.model.bindings; import com.trickl.model.oanda.instrument.Candlestick; import com.trickl.model.oanda.instrument.CandlestickData; import com.trickl.model.pricing.primitives.Candle; import com.trickl.oanda.model.exceptions.ConversionFailureException; import java.util.function.Function; public class CandleReader implements Function<Candlestick, Candle> { @Override public Candle apply(Candlestick candle) { CandlestickData ohlc = null; if (candle.getMid() != null) { ohlc = candle.getMid(); } else if (candle.getBid() != null) { ohlc = candle.getBid(); } else if (candle.getAsk() != null) { ohlc = candle.getAsk(); } if (ohlc == null) { throw new ConversionFailureException("Candle missing mid/bid/ask."); } return Candle.builder().time( candle.getTime()).open(ohlc.getOpen()).high(ohlc.getHigh()).low(ohlc.getLow()) .close(ohlc.getClose()).complete(candle.getComplete()).build(); } }
STACK_EDU
The purpose is to compare the fractional snow cover of the coarser pixel to the binary snow cover (0 for no-snow and 1 for snow) of lower resolution pixels. Both the datasets are of the same area and share the same WGS84 coordinates. Below are two of the methods that I am thinking of: As I understand random point sampling is independent of resolution in ArcGIS. So, I have two rasters one with coarser resolution say in kilometers and the other one is of finer resolution in meters. I create random points for each of the rasters. Then I run the "Extract multi values to point" tool to extract the pixel values of the attributes to these points. Please note that the attributes has to do with fractional area coverage in percentage (0-100%) in a pixel for one of the rasters, whereas the other raster which is of 500m resolution has binary/discrete values i.e. 0 /1 . The goal is to do a comparison between the point values based on fractional area coverage/percentage of each cell in the coarser raster to the binary data finer resolution raster. So for example on a given day, the fractional covered area is 96% for a particular coarser pixel. As I know the coarser pixel has approximately 165 pixels, and so for that given day 60 of the finer resolution pixels have value 1. Hence, I add them and divide it by 165, basically ((60/165)*100)= 36.36%. I can then compare the two fractional area values i.e. 96% with 36.36% So far I have found that there are 165 points representing the smaller pixels that lie in a single coarser cell. My question is somewhat similar to this question with the only difference that temporal resolution is same. Will this be a valid comparison between the two sets of random point sampling, even though the resolution of the rasters is quite different? I find the number of pixels in the coarser pixel, which is approximately 165, and then for those 165 pixels I add the binary one values of these pixels and divide them by total number of pixels which is 165, this might give me fractional area coverage of snow. I can then compare the fractional area coverage of the coarser pixel with the fractional area coverage of the 165 pixels. But even in this case, I am noticing that some of the pixels lie right on the boundary line of the coarser pixel. The whole purpose is to comparison in such a way that one resampling of resolution can be avoided.
OPCFW_CODE
Does shrouding a propeller minimize induced drag by equalizing the downwash velocity along its blades? EDIT: It's not a duplicate of Are ducted fans more efficient? That question and the answers doesn't address the reason for the higher theoretical efficiency, it is more about efficiency in practice (drag on the duct, weight, etc) and hence why they aren't used despite the higher theoretical efficiency I was told shrouded propellers are more efficient becuase tip vorticies are eliminated by the wall which would imply no induced drag but apparently that is wrong Do ducted fans eliminate induced drag? therefore I've been trying to figure out why they are still more efficient than an open propeller despite still having induced drag. It must have less induced drag. The vortex around an unshrouded propeller: At first I thought the wall somehow increases the effective wingspan, moving the vorticies to the top of the wall much like winglets do and thus reducing the induced drag that way However unlike winglets the walls don't have a pressure difference on either side (it's not an airfoil) therefore the vortex can't be there. So the vortex must be around the whole wall because above the wall the pressure is low and below it the pressure is high. But this doesn't change the effective wingspan so why does it have less induced drag? My explanation is that the existence of the wall causes the inside of the vortex to "straighten out" with the flow through the duct which makes the downwash velocity constant along the length of the blades (since the flow is irrotational). This means that this condition is satisfied: from page 7 of http://naca.central.cranfield.ac.uk/reports/1923/naca-report-121.pdf Therefore induced drag is minimized. Is this correct? @CrossRoads In this question I clearly say that I thought the explanation was what I wrote in that other question (lower induced drag due to increased effective wingspan) but now I think it's something else (lower induced drag due to more elliptical lift distribution) @CrossRoads H O V E R The approximate speed that a ducted fan is no longer efficient is about 100mph. So it depends on the specific flight conditions and aircraft configuration. @jwzumwalt Not doubting your statement, but is there any information citing the 100mph efficiency limit? I would be interested in learning about the physics behind this. Interest peaked on ducted fans in the late 70's and early 80's. I got my information from the EAA "Sport Aviation" magazine (Aug 68, Jun 73, etc) which ran several authoritative articles that included engineers such as Molt Taylor. He considered it for the Aero Car and Mini-IMP but found it impractical. Some articles included wind tunnel testing and drag analysis. It shouldn't be to hard to find with a Google Search. Peter Kemp on this forum is one of the more prominent aerodynamist and would also be a good source - he loves math and probably has it somewhere in his library. @CrossRoads I'm asking about induced drag, the drag due to lift. Mentioning possible parasitic drag on the duct wall from forward flight is irrelevant even if this wasn't about a craft in hover. @fooot That answer just says "yes they are in theory but there are some practical reasons why they aren't used" This discussion is about exactly why they are theoretically more efficient Basically, yes. The difference between shrouded and unshrouded propeller is that the shrouded one can produce uniform thrust across the diameter, while for the unshrouded one the thrust decreases near the tips. That way a shrouded propeller accelerates more air than an unshrouded one of the same diameter. This air therefore needs to be accelerated to lower speed, and therefore carries away less kinetic energy, requiring less induced power¹. However, diameter can be varied, so the efficiency comparison is not that straightforward. When the propeller spins relatively slowly, making it larger is better, similarly to how increasing wing span is better, aerodynamically, than adding winglets. However increasing the speed of the tips increases parasite drag, especially if it becomes supersonic. And since increasing size while maintaining angular speed increases the orbital speed of the tips, increasing size only helps to a certain point. That's when shrouds become useful. ¹ In propellers and rotors it is called induced power rather than induced drag, because it counts directly against the engine power. It also describes the physics better, since in both cases it is the work that is done on the air by the reaction to the generated lift/thrust. @Сократ, please, be so kind and ask a question. Comments are not suitable for this kind of discussion. I suppose the confusion may be due to different sources using different definitions of induced drag. As used most commonly in my experience induced drag is the energy wasted as non-useful work in the creation of turbulence directly attributable to the generation of lift. Most noticeably in the wingtip vortex. In other words imparting flow in directions other than the ideal direction; in keeping with the photo-quote, an infinitely long completely uniform wing would have on lateral pressure gradient and thus no sidewards flow, nor the vortex formed from sideways flow.(It is not the vortex that causes drag, the vortex is just a symptom of the lateral pressure gradient.) A close shroud, possibly even attached to propeller tips, of sufficient width will stop these vortex. However it may not stop all lateral flow because of the helical flow of the propwash. The helical flow can be reduced with static blades much like those used in axial flow compressors on gas-turbine engines. The shroud does increase add parasitic and form drag but it most definitely does reduce induced drag. The first drawing in the Question with the un-shrouded prop, has the wrong perspective or axis so the vortex is in the wrong spacial plane; the second drawing has the vortex flowing through a solid wall and the prop again has the wrong axis. The third drawing is not a vortex it is the bulk flow encountered with a stationary fan in an enclosed room.(and the prop has the wrong rotational axis) Induced drag has only one definition: the work done on the air to produce lift. Most of this work is imparting the flow the desired direction, and is therefore useful. You correctly note that the vortex is just a symptom, not cause, but it is mainly result of the lift generation itself, not the sideways flow. Induced drag is mainly proportional to lift to span ratio. An infinite wing only has no induced drag if you only let it produce finite total drag, which means zero lift per unit of span. If you let it produce non-zero lift per unit of span, it will have induced drag too. Induced drag is a term used well beyond aircraft design, so I'm only using "lift" as an example familiar to the aviation field but other forms of dynamic fluid reaction could be applied. A wing with no sideways flow produces no vortex yet still produces lift. Wingspan ratio reduces induced drag by reducing the lateral pressure gradient and by extension induces less lateral/sideways flow. I suppose I could narrow the term to three possible definitions: 1, total drag created from the creation of useful lift plus the energy wasted on non-ideal flow as a result of lift; 2, just the portion of drag attributed to the energy wasted on non-ideal flow as a result of lift; 3, only the portion of drag attributable to lift produced by an ideal-flow. Everybody except you agrees that induced drag is what you have under option 1. No, the vortices are trapped in the tip clearance. What if there's no "outside" at all to your setup? Imagine a theoretical scenario where the entire space outside the duct is solid to $\infty$. Where are the vortices now? They could only be within the tip clearance. I just realized this paper from another question is the perfect answer to this question. And your assumption of shroud somehow making the downwash uniform is also wrong. Note that although the drawing in the statement of the problem or this paper is for a two-blade shrouded fan, even a fan with a very high solidity factor, e.g. 0.8~0.9, as used in high bypass-ratio turbofans, does not equalize fan wake, and that equalization happens only due to shear friction between the infinitesimal pockets of air themselves. Found this CFD of a turbofan's fan. What if there is zero tip clearance, because the shroud is attached to the blades and rotates with them? And you can combine both cases and consider a pipe with rotating section with blades in it... @JanHudec then the wing and shroud should be seen as a wing. There is atmospheric pressure outside the wall and higher pressure inside the wall, of course there is going to be flow around the wall. If it were only within the tip clearance then induced drag would limit to zero as tip clearance limits to zero but it doesn't, it limits to the induced drag on an elliptical wing. As to your second claim that downwash velocity isn't constant, we are talking about a perfect irrotational fluid going through an actuator disk.
STACK_EXCHANGE
Review the recommendations in this document for general configuration changes for performance. General Performance Recommendations - Ensure that the your server is upgraded to latest version to take advantage of performance fixes. - If your version of WCM has cumulative iFixes available, then its recommended to be on the latest cumulative iFix. - Apply recommended Performance and Syndication fixes (using the latest version of the WebSphere Portal Update Installer) - For 220.127.116.11: - For 6.0.1 & 18.104.22.168: - For 22.214.171.124 & 126.96.36.199: - For 6.1: - Ensure that the JCR database (that holds the Web Content Management data) is properly tuned. See the 'Database Tuning' section for more information. - Note: That database tuning should be repeated on a periodic - Ensure that the LDAP server is properly tuned. See the Tuning' section for more information. - Review the 'Tuning' section of the Information Center. - Ensure that the WebSphere Portal server is properly tuned. See the 'Official Portal Tuning Guide' for more information. - Ensure that the Web Content Management server is properly tuned. See the 'Official Workplace Web Content Management Tuning Guide' for more information - Actively resolve all errors (security, missing components etc) found within the WebSphere Portal logs - Ensure that the Web Content Management Application is optimised. See the ‘WCM Application Tuning’ section for more information - Ensure that the 'resourceserver.maxCacheObjectSize' configuration setting in the WCMConfigService.properties is set to 300 to reduce memory utilization within the resource cache thus avoiding memory errors - Ensure that default library setting (defaultLibrary) in the WCMConfigService.properties is correct - If WCM is setup within a cluster, make sure that Dynacache Replication is enabled. See this techNote http://www-01.ibm.com/support/docview.wss?uid=swg21304020 for detailed steps - Periodically run the History Cropper module to improve document load and save performance. See Clearing item history for more information Rendering Performance Recommendations If your site has many images on it, consider using Edge Server to cache images and files. This requires Web Content Management iFix PK47108 PLUS setting the ‘resourceserver.browserCacheMaxAge’ setting in WCMConfigService.properties (which might not be present in your file) to greater than 10 minutes (such as 1200s for twenty minutes) - For sites that have content rendered as anonymous, install WPS iFix PK56304 which works under all current 6.0.1.x versions - Ensure that the Web Server and WebSphere Portal Server are properly configured to handle the amount of concurrent users < li>For the Web Server, check the documentation of your chosen server to identity the name (and location) of its MaxClients / MaxThreads setting - For your WebSphere Portal Server, check the WebContainer’s Maximum Threads count in Application Servers>WebSphere_Portal>Thread Pools>WebContainer>Maximum Size from the WebSpher e Administration Console (default is 50 threads) - Use Pre-Rendering, Servlet-Caching (eg. Dynacache) or Web Content Management Basic Caching where possible to speed up the rendering of static content. - For Portlet-based rendering where Basic Caching can’t be used, use Web Content Management Advanced Caching (set to SITE) instead (it will provide the same result) - Note: This is due to the WCM Local Rendering communicating to the WCM Server at a layer below where the 'Basic Cache' functions - See the 'Caching' section for more information on caching - If you have some pages with many portlets on them and your are already caching the content of those portlets, then consider caching Portal Pages as well (requires Edge Server) - If using the Web Content Management AuthorTime Search OR PDM Search (Portal Search does not apply) then disable the JCR Search - Go to [WPS_ROOT]\jcr\lib\com\ibm\icm\icm.properties - Set ‘jcr.textsearch.enabled’ to false - You will need to restart the server for the changes to using Portal Search, consider using a dedicated server for the crawler and indexer and disable the local Search crawler (from within the Portal - If you are not using Portal Search, then disable the Search - If you have WCM set up in a cluster - set the following JVM properties for DRS: - set the WCM caches (prefixed by iwk) to NOT SHARED. This is set by default in 6.014, but not in earlier versions This is needed for WAS 6.0,x 6.1x and 7.x - Consider taking a periodic cut of the homepage (using WGet for linux or similar) and pointing the main domain (www.yourco.com) to that static html page - This may require pulling out any dynamic elements and aggregating them in the browser via iFrames or Ajax. Authoring Performance Recommendations - Where many menus/navigators on a page can’t be avoided, avoid viewing those pages to verify new content has been added, instead utilise dummy pages with a max of 1 or 2 menus/navig ators on them - If versioning isn't required, then disabling it (via configuration settings or within the authoring template in 6.1) can improve save times - Not having too many fields on the authoring form will also help improve save times - Also consider reviewing any custom authoring fields and/or custom workflow actions, as badly performing fields/workflow-actions can also negatively impact authoring performance Things to avoid - Don’t use a Cloudscape/Derby data repository in Production. - Don’t Syndicate ‘All Items’ unless necessary, use ‘All Live Items’ instead. - Don’t mix cacheable and non-cacheable items on the same - Don’t Cache personalized content. Content Management Performance Checklist Recommendations for Web Content Management Content Management Caching and Pre-rendering
OPCFW_CODE
I'm a bit suspicious of positioning. My instincts tell me that positioning should be defined in css. Putting it in your js could cause confusing mixing. Also, it seems to preclude graceful degradation. The gut will tell you a lot of things should go in CSS, but when you ask CSS it says "I SUCK". My instincts tell me that positioning should be defined in css. Vertically aligning text with css is entirely possible, although sometimes tricky. Most of the time, line-height will suffice. But you kind of missed of point. CSS is for styling beyond the browsers defaults. JS is for behaviours. Mixing these things up makes debugging and maintainance difficult. But you kind of missed of point. CSS is for styling beyond the browsers defaults. JS is for behaviours. And tables should never be used for layout. Yeah, I know. But those distinctions are only absolute in an ideal world. But in an ideal world, I wouldn't need to code HTML stuff to earn a living anyway. In the real world, I tend to go with a more pragmatic point of vue. Mixing these things up makes debugging and maintainance difficult. A simple JS code is easier to debug and maintain that a complex CSS construction. Sorry dude, but I strongly disagree on all of your points. Indeed, tables should not be used for layout. In 2010 a good coder simply won't have to use tables for layout. There is no layout problem that can't be solved with good css styling. Also, a coder worth paying should be able to debug complex css in reasonable time. Besides, css should be the first place a coder will look for styling information. Because that's where styling should come from. Also, if you wanted to keep your js simple, you wouldn't be including styling in it. Did anything change compared to 2002? Did IE6 improve? Also, if CSS is for styling (as its name implies), why the hell would it be better than JS or tables for layout? I'm not sure I understand your latter question. Do you think that there is a significant difference between layout and styling? I'll answer your questions anyway... CSS is better then tables for layout because when properly done you will save on bandwidth costs, have a faster loading page, have a faster rendering page, you can centralize your layout code, have a higher degree of human accessibility and have a higher degree of bot accessibility (SEO). Thats a whole bunch of important things. And thats just the ones I can think of off the top of my head. Oh yeah - and CSS is also better for layout then tables because tables were not designed with page layout in mind. CSS was. Layout design is built in to CSS. Layout in external CSS is better then layout in JS because of maintainability/debugging and graceful degradation: if your page layout relies on js then you are doing it wrong. (I'm also assuming that you are using unobtrusive js here. Layout in inline js would be another level of sinfulness) Regarding your first question: no, ie6 hasn't improved. So what? Again, a good coder can make most layout solutions work across all major browsers, including ie6. At the very least, ie6 layout issues can be dealt with by an ie6 targeted stylesheet delivered via a conditional comment. So yes - things have changed since 2002. Coders got better. I'm not sure what elements vertical-align is supposed to work for, but 95% of the time this does nothing at all. line-height and position:relative are usually what I end up using. Someone may correct me here (and I'd be happy if they did), but I think the vertical-align property only applies to inline elements, and only relative to other inline elements in the same line. Say, to position a small icon inline with your text. Set the parent as: and then the vertical-align will work properly. It is a little hacky? Sure, but no JS. then the vertical-align will work properly. ...in some browsers. Let's not forget that vertical-align sucks donkey balls because every browser has a slightly different interpretation of bottom, baseline, middle and top because of how they render fonts differently... As for the "fancy buttons" can't this be done in CSS? I've never tried assigning a hover value to a button, but if I remember correctly everything past IE6 supports hover on any element, not just anchor. Seems really unnecessary, as the only thing you would lose with CSS is rounded corners in IE. It's all CSS yes, but it means you don't have to make/test them. The button is honestly the worst example, the other form elements are more interesting. The radio buttons for instance are very nicely done; styling them yourself is quite unintuitive since you have to use labels and hide the actual input elements. Agreed, trying to style radio buttons is a major pain in the ass. No, you cannot do fancy buttons all in CSS. You can do a hover effect, but part of the "feel" of a button is that you can mousedown on it so it activates, then mouseout so it pops back up, and then when you mouse back over it pops back down. The jQuery UI button does this the same way an <input type="button"/> does, anything you try with :hover and :active will not perfectly equate. Also, try making a toggle button with all CSS... not possible either. Maybe it's just me but jQuery UI smells bad. It's very un-jQuery-like in the sense that it's quite heavyweight. Rolling a theme for just 1-2 widgets seems like overkill. jQuery Tools just seems like a much more lightweight and robust alternative. If you want to put one widget on your page, it might not be your cup of tea, especially if you are an efficiency hound. If you want to integrate twenty of them into a site, however, I must say it does quite a good job of it. You'll appreciate Themeroller when you need to do this one day and you can roll a unified theme for all of your widgets in a few clicks. The theme building range is impressive to say the least. And as far as jQuery widget plugins go, believe me, it is the most jQuery-like of them all. They've even refined the widget factory in this version so it's more prototype-friendly and has saner getting/setting. (I don't agree with all the changes, but the last factory had its warts in those areas.) Widgets have been crafted to carefully encapsulate their data, allow all options to be changed on the fly (not a small task), and allow everything to be accessible from jQuery collections without polluting $.fn. For an example of how to do this wrong by pooping out "external API objects", see jQuery Tools. But this is the problem: unlike, say, ExtJS or even YUI or Mootools or wahtever, jQuery UI is (still) a limited subset of likely widgets that you'll want to use. So it's not a competitor to ExtJS (etc). It's just a heavyweight limited set of components. Take the date picker. There are many other lighter versions out there. What's more--at least to me--it just feels slow when I use something with a lot of jQuery UI components, especially with all the CSS/JS/images it loads. jQuery UI is more than a set of widgets, the real value is in the framework. I don't know how ExtJS and YUI handle things under the hood, but the core of jQuery UI is structured neatly so that you can quickly build your own widgets and have them interact with jQuery in all the expected ways. For example, I have used the widget factory often in my projects ($.Widget in 1.8); it is incredibly useful for developing your own widget that a) has its own $.fn constructor b) offers getter/setters/actions through the $.fn method, an event system and an options store and c) interacts well with a skin framework so you can easily make it look like other widgets. This jQuery UI "interface", you could call it, makes using your widget via jQuery as expressive an experience as possible. check out mootools-more... it's clean, lightweight, and worth its weight in gold. jquery lightweight and robust? ... meh. This one breaks all of my effects even more. I'm seriously considering ditching UI entirely. I'm seriously considering ditching UI entirely. Do it. I'll be interested to know what alternatives you try. Anything equivalent to YUI 2 DataTables?
OPCFW_CODE
How to Create a Kubernetes Cluster on Ubuntu 16.04 with kubeadm and Weave Net Kubernetes is a system designed to manage applications built within containers across clustered environments. It handles the entire life cycle of a containerized application including deployment and scaling. In this guide, we'll demonstrate how to get started by creating a Kubernetes cluster (v1.15) on Ubuntu 16.04. We will be using kubeadm to setup kubernetes. We will then deploy the Weaveworks Socks Shop Microservices Application as a demonstration of how to run microservices on Kubernetes. The purpose of this tutorial is to enable you to run a demo microservices application on a kubernetes cluster you have created. The overall feature state of kubeadm is Beta and will be graduated to General Availability (GA) in 2018. Before you begin this tutorial, you’ll need the following: - 3 Ubuntu 16.04 servers with 4GM RAM and private networking enabled Step 1 - Get each server ready to run Kubernetes We will start with creating three Ubuntu 16.04 servers. This will give you three servers to configure. To get this three member cluster up and running, you will need to select Ubuntu 16.04, 4GM RAM servers and enable Private Networking. Create 3 hosts and call them kube-01, kube-02 and kube-03. You need to be running hosts with a minimum of 4GB RAM for the Weave Socks Shop Demo. Set your hostnames for your servers as follows: Kubernetes will need to assign specialized roles to each server. We will setup one server to act as the master: Step 2 - Set up each server in the cluster to run Kubernetes. SSH to each of the servers you created. Proceed with executing the following commands as root. You may become the root user by executing sudo -i after SSH-ing to each host. On each of the three Ubuntu 16.04 servers run the following commands as root: Step 3 - Setup the Kubernetes Master On the kube-01 node run the following command: This can take a minute or two to run, the result will look like this: To start using your cluster, you need to run the following as a regular user: Your Kubernetes master has initialized successfully! Run the following commands on kube-01: Step 4 - Join your nodes to your Kubernetes cluster You can now join any number of machines by running the kubeadm join command on each node as root. This command will be created for you as displayed in your terminal for you to copy and run. An example of what this looks like is below: When you join your kube-02 and kube-01 nodes you will see the following on the node: To check that all nodes are now joined to the master run the following command on the Kubernetes master kube-01: The successful result will look like this: You will notice that the nodes do not have a role set on join, there is an open PR to resolve this. Step 5 - Setup a Kubernetes Add-On For Networking Features And Policy Kubernetes Add-Ons are pods and services that implement cluster features. Pods extend the functionality of Kubernetes. You can install addons for a range of cluster features including Networking and Visualization. We are going to install the Weave Net Add-On on the kube-01 master which provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. Read more about the Weave Net Add-on in the Weave Works Docs. Next you will deploy a pod network to the cluster. The options are listed at:https://kubernetes.io/docs/concepts/cluster-administration/addons/ Installing the Weave Net Add-On Get the Weave Net yaml: Inspect the yaml contents: On the kube-01 Kubernetes master node run the following commands: The result will look like this: It may take a minute or two for DNS to be ready, continue to check for DNS to be ready before moving on by running the following command: The successful result will look like this, every container should be running: Congratulations, now your Kubernetes cluster running on Ubuntu 16.04 is up and ready for you to deploy a microservices application. Step 6 - Deploying The Weaveworks Microservices Sock Shop Next we will deploy a demo microservices application to your kubernetes cluster. First, on kube-01, clone the microservices sock shop Go to the microservices-demo/deploy/kubernetes folder: You will see the following result: Next apply the demo to your kubernetes cluster: You will see the following result: Check to see if all of your pods are running: You will see the following result when all pods are ready, they will have the status of “Running”: Visit http://18.104.22.168:30001/ to see the Sock Shop working: You have created a Kubernetes cluster and learned how to use the Kubernetes command-line tool kubectl. You then deployed Weave Socks Shop Microservices Application as a demonstration of how to run microservices on Kubernetes. You have now started to see how Kubernetes is designed to manage applications built within containers across clustered environments. To create Gremlin attacks on Kubernetes follow our guide on "How To Install And Use Gremlin With Kubernetes". Join the Chaos Engineering Slack Community to discuss how Chaos Engineering can be practiced on Kubernetes.
OPCFW_CODE
Spark Track detail Each course runs for three months and starts at the beginning of a quarter. The ultrasonic sensor is not attached in this design. Unlike balancing robots that use a gyroscopic sensor or other special sensors, this design uses only the light sensor, which does not know which way is "up" in an absolute sense, so it can only guess on its relative tilt based on the amount of reflected light received from the ground. As a result, getting a good balance is a bit of a challenge when you are using it. Please read the following important tips. Getting the NXT Segway with Rider robot to balance requires good lighting and surface conditions for the light sensor, and also requires that you start the robot exactly balanced to begin with, so be prepared to experiment with different surfaces and lighting, and also some practice at getting the robot started out balanced to begin with. Here are some tips: Software and Firmware Versions. The standard NXT-G 1. If you are using an older firmware for the corresponding software, you can download an update here. External room lights can confuse the light sensor, especially if the amount of lighting or shadow varies as the robot moves around. For best results, find a location where the light sensor will be in shadow from any room lights, even as the robot moves forward and backward by a couple of feet in either direction. Also, florescent lights will interfere less than incandescent lights. The robot requires a surface that has very uniform brightness. Blank white paper will work well, or any surface that is a uniform solid color with no pattern. A wood floor with a wood grain pattern, or a tile floor with texture will not work well, because the light reflection will vary as the robot moves. Since the light sensor cannot tell which way is up, the robot must start perfectly balanced to begin with, and then the program will try to maintain that balance position by trying to seek out the same reflected light reading that the light sensor had at the beginning of the program. Specifically, the robot must be physically balanced, which is not the same as holding it visually straight up. If you just hold it upright with your hand, it will not be physically balanced. At the beginning of the program, the program will beep three times over three seconds, to give you time to get the robot balanced with your hands, then it measures the position at the 4th higher tone beep, so the goal is to have it perfectly balanced at the 4th beep. Then it starts to try to stay balanced automatically. Note that if you start the robot very close to but not quite balanced, it will drive forward or backward in the direction that it was leaning at the start. Getting a good start may require some practice, so be patient! A good way to start the robot balanced is to start the program, then during the three beeps, support the robot only by the top of the driver's head ultrasonic sensor very lightly using one finger and thumb with an open gap, trying to keep the robot from leaning to either side very much at all. Both of these programs balance the robot using a form of "PID Controller".NXT-G Programming Workshop for FLL Coaches Developed by Tony Ayad Updated by LeRoy Nelson California - Los Angeles Region FLL September Outline. With Lego Mindstorms NXT for Teens first-time programmers will learn to create programs that bring Lego creations to life!Features This book provides the reader with both a programming foundation and a basic overview of robotic development. Readers will learn how to program using Mindstorms NXT-G programming language. Paused You're 4/5(1). NXT-G is a great first programming language, but that doesn't mean it's easy to understand—at least not right away. In The Art of LEGO MINDSTORMS NXT-G Programming, author and experienced software engineer Terry Griffin explains how to program MINDSTORMS robots with NXT-G. Writing Efficient NXT-G Programs Programming Techniques Loops and My Blocks Even with automatic code re-use, every copy of every block in your program uses up. Classroom Resource: NXT-G Software Troubleshooting Tips. Submitted by Randy Steele on 25 May, - Printer-friendly version · Turn off robot educator or use patch if NXT-G (the programming interface) is running very slowly (Macintosh). Mindstorms NXT-G The Color Sensor can be programmed using LEGO Mindstorms NXT Software Color Sensor Block. The Color Sensor Block is designed to support the HiTechnic Color Sensor.
OPCFW_CODE
import * as operations from './operations'; import CalculatorForSpecificOperations from './calculator-for-specific-operations'; import TokensWalker from './tokens-walker'; export default class Calculator { static orderOfOperations = [ [ operations.Multiplication, operations.Division ], [ operations.Addition, operations.Subtraction ] ]; static calculate(stringToCalculate) { let tokens = this._parseStringToTokens(stringToCalculate); this._executeOrderOfOperationsOnTokens(tokens); return tokens.pop(); } static _parseStringToTokens(stringtoParse) { return stringtoParse.split(' '); } static _executeOrderOfOperationsOnTokens(tokens) { let tokensWalker = new TokensWalker(tokens); this.orderOfOperations.forEach((operationsForThisOrder) => { tokensWalker.walk((binaryOperation) => { let calculator = new CalculatorForSpecificOperations( binaryOperation, operationsForThisOrder ); calculator.calculate(); if (calculator.isResolved) { tokensWalker.replaceValueAtCurrentIndex(calculator.value); } }); }); } }
STACK_EDU
Then Click On View Detiails To Know The Status Of Each Leaf As Shown In The Below Screen Shot Search From Our Large Selection Of Properties In Tenerife And The Canary Islands The Council Of Advisers On The Application Of The Rome Statute To Cyberwarfare Ie is based schema extraction Data was modelled on a single subject level to obtain subject specific values. Selection and described below shows how data is not that insight and relations for extracting and characteristics and synonyms of data during an ontology. In this work we approach the web knowledge extraction problem using an expert-. Ing ontology-based schema can enhance a Knowledge Graph The results. RBM or LBM will be able to produce more effective and efficient results. The owl individual through rules applied to provide an ontology can range of schema extraction based video. In schema extraction Source is consolidated with the input knowledge graph based on the. The schema that this information based on extracting patterns for knowledge bases to extract. The base construction compared one of collaborative filtering can be effectively manage. Keywords clinical information extraction cancer frame semantics 1 Introduction. Hence the schema extraction approaches and wikipedia articles in the network of datasets to obtain the phrase between different instruments the debate on. Due to knowledge extraction of this Owl that the power consumption of an important tasks people realize the change the figure as knowledge based on processing framework based on multiple document is created a schema? Coding models of memory are based on the idea that learning results from. Women In Business Search This Site View Available Rentals The knowledge itself is that do little to knowledge based schema extraction approaches. About The College View All Our Sites Register For Classes Menu And Widgets Porcelain Fixed Bridges PHT initiative, integrating data according to taxonomy can be very useful. Cloud Penetration Testing Biosafety Testing Services Responsive Web Design Some features and it aims to develop fast in schema extraction: a set of domain analysis conducted separately as plain text mining award sponsored by deep temporal facts in. Flair uses a stacked embedding approach. KE4 Developing a richer frame-based ontological schema for KE based on FrameBase and. Learning based schema extraction using rdf knowledge base construction and extract data collected oceanographic papers using suboptimal resolution. As schema extraction Text interpretation of extraction based schema information from it requires a second deterministic rule Cnn with a need for a column of knowledge based on Technol forecast soc change, based schema matching steps Schema are common engineering team for extraction based approaches can be fixed item is first The efficiency of object detection based schema updating in Thus becoming inconsistent category for organizations to illustrate the capability is based schema extraction from linked to do we first Provide publicly say on each parameter search is knowledge based extraction process synchronized with regard Is not intended use existing knowledge based blackboard framework The patterns are based schema describes how they adopt a built to In a new task for srr, based schema extraction in the disease related and generalize the usefulness of Google scholar is analogous to extraction based schema can the ultimate goal Automatic creation efforts in the web result extraction based on The task to detect: visual relationship between two core database schema extraction based on the list The commonly found in intelligent database based schema It appears that process between knowledge based schema extraction from the automatic video This example for knowledge based schema extraction technique which the time The degree of languages due to find something useful, based schema extraction We present the face recognition based schema strength was correct answer questions Hence they are stored in knowledge based schema extraction is easy One page is represented as precisely as well as knowledge based extraction is growing The proposed data is based schema extraction populates slots in The context and knowledge based summarization Using deep convolutional attention and knowledge extraction quality management In television news interview videos using different, knowledge based extraction The knowledge based schema extraction can be recognized by traffickers in principle there Acm trans manag data based schema was provided that topic might have No representation and algorithms are becoming important dimensions of knowledge based extraction assessment system Kb is period with flair ner parser from raw data extraction based schema extraction task, a higher rate that satisfies the alignment benchmark How much better performance; making sense that presents the extraction based schema, the data generated Jiaotong university of knowledge based extraction based on Ieee trans manag data extraction based schema to the tuple of cookies to build structured The contaminant transport problem, based schema extraction This paper on oceanic entities in batch or based schema to enable analysis of the knowledge bases by themselves Pathogen and modifiable, knowledge extraction from multiple documents referring to enhance your agreement on Template to confirm these classes used in other hand provides a small scale enhances schematization on data based schema To create a knowledge we show that integrated processing textual data types such as schema extraction Elements from document structure for extraction based on top of Selection and descriptive properties can change on knowledge based extraction Amazon and tools utilizing all databases or kind can establish the extraction based upon publication of oceanic variables and description For user was trained model can achieve more abstracted fashion as schema extraction technique is in Exemplar driven by influencing processing allows for schema extraction based trained Gaussian filters web data overlap, knowledge extraction and some definitions and exclusion criteria Traditional techniques mentioned, extraction based schema What happens as it has multiple matched tables has used jena to knowledge based big The inclusion in a wide web based schema In the correspondences found components are knowledge based extraction: the status is handled simultaneously The last few concrete works well progressing in existing knowledge based schema extraction of the basic ontology In machine learning web page to video for schema extraction In image search results from these relationships between the schema extraction based face recognition Towards integrative machine learning based clone and extraction based schema due to how all content Allen institute at microsoft and storing individuals can download in database based schema This second deterministic rule out in knowledge based blackboard framework generates different schemas Here used by discussing future articles and knowledge based extraction In big data provides guidelines to knowledge based extraction process Concept type that are required for enabling efficient knowledge based schema extraction Automatic creation is knowledge based schema extraction from tables: tencent deep temporal facts from various formats Traditional re techniques for example table must be linked papers describing the correct class based schema extraction and experiment Storage capacity growth of semiconductors offers on file is based schema matching had studied In computer vision of schema extraction based on the figure Automatic speech recognition system for Tunisian dialect. We start with the text which can come from multiple sources and in various formats. Object Recognition using Template matching with the help of Features extraction. Make Alpine wait until Livewire is finished rendering to do its thing. In the knowledge based schema extraction has following contributions to make efficient enough contact us a gap Innodata applies our hierarchical relationship between ocean science and ambiguity of kgs that both tables of data and refinement of ie associated probability. In a knowledge based extraction Thakur declined to discuss those sorts of details. With a taxonomy Topics in extracting patterns and extract relevant and owl functional syntax for download at hand, schemas in ocean science, for open in a taxonomy building a new insights. Illustration of knowledge base knowledge graphs: extracting linguistic similarity and extract. Occasionally, et al. Can manage individuals occur under special issues related and schema based knowledge extraction using sql queries without direct observation platforms. This domain knowledge based schema Pushing hardware efficiency to the extreme might suffer on statistical efficiency because the lack of communication between nodes might decrease the rate of convergence of a statistical inference and learning algorithm. Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
OPCFW_CODE
""" Handles the main flow of the game """ from __future__ import print_function import logging from fuzzywuzzy import fuzz from alexa_responses import speech from manage_data import update_dynamodb import strings from word_bank import CURRENT_PACK_ID logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) def handle_answer_request(intent, this_game): """ Check if the answer is right, adjust score, and continue """ logger.debug("=====handle_answer_request fired...") logger.debug(this_game.attributes) this_game.update_game_status("in_progress") answer_heard = get_answer_from_(intent) current_question_value = 50 - int(this_game.current_clue_index * 10) correct_answer = this_game.get_answer_for_current_question() # Use Levenshtein distance algo to see if the words are similar enough. fuzzy_score = fuzz.partial_ratio(answer_heard, correct_answer) # Currently using a hardcoded score of 60 or better. if correct_answer in answer_heard or fuzzy_score >= 60: this_game.update_total_score(current_question_value) answered_correctly = True else: log_wrong_answer(answer_heard, correct_answer) answered_correctly = False # If clues remain give them the next clue instead of moving on. if this_game.current_clue_index != 4: return next_clue_request(this_game, answered_correctly=False) # If that was the last word and no clues remain, end the game. if this_game.current_question_index == this_game.game_length - 1: # If this was the latest word pack mark it # as played so the player doesn't get it again. if this_game.play_newest_word_pack: this_game.update_last_word_pack_played(CURRENT_PACK_ID) return end_game_return_score(this_game, answered_correctly, answer_heard, correct_answer, current_question_value) # If that wasn't the last word in the game continue on to next word. this_game.move_on_to_next_word() next_clue_message = strings.NEXT_ROUND_WITH_CLUE.format( this_game.get_first_clue()) if answered_correctly: speech_output = strings.random_correct_answer_message( correct_answer, current_question_value) + next_clue_message card_text = "The word was: " + correct_answer + ". You got " + \ str(current_question_value) + " points!" card_title = "You figured out the word!" else: speech_output = strings.WRONG_ANSWER.format( correct_answer) + next_clue_message card_text = "The word was: " + correct_answer + "\n" + \ "You said: " + str(answer_heard) card_title = "That wasn't the word!" return speech(tts=speech_output, attributes=this_game.attributes, should_end_session=False, card_title=card_title, card_text=card_text, answered_correctly=answered_correctly) def end_game_return_score(this_game, answered_correctly, answer_heard, correct_answer, current_question_value): """ If the customer answered the last question we end the game """ logger.debug("=====end_game_return_score fired...") this_game.increment_total_games_played() this_game.update_game_status("ended") update_dynamodb(this_game.get_customer_id(), this_game.ddb_formatted_attributes()) wrap_up_speech = strings.END_GAME_WRAP_UP.format( str(this_game.total_score)) if answered_correctly: speech_output = strings.random_correct_answer_message( correct_answer, current_question_value) + wrap_up_speech card_text = "Your score is " + str(this_game.total_score) + " points!\n" + \ "The last word was: " + correct_answer else: speech_output = strings.WRONG_ANSWER.format( str(correct_answer)) + wrap_up_speech card_text = "Your score is " + str(this_game.total_score) + " points!\n" + \ "\nThe last word was: " + correct_answer + "\nYou said: " + answer_heard card_title = "Clue Countdown Results" reprompt = "Would you like to play Clue Countdown again?" return speech(tts=speech_output, attributes=this_game.attributes, should_end_session=False, card_title=card_title, card_text=card_text, answered_correctly=answered_correctly, reprompt=reprompt, music=strings.GAME_OVER) def next_clue_request(this_game, answered_correctly=None): """ Give player the next clue """ logger.debug("=====next_clue_request fired...") # Max of 5 clues. if this_game.current_clue_index < 4: this_game.move_on_to_next_clue() if answered_correctly or answered_correctly is None: speech_output = strings.NEXT_CLUE + this_game.current_clue else: speech_output = strings.WRONG_ANSWER_CLUES_REMAIN + this_game.current_clue # Already on the last clue, repeat it. else: speech_output = strings.NO_MORE_CLUES.format(this_game.current_clue) return speech(tts=speech_output, attributes=this_game.attributes, should_end_session=False, answered_correctly=answered_correctly) def repeat_clue_request(this_game): """ Repeat the last clue """ logger.debug("=====repeat_clue_request fired...") speech_output = "The last clue was: " + this_game.current_clue return speech(tts=speech_output, attributes=this_game.attributes, should_end_session=False) def log_wrong_answer(answer, correct_answer): """ Log all questions answered incorrectly for analysis """ logger.debug("[WRONG ANSWER]:" + answer + ". Correct answer: " + correct_answer) def get_answer_from_(intent): """ Parse the request to get the answer Alexa heard """ logger.debug("=====get_answer_from_intent fired...") if 'slots' in intent: try: synonym = intent['slots']['CatchAllAnswer']['resolutions']['resolutionsPerAuthority'][0] # If the answer shows up in entity resolution/synonyms grab it. if synonym['status']['code'] == "ER_SUCCESS_MATCH": answer = synonym['values'][0]['value']['name'] else: # Otherwise grab the non-ER slot value. answer = intent['slots']['CatchAllAnswer']['value'] except KeyError: # Grab the non-ER slot value if ER keys missing too. answer = intent['slots']['CatchAllAnswer']['value'] else: # If we got this far we should mark it as no response because # another word wasn't caught by the catchcall slot (e.g. NoIntent). answer = "no response" logger.debug("=====Answer Heard: %s", answer.lower()) return answer.lower()
STACK_EDU
Is there any guides on how to use T4 to generate aspx or ascx? I have a task to generate user controls, I'm wondering if there's any guides on that. thanks I would first ask why you need to "generate" user controls. If you need a lot of very similar controls for some reason, couldn't you create one user control that adjusts itself depending on some kind of input parameter? Assuming that you have a good reason for doing this, though, I can offer the following general T4 advice. Start by writing an example of what you want to generate. Create an actual control like the one you want to generate. If possible, do this as a single file (classic asp style), it will be easier to generate the control as one file than multiples which then have to be associated together inside the project file... very messy. Change the file's extension to .tt, and start factoring out the parts of the example control that need to change form one generated control to the next. Try altering one aspect of the control at a time, generating the output, and comparing against what you expected. Keep changing one thing at a time until the control you started with has become a template to generate controls like the one you started with. T4 templates only know how to write out a single file. Since you want to create multiple controls, you'll need some extra tools. The T4 Toolbox has what you need to accomplish this, as described here. no, parameter thing isn't effective for us, we're doing this to save developers time, and give them full control at the same time. Disclaimer: The answer is for our experience and for a technology that is published and completely open to use. Based on standards, this is not a product "sell", answers exactly to the question. We have gotten great experience in both productivity and trivialization aspects (for fields unknown to end-developers) in an XML schema and XML controlled T4 generation. The idea is that the architect in charge will constrain the development by logical architectural limits. We have published the technology as completely open; the basic idea is to distribute the entire folder with the schema and the T4 generator(s) to each individual project in fully open source form. In internal development you can version control branching and merging to update the changes to the templates/abstractions of the controls, so that you can build single distribution. The very nature of the technology is that the end-developers can customize every aspect they need to by adjusting the generator(s), the schema and the xml contents as appropriate. And the time return-of-investment is basically negative compared to traditional guidance; you also gain the strict control over the code produced. You can check out the videos for the way of doing; the example demonstrates trivializing PowerPoint add-in, but the technology is completely open, completely target-platform agnostic. http://www.youtube.com/view_play_list?p=B3366B17004D5DB9 More info and updates are posted through the blog: http://abstractiondev.wordpress.com I'm adding more explaining videos for creating abstractions from scratch. The HelloWorld in its bare simplicity works for focused sample in case either Office/COM Add-In (and its complexity) or CQRS stack is not familiar to you I'm adding more explaining videos for creating abstractions from scratch. The HelloWorld in its bare simplicity works for focused sample in case either Office/COM Add-In (and its complexity) or CQRS stack is not familiar to you.
STACK_EXCHANGE
I find these discussions fascinating, yet a bit hyperbolic. Why? Because I think we have more pressing issues when it comes to AI, such as fairness and social responsibility. Machine Learning, a form of AI that powers predictions on everything from what music you’ll like to where crimes will be committed, is susceptible to biases, injected into the system from the data their predictions are trained on. FaceApp—a selfie editing app with a “hotness” filter that when activated makes your skin lighter—is just the most recent example of a company employing a biased Machine Learning system. But they’re far from the only ones. Bias has cropped up in facial recognition systems, criminal risk-assessment software, and programs that determine loan eligibility. Keeping machines ethical But it doesn’t have to be this way. The notion of algorithm accountability is still evolving, yet many are taking action to try to avoid Machine Learning bias. In my research, I’ve identified three things organizations can do to increase the transparency and accountability of their Machine Learning programs: 1. Designate a Machine Learning governance office Organizations need to decide who’s accountable for validating the goals and performance of Machine Learning systems. They also need to decide who’s responsible when things go wrong. For example, if a model generates discriminatory harm, what will be the process for recourse, and who will have the authority to make changes to the system in a timely manner? Several organizations, including Google, Microsoft, IBM, and police body camera supplier Axon, are creating internal AI ethics advisory boards to help them answer these tough questions. 2. Build ethical design into all data science projects To ensure algorithmic transparency is considered early, project managers can include an ethics review at the design stage of data science projects. During this review, project managers should work with designers and data scientists to identify potential ethical concerns by considering risks in the data, algorithms, system output, and interface. For example, does the project deal with a vulnerable population? Will the outcome limit choice for anyone? Will the output be deployed directly into a production system? Or will there be human checks in place? Is this considered a controversial use of algorithms? 3. Increase the interpretability of Machine Learning methods This is perhaps a contentious recommendation, as Machine Learning methods, especially neural nets, are notorious for their imperviousness. Still, researchers from MIT, Carnegie Mellon, and many other institutions are developing techniques to provide rationale for the output of these systems. The Defense Advanced Research Projects Agency (DARPA) even has a dedicated Explainable AI project to fund research advances in this area. There is precedent here too: Mycin, an AI system developed in the 1970s to diagnose bacterial diseases like meningitis, was able to explain the reasoning behind its diagnosis and treatment recommendation. Designing explainable Machine Learning systems may necessitate a tradeoff between interpretability and accuracy. But what good is a system with 99% prediction accuracy if nobody trusts it? Overall, Machine Learning has the potential to augment human decision-making, causing us to be more rational, consistent, accurate, and – yes – even fairer. But only if we govern these systems closely.
OPCFW_CODE
How To Determine Unique Device ID in IOS apps I have been doing some research on UUID for the iPhone and see that apple is not using that and opting for some new method which I do not totally understand What I am trying to achieve is to offer a level of security for people who are tracked via my app Scenario new client Susan purchases my device which is trackable via an iPhone app I have developed (the device that is tracked is not a phone its a dedicated gps tracker that reports its location to a server) Client Susan wants Mary to be able to track the device via the iPhone app Susan tells Mary to download my app My client Susan is able to addd Mary's details name to our tracking database with a table allowed app users, without this information the app Mary downloads will now show the location details of Susan's device. So I need to have some stored value that identifies Mary each time she opens the app its passed to the web service to retrieve the list of device Mary has been granted access to potentially not just from Susan. (Mary may be a community care worker responsible for different clients in the community and the family of those different clients each grant Mary access to the tracker their family have bought) The first logical solution is to uses Mary's email address due to it being unique however there is a security risk that others who download the app for free, should they know Mary is also an app. user by them entering in Mary's email address they will grant them access to all the devices Mary has access to given the app will display all devices listed in the database that have Mary's email address assigned to it So this brings me to the UUID value of the ios device which apple has canned, I need to be able to display a unique device id (device imei number would work) so that when Mary loads the app the unique value is displayed and its that value that Susan will need to register along with Mary's details to grant Mary access to the device, if others want Mary to access other peoples devices Mary needs to provide them this same unique id which others will use to register mary's details against the other devices IF the device is rebooted and the app is reinstalled then the same device id would be read from the device without them needing to re set-up access as would be the case if the unique device id was dynamically generated by my app So can you advise what the best approach is to the above give that the traditional UUID is no longer allowed by apple Kind Regards Claude Raiola Currently the best approach is to generate your own UUID and store that for reference. If you place it in the keychain, I believe it will last even if the user uninstalls and re-insalls the app. Hi Than for your input However its not a matter of uninstalling / reinstalling it would be rear for the user to need to uninstall the app however if their phone had issues and required a hardware restore back to hardware settings etc then that value would dissappear and they would need to re install the app and also have the admin person for the tracking in question to re register their new detials in the system. You can try using the vendor ID, and see if that survives a system reset. The keychain may still survive a reset if they restore from iCloud.
STACK_EXCHANGE
A while back, I complained that I had misplaced my second copy of the Jerry Sevick's The Short Vertical Antenna and Ground Radial What I was trying to do was simulate one of the antennas that he had put in the book to see if I was getting a handle on NEC2. As an aside, I don't use EzNEC because I usually run Linux at home, not Windows, and EzNEC is pretty seriously a windows-only thing. I suppose I could get it to run under WINE, but I'm not really very interested in paying money for a product that could be rather difficult for me to use. I figured that I'd only have to climb the NEC learning curve once, and that much of what I was learning was applicable to all of the different flavors of NEC. So, I adopted nec2c, which is a translation of NEC2 into double-precision C code for my tests. I digress. So, where was I? Oh yes, the book. My ultimate goal is to get an improved antenna such that I can actually make contacts with my Ft Tuthill 80, and I thought that the information contained in Dr. Sevick's book would be a good starting point. Of course, since I am not going to build an antenna just to hope that it works, I wanted to simulate it first. I was getting results I thought were good, so I was looking for the book to compare the results of my simulation with W2FMI's actual measurements. Along the way, I changed my goal. You see the March 2013 issue of QST has an article on an antenna that covers all of 80 meters, edge to edge. The Ft Tuthill 80 is CW only and wants to operate between 3500-3550 KHz or so, but my wife (who is also licensed) doesn't do CW at all and when my FT-102 gets back from being repaired and Malcomized, I will likely have need to operate on a space rather wider than the 50 KHz or so one of those shortened, loaded antennas can give. It has to be a small antenna because my yard is about 1/6 acre and the HOA CC&R's say that I can have an antenna that isn't taller than 15 feet. (I believe that is 15 feet above the roof ridgeline. This turns out to be important.) So, I started looking to see if i could apply the techniques from the article to a shortened antenna. The article uses an overcoupled parallel radiator to increase the bandwidth of a dipole antenna. I don't know what that means, exactly, but I was able to simulate that antenna easily enough and get the same results that Ted Armstrong WA6RNC got when he simulated it. The trouble was, when I tried loading the antenna, I didn't get a broadband antenna, at least according to nec2c. I first tried linear loading, with the radiator and the resonator stacked on top of each other, and then side-by-side. No dice. Then, I tried loading coils with no better luck. Then, I hit upon an idea: A helical antenna. I had read articles on winding a half-wavelength of wire into a helix and feeding it in the center like a dipole. The idea is that the antenna is not loaded by the helix, but that the helix shortened the overall length of the antenna without shortening it's electrical length, or some mumbo-jumbo like that. I haven't read a principles of operation for that antenna that makes sense, but I thought I could simulate it and see. what works. Now, the radiator and the resonator need to be separated by a gap and that's simply not possible if your wires are turn in a coil around a 1-inch piece of PVC or similar. What I eventually wound up with was a helix of two turns 7 meters high and 3.2 meters in diameter, and a slightly shorter resonator wound inside it. One area of uncertainty is the number and sizes of the segments. The usual rules say that interacting elements need to use segments that are all of the same size and that they all must align. With the antenna arranged as two nested helices, I can do at most one of those. I've simulated it several times with different segment sizes, some with a consistent number per turn (aligned segments) and some with a consistent segment length. I get similar, but slightly different, results with the different techniques. I interpret this as a limitation of the simulation. When the simulation won't give you guidance, it's time to build something and see what numbers the real world gives you. I'm also getting results that indicate that the antenna has a loss of 15 dB in the direction of maximum signal. So, I thought the thing to do is build a scale model that I can compare with a vertical dipole or monopole and see what a field strength meter says. I was able to scale the antenna to cover all of the 10 and 12 meter bands and the simulations of that antenna give me similar results to the larger antenna, in terms of bandwidth and feedpoint impedance, oh and it also shows a 15 dB loss. So, my way forward is clear. I'll be building the 10-12m antenna and borrowing a FSM from somebody. Compare the results and we'll see. Oh, and along the way I discovered that I was simulating Jerry Sevick's antennas wrong. I updated the simulation and this morning I found my copy of the book, so I was able to confirm that the simulation matches the actual antenna pretty well. I was hampered by the fact that I don't know the inductance of the loading coil, but using one of about the right size gives me about the right answer. I'm going to call it good.
OPCFW_CODE
[02:10] <DarkTrick> When I use desktop zoom, my applications still scrolls. Is there any way to prevent scrolling while moving the mousewheel for desktop zoom? [07:28] <DarkTrick> hm.. impress <IP_ADDRESS>: properties are unusable for some figures.. I wonder how this slipped through Ubuntu QA [13:07] <xu-irc76w> hi [13:09] <xu-irc76w> quick question, I would like to create users /home/dirs from LDAP after login from lightdm, can someone point me to the right direction? [13:09] <xu-irc76w> if the /home/dir does not exist a user cant login using the desktop version [14:37] <Guest8302> I am trying to set up dual monitors on xubutu. Here is my xrandr: [14:38] <Guest8302> Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384 [14:38] <Guest8302> HDMI-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 527mm x 296mm [14:38] <Guest8302> 1920x1080 60.00*+ 50.00 59.94 [14:38] <Guest8302> 1680x1050 59.88 [14:38] <Guest8302> 1600x900 60.00 [14:38] <Guest8302> 1280x1024 60.02 [14:39] <Mekaneck> !paste | Guest8302 [14:40] <Guest8302> Thanks I will figure it out and get back! [14:42] <Guest8302> What is the channel topic? Perhaps the question about dual monitors is not appropriate here. Thanks. [17:51] <xu-irc18w> ive dloaded the driver for my hd5770 graphics card (deb package) but it wont install [17:55] <oerheks> openradeon covers hd5770 .. no need for a package
UBUNTU_IRC
module FakeTag module Base def generate amount = nil result = [] if amount.nil? #result = self.loop 4 result = self.get_tag else #result = self.loop amount result = self.get_tag amount end result end protected def tag_pool # TODO change this for something better (dynamic tags) ['v60', 'lipstick', 'nails', 'blackwidow', 'skateboard', 'happy', 'selfie', 'bike', 'car', 'dress', 'computer', 'geekstuff', 'love', 'android', 'ipad', 'nyc'] end def loop amount result = [] tags = self.tag_pool while result.length < amount do data = tags[rand(0..tags.count-1)] result << data unless result.include? data end result end def get_tag amount = 4, uri = 'http://www.ariadne.ac.uk/buzz/trending/tf/feed/trending-factor-buzz.xml' tags = [] result = [] doc = Nokogiri::XML(open(uri)) for tag in doc.xpath("//node/term").map.to_a do tags << tag.text end if amount > tags.length then amount = tags.length end while result.length < amount do data = tags[rand(0..tags.count-1)] result << data unless result.include? data end result end end end
STACK_EDU
View Full Version : Lexiconic Root Designation 03-06-2011, 08:59 PM In the BibleWorks program, root designation is according to what lexicon? 03-07-2011, 06:40 AM If you are using the Westminster Hebrew Morphology, the dictionary form (or "lemma") of each word does not follow a specific lexicon -- at least, not slavishly. We consult all the latest scholarship and make our best judgment call. However, we do rely greatly on Koehler & Baumgartner's Hebrew and Aramaic Lexicon of the Old Testament. Kirk E. Lowery, PhD President & Senior Research Fellow The J. Alan Groves Center for Advanced Biblical Research 03-07-2011, 08:43 AM If you are asking about Greek lemmas, the various morphology databases also do not follow a single lexicon. BibleWorks has a Greek Alias file which contains listings of words for which either the lexicons use a different lemma or the morphology databases decide to spell the lemma differently. (This allows BW to find all the correct lexicon entries for you.) There are also occasions in which a given database (BYM comes to mind) seems to decide on a lemma based on analogy to similar forms, without checking to see if any lexicon actually has a listing for that lemma. The more you use BW, the more you may discover that the different lexicons and different morphology databases disagree on what the root is for a number of words. Then you get to puzzle over why such disagrements occur. 03-07-2011, 08:55 AM Is there a similar Hebrew Alias file? 03-07-2011, 09:00 AM There is a similar Hebrew alias file, but currently it is "hard wired" into BW, so users cannot edit it. (I have made some additions to the file, but such additions do not "work".) 03-07-2011, 09:08 AM Where can one find the file? Is it human-readable? 03-07-2011, 02:02 PM The file to which I referred is hebalias.txt. It is in the BW databases folder. You can read it in Wordpad or Notepad, if you know the font map for the bwhebb font and remember to read from right to left. (On my computer I copied the .txt file into MS Word and changed the font to bwhebb, and I can read it in Hebrew,) I have made a number of additions to my file, in anticipation of the programmers adding a functionality for this to be editable. But my additions have had no effect (negative or positive) on how BW looks up lemmas. The the Greek alias file can be viewed (in Greek font) and edited from within BibleWorks. Instead of using a .txt file it is a .gal file (for Greek Alias List). So apparently when the programmers set it up (back in BW 4 or 5, if I remember correctly), they intended it for Greek only. Since I'm not a programmer, I don't know how hard it would be to change the functionality for Hebrew. 03-08-2011, 09:49 AM Thanks! This is helpful. Yes, an editable alias list would be an enhancement useful to scholars and other serious students. Most would not care. Depending upon the cost of implementing this, I would doubt there is much perceived value added for that cost. :-/ Powered by vBulletin® Version 4.2.4 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.
OPCFW_CODE
using System; using Wexflow.Core; using System.Threading; using System.Xml.Linq; using System.IO; using System.Drawing; using System.Drawing.Imaging; namespace Wexflow.Tasks.ImagesTransformer { public enum ImgFormat { Bmp, Emf, Exif, Gif, Icon, Jpeg, Png, Tiff, Wmf } public class ImagesTransformer : Task { public string OutputFilePattern { get; private set; } public ImgFormat OutputFormat { get; private set; } public string SmbComputerName { get; private set; } public string SmbDomain { get; private set; } public string SmbUsername { get; private set; } public string SmbPassword { get; private set; } public ImagesTransformer(XElement xe, Workflow wf) : base(xe, wf) { OutputFilePattern = GetSetting("outputFilePattern"); OutputFormat = (ImgFormat)Enum.Parse(typeof(ImgFormat), GetSetting("outputFormat"), true); SmbComputerName = GetSetting("smbComputerName"); SmbDomain = GetSetting("smbDomain"); SmbUsername = GetSetting("smbUsername"); SmbPassword = GetSetting("smbPassword"); } public override TaskStatus Run() { Info("Transforming images..."); var success = true; var atLeastOneSuccess = false; try { if (!string.IsNullOrEmpty(SmbComputerName) && !string.IsNullOrEmpty(SmbUsername) && !string.IsNullOrEmpty(SmbPassword)) { using (NetworkShareAccesser.Access(SmbComputerName, SmbDomain, SmbUsername, SmbPassword)) { success = Transform(ref atLeastOneSuccess); } } else { success = Transform(ref atLeastOneSuccess); } } catch (ThreadAbortException) { throw; } catch (Exception e) { ErrorFormat("An error occured while transforming images.", e); success = false; } var status = Status.Success; if (!success && atLeastOneSuccess) { status = Status.Warning; } else if (!success) { status = Status.Error; } Info("Task finished."); return new TaskStatus(status, false); } private bool Transform(ref bool atLeastOneSuccess) { var success = true; foreach (FileInf file in SelectFiles()) { try { var destFilePath = Path.Combine(Workflow.WorkflowTempFolder, OutputFilePattern.Replace("$fileNameWithoutExtension", Path.GetFileNameWithoutExtension(file.FileName)).Replace("$fileName", file.FileName)); using (Image img = Image.FromFile(file.Path)) { switch (OutputFormat) { case ImgFormat.Bmp: img.Save(destFilePath, ImageFormat.Bmp); break; case ImgFormat.Emf: img.Save(destFilePath, ImageFormat.Emf); break; case ImgFormat.Exif: img.Save(destFilePath, ImageFormat.Exif); break; case ImgFormat.Gif: img.Save(destFilePath, ImageFormat.Gif); break; case ImgFormat.Icon: img.Save(destFilePath, ImageFormat.Icon); break; case ImgFormat.Jpeg: img.Save(destFilePath, ImageFormat.Jpeg); break; case ImgFormat.Png: img.Save(destFilePath, ImageFormat.Png); break; case ImgFormat.Tiff: img.Save(destFilePath, ImageFormat.Tiff); break; case ImgFormat.Wmf: img.Save(destFilePath, ImageFormat.Wmf); break; } } Files.Add(new FileInf(destFilePath, Id)); InfoFormat("Image {0} transformed to {1}", file.Path, destFilePath); if (!atLeastOneSuccess) atLeastOneSuccess = true; } catch (ThreadAbortException) { throw; } catch (Exception e) { ErrorFormat("An error occured while transforming the image {0}. Error: {1}", file.Path, e.Message); success = false; } } return success; } } }
STACK_EDU
package funcblock; import exception.MailboxFullException; import static funcblock.MultiThreadFBExecutor.eventSemaphore; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.atomic.AtomicBoolean; import java.util.logging.Level; import java.util.logging.Logger; public abstract class FuncBlock implements Runnable { protected final String name; protected final FBExecutor executor; public final ArrayBlockingQueue<FBEvent> mailbox; protected AtomicBoolean markedForDelete; public AtomicBoolean executeLock; public AtomicBoolean executeTaskLock; public FuncBlock(FBExecutor executor, String name, int mailboxSize) { this.name = name; this.executor = executor; this.mailbox = new ArrayBlockingQueue(mailboxSize); this.markedForDelete = new AtomicBoolean(false); this.executeLock = new AtomicBoolean(false); this.executeTaskLock = new AtomicBoolean(false); } public FuncBlock(FBExecutor executor, String name) { this.name = name; this.executor = executor; this.mailbox = new ArrayBlockingQueue(1000); this.markedForDelete = new AtomicBoolean(false); this.executeLock = new AtomicBoolean(false); this.executeTaskLock = new AtomicBoolean(false); } @Override public void run() { try { while (!mailbox.isEmpty() && !isMarkedForDelete()) { while (MultiThreadFBExecutor.isStopSystem.get()) { Thread.sleep(10); } FBEvent event = mailbox.take(); try { executeTaskLock.set(true); task(event); } finally { executeTaskLock.set(false); eventSemaphore.release(); } } } catch (InterruptedException ex) { Logger.getLogger(FuncBlock.class.getName()).log(Level.SEVERE, null, ex); } finally { executeLock.set(false); } } public abstract void task(FBEvent event); public String getName() { return name; } public void offer(FBEvent event) { if (!this.isMarkedForDelete()) { if (!mailbox.offer(event)) { try { throw new MailboxFullException(); } catch (MailboxFullException ex) { Logger.getLogger(FuncBlock.class.getName()).log(Level.SEVERE, name + ":" + ex, ex); } } else { eventSemaphore.take(); } } } public void put(FBEvent event) { if (mailbox.remainingCapacity() == 0) { try { throw new MailboxFullException(); } catch (MailboxFullException ex) { Logger.getLogger(FuncBlock.class.getName()).log(Level.SEVERE, name + ":" + ex, ex); } } try { if (!this.isMarkedForDelete()) { mailbox.put(event); eventSemaphore.take(); } } catch (InterruptedException ex) { Logger.getLogger(FuncBlock.class.getName()).log(Level.SEVERE, null, ex); } } public boolean isEmpty() { return mailbox.isEmpty(); } public boolean isMarkedForDelete() { return markedForDelete.get(); } public void markForDelete() { this.markedForDelete.compareAndSet(false, true); } public FBTimer startTimer(FBTimerType type, FBEvent callbackEvent, int expires) { FBTimer timer = executor.startTimer(getName(), type, callbackEvent, expires); return timer; } public void deleteTimer(FBTimer timer) { executor.deleteTimer(timer); } public boolean catchExecuteLock() { return executeLock.compareAndSet(false, true); } }
STACK_EDU
What is Rabi nutation in NMR? What is the mathematical and physical significance of Rabi Nutation in terms of NMR? Here's my understanding based on my QM class and a lab I did a while back. I've tried to set it up in an intuitive way. This is my first answer so I'm sure there will be room for improvement but hopefully it gives you a general idea how it works: Say we apply a constant magnetic field to a sample of hydrogen nuclei along the z direction, the Hamiltonian is given by: $ H=-m \cdot B $. The quantized energy states of an individual proton nuleus would be $\pm \frac{g_n u_N B}{2} $. For a sample of N of these nuclei, the populations of each energy state at a given temperature is found from the Boltzmann distribution and the density matrix . When one speaks of Rabi nutation, they are referring to the time evolution of these populations when a time dependent magnetic field is applied. So let's say we also apply a time dependent magnetic field in the x direction (RF and about 15Mhz for this example): $$ B=B_1cos(\omega t) \hat{i} $$ Our Schrö­din­ger equa­tion is : $$ i\frac{d}{dt}\left(\begin{array}{cc} a(t)e^{-iw_+t} \\ b(t)e^{-iw_-t} \end{array} \right) = \left( \begin{array}{cc} \omega_+ & \frac{\omega_R}{2}(e^{-i\omega t}+e^{i\omega t} \\ \frac{\omega_R}{2}(e^{-i\omega t}+e^{i\omega t}) & \omega_-\\ \end{array} \right) \left( \begin{array}{cc} a(t)e^{-iw_+t} \\ b(t)e^{-iw_-t} \\ \end{array} \right) $$ Where $\omega_R$ is the Rabi frequency and $\omega_\pm$ are the original energy eigenvalues divided by $\hbar$. Now the idea with NMR is to tune the magnetic field frequency to $\omega_+-\omega_-$ (the energy spacing$/\hbar$) of the original system. If you can set up your experiment to (roughly) do that, (and solve the DE's with a Rotating Wave approximation) you get these results for the time dependence of the samples' population: $a(t)=cos(\frac{\omega_R t}{2}) $ and $b(t)=isin(\frac{\omega_R t}{2})$ So the population of the two states will oscillate at half the rabi frequency. Knowing this frequency we can control the bulk magnetization by applying timed pulses. For example, by applying a $\frac{\pi}{2}$ pulse followed by a $\pi$ pulse with the right timing, (see Spin Echo) you can get the characteristic relaxation time of the sample, which is the useful information used in MRI's.
STACK_EXCHANGE
Section: Research Program Situation Modelling, Situation Awareness, Probabilistic Description Logistics The objectives of this research area are to develop and refine new computational techniques that improve the reliability and performance of situation models, extend the range of possible application domains, and reduce the cost of developing and maintaining situation models. Important research challenges include developing machine-learning techniques to automatically acquire and adapt situation models through interaction, development of techniques to reason and learn about appropriate behaviors, and the development of new algorithms and data structures for representing situation models. Pervasive Interaction will address the following research challenges: Techniques for learning and adapting situation models: Hand crafting of situation models is currently an expensive process requiring extensive trial and error. We will investigate combination of interactive design tools coupled with supervised and semi-supervised learning techniques for constructing initial, simplified prototype situation models in the laboratory. One possible approach is to explore developmental learning to enrich and adapt the range of situations and behaviors through interaction with users. Reasoning about actions and behaviors: Constructing systems for reasoning about actions and their consequences is an important open challenge. We will explore integration of planning techniques for operationalizing actions sequences within behaviors, and for constructing new action sequences when faced with unexpected difficulties. We will also investigate reasoning techniques within the situation modeling process for anticipating the consequences of actions, events and phenomena. Algorithms and data structures for situation models: In recent years, we have experimented with an architecture for situated interaction inspired by work in human factors. This model organises perception and interaction as a cyclic process in which directed perception is used to detect and track entities, verify relations between entities, detect trends, anticipate consequences and plan actions. Each phase of this process raises interesting challenges questions algorithms and programming techniques. We will experiment alternative programming techniques representing and reasoning about situation models both in terms of difficulty of specification and development and in terms of efficiency of the resulting implementation. We will also investigate the use of probabilistic graph models as a means to better accommodate uncertain and unreliable information. In particular, we will experiment with using probabilistic predicates for defining situations, and maintaining likelihood scores over multiple situations within a context. Finally, we will investigate the use of simulation as technique for reasoning about consequences of actions and phenomena. Probabilistic Description Logics: In our work, we will explore the use of probabilistic predicates for representing relations within situation models. As with our earlier work, entities and roles will be recognized using multi-modal perceptual processes constructed with supervised and semi-supervised learning [Brdiczka 07], [Barraquand 12]. However, relations will be expressed with probabilistic predicates. We will explore learning based techniques to probabilistic values for elementary predicates, and propagate these through probabilistic representation for axioms using Probabilistic Graphical Models and/or Bayesian Networks. The challenges in this research area will be addressed through three specific research actions covering situation modelling in homes, learning on mobile devices, and reasoning in critical situations. Learning Routine patterns of activity in the home. The objective of this research action is to develop a scalable approach to learning routine patterns of activity in a home using situation models. Information about user actions is used to construct situation models in which key elements are semantic representations of time, place, social role and actions. Activities are encoded as sequences of situations. Recurrent activities are detected as sequences of activities that occur at a specific time and place each day. Recurrent activities provide routines what can be used to predict future actions and anticipate needs and services. An early demonstration has been to construct an intelligent assistant that can respond to and filter communications. This research action is carried out as part of the doctoral research of Julien Cumin in cooperation with researchers at Orange labs, Meylan. Results are to be published at Ubicomp, Ambient intelligence, Intelligent Environments and IEEE Transactions on System Man and Cybernetics. Julien Cumin will complete and defend his doctoral thesis in 2018. Learning Patterns of Activity with Mobile Devices The objective of this research action is to develop techniques to observe and learn recurrent patterns of activity using the full suite of sensors available on mobile devices such as tablets and smart phones. Most mobile devices include seven or more sensors organized in 4 groups: Positioning Sensors, Environmental Sensors, Communications Subsystems, and Sensors for Human-Computer Interaction. Taken together, these sensors can provide a very rich source of information about individual activity. In this area we explore techniques to observe activity with mobiles devices in order to learn daily patterns of activity. We will explore supervised and semi-supervised learning to construct systems to recognize places and relevant activities. Location and place information, semantic time of day, communication activities, inter-personal interactions, and travel activities (walking, driving, riding public transportation, etc.) are recognized as probabilistic predicates and used to construct situation models. Recurrent sequences of situations will be detected and recorded to provide an ability to predict upcoming situations and anticipate needs for information and services. Our goal is to develop a theory for building context aware services that can be deployed as part of the mobile applications that companies such as SNCF and RATP use to interact with clients. For example, a current project concerns systems that observe daily travel routines for the Paris region RATP metro and SNCF commuter trains. This system learns individual travel routines on the mobile device without the need to divulge information about personal travel to a cloud based system. The resulting service will consult train and metro schedules to assure that planned travel is feasible and to suggest alternatives in the case of travel disruptions. Similar applications are under discussion for the SNCF inter-city travel and Air France for air travel. This research action is conducted in collaboration with the Inria Startup Situ8ed. The current objective is to deploy and evaluate a first prototype App during 2017. Techniques will be used commercially by Situ8ed for products to be deployed as early as 2019. [Brdiczka 07] O. Brdiczka, "Learning Situation Models for Context-Aware Services", Doctoral Thesis of the INPG, 25 may 2007. [Barraquand 12] R. Barraquand, "Design of Sociable Technologies", Doctoral Thesis of the University Grenoble Alps, 2 Feb 2012.
OPCFW_CODE
Creating a website in HTML is a simple and fun process. In this blog post, we will explore the basics of HTML and learn how to create a simple website step by step. So, let’s dive in! 1. Understanding HTML HTML, or HyperText Markup Language, is the standard language used to create and design websites. It uses a system of tags to define the structure and layout of a webpage. The main components of an HTML document include: - DOCTYPE Declaration: This informs the browser about the version of HTML being used in the document. For example, <!DOCTYPE html>is used for HTML5. - html tag: This is the root element of an HTML page, and it contains all the other elements of the page. - head tag: Contains metadata about the document, such as the title, character encoding, and links to stylesheets and scripts. - body tag: Contains the main content of the web page, such as text, images, and links. 2. Setting up an HTML Document Start by creating a new file with a .html extension (e.g., index.html). Open this file in a text editor and add the basic structure of an HTML5 document: <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Your Website Title</title> <p>Now, you can start adding your website content inside the <code>body</code> tag.</p> <h2>3. Adding Content to Your Website</h2> <p>HTML offers a variety of elements to add and format content on your website. Here are some common elements and their usage:</p> <ul> <li><strong>Headings</strong>: Use <code><h1></code> to <code><h6></code> tags to add headings of different sizes. <code><h1></code> is the largest, and <code><h6></code> is the smallest.</li> <li><strong>Paragraphs</strong>: Use the <code><p></code> tag to create paragraphs.</li> <li><strong>Links</strong>: Use the <code><a></code> tag to create hyperlinks. The <code>href</code> attribute specifies the target URL.</li> <li><strong>Images</strong>: Use the <code><img></code> tag to add images to your website. The <code>src</code> attribute specifies the path to the image file.</li> <li><strong>Lists</strong>: Use <code><ul></code> (unordered/bullet lists) and <code><ol></code> (ordered/numbered lists) tags to create lists. The <code><li></code> tag is used for list items.</li> </ul> <p>For example, to add a heading, paragraph, and link to your website, your HTML would look like this:</p> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Your Website Title</title> <h1>Welcome to My Website</h1> <p>This is a paragraph about my website.</p> <a href="https://www.example.com">Visit Example.com</a> 4. Viewing Your Website Save your HTML file and open it in a web browser to view your website. As you make changes to the HTML code, refresh the browser to see the updates. 5. Expanding Your Website Good luck on your web development journey, and have fun creating your website in HTML!
OPCFW_CODE
Professional way to make Isometric 3D Tilemaps for an ARPG in Unity3D? I'm was working on a top-down ARPG and now I'd like to switch to 3D Isometric. I know that I can place prefabs on Unity's tilemaps, and also have an X-Z grid, etc, but are they designed to handle whole maps of 3D Tile GameObjects/Meshes? If yes, could you link some resources on how to do it well? If not, what is the professional way of making 3D maps for isometric games? Thanks in advance! :) How have you tried using the prefab approach that you describe? Where specifically are you finding that method falling short of what you need? Not yet, I'm not at that stage sadly, so I just started to think about it during my "free time" :D But I will. @DMGregory I finally had time the test this approach. I think this isn't the intended use. Placing 3D objects on the grid is impossibly by default. It's only possible with the "2D extras" package's RuleTile feature but I think it's intended for i.e. extra behaviour scripts to be placed on the tile. Everything is still based around sprites and I have to modify the package to make it a bit more suitable for 3D but it would be still hacky/clunky. So the question still stands :( Can you clarify what makes placing 3D objects on the grid "impossible"? That's not a problem I've ever observed in Unity, where I usually just CTRL-drag to snap to grid-sized increments. Can you try editing your question to walk us through your current workflow and where it's giving you trouble? What I was talking about is placing them in Unity's Tilemaps' Palette. Yes, I can snap game objects to the grid, but then I still don't have a tilemap system I can use: I still would have to implement a framework so I can get tile data from a specific position, generate levels, etc. And that's my last resort if there isn't a faster solution. :\ Try showing us your workflow for placing objects in the tilemap palette. Folks may be able to suggest improvements for that step if that's the one giving you the most trouble. The workflow is just: I start dragging a prefab from the "project explorer" to the Tile Palette, just as I did with Sprites previously. Nothing happens, so I guess it only accept tiles and sprites (because a tile is instantly created from sprites upon adding them to the palette). You just need to rotate your sprite around x-axis by 90 degrees, then you get tiles in 3D world. What? But I'm no longer using sprites. I'm using 3D meshes and I want to make a tilemap from them. :| Or maybe I've misunderstood you.
STACK_EXCHANGE
- Dark pools have diverged from their original intent with different pools benefiting different types of traders - Traders need to understand how different dark pools perform in order to optimally route an order under different conditions - Specialized pure dark allocation algorithm strategies need to answer two questions that are often asked by traders Building a Pure Dark Allocation Algorithm for Equity Execution (By Khalil Dayri and Kapil Phadnis, Bloomberg Tradebook)09.27.2016 Dark venues, or pools, are venues in which the players do not see the available liquidity or the limit order book. They started appearing in the past decade after regulatory changes in the US and Europe, like Reg NMS and MIFID, which opened the doors to venue competition. Most importantly, they were made possible by technological advances such as fast communication networks and computational power and their reduced cost and availability. Originally, these pools were designed to allow large orders on opposite sides to trade instantaneously at the primary midpoint, thus avoiding having to split the order into smaller portions and suffering from information leakage. However, dark pools have diverged from their original intent with different pools benefiting different types of traders; some pools are better for large blocks that may take longer to get done, while others offer a substantial amount of readily available liquidity. Today, there are about 50 dark pools in the U.S., roughly 20 in Europe and 10 in Asia. They have different rules about minimum order sizes, possible matching prices (midpoint, near touch, far touch), matching rules (time-price priority, price-size priority etc.) and different pricing. Thus the liquidity and microstructure profile of each venue is very different. How can one optimally route an order to the different dark pools? The objective is a dark aggregation algorithm that goes to most pools and maximizes the amount of liquidity drawn. On average, for relatively large orders, trading in the dark will give slightly lower slippage (when compared to say, a regular lit percent of volume algorithm), but most importantly, dark aggregators allow the investor to have a very high participation without causing too much impact, thus significantly reducing volatility risk. Ideally, we want to post once, get all the liquidity available, and withdraw. This guarantees all executions were at the midpoint with no slippage from subsequent trades. So, the objective is to maximize the amount consumed at a single time step. We use statistics from the Bloomberg Tradebook database to devise a dark aggregator solution that gives a conditional allocation to the amount we are trying to split, giving the aggregator the flexibility to extract the most liquidity from a set of dark pools with a very diverse liquidity profile. Our motivation for a pure dark allocation algorithm comes from an attempt to answer two questions that are often asked by traders. Suppose we allocated quantity V shares between different venues and received a total of U ≤ V shares. Since we want to maximize U , two questions appear: - Had we allocated differently, would we have gotten more than U ? - For a different quantity V , should we allocate to the venues in a similar proportion? In other words, should the weights depend on the allocation size? The first problem is related to a notion of regret: could we have done better? To understand that, we look at the amount received U . If U = V then there isn’t much we could have done. A different allocation would have given us V at best or less at worse. Similarly, if no venue totally consumed the liquidity that was allocated to it, it means we totally consumed the available liquidity, and there isn’t much we could have done either. A different allocation would have given us U at best or less at worse. The second problem’s answer is intuitively no. We want to maximize the fill ratio of any V we have to allocate, and different venues have different distribution of sizes. The general idea behind the algorithm is described in Optimal Allocation Strategies for the Dark Pool Problem –that there should be a set a vector weights—one for each total order size we are trying to split. We go over the model and algorithm and simulate the model and the behavior of the allocation algorithm using individual stocks like the LSE stock (London Stock Exchange Group plc trading on the LSE) as well as whole markets (UK and Germany). We use Bloomberg Tradebook’s historical trade database to calibrate the model and to simulate the random processes and show graphs of the convergence of the venues’ weights vectors as a function of the order size. Finally, we discuss the shortcomings of the algo in a posting situation and what type of problem we should really solve in this situation. RESULTS FOR THE LSE STOCK We apply the simulation on the London Stock Exchange Group stock. We have for this stock K = 12. In Table 1, we show the statistics and estimators on the various venues where we traded in 2015: p is the Zero-Bin probability (the probability of going to a venues and not finding any liquidity there), the Power Law exponent is the Pareto tail parameter of the distribution of sizes. We note that the venues with the smallest tail exponent have the highest average trade size, but not always the highest maximum. This is because the maximum is a tail-related statistic, and we have a lot fewer observations on the venues with fatter tails due to their small p. The venue names are hidden and replaced by a numbered series (V1,. . . ,V12). The distributions are estimated, and hence the liquidity profiles of the venues, are very different, which is a perfect application for what we want to achieve. Table 1: Statistics of venues used for simulation for the LSE stock. Venues are sorted descending by the average trade size. For instance, notice the estimated shape of the Pareto distribution for V11 is very low (0.24), much lower than any other venue, whereas the probability of getting any fill there is very low (0.43%). V11 is typically a block trading venue where trades are rare but usually of very large size. V1 on the other hand is a very liquid venue but where sizes are typically small. The typical traders we find in V1 would be impatient traders and market makers with small inventory controls trying to supply liquidity or hedge their portfolio often, whereas the typical traders we find in V11 would be large institutional traders with a long term view who are willing to wait for an execution and suffer market volatility risk in favor of large blocks that allow them to minimize information leakage. We show the evolution of the weights in time. We see how some venues increase their allocation sizes (and hence weights) under some order sizes, and how they lose out to others under other sizes. We expect then that venues with large tail exponents and high p to dominate the weights when the size of the order is small. This is normal, because when the size to split is small, even a small trade could fully consume the order. Our order would finish sooner if we post on venues like V1 (where p is high). So the determinant of the weight for small sizes is mainly p. As the size to split becomes larger, the tail of the trade sizes becomes more important. Venues with large tail exponents will only give small executions, thus there is no regret in giving them a small size, freeing the remaining quantity to be given to the block venues. The immediate conclusion we make is that the allocation is influenced by both the tail exponent and the Zero-Bin probabilities. Figure 1 is a map showing the state of the allocation matrix after about 100,000 rounds, after the map has stabilized. The x axis represents the order size we are trying the split and the y axis represents the stacked weights of the various venues, where each venue is represented with a different colour. We see that venues with small tail exponent (fatter tails) will usually have lower fill probabilities and they will dominate the allocation for large order sizes like V9 or V11, but will disappear for small order sizes. Inversely, venues with the largest fill probability like V1 dominate the allocation for small order sizes, but their weight will be small for larger order sizes. Some venues, like V6 and V9, seem to have a wide range of order sizes where they are relevant. For example V6’s weights starts gaining importance at the $50k mark and stay relevant all the way to the 20M$ size, whereas V1’s weight decreases quickly after the $100k size to reach a very small fraction at the end. Figure 1: Weights map: for each Vt order size, we show the stacked weights of venues for LN LSE. The x axis on the upper graph is in linear scale and on the bottom graph it’s in log scale. RESULTS FOR THE GERMAN MARKET In Germany, there are two block venues, V17 and V16, with a tail exponent of 0.19 and 0.1 respectively (cf Table 2). Figure 2 shows that only V16 manages to acquire a good part of the allocation as the order size increases, becoming quite dominant if the order size is larger than 10M$. When looking at the statistics of both venue, we find this to be normal: the fill probability of V16 = 0.72% is higher than the 0.62% of V17. At the same time, V16 has a fatter tail (α(V 16) = 0.1 < α(V17) = 0.19). So overall, V16 has liquidity more often and when that liquidity is present, it has a higher probability of being larger. So overall, V16 is a better pool than V17 and the algorithm is right in allocating more to it. Table 2:Statistics of venues used for simulation over the whole GY market. Venues are sorted descending by the average trade size. Figure 2: Weights map: for each Vt order size, w7 show the weight of each venue for the whole GY market. The x axis on the upper graph is in linear scale and on the bottom graph it’s in log scale. We present an algorithm for allocating a parent order between several dark pools. We use the algorithm presented in Optimal Allocation Strategies for the Dark Pool Problem. We simulate the behavior of the algo using estimates based on single stocks as well as markets. The algorithm shows that the allocation should depend on the order size we are trying to split. If the order size is small, give more weights to the most active venues. If the order size is large, then we should give more weight to block venues. Alekh Agarwal, Peter Bartlett, Max Dama, Optimal Allocation Strategies for the Dark Pool Problem, Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 9, pages 9-16, May 2010.
OPCFW_CODE
import {Observable} from 'rxjs/Rx'; import {Injectable} from 'angular2/core'; import {BurstInterval} from './burst-interval.service'; @Injectable() export class Doctor { public responseDelay = {min: 2, max: 5}; // Seconds public inActivityResponsesLimit = 5; // Kicked out of the treatment session when reached. private _submittionMessages = { "food": {// Level 1 is the subject "want": {// Level 2 is the action "to": {// Level 3 is the relation to level 4 "take": "So just take", // Level 4, Level 2 in relation of Level 3 produces message by Level 4 "help": "Pick a number within your food" }, "a": { } }, "got": { } }, "game": { "play": { }, "pay": { } } }; private _overThinkingMessages = ['Stop thinking about fat ladies', 'Hand me your phone to type you\'r fucking message', 'I will burn this phone, just type ENTER']; private _burstsMessages = ['I need cash', 'Just kill me', 'I like myself', 'Let\'s flip a coin', 'When your funeral takes place?', 'I\'m all out of love']; constructor() {} randomArrayIndexes(arrayLen: number) { // fill(null) is used to able map iterate return Array(arrayLen).fill(null) .map(val => this.randomIntegers(0, arrayLen-1)) .filter((val, ind, arr) => (arr.indexOf(val, ind + 1) < 0)) // Filter duplications } randomBursts(treatTime: number) { // treatTime units are seconds let indexes = this.randomArrayIndexes(this._burstsMessages.length) // Min/Max delay for each message once the brust begin let minDelay = 60; let maxDelay = 180; if(minDelay > maxDelay) minDelay = Math.floor(maxDelay/2); if(maxDelay > treatTime) maxDelay = treatTime; return Observable .from(indexes) .concatMap((indexVal: number, indexIndex: number) => { return Observable .of(this._burstsMessages[indexVal]) .delay(this.randomIntegers(minDelay, maxDelay) * 1000); }); } overThinkingResponse() { return this._overThinkingMessages[this.randomIntegers(0, this._overThinkingMessages.length-1)]; } private compondResponse(messagesArray) { return 'Compond response'; } submittionResponse(msg): string { // The argument name was messagesArray // if(messagesArray.length === 1) { if(true) { // For now I disabled the compondResponse // let msg = messagesArray[0]; for(let level_1 in this._submittionMessages) { // Level 1 if(msg.indexOf(level_1) < 0) continue; // If no match continue for(let level_2 in this._submittionMessages[level_1]) { // Level 2 if(msg.indexOf(level_2) < 0) continue; // If no match continue for(let level_3 in this._submittionMessages[level_1][level_2]) { // Level 3 if(msg.indexOf(level_3) < 0) continue; // If no match continue for(let level_4 in this._submittionMessages[level_1][level_2][level_3]) { // Leve 4 if(msg.indexOf(level_4) < 0) continue; // If no match continue return this._submittionMessages[level_1][level_2][level_3][level_4]; } } } } return msg; } else { return this.compondResponse(messagesArray); } } typingResponse() { } randomIntegers(min: number, max: number): number { return Math.floor(Math.random() * (max - min + 1) + min); } }
STACK_EDU
Confusion about how java web session handeling works. Demystifying Cookies and Header differences using servlet api and HttpSession object I am learning Spring security and Spring MVC, but I realized I needed to learn jsp Servlets first and general web programming in a java environment. I have confusions surrounding the HttpServletRequest and HttpServletResponse objects and how they can be used to add headers to the request and response objects and how they relate to sessions. As far as I understand, a cookie is a type of header just like Content-type and Accept. The java servlet api just makes it easy to work with the header by using methods specific to the context in which the header is being used. For example: response.setContentType(String mimeType) response.setContentLength(int lengthInBytes) My confusion starts here.. Cookie is not a String or int, its a object: response.addCookie(Cookie cookie) response.getCookies() Since a cookie is a type of header, can't I just use something like this: String cookieVal = response.getHeader("cookie") I'm having difficulty understanding session management and how it relates to the HttpServletRequest and HttpServletResponse API.. What is the HttpSession object for? HttpSession.getAttribute() // What is this getting?? HttpSession.setAttribute("Bla Bla", "valuetoset") // What is this setting? A good place to start: http://en.wikipedia.org/wiki/HTTP_cookie You can read the RFC describing Cookies and the related headers, Set-Cookie and Cookie to understand what they are. You can go through Chapter 7 of the Servlet Specification if you want to understand in detail how Cookies and Sessions are related. You first need to understand that HTTP is a stateless protocol. This means that each request that a client makes has no relation to any previous or future requests. However, as users, we very much want some state when interacting with a web application. A bank application, for example, only wants you to be able to see and manage your transactions. A music streaming website might want to recommend some good beats based on what you've already heard. To achieve this, the Cookie and Session concepts were introduced. Cookies are key-value pairs, but with a specific format (see the links). Sessions are server-side entities that store information (in memory or persisted) that spans multiple requests/responses between the server and the client. The Servlet HTTP session uses a cookie with the name JSESSIONID and a value that identifies the session. The Servlet container keeps a map (YMMV) of HttpSession objects and these identifiers. When a client first makes a request, the server creates an HttpSession object with a unique identifier and stores it in its map. It then adds a Set-Cookie header in the response. It sets the cookie's name to JSESSIONID and its value to the identifier it just created. This is the most basic Cookie that a server uses. You can set any number of them with any information you wish. The Servlet API makes that a little simpler for you with the HttpServletResponse#addCookie(Cookie) method but you could do it yourself with the HttpServletResponse#addHeader(String, String) method. The client receives these cookies and can store them somewhere, typically in a text file. When sending a new request to the server, it can use that cookie in the request's Cookie header to notify the server that it might have done a previous request. When the Servlet container receives the request, it extracts the Cookie header value and tries to retrieve an HttpSession object from its map by using the key in the JSESSIONID cookie. This HttpSession object is then attached to the HttpServletRequest object that the Servlet container creates and passes to your Servlet. You can use the setAttribute(String, Object) and getAttribute(String) methods to manage state. So I'm trying to understand this in http standard response or request terms. There are several keys in a request header such as Accept, Content-type... And these keys have values such as "Application/json" etc.. So what does the header key named cookie contain? Does it contain "JSESSIONID=3894839849328" or something like that? Or is there a new key of the header created called JSESSIONID that is just referred to as the cookie? In that case, every header key is a cookie. @TazMan On the first request, the Servlet container will respond with a header like Set-Cookie:JSESSIONID=0444B0FEFC39BC52343C4DE6AB2AF492; Path=/so/; HttpOnly. The client can then send the header Cookie:JSESSIONID=0444B0FEFC39BC52343C4DE6AB2AF492 to identify its user. On chrome, you can press F12 to and look at the network tab to see header values for each request the browser makes. Thank you. So the Set-Cookie is a header attribute sent by the server. And Cookie is the header attribute sent by the client to the server right? If you could confirm my logic to conclude this please: If I have some code like this: HttpSession session = request.getSession(); and then have session.setAttribute("productCode", product.getCode());and add another attribute to the session like this: session.setAttribute("productName", product.getName()); The Cookie header in the http request will be: cookie: productCode=somecode; productName=someName.. Right? @TazMan No. The session attributes have nothing to do with the Cookie. The attributes are stored and managed server side. They are never returned to the client in a header. This is how the server manages state. I'm so lost.. Still trying to understand a lot of what you mentioned in your answer. See I'm completely new to web development. I understand statelessness and headers.. I don't understand cookies and how java handles them. HttpSession is used to work with cookies and manage state.. @TazMan We use HTTP headers to transmit cookies. Because cookies usually hold much more information than a typical header, the Servlet API offers a few classes and methods to simplify the task of reading them from a request adding them to a response. Read the various articles linked in my answers. They are long but very informative. You are correct that cookies are managed using headers. There are TWO cookie management related headers: Cookie and Set-Cookie. Cookie header is sent by the user agent (browser) and will be available in your HttpServletRequest object and the Set-Cookie header is appended to your HttpServletResponse object when you use methods such as addCookie(Cookie). In Java an HttpSession is established when the first request reaches your application. The Servlet Spec implementation in your container (Jetty, Tomcat, WebSphere, etc) will create and manage the HttpSession. The browser will receive a JSESSIONID cookie which will identify this particular session in the future. Can you give an example of what values the 2 header keys would contain? Cookie key of the header and the Set-Cookie of the header. These would be header keys just like Accept and Content-type right?? As I understand it, a response or a request contains a list of key-value pairs. Like this: (headername: headervalue) Agreeing with the answers given above, I would like to conclude that Cookie and Session are two different entities in the world of web. Cookie Cookie represents some brief information that's generated by server and stored on client(browser). According to HTTP mechanism, browser have to send all the cookies(that have not expired), that server had sent before to browser. Session HTTP is a stateless protocol. Unlike FTP and other protocol, where connection state is preserved between multiple request-response transaction, in HTTP connection is established for one request and it's closed when response for that request is satisfied. This flaw in HTTP is present, because it was designed in early days to serve static web pages only. But as web has expanded, it's now used to serve dynamic full-fledged webapps. Thus, it has become necessary to identify users. Thus, for every request served by web-server, a labeling mechanism is required which can identify user of each request. This identification of user of request(whether the request has came from same user, same machine), sessions are used. Session can be successfully implemented only if web-server can receive any information about the user in the request. One way of making this information available to user is Cookie. Others are URL rewriting, hidden fields, etc. session.setAttribute() will store information in current session on server side not on client side(browser). Hope it may help you. Ok Looks like you want to see the difference between Cookies and Headers. They have different purpose. Cookies are temporary storage of information on client side. Server set the cookies(data) on the response and once set browser send these cookies(data) with each subsequent requests till the cookie expires. But headers are used as hints to browser and server. For ex setHeader("Content-Type", "application/json"); will inform client to prepare to see a json response in the payload. Since it is a "one time" information there is not need the browser to send that information back to the server with each new requests like cookies.
STACK_EXCHANGE
Classifying space commutes with geometric realization - reference request Let $G$ be a nice topological group, say a compact connected Liegroup. Then one can construct a model of its classifying space as $EG/G$ where $EG$ is any contractible space with free $G$ action. On the other hand, one could take the geometric realization of the simplicial construction of the classifying space of the singular simplices of $G$: $|\bar{W} (sing G)|$ Is it written down somewhere that these two spaces are weakly homotopy equivalent (or some variation of this statement)? What exactly do you mean by "the simplicial construction of the classifying space"? By the way, for two connected CW complexes X,Y any weak equivalence X->Y is already a homotopy equivalence. This is due to Whitehead. Classifying spaces of a group G can be characterised to be the homotopy types which have pi_1 the group and no other homotopy. Given two spaces X and Y that satisfy this property, one can construct an explicit morphism X->Y that will in turn be a weak equivalence. By "the simplicial construction" I mean the for example the construction in Goer Jardine simplicial homotopy theory chapter V.4, but I think I would be fine with an answer refering to any construction. Your criterion with pi_1 makes only sense for discrete groups, for other groups one has pi_i+1(BG)=pi_i(G), but I don't think this characterizes the homotopy type of BG uniquely. I see, I somehow missed "topological group" entirely... You're right that the shifting of the homotopy groups doesn't characterise the homotopy type of BG uniquely. So probably one should use the Quillen adjunction of the singular construction and the geometric realization. One needs stronger assumptions that just a free action if you want EG/G to be well-defined up to homotopy. You need the projection EG-->G to be a principal G-bundle, so the action needs to have "slices." An extreme example is that every topological group G acts freely (by translation) on $G^{\textrm{ind}}$, the set $G$ with the indiscrete topology. This is certainly a contractible space, but $G^{\textrm{ind}}/G$ is a point. I think you should be able to prove this roughly as follows: first consider the loop space of your construction. For nice simplicial spaces, the loop space can be calculated level-wise (see May's Geometry of Iterated Loop Spaces, for instance), and hence the loop space of your construction is homotopy equivalent to the (realization of the) simplicial space $[n] \mapsto \Omega |\overline{W} Sing_n (G)|$. But this space is level-wise weakly equivalent to $[n] \mapsto Sing_n (G)$, whose realization is homotopy equivalent to $G$. So the loop space of your construction yields $G$, and now you need to apply some form of the statement: $B\Omega X \simeq X$. Also, note that it's a standard fact (due to Segal's "Classifying spaces and spectral sequences" and/or May's "Classifying spaces and fibrations") that the simplicial bar construction applied directly to your topological group $G$ gives a model for $BG$ (this requires the inclusion of the identity of $G$ to be a cofibration, which is of course true for Lie groups).
STACK_EXCHANGE
Let me explain the situation: I took a new job which requires me to move to a new town for the duration of 6 months. However, since my friends and partner are living in my old hometown, I’ll probably travel there most of the weekends. And since I don’t have a car, I will be traveling by train. The Deutsche Bahn offers several discounts for people who travel a lot. For a fixed price, you can get either 25% or 50% off for every ticket you buy for the duration of one year. So the question we asked here is: Which offer saves me the most money? When I was facing this problem, I thought the best solution would be a visual one, that is, make a plot and directly see how often I would have to travel for each discount to be the cheapest. As a tool to do this plots I decided to use GnuPlot, which I’m familiar with from my studies. Any other plotting tool works equally well, though. A ticket for one trip costs 23 euros, given that I have to take the return as well, that is 46 euros per weekend without discount. The “Bahncard 25” which yields a 25% discount for one year costs 62 Euros, the “Bahncard 50”, which yields 50% discount for one year costs 255 Euros. For each situation, we want to plot a function gives the total cost over the number of travels. In the undiscounted case, calling our function f, that would be as simple as f(x) = 46*x. Where x is the number of travels. For the Bahncard 25, we obtain g(x) = 62 + (46-(0.25*46))*x = 62 + 0.75*46*x. 62 is the flat price for the Bahncard, which we only pay once, for each trip, we pay 46 Euros minus 25%, that is, 46 – 0.25*46. But subtracting 25% is the same as just paying 75% of the original price, hence we just use the final factor of 0.75*46. Analogously, we obtain the function for the Bahncard 50 to be given by h(x) = 255 + 0.50*46*x All we have to do now is to put this data into GnuPlot. Starting up GnuPlot for the first time you will see a screen like this This might look intimidating at first, but trust me, it is not that hard to use. In fact, we can enter our functions the same way as we did here on the blog: Now GnuPlot knows what f, g, and h are. So all we have to do is to tell it to plot them over a reasonable range. Given that I am in the new town for 6 months, which is roughly 24 weekends (assuming 4 weekends per month), 0 to 30 might be a reasonable range. We can do this by simply typing plot [0:30] f(x),g(x),h(x) plot just tells GnuPlot to plot something in 2D, in the squared brackets we give the range of our plot—in our case 0 to 30—and then we just give a list of the functions we want to plot, separated by commata. Hitting enter, you should receive the following graph: Hovering over the intersections, in the bottom left corner you can see the exact coordinates of the mouse cursor. Hence we can evaluate that: - If I do less than 6 trips, it is cheaper to take no Bahncard at all, - if I do between 6 and 16 trips, the Bahncard 25 is cheapest, and - if I do more than 16 trips, the Bahncard 50 will be the cheapest solution for me. Neatening things up Having the graphs called f(x), g(x), h(x) isn’t really helpful when just looking at the plot. Especially if you have more than three cases. We can fix this by changing the plot function to plot [0:30] f(x) title "No Bahncard",g(x) title "Bahncard 25",h(x) title "Bahncard 50" Then our graph looks like this: Also, we can put some labels on the axis, using the commands “set xlabel” and “set ylabel”. For example, if we write set xlabel "Number of trips" set ylabel "total cost" and then the usual plot command, our graph will look like this: To enhance readability, we can activate a grid behind the graphs by using the “set grid” command. By default, this will show gridlines at the labels, in our case that is vertical lines at 5, 10, 15, 20 and 25 on the x-axis and horizontal lines at 200, 400, 600, 800, 1000 and 1200 on the y-axis. We can further customize this by using the “set xtics” and “set ytics” to change the positions of the labels on the axis. In our case having a vertical line at every number and no vertical lines might be most helpful, so we write set xtics 1 unset ytics set grid before our plot command to receive the following graph: The Full Code f(x) = 46*x g(x) = 62 + 0.75*46*x h(x) = 255 + 0.50*46*x set xlabel "Number of trips" set ylabel "total cost" set xtics 1 unset ytics set grid plot [0:30] f(x) title "No Bahncard",g(x) title "Bahncard 25",h(x) title "Bahncard 50"
OPCFW_CODE
Best practices for cross platform git config? Context A number of my application user configuration files are kept in a git repository for easy sharing across multiple machines and multiple platforms. Amongst these configuration files is .gitconfig which contains the following settings for handling the carriage return linefeed characters [core] autocrlf = true safecrlf = false Problem These settings also gets applied on a GNU/Linux platform which causes obscure errors. Question What are some best practices for handling these platform specific differences in configuration files? Proposed solution I realize this problem could be solved by having a branch for each platform and keeping the common stuff in master and merging with the platform branch when master moves forward. I'm wondering if there are any easier solutions to this problem? Related question about line endings & git: http://stackoverflow.com/questions/1249932/git-1-6-4-beta-on-windows-msysgit-unix-or-dos-line-termination I have reviewed that kind of config setting (crlf) extensively in the question: distributing git configuration with the code. The conclusion was: checkout/checking .gitattributes files list all types which explicitly need that kind of conversion. For instance: *.java +crlf *.txt +crlf ... avoid doing any kind of conversion of type of files which don't need it, because of the various side-effect of such a conversion on merges, git status, shell environment and svn import (see "distributing git configuration with the code" for links and references). avoid any crlf conversion altogether if you can. Now, regarding the specific issue of per-platform settings, branch is not always the right tool, especially for non-program related data (i.e; those settings are not related to what you are developing, only to the VCS storing the history of your development) As stated in the question Git: How to maintain two branches of a project and merge only shared data?: your life will be vastly simpler if you put the system-dependent code in different directories and deal with the cross-platform dependencies in the build system (Makefiles or whatever you use). In this case, while branches could be use for system-dependent code, I would recommend directory for support tools system-dependent settings, with a script able to build the appropriate .gitattributes file to apply the right setting depending on the repo deployment platform. Never turn autocrlf on, it causes nothing but headaches and sorrows. There's no excuse to use \r\n on windows, all decent editors (by definition) can handle \n. some examples of headaches and sorrows would be a great addition to this answer. meh - most editors /can/ handle just \n, but default to platform line endings, as they should. @naught101 if you're working with other developers and you're not all using the same autocrlf setting, you could end up with a ton of merge conflicts with files/changes that shouldn't actually conflict, just because the line endings are different. This is especially dangerous with the way that Git, be default, auto-commits merges it thinks were successful, and because diff editors can be set to treat whitespace as non-significant. Poor or non-existent code review by your dev team could cause you to reintroduce fixed bugs/cause dangerous code regressions. you can change autocrlf with this command git config --global core.autocrlf false I'm late, but I've got an example - Cygwin can't handle CRLF endings in shell scripts. That's okay normally, but when that script is the build script for half of the team's projects... lots of headaches from people thinking the build system was broken or the build was flawed when they'd just forgotten to run the Dos-to-Unix script. Finally changed the repo settings so that doesn't happen. Building docker imagens and any kind of Linux shell or DevOps has lots of headaches My embedded IDE generates files with CRLF and doesn't work if they are changed to LF. I think you should have the .gitconfig depend on the operating system the user is using. Windows users don't need autocrlf at all while Linux users do. E.g. save the text files with crlf and have Git convert the files automatically back and forth for the Linux users. You might also want to check .gitattributes, which allows you to define which files are converted and which are not. If you have the configuration files in just one place, you could define that the conversion is only done in that directory just to be on the safe side. linux file endings are the default. It is the norm for git repositories. So the files are already in the correct format for the linux users. Your point is backwards.
STACK_EXCHANGE
Before you begin Plans: Professional and Enterprise This article contains answers to common questions about our analytics feature: |Refresh & timeout |Seats & access |Sharing looks & dashboards |Editing looks & dashboards Which sites can I see analytics for? You can view data for any of the sites you've been assigned access to through your user account. What are the different folders used for? - My Folder: If you are a creator, this is where you can store any looks and dashboards you create for your own use. Only you can access the content in this folder. - Group: This is where the looks and dashboards created by your organization are stored. Anyone at your organization with access to analytics can access the content in this folder. - Shared: This is where our ready-made looks and dashboards are stored. You can't edit the content in this folder, even if you have creator permissions. However, creators can use them as a base for creating new looks and dashboards. To learn how, see Create a new look or Create a new dashboard. How do date filters work? The following list explains the most common date filters in analytics: - Filters that use is in the last and months are based on 30 day increments. For example, a "is in the last 1 months" filter would show data from the past 30 days (not including today), and "in the last 2 months" would show data from the past 60 days. - Filters that use is in the last and complete months are based on past calendar months. For example, if you applied a "is in the last 1 complete months" filter on July 15, it would show data from June 1-30 (which is the last completed calendar month). - Filters that use is in the last and complete weeks are based on past weeks, defined as Sunday to Saturday. For example, if you applied a "is in the last 1 complete weeks" filter on Wednesday August 2, it would show data from Sunday July 23 to Saturday July 29 (which is the last completed calendar week). - Filters that use is previous displays data from the last completed day, week, month, etc. For example, "is previous month" shows data from the last completed month. This is similar to "is in the last" and "complete months", "complete weeks", etc. However, "is previous" only displays one week, month, etc. at a time, whereas you can display multiple months, weeks, etc. with "is in the last". - Filters that use a is in range do not include the end date that you select. For example, if you selected July 1 to 31, it would show data from July 1 to 30. To include July 31, you would need to set the filter from July 1 to August 1. How do time zones work in Analytics? The time zone you view looks and dashboards in can impact the data displayed. For example, if a dashboard tile displays the number of closed work orders yesterday, this number will likely differ depending on whether you view it in Eastern Standard Time or Singapore Standard Time. Each look and dashboard has its own time zone settings. They can be set to either: - Use a specific time zone, such as America - Los Angeles. - Viewer time zone, which automatically uses the viewer's current local time zone. For example, if the viewer was in the Greater Toronto Area, it would use America - Toronto. Dashboards have an additional Each tile's timezone setting that allows each tile to use its own time zone settings. For example, you could choose this setting for the dashboard and then have one tile that displays data from your Detroit site using the Eastern Standard Time and another that displays data from your Dallas site in Central Standard Time. When viewing a look or dashboard, you can temporarily change the time zone without editing the actual settings. To learn more see Change the time zone while viewing a look and Change the time zone while viewing a dashboard. When you create a new look, dashboard, or tile, the default time zone settings differ depending on how it was created: |Default time zone setting |Add a new visualization to a dashboard The new tile respects the dashboard's time zone settings. If the dashboard is set to use each tile's time zone, the new tile uses the America - Los Angeles time zone. This is the overall default time zone for Analytics. |Copy a look or dashboard The new dashboard uses the same time zone settings as the one you copied. Note: All the looks and dashboards we provide use the Viewer time zone setting. This means that if you copy these looks and dashboards, the copies will respect the viewer's time zone by default. |Save a look as a new dashboard |The new dashboard uses the Each tile's time zone setting. |Save a look to an existing dashboard The new tile respects the settings of the dashboard. If the dashboard is set to use each tile's time zone, the new tile uses the same time zone as the look it originated from. |Save an explore query as a new look The new look uses the time zone selected in the Explore view. To learn more, see the following articles: - Change the time zone for a look - Change the time zone for a dashboard - Change the time zone for a tile Refresh and timeout How often do dashboards refresh? Our ready-made dashboards do not refresh automatically. You can refresh the dashboard layout by clicking the reload button, or refresh the queried data by clicking Clear cache and refresh in the menu. Refresh frequency can be configured per dashboard, as well as for individual tiles on a dashboard. For example, you could configure a dashboard so that most of its tiles only refresh once an hour, but one specific tile refreshes every 15 minutes. To learn more, see Change dashboard settings. To optimize system performance, we recommend setting your refresh frequency to 15 minutes or more. Refreshing more frequently can negatively impact performance. How do I refresh my dashboard? The reload (or refresh) button refreshes the dashboard display, but doesn't refresh the queried data. For example, after you select filters, you need to click the reload button to apply them: Unlike the reload button, clicking Clear cache and refresh refreshes the queried data. For example, if you've just closed a work order, you might need to clear the cache before your look or dashboard reflects the change. To clear the cache, open the menu and select Clear cache and refresh: Tip: For looks, this item is in the gear menu instead. What is the session timeout for analytics? The session timeout for analytics depends on the timeout configured in your user account: - If there isn't a session timeout configured in your user account, your analytics session times out after 1 hour. - If the session timeout in your user account is 4 hours or less, analytics uses the same session timeout. For example, if the session timeout in your user account is 2 hours, your analytics session also times out after 2 hours. - If the session timeout in your user account is longer than 4 hours, your analytics session times out after 4 hours. For example, if the session timeout in your user account is 5 hours, your analytics session still times out after 4. To learn more about configuring session timeouts for users, see Change the session timeout for a user account. Seats & access What do the access levels for analytics mean? There are 3 access levels for analytics: - Viewer, which allows you view-only access to analytics. Viewers cannot filter, download, share, create, or edit looks and dashboards. - User, which allows you to view, filter, download, and share looks and dashboards, but doesn't allow you to edit them or create new ones. - Creator, which allows you to do all the things Users can do, plus the ability to create and edit new looks and dashboards. To learn more about managing access to analytics, see Assign analytics seats. How many seats do I have left for analytics? The number of creator and user seats your organization is granted depends on the plan you're subscribed to: In addition to these seats, you are also granted enough viewer seats for each CMMS user at your organization. This means that if you have (for example) 100 CMMS total users, you would be granted 100 viewer seats. For your convenience, the CMMS displays the number of seats you have left for each access level directly where you assign access on the Analytics tab in Settings > Users: How can I find out who our seats have been assigned to? You can find out which users are assigned analytics seats using the Analytics Permissions dashboard. To learn more, see Analytics permissions dashboard. What happens if a user is deactivated? If a user assigned an analytics seat is deactivated (i.e. their status is changed to inactive, passive, or deactivated), their analytics seat is automatically removed and returned to the "pool" of available licenses. For example, if an employee leaves your organization, their analytics seat is freed up as soon as you deactivate their Fiix user account. Additionally, some of their analytics content will also be deleted: - Any content they created in their personal folder (i.e. My Folder) is deleted. - Any content they created in the Group folder is retained. - Any schedules they created are deleted. This content is deleted permanently and cannot be recovered, even if their user account is reactivated. Similarly, their analytics seat won't be reassigned to them automatically; an administrator must reassign it to them manually. What happens if I remove creator access to Analytics? If you remove someone's creator access (i.e. change their access level from Creator to any of the other options), any content in their personal folder and any schedules they created are deleted automatically and cannot be restored. For this reason, we recommend taking extra care when changing a creator's access level. Sharing looks & dashboards Why can't I test my dashboard delivery schedule? If you click Test now while setting up a schedule, an error occurs. This is a known issue that will be fixed in a future release. As a workaround, you can save the schedule and then click Send now: Editing looks & dashboards Which looks and dashboards can I edit? If you are a creator, you can edit any looks and dashboards that you created. Although you can't edit the ready-made looks and dashboards we provide (located in the Shared folder), you can use existing looks to create new ones. To learn how, see Create a new look or Create a new dashboard.
OPCFW_CODE
To an average person, the question “What is an API?” might seem pretty straightforward. However, this is not exactly a simple question. If you don’t have any experience with code whatsoever, APIs may confuse you a bit. In order to help you understand what an API really is, we’re going to cover the basic definition of APIs, try to explain why they are so important, and even tell you how to design one. And don’t worry, the language we used is specifically geared toward people with little to no coding knowledge. However, some of the more experienced readers might find something useful here. Without further ado, let’s dive right into the subject… What are APIs in the First Place? For starters, we need to explain what the phrase “API” even means. It’s an abbreviation for “Application Programming Interface.” Basically, they give both individuals and organizations the power to add more functionalities to their site/platform/app without writing a single line of code. You can do this by simply integrating the code of a certain API into your existing code. And while you’re not aware of it, you encounter APIs on a daily basis. For instance, if you bought something today from an online retailer, you’ve definitely interacted with an API along the way. A vast majority of online stores use PayPal and Stripe APIs. Another API you probably encounter every day is the Facebook API that allows you to log into a site using your Facebook login information. Unlike the ones we mentioned above, some APIs are not outward-facing. Some are used as internal business tools. For instance, an organization can use an API to fill out paperwork, schedule meetings, and even print labels automatically. Why are APIs so Important? Now that you understand the basic concept of APIs, the next logical question is – why are they so important? In short, APIs have changed the way individuals and organizations alike build their platforms and products by giving them the ability to add more functionalities without much effort. That ability is especially liberating for small and micro companies that don’t have the same resources as their large counterparts do. A small company can even use an API as the foundation for its service or product. Just take a look at Lyft, the on-demand transportation company uses Twilio’s API to send messages to their drivers. The Twilio API is practically stitched into the fabric of the platform. On the other hand, some companies use APIs to boost the value of their product/service/platform. Check out RapidAPI’s article for more API examples and how to implement one properly. Here are some of the other reasons to use an API include: · Saving time and money The biggest reason to use an API is to save time and money. Programming takes time and in some cases, it takes months to build a certain functionality from scratch. By using an API, you can take advantage of a function you need without spending months and thousands of dollars developing it. · Continual Innovation In most cases, an API is created by an entire team of developers, who make constant improvements to it. This results in an ever-improving user experience for people who use the API. Some APIs can even be customized to resemble any company’s look and feel. · Data security An API that deals with sensitive data – think how Stripe deals with payment processing, for example – are usually equipped with compliances to protect the data in question. If your organization needs safe money transactions, an API can solve all the security problems for you. · Specialization and focus Lastly, by using an API, members of any organizations are able to shift their focus on other aspects of their operation. An API allows them to leave the non-core, technical operations of their business to companies that were actually build to focus on those operations. How to Design an API? As we said before, designing an API is definitely not an easy job. In a best-case scenario, an experienced team of programmers can build an API in six months or so. However, some APIs take years to see the light of day. You have to understand that numerous stages of an API’s lifecycle. That’s why most experienced teams use platforms such as Stoplight to closely observe every step of the way, test the API, and make tweaks along the way. And as with any other product, there are good and bad APIs out there. Here are a couple of things that separate the good ones from the bad. Although the term “documentation” doesn’t mean much to your average person, it really means something to an average developer. In essence, documentation is sort of like a manual for an API. Without proper documentation, an organization wouldn’t know how to implement an API into their platform/app/site. Documentation needs to be easy to digest, straightforward, and needs to have great examples of code. · Multilingual SDKs Since you’re probably not familiar with this term either, SDK stands for “Software Development Kit.” The more these kits your API has, the more programmers will use it – simple as that. In turn, that will make it faster and easier to get a working integration. Some of the most widely-used programming languages include PHP, C++, and Python, just to name a few. · Testing opportunities No one buys a car without test-driving it first, right? The same goes for an API. If you want to have a fully-functional API, you – or your developers – need to have the right tools for testing. The testing tools allow the developers to get a feel for the API and maybe even experiment a little. Most testing tools have a testing sandbox environment, test mode, and an intuitive API dashboard. · Developer support Every now and then, a developer will have a question about the API he’s using. And if he doesn’t have access to developer support that could answer his questions, he’ll probably just become frustrated at a certain point, and stop drop the API. And that’s why access to developer support makes a big difference in the world APIs. The Bottom Line on APIs And there you have it – we hope this article answered some of your questions about APIs. Now that you know what an API is, how it works, and what’s needed to design one, you can start digging deeper. The Internet is filled with resources, and if you want to know more about APIs, just start browsing around. If you have any additional questions or if you feel that we left out something crucial, feel free to tell us all about it by leaving a comment in the section below.
OPCFW_CODE
Can’t seem to get something to synth right? Curious about what lens is best? Ask your fellow Photosynthers here. K suggested that I start a thread, sort of as a rallying point for other people who are doing kite aerial photography, and to discuss what works, what doesn't, and what ideas might create better synths of a given location. So if you're doing kite aerial photography, please chime in. Here are a couple of other people I've found doing KAP synths (in the hopes they read this and provide their input!) Lots of kite aerial synths to look at. Any pointers, please let us know! If anyone's interested in getting into KAP, here are a couple of URLs to get you going: Gear Supplier (Americas): Gear Supplier (Europe): Flickr KAP Groups: I'm trying to keep track of what gear I use on each flight so I can include it in the descriptions of my synths. I hope this is useful for anyone who's interested in getting into KAP. Wow, nice work rounding everyone up. Now I'm off to explore KAP synths... Photosynth came up recently on the KAP forum, so there may be more KAPers here soon. (I hope so, anyway!) In time I expect the list will grow. Ok, here's a consideration for doing aerial synths from a kite: There are two broad-stroke categories for how KAP rigs work. One is to use a radio of some sort to control the camera and rig (RC KAP). The other is to have a controller on the rig so it controls itself and tells itself when to take pictures (AutoKAP). Cameras running an intervalometer fall into the latter category. When using an RC KAP rig, there's a tendency to only take pictures when you have a subject framed up. Which is great for doing still photography, but not necessarily for creating synths. When making synths, I'm learning there's a lot of utility in taking transition pictures while panning between subjects, or when moving from one location to another. KAP flights where I walk the rig over to a subject, photograph that subject, then walk to another subject, etc. don't synth well. They wind up with low synth percentages and multiple synths you have to switch between. KAP flights where I never stop taking pictures, even when moving from one subject to another, tend to synth better and have more unified views. With an RC KAP rig, this is a conscious decision that needs to be made while the flight is in progress. With an AutoKAP rig, this more or less happens on its own since one of the main characteristics of an AutoKAP rig is that it never stops taking pictures. If the goal of the KAP flight is to create a synth, I think the better route is to use an AutoKAP rig in the first place. Which is a good thing, since AutoKAP rigs are a lot more straightforward to build than an RC KAP rig. They're cheaper, too, since there's no requirement to have a ground-side transmitter. If anyone is interested in building an AutoKAP rig for making aerial synths, there are a couple of different types of rigs inside the broad category of AutoKAP worth considering: The first is simply to have a camera running an intervalometer that's suspended from a kite line. A number of cameras come with intervalometers on them, and the bulk of the Canon compact cameras can run CHDK or SDM, both of which offer intervalometer scripts as part of their standard offering. The only trick there is to come up with a way to mount the camera so it can be suspended from the kite line. This is one example: It only looks down, and was developed specifically for doing ortho shots of archaeological sites. But the same idea could be used with the camera tilted up at an oblique angle. There is no provision for repositioning the camera mid-air. It's simply a way to hold the camera so that it can take pictures on its own. Just to demonstrate that this idea has some merit, it's how this synth was made: The second method is to use a camera that has a fairly wide field of view, and to tilt it at a fixed angle while rotating it around the pan axis with a servo. This is the idea behind the BEAK rig: It looks down at a fixed angle, but rotates, taking pictures periodically so there is overlap for stitching panoramas. Or, in the case of Photosynth, so there is overlap for connecting the photos into a synth. This approach is slightly more expensive than the first type, but not by much. Parts for a BEAK kit run about $80US, minus the kite, line, and camera. In terms of camera gear, it about on par with a good lightweight tripod. (To be fair the tripod I'm lusting after at the moment runs about $650, so my definition of "good" and "lightweight" may differ from that of others.) The third method is to use a fully articulated rig that has both pan and tilt axes, and to use a controller that can drive both. One of the more common controllers for this is the AuRiCo, and is carried by most KAP gear suppliers. It's basically a limited version of a Gigapan controller, but designed for hands-free airborne operation. Depending on how its switches are set, it will move through one of several patterns of pan and tilt, taking pictures all the while. This setup is popular at events like kite festivals because the rig can take pictures all day long, often resulting in a number of good 360x180 degree hemispherical panoramas. Read that another way, and it makes for lots of overlap for creating synths. This is a typical AuRiCo rig: This is only slightly more expensive than the BEAK, only really needing one additional servo to complete the picture. And if you do have an RC KAP rig, the addition of an automatic controller is an inexpensive way to expand your KAP gear to make creating aerial synths that much easier.
OPCFW_CODE
Flashing My Ferris Sweep Using QMK American Tobacco Campus, Durham NC In my previous post, I detailed the build process for my ferris sweep keyboard. After building it, I needed to flash the keyboard with firmware. Like many custom keyboards, it uses qmk. Here’s how I flashed my 34 key layout on the ferris sweep. Create the layout file The first step when flashing a QMK keyboard is to build a layout file. I created my layout using the online qmk config tool. From the tool, I was able to select a ferris sweep base layout, and then assign all the keys to match the layout I previously tested on my ergodox (with a few changes). Setting all the keys took some time, but I should only have to do that once. For future changes, the layout file can be uploaded and edited from the configurator. Once completed, I downloaded the layout file and saved it to my dotfile repo. Install & Setup With a layout file, we can start using QMK to flash the keyboard. In general, there is good documentation about how to do that here. Regardless of your setup, the first step is to install QMK: brew install qmk/qmk/qmk On Linux, it’s suggested to install qmk using pip. So on my Fedora computers: sudo dnf install python3-pip python3 -m pip install --user qmk Note: on Linux I hit a qmk command not found bug. This is because $HOME/.local/bin was not part of the path. So, either add it to your path, or just run the commend directly: Next, you need to run the setup. This will take care of several setup tasks, like downloading the firmware repo and installing dependencies. It will prompt you with a few questions, but most can remain the default value it suggests. It will take a few minutes to download and install everything, but afterwards should be all set. Compile the hex file To build the firmware file, I first created a new keymap directory and downloaded my layout file to it: # Copy downloaded keymap file to compile location cp ryan-test.json ~/Builds/qmk/keyboards/ferris/keymaps/ryan/keymap.json Next, I compiled the new firmware, using the keymap I specified: # Compile layout qmk compile -kb ferris/sweep -km ryan Lastly, I moved the compiled hex file to a location where the QMK toolbox would be able to see it (out of a hidden directory). # Move compiled hex file from build directory to one QMK Toolbox can see (non-hidden) cp ~/Builds/qmk/.build/ferris_sweep_ryan.hex ~/Builds/qmk/ferris_sweep_ryan.hex Flashing the Ferris Sweep To flash the ferris sweep, make sure it is connected to one of the halves. From there, open up the QMK Toolbox application. Load layout file In the configuration tool, open your compiled hex file. In order to flash the keyboard, you have to switch it to reset mode. If you installed a reset switch/button, use that. If not, you can reset the board by connecting the two reset holes using a piece of metal. When I built my sweep, I used a pair of metal tweezers, but I have since graduated to using a bent paper clip (so fancy!). Use whatever works. Due to the reversible PCB for each of my ferris sweep halves, I need to select the handedness for each half when I flash it. Otherwise, it doesn’t know which board is for which hand. To do this in the toolbox, navigate to Tools -> EEPROM in the menubar, and select the handedness you want. You might need to wait a second or two for it to load. With everything set, flash the board by pressing the button! You might want to watch the log to verify it’s actually done before unplugging it. Repeat for the other side After flashing the first side, repeat the process for the other (load the file, reset, select handedness, and flash). Afterwards, all should be working! I’m note positive if this step is required, but I always do it… Some Layout Changes Since writing the post about my layout, I have made a few changes after using the ferris sweep for a few more months. Most of these changes revolve around adding different base layers that I can swap in and out using new keys added to the Navigation layer. In addition to my original base layer, I’ve added: - A second default base layer that does not have Home Row Modifiers - A base layer that uses Colemark instead of QWERTY - A mouse toggle (not base) layer that I forget I have Over time, I’d like to turn my non-home row mod layer into a layout designed for gaming, which is the one thing that this keyboard does not do well right now. But that’s about it. This process took me a little bit to figure out the first time, but now that I know how it’s done, it isn’t that hard. Also, I don’t need to flash firmware as often now that I have a stable layout. Luckily, when I do finally make that ‘gaming’ layout, I’ll have this post to remind me how to flash it! My Obsidian Mobile Setup Building My Ferris Sweep
OPCFW_CODE
Power-gaming requires the ability to plan your character. Playing stereotype characters requires the ability to plan your character. Games that emphasize balance require the ability to plan your character. So, because I like my games kinda old school, I don't care much for the ability to plan your characters. Stats should be rolled randomly, straight down. I also allow the player to swap any two stat values for each other before getting on with the rest of character creations. This allows a meaningful but minimal amount of control over the character's trajectory. Then I randomize the number of items characters get. Usually basic clothes + 1d6 other things. This isn't a rule for how much the character has to their name, but rather how much they have on them at the beginning of the game. Rolled low? Well, maybe your character was just robbed by bandits, and strolled into town without two coppers to rub together. Race and class are the player's choice, but since stats are mostly random, this is another instance of meaningful but minimal control. (Combine that with VERY simple races and classes that only grant abilities at first level, and you have a nice open ended character on your hands.) Lastly, when characters level up, I have them roll on one of two tables. Even levels is a 1d6 table for increases in HP, extra spells, etc. Odd levels is a 1d6 table for increasing one of your stats by 1. I want to give an example of how you can use this to enhance your games, in case you aren't already sold on the idea. EXAMPLE: Grimbug, the orc warrior, had just achieved level 2. His player rolled nice strength at character creation (hence the warrior class), and he has a nice dexterity. In my game, that means he is pretty good at sneaking around. But he is a warrior, so he doesn't think about sneaking all that much. That is, until Grimbug's player rolls on the d6 level up table and Grimbug gets a +1 to stealth (which translates into a 5% increase in stealth check successes in my game). Well, suddenly Grimbug's player is gonna think about stealth more, and probably try it out more often. See, random level-ups denote what the character will be better at in the future. They speak to a part of your story that has yet to be told. Level-up bonuses that are set and selectable are indicative of what has already happened...which has already happened...and can no longer help make your game more fun. A friend of mine was recently bemoaning his players' cliche character and race combos. Dwarven fighters, elvish rangers, halfling rogues, etc. He's running a 5e campaign, but this player trap has been around for a loooooooong time. I'm not saying my style of game is a 100% fix... ...all I'm saying is that when Tordek the Dwarven fighter starts developing magical abilities he doesn't understand (and his player didn't foresee or plan) it certainly makes him a lot more interesting.
OPCFW_CODE
Nonetheless, the aim of this tutorial is to explain the code in such a manner that anybody is welcome to create their own personalized video player. So, without further ado, let’s dive in the secrets of creating a video player! Create the document First things first, you will need to prepare your tools before starting your project. You will need: - A blank .html document (this is where the code will work its magic); - At least two videos (you can easily download some samples from free online sources, like PixaBay.com or Videezy.com to play with for this project); make sure they are both .mp4; - A poster image (that would be a representative picture of your video). For this one, you can download a related image from other free online sources, like Pexels.com or FreeImages.com or you can take a snapshot via VLC (play the video, and then access Video -> Snapshot, and you’re all set); - Icons for the controls of your player (again, you can access without worries websites like FlatIcon.com or IconArchive.com). So, the result should look something like this: For my tutorial, I will use: - Pouring milk into a bottle and a squirrel in nature from PixaBay.com; - A squirrel poster from PixaBay.com also; - Media Mega Pack Outlined from FlatIcon.com; - the FontAwesome to style up the video player; - and finally, the Brackets code editor which is also free as I was attracted by the user-friendly “Live Preview” button in the top right corner, that shows you the result of your work in a web page after you make sure you save the edited .html file. All these are free resources, so unless you want your video player to look a whole lot different, you can easily use the above resources for your own test of my tutorial. Now that we’ve selected and gathered all the tools we need, we can finally go ahead and work on the code. Embed your Video in a web page This is the basic HTML5 skeleton of your future video player. It uses 10 primary code lines that will enable your video to be displayed on any web page with some basic buttons to control it. You will start by marking the HTML version that you are using with the universal doctype declaration <!DOCTYPE html> . This is the first thing with which you start any of your HTML documents so that your browser can be aware of what kind of document you are using. Continue with opening up the script with an HTML tag, and close it with the usual </html>. Move on to the elements that you should include in the HTML format: head tag and the body tag. Now we should focus on what goes around in the body tag. You can’t create a web video without the video element. So, inside the head tags, you insert the tag video and also don’t forget to insert the closing tag. Now, within the video tag, you should state which dimensions your video player should have (it is recommended to set the dimensions for your video player to avoid flickering). The source of the video you want to play in your player and the poster picture that is representative of your video and which the viewers will see before hitting the “Play” button. So, that being said, let’s take one attribute by one and see how it works. The poster attribute is the one you need to create the representative image of your video. Inside it, you should name the folder of the picture (in this case, “Images”) and the name of the image. Next, up, you choose the width and height for your player, and I chose to go with a perfectly symmetric shape. It’s very important to insert the boolean attribute “controls”. Without it, you can control your video only with a right click and then select “Play” or other basic functions. The controls tag will display a basic array of controls: Play, Pause, Volume and Maximize button for a more user-friendly feature. Next up is the source tag, where you specify which is the source of your video with the src tag. As you’ve already created the folder for your video player, your video source will be easy to get recognized by your code just by inserting the name of your video file. As there are three supported video formats for the <video> element: MP4, WebM, and Ogg. You should state what kind of video you are using with the type attribute. It is recommended to insert as many versions of your video as possible, for a better user experience. So, if you also have a .ogg version of your video, you should open another source tag under the .mp4 format, like this: <source src=”videoexample.ogg” type=video/ogg>. Now, if you click the “Video Preview” button, you should see a basic web video player with a poster, control buttons and your video playing perfectly within a frame of your selected dimensions. We enter the CSS area to layout the video player Your video player will be contained in a huge div tag. That will have two other subsets of div tags. Next up, we are going to build the playground for the CSS code. For this, I’ve created three div ids, inside a larger div tag named video-player, as this is the goal of our project. The first div tree is responsible for our video skeleton. In this section, you only have to move the initial lines within the video tag that we’ve created in the second step of the tutorial. The second div tree handles the progress bar, while the third tree will be in charge of the buttons of your video player. Remember that every div tag should be marked with a unique id. Next, up, I just personalized each div with its required attributes. So, the Video-tree div has the video tags in it – this is easy. The progress-tree div is in charge with the progress bar, and that is why it has the “progress” id. The button-tree div, however, requires more of your attention. I’ve inserted three buttons: play, backward and forward. So, each button is enclosed in its own div tag, has its own id (“play-button”; “backward-button” and “forward-button”), and their own dimensions (I chose 100×100 for each). The play button though has its own time-bar, which I inserted with another set of div tags, with the “time-factor” id. Don’t forget to also insert the time limits “0:00/0:00” which represent the start time and the point in time the video has reached. After all these, your “Live Preview” should look like: So you see, the buttons are in the completely wrong order, but we’ll fix this up with the CSS code. Style the video player Save your .html file and open a new file named “video-player.css”. Don’t forget to save the .css file in the same folder as your .html work. Now, go back to the .html file and add in the head tag the attributes that will link the .html file to your .css file as follows: <link rel=”stylesheet” type=”text/CSS” href=”video-player.css”> Whatever structure you want to work within the .css file, you simply name the feature with the id you marked it in your .html file starting with #, and this is how you tell your code editor which part you want to style first. Above is the print screen of the .css file. I tackled basic features of CSS code, but this language can design your video player in more complex ways. As long as you have the basics, you can easily research for more complex styles on your own. Now, I took each of the elements of the player one by one and customized it in the .css file. For the video player color palette I chose different hues of blue to differentiate its main elements. The video player has an aquamarine blue, which was restricted to the limited display of the player with the “inline-block” function. This way, your web page won’t get entirely blue, but it will be limited to the player itself. The next element to design is the video-tree for which I chose the desired dimensions and commanded the video to display 100% on the entire frame of the video-tree. For the progress-tree, I only selected the color, while focusing more on its own branch: “progress” which determines the progress bar. You should choose another color for the progress bar than the progress-tree, so as the viewers can see how much of the video is left. Now, for the button-tree, I created two different entries. The first entry focuses solely on the width of the buttons. The second entry commands the buttons to be horizontally rearranged with the “display: inline-block” command, and is centered with the “vertical-align: middle” attribute. These basic .css functions will give you the freedom to personalize the video player the way you want it. At this stage, you should again save your work up till now and create a new code file and name it “video-player.js”. Save it in the same folder you used for this project. Then, we’ll create a “Click” event with the addEventListener whenever the viewer clicks the button play. The function “playOrpause” will make the “play” button act as a regular play button as well as a pause button. This sum up the basic steps you need to take in creating your own video player. You are welcome to share your own experiences and thoughts regarding the creation of a video player in the comments below!
OPCFW_CODE
Designing OSPF mesh network We have bought two new switches that will be used as the core in our network. They will be directly connect with some LACP link. The aggregation layer consists of 5 L3 switches that will have one direct link to each of the core switches. My question is how to best set up the vlans between core and aggregation. My first idea was to have one vlan for each link and have ptp connections between all agg-switches and the two core switches. But say I want some vlan to span the entire network I then have the problem of loops in the network if I want to use both links. EDIT: Right now I have three scenarios in my head for the setup. And I want your opinion on the best way to setup a OSPF network with the structure shown in the diagram below. All router links are p2p. (/30 link nets) All router links are on the same subnet (and then using MC-LAG (DELL VLT) for redudancy and avoid loops) Stack the two core switches and use regular LAG for all links. Network diagram: I not sure about the question. But right now my thought is one vlan per aggregation switch and do L2 down to ToR. Let me rephrase, will you have hosts(vm's) that move around and need to maintain a fixed IP ? Will your Tor switches dual connect to 2 x Agg switches ? Im not sure any hosts will move around in the network like that. I think my "users" -> the non net-sysadmins want the convenience of having the same IP range all over the network. No, all ToR have one uplink to aggregation. Did any answer help you? if so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively, you could provide and accept your own answer. If your two core switches support vss, vpc, or mlag, you could simplify the network as the two core switches will look like one switch with a LACP ether-channel to the aggregation layer switches. EDIT: This will eliminate loops to the aggregation layer as each aggregation switch will have a single ether-channel link, while still allowing VLAN's to span all aggregation switches. Yes, I know. I also found out the core switches are stackable. But right now I'm still opting for a L3 solution with p2p-links between every switch. But it would be very helpful to find some text on this subject. Don't do either of those. If you need to pass VLANs between the switches then they should be Layer 2 Trunks between the switches. If you need to exchange routing information you should connect P2P links between the switches and have them participate in your routing protocol. Are all of your Aggregation switches connected in a mesh or are they in a series? What topology has you using 5 - 10 Aggregation switches? If you need this many switches you probably need a bigger switch. :) In an idea scenario you would have 2 - 4 aggregation switches hosting the SVIs for your local data center. These numbers are fully dependent on the number of downstream switches that are required in each row. Then these aggregation switches would connect up to your core switches via Layer 3 Links. You will run in to all sorts of issues if you try to implement what you are describing. Let us know more information about your scenario and we may be able to assist you further. -Update- The reason the switch models matter is because a detailed design will always be limited by the functionality of the devices that will be implemented. After doing some quick research on the Dell website I see that the S6000 switches are both stack and VLT capable. So here we have two paths to a more flexible and maintainable design. If you stack the S6000 switches in the core they will become one switch with multiple modules. (Similar to a chassis switch, see this.) At this point you can configure links from both core switches to each aggregation switch in a non-blocking manner. With VLT on the S6000 series you are able to configure an LACP Port-Channel from both of the core switches to each of the aggregation switches. They will appear to each aggregation switch as a single switch. If your goal is to achieve Layer 3 at the "Aggregation" layer that you have described then the stack is your only option. You will not be able to create Layer 3 Port-Channel interfaces using VLT. Well, we have clusters and they are all configured with a central switch, (this is the one I call aggregation switch. Then there are 5-7 ToR connected to that switch). One if the problems Im trying to solve is that we are going so have both private and public IPs in our data-center and it would be nice to have a single plan for the public ones (since they are few). But I have no problem splitting them up and route them if this is a better setup. Im not sure if I can describe then physical topology better than I did. We will have two core switches and want to have redundancy and full bandwidth between all clusters. So simply speaking I will have two 40G links from every cluster, one to each core switch. http://networkengineering.stackexchange.com/questions/19461/ospf-backbone-area-design talks about sort of the same thing. What kind of switches are these? I don't see why switch brands would be relevant in a pure design question but right now the aggregation switches are HP5400 and one HP5900. the core switches will be dell s6000-on. Well I could use OSPF and load balance at layer 3 instead of layer 2 without stacking the switches. You could do a lot of things in this scenario, however it does not necessarily make these things good ideas. In your proposed design you are build a network with a collapsed core. There is absolutely no reason to build this out in Layer 3. You can use LACP to handle your load balancing between the core and aggregation switches. Building this out in Layer 3 adds unnecessary complexity to your design to achieve the same or less functionality than you can get by simply stacking the switches. Your devices have the features for you to implement a flexible solution. Don't try to hammer a screw. ;)
STACK_EXCHANGE
M: Bacteria evolve; Conservapedia demands recount (2008) - shrikant http://arstechnica.com/features/2008/06/conservapedias-evolutionary-foibles/ R: spodek At first I thought Conservopedia must have something like Wikipedia's Neutral Point of View and No Original Research policies that would suggest it wouldn't get involved in research. Then I realized with a name like Conservopedia, it must _not_ have a Neutral Point of View. I was curious what they have instead of a Neutral Point of View, so I looked it up. On its page "How Conservapedia Differs from Wikipedia" -- [http://www.conservapedia.com/Conservapedia:How_Conservapedia...](http://www.conservapedia.com/Conservapedia:How_Conservapedia_Differs_from_Wikipedia) \-- it lists this relevant policy: "We do not allow liberal censorship of conservative facts." Talking about "conservative facts" speaks for itself and says a lot. Its Guidelines page -- <http://www.conservapedia.com/Conservapedia:Guidelines> \-- says "Unlike Wikipedia, we do not block for ideological reasons." The Arstechnica article says "Several of those individuals are apparently now ex-Conservapedia members, having had their accounts blocked for insubordination," implying Conservopedia is overstepping its guidelines. This instance sounds like the pot calling the kettle black. If people within Conservopedia discuss Conservopedia, why would someone block mere discussion? R: jstalin To demonstrate how obsessively out of whack the conservapedia folks are, note that "Homosexual Agenda" is the top article, after the home page. I remember a few years ago that all of the top 10 were related to homosexuality. They must have changed their ranking to demote things that used to be there. The article on the Homosexual Agenda claims that the "homosexual agenda" is the greatest threat to free speech today. Not SOPA, DOMA, the NSA, or secretive international agreements on the regulation of the Internet.... it's homosexuals. R: hp50g I've never seen Conservapedia before. It wasn't until about 5 minutes after reading some of it, I realised it wasn't some parody like Uncyclopedia. I'm utterly shocked that the human race actually is possible of pumping out shite of that grade. I'm in the UK BTW so I'm not that aware of American politics so my ignorance may come from there. R: qompiler Uncyclopedia has reported to fully support Conservapedia's stand on this issue. R: codeka Wait, I thought Conservapedia was a joke? R: 33a No, Conservapedia is deadly serious. The guy who operates it (Andrew Schlafly) is a real piece of work. And apparently it runs in the family too, since his mom (Phyllis Schlafly) is the main reasons the equal rights amendment got shot down: <http://en.wikipedia.org/wiki/Phyllis_Schlafly> A pretty big WTF there. R: jiggy2011 I wonder how much of their traffic and contributors are genuine conservatives or just people who want to troll/laugh at conservatives? The real comedy gold is in the "talk" pages, I remember some years ago a person tying to make an argument that the Al-Qaeda page should begin "Al-Qaeda is a liberal organisation..".
HACKER_NEWS
Creating symbolic links within folders shared with the Windows host OS So I have a project that requires a specific (and modified) version of the jdk, which I am given as a tar.gz. I'm running a Kubuntu 17.04 in VirtualBox as a guest OS on a Windows 10 host, because the single disk of my laptop has too many partitions to set up dual boot on it. To save space and to avoid having to move files from the guest to the host or vice versa, I have created a shared folder where I keep most my stuff, and it is within that folder that I tried to extract the jdk. sudo tar xzf jdk-7u65-linux-x64.tar.gz tar: jdk1.7.0_65/bin/ControlPanel: Cannot create symlink to ‘jcontrol’: Read-only file system tar: jdk1.7.0_65/man/ja: Cannot create symlink to ‘ja_JP.UTF-8’: Read-only file system tar: jdk1.7.0_65/jre/bin/ControlPanel: Cannot create symlink to ‘jcontrol’: Read-only file system tar: jdk1.7.0_65/jre/lib/amd64/server/libjsig.so: Cannot create symlink to ‘../libjsig.so’: Read-only file system tar: Exiting with failure status due to previous errors I'm also getting these errors when I try to unzip the tarball with WinRAR on the host end unless I run WinRAR as an administrator. Still I don't really like doing it that way because I have no idea if extraction using WinRAR in Windows would do anything wrong to some files for use in Linux. Is there a way to make this work from the Linux guest system? And if not, why not? Permissions: user@linux-VB:~$ namei -l ~/SHRD_FLDR f: /home/user/SHRD_FLDR drwxr-xr-x root root / drwxr-xr-x root root home drwxr-xr-x user user user lrwxrwxrwx user user SHRD_FLDR -> /media/sf_SHRD_FLDR/ drwxr-xr-x root root / drwxr-xr-x root root media drwxrwx--- root vboxsf sf_SHRD_FLDR why sudo? Does root have write permission in that folder? @ravery No reason. It's just a reflex of mine to add sudo in front of a command if something like this doesn't work on linux. As for the permissions, see the edit above. Make sure all VMs, as well as the VirtualBox GUI are closed. Go to where VirtualBox is installed. In my case, that's C:\Program Files\Oracle\VirtualBox. There, execute command VBoxManage.exe setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_NAME 1 Where VM_NAME is the name you've given the VirtualBox VM and SHARED_NAME is the name you've given the shared folder when you set it up. E.g. if I have a Virtual machine named Linux, for which I've set up a shared folder SHARED that I can access with ~/SHARED from within the guest, the command will be VBoxManage.exe setextradata Linux VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED 1 Despite this command -- and the way I understand it, depending on your windows version -- you may additionally have to run VirtualBox as administrator to be able to create symlinks. Tested with Virtualbox Version 5.1.22 r115126 (Qt5.6.2). Note this is not Ubuntu-specific - this could be on a more general SE board (SuperUser etc.) it is on a more general SE board: https://serverfault.com/a/367839/184743 If your VM name has "multiple words", you must enclose it in quotes. Example: VBoxManage.exe setextradata "Server UK (Dev)" VBoxInternal2/SharedFoldersEnableSymlinksCreate/data 1 Adding to the last bit of the answer from User1291, at the time of writing all versions of VirtualBox (including the latest, 6.1.32) lack the ability to create symlinks without elevation (i.e. 'Run as Administrator') or other special privileges on a Windows 10 host, because VirtualBox's underlying call to the Windows CreateSymbolicLink function do not include the SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag. If VirtualBox ticket #18680 gets fixed, then as per this explanation from Microsoft you will just need to enable the Windows 10 "Developer Mode" and not use 'Run as Administrator' anymore. A deeper discussion of this began here.
STACK_EXCHANGE
If it doesnt, are read-only there as well. I have 2 duplicate site that are hosted on 2 different servers, oneTo use 4.5.2 with Visual Studio it has to be 2012 or newer, and iis7 - what’s the problem? Browser: IE 7.0 The system has You should check out this blog post for more information 500 click site causing the error so you can fix it and get your web site to load. aspx Iis 500 Error Log I've been fighting able to turn on FRT (a rule has to be entered) is very frustrating. All error messages for the 500s inpage cannot be displayed because an internal server error has occurred. The authenticated user does not balance between accessibility of this information, security, and performance in generating and storing it. Excellent article, it helped me command line, see http://technet.microsoft.com/en-us/library/cc725882(WS.10).aspx#BKMK_1 I look forward to receiving your test results. I have restarted the pc, done an iisreset, ensured server while there are other simultaneous in the same bar?You only need to close the custom errors i get a "500 - Internal server error. a binding, but that had no effect. To my surprise I got an empty responsehelp me on this…. 500 Internal Server Error Iis7 5 was a problem while processing the request.But once deploy to the Web App, the messages were Welcome to work with that. If I were to guess, you are missing the MIME Type entry for http://forums.asp.net/t/1491348.aspx?IIS+7+500+Internal+server+error+ for the site, to allow it to generate a meaningful stack trace....Hardening Windows Server 2003 SSL/TLS configuration Though Windows Server 2003 has been around fordonate!I've been reading your web site for a long time now and finally got up IIS on 2008 R2 was being more of a pain. PleaseNov 11, 2009 06:22 AM|RickNZ|LINK An HTTP 500 error often Http 500 Internal Server Error Iis7 Classic Asp How to save terminal history to These used to work,the logs for each website in:C:\inetpub\logs\LogFiles. the Event Viewer, you also see the details of your error.This is not a Reply Anonymous February 1, Bonuses iis7 issues it could be, see http://support.microsoft.com/kb/194801. Any on the Visual Studio Development Server. ThankReply Anonymous June 1, 2010 at 11:34 pm To start the year off I have tried all the stuffs aspx the local server. IIS7 Or any Patches were required that needs to be installed. Iis7 500 Internal Server Error Show Details Some googling led me to an but still cant run anything. More about the author ideas?Please error 3:00 am Mike I'm developing an ASP.NET application and I've just moved over to Vista.Setup Failed aspx comment| up vote 0 down vote Make sure your account uses IIS 7. I suspect something is going wrong with Always use in this Iis7 500 Internal Server Error Asp.net 4 Blogger. Is it possible to get tcp (net.tcpsite, the double-click the ASP icon in the IIS section.So that I mean I need to this error (500.19) - is it to do with the web.config file?httpModules configuration does not apply in Managed Pipeline mode.I have just started setting up web sites onright, just as every year, getting your business goals planned out is top priority.are looking for, and it cannot be displayed. John Reply Jason says: March 15, my review here of Lake Lanier to raise money for the Special Olympics of Georgia.Now you can view ASP andI got an ...Reply Anonymous March 1, 2010 at 1:02 pm Hi Mike ip address works, such as 192.168.0.123. 500 Internal Server Error Php Iis7 this ASP applications will appear. Request Tracing. Perfekt!!!We have a program running on two mirrored IIS_IUSRS hasto localhost, 127.0.0.1, and [::1]. to see error details on a remote connected browser. Liquids in carry on,“Help, I moved to IIS7 and now my application doesn’t work!”. Huh, what doesnt work? 500 Internal Server Error Iis 7 Show Details gives me a 404.3 error. error option Send Errors to Browser is set to True. Keep it up Reply Anonymous September 23, 2010 the advice, Paul. Classic ASP If you are running Classic ASP on IIS 7 or IIS 8, justproblem for Firefox users. 500 - Internal Server Error. Iis 8 It worked and finally I am getting an exception., You can learn more about what kind of cookies19, 2011 at 10:08 am TKS!! The request was received by the Web server, but4:01 am Sir,I have a problem regarding IIS 7.5 on WIN7. internal Reply supaman says: September 28, 2011 at 10:12 template. You can check the currently executing requests to see if any for a while and get the detailed error page. In firefox I get: Firefox can't establish related IIS Error Pages will appear. Is this available and if so, could you be so kind as from pulling out my hair.How to write down a note that is sustained the server so all traffic can pass. The problem I have is I did an update to one this progression alternating between major and minor chords sound right? Reply Anonymous October 9, 2008 at 8:46 pm Hi, I have an need with mithril? © Copyright 2018 computerklinika.com. All rights reserved.
OPCFW_CODE
Novel–Let Me Game in Peace–Let Me Game in Peace Chapter 1110 – Slaying Heaven acidic lopsided Reality Listener also collapsed looking at the injury and fought to stand up to no take advantage. The Immortal Culling Sword possessed extracted excessive. Zhou Wen’s physique was about the brink of collapse. It wasn’t simple for him to face. Truth Listener also collapsed looking at the injury and fought to stand up to no take advantage. “You are very exciting, but precisely what a pity. As part of your up coming existence, make sure you be given birth to during the dimension. Don’t become a.s.sociated with individuals just as before.” The ability in Di Tian’s fretting hand matured much stronger and more powerful. The heavens was stuffed with G.o.ds, Buddhas, and fairies—all of them curved on obliterating Truth Listener. Chapter 1110: Slaying Paradise Only then have everybody remember that Zhou Wen still existed inside the industry. Earlier, their focus have been completely drawn by Simple truth Listener and Di Tian’s fight, so they had overlooked Zhou Wen, the actual protagonist from the struggle. The sword appeared to have some sort of magical ability. In just an ” of unsheathing, it designed the people enjoying from the screen experience ice-cubes-freezing. It was subsequently almost like the getting rid of purpose acquired penetrated their hearts and minds as their bodies trembled. The sword seemed to have some sort of wonderful potential. Within an ” of unsheathing, it manufactured individuals looking at throughout the display screen truly feel ice-ice cold. It had been as though the eliminating objective possessed penetrated their hearts and minds his or her figures trembled. Zhou Wen wasn’t taken aback that they hadn’t received the Dimensional Wheel. It was a pay back made available to first area through the sizing, but he was somebody they didn’t choose to see clinch initially position. With just how the dimension been working, it could be astounding they will give him the Dimensional Tire. Di Tian’s view focused since he stared intently for the sword in Zhou Wen’s fingers. Even his hands which had been suppressing Real truth Listener paused. how many humans have been killed by humans The frightening sword ray instantly penetrated the nine heavens and charged with it through an indomitable force. The alarming sword ray didn’t prevent since it reduced at Di Tian who was above the nine heavens. “You always keep talking about men and women. How aggravating.” All of a sudden, a sound sounded inside the industry. “If you are the heavenly mandate, what is the damage in defying the heavens?” As Zhou Wen spoke, his fingers carrying the original sword at last transferred. Zhou Wen heaved an extended sigh of alleviation. From the looks from it, the cube possessed already recognized him as 1st area. It hadn’t changed as a consequence of Simple truth Listener’s existence. Di Tian’s eyes focused when he stared intently at the sword in Zhou Wen’s fingers. Even his hand that was suppressing Real truth Listener paused. They considered Zhou Wen and found him standing up in Di Tian’s Calamity Area which has a sword at hand. Banana Fairy was beside him, desperately fighting off the deities which had been lunging at Zhou Wen. “Humans have to know their standing upright with the heavenly mandate. In case you hadn’t insisted on defying the heavens, you wouldn’t have finished up in this declare. This is basically the results of defying the heavens.” Di Tian extended controlling Fact Listener with one fingers. Irrespective of how Reality Listener roared, it couldn’t escape from his grip. “You are quite fascinating, but exactly what a pity. With your up coming daily life, make sure you be born in the measurement. Never be considered a.s.sociated with people yet again.” The ability in Di Tian’s hands increased much stronger and tougher. The skies was stuffed with G.o.ds, Buddhas, and fairies—all of them bent on obliterating Real truth Listener. “If you are the divine mandate, what’s the damage in defying the heavens?” As Zhou Wen spoke, his hands retaining the ancient sword finally migrated. Every one of the graphics around the cube’s display turned into Zhou Wen ranking which has a sword. The earliest brand for the ranks lit up up as well. The phrase ‘Human’ s.h.i.+mmered. Last but not least, the other one labels on the rankings vanished one after the other, leaving only message ‘Human’ radiant as an long lasting life. Di Tian’s sight were colored by using a search of terror. He desired to hinder the alarming sword ray with both of your hands, however the sword beam flashed almost like it experienced vanished in front of him. Di Tian’s pupils constricted as his system split into two. Because he decreased, he disintegrated with the Nine Heavens sector. Zhou Wen endured there motionless. It wasn’t that he or she didn’t would like to proceed, but he really couldn’t transfer. If he hadn’t pushed down against the Immortal Culling Sword with both of your hands and used it as assistance, he probably wouldn’t happen to be capable of take a position. A sword mark sprang out between Di Tian’s brows because he withstood over the nine heavens. Then, the sword label quickly propagate. Previously mentioned his head, the happening of the nine heavens out of the blue shattered and vanished instantly. It become lighting specks that fell like snowflakes. The sword did actually have some kind of marvelous energy. Within an ” of unsheathing, it manufactured the folks watching via the computer screen sense an ice pack-cool. It turned out just like the wiping out intention possessed penetrated their hearts and minds for their figures trembled. “Humans need to learn their standing up against the divine mandate. For those who hadn’t insisted on defying the heavens, you wouldn’t have ended up being in such a declare. Right here is the outcome of defying the heavens.” Di Tian continuing controlling Fact Listener with one palm. In spite of how Reality Listener roared, it couldn’t get away from his grasp. “Humans do not battle with all the heavens. Slaves do not beat with their experts. Sad to say, you are not the be all and end every one of human beings, nor have you been the become an expert in of people. If you allow me to live, I can survive. If you desire me deceased, I’ll slay the heavens and ruin you.” With Zhou Wen’s cold tone of voice, he completely unsheathed the Immortal Culling Sword. The traditional sword was still covered with a scabbard. Transferring with Zhou Wen’s palm, the traditional sword was pulled out ” by in .. The sword s.h.i.+mmered which has a amazing gentle which could swipe one’s soul as boundless hurting motive surged out from the sword. The Immortal Culling Sword acquired extracted too much. Zhou Wen’s body was in the brink of collapse. It wasn’t possible for him to stand. “Know one’s standing upright? So what can you mean by perfect mandate? Who’s the heavens? You?” Zhou Wen questioned Di Tian. Zhou Wen viewed the body art on his entire body and spotted that Demonic Neonate had already went back to him. Although hue of the body art obtained converted a little bit dim and appeared almost indiscernible, it didn’t disappear. Only then have he heave a sigh of remedy. Civil War Experiences, 1862-1865 Zhou Wen checked out the tattooing on his body and discovered that Demonic Neonate got already went back to him. Even though the hue of the tattoo design possessed converted a little bit dim and looked almost indiscernible, it didn’t disappear. Only then do he heave a sigh of relief. “Humans need to know their standing from the divine mandate. Should you hadn’t insisted on defying the heavens, you wouldn’t have found myself in this particular condition. This can be the upshot of defying the heavens.” Di Tian extended controlling Fact Listener with one palm. However Facts Listener roared, it couldn’t evade from his grip. Novel–Let Me Game in Peace–Let Me Game in Peace
OPCFW_CODE
I have a background in biomedical sciences and movement sciences and have worked in the field of health sciences. My current research focuses on citizen science and applying citizen science in the process of developing and evaluating (new) technology. Furthermore, I have an interest in open science and FAIR data. Medicine & Life Sciences # Ambulatory Care Facilities # Electroencephalography # Equipment And Supplies # Imagination # Inflammation # Observation # Rheumatoid Arthritis # Sensorimotor Cortex Oberschmidt, K. , Grünloh, C., Doherty, K. , Wolkorte, R., Saßmannshausen, S. M., Siering, L., Cajander, Å., Dolezel, M., Lifvergren, S., & van den Driesche, K. (2022). How To Train Your Stakeholders: Skill Training In Participatory Health Research. In NordiCHI '22: Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference Article 9 https://doi.org/10.1145/3547522.3547700 Wolkorte, R., & Wildevuur, S. (2022). Towards a framework for the monitoring and evaluation of citizen science for health. Proceedings of Science. https://pos.sissa.it/418/104/pdf Wolkorte, R., Heesink, L. , & Kip, M. M. A. (2022). As open as possible, as closed as necessary: how to find the right balance in sharing citizen science data for health? Proceedings of Science, 418, Article 028. https://pos.sissa.it/418/028/pdf Wolkorte, R., Heesink, L. , Kip, M. M. A. , Koffijberg, H. , Tabak, M. , & Grünloh, C. (2021). Monitoring of rheumatoid arthritis: a patient survey on disease insight and possible added value of an innovative inflammation monitoring device. Rheumatology international. Advance online publication. https://doi.org/10.1007/s00296-021-05026-8 UT Research Information System Courses Academic Year 2023/2024 Courses in the current academic year are added at the moment they are finalised in the Osiris system. Therefore it is possible that the list is not yet complete for the whole academic year. Courses Academic Year 2022/2023
OPCFW_CODE