A while ago, I wrote a post on some of the most important advantages of Svelte. Back then, the framework had just received a major update, and it was quite a hot topic to cover. Now, after the dust has settled, Svelte still has a lot going for it, but it also has some drawbacks that previously went unnoticed.
I don't want to rant over these small issues because that's not the point of this article, and besides - I really like Svelte! But for your information, these are:
- TypeScript support - although it's been added recently, it wasn't there at the time Svelte exploded in popularity. Thus, most of its still-small but very important ecosystem of framework libraries will most likely not support it.
- Syntax differences - Svelte feels good when you get used to it, but because of its compiler-based nature, there are some syntax nuances that newcomers might find a bit awkward, like the dedicated template syntax, the $: reactive label (although it is technically valid JS), etc.
- Small ecosystem - this is a common issue that, unless you're React, Vue, Angular, or [insert your big framework here], or you're 100% down with Web Components, you're doomed to experience. Because of Svelte's recent growth in popularity, it has developed a pretty respectable ecosystem, and because of its good support for Web Components (you can even compile Svelte to such), it's not as big of an issue - but still, something to keep in mind.
So, Svelte is not ideal - nothing is - and that's why we have alternatives. If the idea of the compiler is very attractive to you and you want to have top-to-bottom TypeScript compatibility without Svelte's syntactic gotchas, you might be interested in Solid.
Solid introduction
So, Solid (not S.O.L.I.D. principles, but Solid UI library) is "a declarative JavaScript library for creating user interfaces". So, yet another UI framework? Well, yes, but also no. You see, Solid introduces some nice mixtures of concepts that we haven't seen before, effectively making itself stand out from the overpopulated UI libraries crowd.
What does Solid have going for it? For me there are a few things: it's written in and has first-class support for TypeScript, it supports JSX, with additional React vibes like Fragments, async rendering, and hook-like functionalities, and last but not least - it's wicked-fast, going toe to toe with vanilla JS!
Coding demo
I hope I sparked your interest. Now, let's examine an example Solid component.
// index.tsx
import { Component, createState, onCleanup } from "solid-js";
import { render } from "solid-js/dom";

const App: Component = () => {
  const [state, setState] = createState({ count: 0 });
  const timer = setInterval(
    () => setState("count", (count) => count + 1),
    1000
  );
  onCleanup(() => clearInterval(timer));

  return <div>{state.count}</div>;
};

render(() => <App />, document.getElementById("app"));
Above you see a simplistic counter component. If you've worked with React before, it should feel somewhat familiar to you.
We create our App component through the use of an arrow function, with a directly-specified type. It's a little tidbit to remind you that Solid works great with TypeScript.
Next up, you can notice the use of the createState() function, together with the familiar array destructuring pattern.
This might look a lot like React hooks, but only on the outside. On the inside, there are no "rules of hooks" to abide by and no issues or confusion around stale closures. That's because components in Solid are run only once, leaving reactivity and all the re-executing to specialized parts of the code (like callbacks passed to "Solid hooks"). To make it even clearer: React invokes the render() method or its functional component equivalent on every re-render, whereas Solid uses its component function as a sort of "constructor", which runs only once, to set up all the other reactive parts.
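To illustrate the run-once model, here is a deliberately tiny, hypothetical sketch in plain JavaScript - not Solid's actual internals, just the idea that the component function executes a single time while its subscribed effects re-run on every update:

```javascript
// Hypothetical mini reactive system: createSignal returns a reader and a
// writer; subscribers re-run whenever the writer is called.
function createSignal(value) {
  const subscribers = new Set();
  const read = () => value;
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn());
  };
  read.subscribe = (fn) => subscribers.add(fn);
  return [read, write];
}

let componentRuns = 0;
let effectRuns = 0;

function Counter() {
  componentRuns += 1; // runs exactly once, like a constructor
  const [count, setCount] = createSignal(0);
  const effect = () => { effectRuns += 1; }; // re-runs on every update
  count.subscribe(effect);
  effect(); // initial run
  return { count, setCount };
}

const { setCount } = Counter();
setCount(1);
setCount(2);
// componentRuns is 1; effectRuns is 3 (initial run + two updates)
```

The component body never re-executes; only the registered effect does.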
So, we've got our state. Now, we use the usual setInterval() function for the counter functionality, and setState() in a reducer-like manner (one of many possible ways to use setState() in Solid) to update the state.
Lastly, we use the hook-like onCleanup() function to register the callback for handling component disposal. Remember, because the core component function is run only once, "hooks" such as onCleanup() are the only way to handle reactive behaviors.
Now, just return the JSX element, render the component, and you're done! Not that complicated, is it?
Things to keep in mind
So, this was just a simple demo to give you some basic understanding of how things look. For more in-depth guidance, check out the official docs, or drop a comment if you'd like to see a full-blown tutorial.
But right now, I'd like to point out a few things that you should keep in mind if you're willing to try out Solid.
First off, I know I'm repeating myself, but the fact that the component function is run only once is very, very important. Because Solid uses JSX and is inspired by React, it's safe to assume that the developers who'd like to use it would be at least somewhat familiar with React and could (possibly) be confused as to why their code isn't working properly. Knowing about this difference is crucial to get accustomed to Solid's approach.
Next up, because Solid is a compiler, it requires additional setup for a proper development experience. The easiest way to do this is through a Babel plugin (babel-preset-solid), or by starting with a pre-configured boilerplate:
npm init solid app my-app
Because modern web development already relies heavily on tools such as Babel, adding another plugin shouldn't be much of a problem.
Lastly, there are even more things to remember about Solid's reactivity. Because of heavy optimizations and compiler-based design, there are a few gotchas when working with the state. The most important of which is that you shouldn't destructure the state, like so:
const { count } = state;
The value derived from destructuring won't be reactive, and thus won't be updated when used in JSX. If you really can't stand constantly having to enter the full state property path, then (apart from having some truly unwieldy state object), you can still handle that like so:
const count = () => state.count;

// later
count();
What you're doing is essentially creating a thunk (or simply a shortcut) to access the state property. It might be a bit of a waste when dealing with simple state, but it can also be really helpful when dealing with 2 or more levels of depth.
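The difference is easy to demonstrate in plain JavaScript (a hypothetical state object, outside of Solid's reactivity, purely to show why the thunk stays current):

```javascript
// Destructuring copies the value once; a thunk re-reads the property each call.
const state = { user: { profile: { count: 0 } } };

const { count } = state.user.profile;             // frozen snapshot
const liveCount = () => state.user.profile.count; // thunk / shortcut

state.user.profile.count = 5;
// count is still 0, while liveCount() now returns 5
```

In Solid, that "re-read on every call" behavior is exactly what keeps the value reactive inside JSX.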
But for really simple, one-property states like in the previous example, using objects is overkill altogether. For such cases, Solid provides so-called Signals - "atomic immutable cells that consist of a getter and setter". Basically, a tiny version of state objects, but with a few differences.
import { createSignal } from "solid-js";

const App = () => {
  const [getCount, setCount] = createSignal(0);
  //...
  return <div>{getCount()}</div>;
};
The createSignal() method returns a pair of functions, of which the first can be used to access the held value and the second to set it.
As you can see, signals are somewhat like a dumbed-down version of an object-based state, but only somewhat. You see, Solid uses signals as building blocks for the more advanced reactive functionalities. This includes the object-based state, which at its core is a proxy composed of smaller, on-demand signals.
To summarize, if you're willing to give Solid a try, then it's quite important to understand its concepts for creating efficient code without compromising too much on development experience.
Drawbacks
Before we declare Solid "the next big thing" or the "best JS UI library", it's worth pointing out some of its drawbacks - of which, honestly, there aren't that many.
From Solid's standpoint as a UI library, we might argue that all the API and syntax gotchas I've listed, as well as the ones that I didn't, can be considered a drawback. However, we can't expect a simple UI library to go against the very nature of software. The limitations of both JavaScript itself and Solid's compiler-based design do require some tiny compromises - which, at least in my opinion, shouldn't be much of an issue once you get used to them.
Secondly, of course, the ecosystem is small. At the time of writing the library has about ~4K GitHub stars, and a fair number of articles have been written about it. But there's still little to no ecosystem developed around it. There's no component library like Vuetify is for Vue or Material UI is for React. All you have is what you write, plus Web Components if you're willing to use those.
And lastly, I'd say the docs are quite detailed and explain the topic quite well, but they're only Markdown files living in the GitHub repo, with no flashy landing page or anything like that. I know, I know - I'm nitpicking right now, but there needs to be some "ethical marketing" done to get developers interested in a library - otherwise, you'll only learn about it from benchmarks and articles like this one. The docs are already good enough, the logo looks nice, and there's a fair number of example projects ready for you to see; there's just no landing page.
Is this the future?
To wrap this up, Solid is a really promising UI library with tons of advantages. The familiarity of JSX and React concepts, combined with the speed and bundle size of Svelte, makes it seem like an ideal UI library.
The few drawbacks that Solid has either aren't that bad or can be easily fixed as the library continues to evolve.
Overall, it gets my solid recommendation (see what I did there?), and I highly recommend you check it out. Oh, and come back after you do and let me know your thoughts in the comments below!
For more up-to-date web development content, be sure to follow me on Twitter, Facebook or through my newsletter. Thanks for reading and I wish you S.O.L.I.D. coding!
WCF from scratch in 2019
I work with a (big!) layer of WCF services and I want to know more about them and how we can adapt for the future. So I’m doing a deep dive! If I switch tenses in this post, apologies. It’s part stream-of-consciousness and part review.
The big questions I hope to answer:
- Can we push multiple WCF services from one WCF Project to separate Azure App Services? (No, we should use one-project-per-service)
- Can we add JSON/web endpoints to an existing SOAP service with just config? (Yes! With some config and code-level attributes)
- Can we add Swagger to our JSON endpoints? (Yes!)
You can jump down to the pictures
Getting started
Steps:
- Made a solution called TacoServices with a WCF Service Application project called TacoService, and renamed the original IService1 and Service1 files to ILocation etc. using the Solution Explorer so that both the code elements and the files would be renamed. But I still had to rename the code element for Service1.
- Added a .Net Standard class library called TacoServices.Common
- Wrote up my basic service to return some stub data
Service Project Web.config
To set up my config, I more or less followed this dotnetcurry guide to expose WCF services as SOAP and REST. Please note that REST is a misused term here; REST is not just JSON. Every dev has a different level of passion for the purity of the definition of REST. My preferred starting point is the Richardson Maturity Model, so I'll be calling what we do here a 'JSON API'.
Sample of config file:
<system.serviceModel>
  <services>
    <service name="Taco.Services.Location.LocationService" behaviorConfiguration="generic">
      <!-- SOAP - No behaviorConfiguration, uses basicHttpBinding -->
      <endpoint address="soap" binding="basicHttpBinding"
                contract="Taco.Services.Location.ILocationService" />
      <!-- JSON - Needs a behaviorConfiguration with webHttp, uses webHttpBinding -->
      <endpoint address="api" binding="webHttpBinding"
                contract="Taco.Services.Location.ILocationService"
                behaviorConfiguration="web" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="web">
        <webHttp helpEnabled="true" />
      </behavior>
    </endpointBehaviors>
    <serviceBehaviors>
      <behavior name="generic">
        <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="false" />
      </behavior>
      <behavior name="" />
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
Things to grok:
- In <behaviours> we have service-level and endpoint-level behaviours
- The service (and all its endpoints) gets the generic behavior we created
- Only the JSON/web/RESTish endpoint gets the web endpoint behavior. The SOAP endpoint doesn't need one
- Endpoints must have different addresses, and these will be appended after the SVC like localhost:60601/LocationService.svc/soap/ which is kind of ugly. You can fix it with an IIS rewrite rule or with Azure API Management in production
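For illustration, a rewrite rule along these lines could hide the .svc segment. This uses the IIS URL Rewrite module; the rule name and URL pattern here are made up for the example, not taken from the project above:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Maps /location/... onto the real .svc endpoint address -->
      <rule name="TidyLocationApiUrls" stopProcessing="true">
        <match url="^location/(.*)$" />
        <action type="Rewrite" url="LocationService.svc/api/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The rewrite happens server-side, so clients only ever see the tidy URL.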
If you run a WCF project while the .svc.cs file is open in VS, it opens the WCF Test Client, which will validate your service.
Running the test client, I followed the trail of errors:
- You can’t have two endpoints with the same name (i.e. you can’t have overloads in a
ServiceContract) so I changed
GetLocations(string searchString)to
SearchLocations(...)
- You have to mark a collection type with [CollectionDataContract] instead of just [DataContract]
WCF Test Client said "Added Service Successfully" but didn't actually add my thing. This was because my services config was wrong (an earlier version of the one above) and I'd configured all my endpoints to use the webHttp behaviour (this takes away their SOAP/WSDL powers).
I made a new Console App to consume the services, added the Service Reference (using 'Discover Services in Solution'), and implemented the basic features of my LocationService.
SOAP works - making a second service
Everything in my LocationService now works, so I want to make a second service.
It's a bit tricky to find the new 'WCF Service' option when you go to 'Add New Item' on a project. It's in the Web group, near the bottom, but it doesn't appear in any of the subcategories of Web (General, Markup, Scripts, etc.)
I created an OrderService and stubbed out some behaviours.
One project - multiple App Service targets?
Nope. This isn’t the way to go. It makes a lot more sense to divide the services one-per-project and then have a PublishProfile for each project in the usual way.
I split the OrderService into a separate project, which wasn't too hard with some cunning renaming.
The test app then couldn’t reach the Orders API any more. Following the errors again:
- I had to drop and re-add the Service Reference in the consuming project
- Had to add the new Orders Svc to the list of multi-startup projects
- Had to add the Newtonsoft.Json package to the new project because it was required by a dependency (my Common objects project)
Then it was fixed, and the Orders SOAP service started working again.
Test JSON
LocationService
Firstly, I discovered that the JSON endpoint has a default help page at:
…bear in mind that I used ‘/api’ as the address prefix of the JSON Service Endpoint in my web.config
Through the help page, I discovered that my JSON endpoints only accept POST right now. So I added the attribute [WebGet(ResponseFormat = WebMessageFormat.Json)] into the ILocationService contract.
Now that they accepted GET, the two endpoints in LocationService just worked! I used JsonConvert.DeserializeObject to get the result and it was exactly what I was expecting - surprising!
OrderService
The POST on OrderService did not work right away, even though I was sending an object which matched the definition on the help page at “”
The fix was twofold:
Add a [WebInvoke...] attribute as described in this WCF post by Dean Poulin:
[OperationContract]
[WebInvoke(Method = "POST",
    RequestFormat = WebMessageFormat.Json,
    ResponseFormat = WebMessageFormat.Json)]
Result PlaceOrder(OrderRequest order);
Set the Content-Type header in the request (i.e. in the code of the consuming client) to ‘application/json’
client.Headers.Add(HttpRequestHeader.ContentType, "application/json");
Now everything works as well in JSON as it does via SOAP! Pretty cool!
Adding SwaggerWcf
I added the NuGet package with Install-Package SwaggerWcf and had to do that on both services projects and on the common project.
My service projects didn't have global.asax files, but the SwaggerWcf readme setup guide said I needed one. You can add a global.asax to your project in the Add New Item dialog by searching for 'global' and selecting 'Global Application Class'. Or you can find it under Web/General.
It was tricky to configure SwaggerWcf. The docs are not great. It will work and come up at your configured URL as soon as you’ve sorted out the routing (below) but won’t have any valuable information until you’ve annotated everything with the attributes described in the readme.
Configure with RouteTable
Even though my project is just straight WCF (not really "ASP.NET"), eventually it was the ASP.NET way of configuring that won out. If you go with the self-host, you need an absolute URL in the web.config, which doesn't work for me because I'm using IIS Express with its whimsically-numbered ports.
// Global.asax.cs
protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Add(new ServiceRoute("docs",
        new WebServiceHostFactory(),
        typeof(SwaggerWcfEndpoint)));
}
Doing it this way, you do not need a <service> entry for Swagger in web.config. I found this pull request on a sample project helpful in fixing my config issues.
You might have to manually add a framework reference to System.ServiceModel.Activation for the code above to work. Sadly, IntelliSense doesn't suggest the fix automatically (VS 2017).
Practical notes
If you run your project(s) while looking at the svc file and the WCF Test Client opens up, you have to wait for it to load before you can successfully get to the Swagger page.
In OrderService.svc.cs, my attribute ended up looking like this:
[SwaggerWcf("/OrderService.svc/api")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class OrderService : IOrderService
…in order for the Swagger sample requests to work.
The sample project has you put loads of SwaggerWcfResponse annotations on, and maybe your API is not that clever. You don't need all of those. On the other hand, I was able to enhance my API by adding a tiny bit of extra code right before I return the response from OrderService.PlaceOrder:
WebOperationContext.Current.OutgoingResponse.StatusCode = result.Success ? HttpStatusCode.Created : HttpStatusCode.BadRequest;
I then went ahead and duplicated these changes on the LocationService. I thought I’d nailed it, went to test the ‘/docs’ page, and got:
Request Error
The server encountered an error processing the request. See server logs for more details.
It was because I forgot the swaggerwcf section in the web.config for the LocationService project:
<configSections>
  <section name="swaggerwcf" type="SwaggerWcf.Configuration.SwaggerWcfSection, SwaggerWcf" />
</configSections>
...
<swaggerwcf>
  <settings>
    <setting name="InfoTitle" value="LocationService" />
    <setting name="InfoDescription" value="Interface for finding the locations which serve Tacos" />
    <setting name="InfoVersion" value="0.0.1" />
    <setting name="InfoContactUrl" value="" />
    <setting name="InfoContactEmail" value="github@stegriff.co.uk" />
  </settings>
</swaggerwcf>
Ok, weirdly, the swagger defs "bleed" into each other. The first service to start up knows only about itself, but the second picks up the details of both APIs. I'm still not sure why. I thought it was because one of my service projects held a reference to the other, and the common project had definitions affecting one project and not the other but was being pulled into both. But even after I changed those facts, the LocationService still has a Swagger def for both APIs, and the OrderService has only itself. Mysterious.
Outcomes - what did we get?
Firstly, you can find all the source code for TacoServices on GitHub.
This is what I get when I run the solution (the consumer project is in JSON mode). Directory listing is switched on, and you can click into a SVC file to see the service discovery screen.
The Swagger defs work offline, but to prove a point I set up an Azure App Service for each service, downloaded the PublishProfiles, and published each project to the cloud, at taco-order and taco-location; here is one of the online Swagger defs:
(If it’s still online - unlikely - you can reach it at)
Conclusion
WCF is still a workable technology, and parallel SOAP with JSON is possible and practical. The URLs are ugly but this can be fixed with IIS Rewrites or with Azure API Management, which I'll explore in the next post. Swagger is easy to integrate with the SwaggerWcf library, but there are some snags.
Overall, for me, it was a cool experience and a helpful deep-dive of the technology.
Sources
I’ll repeat my proviso that “REST is not JSON” and when these authors say REST they generally mean a plain-ol’ JSON service.
How to enable REST and SOAP both on the same WCF Service
SwaggerWcf (NuGet Package)
5 simple steps to create your first RESTful service (WCF)
Expose WCF 4.0 Service as SOAP and REST
20120515¶
Schuldnerberatung¶
Fixed a bug in lino.ui.extjs3.ext_elems.DecimalFieldElement.get_column_options(). Fiddled with commas and points to get the formatting of amounts correct (lino.mixins.printable.decfmt()).
CBSS connection¶
Trying to understand why python manage.py test cbss.QuickTest doesn't work.
Getting answers from the people who run the CBSS is very hard. While waiting for their answer, I'll try to understand why there is an anomaly in the XML generated by SUDS: it uses two different prefixes, "ns1" and "SOAP-ENV", for the same namespace "http://schemas.xmlsoap.org/soap/envelope/":
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <ns1:Body>
    <wsc:xmlString xmlns:wsc="...">
      ...
    </wsc:xmlString>
  </ns1:Body>
</SOAP-ENV:Envelope>
That’s not elegant but should work after what I imagine to know about XML.
But I also found the following post from which I understand that there are SOAP parsers who might have problems with this: | http://luc.lino-framework.org/blog/2012/0515.html | CC-MAIN-2019-13 | refinedweb | 150 | 59.09 |
Justin Leet created HIVE-7898:
---------------------------------

Summary: HCatStorer should ignore namespaces generated by Pig
Key: HIVE-7898
URL:
Project: Hive
Issue Type: Improvement
Components: HCatalog
Affects Versions: 0.13.1
Reporter: Justin Leet
Assignee: Justin Leet
Priority: Minor
Currently, Pig aliases must exactly match the names of HCat columns for HCatStorer to be successful. However, several Pig operations prepend a namespace to the alias in order to differentiate fields (e.g. after a group with field b, you might have A::b). In this case, even if the fields are in the right order and the alias without namespace matches, the store will fail because it tries to match the long form of the alias, despite the namespace being extraneous information in this case. Note that multiple aliases can be applied (e.g. A::B::C::d).

A workaround is possible by doing a FOREACH relation GENERATE field1 AS field1, field2 AS field2, etc. This quickly becomes tedious and bloated for tables with many fields.

Changing this would normally require care around columns named, for example, `A::b` as has been introduced in Hive 13. However, a different function call only validates Pig aliases if they follow the old rules for Hive columns. As such, a direct change (rather than attempting to match either the namespace::alias or just alias) maintains compatibility for now.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
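The matching rule being proposed can be sketched roughly like this (illustrative code only, not the actual Hive patch; the class and method names are made up):

```java
// Sketch: fall back to comparing the alias with its Pig-generated namespaces
// stripped ("A::B::c" -> "c") when the exact alias doesn't match the column.
public class AliasMatcher {
    static String stripNamespaces(String pigAlias) {
        int idx = pigAlias.lastIndexOf("::");
        return idx < 0 ? pigAlias : pigAlias.substring(idx + 2);
    }

    static boolean matches(String pigAlias, String hcatColumn) {
        return pigAlias.equals(hcatColumn)
                || stripNamespaces(pigAlias).equals(hcatColumn);
    }
}
```

Taking only the segment after the last "::" handles the multi-namespace case (A::B::C::d) in one step.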
Thanks for the [CTRL]+[SHIFT]+J
VS has a bunch of keyboard shortcuts, most of which I don't know or can't memorize. Is there a master list somewhere? I always have a similar complaint about commands in the Immediate and Command windows. I miss the printed manual. :)
"because of _my_scripts_are_being_included_by_custom controls_ and_intellisense_doesn't_work_on_those_scripts, this feature is useless to me." :-)
The ScriptManager is the only control that does support this, but I suspect that is because Microsoft "cheated" in their implementation of intellisense.
@SharpGIS
That's fair enough, I've definitely heard that feedback. And yes, the ScriptManager does get a bit of special treatment in that regard. Just for me to get more context: how many custom controls are you using per project? How many scripts on average does each control include? Are any of the scripts dynamically included based on special logic, or is it always the same set?
it’s encouraging to know that Microsoft is listening and adding jQuery support is a start. but Microsoft has an opportunity to fully build out its IDE for javascript - better intellisense and a ‘Go To Definition’ and ‘Find All References’ would be a great addition.
developers still need to depend on Firebug or the like. Microsoft has cool server-side debugging but those same debugging techniques need to come to the client side as well + Firebug-like integration.
@CurlyFro: Have you tried the updated IE8 Dev Toolbar?
Regarding #6 (childnodes error), is there any way (log file or something) to see what Studio is looking for?
I've just done the "fake" -vsdoc.js for all my jQuery plugins and I am *still* getting the error
@Stephen: What scripts do you have remaining? Also what do you see in your error list? Really do feel free to email me at jking at microsoft dot com and we can diagnose the issue faster. Thanks!
A lot of these issues could be addressed if there was a way to make a <script> tag that works server side (i.e. runat="server" type behavior). This obviously wouldn't work since that connotes a different meaning, but maybe you could have an asp:script tag that has basically the same syntax as stock script but allows for things like relative paths and setting visibility (which would hide the script at runtime).
Similar thing for <asp:link> in headers.
I've often wished for this behavior. I've actually tried to create something custom for this but the problem is that as soon as you namespace those tags Intellisense no longer recognizes them and so this would have to be fixed in VS...
In the case of an ASP.NET MVC app, the recommended “../../folder/file” for MVC forms works fine to get IntelliSense in Visual Studio. However, the browser can’t find the JS file when you are viewing a page that is using the default (index) controller action or one without an ID. For example the following URL works:
The script tag is as follows <script src="../../Scripts/JQuery.js" type="text/javascript"></script>. And this is within my site.master file.
However if the URL is like the one below (i.e. I don’t need an ID as part of the URL), it doesn’t work. If I have the script tag as “../Scripts/JQuery.js” then the browser finds the JS correctly.

Any suggestions?
Not sure if it's too late, but I, too, would like VS to support Site-Relative Paths.
I agree with you. I think it is a good tool to use.
Hi Jeff, I'm currently using VS 2008 TS to write some JavaScript code. The best part IMHO is the absolutely fantastic integrated debugger, but what annoys me is the lack of bracket highlighting. If I write function(key) and put the cursor after ), the ( symbol doesn't change its color, like it does in C#. Is this a limitation of the IDE or just a problem with my config? TIA
Keep getting this error every time I try to update intellisense....
Warning 130 Error updating JScript IntelliSense: C:\Users\lukas\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5\V1QNKBDO\wt-fds90[1]..js: Object doesn't support this property or method @ 540:2
@Luke: Any particular reason you're referencing a temp internet file which I assume to be outside of your project? Most likely a getElementById call or createElement call has returned null and on line 540 the invocation of a method off that object fails. If possible, I would try to comment out that invocation.
It's actually the webtrends (wt.js) file that is causing issues for some reason. I have not been able to figure out exactly why yet. I have not been able to find anything online about it.
Here is the file.
Ping me if you want (lukas).
We use absolute (or as you refer to them -Site-Relative Paths) URLs to work around URL rewriting as a site might be many "virtual" directories deep when calling a CSS file and it needs to handle that, using "/" is the easiest way around that so it would be a welcome decision to hand that with the Intellisense files.
I feel it's a safe assumption to assume that the site's root is the web root as in the instances it isn't, I would have thought the developer would be using another method...
Just a thought but could you hook into the value for the Casinni start up URL (by default the application name)? We always update that to the root when we're not using sub-directories/virtual directories.
Tim
I've found that I don't get IntelliSense on XML doc comments in the same file in which they appear (i.e. I put an XML doc comment in one function, then later in the file I use that function and I don't get IntelliSense provided by that XML doc comment). Is there any plan to fix this soon?
Hi Paul,
The next version of Visual Studio will give you XML doc comment enhanced tooltips in the JavaScript file you are editing.
Sites are an extension to the XFN policies.
Sites can be named relative to
The enterprise root
An organizational unit
Sites named relative to the enterprise root are the same as sites named relative to the top organizational unit. Given an organization name, you can compose a name for its site context by using one of the namespace identifiers, site or _site. For example, if the enterprise root is ../doc.com the context for naming sites relative to the enterprise root is ../doc.com/site. Sites would have names like ../doc.com/site/alameda.
The following objects can be named relative to a site name:
Services at the site, such as the site schedule or calendar, printers, and faxes
The file service available at the site
These objects are named by composing the site name with the namespace identifier of the target object's namespace and the name of the target object. For example, the name site/Clark.bldg-5/service/calendar names the calendar service of the conference room Clark.bldg-5 and is obtained by composing the site name site/Clark.bldg-5 with the service name service/calendar. (See "Composing Names Relative to Sites" for a more detailed description of naming objects relative to sites.)
listen - listen for connections on a socket

Synopsis

#include <sys/socket.h>

int listen(int s, int backlog);

Description

To..
Return Value

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
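A minimal usage sketch (hypothetical helper, Linux/BSD sockets API) showing the usual socket → bind → listen sequence, with a backlog of 5 for portability:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns a listening TCP socket bound to a kernel-chosen loopback port,
   or -1 on failure. */
int start_listener(int backlog) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0; /* port 0 lets the kernel pick a free port */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(s, backlog) != 0) {
        close(s);
        return -1;
    }
    return s;
}
```

Binding to port 0 keeps the example self-contained, since no fixed port needs to be free.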
Conforming To

Single Unix, 4.4BSD, POSIX 1003.1g draft. The listen function call first appeared in 4.2BSD.
If the socket is of type AF_INET, and the backlog argument is greater than the constant SOMAXCONN (128 in Linux 2.0 & 2.2), it is silently truncated to SOMAXCONN. Dont rely on this value in portable applications since BSD (and some BSD-derived systems) limit the backlog to 5.
accept(2), connect(2), socket(2) | http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man2/listen.2 | CC-MAIN-2013-20 | refinedweb | 111 | 62.34 |
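As a quick sanity check, the same socket()/bind()/listen() sequence is available through Python's standard socket module, which wraps the C calls described above:

```python
import socket

# socket() -> bind() -> listen(): the canonical sequence from the man page.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
s.listen(128)              # backlog; values above SOMAXCONN are silently truncated
host, port = s.getsockname()
print("listening on %s:%d" % (host, port))
s.close()
```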
Capture specific webpage text using regex/HTML search and save as a new text file
I wanted to know if there are any good scripts in Python that could demonstrate capturing specific webpage text using regex/HTML search and saving it as a new text file in Editorial, before I embark on making one myself?
I couldn't find one using search.
I avoid regex like the plague that it is. You might check out the
get_module_version_from_pypi() function in
It uses BeautifulSoup 4 and requests that are both built into Pythonista (and I hope Editorial too).
import bs4, requests

def get_soup(url):
    return bs4.BeautifulSoup(requests.get(url).text, 'html5lib')
I agree with @ccc
I used to use regex to do that sort of thing. You can get it to work, but it takes a lot of effort.
So I embarked to learn how Beautiful Soup works, and now that I've learned it a bit it's way easier and much more effective for many things (not just search).
Here's a little example from cheese.com. (I like cheese...not that I check this website, but cheese is good.)
# coding: utf-8
import requests
from bs4 import BeautifulSoup

url = ''
soup = BeautifulSoup(requests.get(url).text)
print soup.find('div', id='abstract')  # find one div with id 'abstract' and print it
- Webmaster4o
Just as a side note, I think it's also worth checking to see if a webpage/site has a JSON interface. Of course many are protective of their data, but it seems like more and more sites are offering JSON data. It can save a lot of work and avoid fragile scripts.
Actually, I was just looking around. I found this site. But there are many sites like this
The below is to get info on Formula 1 Drivers. But if you look at the API, you can get everything about F1. It's so nice. Could easily write an F1 App with this API. I chopped this up. In the full code I am caching to disk etc.
But there is another interesting line that's commented out: the URL to a GitHub repo for a Pantone color list. I just left it there as an example; I didn't know before how easy it was to do that. The sample code does not deal with that site, but it's really interesting that we could publish data in our own repos that we could all use.
Anyway, hope I haven't convoluted the conversation. But the possibilities are exciting.
import json, requests

def get_json_data(url):
    r = requests.get(url)
    print('r == ', r)
    if r.status_code == 200:
        return r.json()

if __name__ == '__main__':
    #url = ''
    url = ''
    r = get_json_data(url)
    drivers = r['MRData']['DriverTable']['Drivers']
    print(drivers)
Wow, this is great. You got a pretty active community here!
I'm basically trying to scrape off some text from webpages, and no, there's no JSON there. I'll surely look at this Beautiful Soup thing. This is the first time I've heard of it.
So my plan is:
I'm probably gonna use Pythonista's action extension.
- Browse the page on Safari (in iOS)
- execute the Pythonista extension (assuming I make it).
- Get the result Text
- save/append it to a file in Dropbox.
@rayseed
Do you want to scrape text off of any old website? Or a particular one?
If it's for random sites, then even using Beautiful soup will be a little tough. Well, anything would be tough for that matter :)
Usually site developers use some constants throughout their code - but going from different site to different site it's not constant and that's where you would need flexibility in your implementation of Beautiful Soup.
But, often, headings are in heading tags and text content is in p tags and etc. So...a generic scraper is possible - but may not get everything, or most likely you'll get more than what you want.
Example:
# coding: utf-8
import requests
from bs4 import BeautifulSoup

url = ''
soup = BeautifulSoup(requests.get(url).text)
for i in soup.find_all(lambda tag: tag.parent.name == 'body'):
    print i.text.strip()  # gives a lot of junk...
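If you'd rather not depend on BeautifulSoup at all, the heading/paragraph heuristic described above can be sketched with just the standard library (Python 3 spelling shown; the tag list is only a guess at what's usually worth keeping):

```python
from html.parser import HTMLParser

KEEP = {'h1', 'h2', 'h3', 'h4', 'p', 'li'}   # tags that usually hold readable text

class TextScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0     # > 0 while inside a tag we care about
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in KEEP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in KEEP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

scraper = TextScraper()
scraper.feed('<html><body><h1>Cheese</h1><div>skip me'
             '</div><p>Cheddar is good.</p></body></html>')
print('\n'.join(scraper.chunks))
```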
Why not java:global/datasources? (Aaron Harshbarger, Mar 28, 2012 5:36 AM)
We are migrating our Spring application over to AS7 and are experiencing issues with getting the datasources to work. When we update the <datasources> element within the standalone-full.xml file, it will not accept a jndi-name that starts with java:global/datasources. It only accepts a jndi-name that starts with java:jboss/datasources.
We really want to stay away from any proprietary hooks, which is why the EE6 spec has a java:global namespace. Any reason why JBoss AS 7.1.1 doesn't support the java:global/datasources namespace? I thought it was EE6 certified? Are we doing something wrong?
Below is what we want to have in our Spring EAR/WAR to avoid any tie to a particular vendor. The reason is this could run in JBoss, Tomcat (Standalone), etc.
Any help would be greatly appreciated.
Thanks,
Aaron
1. Re: Why not java:global/datasources? (Erhard Siegl, Apr 17, 2012 6:20 PM, in response to Aaron Harshbarger)
I came across the same question. When I try to use the "java:global" namespace it says:
:write-attribute(name=jndi-name, value=java:global/datasources/bookingDatasource)
{
"outcome" => "failed",
"failure-description" => "JBAS010471: Jndi name have to start with java:/ or java:jboss/",
"rolled-back" => true
}
and indeed a datasource java:/datasources/bookingDatasource works. From the documentation I thought it wouldn't work:
Especially from 3. I expected "java:/datasources/bookingDatasource" not to be valid, and from 5. that "java:global/datasources/bookingDatasource" would be valid. One could argue that the "should" is not a "must" and that "java:/" is not of form "java:xxx", but it would be helpful if someone can clarify this.
2. Re: Why not java:global/datasources? (navuri prasad, Apr 18, 2012 2:40 AM, in response to Erhard Siegl)
<jee:jndi-lookup
try like this way
<jee:jndi-lookup
3. Re: Why not java:global/datasources? (Carlo de Wolf, Apr 18, 2012 3:58 AM, in response to Aaron Harshbarger)
java:global/datasources is not defined in the spec.
If you truly want to be vendor independent you should bind the datasource to a JBoss valid JNDI name and use only resource refs in your application.
4. Re: Why not java:global/datasources? (Erhard Siegl, Apr 18, 2012 5:17 AM, in response to Carlo de Wolf)
(As far as I understand the question is about the "java:global" part and not about the "datasources" part. "java:global/env/MyDS" would be the same)
So do I interpret your answer correctly that the namespaces java:comp, ..., java:global are only specified for EJBs, that this can't be applied to other things like datasources, and that there is no specified global name for datasources?
Of course the resource refs are the clean way, but one still has to edit some deployment descriptors within a package. (As far as I know there is no way to configure it from outside, like deployment-profiles in WebLogic? But that's probably another topic.)
5. Re: Why not java:global/datasources? (Stuart Douglas, Apr 18, 2012 5:45 AM, in response to Aaron Harshbarger)
I think this is a bug, I can't think of any reason why we don't allow this. Can you file a JIRA?
6. Re: Why not java:global/datasources? (Stephen Coy, Apr 19, 2012 10:08 PM, in response to Stuart Douglas)
I originally thought that this was a bug too, but I've just re-read the spec (§E5.2.2) on this.
java:comp, java:module, java:app and java:global are all names in the environment naming context (aka ENC).
i.e. java:comp/... is not in the global JNDI namespace, and neither is java:global/...
As Carlo mentioned, java:global/datasources is undefined. The intent of the spec is that the user can specify java:global/env/datasources/bookingDataSource and that the "deployer" can map this to a physical resource such as java:jboss/datasources/bookingDataSource.
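To make that mapping concrete, here is a hypothetical sketch (the jdbc/bookingDS name is made up): the portable reference goes in web.xml and the JBoss-specific binding goes in jboss-web.xml:

```xml
<!-- web.xml: portable, vendor-neutral reference -->
<resource-ref>
    <res-ref-name>jdbc/bookingDS</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>

<!-- jboss-web.xml: JBoss-specific binding to the physical resource -->
<resource-ref>
    <res-ref-name>jdbc/bookingDS</res-ref-name>
    <jndi-name>java:jboss/datasources/bookingDatasource</jndi-name>
</resource-ref>
```

The application then looks the datasource up under java:comp/env/jdbc/bookingDS, which works the same way on any compliant server.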
By the way, in my experience, 4 out of 5 "enterprise java" developers have never heard of the ENC and just use global JNDI names everywhere.
7. Re: Why not java:global/datasources? (Erhard Siegl, Apr 18, 2012 1:33 PM, in response to Stephen Coy)
Stephen, thx for the information and for the reference to the spec (found at).
8. Re: Why not java:global/datasources? (aquijano, Dec 13, 2012 10:03 AM, in response to Stephen Coy)
I'm still not sure I understand this. In the JEE 6 spec, section §EE.5.17 "DataSource Resource Definition" states:
The DataSource resource may be defined in any of the JNDI namespaces
described in Section EE.5.2.2, “Application Component Environment
Namespaces”. For example, a DataSource resource may be defined:
• in the java:comp namespace, for use by a single component;
• in the java:module namespace, for use by all components in a module;
• in the java:app namespace, for use by all components in an application;
• in the java:global namespace, for use by all applications.
And then there's an example of a datasource bound to java:app/MyDataSource, which JBoss wouldn't allow. What am I missing?
This document is also available in these non-normative formats: XML and with differences marked.
Copyright © 2012 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document is being built to articulate requirements for the development of a subsequent version of XProc: An XML Pipeline Language. This Working Draft has been produced by the authors listed above, at the will of the chair, and with the consent of the members.
1.1 XProc V.next Goals
1.2 Editorial Process
2 Terminology
3 Design Principles
3.1 Technology Neutral
3.2 Platform Neutral
3.3 Small and Simple
3.4 Infoset Processing
3.5 Straightforward Core Implementation
3.6 Address Practical Interoperability
3.7 Validation of XML Pipeline Documents by a Schema
3.8 Reuse and Support for Existing Specifications
3.9 Arbitrary Components
3.10 Control of Inputs and Outputs
3.11 Control of Flow and Errors
4 Requirements
4.1 Standard Names in Steps
A Normative References
A.1 Reference Documents
A.2 Core XML Specifications
A.3 XML Data Model and XML Information Set
A.4 XPath and XQuery
A.5 Style, Transform, Serialize
A.6 XML Schema Languages
A.7 Identifiers and Names
A.8 HTTP Request & Authentication
A.9 Character Encodings
A.10 Media Types
A.11 Digital Signatures
B Non-Normative References
B.1 Candidate Specification: Mathematics
B.2 Candidate Specification: EXI
B.3 Candidate Specifications: HTML
B.4 Candidate Specifications: Digital Signatures and Encryption
B.5 Candidate Specifications: Semantic Web
B.6 Candidate Specification: Mail Messages
B.7 Candidate Non-XML Data Format Specifications
B.8 Reference Processors?
C Unsatisfied V1 CR Issues
C.1 Issue 001: p:template extension
C.2 Issue 004: attribute value templates
C.3 Issue 006: p:data/p:load harmonization
C.4 Issue 010: document base URI
C.5 Issue 015: JSON hack
C.6 Issue 016: conditional output port
C.7 Issue 017: p:store
D Unsatisfied V1 Requirements and Use Cases
E FYI: Categorized Steps
E.1 Library and Pipeline Construction
E.2 Core Pipeline Operations
E.3 Input Sources
E.4 Output Targets
E.5 Variables, Options and Parameters
E.6 Micro-operations
E.7 Transformation
E.8 Query
E.9 Validation
E.10 Document Operations
E.11 File & Directory Operations
E.12 Image Operations
E.13 Sequence Operations
E.14 Input / Output
E.15 Encoding
E.16 Execution Control
E.17 Resource / Collection Management
E.18 Miscellaneous
E.19 XProc Operations
E.20 Environment
E.21 Error / Message Handling
E.22 Debugging
F Collected Input
F.1 Architecture
F.1.1 What Flows?
F.1.1.1 Sequences
F.1.1.2 Sets of Documents
F.1.1.3 MetaData, HTML5, JSON, Plain Text
F.1.2 Events
F.1.3 Synchronization & Concurrency
F.2 Resource Management
F.2.1 Add a Resource Manager
F.2.2 Dynamic pipeline execution
F.2.2.1 Dynamic Manifolds
F.2.3 Information caches
F.2.4 Environment
F.2.5 Datatypes
F.3 Integration
F.3.1 XML Choreography
F.3.2 Authentication
F.3.3 Clustering
F.3.4 Debugging
F.3.5 Fall-back Mechanism
F.3.6 Test Suite
F.4 Usability
F.4.1 Cross platform pipelines
F.4.2 Documentation Conventions
F.4.2.1 p:documentation
F.4.3 Verbosity
F.4.3.1 p:data
F.4.3.2 c:data
F.4.3.3 p:input
F.4.3.4 p:load
F.4.3.5 p:option
F.4.3.6 p:pipe
F.4.3.7 p:serialization
F.4.3.8 p:store
F.4.3.9 p:string-replace
F.4.3.10 p:template
F.4.3.11 p:try
F.4.3.12 p:variable
F.4.3.13 p:viewport
F.4.4 Parameter Rules
F.4.5 Choose-style binding
F.4.6 Remove Restriction on variables/options/params
F.4.7 Attribute Value Templates
F.4.8 Loading computed URIs
F.4.9 Optional options for declared steps
F.4.10 Output signatures for compound steps
F.4.11 XPath
F.4.12 Simplify Use of File Sets
F.4.13 Streaming and Parallel Processing
F.4.14 Required Primary Port
F.5 New Steps
F.5.1 Various Suggestions
F.5.2 OS Operations
F.5.2.1 pos:cwd
F.5.2.2 pos:env
F.5.2.3 pos:info
F.5.3 Directory Operations
F.5.3.1 pxf:copy
F.5.3.2 pxf:chdir
F.5.3.3 pxf:delete
F.5.3.4 pxf:head
F.5.3.5 pxf:info
F.5.3.6 pxf:mkdir
F.5.3.7 pxf:move
F.5.3.8 pxf:tail
F.5.3.9 pxf:tempfile
F.5.3.10 pxf:touch
F.5.4 Zip Operations
F.5.4.1 pxp:unzip
F.5.4.1.1 Thoughts from Vojtech
F.5.4.2 pxp:zip
F.5.4.2.1 Thoughts from Vojtech
F.5.5 Cookie Operations
F.5.5.1 cx:get-cookies
F.5.5.2 cx:set-cookies
F.5.6 Dynamic pipeline evaluation
F.5.6.1 xyz:apply
F.5.6.2 cx:eval
F.5.7 Validation Operations
F.5.7.1 pxp:nvdl
F.5.8 Messaging Operation
F.5.8.1 cx:send-mail
F.5.9 Digital Signatures
F.5.9.1 xyz:sign
F.5.10 File Sets
F.5.10.1 xyz:documents
F.5.11 Iteration
F.5.11.1 xyz:iterate
F.5.11.2 p:iteration-source
F.5.11.3 xyz:until-unchanged
F.5.12 Debugging Operations
F.5.12.1 dbxml:breakpoint
F.5.12.2 dbxml:comment
F.5.12.3 dbxml:debug
F.5.12.4 dbxml:message
F.5.12.5 dbxml:trace
F.5.12.6 dbxml:tracediff
G Contributors
A large and growing set of specifications describe processes operating on XML documents. Many applications depend on the use of more than one of the many inter-related XML family of specifications. How implementations of these specifications interact affects interoperability. XProc: An XML Pipeline Language is designed for describing operations to be performed on XML documents.
"There are three kinds of steps: atomic steps, compound steps, and multi-container steps. Atomic steps carry out single operations and have no substructure as far as the pipeline is concerned. Compound steps and multi-container steps control the execution of other steps, which they include in the form of one or more subpipelines." -- XProc: An XML Pipeline Language
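The flow model in that quotation, outputs of one step feeding the inputs of the next, can be caricatured in a few lines of Python (steps here are plain functions on strings; real XProc steps operate on XML documents):

```python
from functools import reduce

def pipeline(*steps):
    """Connect steps so each one's output flows into the next one's input."""
    return lambda doc: reduce(lambda d, step: step(d), steps, doc)

# Three toy 'atomic steps'.
strip = str.strip
upper = str.upper
exclaim = lambda d: d + '!'

run = pipeline(strip, upper, exclaim)
print(run('  hello pipelines  '))   # -> HELLO PIPELINES!
```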
This specification contains requirements for an anticipated XProc V.next. This specification is concerned with the conceptual model of XML process interactions, the language for the description of these interactions, and the inputs and outputs of the overall process. This specification is not generally concerned with the implementations of actual XML processes participating in these interactions.
Improving ease of use (syntactic improvements)
Improving ease of use (increasing the scope: non XML content, for example)
Addressing known shortcomings in the language
Improve relationship with streaming and parallel processing
The following is a strawman list; it has no standing with the Working Group and is likely to be replaced and/or expanded daily until further notice.
Iterate until ready to declare success (p:iterate-until).
Review 2 Terminology.
Review 3 Design Principles.
Review 4 Requirements.
Review 5 Use cases
Gather and review A Normative References
Gather and review C Unsatisfied V1 CR Issues
Audit existing D Unsatisfied V1 Requirements and Use Cases
Gather and review E FYI: Categorized Steps
Gather and review input from stakeholders.
Discuss.
Update existing definitions, design principles, requirements and use cases.
Enumerate new definitions, design principles, requirements and use cases.
Review.
Approve.
Publish.
Note:
The Working Group should review the definitions included here to determine whether changes are warranted in light of the publication of XProc: An XML Pipeline Language. Additional term definitions may be warranted and will be added as needed.
"[A pipeline is a set of connected steps, with outputs of one step flowing into inputs of another.]" -- XProc: An XML Pipeline Language
A pipeline specification document is an XML document that described an XML pipeline.
This definition does not seem to be helpful any longer. XProc 1.0 refers to an XML pipeline, or simply a pipeline.
A step is a specification of how a component is used in a pipeline that includes inputs, outputs, and parameters.
"[A step is the basic computational unit of a pipeline.] A typical step has zero or more inputs, from which it receives XML documents to process, zero or more outputs, to which it sends XML document results, and can have options and/or parameters. There are three kinds of steps: atomic, compound, and multi-container. A pipeline is itself a step and must satisfy the constraints on steps. Connections between steps occur where the input of one step is connected to the output of another."-- XProc: An XML Pipeline Language
A component is a particular XML technology (e.g. XInclude, XML Schema Validity Assessment, XSLT, XQuery, etc.).
An XML infoset that is an input to a XML Pipeline or Step.
Relates to F.1.1 What Flows?
The result of processing by an XML Pipeline or Step.
"[The output ports declared on a step are its declared outputs.] When a step is used in a pipeline, it is connected to other steps through its inputs and outputs." -- XProc: An XML Pipeline Language.
"Some steps accept parameters. Parameters are name/value pairs, like variables and options. Unlike variables and options, which have names known in advance to the pipeline, parameters are not declared and their names may be unknown to the pipeline author. Pipelines can dynamically construct sets of parameters. Steps can read dynamically constructed sets on parameter input ports. [...] A parameter input port is a distinguished kind of input port which accepts (only) dynamically constructed parameter name/value pairs".-- XProc: An XML Pipeline Language
Relates to F.4.4 Parameter Rules and C.2 Issue 004: attribute value templates
The technology or platform environment in which the XML Pipeline is used (e.g. command-line, web servers, editors, browsers, embedded applications, etc.).
"[The environment is a context-dependent collection of information available within subpipelines.] Most of the information in the environment is static and can be computed for each subpipeline before evaluation of the pipeline as a whole begins. The in-scope bindings have to be calculated as the pipeline is being evaluated." -- XProc: An XML Pipeline Language
Relates to proposed steps: F.5.2.2 pos:env and F.5.3.5 pxf:info
The ability to parse an XML document and pass infoitems between components without building a full document information set.
This editor has not discovered corresponding language in XProc: An XML Pipeline Language. Relates to Usability: F.4.13 Streaming and Parallel Processing. -- MM
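Python's xml.etree.iterparse illustrates the pattern this definition describes: items are handled as parsing proceeds and then discarded, so the full document infoset is never built. The inline document below is only there to keep the sketch self-contained:

```python
import io
import xml.etree.ElementTree as ET

xml_doc = b"<docs><doc n='1'/><doc n='2'/><doc n='3'/></docs>"

count = 0
for event, elem in ET.iterparse(io.BytesIO(xml_doc), events=('end',)):
    if elem.tag == 'doc':
        count += 1       # handle the item as it streams past...
        elem.clear()     # ...then drop it instead of keeping the whole tree
print(count)             # -> 3
```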
Note:
The Working Group should review the design principles included here to determine whether changes are warranted in light of the publication of XProc: An XML Pipeline Language. Additional design principles may be warranted and will be added as needed.
Please note that section numbering has been added to facilitate hypertextual references to the individual design principles.
Relates to F.4.11 XPath.
Probably should add Schematron. [Schematron]
XML Pipelines need to support existing XML specifications and reuse common design patterns from within them. In addition, there must be support for the use of future specifications as much as possible.
The specification should allow use of both explicit and implicit handling of the flow of documents between steps. When errors occur, these must be able to be handled explicitly to allow alternate courses of action within the XML Pipeline.
Note:
In this section, Editor's Notes appended to each sub-section provide commentary on the status of each requirement. In particular, the editors have made note of whether a requirement has been demonstrably "Satisfied" or whether it remains "Unsatisfied". In the case of requirements that remain Unsatisfied, the editors intend to record potential solutions, in the form of proposals for new steps or changes to existing steps. In the case of demonstrably Satisfied requirements, the editors intend to provide examples, or links to examples, especially those in XProc: An XML Pipeline Language.
XProc must have standard names for atomic steps that correspond with, but not limited to, the following specifications [xml-core-wg]:
XInclude [XInclude]
XSLT [XSLT-1.0], [XSLT-2.0]
XSL FO [Serialization]
XQuery [XQuery-1.0]
XPath and Functions [XPath1.0], [XPath-2.0][XPath-Functions]
XML Schema [XMLSchema1][XMLSchema2]
RELAX NG. [RELAX-NG]
Schematron [Schematron]
HTTP Request and Authentication [RFC-2616] [RFC-2616]
An XML Pipeline must allow applications to define and share new steps that use new or existing components. [xml-core-wg]
There must be a minimal inventory of components defined by the specification that are required to be supported to facilitate interoperability of XML Pipelines.
XProc identifies its Standard Step Library and subdivides it into Required Steps and Optional Steps.
Relates to F.4.11 XPath.
The following use cases supported our requirements and informed our design. While there was a desire to address all the use cases listed in this document, in the end, the first version may not have solved all of them. Those unsolved use cases may be migrated into XProc V.next.
Note:
In this section, Editor's Notes appended to each sub-section provide commentary on the status of each Use Case. In particular, the editors have made note of whether a Use Case has been demonstrably "Satisfied" or whether it remains "Unsatisfied". A "TBD" anotation indicates that the status has yet to be ascertained. Some use cases may be only partially satisfied.
In the case of requirements that remain Unsatisfied, the editors intend to record potential solutions, in the form of proposals for new steps or changes to existing steps. In the case of demonstrably Satisfied requirements, the editors intend to provide illustrative examples, or links to examples, especially those in the XProc: An XML Pipeline Language.
Note that these determinations of status are subject to change, especially in the early stages of the development of this document. -- MM
We could refactor this use case, using p:viewport to extract MathML. We could model the rendering steps, but the existence of implementations is beyond the scope of XProc itself. That is, step 2 is a black box to us; we simply don't care whether it works, so long as we can model it.
Extract MathML fragments from an XHTML document
Transform each MathML element into one or more substitutes:
Apply a computation (e.g. compute the kernel of a matrix).
Render extracted fragments as JPEG images.
Employ an SVG renderer for SVG glyphs embedded in the MathML.
Render using TeX
Render using eqn/troff
Replace MathML fragments with computed and/or rendered equivalents.
Please provide an example of a step that responds to this use case.
<?xml version="1.0" encoding="UTF-8"?> <p:declare-step xmlns: <p:input <p:inline> <test/> </p:inline> </p:input> <p:output <p:exec </p:declare-step>
will generate
<c:result xmlns: <test/> </c:result>.
Alex to refactor these use cases:
5.12 A Simple Transformation Service.
Part 1: Parsing the descriptions:
<?xml version="1.0" encoding="UTF-8"?> <p:declare-step xmlns: <p:output <p:http-request> <p:input <p:inline> <c:request </p:inline> </p:input> </p:http-request> <p:viewport <p:unescape-markup/> </p:viewport> </p:declare-step>
Part 2: Escaping the description markup:
<?xml version="1.0" encoding="UTF-8"?> <p:declare-step xmlns: <p:output <p:import <e:get-rss/> <p:viewport <p:escape-markup/> </p:viewport> </p:declare-step>.
<p:pipeline <p:declare-step <p:input <p:output <p:option </p:declare-step> <cx:collection-manager <p:input <p:inline><doc1/></p:inline> <p:inline><doc2/></p:inline> <p:inline><doc3/></p:inline> </p:input> </cx:collection-manager> <p:xslt> <p:input <p:pipe </p:input> <p:input <p:inline> <xsl:stylesheet xmlns: <xsl:output <xsl:param <xsl:template <collection uri="{$collection}"> <xsl:value-of </collection> </xsl:template> </xsl:stylesheet> </p:inline> </p:input> </p:xslt> </p:pipeline>.
This pipeline accepts a "uri" document on the source port, uses that URI to construct a (brain-dead simple) query against a database, runs that query, and styles the result.
<p:declare-step <p:input <p:inline> <uri>/2003/08/20/fungus</uri> </p:inline> </p:input> <p:output <p:input <p:declare-step <p:input <p:input <p:output <p:option <p:option <p:option <p:option <p:option <p:option </p:declare-step> <p:template> <p:input <p:inline> <c:xquery> doc("/production{string(/uri)}.xml") </c:xquery> </p:inline> </p:input> <p:input <p:pipe </p:input> </p:template> <ml:adhoc-query <p:xslt> <p:input <p:document </p:input> </p:xslt> </p:declare-step>
Relates to F.1.1 What Flows? and 5.19 Read/Write Non-XML File and 5.26 Non-XML Document Production
Read a CSV [CSV] file and convert it to XML.
Process the document with XSLT.
Convert the result to a CSV format using text serialization.
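Outside of XSLT, step 1 of this use case (CSV to XML) can be sketched in a few lines of Python; the <table>/<row>/<cell> vocabulary is invented here, since XProc does not prescribe one:

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(text):
    """Wrap each CSV row in <row> and each field in <cell> under a <table> root."""
    root = ET.Element('table')
    for row in csv.reader(io.StringIO(text)):
        row_el = ET.SubElement(root, 'row')
        for cell in row:
            ET.SubElement(row_el, 'cell').text = cell
    return ET.tostring(root, encoding='unicode')

print(csv_to_xml('a,b\n1,2\n'))
# -> <table><row><cell>a</cell><cell>b</cell></row><row><cell>1</cell><cell>2</cell></row></table>
```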
The specific use case described in 5.19 (converting a CSV file to XML) can be solved by using XSLT 2.0 to tokenize the CSV data and turn it into XML. The example below uses the stylesheet developed by Andrew Welsh ():
<p:declare-step > <p:output <p:option <p:xslt <p:input <p:empty/> </p:input> <p:input <p:document </p:input> <!-- note that relative paths are resolved against the stylesheet's base URI --> <p:with-param </p:xslt> </p:declare-step>
In this solution, the stylesheet loads the CSV file. I think it should be straightforward to modify the pipeline/stylesheet so that the pipeline itself loads the CSV file (using p:data or p:http-request) and passes the c:data-wrapped representation to the stylesheet.
Receive an XML document to save.
Check the database to see if the document exists.
If the document exists, update the document.
If the document does not exists, add the document.
Need an example showing a step accessing a DB.
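XProc itself defines no database steps, but the check-then-update-or-insert logic of this use case can be sketched against SQLite (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE docs (uri TEXT PRIMARY KEY, body TEXT)')

def save_document(conn, uri, body):
    """Update the document if it exists, otherwise add it."""
    cur = conn.execute('SELECT 1 FROM docs WHERE uri = ?', (uri,))
    if cur.fetchone() is None:
        conn.execute('INSERT INTO docs (uri, body) VALUES (?, ?)', (uri, body))
    else:
        conn.execute('UPDATE docs SET body = ? WHERE uri = ?', (body, uri))
    conn.commit()

save_document(conn, '/a.xml', '<doc/>')          # document did not exist: added
save_document(conn, '/a.xml', '<doc rev="2"/>')  # document exists: updated
print(conn.execute('SELECT body FROM docs').fetchall())
```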
Receive an XML document to format.
If the document is XHTML, apply a theme via XSLT and serialize as HTML.
If the document is XSL-FO, apply an XSL FO processor to produce PDF.
Otherwise, serialize the document as XML.
This one is a little tricky as XProc does not support specifying serialization options on output ports dynamically. Because of that, it is not possible to write a pipeline with a single "result" output port that uses different serialization options that depend on the (dynamic) data content type. One solution is to have multiple output ports ("result-html", "result-xml", ...) with different serialization options, but that's probably silly and too inconvenient to work with (plus it does not work with non-XML data). Another solution is not to have any output ports at all and use p:store instead. The drawback of this is that p:store writes the data to an external location and therefore breaks the pipeline flow, but you can have multiple p:store steps with different serialization options, or you can even set the serialization options on p:store dynamically. Because the p:xsl-formatter renders the XSL-FO document to an external location, I went for the p:store solution:
<p:declare-step xmlns: <p:input <p:option <p:choose> <p:when <!-- apply a theme using XSLT and serialize as HTML --> <p:xslt> <p:input <p:document </p:input> <p:input <p:empty/> </p:input> </p:xslt> <p:store <p:with-option </p:store> </p:when> <p:when <!-- apply an XSL-FO processor--> <p:xsl-formatter> <p:with-option <p:input <p:empty/> </p:input> </p:xsl-formatter> </p:when> <p:otherwise> <!-- serialize as XML --> <p:store> <p:with-option </p:store> </p:otherwise> </p:choose> </p:declare-step>.
The newsfeed example (the mobile example is just a combination of the newsfeed example and 5.21):
<p:pipeline > <p:option <p:choose> <p:when <p:xslt> <p:input <p:document </p:input> </p:xslt> </p:when> <p:when <p:xslt> <p:input <p:document </p:input> </p:xslt> </p:when> </p:choose> </p:pipeline>.
This pipeline takes an XML-RPC request document and invokes a method (an XProc pipeline) based on the value of /methodCall/methodName. Because there is no standard p:eval step for dynamic evaluation of XProc pipelines, we have to use p:choose which lists all possible pipelines statically.
The pipeline below is rather simplistic in the sense that it does not try to interpret XMLRPC's "int", "string", "struct", etc. elements. The input data is passed in the original XMLRPC format to the invoked pipelines, and likewise, the pipelines are expected to represent their results in XMLRPC format.
<p:pipeline xmlns: <!-- Defines various 'method' pipelines in the "" namespace. Pipeline interface contract: - a single (primary) input port - a single (primary output port) - expect a single <params> input document - produce a single <params> or <fault> output document --> <p:import <p:pipeline <p:variable <p:identity> <p:input </p:identity> <p:try> <p:group> <!-- Note: the p:choose could be replaced with a single call to p:eval if we had such a step --> <p:choose> <p:when <ex:method1/> </p:when> <p:when <ex:method2/> </p:when> <p:otherwise> <p:template <p:input <p:inline> <message>Unsupported method: {$method}</message> </p:inline> </p:input> <p:with-param </p:template> <p:error <p:input <p:pipe </p:input> </p:error> </p:otherwise> </p:choose> </p:group> <p:catch <p:template> <p:input <p:pipe </p:input> <p:input <p:inline> <fault> <value> <struct> <member> <name>faultCode</name> <value><int>-1</int></value> </member> <member> <name>faultString</name> <value><string>{string(/*)}</string></value> </member> </struct> </value> </fault> </p:inline> </p:input> </p:template> </p:catch> </p:try> <p:wrap-sequence </p:pipeline> <p:validate-with-relax-ng> <p:input <p:data </p:input> </p:validate-with-relax-ng> <ex:invoke-method/> <p:validate-with-xml-schema> <p:input <p:document </p:input> </p:validate-with-xml-schema> </p:pipeline>.
Relates to F.1.1 What Flows?
Relates to F.1.1 What Flows? and 5.19 Read/Write Non-XML File and 5.26 Non-XML Document Production.
Relates to F.5.11 Iteration
Relates to F.4.11 XPath.
<?xml version="1.0" encoding="UTF-8"?> <p:declare-step xmlns: <p:input <p:inline> <html xmlns=""> <head><title>Test Document</title></head> <body> <div class="main"> <p>I can be arbitrarily large.</p> </div> </body> </html> </p:inline> </p:input> <p:output <p:insert <p:input <p:inline> <ul class="navigation"> <li><a href="/about/">About</a></li> <li><a href="/xml/">Fantastic XML Stuff</a></li> <li><a href="/cats/">Pictures of Cats</a></li> </ul> </p:inline> </p:input> </p:insert> </p:declare-step>
Relates to F.4.11 XPath.
Relates to F.3.5 Fall-back Mechanism.
The pipeline below does the following:
Checks if XSLT 2.0 is supported.
If XSLT 2.0 is available, it applies an XSLT 2.0 stylesheet to the input XML document. The stylesheet uses xsl:result-document to generate secondary output documents.
If XSLT 2.0 is not available, it applies an XSLT 1.0 stylesheet. The stylesheet uses either the exsl:document or result:write extension (whichever is available) to generate secondary output documents.
The pipeline has two output ports: the "result" output port for the primary result of the XSLT transformation, and "secondary" for the secondary documents.
...the pipeline almost works. The problem is with the XSLT 1.0 transformation, because the secondary documents do not appear on the "secondary" port of the p:xslt step. This is actually a requirement made by the XProc specification: "If XSLT 1.0 is used, an empty sequence of documents must appear on the secondary port." The exact behavior of exsl:document and result:write in the XProc context is implementation-defined; in most cases, the generated documents will simply be written to the specified external location.
<p:pipeline
  <p:output
    <p:pipe
  </p:output>
  <p:declare-step
    <p:output
    <p:try>
      <p:group>
        <p:xslt
          <p:input<p:inline><foo/></p:inline></p:input>
          <p:input
            <p:inline>
              <xsl:stylesheet xmlns:
                <xsl:template
                  <true><xsl:value-of</true>
                </xsl:template>
              </xsl:stylesheet>
            </p:inline>
          </p:input>
          <p:input<p:empty/></p:input>
        </p:xslt>
      </p:group>
      <p:catch>
        <p:identity>
          <p:input
            <p:inline><false/></p:inline>
          </p:input>
        </p:identity>
      </p:catch>
    </p:try>
  </p:declare-step>
  <ex:is-xslt20-supported/>
  <p:choose
    <p:when
      <p:output
      <p:output
        <p:pipe
      </p:output>
      <p:xslt
        <p:input<p:pipe</p:input>
        <p:input
          <p:inline>
            <xsl:stylesheet xmlns:
              <xsl:template
                <doc>Hello world!</doc>
              </xsl:template>
              <xsl:template
                <xsl:result-document
                  <xsl:call-template
                </xsl:result-document>
                <ignored/>
              </xsl:template>
            </xsl:stylesheet>
          </p:inline>
        </p:input>
      </p:xslt>
    </p:when>
    <p:otherwise>
      <p:output
      <p:output
        <p:pipe
      </p:output>
      <p:xslt
        <p:input<p:pipe</p:input>
        <p:input
          <p:inline>
            <xsl:stylesheet xmlns:
              <xsl:template
                <doc>Hello world!</doc>
              </xsl:template>
              <xsl:template
                <exsl:document
                  <xsl:call-template
                  <xsl:fallback>
                    <redirect:write
                      <xsl:call-template
                    </redirect:write>
                  </xsl:fallback>
                </exsl:document>
                <ignored/>
              </xsl:template>
            </xsl:stylesheet>
          </p:inline>
        </p:input>
      </p:xslt>
    </p:otherwise>
  </p:choose>
</p:pipeline>
Relates to F.3.5 Fall-back Mechanism.
<p:declare-step xmlns:
  <p:output
  <p:template>
    <p:input
      <p:inline>
        <root>
          Is XQuery available : { $has-xquery }
        </root>
      </p:inline>
    </p:input>
    <p:input
      <p:empty/>
    </p:input>
    <p:with-param
  </p:template>
</p:declare-step>
The following are listed in XProc: An XML Pipeline Language. Should the list broaden?
The following are not listed in XProc: An XML Pipeline Language.
The following are not listed in XProc: An XML Pipeline Language.
The following are other XML-related specifications for which some form of processing support may be desirable.
The following are not listed in XProc: An XML Pipeline Language.
The following are not listed in XProc: An XML Pipeline Language.
The following are not listed in XProc: An XML Pipeline Language.
The following are Semantic Web-related specifications for which some form of processing support may be desirable.
The following are not listed in XProc: An XML Pipeline Language.
The following are listed in XProc: An XML Pipeline Language but not normatively.
A list of reference processors?
The following are not listed in XProc: An XML Pipeline Language.
The following are taken from the XProc Candidate Issues Document as determined at the working group's October 31 f2f (minutes). Issue numbers refer to numbers given in the issues document. The editors intend to expand these notes and migrate them to later sections as and when appropriate.
Issue 001: extend our current p:template in order to have some higher level construct without going into FULL XSLT
Relates to F.4.3.10 p:template
Issue 004: allow attribute value templates within xproc elements
Issue 006: harmonize p:data and p:load
Relates to F.4.3.1 p:data
Issue 016: conditional output port (V.next XOR closable.)
Issue 017: simplified store step
Relates to F.4.3.8 p:store
Sections 2-5 of the V1 XML Processing Model Requirements and Use Cases are included herein, annotated for review of requirements and use cases that have been left unsatisfied in V1. The editors hope to record which requirements and use cases have been satisfied by XProc: An XML Pipeline Language, and to note which have not been satisfied. This should assist the working group in determining which requirements and use cases should be addressed in XProc V.next.
To aid navigation, the requirements can be mapped to the use cases of this section as follows:
Here is my first cut of the step inventory categorization for my action item. I've taken this from information that was sent to me, source code, and documentation online [1]. I did not include the general categories we had on the wiki [2]. Those categories were "Sorting", "Validation with Error", "Map-reduce", "Iterate until condition", "Dynamic Pipeline Execution", "Long-form Viewport", and "e-mail." -- AM.
Second cut. Completed list. Annotated. Minor reorganization coming. -- MM.
These lists will be annotated and re-formatted later. -- MM.
5.9 p:library
5.16 p:documentation
5.17 p:pipeinfo
4.1 p:pipeline
5.8 p:declare-step
5.10 p:import
5.11 p:pipe
Relates to F.4.3.6 p:pipe
5.12 p:inline
5.13 p:document
5.14 p:data
Relates to F.4.3.1 p:data
5.15 p:empty
4.2 p:for-each
4.3 p:viewport
Relates to F.4.3.13 p:viewport
4.4 p:choose
4.4.1 p:xpath-context
4.4.2 p:when
4.4.3 p:otherwise
4.5 p:group
4.6 p:try
Relates to F.4.3.11 p:try
5.1 p:input
Relates to F.4.3.3 p:input
5.2 p:iteration-source
Relates to F.5.11.2 p:iteration-source
5.3 p:viewport-source
5.4 p:output
5.6 p:serialization
Relates to F.4.3.7 p:serialization
5.7.1 p:variable
Relates to F.4.3.12 p:variable
5.7.2 p:option
Relates to F.4.3.5 p:option
5.7.3 p:with-option
5.7.4 p:with-param
7.1.1 p:add-attribute
7.1.2 p:add-xml-base
7.1.5 p:delete
7.1.12 p:insert
7.1.13 p:label-elements
7.1.15 p:make-absolute-uris
- - -cx:namespace-delete
7.1.16 p:namespace-rename
7.1.19 p:rename
7.1.20 p:replace
7.1.21 p:set-attributes
7.1.25 p:string-replace
Relates to F.4.3.9 p:string-replace
7.1.27 p:unwrap
7.1.28 p:wrap
7.1.30 p:xinclude
7.1.31 p:xslt
- - -p:template
Relates to F.4.3.10 p:template
7.2.9 p:xquery
Relates to F.4.3.10 p:template
- - -ml:adhoc-query
- - -ml:insert-document
- - -ml:invoke-module
7.2.4 p:validate-with-relax-ng
7.2.5 p:validate-with-schematron
7.2.6 p:validate-with-xml-schema
- - -cx:nvdl
Relates to F.5.7.1 pxp:nvdl
7.1.3 p:compare
7.1.4 p:count
7.1.11 p:identity
7.1.9 p:filter
7.2.2 p:hash
7.2.10 p:xsl-formatter
- - -cx:delta-xml
- - -cxu:pretty-print
- - -cxu:css-formatter
- - -emx:get-base-uri
Relates to F.5.3 Directory Operations
- - -cxf:copy
- - -cxf:delete
7.1.6 p:directory-list
- - -cxf:head
- - -cxf:info
- - -cxf:mkdir
- - -cxf:move
- - -cxf:tail
- - -cxf:tempfile
- - -cxf:touch
- - -cx:unzip
- - -cx:zip
7.1.10 p:http-request
7.1.14 p:load
Relates to F.4.3.4 p:load
7.1.22 p:sink
Relates to F.4.14 Required Primary Port
7.1.24 p:store
Relates to F.4.3.8 p:store
- - -cx:uri-info
- - -emx:fetch
7.1.8 p:escape-markup
7.1.26 p:unescape-markup
7.2.7 p:www-form-urldecode
7.2.8 p:www-form-urlencode
Relates to F.5.5 Cookie Operations and F.5.8 Messaging Operation
7.2.3 p:uuid
- - -cx:get-cookies
- - -cx:set-cookies
- - -cx:send-mail
7.1.18 p:parameters
Relates to F.4.4 Parameter Rules
- - -p:in-scope-names
Relates to ???
Relates to F.5.2 OS Operations and Environment
- - -cx:java-properties
- - -cxo:info
- - -cxo:cwd
- - -cxo:env
7.1.7 p:error
- - -cx:eval
Relates to F.5.6.2 cx:eval
- - -emx:eval
5.5 p:log
Relates to F.5.12 Debugging Operations
- - -cx:message
Relates to F.5.12.4 dbxml:message
- - -emc:message
- - -cx:report-errors
Entirely speculative. Relates to F.5.12 Debugging Operations
- - -dbxml:breakpoint
- - -dbxml:comment
- - -dbxml:debug
- - -cx:message
- - -dbxml:trace
Relates to the Principle: 3.11 Control of Flow and Errors and B.7 Candidate Non-XML Data Format Specifications
Allow non-XML (text/binary) to flow through a pipeline. The implementation would hex-encode non-XML whenever XML was expected. This would, for example, allow xsl-formatter to produce the output on a port that could then be serialized by the pipeline.
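A speculative sketch of what this could enable. The step names are existing XProc/Calabash vocabulary, but the flow shown (binary data passing between ports) is exactly what V1 forbids, and the cx:decode option is hypothetical:

```xml
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc"
            xmlns:cx="http://xmlcalabash.com/ns/extensions"
            version="1.0">
  <!-- Hypothetical V.next behavior: the PDF produced by the formatter
       flows out of the step (hex- or base64-encoded wherever XML is
       expected)... -->
  <p:xsl-formatter content-type="application/pdf"/>
  <!-- ...so a downstream step, rather than the formatter itself,
       decides where to serialize it. -->
  <p:store href="result.pdf" cx:decode="true"/>
</p:pipeline>
```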
Allow an unbounded number of outputs from some steps? MZ says we need this for the NVDL use case [cross-reference needed]. Markup pipeline allowed this; subsequent steps need to access outputs by name, where the default naming is with the integers...
p:pack could have more than two inputs, so you could do column-major packing...
Relates to F.5.7.1 pxp:nvdl.
Relates to B.7 Candidate Non-XML Data Format Specifications and B.3 Candidate Specifications: HTML and B.5 Candidate Specifications: Semantic Web
From Vojtech Toman: In my XML Prague paper "XProc: Beyond application/xml" I looked at one possible way of extending XProc to support non-XML media types. The basic idea is that XProc steps declare which media types they accept on their input ports and which media types they produce on their output ports. If it happens that data with a media type A (for instance, text/csv) arrives on an input port that expects media type B (for instance, application/xml), the XProc processor will try to convert the data to the expected media type. What kinds of conversions are supported and what do they look like is not covered in the paper, because that is an issue on its own. I was focusing just on the implications of this to the XProc processing model (which, it turns out, are actually not that big).
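Under that proposal, a step signature might look something like this (the media-type attribute and the ex: step are invented for illustration):

```xml
<p:declare-step xmlns:p="http://www.w3.org/ns/xproc"
                xmlns:ex="http://example.org/steps"
                type="ex:csv-to-svg" version="1.0">
  <!-- Hypothetical media-type declarations on ports -->
  <p:input port="source" media-type="text/csv"/>
  <p:output port="result" media-type="image/svg+xml"/>
</p:declare-step>
```

If a document with media type application/xml arrived on the "source" port, the processor would attempt an implementation-defined conversion to text/csv before invoking the step.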
You can find the conference proceedings here (my article is on page 27):
Support a more event-driven processing model?
Can we suspend a pipeline waiting for something to happen? Some examples; wait for HTTP POST from github (notifications), jms queue listener, tcp socket listener
Can we dump a partially evaluated pipeline instance for subsequent resumption?
Does this relate to the proposed step F.5.6.2 cx:eval?
Related but different, with pipeline-internal events as it were: Philip Fennel has done some work on XProc+SMIL. [SMIL]
Does this relate to F.3.1 XML Choreography?
Relates to F.5.3 Directory Operations, and F.5.2 OS Operations
Local store and retrieve. Build it, store it, get it back later, all under your control.
On-demand construction. Associate a pipeline with a URI into the manager, which will run if the URI is not there. Or not current -- you need to know what all the dependencies are, and check them.
Give URIs to step outputs. So you could point xinclude at a step output. Would you have to include a local catalog facility to make this really useful?
Cache intermediate URIs
Refactoring:
Local store and retrieve is facilitated by F.5.3 Directory Operations
Assigning output to a URI can be accommodated by local/remote store and retrieve with http: and file: methods.
XInclude relates markup to resources, not ports. In my understanding, using XInclude to point at a step output port via a contrived URI that is fronting for an application-defined 'resource manager' is not coherent. Steps have input and output ports. Some steps are capable of local/remote storage and retrieval of resources. Resources have URIs.
The canonical resource manager use case, to my mind, is the XInclude case. Consider this slightly contrived example.
<doc>Today's weather is <xi:include </doc>

pipeline.xpl:

<p:pipeline>
  ...
  <ext:get-weather-based-on-params-or-locale-or-whatever
  <p:xinclude>
    <p:input
      <p:document
    </p:input>
  </p:xinclude>
  ...
</p:pipeline>
The idea is that the get-weather... step produces a document with the appropriate base URI and then when XInclude goes off to get that document, the pipeline provides the document generated by some other step in the pipeline.
It's possible, for any given case, to imagine ways to rewrite the pipeline, but the general case remains: processing some documents will appeal to URIs and it would be useful to be able to generate the documents that should satisfy those URIs in other steps in the pipeline (consider synthesized stylesheets and schemas, for example).
Relates to F.4.4 Parameter Rules and F.4.5 Choose-style binding and F.5.6.2 cx:eval
Run a pipeline whose XML representation is input
Dynamic evaluation. See F.5.6.2 cx:eval
Dynamic attribute values. Meaning?
Support for 'depends-on' (or some mechanism for asserting dependencies that are not manifest in the data flow)
Steps with varying numbers of inputs/outputs with dynamic names.
On the face of it, the need is obvious. Dynamically defined pipelines that conceptually resemble manifolds for processing row-/column-major data. Most scripting languages can accommodate themselves to dynamically changing data structures, so why not XProc? It turns out that there are performance penalties associated with late-binding. First of all, there is a front-end cost associated with constructing the logical model of each manifold; that is why it pays to design your most commonly used manifolds carefully, test them rigorously, and compile them statically, to ensure optimal performance. Dynamic computation of manifold structure, and dynamic composition of port names actually impedes streaming pipeline execution by shifting the burden into the execution layer, where it can be more fragile because various resources may not have been pre-arranged. -- MM
Should we give access to MemCache and elasticache?
Already possible from an extension step [reference needed], do we need more?
Already possible using p:http-request?
Should we have a way of accessing environment information more generally?
Relates to F.5.3 Directory Operations and F.5.2 OS Operations and F.3.4 Debugging
The following is a list of steps and functions that generate environment information.
p:base-uri
pos:env
pos:cwd
pos:info
pxf:info
p:iteration-position
p:iteration-size
p:pipeinfo
p:resolve-uri
p:step-available
p:value-available
p:system-property
p:episode
p:language
p:product-name
p:product-version
p:vendor
p:vendor-uri
p:version
p:xpath-version
p:psvi-supported
Data types for options and parameters
Also, as I'm binding certain typed values to options (e.g. pulling a start time off the query parameters), I'd really like an easy way to say: "This option is typed as xs:dateTime. If the value does not cast properly, run this other part of the pipeline." One simple way we could accomplish this is to allow type errors within a certain portion of the pipeline to be caught and processed somehow. -- AM
Suggested by MZ:
<?xml version="1.0" encoding="UTF-8"?>
<p:declare-step xmlns:
  <p:output
  <p:group>
    <p:choose>
      <p:variable
      <p:when
        <p:identity>
          <p:input
            <p:inline>
              <the-variable-is-castable-as-date/>
            </p:inline>
          </p:input>
        </p:identity>
      </p:when>
      <p:otherwise>
        <p:identity>
          <p:input
            <p:inline>
              <the-variable-is-NOT-castable-as-date/>
            </p:inline>
          </p:input>
        </p:identity>
      </p:otherwise>
    </p:choose>
  </p:group>
</p:declare-step>
Hmm... maybe. I had thought of more of a try/catch operation that would catch type errors. Using p:choose a lot can make simple pipelines very complicated. -- AM
My initial thought was that we could state all the type pre-conditions, and then the catch only executes when the typing fails. This would be a lot less complicated than trying to write all that into a test expression. Of course, not everything can be expressed as a simple type cast check. For example, range value checks would still need to be expressions. -- AM
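A sketch of how that try/catch variant might read. Both the as attribute on p:option and the "catch runs on cast failure" semantics are speculative:

```xml
<p:declare-step xmlns:p="http://www.w3.org/ns/xproc"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                version="1.0">
  <p:input port="source"/>
  <p:output port="result"/>
  <!-- Hypothetical: a typed option; the cast is a pre-condition -->
  <p:option name="start" required="true" as="xs:dateTime"/>
  <p:try>
    <p:group>
      <!-- runs only if $start cast successfully to xs:dateTime -->
      <p:identity/>
    </p:group>
    <p:catch>
      <!-- hypothetical: executes when a declared type pre-condition fails -->
      <p:identity>
        <p:input port="source">
          <p:inline><bad-start-time/></p:inline>
        </p:input>
      </p:identity>
    </p:catch>
  </p:try>
</p:declare-step>
```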
The orchestration of XSLT/XQuery/.... XProc as the controller. Support for playing a useful standardised role in XRX. LQ.
Can we add some kind of authentication management which is out-of-band but available?
Does this need to be in the language, or can it be implementation-defined? If it was in the language how would steps get at it?
Presumably authentication can and should happen out-of-band. Perhaps in a layer that surrounds the processor and/or the data store.
Does this relate to F.5.8.1 cx:send-mail or to B.6 Candidate Specification: Mail Messages?
Relates to the Principle: 3.2 Platform Neutral.
Relates to Requirement: 4.9 Allow Optimizations
Do we need support for clustering?
." -- WikiPedia
Relates to the Principle: 3.11 Control of Flow and Errors
How to make xproc development more amenable to debugging?
Relates to F.5.12 Debugging Operations
pos:env
pos:cwd
p:pipeinfo
pos:info
pxf:info
cx:eval
p:error
p:log
cx:message
From Norm:
Relates to the Principles: 3.6 Address Practical Interoperability and 3.11 Control of Flow and Errors
Relates to Requirements: 4.7 Error Handling and Fall-back
How to make xproc development more amenable to error recovery?
Relates to the Principles: 3.6 Address Practical Interoperability and 3.11 Control of Flow and Errors
Relates to Requirements: 4.7 Error Handling and Fall-back and 4.2 Allow Defining New Components and Steps
Presumably we require a test suite. Luckily, one exists. Let's set our goal and immediately claim victory.
Relates to Principle: 3.2 Platform Neutral, and proposed Steps: F.5.2 OS Operations
Make it easier to create cross platform pipelines e.g. file.separator in file paths
Add a Note or another spec for documentation conventions. Parallel to Javadoc? Add an xml:lang attribute to p:documentation and recommend its use.
See for an example.
<p:documentation>
  <div><head>This is my documentation</head>
    <p>I can explain my pipeline here.</p>
  </div>
  [...]
  <div><head>Extract metadata from image files</head>
    <p>I can explain how I extract metadata from various image files. I probably have some details that need explanation.</p>
  </div>
Can we simplify the markup? Is there a compact syntax alternative?
The following subsections represent steps whose usability is deemed to be affected by superfluous verbosity, based upon comments gathered in mailing lists and elsewhere. The details need to be filled in with a description of the problem, a suggested revision, and a justification.
Harmonize with F.4.3.4 p:load
<p:data href=... />
It would be really, really nice if a step could output a reference. That way p:store, etc. can return a standardized reference to a resource created. It may be the case that c:data is the wrong element to use for this but it seems like it would be useful in some places where c:data is used.
<c:data
<p:input <p:input <p:input <p:input select= .... />
Harmonize with F.4.3.1 p:data. Should work like http-request.
<p:load href=... />
Relates to F.1.1 What Flows?
An option on p:store to save decoded/binary data.
<p:store ... />
Empty source on p:template. If you're fabricating from whole cloth, you have to waste space with a pointless <foo/>. What would be the downside of having the empty sequence as the default input in most/all cases? AM suggests that we allow this on a step-by-step basis.
<p:template ... />
Relates to Principle: 3.3 Small and Simple
p:group within p:try -- Could we remove this requirement? Is this a case of making life easier for implementors which confuses users? Or is it actually simpler to have the group/catch as the only top-level children?
p:variable templates
Should we allow p:variable anywhere in groups?
Adding a p:variable requires adding p:group…feels odd
Allow variables to be visible in nested pipelines
Explanation: constructive example... Make p:rename/@new-name optional, so that it’s possible to move elements from namespace X that match a certain condition to namespace Y. This is currently quite difficult to do. Could you achieve this using @use-when?
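For example, if new-name were optional and new-namespace alone were allowed (a speculative relaxation of the V1 step; the namespaces and match pattern are placeholders), the move could be a one-liner:

```xml
<!-- Move every matched element from namespace X to namespace Y,
     keeping each element's local name.
     Hypothetical: XProc 1.0 requires new-name on p:rename. -->
<p:rename xmlns:x="http://example.org/ns/X"
          match="x:*[starts-with(local-name(), 'legacy-')]"
          new-namespace="http://example.org/ns/Y"/>
```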
Now that we have a bunch of real pipelines, can we simplify the rules by limiting the allowed usage patterns? At least, get rid of the necessity for p:empty as the parameter input [when it's now required: someone to fill in]
Data types for options and parameters
Arbitrary data model fragments for parameters/options/variables
Explore using maps to simplify the parameters story
Here's the hard case that has to be handled:
?
Suppose you have a pipeline with a step X, and depending on some dynamic condition, you want X to process documents (or entire sequences of documents) A, B, or C. Currently, the only way to do this is to use a p:choose to duplicate the step X with different input bindings in each branch. This not only looks silly, but it is painful to write.
One solution to this would be a choose-style binding (a wrapper around the existing bindings) that would dynamically select the bindings to use.
An example would help.
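A purely speculative rendering of such a binding (p:choose-binding does not exist; the step and port names are placeholders):

```xml
<!-- The step X is written once; only the binding varies -->
<ex:step-x xmlns:ex="http://example.org/steps"
           xmlns:p="http://www.w3.org/ns/xproc" name="x">
  <p:input port="source">
    <p:choose-binding>                     <!-- invented wrapper -->
      <p:when test="$mode = 'a'">
        <p:pipe step="make-a" port="result"/>
      </p:when>
      <p:when test="$mode = 'b'">
        <p:pipe step="make-b" port="result"/>
      </p:when>
      <p:otherwise>
        <p:pipe step="make-c" port="result"/>
      </p:otherwise>
    </p:choose-binding>
  </p:input>
</ex:step-x>
```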
Does this relate to F.2.2 Dynamic pipeline execution?
Relates to F.1.1 What Flows?
Can we remove the restriction on variables/options/params being bound only to strings? What would be allowed:
binaries - This would allow not only the possibility of binary resource files, but all would enable the ability to pass maps, which is where I think the real value-add comes in.
sequences - Not just for strings, but for nodes and binaries as well.
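For instance (both forms are speculative; V1 restricts these values to strings, and the map constructor assumes XPath 3.1-style syntax):

```xml
<!-- Hypothetical: a variable bound to a sequence of nodes -->
<p:variable name="chapters" select="//chapter" as="element()*"/>

<!-- Hypothetical: a map-valued option -->
<p:with-option name="serialization"
               select="map { 'indent' : true(), 'method' : 'xhtml' }"/>
```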
Relates to C.2 Issue 004: attribute value templates
An example would help.
Lots of workarounds, but shouldn't need them. Attribute-value templates would solve this.
An example would help.
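One illustrative before/after. The variables and the AVT syntax are assumptions; attribute value templates are not part of XProc 1.0:

```xml
<!-- Today: a computed href needs a p:with-option workaround -->
<p:store>
  <p:with-option name="href"
                 select="concat($outdir, '/', $basename, '.html')"/>
</p:store>

<!-- With attribute value templates: -->
<p:store href="{$outdir}/{$basename}.html"/>
```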
Does this relate to F.2.2 Dynamic pipeline execution?
AM to complete. Simplify the task of passing "optional options" through a pipeline? Something that works from the command line but not internally to a library step???
An example would help.
Relates to 4.4 Allow Pipeline Composition
The existing magic is not consistent or easily understandable
An example would help.
XPath Required?
XPath 2.0 only?
Custom XPath functions (ala xsl:function) using “simplified XProc steps” (whatever that means)
A way of re-using pipelines. Or allowing pipelines to be imported into XQuery or XSLT
Some mechanism for loading sets of documents. XProc, as currently defined, feels somewhat awkward:
consider a xyz:documents element which roughly emulates apache ant filesets
consider reusable file path structures
consider providing conventions for making xproc scripts more cross platform e.g. file separators
<p:document <p:data
Does this relate to F.2 Resource Management?
Unordered collections?
Streaming is inhibited by the use of p:try/p:catch to capture validation errors (because p:try/p:catch mandates buffering).
So, pipelines written to take advantage of streaming processors will want to avoid p:try/p:catch. That should be noted. What are other strategies that will work in a streaming context? Does eval do the job? -- MM
Allow p:for-each to generate the result of each step in an unordered way (with a simple attribute ordered="true|false", the default being true).
Does removing the "in order" from 4.2 p:for-each "For each declared output, the processor collects all the documents that are produced for that output from all the iterations, in order, into a sequence." solve the problem?
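A sketch of the proposed attribute (ordered is hypothetical, and the step inside the loop is a placeholder):

```xml
<!-- Hypothetical: iterations may run, and their outputs be collected,
     in any order -- an opening for parallel evaluation -->
<p:for-each xmlns:p="http://www.w3.org/ns/xproc" ordered="false">
  <p:iteration-source select="//record"/>
  <ex:expensive-transformation xmlns:ex="http://example.org/steps"/>
</p:for-each>
```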
Relates to Use Case: 5.29 Large-Document Subtree Iteration
Relates to Use Case: 5.30 Adding Navigation to an Arbitrarily Large Document
(source: Alex Milowski)
I have two snippets, which are not interchangeable in that the first has a single non-primary output and the second has a single primary output.
<p:store .../>
<p:viewport
  <p:store
</p:viewport>
required to write:
<p:store../>
<p:sink/>
<p:xslt>
  <p:input
    <p:pipe
</p:xslt>
The following is a list of proposed steps which require explanation, justification and use cases.
p:sax-filter
p:sort
These steps are in the “proposed OS extension namespace”,, identified by the prefix “pos”.
This function returns the “current working directory” of the processor. This function takes no arguments and does not depend on the context. This function should only be implemented by processors for which the concept of a “current working directory” is coherent.
<p:declare-step <p:output </p:declare-step>
The pos:cwd step returns a single c:result containing the current working directory. On systems which have no concept of a working directory, this step returns the empty sequence. (This step duplicates the cwd attribute on the c:result from pos:info; it's just for convenience.)
There are no standard XProc steps that change the working directory, so this function is likely to return the same value every time it is called. However, there is nothing which prevents an extension step from being defined which changes the current working directory, so it is not necessarily the case that the same value will always be returned.
Returns information about the environment. On systems which have no concept of an environment and environment variables, this step returns an empty c:result.
<p:declare-step <p:output </p:declare-step>
The pos:env step returns information about the operating system environment. It returns a c:result containing zero or more c:env elements. Each c:env has name and value attributes containing the "name" and "value" of an environment variable.
Returns information about the operating system.
<p:declare-step <p:output </p:declare-step>
The pos:info step returns information about the operating system on which the processor is running. It returns a c:result element with attributes describing properties of the system. The exact set of properties returned is implementation-dependent. It should include the following properties:
The file separator; usually “/” on Unix, “\” on Windows.
The path separator; usually “:” on Unix, “;” on Windows.
The operating system architecture, for example “i386”.
The name of the operating system, for example “Mac OS X”.
The version of the operating system, for example “10.5.6”.
The current working directory.
The login name of the effective user, for example “ndw”.
The home directory of the effective user, for example “/home/ndw”.
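Putting the properties above together, a result might look like this. The attribute names are invented for illustration; the draft fixes only the property list, not the attribute names:

```xml
<c:result xmlns:c="http://www.w3.org/ns/xproc-step"
          file-separator="/"
          path-separator=":"
          os-arch="i386"
          os-name="Mac OS X"
          os-version="10.5.6"
          cwd="/home/ndw"
          user-name="ndw"
          user-home="/home/ndw"/>
```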
The following list is informed by Calabash and eXProc Proposed Steps
These steps are in the “proposed file utilities extension namespace”,, identified by the prefix “pxf”.
Relates to F.2 Resource Management
Copies a file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- boolean --> <p:option <!-- boolean --> </p:declare-step>
The pxf:copy step copies the file named in href to the new name specified in target. If the target is a directory, the step attempts to copy the file into that directory, preserving its base name. If the copy is successful, the step returns a c:result element containing the absolute URI of the target. If an error occurs, the step fails if fail-on-error is true; otherwise, the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
Occurs if the file named in href does not exist or cannot be copied to the specified target.
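A usage sketch consistent with the prose above (the pxf namespace URI and the file URIs are placeholders):

```xml
<pxf:copy xmlns:pxf="http://example.org/ns/pxf"
          href="file:///tmp/report.xml"
          target="file:///var/archive/"
          fail-on-error="false"/>
<!-- On success, the result port carries something like:
       <c:result>file:///var/archive/report.xml</c:result>
     On failure (with fail-on-error=false), a c:error element instead. -->
```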
This function changes the “current working directory” of the processor. This function takes one argument and does not depend on the context. This function should only be implemented by processors for which the concept of a “current working directory” is coherent.
There are currently no standard XProc steps that change the working directory. However, there is nothing which prevents an extension step from being defined which changes the current working directory.
Deletes a file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- boolean --> <p:option <!-- boolean --> </p:declare-step>
The pxf:delete step attempts to delete the file or directory named in href. If the file or directory is successfully deleted, the step returns a c:result element containing the absolute URI of the deleted file. If href specifies a directory, it can only be deleted if the recursive option is true or if the directory is empty. If an error occurs, the step fails if fail-on-error is true; otherwise, the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
Occurs if the file named in href does not exist or cannot be deleted.
Occurs if the step attempts to delete a directory that is not empty and the recursive option is not true.
Returns the first few lines of a text file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- int --> <p:option <!-- boolean --> </p:declare-step>
Returns the first count lines of the file named in href. If count is negative, the step returns all except those first.
Returns information about a file or directory.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- boolean --> </p:declare-step>
The pxf:info step returns information about the file or directory named in href. The step returns a c:directory for directories, a c:file for ordinary files, or a c:other for other kinds of filesystem objects. Implementations may also return more specific types, for example c:device, so anything other than c:directory or c:file must be interpreted as “other”. If the document doesn't exist, an empty sequence is returned.
The document element of the result, if there is one, will have the following attributes:
If the value of a particular attribute is unknown or inapplicable for the particular kind of object, or in the case of boolean attributes, if it's false, then the attribute is not present. Additional implementation-defined attributes may be present, but they must be in a namespace. If the href attribute specified is not a file: URI, then the result is implementation-defined.
If an error occurs, the step fails if fail-on-error is true; otherwise, the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
Occurs if the file named in href does not exist or cannot be read.
Creates a directory.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- boolean --> </p:declare-step>
The pxf:mkdir step creates a directory with the name in href. If the name includes more than one directory component, all of the intermediate components are created. The path separator is implementation-defined. The step returns a c:result element containing the absolute URI of the directory created.
If an error occurs, the step fails if fail-on-error is true; otherwise, the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
Occurs if the file named in href does not exist or cannot be created.
Moves (renames) a file or directory.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- boolean --> <p:option <!-- boolean --> </p:declare-step>
The pxf:move step attempts to move (rename) the file specified in the href option to the new name specified in the target option. If the target is a directory, the step attempts to move the file into that directory, preserving its base name. If the move is successful, the step returns a c:result element containing the absolute URI of the new name of the file. The original file is effectively removed.
If the fail-on-error option is "true", then the step will fail if a file with the name specified in the target option already exists, or if the file specified in href does not exist or cannot be moved. If the fail-on-error option is "false", the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
If the href option specifies a directory, device, or other special kind of object, the results are implementation-defined.
Occurs if the file named in href does not exist or if the file named in target cannot be created.
Returns the last few lines of a text file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- int --> <p:option <!-- boolean --> </p:declare-step>
Returns the last count lines of the file named in href. If count is negative, the step returns all except those last.
Creates a temporary file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- string --> <p:option <!-- string --> <p:option <!-- boolean --> <p:option <!-- boolean --> </p:declare-step>
The pxf:tempfile step creates a temporary file. The temporary file is guaranteed not to already exist when pxf:tempfile is called. The file is created in the directory specified by the href option. If prefix is specified, the file's name will begin with that prefix. If suffix is specified, the file's name will end with that suffix.
The step returns a c:result element containing the absolute URI of the temporary file. If the delete-on-exit option is true, then the temporary file will automatically be deleted when the processor terminates.
Occurs if it is not possible to create a file in the href directory.
Update the modification time of a file.
<p:declare-step <p:output <p:option <!-- anyURI --> <p:option <!-- xs:dateTime --> <p:option <!-- boolean --> </p:declare-step>
The pxf:touch step “touches” the file named in href. The file will be created if it does not exist. If timestamp is specified, the modification time of the file will be updated to the specified time. If unspecified, the current date and time will be used. The step returns a c:result element containing the absolute URI of the touched file.
If an error occurs, the step fails if fail-on-error is true; otherwise, the step returns a c:error element which may contain additional, implementation-defined information about the nature of the error.
Occurs if the file named in href does not exist or cannot be changed.
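By way of a hedged example (the namespace URI is again a placeholder), touching a stamp file with an explicit timestamp might be written as:

```xml
<!-- hypothetical: create build.stamp if missing, then set its
     modification time to the given xs:dateTime -->
<pxf:touch xmlns:pxf="http://example.org/ns/pxf-placeholder"
           href="file:///tmp/build.stamp"
           timestamp="2010-01-01T00:00:00Z"/>
```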
These steps are in the “proposed extension namespace”, identified by the prefix “pxp”.
unzip A step for extracting information out of ZIP archives.
<p:declare-step type="pxp:unzip">
   <p:output port="result"/>
   <p:option name="href"/>          <!-- anyURI -->
   <p:option name="file"/>          <!-- string -->
   <p:option name="content-type"/>  <!-- string -->
</p:declare-step>
The value of the href option must be an IRI. It is a dynamic error if the document so identified does not exist or cannot be read. The value of the file option, if specified, must be the fully qualified path-name of a document in the archive. It is a dynamic error if the value specified does not identify a file in the archive. The output from the pxp:unzip step must conform to the ziptoc.rnc schema. If the file option is specified, the selected file in the archive is extracted and returned:
If the content-type is not specified, or if an XML content type is specified, the file is parsed as XML and returned. It is a dynamic error if the file is not well-formed XML.
If the content-type specified is not an XML content type, the file is base64 encoded and returned in a single c:data element.
If the file option is not specified, a table of contents for the archive is returned. For example, the contents of the XML Calabash 0.8.5 distribution archive might be reported like this:
<c:zipfile xmlns:c="..." name="...">
   <c:directory name="...">
      <c:file name="..."/>
      ...
   </c:directory>
   ...
</c:zipfile>
- I think for non-XML data, the step should behave as p:data or p:http-request. Right now, the pxp:unzip spec says that: "If the content-type specified is not an XML content type, the file is base64 encoded and returned in a single c:data element." This obviously does not match the behavior of p:data wrt text media types. The pxp:unzip step also does not insert the "content-type" and "encoding" attributes on the c:data wrapper.
- What happens if the file specified through the "file" option is not found in the archive (I assume a dynamic error)?
zip A step for creating ZIP archives.
<p:declare-step type="pxp:zip">
   <p:input port="source" sequence="true"/>
   <p:input port="manifest"/>
   <p:output port="result"/>
   <p:option name="href"/>                <!-- anyURI -->
   <p:option name="compression-method"/>  <!-- "stored" | "deflated" -->
   <p:option name="compression-level"/>   <!-- "smallest" | "fastest" | "default" | "huffman" | "none" -->
   <p:option name="command"/>             <!-- "update" | "freshen" | "create" | "delete" -->
</p:declare-step>
The ZIP archive is identified by the href option. The manifest (described below) provides the list of files to be processed in the archive. The command option indicates the nature of the processing: “update”, “freshen”, “create”, or “delete”. If files are added to the archive, compression-method indicates how they should be added: “stored” or “deflated”. For deflated files, the compression-level identifies the kind of compression: “smallest”, “fastest”, “default”, “huffman”, or “none”. The entries identified by the manifest are processed. The manifest must conform to the following schema:
default namespace c = ""
start = zip-manifest
zip-manifest = element c:zip-manifest { entry* }
entry = element c:entry {
   attribute name { text }
   & attribute href { text }
   & attribute comment { text }?
   & attribute method { "deflated" | "stored" }
   & attribute level { "smallest" | "fastest" | "huffman" | "default" | "none" }
   empty
}
For example:
<zip-manifest>
   <entry name="file1.xml" href="" comment="An example file"/>
   <entry name="path/to/file2.xml" href="" method="stored"/>
</zip-manifest>
If the command is “delete”, then file1.xml and path/to/file2.xml will be deleted from the archive. Otherwise, the file that appears on the source port that has the base URI will be stored in the archive as file1.xml (using the default method and level), the file that appears on the source port that has the base URI will be stored in the archive as path/to/file2.xml without being compressed.
A c:zipfile description of the archive content is produced on the result port.
- What about source files that are not included in the pxp:zip manifest? Is that an error or do they end up in the ZIP archive under their original base URI?
- Serialization. At the moment, pxp:zip does not allow you to specify how XML documents are serialized in the ZIP archive. I ended up adding serialization options to pxp:zip which are applied to each XML file and are therefore archive-global. It might be useful, though, to be able to specify different serialization options per file - but that would probably require putting the serialization options into the pxp:zip manifest somehow.
- Not sure about the compression level names "smallest" | "fastest" | "default" | "huffman" | "none". They are a direct lift from the Java java.util.zip.Deflater API. Plus, the "huffman" constant is not a compression level, but a compression strategy. I think it should not be in the list.
- The pxp:zip step returns a c:zipfile representation of the ZIP archive on the "result" port. While I understand that this might be useful, it is not consistent with existing standard steps that write output to an external location (p:store, p:xsl-formatter) and that return a URI reference to the external data.
- It would be nice if it were possible to compress non-XML data as well (in a similar way that p:http-request allows sending non-XML request bodies). Otherwise things such as creating an EPUB with images would still be impossible with standard XProc.
Get the cookies returned by the previous HTTP request.
<p:declare-step type="...">
   <p:output port="result"/>
   <p:option name="..."/>  <!-- string -->
</p:declare-step>
Run a static, known step, whose type is computed dynamically.
An example would help.
Compile a pipeline and run it.
<p:declare-step type="...">
   <p:input port="source" sequence="true"/>
   <p:input port="pipeline"/>
   <p:input port="options" sequence="true"/>
   <p:output port="result" sequence="true"/>
   <p:option name="step"/>      <!-- QName -->
   <p:option name="detailed"/>  <!-- boolean -->
</p:declare-step>
In the simplest case, where the specified pipeline has a single input and a single output, the document(s) on the source port are passed to the pipeline, processed, and the results are passed back on the result port.
If the pipeline specified has multiple inputs or outputs, then the inputs and outputs have to be “multiplexed” on the single port. If this is the case, you must specify that the detailed option is “true”, and encode the input using cx:document. Each input must be wrapped in cx:document with a port attribute that identifies the port to which that document is to be sent. Each output will be wrapped in a cx:document element identifying the port from which it came.
If the pipeline has options, they are passed to the options port. Each options document must have cx:options as its document element and consist entirely of cx:option elements with name and value attributes that specify options and their values.
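A sketch of such an options document follows; the cx namespace URI, option names, and values are all hypothetical:

```xml
<cx:options xmlns:cx="http://example.org/ns/cx-placeholder">
  <!-- each cx:option binds one pipeline option to a value -->
  <cx:option name="assert-valid" value="true"/>
  <cx:option name="depth" value="3"/>
</cx:options>
```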
If the pipeline is a p:library, then the step to evaluate may be specified using the step option. If the pipeline is a library and no step option is specified, the first step in the library will be selected.
Relates to Use Case: 5.28 Document Schema Definition Languages (DSDL) - Part 10: Validation Management
These steps are in the “proposed extension namespace”, identified by the prefix “pxp”.
A step for performing NVDL (Namespace-based Validation Dispatching Language) validation over mixed-namespace documents.
<p:declare-step type="...">
   <p:input port="source"/>
   <p:input port="nvdl"/>
   <p:input port="schemas" sequence="true"/>
   <p:output port="result"/>
   <p:option name="assert-valid"/>  <!-- boolean -->
</p:declare-step>
The source document is validated using the namespace dispatching rules contained in the nvdl document. The dispatching rules may contain URI references that point to the actual schemas to be used. As long as these schemas are accessible, it is not necessary to pass anything on the schemas port. However, if one or more schemas are provided on the schemas port, then these schemas should be used in validation. This requirement is expressed only as a “should” and not a “must” because XProc version 1.0 does not mandate that implementations support caching of documents so that requests for a URI by one step can automatically access the result of some other step if that result had a base URI identical to the requested document.
However, it's not clear that the schemas port has any value if the implementation does not support this behavior. The value of the assert-valid option must be a boolean. It is a dynamic error if the assert-valid option is true and the input document is not valid. The output from this step is a copy of the input, possibly augmented by the application of schema processing. The output of this step may include PSVI annotations.
A step to handle SMTP and sending e-mail messages.
<p:declare-step type="...">
   <p:input port="source" sequence="true"/>
   <p:output port="result"/>
</p:declare-step>
The first document on the source port is expected to conform to An XML format for mail and other messages. Any additional documents are treated as attachments. The em:content may contain either text or HTML. To send some other type as the first message body, you must leave the em:content element out of the first document and supply the body as a second document.
The xyz namespace is speculative.
Based upon review of existing use cases, a new p:sign step is required to satisfy 5.10 XInclude and Sign.
Relates to F.4.12 Simplify Use of File Sets
Consider a xyz:documents element which roughly emulates apache ant filesets
Repeat a [step | group] until some XPath expression is satisfied, feeding its output back as its input after the first go-around. Special built-in support for iterate to fixed-point?
A way to merge the context defined by the elements p:xpath-context, p:viewport-source, and p:iteration-source?
The xyz namespace is speculative.
These are highly speculative steps hypothesized by an editor. -- MM
Relates to F.3.4 Debugging
The dbxml namespace is speculative.
We note steps and functions which provide access to a variety of information that is useful in debugging:
Processor XPath Context
Step XPath Context
p:log
p:documentation
XPath Extension Functions
p:pipeinfo
p:step-available
p:value-available
p:iteration-position
p:iteration-size
p:base-uri
p:version-available
p:xpath-version-available
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
Set debug scope and declare its mode. Provides advice to a processor to facilitate targeted debugging. Allows programmers to leave an audit trail for quality assurance purposes. The mode could be an NMTOKEN list representing, for example, levels of verbosity. Implies the existence of a processor debug state stack.
Like xsl:message. Issue a debugging message, typically including a dynamic representation of one or more execution variables or functions to assist in the debugging process. A message may be a simple text message or a complex XML document. Presumes the ability to resolve references to named variables in the pipeline and processor environments. Presumably these messages would be issued conditionally, using whatever conditional mechanism already exists. The output port(s) would likely depend on the application.
Provides advice to an XProc processor to produce implementation-defined traces of pipeline execution.
XProc Call Stack
XPath Call Stack
XQuery Call Stack
XQuery/XPath contexts
Step inputs, outputs
Variables and parameters in scope
Members of the Working Group contributed to this specification as noted throughout.
Erik Bruchez provided use cases.
Alex Milowski produced the original XProc Requirements and Use Cases Working Draft and provided many use cases.
Henry Thompson provided use cases.
Norm Walsh contributed details of steps that are implemented in Calabash and provided use cases.
Mohamed Zergaoui contributed through email and working group discussion. | http://www.w3.org/XML/XProc/docs/langreq-v2.html | CC-MAIN-2018-05 | refinedweb | 11,533 | 51.34 |
Should You Adopt a Single Code Repository for All Code?
Monorepos are popular, but controversial. Learn about the issues with using a single repo for all code and best practices to avoid them.
Monorepos (putting all your code in one repository) are a fairly controversial topic in the world of software development for a number of interesting reasons. For many developers, the thought of storing every line of code in a single central repository brings them out in a cold sweat, and they have to go lie down for a while to battle with the idea. Which is understandable when you consider that monolithic source code repositories have a bad reputation.
Issues With Monorepos
As well as confronting potential technical limitations with large-scale monorepos, teams also encounter performance issues for those single source control systems exceeding multiple gigabytes. Consider new or rookie developers too. During the onboarding process, they face a huge codebase to grapple with rather than receive a steady introduction to smaller repositories.
Other issues also emerge with single central repositories, such as managing and restricting access control and integrating a monorepo into an existing build process. These are all largely dependent on the size of the project, team, and organization involved, though, and thankfully can be addressed accordingly.
Organizing your codebase using a single source control system is tricky, but the concept is no harder to come to grips with than the issues that come with multiple repositories. Enforcing standardization is an issue—especially at scale—and code can become too complex to even read. Work and effort end up being duplicated. And on top of all this, no one is able to understand a platform from end to end when there are so many source repositories involved.
As part of our DevOps as a Service consultancy program, I encourage the companies we work with to adopt a central code library for tools, training material, and other dependencies shared across services. Where possible we cull or combine others—it’s a significant move to attempt to drive companies to migrate completely from multi to monorepos.
So, when it comes to repositories, there’s a lot of complexity involved in both directions.
Different Types of Single Source Code Repositories
There are essentially two types of monorepos:
- Large repos containing all the code maintained by a company
- Project-specific monorepos like Babel, React or Symfony which combine small official interdependent modules in one library
Awesome-monorepo contains a great collection of monorepo tools that are being used out there in the wild. Mammoth monorepos like those by Google (known as Bazel), Facebook (Buck), and Twitter and Foursquare (who both utilize Pants) have reaped the benefits of unified versioning and a single source of coding truth. Babel, Symfony, and React are all notable project monorepos which have reduced complexity for the maintainers, while still providing easy-to-change modularity for the end user.
Managing code in a single central system can considerably simplify the development process of modular software projects, like microservice-based infrastructures. Monolithic source code repositories are all about moving fast and achieving project/organizational goals more efficiently. Put simply, monorepos increase developer productivity.
It’s important to note at this juncture that a single source code repository or monorepo in no way implies monolithic software design. This tends to be one of the main reasons many companies don’t adopt them, but the two don’t have to go hand-in-hand.
As mentioned previously, I‘ve driven all the companies we consult with to begin to adopt some elements of this approach. Implementing a monorepo into an existing process is not easy, but it’s not impossible. I begin with a single source testing library that contains all the central tools and resources that are used at the heart of each company’s development pipeline. Introducing such a testing library takes us one step closer to achieving an organizational monorepo.
Within the central library, I also enforce a high level of testing coverage—aiming for an optimal 95%. This allows me to kill two birds with one stone. A single central repository ensures testing consistency and helps achieve high test rate levels. From there, it’s possible to begin culling other repos slowly and overcome some of the angst involved from the team.
Best Practices for Monorepos
Guidelines for optimizing single source codebases:
- Organize your monorepo as a single tree with a number of subdirectories that look more or less like the root directory;
- Get your naming conventions, standards, and policy in order as a disorganized monorepo is horrible to work with;
- The root of your monorepo should contain very little (a README, and a policy);
- Subdirectories can serve as team or project namespaces—depending on the size of the organization;
- Identify each source file by a single string (a file path that also includes a revision or version-control number).
It is best to think of the monorepo as a learning tool. Use it to share information, training material and examples, and much more in a way that you wouldn’t be able to in an app repo. Single source code repositories work best for DevOps organizations who have already adopted an open and collaborative working environment and culture. The return on investment in time and effort your team will spend in migrating to a monolithic codebase will materialize multiple times over in improved productivity and better quality code. So, what are you waiting for?
Published at DZone with permission of Stefan Thorpe , DZone MVB. See the original article here.
PyKat 0.4.1
Python interface and tools for FINESSE
PyKat is a wrapper for using FINESSE (). It aims to provide a Python toolset for automating more complex tasks as well as providing a GUI for manipulating and viewing simulation setups.
Installation
The easiest way to install PyKat is through PyPi:
pip install pykat
If you are a Windows user you also have the option to download the installer at.
You should now be able to open up a new Python terminal and type import pykat, the output should be:
>>> import pykat
[ASCII-art PyKat 0.1 banner]
>>>
You will also need to ensure that you have a fully working copy of FINESSE installed and setup on your machine. More details on this can be found at.
You must set up two environment variables: ‘FINESSE_DIR’, whose value is the directory that the ‘kat’ executable is in, and ‘KATINI’, which states the directory and name of the kat.ini file to use by default in FINESSE; more information about this can be found in the FINESSE manual.
Usage
This does not detail how to use FINESSE itself, just PyKat. FINESSE related queries should be directed at the FINESSE manual or the forum.
We highly recommend running PyKat with IPython, which has so far provided the best way to explore the various PyKat objects and output data. Also of use is IPython's interactive matplotlib mode - or pylab mode - which makes displaying and interacting with multiple plots easy. You can start pylab mode from a terminal using:
ipython -pylab
Regardless of which interpreter you use, to begin using PyKat you first need to include the following:
from pykat import finesse from pykat.detectors import * from pykat.components import * from pykat.commands import * from pykat.structs import *
This provides all the various FINESSE components and commands you will typically need. Running a simulation requires you to already know how to code FINESSE files, which is beyond the scope of this readme. FINESSE commands can be entered in many ways: reading in a previous .kat file, creating pykat objects representing the various FINESSE commands or by writing blocks of FINESSE code as shown next:
import pylab as pl # Here we write out any FINESSE commands we want to process code = """ l l1 1 0 0 n1 s s1 10 1 n1 n2 m m1 0.5 0.5 0 n2 n3 s s2 10 1 n3 n4 m m2 0.5 0.5 0 n4 n5 s s3 10 1 n5 n6 yaxis abs:deg """ # this kat object represents one single simulation, it containts # all the objects and their various states. kat = finesse.kat() # Currently the kat object is empty. We can fill it using a block # string of normal FINESSE commands by parsing them. kat.parseCommands(code) # Once we have some simulation built up we can run it simply by calling... out = kat.run() # This out object contains the results from this run of the simulation. # Parameters can then be changed and kat.run() can be called again producing # another output object. So if we wanted to change the reflectivity of m1 we can do kat.m1.R = 0.2 kat.m1.T = 0.8 # now run it again... out2 = kat.run() # We can plot the output simply enough using pylab plotting. pl.figure() pl.plot(out.x, out["pd_cav"]) pl.xlabel(out.xlabel) pl.ylabel("Intensity [W]") pl.legend(out.ylabels) pl.show()
The above demonstates a way of packaging up a FINESSE simulation - simple or complex - and including any post-processing and plotting in one Python script file. Or you can create kat files separately and produce Python scripts to run and process them, that choice is upto you, Pykat provides the means to be used in both ways.
To load in a separate FINESSE .kat file we can use the commands:
kat = finesse.kat() # load in a separate file in the same directory... kat.loadKatFile('test.kat') # the kat object has now parsed all the commands in this file. # We can alter any objects in there, e.g. if there was a mirror called m1 kat.m1.phi = 45 out = kat.run()
- Author: Daniel Brown
- License: GPL v2
- Package Index Owner: ddb
- DOAP record: PyKat-0.4.1.xml | https://pypi.python.org/pypi/PyKat/0.4.1 | CC-MAIN-2016-22 | refinedweb | 717 | 64.71 |
go to bug id or search bugs for
Description:
------------
As per discussion on the PHP internals list, the proper way to ensure that when using a short name, the class loaded is always from the current namespace (even if it needs to be autoloaded and has not yet been loaded) is to import that class in every file where it is used.
However, doing so in two separate scripts, OR just importing a class that has already been defined in the namespace, can result in a fatal error claiming an import clash. Because of the nature of namespaces and this bug it requires 2 small scripts to reproduce, not one.
Reproduce code:
---------------
ns_import1.php:
---------------
<?php
namespace Test;
class Helper {}
include dirname(__FILE__) . '/ns_import2.php';
ns_import2.php:
---------------
<?php
namespace Test;
import Test::Helper;
class Other {}
Expected result:
----------------
No error, should be no output.
Actual result:
--------------
$ php ns_import1.php
Fatal error: Import name 'Helper' conflicts with defined class in /tmp/ns_import2.php on line.
Works great now - thanks Dmitry. | https://bugs.php.net/bug.php?id=43183 | CC-MAIN-2020-16 | refinedweb | 167 | 72.26 |
data set. Each image in Fashion-MNIST consists of a two-dimensional \(28 \times 28\) matrix. To make this data amenable to multilayer perceptrons, which anticipate receiving inputs as one-dimensional fixed-length vectors, we first flattened each image, yielding vectors of length 784, before processing them with a series of fully-connected layers.
Now that we have introduced convolutional layers, we can keep the image in its original spatially-organized grid, processing it with a series of successive convolutional layers. Moreover, because we are using convolutional layers, we can enjoy a considerable savings in the number of parameters required.
In this section, we will introduce one of the first published convolutional neural networks, whose benefit was first demonstrated by Yann LeCun, then a researcher at AT&T Bell Labs, for the purpose of recognizing handwritten digits in images—LeNet5. In the 90s, their experiments with LeNet gave the first compelling evidence that it was possible to train convolutional neural networks by backpropagation. Their model achieved outstanding results (matched at the time only by support vector machines) and was adopted to recognize digits for processing deposits in ATM machines. Some ATMs still run the code that Yann and his colleague Leon Bottou wrote in the 1990s!
6.6.1. LeNet¶
In a rough sense, we can think of LeNet as consisting of two parts: (i) a block of convolutional layers; and (ii) a block of fully-connected layers. Before getting into the weeds, let’s briefly review the model in Fig. 6.6.1.
Fig. 6.6.1 Data flow in LeNet 5. The input is a handwritten digit, the output a probability over 10 possible outcomes.¶
However, coinciding with this increase in the number of channels, the height and width are shrunk considerably. Therefore, increasing the number of output channels makes the parameter sizes of the two convolutional layers similar. The two average pooling layers are of size \(2\times 2\) and take stride 2 (note that this means they are non-overlapping). In other words, the pooling layer downsamples the representation to be precisely one quarter the pre-pooling size.
The convolutional block emits an output with size given by (batch size, channel, height, width). Before we can pass the convolutional block’s output to the fully-connected block, we must flatten each example in the mini-batch. In other words, we take this 4D input and transform it into the 2D input expected by fully-connected layers: as a reminder, the first dimension indexes the examples in the mini-batch and the second gives the flat vector representation of each example. LeNet’s fully-connected layer block has three fully-connected layers, with 120, 84, and 10 outputs, respectively. Because we are still performing classification, the 10 dimensional output layer corresponds to the number of possible output classes.
While getting to the point where you truly understand what’s going on inside LeNet may have taken a bit of work, you can see below that implementing it in a modern deep learning library is remarkably simple. Again, we’ll rely on the Sequential class.
import d2l
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, padding=2, activation='sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
        nn.AvgPool2D(pool_size=2, strides=2),
        # Dense will transform the input of the shape (batch size, channel,
        # height, width) into (batch size, channel * height * width)
        nn.Dense(120, activation='sigmoid'),
        nn.Dense(84, activation='sigmoid'),
        nn.Dense(10))
As compared to the original network, we took the liberty of replacing the Gaussian activation in the last layer by a regular dense layer, which tends to be significantly more convenient to train. Other than that, this network matches the historical definition of LeNet5. Next, we feed a single-channel example of size \(28 \times 28\) into the network and perform a forward computation layer by layer, printing the output shape at each layer to make sure we understand what’s happening here.
Note that the height and width of the representation at each layer throughout the convolutional block is reduced compared to the previous layer. Moreover, each pooling layer halves the height and width. However, as we go up the stack of layers, the number of channels increases layer-over-layer, from 1 in the input to 6 after the first convolutional layer and 16 after the second. Then, the fully-connected layers reduce dimensionality layer by layer, until emitting an output that matches the number of image classes.
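These shapes can be verified by hand with the usual output-size formula \(\lfloor (n + 2p - k)/s \rfloor + 1\) for a kernel of size \(k\), padding \(p\), and stride \(s\); the following framework-independent sketch traces the spatial size through LeNet’s convolutional block:

```python
def out_size(n, k, p=0, s=1):
    """Spatial output size of a convolution or pooling layer."""
    return (n + 2 * p - k) // s + 1

h = 28
h = out_size(h, k=5, p=2)  # conv1 (6 channels):  28 -> 28
h = out_size(h, k=2, s=2)  # pool1:               28 -> 14
h = out_size(h, k=5)       # conv2 (16 channels): 14 -> 10
h = out_size(h, k=2, s=2)  # pool2:               10 -> 5
flat = 16 * h * h          # features entering the dense block
print(flat)                # 400
```

The 400 flattened features are then reduced to 120, 84, and finally 10 outputs by the fully-connected layers.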
6.6.2. Data Acquisition and Training¶
Now that we’ve implemented the model, we might as well run some experiments to see what we can accomplish with the LeNet model. While it might serve nostalgia to train LeNet on the original MNIST dataset, that dataset has become too easy, with MLPs getting over 98% accuracy, so it would be hard to see the benefits of convolutional networks. Thus we will stick with Fashion-MNIST as our dataset because while it has the same shape (\(28\times28\) images), this dataset is notably more challenging.
batch_size = 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
While convolutional networks may have few parameters, they can still be significantly more expensive to compute than a similarly deep multilayer perceptron, so if you have access to a GPU, this might be a good time to put it into action to speed up training.
We train the model with mini-batch stochastic gradient descent. Since each epoch takes tens of seconds to run, we visualize the training loss at a finer granularity.

loss 0.475, train acc 0.821, test acc 0.825
61391.5 examples/sec on gpu(0)
6.6.4. Exercises). | http://classic.d2l.ai/chapter_convolutional-neural-networks/lenet.html | CC-MAIN-2020-16 | refinedweb | 952 | 50.26 |
In the blog we will discuss several methods to reduce the fitting time when fitting a large number of datasets with a user-defined function.
- Distributed Batch Processing app
- Speedy Fit app
- Define Python Vector fitting function in Origin
In this example for speed comparison, we are going to fit 100 datasets to a complicated function. We have put together a zip file which includes the following folders:
- csv: 100 csv files, each has a dataset to be fit
- fdf Files: the fitting function defined by Origin C (VoigtSum.FDF) and Python (VoigtSumPy.FDF)
- Analysis Templates: analysis templates for Origin C fitting function (FitOC.ogwu) and Python fitting function (FitPy.ogwu), which are to be used in Distributed Batch Processing app.
- SpeedyFitData.opju: CSV files are imported in a worksheet and put side by side, which are to be used in the Speedy Fit app.
** To use the fitting functions in Origin, drag and drop the FDF files to Origin
Speed Comparison
**The processing speed would be different depending on your PC configuration
As we can see, defining a Python fitting function that utilizes NumPy and SciPy's matrix calculations can greatly improve the speed, especially when working together with the Speedy Fit app or the Distributed Batch Processing app.
Define Python Vector fitting function in Origin
The Python code used in VoigtSumPy.FDF in the zip file is defined as below. It utilizes numpy and scipy’s matrix calculation.
import numpy as np from scipy.special import wofz def Voigt(x, xc, A, alpha, gamma): """ Return the Voigt line shape at x with Lorentzian component FWHM gamma and Gaussian component FWHM alpha. """ sigma = alpha/2.0 / np.sqrt(2 * np.log(2)) hg = gamma/2.0 return A*np.real(wofz((x-xc + 1j*hg)/sigma/np.sqrt(2))) / sigma /np.sqrt(2*np.pi) def myfunc( x, a, b, T, P, n): x1c=-10 x2c=-5 x3c=7 vx=np.array(x) x1=vx-x1c x2=vx-x2c x3=vx-x3c c1=1 c2=1 c3=1 m1=50 m2=51 m3=53 wg1=(T/m1)**0.5 wg2=(T/m2)**0.5 wg3=(T/m3)**0.5 wl=P/100 A1=c1*Voigt(x1,0,1,wg1,wl)+c2*Voigt(x2,0,1,wg2,wl)+c3*Voigt(x3,0,1,wg3,wl) y=(a+b*vx)*np.exp(-n**2*(1/8)*A1) return y.tolist()
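The speed-up from a definition like this comes largely from evaluating the model over the entire x vector at once with NumPy array arithmetic instead of a per-point Python loop. The generic pattern, sketched here with a toy function rather than the Voigt sum itself, is:

```python
import numpy as np

def f_scalar(x):
    # toy model, evaluated one point at a time
    return (1.0 + 0.01 * x) * np.exp(-0.5 * x * x)

xs = np.linspace(-10.0, 10.0, 10001)

# pointwise Python loop (slow)
ys_loop = np.array([f_scalar(x) for x in xs])

# vectorized evaluation (fast): the same math applied to the whole array
ys_vec = (1.0 + 0.01 * xs) * np.exp(-0.5 * xs * xs)

assert np.allclose(ys_loop, ys_vec)
```

The fitting function above applies exactly this idea, with scipy.special.wofz doing the heavy lifting on whole arrays.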
Speedy Fit app
The Speedy Fit app generated result for all the plots in the end of processing and it saves a lot of time. To use it, we should import all the csv files to a worksheet and put them side by side first. (See SpeedyFitData.opju in the zip file)
Results of the Speedy Fit app
Distributed Batch Processing app
The Distributed Batch Processing app can distribute the fitting process to multiple Origin instances.
To use the app, we should perform fitting on one data first and prepare analysis templates. (See FitPy.ogwu and FitOC.ogwu in the zip file)
Results of the Distributed Batch Processing app
| https://blog.originlab.com/reduce-curve-fitting-time-for-a-large-number-of-datasets | CC-MAIN-2022-33 | refinedweb | 506 | 64.1 |
shm_open -- open or create a shared memory object
shm_unlink -- remove a shared memory object
Standard C Library (libc, -lc)
#include <sys/types.h>
#include <sys/mman.h>
int
shm_open(const char *path, int flags, mode_t mode);
int
shm_unlink(const char *path);
The shm_open() function opens (or optionally creates) a POSIX shared memory
object named path. The shm_unlink() function removes a shared memory
object named path.
In addition, the FreeBSD implementation causes mmap() of a descriptor
returned by shm_open() to behave as if the MAP_NOSYNC flag had been specified
to mmap(2). (It does so by setting a special file flag using
fcntl(2).)
The shm_unlink() function makes no effort to ensure that path refers to a
shared memory object.
The path argument does not necessarily represent a pathname (although it
does in this and most other implementations).
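For quick experiments with POSIX shared memory without writing C, Python's standard-library multiprocessing.shared_memory module (Python 3.8 and later) is implemented on top of shm_open() and shm_unlink() on POSIX systems. A minimal sketch, assuming such a platform:

```python
from multiprocessing import shared_memory

# Roughly: shm_open(..., O_CREAT | O_RDWR, ...) followed by mmap()
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"      # write through the mapped region
data = bytes(shm.buf[:5])   # read it back

shm.close()    # munmap() the region
shm.unlink()   # shm_unlink() the name
```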
If successful, shm_open() returns a non-negative integer; shm_unlink()
returns zero. Both functions return -1 on failure, and set errno to
indicate the error.
The shm_open() and shm_unlink() functions can fail with any error defined
for open() and unlink(), respectively. In addition, the following errors
are defined for shm_open():
[EINVAL] The object named by path is not a shared memory object
(i.e., it is not a regular file).
[EINVAL] The flags argument to shm_open() specifies an access
mode of O_WRONLY.
mmap(2), munmap(2), open(2), unlink(2)
The shm_open() and shm_unlink() functions are believed to conform to IEEE
Std 1003.1b-1993 (``POSIX.1'').
The shm_open() and shm_unlink() functions first appeared in FreeBSD 4.3.
Garrett A. Wollman <wollman@FreeBSD.org> (C library support and this manual
page)
Matthew Dillon <dillon@FreeBSD.org> (MAP_NOSYNC)
FreeBSD 5.2.1 March 24, 2000 FreeBSD 5.2.1
UTF-8 string routines
These functions are declared in the main Allegro header file:
#include <allegro5/allegro.h>
About UTF-8 string routines
Some parts of the Allegro API, such as the font routines, expect Unicode strings encoded in UTF-8. These basic routines are provided to help you work with UTF-8 strings. You should use another library (e.g. ICU) if you require more functionality.
You should also see elsewhere for an introduction to Unicode. Extremely briefly, Unicode is a standard consisting of a large character set (of over 100,000 characters), and rules, such as how to sort strings. A code point is the integer value of a character, but not all code points are characters, as some code points have other uses. Clearly it is impossible to represent each code point with an 8-bit byte or even a 16-bit integer, so there exist different Unicode Transformation Formats. UTF-8 has many nice properties, but the main advantages are that it is backwards compatible with C strings, and ASCII characters (code points <= 127) are encoded in UTF-8 exactly as they would be in ASCII.
Here is a diagram of the representation of the word "ål", with a NUL terminator.
String              å              l             NUL
Code points         U+00E5 (229)   U+006C (108)  U+0000 (0)
UTF-8 encoding      0xC3, 0xA5     0x6C          0x00
UTF-16LE encoding   0xE5, 0x00     0x6C, 0x00    0x00, 0x00
U+00E5 is greater than 127 so requires two bytes to represent in UTF-8. U+006C and U+0000 both exist in the ASCII range, so take one byte each, exactly as in an ASCII string. UTF-16 is a different encoding, in which each code is represented by two or four bytes. In UTF-8 a zero byte is only present when it represents the NUL character, but this is not true for UTF-16.
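The encodings in the table are easy to verify in any Unicode-aware language; here is a quick check in Python (an illustration only, since Allegro itself is a C library):

```python
s = "ål"

utf8 = s.encode("utf-8")       # 'å' needs two bytes, 'l' needs one
utf16 = s.encode("utf-16-le")  # every character here needs two bytes

# 2 code points become 3 UTF-8 bytes and 4 UTF-16LE bytes
# (the C-style NUL terminator is not part of a Python string)
print(len(s), len(utf8), len(utf16))
```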
In the Allegro API, be careful whether a function takes byte offsets or code-point indices. In general, all position parameters are in byte offsets, not code point indices. This may be surprising, but if you think about it, it is required for good performance. It also means many functions will work even if the strings do not contain UTF-8, so you may actually store arbitrary data in them.
For actual text processing, where you want to specify positions with code point indices, you should use al_ustr_offset to find the byte position of a code point.
jndi mapping with Trails example - lucio piccoli, Jul 24, 2005 6:09 AM
hi all,
i am just a little puzzled with my own ejb3 SLSB deployment on JBoss 403. I have followed the trails examples, having a JSP invoking a SLSB calling an Entity bean. Straightforward stuff. However i can't find where to do the JNDI mapping!! Hence at run time i get a JNDI name not found exception in the JSP.
The trails stuff has no Deployment Descriptors as i understand or as i can find. So where is the JNDI mapping performed.... is it taken care of in the annotation?
any help please.
-lp
1. Re: jndi mapping with Trails example - Kabir Khan, Jul 24, 2005 8:12 AM (in response to lucio piccoli)
By default it uses the fully qualified name of the remote/local interface class.
You can override this using the jboss specific @RemoteBinding and @LocalBinding annotations
2. Re: jndi mapping with Trails example - lucio piccoli, Jul 25, 2005 12:01 AM (in response to lucio piccoli)
hi all,
i still can not get this dam EJB3 code to be bound to a JNDI name.. what do i have to do here?
I have attached the SLSB code, the JSP code and the exception. The SLSB
cannot be found from the JNDI context. The ear seems to be deployed cleanly as there are no errors in the server.log
This is such a trivial task but i can't seem to solve it.
Any help is appreciated.
------SLSB code----
package com.asteriski.asset.business;
import com.asteriski.asset.entity.*;
import javax.ejb.*;
import javax.persistence.*;
import java.sql.Timestamp;
import java.util.*;
@Stateless
public class AssetBean implements Asset {
@PersistenceContext (unitName="as")
protected EntityManager em;
public void addAsset (String name) {
System.out.println( "addAsset");
AssetEntity assetEntity = new AssetEntity (name);
em.persist (assetEntity);
}
public Collection getAssets()
{
return em.createQuery("from Asset a").getResultList();
}
}
---------JSP snipp-----
<%
AssetBean ab = null;
try {
System.out.println( "Attempting context lookup");
InitialContext ctx = new InitialContext();
ab = (AssetBean) ctx.lookup(AssetBean.class.getName());
System.out.println( "got context lookup");
} catch (Exception e)
{
System.out.println( "Error context lookup " + e);
//e.printStackTrace ();
}
%>
---------Exception-----
13:52:31,003 INFO [STDOUT] Error context lookup javax.naming.NameNotFoundException: com.asteriski.asset.business.AssetBean not bound
13:52:31,003 ERROR [[jsp]] Servlet.service() for servlet jsp threw exception
3. Re: jndi mapping with Trails example - lucio piccoli, Jul 25, 2005 1:49 AM (in response to lucio piccoli)
solved this problem.
It seems that the jndi is bound to the interface class not the implementation class!
-lp
4. Re: jndi mapping with Trails example - Bill Burke, Jul 25, 2005 10:29 AM (in response to lucio piccoli)
Yes. You don't have a reference to the bean class on the client, and also a bean class can have both a local and a remote interface...
The real reason for the default was so that you automatically had a compiler checked constant you could use to lookup the EJB.
MyRemoteInterface.class.getName()
I've finally been able to update the WxMpl library so it's compatible
with MPL 0.98:
It's been tested on Debian Lenny (Python 2.5.2, MPL 0.98.1, wxPython
2.6.3.2) and Mac OS 10.5.5 (MacPython 2.5.4, MPL 0.98.1 and 0.98.6svn,
wxPython 2.8.9.1). Please let me know if you encounter any problems.
My thanks to everyone for the patches and feedback, and for being so
patient.
Ken
Thomas Robitaille wrote:
>.
The workaround for now may be to call ax.set_autoscale_on(False) before
your call to ax.contour. You could also save the datalim before and
restore them afterwards.
This sort of request has come up before, though, and the longer-term
solution might be some refactoring in contour.py. As it is, everything
is done when the ContourSet is instantiated; one does not have the
option of simply calculating the contours, for example.
Eric
>
> So essentially, I am wondering if it is possible to retrieve a set of
> LineCollection objects without acting in any way on the current figure/axes.
>
> Thanks for any help,
>
> Thomas
So essentially, I am wondering if it is possible to retrieve a set of
LineCollection objects without acting in any way on the current figure/axes.
Thanks for any help,
Thomas
Hi all,
I have written a small program for optimization of truss
structures. The design variables are the cross sectional
areas of the truss elements (attached figure).
One way to visualize the results is to use the linewidth
as a parameter.
Is it also possible to use different ("continuous") colors
corresponding to the values of the design variables?
Any pointer would be appreciated.
Thanks in advance.
Nils
Postprocessing
from scipy import io, set_printoptions
from numpy import zeros, arange, outer, identity, loadtxt, ones, shape, dot, r_, where, min, max
set_printoptions(precision=4, linewidth=150)
from pylab import spy, show, plot, subplot, figure, imshow, scatter, title, text, annotate, xlabel, ylabel, savefig
from scipy.linalg import eigh, norm, solve
#
# Visualization of the results of a sizing optimization problem
#
N = 45       # Number of nodes
nele = 136   # Number of elements
coord = loadtxt('coord.inp', comments='#', usecols=(1, 2)).astype(float)
inz = loadtxt('connect.inp', comments='#', usecols=(1, 2)).astype(int)
cross = loadtxt('crossopt.dat')
cross_max = max(cross)
cross_min = min(cross)
a = 1.9/(cross_max - cross_min)
b = 2. - a*cross_max

def model(cross):
    """ Model plot """
    scatter(coord[:, 0], coord[:, 1])
    title('Plane truss structure')
    for iele in arange(0, nele):
        p_j = coord[inz[iele, 1] - 1]
        p_i = coord[inz[iele, 0] - 1]
        linewidth = a*cross[iele] + b
        print iele, linewidth
        plot(r_[p_i[0], p_j[0]], r_[p_i[1], p_j[1]], 'r-', lw=linewidth)
    xlabel('$x$')
    ylabel('$y$')

model(cross)
savefig('truss')
show()
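In answer to the question above: yes. One library-independent way is to normalize each design variable to [0, 1] and interpolate between two RGB endpoints, then pass the resulting tuple to plot() via its color keyword (matplotlib's built-in colormaps, e.g. cm.jet, do the same job more elaborately). A minimal sketch, where value_to_color is my own hypothetical helper:

```python
def value_to_color(v, vmin, vmax):
    """Map v in [vmin, vmax] linearly onto a blue-to-red RGB tuple."""
    t = (v - vmin) / (vmax - vmin)  # normalize to [0, 1]
    return (t, 0.0, 1.0 - t)        # t=0 -> pure blue, t=1 -> pure red

# e.g. inside the element loop above:
#   plot(..., color=value_to_color(cross[iele], cross_min, cross_max))
```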
Hi, I have two picking questions. First, if I do this inside a pick
handler function:
def OnPick(self, event):
if isinstance(event.artist, Line2D):
thisline = event.artist
xdata = thisline.get_xdata()
ydata = thisline.get_ydata()
I can grab the data from a line. Fine.
Now I'd like to do two things:
1) Within this same pick handler function, have another if conditional,
but for the case when the user is picking the legend. In other words, I
want to pick the legend artist, not a Line2D artist.
2) Modify the above so the user can pick only the actual points on a line,
but not the connecting line if I have plotted it like 'o-' style (connected
points).
I hope this is clear. Any help is appreciated. Thanks.
Che
Jeff Whitaker wrote:
> Armin Moser wrote:
>> Jeff Whitaker wrote:
>>
>>> Armin Moser wrote:
>>>
>>>> Hi,
>>>>
>>>> I would like to interpolate an array of shape (801,676) to regularily
>>>> spaced datapoints using griddata. This interpolation is quick if the
>>>> (x,y) supporting points are computed as X,Y = meshgrid(x,y). If this
>>>> condition is not fullfilled the delaunay triangulation is extremely
>>>> slow, i.e. not useable. Is this a known property of the used
>>>> triangulation? The triangulation can be performed with matlab without
>>>> any problems.
>>>>
>>>> Armin
>>>>
>>>>
>>> Armin: You could try installing the natgrid toolkit and see if that
>>> speeds up griddata at all. If not, please post a test script with data
>>> and maybe we can figure out what is going on.
>>>
>> I have already tried natgrid and it didn't improve the situation. As
>> suggested I append a script demonstrating the problem.
Reducing the original grid from 676x801 to 100x120, I get benchmarks of
about 6 seconds with natgrid and 0.15 s with Robert's delaunay. This
seems quite repeatable.
I also tried randomizing x and y in the first benchmark with natgrid,
and it made only a slight difference.
Eric
>>
>> Thanks
>> Armin
>>
>
> Armin: On my mac, your two benchmarks take 15 and 14 seconds. Do you
> consider that too slow?
>
> Perhaps this is just a toy example to test griddata, but I assume you
> realize that you wouldn't normally use griddata to interpolate data on
> one regular grid to another regular grid. griddata is strictly for
> interpolating scatter data (not on a regular mesh) to a regular mesh.
>
> -Jeff
>> ------8<-------------
>> from numpy import *
>> from pylab import *
>> import time
>>
>> deg2rad = pi/180.0
>> ai = 0.12*deg2rad
>> x = linspace(13,40,676)
>> y = linspace(10,22,801)
>>
>> x = x*deg2rad
>> y = y*deg2rad
>> [x,y] = meshgrid(x,y)
>> z = (x**2+y**2)
>>
>> xi = linspace(x.min(),x.max(),x.shape[1])
>> yi = linspace(y.min(),y.max(),y.shape[0])
>> tic= time.time()
>> zi = griddata(x.flatten(),y.flatten(),z.flatten(),xi,yi)
>> toc = time.time()
>> print toc-tic
>>
>> fac = 2*pi/1.2681
>> nx = fac * (cos(y)*cos(x) - cos(ai))
>> ny = fac * (cos(y)*sin(x))
>> nz = fac * (sin(y) + sin(ai))
>> np = sqrt(nx**2 + ny**2)
>>
>> z = (np**2+nz**2)*exp(-0.001*nz)
>>
>> xi = linspace(np.min(),np.max(),x.shape[1])
>> yi = linspace(nz.min(),nz.max(),y.shape[0])
>> tic = time.time()
>> zi = griddata(np.flatten(),nz.flatten(),z.flatten(),xi,yi)
>> toc = time.time()
>> print toc-tic
>>
>
7/29/16
- Traits of a Proficient Programmer (thanks Shawn!)
- If explaining a codebase to a beginner, a proficient developer may decide to stick to the basics of what the code is actually doing rather than throwing out named patterns and telling a novice "Go read the Gang of Four before asking me any questions."
- Dig into primary sources rather than just reading summaries. It takes more work, but helps you figure out both the basis and the boundaries of a technique, and also gives you an opportunity to generate new ideas of your own that are inspired by core principles.
- Ask others to explain why they do things, but don't just accept dogmatic reasoning. Demand examples and inquire about context, so that you can try to imagine what it is like to be in their shoes. Doing this is tremendously valuable because it allows you to see the strengths and weaknesses of ideas in their natural habitats.
- A meaningful portfolio of work, whether it's professional or personal.
- Ability to talk through and ask good questions about realistic problems in whatever business domain your company is in.
- Sufficient communication skills and technical competence to explain some code they've written, at both the high level and at the detailed level.
- A mindset that emphasizes customer service and a focus on business outcomes rather than raw technical interests.
7/30/16
Couldn’t find a team for the CS50 coding challenge so I’m going solo.
- First problem is called “Punctuation.”
Write a program that, given a line of text, T, via standard input, prints it to standard output with the following modifications:
After every period (.) and comma (,), a space must be added.
The first letter after every period must be capitalized.
#include <stdio.h>
#include <string.h>
#include <ctype.h>
#include "cs50.h"

int main(void)
{
    string sentence = GetString();
    int period_flag = 0;

    // loop through the sentence given
    for (int i = 0; i < strlen(sentence); i++)
    {
        if (period_flag == 1) // marker to see if we have run across a period or not
        {
            printf("%c", toupper(sentence[i])); // if we have then next character must be uppercase
            period_flag = 0; // reset flag for next period
        }
        else if (sentence[i] == ',')
            printf(", ");
        else if (sentence[i] == '.')
        {
            period_flag = 1;
            printf(". ");
        }
        else
            printf("%c", sentence[i]);
    }
}
- Second problem is called “Money, Money, Money.”
Write a program that takes in three values via standard input: a starting bank account balance, B, in dollars; an annual interest rate, R; and a number of years, N. The program should print out the value of the resulting bank account balance after B has received interest at rate R calculated annually for N years, rounded to the nearest cent.
For example, if B is 280.00, R is 0.01 (representing a 1% interest rate), and N is 3, then your program should print out 288.48, which represents the final balance after an initial balance of $280 receives 1% interest calculated once per year for three years.
Expect B and R to be floating-point values and N an integer.
Input Format
Your program will take three lines of input:
The first line will be a floating-point value, B, representing the starting bank account balance.
The second line will be a floating point value, R, representing an interest rate in decimal form.
The third line will be an integer, N, representing the number of years over which that interest should be calculated.
Constraints
B >= 0
R >= 0
N >= 0
There will be no percentage signs (%) or dollar signs ($), in the input, and you should not include either symbol in your output.
Output Format
A number representing dollars as a floating-point value, rounded to the nearest cent.
Sample Input
82
0.02
3
Sample Output
87.02
- This one took a lot longer than it should have. I ran into stupid errors more than anything else. Also I used the incorrect formula for yearly compounded interest in the beginning. Damn you google.
- Formula used: final balance = B * (1 + R)^N (interest compounded once per year)
- I round the value at the very end and then print to two decimal places, but my program fails one test case, but I don’t see how. I will move on since I can’t beat my head against an unknown wall.
- If I try to round any earlier than the end I lose cents in precision and the interest rate doesn’t come out right at all.
- My solution
#include <stdio.h>
#include <math.h>
#include "cs50.h"

int main(void)
{
    double B = GetFloat(); // balance
    double R = GetFloat(); // interest rate
    double N = GetInt();   // years

    if (B < 0 || R < 0 || N < 0)
        return 0;

    double X = 1 + R;
    double Y = pow(X, N);
    double finalBalance = Y * B;
    finalBalance = roundf(finalBalance * 100) / 100;
    printf("%.2f", finalBalance);
}
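A quick cross-check of the sample case outside of C (this is just the same B*(1+R)^N formula, not part of the contest submission):

```python
B, R, N = 82.0, 0.02, 3

balance = B * (1 + R) ** N  # 87.019056
print(round(balance, 2))    # rounds to 87.02, matching the sample output
```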
- Third problem is called “One More Year.”
Write a program that, given a year, Y, as an integer via standard input, prints via standard output Leap Year if Y is a leap year, and Not Leap Year if Y is not a leap year.
Recall that:
A year is a leap year if it is a multiple of four (2004, 2008, and 2012 are all leap years). The exception to this rule is that a year which is a multiple of 100 and also not a multiple of 400 is not a leap year. For example, 1800 and 1900 are not leap years, but 2000 is a leap year (because it is a multiple of 400).
- Very simple problem, but it requires the right kind of logic tree to get it right. I had it a little backwards at first. Didn’t take too long to get it all right.
- Solution here
#include <stdio.h>
#include <math.h>
#include "cs50.h"

int main(void)
{
    int year = GetInt();

    if (year > 9999 || year < 0)
        return 0;

    if (year % 4 == 0)
    {
        if (year % 100 == 0 && year % 400 != 0)
        {
            printf("Not Leap Year");
            return 1;
        }
        printf("Leap Year");
        return 1;
    }
    else
        printf("Not Leap Year");
    return 1;
}
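The whole logic tree collapses into one boolean expression; the same rule sketched in Python, just as an illustration:

```python
def is_leap(year):
    # a multiple of 4, except centuries that are not multiples of 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

So is_leap(2000) is True (multiple of 400) while is_leap(1900) is False.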
- Skipped next problem since it only has a 29% completion rate. I’ll come back to it.
- Next problem is called “Name Your Cat”
Between the words cat and kitten, there are seven unique characters (a, c, e, i, k, n, and t), because only those characters appear in the two words.
Given a string (i.e., a cat's name), reverse all instances of those characters, while leaving the other 19 letters in the alphabet as they are. Each of those 19 characters, if they appear in the name, must remain at the same index.
If there is only one character to reverse, leave it in the same location.
Input Format
You will take in 2 lines of input. The first will be N, the number of characters in the string. The second line of input will be the string S to be manipulated.
Constraints
N > 0
S will only contain lowercase letters.
Output Format
Your program should print the cat-reversed version of S to standard output.
Sample Input
12
davidmeowlan
Sample Output
dnvadmeowlia
- This one wasn’t so bad except I got a little confused on how to structure my ending condition. After I ate lunch it was easy to spot and finish solving.
- Only thing I don’t know how to do is shorten the giant list of if conditions.
- My solution here
#include <stdio.h>
#include <string.h>
#include "cs50.h"

int main(void)
{
    int n = GetInt();
    string s = GetString();
    int i = 0;
    int j = 1;

    while (i < (n-j))
    {
        if (s[i] == 'a' || s[i] == 'c' || s[i] == 'e' || s[i] == 'i' || s[i] == 'k' || s[i] == 'n' || s[i] == 't')
        {
            if (s[n-j] == 'a' || s[n-j] == 'c' || s[n-j] == 'e' || s[n-j] == 'i' || s[n-j] == 'k' || s[n-j] == 'n' || s[n-j] == 't')
            {
                // swap
                char temp = s[i];
                s[i] = s[n-j];
                s[n-j] = temp;
                i++;
                j++;
            }
            else
            {
                // increment j if we don't swap to find next viable char
                j++;
            }
        }
        else
            i++;
    }
    printf("%s", s);
}
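On shortening the giant list of if conditions: one option is a membership test against the character set (in C the analogue would be strchr("acekint", s[i])). A Python sketch of the same idea, collecting the indices of the special characters and then reversing only the values at those indices (cat_reverse is my own hypothetical name):

```python
CAT_CHARS = set("acekint")  # the seven unique letters of "cat" and "kitten"

def cat_reverse(s):
    chars = list(s)
    idx = [i for i, c in enumerate(chars) if c in CAT_CHARS]
    vals = [chars[i] for i in idx]
    for i, v in zip(idx, reversed(vals)):
        chars[i] = v  # every other character keeps its position
    return "".join(chars)
```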
- Fourth problem is simply called “X”.
Write a program that, given an odd integer, N, in [3, 23], draws a box (out of * characters) of height N and width Nwith an X in the middle. That is, the diagonals of the box should be drawn with * characters.
For example, 5 should give you:
*****
** **
* * *
** **
*****
And an input of 7 should give you:
*******
**   **
* * * *
*  *  *
* * * *
**   **
*******
Input Format
An integer N that may be odd or even, and may or may not be in [3, 23].
Constraints
N will be an integer.
Output Format
A box with diagonals, drawn from * characters, provided N is odd and in [3, 23]. If N is even or is not in [3, 23], your program should not print anything.
Sample Input
5
Sample Output
*****
** **
* * *
** **
*****
- I broke this problem down into parts when I’m sure it’s possible to do it with just two FOR loops. That seemed a little complex so I simplified it down to three parts
- One, make the border of *’s
- Two, make the X
- Print the completed 2D array
- The hardest part was getting the two for loops to get the X to work correctly. I had part 1 and 2 done quickly.
- My solution is here
#include <stdio.h>
#include "cs50.h"

int main(void)
{
    int n = GetInt();

    if (n % 2 == 0)
        return 0;
    if (n < 3 || n > 23)
        return 0;

    int board[n][n];

    // counters
    int x = 1;
    int y = n-2;

    // loop through 2D array and put border around it
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            if (i == 0)
                board[i][j] = '*';
            else if (j == 0)
                board[i][j] = '*';
            else if (j == (n-1))
                board[i][j] = '*';
            else if (i == (n-1))
                board[i][j] = '*';
            else
                board[i][j] = ' ';
        }
    }

    // loop through and put the X down
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            if (x > 0 && x < n-1)
            {
                if (y > 0 && y <= n-2)
                {
                    board[x][x] = '*';
                    board[x][y] = '*';
                    x++;
                    y--;
                }
            }
        }
    }

    // print the final board
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            printf("%c", board[i][j]);
        }
        printf("\n");
    }
}
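For comparison, the three steps can indeed be fused into a single condition per cell (border, main diagonal, or anti-diagonal); a compact cross-check in Python, not the contest submission:

```python
def draw_x_box(n):
    rows = []
    for i in range(n):
        row = ""
        for j in range(n):
            on_border = i in (0, n - 1) or j in (0, n - 1)
            on_diagonal = j == i or j == n - 1 - i
            row += "*" if on_border or on_diagonal else " "
        rows.append(row)
    return "\n".join(rows)
```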
7/31/16
- Didn’t get much time to work on my CS50 challenge, but did work on the problem called “Word Reverse.”
Write a program that, given a string, S, made up of ASCII characters, reverses each 'word' within S individually while leaving the general word order of S intact and prints the result to standard output.
A 'word' here is defined as any contiguous set of letter and number characters, uppercase or lowercase. Words are separated by any ASCII characters that are not letters or numbers.
For example, if a user inputs:
Hello world!, your program should print olleH dlrow!
Go ha.ng a salami, your program should print oG ah.gn a imalas
Wow, aren't you enjoying this 1337 contest?, your program should print woW, nera't uoy gniyojne siht 7331 tsetnoc?
- I ran into a few issues with this one I put in the comments of my solution gist, but the reverseString function I had already created made the logic pretty trivial. I just need to look for a space or punctuation and switch any string before that (put into a temp array).
- Once that is done clear the temp array and continue to the next space/punctuation and do it again or until I am at the end of the whole string.
- Solution here.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <ctype.h>
#include "cs50.h"

char* reverseString(char* s);

int main(void)
{
    char *str = malloc(sizeof(char *) * 200);
    str = GetString(); // get input from user
    char *temp = malloc(sizeof(char *) * 200);
    int j = 0;

    if (str == NULL)
        return 1;

    // loop through string copying to a temp string until a special character or space is found.
    for (int i = 0; i < strlen(str); i++)
    {
        if (isalnum(str[i]))
        {
            temp[j] = str[i];
            j++;
            if (i == strlen(str)-1) // this is a check to see if this is the last character
            {
                reverseString(temp); // reverse temp string and print
                printf("%s", temp);
                j = 0; // reset back to beginning of temp array once we find a special character
                memset(&temp[0], 0, sizeof(temp)); // clear temp array
            }
        }
        else if (!isalnum(str[i]))
        {
            reverseString(temp); // reverse temp string and print
            printf("%s", temp);
            printf("%c", str[i]); // print space/special character
            j = 0; // reset back to beginning of temp array once we find a special character
            memset(&temp[0], 0, sizeof(temp)); // clear temp array
        }
    }
    free(str); // free memory
    return 0;
}

char* reverseString(char* s)
{
    int length = 0, i = 0;
    length = strlen(s);
    char temp[1];

    for(i = 0; i < length/2; i++)
    {
        *temp = s[i];
        s[i] = s[length-i-1];
        s[length-i-1] = *temp;
    }
    return s;
}
Problems I encountered:
- I needed a way to clear my temp array after each swap. Looked that up and got an answer to just clear the memory which worked well.
- Initially used scanf instead of GetString. Scanf only goes up the first space character which is a problem.
- If my string didn't end in a special character it wouldn't work, so I needed to add a check to see if we were on the last index of the string and then do the reversing and printing if that was the case.
- I didn't put in a check for a NULL string.
- I didn't realize that it wasn't just alpha characters that needed to be switched, but alphanumeric. So instead of isalpha you have to use isalnum.
- I used my initial reverseString function I created for leetcode problem.
- Unfortunately I submitted this solution about 5 minutes after the competition ended so it didn’t count.
- I printed out the rest of the problems as PDFs so I can work on them later, but it looks like I can do them whenever I want and check them on Hackerrank it just won’t count for anything.
- The Maze Runner problem looks incredibly hard!
Write a program that reads from standard input a grid-based standard maze (no loops) represented in ASCII characters, and prints, on standard output, the sequence of positions comprising the shortest path from the start to the exit that passes through no walls of the maze. The start will be the position at the top left of the grid (represented by [0, 0]), and the exit will be the position at the bottom right.
Each successive position in the shortest path must be obtainable from the previous position by incrementing or decrementing one coordinate.
Input Format
The maze will be composed of the ASCII characters +, -, |, space (), and newline (\n).
- and | characters represent walls.
A space either represents the center of a grid square, or, if between two + characters, a passageway (the absence of a wall).
The maze does not need to be square, so its size will be N x M.
Constraints
N >= 2
M >= 2
The maze entrance (starting position) will always be the top left corner, [0, 0].
The maze exit will always be the bottom right, [N - 1, M - 1].
Every position in the maze will be reachable from the start. There will only be one correct shortest path. There may be wrong directions.
Output Format
A set of positions, formatted
[a, b]\n[c, d]\n..., where the letters stand for integers representing [row, col] of the positions comprising the path from the start to the exit. The first coordinate should always be [0, 0]. There should be a newline after each position, including the last. The last coordinate should always be [N - 1, M - 1].
Sample Input
+ +-+-+-+
| |   | |
+ + + + +
|   |   |
+-+-+ +-+
|     | |
+ +-+-+ +
|       |
+-+-+-+ +
Sample Output
[0, 0]
[1, 0]
[1, 1]
[0, 1]
[0, 2]
[1, 2]
[2, 2]
[2, 1]
[2, 0]
[3, 0]
[3, 1]
[3, 2]
[3, 3]
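The core technique for "shortest path in an unweighted grid" is breadth-first search. A stripped-down sketch that skips the ASCII parsing and assumes a can_move(r, c, nr, nc) predicate reporting whether the wall between two adjacent cells is open (shortest_path is my own name, not from the problem):

```python
from collections import deque

def shortest_path(n, m, can_move):
    start, goal = (0, 0), (n - 1, m - 1)
    prev = {start: None}          # predecessor map; also the visited set
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < m and (nr, nc) not in prev \
                    and can_move(r, c, nr, nc):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    path, cur = [], goal          # walk the predecessor chain backwards
    while cur is not None:
        path.append(cur)
        cur = prev[cur]
    return path[::-1]
```

BFS visits cells in order of distance from the start, so the first time the exit is dequeued the recorded path is shortest; the problem's guarantee that every position is reachable means the backtracking loop never misses a predecessor.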
8/1/16
- In the Money, Money, Money challenge problem I somehow overlooked that the math.h library has a power function in it. WTF. 😫
- Fixed it passing Test Case #3 by changing all floats to doubles. What the christ.
- Updated solution including all of the above:
- float vs. double precision
8/2/16
- What are some goals a beginning Self-Taught Developer should have?
- Learning How to Learn: Powerful mental tools to help you master tough subjects
- Persistence > Knowledge
- Pragmatic Thinking and Learning: Refactor Your Wetware (Pragmatic Programmers)
- I'd develop a really well made portfolio of various projects so people can immediately understand what you're capable of.
- How to use your full brain when writing code
- Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long
- Think of it [the brain] as a processor that can only handle a limited number of concurrent tasks
- Repetitive training in a task will eventually make your brain run it on its “default network”. This is like the brain’s muscle memory or a CPU’s internal cache. Once you repeat a task enough times, it will become almost effortless.
- A more useful training task might be writing an SQL query, or the code for a website’s controller, or string handling, or practicing regular expressions, or even command line tools such as grep or awk.
- For building a strong vocabulary, there is the obvious habit of reading a lot, which I think at this point most developers already know that they have to.
- Anki: Friendly, intelligent flash cards
- Practice coding problems every day, beyond just your project’s tasks
- HackerRank emailed me a simple problem to solve so I did. Took 2 minutes, but it did help me get used to using scanf and dealing with spaces in input not using the CS50 library crutch.
- I also forgot to initialize the sum variable to 0 so it kept spitting out a garbage sum at the end.
Given an array of integers, can you find the sum of its elements?
Input Format
The first line contains an integer, n, denoting the size of the array.
The second line contains space-separated integers representing the array's elements.
Output Format
Print the sum of the array's elements as a single integer.
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

//
int main(){
    int n;
    scanf("%d",&n);
    int arr[n];
    int sum = 0;
    for(int arr_i = 0; arr_i < n; arr_i++){
        scanf("%d",&arr[arr_i]);
        sum += arr[arr_i];
    }
    printf("%i", sum);
    return 0;
}
- I was helping a guy with his pset1 problem “greedy” on reddit. I then looked back at my code and cleaned it up a little bit. There were a few shorter ways of writing the program.
- It is cool to look back and see a slightly better way of doing things.
8/3/16
- First-Ever CS50x Coding Contest 2016 Postmortems
- The above post is really cool since it shows the winners and some statistics
- Also what is fucking sweet is there are videos showing how to solve all the problems! Now I get to see how they solved some of the more algorithmic problems that were way above my head (bus queue and maze runner).
- Holy fuck looking at the first problem and the way they did it is blowing my mind. Their solution is much simpler than mine: they look at individual characters rather than individual words as I was! I completely over-complicated my approach!
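Their character-at-a-time idea boils down to buffering alphanumeric characters and flushing the buffer in reverse at every separator; my own reconstruction of it, in Python rather than C:

```python
def reverse_words(s):
    out, buf = [], []
    for ch in s:
        if ch.isalnum():
            buf.append(ch)             # still inside a word
        else:
            out.extend(reversed(buf))  # flush the word backwards
            buf.clear()
            out.append(ch)             # separators stay in place
    out.extend(reversed(buf))          # flush a trailing word
    return "".join(out)
```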
- Had to remember how to use the ternary operator to make some code shorter for another simple coding problem.
- Code here:
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

int main(){
    int alice_score = 0;
    int bob_score = 0;
    int a0;
    int a1;
    int a2;
    scanf("%d %d %d",&a0,&a1,&a2);
    int b0;
    int b1;
    int b2;
    scanf("%d %d %d",&b0,&b1,&b2);

    // the empty middle operand (cond ? : expr) is a GCC extension: on a tie, nothing happens
    (a0 == b0) ? : ((a0 > b0) ? alice_score++ : bob_score++);
    (a1 == b1) ? : ((a1 > b1) ? alice_score++ : bob_score++);
    (a2 == b2) ? : ((a2 > b2) ? alice_score++ : bob_score++);

    printf("%i %i", alice_score, bob_score);
    return 0;
}
- Quickly solved his next challenge:
- I was able to use a FOR loop with multiple variables.
for (int i = 0, j = n - 1; i < n; i++, j--) {
    diagonal_sum1 += a[i][i];
    diagonal_sum2 += a[i][j];
}
8/4/16
- Uber can recruit engineers through their app.
- Freaky - “Debner has no idea how the company knew he is an engineer. He has never talked to Uber, had a recruiter reach out, or submitted his resume to the ride-hailing company.”
- Started working on this problem yesterday, but finished it up this morning.
- I tried to take the entire input as a string first, but that caused a whole host of problems since all numbers were treated as characters and not actual numbers.
- I also didn’t fully understand military time versus civilian time so I didn’t understand all the constraints.
- Once I fixed the above it was fairly simple.
- Solution here:
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

int main(void) {
    int hours, minutes, seconds;
    char time[3];
    scanf("%d:%d:%d%s", &hours, &minutes, &seconds, time);
    if (hours < 12 && (strcmp("PM", time) == 0)) {
        hours += 12;
    }
    if (hours == 12 && (strcmp("AM", time) == 0)) {
        hours = 0;
    }
    printf("%02d:%02d:%02d", hours, minutes, seconds);
    return 0;
}
- This morning I completed the final warmup problem on HackerRank because why not? Circular Array Rotation:
- I was actually surprised by this one because the code I wrote initially almost worked completely right away which hasn’t ever happened before. It means maybe I’m getting slightly better at figuring out the logic.
- My loops had the incorrect constraints, but after fixing those the code worked.
- I made a temp array to put in the modified shifted numbers, but that really isn’t necessary at all since it’s just (i+k)%n places shifted.
- Solution here:
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <stdlib.h>

int main(void) {
    int n, k, q;
    scanf("%d %d %d", &n, &k, &q);
    int arr[n];
    for (int i = 0; i < n; i++) {
        scanf("%d", &arr[i]);
    }
    int toprint[q];
    for (int i = 0; i < q; i++) {
        scanf("%d", &toprint[i]);
    }
    int arr2[n];
    for (int i = 0; i < n; i++) {
        arr2[(i + k) % n] = arr[i];
    }
    for (int i = 0; i < q; i++) {
        printf("%d\n", arr2[toprint[i]]);
    }
}
I enrolled in the Learning How to Learn course.
I am practising the Build your first network tutorial. When I run ./byfn.sh up, I get the following error:
Error: error getting endorser client chaincode: endorser client failed to connect to peer0.org1.example.com:7051 failed to create new connection context deadline exceeded.
How to solve this?
Try removing the previous Docker containers (I have mentioned the command below) and then restart the network.
$ docker rm -f $(docker ps -aq) && docker rmi -f $(docker images | grep dev | awk '{print $3}') && docker volume prune
Try re-installing platform-specific binaries. Under the fabric-samples, run the following command:
$ curl -sSL | bash -s 1.2.0
It seems like there is an old configuration that is conflicting with your command. First run ./byfn.sh down and then bring your network up using ./byfn.sh up. This should help.
I was facing the same issue. I tried many solutions but it didn't work. What finally worked for me is updating docker. I executed the below command and restarted the network and the error was solved.
$ sudo apt-get install docker-engine
Okay, so I was trying some solutions and found out the problem was with peer.listenAddress. Setting it to the right address solved the problem.
Check for the peer.listenAddress in your configuration files. I was getting this error because I had set a wrong address, and after I set the right address, the error was solved.
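For reference, that setting lives under the peer section of the peer's core.yaml. A sketch with example values (hostnames and ports here are illustrative; adjust them to your own network layout):

```yaml
# Hypothetical excerpt of core.yaml -- values are examples only.
peer:
  id: peer0.org1.example.com
  # Interface and port the peer binds to; this must line up with the
  # address the CLI and other nodes use to reach the peer (7051 above).
  listenAddress: 0.0.0.0:7051
  address: peer0.org1.example.com:7051
```

It can also be overridden with the CORE_PEER_LISTENADDRESS environment variable in the docker-compose files.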
The behavior of GameObjects is controlled by the Components that are attached to them. Although Unity's built-in Components can be very versatile, you will soon find you need to go beyond what they can provide to implement your own gameplay features. Unity allows you to create your own Components using scripts. These allow you to trigger game events, modify Component properties over time and respond to user input in any way you like.
Unity supports the C# programming language natively. C# (pronounced C-sharp) is an industry-standard language similar to Java or C++.
In addition to this, many other .NET languages can be used with Unity if they can compile a compatible DLL - see here for further details.
Learning the art of programming and the use of these particular languages is beyond the scope of this introduction. However, there are many books, tutorials and other resources for learning how to program with Unity. See the Learning section of our website for further details.
Unlike most other assets, scripts are usually created within Unity directly. You can create a new script from the Create menu at the top left of the Project panel or by selecting Assets > Create > C# Script from the main menu.
The new script will be created in whichever folder you have selected in the Project panel. The new script file’s name will be selected, prompting you to enter a new name.
It is a good idea to enter the name of the new script at this point rather than editing it later. The name that you enter will be used to create the initial text inside the file, as described below.
When you double-click a script Asset in Unity, it will be opened in a text editor. By default, Unity will use Visual Studio, but you can select any editor you like from the External Tools panel in Unity’s preferences (go to Unity > Preferences).
The initial contents of the file will look something like this:
using UnityEngine;
using System.Collections;

public class MainPlayer : MonoBehaviour
{
    // Use this for initialization
    void Start ()
    {
    }

    // Update is called once per frame
    void Update ()
    {
    }
}
A script makes its connection with the internal workings of Unity by implementing a class which derives from the built-in class called MonoBehaviour. You can think of a class as a kind of blueprint for creating a new Component type that can be attached to GameObjects. Each time you attach a script component to a GameObject, it creates a new instance of the object defined by the blueprint. The name of the class is taken from the name you supplied when the file was created. The class name and file name must be the same to enable the script component to be attached to a GameObject.
The main things to note, however, are the two functions defined inside the class. The Update function is the place to put code that will handle the frame update for the GameObject: movement, triggering actions and responding to user input, basically anything that needs to be handled over time during gameplay. To enable the Update function to do its work, it is often useful to set up variables, read preferences and make connections with other GameObjects before any game action takes place. The Start function will be called by Unity before gameplay begins (ie, before the Update function is called for the first time) and is an ideal place to do any initialization.
Note to experienced programmers: you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.
As noted above, a script only defines a blueprint for a Component and so none of its code will be activated until an instance of the script is attached to a GameObject. You can attach a script by dragging the script asset to a GameObject in the hierarchy panel or to the inspector of the GameObject that is currently selected. There is also a Scripts submenu on the Component menu which will contain all the scripts available in the project, including those you have created yourself. The script instance looks much like any other Component in the Inspector:
Once attached, the script will start working when you press Play and run the game. You can check this by adding the following code in the Start function:
// Use this for initialization
void Start ()
{
    Debug.Log("I am alive!");
}
Debug.Log is a simple command that just prints a message to Unity's console output. If you press Play now, you should see the message at the bottom of the main Unity editor window and in the Console window (menu: Window > General > Console).
2018–03–19 Page amended
MonoDevelop replaced by Visual Studio from 2018.1
Today, Red Hat Linux 9 has been “officially” released to the masses via the FTP servers, and we host here a mini-interview with Matt Wilson, Manager, Base Operating Systems at Red Hat, Inc. 1. What is, in your opinion, the most important new feature or update found on Red Hat Linux 9 that could 'push' users to upgrade?
“Though if application writers followed the guidelines provided by the LSB, you would not have dependency problems”
I am finding it tough to believe that the reason for RPM dependency hell is that developers don’t follow LSB standards. I mean either this fact is a closely guarded secret or it implies that the Linux developers are not competent enough.
I thought that the reason for dependency hell with RPM is that it does not intuitively look for dependencies.
Anyhoo, as far as usability goes… RPM will never top apt-get, or portage for that matter.
/troll off
The man's answers sounded like he was responding to a lawyer's probing. The questions are interesting, but the answers are neither informative nor useful. This is very poor PR.
Anyways, Redhat 9 is more like Redhat 8.00001. You can barely notice any difference between 8 and 9. In fact, coming from 8, Redhat 9 is totally anti-climatic. Redhat would have to be the one catching up now, because the new Mandrake and Suse are both much better than Redhat 9. On the desktop anyhow.
> The questions are interesting, but but the answers are neither informative nor useful.
Unfortunately, I will have to agree… Normally Matt is quite verbose, from what I gather from their mailing lists, but for this interview he seemed not very willing to talk about future plans…
Maybe it is a policy at RH to not talk too much, dunno…
I wanted to put a tip here. I’d been waiting for Linux distros to support the Intel 845 chipset for months. It seemed to me that this was the logical next set in the tradition of the 810 and 815. It took a long time though.
Red Hat 9 does support it. However, watch out for one thing. I don't know if this is due specifically to the chipset or the Compaq 7550 monitor I am using. At any rate, you install RH 9 and all seems to go well and, then, at reboot, the default bootloader, grub, hangs and all comes to a halt. If that happens to you under these conditions, during installation, choose LILO as your bootloader instead of Grub and all will be well.
Uhh no, the differences are very obvious. You just have to use it for more than a minute or two to see them. Bluecurve is more fluid, the menus look better. It’s faster, you can put it under a heavy load and still function. There’s a new kernel with a TON of new drivers (including my AGP adapter that isn’t supported by Linus’s 2.4.20). Use it for a few days, and you’ll see why as well.
Dependency hell from RPMs is a load of Bull$h!t. I have compiled software from source code and on occasion (not very often) the source code will not compile because of dependency problems. Linux does not care what type of install you do; it is just that some software depends on other software in order to function properly…
apt-get does a pretty good job of resolving dependencies and finds/downloads/installs the dependent packages, and gcc will make a list of library files that are needed in order for the source code to successfully compile/make/install; one is more automated and the other just leaves it up to you to find and install the needed files…
Anyhoo, as far as usability goes… RPM will never top apt-get, or portage for that matter.
Of course, and dpkg will never top urpmi.
Please don’t compare rpm to apt, they are not aimed at the same functionality. One of the issues is that RH does not have a similar tool (as Mandrake does in urpmi).
I am so disappointed! This release is nothing like I hoped it would be. It's sad that there is so little new stuff. Thank God that Suse is releasing 8.2 soon. At least they have made an effort to give something new. Redhat looks nice, but from a home user perspective I must say that nothing much has changed. No hotplug support, terrible scanner support, XSane 0.89 while Mandrake 9.1 has chosen 0.90 with updated libs so more scanners are covered.
I had high hopes for this release, but I can promise that Redhat has lost me! I will no longer go for Redhat software. Create something new and I will think about it.
Wishes for a new release.
Graphical boot, instead of the boring old text-based one. Suse 8.2 has done this. Much better support for HW. Hotplug support. MPlayer included. There should be no reason why this is a problem for Redhat when a lot of other distributors are including it.
I agree with the comment above, it should be Redhat 8.000001. This release sucks.
So then add apt-get to your Redhat install. I did this with the beta (Phoebe) and it worked great!
Go here and follow the directions.
If for some reason the above link's not interpreted correctly by your browser, here it is in long form:
eugenia, ur so 1337!
No, I am just Greek. That’s the real word. 😉
Redhat is much better than Mandrake IMHO.
Redhat uses Nautilus to access samba shares. Perfect. I do not know what mounting shares is anymore. GNOME-VFS for you there. Creating Samba shares is a breeze. And so is setting permissions on them. I do not care much for a 'graphical boot'. Nothing in it for me. Nice graphical touch, and Redhat could do it in a flash if they wanted to. I prefer knowing exactly what the computer is doing at boot time to enable me to find problems if there are any.
Do people here not care about NPTL???
Why should Redhat do a radical UI change with every release. It is based on GNOME2.2 and KDE3.1 which are point releases. Redhat has done a sterling job of polishing up the menus. They are very Intuitive now.
Redhat’s hardware detection is the best. Period.
MPlayer might have legal issues anyway. Better safe than sorry if you are Redhat. Too large and too nice a target.
The biggest omission IMHO is a way to uninstall third party rpms. But they are working on it. Anyway stop complaining about Redhat. They are not a monopoly. If Mandrake does it for you, then good, you do not need Redhat. You probably do benefit indirectly from it though. You will be thankful for NPTL soon enough. And for all those kernel developments.
>Redhat uses Nautilus to access samba shares. Perfect.
Too bad that it doesn’t work for me though. I have already filled two seperate bug reports on the Red Hat bugzilla for two different bugs about Samba/Nautilus on RHL9…
Nothing more than the usual pathetic PR lingo, Bit Torrent is doing it’s job as I type this and can wait to see “all new things”. Besides total mmedia nakedness I was disappointed how slow RH 8.0 was. Anyway, I’m starting to miss the topic of this post.
Will you please stop plugging your review every time we have a Red Hat story over here? YES, we know about the review, we already linked it, it was linked a number of times from the forum, it was good, now get over it.
>Though if application writers followed the guidelines provided by the LSB,
AHA. And how to write LSB applications? Maybe not that hard, but compiling them? That's harder. I tried with a few of our custom apps. Downloaded lsbdev-base and lsbdev-cc. Now, our app compiles fine on everything from Solaris 2.6 to Redhat 6.1 to RHL 9. But LSB? Nope. LSB didn't specify the SunRPC headers/structures we needed. It also failed on an fcntl call using F_LOCK. On the apps that did compile, lsbcc didn't make correct LSB binaries on RHL 9:
/opt/lsbappchk/bin/lsbappchk for LSB Specification 1.3.0
Checking binary dorigen
Incorrect program interpreter: /lib/ld-linux.so.2
Header[ 1] PT_INTERP Failed
Found wrong intepreter in .interp section: /lib/ld-linux.so.2 instead of: /lib/ld-lsb.so.1
Section .eh_frame: sh_flags is wrong. expecting 3, got 2
Symbol __ctype_b_loc used, but not part of LSB
Too bad. I lost my faith in LSB for this.
—
NOS@Utel.no
Custom RHL 9 RPMs at:
I think that RHL 9 has several improvements that I like very much. But I also think that they should wait for a couple of weeks and then add the GNOME fixes, the new Gaim/GTK2, add their ACL and EA support (as they already have the patches), and they could also improve OpenOffice a little bit.
I would just wait two or three weeks more and then deliver a more complete, homogeneous and bug-free product. So, I think that I will install Ximian Desktop to solve these problems.
I like the Open Source politics of RH. If anyone demands patents issues to any linux distro, RH will not have problems with that. And I think that they motivate the Open Source standarization.
I like these things: just because you want NTFS, QuickTime, Media Player, DVD, or MP3s, you don't need to have something that could be illegal. But I would offer an alternative selling must-be-closed software and then pay a fee for that. It must be cheaper than Windows.
I think that RH could do it, just because it is big. And, if you want some Open Source programs that could be illegal, do it at your own risk.
Anyway, I think that all Open Source companies are great, and RH is one of them. Very wise decisions, but it could be better.
Say what? My USB keyboard, mouse, and camera all seem to tell me otherwise.
Could you elaborate on the 2 bugs you filed ref: Gnome & Samba? Are they asthetic or show-stoppers?
“I just wait two or three weeks more and then deliver a more complete, homogeneus and bug-free product.”
In software development, you can always wait two more weeks! Then you never release anything! Sure latest fix for gnome will be nice but then there will be KDE or Gaim or kernel or whatever fix that makes you wait two more weeks. Oh, and by then, why not wait two more weeks for the next major release of Gnome!
I think the Redhat Network is here to precisely deliver the incremental bug fixes.
>Are they asthetic or show-stoppers?
Show stoppers. They used to kill Nautilus, then the RH guy fixed the crash, but he was still not able to fix and mount my samba share on a WinXP PRO machine (macosx mounts it just fine). Search the red hat bugzilla for the two bugs submitted by me.
Too bad that it (SMB browsing via Nautilus) doesn’t work for me though. I have already filled two seperate bug reports on the Red Hat bugzilla for two different bugs about Samba/Nautilus on RHL9…
What are your problems?
Anyway, SMB browsing on the file manager is IMHO the “right” approach. Tell people “you can browse your local files with the file manager but you cannot browse your network with it” is bad. Hope it’s also default in KDE.
Funny that. It always worked for me. No offense, but maybe some of your settings were wrong. But I like Redhat 9.
And, it found my USB mouse before without problems. With MDK it used to jump all over the place. Although it is not a problem now that I got an optical PS/2 mouse. But even Windows XP had trouble with that when installing, although it worked fine when done. What we need is a proper sound standard for Linux, which should encourage driver writers to write drivers for Linux.
>No offense, but maybe some of your settings were wrong.
Are you kidding me? What kind of “settings” do I need to mount a samba share on a winxp machine on my network?
And if you want to know.. Lindows 3.0 can mount that very same share "just like that" under Konqueror (and MacOSX can too), while both Red Hat 9 and SuSE 8.2 can not, via either Konqueror or Nautilus. The Nautilus/Konqueror bug seems to be a samba backend bug.
Search the bugzilla if you want to read the bug reports, don’t ask me to copy/paste here a page of full explanations. It is not the right place. Go to Red Hat’s bugzilla and search through my email address and then reply there regarding this.
I should probably clarify the comment from Matt about the LSB….
At the moment, the LSB provides some recommendations that you can use to build portable packages. Unfortunately, because there are no standards for package metadata, that means you can only depend on interfaces in the LSB. Because every app that’s useful needs more interfaces than just that, you have to statically link it all. Because that sucks, nobody does it.
Yes, the lsbcc program isn't perfect, we know that; I'm wrestling with it at the moment in fact to try and get it to build a hello world GTK program. The general idea is sound, and the agreement is the important thing, but right now documentation on how to actually make use of this standard is practically non-existent. Writing some (and perhaps some better tools) is on the todo list for us at autopackage HQ.
To say that "there would be no dependency problems if everybody used the LSB" is somewhat incorrect – there would be no dependencies. Doh. RPM is quite clearly designed to do dependencies in a very flexible manner, so Redhat obviously do believe they are useful. The number of CDs needed if all the packages were strictly LSB compliant would probably more than quadruple, and that's being conservative.
Finally, the reason Redhat doesn't ship with apt4rpm is QA issues: their customers tend to shout at them if things break, that is what they are there for, and apt-getting random packages from the net tends to break things. C'est la vie.
Oh…. I don’t believe in user space VFS systems anyway – they work great until the moment you try and drag a file from a SMB view into say TextMaker, at which point it barfs because TextMaker doesn’t use gnome-vfs (or kword, or mozilla, or….)
They’d be better off doing proper samba integration at the kernel VFS layer, but that’s a bit harder + they get the gnome-vfs stuff for free.
When did people start looking at updates as merely skin deep?
This is NATIVE POSIX THREAD support; if that is not enough reason to upgrade, I don't know what is.
>>eugenia, ur so 1337!
>No, I am just Greek. That’s the real word. 😉
So, Gr33k then?
In software development, you can always wait two more weeks…
Of course, but RH's strategy for the desktop is mainly GNOME + Mozilla, and they participate in both projects and know the status of both. And both projects released their versions before the RHL 9 announcement. If they wait a couple of weeks for that, they will have the latest version; Mozilla and GNOME will deliver a newer one possibly in 6 months, and RH also…
Well, the only thing that is true is that this Red Hat version is theoretically disappointing to me. But I have installed the latest beta and I can tell that maybe you can't see it, but you can feel the difference a lot.
Redhat has to make sure it can anticipate the problems its clients might have. Jumping onto the latest and greatest may become somewhat problematic. They rather release what they know and provide the rest as updates. That is why Redhat 8.0 still had the Mozilla 1.0 tree and now the 1.2 tree, completely skipping the 1.1 release. 1.2.1 was the one available when they started beta testing and they stuck to that. It's good for the customers in that they get tried and tested products, but the geeks always want the latest and greatest. But then again, they are the ones who can install their own stuff.
It's a common mistake to compare RPM with APT. RPM is at the same level as dpkg. The rpm in Red Hat 9 will suggest dependency resolutions but that isn't its job. Multiple tools sit on top of RPM to do that (apt4rpm, Red Carpet, up2date, etc.).
They can also test with Mozilla Beta (as they do with xfree)… but you’re right.
I almost say that I like the RH policies, but I think that RH desktop is a little conservative, mostly because they have a lot of competition of Mandrake and SuSE, but I agree with them in General.
This evades the question of DVD playback.
No license is required to play DVDs on a linux computer. DVD players such as Ogle and Xine are GPL.
And no, it is not a violation of the DMCA to employ DeCSS to watch media you have purchased or rented on hardware that you own.
Otherwise, RH9 sounds great!
[in the U.S.]
That’s the fact, not what I ‘think’ or ‘feel’. I asked a lawyer without trying to influence her. She said,
“Ogle & Xine DO violate the DMCA since they circumvent a copy-protection measure.. as did Dimitri’s pdf-decryption software.”
I AGREE WITH YOU: THIS STINKS! But
Companies can be held liable for assisting/enabling felonious conduct, so RedHat is doing the right thing.
Otherwise they could distribute DVD & MP3 playback, AA font hinting, OS-X icons, etc — you get the idea. Suse & Mandrake just aren’t viable target$ for a law$uit yet.
Remember: just because you can get transcode/dvd-rip (from plf/freshrpms/etc) does NOT make it legal to do so..

I tried a generic LCD display, and all I got was a bright white display plus a fringe of red at the edges. Nice.
It’s funny that last time I tried Red Hat it did something quite similar, but ONLY after that aggravating package selection routine. Red Hat is well versed in creating anger and hatred. Don’t piss me off immediately, no, that’d be too easy. Wait until I’ve wasted an hour poring over thousands of duplicate programs and command line crap I’ll never use, and THEN spring the trap. All too clever.
Linux propagates only because misery loves company.
After installation, there is no power management whatsoever. Zip. Nada. My BIOS is an ancient APM unit, which is nice because Red Hat still doesn’t support the quite-old-now ACPI standard. All I want to do is dim my laptop’s standard XGA-resolution LCD display, but that’s beyond the abilities of Red Hat Linux 9. I suspect that power management features do not exist for Linux, but who can be sure? With creative program naming as exhibited by grip, PAN and less, plus thousands of other unintuitively-named specimens, one can never tell simply by name what a program is to be used for. The descriptions that the installer provides are generally useless unless you have previously experienced a Red Hat installation and suffered the default installation options.
Carefully note that USB devices, like portable MP3 players and USB drives, and cameras too, can’t be accessed reliably since the device namespace is hooped by design. Unplug, and your device loses its mounted position so that the next time you plug it in, any links to that device will need to be updated MANUALLY – unless, of course, you only have one USB device. WTF.
For those still waiting for a central Control Panel-like app, your search is not yet done. Obviously proponents of decentralized controls will be pleased, but for the other 99% of computer users this is a major failing.
Side note: all fonts are – well, my text looks smeared with vaseline, so I guess you'd call that "anti-aliased." How obnoxious. The good news is that this AA doesn't intrude on Evolution, which is the best email app I've ever used. Outlook is nothing compared with it.
Web browsing: Mozilla has some very unique middle-click functionality – sends me to the default Red Hat Home Page, despite the fact that I’ve changed the home page to the Register. Why? I’ve set the middle-click to open new tabs when I middle-click on links, but when I middle-click anywhere else I go to the useless Red Hat page cached on my disk. Am I supposed to tweak it? Replace this file? WTF? In Opera (or IE, for that matter) I can scroll smoothly when I middle-click on an empty space. Even Mozilla on Win32 behaves semi-appropriately. Since it takes ten seconds to load this sucker, I won’t be doing much with Mozilla anyways.. what a pig.
I dare any Red Hat user to make the taskbar smaller, to make more usable space. Why is so much room devoted to a few icons? Who knows.
The “Preferences,” “System Settings,” and “System Tools” should be consolidated into one bloody menu, but that’s darned near impossible to do. Why didn’t Red Hat consolidate this stuff into a Control Panel? No, that’d make it too easy, and therefore useful.
Side note: click on "Control Center" in the "Preferences" menu, and you will get a window holding many of the settings, including an icon labeled "Control Center." Nice! You can finally access the Control Center! Except, when you click it, you'll get kicked into a duplicate window with the same old crap. Welcome to the magic of recursive linking at its best. The least somebody could have done is to LABEL THE DAMNED WINDOW "CONTROL CENTER" to at least provide a clue that you have already found the fscking thing, instead of endlessly hunting for an application which apparently doesn't exist. Either rip out the icon for "Control Center," rename it to "Preferences," or INCLUDE the fscking application called "Control Center." Honestly, how hard would that be? I'll bet it's plenty difficult.
Sorry dude, this falls under fair use, which supercedes any anti-circumvention legislation. It is my understanding that this is an exception stated in the wording of the DMCA, as it can be conclusively proven that the tool (decss) can be used for purposes other than breaking copyright (watching legitimately acquired media).
See section 1201 of the DMCA (excerpt):
… (this is the critical bit)
`(c) OTHER RIGHTS, ETC., NOT AFFECTED- (1) Nothing in this section shall affect rights, remedies, limitations, or defenses to copyright infringement, including fair use, under this title.
Here are some other links. This stuff is fuzzy law since it hasn't been conclusively decided in court. It isn't worth the risk to a company like RedHat.
I won't get into the hierarchy of law (constitution, statutes, USC). But the MPAA/RIAA/DVD cartel have the fund$ to nullify our rights.
Here’s the timeline of events:
See what Harvard Law says:
That’s why 2600 gave up:…
And Jon J. is about to get re-tried:
To you and me *remember I agree w/you* this is a Bad Law(tm). It IS fair use. But the court$ & several congressmen have been bought..
Yeah, you must have an old Omnibook. They barely work in Windows too so quit complaining.
>It’s funny that last time I tried Red Hat it did something quite similar, but ONLY after that aggravating package selection routine.
Click, click, click, go. I can see where you're becoming frustrated.
>Red Hat is well versed in creating anger and hatred.
Huh? No, people without patience are the ones creating hatred and anger.
>Linux propagates only because misery loves company.
Riight, "This application has performed an illegal operation" dialog boxes propagate joy and bliss amongst all.
>After installation, there is no power management whatsoever. Zip. Nada. My BIOS is an ancient APM unit
Yeah, don’t blame the OS for your vendor’s inability to ship a product that conforms to standards. Write a letter to your vendor demanding drivers for the platform or shut the hell up. Whining about it only makes you look bad.
>I suspect that power management features do not exist for Linux, but who can be sure?
Yeah, there’s an educated comment. How much time did you spend researching before you made that claim? Lemme lay it on you in elapsed seconds *0*
>With creative program naming as exhibited by grip, PAN and less, plus thousands of other unintuitively-named specimens, one can never tell simply by name what a program is to be used for.
Right, watch the install process and read the descriptions. You do know how to read, right?
>your device loses its mounted position so that the next time you plug it in, any links to that device will need to be updated MANUALLY
Huh? That’s just not true at all. My Camera is ALWAYS /dev/sda1 and ALWAYS mounts on /mnt/camera.
>Side note: all fonts are – well, my text looks smeared with vaseline, so I guess you’d call that “anti-aliased.” How obnoxious
PFFT, go to preferences and change it then. Click RedHat, then Preferences, then Fonts. Now choose from the many options available until you find a setting that suits you. See, that was easy even for the more simple minded beings.
>Web browsing: Mozilla has some very unique middle-click functionality – sends me to the default Red Hat Home Page, despite the fact that I’ve changed the home page to the Register.
No, it doesn’t. I have mozilla on a dozen systems with tabs turned on on EACH OF THEM, and NONE of them exhibit the behavior you describe.
>Even Mozilla on Win32 behaves semi-appropriately. Since it takes ten seconds to load this sucker,
I use Mozilla on both platforms, I have yet to see any difference. They both perform VERY well.
>I won’t be doing much with Mozilla anyways.. what a pig.
Riight, it’s a PIG. OMG it uses more memory than IE, WAAAAH!
>I dare any Red Hat user to make the taskbar smaller,
Right click in an empty slot, select “properties”. Click the drop down in the center of the dialog box, and choose small or x small.
>to make more usable space. Why is so much room devoted to a few icons? Who knows.
Then change it, see above.
>The “Preferences,” “System Settings,” and “System Tools” should be consolidated
WHY? System Tools: Hrm probably System Tools! System Settings: Oh, I’ll bet Network settings and stuff goes there!. Preferences: That’s where I’d go to change my wallpaper and screensaver!
>Side note: click on “Control Center” in the “Preferences” menu, and you will get a window holding many of the settings, including an icon labeled “Control Center.” Nice! You can finally access the Control Center!
OMFG, are you a damn baby or what?
>Honestly, how hard would that be? I’ll bet it’s plenty difficult.
If it’s a problem, submit a bug report. Did you pay anything for it? I doubt it, you sound like the typical leech that expects everything to be handed to you on a silver platter without having to lift a finger.
I too have issues with my laptop. Not everyone has the time or inclination to google for answers, so
—> Why don’t you send bug-reports to RedHat? They can’t fix it if they don’t know it’s broke. And it takes time to fix things.
USB: Don’t get me started… It’s NOT meant to be a poor man’s hot-swappable interface, or a “just hub-connect everything solution [keyboard/mouse/monitor/cd-burner/external hard disk/scanner]”. More like a SCSI that won’t damage devices when unplugged. Dynamic configuration of USB has been vastly oversold IMHO.
>Dynamic configuration of USB has been vastly oversold IMHO.
What problems have you had? It works perfect for me on XP, and well on FreeBSD. I have a mass storage flash card, mouse, and a graph link for my calculator that all work instantly when I swap them.
Some questions in the interview, in this reader’s opinion, seem to be leading the interviewee.
Take, for instance, question 7: “Why is Red Hat Linux 9 still uses ext3 while more feature-rich filesystems like ReiserFS and XFS are out and about?” It seems to have removed any debate as to the virtues of the various filesystems with regards to RH’s objectives. Questions like “are ReiserFS and XFS more feature-rich?”, “Are these the features we want/need in RHL for our customers?” and “Should RH divert resources from helping develop ext3 to either (or both) ReiserFS and XFS?” are just assumed to be moot. Or does the interviewer assume its readers know this already?
Question 8: “Why isn’t Red Hat working together with NVidia to resolve kernel crashes and bugs that happen very often when running the accelerated Nvidia drivers …?” Is there something the interviewer knows that the readers don’t? Do we know that RedHat ISN’T working with NVidia? If I were to make a guess, I’d even be so bold as to wager that RH DOES indeed work with NVidia, based on the timely release of their latest drivers (1.0-4349) for RH9 *on the same day of RH9’s release*.
Other than those, it’s another very thought-provoking and informative interview from OSNews, in general, and from Ms. Loli-Queru specifically. Kudos!
People expect Redhat to overcome the shortcomings of hardware manufacturers. The hardware manufacturer has the responsibility to either write drivers or release specifications. Preferably the first. The graphics guys are doing this. Next time you build a computer for your buddies, make sure you buy components with Linux drivers. Even if they will be using windows only. Manufacturers should not be supporting this monopoly.
Redhat does not have the time or resources to be writing every driver under the sun, some of which invariably are used by a handful of people. Redhat’s policy is clear on this one: if you do not ship open source drivers (heck, what IP is in drivers?), then they do not get included. Demand drivers from your hardware manufacturer. I do not buy branded computers, and I specify the hardware I want, so I do not have a problem either under Windows or Linux. But stop giving Redhat grief for a problem not of its making.
Yeah, don’t blame the OS for your vendor’s inability to ship a product that conforms to standards. Write a letter to your vendor demanding drivers for the platform or shut the hell up. Whining about it only makes you look bad.
Linux fault tolerance reached!…
You have misquoted the question 8 and you took it out of context..
[quote]Linux fault tolerance reached!…]
Questions
1. If the wheels you buy do not fit on your car, who should you blame.
2. If your antenna won’t work with your tv, who do you run to.
3. If the tape you bought refuses to work with your VCR, who do you run to.
The Answers (if you are reasonable)
1. Your wheel manufacturer. They should adhere to the standard
2. The guy who made the antenna. It should have a standard connection jack to your tv.
3. Your tape manufacturer. He should clearly say whether it is Betamax or not, and should also guarantee it works with whichever vcr.
My guess is MS screwed everything over. They have power to make hardware manufacturers only produce for them, and leave the rest alone. Maybe someone should actually start a company that only sells computers built from fully standard compliant parts. And strongly advertise that. Or maybe I should rush to patent that idea.
What is it with some users that seem to think that more eye-candy and the funkier apps are what make a Linux distro great (or, in this case, worthy of a new major version number)??? I’ve just read the technical review at GuruLabs and find myself excited and warm and fuzzy with this latest offering from RedHat.
I think that RedHat has done a good job of 1) choosing the next greatest technology to introduce (stable, primarilly) in order for Linux to move forward, and 2) integrating them nicely into their distribution. If they hadn’t gone ahead and upgraded to gcc-2.96 when they did, how much longer do you think would it have taken app writers to develop for it? Same goes for gcc-3.2. And now there’s NPTL. I have no doubt in my mind that it is stable and tested enough. That’s how much confidence I have in RH’s engineers.
As for the lack of user-visible apps, what else do you need to add to make it worthy of being called RedHat9? More multimedia apps like Xine and MPlayer? (While I agree that these would be great to have on the distro, I doubt RH has enough resources to allocate for testing and maintaining these apps.) If I were in RH’s shoes, I’d probably wait around some more and see which multimedia app would easily integrate well with their Bluecurve interface before adding it, but it’s certainly not a showstopper nor would it stop me from upping the major version by one.
> My guess is MS screwed everything over. They have power to make hardware manufacturers only produce for them, and leave the rest alone.
I suggest you leave behind the conspiracy theories and get back on topic, as this old article at adequacy.org is not on topic.
> You have misquoted the question 8 and you took it out of context.
Pardon me for not quoting the entire text.
>.
Thanks for the added information, but while you say that there is evidence that the drivers do not work ON SOME Via-based chipsets (I read it from the driver README as well), are you saying, then, that the engineer has mentioned that RedHat, specifically, isn’t helping them on that? On second thoughts, I guess it doesn’t really matter since one can’t expect RedHat to help NVidia support every video subsystem unless the problem were RH-specific, yes?
Although from what I have heard and read, nothing really new, just some bug fixes. As for the LSB standard: at my workplace we develop our software for Linux using the LSB standard and we have no issues with any dependency problems.
IMHO Anti-Aliasing looks horrible in linux <full stop>. It’s just blur.
The only decent KDE that redhat shipped was the one in 7.3 – Although it crashed all the time, at least the fonts were not all blurred to crap.
Mozilla anti-aliased looks _really_ unprofessional.
Why can’t the fonts be like in windows? In windows they look perfect and crisp, without all the bleeding and blur. Anti-Aliasing sucks!
What I tried to ask in the question is if Red Hat can form a relationship with Nvidia to have better and Red Hat-certified drivers for Red Hat. By having such an alliance, Red Hat would also need to test the drivers with their OS on many various systems in their labs, and whenever they find bugs or problems, NVidia will have to fix them.
This way, you get an “rh-certified” driver that would run on more systems, less headaches for the user, and up to date support for the latest OS of the company, no matter what might have changed in the OS itself since the last version.
Nvidia is still the no1 graphics vendor, so such alliances that could better the product, are mandatory IMO.
>Anti-Aliasing sucks!
Have you ever thought to load the gnome and kde preference panels and turn off AA?
It seems as if RedHat is allocating 70% of its resources in developing the core subsystems (ie. kernel, glibc, networking and video subsystems .. IOW, the OS!) and 30% on applications. At least that’s what _I_ think and used in making my decision to use and continue to use RedHat.
Can one imagine the alternatives?! I’d really prefer kernel stability over anything else … and it’s nice that they’re supporting more hardware as well as better networking (think wireless) as well as video (latest ATI drivers) subsystems. RH9 has only been out a week and already there have been 9 upgraded packages: 5 core (krb5, openssl, samba, sendmail, vsftpd) and 3 applications (balsa, eog, evolution). Adding more applications would also mean allocating more people to maintaining these new apps. RH is already trying to cut down on its resource-eating maintenance jobs (ergo the shorter lifespan for its products).
Excellent interview.
Regarding Matt’s answers, *that* is why I happily handed over the $40 to the CompUSA cashier for RH8 and RH9.
They have got it together.
If the COTS box was available first on the RH site (and not at the store) I’d’ve bought from RH’s website.
you surely must be referring to the 4191s.
they were horrible. 2d was crap…you could actually SEE the screen being painted.
i tested this on dual/single intel machines, athlon xp, athlon thunderbird, redhat 8, 7.3, kde/gnome.
kudos to nvidia, because the latest ones with the nice installer fix a lot of stuff.
i’m sure some people will still have some issues, but 90% of mine were cleared up.
i should mention that with the infamous 4191s, people of all distros were having issues. just look in the nvidia forum, where 50,000 and more views were not uncommon on the 4191 thread.
nautilus smb mount of a share on a REDHAT/samba server results in a password prompt at every folder and file for me.
> Have you ever thought to load the gnome and kde preference panels and turn off AA?
Yes, but the problem is that when you’re running an AA’d system and you turn off Anti-Aliasing, the fonts look even WORSE. They are all broken and jagged.
Whereas if you don’t compile Xft et al. at all, you get nice crisp, (dare I say it) Windows-like non-AA’d fonts.
The problem is the latter is quite difficult to do without re-compiling 1/2 your system.
>when you turn off Anti-Aliasing the fonts look even WORSE. They are all broken and jagged.
This is because you still have selected fonts that are meant for AA. In those panels, you simply have to select Helvetica as the font of your choice, and then the fonts will run fine on non-AA.
> This is because you still have selected fonts that are meant for AA. In those panels, you simply have to select Helvetica as the font of your choice, and then the fonts will run fine on non-AA.
I’m confused. Whadda I have to do? (At least for KDE)
I long for the days of non-antialised jagged“>
That screenshot was supposed to depict the nice crisp fonts of RH 7.3. Not screwed up as I accidentally made the link
The interview questions sound like a bad translation:
>Why there was no RandR GUI tool shipped with Red Hat Linux 9’s XFree86 4.3?
ick. Try using english, man!
Either,
On why there was no RandR GUI tool shipped…
or better…
Why was there no RandR….
>Why is Red Hat Linux 9 still uses ext3 while more feature-rich filesystems like ReiserFS and XFS are out and about?
ick. Again, sentence structure.
Why is it that RHL9 still uses ext3..
or
Why is RHL9 still using ext3…
Come on, Show that you have at least a hs Eductation.
Erm… the screenshot you are showing me has no jagged screwed up fonts. The fonts showing there are normal non-AA fonts…
>That screenshot was supposed to depict the nice crisp fonts of RH 7.3. Not screwed up as I accidentally made the link
Ah, ok.
So, in order to get these fonts back, go to the kde control panel/fonts and change the fonts to use the font called: Helvetica.
>Come on, Show that you have at least a hs Eductation.
I do. GREEK High School Education, plus GREEK college.
Go elsewhere to troll now.
Cool.
Can I do that with any other fonts?
Yes, you can select any font you want, but Helvetica is really the best that comes in your system for non-AA. Change its size as well to bring it into a shape that you will like.
> They’d be better off doing proper samba integration at the kernel VFS layer.
I totally agree. All filesystem mounting & vfs stuff should be done in the kernel. There’s no other way that it will work with all apps. IMHO, you should be able to use it with the command line too.
That’s a nice question you asked Eugenia, I think Redhat should really think about it.
Redhat already provides up2date, so why not add an extra feature so that we can modify the source list (i.e. apt-get) to point to another website to download applications like xine, mplayer, etc…
A lot of things should be done at kernel level, I agree. Could I have a comparison of GNOME-VFS and the kernel VFS layer too, though?
I know the GNOME one makes it harder for all apps to access it, but it does seem ideally suited for GUI applications, whereas a kernel level one would be more suited for the back-end stuff IMO. Or maybe, the GNOME one should extend the capabilities of the kernel one. I don’t think people want to just add functionality to the kernel when it could be supplanted in the future. That would then bring about the curse of legacy code to Linux, carrying around weight for compatibility’s sake.
But the reason I had mentioned it, the GNOME-VFS, was that it seems to be put to good use in GNOME. You can now have all these little tools accessible from Nautilus, and you do not have to have one big app to configure all your stuff, unlike some other distro specific tools which are sometimes rather large.
One of my other issues with GUIs in Linux, or rather, KDE is that I have to configure twice to be able to get onto the internet. I have to configure system-wide settings, then configure LISA. This to me is duplicate functionality and is confusing. I do not know if this has changed recently, I now use Gnome. Such things should be avoided. Set proxies once, and that should be enough. Set LAN settings and that should be enough.
Ogle & Xine DO violate the DMCA since they circumvent a copy-protection measure.. as did Dimitri’s pdf-decryption software.
First of all, Ogle and Xine do not violate the DMCA at all. All either of those programs has the power to do is use a library that may or may not violate the DMCA (this is libcss of course, better known as DeCSS).
And Dimitri’s pdf-software was ruled legal in a court of law because its primary purpose is not copyright violation. The same may or may not be true for DeCSS. We know that the program is just fine in its native country (at least so far), but in the U.S. it is as of yet uncontested. Chances are we won’t ever see a big stink over it due to how widespread it has become. Who knows.
The point is this: RedHat is being safe by not including software with questionable legal status. It is much, much better for RedHat to make you install these things with apt-rpm than crossing their fingers and hoping Microsoft doesn’t pull out a patent on NTFS and sue them to hell with it, or the licensors of the MP3 codec suddenly decide that RedHat doesn’t fall under “free”.
-Erwos
for posting to bugzilla and your suggestion about RedHat partnering with NVidia.
Efforts that will improve Linux for everyone.
I have not grabbed the newest release of RedHat yet and do not plan to unless I can find out if Mozilla loads faster on 9 than it does on 8.0. When I first boot into X server and everything is finished loading, if I try to launch the browser right away it takes like 2 minutes ( no joke ) to even open.
After this of course it only takes a few seconds if I were to close it and re-open it. It only happens on the first time after startup. Has anyone else had this problem and is this fixed on version 9?
I seriously doubt that it is my system lagging due to the fact that I am running 8.0 on a p3 733 with 512 MEG of SDRAM.
I have heard great things about the new Mandrake. If I cannot be convinced to switch to RedHat 9 then I think it will be Mandrake for me.
“I seriously doubt that it is my system lagging due to the fact that I am running 8.0 on a p3 733 with 512 MEG of SDRAM.”
edit /etc/sysconfig/harddrives
uncomment 32bit mode, dma, and anything else that looks reasonable.
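For reference, that file is a shell-style config of commented-out options; a rough sketch of the relevant lines (the option names here are from memory and may vary between Red Hat releases, so treat this as illustrative only):

```
# /etc/sysconfig/harddrives -- illustrative sketch only
# Uncomment to enable 32-bit I/O support:
#EIDE_32BIT=3
# Uncomment to enable DMA:
#USE_DMA=1
```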
I don’t know why RedHat doesn’t offer a GUI tool or attempt to figure the settings out for you, it’s one of the areas where I feel they need work. | https://www.osnews.com/story/3219/interview-with-matt-wilson-of-red-hat-inc/ | CC-MAIN-2020-34 | refinedweb | 7,764 | 73.98 |
In today’s Programming Praxis exercise, our goal is to find the first unrepeated character in a string, along with its index. Let’s get started, shall we?
import Data.Maybe
import Data.List
We traverse the list from right to left, keeping a list of characters we’ve already encountered and a list of possible options. When we check a character, we first check if it’s in the list of duplicate characters. If not, we then check the list of options. If the letter in question is there already, we remove it from the list of options and add it to the list of duplicates. Otherwise, we add the current character to the list of options. At the end, we return the first unique element (if any).
unrepeated :: String -> Maybe (Integer, Char)
unrepeated = listToMaybe . snd . foldr f ([], []) . zip [0..] where
    f (i,c) (ds,us) = if elem c ds then (ds, us) else maybe
        (ds, (i,c):us) (\(fi,fc) -> (fc:ds, delete (fi,fc) us)) $
        find ((== c) . snd) us
Some tests to see if everything is working properly:
main :: IO ()
main = do print $ unrepeated "aaabc" == Just (3, 'b')
          print $ unrepeated "aaabbbcddd" == Just (6, 'c')
          print $ unrepeated "aaaebbbcddd" == Just (3, 'e')
          print $ unrepeated "aabbcc" == Nothing
          print $ unrepeated "aba" == Just (1, 'b')
Tags: bonsai, character, code, first, Haskell, interview, kata, praxis, programming, question, unrepeated | https://bonsaicode.wordpress.com/2013/04/30/programming-praxis-first-unrepeated-character-in-a-string/ | CC-MAIN-2016-50 | refinedweb | 227 | 61.87 |
Java switch/case statement syntax
The switch statement in Java provides a convenient method for branching a program based on a number of conditionals. This recipe describes the use of the Java switch statement.
The basic format of a switch statement in Java is:
switch (expression) {
case cond1: code_block_1;
case cond2: code_block_2;
...
case condn: code_block_n;
default: code_block_default;
}
where expression is an integral expression (like int, char, short, or byte, but not long). In each case statement within the switch statement, a comparison is made which is equivalent to if (expression == cond1). If the comparison evaluates to true, the code within the block is executed. The final default: line is analogous to a final else statement.
This arrangement is similar to a cascade of if/else if/else if statements but with one substantial difference. At the end of each code block, an optional break statement alters the flow through the switch statement. Without any break statements, all subsequent code blocks will be executed once a true evaluation is found. To make a switch statement behave just like an if/else if/else if statement, always put break statements at the end of code blocks. However, leaving out break statements can provide a capability very difficult to achieve with if statements.
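To illustrate that point, here is a small, hypothetical example (the class and constant values are mine, not from the recipe below) where every case ends with break, so the switch behaves exactly like an if/else if/else chain:

```java
public class DayType {
    // Each case ends with break, so exactly one branch runs --
    // equivalent to an if/else if/else chain.
    public static String label(int day) {
        String result;
        switch (day) {
            case 0:
            case 6:
                result = "weekend";
                break;
            default:
                result = "weekday";
                break;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(label(6)); // prints "weekend"
        System.out.println(label(3)); // prints "weekday"
    }
}
```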
For example, consider the following code:
public class TestSwitch {
public final static int TITANIUM = 0;
public final static int PLATINUM = 1;
public final static int GOLD = 2;
public final static int SILVER = 3;
public final static int TIN = 4;
public static void main(String[] args) {
System.out.println("Tin -----");
printGift(TIN);
System.out.println("Titanium -----");
printGift(TITANIUM);
}
public static void printGift(int serviceLevel) {
switch(serviceLevel) {
case TITANIUM: case PLATINUM:
System.out.println(" Free toaster");
case GOLD:
System.out.println(" Free stapler");
case SILVER: case TIN:
System.out.println(" Free staple remover");
break;
default:
System.out.println("No gift");
}
}
}
The example demonstrates break usage since any match will cause one or more println commands to output text but will not print the “No gift” line from the default code block. In addition, note that multiple case statements can be placed before each code block. Running this sample code results in the following output:
Tin -----
Free staple remover
Titanium -----
Free toaster
Free stapler
Free staple remover | http://www.tech-recipes.com/rx/668/java-switchcase-statement-syntax/ | crawl-002 | refinedweb | 374 | 54.63 |
semidbm 0.4.0
An alternative to python's dumbdbm
Overview
SemiDBM is an attempt at improving the dumbdbm in the python standard library. It’s a slight improvement in both performance and durability. It can be used anywhere dumbdbm would be appropriate to use, which is basically when you have no other options available. It uses a similar design to dumbdbm which means that it does inherit some of the same problems as dumbdbm, but it also attempts to fix problems in dumbdbm, which makes it only a semi-dumb dbm :) It supports a “dbm like” interface:
import semidbm

db = semidbm.open('testdb', 'c')
db['foo'] = 'bar'
print db['foo']
db.close()

# Then at a later time:
db = semidbm.open('testdb', 'r')
# prints "bar"
print db['foo']
A design goal of semidbm is to remain a pure python dbm. This makes installation easy and allows semidbm to be used on any platform that supports python.
Supported Python Versions
Semidbm supports python 2.6, 2.7, and 3.3.
Official Docs
Read the semidbm docs for more information and how to use semidbm.
Improvements
Below are a list of some of the improvements semidbm makes over dumbdbm.
Single Data File
Instead of an index file and a data file, the index and data have been consolidated into a single file. This single data file is always appended to, data written to the file is never modified.
Data File Compaction
Semidbm uses an append only file format. This has the potential to grow to large sizes as space is never reclaimed. Semidbm addresses this by adding a compact() method that will rewrite the data file to a minimal size.
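The append-only idea is easy to demonstrate with a toy, self-contained sketch (this illustrates the design, and is not semidbm's actual implementation):

```python
import os

class TinyAppendStore(object):
    """Toy append-only key/value file in the spirit of semidbm's design:
    every write appends a line, and compact() rewrites the file so that
    only the latest value per key remains."""

    def __init__(self, path):
        self.path = path
        self.index = {}
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    key, _, value = line.rstrip('\n').partition('\t')
                    self.index[key] = value

    def put(self, key, value):
        # Append-only: old values are never modified in place.
        with open(self.path, 'a') as f:
            f.write('%s\t%s\n' % (key, value))
        self.index[key] = value

    def get(self, key):
        return self.index[key]

    def compact(self):
        # Rewrite the data file to a minimal size.
        tmp = self.path + '.tmp'
        with open(tmp, 'w') as f:
            for key, value in self.index.items():
                f.write('%s\t%s\n' % (key, value))
        os.replace(tmp, self.path)
```

After many overwrites of the same key the file keeps growing; rewriting it shrinks it back down, which is the trade-off semidbm's compact() addresses.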
Limitations
- Not thread safe; can’t be accessed by multiple processes.
- The entire index must fit in memory. This essentially means that all of the keys must fit in memory.
Post feedback and issues on github issues, or check out the latest changes at the github repo.
- Author: James Saryerwinnie
- Keywords: semidbm
- License: BSD
- Categories
- Package Index Owner: jamesls
- DOAP record: semidbm-0.4.0.xml | https://pypi.python.org/pypi/semidbm/0.4.0 | CC-MAIN-2017-43 | refinedweb | 345 | 65.01 |
PENDING
Placeholder for a proposal.
Some notes from discussion w/ jlahoda:
for Java lang scripting, use ClassPath with IDE modules
registration in SFS / Lookup.default & memory leaks, unloading
DoS defense
usages:
editor hints
macros: loops, counters, special editor actions like Find
lexer w/ token ID (e.g. "block comm") vs. category ("comm"),
flyweight tokens store offset in seq, start+len, subtokens
use Quick Search for unusual actions
(where to get code name vs. display name? code completion?)
dynamic abbreviations
editor hints could be run project-wide with a trigger (e.g. distinctive FQN or identifier)
simplified pattern syntax for tree matches, e.g. "($type) getCookie($type.class)"
debugger breakpoints
code completion, e.g. on String's in a certain method or XML elements,
returning matches on prefix + callback for HTML docs
setEmbedding to create nested token stream under some conditions,
e.g. to colorize Java or HTML in Java (string/comment), XML, ...
hyperlink providers
refactoring plugin (FU, rename, delete)
scriptable code folds?
exceptions from script -> OW
New project type, "IDE Scripting Project".
No associated build system; very simple metadata, just a properties file.
Metadata can have display name, list of module deps, maybe more.
Probably no need for deps on other projects;
deps on Java code done better by creating a NB module for that code,
and deps on other script code probably overkill (TBD).
Simple dir containing at least init.* for some scripting extension,
e.g. init.groovy if using Groovy.
Intention is that init.* will be loaded & run at startup if installed.
Other files can be created, which can be loaded eagerly or lazily.
New File offers appropriate script file types.
Might also by default create scratch.* for playing with things.
All NB environmental objects are in a namespace
starting from a global singleton named netbeans
(compare to window or document for JavaScript in a web browser).
An SPI allows required modules to offer various objects inside that namespace.
For example, netbeans.layers would be a low-level object offering r/w access
to the system filesystem;
netbeans.menus would be a higher-level object offering methods
such as addSimpleMenuItem (with menu folder, position, ID, label, callback).
Code completion (if available) knows about these objects.
Code completion for script files also offers whatever Java classes are available
given the declared module deps, so you can use random NB APIs directly.
Of course Java platform APIs are available too.
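To make the proposal concrete, a rough pseudocode sketch of what an init.groovy could look like against this namespace (netbeans.menus and addSimpleMenuItem are the proposed, hypothetical API described above, not a shipped one):

```
// init.groovy -- pseudocode against the proposed (hypothetical) scripting API
netbeans.menus.addSimpleMenuItem(
    'Tools',          // menu folder
    1000,             // position
    'sample-action',  // ID
    'Run Sample',     // label
    { println 'Hello from an IDE scripting project' }  // callback
)
```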
Run File loads and runs one script file.
Run loads and runs init.*.
Should be some context menu item in the editor to eval the selection.
In all cases, stdio is redirected to a "scripting" tab in the Output Window.
Exceptions from script also go to OW, hyperlinked if possible back to script & Java sources.
Project context menu item "Install" (or "Uninstall")
first locates or creates startup.* script (of same extension as project init script)
in NetBeans system filesystem under Scripts/
(so effectively in $userdir/config/Scripts/).
Looks for an existing call to load the project's init script by absolute path,
using the appropriate idiom for the language.
If not found, labeled "Install" and inserts a call;
if found, labeled "Uninstall" and removes it.
In either case, opens startup script in editor
and sets caret to inserted/removed load function location.
User can, if desired, make some other customizations there and save.
Any known load calls are hyperlinked: clicking opens the corresponding project,
if packaged in a project, else just the bare file, if not.
Optional - project context menu item "Package" creates an NBM of the script.
TBD how it is registered in the IDE, and how recipients can choose to customize it.
Or could just create a ZIP of the project,
which could be unpacked and registered by someone else.
(New Project wiz for scripting projects could offer to take such a ZIP,
or a bare script file,
or a URL to either a ZIP or a bare file, to be downloaded somewhere.)
Alternate project recognition: any dir containing a *.nbscript.*
(analogous to *.user.js in GreaseMonkey).
This script would then contain metadata in special comments.
Advantage that a script would then be fully self-contained. | http://wiki.netbeans.org/ScriptableIDE | CC-MAIN-2016-26 | refinedweb | 688 | 66.94 |
Just What Is jLinq?
A while back I had a project that required a lot of sorting and selecting of information that pretty much wasn’t going to change. I started out by writing a couple sorting functions and all was good.
But as any developer knows, the requirements began to change and sorting alone wasn’t good enough. Steadily, the requirements went from sorting by type, to sorting by type or name, then type, name or date…
Sooner or later I had to start including query terms in each of my sorting functions! Quite literally all of my efforts had been wasted. My only thought was “this would be so much easier with LINQ”…
And why not? It was a perfect candidate. And so with some effort, jLinq was quickly born.
Now you may have noticed, this isn’t the first project like this. Some people have put together some neat little query languages, like JSLINQ. What sets jLinq apart is simplicity and extensibility.
A lot of examples of other query languages I’ve found look something along the lines of…
var result = new JSLINQ(records)
    .Where(function(r) { return r.Age < 5 && r.Name == "Jim" });

I'm not picking on JSLINQ, but to me it seems a lot like using the code below...

var results = [];
for (var item in records) {
    if (records[item].Age < 5 && records[item].Name == "Jim") {
        results.push(records[item]);
    }
}

I'm making some unfair assumptions here; don't let my explanation drive you away from JSLINQ. I haven't used it enough to critique how it works. I'm basing this off of code samples I've seen. Regardless, I wanted jLinq to focus on a couple things...

Simplicity

jLinq focuses more on writing your comparisons in advance. For example, consider the command 'greater()'. It is used like the following...

.greater("nameOfField", someValue)
But the command checks all of the following…
- If the field is a number, is the number greater than the value provided?
- If the field is a string, is the string length greater than the value provided?
- If the field is an array, is the element length greater than the value provided?
- If it is anything else, just try (field > value)
The specifics of how those values are checked remain hidden in the command, allowing better abstraction for queries.

Secondly, because queries tend to involve more than just one comparison, jLinq has methods such as 'or()', 'and()' and 'not()' that all work with extended functions automatically.
Extensibility
It's a good day when a programmer realizes they can't make a project the best it can be on their own. Extensibility was built into jLinq from the start. In fact, the entire jLinq core library is built from extension methods! This way developers can create and share their jLinq functions and libraries with others!
jLinq allows you to extend upon the existing jLinq library, extend into separate namespaces on the jLinq library or even create your own jLinq library from scratch (for example, a jLinq/jQuery library is in the works).
Extending jLinq is simple. Consider we wanted to add in a pre-built function for even numbered values.
jLinq.extend({
    name: "isEven",
    type: "query",
    count: 0,
    method: function(query) {
        return ((query.value % 2) == 0);
    }
});
And then accessed within a query like…
var results = jLinq.from(records)
    .isEven("age")
    .select();
Now here is the scary part… that is really about as hard as it gets!
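Under the hood, an extension mechanism like this can be built with surprisingly little code. The sketch below is purely illustrative — it is not jLinq's real implementation, and the names (`makeLibrary`, `commands`) are mine:

```javascript
// Purely illustrative: a tiny query core whose chainable methods are
// generated from registered extension functions (NOT jLinq's source).
function makeLibrary() {
  var commands = {};
  function from(records) {
    var q = { results: records.slice() };
    Object.keys(commands).forEach(function (name) {
      q[name] = function () {
        var args = [].slice.call(arguments);
        // keep only the records for which the extension returns true
        q.results = q.results.filter(function (record) {
          return commands[name].apply(null, [record].concat(args));
        });
        return q; // chainable
      };
    });
    q.select = function () { return q.results; };
    return q;
  }
  return {
    from: from,
    extend: function (name, method) { commands[name] = method; }
  };
}
```

For example, registering an `isEven` extension and chaining it off `from()` then works just like the snippet above.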
I’ll blog more about the full detail of jLinq, but as for now please ask me any questions that are on your mind!
I love your script! It’s awesome!!
but i’m missing a function: the .take() function would be nice 🙂
regards
Dave
April 29, 2009 at 3:15 pm
That is a great idea – I’ll add that to the library.
There is a function called ‘skipTake’ that lets you provide a 0 for the ‘skip’ value, which might tide you over for a bit.
hboss
April 29, 2009 at 5:05 pm
cool, nice to hear that.
i got another question
i’m trying to order my array, but the values are doubles (3.4, 2.43,..), and that isn’t working correctly..
is there a support for it or is the mistake at my code?
Dave
April 30, 2009 at 11:00 am
What does your query look like?
hboss
April 30, 2009 at 11:31 am
My array looks like this:
[{"__type":"UserStat","Direction":"BadgeIn","Person":"x","Today":5.42,"Week":30.93,"WeekLag":0.8,"TotalLag":37.47}, ...]
and my query:
var FilteredUserStats = jLinq.from(UserStats).orderBy("Today").select();
Dave
April 30, 2009 at 1:31 pm
Hmm… more or less, last night I discovered orderBy wasn’t behaving as expected. I had to make some changes to the way it worked.
Unfortunately, I ran out of time to finish, but I should be done by tonight.
If you want to get it working now, try using it like .orderBy({"Today":"a"}) – **BUT** that will only work until the next time you download the library.
Future versions will simply be done like… .orderBy(“Today”, … other field names…)
Originally I wanted to use a format like…
.orderBy({
    "firstField": "a",  // ascending
    "secondField": "d"  // descending
    // etc...
})
but as it turns out, IE doesn’t always enumerate each of the properties in the same order that they were entered. The new format will look like…
.orderBy(
    "firstField",    // ascending is implied
    "-secondField"   // '-' marks it as descending
);
like I said though, this is still in progress so as of tonight we should be all fixed up.
Thanks!!
hboss
April 30, 2009 at 1:58 pm
Awesome!!
thanks you very much!
Btw, i included your library in a blog post of me, as it is awesome to work with!
regards
Dave
Dave
April 30, 2009 at 2:04 pm | https://somewebguy.wordpress.com/2009/04/28/just-what-is-jlinq/ | CC-MAIN-2018-05 | refinedweb | 978 | 74.59 |
Capital Investment - PV
Please provide any assistance towards solving each part of the problem (i.e. what equations or numbers to use, further explanation of the questions being asked, etc.) Thank you.
In recent years, there has been a lot of media coverage about the funding status of pension plans for state employees. In many states, the amount of money invested in employee pension plans is far less than the amount estimated to be needed to pay them the retirement benefits they have been promised. Basically, pension plans work by investing enough money while employees are working so that the money invested, plus the investment income it earns over the years, will be sufficient to pay the workers their retirement incomes once they have retired.
There are many complicated assumptions, estimates, and calculations needed to determine how much money a state should invest in its pension fund each year. One of the most important assumptions is the rate of return the plan's investments will earn in the future. As you have seen in this chapter, the higher the rate of return used to calculate the present value of future cash flows, the lower the present value will be. To determine a pension plan's funded status, actuaries (1) estimate the future cash payments expected to be made to employees, (2) calculate the present value of those cash flows using an assumed rate of return (this present value is the gross liability of the fund), and (3) subtract the amount of money that has been invested from the gross liability calculated in step 2 (this amount is the funded status of the pension plan). Essentially, this is the same as calculating the net present value of an investment. If the plan has less money in its investments than the present value of its estimated future cash flows, it has a net liability and is considered to be underfunded by that amount.
Many states' pension plans have assumed they will earn 8 percent or more on their investments, even though many experts think a more appropriate assumption would be 6.5 percent. As an example, the state of Virginia used an assumed rate of return of 7.5 percent in 2009 but reduced the rate to 7.0 percent in 2011. In 2011, Virginia paid out approximately $3.3 billion in benefits to retirees.
a.) Assume Virginia's annual payments will continue to be $3.3 billion, and that retirees will receive benefits for 20 years on average. Using an assumed rate of return of 8 percent, calculate the liability of the state's pension plan. The liability is the present value of the future cash payments. (Be aware that the real-world calculation for a state's pension plan liability involves many more assumptions than just these two.)
b.) Assume the annual payments will continue to be $3.3 billion, and that retirees will receive benefits for 20 years on average. Using an assumed rate of return of 6 percent, calculate the liability of the state's pension plan.
c.) Reviewing your answers from Requirements a and b, provide an explanation as to why states may wish to assume a higher rate of return on their pension plan's investments than actuaries might recommend.
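Before turning to the solution, note that parts (a) and (b) are present-value-of-an-ordinary-annuity calculations: PV = PMT × (1 − (1 + r)⁻ⁿ) / r. A quick sketch (the function name is mine, not from the solution):

```python
# Present value of an ordinary annuity: PV = PMT * (1 - (1 + r)**-n) / r
def annuity_pv(payment, rate, years):
    return payment * (1 - (1 + rate) ** -years) / rate

pmt = 3.3e9  # $3.3 billion paid to retirees each year
for r in (0.08, 0.06):
    pv = annuity_pv(pmt, r, 20)
    print("rate %.0f%% -> liability $%.1f billion" % (r * 100, pv / 1e9))
```

The higher assumed rate produces the smaller stated liability (roughly $32.4 billion at 8 percent versus $37.9 billion at 6 percent) — which is exactly the incentive asked about in part (c).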
Solution Preview
Attached are 2 pdf documents and an Excel file.
I will show you two ways to calculate the Present Value of the future cash payments. The first method is the long way, done in Excel, to help you understand what is happening. The second method is a shortcut. If you prefer, you may skip to the second method.
First ...
Solution Summary
The expert calculates the present value in two ways--a step-by-step way for each time period, and by using a shortcut formula | https://brainmass.com/business/standard-costing-cost-control-and-measuring-performance/capital-investment-pv-581311 | CC-MAIN-2017-34 | refinedweb | 631 | 60.45 |
One, in other programming languages we don't do it at the application level either. The customer case, however, might be more complex, as I recently learned:
The two lists we want to join might be coming from different databases (or even different database types).
We might not have permission to create this join query on this database.
All this brings us to this point, where we want the application to handle this.
We need to extend the solution to include a piece of code performing the join operation for us. This is done using a Web Service. Fortunately, there is a built-in option to generate the Web Service in the NetWeaver Developer Studio.
The steps in high level
Create a mockup Service Component in Visual Composer
For the detailed flow of creating the web service, follow the steps in: Using Web Services in Visual Composer. I added details for the sake of this specific example, and some screenshots to make it a bit more clear.
Create Service Component skeleton for the Web Service
The relevant part in the documentation: Defining the Interface of the Service Component.
1. Open NetWeaver Developer Studio
2. Open the Visual Composer perspective
3. Create a new model
4. Add a Service Component to the model
5. Right Click => Drill Down
6. Add a Data In element
7. Open Define Data and press View => Hierarchy Tree
8. Add the students Node under the root.
9. Add the fields student_id, student_name in the right panel.
10. Add the students_courses Node under the root
11. Add the fields student_id, course_name
12. Add a Data Out element
13. Open Define Data, add the fields student_id, student_name, course_name
14. Save the model.
15. Go back to the main model, press Redefine Ports for the Service Component. Check all the ports and press OK.
We need to pass the service two lists, with a different set of fields: students and students_courses. To achieve this we defined a cluster, with two sub-nodes, one for each list.
The relevant part in the documentation: Implementing the Web Service.
I only added screenshots for some of the steps to make it a bit more clear.
We start the process with this the Generate WSDL option:
Here is the example Java Bean skeleton created for us:
We only need to implement one method of one class in this case. See the following Java code I added:
import java.util.*; // List, Map, HashMap, ArrayList, ListIterator

public class JoinSEIImplBean {

    public java.util.List<com.sap.vc.sad.Out1> join(com.sap.vc.sad.In1 in1) {
        ArrayList<Out1> res = new ArrayList<Out1>();
        List<InDirin1Students> students = in1.getStudents();
        List<InDirin1StudentCourses> studentCourses = in1.getStudentCourses();
        Map<String, InDirin1Students> studentsMap =
            new HashMap<String, InDirin1Students>();

        // convert the students table to a map for constant-time lookup by student_id
        for (InDirin1Students s : students)
            studentsMap.put(s.getStudentId(), s);

        // go over the students' courses and find matches
        for (ListIterator<InDirin1StudentCourses> it = studentCourses.listIterator();
             it.hasNext(); ) {
            InDirin1StudentCourses sc = it.next();
            InDirin1Students s = studentsMap.get(sc.getStudentId());
            if (s == null) continue;
            Out1 out = new Out1();
            out.setStudentId(sc.getStudentId());
            out.setStudentName(s.getStudentName());
            out.setCourseName(sc.getCourseName());
            res.add(out);
        }
        return res;
    }
}
Consume the new Web Service in our model
The relevant part in the documentation: Consuming the Web Service.
I only added screenshots for some of the steps to make it a bit more clear.
Adding destination for the Web Service looks like this in NWDS:
And in NWA:
Modeling steps:
1. Open the Search Panel and search for the new Web Service
2. Add the service to the model
3. Connect a Grid View to the output port of the service
4. Connect the existing data services to the input port of the Web Service
5. Define the mapping from the Get Students service to the Web Service:
6. Define the mapping from the Get Students Courses service to the Web Service:
That’s it
We can now run our new application with the joined data.
Some notes:
What if we don't want to handle the join in the application, and can't do it on the database where the data originates? In that case we could have an external process copy the data of both sources to one database we can manage. Then we write the join operation on this database. Finally, consume the join result in Visual Composer (e.g. as a JDBC Stored Procedure).
When we need to pass several different lists to a Service in one input port: This port should have a clustered structure. each list could be represented as a different Sub-Node, thus allowing for its own structure.
Performance: Join operations might be costly. It is important we are aware of the Memory / CPU consumption of the specific Join we are implementing, lest we suffer a performance hit.
Related documents:
Handling the Join Operation
Paging records in the UI – Coding our own Web Service to get Row Numbers | https://blogs.sap.com/2014/11/16/handle-the-join-operation-with-a-web-service/ | CC-MAIN-2019-22 | refinedweb | 808 | 58.38 |
On 19.03.07, Joerg Lehmann wrote:
> On 16.03.07, Michael J Gruber wrote:
> > > On 15.03.07, Michael J Gruber wrote:
> > >> Lander venit, vidit, dixit 2007-03-15 15:56:
> > >>
> > >> Oh, I see it's kind of patched in svn already: Now, the default value of
> > >> length is 1, not None, so that things work if you don't specify a
> > >> length. Passing "length=None" still doesn't work, so I guess the
> > >> description should be changed.
> > >
> > > The same has already been reported by Stefan Schenk (see pyx bug
> > > 1632988). Also the manual has been corrected in the svn version.
> >
> > Who's looking at the manual ;)
> >
> > I meant the docstrings for path.path.tangent(),
> > normpath.normpath.tangent() and the _pt variants.
> >
> > >
> > > However, Jörg did not seem to be quite satisfied with the
> > > length=1 solution.
> >
> > I was quite surprised, too. That indeed does scale the tangent in a way
> > which depends on the default unit and the current scale factor. It seems
> > it's done just in order to save one or two if's.
No. The scaling is intended. When a user scales everything -- why not also the
tangent?
> I agree, this was one of the reasons, I didn't like the patch. The
> scaling problem could be solved by using true cms. But the question is
> whether one should really introduce a more or less arbitrary length
> scale here?
I do not see the problem. As I introduced the default value as a
simple number and not a pyx.length, it will be scaled like many other
default lengths in pyx.
The following minimal example scales both the circle and the tangent
by 5:
from pyx import *
unit.set(uscale=5)
p = path.circle(0, 0, 1)
c = canvas.canvas()
c.stroke(p)
c.stroke(p.tangent(1.57))
This is the behaviour I would expect from a useful default length.
The only other possibility I see is to omit the default argument completely.
This would avoid the -- somehow artificial -- introduction of a length scale
here.
Michael.
SHMOP(2) Linux Programmer's Manual SHMOP(2)
shmat, shmdt - System V shared memory operations
#include <sys/types.h>
#include <sys/shm.h>

void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);

On a successful shmat() call the system updates the members of the shmid_ds structure associated with the shared memory segment as follows:

· shm_atime is set to the current time.
· shm_lpid is set to the process-ID of the calling process.
· shm_nattch is incremented by one.
POSIX.1-2001, POSIX.1-2008, SVr4.
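The example program on this page was lost in extraction; as a stand-in, here is a minimal sketch of the attach/detach lifecycle (the helper name shm_roundtrip is mine, and error handling is abbreviated):

```c
/* Minimal sketch of the shmat()/shmdt() lifecycle: create a private
   segment, attach it, write and read a string, detach, and remove it.
   Returns 0 on success, -1 on any failure. */
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int shm_roundtrip(void)
{
    /* Create a private 4096-byte segment */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1)
        return -1;

    /* Let the kernel pick the attach address (shmaddr == NULL) */
    char *addr = shmat(shmid, NULL, 0);
    if (addr == (void *) -1) {
        shmctl(shmid, IPC_RMID, NULL);
        return -1;
    }

    strcpy(addr, "hello");
    int ok = (strcmp(addr, "hello") == 0);

    /* Detach, then mark the segment for destruction */
    if (shmdt(addr) == -1)
        ok = 0;
    if (shmctl(shmid, IPC_RMID, NULL) == -1)
        ok = 0;

    return ok ? 0 : -1;
}
```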
brk(2), mmap(2), shmctl(2), shmget(2), capabilities(7), shm_overview(7), sysvipc(7)
This page is part of release 5.08 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2020-04-11 SHMOP(2)
Pages that refer to this page: ipcrm(1), ipcs(1), lsipc(1), pcp-ipcs(1), ipc(2), prctl(2), shmctl(2), shmget(2), syscalls(2), svipc(7), sysvipc(7) | https://man7.org/linux/man-pages/man2/shmdt.2.html | CC-MAIN-2020-40 | refinedweb | 151 | 65.73 |
I have a need to be able to identify which public DNS servers machines in internal networks are using for queries. To clarify, I do not want IPs or names of internal DNS servers or network devices. I need a scriptable way to identify what public servers they are calling to when queries are forwarded. For instance, I'm on a home network and my router's address is 172.16.1.1 - and it is programmed to used 4.2.2.1 and 4.2.2.2 for name resolution. I need to be able to capture these addresses. Parsing out of ip/ifconfig or logging into routers etc. are not options.
I've tried getting info out of nslookups and dig but I don't seem to be able to get the servers that are actually getting the queries at first before they start recursing. The internal DNS servers get identified but I'm not seeing the next hops.
Any ideas ? Unix or Dos or Windows etc, any solutions welcome.
There is really no way to accomplish your task, because the DNS server can resolve queries in any way it feels is appropriate, and it's a black box to the client. However, it is possible to get close to answering your question.
The trick is to set up a subdomain with a special nameserver whose response to any A query simply echoes the IP address of the host that issued the query. Such a nameserver can be implemented in Perl using the Net::DNS::Nameserver module:
#!/usr/bin/perl
use Net::DNS::Nameserver;
Net::DNS::Nameserver->new(
# w.x.y.z is the IP address of the host where this code is running
LocalAddr => ['w.x.y.z'],
ReplyHandler => sub {
my ($qname, $qclass, $qtype, $peerhost, $query) = @_;
my ($rcode, @ans, @auth, @add) = ('NXDOMAIN');
print "Received query for $qname from $peerhost\n";
if ($qtype eq 'A') {
$rcode = 'NOERROR';
push @ans, Net::DNS::RR->new("$qname 1 $qclass $qtype $peerhost");
}
return ($rcode, \@ans, \@auth, \@add, { aa => 1});
},
)->main_loop;
Next, configure some domain's NS record to point to this special nameserver. I've done this for a domain called resolverid.acrotect.com. (As a service to you, I'll leave the code running there for the next few days.)
Then, issue a few different DNS queries from a client inside your network:
dig +short cachebuster1.resolverid.acrotect.com
dig +short cachebuster2.resolverid.acrotect.com
dig +short cachebuster3.resolverid.acrotect.com
This will cause your nameserver to resolve the queries by contacting the rigged nameserver, which replies with the IP address of the machine that forwarded the query. In some cases, that address is the "public" side of your "private" nameserver, which is what you asked for.
In other cases, the infrastructure is more complicated. For example, if you are configured to use Google DNS servers 8.8.8.8 and 8.8.4.4, you are actually using a geographically distributed pool of nameservers. The Google server that makes the query to the rigged nameserver might not have an IP address anywhere near 8.8.8.8 or 8.8.4.4. However, you would be able to see that it belongs to Google by doing a reverse-IP or WHOIS lookup on the response. (Interestingly, if you do the reverse-IP lookup using Google DNS, you'll see that Google has optimized away the reverse-IP lookup based on the A lookup you just performed. The PTR that it returns will be something.resolverid.acrotect.com. To bust that optimization, you'll have to do dig +trace -x a.b.c.d.)
You need to capture (think tcpdump on unix) the data flowing between the hosts in question and the Internet, looking for queries sent to port 53 (generally, udp but tcp may be used too) and parse the dump's output.
You'll need to run tcpdump either on the gateway system itself, or, if you've got a managed switch, from a port in monitor mode.
I don't believe you can (through clever use of DNS from your client).
The whole point of a having a DNS resolver recurse for you, is that it is going to answer your queries on your behalf. Where it got them from are none of your business (as a mere DNS client).
By posting your answer, you agree to the privacy policy and terms of service.
asked
3 years ago
viewed
953 times
active | http://serverfault.com/questions/398160/how-to-get-public-dns-server-addresses/398163 | CC-MAIN-2015-27 | refinedweb | 770 | 73.37 |
The .NET library is available as a NuGet package here.

Step 1: Install The Client Library

Create a new Command Line Application in Visual Studio 2012 or above.
Install the client library using the Package Manager Console:
Install-Package Datasift.net
Step 2: Create A DataSift Client
With the package installed, now you can write a script to access the API. Firstly we need to create a client object that will access the API for us.
Open Program.cs in your new application. Firstly include the following namespaces at the top of the file:
using DataSift;
using DataSift.Enum;
using DataSift.Streaming;
Next replace the code inside the Program class with the following:
// References we'll need to keep
private static DataSiftStream _stream;
private static string _hash;

static void Main(string[] args)
{
    // Create a new DataSift client
    var client = new DataSiftClient("DATASIFT_USERNAME", "DATASIFT_APIKEY");
}
Step 3: Compile A Filter
In order to stream data from the platform, you need to create a filter in CSDL. You compile this filter using the API and receive a hash that represents the filter.
Add the following code in your Main method:
// Compile filter looking for brand mentions
var csdl = @"interaction.content contains_any ""Calvin Klein, GQ, Adidas"" ";
var compiled = client.Compile(csdl);
_hash = compiled.Data.hash;

Step 4: Connect To The Stream

With the filter compiled, connect to the stream and register the event handlers:

_stream = client.Connect();
_stream.OnConnect += stream_OnConnect;
_stream.OnMessage += stream_OnMessage;
_stream.OnDelete += stream_OnDelete;
_stream.OnDataSiftMessage += stream_OnDataSiftMessage;
_stream.OnClosed += stream_OnClosed;

// Wait for key press before ending example
Console.WriteLine("-- Press any key to exit --");
Console.ReadKey(true);
Then, add the following event handlers to your class:
static void stream_OnConnect()
{
    Console.WriteLine("Connected to DataSift.");
    // Subscribe to stream
    _stream.Subscribe(_hash);
}

static void stream_OnMessage(string hash, dynamic message)
{
    Console.WriteLine("INTERACTION: {0}", message.interaction.content);
}

static void stream_OnDelete(string hash, dynamic message)
{
    // You must delete the interaction to stay compliant
    Console.WriteLine("Deleted: {0}", message.interaction.id);
}

static void stream_OnDataSiftMessage(DataSift.Enum.DataSiftMessageStatus status, string message)
{
    switch (status)
    {
        case DataSiftMessageStatus.Warning:
            Console.WriteLine("WARNING: " + message);
            break;
        case DataSiftMessageStatus.Failure:
            Console.WriteLine("FAILURE: " + message);
            break;
        case DataSiftMessageStatus.Success:
            Console.WriteLine("SUCCESS: " + message);
            break;
    }
}

static void stream_OnClosed()
{
    Console.WriteLine("Connection has been closed.");
}
Step 5: Give It A Whirl
With your script now complete, you can run the example and see data pouring into your console.
Run your application in debug mode by pressing F5.
There's a complete version of the Program.cs file for this example available. To see tagging in action, replace the CSDL in your Main method with the following:
var csdl = @"tag.brand ""Calvin Klein"" { interaction.content contains ""Calvin Klein"" }
tag.brand ""GQ"" { interaction.content contains ""GQ"" }
tag.brand ""Adidas"" { interaction.content contains ""Adidas"" }
return { interaction.content contains_any ""Calvin Klein, GQ, Adidas"" }";
Then within the stream_OnMessage handler, change the code to:
Console.WriteLine("{0}: {1}", message.interaction.tag_tree.brand[0], message.interaction.content);
Run the application again, and you can see the tags assigned to the data (delivered under the interaction.tag_tree property of each item).
Learn More
Congratulations, you can now use the DataSift API to stream live data. To learn more about the platform please take a look at the following resources: | http://dev.datasift.com/docs/products/stream/quick-start/getting-started-net | CC-MAIN-2017-26 | refinedweb | 495 | 54.18 |
Chapter 14: Other recipes
Upgrading
In the "site" page of the administrative interface there is an "upgrade now" button. In case this is not feasible or does not work (for example because of a file locking issue), upgrading web2py manually is very easy.
Simply unzip the latest version of web2py over the old installation.
This will upgrade all the libraries as well as the applications admin, examples, welcome. It will also create a new empty file "NEWINSTALL". Upon restarting, web2py will delete the empty file and package the welcome app into "welcome.w2p" that will be used as the new scaffolding app.
web2py does not upgrade any file in your own applications.
# When you upgrade web2py you should update the app files that are important
# for web2py to function properly:
#
# views/
#     appadmin.html, generic.ics, generic.load, generic.rss, layout.html,
#     generic.json, generic.map, generic.xml, web2py_ajax.html, generic.html,
#     generic.jsonp, generic.pdf
#
# controllers/
#     appadmin.py
#
# static/
#     css/*, images/*, js/*
#
# You can do it with the following bash commands:
# NOTE: Please make a backup of your app first, to make sure you don't break anything
#
# From web2py/applications/
cp -R welcome/static/* YOURAPP/static/
cp welcome/controllers/appadmin.py YOURAPP/controllers/
cp -R welcome/views/* YOURAPP/views/
How to distribute your applications as binaries
It is possible to bundle your app with the web2py binary distribution and distribute them together. The license allows this as long as you make it clear in your app's license that you are bundling with web2py, and add a link to web2py.com.
Here we explain how to do it for Windows:
- Create your app as usual
- Using admin, bytecode compile your app (one click)
- Using admin, pack your app compiled (another click)
- Create a folder "myapp"
- Download a web2py windows binary distribution
- Unzip it in folder "myapp" and start it (two clicks)
- Upload using admin the previously packed and compiled app with the name "init" (one click)
- Create a file "myapp/start.bat" that contains "web2py/web2py.exe"
- Create a file "myapp/license" that contains a license for your app and make sure it states that it is being "distributed with an unmodified copy of web2py from web2py.com"
- Zip the myapp folder into a file "myapp.zip"
- Distribute and/or sell "myapp.zip"
When users will unzip "myapp.zip" and click "run" they will see your app instead of the "welcome" app. There is no requirement on the user side, not even Python pre-installed.
For Mac binaries the process is the same but there is no need for the "bat" file.
Models and controllers in web2py do not contain the usual import statements, which confuses IDEs and their code-completion engines. A module, on the other hand, can carry its own self-test block, which only runs when the file is executed directly:

if __name__ == "__main__":
    ...  # code which tests things

To help an IDE resolve web2py symbols in a controller, you can add at the top:

from db import *

You can also consider importing all models:

if False:
    from gluon import *
    from db import *  # repeat for all models
    from menu import *

The if False block is never executed, but the IDE parses it and learns where the otherwise-undefined symbols come from.
Building a minimalist web2py
Some times we need to deploy web2py in a server with very small memory footprint. In this case we want to strip down web2py to its bare minimum.
An easy way to do it is the following:
- On a production machine, install the full web2py from source
- From inside the main web2py folder run
python scripts/make_min_web2py.py /path/to/minweb2py
- Now copy under "/path/to/minweb2py/applications" the applications you want to deploy
- Deploy "/path/to/minweb2py" to the small footprint server
The script "make_min_web2py.py" builds a minimalist web2py distribution that does not include:
- admin
- examples
- scripts
- rarely used contrib modules
It does include a "welcome" app consisting of a single file to allow testing deployment. Look into this script. At the top it contains a detailed list of what is included and what is ignored. You can easily modify it and tailor to your needs.
Fetching an external URL
Python includes the urllib library for fetching urls:
import urllib
page = urllib.urlopen('').read()
This is often fine, but the
urllib module does not work on the Google App Engine. Google provides a different API for downloading URLs that works on GAE only. In order to make your code portable, web2py includes a
fetch function that works on GAE as well as other Python installations:, T)
The second argument (T) must be passed to allow internationalization for the output.
Geocoding
If you need to convert an address (for example: "243 S Wabash Ave, Chicago, IL, USA") into geographical coordinates (latitude and longitude), web2py provides a function to do so.
from gluon.tools import geocode
address = '243 S Wabash Ave, Chicago, IL, USA'
(latitude, longitude) = geocode(address)
The function geocode requires a network connection and it connects to the Google geocoding service. The function returns (0,0) in case of failure. Notice that the Google geocoding service caps the number of requests, so you should check their service agreement. The geocode function is built on top of the fetch function and thus it works on GAE.
Pagination
This recipe is a useful trick to minimize database access in case of pagination, e.g., when you need to display a list of rows from a database but you want to distribute the rows over multiple pages.
Start by creating a primes application that stores the first 1000 prime numbers in a database.
Here is the model
db.py:

db = DAL('sqlite://primes.db')
db.define_table('prime', Field('value', 'integer'))

def isprime(p):
    for i in range(2, p):
        if p % i == 0:
            return False
    return True

if len(db().select(db.prime.id)) == 0:
    p = 2
    for i in range(1000):
        while not isprime(p):
            p += 1
        db.prime.insert(value=p)
        p += 1

In a controller, select only one page of records at a time using the limitby argument of select, for example 20 rows per page:

def list_items():
    if len(request.args):
        page = int(request.args[0])
    else:
        page = 0
    items_per_page = 20
    limitby = (page * items_per_page, (page + 1) * items_per_page + 1)
    rows = db().select(db.prime.ALL, limitby=limitby)
    return dict(rows=rows, page=page, items_per_page=items_per_page)

Notice that one extra row beyond items_per_page is selected, so the view can tell whether a next page exists. This way, each request fetches only the rows to be displayed.

httpserver.log and the log file format

The web2py built-in web server logs all requests to a file called "httpserver.log" in the root web2py directory. Each line corresponds to one request and has the format: ip, timestamp, method, path, protocol, status, time_taken, where:

- status is one of the HTTP status codes [status]
- time_taken is the amount of time the server took to process the request, in seconds, not including upload/download time.
In the appliances repository [appliances], you will find an appliance for log analysis.
This logging is disabled by default when using mod_wsgi since it would be the same as the Apache log.
Populating database with dummy data
For testing purposes, it is convenient to be able to populate database tables with dummy data. web2py includes a Bayesian classifier already trained to generate dummy but readable text for this purpose.
Here is the simplest way to use it:
from gluon.contrib.populate import populate
populate(db.mytable, 100)
It will insert 100 dummy records into db.mytable. It will try to do intelligently by generating short text for string fields, longer text for text fields, integers, doubles, dates, datetimes, times, booleans, etc. for the corresponding fields. It will try to respect requirements imposed by validators. For fields containing the word "name" it will try to generate dummy names. For reference fields it will generate valid references.
If you have two tables (A and B) where B references A, make sure to populate A first and B second.
Because population is done in a transaction, do not attempt to populate too many records at once, particularly if references are involved. Instead, populate 100 at a time, commit, loop.
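The populate-commit loop just described can be sketched with a generic helper. This is illustrative only — it is not part of web2py; with web2py the insert step would be populate(db.mytable, n) and the commit step db.commit():

```python
# Illustrative batching helper (not part of web2py): insert dummy rows
# in chunks, committing after each chunk, as the text recommends.
def populate_in_batches(insert, commit, total, batch=100):
    done = 0
    while done < total:
        n = min(batch, total - done)
        for _ in range(n):
            insert()   # e.g. populate(db.mytable, 1) in web2py
        commit()       # e.g. db.commit() in web2py
        done += n
    return done
```

Remember to run the loop for table A before table B when B references A, so that the reference fields can point at existing records.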
Accepting credit card payments
There are multiple ways to accept credit card payments online. web2py provides specific APIs for some of the most popular and practical ones:
- Google Wallet [googlewallet]
- PayPal [paypal]
- Stripe.com [stripe]
- Authorize.net [authorizenet]
- DowCommerce [dowcommerce]
The first two mechanisms above delegate the process of authenticating the payee to an external service. While this is the best solution for security (your app does not handle any credit card information at all), it makes the process cumbersome (the user must log in twice: once with your app, and once with Google) and does not allow your app to handle recurrent payments in an automated way.
There are times when you need more control and you want to generate the entry form for the credit card info yourself, and then programmatically ask the processor to transfer money from the credit card to your account.
For this reason web2py provides integration out of the box with Stripe, Authorize.net (the module was developed by John Conde and slightly modified) and DowCommerce. Stripe is the simplest to use and also the cheapest for low volumes of transactions (they charge no fixed cost, but about 3% per transaction). Authorize.net is better for high volumes (it has a fixed yearly cost plus a lower cost per transaction).
Mind that in the case of Stripe and Authorize.net your program will be accepting credit card information. You do not have to store this information, and we advise you not to because of the legal requirements involved (check with Visa or MasterCard), but there are times when you may want to store the information for recurrent payments or to reproduce the Amazon one-click pay button.
Google Wallet
The simplest way to use Google Wallet (Level 1) consists of embedding a button on your page that, when clicked, redirects your visitor to a payment page provided by Google.
First of all you need to register a Google Merchant Account at the url:
You will need to provide Google with your bank information. Google will assign you a
merchant_id and a
merchant_key (do not confuse them, keep them secret).
Then you simply need to create the following code in your view:
{{from gluon.contrib.google_wallet import button}} {{=button(merchant_id="123456789012345", products=[dict(name="shoes", quantity=1, price=23.5, currency='USD', description="running shoes black")])}}
When a visitor clicks on the button, the visitor will be redirected to the Google page where he/she can pay for the items. Here products is a list of products and each product is a dictionary of parameters that you want to pass describing your items (name, quantity, price, currency, description, and other optional ones which you can find described in the Google Wallet documentation).
If you choose to use this mechanism, you may want to generate the values passed to the button programmatically based on your inventory and the visitor shopping chart.
All the tax and shipping information will be handled on the Google side. Same for accounting information. By default your application is not notified that the transaction has been completed therefore you will have to visit your Google Merchant site to see which products have been purchased and paid for, and which products you need to ship to your buyers there. Google will also send you an email with the information.
If you want a tighter integration, you have to use the Level 2 notification API. In that case you can pass more information to Google and Google will call your API to notify about purchases. This allows you to keep accounting information within your application but it requires you expose web services that can talk to Google Wallet.
This is a considerable more difficult problem but such API has already been implemented and it is available as plugin from
You can find the documentation of the plugin in the plugin itself.
Paypal
Paypal integration is not described here but you can find more information about it at this resource:
Stripe.com
This is probably one of the easiest way and flexible ways to accept credit card payments.
You need to register with Stripe.com and that is a very easy process, in fact Stripe will assign you an API key to try even before you create any credentials.
Once you have the API key you can accept credit cards with the following code:
from gluon.contrib.stripe import Stripe stripe = Stripe(api_key) d = stripe.charge(amount=100, currency='usd', card_number='4242424242424242', card_exp_month='5', card_exp_year='2012', card_cvc_check='123', description='the usual black shoes') if d.get('paid', False): # payment accepted elif: # error is in d.get('error', 'unknown')
The response,
d, is a dictionary which you can explore yourself. The card number used in the example is a sandbox and it will always succeed. Each transaction is associated to a transaction id stored in
d['id'].
Stripe also allows you to verify a transaction at a later time:
d = Stripe(key).check(d['id'])
and refund a transaction:
r = Stripe(key).refund(d['id']) if r.get('refunded', False): # refund was successful elif: # error is in d.get('error', 'unknown')
Stripe makes very easy to keep the accounting within your application.
All the communications between your app and Stripe go over RESTful web services. Stripe actually exposes even more services and provides a larger set of Python API. You can read more on their web site.
Authorize.Net
Another simple way to accept credit cards is to use Authorize.Net. As usual you need to register and you will obtain a
login and a transaction key (
transkey. Once you have them it works very much like Stripe does:
If you have a valid Authorize.Net account you should replace the sandbox
login and
transkey with those of your account, set
testmode=False to run on the real platform instead of the sandbox, and use credit card information provided by the visitor.
If
process returns
True, the money has been transferred from the visitor credit card account to your Authorize.Net account.
invoice is just a string that you can set and will be store by Authorize.Net with this transaction so that you can reconcile the data with the information in your application.
Here is a more complex example of workflow where more variables are exposed:()
Notice the code above uses a dummy test account. You need to register with Authorize.Net (it is not a free service) and provide your own login, transkey, testmode=True or False to the AIM constructor.
Dropbox API
Dropbox is a very popular storage service. It not only stores your files but it keeps the cloud storage in sync with all your machines. It allows you to create groups and give read/write permissions to the various folders to individual users or groups. It also keeps version history of all your files. It includes a folder called "Public" and each file you put in there will have its own public URL. Dropbox is a great way to collaborate.
You can access dropbox easily by registering at
you will get an
APP_KEY and an
APP_SECRET. Once you have them you can use Dropbox to authenticate your users.
Create a file called "yourapp/private/dropbox.key" and in it write
<APP_KEY>:<APP(auth, filename='private/dropbox.key') mydropbox = auth.settings.login_form
This will allow users to login into your app using their dropbox credentials, and your program will be able to upload files into their dropbox account:
stream = open('localfile.txt', 'rb') mydropbox.put('destfile.txt', stream)
download files:
stream = mydropbox.get('destfile.txt') open('localfile.txt', 'wb').write(read)
and get directory listings:
contents = mydropbox.dir(path = '/')['contents']
Streaming virtual files
It is common for malicious attackers to scan web sites for vulnerabilities. They use security scanners like Nessus to explore the target web sites for scripts that are known to have vulnerabilities. An analysis of web server logs from a scanned machine or directly in the Nessus database reveals that most of the known vulnerabilities are in PHP scripts and ASP scripts. Since we are running web2py, we do not have those vulnerabilities, but we will still be scanned for them. This is annoying, so we like to respond to those vulnerability scans and make the attacker understand their time is being wasted.
One possibility is to redirect all requests for .php, .asp, and anything suspicious to a dummy action that will respond to the attack by keeping the attacker busy for a large amount of time. Eventually the attacker will give up and will not scan us again.
This recipe requires two parts.
A dedicated application called jammer with a "default.py" controller as follows:
class Jammer(): def read(self, n): return 'x'*n def jam(): return response.stream(Jammer(), 40000)
When this action is called, it responds with an infinite data stream full of "x"-es. 40000 characters at a time.
The second ingredient is a "route.py" file that redirects any request ending in .php, .asp, etc. (both upper case and lower case) to this controller.
route_in=( ('.*.(php|PHP|asp|ASP|jsp|JSP)', 'jammer/default/jam'), )
The first time you are attacked you may incur a small overhead, but our experience is that the same attacker will not try twice. | https://web2py.com/books/default/chapter/29/14/other-recipes | CC-MAIN-2021-25 | refinedweb | 2,650 | 63.19 |
Agenda
See also: IRC log
david: I see impact on two
levels
... good to be official part of HTML, good for our prestige
... but it will also force us to be more consicse , more compact
... I agree, we need to make the cut really soon
... and what we put out of scope in the first wave
Felix: Some background why this
happened. In the original charter we said we would define
metadata for HTML5. We would use RDFa and Microdata. This
approach is difficult. Jirka, in discussion with the HTML
group, was pointed to a solution, to define its- attributes.
... This mechanism was not created by us, but was advocated because namespaces are not possible in HTML5, but this is a replacement for that.
... W3C international discussion did not say it was a wrong approach, but rather that we need to coordinate this work with the HTML5 working group, that we keep them aware and are OK with it.
... But what does it mean that they are OK with it? One thing is that it does *not* mean we are adding attributes to HTML5 itself.
... Rather we need a review from the HTML5 working group that they are OK with our approach. It sounds like a minor difference, but to come back to the process, adding attributes to HTML5 would be adding to the work done in HTML5. We cannot do that. HTML5 is in last call and nothing can be added. All that we are doing is defining attributes and getting the blessing of the HTML that we are following the right approach.
... We need to change the charter for this because we said we would not invent our attributes, but instead use RDFa and microdata, but we are inventing our own attributes. The change in charter is to make them aware that we are doing this.
... In terms of timing, it is important that we do this now before we finalize the draft so that we can move forward with out plan.
... Last point: as David said, this has good parts and bad parts. The good part is that we now have more interest from the HTML community and working group in our work. That interest, I know from experience, is not easy. This is all public, btw., you can and should let people know.
... The bad aspect is what David said: we can be motivated to be as web content-producer digestible/understandable as possible. We need to be careful that what we describe and define and keep that perspective in mind: we need to make it understandable to people outside of localization. Look at what Arle did in changing attribute names to make them more understandable.
... Like David said, it also means we need to close the set of data categories we want to deal with.
... We may still add mtConfidence, but aside from that, it makes sense ton concentrate now on how to sell what we have agreed upon to the web content people.
<fsasaki>
Felix: One admin detail: to make this work, we will need a review of the charter. Everyone representing an organization, please fill in this form or get your rep to fill it in.
David: I think this is important and that we took the time is good. But let's keep the discussion short.
david: comments that Yves made
were made before
... the category as specified now contains two to three different categories
... I think the contents of this category, at least the display-size should be taken further
... is there anybody who wants to take this further?
pedro: you mean to split this into several ones?
felix: propose that micha takes an action item to split this into several ones
micha: sure
<scribe> ACTION: michael to split special requirements into several data categories [recorded in]
<trackbot> Created ACTION-189 - Split special requirements into several data categories [on Michael Kruppa - due 2012-08-09].
micha: it would be just two categories
<Arle> Apologies for having to drop out. I'll look at the minutes later, but someone came to the door and I can't put them off. I may be able to jump back in later.
david: display size, storage,
band characters
... this should be split I think
... should be quite easiy
pedro: so summary is: we split special requirements into three: display, storage size, forbidden characters
david: yes, band characters are
the least stable part
... the reg ex thing needs to be resolved
pedro: the current attributes of
storage size and display size are part of one category?
... for me it is fine
david: a lot of discussion about
this during last weeks call
... action item for tadej to implement this, tadej, what's your progress?
tadej: I went through the
minutes
... mostly things were around good terminology to fit all communities
... right now I have a version that integrates all suggestions
... I still work on the one with different variants of pointer, refpointer etc.
... I will send a new version of the draft, this time on google docs
david: I thought it should be final?
tadej: thought it would be
necessary
... I have enough information from everyone
felix: no need to have too many call for consensus for a data category, if it is ready, we will put it into the draft
david: have a task force or just post it?
tadej: from my perspective I
think this is ready
... just want to have another review from the people on the call
felix: that's fine
pedro: for this data
category
... we should involve piek vossen, he can give great input on this
felix: agree, if we send this to the list, piek hopefully can join the discussion
david: had good discussion about
mtConfidence
... mtEngine self evaluation
... Chris Wendt said that this would serve their purpose
declan: understand the
difficiulties ms is mentioning in the mail
... the parts MS was talking about could be hard to implement
... would propose to jsut implement mtConfidence score
... the automatic metrix are hard and may not be that useful across the automatic workflow
david: in the august list, I
responded to jan nelson
... declan and chris wendt made similar points
... other pointers are needed to produce the score, but not needed for a content attribute
... agree it would be messy to try to implement this with reference implementations
... agree with Declan and Chris that self evaluation order and confidence would be more useful and stable
... happy to drive only mtConfidence
... human evaluation does not suffer from this
... many people do this
... not error checking, but people using simple scale
... this evaluation gets more importance
... would be good to be able to encode it
felix: I hope that we can postpone this discussion since we have too much stuff to do, we should focus on that
pedro: for post editing you need
a lot of other information
... score itself is not enough
david: think post editing is out
of scope
... it would be messy if we try to map score and post editing
... not sure if this is what you meant
... I'm happy to continue just with mtConfidence
... this needs to move forward on the ML
<scribe> ACTION: dfilip to draft a section about mtConfidence, based on the discussion [recorded in]
<trackbot> Created ACTION-190 - Draft a section about mtConfidence, based on the discussion [on David Filip - due 2012-08-09].
david: maxime is working on
this
... prominent in the light of recent changes
<tadej> scribe: tadej
fsasaki: The current status is that output to RDF is already done and independent of RDFa or Microdata, we are at the point of needing a chapter for the standard and defining the RDF ontology.
<fsasaki> rdf representation here and
<fsasaki> phil: we are very close to being able to issue our call
<fsasaki> .. had various naming and implementation details, we are very close
<fsasaki> david: I discussed with arle that he would submit a speaking proposal for seattle
<fsasaki> .. what's the time line for closing?
<fsasaki> phil: need to check with Arle
<fsasaki> david: on track for closing this within august
<fsasaki> see overdue actions at
<fsasaki> action-158 - jirka, will do editorial work next week
<fsasaki> jirka: might make sense that yves edits this
<fsasaki> action-164 discussed during the call today
<fsasaki> felix: see editing plans for HTML5 and query language attr. here
<fsasaki>
<fsasaki> felix: this is just a start about the prague f2f, feel free to comment
<fsasaki> david: short update on my action item - seattle event
<fsasaki> action-34
<fsasaki> david: we extended call for papers
<fsasaki> .. felix and arle, can you promote the event on the social media setup
<fsasaki> ACTION: felix to promote seattle event on mlw setup [recorded in]
<trackbot> Created ACTION-191 - Promote seattle event on mlw setup [on Felix Sasaki - due 2012-08-09].
<fsasaki> david: we have a strong pc
<fsasaki> david: lot's of interesting submissions on the way
<fsasaki> david: on good track with this event
<fsasaki> pedro: felix asked me to present in prague implementation, things of what we use for our showcase, progress indicator and readyness
<fsasaki> felix: everything you have available, if possible just show us on the list
<fsasaki>
<fsasaki> pedro: agree to focus on this next year
<fsasaki> david: thanks, think we did good progress today, thanks all for your hard work
<fsasaki> bye everybody | http://www.w3.org/2012/08/02-mlw-lt-minutes.html | CC-MAIN-2015-22 | refinedweb | 1,561 | 70.84 |
On Sat, Dec 15, 2012 at 8:41 AM, Haojian Zhuang<haojian.zhuang@gmail.com> wrote:> On Tue, Dec 4, 2012 at 9:32 AM, Haojian Zhuang <haojian.zhuang@gmail.com> wrote:>> On Mon, Dec 3, 2012 at 4:14 PM, Haojian Zhuang <haojian.zhuang@gmail.com> wrote:>>> clk->rate = parent->rate / div * mult>>>>>> The formula is OK. But it may overflow while we do operate with>>> unsigned long. So use do_div instead.>>>>>> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>>>> --->>> drivers/clk/clk-fixed-factor.c | 5 ++++->>> 1 file changed, 4 insertions(+), 1 deletion(-)>>>>>> diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c>>> index a489985..1ef271e 100644>>> --- a/drivers/clk/clk-fixed-factor.c>>> +++ b/drivers/clk/clk-fixed-factor.c>>> @@ -28,8 +28,11 @@ static unsigned long clk_factor_recalc_rate(struct clk_hw *hw,>>> unsigned long parent_rate)>>> {>>> struct clk_fixed_factor *fix = to_clk_fixed_factor(hw);>>> + unsigned long long int rate;>>>>>> - return parent_rate * fix->mult / fix->div;>>> + rate = (unsigned long long int)parent_rate * fix->mult;>>> + do_div(rate, fix->div);>>> + return (unsigned long)rate;>>> }>>>>>> static long clk_factor_round_rate(struct clk_hw *hw, unsigned long rate,>>> -->>> 1.7.10.4>>>>>>> Correct Mike's email address.>> Any comments? Does it mean that nobody want to fix the bug?Thanks for the patch. My apologies for letting this one slip throughthe cracks but my normal email workflow was unavoidably disrupted andI find myself playing catch-up with pending patches.The patch looks good to me but I'll change the $SUBJECT to "clk:fixed-factor: round_rate should use do_div" and do some testing beforetaking it in.Regards,Mike | https://lkml.org/lkml/2012/12/15/128 | CC-MAIN-2018-47 | refinedweb | 265 | 50.94 |
Introduction
Most of the time you will use SQL, Oracle, Access or some other database to store data. But it is possible to use Excel spreadsheet much like a database to stored data. This article and code will explain how to do this in C#.
Spreadsheet Setup
The first step to using an Excel spreadsheet as a place to store data and possibley update/delete/add data, is to put the data in the spreadsheet. Most of the time you will have column headers for you data and this will be the field names used when writing queries. After populating the spreadsheet with data (you may already have a spreadsheet you want to use, which is fine) the next step is to Define the Names in the workbook. A Name in the Excel workbook is a section of data that will be given a name and a range. The name that you give it will be much like a table name in a database. Here is how to create a name:
At this point you Excel spreadsheet setup is complete, so save the spreadsheet.
Database
To connect to the spreadsheet database using ADO.NET you will need a connection string, the one below will do the trick.
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\MySpreadsheet.XLS;Extended Properties=""Excel 8.0;HDR=Yes;IMEX=2"""
You will need to add a reference to the "System.Data.OleDb" namespace also. This will allow you to connect and query data from the spreadsheet.
Example Program
There is an example program with this that will show you how to connect to the spreadsheet and query data. It is possible to have more than one spreadsheet in an Excel workbook that you can query using SQL joins. This assumes that you have set up each spreadsheet with a "Name" as described in the Spreadsheet Setup section above.
The example program had examples of querying a single "Name" (or table if that is the way you want to think about it) , two spreadsheets with both spreadsheets having a "Name" associated with it, inserting data into a spreadsheet and updating data in a spreadsheet. The basic steps necessary for using the data is fairly simple and just like using SQL or any other database. First you make a connection to the spreadsheet using the connection string listed above. Second you write a SQL query using the spreadsheet "Name" (table name) that you defined before in setting up the spreadsheet. Third you execute the query and if necessary return records. And finally display the data, assuming that is what you want to do with it.
The code is fairly well commented so I am not going to go into much detail about how it is done but you can look at the code to get a good idea of how to do this. Once you get past the Excel spreadsheet setup from above, the rest is just like connecting to any other database that you may have used before.
Conclusion
Using Excel as a database for most purposes is probably not a good idea but there may come a time when data is sent to you on a regular basis in a spreadsheet, to be updated or imported into other process and this may come in handy for that process.
Accessing Excel Spreadsheet in C#
Calling Oracle stored procedures from Microsoft.NET
Hello,you can also easily convert Excel file to DataSet with this Excel .NET library without using JET or OLE drivers or Excel Automation.Here is a sample Excel C# code");
thank you
Yes but there are much better and faster ways to read Excel Files.We have a free version of .NET component that you can use in your application (even in commercial ones) for Reading/Writing XLS/CSV/HTML/XLSX files.
Mark Thank you very much. :)
Just what the doctor ordered | http://www.c-sharpcorner.com/uploadfile/bourisaw/accessexceldb08292005061358am/accessexceldb.aspx | crawl-003 | refinedweb | 650 | 70.43 |
> There must be some code in the import machinery that assumes (with > excellent reason) that the __name__ of the module from which it is > called will never contain periods. Right. Periods in module names are used for the package namespace. (Import some code that uses packages and then print sys.modules.keys() to see how.) > Consider the following: > > >>> >>> import string > >>> string > <module 'a.b.string' from '/usr/lib/python2.3/string.pyc'> > > The peculiar behavior is *only* triggered by periods: > > >>> > > Would this be easy to fix? No. > It would fall into the realm of "so don't do that!" except that Zope's > Python Scripts set their __name__ to their Zope Id, which can contain > periods. Too bad. --Guido van Rossum (home page:) | https://mail.python.org/pipermail/python-dev/2004-January/041754.html | CC-MAIN-2017-30 | refinedweb | 123 | 77.03 |
For standardization in ROS we have tried to standardize how information is represented inside a coordinate frame. Please see REP 103 for conventions on units and coordinate conventions. This REP includes Units, Orientation conventions, Chirality, Rotation Representations, and Covariance Representations.
Transform Direction
Transforms are specified in the direction such that they will transform the coordinate frame "frame_id" into "child_frame_id" in tf::StampedTransform and geometry_msgs/TransformStamped. This is the same transform which will take data from "child_frame_id" into "frame_id".
Naming
Rules:
All frame_ids should be resolved when created such that all stored and sent frame_ids are globally unique.
All code should reference the frame name and resolve using the tf_prefix into a frame_id Unless the code is specifically calling to full tf_prefixs possibly between robots.
Coordinate frames in ROS are identified by a string frame_id in the format /[tf_prefix/]frame_name This string has to be unique in the system. All data produced can simply identify it's frame_id to state where it is in the world. All frame_ids sent over the system should be fully resolved using the tf_prefix policy outlined below.
tf_prefix
To support multiple "similar" robots tf uses a tf_prefix parameter. Without a tf_prefix parameter the frame name "base_link" will resolve to frame_id "/base_link".
If the tf_prefix parameter is set to "a_tf_prefix" "base_link" will resolve to "/a_tf_prefix/base_link". This is most useful when running two similar robots which otherwise would have name collisions in their frame_ids.
tf_prefix is determined by doing a searchParam, if it is not found a single '/' is prepended to the frame_name.
tf_prefix for data sources
All sources of data should apply the tf_prefix resolution to their frame_id before publishing.
tf_prefix for data processors
All nodes using frame_ids should use fully resolved frame_ids, resolving the tf_prefix from the parameter.
The only exception is nodes which are explicitly designed to work between multiple robots. In which case they should know all fully resolved frame_ids.
Multi Robot Support
tf_prefix is designed for operating multiple similar robots in the same environment. The tf_prefix parameter allows two robots to have base_link frames, for they will become /robot1/base_link and /robot2/base_link if all nodes for robot1 are run in robot1 as their tf_prefix parameter. And likewise for robot2. This means that a node running with parameter tf_prefix="robot1" can lookup "base_link" which will resolve to /robot1/base_link, but if it wants to know where robot2 is it can lookup /robot2/base_link explicitly.
Rules of thumb when developing multi robot safe software
In the single robot case everything should work with any arbitrary tf_prefix set including no tf_prefix set. To do this code should by default be configured using frame names, which are resolved immediately into frame_ids using the tf_prefix parameter.
If that robot is pushed down into a namespace with a unique tf_prefix everything should work the same.
A second copy of the robot should be able to be started in a separate namespace with a different tf_prefix and work completely independently of the first robot as though it was the only one there.
- Specific code for robot interaction can be then added to tell the two robots how to interact. A common case for this would be to point robot2 at robot1's map frame_id so they can share the same map. | https://wiki.ros.org/geometry/CoordinateFrameConventions | CC-MAIN-2021-31 | refinedweb | 543 | 52.9 |
NOTE: comp.std.c++ is currently non-operational because of technical problems with moderation. Please use comp.lang.c++.moderated until further notice.
Quoting from the charter: comp.std.c++ is for technical announcements and discussion of the ANSI/ISO C++ standardization process and the C++ standard, and for discussion of the design and standardization of the C++ language and libraries. Other discussion that is directly related to the C++ standard (not related merely to C++ programming techniques) is also welcome. Posts should be cogent and free from personal attacks.
RFC 1855 (Netiquette Guidelines, by Sally Hambridge) describes generally accepted standards that apply to comp.std.c++ as well as to other Usenet newsgroups.
Further information about moderation policy can be found at.
If you're talking specifically about the C++ standard, or about how the C++ language itself is designed, or about recent changes to the language, then your post probably belongs in comp.std.c++. If you're talking about how to write C++ programs (if you want to know how to do something in C++, for example), then your post probably belongs in comp.lang.c++.moderated or comp.lang.c++. If you're talking about general issues that aren't unique to C++ (algorithms, for example, or object-oriented design), then you might want to consider comp.programming, comp.lang.misc, or comp.object.
Finally, most of the popular platforms (Unix, OS/2, MS-Windows, Macintosh, and so on) have their own groups. Topics that apply to some particular platform, rather than to C++ in general, should go to one of those groups.
You should be able to use your ordinary newsreading software: if things are set up properly, an article that you "post" to comp.std.c++ will automatically get forwarded to one of the moderators. If that doesn't work, though, you can mail your article to std-c++@ncar.ucar.edu; this is less convenient than using your newsreader, but probably more reliable.
Because of differences in time zones, and other factors, it can take 24 hours for your article to finish its travels and appear at your news site. If your article is not accepted (which will occur only if it doesn't meet the fairly loose criteria described here and in the document describing our moderation policy) your article will be returned directly to you by the moderator with an explanation. If you submit an article and it isn't rejected and it doesn't appear in comp.std.c++ after two days, please send mail to the contact address std-c++-request@ncar.ucar.edu asking if it was received. Please do not just resubmit the article. Please also realize that if you don't provide a valid email address, the moderators cannot notify you if the article is rejected!
Note that crossposted articles often take longer to appear, largely because they involve more work for the moderators. This is especially true of articles that are crossposted to more than one moderated newsgroup.
Cancelling an article doesn't work for moderated newsgroups. If you accidentally hit "send" before you are finished composing the article, immediately send email to the contact address std-c++-request@ncar.ucar.edu with a copy of what you want cancelled, so the moderators can match the cancel request with the article. If you realize you made a mistake some time after sending the article, then you should wait for the article to appear and then post a followup containing the correction.
Comp.std.c++ is moderated by a panel: an article sent to std-c++@ncar.ucar.edu is automatically mailed to one of the moderators. If you want to get in touch with the moderators (if you want to discuss moderation policy, for example), you should send mail to std-c++-request@ncar.ucar.edu. Mail sent to that address will be forwarded to all of the moderators.
At present, there are five moderators:
Is there an archive of comp.std.c++ postings?
We (the moderators of comp.std.c++) don't maintain such an archive, and we don't know of anyone who does. Google Groups, however, does save articles from comp.std.c++ (as well as all other newsgroups).
comp.lang.c++ is for general discussion about C++ programming, and comp.lang.c++.moderated is a moderated newsgroup for general discussion about C++ programming. You should probably read the comp.lang.c++ FAQ list, available in HTML form on Marshall Cline's WWW site and in plain text form on rtfm.mit.edu. Bjarne Stroustrup also maintains a list of C++ resources online. See also question B.1.
The comp.std.c++ FAQ is currently maintained by James Dennett, a member of the newsgroup's moderation panel. He can be contacted at jdennett@acm.org. Corrections and suggestions are welcomed.
To get a copy from ANSI, visit the ANSI web store.
For a paper copy, go to. Choose Catalogs/Standards Information, then "ANSI-ISO-IEC Online Catalog". Search for "14882".
Designation: ISO/IEC 14882-1998
Title: Information Technology - Programming Languages - C++
Price per copy (US dollars): $175.00
To download an electronic copy, go to. The document is in PDF format, 2794 KB in size, readable by the free Adobe Acrobat reader. The price (US dollars) is $18.00.
The current (2003) version of the C++ Standard is also available in book form, from John Wiley and Sons Ltd; its ISBN is 0470846747. The cost of this book is lower than the cost of a paper copy from ANSI, and it is widely available in bookstores.
You can view the earlier public-comment version (December 1996) at.
The public-comment version is similar to the 1998 standard, but differs in many details. The current standard is different again in that it includes many corrections to the 1998 standard. You should not rely on the public-comment version for detailed answers to C++ questions.
I'm looking for a more recent free version of the draft standard than the public comment version. Why can't I find any copies online?
The public comment versions are the only ones that are supposed to be publicly circulated; interim working papers contain editorial comments, portions that are known to be unstable, and so on.
From a legal (copyright) perspective, the C++ committee is not authorized to distribute any versions except the public-comment versions, and Committee members have been asked not to make other versions publicly available.
If you want to follow the standardization process, you may either get information from a colleague who is on the committee, or else join the committee yourself. Membership is open to all. Most programmers (including all novices) are better off with a good textbook, though; experts who really need a standard are assumed (by the standards organizations) to be able to afford it.
The "C++ Committee" is really two committees in one: a United States committee, and an international committee.
The Information Technology Industry Council in the US sponsors the InterNational Committee for Information Technology Standards (INCITS) (formerly called X3), a standards body accredited to the American National Standards Institute (ANSI). None of these bodies is part of the government; their members come from academia, industry, and individual government agencies. INCITS charters individual committees to create and maintain standards documents. Each member of a committee represents some organization, but not a country. A self-employed person can represent himself or herself. Each organization gets one vote.
For C++, that committee is J16. (Formerly called X3J16.) The J means "programming languages", and the 16 means it is the 16th J committee to receive a charter.
The International Organization for Standardization (ISO) is an independent body composed of member nations. Each member of an ISO technical committee attends on behalf of his or her nation, each nation getting one vote. Each nation determines for itself how its members are selected. For C++, the technical committee is SC22/WG21 (Steering Committee 22, Working Group 21), under JTC1 (Joint Technical Committee 1). The US representative to WG21 is a member of J16, and casts the US WG21 vote according to the majority vote of J16. In effect, the technical representatives of the US to WG21 comprise the entire J16 committee. (It is actually not that simple, but it is a reasonable approximation.)
The C standard was created by J11 (formerly X3J11). After it was complete, an ISO committee (SC22/WG14) was formed to create the ISO C standard. At the time, document format requirements of ANSI and ISO were different, so there were for a time two standards. ANSI worked with ISO to find a way to avoid in the future creating two "standards". For C, ANSI adopted the ISO version of the C standard, and now publishes that as the ANSI standard; there is only one C standard now.
For C++, the INCITS and ISO committees meet together and publish a single standard.
That depends on the procedures of your country. In the US you would join J16; in Britain you would contact BSI, in France AFNOR, in Germany DIN, and so on.
Two kinds of membership in the committee are possible: Principal member and Advisor. ("Advisor" was formerly known as "Observer".) Both kinds of members have the same privileges in their access to the committee documents and any information related to the committee work and distributed by the committee or by the InterNational Committee for Information Technology Standards (INCITS) Secretariat. (INCITS was formerly known as X3). As of 2008, both kinds of members pay $1200 membership fee per year, which includes an unlimited number of alternate members.
Committee documents are made available before and after each of the meetings. Currently there are 2 or 3 meetings per year. Documents submitted in electronic format (PDF, HTML, or plain ASCII) are posted on a controlled web site.
After consultation, INCITS have now withdrawn the option of having paper copies of all documents mailed for an additional fee.
The Principal member is expected to take work assignments and cannot miss more than one out of any three consecutive meetings. In exchange, the Principal member has voting rights and is allowed an alternate (no additional fee is charged for the first alternate) who can replace the Principal at some meetings or participate together with him/her. Only one Principal member can represent any organization or individual. The Advisor has no obligations except for membership fees and can attend or skip any or all meetings. All documents are available to the Advisor regardless of his/her presence at meetings.
ANSI supports an "open door policy", which means that any person from the international community and any non-US organization can also join the technical committee as a Principal member or an Advisor.
To apply, send an application form with information about you and/or your company and how your business depends on C++ standardization. Indicate the name of the ANSI C++ standard committee (J16) and what kind of membership you are applying for. The form is available at or in HTML format at.
Address this letter to:
Deborah Spittle
INCITS Secretariat
Suite 200
1250 Eye Street, NW
Washington, DC 20005
USA
Please also send copies of this letter to the J16 chair and vice-chair:
You can contact individual members or officers of WG21 or J16. You can submit comments during public-comment periods via the announced procedures. (They have been announced in comp.std.c++; also see INCITS's press release) You can post comments and questions to comp.std.c++.
The table below describes the schedule from before approval of the FCD.
The ISO side of the C++ committee is WG21, which is under SC22, which is under ISO/IEC JTC1. "CD" means Committee Draft, the draft standard which is voted on. When a Final CD (FCD) is accepted, it becomes a Final Draft International Standard (FDIS).
After final approval, the FDIS becomes an International Standard (IS). Once the standard has reached FDIS stage, it may be cited publicly as the "Final Draft International Standard for C++."
In November 1997, the C++ Committee voted to approve the draft dated December 1997 as the "Final Draft International Standard" (FDIS), and forwarded it to ISO to be sent to member nations for a final vote. The first C++ standard (ISO/IEC 14882:1998, Programming Languages -- C++) was approved on July 9 by a unanimous 20-0 vote. Formal publication occurred Sept 1, 1998.
Between 1998 and 2003, the work of the C++ Committee involved evaluating Defect Reports (reports from any source of possible errors and requests for clarification), and publishing corrections, resulting in the updated C++ Standard ISO/IEC 14882:2003.
The C++ committee has formally registered its intention to update the C++ standard, adding significant new library and language functionality, and has a current target of releasing an updated standard in 2009.
In 2006 the Library Working Group published a Technical Report (informally, TR1) including a number of new library components. While not formally a Standard, this TR is expected to provide experience with the proposed features, many of which have been incorporated into the latest versions of the Working Paper.
The Annotated C++ Reference Manual (usually referred to as the ARM), by Ellis and Stroustrup, essentially covers what C++ was before WG21/J16 convened. It consists of a formal reference manual interspersed with explanatory comments, and is very valuable even though it is somewhat out of date. The Design and Evolution of C++ (usually referred to as D&E), by Stroustrup, is less technical and more historical. It is also more up to date: it includes many of the changes in C++ since the ARM was published. D&E is useful because it discusses language design decisions in detail, including ideas that were ultimately rejected. Both books are published by Addison-Wesley.
Finally, The Draft Standard C++ Library, by Plauger (published by Prentice Hall), is also useful: it is the basis for some of the text in the working paper. There were major changes just after the book was published, unfortunately, and the C++ standard library in the IS no longer bears much resemblance to the one Plauger describes.
You should probably read the ARM and D&E before posting to this group.
See the discussion in D&E: it's possible that your idea has already been considered and rejected, or that a similar proposal is being considered. D&E also has some comments about extension proposals in general.
Work towards the next C++ standard, intended for release in 2009, is well underway, and its major features have been established by this time. The Library Working Group is still accepting proposals, but they are not expected to form part of C++09, but rather to form part of future Technical Reports.
At the moment the committee's time is split between handling reported defects in the standard and considering proposals for changes and extensions.
Some of the first proposals to go before the committee have come from the Boost group, see.
Informally, most members "have a little list" of pet topics they plan to bring up. Many good ideas have also been posted to comp.std.c++, and many committee members read and participate in those discussions.
If you have some feature in mind, you should post it to comp.std.c++. It will get some discussion and refining without the need to wait for the committee to be ready to review it. If some vendor, or g++ contributor, likes the idea, it might even get implemented, and have the advantage of being "existing practice."
You should submit a defect report to the standardization committee; see Question B13 for details.
ISO (the International Organization for Standardization, which is ultimately responsible for the C++ standard) has a single mechanism, called Defect Reports, for reporting problems and asking for interpretations.
A DR is not a request for an extension or a suggestion for a change. A DR is not an opportunity to have someone teach you C++. A DR reports an apparent error, inconsistency, ambiguity, or omission in the published final standard (ISO/IEC IS 14882:1998, Programming Languages -- C++). Examples: The standard
If a submitted DR is accepted, the C++ Committee eventually prepares a formal response. The response might be that no defect exists. If the Committee agrees that a defect has been identified, it will eventually adopt a formal resolution. The resolution might be to ignore the defect, or to correct it and publish the correction in a Technical Corrigendum (TC). The first TC to the 1998 Standard was issued in 2003. (The standard itself comes up for review in 2003. At most two TCs to any version of the standard can be published.)
As a procedural matter, a submission is known inside the Committee as an "issue" until it is either closed or elevated to DR status. Two lists of issues and DRs currently before the Committee are now available on two public Web sites:
The open-std site is the official public web page for the C++ committee.
The "core language" issues concern primarily clauses 1-16 of the C++ standard. The "library" issues concern primarily clauses 17-27 of the standard. There is some overlap, and some of the Annexes are covered by both lists. The lists are updated several times per year.
The lists include proposed resolutions. A "proposed resolution" reflects the best judgement of the C++ Committee as of the last revision date of the issue. It has no official weight, and does not override the contents of the standard. It provides guidance as to how the issue is likely to be officially resolved in a TC.
If you believe you have found a defect in the latest publicly available draft of the standard, first review the lists on the web site to be sure it has not already been submitted. It wastes everyone's time (including yours) to process duplicate reports.
Assuming your defect is not already in the lists, the simplest way to submit a DR is via the moderated Usenet newsgroup comp.std.c++. The moderators of that newsgroup have agreed to act as a preliminary filter, and forward reasonable-looking DRs to the C++ Committee for consideration. Post your submission to that newsgroup, or send it by email to std-c++@ncar.ucar.edu.
Prepare your submission following these guidelines:
No particular format is required, as long as the guidelines above are followed.
The submission should be plain ASCII text, not html, not anything formatted for a word processing program. The language should be English. (Your English writing need only be understandable. No DR will be rejected because of incorrect grammar or spelling.)
Examples:
"How can I create and use a stack of strings? I can't figure it out from the material in clauses 21 and 23."
Not a DR. The standard is not meant to be a tutorial, and is not suitable for trying to learn the language. You can ask the question in a usenet C++ newsgroup, or send it to a C++ magazine that offers a Q&A column, or read a C++ textbook.
"The standard is defective because the library does not contain a hash table template."
Not a DR. Despite the clever use of "defective", it suggests a substantial addition.
"The stuff about errno isn't clear."
Not a valid submission. It doesn't say what isn't clear, and doesn't identify the parts of the standard that aren't clear.
"In Section 17.4.1.2 [lib.headers] and 19.3 [lib.errno] it is unclear whether errno must be a macro."
A valid submission, although more detail would be better. (A more elaborate DR has already been submitted on this subject.)
"Section X.Y.Z [foo.bar] paragraph 18 says 'time flies.' Is that a statement about time, or a requirement to measure the flight of insects?"
Probably a valid submission. You should first check to see whether the explanation already exists elsewhere in the standard. Check the document index, or search the document if you have an electronic version. In addition, it is common for the introductory paragraphs of a clause to be imprecise (to provide an overview) with detailed explanations appearing later.
The moderators of comp.std.c++ will verify that the guidelines have been followed. If they find that the guidelines have not been followed, the submission will be returned to you with an explanation. You can submit a corrected version if you wish.
If the moderators agree that the submission follows the guidelines, they will forward it to the C++ committee, where it will be processed as explained above. The moderators will also post your submission in the comp.std.c++ newsgroup. That posting will serve as your acknowledgement. (If accepted by the C++ Committee, your submission will also eventually appear in one of the issues lists.)
If you feel that your submission has been unfairly rejected by the newsgroup moderators, you can submit it directly to the ISO National Body for your country. In the US it is ANSI; in the UK it is BSI; in France it is AFNOR; in Germany it is DIN. Try the ISO home page for contact information.
If you cannot locate an appropriate National Body, the UK delegation to the C++ Committee has volunteered to act as ombudsman. If all else has failed and you still think you have a valid DR, you can submit it to them via Francis Glassborow.
The rules for qualification conversions (standard conversions that add const or volatile specifiers) are given in section 4.4 of the draft standard. T** to const T** is not a qualification conversion; this is not an oversight, but is necessary to preserve const safety. Consider the following. (This example was provided by James Kanze, but the general idea behind it is well known.)
    const char c = 'a';
    char* p;
    char** pp = &p;
    const char** ppc = pp;  // Supposing that this were not illegal.
    *ppc = &c;              // Oops: where does p point?
    *p = 'b';               // And what is wrong here?

The rules in section 4.4 of the draft standard prohibit this sort of error.
There have been at least three different versions of auto_ptr in various drafts of the C++ standard; the differences all have to do with transfer of ownership. (The original auto_ptr proposal avoided all of this complexity by forbidding transfer of ownership.)
There is a discussion of auto_ptr on Scott Meyers's More Effective C++ Web site. This discussion includes the final proposal for auto_ptr as defined in the C++ standard.
The committee considered proposals for a range of smart pointer semantics, including garbage collection and reference counting. Garbage collection was considered too ambitious, and indeed the state of the art has already moved past the proposals then before the committee. Reference counting elicited heated disagreements: some thought it useful, others disagreed as to what semantics it should have, and still others thought it inefficient. In the end only auto_ptr was accepted.
Many of the changes are summarized in Appendix A of Stroustrup's book The C++ Programming Language (second edition or later). If you have an older printing of that book, you can get an updated version of the appendix from. Also, many of these changes are discussed in Sean Corfield's C++ Beyond the ARM site.
Here are a few of the changes. Templates and exceptions are no longer "experimental". Templates may be used for non-virtual member functions, as well as for functions and classes. Templates may be partially specialized---that is, a template specialization may itself be templatized. Class template parameters may have defaults. Function template parameters are still deduced from argument types, but it is now also permitted to specify the parameters explicitly. The binding of names in template functions has been clarified, and a new keyword, typename, may be used to resolve some cases that would otherwise be ambiguous.

Member functions of derived classes may have covariant return types. Run-time type identification (RTTI) has been added to the language. There is no longer any reason to use C-style casts: you should use static_cast, dynamic_cast, const_cast, or reinterpret_cast instead. A namespace mechanism has been added to the language; the keywords using and namespace deal with that mechanism. The use of file-scope static functions is deprecated in favor of functions declared within an unnamed namespace.

It is possible to declare variables in the test expression of if, for, while, and switch statements. (But not do statements.) The scope of variables declared in a for loop no longer extends past the end of the loop. C++ now has a boolean type, bool, with manifest constants true and false; the built-in relational operators have been redefined to return bool instead of int. You can restrict the application of single-argument constructors with the explicit keyword, and you can use the mutable keyword in class definitions to declare that members in otherwise const objects can actually be changed.

The language now has an extensive standard class library; the STL, or Standard Template Library (a container, iterator, and algorithm library), is integrated into the standard C++ class library.
The following Web sites contain STL information.
You might also want to look at one or more of these books:
The SGI implementation is available from, and the original HP implementation is available from and.
A widely ported implementation based on the HP/SGI STL and supporting extensive debugging facilities as well as versions optimized for speed is available from the STLPort project.
Also, versions 2.7 and later of libg++ (the GNU C++ library) include a version of the STL that works with GNU C++. GNU C++ version 3.0 and later includes GNU's libstdc++ v3, which conforms much more closely to the C++ Standard.
Note that several companies, including Modena, Rogue Wave, Plum Hall, Dinkumware, and ObjectSpace, also sell commercial implementations.
Daveed Vandevoorde has written a partial implementation of the valarray templates; you can get it from. Modena has written a partial implementation of the string template and made it freely available; it is called bstring.h, and you can get it as part of the source code for David Musser's and Atul Saini's book.
The most recent version of libstdc++ (the C++ library supplied with gcc) implements much of the standard library. Most of the commercial library vendors also sell implementations.
STLport is an Open Source free implementation of the complete C++ standard library. It is available for download at.
The first compilers that support all of the features of modern C++ appeared during 2002. Most compilers implement most, but not all, of the post-ARM changes.
Version 4.3 of Comeau Computing's C++ compiler, based on the EDG frontend, claims full support for the language, and when used in conjunction with Dinkumware's library supports the whole of standard C++. Most leading compilers have fairly complete support, although some features such as partial ordering of function templates and export support are often lacking.
Since the C++ standard has been finalized, many books about standard C++ have appeared. Two such books are the third edition of C++ Primer, by Stanley Lippman and Josée LaJoie (Addison-Wesley, 1998), and the third edition of The C++ Programming Language, by Bjarne Stroustrup (Addison-Wesley, 1997). The C++ Programming Language is now in its sixteenth printing; if you have an earlier printing, you may wish to see the list of errata. One book that describes the entire standard library (but not the core language) is The C++ Standard Library - A Tutorial and Reference by Nicolai M. Josuttis (Addison-Wesley, 1999). (Formerly available as Die C++-Standardbibliothek)
Automatic type conversions are dangerous in general, since the compiler can apply them even when you don't intend for a conversion to occur. The conversions can result in ambiguities, or can turn invalid code into legal code that does the wrong thing.
In the specific case of an automatic char* conversion from string, you could wind up unexpectedly with a pointer to the internals of a string object. Such a pointer could easily become a dangling (invalid) pointer when the contents of the string were reallocated, or if the lifetime of the string object ended while the pointer still existed.
Worse, it is intended to be allowed by the standard for strings to use a reference-counting implementation. If you could change the contents of a string, you might unknowingly change the contents of other strings that shared the data.
You can get a conversion to const char*, but you must ask for it explicitly by calling member function c_str(). The explicit function call reminds you to be careful about lifetimes. You can copy the string contents to your own char array and then do anything you want with it. | http://www.comeaucomputing.com/csc++/faq.html | CC-MAIN-2015-11 | refinedweb | 4,811 | 55.64 |
"bhaaluu" <bhaaluu at gmail.com> wrote > States, getters-setters, direct access....... > I'm still in toilet-training here/ 8^D > Can you provide some simple examples that > illustrate exactly what and why there is any > contention at all? I'll try. State is just a bit of jargon to describe the combined values of an objects attributes. Here is an example: class Counter(object): def __init__(self, initial=0, delta=1): self.value = initial self.delta = delta def inc(self) self.value += delta return self.value def state(self): return (self.value,self.delta) a = Counter() b = Counter(1) c = Counter(0,5) print a.state, b.state, c.state So all 3 objects have different combinations of values so they have different states. The state determines the value returned by the objects inc method x = a.inc() # x-> 1 y = b.inc() # y -> 2 z = c.inc() # z -> 5 This is fine and dandy but what if we want to find out the current value of a.value without calling inc? Thats where hetter/setter/direct access comes into the picture. In Java and some other languages the idiomatic thing to do is provide methods prefixed with get/set for each attribute class Counter(object): def __init__(self, initial=0, delta=1):... def inc(self)... def state(self):... def getDelta(self): return self.delta def getValue(self): return self.value def setValue(self, val): self.value = val def setDelta(self, val): self.delta = val Now this is a lot of typing! It also isn't necessary in Python because Python allows you to access the attributes diectly - direct access. Like so: a.delta = 42 a.inc() print a.value This gives rise to a debate between the OOP purists who say that you should only access the internals of an object via a method(get/set) and the pragmatists who say its OK to use direct access. And old school OOPers like me say it would be better if you didn't need to use either since you should define the object in terms of higher level abstractions/methods. 
Now, with my pragmatic hat on, I see no point whatsoever in writing
reams of get/set code just for the sake of it, so if you must bypass
the abstract methods, use direct access. But what if you want all of
the attributes - to print state, say? That's where the question of
direct access versus a state() method comes in. My preference is to
provide a single method that returns the values that you need (and in
many cases that's less than all of the attributes!) rather than
allowing, or even encouraging, direct access.

The danger with direct access is that we use it not only for reading
but also for directly modifying the attributes - and that is a bad OOP
habit! (Remember: Objects do it to themselves!) For example, say we
want to decrement a counter instead of incrementing. We could do it
directly:

c.value = c.value - 5

But it would be better to do it in an OOP way. So the final issue (and
Kent will no doubt have more to add from his perspective!) is, if we
do want to modify the Counter, how do we do it (assuming we don't own
the original class, or too many other projects already use it to risk
breaking them)? Well, the pure OOP way is by subclassing. Thus if we
want a counter with a dec() method as well as an inc(), we can create
one:

class UpDownCounter(Counter):
    def dec(self):
        self.value -= self.delta
        return self.value

Now if we make c an instance of UpDownCounter we can do:

c = UpDownCounter(0, 5)
c.inc()
print c.state()
c.dec()
print c.state()

And this has the advantage that there is no impact on the other
objects derived from the initial Counter class. Note that as well as
adding methods you can also modify, or change entirely, the existing
methods; that's called "overriding a method" and is probably best left
for later!

I hope that makes sense; it's about the simplest example I could come
up with.

--
Alan Gauld
Author of the Learn to Program web site
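[Editor's aside, not part of the original exchange: Python's property builtin offers a middle ground in this debate, giving attribute syntax on the outside with method-mediated access on the inside. A minimal sketch, in modern Python 3 syntax:]

```python
class Counter(object):
    def __init__(self, initial=0, delta=1):
        self._value = initial
        self.delta = delta

    def inc(self):
        self._value += self.delta
        return self._value

    @property
    def value(self):
        # reads look like direct attribute access...
        return self._value

    @value.setter
    def value(self, val):
        # ...but writes go through a method, so they can be validated
        if val < 0:
            raise ValueError("counter cannot go negative")
        self._value = val

c = Counter(0, 5)
c.inc()
print(c.value)   # reads via the getter
c.value = 2      # writes via the setter
```

Callers keep writing `c.value`, but the class retains control over what a read or write actually does.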
Details
Description
When constructor() is defined at global scope,
I think that the compiler should then report an error.
--- sample code ---
(the code has a bug on purpose)
class Rectangle:
    _width as single
    _height as single
    def constructor(width as single, height as single):
        _width = width
        _height = height

    virtual def GetArea():
        return _width * _height

class Square(Rectangle):

    def constructor(width as single):
        super(width, width)

    override def GetArea():
        return _width * _width
---- compile it ----
>booc constructor.boo
Boo Compiler version 0.9.2.3383 (CLR 2.0.50727.4200)
constructor.boo(14,14): BCE0024: The type 'Rectangle' does not have a visible constructor
that matches the argument list '(single, single)'.
constructor.boo(16,18): BCE0060: 'Square.GetArea()': no suitable method found to override.
constructor.boo(9,16): BCE0005: Unknown identifier: '_width'.
constructor.boo(9,25): BCE0005: Unknown identifier: '_height'.
4 error(s). | http://jira.codehaus.org/browse/BOO-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel | CC-MAIN-2015-06 | refinedweb | 140 | 50.02 |
I am flipping two coins, and the first coin that produces three HEADS in a row wins. The programme should terminate once a coin reaches this goal. I want to know which coin it was and print a statement saying that either Coin1 or Coin2 won the race. I can flip them OK and count to three with my two counters headCount1 and headCount2. The problem is I can't pinpoint which coin actually won. Thanks for helping, and here is my code:
//FlipRace.java
public class FlipRace
{
    public static void main (String[] args)
    {
        int headCount1 = 0, headCount2 = 0;

        Coin Coin1 = new Coin();
        Coin Coin2 = new Coin();

        while (headCount1 < 3 && headCount2 < 3)
        {
            Coin1.flip();
            Coin2.flip();

            System.out.println ("Coin1: " + Coin1);
            System.out.println ("Coin2: " + Coin2);

            if (Coin1.isHeads())
                headCount1++;
            if (Coin2.isHeads())
                headCount2++;
        }

        System.out.println ("A headCount attained 3. (Coin was flipped" +
                            " heads 3 times in a row.)");
    }
}
And the Coin class:
//Coin.java
//Represents a coin with two sides that can be flipped.
public class Coin
{
    private final int HEADS = 0;
    private int face;

    public Coin()
    {
        flip();
    }

    public void flip()
    {
        face = (int) (Math.random() * 2);
    }

    public boolean isHeads()
    {
        return (face == HEADS);
    }

    public String toString()
    {
        return (face == HEADS) ? "Heads" : "Tails";
    }
}
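[Editor's note: one way to pinpoint the winner is to check, after the loop ends, which counter reached the target, and to reset a counter to zero on tails so it really counts heads *in a row*. A language-neutral sketch of that logic, written here in Python rather than Java:]

```python
import random

def flip_race(target=3):
    streak1 = streak2 = 0
    while streak1 < target and streak2 < target:
        heads1 = random.random() < 0.5            # flip coin 1
        heads2 = random.random() < 0.5            # flip coin 2
        streak1 = streak1 + 1 if heads1 else 0    # tails resets the streak
        streak2 = streak2 + 1 if heads2 else 0
    # the loop only exits once at least one streak reached the target;
    # both may reach it on the same flip, which is a tie
    if streak1 >= target and streak2 >= target:
        return "Tie"
    return "Coin1" if streak1 >= target else "Coin2"

print(flip_race())
```

The same after-the-loop test translates directly to the Java version: compare headCount1 and headCount2 against 3 once the while loop finishes.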
Chachacha changes changelogs.
Project description
CHACHACHA
Chachacha changes changelogs. This is a tool you can use to keep your changelog tidy, following the Keep a Changelog specification, which is the only format plugin implemented at the moment.
Installation
Grab the latest copy from the releases page and place it where you can invoke it.
Alternatively you can choose to install the Python package and cope with $PATH configuration.
$ pip install chachacha ...
Quickstart
Init a new changelog and then add some changes:
chachacha init
chachacha added Glad to meet you
cat CHANGELOG.md
Subcommands are modeled from Keep a Changelog specification:
chachacha --help

Usage: chachacha [OPTIONS] COMMAND [ARGS]...

Options:
  --filename TEXT  changelog filename
  --driver TEXT    changelog format driver
  --help           Show this message and exit.

Commands:
  init        initialize a new file
  config      configure changelog options
  release     release a version
  added       add an "added" entry
  changed     add a "changed" entry
  deprecated  add a "deprecated" entry
  fixed       add a "fixed" entry
  removed     add a "removed" entry
  security    add a "security" entry
So you can add, change, deprecate, fix, remove and security announce your changes.
KAC format plugin driver heavily depends on Colin Bounouar's keepachangelog library.
Releasing a version is simple as:
chachacha release --help

Usage: chachacha release [OPTIONS]

  release a version

Options:
  --major  bump a major version
  --minor  bump a minor version
  --patch  bump a patch version
  --help   Show this message and exit.
Where:
- major: release a major
- minor: release a minor
- patch: release a patch
Specification follows Semantic Versioning thanks to python semver library.
Configuration
Starting from 0.1.3, Chachacha supports a small configuration system directly embedded in the file via a hack on Markdown link syntax. This allow for a number of features like generating compare history:
chachacha init
chachacha config git_provider GH
chachacha config repo_name aogier/chachacha
chachacha config tag_template 'v{t}'
chachacha added one feature
chachacha added another feature
chachacha release
chachacha security hole
chachacha added capability
cat CHANGELOG.md

[...]
- another feature

[Unreleased]:
[0.0.1]:

[//]: # (C3-1-DKAC-GGH-Raogier/chachacha-Tv{t})
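The trailing comment line is where the configuration lives. A hypothetical sketch of how key/value pairs could be packed into such a Markdown comment (the helper name and encoding details are my guesses, not chachacha's actual implementation):

```python
def encode_config(config):
    # pack each key as its uppercased initial plus the raw value
    body = "-".join(k[0].upper() + str(v) for k, v in config.items())
    return "[//]: # (C3-1-DKAC-%s)" % body

line = encode_config({"git_provider": "GH", "repo_name": "aogier/chachacha"})
print(line)  # [//]: # (C3-1-DKAC-GGH-Raogier/chachacha)
```

The `[//]: # (...)` form is a Markdown "link" that renders to nothing, so the configuration stays invisible in the rendered changelog.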
Configuration system keys are:
git_provider: a git repo provider driver (supported: GH for github.com)
repo_name: repo name + namespace your repo is hosted on
tag_template: a tag template which maps release versions to tag names. The variable t will be expanded with the version number.
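With those three keys set, tag names and compare links can be derived. A sketch under the assumption that the GH provider targets github.com; the helper names here are hypothetical, not chachacha's API:

```python
def expand_tag(tag_template, version):
    # the variable 't' is expanded with the version number
    return tag_template.replace("{t}", version)

def compare_url(repo_name, old_tag, new_tag):
    # GitHub-style compare link between two refs
    return "https://github.com/%s/compare/%s...%s" % (repo_name, old_tag, new_tag)

tag = expand_tag("v{t}", "0.0.1")
print(tag)  # v0.0.1
print(compare_url("aogier/chachacha", tag, "HEAD"))
# https://github.com/aogier/chachacha/compare/v0.0.1...HEAD
```

A different git_provider driver would only need to swap out the URL scheme in compare_url.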
Examples
Start a changelog, add entries and then release
chachacha init
# quoting is supported
chachacha added "this is a new feature I'm excited about"
chachacha added this is also good
chachacha deprecated this is no longer valid
File is now:
cat CHANGELOG.md

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](),
and this project adheres to [Semantic Versioning]().

## [Unreleased]

### Added

- this is a new feature I'm excited about
- this is also good

### Deprecated

- this is no longer valid

[//]: # (C3-1-DKAC)
Now release it:
chachacha release
chachacha added new version added item
File is now:
cat CHANGELOG.md
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](),
and this project adheres to [Semantic Versioning]().

## [Unreleased]

### Added

- new version added item

## [0.0.1] - 2020-02-26

### Added

- this is a new feature I'm excited about
- this is also good

### Deprecated

- this is no longer valid

[//]: # (C3-1-DKAC)
1. mmap: Memory-Mapped Files
Creating a memory map for a file uses the operating system's virtual memory to access the data on the file system directly, instead of going through regular I/O functions. Memory mapping typically improves I/O performance, because it does not require a separate system call for each access and it does not copy data between buffers; in fact, both the kernel and the user application can access the memory directly.
Memory-mapped files can be treated as mutable strings or as file-like objects, depending on your needs. A mapped file supports the usual file API methods such as close(), flush(), read(), readline(), seek(), tell(), and write(). It also supports the string API, with features such as slicing and methods like find().
All the examples below use the text file lorem.txt, which contains a few paragraphs of Lorem Ipsum text.
1.1 Reading Files
You can create a memory-mapped file with the mmap() function. The first argument is a file descriptor, either from the fileno() method of a file object or from os.open(). The caller is responsible for opening the file before invoking mmap(), and for closing it when it is no longer needed.
The second argument to mmap() is the size (in bytes) of the portion of the file to map. If the value is 0, the entire file is mapped. If the size is larger than the current size of the file, the file is extended.
Both platforms support an optional keyword argument, access. Use ACCESS_READ for read-only access; ACCESS_WRITE for write-through, where assignments to the memory go directly to the file; and ACCESS_COPY for copy-on-write, where assignments to the memory are not written to the file.
When reading, the file pointer moves ahead 10 bytes after the first read(10) call. A slice such as m[:10] returns the same first 10 bytes without moving the pointer, so a subsequent read(10) returns bytes 11-20 of the file.
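A minimal, self-contained sketch of this reading pattern (the lorem.txt contents here are generated inline, so the exact bytes are an assumption, not the article's original file):

```python
import mmap

# Generate the lorem.txt the examples assume, so the sketch is self-contained.
lorem = "Lorem ipsum dolor sit amet, consectetuer adipiscing elit."
with open("lorem.txt", "w") as f:
    f.write(lorem)

with open("lorem.txt", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first = m.read(10)   # moves the file pointer to byte 10
        sliced = m[:10]      # same first 10 bytes; the pointer is unchanged
        second = m.read(10)  # continues from byte 10, i.e. bytes 11-20

print(first, sliced, second)
```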
1.2 Writing Files
To create a memory-mapped file that can receive updates, open the file with mode 'r+' (not 'w') before mapping it. You can then use any API method that changes the data, such as write(), or assign to a slice.
The following example uses the default access mode ACCESS_WRITE and assigns to a slice, so that a word in the middle of the first line of the memory-mapped file is replaced in place.
With the access setting ACCESS_COPY, changes are not written to the file; they are kept in the in-memory copy.
In this example, the file handle and the mmap handle must be rewound separately, since the internal state of the two objects is maintained independently.
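A sketch contrasting the two write modes (it uses a throwaway sample.txt with assumed contents rather than lorem.txt, so nothing real is modified):

```python
import mmap

# Scratch file with known contents.
with open("sample.txt", "w") as f:
    f.write("0123456789")

# ACCESS_WRITE: a slice assignment is written through to the file.
with open("sample.txt", "r+b") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_WRITE) as m:
        m[0:3] = b"abc"
        m.flush()

with open("sample.txt", "rb") as f:
    after_write = f.read()

# ACCESS_COPY: changes stay in the in-memory copy; the file is untouched.
with open("sample.txt", "r+b") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY) as m:
        m[0:3] = b"xyz"
        in_memory = bytes(m[0:3])

with open("sample.txt", "rb") as f:
    after_copy = f.read()

print(after_write, in_memory, after_copy)
```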
1.3 Regular Expression
Since a memory-mapped file behaves like a string, it is often used with other modules that handle strings, such as regular expressions. The following example finds all the sentences that contain "nulla".
Since the pattern contains groups, the return value of findall() is a sequence of tuples. The print statement takes each matching sentence and replaces line breaks with spaces, so that the results are printed on a single line.
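A self-contained sketch of the idea (the sample text and the simplified single-group pattern below are assumptions, not the article's originals):

```python
import mmap
import re

# Sample text with two sentences containing "nulla".
text = ("Nulla facilisi. In nulla ipsum.\n"
        "Vivamus nulla metus. Morbi quis tortor.")
with open("lorem.txt", "w") as f:
    f.write(text)

# Simplified pattern: a run of non-period characters containing "nulla",
# up to and including the closing period.
pattern = re.compile(rb"[^.]*?nulla[^.]*?\.")

# re can search the mmap object directly, as if it were a bytes string.
with open("lorem.txt", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        sentences = [match.group(0).replace(b"\n", b" ").strip()
                     for match in pattern.finditer(m)]

print(sentences)
```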
| https://programmer.help/blogs/python3-standard-library-mmap-memory-mapping-file.html | CC-MAIN-2020-16 | refinedweb | 552 | 61.77 |
A data structure used to synchronize concurrent processes running on different threads. For example, before accessing a non-threadsafe resource, a thread will lock the mutex. This is guaranteed to block the thread until no other thread holds a lock on the mutex and thus enforces exclusive access to the resource. Once the operation is complete, the thread releases the lock, allowing other threads to acquire a lock and access the resource.
So after understanding that, I have created a class that controls the Flutter secure storage.
import 'dart:async';
import 'package:flutter/widgets.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
Some of you built the app using an older version of Flutter; the current version I have is 1.22.6, the latest of Flutter's version 1 line.
You might already know this but, create a new branch before doing this work :)
This should upgrade your Dart SDK version as well (min version 2.12.0):
flutter upgrade
/// In case you can't run that command recheck you are using the stable version
flutter channel stable
To upgrade the packages run this:
dart pub outdated --mode=null-safety
If that does not work you can run this command to find out what packages are not null-safe…
The build failed likely due to AndroidX incompatibilities in a plugin. The tool is about to try using Jetifier to solve the incompatibility.
Building plugin file_picker…
FAILURE: Build failed with an exception.

What went wrong:
A problem occurred configuring root project ‘file_picker’.
SDK location not found. Define location with sdk.dir in the local.properties file or with an ANDROID_HOME environment variable.
When I was installing flutter_file_picker and then trying to build the app, I started to get the error described above, so I will describe the process of how I fixed it.
Please note that this is a solution…
AAPT stands for Android Asset Packaging Tool. It takes your application resource files, such as the AndroidManifest.xml file and the XML files for your Activities, and compiles them. It is a great tool that helps you view, create, and update your APKs (as well as zip and jar files). It can also compile resources into binary assets. It is the base builder for Android applications. The tool is part of the SDK (and build framework) and lets you view, create, and update zip-compatible archives (zip, jar, apk).
Yes! If you are releasing an…:
UI/UX Designer & Front End Developer @CADS Software LTD UK, | https://iamloonix.medium.com/?source=post_internal_links---------6---------------------------- | CC-MAIN-2021-39 | refinedweb | 439 | 55.95 |
I have often downloaded demos that were either running too fast or too slow for my computer. A nice demo that runs so fast that it looks more like a flickering set of images, or where the mouse or other controls make the screen spin out of control, is just as annoying as viewing something at 3 fps.
I have therefore written this small article about timing.
Too Fast!
Demos made on a slow computer that are moved to a fast one are often very annoying, mainly because it is very easy to make a demo run slower, but it is extremely difficult to make it run faster. The easiest way is to time all the control functions. Take a look at the following code:
#include <time.h>
// Timing Variables
clock_t time_now, movement_timer = 0;
// Main Loop
while (running) {
// Handle Messages
time_now = clock();
if(time_now - movement_timer > CLK_TCK/30) {
movement_timer = time_now;
// Update Controls
}
// Drawing
}
Most programs are built more or less like this. The main procedure typically holds the loop (the "while" loop) where all the handling is done. This loop reads and handles input messages from keys and the mouse. The loop also contains the drawing of the graphics. If you move the handling into a timed statement (the "if" line) you can limit the updating of the game to a specified maximum per second. In this case it will be 30 times per sec.
Note that the message handling is still done outside the "if" statement, so you need to keep the mouse or keys pressed and un-pressed states in some sort of variable.
The update control part is not only the user controls (mouse and keys) it also handles moving monsters, running animations etc, everything that should be timed. The drawing routine may still run at 120 fps, the movement update remains at 30.
Too Slow!
This is a much more difficult issue, but I would like to add some ideas. First you spend days on improving and optimizing your demo to get the best performance you can. There are no ideas in this article to magically improve the basic speed. That's up to you,
but:
When you have squeezed all the performance out of your system, you might improve it for the user. You may have the best demo or game ever, but if it runs awfully on the end user's machine it doesn't matter.
I have made a flight simulator that I will use as an example. In the simulator there is one factor that greatly influences the performance: the view range (i.e. the distance to the horizon). Now this distance can easily be changed. It is in one of my config files and is used as a variable inside the program. If it changes, I only have to set a few things like the depth plane, the fog, etc., and the variable.
Now I have added a small function (in the main loop) just like in the example above;
time_now = clock();
++frames;
if(time_now - last_time > CLK_TCK*2) {
// Calculate How Many Frames Per Second We Run
fps = (frames*CLK_TCK)/(time_now-last_time);
last_time = time_now;
frames = 0;
// Depending On The Fps, Change The View Range
if ((fps>=0) && (fps<24)) { // Running Slow, Lets Improve Performance
i = (int)(fps-24);
if (fps>=20) {
ChangeViewRangeBy(i); // A Small Change When We Run Between 20 And 24 Fps
} else if (fps>10) {
ChangeViewRangeBy(-i*i); // A Large Change When We Run Between 10 And 20 Fps
} else {
ChangeViewRangeBy(-DepthPlane / 2); // Below 10 Fps, We Half The View Range!
}
} else if (fps>40) {
i = (int)(fps-40);
ChangeViewRangeBy(i); // Above 40 Fps We Slightly Increase The View Range
}
}
The effect is noticeable, but it may have side effects. I tried it on a very slow machine and the tip of the plane was almost at the end of the world; however, this may be handled later. If the above code looks like the code normally used to calculate fps, you are correct. It is very similar.
In the flight simulator I have added a small view of the world when you load it. Here you can see the plane in the airfield, gently turning around, while the game waits for the user to press a key and start the game. This is where it comes in really handy. This small screen is really measuring the performance of the computer and finding the best view range for the game before the player starts.
Now not all games and demos have such a nice and easy parameter as the view range, but you can use the method to remove performance requiring effects. I case of "monsters" you can divide the AI into handling them in turns. Like handling half of the "monsters" in one tick and the other half in the second tick. You may even divide them further to allow for more flexibility. If all "monsters" are divided into 100 pools, you may handle them dynamically like the view range above. Run a variable amount of pools per tick based on the fps, just like the view range.
If you download the above-mentioned flight simulator (it can be found on my website or on the NeHe site under "F" in the Downloads section; you need release 1.2), you may press the "d" key to view debug information, and among other things you can turn view range adjustment on/off by pressing "v".
NeHe™ and NeHe Productions™ are trademarks of GameDev.net, LLC
OpenGL® is a registered trademark of Silicon Graphics Inc. | http://nehe.gamedev.net/article/improved_timing/15010/ | CC-MAIN-2016-18 | refinedweb | 922 | 77.57 |
What Jade syntax are you using? Can you provide an example of a .jade file that causes ST to hang?
Have you tried installing a newer version of the JavaScript syntax, as discussed at?
Here is an example of a .jade file
:doc
  @name Toolbar
  @props {Boolean} [left]
  @props {Boolean} [center]
  @props {Boolean} [right]

import './index.ess'

var classFn = this.composeClasses('Toolbar')

.Toolbar(className=classFn())
  div(className=classFn('&-inner'))
    if props.left
      div(className=classFn('&-left'))
        yield left
    if props.center
      div(className=classFn('&-center'))
        yield center
    if props.right
      div(className=classFn('&-right'))
        yield right

:module
  export var mixins = [require('ui-kit/mixins/compose-classes')]
I haven't tried updating my js syntax, i'll try that now. Is there a way to reset the state on ST? i can't open it currently because of the feature of re-opening files where i left off, so it re-opens a file that it is hanging on, and all i can do is force quit.
(FYI, i'm on OSX El Capitan)
You could wipe your Local/Session.sublime-session file, but you probably don't want to do that. Installing the JavaScript.sublime-syntax override is likely to fix what you are running into.
Why wouldn't i want to wipe the Local/Session.sublime-session file? I'm not worried about losing any changes, if that is the concern there.
It just kills all of your history (recent projects, searches, and more). Any adjustments you've made to the size of UI panels, or options, like the Find panel.
Well, i killed it, it was a choice between ST not working at all, re-installing, or just losing that, which i would have anyway for the other two options. Adding the Javascript.sublime-syntax override did solve my problem. Has the underlying problem been diagnosed? I have about 30 colleagues who will all have this issue when they update. I warned them all this morning to not update because of this
The underlying issue is that the Jade syntax uses regex constructs that aren't supported by our newer regex engine. It ends up including the JavaScript syntax that ships with Sublime Text. That syntax uses a regex that works fine on a non-backtracking engine (like our newer engine), but once combined with the regexes from Jade it causes catastrophic backtracking in the Oniguruma engine.
Unfortunately it seems that none of the users of the dev builds utilize the Jade syntax, nor the MarkdownEditing syntax with fenced js code blocks.
I'm a dev build user and I refuse to use any syntax that won't use the sregex engine, so that's why I never discovered it
it shows how important it is to get an even more diverse group of people on the dev builds though... consider using them, people!
Well that's unfortunate. I'll let my team know about the override. Anyway, thanks for all the help! I'll check into using the dev builds, maybe i can at least cover the jade syntax issues.
lol and maybe some day i'll have the luxury of choosing my syntax, but today is not that day. And, honestly i love the dev perks of using jade, it's unfortunate that it's conflicting so bad with ST
I guess I'm special in that I manage to find the time to hack together my own syntax that is suitable enough for my needs I don't tend to regularly work on many different file types though, which makes it easier
Hi WBond, Thanks for the update...but I guess I'm kind of wondering if there is a bigger fix possible? I've been reporting the miserable experiences that indexing and broken plugins create for a long time now. For anyone developing NodeJS (and probably other dependency-heavy platforms as well), huge directory trees are the norm - not an edge case.
Could Sublime:
- be smarter about whether to index dependencies by default?
- count the number of files to be indexed, and warn the user when a massive indexing job is about to start?
- drop the priority of long-running indexing tasks so they don't take over your whole computer?
- give the user better feedback about what's going on?
- detect plugins that are known to be problematic and warn the user, offering to disable them?
This is probably the 4th or 5th time I've had a situation where some change (plugin, new Sublime build, etc) causes my whole editing experience to grind to a halt, and the solution always involves hackily editing config files, looking under the hood etc. Surely my experience is not rare.
Could everyone install dev build 3125 and see how it helps?
We've included the JavaScript change, but also made some tweaks to the default number of index_workers. Jon created a new Help > Index Status… window to show all of the indexing activity.
We've got some more tweaks planned for the future, but hopefully these changes should help users experiencing acute behavior.
Fundamentally there really aren't issues with large folder trees. I dogfood ST on a daily basis with projects of around 100k files and regularly work on the syntax definitions, where saving a syntax definition triggers reindexing.
We made a tweak in 3125 to the default number of workers that should address users who don't care for fans spinning up on their laptops.
Sublime Text already sets indexers as low priority. While it is true that the indexers will use spare CPU, it shouldn't really "take over" unless the OS has trouble scheduling. Our tweak to the default number of indexers should help reduce this symptom.
The Console shows errors in indexing, to allow the user to find issues with syntaxes. The progress of indexing is displayed in the status bar, and there is a new Index Status window in build 3125 allowing users to see exactly what is going on.
We've got some ideas about further ways we can make the indexing process even more robust.
One of the best ways to help ensure that betas don't run into situations like we are discussing here is to help test the dev builds and help test the third-party packages you use with the dev builds.
I switched to build 3125 and removed my Javascript.sublime-syntax and the indexing seems to have worked properly (the output was error free). Thanks for the fix! i'll try and keep up with dev builds in the future.
I can't even paste without having it hang. Not sure how such a basic functionality could do that. I've exited and restarted about 6 times in the past 10 minutes. Also it is just 3 lines of code. I'm probably just going to switch to Atom this weekend
Is this build 3125 you are talking about? If so, what syntax are you using when you try to paste?
This is 3124. I am trying to paste a simple html snippet. In to a 95 line file
Oh, you should try dev build 3125 and confirm that it works.
The issue is that a third-party package you have installed (MarkdownEditing, Jade, or possibly some other) uses regexes that combined with regexes in the default JavaScript syntax cause catastrophic backtracking on the Oniguruma regex engine when combined with certain code.
We shipped a dev build with a fix for it last night and are planning to ship a new beta shortly.
I may when I am less mad at sublime. With the latest updates I've probably lost an hour or so. Not counting the times I lost work because I tried to paste like a fool. I mean the background updates of packages usually makes it hang for about 10 minutes with no indication of what is going on
Beta 3126 has been released.
Thanks everyone for helping to identify the issue! | https://forum.sublimetext.com/t/solved-high-cpu-usage-in-beta-3124/23060?page=2 | CC-MAIN-2018-22 | refinedweb | 1,345 | 73.07 |
I’ve previously posted about the new logging features in ASP.NET Core RC1 and MVC6. This time I’m going to write about how Microsoft now has dependency inversion baked into the new Core framework.
Dependency inversion is a well documented and understood principle – it’s what the D stands for in SOLID, and says that your code should only depend on abstractions, not concrete implementations. So plug your services into your application through interfaces.
In previous versions of MVC, I've needed to download a 3rd party library to assist with dependency inversion - these libraries are also sometimes called "containers". Examples of containers I've used are Ninject.MVC, Autofac, and Spring.NET.
In MVC6, Microsoft has entered this field, by including a simple container in the new version of ASP.NET. This isn’t intended to replicate all the features of other containers – but it provides dependency inversion features which may be suitable for many projects. This allows us to avoid adding a heavyweight 3rd party dependency to our solution (at least until there’s a feature we need from it).
Getting started
For our example, first create the default MVC6 web application in Visual Studio 2015.
Now let’s create a simple stubbed service and interface to get some users. We’ll save this in the “Services”folder of the project.
public interface IUserService
{
    IEnumerable<User> Get();
}
We’ll need a User object too – we’ll put this in the “Models” folder.
public class User
{
    public string Name { get; set; }
}
Let’s create a concrete implementation of this interface, and save this in the “Services” folder too.
public class UserService : IUserService
{
    public IEnumerable<User> Get()
    {
        return new List<User>
        {
            new User { Name = "Jeremy" }
        };
    }
}
Now modify the HomeController to allow us to display these users on the Index page – we need to change the constructor (to inject the interface as a class dependency), and to change the Index action to actually get the users.
public class HomeController : Controller
{
    private readonly IUserService _userService;

    public HomeController(IUserService userService)
    {
        _userService = userService;
    }

    public IActionResult Index()
    {
        var users = _userService.Get();
        return View(users);
    }
}
If we just run our project now, we’ll get an exception – the HomeController’s Index action is trying to get users, but the IUserService has not been instantiated yet.
We need to configure the services that the container knows about. This is where Microsoft’s new dependency inversion container comes in. You just need to add a single line of code in the ConfigureServices method in Startup.cs to make sure the controller is given a concrete instance of UserService when it asks the container “Can you give me something that implements IUserService?”
public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddTransient<IUserService, UserService>();
}
If we run the project again now, we won’t get any exceptions – obviously we’d have to change the Index view to display the users.
Transient, Scoped, Singleton, Instance
In the example above, I used the “AddTransient” method to register the service. There’s actually 4 options to register services:
- AddTransient
- AddScoped
- AddSingleton
- AddInstance
Which option you choose depends on the lifetime of your service:
- Transient services are created each time they are called. This would be useful for a light service, or when you need to guarantee that every call to this service comes from a fresh instantiation (like a random number generator).
- Scoped services are created once per request. Entity Framework contexts are a good example of this kind of service.
- Singleton services are created once and then every request after that uses the service that was created the first time. A static calculation engine might be a good candidate for this kind of service.
- Instance services are similar to Singleton services, but they’re created at application startup from the ConfigureServices method (whereas the Singleton service is only created when the first request is made). Instantiating the service at startup would be useful if the service is slow to start up, so this would save the site’s first user from experiencing poor performance.
Conclusion
Microsoft have added their own dependency inversion container to the new ASP.NET Core framework in MVC6. This should be good enough for the needs of many ASP.NET projects, and potentially allows us to avoid adding a heavyweight third party IoC container. | https://jeremylindsayni.wordpress.com/2016/03/29/how-to-use-built-in-dependency-inversion-in-mvc6-and-asp-net-core/ | CC-MAIN-2017-26 | refinedweb | 716 | 53.31 |
mark florisson, 28.02.2012 11:28:
> On 28 February 2012 10:25, Stefan Behnel wrote:
>> mark florisson, 28.02.2012 11:16:
>>> On 28 February 2012 09:54, Stefan Behnel.
>>
>> I was going to pass a constant flag into the macro that would let the C
>> compiler do the right thing:
>>
>> """
>> #ifdef WITH_THREAD
>> #define __Pyx_RefNannySetupContext(name, acquire_gil) \
>>     if (acquire_gil) { \
>>         PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure(); \
>>         __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), ...) \
>>         PyGILState_Release(__pyx_gilstate_save); \
>>     } else { \
>>         __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), ...) \
>>     }
>> #else
>> #define __Pyx_RefNannySetupContext(name, acquire_gil) \
>>     __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), ...)
>> #endif
>> """
>>
>> That also gets rid of the need to declare the "save" variable independently.
>
> I don't think that will work, I think the teardown re-uses the save variable.

Well, it doesn't *have* to be the same variable, though.

Speaking of that, BTW, there are a couple of places in the code (Nodes.py, from line 1500 on) that release the GIL, and some of them, specifically some error cases, look like they are using the wrong conditions to decide what they need to do in order to achieve that. I think this is most easily fixed by using the above kind of macro also for the refnanny cleanup call.

Stefan
>IRC Channel
#/g/wdg @ irc.rizon.net
Web client:
>Learning materials
>Frontend development
>Backend development
>Useful tools (embed) - Discover new open source libraries, modules and frameworks and keep track of ones you depend upon. - Guides for HTML, CSS, JS, Web APIs & more.
>NEET guide to web dev employment (embed)
>Use these sites to paste large amounts of code
who /late night grind/ here
>tfw js finally starting to make a little bit of sense to my pea brain
Please help, why my code isn't working? Trying to make a sticky header when I scroll down
It's probably something stupid, I'm checking the variable values on the console and they're undefined for some reason.
>>52646227
I "fixed" the undefined-variables problem; removing the global $(function(){}) does the trick for some reason, but I'm still not getting the nav to stick to the top when I scroll down. I just checked the height method: it only calculates the height of the header once, when the page is loaded, but I guess that can be fixed by making it redo the calculation every time I resize the window.
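For reference, the behavior I'm after boils down to a simple check plus re-running it on scroll and resize (just a sketch; the sticky class and the header/nav markup are assumptions):

```javascript
// Pure decision helper: the nav becomes fixed once the page has
// scrolled past the header (both values in pixels).
function shouldStickNav(scrollTop, headerHeight) {
  return scrollTop > headerHeight;
}

// Wiring it up with jQuery (assumed markup: a <header> followed by a <nav>),
// recomputing the header height on every event so resizes are handled too:
//
// $(window).on('scroll resize', function () {
//   var stick = shouldStickNav($(window).scrollTop(),
//                              $('header').outerHeight());
//   $('nav').toggleClass('sticky', stick);
// });
```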
i'm supposed to be making a rock, paper scissors game in js on codecademy. this is what i have. what's up with it?var rockPaperScissors = function(Player1, Player2) {
if (Player1 === "Paper"); {
if (Player2 === "Rock");
return "Player 1 Wins!";
}
else {
return "Player 2 Wins!";
}
if (Player1 === "Rock"); {
(Player2 === "Scissors"); {
return "Player 1 Wins!";
}
else {
return "Player 2 Wins!";
}
if (Player1 === "Scissors"); {
(Player2 === "Paper"); {
return "Player 1 Wins!";
}
else {
return "Player 2 Wins!";
}
}
alert("Welcome to case sensitive rock, paper, scissors!");
}
var rockPaperScissors.Player1 = prompt("Please choose Rock, Paper or Scissors");
}
var rockPaperScissors.Player2 = prompt("Please choose Rock, Paper or Scissors");
}
}
>>52646818
The syntax is incorrect. Don't place semicolons after if statements. If you need two conditions to both be true, place them inside the same parenthesis and separate them with logical AND (&&)
For exampe, your first if:if (Player1 === "Paper" && Player2 === "Rock") {
return "Player 1 Wins!";
} else {
return "Player 2 Wins!";
}
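Putting those fixes together, one way the whole function could look (just a sketch — a lookup table avoids the nested ifs entirely, and it also handles the tie case your version was missing):

```javascript
var rockPaperScissors = function (player1, player2) {
  // What each choice defeats.
  var beats = { Rock: "Scissors", Paper: "Rock", Scissors: "Paper" };

  if (player1 === player2) {
    return "It's a tie!";
  }
  if (beats[player1] === player2) {
    return "Player 1 Wins!";
  }
  return "Player 2 Wins!";
};
```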
Can someone explain wordpress to me? Why do people use it when there are better frameworks and libraries? Because it's easy to make a simple blog or static webpage?
>>52647192
I ask my boss why we use wordpress every day. He can't come up with a good answer. I think the owner is just set in his ways.
>>52647192
Because there's a lot of support, plug ins and templates
>>52647192
because some people are just born with brain cancer.
Has anyone been using Go?
Toughts?
>>52648617
Some weird syntax, I don't really get it. IPFS is implemented in Go, so it must somehow be useful I guess.
Hey guys,
I'm using the Bluebird for Promises, but do you know another library to retry failed Promises?
Basically I'm connecting to an unreliable API, and I need to retry in some case (like error 503), but not in other (Error 404), or do a reauth and retry in other cases.
I'm not sure how to do that "properly"
>>52649408
Upgrade to ES6?
Is there any way to keep track or maintain element names? I get lost now and again working on medium sized projects, how the hell do major companies maintain their code when they have hundreds of elements some of which are named
mail-container-top-title
>>52650055
query selectors?
Names are old-fashioned
>>52650076
I dont get it, I just read up on query selectors and don't see how they help
How would a new guy go about drumming up business from locals? How does one successfully pitch their web dev services?
>>52650724
Go to business events and network. It's annoying, but when you've got practically no track record, it's the easiest way to meet clients.
It could be really hard to get your first client, but once you do, make sure to kick ass for them. They could become a faithful client for years to come. My first client still regularly contacts me when he needs work done and has introduced me to other clients.
Sales meetups are good too because the people there need web dev, and you'll also learn sales techniques, which you'll need to get work.
My general sales strategy is to get the other person talking about what they do (the people at professional meetups are usually very passionate about what they do) and then only mention that you're a freelance web developer when they ask what it is you do. If they need work done, they'll say something or hint at it, and if they don't, they might know someone who does.
Don't try to sell on your first meeting though. Let them talk so you can find out what they need and can better present yourself as the solution to their problems when the time comes. Plus if you start selling right out the gate, it comes off as desperate(incompetent) or at best mercenary(expensive). Neither of which you want to come off as.
Networking can really give you the edge when it comes to freelancing too. I'll put out a couple of tips on how to do that in another post.
>>52650974
What about cold calling/emailing businesses? I see a lot of people with only facebook pages and no actual website.
>>52650724
>>52650974
Networking is important because every business needs web development, but they probably won't need it when they meet you. But it'll be really convenient for them if when the time does come or they meet someone who needs web dev, they can say, "oh, I know a guy!" Be that guy.
As I said earlier, you should go to professional meetups in your area. Then get them talking and basically only talk about yourself if it's incredibly relevant or if they ask. (It's a well known way of getting people to like you.)
So let's say you've done all this, hit it off with a potential contact, and they say something along the lines of
>I gotta move on now, it was fun talking to you though. Do you have a business card?
at which point you either give them your business card or pull my signature move (because I'm too lazy for business cards)
>reach for wallet
>stop and pretend you just remembered something
>you know, I just gave out my last one. Do you have a card?
It's better than saying you don't have a business card, trust me. If they don't have a card either, then just exchange email addresses.
Then you rinse repeat with the next person. (Easy mode conversation starter at these events: "Hi I'm Anon. What do you do?" btw)
So at the end of the night, you leave with a pocket full of business cards. Neato. Now what? Now you follow up. Literally all you need for first follow up is an email saying something along the lines of this is Anon from such and such. It was nice meeting you and talking about whatever we talked about. And maybe some friendly nonsense. Doesn't matter much.
The important parts here are actually emailing them, alluding to where you met, and mentioning what they talked about and making a comment about what they do. THIS IS NOT FOR THEIR SAKE. THIS IS FOR YOUR CONVENIENCE. Keeping track of everyone you meet is hard. This way, you don't forget those important details and can do a search through your emails to find whoever you need to find.
Whats /wdg/s opinion on pic related?
Why?
>>52651196
I don't really mess with that much, and I don't have a website or even a proper Facebook page or linkedin. (I should really fix that, but meh.) I pretty much rely on my personal network and repeat clients for business. And word of mouth.
The most I'll do in the way of cold calling is when someone tells me about someone I want to meet or work for, and I contact them from there. But then I can ask for an introduction or say "I was talking to so and so, aaaand..."
I mean, I probably could cold call now with reasonable success, but I don't see much point. And if I had done that in the beginning, people would ask what I've done before and can they see my portfolio, to which I'd say "uuuhhhh about that..." and look like a total idiot and get nothing out of it.
Meetups are a much better strategy as far as I'm concerned.
>>52647192
Basically it's incredibly easy to use and make themes for. You don't have to know much PHP to set up a site. This usually means WP devs are really bad at programming and are usually more likely to copy paste code over and over again.
>>52650076
>query selectors?
Literally doesn't know what he's talking about.
>>52650055
You need to maintain a style guide and use something like ITCSS. Primer is a good example:
>>52646764
If height is calculated after the page is loaded, you could try placing that code in a $(document).ready() event listener.
I can't see where that script is loaded in CodePen (I'm on mobile), but you'll typically want to include your scripts near the bottom of the body tag. This way all the HTML is already loaded when the script starts. You could also wrap it in an event listener so the script starts when the DOM is fully loaded. You might be querying for DOM elements that don't exist yet.
I'm starting to hate the Bootstrap look, so I want to customize it. Good tuts?
>>52651452
Cool concept, but generally the templates are too general to be effective and making your own templates will take more time than you save.
I've tried it, but I wouldn't use it (unless you have a team of idiots who can't properly make a skeleton/add files to your project)
>>52650974
>>52651342
>>52651473
Thanks.
I still can't grasp the difference between object literals and constructors, and when should i add the 'new' keyword when making objects?
As far as I can tell object literals are for making 'single', 'static' objects, can't find a better explanation, and constructors are like templates of objects, you can make several instances of them by making new variables and calling them with the new keyword and adding parameters, am I wrong?
I'm guessing you can create like a monster template that takes health and damage, with functions to calculate them, then you can make a goblin and golem variables and give them different parameters, they will both be their separate objects but using the constructor as a base?
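That's basically the right mental model. Here's a minimal sketch of the monster idea described above (hypothetical names, not from any particular tutorial):

```javascript
// An object literal is a single, one-off object:
var lootChest = { gold: 100, locked: true };

// A constructor is a template; `new` stamps out a fresh instance each call:
function Monster(health, damage) {
  this.health = health;
  this.damage = damage;
}

// Shared methods go on the prototype so every instance can use them:
Monster.prototype.takeHit = function (amount) {
  this.health -= amount;
  return this.health;
};

var goblin = new Monster(30, 5);  // separate objects...
var golem = new Monster(120, 15); // ...built from the same template
```

Forget the `new` keyword and `this` ends up pointing at the global object instead of a fresh one, which is a classic source of bugs.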
>>52646227
You are selecting ".mainNav" but . is a class selector. You need an id selector: #
But why use javascript at all? Just apply the css at the start
>>52652027
Simple, don't use Bootstrap
>>52652213
A valid solution I guess. But the grid system is too useful.
>>52652238
How is it too useful?
So I had a salary negotiation the other day and made a counter-offer, and the interviewer pretty much asked "Why do you want more money?"
I thought I was prepared for the negotiation but that one completely threw me off. It seems like an inappropriate question, but how would you reply?
>>52652238
Then just recreate their grid system. It's quite easy just to pull that part.
It's bad though, it interlinks the HTML to the CSS. Your HTML then dictates the style, when that should be the sole job of the CSS
>>52652261
Blow and hookers, obvs.
>>52652261
That question is really "Why should I give you more money?" State solid reasons why you would want more (better supporting a family, looking to buy a house, needing to pay off debt, saving for the future, etc) rather than just "I like a bigger number in my bank account so I can buy nicer things". Afterwards, follow up with why you deserve more (I work quicker or more efficiently, it's fair given the salaries of similar jobs, etc)
>>52652336
Solid advice. I heard never to bring up your own financial situation or style of living though, and it seems like mentioning what you need the money for breaks that rule. Would the following answer, although very political, be fine?
"I know that I am bringing X to this company and, as I plan on being here for a while, I want this decision to make sense."
It's also worth noting that they're hesitant to pay me more because I'd already be making more than anyone else there. IMO everyone else's salary is irrelevant, but whatever
>>52652260
>>52652278
I can't html/css. And honestly, I don't even want to bother after I've read SO answers with so many hacks just to get a small thing working. Let alone getting it to work on all browsers.
>>52652392
True, I guess I'm too personal when it comes to the workplace. I'd say that's fine. Essentially you are arguing that you are worth the money and it would be a good decision to give them more. If they prod further into the personal, just say the reasons are important, but personal
>>52652422
What audience do you have to develop for? IE 9, 8, 7? As far as I recall, bootstrap doesn't really handle anything that requires hacks. I think they don't even vendor prefix their css. If you are allowed flexbox then that's far superior to any grid system.
>>52650974
>>52651342
>>52651473
Thanks for the writeup!
Any advice on what to read re: networking? Besides Carnegie.
Norms, etiquette, that kinda thing - I'm Russian, so I guess cultural norms might be different and not always obvious.
Why do people use hipster languages that won't exist in 10 years? Why not use PHP? It'll never die. PHP is eternal.
>>52652513
Good shit anon why have I never heard of this.
>What audience do you have to develop for?
Biologists, so probably IE 6
When I set up a negative bottom margin to one of my elements it disappears, does anyone know why?
>>52652661
What languages do you qualify as hipster anon
>>52652661
But php is shit m8
>>52652692
Node.js, Ruby on Rails etc
>>52652666
Alright satan. Do you not have some analytics to see what your users use? If they are all on chrome or FF or the latest IE/Edge then you can throw away all the hacks and give the middle finger to that one idiot on IE 5.5, just tell him to upgrade his browser.
>>52652688
Can you create a "Minimal, Complete, and Verifiable" example? Negative margin should work fine.
>>52652730
The web application is still in development. But I got some people trying it out and they all use chrome/firefox/safari - so fuck IE honestly.
Safari is not a bitch I hope?
Anybody got recommendations for a decent CMS for smaller projects? Language base isn't really a concern, but my team mainly works with PHP and Ruby.
We're currently stuck on Wordpress and I hate developing for it.
>>52652661
Because nobody knows what languages will become eternal and which ones will fade.
Just because C/++ dominates doesn't mean Rust won't be very popular in a decade. There's actually no reason for Rust to die out because it is so much better than its predecessors (IMO). People just like to cling to whatever's around because if it ain't broke don't fix it (but it is broke).
>>52652721
Wouldn't consider these hipster, Node.js is very fast and RoR is great for applications using relational data. The paradigms are what is important; if you're half-competent, you can learn any framework or platform over the weekend, so it's not like it's a huge waste of time.
>>52652666
What's it about?
>tfw biology degree
>>52652857
Data visualisation and manipulation
>>52652768
It's pretty much the same code, it works here for some reason. The only difference is the video not being linked in codepen, weird thing is that the orange part of the nav bar doesn't disappear but a different element or anything, just a :last-child selector
These are the relevant parts of the CSS:
#mainNav {
height: 9vh;
z-index: 10;
width: 100%;
background-color: $cDarkBlack;
display: flex;
align-items: stretch;
margin-bottom: -9vh;
}
.mainNavSticky {
position: fixed;
z-index: 10;
top: 0;
}
.text {
padding-top: 10em;
}
>>52652932
>but *it's not* a different element
>>52652902
>Data visualisation
>trigger d3.js nightmares
>>52653073
I'm using d3js. It's cool. What don't you like about it?
Accidentally asked the wrong general, so I'll repost here.
Let me preface this by saying I'm a complete and utter dingus.
I do have a Neopets-tier understanding of html, but that's about it.
I have an idea for a website, but I'm not really sure where to start.
I want to make one of those charity websites with the one click to donate button, but have one button donate for multiple causes.
That, and have it so petitions are way easier to fill out and require less datamining and no spam emails.
pls no bully
>>52653182
Sounds fun. Do you have any questions?
>>52652422
Well it's funny that everyone shits on webdevs for working with such "easy" languages like html and css, but when it comes to using those easy languages, they back down because it seems too hard to write some markup and css that will work in all major browsers. Just something that I find really funny on /g/.
>>52653212
Yes. Where do I even start?
I figure it as a one stop shop for quick, effortless activism, so I want to make sure everything is functional above all, and without having to sign up for anything.
The petition thing seems the most difficult, because I'm not sure html was meant to be used that way.
>>52653273
Like anything else, you need to start with the basics.
Host a 'Hello, World!' website.
Host a website that lets you do things, like a calculator or a word find.
Host a website that lets you enter information into a form, and that information is stored into a database.
Then you can build a website that does things and stores information and displays that information to the right people in the right way.
Then you can add functionality to use PayPal or Bitcoin to transfer funds from users to 3rd parties.
Etc, etc.
You do have a year or so to work on this, right?
Retard here currently learning the MVC model.
I'm writing a registration. Am I assuming right that I should transfer the db data from the model and the POST data to the registration script and do everything in there?
>>52652932
Maybe it's a stacking context problem?
I'm not sure unless you can recreate that behavior inside of the codepen.
>>52653272
I respect good web devs. HTML and CSS are easy though, they just require you to know all the quirks, which takes a lot of time and patience I imagine.
>>52653414
I found where the problem was, but I don't know why it's doing it. The concept id has a background color and as soon as I remove it the navbar shows again. So I guess it is a stacking issue, although navBar and concept aren't even related.
#concept {
background-color: $cWhite;
padding-top: 14em;
padding-bottom: 7em;
border-bottom: 1px solid $cDarkGray;
.conceptSummary {
width: 60%;
text-align: center;
margin: 0 auto;
}
p {
margin: 2em auto;
}
p:last-of-type {
margin-bottom: 7em;
}
.clientLogos {
margin: 0 auto;
display: flex;
flex-wrap: wrap;
justify-content: space-around;
li {
}
}
}
>>52653430
Yes, HTML and CSS are easy to learn only as syntax, but not quite as easy to work with, for many reasons (different implementations in browsers, variable support for different standards, etc). I remember when I was writing small C++ programs and everything actually made more sense than in CSS.
I actually get less stuff done if I have to write a lot of CSS manually, than if I have to write C++. Simply because a lot of things in CSS are quirky, it's not very clear what causes something that should work not to work. Hence, the whole culture of SO questions and answers on html+css quirks.
It's no wonder that so many devs just use some generic framework and just customise a few elements.
>>52653821
Here's a shot in the dark. If you add "position:relative" to #mainNav and move z-index from .mainNavSticky to #mainNav, does it still have the same problem? (You may have to change .mainNavSticky to #mainNav.mainNavSticky to set precedence when scrolling down)
>var x = require('y').z
How am I supposed to write this code, but using imports instead? Something like "import x from 'y'", but requiring the specific function.
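For what it's worth, the ES-module equivalent of that require line is a named import with a rename (this assumes 'y' actually exposes z as a named export, which depends on the module):

```javascript
// var x = require('y').z  becomes:
import { z as x } from 'y';
```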
>>52653339
I do have the time, this is just a passion project.
Thanks for the advice, anon!
Shitty material to make shitty programmers.
>>52646818
You need to use "this" before Player1 and Player2, use && for checking multiple values, toLowerCase is your friend.
You can just put the prompts in the function instead and call that when the document is loaded completely. Hint: window.onload
>>52653974
Relative position fixes it but I had to modify the jQuery and get rid of the sticky class entirely because it wouldn't stick anymore.
var $mainNav = $("#mainNav");
var $mainNavSticky = "mainNavSticky";
var $headerHeight = $('header').height();
$(window).scroll(function() {
if( $(this).scrollTop() > $headerHeight ){
$mainNav.css("position", "fixed").css("top", "0");
} else {
$mainNav.css("position", "relative").css("top", "");
}
});
Is this really the only way to remove an attribute in jQuery? css("top", ""). Seems kind of dirty.
>>52654336
Did you try changing .mainNavSticky to #mainNav.mainNavSticky with your old JS? I'm surprised that didn't work
Oh, and I'm not a huge jQuery guy, but an internet search seems to say that's right. I prefer vanilla:
element.style.top = null;
>>52652554
I have no idea about what's good to read to that end, but I'll tell you how I learned because it definitely did not come naturally to me.
I picked up the basics of how to get people to like me through an interest in psychology and working semiprofessionally as an actor for a bit. Really, I can boil most of what I learned from those down to:
>keep the other person talking as much as possible and try to be genuinely interested in what they say
>when you do talk, it's better to leave them wanting more than to leave them wanting less
I have a natural tendency to babble on without letting others say much, so those were really important lessons for me. Might not be as much an issue for others.
From there, I mostly learned by going to meetups and paying attention to what the really successful networkers do. I also found that many such people have a bunch of little tricks they use (like the emails for remembering who everyone is), and they're usually proud of these tricks and are more than willing to share if you ask them and act impressed (which usually isn't hard because a lot of those tricks are genuinely impressive.) I've learned more from that than I would ever have imagined.
And the best part about that method, is you'll learn techniques that are proven to work in your culture. Heck, I've even learned some tricks that only work in the city I live in by talking with more experienced people.
Good networking does require getting out of your comfort zone though, and it's often a lot of work, but it's totally worth it as far as I'm concerned.
>>52654428
Yeah, unless there's a difference between #mainNav .mainNavSticky and #mainNav.mainNavSticky
I don't suppose I can combine javascript and jquery in the same statement?
>>52654430
Ass kisser.
>>52654523
>unless there's a difference
#mainNav .mainNavSticky
An element with id mainNav that has a descendant with class mainNavSticky
#mainNav.mainNavSticky
An element with both id mainNav and class mainNavSticky
You would want the second
>>52654523
Yep, vanilla js and jquery are compatible. jquery is just a series of vanilla js anyways
Note that $mainNav is not a vanilla element, but a jquery wrapped element. You could either do:
document.getElementById("mainNav").style.top = null;
or
$mainNav[0].style.top = null;
Further note that indexing a jQuery object with [0] unwraps the first DOM element inside the wrapper.
>>52654542
I'm not even going to argue with that. I've found there's few situations a little well-placed flattery can't solve. It helps that I'm an idiot who's easily impressed/amused and likes most people. For me, it's just a matter of saying what's on my mind at the right time. I'm not afraid of calling people out on their bullshit either when it's appropriate though, so in practice it's rarely seen as ass kissing. But nah, yeah, I've become a manipulative cunt over the years. Ass kisser is totally fair.
>report a phishing site to the hosting provider
>ticket gets ignored
>report a phishing site to the registrar
>get a we didnt du nuffin answer
welp
>>52655115
Report it to anti-virus companies. Many will block the domain
>>52655115
What's the site?
Tag it on OpenDNS.
>>52651767
Every wp tutorial actually encourages copy pasting code. Sucks I have to learn it for work. Why can't more people jump on the django bandwagon?
>>52652789
Check out wagtail. It's open source and written in Python on top of django. Easily customizable and scalable if you're familiar with python and django.
>>52655197
Just reported it to Google. I mostly reported it because it's infesting the ads on my favorite news site. And phone users would get automatically redirected towards it while it activated your phone's vibrate function. It's super annoying compared to other shitty phishing sites.
Unbelievable how much I have to bend over backwards to suit older versions of IE.
What the hell do Angular/React do anyway? I only hear buzzwords when I watch quick explanation videos.
>>52656579
They're front-end JS frameworks. They basically enhance HTML and the way data is rendered to the user.
OMG....
@rockPaperScissors
switch($resulta){
case'rock':
if ($resultb == 'rock') $r = 'even';
if ($resultb == 'paper') $r = 'a lose';
if ($resultb == 'scissor') $r = 'a wins';
break;
case'paper':
.
.
.
break;
}
alert, print, echo or cuming $r;
We dont talk about loops right here...
@sticky nav
First get a myload() and a myscroll() function.
Then make a myresize() function, don't forget a little timeout in there, and trigger both functions above.
AND Dont manipulate the css with javascript set a class and write clear css, sass or less code.
@media all and (max-width:@mydick){
* {
content:'LEARN TO CODE';
}
}
>>52656725
So React or Angular? Both? I'm looking at some job postings and most of them want Angular, but I'm in Mexico and we're like 3 years behind when it comes to web technologies.
>>52656808
Angular 2 makes Angular obsolete. Sure, you'll have the people who used Angular still developing Angular, but you won't get Google's support. It's dead.
So let's say I want to call a script or a command on my server and render the stdout on my web page. What would be the smartest way to do that, do I really need to install PHP just for this?
>>52656808
I have no idea. I haven't gotten around to learning one of those yet so I'll let others advise you further on that.
But generally, posters in these threads seem to favor React.
So for an entry level front end position, would a knowledge of html, css, js, jquery and ruby be sufficient? Only on js now but trying to figure out where to go after that (and frameworks like angular).
>>52657149
Yes. You'll learn more at a junior position in a month than at home preparing for that position for 6 months.
Start getting your bullshit game on, buy a nice shirt and go out jobhunting faggot.
>all of the above based on my own experience
>>52656917
>>52656917
Nope, you don't even need a server for running Javascript and PHP. Just take a look at xampp, installed in 3 mins. Even supports SQL
>>52657299
>xampp
Eh, the server runs nginx and I don't need a database. I guess I'll just install php-fpm.
Should I switch to Angular 2 from knockout?
>>52657458
Never mind, knockout works just fine.
>>52656808
Vue.js
Any spanish-speaking from here knows acamica.com?
Is it any good?
>>52658048
>consigue el trabajo soñado ("land your dream job")
>Globant
Corré, negro, corré!!! ("Run, man, run!!!")
Someone describe a day as a Jr/Semi-Sr Front End developer
>>52656377
Welcome to webdev where you can't use any of the best web standards and tech, because they're not supported by some old version of IE, still used by a bunch of old farmers, Afghan mujahedins and cheapskate companies.
>>52658363
Is it really that bad still? Unless these statistics are rubbish, why should we care about a fraction (IE8 and below?) of the 6% market share IE supposedly has?
>>52656579
They do the View part of the Model-View-Controller pattern. Essentially keeps the program logic separate from all the dirty work of modifying the DOM tree
I'm a huge faggot please rape my face.
OK now that's out of the way, how to I import a Themeforest theme into Dreamweaver?
I just purchased this template
after someone recommended it on /gd/
I'm trying to build a website for a friend and my PLAN was to edit the code bit by bit, put my logo in place, my text, my links, etc. That's my coding level.
here's the folder, filled with .php and .css files. I'd like to bring it into the split view to edit it. It's for a friend and it's my first website.
>>52658886
oh and there's a big .psd file with layers to edit stuff
>>52657149
Depends if you want to do frontend or backend. For front end I'd skip the jquery (just an api that you can easily look up, it's better to know vanilla), and instead focus on angular 1.x or react. If you want to do backend, research what your target company is using, and learn that. Do they have a SQL or noSQL database? You should be proficient in whatever type of database they use. You should also know enough of what backend language they use to skirt by.
>>52658468
As long as clients are asking for IE support, it is bad.
Well, IE 11 supports almost everything you'd want to use and IE 10 is somewhat worse, but below that shit starts to fall apart (IE 9, 8, 7, even 6).
>>52658886
>Dreamweaver
Also, most wordpress templates come with options for you to edit colors, logos, fonts, text, and all that direct from the wordpress dashboard. No need to edit the source code for that. You just need to set it up on the server. You can download XAMPP and do it locally so you don't mess up if it's your first time
>>52659020
We once had a client asking for 5.5 IE and AOL support a couple years ago. Their user demographic was literally grandparents and great-grandparents. We managed to do it, but we were laughing the entire time. We made the smallest font around 24pt.
>>52659110
i think you have to "upgrade" the WP account in order to import themes.
it's pretty shitty.
>>52659337
You don't need to host on word press. If you have the theme already then treat it like any other website source code. Host it elsewhere.
>>52659358
i'm saying you can't "add new theme" on wordpress (edit it) unless you have a paid account
i've tried importing the .php files into dreamweaver but cannot then use live view, that only works for html files.
there is no index.html file in folder
>>52659404
found a guide, ty
sigh that guide is not what i was after
eh well
>>52659404
I remember editing wordpress themes right in the dashboard without needing a premium account. Maybe you're getting confused with the Wordpress blogging service and the actual Wordpress CMS.
This javascript rock paper scissors game on codecademy is ruining my life. I have no idea what I'm doing. I copy pasted an anon's suggestion above but am still coming up milhouse as far as a solution. I don't think I'm fully grasping the syntax. Anyone else retarded here? Here's what I have now.
var rockPaperScissors = function(Player1, Player2) {
if (Player1 === "Paper" && Player2 === "Rock") {
return "Player 1 Wins!";
} else {
return "Player 2 Wins!";
}
}
if (Player1 === "Scissors" && Player2 === "Paper") {
return "Player 1 Wins!";
} else {
return "Player 2 Wins!";
}
}
if (Player1 === "Rock" && Player2 === "Scissors") {
return "Player 1 Wins!";
} else {
return "Player 2 Wins!";
}
}
var rockPaperScissors.Player1 = prompt("Please choose Rock, Paper or Scissors");
}
var rockPaperScissors.Player2 = prompt("Please choose Rock, Paper or Scissors");
}
}
What does it mean for a framework to be "opinionated"?
>>52660768
I have no idea what you are even doing
var rockPaperScissors.Player1 = prompt("Please choose Rock, Paper or Scissors");
}
var rockPaperScissors.Player2 = prompt("Please choose Rock, Paper or Scissors");
Why are you doing this? rockPaperScissors is a function, it should be rockPaperScissors(input1, input2); where input1 and input2 are the values returned from prompt.
>>52661574
It means that it has a strict or semi-strict way of doing things, and it pushes you to do things that way.
Copy pasta:
>If a framework is opinionated, it locks you into or guides you toward its way of doing things.
>For example: some people believe that a template system shouldn't provide access to user defined methods and functions as it leaves the system open to returning raw HTML.
>So an opinionated framework developer only allows access to data structures. By design, the software is limiting and encourages the designer into doing things their way.
>>52661619
So how should I go about using the prompt?
>>52661574
>What does it mean for a framework to be "opinionated"?
It means there's generally considered one right way to do things within that framework, sometimes but not always it will mean the framework will take steps to keep you from doing things a different way.
It's not always a bad thing, having agreed upon standards makes coming into projects with existing code a lot easier and having a canonical right way frees framework designers/implementers to optimize more aggressively than if they needed to support a wider array of configurations.
An example would be Django's ORM. It supports arbitrary DB backends; the core project maintains Postgres/MySQL/SQLite/Oracle support. This is "opinionated" in the sense that a range of things are supported, but it also means DB specific things (like postgres' collection types or GIS support) require some dark magic to really utilize, the vanilla driver won't leverage them. By contrast, ORMs that pick one DB backend tend to make better use of its full range of features because they don't need to present the lowest common denominator interface.
>In an online course
>Finally grasp a programming concept
>Guy in the video presents yet another concept that goes over my head
Is this how it will be during my entire learning process?
>>52661574
It means it was at least partially written by a woman.
>>52660768
>var rockPaperScissors.Player1 = prompt("Please choose Rock, Paper or Scissors");
var X.Y = Z is always a malformed statement. If X is defined then the "var" doesn't mean anything and if X isn't then trying to set one of its properties will always fail because it's undefined.
>>52661670
var input1 = prompt("etc");
var input2 = prompt("etc");
rockPaperScissors(input1, input2); // the logic for the game lives inside the function
Retard here.
Can someone educate me on the different components that would be involved in making something similar to an online ide?
I've been reading about html5, javascript, etc but it's hard to differentiate between legitimate information and someone shilling some framework.
>>52661670
assuming prompt is a function defined by codecademy.
var input1 = prompt("Please choose Rock, Paper or Scissors");
var input2 = prompt("Please choose Rock, Paper or Scissors");
Also looking at your code again, you have a lot of braces wrong. You should learn how code is structured.
{ } denote a block of code, so
function(x, y) {
... // stuff in here is a block of code that belongs to the function
}
This is the same for control flow statements like if and else.
if(someBooleanExpression) {
... // code block to execute when someBooleanExpression is true
} else {
... // code block to execute when someBooleanExpression is false
}
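Putting that advice together, one possible corrected version of the game function (a sketch with made-up result strings; the inputs are parameters here because prompt() only exists in browsers):

```javascript
function rockPaperScissors(player1, player2) {
  // toLowerCase so "Rock" and "rock" are treated the same
  var p1 = player1.toLowerCase();
  var p2 = player2.toLowerCase();
  if (p1 === p2) return "Tie!";
  // Each key beats the choice it maps to:
  var beats = { rock: "scissors", paper: "rock", scissors: "paper" };
  return beats[p1] === p2 ? "Player 1 Wins!" : "Player 2 Wins!";
}

// In a browser you'd gather the inputs with prompt() and call it once:
// rockPaperScissors(prompt("Rock, Paper or Scissors?"), prompt("Rock, Paper or Scissors?"));
```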
>>52661716
>Is this how it will be during my entire learning process?
Yeah, but it gets less painful. There's a tendency among novices to be really self derogatory, while experts will discover something and be like "oh, that's cool". There's no reason you can't have that same attitude as a novice but most people lack the knowledge/self confidence early on to tell the difference between correcting a stupid mistake and actually learning something new.
bulma.io
>>52661769
Every IDE is a text editor combined with a compiler/interpreter so those are the two things you need to provide for. You can outsource the text editing to something like codemirror (although you'll still need to provide for file storage somehow, server based would make the most sense but it's up to you). As for the compiler, if it's JS or a language written in JS you can do it client side but otherwise you'll need to ship code back to a server to get compiled/interpreted and executed and then rig up some mechanism for returning results to the client. If you don't have a firm grasp on how this would work then I recommend not doing it, executing user submitted code on your server is a real security risk unless you have a bulletproof security plan.
>>52661769
What >>52661916
said but if you still want to check an online IDE, you can take a look at the code for
It's really modularized so it'll be quite difficult actually.
>>52654430
Thanks man, I appreciate the advice.
I'm so upset right now:
firefox 44 contains a regression which is still present up to and including 47 (nightly) on windows 7 which causes all menus to render incorrectly with a transparent background and causes selections to be solid black on black (think spoiler text, without the hover)
so new version of firefox is pretty much unusable to me, right? (don't tell me to install windows 10 that's not a solution)
firefox 43 which I'm stuck with now has all kinds of intermittent issues with the console and very strange behaviors regarding Promises combined with XHR and I think CustomElements.js from webcomponents.js which I'm using is causing issues as well.
I'm very close to going full botnet and just using Chrome, but it has its own issues and I've been using Firefox since it was called Firebird. It is incredibly frustrating that the web platform is advertised as modern and capable and amazing implying that every browser is equally as stable and developed by competent engineers and that standards mean that everything is well-designed, logical smooth sailing but nothing could be further from the truth. It's all a bunch of lies, a god damn house of cards built out of matchsticks soaked in gasoline touted by megalomaniacal ignorant know-it-alls, eager to tell you that you're doing it wrong to make themselves feel smarter. Knowing the pitfalls of this giant pile of shit which does not work reliably 100% of the time which no one fully understands anymore does not mean that there's nothing wrong with it.
Don't reply to this post. It's just incredibly frustrating. Web is just a means to an end, I understand that much. But to all you newcomers out there, no: it doesn't get easier. The more you learn, the less you actually know. Everything is changing constantly and security updates are pushed along with breaking changes and senseless UI updates and there's no opting out. Backwards compatibility is high whereas forwards compatibility (new standards) is about [comment too long]
>>52661978
>just using Chrome
Just do it. Chrome has always had objectively superior dev tools anyway.
What's the deal with .io? Is it a free domain or something?
Why's everyone using it?
>>52661978
>But to all you newcomers out there, no: it doesn't get easier.
Yeah but as somebody who spends most of their time on the computer anyway (writing and editing), I'd much rather struggle with this than being a carpenter or some other physically demanding shit.
>>52662212
it sounds leet and hackery
>>52662212
It's generally 5-6 times more expensive than a .com tld. It just looks cool and the higher cost means a somewhat lower noise-to-signal ratio than other tlds.
>>52648617
it's boring, but solid
huh, getting a part time job in retail while I try to figure out whether I actually want to do web dev.
doing retail might be enough to make me want to do anything but that tho
>>52650974
quality post
>>52651452
I prefer to just take the config for my previous project as a base for my next. keep doing that and you'll be really familiar with all the bits and why they're there. I don't trust this kind of thing to make the right decisions.
>>52651767
WP themes are some of the worst code I've ever seen. Hacks on hacks on hacks. Totally incomprehensible. And WP itself isn't much better. But the devs just focus on making them look pretty so they can maximise $50 downloads on theme forest
>>52652238
a much lighter alternative is skeleton framework. and probably plenty others. but you can write your own grid system in about 30 lines.
>>52652261
Say that's what you believe the market rate to be for someone of your level. If you know the market and your skills, it shouldn't throw you off, stick to your guns. Be prepared to walk. You make or lose a lot of money in salary negotiations, they're not to be taken lightly.
>>52652278
Why recreate it, you can select what features too add to the download files, just select grid system.
>>52652422
why are you writing css if you can't css? just learn, it takes like a week or two
>>52652336
I don't think that's the right approach. they don't care about your personal situation. they just want to pay fair price for a fair service. so quantify those.
>>52652666
really, you can just forget IE6 in 2016. IE8 is for dinosaurs. Support IE9+ if you're pushing it.
>>52652721
rails is ancient at this point and an extremely conservative choice. node is quite established, too. real hipsters are using haskell and elixir/phoenix and lua/openresty and someshitthathasn'tbeeninventedyet. as far as I can tell the main reason php is limping along is because of all those shitty hosting providers with their LAMP stacks, and of course, the abomination that is wordpress
>>52653182
If you have the inclination to learn, start with some basic online courses in web development from udemy / codeacademy or the like. But if you just want to focus on the charity, I'd try to get a small amount of backing and just pay a professional to code it for you.
>>52653372
you'll have a registration form (V). that will be html or, if you're doing an SPA, a view library like React.
something that handles the form submission, probably via HTTP (C). This can be as simple as a route handler.
something that stores the submitted data in some database, via ORM, or with a plain query language like SQL (M)
you could put your validation in the C or the M. I prefer the M.
But if I were you, I'd learn this with a concrete example, not in the abstract.
>>52654772
You'd make a great politician
how did you guys get enough experience to get actual jobs? I hear about freelance and your own projects.
What sites are best for freelancing? What types of projects would impress?
>>52656808
React is technically better and simpler, but more companies hire for Angular devs. But those probably will be painful jobs. Go for react. It's all gonna change in 1 year anyway.
>>52657149
That's sufficient. I'd prefer to hire someone with good knowledge of the fundamentals over someone invested in some shitty framework.
>>52657299
>you not even need a server
you do realise that the 'A' in XAMPP stands for the Apache webserver
>>52658241
check emails
write css
write js
go for coffee
respond to colleague's bad jokes on slack with emojis
do code review with senior, get told how to make things simpler
commit some shit
write css
write js
go for lunch
ask designers for clarification of ux flow
ask product owner (who does literally nothing) for clarification of business logic
document clarifications
talk to backend dev about http/rest interface, agree on routes and data definitions
document
write css
write js
look up shit on stack overflow about how to solve some bizarre problem on iOS webkit
go home
drink wine
>>52660768
that's not valid javascript. just go back to the basics for now and learn about variables, functions etc
>>52660768
Here's how I'd write that
var ROCK = 'Rock';
var PAPER = 'Paper';
var SCISSORS = 'Scissors';
var P1_WIN = 'Player 1 Wins!';
var P2_WIN = 'Player 2 Wins!';
function getHand() {
return prompt("Please choose Rock, Paper or Scissors");
}
function getWinner(player1, player2) {
function compare(stronger, weaker) {
return player2 === stronger ? P1_WIN : player2 === weaker ? P2_WIN : 'Draw';
}
switch (player1) {
case ROCK:
return compare(SCISSORS, PAPER);
case PAPER:
return compare(ROCK, SCISSORS);
case SCISSORS:
default:
return compare(PAPER, ROCK);
}
}
var player1 = getHand()
var player2 = getHand()
alert(getWinner(player1, player2))
>>52661769
You'd need a text editor with syntax highlighting at the least. Probably a js compiler for whatever language you're targeting, or interface to a server with such a compiler. Integration with whatever package ecosystem is relevant. I dunno. It sounds kinda deep.
>>52661978
just use chrome buddy. can't remember the last time I had a problem with it. inb4 botnet
>>52662212
because the .com namespace is pretty much all taken
>>52664228
even cleaner:
var Rock = 'Rock',
Scissors = 'Scissors',
Paper = 'Paper',
Draw = 'Draw',
P1Win = 'Player 1 wins!',
P2Win = 'Player 2 wins!',
choose = 'Please choose Rock, Paper or Scissors';
var beats = {
Rock: Scissors, Paper: Rock, Scissors: Paper
};
function getWinner(h1, h2) {
return beats[h1] === h2 ? P1Win :
beats[h2] === h1 ? P2Win :
Draw
}
alert(getWinner(prompt(choose), prompt(choose)));
>>52651810
>ITCSS
That's exactly what I was looking for, thanks anon.
>>52658241
Ok I'll do it.
It's 0822 and I just got to work, filled up the water bottle and got me a cup of tea. Next I'll check out my mails, jira, and notes from yesterday and start working on whatever I've got to do. We've got flexible hours so as long as I'm at work before 0930 when our daily is no one cares. If I go to the gym before work I usually show up around that time.
Start doing whatever is assigned to me, right now it's reworking breakpoints. If I get something to a state that to me seems like it's ready on my local machine commit those changes, deploy it to the development environment, and test it there. Call over the designer at some point so they can have a look at it if it's any larger work with UI. If everything looks fine build a new release from it and add it to the list of new components for the next release to our integration testing environment. Once the next release to the test environment happens assign task to a tester.
If the testers find out something they come and tell me about it and ask if it's supposed to work like that, usually not, if it's a bug they'll create a new task from it and I'll fix it at some point when I have time.
During the day keep a few breaks and a lunch at some point.
>>52664607
God, one day I hope I can do this for a living.
>mfw trying to pick a JS library for my API frontend
This is always the hardest part. I like that React has pre-rendered HTML, but now there's also Vue which is the latest and greatest™. Ember might be worth it as well. Fuck Angular though.
>>52646818
you also don't have a stipulation for ties
>>52665544
Vue is just like a cleaner, better-implemented angular. But it's still a fundamental mistake to put logic in html. Better to represent html in a logic-capable language (js), which is the approach React and other virtual dom-based libraries take
I got a job in a web development firm, and i have to create a program in PHP that is basically an interactive timetable for incoming projects and jobs and shows how much time each part of the project/job has left
I need to do this in a month/6 weeks
i do not know PhP
i only know HTML5, CSS, some javascript and some C#
how fucked am i?
>>52666829
you could do that in two weeks
why do you have to use php though? would be better to use your existing c# skills. it's a better language, too
>>52667125
to be fair my boss did say i could use asp.net to do the project, may as well just build on my existing skills
but some other jobs he wants me to do include a lot of php based colab projects in the firm so it would be nice to learn that and be able to do more work
>>52667159
its really easy, dont worry about it.
but consider that a job that requires php isn't going to help you develop in your career
>>52667304
this isnt even the career that i want, its just a really really fucking good way to earn money while i build up a portfolio for the career i really want
don't get me wrong i still enjoy web dev and coding but its not the be all and end all for me
>>52666829
That sounds quite simple. PHP is easy to learn and shouldn't take you more than a day or two to get going. If anything, making it interactive with JS seems like the most difficult part about this.
>>52666811
>html in a logic-capable language (js)
>better
What are the best free/cheapo hosting for personal stuff? I'm trying to build a portfolio of random shit to go freelance but I can't decide whether I should invest in native mobile apps or web stuff.
I'd much rather do web stuff since it encompasses both worlds, but hosting can be a daunting prospect.
>>52662606
>noise-to-signal ratio
wat
>>52664495
that's beautiful anon
How retarded is my code? i'm trying to pick up better patterns in js.
(function() {
var App = {
el:{
body: $('body'),
listItem: $('.item'),
closeButton: $('.close')
},
openProjects: function() {
App.el.listItem.each(function() {
var projectName = $(this).data('name');
var projectDiv = $('body div.' + projectName);
$(this).on('click', function() {
projectDiv.addClass('open');
});
});
},
closeProjects: function() {
App.el.closeButton.each(function() {
$(this).on('click', function() {
App.el.body.find('.open').removeClass('open');
});
});
},
init: function() {
App.openProjects();
App.closeProjects();
}
};
App.init();
})();
could someone confirm if the syntax is correct in this rails app i'm trying to make. It's saying there's a syntax error and won't render the page
<%= if (:title == "some string") %>
<meta property="og:image" content=""/>
<% end %>
Someone add this to OP
How the hell am I just finding out about this? So many awesome components in there.
What the fuck is prototype for
>>52668631
>>52668631
>>52668631
I've tried defining a different variable in that page's controller called @meta but it's still not rendering
<%= if @meta == "some string" %>
<meta property="og:image" content=""/>
<% end %>
// in the controller
def page_controller
@meta = "some string"
end
>>52668287
>jQuery
>2016
shiggy
>>52668631
if statements don't use parens in ruby
>>52648617
Simple language, but that can be considered a strength. Very easy to do multithreading.
Has anyone tried FreeCodeCamp?
I'm a semi-senior Java backend developer and already know how to do some basic HTML+CSS+Javascript frontend development, but I need to build a portfolio. Is it worth it to go through the (incredibly long) coursework?
>>52669365
>>52669264
>implying vanilla JS is half as efficient to write
do you actually have a job anon?
>>52669432
How's that supposed to help me? I already know what flask is. I just want to use FCC as a way to build up a basic portfolio.
I work a dead-end enterprise Java job and, in the not-so-far-away future, I will need to be able to get paid remotely in a freelance manner. Anyone have any experience with FCC in that regard?
Is there a way, with CSS, to create a centered large div, surrounded by several smaller divs? Sort of like my pic, but they would be evenly spaced and about the same radius from the middle div.
Any suggestions or resources to look at?
I'm doing the Ruby on Rails tutorial, and I'm confused about some syntax. Here's a bit of code:
class CommentsController < ApplicationController
http_basic_authenticate_with name: "dhh", password: "secret", only: :destroy
def create
@article = Article.find(params[:article_id])
# ...
end
# snippet for brevity
what is meant by the colon after "name", "password", and "only"? I know having a colon before a word means it's a symbol, but I haven't seen code like only: :destroy anywhere, and I can't find anything about it.
>>52670200
Yes. Just position the outside elements based on the element in the center. If at all possible make your surrounding elements at least somewhat rounded. That way they'll look better.
With a bit of math and thinking you could even make it automatically adjust the positioning of your elements based on how many you have.
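The math is just evenly spaced angles around a circle. Rough sketch in plain JS (names are made up; it returns {x, y} offsets you'd feed into absolutely positioned elements):

```javascript
// Place n items evenly on a circle of given radius around a center point.
// Returns an array of {x, y} coordinates, one per surrounding element.
function circlePositions(n, radius, centerX, centerY) {
  var positions = [];
  for (var i = 0; i < n; i++) {
    var angle = (2 * Math.PI * i) / n; // even angular spacing
    positions.push({
      x: centerX + radius * Math.cos(angle),
      y: centerY + radius * Math.sin(angle)
    });
  }
  return positions;
}
```

Then set left/top on each surrounding div (minus half its own width/height so it's centered on the point) and it'll adapt automatically to however many elements you have.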
>>52670237
It's short for a symbol with the hashrocket.
name: 'dhh'
:name => 'dhh'
>>52669264
So what's the correct syntax?
>>52670200
I suppose you could do it like this:
Not sure if there's any advantage in using bootstrap for that, though.
You could equally just use a bunch of absolutely positioned boxes, using relative units.
>>52663967
>>52664607
How long have you been working as a front end?
>>52670892
I'm >>52664607
I'm officially a full stack dev so there's a bit of Java besides html, css, and js. Though I prefer doing front end much more than working with JEE. I've got a couple of guys in my team who outskill me in Java by a mile and actually enjoy working with it so I'll try to delegate all the bigger back end tasks to them. On the other hand I'm better at front end stuff which means that I mostly do front end stuff with some back end every now and then.
As to the actual question I've been working for just over a year now.
>>52670834
>bootstrap
>>52671295 (You)
These threads are so fucking helpful
>>52671559
/wdg/ > /agdg/ >>>>>>> /dpt/
>>52671783
anime OP /dpt/ > /wdg/ > /agdg/ >>> /dpt/
This reminds me, why isn't this OP anime?
The hell just happened to this thread?
When I refresh this page, am I 'connected' to 4chan's site, or have I just requested some data?
>>52672430
think rationally about it. what do you think web browsers do when they hit a URL?
>>52672487
I suppose I'm asking more of a question like:
Do servers keep a running count of 'active' connections? Client who have recently requested the data and/or are actively requesting more data.
>>52672582
They can and most sites do but by default you just see http requests to the server.
>>52672582
fairly sure 4chan must do client tracking in some way or form, and very likely keeps some form of session data
that has nothing to do with having an "active" connection or how the client interacts with the website, it's all server side stuff.
what exactly is it that you're really asking though?
>>52672620
>>52672612
Someone asked me if they could see everyone currently connected to their website.
My first thought was "No, that's not how it works, you little shit.", but then I figured I could basically pull a list of IPs that made requests in last x amount of time.
I'm just rubberducking, really. Thanks.
Relatively new to web development, just did a Java Server Faces project for college.
I want to learn a backend framework, should I go with Django for a more fixed structure or should I go with Node + Express + Angular?
I don't have much experience in Python nor Javascript.
>>52672738
Go with python but not django, use flask. It's more hands-on and detailed. That way you learn python, which is really helpful for backend but also for other marketable skills like data analysis and, with a few libraries, visualization; it's more bang for your buck.
Django is the bootstrap of backend python development.
why do big websites take so long to compile and debug reeeeeeee
>>52672942
are you retarded?
>>52672973
probably
>>52672831
>flask
mah senpai
>>52672942
>not working with JIT-ed languages
>any year
I just cannot wrap my head around MVC frameworks.
It sounds like everything goes through the index page and calls other pages for the "content"?
Big issue being PHP classes and that I haven't used classes since Java a decade ago.
Also maybe my book just sucks.
>>52673143
if you don't understand classes you need to go back to school or get a Java book or something
>>52672694
Check to see if session is started, store IP and session timestamp, create analytic panel with jquery to poll db and display locations with an ip-to-location api.
You can also just bullshit then and say it will strain the servers then recommend some analytic plugin code.
How do I import a theme into Axure?
How do I import a theme into dreamweaver?
pls
Hey guys, can I get some feedback on my new website? New to web development.
aetherspace.ch
>>52673230
>you need to go back to school
lol
Actually probably. My GI bill still has like 65,000 in it.
>>52673312
well I haven't gone to school myself but I still understand OOP
it's kind of an important skill to have nowadays
>>52673342
I understand it. Just not with PHP. My problem is that frameworks use like Eloquent and Blade when I am used to raw SQL and Bootstrap or Foundation.
>>52673143
functionality and components of the website fall under model, the controller reads in these components and projects them on the view.
If you're creating a todo list app you structure it like:
>model
- adding and removing from list
- changing the style of the list
>controller
file that calls all of the models
>view
the .html or whatever files that are projected on and dynamically created
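In plain JS, with no framework, the same split for a todo list might look something like this (sketch only, names made up):

```javascript
// Model: owns the data and the rules for changing it. No markup here.
var model = {
  items: [],
  add: function (text) { this.items.push(text); },
  remove: function (index) { this.items.splice(index, 1); }
};

// View: turns model state into markup. Knows nothing about storage.
var view = {
  render: function (items) {
    return '<ul>' + items.map(function (t) {
      return '<li>' + t + '</li>';
    }).join('') + '</ul>';
  }
};

// Controller: receives user actions and wires model and view together.
var controller = {
  addItem: function (text) {
    model.add(text);
    return view.render(model.items);
  }
};
```

The point is the model never touches markup and the view never touches storage; the controller is the only piece that knows about both.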
>>52673572
So everything is technically happening on one page that pulls and displays content based upon get/post?
How does this.com/cats/catpictures know the filetype of catpictures or how to amend it and keep the URL without an extension?
On django, how would I go about adding the next and previous objects, based on a nonunique charfield, to my context for a single object view?
I tried these simple queries, but they will fail if there is more than one object with the same name value.
previous = Person.objects.order_by('name').filter(name__lt=current_person.name).last()
next = Person.objects.order_by('name').filter(name__gt=current_person.name).first()
>>52673817
ignore file extensions, they say nothing about a file.
when you ask for a specific path a script can just read it an image from a database or a file directory and return the bytes to the browser.
before that it says which filetype it is in the header.
>>52652238
Then use Sass and include the grid system (mixins) only.
Bootstrap 4 may even have a grid-only download.
>shitty agency jobs building wordpress, magento and just generally legacy shit for years
>move to a big city, get job at a tiny whitelabel agency, less than 10 staff
>making stuff with WebGL for big brands
>building our own SaaS product
>was allowed time to figure out how to set up docker so we can start up node.js servers in containers to handle communication with a front-end client that needs its own server for each instance, communicating over websockets.
>started trying to figure out a CD strategy for that same project too
>fixed a (big) security flaw on a (small) production site for a well-known European brand
>only been there a month
Coming from big teams working for SMEs to this tiny team working for international brands has been so fulfilling. No more time tracking everything I do. No more of the constant change and fix requests. And best of all: Actually getting time to refactor, do R&D and try new approaches.
>>52674753
Is Sass hard to learn?
>>52669530
>implying vanilla js is the only alternative to jQuery
>>52670892
I did it for 8 years, moved on to product guy now
>>52674922
How long did it take you to get to the point where you were comfortable enough to apply for jobs?
Is 20 years old too young to get a job as a junior front end? I just started my first semester in college, but I'd rather just skip college entirely and go straight to a job if possible.
>>52675411
I just hired a 22 year old, he's awesome. young is good in this industry
>>52675411
>>52675495
On the contrary, is there an age that would be considered too old to do this? (23 atm)
So I just followed this guide:
I put all the files on my VPS and I can get to the index and register pages, but when I try to make an account I get server error 500.
Where should I start troubleshooting?
>>52675608
Good question.
I'm 27 and also curious.
>>52675639
Database connect/login information.
Otherwise PHP version.
>>52675357
About 6 weeks. I started reading nettuts when that first spun off from psdtuts, built some "WOW SO VALID XHTML and CSS" sites off their tutorials, realised a lot of people in my area with real experience were producing stuff worse than that and thought "I can do better". Applied for a job at an agency building what we called "brochure websites" and got it straight away. I was only a few months out of high school, apparently all the other candidates were even worse than I must have been.
That was about 8 years ago, so obviously I long ago realised how shit I was then, but I guess not knowing how shit I was at the time helped. It takes (took?) a surprisingly small amount of knowledge to get started building websites. No one is asking for degrees, just for you to show you can do what they do. Just go for it, worst that happens is they say no, and it doesn't even bar you from re-applying to the same company a few months later.
>>52673987
I give up, I seriously can't think of a way to do this with queryset lookups, I'm just going to iterate over the objects like so:
def get_prev_next(current_person_id):
persons = Person.objects.order_by('name').iterator()
previous_person = None
next_person = None
for person in persons:
if person.id == current_person_id:
try:
next_person = next(persons)
except StopIteration:
pass
break
previous_person = person
return (previous_person, next_person)
Far from a one liner, but it works.
>>52675639
Start with the PHP error log file. You can use phpinfo(), or `php -i` from the CLI to find out where the error log is on your setup (depends on whether it's an apache module, or fastCGI. Using phpinfo() is probably the safest way to check as at least you know it's being parsed by the same php interpreter as your web server is using, whereas the CLI version of php might be a different install with a different error log that won't contain your 500 in it).
>>52675823
Do you mean making a page called info.php with
<?php
phpinfo();
?>
in it?
I don't really know what I am looking for.
>>52675411
>tfw 32 and hoping to get an entry level position
>>52675411
It's never too young. The problem is that junior positions are rare and front-end only positions are rare. If one does open, it'd be flocked.
>>52675608
>>52675680
No. If you're over 25 trying to get your first job in the industry you'll probably want to demonstrate your capability somehow. Put a website up on Heroku or something. If you have the money and time you might want to consider a coding school, but depending on where you live there might not be one near you.
>>52676547
Them feels.
I spent the last decade and a half deving as a hobby. Now I'm mid-life crisis age with no money to blow on a sports car.
My area is booming with web dev jobs with no one to fill them. I'm seeing jobs posted for 80 dollars an hour. That's just bananas considering the low cost of living.
>>52676966
At least you have dev jobs in your area. I'll have to move if I want a shot.
What's with all the bootstrap hate in this thread?
>>52674753
It does, but just as a heads up even their grid system comes with bloated bullshit.
>>52675192
No, it's really easy but you're best off learning it when you find you actually have a need for it. So for example if you're working on a site and think "fuck this shit im tired of referencing this color 20 times now and having to go back to get the right hex/rgb code" its probably time to look into it.
>>52677169
The area I'm in is Dallas/ft worth if you are ever looking. Web dev spots came out of nowhere the last 2 years. Enough positions to attract the coding bootcamps to the area. Just 5 years ago there were next to zero dev jobs here.
I need a database to hold like 2KB of data for my asp.net app.. what should I use
Which linux distro for LAMP?
>>52677502
here in Austin too, same story
>>52678503
Whatever you want. Depends on the purpose of the data and if it actually needs to be server-side.
Sounds like something I would stick into a session/cookie
>>52676966
>80 / hr
*chokes* uh, where?
>>52678557
I started typing out a response and realized I don't need a database lol.
If certain animation could be done with pure CSS3 should I be doing it with CSS instead of Javascript?
The code I've seen to make somewhat complex CSS3 animations is huge and looks like a pain to manage.
>>52678559
DFW.
Just saw another for a MongoDB programmer for 65 an hour. Another for entry-level PHP at 45 an hour.
Compared to what I make now, the thought of landing any of those is like lottery-winning levels of excitement.
>>52678656
I'm in dfw and I have basic html/css/javascript skills
how i get job
Opinions on Jekyll?
>>52678703
Portfolio probably. I'm not a good person to speak to about that.
>>52678775
my portfolio is basically nonexistent. I have made a couple sites fucking around with some bootstrap templates but that's about it.
>>52646818
use else if man.
>>52678791
My portfolio is an fmylife clone based around feel memes.
Basically I also have no portfolio.
>B.A in IS&T ready for early 2017
>currently self-teaching css and js
if I make a few projects between now and the summer could I realistically apply and get hired for a dev internship if I'm not extremely shit?
>>52668743
its a shared object
>>52678829
I'm also working on a guitar tab editor in javascript/react but I have no idea what I'm doing so far.
>>52646818
>alert("Welcome to case sensitive rock, paper, scissors!");
i lold
>>52646818
>>52678920
to avoid case sensitivity you can just capitalize all the letters entered and use all caps for your elsif chain
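e.g. with the beats-object version from earlier in the thread, a rough sketch of that normalization (all-caps keys, input uppercased before lookup):

```javascript
// All-caps keys, so "rock", "ROCK" and "Rock" all resolve the same.
var beats = { ROCK: 'SCISSORS', PAPER: 'ROCK', SCISSORS: 'PAPER' };

function normalize(hand) {
  return String(hand).trim().toUpperCase();
}

function getWinner(h1, h2) {
  var a = normalize(h1), b = normalize(h2);
  return beats[a] === b ? 'Player 1 wins!' :
         beats[b] === a ? 'Player 2 wins!' :
         'Draw';
}
```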
I'm building a new site with drupal and php for a company that makes airplane parts. There are long lists of parts for different models of aircraft that I need to store in a database and display in a bootstrap accordion html structure.
I wanted to make a database system of storing the parts individually and then "attaching" them to the aircraft models, but the guy I'm building the site for doesn't want to do it that way because there are too many parts, and they are all specific to each model so they don't really repeat at all. I have longtext textarea inputs for saving the lists, but I need a way to parse them into the html structure.
I'm not actually entering the data, so I tried to show him how to put the lists into html and then just copy and paste it into the textarea to save in the database, but that didn't go well because if one tag isn't closed properly it messes the whole page up. I was thinking of setting up a way for him to write out the lists with non-common character delimiters so that I can explode and parse the string into html before saving it in the database, but I'm not sure if that is a good idea. The string he would enter would look something like this; I could explode the string on "[" and then parse the parts into html based on the first character of each array node.
Is this a dumb idea, or a custom solution to a custom problem? Should I parse before saving and store the html in the database, or save the string in the database and parse it before displaying it on the web page?
[{{Header 1
[{Subheader
[part name
[part name
[part name
[{Subheader
[part name
[part name
[{{Header 2
[part name
[part name
>>52678902
If I knew those techs I would be pumping out a twitter clone and writing press releases about how it allows free speech.
>>52678804
>>52678804
maybe also a switch statement, i.e.
switch($play) {
case $rock:
// wins against $scissors, loses against $paper
break;
case $scissors:
// etc.
break;
}
>>52669530
>>implying vanilla JS is half as efficient to write
>he doesn't even know how to do basic event listeners and dom manipulation without jQuery
>>52678949
Just explode by newline.
That way boss only has to enter one part number per line.
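Rough sketch of what that parse could look like, in JS for illustration (the bracket markers are from the sample in the question; no error handling, so a part line before any header will throw):

```javascript
// Parse the "[{{Header / [{Subheader / [part" line format into a nested
// structure. One entry per line; the prefix decides the nesting level.
function parseParts(text) {
  var headers = [];
  var currentHeader = null, currentSub = null;
  text.split('\n').forEach(function (rawLine) {
    var line = rawLine.trim();
    if (!line) return;
    if (line.indexOf('[{{') === 0) {        // new top-level header
      currentHeader = { title: line.slice(3), subheaders: [], parts: [] };
      currentSub = null;
      headers.push(currentHeader);
    } else if (line.indexOf('[{') === 0) {  // subheader under current header
      currentSub = { title: line.slice(2), parts: [] };
      currentHeader.subheaders.push(currentSub);
    } else if (line.indexOf('[') === 0) {   // part: under subheader if any, else header
      (currentSub ? currentSub.parts : currentHeader.parts).push(line.slice(1));
    }
  });
  return headers;
}
```

From that structure, building the accordion html is a straightforward loop, and a malformed line can at worst mangle one list entry rather than break the whole page.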
>>52672831
>Django is the bootstrap of backend python development
What would be the flask of frontend? React, Angular, something else?
>>52678519
arch
>>52679014
Do you think I should store the string or html?
>>52678990
Not him but it's not about JavaScript being more efficient. It's about jQuery's simplicity, cross-browser compatibility, and ease of use. Most companies work with jQuery because that means less time and money spent fixing browser issues; hell, 50% of the non-Senior front end devs I've met have barely even touched vanilla JavaScript.
>>52676341
[Fri Jan 29 02:28:41.955460 2016] [:error] [pid 707] [client xxx.xxx.xxx.xxx:xxxxx] PHP Warning: mysqli::mysqli(): (HY000/1045): Access denied for user 'sec_user'@'localhost' (using password: YES) in /var/www/html/includes/db_connect.php on line 3
It appears something is wrong with mariaDB/mysql. I have absolutely no idea on how to set that up, and had a fucking terrible time working through it with that guide I linked to earlier. I remember after each command I saw something like "0 lines affected" or something, which seemed odd at the time. Now I think it meant I wasn't actually doing anything.
Anyone know a really good guide that has the basics of setting up databases and users in mariaDB/mysql?
>>52679135
>It's about jQuery's simplicity, cross browser compatibility, and ease of use.
Unless you're supporting IE 8 there's almost no reason to use jQuery for things like event listeners or DOM manipulation. It's much simpler and more concise to write that code without jQuery. You do know JS, don't you?
>Most companies usually work with jQuery because that means less time and money spent on fixing browser issues
You are not getting paid enough if you are working for a company that has non-corporate clients insisting on supporting ancient browsers. Get out. Fast.
>hell 50% of the non-Senior front end devs I've met have barely even touched vanilla JavaScript.
Which is why they are still not senior devs.
I get it, jQuery is easy to use. But if you're including jQuery in every project you start just because "lol I dk how to write an event handler/pluck elements from this form without jQuery" then you need to hit the books. However, I totally understand using jQuery when it comes to writing software for corporate clients.
Also if you say anything about using jQuery for animation I will knock your fucking lights out you lil bitch.
>>52679283
>php and mysql
shigy :D
>>52679297
I don't know if you're agreeing with me or not. Your first post in the reply chain was this >>52669264 probably implying that no one in 2016 uses jQuery? but then you acknowledge that corporations will almost always require you to use it.
>>52679348
Hey, it's babbys first web site. I'm just learning shit.
>>52679377
No, >>52669264 is not me.
>>52679297
Is jQuery loading on the page already? If it is, no reason not to use it. If it isn't, no reason to load it just to use for that.
>>52679104
I don't know what you are doing. If only the one visitor/client needs the string then just store it as a js variable. If only that one visitor/client needs the string across several visits to the site then use a cookie. If everyone that visits the site needs the string then use a one table database.
Node.js worth learning?
>>52679377
>>52679297
Anyone who is good at javascript will have their own library of functions to handle dom and animation and other stuff. The problem is when someone else looks at my code and sees that I am using my own library they are like "WTF?! Why doesn't he just use jQuery? What a dumb ass." Using jQuery makes your code much more manageable for others after you are gone. Otherwise your legacy will be "WTF was that dumbass thinking."
>>52679283
Did you check line 3 of your connect file?
Your db credentials might be wrong.
>>52679297
AJAX with XMLHttpRequest is ass. I include jQuery just for that because it's small and no one notices.
>>52671895
your custom stylesheet applied align:center to everything?
>>52679485
I guess I wasn't at all clear enough. There is already a huge database of aircraft parts info and a whole back end built to handle putting data into the database. The database is used to display all of the parts info in the website like a catalog to customers, but the only transaction is selling a subscription to the site to look at all of the parts info. These long lists of parts are being stored as longtext in one column of one table. The data is meant to be displayed to the members of the site in a popup bootstrap accordion section. Like if you click on a part, this long list of other parts associated with that part pops up. No javascript or cookies are involved at all in building the html. The html structure is built with php and includes all of the parts info from the database. I don't know if I should store the string and parse it into html when the page renders, or parse the string from textarea in the backend so that html is stored in the database and then returned when the page is rendered.
>>52679524
Here's the db_connect file:
<?php
include_once 'psl-config.php'; // As functions.php is not included
$mysqli = new mysqli(HOST, USER, PASSWORD, DATABASE);
I'm thinking I didn't set up the db right. Mainly because I had no idea what I was doing and basically copy/pasting.
>>52679283
read the error, its right in front of you
Access is denied for user 'sec_user'@'localhost'. Can you login via mysql cli with the credentials you are providing the script?
>>52679699
Believe it or not, I absolutely can.
I can log in to mariaDB with the user 'sec_user.'
>>52679735
can you pull up/make changes to the database you are trying to use?
users have permissions on a database level, you may have not set those up.
>>52679754
I figured it out. I had the wrong password in the psl-config file.
After correcting that I was able to register successfully, but when I tried to log in it wouldn't let me access the protected page.
I don't think it's actually logging me in.
What web-development-appropriate languages have support for low-level memory operations? I'm working on a project where I need to cast byte arrays to ints and vice versa frequently (like 60m times per request). PHP isn't cutting it. Do I have to use ASP, or are there better languages?
hey so i just accepted a javascript/php internship, however my computer science degree is mainly focused on java/c so i've got a bit of learning to do.
- i'm familiar with the MVC architecture on a high level, but what exactly is a controller when it comes to programming?
- in PHP the difference between -> and :: is that the former is used on instances of a class, while the latter is used on non instances of a class e.g. calling a static method?
>>52679818
controller is basically just the application itself, you pull in templates and fill them with data from the db.
>>52679845
ah so it is essentially where all the application logic is kept?
>>52679637
Generally, you shouldn't store any HTML in the database.
If the database is so messed up to where it has no primary key, the part name; info; and related parts are all stored in one LONGTEXT, then you should look into a way to explode data and merge into a proper table.
>>52679297
This whole "jQuery is for noobs" thing is really cringey. Like ok, you went and figured out you can rewrite parts of jQ without loading jQ and feel proud. Good for you, but so can everyone else, some of us just have better things to do with our time.
>>52679816
>i need to cast byte array to int and vise versa frequently (like 60m times per request)
What on earth are you doing? Like there's a 90% chance you're doing something very wrong if you think this is a requirement.
To answer the question though, it's probably a good idea to stick to a high level language and use its C interop. I don't know what that looks like in PHP but worst case you can just fork a process to run the C, although most languages have a way to dodge the relatively small but non-negligible process creation overhead.
I guess Go or maybe Rust would be the way to go if you have a really compelling reason to be bit banging at the application level in a HTTP request/response cycle
>>52680272
my project needs a key-value store for some 30m id/values, but also needs that data to be sorted and quickly filtered by 4*250*11 possible categories, so I'm writing my own storage engine. It works fine in C, and for my specific data set it's like 400x faster than using mongodb, but getting the C functions either ported to or accessible from a web-acceptable language is proving difficult.
what is namespace in php?
>>52680064
Most of the parts are stored properly in the database. It's just that this one type of part has these long lists of other parts associated with them, and it is all so specific that it is easier to copy and paste the list than to set up a backend system to store each part individually in the database along with references to subcategories, and then attach the parts to the other part through a query when the page is rendered. Plus updating entries if things changed would be complicated. It's even harder because there are all of these sub-categories in the lists too. I guess ideally I would like to store the lists as an array, but I'm not entirely sure how to do that from a textarea input. It seems easier to parse this:
{{Header 1
{Subheader
part name
part name
part name
{Subheader
part name
part name
{{Header 2
part name
part name
than
array(
'Header 1' => array(
'Subheader1' => array(
'part name','part name','part name'
),
'Subheader2' => array(
'part name','part name','part name'
),
'Header 2' => array(
'Subheader1' => array(
'part name','part name','part name'
),
'Subheader2' => array(
'part name','part name','part name'
),
);
but maybe I am just being stupid and missing something.
>>52680399
"I'm writing my own storage engine" is a statement that's always going to raise some eyebrows in terms of reinventing the wheel but sure, I can think of cases where this might make sense.
Python has pretty good/performant C interop. You might even consider a CGI script if the response format is simple enough that you could handle it through C end-to-end.
>>52680484
Do you know what namespaces are in other languages? It's like that but with a slightly clunkier syntax. You should still use them, PHP not in a namespace is a code smell.
>>52680537
>Do you know what namespaces are in other languages?
not really but i just looked it up. is it so you can tell your class where it's operating in a file structure?
>>52679804
I have no idea what's going on with the log in thing. I went through every file, comparing them to the guide, and everything matched.
But now I get error 500 when I try to go to the index.
I am tired and will go through everything again tomorrow. Maybe even start from scratch.
>>52680506
wow i completely forgot about cgi
>>52680614
Thanks to everyone that was helping, by the way. Really appreciate it.
>>52680564
There's no necessary relationship between file structure and namespaces, although conventionally they're related and some languages enforce a mapping between them.
Their main use is as an organizational tool (related functionality lives in the same namespace) and to avoid naming collisions. We could think of lots of programs that might need like a message class or something, so when you have two libraries that use the same name there's an issue. Ideally we don't want to stick our project's name on the front of everything we name to ensure its uniqueness, so we put things in namespaces. Things can be referred to tersely within the same namespace and verbosely but unambiguously by others.
Lots of languages have a lot of rules about how namespaces work (for example, many languages allow you to bring in another namespace to your own and refer to its contents more briefly if you don't expect there to be naming collisions) but they're all trying to address those two main issues.
is there a tutorial out there for making websites like this?
Basically a tutorial to get me from a blank page to "click a thing and something happens without reloading a page"
>>52680502
Ok, so the original dev fucked it up and just took textarea input for "related parts" and stored it in a longtext, then it just displays that long chunk of text on the part page?
Are they wanting you to make these related parts linkable or are they already links to other parts?
>>52680484
It's retarded.
Basically it's a way of figuring itself through directories or something
laracasts has a good explanation video on it that will confuse you even more and probably piss you off.
>>52681299
>I don't know anything about subject
>I'm going to tell everyone subject is retarded
>>52680969
that's just fundamental javascript mate, that site doesn't even make ajax requests.
>>52681392
Nice meme. Procedural PHP is the best and the only way to show any creativity and skill.
Using namespace with your frameworks and shit is like buying a box of cake mix and calling yourself a baker.
>>52680969
right click -> view source
It's loaded with javascript files, so...javascript.
New thread: >>52681445
>>52681256
Well it's more that I'm designing it now and that is what the client is looking for. It is just a list but it has to fit into the bootstrap html accordion structure. I'm not really concerned about rendering the html after the list is parsed. I'm just not sure the best way to store the data so that I can query it as a string and parse it. I explained to him that the best approach would be to store each part in the list as its own table entry and then map those entries to the parts that they are associated with, but he said he would rather copy and paste than enter each part in the list. I could theoretically parse the string that I get from the textarea and then put each part into a table, but it would take a lot of time to set up, and I don't know how I would update the parts listings if changes needed to be made. Storing the info as an array seems like a good approach, but since the "array" would just be a string anyway until I parsed it, I figured why not make the string easier to parse than a string that is written like an array. That approach still seems weird to me though.
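For what it's worth, the parsing itself is the easy part regardless of language. Here's a rough sketch (in Python rather than PHP, and assuming the {{/{ marker syntax from the example above — adjust for whatever the real markers are) of turning the textarea string into a nested structure:

```python
def parse_parts(text):
    """Parse '{{Header' / '{Subheader' / bare part-name lines into a nested dict."""
    result = {}
    header = subheader = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith('{{'):      # new top-level header
            header = line[2:].strip()
            result[header] = {}
            subheader = None
        elif line.startswith('{'):     # new subheader under the current header
            subheader = line[1:].strip()
            result[header][subheader] = []
        else:                          # plain part name
            # parts with no subheader go into a default bucket
            result[header].setdefault(subheader or 'parts', []).append(line)
    return result
```

From a structure like this it's straightforward to either render the accordion HTML at page time or, later on, insert proper per-part rows into the database.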
Introduction to Pandas¶
Written by Luke Chang & Jin Cheong
Analyzing data requires being facile with manipulating and transforming datasets to be able to test specific hypotheses. Data come in all different types of flavors and there are many different tools in the Python ecosystem to work with pretty much any type of data you might encounter. For example, you might be interested in working with functional neuroimaging data that is four dimensional. Three dimensional matrices contain brain activations in space, and these data can change over time in the 4th dimension. This type of data is well suited for numpy and specialized brain imaging packages such as nilearn. The majority of data, however, is typically in some version of a two-dimensional observations by features format as might be seen in an excel spreadsheet, a SQL table, or in a comma delimited format (i.e., csv).
In Python, the Pandas library is a powerful tool to work with this type of data. This is a very large library with a tremendous amount of functionality. In this tutorial, we will cover the basics of how to load and manipulate data and will focus on common data munging tasks.
For those interested in diving deeper into Pandas, there are many online resources. There is the Pandas online documentation, stackoverflow, and medium blogposts. I highly recommend Jake Vanderplas’s terrific Python Data Science Handbook. In addition, here is a brief video by Tal Yarkoni providing a useful introduction to pandas.
After the tutorial you will have the chance to apply the methods to a new set of data.
Pandas Objects¶
Pandas has several objects that are commonly used (i.e., Series, DataFrame, Index). At its core, Pandas Objects are enhanced numpy arrays where columns and rows can have special names and there are lots of methods to operate on the data. See Jake Vanderplas’s tutorial for a more in depth overview.
Series¶
A pandas
Series is a one-dimensional array of indexed data.
import pandas as pd data = pd.Series([1, 2, 3, 4, 5]) data
0 1 1 2 2 3 3 4 4 5 dtype: int64
The indices can be integers like in the example above. Alternatively, the indices can be labels.
data = pd.Series([1,2,3], index=['a', 'b', 'c']) data
a 1 b 2 c 3 dtype: int64
Also,
Series can be easily created from dictionaries
data = pd.Series({'A':5, 'B':3, 'C':1}) data
A 5 B 3 C 1 dtype: int64
DataFrame¶
If a
Series is a one-dimensional indexed array, the
DataFrame is a two-dimensional indexed array. It can be thought of as a collection of Series objects, where each Series represents a column, or as an enhanced 2D numpy array.
In a
DataFrame, the index refers to labels for each row, while columns describe each column.
First, let’s create a
DataFrame using random numbers generated from numpy.
import numpy as np data = pd.DataFrame(np.random.random((5, 3))) data
We could also initialize with column names
data = pd.DataFrame(np.random.random((5, 3)), columns=['A', 'B', 'C']) data
Alternatively, we could create a
DataFrame from multiple
Series objects.
a = pd.Series([1, 2, 3, 4]) b = pd.Series(['a', 'b', 'c', 'd']) data = pd.DataFrame({'Numbers':a, 'Letters':b}) data
Or a python dictionary
data = pd.DataFrame({'State':['California', 'Colorado', 'New Hampshire'], 'Capital':['Sacramento', 'Denver', 'Concord']}) data
Loading Data¶
You can type
pd.read then press tab to see a list of functions that can load specific file formats such as: csv, excel, spss, and sql.
In this example, we will use
pd.read_csv to load a .csv file into a dataframe.
Note that read_csv() has many options that can be used to make sure you load the data correctly. You can explore the docstrings for a function to get more information about the inputs and general usage guidelines by running
pd.read_csv?
pd.read_csv?
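As a quick illustration of a few of those options — the data below is made up, and the option values shown are common choices rather than anything required by the salary dataset used next:

```python
import io
import pandas as pd

# A small in-memory "file"; a real file path works the same way
csv_text = "name;salary;years\nAlice;50000;3\nBob;NA;5\n"

demo = pd.read_csv(
    io.StringIO(csv_text),
    sep=';',              # field delimiter, when it isn't a comma
    na_values=['NA'],     # extra strings to treat as missing values
)
print(demo.isnull().sum())
```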
To load a csv file we will need to specify either the relative or absolute path to the file.
The command
pwd will print the path of the current working directory.
pwd
'/Users/lukechang/Dropbox/Dartbrains/Notebooks'
Pandas has many ways to read different data formats into a dataframe. Here we will use the
pd.read_csv function.
df = pd.read_csv('/Users/lukechang/Dropbox/Dartbrains/Data/salary/salary.csv', sep = ',') # df = pd.read_csv('psych60/data/salary/salary.csv', sep = ',')
Ways to check the dataframe¶
There are many ways to examine your dataframe. One easy way is to just call the dataframe variable itself.
df
77 rows × 6 columns
However, often the dataframes can be large and we may be only interested in seeing the first few rows.
df.head() is useful for this purpose.
df.head()
On the top row, you have column names, that can be called like a dictionary (a dataframe can be essentially thought of as a dictionary with column names as the keys). The left most column (0,1,2,3,4…) is called the index of the dataframe. The default index is sequential integers, but it can be set to anything as long as each row is unique (e.g., subject IDs)
print("Indexes") print(df.index) print("Columns") print(df.columns) print("Columns are like keys of a dictionary") print(df.keys())
Indexes RangeIndex(start=0, stop=77, step=1) Columns Index(['salary', 'gender', 'departm', 'years', 'age', 'publications'], dtype='object') Columns are like keys of a dictionary Index(['salary', 'gender', 'departm', 'years', 'age', 'publications'], dtype='object')
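If you wanted the index to be something meaningful such as subject IDs rather than sequential integers, set_index does that. A minimal sketch with made-up data:

```python
import pandas as pd

scores = pd.DataFrame({'subject': ['s01', 's02', 's03'],
                       'score': [3, 5, 4]})

# Replace the default 0..n-1 index with the subject IDs
scores = scores.set_index('subject')
print(scores.loc['s02', 'score'])
```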
You can access the values of a column by calling it directly. Single bracket returns a
Series and double bracket returns a
dataframe.
Let’s return the first 10 rows of salary.
df['salary'][:10]
0 86285 1 77125 2 71922 3 70499 4 66624 5 64451 6 64366 7 59344 8 58560 9 58294 Name: salary, dtype: int64
shape is another useful attribute for getting the dimensions of the dataframe.
We will print the number of rows and columns in this data set using fstring formatting. First, you need to specify a string starting with ‘f’, like this
f'anything'. It is easy to insert variables with curly brackets like this
f'rows: {rows}'.
Here is more info about formatting text.
rows, cols = df.shape print(f'There are {rows} rows and {cols} columns in this data set')
There are 77 rows and 6 columns in this data set
Describing the data¶
We can use the
.describe() method to get a quick summary of the continuous values of the data frame. We will
.transpose() the output to make it slightly easier to read.
df.describe().transpose()
We can also get quick summary of a pandas series, or specific column of a pandas dataframe.
df.departm.describe()
count 77 unique 7 top bio freq 16 Name: departm, dtype: object
Sometimes, you will want to know how many data points are associated with a specific variable for categorical data. The
value_counts method can be used for this goal.
For example, how many males and females are in this dataset?
df['gender'].value_counts()
0 67 1 9 2 1 Name: gender, dtype: int64
You can see that there are more than 2 genders specified in our data.
This is likely an error in the data collection process. It’s always up to the data analyst to decide what to do in these cases. Because we don’t know what the true value should have been, let’s just remove the row from the dataframe by finding all rows that are not ‘2’.
df = df.loc[df['gender']!=2] df['gender'].value_counts()
0 67 1 9 Name: gender, dtype: int64
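As a side note, value_counts can also return proportions instead of raw counts by passing normalize=True. A small sketch with made-up values:

```python
import pandas as pd

gender = pd.Series([0, 0, 0, 1, 1])

# normalize=True returns each value's proportion rather than its count
print(gender.value_counts(normalize=True))
```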
Dealing with missing values¶
Data are always messy and often have lots of missing values. There are many different ways in which missing data might be represented, such as
NaN,
None, or
NA. Sometimes researchers code missing values with specific numeric codes such as 999999. It is important to find these as they can screw up your analyses if they are hiding in your data.
If the missing values are using a standard pandas or numpy value such as
NaN,
None, or
NA, we can identify where the missing values are as booleans using the
isnull() method.
The
isnull() method will return a dataframe with True/False values on whether a datapoint is null or not a number (nan).
df.isnull()
76 rows × 6 columns
Suppose we wanted to count the number of missing values for each column in the dataset.
One thing that is nice about Python is that you can chain commands, which means that the output of one method can be the input into the next method. This allows us to write intuitive and concise code. Notice how we take the
sum() of all of the null cases.
We can chain the
.isnull() and
.sum() methods to see how many null values are added up in each column.
df.isnull().sum()
salary 0 gender 0 departm 0 years 1 age 1 publications 0 dtype: int64
You can use the boolean indexing once again to see the datapoints that have missing values. We chained the method
.any() which will check if there are any True values for a given axis. Axis=0 indicates rows, while Axis=1 indicates columns. So here we are creating a boolean index for row where any column has a missing value.
df[df.isnull().any(axis=1)]
You may look at where the values are not null. Note that indexes 18 and 24 are missing.
df[~df.isnull().any(axis=1)]
74 rows × 6 columns
There are different techniques for dealing with missing data. An easy one is to simply remove rows that have any missing values using the
dropna() method.
df.dropna(inplace=True)
/Users/lukechang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: """Entry point for launching an IPython kernel.
Now we can check to make sure the missing rows are removed. Let’s also check the new dimensions of the dataframe.
rows, cols = df.shape print(f'There are {rows} rows and {cols} columns in this data set') df.isnull().sum()
There are 74 rows and 6 columns in this data set
salary 0 gender 0 departm 0 years 0 age 0 publications 0 dtype: int64
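Dropping rows is not the only option. Another common technique is imputation, for example filling each missing value with its column mean via fillna. A minimal sketch with made-up data:

```python
import numpy as np
import pandas as pd

demo = pd.DataFrame({'age': [30.0, np.nan, 50.0],
                     'years': [2.0, 4.0, np.nan]})

# Fill each column's missing values with that column's mean
filled = demo.fillna(demo.mean())
print(filled)
```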
Create New Columns¶
You can create new columns to fit your needs. For instance, you can initialize a new column with zeros.
df['pubperyear'] = 0
Here we can create a new column pubperyear, which is the ratio of the number of papers published per year
df['pubperyear'] = df['publications']/df['years']
Indexing and slicing Data¶
Indexing in Pandas can be tricky. There are many ways to index in pandas; for this tutorial we will focus on four: loc, iloc, boolean, and indexing numpy values. For a more in depth overview see Jake Vanderplas’s tutorial, where he also covers more advanced topics, such as hierarchical indexing.
Indexing with Keys¶
First, we will cover indexing with keys using the
.loc method. This method references the explicit index with a key name. It works for both index names and also column names. Note that often the keys for rows are integers by default.
In this example, we will return rows 10-20 on the salary column.
df.loc[10:20, 'salary']
10 56092 11 54452 12 54269 13 55125 14 97630 15 82444 16 76291 17 75382 19 62607 20 60373 Name: salary, dtype: int64
You can return multiple columns using a list.
df.loc[:10, ['salary', 'publications']]
Indexing with Integers¶
Next we will try
.iloc. This method references the implicit python index using integer indexing (starting from 0, exclusive of last number). You can think of this like row by column indexing using integers.
For example, let’s grab the first 3 rows and columns.
df.iloc[0:3, 0:3]
Let’s make a new data frame with just Males and another for just Females. Notice how we added the
.reset_index(drop=True) method? This is because assigning a new dataframe based on indexing another dataframe will retain the original index. We need to explicitly tell pandas to reset the index if we want it to start from zero.
male_df = df[df.gender == 0].reset_index(drop=True) female_df = df[df.gender == 1].reset_index(drop=True)
Indexing with booleans¶
Boolean or logical indexing is useful if you need to sort the data based on some True or False value.
For instance, who are the people with salaries greater than 90K but lower than 100K ?
df[ (df.salary > 90000) & (df.salary < 100000)]
This also works with the
.loc method, which is what you need to do if you want to return specific columns
df.loc[ (df.salary > 90000) & (df.salary < 100000), ['salary', 'gender']]
Numpy indexing¶
Finally, you can also return a numpy matrix from a pandas data frame by accessing the
.values property. This returns a numpy array that can be indexed using numpy integer indexing and slicing.
As an example, let’s grab the last 10 rows and the first 3 columns.
df.values[-10:, :3]
array([[53638, 0, 'math'], [59139, 1, 'bio'], [52968, 1, 'bio'], [55949, 1, 'chem'], [58893, 1, 'neuro'], [53662, 1, 'neuro'], [57185, 1, 'stat'], [52254, 1, 'stat'], [61885, 1, 'math'], [49542, 1, 'math']], dtype=object)
Renaming¶
Part of cleaning up the data is renaming with more sensible names. This is easy to do with Pandas.
Renaming Columns¶
We can rename columns with the
.rename method by passing in a dictionary using the
{'Old Name':'New Name'} format. We either need to assign the result to a new variable or add
inplace=True.
df.rename({'departm':'department','pubperyear':'pub_per_year'}, axis=1, inplace=True)
/Users/lukechang/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py:4133: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: errors=errors,
Renaming Rows¶
Often we may want to change the coding scheme for a variable. For example, it is hard to remember what zeros and ones mean in the gender variable. We can make this easier by changing these with a dictionary
{0:'male', 1:'female'} with the
replace method. We can do this
inplace=True or we can assign it to a new variable. As an example, we will assign this to a new variable to also retain the original labels.
df['gender_name'] = df['gender'].replace({0:'male', 1:'female'})
Operations¶
One of the really fun things about pandas, once you get the hang of it, is how easy it is to perform operations on the data. It is trivial to compute simple summaries of the data, and because pandas objects are object-oriented, we can chain together multiple commands.
For example, let’s grab the mean of a few columns.
df.loc[:,['years', 'age', 'publications']].mean()
years 14.972973 age 45.567568 publications 21.662162 dtype: float64
We can also turn these values into a plot with the
plot method, which we will cover in more detail in future tutorials.
%matplotlib inline df.loc[:,['years', 'age', 'publications']].mean().plot(kind='bar')
<matplotlib.axes._subplots.AxesSubplot at 0x7fea789a8810>
Perhaps we want to see if there are any correlations in our dataset. We can do this with the
.corr method.
df.corr()
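One caveat worth noting: in recent versions of pandas, calling .corr() on a dataframe that still contains non-numeric columns (like departm or gender_name here) raises an error rather than silently dropping them. Selecting the numeric columns first avoids this; the data below is made up for illustration:

```python
import pandas as pd

demo = pd.DataFrame({'salary': [50000, 60000, 70000],
                     'years': [2, 4, 6],
                     'departm': ['bio', 'math', 'bio']})  # non-numeric column

# Keep only numeric columns before correlating
corr = demo.select_dtypes('number').corr()
print(corr)
```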
Merging Data¶
Another common data manipulation goal is to merge datasets. There are multiple ways to do this in pandas, we will cover concatenation, append, and merge.
Concatenation¶
Concatenation describes the process of stacking dataframes together. The main thing to consider is to make sure that the shapes of the two dataframes are the same as well as the index labels. For example, if we wanted to vertically stack two dataframe, they need to have the same column names.
Remember that we previously created two separate dataframes for males and females? Let’s put them back together using the
pd.concat method. Note how the index of this output retains the old index.
combined_data = pd.concat([female_df, male_df], axis = 0)
We can reset the index using the
reset_index method.
pd.concat([male_df, female_df], axis = 0).reset_index(drop=True)
74 rows × 7 columns
We can also concatenate columns in addition to rows. Make sure that the number of rows are the same in each dataframe. For this example, we will just create two new data frames with a subset of the columns and then combine them again.
df1 = df[['salary', 'gender']] df2 = df[['age', 'publications']] df3 = pd.concat([df1, df2], axis=1) df3.head()
Append¶
We can also combine datasets by appending new data to the end of a dataframe.
Suppose we want to append a new data entry of an additional participant onto the
df3 dataframe. Notice that we need to specify to
ignore_index=True and also that we need to assign the new dataframe back to a variable. This operation is not done in place.
For more information about concatenation and appending see Jake Vanderplas’s tutorial.
new_data = pd.Series({'salary':100000, 'gender':1, 'age':38, 'publications':46}) df3 = df3.append(new_data, ignore_index=True) df3.tail()
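A heads-up if you are on a newer pandas: DataFrame.append was deprecated and removed in pandas 2.0, so pd.concat is the forward-compatible way to add a row. A minimal sketch:

```python
import pandas as pd

df3 = pd.DataFrame({'salary': [50000], 'gender': [0]})
new_row = pd.DataFrame([{'salary': 100000, 'gender': 1}])

# pd.concat replaces the removed DataFrame.append
df3 = pd.concat([df3, new_row], ignore_index=True)
print(df3)
```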
Merge¶
The most powerful method of merging data is using the
pd.merge method. This allows you to merge datasets of different shapes and sizes on specific variables that match. This is very common when you need to merge multiple sql tables together for example.
In this example, we are creating two separate data frames that have different states and columns and will merge on the
State column.
First, we will only retain rows where there is a match across dataframes, using
how=inner. This is equivalent to an ‘and’ join in sql.
df1 = pd.DataFrame({'State':['California', 'Colorado', 'New Hampshire'], 'Capital':['Sacramento', 'Denver', 'Concord']}) df2 = pd.DataFrame({'State':['California', 'New Hampshire', 'New York'], 'Population':['39512223', '1359711', '19453561']}) df3 = pd.merge(left=df1, right=df2, on='State', how='inner') df3
Notice how there are only two rows in the merged dataframe.
We can also be more inclusive and match on
State column, but retain all rows. This is equivalent to an ‘or’ join.
df3 = pd.merge(left=df1, right=df2, on='State', how='outer') df3
This is a very handy way to merge data when you have lots of files with missing data. See Jake Vanderplas’s tutorial for a more in depth overview.
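Besides inner and outer joins, merge also supports how='left' and how='right', and an indicator flag that records where each row came from. A sketch reusing the state example:

```python
import pandas as pd

df1 = pd.DataFrame({'State': ['California', 'Colorado'],
                    'Capital': ['Sacramento', 'Denver']})
df2 = pd.DataFrame({'State': ['California', 'New York'],
                    'Population': [39512223, 19453561]})

# Keep every row of the left frame; indicator adds a _merge column
# recording whether each row matched the right frame
merged = pd.merge(left=df1, right=df2, on='State', how='left', indicator=True)
print(merged)
```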
Grouping¶
We’ve seen above that it is very easy to summarize data over columns using the builtin functions such as
.mean(). Sometimes we are interested in summarizing data over different groups of rows. For example, what is the mean of participants in Condition A compared to Condition B?
This is surprisingly easy to compute in pandas using the
groupby operator, where we aggregate data using a specific operation over different labels.
One useful way to conceptualize this is using the Split, Apply, Combine operation (similar to map-reduce).
This figure is taken from Jake Vanderplas’s tutorial and highlights how input data can be split on some key and then an operation such as sum can be applied separately to each split. Finally, the results of the applied function for each key can be combined into a new data frame.
Groupby¶
In this example, we will use the
groupby operator to split the data based on gender labels and separately calculate the mean for each group.
df.groupby('gender_name').mean()
Other default aggregation methods include
.count(),
.mean(),
.median(),
.min(),
.max(),
.std(),
.var(), and
.sum()
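You can also apply several aggregations at once with the .agg method, which returns one column per aggregation. A minimal sketch with made-up data:

```python
import pandas as pd

demo = pd.DataFrame({'gender_name': ['male', 'male', 'female'],
                     'salary': [50000, 60000, 70000]})

# Several aggregations per group in one call
summary = demo.groupby('gender_name')['salary'].agg(['mean', 'min', 'max'])
print(summary)
```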
Transform¶
While the split, apply, combine operation that we just demonstrated is extremely useful for quickly summarizing data based on a grouping key, the resulting data frame is compressed to one row per grouping label.
Sometimes, we would like to perform an operation over groups, but retain the original data shape. One common example is standardizing data within a subject or grouping variable. Normally, you might think to loop over subject ids and separately z-score or center a variable and then recombine the subject data using a vertical concatenation operation.
The
transform method in pandas can make this much easier and faster!
Suppose we want to compute the standardized salary separately for each department. We can standardize using a z-score which requires subtracting the departmental mean from each professor’s salary in that department, and then dividing it by the departmental standard deviation.
We can do this by using the
groupby(key) method chained with the
.transform(function) method. It will group the dataframe by the key column, perform the “function” transformation of the data and return data in the same format. We can then assign the results to a new column in the dataframe.
df['salary_dept_z'] = (df['salary'] - df.groupby('department').transform('mean')['salary'])/df.groupby('department').transform('std')['salary']
This worked well, but is also pretty verbose. We can simplify the syntax a little bit more using a
lambda function, where we can define the zscore function.
calc_zscore = lambda x: (x - x.mean()) / x.std() df['salary_dept_z'] = df['salary'].groupby(df['department']).transform(calc_zscore) df.head()
For a more in depth overview of data aggregation and grouping, check out Jake Vanderplas’s tutorial
Reshaping Data¶
The last topic we will cover in this tutorial is reshaping data. Data is often in the form of observations by features, in which there is a single row for each independent observation of data and a separate column for each feature of the data. This is commonly referred to as the wide format. However, when running regression or plotting in libraries such as seaborn, we often want our data in the long format, in which each grouping variable is specified in a separate column.
In this section we cover how to go from wide to long using the melt operation, and from long to wide using the pivot function.
Melt
To melt a dataframe into the long format, we need to specify which variables are the id_vars, which ones should be combined into the value_vars, and what to label the resulting columns via value_name and var_name. We will call the values ‘Rating’ and the variables ‘Condition’.
First, we need to create a dataset to play with.
data = pd.DataFrame(np.vstack([np.arange(1, 6), np.random.random((3, 5))]).T,
                    columns=['ID', 'A', 'B', 'C'])
data
Now, let’s melt the dataframe into the long format.
df_long = pd.melt(data, id_vars='ID', value_vars=['A', 'B', 'C'],
                  var_name='Condition', value_name='Rating')
df_long
Notice how the id variable is repeated for each condition?
Pivot
We can also go back to the wide data format from a long dataframe using pivot. We just need to specify the variable containing the labels, which will become the columns, and the values column that will be broken into separate columns.
df_wide = df_long.pivot(index='ID', columns='Condition', values='Rating')
df_wide
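Since melt and pivot act as inverses of each other here, a quick round trip (an illustrative check, using the same construction as the dataset above) recovers the original wide dataframe:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(np.vstack([np.arange(1, 6), np.random.random((3, 5))]).T,
                    columns=['ID', 'A', 'B', 'C'])

df_long = pd.melt(data, id_vars='ID', value_vars=['A', 'B', 'C'],
                  var_name='Condition', value_name='Rating')
df_wide = df_long.pivot(index='ID', columns='Condition', values='Rating')

# pivot sorts the index and columns; both already match the original
# order here, so after resetting the index the values are identical.
recovered = df_wide.reset_index()[['ID', 'A', 'B', 'C']]
print(np.allclose(recovered.to_numpy(), data.to_numpy()))  # True
```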
Exercises (Homework)
Read the salary_exercise.csv into a dataframe, and change the column names to a more readable format such as sex, rank, yearsinrank, degree, yearssinceHD, and salary.
Clean the data by excluding rows with any missing value.
What are the overall mean, standard deviation, min, and maximum of professors’ salary?
Exercise 2
Create two separate dataframes based on the type of degree. Now calculate the mean salary of the 5 oldest professors of each degree type.
07 February 2012 14:56 [Source: ICIS news]
LONDON (ICIS)--
December industrial production fell 2.7%, construction output fell 6.4% and the energy sector was down 2.2%, the ministry said.
November/December productive output was down by 1.0% compared with September/October, it added.
Compared with November/December 2010, output was up 2.8% in November/December 2011.
The ministry said that near-term prospects for industrial production in
However, an increase in December's industrial orders and an improvement in economic sentiment indicators were signs that industry was about to overcome the weakness in production, it said.
In related news, market research group Markit said last week
$\renewcommand{\vec}[1]{\mathbf{#1}}$
Last edited: 19th of November 2017
In a series of laboratory sessions in a physics course at NTNU, the students are studying the rolling motion of objects on a curved track. Due to the curvature, the velocity and acceleration will vary. The students use a high-speed camera to record the motion and compare the results with numerical simulations.
In this notebook, we will simulate a rolling ball on some arbitrary track in two dimensions by solving Newton's second law. The track is made by performing cubic spline interpolation on a set of points. The setup will be based on the setup used in the aforementioned laboratory sessions. For more information, see [1].
First, we import the necessary library packages.
import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt

plt.style.use('bmh')  # Nicer looking plots
An object, such as a ball, is rolling on a curved track. The rotation axis passes through the center of mass. Let $m$ be the mass, $r$ be the radius of the ball and $I_0$ the moment of inertia. The motion of the center of mass is described by Newton's second law
\begin{equation} \mathbf F = m\mathbf a. \end{equation}
The forces on the ball can be decomposed into a component parallel to the track and a component normal to the track, as shown in figure 1. Let the local slope of the track be described by an angle $\theta(x)$. The gravitational force acts on the center of mass vertically downwards. This amounts to a component parallel to the track $mg\sin \theta(x)$ and a normal component $mg\cos \theta(x)$. The forces from the track on the ball have a normal component $N$ (normal force) and the parallel component $f$ (friction).
Figure 1: A cylinder or sphere is rolling on curve $y(x)$. The forces acting on the object are indicated on the figure. At each point $x$, the slope of the curve is defined by an angle $\theta(x)$.
Due to the curvature of the track, the center of mass has a centripetal acceleration normal to the track. The total acceleration is thus given by
\begin{equation} \mathbf a = \dot v \mathbf{e}_\parallel + \frac{v^2}{R(x)} \mathbf{e}_\perp, \end{equation}
where $R(x)$ is the radius of the curvature and $v$ is the speed (along the track). Thus, we obtain the two equations
\begin{equation} mg \sin \theta(x)-f=m\dot v, \label{eq:parallel} \end{equation} \begin{equation} N-mg\cos\theta(x) = \frac{mv^2}{R(x)}. \label{eq:normal} \end{equation}
The first equation describes the motion parallel to the track, while the second equation yields, with $N\geq 0$, a condition for when the ball loses contact with the surface.
An expression for the friction is found by using Newton's second law of rotation,
\begin{equation} \tau = I_0\dot{\omega}. \end{equation}
The friction $f$ is the only force that does not pass through the rotation axis, and is thus the only force that contributes to the total torque $\tau$. The friction acts on the object a distance $r$ from the rotation axis (and at an angle $\pi/2$), so that $fr = I_0 \dot{\omega}$. The ball is assumed to roll without any gliding. By using the rolling condition $v=\omega r$, equation \eqref{eq:parallel} can be written as
\begin{equation} \dot v = \frac{g\sin (\theta(x))}{1 + I_0/mr^2}. \label{eq:vdot} \end{equation}
A complete discussion on rotational dynamics can be found in e.g. [chap. 9-10, 2] and [chap. 6, 3].
Consider the curve $y(x)$ from $A$ to $B$ as shown in figure 2. The curve from $A$ to $B$, with an arclength $\Delta s$, can be considered as a small circle sector with angle $\Delta \theta$. The circle has a centre at $C$ and a radius $R\approx R_A\approx R_B$. The radius of the curvature thus becomes $R=\Delta s/\Delta \theta$. The curvature is in turn defined as $\kappa= 1/R$.
Figure 2: Sketch used to describe curvature and the radius of the curvature. (Taken from [1])
Consider now an infinitesimal arclength $\Delta s \to\text{d} s$. From the figure one can see that
\begin{equation} \text{d}y/\text{d} x = \tan\theta, \label{eq:theta} \end{equation}
which gives
$$\frac{\text{d} \theta}{\text{d}x} = \frac{\text{d}}{\text{d}x}\arctan\left(\frac{\text{d}y}{\text{d}x}\right)=\frac{1}{1+\left(\text d y/\text dx\right)^2}\frac{\text d^2 y}{\text dx^2}.$$
The differential $\text d s$ is given by
$$(\text{d}s)^2 = (\text d x)^2 + (\text d y)^2 \quad \Longrightarrow \quad \text d s=\text d x\sqrt{1 + \left(\text d y/\text d x\right)^2}.$$
The curvature can thus be written as
\begin{equation} \kappa =\frac{\text{d}\theta}{\text{d}s} = \frac{\text{d}^2 y/\text{d}x^2}{\left(1 +\left(\text d y/\text d x\right)^2\right)^{3/2}}, \end{equation}
and the radius of the curvature becomes
\begin{equation} R(x) = \frac{\text{d}s}{\text{d}\theta} = \frac{\left(1 +\left(\text d y/\text d x\right)^2\right)^{3/2}}{\text{d}^2 y/\text{d}x^2}. \label{eq:R} \end{equation}
Note that $\kappa$ is always finite as long as $\text d y/\text d x$ is continuous, while $R\to \infty$ if $\text d^2 y/\text d x^2\to 0$.
Curvature is discussed in more detail in e.g. [chap. 11, 4] and [chap. 13, 5].
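As a small numerical sanity check (not part of the original notebook), the curvature formula can be verified on the upper half of a circle of radius $R_0$, where the curvature should be $1/R_0$ everywhere:

```python
import numpy as np

def curvature(dydx, d2ydx2):
    # Same formula as in the text: kappa = y'' / (1 + y'^2)^(3/2)
    return d2ydx2 / (1 + dydx**2)**1.5

R0 = 2.0
x = np.linspace(-1.0, 1.0, 5)           # stay away from the endpoints x = +-R0
dydx = -x / np.sqrt(R0**2 - x**2)       # analytic y' for y = sqrt(R0^2 - x^2)
d2ydx2 = -R0**2 / (R0**2 - x**2)**1.5   # analytic y''

kappa = curvature(dydx, d2ydx2)
print(np.abs(kappa))  # every entry is 1/R0 = 0.5, as expected for a circle
```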
We will be considering a solid sphere (ball), which has a moment of inertia $I_0= 2mr^2/5$. The moment of inertia is easily computed for such objects, but this will not be discussed here. See e.g. [chap. 9, 2] for a general discussion and [6] for a list of some typical values for $I_0$. Assume that the ball has a radius $r=1\,\text{cm}$ and is made of iron with a density $\rho=7850\,\text{kg/m}^3$ (density found in [7]). The mass of the ball is in this case $m= 32.9\,\text{g}$ and the moment of inertia is thereby $I_0=13.2\,\mathrm{g\,cm^2}$.
# Properties of the rolling object
r = 0.01    # m (radius)
rho = 7850  # kg/m^3 (density)
g = 9.81    # m/s^2 (gravitational acceleration)

m = (4/3)*np.pi*r**3*rho  # kg (mass)
c = 2/5
I0 = c*m*r**2             # kg m^2 (moment of inertia)
Figure 3: The actual setup used in the laboratory frame is shown to the left. A sketch is shown to the right. (Right image taken from [1])
The track is made by a plastic bar which is attached to a solid frame at $N=8$ different mounts, as shown in figure 3. Let $(x_i, y_i)$ be the position of the $i$th mount. The $x$ positions are fixed and uniformly distributed across the frame. We will assume that the frame has a length $L=1.4\,\text{m}$, such that the distance between the mounts is $l = 20\,\text{cm}$. The $y$ positions can be changed.
# Properties of the frame
L = 1.4                                     # m (length)
yi = [.5, .3, .25, .3, .35, .34, .25, .15]  # m (y-positions)
N = len(yi)                                 # (# of mounts)
xi = np.linspace(0, L, N)                   # m (x-positions)
The track can be described by a cubic spline that interpolates $(x_i, y_i)$. The cubic spline consists of a set of cubic polynomials that have continuous first and second derivatives at the interpolation points. See our notebook on Cubic Splines for a general discussion. We will be using the function CubicSpline from scipy.interpolate to perform the interpolation. This creates a callable object, which we can treat as a function $y(x)$ that describes the track.
# Callable class for the track curve
get_y = interp.CubicSpline(xi, yi, bc_type="natural")
As we saw earlier, $\theta$ and $R$ depend on $\text dy/\text dx$ and $\text d^2y/\text dx^2$. The class CubicSpline has a method called derivative, which returns a PPoly object that is essentially the derivative of the spline. PPoly also has a derivative method of its own. We can thus easily compute $\theta(x)$ and $R(x)$ by using equations \eqref{eq:theta} and \eqref{eq:R}.
get_dydx = get_y.derivative()       # Callable derivative of track
get_d2ydx2 = get_dydx.derivative()  # Callable double derivative of track

def get_theta(x):
    """ Returns the angle of the track. """
    return -np.arctan(get_dydx(x))

def get_R(x):
    """ Returns the radius of the curvature. """
    return (1 + (get_dydx(x))**2)**1.5/get_d2ydx2(x)

def get_curvature(x):
    """ Returns the curvature (1/R). """
    return get_d2ydx2(x)/(1 + (get_dydx(x))**2)**1.5
Let's plot the track!
x = np.linspace(xi[0], xi[-1], 200)

# Create figure
fig, axarr = plt.subplots(3, 1, sharex=True, figsize=(12, 9), dpi=600)
fig.subplots_adjust(hspace=0.02)

# Axes 1:
axarr[0].plot(x, get_y(x), 'C0', label=r"$y(x)$")
axarr[0].plot(xi, yi, 'C1o', label="Mounts")
axarr[0].set_ylabel(r"$y(x)$, [m]", size='15')
#axarr[0].set_aspect('equal')

# Axes 2:
axarr[1].plot(x, get_theta(x), 'C0')
axarr[1].set_ylabel(r"$\theta(x)$, [rad]", size='15')

# Axes 3:
axarr[2].plot(x, get_curvature(x), 'C0')
axarr[2].set_ylabel(r"$\kappa(x)$, [1/m]", size='15')

plt.show()
We are assuming that there is no loss of mechanical energy. Thus, if the highest point of the track is $y(x=0)$, then the ball will fall off to the right. If this is not the case, the ball will roll back and forth.
We start by assuming that the ball is always in contact with the track. Equation \eqref{eq:vdot} yields a coupled set of ordinary differential equations (ODE)
$$\frac{\text dv}{\text dt} = \frac{g\sin (\theta(t))}{1 + I_0/mr^2},$$
where $v$ is the momentary velocity along the track. The $x$ position is in turn given by
$$ \text dx = \text ds \cos(\theta)\: \Longrightarrow \: \frac{\text dx}{\text dt} = v\cos(\theta).$$
def get_vdot(theta):
    """ Returns the time derivative of the (total) velocity. """
    return g*np.sin(theta)/(1 + c)

def RHS(z):
    """ Evaluates the right hand side of the two coupled ODEs given in the text.

    Parameters:
        z : array-like, len(2), float. [x, v]
            The first element is the x-position, the second is the velocity.

    Returns:
        array-like, len(2), float. [vx, a]
            The first element is the time derivative of the x-position,
            the second is the time derivative of the velocity (acceleration).
    """
    w = np.zeros(2)
    theta = get_theta(z[0])
    w[0] = z[1]*1/np.sqrt(1 + np.tan(theta)**2)  # dx/dt = v*cos(theta)
    w[1] = get_vdot(theta)
    return w
$v(t)$ and $x(t)$ can be found by applying a method for solving ODEs, such as an Euler method, a Runge-Kutta method or a more advanced adaptive method. We refer you to the respective modules or an example that solves an ODE at numfys.net. In this notebook we will be using the 4th order Runge-Kutta method.
def rk4step(f, y, h):
    """ Performs one step of the 4th order Runge-Kutta method.

    Parameters:
        f : Callable function with one input parameter.
            The right hand side of the ODE. Note that the RHS is in our
            case not a function of time.
        y : array-like, float. Current state.
        h : float. Time step.
    """
    s1 = f(y)
    s2 = f(y + h*s1/2.0)
    s3 = f(y + h*s2/2.0)
    s4 = f(y + h*s3)
    return y + h/6.0*(s1 + 2.0*s2 + 2.0*s3 + s4)
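Before using the integrator on the track, a quick hypothetical sanity check (not part of the original lab) on an ODE with a known solution, $\dot y = -y$ with $y(0)=1$ and exact solution $e^{-t}$, confirms that it is accurate:

```python
import numpy as np

def rk4step(f, y, h):
    # Same integrator as defined above, repeated so this cell runs standalone.
    s1 = f(y)
    s2 = f(y + h*s1/2.0)
    s3 = f(y + h*s2/2.0)
    s4 = f(y + h*s3)
    return y + h/6.0*(s1 + 2.0*s2 + 2.0*s3 + s4)

f = lambda y: -y             # autonomous RHS, just like RHS(z) above
h = 0.1
y = np.array([1.0])
for _ in range(10):          # integrate from t = 0 to t = 1
    y = rk4step(f, y, h)

error = abs(y[0] - np.exp(-1.0))
print(error)  # on the order of 1e-7, consistent with RK4's O(h^4) global error
```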
We choose $x=0$ and $v=0$ as initial conditions, and a time-step $\Delta t = 10^{-3}\, \text{s}$. We have assumed that the ball does not lose any mechanical energy. One method of checking whether the numerical computation gives realistic results is to check if the total mechanical energy
\begin{equation} E = \frac{1}{2}mv^2 + mgh + \frac{1}{2}I_0 \omega^2 = \frac{1}{2}m(1 + c)v^2 + mgh, \end{equation}
is constant.
dt = 1e-3   # s
tstop = 5   # s. If the ball has not reached the end within 5 seconds, we stop.
x0 = 0      # m. Initial x-position
v0 = 0      # m/s. Initial velocity

def get_K(v):
    """ Returns the kinetic energy. """
    return .5*m*(1 + c)*v**2

def get_U(h):
    """ Returns the potential energy. """
    return m*g*h
Everything is now set to roll the ball down the track.
# Set initial values
x = [x0]          # x-position
v = [v0]          # velocity
h = get_y(x0)     # height
K = [get_K(v0)]   # kinetic energy
U = [get_U(h)]    # potential energy

it = 0            # Iterator
itmax = tstop/dt  # Maximum number of iterations

while x0 <= L and it < itmax:
    # Perform one step of the Runge-Kutta method
    [x0, v0] = rk4step(RHS, [x0, v0], dt)
    # Append different values to their arrays
    x = np.append(x, x0)
    v = np.append(v, v0)
    h = get_y(x0)
    K = np.append(K, get_K(v0))
    U = np.append(U, get_U(h))
    it += 1

print("Iterations: %i" % (it))
print("Final time: %.2f s" % (it*dt))
dE = (K[0] - K[-1] + U[0] - U[-1])/(K[0] + U[0])
print("Relative change in mechanical energy: %.2e" % (dE))
Iterations: 1075 Final time: 1.07 s Relative change in mechanical energy: -1.08e-09
The relative change in the mechanical energy was minimal, which means that the time step was more than small enough. The ball took $1.07\,\text{s}$ to reach the end of the track.
Let's plot the computed quantities!
t = np.linspace(0, it*dt, it + 1)

# Create figure
fig, axarr = plt.subplots(3, 1, sharex=True, figsize=(10, 8), dpi=400)
fig.subplots_adjust(hspace=0.02)

# Axes 1:
axarr[0].plot(t, x)
axarr[0].set_ylabel(r"$x(t)$, [m]")

# Axes 2:
axarr[1].plot(t, v)
axarr[1].set_ylabel(r"$v(t)$, [m/s]")

# Axes 3:
axarr[2].plot(t, U, label="Potential energy, $U$")
axarr[2].plot(t, K, label="Kinetic energy, $K$")
axarr[2].plot(t, K + U, label="Mechanical energy, $E=U+K$")
axarr[2].set_ylabel(r"Energy, [J]")
axarr[2].set_xlabel(r'$t$, [s]')
axarr[2].legend()

plt.show()
Despite the fact that R has excellent graphics capabilities, it's sometimes
desirable to graph data in another programming language. For example, when
the entire data analysis process is in Python, it may make sense to graph
the data within Python as well. Below and in the rest of this chapter, we
explore matplotlib — a popular Python module for graphing data.
matplotlib
import matplotlib.pyplot as plt
# Working with figures in matplotlib
f1 = plt.figure() # open a figure
f2 = plt.figure() # open a second figure
plt.savefig('f2.pdf') # save fig 2 - the active figure
plt.close(f2)
plt.savefig('f1.pdf') # save fig 1 - the active figure
plt.close(f1)
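A minimal end-to-end sketch of the same workflow (illustrative only; the Agg backend renders straight to files, so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")        # non-interactive backend: render straight to files
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])  # a small made-up series
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.savefig("example.pdf")   # write the figure to disk
plt.close(fig)               # free the figure's resources
```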
Closed Bug 1062053 Opened 8 years ago Closed 8 years ago
[TSF] nsTextStore should use StaticRefPtr
Categories
(Core :: Widget: Win32, defect)
Tracking
()
mozilla35
People
(Reporter: masayuki, Assigned: masayuki)
Details
(Keywords: inputmethod)
Attachments
(2 files)
No description provided.
This makes all static members storing COM objects StaticRefPtr. In the future, we should move nsTextStore into mozilla::widget namespace for getting rid of the redundant "mozilla::" in nsTextStore.h. However, if we do so now, it breaks my patches which need to be landed as soon as possible. Therefore, I'll do it later. And this patch removes the Tsf prefix from the static variables because it's very clear that it's related to TSF in nsTextStore.
Attachment #8483368 - Flags: review?(jmathies)
And also this renames the sTsfClientId to sClientId for consistency with other static member names.
Attachment #8483369 - Flags: review?(jmathies)
(In reply to Masatoshi Kimura [:emk] from comment #3) Yes. IIRC, it's disabled due to wrong string API usage. Anyway, the test won't pass with current nsTextStore because the test just checks the behavior of the first nsTextStore implementation. In other words, the test assumes that the old implementation is the "correct" implementation as ITextStore.
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla35
In programming, textual content is represented by strings.
You have seen and used strings already.
"Hello, playground", for example, is a string that appears at the top of every newly created playground.
Like all strings, it can be thought of as an ordered collection of characters.
In reality, Swift strings are not themselves collections, but they do provide a variety of views into their underlying contents that are collections.
In this chapter, you will see more of what strings can do.
In Swift, you create strings with the String type. Create a new playground called Strings and add the following new instance of the String type.
Listing 7.1 Hello, playground
import Cocoa
var str = "Hello, playground"
Akeneo PIM 3.0, let’s tidy up!
Since its creation in 2013, Akeneo has been continuously growing and evolving. The product, the team, the offices, the code, everything has changed. Every single time, changes were never introduced just for the sake of it. It has rather been a necessity, an adaptation aimed to try to solve a new problem, to catch a dawning opportunity or to better answer a problem we better fathom.
This post will be about the latest “big” change of the code base for the upcoming PIM 3.0: its reorganization by business concerns. As usual, we’ll provide the tools to handle this change smoothly in your projects. The public API didn’t change, we just tidied up all the classes in different namespaces. Actually, that change is not that “big” for you, even it’s crucial for us.
We’ve been growing…
What do you think the following figures relate to: 5 in 2013, 9 in 2014, 13 in 2015, 23 in 2016, 39 in 2017 and 50 in 2018? Our number of customers? Our number of partners? No, actually, it’s the headcount of people working in the product team. At the time of this post being written, more than 30 of them work on our main product: the PIM.
Now let’s take a look at the number of lines of PHP in the PIM per version: 90k in 1.0, 190k in 1.4, 390k in 1.7, 460k in 2.0 and 510k in 2.3. Yes, it’s constantly growing. Of course it comes to the fact that for each release, we ship new features.
In a matter of 5 years, code base is 5 times larger and headcount has been multiplied by 10.
Building a product is an extremely pleasant and exciting journey. It's a never ending trip, with more and more cities to visit (aka features or products to build), luggage to carry (aka code base to maintain) and friends to meet (aka teammates to onboard). However, an enterprise seminar gathering 150 people does not require the same logistics and organization as a trip with 3 friends. It goes exactly the same for our product and our product team.
How do we work at Akeneo?
Before digging into the problems we are currently facing, and the reasons why we believe a code reorganization will be the stepping-stone to solve them, we have to understand how we built the PIM so far, from an organizational, business and technical point of view.
As you may know, our product team is divided into squads. Each squad is an independent cross functional small group which is made of developers, a product owner, a UX designer and a devops. They have to take care of a specific subject and lead their mission successfully, whichever way they deem appropriate. We are happy with this way of working and organization since we introduced it, mid 2015. It answered a lot of problems we encountered in the past.
Understanding what hundreds of customers need and imagining which features could serve them best is already a tough exercise. But if you also take into account the cost of delay, the schedule of the squads, the risks, the financial aspects, the sales pipeline, the Community and Enterprise editions, the on-premise and SAAS versions, the support and maintenance, the improvements, the new features and the market targets, it becomes a headache. This strategic management is complex by nature. There is no one-size-fits-all type of answer that would match all criteria. There is no one-size-fits-all type of answer that would please all the persons impacted by those decisions. That's why, sometimes, in the past, we had to leave some existing features aside to be able to ship a brand new product or component. For instance, a squad could give birth to a feature and then move on to something else. Once shipped, the feature becomes the responsibility of all the different squads. This kind of project management is not a problem per se. With a small team owning the whole product, it's totally viable. But it becomes harder with a larger organization and a far bigger product.
Most of the bundles that exist in the PIM 2.3 were already there at the release of the 1.0. To be more precise, 11 bundles out of the 20 existing in PIM 2.3 Community Edition originally comes from the version 1.0. Some internal parts have been totally revamped, the storage has evolved twice, the business code has been decoupled from the infrastructure, but still, from a purely macro perspective, the code organization has not changed that much over the last 5 years. The only noteworthy change on that side appeared on the Enterprise Edition; each squad created its own folder dedicated to the feature they were in charge of and tried to isolate the most possible its code from the rest of the code base.
All of that worked well. Well, that works well.
So, everything sounds good, why such a change?
Apart from this idealistic depiction, the squads working on the PIM face mostly two important challenges:
- We feel it’s getting more and more difficult to ship a new feature promptly (or to improve an existing one). There is no reason to panic now, but the sooner we act, the easier and cheaper it will be.
- We feel it’s getting more and more difficult to onboard a new teammate in the PIM. The application is so huge that it’s now impossible to understand and master all the functional and technical aspects.
Both difficulties are rooted in the technical debt as well as in the current squad organization and the project portfolio management. On a daily basis, this translates into:
- Some parts of the code are too tightly coupled: changing a class here can cause a piece there to break.
- Some features are not open-closed proof: from a functional point of view, some features modify an existing behavior, instead of extending it.
- Builds are too long: regarding our tests, we chose quantity over quality by a lack of experience.
- Maintenance is sometimes complex: as each developer joins the maintenance squad once in a while, on occasions, you may have to fix something you never heard about before.
- Collective code ownership is hard to maintain: when a group becomes too numerous, it's a natural tendency that “the responsibility of everybody” turns slowly into “the responsibility of nobody”.
Absolutely none of these problems is serious. Some may take time, some may require work, but none of them is insurmountable or out of control. We just try to pay close attention to early warning signs.
This is not just a code reorganization
We believe we can solve the problems listed above by giving squads more autonomy. We believe that with guidance, nurturing and autonomy, squads will deliver more value more frequently to our customers. We believe we would be more efficient if each squad had a smaller and well defined perimeter. How to achieve that? More or less, the squad α would officially be responsible for the features B, D and E while the squad β would officially take care of A and C. In that case, a squad could:
- Capitalize on its functional knowledge.
- Make strong choices that impact them, and only them.
- Regularly deliver functional improvements on their expertise scope.
- Prioritize their own functional debt.
- Prioritize their own technical debt.
- Take care of their own maintenance.
- Take care of their own tests.
To achieve that, we need three different and complementary things:
- Reorganize the code base so that we effectively see and understand what are the features A, B, C, D and E. As of today, the code base does not reflect the existing functional parts of our business.
- Minimise the impacts between features so that the squad α can’t easily break the work of the squad β (and vice versa).
- Carefully map squads to features. This is for sure the most complex part, as it may eventually impact the way we prioritize the functional roadmap. For instance, it’s easy to prioritize D over B and E, but it becomes more difficult to prioritize A over B, D and E.
We are confident it’s the right path to follow. During the last months, two squads worked this way on two different upcoming Enterprise features. Their code base is isolated, their tests and builds are independent from the rest of the PIM. They are more autonomous, happy and efficient. They feel more productive and the feedback loop between the squad and the stakeholders is shorter. They feel more confident about their code base, to maintain it and to make it evolve and change.
This blog post comes to an end. It focused mainly on the reasons why we needed to reorganize the code base. The next one will detail how we achieved this reorganization and what are the impacts on a project. It will be more technical and concrete. Stay tuned!
Created on 2008-03-28 10:15 by mark, last changed 2012-08-24 08:34 by ncoghlan. This issue is now closed.
The tiny program at the end of this message runs under Python 2.5 & 3.0a3. Under 2 it gives the following output:
: python sax.py test.xml
('+', u'document')
('+', u'outer')
('+', u'inner')
('-', u'inner')
('-', u'outer')
('-', u'document')
Done
Under 3 it does not terminate:
: python3 sax.py test.xml
+ document
+ outer
+ inner
- inner
- outer
- document
Traceback (most recent call last):
File "sax.py", line 19, in <module>
parser.parse(sys.argv[1])
File "/home/mark/opt/python30a3/lib/python3.0/xml/sax/expatreader.py",
line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/home/mark/opt/python30a3/lib/python3.0/xml/sax/xmlreader.py",
line 124, in parse
buffer = file.read(self._bufsize)
File "/home/mark/opt/python30a3/lib/python3.0/io.py", line 774, in read
current = self.raw.read(to_read)
KeyboardInterrupt
The xml.sax.parser() function seems to work fine if you give it an open
file object and close the file after the call. But the documentation
says you can give it a filename, but if you do that the parser does not
terminate in Python 3 although it works fine in Python 2.
# sax.py
import sys
import xml.sax
BUG = True
class SaxHandler(xml.sax.handler.ContentHandler):
def startElement(self, name, attributes):
print("+", name)
def endElement(self, name):
print("-", name)
handler = SaxHandler()
parser = xml.sax.make_parser()
parser.setContentHandler(handler)
if BUG:
parser.parse(sys.argv[1])
else:
fh = open(sys.argv[1], encoding="utf8")
parser.parse(fh)
fh.close()
print("Done")
# end of sax.py
Here is the test file:
<?xml version="1.0" encoding="UTF-8"?>
<document>
<outer>
<inner>
</inner>
</outer>
</document>
I had to disable three unit tests in test_sax. We didn't notice the
problem before because the tests weren't actually run. The three tests
are marked clearly with XXX and FIXME.
ISTM that this release blocker can be solved by changing
xml.sax.xmlreader.py line 122 from:
while buffer != "":
to
while buffer != b"":
I've a better idea:
while buffer:
It's faster and works for both empty bytes and str.
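The root cause is easy to reproduce in isolation (a minimal sketch, not taken from the original report): in Python 3, a bytes object never compares equal to a str, so a loop condition like `buffer != ""` can never become false once read() starts returning bytes:

```python
buffer = b""          # what a binary-mode file.read() returns at EOF
print(buffer != "")   # True: in Python 3, bytes never equal str
print(bool(buffer))   # False: truthiness works the same for bytes and str

# Hence `while buffer != "":` spins forever on a bytes source,
# while `while buffer:` terminates correctly for both types.
```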
The patch fixes the issue and re-enables three unit tests.
The patch looks great. (I love enabling disabled tests!)
Looks like this is a duplicate of issue3590, so this patch fixes two
release blockers ;)
Benjamin will commit this.
Applied in r66203.
This is needed to implement call hierarchy.
What does ASTNode.Parent correspond to in here? I couldn't find any clarifications on libIndex side, are we sure that's always what we want? It would be nice to have some tests demonstrating what this corresponds to in a variety of cases.
Also why do we only store NamedDecls as containers? It makes sense from CallHierarchy perspective as we are only interested in function-like containers, and they are nameddecls (?). But these might as well be TranslationUnitDecl (?) for top level declarations.
Use Decl rather than NamedDecl, and add a test
I wrote a test with various cases I could think of, and the choice of decl coming from ASTNode.Parent seems reasonable to me.
i am afraid we are going to have an indeterminate value here if for whatever reason Container detection fails. (e.g. for macros, or if ASTNode.Parent set to nullptr by libindex).
Sent out to address the issue.
can we also have an assertion checking containers for top level symbols are the same ?
and extend the test to have multiple declarations inside:
and ensure the container is same for those (and different than top-level symbols themselves). It is okay for those symbols.
Note that it is OK for namespaces to be not in the index, i just want to make sure the containers for the top-level symbols inside a namespace are the same.
Address review comments
Good catch, and thank you for addressing this!
thanks, LGTM!
can you also assert containers here are non-null (and below)
nit: i would suggest writing these in the form of:
auto Container = [](StringRef RangeName){
auto Ref = findRefWithRange(RangeName);
ASSERT_TRUE(Ref);
ASSERT_FALSE(Ref->Container.isNull());
return Ref->Container;
};
EXPECT_EQ(Container("sym1"), Container("sym2"));
EXPECT_NE(Container("sym1"), Container("sym2"));
then you could update the ones above as:
EXPECT_EQ(Container("rangename"), findSymbol(Symbols, "symname"));
It looks like the container is null for toplevel decls. Is that a problem?
(I will wait for a response about the containers for top-level decls before committing.)
Revised as suggested, except for allowing a null container for top-level decls.
nit: instead of making this part of the lambda and complicating the signature I would just check this holds for classscope1 and namespacescope1 explicitly. it should be true for other cases anyways, as we are checking equality with non-null symbol IDs.
I think that's OK for now, might be worth leaving a comment tho. (at Ref::Container)
Address final review comments
Hello there,
I am inserting Image Merge Fields at run time as:
string imageName = “TestImage”;
builder.InsertField(string.Format("MERGEFIELD Image:{0}", imageName), string.Format("«Image:{0}»", imageName));
and when I call objDocumentMailMerge.GetFieldNames(), it returns Image field names prefixed with ‘Image:’, so for above code, it will give ‘Image:TestImage’.
Is that correct, or am I doing something wrong here…
(I was expecting it to return just ‘TestImage’, though if above code is right, I can always chop off ‘Image:’ to get correct field names.)
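For instance, the chop could be done like this (a sketch, not from the thread; "doc" stands for whatever Aspose.Words Document object is in use):

```csharp
// Hypothetical cleanup: strip the "Image:" prefix that GetFieldNames()
// reports for image merge fields, leaving plain field names.
string[] fieldNames = doc.MailMerge.GetFieldNames();
for (int i = 0; i < fieldNames.Length; i++)
{
    if (fieldNames[i].StartsWith("Image:"))
        fieldNames[i] = fieldNames[i].Substring("Image:".Length);
}
```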
Thank you!
MailMerge.GetFieldNames() Returns Image MergeFields prefixed with 'Image:'
Hi Nutan,
Thanks for your inquiry. Yes, this is expected behavior. MailMerge.GetFieldNames simply returns the names of all merge fields in the document.
Best regards. | https://forum.aspose.com/t/mailmerge-getfieldnames-returns-image-mergefields-prefixed-with-image/80208 | CC-MAIN-2021-10 | refinedweb | 127 | 50.12 |
How to use type annotations with Python's csv module
Adding a type annotation for a "writer" object created by csv.writer(...) is fiddlier than you might think.

The type of the writer object is _csv.writer, which you verify with:
>>> import csv, sys
>>> type(csv.writer(sys.stdout))
<class '_csv.writer'>
but if you try this:
import _csv

def generate_report(writer: _csv.writer):
    pass
mypy complains that:
Function "_csv.writer" is not valid as a type [valid-type].
The correct approach is to use _csv._writer as the type annotation. A string literal is required, otherwise you'll get an AttributeError: module '_csv' has no attribute '_writer' exception.
import _csv

def generate_report(writer: '_csv._writer'):
    pass
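If you'd rather keep _csv out of the runtime imports entirely, the same string-literal annotation also works behind a TYPE_CHECKING guard (a sketch, not from the original post):

```python
# Variant using typing.TYPE_CHECKING: _csv is imported only while the
# type checker runs, and the string literal defers evaluation at runtime.
import csv
import io
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import _csv  # visible to mypy only

def generate_report(writer: "_csv._writer") -> None:
    # Write a tiny two-row report through the annotated writer.
    writer.writerow(["name", "count"])
    writer.writerow(["widgets", 3])

buf = io.StringIO()
generate_report(csv.writer(buf))
print(buf.getvalue())
```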
697
10 S.Ct. 378
33 L.Ed. 787
BOESCH et al.
v.
GRAFF et al.
March 3, 1890.
Albert Graff and J. F. Donnell filed their bill in the circuit court of the United
States for the northern district of California against Emile Boesch and Martin
Bauer, to recover for infringement of letters patent No. 289,571, for an
improvement in lamp-burners, granted on December 4, 1883, to Carl
Schwintzer and Wilhelm Graff, of Berlin, Germany, assignors of one-half to J.
F. Donnell & Co., of New York, all rights being averred to be now vested in the
complainants. Claim 1, alleged to have been infringed, reads as follows: 'In a
lamp-burner of the class described, the combination, with the guide tubes, of a
ring-shaped cap provided with openings for the wicks, said cap being applied to
the upper ends of the guide tubes, so as to close the intermediate spaces
between the same, substantially as set forth.' The patent was granted December
4, 1883, but prior to that, November 14, 1879, January 13, 1880, and March 26,
1880, letters patent had been granted to Carl Schwintzer and Wilhelm Graff
by the government of Germany for the same invention. After a hearing on the
merits, an interlocutory decree was entered, finding an infringement, and
referring the case to a master for an accounting. The opinion will be found
reported in 33 Fed. Rep. 279. A petition for a rehearing was filed, and
overruled. The case then went to the master, who reported that the infringement
was willful, wanton, and persistent; that the appellees had sustained damages to
the extent of $2,970.50; and that they waived all claims to the profits realized
by the infringement. Exceptions were filed to this report, and overruled, and a
final decree entered in favor of Graff and Donnell for $2,970.50, with interest,
and costs, from which decree this appeal has been prosecuted. Appellants urge
three grounds for reversal: First, that a title to the patent sufficient to maintain a
suit for infringement was not, at the date of filing the bill, vested in the
complainants; second, that Boesch and Bauer could not be held for
infringement, because they purchased the burners in Germany from a person
having the right to sell them there, though not a licensee under the German
patents; third, that the damages awarded were excessive.
These propositions are presented by some of the errors assigned, and are the
only errors alleged which require attention, that which questions the
infringement not being argued by counsel, and that which goes upon the refusal
of the circuit court to grant a rehearing not being open to consideration here.
Buffington v. Harvey, 95 U. S. 99, 100; Steines v. Franklin Co., 14 Wall. 15,
22; Railway Co. v. Heck, 102 U. S. 120; Kennon v. Gilmer, 131 U. S. 22, 24, 9
Sup. Ct. Rep. 696. The assignment by Schwintzer to Albert Graff was dated the
22d day of April, 1885, was absolute in form, and transferred title to six twenty-fourths of the patent, for the expressed consideration of 'the sum of one hundred
dollars, and for other valuable considerations;' but a contract between
Schwintzer and Albert Graff was produced by the latter upon his examination
by the respondents, which read as follows: 'S. 1. Mr. Albert Graff binds himself
to pay to Mr. Carl Schwintzer, instead of the, in the patent letter mentioned, one
hundred dollars for the first year, the sum of two hundred and fifty marks,
payable on the 1st February, 1886, and each following year on the same date
the sum five hundred marks, (not less,) till the amount of four thousand marks
are paid in all. S. 2. Should Mr. Albert Graff, of San Francisco, not be able to
sell more than one thousand burners, called 'Diamond' or 'Mitrailleuse' burners,
No. 10,621, manufactured by Mess. Schwintzer & Graff, of Berlin, he reserves
to himself to make up a new agreement with Mr. Carl Schwintzer. S. 3. Should
not Mr. Albert Graff, San Francisco, against all expectation, stick to the
agreements mentioned in S. 1 & 2, all titles of the patent letter ceded to him by
Carl Schwintzer shall him return. S. 4. Mr. Carl Schwintzer, partner of the firm
Schwintzer & Graff, engages to deliver to Mr. Albert Graff the said burners at
the same price as before, if the market price of the metal does not exceed,
make 150% kos.; and promise likewise to effect any order promptly, if in his
power.'
Albert Graff testified in respect to the words, 'instead of the, in the patent letter
mentioned, one hundred dollars for the first year,' etc., that they meant that,
instead of the $100 mentioned in the assignment, he was to pay 250 marks the
first year, and that the contract was made one day later than the assignment.
Counsel contends that the two documents must be construed together, and
amount simply to an executory contract to assign when Graff shall have paid
the sum of 4,000 marks; that, therefore, Graff could, at most, only be regarded
as a licensee of the interest under the patent, until such time as his contract
should be executed according to its terms; and that the legal right as to six
twenty-fourths of the patent remained in Schwintzer, who was therefore a
necessary party. It is evident that the agreement was not drawn by parties well
versed in English, but their intention is sufficiently apparent. The assignment,
being absolute in form, conveyed the legal title, and on the next day the parties
signed this contract, relating to the consideration, probably to enable Albert
Graff to pay the 4,000 marks out of the sales of the burners; at all events, it
provides that, if Graff failed to carry out his covenants, then the title was to
return to Schwintzer, which provision was in the nature of a security to him that
he should be paid. The condition that if Mr. Albert Graff did not, 'against all
expectations, stick to the agreements mentioned in S. 1 & 2, all titles of the
patent letter ceded to him by Carl Schwintzer shall him return,' is a condition
subsequent. The title had already vested, but was liable to be defeated in futuro,
on failure of the condition. There has been no such failure, but, on the contrary,
Albert Graff has paid the 4,000 marks in full. We shall therefore not reverse the
decree on the ground first referred to.
Letters patent had been granted to the original patentees for the invention by the
government of Germany in 1879 and 1880. A portion of the burners in question
were purchased in Germany from one Hecht, who had the right to make and sell
them there. By section 5 of the imperial patent law of Germany, of May 25,
1877, it was provided that 'the patent does not affect persons who, at the time of
the patentee's application, have already commenced to make use of the
invention in the country, or made the preparations requisite for such use.' 12 O.
G. 183. Hecht had made preparations to manufacture the burners prior to the
application for the German patent. The official report of a prosecution against
Hecht in the first criminal division of the royal district court, No. 1, at Berlin, in
its session of March 1, 1882, for an infringement of the patent law, was put in
evidence; wherefrom it appeared that he was found not guilty, and judgment for
costs given in his favor, upon the ground 'that the defendant has already prior to
November 14, 1879, that is to say, at the time of the application by the patentees for and within the state, made use of the invention in question,
especially, however, had made the necessary preparations for its use. Section 5,
eodem. Thus Schwintzer & Graff's patent is of no effect against him, and he had
to be acquitted accordingly.'
It appears that appellants received two invoices from Germany, the burners in
one of which were not purchased from Hecht, but, in the view which we take of
the case, that circumstance becomes immaterial. The exact question presented
is whether a dealer residing in the United States can purchase in another
country articles patented there, from a person authorized to sell them, and
import them to and sell them in the United States, without the license or
consent of the owners of the United States patent. In Wilson v. Rousseau, 4
How. 646, it was decided that a party who had purchased and was using the
Woodworth planing-machine during the original term for which the patent was
granted, had a right to continue the use during an extension granted under the
act of congress of 1836; and Mr. Chief Justice TANEY, in Bloomer v.
McQuewan, 14 How. 539, 549, says, in reference to it, that 'the distinction is
there taken between the grant of the right to make and vend the machine and
the grant of the right to use it.' And he continues: .' In Adams v. Burke, 17 Wall. 453, it was held;' and that 'the
right to the use of such machines or instruments stands on a different ground
from the right to make and sell them, and inheres in the nature of a contract of
purchase, which carries no implied limitation to the right of use within a given
locality.' Mr. Justice BRADLEY, with whom concurred Mr. Justice SWAYNE
and Mr. Justice STRONG, dissented, holding that the assignee's interest 'was
limited in locality, both as to manufacture and use.' The right which Hecht had
to make and sell the burners in Germany was allowed him under the laws of
that country, and purchasers from him could not be thereby authorized to sell
the articles in the United States in defiance of the rights or patentees under a
United States patent. A prior foreign patent operates under our law to limit the
duration of the subsequent patent here, but that is all. The sale of articles in the
United States under a United States patent cannot be controlled by foreign laws.
It is conceded that these exceptions raise two points, namely, that the
infringement was not willful, and that the reduction of prices was not caused
solely by it; and this, as it seems to us, is quite sufficient to permit the real
question involved to be passed upon. The master awarded $2,970.50 as
damages for the reduction in price, which, he holds, was caused by the
respondent's infringement. He says: 'After the reduction in his prices,
complainant sold, at wholesale, one thousand three hundred and twelve ten-wick burners, at a price twenty-five cents less on each than his original price;
four hundred and fifty twelve-wick burners, at fifty cents less; five hundred and
ninety-two sixteen-wick burners, at seventy-five cents less; and seven hundred
and sixteen twenty-wick burners, at seventy-five cents less, a total difference
between the original and the reduced prices of one thousand five hundred and
thirty-five dollars and fifty cents. In addition, he sold at retail, on an average,
five burners on each of the five hundred and seventy-four business days
between the time when his prices were first reduced and October 31, 1887; the
number of burners thus sold being two thousand eight hundred and seventy,
which were sold at a minimum reduction of fifty cents each under original
prices, a total difference between the original and the new prices of fourteen
hundred and thirty-five dollars; which sum, added to the said sum of one
thousand five hundred and thirty-five dollars and fifty cents, gives an aggregate
amount of two thousand nine hundred and seventy dollars and fifty cents.'
The report of a master is merely advisory to the court, which it may accept and
act upon, in whole or in part, according to its own judgment as to the weight of
the evidence. Kimberly v. Arms, 129 U. S. 512, 523, 9 Sup. Ct. Rep. 355. Yet,
in dealing with exceptions to such reports, 'the conclusions of the master,
depending upon the weighing of conflicting testimony, have every reasonable
presumption in their favor, and are not to be set aside or modified unless there
clearly appears to have been error or mistake on his part.' Tilghman v. Proctor,
125 U. S. 136, 149, 8 Sup. Ct. Rep. 894. We think there was error here, within
that rule. Where the patentee granted no licenses, and had no established license
fee, but supplied the demand himself, and was able to do so, an enforced
reduction of price is a proper item of damages, if proven by satisfactory
evidence. Manufacturing Co. v. Sargent, 117 U. S. 536, 6 Sup. Ct. Rep. 934.
The damages must be actual damages, but where the patented feature is the
essential element of the machine or article, as in the case just cited, if such
damages can be ascertained they may be awarded. When, however, a plaintiff
seeks to recover because he has been compelled to lower his prices to compete
with an infringing defendant, he must show that his reduction in prices was due
solely to the acts of the defendant, or to what extent it was due to such acts.
Cornely v. Marckwald, 131 U. S. 159, 9 Sup. Ct. Rep. 744. There must be
some data by which the actual damages may be calculated. New York v.
Ransom, 23 How. 487; Rude v. Westcott, 130 U. S. 152, 9 Sup. Ct. Rep. 463.
The master reported 'that the number of lamp-burners proven to have been sold
by respondents, containing the invention claimed in and by the first claim of
complainants' letters patent, is fourteen, provided that only the capped burners
sold contain said invention, and that the number is one hundred and fourteen, if
the half capped burners so sold are to be held to contain said invention.' The
evidence established that the first invoice of lamp-burners contained 50 20-wick
burners with caps, of which respondents sold 4; and 50 12-wick burners with
half caps, of which respondents sold 12; and 50 16-wick burners with half caps,
of which respondents sold 44; and that respondents altered the 46 remaining 20-wick burners by changing their caps to half caps, and sold 44. This makes the
100 with half caps, referred to by the master. Of the second invoice, the
respondents sold 4 20-wick capped burners and 6 16-wick capped burners,
making, with 4 20-inch burners with caps sold out of the first invoice, the 14
capped wick burners reported as thus disposed of. The original bill in this case
was filed September 17, 1886. It had been preceded by another suit, which had
been dismissed. The goods in the second invoice, it is testified, had been
ordered before this suit was commenced, but the invoice is dated October 16,
1886. This invoice contained 100 20- and 100 16-wick burners with caps, of
which respondents sold 4 20-wick and 6 16-wick burners unchanged, as before
stated. Most of this lot were still on hand at the time the testimony was taken,
though some had been altered into what was called the 'Boesch Burner,' which
had no caps at all, and sold as such.
The evidence tends to establish a profit of $1.85 on the 20-wick burners, $1.50
on the 16-wick, and 75 cents on the 12-wick. This would show a profit of
$23.80 on the 14 capped burners, being 8 20-wick and 6 16-wick burners; and a profit of $156.40 on the 100 half-capped burners, being 44 20-wick, 44 16-wick, and 12 12-wick burners. Respondents had been advised by their counsel
that the burners with half caps were not an infringement. The cap was the
invention in question. The claim infringed, as already seen, was a combination,
with the guide tubes, of a ring-shaped cap provided with openings for the
wicks, said cap being applied to the upper ends of the guide tubes, so as to close
the intermediate spaces between the same. The half cap admitted the air
directly to each wick, and in that respect differed from the claim of the patent.
It is argued, however, with much force, on behalf of the appellees, that the
difference was a difference in degree, and not in kind, as the air reached the
wick when the full cap was used, and the functions of the latter as a
strengthening band, a protector of the tops of the tubes, and in other particulars
were performed by the half cap; and this position is not resisted by counsel for
appellants. But, assuming that the sale of 100 burners with half caps was an infringement, we are not prepared to concede that the sale of 114 burners under
the circumstances detailed could have had the effect, in compelling a reduction
of price, which has been ascribed to it.
It is remarked by the master that 'it is a fact of common knowledge that there is
to be found on sale in the market a great variety of lamp-burners, among which,
as shown by the evidence, have been for many years burners of the same
general class as complainants'.' This being so, and Boesch & Bauer being | https://id.scribd.com/document/310928555/Boesch-v-Graeff-133-U-S-697-1890 | CC-MAIN-2019-30 | refinedweb | 2,922 | 64.95 |
Introduction: HackerBox 0051: MCU Lab
Greetings to HackerBox Hackers around the world! HackerBox 0051 presents the HackerBox MCU Lab. The MCU Lab is a development platform to test, develop, and prototype with microcontrollers and microcontroller modules. An Arduino Nano, ESP32 Module, and STM32 Black Pill are used to explore the platform.
This guide contains information for getting started with HackerBox 0051 and livin' the HackLife.
Step 1: Content List for HackerBox 0051
- MCU Module 1: Arduino Nano 5V, 16MHz
- MCU Module 2: WEMOS ESP32 Lite
- MCU Module 3: STM32F103C8T6 Black Pill
- Exclusive MCU Lab Printed Circuit Board
- FT232RL USB Serial Adapter
- OLED 128x64 Display I2C 0.96 Inches
- Bidirectional 8-Bit Logic Level Shifter
- WS2812B RGB SMD LED
- Four Surface Mount Tactile Buttons
- Four Red Diffused 5mm LEDs
- Piezo Buzzer
- HD15 VGA Connector
- Mini-DIN PS/2 Keyboard Connector
- 100K Ohm Potentiometer
- 8 Position DIP Switch
- AMS1117 3.3V Linear Regulator SOT223
- Two 22uF Tantalum Capacitors 1206 SMD
- Ten 680 Ohm Resistors
- Four Adhesive Rubber PCB Feet
- Two 170 point Mini Solderless Breadboards
- Eleven 8 pin Female Header Sockets
- 40 pin Breakaway Header
- Bundle of 65 Male Jumper Wires
- Raised Fist Circuit Board Sticker
- Hack The Planet Smiley Pirate Sticker
- Exclusive HackerBox "Remove Before Flight" Keychain

Step 2: HackerBoxes MCU Lab
The MCU Lab is a compact, polished version of a development platform we use to prototype and test various microcontroller (MCU) based designs. It is super useful for working with MCU modules (such as an Arduino Nano, ESP32 DevKit, etc) or individual MCU device packages (such as ATMEGA328s, ATtiny85s, PICs, etc). A target MCU can be placed into either of the mini solderless breadboards. Two MCUs can be interfaced together using both breadboards or one of the breadboard spaces can be used for other circuitry.
The "feature blocks" of the MCU Lab are broken out to female headers similar to those found on an Arduino UNO. The female headers are compatible with male jumper pins.
Step 3: Assemble the HackerBoxes MCU Lab
SMD COMPONENTS ON BACK OF BOARD
Start by mounting the AMS1117 (SOT-223 package) Linear Regulator and the two 22uF filter capacitors on the reverse of the PCB. Note that one side of each capacitor silkscreen is rectangular and the other side is octagonal. The capacitors should be oriented so that the dark stripe on the package aligns to the octagonal silkscreen side.
CONTINUE WITH COMPONENTS ON FRONT OF BOARD
Solder the WS2812B RGB LED. Orient the white-marked corner of the LED to correspond to the tabbed corner shown on the PCB silkscreen.
Four SMD Tactile Buttons
Four Red LEDs with Four Resistors
Level Shifter with VA pin nearest 3V3 marking and VB pin nearest 5V marking. The Level Shifter module can be mounted flush to the PCB by soldering the headers to the module and then sliding the black plastic spacers off the headers before mounting the module to the MCU Lab PCB. Leaving the spacers on is fine as well.
Two strips of the header can be broken off to connect the FT232 module. A smaller 4-pin section of header can also be used for the 5V/GND header just next to the FT232 module.
For now, populate the female VGA header closest to the HD15 VGA connector and Keyboard Socket. However, DO NOT POPULATE the additional header adjacent to that one or the five resistors between those two headers. Specific options for video signal interfacing are discussed later.
Populate the other nine female headers.
Remove adhesive from back of both solderless breadboards to attach them to the MCU Lab PCB.
Position adhesive rubber feet to the bottom of the MCU Lab PCB to protect your workbench from scratches.
HANDLING POWER INPUTS
There are at least two, and more likely as many as four, places where power may come into the MCU Lab. This can cause trouble, so always carefully consider the following pointers:
The header points labeled 5V are all connected. The 5V rail also connects to the keyboard socket, the level shifter, and the WS2812B RGB LED. Power can be supplied to the 5V rail by plugging the FT232 into USB, connecting the four pin power header to an external supply, or by connecting a jumper from one of a 5V pin on the PCB to a powered 5V module (usually powered by USB).
Similarly, the GND pins are all connected. They connect to the USB GND on the FT232 (assuming USB is connected to the FT232). They can also be connected to ground using a jumper between one of them and a powered module as discussed for the 5V net.
The 3V3 rail is driven by the regulator on the back of the PCB. It is a source only and (unlike the 5V rail) it should not be driven by any modules or other circuits since it is driven directly from the regulator on the 5V rail.
Step 4: Arduino Nano MCU Module
One of the most common MCU modules these days is the Arduino Nano. The included Arduino Nano board comes with header pins, but they do not come soldered to the module. Leave the pins off for now. Perform these initial tests on the Arduino Nano module prior to soldering on the header pins. All that is needed is a microUSB cable and the Arduino Nano board just as it comes out of the bag. The Nano includes an on-board MicroUSB port connected to a CH340G USB/Serial bridge chip. Detailed information on the CH340 (and drivers, if needed) can be found online.
SOFTWARE: If you do not yet have the Arduino IDE installed, you can download it from Arduino.cc
Plug the Nano into the MicroUSB cable and the other end of the cable into a USB port on the computer. Launch the Arduino IDE software. Select "Arduino Nano" in the IDE under tools>board and "ATmega328P (old bootloader)" under tools>processor. Select the appropriate USB port under tools>port (it is likely a name with "wchusb" in it).
Finally, load up a piece of example code:
File->Examples->Basics->Blink
Upload the sketch and the Nano's on-board LED should blink.
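For reference, the stock Blink example boils down to a few lines (a sketch of the bundled version, reproduced from memory); change the delay() arguments to vary the pattern:

```cpp
// Blink: toggle the on-board LED (pin 13 on the Nano) once per second.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(1000);  // on time in ms; tweak these to change the pattern
  digitalWrite(LED_BUILTIN, LOW);
  delay(1000);  // off time in ms
}
```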
Now that you have confirmed operation of the Nano module, go ahead and solder the header pins onto it. Once the headers are connected, the module can be easily used in one of the solderless breadboards of the MCU Lab. This process of testing out an MCU module by downloading some simple test code, modifying, and downloading again is a best practice whenever using a new, or different type, MCU module.
If you would like additional introductory information for working in the Arduino ecosystem, we suggest checking out the Guide for the HackerBoxes Starter Workshop, which includes several examples and a link to a PDF Arduino Textbook.
Step 5: Explore MCU Lab With Arduino Nano
POTENTIOMETER
Connect center pin of potentiometer to Nano Pin A0.
Load and Run: Examples > Analog > AnalogInput
The example defaults to the Nano's onboard LED. Turn the potentiometer to change the blink speed.
Modify:
In the code, change ledPin = 13 to ledPin = 4
Jumper from Nano Pin 4 (and GND) to one of the red LEDs of the MCU Lab.
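Put together, the modified example looks roughly like this (a sketch based on the stock AnalogInput example; A0 and pin 4 are the hookups described above):

```cpp
// Pot on A0 sets the blink rate of the external red LED on pin 4.
int sensorPin = A0;  // potentiometer wiper
int ledPin = 4;      // MCU Lab red LED

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int sensorValue = analogRead(sensorPin);  // 0..1023
  digitalWrite(ledPin, HIGH);
  delay(sensorValue);   // longer delay as the pot is turned up
  digitalWrite(ledPin, LOW);
  delay(sensorValue);
}
```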
BUZZER
Jumper from Buzzer to Nano Pin 8. Be sure the board GND is connected to the GND of the powered Nano since the buzzer ground is hard wired to the board GND net.
Load and Run: Examples > Digital > toneMelody
OLED DISPLAY
In the Arduino IDE, use the library manager to install "ssd1306" from Alexey Dynda.
Connect OLED: GND to GND, VCC to 5V, SCL to Nano's A5, SDA to Nano's A4
Load and Run: Examples > ssd1306 > demos > ssd1306_demo
WS2812B RGB LED
In the Arduino IDE, use the library manager to install FastLED
Connect the WS2812's header pin to the Nano's pin 5.
Load: Examples > FastLED > ColorPalette
Change NUM_LEDS to 1 and LED_TYPE to WS2812B
Compile and Run
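If you prefer a minimal standalone sketch over the ColorPalette example, something like this works (an illustrative sketch; pin 5 matches the wiring above, and GRB color order is an assumption to adjust if your colors look swapped):

```cpp
// Slowly cycle the single WS2812B through the color wheel.
#include <FastLED.h>

#define DATA_PIN 5
#define NUM_LEDS 1

CRGB leds[NUM_LEDS];

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
  FastLED.setBrightness(64);  // keep it easy on the eyes
}

void loop() {
  static uint8_t hue = 0;
  leds[0] = CHSV(hue++, 255, 255);  // step the hue each pass
  FastLED.show();
  delay(20);
}
```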
WRITE SOME CODE TO EXERCISE THE BUTTONS AND SWITCHES
Remember to use pinMode(INPUT_PULLUP) to read a button without adding a resistor.
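A minimal starting point might look like this (illustrative pin choices: a button between pin 2 and GND, plus the red LED on pin 4 as before):

```cpp
// With INPUT_PULLUP, the pin idles HIGH and reads LOW when pressed.
const int buttonPin = 2;
const int ledPin = 4;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);  // no external resistor needed
  pinMode(ledPin, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(buttonPin) == LOW);
  digitalWrite(ledPin, pressed ? HIGH : LOW);  // LED mirrors the button
}
```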
COMBINE SOME OF THESE EXAMPLES TOGETHER
For example, cycle outputs in some interesting way and show states or input values on the OLED or serial monitor.
Step 6: WEMOS ESP32 Lite
The ESP32 microcontroller (MCU) is a low-cost, low-power system on a chip (SOC) with integrated Wi-Fi and dual-mode Bluetooth. The ESP32 employs a Tensilica Xtensa LX6 core and includes built-in antenna switches, RF balun, power amplifier, low-noise receive amplifier, filters, and power-management modules. (wikipedia)
The WEMOS ESP32 Lite module is more compact than the previous version which makes it easier to use on a solderless breadboard.
Make your initial test of the WEMOS ESP32 module before soldering the header pins onto the module.
Set up the ESP32 support package in the Arduino IDE.
Under tools>board, be sure to select the "WeMos LOLIN32"
Load the example code at Files>Examples>Basics>Blink and program it to the WeMos LOLIN32
The example program should cause the LED on the module to blink. Experiment with modifying the delay parameters to make the LED blink with different patterns. This is always a good exercise to build confidence in programming a new microcontroller module.
Once you are comfortable with the module's operation and how to program it, carefully solder the two rows of header pins into place and test loading programs once again.
Step 7: ESP32 Video Generation
This video demonstrates the ESP32 VGA Library and a very nice, simple tutorial from bitluni's lab.
The demonstrated 3-bit implementation (8 colors) uses direct wire jumpers between the ESP32 module and the VGA connector. Making these connections on the MCU Lab's VGA header is fairly easy since no additional components are involved.
Depending upon which MCU is in use, its voltage level, the pixel resolutions, and the desired color-depths, there are various combinations of inline resistors and resistor networks that may be placed between the MCU and the VGA header. If you decide to permanently use inline resistors, they can be soldered onto the MCU Lab PCB. If you would like to maintain flexibility and especially if you want to use more complex solutions, then it is recommended to not solder any resistors into place and simply use using the solderless boards and VGA header to connect up the necessary resistors.
For example, to implement bituni's 14-bit color mode shown at the end of the video, the ESP32 module can be positioned onto one of the mini solderless boards and the other solderless board can be used to connect up the resistors ladders.
Here are some other examples:
In HackerBox 0047 an Arduino Nano drives a simple VGA output with 4 resistors.
A VIC20 Emulator is implemented on ESP32 using FabGL and 6 resistors.
Implement a BASIC PC using ESP32 and 3 resistors.
Play Space Invaders on ESP32 using FabGL and 6 resistors.
Generate VGA output on STM32 with 6 resistors.
Simultaneous Text and Graphics layers on STM32 with Video demonstration.
Step 8: STM32F103C8T6 Black Pill MCU Module
The Black Pill is an STM32-based MCU module.
Programming the STM32 from Arduino IDE.
Step 9: TXS0108E 8-Bit Logic Level Shifter
The TXS0108E (datasheet) is an 8-Bit Bidirectional Logic Level Shifter. The module is set up to level-shift signals between 3.3V and 5V.
Since the signal level channels are bidirectional, floating inputs can cause the corresponding outputs to be unintentionally driven. An output enable (OE) control is provided to protect in such scenarios. Care should be taken depending upon how the shifter is connected to make sure that an output from the shifter (either "intentional" or due to a floating input on the other side) is never allowed to cross-drive an output from another device.
The OE pin is left disconnected in the PCB traces. A two-pin header is provided below the module for connecting OE and 3V3. Shorting the two-pin header (using a piece of wire or a jumper block) connects OE to 3V3 which enables the IC to drive its outputs. A pulldown resistor and logic control can also be connected to the OE pin.
Step 10: HackLife
We hope you are enjoying this month's HackerBox adventure into electronics and computer technology. Reach out and share your success in the comments below or on the HackerBoxes.
Tip 3 days ago on Step 5
Tip:
Use the FT232RL USB serial adapter board to program your MCU board (i.e. NANO V3):
Connect RX from the FT232RL to the TX of your MCU board.
Connect TX from the FT232RL to the RX of your MCU board.
Don't forget power and ground.
4 days ago
Here's an easy demo code using the potentiometer and the OLED Display with the ESP32 Wemos Lolin32 board:
Using Library:... (install from Library manager)
Demo video:
Code:
#include "SSD1306Wire.h"
#include <Wire.h>

// Initialize the OLED display
SSD1306Wire display(0x3c, 5, 4); // On ESP32 Lolin: SDA=5 SCL=4

void setup() {
  // Initializing the display
  display.init();
  display.flipScreenVertically();
}

void loop() {
  display.clear();
  int potval = analogRead(2);
  String val = String(potval);
  display.setFont(ArialMT_Plain_24);
  display.drawString(0, 10, "Value: " + val);
  display.display();
  delay(10);
}
4 days ago
OLED Demo on the Lolin32
Question 7 days ago
Trying to upload Blink on Wemos ESP32 Lite, but keep stalling at the message "Hard resetting via RTS pin..." Any Suggestions?
Answer 6 days ago
Actually, the program is uploaded just fine, but no built-in LED is flashing. I changed the LED pin to an external LED, on my fancy new test lab :), and it worked. Naming pin 22, expressly, also worked for the built-in LED.
Reply 5 days ago
Do a little googling. You have to modify the code for the WEMOS, the default Blink code is for Arduinos.
6 days ago on Step 8
The link is for working with the blue pill. Since we have the black pill I found this tutorial clearer:
Subject: Re: [boost] [any] new version
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2011-09-05 07:18:26
On 02/09/2011 20:58, Martin Bidlingmaier wrote:
> So there are actually two main issues that need to be taken care of:
>
> - 1. dlls
> I think a good solution would be to add a new class, e.g. as
> Andrey Semashev proposed 'local_any', that derives from the current
> version of any.
> The normal version could be used for inter dll passing, the new version
> when the user can assure that a local_any instance won't be passed between
> dlls.
> Conversion could be provided from local_any to any.
I don't think Boost libraries should be designed for a single platform.
> - 2. thread safety
> In c++03, variables defined at namespace scope are not necessarily
> initialized before main()
Yes they are.
> , so there could be data races for the
> previous_id in next_id() in multi threaded applications.
That's not namespace scope, that's function scope.
The C++11 standard (the only one aware of multithreading) mandates that
initialization of static variables at function scope is thread-safe.
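For illustration, here is a minimal sketch of such a function-scope static (the function name mirrors the next_id() discussed in this thread; note that only the initialization is guaranteed thread-safe — the increment below would still need a mutex or an atomic under concurrent use):

```cpp
#include <cassert>

// C++11 "magic statics": if several threads enter this function
// concurrently while previous_id is being initialized, all but one
// wait for the initialization to complete.
int next_id()
{
    static int previous_id = 0;  // initialized exactly once, thread-safely
    return ++previous_id;        // the increment itself is NOT synchronized
}
```

With the initialization guarantee in place, the remaining race in this sketch is the increment, which could be addressed by making the counter a static std::atomic<int>.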
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/09/185537.php | CC-MAIN-2019-22 | refinedweb | 208 | 67.45 |
5.2: CardCollection
Here’s the beginning of a CardCollection class that uses ArrayList instead of a primitive array:

public class CardCollection {

    private String label;
    private ArrayList<Card> cards;

    public CardCollection(String label) {
        this.label = label;
        this.cards = new ArrayList<Card>();
    }
}
When you declare an ArrayList, you specify the type it contains in angle brackets (<>). This declaration says that cards is not just an ArrayList, it’s an ArrayList of Card objects.

The constructor takes a string as an argument and assigns it to an instance variable, label. It also initializes cards with an empty ArrayList.

ArrayList provides a method, add, that adds an element to the collection. We will write a CardCollection method that does the same thing:

public void addCard(Card card) {
    this.cards.add(card);
}
Until now, we have used this explicitly to make it easy to identify attributes. Inside addCard and other instance methods, you can access instance variables without using the keyword this. So from here on, we will drop it:

public void addCard(Card card) {
    cards.add(card);
}
We also need to be able to remove cards from a collection. The following method takes an index, removes the card at that location, and shifts the following cards left to fill the gap:

public Card popCard(int i) {
    return cards.remove(i);
}
If we are dealing cards from a shuffled deck, we don’t care which card gets removed. It is most efficient to choose the last one, so we don’t have to shift any following cards. Here is an overloaded version of popCard that removes and returns the last card:

public Card popCard() {
    int i = size() - 1;
    return popCard(i);
}
Notice that popCard uses CardCollection’s own size method, which in turn calls the ArrayList’s size method:

public int size() {
    return cards.size();
}
For convenience, CardCollection also provides an empty method that returns true when size is zero:

public boolean empty() {
    return cards.size() == 0;
}
Methods like addCard, popCard, and size, which invoke another method without doing much additional work, are called wrapper methods. We will use these wrapper methods to implement less trivial methods, like deal:

public void deal(CardCollection that, int n) {
    for (int i = 0; i < n; i++) {
        Card card = popCard();
        that.addCard(card);
    }
}
The deal method removes cards from the collection it is invoked on, this, and adds them to the collection it gets as a parameter, that. The second parameter, n, is the number of cards to deal.

To access the elements of an ArrayList, you can’t use the array [] operator. Instead, you have to use the methods get and set. Here is a wrapper for get:

public Card getCard(int i) {
    return cards.get(i);
}
The last method gets the last card (but doesn’t remove it):

public Card last() {
    int i = size() - 1;
    return cards.get(i);
}
In order to control the ways card collections are modified, we don’t provide a wrapper for set. The only modifiers we provide are the two versions of popCard and the following version of swapCards:

public void swapCards(int i, int j) {
    Card temp = cards.get(i);
    cards.set(i, cards.get(j));
    cards.set(j, temp);
}
We use swapCards to implement shuffle, which we described in Section 13.2:

public void shuffle() {
    Random random = new Random();
    for (int i = size() - 1; i > 0; i--) {
        int j = random.nextInt(i + 1);  // random index from 0 to i, inclusive
        swapCards(i, j);
    }
}
ArrayList provides additional methods we aren’t using here. You can read about them in the documentation, which you can find by doing a web search for “Java ArrayList”. | https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Think_Java_-_How_to_Think_Like_a_Computer_Scientist/05%3A_Objects_of_Objects/5.02%3A_CardCollection | CC-MAIN-2021-17 | refinedweb | 598 | 61.97 |
Hi,
Today we have migrated AIR SDK from 3.6 to 3.7. After migration we got lot of issues related to font embedding:
The above mentioned font is working fine with Flex 4.6 + AIR 3.6 SDK.
We are facing same kind of problem, please help
Guys.. I am also facing such a issue...Is anyone having a solution to this issue?? Any help will be highly appreciated.
I'm not sure all AIR SDKs are set up as overlays for a Flex SDK. Make sure the font libraries are listed in your flex-config.xml and those libraries/jars actually exist.
We were also facing the same issue. The issue was to do with merging of Flex SDK and AIR SDK. The airmobile-config.xml file in the frameworks folder of the AIR 3.8 SDK was 6 KB where as the same file in the 3.5 SDK was 15 KB. Compared them both and realized a lot of namespaces and libraries have not been defined in the new one.
Replaced the new airmobile-config.xml with the one in the 3.5 SDK and changed the value of "target-player" node to 11.8 and "swf-version" node to 21.
This solved the issue and I was able to compile the fonts correctly. | https://forums.adobe.com/thread/1248408 | CC-MAIN-2017-30 | refinedweb | 218 | 79.26 |
Quite often you want to know who owns a given
domain. To obtain the registry information, you go to the respective registry
and start a so-called WHOIS query (lookup). The trick is that you have to know
which registry is responsible for which TLD (Top Level Domain).
The database is the so-called WHOIS database and it has one distinct property:
it provides us with a query interface via TCP port 43! And as the .NET framework
provides us with the TCPClient class, we can use this interface to directly
obtain our data.
The following example is a minimal implementation of a WHOIS lookup (whois.aspx):
<% @Page Language="VB" %>
<% @Assembly Name="System.Net" %>
<% @Import Namespace="System.Net.Sockets" %>
<% @Import Namespace="System.Text" %>
<% @Import Namespace="System.IO" %>
<%
Dim tcpc As New TCPClient()
If 0 = tcpc.Connect("whois.networksolutions.com", 43) Then
Dim strDomain As [String] = "microsoft.com" + ControlChars.Cr + ControlChars.Lf
Dim arrDomain As [Byte]() = Encoding.ASCII.GetBytes(strDomain.ToCharArray())
Dim s As Stream = tcpc.GetStream()
s.Write(arrDomain, 0, strDomain.Length)
Dim sr As New StreamReader(tcpc.GetStream(), Encoding.ASCII)
While - 1 <> sr.Peek()
Response.Write((sr.ReadLine() + "<br>"))
End While
tcpc.Close()
Else
Response.Write("Could not connect to WHOIS server!")
End If
%>
To be able to work with the TCPClient class, we need the System.Net.Sockets namespace. We also need to import namespaces like System.Text and System.IO. The TCPClient class is doing most of the work in this example.
The SeqAn FM Index.
#include <seqan3/search/fm_index/fm_index.hpp>
The SeqAn FM Index.
The seqan3::fm_index is a fast and space-efficient string index to search strings and collections of strings.
Here is a short example of how to build an index and search for a pattern using a cursor. Please note that there is a very powerful search module with a high-level interface, seqan3::search, that encapsulates the use of cursors.
Here is an example using a collection of strings (e.g. a genome with multiple chromosomes or a protein database):
Constructor that immediately constructs the index given a range. The range cannot be empty.
At least linear.
Returns a seqan3::fm_index_cursor on the index that can be used for searching.
Constant.
No-throw guarantee.
Checks whether the index is empty.
true if the index is empty, false otherwise.
Constant.
No-throw guarantee.
Compares two indices.
true if the indices are unequal, false otherwise.
Linear.
No-throw guarantee.
Compares two indices.
true if the indices are equal, false otherwise.
Linear.
No-throw guarantee.
Returns the length of the indexed text including sentinel characters.
Constant.
No-throw guarantee. | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1fm__index.html | CC-MAIN-2021-21 | refinedweb | 186 | 61.22 |
I have a simple one page site with one button, and a dataset linked to a database; and a table linked to that dataset: peterjanderson6724.wixsite.com/website-1. Try it.
The page code is:
import wixData from 'wix-data';

$w.onReady(function () {
    $w("#dataset1").onReady(() => {
        $w('#table1').refresh()
    });
});

export function writeTestData_click(event) {
    let dateNow = new Date()
    let toInsert = {
        "testField": 'AAAA',
        'comment': 'Test Insert',
        'testDateText': dateNow.toString()
    };
    wixData.insert("TestData", toInsert)
        .then((results2) => {
            $w('#dataset1').refresh()
            console.log('INSERT SUCCESS', results2);
        })
        .catch((err) => {
            let errorMsg = err;
            console.log('INSERT FAIL', errorMsg);
        })
}
and there is a hook on the database 'TestData' as follows:
export function TestData_beforeInsert(item, context) {
    item.testField = 'BBBB'
    console.log('Data hook fired', item)
    return item;
}
So what I expect to happen is that the page code inserts an item into the database with testField set to 'AAAA', and the hook changes the value of testField to 'BBBB.
This works fine in Preview, where all items in the database are written with testField value 'BBBB'.
But on the live site all items in the database have testField value 'AAAA'.
Why is the hook not firing on the live site?
Where am I going wrong?
Hi Peter,
I tried your code EXACTLY as it is. I created the same DB Collection, Database columns too.
However it is working for me but I did add an async/await function for the data.js file
The code which works for me is this:
page code
data.js
Are you suggesting that every assignment in a hook needs an 'await' ??
@Peter Anderson I just removed the async/await function but its still working for me. No its not needed but I just tried to handle the promise more efficiently before returning the item.
Hey, I tried to reproduce on a new site but it's working for me. I'll try to play with your site.
See:
If its the 'await' that solves the problem then perhaps the Wix documentation should make that clear?
Peter,
It is entirely possible that in the midst of all these conversations, Wix has done some work behind the scenes and fixed the underlying problem. To Shan's point above, handling promises is important, but making the data hook an async function does not change whether the Wix code calls it or not.
My solution to avoid future issues with data.js: i copied the various data.js functions into my own backend functions (.jsw) and then added page code to all them myself. I did not change the code in the functions copied from data.js. Everything now works as expected.
@shan Hold on. We have not got it fixed. I have added the missing semi colons, added 'async' and 'await' to the hook, and I still see the same problem
@shan @Dave Bogan @Peter Anderson
You are all circling around the correct answer. I think the key is in the API documentation which isn't very clear.
What this is saying is that the item that is returned to the insert function needs to be a Promise. Making the function Async kind of makes that happen but adding await in front of a String assignment isn't needed.
The issue is when you encounter an error or exception in this function you need to force the correct behaviour. The API says you need to return a rejected promise - which is your clue :-).
To force the result to be a Promise the easy thing to do is return a Promise.resolve(). @shan essentially forced a Promise wrapper on the function by adding async.
So your return code if changed to
Will probably do the trick.
Steve
P.
@Steve Cropper HOORAY! Your Promise.resolve trick works. Presumably that should be in all our data hooks?
Can you please try to make a small change in data.js (even adding space), publish your site and try again?
Will do
Made a simple change to the hook. Same problem. Look at my site and you will see the new item I added using the dashboard editor has fired the hook
peterjanderson6724.wixsite.com/website-1
@Peter Anderson Thanks, we're checking.
The problem has returned in the last ten minutes
Did you publish your site again with other changes?
EDIT: we are regardless working to fix the problem in the background.
@Tomer (Wix) I have just edited the hook as follows, and republished. Same problem.
@Peter Anderson Thanks, we are still checking, I appreciate your patience.
Hi everyone, the fix is fully deployed.
Thank you again for your patience, we truly appreciate it!
Great! On to the next bug ...
Peter. Can you add a post to make this thread clear to future readers? What was actually done to get your hook working? Shan says make your hook async, then Steve says better to use Promise.resolve(), then Tomer gets involved and the Promise disappears, and he apparently does some magic behind the scenes. I am having the same problem that my afterQuery hook does not seem to run, and I have no idea what to do!
What was the final outcome of this? I created a dataset that uses a beforeInsert hook. It works once and then I get an error message when I try to add another order. Here is my code:
export async function Orders_beforeInsert(item, orderNumber1) {
    //TODO: write your code here...
    let hook = orderNumber1;
    item.orderNumber1 = await randomNumber(5);
    console.log(orderNumber1);
    return Promise.resolve(item);
}

function randomNumber (len) {
    var n = '';
    for(var count = 0; count < len; count++) {
        randomNumber = Math.floor(Math.random() * 10);
        n += randomNumber.toString();
        console.log(n);
    }
    return n;
}
Any help is appreciated. The code creates a random number that is inserted (by the hook) to my order number 1 field in my dataset.
link to my site.
This code works fine in preview mode.
Thanks
Greg | https://www.wix.com/corvid/forum/community-discussion/data-hook-not-working-on-live-site-but-working-ok-in-preview | CC-MAIN-2019-47 | refinedweb | 975 | 75.91 |
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Like MPL, the Sequence is a fundamental concept in Fusion. A Sequence may or may not actually store or contain data. Containers are sequences that hold data. Views, on the other hand, are sequences that do not store any data. Instead, they are proxies that impart an alternative presentation over another sequence. All models of Sequence have an associated Iterator type that can be used to iterate through the Sequence's elements.
#include <boost/fusion/sequence.hpp> #include <boost/fusion/include/sequence.hpp> | http://www.boost.org/doc/libs/1_52_0_beta1/libs/fusion/doc/html/fusion/sequence.html | CC-MAIN-2013-48 | refinedweb | 106 | 50.53 |
04 January 2013 17:41 [Source: ICIS news]
LONDON (ICIS)--Crude oil recuperated some losses on Friday after the US Energy Information Administration (EIA) issued its weekly stock report showing a much larger-than-expected draw in crude oil stocks in the week ending 28 December.
Before the report was issued, the front-month February ICE Brent contract was trading around $111.15/bbl, a loss of 99 cents compared to the previous settlement. The contract then edged higher to trade around $111.45/bbl shortly after the report was issued.
The front-month NYMEX WTI contract also edged higher. Before the report was published, the contract was trading around $92.70/bbl, a loss of 22 cents/bbl compared to the settlement on Thursday. The contract then edged higher to trade around $92.80/bbl around 10 minutes after the report was issued.
Analysts’ predictions for this week’s US stock figures were that they would show a draw on crude stocks of about 900,000 bbl, a build on distillate of around 1.40m bbl and a build on gasoline of around 2.00m bbl.
The American Petroleum Institute (API) figures were published late | http://www.icis.com/Articles/2013/01/04/9628935/crude-oil-futures-recuperate-losses-on-us-stock-data.html | CC-MAIN-2014-41 | refinedweb | 198 | 66.13 |
carbon-c-relay man page
carbon-c-relay -- graphite relay, aggregator and rewriter
Synopsis
carbon-c-relay -f config-file [ options ... ]
Description
carbon-c-relay accepts, cleanses, matches, rewrites, forwards and aggregates graphite metrics by listening for incoming connections and relaying the messages to other servers defined in its configuration. The core functionality is to route messages via flexible rules to the desired destinations.
carbon-c-relay is a simple program that reads its routing information from a file. The command line arguments allow to set the location for this file, as well as the amount of dispatchers (worker threads) to use for reading the data from incoming connections and passing them onto the right destination(s). The route file supports two main constructs: clusters and matches. The first define groups of hosts data metrics can be sent to, the latter define which metrics should be sent to which cluster. Aggregation rules are seen as matches.
For every metric received by the relay, cleansing is performed. The following changes are performed before any match, aggregate or rewrite rule sees the metric:
- double dot elimination (necessary for correctly functioning consistent hash routing)
- trailing/leading dot elimination
- whitespace normalisation (this mostly affects output of the relay to other targets: metric, value and timestamp will be separated by a single space only, ever)
- irregular char replacement with underscores (_), currently irregular is defined as not being in [0-9a-zA-Z-_:#], but can be overridden on the command line.
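To illustrate (the metric name is invented), the rules above would transform a received line roughly as follows — double dots collapsed, the leading dot dropped, the irregular $ replaced by an underscore, and whitespace normalised:

```
received:  .foo..bar$baz   42  1470000000
cleansed:  foo.bar_baz 42 1470000000
```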
Options
These options control the behaviour of carbon-c-relay.
- -v: Print version string and exit.
- -d: Enable debug mode, this prints statistics to stdout and prints extra messages about some situations encountered by the relay that normally would be too verbose to be enabled. When combined with -t (test mode) this also prints stub routes and consistent-hash ring contents.
- -s: Enable submission mode. In this mode, internal statistics are not generated. Instead, queue pressure and metric drops are reported on stdout. This mode is useful when used as a submission relay whose job is just to forward to (a set of) main relays. Statistics about the submission relays are in this case not needed, and could easily cause an undesired flood of metrics, e.g. when used locally on each and every host.
- -t: Test mode. This mode doesn´t do any routing at all, but instead reads input from stdin and prints what actions would be taken given the loaded configuration. This mode is very useful for testing relay routes for regular expression syntax etc. It also allows to give insight on how routing is applied in complex configurations, for it shows rewrites and aggregates taking place as well. When -t is repeated, the relay will only test the configuration for validity and exit immediately afterwards. Any standard output is suppressed in this mode, making it ideal for start-scripts to test a (new) configuration.
- -D: Deamonise into the background after startup. This option requires -l and -P flags to be set as well.
- -f config-file: Read configuration from config-file. A configuration consists of clusters and routes. See Configuration Syntax for more information on the options and syntax of this file.
- -l log-file: Use log-file for writing messages. Without this option, the relay writes both to stdout and stderr. When logging to file, all messages are prefixed with MSG when they were sent to stdout, and ERR when they were sent to stderr.
- -p port: Listen for connections on port port. The port number is used for both TCP, UDP and UNIX sockets. In the latter case, the socket file contains the port number. The port defaults to 2003, which is also used by the original carbon-cache.py. Note that this only applies to the defaults, when listen directives are in the config, this setting is ignored.
- -w workers: Use workers number of threads. The default number of workers is equal to the amount of detected CPU cores. It makes sense to reduce this number on many-core machines, or when the traffic is low.
- -b batchsize: Set the amount of metrics that sent to remote servers at once to batchsize. When the relay sends metrics to servers, it will retrieve batchsize metrics from the pending queue of metrics waiting for that server and send those one by one. The size of the batch will have minimal impact on sending performance, but it controls the amount of lock-contention on the queue. The default is 2500.
- -q queuesize: Each server from the configuration where the relay will send metrics to, has a queue associated with it. This queue allows for disruptions and bursts to be handled. The size of this queue will be set to queuesize which allows for that amount of metrics to be stored in the queue before it overflows, and the relay starts dropping metrics. The larger the queue, more metrics can be absorbed, but also more memory will be used by the relay. The default queue size is 25000.
- -L stalls: Sets the max mount of stalls to stalls before the relay starts dropping metrics for a server. When a queue fills up, the relay uses a mechanism called stalling to signal the client (writing to the relay) of this event. In particular when the client sends a large amount of metrics in very short time (burst), stalling can help to avoid dropping metrics, since the client just needs to slow down for a bit, which in many cases is possible (e.g. when catting a file with nc(1)). However, this behaviour can also obstruct, artificially stalling writers which cannot stop that easily. For this the stalls can be set from 0 to 15, where each stall can take around 1 second on the client. The default value is set to 4, which is aimed at the occasional disruption scenario and max effort to not loose metrics with moderate slowing down of clients.
- -B backlog: Sets TCP connection listen backlog to backlog connections. The default value is 32 but on servers which receive many concurrent connections, this setting likely needs to be increased to avoid connection refused errors on the clients.
- -U bufsize: Sets the socket send/receive buffer sizes in bytes, for both TCP and UDP scenarios. When unset, the OS default is used. The maximum is also determined by the OS. The sizes are set using setsockopt with the flags SO_RCVBUF and SO_SNDBUF. Setting this size may be necessary for large volume scenarios, for which also -B might apply. Checking the Recv-Q and the receive errors values from netstat gives a good hint about buffer usage.
- -T timeout: Specifies the IO timeout in milliseconds used for server connections. The default is 600 milliseconds, but may need increasing when WAN links are used for target servers. A relatively low value for connection timeout allows the relay to quickly establish a server is unreachable, and as such failover strategies to kick in before the queue runs high.
- -c chars: Defines the characters that are next to [A-Za-z0-9] allowed in metrics to chars. Any character not in this list, is replaced by the relay with _ (underscore). The default list of allowed characters is -_:#.
- -H hostname: Override hostname determined by a call to gethostname(3) with hostname. The hostname is used mainly in the statistics metrics carbon.relays.<hostname>.<...> sent by the relay.
- -P pidfile: Write the pid of the relay process to a file called pidfile. This is in particular useful when daemonised in combination with init managers.
- -O threshold: The minimum number of rules to find before trying to optimise the ruleset. The default is 50, to disable the optimiser, use -1, to always run the optimiser use 0. The optimiser tries to group rules to avoid spending excessive time on matching expressions.
Configuration Syntax
The config file supports the following syntax, where comments start with a # character and can appear at any position on a line and suppress input until the end of that line:
```
cluster <name>
    < <forward | any_of | failover> [useall] |
      <carbon_ch | fnv1a_ch | jump_fnv1a_ch> [replication <count>] >
        <host[:port][=instance] [proto <udp | tcp>]
                                [type linemode]
                                [transport <gzip | lz4 | ssl>]> ...
    ;

cluster <name>
    file [ip]
        </path/to/file> ...
    ;

match
        <* | expression ...>
    [validate <expression> else <log | drop>]
    send to <cluster ... | blackhole>
    [stop]
    ;

rewrite <expression>
    into <replacement>
    ;

aggregate
        <expression> ...
    every <interval> seconds
    expire after <expiration> seconds
    [timestamp at <start | middle | end> of bucket]
    compute <sum | count | max | min | average |
             median | percentile<%> | variance | stddev>
        write to <metric>
    [compute ...]
    [send to <cluster ...>]
    [stop]
    ;

send statistics to <cluster ...>
    [stop]
    ;

statistics
    [submit every <interval> seconds]
    [reset counters after interval]
    [prefix with <prefix>]
    [send to <cluster ...>]
    [stop]
    ;

listen
    type linemode [transport <gzip | lz4 | ssl <pemcert>>]
        <<interface[:port] | port> proto <udp | tcp>> ...
        </path/to/file proto unix> ...
    ;

include </path/to/file/or/glob>
    ;
```
Clusters
Multiple clusters can be defined, and need not to be referenced by a match rule. All clusters point to one or more hosts, except the file cluster which writes to files in the local filesystem. host may be an IPv4 or IPv6 address, or a hostname. Since host is followed by an optional : and port, for IPv6 addresses not to be interpreted wrongly, either a port must be given, or the IPv6 address surrounded by brackets, e.g. [::1]. Optional transport and proto clauses can be used to wrap the connection in a compression or encryption later or specify the use of UDP or TCP to connect to the remote server. When omitted the connection defaults to an unwrapped TCP connection. type can only be linemode at the moment.
The forward and file clusters simply send everything they receive to all defined members (host addresses or files). The any_of cluster is a small variant of the forward cluster: instead of sending to all defined members, it sends each incoming metric to one of them. This is not much use in itself, but since any of the members can receive each metric, it means that when one member is unreachable, the remaining members will receive all of the metrics. This can be useful when the cluster points to other relays. The any_of router tries to send the same metrics consistently to the same destination. The failover cluster is like the any_of cluster, but sticks to the order in which servers are defined. This is to implement a pure failover scenario between servers.

The carbon_ch cluster sends each metric to the member that is responsible according to the consistent hash algorithm (as used in the original carbon), or to multiple members if replication is set to more than 1. The fnv1a_ch cluster is identical in behaviour to carbon_ch, but uses a different hash technique (FNV1a) which is faster and, more importantly, defined to get by a limitation of carbon_ch by using both host and port of the members. This is useful when multiple targets live on the same host, separated only by port. The instance that the original carbon uses to get around this can be set by appending it after the port, separated by an equals sign, e.g. 127.0.0.1:2006=a for instance a. When using the fnv1a_ch cluster, this instance overrides the hash key in use. This allows for many things, including masquerading old IP addresses, but mostly it makes the hash key location agnostic of the (physical) location of that key. For example, usage like 10.0.0.1:2003=4d79d13554fa1301476c1f9fe968b0ac allows changing the port and/or IP address of the server that receives data for the instance key. Obviously, this way migration of data can be dealt with much more conveniently.
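A sketch pulling these pieces together (all addresses and instance names here are invented): a plain forwarding cluster, and a consistent-hash cluster that stores every metric on two of its three members:

```
cluster send-through
    forward
        10.1.0.1:2003
    ;

cluster metrics
    fnv1a_ch replication 2
        10.0.0.1:2003=a
        10.0.0.1:2004=b
        10.0.0.2:2003=c
    ;
```

The explicit instance names also fix the hash positions, so a member can later be moved to another host or port without remapping its data.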
The jump_fnv1a_ch cluster is also a consistent hash cluster like the previous two, but it does not take the server information into account at all. Whether this is useful to you depends on your scenario. The jump hash has a much better balancing over the servers defined in the cluster, at the expense of not being able to remove any server but the last in order. This means that the jump hash is fine to use with ever-growing clusters where older nodes are also replaced at some point. If you have a cluster where removal of old nodes takes place often, the jump hash is not suitable for you. Jump hash works with servers in an ordered list without gaps. To influence the ordering, the instance given to the server is used as sorting key. Without instances, the order is as given in the file. It is good practice to fix the order of the servers with instances, such that it is explicit which nodes take which position in the jump hash.
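To make the ordering explicit, the instance can be used as sorting key. A minimal sketch (addresses and instance numbers are hypothetical):

```
cluster ever-growing
    jump_fnv1a_ch
        10.0.0.1:2003=0
        10.0.0.2:2003=1
        10.0.0.3:2003=2
    ;
```

New servers can then be appended with a higher instance number, leaving the position of existing nodes in the jump hash untouched.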
DNS hostnames are resolved to a single address, according to the preference rules in RFC 3484. The any_of, failover and forward clusters have an explicit useall flag that enables expansion for hostnames resolving to multiple addresses. Each address returned becomes a cluster destination.
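A minimal sketch of the useall flag (the hostname is hypothetical): if relays.example.com resolves to three addresses, the cluster behaves as if three members had been listed.

```
cluster relay-pool
    any_of useall
        relays.example.com:2003
    ;
```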
Matches
Match rules are the way to direct incoming metrics to one or more clusters. Match rules are processed top to bottom, in the order they are defined in the file. It is possible to define multiple match expressions in the same rule. Each match rule can send data to one or more clusters. Since match rules "fall through" unless the stop keyword is added, carefully crafted match expressions can be used to target multiple clusters or aggregations. This makes it possible to replicate metrics, as well as to send certain metrics to alternative clusters, with careful ordering and usage of the stop keyword. The special cluster blackhole discards any metrics sent to it. This can be useful for weeding out unwanted metrics in certain cases. Because throwing metrics away is pointless if other matches would accept the same data, a match whose destination is the blackhole cluster has an implicit stop. The validate clause adds a check on the data (what comes after the metric name) in the form of a regular expression. When this expression matches, the match rule executes as if no validate clause were present. However, if it fails, the match rule is aborted and no metrics are sent to destinations; this is the drop behaviour. When log is used instead, the offending metric is logged to stderr. Care should be taken with the latter to avoid log flooding. When a validate clause is present, destinations need not be present; this allows applying a global validation rule. Note that the cleansing rules are applied before validation is done, so the data will not contain duplicate spaces. The route using clause performs a temporary modification of the key used as input to the consistent hashing routines. Its primary purpose is to route traffic such that related data is sent to the appropriate aggregation instances.
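As an illustrative sketch of the route using clause (the expression and cluster name are hypothetical, and the exact semantics should be checked against your version): here the first two name components are used as hash key, so all metrics of one host end up on the same aggregation relay.

```
match ^sys\.
    route using ^([^.]+\.[^.]+)
    send to aggregation-relays
    stop
    ;
```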
Rewrites
Rewrite rules take a regular expression as input to match incoming metrics, and transform them into the desired new metric name. In the replacement, backreferences are allowed to refer to capture groups defined in the input regular expression. A match of server\.(x|y|z)\. allows using e.g. role.\1. in the substitution. A few caveats apply to the current implementation of rewrite rules. First, their location in the config file determines when the rewrite is performed. The rewrite is done in-place; as such, a match rule before the rewrite matches the original name, while a match rule after the rewrite no longer matches the original name. Care should be taken with the ordering, as multiple rewrite rules in succession can take effect, e.g. a gets replaced by b, and b gets replaced by c in a succeeding rewrite rule. The second caveat of the current implementation is that rewritten metric names are not cleansed, like newly incoming metrics are. Thus, double dots and potentially dangerous characters can appear if the replacement string is crafted to produce them. It is the responsibility of the writer to make sure the metrics are clean. If this is an issue for routing, one can consider a rewrite-only instance that forwards all metrics to another instance that does the routing. Obviously the second instance will cleanse the metrics as they come in. The backreference notation allows lowercasing and uppercasing the replacement string with the use of the underscore (_) and caret (^) symbols directly following the backslash. For example, role.\_1. as substitution will lowercase the contents of \1. The dot (.) can be used in a similar fashion, or following the underscore or caret, to replace dots with underscores in the substitution. This can be handy in situations where metrics are sent to graphite.
Aggregations
The aggregations defined take one or more input metrics, expressed by one or more regular expressions, similar to the match rules. Incoming metrics are aggregated over a period of time defined by the interval in seconds. Since events may arrive a bit later in time, the expiration time in seconds defines when the aggregations should be considered final, as no new entries are allowed to be added any more. On top of an aggregation, multiple aggregates can be computed. They can be of the same or different aggregation types, but should each write to a unique new metric. The metric names can include backreferences like in rewrite expressions, allowing for powerful single aggregation rules that yield many aggregations. When no send to clause is given, produced metrics are sent to the relay as if they were submitted from the outside, hence match and aggregation rules apply to them. Care should be taken to avoid loops this way. For this reason, the use of the send to clause is encouraged, to direct the output traffic where possible. As with match rules, it is possible to define multiple cluster targets. Likewise, the stop keyword applies to control the flow of metrics in the matching process.
Statistics
The send statistics to construct is deprecated and will be removed in the next release. Use the special statistics construct instead.
The statistics construct can control a couple of things about the (internal) statistics produced by the relay. The send to target can be used to avoid router loops by sending the statistics to one or more specific destination clusters. By default the metrics are prefixed with carbon.relays.<hostname>, where hostname is determined on startup and can be overridden using the -H argument. This prefix can be set using the prefix with clause, similar to a rewrite rule target. The input match in this case is the pre-set regular expression ^(([^.]+)(\..*)?)$ on the hostname. As such, one can see that the default prefix is set by carbon.relays.\.1. Note that this uses the replace-dot-with-underscore replacement feature from rewrite rules. Given the input expression, the following match groups are available: \1 the entire hostname, \2 the short hostname and \3 the domainname (with leading dot). It may make sense to replace the default by something like carbon.relays.\_2 for certain scenarios, to always use the lowercased short hostname, which, following the expression, doesn't contain a dot. By default, the metrics are submitted every 60 seconds; this can be changed using the submit every <interval> seconds clause.
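Putting the clauses described above together, a statistics construct could look like the following sketch (the cluster name is hypothetical):

```
statistics
    submit every 60 seconds
    reset counters after interval
    prefix with carbon.relays.\_2
    send to monitoring
    stop
    ;
```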
To obtain a more compatible set of values to carbon-cache.py, use the reset counters after interval clause to make values non-cumulative, that is, they will report the change compared to the previous value.
Listeners
The ports and protocols the relay should listen on for incoming connections can be specified using the listen directive. Currently, all listeners need to be of the linemode type. An optional compression or encryption wrapping can be specified for the port and optional interface given by IP address, or for a unix socket given by file. When no interface is specified, the any interface on all available IP protocols is assumed. If no listen directive is present, the relay uses the default listeners for port 2003 on tcp and udp, plus the unix socket /tmp/.s.carbon-c-relay.2003. This typically expands to 5 listeners on an IPv6-enabled system. The default matches the behaviour of versions prior to v3.2.
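A sketch of an explicit listen directive that reproduces the defaults described above:

```
listen
    type linemode
        2003 proto tcp
        2003 proto udp
        /tmp/.s.carbon-c-relay.2003 proto unix
    ;
```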
Includes
In case the configuration becomes very long, or is managed better in separate files, the include directive can be used to read another file. The given file is read in place and added to the router configuration at the point of inclusion. The end result is one big route configuration. Multiple include statements can be used throughout the configuration file. Their position influences the order of rules as normal. Beware that recursive inclusion (include from an included file) is supported, and currently no safeguards exist against an inclusion loop. For what it's worth, this feature is likely best used with simple configuration files (e.g. ones not having include in them).
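A minimal sketch (the paths are hypothetical):

```
include /etc/carbon-c-relay/clusters.conf;
include /etc/carbon-c-relay/rules.conf;
```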
Examples
carbon-c-relay evolved over time, growing features on demand as the tool proved to be stable and fitting the job well. Below follow some annotated examples of constructs that can be used with the relay.
As many clusters as necessary can be defined. They receive data from match rules, and their type defines which members of the cluster finally get the metric data. The simplest cluster form is a forward cluster:
cluster send-through
forward
10.1.0.1
;
Any metric sent to the send-through cluster would simply be forwarded to the server at IPv4 address 10.1.0.1. If we define multiple servers, all of those servers would get the same metric, thus:
cluster send-through
forward
10.1.0.1
10.2.0.1
;
The above results in a duplication of metrics sent to both machines. This can be useful, but most of the time it is not. The any_of cluster type is like forward, but it sends each incoming metric to any one of the members. The same example with such a cluster would be:
cluster send-to-any-one
any_of 10.1.0.1:2010 10.1.0.1:2011;
This would implement a multipath scenario, where two servers are used, the load between them is spread, but should any of them fail, all metrics are sent to the remaining one. This typically works well for upstream relays, or for balancing carbon-cache processes running on the same machine. Should any member become unavailable, for instance due to a rolling restart, the other members receive the traffic. If it is necessary to have true fail-over, where the secondary server is only used if the first is down, the following would implement that:
cluster try-first-then-second
failover 10.1.0.1:2010 10.1.0.1:2011;
These types are different from the two consistent hash cluster types:
cluster graphite
carbon_ch
127.0.0.1:2006=a
127.0.0.1:2007=b
127.0.0.1:2008=c
;
If a member in this example fails, all metrics that would go to that member are kept in the queue, waiting for the member to return. This is useful for clusters of carbon-cache machines where it is desirable that the same metric always ends up on the same server. The carbon_ch cluster type is compatible with carbon-relay consistent hashing, and can be used for existing clusters populated by carbon-relay. For new clusters, however, it is better to use the fnv1a_ch cluster type, for it is faster, and allows balancing over the same address but different ports without an instance number, in contrast to carbon_ch.
Because we can use multiple clusters, we can also replicate without the use of the forward cluster type, in a more intelligent way:
cluster dc-old
    carbon_ch replication 2
        10.1.0.1
        10.1.0.2
        10.1.0.3
    ;
cluster dc-new1
    fnv1a_ch replication 2
        10.2.0.1
        10.2.0.2
        10.2.0.3
    ;
cluster dc-new2
    fnv1a_ch replication 2
        10.3.0.1
        10.3.0.2
        10.3.0.3
    ;

match *
    send to dc-old
    ;
match *
    send to
        dc-new1
        dc-new2
    stop
    ;
In this example all incoming metrics are first sent to dc-old, then to dc-new1 and finally to dc-new2. Note that the cluster type of dc-old is different. Each incoming metric will be sent to 2 members of all three clusters, thus replicating each metric to 6 destinations in total. For each cluster the destination members are computed independently. Failure of clusters or members does not affect the others, since all have individual queues. The above example could also be written using three match rules, one for each dc, or one match rule for all three dcs. The difference is mainly in performance: the number of times the incoming metric has to be matched against an expression. The stop rule in the dc-new match rule is not strictly necessary in this example, because no more match rules follow. However, if the match would target a specific subset, e.g. ^sys\., and more clusters would be defined, it could be necessary, as for instance in the following abbreviated example:
cluster dc1-sys ... ;
cluster dc2-sys ... ;

cluster dc1-misc ... ;
cluster dc2-misc ... ;

match ^sys\. send to dc1-sys;
match ^sys\. send to dc2-sys stop;

match * send to dc1-misc;
match * send to dc2-misc stop;
As can be seen, without the stop in dc2-sys' match rule, all metrics starting with sys. would also be sent to dc1-misc and dc2-misc. This may be desired, of course, but in this example there is a dedicated cluster for the sys metrics.
Suppose there is some unwanted metric that unfortunately gets generated, say by some bad/old software. We don't want to store this metric. The blackhole cluster is suitable for that, when it is harder to actually whitelist all wanted metrics. Consider the following:
match
some_legacy1$
some_legacy2$
send to blackhole
stop;
This would throw away all metrics that end in some_legacy1 or some_legacy2, which would otherwise be hard to filter out. Since the order matters, it can be used in a construct like this:
cluster old ... ;
cluster new ... ;

match * send to old;
match unwanted send to blackhole stop;
match * send to new;
In this example the old cluster would receive the metric that's unwanted for the new cluster. So, the order in which the rules occur does matter for the execution.
Validation can be used to ensure the data for metrics is as expected. A global validation for integer-only (no floating point) values could be:
match *
validate ^[0-9]+\ [0-9]+$ else drop
;
(Note the escape with backslash \ of the space, you might be able to use \s or [:space:] instead, this depends on your libc implementation.)
The validation clause can exist on every match rule, so in principle, the following is valid:
match ^foo
validate ^[0-9]+\ [0-9]+$ else drop
send to integer-cluster
; match ^foo
validate ^[0-9.e+-]+\ [0-9.e+-]+$ else drop
send to float-cluster
stop;
Note that the behaviour differs between the previous two examples. When no send to clusters are specified, a validation failure makes the match behave as if the stop keyword were present; when validation passes, processing continues with the next rule. When destination clusters are present, the match respects the stop keyword as normal, and processing always stops where it says so. However, if validation fails, the rule does not send anything to the destination clusters: the metric is dropped or logged, but never sent.
The relay is capable of rewriting incoming metrics on the fly. This process is based on regular expressions with capture groups that allow substituting parts into a replacement string. Rewrite rules allow cleaning up metrics from applications, or providing a migration path. In its simplest form a rewrite rule looks like this:
rewrite ^server\.(.+)\.(.+)\.([a-zA-Z]+)([0-9]+)
into server.\_1.\2.\3.\3\4
;
In this example a metric like server.DC.role.name123 would be transformed into server.dc.role.name.name123. For rewrite rules the same holds as for matches: their order matters. Hence, to build on the old/new cluster example given earlier, the following would store the original metric name in the old cluster, and the new metric name in the new cluster:
match * send to old;

rewrite ... ;

match * send to new;
Note that after the rewrite, the original metric name is no longer available, as the rewrite happens in-place.
Aggregations are probably the most complex part of carbon-c-relay. Two ways of specifying aggregates are supported by carbon-c-relay. The first, static rules, are handled by an optimiser which tries to fold thousands of rules into groups to make the matching more efficient. The second, dynamic rules, are very powerful compact definitions with possibly thousands of internal instantiations. A typical static aggregation looks like:
aggregate
^sys\.dc1\.somehost-[0-9]+\.somecluster\.mysql\.replication_delay
^sys\.dc2\.somehost-[0-9]+\.somecluster\.mysql\.replication_delay
every 10 seconds
expire after 35 seconds
timestamp at end of bucket
compute sum write to
mysql.somecluster.total_replication_delay
compute average write to
mysql.somecluster.average_replication_delay
compute max write to
mysql.somecluster.max_replication_delay
compute count write to
mysql.somecluster.replication_delay_metric_count
;
In this example, four aggregations are produced from the incoming matching metrics. We could have written the two match expressions as one, but for demonstration purposes we did not. Obviously they can refer to different metrics, if that makes sense. The every 10 seconds clause specifies in what interval the aggregator can expect new metrics to arrive. This interval is used to produce the aggregations, so every 10 seconds 4 new metrics are generated from the data received so far. Because data may be in transit for some reason, or generation stalled, the expire after clause specifies how long the data should be kept before considering a data bucket (which is aggregated) to be complete. In the example, 35 was used, which means the first aggregates are produced after 35 seconds. It also means that metrics can arrive up to 35 seconds late and still be taken into account. The exact time at which the aggregate metrics are produced is random, between 0 and interval (10 in this case) seconds after the expiry time. This is done to prevent thundering herds of metrics for large aggregation sets. The timestamp used for the aggregations can be specified to be the start, middle or end of the bucket. The original carbon-aggregator.py uses start, while carbon-c-relay's default has always been end. The compute clauses demonstrate that a single aggregation rule can produce multiple aggregates, as is often the case. Internally, this comes for free, since all possible aggregates are always calculated, whether or not they are used. The produced new metrics are resubmitted to the relay, hence matches defined earlier in the configuration can match the output of the aggregator. It is important to avoid loops that can be generated this way. In general, it is good practice to split aggregations off to their own carbon-c-relay instance, such that the produced metrics can easily be forwarded to another relay instance.
The previous example could also be written as follows to be dynamic:
aggregate
^sys\.dc[0-9].(somehost-[0-9]+)\.([^.]+)\.mysql\.replication_delay
every 10 seconds
expire after 35 seconds
compute sum write to
mysql.host.\1.replication_delay
compute sum write to
mysql.host.all.replication_delay
compute sum write to
mysql.cluster.\2.replication_delay
compute sum write to
mysql.cluster.all.replication_delay
;
Here a single match results in four aggregations, each of a different scope. In this example aggregations based on hostname and cluster are made, as well as the more general all targets, which in this example both have identical values. Note that with this single aggregation rule, per-host, per-cluster and total aggregations are all produced. Obviously, the input metrics define which hosts and clusters are produced.
With use of the send to clause, aggregations can be made more intuitive and less error-prone. Consider the below example:
cluster graphite fnv1a_ch ip1 ip2 ip3;

aggregate ^sys\.somemetric
    every 60 seconds
    expire after 75 seconds
    compute sum write to
        sys.somemetric
    send to graphite
    stop
    ;

match * send to graphite;
It sends all incoming metrics to the graphite cluster, except the sys.somemetric ones, which it replaces with a sum of all incoming ones. Without a stop in the aggregate, this would cause a loop, and without the send to, the metric could not keep its original name, for it is the send to that makes the output go directly to the cluster.
Statistics
When carbon-c-relay is run without -d or -s arguments, statistics will be produced. By default they are sent to the relay itself in the form of carbon.relays.<hostname>.*. See the statistics construct to override this prefix, sending interval and values produced. While many metrics have a similar name to what carbon-cache.py would produce, their values are likely different. By default, most values are running counters which only increase over time. The use of the nonNegativeDerivative() function from graphite is useful with these.
The following metrics are produced under the carbon.relays.<hostname> namespace:
metricsReceived
The number of metrics that were received by the relay. Received here means that they were seen and processed by any of the dispatchers.
metricsSent
The number of metrics that were sent from the relay. This is a total count for all servers combined. When incoming metrics are duplicated by the cluster configuration, this counter will include all those duplications. In other words, the amount of metrics that were successfully sent to other systems. Note that metrics that are processed (received) but still in the sending queue (queued) are not included in this counter.
metricsQueued
The total number of metrics that are currently in the queues for all the server targets. This metric is not cumulative, for it is a sample of the queue size, which can (and should) go up and down. Therefore you should not use the derivative function for this metric.
metricsDropped
The total number of metrics that had to be dropped due to server queues overflowing. A queue typically overflows when the server it tries to send its metrics to is not reachable, or too slow in ingesting the amount of metrics queued. This can be network or resource related, and also greatly depends on the rate of metrics being sent to the particular server.
metricsBlackholed
The number of metrics that did not match any rule, or matched a rule with blackhole as target. Depending on your configuration, a high value might be an indication of a misconfiguration somewhere. These metrics were received by the relay, but never sent anywhere, thus they disappeared.
metricStalls
The number of times the relay had to stall a client to indicate that the downstream server cannot handle the stream of metrics. A stall is only performed when the queue is full and the server is actually receptive of metrics, but just too slow at the moment. Stalls typically happen during micro-bursts, where the client typically is unaware that it should stop sending more data, while it is able to.
connections
The number of connect requests handled. This is an ever increasing number just counting how many connections were accepted.
disconnects
The number of disconnected clients. A disconnect either happens because the client goes away, or due to an idle timeout in the relay. The difference between this metric and connections is the amount of connections actively held by the relay. In normal situations this amount remains within reasonable bounds. Many connections, but few disconnections typically indicate a possible connection leak in the client. The idle connections disconnect in the relay here is to guard against resource drain in such scenarios.
dispatch_wallTime_us
The number of microseconds spent by the dispatchers to do their work. In particular on multi-core systems, this value can be confusing, however, it indicates how long the dispatchers were doing work handling clients. It includes everything they do, from reading data from a socket, cleaning up the input metric, to adding the metric to the appropriate queues. The larger the configuration, and more complex in terms of matches, the more time the dispatchers will spend on the cpu. But also time they do /not/ spend on the cpu is included in this number. It is the pure wallclock time the dispatcher was serving a client.
dispatch_sleepTime_us
The number of microseconds spent by the dispatchers sleeping while waiting for work. When this value gets small (or even zero), the dispatcher has so much work that it doesn't sleep any more, and likely can't process the work in a timely fashion any more. This value plus the wallTime from above roughly sums to the total uptime taken by this dispatcher. Therefore, expressing the wallTime as a percentage of this sum gives the busyness percentage, rising all the way up to 100% if sleepTime goes to 0.
server_wallTime_us
The number of microseconds spent by the servers to send the metrics from their queues. This value includes connection creation, reading from the queue, and sending metrics over the network.
dispatcherX
For each individual dispatcher, the metrics received and blackholed, plus the wall clock time. The values are as described above.
destinations.X
For all known destinations, the number of dropped, queued and sent metrics plus the wall clock time spent. The values are as described above.
aggregators.metricsReceived
The number of metrics that matched an aggregator rule and were accepted by the aggregator. When a metric matches multiple aggregators, this value will reflect that. A metric is not counted when it is considered syntactically invalid, e.g. when no value was found.
aggregators.metricsDropped
The number of metrics that were sent to an aggregator, but did not fit timewise, either because the metric was too far in the past or in the future. The expire after clause in aggregate statements controls how long in the past metric values are accepted.
aggregators.metricsSent
The number of metrics that were sent from the aggregators. These metrics were produced and are the actual results of aggregations.
Bugs
Please report them at:
Author
Fabian Groffen <grobian@gentoo.org>
See Also
All other utilities from the graphite stack.
This project aims to be a fast replacement of the original Carbon relay. carbon-c-relay aims to deliver performance and configurability. Carbon is single threaded, and sending metrics to multiple consistent-hash clusters requires chaining of relays. This project provides a multithreaded relay which can address multiple targets and clusters for each and every metric based on pattern matches.
There are a couple more replacement projects out there, such as carbon-relay-ng and graphite-relay.
Compared to carbon-relay-ng, this project does provide carbon's consistent-hash routing. graphite-relay, which does provide that, doesn't do metric-based matches to direct the traffic, which this project does as well. In addition, carbon-c-relay can do aggregations, failover targets and more.
Acknowledgements
This program was originally developed for Booking.com, which approved that the code was published and released as Open Source on GitHub, for which the author would like to express his gratitude. Development has continued since with the help of many contributors suggesting features, reporting bugs, adding patches and more to make carbon-c-relay into what it is today. | https://www.mankier.com/1/carbon-c-relay | CC-MAIN-2018-13 | refinedweb | 6,567 | 63.49 |
//Jeremy Johnson
//PRG 411
//Instructor: Charles Ford
#include "mortgageCalc_JeremyJohnson.h"
#include <math.h>
#include <iostream>
#include <iomanip>
using namespace std;

//This is an opening for the program....nothing more...
void MortgagePayment::openingHead()
{
    cout << "\n\t\t Jeremy Johnson's Mortgage Calculator\n\n";
    cout << "\tWrite the program as an object-oriented C++\n";
    cout << "\tprogram that allows the user to select which way\n";
    cout << "\tthey want to calculate a mortgage: by input of the amount\n";
    cout << "\tof the mortgage, the term of the mortgage, and the interest\n";
    cout << "\trate of the mortgage payment or by input of the amount of a\n";
    cout << "\tmortgage and then select from a menu of mortgage loans:\n";
    cout << "\t\t- 7 year at 5.35%\n";
    cout << "\t\t- 15 year at 5.5%\n";
    cout << "\t\t- 30 year at 5.75%.\n";
    cout << "\tIn either case, display the mortgage payment amount. Then, list\n";
    cout << "\tthe loan balance and interest paid for each payment over the\n";
    cout << "\tterm of the loan. On longer term loans, the list will scroll off\n";
    cout << "\tthe screen. Do not allow the list to scroll off the screen, but\n";
    cout << "\trather display a partial list and then allow the user to continue\n";
    cout << "\tthe list. Allow the user to loop back and enter a new amount and\n";
    cout << "\tmake a new selection, or quit. Insert comments in the program to\n";
    cout << "\tdocument the program.";
    cout << "\n____________________________________________________________________\n\n " << endl;
}

//This is the function for the user input of loan amount
void MortgagePayment::enterPrincipal()
{
    cout << "\t\tEnter the total amount of the loan:$ "; //User input
    cin >> this->principal;
}
here is the header file
//Jeremy Johnson
//PRG 411
//Instructor: Charles Ford
class MortgagePayment
{
public:
    void openingHead();    //Introduction to the program
    void enterPrincipal(); //This is where the principle is entered
    double principal;      //principle
    double interest;       //Interest
    int term;              //Term
};
Here is the cr15.txt
7 5.35
15 5.5
30 5.75
Thank you everyone!! Any help from you guys would be great.
This C program computes the value of X ^ N, using a recursive function together with the pow function defined in the math library.
Here is the source code of the C program to compute the value of X ^ N. The program has been successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C program to compute the value of X ^ N given X and N as inputs
*/
#include <stdio.h>
#include <math.h>
long int power(int x, int n);
int main()
{
    long int x, n, xpown;

    printf("Enter the values of X and N \n");
    scanf("%ld %ld", &x, &n);
    xpown = power(x, n);
    printf("X to the power N = %ld\n", xpown);
    return 0;
}
/* Recursive function to compute X to the power N */
long int power(int x, int n)
{
    if (n == 0)          /* guard against infinite recursion for n = 0 */
        return 1;
    else if (n == 1)
        return x;
    else if (n % 2 == 0)
        /* if n is even */
        return pow(power(x, n / 2), 2);
    else
        /* if n is odd */
        return x * power(x, n - 1);
}
$ cc pgm55.c -lm
$ a.out
Enter the values of X and N
2 5
X to the power N = 32
hey all...
i successfully learnt everything in the bitwise lesson and made a program:
the program runs successfully..
Code:
#include <iostream>
#include <cstdlib>
using namespace std;

char in_use = 0;
int *b = new int;
int *c = new int;

void check(){
    for(int a = 0; a <= 8; a++){
        if((in_use & (1 << a)) != 0){
            cout << "The car " << a + 1 << " is in use...\n";
        }else{
            cout << "The car " << a + 1 << " is not in use...\n";
        }
    }
}

void set(int pos){
    in_use = (in_use | (1 << pos));
    cout << "The car is now in use...";
}

void unset(int pos){
    in_use = (in_use & ~(1 << pos));
    cout << "The car is now not in use...";
}

int main(){
    char x = 'y';
    while(x == 'Y' || x == 'y'){
        cout << "Welcome to tennisstar's Car Company...";
        cout << "\n\n1)Take a Car for rent...";
        cout << "\n2)Give a rented car back...";
        cout << "\n3)See availability...";
        cout << "\n4)Exit...";
        cout << "\n\nEnter choice: ";
        int *a = new int;
        cin >> *a;
        switch(*a){
            case 1:
                cout << "\n===\n\nChoose a car from 1 - 7 which you want to take: ";
                cin >> *b;
                *b -= 1;
                set(*b);
                break;
            case 2:
                cout << "\n===\n\nChoose a car from 1 - 7 which you want to give: ";
                cin >> *c;
                *c -= 1;
                unset(*c);
                break;
            case 3:
                cout << "\n===\n\n";
                check();
                break;
            case 4:
                cout << "\n";
                system("pause");
                return 0;
            default:
                cout << "\n===\n\nUNKNOWN OPERATION!!!";
        }
        cout << "\n\n";
        system("pause");
        system("cls");
    }
}
but my question is: can we use a byte (i.e. a char) to store more than 8 values?
or maybe fewer?
can we use an encoding in which 5 bits = 1 byte,
or 9 bits = 1 byte?
if yes...
how can we do so?
kindly help...
-tennisstar | http://cboard.cprogramming.com/cplusplus-programming/152942-bitwise-more-than-8-less-than-8-a.html | CC-MAIN-2016-07 | refinedweb | 282 | 82.04 |
Building Space Invaders with Kaboom
Later, Atari released a clone of Space Invaders on the Atari 2600 home system. It was a great success, and meant that people could play Space Invaders on their home systems, instead of on an arcade machine. Space Invaders is pretty embedded in pop culture these days, and you might even find Space Invaders mosaic and graffiti on the streets!
Of course, Space Invaders was such a popular game, there were many clones and variations. Let's make our own version using Kaboom and Replit.
Game mechanics
Space Invaders features alien enemies that move across the screen from one side to the other in a grid formation. The player moves left and right along the bottom of the screen and shoots at the aliens from below. Once the aliens reach the end of the screen, they move down one row and start moving in the opposite direction. In this way, the aliens get closer and closer to the player. Shooting an alien will destroy it and score points for the player. The aliens in the bottom row can shoot downwards towards the player.
If the player gets shot, they lose a life. Players have three lives, and the game ends when they run out of lives.
When the aliens reach the bottom of the screen, the game is immediately over, as the alien invasion was a success! To win, the player has to destroy all the aliens before they reach the bottom of the screen.
Getting started on Replit
Head over to Replit and create a new repl, using "Kaboom" as the template. Name it something like "Space Invaders", and click "Create Repl".
After the repl has booted up, you should see a main.js file under the "Scenes" section. This is where we'll start coding. It already has some code in it, but we'll replace that.
Download this archive of sprites and asset files we'll need for the game, and unzip them on your computer. In the Kaboom editor, click the "Files" icon in the sidebar. Now drag and drop all the sprite files (image files) into the "sprites" folder. Once they have uploaded, you can click on the "Kaboom" icon in the sidebar, and return to the "main" code file.
Setting up Kaboom
First we need to initialize Kaboom. In the "main" code file, delete all the example code. Now we can add a reference to Kaboom, and initialize it:
import kaboom from "kaboom";
kaboom({
background: [0, 0, 0],
width: 800,
height: 600,
scale: 1,
debug: true
});
We initialize Kaboom with a black background ([0, 0, 0]), a width of 800 pixels, a height of 600 pixels, and a scale of 1. We also set debug to true, so we can access Kaboom diagnostics and info as we are developing. You can bring up the Kaboom debug info in the game by pressing "F1".
Importing sprites and other game assets
Kaboom can import sprites in many different formats. We'll use the .png format, along with the Kaboom loadSpriteAtlas function. This function allows us to tell Kaboom how to load a sprite sheet. A sprite sheet is an image with multiple frames of a sprite animation in it. We'll use sprite sheets for the aliens, so we can have a "move" animation when the aliens move, and an "explosion" animation for when the aliens are destroyed.
Similarly, we'll use a sprite sheet for the player's ship, so that we can animate an explosion when the player is destroyed.
This is what the two sprite sheets look like, for the aliens and the player:
We need to describe how to use each of the images in the sprite sheets. Kaboom's loadSpriteAtlas function accepts an object describing all these details. Add the following code to the "main" code file:
loadRoot("sprites/");
loadSpriteAtlas("alien-sprite.png", {
"alien": {
"x": 0,
"y": 0,
"width": 48,
"height": 12,
"sliceX": 4,
"sliceY": 1,
"anims": {
"fly": { from: 0, to: 1, speed: 4, loop: true },
"explode": { from: 2, to: 3, speed: 8, loop: true }
}
}
});
loadSpriteAtlas("player-sprite.png",{
"player": {
"x": 0,
"y": 0,
"width": 180,
"height": 30,
"sliceX": 3,
"sliceY": 1,
"anims": {
"move": { from: 0, to: 0, speed: 4, loop: false },
"explode": { from: 1, to: 2, speed: 8, loop: true }
}
}
});
The first call, loadRoot, tells Kaboom which directory to use as default for loading sprites – this is just easier than typing out the full root for each asset when we load it.
Then we load the sprite sheets. The first argument is the path to the sprite sheet, and the second argument is an object describing how to use the sprite sheet. The object has a key for each sprite in the sprite sheet, and the value is another object describing how to use that sprite. Let's take a look at the keys we've used:
- x and y describe where the sprites start, by specifying the top left corner of the sprite.
- width and height describe the size of the sprite.
- sliceX and sliceY describe how many sprites are in each row and column of the sprite sheet. We have four separate sprites in the x direction in the alien file, and three in the player file.
- anims is an object that describes the animation for each sprite. Here we use the names of the animations for the keys, and the values are objects describing the animation:
  - from and to describe the index of the first and last frames of the animation.
  - speed is how many frames to show per second.
  - loop is a boolean that tells Kaboom if the animation should loop, or only play once.
Making a scene
Scenes are like different stages in a Kaboom game. Generally, a game has scenes such as an intro, the main game, and a "game over" screen. Here we'll use two scenes: the main "game" scene and a "gameOver" scene. An intro scene could explain what Space Invaders is and how to play it. You might like to add your own intro scene in later!
Let's add the code for defining each scene:
scene("game", () => {
// todo.. add scene code here
});
scene("gameOver", (score) => {
// todo.. add scene code here
});
go("game")
Notice in the "gameOver" scene definition, we add a custom parameter: score. This is so we can pass the player's final score to the end game scene to display it.
To start the whole game off, we use the go function, which switches between scenes.
Adding the player object
Now that we have the main structure and overhead functions out of the way, let's start adding in the characters that make up the Space Invaders world. In Kaboom, characters are anything that makes up the game world, including floor, platforms, and so on, not only the players and bots. They are also known as "game objects".
Let's add in our player object. Add this code to the "game" scene:
const player = add([
sprite("player"),
scale(1),
origin("center"),
pos(50, 550),
area(),
{
score: 0,
lives: 3,
},
"player"
]);
player.play('move');
This uses the add function to add a new character to the scene. The add function takes an array ([ ]) of components that make up the look and behavior of a game character. In Kaboom, every character is made up of one or more components. Components give special properties to each character. There are built-in components for many properties, like:

- sprite, to give the character an avatar.
- pos, to specify the starting position of the object and give it functionality like movement.
- origin, to specify whether pos uses the object's center or one of the corners.
Kaboom also allows us to add custom properties to a game object. For the player, we add in their score and number of lives remaining as custom properties. This makes it simple to keep track of these variables without using global variables.
We can also add a tag to the game objects. This is not too useful on the player object, but it will be very useful on the alien objects. The tag will allow us to select and manipulate a group of objects at once, like selecting and moving all aliens.
Adding the aliens
In Space Invaders, the aliens operate as a unit in a tightly formed grid. They all move in sync with each other. This is what that looks like:
To create this grid, we could add each alien one at a time, but that would be a lot of code. Instead, we can use a for loop to cut down on the amount of code we need to write. We just need to decide how many rows and columns of aliens we want.
Let's create two constants for the number of rows and columns of aliens. Add this code to the top of the "main" file:
const ALIEN_ROWS = 5;
const ALIEN_COLS = 6;
We also need to specify the size of each "block" of the grid. Add these constants under the rows and columns we added above:
const BLOCK_HEIGHT = 40;
const BLOCK_WIDTH = 32;
The last constants we need are to determine how far from the top and left side the alien block should start. Add these below the block-size constants:
const OFFSET_X = 208;
const OFFSET_Y = 100;
Now we can use the for loop to add each alien. We'll use an outer for loop to run through each row, and then we'll use an inner for loop to add the aliens in columns, in this type of pattern:
for each row // Loop through each row
for each column // Loop through each column
add alien // Add an alien at position [row,column]
We'll also keep a reference to each alien in a 2D array. This will be useful later, when we need to choose an alien to shoot at the player.
Now, let's translate that to actual code. Add the following code to the "game" scene:
let alienMap = [];
function spawnAliens() {
for (let row = 0; row < ALIEN_ROWS; row++) {
alienMap[row] = [];
for (let col = 0; col < ALIEN_COLS; col++) {
const x = (col * BLOCK_WIDTH * 2) + OFFSET_X;
const y = (row * BLOCK_HEIGHT) + OFFSET_Y;
const alien = add([
pos(x, y),
sprite("alien"),
area(),
scale(4),
origin("center"),
"alien",
{
row: row,
col: col
}
]);
alien.play("fly");
alienMap[row][col] = alien;
}
}
}
spawnAliens();
This code adds the function spawnAliens to the "game" scene. We implement the double for loop in the function, and add the aliens to the scene.
We use the constants we defined earlier to calculate where to add each alien. We also add two custom properties to each alien, row and col, so we can easily access which row and column the alien is in when we query it later. Our 2D array, alienMap, is where we store a reference to each alien at indices row and col. There is some code to initialise each row of the array after the first for loop.
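The grid arithmetic is easy to check outside the game. The following standalone sketch (plain JavaScript, no Kaboom — the constants simply mirror the ones defined above) computes where each alien lands:

```javascript
// Standalone check of the alien grid math used in spawnAliens.
// Constants mirror the ones defined earlier in the tutorial.
const ALIEN_ROWS = 5;
const ALIEN_COLS = 6;
const BLOCK_WIDTH = 32;
const BLOCK_HEIGHT = 40;
const OFFSET_X = 208;
const OFFSET_Y = 100;

function alienPosition(row, col) {
  // Each column is two block-widths apart; rows are one block-height apart.
  return {
    x: (col * BLOCK_WIDTH * 2) + OFFSET_X,
    y: (row * BLOCK_HEIGHT) + OFFSET_Y,
  };
}

// The first alien sits at the offsets themselves...
console.log(alienPosition(0, 0)); // { x: 208, y: 100 }
// ...and the far corner of the grid lands well inside the 800x600 canvas:
console.log(alienPosition(ALIEN_ROWS - 1, ALIEN_COLS - 1)); // { x: 528, y: 260 }
```

Running this confirms the whole formation fits on screen with room to march left and right.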
We also call alien.play("fly"), which tells Kaboom to run the "fly" animation on the alien. If you look at the loadSpriteAtlas call for the alien sprite, you'll see that it defines the "fly" animation, which switches between the first two frames of the sprite sheet.
Then we call the spawnAliens function to add the aliens to the scene.
If you run the game, you should see a block of animated aliens and the blue player block at the bottom of the screen, like this:
Moving the player
The next step is adding controls to move the player around the screen. Kaboom has the useful onKeyDown function that we can use to call a handler when specified keys are pressed. When we added the pos component to our player, it added methods to move the player. We'll use these functions to add this move-handling code to the "game" scene:
let pause = false;
onKeyDown("left", () => {
if (pause) return;
if (player.pos.x >= SCREEN_EDGE) {
player.move(-1 * PLAYER_MOVE_SPEED, 0)
}
});
onKeyDown("right", () => {
if (pause) return;
if (player.pos.x <= width() - SCREEN_EDGE) {
player.move(PLAYER_MOVE_SPEED, 0)
}
});
You'll notice that we use two constants: SCREEN_EDGE, which provides a margin before the player gets right to the edge of the screen, and PLAYER_MOVE_SPEED, which is the speed at which the player moves.
Add the two constants at the top of the "main" file, along with the other constants:
const PLAYER_MOVE_SPEED = 500;
const SCREEN_EDGE = 100;
You'll also notice that we have a pause variable. We'll use this later on to prevent the player from moving when they have been shot.
If you run the game now, you'll be able to move the player left and right on the screen.
Moving the aliens
The next step is to make the aliens move. In Space Invaders, the aliens move from one side of the screen to the other. When they reach either end of the screen, they move down a row, and start moving in the opposite direction.
For this, we'll need a few flags to determine where we are in the sequence. Add these to the "game" scene:
let alienDirection = 1;
let alienMoveCounter = 0;
let alienRowsMoved = 0;
We use alienDirection as a flag that can be either 1 or -1. It controls if the aliens move left or right. We use alienMoveCounter to track how many places the aliens have moved over in the current direction. When this counter reaches a certain value, we'll switch the alien direction and move them all down a row. We use alienRowsMoved to track how many rows down the aliens have moved. When they have moved down a certain number of rows and reach the ground, we'll end the game.
We'll also need a few constants that hold the speed the aliens should move at, how many columns the aliens should move before switching directions, and how many rows the aliens can move before reaching the ground. Add these along with the other constants:
const ALIEN_SPEED = 15;
const ALIEN_STEPS = 322;
const ALIEN_ROWS_MOVE = 7;
Since the aliens should move automatically, without the player pressing a key, we need a way to call our code to move the aliens every frame. Kaboom has a function onUpdate that we can use. Add the following code to the "game" scene:
onUpdate(() => {
if (pause) return;
every("alien", (alien) => {
alien.move(alienDirection * ALIEN_SPEED, 0);
});
alienMoveCounter++;
if (alienMoveCounter > ALIEN_STEPS) {
alienDirection = alienDirection * -1;
alienMoveCounter = 0;
moveAliensDown();
}
if (alienRowsMoved > ALIEN_ROWS_MOVE) {
pause = true;
player.play('explode');
wait(2, () => {
go("gameOver", player.score);
});
}
});
function moveAliensDown() {
alienRowsMoved ++;
every("alien", (alien) => {
alien.moveBy(0, BLOCK_HEIGHT);
});
}
This code has a number of parts. First, we check if the game is in the pause state. If it is, we don't want to do anything, so we return early. Then we use the Kaboom every function, which selects game objects with a given tag, and runs the given function on each one. In this case, we're selecting all aliens and using move to move them across the screen, at the speed and direction specified by our direction flag.
Then we update the alienMoveCounter and check if it has reached the value of ALIEN_STEPS. If it has, we switch the direction of the aliens and reset the counter. We also call a helper function moveAliensDown to move the aliens down a row. Note that in the moveAliensDown function, we also select all aliens using the every function. This time, we make use of the moveBy function, which moves the aliens by a given amount. The difference between the move and moveBy functions is that move parameters specify pixels per second, while moveBy specifies the total number of pixels to move by.
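The counter-and-flag bookkeeping can be simulated on its own to see exactly when the grid reverses and drops. This is a stripped-down sketch (plain JavaScript, no Kaboom) of just that logic from the onUpdate handler:

```javascript
// Stripped-down simulation of the alien direction bookkeeping (no Kaboom).
const ALIEN_STEPS = 322;

let alienDirection = 1;
let alienMoveCounter = 0;
let alienRowsMoved = 0;

// One "frame" of the bookkeeping from the onUpdate handler.
function step() {
  alienMoveCounter++;
  if (alienMoveCounter > ALIEN_STEPS) {
    alienDirection *= -1;  // reverse horizontal direction
    alienMoveCounter = 0;  // restart the count
    alienRowsMoved++;      // the grid dropped one row
  }
}

// After ALIEN_STEPS + 1 frames the grid reverses and has dropped one row.
for (let i = 0; i < ALIEN_STEPS + 1; i++) step();
console.log(alienDirection, alienRowsMoved); // -1 1
```

In the real game the drop is done by moveAliensDown; here it's just the alienRowsMoved increment, which is enough to watch the pattern repeat.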
Finally, we check if the aliens have moved down more than ALIEN_ROWS_MOVE. If they have, we end the game. When the game ends, we change the player sprite to play the explode animation, which plays the last two frames of the sprite sheet. We also wait for two seconds before calling the go function to go to the "gameOver" scene, passing in the player's score so it can be shown to the player.
Firing bullets
Now our game characters can all move around. Let's add in some shooting. In Space Invaders, the player shoots up to the aliens. There should be a "reload" time between shots, so that the player can't just hold down the fire button and machine-gun all the aliens. That would make the game too easy, and therefore boring. To counter that, we'll need to keep track of when the last bullet was fired and implement a short "cooldown" period before the player can shoot again. We'll use the onKeyPress function to connect pressing the space bar to our shooting code. Add the following code to the "game" scene:
let lastShootTime = time();
onKeyPress("space", () => {
if (pause) return;
if (time() - lastShootTime > GUN_COOLDOWN_TIME) {
lastShootTime = time();
spawnBullet(player.pos, -1, "bullet");
}
});
function spawnBullet(bulletPos, direction, tag) {
add([
rect(2, 6),
pos(bulletPos),
origin("center"),
color(255, 255, 255),
area(),
cleanup(),
"missile",
tag,
{
direction
}
]);
}
You'll see in the code above that we have a helper function, spawnBullet, that handles creating a bullet. It has some parameters, like the starting position of the bullet bulletPos, the direction it should move in direction, and the tag to give the bullet. The reason this is in a separate function is so that we can re-use it for the aliens' bullets when we make them shoot. Notice that we use Kaboom's cleanup component to automatically remove the bullet when it leaves the screen. That is super useful, because once a bullet leaves the screen, we don't want Kaboom spending resources updating it every frame. With hundreds of bullets on the screen, this can be a performance killer.
We also use the constant GUN_COOLDOWN_TIME to test if the player can shoot again. This is the time in seconds between shots. Add this constant to the other constants we have used:
const GUN_COOLDOWN_TIME = 1;
To check the gun cooldown time, we use the Kaboom time function, which returns the time since the game started in seconds. Whenever the player shoots, we record the time in lastShootTime. Then, each time the player presses the space bar, we check if the time since the last shot is greater than GUN_COOLDOWN_TIME. If it is, we can shoot again. If it isn't, we can't shoot again. This way we make sure the player can't just smash the fire button to get rapid fire.
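The cooldown gate is easy to model in isolation. This sketch (plain JavaScript, with a hand-rolled clock standing in for Kaboom's time(), which reports seconds) shows the same check on its own:

```javascript
// Minimal model of the gun cooldown gate, with an injectable clock so it
// can run outside the game loop.
const GUN_COOLDOWN_TIME = 1;

function makeGun(clock) {
  let lastShootTime = clock();
  return function tryShoot() {
    if (clock() - lastShootTime > GUN_COOLDOWN_TIME) {
      lastShootTime = clock();
      return true;  // bullet spawned
    }
    return false;   // still cooling down
  };
}

// Fake clock we can advance by hand.
let now = 0;
const shoot = makeGun(() => now);

console.log(shoot()); // false - 0s elapsed since "game start"
now = 1.5;
console.log(shoot()); // true  - more than 1s has passed
now = 2.0;
console.log(shoot()); // false - only 0.5s since the last shot
```

Injecting the clock like this is also a handy pattern if you ever want to unit-test timing logic without a real game loop.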
The code above handles the player pressing the fire button, the space bar, and spawning a bullet. This bullet will just be stationary until we add in some movement for it each frame. We've given each bullet spawned a tag called missile so that we'll be able to select it later. We also added a custom property direction to the bullet. Using those properties, we can move the bullet in the direction it should move using this code:
onUpdate("missile", (missile) => {
if (pause) return;
missile.move(0, BULLET_SPEED * missile.direction);
});
The onUpdate function has an option to take a tag to select the game objects to update each frame. In this case, we're updating all bullets. We also have a constant BULLET_SPEED that specifies the speed of the bullets. Add this constant to the other constants:
const BULLET_SPEED = 300;
If you run the game now, you should be able to shoot bullets. They won't kill the aliens yet. We'll add that next.
Bullet collisions with aliens
Now that we have bullets and they move, we need to add collision detection and handling code to check when the bullet hits an alien. For this, we can use the Kaboom onCollide function. First add the constant below to the other constants:
const POINTS_PER_ALIEN = 100;
Then add the following code to the "game" scene:
onCollide("bullet", "alien", (bullet, alien) => {
destroy(bullet);
alien.play('explode');
alien.use(lifespan(0.5, { fade: 0.1 }));
alienMap[alien.row][alien.col] = null; // Mark the alien as dead
updateScore(POINTS_PER_ALIEN);
});
In this function, we pass the tags for the bullet and alien in to onCollide, so that our handler is fired whenever these two types of objects collide on the screen. First we call Kaboom's destroy function to destroy the bullet on the screen. Then we call the play function on the alien to play the explode animation. We also use the lifespan function to make the alien fade out and disappear after a short period of time. Finally, we mark the alien as dead in the alienMap array, by setting its entry to null. This way, we can keep tabs on which aliens are still alive when we choose an alien to shoot back at the player.
Finally, we call a helper method updateScore to add to the player's score, and update it on screen. We need a bit of code to get this part working - including adding text elements to the screen to show the score. Add the following code to the "game" scene:
add([
text("SCORE:", { size: 20, font: "sink" }),
pos(100, 40),
origin("center"),
layer("ui"),
]);
const scoreText = add([
text("000000", { size: 20, font: "sink" }),
pos(200, 40),
origin("center"),
layer("ui"),
]);
function updateScore(points) {
player.score += points;
scoreText.text = player.score.toString().padStart(6, "0");
}
First we add a text label for the score. We use the Kaboom text component to create a text element. Then we need a text element that shows the actual score. We add it the same way as the label, except this time we store a reference to this text element in scoreText. Then we have the helper function updateScore, which adds points to the player's score and updates the score text element. We use the padStart function to add leading zeros to the score, so that the score is always six digits long. This shows the player that it is possible to score a lot of points!
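For reference, here is the formatting step on its own — padStart left-pads up to the target width, and never truncates a string that is already longer:

```javascript
// How the score text is formatted: padStart left-pads with zeros
// up to a fixed width of six characters.
function formatScore(score) {
  return score.toString().padStart(6, "0");
}

console.log(formatScore(0));       // "000000"
console.log(formatScore(100));     // "000100"
console.log(formatScore(1234567)); // "1234567" - padStart never truncates
```

So even a score past 999999 still displays correctly; it just grows past six digits.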
If you run the game now, you should be able to shoot at an alien, destroy it, and see your points increase.
The aliens fight back
It's not fair that only the player can shoot the aliens - we've got to give the aliens a chance to shoot back! Since we don't want the aliens to be shooting each other, we need to only allow aliens with a clear shot to the ground to be able to shoot. In other words, an alien that shoots must not have another alien in front of them. Recall that when we added the aliens, we created a 2D array that stores a reference to each alien. When an alien gets hit, we set the entry in the array to null. Therefore we can use this array to find an alien that has a clear shot to the ground to shoot at the player.
To make the aliens shoot at regular intervals, we'll use the Kaboom loop function, which calls a function at a regular interval. Add the following code to the "game" scene:
// Find a random alien to make shoot
loop(1, () => {
if (pause) return;
// Randomly choose a column, then walk up from the
// bottom row until an alien that is still alive is found
let row, col;
col = randi(0, ALIEN_COLS);
let shooter = null;
// Look for the first alien in the column that is still alive
for (row = ALIEN_ROWS - 1; row >= 0; row--) {
shooter = alienMap[row][col];
if (shooter != null) {
break;
}
}
if (shooter != null) {
spawnBullet(shooter.pos, 1, "alienBullet");
}
});
First, we check if we are in a paused state - if so, we get out early. If not, our task is to randomly choose an alien that has a clear shot at the ground. To do this, we use this logic:
- Choose a random column in the alien map.
- Walk up the rows from the bottom until we find an alien that is still alive.
- If we find an alien, we can use it as the shooter.
- If we successfully find a shooter, spawn a bullet at the shooter's position, and tag it as an alien bullet.
This way, there is no pattern that the player can learn to outsmart the aliens.
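The walk-up search can be exercised on its own with a mock alien map (plain JavaScript — the findShooter helper below is an illustration of the loop above, not part of the tutorial code):

```javascript
// Standalone version of the shooter search: given the alien map and a
// column, walk up from the bottom row to the first alien still alive.
// Dead aliens are null, matching how the collision handler marks them.
function findShooter(alienMap, col) {
  for (let row = alienMap.length - 1; row >= 0; row--) {
    if (alienMap[row][col] != null) {
      return alienMap[row][col];
    }
  }
  return null; // whole column already destroyed
}

// Tiny 3x2 map: "A".."F" stand in for live aliens, null for destroyed ones.
const map = [
  ["A", "B"],
  ["C", null],
  [null, "F"],
];

console.log(findShooter(map, 0)); // "C" - bottom of column 0 is dead
console.log(findShooter(map, 1)); // "F" - bottom-most alien in column 1
```

Note the null check matters: if a whole column has been wiped out, the loop in the game simply skips shooting that second, which is why the handler guards with `if (shooter != null)`.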
If you run the game now, you should see a random alien shoot at the player every second.
Bullet collisions with the player
Now that the aliens can shoot, we can add code to determine if one of their bullets hit the player. To do this, we can use the Kaboom onCollide function again. Add the following code to the "game" scene:
player.onCollide("alienBullet", (bullet) => {
if (pause) return;
destroyAll("bullet");
player.play('explode');
updateLives(-1);
pause = true;
wait(2, () => {
if (player.lives == 0){
go("gameOver", player.score);
}
else {
player.moveTo(50, 550);
player.play('move');
pause = false;
}
});
});
This code is similar to the previous collision handler we added for bullets hitting aliens. There are a few differences, though.
First, we check if the game is in the pause state, and exit early from the function if it is. If not, then we destroy the bullet, as we don't want to display it anymore (it's stuck in the player!). Next, we use the play method to change the player sprite to the explode animation we defined in the loadSpriteAtlas call. We have a helper method, updateLives, similar to the one we used to update the score. We set the pause flag to true to prevent the player or aliens from moving or shooting. After two seconds, using the wait function, we either go to the end game screen (if the player has no more lives left) or reset the player to the start position (if the player still has lives) to allow the game to continue. Once the player has been reset, we set the pause flag to false and switch the player sprite back to the move animation.
The updateLives helper function needs a few UI elements, as we did for the score. Add the following code to add the lives text elements to the "game" scene:
add([
text("LIVES:", { size: 20, font: "sink" }),
pos(650, 40),
origin("center"),
layer("ui"),
]);
const livesText = add([
text("3", { size: 20, font: "sink" }),
pos(700, 40),
origin("center"),
layer("ui"),
]);
function updateLives(life) {
player.lives += life;
livesText.text = player.lives.toString();
}
This code follows the same pattern as the score UI elements, so we won't go into details here.
We made a call to the "gameOver" scene. At the moment, we just have a placeholder comment there. Let's add the code we need to show the final score and add the logic to start a new game. Add the following code to the "gameOver" scene:
add([
text("GAME OVER", { size: 40, font: "sink" }),
pos(width() / 2, height() / 2),
origin("center"),
layer("ui"),
]);
add([
text("SCORE: " + score, { size: 20, font: "sink" }),
pos(width() / 2, height() / 2 + 50),
origin("center"),
layer("ui"),
]);
onKeyPress("space", () => {
go("game");
});
In the "gameOver" scene, we add a big, size 40 "Game Over" banner. The score is added below it, in smaller text. We also add a way to start a new game: we use the onKeyPress function to listen for the space bar being pressed, and when this happens, we call the go function to start the game again.
All the elements for the game are now defined. Give it a go, and see how you do!
Next steps
There are a number of things you can add to this game to make it more interesting.
- Once the player shoots all the aliens and wins, nothing happens. Try making the screen fill with more aliens, and make them move or shoot faster for each level the player reaches.
- Add some sound effects and music. Kaboom has the play function to play audio files. You can add effects for shooting, explosions, points scored, etc.
- Add different types of aliens. In many Space Invaders versions, a "boss" ship flies across the top of the screen at random intervals. Shooting this ship gives the player lots of bonus points.
- Try giving the player a bonus life if they reach a certain score.
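As a starting point for the bonus-life idea, the same helper pattern used for updateScore and updateLives can track a score threshold. The names player and updateLives follow the tutorial's code, while BONUS_STEP and the threshold logic here are assumptions you can tune:

```javascript
// Grant an extra life each time the player's score crosses another
// BONUS_STEP points. Call this right after updating the score.
const BONUS_STEP = 1000; // invented tuning constant
let nextBonusAt = BONUS_STEP;

function checkBonusLife(player, updateLives) {
  // A while loop handles a single score jump crossing several thresholds.
  while (player.score >= nextBonusAt) {
    updateLives(1); // reuse the tutorial's helper
    nextBonusAt += BONUS_STEP;
  }
}
```

Calling this from the same place the score is updated keeps the bonus logic in one spot, and the threshold counter means each bonus is only awarded once.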
What other features can you add to this game? Have fun, and happy coding! | https://docs.replit.com/tutorials/build-space-invaders-with-kaboom | CC-MAIN-2022-27 | refinedweb | 4,656 | 79.7 |
XSL Date & Time Library
XSL Date & Time Library is an XSL stylesheet which provides various date and time functions for use within your own XSL stylesheets.
Downloads for the software described here are available on the downloads page.
UPDATE (26-Jan-2008): Version 1.09 is now released. This version has tons of bugfixes and new features over the 1.01 release, including an important bug fix to
iso-from-unix which only occurs in the last few days of each year, a bug fix to
dst-adjust and the ability to generate RSS-compliant dates from UNIX or ISO timestamps, plus new regression tests. I’ve also added information in the article about how to use the lookup tables.
In July 2006 I was working on a new show scheduling system for our radio station, Deviant Audio. The system is XML-based and uses a source XML file storing a list of single event broadcasts and recurring event broadcasts (such as regular radio shows). This is then transformed with XSL into a filtered list of forthcoming shows, either for a specific “content authority” (the person responsible for controlling a particular broadcast segment) to provide DJs and administrators with a list of forthcoming broadcasts they control; or all shows coming up until a certain date (used to display show schedules on the web site and in our chat room).
We have DJs in various timezones and so to make direct XML source editing easy, broadcasts can be specified in any timezone. Recurring events have associated metadata describing how often the event recurs (which may be a fixed number of days or a non-constant time span such as the first Saturday of every month). The times and dates in the source XML are always in local time, however the show automation system always works in UTC so the transformed times must be adjusted for Daylight Savings (DST). The DST rules used in Europe and US/Canada are different, so to ensure DJs who broadcast at “8pm local time” are always broadcasting at 8pm in their timezone – and not 7pm or 9pm – DST offsets must be calculated for each broadcast depending on the region that broadcast originates from (so show times in different regions will shift at slightly different times of the year). In addition, the US is extending its DST period from 2007-2015, so new rules are in effect from 2007 henceforth.
XSL is unaware of time and has no native date or time functions. EXSLT is a non-portable library of extensions for XSL implemented by some XSLT processors, which provides date and time functions. However, some XSLT processors – including versions of PHP 5 linked against libxslt – do not support all EXSLT extensions, in particular libxslt at the time of writing does not support all the date-time functions in EXSLT.
I have therefore created an XSL date-time library, which you can view or download above. The library is implemented as a series of first-order XPath functions created with the EXSLT/func extension namespace (which is fortunately supported by libxslt). To use:
- Define the namespace in your code (the library uses “is-date” as the namespace prefix):
<xsl:stylesheet .... xmlns:
- Import the library into your stylesheet:
<xsl:import
- Call the desired function from any XPath expression, eg:
<xsl:value-of
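Putting the three (truncated) steps above together, a complete skeleton might look like the following. The is-date namespace URI and the import path shown here are placeholders, not the library's actual values — use whatever the downloaded stylesheet itself declares:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:is-date="urn:example:is-date">   <!-- placeholder URI -->

  <!-- Step 2: import the library (path is an assumption) -->
  <xsl:import href="date-time.xsl"/>

  <xsl:template match="/">
    <!-- Step 3: call a function from an XPath expression -->
    <xsl:value-of select="is-date:unix-from-iso('2008-01-26T12:00:00+00:00')"/>
  </xsl:template>

</xsl:stylesheet>
```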
The functions have been regression tested against PHP 5.1.4’s and PHP 5.2.4’s date-time functions including leap year, millennium and DST switchover edge cases using the supplied PHP test code. You can read more about my PHP test chassis and how to use it in XSL if you’re interested.
Functions
I have documented the code with JavaDoc-style comments so it should be fairly self-explanatory, but here is a quick summary of the functions provided:
unix-from-iso (v1.00) – converts an ISO8601-formatted date to a UNIX timestamp
iso-from-unix (v1.00) – converts a UNIX timestamp to an ISO8601-formatted date (with +00:00 timezone modifier)
day-of-week (v1.00) – calculates the day of the week (Mon-Sun) from a year, month and date (useful for creating calendar displays)
day-of-week-from-iso (v1.09) – same as day-of-week but takes an ISO timestamp instead of year, month, date numbers. The timezone is ignored.
day-of-week-utc-from-iso (v1.09) – same as day-of-week but takes an ISO timestamp instead of year, month, date numbers. The day is adjusted to the day in UTC time before calculating the day of the week (ie. if the supplied timestamp is 2008-01-25T23:00:00-05:00 – a Friday in EST – the function will return Saturday, since it is 4am in UTC). This is handy for when you need everything to be localised to UTC regardless of the timezones you're supplying.
date-of-first-day (v1.00) – calculates the day (Mon-Sun) of the first day (1st) in the month, given the day (Mon-Sun) of 1st of the month (you can use this to calculate the first Tuesday etc. in the month; by adding 7, 14 or 21 to the result you can calculate the 2nd, 3rd and 4th Tuesdays etc. in the month)
date-of-last-day (v1.00) – calculates the day (Mon-Sun) of the last day (28th-31st) in the month, given the day (Mon-Sun) of the last day of the month and the date (28th-31st) of the last day in the month (you can use this to calculate the last Tuesday etc. in the month; by subtracting 7, 14 or 21 from the result you can calculate the 2nd, 3rd and 4th last Tuesdays etc. in the month)
dst-offset (v1.00) – calculates in seconds the DST offset of a supplied local time in ISO format, given the DST ruleset to use (European or American). For example in summertime date-times the offset will be 3600 seconds (1 hour), and in wintertime date-times it will be 0 (zero) (you can subtract the result from a local time to deal with a single generic DST-adjusted timezone throughout the year, eg. “Eastern Time” rather than EST and EDT; if you further add the timezone modifier (eg. +5 hours for ET) to a date-time after subtracting the DST offset, you get a time in UTC that is correct regardless of the time of year or DST rules in force in the region of the original local time)
dst-adjust (v1.01) – takes a (UTC) UNIX timestamp and DST ruleset and adjusts the time for DST in the specified region. If the specified ruleset indicates summertime for the region, typically 1 hour will be subtracted from the timestamp, otherwise it will be returned unchanged. This is useful when you must do time manipulation in UTC but ensure that times spread over a year are consistent with the DST rules in the region specified. For example if you want to know when 8pm Mountain is every day of the year (UTC-7 in wintertime, UTC-6 in summertime), you must subtract 1 hour from the UTC time during summertime (8pm MT is 3am UTC in wintertime and 2am UTC in summertime). This function makes just such an adjustment.
dst-adjust-from-iso (v1.09) – same as dst-adjust but takes an ISO timestamp instead of a UNIX timestamp
add-interval (v1.01) – adds a number of possibly uneven time intervals to an ISO timestamp. Specifying a number as the interval length adds the specified number of days; specifying a number followed by ‘md’ specifies a number of months, however the relative day of the month will be kept the same. For example if the supplied timestamp falls on the 2nd Saturday of the month, an interval of 1md will return the date-time for the 2nd Saturday of the following month. Behaviour on 29th, 30th and 31st of months with more than 28 days is undefined (and not included in the regression tests).
Breaking change in v1.09: add-interval now returns an ISO timestamp with the original timezone preserved instead of a UNIX timestamp. Explanation from ChangeLog: This is because adding a monthly period then converting to UNIX causes the timezone information to be lost: a 1st Tuesday of the month could become a 2nd Tuesday of the month if add-interval is called again when the first Tuesday was the 7th in the local timezone but after midnight on 8th in UTC.
week-number-in-month-from-iso (v1.09) – calculates the week number in the given month from an ISO timestamp. The timezone is ignored.
week-number-in-month-utc-from-iso (v1.09) – calculates the week number in the given month from an ISO timestamp. The timestamp is converted to UTC first.
time-ampm (v1.09) – convert a 24-hour time into 12-hour am/pm format. Takes a 4-digit argument 0000-2359 as the input, outputs eg. “12:47pm” for “1247” or “3:01am” for “0301”. Outputs “midnight” for “0000” and “midday” for “1200”.
time-ampm-from-iso (v1.09) – same as time-ampm but takes an ISO timestamp as the input.
rfc822-from-iso (v1.09) – takes an ISO timestamp and converts it into an RFC-822 timestamp, suitable for the <pubDate> element in RSS feed generation. Outputs a timestamp such as "Thu, 24 Jan 2008 16:12:28 GMT".
Lookup tables
The library also has a number of look-up tables you can use to convert numerical date-time information into English text. These tables are:
days – days of the week starting at Sunday (Sunday, Monday…) and 3-digit days (Sun, Mon…)
months – months of the year (January, February…)
abbr-months – 3-letter months of the year (Jan, Feb…)
dates – dates of the month with ordinator (1st, 2nd, 3rd…)
dstlesstz – list of timezones (PST, MST, CST…)
For example, to get the name of the month from an ISO timestamp:
<xsl:value-of
This selects the 6th and 7th characters of $yourTimestamp – which correspond to the month number from 01 to 12 – then finds the element in the months table which matches, and prints its name attribute.
If you want to group a lot of date-indexed information by day, for example at our radio station, displaying all Monday’s shows, then all Tuesday’s shows etc., you can do something like this:
<xsl:for-each …>
  <xsl:variable … />
  <xsl:for-each …>
    /* Put your processing code here */
  </xsl:for-each>
</xsl:for-each>
This assumes that $dataToGroup contains the data you want to group, one element per item, with an attribute called date which contains an ISO timestamp.
I hope you find the code useful!
Please send feedback via the contact page or leave a comment below. | https://katyscode.wordpress.com/2008/01/26/xsl-date-time-library/ | CC-MAIN-2018-17 | refinedweb | 1,797 | 66.17 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Thu, Feb 02, 2012 at 02:15:12PM -0800, Roland McGrath wrote:
> Posting here is not limited to subscribers.

Based on my earlier attempts to send email here, non-subbed senders are
silently dropped.

> Send from whichever address you want to receive replies on.
>
> > [BZ 13656]
>
> The format is "[BZ #nnn]".

Ah, yes. I see this now in the existing ChangeLog, but the "#" should
probably be added to the template in

> > *.

If a specific style is required, I can resend. I think it makes sense to
name it after the BZ number if it's a bug regression check. Seems like it
would make merges much less confusing. Feature tests, yeah,
"tst-description" seems good.

> > +   02111-1307 USA.  */
>
> Blank line here.

Fixed.

> > +  if (sprintf (output, fmt, 1, 2, 3, "test") > 0 &&
> > +      strcmp (output, expected) != 0)
>
> Operator goes on the second line when a clause is line-wrapped like this.

Fixed.

> > +  /* Check behavior of 32bit positional overflow. */
>
> Say "32-bit".

Gah, missed one. Thanks.

> > +/*

Given the lack of any kind of markup to distinguish English from code, I
like having the "()" in plain text.

> > +# define EXPECTED_SIGNAL 11
>
> Use SIGSEGV, not the integer literal.

Ah, yes. Good call. Fixed.

> Third time's the charm.
>
> I'm being extremely pedantic just because you are a new contributor and I
> want to teach all the conventions for future reference. We are often
> looser about some of this stuff, especially in test cases.

Sure, understood. New version...

2012-02-02  Kees Cook  <keescook@chromium.org>

	[BZ #13656]
	* stdio-common/vfprintf.c (vfprintf): Check for nargs overflow and
	validate argument-based array offsets.
	* stdio-common/bug13656.c: New file.
	* stdio-common/Makefile (tests): Add nargs overflow test.
diff --git a/stdio-common/Makefile b/stdio-common/Makefile
index 006f546..5ece3f6 100644
--- a/stdio-common/Makefile
+++ b/stdio-common/Makefile
@@ -60,7 +60,7 @@ tests := tstscanf test_rdwr test-popen tstgetln test-fseek \
 	 tst-popen tst-unlockedio tst-fmemopen2 tst-put-error tst-fgets \
-	 scanf16 scanf17 tst-setvbuf1 tst-grouping bug23 bug24
+	 scanf16 scanf17 tst-setvbuf1 tst-grouping bug23 bug24 bug13656
 test-srcs = tst-unbputc tst-printf
diff --git a/stdio-common/bug13656.c b/stdio-common/bug13656.c
new file mode 100644
index 0000000..5c2ffd7
--- /dev/null
+++ b/stdio-common/bug13656.c
@@ -0,0 +1,81 @@
+/* Test for vfprintf nargs allocation overflow (BZ #13656).
+   Copyright (C) 2012 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+   Contributed by Kees Cook <keescook@chromium.org>, 2012.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, write to the Free
+   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+   02111-1307 USA.  */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <string.h>
+#include <signal.h>
+
+static int
+format_failed (const char *fmt, const char *expected)
+{
+  char output[80];
+
+  printf ("%s : ", fmt);
+
+  memset (output, 0, sizeof output);
+  /* Having sprintf itself detect a failure is good.  */
+  if (sprintf (output, fmt, 1, 2, 3, "test") > 0
+      && strcmp (output, expected) != 0)
+    {
+      printf ("FAIL (output '%s' != expected '%s')\n", output, expected);
+      return 1;
+    }
+  puts ("ok");
+  return 0;
+}
+
+static int
+do_test (void)
+{
+  int rc = 0;
+  char buf[64];
+
+  /* Regular positionals work.  */
+  if (format_failed ("%1$d", "1") != 0)
+    rc = 1;
+
+  /* Regular width positionals work.  */
+  if (format_failed ("%1$*2$d", " 1") != 0)
+    rc = 1;
+
+  /* Check behavior of 32-bit positional overflow.  */
+  sprintf (buf, "%%1$d %%%" PRIdPTR "$d", UINT32_MAX / sizeof (int));
+  if (format_failed (buf, "1 %$d") != 0)
+    rc = 1;
+
+  return rc;
+}
+
+/* Positional arguments are constructed via read_int(), so nargs
+   can only overflow on 32-bit systems.  On 64-bit systems, it will
+   attempt to allocate a giant amount of stack memory and crash,
+   which is the expected situation.  */
+#if __WORDSIZE == 32
+# define EXPECTED_STATUS 0
+#else
+# define EXPECTED_SIGNAL SIGSEGV
+#endif
+
+#define TEST_FUNCTION do_test ()
+#include "../test-skeleton.c"
diff --git a/stdio-common/vfprintf.c b/stdio-common/vfprintf.c
index 952886b..3c1172c 100644
--- a/stdio-common/vfprintf.c
+++ b/stdio-common/vfprintf.c
@@ -1700,6 +1700,13 @@ do_positional:
   /* Determine the number of arguments the format string consumes.  */
   nargs = MAX (nargs, max_ref_arg);
 
+  /* Check for potential integer overflow.  */
+  if (nargs > SIZE_MAX / (2 * sizeof (int) + sizeof (union printf_arg)))
+    {
+      done = -1;
+      goto all_done;
+    }
+
   /* Allocate memory for the argument descriptions.  */
   args_type = alloca (nargs * sizeof (int));
   memset (args_type, s->_flags2 & _IO_FLAGS2_FORTIFY ? '\xff' : '\0',
@@ -1715,13 +1722,17 @@ do_positional:
   for (cnt = 0; cnt < nspecs; ++cnt)
     {
       /* If the width is determined by an argument this is an int.  */
-      if (specs[cnt].width_arg != -1)
+      if (specs[cnt].width_arg > -1 && specs[cnt].width_arg < nargs)
 	args_type[specs[cnt].width_arg] = PA_INT;
 
       /* If the precision is determined by an argument this is an int.  */
-      if (specs[cnt].prec_arg != -1)
+      if (specs[cnt].prec_arg > -1 && specs[cnt].prec_arg < nargs)
 	args_type[specs[cnt].prec_arg] = PA_INT;
 
+      /* Sanity-check the data_arg location.  */
+      if (specs[cnt].ndata_args && specs[cnt].data_arg >= nargs)
+	continue;
+
       switch (specs[cnt].ndata_args)
 	{
 	case 0:		/* No arguments.  */
-- 
1.7.5.4

-- 
Kees Cook                                            @outflux.net
Trying to experiment with uploading gpx traces of the Northern Tier ACA route and getting "Found no good GPX points in the input data". I opened the data in josm and adze without any apparent problem. Is there any way that I can get specifics as to what the issue might be?
asked 25 Jul '13, 05:06 by net-buoy

closed 25 Jul '13, 06:11 by Jonathan Ben...
have a look at this
already looked at that and checked and the file appears to have time tags, which is the reason why I posed the question having seen that to which you pointed me before I posted the question :-)
Where is the GPX track from? ie what device or software did you use to create it? Is it definitely a GPX track, not a route or waypoints?
Maybe you could upload it somewhere, and post a link to it here. Or add a sample of the GPX to your question.
For uploading GPX traces to openstreetmap.org, they should be tracks you have recorded yourself, using a GPS device. It is usually not a good idea to upload GPX tracks or routes found elsewhere, eg from other websites.
Firstly, there are issues over copyright and licensing. OpenStreetMap is licensed under the Open Database License, so any contributions need to be compatible with this.
Assuming you are referring to the GPS routes from the Adventure Cycling Association: reading the terms for the data agreement, it says it requires credit and a link back to the ACA, and it only allows non-commercial use. So this would not be suitable for adding to OSM under these terms.
You could ask the ACA for specific permission to add the routes to OSM. Though you would need to ensure they understand the licensing of OSM, which allows reuse by others. And check if their GPS routes have been derived from another copyright source, eg they might by traced from Google Maps, which would be unsuitable for OSM.
Also those GPS routes are not very detailed, they only have points every 5km or so. This may be good enough for navigating with a GPS device, but it is not very useful for mapping roads or paths in OSM. For a trace recorded on a GPS device, you would typically have points every 20m, which allows you to accurately map the shape of a road.
The best option may be to get out there and cycle the routes yourself. That way you can record suitable GPS traces, plus you can survey loads of other details. eg what sort of roads or paths it follow, or any shops or camp-sites along the way etc.
answered 27 Jul '13, 11:45 by Vclaw
If you open the trace with a text editor does it look like this? I used notepad on my Win7 PC to view this one that I recorded with a Garmin Vista HCX. If your trace doesn't have the Lat Lon with times it won't be accepted. The system is set up this way so that traces constructed by tracing a copyrighted map are rejected. hope this is of assistance.
answered 25 Jul '13, 20:43 by andy mackey
ok.... getting the thrust a bit now -- this is a route and it has waypoints (wpt) but not trackpoints (as in trkpt) [aspects of diff explained here: ] and in modifying it with josm or adze I managed, I guess, to convert wpt to rtept, but that is still not trkpt. If this app will only accept trkpt, can I just replace all the instances of rtept with trkpt and then import? And is there an introductory-level doc on gpx that explains the different aspects and usages of these three different types of points that you could <groanalert> point me to </groanalert>?

Bottom line is that I would like to see if I can add the routes to OSM without having to play games adding them to an intermediary app -- or am I doing something that is inappropriate for this resource?
stanza from original northern tier ACA gpx file
<wpt lat="48.4537470" lon="-122.5042790">
<time>2013-01-02T18:41:02Z</time>
<name>F010N0</name>
<cmt>Josh Green Ln turn (EB)</cmt>
<desc>Josh Green Ln turn (EB)</desc>
<sym>Waypoint</sym>
<extensions>
<gpxx:WaypointExtension xmlns:
<gpxx:DisplayMode>SymbolAndName</gpxx:DisplayMode>
</gpxx:WaypointExtension>
</extensions>
</wpt>
same stanza after half an hour of thrashing about with gpx apps ;-)
<rtept lon="-122.504280" lat="48.453751">
<time>2013-01-02T18:41:02.000Z</time>
<name>F010N0</name>
<cmt>Josh Green Ln turn (EB)</cmt>
<desc>Josh Green Ln turn (EB)</desc>
<sym>Waypoint</sym>
<extensions>
<gpxx:RoutePointExtension xmlns:
<gpxx:Subclass>000000000000FFFFFFFFFFFFFFFFFFFFFFFF</gpxx:Subclass>
</gpxx:RoutePointExtension>
</extensions>
</rtept>
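For what it's worth, the literal substitution being asked about can be scripted, but note that a valid GPX track also needs the renamed points wrapped in <trk><trkseg>…</trkseg></trk> rather than <rte>…</rte>. A quick sketch of the idea (string replacement only — a real converter should use an XML parser and handle the GPX namespace properly):

```python
def route_to_track(gpx_text):
    """Very rough rtept -> trkpt conversion for experimenting.

    Renames the route elements and wraps them in a track; real GPX
    files should be converted with a proper XML tool instead.
    """
    return (gpx_text
            .replace("<rtept", "<trkpt")
            .replace("</rtept>", "</trkpt>")
            .replace("<rte>", "<trk><trkseg>")
            .replace("</rte>", "</trkseg></trk>"))
```

This keeps the existing time tags intact, which is what the uploader checks for; whether the result is appropriate to upload is a separate (licensing) question, as covered in the accepted answer.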
Introduction to Break in C#
Generally, when we are talking about terminating some execution at that time we are giving condition to do so. But in many cases, we don’t have an exact condition to get out of some loop or condition. Here, with the break statement, we are matching the condition to get out of the current execution and pass the control over the next upcoming statement. It helps us to continue with execution by avoiding particular operations at a certain stage. In programming languages, we often encountered with the break statements. Break statements are something that resembles its meaning to break the logic here. Like other programming languages, c# also has a break statement. You must have seen a break statement in the switch case also. In switch cases after every case, we find this break statement to come out of that case if not matched and move forward.
Syntax:
Break statement has very easy syntax as follows:
break;
Just the keyword break and semicolon. Defining a break statement is nothing but handing over the control to the next operation in sequence. Break statements are the rule applied to get out of a particular situation on time and stop further execution.
Flow Diagram
- The flow chart above shows the working of the break statement: at the start of the flow it checks a particular condition. If the condition is satisfied, the loop continues until it reaches a break statement, at which point execution leaves the loop.
- The flow is straightforward; you just need to understand it by running some examples.
- The best way to get familiar with the break statement is to write the code and try the output in different scenarios.
- Break statements are very simple, but a lot of people get confused by them, because execution jumps out of the loop and carries on with the code that follows.
- There are many cases where we use these statements, especially in nested loops. In a nested loop, the inner loop gets a break statement to exit the loop on a particular event.
Suppose we have one program and we are running loop in that program. Our requirement is if the loop reaches to 5 stop the execution of the loop and start running code in the sequence. If you look at the examples carefully break statements more likely to work as a meaning it has. It breaks the execution flow at the specified location and control is going to pass over the next required operation.
Examples of Break Statement in C#
Examples of Break Statement in C# are given below:
Example #1
Program to get no’s till 10. If it exceeds 10 then break the loop.
using System;
public class EvenNo {
    public static void Main(string[] args) {
        for (int i = 0; i <= 20; i++) {
            Console.WriteLine(i);
            if (i == 10) {
                break;
            }
        }
    }
}
Output:
In the above program, we used a for loop with the condition that it runs while i is less than or equal to 20. Inside the loop, we added an if condition: when the value of i reaches 10, stop executing the loop. We achieved this with the break statement, so only the numbers 0 to 10 are printed. Try this example in an editor and you will get the output shown above.
Example #2
Now we are going to see break statement with the switch case
using System;
public class Switch
{
    public static void Main(string[] args)
    {
        int n = 2;
        switch (n)
        {
            case 1:
                Console.WriteLine("Current value of n is: 1");
                break;
            case 2:
                Console.WriteLine("Current value of n is: 2");
                break;
            case 3:
                Console.WriteLine("Current value of n is: 3");
                break;
            case 4:
                Console.WriteLine("Current value of n is: 4");
                break;
            default:
                Console.WriteLine("Please give the correct no.");
                break;
        }
    }
}
Output:
In the above program, we have used a switch statement in which the value of n is checked against multiple case labels. Execution jumps to the case whose label matches the value; that case runs and prints its output, and the break statement then exits the switch so that execution does not fall through into the next case. If none of the cases matches, the default case (marked with the keyword default) is executed; after its statement runs, the break statement again ends the switch and the operation is over.
Example #3
We are going to see break statement with do-while loop here:
using System;
public class Program
{
    public static void Main(string[] args)
    {
        int i = 0;
        do
        {
            Console.WriteLine("The value of i is :{0}", i);
            i += 2;
            if (i == 10)
                break;
        } while (i < 20);
        Console.WriteLine("Press Enter Key to Exit..");
        Console.ReadLine();
    }
}
Output:
In the above program, we have used a do-while loop with a break statement. We check the value of i and increment it by 2 on each iteration. Although the while condition would let the loop run until i reaches 20, we break out of it as soon as i equals 10, stopping execution in the middle of the loop as per our requirement. So far we have seen examples of using the break statement with different loops and with an if condition; these are very basic examples that show how break statements work, so try them in a suitable editor. When we are executing a loop inside a program and need to break out of it partway through, we use the break statement: the keyword break followed by a semicolon. At that point, execution leaves the loop and control passes to the next statement in sequence.
Conclusion
Every language has a break statement to come out of a loop or a condition at a particular point; it depends entirely on the requirement. This is a very small but useful statement in any language, and so it is in C#. Try to get your hands dirty with the break statement.
Recommended Articles
This is a guide to Break Statement in C#. Here we discuss the introduction and working of break statement in c# along with its examples. You may also look at the following articles to learn more – | https://www.educba.com/break-in-c-sharp/ | CC-MAIN-2021-04 | refinedweb | 1,084 | 73.58 |
fgetws man page
fgetws — read a wide-character string from a FILE stream
Synopsis
#include <wchar.h>

wchar_t *fgetws(wchar_t *ws, int n, FILE *stream);

Return Value
The fgetws() function, if successful, returns ws. If end of stream was already reached or if an error occurred, it returns NULL.
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
POSIX.1-2001, POSIX.1-2008, C99.
Notes
See Also

fgetwc(3), unlocked_stdio(3)
Colophon
This page is part of release 4.11 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
fgetc(3), fgetwc(3), fgetwln(3), gets(3). | https://www.mankier.com/3/fgetws | CC-MAIN-2017-26 | refinedweb | 113 | 77.33 |
After. I'm looking for suggestions. Here's the improved code:

---

{-# OPTIONS_GHC -fglasgow-exts -fno-cse #-}

module Timer ( startTimer, stopTimer ) where

import qualified Data.Map as M
import System.Time
import System.IO.Unsafe
import Control.Exception
import Control.Concurrent

--- Map timer name and kick-off time to action
type Timers = M.Map (ClockTime, String) (IO ())

timeout :: Int
timeout = 5000000 -- 5 seconds (threadDelay takes microseconds)

{-# NOINLINE timers #-}
timers :: MVar Timers
timers = unsafePerformIO $ do
    mv <- newMVar M.empty
    forkIO $ checkTimers
    return mv

--- Not sure if this is the most efficient way to do it
startTimer :: String -> Int -> (IO ()) -> IO ()
startTimer name delay io = do
    stopTimer name
    now <- getClockTime
    let plus   = TimeDiff 0 0 0 0 0 delay 0
        future = addToClockTime plus now
    block $ do
        t <- takeMVar timers
        putMVar timers $ M.insert (future, name) io t

--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name = block $ do
    t <- takeMVar timers
    putMVar timers $ M.filterWithKey (\(_, k) _ -> k /= name) t

--- Now runs unblocked
checkTimers :: IO ()
checkTimers = do
    t <- readMVar timers -- takes it and puts it back
    case M.size t of
      -- no timers
      0 -> threadDelay timeout
      -- some timers
      _ -> do
        let (key@(time, _), io) = M.findMin t
        now <- getClockTime
        if (time <= now)
          then do
            modifyMVar_ timers $ \a -> return $! M.delete key a
            try $ io -- don't think we care
            return ()
          else threadDelay timeout
    checkTimers

--
Some time ago now, I published a post entitled Quick Tip: Using the SharePoint ‘Person or Group’ field in code which covered details of how to use the ‘Person or Group’ field type in your custom applications.
As I eluded to at the end of that post, one of the most common uses of the ‘Person or Group’ field in custom applications is when you are using the InfoPath Contact Selector control to select people inside an InfoPath form and want to store those values in a ‘Person or Group’ field inside SharePoint. The Contact Selector control is a very powerful control and there is lots of good information about it in these posts:
My own article: Top Tips for InfoPath form development with SharePoint: Part 1 (look for tip 4)
The InfoPath Team Blog: Using the Contact Selector Control
MSDN: The Contact Selector Control
As wonderful as the Contact Selector control is, it leaves a lot to the imagination in terms of storing its values in SharePoint and that is what this post is all about!
The scenario here is that we have a browser–based form (lets say a holiday request form) that gets saved to a SharePoint Form Library when it is submitted. Part of that form is the manager that needs to approve the request which is captured as a Contact Selector.
So the questions is “how do you save your contact selector value to SharePoint?”
But surely standard InfoPath property promotion does the trick!?
The first place I look when trying to do this was to try to promote one of the three values that the Contact Selector generates to a SharePoint ‘Person or Group’ field via InfoPath’s standard property promotion.
The short answer is this did not work. The main reason is that the Contact Selector is not a single field, it is actually 3 different fields (DisplayName, AccountId and AccountType) that are arranged in a very specific XML structure.
You could just bind one of the three fields to SharePoint but the column will be created as a standard ‘text’ type. This is because the three properties contained within the Contact Selector are all text fields in InfoPath. So this will work but it will not be stored as a ‘Person or Group’ field in SharePoint. This is great if you just need the login name, but what we really want is the value to be stored as a ‘Person or group’ field.
So what is the answer?
Regrettably, the only way to do this is to introduce custom code to your InfoPath form. However, the good news is that the code is fairly simple.
The code sample below deals with multiple values. If you only have a single person in your contact selector it would be much simpler, however this is a more robust solution.
The full code sample is below but these are the main steps:
- Set your form submit event to use custom code
- Add references and using statements to Microsoft.SharePoint.dll and microsoft.office.workflow.tasks.dll
- Get a semi-colon delimited list of aliases stored within the Contact Selector by using the Contact object.
- Get the SPLisItem for the form that was submitted. The logic for this will vary on your solution. In most cases the form has a unique ID, but for the sake of this simple example I am simply matching the Title field in SharePoint. So long as you get an SPLIstItem for the form that was submitted it doe snot really matter how you do this.
- Construct a string in the right format for a ‘Person or Group’ field. Refer to my Quick Tip: Using the SharePoint ‘Person or Group’ field in code article for details on this
- Update the Person or Group field in the list item.
The code is as follows:
using Microsoft.SharePoint;
using Microsoft.Office.Workflow.Utility;
public void FormEvents_Submit(object sender, SubmitEventArgs e)
{
//actually submit the form
e.CancelableArgs.Cancel = false;
//get a navigator bound to the gpContactSelector
XPathNavigator xn = this.MainDataSource.CreateNavigator();
XPathNavigator xnManager = xn.SelectSingleNode(“/my:myFields/my:Manager/my:gpContactSelector”, this.NamespaceManager); //this is the path to the gpContactSelector for your Contact Selector control
//get a semi-colon delimited array of aliases stored inside the contact selector
string aliases = “”;
using (SPSite site = new SPSite(sSiteURL))
{
using (SPWeb web = site.RootWeb)
{
//add the login name for each person to the string
Contact[] contacts = Contact.ToContacts(xnManager.InnerXml, web);
if (contacts != null && contacts.Length > 0)
{
foreach (Contact contact in contacts)
{
if (contact != null)
{
aliases += contact.LoginName + “;”;
}
}
}
//trim the final ;
if (aliases.EndsWith(“;”))
{
aliases = aliases.Substring(0, aliases.Length – 1);
}
}
}
//get the list item that was added by the form. This will depend on the logic of the form, but this simple example goes on the title
qGetItem.Query = “<Where><Contains><FieldRef Name=’Title’/><Value Type=’Text’>” + “some value that represents the title” + “</Value></Contains></Where>”; //replace ‘some value that represents the title’ with the title of your form
SPListItemCollection listitems = web.Lists[“FormsLibaryDisplayName”].GetItems(qGetItem); //replace ‘FormsLibaryDisplayName’ with the name of your form library
SPListItem li = listitems[0];
//construction a person field string
string personFieldString = “”;
string[] aliases_a = aliases.Split(‘;’);
foreach (string alias in aliases_a)
{
//get an SPUser from the alias
SPUser user = web.EnsureUser(alias);
//construct the person field sring from the ID and Login name. The ;# separator is the right format for a Sharepoint person or group field
personFieldString += user.ID.ToString() + “;#” + user.LoginName.ToString() + “;#”;
}
//update the list item
li[“PersonFieldName”] = personFieldString; //replace ‘PersonFieldName’ with the name of your Person or Group field in SharePoint
li.Update();
}
That is the end of the article, I hope you found it useful.
This article was published by
PingBack from
hi Martin,
thanks for this tip!
But I currently have a task to do exactly this the opposite way:
So, there is a multi user field in a SharePoínt list and I need to prefill an InfoPath Contact Selector with the current field value.
I have a Web Service that reads the SharePoint list and returns the values of the list item as secondary data source.
But when I bind the persons to the contact selector only the first person is set as value.
Do you have any advice how to get all users set in the contact selector?
Thanks
Matthias
In response to Matthias’s comment: I have never had to do this myself, but this should be possible via code in your InfoPath form. You just need to understanding the underlying XML structure that goes behind the Contact Selector and populate it accordingly.
Martin Kearn
Hi Martin,
I am Quite new to working with workflows in Visual Studio and specially relating them to the infopath form.
I have a form with control type as Contact Selector (which will basically be to get the email id of the user so that a Cc copy will be sent once the workflow is Started)
Is it possible to get the count of the total no of user enter ed in the contact Selector.
The thing is i need to get the count of the number of user key in the contact selector.
Could u tell how this would be possible
Thanks
Aditi
will this solution work if i will try to opening this in infopath client
This is exactly what I have been needing…but I am having a few errors with the code (New to VS coding)
The errors I get on build are:
1. The name ‘sSiteURL’ does not exist in the current context using (SPSite site = new SPSite(sSiteURL))
2. The name ‘qGetItem’ does not exist in the current context "qGetItem.Query = "<Where>…
3. The name ‘web’ does not exist in the current context listitems = web.Lists[…
Thanks for any help on this
Alternatively – If you’re using a workflow attached to your list you could simply take the relevant value of the ContactSelector (forget which one specifically without looking) and copy that value into a SharePoint Person or Group field.
Works for me.
One thing I have extreme difficulty with is populating the other way around. How on earth do you get a Person or Group field to translate through to an InfoPath Contact Selector in such a way that it appears to have resolved the user when looking at the InfoPath form.
Jason..
here is a link to explain. | https://blogs.msdn.microsoft.com/uksharepoint/2009/04/17/quick-tip-storing-infopath-contact-selector-values-in-sharepoint/ | CC-MAIN-2017-09 | refinedweb | 1,395 | 60.35 |
Here's a list of amazing tricks that you can use to improve your React applications instantly.
These tips will not only make your code cleaner and more reliable, but they also aim to make your development experience easier and overall more enjoyable.
Give these techniques a try in your React projects today!
Want the complete guide to become an expert React developer from front to back? Check out The React Bootcamp.
Replace Redux with React Query
As your application gets larger it becomes harder to manage state across your components. So you may reach for a state management library like Redux.
If your application relies on data that you get from an API, you often use Redux to fetch that server state and then update your application state.
This can be a challenging process – not only do you have to fetch data, but you also need to handle the different states, depending on whether you have the data or are in a loading or error state.
Instead of using Redux to manage data you get from a server, use a library like React Query.
First of all, React Query gives you greater control over making HTTP requests in your React apps through helpful hooks, along with the ability to easily refetch data. And it also enables you to seamlessly manage state across your app components, often without having to manually update state yourself.
Here's how you set up React Query in your index.js file:
import { QueryClient, QueryClientProvider } from 'react-query' import ReactDOM from "react-dom"; import App from "./App"; const queryClient = new QueryClient() const rootElement = document.getElementById("root"); ReactDOM.render( <QueryClientProvider client={queryClient}> <App /> </QueryClientProvider>, rootElement );
Here, we are setting up a query client which will set up a cache for us to effortlessly manage any requests that we have made in the past. We also set up a query client provider component to pass it down the entire component tree.
How to make requests with React Query
You can make requests with the useQuery hook, which takes an identifier for your query (in this case, since we are fetching user data, we will call it 'user'), plus a function that is used to fetch that data.
import { useQuery } from "react-query"; export default function App() { const { isLoading, isError, data } = useQuery("user", () => fetch(" => res.json()) ); if (isLoading) return "Loading..."; if (isError) return "Error!"; const user = data.results[0]; return user.email; }
As you can see, React Query takes care of managing these various states that can take place when we fetch our data. We no longer need to manage these states ourselves, we can just destructure them from what is returned from
useQuery.
Where does the state management part of useQuery come into play?
Now that we have fetched the user data and have it stored in our internal cache, all we need to do to be able to use it across any other component is to call
useQuery() with the 'user' key that we associated with it:
import { useQuery } from "react-query"; export default function OtherComponent() { const { data } = useQuery('user'); console.log(data); }
Make React Context Easier with a Custom Hook
React Context is a great way to pass data across our component tree. It allows us to pass data into whatever component we like without having to use props.
To consume context in a React function component, we use the
useContext hook.
However, there is a slight downside to doing so. In every component that we want to consume data that has been passed down on context, we have to import both the created context object and import React to grab the useContext hook.
Instead of having to write multiple import statements every time we want to read from context, we can simply create a custom React hook.
import React from "react"; const UserContext = React.createContext(); function UserProvider({ children }) { const user = { name: "Reed" }; return <UserContext.Provider value={user}>{children}</UserContext.Provider>; } function useUser() { const context = React.useContext(UserContext); if (context === undefined) { throw new Error("useUser in not within UserProvider"); } return context; } export default function App() { return ( <UserProvider> <Main /> </UserProvider> ); } function Main() { const user = useUser(); return <h1>{user.name}</h1>; // displays "Reed" }
In this example, we are passing down user data on our custom UserProvider component, which takes a user object and is wrapped around the Main component.
We have a
useUser hook to more easily consume that context. We only need to import that hook itself to consume our User Context in any component we like, such as our Main component.
Manage Context Providers in a Custom Component
In almost any React application that you create, you will need a number of Context providers.
We not only need context providers for React Context that we are creating, but also from third party libraries that rely upon it (like React Query) in order to pass their tools down to our to the components that need them.
Once you've started working on your React project for a while, here's what it tends to look like:
ReactDOM.render( <Provider3> <Provider2> <Provider1> <App /> </Provider1> </Provider2> </Provider3>, rootElement );
What can we do about this clutter?
Instead of putting all of our context providers within our App.js file or index.js file, we can create a component called ContextProviders.
This allows us to use the children prop, then all we have to do is put all these providers into this one component:
src/context/ContextProviders.js export default function ContextProviders({ children }) { return ( <Provider3> <Provider2> <Provider1> {children} </Provider1> </Provider2> </Provider3> ); }
Then, wrap the ContextProviders component around App:
src/index.js import ReactDOM from "react-dom"; import ContextProviders from './context/ContextProviders' import App from "./App"; const rootElement = document.getElementById("root"); ReactDOM.render( <ContextProviders> <App /> </ContextProviders>, rootElement );
Pass props more easily using the object spread operator
When it comes to working with components, we normally pass down data with the help of props. We create a prop name and set it equal to its appropriate value.
However, if we have a lot of props that we need to pass down to a component, do we need to list them all individually?
No, we don't.
A very easy way to be able to pass down all the props that we like without having to write all of the prop names and their corresponding values is to use the
{...props} pattern.
This involves putting all of our prop data in an object and spreading all of those props individually to the component we want to pass it to:
export default function App() { const data = { title: "My awesome app", greeting: "Hi!", showButton: true }; return <Header {...data} />; } function Header(props) { return ( <nav> <h1>{props.title}</h1> <h2>{props.greeting}</h2> {props.showButton && <button>Logout</button>} </nav> ); }
Map over fragments with React fragment
The
.map() function in React allows us to take an array and iterate over it, then display each element's data within some JSX.
However, in some cases, we want to iterate over that data but we do not want to return it within a closing JSX element. Maybe using an enclosing JSX element would modify our applied or we simply don't want to add another element to the DOM.
A little known tip to be able to iterate over a set of data, and not have the parent element as an HTML element, is to use
React.Fragment.
To use the longhand form of React fragments can provide it the
key prop which is required for any element over which we are iterating.
import React from 'react' export default function App() { const users = [ { id: 1, name: "Reed" }, { id: 2, name: "John" }, { id: 3, name: "Jane" } ]; return users.map((user) => ( <React.Fragment key={user.id}>{user.name}</React.Fragment> )); }
Note that we cannot use the required
key prop for the shorthand fragment's alternative:
<></>.
Want Even More? Join The React Bootcamp
The React Bootcamp was created to make you a superstar, job-ready React developer in 1 amazing course, featuring videos, cheatsheets and much more.
Gain the insider information 100s of developers have already used to become React experts, find their dream jobs, and take control of their future: | https://www.freecodecamp.org/news/build-better-react-apps-with-these-tricks/ | CC-MAIN-2022-21 | refinedweb | 1,358 | 61.56 |
Bug #7560
fcl does not understand files that have lines terminated by cr-nl
Description
Earlier today I was emailed a .fcl file which I saved to disk and then could not use. The problem turned out to be that the file had lines terminated DOS style with cr-nl, not with a simple nl.
I have attached a one line file, foo.fcl, that contains an example:
#include "minimalMessageService.fcl"
Hopefully the error is not removed by redmine. Anyway, when I ran this through art it gave the error:
Failed to parse the configuration file 'foo.fcl' with exception
---- Malformed #include directive: BEGIN
#include "minimalMessageService.fcl"
at line 1 of file ./foo.fcl
---- Malformed #include directive: END
Art has completed and will exit with status 7002.
Note that it gives this same message whether or not the included file actually exists in FHICL_FILE_PATH.
I request that FHICL be updated to be able to handle files with lines terminated by cr-nl.
History
#1 Updated by Kyle Knoepfel almost 6 years ago
- Category set to Infrastructure
- Status changed from New to Accepted
- Assignee set to Christopher Green
- Target version set to 1.13.00
- Estimated time set to 2.00 h
- SSI Package fhicl-cpp added
- SSI Package deleted (
)
Time estimate depends on our initial analysis that this is a trivial change to the white-space skipper being accurate.
#2 Updated by Christopher Green almost 6 years ago
Fixed with cetlib:7054b87, cetlib:5c7d15f, and cetlib:c9f2ee0. The problem was actually in cetlib:source:cetlib/includer.cc and cetlib:source:cetlib/include.cc, as the FHiCL parser already handled the problem by using
boost::spirit::ascii::space as part of its whitespace skipper.
#3 Updated by Christopher Green almost 6 years ago
- Status changed from Accepted to Resolved
- % Done changed from 0 to 100
#4 Updated by Christopher Green almost 6 years ago
- Status changed from Resolved to Closed
Also available in: Atom PDF | https://cdcvs.fnal.gov/redmine/issues/7560 | CC-MAIN-2020-50 | refinedweb | 324 | 56.35 |
When Anonymous Monk wrote that Perl was dying, my first response was rage that someone was talking that way about my beloved Perl. However, the article was well-written, the comments presented in a reasonable, non-trollish way, and I found myself having to take a closer look at Perl, why I used it, and where I thought it was going.
My conclusions didn't change. I'm still using Perl, I still intend to use it, and I believe it will be around for a while yet. But Anonymous Monk did a service by requiring me to think about what Perl meant to me. Like many people, I tend to get into ruts, habits that become familiar, comfortable, and eventually stagnating. It doesn't make much sense to challenge everything, every day - but it does make sense to question things every now and then, to review whether this or that habit is still working for me, or whether I would be better served by something else. Anonymous Monk shook me out of my comfortable drowsiness, got me angry, and forced me to take another look. No, I'm not ready to make any changes just yet - but I didn't really know that until I took the time to think about it. So thank you, Anonymous Monk, for p*ssing me off and forcing me to reflect. I still don't agree with you, but now I at least have some idea why I don't. Oh, and BTW, your fears of being severely downvoted weren't realized. More people upvoted your node, than downvoted it.
Update: The link to this node was inoperative when I last checked. However, this external link might work. Thanks to ww and Arunbear for their suggestions regarding linking to the article.
If you don't use Perl, it's easy to think it's dying because you don't see how extensively it's used every day. I really don't have problems getting Perl jobs and frankly, from my perspective working with the Perl Foundation, I'm quite pleased at how much more widespread Perl is than I had previously thought. However, what if you do use Perl extensively? You see that its phenomenal growth rate has slowed down and you start to get nervous when you see the silly "Perl is Dead" type of talk. The reality is simple: Perl could never sustain its growth rate and now we see its popularity going through up and down cycles. That's pretty much what we see out of any other popular language.
For more evidence, see the TIOBE programming index. Despite dropping, Perl is currently in 6th place. Out of the hundreds and hundreds of languages used all over the planet for production work, Perl is solidly in the top 10. Regardless of what your field is, wouldn't you be happy to be in the top 10 out of hundreds? Probably.
If people are concerned about Perl, there are legitimate reasons to be. Perl is showing its age. We've broken a lot of ground for dynamic languages and more recent languages have learned from our mistakes. That's a nice little gift for them, but Perl's use is extremely widespread and it's going to be around for a long time to come.
Cheers,
Ovid
New address of my CGI Course.
Well, lots of people showed up for the funeral. Then the corpse danced.
After Compline,Zaxo
#!/usr/bin/perl
#
# Perl CubeBorg Core Code (PCCC)
#
use PHP;
use Java;
use JavaScript;
use Inline Python => 'DATA'; # Mr. DATA is working for us
use Inline Ruby => 'DATA';
use Inline Basic => 'DATA';
use Inline C => 'DATA';
###
motto;
###
PHP::eval(<<EVAL);
$search = array ('@my@i','@me@i');
$replace = array ('our','us');
$new_brain = preg_replace($search, $replace, $old_brain);
EVAL
###
my $runtime = new JavaScript::Runtime;
my $context = $runtime->create_context();
$context->eval(q!
function starcubeborg() {
write("The CubeBorg is running!<BR>");
}
!);
###
my $java = new Java;
my $frame = $java->create_object("java.awt.Frame","Perl CubeBorg");
$frame->setSize(400000,400000,400000);
$frame->show();
my $weapon = $java->create_object("java.awt.Button","Push Me :)");
$java->do_event($button,"addActionListener",\&event_handler);
sub event_handler {
my($object,$event) = @_;
if ($object->same($weapon)) {
print "You pushed my button!!\nYou are dead!!!";
}
}
### Technologies ###
__DATA__
__Python__
def surrendered(x):
print "You are surrendered!!!"
return x
def kidnap(x):
print "I take you!!!"
return x
__Ruby__
def add_distinctiveness(a)
our_distinctiveness + a
end
__Basic__
010 DEF DEFEAT(S) = DEFEATS+1
020 DEF SUCCESSES(S)= SUCCESSES+1
__C__
void motto() {
printf(
"We are the PerlBorg. Lower your shields and surrender your ships.
+"
"We will add your biological and technological distinctiveness to o
+ur own. "
"Your culture will adapt to service us. Resistance is futile.\n");
}
[download]
Well, *Is* Perl Dying?
Please define 'dying' as used in this context.
Are there relatively fewer 'perl programming' threads found by Google? Yes.
Is Perl no longer a useful language? No.
Are there niches where Perl used to be the favored language, but isn't anymore? Yes.
Are there no niches where Perl is the favored language? No.
Are there relatively fewer 'perl programming' threads found by Google? Yes.
Is Perl no longer a useful language? No.
Are there niches where Perl used to be the favored language, but isn't anymore? Yes.
Are there no niches where Perl is the favored language? No.
Just like statistics -- I can come up with a side of the argument that I wish to support, and come up with numbers and references to support it.
The language has most likely, filled a few niches. The C language is THE thing you write your OS in once you can write a C compiler for your OS, or can cross compile from another OS. BASIC and COBOL found niches of their own, such as being implemented in every damned place, or as a business language. A niche does not mean that it cannot spill over into other realms, and stay usable. I.e. using C to what PHP excels at.
I believe Perl gained popularity in text processing and the likes as sed and awk are not, something. Elegant? Easy? Just not something. CGI is just that though, text processing, and that's where things got really easy for Perl in gaining a niche. At least until "better" ways came about. Some will argue php to gained traction as being easier than Perl over CGI. PHP did something that perl couldn't at that time, which is to become novel.
Where does Perl fit in? It makes easy things brain dead easy, and hard things possible for a lot of things, nothing uberly-specific. Will it die? As long as there's a need for a jack-of-all trades, master of none, yes, perl can do this and more. It's as if Perl became the new shell programming. You can string enough tools, in the form of CPAN modules together, and you can do some things with so little effort, it's criminal.
Will php live on in a niche, like c is for system programming, perl as a do-it-all language, and what not? Maybe RoR will kill it off? Who knows?
One could argue java is a better business language, but so was cobol at one point. It seems that well written languages seem to last longest because they fit their niches well by design. I'd argue Perl5, while ugly at times, is well written. It will last for some time longer as C and Fortran has in their niches. But watch out, when Perl6 comes out, everyone will go ga-ga, and the cycle occurs. *again*
However, the article was well-written, the comments presented in a reasonable, non-trollish way...
I thought it was full of half-truths, logic errors, distortions, and all deliberately chosen as provocative. It's been a couple of months since the last whiny anonymonk posted a discussion with that title, too..
Things change and we all change too and what may suit us today may be totally wrong tomorrow.
Perl is not dying, but maybe something Perlish has died in some of us. Is that bad? Certainly not: it just opens up some more space to evolve.
CountZero
"If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
See, I get so riled up on the subject because I really truly love Perl and I don't want to see it fade away into just another niche language. I don't think Perl's ever going to die (COBOL is still going strong, after all), but it may fade into the background if we're not careful.
I want to make sure we take over the world. Is there really any harm in that? :-)
I think we're not supposed to talk about | http://www.perlmonks.org/index.pl/jacques?node_id=562053 | CC-MAIN-2015-35 | refinedweb | 1,476 | 73.68 |
PROBLEM LINK :
Setter: Aditya Kumar Shaw
Tester: Reet Roy
Editorialist: Sitam Sardar
DIFFICULTY :
MEDIUM
PREREQUISITES :
Array
String
Prime Number Theorem
PROBLEM :
Chef is a secret agent for the Government of Chef Land. Chef has found a gang planning to attack Chef Land. The gang is going to bombard few cities in Chef Land. The names of the cities in Chef Land are always 7 digits numbers .
The gang leader, Almoodi, is fond of mathematics and has devised a unique and perfect method of selecting the cities to be bombarded. He has concatenated all prime numbers to create a big string: “23571113171923293137…”. He asks each of his bombers to provide a number ( which is not provided by previous bombers, to prevent collision). This number is that starting index in the string of primes, and the city name will be the next 7 digits in the string. E.g., if the bomber chooses 8, the city name will be 1719232. Note that the index of string starts from 0.
Chef has a spy in the gang. The spy has provided Chef all the numbers chosen by the bombers. However, Chef isn’t that comfortable with prime numbers and doesn’t have time to learn it. So, Chef has requested you to assist him in finding the city names before it is too late.
SHORT EXPLANATION OF THE QUESTION :
Here basically all prime numbers are concatenated in a string.
- At first, you have to give input the number of the test cases (lets the input be n)
- Thereafter we have to give n number of inputs which are basically different indices ( i ) of the string.
- Now after getting each index we get the substring of the 7 characters from the i-th index ( subString[i,i+7] ).
QUICK APPROACH :
- Take Input the number of the test case (t) and corresponding indices (n).
- Now make a string that contains n th index and subSting[n,n+7].
- Then print the corresponding substrings of different indices.
EXPLANATION :
- At first let’s take a max number 33000000 though 32000000 is sufficient for this problem
- Then make an array needed for sieve (primeArr) of bool type of size max which contains 0 (false) or 1 (true) if its i^th index is non-prime or prime simultaniously.
- Then create the desired string of prime numbers by using the primeArr.
- Then take input the number of test cases (t) and maximum possible input (n) for each test case.
Time Complexity :
O(m * log log m)
Space Complexity :
O(m)
Here m = O(n log n) which is the size of the array primeArr, array needed for sieve.
֍ Now question is, how to find the value of m?
- Let’s Find the value of m :
We know (approximately) from the prime number theorem that
- Numbers of prime numbers till n, \pi (n) = n / log n
- n-th prime number p(n) = n * (log n + log log n)
But we don’t need the nth prime number. Each prime number has some length. Most prime numbers will have length 6 for the given constraint for n ( 1.5 * 10^7 ). We can cross-check this fact by putting n = 4, 5, and 6 in Eqn(3) for primes of length 4, 5, and 6 respectively.
- Numbers of primes each having length x, Y(x) = π(10^(x+1)) -π(10^x)
Hence, we can divide p(n) by 6 to get an approximation of m.
We have got approx value of m = n * (log n + log log n) / 6. For n = 10^7, m is approximately 4.8 * 10^7.
We now try different values of m and print the length of the string of primes. We see that for m = 3.2 * 10*7, length of string > n while for m = 3.1 * 10^7, length of string < n. So, we shall use m = 3.2 * 10^7.
SOLUTION
Setter's Solution
#include<bits/stdc++.h>
using namespace std;
int NextPrime(bool *arr, int n, int p);
void sieve(bool *arr, int n);
string stringOfPrimes(bool *arr, int n);
int main(void)
{
int max = 33000000; // though 32000000 is sufficient for this problem
bool *primeArr = (bool *)malloc(max * sizeof(bool));
sieve(primeArr, max);
string primes = stringOfPrimes(primeArr, max);
// cout << “size of prime string is " << primes.length() << endl;
int t, n;
cin >> t;
while(t–) {
cin >> n;
cout << primes.substr(n, 7) << endl;
}
return 0;
}
string stringOfPrimes(bool *arr, int n) {
stringstream st(”");
for(int i=2; i<n; ++i) {
if(arr[i]==1)
st << i;
}
return st.str();
}
void sieve(bool *arr, int n)
{
// initialize array with 1
// 1 means prime
// all non-primes will be changed to 0
for(int i=0; i<n; ++i) {
arr[i] = 1;
}
arr[0] = 0; //0 is not prime
arr[1] = 0; //1 is neither prime nor composite
int p = 2; //will start finding composites nos with 2
int n_sqrt = sqrt(n);
while(p <= n_sqrt) {
int index = pow(p,2);
for(; index<n; index+=p) {
arr[index] = 0;
}
//cout << "last " << p;
p = NextPrime(arr,n,p);
//cout << " next " << p << endl;
}
}
int NextPrime(bool *arr, int n, int p) {
int i;
for(i = p+1; i<n; ++i) {
if(arr[i]!=0)
break;
}
return i;
}
Tester's Solution
import math
max_val = 33000000
def sieve():
tr = [True] * max_val
s = ‘’ #string of primes generation
for i in range(2, max_val):
if(tr[i]):
s += str(i) #append if true
for j in range(i*i, len(tr), i):
tr[j] = False
return s
t = int(input())
s = sieve()
while(t > 0):
n = int(input())
print(s[n:n+7])#city id taking 7 digits starting from n
t -= 1 | https://discuss.codechef.com/t/findid-editorial/84902 | CC-MAIN-2021-49 | refinedweb | 943 | 77.06 |
C++ Output Formatting
Formatting output plays an important role and it makes the output easy to read and understand. C++ provides several input / output manipulators. There are two types of I / O manipulators: setw() and setprecision(). To use these manipulations, you must include the header file iomanip.h
#include < iomanip.h >
C++ Setw() Manipulator
The setw() manipulator set the width of the specified area for the output.It takes the size of the field as a parameter.For example, the code :
cout<< setw(6) << "P";
Output :- _ _ _ _ _ P
cout<< setw(8) << 22;
Output :- _ _ _ _ _ _ 2 2
cout<< setw(8) << 4444;
Output :- _ _ _ _ 4 4 4 4
cout<< setw(8) << 666666;
Output :- _ _ 6 6 6 6 6 6
C++ Setprecision() Manipulator
Setprecision() Manipulators sets the total number of digits to be displayed when the manipulator floating point number is printed.
cout << setprecision(5) << 123.456;
Output :- 123.46 // notice the rounding
cout << setprecision(2) << 1.1658;
Output :- 1.2
cout << setprecision(4) << 125.721;
Output :- 125.7
cout << setprecision(6) << 125.6987;
Output :- 125.699
Precedence of Operators
The precedence of an operator determines how operators are grouped in an expression; the associativity of an operator determines how operators of the same precedence are grouped in the absence of parentheses.
For example, A = 3 + 5 * 3; here A is assigned 18, not 24, because the * operator has higher precedence than +, so 5*3 is evaluated first and then added to 3.
Operators are listed top to bottom, in descending precedence.
Following table lists the precedence order of operators:
Exercise:-
1. Which operator has the highest precedence?
Explanation: the ++ (post-increment) operator has the highest precedence.
2. Which operator has the lowest precedence?
Explanation: the , (comma) operator has the lowest precedence.
#include <unistd.h> ssize_t readlink(const char *restrict path, char *restrict buf, size_t bufsiz);
ssize_t readlinkat(int fd, const char *restrict path, char *restrict buf, size_t bufsize);

The readlink() function places the contents of the symbolic link referred to by path in the buffer buf, which has size bufsiz. If the number of bytes in the symbolic link is less than bufsiz, the contents of the remainder of buf are unspecified. Upon successful completion, readlink() marks for update the last data access timestamp of the symbolic link.
The readlinkat() function is equivalent to the readlink() function except in the case where path specifies a relative path. In this case the symbolic link whose content is read is determined relative to the directory associated with the fd parameter rather than the current working directory. If readlinkat() is passed the special value AT_FDCWD in the fd parameter, the current working directory is used and the behavior is identical to a call to readlink().
Upon successful completion, readlink() and readlinkat() return the count of bytes placed in the buffer. Otherwise, it returns −1, leaves the buffer unchanged, and sets errno to indicate the error.
The readlink() and readlinkat() functions will fail if:
Read permission is denied for the directory.
More than {SYMLOOP_MAX} symbolic links were encountered in resolving path.
As a result of encountering a symbolic link in resolution of the path argument, the length of the substituted pathname string exceeded {PATH_MAX}.
The readlinkat() function may fail if:
The path argument is not an absolute path and fd is neither AT_FDCWD nor a file descriptor associated with a directory.
Portable applications should not assume that the returned contents of the symbolic link are null-terminated.
See attributes(5) for descriptions of the following attributes:
stat(2), symlink(2), attributes(5), standards(5) | https://docs.oracle.com/cd/E36784_01/html/E36872/readlink-2.html | CC-MAIN-2021-10 | refinedweb | 238 | 50.36 |
Efficiently Storing Passwords in .NET Framework
Afzaal Ahmad Zeeshan
Jul 30, 2015
Introduction and Background
In this article, I will discuss a few things about security in user accounts and other lockers that you might be creating in your own applications. First of all, you need to understand that security is the most essential part of every application where a user is able to share his data. A user trusts an application if he is sure that it is safe from external hacks and unauthorized attempts to get at the data. Users must be given enough grounds to put their faith in you.
I am not just talking about users of enterprise solutions; I am talking about every single application. If you develop an application that provides users with access to authentication services, such as logging in, then you must always provide security to users. The first and foremost thing to secure is the credentials that they will provide you with. Let me simplify one thing for you. An authentication system is not only one created using a username (an email, for example) and a password made up of alphanumeric characters. An authentication system, by definition, is a system that authenticates users; a system that tells and ensures that the user indeed is the user he/she is trying to be. For example, on C# Corner I am "AuthorID: 201fc1". So when the server wants to communicate with me, it performs actions for my interactions using that ID, which is then used to show my name, Afzaal Ahmad Zeeshan, and my display picture to users for good readability purposes. On the backend it is the AuthorID that is being used (I am not sure what they call it in their database).
Now think of a scenario where C# Corner has no security. Someone logs in to a computer and enters my ID as the AuthorID. C# Corner would look up the record and act within whatever I am authorized to do. It might delete all of my articles, or worse, share my private information on the internet, such as passwords or other private resources stored with my account. Thank God, and the development team, there are security measures on C# Corner. Now, this was just a common example; there are worse conditions and scenarios that such a security leak could raise.
That is why it is always recommended to pay attention to security foremost!
Securing users
Securing your users is a very simple yet complex task. Many frameworks that are being used nowadays have these features already built in. ASP.NET and most specially, .NET framework have cryptography techniques already built in for you. You can use the namespaces (
discussed later in this article
) to secure your protected data in a way that no one can steal the data, even if they do, I repeat, even if they do, it would take a lot of time of theirs to decrypt the ciphers.
Mostly, authentication systems are built using an email address and a password as the key/value pair for credentials. But let me tell you one more thing: you can create your own login system using a security question and an answer.

A sample example of server-client communication that sets up authentication based on a question and an answer.

Now, you can see that the above logic is not very secure, because everyone (or at least those who know me) knows that Eminem is my favorite rapper. Anyone can take that and authenticate themselves on the servers. For this, there is another technique, known as hashing. I will show you what it is in a minute. First, understand the actual authentication logic.
Authentication is actually just a function, feature or program that asks users a security question before they can continue using the service as who they claim to be. For example, your username and password: the username everyone knows; the password only you know. The server asks you for both, accurately, so that it can authenticate you and allow you to work as the one you claim to be. If the credentials are false, the server won't authenticate you.
Each user has to go through this process, whether sharing answers to security questions or providing passwords.
The above image demonstrates an authentication system for websites. You do not need to use only the username and password combination anymore; there are many more functions that can be implemented in the application, just to make sure that the user actually is whom he claims to be. :-)
Securing the passwords
Securing user accounts starts with securing the passwords of users. A question can be shared with anyone, but the answer to that question should not be shared, even with the server. Many techniques have been introduced to store passwords; here I will only talk about hashing the password.
The technique of hashing hides the underlying message, the password, so that no one can get an idea of what a user might be using as the secure string for his account. Passwords are similar to the keys to the locks of your home. There is no point in locking your house if you are going to leave the key unsafe, exposed or unprotected. Similarly, if you create a login system but do not secure the passwords of your users, then chances are your users are already at risk of privacy breaches.
Cryptography has been around ever since computers were introduced; programmers have created algorithms to secure the data in your computer so that no one can steal it without your intent or consent. Privacy threats are the worst kind: they can make your application lose users far more easily than bad UI or bad UX techniques ever could.
Hashing the passwords
Enough with the introduction; you need to understand what the hashing process is, how it helps programmers and how it maintains security. Hashing passwords is the key to securing the passwords in your system. A hashing algorithm is designed in such a way that it is almost impossible for anyone to convert the hashed string back to the actual password string.
For example (I am not sure of the actual algorithm used, but as a sample), I ask you to select one number from 1-100. Selected? Next, I will take one number (50) and multiply your number by it. Let's assume you selected 60; the result is 3000. Storing 3000 (and the number I chose) gives me a way to mask your data, 60. No hacker would be able to convert 3000 back to the actual password unless they know either one of the inputs; either 50 or 60 is required to run the algorithm backwards from 3000.
The above example was a very simple, childish one. In reality, the answers span hundreds of characters and the process is executed thousands of times to make sure the string can no longer be converted back. For real hashing processes, a special key is used as a private number. This key is used to hash the string. The hash created is an alphanumeric string. It is almost impossible to convert it back; however, I will talk about a few other side effects that cause hashed passwords to be cracked. Read on...
Process of hashing a password; with salt.
It is just a security measure that makes sure that the content of your passwords is not easily accessible even if your database is exposed to hackers!
Adding Salt
Um, ever had something to eat without masala? You can certainly eat your food without it; the purpose of salting the food is to add extra taste, but it is not required. Similarly, adding a salt string to the password before processing it with the hash algorithm is not required, but it gives very fruitful results if done.
Purpose of adding a salt to password.
From the above image it is clear why one should add a salt string to the password string before hashing it.
Purpose of adding salt
The main purpose of adding a salt to the password before hashing is that quite a lot of attack procedures have been defined in hacking:
Rainbow Table attack
Brute Force attack
Dictionary attack
These attacks try as many password strings against the server as they can in order to guess the actual password. The rainbow table attack is a very common one: it uses precomputed hashes of the most commonly used password strings, so that a stolen hash can be matched and the server may let the attacker in. Dictionary and brute force attacks are also commonly used in this hacking technique.
Adding a salt just makes it even harder for the hacker to guess a password string whose hash computes to the result stored for the user. For example, I enter the password "KingsNeverDieIsAGreatSongByEminem", and at the same time the server generates a salt, "DidYouKnow". The password string now looks like "DidYouKnowKingsNeverDieIsAGreatSongByEminem". The hash computed for this is very different from the hash of either string alone. A hacker attempting to create a hash for these would not be able to produce the same one.
Also, it must be noted that salt must be a random string, so hacker or his tool cannot judge it. A few good techniques for salt are:
It must be a new salt for every password.
For every user, a new salt must be generated.
A new salt must be generated every time you need to compute a hash.
The hash and salt can be stored in the database for later use.
The salt must be a random string of alphanumeric characters.
Although appending or prepending the salt doesn't matter, by convention the salt is prepended to the password.
Following the above conditions, you can create an extra security layer over the password. In the next chapter, we will learn how to hash passwords in the .NET framework using C#. Remember: the .NET framework and frameworks derived from it, such as WPF, WinForms, ASP.NET etc., can all use the native cryptographic libraries of the .NET framework. So, the code I provide is applicable to ASP.NET web applications, WPF desktop apps, WinForms applications and other frameworks that run on the .NET framework.
Hashing a password in .NET
The fun part of the article: in this section I will show you code of only a few lines that can be used to generate hashes. I will show you several kinds of procedures that can be used to mask, that is, hash, the passwords.
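As a taste of what such code looks like, here is a minimal sketch using the built-in Rfc2898DeriveBytes class (PBKDF2) from System.Security.Cryptography. The salt size, iteration count and hash length below are illustrative choices, not fixed requirements:

```csharp
using System;
using System.Security.Cryptography;

class PasswordHasher
{
    // Returns "salt:hash", both Base64-encoded, so they can be stored together.
    public static string HashPassword(string password)
    {
        byte[] salt = new byte[16];               // a new random salt per password
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            byte[] hash = kdf.GetBytes(32);       // 256-bit derived key
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }
    }

    public static bool Verify(string password, string stored)
    {
        string[] parts = stored.Split(':');
        byte[] salt = Convert.FromBase64String(parts[0]);
        byte[] expected = Convert.FromBase64String(parts[1]);

        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            byte[] actual = kdf.GetBytes(32);
            // Note: a constant-time comparison is preferable in production code.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}
```

Because the salt is random, hashing the same password twice produces two different stored strings; Verify() recomputes the hash with the stored salt instead of comparing hashes directly.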
Hadoop QA commented on HBASE-14809:
-----------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
against master branch at commit 7c3c9ac9c67cd03f9a915f528d22cb4ed81cb6e8.
ATTACHMENT ID: 12772352
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include new or modified tests (org.apache.hadoop.hbase.security.access.TestNamespaceCommands).
Test results:
Release Findbugs (version 2.0.3) warnings:
Checkstyle Errors:
Console output:
This message is automatically generated.
> Namespace permission granted to group
> --------------------------------------
>
> Key: HBASE-14809
> URL:
> Project: HBase
> Issue Type: Bug
> Components: security
> Affects Versions: 1.0.2
> Reporter: Steven Hancz
> Attachments: 14809-v1.txt, 14809-v2.txt
>
>
> Hi,
> We are looking to roll out HBase and are in the process to design the security model.
> We are looking to implement global DBAs and Namespace specific administrators.
> So for example the global dba would create a namespace and grant a user/group admin privileges
within that ns.
> So that a given ns admin can in turn create objects and grant permission within the given
ns only.
> We have run into some issues at the ns admin level. It appears that a ns admin can NOT
grant to a group unless it also has global admin privilege. But once it has global admin privilege
it can grant in any NS not just the one where it has admin privileges.
> Based on the HBase documentation at
> Table 13. ACL Matrix
> Interface Operation Permissions
> AccessController grant(global level) global(A)
> grant(namespace level) global(A)|NS(A)
> grant at a namespace level should be possible for someone with global A OR (|) NS A permission.
> As you will see in our test it does not work if NS A permission is granted but global
A permission is not.
> Here you can see that group hbaseappltest_ns1admin has XCA permission on ns1.
> hbase(main):011:0> scan 'hbase:acl'
> ROW COLUMN+CELL
> @ns1 column=l:@hbaseappltest_ns1admin, timestamp=1446676679787, value=XCA
> However:
> Here you can see that a user who is a member of the group hbaseappltest_ns1admin cannot
grant an RWX privilege to a group, as it is missing the global A privilege.
> $hbase shell
> 15/11/13 10:02:23 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead,
use io.native.lib.available
> HBase Shell; enter 'help<RETURN>' for list of supported commands.
> Type "exit<RETURN>" to leave the HBase Shell
> Version 1.0.0-cdh5.4.7, rUnknown, Thu Sep 17 02:25:03 PDT 2015
> hbase(main):001:0> whoami
> ns1admin@WLAB.NET (auth:KERBEROS)
> groups: hbaseappltest_ns1admin
> hbase(main):002:0> grant '@hbaseappltest_ns1funct' ,'RWX','@ns1'
> ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions
for user 'ns1admin' (global, action=ADMIN)
> The way I read the documentation a NS admin should be able to grant as it has ns level
A privilege not only object level permission.
> CDH version is 5.4.7 and HBase version is 1.0.
> Regards,
> Steven
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/hbase-issues/201511.mbox/%3CJIRA.12912952.1447445394000.70394.1447491311021@Atlassian.JIRA%3E | CC-MAIN-2018-39 | refinedweb | 495 | 50.23 |
Frozen Plans
Most SQL statements have an associated Query Plan. A query plan is created when an SQL statement is prepared. By default, operations such as adding an index or recompiling the class purge this Query Plan. The next time the query is invoked, it is re-prepared and a new Query Plan is created. Frozen plans enable you to retain (freeze) an existing Query Plan across compiles. Query execution then uses the frozen plan, rather than performing a new optimization and generating a new query plan.
Changes to system software may also result in a different Query Plan. Usually, these upgrades result in better query performance, but it is possible that a software upgrade may worsen the performance of a specific query. Frozen plans enable you to retain (freeze) a Query Plan so that query performance is not changed (degraded or improved) by a system software upgrade.
How to Use Frozen Plans
There are two strategies for using frozen plans — the optimistic strategy and the pessimistic strategy:
Optimistic: use this strategy if your assumption is that a change to the system software or to a class definition will improve performance. Run the query and freeze the plan. Export (backup) the frozen plan. Unfreeze the plan. Make the software change. Re-run the query. This generates a new plan. Compare the performance of the two queries. If the new plan did not improve performance, you can import the prior frozen plan from the backup file.
Pessimistic: use this strategy if your assumption is that a change to the system software or to a class definition will probably not improve performance of a specific query. Run the query and freeze the plan. Make the software change. Re-run the query with the %NOFPLAN keyword (which causes the frozen plan to be ignored). Compare the performance of the two queries. If ignoring the frozen plan did not improve performance, keep the plan frozen and remove %NOFPLAN from the query.
Software Version Upgrade Automatically Freezes Plans
When you upgrade InterSystems IRIS® data platform to a new major version, existing Query Plans are automatically frozen. This ensures that a major software upgrade will never degrade the performance of an existing query. After a software version upgrade, perform the following steps for performance-critical queries:
Execute the query with the plan state as Frozen/Upgrade and monitor performance. This is the optimized Query Plan that was created prior to the software upgrade.
Add the %NOFPLAN keyword to the query, then execute and monitor performance. This optimizes the Query Plan using the SQL optimizer provided with the software upgrade. It does not unfreeze the existing Query Plan.
Compare the performance metrics.
If the %NOFPLAN performance is better, the software upgrade improved the Query Plan. Unfreeze the Query Plan. Remove the %NOFPLAN keyword.
If the %NOFPLAN performance is worse, the software upgrade degraded the Query Plan. Keep the Query Plan frozen; promote it from Frozen/Upgrade to Frozen/Explicit. Remove the %NOFPLAN keyword.
After testing your performance-critical queries, you can unfreeze all remaining Frozen/Upgrade plans.
This automatic freeze occurs when you prepare/compile a query under an InterSystems software version newer than the version under which the plan was originally created. For example, consider an SQL statement that was prepared/compiled under system software version xxxx.1. You subsequently upgrade to version xxxx.2, and the SQL statement is prepared/compiled again. The system will detect this is the first prepare/compile of the SQL statement on the new version, and automatically marks the plan state as Frozen/Upgrade, and uses the existing plan for the new prepare/compile. This ensures the query plan used is no worse than the query plan of the previous version.
Only major version InterSystems system software upgrades automatically freeze existing query plans. A maintenance release version upgrade does not freeze existing query plans. For example, a major version upgrade, such as from 2018.1 to 2019.1 would perform this operation. A maintenance release version upgrade, such as 2018.1.0 to 2018.1.1 does not perform this operation.
In the Management Portal SQL interface the SQL Statements Plan State column indicates these automatically frozen plans as Frozen/Upgrade and the Plan Version indicates the InterSystems software version of the original plan. Refer to SQL Statement Details for further information. You can unfreeze individual plans using this interface.
You can list all Frozen/Upgrade plans in the current namespace using the INFORMATION.SCHEMA.STATEMENTS Frozen=2 property.
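For example, a catalog query along these lines (modeled on the catalog query shown later on this page; the column choice is illustrative) lists only the plans frozen by a version upgrade:

```sql
SELECT Frozen, Timestamp, Statement
  FROM INFORMATION_SCHEMA.STATEMENTS
 WHERE Frozen = 2
```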
You can use the following $SYSTEM.SQL.Statement methods to freeze a single query plan or multiple query plans: FreezeStatement() for a single plan; FreezeRelation() for all plans for a relation; FreezeSchema() for all plans for a schema; FreezeAll() for all plans in the current namespace. There are corresponding Unfreeze methods.
A Freeze method can promote (“freeze”) query plans flagged as Frozen/Upgrade to Frozen/Explicit. Commonly, you would use this method to selectively promote appropriate Frozen/Upgrade plans to Frozen/Explicit, then unfreeze all remaining Frozen/Upgrade plans.
An Unfreeze method can unfreeze Frozen/Upgrade query plans within the specified scope: namespace, schema, relation (table), or individual query.
Frozen Plans Interface
There are two frozen plan interfaces, used for different purposes:
Management Portal SQL Statements interface, used to freeze (or unfreeze) the plan for an individual query.
The $SYSTEM.SQL.Statement Freeze and Unfreeze methods, used to freeze or unfreeze all plans for a namespace, a schema, a table, or an individual query.
In the Management Portal SQL interface select the Execute Query tab. Write a query, then click the Show Plan button to display the current query execution plan. If the plan is frozen, the first line in the Query Plan section is “Frozen Plan”.
In the Management Portal SQL interface select the SQL Statements tab. This displays a list of SQL Statements. The Plan State column of this list specifies Unfrozen, Unfrozen/Parallel, Frozen/Explicit, or Frozen/Upgrade. (The Plan State column is blank if the statement has no associated Query Plan.)
You can list the plan state for all SQL Statements in the current namespace using the INFORMATION.SCHEMA.STATEMENTS Frozen property values: Unfrozen (0), Frozen/Explicit (1), Frozen/Upgrade (2), or Unfrozen/Parallel (3).
To freeze or unfreeze a plan, choose an SQL statement in the SQL Statement Text column. This displays the SQL Statement Details box. At the bottom of this box it displays the Statement Text and Query Plan. The background color for these sections is green if the plan is not frozen, and blue if the plan is frozen. Just above that, under Statement Actions, you can select the Freeze Plan or Un-Freeze Plan button, as appropriate. You then select Close.
Freeze Plan button: Clicking this button will cause the query optimization plan for this statement to be frozen. When a plan is frozen, and that SQL statement is compiled, the SQL compilation will use the frozen plan information and skip the query optimization phase.
Un-Freeze Plan button: Clicking this button will delete the frozen plan for this statement and new compilations of this statement will go through query optimization phase to determine the best plan to use.
You can also freeze or unfreeze one or more plans using the $SYSTEM.SQL.Statement Freeze and Unfreeze methods. You can specify the scope of the freeze or unfreeze operation by specifying the appropriate method: FreezeStatement() for a single plan; FreezeRelation() for all plans for a relation; FreezeSchema() for all plans for a schema; FreezeAll() for all plans in the current namespace. There are corresponding Unfreeze methods.
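As a sketch of how these calls look from ObjectScript (the schema and table names below are examples, and the exact signatures and return-status handling should be checked in the class reference):

```objectscript
 // Freeze every query plan in the current namespace
 SET sc = $SYSTEM.SQL.Statement.FreezeAll()

 // Freeze all plans for one schema, or for one relation (table)
 SET sc = $SYSTEM.SQL.Statement.FreezeSchema("Sample")
 SET sc = $SYSTEM.SQL.Statement.FreezeRelation("Sample.Person")

 // Undo it all
 SET sc = $SYSTEM.SQL.Statement.UnfreezeAll()
```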
The meaning and use of the other fields in the SQL Statement Details box are described in the “SQL Statements” chapter of this guide.
Privileges
A user can view only those SQL Statements for which they have execute privileges. This applies both to Management Portal SQL Statements listings and to INFORMATION.SCHEMA.STATEMENTS class queries.
Management Portal SQL Statements access requires “USE” privilege on the %Development resource. Any user that can see an SQL Statement in the Management Portal can freeze or unfreeze it.
For catalog access to SQL Statements, you can see the statements if you are privileged to execute the statement or you have “USE” privilege on the %Development resource.
For $SYSTEM.SQL.Statement Freeze or Unfreeze method calls, you must have “U” privilege on the %Developer resource.
Frozen Plan Different
If a plan is frozen, you can determine if unfreezing the plan would result in a different plan without actually unfreezing the plan. This information can assist you in determining which SQL statements are worth testing using %NOFPLAN to determine if unfreezing the plan would result in better performance.
You can list all frozen plans of this type in the current namespace using the INFORMATION.SCHEMA.STATEMENTS FrozenDifferent property.
A frozen plan may be different from the current plan due to any of the following operations:
Recompiling the table or a table referenced by the table
Using SetMapSelectability() to activate or deactivate an index
Running TuneTable on a table
Upgrading the InterSystems software version
Recompiling automatically purges existing cached queries. For other operations, you must manually purge existing cached queries for a new query plan to take effect.
These operations may or may not result in a different query plan. There are two ways to determine if they do:
Manually checking individual frozen plans
Automatically scanning all frozen plans on a daily basis
If the plan has not yet been checked by either of these operations, or a plan is not frozen, the SQL Statements listing New Plan column is blank. Unfreezing a checked frozen plan resets the New Plan column to blank.
Manual Frozen Plan Check
At the top of the SQL Statement Details page for a frozen plan there is a Check frozen button. Pressing this button displays the Unfrozen plan different check box. If this box is checked, unfreezing the plan would result in a different query plan.
When you have performed this Check frozen test on a frozen plan:
If the Unfrozen plan different box is checked, the SQL Statements listing New Plan column contains a “1”. This indicates that unfreezing the plan would result in a different plan.
If the Unfrozen plan different box is not checked, the SQL Statements listing New Plan column contains a “0”. This indicates that unfreezing the plan would not result in a different plan.
A cached query that has been frozen has a New Plan of “0”; purging the cached query and then unfreezing the plan causes the SQL statement to disappear.
A Natural Query that has been frozen has a blank in the New Plan column.
After performing this test, the Check frozen button disappears. If you wish to re-test a frozen plan, select the Refresh Page button. This re-displays the Check frozen button.
Automatic Daily Frozen Plan Check
InterSystems SQL automatically scans all frozen statements in the SQL Statement listing every night at 2:00am. This scan lasts for, at most, one hour. If the scan is not completed in one hour, the system notes where it left off, and continues from that point on the next daily scan. You can use the Management Portal to monitor this daily scan or to force it to scan immediately: select System Operation, Task Manager, Task Schedule, then select the Scan frozen plans task.
This scan examines all frozen plans:
If the frozen plan has the same InterSystems software version as the current version, InterSystems IRIS® data platform computes a hash on referenced tables and timestamps of the two plans to create an internal list of plans that may have changed. For this subset it then performs a string-for-string comparison of the two plans to determine which plans actually differ. If there is any difference between the two plans (however minor), it flags the SQL statement in the SQL Statements listing New Plan column with a “1”. This indicates that unfreezing the plan would result in a different query plan.
If the frozen plan has the same InterSystems IRIS version as the current version and string-for-string comparison of the two plans is an exact match, it flags the SQL statement in the SQL Statements listing New Plan column with a “0”. This indicates that unfreezing the plan would not result in a different query plan.
If the frozen plan has a different InterSystems software version from the current version (Frozen/Update), InterSystems IRIS determines if a change to the SQL optimizer logic would result in a different query plan. If so, it flags the SQL statement in the SQL Statements listing New Plan column with a “1”. Otherwise, it flags the SQL statement New Plan column with a “0”.
You can check the results of this scan by invoking INFORMATION.SCHEMA.STATEMENTS. The following example returns the SQL Statements for all frozen plans, indicating whether the frozen plan is different from what the plan would be if not frozen. Note that an unfrozen statement may be Frozen=0 or Frozen=3:
SELECT Frozen,FrozenDifferent,Timestamp,Statement FROM INFORMATION_SCHEMA.STATEMENTS WHERE Frozen=1 OR Frozen=2
Frozen Plan in Error
If a statement's plan is frozen, and something changes to a definition used by the plan to cause the plan to be invalid, an error occurs. For example, if an index was deleted from the class that was used by the statement plan:
The statement's plan remains frozen.
On the SQL Statement Details page the Compile Settings area displays a Plan Error field. For example, if a query plan used an index name indxdob and then you modified the class definition to drop index indxdob, a message such as the following displays: Map 'indxdob' not defined in table 'Sample.Mytable', but it was specified in the frozen plan for the query.
On the SQL Statement Details page the Query Plan area displays Plan could not be determined due to an error in the frozen plan.
If the query is [re]compiled and the frozen plan is in an error state, InterSystems IRIS does not use the frozen plan. Instead, the system creates a new Query Plan that will work given the current definitions. However, this Query Plan is not preserved in a cached query or an SQL Statement if a frozen plan is in effect.
The plan in error remains in error until either the plan is unfrozen, or the definitions are modified to bring the plan back to a valid state.
If you modify the definitions to bring the plan back to a valid state, go to the SQL Statement Details page and press the Clear Error button to determine if you have corrected the error. If corrected, the Plan Error field disappears; otherwise the Plan Error message re-displays. If you have corrected the definition, you do not have to explicitly clear the plan error for SQL to begin using the frozen plan. If you have corrected the definition, the Clear Error button causes the SQL Statement Details page Frozen Query Plan area to again display the execution plan.
A Plan Error may be a “soft error.” This can occur when the plan uses an index, but that index is currently not selectable by the query optimizer because its selectability has been set to 0 by SetMapSelectability(). This was probably done so the index could be [re]built. When InterSystems IRIS encounters a soft error for a statement with a frozen plan, the query processor attempts to clear the error automatically and use the frozen plan. If the plan is still in error, the plan is again marked in error and query execution uses the best plan it can.
%NOFPLAN Keyword
You can use the %NOFPLAN keyword to override a frozen plan. An SQL statement containing the %NOFPLAN keyword generates a new query plan. The frozen plan is retained but not used. This allows you to test generated plan behavior without losing the frozen plan.
The syntax of %NOFPLAN is as follows:
DECLARE <cursor name> CURSOR FOR SELECT %NOFPLAN ... SELECT %NOFPLAN .... INSERT [OR UPDATE] %NOFPLAN ... DELETE %NOFPLAN ... UPDATE %NOFPLAN
In a SELECT statement the %NOFPLAN keyword can only be used immediately after the first SELECT in the query: it can only be used with the first leg of a UNION query, and cannot be used in a subquery. The %NOFPLAN keyword must immediately follow the SELECT keyword, preceding other keywords such as DISTINCT or TOP.
Exporting and Importing Frozen Plans
You can export or import SQL Statements as an XML-formatted text file. This enables you to move a frozen plan from one location to another. SQL Statement exports and imports include an encoded version of the associated query plan and a flag indicating whether the plan is frozen. For details, refer to Exporting and Importing SQL Statements. | https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GSQLOPT_frozenplans | CC-MAIN-2020-45 | refinedweb | 2,803 | 63.09 |
This article provides details on how to write to the System Event log from a C# application.
In many cases, it's best not only to detect and catch errors, but to log them as well. For example, some problems may occur only when your application is dealing with a particularly large load. Other problems may occur, with no obvious reasons as expected, only in certain remote situations. To diagnose these errors and build a larger picture of the application problems, you need to log errors automatically so they can be reviewed at a later time.
The .NET Framework provides a wide range of logging options. When certain errors occur, you can send emails, add a database record or write to a file. The following section throws light on how you can log error details to the System Event Log from your .NET application.
The Windows event log basically maintains three types of logs:
Each event record identifies the source (generally the application or service that created the record), the type of notification (error, information, warning) and the time it was logged. You can double-click on the log to view additional information such as a text description.
One of the potential problems with event logs is that they are automatically overwritten when the maximum set size is reached or after a certain number of days (typically seven days). This means application logs cannot be used to log critical information that need to be retained for a long period of time. Instead, they should be used to track information that is valuable only for a short period of time. For example, the event logs can be used to review errors and diagnose strange behavior immediately after it occurs, say within a couple of days or so.
The following sample code snippet shows how to write to the System Event Log. Here we try to read the contents of a text file. If the specified text file exists in the specified path, the contents are displayed. If the file is not found, control flow is passed to the catch() block and the corresponding error is logged to the system event log.
catch()
private void btnEventLogger_Click(object sender, EventArgs e)
{
try
{
string FilePath, Line;
FilePath = "C:\\MySampleFile.txt";
StreamReader m_StreamReader;
m_StreamReader = File.OpenText(FilePath);
while ((Line = m_StreamReader.ReadLine()) != null)
{
MessageBox.Show(Line);
}
m_StreamReader.Close();
}
catch (Exception ex)
{
EventLog m_EventLog = new EventLog("");
m_EventLog.Source = "MySampleEventLog";
m_EventLog.WriteEntry("Reading text file failed " + ex.Message,
EventLogEntryType.FailureAudit);
MessageBox.Show(
"Successfully wrote the following to the System Event Log : \n"
+ ex.Message);
}
}
The Event Viewer can be accessed from Administrative Tools > Event Viewer.
On double clicking an item from the list, the details of that item would be opened up. The error message can be viewed in the Description area.
The System.Diagnostics namespace provides classes that allow you to interact with system processes, event logs, and performance counters. Therefore, include this namespace to use the EventLog class.
System.Diagnostics
EventLog
Event logging uses disk space and normally takes processor time away from applications. So do not store unimportant information or large quantities of data in the event log. Generally, an event log should be used to log unexpected conditions or errors and not user actions or performance tracking. | http://www.codeproject.com/Articles/29052/Writing-to-System-Event-Log | CC-MAIN-2014-52 | refinedweb | 543 | 55.44 |
kaiwang@ could consistently reproduce it in his machine.
Product: Chrome
Stack Signature: base::`anonymous namespace'::OnNoMemory()-559748
New Signature Label: base::`anonymous namespace'::OnNoMemory()
New Signature Hash: 97a82f64_959b8e4e_9da16f17_f7d04147_3df7e38c
Report link:
Meta information:
Product Name: Chrome
Product Version: 22.0.1186.0
Report ID: dd23179213e07ffa
Report Time: 2012/06/26 05:38:13, Tue
Uptime: 18 sec
Cumulative Uptime: 0 sec
OS Name: Windows NT
OS Version: 6.1.7601 Service Pack 1
CPU Architecture: x86
CPU Info: GenuineIntel family 6 model 37 stepping 5
ptype: browser
Thread 0 *CRASHED* ( EXCEPTION_BREAKPOINT @ 0x60d88fa5 )
0x60d88fa5 [chrome.dll] - process_util_win.cc:110
base::`anonymous namespace'::OnNoMemory()
0x608516d2 [chrome.dll] - allocator_shim.cc:136
malloc
0x608515b4 [chrome.dll] - generic_allocators.cc:16
generic_cpp_alloc
0x6085abe6 [chrome.dll] - xmemory:187]
std::allocator::allocate(unsigned int)
0x60862f37 [chrome.dll] - vector:751]
std::vector,std::allocator >,std::allocator,std::allocator > > >::reserve(unsigned int)
0x60862ee5 [chrome.dll] - vector:1297]
std::vector,std::allocator >,std::allocator,std::allocator > > >::_Reserve(unsigned int)
0x60862e50 [chrome.dll] - vector:991]
std::vector,std::allocator >,std::allocator,std::allocator > > >::push_back(std::basic_string,std::allocator > const &)
0x6088b5a6 [chrome.dll] - string_split.cc:29
base::SplitStringT,std::allocator > >
0x6088b4ba [chrome.dll] - string_split.cc:49
base::SplitString(std::basic_string,std::allocator > const &,char,std::vector,std::allocator >,std::allocator,std::allocator > > > *)
0x61669caf [chrome.dll] - template_url_service.cc:1252
TemplateURLService::CreateTemplateURLFromTemplateURLAndSyncData(Profile *,TemplateURL *,SyncData const &,std::vector > *)
0x6166befd [chrome.dll] - template_url_service.cc:1060
TemplateURLService::MergeDataAndStartSyncing(syncable::ModelType,std::vector > const &,scoped_ptr,scoped_ptr)
0x617b3599 [chrome.dll] - ui_data_type_controller.cc:161
browser_sync::UIDataTypeController::Associate()
0x617b3372 [chrome.dll] - ui_data_type_controller.cc:104
browser_sync::UIDataTypeController::StartAssociating(base::Callback const &)
0x61841f7d [chrome.dll] - model_association_manager.cc:446
browser_sync::ModelAssociationManager::StartAssociatingNextType()
0x61841209 [chrome.dll] - model_association_manager.cc:406
browser_sync::ModelAssociationManager::ModelLoadCallback(syncable::ModelType,SyncError)
0x61840fcd [chrome.dll] - bind_internal.h:938
base::internal::InvokeHelper<1,void,base::internal::RunnableAdapter,void (base::WeakPtr const &,syncable::ModelType const &,SyncError const &)>::MakeItSo(base::internal::RunnableAdapter,base::WeakPtr const &,syncable::ModelType const &,SyncError const &)
0x61841025 [chrome.dll] - bind_internal.h:1317
base::internal::Invoker<1,base::internal::BindState,void (browser_sync::ModelAssociationManager *,syncable::ModelType,SyncError),void (base::WeakPtr)>,void (browser_sync::ModelAssociationManager *,syncable::ModelType,SyncError)>::Run(base::internal::BindStateBase *,syncable::ModelType const &,SyncError const &)
0x617b3323 [chrome.dll] - ui_data_type_controller.cc:93
browser_sync::UIDataTypeController::OnModelLoaded()
0x617b1b22 [chrome.dll] - app_notification_data_type_controller.cc:39
browser_sync::AppNotificationDataTypeController::Observe(int,content::NotificationSource const &,content::NotificationDetails const &)
0x608f400d [chrome.dll] - notification_service_impl.cc:129
NotificationServiceImpl::Notify(int,content::NotificationSource const &,content::NotificationDetails const &)
0x6166b637 [chrome.dll] - template_url_service.cc:1491
TemplateURLService::NotifyLoaded()
0x6166bc25 [chrome.dll] - template_url_service.cc:803
TemplateURLService::OnWebDataServiceRequestDone(int,WDTypedResult const *)
0x60cd6a3f [chrome.dll] - web_data_service.cc:606
WebDataService::RequestCompleted(int)
0x608dcf02 [chrome.dll] - bind_internal.h:1254
base::internal::Invoker<2,base::internal::BindState,void (media::GpuVideoDecoder *,int),void (media::GpuVideoDecoder *,int)>,void (media::GpuVideoDecoder *,int)>::Run(base::internal::BindStateBase *)
0x60878029 [chrome.dll] - message_loop.cc:455
MessageLoop::RunTask(base::PendingTask const &)
0x60876f07 [chrome.dll] - message_loop.cc:643
MessageLoop::DoWork()
0x609ebfa3 [chrome.dll] - message_pump_win.cc:239
base::MessagePumpForUI::DoRunLoop()
0x60876a98 [chrome.dll] - message_loop.cc:409
MessageLoop::RunInternal()
0x60cc66b8 [chrome.dll] - message_loop.cc:759
MessageLoopForUI::RunWithDispatcher(base::MessagePumpDispatcher *)
0x60cc6414 [chrome.dll] - chrome_browser_main.cc:1912
ChromeBrowserMainParts::MainMessageLoopRun(int *)
0x60cc638a [chrome.dll] - browser_main_loop.cc:440
content::BrowserMainLoop::RunMainMessageLoopParts()
0x60cc62fc [chrome.dll] - browser_main_runner.cc:98
`anonymous namespace'::BrowserMainRunnerImpl::Run()
0x608e138e [chrome.dll] - browser_main.cc:21
BrowserMain(content::MainFunctionParams const &)
0x60867b08 [chrome.dll] - content_main_runner.cc:372
content::RunNamedProcessTypeMain(std::basic_string,std::allocator > const &,content::MainFunctionParams const &,content::ContentMainDelegate *)
0x60867a89 [chrome.dll] - content_main_runner.cc:627
content::ContentMainRunnerImpl::Run()
0x6085a2b7 [chrome.dll] - content_main.cc:35
content::ContentMain(HINSTANCE__ *,sandbox::SandboxInterfaceInfo *,content::ContentMainDelegate *)
0x6085a243 [chrome.dll] - chrome_main.cc:28
ChromeMain
0x0101627c [chrome.exe] - client_util.cc:423
MainDllLoader::Launch(HINSTANCE__ *,sandbox::SandboxInterfaceInfo *)
0x0101547b [chrome.exe] - chrome_exe_main_win.cc:31
RunChrome(HINSTANCE__ *)
0x010154e6 [chrome.exe] - chrome_exe_main_win.cc:47
wWinMain
0x0106d962 [chrome.exe] - crt0.c:275
__tmainCRTStartup
0x75ad3399 [kernel32.dll] + 0x00013399]
BaseThreadInitThunk
0x77e19ef1 [ntdll.dll] + 0x00039ef1]
__RtlUserThreadStart
0x77e19ec4 [ntdll.dll] + 0x00039ec4]
_RtlUserThreadStart
kaiwang, can you debug this and see what's actually going wrong here?
I can see at least one bad thing about this code but I'd be kind of surprised if it caused this crash.
Actually, this could be really bad.
SplitString() doesn't actually clear the destination vector. As a result I think this code could blow up exponentially: every time the same TemplateURL is run through CreateTemplateURLFromTemplateURLAndSyncData(), the input encoding list doubles in size.
As part of the fix I probably need to run a normalization pass to crunch down anyone's data that has blown up this way.
Steve, is there a way we can quickly see how bad this is in the wild, by looking ion the sync servers for users whose TemplateURL input_encodings have too many entries (or are just too many characters)?
cc+Tim who would know more about querying the server-side data. Tim - is there a query we can run quickly to see how pressing this new issue is?
kaiwang - Do you mind emailing me your "Web Data" file from the affected profile so I can see which search engine entry might have been prone to this? (perhaps any entry with some valid Encoding list, but I haven't looked at the code in detail yet)
cc -Tim +Raz
Raz: First off, this issue is orthogonal to the DSP issues we've been tacking in the past two weeks. Secondly, let us know when you can help us with this query.
Issue 134676 has been merged into this issue.
Sent my "Web Data" file to Steve.
Inspecting the Web Data file with SQL Lite database browser, it appears that the encoding values are being duplicated as you expected. See the attached file for what the encodings might look like.
I also took a look at a random earlier Web Data file (probably from around M21 or so, but uncertain) and it appears that these values are OK (they show maybe one or two encoding types in the string, with no repeats).
So there appears to be two issues here:
1. The issue you mentioned, Peter, where SplitString isn't clearing the vector.
2. A more recent changed has caused this code to put TemplateURLs through that CreateTemplateURLFromTemplateURLAndSyncData method over and over again.
Using SplitString() incorrectly here was caused by r131224 which is part of M20. I don't think there's another change that makes the CreateXXX method get called more frequently, I think it just takes a little bit of time for this to blow up.
We should not ship M20 with this bug as it will cause people's startup times to balloon and then Chrome to begin crashing constantly. Fixing SplitString() to clear its vector is safe (I audited all callers) and will at least prevent the problem from worsening. I can check that in now and request merges. Correcting the existing damage will be a little trickier as I need to figure out precisely where to do the correction.
Peter, just so you know, I just pushed M20 to stable. Let's find out how bad is this bug in M20. We should certainly get it merged when the fix is properly verified in canary.
By the time you see this bug appear as a crash, it is way late to be addressing it.
I would halt the push for this.
If the client tries to commit an entity that's larger than 10k, the server will return success but *not actually commit the item to durable storage*.
We are seeing over 15 / second commit attempts of large search engine entities.
I think we have some varz's counting occurrences. Standby..
The second sentence (15 / second) in my comment 11 is incorrect. Disregard. New numbers coming.
I think that will help to reduce the growth of the bug here from "exponential" to "first exponential, then linear", but it won't halt the bug entirely (and the "linear" phase won't kick in until we have close to 10K of encodings, which means this will still grow fairly quickly).
Still waiting on try results for the first of two fixes here, but I'm busy coding the second as we speak.
Yeah, I agree with Peter. I didn't mean to mitigate the concern. Dropping commits causes a steaming mess of corruption in addition to the OOM issue, and there is no great way to recover this corruption server side. And having clients out of sync like that will lead to unrecoverable errors / weird crashes client side.
It looks like .3% of search engine commits server side are > 10k in size. This is *a lot*. The scale behind that ratio is quite large.
Can we drill down to figure out what fraction of our existing entries have duplicate input_encoding values?
Also, if we have historical data, it'd be nice to know what the fraction has looked like over time. If this has ramped up from a much smaller value over the last two months, that's another data point that suggests that this bug may be to blame.
When I look at the crash rates in, M20 isn't significantly affected. Majority of crashes were observed in M21.
Version Crash counts
22.0.1187.0 46
22.0.1186.0 132
22.0.1185.0 76
22.0.1184.0 1
22.0.1183.1 48
22.0.1183.0 55
22.0.1182.0 80
22.0.1181.0 18
21.0.1180.4 51
21.0.1180.2 2
21.0.1180.11 26
21.0.1180.0 23
21.0.1179.0 5
21.0.1178.0 7
21.0.1171.0 2
21.0.1170.0 5
20.0.1132.43 2
20.0.1132.42 1
20.0.1132.39 3
actually this also implies a huge jump from 1180 to 1182 considering anything past 1180 has only been on canary vs the 1180 numbers that are from dev channel. so there are two points where it gets bad. 1179-1180 and 1181-1182
oh and clearly something made it even worse between 1185 and 1186
Yes, some change in M21 might have tickled pkasting@ code more compared to M20.
I think you're both mistaken in assuming that higher rates for a version mean something changed in that version.
This is a bug that, once the client is new enough to trigger it (anything r131224 and newer), will progressively worsen on the client, little by little, each time the client syncs. Thus clients which are used the most frequently are most likely to show problems here. These are also the clients likely to be on the newest versions.
(Note for example that people that people who are still using M20, i.e. beta users, will have had significantly less time using a buggy build, and probably also less usage per unit time, than Dev/Canary users.)
This graph () shows the increase in frequency of large search engine commits on our stable server (which means beta channel and stable channel chrome only) around June 15th. This graph () shows the same on our "dev" server instance, which means dev channel / canary chrome. The rate is certainly higher on dev (mostly m21), but it is absolutely happening on m20 -- the 0.3% I referred to above was from the 'stable' instance alone.
(I can't yet explain the drop / bounce on the stable graph a few days after the initial spike, before spiking back up...)
Was this bug present in m20 when m20 was in dev channel? Is it possible dev channel clients are more likely to hit the issue / have more search engines and hence may trigger this more easily?
Peter's change is part of M20 even before it branched. According to the stats in comment #17, there isn't any crashes on M20 dev.
This is an issue for sure in M20 but based on the current data it doesn't seem to be alarming. So I'm not going to rollback Stable at this moment. However, if we have a clean and simple patch, we can try spinning a build to stable either on Thursday or Friday.
First fix landed in r144323.
This should be safe to merge to M20 and M21 and will prevent the problem from getting any worse.
The following revision refers to this bug:
------------------------------------------------------------------------
r144323 | pkasting@chromium.org | Tue Jun 26 16:37:02 PDT 2012
Changed paths:
M
M
M
M
M
Make SplitString() and variants clear their outparam vector. (Note that SplitStringIntoKeyValues() and SplitStringIntoKeyValuePairs() already did this.) This is more in line with what other APIs that take outparams do.
I audited all callers, and the only ones affected by this are the buggy ones in bug 134695 that already wanted this behavior, as well as the couple of places in this CL that were manually calling clear() and now don't have to.
BUG= 134695
TEST=none
TBR=brettw,phajdan.jr
Review URL:
------------------------------------------------------------------------
Thanks Peter for the fix! Since your patch is touching the file in /base, I'm little nervous on how this will affect other parts of chrome which calls it. Is there a simpler patch that will affect only your code and not the base function?
If there isn't, then let's get this patch baked in tomorrow's canary and see if it fixes or regresses any other issues.
Yes, we could merge a one-liner instead to M20 that just touched template_url_service.cc. I wouldn't want to do that for M21, as I don't want the trunk and the branch to get too out of sync, but I can put this together for M20 later tonight.
Meanwhile, fix part 2 is at .
For clarity, do you want me to just go ahead and land the one-liner fix on M20, or do you want to see it beforehand?
Basically, it's the addition of this line:
data.input_encodings.clear();
...before the SplitString() call in TemplateURLService::CreateTemplateURLFromTemplateURLAndSyncData().
Second fix in in r144387.
I would like to merge this fix to M21 but leave it out of M20.
The following revision refers to this bug:
------------------------------------------------------------------------
r144387 | pkasting@chromium.org | Tue Jun 26 20:36:07 PDT 2012
Changed paths:
M
M
M
M
M
De-dupe input encodings in TemplateURLs.
BUG= 134695
TEST=none
Review URL:
------------------------------------------------------------------------
r144390 fixed a compile error due to a near-simultaneous landing with another sync change (r144385); that shouldn't be merged back.
The following revision refers to this bug:
------------------------------------------------------------------------
r144390 | pkasting@chromium.org | Tue Jun 26 20:45:52 PDT 2012
Changed paths:
M
Fix compile error due to interaction with akalin's r144385.
BUG= 134695
TEST=none
------------------------------------------------------------------------
Yes, that's the change that I expected in M20. Could you please check out 1132_43 branch (yes, its _43) and add your change directly in the branch? Thanks!
The following revision refers to this bug:
------------------------------------------------------------------------
r144471 | pkasting@chromium.org | Wed Jun 27 10:13:23 PDT 2012
Changed paths:
M
Minimal mitigation for bug 134695 for the M20 branch.
BUG= 134695
TEST=none
------------------------------------------------------------------------
This has now been mitigated for M20. I'd still like to merge the real fixes to M21.
we have a little bit of time before beta2 cut for m21, can we let it soak in canary a bit?
Sure.
Hey Peter - is there anything we can do to validate that this is improving things in canary? Perhaps play with sync for a while and ensure that we're seeing healthy Web Data contents?
When do you think we'll see Tim's graph of giant Search Engine commits calming down?
I'd look at the number of >10k submits coming from users who have this patch versus ones that don't. If those 10k entries are caused by this bug, then this patch ought to eliminate them immediately. It should also subsequently eliminate them from those users' other clients once those clients sync.
The following revision refers to this bug:
------------------------------------------------------------------------
r144962 | dharani@chromium.org | Fri Jun 29 12:44:49 PDT 2012
Changed paths:
M
Merge 144471 - Minimal mitigation for bug 134695 for the M20 branch.
BUG= 134695
TEST=none
TBR=pkasting@chromium.org
Review URL:
------------------------------------------------------------------------
will review/approve for m21 on july 9 since there's nothing going out before then.
Update on some numbers:
- Not seeing this OOM crash anymore on canary channel (which confirms the fix)
- For the current dev channel (21.0.1180.15), this OOM is more than 6% of browser crashes.
- It is low incidence on the stable channel. Only saw a handful of these and they were in in 20.0.1132.43
thanks eric!! ok let's go ahead and land it. 144387 is the one u want for m21, right?
Merged to 1180.
The following revision refers to this bug:
------------------------------------------------------------------------
r145762 | pkasting@chromium.org | Mon Jul 09 15:30:28 PDT 2012
Changed paths:
M
M
M
M
M
Merge 144387 - De-dupe input encodings in TemplateURLs.
BUG= 134695
TEST=none
Review URL:
TBR=pkasting@chromium.org
------------------------------------------------------------------------
Issue 136211 has been merged into this issue.
This issue has been closed for some time. No one will pay attention to new comments.
If you are seeing this bug or have new data, please click New Issue to start a new bug. | https://bugs.chromium.org/p/chromium/issues/detail?id=134695 | CC-MAIN-2018-51 | refinedweb | 2,848 | 59.09 |
The NTFS File System
Windows 2000 comes with a new version of NTFS. This newest version of NTFS provides performance, reliability, and functionality not found in FAT. Some new features in Windows 2000, such as the Active Directory™ directory service and the storage features based on reparse points, are only available on volumes formatted with NTFS.
NTFS also includes security features required for file servers and high-end personal computers in a corporate environment, and data access control and ownership privileges important for data integrity.
Multiple Data Streams
NTFS supports multiple data streams, in which the stream name identifies a new data attribute on the file. A file can be associated with more than one application at a time, such as Microsoft® Word and Microsoft® WordPad. For instance, a file structure like the following illustrates file association, but not multiple files:
program:source_file
:doc_file
:object_file
:executable_file
You can use the Win32 application programming interface (API) CreateFile to create an alternate data stream. Or, at the command prompt, you can type commands such as:
echo text>program:source_file
more <program:source_file
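The same `filename:streamname` convention applies when a program opens a stream. The following Python sketch illustrates the idea (the helper names are hypothetical, and actually writing a named stream only works on an NTFS volume under Windows; on other file systems the colon is simply an invalid path character):

```python
import os

def ads_path(filename, stream):
    """Build an NTFS alternate-data-stream path of the form 'file:stream'."""
    return f"{filename}:{stream}"

def write_stream(filename, stream, text):
    # On NTFS, opening 'file:stream' creates or opens the named stream
    # attached to the file, not a separate file.
    with open(ads_path(filename, stream), "w") as f:
        f.write(text)

if __name__ == "__main__" and os.name == "nt":
    # Only meaningful on Windows/NTFS.
    write_stream("program", "source_file", "hello from a named stream")
    with open(ads_path("program", "source_file")) as f:
        print(f.read())
```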
Caution
Because NTFS is not supported on floppy disks, when you copy an NTFS file to a floppy disk, data streams and other attributes not supported by FAT are lost without warning.
Reparse Points
Reparse points are new file system objects in the version of NTFS included with Windows 2000. Reparse points have a definable attribute containing user-controlled data and are used to extend functionality in the input/output (I/O) subsystem.
For more information about reparse points, see the Platform Software Development Kit (SDK) link on the Web Resources page at .
Change Journal
The change journal is used by NTFS to provide a persistent log of all changes made to files on the volume. For each volume, NTFS uses the change journal to track information about added, deleted, and modified files. The change journal is much more efficient than time stamps or file notifications for determining changes in a given namespace.
The change journal is implemented as a sparse stream in which only a small active range uses any disk allocation. The active range initially begins at offset 0 in the stream and moves monotonically forward. The unique sequence number (USN) of a particular record represents its virtual offset in the stream. As the active range moves forward through the stream, earlier records are deallocated and become unavailable. The size of the active range in a sparse file can be adjusted. For more information about the change journal and sparse files, see the Platform Software Development Kit (SDK) link on the Web Resources page at .
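The bookkeeping described above can be modeled in a few lines. This is a toy sketch of the concept only, not the real USN journal API: each record's USN is its virtual offset in the stream, the active range only moves forward, and trimming the front of the range deallocates earlier records for good:

```python
class ChangeJournal:
    """Toy model of an NTFS-style change journal: a forward-moving
    active range of records, each identified by its USN."""

    def __init__(self):
        self.next_usn = 0   # virtual offset where the next record lands
        self.records = {}   # usn -> change description (the active range)

    def log(self, filename, reason):
        usn = self.next_usn
        self.records[usn] = (filename, reason)
        self.next_usn += 1  # real journals advance by the record's size
        return usn

    def trim(self, first_valid_usn):
        # Deallocate records before the new start of the active range;
        # their USNs become permanently unavailable.
        for usn in [u for u in self.records if u < first_valid_usn]:
            del self.records[usn]

    def read(self, usn):
        if usn not in self.records:
            raise KeyError(f"USN {usn} is outside the active range")
        return self.records[usn]
```

For example, after `j.log("a.txt", "added")` and `j.log("b.txt", "modified")`, calling `j.trim(1)` drops the first record while the second remains readable.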
Encryption
File and directory-level encryption is implemented in the version of NTFS included with Windows 2000 for enhanced security in NTFS volumes. Windows 2000 uses Encrypting File System (EFS) to store data in encrypted form, which provides security when the storage media are removed from a system running Windows 2000. For more information about EFS, see the Microsoft® Windows® 2000 Server Resource Kit Distributed Systems Guide.
Sparse File Support
Sparse files allow programs to create very large files, but to consume disk space only as needed. A sparse file is a file with an attribute that causes the I/O subsystem to allocate the file's meaningful (nonzero) data. All nonzero data is allocated on disk, whereas all nonmeaningful data (large strings of data composed of zeros) is not. When a sparse file is read, allocated data is returned as it was stored, and nonallocated data is returned, by default, as zeros in accordance with the C2 security requirement specification.
NTFS includes full sparse file support for both compressed and uncompressed files. NTFS handles read operations on sparse files by returning allocated data and sparse data. It is possible to read a sparse file as allocated data and a range of data without having to retrieve the entire data set, although, by default, NTFS returns the entire data set.
You can set a user-controlled file system attribute to take advantage of the sparse file function in NTFS. With the sparse file attribute set, the file system can deallocate data from anywhere in the file and, when an application calls, yield the zero data by range instead of storing and returning the actual data. File system APIs allow the file to be copied or backed up as actual bits and sparse stream ranges. The net result is efficient file system storage and access. Figure 3.4 shows how data is stored with and without the sparse file attribute set.
Figure 3.4 Sparse Data Storage
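The allocation behavior can be observed with a short experiment. The sketch below uses POSIX semantics, where seeking past end-of-file and writing leaves a "hole" of zeros; most file systems, including NTFS when the sparse attribute is set, allocate only the written data, while reads of the hole return zeros (`st_blocks` is a POSIX field and is not meaningful on every platform):

```python
import os
import tempfile

def make_sparse(path, hole_size=1 << 20):
    """Write one byte after a hole of hole_size unwritten bytes."""
    with open(path, "wb") as f:
        f.seek(hole_size)  # skip over a region that is never written
        f.write(b"x")      # only this byte needs real allocation

def sizes(path):
    st = os.stat(path)
    logical = st.st_size                           # hole + 1 byte
    allocated = getattr(st, "st_blocks", 0) * 512  # actual disk allocation
    return logical, allocated

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
    make_sparse(path)
    logical, allocated = sizes(path)
    print(f"logical size: {logical}, allocated: {allocated}")
    # Reading the hole returns zeros, per the sparse-file contract.
    with open(path, "rb") as f:
        assert f.read(4) == b"\x00" * 4
```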
Disk Quotas
Disk quotas are a new feature in NTFS that provide more precise control of network-based storage. Disk quotas are implemented on a per-volume basis and enable both hard and soft storage limits to be implemented on a per-user basis. For more information about disk quotas, see "Data Storage and Management" in this book.
The introduction of distributed file system (Dfs), NTFS directory junctions, and volume mount points also creates situations where logical directories do not have to correspond to the same physical volume. Available disk space is based on user context, and the space reported for a volume is not necessarily representative of the space available to the user. For this reason, do not rely on space queries to make assumptions about the amount of available disk space in directories other than the current one. For more information about Dfs, see the Distributed Systems Guide. For more information about volume mount points, see "Volume Mount Points" later in this chapter.
Distributed Link-Tracking
Windows 2000 provides a distributed link-tracking service that enables client applications to track link sources that have been moved locally or within a domain. Clients that subscribe to this link-tracking service can maintain the integrity of their references because the objects referenced can be moved transparently. Files managed by NTFS can be referenced by a unique object identifier. Link tracking stores a file's object identifier as part of its tracking information.
The distributed link-tracking service tracks shell shortcuts and OLE links within NTFS volumes on computers running Windows 2000. For example, if a shell shortcut is created to a text document, distributed link-tracking allows the shortcut to remain correct, even if the target file moves to a new drive or computer system. Similarly, in a Microsoft® Word document that contains an OLE link to a Microsoft® Excel spreadsheet, the link remains correct even if the Excel file moves to a new drive or computer system.
If a link is made to a file on a volume formatted with the version of NTFS included with Windows 2000, and the file is moved to any other volume with the same version of NTFS within the same domain, the file is found by the tracking service, subject to time considerations. Additionally, if the file is moved outside the domain or within a workgroup, it is likely to be found.
Converting to Windows 2000 File Systems
The on-disk format for NTFS has been enhanced in Windows 2000 to enable new functionality. The upgrade to the new on-disk format occurs when Windows 2000 mounts an existing NTFS volume. The upgrade is quick and automatic; the conversion time is independent of volume size. Note that FAT volumes can be converted to NTFS format at any time using the Convert.exe utility.
Important
Performance of volumes that have been converted from FAT is not as high as volumes that were originally formatted with NTFS.
Multiple Booting of Windows NT and Windows 2000
Your ability to access your NTFS volumes when you multiple-boot Windows NT and Windows 2000 depends on which version you are using. (Redirected clients using NTFS volumes on file and print servers are not affected.)
Windows NT Compatibility with the Version of NTFS Included with Windows 2000
When a Windows 2000 volume is mounted on a system running Windows NT 4.0 Service Pack 4, most features of the version of NTFS included with Windows 2000 are not available. However, most read and write operations are permitted if they do not make use of any new NTFS features. Features affected by this configuration include the following:
Reparse points. Windows NT cannot use any features based on reparse points, such as Remote Storage and volume mount points.
Disk quotas. When running Windows NT, Windows 2000 disk quotas are ignored. This allows users to allocate more disk space than their quotas permit.
Cleanup Operations on Windows NT Volumes
Because files on volumes formatted with the version of NTFS included with Windows 2000 can be read and written to by Windows NT, Windows 2000 may need to perform cleanup operations to ensure the consistency of the data structures of a volume after it was mounted on a computer that is running Windows NT. Features affected by cleanup operations are explained below.
Disk quotas. If disk quotas are turned off, Windows 2000 performs no cleanup operations. If disk quotas are turned on, Windows 2000 cleans up the quota information.
If a user exceeds the disk quota while the NTFS volume is mounted by a Windows NT 4.0 system, all further disk allocations of data by that user will fail. The user can still read and write data to any existing file, but will not be able to increase the size of a file. However, the user can delete and shrink files. When the user gets below the assigned disk quota, he or she can resume disk allocations of data. The same behavior occurs when a system is upgraded from a Windows NT system to a Windows 2000 system with quotas enforced.
Reparse points. Because files that have reparse points associated with them cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary in Windows 2000.
Encryption. Because encrypted files cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary.
Sparse files. Because sparse files cannot be accessed by computers that are running Windows NT 4.0 or earlier, no cleanup operations are necessary.
Object identifiers. Windows 2000 maintains two references to the object identifier. One is on the file; the other is in the volume-wide object identifier index. If you delete a file with an object identifier on it, Windows 2000 must scan and clean up the leftover entry in the index.
Change journal. Computers that are running Windows NT 4.0 or earlier do not log file changes in the change journal. When Windows 2000 starts, the change journals on volumes accessed by Windows NT are reset to indicate that the journal history is incomplete. Applications that use the change journal must have the ability to accept incomplete journals.
Structure of an NTFS Volume
Like FAT, NTFS uses clusters as the fundamental unit of disk allocation. In the Disk Management snap-in, you can specify a cluster size of up to 4 KB. If you type format at the command prompt to format your NTFS volume but do not specify an allocation unit size using the /A:<size> switch, the values in Table 3.4 are used.
Table 3.4 Default Cluster Sizes for NTFS
Note
Windows 2000, like Windows NT 3.51 and Windows NT 4.0, supports file compression. Because file compression is not supported on cluster sizes above 4 KB, the default NTFS cluster size for Windows 2000 never exceeds 4 KB. For more information about NTFS compression, see "File and Folder Compression" later in this chapter.
Boot Sector
The first information found on an NTFS volume is the boot sector. The boot sector starts at sector 0 and can be up to 16 sectors long. It consists of two structures:
The BIOS parameter block, which contains information on the volume layout and file system structures.
Code that describes how to find and load the startup files for the operating system being loaded. For Windows 2000, this code loads the file Ntldr. For more information about the boot sector, see "Disk Concepts and Troubleshooting" in this book.
Master File Table and Metadata
When a volume is formatted with NTFS, a Master File Table (MFT) file and other pieces of metadata are created. Metadata are the files NTFS uses to implement the file system structure. NTFS reserves the first 16 records of the MFT for metadata files.
Note
The data segment locations for both $Mft and $MftMirr are recorded in the boot sector. If the first MFT record is corrupted, NTFS reads the second record to find the MFT mirror file. A duplicate of the boot sector is located at the end of the volume.
Table 3.5 lists and briefly describes the metadata stored in the MFT.
Table 3.5 Metadata Stored in the Master File Table
The remaining records of the MFT contain the file and directory records for each file and directory on the volume.
NTFS creates a file record for each file and a directory record for each directory created on an NTFS volume. The MFT includes a separate file record for the MFT itself. These file and directory records are stored on the MFT. The attributes of the file are written to the allocated space in the MFT. Besides file attributes, each file record contains information about the position of the file record in the MFT.
Each file usually uses one file record. However, if a file has a large number of attributes or becomes highly fragmented, it may need more than one file record. If this is the case, the first record for the file, called the base file record, stores the location of the other file records required by the file. Small files and directories (typically 1,500 bytes or smaller) are entirely contained within the file's MFT record.
Directory records contain index information. Small directories might reside entirely within the MFT structure, while large directories are organized into B-tree structures and have records with pointers to external clusters that contain directory entries that could not be contained within the MFT structure.
NTFS File Attributes
Every allocated sector on an NTFS volume belongs to a file. Even the file system metadata is part of a file. NTFS views each file (or folder) as a set of file attributes. Elements such as the file's time stamp are always resident attributes. When the information for a file is too large to fit in its MFT file record, some of the file attributes are nonresident. Nonresident attributes are allocated one or more clusters of disk space and stored as an alternate data stream in the volume. NTFS creates the Attribute List attribute to describe the location of both resident and nonresident attribute records.
Table 3.6 lists the file attributes defined by NTFS, although other file attributes might be defined in the future.
Table 3.6 NTFS File Attribute Types
MS-DOS-Readable File Names on NTFS Volumes
By default, Windows NT and Windows 2000 generate MS-DOS-readable file names on all NTFS volumes. To improve performance on volumes with many long, similar names, you can change the default value of the registry entry NtfsDisable8dot3NameCreation (in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem) to 1.
Keys are partitioned among the reducers using a partition function which is
specified in the aptly named Partitioner class. By default, Hadoop will
hash the key (and probably mod the hash by the number of reducers) to
determine which reducer to send your key to (I say probably because I
haven't looked at the actual code). What this means for you is that if you
set a custom bit in the key field, keys with different bits are not
guaranteed to go to the same reducer even if the rest of the key is the same.
For example
Key1 = (DataX+BitA) --> Reducer1
Key2 = (DataX+BitB) --> Reducer2
What you want is for any key with the same Data to go to the same reducer
regardless of the bit value. To do this you need to write your own
partitioner class and set your job to use that class using
job.setPartitionerClass(MyCustomPartitioner.class)
Your custom partitioner will need to break apart your key and only hash on
the DataX part of it.
The partitioner class is really easy to override and will look something
like this:
public class MyCustomPartitioner extends Partitioner<Key, Value> {
    @Override
    public int getPartition(Key key, Value value, int numPartitions) {
        // split my key so that the bit flag is removed,
        // then hash the modified key and mod it by numPartitions
        return (stripBitFlag(key).hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
Of course Key and Value would be whatever Key and Value class you're using.
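To make the grouping behavior concrete, here is a self-contained sketch outside Hadoop, using plain String keys and a hypothetical convention that the tag is the key's last character:

```java
public class TagPartitionerDemo {
    // Partition on the data portion only, ignoring the trailing tag
    // character, so "DataX"+"A" and "DataX"+"B" land on the same partition.
    static int getPartition(String taggedKey, int numPartitions) {
        String data = taggedKey.substring(0, taggedKey.length() - 1); // strip tag
        return (data.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Both keys reduce to "DataX", so they map to the same partition.
        System.out.println(getPartition("DataXA", 10) == getPartition("DataXB", 10)); // true
    }
}
```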
Hope that helps.
~Ed
On Mon, Oct 18, 2010 at 8:58 PM, Brad Tofel <brad@archive.org> wrote:
> Whoops, just re-read your message, and see you may be asking about
> targeting a reduce callback function, not a reduce task..
>
> If that's the case, I'm not sure I understand what your "bit/tag" is for,
> and what you're trying to do with it. Can you provide a concrete example
> (not necessarily code) of some keys which need to group together?
>
> Is there a way to embed the "bit" within the value, so keys are always
> common?
>
> If you really need to fake out the system so different keys arrive in the
> same reduce, you might be able to do it with a combination of:
>
> org.apache.hadoop.mapreduce.Job
>
> .setSortComparatorClass()
> .setGroupingComparatorClass()
> .setPartitionerClass()
>
> Brad
>
>
> On 10/18/2010 05:41 PM, Brad Tofel wrote:
>
>> The "Partitioner" implementation used with your job should define which
>> reduce target receives a given map output key.
>>
>> I don't know if an existing Partitioner implementation exists which meets
>> your needs, but it's not a very complex interface to develop, if nothing
>> existing works for you.
>>
>> Brad
>>
>> On 10/18/2010 04:43 PM, Shi Yu wrote:
>>
>>> How many tags do you have? If you have a small number of tags, you'd better
>>> create a Vector class to hold those tags. And define a sum function to
>>> increment the values of the tags. Then the value class should be your new
>>> Vector class. That's better and more decent than the TextPair approach.
>>>
>>> Shi
>>>
>>> On 2010-10-18 5:19, Matthew John wrote:
>>>
>>>> Hi all,
>>>>
>>>> I had a small doubt regarding the reduce module. What I understand is
>>>> that after the shuffle/sort phase, all the records with the same key
>>>> value go into a reduce function. If that's the case, what is the
>>>> attribute of the Writable key which ensures that all the keys go to
>>>> the same reduce?
>>>>
>>>> I am working on a reduce-side join where I need to tag all the keys
>>>> with a bit which might vary, but I still want all those records to go
>>>> into the same reduce. In Hadoop: The Definitive Guide, pg. 235, they
>>>> are using TextPair for the key. But I don't understand how keys with
>>>> different tag information go into the same reduce.
>>>>
>>>> Matthew
>>>>
>>>>
>>>
>>>
>>
> | http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201010.mbox/%3CAANLkTinvbkxKdcxt7T6-REBuHj6av0DvG1XZ_7=pb++8@mail.gmail.com%3E | CC-MAIN-2017-39 | refinedweb | 628 | 66.88 |