SizeAttribute Class
Specifies the maximum number of characters that can be stored in a column which is created to store the data of a property or field.
Namespace: DevExpress.Xpo
Assembly: DevExpress.Xpo.v20.2.dll
Declaration
Remarks
The current version of XPO supports size definitions only for persistent properties and fields of string data types. If the SizeAttribute isn't applied to a member, the size of the database column in which the member's data is stored is specified by the SizeAttribute.DefaultStringMappingFieldSize value.
Note that the SizeAttribute affects only the creation of the database column. If the database column already exists, its size is not changed.
Examples
The following example applies the SizeAttribute to the FullName and Background properties. The database column that the FullName property is mapped to can store a maximum of 128 characters, while the column that the Background property is mapped to is given an unlimited size.
```csharp
using DevExpress.Xpo;

class Customer : XPObject {
    private string fullName;
    private string background;

    [Size(128)]
    public string FullName {
        get { return fullName; }
        set { SetPropertyValue<string>(nameof(FullName), ref fullName, value); }
    }

    [Size(SizeAttribute.Unlimited)]
    public string Background {
        get { return background; }
        set { SetPropertyValue<string>(nameof(Background), ref background, value); }
    }
}
```
04/22/2004: Java; Caching Short Objects
While looking at the POI source code, I noticed that a lot of Short objects were being created. So I looked around for a small stand-alone class that would allow me to cache Short objects. I did see some pages devoted to sparse arrays and matrices (notably the COLT package), however they were too large for my purposes.
I wrote the shortShortCache class shown below. Its usage should be pretty obvious, but please contact me if you have any difficulty using the class or suggestions for improvement.
```java
/**
 * Provides a cache for Short objects. This class should be used when the same short values
 * are used over and over, to avoid the cost of continuously creating the same Short objects.
 *
 * Since the get() method creates Short objects and adds them to the cache, the put()
 * method never needs to be called outside this class.
 */
public class shortShortCache {

    /** This is how the cache is used. */
    public static void main(String[] args) {
        shortShortCache ssm = new shortShortCache();
        short numEntries = (short) 2000;
        short start = (short) 23454;
        short middle = (short) (start + (numEntries / 2));
        short end = (short) (start + numEntries);
        for (short i = start; i <= end; i++) {
            ssm.put(i);
        }
        // is the first short cached?
        System.out.println(start + ": " + ssm.get(start));
        // is the middle short cached?
        System.out.println(middle + ": " + ssm.get(middle));
        // is the last short cached?
        System.out.println(end + ": " + ssm.get(end));
        System.out.println("Done.");
    }

    /** The initial size of the cache. */
    private int initialSize = 500;

    /** How much to grow the cache when its capacity is reached. */
    private int increment = 500;

    /** The size of the cache, which is not the same as the number of entries in the cache. */
    private int currentCapacity = 0;

    /** The number of entries in the cache. */
    protected int distinct = 0;

    /** The maximum number of entries, so that the cache doesn't grow unbounded. */
    protected int maxEntries = 3000;

    /** The cached short values. */
    private short table[];

    /** The cached Short objects. */
    private Short values[];

    /** A no-args constructor which uses all of the defaults. */
    public shortShortCache() {
        super();
        clear();
    }

    /** A constructor that lets the user set the control variables. */
    public shortShortCache(final int _initialSize, final int _increment, final int _maxEntries) {
        super();
        this.initialSize = _initialSize;
        this.increment = _increment;
        // we quietly handle the error of a _maxEntries parameter less than the _initialSize parameter.
        if (_maxEntries < _initialSize) {
            this.maxEntries = _initialSize;
        } else if (_maxEntries > Short.MAX_VALUE) {
            this.maxEntries = Short.MAX_VALUE;
        } else {
            this.maxEntries = _maxEntries;
        }
        clear();
    }

    /** Create and/or clear the cache. */
    public void clear() {
        this.table = new short[this.initialSize];
        this.values = new Short[this.initialSize];
        this.currentCapacity = this.initialSize;
        this.distinct = 0;
    }

    /**
     * Returns the value to which this map maps the specified key. If the short value
     * is not in the cache, then create a Short object automatically.
     */
    public Short get(short key) {
        Short rv = null;
        for (int i = 0; i < this.distinct; i++) {
            if (this.table[i] == key) {
                rv = this.values[i];
                break;
            }
        }
        // If the key is not in the cache, then add it to the cache.
        if (rv == null) {
            rv = new Short(key);
            if (this.currentCapacity < this.maxEntries) {
                put(key);
            }
        }
        return rv;
    }

    /**
     * Add a mapping from a short to a Short object.
     *
     * If the size of the cache is too small, then expand it. If the size of the cache
     * is more than the maxEntries, then return. The get() method automatically
     * creates a Short object for any short values not in the cache.
     */
    public void put(short key) {
        if (this.currentCapacity < this.maxEntries) {
            if (this.distinct == this.currentCapacity) {
                int newCapacity = this.currentCapacity + this.increment;
                // store the current cache.
                short oldTable[] = this.table;
                Short oldValues[] = this.values;
                // create new arrays.
                short newTable[] = new short[newCapacity];
                Short newValues[] = new Short[newCapacity];
                this.table = newTable;
                this.values = newValues;
                // move info from old table to new table.
                for (int i = this.currentCapacity; i-- > 0;) {
                    newTable[i] = oldTable[i];
                    newValues[i] = oldValues[i];
                }
                this.currentCapacity = newCapacity;
            }
            this.table[this.distinct] = key;
            this.values[this.distinct] = new Short(key);
            this.distinct++;
        }
    }

    /** Returns the number of key-value mappings in this map. */
    public int size() {
        return this.distinct;
    }

    /** Returns true if this map contains a mapping for the specified key. */
    public boolean containsKey(short key) {
        boolean rv = false;
        for (int i = 0; i < this.distinct; i++) {
            if (this.table[i] == key) {
                rv = true;
                break;
            }
        }
        return rv;
    }

    /** Returns true if this map maps one or more keys to the specified value. */
    public boolean containsValue(Short value) {
        boolean rv = false;
        if (value != null) {
            for (int i = 0; i < this.distinct; i++) {
                if (this.values[i].equals(value)) {
                    rv = true;
                    break;
                }
            }
        }
        return rv;
    }

    /** Returns true if this map contains no key-value mappings. */
    public boolean isEmpty() {
        return this.distinct == 0;
    }

    /**
     * From:
     *
     * Java's object cloning mechanism can allow an attacker to manufacture new
     * instances of classes that you define, without executing any of the class's
     * constructors. Even if your class is not cloneable, the attacker can define a
     * subclass of your class, make the subclass implement java.lang.Cloneable,
     * and then create new instances of your class by copying the memory images
     * of existing objects. By defining this clone method, you will prevent such attacks.
     */
    public final Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException();
    }

    /** List all short values in the cache. */
    public String toString() {
        StringBuffer sb = new StringBuffer();
        sb.append("shortShortCache[" + this.distinct + "]: ");
        for (int i = 0; i < this.distinct; i++) {
            sb.append(this.table[i] + ", ");
        }
        return sb.toString();
    }
}
```
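With hindsight: Java 5, released a few months after this post, added Short.valueOf(short) and autoboxing, and the JDK itself caches Short instances for values in the -128..127 range. A hand-rolled cache like the one above is therefore mainly useful for values outside that range. A quick sketch of the built-in behavior (modern Java, not part of the original post):

```java
public class ShortCacheDemo {
    public static void main(String[] args) {
        // Values in -128..127 come from the JDK's internal cache,
        // so valueOf returns the same instance each time.
        Short a = Short.valueOf((short) 100);
        Short b = Short.valueOf((short) 100);
        System.out.println(a == b); // true

        // Outside that range the JDK makes no caching guarantee,
        // which is where a custom cache could still help.
        Short c = Short.valueOf((short) 1000);
        Short d = Short.valueOf((short) 1000);
        System.out.println(c.equals(d)); // true, but c == d is not guaranteed
    }
}
```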
Models and Views in Qt Quick Ultralite
Applications frequently need to format data and display it, and Qt Quick Ultralite provides models, views, and delegates for this purpose. To visualize data, bind the view's `model` property to a model and its `delegate` property to a component or another compatible type.
Displaying Data with Views
Views are containers for collections of items. They are feature-rich and can be customized to meet style or behavior requirements.
A set of standard views is provided in the basic set of Qt Quick graphical types:
- Repeater - creates items for each data entry without a predetermined layout
- ListView - arranges items in a horizontal or vertical list
These types have properties and behaviors exclusive to each type. See their respective documentation for more information.
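For instance, a minimal Repeater sketch (the model values here are illustrative, not from the original page): unlike ListView, a Repeater relies on a surrounding positioner such as Row for layout.

```qml
Row {
    // One Text item is created per model entry;
    // the Row, not the Repeater, decides where the items go.
    Repeater {
        model: [{ label: "A" }, { label: "B" }, { label: "C" }]
        delegate: Text { text: modelData.label }
    }
}
```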
View Delegates
Views need a delegate to visually represent an item in a list. A view visualizes each item in the list according to the template defined by the delegate. Items in a model are accessible through the `index` property, as well as through the item's properties.
```qml
Component {
    id: petdelegate
    Text {
        id: label
        font.pixelSize: 24
        text: index == 0 ? model.type + " (default)" : model.type
    }
}
```
Models
Data is provided to the delegate via named data roles, which the delegate may bind to. Here is a ListModel with two roles, type and age, and a ListView with a delegate that binds to these roles to display their values:
```qml
import QtQuick 2.15

Rectangle {
    color: "white"

    ListModel {
        id: myModel
        ListElement { type: "Dog"; age: 8 }
        ListElement { type: "Cat"; age: 5 }
    }

    Component {
        id: myDelegate
        Text { text: model.type + ", " + model.age }
    }

    ListView {
        anchors.fill: parent
        model: myModel
        delegate: myDelegate
    }
}
```
To get control over which roles are accessible, and to make delegates more self-contained and usable outside of views, required properties can be used. If a delegate contains required properties, the named roles are not provided. Instead, the QML engine checks if the name of a required property matches that of a model role. If so, that property is bound to the corresponding value from the model.
```qml
import QtQuick 2.15

Rectangle {
    color: "white"

    ListModel {
        id: myModel
        ListElement { type: "Dog"; age: 8; noise: "meow" }
        ListElement { type: "Cat"; age: 5; noise: "woof" }
    }

    Component {
        id: delegate
        Text {
            required property string type
            required property int age

            text: type + ", " + index
            // WRONG: text: type + ", " + noise
            // The above line would cause a compiler error
            // as there is no required property noise
        }
    }

    ListView {
        anchors.fill: parent
        model: myModel
        delegate: delegate
    }
}
```
If there is a naming clash between the model's properties and the delegate's properties, the roles can be accessed with the qualified model name instead. For example, if a Text type had type and age properties, the text in the above example would display those property values instead of the type and age values from the model. In this case, the properties could have been referenced as `model.type` and `model.age` instead to ensure that the delegate displays the property values from the model.
A special index role containing the index of the item in the model is also available to the delegate. Note that this index is set to -1 if the item is removed from the model. If you bind to the index role, ensure that the logic accounts for the possibility of index being -1, that is, that the item is no longer valid. Usually the item will shortly be destroyed, but it is possible to delay delegate destruction in some views.
Models that do not have named roles will have the data provided via the modelData role. The modelData role is also provided for models that have only one role. In this case the modelData role contains the same data as the named role.
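As a sketch of the single-role case (the model below is illustrative, not from the original page), the delegate can read the value either through the role name or through modelData:

```qml
ListModel {
    id: colorModel
    // Only one role ("name"), so modelData mirrors it.
    ListElement { name: "red" }
    ListElement { name: "green" }
}

ListView {
    model: colorModel
    delegate: Text { text: modelData }  // same value as model.name here
}
```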
Note: The index and modelData roles are not accessible if the delegate contains required properties, unless it also declares required properties with matching names.
List Model
ListModel is a simple hierarchy of types specified in QML. The available roles are specified by the ListElement properties.
```qml
ListModel {
    id: fruitModel

    ListElement { name: "Apple"; cost: 245 }
    ListElement { name: "Orange"; cost: 325 }
    ListElement { name: "Banana"; cost: 195 }
}
```
The above model has two roles, name and cost. These can be bound to by a ListView delegate, for example:
```qml
ListView {
    anchors.fill: parent
    model: fruitModel
    delegate: Row {
        Text { text: "Fruit: " + model.name }
        Text { text: "Cost: $" + model.cost }
    }
}
```
Note: Unlike in Qt Quick, ListModel and ListElement are read-only. This implies that you cannot change values that are stored in such a model. If you need editable models, see Qul::ListModel and Integrating C++ code with QML.
Lists of Objects
An array literal containing object literals can be used as a model. The current element of the array is available as the `modelData` role and can be accessed directly or as `model.modelData`.
```qml
ListView {
    anchors.fill: parent
    model: [
        { name: "Apple", color: "green" },
        { name: "Pear", color: "pink" }
    ]
    delegate: Row {
        Text { text: "Fruit: " + modelData.name }
        Text { text: "Color: " + model.modelData.color }
    }
}
```
Properties of Type ListModel<T>
A property of type `ListModel<T>` can be used as a model, where the properties of `T` describe the model structure.
This allows the model data to be set externally, or by PropertyChanges in a State.
```qml
// MyView.qml
Item {
    property ListModel<NameAgeType> myModel

    ListView {
        model: myModel
        delegate: Text { text: model.name }
    }
}

// NameAgeType.qml
QtObject {
    property string name
    property int age
}

// User.qml
Item {
    MyView {
        myModel: [{ name: "John Smith", age: 42 }]
    }
    MyView {
        myModel: ListModel {
            ListElement { name: "Smith"; age: 42 }
        }
    }
}
```
Note: The `ListModel<T>` type is specific to Qt Quick Ultralite and does not exist in Qt Quick. Using it means that the QML code will not be valid Qt Quick QML. Without it, however, it's not possible to declare the model in a different file.
C++ Data Models
Models can be defined in C++ and then made available to QML. This mechanism is useful for exposing existing C++ data models, as well as mutable or otherwise complex datasets, to QML.
For more information, see Qul::ListModel and Integrating C++ code with QML.
When deploying an Elastic Stack application, the operator generates a set of credentials essential for the operation of that application. For example, these generated credentials include the default `elastic` user for Elasticsearch and the security token for APM Server.
To list all auto-generated credentials in a namespace, run the following command:
```sh
kubectl get secret -l eck.k8s.elastic.co/credentials=true
```
You can force the auto-generated credentials to be regenerated with new values by deleting the appropriate Secret. For example, to change the password for the `elastic` user from the quickstart example, use the following command:

```sh
kubectl delete secret quickstart-es-elastic-user
```
If you are using the `elastic` user credentials in your own applications, they will fail to connect to Elasticsearch and Kibana after the above step. It is not recommended to use the `elastic` user credentials for production use cases. Always create your own users with restricted roles to access Elasticsearch.
To regenerate all auto-generated credentials in a namespace, run the following command:
```sh
kubectl delete secret -l eck.k8s.elastic.co/credentials=true
```
The above command regenerates auto-generated credentials of all Elastic Stack applications in the namespace. | https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-rotate-credentials.html | CC-MAIN-2021-21 | en | refinedweb |
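Once the operator recreates a deleted Secret, the new value can be read back from it. As a sketch (the Secret name assumes the quickstart example above; adjust it to match your cluster name):

```sh
# Print the regenerated password of the elastic user.
# The value is stored base64-encoded in the Secret and decoded here.
kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}'
```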
JS Party – Episode #152
Automate the pain away with DivOps
featuring Jonathan Creamer
What the what is DivOps?! That’s the question Jonathan Creamer is here to answer. In so doing, we cover the past, present, and future of frontend tooling.
Featuring
Sponsors
AWS Amplify – AWS Amplify is a suite of tools and services that enable developers to build full-stack serverless and cloud-based web and mobile apps using their framework and technology of choice. Amplify gives you easy access to hosting, authentication, managed GraphQL, serverless functions, APIs, machine learning, chatbots, and storage for files like images, videos, and pdfs. Learn more and get started for free at awsamplify.info/JSParty.
Notes & Links
Transcript
Click here to listen along while you enjoy the transcript. 🎧
Hello, everybody! We’re so excited to be back this week, episode #152. It’s my first time MC-ing, so if the show is horrible, you can just blame me.
It’s gonna be amazing.
Apologies in advance. But we’re really excited to have a very special guest here today… Jonathan Creamer is gonna be here with us, and we’ll get into his back-story in a little bit. On the panel with me today is Divya…
Hello, helloo…!
And Nick.
Hoy-hoy!
Hey-hey. And we’re so excited to have Divya here on a show that’s about DivOps… We’ll have to get all the bad puns out of the way now…
Div-yeah…!
Div-oh-yeah… [laughs]
There you go…
Yeah… Yeah, I think Jonathan came prepared for that one…
Very good, very good.
So Jonathan is here because we had a show with Ben Ilegbodu a few weeks ago, where we were talking about TypeScript… And Ben brought up this term called DivOps. We all leaned into that, we were like “DivOps?!” and he’s like “Yeah. You know, I have a friend, and he’s trying to make it a thing, and I’m trying to help him…” I’m like “Well, it’s a thing now…”
It’s a thing.
Yeah, so welcome, Jonathan. Can you tell us a little bit about yourself?
Yeah, thanks. So Ben and I worked at Eventbrite together; he actually brought me on to Eventbrite, which was really fun… So that’s how we’re friends. We used to see each other at conferences all the time, where we’d talk about DivOps things at the time, but we hadn’t quite coined that term… So yeah, the DivOps thing happened because inside of Eventbrite I’m on the frontend infrastructure team, and that’s kind of one of the more common terms you hear for describing a team that does the kind of work that we’ll talk about, what DivOps does… But Kyle Welch, my co-worker and now manager - we used to talk about this all the time.
[00:04:17.27] So I’m on frontend infrastructure, I’m technically a frontend developer. I’ve been writing code for ten years, a lot of backend stuff, C#, ColdFusion, then to Node and JavaScript. Tons of JavaScript. But frontend-wise, I actually don’t write that much frontend code anymore. I prefer TypeScript now to do that…
Yes…!
Yes, of course! But I don’t actually write much client-side facing frontend code anymore. I use frontend tools to build; tools that allow other frontend engineers to do their job… Because we’re a big React shop, so it’s complicated. There’s a lot of stuff that goes into building a web page today… And I know we’ll talk about it here in a minute, what led up to that.
So yeah, we were sitting there, talking, and we were like [unintelligible 00:05:06.21] So I sort of tweeted out, and asked the community; we got a couple different answers, and some people said frontend SRE, frontend ops, [unintelligible 00:05:18.09] The most popular one actually was frontend DevOps, and [unintelligible 00:05:22.18] And that still was like “Well, that’s fine… Frontend DevOps sort of makes sense…” But then actually what happened was this guy Enrique was the one – Enrique Staying on Twitter; he said “Frontend engineers who manage infra should be called <div>ops”, and he actually put the angle brackets in there… And I just latched on to that immediately, and I was like “Oh my god, that is AMAZING. DivOps.”
And the moment that that came up, I started going to some conferences… We went to this conference called Connect.Tech in Atlanta, and I started pitching the idea there, and everybody was like “Oh, this is amazing. Yeah, do something with it!”
So Kyle and I started this DivOps community on Slack, and I sort of blogged about it on my personal blog… And there is a divops.dev website; it’s terrible, because I’m not a frontend engineer…
Because you’re ops, right? I would expect nothing less…
I did an amazing GitHub Actions build pipeline though, to merge from master… So the CI is fantastic. But yeah, so DivOps to me is what goes in all this tooling that we have to write to get all the other folks that work on frontend stuff get their stuff to the internet, so people can see it. Nobody sees my stuff necessarily, but people see the stuff that I help get out there.
Yeah, you’re the pipeline.
Yeah, we build the pipeline. It’s WebPack, it’s Babel, it’s maybe looking into Parcel, it’s Docker, it’s CI, it’s Jenkins, it’s deploying things, it’s S3, it’s Kubernetes, it’s all that stuff. So it’s a whole lot. I could go on all day about it. But that’s in a nutshell how it came to be.
I’m just really happy you’re adding semantics to the least semantic HTML tag.
Yeah… [laughter]
I mean, I would expect nothing less, again, from the ops world… [laughs]
Sure, sure.
“HTML what…?” So Jonathan, that’s super-cool. If you think about it, this is definitely a job that wasn’t a thing ten years ago, or even five years ago perhaps… And so it’s interesting how much the tooling landscape and the mindshare required has shifted the job market and focus of developers. And for companies working at scale, with huge frontend teams, there’s typically a frontend infrastructure team, and typically a frontend platform team that’s supplying a bunch of components, and then folks working on builds and pipelines and all the DX (developer experience) workflows. So it’s really great to see Eventbrite has made that a thing as well.
[00:08:05.00] Yup. I feel like most people who are doing it now sort of fell backwards into it. We were all writing jQuery code eight years ago, and doing things like the revealing module pattern, and using IIFEs… And all of a sudden you’ve just got all these cool – you know, guys writing about JavaScript architecture back then, Nicholas Zakas, books on all that… And then Backbone came out, and that was like “Oh my god, Backbone is amazing! Now we’ve gotta build all this infrastructure around that; models, collections…”, and then Marionette… And then RequireJS. Then you had to learn not only how to stop concatenating files, and do all that… And we just started this natural evolution; it became more and more complicated… And a lot of us just sort of fell backwards into it, and I just was like “This is awesome. I just wanna do this.” I don’t really wanna write code as much that people see; I like building this [unintelligible 00:09:00.02] way better.” It’s just fun.
When I think back on all the things that led me here, it’s like – and funny enough, we’re actually still deprecating Backbone code here at Eventbrite, but that was really where I first started getting into having to figure out, like “We’ve got all these Backbone models…” I remember the dumbest thing – I remember in 2010 having this ginormous, long, 10,000-line Backbone application, because I didn’t know how to actually concatenate things at the time. I still have a Stack Overflow post from ten years ago, where I’m like “How do I take all these files and put them into one?” And I don’t even remember what the answer was at the time, but somebody turned me on to something way back then…
Probably Grunt, I guess…
It wasn’t. It was before Grunt. It was some Java thing that would compile… What was that? I don’t remember; I’ll have to go back and look on Stack Overflow.
Wow…
But yeah, I had to build that pipeline to take a bunch of files and squish them into one; that’s where the revealing module pattern and all those early JavaScript patterns came in, so that you weren’t leaking into the global namespace, and all that.
And then when Grunt came out, obviously, that was the game-changer. Suddenly, we were like “Oh wow, now I have an official way of doing this whole thing. I’m gonna take now all these RequireJS modules and trace the dependencies, run r.js, uglify things, build my CSS, Grunt all the things… Yeah, that was really one of the first times when I realized “This is becoming a thing.”
Yeah, it really is. Taking a step back into the history of these tools - Grunt was the first JavaScript task runner; it was created by Ben Alman, cowboy on GitHub, or on the internet in general. Ben is just a really brilliant engineer; I think he’s currently a principal engineer at Toast, I believe… But he worked at Bocoup for a number of years, which is a company that I worked at.
And the IIFE pattern is also something that Ben kind of invented and socialized throughout the community, so it’s interesting to see the history there…
So we had Grunt, and then we had Gulp… It’s interesting to see what the evolution was, because Grunt ran everything serially, and then Gulp was “better”, because it–
Streams…
Yeah, it streams, and lets you do more concurrent–
And piping.
Piping, yeah. And then you could write in JavaScript; there wasn’t this weird other syntax that you needed to learn, and you could integrate… So it’s interesting to see how that evolution has come through…
Yeah, definitely.
…all the way to React, where I think that was one of the first, if not – yeah, I think it was the first JavaScript library that really couldn’t copy-paste into the web. You can’t just take that source code and – you can’t just take JSX and just copy-paste into the browser; a compiler is always required. That was a very big shift for the community, and one that I’m still personally – I think [unintelligible 00:12:12.17] still pending for me on whether it’s a good thing or not… But I don’t know, what do you think, Divya?
[00:12:20.26] What was the question? I totally missed what you said in the beginning.
Oh, you spaced out?
That’s okay, I forgive you.
[laughs]
The question was like… React - do you think the fact that React is a tool that you can’t even run in the browser… You can’t write React code without a compiler; so it’s not even–
Well, you could…
I was gonna say, you could… It’d be pretty gnarly though.
You could with CDN. You need access to the internet in order to [unintelligible 00:12:43.01]
You’d have to create element, and blah-blah… You’d have to write very weird code. You would write code that you would traditionally think of as React.
It would be gross.
I saw a comment about that on Twitter today; I won’t call the individual out, just because they probably don’t wanna be, but they were talking about that pattern of - instead of writing JSX, writing the function React.createElement(), which is actually like a Hyperscript function that takes in three arguments and does things with them… And they called out writing that for the past couple of years and really enjoying that, and thinking they made the best decision, versus just writing JSX directly.
Oh, wow.
Interesting.
Wow… Yeah, that is very interesting. That is, I would say, a person who is very patient. Maybe should consider a role in teaching…
Does not have a lot of nested components as well… [laughter] It’s just React.createElement(). [laughter] I understand the appeal of it; I would definitely rename it to something else, like H, but… I don’t know, at the end of the day it’s just faster… It’s easier to analyze code, for my eyes specifically… So I’m talking anecdotally, but it’s easier for me to just look at JSX and know what’s going on… Whereas I feel like I would be lost looking at Hyperscript calls over and over.
I think what led up to React, when I was at appendTo, building RequireJS and doing all these things - we sort of painted ourselves into a corner of needing build tools anyways for stuff… Even if you’re able to write JSX vanilla React in the browser, are you gonna – I mean, maybe now you could go vanilla CSS, too; it’s a little easier to write CSS now than ever before… But that wasn’t true until not that long ago. And we still have a lot of IE11 traffic, unfortunately. We’re finally on the cusp of turning that stuff off, but something has to run first to make your code be able to run everywhere, and it just sort of has to happen… So why not just also throw JSX compilation into the mix, too? It’s not even that slow, really; it’s just converting some ASTs [unintelligible 00:14:50.08] function call.
So yeah, to me it’s just kind of – build tools is just a part of it now, part of the job, like it or not, at some level.
Yeah, it’s like a necessary part of the job anyway, in order to write code that can be supported on multiple browsers, and performs well… I don’t personally see a world where we aren’t running build tools on our JavaScript code. I think the concern is more like – the local development workflows personally for me have greatly been impacted by this, and I think we’ll get into some of the tooling in the next segment…
I think there’s also a bunch of skills needed to have an entry point into modern web dev now… And that’s not very inclusive, because you’re asking people who are learning the language and learning the jargon to now learn ops, learn how to manage config files.
[00:15:56.17] Yeah. And that’s where this whole thing came up… Because it’s like “I don’t want my devs having to come in and learn all that stuff. I’ll take care of that for you. Put that on me. I love that stuff; I’ll do that all day. I’ll write you a WebPack config right now if you want one.” I love doing that stuff, I don’t know why. I geek out so hard on it.
I want the junior engineers coming out of wherever, or just starting – if you’re listening to this podcast and you’re like “I don’t know all this stuff…”, that’s okay. Come to me and let’s talk. I’ll help you get going, and then over time I can teach you more about this, and why it’s important, and how it works… But ultimately, definitely within the context of my company - I just want my feature teams to go make Eventbrite the best possible live events experience on the internet; and I don’t want you to have to worry about your WebPack config, and your Babel config, and your ES modules, and whatever. It’s like, I got you; that’s my job.
There’s something to be said about the increase in the number of zeroconfig type tooling. For example - sure, 10-20 years ago writing frontend code was fairly straightforward. You’d write a single file, maybe a CSS file, and then later on you’d throw in JavaScript, or whatever… You don’t need tooling for that. And then obviously, it’s become more complex, where you have WebPack, and earlier there was Grunt and Gulp, and so on…
But in the advent of tooling, at least at the beginning stages, there was not a lot of boilerplate code that you could just use and run with. You’d still have to write your own Grunt, you’d still have to run your own Gulp, and WebPack, and so on. But I think – and this is sort of me endorsing frameworks to an extent, because I think frameworks have actually helped… There’s an argument both sides, but I think in terms of Create React App let’s say, it has given people the ability to just run Create React App, it creates a boilerplate for you, and then you can just run with it.
Obviously, there’s an overhead in terms of learning JSX, and whatever, but that tends to come with frameworks. I mean, it’s sort of a trade-off - would you rather learn JSX, or would you rather understand how to write a WebPack config? And for a lot of people, JSX is very similar to HTML. Not the same, obviously, but it is a path to working faster than it is to understand all the config. And I think that’s actually really interesting, in terms of how the industry has moved towards…
I think that’s a positive, because it means that people don’t have to learn a lot of this… And if they want to, they can, because at least with Create React App you are working off of that boilerplate, and then if you really want to, you can extend the config. And if you wanna go one level further, you can just eject completely, which is obviously not recommended, because you don’t get further updates. But if you have very specific needs, you can do that. And that obviously means that you’re full on in the deep end, which is like what you do, Jonathan, which is completely updating WebPack, understanding every intricacy of that process. So I think there’s a wide spectrum in terms of the way in which you can enter frontend today.
Yeah. And what’s interesting about that, too - you’re right, frameworks, that’s part of why Next.js and Gatsby are so good and so popular; it’s like, you don’t have to worry about that stuff. But I think a lot of what’s interesting is that companies like Eventbrite and like Google or whoever, people that have been around for a minute - we’ve had to go through transformations… We started with Backbone and Marionette; actually, before that it was, like I said, IIFEs… To Backbone, to Require, then to React, and shoving React inside Backbone, and then now taking React out and only doing React… It was sort of hard to find a breaking point, to just say “Hey, we’re switching to Angular here, and Angular can do everything.” We picked up React, because we saw React was happening… And really back then, four years ago, when Eventbrite switched to React, there wasn’t a good React framework back then. Create React App wasn’t a thing, and Next.js definitely wasn’t a thing; maybe it was…
[00:19:56.22] So I think from that perspective we sort of just all had to find the ways to take what we had done and build our own little frameworks around them, and that’s where teams like [unintelligible 00:20:07.04] taking us into the future, taking the company into the future with React.
Cool. Just to kind of close up this section, I had one more question… Where would you delineate the difference between DevOps and DivOps? Is it strictly JavaScript tooling is DivOps, and then everything else might be DevOps? Repo management can be something that a team takes advantage of, for example. Which side would that be on? And what are your thoughts on YAML?
[laughs] Well, we’re switching–
Shots fired…
Yeah, we’re switching to CircleCI right now, so we do a lot of YAML.
Nice.
So I would say that it’s a very blurry line. The ideal line - and this is one that I had at LonelyPlanet, where I came from before Eventbrite, which is really great - was that we partnered with our DevOps team to have them help us create some infrastructure patterns and paradigms, where they sort of did for us what I’m doing for my engineering customers on the frontend. They would create – you know, “if you copy this Jenkins file, there’s a couple macros in here that will build your stuff… and then just take this Kubernetes manifest…”
So that sort of give and take between DevOps and then my world – it’s like, I understand the DevOps flows and how to create my own infrastructure when I need to; I don’t necessarily need to get into networking VPCs, and routing HTTP traffic. I can, and I like to understand that stuff, but that partnership with DevOps or SRE is, I think, the ideal place where we can create an API, like anything else. And same thing I’m talking about with this tooling stuff. It’s like, “How do I work with the DevOps team? What levels, what touchpoints do we have?” and sort of building that understanding between the two.
That’s super-cool, Jonathan. I think what’s really interesting for me is this convergence of these two worlds that in previous lives never talked to each other. You have opsy, infra, cloud, CI folk, and you have folks who are writing JavaScript that are maybe at the tip of the spear… It’s this really nice full circle with DivOps, so thank you so much for talking to us about this cool topic. We’ll get into tooling and all the other fun stuff you kids can’t wait for next.
Jonathan…
That was a really cool insight into DivOps. And with Divya mentioning this separation of concerns, where Create React App has been created to abstract away all of the complexity around managing your configs, and lets you focus on just learning the tool… It’s really nice that the community at large is starting to take that approach. We’ve seen it even just with WebPack 4, many years ago – I think they introduced zeroconfig defaults there, as well.
[00:24:17.25] I’ve been around long enough to remember Karma was a tool that was super-widely adopted, and is still widely adopted today because of the way legacy stuff works… But I was the one person that had to set up all the configs for all my teams, because no one ever really got it. Docs were pretty poor… We’ve come a long, long way in terms of tooling, defaults etc. But can you give us an overview of what you consider to be really the best in class tooling landscape for frontend teams in 2020? If I was starting a project today, what would I need, and how should we go about setting it up?
Yeah. So what’s interesting is WebPack 5 just came out a few days ago, and it introduced a lot of things. You brought up WebPack 4 kind of converting into the – you basically [unintelligible 00:25:09.01] webpack -p or webpack -d and it just sort of has the same defaults, which is great. So from that perspective - yeah, you’ve got a lot of options now. Parcel is another big one; I think that was sort of the whole mantra behind Parcel, noconfig. At least at first. I know then Kyle came in and kind of added a little bit of config, because there were some needs there… Parcel 2 is gonna be even more incredible in terms of what they’re looking to do. So I think Parcel is big…
I know eventually, for teams that don’t have to support legacy stuff, Snowpack sounds pretty dang cool. I think IE11 is still a sticking point there; or at least it was the last time I checked. I don’t remember. But yeah, so there’s a lot of great things out there still, and coming out, new things.
Babel, obviously, has gotten so good now that it can even transpile TypeScript, generally. You guys talked a little bit about that last week with Ben… Because the TypeScript compiler is really good, but it’s like, it doesn’t always match with what – I’m already doing all this sort of stuff in Babel, so the fact that I can now do TypeScript is fantastic.
And then in terms of other tooling that they were using at Eventbrite that I would say is pretty useful industry-wide and pretty good standards is - we have a big monorepo actually of our frontend code, which… Say what you want about monorepos; there’s definitely contentiousness about monorepos versus multi-repos, but for us, what we have chosen to do with our tooling stack, and all this DivOps stuff, is - we’re using a tool called Bolt, which is very similar to Lerna, built by the same guy, Jamie… And we’re able to… Basically, we have about 150 different frontend packages, and we can go in and say – our design system is in there, our tiny little packages that control widgets on the page are in there… And then entire applications are in there.
We have tools built that can detect – you know, if I change the button in my component system, I can see the downstream effects across my entire repo… Which is actually really hard in a multi-repo setup, unless you’re gonna write some crazy tooling to go about all these different repositories.
If I change the button, I get a list of every app in every package downstream that it touches, that the button is affecting. So I can run my Jest tests against everything downstream to make sure I haven’t broken everything. Same for WebPack. Now if I change the button, I can go run the WebPack builds of all the apps that use the button…
[00:27:52.13] And the opposite is true - if I’m only touching one small widget used by 2-3 different applications, then the blast radius is a lot smaller. So you get some better CI wins for that, because most of your builds are pretty smooth and pretty quick, because most of the teams are focused on what they’re focused on…
But then, when we have teams that come in and want to make repo-wide sweeping changes, we’ve built that in to be able to confidently say “I can change this card display widget and make sure that everything else alongside it gets tested”, which is really cool and super-fun. It took us a minute to get right, but it’s been really fun. And that’s the kind of tooling that – I just love building that stuff. I just love seeing how that affects people’s day-to-day, and the excitement that people get when we ship an update to it that makes it even better, and they’re like “Oh, this is so great!”
So yeah, the monorepo thing has been big… And I think industry-wide, that’s another tool that we’ve seen grow in popularity because of Lerna; Bolt was kind of a next step for that, that we’re using… Nx is another big, popular monorepo tool; there’s a couple of different ones out there, but I think the monorepo for frontends is pretty big.
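The downstream-impact detection described above boils down to a reverse-dependency walk over the package graph. This is a toy sketch, not Bolt's actual implementation; the package names are invented for illustration:

```javascript
// Given a map of package -> its dependencies, find every package
// downstream of a change (reverse-dependency breadth-first search).
function downstreamOf(changed, depGraph) {
  // Invert the graph: for each dependency, record who depends on it.
  const dependents = {};
  for (const [pkg, deps] of Object.entries(depGraph)) {
    for (const dep of deps) {
      (dependents[dep] = dependents[dep] || []).push(pkg);
    }
  }
  // Walk outward from the changed package, collecting every dependent.
  const affected = new Set();
  const queue = [changed];
  while (queue.length) {
    const current = queue.shift();
    for (const pkg of dependents[current] || []) {
      if (!affected.has(pkg)) {
        affected.add(pkg);
        queue.push(pkg);
      }
    }
  }
  return affected;
}

// Hypothetical repo: changing the button affects the widget and the app
// that uses it, but not admin-app - so only those builds/tests need to run.
const graph = {
  'design-system-button': [],
  'checkout-widget': ['design-system-button'],
  'events-app': ['checkout-widget'],
  'admin-app': [],
};
console.log([...downstreamOf('design-system-button', graph)]);
```

This is what keeps the "blast radius" small in CI: only the affected subset of the 150 packages gets rebuilt and retested.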
Yeah, Microsoft has released Rush recently…
Yup, Rush.
It looks pretty good, actually, and I think they’re using it internally inside Microsoft, which is awesome, because that means you’re getting good support…
And Google has Bazel, which is their thing for it… A lot of the big companies have monorepos. But does a startup just shipping code need one of those? Probably not… But for a team of 150 engineers, it’s pretty nice to have the tooling of your monorepo to help shape it all and make it all make sense… So yeah, I’d say if you’re a big company and you’re having trouble keeping everything in sync, the monorepo is a good pattern for modern build tools. It’s very helpful. It adds some shape and clarity around making changes; and confidence. So I really like that strategy a lot.
What else, industry-wide, tooling-wise…?
While you’re thinking, I can clarify something for folks. We’ll get into Snowpack in a bit, but… Snowpack does have – I guess we can get into it now. Snowpack has interoperability with WebPack, so that you can use Snowpack for – it’s really geared towards your local development… And because you need to support older browsers that maybe don’t have ESM, and whatever else, you can actually just literally use – you just plug in your WebPack… They have a plugin essentially for production; you just use WebPack to build your production bundles.
So for folks who are wondering, “What is Snowpack?” - well, we had Fred on the show a little while ago; I don’t remember what episode number, but we’ll link it in the show notes… But Snowpack essentially is this awesome bundler that lets you – it’s ESM-first, so you don’t need to bundle your JavaScript… So it’s using native modules, and it drastically improves your local developer workflow, because you’re able to build things file-by-file, and your fans are not gonna spin when you’re doing a watch and having to constantly update your whole bundle, update your whole bundle, update your whole bundle…
So Snowpack is really great; a lot of frontend teams are starting to adopt it. We’re also considering adopting it for my team, and teams at large at my company… So I’d highly recommend looking into it, at minimum for the local development workflow; it’s a game-changer.
[00:32:01.05] Yeah, I’ve definitely seen stuff about that. It’s one of the ones that’s like “Man, I need to look at that.” I’ve got it in my ever-long to-do list of articles and things I need to learn about.
Right. I’m gonna throw Nick a bone here, because I’m gonna talk about TypeScript… But how do you – there’s configs around linting, and there’s this kind of suite of tools that are what I like to call in the same cluster; they’re things that have a lot of peer dependencies… Whether it’s a Babel preset that requires these versions of Babel core, or whether it’s a TypeScript linting rule… There’s all these clusters which really for me make upgrades extremely challenging. For example, when WebPack comes out with a major release, there’s a ton of tools built around WebPack, and have peer dependencies set. What are recommendations for how to manage that?
Yeah, I love that question so much.
And TypeScript, because Nick. [laughs]
Yes, because Nick. So what we have enforced, which is a little different, and one of the things that Bolt does at its core - Bolt is its own thing, but at its core what we wanted to enforce with our monorepo was that there’s a consistent version of every package across the entire repo. You can’t have multiple versions of React in our world. We don’t support it; we don’t want it. We want everything to be consistent, so that way we can predict things better, and there aren’t forked node_modules folders, where one of the packages in my repo required React 16.9 and the next one required 16.12. That causes all these other downstream effects – it’s just crazy.
So literally, if you would ask for a Babel plugin at 6.17.1, that’s the only one that’s gonna exist in the repo at any moment, period. We don’t allow it; we’ll fail the build. You can’t do that. So we enforce that pretty strictly…
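The one-version-per-package rule could be enforced with a check along these lines. This is a hypothetical sketch, not Bolt's actual code; package names and versions are invented:

```javascript
// Collect every requested version of every dependency across the repo's
// package.json files and report any package requested at two versions.
function findVersionConflicts(packageJsons) {
  const seen = {}; // dependency name -> Set of requested version ranges
  for (const pkg of packageJsons) {
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    for (const [name, range] of Object.entries(deps)) {
      (seen[name] = seen[name] || new Set()).add(range);
    }
  }
  return Object.entries(seen)
    .filter(([, ranges]) => ranges.size > 1)
    .map(([name, ranges]) => ({ name, ranges: [...ranges] }));
}

// Two packages asking for different Reacts would fail the build.
const conflicts = findVersionConflicts([
  { name: 'app-a', dependencies: { react: '16.9.0' } },
  { name: 'app-b', dependencies: { react: '16.12.0' } },
  { name: 'app-c', devDependencies: { jest: '26.0.0' } },
]);
console.log(conflicts);
```

A real check would read the package.json files from disk and exit non-zero in CI when the conflict list is non-empty.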
And in terms of that, going to the next build pattern or the new upgrade and dealing with those breaking changes… And even – we do a ton of migrating things; we’ve gone from WebPack 2, to 3, to 4, and how did that work; we’ve moved code around… We write a lot of Babel plugins for doing code mods, actually, which is really powerful, and fundamental to how Babel itself actually works. It’s this concept of an abstract syntax tree (AST), if you’re not familiar. astexplorer.net kind of describes it. It’s basically a way for you to write code, and that code can then get compiled down into a tree of like “This is a variable. This is a function”, etc. And then you can easily go in and replace a function call with something else, or whatever… Which is actually how Babel works under the covers, and why all of a sudden I didn’t have to use tsc to compile my TypeScript anymore, because Babel released their own AST parser for TypeScript… Which was super-handy, because now I can use babel/preset-env and babel/preset-typescript, and plugins for CommonJS modules, dynamic imports, whatever… And you can kind of combine these Babel plugins into something that makes sense for what your team’s targeting. We’ve still gotta support IE11, at least for a minute; hopefully we’re gonna kill that, maybe in 2021, hopefully. We’ll see. And we wanna support private fields, or whatever. You can do all that kind of stuff, because under the covers you’re using these cool AST things.
So our team actually has written several different AST transforms to help us convert from old things to new things, and do those upgrades, by going to – oh gosh, one of the biggest projects I worked on here at Eventbrite was actually taking us from [unintelligible 00:35:53.08] to 16. It was actually kind of hard; it took a while, because you’ve gotta make sure nobody’s using the wrong [unintelligible 00:36:00.01] thing anymore, then you’ve gotta go in and upgrade some of these different libraries… So we had to write some code that writes code, to help that upgrade path. So if you’re a team who manages a lot of code, like we do in frontend infrastructure, I cannot stress enough the importance and the usefulness of doing something like AST transforms.
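The codemod idea boils down to walking an AST and rewriting matching nodes. Below is a deliberately tiny, Babel-free sketch of that pattern; real Babel plugins do the same thing through the visitor API against the full @babel/parser AST, and the function names here are invented:

```javascript
// Recursively walk a simplified AST and rename calls to a deprecated
// function - the essence of a "code that writes code" migration.
function renameCalls(node, from, to) {
  if (node === null || typeof node !== 'object') return;
  if (node.type === 'CallExpression' && node.callee.name === from) {
    node.callee.name = to; // the actual rewrite
  }
  // Visit every child node or array of child nodes.
  for (const value of Object.values(node)) {
    if (Array.isArray(value)) value.forEach(v => renameCalls(v, from, to));
    else renameCalls(value, from, to);
  }
}

// e.g. a migration from a hypothetical oldFetchEvents() to fetchEvents()
const ast = {
  type: 'Program',
  body: [{
    type: 'ExpressionStatement',
    expression: {
      type: 'CallExpression',
      callee: { type: 'Identifier', name: 'oldFetchEvents' },
      arguments: [],
    },
  }],
};
renameCalls(ast, 'oldFetchEvents', 'fetchEvents');
console.log(ast.body[0].expression.callee.name); // "fetchEvents"
```

With Babel, the same rewrite is a plugin visiting `CallExpression` nodes, and the printer turns the mutated tree back into source - which is what makes those 9,000-file commits safe and repeatable.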
It even helps because ESLint is actually using ASTs, too. If there are certain patterns at your company that you want to enforce, you can write ESLint plugins to enforce that kind of stuff. There’s all kinds of cool stuff that you can do.
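An in-house lint rule of the kind described follows ESLint's meta/create shape. This sketch is hypothetical - the rule name and banned helper are invented - and it drives the rule by hand with a stub context so it runs without ESLint installed:

```javascript
// An ESLint-style rule that flags calls to a (made-up) legacy helper.
// The { meta, create } shape and context.report mirror ESLint's rule API.
const noLegacyFetch = {
  meta: { type: 'problem', docs: { description: 'disallow legacyFetch' } },
  create(context) {
    return {
      CallExpression(node) {
        if (node.callee.name === 'legacyFetch') {
          context.report({ node, message: 'Use apiClient instead of legacyFetch.' });
        }
      },
    };
  },
};

// Drive the rule by hand with a stub context and fake AST nodes.
const reports = [];
const visitor = noLegacyFetch.create({ report: r => reports.push(r.message) });
visitor.CallExpression({ callee: { name: 'legacyFetch' } }); // flagged
visitor.CallExpression({ callee: { name: 'apiClient' } });   // allowed
console.log(reports);
```

In a real plugin ESLint parses the source, walks the AST, and invokes the visitor for you; the rule body stays exactly this small.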
Yeah, automation for the win. I think you’re preaching to the choir. In this group we all love ASTs… [laughs] And generally, using automation as much as possible, for sure.
I think automation is the key there. So the question was “How do you manage upgrading things?” It’s automation and verification.
And repeatability, I guess, is the best thing with ASTs, because you can just run it on your whole codebase; if you get something wrong, just git checkout, update your transform, run it on your whole codebase again…
Yeah. And it’s funny… I didn’t use to feel this way. I used to get really nervous, but I do like 9,000-file-long commits all the time now. It’s like “Eh, whatever.” It’s no big deal anymore. [unintelligible 00:37:22.29] Because I have that confidence now that I’m not gonna screw anything up. And it’s not just like finding and replacing, which - half the time I try to do that, I just break VS Code. I try to find and replace something across every file in our repo and it took 30 minutes for VS Code to do it. And it took an AST that I ended up running like 30 seconds to just scan the entire repo and change it. BRRRP, done!
Safe updates, right? ASTs are amazing for precision. One thing I wanna note for folks wondering why the Babel compiler is better for transpiling your TypeScript… We talked about this a little bit in Ben’s show, but we can get into it now. Basically, Babel has a lot – they’re essentially an implementer of the TC39 specs. The same way V8 implements JavaScript, Babel is considered an implementation of JavaScript, because they actually make polyfills, and they do transpiling… And they also deal with managing bugs and idiosyncrasies between browsers, right? So there’s so much wealth there… Trying to replace Babel at this point is – you know, you have to catch up to all the bug fixes… There’s so much that they’re handling, it’s a good separation of concerns to use Babel to transpile and TypeScript to type-check, and not TypeScript to compile. You just get a lot more benefits there… So I was really glad when the babel/types merged; that was great.
Yeah, I agree. That’s the workflow we also adopted.
Same.
TypeScript did a lot of really cool things around – if you just wanna use TypeScript and you just wanna ship something and you don’t care all that much, tsc is probably gonna be fine, especially with some of the composite project stuff that they have now, where it will only recompile the stuff you change. They have that built into TypeScript now, in terms of making things faster [unintelligible 00:39:31.29] It’s pretty good. But yeah, as part of a larger ecosystem, we use tsc to type-check and dump d.ts files out to the filesystem that we can ship with our packages… Because that’s the one thing that Babel can’t do yet that I’m aware of, is generate the TypeScript definition files… Which is very useful, because if you are creating a package that you want those type definitions to be on for your autocomplete and your IDE, it’s important to do that tsc step to get those type definitions.
[00:40:07.01] And the funny thing is tsc is running in the background of VS Code anyways for you. That’s why VS Code rocks as hard as it does - it’s because whether or not you’re using TypeScript at your company… If you’re just like “I use JavaScript!” and then I’m like “Are you using VS Code?” and they’re like “Yeah”, I’m like “No, you’re actually using TypeScript.” Because whether or not you like it, it’s taking your JavaScript and running it through the TypeScript compiler, analyzing your code, and telling you “Hey, you misspelled this.” That’s TypeScript; that’s the power of their compiler…
Which is powered by ASTs.
Which is powered by ASTs, exactly.
Bringing it all back… [laughs]
Yeah, they have like a – if you ever just go look… I’ve just dug into TypeScript before, the compiler - it’s insane. God, it’s insane. And it’s a massively different way of looking at ASTs than Babel does, too. It’s literally just this huge, long file, the TypeScript compiler; it’s crazy. It’s fun though, I love it.
I have a question from a workflow standpoint. So when you’re setting up these tools, and maybe as somebody who works more directly on the frontend, but they have a change that they wanna make - maybe a config change, or a tooling change, or bringing in some new tooling… How does that work? Does it go to you as a ticket? I’m just curious about the delineation… I asked about DevOps and DivOps; now frontend and DivOps.
Yeah, sure. That’s a great question. We leverage the code owner’s file pretty heavily at Eventbrite. We on the frontend infrastructure team own everything that is not owned explicitly by a team. So we have certain teams that own the packages, whatever, this folder; or this folder, that folder. And then everything else falls through to us. But what that means is – we just had a guy from our [unintelligible 00:41:55.07] office come in and say “Hey, I’ve noticed that the Storybook Addons ticket that you guys have put into GitHub a while ago has a help wanted tag. Can I help?” I’m like, “Absolutely. That’s why I’ve put that label on it.”
I want to manage this stuff, I wanna own this stuff, but we want to treat our monorepo as like an open source project. We are the owners of it, but we want our teams who are interested in it, and have a bent towards the same mindset that I do as [unintelligible 00:42:23.05] Come and contribute, yeah. Absolutely.
My name is gonna get attached to the pull request as a code owner, and I’ll see it, and then yeah; just approve, merge me. We have a merge pipeline that we manage [unintelligible 00:42:38.26] and it sends it off to Jenkins and merges it in, and then there you go. And then that person gets to have contributed to the entire frontend ecosystem at Eventbrite.
So yeah, very encouraged. We definitely push hard on telling teams “Don’t just treat this as Jonathan, Kyle and Alex’s project. This is everyone’s thing. If you find areas where it sucks, tell us, and fix it with us, and work at it together.”
Wow… That’s so incredible. I also love that y’all are using the GitHub codeowners file, because I’m assuming – because you have a monorepo, you use the codeowners file to figure out who should be tagged on pull requests, and who should approve XYZ. That gets a little bit into DivOps, right?
Yeah, definitely.
Do you guys lock down certain files, like your package.jsons? I’m curious who gets tagged on certain reviews always, from your team.
We have a – Jamie built a codeowners-enforcer package that actually helps with it, too. So if something goes into the repository that doesn’t get added to the codeowners file, the build fails. Every folder has to have an owner.
Every single folder?
Well, at least at the top level.
Top level, okay. I was gonna say…
Every package; not like down to source components or whatever, but the package at that level does.
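A codeowners-coverage check like the one described might look roughly like this. It's a simplified sketch - the real codeowners-enforcer handles glob patterns, nested paths, and more, and the paths and team names below are invented:

```javascript
// Fail the build if any top-level package folder has no matching
// CODEOWNERS entry ("every folder has to have an owner").
function findUnownedPackages(packageDirs, codeownersLines) {
  // Treat each CODEOWNERS line as "<path-prefix> <owner...>",
  // skipping blanks and comments.
  const ownedPrefixes = codeownersLines
    .filter(line => line.trim() && !line.startsWith('#'))
    .map(line => line.split(/\s+/)[0]);
  return packageDirs.filter(
    dir => !ownedPrefixes.some(prefix => dir.startsWith(prefix.replace(/^\//, '')))
  );
}

const unowned = findUnownedPackages(
  ['packages/design-system', 'packages/checkout', 'packages/new-widget'],
  [
    '# CODEOWNERS',
    '/packages/design-system @frontend-infra',
    '/packages/checkout @payments-team',
  ]
);
console.log(unowned); // any result here would fail CI
```

A real version would read the package directories from disk and the CODEOWNERS file from the repo root, and exit non-zero when anything is unowned.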
Yeah, that’s amazing. Well, the DivOps, automating code ownership. This is super-cool.
[00:44:11.29] Yes, it’s all about automating.
Automate it all.
Kyle likes to say that he likes to automate the pain away.
That’s amazing. I think I need to give Kyle my phone number. [laughter]
What you described is actually very similar to what happens at the company I’m at. We have a team like that that works on also a monorepo. We’re using Lerna for that, but very much a monorepo to make sure we’re all on the same version of React, using the same version of TypeScript, things like that.
Some of the things that they do are kind of – I’d say like almost doing spikes to figure out the future of things, or maybe analyzing where things might go from an architectural standpoint across all of the projects that we’re doing. Is that something that you would also put into the role of DivOps traditionally?
Yeah, this is something that we’ve been trying to figure out even within our own team - how do we draw the lines between the systems sort of side of DivOps, and the actual architecture side of DivOps. I love both sides of that, so it’s actually very hard still in my own head; I like doing both, so I don’t know. I will say, functionally, yes, we do have a lot of input into the architecture that goes into the monorepo itself.
Not only are we trying to help make sure that the build tools and systems (putting everything together) work, we are also trying to help steer… You know, right now we have 95 different applications; they’re all using React, they’re all using Redux… Like, should we squish those into a few and maybe have Next.js orchestrate those things together? And should we use Redux Form anymore? We use Redux Form pretty heavily; should we switch to something else?
We’ve been having a lot of conversations in our frontend guild meetings about stopping using Redux Form as much and switching to hooks, because now we can use hooks. For a long time we couldn’t use hooks, because we were stuck on React 15.8, or whatever it was. So that side of my head [unintelligible 00:47:22.23] on its game a lot of the time too, because I get asked about those questions… And then I get asked to do mentoring. Because I’ve been doing this for ten years, and then I’m also mentoring folks on how application architecture should work, while also maintaining that stack…
So yeah, I think it does fit in. It’s a wide umbrella, this DivOps thing, which is part of why I like calling it out, and just that awareness of everything that I have to deal with… Just writing it down; it’s like, “Okay, those are the things”, and I can just visualize it all now, which is great.
Yeah, I really like the idea of a team dedicated to improving the productivity of everybody else on the team, because otherwise that stuff just kind of gets pushed to the side a lot, or it happens not as part of your regular assigned task, and it’s hard to get that assigned. So it’s really good that there’s somebody looking out for that, or a team looking out for the best interests of the development experience, while not taking away from the developers actually working on the user experience, and things like that. So it’s really beneficial from that standpoint.
Right. And what’s so interesting about what you’ve just said also - my team is doing all that we’re doing, and sometimes what we wanna do is to actually step away for a second and let the feature teams talk and discuss best practices that they’re seeing, and making sure that we’re facilitating communication across all the different frontend teams.
We’re dealing with frontend teams in Mendoza, Argentina, and frontend teams in Madrid, and frontend teams in San Francisco, and frontend teams in Nashville… So part of our role is also – we have these weekly guild meetings where everybody that’s really interested in frontend at large (not just infrastructure) comes together, we talk… And a lot of times it inadvertently becomes the frontend infra hour, which we don’t want it to be, because we want to hear from everyone using the stuff we’re using, so that we can help facilitate what is evolving as best practices inside of Eventbrite. And then also what we’re seeing in the industry, so we can kind of help shape those best practices for the teams, and maybe put in some new lint rules to help inform “This is not the right way to use hooks. Don’t do that.” So that way we get consistency.
People jump from team to team inside of Eventbrite, and we even want new hires to come in and see – not have to have a massive onboarding period of learning how frontend works at Eventbrite. No. We just want it to be standard, so that anybody can come in, from any company, and just sort of get it. “Oh, okay. They’re using hooks. And here’s some Redux. The Redux is - whatever. But we get it. It makes sense.” So helping set up some fences around that architecture.
I love how customer-oriented you are, by the way…
[laughs] Thank you. That comes from my product manager, [unintelligible 00:50:14.08] He’s an incredible guy.
That’s awesome.
Throughout the course of my career I’ve had the benefit of a lot of good project and product managers that helped me do that… So we constantly are focused – and inside of Eventbrite that’s one of our big mantras, trying to make the lives of our customers better… Whether that be on the feature team for the folks creating events, or in the case of our foundations teams, helping those engineers just live better lives, and have fun writing code. No one wants to wake up every morning – especially now we’re all at home, nobody wants to just wake up and hate the environment that you’re working in. We wanna make it better.
Yeah, that’s super-cool. I’m just impressed… The culture of good ops folks, traditional DevOps people - they’re extremely customer-oriented, and there’s this strong communication factor, because they’re typically the ones coordinating a bunch of teams that are very siloed, and you’re the common denominator…
Yeah.
So I just love that you’re advocating for that, and I think it’s just great for people to hear that y’all have that culture at Eventbrite, because it gives people hope. Siloing is a big problem the larger your company gets, and nobody has it perfect. If you look at Google, Google feels like 700 companies really, to the external person… Because it’s like “Wait, did they not know that messages exists already? Why are there seven other messaging platforms that seem to be cannibalizing each other?” But there’s just like… [laughs] Weird silos, you know?
The silo thing - boy, that rings so true to me. We had that problem where I came from at LonelyPlanet occasionally; we had folks in Australia–
It’s in the name. Just kidding. [laughs] Just kidding.
[00:52:07.14] Yeah, yeah. And when I got to Eventbrite, I was like “I don’t want that culture.” And especially now, everyone’s remote, everyone’s working from home… And I’ve been lucky also that I’ve done a lot of remote work, and when I was at appendTo for years we were really good about staying in touch, and communicating… And yeah, we have folks seven hours ahead in Madrid. So I committed myself to waking up at 7 my time, and being online for 3-4 hours of crossover with that team, because I want to be able to help them solve their problems if they have them. Then I’m online for the last few hours of San Francisco’s day. I’m in a good timezone I guess too, luckily… Because then the Mendoza folks are an hour ahead.
So yeah, it’s facilitating that communication across teams, and making sure everybody’s on the same page…
One of the things I think people were afraid of when we started down this path was that we’re gonna force you to follow our standards, and just rule with this iron fist, like “This is how things are done!” But that’s quite the opposite of what we wanted. We want people to just feel like this is everybody’s thing, we’re working on all this together, we want input from anybody that wants to be involved, to help shape this. This is your work environment. We feel like we have the capability to stay in touch with industry best standards, and help keep moving us forward, so that we’re not stuck having to maintain tons of legacy code - maintainability, all that stuff. So yeah, no silos, please.
I think it’s interesting to think about, because it’s unique… The company that you’re at, you’re sort of split into – your focus is on tooling, and frontend tooling, and then the different teams that are probably more UI-focused, and building components, and whatever else…
But it’s interesting, because oftentimes when you think about the frontend tools that you use, it affects everyone. If you work on frontend, you’re gonna have to think about tooling at some point. But how do you make decisions? How much agency do teams have? Because you’ve mentioned you have a frontend guild, there are lots of people who get to chime in… But overall, how does the decision get made? Because you own the tooling in your team…
Sure.
…and then a specific frontend team that’s working on this particular component might be like “We need to use this particular tool to move forward.” But do they have the agency to do that, or is it something that they have to review with your team, and then your team approves, and then sort of moves to its implementation?
That’s an awesome question. We’ve had that happen quite a bit. One of the big efforts was a team really wanted to roll with some Cypress testing. It had been something on my radar for a good bit, and I hadn’t been able to experiment with it… And they just sort of showed up with some “Here, this is what we think would look good.” Then our team’s like “Cool!” And since we’re the owners of the stuff, we see all the PRs; we just talk with them, and we’re like “Yeah, this looks good. Approve. Go ahead and merge.” And then they help us maintain it.
Then standards-wise, we’ve actually recently started this practice of writing what we call ADRs (architecture decision records). There’s a couple of groups at Eventbrite right now meeting to come up with those. A salient example is like “Should it be __fixtures as a directory name?” And then we’ll write down some pros and cons of that, and have lots of people go in and read it, and approve it, and then we’ll all merge it together, so everybody’s sort of feeling good about that.
So we’ll write these ADRs about new ideas we have… That’s another good change agent for making sure people feel like they’re a part of shaping the thing, and it’s not just “Frontend infrastructure put this new thing in.”
[00:55:56.11] To the point earlier about being customer-centric, I think we’ve built up a lot of trust with folks… Because we do focus so much on the customers, and making sure everybody’s happy. In general, the frontend community trusts us to make the right call, which is huge. If we say “This is probably the right path”, we generally get good – and if there is an outlier that’s like “I don’t know, this doesn’t seem to make sense”, we just talk it out, and figure it out. It’s been really, really great, I feel.
I was just gonna ask if you have an RFC process, a few minutes ago…
Yeah, that’s it.
I was like “I wonder if you have that…”, because that’s the sanest way to do this, that’s democratic. It’s benevolent dictator, stressing on the benevolent part.
So yeah, that’s… Allowing for change.
So we have the ADR process… We also created some GitHub issue templates for folks that go in and create bugs or feature requests… We’ve got a feature request in for like “We need offline testing to try to speed up CI, because these integration tests take way too long.” It’s like, “Cool.” That combined with some stuff I had picked up from some conferences, and suddenly we’re doing this really cool Cypress testing thing where we’re doing user flow testing with scenarios… And that all sort of came from feedback that we got from the GitHub issue.
We use GitHub Projects to manage those issues that are coming in, and labels, and just letting people feel like they can contribute.
Yeah, that’s so wonderful.
This is why when I wish everything was open source, because I think people could really get insights into productive workflows at scale… It would be awesome.
That’s why I created this DivOps community…
That’s awesome.
…because I’m tired of like “You guys doing your thing over here…” And about silos again - we’ve siloed ourselves off at different companies too, which is sort of unfortunate. I love when I get in – our DivOps, we’ve had a few meetings now… So Ben came to the DivOps group that we had a month ago and talked about what they’re doing at Stitch Fix. He pointed out these specific things, and I’m like “Oh, great. I’m doing that here, too.” I feel validated, like I’ve made the right choice.
So there’s that validation aspect of the community, too… Because sometimes it just feels like you’re just in this vacuum, like “I’m making stuff up as I go.” And then when you get a group of people that do the same thing together in the same room and you find out “Oh, they’re doing that, too? That’s awesome!” Or they’re doing that too, but slightly differently, and you’re like “Oh, I didn’t think about it from that angle.”
There’s a guy from the Shopify team in DivOps, and he was talking about their merge pipeline that they do, and I’m like “That’s awesome! We have a merge pipeline, too; it looks different from yours, but now you can help me sort of shape what it could look like at Eventbrite.” I need some insights from other people, from different places. It’s all about diversity, and thoughts, and getting all kinds of different ideas.
Different inputs, yeah. That’s so cool. I didn’t realize that the community that you were starting was also kind of a mindshare between people for best practices…
It is.
…not just like a support group. Because I thought it was an emotional support group, quite frankly… [laughter] But that’s awesome. Consider me a new member, because I love nerding out about automation, and I use everything from Bash, to ASTs… I’ve been around the JavaScript world long enough to have just seen the patterns evolve… So it’s nice to have some of that grandma knowledge to bring to this group.
[00:59:47.18] I am curious though… One thing that does come up if you’ve been doing this for a while - you said you’ve been doing this for ten years, and Amal has been doing it for a really long time - one of the things, as someone who’s also been doing it for a while, going from Grunt, to Gulp, to WebPack, and now Snowpack… People talk about JavaScript fatigue a lot, which is this constant moving from tool to tool… Which also brings up the question which I think we touched on during the break a little bit, which is like “Are we adding complexity where complexity is not needed? And is there a way in which we can move forward where we’re not completely obliterating –” Because frontend infrastructure is gonna be a thing; people are gonna always wanna bundle, and transpile, and as long as that exists, this sort of work will exist. But is there a way and a path forward where we can make it streamlined?
I think it is a luxury to have a team dedicated to frontend infrastructure, and I don’t think that that’s something every team can do… So do you see a future in which this is easy for people to get into and deepen their knowledge, without having to know everything?
Right. I think it kind of comes back to – I heard this quote once when I was learning about all these different design patterns. Somebody said something like “Design patterns aren’t created, they’re discovered.” And I think that’s so true for this build stuff as well. Without us having gone crazy out there on these WebPack configs that are like 1,000 lines long, we wouldn’t have arrived on what that webpack -p mode does. We had to kind of go crazy for a little bit…
I sort of think we have reached a point at which the innovation has sort of leveled out a little bit. Snowpack is a more recent one, but… Finally, that JS fatigue – I remember going to conferences a couple years ago, and every talk was about JS fatigue. I’ve seen less of that now. I think we’re finally getting over that hump, to a certain extent, because people went off and innovated, and now we’ve sort of found those common denominators about what things need to be there… And now that’s why you’re seeing frameworks like Next, and Gatsby, and Create React App, and Create Next App, and all these things become more popular. And then maybe the evolution to that is – you know, we talked a little bit in the break again about where do we go in the future; maybe tools like Rust can come in and help speed things up, and who knows what’s gonna happen next.
Yeah, it’s interesting you brought that up, because we actually – especially with Babel, there are a lot of people talking about how Babel is complex, and sometimes it’s really slow, and there’s a lot of issues with it… And part of it is implementation, part of it is also just community, and how much time you can put into open source etc. But it’s interesting to see JavaScript tooling move in a direction that I just never thought that it would move into.
Now you see Rust coming into the fold, so you have things like SWC, that allows you to do TypeScript checking for you, which is way faster than Babel… Which I think, sort of almost to Nick’s question, moves into this completely – it sort of takes DevOps and DivOps and it’s almost like DivOps moves in that direction really quickly… Because as we see people moving towards picking other languages other than JavaScript to write tooling, then is that even frontend anymore? Because that’s almost full-on DevOps at that point.
Yeah, that’s a great call-out. And again, it just kind of comes back to the whole thing, like “What am I? What is my job description?” I’m a frontend infrastructure person not writing frontend at that point. But I think it’s just like picking a framework; some frameworks make sense for you, some don’t.
We did the pick-your-own-adventure game with React and Redux, and it kind of goes for tooling, too. If you’re hitting bottlenecks in your tooling – you’re probably not gonna be hitting bottlenecks in speed, just building some landing pages, marketing pages, little eCommerce sites; that’s probably not the problem. But a big company is like – we are where we’re dealing with 10,000 JavaScript files. If you’re hitting that performance bottleneck, something like Rust or Go might make sense. It’s a new thing to learn, but it’s gonna solve some of those performance bottlenecks. But it’s about picking and meeting the problem where it’s at, and not just creating problems that don’t exist yet.
[01:04:10.29] If you’re not dealing with 10,000, 50,000, 100,000 file-projects, Rust and Go probably don’t make sense yet. At least not yet. Maybe in a year or so there’ll be some more incredible Go and Rust tooling for frontend… We’re getting there though. But picking the tool that makes sense for you and your team where you’re at is what’s important, I think. Just like picking a framework.
Yeah. YAGNI never gets old. You ain’t gonna need it, but also don’t pre-optimize.
Right.
I personally think we really have a problem in our community that’s a side effect of being an engineer, I think. Everybody’s got this problem, but in varying degrees. Some people have it worse than others, but the need to kind of want to over-engineer…
One of my favorite talks from Kent C. Dodds is his a-ha thing where he’s like “Avoid hasty abstractions.” And it’s so true. Just don’t abstract until you need it. Don’t go crazy doing things until you find there’s a need. And doing small, little things is okay. Just iterate and add value as you go. You don’t have to boil the ocean at first.
Yeah. I honestly think code reviews have made that problem – I think people feel the need to have everything perfect on the first iteration… And I think you have to remind people that - first pass, second pass… There is a conflict with wanting to have the perfect PR, and wanting to deliver it in iterations; it’s difficult. I feel like the PR workflow doesn’t communicate well when something is the one. First pass. Versus final rubber stamp.
Exactly.
And it would be nice to be able to do some more iterative delivery and communicate that more clearly with people.
Definitely.
I just wanted to give you an opportunity to tell us about the logo that you have for DivOps, because it’s awesome.
Oh yes, yes. So I was sitting there doodling one day, and I drew the angle brackets and the hammer, and I was like “That kind of looks like Mjölnir, Thor’s hammer.” So I sketched something out that kind of looked like it. Also, when people ask me what I do that are not tech people, I tell them “Yeah, we’re kind of like a hammer builder. We build hammers for other people to build stuff.” That’s the easiest way I can describe to a non-tech person what I actually do in my job. So I saw the angle bracket, and the hammer, I saw the Mjölnir, and then my friend David Neal on Twitter - he’s a really great illustrator now, and also engineer. We’ve known each other for a really long time. So I threw it at him and he came back with that, and I was really excited about it. He’s awesome. If you don’t follow David, give him a follow, too. He’s great.
That’s awesome. We’ll try to link his profile. Thanks for calling out the logo thing, Nick, because I feel like logos are what make things official in JavaScript communities, you know?
Exactly.
Yeah, it’s true.
So… Not official until there’s a logo, and a website that ends in .dev or .io, because .com was taken.
And a Discord.
Yes, and a Discord. [laughter] And a Discord or a Slack channel… Increasingly Discord, more so than Slack.
Yup. So I’d love to continue conversations with whoever is interested in this stuff. You can search my name, I’m everywhere - Jonathan Creamer, @jcreamer898. And the DivOps community, like I said, I have a URL; it’s divops.dev. It’s gross. It’s just the most basic Gatsby thing ever… So if anybody wants to make it not gross, that’d be awesome.
[01:07:54.10] And then we’re doing the Slack community thing, and hanging out in there, just chatting about – somebody asked today some TypeScript questions, so I’ll probably go in there and answer some TypeScript questions. So yes, definitely feel free to join and chat online. This is what I’m passionate about.
Nice!
So I have a parting question before we end…
Okay.
How do we know that the Slack people are people in your channel, and not bots that have been created by all the DivOps folks?
[laughs] That’s a very good question.
How do we know? Do we know? Can we know, is the question, really… I see you typing already. Just kidding. [laughs]
Yeah, right, right… Most people that came in gave intros, and we said hey, and we meet every now and then to talk… But yeah, that’s a good question; they’re not sentient beings.
Well, thank you for answering my question in a serious way. I really appreciate that you took my question seriously. That’s awesome. [laughs] Thank you. So with that said, Jonathan, you are, I would say, gold for any team writing JavaScript. You and all of your teammates.
Thank you.
We need to clone you. I wish more companies had budget for this role, and this focus… it makes more sense at scale, but it’s a job that needs to be done for anyone that’s writing modern web applications. It just kind of sucks for smaller companies and smaller teams. Developers are just doing both jobs, and it’s nice to have the luxury of separating those concerns.
I think if you don’t, and you’re a company that doesn’t have that budget, it’s important to just sort of formalize it a little, at least yourselves. Just meet and talk, and write things down. That’s really the biggest thing. For the longest time everything was up here, in my head, just spinning around, and I didn’t put it on paper… [unintelligible 01:09:49.13] is massively important, not only for yourself. You can just offload your memory into a different place, and remember why you made decisions, and come back to them later, like “Oh God, why was it that I had to install this Babel Module Resolver plugin? I completely don’t remember.” And then you can go back and see “Oh yeah, this is why.” So write things down, talk, communicate… It comes back to communication; it’s key in all of this.
I also like that you formalize it in terms of just like a process… Because for so long, even for me, embarrassingly, when I work on tooling I think I’m not actually doing frontend. I’m just like “Oh, I’m doing a thing that will then allow me to do the work I need to do.” And so I’ll spend a week building a Rollup config, and I will be like “I was supposed to be building this, but I was building this…”
[laughs] Yak shaving.
…and felt like I actually didn’t do any work…
Yeah, I know.
Which is interesting, because if you think about it, that is sort of related.
It is.
And it’s not yak shaving, even though people think it is… To some extent it could be, but–
It’s important, yak shaving, though…
You might spend five hours messing around with the config, and then you’ve found one thing that was like “Oh, that was actually what I needed”, and you put it back, and that five hours of yak shaving exploration was actually massive important, because it simplified some part of your flow that maybe you didn’t know existed until you went and explored it… And now you’ve automated the pain away.
Yeah. I’m really glad that this tagline that I came up with is becoming–
It’s catching on.
Yeah. And I don’t even know - Nick, is it shared credit? I don’t know… All I know is that I should be on the bottom of whatever readme file, along with Nick and Divya. It was invented here. [laughs]
[unintelligible 01:11:33.29] Copyright.
Thank you.
We hope to have you back on the show again…
Yeah, anytime. I’d love to hang out.
Because it’s an evolving thing…
Yeah. Anytime you talk about WebPack, I’ll just show up and just hang out and listen and ask questions. Or build tooling.
Or build tooling, right. Well, we’ll link to everything that we talked about in the show notes, including the community, the blog post that kind of started it all, you’ll get to see the logo… We have so many links.
With that said, thanks everyone. We’ll see you next week!
Our transcripts are open source on GitHub. Improvements are welcome. 💚
If you’re reading this article, it’s probably because you already know about or have heard of container orchestration. But if it’s the first time you’ve come across the term, the main point of this article is to show you how to implement amazing, container-based software architecture, so welcome. Check out my second post on how to deploy a Java 8 Spring Boot application on a Kubernetes cluster.
When I started to use container microservices (specifically, Docker containers), I was happy and thought my applications were amazing. As I learned more, though, I understood that the container runtime APIs work fine when managing individual containers, but they are inadequate when managing applications that may span hundreds of containers across multiple servers or hosts. So my applications weren't exactly as amazing as I had thought. I needed something to manage the containers, because they needed to be connected to the outside world for tasks such as load balancing, distribution and scheduling. As my applications started to be used by more and more people, my services weren't able to support a lot of requests; I was nervous because the result of all my effort seemed to be collapsing. But, as you might have already guessed, this is the part of the story where "Kubernetes" comes in.
Kubernetes is an open source orchestrator developed by Google for deploying containerized applications. It provides developers and DevOps with the software they need to build and deploy distributed, scalable, and reliable systems.
Ok, so maybe you are asking yourself, “How could Kubernetes help me?” Personally, Kubernetes helped me with one constant in the life of every application: change. The only applications that do not change are dead ones; as long as an application is alive, new requirements will come in, more code will be shipped, and it will be packaged and deployed. This is the normal lifecycle of all applications, and developers around the world have to take this reality into account when looking for solutions.
If you’re wondering how Kubernetes is structured, let me explain it quickly:
● A node is a worker machine, either a VM or a physical machine, depending on the cluster.
● A group of nodes is a cluster.
● A container image wraps an application and its dependencies.
● One or more containers are wrapped into a higher-level structure called a “pod.”
● Pods are usually managed by one more layer of abstraction: deployment.
● A "service" is a group of pods exposed behind a single access point, with traffic load-balanced to the running containers.
● A framework responsible for ensuring that a specific number of pod replicas are scheduled and running at any given time is a “replication controller.”
● The key-value tags (i.e. the names) assigned to identify pods, services, and replication controllers are known as “labels.”
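To make that vocabulary concrete, here is what a minimal Deployment description looks like. I'm writing it as a Python dictionary purely for illustration (in practice it would be a YAML or JSON manifest submitted to the cluster), and the name, labels, and image are all made up for this example:

```python
# A Deployment that asks Kubernetes to keep 3 replicas of one pod running.
# All names here (the "app" label, the image tag) are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "replicas": 3,  # the controller keeps this many pods scheduled
        "selector": {"matchLabels": {"app": "web"}},  # label-based grouping
        "template": {  # the pod: one or more containers managed together
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "example/web:1.0",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

print(deployment["kind"], deployment["spec"]["replicas"])  # prints: Deployment 3
```

Notice how the pieces above map onto the manifest: the pod is the `template`, the labels tie the selector to the pods, and the replica count is what the controller continuously enforces.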
When I decided to develop applications that run in multiple operating environments, including dedicated on-prem servers and public clouds such as Azure and AWS, my first obstacle was infrastructure lock-in. Traditionally, applications and the technologies that make them work have been closely tied to the underlying infrastructure, so it was expensive to use other deployment models despite their potential advantages. This meant that applications tended to become dependent on a particular environment, leading to performance issues. Kubernetes eliminates infrastructure lock-in by offering core capabilities for containers without imposing constraints. It achieves this through a combination of features within the platform itself (pods and services).
My next challenge was to find something to manage my containers. I wanted my applications to be broken down into smaller parts with a clear separation of functionality. The abstraction provided by individual container images made me fundamentally rethink how distributed applications are built. This modular approach allows for faster development by smaller teams that are each responsible for specific containers. All this sounds good so far, but containers alone can't achieve it; there needs to be a system for integrating and orchestrating these modular parts. Kubernetes achieves this in part by using pods: sets of containers that are managed as a single application. The containers in a pod share resources such as kernel namespaces, IP addresses and file systems. By allowing containers to be grouped in this manner, Kubernetes effectively removes the temptation to cram too much functionality into a single container image.
Another important thing that Kubernetes has offered me is the ability to speed up the process of building, testing, and releasing software with “Kubernetes Controllers.” Thanks to these controllers, I can resolve complicated tasks such as:
● Visibility: I can easily recognize in-process, completed and failed deployments with state querying capabilities.
● Version control: I was able to update deployed pods using newer versions of application images and roll back to an earlier deployment if the current version was not stable.
● Scalability: I was amazed by what Kubernetes can do; applications can be deployed for the first time in a scalable way across pods, and deployments can be scaled in or out at any time.
● Deployment timing: I was able to stop a deployment at any time and resume it later.
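Under the hood, each of these controller features relies on the same idea: a reconciliation loop that compares the observed state with the desired state and acts on the difference. This toy sketch (my own simplification, not real Kubernetes code) shows the shape of it:

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions a controller would take to converge on the desired count."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: schedule replacements.
        return [("start", f"pod-{i}") for i in range(diff)]
    if diff < 0:
        # Too many pods: stop the surplus.
        return [("stop", pod) for pod in running_pods[:(-diff)]]
    return []  # Observed state already matches desired state.

# Scaling out from 1 running pod to 3 yields two "start" actions.
print(reconcile(3, ["pod-a"]))  # prints: [('start', 'pod-0'), ('start', 'pod-1')]
```

Scaling a deployment in or out, rolling back a version, or pausing a rollout are all just different ways of changing the desired state that this kind of loop then converges on.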
As I researched — and played a little bit — with Kubernetes, I found that other orchestration and management tools have emerged, such as AWS ECS, Docker Swarm, Apache Mesos and Marathon. Obviously, each one has its benefits, but I have noticed that they are starting to copy each other's functionality and features. Kubernetes, meanwhile, remains hugely popular due to its innovation, big open source community, and architecture.
Without Kubernetes, I would probably have been forced to create my own workflows for updates, software deployment and scaling; in other words, I would have had to put in a lot more time and effort. Kubernetes lets us squeeze the maximum utility out of containers and build cloud-native applications that can run anywhere, without being tied to cloud-specific requirements.
Stay tuned for more posts on Kubernetes!
I am very new to C++ programming, and I keep running into the following error:
Reference to overloaded function could not be resolved; did you mean to call it?
Below is my code that is causing me all sorts of trouble:
#include <stdio.h>
#include <string>
using namespace std;
int main() {
string myname;
string myage;
cout << "Enter the name and age: ";
cin >> myname >> myage;
cout << "Hello, " << myname << ", are you " << myage << " years old?\n";
return 0;
}
I am currently using Xcode on macOS Mojave.
I have also noticed that if I have only this code, it works fine, but when I try to use multiple files, all of the files fail to work.
Can anyone explain why my code is failing and what the solution might be?
Solution :
In your code, <stdio.h> does not declare std::cin and std::cout. That header only declares the C input/output functions, like printf and scanf. So it is an I/O header, but it is not the one you are looking for.

You simply need to include <iostream> to get std::cin and std::cout.
Once you make this simple change, your code will start giving you the desired output.
Unbound Frontier_exploration [closed]
Hello, I am currently working on a TurtleBot with Ubuntu 14.04 LTS and ROS Indigo, and I am trying to send the frontier_server an actionlib goal to do unbounded exploration. My problem is that when the explore_server receives my goal it says "couldn't transform from map to U+0001 Failed to set region boundary". From the explore_server code it seems that the problem comes from the updateBoundaryPolygon service, but I don't really know what the problem is.
Here is the code I use as a single-goal actionlib client:
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <actionlib/client/terminal_state.h>
#include <test_client/ExploreTaskAction.h>

int main (int argc, char **argv)
{
  ros::init(argc, argv, "frontier_exploration_client");

  // create the action client
  // true causes the client to spin its own thread
  actionlib::SimpleActionClient<test_client::ExploreTaskAction> ac("explore_server", true);

  ROS_INFO("Waiting for action server to start.");
  // wait for the action server to start
  ac.waitForServer(); //will wait for infinite time

  ROS_INFO("Action server started, sending goal.");
  // send a goal to the action
  test_client::ExploreTaskGoal goal;
  goal.explore_boundary.header.seq = 1;
  goal.explore_boundary.header.frame_id = 1;
  goal.explore_center.point.x = 1;
  goal.explore_center.point.y = 0;
  goal.explore_center.point.z = 0;
  ac.sendGoal(goal);

  //wait for the action to return
  bool finished_before_timeout = ac.waitForResult(ros::Duration(30.0));

  if (finished_before_timeout)
  {
    actionlib::SimpleClientGoalState state = ac.getState();
    ROS_INFO("Action finished: %s", state.toString().c_str());
  }
  else
    ROS_INFO("Action did not finish before the time out.");

  //exit
  return 0;
}
Any clues on where i made a mistake ?
EDIT: OK, problem solved. The cause was a badly configured header: it needs to be goal.explore_boundary.header.frame_id = "map" to work. The problem now is that the frontier explorer node only gives goals that are at the robot's position, resulting in the robot staying at its initial position.
Will need a lot more information than that to find out why your robot is staying at the initial position. I suggest you answer and resolve this question with the frame_id info, and make a new question regarding the new issue.
Hi! Can you please tell me what the test_client is? I thought that ExploreTaskAction.h was a part of the Frontier Exploration package.
(I'm referring to this line: #include <test_client/ExploreTaskAction.h>)
Hello, ExploreTaskAction.h is indeed part of the Frontier Exploration package; I just took it and put it in my own package to test only the sending of a goal.
I'm making a 2d board-game-style game in Unity 5, and I have a prefab made up of a couple of sprites which represents a game piece. I want some text in my prefab that I can update as the game progresses.
If I try to add text, it requires a canvas, but when I create a canvas, an extraordinarily enormous canvas is created, that looks to be at least 1,000 times bigger than my camera area. If I try to place this canvas inside my prefab, my prefab is now made of an enormously huge canvas, and my tiny sprite images. This makes the prefab impossible to position, or calculate sizing or animate, or anything else I want to do.
How can I add text to a prefab, and keep the text contained within the size of my prefab sprites?
Here's what I have tried so far:
If I set the canvas for the text to "Render Mode: World Space" I'm able to make its rect transform smaller. However, if I get it as small as my sprites, the text becomes an unreadably blurry mess. I guess this happens because my sprites are literally at least 1000x smaller than the canvas, so when I zoom in enough to even see the sprites, the text has been zoomed into oblivion. My sprites are so much smaller than the canvas, that if I am zoomed out to see the full canvas, my sprites are not even visible.
I'm able to kind of make things work if I recreate my prefab using UI Images instead of sprites. This way, the UI Images and the text are both UI elements contained in the enormous canvas, so the size disparity doesn't exist. However, I don't know what the pitfalls are going to be trying to build an entire game out of UI Images instead of sprites. Do I get all the same capabilities as sprites?
Answer by Pharan · Oct 04, 2015 at 01:00 AM
Use the first method (World Space canvas), but scale down the Transform of the Canvas to around 0.01 on all axes.
This will make it match the default 2D sprite scale of "100 pixels per unit". Then set the dimensions and sizes of your UI elements.
On Fri, Jan 05, 2007 at 09:14:30PM +0000, Daniel P. Berrange wrote: > > The following series of (2) patches adds a QEMU driver to libvirt. The first patch > provides a daemon for managing QEMU instances, the second provides a driver letting > libvirt manage QEMU via the daemon. > > Basic architecture > ------------------ > > The reason for the daemon architecture is two fold: > > - At this time, there is no (practical) way to enumerate QEMU instances, or > reliably connect to the monitor console of an existing process. There is > also no way to determine the guest configuration associated with a daemon. Okay, we admitted that principle in the first round of QEmu patches last year. The only question I have is about the multiplication of running daemons for libvirt, as we also have another one already for read only xen hypervisor access. We could either decide to keep daemon usages very specific (allows to also easilly restrict their priviledge) or try to unify them. I guess from a security POV it's still better to keep them separate, and anyway they are relatively unlikely to be run at the same time (KVM and Xen on the same Node). > - It is desirable to be able to manage QEMU instances using either an unprivilegd > local client, or a remote client. The daemon can provide connectivity via UNIX > domain sockets, or IPv4 / IPv6 and layer in suitable authentication / encryption > via TLS and/or SASL protocols. C.f. my previous mail, yes authentication is key. Could you elaborate in some way how the remote access and the authentication is set up, see my previous mail on the remote xend access, we should try to unify and set up a specific page to document remote accesses. > Anthony Ligouri is working on patches for QEMU with the goal of addressing the > first point. For example, an extra command line argument will cause QEMU to save > a PID file and create a UNIX socket for its monitor at a well-defined path. 
More > functionality in the monitor console will allow the guest configuration to be > reverse engineered from a running guest. Even with those patches, however, it will > still be desirable to have a daemon to provide more flexible connectivity, and to > facilitate implementation libvirt APIs which are host (rather than guest) related. > Thus I expect that over time we can simply enhance the daemon to take advantage of > newer capabilities in the QEMU monitor, but keep the same basic libvirt driver > architecture. Okay, Work in Progress. > Considering some of the other hypervisor technologies out there, in particular > User Mode Linux, and lhype, it may well become possible to let this QEMU daemon > also provide the management of these guests - allowing re-use of the single driver > backend in the libvirt client library itself. Which reopen the question, one multi-featured daemon or multiple simpler (but possibly redundant) daemons ? > XML format > ---------- > > As discussed in the previous mail thread, the XML format for describing guests > with the QEMU backend is the same structure as that for Xen guests, with > following enhancements: > > - The 'type' attribute on the top level <domain> tag can take one of the > values 'qemu', 'kqemu' or 'kvm' instead of 'xen'. This selects between > the different virtualization approaches QEMU can provide. > > - The '<type>' attribute within the <os> block of the XML (for now) is > still expected to the 'hvm' (indicating full virtualization), although > I'm trying to think of a better name, since its not technically hardware > accelerated unless you're using KVM yeah I don't have a good value to suggest except "unknown" bacause basically we don't know a priori what the running OS will be. > - The '<type>' attribute within the <os> block of the XML can have two > optional 'arch' and 'machine' attributes. 
The former selects the CPU > architecture to be emulated; the latter the specific machine to have > QEMU emulate (determine those supported by QEMU using 'qemu -M ?'). Okay, I hope we will have enough flexibility in the virNodeInfo model to express the various combinations, we have a 32 char string for this, I guess that should be sufficient, but I don't know how to express that in the best way I will see how the pacth does it. From my recollection of posts on qemu-devel some of the machines names can be a bit long on specific emulated target. At least we should be okay for a PC architecture. > - The <kernel>, <initrd>, <cmdline> elements can be used to specify > an explicit kernel to boot off[1], otherwise it'll do a boot of the > cdrom, harddisk / floppy (based on <boot> element). Well,the kernel > bits are parsed at least. I've not got around to using them when > building the QEMU argv yet. Okay > - The disk devices are configured in same way as Xen HVM guests. eg you > have to use hda -> hdd, and/or fda -> fdb. Only hdc can be selected > as a cdrom device. Good ! > - The network configuration is work in progress. QEMU has many ways to > setup networking. I use the 'type' attribute to select between the > different approachs 'user', 'tap', 'server', 'client', 'mcast' mapping > them directly onto QEMU command line arguments. You can specify a > MAC address as usual too. I need to implement auto-generation of MAC > addresses if omitted. Most of them have extra bits of metadata though > which I've not figured out appropriate XML for yet. Thus when building > the QEMU argv I currently just hardcode 'user' networking. Okay, since user is the default in QEmu (assuming I remember correctly :-) > - The QEMU binary is determined automatically based on the requested > CPU architecture, defaulting to i686 if non specified. It is possible > to override the default binary using the <emulator> element within the > <devices> section. 
This is different to previously discussed, because > recent work by Anthony merging VMI + KVM to give paravirt guests means > that the <loader> element is best kept to refer to the VMI ROM (or other > ROM like files :-) - this is also closer to Xen semantics anyway. Hum, the ROM, one more parameter, actually we may once need to provide for mutiple of them at some point if they start mapping non contiguous area. > Connectivity > ------------ > > The namespace under which all connection URIs come is 'qemud'. Thereafter > there are several options. First, two well-known local hypervisor > connections > > - qemud:///session > > This is a per-user private hypervisor connection. The libvirt daemon and > qemu guest processes just run as whatever UNIX user your client app is > running. This lets unprivileged users use the qemu driver without needing > any kind admin rights. Obviously you can't use KQEMU or KVM accelerators > unless the /dev/ device node is chmod/chown'd to give you access. > > The communication goes over a UNIX domain socket which is mode 0600 created > in the abstract namespace at $HOME/.qemud.d/sock. okay, makes sense. Everything runs under the user privilege and there is no escalation. > - qemud:///system > > This is a system-wide privileged hypervisor connection. There is only one > of these on any given machine. The libvirt_qemud daemon would be started > ahead of time (by an init script), possibly running as root, or maybe under > a dedicated system user account (and the KQEMU/KVM devices chown'd to match). Would that be hard to allow autostart ? That's what we do for the read-only xen hypervisor access. Avoiding starting up stuff in init.d when we have no garantee it will be used, and auto-shutdown when there is no client is IMHO generally nicer, but that feature can just be added later possibly, main drawback is that it requires an suid binary. > The admin would optionally also make it listen on IPv4/6 addrs to allow > remote communication. 
> (see next URI example)
>
> The local communication goes over one of two possible UNIX domain sockets, both
> in the abstract namespace under the directory /var/run. The first socket, called
> 'qemud', is mode 0600, so only privileged apps (ie root) can access it, and
> gives full control capabilities. The other, called 'qemud-ro', is mode 0666, and
> any clients connecting to it will be restricted to only read-only libvirt
> operations by the server.
>
> - qemud://hostname:port/
>
>   This lets you connect to a daemon over IPv4 or IPv6. If omitted the port is
>   8123 (will probably change it). This lets you connect to a system daemon
>   on a remote host - assuming it was configured to listen on IPv4/6 interfaces.

Hum, for that the daemon requires being started statically too.

> Currently there is zero auth or encryption, but I'm planning to make it
> mandatory to use the TLS protocol - using the GNU TLS library. This will give
> encryption, and mutual authentication using either x509 certificates or
> PGP keys & trustdbs, or perhaps both :-) Will probably start off by implementing
> PGP since I understand it better.
>
> So if you wanted to remotely manage a server, you'd copy the server's
> certificate/public key to the client into a well known location. Similarly
> you'd generate a keypair for the client & copy its public key to the
> server. Perhaps I'll allow clients without a key to connect in read-only
> mode. Need to prototype it first and then write up some ideas.

Okay, though there are multiple authentication and encryption libraries, and
picking the Right One may not be possible; there are so many options, and people
may have specific infrastructure in place. Anyway the current state is no-auth,
so anything will be better :-)

> Server architecture
> -------------------
>
> The server is a fairly simple beast. It is single-threaded using non-blocking I/O
> and poll() for all operations. It will listen on multiple sockets for incoming
> connections.
> The protocol used for client-server comms is a very simple binary
> message format close to the existing libvirt_proxy.

Good, so we keep similar implementations. Any possibility of sharing part of
that code? These are always very sensitive areas, both for security and for edge
cases in the communication.

> Client sends a message, server
> receives it, performs the appropriate operation & sends a reply to the client. The
> client (ie libvirt driver) blocks after sending its message until it gets a reply.
> The server does non-blocking reads from the client, buffering until it has a single
> complete message, then processes it and populates the buffer with a reply and does
> non-blocking writes to send it back to the client. It won't try to read a further
> message from the client until it's sent the entire reply back. ie, it is a totally
> synchronous message flow - no batching/pipelining of messages.

Honestly I think that's good enough; I don't see hundreds of QEmu instances
having to be monitored remotely from a single Node. On a monitoring machine
things may be accelerated by multithreading the gathering process to talk to
multiple Nodes in parallel. At least on the server side I prefer to keep things
as straightforward as possible.

> During the time
> the server is processing a message it is not dealing with any other I/O, but thus
> far all the operations are very fast to implement, so this isn't a serious issue,
> and there are ways to deal with it if there are operations which turn out to take a
> long time. I certainly want to avoid multi-threading in the server at all costs!

Completely agree :-)

> As well as monitoring the client & client sockets, the poll() event loop in the
> server also captures stdout & stderr from the QEMU processes. Currently we just
> dump this to stdout of the daemon, but I expect we can log it somewhere.
> When we
> start accessing the QEMU monitor there will be another fd in the event loop - ie
> the pseudo-TTY (or UNIX socket) on which we talk to the monitor.

At some point we will need to look at adding a console dump API; that will be
doable for Xen too, but it's not urgent since nobody has requested it yet :-)

> Inactive guests
> ---------------
>
> Guests created using 'virsh create' (or equiv API) are treated as 'transient'
> domains - ie their config files are not saved to disk. This is consistent with
> the behaviour in the Xen backend. Guests created using 'virsh define', however,
> are saved out to disk in $HOME/.qemud.d for the per-user session daemon. The
> system-wide daemon should use /etc/qemud.d, but currently it's still /root/.qemud.d

Maybe this should be asked on the qemu-devel list; Fabrice and Co. may have a
preference on where to store config-related stuff for QEmu, even if it's not
directly part of QEmu.

> The config files are simply saved as the libvirt XML blob, ensuring no data
> conversion issues. In any case, QEMU doesn't currently have any config file
> format we can leverage. The list of inactive guests is loaded at startup of the
> daemon. New config files are expected to be created via the API - files manually
> created in the directory after initial startup are not seen. Might like to change
> this later.

Hum, maybe we could use FAM/gamin if found at configure time, but well, it's
just an additional feature; let's just avoid any unneeded timer.

> XML Examples
> ------------
>
> This is a guest using plain qemu, with x86_64 architecture and an ISA-only
> (ie no PCI) machine emulation. I was actually running this on a 32-bit
> host :-) VNC is configured to run on port 5906. QEMU can't automatically
> choose a VNC port, so if one isn't specified we assign one based on the
> domain ID. This should be fixed in QEMU....
> <domain type='qemu'>
>   <name>demo1</name>
>   <uuid>4dea23b3-1d52-d8f3-2516-782e98a23fa0</uuid>
>   <memory>131072</memory>
>   <vcpu>1</vcpu>
>   <os>
>     <type arch='x86_64' machine='isapc'>hvm</type>
>   </os>
>   <devices>
>     <disk type='file' device='disk'>
>       <source file='/home/berrange/fedora/diskboot.img'/>
>       <target dev='hda'/>
>     </disk>
>     <interface type='user'>
>       <mac address='24:42:53:21:52:45'/>
>     </interface>
>     <graphics type='vnc' port='5906'/>
>   </devices>
> </domain>
>
> A second example, this time using KVM acceleration. Note how I specify a
> non-default path to QEMU to pick up the KVM build of QEMU. Normally the KVM
> binary will default to /usr/bin/qemu-kvm - this may change depending on
> how distro packaging of KVM turns out - it may even be merged into regular
> QEMU binaries.
>
> <domain type='kvm'>
>   <name>demo2</name>
>   <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
>   <memory>131072</memory>
>   <vcpu>1</vcpu>
>   <os>
>     <type>hvm</type>
>   </os>
>   <devices>
>     <emulator>/home/berrange/usr/kvm-devel/bin/qemu-system-x86_64</emulator>
>     <disk type='file' device='disk'>
>       <source file='/home/berrange/fedora/diskboot.img'/>
>       <target dev='hda'/>
>     </disk>
>     <interface type='user'>
>       <mac address='24:42:53:21:52:45'/>
>     </interface>
>     <graphics type='vnc' port='-1'/>
>   </devices>
> </domain>

Okay, I'm nearing completion of a Relax-NG schema allowing validation of XML
instances. I will augment it to allow the changes, but based on last week's
discussion it should not be too hard, and it should still retain good validation
properties.

> Outstanding work
> ----------------
>
> - TLS support. Need to add TLS encryption & authentication to both the client
>   and server side for IPv4/6 communications. This will obviously add a dependency
>   on libgnutls.so in libvirt & the daemon. I don't consider this a major problem
>   since every non-trivial network app these days uses TLS. The other possible
>   impl, OpenSSL, has GPL-compatibility issues, so is not considered.
> - Change the wire format to use fixed-size data types (ie, int8, int16, int32, etc)
>   instead of the size-dependent int/long types. At the same time, define some rules
>   for the byte ordering. Client must match server ordering? Server must accept
>   client's desired ordering? Everyone must use BE regardless of server/client
>   format? I'm inclined to say client must match server, since it distributes the
>   byte-swapping overhead to all clients and lets the common case of x86->x86 be
>   a no-op.

Hum, on the other hand, if you do the conversion as suggested by IETF rules it's
easier to find the places where the conversion is missing, unless you forgot to
ntoh and hton in both client and server code. Honestly I would not take the
performance hit into consideration at that level, and not now; the RPC is going
to totally dominate it by orders of magnitude in my opinion.

> - Add a protocol version message as the first option, to let us use the protocol
>   at will later while maintaining compat with older libvirt client libraries.

Yeah, this also ensures you get a functioning server on that port!

> - Improve support for describing the various QEMU network configurations
>
> - Finish boot options - boot device order & explicit kernel
>
> - Open & use a connection to the QEMU monitor, which will let us implement
>   pause/resume, suspend/restore drivers, and device hotplug / media changes.
>
> - Return sensible data for virNodeInfo - will need to have operating-system-dependent
>   code here - parsing /proc for Linux to determine available RAM & CPU speed. Who
>   knows what for Solaris / BSD ?!? Anyone know of remotely standard ways of doing
>   this? Accurate host memory reporting is the only really critical data item we need.

The GNOME guys tried that; maybe dig up the gst (gnome system tools) code base :-)

> - There is a fair bit of duplication in various helper functions between the daemon
>   and the various libvirt driver backends.
> We should probably pull this stuff out into
> a separate lib/ directory, build it into a static library and then link that into
> both libvirt, virsh & the qemud daemon as needed.

Yes, definitely!

This all sounds excellent, thanks a lot!!!

Daniel

--
Red Hat Virtualization group
Daniel Veillard | virtualization library
veillard redhat com | libxml GNOME XML XSLT toolkit
Rpmfind RPM search engine
Components Interacting
Learn how to make React components interact with one another.
Key Concepts
Review core concepts you need to learn to master this subject
Returning HTML Elements and Components
React Component File Organization
this.props
defaultProps
props
this.props.children
Binding
this keyword
Call
super() in the Constructor
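Several of the entries above ("Binding", "this keyword", "Call") refer to plain-JavaScript behavior rather than anything React-specific. A minimal sketch with no React involved; the `Button` class and its names are purely illustrative:

```javascript
class Button {
  constructor() {
    this.label = "Save";
    // Bind once in the constructor so the method keeps its `this`
    // even when it is passed around as a detached callback -- the
    // same pattern React class components use for event handlers.
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick() {
    return `${this.label} clicked`;
  }
}

const button = new Button();
const detached = button.handleClick; // detached from the instance
console.log(detached()); // "Save clicked" only because of the bind
```

Without the `bind` call, invoking `detached()` would fail because `this` would no longer point at the instance.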
Returning HTML Elements and Components
class Header extends React.Component {
  render() {
    return (
      <div>
        <Logo />
        <h1>Codecademy</h1>
      </div>
    );
  }
}
A class component’s render() method can return any JSX, including a mix of HTML elements and custom React components.
In the example, we return a <Logo /> component and a “vanilla” HTML title. This assumes that <Logo /> is defined elsewhere.
Possible to implement LaTeX in Scene module?
Is there a way that I can get the Scene module to print a graphical LaTeX output, possibly as an image? My goal is to have Scene produce a few of these graphics so that they may be dragged around and have some interactivity. I have been exploring the SymPy and Matplotlib libraries, but it seems that both are concerned with printing to the console.

I'm fairly new to both libraries, and to most of Python really. I thought that I could produce the graphic through Matplotlib and have it available for Scene to draw, but I am struggling to execute such an approach. Thanks in advance!
If you can draw the LaTeX string in matplotlib, there's a savefig function that saves the plot to a PNG file, which you could read and display in the Scene.
import matplotlib.mathtext as mt

s = r'$\frac{A}{B} = C$'
mt.math_to_image(s, 'test.png')
the second argument to math_to_image could also be a BytesIO, etc.
Builder Design Pattern in C# with Examples
In this article, I am going to discuss the Builder Design Pattern in C# with examples. Please read our previous article where we discussed the Abstract Factory Design Pattern in C# with examples. The Builder Design Pattern falls under the category of the Creational Design Pattern. As part of this article, we are going to discuss the following pointers.
- What is the Builder Design Pattern?
- Understanding the Builder Design Pattern with a real-time example.
- Understanding the class diagram of the Builder Design Pattern.
- Implementing the Builder Design Pattern in C#.
- When to use the Builder Design Pattern in real-time applications.
What is the Builder Design Pattern?
The Builder Design Pattern builds a complex object using many simple objects and a step-by-step approach. The process of constructing a complex object should be generic so that the same construction process can be used to create different representations of the same complex object.

So, the Builder Design Pattern is all about separating the construction process from its representation. Only when the construction process of your object is very complex do you need to use the Builder Design Pattern. If this is not clear at the moment, don't worry; we will try to understand it with an example.
Please have a look at the following diagram. Here, Laptop is a complex object. In order to build a laptop, we have to use many small objects like an LCD Display, USB Ports, Wireless, Hard Drive, Pointing Device, Battery, Memory, DVD/CD Reader, Keyboard, Plastic Case, etc. So, we have to assemble these small objects to build the complex laptop object.

So, to build the complex laptop object, we need to define some kind of generic process, something like the one below.
1. Plug the memory
2. Plug the Hard Drive
3. Plug the battery
4. Plug the Keyboard
……
……
10. Cover the Laptop with a plastic case
So, using the above process, we can create different types of laptops, such as a laptop with a 14-inch or a 17-inch screen, or a laptop with 4GB or 8GB of RAM. All of these laptops follow the same generic construction process. Now, if you read the definition again, you should understand what the Builder Design Pattern means.
Understanding the Builder Design Pattern with one real-time example:
Let us understand the builder design pattern with one real-time example. Suppose we want to develop an application for displaying reports. The reports need to be displayed either in Excel or in PDF format. That means we have two representations of the same report. In order to understand this better, please have a look at the following diagram.

As you can see in the above image, we are generating the report either in Excel or in PDF. Here, the construction process involves several steps, such as creating a new report and setting the report type, header, content, and footer. If you look at the final output, we have one PDF representation and one Excel representation. Please have a look at the following diagram to understand the construction process and its representations.
Understanding the Class Diagram of Builder Design Pattern in C#
Let us understand the class diagram and the different components involved in the Builder Design Pattern. In order to understand this please have a look at the following diagram.
In order to separate the construction process from its representation, the Builder Design Pattern involves four components. They are as follows.
- Builder: The Builder is an interface that defines all the steps which are used to make the concrete product.
- Concrete Builder: The ConcreteBuilder class implements the Builder interface and provides implementations for all the abstract methods. The Concrete Builder is responsible for constructing and assembling the individual parts of the product by implementing the Builder interface. It also defines and tracks the representation it creates.
- Director: The Director takes those individual processes from the Builder and defines the sequence to build the product.
- Product: The Product is a class and we want to create this product object using the builder design pattern. This class defines different parts that will make the product.
Implementation of Builder Design Pattern in C#:
Let us implement the Report example that we discussed using the Builder Design Pattern in C# step by step.
Step 1: Creating the Product
Create a class file with the name Report.cs and then copy and paste the following code in it. This is our product class, and within this class, we define the attributes (such as ReportType, ReportHeader, ReportFooter, and ReportContent) which are common to creating a report. We also define one method, DisplayReport, to display the report details in the console.
using System;

namespace BuilderDesignPattern
{
    public class Report
    {
        public string ReportType { get; set; }
        public string ReportHeader { get; set; }
        public string ReportFooter { get; set; }
        public string ReportContent { get; set; }

        public void DisplayReport()
        {
            Console.WriteLine("Report Type :" + ReportType);
            Console.WriteLine("Header :" + ReportHeader);
            Console.WriteLine("Content :" + ReportContent);
            Console.WriteLine("Footer :" + ReportFooter);
        }
    }
}
Once we know the definition of the Product we are building, we need to create the Builder.
Step 2: Creating the Abstract Builder class
Create a class file with the name ReportBuilder.cs and then copy and paste the following code in it. This is going to be an abstract class, and this class provides the blueprint to create different types of reports. That means the subclasses are going to implement this ReportBuilder abstract class.
namespace BuilderDesignPattern
{
    public abstract class ReportBuilder
    {
        protected Report reportObject;

        public abstract void SetReportType();
        public abstract void SetReportHeader();
        public abstract void SetReportContent();
        public abstract void SetReportFooter();

        public void CreateNewReport()
        {
            reportObject = new Report();
        }

        public Report GetReport()
        {
            return reportObject;
        }
    }
}
Notice that here we have four abstract methods. So, each subclass of ReportBuilder will need to implement those methods in order to properly build a report. Now, we need to create a few concrete builder classes by implementing the above ReportBuilder abstract class.
Step 3: Creating Concrete Builder classes
In our example, we are dealing with two types of reports, i.e., Excel and PDF. So, we need to create two concrete builder classes by implementing the ReportBuilder abstract class and providing implementations for its abstract methods.
ExcelReport.cs
Create a class file with the name ExcelReport.cs and then copy and paste the following code in it. This ExcelReport class implements the ReportBuilder abstract class, which is the blueprint for creating report objects.
namespace BuilderDesignPattern
{
    class ExcelReport : ReportBuilder
    {
        public override void SetReportContent()
        {
            reportObject.ReportContent = "Excel Content Section";
        }

        public override void SetReportFooter()
        {
            reportObject.ReportFooter = "Excel Footer";
        }

        public override void SetReportHeader()
        {
            reportObject.ReportHeader = "Excel Header";
        }

        public override void SetReportType()
        {
            reportObject.ReportType = "Excel";
        }
    }
}
PDFReport.cs
Create a class file with the name PDFReport.cs and then copy and paste the following code in it. This class also implements the ReportBuilder abstract class and provides implementations for all the abstract methods. The following PDFReport class is used to create the report in PDF format.
namespace BuilderDesignPattern
{
    public class PDFReport : ReportBuilder
    {
        public override void SetReportContent()
        {
            reportObject.ReportContent = "PDF Content Section";
        }

        public override void SetReportFooter()
        {
            reportObject.ReportFooter = "PDF Footer";
        }

        public override void SetReportHeader()
        {
            reportObject.ReportHeader = "PDF Header";
        }

        public override void SetReportType()
        {
            reportObject.ReportType = "PDF";
        }
    }
}
Once we have the required concrete builder classes, we need to create the director. The director will execute the required steps to create a particular report.
Step 4: Creating the Director
Please create a class file with the name ReportDirector.cs and then copy and paste the following code in it. The following class has one generic method, MakeReport, which takes a ReportBuilder instance as an input parameter and then creates and returns a particular report object.
namespace BuilderDesignPattern
{
    public class ReportDirector
    {
        public Report MakeReport(ReportBuilder reportBuilder)
        {
            reportBuilder.CreateNewReport();
            reportBuilder.SetReportType();
            reportBuilder.SetReportHeader();
            reportBuilder.SetReportContent();
            reportBuilder.SetReportFooter();
            return reportBuilder.GetReport();
        }
    }
}
Note: This MakeReport method is so generic that it can create and return different types of report objects. Once we have the Director and the Concrete Builders, we can use them in the Main method to create different types of reports.
Step 5: Client code
Please modify the Main method as shown below. Here, we first create an instance of the ReportDirector class and then create an instance of the PDFReport class. Once we have the ReportDirector and PDFReport instances, we call the MakeReport method on the ReportDirector instance, passing the PDFReport instance as an argument, which will create and return the report in PDF format. The same process applies to the Excel report.
using System;

namespace BuilderDesignPattern
{
    class Program
    {
        static void Main(string[] args)
        {
            // Client Code
            Report report;
            ReportDirector reportDirector = new ReportDirector();

            // Construct and display Reports
            PDFReport pdfReport = new PDFReport();
            report = reportDirector.MakeReport(pdfReport);
            report.DisplayReport();

            Console.WriteLine("-------------------");

            ExcelReport excelReport = new ExcelReport();
            report = reportDirector.MakeReport(excelReport);
            report.DisplayReport();

            Console.ReadKey();
        }
    }
}
Now run the application and you should get the output as shown below.
When to use the Builder Design Pattern in real-time applications?
We need to use the Builder Design Pattern in real-time applications in the following scenarios.
- When you want to make a complex object by specifying only its type and content. The built object is constructed from the details of its construction.
- When you decouple the process of building a complex object from the parts that make up the object.
- When you want to isolate the code for construction and representation.
In the next article, I am going to discuss the Builder Design Pattern Real-time Example using C#. Here, in this article, I try to explain the Builder Design Pattern in C# with Examples. I hope you understood the need and use of the Builder Design Pattern in C#.
How to Add Cluster Support to Node.js
April 1st, 2021
What You Will Learn in This Tutorial
How to use the Node.js cluster module to take advantage of a multi-core processor in your production environment.
By nature, JavaScript is a single-threaded language. This means that when you tell JavaScript to complete a set of instructions (e.g., create a DOM element, handle a button click, or in Node.js to read a file from the file system), it handles each of those instructions one at a time, in a linear fashion.
It does this regardless of the computer it's running on. If your computer has an 8-core processor and 64GB of ram, any JavaScript code you run on that computer will run in a single thread or core.
The same rules apply in a Node.js application. Because Node.js is based on the V8 JavaScript Engine the same rules that apply to JavaScript apply to Node.js.
When you're building a web application, this can cause headaches. As your application grows in popularity (or complexity) and needs to handle more requests and additional work, if you're only relying on a single thread to handle that work, you're going to run into bottlenecks—dropped requests, unresponsive servers, or interruptions to work that was already running on the server.
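The run-to-completion behavior described above is easy to see in a few lines. This standalone sketch (written in CommonJS style so it runs as-is with plain `node`) queues a zero-delay timer and then hogs the thread with synchronous work:

```javascript
// A zero-delay timer is queued immediately, but the single thread
// must finish the current synchronous work before any callback runs.
const events = [];

setTimeout(() => events.push("timer fired"), 0);

// CPU-bound work occupies the one and only thread...
let total = 0;
for (let i = 0; i < 5000000; i++) total += i;
events.push("loop finished");

// ...so at this point the timer callback still has not run.
console.log(events); // ["loop finished"]
```

The `"timer fired"` entry only appears after the current synchronous block yields back to the event loop, which is exactly why a long-running request can stall everything else on a single core.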
Fortunately, Node.js has a workaround for this: the cluster module.
The cluster module helps us to take advantage of the full processing power of a computer (server) by spreading out the workload of our Node.js application. For example, if we have an 8-core processor, instead of our work being isolated to just one core, we can spread it out to all eight cores.
Using cluster, our first core becomes the "master" and all of the additional cores become "workers." When a request comes into our application, the master process performs a round-robin style check asking "which worker can handle this request right now?" The first worker that meets the requirements gets the request. Rinse and repeat.
Setting up an example server
To get started and give us some context, we're going to set up a simple Node.js application using Express as an HTTP server. We want to create a new folder on our computer and then run:
npm init --force && npm i express
This will initialize our project using NPM (the Node.js Package Manager) and then install the express NPM package.
Be mindful of Node.js and NPM version here
For this tutorial, we're using Node.js v15.13.0 with NPM v7.7.6. Check out this tutorial on using NVM to install and manage different versions of Node.js.
After this is complete, we'll want to create an index.js file in our new project folder:
/index.js
import express from "express";

const app = express();

app.use("/", (req, res) => {
  res.send(
    `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
  );
});

app.listen(3000, () => {
  console.log("Application running on port 3000.");
});
Here, we import express from "express" to pull express into our code. Next, we create an instance of express by calling that import as a function and assigning it to the variable app.
Next, we define a simple route at the root / of our application with app.use() and return some text to ensure things are working (this is just for show and won't have any real effect on our cluster implementation).
Finally, we call app.listen(), passing 3000 as the port (after we start the app, we'll be able to access it in the browser on port 3000). Though the message itself isn't terribly important, as a second argument to app.listen() we pass a callback function to log out a message when our application starts up. This will come in handy when we need to verify if our cluster support is working properly.
To make sure this all works, in your terminal, cd into the project folder and then run node index.js. If you see the following, you're all set:
$ node index.js
Application running on port 3000.
Adding Cluster support to Node.js
Now that we have our example application ready, we can start to implement cluster. The good news is that the cluster package is included in the Node.js core, so we don't need to install anything else.
To keep things clean, we're going to create a separate file for our Cluster-related code and use a callback pattern to tie it back to the rest of our code.
/cluster.js
import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};
Starting at the top, we import two dependencies (both of which are included with Node.js and do not need to be installed separately): cluster and os. The former gives us access to the code we'll need to manage our worker cluster and the latter helps us to detect the number of CPU cores available on the computer where our code is running.
Just below our imports, we export the function we'll call from our main index.js file later. This function is responsible for setting up our Cluster support. As an argument, make note of our expectation of a callback function being passed. This will come in handy later.
Inside of our function, we use the aforementioned os package to communicate with the computer where our code is running. Here, we call os.cpus().length, expecting os.cpus() to return an array, and then measure the length of that array (which represents the number of CPU cores on the computer).
With that number, we can set up our Cluster. All modern computers have a minimum of 2-4 cores, but keep in mind that the number of workers created on your computer will differ from what's shown below. Read: don't panic if your number is different.
/cluster.js
[...]

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  }

[...]
The first thing we need to do is to check if the running process is the master instance of our application—not one of the workers that we'll create next. If it is the master instance, we run a for loop up to the cpus count we determined in the previous step. Here, we say "for as long as the value of i (our current loop iteration) is less than the number of CPUs we have available, run the following code."
The following code is how we create our workers. For each iteration of our for loop, we create a worker instance with cluster.fork(). This forks the running master process, returning a new child or worker instance.
Next, to help us relay messages between the workers we create and our master instance, we add an event listener for the message event to the worker we created, giving it a callback function.
That callback function says "if one of the workers sends a message, relay it to the master." So, here, when a worker sends a message, this callback function handles that message in the master process (in this case, we log out the message along with the pid of the worker that sent it).
This can be confusing. Remember, a worker is a running instance of our application. So, for example, if some event happens inside of a worker (we run some background task and it fails), we need a way to know about it.
In the next section, we'll take a look at how to send messages from within a worker that will pop out at this callback function.
One more detail before we move on, though. We've added one additional event handler here, but this time, we're saying "if the cluster (meaning any of the running worker processes) receives an exit event, handle it with this callback." The "handling" part here is similar to what we did before, but with a slight twist: first, we log out a message along with the worker's pid to let us know the worker died. Next, to ensure our cluster recovers (meaning we maintain the max number of running processes available to us based on our CPU), we restart the process with cluster.fork().
To be clear: we'll only call cluster.fork() like this if a process dies.
/cluster.js
import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      // Listen for messages FROM the worker process.
      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};
One more detail. Finishing up with our Cluster code, at the bottom of our exported function we add an else statement to say "if this code is not being run in the master process, call the passed callback if there is one."
We need to do this because we only want our worker generation to take place inside of the master process, not any of the worker processes (otherwise we'd have an infinite loop of process creation that our computer wouldn't be thrilled about).
Putting the Node.js Cluster to use in our application
Okay, now for the easy part. With our Cluster code all set up in the other file, let's jump back to our index.js file and get everything set up:
/index.js
import express from "express";
import favicon from "serve-favicon";
import cluster from "./cluster.js";

cluster(() => {
  const app = express();

  app.use(favicon("public/favicon.ico"));

  app.use("/", (req, res) => {
    if (process.send) {
      process.send({ pid: process.pid, message: "Hello!" });
    }

    res.send(
      `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
    );
  });

  app.listen(3000, () => {
    console.log(`[${process.pid}] Application running on port 3000.`);
  });
});
We've added quite a bit here, so let's go step by step.
First, we've imported our cluster.js file up top as cluster. Next, we call that function, passing a callback function to it (this will be the value of the callback argument in the function exported by cluster.js).
Inside of that function, we've placed all of the code we wrote in index.js earlier, with a few modifications.
Immediately after we create our app instance with express(), you'll notice that we're calling app.use(), passing it another call to favicon("public/favicon.ico"). favicon() is a function from the serve-favicon dependency added to the imports at the top of the file.
This is to reduce confusion. By default, when we visit our application in a browser, the browser will make two requests: one for the page and one for the app's favicon.ico file. Jumping ahead, when we call process.send() inside of the callback for our route, we want to make sure that we don't get the request for the favicon.ico file in addition to our route.
Where this gets confusing is when we output messages from our worker. Because our route receives two requests, we'll end up getting two messages (which can look like things are broken).
To handle this, we import favicon from serve-favicon and then add a call to app.use(favicon("public/favicon.ico"));. After this is added, you should also add a public folder to the root of the project and place an empty favicon.ico file inside of that folder.
Now, when requests come into the app, we'll only get a single message as the favicon.ico request will be handled via the favicon() middleware.
Continuing on, you'll notice that we've added something above our res.send() call for our root / route:
if (process.send) {
  process.send({ pid: process.pid, message: "Hello!" });
}
This is important. When we're working with a Cluster configuration in Node.js, we need to be aware of IPC or interprocess communication. This is a term used to describe the communication—or rather, the ability to communicate—between the master instance of our app and the workers.
Here, process.send() is a way to send a message from a worker instance back to the master instance. Why is that important? Well, because worker processes are forks of the main process, we want to treat them like they're children of the master process. If something happens inside of a worker relative to the health or status of the Cluster, it's helpful to have a way to notify the master process.
Where this may get confusing is that there's no clear tell that this code is related to a worker.
What you have to remember is that a worker is just the name used to describe an additional instance of our application, or here, in simpler terms, our Express server.
When we say process here, we're referring to the current Node.js process running this code. That could be our master instance or it could be a worker instance.
What separates the two is the if (process.send) {} statement. We do this because our master instance will not have a .send() method available, only our worker instances. When we call this method, the value we pass to process.send() (here we're passing an object with a pid and message, but you can pass anything you'd like) pops out in the worker.on("message") event handler that we set up in cluster.js:
/cluster.js
worker.on("message", (message) => {
  console.log(`[${worker.process.pid} to MASTER]`, message);
});
Now this should be making a little more sense (specifically the to MASTER part). You don't have to keep this in your own code, but it helps to explain how the processes are communicating.
Running our Clustered server
Last step. To test things out, let's run our server. If everything is set up correctly, from the project folder in your terminal, run node index.js (again, be mindful of the Node.js version you're running):
$ node index.js
[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.
If everything is working, you should see something similar. The numbers on the left represent the process IDs for each instance generated, relative to the number of cores in your CPU. Here, my computer has a six-core processor, so I get six processes. If you had an eight-core processor, you'd expect to see eight processes.
Finally, now that our server is running, if we open up http://localhost:3000 in our browser and then check back in our terminal, we should see something like:
[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.
[25423 to MASTER] { pid: 25423, message: 'Hello!' }
The very last log statement is the message received in our worker.on("message") event handler, sent by our call to process.send() in the callback for our root / route handler (which is run when we visit our app at http://localhost:3000).
That's it!
Wrapping up
Above, we learned how to set up a simple Express server and convert it from a single-running Node.js process to a clustered, multi-process setup. With this, now we can scale our applications using less hardware by taking advantage of the full processing power of our server.
Combining data from a database and a web service with Fetch
by Alejandro Gomez · January 19, 2017 · scala • http4s • fetch • doobie · 11 minutes to read.
In a previous article, I discussed how you can use Fetch to query and combine data from a variety of sources such as databases and HTTP servers. This time, we’re going to examine a full example using the Doobie library for DB access and http4s as the HTTP client.
For the sake of this example, let’s assume we have a DB table with users and we are storing related to-do items in a third-party web service. We’ll query users from the DB and their list of to-do items from an HTTP API.
Querying the Database
We’ll start by creating our
user DB table and inserting a few values in it. For performing queries to the DB, we’ll need a Doobie
Transactor; you can find details on how to create one in Doobie’s documentation. We’ll create a transactor that runs queries to the
Task type from the fs2 library and makes sure that the
user table is created and populated with a few records upon transactor creation:
type UserId = Int

case class User(id: UserId, name: String)

val dropTable = sql"DROP TABLE IF EXISTS user".update.run

val createTable = sql"""
  CREATE TABLE user (
    id INTEGER PRIMARY KEY,
    name VARCHAR(20) NOT NULL UNIQUE
  )
""".update.run

def addUser(usr: User) =
  sql"INSERT INTO user (id, name) VALUES(${usr.id}, ${usr.name})".update.run

val users: List[User] =
  List("William Shakespeare", "Charles Dickens", "George Orwell").zipWithIndex.map {
    case (name, id) => User(id + 1, name)
  }

val xa: Transactor[Task] = (for {
  xa <- createTransactor
  _  <- (dropTable *> createTable *> users.traverse(addUser)).transact(xa)
} yield xa).unsafeRunSync.toOption.getOrElse(
  throw new Exception("Could not create test database and/or transactor")
)
Now that we have the table created and a few records inserted, we can start using Doobie for querying users. We'll start by writing a couple of functions that'll help us run the queries for one user or multiple users:
import cats.data.NonEmptyList
import doobie.imports.{Query => _, _}

def userById(id: Int): ConnectionIO[Option[User]] =
  sql"SELECT * FROM user WHERE id = $id".query[User].option

def usersByIds(ids: NonEmptyList[Int]): ConnectionIO[List[User]] = {
  implicit val idsParam = Param.many(ids)

  sql"SELECT * FROM user WHERE id IN (${ids: ids.type})".query[User].list
}
Let’s run some queries for individual users:
import doobie.imports._

userById(1).transact(xa).unsafeRun
//=> Some(User(1,William Shakespeare))

userById(42).transact(xa).unsafeRun
//=> None
as well as multiple users:
import cats.data.NonEmptyList

val ids: NonEmptyList[UserId] = NonEmptyList(1, List(2, 3))

usersByIds(ids).transact(xa).unsafeRun
//=> List(User(1,William Shakespeare), User(2,Charles Dickens), User(3,George Orwell))
The only missing piece of the puzzle is the user data source, which should be fairly easy to implement now that we can query the user table.
implicit val userDS = new DataSource[UserId, User] {
  override def name = "UserDoobie"

  override def fetchOne(id: UserId): Query[Option[User]] = Query.sync {
    userById(id).transact(xa).unsafeRun
  }

  override def fetchMany(ids: NonEmptyList[UserId]): Query[Map[UserId, User]] = Query.sync {
    usersByIds(ids).map { users =>
      users.map(a => a.id -> a).toMap
    }.transact(xa).unsafeRun
  }
}

def user(id: UserId): Fetch[User] = Fetch(id)
We’ve seen how to create a DB data source using Doobie, now it’s time to move on to the HTTP data source and how we can use them together!
Querying the Web service
As I mentioned before, we are using a third-party web service for storing to-do items related to our users in the database. We'll use the JSONPlaceholder service to emulate queries to an API that stores to-do items, so let's start by using Circe for deriving the JSON decoders. Circe makes this really easy:
type TodoId = Int

case class Todo(id: TodoId, userId: UserId, title: String, completed: Boolean)

import io.circe._
import io.circe.generic.semiauto._

implicit val todoDecoder: Decoder[Todo] = deriveDecoder
That’s it; we can now decode
Todo instance from JSON payloads. Our next step is to write a function to fetch a user’s
to-do items given its user id, for which we will use the http4s HTTP client. One thing to take into account is that both
Doobie and http4s use the
Task type in some of their results types. Unfortunately, the former is from fs2 and the latter from Scalaz.
import org.http4s.circe._
import org.http4s.client.blaze._
import scalaz.concurrent.{Task => Zask}

val httpClient = PooledHttp1Client()

def todosByUser(id: UserId): Zask[List[Todo]] = {
  val url = s"https://jsonplaceholder.typicode.com/todos?userId=${id}"
  httpClient.expect(url)(jsonOf[List[Todo]])
}
We can now easily query the web service for obtaining a list of to-do items for a user:
todosByUser(1).run
// => List(Todo(1,1,delectus aut autem,false), ...)
The only piece missing is writing the to-do items' data source. We'll use the Query#async constructor and run the Scalaz Task returned by the HTTP client asynchronously. In the JVM, you can use Query#sync for blocking calls and Query#async for non-blocking calls, although in JS most of your data sources will use Query#async since I/O in JavaScript does not block.
implicit val todosDS = new DataSource[UserId, List[Todo]] {
  override def name = "TodoH4s"

  override def fetchOne(id: UserId): Query[Option[List[Todo]]] = Query.async { (ok, fail) =>
    todosByUser(id).unsafePerformAsync(_.fold(fail, (x) => ok(Some(x))))
  }

  override def fetchMany(ids: NonEmptyList[UserId]): Query[Map[UserId, List[Todo]]] =
    batchingNotSupported(ids)
}

def todos(id: UserId): Fetch[List[Todo]] = Fetch(id)
Like many HTTP APIs, the to-do items' endpoint doesn't support batching, so we implement DataSource#fetchMany with the default unbatched implementation, DataSource#batchingNotSupported.
Putting it all together
We now have both data sources in place; let's combine them and see how Fetch optimizes data access. For querying users and to-dos, we'll create a Fetch for each of them and combine them using .product. When combining fetches this way we are implicitly telling Fetch that they can be run in parallel:
def fetchUserAndTodos(id: UserId): Fetch[(User, List[Todo])] =
  user(id).product(todos(id))
Let’s run a fetch returned by the function above so you see that both queries (to the database and the web service) run concurrently; we’ll be using the debugging facilities recently introduced to fetch to visualize a fetch execution:
import cats.Id
import fetch.syntax._
import fetch.unsafe.implicits._
import fetch.debug._

describe(
  fetchUserAndTodos(1).runE[Id]
)
// ...
//  [Concurrent]
//    [Fetch one] From `UserDoobie` with id 1
//    [Fetch one] From `TodoH4s` with id 1
Let’s take a look at a more involved example: a fetch that has multiple steps, deduplication and caching:
import fetch._

val involvedFetch: Fetch[List[(User, List[Todo])]] = for {
  userAndTodos     <- Fetch.traverse(List(1, 2, 1, 2))(fetchUserAndTodos _)
  moreUserAndTodos <- Fetch.traverse(List(1, 2, 3))(fetchUserAndTodos _)
} yield userAndTodos ++ moreUserAndTodos

describe(
  involvedFetch.runE[Id]
)
// ..
//  [Concurrent]
//    [Fetch many] From `UserDoobie` with ids List(1, 2)
//    [Fetch many] From `TodoH4s` with ids List(1, 2)
//  [Concurrent]
//    [Fetch one] From `UserDoobie` with id 3
//    [Fetch one] From `TodoH4s` with id 3
As you can see in the description of the fetch execution, the fetch was run in two rounds:
- In the first, both data sources were queried in batch for the ids 1 and 2. Repeated identities were deduplicated.
- In the second, both data sources were queried for getting the id 3. Note how Fetch didn’t need to ask for ids 1 and 2 since they are cached from the previous round.
Conclusion
The recently introduced asynchronous query support has made it possible to use Fetch with non-blocking clients and made it viable in non-JVM environments with Scala.js. Besides optimizing data access with caching, batching, and parallelism, Fetch lets you treat every data source uniformly and arbitrarily combine data from multiple data sources.
Hopefully, this article has helped you understand how Fetch can be useful. Feel free to drop by the Fetch Gitter channel to ask any questions. Most of the code contained in this article has been extracted from examples that Peter Neyens contributed to the Fetch repository. If you have more examples of Fetch usage, don’t hesitate to open a pull request so more people can benefit from them. | https://www.47deg.com/blog/fetch-doobie-http4s/ | CC-MAIN-2022-27 | en | refinedweb |
#include <NCollection_BaseAllocator.hxx>
Purpose: Basic class for memory allocation wizards. It defines the interface for devising different allocators, primarily to be used by the NCollection collections, though it is not deferred (abstract). It allocates/frees memory through Standard procedures, so it is unnecessary (and sometimes harmful) to have more than one such allocator. To avoid the creation of multiple objects, the constructors were made inaccessible. To create a BaseAllocator, use the method CommonBaseAllocator. Note that this object is managed by Handle.
Constructor - prohibited.
CommonBaseAllocator This method is designed so that there is only one BaseAllocator (to avoid useless copying of collections). One can still use operator new to create more BaseAllocators, but doing so is harmful.
Prints memory usage statistics cumulated by StandardCallBack.
Callback function to register alloc/free calls. | https://dev.opencascade.org/doc/occt-7.6.0/refman/html/class_n_collection___base_allocator.html | CC-MAIN-2022-27 | en | refinedweb |
#include <CGAL/Epick_d.h>
A model for Kernel_d that uses Cartesian coordinates to represent the geometric objects.
This kernel is default constructible and copyable. It does not carry any state so it is possible to use objects created by one instance with functors created by another one.
This kernel supports construction of points from double Cartesian coordinates. It provides exact geometric predicates, but the geometric constructions are not exact.
current position:Home>Deep understanding of Python features
Deep understanding of Python features
2022-01-30 07:22:45 【cxapython】
1. Assertion
Python's assertion statement is a debugging aid, not a mechanism for handling runtime errors.
An assert is triggered when its condition evaluates to False; the expression after the comma becomes the error message.
import sys

assert sys.version_info >= (3, 7), "Please run under Python 3.7 or above"

If the minimum requirement for this project is a Python 3.7 environment, then running it under Python 3.6 produces this error message:

Traceback (most recent call last):
  File "/Users/chennan/pythonproject/demo/nyandemo.py", line 3, in <module>
    assert sys.version_info >= (3, 7), "Please run under Python 3.7 or above"
AssertionError: Please run under Python 3.7 or above
The assertion terminates the program early with a clear message.
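Because assertions are stripped entirely when Python runs with the -O flag, they should guard internal invariants during development—never validate real user input. A small illustration (the function and values here are made up):

```python
def apply_discount(price, discount):
    # Internal sanity check: callers should never pass a discount
    # outside [0, 1]. Stripped entirely under `python -O`, so this is
    # a developer aid, not input validation.
    assert 0 <= discount <= 1, f"invalid discount: {discount}"
    return price * (1 - discount)

print(apply_discount(100, 0.25))  # 75.0
```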
2. Cleverly place commas
Formatting the elements of a list sensibly makes it easier to maintain.
We usually write a list like this:
l = ["apple", "banana", "orange"]
The following style makes each item easier to distinguish. Habitually adding a comma after the last element prevents a missing comma the next time you add an element, and it looks more Pythonic:
l = [
    "apple",
    "banana",
    "orange",
]
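The trailing comma also protects against a subtle bug: adjacent string literals in Python are implicitly concatenated, so a forgotten comma silently merges two items instead of raising an error:

```python
# The missing comma after "banana" does NOT raise a SyntaxError;
# Python concatenates the adjacent string literals into one element.
broken = [
    "apple",
    "banana"  # <- comma forgotten here
    "orange",
]
print(broken)  # ['apple', 'bananaorange']
```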
3. Single underscore, double underscore, and more
Leading single underscore: _var
1. By convention, methods and variables with a single leading underscore are intended for internal use only.
2. Wildcard imports: with from xx import *, names with a single leading underscore are not imported, unless the module defines __all__ to override this behavior. (PEP 8 generally recommends against wildcard imports anyway.)
Trailing single underscore: var_
If the name you want to use is a Python keyword—for example class—append a single underscore and use class_ instead. This is also the convention recommended by PEP 8.
Leading double underscore: __var
A leading double underscore makes the Python interpreter rewrite the attribute name, preventing it from being overridden by names defined in subclasses.

class Test:
    def __init__(self):
        self.foo = 11
        self.__bar = 2

t = Test()
print(dir(t))
Looking at the instance's attributes, you can see that self.__bar has become _Test__bar. This is called name mangling: the interpreter changes the variable's name to prevent naming conflicts when the class is extended.
['_Test__bar', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'foo']
If you still want to access __bar, you can reach it through t._Test__bar.
What happens if we inherit from Test and override __bar?
class ExtendTest(Test):
    def __init__(self):
        super().__init__()
        self.foo = "overridden"
        self.__bar = "overridden"

et = ExtendTest()
print(et.foo)
print(et.__bar)
The last line raises an error:
AttributeError: 'ExtendTest' object has no attribute '__bar'
The reason is the same as before: the interpreter mangled the name of __bar to prevent the parent class's variable from being overwritten.
If we access the mangled names of both classes, we find that both attributes exist at the same time—nothing was actually overwritten:
print(et._Test__bar)
print(et._ExtendTest__bar)
The output:

2
overridden
Incidentally, __bar is usually pronounced "dunder bar" in English.
Besides double-underscored variables, double-underscored method names are also subject to name mangling:
class ManglingMethod:
    def __mangled(self):
        return 42

    def call_it(self):
        return self.__mangled()

md = ManglingMethod()
md.call_it()
md.__mangled()
The last line raises an error:
AttributeError: 'ManglingMethod' object has no attribute '__mangled'
Leading and trailing double underscores: __var__
These are the so-called magic (dunder) methods; their names are not mangled by the interpreter. As a naming convention, though, it is best to avoid defining your own variables and methods in this form.
Single underscore: _
1. _ can indicate that a variable is temporary or irrelevant:
for _ in range(5):
    print("hello")
2. It can also be used as a digit separator inside number literals:
for i in range(1000_000):
    print(i)
3. It can be used as a placeholder when unpacking tuples:
car = ("red", "auto", 12, 332.4)
color, _, _, mileage = car
print(color)
print(mileage)
4. In the interactive interpreter, _ holds the result of the previous expression:
>>> 20+5
25
>>> _
25
>>> print(_)
25
4. Custom exception classes
Suppose we have the following code:
def validate(name):
    if len(name) < 10:
        raise ValueError
If you call this function from another file,
validate("lisa")
then, if the name check fails and you don't know what the function does internally, the call stack prints the following:
Traceback (most recent call last):
  File "/Users/chennan/pythonproject/demo/nyandemo.py", line 57, in <module>
    validate("lisa")
  File "/Users/chennan/pythonproject/demo/nyandemo.py", line 55, in validate
    raise ValueError
ValueError
The debug backtrace tells you that an incorrect value occurred, but not why, so you have to dig into validate to find out. Instead, we can define our own exception class:
class NameTooShortException(ValueError):
    def __str__(self):
        return "The name must be at least 10 characters long"

def validate(name):
    if len(name) < 10:
        raise NameTooShortException(name)

validate("lisa")
Now, when an error occurs, you can see exactly why. It is also convenient for callers to catch the specific exception rather than the generic ValueError:
try:
    validate("lisa")
except NameTooShortException as e:
    print(e)
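One design benefit of subclassing ValueError (rather than Exception) is backward compatibility: any existing handler that catches ValueError still catches the new exception. A small self-contained sketch:

```python
class NameTooShortException(ValueError):
    pass

def validate(name):
    if len(name) < 10:
        raise NameTooShortException(name)

try:
    validate("lisa")
except ValueError as e:  # broad, pre-existing handler
    # The handler still fires, and the specific type is preserved.
    print(type(e).__name__)  # NameTooShortException
```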
5. Python Bytecode
When the CPython interpreter executes a program, it first translates the source into a series of bytecode instructions. Bytecode is the intermediate language of the Python virtual machine, and it improves execution efficiency: rather than executing human-readable source code directly, CPython executes a compact sequence of numbers, variables, and references produced by the compiler's parsing and semantic analysis.
This saves time and memory when the same program is executed again, because the bytecode generated by the compilation step is cached on disk as .pyc and .pyo files. Executing cached bytecode is therefore faster than compiling and executing the same Python file again.
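You can ask the standard library where that cache would live for a given source file (the file name here is made up; the interpreter tag in the output depends on your Python version):

```python
import importlib.util

# Maps a source path to the bytecode cache path CPython would use,
# e.g. __pycache__/greet.cpython-310.pyc (tag varies by version).
cache_path = importlib.util.cache_from_source("greet.py")
print(cache_path)
```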
import dis

def greet(name):
    return 'hello, ' + name + '!'

# __code__ exposes the VM instructions, constants, and variables used
# in the greet function.
gc = greet.__code__
print(gc.co_code)      # instruction stream
print(gc.co_consts)    # constants
print(gc.co_varnames)  # parameter and local variable names

dis.dis(greet)
The result:
b'd\x01|\x00\x17\x00d\x02\x17\x00S\x00'
(None, 'hello, ', '!')
('name',)
 70           0 LOAD_CONST               1 ('hello, ')
              2 LOAD_FAST                0 (name)
              4 BINARY_ADD
              6 LOAD_CONST               2 ('!')
              8 BINARY_ADD
             10 RETURN_VALUE
The interpreter finds the constant at index 1 ('hello, ') and pushes it onto the stack, then pushes the contents of the name variable. The CPython VM is a stack-based virtual machine, and the stack is its internal data structure. The stack supports only two operations: push (add a value to the top of the stack) and pop (remove and return the value at the top of the stack).
Suppose the stack is initially empty. After executing the first two opcodes, the stack contents are as follows (0 is the top element), assuming we called the function with name set to 'lisa':
0: 'lisa'
1: 'hello, '
The BINARY_ADD instruction pops the two string values off the stack, concatenates them, and pushes the result back onto the stack:
0: 'hello, lisa'
Then the next LOAD_CONST pushes '!' onto the stack. The stack now looks like this:
0: '!'
1: 'hello, lisa'
The next BINARY_ADD opcode pops the two strings off the stack again, concatenates them, and pushes the final result:
0: 'hello, lisa!'
The last bytecode, RETURN_VALUE, tells the virtual machine that whatever is currently at the top of the stack is the function's return value.
The original text comes from my Zhihu column: zhuanlan.zhihu.com/p/267563522
Author: cxapython
Java program to solve Climb Stairs using dynamic programming
Hello everyone! In this Java tutorial, we are going to discuss the climb stairs problem—counting the number of ways to reach the top—and solve it with bottom-up dynamic programming.
How to solve climb stairs using dynamic programming in Java
First, understand the climb stairs problem. We have to count the ways to climb from step 0 to the n-th step using dynamic programming. In other words: there are n stairs, and someone standing at the bottom wants to reach the top. The person can climb 1 or 2 stairs (or, in this variant, a variable number of stairs) at a time. Count the number of ways the person can reach the top, subject to the following conditions:
- You are at the 0th step and are required to climb to the top.
- You are given n numbers, where ith detail’s value represents -till how far from the step you could jump to in a single move.
- You can of path soar fewer number of steps inside the circulate.
- You are required to print the range of different paths thru which you could climb to the top.
Java Code for solving Climb Stairs
import java.util.*; public class Main { private static final Scanner mega= new Scanner(System.in); public static void main(String[] args) throws Exception { System.out.println("Enter the number"); int n = mega.nextInt(); int[] arr = new int[n]; for (int i = 0; i < arr.length; i++) { arr[i] = mega.nextInt(); } mega.close(); int[] dp = new int[n + 1]; dp[n] = 1; for (int i = n - 1; i >= 0; i--) { if (arr[i] > 0) { for (int j = 1; j <= arr[i] && i + j < dp.length; j++) { dp[i] += dp[i + j]; } } } System.out.println("Result:- "+dp[0]); } }
The program prints the number of ways to climb the stairs from step 0 to the top.

Now, run the program and see the output.

Input

```
Enter the number:- 10
3 3 0 2 1 2 4 2 0 0
```

Output

```
Result:- 5
```

Now you can understand how to solve the climb stairs problem using dynamic programming.
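As a quick cross-check of the recurrence, the same bottom-up DP fits in a few lines of Python (a verification sketch, not part of the original Java solution):

```python
def climb_paths(jumps):
    # dp[i] = number of distinct paths from step i to the top (step n)
    n = len(jumps)
    dp = [0] * (n + 1)
    dp[n] = 1  # one way to "finish" once you are already at the top
    for i in range(n - 1, -1, -1):
        for j in range(1, jumps[i] + 1):
            if i + j <= n:
                dp[i] += dp[i + j]
    return dp[0]

print(climb_paths([3, 3, 0, 2, 1, 2, 4, 2, 0, 0]))  # 5
```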
Before the update, the Unity test tools test runner window 'hamburger icon' in the top right had an option to set the tests to run automatically every time the code compiles. That option is no longer there. Where is it?
Answer by Tomek-Paszek · May 09, 2017 at 08:58 AM

It wasn't reimplemented... sorry about that. It should come back at some point. In the meantime you can implement a similar behaviour with the DidReloadScripts callback (just invoke the test run from the script).

Thanks for your answer @Tomek-Paszek. I think many of the users would be happy to see some example code regarding this workaround, to know exactly how and where to implement the callback.

Which class and method do I call to "invoke the test run"? The only classes that aren't attribute classes I could find in UnityEditor.TestTools are LogAssert and MonoBehaviourTest?

Still needs an example.
In this tutorial, we'll be building Scout, an application created using Python with Flask. On the client side, we'll use JavaScript for certain dynamic functionality required by our app. This tutorial is split into two parts: in the first, we'll set up Google auth, build a user interface, and integrate Firebase Firestore.
The final version of the code can be found here, if you're curious to see it all put together!
Services Involved
Scout relies on four services for its operation:
- Nightscout, an open-source project that supports cloud access to data from a variety of CGM (continuous glucose monitoring) devices.
- Nexmo for sending and receiving SMS messages.
- Google auth: The API that allows us to use the Google authentication service for our web application.
- Firebase/Firestore, to store our data in the cloud.
Application Features
Scout pings a user's Nightscout data to obtain the last recorded blood glucose level. If the level is below 70 mg/dL (3.9 mmol/L) or above 240 mg/dL (13.3 mmol/L), the application will place a call to the user's mobile phone number, and the user will hear their current blood glucose level. If the user does not respond, a text message will be sent to the user's preferred emergency contact and up to 5 additional numbers.
The ping frequency to Nightscout is set to one minute. To be exact, it will be done 30 seconds after each minute using the system clock. If a user's glucose level remains out of the standard range during that time, the call will be made again.
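The "30 seconds after each minute" cadence can be derived from the system clock. Below is a minimal sketch of that calculation; the function name and approach are my own illustration, not Scout's actual code:

```python
def seconds_until_next_tick(now, offset=30, period=60):
    """Seconds to wait from `now` (a Unix timestamp) so that the next
    run lands `offset` seconds past a minute boundary."""
    return period - ((now - offset) % period)
```

For example, at second 0 of a minute the function waits 30 seconds; at second 45 it waits 45 seconds, landing on the next minute's :30 mark.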
If the Nightscout service does not respond for 1 hour, an SMS will be sent alerting the user that their service is offline.
The user can sign up/login using their Google account and configure the following information:
- Nightscout API URL
- Personal number
- Preferred emergency contact and up to 5 additional phone numbers
Application Structure
Our application can be divided into two parts:
- A Flask app that allows the user to log in and configure the application data
- A Python Thread with a scheduler that runs with a given frequency, consulting the Nightscout dashboard of each user and sending alerts when necessary. A second scheduler that runs less frequently can be assigned to obtain fresh data.
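The second part of that structure (a background thread polling on a fixed period) could be sketched as follows. The names and the use of a bare `threading.Thread` are illustrative assumptions; the actual scheduler is built later in the series:

```python
import threading
import time

def poll_forever(poll_fn, period=60):
    """Call poll_fn roughly every `period` seconds in a daemon thread,
    so polling never blocks the Flask app."""
    def loop():
        while True:
            poll_fn()
            time.sleep(period)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```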
Prerequisites
- Python 3.x.x
- An application directory:

```shell
mkdir Scout
cd Scout
```

- Flask:

```shell
pip install Flask
```

- `dotenv`, `requests`, and `google-auth` Python libraries:

```shell
pip install requests python-dotenv google-auth
```

- A unique key to store our session variables. It's a binary value that we have to handle discreetly. Generate it with the following:

```shell
python -c 'import os; print(os.urandom(16))'
```
User Interface Development and Google Auth
In this section, we will start configuring Google auth. We will also create a simple interface for calling the Google auth API, a login view, and a persistent server-side session to keep us logged in until the user decides to log out.
Google Auth
To use Google auth, we have to obtain a `client ID`, which we will use to call the Google sign-in API. Head to the Google Cloud dashboard and create a new project.

- Once the project has been created, click on the navigation menu (≡) and select APIs & Services > Credentials.
- Click on Create Credentials > OAuth client ID.
- Select Web.
- In Authorized JavaScript origins and Authorized redirect URIs, write the domain name to be used for this app. In our case we will assume that `/` will be the endpoint that will consume our authentication service.
- Click on Create and our client ID will be generated. It should be listed under OAuth 2.0 client IDs. Keep the client ID at hand, as we will use it later.
Diving into the Source Code
Once we are done with all our preparations, let's open our favorite editor (for example, PyCharm or Visual Studio Code). Create a new file. You can name it whatever you want; in my case I chose `notifier.py`.
At the very beginning of the file we will import the following modules:
```python
import json, os
# redirect and url_for are used later by the /logout endpoint
from flask import Flask, request, render_template, session, redirect, url_for
```
In the same way, we import some functions that will allow us to read the environment variables. A secure way to handle credentials is to make them available only in the scope of the operating system that runs the application.
```python
from os.path import join, dirname
from dotenv import load_dotenv
```
Let's include the modules for Google auth that will allow us to reconfirm the identity of the user from the backend. This will allow us to create a persistent session if the identity is valid.
```python
from google.oauth2 import id_token
import google.auth.transport.requests
```
And the `requests` module, which allows us to make requests using the POST or GET methods (similar to Axios in the JavaScript world):

```python
import requests
```
Create a new file and name it `.env`. (In the tutorial repo I named it `.example-env`; if using my repo, make sure you rename it!) Add the following lines:

```
GOOGLE_CLIENT_ID="YOUR_GOOGLE_AUTH_CLIENT_ID"
SITE_URL="YOUR_SITE_URL"
```

Note: Replace `YOUR_GOOGLE_AUTH_CLIENT_ID` with the client ID generated by Google, and `YOUR_SITE_URL` with the domain name you registered previously. Save the file!

Let's go back to `notifier.py` and add the following lines:

```python
app = Flask(__name__)
app.secret_key = [THE KEY YOU PREVIOUSLY GENERATED]  # paste your generated bytes here
```
We assign the variable `app` to represent our Flask application. `app` creates its own application context, so that operations are scoped to the requests made to the Flask application.

Then we assign the `secret_key` attribute. Paste the value previously generated with `python -c 'import os; print(os.urandom(16))'`.

To access the environment variables defined in the `.env` file, we add the following:

```python
envpath = join(dirname(__file__), "./.env")
load_dotenv(envpath)
```
Now, let's define the `get_session` function, which evaluates whether a specific key exists within the session, returning `None` if it doesn't exist and the value of the key otherwise. It can be reused in different sections of the program:

```python
def get_session(key):
    value = None
    if key in session:
        value = session[key]
    return value
```

In the following section we begin to define our Flask application with the controller for the endpoint `/`, which will be our landing page and will show us the Google login button:

```python
@app.route('/', methods=['GET', 'POST'])
def home():
    if get_session("user") != None:
        return render_template("home.html", user=get_session("user"))
    else:
        return render_template("login.html",
                               client_id=os.getenv("GOOGLE_CLIENT_ID"),
                               site_url=os.getenv("SITE_URL"))
```
The line `@app.route('/', methods=['GET','POST'])` indicates that every request, whether `GET` or `POST`, will be directed to the `home()` handler. The `home` function evaluates whether the user session exists, then loads the `home.html` template if the user is authenticated. If the user is not authenticated, we load the `login.html` template, where the Google authentication interface will be displayed (we pass the values of `GOOGLE_CLIENT_ID` and `SITE_URL` previously defined in our `.env` file; the second parameter will be used for redirection).

Following the workflow, when the user first enters the site the user session variable will not exist, therefore `login.html` will be loaded. The next logical step would be to develop the `login.html` Jinja template. But before doing that we need to do the following:
- Create a new `static` directory within your main application directory:

```shell
mkdir static
```

- Download Materialize, a framework for front-end development based on material design. Unzip the file and move the `css` and `js` directories into the previously created `static` directory. Ideally, keep only the minified versions of the `css` and `js` files.
- Download the Materialize icons. Once downloaded, create a new `fonts` directory within `static`, and move the font file there.
- Create `style.css` in the `static/css/` directory. Usually Materialize is more than enough to style an app, but sometimes an additional style file is necessary to control certain details not covered by Materialize. Let's add some extra style:
```css
@font-face {
  font-family: 'Material Icons';
  font-style: normal;
  font-weight: 400;
  src: url(/static/fonts/google-icons.woff2) format('woff2');
}

.material-icons {
  font-family: 'Material Icons';
  font-weight: normal;
  font-style: normal;
  font-size: 24px;
  line-height: 1;
  letter-spacing: normal;
  text-transform: none;
  display: inline-block;
  white-space: nowrap;
  word-wrap: normal;
  direction: ltr;
  -moz-font-feature-settings: 'liga';
  -moz-osx-font-smoothing: grayscale;
}

.logo {
  font-size: 30px !important;
  padding-top: 5px;
}

div.g-signin2 {
  margin-top: 10px;
}

div.g-signin2 div {
  margin: auto;
}

div#user {
  margin-top: 10px;
  margin-bottom: 10px;
}

div#user.guest {
  text-align: center;
  font-size: 20px;
  font-weight: bold;
}

div#user.logged {
  text-align: right;
}

div#user.logged a {
  margin-left: 10px;
}

body {
  display: flex;
  min-height: 100vh;
  flex-direction: column;
}

main {
  flex: 1 0 auto;
}

div.add_contact {
  height: 20px;
}

div.add-contacts-container {
  padding-bottom: 40px !important;
}

.input-field .sufix {
  right: 0;
}

i.delete {
  cursor: pointer;
}
```
Now, we are ready to create our app's layout. We start by creating a parent template that defines blocks of content that are used by other child files. Officially, this will be our first Jinja template 🎉. In order for this file to be recognized by Flask as a template, let's create a `templates` directory in the application root (the default location where Flask looks for templates), and create `layout.html` inside it. Let's add the following code:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}" />
    <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}" />
    {% block head %}{% endblock %}
  </head>
  <body>
    <header class="container-fluid">
      <nav class="teal">
        <div class="container">
          <div class="row">
            <div class="col">
              <a href="#" class="brand-logo">
                <i class="material-icons logo">record_voice_over</i> Scout = Nexmo + Nightscout
              </a>
            </div>
          </div>
        </div>
      </nav>
    </header>
    <main class="container">
      {% block content %}{% endblock %}
    </main>
    <footer class="page-footer teal">
      <div class="footer-copyright">
        <div class="container">
          <!-- This is the hashtag used by the nightscout project :) -->
          Scout <a class="brown-text text-lighten-3">#WeAreNotWaiting</a>
        </div>
      </div>
    </footer>
    <script language="javascript" src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
    {% block script %}{% endblock %}
  </body>
</html>
```
There are a couple of interesting details to highlight: the use of blocks and the use of the `url_for` function. Blocks are reserved sections for inserting code with Jinja from child templates. The `url_for` function generates the URLs to the JavaScript and CSS resources in `static`.
The file structure that we have up to this point should be:

```
- static/
  - css/
    - materialize.min.css
    - style.css
  - fonts/
    - google-icons.woff2
  - js/
    - materialize.min.js
- templates/
  - layout.html
- .env
- notifier.py
```

If everything looks proper, create `login.html` in the same `templates` directory. This file will be loaded when the `user` session variable does not exist (that is, the user is not logged in).
```html
{% extends "layout.html" %}
{% block head %}
<script src="https://apis.google.com/js/platform.js" async defer></script>
<meta name="google-signin-client_id" content="{{ client_id }}" />
{% endblock %}
{% block content %}
<div id="user" class="guest">Welcome guest, You need to authenticate</div>
<div class="row">
  <div class="col s6 offset-s3">
    <div class="card blue-grey darken-1">
      <div class="card-content white-text">
        <span class="card-title">Login To Enter Scout</span>
        <p>
          This application will help you configure alerts to your mobile phone,
          a preferred emergency contact and up to 5 other contacts. If you have
          a nightscout dashboard and you have your api available for external
          queries, You can use this server and when your glucose levels are out
          of range, you will receive a call to alert you and your preferred
          contact(s) of such. If you do not answer the call then a sms is sent
          to your emergency contact(s).
        </p>
        <div class="g-signin2" data-</div>
      </div>
    </div>
  </div>
</div>
<script language="javascript">
  function onSignIn(googleUser) {
    var profile = googleUser.getBasicProfile()
    if (profile.getId() !== null && profile.getId() !== undefined) {
      // Send the id_token (plus username and email) to our /login endpoint
      // so the server can re-verify the identity and create the session.
      var xhr = new XMLHttpRequest()
      xhr.open('POST', '{{ site_url|safe }}/login')
      xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded')
      xhr.onload = function () {
        // Once the server confirms, reload the landing page.
        window.location.href = '{{ site_url|safe }}/'
      }
      xhr.send(
        'idtoken=' + googleUser.getAuthResponse().id_token +
        '&username=' + encodeURIComponent(profile.getName()) +
        '&email=' + encodeURIComponent(profile.getEmail())
      )
    }
  }
</script>
{% endblock %}
```
Notice that the first line of `login.html` is `{% extends "layout.html" %}`. This indicates that `login.html` inherits from `layout.html`; in other words, it is a child of `layout.html`. This means that the renderer will load `layout.html` with the code variants that we added in `login.html`. These variants are defined within the blocks allowed in the layout. Within `login.html` we use the `head` and `content` blocks.
In the `head` block, we have:

```html
<script src="https://apis.google.com/js/platform.js" async defer></script>
<meta name="google-signin-client_id" content="{{ client_id }}" />
```
The first line indicates that we will be using the Google API for the authentication process, and the second is metadata used by Google to know our app's client ID. Note that within the `content` attribute we have written `{{ client_id }}`. When the Jinja compiler evaluates this expression, it will print the value of the `client_id` variable that we pass to the template using the `render_template` function.

The next block is `content`, where we present a message to the user indicating how the application works, followed by a few lines of JavaScript. Basically, it is a function connected to the `onSignIn` event, which Google uses to return the data of the user that logged in using Google auth.

We obtain the user profile with `googleUser.getBasicProfile()`. If there is an ID, the authentication process was successful, and we can proceed to send some data to our server to reconfirm the identity with Google and create the session.
The previous lines connect us to our server using an AJAX request. Pay attention to the line `xhr.open('POST', '{{site_url|safe}}/login')`; it indicates the method that the request will use. The URL `{{site_url|safe}}` will be replaced by the value of the `site_url` variable that we pass to the template. To do the reconfirmation, our Flask application only needs the `id_token`. However, we also pass the username and email, since we will use them later for other operations.

Once the reconfirmation is done, our server will redirect us to `/`. If the reconfirmation was not successful, the user will have to try to log in again. Now, if we look carefully, we haven't defined the `/login` endpoint yet. This endpoint will be responsible for the reconfirmation.

To create it, let's go back to `notifier.py` and add the following lines:
```python
@app.route('/login', methods=["POST"])
def login():
    try:
        token = request.form.get("idtoken")
        client_id = os.getenv("GOOGLE_CLIENT_ID")
        infoid = id_token.verify_oauth2_token(token, google.auth.transport.requests.Request(), client_id)
        if infoid['iss'] not in ['accounts.google.com', 'https://accounts.google.com']:
            raise ValueError('Wrong issuer.')
        userid = infoid['sub']
        # Here is a good place to create the session
        session["user"] = {"userid": userid,
                           "username": request.form.get("username"),
                           "email": request.form.get("email")}
        return userid
    except ValueError:
        return "Error"
```
As mentioned above, to reconfirm the identity of the user on the server, we only need the `id_token` obtained from Google auth and passed to `/login` using an AJAX POST request. Then we get the `client_id` using `os.getenv("GOOGLE_CLIENT_ID")`.

When we make the reconfirmation, we place our code within a `try/except` block for exception handling, in case an error occurs at the time of making the request.
This verification is done using the `verify_oauth2_token` method, which returns an `infoid` that must have a key `iss`, a reference to the issuer. If the issuer is not one of Google's account domains, we consider the verification failed and raise an exception. If, on the other hand, the response is valid, we proceed to create the persistent session on the server side, assigning `user` to the session object. Within this session, we store the `userid`, `username`, and `email`.
Once this is done, our server returns the response and the `xhr.onload` event of our AJAX request is triggered. Its function is to redirect us to `/`. In `/` our application evaluates if the `user` session exists, and if so, it will load the `home.html` template, passing it `session['user']`.
Following the logic of our application, the next step is to create `home.html` in the `templates` directory:
```html
{% extends "layout.html" %}
{% block content %}
<div id="user" class="logged">
  you are logged in as <b>{{ user.username }}</b> -
  <a id="logout" class="teal-text" href="/logout">Logout</a>
</div>
{% endblock %}
```
This template inherits from layout, and in our `content` block we show the logged-in user and a logout link to close the session. The latter is not yet implemented; to complete the login experience we will define the `logout` endpoint in `notifier.py`. Let's add:
```python
@app.route('/logout')
def logout():
    session.pop("user", None)
    return redirect(url_for('home'))
```
Our `logout` endpoint deletes the user session and redirects us to `home`. Back at `home`, the app evaluates the session, and if it doesn't find any, it renders `login.html`.

Note: `url_for` uses the handler name (`def home()`) for redirection, not the endpoint (although it is also valid to use endpoints for redirects). If `url_for` needs to generate an `https` URL, the line should be: `url_for('home', _external=True, _scheme='https')`. The `_external` parameter requests an absolute URL, and `_scheme` defines the protocol we want to use.
At this point, we can test if Google auth works. To test locally, let's run the following command in our terminal:
```shell
export FLASK_APP=notifier
flask run
```

We are telling Flask to run `notifier.py`. However, it's best to use a more robust server that allows for more efficient handling of our requests, thus improving the app performance. Therefore, we will use Gunicorn, an HTTP WSGI server written in Python and compatible with multiple frameworks (Flask included).

To install, let's execute the following command in our terminal:

```shell
pip install gunicorn
```

After installing, from the same terminal window and from our app's root directory, type:

```shell
gunicorn -b 0.0.0.0:80 notifier:app
```
This command deploys our application to our local server and listens for requests using port 80. With this, we should be able to access our app and test if we can log in and log out.
Note: To stop the application, hit ctrl+c in the same terminal window where gunicorn is running.
Storing Nightscout Settings with Firebase/Firestore
In this section, we will build a simple interface where our user can add the following data:
- nightscout_api: a valid Nightscout API URL used to obtain the glucose level data.
- phone: the mobile number where alerts will be sent.
- emerg_contact: preferred emergency contact (relative or close friend who can receive alerts).
- extra_contacts: an optional array with up to 5 additional phone numbers.
- email: The Google account email address to log in (we will use it as an external key to obtain the logs of a logged-in user).
- username: Also obtained from the user's Google account, we will use it for data presentation.
Firestore allows us to handle collections and documents in a similar fashion to mongodb. For this application, our collection will be called scouts. A document from our collection should look like this:
```json
{
  "email": "",
  "username": "",
  "phone": "",
  "emerg_contact": "",
  "nightscout_api": "",
  "extra_contacts": []
}
```
Adding Firebase Firestore to Our Project
- Go to the Firebase console
- Log in with your Google or G Suite account
- Click on Go to Console
- Click on Add project. Your previously created project (used for Google auth) should appear under Enter the name of your project; select it, click on Continue, provide the additional information requested in the next steps, and when finished click on Create project
- On the Firebase console page, click on Authentication and, in the Sign-in method tab, enable Google.
- Click on Database and select Firestore. Then go to Database > Rules tab, and modify the existing rule as follows:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth.uid != null;
    }
  }
}
```
This is to make sure only logged in users have access to our application.
Connecting the Project with Firebase
Once our project is set up on the Firebase console, we have to generate a Firebase key:
- Log in to the Firebase console
- Click on Project Overview > Project settings and select the Service accounts tab
- Click on the Generate new private key option
- Save the JSON file in our application's root directory. You can rename the file to whatever you want. I used: `super_private_key.json`

The next step is to add the name of your private key file to `.env` with the following line:

```
FIREBASE_PRIVATE_KEY="./super_private_key.json"
```
With these initial preparations in order, we are ready to connect our application to Firebase/Firestore. In the case of data queries for Firestore, the ideal scenario in Python would be to have a class that we can reuse in our application that allows us to add, modify, or query records (CRUD). This way we keep our code simple, organized, and much easier to understand.
Before we write that code, we need to install the Python modules that will allow us to perform these operations (back to our terminal!):
```shell
pip install firebase-admin
```
Let's create the `models.py` file and add the following lines:

```python
import firebase_admin, os
from firebase_admin import credentials
from firebase_admin import firestore
from os.path import join, dirname
from dotenv import load_dotenv
```
The previous lines indicate which modules we will be using in our `models.py` file. Among these, we can see `firebase_admin`, which we will use to connect to our Firebase project and perform operations (CRUD) on our scouts data collection. We will use `dotenv` to get the `FIREBASE_PRIVATE_KEY` variable from our `.env` file.

In the same file, add the following lines:

```python
envpath = join(dirname(__file__), "./.env")
load_dotenv(envpath)

credits = credentials.Certificate(os.getenv("FIREBASE_PRIVATE_KEY"))
firebase_admin.initialize_app(credits)
```

The first two lines are well known, since we previously used them in `notifier.py` to load our environment and extract the value of the `FIREBASE_PRIVATE_KEY` variable. The next two lines connect our application with Firebase using the private key we generated earlier. `credentials.Certificate(os.getenv("FIREBASE_PRIVATE_KEY"))` extracts the private key (as well as some additional data) from the file, and `firebase_admin.initialize_app(credits)` authenticates our application to use Firebase and initializes it to perform operations.
Once the connection with Firebase is defined, we will proceed to define the base model. Normally, when we use technologies such as `SQLAlchemy` to work with `flask` and `sqlite`, the models are classes where we define attributes that map to the database fields, and we use a set of native model functions to perform operations on the database.

In our case, we will create the `model` class as a "bridge" class that will allow us to use the Firestore methods to perform database operations. In other words, `model` will function as a parent class from which other classes will inherit its methods. Let's add the code to the end of the `models.py` file:

```python
class model:
    def __init__(self, key):
        self.key = key
        self.db = firestore.client()
        self.collection = self.db.collection(self.key)

    def get_by(self, field, value):
        docs = list(self.collection.where(field, u'==', u'{0}'.format(value)).stream())
        item = None
        if docs != None:
            if len(docs) > 0:
                item = docs[0].to_dict()
                item['id'] = docs[0].id
        return item

    def get_all(self):
        docs = self.collection.stream()
        items = []
        for doc in docs:
            item = doc.to_dict()
            item['id'] = doc.id
            items.append(item)
        if len(items) > 0:
            return items
        else:
            return None

    def add(self, data, id=None):
        if id == None:
            self.collection.add(data)
        else:
            self.collection.document(u'{0}'.format(id)).set(data)
        return True

    def update(self, data, id):
        if id != None:
            if data != None:
                doc = self.collection.document(u'{id}'.format(id=id))
                doc.update(data)
        return False
```
Within the `model` class, we start by defining our constructor. It accepts the `key` parameter, representing the name of the collection inside Firestore. The constructor also initializes the Firestore client, `self.db = firestore.client()`, and the collection reference, `self.collection = self.db.collection(self.key)`.

The `get_by` method receives the name of the field and the value by which we want to filter data from our collection. The line `docs = list(self.collection.where(field, u'==', u'{0}'.format(value)).stream())` runs a query against our Firebase collection; `self.collection` is the reference to this collection defined in the constructor. On the collection, we use the `where` method to filter by field and value.

Pay special attention to the `u` prefix: it indicates that Python will send the field name and the value in unicode format. By using the code fragment `u'{0}'.format(value)` we are telling Python that any value, regardless of type, should be formatted as a unicode string. The `stream` method, in turn, returns the flow of documents as a generator, so the `list` function is used to convert it into a list that can be traversed with Python.
Normally, when making a query to Firestore to obtain data from a collection, for each record we obtain an object with the document id and a `to_dict()` method that converts the document to a Python dictionary (a structure similar to JSON that makes it easy to access each field).

The `get_by` function evaluates whether the document exists. If it does, it creates a consolidated item with `item = docs[0].to_dict()` to store the document in a variable. With `item['id'] = docs[0].id`, we add the id to the document so we have all the information at our disposal. Another important detail is that `get_by` returns only the first document found. We leave this as is in our case: once a user logs in with Google, they will only have access to one and only one document containing their data.

We define the `get_all` method, which does not receive any parameters. Its function is to obtain all the documents in a collection, consolidate them by creating a dictionary for each item, and fill an array with each consolidated document. This function returns an array of all the documents in the collection, or `None` if there are no documents.
The `add` method receives the `data` and `id` parameters. `id` is optional, but if present it allows us to add a new document with a defined id. If the parameter `id` is not given, the new document is created with an id automatically generated by Firestore. The `data` parameter must be a dictionary and will contain the data that we want to add to our collection.

Finally, we define the `update` method, which receives the parameters `data` and `id`, both of which are required. While `data` contains a dict indicating which fields will be altered with which values, `id` defines which document in the collection we will be modifying.
Next, we will add the `scout` class. The purpose of this class is to act as an interface that allows us to pass the data of our scout collection more directly, without thinking about unnecessary formatting when adding new documents. Let's add the following code to `models.py`:

```python
class scout:
    def __init__(self, email='', username='', nightscout_api='',
                 phone='', emerg_contact='', extra_contacts=[]):
        self.email = email
        self.username = username
        self.nightscout_api = nightscout_api
        self.phone = phone
        self.emerg_contact = emerg_contact
        self.extra_contacts = extra_contacts
```

Note: We will go into more detail on how this class will be used later.

Finally, let's add the `scouts` class, which inherits from the `model` class to reuse its methods and in turn has its own methods to interact with the scout collection. Let's add the code at the end of the `models.py` file:

```python
class scouts(model):
    def __init__(self):
        super().__init__(u'scouts')

    def get_by_email(self, email):
        docs = list(self.collection.where(u'email', u'==', u'{0}'.format(email)).stream())
        item = None
        if docs != None:
            if len(docs) > 0:
                item = docs[0].to_dict()
                item['id'] = docs[0].id
        return item

    def getby_personal_phone(self, phone):
        return self.get_by(u'phone', phone)

    def add(self, data, id=None):
        if type(data) is scout:
            super().add(data.__dict__, id)
        else:
            super().add(data, id)
```
The scouts class inherits from the model. In its constructor we call the parent constructor and pass it
scouts, which is nothing more than the key that the
model constructor expects to reference a collection in Firebase.
Then we find the
get_by_email method, which obtains the first document from the
scouts collection that matches the email provided. This method will be used to obtain the Nightscout data of each user connected using a Google account.
The method
getby_personal_phone receives a phone parameter (the user's personal telephone) and will return the document associated with that data. This method calls the
get_by method of the
model class and it will be very useful to obtain user data when we are running the
nexmo events webhook.
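To make the contract of these lookup helpers concrete without touching Firestore, here is a framework-free sketch. The list-of-dicts stand-in and the sample records are illustrative assumptions; the real `get_by` issues a Firestore `where(...).stream()` query instead of scanning a list:

```python
# Hypothetical stand-in for model.get_by: return the first record whose
# `field` equals `value`, or None if nothing matches. The real method
# runs a Firestore `where(...)` query rather than a linear scan.
def get_by(records, field, value):
    for rec in records:
        if rec.get(field) == value:
            return rec
    return None

sample = [
    {'phone': '12345678', 'email': 'a@example.com'},
    {'phone': '23456789', 'email': 'b@example.com'},
]
match = get_by(sample, 'phone', '23456789')    # found -> the record dict
missing = get_by(sample, 'phone', '99999999')  # not found -> None
```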
Finally, we have the
add method. If
data is an instance of the
scout class, we will convert its attributes to a dictionary with
data.__dict__. The
id attribute is optional for this method. Note that this method, in turn, calls the
add method of the model class for reuse.
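The `data.__dict__` conversion is easy to check in isolation. The sketch below re-declares a trimmed-down `scout` (fewer fields than the tutorial's class, and no Firebase involved) purely to show the shape of the dictionary that `add` ends up sending:

```python
# Minimal re-declaration of the scout class, only to demonstrate the
# __dict__ conversion that scouts.add relies on (no Firestore here).
class scout:
    def __init__(self, email='', phone='', extra_contacts=None):
        self.email = email
        self.phone = phone
        self.extra_contacts = extra_contacts or []

s = scout(email='jane@example.com', phone='12345678')
payload = s.__dict__  # a plain dict, ready to hand to the collection
```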
Don't forget to save the file! ⭐️
Playing with the Python Console
A very practical (and maybe fun?) way to test what we have done is with the Python console. Before the fun begins, open the Firebase console in your browser and click on the Database option. Make sure to select Cloud Firestore in the upper left corner next to Database.
Let's go to our terminal. From our project folder, execute the
python command. This will take us to the Python console where we can run Python code. In the python console, we execute the following commands:
- Import our previously created python module:
```python
>>> import models
>>> from models import model, scouts, scout
```
- Create an instance of the scouts class called
scouts_firebase and add a document to Firebase. Review the Firebase console after executing the add method. In Firestore, a new document will be added with the data provided. Pay special attention to the add method: we pass an instance of the scout class with all the corresponding data. Internally, the add method converts the instance of the class to a dictionary:
```python
>>> scouts_firebase = scouts()
>>> scouts_firebase.add(scout(email='email@gmail.com', nightscout_api='someurl', phone='12345678', emerg_contact='23456789', extra_contacts=['34567890']))
```
- Get all the documents from our scouts collection:
```python
>>> docs = scouts_firebase.get_all()
>>> print(docs)
```
- Update the document we added (in this case we only update the
nightscout_api field). We can check the update in the Firebase console. Then we obtain the document using the get_by_email method and print the item to confirm that the field value was indeed updated:
```python
>>> scouts_firebase.update({u'nightscout_api': 'some_testing_url'}, docs[0]['id'])
>>> item = scouts_firebase.get_by_email('email@gmail.com')
>>> print(item)
```
To close the Python console just type
quit() to go back to the terminal.
Create the User Data Configuration Interface
With the data models defined, the next step is to create the interface that will receive the data of the user connected with Google auth. Our application will be in charge of using the model to store this information.
In
notifier.py, just under the last
import add the following lines:
```python
import models
from models import model, scouts, scout
```
This adds the
models module to the
notifier.py script and imports the
model,
scouts, and
scout classes from the module to be able to use them. Later, before the lines that define the
get_session function, add the code that initializes the
scouts class:
```python
nightscouts = scouts()
```
Next, edit the
home function, which controls the endpoint
/. The function should be modified with the following workflow in mind: a user authenticates with Google auth to our application. If they are authenticating for the first time, when
/ is loaded, an empty form will be presented with a
new flag to indicate to the application that the user will insert a new document to Firebase. If the user who is connecting already exists before loading
/, a query will be made to Firebase to bring the data related to that email, and the information will be shown on the form with the
edit flag to indicate to the application that by submitting the form you will be modifying the document of an existing user.
Currently our
home function is defined as follows:
```python
@app.route('/', methods=['GET', 'POST'])
def home():
    if get_session("user") != None:
        return render_template("home.html", user = get_session("user"))
    else:
        return render_template("login.html", client_id=os.getenv("GOOGLE_CLIENT_ID"),
                               site_url=os.getenv("SITE_URL"))
```
With the additional code, it should look like this:
```python
@app.route('/', methods=['GET', 'POST'])
def home():
    global scouts
    if get_session("user") != None:
        if request.method == "POST":
            extra_contacts = request.form.getlist('extra_contacts[]')
            if request.form.get("cmd") == "new":
                nightscouts.add(scout(email=get_session("user")["email"],
                                      username=get_session("user")["username"],
                                      nightscout_api=request.form.get('nightscout_api'),
                                      phone=request.form.get('phone'),
                                      emerg_contact=request.form.get('emerg_contact'),
                                      extra_contacts=extra_contacts))
            else:
                nightscouts.update({u'nightscout_api': request.form.get('nightscout_api'),
                                    u'phone': request.form.get('phone'),
                                    u'emerg_contact': request.form.get('emerg_contact'),
                                    u'extra_contacts': extra_contacts},
                                   request.form.get('id'))
        return render_template("home.html", user = get_session("user"),
                               scout = nightscouts.get_by_email(get_session("user")["email"]))
    else:
        return render_template("login.html", client_id=os.getenv("GOOGLE_CLIENT_ID"),
                               site_url=os.getenv("SITE_URL"))
```
Basically, we've added a conditional that assesses if the method used to access
/ is
POST. If so, we can assume that the request has been made from a form.
In this case, we would be talking about the user configuration form. If the method used is
POST, we ask if the flag (in this case
cmd) is
new. If so, the
add method will be executed by adding the user's new document.
Note: We get
username directly from the session, as this is data obtained from Google auth.
If the flag detected is
edit, the new values of the form are received and the
update method of the
scouts class is executed to update the document of the connected user.
Note:
username is not modified, as it is data that comes exclusively from Google.
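Stripped of Flask and Firestore, the new/edit dispatch described above boils down to a small branch on the `cmd` flag. The sketch below is only an illustration: a plain dict stands in for the Firestore collection, and the field set is trimmed to phone and extra contacts:

```python
# Framework-free sketch of home()'s POST handling: cmd == 'new' inserts
# a document; any other cmd updates the document whose id was posted.
# `db` is a plain dict standing in for the scouts collection.
def handle_post(form, db):
    data = {'phone': form.get('phone'),
            'extra_contacts': form.get('extra_contacts[]', [])}
    if form.get('cmd') == 'new':
        doc_id = 'doc%d' % (len(db) + 1)  # Firestore would auto-generate this id
        db[doc_id] = data
        return doc_id
    db[form['id']].update(data)
    return form['id']

db = {}
new_id = handle_post({'cmd': 'new', 'phone': '111'}, db)
handle_post({'cmd': 'edit', 'id': new_id, 'phone': '222'}, db)
```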
Regardless of the method used, in
render_template we pass all the data of the connected user using the variable
scout = nightscouts.get_by_email(get_session("user")["email"]), to fill the form with the configuration information if the user exists. If the user doesn't exist yet, the form fields will be empty.
Now let's edit the
home.html file. This jinja template is loaded only if the user has previously logged in. Currently, we only have one line of code within the block
content indicating the connected user and the link for logout. Just below this, we will add the code that will receive the application data for the user.
The block should look like this:
```html
{% block content %}
<div id="user" class="logged">
  you are logged in as <b>{{ user.username }}</b> -
  <a id="logout" class="teal-text" href="/logout">Logout</a>
</div>
<div class="row">
  <div class="col s8 offset-s2">
    <div class="card blue-grey darken-1">
      <div class="card-content white-text">
        <h1 class="card-title">Your Scout Profile</h1>
        <div class="row">
          <form id="scout-form" class="col s12" method="POST" action="/">
            <input type="hidden" name="cmd" value="{{ 'new' if scout == None else 'edit' }}" />
            {% if scout != None %}
            <input type="hidden" name="id" value="{{ scout.id }}" />
            {% endif %}
            <div class="row">
              <div class="col s12 input-field">
                <input placeholder="E.g." value="{{ scout.nightscout_api }}"
                       id="nightscout_api" name="nightscout_api" type="text"
                       class="validate" required />
                <label for="nightscout_api" class="white-text">
                  Enter NightScout Api Entries Url (Entries url finish with <b>entries.json</b>)
                </label>
              </div>
            </div>
            <div class="row">
              <div class="col s12 input-field">
                <i class="material-icons prefix">phone</i>
                <input placeholder="E.g. 50588888888" id="phone" name="phone"
                       value="{{ scout.phone }}" type="tel" class="validate"
                       pattern="[0-9]+" required />
                <label for="phone" class="white-text">Enter your mobile number</label>
              </div>
            </div>
            <div class="row">
              <div class="col s12 input-field">
                <i class="material-icons prefix">phone</i>
                <input placeholder="E.g. 50588888888" id="emerg_contact" name="emerg_contact"
                       value="{{ scout.emerg_contact }}" type="tel" class="validate"
                       pattern="[0-9]+" required />
                <label for="emerg_contact" class="white-text">Enter emergency contact</label>
              </div>
            </div>
            <div class="row">
              <div class="col s12 add-contacts-container">
                <div class="row">
                  <div class="col s6">
                    <label class="white-text">Add 5 additional contact numbers:</label>
                  </div>
                  <div class="col s6 add_contact">
                    <div class="right-align">
                      <a onclick="add_contact()" class="btn waves-effect waves-light red">
                        <i class="material-icons">group_add</i>
                      </a>
                    </div>
                    <br />
                  </div>
                </div>
                <div class="divider"></div>
                <div class="contact_numbers" id="contact_numbers"></div>
              </div>
            </div>
            <div class="row">
              <div class="col s12 right-align">
                <button class="waves-effect waves-light btn-small" type="submit">
                  <i class="material-icons left">save</i> Save
                </button>
              </div>
            </div>
          </form>
        </div>
      </div>
    </div>
  </div>
</div>
{% endblock %}
```
If the
scout variable has a value of
None, the flag
cmd will have the value
new—otherwise the value will be
edit. The values of the text fields receive
{{scout.phone}}, so when
scout is
None jinja will print empty. When
scout exists the id is received in a hidden type field. This field should not be modified since it is the unique identifier of the document in Firebase. The
add_contact() JavaScript function is not defined yet; we will add it next.
Let's add some more code in
home.html, just after
{% endblock %}. In this case, we will use the
script block that we define in
layout.html to define the necessary JavaScript functions:
```html
{% block script %}
<script language="javascript">
  var contact_numbers = null;
  var container = null;
  var incremental = 0;

  window.addEventListener('load', function(event){
    container = document.getElementById("contact_numbers");
    contact_numbers = container.getElementsByClassName("contact_number");
    var extra_contacts = validate({{ scout.extra_contacts|safe }});
    //var extra_contacts = {{ scout.extra_contacts|safe if scout else "Array()" }};
    for(var p = 0; p < extra_contacts.length; p++){
      add_contact(extra_contacts[p]);
    }
  });

  function validate(value){
    if(value !== null && value !== undefined)
      return value;
    else
      return Array();
  }

  function add_contact(value){
    if(contact_numbers.length < 5){
      incremental += 1;
      var div = document.createElement("div");
      div.className = 'row contact_number';
      div.setAttribute('id', 'id_' + incremental);
      if(!(value != null && value !== undefined)){
        value = '';
      }
      div.innerHTML = '<i class="material-icons prefix">contact_phone</i><input placeholder="E.g. 50588888888" name="extra_contacts[]" value="'+value+'" type="tel" class="validate" pattern="[0-9]+"><i class="material-icons prefix sufix delete" onclick="delete_contact(\'id_'+incremental+'\')">delete</i>';
      container.appendChild(div);
      contact_numbers = container.getElementsByClassName("contact_number");
    }else{
      M.toast({html: 'Sorry, you can only add a maximum of 5 contact numbers'});
    }
  }

  function delete_contact(id){
    contact_number = document.getElementById(id);
    container.removeChild(contact_number);
  }
</script>
{% endblock %}
```
In the
script block we define three functions and the event listener of
onload page. The
validate function evaluates whether the value of
{{scout.extra_contacts|safe}} passed by jinja is empty. If that's the case, then
validate returns an empty
Array(); otherwise it returns the
extra_contacts array.
If, when loading the page,
extra_contacts contains information, the function
add_contact is executed for each position of the
extra_contacts array, passing the value of the phone number to the value attribute of the input.
The function
add_contact() dynamically adds a text field where the user can type an additional telephone number, up to the five allowed. Each input will have an icon to be clicked on to eliminate the record. This same function evaluates whether the number of allowed contacts has been reached. In that case,
M.toast of materialize is used to display an alert to the user indicating that they cannot add more than five contact numbers. This function is triggered when loading
/ and when clicking on the button to add telephone numbers.
The
delete_contact() function removes the record created by
add_contact(). The function is triggered from the
onclick event of the delete icon added by
add_contact for each input.
With these last details, we have concluded configuring Google auth login and Firebase/Firestore for storage and reading data.
At this point, we should be able to log in with Google, add our Nightscout configuration from the form, save our data in Firestore, modify our configuration, and properly log out.
To Be Continued!
The next step will be to set up/configure the app in Nexmo and to create a scheduler in Python for the Nightscout alerts. Check back in next week to read Part Two of this tutorial. | https://developer.vonage.com/blog/2020/02/24/nightscout-notification-nexmo-dr | CC-MAIN-2022-27 | en | refinedweb |
Hi, I'm using the profiling tool VTune Amplifier. What I'm interested in is parallel programming, at both the thread and instruction levels. The number of cores in my server is 16, and it supports AVX instructions (but not AVX2 or AVX-512).
lscpu gives:

Model: 62
Model name: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz
Stepping: 4
CPU MHz: 1200.433
CPU max MHz: 3400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5201.92
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Flags: … f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
I'm profiling the resnet18 training code below. I've omitted the code that prints loss and accuracy.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import torchvision.models as models

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
#transform_test = transforms.Compose([
#    transforms.ToTensor(),
#    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
#])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128,
                                          shuffle=True, num_workers=0)
#testset = torchvision.datasets.CIFAR10(root='./data', train=False,
#                                       download=True, transform=transform_test)
#testloader = torch.utils.data.DataLoader(testset, batch_size=100,
#                                         shuffle=False, num_workers=2)

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# define network
net = models.resnet18(pretrained=False)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)

for epoch in range(15):
    # ... (forward/backward/step body omitted)
    # calculate loss
    running_loss += loss.item()
```
In my profiling result, I found that the AVX dynamic code paths (which are the hotspots in my code) are mostly executed by 16 threads. (A total of 48~49 threads are running, but 16 of them terminate before training, and another 16 are executing other code.) I have some interesting results: as I increase the number of training epochs, some CPU cores stop being used. I attached result images below with a Google Drive link. Files numbered 1~4 are for epoch 5, 15, 25, and 50, respectively.
The CPU Utilization metrics are 58.3%, 62.1%, 53%, and 49.4%, respectively. One thing worth noting: for epoch 50, I profiled twice because of an extremely low metric the first time (31.1%). The result image for that run is in the link above, with the file name numbered 5.
Is there anyone who could give me some insight about these results? | https://discuss.pytorch.org/t/some-of-cpu-cores-dont-work-when-i-increase-training-epochs/57439 | CC-MAIN-2022-27 | en | refinedweb |
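In case it matters for reproducing this, the CPU thread pools involved can be pinned before the run. The snippet below only shows the environment-variable side; the value 16 simply matches the core count above and is an example, not a recommendation:

```python
import os

# Cap the OpenMP/MKL worker pools. These variables must be set *before*
# importing torch for them to take effect on the intra-op thread pool.
os.environ['OMP_NUM_THREADS'] = '16'
os.environ['MKL_NUM_THREADS'] = '16'

# In-process, the equivalent knob (after `import torch`) would be
# torch.set_num_threads(16); both approaches bound how many cores
# the AVX-heavy kernels can occupy.
print(os.environ['OMP_NUM_THREADS'])
```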
```sql
SELECT
    ad_group_ad_report."campaign.id" AS "ad_group_ad_report.campaign_id",
    avg(ad_group_ad_report."metrics.active_view_ctr") AS "ad_group_ad_report.active_view_ctr_avg",
    sum(ad_group_ad_report."metrics.gmail_forwards") AS "ad_group_ad_report.gmail_forwards_sum"
FROM
    "your-username~google-ads".ad_group_ad_report
GROUP BY 1
ORDER BY 1
```
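If you want to sanity-check the shape of this aggregation locally, the same avg/sum `GROUP BY` pattern can be run against an in-memory SQLite table. Table and column names are shortened and the rows are invented; Splitgraph itself speaks PostgreSQL, so this is only a structural illustration:

```python
import sqlite3

# Toy reproduction of the avg/sum GROUP BY shape of the query above.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE report (campaign_id INTEGER, ctr REAL, forwards INTEGER)')
con.executemany('INSERT INTO report VALUES (?, ?, ?)',
                [(1, 0.2, 3), (1, 0.4, 5), (2, 0.1, 7)])
rows = con.execute('''
    SELECT campaign_id, AVG(ctr) AS ctr_avg, SUM(forwards) AS forwards_sum
    FROM report
    GROUP BY 1
    ORDER BY 1
''').fetchall()
# One row per campaign: campaign 1 averages its two CTRs and sums forwards.
```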
Here are all the tables you will be able to access when you use Splitgraph to query Google Ads data. We have also listed some useful queries that you can run.
```yaml
repositories:
- namespace: CHANGEME
  repository: airbyte-google-ads
  external:
    credential: airbyte-google-ads
    plugin: airbyte-google-ads
    # Plugin-specific parameters matching the plugin's parameters schema
    params:
      customer_id: 6783948572,5839201945 # REQUIRED. Customer ID(s). Comma separated list of (client) customer IDs. Each customer ID must be specified as a 10-digit number without dashes. More instruction on how to find this value in our docs. Metrics streams like AdGroupAdReport cannot be requested for a manager account.
      start_date: '2017-01-25' # REQUIRED. Start Date. UTC date and time in the format 2017-01-25.
      end_date: '2017-01-30' # End Date (Optional). UTC date and time in the format 2017-01-25. Any data after this date will not be replicated.
      custom_queries: # Custom GAQL Queries (Optional).
      - query: SELECT segments.ad_destination_type, campaign.advertising_channel_sub_type FROM campaign WHERE campaign.status = 'PAUSED' # Custom Query. A custom defined GAQL query for building the report. Should not contain segments.date expression because it is used by incremental streams. See Google's query builder for more information.
        table_name: '' # Destination Table Name. The table name in your destination database for the chosen query.
      login_customer_id: '7349206847' # Login Customer ID for Managed Accounts (Optional). If your access to the customer account is through a manager account, this field is required and must be set to the customer ID of the manager account (10-digit number without dashes).
      conversion_window_days: 14 # Conversion Window (Optional). A conversion window is the period of time after an ad interaction (such as an ad click or video view) during which a conversion, such as a purchase, is recorded in Google Ads. For more information, see Google's documentation.
credentials:
  airbyte-google-ads: # This is the name of this credential that "external" sections can reference.
    plugin: airbyte-google-ads
    # Credential-specific data matching the plugin's credential schema
    data:
      credentials: # REQUIRED. Google Credentials.
        developer_token: '' # REQUIRED. Developer Token. Developer token granted by Google to use their APIs. More instruction on how to find this value in our docs.
        client_id: '' # REQUIRED. Client ID. The Client ID of your Google Ads developer application. More instruction on how to find this value in our docs.
        client_secret: '' # REQUIRED. Client Secret. The Client Secret of your Google Ads developer application. More instruction on how to find this value in our docs.
        refresh_token: '' # REQUIRED. Refresh Token. The token for obtaining a new access token. More instruction on how to find this value in our docs.
        access_token: '' # Access Token. Access Token for making authenticated requests. More instruction on how to find this value in our docs.
```
Service Init Listener
VaadinServiceInitListener can be used to configure
RequestHandler,
IndexHtmlRequestListener and DependencyFilter objects.
You can also use it to dynamically register routes during application startup.
The listener gets a
ServiceInitEvent, which is sent when a Vaadin service is initialized.
```java
public class ApplicationServiceInitListener
        implements VaadinServiceInitListener {

    @Override
    public void serviceInit(ServiceInitEvent event) {
        event.addIndexHtmlRequestListener(response -> {
            // IndexHtmlRequestListener to change the bootstrap page
        });

        event.addDependencyFilter((dependencies, filterContext) -> {
            // DependencyFilter to add/remove/change dependencies sent to
            // the client
            return dependencies;
        });

        event.addRequestHandler((session, request, response) -> {
            // RequestHandler to change how responses are handled
            return false;
        });
    }
}
```
In a Spring Boot project, it is sufficient to register this listener by adding the
@Component annotation on the class.
In plain Java projects, the listener should be registered as a provider via the Java SPI loading facility.
To do this, you should create the META-INF/services resource directory and a provider configuration file with the name com.vaadin.flow.server.VaadinServiceInitListener.
This is a text file and should contain the fully qualified name of the
ApplicationServiceInitListener class on its own line.
It allows the application to discover the
ApplicationServiceInitListener class, instantiate it and register it as a service init listener for the application.
The content of the file should look like this:
com.mycompany.ApplicationServiceInitListener | https://vaadin.com/docs/latest/advanced/service-init-listener | CC-MAIN-2022-27 | en | refinedweb |
Closed Bug 701182 Opened 10 years ago Closed 9 years ago
Additional descriptive text for Help > About $PRODUCTNAME in Aurora and Nightly
Categories
(Firefox :: General, defect)
Tracking
()
Firefox 11
People
(Reporter: me, Assigned: tchevalier)
References
(Blocks 1 open bug)
Details
(Whiteboard: [good first bug][mentor=margaret])
Attachments
(2 files, 10 obsolete files)
The following text should be added to the About boxes in both Aurora and Nightly, in preparation for the upcoming Telemetry-by-default in these testing releases:

$PRODUCTNAME is experimental software and things break from time to time. $PRODUCTNAME automatically sends useful test information back to Mozilla so that we can make Firefox better.
Whiteboard: [good first bug][mentor=margaret]
Component: Menus → General
QA Contact: menus → general
Just a comment on the "good first bug" whiteboard: this needs to land before the next merge.
Okay, I can be on the hook for doing it if no one else does - I just figured it would be an easy way for a new person to get involved.
Hi everybody ! I'm just a beginner, and I'm looking to solve a first bug in order to progress and learn. I can, if you like, try to provide a patch tonight. Feel free to tell me if I do something clumsy, I have not yet become accustomed. Also, I probably would ask you questions right here.
I need some help. I identified the files where I'm going to apply my patches:

mozilla-central\browser\base\content\aboutDialog.js
mozilla-central\browser\base\content\aboutDialog.xul

But I haven't found where I should place the text itself. Are the texts defined somewhere directly in mozilla-central?
Ok, I've found it ! :) mozilla-central\browser\locales\en-US\chrome\browser\aboutDialog.dtd I'm working on the patch.
First patch; I have not tested it yet. I'm running a compilation. I am sure that everything is not perfect, I await your comments!
Looks good, but maybe I should remove the space between the two sentences? Then we will need to update mozilla-central\browser\branding\nightly\content\about-background.png
I've improved the first patch: I corrected the "else" which had incorrect syntax, and the comment which was illogical.

PATCH V1:
- // Include the build ID if this is an "a#" (nightly or aurora) build
+ // Include the build ID and warning if this is an "a#" (nightly or aurora) buildDescExperimental").hidden = true;
+ document.getElementById("warningDescDatas").hidden = true;
+ }

PATCH V2:
- // Include the build ID if this is an "a#" (nightly or aurora) build
+ // Include the build ID if this is an "a#" (nightly or aurora) build and hide warning if it isn'tDesc").hidden = true;
+ }

Also, I've deleted the carriage return between the two sentences, in order to have a single paragraph. I will provide a screenshot of the result.
Attachment #574131 - Attachment is obsolete: true
Here is the screenshot of the result.
Comment on attachment 574191 [details] [diff] [review]
Patch to add warnings in Nightly and Aurora in About window V2

Review of attachment 574191 [details] [diff] [review]:
-----------------------------------------------------------------

Congratulations on your first patch. I've looked it over and provided some general feedback on your patch so far. When you are happy with the state of the patch, request feedback from one of the browser peers:

::: browser/base/content/aboutDialog.js
@@ +73,5 @@
> document.getElementById("version").textContent += " (" + buildDate + ")";
> }
> + else {
> + document.getElementById("warningDesc").hidden = true;
> + }

Switching the warning description to be hidden by default will require adjusting this code to only show the warning description when running nightly or aurora.

::: browser/base/content/aboutDialog.xul
@@ +121,5 @@
> </description>
> #endif
> + <description class="text-blurb" id="warningDesc">
> + &warning.desc;
> + </description>

I think we should make this |hidden="true"| by default since it will be hidden for the majority of our users.
Thanks a lot for your feedback, Jared ! :) It's not much, but I'm really happy to finally make my contribution to the Mozilla project. I have considered your comments, so I asked to me as you have suggested, a review for this patch V3.
Attachment #574191 - Attachment is obsolete: true
Attachment #574322 - Flags: review?(dolske)
Comment on attachment 574322 [details] [diff] [review] Patch to add warnings in Nightly and Aurora in About window V3 Thanks for the patch, indeed! I just have two quick drive-by comments: 1) It looks like you used tabs for indentation, but we generally use spaces, so you'll want to update that for consistency (I've changed the setting in my text editor to use spaces instead of tab characters). 2) This additional text increases the height of the about dialog. Because of the dark black overlay at the bottom of box it's hard to tell that the background image is cut off, but it is. This doesn't strike me as a huge deal, since this is just for Nightly and Aurora, but maybe we can file a follow-up bug to have a taller background image.
Oops, indeed, the tabs have crept into my code, thank you for the tip of the editor, it's not stupid! I'll post the new patch. I have already planned to file a bug to the bottom of the box, as soon as my patch is validated by a reviewer (the final height of the box depends on my patch). Maybe you want me to file it now?
I removed tab characters
Attachment #574322 - Attachment is obsolete: true
Attachment #574322 - Flags: review?(dolske)
Attachment #574371 - Flags: review?(dolske)
Comment on attachment 574371 [details] [diff] [review]
Patch to add warnings in Nightly and Aurora in About window V4

Thanks for the patch Théo! A few comments:

- you can set hidden="true" on the warningDesc <description>, to avoid the need to hide it programmatically in init()
- there's a leftover "Firefox" in the warning.desc string that should be &brandShortName;
- the "Mozilla" in warning.desc should probably be changed to &vendorShortName;
- this whole warning about telemetry should probably be put behind MOZ_TELEMETRY_REPORTING ifdef, to match the actual telemetry code. This makes the patch slightly more complicated since you need to split the description in two somehow, but it shouldn't be too difficult.

r- since we need to sort out these issues. Do feel free to find me on IRC (I'm "gavin" in #fx-team) or ask questions here if you need any clarification.
Attachment #574371 - Flags: review?(dolske) → review-
(In reply to Gavin Sharp (use gavin@gavinsharp.com for email) from comment #15) > - the "Mozilla" in warning.desc should probably be changed to > &vendorShortName; Hmm, or maybe it should use toolkit.telemetry.server_owner.
Thank you for the improvements, Gavin! I tried to use &toolkit.telemetry.server_owner; but it doesn't work after compilation (I don't know why), so I used &vendorShortName;, which works. Another thing: on my build, with telemetry activated, the MOZ_TELEMETRY_REPORTING ifdef isn't recognized. I guess it's because MOZ_TELEMETRY_REPORTING is defined ONLY on Mozilla official builds, right? Apart from this, everything seems to work.
Attachment #574371 - Attachment is obsolete: true
Attachment #575270 - Flags: review?(gavin.sharp)
Comment on attachment 575270 [details] [diff] [review] Patch to add warnings in Nightly and Aurora in About window V5 Could we get a little wordsmithing from UX? This makes the about dialog feel like it's got an awful lot of text in it, and I'd like to think that our cunning wordsmiths can pare that down. Also, what's the reasoning for wanting to put this in the about dialog, instead of something like about:rights or about:license?
I agree that this is starting to be a bit excessive. CCing our writer to see if there's a way to improve the situation.
Here's the bare bones of what I think this could say: $PRODUCTNAME is experimental and unstable. It automatically sends test information back to Mozilla to help make Firefox better. Mozilla is a global community working together to keep the Web open, public and accessible to all. We should probably update this for Beta and GA as well: $PRODUCTNAME is designed by Mozilla, a global community working together to keep the Web open, public and accessible to all.?
(In reply to Matej Novak [:matej] from comment #20) >? Bug 656518 was filed about that, but it doesn't look like anyone did any work in there. If you provide more details in there, it should probably be easy for someone to pick up. Maybe Théo would help you out :)
Yep, I would help with pleasure, you just need to tell me what should I do, I'm afraid of not knowing what to do according to the latest comment. To put it in a nutshell, I should update all $PRODUCTNAME, in bug 656518, and what do I do in this bug? Do we need to wait to decide where put the warnings, or I'll just have to update the current patch with Matej's proposition?
(In reply to Théo Chevalier from comment #22) > Yep, I would help with pleasure, you just need to tell me what should I do, > I'm afraid of not knowing what to do according to the latest comment. Awesome! We'll definitely help you out :) > To put it in a nutshell, I should update all $PRODUCTNAME, in bug 656518, > and what do I do in this bug? Yeah, updating $PRODUCTNAME and wordmarks should happen in bug 656518 (we should discuss an approach to that over in that bug, just to keep these tasks separate). In this bug, I believe your current approach is still correct, but we're just increasing the scope a bit to also update the community strings, so that the text of the about dialog matches what Matej specified in comment 20. > Do we need to wait to decide where put the warnings, or I'll just have to > update the current patch with Matej's proposition? I think we'll still be putting the warnings above the community text in Nightly/Aurora like you're doing in your current patch. The one thing that may be complicated is changing the community blurb based on the channel, but we could probably use the same approach to do that.
Patch updated from Matej proposition. So, now, background image height is ok (at least in en-US).
Attachment #574192 - Attachment is obsolete: true
Attachment #575270 - Attachment is obsolete: true
Attachment #575270 - Flags: review?(gavin.sharp)
Attachment #576591 - Flags: review?(limi)
Result of the patch on Firefox Nightly with telemetry enabled. (Just focus on the warnings and the community strings, new wordmark and $PRODUCTNAME are relative to bug 656518)
Assignee: nobody → theo.chevalier11
Status: NEW → ASSIGNED
I have assigned this to Théo as it appears that he is working on this. I'm trying to keep the status of our mentored bugs up to date. Please unassign if this was done in error.
Limi - Looks like the patch is waiting on your review. Can you please take a look so that we can land this change before the cut over to Aurora?
Comment on attachment 576592 [details] Result of the patch V6 I think what we need is ui-review from Limi, not code review, so I'm flagging the screenshot.
Attachment #576592 - Flags: ui-review?(limi)
Comment on attachment 576591 [details] [diff] [review]
Patch to add warnings in Nightly and Aurora in About window V6

>diff --git a/browser/base/content/aboutDialog.js b/browser/base/content/aboutDialog.js
>+ document.getElementById("warningDesc").hidden = false;
>+ document.getElementById("communityExperimentalDesc").hidden = false;

Can you put both of these descriptions in a <vbox id="experimental">, and just hide that, to avoid needing to toggle both individually? You could also use a <deck> for this and switch its selectedIndex, but that's probably overkill.

>diff --git a/browser/locales/en-US/chrome/browser/aboutDialog.dtd b/browser/locales/en-US/chrome/browser/aboutDialog.dtd
>+<!-- LOCALIZATION NOTE (community.Exp.MozillaLink): This is a link title that links to. -->
>+<!ENTITY community.Exp.MozillaLink "&vendorShortName;">
>+<!ENTITY community.Exp.Middle2 " is a ">
>+<!-- LOCALIZATION NOTE (community.Exp.CreditsLink): This is a link title that links to about:credits. -->
>+<!ENTITY community.Exp.CreditsLink "global community">
>+<!ENTITY community.Exp.End2 " working together to keep the Web open, public and accessible to all.">

You should keep the string names lowercase to match the other strings in this file. You don't need the "2" suffix for new strings, those are just there because these strings were changed and they needed to change the string name accordingly.

>-<!ENTITY community.end2 " working together to make the Internet better. We believe that the Internet should be open, public, and accessible to everyone without any restrictions.">
>+<!ENTITY community.end2 " working together to keep the Web open, public and accessible to all.">

This string name needs to change, since the string value is changing. I recommend using "community.end3".

r- for these minor changes, but I'll r+ a patch that addresses them!
Attachment #576591 - Flags: review?(gavin.sharp) → review-
Corrected ! :)
Attachment #576591 - Attachment is obsolete: true
Attachment #581304 - Flags: review?(gavin.sharp)
(Tab char removed.)
Attachment #581304 - Attachment is obsolete: true
Attachment #581304 - Flags: review?(gavin.sharp)
Attachment #581307 - Flags: review?(gavin.sharp)
Comment on attachment 576592 [details]
Result of the patch V6

This looks fine, although I think "is experimental and unstable" is a bit strong, and something like "may be unstable" would be better — but don't let this block landing the patch. :)
Attachment #576592 - Flags: ui-review?(limi) → ui-review+
No problem Alex, I can change it immediately :)
Following Alex's comment (comment 32), I've changed "&brandShortName; is experimental and unstable." into: "&brandShortName; is experimental and may be unstable."
Attachment #581307 - Attachment is obsolete: true
Attachment #581307 - Flags: review?(gavin.sharp)
Attachment #582907 - Flags: review?(gavin.sharp)
Thanks again for the patch, Théo! I've made a few little tweaks:

- added some metadata to the diff (see for details on how to do that generally)
- added a community.exp.start, even though it's blank for en-US, because other locales might require it.
- fixed a small mistake in addressing my last comment - you needed to change the existing string's name to community.end3, not the new one that you're adding.
- added some additional detail to the localization notes

With those changes, I'm granting r+, and I'll go ahead and push this to inbound for you.
Attachment #582907 - Attachment is obsolete: true
Attachment #582907 - Flags: review?(gavin.sharp)
Attachment #583015 - Flags: review+

This should be merged to inbound within 24 hours - once that happens we'll mark this FIXED and it should be in the next Nightly!
Target Milestone: --- → Firefox 11
Thanks Gavin, great job! :) It's good news. With this patch, the background is broken again. I'll open a bug to request a taller background (I'll calculate exactly how tall the new one should be; maybe two extra lines will be a good margin.) What about 699806 and 656518? Too late for this merge, no? Or will 699806 follow this patch because it's related? BTW, your article about metadata is very interesting; I'll take some notes ;) And if it's not already done, it should be on MDN.
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Isn't "test information" a bit too cryptic, also considering that telemetry always (AFAIK) refers to "performance date"?
(In reply to flod (Francesco Lodolo) from comment #39)
> Isn't "test information" a bit too cryptic, also considering that telemetry always (AFAIK) refers to "performance date"?

This is exactly how Telemetry is described in the user prompt:

Will you help improve %1$S by sending anonymous information about performance, hardware characteristics, feature usage, and browser customizations to %2$S?

So telemetry is wider than just sending performance data, I guess.
(In reply to flod (Francesco Lodolo) from comment #39)
> Isn't "test information" a bit too cryptic, also considering that telemetry always (AFAIK) refers to "performance date"?

"Test information" is the language that we've decided to use to encompass all of telemetry's data-gathering.
Flair is a powerful open-source library for natural language processing. It is mainly used for tasks such as text extraction, word embeddings, named entity recognition, part-of-speech tagging, and text classification, and it ships pre-trained models for all of these NLP tasks. It also supports biomedical text: more than 32 biomedical datasets already work with the Flair library for natural language processing tasks. Flair also integrates easily with the PyTorch framework for document and sentence embeddings.
Flair is mainly developed by Humboldt University of Berlin and friends. Humboldt University of Berlin maintains the Flair library, which has already been used in more than a hundred industry and research projects.
Github:
Research Paper:
Let's look at Flair's performance on NLP tasks such as named entity recognition, part-of-speech tagging, and chunking, with their accuracy shown in the table below.
Installation:
Using pip:
pip install flair
Using conda:
conda install -c bioconda flair
Flair Model:
First, import Sentence from flair.data and the SequenceTagger model from flair.models. Make a sentence using the Sentence object, load the named entity recognition tagger with SequenceTagger, and then run it over the sentence.
For an example of the flair model, see the code below.
from flair.data import Sentence
from flair.models import SequenceTagger

# make a sentence
sentence = Sentence('I love India .')

# load the NER tagger
tagger = SequenceTagger.load('ner')

# run NER over sentence
tagger.predict(sentence)
Flair has the following pre-trained models for NLP Tasks:
- Name-Entity Recognition
- Parts-of-Speech Tagging
- Text Classification
- Training Custom Models
Tokenization:
In the flair library, there is a predefined tokenizer based on the segtok library of Python. To tokenize a sentence, set the "use_tokenizer" flag to true; set it to false if you do not want tokenization. We can also define the label of each sentence and its related topic using the add_tag function.
For example, see the code below:
from flair.data import Sentence

# Make a sentence object by passing an untokenized string and the 'use_tokenizer' flag
untokenized_sentence = Sentence('The grass is green.', use_tokenizer=False)

# Print the object to see what's in there
print(untokenized_sentence)
In this case, no tokenization occurs because use_tokenizer is false.
Word Embeddings:
Flair supports several kinds of word embeddings. Here we will look at Flair's own embeddings in detail, along with their code implementation.
Flair Embedding:
Flair embeddings are contextual string embeddings that capture latent syntactic-semantic information going beyond standard word embeddings. The main differences are:

(1) they are trained without any explicit notion of vocabulary, and thus fundamentally model words as character sequences;

(2) they are contextualized by their surrounding text, meaning that the same word will have distinct embeddings depending on its contextual use.
Code:
from flair.data import Sentence  # needed to build the sentence below
from flair.embeddings import FlairEmbeddings

# init embedding
flair_embedding_forward = FlairEmbeddings('news-forward')

# create a sentence
sentence = Sentence('The grass is green .')

# embed words in sentence
flair_embedding_forward.embed(sentence)
Training a Text Classification Model:
We are training a text classifier over the TREC-6 corpus, using a combination of simple GloVe embeddings and Flair embeddings.
In this code, we import Corpus and TREC_6 for the dataset; WordEmbeddings, FlairEmbeddings, and DocumentRNNEmbeddings for the embeddings; and TextClassifier and ModelTrainer for the model and the training loop.
Code:
from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import WordEmbeddings, FlairEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# 1. get the corpus
corpus: Corpus = TREC_6()

# 2. create the label dictionary
label_dict = corpus.make_label_dictionary()

# 3. make a list of word embeddings
word_embeddings = [WordEmbeddings('glove')]

# 4. initialize document embedding by passing a list of word embeddings
# Can choose between many RNN types (GRU by default, to change use rnn_type parameter)
document_embeddings = DocumentRNNEmbeddings(word_embeddings, hidden_size=256)

# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict)

# 6. initialize the text classifier trainer
trainer = ModelTrainer(classifier, corpus)

# 7. start the training
trainer.train('resources/taggers/trec',
              learning_rate=0.1,
              mini_batch_size=32,
              anneal_factor=0.5,
              patience=5,
              max_epochs=150)
Summary
We learned about the Flair open-source library for NLP problems. We also covered the NLP tasks Flair solves and their use in industry, along with some important Flair pipelines and their code for building on pre-trained NLP models.
Even in debug mode, wing still fails to give the completion suggestions for aiida.
Hi,
Today I ran into a very strange thing. Several days ago I reported that Wing could do runtime completion for me, but today I find that it fails to do so.
See the following minimal example code:
from aiida import load_profile
load_profile()

# * pk 17 - q-e.git-pw@localhost
codename = 'q-e.git-pw@localhost'

from aiida.orm import Code, load_node
from aiida.plugins import CalculationFactory

code = Code.get_from_string(codename)
# code = Code.get(label='q-e.git-pw', machinename='localhost')
# code = load_node(17)

PwCalculation = CalculationFactory('quantumespresso.pw')
builder = PwCalculation.get_builder()
If I put the cursor at the last line and debug to that line, then check the runtime completion with:
PwCalculation.g<tab>
It gives nothing.
Regards
Please provide a working example we can try here. The above doesn't work on its own because load_profile() fails and if I comment out that and the call to Code.get_from_string then it fails because I don't have entry point quantumespresso.pw. | https://ask.wingware.com/question/1820/even-in-debug-mode-wing-still-fails-to-give-the-completion-suggestions-for-aiida/ | CC-MAIN-2021-17 | en | refinedweb |
Spring singletons are not Java singletons. In this post, let's go over the difference between the Spring singleton scope and the Singleton pattern.
Singleton scope is the default scope in Spring. The Spring container creates exactly one instance of the object defined by each such bean definition.
Many times, we start comparing this design with the Singleton pattern as defined in the Gang of Four (GoF) book.
Singleton scope in Spring is not the same as the singleton pattern. If you pay close attention, the two are entirely different designs in terms of how they define "singleton": Spring scopes the single instance to a bean id within a container, while the classic pattern scopes it to a ClassLoader.
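To make the contrast concrete, here is a minimal sketch of the classic GoF singleton (the class and field names are invented for illustration): the class itself guards instance creation, which is what pins it to one instance per ClassLoader.

```java
// Classic GoF singleton: the class guards instance creation itself,
// so there is exactly one instance per ClassLoader.
public class AppSettings {
    private static final AppSettings INSTANCE = new AppSettings();

    // Private constructor prevents instantiation from outside.
    private AppSettings() {
    }

    public static AppSettings getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls return the same cached object.
        System.out.println(AppSettings.getInstance() == AppSettings.getInstance()); // prints: true
    }
}
```

Spring's singleton scope, by contrast, needs no such guard inside the class: the container does the caching, keyed by bean id, which is why two bean definitions of the same class (as in the example below) yield two instances.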
A singleton instance in Spring is stored in a cache of such singleton beans, and all subsequent requests and references for that named bean return the cached object.
Let’s work on an example to understand it more clearly.
public class CustomerAccount {

    private String name;

    public CustomerAccount() {
    }

    public CustomerAccount(String name) {
        this.name = name;
    }

    // accessor used by the test below
    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        return "CustomerAccount{" +
                "name='" + name + '\'' +
                '}';
    }
}
Spring Boot main method
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean(name = "bean1")
    public CustomerAccount customerAccount() {
        return new CustomerAccount("Test User 1");
    }

    @Bean(name = "bean2")
    public CustomerAccount customerAccount1() {
        return new CustomerAccount("Test User 2");
    }
}
Let's understand this code: we define two beans of the same class, CustomerAccount, under the ids "bean1" and "bean2". How many instances does the Spring IoC container create for the above example?
To get an answer to the above questions, let’s create a unit test case for our example.
@RunWith(SpringRunner.class)
@SpringBootTest
public class SingletonScopeTest {

    private static Logger log = LoggerFactory.getLogger(SingletonScopeTest.class);

    @Resource(name = "bean1")
    CustomerAccount account1;

    @Resource(name = "bean1")
    CustomerAccount duplicateAccount;

    @Resource(name = "bean2")
    CustomerAccount account2;

    @Test
    public void testSingletonScope() {
        log.info(account1.getName());
        log.info(account2.getName());
        log.info("account are equal:: {}", account1 == account2);
        log.info("Duplicate Account :: {}", account1 == duplicateAccount);
    }
}
Output
2018-01-15 21:53:29.171  INFO 8421 --- [ main] com.example.demo.SingletonScopeTest : Test User 1
2018-01-15 21:53:29.171  INFO 8421 --- [ main] com.example.demo.SingletonScopeTest : Test User 2
2018-01-15 21:53:29.171  INFO 8421 --- [ main] com.example.demo.SingletonScopeTest : account are equal:: false
2018-01-15 21:53:29.172  INFO 8421 --- [ main] com.example.demo.SingletonScopeTest : Duplicate Account :: true
On checking the output, we find that account1 and account2 refer to two distinct instances (the equality check prints false), while account1 and duplicateAccount, both injected by the id "bean1", refer to the same cached instance (the check prints true). For a given id, the Spring container maintains only one shared instance of a singleton bean; in our example, each of the two bean definitions gets its own single instance.
Spring Singleton is very different from the Singleton pattern. Spring guarantees to create only one bean instance for a given bean id definition per container. The Singleton pattern ensures that one and only one instance is created per ClassLoader.
Request to the Pilot for motion or navigation.
#include <PilotRequest.h>
Request to the Pilot for motion or navigation.
Definition at line 17 of file PilotRequest.h.
List of all members.
PilotTypes::noRequest
Constructor.
Definition at line 8 of file PilotRequest.cc.
Definition at line 32 of file PilotRequest.cc.
Definition at line 27 of file PilotRequest.h.
Definition at line 62 of file PilotRequest.cc.
Definition at line 97 of file PilotRequest.h.
[friend]
Definition at line 18 of file PilotRequest.h.
Pointer to MapBuilderRequest to check if we have acquired the object.
Definition at line 57 of file PilotRequest.h.
Referenced by operator=().
Returns true if vision confirms that we have successfully acquired the object (e.g., for pushing).
Definition at line 58 of file PilotRequest.h.
True if the robot should avoid large turns by walking backwards if distance is short.
Definition at line 60 of file PilotRequest.h.
Referenced by KoduInterpreter::MotionActionRunner::ExecuteMotionAction::ExecuteGoToShapeRequest::doStart(), Grasper::DoBodyTransport::doStart(), and operator=().
If true, the obstacle boundaries used by the collision checker are displayed in the world shape space automatically if path planning fails.
Definition at line 78 of file PilotRequest.h.
If true, use IR to avoid walking off a cliff.
Definition at line 67 of file PilotRequest.h.
If true, use rangefinder sensors to dynamically avoid obstacles (not yet implemented).
Definition at line 69 of file PilotRequest.h.
Point in the base reference frame to bring to the target location.
Definition at line 50 of file PilotRequest.h.
Referenced by KoduInterpreter::MotionActionRunner::ExecuteMotionAction::ExecuteGoToShapeRequest::doStart(), and operator=().
If true, the waypointList will be cleared before appending new waypoints.
Definition at line 40 of file PilotRequest.h.
Maximum tolerable distance to the ground (millimeters).
Definition at line 68 of file PilotRequest.h.
What to do about collisions.
Definition at line 37 of file PilotRequest.h.
Referenced by DualCoding::Pilot::PushObjectMachine::AdjustPush::doStart(), DualCoding::Pilot::PushObjectMachine::FetchObj::doStart(), DualCoding::Pilot::PushObjectMachine::SetUpExecute2::doStart(), KoduInterpreter::MotionActionRunner::ExecuteMotionAction::ExecuteGoToShapeRequest::doStart(), KoduInterpreter::GrabActionRunner::ExecuteGrabAction::PrepareForAnotherGrasp::RepositionBody::doStart(), KoduInterpreter::PerceptualMultiplexor::FailureRecovery::ObjectManipRecovery::PrepareForAnotherGrasp::RepositionBody::doStart(), and operator=().
Rotation angle in radians (positive is counterclockwise).
Definition at line 32 of file PilotRequest.h.
Referenced by KoduInterpreter::GrabActionRunner::PrepareBody::FaceObject::doStart(), KoduInterpreter::PerceptualMultiplexor::FailureRecovery::ObjectManipRecovery::PrepareForItemRecovery::FaceTarget::doStart(), Grasper::DoBodyApproach2::doStart(), operator=(), and Turn::preStart().
How many particles to display (number or percentage) as LocalizationParticle shapes.
Definition at line 74 of file PilotRequest.h.
If true, the obstacle boundaries used by the collision checker are displayed in the world shape space.
Definition at line 77 of file PilotRequest.h.
How many particles to display (number or percentage) as a single Graphics shape.
Definition at line 73 of file PilotRequest.h.
If true, the planned path is displayed in the shape space.
Definition at line 75 of file PilotRequest.h.
If true, the RRT search tree is displayed in the world shape space.
Definition at line 76 of file PilotRequest.h.
Forward distance in mm (negative means go backward).
Definition at line 30 of file PilotRequest.h.
Referenced by KoduInterpreter::GiveActionRunner::Backup::doStart(), KoduInterpreter::GrabActionRunner::ExecuteGrabAction::PrepareForAnotherGrasp::RepositionBody::doStart(), KoduInterpreter::GrabActionRunner::PrepareBody::ReverseBody::doStart(), KoduInterpreter::DropActionRunner::ExecuteDrop::ReverseBody:::PrepareForItemRecovery::Reverse::doStart(), Grasper::DoWithdraw::doStart(), Grasper::DoBodyApproach3::doStart(), operator=(), and WalkForward::preStart().
Sideways distance in mm (positive to the left).
Definition at line 31 of file PilotRequest.h.
Referenced by operator=(), and WalkSideways::preStart().
If true, the Pilot will execute the path it has planned; if false, it plans but does not execute.
Definition at line 80 of file PilotRequest.h.
Translation speed in mm/sec for dx or dy.
Definition at line 33 of file PilotRequest.h.
Referenced by DualCoding::Pilot::PushObjectMachine::SetUpExecute2::doStart(), DualCoding::Pilot::ExecutePlan::Walk::doStart(), Grasper::DoBodyApproach3::doStart(), and operator=().
Distance between the gate point and the target point when targetHeading or baseOffset used.
Definition at line 52 of file PilotRequest.h.
Should return true if there are enough landmarks to localize.
Definition at line 43 of file PilotRequest.h.
Referenced by DualCoding::Pilot::LocalizationUtility::CheckEnoughLandmarks::doStart(), and operator=().
pointer to MapBuilderRequest used to find landmarks; will be deleted by the Pilot
Definition at line 42 of file PilotRequest.h.
Referenced by DualCoding::Pilot::LocalizationUtility::TakeInitialPicture::doStart(), Kodu::VisualLocalizationTask::getPilotRequest(), and operator=().
Vector of specific landmarks to use for localization, overriding the default.
Definition at line 81 of file PilotRequest.h.
Maximum allowable distance to walk backward instead of turning around.
Definition at line 61 of file PilotRequest.h.
Maximum number of iterations for path planner RRT search.
Definition at line 79 of file PilotRequest.h.
Referenced by Grasper::PlanBodyTransport::doStart(), Grasper::PlanBodyApproach::doStart(), operator=(), and Grasper::planBodyPath().
Pointer to MapbuilderRequest for finding the object to be pushed; will be deleted by the Pilot.
Definition at line 55 of file PilotRequest.h.
Finds the localShS object matching objectShape in worldShS.
Definition at line 56 of file PilotRequest.h.
Referenced by DualCoding::Pilot::PushObjectMachine::FetchObj::doStart(), and operator=().
Object we want to push to the target.
Definition at line 54 of file PilotRequest.h.
Referenced by DualCoding::Pilot::PushObjectMachine::FetchObj::doStart(), DualCoding::Pilot::PushObjectMachine::ChoosePathObjToDest::doStart(), DualCoding::Pilot::LookForTarget::doStart(), DualCoding::Pilot::LookForObject::doStart(), and operator=().
Inflation in mm of obstacle bounding shapes for path planning.
Definition at line 66 of file PilotRequest.h.
Referenced by operator=(), and Grasper::planBodyPath().
Minimum tolerable rangefinder distance to an obstacle (millimeters).
Definition at line 70 of file PilotRequest.h.
Definition at line 63 of file PilotRequest.h.
Definition at line 64 of file PilotRequest.h.
[private]
Definition at line 100 of file PilotRequest.h.
Used to construct a PilotEvent to notify the requesting node of the results of this Pilot operation.
Definition at line 93 of file PilotRequest.h.
Type of pilot request.
Definition at line 29 of file PilotRequest.h.
Referenced by KoduInterpreter::PerceptualMultiplexor::PilotTaskRunner::ExecutePilotTask::doStart(), getRequestType(), and operator=().
If true, terminate search and post a completion event.
Definition at line 46 of file PilotRequest.h.
Referenced by DualCoding::Pilot::VisualSearchMachine::Check::doStart(), DualCoding::Pilot::VisualSearchMachine::Search::doStart(), and operator=().
MapBuilderRequest to be used for visual search.
Definition at line 45 of file PilotRequest.h.
Referenced by DualCoding::Pilot::VisualSearchMachine::Search::doStart(), and operator=().
Angle to rotate body to continue a visual search.
Definition at line 47 of file PilotRequest.h.
Referenced by DualCoding::Pilot::VisualSearchMachine::Rotate::doStart(), DualCoding::Pilot::VisualSearchMachine::Check::doStart(), and operator=().
Sideways translational speed used for setVelocity.
Definition at line 34 of file PilotRequest.h.
Heading on which we want to arrive at the target.
Definition at line 51 of file PilotRequest.h.
Referenced by Grasper::DoBodyTransport::doStart(), Grasper::DoBodyApproach::doStart(), and operator=().
Shape to walk to.
Definition at line 49 of file PilotRequest.h.
Referenced by KoduInterpreter::MotionActionRunner::ExecuteMotionAction::ExecuteGoToShapeRequest::doStart(), Grasper::DoBodyTransport::doStart(), Grasper::DoBodyApproach::doStart(), and operator=().
Lookout request for tracking objects while walking.
Definition at line 71 of file PilotRequest.h.
Rotational speed in radians/sec for da.
Definition at line 35 of file PilotRequest.h.
Referenced by DualCoding::Pilot::PushObjectMachine::SetUpExecute2::doStart(), DualCoding::Pilot::ExecutePlan::Turn::doStart(), Grasper::DoBodyTransport::doStart(), Grasper::DoBodyApproach2::doStart(), and operator=().
Definition at line 36 of file PilotRequest.h.
Waypoint list for waypointWalk.
Definition at line 39 of file PilotRequest.h. | http://tekkotsu.org/dox/classDualCoding_1_1PilotRequest.html | CC-MAIN-2018-34 | en | refinedweb |
As far as I can tell, this is safe. It prints "Should Get Here" and nothing else, as expected and desired. In real life, CantGetHere is a function that actually IS called and does some arithmetic. To avoid a seg fault in line 14, I have a NULL pointer check on line 8. However, I also have one on line 23, so the check on line 8 is redundant, correct? Line 23 will short-circuit if ptr is NULL and hence the CantGetHere() function won't be called, right?
#include <stdio.h>
#include <stdbool.h>

bool CantGetHere(int* ptr)
{
    printf("What the heck?!? How did I get here?\n");

    if(!ptr)
    {
        printf("Seg fault check. NULL pointer.\n");
    }
    else
    {
        printf("*ptr equals %d\n", *ptr);
    }

    return false;
}

int main()
{
    int* ptr = NULL;
    if(!ptr || CantGetHere(ptr))
    {
        printf("Should get here.\n");
    }
    else
    {
        printf("Should not get here either.\n");
    }
    getchar(); // pause
    return 0;
}
I'm working off the assumption that the person in this thread is correct and that I understand him correctly.
Short-circuit evaluation, and the left-to-right evaluation order of && and ||, are mandated semantics in both the C and C++ standards.
If it wasn't, code like this would not be a common idiom
char* pChar = 0;

// some actions which may or may not set pChar to something

if ((pChar != 0) && (*pChar != '\0'))
{
    // do something useful
}
That's what I'm going for. A check for a NULL pointer to avoid a seg fault, and then dereferencing the pointer if and only if it's not NULL by use of the || operator. Just want to confirm that ||, like &&, always works from left to right.
Typescript: Cannot find module 'source-map' - uglify.js
Update:
After installing this package I get the above error:
npm install @types/html-webpack-plugin --save-dev
Uninstalling it solves the problem.
- C:\WINDOWS\system32>node -v v9.5.0
- C:\WINDOWS\system32>npm -v 5.6.0
- C:\WINDOWS\system32>tsc -v Version 2.7.1
- VS Code: v1.20
- Win 10 x64
When I run tsc in the root of my project folder, I now get the following error:
tsc
node_modules/@types/uglify-js/index.d.ts(9,32): error TS2307: Cannot find module 'source-map'.
I can't remember what I changed, but it was working before...
tsconfig.json:
{ "compilerOptions": { "module": "esnext", // esnext to have support for Dynamic Module Imports "target": "es5", "rootDir": "src", "sourceMap": true, "lib": [ "es2015", "dom" ] } }
See also questions close to this topic
- Bootstrap related Javascript files bundled in the wrong order by Webpack
(There is a similar question. However, my Webpack configuration is significantly different, and now the official documentation doesn't seem to recommend using script-loader, so I thought there might be a different solution to this question.)
Phoenix framework 1.4.0 uses Webpack by default. I was able to successfully configure it up to now.
Bootstrap.js requires jquery and popper.js to be loaded before it to work. So I specified:
import css from '../css/app.scss';

// webpack automatically bundles all modules in your
// entry points. Those entry points can be configured
// in "webpack.config.js".
//
// Import dependencies
//
import 'phoenix_html';
import 'jquery';
import 'popper.js';
import 'bootstrap';

// Import local files
//
// Local files can be imported directly using relative
// paths "./socket" or full ones "web/static/js/socket".
//
import socket from "./socket"

import { startLesson } from './lesson/show.ts';
in my app.js.
However, the generated app.js looks like this:
/***/ "../deps/phoenix_html/priv/static/phoenix_html.js": /*!********************************************************!*\ !*** ../deps/phoenix_html/priv/static/phoenix_html.js ***! \********************************************************/ ... /***/ "./css/app.scss": /*!**********************!*\ !*** ./css/app.scss ***! \**********************/ ... /***/ "./js/app.js": /*!*******************!*\ !*** ./js/app.js ***! \*******************/ ... /***/ "./js/lesson/show.ts": /*!***************************!*\ !*** ./js/lesson/show.ts ***! \***************************/ ... /***/ "./node_modules/bootstrap/dist/js/bootstrap.js": /*!*****************************************************!*\ !*** ./node_modules/bootstrap/dist/js/bootstrap.js ***! \*****************************************************/ ... /***/ "./node_modules/jquery/dist/jquery.js": /*!********************************************!*\ !*** ./node_modules/jquery/dist/jquery.js ***! \********************************************/ ... /***/ "./node_modules/popper.js/dist/esm/popper.js": /*!***************************************************!*\ !*** ./node_modules/popper.js/dist/esm/popper.js ***! \***************************************************/ ... /***/ "./node_modules/webpack/buildin/global.js": /*!***********************************!*\ !*** (webpack)/buildin/global.js ***! \***********************************/ ...
which apparently put the vendor JS files in alphabetical order and thus made bootstrap.js unusable.
Actually, the official Bootstrap documentation suggests that only writing import 'bootstrap'; should be enough. However, that also resulted in jquery and popper.js being bundled after Bootstrap.js.
Also, I think the custom JS files, i.e. ./js/app.js and ./js/lesson/show.ts, should be loaded after the dependencies are loaded?
How should I change the Webpack configuration so that the JS files are bundled in the correct order?
The current webpack.config.js:
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin'); // used in plugins below
const CopyWebpackPlugin = require('copy-webpack-plugin');        // used in plugins below

module.exports = {
  // entry: './js/app.ts',
  output: {
    filename: 'app.js',
    path: path.resolve(__dirname, '../priv/static/js')
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      },
      {
        test: /\.tsx?$/,
        exclude: /node_modules/,
        use: 'ts-loader'
      },
      {
        test: /\.scss$/,
        include: [path.resolve(__dirname, 'css')],
        use: [MiniCssExtractPlugin.loader, 'css-loader', 'sass-loader']
      }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: '../css/app.css' }),
    new CopyWebpackPlugin([{ from: 'static/', to: '../' }])
  ]
};
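Two notes that may help here. First, the alphabetical listing in the generated bundle is just webpack's module registry; execution order still follows the import graph from the entry point, so the listing order alone does not prove the load order is wrong. Second, for Bootstrap 4 specifically, the approach its webpack documentation describes is not import ordering but exposing jQuery and Popper via webpack.ProvidePlugin, so bootstrap.js finds them however the modules are listed. A sketch to merge into the existing config (treat the exact plugin list as an assumption about this setup):

```js
const webpack = require('webpack');

module.exports = {
  // ...rest of the existing configuration...
  plugins: [
    new webpack.ProvidePlugin({
      $: 'jquery',
      jQuery: 'jquery',
      Popper: ['popper.js', 'default'], // Bootstrap 4 expects Popper's default export
    }),
  ],
};
```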
- Cannot extend a class, extra methods undefined
Suppose I have the following class:
import EventEmitter from 'event-emitter';

export default class SharedChannel extends EventEmitter {
  constructor(opts) {
    super();
    console.log('SharedChannel constructor');
  }

  send(event, data) {
    console.log('SharedChannel send');
  }
}
In my application, I try to use that class:
import SharedChannel from './lib/SharedChannel';

const channel = new SharedChannel();
channel.send('sessionData', 'Some session data goes here');
I get the following error:
Uncaught TypeError: channel.send is not a function
Methods from the EventEmitter class do work, but my send method does not. I can, for example, call channel.emit(). Also, I am able to access class methods from within that class constructor; for example, I can call channel.send() right after super(). I'm able to hack around it by calling this.send = function() { ... in the constructor, but of course, this should not be necessary.
This application is being built with Webpack and Babel. In my webpack.config.js, I have the following Babel rule:
{
  test: /\.js$/,
  exclude: /(node_modules|bower_components)/,
  use: {
    loader: 'babel-loader',
    options: {
      presets: ['@babel/preset-env']
    }
  }
}
In .babelrc:
{ "presets": ["@babel/preset-env"] }
Versions for packages:
"@babel/core": "^7.0.0-rc.1", "@babel/plugin-proposal-object-rest-spread": "^7.0.0-rc.1", "@babel/preset-env": "^7.0.0-rc.1", "babel-loader": "^8.0.0-beta", "webpack": "^4.16.5", "webpack-cli": "^3.1.0"
Any advice on how to fix this?
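One likely explanation, given the symptoms (a guess, since it depends on what the event-emitter package actually exports): if the "parent class" is really a factory function that returns its own object, that returned object replaces `this` during construction, so methods on the subclass prototype vanish while the parent's own methods survive. A dependency-free sketch of the mechanism:

```javascript
// A parent written as a factory: it returns a foreign object from its constructor.
function Parent() {
  return {
    emit() { return 'emitted'; } // own method of the returned object
  };
}

class Child extends Parent {
  send() { return 'sent'; } // lives on Child.prototype
}

const c = new Child();
// The object returned by Parent becomes `this`, and it is NOT linked
// to Child.prototype, so `send` disappears while `emit` still works:
console.log(typeof c.emit); // "function"
console.log(typeof c.send); // "undefined"
```

If that is the cause, the usual fixes are composition (keep an emitter instance as a field and delegate to it) or extending a parent that is a real constructor, such as Node's events.EventEmitter.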
- Error importing local npm package
We have several websites, each in its own project, and we are looking to migrate them all to using Vue.js. Each website has its own directory with .vue files that are then bundled using Webpack. We have a Webpack config in place that converts the .vue files, bundles, lints and pipes it all through babel, and it works fine.
However, now that we have been moving things over for several weeks we have noticed that there are several components and core javascript files that are very similar and ideally we want to pull these out into a shared library of vue components and functions.
We have extracted several .vue files into a folder alongside the websites, put them together as a basic npm module with its own package.json, and include them using an npm file include, so in the package.json it just looks like: "vue-shared": "file:../CommonJavascript/Vue". Now when we try to use Webpack to build the bundle, we get the error:
ERROR in ../CommonJavascript/Vue/index.js
Module build failed (from ./node_modules/eslint-loader/index.js):
Error: Failed to load plugin react: Cannot find module 'eslint-plugin-react'
I'm not sure where this error is coming from, because we aren't using react anywhere, and it seemed happy enough to build fine before we moved the files out. At the moment the only dependency in the shared module is moment, and it only contains 4 .vue files and a basic wrapper to bundle them up:
import button from 'Button.vue'
import loading from 'Loading.vue'
import modal from 'Modal.vue'
import progressBar from 'ProgressBar.vue'

export default {
  button,
  loading,
  modal,
  progressBar,
}
But, I was curious so I decided to add the package (even though we don't need it) to see if it would fix the issue, but I then get a new error:
ERROR in ../CommonJavascript/Vue/index.js
Module build failed (from ./node_modules/babel-loader/lib/index.js):
ReferenceError: Unknown plugin "transform-runtime" specified in "base" at 0, attempted to resolve relative to "C:\Projects\Tmo\Code\CommonJavascript\Vue"
Now, that one makes a little more sense, we do use the babel runtime transform on the main project, but it isn't required by anything in the shared project and even if it was, surely the fact it is included in the main project means it should still build.
Partly, it seems perhaps I'm just not understanding the way npm resolves dependencies. It now seems to be trying to resolve some dependencies by looking in the shared files project, and I don't know why. Also, I have no idea where this strange dependency on eslint-plugin-react has come from.
So I guess this is a multi-part question. What is up with the way npm is trying to resolve the dependencies? Am I doing things right by moving the
.vuefiles into a separate project, wrapping it up as a module and requiring it in the main project? and if not, what is the best way to have shared dependencies like this?
- Property getAllKeys does not exists on type IDBIndex
Build is fine, since i'm using a transpiler (rollup).
But VSCode is highlighting one file related to some operations with IndexedDB.
I found that vscode is getting the lib.dom.d.ts file from somewhere in program files instead of this one which includes getAllKeys.
I missing something here :(
How can I accomplish this?
Thanks
EDIT - adding info: Latest vscode version, with included ts v2.9.2. TsConfig target: es2017 Tried almost all lib combinations.
Update 1: Included webworker in libs and still nothing. It's weird since it's here:
- Better way to do the class initilization in angular4 typecript
Better way to initialize inner class inside outer class in angular4 .
Here have a outer class name ProductsModel which contains ProductsListModel. I have to send the ProductId string array to server side request.The below code is working fine when initialize the inner class inside outer class.
when not initialize :
export class ProductsModel{ productList : ProductListModel; }
when doing this i have got a below error message: cannot set property ProductId be undefined.
So i have initialize below like this which is working as expected ,is there any better way to initialize
Outer Class: export class ProductsModel{ productList = new ProductListModel(); } export class ProductListModel{ ProductId:string[]; } -- app. component.ts export class AppComponent { // initialsize outer class here: products = new ProductsModel(); in this subscribe: DetailsByProductID(){ this.products.productList.ProductId = ['8901','8902']; //pass the model object here this.ProductService.fetchByPID(this.products).subscribe(resposne=>console.log(response)}); } }
- Running SonarQube on TeamCity
I am running sonarqube on teamcity, I have installed the plugin and I can see the installed service, but when I add it to the build step, I get an error.
I checked on the machine and the sonarqube service isn't running. Nothing say that I need to install the sonarscanner on the server.
Can you please advise.
ERROR: SonarQube server [dev-ci-01:9000] can not be reached [10:52:12]ERROR: Error during SonarQube Scanner execution [10:52:12]org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarQube [10:52:12] at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:84) [10:52:12] at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:71) [10:52:12] at java.security.AccessController.doPrivileged(Native Method) [10:52:12] at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:71) [10:52:12] at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:67) [10:52:12] at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:218) [10:52:12] at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:156) [10:52:12] at org.sonarsource.scanner.cli.Main.execute(Main.java:74) [10:52:12] at org.sonarsource.scanner.cli.Main.main(Main.java:61) [10:52:12]Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server [10:52:12] at org.sonarsource.scanner.api.internal.Jars.getBootstrapIndex(Jars.java:100) [10:52:12] at org.sonarsource.scanner.api.internal.Jars.getScannerEngineFiles(Jars.java:76) [10:52:12] at org.sonarsource.scanner.api.internal.Jars.download(Jars.java:70) [10:52:12] at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:39) [10:52:12] at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:75) [10:52:12] ... 
8 more [10:52:12]Caused by: java.lang.IllegalArgumentException: unexpected url: dev-ci-01:9000/batch/index [10:52:12] at org.sonarsource.scanner.api.internal.shaded.okhttp.Request$Builder.url(Request.java:142) [10:52:12] at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:109) [10:52:12] at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:98) [10:52:12] at org.sonarsource.scanner.api.internal.Jars.getBootstrapIndex(Jars.java:96) [10:52:12] ... 12 more [10:52:12]ERROR: [10:52:12]ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging. [10:52:12]Process exited with code 1
- How can I configure webpack 3.5.4 to only create source maps for JS, and skip CSS?
I've been trying to figure out how to skip the CSS source maps in production, because I only need JS. Is there a way to do this?
I can just delete the
*.css.mapfiles later, but I think the build would be faster if I can skip them.
- Is it possible to get Chrome to load and process source map files synchronously so that page-load breakpoints are hit?
Right now we webpack up a lot of js and use the
devtool: 'source-map'option to ensure there is a source map generated for our typescript.
The problem is that this map file is loaded and processed asynchronously to the page loading, so if you put a breakpoint into code which runs immediately after page load, it doesn't always get hit as the map file has not been hooked up properly yet.
If, I add a line something like:
if (debug === true) window.setTimeout(init, 1000); else init();
then that 1 second delay is enough to allow chrome time to process and source-map file and then the breakpoint gets hit.
I have tried using some of the other webpack options as mentioned here (e.g. 'inline-source-map', but this also seems to process the source map file asynchronously - via a data uri, so suffers from the same problem)
Whilst it works, the delayed start mentioned above is clearly pretty brittle! Is there some way I can tell chrome to wait to process the source map file in order that page startup breakpoints can be hit?
- Ionic 3 PWA app breakpoints and source code in the browser are messed up
I cannot put breakpoints in the correct position because it seems to be showing an older version of the sources in the browser.
How can I force it to update the source in the browser? | http://quabr.com/48757817/typescript-cannot-find-module-source-map-uglify-js | CC-MAIN-2018-34 | en | refinedweb |
Taking into account the behavior of the
free() function, it is a good practice to set your pointer to
NULL right after you free it.
By doing so, you can rest assured that in case you accidentally call
free() more than one times on the same variable (with no reallocation or reassignment in between), then no bad side-effects will happen (besides any logical issues that your code might be dealing with).
You can include
free() from
malloc.h and it will have the following signature
extern void free(void *__ptr);.
Description of operation:
Free a block allocated by
malloc,
realloc or
calloc.
The
free() function frees the memory space pointed to by
ptr, which must have been returned by a previous call to
malloc(),
calloc(), or
realloc(). Otherwise, if
free(ptr) has already been called before, undefined behavior occurs. If
ptr is
NULL, no operation is performed.
Working examples:
#include <stdio.h> #include <malloc.h> int main() { printf("Hello, World!\n"); void * c = malloc (sizeof(char) * 10); free(c); c = NULL; free(c); return 0; }
#include <iostream> int main() { std::cout << "Hello, World!" << std::endl; void * c = malloc (sizeof(char) * 10); free(c); c = NULL; free(c); return 0; }
This post is also available in: Greek | https://bytefreaks.net/programming-2/c/cc-a-small-tip-for-freeing-dynamic-memory | CC-MAIN-2018-34 | en | refinedweb |
SQL Query, Sum the same entry from ID multiple times
I first ran a query to get the item ID,
SELECT ITEM FROM `SALES` WHERE DATE>='2018-2-1' AND DATE<='2018-2-5' AND CASH_CREDIT='1'
Which returned the results; 1,3,7,4,8,7,5,1
I need to use those ID numbers to add the RETAIL_PRICE of each item together. I've tried the following but it didn't allow for the item ID to be used more than once so it only added the RETAIL_PRICE from 1,3,7,4,8,5 which leaves out the values from the other 7 and 1. I need to add them with those 2 values included.
SELECT SUM(RETAIL_PRICE) FROM `ITEMS` WHERE ID IN(1,3,7,4,8,7,5,1)
Thanks in advance.
1 answer
- answered 2018-02-13 00:52 spencer7593
Assuming that
idis the PRIMARY KEY (or a UNIQUE KEY) in
ITEMStable, we can use a JOIN operation, to get all the rows that match, and the add up the retail price from each of the rows.
SELECT SUM(i.retail_price) AS tot_retail_price FROM `ITEMS` i JOIN `SALES` s ON s.item = i.id WHERE s.date >= '2018-02-01' AND s.date < '2018-02-06' AND s.cash_credit = '1'
As you observed, this condtion
WHERE ID IN (1,3,7,4,8,7,5,1)
is equivalent to
WHERE ID IN (1,3,4,5,7,8)
is equivalent to
WHERE ID IN (1,1,1,1,1,1,1,3,4,5,7,7,7,7,7,7,7,8)
That query is one pass through the table, evaluating each row in the table. It doesn't matter how many times a value is repeated in the
INlist. The condition is evaluated, and either a row satisfies the condition or it doesn't. The condition is TRUE or FALSE (or NULL).
See also questions close to this topic
- How to close connection in PDO
Hi what is the correct way to close a connection in PDO with PHP?
my db_conf.php:
class dataBase { public function connDB() { $conn = new PDO("mysql:host=localhost;dbname=db_test5","root",""); return $conn; } }
my crud.php
class Data extends dataBase { public function insertViews($data) { $stmt = (new dataBase) -> connDB() -> prepare("UPDATE post SET views = :views"); $stmt -> bindParam(":views", $data["views"], PDO::PARAM_STR); $stmt -> execute(); $stmt -> close(); //this its correct? $stmt = null; //or this its correct? $conn = null; //or this its correct? } }
What is the correct?
Method 1:
$stmt -> close();
Method 2:
$stmt = null;
Method 3:
$conn = null;
- Joining multiple tables and join isn't working
Im having some problems joining tables Table A has the information I need, but Table C has column to let me know if Table A's id's are active or not. These two table have nothing in common to join them.
SO I am using Table B as in between because they it shares attributes with both tables.
What I want out of this query is to return a set of distinct id's that are attache to active users OR users who are not in Table C at all BUT they all most have something scheduled for today.
Here is what I have:
SELECT DISTINCT a.staffid FROM TABLE_A a LEFT OUTER JOIN TABLE-B b ON a.staffid = b.id_key LEFT OUTER JOIN TABLE_C c ON lower(c.email) = lower(b.tbl_users_username) AND (c.inactive= 0 OR c.inactive IS NULL) WHERE a.active = 1 AND a.scheduledate = CURDATE() order by a.staffid
This still returns me users that are no longer active and I am running out of ideas so any help would be greatly appreciated
- MySQL to Front End of Angular App
So I'm working on an Angular web application and I want to use C# for backend stuff. So I have a MySQL connection in C# that I'd like to link up to the frontend of my Angular application (Managed with Angular CLI). Would the way to do this be through controllers or some other means. I'm not really sure what direction to go here to connect the front end to the back end.
-.
- Redshift - Finding number of times a particular values is store in a column in table
Given below is a view of my table that stores sale data along with the detail whether the customer is a new customer or not. I am trying to find if the same sale_id has multiple entries for the same customer and tags him as a new customer. Given below is a sample view of the table
cust_id,prod_id,sale_id,is_new_cust,store_type 1,prod_a,1001,t,store 2,prod_a,1002,,online 3,prod_a,1003,t,store 3,prod_a,1003,t,store
I need to find how many such customers exist that have the tag of is_new_cust for the same sale_id.
Given below is the SQL I tried:
select cust_id,count(is_new_cust) from sales where store_type = 'store' and is_new_cust='t' group by cust_id having count(is_new_cust)> 1;
Expected output:
cust_id,count 3,2
The above SQL returns 1 no results.
I am using Amazon Redshift DB for the above.
Could anyone help me find where am I going wrong with the query. Thanks..
- How to add two complex SQL Queries and get output in single query?
I am new to coding and was preparing a SQL command but stuck at a step. Below is the main SQL Query : )
in the above query I am taking the value of pu.username from another query which is attached below:
select pu.username from per_users pu where) )
Now I when I join both the query like below: paam1.person_id = pu.person_id and pu )
I am Trying this is Oracle database No Data output is coming, I don't understand why?
PLEASE HELP ME !!
Thanks in advance, Shivam
- NOT NULL constraint failed: accounts_user.password while updating data
I got the error NOT NULL constraint failed: accounts_user.password while updating the data from a Form. I am using a Custom User Model. Everything works fine but, while updating data from the admin form I got this error.
modles.py
from phonenumber_field.modelfields import PhoneNumberField from django.db import models from django.utils import timezone from django.contrib.auth.models import ( BaseUserManager, AbstractBaseUser ) class UserManager(BaseUserManager): def _create_user(self, username, email, password, first_name, last_name, **extra_fields): now = timezone.now() if not username: raise ValueError('User must have a username.') else: username = username.lower() if not password: raise ValueError('Password required!') email = self.normalize_email(email) user_obj = self.model( username = username, email = email, first_name = first_name, last_name = last_name, #is_staff = is_staff, #is_active = is_active, #is_admin = is_admin, #date_of_birth = date_of_birth, ) user_obj.set_password(password) user_obj.save(using = self._db) return user_obj def create_superuser(self, username, email, password, first_name, last_name, **extra_fields): extra_fields.setdefault('is_staff',True) extra_fields.setdefault('is_admin',True) return self._create_user( username = username, email = email, password = password, first_name = first_name, last_name = last_name, **extra_fields ) def create_user(self, username, email, password, first_name, last_name, **extra_fields): extra_fields.setdefault('is_staff',True) extra_fields.setdefault('is_admin',False) return self._create_user( username = username, email = email, password = password, first_name = first_name, last_name = last_name, **extra_fields ) def create_webuser(self, username, email, password, firstname, first_name, last_name, **extra_fields): extra_fields.setdefault('is_staff',False) extra_fields.setdefault('is_admin',False) return self._create_user( username = username, email = email, password = password, first_name = first_name, last_name = last_name **extra_fields ) class User(AbstractBaseUser): GET_COLOR_CODE = ( ('PI', 'PINK'), ('BL', 'BLUE'), ('RE', 'RED'), ('YL', 'YELLOW'), ('GR', 'GREEN') ) color_code = models.CharField(max_length = 10, choices = 
GET_COLOR_CODE) username = models.CharField(max_length = 33, unique = True) first_name = models.CharField("First name of the user", max_length = 33) last_name = models.CharField("Last name of the user", max_length = 33) email = models.EmailField("Email of user", max_length = 255, unique = True) phone_number = PhoneNumberField() #date_of_birth = models.DateTimeField() is_staff = models.BooleanField(default =True) is_active = models.BooleanField(default = True) is_admin = models.BooleanField(default = False) timestamp = models.DateTimeField(auto_now_add=True) #is_staff = models.BooleanField(default = False)def inscription(request): object = UserManager() USERNAME_FIELD = 'username' REQUIRED_FIELDS = ['email','first_name','last_name'] # to create python manage.py createsuperuser def get_username(self): return getattr(self, self.USERNAME_FIELD) def __str__(self): return self.first_name def get_full_name(self): return self.first_name+' '+self.last_name def get_short_name(self): return self.first_name def has_perm(self, perm, obj = None): return True def has_module_perms(self, app_label): return True @property def active(self): return self.is_active @property def admin(self): return self.is_admin @property def staff(self): return self.is_staff
This is Traceback I found.
Django Version: 2.0.6 Python Version: 3.6.5 Installed Applications: ['accounts', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'phonenumber_field'] Installed'] Trace) The above exception (NOT NULL constraint failed: accounts_user.password) was the direct cause of the following exception: File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner 35. response = get_response(request) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response 128. response = self.process_exception_by_middleware(e, request) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response 126. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/options.py" in wrapper 575. return self.admin_site.admin_view(view)(/views/decorators/cache.py" in _wrapped_view_func 44. response = view_func(request, *args, **kwargs) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/sites.py" in inner 223. return view(request, *args, **kwargs) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/options.py" in change_view 1557. return self.changeform_view(request, object_id, form_url, extra_context) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/utils/decorators.py" in _wrapper 62. return bound_func(/utils/decorators.py" in bound_func 58. return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/options.py" in changeform_view 1451. 
return self._changeform_view(request, object_id, form_url, extra_context) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/options.py" in _changeform_view 1491. self.save_model(request, new_object, form, not add) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/admin/options.py" in save_model 1027. obj.save() File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/contrib/auth/base_user.py" in save 73. super().save(*args, **kwargs) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/base.py" in save 729. force_update=force_update, update_fields=update_fields) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/base.py" in save_base 759. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/base.py" in _save_table 823. forced_update) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/base.py" in _do_update 872. return filtered._update(values) > 0 File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/query.py" in _update 709. return query.get_compiler(self.db).execute_sql(CURSOR) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/sql/compiler.py" in execute_sql 1379. cursor = super().execute_sql(result_type) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/models/sql/compiler.py" in execute_sql 1068. cursor.execute(sql, params) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/backends/utils.py" in execute 100. return super().execute(sql, params) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/backends/utils.py" in execute 68. 
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/backends/utils.py" in _execute_with_wrappers 77. return executor(sql, params, many, context) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/backends/utils.py" in _execute 85. return self.cursor.execute(sql, params) File "/home/dev/.virtualenv/django/lib/python3.6/site-packages/django/db/utils.py" in __exit__ 89. raise dj_exc_value.with_traceback(traceback) from exc) Exception Type: IntegrityError at /locdev/accounts/user/11/change/ Exception Value: NOT NULL constraint failed: accounts_user.password
I'm not understanding what the problem is. Please anyone can help me!
- How to make database copy?
Good evening, I am using MySQL db which I have created with Entity Framework Core Now I need copy of this db for my tests. Could you tell me how to make copy of this database?
Now I am using this code (it is my real db):
var optionsBuilder = new DbContextOptionsBuilder<ShortyContext>(); optionsBuilder.UseMySql("server=localhost;port=3306;database=shorty;username=****;password=*****"); context = new ShortyContext(optionsBuilder.Options);
- Best Web Service Stack for Simple Attestation Tracking
I'm hoping to create a simple web app that allows a user to login/authenticate, shows a block of text (attestation or agreement), and essentially has a single button that users can click to accept or agree to the terms.
Their response would then be stored in a database with the User ID, date of last agreement, and state (Compliant, Notified, Warning, Non-Compliant). They are expected to review the attestation/agreement monthly. The App would also check periodically if Users are in compliance and send out emails regarding the state of their agreement.
I've never worked on a web app like this before seeing as I'm usually doing automation and scripting work.
I've read about the MEAN stack and the MERN stack but don't really know what be best for my use case. It seems like a simple app and so I don't want to over-engineer it.
Also what is the best way to set up a test environment for development purposes? Just spin up a few docker hosts?
I'm happy to give more information as needed but really just looking for some input and ideas on the best way to get started with this seeing as I've never done anything similar before.
Thanks! | http://quabr.com/48757775/sql-query-sum-the-same-entry-from-id-multiple-times | CC-MAIN-2018-34 | en | refinedweb |
Question:
i'm trying to make a GTK application in python where I can just draw a loaded image onto the screen where I click on it. The way I am trying to do this is by loading the image into a pixbuf file, and then drawing that pixbuf onto a drawing area.
the main line of code is here:
def drawing_refresh(self, widget, event): #clear the screen widget.window.draw_rectangle(widget.get_style().white_gc, True, 0, 0, 400, 400) for n in self.nodes: widget.window.draw_pixbuf(widget.get_style().fg_gc[gtk.STATE_NORMAL], self.node_image, 0, 0, 0, 0)
This should just draw the pixbuf onto the image in the top left corner, but nothing shows but the white image. I have tested that the pixbuf loads by putting it into a gtk image. What am I doing wrong here?
Solution:1
I found out I just need to get the function to call another expose event with
widget.queue_draw() at the end of the function. The function was only being called once at the start, and there were no nodes available at this point so nothing was being drawn.
Solution:2
You can make use of cairo to do this. First, create a gtk.DrawingArea based class, and connect the expose-event to your expose func.
class draw(gtk.gdk.DrawingArea): def __init__(self): self.connect('expose-event', self._do_expose) self.pixbuf = self.gen_pixbuf_from_file(PATH_TO_THE_FILE) def _do_expose(self, widget, event): cr = self.window.cairo_create() cr.set_operator(cairo.OPERATOR_SOURCE) cr.set_source_rgb(1,1,1) cr.paint() cr.set_source_pixbuf(self.pixbuf, 0, 0) cr.paint()
This will draw the image every time the expose-event is emited.
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/05/tutorial-drawing-pixbuf-onto-drawing.html | CC-MAIN-2018-34 | en | refinedweb |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi All,
Im just thinking to migrate my jira to new version. From jira5.2.3 to jira6.0.8.
Im using post script runner to automatically add due date.
However, when testing in jira6.0.8 the script not fuction as it did in jira5.2.3.
This is my simple script.
import com.atlassian.jira.ComponentManager import com.atlassian.jira.issue.CustomFieldManager import com.atlassian.jira.issue.MutableIssue import com.atlassian.jira.issue.customfields.CustomFieldType import com.atlassian.jira.issue.fields.CustomField import java.sql.Timestamp; MutableIssue myIssue = issue Calendar cal = Calendar.getInstance(); // set due date to: current date + 8 hours Timestamp mydueDate = new Timestamp(cal.getTimeInMillis()+ 8*1000*60*60); myIssue.setDueDate(mydueDate);
Please help me to resolve this issue.
Thank you in advance.
Hi, are there any errors in the log? At wich position in the list of postfunctions is this script?
No error on the catalina. However, I manage to resolve this after upgrade greenhopper plugin.
I got this one more issue, how to display due date field on the ticket?
No error on the catalina. However, I manage to resolve this after upgrade greenhopper plugin.
I got this one more issue, how to display due date field on the ticket?
hello rozuan,
are you updating the issue after you set setDueDate customfield?
Where do you want to display the due date? In JIRA you can add the due date to the view screen, in JIRA Agile you can add it to the deatils view in the rapid board configuration.. | https://community.atlassian.com/t5/Marketplace-Apps-questions/my-post-function-script-runner-not-working-when-upgrading-to/qaq-p/133008 | CC-MAIN-2018-34 | en | refinedweb |
The Spring Cloud Pipelines repository contains opinionated Concourse pipeline definitions. Those jobs form an empty pipeline and an opinionated sample pipeline that you can use in your company.
The:
The simplest way to deploy Concourse to K8S is to use Helm.
Once you have Helm installed and your
kubectl is pointing to the
cluster, run the following command to install the Concourse cluster in your K8S cluster:
$ helm install stable/concourse --name concourse
Once the script is done, you should see the following output
1. Concourse can be accessed: * Within your cluster, at the following DNS name at port 8080: concourse-web.default.svc.cluster.local * From outside the cluster, run these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app=concourse-web" -o jsonpath="{.items[0].metadata.name}") echo "Visit to use Concourse" kubectl port-forward --namespace default $POD_NAME 8080:8080 2. Login with the following credentials Username: concourse Password: concourse
Follow the steps and log in to Concourse under.
You can use Helm also to deploy Artifactory to K8S, as follows:
$ helm install --name artifactory --set artifactory.image.repository=docker.bintray.io/jfrog/artifactory-oss stable/artifactory
After you run this command, you should see the following output:
NOTES: Congratulations. You have just deployed JFrog Artifactory Pro! 1. Get the Artifactory URL by running these commands: NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can watch the status of the service by running 'kubectl get svc -w nginx' export SERVICE_IP=$(kubectl get svc --namespace default nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}') echo 2. Open Artifactory in your browser Default credential for Artifactory: user: admin password: password
Next, you need to set up the repositories.
First, access the Artifactory URL and log in with
a user name of
admin and a password of
Then, click on Maven setup and click
Create.
If you go to the Concourse website you should see something resembling the following:
You can click one of the icons (depending on your OS) to download
fly, which is the Concourse CLI. Once you download that (and maybe added it to your PATH, depending on your OS) you can run the following command:
fly --version
If
fly is properly installed, it should print out the version.
We made a sample credentials file called
credentials-sample-k8s.yml
prepared for
k8s. You can use it as a base for your
credentials.yml.
To allow the Concourse worker’s spawned container to connect to the Kubernetes cluster, you must pass the CA contents and the auth token.
To get the contents of CA for GCE, run the following command:
$ kubectl get secret $(kubectl get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.ca\.crt}' | base64 --decode
To get the auth token, run the following command:
$ kubectl get secret $(kubectl get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode
Set that value under
paas-test-client-token,
paas-stage-client-token, and
paas-prod-client-token
After running Concourse, you should get the following output in your terminal:
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=concourse-web" -o jsonpath="{.items[0].metadata.name}") $ echo "Visit to use Concourse" $ kubectl port-forward --namespace default $POD_NAME 8080:8080 Visit to use Concourse
Log in (for example, for Concourse running at
127.0.0.1 — if you do not provide any value,
localhost is assumed). If you run this script, it assumes that either
fly is on your
PATH or that it is in the same folder as the script:
$ fly -t k8s login -c -u concourse -p concourse
Next, run the following command to create the pipeline:
$ ./set_pipeline.sh github-webhook k8s credentials-k8s.yml
The following images show the various steps involved in runnig the
github-webhook pipeline:
Figure 8.7. Unpause the pipeline by clicking in the top lefr corner and then clicking the
play button | https://cloud.spring.io/spring-cloud-pipelines/multi/multi_concourse-pipeline-k8s.html | CC-MAIN-2020-29 | en | refinedweb |
import "go.chromium.org/luci/server/limiter"
Package limiter implements load shedding for servers.
Currently only supports setting a hard limit on a number of concurrently processed requests. Over-engineered to support more features in the future without significantly altering its API.
doc.go grpc.go limiter.go module.go peer.go
ErrLimitReached is returned by CheckRequest when some limit is reached.
func NewModule(opts *ModuleOptions) module.Module
NewModule returns a server module that installs default limiters applied to all routes/services in the server.
NewModuleFromFlags is a variant of NewModule that initializes options through command line flags.
Calling this function registers flags in flag.CommandLine. They are usually parsed in server.Main(...).
func NewUnaryServerInterceptor(l *Limiter) grpc.UnaryServerInterceptor
NewUnaryServerInterceptor returns a grpc.UnaryServerInterceptor that uses the given limiter to accept or drop gRPC requests.
PeerLabelFromAuthState looks at the auth.State in the context and derives a peer label from it.
Currently returns one of "unknown", "anonymous", "authenticated".
TODO(vadimsh): Have a small group with a whitelist of identities that are OK to use as peer label directly.
Limiter is a stateful runtime object that decides whether to accept or reject requests based on the current load (calculated from requests that went through it).
It is also responsible for maintaining monitoring metrics that describe its state and what requests it rejects.
May be running in advisory mode, in which it will do all the usual processing and logging, but won't actually reject the requests.
All methods are safe for concurrent use.
New returns a new limiter.
Returns an error if options are invalid.
CheckRequest should be called before processing a request.
If it returns an error, the request should be declined as soon as possible with Unavailable/HTTP 503 status and the given error (which is an annotated ErrLimitReached).
If it succeeds, the request should be processed as usual, and the returned callback called afterwards to notify the limiter the processing is done.
ReportMetrics updates all limiter's gauge metrics to match the current state.
Must be called periodically (at least once per every metrics flush).
type ModuleOptions struct {
	MaxConcurrentRPCs int64 // limit on a number of incoming concurrent RPCs (default is 100000, i.e. unlimited)
	AdvisoryMode      bool  // if set, don't enforce MaxConcurrentRPCs, but still report violations
}
ModuleOptions contains configuration of the server module that installs default limiters applied to all routes/services in the server.
func (o *ModuleOptions) Register(f *flag.FlagSet)
Register registers the command line flags.
type Options struct {
	Name                  string // used for metric fields, logs and error messages
	AdvisoryMode          bool   // if true, don't actually reject requests, just log
	MaxConcurrentRequests int64  // a hard limit on a number of concurrent requests
}
Options contains configuration of a single Limiter instance.
type RequestInfo struct {
	CallLabel string // an RPC or an endpoint being called (if known)
	PeerLabel string // who's making the request (if known), see also peer.go
}
RequestInfo holds information about a single inbound request.
Used by the limiter to decide whether to accept or reject the request.
Pretty sparse now, but in the future will contain fields like cost, a QoS class and an attempt count, which will help the limiter to decide what requests to drop.
Fields `CallLabel` and `PeerLabel` are intentionally pretty generic, since they will be used only as labels in internal maps and metric fields. Their internal structure and meaning are not important to the limiter, but the cardinality of the set of their possible values must be reasonably bounded.
Package limiter imports 17 packages and is imported by 2 packages. Updated 2020-07-02. | https://godoc.org/go.chromium.org/luci/server/limiter | CC-MAIN-2020-29 | en | refinedweb
#include <hallo.h>
Karsten M. Self wrote on Sun Oct 20, 2002 um 07:53:30PM:
> DebianPlanet and LinuxWatch have both posted reviews of Debian 3.0. As
> might be expected, a lot of whinging about the installation. Little
> appreciation for true merits, most of which only become apparent on
> actual use and maintenance of the system.
suggest to give people working on useability tasks give more rights (read: complain to Tech. Comitee faster) since many maintainers (I won't call the names but most of you which people I am talking about) just give a fsck on the needs of newbies. (read: The _bad_ old: GO-AND-RTFM method instead of thinking about the reason for even coming to the problem and fixing the real reasons instead of symptoms.)
Hm, additional things I just think of:
- broken home/end keys in bash in xterm (even in Woody)
- missing apt localisation extensions (who t.f. told we that we are going to release in the next few weeks, again and again for almost 6 months?!)
- centralised "setup" tool which would reconfigure etherconf, pppoeconf, and do sth. as gx-debconf does, but be more understandable.
Gruss/Regards, Eduard. | https://lists.debian.org/debian-devel/2002/10/msg01400.html | CC-MAIN-2020-29 | en | refinedweb
Introduction to Buckminster
Contents
- 1 Getting Started
- 2 Installation
- 3 What do you want to do?
- 4 Materialize an existing component
- 5 Publish an existing component
- 6 Publish your Eclipse project
- 7 Put together a virtual distribution
- 8 Component Specifications (CSPECs)
Getting Started
The purpose of this document is to enable you to install Buckminster, and to start using it for materializing and publishing component specifications. It provides samples of Buckminster code as XML fragments, and illustrates common usage scenarios.
If you are new to Buckminster, then you should read the Why Buckminster? document first.
The key essential concepts used in Buckminster are:
- A Component Query (CQUERY). A CQUERY is a (usually quite small) XML file which names some component that you want to download and materialise. Included within the CQUERY is a reference (usually a URL) to a resource map (RMAP).
- A Resource Map (RMAP). An RMAP (again, an XML file) identifies possible locations - e.g. internet repositories - where a component - given in a CQUERY - can be found. The component in the CQUERY may depend on having further components (which in turn may need further ones, forming a transitive closure): the RMAP lists all the places where all of the components may be found.
- A publisher of components normally provides his/her audience of consumers with a CQUERY and RMAP. Armed with these, consumers can give the CQUERY (which in turn points to the given RMAP) to the Buckminster plugin to the Eclipse IDE and the appropriate components will be materialised into Eclipse. Materialisation into non-Eclipse environments, including stand-alone on a local or remote machine, is also possible.
- Bills of Materials (BOM). A BOM is an XML file generated automatically by Buckminster, given a CQUERY and accompanying RMAP: it gives a detailed list of all components, and their respective locations, in order to fulfill that particular CQUERY. A BOM file can also be published and used to materialise, in situations where the publisher wants to ensure that all of his/her audience of consumers get precisely the same set of specific component instances from the same set of repositories.
- Component Specification (CSPEC). A CSPEC is an XML file that describes the metadata, including dependencies on other components, for a particular component. CSPECs are automatically generated by Buckminster for Eclipse plugins and features, and thus Eclipse-based publishers in most cases do not need to manually write CSPECs.
- Component Specification Extension (CSPEX). A CSPEX is an XML file that is manually written to extend an automatically generated CSPEC in cases where Buckminster cannot find the entire metadata needed automatically.
- Materialisation Specification (MSPEC). An MSPEC is an XML file which can be provided by a publisher to ensure that downloaded components materialise to particular folders/directories on target machines, and with appropriate replace/update/discard policies if some of the components are in fact already downloaded on a specific machine.
All these Buckminster features are explained in this document, using working examples. However, keep in mind that in many cases, a CQUERY and an RMAP are all that are needed.
Installation
Installation instructions for Buckminster into the Eclipse IDE are given here.
What do you want to do?
There are a number of different ways in which you may wish to use Buckminster, and these are listed in each of the following sections:
- Materialize an existing component (and those on which it depends)
- Publish an existing component (and those on which it depends) so that others can materialize your component
- Publish your Eclipse project: “Buckminsterizing” your project and sharing it with your team
- Put together a virtual distribution for a set of various packaged components: you want to publish your product and share with the whole software community.
Materialize an existing component
A materialization is started in either of two ways:
- by using a Component Query (CQUERY) – for the top most component required OR
- by providing a pre-existing Bill of Materials (BOM)
The CQUERY case is described here. The BOM case is in the immediate next section.
A sample CQUERY is online here, from the Hello World example, and reproduced below for convenience:
<?xml version="1.0" encoding="UTF-8"?>
<cq:componentQuery xmlns:cq="http://www.eclipse.org/buckminster/CQuery-1.0" resourceMap="...">
    <cq:rootRequest name="org.demo.hello.xml.world" componentType="osgi.bundle" versionType="OSGi"/>
</cq:componentQuery>
The first line indicates that the file contains XML.
The second line gives a CQUERY (from the XML name space) and also identifies a Resource Map (or RMAP). RMAPs are discussed later below. The resourceMap entry may in fact be omitted, in which case Buckminster will attempt to resolve the CQUERY locally or by using preconfigured remote resolvers.
The rootRequest identifies the name of the top most component required – here org.demo.hello.xml.world. The (optional) componentType identifies this component as a osgi.bundle (an Eclipse plugin). Finally the versionType indicates that the version numbering scheme used for this particular component follows the OSGi versioning convention. This is in fact the default version scheme for Buckminster, and an OSGI formatted version string follows the pattern [0-9].[0-9].[0-9].[a-zA-Z0-9_-] (for example: 3.2.0.RC_beta-1). Alternative values for the versionType are String; Timestamp; or Triplet. An overview of versioning is given Why Buckminster? introduction and there is a fuller description in the Buckminster meta-data language guide.
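The OSGi pattern quoted above can be checked mechanically. The following Java sketch is illustrative only: the regular expression is our approximation of the pattern quoted above, not Buckminster's actual version parser.

```java
import java.util.regex.Pattern;

// Approximates the OSGi version convention quoted above:
// major.minor.micro plus an optional alphanumeric qualifier.
public class OsgiVersionCheck {
    private static final Pattern OSGI =
        Pattern.compile("\\d+\\.\\d+\\.\\d+(\\.[a-zA-Z0-9_-]+)?");

    public static boolean isOsgiVersion(String s) {
        return OSGI.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isOsgiVersion("3.2.0.RC_beta-1")); // true
        System.out.println(isOsgiVersion("not-a-version"));   // false
    }
}
```

A version such as 3.2.0.RC_beta-1 passes, while arbitrary strings do not.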
You can find the CQUERY wizard to help you write CQUERY XML files in File > New > Other > Buckminster > Component Query File. Enter an appropriate Container and File name in which you want your new CQUERY to be placed. Then fill in the fields of the form as needed:
- the Component Name is the top level name of your target component, and will be the rootRequest for the CQUERY
- Properties and Advisor Nodes (see the tabbed form at the bottom of the pane) are advanced topics, discussed TBD
- An RMAP should normally be given. We discuss this in a later section below.
At the bottom of the wizard, there are three buttons: select External Save As to save your CQUERY as an external file.
To actually execute a CQUERY:
- either select File > Open File within the Eclipse Java SDK, and enter the URL for the CQUERY in the File Name field. This will then start the CQUERY wizard and populate it with the data from the CQUERY file you specified OR
- use the CQUERY wizard directly OR
- use File > Import > Other > Materialize from Buckminster MSPEC, CQUERY or BOM, pressing Next and then the URL for the CQUERY and the Load button.
In each case, proceed to select the Resolve to Wizard or, more directly, Resolve and Materialise button. Resolve to Wizard allows you a finer degree of control. In particular, it can show you the full set of components that Buckminster has resolved and is prepared to materialize, as well as letting you save the derived BOM (i.e., as an external file). Press Next and the platform environment settings can be examined, and the generated MSPEC saved if you so wish. Press Finish to execute the materialisation.
When materialization starts, a new window will open, entitled Resolving Query, with a progress bar. Buckminster runs the query, transitively finds all the required components, downloads them, and runs all required actions (such as building the required jar). When Buckminster has successfully finished, your workspace is set up with the transitive closure of the components selected.
So, in summary, to initiate a materialization:
- Write a CQUERY in XML into a file, either directly using an editor or by using the CQUERY wizard.
- State the name of the top level component you are materializing.
- Provide the URL of the RMAP you want to use to identify one or more repositories where you expect to find the top level component, and any transitively required dependencies, to be located.
- Enter the file name of your CQUERY file into File > Open File in Eclipse, or immediately proceed to resolve and materialise within the CQUERY wizard.
Materializing using a pre-existing Bill Of Materials
As noted in the Why Buckminster? guide, a BOM represents a snapshot of your components taken at the time of resolution when the BOM was externalised from Buckminster's internal model. The same query, resolved later at a different time, may yield a different result (i.e. a different BOM) because components may have moved or newer versions may be available.
It can be useful to use a BOM (rather than a CQUERY) for materialization, when it is important that every user materializes the identical configuration. Examples include sharing a software configuration within a team, or submitting a configuration for testing.
As noted in the previous section, a BOM can be saved as an external file by the CQUERY wizard and before materialization using the BOM function in the Resolve to Wizard.
Subsequently, the BOM file can be used by selecting File > Import > Other > Materialize from Buckminster MSPEC, CQUERY or BOM; pressing Next; then the URL for the BOM file, and finally the Load button. Note that if instead you simply pass a BOM file to File > New File, the BOM file will be displayed as a text file.
After the BOM file is loaded, press either Finish to complete the materialization, or Next to allow finer control as in the CQUERY Resolve to Wizard (explained in the previous section), followed eventually by Finish to materialize.
After Materialisation: Where is the component on my machine ?
Once you have successfully materialised a component, you should be able to see it listed in your package explorer: Window > Show view > Package Explorer.
If you right-click the component, a menu listing things you can do with the component is thrown up. In particular, the menu includes items Buckminster > Invoke Action.. and Buckminster > View CSPEC..
Invoke Action.. allows you to click on any of the publicly available action attributes defined for this particular component. Action attributes are discussed in detail below. An optional Properties file can be also be specified as context for whatever action you select: for example, to specify the output folder for a build action.
View CPSEC.. allows you to view the Component Specification (CPSEC) for the component, which in many cases is automatically generated by Buckminster. The CSPEC will list the publicly available action attributes which Invoke Action.. can use, together with other attributes. CSPECs are discussed in detail below.
Resource Maps (RMAPs)
An RMAP provides location information for families of components. When Buckminster needs to load a particular component, it looks in the RMAP (specified in the CQUERY, see above) to determine the repository (or repositories) containing it. If that component in turn requires other components, the RMAP is consulted to find the repository (or repositories) containing each component.
This separation of component and storage location is useful because it allows Buckminster to retrieve the same component from potentially many different storage locations, allowing flexibility.
First, the RMAP defines storage locations for families of components, rather than single components. A user can say, "find all the Apache components on the Apache ftp server," without having to specify "find Apache Struts on the Apache ftp server and find Apache Cocoon on the Apache ftp server and find ...," etc., etc. This is done by locator string patterns, as discussed below.
Second, the RMAP defines a search path of storage locations to search for a component. A user can say, "first look in my local repository, then look in my team's repository, then look in my corporate group repository, then look in the Eclipse public CVS repository". This is done by providing alternative repository providers within a searchPath, as we will see below.
Third, the RMAP allows variability by setting alternative values for the locators. For example, alternative repositories can be specified for different distributed development sites. The RMAP for a team in Stockholm would list the same component families as the RMAP for a team in Winnipeg, but the two RMAPs would differ with alternative local repository servers (specified in their respective locator entries). TCP round-trips from Canada to Sweden are slower than TCP round-trips from Canada to Canada, so the Winnipeg team might maintain a replicated CVS repository for certain components. Buckminster allows both teams to use essentially the same RMAP, but to define a different repository sites, and thus a different set of search paths. Because Buckminster tries each entry in a list of locators in order, the last locator entry in the alternative RMAPs might be the same: for example, if components were not available locally in a repository cache, they would be fetched from some common designated master site for all teams.
Similarly to CQUERYs, an RMAP file can be manually prepared using an editor, or a Buckminster wizard at File > New > Other > Buckminster > Resource Map File.
Here is a simple RMAP (and is online here), from the Hello World example:
<?xml version="1.0" encoding="UTF-8"?>
<rm:rmap xmlns:rm="http://www.eclipse.org/buckminster/RMap-1.0"
         xmlns:bc="http://www.eclipse.org/buckminster/Common-1.0">
  <rm:searchPath name="default">
    <rm:provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="false" source="false">
      <rm:uri format="...,{0}">
        <bc:propertyRef key="buckminster.component"/>
      </rm:uri>
    </rm:provider>
  </rm:searchPath>
  <rm:searchPath name="maven">
    <rm:provider readerType="maven" componentTypes="maven" mutable="false" source="false">
      <rm:uri format="http://www.ibiblio.org/maven2"/>
      <!-- plus a mapping that translates the component name
           se.tada.util.sax into the Ibiblio artifact name tada-sax -->
    </rm:provider>
  </rm:searchPath>
  <rm:locator searchPathRef="maven" pattern="^se\.tada\."/>
  <rm:locator searchPathRef="default" pattern="^org\.demo\."/>
</rm:rmap>
The opening lines declare the name spaces and syntax of the RMAP and the repository providers needed.
Below that are two major elements declaring searchPaths: the first is called default, and the second is called maven.
Continue down and you find two locator declarations. These map patterns of component names, to specific searchPaths:
- the first states that if a component name starts with the pattern se.tada. then the maven search path should be used; and
- the second locator states that if a component name starts with the pattern org.demo. then the default path should be used.
Note that locator entries are tried in order: in this example, Buckminster will first try to match component names beginning with se.tada. (and if matched, then hence the maven repository), before the second entry. Thus in general, the locator list should be arranged in order from "most specific string pattern" to "least specific": in the example above the order in fact does not matter since the string patterns are distinct.
The same searchPathRef value may occur in multiple locator declarations. For example, we could add a further declaration:
<rm:locator searchPathRef="maven" pattern="^hi\.hello\."/>
and thus components whose name started with se.tada or with hi.hello would in each case map to the maven search path.
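Because locators are consulted in order with first match winning, the locator table behaves like an ordered list of (pattern, searchPath) pairs. The Java sketch below is purely illustrative (LocatorTable is our own class, not part of Buckminster's API); it reproduces the two locators above:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class LocatorTable {
    // Insertion order matters: entries are tried top to bottom, first match wins.
    private final Map<Pattern, String> locators = new LinkedHashMap<>();

    public void add(String regex, String searchPath) {
        locators.put(Pattern.compile(regex), searchPath);
    }

    // Returns the searchPath of the first locator whose pattern matches
    // the start of the component name, or null when nothing matches.
    public String resolve(String componentName) {
        for (Map.Entry<Pattern, String> e : locators.entrySet()) {
            if (e.getKey().matcher(componentName).lookingAt()) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        LocatorTable t = new LocatorTable();
        t.add("^se\\.tada\\.", "maven");
        t.add("^org\\.demo\\.", "default");
        System.out.println(t.resolve("se.tada.util.sax"));         // maven
        System.out.println(t.resolve("org.demo.hello.xml.world")); // default
    }
}
```

Ordering the entries from most specific to least specific, as recommended above, is what makes this first-match scan safe.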
Back to the searchPaths themselves. An RMAP can have one or more searchPaths, and the order in which they appear is not significant. Each searchPath must have one or more providers, and if more than one provider is given, they will be tried in order for that particular searchPath. In our case, we have two paths:
- The default path:
- The default path declares that the repository for these family of components (in fact, those starting with org.demo.) is in a CVS repository, and that in fact these components are part of an eclipse project ("osgi.bundle, eclipse.feature, buckminster").
- We do not want to commit changes back (mutable=false), and we do not want to extract source code (source=false).
- The URI to a specific CVS repository is then given.
- This URI ends with the name of the component. This name is obtained from the preset property buckminster.component which contains the name of the component being matched.
- The maven path:
- The maven path declares that it uses a maven provider; a readerType of maven, and that the componentType is also a maven component.
- Like the default path, we do not want to commit changes back (mutable=false), and we do not want to extract source code (source=false).
- A URI to the maven repository at Ibiblio is declared.
- A mapping that translates the component name into what the actual file on Ibiblio is called, is then declared (i.e. mapping se.tada.util.sax to tada-sax).
So, in summary, to write an RMAP:
- Identify a number of searchPaths which the RMAP can use.
- For each searchPath, give a locator string pattern matching the component names which you want to be found using this searchPath. In fact you can define several locator string patterns, so that several different sets of components can all be found using the same searchPath.
- For each searchPath, give one or more providers. Each provider should list its readerType (CVS, maven, subversion or whatever) together with a list of URIs at which appropriate repositories can be found.
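The summary above can be pictured as two ordered lookups: a locator selects a searchPath, and the searchPath's providers are then tried in order until one of them can supply the component. A minimal Java sketch of the provider fallback (all names here are illustrative, not Buckminster API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class SearchPath {
    // Each provider either yields a repository location for a component, or nothing.
    private final List<Function<String, Optional<String>>> providers;

    @SafeVarargs
    public SearchPath(Function<String, Optional<String>>... providers) {
        this.providers = Arrays.asList(providers);
    }

    // Try each provider in declaration order; return the first hit.
    public Optional<String> locate(String component) {
        for (Function<String, Optional<String>> p : providers) {
            Optional<String> hit = p.apply(component);
            if (hit.isPresent()) {
                return hit;
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // A local cache that only knows one component, then a remote fallback.
        SearchPath path = new SearchPath(
            c -> c.equals("org.demo.cached") ? Optional.of("file:/cache/" + c)
                                             : Optional.empty(),
            c -> Optional.of("cvs://master/" + c));
        System.out.println(path.locate("org.demo.cached").get()); // file:/cache/org.demo.cached
        System.out.println(path.locate("org.demo.other").get());  // cvs://master/org.demo.other
    }
}
```

This mirrors the "first look in my local repository, then in the master repository" search paths described earlier.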
What RMAP Reader Types are supported?
At the time of writing, the full list of readerTypes for use within provider declarations is given in the Buckminster meta-data language guide. It includes, for example, CVS, Subversion, Maven, Perforce and site.feature, amongst others.
componentTypes are needed to configure the required readerType. The componentType indicates to the readerType which component structure and meta-data to expect for the declared provider. This information is essential in order for Buckminster to extract component dependency information, which Buckminster automatically translates on-the-fly into its own dependency format (ie a CSPEC) which is then used in the component resolution and materialization process.
componentTypes must be referenced in this attribute by their fully qualified names. The full list of componentType implementations is also given in the Buckminster meta-data language guide. They include, for example, osgi.bundle, eclipse.feature, jar and maven, amongst others.
Publish an existing component
Buckminster provides two ways to publish an existing component:
- Write a CQUERY, and associated RMAP, as described above in the section Materialize an existing component above. Publish the CQUERY so that others can use it, as described also in the same section OR
- Derive a BOM from a CQUERY (and an associated RMAP). Publish the BOM so that others can use it, as described in Materializing using a pre-existing Bill Of Materials. Bear in mind the advantages and dis-advantages of publishing a BOM rather than a CQUERY, as described in the same section.
Publish your Eclipse project
In many cases, Buckminster can automatically fully, or partially, derive the specifications needed to "Buckminsterize" an existing project from meta-data that is already available. Since this is true for Eclipse plugins, "Buckminsterizing" these projects is particularly simple.
For Eclipse projects that are not just plug-ins, Buckminster may need to be manually told about dependencies outside of the plug-in. A Component Specification (CSPEC) is used for this, and more particularly CSPEC Extensions (CSPEXs); both are discussed below.
There are four basic steps to Buckminsterizing an entire Eclipse project containing plug-ins and features:
- Ensure that every component in your project has an implicit or explicit CSPEC. CSPECs/CSPEXs only have to be explicitly written for those components for which Eclipse cannot automatically derive meta-information. For many Eclipse projects, Buckminster can automatically generate complete CPSECs on the fly, and manual intervention is unnecessary.
- Write an RMAP so that it and those components on which it depends, can be located. Put this XML file somewhere on your network where others can read it.
- Write a CQUERY naming your top level component, and specifying the RMAP (as a URL). Put this XML file somewhere on your network where others can read it.
- Tell the others about your CQUERY XML file, and have them open it in Eclipse using File > File Open or equivalent mechanisms as explained above.
Straight-forward cases
If all the components of your Eclipse project are either features or bundles, then nothing at all needs to be done with them: the Buckminster CSPEC generator can do the work for you.
All that remains is the Buckminster RMAP: where can the components be found?
The basic layout of the repositories (CVS, maven or whatever) storing the components needs to be understood, and an RMAP and a CQUERY created for them. You should then execute the CQUERY to verify that everything materialises correctly.
Here for example is the RMAP which the Eclipse STP project team used to Buckminsterise STP (and the CQUERY follows immediately):
<?xml version="1.0" encoding="UTF-8"?>
<rmap xmlns="http://www.eclipse.org/buckminster/RMap-1.0"
      xmlns:bc="http://www.eclipse.org/buckminster/Common-1.0">
  <searchPath name="soas.features">
    <provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="true" source="true">
      <uri format=":pserver:anonymous@dev.eclipse.org:/cvsroot/stp,org.eclipse.stp.soas/features/{0}">
        <bc:propertyRef key="buckminster.component"/>
      </uri>
    </provider>
  </searchPath>
  <searchPath name="soas.plugins">
    <provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="true" source="true">
      <uri format=":pserver:anonymous@dev.eclipse.org:/cvsroot/stp,org.eclipse.stp.soas/{0}">
        <bc:propertyRef key="buckminster.component"/>
      </uri>
    </provider>
  </searchPath>
  <searchPath name="stp.sc">
    <provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="true" source="true">
      <uri format=":pserver:anonymous@dev.eclipse.org:/cvsroot/stp,org.eclipse.stp.servicecreation/{0}">
        <bc:propertyRef key="buckminster.component"/>
      </uri>
    </provider>
  </searchPath>
  <searchPath name="datatools.connectivity">
    <provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="true" source="true">
      <uri format=":pserver:anonymous@dev.eclipse.org:/cvsroot/datatools,org.eclipse.datatools.connectivity/R1.0/{0}">
        <bc:propertyRef key="buckminster.component"/>
      </uri>
    </provider>
  </searchPath>
  <searchPath name="b2j.plugins">
    <provider readerType="cvs" componentTypes="osgi.bundle,eclipse.feature,buckminster" mutable="true" source="true">
      <uri format=":pserver:anonymous@dev.eclipse.org:/cvsroot/stp,org.eclipse.stp.b2j/{0}">
        <bc:propertyRef key="buckminster.component"/>
      </uri>
    </provider>
  </searchPath>
  <!--
  <searchPath name="imported">
    <provider readerType="eclipse.import" componentTypes="osgi.bundle,eclipse.feature" mutable="true" source="true">
      <uri format="file:/c:/M7/site/eclipse?importType=binary"/>
    </provider>
  </searchPath>
  -->
  <locator searchPathRef="soas.features" pattern="^org\.eclipse\.stp\.soas\.(.*\.)?feature" />
  <locator searchPathRef="soas.plugins" pattern="^org\.eclipse\.stp\.soas(\..*)?" />
  <locator searchPathRef="stp.sc" pattern="^org\.eclipse\.stp\.sc(\..*)?" />
  <locator searchPathRef="stp.sc" pattern="^org\.eclipse\.stp\.common(\..*)?" />
  <locator searchPathRef="b2j.plugins" pattern="^org\.eclipse\.stp\.b2j(\..*)?" />
  <locator searchPathRef="datatools.connectivity" pattern="^org\.eclipse\.datatools\.connectivity(\..*)?" />
  <!--
  <locator searchPathRef="imported" pattern="^org\.eclipse\.emf(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.eclipse\.wst(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.eclipse\.jst(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.eclipse\.jem(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.eclipse\.xsd(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.eclipse\.perfmsr(\..*)?" />
  <locator searchPathRef="imported" pattern="^org\.apache(\..*)?" />
  -->
</rmap>
As you can see, there are multiple CVS repositories for soas.features, soas.plugins, stp.sc, datatools.connectivity and b2j.plugins. There are two locators for the stp.sc repository, corresponding to two sets of components org.eclipse.stp.sc and org.eclipse.stp.common. Also you will see commented out XML for the same component sets which was used to help develop and debug the RMAP, when Buckminster was instructed to seek pre-built binaries obtained from the Eclipse Update Sites, rather than using source for components which essentially were external to STP.
The RMAP above was put into a local file stp.rmap, and then given to the corresponding simple CQUERY:
<?xml version="1.0" encoding="UTF-8"?>
<cq:componentQuery xmlns:cq="http://www.eclipse.org/buckminster/CQuery-1.0" resourceMap="stp.rmap">
    <cq:rootRequest name="org.eclipse.stp.soas.feature" componentType="eclipse.feature"/>
</cq:componentQuery>
Once the materialisation completed, the component org.eclipse.stp.soas.feature materialises into the current workspace. Right-clicking on the package explorer view for the component throws up the usual Buckminster > Invoke Action.. option. Selecting the feature.exports action (which was automatically generated by Buckminster as part of the associated CSPEC) then proceeds to build the feature and all of its various plug-ins -- a properties file such as
buckminster.output.root=c:/temp/stp.build.output
can be given to the Invoke Action.. window so as to control to which directory the built output goes.
Put together a virtual distribution
A virtual distribution ("distro") is software distribution made available via a hosting server (e.g. somewhere on the internet): however the hosting server itself does not store any of the binaries or other (eg source) components that make up the distribution - in that sense, it is virtual. The virtual distro is actually a set of pointers to places where the component binaries and/or source codes can be obtained during the materialization.
Sometimes "virtual distro" however is taken as having a different meaning: a package of operating system software hosted above a native operating system. Such "virtual hosting" is quite different from the "virtual distribution" capabilities enabled by Buckminster.
A virtual distro is in practice a set of metadata stored in a hosting server. The metadata contains information about the components forming the distro. Each of the components may define dependencies on other components. If the virtual distro is to be materializable (ie downloadable), then all component dependencies must be resolved prior to materialization.
Frequently it is possible to create a set of virtual distros, using essentially the same set of components configured in different ways, tailoring to particular use cases.
The role of the publisher of the virtual distro is to define complete set of components to be materialized. The process of virtual distro materialization is as follows:
- reading virtual distro metadata
- resolution of all component dependencies
- downloading components from their locations
- performing post-download actions on the downloaded components
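In code form, these four steps amount to a simple pipeline. The following Java sketch is purely illustrative (none of these names exist in Buckminster); it only shows the ordering of the steps:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class MaterializationPipeline {
    // Runs: read metadata -> resolve dependencies -> download -> post-download actions.
    public static List<String> run(
            String distro,
            Function<String, List<String>> readMetadata, // distro -> root components
            UnaryOperator<List<String>> resolve,         // close over all dependencies
            Function<String, String> download,           // component -> local path
            UnaryOperator<String> postAction) {          // e.g. unpack or build
        List<String> materialized = new ArrayList<>();
        for (String c : resolve.apply(readMetadata.apply(distro))) {
            materialized.add(postAction.apply(download.apply(c)));
        }
        return materialized;
    }

    public static void main(String[] args) {
        List<String> out = run("demo-distro",
            d -> Arrays.asList("a"),          // metadata names one root component
            roots -> Arrays.asList("a", "b"), // resolution adds a dependency b
            c -> "/tmp/" + c + ".zip",        // pretend download
            p -> p + " (unpacked)");          // pretend post-download action
        System.out.println(out);
    }
}
```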
Creating and Publishing a Virtual Distro: an example
A virtual distro is defined by the topmost component. The top component is not necessarily itself mapped to any binaries or source code: it can instead just define some distro attributes (name, version, description etc.) and dependencies on other components.
The top component is described with the Buckminster CSPEC file. Once the CSPEC is created, it can be made available to the community of users (e.g. using a site like Cloudsmith). CSPECs can be automatically generated by Buckminster for Eclipse plugins and features. However, for this example, we will write CSPECs manually.
The top component can depend on two different types of components:
- virtual components
- materializable components
A virtual component is just yet another component which in turn simply refers to another set of components (with attributes and dependencies). It may optionally define post-download actions on the materialized components. It does not usually point to any binaries or source code.
A materializable component points to the binaries or source code (or both), stored at appropriate repositories somewhere in the internet. In addition, it may define attributes, dependencies on other components, etc.
To better understand this all, we'll use the example of the virtual distro for Lomboz for Web Minimal for Windows, version 3.2.2. We will show you the steps necessary to be a publisher of a virtual distro, using Buckminster, for an audience community of users who may wish to use the virtual distro.
In fact several similar distros can be created using essentially the same structure of reusable virtual components. We initially define the top component, which in turn defines dependencies on two other components – the windows specific one, and the collection of platform independent components for this distro.
The top component is called, for the example, com.eteration.lomboz4web-win32-minimal. The CSPEC written for it is:
<?xml version="1.0" encoding="UTF-8"?>
<cs:cspec xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0"
          name="com.eteration.lomboz4web-win32-minimal"
          shortDesc="Lomboz for Web Minimal for Windows"
          version="3.2.2">
    <cs:dependencies>
        <cs:dependency name="org.objectweb.lomboz.product.lomboz-all-in-one-win32"/>
        <cs:dependency name="com.eteration.lomboz4web-platform-independent-minimal"/>
    </cs:dependencies>
</cs:cspec>
As you can see, this manually written CSPEC is reasonably straight-forward:
- The first three lines relate to standard XML declarations.
- The name of the component described by the CSPEC then follows, in this case com.eteration.lomboz4web-win32-minimal.
- There is then a short textual description, and some version information. Versioning is described in more detail in Why Buckminster? introduction and there is a fuller description in the Buckminster meta-data language guide.
- There then follow two dependencies: the current component in turn depends on the components org.objectweb.lomboz.product.lomboz-all-in-one-win32 and com.eteration.lomboz4web-platform-independent-minimal
More detailed examples of CSPECs are given below, if you are interested.
The component org.objectweb.lomboz.product.lomboz-all-in-one-win32 is in fact materializable (ie it consists of binaries and/or source).
The com.eteration.lomboz4web-platform-independent-minimal component is a virtual component. We have to write another component specification, again as a CSPEC:
<?xml version="1.0" encoding="UTF-8"?>
<cs:cspec xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0"
    name="com.eteration.lomboz4web-platform-independent-minimal">
    <cs:dependencies>
        <cs:dependency name="apache-tomcat"/>
        <cs:dependency name="com.eteration.lomboz-common"/>
    </cs:dependencies>
</cs:cspec>
The apache-tomcat component is materializable.
The com.eteration.lomboz-common component is another virtual component, and here is the CSPEC written for it:
<?xml version="1.0" encoding="UTF-8"?>
<cs:cspec xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0"
    name="com.eteration.lomboz-common">
    <cs:dependencies>
        <cs:dependency name="myfaces-core"/>
        <cs:dependency name="hibernate-core"/>
        <cs:dependency name="spring-framework"/>
        <cs:dependency name="struts"/>
    </cs:dependencies>
</cs:cspec>
All the components upon which this component depends are materializable.
At this point, we have defined a tree of related components, all of whose leaves are materializable. We have written a set of CSPECs, one for each virtual component (if any of the components had been Eclipse plugins or features, their CSPECs could have been generated automatically rather than written by hand), and these CSPECs will need to be made available to our audience. Cloudsmith is of course one concise and natural way to publish such virtual distros (including the necessary CSPECs) to an audience.
The CQUERY
The CQUERY for the top level component (com.eteration.lomboz4web-win32-minimal) is:
<?xml version="1.0" encoding="UTF-8"?> <cq:componentQuery xmlns: </cq:componentQuery>
Note that an RMAP must be provided: a sample RMAP is given later below.
Because no specific version is given in the CQUERY, Buckminster will look for the latest version available. More sophisticated queries - such as version overriding or exclusion of specific components - are of course possible. For more details, see the Buckminster meta-data language guide documentation.
The CQUERY - like the CSPECs we wrote above - will have to be published to our audience community, so that they can use our virtual distro: once again Cloudsmith is one tool which can make this easy.
Resolving this query (using the usual approach, see Materializing an existing component) will yield a bill of materials. The bill of materials is a set of pointers to all the specific components needed for materialization of the distro, together with their respective locations.
Users - those we wish to be able to materialise the top level component com.eteration.lomboz4web-win32-minimal - can be given the original CQUERY, or the bill of materials. However, we can make things even more convenient, and avoid users having to select the destination for each component (or at least specifying a default destination). A Buckminster MSPEC can be used instead, which allows us (ie the publisher of the virtual distro) to predefine destination locations: this is discussed later below.
First however, we should consider the RMAP for the CQUERY.
The RMAP
A possible RMAP for the virtual distro is:
<?xml version="1.0" encoding="UTF-8"?> <rm:rmap xmlns: :locator <rm:locator <rm:locator <rm:locator <rm:locator <rm:locator <rm:locator <rm:locator <rm:locator </rm:rmap>
The RMAP - like the CSPECs and CQUERY we wrote above - will also have to be published to our audience community, so that they can use our virtual distro: and also once again Cloudsmith is one tool which can make this easy.
Making use of an MSPEC to further simplify downloading
As noted above, we can make things even more convenient, and avoid users having to select the destination for each component (or at least specifying a default destination) by providing an MSPEC, such as the following:
<?xml version="1.0" encoding="UTF-8"?>
<md:mspec xmlns:md="http://www.eclipse.org/buckminster/MetaData-1.0"
    name="com.eteration.lomboz4web-win32-minimal"
    installLocation="c:/lomboz/install"
    materializer="filesystem"
    url=""
    conflictResolution="REPLACE">
    <md:mspecNode namePattern="^com\.eteration\.lomboz\..*" exclude="true"/>
    <md:mspecNode namePattern="^org\.objectweb\.lomboz\.product\.lomboz.*" installLocation="eclipse.zips/" conflictResolution="UPDATE"/>
    <md:mspecNode namePattern="^myfaces-core$" installLocation="frameworks.zips/" conflictResolution="UPDATE"/>
    <md:mspecNode namePattern="^hibernate-core$" installLocation="frameworks.zips/" conflictResolution="UPDATE"/>
    <md:mspecNode namePattern="^spring-framework$" installLocation="frameworks.zips/" conflictResolution="UPDATE"/>
    <md:mspecNode namePattern="^struts$" installLocation="frameworks.zips/" conflictResolution="UPDATE"/>
    <md:mspecNode namePattern="^apache-tomcat$" installLocation="servers.zips/" conflictResolution="UPDATE"/>
</md:mspec>
The MSPEC indicates that:
- The materialisation should be done to the local folder c:/lomboz/install, which is a filesystem, and in general if there are conflicting components already installed on the destination machine, then they should be REPLACEd with the new components being downloaded.
- All downloaded components whose names match the pattern com.eteration.lomboz.* are virtual, and are excluded.
- All downloaded components whose names match the pattern org.objectweb.lomboz.product.lomboz.* should go to the folder eclipse.zips/ (under c:/lomboz/install/), but replace any existing components of the same name only if the downloaded ones are a more recent version.
- Likewise, myfaces-core, hibernate-core, spring-framework and struts go to the folder frameworks.zips/; and apache-tomcat to the folder servers.zips/.
Note that the URL to the .bom file, generated (and saved) during query resolution should be inserted in the 6th line above for the url= value.
For more details of MSPECs, see the Buckminster meta-data language guide documentation.
Component Specifications (CSPECs)
If Buckminster cannot derive the necessary component structure from existing meta-data, a CSPEC must be manually created as an XML file. Buckminster provides a wizard to help at File > New > Other > Buckminster > Component Specification file.
Sample CSPECs can be viewed by selecting File > View a selected CSPEC within the Eclipse Java SDK. Some documented examples are also given in this section later below.
A CSPEC lists a set of named attributes: you can think of it as the interface to the component, for Buckminster to use - more specifically, for other components in Buckminster to use. In fact an attribute can either be publicly available (for other CPSECs to use) as part of this interface, or for private use just within the current CSPEC itself.
Each attribute is one of three possibilities:
- an artifact that denotes one or more pre-existing files and directories (folders). The distinction between a file and a folder/directory depends on whether or not there is a trailing '/' after the name. The <path> sub-element can be used when more than one file or directory is desired; OR
- an action that describes something that can be carried out in order to produce zero or more new artifacts (ie files or directories); OR
- a group, which is an arbitrary grouping of artifact, action, and group instances. It can be trivially used just to provide further alternate names for attributes and/or make private attributes public (the analogy is a method which can return the value of another method). However, a group is more usefully adopted to bring together the output of several attributes into one single attribute.
To implement action attributes, currently Buckminster only supports ant as an actor scripting language. Parameters may be passed to the actor via actorProperties values (for example targets in the case of ant to describe the targets which ant should try to build). There are also certain "built-in" actions: eclipse.clean to request a clean build from the Eclipse build system; eclipse.build to request a full build from the Eclipse build system; and buckminster.prebind, which is called by the "workspace" materializer after it has completed a download, but before the component is bound to the Eclipse workspace. This action is not expected to have a product. Its primary purpose is to rearrange things or copy things into a project.
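For illustration, an ant-based action attribute might be declared along these lines. This is only a sketch: the names build.jars, make/build.xml and jars are invented for the example, while the element and property names follow the standard Buckminster CSpec schema:

<cs:actions>
    <cs:public name="build.jars" actor="ant">
        <cs:actorProperties>
            <cs:property key="buildFile" value="make/build.xml"/>
            <cs:property key="targets" value="jars"/>
        </cs:actorProperties>
        <cs:products base="bin/jars/"/>
    </cs:public>
</cs:actions>

Here Buckminster would invoke ant with make/build.xml as the build file and ask it to run the jars target, with the action's output expected under bin/jars/.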
Finally, a CSPEC may also define a list of dependencies on other components. Such dependencies are referenced from actions, prerequisites and groups.
Here is a simple example of an artifact and a group (but not a complete CSPEC):
<cs:artifacts>
    <cs:public name="foo.jar" path="com.ics.jar" base=".." type="bundle.jar"/>
</cs:artifacts>
<cs:groups>
    <cs:public name="java.binaries">
        <cs:attribute name="foo.jar"/>
    </cs:public>
</cs:groups>
Here an artifact foo.jar denotes a single file located at ../com.ics.jar, which is of type bundle.jar. If the base is omitted, the name of the component (described by the CSPEC) is assumed. The type field is essentially at this time only documentation, and serves no other purpose. Because of the groups declaration, the foo.jar artifact ends up in practice having two externally visible names: foo.jar and java.binaries. The java.binaries is something that is intended to participate in a classpath that can be used by actions that are dependent on this component. The foo.jar is intended to be a packed bundle, suitable for an update site.
Consider another simple example (again, not a complete CSPEC):
<cs:artifacts>
    <cs:private name="bundle.classpath">
        <cs:path path="resolver.jar"/>
        <cs:path path="xercesImpl.jar"/>
        <cs:path path="xml-apis.jar"/>
    </cs:private>
</cs:artifacts>
<cs:groups>
    <cs:public name="java.binaries">
        <cs:attribute name="bundle.classpath"/>
    </cs:public>
</cs:groups>
Here, bundle.classpath and java.binaries are alternative names for the collection of files resolver.jar, xercesImpl.jar, and xml-apis.jar. However only java.binaries is visible as an attribute outside the CSPEC: the attribute bundle.classpath is presumably used privately elsewhere within the (rest of the) CSPEC.
A full example of a CSPEC
Here is an example of a full CSPEC for a component called org.demo.worlds. It describes the following:
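The CSPEC itself can be sketched as follows. This is a reconstruction that matches the description below; the attribute syntax assumes the standard Buckminster CSpec schema:

<?xml version="1.0" encoding="UTF-8"?>
<cs:cspec xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0" name="org.demo.worlds">
    <cs:artifacts>
        <cs:public name="source" path="src/"/>
    </cs:artifacts>
    <cs:actions>
        <cs:public name="java.binary.archives" actor="ant">
            <cs:actorProperties>
                <cs:property key="buildFile" value="make/build.xml"/>
            </cs:actorProperties>
            <cs:prerequisites>
                <cs:attribute name="eclipse.build" alias="input"/>
            </cs:prerequisites>
            <cs:products alias="output" base="bin/jars/">
                <cs:path path="worlds.jar"/>
            </cs:products>
        </cs:public>
        <cs:private name="eclipse.build">
            <cs:prerequisites>
                <cs:attribute name="source"/>
            </cs:prerequisites>
            <cs:products base="bin/classes/"/>
        </cs:private>
    </cs:actions>
    <cs:groups>
        <cs:public name="java.binaries">
            <cs:attribute name="eclipse.build"/>
        </cs:public>
    </cs:groups>
</cs:cspec>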
- The first two lines declare that this is XML; it follows the CSPEC format, and it describes a component called org.demo.worlds
- The next section lists the component attributes that are considered artifacts in this CSPEC. In this case there is only one declaration, a public attribute (available both within this CSPEC and to other CSPECs) called source, that is mapped to a folder called org.demo.worlds/src/
- The next section lists the component attributes that are implemented as actions:
- The first action is java.binary.archives, and is publicly available to other CSPECs from this CSPEC.
- It is implemented using an ant actor: when ant is invoked by Buckminster as part of this particular action, it is passed make/build.xml as the value of the standard buildFile parameter to ant.
- The action has one prerequisite - the local (local to this CSPEC) action eclipse.build (see the next action in this CSPEC) which must be completed before this action can execute. This prerequisite also has an alias name input which can (only) be used within ant to refer to the same attribute eclipse.build.
- The result of the action is a product in the file base bin/jars/worlds.jar; ant can also reference this as output.
- The second action is eclipse.build, and is private (local) to this particular CSPEC. Accessing this (action) attribute does not invoke an ant action but instead causes the Eclipse build system to run:
- It has one prerequisite - the local (local to this CSPEC) attribute source/ must exist: from above, this tests that the folder src/ exists.
- The result of the action is a directory called bin/classes/.
- Finally, there is a publicly available (group) attribute called java.binaries, which in fact is implemented by the single local action called eclipse.build described above: the value of java.binaries is whatever eclipse.build generates.
In summary: this CSPEC, for a component called org.demo.worlds has three externally visible attributes:
- source, which in fact is the contents of the folder org.demo.worlds/src/;
- java.binary.archives, which in fact is the file org.demo.worlds/bin/jars/worlds.jar, but only available once generated by an ant actor, and in turn only once the Eclipse build system has been called as a local action on this component;
- java.binaries, which in fact is the directory org.demo.worlds/bin/classes/, but only available once the Eclipse build system has been called as a local action on this component.
A complex example of a CSPEC
Here is an example of a Buckminster generated CSPEC for a component called org.eclipse.buckminster.core.feature:
<?xml version="1.0" encoding="UTF-8"?> <cs:cspec xmlns: <cs:dependencies> <cs:dependency <cs:dependency <cs:dependency <cs:dependency <cs:dependency <cs:dependency <cs:dependency </cs:dependencies> <cs:artifacts> <cs:private <cs:private <cs:path <cs:path <cs:path <cs:path </cs:private> <cs:private </cs:artifacts> <cs:actions> <cs:public <cs:actorProperties> <cs:property <cs:property </cs:actorProperties> <cs:properties> <cs:property </cs:properties> <cs:prerequisites> <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute </cs:prerequisites> </cs:public> <cs:public <cs:actorProperties> <cs:property <cs:property </cs:actorProperties> <cs:prerequisites> <cs:attribute <cs:attribute <cs:attribute <cs:attribute </cs:prerequisites> <cs:products <cs:path </cs:products> </cs:public> <cs:private <cs:actorProperties> <cs:property <cs:property </cs:actorProperties> <cs:prerequisites <cs:attribute </cs:prerequisites> <cs:products </cs:private> <cs:private <cs:actorProperties> <cs:property <cs:property </cs:actorProperties> <cs:prerequisites <cs:attribute </cs:prerequisites> <cs:products </cs:private> <cs:private <cs:actorProperties> <cs:property <cs:property </cs:actorProperties> <cs:prerequisites <cs:attribute <cs:attribute </cs:prerequisites> <cs:products </cs:private> </cs:actions> <cs:groups> <cs:public <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute <cs:attribute </cs:public> <cs:public <cs:attribute <cs:attribute </cs:public> <cs:public <cs:attribute <cs:attribute </cs:public> <cs:public <cs:public </cs:groups> </cs:cspec>
OK. Take a deep breath and get a coffee (I did, while I wrote this). The result of all this is a CSPEC with the following attributes (all file and directory names are relative to the component org.eclipse.buckminster.core.feature):
- A private attribute build.properties located in the directory build.properties/
- A private attribute jar.contents which is the collection of files feature.properties, epl-v10.html, license.html and COPYRIGHT
- A private attribute raw.manifest which is the file feature.xml
- A public attribute buckminster.clean which is implemented as an ant actor, setting its buildFileId as buckminster.pdetasks and targets as delete.dir; in addition ${buckminster.output} is passed as the defined value (-Dprop=value) of dir.to.delete to ant. There are however a number of prerequisites for this action: the attribute buckminster.clean is accessed on each of the other components org.eclipse.buckminster.ui, org.eclipse.buckminster.cmdline, org.eclipse.buckminster.installer, org.eclipse.buckminster.runtime, org.eclipse.buckminster.ant, org.eclipse.buckminster.core and org.eclipse.buckminster.sax. Note that there is no result or output from this action: there is no products field.
- A public attribute manifest, also implemented as an ant actor, setting its buildFileId as buckminster.pdetasks and its targets as expand.feature.version. The action has prerequisites - all of which are other attributes appearing elsewhere in the CSPEC (ie in the big list you are currently reading): raw.manifest, also available as the name manifest to ant; bundle.jars, and as bundles to ant; feature.references, and as features to ant; and build.properties, as properties to ant. It yields a result ${buckminster.output}/temp/feature.xml, also known as action.output within ant.
- A private attribute copy.features, implemented in ant, setting its buildFileId as buckminster.pdetasks and targets as copy.group. It has one prerequisite attribute (appearing elsewhere in the CSPEC) called feature.jars, known as action.requirements to ant. Its value is the contents of ${buckminster.output}/site/features/, also known as action.output to ant. However, the action has an upToDatePolicy setting, an optimisation hint which may reduce the work Buckminster needs to do. The MAPPER setting compares the product with each corresponding prerequisite: if there is a match and the product is younger than the prerequisite, then the action can be skipped, since the results of the action were previously produced and are already in place.
- A private attribute copy.plugins, similar in style to the copy.features attribute above.
- A private attribute feature.jar, somewhat similar in style to both copy.features and copy.plugins above. However, the upToDatePolicy optimisation hint is set to COUNT with fileCount=1, meaning that the action can be skipped as unnecessary if at least one file in the products ${buckminster.output}/jar/ is younger than the youngest artifact found in the prerequisites manifest and jar.contents.
- A public attribute bundle.jars, which is the collection of the bundle.jar attributes of each of the other components org.eclipse.buckminster.ui, org.eclipse.buckminster.cmdline, org.eclipse.buckminster.installer, org.eclipse.buckminster.runtime, org.eclipse.buckminster.ant, org.eclipse.buckminster.core and org.eclipse.buckminster.sax.
- A public attribute feature.exports, which is both of the (private) attributes copy.features and copy.plugins as above, but with the directory ${buckminster.output}/site/ overriding the base directory settings in those two attributes.
- A public attribute feature.jars, which is the pair of attributes feature.jar and feature.references.
- Two public attributes feature.references and product.root.files, which in fact are both dummy attributes having no value: they are placeholders, since this is a generated CSPEC.
My coffee is cold, time for another.
Extended Component Specifications (CSPEXs)
In some cases, not all information that needs to be described can be declared in ordinary plug-in meta-data. In these cases, Buckminster makes it possible to extend the automatically generated CSPEC by manually providing a CSPEC Extension ("CSPEX") file buckminster.cspex, which must be placed into the same directory/folder as the component itself, and which follows the general CSPEC format.
In general, a CSPEX enables you to manually extend, and add to, automatically generated CSPEC files with further metadata that is not readily deducible by Buckminster.
Here is an example, from the Hello World example, and reproduced below for convenience:
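The CSPEX can be sketched as follows. This is a reconstruction that matches the description below; the element and attribute names assume the standard Buckminster CSpec schema:

<?xml version="1.0" encoding="UTF-8"?>
<cs:cspecExtension xmlns:cs="http://www.eclipse.org/buckminster/CSpec-1.0">
    <cs:dependencies>
        <cs:dependency name="org.demo.worlds"/>
        <cs:dependency name="se.tada.util.sax"/>
    </cs:dependencies>
    <cs:actions>
        <cs:private name="buckminster.prebind" actor="ant">
            <cs:actorProperties>
                <cs:property key="buildFile" value="make/prebind.xml"/>
            </cs:actorProperties>
            <cs:prerequisites>
                <cs:attribute component="se.tada.util.sax" name="java.binary.archives" alias="tada-sax.jar"/>
                <cs:attribute component="org.demo.worlds" name="java.binary.archives" alias="worlds.jar"/>
            </cs:prerequisites>
            <cs:products alias="output" base="jars/"/>
        </cs:private>
    </cs:actions>
</cs:cspecExtension>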
- The first lines are the XML declaration and the declarations of namespaces. Note the use of a cspecExtension element as the topmost element.
- Next, two dependencies are listed for the current component, on both components org.demo.worlds and se.tada.util.sax. Buckminster needs to find both before the current component can be processed.
- The actions section declares a private (local to this CSPEX) action buckminster.prebind as an ant action. As for CSPECs, the name buckminster.prebind has specific implication for Buckminster, and results in an action initiated once a component is downloaded but before it is bound into the current workspace. The ant actor is passed make/prebind.xml as the value of its buildFile parameter.
- There are two pre-requisites to the action.
- The first is on attribute java.binary.archives in the component se.tada.util.sax, and is known within the ant script as tada-sax.jar.
- The second is on the attribute java.binary.archives in the component org.demo.worlds, and is also known within the ant script as worlds.jar.
- The products section declares that the result is known as output to the ant script, and is produced in the folder jars/.
If none of those things convince you that you really shouldn't use absolute positioning, I'll let you in on the secret: pass null to setLayout(), that is, call

setLayout(null);

Then move and resize each of your components to their desired locations and sizes in the paint() method using setLocation() and setSize():
public void setLocation(int x, int y)
public void setSize(int width, int height)
where x and y are the coordinates of the upper left-hand corner of the bounding box of your component, and width and height are the width and height in pixels of the bounding box of your component.
This applet puts a button precisely 30 pixels wide by 40 pixels high at the point (25, 50):
import java.applet.*;
import java.awt.*;

public class ManualLayout extends Applet {

  private boolean laidOut = false;
  private Button myButton;

  public void init() {
    this.setLayout(null);
    this.myButton = new Button("OK");
    this.add(this.myButton);
  }

  public void paint(Graphics g) {
    if (!this.laidOut) {
      this.myButton.setLocation(25, 50);
      this.myButton.setSize(30, 40);
      this.laidOut = true;
    }
  }
}
Hi it's me again.
Well I've done this program where it converts roman numerals to numbers but I've used it with if else.
But I'm supposed to use switch.

Code:
/*
wo0dy at home doing assignment.
March 24th 2006 Friday 9am
D-48-A Cyberia

(5) Write a program that converts a C++ string representing a number in Roman numeral form to decimal form. The symbols used in the Roman numeral system and their equivalents are given below:
I 1
V 5
X 10
L 50
C 100
D 500
M 1000
For example, the following are Roman numbers: XII (12), CII (102), XL (40). The rules for converting a Roman number to a decimal number are as follows:
a. Set the value of the decimal number to zero.
b. Scan the string containing the Roman characters from left to right. If the character is not one of the symbols in the numeral symbol set, the program must print an error message and terminate. Otherwise, continue with the following steps. (Note that there is no equivalent to zero in Roman numerals.)
If the next character is the last character, add the value of the current character to the decimal value.
If the value of the current character is greater than or equal to the value of the next character, add the value of the current character to the decimal value.
If the value of the current character is less than the next character, subtract the value of the current character from the decimal value.
Solve this problem using a switch statement. Some sample outputs are given below.
*/

// It works only for letters that correspond to Roman numerals. I think that's what Sir said it should do.

#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char roman_num[10];
    cout << "Enter a Roman Numeral: ";
    cin >> roman_num;

    int len = (strlen(roman_num) - 1); // I put -1 cos the lecturer said something about
    int counter = 0;                   // the program uses an extra space for memory or something
    int number = 0;
    int sum = 0;
    int b4sum = 0;

    for (counter = 0; counter <= len; counter++)
    {
        if (roman_num[counter] == 'I') number = 1;
        else if (roman_num[counter] == 'V') number = 5;
        else if (roman_num[counter] == 'X') number = 10;
        else if (roman_num[counter] == 'L') number = 50;
        else if (roman_num[counter] == 'C') number = 100;
        else if (roman_num[counter] == 'D') number = 500;
        else if (roman_num[counter] == 'M') number = 1000;
        else
        {
            cout << "\nOne or more of the inputs entered is invalid. \a" << endl;
        }

        if (b4sum > number)
            sum = b4sum - number;
        else
            sum = sum + number;
        b4sum = number;
    }

    cout << "\nThe Roman Numeral is: " << sum << endl;
    system("PAUSE");
    return 0;
}
I've tried it and read my book but it kept saying that switch quantity is not an integer. The examples from the book were also given using integers.
Can anyone explain the concept to me so I can understand how to apply this?
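For what it's worth, a switch will accept a single char, since char is an integral type; the error appears when you switch on the whole array instead of one character. A minimal sketch of the mapping step (the helper name roman_value is hypothetical, not from the original post):

```cpp
#include <cassert>

// Maps one Roman numeral character to its value; -1 flags an invalid symbol.
int roman_value(char c) {
    switch (c) {          // a char promotes to int, so it is a legal switch quantity
        case 'I': return 1;
        case 'V': return 5;
        case 'X': return 10;
        case 'L': return 50;
        case 'C': return 100;
        case 'D': return 500;
        case 'M': return 1000;
        default:  return -1; // not a Roman numeral symbol
    }
}
```

Calling roman_value(roman_num[counter]) inside the loop replaces the if/else chain; switching on roman_num itself fails because an array is not an integral type, which is exactly what "switch quantity is not an integer" means.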
Script to import any file in Pythonista from any app
To send any file to Pythonista, select share and "Run Pythonista Script".
Then use the script below in Pythonista (you need to add it as an extension) to save any file from the "Run Pythonista Script" share method. It works with any file type: .py, .zip, etc.
All imported files are stored in a directory "inbox" in Pythonista.
# coding: utf-8
import appex
import clipboard
import console
import shutil
import os


def getuniquename(filename, ext):
    root, extension = os.path.splitext(filename)
    if ext != '':
        extension = ext
    filename = root + extension
    filenum = 1
    while os.path.isfile(filename):
        filename = root + '_' + str(filenum) + extension
        filenum += 1
    return filename


def main():
    file = appex.get_file_path()
    dest_path = 'inbox'
    if not os.path.isdir(dest_path):
        os.mkdir(dest_path)
    filename = os.path.join(dest_path, os.path.basename(file))
    filename = getuniquename(filename, '')
    print('Input path: %s' % file)
    shutil.copy(file, filename)
    if not os.path.isfile(filename):
        print('Error file %s not found !' % os.path.basename(filename))
    else:
        print(' > as %s' % os.path.basename(filename))


if __name__ == '__main__':
    main()
- TutorialDoctor
Wow. This script is solid. Does what it says. And it is very fast.
Nice script. But this does not work if the contents have Unicode characters. For example try the code block in the following discussion.
This can be fixed by adding the encoding parameter. Replace the following lines
filenum += 1
with open(filename, 'w') as f:
    f.write(text)
print('Done!')
by
filenum += 1
with open(filename, 'w', encoding='utf-8') as f:
    f.write(text)
print('Done!')
Using this with the Git2Go app.
I can move files from Pythonista to Git2Go and then to GitHub and back again.
This works great.
Then I wanted a way to update a file in Pythonista from Git2Go.
This script works great, ... but.
Is there a way to actually select where the file will be saved, instead of just dropping it in an "inbox" folder and then having to move it to the right location?
Thanks
Got it working:
Thanks a lot for sharing this!
I'm still very new to Pythonista and importing a lot of scripts and ui files from the web to learn - your script is a life saver !
@chebfarid
If you want to save it in a folder you choose:
import File_Picker  # from @omz

dest_dir = File_Picker.file_picker_dialog('Select the folder where you want to save the file',
                                          multiple=False,
                                          select_dirs=True,
                                          file_pattern=r'^.*\.py$')
Sorry if this should be obvious to everyone, but how do I save this script as an extension? I see how to create text, image, and URL extensions, but do I have to know in advance the type of file to be shared/saved?
@ihf You can use any script in the app extension. Either edit the shortcuts in the settings (within the main app) or use the Edit button in the extension itself.
The text/image/url extensions you're referring to are just templates for scripts that interact with data passed into the extension. You don't have to use them.
If I simply add the script and then, from the iOS sharing menu, select it (within Photos), I get:
Input path: None
Traceback (most recent call last):
File "/private/var/mobile/Containers/Shared/AppGroup/AA838D46-5C81-42AA-9301-F57202B8AD48/Pythonista3/Documents/.Extension/SaveFile.py", line 57, in <module>
main()
File "/private/var/mobile/Containers/Shared/AppGroup/AA838D46-5C81-42AA-9301-F57202B8AD48/Pythonista3/Documents/.Extension/SaveFile.py", line 47, in main
filename=os.path.join(dest_path, os.path.basename(file))
File "/var/containers/Bundle/Application/E7467484-E2A2-4AA3-B7E4-6621D1F13B23/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/posixpath.py", line 141, in basename
i = p.rfind(sep) + 1
AttributeError: 'NoneType' object has no attribute 'rfind'
Still don't see where my error is. Perhaps someone can enlighten me?
Further experimentation reveals that if you try to save a .jpg file (from Photos) you get the above reported error. PDFs (for one) work properly. It would be nice if this script could save any shared file.
Modifying the script to work in Photos should be relatively easy. If appex.get_file_path() returns None, you could check appex.get_image(), and save that. Roughly like this:
# ...
file = appex.get_file_path()
img = appex.get_image()
if file is not None:
    filename = os.path.join(dest_path, os.path.basename(file))
    filename = getuniquename(filename, '')
    print('Input path: %s' % file)
    shutil.copy(file, filename)
elif img is not None:
    print('Saving image')
    filename = os.path.join(dest_path, 'imported_image.jpg')
    filename = getuniquename(filename, '')
    img.save(filename)
# ...
This post is mainly focused on C#, but is also the second of my posts on using the digitalPersona U.are.U 4000B fingerprint sensor.
I left the previous post with my code throwing an exception – the sensor’s SDK is designed so that fingerprint capture is asynchronous. After telling the sensor to start capturing, the main thread isn’t blocked. Instead, when the device completes a scan, the OnComplete event fires on a separate worker thread.
However, I want to be able to enroll a fingerprint on the main thread, and make this thread wait until enrollment has finished on the worker thread before proceeding.
The C# framework provides a way of doing this with the ManualResetEvent class. This allows threads to communicate with each other – usually in the use case of blocking one thread until it receives a signal from another that it's allowed to proceed. This is perfect to meet my needs in this program.
I find it useful to think of the ManualResetEvent as a logical opposite to the BackgroundWorker class which I’ve blogged about before. I see ManualResetEvent as a way to block an asynchronous process by making it a synchronous one – and the BackgroundWorker class is a way to unblock a synchronous process by making it an asynchronous one.
Even though the ManualResetEvent class has fewer use cases than the BackgroundWorker, I still view it as important to understand, and have it available in your programming toolbox.
It’s pretty simple to use the ManualResetEvent class:
- Instantiate the ManualResetEvent class;
- Start the main thread;
- When an asynchronous worker thread is triggered, call the ManualResetEvent object’s WaitOne() method to block the main thread;
- When the worker thread has completed, call the ManualResetEvent object’s Set() method to release the main thread and allow it to continue.
I modified my code to use this class, and I’ve pasted this below with the new code highlighted in bold. As you can see, I’ve only added three lines of code.
public class DigitalPersonaFingerPrintScanner : DPFP.Capture.EventHandler, IFingerprintScanner
{
    private ManualResetEvent _mainThread = new ManualResetEvent(false);
    private Capture _capture;
    private Sample _sample;

    public void Enroll()
    {
        _capture = new Capture();
        _capture.EventHandler = this;
        _capture.StartCapture();
        // Block the main thread until the worker thread signals completion.
        _mainThread.WaitOne();
    }

    public void OnComplete(object capture, string readerSerialNumber, Sample sample)
    {
        _sample = sample;
        // Release the main thread blocked in Enroll().
        _mainThread.Set();
    }

    // ... the remaining DPFP.Capture.EventHandler members (empty implementations)
    // and the rest of the class are omitted here.
}
Now I’m able to successfully run main method of my program synchronously using the code below, and enroll a fingerprint fully before generating the bitmap.
using (var scanner = new DigitalPersonaFingerPrintScanner())
{
    scanner.Enroll();
    scanner.CreateBitmapFile(@"C:\Users\jeremy\Desktop\fingerprint.bmp");
}
I am running a Chef spec unit test using the command below. The relevant code is shown further below. I expected some_test_data to be used in the unit test instead of method_name actually getting called. But what is happening is that the stub is not used. Instead method_name does actually get called, which in this case is not appropriate in the unit test. What am I mis-understanding or doing wrong here? Thank you.
rspec spec/unit/mytest_spec.rb
# Code from Chef Spec mytest_spec.rb unit test
allow(ClassName).to receive(:method_name).and_return(some_test_data)
# Code unit test is testing
my_variable = method_name(node)
# Method that gets called above
def self.method_name(node)
# Do something
end
Actually you're passing a parameter to
method_name but in the stubbed method, you're not stubbing out the parameters. That is why the stubbed method doesn't get called when you run the tests.
It should be
allow(ClassName).to receive(:method_name).with('argument').and_return(some_test_data)
[I am not sure about the following because you didn't post the actual code. You can ignore it if my assumption is wrong]
You're testing a class method, but you're not calling it on the class. ie.
Shouldn't my_variable = method_name(node) be my_variable = ClassName.method_name(node)?
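The effect of a stub can be mimicked in plain Ruby, without RSpec, by temporarily swapping the class method for a fake. This is a toy sketch (not how RSpec implements it) that shows why the call has to go through the class for the stub to be hit:

```ruby
class ClassName
  def self.method_name(node)
    "real result for #{node}"
  end
end

# Save the real method, then install a fake in its place, as a stub would.
original = ClassName.method(:method_name)
ClassName.define_singleton_method(:method_name) { |node| "some_test_data" }

stubbed = ClassName.method_name("node-1")   # hits the fake

# Restore the real implementation afterwards.
ClassName.define_singleton_method(:method_name, original)
restored = ClassName.method_name("node-1")  # hits the real method again
```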
Introduction
The objective of this post is to explain how to parse and use JSON data from a POST request in Flask, a micro web framework for Python.
You can check an introduction to Flask here. More posts on Flask are listed in the “Related posts” section.
We start by declaring a route that only answers to HTTP POST requests. Any GET request to the same URL will be ignored.
To do so, we specify the POST keyword to the methods argument of the route decorator. You can read more on how to control the HTTP allowed methods in this previous post. You can check the code bellow, which only has the route decorator and the declaration of the handling function.
@app.route('/postjson', methods = ['POST'])
def postJsonHandler():
Note that, in the previous code, we specified that the server will be listening on the /postjson URL.
First, to confirm if the content is of type JSON, we check the is_json attribute of the request object. This attribute indicates if this request is JSON or not [1].
To get the posted JSON data, we just need to call the get_json method on the request object, which parses the incoming JSON request data and returns it [2] as a Python dictionary.
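Under the hood, get_json essentially decodes the request body with Python's json module. A standalone sketch of the same parsing, with no Flask needed (the payload mirrors the one used later in this post):

```python
import json

# Roughly what request.get_json() returns for our test payload:
# the JSON text of the request body decoded into a Python dictionary.
body = '{"device": "TemperatureSensor", "value": "20", "timestamp": "25/01/2017 10:10:05"}'
content = json.loads(body)

print(content['device'])   # TemperatureSensor
print(content['value'])    # 20
```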
You can check the full code below, which already includes the call to the run method on the app object to start the server. Note that we specified port 8090 for our server to be listening on.
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/postjson', methods = ['POST'])
def postJsonHandler():
    print (request.is_json)
    content = request.get_json()
    print (content)
    return 'JSON posted'

app.run(host='0.0.0.0', port=8090)
Testing the code
We can test this code by using Postman, which allows us to do POST requests very easily. After installing this Chrome extension, open it and, on the request tab, select the POST method from the dropdown.
On the URL, use 127.0.0.1:8090/postjson. So, we are using our machine’s loopback IP and the 8090 port we specified in the Python code. Then, click in the Body separator, so we can specify our JSON content.
All the relevant areas mentioned before are highlighted in figure 1.
Figure 1 – Preparing the POST request in POSTMAN.
On the Body separator, choose the radio button raw. A text editor should become available. On the dropdown next to the radio buttons, choose JSON (application/json) and then input valid JSON on the text editor, as indicated in figure 2.
Figure 2 – Defining the JSON data to post.
You can use other JSON content, but below is the one used in this example.
{
    "device": "TemperatureSensor",
    "value": "20",
    "timestamp": "25/01/2017 10:10:05"
}
Finally, hit send and the POST request should be sent to the Flask server. You should get an output similar to the one in figure 3 if you are running the code on IDLE, the Python default IDE.
Figure 3 – Output of the test program in IDLE.
The “True” is the value of the is_json attribute, which indicates that the content was JSON. The string corresponds to our JSON content, sent from POSTMAN.
Note that we are printing the whole content from the dictionary that corresponds to the parsed JSON content. You can print each attribute of the JSON object by accessing the dictionary by the name of the attribute, as shown bellow.
print (content['device'])
print (content['value'])
print (content['timestamp'])
Related posts
- Python anywhere: Forcing HTTPS on Flask app
- Python anywhere: Deploying a Flask server on the cloud
- LinkIt Smart Duo: Running a Flask server
- Flask: Hello World
- Flask: Controlling HTTP methods allowed
References
[1]
[2]
Technical details
- Python version:3.6.0
- Flask library: 0.12
Testing Realm Migrations.
Aniket Kadam, originally published at dev.to
Why test?
Realm migrations are often left for the end. They can be stressful, and if you're doing a manual test, it can take really long to set up or even create the conditions to test.
Let’s see how we can take it down from 20 minutes of manual labor, to 20 seconds of automated test.
How?
The idea is simple,
- Store the older copy of the db that you will migrate.
- In your instrumentation test, copy the saved db to a temp location.
- Set your configuration to the new schema number and get an instance. (this is where it’s most likely to crash on the spot)
- Verify that you got the right structure.
Let’s begin with an example.
Full Source Code available here
I have this class. Let’s call it version 1.
public class Dog extends RealmObject {
    String name;
}
I want to keep track of my dogs’ age, so I want to change it to this: Which we will make version 2.
public class Dog extends RealmObject {
    String name;
    int age;
}
This is what my configuration looked like for v1.
Realm.init(this);
RealmConfiguration realmConfiguration = new RealmConfiguration.Builder()
        .schemaVersion(1)
        .build();
Realm.setDefaultConfiguration(realmConfiguration);
Realm realm = Realm.getDefaultInstance();
realm.close();
The only two lines I will change here for v2 will be in the configuration:
RealmConfiguration realmConfiguration = new RealmConfiguration.Builder()
        .schemaVersion(2)
        .migration(new MigrationExample())
        .build();
Straightforward, I update the schema version and provide a migration class. This looks like:
class MigrationExample implements RealmMigration {

    @Override
    public void migrate(DynamicRealm realm, long oldVersion, long newVersion) {
        if (oldVersion < 2) {
            updateToVersion2(realm.getSchema());
        }
    }

    private void updateToVersion2(RealmSchema schema) {
        RealmObjectSchema dogSchema = schema.create(Dog.class.getSimpleName()); // Get the schema of the class to modify.
        dogSchema.addField("age", int.class); // Add the new field.
    }
}
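The `if (oldVersion < 2)` guard is the key pattern: as schema versions accumulate, each upgrade step gets its own guard, and an old database falls through every step it still needs. A stripped-down, Realm-free sketch of that dispatch logic (class and step names are illustrative, not part of the Realm API):

```java
public class MigrationSketch {

    // Mirrors the guard structure of RealmMigration.migrate():
    // every step whose version is still ahead of oldVersion runs, in order.
    static String stepsFor(long oldVersion) {
        StringBuilder applied = new StringBuilder();
        if (oldVersion < 2) applied.append("v2 ");
        if (oldVersion < 3) applied.append("v3 ");
        return applied.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(stepsFor(1)); // a v1 database runs both steps: v2 v3
        System.out.println(stepsFor(2)); // a v2 database runs only the v3 step
    }
}
```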
There’s a problem here i’ve left in, but we’ll discover what it is during the test :) Can you see it already?
If you’re in a rush, here are the finished tests; be sure to grab the support file from Realm below it.
Understand
Now, that your code is tested and deployed to production you can relax and understand how this works.
Let’s begin with setting up for the test.
- Store the older copy of the db that you will migrate.
For this, you’re best off running your app on an emulator.
Why?
Answer: Emulators provide root access by default and it’s the easiest way to get the older copy of the realm. The older copy of realm is by default in the internal storage of your app in the files folder.
Install and run your older version of the app, on the emulator. This will likely initialize your realm and create the realm file in internal storage. Now you will be able to pull it.
Easiest is with a terminal command with adb.
adb pull /data/data/com.aniket.realmmigrationtestexample/files/default.realm ~/RealmMigrationTest/app/src/androidTest/assets/realmdb_1.realm
With this two things are achieved:
- The realm file for your app is pulled from internal phone (emulator) storage to your computer.
- It is stored in the assets of the androidTest folder in your app’s project, with a new name! Now it is called realmdb_1.realm
Great, we’re done with step 1!
Note: If you needed to do more complex things, such as create the entire state of your app as it would be after a lot of user interaction, it’s easiest to just do that once manually and then pull it. Be sure all the data you wanted is actually written to disk!
Setting up the test
- In your instrumentation test, copy the saved db to a temp location.
This line jumps a few steps. How do we copy the file we have in assets to a new realm file that we can use?
Realm already has written a class that helps with this and a lot of support activities required for it. This is the TestRealmConfigurationFactory, which can be taken from their repo from here
Why copy their file? Surely there’s a better way?
Answer: This is internal to their project and just isn’t exported elsewhere in a manner that I’m aware of. If you do know of a better way to get this into your project without copying it, let me know in the comments.
Ok, now that you have this file, let’s create a bog standard instrumentation test to use it.
@RunWith(AndroidJUnit4.class)
public class MigrationExampleTest {

    @Rule
    public final TestRealmConfigurationFactory configFactory =
            new TestRealmConfigurationFactory();

    @Before
    public void setUp() throws Exception {
    }

    @Test
    public void migrate() throws Exception {
    }
}
Your migration test should look like this to start with.
Note, if you don’t have the TestRealmConfigurationFactory copied to your project yet, it will show up as an error.
If you missed it, you’ll need to pick this up from the earlier mentioned location. Be sure to change the package and keep the copyright text!
Setup Complete!
Writing the tests
Now let’s begin with the tests themselves. Add this to the tests.
@Test(expected = RealmMigrationNeededException.class)
public void migrate_migrationNeededIsThrown() throws Exception {
    String REALM_NAME = "realmdb_1.realm";
    RealmConfiguration realmConfig = new RealmConfiguration.Builder()
            .name(REALM_NAME)     // Keep using the temp realm.
            .schemaVersion(1)     // Original realm version.
            .build();             // Get a configuration instance.

    // Copy the stored version 1 realm file from assets to a NEW location.
    configFactory.copyRealmFromAssets(context, REALM_NAME, realmConfig);

    Realm.getInstance(realmConfig);
}
The REALM_NAME, should match the name of the file you pulled from adb. Recall that we renamed it from its default name to realmdb_1.realm and stored it in the assets folder, for androidTest. Not the assets folder under main!
Next we set the schema version as it was originally, and get a realm config.
Then with the configFactory, copy the realm in the assets, to a new location separate from the default realm. This just prevents us from deleting the default realm (since the TestRealmConfigFactory deletes the namedRealm before it’s copied)
Then we try to get an instance of it. Which should fail since the Dog object has already had the age field added to it.
This verifies that a MigrationNeededException is actually thrown.
Test Part 2:
In this test, we’re going actually do the migration!
Get the realm configuration again, this time incrementing the schema version and adding your main Migration class.
@Test
public void migrate_migrationSuceeds() throws Exception {
    // Same name as the file for the old realm which was copied to assets.
    String REALM_NAME = "realmdb_1.realm";
    RealmConfiguration realmConfig = new RealmConfiguration.Builder()
            .name(REALM_NAME)
            .schemaVersion(2)                  // NEW realm version.
            .migration(new MigrationExample())
            .build();                          // Get a configuration instance.

    // Copy the stored version 1 realm file from assets to a NEW location.
    // Note: the old file is always deleted for you by copyRealmFromAssets.
    configFactory.copyRealmFromAssets(context, REALM_NAME, realmConfig);

    Realm realm = Realm.getInstance(realmConfig);

    assertTrue("The age field was not added.",
            realm.getSchema().get(Dog.class.getSimpleName()).hasField("age"));
    assertEquals(realm.getSchema().get(Dog.class.getSimpleName()).getFieldType("age"),
            RealmFieldType.INTEGER);

    realm.close();
}
Once this is done, get the realm instance again. This time the migration will run, and your test should fail!
Here we finally run into the error I mentioned I’d put in. I originally created the Dog.class all over again before adding the field.
So in the MigrationExample.java change ‘create’:
private void updateToVersion2(RealmSchema schema) {
    RealmObjectSchema dogSchema = schema.create(Dog.class.getSimpleName()); // Get the schema of the class to modify.
    dogSchema.addField("age", int.class); // Add the new field.
}
To ‘get’:
private void updateToVersion2(RealmSchema schema) {
    RealmObjectSchema dogSchema = schema.get(Dog.class.getSimpleName()); // Get the schema of the class to modify.
    dogSchema.addField("age", int.class); // Add the new field.
}
Run this again and your test will pass!
Congratulations, you can now test Realm Migrations!
Conclusion:
Testing realm migrations with instrumentation tests is pretty easy and can help you track down tricky bugs. For instance, you can now run realm queries and other operations on the final realm you receive at the end of this test. You can verify for instance, that all your existing dogs, had a default age assigned to them. You could ensure that some data transformation you were doing, completed successfully.
As always, you can set debug pointers and step through the migrations to fix issues you encounter.
I hope this stripped down example will help you to more thoroughly test your application and bring a difficult to set test, into the realm of easy testability.
See more examples of how to use this at:
Until next time, remember, if you’re doing any manual repetitive work, there’s always a better way!
You can reach me on twitter @aniketsmk. I'm always happy to get comments and suggestions.
Notes:
If you find emulators difficult to run, or your test db can only be created on a real device, you can always use
File externalStorageFile = new File(getExternalFilesDir(null), "copiedInternalRealmToExternal.realm");
realm.writeCopyTo(externalStorageFile);
and pick up the file from there. This will copy your internal realm to an external location, where even real devices wouldn’t obstruct you from picking it up.
This was edited on 7th August, 2017 to correct one of the tests. | https://dev.to/aniketsmk/testing-realm-migrations-16g | CC-MAIN-2019-51 | en | refinedweb |
If trying to compile while debugging in progress, ask to stop debugging (previously it wasn't allowed) - needs testing on windows
(1) When trying to compile while debugging, the Information box asks, "Do you want to stop the debugger now?" There are three options: Yes, No and Cancel. What is the difference between No and Cancel?
Quote from: rhf on February 07, 2008, 03:07:22 am:
(1) When trying to compile while debugging, the Information box asks, "Do you want to stop the debugger now?" There are three options: Yes, No and Cancel. What is the difference between No and Cancel?

There is no difference. Maybe I should have used an OK/Cancel combination to avoid confusion...
- Mac OS X: (10.4 and 10.5 Universal Binary, using bundled wxWidgets 2.8.7)
afb, my C::B builds on mac are unusable, because of two bugs, wxDynamicLibrary not working on dylibs, and combos in toolbars not appearing. Both have been confirmed on the wx bug tracker - But your builds are free of them? :? I'm a bit amazed at your magical bug-fixing abilities :lol:
--- include/wx/mac/carbon/chkconf.h	2007-05-14 11:09:36.000000000 +0200
+++ include/wx/mac/carbon/chkconf.h	2007-05-21 10:59:19.000000000 +0200
@@ -55,7 +55,7 @@
  */
 #ifndef wxMAC_USE_NATIVE_TOOLBAR
-    #define wxMAC_USE_NATIVE_TOOLBAR 1
+    #define wxMAC_USE_NATIVE_TOOLBAR 0
 #endif
 #endif
Anyway, good job, as usual
Someday, the C::B website should be updated; it says only Linux and Windows are supported. (I'm not sure how good mac support is ATM, but I think it's usable, apart from some wizards perhaps.)
Quote: Anyway, good job, as usual

Thank you! I haven't been able to do much development, but the monthly builds have been "working" OK.
Nice one! :-) I wish I could try MAC-OS someday. Do you see any chance of running it in a VM somehow? I would be willing to buy MAC-OS, but not a MAC - this is just by far too expensive (even at eBay) for me. :-(
import "github.com/lib/pq/hstore"
type Hstore struct {
	Map map[string]sql.NullString
}
Hstore is a wrapper for transferring Hstore values back and forth easily.
Scan implements the Scanner interface.
Note h.Map is reallocated before the scan to clear existing values. If the hstore column's database value is NULL, then h.Map is set to nil instead.
Value implements the driver Valuer interface. Note if h.Map is nil, the database column value will be set to NULL.
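The value produced for the database column is hstore's textual format, where each pair is rendered as "key"=>"value". A stdlib-only sketch of that encoding, as an approximation for illustration rather than the package's actual implementation (quoting of embedded special characters is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// encode renders a map in hstore's text syntax: "k"=>"v" pairs
// joined by commas. %q adds the surrounding double quotes.
func encode(m map[string]string) string {
	parts := make([]string, 0, len(m))
	for k, v := range m {
		parts = append(parts, fmt.Sprintf("%q=>%q", k, v))
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(encode(map[string]string{"color": "red"}))
}
```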
Package hstore imports 3 packages and is imported by 155 packages. Updated 2019-08-13.
Linux
2017-09-15
NAME
flock - apply or remove an advisory lock on an open file
SYNOPSIS
#include <sys/file.h>
int flock(int fd, int operation);
DESCRIPTION
Apply or remove an advisory lock on the open file specified by fd. The argument operation is one of the following:

LOCK_SH  Place a shared lock. More than one process may hold a shared lock for a given file at a given time.

LOCK_EX  Place an exclusive lock. Only one process may hold an exclusive lock for a given file at a given time.

LOCK_UN  Remove an existing lock held by this process.

A call to flock() may block if an incompatible lock is held by another process. To make a nonblocking request, include LOCK_NB (by ORing) with any of the above operations.
A shared or exclusive lock can be placed on a file regardless of the mode in which the file was opened.
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS
CONFORMING TO
NOTES
Since kernel 2.0, flock() is implemented as a system call in its own right, rather than being emulated in the GNU C library as a call to fcntl(2).
SEE ALSO
COLOPHON
This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
The topic for today is quite important when you create your type: Regular and SemiRegular types.
Here is the exact rule for today.
Regular
SemiRegular
Okay, the first question I have to answer is quite obvious. What is a Regular or a SemiRegular type? My answer is based on the proposal p0898. I assume you may already guess it. Regular and SemiRegular are concepts, which are defined by concepts.
The term Regular goes back to the father of the Standard Template Library Alexander Stepanov. He introduced the term in his book Fundamentals of Generic Programming. Here is a short excerpt. It's quite easy to remember the eight concepts used to define a regular type. There is the well-known rule of six:
X()
X(const X&)
operator=(const X&)
X(X&&)
operator=(X&&)
~X()
Just add the Swappable and EqualityComparable concepts to it. There is a more informal way to say that a type T is regular: T behaves like an int.
To get SemiRegular, you have to subtract EqualityComparable from Regular.
I hear your next question: Why should our template arguments at least be Regular or SemiRegular or do as the ints do? The STL containers and algorithms, in particular, assume Regular data types.
What is commonly used but not a Regular type? Right: a reference.
Thanks to the type-traits library the following program checks at compile time if int& is a SemiRegular type.
// semiRegular.cpp
#include <iostream>
#include <type_traits>
int main(){
std::cout << std::boolalpha << std::endl;
std::cout << "std::is_default_constructible<int&>::value: " << std::is_default_constructible<int&>::value << std::endl;
std::cout << "std::is_copy_constructible<int&>::value: " << std::is_copy_constructible<int&>::value << std::endl;
std::cout << "std::is_copy_assignable<int&>::value: " << std::is_copy_assignable<int&>::value << std::endl;
std::cout << "std::is_move_constructible<int&>::value: " << std::is_move_constructible<int&>::value << std::endl;
std::cout << "std::is_move_assignable<int&>::value: " << std::is_move_assignable<int&>::value << std::endl;
std::cout << "std::is_destructible<int&>::value: " << std::is_destructible<int&>::value << std::endl;
std::cout << std::endl;
std::cout << "std::is_swappable<int&>::value: " << std::is_swappable<int&>::value << std::endl; // requires C++17
std::cout << std::endl;
}
First of all. The function std::is_swappable requires C++17. Second here is the output.
You see, a reference such as an int& is not default constructible. The output shows that a reference is not SemiRegular and, therefore, not Regular. To check, if a type is Regular at compile time, I need a function isEqualityComparable which is not part of the type-traits library. Let's do it by myself.
In C++20 we might get the detection idiom which is part of the library fundamental TS v2. Now, it's a piece of cake to implement isEqualityComparable.
// equalityComparable.cpp
#include <experimental/type_traits> // (1)
#include <iostream>
template<typename T>
using equal_comparable_t = decltype(std::declval<T&>() == std::declval<T&>()); // (2)
template<typename T>
struct isEqualityComparable:
std::experimental::is_detected<equal_comparable_t, T>{}; // (3)
struct EqualityComparable { }; // (4)
bool operator == (EqualityComparable const&, EqualityComparable const&) { return true; }
struct NotEqualityComparable { }; // (5)
int main() {
std::cout << std::boolalpha << std::endl;
std::cout << "isEqualityComparable<EqualityComparable>::value: " <<
isEqualityComparable<EqualityComparable>::value << std::endl;
std::cout << "isEqualityComparable<NotEqualityComparable>::value: " <<
isEqualityComparable<NotEqualityComparable>::value << std::endl;
std::cout << std::endl;
}
The new feature is in the experimental namespace (1). Line (3) is the crucial one. It detects if the expression (2) is valid for the type T. The type-trait isEqualityComparable works for an EqualityComparable (4) and a NotEqualityComparable (5) type. Only EqualityCompable returns true because I overloaded the Equal-Comparison Operator.
To compile the program, you need a new C++ compiler such as GCC 8.2.
Until C++20, comparison operators are automatically generated for arithmetic types, enumerations, and with restrictions for pointers. This may change with C++20 due to the spaceship operator: <=>. With C++20, when a class defines operator <=>, automatically the operators ==, !=, <, <=, >, and >= are generated. It's even possible just to define operator <=> as defaulted such as for the type Point.
class Point {
int x;
int y;
public:
auto operator<=>(const Point&) const = default;
....
};
// compiler generates all six relational operators
In this case, the compiler will generate the implementation. The default operator<=> performs a lexicographical comparison on its bases (left-to-right, depth-first) and continues with its non-static member in declaration order. This comparison applies short-circuit evaluation. This means the evaluation of a logical expression ends if the result is known.
Now, I have all the ingredients to define Regular and SemiRegular. Here are my new type-traits.
// isRegular.cpp
#include <experimental/type_traits>
#include <iostream>
template<typename T>
using equal_comparable_t = decltype(std::declval<T&>() == std::declval<T&>());
template<typename T>
struct isEqualityComparable:
std::experimental::is_detected<equal_comparable_t, T>
{};
template<typename T>
struct isSemiRegular: std::integral_constant<bool,
std::is_default_constructible<T>::value &&
std::is_copy_constructible<T>::value &&
std::is_copy_assignable<T>::value &&
std::is_move_constructible<T>::value &&
std::is_move_assignable<T>::value &&
std::is_destructible<T>::value &&
std::is_swappable<T>::value >{};
template<typename T>
struct isRegular: std::integral_constant<bool,
isSemiRegular<T>::value &&
isEqualityComparable<T>::value >{};
int main(){
std::cout << std::boolalpha << std::endl;
std::cout << "isSemiRegular<int>::value: " << isSemiRegular<int>::value << std::endl;
std::cout << "isRegular<int>::value: " << isRegular<int>::value << std::endl;
std::cout << std::endl;
std::cout << "isSemiRegular<int&>::value: " << isSemiRegular<int&>::value << std::endl;
std::cout << "isRegular<int&>::value: " << isRegular<int&>::value << std::endl;
std::cout << std::endl;
}
The usage of the new type-traits isSemiRegular and isRegular makes the main program quite readable.
With my next post, I jump directly to the template definition.
Hi Guys!
In a normal PyQt window that isn't being called by a Shotgun app, if there is un-submitted data that I don't want the user to lose, I can check and event.ignore() in the def closeEvent(self, event): override. But that won't work here because I am not the parent app.
I was wondering if you knew a way around this?
Cheers for the help!
PS here is the work around that phil and I tried in the __init__.py that calls the app
def show_dialog(app):
    # defer imports so that the app works gracefully in batch modes
    from .rigtastic import AppDialog
    # show the dialog window using the engine's show_dialog method
    widget = app.engine.show_dialog("Rigtastic", app, AppDialog, app)
    widget.closeEvent = closeEvent
    bIcon.SetTankIcon(widget)

def closeEvent(event):
    print "CLOSE EVENT RUNNING"
    event.ignore()
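Whether or not Qt ends up dispatching to it, the workaround above relies on plain Python attribute shadowing: a function assigned to an instance attribute takes precedence over the class method, and is called without self. A PyQt-free sketch of just that mechanism (the Dialog and Event classes are stand-ins, not Qt classes):

```python
# Stand-ins for the Qt widget and close event, to show the override mechanics.
class Dialog:
    def closeEvent(self, event):
        print("default close handling")

class Event:
    def __init__(self):
        self.accepted = True
    def ignore(self):
        self.accepted = False

def closeEvent(event):          # note: no `self` -- it's an instance attribute
    print("CLOSE EVENT RUNNING")
    event.ignore()

widget = Dialog()
widget.closeEvent = closeEvent  # shadows Dialog.closeEvent for this instance

event = Event()
widget.closeEvent(event)        # calls the replacement, not the class method
print(event.accepted)           # False -- the close was "ignored"
```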
Reader conditionals are a feature added in Clojure 1.7. They are designed to allow different dialects of Clojure to share common code that is mostly platform independent, but contains some platform dependent code. If you are writing code across multiple platforms that is mostly independent you should separate .clj and .cljs files instead.
Reader conditionals are integrated into the Clojure reader, and don’t require any extra tooling beyond Clojure 1.7 or greater. To use reader conditionals, all you need is for your file to have a .cljc extension and to use Clojure 1.7 or ClojureScript 0.0-3196 or higher. Reader conditionals are expressions, and can be manipulated like ordinary Clojure expressions. For more technical details, see the reference page on the reader.
There are two types of reader conditionals, standard and splicing. The standard reader conditional behaves similarly to a traditional cond. The syntax for usage is #? and looks like:
#?(:clj     (Clojure expression)
   :cljs    (ClojureScript expression)
   :cljr    (Clojure CLR expression)
   :default (fallthrough expression))
The platform tags :clj, etc are a fixed set of tags hard-coded into each platform. The :default tag is a well-known tag to catch and provide an expression if no platform tag matches. If no tags match and :default is not provided, the reader conditional will read nothing (not nil, but as if nothing was read from the stream at all).
The syntax for a splicing reader conditional is #?@. It is used to splice lists into the containing form. So the Clojure reader would read this:
(defn build-list [] (list #?@(:clj [5 6 7 8] :cljs [1 2 3 4])))
as this:
(defn build-list [] (list 5 6 7 8))
One important thing to note is that in Clojure 1.7 a splicing conditional reader cannot be used to splice in multiple top level forms. In concrete terms, this means you can’t do this:
;; Don't do this!, will throw an error
#?@(:clj [(defn clj-fn1 [] :abc)
          (defn clj-fn2 [] :cde)])
;; CompilerException java.lang.RuntimeException: Reader conditional splicing not allowed at the top level.
Instead you’d need to wrap each function individually:
#?(:clj (defn clj-fn1 [] :abc)) #?(:clj (defn clj-fn2 [] :cde))
or use a do to wrap all of the top level functions:
Let’s go through some examples of places you might want to use these new reader conditionals.
Host interop is one of the biggest pain points solved by reader conditionals. You may have a Clojure file that is almost pure Clojure, but needs to call out to the host environment for one function. This is a classic example:
(defn str->int [s] #?(:clj (java.lang.Integer/parseInt s) :cljs (js/parseInt s)))
Namespaces are the other big pain point for sharing code between Clojure and ClojureScript. ClojureScript has different syntax for requiring macros than Clojure. To use macros that work in both Clojure and ClojureScript in a .cljc file, you’ll need reader conditionals in the namespace declaration.
Here is an example from a test in route-ccrs
(ns route-ccrs.schema.ids.part-no-test
  (:require #?(:clj [clojure.test :refer :all]
               :cljs [cljs.test :refer-macros [is]])
            #?(:cljs [cljs.test.check :refer [quick-check]])
            #?(:clj [clojure.test.check.properties :as prop]
               :cljs [cljs.test.check.properties :as prop :include-macros true])
            [schema.core :as schema :refer [check]]))
Here is another example, we want to be able to use the
rethinkdb.query namespace in Clojure and ClojureScript. However we can’t load the required
rethinkdb.net in ClojureScript as it uses Java sockets to communicate with the database. Instead we use a reader conditional so the namespace is only required when read by Clojure programs.
(ns rethinkdb.query
  (:require [clojure.walk :refer [postwalk postwalk-replace]]
            #?(:clj [rethinkdb.net :as net])))

;; snip...

#?(:clj
   (defn run [query conn]
     (let [token (get-token conn)]
       (net/send-start-query conn token (replace-vars query)))))
Exception handling is another area that benefits from reader conditionals. ClojureScript supports (catch :default) to catch everything, however you will often still want to handle host specific exceptions. Here’s an example from lemon-disc.
(defn message-container-test [f]
  (fn [mc]
    (passed?
      (let [failed* (failed mc)]
        (try
          (let [x (:data mc)]
            (if (f x) mc failed*))
          (catch #?(:clj Exception :cljs js/Object) _
            failed*))))))
The splicing reader conditional is not as widely used as the standard one. For an example on its usage, let’s look at the tests for reader conditionals in the ClojureCLR reader. What might not be obvious at first glance is that the vectors inside the splicing reader conditional are being wrapped by a surrounding vector.
(deftest reader-conditionals
  ;; snip
  (testing "splicing"
    (is (= [] [#?@(:clj [])]))
    (is (= [:a] [#?@(:clj [:a])]))
    (is (= [:a :b] [#?@(:clj [:a :b])]))
    (is (= [:a :b :c] [#?@(:clj [:a :b :c])]))
    (is (= [:a :b :c] [#?@(:clj [:a :b :c])]))))
There isn’t a clear community consensus yet around where to put .cljc files. Two options are to have a single src directory with .clj, .cljs, and .cljc files, or to have separate src/clj, src/cljc, and src/cljs directories.
Before reader conditionals were introduced, the same goal of sharing code between platforms was solved by a Leiningen plugin called cljx. cljx processes files with the .cljx extension and outputs multiple platform specific files to a generated sources directory. These were then read as normal Clojure or ClojureScript files by the Clojure reader. This worked well, but required another piece of tooling to run. cljx was deprecated on June 13 2015 in favour of reader conditionals.
Sente previously used cljx for sharing code between Clojure and ClojureScript. I’ve rewritten the main namespace to use reader conditionals. Notice that we’ve used the splicing reader conditional to splice the vector into the parent :require. Notice also that some of the requires are duplicated between :clj and :cljs.
(ns taoensso.sente (:require #?@(:clj [[clojure.string :as str] [clojure.core.async :as async] [taoensso.encore :as enc] [taoensso.timbre :as timbre] [taoensso.sente.interfaces :as interfaces]] :cljs [)])))
(ns taoensso.sente #+clj (:require [clojure.string :as str] [clojure.core.async :as async)] [taoensso.encore :as enc] [taoensso.timbre :as timbre] [taoensso.sente.interfaces :as interfaces]) #+cljs (:require )]))
At the time of writing, there is no way to use .cljc files in versions of Clojure less than 1.7, nor is there any porting mechanism to preprocess .cljc files to output .clj and .cljs files like cljx does. For that reason library maintainers may need to wait for a while until they can safely drop support for older versions of Clojure and adopt reader conditionals.
Original author: Daniel Compton | https://clojure.org/guides/reader_conditionals | CC-MAIN-2017-39 | en | refinedweb |
E.g.:

01110
00110
11001
01010

(2,3): circumference = 10
(3,5) or (4,4): circumference = 4
(4,2): circumference = 8
Could you clarify the coordinate system used and what exactly is meant by circumference and island?
Suppose that coordinates start at top-left and 1-based. Suppose an island is defined as a group of 1s horizontally or vertically (but not diagonally) adjacent. If this is the case, I can understand the (3, 5) and (4, 4) cases, but others? (4, 1) seems more like 8, and (2, 3) seems more like 10.
@SergeyTachenov You are correct in understanding the coordinate system and the values. I have edited the post with the correct values! (posted in a hurry and hence the mistakes in values)
All right, so here is how I'd approach it. Looking at these islands, the first thing that comes to mind is BFS. Maybe DFS is an option too, but BFS is easier to visualize, and therefore would probably lead to more readable code, at least for those familiar with BFS. On the other hand, DFS may be easier to implement recursively, only we risk stack overflow for really large matrices because recursion may go too deep. So I'd stick with BFS.
Using BFS, we can easily find the island, but what about circumference? First thing that comes to mind here is that we could probably count the number of adjacent zeroes, assuming that everything outside the matrix is filled with zeroes too. But we better be careful here lest we run into edge cases, so let's look at the possibilities.
...0000...
...1111...
...1111...
In this case it's pretty clear that the number of adjacent zeroes will do the trick.
...0000...
...0111...
...0111...
The corner case (literally). But it looks like it's still fine: ignoring diagonally adjacent zero, we get exactly what we need.
...0001...
...0001...
...1111...
Another corner case. Now, here things go wrong. The circumference of this part is 5, and we have only four adjacent zeroes. Which means we have to count the corner zero twice. Or we can count unique adjacent pairs 1-0, then it will be counted twice automatically because one zero is adjacent to two 1s.
...0000...
...0111...
...0000...
Here, either approach works.
...1111...
...1000...
...1111...
And here, we again have to count unique pairs. Looks like it is the correct approach after all.
So, we do BFS, and when we stumble into a zero or the matrix border, we add the pair of coordinates to a set. Then we return the set size.
struct BorderPiece {
    const int i1, j1, i2, j2;
    BorderPiece(int i1, int j1, int i2, int j2) : i1(i1), j1(j1), i2(i2), j2(j2) {}
    bool operator==(const BorderPiece &that) const {
        return i1 == that.i1 && j1 == that.j1 && i2 == that.i2 && j2 == that.j2;
    }
};

namespace std {
template<> struct hash<BorderPiece> {
    size_t operator()(const BorderPiece &p) const {
        return (((p.i1 * 17) + p.j1) * 17 + p.i2) * 17 + p.j2;
    }
};
}

int circumference(const vector<vector<bool>> &matrix, size_t i, size_t j) {
    --i; --j; // convert to 0-based
    const int m = matrix.size(), n = matrix[0].size();
    auto get = [&matrix, &m, &n](ptrdiff_t i, ptrdiff_t j) {
        return i >= 0 && i < m && j >= 0 && j < n ? matrix[i][j] : false;
    };
    using Coords = pair<size_t, size_t>;
    unordered_set<BorderPiece> border;
    vector<vector<bool>> visited(m, vector<bool>(n));
    visited[i][j] = true;
    queue<Coords> q;
    q.push(Coords(i, j));
    using Step = pair<ptrdiff_t, ptrdiff_t>;
    const vector<Step> nearby{ Step(0, +1), Step(0, -1), Step(+1, 0), Step(-1, 0) };
    while (!q.empty()) {
        Coords ij = q.front();
        auto i1 = ij.first, j1 = ij.second;
        q.pop();
        for (Step s : nearby) {
            ptrdiff_t i2 = i1 + s.first, j2 = j1 + s.second;
            if (get(i2, j2)) {
                if (!visited[i2][j2]) {
                    visited[i2][j2] = true;
                    q.push(Coords(i2, j2));
                }
            } else {
                border.insert(BorderPiece(i1, j1, i2, j2));
            }
        }
    }
    return border.size();
}
@SergeyTachenov
This seems like a really good approach to me. However, I am unclear why the 3rd example has a circumference of 5. Shouldn't it be 13? Also, I was assuming that with this approach you are going to pad the entire matrix with 0's all around (or at least logically). I am not sure if you have implemented it that way. But that would definitely work.
Here is how I solved it:
Key Observation:
In an island, how many units can a single 1 contribute?
0000
0100
0000
In this case the only 1 contributes 4 (= the actual circumference)
0000
0110
0000
In this case each 1 contributes 3 (the actual circumference = 6)
0000
0110
0010
In this case two of the 1's contribute 3 each and one contributes 2 (the actual circumference = 8)
Lastly,
0010
0111
0010
In this case all the outer 1's contribute 3 each and the middle one contributes 0 (the actual circumference = 12)
Now, the contribution of any 1 = max degree (= 4) - actual outdegree, i.e. the number of adjacent 1's (you can confirm that from all the examples)
So basically algorithm is :
- Perform DFS/BFS
- While visiting each node, count its outdegree, compute its contribution, and add it to the total circumference
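The contribution idea above fits in a few lines. This is an illustrative sketch in Python rather than a posted solution; the function name and helper names are invented, coordinates are 1-based as in the question, and the sample grid is the one from the question:

```python
def island_perimeter(grid, i, j):
    """Perimeter of the island containing the 1-based cell (i, j).

    Each land cell contributes 4 minus its number of land neighbours
    (max degree minus outdegree). Plain iterative DFS.
    """
    m, n = len(grid), len(grid[0])

    def land(r, c):
        return 0 <= r < m and 0 <= c < n and grid[r][c] == 1

    start = (i - 1, j - 1)                       # convert to 0-based
    stack, seen = [start], {start}
    perimeter = 0
    while stack:
        r, c = stack.pop()
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        land_nb = [p for p in neighbours if land(*p)]
        perimeter += 4 - len(land_nb)            # this cell's contribution
        for p in land_nb:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return perimeter

grid = [[0, 1, 1, 1, 0],
        [0, 0, 1, 1, 0],
        [1, 1, 0, 0, 1],
        [0, 1, 0, 1, 0]]
print(island_perimeter(grid, 2, 3))  # → 10
```

Running it against the other cases from the question gives 4 for (3,5) and (4,4), and 8 for (4,2).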
@ayushpatwari I should have put more dots in there, but the edit function is buggy, and I can't edit now. The thing is, I meant to show only a part of an island, so in the case 3 only the 1s bordering 0s are meaningful, the borders are not borders at all: just imagine one more line of dots below the whole thing.
In my implementation, I did pad the matrix with 0s around: note that the get lambda returns false when i or j is outside the area.
Your approach is essentially the same, but without keeping the border pieces in the set. As a matter of fact, I don't have to either! You basically sum up 4 - count of 1s, but it's the same as summing up count of 0s (assuming the matrix is surrounded by 0s). And that's exactly what I do, so I don't even need a set because I never ever try to insert the same thing into it (because i1, j1 are unique). Therefore, the size of the set will be equal to the number of insert calls, so I can just count them without actually inserting anything anywhere.
One more interesting observation. The complexity of the whole thing can be up to O(mn). This is the worst-case lower bound because in the worst case an island may be a very narrow snake-like windy thing with its circumference of order mn. But in the best case, when an island has a relatively normal shape, we can do much better if we can just go in a random direction, hit the island's border, and then just walk around. It can get as good as O(m + n). It's easier said than done, though, because walking around will involve a lot of edge cases.
This idea raises an important question, though: can an island have “lakes” inside? Because if it can, the improved approach won't work. But then again, our current approach will calculate the total circumference of all shores, both inner and outer, which may or may not be what we want.
On a side note, I think your original problem example has a small mistake: the last case should be (4, 2), not (4, 1) because (4, 1) points to a 0.
I'm not understanding how you guys are counting the perimeter. For example:
01110
00<1>10
11001
01010

The value at (2,3) I've put < > around, if I understand the coordinate system correctly. Now counting the 1's around it, we have: 3 1's in the first row, 1 1 in the 2nd row, and 3 1's in the 3rd row, which gives us a total of 7 surrounding ones. How did you arrive at 10? -Thanks
Do BFS or DFS and sum up the number of non diagonally adjacent 0's for every 1 on the island. Edges of the matrix should also be considered as 0's.
@dat.vikash The problem is to find the perimeter, not the number of surrounding one's. I think drawing a diagram and marking out the island will help. Also you can check this problem which was added recently :
| https://discuss.leetcode.com/topic/62098/given-a-0-1-grid-and-coordinates-x-y-find-the-circumference-perimeter-of-the-enclosing-island-of-1-s | CC-MAIN-2017-39 | en | refinedweb
I have an application with a servlet that uses @Inject to access a managed bean and @EJB to access an EJB:
@WebServlet("/test")
public class CDITestServlet extends HttpServlet {

    @Inject
    ApplicationBean applicationBean;

    @EJB
    StatelessEjb statelessEjb;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("applicationBean: " + applicationBean);
        out.close();
    }
}
| https://www.ibm.com/developerworks/community/forums/html/topic?id=64606887-b68a-4351-b053-1c84a04df16e&ps=25 | CC-MAIN-2017-39 | en | refinedweb
In a previous post I discussed asynchronous repositories. A closely related and complimentary design pattern is the Unit of Work pattern. In this post, I'll summarize the design pattern and cover a few non-conventional, but useful extensions.
Overview
The Unit of Work is a common design pattern used to manage the state changes to a set of objects. A unit of work abstracts all of the persistence operations and logic from other aspects of an application. Applying the pattern not only simplifies code that possess persistence needs, but it also makes changing or otherwise swapping out persistence strategies and methods easy.
A basic unit of work has the following characteristics:
- Register New - registers an object for insertion.
- Register Updated - registers an object for modification.
- Register Removed - registers an object for deletion.
- Commit - commits all pending work.
- Rollback - discards all pending work.
Extensions
The basic design pattern supports most scenarios, but there are a few additional use cases that are typically not addressed. For stateful applications, it is usually desirable to support cancellation or simple undo operations by using deferred persistence. While this capability is covered via a rollback, there is not a way to interrogate whether a unit of work has pending changes.
Imagine your application has the following requirements:
- As a user, I should only be able to save when there are uncommitted changes.
- As a user, I should be prompted when I cancel an operation with uncommitted changes.
To satisfy these requirements, we only need to make a couple of additions:
- Unregister - unregisters pending work for an object.
- Has Pending Changes - indicates whether the unit of work contains uncommitted items.
- Property Changed - raises an event when a property has changed.
Generic Interface
After reconsidering what is likely the majority of all plausible usage scenarios, we now have enough information to create a general purpose interface.
public interface IUnitOfWork<T> : INotifyPropertyChanged where T : class
{
bool HasPendingChanges
{
get;
}
void RegisterNew( T item );
void RegisterChanged( T item );
void RegisterRemoved( T item );
void Unregister( T item );
void Rollback();
Task CommitAsync( CancellationToken cancellationToken );
}
Base Implementation
It would be easy to stop at the generic interface definition, but we can do better. It is pretty straightforward to create a base implementation that handles just about everything except the commit operation.
public abstract class UnitOfWork<T> : IUnitOfWork<T> where T : class
{
    private readonly IEqualityComparer<T> comparer;
    private readonly HashSet<T> inserted;
    private readonly HashSet<T> updated;
    private readonly HashSet<T> deleted;

    protected UnitOfWork() { /* ... */ }
    protected UnitOfWork( IEqualityComparer<T> comparer ) { /* ... */ }

    protected IEqualityComparer<T> Comparer { get; }
    protected virtual ICollection<T> InsertedItems { get; }
    protected virtual ICollection<T> UpdatedItems { get; }
    protected virtual ICollection<T> DeletedItems { get; }
    public virtual bool HasPendingChanges { get; }

    protected virtual void OnPropertyChanged( PropertyChangedEventArgs e ) { /* ... */ }
    protected virtual void AcceptChanges() { /* ... */ }
    protected abstract bool IsNew( T item );

    public virtual void RegisterNew( T item ) { /* ... */ }
    public virtual void RegisterChanged( T item ) { /* ... */ }
    public virtual void RegisterRemoved( T item ) { /* ... */ }
    public virtual void Unregister( T item ) { /* ... */ }
    public virtual void Rollback() { /* ... */ }
    public abstract Task CommitAsync( CancellationToken cancellationToken );

    public event PropertyChangedEventHandler PropertyChanged;
}
Obviously by now, you've noticed that we've added a few protected members to support the implementation. We use HashSet<T> to track all inserts, updates, and deletes. By using HashSet<T>, we can easily ensure we don't track an entity more than once. We can also now apply some basic logic such as inserts should never enqueue for updates and deletes against uncommitted inserts should be negated. In addition, we add the ability to accept (e.g. clear) all pending work after the commit operation has completed successfully.
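Those bookkeeping rules are language-agnostic, so here they are as a small executable sketch. This is Python rather than C#, purely for illustration; the class and member names are invented and are not the types from this article:

```python
class ChangeTracker:
    """Executable sketch of the register logic described above."""

    def __init__(self):
        self.inserted, self.updated, self.deleted = set(), set(), set()

    @property
    def has_pending_changes(self):
        return bool(self.inserted or self.updated or self.deleted)

    def register_new(self, item):
        self.inserted.add(item)

    def register_changed(self, item):
        # An uncommitted insert is already pending in full; don't also
        # track it as an update.
        if item not in self.inserted:
            self.updated.add(item)

    def register_removed(self, item):
        if item in self.inserted:
            self.inserted.discard(item)   # deleting an uncommitted insert negates it
        else:
            self.updated.discard(item)
            self.deleted.add(item)

    def unregister(self, item):
        for pending in (self.inserted, self.updated, self.deleted):
            pending.discard(item)

    def accept_changes(self):             # called after a successful commit
        self.inserted.clear(); self.updated.clear(); self.deleted.clear()

tracker = ChangeTracker()
tracker.register_new("a")
tracker.register_removed("a")             # insert + delete cancel out
print(tracker.has_pending_changes)        # → False
```

The sets also give the de-duplication for free: registering the same item twice leaves a single pending entry.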
Supporting a Unit of Work Service Locator
Once we have all the previous pieces in place, we could again stop, but there are multiple ways in which a unit of work could be used in an application that we should consider:
- Imperatively instantiated in code
- Composed or inserted via dependency injection
- Centrally retrieved via a special service locator facade
The decision as to which approach to use is at a developer's discretion. In general, when composition or dependency injection is used, the implementation is handed by another library and some mediating object (ex: a controller) will own the logic as to when or if entities are added to the unit of work. When a service locator is used, most or all of the logic can be baked directly into an object to enable self-tracking. In the rest of this section, we'll explore a UnitOfWork singleton that plays the role of a service locator.
public static class UnitOfWork
{
    public static IUnitOfWorkFactoryProvider Provider
    {
        get;
        set;
    }

    public static IUnitOfWork<TItem> Create<TItem>() where TItem : class { /* ... */ }
    public static IUnitOfWork<TItem> GetCurrent<TItem>() where TItem : class { /* ... */ }
    public static void SetCurrent<TItem>( IUnitOfWork<TItem> unitOfWork ) where TItem : class { /* ... */ }
    public static IUnitOfWork<TItem> NewCurrent<TItem>() where TItem : class { /* ... */ }
}
Populating the Service Locator
In order to locate a unit of work, the locator must be backed with code that can resolve it. We should also consider composite applications where there may be many units of work defined by different sources. The UnitOfWork singleton is configured by supplying an instance to the static Provider property.
Unit of Work Factory Provider
The IUnitOfWorkFactoryProvider interface can simply be thought of as a factory of factories. It provides a central mechanism for the service locator to resolve a unit of work via all known factories. In composite applications, implementers will likely want to use dependency injection. For ease of use, a default implementation is provided whose constructor accepts Func<IEnumerable<IUnitOfWorkFactory>>.
public interface IUnitOfWorkFactoryProvider
{
IEnumerable<IUnitOfWorkFactory> Factories
{
get;
}
}
Unit of Work Factory
The IUnitOfWorkFactory interface is used to register, create, and resolve units of work. Implementers have the option to map as many units of work to a factory as they like. In most scenarios, only one factory is required per application or composite component (ex: plug-in). A default implementation is provided that only requires the factory to register a function to create or resolve a unit of work for a given type. The Specification pattern is used to match or select the appropriate factory, but the exploration of that pattern is reserved for another time.
public interface IUnitOfWorkFactory
{
ISpecification<Type> Specification
{
get;
}
IUnitOfWork<TItem> Create<TItem>() where TItem : class;
IUnitOfWork<TItem> GetCurrent<TItem>() where TItem : class;
void SetCurrent<TItem>( IUnitOfWork<TItem> unitOfWork ) where TItem : class;
}
Minimizing Test Setup
While all of factory interfaces make it flexible to support a configurable UnitOfWork singleton, it is somewhat painful to set up test cases. If the required unit of work is not resolved, an exception will be thrown; however, if the test doesn't involve a unit of work, why should we have to set one up?
To solve this problem, the service locator will internally create a compatible uncommitable unit of work instance whenever a unit of work cannot be resolved. This behavior allows self-tracking objects to be used without having to explicitly set up a mock or stub unit of work. You might be thinking that this behavior hides composition or dependency resolution failures and that is true. However, any attempt to commit against these instances will throw an InvalidOperationException, indicating that the unit of work is uncommitable. This approach is the most sensible method of avoiding unnecessary setups, while not completely hiding resolution failures. Whenever a unit of work fails in this manner, a developer should realize that they have not set up their test correctly (ex: verifying commit behavior) or resolution is failing at run time.
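The fallback behaviour described here is essentially the null-object pattern. A minimal sketch of the idea, in Python for brevity; the name and error message are invented, not the library's actual types:

```python
class UncommitableUnitOfWork:
    """Null-object stand-in for an unresolved unit of work.

    Registrations quietly succeed, so tests that never commit need no
    setup; a commit attempt surfaces the resolution failure instead of
    silently hiding it.
    """

    def register_new(self, item):
        pass

    def register_changed(self, item):
        pass

    def register_removed(self, item):
        pass

    def commit(self):
        raise RuntimeError(
            "this unit of work is uncommitable; "
            "no factory was resolved (check your test setup)")

uow = UncommitableUnitOfWork()
uow.register_new("person")    # fine: no setup required
# uow.commit()                # would raise RuntimeError
```

A self-tracking object wired to this stand-in keeps working in a unit test, and the loud commit failure points straight at the missing factory registration.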
Examples
The following outlines some scenarios as to how a unit of work might be used. For each example, we'll use the following model:
public class Person
{
public int PersonId { get; set;}
public string FirstName { get; set; }
public string LastName { get; set; }
}
Implementing a Unit of Work with the Entity Framework
The following demonstrates a simple unit of work that is backed by the Entity Framework:
public class PersonUnitOfWork : UnitOfWork<Person>
{
protected override bool IsNew( Person item )
{
// any unsaved item will have an unset id
return item.PersonId == 0;
}
public override async Task CommitAsync( CancellationToken cancellationToken )
{
using ( var context = new MyDbContext() )
{
foreach ( var item in this.InsertedItems )
    context.People.Add( item );

foreach ( var item in this.UpdatedItems )
{
    // Attach alone marks the entity Unchanged; flag it as Modified so it is saved.
    context.People.Attach( item );
    context.Entry( item ).State = EntityState.Modified;
}

foreach ( var item in this.DeletedItems )
{
    // A detached entity must be attached before it can be removed.
    context.People.Attach( item );
    context.People.Remove( item );
}
await context.SaveChangesAsync( cancellationToken );
}
this.AcceptChanges();
}
}
Using a Unit of Work to Drive User Interactions
The following example illustrates using a unit of work in a rudimentary Windows Presentation Foundation (WPF) window that contains buttons to add, remove, cancel, and apply (or save) changes to a collection of people. The recommended approach to working with presentation layers such as WPF is to use the Model-View-View Model (MVVM) design pattern. For the sake of brevity and demonstration purposes, this example will use simple, albeit difficult to test, event handlers. All of the persistence logic is contained within the unit of work and the unit of work can report whether it has any pending work to help inform a user when there are changes. The unit of work can also be used to verify that the user truly wants to discard uncommitted changes, if there are any.
public partial class MyWindow : Window
{
private readonly IUnitOfWork<Person> unitOfWork;
public MyWindow() : this( new PersonUnitOfWork() ) { }
public MyWindow( IUnitOfWork<Person> unitOfWork )
{
this.InitializeComponent();
this.ApplyButton.IsEnabled = false;
this.People = new ObservableCollection<Person>();
this.unitOfWork = unitOfWork;
this.unitOfWork.PropertyChanged +=
( s, e ) => this.ApplyButton.IsEnabled = this.unitOfWork.HasPendingChanges;
}
public Person SelectedPerson { get; set; }
public ObservableCollection<Person> People { get; private set; }
private void AddButton_Click( object sender, RoutedEventArgs e )
{
var person = new Person();
// TODO: custom logic
this.People.Add( person );
this.unitOfWork.RegisterNew( person );
}
private void RemoveButton_Click( object sender, RoutedEventArgs e )
{
var person = this.SelectedPerson;
if ( person == null ) return;
this.People.Remove( person );
this.unitOfWork.RegisterRemoved( person );
}
private async void ApplyButton_Click( object sender, RoutedEventArgs e )
{
await this.unitOfWork.CommitAsync( CancellationToken.None );
}
private void CancelButton_Click( object sender, RoutedEventArgs e )
{
if ( this.unitOfWork.HasPendingChanges )
{
var message = "Discard unsaved changes?";
var title = "Save";
var buttons = MessageBoxButton.YesNo;
var answer = MessageBox.Show( message, title, buttons );
if ( answer == MessageBoxResult.No ) return;
this.unitOfWork.Rollback();
}
this.Close();
}
}
Implementing a Self-Tracking Entity
There are many different ways and varying degrees of functionality that can be implemented for a self-tracking entity. The following is one of many possibilities that illustrates just enough to convey the idea. The first thing we need to do is create a factory.
public class MyUnitOfWorkFactory : UnitOfWorkFactory
{
public MyUnitOfWorkFactory()
{
this.RegisterFactoryMethod( () => new PersonUnitOfWork() );
// additional units of work could be defined here
}
}
Then we need to wire up the service locator with a provider that contains the factory.
var factories = new IUnitOfWorkFactory[]{ new MyUnitOfWorkFactory() };
UnitOfWork.Provider = new UnitOfWorkFactoryProvider( () => factories );
Finally, we can refactor the entity to enable self-tracking.
public class Person
{
private string firstName;
private string lastName;
public int PersonId
{
get;
set;
}
public string FirstName
{
get
{
return this.firstName;
}
set
{
this.firstName = value;
UnitOfWork.GetCurrent<Person>().RegisterChanged( this );
}
}
public string LastName
{
get
{
return this.lastName;
}
set
{
this.lastName = value;
UnitOfWork.GetCurrent<Person>().RegisterChanged( this );
}
}
public static Person CreateNew()
{
var person = new Person();
UnitOfWork.GetCurrent<Person>().RegisterNew( person );
return person;
}
public void Delete()
{
UnitOfWork.GetCurrent<Person>().RegisterRemoved( this );
}
public Task SaveAsync()
{
return UnitOfWork.GetCurrent<Person>().CommitAsync( CancellationToken.None );
}
}
Conclusion
In this article we examined the Unit of Work pattern, added a few useful extensions to it, and demonstrated some common uses cases as to how you can apply the pattern. There are many implementations for the Unit of Work pattern and the concepts outlined in this article are no more correct than any of the alternatives. Hopefully you finish this article with a better understanding of the pattern and its potential uses. Although I didn't explicitly discuss unit testing, my belief is that most readers will recognize the benefits and ease in which cross-cutting persistence requirements can be tested using a unit of work. I've attached all the code required to leverage the Unit of Work pattern as described in this article in order to accelerate your own development, should you choose to do so.
really useful, thanks!
great..
Hey Chris,
Great article and like you say a breath of fresh air to the usual implementation's I've created before for UoW and Repository.
I liked it so much I've used the pattern here to implement Repositories / UoW connected to a variety of backing stores; it works great so long as you have a decent IQueryable implementation underneath 🙂
I'd appreciate your opinion on this :
In the past I've also had my UnitOfWork act as a factory for the repository, as I figured I needed the UoW for a writeable repository. Although I could have let the IoC container give me the repositories I need, it seemed neater to only take one dependency on the constructor.
I was thinking of extending the UnitOfWorkFactory so that you'd get write repositories from the UnitOfWorkFactory. What's your view on tying the repo to the unit of work in this way, though?
Best Regards
John | https://blogs.msdn.microsoft.com/mrtechnocal/2014/04/18/unit-of-work-expanded/ | CC-MAIN-2019-47 | en | refinedweb |
Hello everybody, I am working on a project in which I want to send and receive data (Strings) from a device which is connected via Ethernet. The reason for Ethernet is that these devices are used in industrial environments and are organized in a network. The device has a chip on it which creates its own web server.
I have no access to the firmware of the device. All I've got is the data for connecting (IP, port, etc.) and the information for the communication protocol (the manufacturer made its own protocol).
example:
if I send the String: GET VER\n the Device sends me: S V1.13 @ LED-C3A Controller\n (every message has a line-feed ('\n') at the end in both directions.)
Also I know it connects via TCP.
The device connects correctly. I've tested it with the Software with which it is delivered.
The problem:
500-600ms after I've started my program I get the message: Client got end-of-stream. After that I can't send any commands without getting more errors.
The odd thing is that I send a command after the connection is set, then wait 600ms (required by the protocol), lose the connection (Client got end-of-stream) and get the correct answer from the device.
Here is my code:
import processing.net.*;

Client SLUX;
String dataIn = new String("");

void init_SLUX() {
  println("connect to: 193.168.122.150");
  SLUX = new Client(this, "192.168.122.150", 1580);
  println("connected");

  println("command: get version");
  SLUX.write("GET VER" + '\n');

  println("wait");
  int millisStart = millis();
  while ((millis() - millisStart) < 600) { ; }

  println("check answer");
  if (SLUX.available() > 0) {
    dataIn = SLUX.readString();
    print("version is: ");
    println(dataIn);
  }
  println("end of init");
}

void setup() {
  size(displayWidth - 50, displayHeight - 100);
  init_SLUX();
}

void draw() {
}
I used println() instead of breakpoints since there is no such thing in Processing (yet). Here is the terminal output I get when I run the program:
connect to: 193.168.122.150
connected
command: get version
wait
Client got end-of-stream.
check answer
version is: S V1.13 @ LED-C3A Controller 1
end of init
Any suggestions?
PS: Please excuse my english. It's not my native language.
Answers
Problem solved. Please close this Discussion. | https://forum.processing.org/two/discussion/4775/client-got-out-of-stream-network-library | CC-MAIN-2019-47 | en | refinedweb |
public class Formatting extends java.lang.Object
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static Formatting.Selection reformatRegion(SourceFile file, int startOffset, int endOffset)
public static boolean reformatSubtree(SourceElement subtree)
subtree - The SourceElement whose contents should be formatted.
public static javax.swing.undo.UndoableEdit reformatSelection(SourceFile file, int start, int end)
file - is the source file that contains the selection region
start - is the start offset of the selected region
end - is the end offset of the selected region | https://docs.oracle.com/middleware/12211/jdev/api-reference-esdk/oracle/javatools/parser/java/v2/util/Formatting.html | CC-MAIN-2019-47 | en | refinedweb
Felgo version 2.3.1 is the first update in 2015 – we’re happy to start the New Year with these new features:
Change of Import Path for Felgo Qt 5 Plugins
Due to internal changes in Qt 5.4, we had to change the import path of the Felgo Qt Plugins. As a customer of the Felgo Indie License or above, you can use these plugins:
- Soomla Plugin for cross-platform in-app purchases
- AdMob & Chartboost plugin for adding mobile ads to monetize your game
- Facebook integration to your game, which is especially powerful in combination with our cross-platform leaderboard & achievement service Felgo Game Network
- Flurry Analytics to get insights how your app is performing and how you can improve it
- Local Push Notifications to bring your users back to your app
To get these plugins, follow the installation steps here.
Since this Felgo update, you then get access to them with the new import
import VPlayPlugins.*
Instead of before:
import VPlay.plugins.* //deprecated now!
So for example to add Facebook to your game, use the following new import:
import VPlayPlugins.facebook 1.0
and use it like that:
import VPlayPlugins.facebook 1.0

Facebook {
  licenseKey: "<your-plugin-license-key>"
  appId: "xxxxxxxxxxxxxxxx"
  readPermissions: [ "email", "user_friends" ]
  publishPermissions: ["publish_actions"]

  Component.onCompleted: {
    openSession() // opens the native Facebook login screen
  }
}
As a Felgo customer with the Indie License or above, you can then generate a licenseKey for all plugins for free here.
Improved Dynamic Image Switching & File Selector Detection
Dynamic Image Switching now supports per-image file switching with Felgo File Selectors. This means you can decide not to include an hd2 version of an image to reduce the app size. Although the image will then not be crisp on hd2 (high retina) screens like the iPad 3's, this addition allows you to decide on a per-image basis whether app size or image quality is more important.
Felgo now supports Box2D RayCast functionality! This is useful in games to check if physics bodies lie along a ray – you can make fun & complex AI-powered games with that addition.
There are many bug fixes in this update too, for example iOS simulator now again works like a charm.
As a Felgo customer, and in the trial, you can update to the latest version now! See the Update Guide for how to get it and for the full changelog.
Update to the latest version Now | https://felgo.com/updates/update-2-3-1-felgo-qt5-plugin-import-changes-raycast-support | CC-MAIN-2019-47 | en | refinedweb |
Before we get started, here’s the list of requirements
- Microsoft Visual Studio 2010
- MS .NET Framework 4.0
- MS CRM SDK 2011
Walkthrough Steps: Web Service
- Open Visual Studio 2010
- Select your project, ASP.NET Empty Web Application
- Select a suitable name of your project
- On Successful Creation of Project
- Go to Solution Explorer
- Add New Item: Web Service
- Provide a suitable name
- Click on Add to create
- You will see the C# content in “CustomWebService.asmx.cs”
- Add reference to CRM Assemblies
- Adding another .NET assemblies
- Adding Live ID Code File available in CrmSdk (SDK\samplecode\cs\helpercode)
- Declare the namespace
- Write the code for CRM
/// <summary>
/// This function accepts the lastname and creates a contact with that in CRM
/// </summary>
/// <param name="lname"></param>
/// <returns></returns>
public string CreateContact(string lname)
{
    string message = string.Empty;
    try
    {
        OrganizationServiceProxy serviceProxy;
        ClientCredentials deviceCredentials = null;
        ClientCredentials clientCredentials = null;
        Guid contactId = Guid.Empty;

        //Organization URL
        Uri OrganizationUri = new Uri(String.Format("https://{0}.api.crm.dynamics.com/XRMServices/2011/Organization.svc", "<<Organization>>"));

        //Discovery URL
        Uri HomeRealmUri = new Uri(String.Format(".{0}/XRMServices/2011/Discovery.svc", "crm.dynamics.com"));

        //Setting Client Credentials
        clientCredentials = this.GetCredentials("<<Live Id>>", "<<Live Password>>");

        //Get the Device Id and Password
        deviceCredentials = this.GetDeviceCredentials();

        //Using Organization Service Proxy to instantiate the CRM Web Service
        using (serviceProxy = new OrganizationServiceProxy(OrganizationUri, HomeRealmUri, clientCredentials, deviceCredentials))
        {
            // This statement is required to enable early-bound type support.
            serviceProxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
            IOrganizationService service = (IOrganizationService)serviceProxy;

            //Define Entity Object
            Entity contact = new Entity("contact");

            //Specify the attributes
            contact.Attributes["lastname"] = lname;

            //Execute the service
            contactId = service.Create(contact);

            //Confirmation Message
            message = "Contact Created with LastName:- " + lname + " Guid is" + contactId.ToString();
        }
    }
    catch (Exception ex)
    {
        message = ex.Message;
    }

    //returns the message
    return message;
}
- Write the other functions used in the code above (GetCredentials and GetDeviceCredentials; their bodies are not shown here)
- Compile your code
- You’ve successfully completed the walkthrough.
Hi Apurv,
I have tried same as above. But am getting following error. Can you please help me.
"The authentication endpoint Username was not found on the configured Secure Token Service!"
Today we are going to learn to create a simple Web API using ASP.Net Core and we want to enable CORS for public users to get access to our Web API.
Before we start, you may want to know what CORS is. CORS stands for Cross-Origin Resource Sharing. By default, web browsers enforce the same-origin policy, which blocks a JavaScript call to a different origin — it means your JavaScript cannot call a domain or URL other than the one the code is running from. This restriction exists for security purposes: it ensures requests are not made to external servers behind the user's back, which can be dangerous.
So when is CORS enabled? There are situations where you want to give external access for public users to reach your internal data. You can still add security measures, such as requiring the access to be authenticated first by providing a login token, OAuth authentication, etc.
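To make "origin" concrete: two URLs share an origin only if scheme, host, and port all match. Here is a small illustrative sketch of that check in Python (browsers implement this internally; the function name is mine):

```python
from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    """Two URLs share an origin if scheme, host, and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # .port is None when the URL uses the scheme's default port
    default = {"http": 80, "https": 443}
    port_a = a.port or default.get(a.scheme)
    port_b = b.port or default.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://api.example.com/data", "https://api.example.com/users"))  # True
print(same_origin("https://example.com", "http://example.com"))        # False: scheme differs
print(same_origin("https://example.com", "https://example.com:8443"))  # False: port differs
```

A page served from one origin calling a Web API on another origin is exactly the case the browser blocks unless the server opts in via CORS headers.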
In this example, I am going to give full access to our web API without any restriction. So let's get started by creating a Web API .Net Core project in Visual Studio.
Under the Controllers folder, we remove the default ValuesController.cs file and add a new controller named MyFirstAPIController.
Here is the full code of the MyFirstAPIController class file. As you can see we create one single web api get method that will return a list of fruit names.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

namespace NetCoreWebAPI.Controllers
{
    [Produces("application/json")]
    [Route("api/MyFirstAPI")]
    public class MyFirstAPIController : Controller
    {
        [HttpGet("GetFruits")]
        public async Task<List<string>> GetFruits()
        {
            List<string> fruits = new List<string>();
            fruits.Add("Apple");
            fruits.Add("Banana");
            fruits.Add("Blueberry");
            fruits.Add("Cherry");
            fruits.Add("Mango");
            fruits.Add("Pear");
            fruits.Add("Strawberry");
            return await Task.Run(() => { return fruits; });
        }
    }
}
If we access the Web API in a browser, we get a JSON array in return.
To test whether the CORS policy blocks us, we can write some JavaScript in an HTML file and call the Web API GET method. Here is the sample code.
<!doctype html>
<html>
<body>
<script>
fetch("")
    .then(function(response) {
        response.json().then(data => {
            console.log(data);
        });
    })
    .catch(function(error) {
        console.log(error);
    });
</script>
</body>
</html>
If you run the above code by opening the file in any browser, you will receive the following error message. This is a web-console screenshot from the Chrome browser.
Note: the URL is for example only and may be removed in the future.
How to fix CORS issue in ASP.Net Web API Core?
Fixing this problem is quite simple. The first thing we need to do is install the Microsoft.AspNetCore.Cors library and add it to our project. You can install it using the NuGet Package Manager in Visual Studio.
Once the above library has been successfully installed, we need to add two CORS calls in the ConfigureServices and Configure methods of the Startup.cs file. The first one is:
services.AddCors();
Remember, this needs to be added before the line services.AddMvc();.
The last piece we need to add is the following code.
app.UseCors(builder => builder .AllowAnyOrigin() .AllowAnyMethod() .AllowAnyHeader() .AllowCredentials());
For easy review, I will include the whole code of the Startup.cs file.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace NetCoreWebAPI
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddCors();
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseCors(builder => builder
                .AllowAnyOrigin()
                .AllowAnyMethod()
                .AllowAnyHeader()
                .AllowCredentials());

            app.UseMvc();
        }
    }
}
If we redeploy the code and run the HTML file again in the browser, we should receive the following response.
Demo Files
You can download the sample code below. | https://bytutorial.com/blogs/asp-net-core/how-to-enable-cors-in-aspnet-core-web-api | CC-MAIN-2019-47 | en | refinedweb |
Good Evening !
I'm trying to make an AlarmManager that will play a Ringtone in a loop, but unfortunately it doesn't play anything when the phone is in silent mode. How can I do that? Here is my current code in my BroadcastReceiver:
[BroadcastReceiver]
public class OneShotAlarm : BroadcastReceiver
{
    //sleep_main slpmain;
    Intent serviceToStop;

    public override void OnReceive(Context context, Intent intent)
    {
        Toast.MakeText(context, "Received", ToastLength.Short).Show();
        Android.Net.Uri notification = RingtoneManager.GetDefaultUri(RingtoneType.Alarm);
        Ringtone r = RingtoneManager.GetRingtone(Application.Context, notification);
        r.Play();
        serviceToStop = new Intent(context, typeof(TimestampService));
        context.StopService(serviceToStop);
    }
}
PS: Should I use a MediaPlayer to make it loop?
Thanks a lot for your help,
Clément. | https://forums.xamarin.com/discussion/100456/ringtone-manager-not-playing-ringtone-while-in-silent-mode | CC-MAIN-2019-47 | en | refinedweb |
Legacy analysis pass which computes a DominatorTree.
#include "llvm/IR/Dominators.h"
Legacy analysis pass which computes a DominatorTree.
Definition at line 259 of file Dominators.h.
Definition at line 265 of file Dominators.h.
References llvm::PassRegistry::getPassRegistry(), and llvm::initializeDominatorTreeWrapperPassPass().

Definition at line 276 of file Dominators.h.
References llvm::AnalysisUsage::setPreservesAll().
Definition at line 269 of file Dominators.h.
Referenced by llvm::getBestSimplifyQuery(), llvm::StackProtector::runOnFunction(), llvm::LazyValueInfoWrapperPass::runOnFunction(), and llvm::DominatorTree::viewGraph().
Definition at line 270 of file Dominators.h.

Definition at line 375 of file Dominators.cpp.

Definition at line 280 of file Dominators.h.
References print(), and llvm::DominatorTreeBase< NodeT, IsPostDom >::releaseMemory().
runOnFunction - Virtual method overriden by subclasses to do the per-function processing of the pass.
Implements llvm::FunctionPass.
verifyAnalysis() - This member can be implemented by a analysis pass to check state of analysis information.
Reimplemented from llvm::Pass.
Definition at line 368 of file Dominators.cpp.
References assert(), llvm::codeview::Basic, llvm::JumpTable::Full, and llvm::VerifyDomInfo.
Definition at line 263 of file Dominators.h.
Referenced by llvm::SafepointIRVerifierPass::run(), and llvm::DominatorTreeVerifierPass::run(). | http://llvm.org/doxygen/classllvm_1_1DominatorTreeWrapperPass.html | CC-MAIN-2019-47 | en | refinedweb |
I want to have the rectangle change colour based on which key on my MIDI keyboard is pushed.
Can anyone help?
import themidibus.*;

MidiBus myBus;

void setup() {
  size(1920, 400);
  background(0);
  MidiBus.list();
  myBus = new MidiBus(this, 0, 1);
}

void draw() {
  fill(pitch);
  rect(width/2, height/2, 50, 50);
}

void noteOn(int channel, int pitch, int velocity) {
  // Receive a noteOn
  println();
  println("Note On:");
  println("Pitch:"+pitch);
  println("Velocity:"+velocity);
}

void noteOff(int channel, int pitch, int velocity) {
  // Receive a noteOff
  println();
  println("Note Off:");
  println("Pitch:"+pitch);
  println("Velocity:"+velocity);
}
Answers
@Uranhjorten -- I don't have a MIDI keyboard here. What values are you getting from pitch?
You need to set colorMode(HSB, minPitch, maxPitch) so that you can change hue with a single number, rather than with three R,G,B numbers.
If you don't want to use pitch colors throughout your sketch, isolate your colorMode style setting by using pushStyle / popStyle
I'm not sure how the midibus library works, but it prints values around 30-70 when I hit different keys.
It says something like 'the variable pitch does not exist' when I try fill(pitch); Do I need to return pitch from noteOn, and if so how would I do that?
Don't return pitch from noteOn(). Instead:

Declare a global int mPitch, and inside noteOn() assign mPitch = pitch. Use the global mPitch in draw().
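The idea of storing the last received pitch in a global and mapping it onto a hue range can be sketched outside Processing as well. This is plain Python for illustration only; the 30-70 range is the hypothetical key range reported above:

```python
MIN_PITCH, MAX_PITCH = 30, 70  # observed key range; adjust for your keyboard

last_pitch = MIN_PITCH  # global, updated by the note-on handler

def note_on(pitch):
    """Equivalent of assigning mPitch = pitch inside noteOn()."""
    global last_pitch
    last_pitch = pitch

def pitch_to_hue(pitch, hue_max=255):
    """Map pitch linearly onto 0..hue_max, like colorMode(HSB, ...) expects."""
    pitch = max(MIN_PITCH, min(MAX_PITCH, pitch))  # clamp out-of-range notes
    return (pitch - MIN_PITCH) * hue_max // (MAX_PITCH - MIN_PITCH)

note_on(50)
print(pitch_to_hue(last_pitch))  # a note in the middle of the range -> 127
```

In the sketch itself you would then call fill(pitch_to_hue-style value) in draw() using the global.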
Thank you! I'll see if that works :) | https://forum.processing.org/two/discussion/21334/rectangle-color-based-on-midi-key | CC-MAIN-2019-47 | en | refinedweb |
Hundreds of LEDs on Arduino: a New Way From the Past
Introduction: Hundreds of LEDs on Arduino: a New Way From the Past
Finally managed to put a universal library for DM633/DM634 family together; time to share it with the world.
The DM63x family is a bunch of cheap and useful LED drivers from an obscure Taiwanese manufacturer called SiTI. I stumbled upon these drivers by chance, spent a year testing them and building some projects with them, designing boards to accommodate them and now I’m rather sure these chips are almost perfect for the hobbyist purposes that do not involve huge LED matrixes or cubes. Unlike the now-popular programmable LEDs like WS2812, this is the old-fashioned way of controlling LEDs; any kind of an LED can be used here. The drivers are rather old (like 8 years or so), but no library was ever made for an Arduino. Up until now I just included all the driver controlling functions in the sketch – it was cumbersome, not compatible with different drivers and not easily shared. The library solves it; it can even work with 12- and 16-bit drivers simultaneously.
This instructable is also a kind of support for my UltiBlink site, but you’ll be able to make your own working prototyping contraption with the following:
- an Arduino;
- a LED driver chip (DM631,632,633 or 634) in SOP, SSOP or TSSOP package (not SOPB);
- an SOP/SSOP – DIP adapter (breakout) board with at least 24 pins;
- some soldering skills or another way to solder the last two together.
I’ll start this instructable with some basic knowledge about LED drivers in general, continue with DM63x particulars and then go on to the library; so if you’re already familiar with these you should skip to step 4.
Step 1: LED Driver?
The common LED driving chips (not to be confused with LED power supplies, also named LED drivers) are in fact just current-sinking shift registers with inbuilt resistors. Unlike the usual shift registers they do not provide voltage on their outputs, but sink it (hence the name). Their outputs are physical inputs. This means the LED cathodes are connected to them, which also means only common anode RGB LEDs can be used. The inbuilt resistors limit the current going through each channel; pretty useful with LEDs, as you won’t need a resistor per each channel, just one resistor per LED driver (used as a reference). This is the reason they are called the LED drivers: while these chips can be used in most applications where a shift register is needed, they are more expensive, thus most useful when the price increase is justified by the design efficiency, namely when used to drive LEDs.
Like the shift registers, these common LED drivers are binary, meaning they have a single control bit for each channel that can be either open or closed. If you want something more interesting, namely a partial brightness for a single color, you have to generate an appropriate PWM signal somewhere else and keep it running to the LED driver at all times (refreshing it). In Arduino world, the ShiftPWM library works with these common LED drivers. Still, this eats resources of the microcontroller that, as in the case of Atmega328, is not exactly a powerhouse. And most of the time, you need it to do something else as well.
Thus, the TLC5940 became popular in the world of Arduino. Unlike most ‘common’ LED drivers the drivers from TI can generate PWM signals themselves, thus keeping the desired modulated brightness of connected LEDs without a constant data stream from the controller. These drivers take more than one bit per channel on the input; the famous TLC5940, in fact, is 12-bit, so you send it a string of ones and zeros forming numbers from 0 to 4096 per each channel. You still have to provide this chip with the external PWM frequency – this is essential for multiplexing, the mode of operation it was designed for.
The DM63x line of LED drivers goes even further than that – these are equipped with their own 'free-running' ~18MHz PWM engine, meaning you don't have to give them an external PWM frequency (although it is possible). Thus, the number of connections required to actually control these LED drivers is reduced to the bare minimum of three: clock, data, and latch (or chip select in SPI terms). It also means that all your Arduino timers are free, as none of them is used to provide PWM reference clock. Excellent! So, where's the catch? Well, these chips don't fare as well in the multiplexing environment (although multiplexing is possible, but that's a topic for some future article). But the 'free-running PWM' is excellent for the hobbyist crowd that just needs to light up some LEDs, not build a high-res billboard full of them.
There are four chips in the line. DM631 and DM633 are 12-bit while DM632 and DM634 are 16-bit. DM633 and DM634 also feature a 7-bit global brightness control.
Like the shift registers, LED drivers can be chained, increasing the number of outputs. LED drivers with PWM capabilities can be used with anything that requires PWM signals, not just LEDs.
Step 2: The SMD Problem
DM63x chips are somewhat known in hobbyist electronics circles, I’ve read about some projects using them (most notably the Lightpack – yep, it was built with the DM631 chip), but they aren’t as widespread as the TLC5940 chip. Two reasons: the manufacturer is obscure and not present in places like Digikey; and they are available only in SMD, or surface mount, packages. In fact, the genuine TLC5940 in a breadboard-friendly DIP package is also extinct now, but at least you can get a fake one on eBay. Same eBay can offer DM63x chips too – and genuine ones by the way, as no one fakes them.
Still, the SMD package can be a problem for a hobbyist although this is a problem that is rather rewarding to solve. After all, most of the modern interesting chips come in SMD packages, so the equipment and skills to use them in your projects will come in handy. And DM63x LED drivers can be a good place to start.
To use one on a breadboard you just need to solder it on a standard SOP-DIP breakout board (also called a connector or an adapter board). There are a lot of instructions on how to solder an SOP/SSOP chip (a useful skill), but I guess you can also ask someone else to do it.
There is one nasty catch here, however. These chips come in three package sizes: SOP, SOP-B, and SSOP/TSSOP. The usual two-sided SOP-DIP connector is compatible with both SOP and SSOP/TSSOP packages. I never saw a connector for the unusual SOP-B package. Problem is, the sellers on eBay and similar sites have absolutely no idea of the exact package they are offering. They usually just copy-paste descriptions and photos from each other, so mostly they are selling ‘SOP packages’ that are in fact anything but. You may try to ask them, but I personally am yet to receive any coherent answer. My experience shows that 12-bit DM633s are more prone to be in SOP-B packages, and 16-bit DM634s usually come in SSOP packages. I never actually saw any in SOP (the biggest one) package and I have doubts they are actually produced nowadays, so SSOP is your best bet. By the way, TSSOP package has a thermal pad on the bottom, but with DM chips you can ignore it.
The manufacturer can sell you the exact chip in an exact package if you order 60+ pieces.
You can also, of course, just get one of my UltiBlink boards.
Step 3: Meet the Chip
The nice thing about DM chips is the ease of connection (and actually remembering their layout). Each has 24 pins in the same configuration. The first one is ground. Then come three main data lines: DATA_IN, CLOCK, and LATCH. You program an LED driver by sending a stream of 12- or 16-bit integers into the DATA_IN line. The CLOCK line allows the driver to discern separate bits in that stream: it ‘catches’ the current state of the DATA_IN line as 0 or 1 at either the falling or the rising edge of the CLOCK signal. The LATCH is used to copy the current data in the stream into the internal memory, which directly controls outputs. While LATCH is low, the stream of data just flows through all the connected chips; each next incoming bit pushes the string forward. Once the LATCH goes high, the data is latched and kept. Thus, you send the values for all connected LED drivers keeping LATCH low, then get it high – and bingo, you have some lights on.
Pins 5 to 20 are outputs, you connect LED cathodes to them.
21 is SOMODE/GCK. By default, this selects the edge of the CLOCK signal to use – rising or falling. It can also be used to provide external PWM frequency to the driver. In practice, you can just let this leg hang in the air, but it is better to connect it to ground.
22 is DATA_OUT – you connect it to the next driver’s DATA_IN.
23 is REXT pin: here sits the single resistor (connected to ground) that regulates the current going through outputs. For standard LEDs, you should use something like 3K resistor for DM631 and DM632 chips and around 2.2K for DM633 and DM634 ones. In the case of these latter chips, you can just connect REXT to ground as they have internal Global Brightness Control (just make sure you don’t raise the GBC over the default 70%). I recommend having the resistor here in any case.
The final pin 24 is Vcc. As usual, it is a good idea to have a decoupling capacitor here – I use 100nF.
With such a simple setup, programming these drivers is also rather easy. In fact, you can use a technique called bit-banging, which amounts to manually sending data. Keep the latch pin low; send a single bit through the data line, pulse the clock pin, send the second bit, pulse the clock again, repeat as much as necessary, then pulse the latch. Use direct port manipulation, not digitalWrite(), for this.
It is better and faster to use the SPI interface, as it was designed to do just that. It also has data (digital pin 11 on Arduino) and clock (pin 13) lines plus the Chip Select (CS – you can use any pin for this) line that works as a latch. You can still have other SPI devices connected to your Arduino on different CS lines.
My library uses this method.
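The shift-and-latch protocol described above is easy to simulate. Here is a minimal Python toy model (FakeDM63x is a made-up class for illustration, not part of any library; on a real Arduino this would be port writes or SPI transfers):

```python
class FakeDM63x:
    """Simulates the shift-register front end of a 16-channel, 16-bit driver."""
    def __init__(self):
        self.shift = []      # bits currently flowing through the chip
        self.latched = None  # what the PWM outputs actually display

    def clock_in(self, bit):            # one CLOCK pulse with DATA_IN = bit
        self.shift.append(bit)
        self.shift = self.shift[-256:]  # 16 channels x 16 bits; older bits exit DATA_OUT

    def latch(self):                    # LATCH pulse: copy the stream to the outputs
        self.latched = list(self.shift)

def send_word(drv, value, bits=16):
    for i in reversed(range(bits)):     # MSB first
        drv.clock_in((value >> i) & 1)

dm = FakeDM63x()
for value in [0x8000] * 16:             # 50% brightness on every channel
    send_word(dm, value)
dm.latch()                              # nothing visible changes until this point
print(len(dm.latched), dm.latched[0])   # 256 bits latched; first bit is 1
```

The key point the model captures: data just flows through while the latch is low, and only the latch pulse makes the new values visible.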
Step 4: Library: Memory, Declaration, Initialization
Before we dig into the library particulars there is one important thing to understand. You cannot just turn on a single LED somewhere in the middle of the chain by addressing said LED directly. You always have to send the full set of bytes according to the full number of LED drivers in a chain. So you have to assemble all this data in memory and then send it in one burst (refresh). This means you have to allocate some memory just for this task. Considering the drivers are 12- or 16-bit it means up to 2 bytes per single LED, so in fact the number of chained LEDs you can connect to an Arduino is limited by the size of its available RAM, which is also needed for other stuff your program intends to do. While all this allocation and later usage will be done by the library, it is important to keep the memory size in mind when designing any project involving a lot of LEDs.
So, the DMdriver library (finally).
The library is compatible with any chip in the DM63x line. In fact, you can use different ones in the same project, provided they are connected independently (not chained, different latch pins). It does all the basic stuff you’ll expect from such a library, leaving you to worry about what you want to achieve, not how to achieve that. So, head on to GitHub, download it and unzip in your Libraries folder.
First of all, don’t forget to
#include "DMdriver.h"
in your sketch. Now we can declare an object of the DMdriver type called, for example, TestObject:
DMdriver TestObject (DM634, 3, 9);
On declaration, you have to provide three values. First, the type of the driver you have – it can be DM631, DM632, DM633 or DM634. Secondly, the number of LED drivers chained in this particular object. Lastly, the pin connected to the latch line of this object.
You can declare multiple objects connected to different latch pins. For example, the contraption in the video and an illustration above has 4 such objects running on a mix of DM633 and DM634 chips. You can also declare the same physical object connected to one latch pin multiple times as different virtual objects if needed, just note that it will allocate more video memory (however, the actual memory allocation won’t happen at this step).
To use the DMdriver object you’ll have to initialize it once in the setup() section:
TestObject.init();
This will set up the SPI interface and allocate the needed memory to your object. The memory will be allocated as a dynamic array, this is the weird way of C++. If you initialize all your objects during setup() and never try to re-allocate their memory later that won’t be a problem.
Now, if you do the init() with no parameters, the LEDs connected will be addressed by the output pins on LED drivers, meaning 0 for the first one, 1 for the second, …, 16 for the first pin on the second chained chip and so on. Sometimes when you are designing your own board, especially if you are making it yourself, you won’t be able to install your LEDs in this particular order. This is especially cumbersome with RGB LEDs: your first RGB LED may, in fact, be connected to outputs 0, 4 and 6, the second to 1, 3 and 12 and so on. The init() function can take an optional parameter pointing to an array of values representing the physical LED connections, a look-up table.
The look-up table is declared as an array of bytes representing the string of physical LED driver pins according to the order you want them to be in. In the aforementioned case, this array should begin like this:
byte orderLED[] = {0, 4, 6, 1, 3, 12, …… }
and contain the number of bytes according to the number of all the outputs in the object, in this example 48. It should be declared before the setup() section, and now you’ll initialize the TestObject like this:
TestObject.init(orderLED);
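The look-up table is just an index remap. A small Python sketch of what the library does internally (the wiring below is the hypothetical example from the text, where RGB LED 0 sits on outputs 0, 4 and 6):

```python
# Hypothetical wiring: logical channels 0..5 map to these physical outputs
order_led = [0, 4, 6, 1, 3, 12]

outputs = [0] * 16  # the driver's output registers ("video memory")

def set_point(num, value):
    """Write 'value' to logical channel 'num' via the look-up table."""
    outputs[order_led[num]] = value

set_point(0, 1000)  # logical channel 0 = first LED's red, physically output 0
set_point(2, 500)   # logical channel 2 = first LED's blue, physically output 6
print(outputs[0], outputs[6])  # -> 1000 500
```

With the table in place, your sketch can pretend the LEDs are wired in a tidy sequential order no matter how the board was actually routed.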
There are also two destructors that will free the memory used by both arrays, but you should never use them. They only reason they are there is the C++ tradition of ‘if you have a dynamic array you must have the means to get rid of it’. On Arduino, clearing this memory won’t do any actual good and trying to allocate a new array in place of the destructed one will lead to ugliness.
Step 5: Library: Let There Be Light
Now that we have our object declared and initialized, we can start using it. As explained before, we will be building a needed picture in the ‘video memory’ and then sending it in full for the LED driver to display.
First,
TestObject.clearAll();
fills the video memory with zeros.
TestObject.setPoint(num, value);
sets the individual LED at address num to value. If the look-up table was provided during initialization, the library will get the actual pin number from it. For now, the change happens in memory only.
value = TestObject.getPoint(num);
Returns the current value of the LED #num. Note that this value is also from memory, so it won’t necessarily correspond to the actually visible color.
TestObject.setRGBpoint(num, red, green, blue);
sets the RGB LED num to the color corresponding to red, green and blue values. The num is the position of the RGB LED in your chain of RGB LEDs (starting with 0). This will set red value to the output calculated as num*3, green to num*3+1 and blue to num*3+2; it is assumed that you have either connected your RGB LEDs in the right order or used a look-up table as described before.
There are two other ways of setting up an RGB LED. Firstly,
TestObject.setRGBled(num, red, green, blue);
sets up the RGB LED that has it’s red cathode connected to output num, assuming that green is num+1 and blue is num+2. This is a tiny bit faster. Secondly,
TestObject.setRGBmax(num, red, green, blue <, max>);
works the same as the setRGBpoint() function, but has an optional max parameter. This parameter limits the overall brightness of the RGB LED; if the sum of red, green and blue exceeds max, the values will be reduced, keeping their proportions intact. This function is useful if you need to keep the power consumption in check. It can also be used for brightness control and fade-out effects; just keep in mind that it is more complex than previous functions, so it eats a bit more memory and is a bit slower.
Note that you should use only one of these three setRGB functions in your sketch to save program memory. Just select the one that suits your needs best and stick to it. That’s why max is optional in setRGBmax().
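The proportional scaling that setRGBmax() performs can be sketched in a few lines. This is Python for illustration only (the library itself is Arduino C++, and the exact rounding it uses may differ):

```python
def scale_rgb_to_max(red, green, blue, max_sum):
    """Reduce (red, green, blue) proportionally so their sum fits max_sum."""
    total = red + green + blue
    if total <= max_sum:
        return red, green, blue
    # integer scaling keeps the color's proportions while capping total current
    return (red * max_sum // total,
            green * max_sum // total,
            blue * max_sum // total)

print(scale_rgb_to_max(100, 100, 0, 100))  # -> (50, 50, 0): halved, hue preserved
print(scale_rgb_to_max(30, 20, 10, 100))   # unchanged: (30, 20, 10)
```

Because the sum of the channel values tracks the total current through the LED, capping it is a cheap way to bound power draw.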
Now that we have the needed values stored in video memory, we can do
TestObject.sendAll();
to actually display the result.
With DM633 and DM634 chips you can also set the global brightness using
TestObject.setGlobalBrightness(val);
where val is a 7-bit number from 0 to 127. Note that 0 doesn’t mean ‘shut off’, it is more like ‘very weak’. It is better to always set the GBC just after the init(), especially with DM634 chips, as they tend to change it randomly on startup.
You can also
TestObject.setGBCbyDriver(byteArray);
to set the global brightness individually for each connected DM633 or DM634 chip. This is useful if you have separate drivers controlling red, green and blue colors in all the RGB LEDs, so you can do some color calibration. The byteArray here is an array of brightness values for each of the drivers, starting with the first one in the chain.
Step 6: Library: Configuration
There is a DMconfig.h file in the library directory that has a couple of configuration options mostly intended for reducing memory usage.
The first one is
#define DM256PWM
If uncommented, this define will reduce the size of the allocated video memory to a single byte per LED, so you’ll be limited to 0-255 values, like with the PWM outputs on Arduino. Seems a loss when compared to the 16-bit initial resolution, but is not. Firstly, less memory is used. Secondly, by default this option will square the value before sending it to the driver, meaning it will actually use the full 16-bit spectrum and look rather good. At 127 it will actually look like 50% brightness (the actual value sent will be 16,129, not 32,768). Better seen than explained, try it. If you change the second part of this define to ‘<<8’, a simple bit shift will be used: faster, but doesn’t look as cool as the square option.
The DM256PWM option can also be used to produce a universal code compatible with both 12- and 16-bit LED drivers (check the examples in the library).
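As a quick sanity check of the numbers above (plain arithmetic, shown in Python):

```python
def pwm_square(v):  # value * value, the default DM256PWM mapping
    return v * v

def pwm_shift(v):   # value << 8, the faster linear option
    return v << 8

print(pwm_square(127), pwm_shift(127))  # 16129 vs 32512
print(pwm_square(255), pwm_shift(255))  # 65025 vs 65280 -- both near full scale
```

The squared curve compresses the low end and stretches the top, which roughly approximates gamma correction — that is why 127 looks like 50% brightness even though the raw value sent is well below half of the 16-bit range.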
#define DMCHAIN X
It is best explained in the illustration above. In default mode, with no optional look-up table specified, the library will consider all the LEDs connected sequentially to the driver outputs. In chains it will also mean that first driver outputs go first, then the second one’s, etc. In real life this is rarely achievable; usually, you’ll want the sides of the drivers to be connected in series. You can solve this with a look-up table, but it will take memory; so you may just use this config. The X means the number of drivers in each segment (check the second picture above).
You can also play around with the #pragma define; sometimes setting the optimization to O2 produces smaller code. Do not change the last define, it is there for future compatibility reasons.
That's it. Please note that the library is still work in progress, so some errors and bugs are possible; please report them. Also, suggestions are welcome.
This is interesting. Is there a convenient and reasonably priced source for these chips? I don't think I've ever seen them. (You aren't surprised, I can tell.)
There's about of dozen sellers of these chips at AliExpress at any given time (it's like $1 - $1.2 per piece if you buy 10). I guess the same is true for eBay. The manufacturer sells 60+ pieces, but they add postage, and it's not profitable unless you buy like 500. Ah, yes, you can also opt for my UltiBlink, certainly : )
Thank you. I'll go looking.
I knew someone would ask this, and prepared a long reply, but now I caught the flu so the reply will be short.
1. The WS2812 and similar LED-with-chip things were made for and excel in RGB LED strips.
2. In any other project they lack versatility. You are basically stuck with 5050 SMD form-factor and that's it. With an external LED driver you can connect LEDs of absolutely any form-factor imaginable, and also different stuff that requires PWM signal.
There are additional things like the weird precisely-timed protocol, their tendency to fail due to heating (actually confirmed by Adafruit) and their not-very-wide availability, but for me the versatility factor is most important.
Just in case, you do understand that all the LED drivers mentioned in my instructable are digital, right?
Yep I read it all :) I didn't do any driveby commenting :)
A long answer is here : )
I was just starting to design an LED controller for one of my projects and was stuck with the driver selection. Now I think I found it.. Thank you!
I'm really glad someone has found a practical use for my article. Thanks and good luck with your project!
Love your idea and very detail explanation!
Thanks and keep updating this, when you can!
Nice! Always good to have options... | http://www.instructables.com/id/Hundreds-of-LEDs-on-Arduino-a-New-Way-From-the-Pas/ | CC-MAIN-2017-34 | en | refinedweb |
My requirement is to remove duplicate rows from a CSV file, but the size of the file is 11.3 GB. So I benchmarked pandas against a plain Python file generator.
Python File Generator:
def fileTestInPy():
    with open(r'D:\my-file.csv') as fp, open(r'D:\mining.csv', 'w') as mg:
        dups = set()
        for i, line in enumerate(fp):
            if i == 0:
                continue
            cols = line.split(',')
            if cols[0] in dups:
                continue
            dups.add(cols[0])
            mg.write(line)
            mg.write('\n')
Pandas:

import pandas as pd

df = pd.read_csv(r'D:\my-file.csv', sep=',', iterator=True, chunksize=1024*128)

def fileInPandas():
    for d in df:
        d_clean = d.drop_duplicates('NPI')
        d_clean.to_csv(r'D:\mining1.csv', mode='a')
Pandas is not a good choice for this task. It reads the entire 11.3G file into memory and does string-to-int conversions on all of the columns. I'm not surprised that your machine bogged down!
The line-by-line version is much leaner. It doesn't do any conversions, doesn't bother looking at unimportant columns and doesn't keep a large dataset in memory. It is the better tool for the job.
def fileTestInPy():
    with open(r'D:\my-file.csv') as fp, open(r'D:\mining.csv', 'w') as mg:
        dups = set()
        next(fp)  # <-- advance fp so you don't need to check each line
                  #     or use enumerate
        for line in fp:
            col = line.split(',', 1)[0]  # <-- only split what you need
            if col in dups:
                continue
            dups.add(col)
            mg.write(line)
            # mg.write('\n')  # <-- line still has its \n, did you want another?
Also, if this is python 3.x and you know your file is ascii or UTF-8, you could open both files in binary mode and save a conversion. | https://codedump.io/share/F8xHfOnXOax1/1/is-pandas-readcsv-really-slow-compared-to-python-open | CC-MAIN-2017-34 | en | refinedweb |
I'm trying to write a phonebook application using Tkinter. It's not very advanced, and it's mentioned in this tutorial:
The full source of the tutorial's version is here:
However, I'm trying to write a version myself, yet I keep getting the following error:
File "phonebook_test.py", line 96, in ? win = MakeWindow()
File "phonebook_test.py", line 55, in MakeWindow
Load_but = Button(f2,text="Load", command=load())
File "phonebook_test.py", line 13, in load
name, phone = phonelist[whichSelected()]
File "phonebook_test.py", line 10, in whichSelected
return int(Contacts_lbox.curselection())
NameError: global name 'Contacts_lbox' is not defined
My code is here:
from Tkinter import *

phonelist = [['Mary Mag', '01467393'],
             ['Bob hilder', '9370703'],
             ['Fire & Res', '999']]

def whichSelected () :
    #print "At %s of %d" % (Contacts_lbox.curselection(), len(phonelist))
    return int(Contacts_lbox.curselection())

def load():
    name, phone = phonelist[whichSelected()]
    name_string.set(name)
    number_string.set(phone)

def MakeWindow():
    global Contacts_lbox, number_string, name_string
    win = Tk()

    ###frame 1###
    f1 = Frame(win)
    name = StringVar()
    number = StringVar()
    name_string = StringVar()
    name_box = Entry(f1, textvariable=name_string)
    number_string = StringVar()
    number_box = Entry(f1, textvariable=number_string)
    name_label = Label(f1, text="Name:")
    number_label = Label(f1, text="Number:")
    #Pack/Grid statements
    f1.pack()
    name_label.grid(row=0, column=0)
    name_box.grid(row=0, column=1)
    number_label.grid(row=1, column=0)
    number_box.grid(row=1, column=1)

    ###frame 2###
    f2 = Frame(win)
    Add_but = Button(f2, text="Add")
    Update_but = Button(f2, text="Update")
    Delete_but = Button(f2, text="Delete")
    Load_but = Button(f2, text="Load", command=load())
    #Pack/Grid statements
    f2.pack()
    Add_but.grid(row=0, column=0)
    Update_but.grid(row=0, column=1)
    Delete_but.grid(row=0, column=2)
    Load_but.grid(row=0, column=3)

    ###frame 3###
    #Declarations
    f3 = Frame(win)
    Contacts_lbox = Listbox(f3, height=5)
    Scroll = Scrollbar(f3, orient=VERTICAL)
    #Pack statements
    f3.pack()
    Contacts_lbox.pack(side=RIGHT, fill=BOTH)
    Scroll.pack(side=LEFT, fill=Y)
    #Configure Statements
    Scroll.configure(command=Contacts_lbox.yview)
    Contacts_lbox.configure(yscrollcommand=Scroll.set)
    return win

def setSelect () :
    phonelist.sort()
    Contacts_lbox.delete(0, END)
    for name, phone in phonelist :
        Contacts_lbox.insert(END, name)

win = MakeWindow()
setSelect ()
win.mainloop()
The purpose of the load button is to display in the text boxes the information about the selected listbox entry.
setSelect sorts and then displays the contents of the listbox.
The contents of the listbox are stored within phonelist.
WhichSelected() returns the index of the selected item within the listbox (Contacts_lbox).
I've tried placing my global declaration in all sorts of places, and i've tried googling, but I just can't see what the solution to my problem is. Any help would be greatly appreciated!
Also, what does:
print "At %s of %d" % (Contacts_lbox.curselection(), len(phonelist))
(It's commented out - near the top) actually do? Because I can't work that out either.
I'd be grateful for any help. | https://www.daniweb.com/programming/software-development/threads/138449/global-name-not-defined | CC-MAIN-2017-34 | en | refinedweb |
In our last post we were discussing the importance of good tools to developers and how they help us to perform tasks like testing, profiling, decompiling, and building applications more quickly. Today I will take us a little further down this path as I explore one of the new ‘tools’ that we are offering in our Q2 2011 Xaml release – RadImageEditor.
A Quick Note
I had a few questions directed my way in regards to a line from the last post:
“…as well as providing controls that are out-of-the-box “tools” to bring enhanced functionality to your applications.”
Let me explain this a little further, as this is the entire topic behind this series and it somehow got buried in my line of thought last time. At Telerik, we have been working with the WPF and Silverlight platforms since they were released and have a significant amount of experience in working with the Xaml platforms. We have built rich controls like RadGridView, RadChart, and RadScheduleView, just to name a few, that help developers to build richer applications faster in modern development environments. Our controls matured with these technologies, making improvements based on both customer requirements and new functionality available as the frameworks have evolved.
So what do you do when you have an industry-leading suite that has several years’ worth of maturity under its belt? You enhance it further, both with new controls and with this new class of controls that I’m referring to as “tools”. We’ve done the heavy lifting via architecture, design, testing, and ease-of-implementation, allowing you to drop these “tools” into your application and instantly add significant rich functionality to your application, often without writing a single line of code outside of the Xaml needed to get it to display.
Don’t believe me? Read on!
Let’s Edit Some Images
When working with RadRichTextBox, many customers mentioned to us the need for a few features that would greatly enhance the Word-like experience and help to make the control a complete rich text editing experience. I’ll talk more about RadRichTextBox and the significant enhancements it is seeing in Q2 (including ImageEditor integration) later this week, but today I want to focus on the new RadImageEditor tool and how easy it is to implement in your applications.
The team here has developed RadImageEditor with a command-based architecture that allows you to plug in as many or as few options as you would like for your end users, giving you complete control over how this new tool will be implemented in your application. To start, we add the new RadImageEditorUI to our Xaml:
<telerik:RadImageEditorUI x:
…and can instantly see the results in our designer window:
If I ran this right now, I could open images, zoom in and out, and save them back to my desktop, all with only having written a line of Xaml.
Adding Functionality
Once we have the ability to load images into the control, the need arises to be able to do something with that image. To get started on this, I’m actually going to direct your attention to the Q2 2011 Beta Demos so you (and I) can grab a bit of code that the team has already provided us with in the First Look demo. You can access the code via the ‘Code’ link in the upper-right corner of the example, then select Example.Xaml from the list over on the left to get to the page we need. We first need to set two namespaces in addition to the standard Telerik Xaml namespace:
<UserControl ...
    xmlns:tools="clr-namespace:Telerik.Windows.Media.Imaging.Tools;assembly=Telerik.Windows.Controls.ImageEditor"
    xmlns:commands="clr-namespace:Telerik.Windows.Media.Imaging.ImageEditorCommands.RoutedCommands;assembly=Telerik.Windows.Controls.ImageEditor"
    ...>
Once those are in place, scroll through the demo code and you'll be able to see the structure of how RadImageEditor is set up. The basic structure for commands (as far as Xaml is concerned) is a Section holding one or more ToolItems, each of which can carry a CommandParameter.
Breaking this down a bit, a Section will hold a group of related tools and a ToolItem represents functionality that you want to perform. If the ToolItem performs some more complex functionality that is configurable by the user, we will additionally have a CommandParameter that links to a Tool which will display within the ImageEditorUI. As an example, here is the Rotate 90 Degrees Counter-ClockWise (doesn’t require additional input) and the Rounded Corners ToolItems, showing the difference in how they will be represented in code:
<telerik:ImageToolItem ImageKey="Rotate90CCW"
                       Text="Rotate270"
                       Command="commands:ImageEditorRoutedCommands.Rotate90Counterclockwise" />
<telerik:ImageToolItem ImageKey="RoundCorners"
                       Text="Round Corners"
                       Command="commands:ImageEditorRoutedCommands.ExecuteTool">
    <telerik:ImageToolItem.CommandParameter>
        <tools:RoundCornersTool />
    </telerik:ImageToolItem.CommandParameter>
</telerik:ImageToolItem>
Now we can resize, flip, modify colors, crop, and more, right within our Silverlight and WPF applications.
Wrapping Up
Normally I’ll include a project download with projects like this, but since this tool is so easy to use as well as implement, simply point your browser to the First Look demo, copy the three namespaces and the contents of the RadImageEditorUI tag to your app, and you’re good to go!
Hopefully you can see how an easy to use ‘tool’ like RadImageEditor is a great addition to your Telerik Xaml toolbox and just how simple our team is making it to bring rich functionality to your applications. Stay tuned for our next post in this series where we explore the new features of our rich text editing tool, RadRichTextBox, in-depth.. | http://www.telerik.com/blogs/q2-2011-xaml-tools---radimageeditor | CC-MAIN-2017-34 | en | refinedweb |
.
Activity
- All
- Work Log
- History
- Activity
- Transitions
also: build.properties is reserved name in eclipse pde; I suggest pivot-build.properties instead
I was piggybacking on two things:
1) ApplicationContext already has a getJVMVersion() method, so this is very similar – since ApplicationContext is in the "wtk" area, this is where it had to happen. However, I changed the generic <package> macro, so actually the version file "build.properties" is embedded in all the .jar files.
2) The "build.properties" file was already being used as the sole source for the Pivot version number, so I added nothing new. If the name needs to be changed for Eclipse, that is definitely outside the realm of this small change.
AFAIK the classloader I'm using should be correct because it must load the build.properties from the same .jar file where the ApplicationContext.class file is located. Is there a problem with that in other environments??
Thanks.
Embedding the file in all JARs seems like the right thing to do, and the location of the method (in ApplicationContext) makes sense to me.
FYI, I have used Eclipse for Pivot development since day one and the existence of the build.properties file has never been a problem.
I know this is pointless, nonetheless
re: "location of the method (in ApplicationContext) "
I am sorry I was not clear: you already has version in core:
public class Version implements Comparable<Version>, Serializable {
why not provide singe static final field in core, which does parsing of build version
and other build context information as you add more of it and expose it as Version or some new BuildInfo interface
so nobody knows build.properties exists
re: "since day one"
try using pde:
> re: "location of the method (in ApplicationContext) "
> I am sorry I was not clear: you already has version in core:
> public class Version implements Comparable<Version>, Serializable {
> why not provide singe static final field in core, which does parsing of build version
Where would this field live? Certainly not in the Version class itself, since Version is a generic class representing a four-value revision number (in other words, it isn't specific to Pivot). I don't see any problem leaving it in ApplicationContext.
> so nobody knows build.properties exists
In the current implementation, only ApplicationContext needs to know about it. How would moving it to another class be any different?
> try using pde:
I have. How do you think the Pivot plugin was built?
That's not something I have time to tackle at the moment, but if you are willing to set it up I'd be happy to help if I can.
Note that after this little change (in ApplicationContext) I'm no longer able to run some bxml files from eclipse ... if the property file is not found probably I could have a warning in console, but without blocking the execution, ok ?
For example, try to run "As Pivot script application": tests/ ... / multiple_selection_list.bxml , this is the result:
java.lang.ExceptionInInitializerError
Caused by: java.lang.NullPointerException
at java.util.Properties$LineReader.readLine(Properties.java:418)
at java.util.Properties.load0(Properties.java:337)
at java.util.Properties.load(Properties.java:325)
at org.apache.pivot.wtk.ApplicationContext.<clinit>(ApplicationContext.java:1518)
Exception in thread "main"
Tell me what you think.
I'm not sure that silently failing is the right solution. It sounds like build.properties is not on the classpath. We probably need to update the Eclipse projects to include this file.
Ok, but "silently" not so much, because something is written in the Console but I agree that could not be the best.
And Greg, are we sure that not finding such file is right to be a fatal error (in Startup of applications) ?
> We probably need to update the Eclipse projects to include this file.
Ok, let's see if it's a good solution, we can try it now ...
I'd be happy to do it, except that I know next to nothing about Eclipse... But if someone can point me to what has to change I can do it.
Sandro, all I did was add our "build.properties" file to all the built .jar files. So, if there is a setting in the Eclipse projects that specifies what the .jar files contain, that just needs to be updated (or something like that).
Hi Roger, don't worry
... to add the build.properties to any Pivot jar we can do it inside our Ant build process.
The only problem now is that from the development environment with all Pivot sources (from trunk) we have to see if it's possible to add a reference to that file for any Pivot subproject (like tests, examples, etc). I don't know if it's possible inside eclipse, but I'll try to see as soon as possible.
In the meantime using the patch2 in eclipse here will solve the problem, as a workaround.
Just a note:
I've just seen that inside Pivot jars (for example all jars of Pivot-2.0), in the manifest there is this element:
Implementation-Version: 2.0
where the release number if taken from build.properties by ant ...
so why not try to use directly this info (and not bundle a copy of build.properties) ?
And when not available (like when there aren't Pivot jars but full sources instead) verify if load from build.properties (if possible, after some checks in eclipse) or if log it as a warning/error ...
Bye,
Sandro
Can't we just set the classpath for Eclipse to include our "build.properties" and leave the code the way it was? If we just ensure that "build.properties" is always available then there is nothing more to do. I just don't know how to set the Eclipse classpath. Is that done in some of the .settings files? Or somehow via the .classpath file?
> to add the build.properties to any Pivot jar we can do it inside our Ant build process.
FYI, Roger's patch already added this to build.xml.
> I've just seen that inside Pivot jars (for example all jars of Pivot-2.0), in the manifest there is this element:
> Implementation-Version: 2.0
I actually wrote code to do this a long time ago, but it was pretty messy. The manifest isn't on the classpath, so it's not easy to get to programmatically.
Ok, I'll try to add the build.properties to our eclipse projects, with the assumption that it is 1 level upper (in root of the workspace folder) ... and keep you updated before committing.
another wild idea: there is no need for build.properties at all;
use instead:
getClass().getPackage().getSpecificationVersion();
getClass().getPackage().getSpecificationTitle();
getClass().getPackage().getImplementationVersion();
getClass().getPackage().getImplementationTitle();
Okay, so here's a patch that reverts the addition of "build.properties" to the .jar files and instead uses Andrei's method to get the Implementation-Version (which is already set by the value from "build.properties"). It should be fail-safe in the case of running from .class files and not from .jars.
Patch to use Package.getImplementationVersion() instead of the "build.properties" file; removes "build.properties" from the .jar files.
I was not aware of Package#getSpecificationVersion(), etc. - good find. I assume that this pulls the version info from the JAR manifest?
From my testing, yes, it pulls the exact "version" value set in the Jar manifest for Implementation-Version.
What do you think of my "version2.patch"?
Patch looks good. I applied a slightly modified version - let me know if anything looks amiss.
The only think I was wondering (which is why I was super-cautious) is that getPackage() can return null (such as when Pivot is run from .class files instead of a .jar?) so I didn't want Sandro's case to be broken again if that happened.
A class should still have a valid package whether it was loaded from a JAR or a directory, so I think it is safe to assume that getPackage() will return non-null.
getPackage() returns null when you run exploded jar applet (legacy classloder);
this currently breaks wtk, wtk-terra
That's unfortunate. It's probably not a case we really need to worry about, but the fix is simple enough so we might as well do it.
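The guarded lookup being discussed boils down to something like the following sketch (illustrative only — the class and fallback names are invented, and this is not the actual Pivot patch):

```java
final class VersionInfo {

    private VersionInfo() {}

    // Read the Implementation-Version from the jar manifest via Package.
    // getPackage() may return null (e.g. with legacy applet class loaders),
    // and getImplementationVersion() is null when running from plain .class
    // files, so fall back to a default in both cases.
    static String implementationVersion(Class<?> cls, String fallback) {
        Package pkg = cls.getPackage();
        String version = (pkg == null) ? null : pkg.getImplementationVersion();
        return (version == null) ? fallback : version;
    }
}
```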
Hi all,
(after a synchronize with the trunk) I've tried to run/debug some Test classes as Applications, and as Applets from my eclipse and all works good:
ApplicationContext.class.getPackage() never returns null, but String version = ApplicationContext.class.getPackage().getImplementationVersion(); will be null, so a safe pivotVersion will be initialized.
If it's Ok for all, to me seems that this issue could be marked as resolved.
Thank you to all.
Here is a patch that implements this feature. | https://issues.apache.org/jira/browse/PIVOT-746?attachmentOrder=desc | CC-MAIN-2017-34 | en | refinedweb |
Edit > Preferences > Build & Run > General. The CMake tab contains additional settings for CMake. You can find more settings for CMake in Edit > Preferences > Kits > CMake and for Qbs in Edit > Preferences > Edit mode.
Selecting Project Type
The following table lists the wizard templates for creating projects.
To create a new project, select File > New Project. Qt for Python projects let you use the Qt 6 API in Python applications. You can use the PySide6 modules to gain access to individual Qt modules, such as Qt Core, Qt GUI, and Qt Widgets.
If you have not installed PySide6, Qt Creator prompts you to install it after the project is created. Further, it prompts you to install the Python language server that provides services such as code completion and annotations. Select Install to install PySide6 and the language server.
To view and manage the available Python interpreters, select Edit > Preferences > Python > Interpreters.
You can add and remove interpreters and clean up references to interpreters that have been uninstalled, but still appear in the list. In addition, you can set the interpreter to use by default.

The Window UI wizard enables you to create a Python project that contains the source file for a class. Specify the PySide version, class name, base class, and source file for the class.
The wizard adds the imports to the source file to provide access to the QApplication, the base class you selected in the Qt Widgets module, and Qt UI tools:
import sys
from PySide6.QtWidgets import QApplication, QWidget
Note: It is important that you first create the Python code from your UI form. In PySide6, you can do this by executing
pyside6-uic form.ui -o ui_form.py on a terminal. This enables you to import the class that represents your UI from that Python file.
Once you generate the Python code from the UI file, you can import the class:
from ui_form import Ui_Widget
The wizard also adds a main class with the specified name that inherits from the specified base class:
class Widget(QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
The following lines in the main class instantiate the generated Python class from your UI file, and set up the interface for the current class.
self.ui = Ui_Widget()
self.ui.setupUi(self)
Note: UI elements of the new class can be accessed as member variables. For example, if you have a button called button1, you can interact with it using
self.ui.button1.

Next, the wizard adds the application entry point, which creates the QApplication, instantiates the Widget class, and shows it:

if __name__ == "__main__":
    app = QApplication(sys.argv)
    widget = Widget()
    widget.show()
    sys.exit(app.exec())
Always regenerate the Python code after modifying a UI file..
Specifying Project Contents
A project can contain files that should be:
- Compiled or otherwise handled by the build
- Installed
- Not installed, but included in a source package created with
make dist
- Not installed, nor be part of a source package, but still be known to Qt Creator
Qt Creator displays all files that are declared to be part of the project by the project files in the Projects view. The files are sorted into categories by file type (.cpp, .h, .qrc, and so on). To display additional files, edit the project file. Alternatively, you can see all the files in a project directory in the File System view.
Declaring files as a part of the project also makes them visible to the locator and project-wide search.
CMake Projects
When using CMake, you can specify additional files for a project by either adding them as sources or installing them.
In the CMakeLists.txt file, define the files as values of the target_sources command using the
PRIVATE property, for example.
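A minimal sketch of such a target_sources call (the target and file names here are invented):

```cmake
target_sources(mytarget
  PRIVATE
    notes.txt
    images/splash.png
)
```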
You can prevent CMake from handling some files, such as a .cpp file that should not be compiled. Use the set_property command and the HEADER_FILE_ONLY property to specify such files. For example:
set_property(SOURCE "${files}" PROPERTY HEADER_FILE_ONLY ON)
Alternatively, to install the files, use the install command with the
FILES or
DIRECTORY property.
qmake Projects
Use the following variables in the .pro file:
- SOURCES and HEADERS for files to compile
- INSTALLS for files to install
- DISTFILES for files to include in a source package
- OTHER_FILES for files to manage with Qt Creator without installing them or including them in source packages
For example, the following value includes text files in the source package:
DISTFILES += *.txt
CMake Projects
You can add CMakeLists.txt files to any project by using the add_subdirectory command. The files can define complete projects that are included into the top-level project or any other CMake commands.
qmake Projects
When you create a new project and select qmake as the build system, you can add it to another project as a subproject in the Project Management dialog. However, the root project must specify that qmake uses the
subdirs template to build the project.
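A root project file using that template might look like this (the subproject names are invented):

```
TEMPLATE = subdirs
SUBDIRS = app lib
```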
To create a root project, select File > New Project.
Keyboard shortcuts for wizards can be set in Edit > Preferences > Environment > Keyboard > Wizard. All wizard actions start with Impl there.
Returns specific acceleration measurement which occurred during last frame. (Does not allocate temporary variables).
using UnityEngine;
public class Example : MonoBehaviour
{
    // Calculates weighted sum of acceleration measurements which occurred during the last frame
    // Might be handy if you want to get more precise measurements
    void Update()
    {
        Vector3 acceleration = Vector3.zero;
        for (var i = 0; i < Input.accelerationEventCount; ++i)
        {
            AccelerationEvent accEvent = Input.GetAccelerationEvent(i);
            acceleration += accEvent.acceleration * accEvent.deltaTime;
        }
        print(acceleration);
    }
}
Here’s a challenge for you: Reconstruct the sinogram below using the the ASTRA Tomography Toolbox that I introduced in the previous article. You’ll have to figure out the exact meaning of a sinogram like this to be able to do that. For more information on tomography, have a look at my series of articles on that subject.
Send in your solution (your script and the resulting image) to tom at tomroelandts dot com (sorry for the spam avoidance measure). Your solution can be in MATLAB or Python, but it has to use the ASTRA Toolbox. You can choose any reconstruction algorithm you want. This challenge remains open until the end of March 2014.
I’ll randomly pick a winner from the correct solutions and buy him or her sushi in Antwerp.
[update] The challenge is now closed, and, unfortunately, I have not received any correct solutions… My solution is shown below, together with a Python script that produces it.
import numpy as np
from scipy import misc

import astra

# Load sinogram.
sinogram = misc.imread('challenge-sinogram.png').astype(float) / 255

# Create geometries and projector.
num_angles, det_count = sinogram.shape
vol_geom = astra.create_vol_geom(det_count, det_count)
angles = np.linspace(0, np.pi, num_angles, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1., det_count, angles)
projector_id = astra.create_projector('linear', proj_geom, vol_geom)

# Create sinogram_id and store sinogram.
sinogram_id = astra.data2d.create('-sino', proj_geom, sinogram)

# Configure and run the reconstruction
# (FBP here; any reconstruction algorithm works).
reconstruction_id = astra.data2d.create('-vol', vol_geom)
config = astra.astra_dict('FBP')
config['ReconstructionDataId'] = reconstruction_id
config['ProjectionDataId'] = sinogram_id
config['ProjectorId'] = projector_id
algorithm_id = astra.algorithm.create(config)
astra.algorithm.run(algorithm_id)

# Retrieve and save the reconstruction.
reconstruction = astra.data2d.get(reconstruction_id)
misc.imsave('42-reconstruction.png', reconstruction)

# Cleanup.
astra.algorithm.delete(algorithm_id)
astra.data2d.delete(reconstruction_id)
astra.data2d.delete(sinogram_id)
astra.projector.delete(projector_id)
If you play a lot of games, you probably noticed at some point in time that the version number or build number of the game is often presented clearly on the screen. This is often not by accident and is instead a way to show users that they are using the correct version of your game. Not only that, but it can help from a development perspective as well. For example, have you ever built a game and had a bunch of different builds floating around on your computer? Imagine trying to figure out which build is correct without seeing any build or version information!
In this tutorial we’re going to see how to very easily extract the build information defined in the project settings of a Unity game.
If you came here expecting something complex, think again, because what we’re about to do won’t take much. In fact, to get the version number you really only need the following line:
Debug.Log(Application.version);
However, printing the version number to your console isn’t really going to help anyone.
Instead, you might want to create a game object with a
Text component attached. Then when the application loads, you can set the game object to display the version information on the screen.
Start by creating a VersionNumber.cs file within your Assets directory. In the VersionNumber.cs file, add the following C# code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class VersionNumber : MonoBehaviour
{
    private Text _versionNumberText;

    void Start()
    {
        _versionNumberText = GetComponent<Text>();
        _versionNumberText.text = "VERSION: " + Application.version;
    }
}
Like previously mentioned, the above code will get a reference to the
Text component for the game object this script is attached to. After, it will set the text for that component to the application version. This will work for all build targets.
Attach the VersionNumber.cs script to one of your game objects with a
Text component and watch it work.
I’d like to say it is more complicated than this, but it isn’t.
A video version of this tutorial can be seen below. | https://www.thepolyglotdeveloper.com/2022/02/extract-version-information-game-unity-csharp/ | CC-MAIN-2022-40 | en | refinedweb |
#include <CGAL/Arr_conic_traits_2.h>
The class
Arr_conic_traits_2 is a model of the
ArrangementTraits_2 concept and can be used to construct and maintain arrangements of bounded segments of algebraic curves of degree \( 2\) at most, also known as conic curves.
A general conic curve \( C\) is the locus of all points \( (x,y)\) satisfying the equation \( r x^2 + s y^2 + t x y + u x + v y + w = 0\).
A bounded conic arc is defined as either of the following:

- A full conic curve, which must be an ellipse, as this is the only type of bounded conic curve.
- An arc along a conic curve \( C\), given by its supporting curve, its two endpoints, and its orientation; a line segment is represented as an arc with COLLINEAR orientation.

The conic coefficients (namely \( r, s, t, u, v, w\)) must be rational numbers. This guarantees that the coordinates of all arrangement vertices (in particular, those representing intersection points) are algebraic numbers of degree \( 4\) (a real number \( \alpha\) is an algebraic number of degree \( d\) if there exists a polynomial \( p\) with integer coefficients of degree \( d\) such that \( p(\alpha) = 0\)). We therefore require separate representations of the curve coefficients and the point coordinates, which is reflected in the curve and \( x\)-monotone curve types, as detailed below.
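As a reminder (standard conic classification, not specific to this traits class), whether the curve is bounded is determined by the discriminant of its quadratic part:

```latex
\Delta = t^2 - 4 r s,\qquad
\begin{cases}
\Delta < 0, & \text{ellipse (the bounded case)},\\
\Delta = 0, & \text{parabola},\\
\Delta > 0, & \text{hyperbola}.
\end{cases}
```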
While the
Arr_conic_traits_2 models the concept
ArrangementDirectionalXMonotoneTraits_2, the implementation of the
Are_mergeable_2 operation does not enforce the input curves to have the same direction as a precondition. Moreover,
Arr_conic_traits_2 supports the merging of curves of opposite directions.
ArrangementLandmarkTraits_2
ArrangementDirectionalXMonotoneTraits_2
Types | https://doc.cgal.org/4.12/Arrangement_on_surface_2/classCGAL_1_1Arr__conic__traits__2.html | CC-MAIN-2022-40 | en | refinedweb |
User:Lahwaacz/Tasks
Long-term tasks
- figure out what to do with Extra Keyboard Keys in Xorg
- fix Autofs page/section
- QEMU: split network topologies into separate section (poor writing flag)
- also consider moving the link-level part of the bridged networiking into Network bridge (expansion flag)
- finish Category_talk:File_systems#This_category_should_be_split
- investigate [1] when FS#40661 is closed
- Gold_linker
- Help talk:Editing#Editor assistants: userscripts, external editors...
- expand instructions in ArchWiki:Requests#Broken_package_links
- look at the new systemd-* groups, describe in Users and groups
- look at 'file attributes', 'extended attributes', 'file capabilities'
- implement Template:Bots (aka w:Template:Bots)
- discussion about hostname resolution
- review Wireless bonding, Talk:Wireless bonding#wpa_supplicant
Back-end
all {wiki} bugs :: User:Kynikos/Tasks#Back-end
- suggest setting $wgRegisterInternalExternals to true, see User_talk:Kynikos#External_links
- publicize Special:Undelete: ArchWiki:Requests#Should we remove or archive obsolete articles?
- archlinux skin: search button color in Special:Search (T90336) + suggestions background
MediaWiki bugs
- Expose cache type in Special:Version
- Expose full parser configuration as part of the API
- some parts are available (interwiki map, namespaces), some are not: $wgLegalTitleChars, $wgIllegalFileChars, $wgUrlProtocols, $wgNamespacesWithSubpages (for {{BASEPAGENAME}}), $wgMaxTemplateDepth | https://wiki.archlinux.org/title/User:Lahwaacz/Tasks | CC-MAIN-2022-40 | en | refinedweb |
- Proxy Creation Features
- Binding Options
- Weight Painting Features
How to Run:
import rr_main_mech
rr_main_mesh.window_creation()
If you have any feedback or suggestions for the tool, please feel free to contact me at conley.setup@gmail.com
Cheers and happy rigging,
Jen
Version 1.0.1
Updated code for consistency across RigBox Reborn scripts.
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section. | https://ec2-34-231-130-161.compute-1.amazonaws.com/maya/script/rigbox_reborn-mesh-tool-for-maya | CC-MAIN-2022-40 | en | refinedweb |
public class LdapUserDetailsService extends java.lang.Object implements UserDetailsService
LdapUserSearch and an LdapAuthoritiesPopulator. The final UserDetails object returned from loadUserByUsername is created by the configured UserDetailsContextMapper.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public LdapUserDetailsService(LdapUserSearch userSearch)
public LdapUserDetailsService(LdapUserSearch userSearch, LdapAuthoritiesPopulator authoritiesPopulator)
public UserDetails loadUserByUsername(java.lang.String username) throws UsernameNotFoundException
Throws:
UsernameNotFoundException - if the user could not be found or the user has no GrantedAuthority
public void setUserDetailsMapper(UserDetailsContextMapper userDetailsMapper) | https://docs.spring.io/spring-security/site/docs/5.1.8.RELEASE/api/org/springframework/security/ldap/userdetails/LdapUserDetailsService.html | CC-MAIN-2022-40 | en | refinedweb |
Model–view–controller (MVC) is a software architecture pattern. Initially intended for desktop computing, in recent years it has also been used for developing web applications.
Model - Model is the data or objects that the application interacts with.
View - View provides an interactive user interface to display the data provided by the model.
Controller - Controller is the module that handles interaction between the view and model components. It processes the user request, calls the appropriate model and sends the data to be rendered to the view to display it to the user.
Java Spring MVC framework helps in developing loosely coupled applications using MVC architecture. Understanding and setup of basic Spring MVC Framework can be done by following this tutorial:
Most applications require database access. User interface is provided by most applications to allow user to add, delete and update data. jTable provides a jQuery plugin to create interactive tables which allow the same functions of adding, deleting and updating data. It also has several features like: paging, sorting, create or edit record in dialog form, master/child tables etc.
See detailed documentation.
The goal of this article is to show how jTable can be used easily in integration with Java Spring.
Two basic classes are Student and City. Student class represents a record in Student database table and City class represents a record in City database table. A student lives in a city and a list of cities is provided for the student to select. Student table has CityId which is the Id in the City table.
Student Class is as follows:
public class Student {
public int id;
@NotNull
@Size(min = 2, max = 30)
public String name;
@NotNull
public String email;
@NotNull
public String password;
@NotNull
public String gender;
@NotNull
public int city_id;
@NotNull
@DateTimeFormat(pattern = BaseController.DATE_FORMAT)
@JsonSerialize(using = JsonDateSerializer.class)
public Date birth_date;
@NotNull
public int education;
public String about;
@NotNull
public String active_flg;
@DateTimeFormat(pattern = BaseController.DATE_FORMAT)
@JsonSerialize(using = JsonDateSerializer.class)
public Date record_date;
}
City Class is as follows:
public class City {
public int id;
@NotNull
public String name;
}
Similarly, classes are created for StudentResults, StudentPhone, and Course.
Getter and setter methods were added to all the classes. jTable allows master/child tables. StudentPhone and StudentResults are defined for the child tables.
Every class in the model has a respective controller. The controller defines the methods for listing, deleting and saving records. These methods make a call to methods defined in the model for database access. The results returned are provided to the view for displaying.
Student Controller:
List Method
public JTableResult List(JTableRequest jTableRequest) {
JTableResult rslt = new JTableResult();
try {
JdbcTemplate jdbcTemplate = getJdbcTemplate();
return Student.retrievePage(jdbcTemplate, jTableRequest);
} catch (Exception ex) {
rslt.Result = "Error";
rslt.Message = ex.getMessage();
return rslt;
}
}
The retrievePage method returns records based on the paging and sorting parameters.
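jTable sends its paging and sorting choices as the request parameters jtStartIndex, jtPageSize and jtSorting. As a rough, hypothetical sketch (not the article's actual code — the class and method names below are made up for illustration), a retrievePage-style method can translate them into an HSQLDB-style query:

```java
public class PagedQueryBuilder {

    // Builds a paged, sorted query; "sorting" arrives as e.g. "name ASC".
    static String buildQuery(String table, String sorting, int startIndex, int pageSize) {
        return "SELECT * FROM " + table
                + " ORDER BY " + sorting
                + " LIMIT " + pageSize
                + " OFFSET " + startIndex;
    }

    public static void main(String[] args) {
        // Second page of 10 students, default sorting from JTableInfo.
        System.out.println(buildQuery("STUDENT", "name ASC", 10, 10));
    }
}
```

In real code the sorting string should be validated against a whitelist of column names before being concatenated into SQL, to avoid injection.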
Save Method
public JTableResult Save(HttpServletRequest request, @Valid Student student, BindingResult bindingResult) {
JTableResult rslt = new JTableResult();
if (bindingResult.hasErrors()) return toError(bindingResult);
int action = Integer.parseInt(request.getParameter("action"));
if (student.active_flg == null) student.active_flg = "N";
try {
JdbcTemplate jdbcTemplate = getJdbcTemplate();
if (action == 1) {
int id = Student.insert(jdbcTemplate, student);
rslt.Record = Student.retrieveById(jdbcTemplate, id);
} else {
Student.update(jdbcTemplate, student);
rslt.Record = Student.retrieveById(jdbcTemplate, student.id);
}
rslt.Result = "OK";
return rslt;
} catch (Throwable ex) {
rslt.Result = "Error";
rslt.Message = ex.getMessage();
return rslt;
}
}
The Save method calls the insert or the update method of the Student.java class based on the value of the action parameter passed. Datatype error handling (explained later) is also taken care of in this section.
Delete Method
public JTableResult Delete(int id) {
JTableResult rslt = new JTableResult();
try {
JdbcTemplate jdbcTemplate = getJdbcTemplate();
Student.delete(jdbcTemplate, id);
rslt.Result = "OK";
return rslt;
} catch (Exception ex) {
rslt.Result = "Error";
rslt.Message = ex.getMessage();
return rslt;
}
}
Delete method of Student.java class is called and it deletes the student record of that particular student id.
Similarly, the following controllers are created: CityController, CourseController, StudentResultController, StudentPhoneController.
jTable expects the data in a certain format. In order to implement that, JTableResult.java and JTableResponse.java were created with the expected fields. These fields are set as per the jTable requirements.
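For reference, jTable's JSON protocol uses fields named Result, Message, Records, TotalRecordCount and Record. A minimal stand-in class (the article's actual JTableResult/JTableResponse classes may differ in detail) could look like:

```java
import java.util.List;

// Stand-in showing the response fields jTable expects in JSON responses.
public class JTableResultSketch {
    public String Result;           // "OK" or "Error"
    public String Message;          // error text when Result is "Error"
    public List<Object> Records;    // rows returned by listAction
    public int TotalRecordCount;    // total row count, used for paging
    public Object Record;           // single row returned by createAction

    public static void main(String[] args) {
        JTableResultSketch rslt = new JTableResultSketch();
        rslt.Result = "OK";
        rslt.TotalRecordCount = 42;
        System.out.println(rslt.Result + " " + rslt.TotalRecordCount);
    }
}
```

Spring's JSON serialization turns public fields like these into exactly the object shape jTable consumes on the client side.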
For select, radio button, and checkbox fields, jTable supports code-value pair data.
Ex: options: { '1': 'Home phone', '2': 'Office phone', '3': 'Cell phone' }
For our Student example, we have City and Course as select dropdowns. The key-value pairs for the lists of cities and courses are stored using a HashMap (retrieveAll method in City.java and Course.java). Spring takes care of converting it correctly.
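A hypothetical sketch of what such a retrieveAll method might return (sample rows hard-coded here instead of a JdbcTemplate query — the real method reads the CITY table):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical shape of City.retrieveAll: id -> name pairs that Spring
// serializes to the JSON object jTable expects for "options".
public class CityOptionsSketch {

    static Map<String, String> retrieveAll() {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("1", "London");     // sample data only
        options.put("2", "Istanbul");
        return options;
    }

    public static void main(String[] args) {
        System.out.println(retrieveAll());
    }
}
```

A LinkedHashMap keeps insertion order, so the dropdown entries appear in the order the rows were read.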
JavaScriptAndCSS.jsp includes all the necessary jTable and jQuery UI scripts. It also defines a menu, allowing a user to select the database he wants to view.
Student.jsp includes the JavaScriptAndCSS.jsp file and obtains the options for cities and courses.
$(document).ready(function () {
var JTableInfo =
{
title: 'The Student List',
paging: true,
pageSize: 10,
sorting: true,
defaultSorting: 'name ASC',
actions: {
listAction: 'Student/List',
createAction: 'Student/Save?action=1',
updateAction: 'Student/Save?action=2',
deleteAction: 'Student/Delete'
},
fields: {
id: {
key: true,
create: false,
edit: false,
list: false
},
phones: {
title: '',
width: '2%',
sorting: false,
edit: false,
create: false,
display: studentPhone
},
password: {
title: 'User Password',
type: 'password',
list: false
},
gender: {
title: 'Gender',
width: '13%',
options: { 'M': 'Male', 'F': 'Female' }
},
city_id: {
title: 'City',
width: '12%',
options: cities
},
birth_date: {
title: 'Birth date',
width: '15%',
type: 'date',
}
}
};
$('#JTable').jtable(JTableInfo);
$('#JTable').jtable('load');
});
JTableInfo sets the title of the table, actions for the records and fields of the record in the database.
Each field has set of properties for the behaviour of that field. Above code snippet shows selected fields of the Student table (See Student.jsp for entire code). Similarly, view City.jsp and Course.jsp is created for City and Course respectively.
To display the child tables (StudentPhone and StudentResult), the functions studentPhone() and studentResult() are defined in Student.jsp.
Form validation is done by using annotations.
Empty String as null - We want to convert all the empty strings coming from the browser to null. Spring provides a binder for this conversion.
See the initBinder method in BaseController.
Datatype Error Handling
The @Valid annotation along with BindingResult will handle the datatype errors. Example (in StudentController.java):
@ResponseBody
@RequestMapping(value = "Save")
public JTableResult Save(HttpServletRequest request, @Valid Student student, BindingResult bindingResult)
BindingResult is Spring’s object that holds the result of the validation. If any errors have occurred during validation, they are stored in BindingResult.
1. Date from controller to jTable
jTable expects the given date string to be formatted as one of the samples shown below:
I have used the first format. Class JsonDateSerializer converts the date string into the selected format (first one). The annotation @JsonSerialize(using = JsonDateSerializer.class) for date field in Student.java will help achieve that.
2. Date from jTable to controller
jTable uses the jQuery UI datepicker date format. For valid date formats, see the section $.datepicker.formatDate( format, date, options ) in the jQuery UI documentation. I have used the dd/mm/yy date format in my example. The defaultDateFormat for jTable is set in JavaScriptAndCSS.jsp. The corresponding date format for Java is dd/MM/yyyy. The annotation @DateTimeFormat(pattern = BaseController.DATE_FORMAT) sets the date format to dd/MM/yyyy.
The BaseController.DATE_FORMAT is accessing the DATE_FORMAT variable defined in BaseController.
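A quick self-contained check (illustrative only) that the Java pattern dd/MM/yyyy round-trips a date string in the shape jTable sends when the jQuery UI picker format is dd/mm/yy:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatCheck {
    public static void main(String[] args) throws ParseException {
        // Java's dd/MM/yyyy corresponds to jQuery UI datepicker's dd/mm/yy.
        SimpleDateFormat javaFormat = new SimpleDateFormat("dd/MM/yyyy");
        Date parsed = javaFormat.parse("25/12/2015"); // sample value from the browser
        System.out.println(javaFormat.format(parsed));
    }
}
```

Note the case difference: in Java, mm means minutes and MM means months, so the two patterns look different but describe the same day/month/year layout.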
jTable supports built-in themes as well as jQuery UI themes. I have used jQuery UI themes only. For this purpose, jqueryuiTheme is set to true in JavaScriptAndCSS.jsp.
Getting jQuery UI theme (Example: blitzer theme)
<link rel="stylesheet" type="text/css" href="../css/jquery-ui.blitzer.css" />
Project database access setup is done in ProjectJDBC.xml. HSQLDB was selected for the demo due to its ease of use. The value attribute of the property with name="url" is set to the path where the database will be created if it does not exist.
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:file:C:\DB\StudentDB;shutdown=true;hsqldb.default_table_type=cached;sql.enforce_strict_size=false"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
ProjectDAO.java and ProjectJDBC.java are used for setting up database access in Spring. The getJdbcTemplate() method in BaseController returns the jdbcTemplate.
The DataBase Setup button in the main menu calls the DatabaseSetupController. DataBaseSetupController was defined to create the tables and insert records. Every time the button is clicked, the database will be populated with the seed records to get the user started.
The create table statement defines a self-generating id as the primary key for each table. Every time a record is inserted, the id value increments by 1. In order to obtain the generated id value, KeyHolder is used in the insert() function in each of the model classes.
KeyHolder holder = new GeneratedKeyHolder();
jdbcTemplate.update(psc, holder);
id = holder.getKey().intValue();
The above code snippet is part of the Save() flow of StudentController.java. Student.retrieveById is used to get the record, which is given to the .Record parameter as per the jTable requirement. Hence, KeyHolder gives the generated id, which is passed as a parameter to retrieveById() in the case of an insert action.
I used IntelliJ IDEA 2016.1.1 for the demo. For deploying the code correctly, make sure you follow these steps:
In order to use other databases, the database access setup will change, and the create table statement in each of the model classes will change according to the syntax of the selected database.
Source code is available at
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://codeproject.freetls.fastly.net/Articles/1112115/JTable-Spring-Demo-with-database-integration?PageFlow=Fluid | CC-MAIN-2022-40 | en | refinedweb |
So far, we have learned the basics of Rust syntax, developed a custom Kubernetes controller, and integrated with the front-end with Wasm.
I've been using the JVM for two decades now, mainly in Java. The JVM is a fantastic piece of technology. IMHO, its most significant benefit is its ability to adapt the native code to the current workload; if the workload changes and the native code is not optimal, it will recompile the bytecode accordingly.
On the other side, the JVM's automatic memory management is also its main drawback when predictability matters: garbage collection can kick in at moments you don't control.
While GC algorithms have improved over time, GC itself is still a big complex machine. Fine-tuning GC is complex and depends heavily on the context. What worked yesterday might not work today. All in all, configuring the JVM for the best handling of GC in your context is like magic.
As the ecosystem around the JVM is well developed, it makes sense to develop applications using the JVM and delegate the parts that require predictability to Rust.
Existing alternatives for JVM-Rust integration
During the research for this post, I found quite a couple of approaches for JVM-Rust integration:
Asmble:
Asmble is a compiler that compiles WebAssembly code to JVM bytecode. It also contains an interpreter and utilities for working with WASM code from the command line and from JVM languages.
--
Asmble is released under the MIT License but is not actively maintained (the last commit is from 2 years ago).
GraalVM:
GraalVM is a high-performance JDK distribution designed to accelerate the execution of applications written in Java and other JVM languages along with support for JavaScript, Ruby, Python, and a number of other popular languages. GraalVM’s polyglot capabilities make it possible to mix multiple programming languages in a single application while eliminating foreign language call costs.
--
GraalVM allows to run LLVM bitcode. Rust can compile to LLVM. Hence, GraalVM can run your Rust-generated LLVM code along with your Java/Scala/Kotlin/Groovy-generated bytecode.
jni crate:
This crate provides a (mostly) safe way to implement methods in Java using the JNI. Because who wants to actually write Java?
--
JNI has been the way to integrate C/C++ with Java in the past. While it's not the most glamorous approach, it requires no specific platform and is stable. For this reason, I'll describe it in detail in the next section.
Integrating Java and Rust via JNI
From a bird's eye view, integrating Java and Rust requires the following steps:
- Create the "skeleton" methods in Java
- Generate the C headers file from them
- Implement them in Rust
- Compile Rust to generate a system library
- Load the library from the Java program
- Call the methods defined in the first step. At this point, the library contains the implementation, and the integration is done.
Old-timers will have realized those are the same steps as when you need to integrate with C or C++. It's because they also can generate a system library. Let's have a look at each step in detail.
Java skeleton methods
We first need to create the Java skeleton methods. In Java, we learn that methods need to have a body unless they are
abstract. Alternatively, they can be
native: a native method delegates its implementation to a library.
public native int doubleRust(int input);
Next, we need to generate the corresponding C header file. To automate generation, we can leverage the Maven compiler plugin:
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.1</version>
    <configuration>
        <compilerArgs>
            <arg>-h</arg>                 <!--1-->
            <arg>target/headers</arg>     <!--2-->
        </compilerArgs>
    </configuration>
</plugin>
- Generate header files...
- ...in this location
The generated header of the above Java snippet should be the following:
#include <jni.h>

#ifndef _Included_ch_frankel_blog_rust_Main
#define _Included_ch_frankel_blog_rust_Main
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     ch_frankel_blog_rust_Main
 * Method:    doubleRust
 * Signature: (I)I
 */
JNIEXPORT jint JNICALL Java_ch_frankel_blog_rust_Main_doubleRust
  (JNIEnv *, jobject, jint);

#ifdef __cplusplus
}
#endif
#endif
Rust implementation
Now, we can start the Rust implementation. Let's create a new project:
cargo new lib-rust
[package]
name = "dummymath"
version = "0.1.0"
authors = ["Nicolas Frankel <nicolas@frankel.ch>"]
edition = "2018"

[dependencies]
jni = "0.19.0"            # 1

[lib]
crate_type = ["cdylib"]   # 2
- Use the jni crate
- Generate a system library. Several crate types are available: cdylib is for dynamic system libraries that you can load from other languages. You can check all other available types in the documentation.
Here's an abridged version of the API offered by the crate:
The API maps one-to-one to the generated C code. We can use it accordingly:
#[no_mangle]
pub extern "system" fn Java_ch_frankel_blog_rust_Main_doubleRust(_env: JNIEnv, _obj: JObject, x: jint) -> jint {
    x * 2
}
A lot happens in the above code. Let's detail it.
- The no_mangle macro tells the compiler to keep the same function signature in the compiled code. It's crucial as the JVM will use this signature.
- Most of the time, we use extern in Rust functions to delegate the implementation to other languages: this is known as FFI. It's the same as we did in Java with native. However, Rust also uses extern for the opposite, i.e., to make functions callable from other languages.
- The signature itself should precisely mimic the code in the C header, hence the funny-looking name.
- Finally, x is a jint, an alias for i32. For the record, here's how Java primitives map to Rust types: jboolean → u8, jbyte → i8, jchar → u16, jshort → i16, jint → i32, jlong → i64, jfloat → f32, and jdouble → f64.
We can now build the project:
cargo build
The build produces a system-dependent library. For example, on OSX the artifact has a .dylib extension; on Linux, it has a .so one; etc.
Use the library on the Java side
The final part is to use the generated library on the Java side. It requires first to load it. Two methods are available for this purpose, System.load(filename) and System.loadLibrary(libname).

load() requires the absolute path to the library, including its extension, e.g., /path/to/lib.so. For applications that need to work across systems, that's impractical. loadLibrary() allows you to only pass the library's name - without extension. Beware that libraries are loaded from the location indicated by the java.library.path System property.
public class Main {

    static {
        System.loadLibrary("dummymath");
    }
}
Note that on Mac OS, the lib prefix is not part of the library's name.
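The standard System.mapLibraryName method shows the platform-specific file name the loader will look for, which is a handy way to debug loading issues:

```java
public class LibName {
    public static void main(String[] args) {
        // Maps a library name to the platform-specific file name:
        // libdummymath.so (Linux), libdummymath.dylib (macOS),
        // dummymath.dll (Windows).
        System.out.println(System.mapLibraryName("dummymath"));
    }
}
```
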
Working with objects
The above code is pretty simple: it involves a pure function, which depends only on its input parameter(s) by definition. Suppose we want to have something a bit more involved. We come up with a new method that multiplies the argument with another one from the object's state:
public class Main {

    private int state;

    public Main(int state) {
        this.state = state;
    }

    public static void main(String[] args) {
        try {
            var arg1 = Integer.parseInt(args[1]);
            var arg2 = Integer.parseInt(args[2]);
            var result = new Main(arg1).timesRust(arg2);                 // 1
            System.out.println(arg1 + "x" + arg2 + " = " + result);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Arguments must be ints");
        }
    }

    public native int timesRust(int input);
}
- Should compute arg1 * arg2
The native method looks precisely the same as above, except for its name. Hence, the generated C header also looks the same. The magic needs to happen on the Rust side.
In the pure function, we didn't use the JNIEnv and JObject parameters: JObject represents the Java object, i.e., Main, and JNIEnv allows accessing its data (or behavior).
#[no_mangle]
pub extern "system" fn Java_ch_frankel_blog_rust_Main_timesRust(env: JNIEnv, obj: JObject, x: jint) -> jint { // 1
    let state = env.get_field(obj, "state", "I");  // 2
    state.unwrap().i().unwrap() * x                // 3
}
- Same as above
- Pass the object's reference, the field's name in Java, and its type. The type refers to the correct JVM type signature, e.g., "I" for int.
- state is a Result<JValue>. We need to unwrap it to a JValue, and then "cast" it to a Result<jint> via i().
Conclusion
In this post, we have seen how to call Rust from Java. It involves flagging methods to be delegated as native, generating the C header file, and using the jni crate. We have only scraped the surface with simple examples: yet, we've laid the road to more complex usages.
The complete source code for this post can be found on Github:
To go further:
Originally published at A Java Geek on July 18th, 2021
Specifies the action to take upon delivery of a signal.
Berkeley Compatibility Library (libbsd.a)
#include <signal.h>

int sigaction (Signal, Action, OAction)
int Signal;
struct sigaction *Action, *OAction;

int sigvec (Signal, Invec, Outvec)
int Signal;
struct sigvec *Invec, *Outvec;

void (*signal (Signal, Action)) ()
int Signal;
void (*Action) (int);
The sigaction subroutine allows a calling process to examine and change the action to be taken when a specific signal is delivered to the process issuing this subroutine.
In multi-threaded applications using the threads library, signal actions are common to all threads within the process. Any thread calling the sigaction subroutine changes the action to be taken when a specific signal is delivered to the thread's process, that is, to any thread within the process.
Note: The sigaction subroutine must not be used concurrently with the sigwait subroutine on the same signal.
The Signal parameter specifies the signal. If the Action parameter is not null, it points to a sigaction structure that describes the action to be taken on receipt of the Signal parameter signal. If the OAction parameter is not null, it points to a sigaction structure in which the signal action data in effect at the time of the sigaction subroutine call is returned. If the Action parameter is null, signal handling is unchanged; thus, the call can be used to inquire about the current handling of a given signal.
The sigaction structure has the following fields:
The sa_handler field can have a SIG_DFL or SIG_IGN value, or it can be a pointer to a function. A SIG_DFL value requests default action to be taken when a signal is delivered. A value of SIG_IGN requests that the signal have no effect on the receiving process. A pointer to a function requests that the signal be caught; that is, the signal should cause the function to be called. These actions are more fully described in "Parameters".
When a signal is delivered to a thread, if the action of that signal specifies termination, stop, or continue, the entire process is terminated, stopped, or continued, respectively.

If the SA_SIGINFO flag (see below) is cleared in the sa_flags field of the sigaction structure, the sa_handler field identifies the action to be associated with the specified signal. If the SA_SIGINFO flag is set in the sa_flags field, the sa_sigaction field specifies a signal-catching function. If the SA_SIGINFO bit is cleared and the sa_handler field specifies a signal-catching function, or if the SA_SIGINFO bit is set, the sa_mask field identifies a set of signals that will be added to the signal mask of the thread before the signal-catching function is invoked.
The sa_mask field can be used to specify that individual signals, in addition to those in the process signal mask, be blocked from being delivered while the signal handler function specified in the sa_handler field is operating. The sa_flags field can have the SA_ONSTACK, SA_OLDSTYLE, or SA_NOCLDSTOP bits set to specify further control over the actions taken on delivery of a signal.
If the SA_ONSTACK bit is set, the system runs the signal-catching function on the signal stack specified by the sigstack subroutine. If this bit is not set, the function runs on the stack of the process to which the signal is delivered.
If the SA_OLDSTYLE bit is set, the signal action is set to SIG_DFL label prior to calling the signal-catching function. This is supported for compatibility with old applications, and is not recommended since the same signal can recur before the signal-catching subroutine is able to reset the signal action and the default action (normally termination) is taken in that case.
If a signal for which a signal-catching function exists is sent to a process while that process is executing certain subroutines, the call can be restarted if the SA_RESTART bit is set for each signal. The only affected subroutines are the following:
Other subroutines do not restart and return the EINTR error, independent of the setting of the SA_RESTART bit.

If SA_SIGINFO is cleared and the signal is caught, the signal-catching function will be entered as:

void func(int signo);
where signo is the only argument to the signal catching function. In this case the sa_handler member must be used to describe the signal catching function and the application must not modify the sa_sigaction member. If SA_SIGINFO is set and the signal is caught, the signal-catching function will be entered as: void func(int signo, siginfo_t * info, void * context); where two additional arguments are passed to the signal catching function. The second argument will sa_sigaction member must be used to describe the signal catching function and the application must not modify the sa_handler member. The si_signo member contains the system-generated signal number. The si_errno member may contain implementation-dependent. If SA_NOCLDWAIT is set, and sig equals SIGCHLD,. Otherwise, terminating child processes will be transformed into zombie processes, unless SIGCHLD is set to SIG_IGN. If SA_RESETHAND is set, the disposition of the signal will be reset to SIG_DFL and the SA_SIGINFO flag will be cleared on entry to the signal handler. If SA_NODEFER is set and sig is caught, sig will not be added to the process' signal mask on entry to the signal handler unless it is included in sa_mask. Otherwise, sig will always be added to the process' signal mask on entry to the signal handler. If sig is SIGCHLD and the SA_NOCLDSTOP flag is not set in sa_flags , and the implementation supports the SIGCHLD signal, then a SIGCHLD signal will be generated for the calling process whenever any of its child processes stop. If sig is SIGCHLD and the SA_NOCLDSTOP flag is set in sa_flags , then the implementation will not generate a SIGCHLD signal in this way. When a signal is caught by a signal-catching function installed by sigaction, a new signal mask is calculated and installed for the duration of the signal-catching function (or until a call to either sigprocmask ors remains will-dependent; the signal-catching function will be invoked with a single argument.
The sigvec and signal subroutines are provided for compatibility to older operating systems. Their function is a subset of that available with sigaction.
The sigvec subroutine uses the sigvec structure instead of the sigaction structure. The sigvec structure specifies a mask as an int instead of a sigset_t. The mask for the sigvec subroutine is constructed by setting the i-th bit in the mask if signal i is to be blocked. Therefore, the sigvec subroutine only allows signals between the values of 1 and 31 to be blocked when a signal-handling function is called. The other signals are not blocked by the signal-handler mask.
The sigvec structure has the following members:
int (*sv_handler)();  /* signal handler */
int sv_mask;          /* signal mask */
int sv_flags;         /* flags */

The signal subroutine in the libc.a library allows an action to be associated with a signal. The Action parameter can have the same values that are described for the sv_handler field in the sigvec structure of the sigvec subroutine. However, no signal-handler mask or flags can be specified; the signal subroutine implicitly sets the signal-handler mask to no additional signals and the flags to SA_OLDSTYLE.
Upon successful completion of a signal call, the value of the previous signal action is returned. If the call fails, a value of -1 is returned and the errno global variable is set to indicate the error, as in the sigaction call.
Note: The SIGKILL and SIGSTOP signals cannot be ignored.
Upon delivery of the signal, the receiving process runs the signal-catching function specified by the pointer to function. The signal-handler subroutine can be declared as follows:
handler(Signal, Code, SCP)
int Signal, Code;
struct sigcontext *SCP;

The Signal parameter is the signal number. The Code parameter is provided only for compatibility with other UNIX-compatible systems. The Code parameter value is always 0. The SCP parameter points to the sigcontext structure that is later used to restore the previous execution context of the process. The sigcontext structure is defined in the signal.h file.
A new signal mask is calculated and installed for the duration of the signal-catching function (or until a call to the sigprocmask or sigsuspend subroutine is made). This mask is formed by joining the process signal mask, the mask associated with the action for the signal being delivered, and the signal being delivered itself. The longjmp subroutine restores the stack and restores the process signal mask to the state it had when the corresponding setjmp subroutine was called.
Once an action is installed for a specific signal, it remains installed until another action is explicitly requested (by another call to the sigaction subroutine), or until one of the exec subroutines is called. An exception to this is when the SA_OLDSTYLE bit is set. In this case the action of a caught signal gets set to the SIG_DFL action before the signal-catching function for that signal is called. Certain subroutines should not be called from signal-catching functions since their behavior is undefined.
Upon successful completion, the sigaction subroutine returns a value of 0. Otherwise, a value of SIG_ERR is returned and the errno global variable is set to indicate the error.
The sigaction subroutine is unsuccessful and no new signal handler is installed if one of the following occurs:
These subroutines are part of Base Operating System (BOS) Runtime.
The acct subroutine, _exit, exit, or atexit subroutine, getinterval, incinterval, absinterval, resinc, resabs, alarm, ualarm, getitimer, or setitimer subroutine, getrlimit, setrlimit, or vlimit subroutine, kill subroutine, longjmp or setjmp subroutine, pause subroutine, ptrace subroutine, sigpause or sigsuspend subroutine, sigprocmask, sigsetmask, or sigblock subroutine, sigstack subroutine, sigwait subroutine, umask subroutine, wait, waitpid, or wait3 subroutine.
The kill command.
The core file.
Signal Management in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs provides more information about signal management in multi-threaded processes. | https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/libs/basetrf2/sigaction.htm | CC-MAIN-2022-40 | en | refinedweb |
The following imports NumPy and sets the seed.
import numpy as np
np.random.seed(42)
However, I’m not interested in setting the seed but more in reading it.
np.random.get_state() does not seem to contain the seed. The documentation doesn't show an obvious answer.
How do I retrieve the current seed used by numpy.random, assuming I did not set it manually? I want to use the current seed to carry over for the next iteration of a process.
This contribution is intended to serve as a clarification to the right answer from ali_m, and as an important correction to the suggestion from Dong Justin.
These are my findings:
- After setting the random seed using np.random.seed(X) you can find it again using np.random.get_state()[1][0].
- It will, however, be of little use to you.
The output from the following code sections will show you why both statements are correct.
Statement 1 – you can find the random seed using np.random.get_state()[1][0].
If you set the random seed using np.random.seed(123), you can retrieve the random state as a tuple using state = np.random.get_state(). Below is a closer look at state (I'm using the Variable explorer in Spyder). I'm using a screenshot since using print(state) will flood your console because of the size of the array in the second element of the tuple.
You can easily see 123 as the first number in the array contained in the second element. And using seed = np.random.get_state()[1][0] will give you 123. Perfect? Not quite, because:
Statement 2 – It will, however, be of little use to you:
It may not seem so at first though, because you could use np.random.seed(123), retrieve the same number with seed = np.random.get_state()[1][0], reset the seed with np.random.seed(444), and then (seemingly) set it back to the 123 scenario with np.random.seed(seed). But then you'd already know what your random seed was before, so you wouldn't need to do it that way. The next code section will also show that you can not take the first number of any random state using np.random.get_state()[1][0] and expect to recreate that exact scenario. Note that you'll most likely have to shut down and restart your kernel completely (or call np.random.seed(None)) in order to be able to see this.
The following snippet uses np.random.randint() to generate 5 random integers between -10 and 10, as well as storing some info about the process:
Snippet 1
# 1. Imports
import pandas as pd
import numpy as np

# 2. set random seed (commented out for this run)
seedSet = None
#seedSet = 123
#np.random.seed(seedSet)

# 3. describe random state
state = np.random.get_state()
state5 = state[1][0:5]

# 4. draw random numbers
random = np.random.randint(-10, 10, size=5)

# 5. organize and print the results
df = pd.DataFrame({'random': random, 'seedSet': seedSet,
                   'seedState': state5[0], 'state': state5})
print(df)
Notice that the column named seedState is the same as the first number under state. I could have printed it as a stand-alone number, but I wanted to keep it all in the same place. Also notice that seedSet = 123 and np.random.seed(seedSet) have so far been commented out. And because no random seed has been set, your numbers will differ from mine. But that is not what is important here; what matters is the internal consistency of your results:
Output 1:
   random seedSet   seedState       state
0       2    None  1558056443  1558056443
1      -1    None  1558056443  1808451632
2       4    None  1558056443   730968006
3      -4    None  1558056443  3568749506
4      -6    None  1558056443  3809593045
In this particular case, seed = np.random.get_state()[1][0] equals 1558056443. And following the logic from Dong Justin's answer (as well as my own answer prior to this edit), you could set the random seed with np.random.seed(1558056443) and expect to obtain the same random state. The next snippet will show that you can not:
Snippet 2
# 1. Imports
import pandas as pd
import numpy as np

# 2. set random seed
#seedSet = None
seedSet = 1558056443
np.random.seed(seedSet)

# 3. describe random state
state = np.random.get_state()
state5 = state[1][0:5]

# 4. draw random numbers
random = np.random.randint(-10, 10, size=5)

# 5. organize and print the results
df = pd.DataFrame({'random': random, 'seedSet': seedSet,
                   'seedState': state5[0], 'state': state5})
print(df)
Output 2:
   random     seedSet   seedState       state
0       8  1558056443  1558056443  1558056443
1       3  1558056443  1558056443  1391218083
2       7  1558056443  1558056443  2754892524
3      -8  1558056443  1558056443  1971852777
4       4  1558056443  1558056443  2881604748
See the difference? np.random.get_state()[1][0] is identical for Output 1 and Output 2, but the rest of the output is not (most importantly, the random numbers are not the same). So, as ali_m has already clearly stated:
It’s therefore impossible to map every RNG state to a unique integer seed.
Check the first element of the array returned by np.random.get_state(); it seems to be exactly the random seed to me.
This answer adds important details that others missed. First, to rephrase the conclusion:
Original random seeds (set via np.random.seed) cannot be retrieved after generating numbers, but intermediate states (the current state) can.
Refer to @vestland’s answer; it may, however, mislead: the generated numbers differ not because of an inability to map states, but because an incomplete encoding is used: get_state()[1]. The complete representation also includes pos = get_state()[2]. To illustrate:
import numpy as np

state0 = np.random.get_state()
rand0 = np.random.randint(0, 10, 1)
state1 = np.random.get_state()
rand1 = np.random.randint(0, 10, 1)

assert all(s0 == s1 for s0, s1 in zip(state0[1], state1[1]))
We generated a number, yet get_state()[1] remained identical. However:
np.random.set_state(state0)
assert np.random.randint(0, 10, 1) == rand0
and likewise for state1 & rand1. Hence, @vestland’s numbers differ because when not setting a seed, pos = 623, whereas if we use np.random.seed, pos = 624. Why the inconvenient discrepancy? No clue.
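You can watch pos directly: right after seeding it sits at 624 (the end of the buffer), and the first draw regenerates the buffer and moves it:

```python
import numpy as np

np.random.seed(0)
print(np.random.get_state()[2])   # pos right after seeding: 624

np.random.randint(0, 10, 1)       # draw one number
print(np.random.get_state()[2])   # pos has moved (the array in [1] has not)
```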
In summary, on np.random.seed(s):

- get_state()[1][0] immediately after setting: retrieves s, which exactly recreates the state
- get_state()[1][0] after generating numbers: may or may not retrieve s, but it will not recreate the current state (at get_state())
- get_state()[1][0] after generating many numbers: will not retrieve s, because pos has exhausted its representation
- get_state() at any point: will exactly recreate that point
Lastly, behavior may also differ due to get_state()[3:] (and of course [0]).
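So the reliable pattern is to save the whole state object rather than hunt for an integer seed:

```python
import numpy as np

checkpoint = np.random.get_state()   # snapshot mid-stream -- no seed required
a = np.random.random(3)

np.random.set_state(checkpoint)      # rewind to the snapshot
b = np.random.random(3)

print((a == b).all())                # True -- exactly the same draws
```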
While the top answer is right that it is not possible in general, it is in fact possible in some cases. I would redirect you to this person's blog:
This individual developed a Mersenne Twister cracking algorithm to recover initial seeds, and provided the details and algorithm in full. I am not the author and do not fully understand the material, but anybody interested in doing this should check it out.
I get an OverflowError when I try this calculation, but I can't figure out why.
1-math.exp(-4*1000000*-0.0641515994108)
Solution #1:
The number you’re asking math.exp to calculate has, in decimal, over 110,000 digits. That’s slightly outside of the range of a double, so it causes an overflow.
Solution #2:
To fix it use:
try:
    ans = math.exp(200000)
except OverflowError:
    ans = float('inf')
Solution #3:
I think the value gets too large to fit into a double in Python, which is why you get the OverflowError. The largest value whose exp I can compute on my machine in Python is just slightly larger than 709.78271.
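That threshold is just the natural log of the largest finite double; you can check it on your own machine:

```python
import math
import sys

print(sys.float_info.max)            # ~1.7976931348623157e+308
print(math.log(sys.float_info.max))  # ~709.782712893384

print(math.exp(709.78))              # still a finite double
try:
    math.exp(710)                    # just past the limit
except OverflowError as err:
    print(err)                       # math range error
```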
Solution #4:
This may give you a clue why:*1000000*-0.0641515994108%29
Notice the 111442 exponent.
Solution #5:
Unfortunately, no one explained what the real solution was. I solved the problem using:
from mpmath import *
You can find the documentation below:
Solution #6:
Try np.exp() instead of math.exp(). NumPy handles overflows more gracefully: np.exp(999) results in inf, and 1. / (1. + np.exp(999)) therefore simply results in zero.
import math
import numpy as np

print(1 - np.exp(-4*1000000*-0.0641515994108))
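Note that NumPy signals the overflow with a RuntimeWarning rather than an exception; you can silence it if the resulting infinities are acceptable:

```python
import warnings
import numpy as np

with warnings.catch_warnings():
    warnings.simplefilter("ignore")   # np.exp warns instead of raising
    x = np.exp(999.0)

print(x)                # inf
print(1.0 / (1.0 + x))  # 0.0
print(1.0 - x)          # -inf, which is what the original expression becomes
```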
How can a data frame be converted to a SpatialGridDataFrame using the R maptools library? I am new to Rpy2, so this might be a very basic question.
The R Code is:
coordinates(dataf)=~X+Y
In Python:
import rpy2
import rpy2.robjects as robjects
r = robjects.r
# Create a Test Dataframe
d = {'TEST': robjects.IntVector((221,412,332)), 'X': robjects.IntVector(('25', '31', '44')), 'Y': robjects.IntVector(('25', '35', '14'))}
dataf = robjects.r['data.frame'](**d)
r.library('maptools')
# Then i could not manage to write the above mentioned R-Code using the Rpy2 documentation
Apart from this particular question, I would be pleased to get some feedback on a more general idea: my final goal is to do regression-kriging with spatial data using the gstat library. The R script is working fine, but I would like to call my script from Python/ArcGIS. What do you think about this task, is this possible via rpy2?
Thanks a lot!
Richard
In some cases, Rpy2 is still unable to dynamically (and automagically) generate smart bindings.
An analysis of the R code will help:
coordinates(dataf)=~X+Y
This can be more explicitly written as:
dataf <- "coordinates<-"(dataf, formula("~X+Y"))
That last expression makes the Python/rpy2 translation straightforward:
from rpy2.robjects.packages import importr
sp = importr('sp')  # "coordinates<-()" is there

from rpy2.robjects import baseenv, Formula
maptools_set = baseenv.get('coordinates<-')
dataf = maptools_set(dataf, Formula(' ~ X + Y'))
To be (wisely) explicit about where "coordinates<-" is coming from, use:
maptools_set = getattr(sp, 'coordinates<-') | http://www.dlxedu.com/askdetail/3/6326b47f40f52ea278f7bb032be079dd.html | CC-MAIN-2018-47 | en | refinedweb |
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status.
Section: 20.2.3.2 [char.traits.specializations.char16_t] Status: C++11 Submitter: BSI Opened: 2010-08-25 Last modified: 2017-02-03
Priority: Not Prioritized
View all other issues in [char.traits.specializations.char16_t].
View all issues with C++11 status.
Duplicate of: 1444
Discussion:
Addresses GB-109, GB-123
It is not clear what the specification means for u16streampos, u32streampos or wstreampos when they refer to the requirements for POS_T in 21.2.2, as there are no longer any such requirements. Similarly the annex D.7 refers to the requirements of type POS_T in 27.3 that no longer exist either.
Clarify the meaning of all cross-reference to the removed type POS_T.
[ Post-Rapperswil, Daniel provides the wording. ]
When preparing the wording for this issue I first thought about adding both u16streampos and u32streampos to the [iostream.forward] header <iosfwd> synopsis similar to streampos and wstreampos, but decided not to do so, because the IO library does not yet actively support the char16_t and char32_t character types. Adding those would misleadingly imply that they would be part of the iostreams. Also, the addition would make them also similarly equal to a typedef to fpos<mbstate_t>, as for streampos and wstreampos, so there is no loss for users that would like to use the proper fpos instantiation for these character types.
Additionally the way of referencing was chosen to follow the style suggested by NB comment GB 108.
Moved to Tentatively Ready with proposed wording after 5 positive votes on c++std-lib.
[ Adopted at 2010-11 Batavia ]
Proposed resolution:
The following wording changes are against N3126.
Change [char.traits.specializations.char16_t]p.1 as indicated:
1 - The type u16streampos shall be an implementation-defined type that satisfies the requirements for POS_T in 21.2.2.
Change [char.traits.specializations.char32_t]p.1 as indicated:
1 - The type u32streampos shall be an implementation-defined type that satisfies the requirements for POS_T in 21.2.2.
Change [char.traits.specializations.wchar.t]p.2 as indicated:
2 - The type wstreampos shall be an implementation-defined type that satisfies the requirements for POS_T in 21.2.2.
Change [fpos.operations], Table 124 — Position type requirements as indicated:
Change [depr.ios.members]p.1 as indicated:
namespace std {
  class ios_base {
  public:
    typedef T1 io_state;
    typedef T2 open_mode;
    typedef T3 seek_dir;
    typedef OFF_T streamoff;
    typedef POS_T streampos;
    // remainder unchanged
  };
}
Change [depr.ios.members]p.5+6 as indicated:
5 - The type streamoff is an implementation-defined type that satisfies the requirements of type OFF_T (27.5.1).
6 - The type streampos is an implementation-defined type that satisfies the requirements of type POS_T (27.3).
Computer Science: Linux: The Terminal
For those of you that are familiar with the Linux operating system family, you know that the terminal is the most important part of it. What is the “terminal”? To answer this question, we first explore some fundamental truths related to the Linux kernel and the OS.
Device Files in Linux
The Linux operating system hosts these mysterious little items known as device files. These files contain all the bytes read from input devices like mice, keyboards, and joysticks. Kernel modules read from these devices and write to their respective device files. Remember how I said the terminal is a virtual input device? Well, it has a device file as well; one that you are no doubt familiar with:
/dev/stdin
The file “stdin” holds the bytes gathered from the terminal. We can access this file in Python to see what the user is typing into the terminal:
f = open('/dev/stdin', 'r')
f.read(1024)
Run the above program. As you type into the terminal, you will notice it is vomiting what you type. This is because we are reading from the terminal’s device file. You can expect the same thing if you read from the mouse’s device file. There is also a much faster way of accessing the terminal’s device file:
import sys
sys.stdin.read()
Awesome! Did you know that the terminal was that simple?
File Descriptors in Linux
A file descriptor is often overcomplicated in the Computer Science community. All it is is a unique ID, in the form of an integer, that represents a certain open file object. For example:
f = open('test.doc', 'r')
The file object “f” has a file descriptor:
fd = f.fileno()
The “fileno” method can be used to grab the descriptor of any open file, including device files! Let us get the file descriptor of the terminal’s device file:
sys.stdin.fileno()
This should return 0, because standard input is the first file opened for every Linux process. Every file you open after that will take the next free integer.
Controlling the Linux Terminal
We can use the terminal’s file descriptor to change attributes and adjust colors! We can even make secret password prompts and change the position of the cursor!
For a simple example, let us grab the current attributes or settings of the terminal using a handy std-lib module called “termios”:
import sys
import termios
# retrieve terminal file descriptor
fd = sys.stdin.fileno()
# grab the current settings
settings = termios.tcgetattr(fd)
If we return those settings, we should see something like this:
[17664, 5, 191, 35387, 15, 15, ['\x03', '\x1c', '\x7f', '\x15', '\x04', '\x00', '\x01', '\x00', '\x11', '\x13', '\x1a', '\x00', '\x12', '\x0f', '\x17', '\x16', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00']]
Wow! That is a lot! These are the terminal’s raw settings: flag bitmasks, line speeds, and the control-character array. We can easily change these settings later on, but for now, let’s use “termios” to update the settings! This is as easy as setting the old settings now, which will refresh the settings buffer above.
import sys
import termios
# retrieve terminal file descriptor
fd = sys.stdin.fileno()
# grab the current settings
settings = termios.tcgetattr(fd)
# update settings
termios.tcsetattr(fd, termios.TCSANOW, settings)
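Before editing anything, it helps to know what those list entries mean: they unpack as input flags, output flags, control flags, local flags, input speed, output speed, and the control-character array. A pseudo-terminal stands in for a real terminal here, since the layout is identical:

```python
import pty
import termios

# open a pseudo-terminal pair so this works even without a real terminal
master, slave = pty.openpty()
iflag, oflag, cflag, lflag, ispeed, ospeed, cc = termios.tcgetattr(slave)

print(bool(lflag & termios.ICANON))  # canonical (line-buffered) mode?
print(bool(lflag & termios.ECHO))    # echo enabled?
print(len(cc))                       # the control-character array (32 on Linux)
```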
Now we have a fresh input buffer! Let’s use the above method to edit the settings and do some cool stuff!
import sys
import termios
# retrieve terminal file descriptor
fd = sys.stdin.fileno()
# grab the current settings
settings = termios.tcgetattr(fd)
# change one buffer in the settings
settings[3] &= ~termios.ECHO
# update settings
termios.tcsetattr(fd, termios.TCSANOW, settings)
Bam! Now when you call “sys.stdin.read” you will not see what you type! This is useful for password prompts. To undo the above terminal control operation, run this function:
def enable_echo():
    fd = sys.stdin.fileno()
    setts = termios.tcgetattr(fd)
    setts[3] |= termios.ECHO
    termios.tcsetattr(fd, termios.TCSANOW, setts)
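The disable/restore pair above is tidier wrapped in a context manager, and this is essentially what the standard library’s getpass module does for password prompts (the helper names below are my own):

```python
import sys
import termios
import contextlib

@contextlib.contextmanager
def echo_disabled(fd):
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] &= ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSADRAIN, new)
    try:
        yield
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)  # always restore

def prompt_password(prompt='Password: '):
    fd = sys.stdin.fileno()
    with echo_disabled(fd):
        sys.stdout.write(prompt)
        sys.stdout.flush()
        password = sys.stdin.readline().rstrip('\n')
    sys.stdout.write('\n')
    return password
```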
There are endless possibilities for controlling terminals in UNIX based systems. The module “termios” has many constants and methods that do various terminal control operations. Here is a quick reference:
termios.tcsetattr(FD, WHEN, ATTRIBUTES)
Set FD's ATTRIBUTES according to WHEN
termios.tcflush(FD, QUEUE)
Flush FD's i/o queue according to QUEUE
termios.tcsendbreak(FD, DURATION)
Send a break on FD for DURATION
termios.tcgetattr(FD)
Return FD's current attributes
termios.tcflow(FD, ACTION)
Suspends or restarts transmission or reception according to ACTION
termios.cfgetospeed()
Retrieve line buffer rate (output BAUD).
termios.cfsetospeed(B_RATE)
Set line buffer rate (output BAUD).
Possible WHEN arguments:
TCSANOW-----set attributes now
TCSAFLUSH-----set attributes after draining output and flushing input
TCSADRAIN-----set attributes after draining output
Possible QUEUE arguments:
TCIFLUSH-----flush terminal's input queue
TCOFLUSH-----flush terminal's output queue
TCIOFLUSH-----flush terminal's i/o queue
Possible ACTION arguments:
TCOOFF-----suspends terminal's output
TCOON-----restarts suspended terminal output
TCIOFF-----transmits a STOP character to suspend the terminal's input
TCION-----transmits a START character to restart the terminal's input
Possible B_RATE (o_BAUD) arguments:
B0 B50 B75 B110 B134 B150
B200 B300 B600 B1200 B1800
B2400 B4800 B9600 B19200
B38400 B57600 B115200
B230400
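These speed constants pair with indices 4 and 5 (input and output speed) of the attribute list returned by tcgetattr. A quick sketch, again on a pseudo-terminal:

```python
import pty
import termios

master, slave = pty.openpty()
attrs = termios.tcgetattr(slave)

attrs[4] = termios.B9600   # input speed
attrs[5] = termios.B9600   # output speed
termios.tcsetattr(slave, termios.TCSANOW, attrs)

print(termios.tcgetattr(slave)[5] == termios.B9600)
```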
Additionally, there are non-argument constants that can be used to edit the settings buffer from “tcgetattr”. They are many:
ECHO-----echo the terminal's input
NOFLSH-----disable flushing of the terminal's i/o queues on the interrupt, quit, and suspend characters
ICANON-----enable canonical (line-buffered) mode
ECHOE-----if ICANON is set, the ERASE character erases the previous character
IEXTEN-----enable implementation-defined input processing
IGNBRK-----ignore break condition on input
BRKINT-----if not IGNBRK, flush i/o queues on BRK
IGNPAR-----ignore parity and framing errors
INPCK-----enable input parity checking
IGNCR-----ignore carriage return on input
ICRNL-----translate carriage return to newline unless IGNCR
IXON-----enable XON/XOFF flow control on output
IXOFF-----enable XON/XOFF flow control on input
ECHOK-----if ICANON, the KILL character erases the current line
NOTE: This guide only applies to Linux. | https://medium.com/@khorvath3327/computer-science-linux-the-terminal-c7b316fbf015 | CC-MAIN-2018-47 | en | refinedweb |
While working on the new feature of the ability to add custom events to Jira (available in the upcoming Jira 3.6 release), I thought it would be of interest to discuss a point on adding new objects to the Jira source through PicoContainer.
In Summary
A new object was introduced to the Jira source that depended on a number of objects registered through PicoContainer and also one that was not registered. In order to adhere to the Dependency Injection concept, a factory object was introduced to provide the object with dependencies managed through PicoContainer and avoid static retrieval of these dependencies.
Background
Manager objects are the engine of Jira. Each manager represents a particular concern and provides a service to achieve a particular goal. For example, the PermissionManager provides methods to check whether a user has the required permission to perform certain actions.
Jira uses PicoContainer for dependency injection. In a Jira nutshell:
* objects are registered with a central manager (ComponentManager)
* the ComponentManager manages the construction of all registered objects through PicoContainer
* each object requiring a reference to another registered object declares that dependency in its constructor
For convenience, the ComponentManager normally includes static methods to allow access to the registered objects – e.g.:
ProjectManager projectManager = ComponentManager.getInstance().getProjectManager();
This can be useful when an object’s constructor includes a non-registered object – i.e. an object not provided through PicoContainer.
However, this approach violates the Dependency Injection concept and can cause further problems in unit testing. For example, it is not possible to ‘mock out‘ a statically retrieved dependency for such references in writing unit tests.
Hence, to adhere to the concept of Dependency Injection, it is recommended to declare the dependency through the class constructor and let PicoContainer manage it.
Enter the Factory
The implementation of the new ‘custom event’ feature introduced the object TemplateContext. A TemplateContext encapsulates the information available to the Velocity context for rendering email templates. The TemplateContext object has the following dependencies:
* TemplateIssueFactory – allows creation of TemplateIssue objects
* FieldLayoutManager – used in rendering an issue event comment
* RendererManager – used in rendering an issue event comment
The object also included a dependency on an IssueEvent object that is not retrieved through the ComponentManager – so it is not possible to register the TemplateContext object through PicoContainer.
The simple option would be to statically retrieve these managers from the ComponentManager. However, in order to adhere to the Dependency Injection concept, the dependencies should be declared in the constructor and retrieved through PicoContainer:
public TemplateContext(IssueEvent issueEvent, TemplateIssueFactory templateIssueFactory, FieldLayoutManager fieldLayoutManager, RendererManager rendererManager)
{
this.issueEvent = issueEvent;
this.issue = issueEvent.getIssue();
this.templateIssueFactory = templateIssueFactory;
this.fieldLayoutManager = fieldLayoutManager;
this.rendererManager = rendererManager;
}
In order to retrieve these managers through PicoContainer, a TemplateContextFactory object was introduced:
public class DefaultTemplateContextFactory implements TemplateContextFactory
{
private final TemplateIssueFactory templateIssueFactory;
private final FieldLayoutManager fieldLayoutManager;
private final RendererManager rendererManager;
public DefaultTemplateContextFactory(TemplateIssueFactory templateIssueFactory, FieldLayoutManager fieldLayoutManager, RendererManager rendererManager)
{
this.templateIssueFactory = templateIssueFactory;
this.fieldLayoutManager = fieldLayoutManager;
this.rendererManager = rendererManager;
}
public TemplateContext getTemplateContext(IssueEvent issueEvent)
{
return new TemplateContext(issueEvent, templateIssueFactory, fieldLayoutManager, rendererManager);
}
}
This factory class declares the required dependencies and is registered with the ComponentManager, ensuring that these dependencies are managed through PicoContainer:
From ComponentManger.java:
internalContainer.registerComponentImplementation(TemplateContextFactory.class, DefaultTemplateContextFactory.class);
Each object requiring a TemplateContext object will declare its dependency on the TemplateContextFactory in its constructor, and retrieve the TemplateContext object by calling the factory's getTemplateContext(...) method.
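Boiled down to plain Java with generic stand-in names (not Jira's actual classes), the pattern looks like this: the factory is registered with the container and captures the managed dependencies once, while callers supply only the per-event object:

```java
// A container-managed service (stands in for FieldLayoutManager etc.)
interface Renderer {
    String render(String text);
}

// The per-call object that the container does not manage
class Event {
    final String payload;
    Event(String payload) { this.payload = payload; }
}

// The product: needs both a managed service and a per-call Event
class Context {
    private final Renderer renderer;
    private final Event event;
    Context(Event event, Renderer renderer) {
        this.event = event;
        this.renderer = renderer;
    }
    String describe() { return renderer.render(event.payload); }
}

// The factory is what gets registered with the container;
// it declares the managed dependencies in its constructor
class ContextFactory {
    private final Renderer renderer;
    ContextFactory(Renderer renderer) { this.renderer = renderer; }
    Context getContext(Event event) { return new Context(event, renderer); }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Renderer upper = text -> text.toUpperCase();  // easily mocked in tests
        ContextFactory factory = new ContextFactory(upper);
        Context ctx = factory.getContext(new Event("issue created"));
        System.out.println(ctx.describe());           // ISSUE CREATED
    }
}
```

Because the Renderer arrives through the factory's constructor, a unit test can hand in a mock instead of reaching for a static ComponentManager lookup.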
With this approach, the TemplateContext is now correctly constructed with PicoContainer managed objects.
This approach can lead to an increased number of objects – but the simple implementation of the factory class should not increase the code complexity. | https://www.atlassian.com/blog/archives/factories_of_mass_production | CC-MAIN-2018-47 | en | refinedweb |
Question:
I am relatively new to Flex/ActionScript, but I have been using a pattern of creating one file per function in my util package - with the name of the file being the same as the name of the function. For example, if the file were convertTime.as:
package util {
    public function convertTime(s:String):Date {
        ...
    }
}
This way I can import the function readily by doing:
import util.convertTime;
...
convertTime(...);
I like this way better than importing a class object and then calling the static methods hanging off of it, like this:
import util.Util;
...
Util.convertTime(...);
But, the more I do this, the more files I'll end up with, and it also seems a bit wasteful/silly to put only one function into a file, especially when the function is small. Is there another alternative to this? Or are these two options the only ones I have?
Update: after some research, I've also posted my own answer below.
Solution:1
Yes, these are your two main options for utility libraries. We actually use both of these approaches for our generic utility functions. For a very small set of functions that we feel should actually be builtins (such as map()), we put one function per file, so that we can use the function directly.
For more obscure/specialized utility functions, we don't want to pollute our global namespace so we make them static functions on a utility class. This way, we're sure that when someone references ArrayUtils.intersect(), we know what library intersect() came from, and what roughly it's for (it intersects two arrays).
I would recommend going with the latter route as much as possible, unless you have a function that a) you use very frequently and b) is really obvious what it does at a glance.
Solution:2
I came across some other alternatives after all and thought I'd share them here.
Alternative 1 - use inheritence
This is probably an obvious answer, but is limited. You would put your static methods into a parent class, inherit them to get them in the subclasses. This would only work with classes. Also, because ActionScript is single inheritence, you can only inherit once.
Alternative 2 - Alias the methods
You still write utility functions as static methods hanging off util classes, but you alias them so you can access them with a shorter name, ex:
import mx.binding.utils.BindingUtils; var bind:Function = BindingUtils.bindProperty;
Now you can just call
bind(...);
rather than than the lengthy
BindingUtils.bindProperty(...);
You can do this within the class scope and the function scope, but not the package scope - because apparently you can only have one visible attribute inside a package. If you do this in the class scope, you will want to make sure it doesn't conflict with your other class attribute names.
Alternative 3 - use include
As described in this flexonrails blog post, you can use include to simulate a mixin in ActionScript. An include is different from an import in that all it does is copy the entirety of the included file and paste it into the place where you include it. So it has no handling of namespace issues at all: you cannot reference the full path name afterwards as you can with imports, and if you have conflicting names, you are on your own. Also, unlike import, it creates different copies of the same code. But what you can do with this is put any number of functions in a file and include them into class or function scope in another file. Ex:
// util/time_utils.as
function convertTime(..){ ... }
function convertDate(..){ ... }
To include:
include 'util/time_util.as'; // this is always a relative path
...
convertTime(...);
Solution:3
@ an0nym0usc0ward
OOP is simply the method of consolidating like functions or properties into an object that can be imported and used. It is nothing more than a form of organization for your code; ALL code executes procedurally in the processor in the end, so OOP is just an organization of sources. What he is doing here may not be OOP as you would learn it from a book, but it does the exact same thing in the end, and should be treated with the same respect.
Anyone who truly understands OOP wouldn't be naive enough to think that the approved and documented form of OOP is the only possible way to object-orient your code.
EDIT: This was supposed to be a comment response to an0nym0usc0ward's rude comment telling him to learn OOP. But I guess I typed it in the wrong box :)
One powerful feature of Windows which can get overlooked is the COM method of accessing other programs. This defines the standard application binary interface so that programs can communicate and use each other almost as you would a library. The interface is language neutral and can be used from Python just as well as any other language.
COM objects use the client-server model. The program being accessed, or library in the above analogy, is the server and the program doing the calling is the client. Most scripts are clients which use various COM servers already installed on the computer (or other computers on the network). To get a reference to the COM server you use the Dispatch method.
For example I can use COM to start Microsoft Word (assuming I have Word installed on the computer) and open a document with the following script
import win32com.client

app = win32com.client.Dispatch("Word.Application")
doc = app.Documents.Open("file.doc")
app.Visible = True
Four lines of code and I have a word processor! The strength of COM is also its weakness: it only defines how the programs talk; it doesn't define the methods or properties, i.e. the application programming interface. That is left up to the server.
So how did I know about the Documents.Open method or the Visible property? In an ideal world the API would be fully documented and available. In reality there is little or no documentation, or it is not available, or it is difficult to make sense of. If the COM server has been registered in a standard way you can use a COM browser to gain information.
A basic browser comes with pywin32. A more powerful browser, oleview, can be downloaded from Microsoft as part of the Windows Server 2003 Resource Kit Tools. If working with Microsoft Office COM objects, my preferred browser is actually the Object Browser that comes with VBA. Just press Alt + F11 from Word and then F2. I find this has the cleanest interface.
The last way of making a start with a COM interface is to simply dump everything that can be found out about the API into a file and use that as a starting point. You can then use this information with the interactive prompt to query the COM object and see what it returns. The program to do this is makepy.py, which can be found in the win32com\client directory. I'll hopefully cover using makepy in another article.
Hi,
I have a form with two drop down menus.
Depending on what the user selects in the first, the second should be populated accordingly.
Here is the code I am using to do this.
View (test.html.erb):
<% form_for :applicant, :url => {:action => "index"} do |f| %>
  <%= f.label :topic_1 %>
  <%= f.select :topic_1, ["a", "b", "c", "d", "e", "f"], :include_blank => true %>
  <%= observe_field "applicant_topic_1", :update => "applicant_member_1",
      :with => "topic_1", :url => { :controller => "form", :action => "get_members_1" } %>
  <%= f.label :member_1 %>
  <%= f.select :member_1, ["Please select topic 1 first"] %>
Controller:
def get_members_1
@members_1 = Member.find_all_by_topic params[:topic_1]
end
View (get_members_1.html.erb): This is no. 1
(I have shortened this for the sake of simplicity. Normally a lookup of all members would take place here.)
As you can see, when the user enters something in 'topic_1', the observe_field helper calls the method 'get_members_1' in the controller and updates the field 'member_1' dynamically.
Here is this example in action:
My problem is that this works fine in FF, IE, Chrome and Opera, but it doesn't work at all in Konqueror.
I have Googled the problem and come up empty handed.
This is the JavaScript that the helper generates:

//<![CDATA[
new Form.Element.EventObserver('applicant_topic_1', function(element, value) {new Ajax.Updater('applicant_member_1', '/test/get_members_1', {asynchronous:true, evalScripts:true, parameters:'topic_1=' + encodeURIComponent(value) + '&authenticity_token=' + encodeURIComponent('EWNBpyzHQtObzxZbuXqg2s0tUHw8JWJhaeKZiyvWO9E=')})})
//]]>

Can anybody point me in the right direction to sort this out? I am relatively OK at Ruby/Rails, but a beginner regarding JavaScript/Ajax.

I also had a look on the Rails doc page regarding observe_field, but I cannot see what I am doing wrong.

I would be grateful for any help.
Request
- Introduction
- How to use
- Middleware & Interceptors
- Reference: back-end interface specification recommendations
Introduction
For middle and back-end applications, a large part of the work lies in requesting back-end CRUD interfaces. To further reduce developers' awareness of the request layer, we removed the default generated utils/request.ts file; request configuration and enhanced processing are instead exposed through configuration options. At the same time, we distilled a standard interface structure specification from business experience and provide unified interface parsing and error handling on top of it. We will continue to improve the configurable options and to provide solutions for vertical scenarios such as lists and login expiry.
At the same time, we have a built-in useRequest Hook to encapsulate some common data processing logic, through which you can more simply implement related functions.
How to use
Using request
With import { request } from 'umi'; you can use the built-in request method. request takes two arguments: the first one is the url and the second one is the options of the request. options is formatted as in umi-request.
Most of the usage of request is equivalent to umi-request, except that options is extended with skipErrorHandler; setting it to true will skip the default error handling, which is useful for some specific interfaces in the project.
The sample code is as follows.
request('/api/user', {
  params: {
    name: 1,
  },
  skipErrorHandler: true,
});
Using useRequest
useRequest is a Hook built into the best practices, through which you get the power of the request interface, whether it's page-flipping or loading more or combining with antd's Table component becomes very easy. A minimal example is as follows.
import { useRequest } from 'umi';

export default () => {
  const { data, error, loading } = useRequest(() => {
    return services.getUserList('/api/test');
  });
  if (loading) {
    return <div>loading...</div>;
  }
  if (error) {
    return <div>{error.message}</div>;
  }
  return <div>{data.name}</div>;
};
The first argument to useRequest takes a function that returns a Promise, which is one of the services that OneAPI automatically generates if you have access to OneAPI.
The data returned by the Hook is the data field in the actual JSON data returned by the backend, which is easy to use (of course you can also modify it through configuration). See its API documentation for more on the use of useRequest.
Middleware & Interceptors
In some cases we need to do some special processing before the network request is made or after the response. For example, the corresponding Access Token is automatically added to the Header before each request.
@umijs/plugin-request provides three runtime configuration items to help us accomplish similar needs.
Middleware: middlewares
Middlewares, like interceptors, also allow developers to gracefully do enhanced processing before and after network requests. However, it is a little more complex to use and we recommend using interceptors in preference.
The sample code is as follows.
// src/app.ts
const demo1Middleware = async (ctx: Context, next: () => void) => {
  console.log('request1');
  await next();
  console.log('response1');
};

const demo2Middleware = async (ctx: Context, next: () => void) => {
  console.log('request2');
  await next();
  console.log('response2');
};

export const request: RequestConfig = {
  errorHandler,
  middlewares: [demo1Middleware, demo2Middleware],
};
The execution order is as follows.
request1 -> request2 -> response -> response2 -> response1
It is strongly recommended that you take a closer look at umi-request's middleware documentation.
Pre-request interception: requestInterceptors
To intercept web requests before they are processed by .then or catch, you can add the following configuration inside the src/app.ts web request configuration.
export const request: RequestConfig = {
  errorHandler,
  // Add a pre-request interceptor that automatically adds an AccessToken
  requestInterceptors: [authHeaderInterceptor],
};
requestInterceptors receives an array; each item of the array is a request interceptor, equivalent to umi-request's request.interceptors.request.use().
The sample interceptor code is as follows.
// src/app.ts
const authHeaderInterceptor = (url: string, options: RequestOptionsInit) => {
  const authHeader = { Authorization: 'Bearer xxxxxx' };
  return {
    url: `${url}`,
    options: { ...options, interceptors: true, headers: authHeader },
  };
};
See the interceptor documentation for umi-request for more details.
Post-response interceptors: responseInterceptors
To intercept the network response before it is processed by .then or catch, use responseInterceptors; the usage is basically the same as requestInterceptors.
The specific sample code is as follows.
// src/app.ts
const demoResponseInterceptors = (response: Response, options: RequestOptionsInit) => {
  response.headers.append('interceptors', 'yes yo');
  return response;
};

export const request: RequestConfig = {
  errorHandler,
  responseInterceptors: [demoResponseInterceptors],
};
Uniform specification
Uniform error handling
Interface requests are not always 100% successful, but normally we expect them to be successful and only fail if there is a network exception or a problem with permissions, etc. So we usually expect the code logic to consider only the successful cases, and for exceptions just handle them uniformly in one place.
In best practice, we have defined a set of interface formatting and error handling specifications that will uniformly indicate an error when a request fails, so the code only needs to consider success. You can use import { request } from 'umi'; to get that capability using the request method built into the best practice.
The default interface format is:

export interface response {
  success: boolean; // if request is success
  data?: any; // response data
  errorCode?: string; // code for errorType
  errorMessage?: string; // message display to user
  showType?: number; // error display type: 0 silent; 1 message.warn; 2 message.error; 4 notification; 9 page
  traceId?: string; // convenient for back-end troubleshooting: unique request ID
  host?: string; // convenient for back-end troubleshooting: host of current access server
}
Of course you can also modify or customize some of this logic for your project through the runtime configuration of request exposed in app.ts; see @umijs/plugin-request's documentation.
When there is an HTTP error, or success is false in the returned data, request will throw an exception, which is caught by useRequest when you use it. In most cases you don't need to care about the exception; the uniform error handling will show a uniform error hint. For scenarios where you need to handle errors manually, you can use the onError method or the error object exposed by useRequest to do custom handling.
Uniform interface specification
In addition to the outermost specification defined above for error handling, we also provide a specification for the data format within data. For paging scenarios we recommend the following format for the backend, so that the frontend can easily interface with antd's Table component. Of course, if the backend is not in this format, you can use the formatResult configuration of the useRequest Hook to do the conversion.
{
  list: [
  ],
  current?: number,
  pageSize?: number,
  total?: number,
}
Reference: back-end interface specification recommendations
In order to be able to distinguish between pages and interfaces when finally deployed, and also to facilitate front-end debugging with interface forwarding, we recommend adding the /api prefix to the back-end interface path.
In addition, the return format of the interface is recommended to refer to the unified interface specification, to facilitate uniform error handling, the example is as follows.
{ "success": true, "data": {}, "errorCode": "1001", "errorMessage": "error message", "showType": 2, "traceId": "someid", "host": "10.1.1.1" }
For simple cases it can be as follows.

{
  "success": true,
  "data": {},
  "errorMessage": "error message"
}
Refer to the Uniform Error Handling and Uniform Interface specification above for details.
If the backend return format does not conform to the specification, you can refer to @umijs/plugin-request's documentation to configure errorConfig.adaptor in the runtime configuration for compatibility.
Crash in [@ mozilla::dom::(anonymous namespace)::PromiseNativeHandlerShim::RejectedCallback]
Categories
(Core :: DOM: Core & HTML, defect)
Tracking
()
People
(Reporter: gsvelto, Assigned: peterv)
References
Details
(Keywords: crash, leave-open)
Crash Data
Attachments
(3 files)
Crash report:
Reason:
EXC_BAD_ACCESS / KERN_INVALID_ADDRESS
Top 10 frames of crashing thread:
0 XUL mozilla::dom:: dom/promise/Promise.cpp:391
1 XUL mozilla::dom::NativeHandlerCallback dom/promise/Promise.cpp:341
2 XUL js::InternalCallOrConstruct js/src/vm/Interpreter.cpp:594
3 XUL js::Call js/src/vm/Interpreter.cpp:664
4 XUL PromiseReactionJob js/src/builtin/Promise.cpp:1904
5 XUL js::InternalCallOrConstruct js/src/vm/Interpreter.cpp:594
6 XUL JS::Call js/src/jsapi.cpp:2861
7 XUL mozilla::dom::PromiseJobCallback::Call dom/bindings/PromiseBinding.cpp:31
8 XUL mozilla::PromiseJobRunnable::Run xpcom/base/CycleCollectedJSContext.cpp:211
9 XUL mozilla::CycleCollectedJSContext::PerformMicroTaskCheckPoint xpcom/base/CycleCollectedJSContext.cpp:644
I found this during nightly crash triage but it's not a recent regression given the first version with significant volume is release 84. The crash is caused by a NULL pointer access and it's happening on shutdown - at least two user comments mention this happening when they tried to quit Firefox. I can't tell from the stack what's the affected component unfortunately.
I think DOM is a reasonable starting component for a crash involving DOM promises.
It seems that the crash reports are coming after landing of bug 1679094.
djg: Could you take a look?
A recent crash report shows potentially interesting mac_crash_info:
bp-170a9dba-d442-4564-b8fc-ef7b30210718
{ "num_records": 1, "records": [ { "message": "Performing @selector(menuItemHit:) from sender NSMenuItem 0x12b0957b0", "module": "/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit" } ] }
This is consistent with the crash having been triggered by pressing Cmd-Q or choosing Quit Firefox from the menu.
Looks like maybe mInner was unlinked or maybe previously resolved or rejected. I'm not sure how any of that could happen. We could probably just return if mInner is null, though maybe we'd still be in a bad state.
Hey Kagami,
The crash volume increased on nightly since April. From the crash stack, it's not very clear where we got this problematic promise from. However, you've touched Promise code recently for Streams, and we've started landed/enabled Streams recently. It could be that the crash increment on nightly is coming from unexpected resolved/rejected promises in this new feature. So I think it makes sense to start from asking for your help to take a first look. Thank you.
Hmm, indeed Streams heavily uses Promise::AppendNativeHandler and thus the function. But I'm off this week, NI'ing :smaug in case he can get some idea before I return.
Smaug and I looked at it earlier this week. I found one problematic call to Promise::AppendNativeHandler, and we're also going to convert the assert at to a diagnostic assert.
Depends on D145066
Pushed by pvanderbeken@mozilla.com:
extensions::RequestWorkerRunnable::Init should propagate failure of dom::PromiseWorkerProxy::Create. r=rpl
Make PromiseNativeHandlerShim diagnostic assert that the PromiseNativeHandler is non-null. r=smaug
Pushed by pvanderbeken@mozilla.com: Log the reason for nulling out PromiseNativeHandlerShim::mInner. r=smaug
All the crashes I'm seeing are from ClearedFromCC(). There seems to be an intermittent test failure that's hitting the same condition too (dom/base/test/browser_bug1303838.js).
FWIW, this is currently the #5 overall top content process crash for Fx101 on Release. landed to 102. It should prevent the crash at least.
FYI, I just experienced a crash with fx 104
And I just hit it in Nightly 106, with an in-browser Zoom tab. ClearedFromCC again.
I have a dataset of B x T x C, where B is batches, T is timestep (uneven), and C is characters (uneven). I would like to use EmbeddingBag to get a mean-embedding of each timestep of characters.
For example, lets say I have three datapoints in my batch:
- [[], [0, 4], [1, 1], [5]]
- This has 4 time steps, and 0, 2, 2, 1, respectively, characters in each timestep.
- [[1], [2, 3]]
- This has 2 time steps, and 1, 2, respectively, characters in each timestep.
- [[2, 4, 5], []]
- This has 2 timesteps, and 3, 1, respectively, characters in each timestep
So let’s init that:
all_tensors = [[[], [0, 4], [1, 1], [5]], [[1], [2,3]], [[2, 4, 5], []]]
And I know this is what I want my embedder to look like:
embedder = torch.nn.EmbeddingBag(num_embeddings = 6, embedding_dim = 2, mode = 'mean')
And…this is where I am stuck. Is there a good tutorial for how this problem should be approached?
Edit: Think I got a bit closer…
def pad_array(base_input):
    for index1, datapoint in enumerate(base_input):
        base_input[index1] = torch.LongTensor(np.asarray([np.pad(a, (0, 5 - len(a)), 'constant', constant_values=0) for a in datapoint]))
    return base_input

all_tensors = [[[], [0, 4], [1, 1], [5]], [[1], [2,3]], [[2, 4, 5], []]]
paddedchar_tensors = pad_array(all_tensors)
paddedchar_tensors = rnn_utils.pad_sequence(paddedchar_tensors, batch_first=True)
This gives me paddedchar_tensors as:

tensor([[[0, 0, 0, 0, 0],
         [0, 4, 0, 0, 0],
         [1, 1, 0, 0, 0],
         [5, 0, 0, 0, 0]],

        [[1, 0, 0, 0, 0],
         [2, 3, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]],

        [[2, 4, 5, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]])
But once again, I am stuck, running this through the EmbeddingBag gives me this error:
ValueError: input has to be 1D or 2D Tensor, but got Tensor of dimension 3
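One possible way past that last error (a sketch, using the padded tensor printed above): EmbeddingBag also accepts plain 2D input, where each row is treated as one bag, so the padded B x T x C tensor can be flattened to (B*T) x C, embedded, and reshaped back. Caveat: padded positions hold index 0, so the real embedding at index 0 gets mixed into every mean.

```python
import torch

embedder = torch.nn.EmbeddingBag(num_embeddings=6, embedding_dim=2, mode='mean')

# The padded B x T x C tensor from the post (B=3, T=4, C=5).
padded = torch.tensor([
    [[0, 0, 0, 0, 0], [0, 4, 0, 0, 0], [1, 1, 0, 0, 0], [5, 0, 0, 0, 0]],
    [[1, 0, 0, 0, 0], [2, 3, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]],
    [[2, 4, 5, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]],
])

B, T, C = padded.shape
# EmbeddingBag accepts 2D input: each row is one bag. Flatten timesteps into rows.
flat = padded.view(B * T, C)   # shape (12, 5)
bags = embedder(flat)          # shape (12, 2): one mean embedding per timestep
out = bags.view(B, T, -1)      # shape (3, 4, 2)
print(out.shape)               # torch.Size([3, 4, 2])
```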
public class StorageStatus extends java.lang.Object
This class assumes BlockId and BlockStatus are immutable, such that the consumers of this class cannot mutate the source of the information. Accesses are not thread-safe.
Methods inherited from class java.lang.Object: clone, equals, finalize,

public boolean containsBlock(BlockId blockId)
this.blocks.contains, which is O(blocks) time.
blockId- (undocumented)
public scala.Option<BlockStatus> getBlock(BlockId blockId)
this.blocks.get, which is O(blocks) time.
blockId- (undocumented).
rddId- (undocumented)
public long memRemaining()
public long memUsed()
public long cacheSize()
public long diskUsed()
public long memUsedByRdd(int rddId)
public long diskUsedByRdd(int rddId)
public scala.Option<StorageLevel> rddStorageLevel(int rddId)
Structure examples program in C language
Here we will see some structure example programs in C. You are most welcome if you have completed all the previous topics of our C tutorial, especially C structure. So, let's see some structure example programs here.
We recommend you to see our following guide if you don’t have any idea about them.
Store data in a structure dynamically using C

In this program you will learn to store user input data by dynamic memory allocation. In C we often need to store data in a structure dynamically. So, let's have a look at the given C program.
// code here
Output of structure example program:
Calculate difference between two time periods using C structure
Here in this C program we will learn to calculate the difference between two time periods. Let's try to write the code as below;

// C program to calculate difference between two time periods using C
#include <stdio.h>

struct time {
    int sec;
    int mins;
    int hrs;
};

void diff_of_time_period(struct time start, struct time stop, struct time *diff) {
    while (stop.sec > start.sec) {
        --start.mins;
        start.sec = start.sec + 60; /* borrow 60 seconds from the minutes */
    }
    diff->sec = start.sec - stop.sec;
    while (stop.mins > start.mins) {
        --start.hrs;
        start.mins = start.mins + 60; /* borrow 60 minutes from the hours */
    }
    diff->mins = start.mins - stop.mins;
    diff->hrs = start.hrs - stop.hrs;
}

int main() {
    struct time time_start, time_stop, time_diff;

    printf("Enter the starting time : \n");
    printf("hour minute second : ");
    scanf("%d %d %d", &time_start.hrs, &time_start.mins, &time_start.sec);

    printf("Enter the stopping time : \n");
    printf("hour minute second : ");
    scanf("%d %d %d", &time_stop.hrs, &time_stop.mins, &time_stop.sec);

    diff_of_time_period(time_start, time_stop, &time_diff);

    printf("\nDifference of time is = %d:%d:%d\n", time_diff.hrs, time_diff.mins, time_diff.sec);
    return 0;
}
Compile and run the program to get the output like this.
C structure example to add two complex numbers
Here, we will write a C program to add two complex numbers. Let's see the below structure example program;

// C program to add two complex numbers using structure
#include <stdio.h>

typedef struct complex_num {
    float real;
    float imag;
} comp;

comp add(comp n1, comp n2) {
    comp temp;
    temp.real = n1.real + n2.real;
    temp.imag = n1.imag + n2.imag;
    return (temp);
}

int main() {
    comp first_num, second_num, sum;

    printf("Enter real and imaginary parts of first number : \n");
    scanf("%f %f", &first_num.real, &first_num.imag);

    printf("Enter real and imaginary parts of second number : \n");
    scanf("%f %f", &second_num.real, &second_num.imag);

    sum = add(first_num, second_num);
    printf("Sum = %.1f + %.1fi", sum.real, sum.imag);
    return 0;
}
Output of this add two complex numbers program :

Enter real and imaginary parts of first number :
5.9 -3.88
Enter real and imaginary parts of second number :
2.8 3.9
Sum = 8.7 + 0.0i
html - Internal links on a specific page have stopped working
I'm trying to link to internal 'Pages' that are created within the Shopify back end, but the button link on this specific page is broken: () It used to work and I haven't made any changes to it.
'Pages' can be linked elsewhere within the app but just not in that specific section for some reason. The button always links back to the page that you are on.
Html code:
{% if block.settings.button_text != blank %}
  <a href="{{ block.settings.button_url }}" class="standard__cta {{ block.settings.button_style }} {{ block.settings.button_color }}" data-
    {{ block.settings.button_text }}
  </a>
{% endif %}
Schema code:
{ "type":"url", "id":"button_url", "label":"Link" },
A href in Inspect mode:
>.
This post was flagged by the community and is temporarily hidden.
Hi Chang,
Can you compile with make debugmpiandsmp, and see if the problem occurs - if so, this will tell us where it went wrong.
I see that you’re running with ASE, I don’t think this makes sense. The ASE calculator only supports ground state calculations but you wish to do BSE skipping the ground state. Can you try submitting the job directly? I also don’t think there’s a way to specify MPI processes with the ASE calculator (?) If not, the run time performance will be terrible.
Once you check these things, can you provide more information please? Which intel compiler and what make.inc did you use (provided or custom)? Can you supply INFO.OUT and the corresponding BSE INFO file for the failing case, and the run time settings?
I also note your rgkmax is extremely small, but I assume this is for testing purposes.
Cheers,
Alex
This post was flagged by the community and is temporarily hidden.
Hi Chang,
This is the problem:
forrtl: severe (408): fort: (8): Attempt to fetch from allocatable variable PMUO1 when it is not allocated

Image              PC                Routine            Line     Source
exciting_debug_mp  0000000003935ADF  Unknown            Unknown  Unknown
exciting_debug_mp  0000000002A844C6  exccoulint_        484      exccoulint.f90
exciting_debug_mp  0000000000B1D746  exccoulintlaunche  118      exccoulintlauncher.f90
exciting_debug_mp  0000000000935FD9  xsmain_            161      xsmain.F90
exciting_debug_mp  0000000001956B01  xstasklauncher_    233      xstasklauncher.f90
exciting_debug_mp  00000000022D8EE0  tasklauncher_      25       tasklauncher.f90
exciting_debug_mp  0000000001D7DD5B  MAIN__             51       main.f90
exciting_debug_mp  0000000000410D52  Unknown            Unknown  Unknown
libc-2.26.so       00007F661379234A  __libc_start_main  Unknown  Unknown
exciting_debug_mp  0000000000410C6A  Unknown            Unknown  Unknown
I’ll ask the BSE developers if this has been patched.
W.r.t. running exciting with mpi via ASE, I didn't look at the code but you'll also need to set the env variable for OMP_NUM_THREADS for maximum efficiency. excitingtools already enables you to generate input for ground state and BSE with python, and it also has numerous file parsers. There is an open MR with ASE to completely overhaul the calculator, using excitingtools as a plug-in. Hopefully that goes through this year.
Cheers,
Alex
Hi Alex,
I am indeed setting OMP_NUM_THREADS to 1 for my parallelized computations yes, and many thanks for the tips about python scripts of BSE - I am still learning these very convenient tools now (I created a special Python 2.7.14 environment in Anaconda for exciting tools) and they are indeed very helpful. The news about open MR with ASE sounds great, too.
Since I am trying to use exciting to compute X-ray absorption and emission spectra based on GW-BSE, I tried to go beyond TDA by setting coupling = "true", using the tutorial for BN ().
mpprun info: Starting impi run on 1 node ( 8 rank X 1 th ) for job ID 21805777
Abort(101) on node 2 (rank 2 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 101) - process 2
Abort(101) on node 4 (rank 4 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 101) - process 4
Abort(101) on node 6 (rank 6 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 101) - process 6
mpprun info: Job terminated with error
The same test will run smoothly if I turn off coupling.
This exciting build was compiled by the staff at NSC in Linköping University, so it should not suffer from my own inexperience as in the previous case. I have uploaded the whole case in:
Could you help me in this case, too? Thank you!!!
Best wishes,
Chang Liu
Hi Chang,
Python's subprocess will start its own shell instance. I'm relatively sure that you would need to pass OMP_NUM_THREADS as an env dictionary to subprocess in order for anything other than 1 to be used. Something like this:
def some_routine(my_env, ...):
    if my_env is None:
        my_env: dict = os.environ.copy()
    process = Popen(execution_str.split(), cwd=path, stdout=PIPE, stderr=PIPE, env=my_env)
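To make that concrete, here is a small self-contained variant of the same idea (illustrative, not from the thread): copy the parent environment, override OMP_NUM_THREADS, and hand the copy to the child process via the env argument.

```python
import os
import subprocess
import sys

# Copy the parent environment and override just the threading variable.
env = os.environ.copy()
env["OMP_NUM_THREADS"] = "4"

# The child process sees the modified environment; here we just echo it back.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OMP_NUM_THREADS'])"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # 4
```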
excitingtools is not the tutorial scripts, it’s the python3 package we’re developing to supersede the scripts: excitingtools · PyPI (also packaged with exciting).
W.r.t. the failing calculation, I’ll pass this info on to Fabian, who currently does the most x-ray absorption simulations in the group.
Cheers,
Alex
Hi,
I'm new to Dataiku and the community and I'm using Dataiku online. Documentation indicates that scenarios can be used to "Automate the retraining of “saved models” on a regular basis, and only activate the new version if the performance is improved". This is exactly what I need to set up but I can't seem to find examples showing the specific steps I would setup in scenarios to accomplish this. Attaching my current flow (I got an error pasting a screenshot ). Any references to documentation / working examples would be great.
Thank you
Operating system used: windows
Hi @StephenEaster ,
Thanks for posting here. Sharing the solution we implemented on your end with the Community.
Added a custom python step in a scenario :
import dataiku

# Define variables necessary to run this code (Please assign your own environment's values)
ANALYSIS_ID = 'XXXXXXX'  # The identifier of the visual analysis containing the desired ML task
ML_TASK_ID = 'XXXXXXX'   # The identifier of the desired ML task
SAVED_MODEL_ID = 'S-RESPONSEMODELING-ngsDEF4J-1656449809436'  # The identifier of the saved model when initially running the scenario; will use variables later
TRAINING_RECIPE_NAME = 'train_Predict_Revenue_NDays_Company__regression_'  # Name of the training recipe to update for redeploying the model

# client is a DSS API client.
client = dataiku.api_client()
p = client.get_project(dataiku.default_project_key())

# Retrieve existing ML task to retrain the model
mltask = p.get_ml_task(ANALYSIS_ID, ML_TASK_ID)

# Wait for the ML task to be ready
mltask.wait_guess_complete()

# Start train and wait for it to be complete
mltask.start_train()
mltask.wait_train_complete()

# Get the identifiers of the trained models
# There will be 3 of them because Logistic regression and Random forest were default enabled
ids = mltask.get_trained_models_ids()

# Iterate through all the existing algorithms to determine which one has the best
# score for the chosen metric (r2, auc, f1, etc.)
actual_metric = "r2"
temp_auc = 0
for id in ids:
    details = mltask.get_trained_model_details(id)
    algorithm = details.get_modeling_settings()["algorithm"]
    auc = details.get_performance_metrics()[actual_metric]
    if auc > temp_auc:
        best_model = id
        temp_auc = auc  # keep track of the best score seen so far
        print("Better model identified")
        print("Algorithm=%s actual_metric=%s" % (algorithm, auc))

# Let's compare the "best" model of the newly trained models vs the existing model to see which is better
details = mltask.get_trained_model_details(best_model)
auc = details.get_performance_metrics()[actual_metric]

# We'll need to pull the current model ID from project variables and retrieve the model info
vars = p.get_variables()
try:
    current_model = vars["standard"]["current_model"]
except:
    current_model = SAVED_MODEL_ID
current_details = mltask.get_trained_model_details(current_model)
current_auc = current_details.get_performance_metrics()[actual_metric]

# Let's deploy the model with the best score (either new or existing)
if auc > current_auc:
    model_to_deploy = best_model
else:
    model_to_deploy = current_model
print("Model to deploy identified: " + model_to_deploy)

# Update project variables to reflect the new model ID that is being deployed
vars["standard"]["current_model"] = model_to_deploy
p.set_variables(vars)

# Deploy the model to the Flow
ret = mltask.redeploy_to_flow(model_to_deploy, recipe_name=TRAINING_RECIPE_NAME, activate=True)
The assumption here is that the model is already trained and the winning model was deployed to the Flow. To obtain the required variables:

- The Saved Model ID, e.g. the one that is already deployed

Regards,
@Generated(value="OracleSDKGenerator", comments="API Version: 20200606") public class ListRecommendationsRequest extends BmcRequest<Void>
Methods inherited from class com.oracle.bmc.requests.BmcRequest: getBody$, getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue

Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public String getCategoryId()
The unique OCID associated with the category.
public String getCategoryName()
Optional. A filter that returns results that match the category name specified.

public ListRecommendationsRequest.SortBy getSortBy()
The field to sort by. You can provide one sort order (sortOrder). Default order for TIMECREATED is descending. Default order for NAME is ascending. The NAME sort order is case sensitive.
public LifecycleState getLifecycleState()
A filter that returns results that match the lifecycle state specified.
public Status getStatus()
A filter that returns recommendations that match the status specified.
public String getOpcRequestId()
Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.
public ListRecommendationsRequest.Builder toBuilder()
Return an instance of ListRecommendationsRequest.Builder that allows you to modify request properties.
public static ListRecommendationsRequest.Builder builder()
Hi,
See if Reset Tag Value after a Number of Seconds - #6 by hamlinjr helps. Pay attention to the warnings and cautions.
Ok. So, now I want to create a function, that will work like a Timer ON from PLC, but I encounter a problem:
def TimerON(tagPath, time):
    if system.tag.read(tagPath).value == 0:
        date_ms = system.date.toMillis(system.date.now())
        return 0
    else:
        time_diff = system.date.toMillis(system.date.now()) - date_ms
        if time_diff > time:
            return 1

TimerON("[default]Testowe/test1", 5000)
The scripting console shows an error, when tag value is 1:
UnboundLocalError: local variable ‘date_ms’ referenced before assignment
You've assigned date_ms in the if block (for no apparent reason, not using it) but are trying to use it in the else block.
date_ms in the if block is to remember the timestamp when the variable was '0' last time.
Doesn’t work like that. Your script has no “memory”.
So is it possible to create this kind of function via scripting?
It’s possible, you just need to find a place to store the start time that is scoped outside of your function.
For instance, perhaps create a UDT with the needed tags. Note that creating a "Timer" in this manner will be pretty inaccurate, as there will be built-in latencies with tag scan times and read and write times. Can it be done? Yes. Should it? I'm not sure.
What exactly are you trying to accomplish, other than to replicate a construct from a PLC? There may be a better approach than this.
I want to trigger an IF condition in my script by a boolean value that becomes TRUE when the tag has been TRUE for 5 seconds.
Use a tag change event of some kind on your boolean to record the current timestamp into another tag (memory tag) whenever the boolean becomes true. In the script where you need this, read both the boolean tag and the timestamp tag. Then you have enough information to evaluate your condition.
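A minimal sketch of that pattern (the tag store is simulated with a plain dict here so the logic can be shown outside Ignition; in a gateway script the reads and writes would go through system.tag instead):

```python
import time

# Simulated tag store; in Ignition these would be a boolean tag and a
# memory tag holding the timestamp of the last False -> True transition.
tags = {"bool": False, "bool_ts": None}

def on_tag_change(new_value, now=None):
    """Tag-change event: record the timestamp whenever the boolean becomes true."""
    now = now if now is not None else time.time()
    if new_value and not tags["bool"]:
        tags["bool_ts"] = now
    tags["bool"] = new_value

def true_for(seconds, now=None):
    """Evaluate the condition: has the boolean been true for at least `seconds`?"""
    now = now if now is not None else time.time()
    return tags["bool"] and tags["bool_ts"] is not None and (now - tags["bool_ts"]) >= seconds

# Simulated timeline: tag goes true at t=100
on_tag_change(True, now=100.0)
print(true_for(5, now=103.0))  # False - only true for 3 seconds
print(true_for(5, now=106.0))  # True  - true for 6 seconds
```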
Hello!
Sorry for the delay; I had a similar need, check if my solution could fit yours:
udt_TON.xml (2.1 KB)
use: create an instance of TON, say myTON
set myTON.PT = 3.5 → it’s a float, PresetTime in seconds
set myTON.EN = True to enable the timer
Q is set to False at this moment and will be set to True after 3500ms
caveat: PT can’t be 0
see Adding a Delay to a Script - Ignition User Manual 7.9 - Ignition Documentation for details on the use of Timer
That’s not what you want to do, that’s how you’re trying to do it.
Try to stay as close to the problem you want to solve, without including anything you think might be part of the solution. You might not even need a script to achieve your 'true' goal.
Unless of course your ultimate goal is to trigger an if in a script, in which case... why?
UDT attached.
timer_udt.json (1.2 KB)
How to Convert MSG to JPG in C# .Net Framework
MSG format can pose some serious problems to the would-be parser. Not only do you have to deal with the nuances of the format’s packaging, but you also have to be able to handle three different types of body content. This can consist of HTML, plain text, or even RTF, thus multiplying the required work. And that’s not even taking into consideration the matter of rendering and rasterizing to get your image. Let’s forget that whole mess for a minute, as I have a method to show you that will get this all done in no time at all.
First, our package installation.
Install-Package Cloudmersive.APIClient.NET.DocumentAndDataConvert -Version 3.2.8
Then call this conversion function from the above library.
using System;
using System.Diagnostics;
using Cloudmersive.APIClient.NET.DocumentAndDataConvert.Api;
using Cloudmersive.APIClient.NET.DocumentAndDataConvert.Client;
using Cloudmersive.APIClient.NET.DocumentAndDataConvert.Model;

namespace Example
{
    public class ConvertDocumentMsgToJpgExample
    {
        public void main()
        {
            // Configure API key authorization: Apikey
            Configuration.Default.AddApiKey("Apikey", "YOUR_API_KEY");
            // Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
            // Configuration.Default.AddApiKeyPrefix("Apikey", "Bearer");

            var apiInstance = new ConvertDocumentApi();
            var inputFile = new System.IO.Stream(); // System.IO.Stream | Input file to perform the operation on.
            var quality = 56; // int? | Optional; Set the JPEG quality level; lowest quality is 1 (highest compression), highest quality (lowest compression) is 100; recommended value is 75. Default value is 75. (optional)

            try
            {
                // Convert Email MSG file to JPG/JPEG image array
                MsgToJpgResult result = apiInstance.ConvertDocumentMsgToJpg(inputFile, quality);
                Debug.WriteLine(result);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling ConvertDocumentApi.ConvertDocumentMsgToJpg: " + e.Message);
            }
        }
    }
}
Done. Told you it was going to be easy!