text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Assaf,
I'm not following all of this. My main goal here is not to break the client
when a process is redeployed.
Lance
On 8/8/06, Assaf Arkin <arkin@intalio.com> wrote:
>
> The client breaks when the endpoint changes, or the messages/operations
> accepted by the endpoint change.
>
> Whenever you deploy a new version -- same or different name, version
> number,
> tagged or not -- that accepts the same messages on the same endpoints, the
> client does not perceive any difference. It still invokes the process the
> same way, regardless of how the server chooses to refer to that process
> definition.
>
> However, changing the signature and changing the process name, breaks the
> client. Because the client does not talk to the process, the client talks
> to
> the service, and so changing the signature breaks the client. Changing the
> process name is immaterial.
>
> A restriction that "if you change the signature you must change the
> process
> name" does not in any way protect the client from breaking, but makes life
> incredibly hard for developers. It's like asking you to change the Java
> class name every time you change its signature. When you're writing code,
> how often do you change signatures?
>
> Assaf
>
> On 8/8/06, Lance Waterman <lance.waterman@gmail.com> wrote:
> >
> > Assaf,
> >
> > From a client application's perspective which of the three options
> > requires
> > a change in the way I send a message into the BPEL engine?
> >
> > Lance
> >
> >
> > On 8/8/06, Assaf Arkin <arkin@intalio.com> wrote:
> > >
> > > Reading through the BPEL spec, I get the impression that however you
> > > decide
> > > to name a process is meaningless. If you send a message to its
> > initiating
> > > activity it will start. If you send a message to the wrong endpoint,
> it
> > > won't.
> > >
> > > So clearly people who want to version processes need to take into
> > account
> > > that Bar replacing Foo on same instantiating activity, means Bar is
> the
> > > version you now use, not Foo. Which means you can get really creative
> > with
> > > process names, like Order, OrderV1, Order_V2_With_25%_More_Activities.
> > >
> > >
> > > But there are two requirements you can't solve with overriding and
> > naming.
> > >
> > > One, only very few people can actual design, deploy and forget. Most
> > > people
> > > go through some iterative process, so you end up deploying different
> > > iterations of the same process as you're working to get it to done.
> And
> > > naming each deployment, that's like saving every draft you write under
> a
> > > different name.
> > >
> > > The source control approach is much better, it gives each version a
> > serial
> > > number and datetime stamp, so you can easily track changes and
> rollback.
> > > If
> > > you have some instance running, you know which process definition it
> > > belongs
> > > to: not the name, but the actual definition you pushed to the server
> > > before
> > > it was instantiated.
> > >
> > > (In some other development environments, deployment happens strictly
> > > through
> > > SVN and people in fact use the SVN version number to mark each
> release)
> > >
> > > Two, numbers and timestamps are fine but a burden when you do want to
> > > track
> > > milestone releases, especially in production. So you want to associate
> > > some
> > > meaningful name, usually related to that milestone, like "Release 1",
> > > "Release 1.1", whatever. A tagging mechanism separate from the process
> > > name
> > > has the benefit that you can clearly see its timeline, searching by
> > name,
> > > ordering by sequential version number, and displaying those tags.
> > >
> > > If tags sound familiar, source control does that as well.
> > >
> > >
> > > So I personally prefer a system whereby:
> > > 1. I can replace Foo with Bar because I decide Foo is a better name,
> and
> > > it's taking over Bar's role (same instantiation).
> > > 2. Or replace Foo with another Foo, and be able to see sequence of
> > > deployment using serial number/datetime I don't have to worry about.
> > > 3. Or affix a specific version label/tag.
> > >
> > > #1, I don't see that happening often, and you can always retire Foo
> and
> > > activate Bar.
> > >
> > > #2 is something the server already has to do in order to maintain
> > > instances
> > > using the old version, so just give me access to the sequence
> > > number/deployment timestamp.
> > >
> > > #3 is a really nice feature to have.
> > >
> > > Assaf
> > >
> > >
> > > On 8/8/06, Alex Boisvert <boisvert@intalio.com> wrote:
> > > >
> > > > Lance Waterman wrote:
> > > > > On 8/8/06, Alex Boisvert <boisvert@intalio.com> wrote:
> > > > >>
> > > > >> Lance,
> > > > >>
> > > > >> For consideration, I would like to briefly review the design
that
> I
> > > had
> > > > >> in mind for versioning in PXE. I think it's similar in spirit
to
> > > what
> > > > >> you describe in your deployment spec.
> > > > >>
> > > > >> First, each process definition would be identified by its fully
> > > > >> qualified name (/name/ and /namespace/ attributes) and a version
> > > > >> number. The process engine would manage the version number in
a
> > > > >> monotonically increasing fashion, meaning that each time a
> process
> > is
> > > > >> redeployed, the version number increases.
> > > > >
> > > > >
> > > > > I don't understand the need for a version number that is managed
> by
> > > the
> > > > > engine. I think a client may use whatever version scheme they use.
> > We
> > > > > just
> > > > > need to validate that the version identifier is unique at
> deployment
> > > > > time.
> > > > There is no strict need for a version number managed by the
> engine. I
> > > > think this idea came up when we wanted to simplify the management
> > > > interfaces and wanted to avoid the need for an extra user-provided
> > > > identifier if you already encoded version information in the
> > > > name+namespace. It made it easier to define and communicate the
> > > > "latest" process version.
> > > >
> > > > > I agree with Maciej's comments on this and would like to add from
> > the
> > > > > deployment spec under sec 1.2.5:
> > > > >
> > > > > *CONSTRAINT: Any change in the service interface ( i.e. a new
> > > <receive>
> > > > > element ) for a process definition will require a new identifier
(
> > i.e
> > > .
> > > > > name/namespace ) within the definition repository. Versioning is
> not
> > > > > supported across changes in the service interface and shall be
> > > > > enforced by
> > > > > the deployment component.*
> > > > >
> > > > > I would like to make sure folks are okay with this as well.
> > > > Personally, I would be against this because it would mean that I
> > cannot
> > > > deploy a new process definition that implements additional
> interfaces
> > > > (among other things).
> > > >
> > > > I don't see the reason to bind together the notions of service
> > interface
> > > > and versioning.
> > > >
> > > >
> > > > > In general I would like to define the concept of a "current"
> process
> > > > > definition. The "current" process definition is the definition
> used
> > by
> > > > > the
> > > > > engine on an instantiating event. There could be instances running
> > in
> > > > the
> > > > > engine that are using other versions of a process definition,
> > however
> > > > its
> > > > > not possible to have multiple versions that are used for
> > instantiating
> > > > > new
> > > > > processes ( see Maciej's reply on endpoints ). Through management
> > > > > tooling a
> > > > > client will identify the "current" process.
> > > > I don't think we need to define the notion of a "current"
> process. I
> > > > think we only need to define which (unique) process provides an
> > > > instantiating (non-correlated) operation on a specific endpoint.
> > > >
> > > > alex
> > > >
> > > >
> > >
> > >
> > > --
> > > CTO, Intalio
> > >
> > >
> > >
> >
> >
>
>
> --
> CTO, Intalio
>
>
> | http://mail-archives.us.apache.org/mod_mbox/ode-dev/200608.mbox/%3Ccbb700270608081619k67404c8cqba3a77b21e3469b3@mail.gmail.com%3E | CC-MAIN-2020-29 | refinedweb | 1,130 | 62.88 |
Linq is a great technology to manage data directly from your .Net language.
One of its features is grouping. Many people understand grouping like it is defined in Sql. Linq is implementing grouping quite the same way. Let's discover this syntax and how to make consecutive groups easier.
Then we will show how to use WPF HierarchicalDataTemplate to expose the results in just a few lines.
Let's assume we have a collection of customer: 'customers'. You just have to use 'group by' to define groups among your data.
var q =
from c in db.Customers
group c by c.Country;
As we can see in its definition, IGrouping is just adding a few things:
- the key of the group (country in our sample).- the items grouped by this common key. To retrieve these items, you have to browse the group which is an enumeration itself.
foreach (var g in q)
{
var city = g.Key;
foreach (var c in g)
Console.WriteLine(c.CompanyName);
}
// Summary:
// Represents a collection of objects that have a common key.
//
// Type parameters:
// TKey:
// The type of the key of the System.Linq.IGrouping<TKey,TElement>.
//
// TElement:
// The type of the values in the System.Linq.IGrouping<TKey,TElement>.
public interface IGrouping<TKey, TElement> : IEnumerable<TElement>, IEnumerable
{
// Summary:
// Gets the key of the System.Linq.IGrouping<TKey,TElement>.
//
// Returns:
// The key of the System.Linq.IGrouping<TKey,TElement>.
TKey Key { get; }
}
var q =
from g in
(from c in db.Customers
group c by c.Country)
select new { g.Key, Count = g.Count() };
To simplify this syntax, you can use the 'into' keyword and then make disappear the nested query. Do not forget the first syntax that makes more visible why 'c' is not reachable after the group statement. Like in Sql, once the data are grouped, you can only select properties from the group.
var q =
from c in db.Customers
group c by c.Country into g
select new { g.Key, Count = g.Count() };
We can write it 'manually' nesting a second Linq query into the result of our first query:
var q =
from c in db.Customers
group c by c.Country into g
select new {
g.Key,
Count = g.Count(),
SubGroups = from c in g
group c by c.City into g2
select g2};
The result is a tree of items grouped in a first level of countries groups and then each country group has a SubGroups property that stores the group of cities contained in each country.
Writing this will become less and less readable when the number of child groups will grow.Moreover, it's quite hard to factorize this code as we have to insert a new query inside the last projection.I wanted to make this scenario more simple and more generic. Here is the idea.
The first thing I have done is to create a fixed type to define a group.This allows me to have a returnable type (anonymous types are not), so I can isolate my code in a method. Moreover, as I will use my method recursively, actually I had no choice!Another reason was having the fixed non generic type GroupResult makes it easier for me to use WPF data binding (xaml does not support generic types).
public class GroupResult
{
public object Key { get; set; }
public int Count { get; set; }
public IEnumerable Items { get; set; }
public IEnumerable<GroupResult> SubGroups { get; set; }
public override string ToString()
{ return string.Format("{0} ({1})", Key, Count); }
}
If the number of group selectors is zero, then the method returns null. It's also what will stop the recursivity in the case of multiple selectors.
If the number of group selectors is greater than zero, then I isolate the first one and build a simple Linq GroupBy query using it. Each returned item is a GroupResult and I am calling recursively the GroupByMany method on the results of the group (g) to fill the SubGroups property. When calling this method, I am using the remaining unused group selectors, which will finish by being empty.
public static class MyEnumerableExtensions
{
public static IEnumerable<GroupResult> GroupByMany<TElement>(
this IEnumerable<TElement> elements,
params Func<TElement, object>[] groupSelectors)
{
if (groupSelectors.Length > 0)
{
var selector = groupSelectors.First();
//reduce the list recursively until zero
var nextSelectors = groupSelectors.Skip(1).ToArray();
return
elements.GroupBy(selector).Select(
g => new GroupResult
{
Key = g.Key,
Count = g.Count(),
Items = g,
SubGroups = g.GroupByMany(nextSelectors)
});
}
else
return null;
}
}
var result = customers.GroupByMany(c => c.Country, c => c.City);
Last step, let's try to display the result. WPF has a feature that I love: the possibility to associate a template to a data type. Usually, a template is stored in the resources and is indexed with a key, then the controls reference this template. Using the 'DataType' syntax without key, the template is automatically associated to any content control when the type of the content is corresponding to the DataType of the template.
The HierarchicalDataTemplate is a special template that allows you to define a collection of children (ItemsSource property) in addition to the regular DataTemplate definition.
Some hierarchical controls like the TreeView are using this template to build their structure recursively. So we have nothing more to do to display our multiple groupby results than connecting them to the treeview:
DataContext = customers.GroupByMany(c => c.Country, c => c.City);
<Window x:Class="WPFGroupingTemplate.Window1"
xmlns=""
xmlns:x=""
xmlns:
<Window.Resources>
<HierarchicalDataTemplate DataType="{x:Type local:GroupResult}"
ItemsSource="{Binding SubGroups}">
<TextBlock Text="{Binding}" />
</HierarchicalDataTemplate>
</Window.Resources>
<DockPanel>
<TreeView x:
<ListView ItemsSource="{Binding ElementName=groups,
Path=SelectedItem.Items}">
<ListView.View>
<GridView>
<GridViewColumn Header="Company name"
DisplayMemberBinding="{Binding CompanyName}" />
<GridViewColumn Header="Contact name"
DisplayMemberBinding="{Binding ContactName}" />
<GridViewColumn Header="Country"
DisplayMemberBinding="{Binding Country}" />
<GridViewColumn Header="City"
DisplayMemberBinding="{Binding City}" />
</GridView>
</ListView.View>
</ListView>
</DockPanel>
</Window>
I let you evaluate the size of the code if you had to write the same program using Windows Forms and ADO.Net...
The source code attached is for Visual Studio 2008. Even if this sample is using Linq to object grouping, I have used a local Northwind database to populate my collection. You just need to modify the connection string in the app.config file to run the sample (or use any of your data sources of course). | http://blogs.msdn.com/b/mitsu/archive/2007/12/22/playing-with-linq-grouping-groupbymany.aspx | CC-MAIN-2015-48 | refinedweb | 1,041 | 59.19 |
Using Cypress to Write Tests for a React Application framework that does all that clicking work for us and that’s what we’re going to look at in this post. It’s really for any modern JavaScript library, but we’re going to integrate it with React in the examples.
Let’s set up an app to test
In this tutorial, we will write tests to cover a todo application I’ve built. You can clone the repository to follow along as we plug it into Cypress.
git clone [email protected]:kinsomicrote/cypress-react-tutorial.git
Navigate into the application, and install the dependencies:
cd cypress-react-tutorial yarn install
Cypress isn’t part of the dependencies, but you can install it by running this:
yarn add cypress --dev
Now, run this command to open Cypress:
node_modules/.bin/cypress open
Typing that command to the terminal over and over can get exhausting, but you can add this script to the
package.json file in the project root:
"cypress": "cypress open"
Now, all you have to do is do
npm run cypress once and Cypress will be standing by at all times. To have a feel of what the application we’ll be testing looks like, you can start the React application by running
yarn start.
We will start by writing a test to confirm that Cypress works. In the
cypress/integration folder, create a new file called
init.spec.js. The test asserts that
true is equal to
true. We only need it to confirm that’s working to ensure that Cypress is up and running for the entire application.
describe('Cypress', () => { it('is working', () => { expect(true).to.equal(true) }) })
You should have a list of tests open. Go there and select
init.spec.js.
That should cause the test to run and pop up a screen that shows the test passing.
While we’re still in
init.spec.js, let’s add a test to assert that we can visit the app by hitting in the browser. This’ll make sure the app itself is running.
it('visits the app', () => { cy.visit('') })
We call the method
visit() and we pass it the URL of the app. We have access to a global object called
cy for calling the methods available to us on Cypress.
To avoid having to write the URL time and again, we can set a base URL that can be used throughout the tests we write. Open the
cypress.json file in the home directory of the application and add define the URL there:
{ "baseUrl": "" }
You can change the test block to look like this:
it('visits the app', () => { cy.visit('/') })
...and the test should continue to pass. 🤞
Testing form controls and inputs
The test we’ll be writing will cover how users interact with the todo application. For example, we want to ensure the input is in focus when the app loads so users can start entering tasks immediately. We also want to ensure that there’s a default task in there so the list is not empty by default. When there are no tasks, we want to show text that tells the user as much.
To get started, go ahead and create a new file in the integration folder called
form.spec.js. The name of the file isn’t all that important. We’re prepending "form" because what we’re testing is ultimately a form input. You may want to call it something different depending on how you plan on organizing tests.
We’re going to add a
describe block to the file:
describe('Form', () => { beforeEach(() => { cy.visit('/') }) it('it focuses the input', () => { cy.focused().should('have.class', 'form-control') }) })
The
beforeEach block is used to avoid unnecessary repetition. For each block of test, we need to visit the application. It would be redundant to repeat that line each time
beforeEach ensures Cypress visits the application in each case.
For the test, let’s check that the DOM element in focus when application first loads has a class of
form-control. If you check the source file, you will see that the input element has a class called
form-control set to it, and we have
autoFocus as one of the element attributes:
<input type="text" autoFocus value={this.state.item} onChange={this.handleInputChange}
When you save that, go back to the test screen and select
form.spec.js to run the test.
The next thing we’ll do is test whether a user can successfully enter a value into the input field.
it('accepts input', () => { const input = "Learn about Cypress" cy.get('.form-control') .type(input) .should('have.value', input) })
We’ve added some text ("Learn about Cypress") to the input. Then we make use of
cy.get to obtain the DOM element with the
form-control class name. We could also do something like
cy.get('input') and get the same result. After getting the element,
cy.type() is used to enter the value we assigned to the
input, then we assert that the DOM element with class
form-control has a value that matches the value of
input.
In other words:
Our application should also have two todos that have been created by default when the app runs. It’s important we have a test that checks that they are indeed listed.
What do we want? In our code, we are making use of the list item (
<li>) element to display tasks as items in a list. Since we have two items listed by default, it means that the list should have a length of two at start. So, the test will look something like this:
it('displays list of todo', () => { cy.get('li') .should('have.length', 2) })
Oh! And what would this app be if a user was unable to add a new task to the list? We’d better test that as well.
it('adds a new todo', () => { const input = "Learn about cypress" cy.get('.form-control') .type(input) .type('{enter}') .get('li') .should('have.length', 3) })
This looks similar to what we wrote in the last two tests. We obtain the input and simulate typing a value into it. Then, we simulate submitting a task that should update the state of the application, thereby increasing the length from 2 to 3. So, really, we can build off of what we already have!
Changing the value from three to two will cause the test to fail — that’s what we’d expect because the list should have two tasks by default and submitting once should produce a total of three.
You might be wondering what would happen if the user deletes either (or both) of the default tasks before attempting to submit a new task. Well, we could write a test for that as well, but we’re not making that assumption in this example since we only want to confirm that tasks can be submitted. This is an easy way for us to test the basic submitting functionality as we develop and we can account for advanced/edge cases later.
The last feature we need to test is the deleting tasks. First, we want to delete one of the default task items and then see if there is one remaining once the deletion happens. It’s the same sort of deal as before, but we should expect one item left in the list instead of the three we expected when adding a new task to the list.
it('deletes a todo', () => { cy.get('li') .first() .find('.btn-danger') .click() .get('li') .should('have.length', 1) })
OK, so what happens if we delete both of the default tasks in the list and the list is completely empty? Let’s say we want to display this text when no more items are in the list: "All of your tasks are complete. Nicely done!"
This isn’t too different from what we have done before. You can try it out first then come back to see the code for it.
it.only('deletes all todo', () => { cy.get('li') .first() .find('.btn-danger') .click() .get('li') .first() .find('.btn-danger') .click() .get('.no-task') .should('have.text', 'All of your tasks are complete. Nicely done!') })
Both tests look similar: we get the list item element, target the first one, and make use of
cy.find() to look for the DOM element with a
btn-danger class name (which, again, is a totally arbitrary class name for the delete button in this example app). We simulate a click event on the element to delete the task item.
Testing network requests
Network requests are kind of a big deal because that’s often the source of data used in an application. Say we have a component in our app that makes a request to the server to obtain data which will be displayed to user. Let’s say the component markup looks like this:
class App extends React.Component { state = { isLoading: true, users: [], error: null }; fetchUsers() { fetch(``) .then(response => response.json()) .then(data => this.setState({ users: data, isLoading: false, }) ) .catch(error => this.setState({ error, isLoading: false })); } componentDidMount() { this.fetchUsers(); } render() { const { isLoading, users, error } = this.state; return ( <React.Fragment> <h1>Random User</h1> {error ? <p>{error.message}</p> : null} {!isLoading ? ( users.map(user => { const { username, name, email } = user; return ( <div key={username}> <p>Name: {name}</p> <p>Email Address: {email}</p> <hr /> </div> ); }) ) : ( <h3>Loading...</h3> )} </React.Fragment> ); } }
Here, we are making use of the JSON Placeholder API as an example. We can have a test like this to test the response we get from the server:
describe('Request', () => { it('displays random users from API', () => { cy.request('') .should((response) => { expect(response.status).to.eq(200) expect(response.body).to.have.length(10) expect(response).to.have.property('headers') expect(response).to.have.property('duration') }) }) })
The benefit of testing the server (as opposed to stubbing it) is that we are certain the response we get is the same as that which a user will get. To learn more about network requests and how you can stub network requests, see this page in the Cypress documentation.
Running tests from the command line
Cypress tests can run from the terminal without the provided UI:
./node_modules/.bin/cypress run
...or
npx cypress run
Let’s run the form tests we wrote:
npx cypress run --record --spec "cypress/integration/form.spec.js"
Terminal should output the results right there with a summary of what was tested.
There’s a lot more about using Cypress with the command line in the documentation.
That’s a wrap!
Tests are something that either gets people excited or scared, depending on who you talk to. Hopefully what we’ve looked at in this post gets everyone excited about implementing tests in an application and shows how relatively straightforward it can be. Cypress is an excellent tool and one I’ve found myself reaching for in my own work, but there are others as well. Regardless of what tool you use (and how you feel about tests), hopefully you see the benefits of testing and are more compelled to give them a try.
No mention of using data-test-id for your selectors? You will be leading people into a world of pain down the road.
Using
fetchdid you have any issues? AFAIK
fetchis still not supported by cypress and you need to do this trick
cy.get() wont work always in React Apps as it will try to find out the DOM elements before react renders it. | https://css-tricks.com/using-cypress-to-write-tests-for-a-react-application/ | CC-MAIN-2019-35 | refinedweb | 1,926 | 72.66 |
Should we extend HTTP?
In summary: probably not (at least right now). Instead, let's look at an AggregatorApi to do the same job.
Proposal
[MartinAtkins : RefactorOk] Can we perhaps make an optional addition to the HTTP requests made by aggregators to indicate that an aggregator has current data as of a particular date? Then only new/changed entries can be delivered.
The header, which could be called something like 'X-Last-Polled' would be optional both for the client to send and the server to honour. Small sites may wish to trade bandwidth for the reduced CPU utilization of the feed being a static file served directly from disk or memory.
Clients should be prepared to re-recieve data they've already got, despite their indication that they already had it. This should "just work" anyway, since the repeats can just be interpreted as editing the entry to be the same as it already is. The aggregator should notice the last-modified time hasn't changed and thus not bother with the entry again.
Servers should be prepared to not get this header. When they do, they should just serve some arbitrary amount of entries they feel is sensible. This will most often happen because the client does not actually cache data locally, or just displays what the feed currently contains. It can also happen when an aggregator first subscribes to a feed, and just wants to grab as many current entries as the server will give it.
X-Last-Polled is the same as If-Modified-Since in syntax, but is a request rather than a question. Hopefully everyone can see why If-Modified-Since is not appropriate for this purpose.
Something like this will have to be standardized, even as a "best practice", or else aggregators will start trying to it themselves in incompatible ways and we'll end up having to send five different headers.
Discussion
[MartinAtkins : RefactorOk] I originally considered that If-Modified-Since would work for this, but then realised that there are some users of RSS (and thus, ultimately Atom) feeds which don't make any effort to cache individual entries locally. Instead, they pull down the feed, transform it into something else (usually HTML) and that's the only data they keep. When they request again, they politely use the last-modified time on their HTML file in the If-Modified-Since header and if they don't get a response they just leave the file as it is and wait until next time. If they get a response they replace their HTML file with the new data which, if it's considered to be X-Last-Polled, will now be at worst blank, and at best only contain new stuff, thus losing anything that hadn't been seen in the mean time.
This may well cause problems for some proxies. However, some sites are already beefed up enough to be able to deal with bypassing proxies. LiveJournal.com, for example, always bypasses proxies because the responses generated are dependant on who is making the request. Assuming my implementation were to be used, it would have to be specified that servers MUST use Cache-control: private when honouring X-Last-Polled. That is, unless it's valid to put X-Last-Polled in the Vary header -- I can't remember how exactly Vary is specified. (probably not best to rely on it anyway, as there are plenty of dodgy proxies out there)
This is not really in the spirit of HTTP, but the benefits of including this functionality in some form are at least twofold:
Sites which can afford to dynamically-generate their Atom feeds, and whose feeds change infreqently, can save bandwith by only returning the changes.
It encourages less frequent automated retrievals or no automated retrievals at all, since you aren't going to miss anything by not retrieving for some time. By the current model, it becomes necessary to request frequently because once an item has been pushed off the bottom of the feed it's no longer gettable, thus encouraging clients to request frequently to avoid missing items. This way, my aggregator only needs to make a request when I'm ready to read, at which point it will get everything that has been added or updated since the last update, regardless of how long ago that was.
The second benefit suggests that feeds should indicate their support of such a feature, since an aggregator will need to know the difference between a static feed (which must be checked frequently for updates) and an 'intelligent' feed, which it can be a lot more lax with.
Other implementation suggestions are welcome, since mine was really just there to support the concept.
[JamesAylett RefactorOk] My problem with X-Last-Polled is that it seems different to other HTTP modifier headers. Unlike other HTTP request headers, the entity you are requesting an instance of (to use RFC 3229's terminology) is different with different X-Last-Polled headers. Unlike variable file formats, transfer encodings, instance manipulations - even different languages - having a request header which gives you a different document just feels wrong to me. (You could argue that the XML serialisation of the feed means that X-Last-Polled is a little like Content-Range. I wouldn't.)
[JamesAylett RefactorOk DeleteOk] (Paraphrased from discussion now moved to AggregatorApi) Are you happy to drop the proposal to extend HTTP for the purpose of feed transfer in favour of concentrating on an AggregatorApi?
[AsbjornUlsberg] Why not just use PUSH instead of PULL?
Of course, some clients will still poll. The client part of an aggregating feed proxy would need to poll, so this would be a good application of server-push if the originating server supports it; there will be far fewer aggregating feed proxies than end-users -- at least, that's the idea.
An aggregating feed proxy could indeed have a persistant connection with a set of its clients, although I doubt many will. The aggregated 'pull' model is similar to how users get their USENET news from their ISP's news server, which itself sucks the news from other sources.
I don't really see much harm in also creating a persistant feed consuming protocol, except that we already have two ways for an aggregator to operate: either it polls a static feed, or it asks for a delta feed generated dynamically using the reader protocol. Hopefully, everyone who has the latter will also keep the former, but adding a third option in increases the likelyhood that feed producers will pick only one or two of the options, fragmenting a system which was supposed to make more integration possible, plus making an aggregator much more difficult to write.
[AsbjornUlsberg] Many good points, Martin. Maybe the PUSH method shuold be an extension that can only be done in another namespace? We definetively need to think more about this, so postponing it to after v1.0 sounds like a good idea.
CategoryApi, CategoryArchitecture | http://www.intertwingly.net/wiki/pie/PossibleHTTPExtensionForEfficientFeedTransfer | crawl-001 | refinedweb | 1,170 | 56.29 |
Avoiding function call overheads
Function calls in Python are relatively expensive in comparison to for example C. Cython allows one to reduce this overhead significantly.
Main reason for the large function call overhead in Python is the “boxing” and “unboxing” of function call arguments and return values. Once again, dynamic typing makes programming more flexible but with a cost in performance.
One can investigate the function call overhead in Python with the following example. Let’s make a simple code that does not perform much else than calls a function in a loop:
def inner(i): return i+1 def outer_1(n): x = 0 for i in range(n): x = inner(x)
We can create also a version which does not have the function call:
def outer_2(n): x = 0 for i in range(n): x = x + 1
If one measures the performance of the two outer functions, one observes that the one with the function call in the loop is typically 3-4 times slower.
Using pure C functions
If a function is used only within a Cython module, one can get rid of a large part of Python’s function call overhead by declaring the function as a pure C function, once again using the cdef keyword:
cdef int add (int x, int y): cdef int result result = x + y return result
When a function is defined as a pure C function, it can be called only from the corresponding Cython module, but not from a Python code. If a function is called both from Cython and Python, Cython can generate an additional Python wrapper by declaring the function with cpdef instead of cdef:
cpdef int add (int x, int y): cdef int result result = x + y return result
This adds some overhead, so if the function is not called from Python it is better to use just cdef.
Mandelbrot kernel as pure C function
The kernel function is called only within the mandelbrot.pyx module, so we can make it a pure C function:
cdef int
When continuing with the performance investigation, we obtain the following results:
- Pure Python: 0.57 s
- Static type declarations in the kernel: 14 ms
- Kernel as C-function: 9.6 ms
In this case the speed-up is not so drastic, but still a respectable 1.5.
© CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd. | https://www.futurelearn.com/courses/python-in-hpc/0/steps/65124 | CC-MAIN-2020-16 | refinedweb | 399 | 58.76 |
This is a sample XLL (DLL for Microsoft Excel) that adds linear interpolation capability to the Excel function list. XLL functionality in Excel is extremely fast, and allows you to extend the basic functions in any way you desire. This XLL was written to address a problem in which I was doing many thousands of interpolations via VBA in a spreadsheet, and it was taking about 20 minutes to calculate (for some reason, linear interpolation is missing from the Excel built-in function set). Inclusion of this interpolation function to replace the VBA code cut down the calculation time to a matter of seconds. Note that speed is not the only advantage; functions installed in an XLL can be permanently included in Excel so they are always available and easy to access.
Excel add-ins (particularly in C) are somewhat of a mystery, with documentation hard to come by. It was originally documented (sort of) in the Microsoft Excel Developer's Kit (available here). However, when originally developing this add-in, I found that the documentation was unclear about many of the important details. I have gotten so much from The Code Project -- I figured that this would be a good way for me to contribute back a little.
This particular add-in does linear interpolation on a set of data in Excel. Given a curve that is defined by a table of discrete (X,Y) value pairs, interpolation is the process of estimating a dependent (Y) value along the curve that is not necessarily at the defined points.
For example, given the chart below, in which an X and Y pair of values is known at each of the dots, linear interpolation will calculate the value of Yi corresponding to some arbitrary value Xi. Note that the term linear implies that the data points are connected by straight lines.
The function can also handle extrapolation by assuming the data continues in a straight line beyond the first two or last two points.
The syntax of the interpolation function is as follows:
Interp( X, Xarray, Yarray, extrapFlag )
where:
X
Xarray
Yarray
extrapFlag
true
false
The XLL is essentially a function that just waits to be called by Excel whenever the spreadsheet recalculates. The XLL will add new functions to the Excel function list. For example, the following function could be entered into a cell:
=INTERP(A1,B1:B10,C1:C10)
The function can be used in arbitrarily complex formulas, e.g.:
=2.564*D15-3.543*INTERP(A1,B1:B10,C1:C10)^2.7
And the function can be used as often as desired in the spreadsheet. Data are transmitted to and from Excel using the XLOPER struct, which really just encapsulates a variant structure with a C union declaration. A basic XLL framework is supplied by Microsoft here. This framework includes all the required include files, libraries, and some basic code to build an XLL (however, I did not find it very straightforward to use).
XLOPER
struct
union
The XLOPER declaration looks like this:
typedef struct xloper
{
union
{
double num; // xltypeNum
LPSTR str; // xltypeStr
WORD bool; // xltypeBool
WORD err; // xltypeErr
short int w; // xltypeInt
struct
{
WORD count; // always = 1
XLREF ref;
} sref; // xltypeSRef
struct
{
XLMREF far *lpmref;
DWORD idSheet;
} mref; // xltypeRef
struct
{
struct xloper far *lparray;
WORD rows;
WORD columns;
} array; // xltypeMulti
struct
{
union
{
short int level; // xlflowRestart
short int tbctrl; // xlflowPause
DWORD idSheet; // xlflowGoto
} valflow;
WORD rw; // xlflowGoto
BYTE col; // xlflowGoto
BYTE xlflow;
} flow; // xltypeFlow
struct
{
union
{
BYTE far *lpbData; // data passed to XL
HANDLE hdata; // data returned from XL
} h;
long cbData;
} bigdata; // xltypeBigData
} val;
WORD xltype;
} XLOPER
This struct can be used to pass floating point values, integers, strings, error codes, arrays, etc.
The XLL code is first initialized via the DllMain() entry point. This function is called by Windows only once, when the DLL is first loaded. In our case, this routine just initializes some Pascal strings (which require a byte count in the first byte). We only respond to the DLL_PROCESS_ATTACH reason code. Once the strings are initialized, the code just waits to be called by Excel.
DllMain()
DLL_PROCESS_ATTACH
The code is "hooked" into Excel using a registration process. Excel will call the function xlAutoOpen() when the XLL is added to Excel (via Add-in Manager, REGISTER command, VBA, etc.). The xlAutoOpen() routine then "registers" each function to Excel. Excel provides Excel4() and Excel4v() (varargs version) to provide access to a wide range of functionality. The first argument to this routine is a function code, which determines the behavior (and the remaining arguments). xlAutoOpen() first calls Excel4(xlGetName) to get the XLL name. This name is then used as an argument to the xlfRegister function code, which registers each of our functions. The xlfRegister option passes information about the name of each function, its passed parameter types, its return type, its help ID, a description of the function, and a description of each parameter. This information is used by Excel to give tooltip help when the function is entered in a spreadsheet, and helpful text when the insert function wizard is used. Take a look at the file 'interface.h' for a listing of the function information:
xlAutoOpen()
Excel4()
Excel4v()
Excel4(xlGetName)
xlfRegister
LPSTR functionParms[kFunctionCount][kMaxFuncParms] =
{
// function title, argument types, function name, arg names,
// type (1=func,2=cmd),
// group name (func wizard), Hot Key, help ID,
// function description,
// (repeat) description of each argument
{" Interp", " RRRRA", " Interp",
" x,xArray,yArray,extrapFlag", " 1",
" Interpolation Add-In", " ",
" interp.hlp!300",
" Performs linear interpolation. This is the general version"
"that can handle single values or arrays.",
" The X values to be interpolated. Can be a single value or"
" an array (each value is interpolated individually)",
" A table of values that define X. Must be sorted (increasing"
" or decreasing).",
" A table of values that define Y (for each X table value).",
" If TRUE, extrapolation beyond the table is allowed. If omitted"
" or FALSE, the result is truncated at the table bounds." },
{" InterpX", " RBRRA", "InterpX",
"x,xArray,yArray,extrapFlag", " 1",
" Interpolation Add-In", " ",
" interp.hlp!310",
" Performs linear interpolation. This version interpolates"
" only a single X value at a time.",
" The X value to be interpolated. Only a single value is allowed"
" to take advantage of Excel's 'implicit intersection'.",
" A table of values that define X. Must be sorted (increasing"
" or decreasing).",
" A table of values that define Y (for each X table value).",
" If TRUE, extrapolation beyond the table is allowed. if omitted"
" or FALSE, the result is truncated at the table bounds." },
};
The information passed to Excel is as follows:
Note that this add-in actually contains two versions of the Interp() function. The first, Interp(), is a general version that can handle Excel arrays for the 'X' input argument. That is, it can receive multiple X values and return multiple Y values (one for each X). This provides ultimate flexibility. However, there is a feature in Excel called "implicit intersection" that allows you to use range names in formulas in a simplistic way. For example, you might declare a spreadsheet range with the name 'foo' and the range C1:C10. If you then use the name 'foo' in a spreadsheet formula that expects a single value, Excel will automatically use the value that corresponds to the row or column that matches the formula location. That is, if the formula is in A3, then using the name 'foo' will yield a value from cell C3 (same row). If you are trying to use implicit intersection with the general Interp() routine, it will try to calculate an array of results since the whole array will be passed to Interp() for the X argument. Therefore, a second version called InterpX() is provided which only accepts a single value for the X argument, and therefore works with implicit intersection.
Interp()
Interp()
InterpX()
The other function of note in generic.c is the xlAutoAdd() function, which is called just once at the time that the add-in is added via the add-in manager. This gives you an opportunity to pop up a dialog box indicating copyrights, etc.
xlAutoAdd()
The Interp2.c file contains the function code itself. The code is well-commented, but here's the lowdown. Upon entry to Interp(), the code first verifies that the Xarray argument type is acceptable. Since it accepts the XLOPER (variant) type, it is necessary to make sure that the type the user entered to the function is reasonable. The type must be a reference array (xlTypeRef or xlTypeSRef) or a multi array (xlTypeMulti). Excel4(xlCoerce) is then called to "coerce" the data into a xlTypeMulti type so that the subsequent calculations only have to deal with one type. Next, a check is made to see if any of the Xarray data has not yet been calculated by the spreadsheet. If so, just return and wait for later. This procedure of data checking is repeated for the Yarray data. Next, a check is done to make sure that the X data are monotonically increasing or decreasing (sorted), and that the Xarray and Yarray data are all numeric values. The final check is on the X argument, which is similar to the Xarray and Yarray arguments.
xlTypeRef
xlTypeSRef
xlTypeMulti
Excel4(xlCoerce)
After all the input arguments have been verified, the interpolation calculations are done. This is just a process of finding which two values in the Xarray argument are the bounds around the X argument (for interpolation). For extrapolation, the code determines if the X argument is less than the first Xarray value or greater than the last one. The interpolation calculation is simple:
result = ( X - xlo ) / ( xhi - xlo ) * ( yhi - ylo ) + ylo;
//where:
// X = X argument
// xlo, xhi = bounding X values
// ylo, yhi = Y values corresponding to bounding X values
One more note of interest -- you may need to allocate memory to return values to Excel. This will then need to get freed up using the xlAutoFree() function. Excel will call this function if you return an XLOPER struct with the xlbitDLLFree bit set. Also, sometimes Excel will need to free up memory that it allocates when it passes parameters to you. You can free that memory using the Excel4( xlFree ) function call.
xlAutoFree()
xlbitDLLFree
Excel4( xlFree )
This XLL add-in includes a help file and an XLL file. To add it into Excel, put interp32.xll into the Program Files/Microsoft Office/Office10/Library folder. Put the help file interp.hlp into the Program Files/Microsoft Office/Office10 folder. This will make the help available from the Help button in the function wizard. To make the add-in available in Excel, select the "Interpolation Add-in" check box under the Tools/Addins menu. This only needs to be done once and the added functionality will remain available in Excel permanently (unless it is later disabled).
This code was originally written for Visual C++ 6.0. Part of the initialization process requires adding a byte count to the beginning of some static strings, which is allowed in Visual C++ 6.0, but causes an exception in later versions. This article has now been updated to correct that problem (plus two other minor problems). The solution file is now set up for Visual Studio 2002 .NET.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
__declspec(dllexport) LPXLOPER xarr ()<br />
{<br />
<br />
static XLOPER xlArray;<br />
XLOPER xlValues[4];<br />
int i;<br />
for (i = 0; i < 4; ++i)<br />
{<br />
carr[i]=i*2; <br />
xlValues[i].val.num = i+10;<br />
xlValues[i].xltype = xltypeNum;<br />
}<br />
//this will generate array with 4 element 10,11,12,13<br />
<br />
xlArray.xltype = xltypeMulti;<br />
xlArray.val.array.lparray = &xlValues;<br />
xlArray.val.array.rows = 1;<br />
xlArray.val.array.columns = 4;<br />
return (LPXLOPER) &xlArray;<br />
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/5263/Sample-Excel-Function-Add-in-in-C | CC-MAIN-2017-51 | refinedweb | 2,032 | 53.92 |
I keep gettingcompiling errors and i don't know why. the source code is copied directly from the book and as far as i can tell there aren't any errors. i'm using dev c++ as a compiler.
here's the code:
/*******************************************************\
* Password Probability Matrix * File: ppm_gen.c *
*********************************************************
* *
* Author: John
int singleval(char a)
{
int i, j;
i= (int)a;
if((i >= 46) && (i <= 57))
j = i - 46;
else if ((i >= 97) && (i <= 122))
j = i - 59;
return j;
}
int tripleval(char a, char b, char c)
{
return (((singleval(c)%4)*4096)+(singleval(a)*64)+singleval(b));
}
main()
{
char *plain;
char *code;
char *data;
int i, j, k, l;
unsigned int charval, val;
FILE *handle;
if (!(handle = fopen("4char.ppm", "w")))
{
printf("Error: Couldn't open file '4char.ppm' for writing.\n");
exit(1);
}
data = (char *) malloc(SIZE+19);
if (!(data))
{
printf("Error Couldn't allocate memory.\n");
exit(1);
}
plain = data+SIZE;
code = plain+5;
for(i=32; i<127; i++)
{
for(j=32; j<127; j++)
{
print("Adding %c%c** to 4char.ppm..\n", i, j);
for(k=32; k<127; k++)
{
for(l=32; l<127; l++)
{
plain[0] = (char)i;
plain[1] = (char)j;
plain[2] = (char)k;
plain[3] = (char)l;
plain[4] = 0;
code =crypt(plain, "je");
val = tripleval(code[2], code[3], code[4]);
charval = (i-32)*95 + (j-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val += (HEIGHT * 4);
charval = (k-32)*95 + (l-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val = HEIGHT + tripleval(code[4], code[5], code[6]);
charval = (i-32)*95 + (j-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val += (HEIGHT * 4);
charval = (k-32)*95 + (l-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val = (2 * HEIGHT) + tripleval(code[6], code[7], code[8]);
charval = (i-32)*95 + (j-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val += (HEIGHT * 4);
charval = (k-32)*95 + (l-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val = (3 * HEIGHT) + tripleval(code[8], code[9], code[10]);
charval = (i-32)*95 + (j-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
val += (HEIGHT * 4);
charval = (k-32)*95 + (l-32);
data[(val*WIDTH)+(charval/8)] |= (1<<(charval%8));
}
}
}
}
printf("finished.. saving..\n");
fwrite(data, SIZE, 1, handle);
free(data);
fclose(handle);
}
i know that the program is supposed to be used to crack passwords. it's an exercise in the book Hacking: The Art of Exploitation by Jon Erickson. these the errors that i get:
ln: message:
[warning]in function 'main'
83 [warning]assignment makes pointer without a cast
[warning]in function 'main'
[link error]undefined reference to 'print'
[link error]undefined reference to 'crypt'
any help that can be given on resolving these errors would be awesome! Thanks!
-=[the]Punisher=-
\"People should not be afraid of their governments. Governments should be afraid of their people.\" - V
I'm not good at programing so I'm just going try to figure it out using my logic. Shouldn't the ifs be inside the brackets because thats the block of code you're running...you got printf by itself...but what is it supposed to print, at what condition.
#include <stdio.h> = #studio.h?
I dont see you starting with Main.
Doublecheck your spelling.
Run it it debug mode and go through all the errors.
[link error]undefined reference to 'print'
This one's a simple typo - you meant to type:
printf("Adding %c%c** to 4char.ppm..\n", i, j);
but instead did:
print("Adding %c%c** to 4char.ppm..\n", i, j);
I think it's great that you're trying to start learning about security and programming through a hands-on approach, but I would suggest taking a few steps back and learning the basics of C++ or even programming in general before you tackle this stuff.
Good luck!
As for crypt() there might be another file you need to include, since the function is not defined anywhere within your program. The warnings can be ignored for now, but if the program does not run as expected look over.
The 'assignment makes pointer wihtout a cast' is a warning regarding the lack of a descriptor to indicate type conversion. Some compilers take care of this upon compilation, but the warning is still given as to indicate that the code might not be cross-compatible [some compilers might execute the code differently from what is expected].
Does the book specify on what platform the code is meant to be run on, or is it ANSI standardized? If it is the latter then everything's fine, however if not you should be carefull about includes. <stdio.h> isn't a required include in Linux using gcc but Windows compilers always required it [as far as I've used them].
Anyway I hope this helps... again, check for an include for the crypt() function.
/\\
I belive i can offer some assistance here... the included library unistd contains crypt. The only thing I can think of is that your compiler either does not have that header file, or during compile you need to add a library (i.e. -lm , this includes the math library). If your running windows, it isnt likely you have this header file... considering its a GNU lib. I usually just include any library includes in my make file to keep me from having to remember what libraries I need every time i run the program. Hope this helped.
Good book by they way :-)
-Shell_Coder
Ive also had trouble compiling with Dev C++
if the <crypt.h> header file is needed, where can i download it? I cant run the program until the program is compiled, but Dev won't let me compile it until all of the errors are worked out. I'm trying to compile it under windows, but i'll try to compile it under Mandrake to see what happens. thanks
-=[the]Punisher=-
nah... you dont need crypt.h you just need to compile it with gcc or... you can use the windows crypt api. If your interested in making it work for windows i recommend searching for Crypt on MSDN's library, or you could always write your own crypt function ;-). However, I recommend putting on your linux box.
-Shell_Coder
yay... now i get to look up syntax! that's always fun!
-=[the]Punisher=-
Forum Rules | http://www.antionline.com/showthread.php?257817-How-to-disable-Windows-NT-2000-XP-file-system-cache&goto=nextoldest | CC-MAIN-2015-48 | refinedweb | 1,068 | 72.56 |
QMC_DESC(3) Library Functions Manual QMC_DESC(3)
QmcDesc - container for a metric description
#include <QmcDesc.h> CC ... -lqmc -lpcp
A QmcDesc object is a container for a metric descriptor (pmDesc, see PMAPI(3)) and units.
~QmcDesc(); Destructor. QmcDesc(pmID pmid); Construct a container for the descriptor for pmid. The descriptor is obtained from the current PMAPI(3) context using pmLookupDesc(3).
int status() const; A status less than zero indicates that the descriptor could not be obtained, the PMAPI(3) error is encoded in the result. pmID id() const; Return the pmID for this descriptor. pmDesc desc() const; Return a copy of the actual metric descriptor. const pmDesc *descPtr() const; Return a pointer to the actual descriptor to avoid using a pointer to a temporary.
const QString &units() const; The complete unit string for this descriptor. const QString &abvUnits() const; The unit string using abbreviations. bool useScaleUnits() const; Returns true if the units have been set by a call to QmcDesc::setScaleUnits. const pmUnits &scaleUnits() const; Return the scaling units for this descriptor. void setScaleUnits(const pmUnits &units); Set the scaling units for this descriptor.
PMAPI(3), QMC(3), pmflush(3), pmLookupDesc. SGI QMC_DESC(3)
Pages that refer to this page: QMC(3), QmcContext(3), QmcIndom(3) | http://man7.org/linux/man-pages/man3/QmcDesc.3.html | CC-MAIN-2017-43 | refinedweb | 207 | 50.12 |
My current code looks like this:
final String[] value = new String[1];
SwingUtilities.invokeAndWait(new Runnable() {
public void run() {
value[0] = textArea.getText();
}
});
The use of a final array seems like a bit of a hack. Is there a more elegant solution?
I've done a lot of searching, but I don't seem to be able to find anything to do this, which surprises me. Although I keep coming across
SwingWorker, but I'm not sure that's suitable in this case?
I'm assuming that
JTextArea.getText() isn't thread-safe.
Thanks.
All problems can be solved by adding another layer of indirection (except when you have too many layers :P).
public class TextSaver implements Runnable { private final JTextArea textArea; private final ObjectToSaveText saveHere; public TextSaver(JTextArea textArea, ObjectToSaveText saveHere) { this.textArea = textArea; this.saveHere = saveHere; } @Override public void run() { saveHere.save(textArea.getText()); } }
I'm not going to provide the code for ObjectToSaveText, but you get the idea. Then your SwingUtilties call just becomes:
SwingUtilities.invokeAndWait(new TextSaver(textArea, saveHere));
You can retrieve the saved text from your saveHere object.
I find that in 99% of my Swing code that I'm often accessing a JTextArea in response to a user action (the user has typed, clicked a button, closed a window, etc). All of these events are handled through event listeners which are always executed on the EDT.
Can you provide more detail in your use case?
UPDATE BASED ON USE CASE: Can the user change the text after the server has been started? If yes, then you can use the listener style mentioned previously. Make sure to be careful with your concurrency.If the user can't change the text, pass the text to the server thread in response to the button click (which will be on the EDT) and disable the text box.
LAST UPDATE:
If the client connections are persistent and the server continues to send updates you can use the listener model. If not, two copies of the data might be redundant. Either way, I think you'll end up having more threading work (unless you use selectors) on yours hands than worrying about copying one data value.
I think you've got a plethora of info now, good luck.
I encountered the same need, to get swing component value, but from calls of a javascript engine within my application. I slapped together the following utility method.
/** * Executes the specified {@link Callable} on the EDT thread. If the calling * thread is already the EDT thread, this invocation simply delegates to * call(), otherwise the callable is placed in Swing's event dispatch queue * and the method waits for the result. * <p> * @param <V> the result type of the {@code callable} * @param callable the callable task * @return computed result * @throws InterruptedException if we're interrupted while waiting for the * event dispatching thread to finish executing doRun.run() * @throws ExecutionException if the computation threw an exception */ public static <V> V getFromEDT(final Callable<V> callable) throws InterruptedException, ExecutionException { final RunnableFuture<V> f = new FutureTask<>(callable); if (SwingUtilities.isEventDispatchThread()) { f.run(); } else { SwingUtilities.invokeLater(f); } return f.get(); }
I'm sure you can figure out how to use this, but I like to show how especially brief it is in Java 8:
String text = <String>getFromEDT(() -> textarea.getText());
Edit: changed method to do safe publication | http://m.dlxedu.com/m/askdetail/3/d4a40180b498e403c65c25f39a700781.html | CC-MAIN-2018-47 | refinedweb | 559 | 65.42 |
When you’re writing Python tutorials, you have to use Monty Python references. It’s the law. On the 40th anniversary of the release of Monty Python’s Life of Brian, I wanted to share this example that I made for
collections.defaultdict that doesn’t fit in the tutorial I’m writing. It comes as homage to the single Eric the Half a Bee.
from collections import defaultdict class HalfABee: def __init__(self): self.is_a_bee = False def value(self): self.is_a_bee = not self.is_a_bee return "Is a bee" if self.is_a_bee else "Not a bee" >>> eric = defaultdict(HalfABee().value, {}) >>> print(eric['La di dee']) Is a bee >>> print(eric['La di dee']) Not a bee
Dictionaries that can return different values for the same key are a fine example of Job Security-Driven Development. | https://www.sicpers.info/2019/04/half-a-bee/ | CC-MAIN-2020-10 | refinedweb | 135 | 60.82 |
Haven.
-David
On Jan 31, 2007, at 11:05 PM, David Blevins wrote:
> Reposting this info with more details.
>
> So the 10,000 foot perspective is that we are creating a conversion
> tool to convert the prior openejb-jar.xml into the new set of
> descriptors (geronimo-openejb.xml, new openejb-jar.xml, jpa entity-
> mappings.xml). It is expected that all existing plans will work
> and no one will have to or even *should* migrate just yet.
>
> We are doing this for two reasons:
>
> 1. There is significant investment in current descriptor format.
> These come to mind:
> - TCK
> - DayTrader
> - iTests
> - Samples
> - All documentation to date
> - All user-land apps to date
>
> 2. The new plans will not be stable for a good long while.
> Primary reasons are:
> - Continued work in integration (security, webservices,
> corba, etc.)
> - Continued work implementing JavaEE 5
>
>
> So the big motivation for the conversion tool is that with some
> hard work from Dain and I up front (and for a while really), we can
> save a few hundred developer and user hours over the next couple
> months. We're very close to having something running and hope to
> see something basically working by the end of the week.
>
> If we're really lucky after this week we can hide all the change
> under the conversion tool and no one (but the people working on and
> maintaining the conversion tool that is) will have to feel the pain
> or frustration while things move underneath. As we enable things
> like security people won't have to update their plans, we'll just
> add more conversion logic to port that information over and put it
> into action.
>
> I can say that the spirit of the new descriptor files is to fully
> embrace the "anti-descriptor" focus of Java EE 5 and to be as small
> and contain as little requirements as possible. Hopefully when the
> day comes to move over (not soon), it will more be a matter of
> deleting than adding. There will definitely be lots of
> experimentation in that area, so we'll see.
>
> As always, thoughts and questions very welcome and encouraged.
>
> -David
>
>
> On Jan 31, 2007, at 3:13 PM, Dain Sundstrom wrote:
>
>> I just checked in the working converter for CMP beans. There is a
>> fairly extensive test suite that converts the OpenEJB2 itests,
>> daytrader and the OpenEJB2 CMR mappings tests. For example, here
>> are the mappings for daytrader
>>
>>
>> ejb-jar.xml
>> -----------
>>
>> container/openejb-core/src/test/resources/convert/oej2/cmp/
>> daytrader/daytrader-ejb-jar.xml
>>
>> openejb-jar.xml
>> ---------------
>>
>> container/openejb-core/src/test/resources/convert/oej2/cmp/
>> daytrader/daytrader-openejb-jar.xml
>>
>>
>> And the final JPA mappings
>> --------------------------
>>
>> container/openejb-core/src/test/resources/convert/oej2/cmp/
>> daytrader/daytrader-orm.xml
>>
>>
>>
>> I hope to have the converter fully integrated into Geronimo in by
>> the end of the week.
>>
>> -dain
>>
>> On Jan 21, 2007, at 11:01 PM, David Blevins wrote:
>>
>>> On Jan 19, 2007, at 5:07 PM, David Blevins wrote:
>>>
>>>> Keep your ejb related plan files intact or a copy of them at
>>>> least. I'm going to try and write a conversion tool that will
>>>> at least handle trivial apps. A non-trivial app would be one
>>>> with CMPs. The new mapping.xml format for cmps is the jpa
>>>> mapping.xml and converting that will be a little more work.
>>>>
>>>> Nothing done yet, just announcing intentions.
>>>
>>> Ok, have a little progress on this. So far I am able to convert:
>>>
>>> - <environment> and children
>>> - <security> and children
>>> - <gbean> and children
>>> - <message-destination> and children
>>> - <persistence-context-ref> and children
>>>
>>> To see same converted document, look here:
>>>
>>> source:
>>> openejb3/container/openejb-jee/src/test/resources/openejb-jar-2-
>>> full.xml
>>> result:
>>> openejb3/container/openejb-jee/src/test/resources/geronimo-
>>> openejb-converted.xml
>>>
>>> It was a bit of work getting JAXB2 to work with our schemas
>>> because of duplicated elements combined with the fact that we
>>> allow invalid xml (i.e. no namespace at all). So I actually had
>>> to write a invalid 2 valid xml converter before I could get
>>> started on the openejb-jar.xml to geronimo-openejb.xml converter.
>>>
>>> Things should go faster from here. Will hack more in the morning.
>>>
>>> Not sure what the effort is going to be to convert the cmp
>>> information as that is a openejb-jar.xml 2 jpa mapping.xml
>>> conversion. Hoping Dain might have some input on that.
>>>
>>>
>>> -David
>>>
>>
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200702.mbox/%3C47BF1114-BB34-48B2-A96A-B0FE9FEE2BDC@visi.com%3E | CC-MAIN-2016-26 | refinedweb | 734 | 65.42 |
Today, let’s take a look at how to deploy our react app to Netlify, including setting up continuous deployment.
Continuous Deployment (CD) is a software release process that uses automated testing to validate if changes to a codebase are correct and stable for immediate autonomous deployment to a production environment.
Why Netlify?
Netlify is an all-in-one platform for running web projects. This means you can use Netlify for hosting most if not all your web projects. Netlify is simple to use and set up which works perfectly when you have a basic website you want to get up and open for the world to use quickly.
This post will explain step by step to deploy our react project to Netlify as the different ways that Netlify allows us to do that.
React Project Example
First of all, we need to have a react project example to deploy to Netlify. So you can use a previous project or create a new one with the next commands.
We will use the package create-react-app that allows us to get started a project in just seconds.
npx create-react-app react-project-example
Note: react-project-example is the project's name can be changed as you prefer.
After running the command, you will create a folder with the name react-project-example or the name you prefer. The next step will be to make sure that the application is running successfully. To start the application need to use the next command
npm start
After that, you can enter in your browser and will see something like that, congratulations! 🎉 🎊 👏
Apply a Change in the project
In addition, you can apply a change to the react project to make sure that the changes are applied for example
src/App.js
import logo from './logo.svg'; import './App.css'; function App() { return ( <div className="App"> <header className="App-header"> <img src={logo} <p> Hello World #1 </p> </header> </div> ); } export default App;
With the change, the page will look like
You can find the code that we do here in this GitHub repository how-to-deploy-react-project-to-netlify-project-example
Netlify Deployment
First of all, you need to create a Netlify account that is Free on the Starter Pack.
There are two ways in Netlify to do a deployment in the case of a React project
- Manual deployment
- Automatic Deployment
Manual Deployment
This is the easiest way to deploy our React project in Netlify. Basically, consist of drag and drop the build generated with our app to the Netlify page.
Before doing that we need to do some things, we will need to generate the build of our react application, we need to execute the next command
npm run build
Once the command was executed successfully you will see the build folder in the project.
The next step is to open the react project folder in finder if you're on a Mac or file explorer in the case of Windows and drag and drop the build folder into the empty box on the Netlify page.
That is it. after a few seconds, the application should be deployed. You will see and URL, for example, something like (that is a URL generated by Netlify) that we can share with people to access the site.
Automatic Deployment (from Git)
As mentioned before the Netlify manual deployment is the easiest way to deploy, however, the simplest is not always the best for a software project. In the case that we make a change to our source code won't be reflected on the web page until will be done a new manual deployment, which could provoke a waste of time for the team members and the project.
*How do we deploy our applications? *
When we use automatic deployment all the changes made to the code are reflected on the deployed site, each time we push to the repository.
Before doing that we need to add the project to a repository. Go to a version control system and create a new empty repository for your app can be public or private there isn't a problem with it.
In the case of Github follow this guide Adding an existing project to GitHub using the command line
Once the repository is in a version control system as GitHub, GitLab, or Bitbucket we can start with the Netlify automatic deployment setup. Click in the button New Site from Git
You will need to authorize Netlify access to your version control system, after that you can search and choose the specific repository that wants to be deployed.
Note: In this example image was selected GitHub
The next step will be to set some parameters settings related to the build and branches
- Branch to deploy
- Build command
- Publish directory (build directory)
- Advanced
- Environment variables
- Functions settings (serverless functions)
All of them are filled automatically but in some cases could be required to be changed, After that then click on the Deploy button, and our app will be deployed.
Testing our Netlify Automatic Deployment
Now, whenever commit what we do to the branch that we set in the setup step we push to the version control system, which will produce that Netlify automatically builds our app for us and update our deployed version.
To check this we will apply a quick change to our react application in src/App.js
import logo from './logo.svg'; import './App.css'; function App() { return ( <div className="App"> <header className="App-header"> <img src={logo} <p> Hello World #2 from automatic deployment </p> </header> </div> ); } export default App;
Once those changes are made. After commit and push your code to the version control system will generate a new deployment.
After that a little time you should see your changes reflected in your Netlify website.
Personalize some Netlify configurations
There are some extra configuration that we can apply to our pipeline as
- Disable Automatic Deployments
- Change website domain
Disable Automatic Deployments
In some projects, the deployment needs to be done at specific times, timezone, or with some different rules to make sure the build quality with testing and other practices that could be applied to your project.
Basically, we just need to select the Deploys tab and click on Stop auto publishing, that with provoke that our changes won't be reflected automatically until we decide to launch a new deployment manually from the Netlify website.
We have two options to do that. The first one is to click on the last unpublished build and select the Publish deploy button. The second one is the Trigger deploy option in the deploys screen and just repeat the previous steps in the other option.
Once you click in Publish deploy, the change will be reflected on the website.
Change website domain
As you checked the default Netlify domain is generated in a random way based in an algorithm designed by Netlify. In the case that we want to change that domain, we need to go to Domain settings
Once in that section select Options > Edit site name and type a new site name for your website in Netlify the site name determines the default URL. After that, the domain URL will change based on the site name that was chosen.
Also, you can bring a custom domain name that you already own, or buy a new one with Netlify.
Conclusion
Netlify is an awesome platform for running our web projects, as you see during this post we explore some of the features related to deployments, so don't be afraid to put it into practice you learned it. I hope it will be useful for everyone.
I will be writing some other posts related to Netlify there are several features that we can explore and integrate into our projects, also let me know if you have an idea of a topic that I can talk about in the next posts.
Let me know in the comments recommendations or something else that can be added, I will update the post based on that thanks! 👍
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/brayanarrieta/how-to-deploy-your-react-project-to-netlify-step-by-step-3a06 | CC-MAIN-2021-31 | refinedweb | 1,351 | 54.36 |
A utility class for referencing the alignment characteristics in layout_attributes_t.
#include <adobe/layout_attributes.hpp>
List of all members.
Definition at line 28 of file layout_attributes.hpp.
aligned either left or top depending on axis.
aligned either right or bottom depending on axis.
aligned centered within available space. Guides are supporessed
extra space in the parent container's layout is distributed evenly among those with this alignment. First item being forward aligned and last item being right aligned. Guides are suppressed.
Expands to fill the available space. Identical to align_reverse_fill unless item is not a container and contains a guide, in which case, the guide will snap to other forward guides, and the item will fill forward. Normally used for edit_text, edit_number, and popup.
Expands to fill the available space. Identical to align_forward_fill unless item is not a container and contains a guide, in which case, the guide will snap to other reverse guides, and the item will fill reverse. This alignment is the conterpart to align_forward_fill, intended for implementing support for right-to-left languages. See Implementing Right-To-Left Layout.
default alignment, alignment is picked up by the parent.
Identical to align_forward_fill.
Identical to align_foward.
Identical to align_reverse.
Definition at line 29 of file layout_attributes.hpp.
Use of this website signifies your agreement to the Terms of Use and Online Privacy Policy.
Search powered by Google | https://stlab.adobe.com/structadobe_1_1layout__attributes__alignment__t.html | CC-MAIN-2017-17 | refinedweb | 227 | 61.83 |
Leave it to Microsoft to make all my hard work worthless. A while ago, I posted a tutorial on how to use named pipes in C# and .NET. Back then it took a lot of hard work and a lot of Windows API calls to get named pipes integrated into a .NET 2.0 application. Now, thanks to .NET 3.5, named pipes are as easy as importing System.IO.Pipes.
If you want a named pipe server, all you have to do is create some instances of NamedPipeServerStream to handle each client connection. I stole the following code straight from the MSDN documentation.
using System; using System.IO; using System.IO.Pipes; class PipeServer { static void Main() { using (NamedPipeServerStream pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.Out)) { Console.WriteLine("NamedPipeServerStream object created."); // Wait for a client to connect Console.Write("Waiting for client connection..."); pipeServer.WaitForConnection(); Console.WriteLine("Client connected."); try { // Read user input and send that to the client process. using (StreamWriter sw = new StreamWriter(pipeServer)) { sw.AutoFlush = true; Console.Write("Enter text: "); sw.WriteLine(Console.ReadLine()); } } // Catch the IOException that is raised if the pipe is // broken or disconnected. catch (IOException e) { Console.WriteLine("ERROR: {0}", e.Message); } } } }
Just like most .NET streams, the NamedPipeServerStream supports both synchronous and asynchronous communication. The stream is also full duplex, meaning it can be read and written to at the same time. Getting the overlapped IO working was one of the biggest challenges to overcome for named pipes in previous versions of .NET.
Making a client is just as easy as the server. The Pipes namespace has another stream, called NamedPipeClientStream, which does all the work for you.(); } }
When I get a chance to use these objects a little more, I'll post a more in-depth tutorial. Personally, I'd like to see a set of objects similar to the TcpClient and TcpServer classes for handling named pipes, but I guess I'll have to wait a little longer for that. | http://tech.pro/tutorial/752/dotnet-35-adds-named-pipes-support | CC-MAIN-2014-52 | refinedweb | 330 | 69.07 |
EXP(3M) EXP(3M)
NAME
exp, expm1, exp2, exp10, log, log1p, log2, log10, pow, compound, annu-
ity - exponential, logarithm, power
SYNOPSIS
#include <<math.h>>
double exp(x)
double x;
double expm1(x)
double x;
double exp2(x)
double x;
double exp10(x)
double x;
double log(x)
double x;
double log1p(x)
double x;
double log2(x)
double x;
double log10(x)
double x;
double pow(x, y)
double x, y;
double compound(r, n)
double r, n;
double annuity(r, n)
double r, n;
DESCRIPTION
exp() returns the exponential function e**x.
expm1() returns e**x-1 accurately even for tiny x.
exp2() and exp10() return 2**x and 10**x respectively.
log() returns the natural logarithm of x.
log1p() returns log(1+x) accurately even for tiny x.
log2() and log10() return the logarithm to base 2 and 10 respectively.
pow() returns x**y. pow(x ,0.0) is 1 for all x, in conformance with
4.3BSD, as discussed in the
compound() and annuity() are functions important in financial computa-
tions of the effect of interest at periodic rate r over n periods.
compound(r, n) computes (1+r)**n, the compoundu-
ity() are computed using log1p() and expm1() to avoid gratuitous inac-
curacy for small-magnitude r. compound() and annuity() are not defined
for r <= -1.
Thus a principal amount P0 placed at 5% annual interest compounded
quarterly for 30 years would yield
P30 = P0 * compound(.05/4, 30.0 * 4)
while a conventional fixed-rate 30-year home loan of amount P0 at 10%
annual interest would be amortized by monthly payments in the amount
p = P0 / annuity( .10/12, 30.0 * 12)
SEE ALSO
matherr(3M)
DIAGNOSTICS
All these functions handle exceptional arguments in the spirit of
ANSI/IEEE Std 754-1985. Thus for x == +-0, log(x) is - with a division
by zero exception; for x < 0, including -, log(x) is a quiet NaN with
an invalid operation exception; for x == + or a quiet NaN, log(x) is x
without exception; for x a signaling NaN, log(x) is a quiet NaN with an
invalid operation exception; for x == 1, log(x) is 0 without exception;
for any other positive x, log(x) is a normalized number with an inexact
exception.
In addition, exp(), exp2(), exp10(), log(), log2(), log10() and pow()
may also set errno and call matherr(3M).
24 March 1988 EXP(3M) | http://modman.unixdev.net/?sektion=3&page=exp2&manpath=SunOS-4.1.3 | CC-MAIN-2017-30 | refinedweb | 401 | 51.48 |
Here is the new version of the algorithm:
Voilà !
There are many refinements since the last versions, helping to solve the issues found in the last logs, while still maintaining simplicity and efficiency, both with hardware and software implementations.
- The same algorithm generates both short and full length results. The 16-bit result simply truncates the 32-bit result. The short result misses the D input but if you really cared, you wouldn't use 16 bits only.
- A second carry bit, for the mixing lane, preserves as much entropy as possible. Now both lanes use EAC mod 65535.
- All adders have 2 inputs and a carry-in. This simplifies hardware design, as well as the use of "Add with Carry" opcodes (though x86 only has 1 carry flag so it will probably use LEA a lot).
- The init values now integrate the size of the buffer, which is not limited to 64Ki words.
- The magic init value 0xABCD4567 should boost early avalanche, even with very low values.
- The 0x7 is mixed with the first datum, it is odd and high density, increasing carry propagation,
- The other numbers have medium or more bit density
- 0x4567 + 0xABCD = 0xF134 so the first round should not cancel both addends (and the many next, to be confirmed).
Keep in mind that this diagram shows the "principles", the algorithm could be implemented "as is" (like the reference below) but several optimisations are possible in software for example.
uint32_t Pisano_Checksum32( uint16_t *buffer, // points to the data to scan uint32_t words // number of 16-bit WORDS to scan ) { uint32_t Y = words + MAGIC_INIT, X = Y >> 16, // extract the MSB C = 0, // The "magic number" takes care of proper D = 0; // initialisation so C & D can be left cleared Y &= 0xFFFF; while (words) { // C, D : range 0-1 // X, Y : range 0-FFFFh C += X + Y; // result range : 0-1FFFFh X = Y + *buffer + D; // result range : 0-1FFFFh buffer++; Y = C & PISANO_MASK; // result range : 0-FFFFh C = C >> PISANO_WIDTH; // result range : 0-1 D = X >> PISANO_WIDTH; // result range : 0-1 X = X & PISANO_MASK; // result range : 0-FFFFh words--; } C += X + Y; // result range : 0-1FFFFh return C+((X+D) << PISANO_WIDTH); // range : 0-FFFFFFFFh } uint16_t Pisano_Checksum16( uint16_t *buffer, // points to the data to scan uint32_t words // number of 16-bit WORDS to scan ) { return (uint16_t)(Pisano_Checksum32(buffer, words)); }
Now, there is a lot of work to simplify the loop body while keeping the exact same external behaviour.
Discussions
Become a Hackaday.io Member
Create an account to leave a comment. Already have an account? Log In. | https://hackaday.io/project/178998-peac-pisano-with-end-around-carry-algorithm/log/196354-the-new-version-of-peac16x2 | CC-MAIN-2021-43 | refinedweb | 426 | 64.24 |
!
Well, everybody is talking about Silverlight these days, so I would like to put couple of cents here and share with you extremely simple techique how it could be useful to mojoLovers.
I will talk about Slide Show/Image Gallery only, but the same technique may be usefull in other cases also.
I googled the Web looking for nice and attractive Image Gallery for my project kaskad-dc.spb.ru (that is powered by mojoPortal of cause! ) and found Vertigo SlideShow control (live example, manual and download instructions) here:
Briefly speaking this is Open Source control for Silverlight 1.0 (pure JavaScript, that's amazing!) that was developed by commercial organisation with huge creativity and with the cutting edge quality. I was really imressed and I would like to thank them in public.
Since it is Open Source and since it is free I did following:
1. I put files Silverlight.js and SlideShow.js in /ClientScript folder of the site. (They both are provided with SlideShow release)
2. I followed simple 3-step Quick Start Guide to configure SlideShow control.
You may see the result here: kaskad-dc.spb.ru/gallery.aspx
If you love it, take a deep breath - I will jump 2 steps further and continue with some boring C# implementation details.
"Better is the worst enemy of good", so after first couple hours of joy and excitement I have realised, that publishing of real images in real site is a little bit boring task. Since currently I am playing with Silverlight version 2, then after some googling I found Vertigo SlideShow2 control developed with C# specially for this newer version. New control is able to play video as well, so one can mix images with something more exciting.
It is Open Source and free also, so deployment steps were following:
1. I put file Vertigo.SlideShow.xap in /ClientBin folder of the site.
2. I followed 3-step configuration process which is even easier and more flexible for second version.
3. I realilsed, that Html Content feature of mojoPortal is more than enough to embed SlideShow2 control into web-page, because it is just <object> tag with some specific parameters.
Ta-daaa! You may see the result here: kaskad-dc.spb.ru/gallery.aspx
The most significant drawback of Silverlight 2 is it's "cross platform" "compatibility" with different OSes and browsers. I personally live with FireFox and Ubuntu, so I had to install Moonlight from the Mono project. But this is not enough! Full XAP support will be ready on September, 2009 only (the best wishes to the guys!)
I have decided to expand cross-platform-cross-browser abilities of SlideShow and mix them in single mojoPortal feature.
I had not much time to create some kind of redistributable package, but I want to share my SlideShow.ascx and SlideShow.ascx.cs files right here. (wow, it is like "feature request": "Joe, please, implement file attachments to our Forum feature! Thank you!" ) This control may be in a minute registered on working portal by hands in usual way.
Best Regards,
Anton Kytmanov
kyta.spb.ru
SlideShow.ascx:
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="SlideShow.ascx.cs" Inherits="Vertigo.SlideShow.SlideShow" %>
SlideShow.ascx.cs:
using System;
using System.Collections;
using System.Configuration;
using System.Globalization;
using System.Data;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using mojoPortal.Web;
namespace Vertigo.SlideShow {
public partial class SlideShow : SiteModuleControl {
#region Constants
private const string silverlight2_ObjectFormat = @"
<object type=""application/x-silverlight-2""
data=""data:application/x-silverlight-2,""
width=""{0}"" height=""{1}"">
<param name=""background"" value=""{2}"" />
<param name=""source"" value=""{3}"" />
<param name=""initParams"" value=""{4}"" />
<a href="""" style=""text-decoration: none;""><img src="""" style=""border-style: none;"" alt=""Get Microsoft Silverlight"" /></a>
</object>
";
private const string silverlight1_Object = @"
<script type=""text/javascript"">
new SlideShow.Control(new SlideShow.XmlConfigProvider());
</script>
";
#endregion Constants
private uint width = 530;
private uint height = 400;
private string bgColor = "white";
private string pathToXAP = @"/ClientBin/Vertigo.SlideShow.xap";
private string initParams = @"ConfigurationProvider=LightTheme,DataProvider=XmlDataProvider;Path=/Data/Sites/1/FolderGalleries/Data2.xml";
private string controlPresentationInHtml = "";
public SlideShow() : base()
{
controlPresentationInHtml = string.Format(CultureInfo.InvariantCulture,
silverlight2_ObjectFormat,
width, height, bgColor, pathToXAP, initParams);
}
protected void Page_Load (object sender, EventArgs e) {
// If Windows...
if (Request.Browser.Platform.ToUpperInvariant().StartsWith("WIN")) {
// Unknown;
// Win95; Win98; Windows NT 5.0 (Windows 2000); Windows NT 5.1 (Windows XP); WinNT (all other versions of Windows NT); Win16; WinCE
// Mac68K; MacPPC
// UNIX
// WebTV
controlPresentationInHtml = string.Format(CultureInfo.InvariantCulture,
silverlight2_ObjectFormat,
width, height, bgColor, pathToXAP, initParams);
return;
}
// Prepare JavaScript
controlPresentationInHtml = silverlight1_Object;
string</script>";
if (!Page.ClientScript.IsClientScriptBlockRegistered ("Silverlight")) {
Page.ClientScript.RegisterClientScriptBlock (typeof (SlideShow), "Silverlight", myScript, false);
}</script>";
if (!Page.ClientScript.IsClientScriptBlockRegistered ("VertigoSlideShow")) {
Page.ClientScript.RegisterClientScriptBlock (typeof (SlideShow), "VertigoSlideShow", myScript, false);
}
}
protected override void Render (HtmlTextWriter writer) {
writer.Write (controlPresentationInHtml);
}
}
}
Hi Anton,
Looks great! I've seen that Vertigo Slide Show popping up everywhere lately, its very cool. If you'd like to contribute this to the project, send the files for this to me at joe dot audette at g mail dotcom and agree to license it under the same CPL as mojoPortal, I'll include it in the next release and list you in the credits.
Best,
Joe
Was this added to mojo? If not, I'll just go ahead and add it to my site manually.
-Joe
Hi Joe,
No I think this one drifted off my radar at a time when I was very busy, thanks for bringing it back up.
I will look into adding this soon but its not in time for the coming release which I hope to make no later than Monday.
Hi All,
I've integrated the Vertigo Silverlight player into mojoPortal in the Image Gallery feature and created a new Flickr Slideshow feature as well.
Its on demo.mojoportal.com now. the Flickr part is hard to demo because it requires a Flickr username and api key. It works on my local machine but I'm not going to enter my credentials into the module settings on a public demo.
If you use the regular Image Gallery Feature and upload some images then go to the feature instance settings and enable the Silverlight Slide Show you can see it. I must say it is pretty sweet! But the Silverlight pugin is not yet as ubiquitous as I would like.
Anyway, its a really nice user experience with Silverlight enabled.
Cheers, | https://www.mojoportal.com/Forums/Thread.aspx?pageid=5&t=2309~-1 | CC-MAIN-2022-05 | refinedweb | 1,080 | 50.63 |
Shipwright::Backend::Base - Base Backend Class
Base Backend Class
the constructor
initialize a project you should subclass this method, and call this to get the dir with content initialized
import a dist.
A wrapper around svn's commit command.
regenerate the dependency order.
return a dependency graph in graphviz format
get or set the dependency order.
get or set the map.
get or set the sources map.
get or set flags.
get or set version.
get or set branches.
get or set known failure conditions.
get or set refs
return the hashref of require.yml for a dist.
Check if the given repository is valid.
you should subclass this method, and run this to get the file path with latest version
get or set test_script for a project, i.e. /t/test
trim dists
update refs.
we need update this after import and trim
return true if has branch support
for vcs backend, we made a local checkout/clone version, which will live here
sunnavy
<sunnavy@bestpractical.com>
Shipwright is Copyright 2007-2015 Best Practical Solutions, LLC.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/dist/Shipwright/lib/Shipwright/Backend/Base.pm | CC-MAIN-2016-50 | refinedweb | 197 | 77.94 |
IMPLEMENTING CORDIC ALGORITHMS
A single compact routine for computing transcendental functions
Pitts Jarvis
Pitts Jarvis is a senior engineer at 3Com Corporation. He can be reached at 1275 Martin Ave., Palo Alto, CA 94301.
Efficiently computing sines, cosines, and other transcendental functions is a process about which many programmers are blissfully ignorant. When these values are called for in a graphics or CAD program, we usually rely on a call to the compiler's run-time library. The library either derives the necessary values in some mysterious manner or calls the floating-point coprocessor to assist in the task.
The CORDIC (COordinate, Rotation DIgital Computer) family of algorithms is an elegant, efficient, and compact way to compute sines, cosines, exponentials, logarithms, and associated transcendental functions using one core routine. These truly remarkable algorithms compute these functions with n bits of accuracy in n iterations -- where each iteration requires only a small number of shifts and additions. Furthermore, these routines use only fixed-point arithmetic. Using these algorithms, you can cast your entire graphics application into fixed-point, and thus avoid the cost of run-time conversion from fixed- to floating-point representation and back.
Even if you don't plan on recasting your application into fixed-point, you just might be curious how your floating-point coprocessor works. The Intel numerics family (8087, 80287, and 80387) all use Cordic algorithms, in a form slightly different than described here, to compute circular functions. The Intel implementations are described by R. Nave1 and A. K. Yuen2.
The implementations may be contemporary, but the algorithms are not new. J. E. Volder3 coined the name in 1959. He applied these algorithms to build a special-purpose digital computer for real-time airborne navigation. D. S. Cochran4 identifies their use in the HP-35 calculator in 1972 to calculate the transcendental functions.
Mathematical Manipulation
If we have a vector [x, y], we can rotate it through an angle a by multiplying it by the matrix Ra, defined in Example 1(a). Explicitly doing the matrix multiplication yields the equation in Example 1(b).
If we choose x = 1 and y = 0 and multiply that vector by Ra we are left with the vector [cos a, sin a].
Multiplying by two successive rotation matrices; Ra and Rb rotates the vector through the angle a + b, or more formally RaRb - Ra+b. If we choose to represent the angle a as a sum of angles ai for i = 0 through n (see Example 1(c) , then we can rotate the vector through the angle a by multiplying a series of rotation matrices Rao, Ra1, . . . Ran.
By picking the ai carefully, we can simplify the arithmetic. Notice that we can rewrite the rotation matrix by factoring out cos a as shown in Example 1(d). If we pick ai such that tan ai = 2-i for i = 0 through n, all of the multiplications by tan ai become right shifts by i bits.
Now we need to specify an algorithm so that we can represent a as the sum of the ai. Initialize a variable, z, to a. This z will be a residue quantity, which we are trying to drive to zero by adding or subtracting ai at the i-th step. At the first step, i= 0. At the i-th step, if z
0 then subtract ai from z. Otherwise add ai to z. At the last step i = n, and z is the error in our representation of a. Notice that in Example 1(e), for large i, each additional step yields one more bit of accuracy in our representation of a.
Figure 1 shows the relative magnitudes of the incremental angles, ai. Figure 2 gives an example of the convergence process with an initial angle of 0.65. Notice that successive iterations do not necessarily reduce the absolute error in the representation of the angle. Also notice that the error does not oscillate about zero.
At each step as we decompose a into the sum or difference of the ai. Figure 2 gives an example of the convergence pro's we could also multiply our vector [x, y] by the appropriate Rai. Figure 2 gives an example of the convergence pro or R-ai. Figure 2 gives an example of the convergence pro depending on whether we add or subtract ai. Figure 2 gives an example of the convergence pro. Remember, these multiplications are nothing more than shifts. We must also multiply in the still embarrassing factor cos ai. Figure 2 gives an example of the convergence pro. However, cosine is an even function and has the property that cos ai - cos (-a1). It does not matter whether we add or subtract the angle - we always multiply by the same factor! Because all of the cos ai can be factored out and grouped together, we can treat their product as a constant and compute it only once, along with all the ai = tan-12-i.
Not all angles can be represented as the sum of ai. There is a domain of convergence outside of which we cannot reduce the angle to within an-1 of zero. See Example 1(f). For the algorithm to work, we must start with a such that |a|
amax
1.74. This conveniently falls just outside the first quadrant. If we are given an angle outside the first quadrant, we can scale it by dividing by
/2 obtaining a quotient Q and a remainder D where |D| <
/2 < a?? Since the algorithm computes both sine and cosine of D, we pick the appropriate value and sign depending on the value of Q.
What about angles within the domain of convergence? It's not obvious that the strange set we've picked (see Example 1(g)) can represent all angles within the domain of convergence to within a~~ But, using mathematical induction. Walther proves that the scheme works.
The Circular Functions
One variation of the Cordic algorithm computes the circular functions -- sin, cos, tan, and so on. This algorithm is shown in pseudocode in Example 2(a).
First, start with [x,y,z] The x and y are as before z is the quantity that we drive to zero with an initial value of angle a. The first step in the loop decides whether to add or subtract ai from the residue z. The variable s is correspondingly positive or negative The second step reduces the magnitude of z and effects the multiplications by the tan a~ The expression ~~~~ means shift y right by ~ bits.
Example 2: (a) The basic algorithm; (b) the inverse algorithm
(a) for i from 0 to n do { if (z <img src=""> 0) then s:= 1 else s:= -1; [x,y,z] := [x - s y>>i, y + s x>>i, z - s a<SUB>1</SUB>] } (b) for i from 0 to n do { if (y <img src=""> 0) then s:= 1 else s:= -1; [x,y,z] := [x + s y>>i, y - s x>>i, z - s a<SUB>1</SUB>] }
When you start the algorithm with [x,y,z] and then drive z to zero as specified by Example 2(a). We are left with the quantities in Example 3(a). Where K is a constant. It is just the product of the cos ai as in Example 3(b).
For the curious, K
0.607. The value of K can be precomputed by setting [x,y,z] to [1,0,0] and running the algorithm as before. The result is shown in Example 3(c). Take the reciprocal of the final x and we have K. Therefore to compute sin a and cos a, set [x,y,z] to [K, O, ~] and run the algorithm, Example 3(d) shows the result. In effect, we start with a vector, [x,y] and rotate it through a given ~~ a my driving z to zero. Running the algorithm with the special case where the vector initially lies along the x axis and is of length K, rotates the vector by angle a and leaves behind cos a and sin a. This relationship is shown in Figure 3.
To compute tan 1a instead of z, we could choose to drive y to zero. Driv. ing y to zero rotates the vector through the angle a, the angle subtended by the vector and the x axis, leaving the vector lying along the x axis. Start with the vector anywhere in the first or fourth quadrant and an initial value of zero in z. The first or fourth quadrant is used because almost all vectors in the second or third quadrant will not converge. At the i-th step, if y >- 0, the vector lies in the first quadrant, subtract ai from z. Move the vector closer to the x axis; rotate it by negative ai by multiplying by the rotation matrix R-ai. If y < 0, the vector lies in the fourth quadrant, add ai to z and multiply the vector by Rai. At the end, z has the negative of the angle of the original vector [x, y], tan-1 y/x = tan-1a.
Changing the sign of ai has no effect on the computed values of x and y and leaves the original angle a in z rather than its negative. With this change, the inverse algorithm to drive y to zero becomes the expression shown as in the algorithm in Example 2(b).
Starting with [x, y, z] and then driving y to zero using the inverse algorithm leaves behind the quantities in Example 3(e).
Hyperbolic Functions
The hyperbolic functions (sinh, cosh, and so on) are similar to the circular functions. The correspondences between these two types of functions are shown in Table 1.
Table 1: Hyperbolic functions
Hyperbolic Function Circular Function --------------------------------------------------- e<SUP>x</SUP>+e<SUP>-x</SUP> e<SUP>ix</SUP> + e<SUP>-ix</SUP> cosh x = ---------- cos x = -------------- 2 2 e<SUP>x</SUP>-e<SUP>-x</SUP> e<SUP>ix</SUP> - e<SUP>-ix</SUP> sinh x = ---------- sin x = -------------- 2 2i [cosh a sinh a] [cosh a -sin a] H<SUB>a</SUB> = [sinh a cosh a] R<SUB>a</SUB> = [sin a cos a] H<SUB>a</SUB>H<SUB>b</SUB> = H<SUB>a+b</SUB> R<SUB>a</SUB>R<SUB>b</SUB> = R<SUB>a+b</SUB> e<SUP>x</SUP> = cosh x + sinh x x-1 In x = 2 tanh<SUP>-1</SUP> --- x+1
By analogy, use Ha as the rotation matrix and represent a using the set ai = tanh-1 2-i for i = 1 to n. Notice that for hyperbolics, a0 is infinity.
Given the change in the ai, can we still represent any angle a within the domain of convergence the same way we did for the circular functions? Unfortunately, the answer is no! Walther points out that repeating an occasional term makes the representation converge in the hyperbolic case. Repeating the terms as shown in Example 4(a) does the trick.
Except for the repeated terms and some changes of sign, the algorithms for hyperbolic functions are identical to the circular functions. Listing One, page 157, shows this in detail.
For hyperbolic functions, we start with [x,y,z] and then drive z to zero. This yields the quantities in Example 4(b). Starting with x,y,z and then driving y to zero gives the quantity shown in Example 4(c). For hyperbolics, K
1.21.
Some interesting special cases include the exponential, square root, and natural logarithm. The exponential case is in Example 4(d) while the square root and logarithm cases are in Example 4(e).
Calculating the Constants
The algorithm to compute the circular and hyperbolic functions requires several precomputed constants. These include the scaling constant, K, for both circular and hyperbolic functions, and the sets shown in Example 5(a) and Example 5(b), respectively. Listing One illustrates this.
The program, written in C, uses fixed point arithmetic for all calculations. All constants and variables used to calculate functions are declared as the type long. The code assumes that a long is at least 32 bits. I have decided to represent numbers in the range - 4 <= x < 4; this lets me represent e as a fixed point number. The high order bit is for the sign. The low order fractionBits (a constant defined as 29) bits hold the fractional part of the number. The remainder of the bits between the sign bit and the fractional part hold the integer part of the number. Figure 4 shows the fixed point format in graphic form.
I use power series to calculate the incremental angles ai, as shown in Example 5(c) and Example 5(d), respectively. How do we know the number of terms necessary to evaluate tan-1 and tanh-1 to 32 bits of precision? First consider the value of x for which tan-1 x = x to 32 bits of precision. A theorem of numerical analysis states that for an alternating sum where the absolute values of the terms decrease monotonically, the error is less than the absolute value of the first neglected term. Solving the equation x3/3 = 2-32 for x yields x = 3
(6 * 2-11); therefore for i
11, tan-1 2-i = 2-i with 32 bits of precision.
For the higher powers of two, we need to solve the relation 2-in/n = 2-32 for n for each of the cases i = 1 to 10. We do not even attempt the calculation for i = 0. The series for tan-1 1 converges very slowly, even after 500 terms the third digit is still changing. Fortunately, we know that the answer is
/4. Computing the rest is not as much work. The array terms has the gory details.
As usual, tanh-1 is more perverse. It is not an alternating sum and does not meet the conditions of the theorem used above. Consider the second neglected term of tanh-1 1/2. It is less than 1/4 of the first neglected term because the series includes only every other power of two. All of the other neglected terms can have no effect on the 33rd bit. The series for the other arguments, 1/4, 1/8, ..., converges even faster. Therefore, the number of terms calculated for tan-1 works just as well for tanh-1 for 32-bit accuracy.
Before computing the power series, we still need to compute the coefficients, 1/k, for each term k = 1, 3, 5, ... 27. We fill the coefficient array long a[28] with odd indices by calling the routine Reciprocal, which takes two arguments and returns a long. The first argument is the integer for the desired reciprocal. The second specifies the desired precision for the fractional part of the result. Reciprocal uses a simple as can be restoring division; it is the algorithm we all learned in grade school for long division. The elements of the array a with even indices get OL because there are no terms in the power series with even exponents.
Everything is ready to fill the arrays atan[fractionBits+ 1] and atanh[fractionBits+1].
The routine Poly2 evaluates the power series for the specified number of terms for the specified power of two using Horner's rule. The coefficients come from the array a, which we just carefully filled. Horner's rule is the recommended method for evaluating polynomials. A polynomial as in Example 5(e) can be rewritten as in Example 5(f). This simple recursive formula evaluates the polynomial with n multiplications and n additions. We compute the prescaling constants K by using the method explained above; in the program we call these XOC and XOH, for the circular and hyperbolic constants, respectively. Program output to this point is shown in Listing Two, page 158.
The routines Circular, InvertCircular, Hyperbolic, and InvertHyperbolic are the C implementations of the algorithms described above. They all take as arguments the initial values for [x,y,z]; they leave their results in the global variables X, Y, and Z. Considering their versatility and the wide range of functions they compute, these routines are compact and elegant!
References
_IMPLEMENTING CORDIC ALGORITHMS_ by Pitts Jarvis
[LISTING ONE]
<a name="0210_000c">
/* cordicC.c -- J. Pitts Jarvis, III
* cordicC.c computes CORDIC constants and exercises the basic algorithms.
* Represents all numbers in fixed point notation. 1 bit sign,
* longBits-1-n bit integral part, and n bit fractional part. n=29 lets us
* represent numbers in the interval [-4, 4) in 32 bit long. Two's
* complement arithmetic is operative here.
*/
#define fractionBits 29
#define longBits 32
#define One (010000000000>>1)
#define HalfPi (014441766521>>1)
/* cordic algorithm identities for circular functions, starting with [x, y, z]
* and then
* driving z to 0 gives: [P*(x*cos(z)-y*sin(z)), P*(y*cos(z)+x*sin(z)), 0]
* driving y to 0 gives: [P*sqrt(x^2+y^2), 0, z+atan(y/x)]
* where K = 1/P = sqrt(1+1)* . . . *sqrt(1+(2^(-2*i)))
* special cases which compute interesting functions
* sin, cos [K, 0, a] -> [cos(a), sin(a), 0]
* atan [1, a, 0] -> [sqrt(1+a^2)/K, 0, atan(a)]
* [x, y, 0] -> [sqrt(x^2+y^2)/K, 0, atan(y/x)]
* for hyperbolic functions, starting with [x, y, z] and then
* driving z to 0 gives: [P*(x*cosh(z)+y*sinh(z)), P*(y*cosh(z)+x*sinh(z)), 0]
* driving y to 0 gives: [P*sqrt(x^2-y^2), 0, z+atanh(y/x)]
* where K = 1/P = sqrt(1-(1/2)^2)* . . . *sqrt(1-(2^(-2*i)))
* sinh, cosh [K, 0, a] -> [cosh(a), sinh(a), 0]
* exponential [K, K, a] -> [e^a, e^a, 0]
* atanh [1, a, 0] -> [sqrt(1-a^2)/K, 0, atanh(a)]
* [x, y, 0] -> [sqrt(x^2-y^2)/K, 0, atanh(y/x)]
* ln [a+1, a-1, 0] -> [2*sqrt(a)/K, 0, ln(a)/2]
* sqrt [a+(K/2)^2, a-(K/2)^2, 0] -> [sqrt(a), 0, ln(a*(2/K)^2)/2]
* sqrt, ln [a+(K/2)^2, a-(K/2)^2, -ln(K/2)] -> [sqrt(a), 0, ln(a)/2]
* for linear functions, starting with [x, y, z] and then
* driving z to 0 gives: [x, y+x*z, 0]
* driving y to 0 gives: [x, 0, z+y/x]
*/
long X0C, X0H, X0R; /* seed for circular, hyperbolic, and square root */
long OneOverE, E; /* the base of natural logarithms */
long HalfLnX0R; /* constant used in simultanous sqrt, ln computation */
/* compute atan(x) and atanh(x) using infinite series
* atan(x) = x - x^3/3 + x^5/5 - x^7/7 + . . . for x^2 < 1
* atanh(x) = x + x^3/3 + x^5/5 + x^7/7 + . . . for x^2 < 1
* To calcuate these functions to 32 bits of precision, pick
* terms[i] s.t. ((2^-i)^(terms[i]))/(terms[i]) < 2^-32
* For x <= 2^(-11), atan(x) = atanh(x) = x with 32 bits of accuracy */
unsigned terms[11]= {0, 27, 14, 9, 7, 5, 4, 4, 3, 3, 3};static long a[28], atan[fractionBits+1], atanh[fractionBits+1], X, Y, Z;
#include <stdio.h> /* putchar is a marco for some */
/* Delta is inefficient but pedagogical */
#define Delta(n, Z) (Z>=0) ? (n) : -(n)
#define abs(n) (n>=0) ? (n) : -(n)
/* Reciprocal, calculate reciprocol of n to k bits of precision
* a and r form integer and fractional parts of the dividend respectively */
long
Reciprocal(n, k) unsigned n, k;
{
unsigned i, a= 1; long r= 0;
for (i= 0; i<=k; ++i) {r += r; if (a>=n) {r += 1; a -= n;}; a += a;}
return(a>=n? r+1 : r); /* round result */
}
/* ScaledReciprocal, n comes in funny fixed point fraction representation */
long
ScaledReciprocal(n, k) long n; unsigned k;
{
long a, r=0; unsigned i;
a= 1L<<k;
for (i=0; i<=k; ++i) {r += r; if (a>=n) {r += 1; a -= n;}; a += a;};
return(a>=n? r+1 : r); /* round result */
}
/* Poly2 calculates polynomial where the variable is an integral power of 2,
* log is the power of 2 of the variable
* n is the order of the polynomial
* coefficients are in the array a[] */
long
Poly2(log, n) int log; unsigned n;
{
long r=0; int i;
for (i=n; i>=0; --i) r= (log<0? r>>-log : r<<log)+a[i];
return(r);
}
WriteFraction(n) long n;
{
unsigned short i, low, digit; unsigned long k;
putchar(n < 0 ? '-' : ' '); n = abs(n);
putchar((n>>fractionBits) + '0'); putchar('.');
low = k = n << (longBits-fractionBits); /* align octal point at left */
k >>= 4; /* shift to make room for a decimal digit */
for (i=1; i<=8; ++i)
{
digit = (k *= 10L) >> (longBits-4);
low = (low & 0xf) * 10;
k += ((unsigned long) (low>>4)) - ((unsigned long) digit << (longBits-4));
putchar(digit+'0');
}
}
WriteRegisters()
{ printf(" X: "); WriteVarious(X);
printf(" Y: "); WriteVarious(Y);
printf(" Z: "); WriteVarious(Z);
}
WriteVarious(n) long n;
{
WriteFraction(n); printf(" 0x%08lx 0%011lo\n", n, n);
}
Circular(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z;
for (i=0; i<=fractionBits; ++i)
{
x= X>>i; y= Y>>i; z= atan[i];
X -= Delta(y, Z);
Y += Delta(x, Z);
Z -= Delta(z, Z);
}
}
InvertCircular(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z;
for (i=0; i<=fractionBits; ++i)
{
x= X>>i; y= Y>>i; z= atan[i];
X -= Delta(y, -Y);
Z -= Delta(z, -Y);
Y += Delta(x, -Y);
}
}
Hyperbolic(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z;
for (i=1; i<=fractionBits; ++i)
{
x= X>>i; y= Y>>i; z= atanh[i];
X += Delta(y, Z);
Y += Delta(x, Z);
Z -= Delta(z, Z);
if ((i==4)||(i==13))
{
x= X>>i; y= Y>>i; z= atanh[i];
X += Delta(y, Z);
Y += Delta(x, Z);
Z -= Delta(z, Z);
}
}
}
InvertHyperbolic(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z; for (i=1; i<=fractionBits; ++i)
{
x= X>>i; y= Y>>i; z= atanh[i];
X += Delta(y, -Y);
Z -= Delta(z, -Y);
Y += Delta(x, -Y);
if ((i==4)||(i==13))
{
x= X>>i; y= Y>>i; z= atanh[i];
X += Delta(y, -Y);
Z -= Delta(z, -Y);
Y += Delta(x, -Y);
}
}
}
Linear(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z; z= One;
for (i=1; i<=fractionBits; ++i)
{
x >>= 1; z >>= 1; Y += Delta(x, Z); Z -= Delta(z, Z);
}
}
InvertLinear(x, y, z) long x, y, z;
{
int i;
X = x; Y = y; Z = z; z= One;
for (i=1; i<=fractionBits; ++i)
{
Z -= Delta(z >>= 1, -Y); Y += Delta(x >>= 1, -Y);
}
}
/*********************************************************/
main()
{
int i; long r;
/*system("date");*//* time stamp the log for UNIX systems */
for (i=0; i<=13; ++i)
{
a[2*i]= 0; a[2*i+1]= Reciprocal(2*i+1, fractionBits);
}
for (i=0; i<=10; ++i) atanh[i]= Poly2(-i, terms[i]);
atan[0]= HalfPi/2; /* atan(2^0)= pi/4 */
for (i=1; i<=7; ++i) a[4*i-1]= -a[4*i-1];
for (i=1; i<=10; ++i) atan[i]= Poly2(-i, terms[i]);
for (i=11; i<=fractionBits; ++i) atan[i]= atanh[i]= 1L<<(fractionBits-i);
printf("\natanh(2^-n)\n");
for (i=1; i<=10; ++i){printf("%2d ", i); WriteVarious(atanh[i]);}
r= 0;
for (i=1; i<=fractionBits; ++i)
r += atanh[i];
r += atanh[4]+atanh[13];
printf("radius of convergence"); WriteFraction(r); printf("\n\natan(2^-n)\n");
for (i=0; i<=10; ++i){printf("%2d ", i); WriteVarious(atan[i]);}
r= 0; for (i=0; i<=fractionBits; ++i) r += atan[i];
printf("radius of convergence"); WriteFraction(r);
/* all the results reported in the printfs are calculated with my HP-41C */
printf("\n\n--------------------circular functions--------------------\n");
printf("Grinding on [1, 0, 0]\n");
Circular(One, 0L, 0L); WriteRegisters();
printf("\n K: "); WriteVarious(X0C= ScaledReciprocal(X, fractionBits));
printf("\nGrinding on [K, 0, 0]\n");
Circular(X0C, 0L, 0L); WriteRegisters();
printf("\nGrinding on [K, 0, pi/6] -> [0.86602540, 0.50000000, 0]\n");
Circular(X0C, 0L, HalfPi/3L); WriteRegisters();
printf("\nGrinding on [K, 0, pi/4] -> [0.70710678, 0.70710678, 0]\n");
Circular(X0C, 0L, HalfPi/2L); WriteRegisters();
printf("\nGrinding on [K, 0, pi/3] -> [0.50000000, 0.86602540, 0]\n");
Circular(X0C, 0L, 2L*(HalfPi/3L)); WriteRegisters();
printf("\n------Inverse functions------\n");
printf("Grinding on [1, 0, 0]\n");
InvertCircular(One, 0L, 0L); WriteRegisters();
printf("\nGrinding on [1, 1/2, 0] -> [1.84113394, 0, 0.46364761]\n");
InvertCircular(One, One/2L, 0L); WriteRegisters();
printf("\nGrinding on [2, 1, 0] -> [3.68226788, 0, 0.46364761]\n");
InvertCircular(One*2L, One, 0L); WriteRegisters();
printf("\nGrinding on [1, 5/8, 0] -> [1.94193815, 0, 0.55859932]\n");
InvertCircular(One, 5L*(One/8L), 0L); WriteRegisters();
printf("\nGrinding on [1, 1, 0] -> [2.32887069, 0, 0.78539816]\n");
InvertCircular(One, One, 0L); WriteRegisters();
printf("\n--------------------hyperbolic functions--------------------\n");
printf("Grinding on [1, 0, 0]\n");
Hyperbolic(One, 0L, 0L); WriteRegisters();
printf("\n K: "); WriteVarious(X0H= ScaledReciprocal(X, fractionBits));
printf(" R: "); X0R= X0H>>1; Linear(X0R, 0L, X0R); WriteVarious(X0R= Y);
printf("\nGrinding on [K, 0, 0]\n");
Hyperbolic(X0H, 0L, 0L); WriteRegisters();
printf("\nGrinding on [K, 0, 1] -> [1.54308064, 1.17520119, 0]\n");
Hyperbolic(X0H, 0L, One); WriteRegisters();
printf("\nGrinding on [K, K, -1] -> [0.36787944, 0.36787944, 0]\n");
Hyperbolic(X0H, X0H, -One); WriteRegisters();
OneOverE = X; /* save value ln(1/e) = -1 */
printf("\nGrinding on [K, K, 1] -> [2.71828183, 2.71828183, 0]\n");
Hyperbolic(X0H, X0H, One); WriteRegisters();
E = X; /* save value ln(e) = 1 */
printf("\n------Inverse functions------\n");
printf("Grinding on [1, 0, 0]\n");
InvertHyperbolic(One, 0L, 0L); WriteRegisters();
printf("\nGrinding on [1/e + 1, 1/e - 1, 0] -> [1.00460806, 0,
-0.50000000]\n");
InvertHyperbolic(OneOverE+One,OneOverE-One, 0L); WriteRegisters();
printf("\nGrinding on [e + 1, e - 1, 0] -> [2.73080784, 0, 0.50000000]\n");
InvertHyperbolic(E+One, E-One, 0L); WriteRegisters();
printf("\nGrinding on (1/2)*ln(3) -> [0.71720703, 0, 0.54930614]\n");
InvertHyperbolic(One, One/2L, 0L); WriteRegisters();
printf("\nGrinding on [3/2, -1/2, 0] -> [1.17119417, 0, -0.34657359]\n"); InvertHyperbolic(One+(One/2L), -(One/2L), 0L); WriteRegisters();
printf("\nGrinding on sqrt(1/2) -> [0.70710678, 0, 0.15802389]\n");
InvertHyperbolic(One/2L+X0R, One/2L-X0R, 0L); WriteRegisters();
printf("\nGrinding on sqrt(1) -> [1.00000000, 0, 0.50449748]\n");
InvertHyperbolic(One+X0R, One-X0R, 0L); WriteRegisters();
HalfLnX0R = Z;
printf("\nGrinding on sqrt(2) -> [1.41421356, 0, 0.85117107]\n");
InvertHyperbolic(One*2L+X0R, One*2L-X0R, 0L); WriteRegisters();
printf("\nGrinding on sqrt(1/2), ln(1/2)/2 -> [0.70710678, 0,
-0.34657359]\n");
InvertHyperbolic(One/2L+X0R, One/2L-X0R, -HalfLnX0R); WriteRegisters();
printf("\nGrinding on sqrt(3)/2, ln(3/4)/2 -> [0.86602540, 0,
-0.14384104]\n");
InvertHyperbolic((3L*One/4L)+X0R, (3L*One/4L)-X0R, -HalfLnX0R);
WriteRegisters();
printf("\nGrinding on sqrt(2), ln(2)/2 -> [1.41421356, 0, 0.34657359]\n");
InvertHyperbolic(One*2L+X0R, One*2L-X0R, -HalfLnX0R);
WriteRegisters();
exit(0);
}
<a name="0210_000d"><a name="0210_000d">
<a name="0210_000e">[LISTING TWO]
<a name="0210_000e">
atanh (2^-n)
1 0.54930614 0x1193ea7a 002144765172
2 0.25541281 0x082c577d 001013053575
3 0.12565721 0x04056247 000401261107
4 0.06258157 0x0200ab11 000200125421
5 0.03126017 0x01001558 000100012530
6 0.01562627 0x008002aa 000040001252
7 0.00781265 0x00400055 000020000125
8 0.00390626 0x0020000a 000010000012
9 0.00195312 0x00100001 000004000001
10 0.00097656 0x00080000 000002000000
radius of convergence 1.11817300
atan (26-n)
0 0.78539816 0x1921fb54 003110375524
1 0.46364760 0x0ed63382 001665431602
2 0.24497866 0x07d6dd7e 000765556576
3 0.12435499 0x03fab753 000376533523
4 0.06241880 0x01ff55bb 000177652673
5 0.03123983 0x00ffeaad 000077765255
6 0.01562372 0x007ffd55 000037776525
7 0.00781233 0x003fffaa 000017777652
8 0.00390622 0x001ffff5 000007777765
9 0.00195312 0x000ffffe 000003777776
10 0.00097656 0x0007ffff 000001777777
radius of convergence 1.74328660 | http://www.drdobbs.com/database/implementing-cordic-algorithms/184408428?pgno=1 | CC-MAIN-2015-18 | refinedweb | 4,672 | 62.58 |
Help:Redirects - Revision history 2014-10-20T13:22:32Z Revision history for this page on the wiki MediaWiki 1.23.5 Brantgurga: 1 revision: Importing the public domain help content from Mediawiki 2010-10-29T16:54:49Z <p>1 revision: Importing the public domain help content from Mediawiki</p> <p><b>New page</b></p><div>{{PD Help Page}}<br /> <br /> Redirects are used to forward users from one page name to another. They can be useful if a particular article is referred to by multiple names, or has alternative punctuation, capitalization or spellings.<br /> <br /> ==Creating a redirect==<br />:<br /> <nowiki>#REDIRECT [[</nowiki>''pagename''<nowiki>]]</nowiki><br />.<br /> <br /> You should use the 'preview' button below the Edit window, or Alt-P, to check that you have entered the correct destination page name. The preview page will not look like the resulting redirect page, it will look like a numbered list, with the destination page in blue: <br /> 1. REDIRECT <span style="color:blue">''pagename''</span><br /> If the ''pagename'' as you typed it is not a valid page, it will show in red. Until there is a valid destination page, you should not make the redirect.<br /> <br /> ==Viewing a redirect==<br />.<br /> <br />.<br /> <br /> ==Deleting a redirect==<br /> There's generally no need to delete redirects. They do not occupy a significant amount of database space. If a page name is vaguely meaningful, there's no harm (and some benefit) in having it as a redirect to the more relevant or current page. <br /> <br />]].<br /> <br /> ==Double redirects ==<br />. <br /> <br /> However, you could look out for double redirects and eliminate them, by changing them to be 1-step redirects instead. You are most likely to need to do this after a significant [[Help:Moving a page|page move]]. Use the "what links here" toolbox link to find double redirects to a particular page, or use [[Special:DoubleRedirects]] to find them throughout the whole wiki.<br /> <br /> == A redirect to a page in the category namespace ==<br /> To prevent a page that redirects to a category from appearing in the category, precede the word "Category" with a colon: <br /> <nowiki>#REDIRECT [[:Category:Glossary]]</nowiki><br /> <br /> {{Languages|Help:Redirects}} <br /> <br /> [[Category:Help|Redirects]]</div> Brantgurga | http://www.funtoo.org/index.php?title=Help:Redirects&feed=atom&action=history | CC-MAIN-2014-42 | refinedweb | 381 | 51.78 |
Recently, it was discovered by a contributor that the rating attribute for event feedbacks in Open Event was of type String. The type was incorrect, indeed. After a discussion, developers came concluded that it should be of type Float. In this post, I explain how to perform this simple migration task of changing a data type across a typical Flask app’s stack.
To begin this change, we first, we modify the database model. The model file for feedbacks (feedback.py) looks like the following:
from app.models import db class Feedback(db.Model): """Feedback model class""" __tablename__ = 'feedback' id = db.Column(db.Integer, primary_key=True) rating = db.Column(db.String, nullable=False) # ←-- should be float comment = db.Column(db.String, nullable=True) user_id = db.Column(db.Integer, db.ForeignKey('users.id', ondelete='CASCADE')) event_id = db.Column(db.Integer, db.ForeignKey('events.id', ondelete='CASCADE')) def __init__(self, rating=None, comment=None, event_id=None, user_id=None): self.rating = rating # ←-- cast here for safety self.comment = comment self.event_id = event_id self.user_id = user_id … … …
The change here is quite straightforward, and spans just 2 lines:
rating = db.Column(db.Float, nullable=False)
and
self.rating = float(rating)
We now perform the database migration using a couple of manage.py commands on the terminal. This file is different for different projects, but the migration commands essentially look the same. For Open Event Server, the manage.py file is at the root of the project directory (as is conventional). After cd’ing to the root, we execute the following commands:
$ python manage.py db migrate
and then
$ python manage.py db upgrade
These commands update our Open Event database so that the rating is now stored as a Float. However, if we execute these commands one after the other, we note that an exception is thrown:
sqlalchemy.exc.ProgrammingError: column "rating" cannot be cast automatically to type float HINT: Specify a USING expression to perform the conversion. 'ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision'
This happens because the migration code is ambiguous about what precision to use after converting to type float. It hints us to utilize the USING clause of PostgreSQL to do that. We accomplish this manually by using the psql client to connect to our database and command it the type change:
$ psql oevent psql (10.1) Type "help" for help. oevent=# ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision
We now exit the psql shell and run the above migration commands again. We see that the migration commands pass successfully this time, and a migration file is generated. For our migration, the file looks like the following:
from alembic import op import sqlalchemy as sa # These values would be different for your migrations. revision = '194a5a2a44ef' down_revision = '4cac94c86047' def upgrade(): op.alter_column('feedback', 'rating', existing_type=sa.VARCHAR(), type_=sa.Float(), existing_nullable=False) def downgrade(): op.alter_column('feedback', 'rating', existing_type=sa.Float(), type_=sa.VARCHAR(), existing_nullable=False)
This is an auto-generated file (built by the database migration tool Alembic) and we need to specify the extra commands we used while migrating our database. Since we did use an extra command to specify the precision, we need to add it here. PostgreSQL USING clause can be added to alembic migration files via the postgresql_using keyword argument. Thus, the edited version of the upgrade function looks like the following:
def upgrade(): op.alter_column('feedback', 'rating', existing_type=sa.VARCHAR(), type_=sa.Float(), existing_nullable=False, postgresql_using='rating::double precision')
This completes our work on database migration. Migration files are useful for a variety of purposes – they allow us to easily get to a previous database state, or a new database state as suggested by a project collaborator. Just like git, they allow for easy version control and collaboration.
We didn’t finish this work after database migration. We also decided to impose limits on the rating value. We concluded that 0-5 would be a good range for rating. Furthermore, we also decided to round off the rating value to the “nearest 0.5”, so if the input rating is 2.3, it becomes 2.5. Also, if it is 4.1, it becomes 4.0. This was decided because such values are conventional for ratings across numerous web and mobile apps. So this will hopefully enable easier adoption for new users.
For the validation part, marshmallow came to rescue. It is a simple object serialization and deserialization tool for Python. So it basically allows to convert complex Python objects to JSON data for communicating over HTTP and vice-versa, among other things. It also facilitates pre-processing input data and therefore, allows clean validation of payloads. In our case, marshmallow was specifically used to validate the range of the rating attribute of feedbacks. The original feedbacks schema file looked like the following:
from marshmallow_jsonapi import fields from marshmallow_jsonapi.flask import Schema, Relationship from app.api.helpers.utilities import dasherize class FeedbackSchema(Schema): """ Api schema for Feedback Model """ class Meta: """ Meta class for Feedback Api Schema """ type_ = 'feedback' self_view = 'v1.feedback_detail' self_view_kwargs = {'id': '<id>'} inflect = dasherize id = fields.Str(dump_only=True) rating = fields.Str(required=True) # ← need to validate this comment = fields.Str(required=False) event = Relationship(attribute='event', self_view='v1.feedback_event', self_view_kwargs={'id': '<id>'}, related_view='v1.event_detail', related_view_kwargs={'feedback_id': '<id>'}, schema='EventSchemaPublic', type_='event') … …
To validate the rating attribute, we use marshmallow’s Range class:
from marshmallow.validate import Range
Now we change the line
rating = fields.Str(required=True)
to
rating = fields.Float(required=True, validate=Range(min=0, max=5))
So with marshmallow, just about 2 lines of work implements rating validation for us!
After the validation part, what’s remaining is the rounding-off business logic. This is simple mathematics, and for getting to the “nearest 0.5” number, the formula goes as follows:
rating * 2 --> round off --> divide the result by 2
We will use Python’s built-in function (BIF) to accomplish this. To implement the business logic, we go back to the feedback model class and modify its constructor. Before this type change, the constructor looked like the following:
def __init__(self, rating=None, comment=None, event_id=None, user_id=None): self.rating = rating self.comment = comment self.event_id = event_id self.user_id = user_id
We change this by first converting the input rating to float, rounding it off and then finally assigning the result to feedback’s rating attribute. The new constructor is shown below:
def __init__(self, rating=None, comment=None, event_id=None, user_id=None): rating = float(rating) self.rating = round(rating*2, 0) / 2 # Rounds to nearest 0.5 self.comment = comment self.event_id = event_id self.user_id = user_id
This completes the rounding-off part and ultimately concludes rating’s type change from String to Float. We saw how a simple high-level type change requires editing code across multiple files and the use of different tools in between. In doing so, we thus also learned the utility of alembic and marshmallow in database migration and data validation, respectively.
Resources | https://blog.fossasia.org/patching-an-attribute-type-across-a-flask-app/ | CC-MAIN-2022-40 | refinedweb | 1,170 | 51.75 |
This checklist should be consulted when creating pull requests to make sure they are complete before merging. These are not intended to be rigidly followed—it’s just an attempt to list in one place all of the items that are necessary for a good pull request. Of course, some items will not always apply.
Formatting should follow PEP8. Exceptions to these rules are acceptable if it makes the code objectively more readable.
No tabs (only spaces). No trailing whitespace.
Import the following modules using the standard scipy conventions:
import numpy as np import numpy.ma as ma import matplotlib as mpl from matplotlib import pyplot as plt import matplotlib.cbook as cbook import matplotlib.collections as mcol import matplotlib.patches as mpatches
See below for additional points about Keyword argument processing, if code in your pull request does that.
Adding a new pyplot function involves generating code. See Writing a new pyplot function for more information.
Every new feature should be documented. If it’s a new module, don’t forget to add a new rst file to the API docs.
Docstrings should be in numpydoc format. Don’t be thrown off by the fact that many of the existing docstrings are not in that format; we are working to standardize on numpydoc.
Docstrings should look like (at a minimum):
def foo(bar, baz=None): """ This is a prose description of foo and all the great things it does. Parameters ---------- bar : (type of bar) A description of bar baz : (type of baz), optional A description of baz Returns ------- foobar : (type of foobar) A description of foobar foobaz : (type of foobaz) A description of foobaz """ # some very clever code return foobar, foobaz
Each high-level plotting function should have a simple example in the Example section of the docstring. This should be as simple as possible to demonstrate the method. More complex examples should go in the examples tree.
Build the docs and make sure all formatting warnings are addressed.
See Documenting matplotlib for our documentation style guide.
If your changes are non-trivial, please make an entry in the CHANGELOG.
If your change is a major new feature, add an entry to doc/users/whats_new.rst.
If you change the API in a backward-incompatible way, please document it in doc/api/api_changes.rst.
Using the test framework is discussed in detail in the section Testing.
Matplotlib makes extensive use of **kwargs for pass-through customizations from one function to another. A typical example is in matplotlib.pylab)
Note: there is a use case when kwargs are meant to be used locally in the function (not passed on), but you still need the **kwargs idiom. That is when you want to use *args to allow variable numbers of non-keyword args. In this case, python will not allow you to use named keyword args after the *args usage, so you will be forced to use **kwargs. An example is matplotlib.contour.ContourLabeler.clabel():
# in contour.py def clabel(self, *args, **kwargs): fontsize = kwargs.get('fontsize', None) inline = kwargs.get('inline', 1) self.fmt = kwargs.get('fmt', '%1.3f') colors = kwargs.get('colors', None) if len(args) == 0: levels = self.levels indices = range(len(self.levels)) elif len(args) == 1: ...etc...
This section describes how to add certain kinds of new features to matplotlib.
If you are working on a custom backend, the backend setting in matplotlibrc (Customizing matplotlib) supports an external backend via the module directive. if my_backend.py is a matplotlib backend in your PYTHONPATH, you can set use it on one of several ways
in matplotlibrc:
backend : module://my_backend
with the use directive is your script:
import matplotlib matplotlib.use('module://my_backend')
from the command shell with the -d flag:
> python simple_plot.py -d module://my_backend
We have hundreds of examples in subdirectories of matplotlib/examples, and these are automatically generated when the website is built to show up both in the examples and gallery sections')
A large portion of the pyplot interface is automatically generated by the boilerplate.py script (in the root of the source tree). To add or remove a plotting method from pyplot, edit the appropriate list in boilerplate.py and then run the script which will update the content in lib/matplotlib/pyplot.py. Both the changes in boilerplate.py and lib/matplotlib/pyplot.py should be checked into the repository.
Note: boilerplate.py looks for changes in the installed version of matplotlib and not the source tree. If you expect the pyplot.py file to show your new changes, but they are missing, this might be the cause.
Install your new files by running python setup.py build and python setup.py install followed by python boilerplate.py. The new pyplot.py file should now have the latest changes. | http://matplotlib.org/devel/coding_guide.html | CC-MAIN-2014-41 | refinedweb | 798 | 67.76 |
On Wed, 25 Feb 2009, Andrew Morton wrote:> On Wed, 25 Feb 2009 15:30:08 -0500> Steven Rostedt <rostedt@goodmis.org> wrote:> > > From: Steven Rostedt <srostedt@redhat.com>> > > > The ftrace utility reads space delimited words from user space.> > Andrew Morton did not like how ftrace open coded this. He had> > a good point since more than one location performed this feature.> > > > This patch creates a copy_word_from_user function that can copy> > a space delimited word from user space. This puts the code in> > a new lib/uaccess.c file. This keeps the code in a single location> > and may be optimized in the future.> > > > Does your code actually still need this? It is unacceptble to just> be more strict about userspace's write()s?Well, it does make it easy for cat and grep to work with the interface.> > > diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h> > index 6b58367..2d706d9 100644> > --- a/include/linux/uaccess.h> > +++ b/include/linux/uaccess.h> > @@ -106,4 +106,8 @@ extern long probe_kernel_read(void *dst, void *src, size_t size);> > */> > extern long probe_kernel_write(void *dst, void *src, size_t size);> > > > +extern int copy_word_from_user(void *to, const void __user *from,> > + unsigned int copy, unsigned int read,> > + unsigned int *copied, int skip);> > +> > #endif /* __LINUX_UACCESS_H__ */> > diff --git a/lib/Makefile b/lib/Makefile> > index 32b0e64..46ce28c 100644> > --- a/lib/Makefile> > +++ b/lib/Makefile> > @@ -11,7 +11,8 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \> > rbtree.o radix-tree.o dump_stack.o \> > idr.o int_sqrt.o extable.o prio_tree.o \> > sha1.o irq_regs.o reciprocal_div.o argv_split.o \> > - proportions.o prio_heap.o ratelimit.o show_mem.o is_single_threaded.o> > + proportions.o prio_heap.o ratelimit.o show_mem.o is_single_threaded.o \> > + uaccess.o> > > > lib-$(CONFIG_MMU) += ioremap.o> > lib-$(CONFIG_SMP) += cpumask.o> > diff --git a/lib/uaccess.c b/lib/uaccess.c> > new file mode 100644> > index 0000000..5b9a4ac> > --- /dev/null> > +++ b/lib/uaccess.c> > @@ -0,0 +1,134 @@> > +/*> > + * lib/uaccess.c> > + * generic user access file.> > That's a good place for it. I wonder if we have other uaccess> functions which should be moved here sometime.> > > + * started by Steven Rostedt> > + *> > + * Copyright (C) 2009 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>> > + *> > + * This source code is licensed under the GNU General Public License,> > + * Version 2. See the file COPYING for more details.> > + */> > +#include <linux/uaccess.h>> > +#include <linux/ctype.h>> > +> > +/**> > + * copy_word_from_user - copy a space delimited word from user space> > + * @to: The location to copy to> > + * @from: The location to copy from> > + * @copy: The number of bytes to copy> > + * @read: The number of bytes to read> > + * @copied: The number of bytes actually copied to @to> > + * @skip: If other than zero, will skip leading white space> > + *> > + * This reads from a user buffer, a space delimited word.> > + * If skip is set, then it will trim all leading white space.> > + * Then it will copy all non white space until @copy bytes have> > + * been copied, @read bytes have been read from the user buffer,> > + * or more white space has been encountered.> > + *> > + * Note, if skip is not set, and white space exists at the beginning> > + * it will return immediately.> > + *> > + * Returns:> > + * The number of bytes read from user space> > Confused.> > Is this "the number of bytes which I copied into *to", or is it "the> number of userspace bytes over which I advanced"?> > Hopefully the latter, because callers of copy_word_from_user() should> be able to call this function multiple times to be able to parse "foo> bar zot\0" into three separate words with three separate calls to> copy_word_from_user(). It might be worth mentioning how callers should> do this in the covering comment?Yes it is the latter. Since I tried to be consistent in using "read" and "copy" I thought it was obvious. But like most things technical, everything is obvious to the one that wrote the code.> > > + * -EAGAIN, if we copied a word successfully, but never hit> > + * ending white space. The number of bytes copied will be the same> > + * as @read. Note, if skip is set, and all we hit was white space> > + * then we will also returne -EAGAIN with @copied = 0.> > + *> > + * @copied will contain the number of bytes copied into @to> > + *> > + * -EFAULT, if we faulted during any part of the copy.> > + * @copied will be undefined.> > + *> > + * -EINVAL, if we fill up @from before hitting white space.> > + * @copy must be bigger than the expected word to read.> > + */> > +int copy_word_from_user(void *to, const void __user *from,> > + unsigned int copy, unsigned int read,> > + unsigned int *copied, int skip)> > +{> > The uaccess functions are a bit confused about whether the `size' args> are unsigned, unsigned long, etc. They should be size_t. unsigned is> OK here.I'll do a clean up patch.> > > + unsigned int have_read = 0;> > + unsigned int have_copied = 0;> > + const char __user *user = from;> > + char *kern = to;> > + int ret;> > + char ch;> > +> > + /* get the first character */> > + ret = get_user(ch, user++);> > + if (ret)> > + return ret;> > + have_read++;> > +> > + /*> > + * If skip is set, and the first character is white space> > + * then we will continue to read until we find non white space.> > + */> > + if (skip) {> > + while (have_read < read && isspace(ch)) {> > + ret = get_user(ch, user++);> > + if (ret)> > + return ret;> > + have_read++;> > + }> > +> > + /*> > + * If ch is still white space, then have_read == read.> > + * We successfully copied zero bytes. But this is> > + * still valid. Just let the caller try again.> > + */> > + if (isspace(ch)) {> > + ret = -EAGAIN;> > + goto out;> > + }> > + } else if (isspace(ch)) {> > + /*> > + * If skip was not set and the first character was> > + * white space, then we return immediately.> > + */> > + ret = have_read;> > + goto out;> > + }> > +> > +> > + /* Now read the actual word */> > + while (have_read < read &&> > + have_copied < copy && !isspace(ch)) {> > +> > + kern[have_copied++] = ch;> > +> > + ret = get_user(ch, user++);> > + if (ret)> > + return ret;> > +> > + have_read++;> > + }> > +> > + /*> > + * If we ended with white space then we have successfully> > + * read in a full word.> > + *> > + * If ch is not white space, and we have filled up @from,> > + * then this was an invalid word.> > + *> > + * If ch is not white space, and we still have room in @from> > + * then we let the caller know we have split a word.> > + * (have_read == read)> > + */> > + if (isspace(ch))> > + ret = have_read;> > + else if (have_copied == copy)> > + ret = -EINVAL;> > + else {> > + WARN_ON(have_read != read);> > + ret = -EAGAIN;> > + }> > +> > + out:> > + *copied = have_copied;> > +> > + return ret;> > +}> > Sheer madness ;)> > Someone is going to want to extend the "isspace" to include other> tokens. We can fall off that bridge when we come to it.Hmm, Frederic mentioned this too. I guess adding a "delimiter" field and calling it copy_token_from_user would not be to hard to implement. Then we can have copy_word_from_user be just a wrapper (as Frederic mentioned).-- Steve | http://lkml.org/lkml/2009/2/25/424 | CC-MAIN-2016-18 | refinedweb | 1,067 | 74.19 |
29938/does-one-have-to-be-machine-learning-expert-to-use-amazon-lex
I do know machine learning, but in no way, I am an expert. I want to create a chatbot using amazon lex. So to use Amazon Lex, do I have to technologies like deep learning, Artificial Intelligence & Ml?.
Well you are right Elastic Load Balancer ...READ MORE
Well you are right Elastic Load Balancer is mainly ...READ MORE
When you create a subnet,It will be ...READ MORE
There is actually no need to be ...READ MORE
Hey @nmentityvibes, you seem to be using ...READ MORE
It can work if you try to put ...READ MORE
Consider this - In 'extended' Git-Flow, (Git-Multi-Flow, ...READ MORE
When you use docker-compose down, all the ...READ MORE
You can track metrics for your bot ...READ MORE
The code would be something like this:
import ...READ MORE
OR | https://www.edureka.co/community/29938/does-one-have-to-be-machine-learning-expert-to-use-amazon-lex | CC-MAIN-2019-22 | refinedweb | 150 | 78.75 |
Views: 11278
Mirza Hasn.
konsa code tha jiss me mistake find kerni thi ...thanks
please share past two days exams ....JazaK Allah
today my paper
Q.1: write the following code using Interface.(5) // File Worker.java public class Worker extends Thread{ private String job ; //Constructor of Worker class public Worker (String j ){ job = j; } //Override run() method of Thread class public void run ( ) { for(int i=1; i<= 10; i++) System.out.println(job + " = " + i); } } // end class // File ThreadTest.java public class ThreadTest{ public static void main (String args[ ]) { //instantiate three objects of Worker (Worker class is now //becomes a Thread because it is inheriting from it)class Worker first = new Worker (“first job”); Worker second = new Worker (“second job”); Worker third = new Worker (“third job”); //start threads to execute first.start(); second.start(); third.start(); }//end main } // end class
Q.2: if a java developer wants to create a servlet in which folder he puts web application for deployment?(3)
Q.3: write code for Request Dispatcher-forward.(5)
Q.4: Cookie c = new Cookie (“visit,10”); Above code is right or wrong. Also describe reason.(2)
Q.5: one question is about session id.
(2) Q.6: write JSP directive element.(2)
Q.7: write JavaBeans design convention.(3)
Q.8: format of writing EL.(2)
Q.9: why we partitioned an application into logical layers.(5)
Q.10: parts of http request.(5)
Q.(11) advantages of JSP.(3)
shared by someone
#cs506 ka paper .
sirf 20% mcqs past ma sy thy
subjective b ni tha past ma sy
custom tag, yield, web server application server,javaBean, upcasting sy questions thy. . 2 program likhny thy.
#cs506..
1. How web services communicate each other?
2. If you want to write your own servlent then you should extend /subclass from servlent . Write the name of the servlent.
3. Suppose a java developer want to send response to local host URL and also want to encode this URl. Being a java developer which encoding method recommended for URl encoding.
4. If u want to declare a new method or varables outside of to get() / do post() method for servlat, in jsp at class level which JSP tag will be used.
5. Suppose two program t1 and t2. Want to stop t1 to t2 complete . write the method and code.
6. Six EL(expression language)relational operation.
7. Purpose of hash mapping in session taking through cookies.
8. Purpose JSP declare tags . code example of class attribute and method
9. Java full fill some requirement to become java bans those write.
10. Write the code of following output.(sleep k code the simple)
11. Write logical layers
12. Arrange classes in package. Suitable package name
• Dbconn.java
• Input.jsp
• Home.html
• Insertdb.
• Adduser.html
65% MCQ from past papers
8 Q of 2 and 3 marks are conceptual
4 Q of 5 marks was coding.
all the best to all
+ "OnlyJannat(❀‿❀)" thanks for sharing
1. What information can be appended to urL?
2. What is difference between web server and Application Server?
3. single line to include a page "vu home .java" you can use any name for tags but mention which have chosen for which tag element
4. Write a script that validate the phone number field in the form such a way if user leaves it empty it receives a message “ phone number is required. It can’t be left empty……..5
5. which class we used for request and response in our java code?
6. Write the code for “IncludeServlet” in the processRequest() method given below.In such a way that user enter "name" and "accountAmount" and submit the form to NewServlet. NewServlet show the username and amount with message “ please wait your amount is send” and IncludeServlet that display user’” Thank you”.
7. Write output:
Code was given: on page no 217 on handsout.
8. Aik question login sy related tha
Ans:
String id=request.getParameter("userid");
String password=request.getParameter("pwd");
if(id.equals("vulms" && password.equals("admin")){
response.sendRedirect("welcome.jsp");
}
else{
response.sendRedirect("error.jsp");
}catch (Exception e){
system.out.println(e.getMessage());
}
Mostly mcqs were from past papers. Remember me in yours prayers
Today's First Session:
90% MCQ's were new and conceptual.
Main Subjective Questions:
Best of Luck!
plzzzzzz share ur current papers all students and telll us how many from past papers
my today paper was at 10:30
98% mcqs from past papers...
subjective past papers sy nahi tha
when we DOUBLE click on GUI a new file open what is its name?2
distinguish advantage of HTTP session?3
custom tag sy related q tha
role of layers in application if layers are not used what will be disadvantage?3
ak q tha jis me processes or periorties di hui thi yeild() or Sleep()ko use krna tha usme 5
baqi yad nahi sorry....
best of luck..nd pray for me too..
uzmama | https://vustudents.ning.com/forum/topics/cs506-all-current-final-term-papers-fall-2015-past-final-term-1?groupUrl=cs506webdesignanddevelopment&commentId=3783342%3AComment%3A5458357&groupId=3783342%3AGroup%3A59376 | CC-MAIN-2021-25 | refinedweb | 826 | 69.07 |
The Cast framework provides queueing classes that support the creation of lists of MediaQueueItem instances, which can be built from MediaInfo instances such as video or audio streams, to play sequentially on the Cast receiver. This queue of content items can be edited, reordered, updated, and so forth.
The Cast Receiver SDK maintains the queue and responds to operations on the queue as long as the queue has at least one item currently active (playing or paused). Senders can join the session and add items to the queue. The receiver maintains a session for queue items until the last item completes playback or the sender stops the playback and terminates the session, or until a sender loads a new queue on the receiver. The receiver does not maintain any information about terminated queues by default. Once the last item in the queue finishes, the media session ends and the queue vanishes.
Create and load media queue items
A media queue item is represented in the Cast framework as a MediaQueueItem instance. When you create a media queue item, if you are using the Media Player Library with adaptive content, you can set the preload time so that the player can begin buffering the media queue item before the item ahead of it in the queue finishes playing. Setting the item's autoplay attribute to true allows the receiver to play it automatically. For example, you can use a builder pattern to create your media queue item as follows:
MediaQueueItem queueItem = new MediaQueueItem.Builder(mediaInfo) .setAutoplay(true) .setPreloadTime(20) .build();
Load an array of media queue items in the queue by using the appropriate queueLoad method of RemoteMediaClient.
Receive media queue status updates
When the receiver loads a media queue item, it assigns a unique ID to the item which persists for the duration of the session (and the life of the queue). Your app can learn the status of the queue in terms of which item is currently loaded (it might not be playing), loading, or preloaded. The MediaStatus class provides this status information:
- getPreloadedItemId() method - If the next item has been preloaded, returns the preloaded item ID.
- getLoadingItemId() method - Returns the item ID of the item that is currently loading (but isn't active in the queue) on the receiver.
- getCurrentItemId() method - Returns the item ID of the item that that was active in the queue (it might not be playing) at the time the media status change happened.
- getQueueItems() (Deprecated, use
MediaQueueinstead) method - Returns the list of MediaQueueItem instances as an unmodifiable list.
Your app can also get the list of items using the
MediaQueue
class. The class is a sparse data model of the media queue. It keeps the list of
item IDs in the queue, which is automatically synchronized with the receiver.
MediaQueue doesn't keep all the
MediaQueueItem
because it will take too much memory when the queue is very long. Instead, it
fetches the items on demand and keeps an LRU
cache of
recently accessed items. You can use these methods to access the media queue:
- getItemIds() method - Returns the list of all item IDs in order.
- getItemAtIndex() method - Returns the cached item at a given index. If the item is not cached,
MediaQueuewill return
nulland schedule to fetch the item. When the item is fetched, MediaQueue.Callback#itemsUpdatedAtIndexes() will be called, and calling
getItemAtIndex()with the same ID again will return the item.
- fetchMoteItemsRelativeToIndex() is used when the user scrolls the queue UI to the top or bottom, and your app wants to fetch more items from the cloud.
Use these methods together with the other media status methods to inform your app about the status of the queue and the items in the queue. In addition to media status updates from the receiver, your app can listen for changes to the queue by implementing RemoteMediaClient.Callback and MediaQueue.Callback.
Also CAF provides two utility classes to conveniently create UI for queueing.
- MediaQueueRecyclerViewAdapter, for backing the data of RecyclerView
- MediaQueueListAdapter, for backing the data of ListAdapter.
For example, to create a
RecyclerView using
MediaQueueRecyclerViewAdapter:
public class MyRecyclerViewAdapter extends MediaQueueRecyclerViewAdapter<MyViewHolder> { public MyRecyclerViewAdapter(MediaQueue mediaQueue) { super(mediaQueue); } @Override public void onBindViewHolder(MyViewHolder holder, position) { MediaQueueItem item = getItem(position); // Update the view using `item`. ... } } public class MyViewHolder implements RecyclerView.ViewHolder { // Implement your own ViewHolder. ... } public void someMethod() { RecyclerView.Adapter adapter = new MyRecyclerViewAdapter( getRemoteMediaClient().getMediaQueue()); RecyclerView recyclerView = (RecyclerView) getActivity().findViewById(R.id.my_recycler_view_id); recyclerView.setAdapter(adapter); }
Edit the queue
To operate on the items in the queue, use the queue methods of the RemoteMediaClient class. These let you load an array of items into a new queue, insert items into an existing queue, update the properties of items in the queue, make an item jump forward or backward in the queue, set the properties of the queue itself (for example, change the repeatMode algorithm that selects the next item), remove items from the queue, and reorder the items in the queue. | https://developers.google.com/cast/docs/android_sender/queueing | CC-MAIN-2018-47 | refinedweb | 825 | 52.19 |
Details
- Type:
Bug
- Status:
Resolved
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: JRuby 1.7.0.pre1
- Fix Version/s: JRuby 1.7.0.pre2
- Component/s: Application Error
- Labels:None
- Environment:I'm running Microsoft Windows Vista with Java Client VM 1.6.0_14, but I think the problem is likely to independent of the platform.
- Number of attachments :
Description
This seems to be a problem in 1.7.0.preview1 and in 1.6.7; I haven't tried it in other JRuby versions, but I suspect it will be a problem in other JRuby versions.
In JIRB or run as a JRuby program:
def rr(rng) puts puts "demonstrating (" + rng.inspect + ").each problem:" indx = -1 # use indx to prevent an almost infinite loop rng.each do |v| indx += 1 puts " each indx= #{indx.inspect}, v= #{v.inspect}" break if indx >= 5 end end vv = 2**63 rr(vv - 3 ... vv + 0) # (1) at indx == 3 has integer overflow rr(vv - 3 .. vv + 0) # (2) at indx == 3 has integer overflow rr(vv - 3 ... vv - 1) # (3) prints out expected "each" values rr(vv - 3 .. vv - 1) # (4) doesn't print out any "each" values
In
JRUBY-6612 I reported an integer overflow problem in RubyFixnum.java in
public IRubyObject op_mul
and also gave examples of some strange behaviour of Range#each
with integer values near the maximum Fixnum value.
The reported integer overflow problem seems to have been fixed in
JRuby 1.7.0.preview1
but I still seem to be getting this sometimes strange behaviour of Range#each.
I've put below extracts from RubyRange.java: it seems that the problem may arise in "private void rangeEach".
I suspect that the problems of examples (1) and (2) above might be caused (at least partly) by a problem with Integer#succ which I've reported here:
JRUBY-6778 Possible long integer overflow bug in Integer#succ in RubyInteger.java
So fixing that Integer#succ problem may fix (1) and (2).
But it's not clear to me from the RubyRange.java code why the exclusive range in (3) works but the inclusive range in (4) doesn't work, and I don't have a sufficient understanding of the interaction between the underlying Java code for various JRuby classes to see what is the cause of the problem.
***** extracts from RubyRange.java 1.7.0.preview1 from line 346 private IRubyObject rangeLt(ThreadContext context, IRubyObject a, IRubyObject b) { IRubyObject result = invokedynamic(context, a, OP_CMP, b); if (result.isNil()) return null; return RubyComparable.cmpint(context, result, a, b) < 0 ? getRuntime().getTrue() : null; } private IRubyObject rangeLe(ThreadContext context, IRubyObject a, IRubyObject b) { IRubyObject result = invokedynamic(context, a, OP_CMP, b); if (result.isNil()) return null; int c = RubyComparable.cmpint(context, result, a, b); if (c == 0) return RubyFixnum.zero(getRuntime()); return c < 0 ? getRuntime().getTrue() : null; } private void rangeEach(ThreadContext context, RangeCallBack callback) { IRubyObject v = begin; if (isExclusive) { while (rangeLt(context, v, end) != null) { callback.call(context, v); v = v.callMethod(context, "succ"); } } else { IRubyObject c; while ((c = rangeLe(context, v, end)) != null && c.isTrue()) { callback.call(context, v); if (c == RubyFixnum.zero(getRuntime())) break; v = v.callMethod(context, "succ"); } } }
Activity
Additional fixes:
commit ee963e52200617634eb11d9ffbf984b956f7fb21 Author: Charles Oliver Nutter <headius@headius.com> Date: Thu Nov 8 13:44:42 2012 -0600 Fix JRUBY-6612, JRUBY-6777, JRUBY-6778, JRUBY-6779, JRUBY-6790 [JRUBY-6612] some problems with JRuby seeming to not detect Java Long arithmetic overflows [JRUBY-6777] RubyFixnum.java - two methods fail to detect some long integer overflows [JRUBY-6778] Possible long integer overflow bug in Integer#succ in RubyInteger.java [JRUBY-6779] Strange behaviour of some Integer Ranges with Range#each - maybe an integer overflow problem? [JRUBY-6790] Possible long integer overflow in fixnumStep in RubyNumeric.java Patches by Colin Bartlett. Thank you! :100644 100644 93f83c8... 109b856... M src/org/jruby/RubyInteger.java :100644 100644 b631a40... 7ecf098... M src/org/jruby/RubyNumeric.java :100644 100644 3ed0b55... d15c2ed... M src/org/jruby/RubyRange.java
With the fix for
JRUBY-6612, we appear to match MRI output for this one too. | http://jira.codehaus.org/browse/JRUBY-6779 | CC-MAIN-2014-10 | refinedweb | 680 | 51.04 |
How to remove an element from a regular array in C#?
How to remove an element from an regular array in C#?
Well, you can’t really change a regular array or remove an item from it. You have to create a new array that is a copy of the current array without the one value you want.
Here is a quick static class I created that will copy your array, leaving out the one item you don’t want.
namespace ArrayItemDeleter
{
static class ArrayItemDeleter
{
public static void RemoteArrayItem
{
T[] newArray = new T[inArray.Length – 1];
for (int i = 0, j = 0; i < newArray.Length; i++, j++) { if (i == inIndex) { j++; } newArray[i] = inArray[j]; } inArray = newArray; } } } [/sourcecode] Here is how you would use it in your program. [sourcecode language="csharp"] using System; namespace ArrayItemDeleter { class Program { static void Main(string[] args) { string[] str = {"1","2","3","4","5"}; ArrayItemDeleter.RemoteArrayItem(ref str, 2); } } } [/sourcecode] Now, if you really want to add and remove items a lot, you should use a System.Collections.Generic.List object; | http://www.wpfsharp.com/2009/10/15/how-to-remove-an-element-from-a-regular-array-in-csharp/ | CC-MAIN-2017-39 | refinedweb | 176 | 63.9 |
Please confirm that you want to add Do Your Own Accounts to your Wishlist.
This course is for the self-employed, busy, business owner/manager who is looking for a simple, easy way to do their bookkeeping using an excel spreadsheet. Likewise, it is ideal for the bookkeeper who is looking for a simple way to do their clients accounts.
With the course you get a reusable template excel spreadsheet which acts as a cashbook, as well as prepares a profit and loss for you – which will help you to submit your self assessment tax return (UK filing) – and its also (due to complying with international accounting standards) suitable globally, for providing a profit and loss account.
The course looks at a fictitious company and it takes two months worth of bank statements. We use this information and enter it into the cashbook, which then provides the basis for the accounts. The accounts are populated automatically on the spreadsheet, based on the figures typed into the cashbook.
The template excel cashbook is included, and you download the fictitious company bank statements. In real life, you’d simply use your own bank statements to fill in the cashbook.
The course will take approx 90 minutes to complete. With the course tutor, you enter month 1 together, then you have a go at month 2 by yourself, and then watch the lecture to see if you got month 2 correct. We then go a step further by looking at some accounting concepts, and the physical aspects of keeping accounting records.
It’s an ideal course if you just want a simple, but very effective many to manage your small business bookkeeping using an excel spreadsheet. No accounting knowledge is needed or presumed. You can use this template to do your own accounts, and then send it to your accountant. With the accounts now in a much better shape, you should ask for a reduction in accountancy fees, so their course should save you time and money.
Explanation of what the course is about and what we'll cover.
How to get your manual (physical) accounts organised - files, organisation etc.,
Explanation of the elements on the excel spreadsheet.
How to input income, on the income side of the spreadsheet
How to enter expenses, on the expenses side of the spreadsheet
How to read the accounts (once the income and expenses have been entered)
What do the figures mean, now that you have them?
Explanation of accounting concepts, and terminology and ideas to be aware of.
An overview of the various accounting systems available (aside from excel spreadsheets). | https://www.udemy.com/doyourownaccounts/ | CC-MAIN-2017-13 | refinedweb | 434 | 60.14 |
Generating Pink Noise (Flicker, 1/f) in an FPGA
Intro
The Harmon Instruments signal generator provides simulated phase noise modulation. Typical signal sources include a flicker noise component at low offset frequencies. The 3 dB/octave slope is more complex to create than 6 dB/octave which can be produced with a simple integrator.
Stochastic Voss-McCartney
The Voss-Mcartney method of generating pink noise is a multirate algorithm. The output is the sum of many random variables updated at sample rates that are power of two multiples of each other. There are a few other useful references. In the stochastic version, rather than updating each random variable in the sum at fixed intervals, they are updated at randomized intervals averaging the update rates in the non stochastic version.
In this implementation, a sum of 32 values is used, numbered 0 to 31. Value 0 has a probability of update of 0.5 each clock cycle, value 1 0.25, value n 2^-(n+1), etc. It might be desirable to add a value that updates every cycle for improved high frequency conformance, but it's not required in this application.
Here's a simple Python model:
import numpy as np import random ram = np.zeros(32) def get_pink(): for i in range(len(ram)): if random.getrandbits(1): ram[i] = random.gauss(0, 1) break return np.sum(ram)
The low frequency deviation in the plot below is due to the number of samples used, not the generator.
The plot below is the output of the phase noise source set to pure pink as measured with a spectrum analyzer. This shows good performance over at least 6 decades of frequency range.
nMigen Implementation
I'm unaware of a closed form solution to the output spectral density. Numerical evaluation gives aproximately 48100 / sqrt(Hz) at 1 Hz assuming the gaussian input has a standard deviation of 10102.5.
The value in RAM at index 31 is updated twice as often as it should be in this code. That may be fixed at some point in the future. At 250 MSPS, that results in noise that should be below 0.058 Hz being at 0.058 Hz.
On Artix-7, usage is 58 LUTs, 96 registers. An external gaussian white noise source is required as well as 31 pseudo random bits per clock.
class PinkNoise(Elaboratable): def __init__(self, i, urnd): self.i = i # 20 bit signed white gaussian noise self.urnd = urnd # 31 pseudo random bits self.o = Signal(signed(25)) def elaborate(self, platform): m = Module() # count trailing zeros bits_1 = self.urnd cond_2 = bits_1[:16] == 0 bits_2 = Signal(15) result_2 = Signal() cond_3 = bits_2[:8] == 0 bits_3 = Signal(7) result_3 = Signal(2) cond_4 = bits_3[:4] == 0 bits_4 = Signal(3) result_4 = Signal(3) ptr = Signal(5) m.d.sync += [ result_2.eq(cond_2), bits_2.eq(Mux(cond_2, bits_1[16:31], bits_1[:15])), result_3.eq(Cat(cond_3, result_2)), bits_3.eq(Mux(cond_3, bits_2[8:15], bits_2[:7])), result_4.eq(Cat(cond_4, result_3)), bits_4.eq(Mux(cond_4, bits_3[4:7], bits_3[:3])), ptr.eq( Mux(bits_4[0], Cat(C(0,2),result_4), Mux(bits_4[1], Cat(C(1,2),result_4), Mux(bits_4[2], Cat(C(2,2),result_4), Cat(C(3,2),result_4) ) ) ) ), ] ram = Memory(width=len(self.o) - len(ptr), depth=2**len(ptr)) wrport = m.submodules.wrport = ram.write_port(domain='sync') rdport = m.submodules.rdport = ram.read_port(domain='comb') m.d.comb += [ wrport.en.eq(1), wrport.addr.eq(ptr), wrport.data.eq(self.i), rdport.addr.eq(ptr), ] i_pipe = Signal(signed(len(self.i))) ram_pipe = Signal(signed(len(rdport.data))) m.d.sync += [ i_pipe.eq(self.i), ram_pipe.eq(rdport.data), self.o.eq(self.o + i_pipe - ram_pipe), ] return m | https://harmoninstruments.com/blog/ | CC-MAIN-2020-05 | refinedweb | 619 | 51.95 |
Subject: Re: [boost] [convert] Performance
From: Joel de Guzman (djowel_at_[hidden])
Date: 2014-06-11 20:30:12
On 6/11/14, 7:55 PM, Vladimir Batov wrote:
> Joel de Guzman wrote
>> On 6/11/14, 2:58 PM, Vladimir Batov wrote:
>>> ...
>>> Thank you, Joel, for sharing your performance testing framework.
>>
>> My pleasure. I'm glad it helped. BTW, as I said before, you can use
>> the low-level spirit ...
>
> Thanks, very much appreciated. Although Spirit is such a Terra-incognito for
> me. So, I am hoping still Jeroen will jump in and do that. :-)
>
> Now, let me get back to the performance tests... You gave me this new "toy"
> to play with... so now I cannot stop spamming the list with my new findings.
> Apologies. Still, it seems important as there were concerns voiced about
> boost::convert() performance penalties.
>
> The results I posted before were for your vanilla performance tests... well,
> with an addition of 2 of my own tests:
>
> atoi_test: 2.2135431510 [s] {checksum: 730759d}
> strtol_test: 2.1688206260 [s] {checksum: 730759d}
> spirit_int_test: 0.5207926250 [s] {checksum: 730759d}
> spirit_new_test: 0.5803735980 [s] {checksum: 730759d}
> cnv_test: 0.6161884860 [s] {checksum: 730759d}
>
> However, I felt somewhat uneasy with the limited number (9) of strings
> tested. More importantly, I felt that the short strings were too heavily
> represented in the test. What I mean is, for example, there are only 10
> 1-digit strings out of enormous sea of available numbers. That is,
> statistically, they are only
> 10 * 100% / (INT_MAX * 2) out of all available numbers. However, in the test
> they contributed to 11% of performance results. And I felt that short
> strings might be spirit's "speciality" so to speak. In other words, I felt
> that the test might use the input data that favored spirit. So, I replaced
> those 9-strings input set with 1000 randomly generated strings from the
> [INT_MIN, INT_MAX] range... and that's the results I've got:
I do not think a random distribution of number of digits is a
good representation of what's happening in the real world. In
the real world, especially with human generated numbers(*), shorter
strings are of course more common.
(* e.g. programming languages, which, you are right to say, is
spirit's specialty).
BTW, the fact that smaller numbers occur more in real life is
taken advantage of some optimized encodings such as Google
Protocol Buffers's Base 128 Varints where smaller numbers
take a smaller number of bytes. If the distribution was equal,
then encodings such as Varints would not make sense.
()
> local::strtol_test: 312.5899575630 [s] {checksum: 7aa26f0b}
> local::old_spirit_test: 132.2640077370 [s] {checksum: 7aa26f0b}
> local::new_spirit_test: 148.1716253210 [s] {checksum: 7aa26f0b}
> local::cnv_test: 143.4929925850 [s] {checksum: 7aa26f0b}
>
> 1) With the original 9-strings test spirit was 4 times faster than strtol;
> with 1000 strings the difference is down to about 2.5 times... which is what
> I've been getting: str-to-int spirit/strtol=1.45/3.76 seconds;
> 2) the overhead of "new_spirit_test" compared to "old_spirit_test" is still
> about 12%. What is important is that the only difference between the tests
> is 2 additional validity checks:
>
> struct old_spirit_test : test::base
> { ...
> boost::spirit::qi::parse(beg, end, boost::spirit::qi::int_, n);
> return n;
> }
> struct new_spirit_test : test::base
> { ...
> if (boost::spirit::qi::parse(beg, end, boost::spirit::qi::int_,
> n))
> if (beg == end)
> return n;
>
> return (BOOST_ASSERT(0), 0);
> }
>
> It seems that Spirit is really testing the speed limits given that other
> operations start playing visible role... like those (necessary IMO) checks
> add 12%!
>
> 3) The "funny" (as you mentioned before) part is that with 1000-strings set
> the cnv_test is again better than raw new_spirit_test (which has the same
> process flow as cnv_test). That's what my tests have been showing all along
> (although they are run against 10000000-strings input set):
>
> str-to-int spirit: raw/cnv=1.45/1.44 seconds (99.05%).
> str-to-int spirit: raw/cnv=1.45/1.44 seconds (99.00%).
> str-to-int spirit: raw/cnv=1.45/1.44 seconds (99.01%).
> str-to-int spirit: raw/cnv=1.45/1.44 seconds (99.07%).
> str-to-int spirit: raw/cnv=1.45/1.44 seconds (99.59%).
>
> All compiled with gcc-4.8.2
>
> I personally have no explanation to that but at least I feel that
> boost::convert() framework does not result in performance degradation as we
> were concerned it might be... seems to be the opposite.
Shrug. Obviously, there's something wrong with that picture, but
I am not sure what. It may be that what's happening here is that some
other overhead(*) outweighs the actual conversion by a large
factor at that scale and that is what you are actually seeing.
(* E.g. string operations)
Regards,
-- Joel de Guzman
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2014/06/214540.php | CC-MAIN-2021-49 | refinedweb | 822 | 68.06 |
[
Date Prev
][
Date Next
] [
Thread Prev
][
Thread Next
] [
Date Index
] [
Thread Index
]
Bug#617626: installation-reports: After install on external firewire, display hosed and can't boot into Mac OS X
To
:
submit@bugs.debian.org
Subject
: Bug#617626: installation-reports: After install on external firewire, display hosed and can't boot into Mac OS X
From
: PhillyG <
phillygmac@gmail.com
>
Date
: Wed, 9 Mar 2011 03:53:35 -0500
Message-id
: <
[🔎]
DA5522D9-9DD1-4D96-882C-6A1E3EF5516F@gmail.com
>
Reply-to
: PhillyG <
phillygmac@gmail.com
>,
617626@bugs.debian.org
Subject: installation-reports: After install on external firewire, display hosed and can't boot into Mac OS X
Package: installation-reports
Severity: important
*** Please type your report below this line ***
-- Package-specific info:
Boot method: CD
Image version:
downloaded 7 March 2011
Date: <8 March 2011 AM>
Machine: Apple Dual USB iBook (500 Mhz 384MB + Western Digital 80GB external Firewire)
Partitions: <df -Tl will do; the raw partition table is preferred>
fdisk -l
154054688 @ 2018 ( 73.5G) Linux native
/dev/sda4 Apple_UNIX_SVR2 swap 2244733 @ 154056706 ( 1.1G) Linux swap
/dev/sda5 Apple_Free Extra 49 @ 156301439 ( 24.5k) Free space
Block size=512, Number of Blocks=156301488
DeviceType=0x0, DeviceId=0x0
/dev/hda
# type name length base ( size ) system
/dev/hda1 Apple_partition_map Apple 63 @ 1 ( 31.5k) Partition map
/dev/hda2 Apple_Driver_ATA Macintosh 64 @ 64 ( 32.0k) Unknown
/dev/hda3 Apple_Driver_ATA Macintosh 64 @ 128 ( 32.0k) Unknown
/dev/hda4 Apple_Patches Patch Partition 512 @ 192 (256.0k) Unknown
/dev/hda5 Apple_HFS MacOS 19640166 @ 704 ( 9.4G) HFS
/dev/hda6 Apple_Free 10 @ 19640870 ( 5.0k) Free space
Block size=512, Number of Blocks=19640880
DeviceType=0x0, DeviceId=0x0
Drivers-
1: @ 64 for 21, type=0x701
2: @ 128 for 34, type=0xf8ff
df -Tl
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda3 ext3 75818104 15353648 56613092 22% /
tmpfs tmpfs 191572 0 191572 0% /lib/init/rw
udev tmpfs 186592 164 186428 1% /dev
tmpfs tmpfs 191572 0 191572 0% /dev/shm: [E]
Overall install: [E]
I chose Debian because it still supports PPC. Kudos for that. I tested
my computer before starting the installation, and noted that by holding the "t"
key down during startup, it would try to boot from the firewire port.
During the "Partition hard drives", I instructed the installer to use the whole
Firewire Disk, and to leave the internal drive untouched. Everything went smoothly
until it was time to reboot.
I held down the "t" key and started the computer. All that happened was I saw the
same firewire symbol that I saw before the install.
I restarted the computer and got a dialog offering the choices "old" and "Linux". I
chose "old". The computer seemed to hang at the grey screen with the Apple logo. After
5 minutes I pulled the plug and restarted. This time I chose "Linux".
The computer seemed to boot, and after a short while I got a quarter inch wide
horizontal grey stripe an inch and a half from the bottom of the screen. I recall
reading about something like that when investigation the various distributions.
Because my intention was to create a server, I had configured the machine for ssh.
I was able to log in via ssh, which is how I am able to send you this report.
Thoughts, comments, etc.
"The operation was a success; unfortunately, the patient died!"
1) I had expected the installer to leave my internal hard drive untouched, and to
install a boot partition on the firewire drive. I had expected to boot into Debian
by holding down the "t" key during startup, similar to how I held down the "c" key
to boot from the Debian installer CD. I am not an fdisk guru, but it seems to me
that there is a bootblock on the external drive, but no longer a bootblock on the
internal drive.
2) I have no idea why the hold down the "t" thing doesn't work.
3) The grey striped display thing is why I labeled the severity: important. If I
had not configured ssh during the install, the install would have been a total fail.
4) In addition to the grey stripe, I don't know how to dim or turn off the display.
It doesn't turn off when I close the lid, and it hasn't gone to sleep yet after 36 hours.
Interesting note: I just tried using the F1 key, which is supposed to dim the screen.
A 3.5" H x 7.75" W tan box appeared. The bottom left corner of the box is 1.5" from the
bottom and flush with the left edge. I hope this doesn't burn a permanent strip (and box)
onto
the display.
5) My previous Linux experience is CentOS, which uses RPM. Aptitude is interesting.
6) I downloaded
and
. It was a straightforward
install on CentOS. I ran into a number of problems on Debian until I became aware that
the system had no development tools. Cut to the chase: rtorrent is up and running.
Finally, a little story just to let you know how clueless I am. I started using computers
in high school in 1965. I bought an Apple ][+ in 1980, and a Mac 128k in 1984. I bought
this iBook for my daughter in 2001. I got it as a hand-me-down when my daugher bought a
PowerBook. She has since moved on to a MacBook Pro.
Two years ago, the return (enter) key stopped working. You can not log on to Mac OS X
without using the enter key. So for the past two years I have used a Dell keyboard via a
PS/2 to USB converter just so I could use the enter key.
It wasn't until last Saturday that I realized that the iBook keyboard has TWO enter keys,
and the one to the right of the CMD key still works just fine!
Clueless. It's possible that I overlooked something during the install.
--
Please make sure that the hardware-summary log file, and any other
installation logs that you think would be useful are attached to this
report. Please compress large files using gzip.
Once you have filled out this report, mail it to
submit@bugs.debian.org
.
-- System Information:
Debian Release: 6.0
APT prefers squeeze-updates
APT policy: (500, 'squeeze-updates'), (500, 'stable')
Architecture: powerpc (ppc)
Kernel: Linux 2.6.32-5-powerpc
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Reply to:
debian-boot@lists.debian.org
PhillyG (on-list)
PhillyG (off-list)
Follow-Ups
:
Bug#617626: marked as done (installation-reports: After install on external firewire, display hosed and can't boot into Mac OS X)
From:
owner@bugs.debian.org (Debian Bug Tracking System)
Prev by Date:
Debian installer build: failed or old builds
Next by Date:
Processing of linux-kernel-di-amd64-2.6_1.76+squeeze1_amd64.changes
Previous by thread:
Bug#604033: #604033 Bug report: Installing Squeeze on a Intel Matrix BIOS RAID1 volume
Next by thread:
Bug#617626: marked as done (installation-reports: After install on external firewire, display hosed and can't boot into Mac OS X)
Index(es):
Date
Thread | https://lists.debian.org/debian-boot/2011/03/msg00289.html | CC-MAIN-2016-40 | refinedweb | 1,207 | 64.61 |
In this article we will find out how you can connect to your Queue Manager that runs on Cloud Pak for Integration. Since in OCP 4.2 cluster the worker nodes would not have a public IP address, you may not be able to connect to the Queue Manager the same way that you used to do earlier.
You can connect to the Queue Manager running on Cloud Pak for Integration from within the cluster or from outside the cluster depending on your requirement.
1. Connecting to the Queue Manager from within the Cluster.
If your application that connects to the Queue Manager is also deployed on the same cluster, you would use the service name to connect to the Queue Manager.
To get the service name, run the command below:
oc get svc -n <namespace>
Note that 9443 is the default port for the WebUI and 1414 is the default port for the Queue Manager listener.
If your application is deployed on a different namespace from where the Queue Manager is deployed, qualify the name of the service with the namespace as below;
<service-name>.<namespace>.svc
For example, in this case the service name will be:
mq-tls-rel-ibm-mq.mq.svc
The connection information would be as below:
Queue Manager Name : <Name of the Queue Manager> Host: <service-name>.<namespace>.svc Port: <Listener Port> Channel: <Server connection Channel name>
The image below represents how you can connect to it from the MQInput node in ACE
For this connection scenario, SSL configuration has not been done on the channel ‘DEF.SVRCONN’ and the connection is not over TLS.
2. Connecting to the Queue Manager from outside the Cluster
In many scenarios the application that needs to connect to the queue manager, may be deployed outside the OpenShift Cluster where the Queue Manager is deployed. Till CP4I 2019.3.x, which was deployed on OCP 3.11, you could expose NodePort for the queue manager listener and connect using the cluster hostname and NodePort. But from CP4I 2019.4, which runs on OCP 4.2, this is not an option anymore.. The required configuration of the OpenShift Route depends on the SNI behavior of your client application.
The SNI will be set to the MQ channel under the following conditions:
CONDITION 1:
- IBM MQ C Client v8 and above
- Java/JMS Client version 9.1.1 and above with a TLS v1.2 or higher CipherSuite and Java 8
- NET Client unmanaged mode
- IBM MQ C Client v7.5 or below
- IBM MQ C Client AllowOutboundSNI set to NO
- Java/JMS Client version 9.1.0 and below
- .NET Client managed mode
- AMQP or XR client
The SNI will be set to the host name under the following conditions:
CONDITION 2:
Refer to the Knowledge Center link below:
Connecting to a queue manager deployed in an OpenShift cluster
2.1 Configure a TLS secret while deploying an MQ helm-chart
When you deploy a Queue Manager on Cloud Pak for Integration, create a TLS secret containing a private and public key that you would use for an MQ TLS connection. You may generate self-signed keys as below:
openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
Now create a TLS secret as below:
oc create secret tls tls-secret --key="tls.key" --cert="tls.crt" -n <namespace>
Now supply this tls-secret and key and cert names when deploying the MQ helm chart from the Platform Navigator. Look at the screenshot below:
You can add more Keys if you intend to use different keys for different Channels. First key pair is used by MQSC scripts supplied with the helm chart to configure default CERTLABL.
Below screenshot shows two key pairs being added, one with label name ‘default’ and the other one with the label name ‘label2’. Two secrets, corresponding to each key pair, need to be created here with the names ‘tls-secret’ and ‘tls-secret2’ in this case.
Also, you would add public certificates of connecting clients if implementing mutual authentication. In this article, we are only explaining server-side authentication, so no certificate has been added.
If you need to add Keys or certificates after the queue manager has been deployed, you can follow the standard procedure of helm release upgrade.
Notice the first Key name ‘default’ here. This will be set as CERTLABL in Queue Manager/Channel while doing the configuration. When you add more Keys, ensure that you give different names to them, so that CERTLABLs are created for each of the Keys that you supply and can be used appropriately at Queue Manager and/or Channel level.
The CP4I helm chart takes care of doing the required configurations for TLS.
2.2 Configure Queue Manager and Channel for TLS
After the Queue Manager is deployed, configure the TLS settings at Queue Manager and Channel level.
When you are implementing the DevOps pipeline, these steps would be part of your MQSC script.
Click on Queue Manager Properties and go to the SSL tab:
Notice the name of ‘Cert label’. This is the ‘name’ for the pair of the Key and Cert that we supplied in the helm chart. If you have configured more than one Key pairs, provide the appropriate label name here. This ‘Cert label’ will be used when you are connecting from the clients as described in this section above under ‘CONDITION 2’ and the CERTLABL supplied at the Channel level will be ignored.
Note that by default CHLAUTH and CONNAUTH are enabled. You may keep them enabled or disable them if you do not require those. For this demonstration, to keep it simple, we have disabled those.
Under ‘Extended’ tab, delete the entry in the ‘Connection authentication’ field.
Under the ‘Communication’ tab, disable ‘CHLAUTH records’
Now go to the Server Connection channel under the SSL tab. Specify the ‘SSL cipher spec’ and appropriate ‘Cert label’ that was created as part of the helm deployment. Since we are not using mutual authentication, so ‘SSL authentication’ has been made ‘Optional’. Setting it to ‘Optional’ will disable the client authentication.
If your client application is using the clients as specified in this section above under ‘CONDITION 1’, the ‘Cert label’ specified at the channel will be used and the ‘Cert label’ specified at Queue Manager will be ignored.
If you intend to remotely administer the Queue Manager, you may specify MCA user as ‘mqm’, which will give complete authority on the Queue Manager to the clients connecting to it via this channel. It is recommended not to do this and configure the authentication appropriately.
Make sure that you ‘REFRESH SECURITY’ of the Queue Manager after making the security related changes. To refresh the security, go to the Queue Manager properties and refresh all three types of securities.
2.3 Import the certificate in Client’s TrustStore
The client that tries to connect to the Queue Manager over SSL, must accept the TLS certificate presented by the Queue Manager. This would require you to import the TLS certificate into the client’s truststore. If you are using a Java trust-store, you can use the Keytool command or the iKeyMan tool supplied with IBM MQ to import the certificate.
In this case, we created tls.key and tls.crt at step 2.1. Import tls.crt into the client’s truststore and give it any label name.
2.4 Connect from Client’s specified under CONDITION 2
If you are using MQ clients as specified in this section above under ‘CONDITION 2’, you can proceed with the connection now.
Let us connect to the Queue Manager from MQ Explorer 9.1.0.
Click on ‘Add Remote Queue Manager’ and enter the Queue Manager Name and click Next
Get the route name for the Queue Manager service.
oc get route -n <namespace>
Enter this route host name as Host name, port as 443 and the name of the channel.
Click on ‘Next’ thrice to reach the SSL configuration page. Click on ‘Enable SSL key repositories’ and enter the path of client truststore and password for truststore.
Click on Next. Enable SSL Options and select a Cipher spec. Here we have selected ANY_TLS12. Note that we specified ANY_TLS12 cipher spec at server connection channel also. Since at channel we have specified ANY_TLS12, we can select any cipher spec here that TLS12 supports.
Click on Finish. It will connect to the Queue Manager successfully.
2.5 Connect from Client’s specified under CONDITION 1
If you are connecting from client’s specified under CONDITION 1, the SNI will be set to the MQ channel. Client applications that set the SNI to the MQ channel require a new OpenShift Route to be created for each channel you wish to connect to. You also have to use unique channel names across your Red Hat OpenShift cluster, to allow routing to the correct queue manager.
To determine the required host name for each of your new OpenShift Routes, you need to map each channel name to an SNI address as documented here:
The SNI address used by MQ is based upon the channel name that is being requested, followed by a suffix of “.chl.mq.ibm.com”.
Since here we are using the channel ‘DEF.SVRCONN’, it will translate to the SNI address below:
def2e-svrconn.chl.mq.ibm.com
Refer to the link above to translate the SNI address for your channel name.
You must then create a new OpenShift Route (for each channel) by applying the following yaml in your cluster:
apiVersion: route.openshift.io/v1 kind: Route metadata: name: <provide a unique name for the Route> namespace: <the namespace of your MQ deployment> spec: host: <SNI address mapping for the channel> to: kind: Service name: <the name of the Kubernetes Service for your MQ deployment (for example "<Helm Release>-ibm-mq")> port: targetPort: 1414 tls: termination: passthrough
Let us create the yaml file for our ‘DEF.SVRCONN’ channel in this case.
Get the service name.
oc get svc -n mq
Create a yaml file with the content below, say ‘mqroute.yaml’
apiVersion: route.openshift.io/v1 kind: Route metadata: name: defsvrconnmqroute namespace: mq spec: host: def2e-svrconn.chl.mq.ibm.com to: kind: Service name: mq-tls-rel-ibm-mq port: targetPort: 1414 tls: termination: passthrough
Now create the route with below command
oc create -f <route yaml file>
Now let us connect to the Queue Manager from MQ Explorer 9.1.3.
Click on ‘Add Remote Queue Manager’ and enter Queue Manager name.
Click Next and Enter Route name as host name, port as 443 and Channel name
You can get the route host name using below command
oc get route -n <namespace>
Click on Next thrice to reach the SSL configuration page. Specify the client Truststore that has the public key for the CERLABL specified in the channel and enter password to open the Truststore.
Click on Next. Enable SSL options and specify the SSL CipherSpec. Since on channel we have specified ‘ANY_TLS12’, you can use any of the Cipher spec supported by TLS12.
Click on Finish and it would successfully connect to the Queue Manager.
Refer to the article below to connect to Queue Manager over TLS.
MQ with TLS
-
Thanks Anand, that clarifies. I now have a tls.key that is password protected (password protected when the CSR was ordered) key, how do I relay that password into this command “oc create secret tls tls-secret –key=”tls.key” –cert=”tls.crt” -n ” ? Or this is not needed?
Hi Anand, could SSL be skipped by the application client (my client is legacy) when connecting externally using the route and port as 443? My CP4I version is 2020.1.1
Hi Abu,
If compute nodes of OCP cluster have public IP, then you could use NodePort to connect to QM without SSL. However in real-life, OCP clusters would be deployed with private IPs only for nodes as per the recommendation. So any applications from outside OCP cluster will have to consume the deployed OCP services via routes only.
So.
However if you need to connect to the queue manager temporarily from outside cluster without ssl, say for debugging etc, you may use port-forwarding. | https://developer.ibm.com/integration/blog/2020/02/28/connecting-to-a-queue-manager-on-cloud-pak-for-integration/ | CC-MAIN-2020-40 | refinedweb | 2,047 | 62.58 |
Template Class in C++
Sign up for FREE 1 month of Kindle and read all our books for free.
Get FREE domain for 1st year and build your brand new site
Template class in C++ is a feature that allows a programmer to write generic classes and functions which is useful as the same class/ function can handle data of multiple data types.
First, we need to know that C++ supports the use of Generic classes and Funtions. So what is this generic programming and how do we use it?
Generic Classes
Generic programming helps us to code and write our programs in such a way that they are independent of data type, yes independent of data type. Usually in programming we hadle different kinds of data with different data type and very often we might need to write the same lines of code each and every time because of change in data type although the functionality or aim of the program is same. Hence we can use the generic classes that act as templates and support any kind of data type.
Example class :
Consider a simple program in C++ like this one,
#include <iostream> using namespace std; //class class Arithmetic { //data members private: int a; int b; public: //member functions //contructor Arithmetic(int a, int b) { this - > a = a; this - > b = b; } //function int add() { int c; c = a + b; return c; } }; int main() { Arithmetic a(10, 5); cout << a.add(); return 0; }
All we are trying to do is just add two numbers, now this is very easy and works fine in the same way we can write functions to perform all kinds of arthematic operations but what if we want to add two decimal numbers i.e of data type float or double or anything, yes we will have to re-write the same logic again from the beginning. But wait what if we make this class a template class in such a way that it can apply the same logic to different types of input irrespective of their data types.
What is a Template Class
In simple terms as indicated by the name it acts as template, and as we have discussed above about generic classes/functions, templates is a feature in C++ using which we can implement generic functions i.e use the same programming logic even if there is a change in the data-type. Template classes ultimate goal or purpose is to make us code and use the same classes and functions to work effectively in the same way on different data types.
Why use a Template Class
We might have come across many scenarios while programming, where we different function and classes with the same logic but just beacuse of change in data-type say int / float we have to re-write the exact some code again and call differnt functions according to their data-type. This adds a lot of compute time, run-time, memory and for us to think, type and ,implement it. Instead the whole process is solved just by implementing templates. Saving a lot of time, energy and making the whole programming process a lot more efficient.
Disadvantages of not using template classes:
- Increased compile time, run time
- More lines of code and memory resources
- Same code logic is repeated for different data-types
Important syntax rules for Templates in C++ :
template <class T>should be used before class declaration or while defiing member function outside the class.
- All the variables should be declared as data type
T
- Even the return type of a funtion should be of type
T
- While initializing class objects in the main function it should be in the following format.
class_name <data_type> object_name(data_members);
Example :
Arithmetic<int>a(10,5);
And when we want to use another data type just mention the data type
Example :
Arithmetic<float>b(2.5,1.3);
Using Template Class
#include<iostream> using namespace std; //before class we need to specify that it's a template class template<class T> class Arithmetic { private: /*notice the data type of data member of class they are of type T */ T a; T b; public: //same again data types of type T Arithmetic(T a,T b); T add(); }; /*whenever we use/define template functions, we need to mention the templat <class T>*/ template<class T> Arithmetic<T>::Arithmetic(T a,T b) { this->a=a; this->b=b; } //even the return type sould of type template T template<class T> T Arithmetic<T>::add() { T c; c=a+b; return c; } int main() { /*while intializing object of class we need to mention the type*/ Arithmetic<int>a(10,5); cout<<a.add()<<endl; //same is re-used for float type or any other data-type Arithmetic<float>b(2.5,1.3); cout<<b.add(); }
Now the above program does the same as previous one but using the concept of template class we can re-use the same code to process any kind of data-type.
Advantages of Templates in C++
- Code repetition is reduced as same code is re-used
- This results in smaller compile and execution times
- Implements the concepts of Polymorphism(reusability)
- Good design and structure changes can be made easily with less number of errors and confusion
- Saves time, safe and secure code is obtained easily
- Increases performance which helps us in building powerful libraries
Thoughts
Using templates in C++ will give us mutiple advantages saving time, memory and lines of code at the same time they should be used only when needed and the code should always be easy to read and understand the logic.
With this article at OpenGenus, you must have the complete idea of template classes in C++. Enjoy. | https://iq.opengenus.org/template-class-in-cpp/ | CC-MAIN-2021-17 | refinedweb | 957 | 53.58 |
MP4DeleteTrackEdit - Delete a track edit segment
#include <mp4.h> bool MP4DeleteTrackEdit( MP4FileHandle hFile, MP4TrackId trackId, MP4EditId editId )
hFile Specifies the mp4 file to which the operation applies. trackId Specifies the track to which the operation applies. editId Specifies the edit segment to be deleted.
Upon success, true (1). Upon an error, false (0).
MP4DeleteTrackEdit deletes the specified track edit segment. Note that since editId’s form a sequence, deleting an editId will cause all edit segments with editId’s greater than the deleted one to be reassigned to their editId minus one. Deleting an edit segment does not delete any media samples.
MP4(3) MP4AddTrackEdit(3) | http://huge-man-linux.net/man3/MP4DeleteTrackEdit.html | CC-MAIN-2017-17 | refinedweb | 106 | 60.11 |
Change a request XML from Groovy
03-29-2015 04:37 AM
03-29-2015 04:37 AM
Change a request XML from Groovy
Hello everyone,
currently I am experimenting with Groovy and looking for the way to write the prettyfied XML back in the Request of Mock Response test step. Here is what I already have:
def non_formatted_xml = context.expand('${Mock Response#Request}') def formatted_xml = groovy.xml.XmlUtil.serialize(non_formatted_xml)
As you may see there is only one last operation missing. Does anyone know how to write XML back?
P.S. Thank you in advance for any help you can provide!
Sincerely,
Eugene | https://community.smartbear.com/t5/SoapUI-Open-Source-Questions/Change-a-request-XML-from-Groovy/td-p/97405 | CC-MAIN-2021-49 | refinedweb | 104 | 64.71 |
I don't know if this will help or not, but there's a basic StateT example on the haskell wiki that you could look at, to see how to deal with State in general. The code is at and is thanks to Don Stewart. Maybe I'll just paste the code with a few more comments (with the warning that I'm a newbie as well): import Control.Monad.State main :: IO () main = runStateT code [1..] >> return () -- Here, the state is just a simple stack of integers. runStateT is the equivalent -- in the StateT monad of runState in the State monad code :: StateT [Integer] IO () code = do x <- pop -- pop an item out of the stack io $ print x -- now in the INNER monad, perform an action return () -- -- pop the next unique off the stack -- pop :: StateT [Integer] IO Integer -- This type signature is correct, but it's the reason you have to be -- careful with StateT. pop really has nothing to do with IO, but it has -- been 'tainted' by IO because it's being done together with it pop = do (x:xs) <- get -- get the list that's currently in the stack put xs -- put back all but the first return x -- return the first io :: IO a -> StateT [Integer] IO a -- transform an action from being in the inner monad (in this case IO), to -- being in the outer monad. since IO is so familiar, it's been written already -- and it's called liftIO io = liftIO Gurus, please check my comments to be sure I haven't said something stupid! Hope this helps. Andrew On 2/26/07, Alfonso Acosta <alfonso.acosta at gmail.com> wrote: > On 2/27/07, Kirsten Chevalier <catamorphism at gmail.com> wrote: > > So what if you changed your netlist function so that the type > > sig would be: > > > > netlist :: DT.Traversable f => > > (State s (S HDPrimSignal) -> State s v ) -> -- new > > (State s (Type,v) -> S v -> State s ()) -> -- define > > State s (f HDPrimSignal) -> -- the graph > > IO (State s ()) > > > > > Or why not: > > > > netlist :: DT.Traversable f => > > (State s (S HDPrimSignal) -> State s v ) -> -- new > > (State s (Type,v) -> S v -> State s ()) -> -- define > > State s (f HDPrimSignal) -> -- the graph > > IO s > > > > Uhm, this looks better, I'll try with this one and see what I get, I > anyway suspect I'll have a hard time because of the nested monads > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > > | http://www.haskell.org/pipermail/haskell-cafe/2007-February/022951.html | CC-MAIN-2013-48 | refinedweb | 408 | 71.18 |
Lighttpd is a secure, fast and very flexible open source web server. It is optimised for a high performance environment and uses very low memory as compared to other web servers. It supports many advanced features for example FastCGI, CGI, Auth, Output-Compression, URL-Rewriting and many more. It is a popular web server for the Catalyst and Ruby on Rails web frameworks. Lighttpd is used by many high traffic websites like Bloglines, WikiMedia etc.
Some the most notable features of Lighttpd are
- It supports SCGI, HTTP proxy and FastCGI for load balancing.
- It supports web server event mechanism performance also provides also support for more efficient event notification schemes.
- It supports chroot that changes the apparent root directory for the current running process and its children.
- It supports conditional URL rewriting (mod_rewrite), TLS/SSL with SNI support, via OpenSSL and authentication against an LDAP server.
- It supports HTTP compression using mod_deflate.
Requirements
Lighttpd does not require any special hardware requirement but to follow this guide you will need a server with CentOS 7 installed. You will also need sudo or root access to the server. If you are logged in as a non-root but sudo user, run
sudo su to switch to root user.
Install Lighttpd
You can install Lighttpd either from the available packages or installing from source. It is recommended to update your system and available repositories before we install any packages. Run the following command to do so.
yum -y update
Before installing Lighttpd we need to make sure that Apache or nginx is not installed on your server. Run the following commands to remove these packages.
yum -y erase httpd nginx
If they are not installed in your server, the command will simply show you
No Match for argument: httpd No Match for argument: nginx No Packages marked for removal
Lighttpd is not available on the default CentOS YUM repository hence you will need to add EPEL repository to your system. Install EPEL repository in your system using the following command.
yum -y install epel-release yum -y update
Now you can install Lighttpd using the following command.
yum -y install lighttpd
Once the packages are installed, you can run the Lighttpd server also enable it to automatically start at boot time using the following commands:
systemctl start lighttpd systemctl enable lighttpd
You can see the version of Lighttpd installed in your system using the following command:
lighttpd -v
You will see following output:
lighttpd/1.4.39 (ssl) - a light and fast webserver Build-Date: Mar 1 2016 15:43:12
Now you will need to adjust your firewall to allow the
http and
https traffic to pass. Run the following commands to add new firewall rules. If you do not have
firewalld installed, no need to run these commands.
firewall-cmd --permanent --zone=public --add-service=http firewall-cmd --permanent --zone=public --add-service=https firewall-cmd --reload
Now you can browse the following URL using your favorite browser to see your web server working.
You will see a not found message as shown below:
This due to the default document root directory of Lighttpd is
/var/www/htdocs but the startup promo files for Lighttpd is saved in
/var/www/lighttpd.
Now you have two options to correct this issue. You can either rename
/var/www/lighttpd to
/var/www/htdocs or you can change the configuration files to make
/var/www/lighttpd directory as the default document root directory. To rename
/var/www/lighttpd to
/var/www/htdocs run the following command.
mv /var/www/lighttpd /var/www/htdocs
You can now check the web front of your server, by going to and you will see the following page.
You can change the default web root directory by editing the default configuration file of Lighttpd, which is
/etc/lighttpd/lighttpd.conf. Use your favorite editor to edit the files. In this tutorials we will be using
nano, if you do not have
nano installed, you can run following command to install
yum -y install nano.
nano /etc/lighttpd/lighttpd.conf
Scroll down to find the following lines:
## ## Document root ## server.document-root = server_root + "/htdocs"
Change
htdocs to
lighttpd to make it look like the following code.
## ## Document root ##
server.document-root = server_root + "/lighttpd"
Now save the file and you should see your server running. If you renamed your directory to
/var/www/htdocs then your document root directory is
/var/www/htdocs and if you have changed your configuration file then your default document root is
/var/www/lighttpd.
Install PHP-FPM
You can install PHP to work with Lighttpd using PHP-FPM. PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites.
yum -y install php-fpm lighttpd-fastcgi
Now we will need configure PHP-FPM to run a FastCGI server on port
9000. Edit
/etc/php-fpm.d/ file using your favorite editor.
nano /etc/php-fpm.d/
Scroll down to see the following code:
; the value of
user and
group to
lighttpd and make it look like shown below.
;
Save and exit the file. Now start PHP-FPM and enable it to automatically start at boot time using the following command:
systemctl start php-fpm systemctl enable php-fpm
This should run PHP-FPM on your server successfully.
Configuring Lighttpd to Work With PHP
To enable PHP to work with Lighttpd web server, we will need to make few configuration changes. Open your
/etc/php.ini file in your favorite editor:
nano /etc/php.ini
Look for the following lines in the configuration:
; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI. $ ; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not $ ; what PATH_INFO is. For more information on PATH_INFO, see the cgi specs. Se$ ; this to 1 will cause PHP CGI to fix its paths to conform to the spec. A sett$ ; of zero causes PHP to behave as before. Default is 1. You should fix your s$ ; to use SCRIPT_FILENAME rather than PATH_TRANSLATED. ; ;cgi.fix_pathinfo=1
Uncomment the line
;cgi.fix_pathinfo=1 to make it
cgi.fix_pathinfo=1. Save the file and exit the editor.
Now open another file
/etc/lighttpd/conf.d/fastcgi.conf using your favorite editor.
nano /etc/lighttpd/conf.d/fastcgi.conf
Now look for the following lines in the file:
## server.modules += ( "mod_fastcgi" )
Add the following lines just below the above line:
fastcgi.server += ( ".php" => (( "host" => "127.0.0.1", "port" => "9000", "broken-scriptfilename" => "enable" )) )
Save the file and exit from editor:
Now open
/etc/lighttpd/modules.conf file using your favorite editor.
nano /etc/lighttpd/modules.conf
Look for the following lines in the file:
## ## FastCGI (mod_fastcgi) ## #include "conf.d/fastcgi.conf"
Uncomment
#include "conf.d/fastcgi.conf" to make it look line
include "conf.d/fastcgi.conf". Save the file and exit from editor.
Now restart PHP-FPM and Lighttpd using the following command.
systemctl restart php-fpm systemctl restart lighttpd
Now to verify if Lighttpd is configured to use PHP-FPM, you will need to view your php information. Create a new file in your document root directory which may be
/var/www/htdocs or
/var/www/lighttpd according how you have configured it before.
nano /var/www/lighttpd/phpinfo.php
Now add the following php code into the file.
Now browse the following file through frontend using your favorite web browser. Go to the following URL.
You will see following page, which will show you all your php configuration and information.
You will find that your server API is FPM/FastCGI. This shows that you have a working Lighttpd web server with PHP-FPM.
Conclusion
In this tutorial we have learnt to install Lighttpd web server, which is known for its ability to handle more than thousands concurrent connections in parallel without using much resources. We also learnt to install and configure PHP-FPM to use with Lighttpd web server. You can install MySQL or MariaDB to make it full LLMP stack. | https://hostpresto.com/community/tutorials/how-to-install-lighttpd-with-php-fpm-on-centos-7/ | CC-MAIN-2019-13 | refinedweb | 1,341 | 56.66 |
bash - GNU Bourne-Again SHell
bash [options] [file]
Bash is Copyright (C) 1989, 1991 by the Free Software
Foundation, Inc.
Bash is an sh-compatible command language interpreter that
executes commands read from the standard input or from a
file. Bash also incorporates useful features from the
Korn and C shells (ksh and csh).
Bash is ultimately intended to be a conformant implementa-
tion of the IEEE Posix Shell and Tools specification (IEEE
Working Group 1003.2)., starting with $0.
-i If the -i flag is present, the shell is interac-
tive.
-s If the -s flag is present, or if no arguments
remain after option processing, then commands
are read from the standard input. This option
allows the positional parameters to be set when
invoking an interactive shell.
- A single - signals the end of options and dis-
ables further option processing. Any arguments
after the - are treated as filenames and argu-
ments. An argument of -- is equivalent to an
argument of -.
Bash also interprets a number of multi-character options.
These options must appear on the command line before the
single-character options to be recognized.
-norc Do not read and execute the personal initializa-
tion file ~/.bashrc if the shell is interactive.
This option is on by default if the shell is
invoked as sh.
-noprofile
Do not read either the system-wide startup file
/etc/profile or any of the personal initializa-
tion files ~/.bash_profile, ~/.bash_login, or
~/.profile. By default, bash normally reads
these files when it is invoked as a login shell
(see INVOCATION below).
-rcfile file
Execute commands from file instead of the stan-
dard personal initialization file ~/.bashrc, if
the shell is interactive (see INVOCATION below).
-version Show the version number of this instance of bash
when starting.
-quiet Do not be verbose when starting up (do not show
the shell version or any other information).
This is the default.
-login Make bash act as if it had been invoked as a
login shell.
-nobraceexpansion
Do not perform curly brace expansion (see Brace
Expansion below).
-nolineediting
Do not use the GNU readline library to read com-
mand lines if interactive.
-posix Change the behavior of bash where the default
operation differs from the Posix 1003.2 standard
to match the standard. Bash reads and exe-
cutes commands from this file, then exits. Bash's exit
status is the exit status of the last command executed in
the script.
blank A space or tab.
word A sequence of characters considered as a single
unit by the shell. Also known as a token.
name A word consisting only of alphanumeric characters
and underscores, and beginning with an alphabetic
character or an underscore. Also referred to as an
identifier. com-
mand (see SHELL GRAMMAR below) or the third word of a case
or for command:
! case do done elif else esac fi for function if in
select then until while { }
Simple Commands
A simple command is a sequence of optional variable
assignments followed by blank-separated words and redirec-
tions, and terminated by a control operator. The first
word specifies the command to be executed.:
[ ! ] command [ | command2 ... ]
The standard output of command is connected to the stan-
dard input of command2. This connection is performed
before any redirections specified by the command (see
REDIRECTION below).
If the reserved word ! precedes a pipeline, the exit sta-
tus of that pipeline is the logical NOT of the exit status
of the last command. Otherwise, the status of the
pipeline is the exit status of the last command. The
shell waits for all commands in the pipeline to terminate
before returning a value.
Each command in a pipeline is executed as a separate pro-
cess (i.e., in a subshell).
Lists
A list is a sequence of one or more pipelines separated by
one of the operators ;, &, &&, or ||, and terminated by
one of ;, &, or <newline>.
Of these list operators, && and || have equal precedence,
followed by ; and &, which have equal precedence.
If a command is terminated by the control operator &, the
shell executes the command in the background in a sub-
shell..
Compound Commands
A compound command is one of the following:
(list) list is executed in a subshell. Variable parameters are printed (see PARAMETERS
below). The PS3 prompt is then displayed and a
line or return is used to remove the special meaning of certain
characters or words to the shell. Quoting can be used to
disable special treatment for special characters, to pre-
vent reserved words from being recognized as such, and to
prevent parameter expansion.
Each of the metacharacters listed above under DEFINITIONS
has special meaning to the shell and must be quoted if
they are to represent themselves. There are three quoting
mechanisms: the escape character, single quotes, and--
ing it with a backslash.
The special parameters * and @ have special meaning when
in double quotes (see PARAMETERS below).
A parameter is an entity that stores values, somewhat like
a variable in a conventional programming language. It can
be a name, a number, or one of the special characters
listed below under Special Parameters.. If the variable has its -i
attribute set (see declare below in SHELL BUILTIN COM-
MANDS) then value is subject to arithmetic expansion even
if the $[...] syntax does not appear. Word splitting is
not performed, with the exception of "$@" as explained
below under Special Parameters. Pathname expansion is not
performed. sin-
gle null or unset, the parameters are separated by
spaces.
@ Expands to the positional parameters, starting from
one. When the expansion occurs within double
quotes, each parameter expands invocation, by the set builtin command, or
those set by the shell itself (such as the -i
flag).
$ Expands to the process ID of the shell. In a ()
subshell, it expands to the process ID of the cur-
rent shell, not the subshell.
! Expands to the process ID of the most recently exe-
cuted background (asynchronous) command.. cur-
rent sequential line number (starting with 1)
within a script or function. When not in a script
or function, the value substituted is not guaran-
teed to be meaningful. When in a function, the
value is not the number of the source line that the
command appears on (that information has been lost
by the time the function is executed), but is an
approximation of the number of simple commands:.''.
HOME sub-
jected to parameter expansion, command substitu-
tion, and arithmetic expansion before being inter-
preted as a pathname. PATH is not used to search
for the resultant pathname.
MAIL prompt-
ing. If this variable is unset, the shell disables
mail checking.
MAILPATH
A colon-separated list of pathnames to be checked
for mail. The message to be printed may be speci-
fied his-
tory ter-
min, over-
riding.
Expansion is performed on the command line after it has
been split into words. There are seven kinds of expansion
performed: brace expansion, tilde expansion, parameter and
variable expansion, command substitution, arithmetic
expansion, word splitting, and pathname expansion.
The order of expansions is: brace expansion, tilde. The preamble is prepended to each string contained
within the braces, and the postamble pre-
served
traditional versions of sh, the Bourne shell. sh does not
treat opening or closing braces specially when they appear
as part of a word, and preserves them in the output. Bash
removes braces from words as a consequence of brace expan-
sion. For example, a word entered to sh as file{1,2}
appears identically in the output. The same word is out-
put as file1 file2 after expansion by bash. If strict
compatibility with sh is desired, start bash with the
-nobraceexpansion flag (see OPTIONS above) or disable
brace expansion with the +o braceexpand option to the set
command (see SHELL BUILTIN COMMANDS below).
Tilde Expansion
If a word begins with a tilde character (`~'), all of the
characters preceding the first slash (or all characters,
if there is no slash) are treated as a possible login
name. If this login name is the null string, the tilde is
replaced with the value of the parameter HOME. If HOME is
unset, the home directory of the user executing the shell
is substituted instead.
If a `+' follows the tilde, the value of PWD replaces the
tilde and `+'. If a `-' follows, the value of OLDPWD is
substituted. If the value following the tilde is a valid
login name, the tilde and login name are replaced with the
home directory associated with that name. If the name is
invalid, or the tilde expansion fails, the word is
unchanged.
Each variable assignment is checked for unquoted instances
of tildes following a : or =. In these cases, tilde sub-
stitution is also performed. Consequently, one may use
pathnames with tildes in assignments.
${parameter}
The value of parameter is substituted. The braces
are required when parameter is a positional parame-
ter with more than one digit, or when parameter is
followed by a character which is not to be inter-
preted as part of its name.
In each of the cases below, word is subject to tilde
expansion, parameter expansion, command substitution, and
arithmetic expansion. Bash tests for a parameter that is
unset or null; omitting the colon results in a test only
for a parameter that is unset.
${parameter:-word}
Use Default Values. If parameter is unset or null,
the expansion of word is substituted. Otherwise,
the value of parameter is substituted.
${parameter:=word}
Assign Default Values. If parameter is unset or
null, the expansion of word is assigned to.
$).
Command Substitution
Command substitution allows the output of a command to
replace the command name. There are two forms:
$(command)
or
`command`
Bash performs the expansion by executing command and
replacing the command substitution with the standard undergo parameter pro-
cess.
On systems that support it, process substitution is per-
formed simultaneously with parameter and variable expan-
sion, command substitution, and arithmetic expansion.
Word Splitting
The shell scans the results of parameter expansion, com-
mand substitution, and arithmetic expansion that did not
occur within double quotes for word splitting.
The shell treats each character of IFS as a delimiter, and
splits the results of the other expansions into words on
these characters. If the value of IFS char-
acter). Any character in IFS that is not IFS whitespace,
along with any adjacent IFS whitespace characters, delim-
its a field. A sequence of IFS whitespace characters is
also treated as a delimiter. If the value of IFS is null,
no word splitting occurs. IFS cannot be unset.
Explicit null arguments ("" or '') are retained. Implicit
null arguments, resulting from the expansion of parameters
that have no values, are removed.
Note that if no expansion occurs, no splitting is per-
formed.
Pathname Expansion
After word splitting, unless the -f option has been set,
bash scans each word for the characters *, ?, and [. If
one of these characters appears, then the word is regarded
as a pattern, and replaced with an alphabetically sorted
list of pathnames matching the pattern. If no matching
pathnames are found, and the shell variable
allow_null_glob_expansion is unset, the word is left
unchanged. If the variable is set, and no matches are
found, the word is removed. When a pattern is used for
pathname generation, the character ``.'' at the start of
a name or immediately following a slash must be matched
explicitly, unless the shell variable glob_dot_filenames
is set. The slash character must always be matched
explicitly. In other cases, the ``.'' character is not
treated specially.
The special pattern characters have the following. >|, then the value of the
-C option to the set builtin command is not tested, and
file creation is attempted. (See also the description of
noclobber under Shell Variables above.)an-
tically, pathname
expansion, or arithmetic expansion is performed on word.
If any characters in word are quoted, the delimiter is the
result of quote removal on word, and the lines in the
here-document are not expanded. Otherwise, con-
taining
word evaluates to -, file descriptor n is closed. If n is
not specified, the standard input (file descriptor 0) is
used.
The operator
[n]>&word
is used similarly to duplicate output file descriptors.
If n is not specified, the standard output (file descrip-
tor 1) is used. As a special case, if n is omitted, and
word does not expand to one or more digits, the standard
output and standard error are redirected as described pre-
viously.
Opening File Descriptors for Reading and Writing
The redirection operator
special parameter # is updated to reflect the change.
Positional parameter 0 is unchanged. com-
pletes, the values of the positional parameters and the
special parameter # are restored to the values they had
prior to function execution.
Function names and definitions may be listed with the -f
option to the declare or typeset builtin commands. Func-
tions may be exported so that subshells automatically have
them defined with the -f option to the export builtin.
Functions may be recursive. No limit is imposed on the
number of recursive calls. con-
tain sig-
nals such as SIGINT. These processes are said to be in
the foreground. Background processes are those whose, immedi-
ately. .
When bash is interactive, it ignores SIGTERM (so that kill
0 does not kill an interactive shell), and SIGINT is
caught and handled (so that the wait builtin is interrupt-
ible). In all cases, bash ignores SIGQUIT. If job con-
trol is in effect, bash ignores SIGTTIN, SIGTTOU, and
SIGTSTP.
Synchronous jobs started by bash have signals set to the
values inherited by the shell from its parent. When job
control is not in effect, background jobs (jobs started
with &) ignore SIGINT and SIGQUIT. Commands run as a
result of command substitution ignore the keyboard-gener-
ated job control signals SIGTTIN, SIGTTOU, and SIGTSTP.. If the search is unsuccessful, the shell prints an
error message and returns a nonzero exit status.
If the search is successful, or if the command name.
When a program is invoked it is given an array of strings
called the environment. This is a list of name-value
pairs, of the form name=value.
The shell allows you to manipulate the environment in.
Bash itself returns the exit status of the last command
executed, unless a syntax error occurs, in which case it
exits with a non-zero value. See also the exit builtin
command below..
This is the library that handles reading input when using
an interactive shell, unless the -nolineediting option is
given. By default, the line editing commands are similar
to those of emacs. A vi-style line editing interface is
also available. pre-
fix. The combination M-C-x means ESC-Control-x, or press
the Escape key then hold the Control key while pressing
the x key.)
The default key-bindings may be changed with an ~/.inputrc
file. The value of the shell variable INPUTRC, if set, is
used instead of ~/.inputrc. Other programs that use this
library may add their own commands and bindings.
For example, placing
M-Control-u: universal-argument
or
C-Meta-u: universal-argument
into the ~/.inputrc would make M-C-u execute the readline
command universal is customized by putting commands in an initial-
ization file. The name of this file is taken from the
value of the INPUTRC univer-
sal-argument, M-DEL is bound to the function back-
ward-kill-word, and C-o is bound to run the macro
expressed on the right hand side (that is, to insert the
text >&output into the line).
In the second form, "keyseq":function-name or macro, key-
seq differs from keyname above in that strings denoting an
entire key sequence may be specified by placing the
sequence within double quotes. Some GNU Emacs style key
escapes can be used, as in the following example.
"\C-u": universal-argument
"\C-x\C-r": re-read-init-file
"\e[11~": "Function Key 1"
In this example, C-u is again bound to the function uni-
versal-argument. C-x C-r is bound to the function
re-read-init-file, and ESC [ 1 1 ~ is bound to insert the
text Function Key 1. The full set of escape sequences is
\C- control prefix
\M- meta prefix
\e an escape character
\\ backslash
\" literal "
\' literal '
When entering the text of a macro, single or double quotes
should be used to indicate a macro definition. Unquoted
text is assumed to be a function name. Backslash will
quote any character in the macro text, including " and '.
Bash allows the current readline key bindings to be dis-
played or modified with the bind builtin command. The
editing mode may be switched during interactive use by
using the -o option to the set builtin command (see SHELL
BUILTIN COMMANDS below).
Readline has variables that can be used to further cus-
tomize its behavior. A variable may be set in the inputrc
file with a statement of the form
set variable-name value
Except where noted, readline variables can take the values
On or Off. The variables and their default values are:
horizontal-scroll-mode (Off)
When set to On, makes readline use a single line
for display, scrolling the input horizontally on a
single screen line when it becomes longer than the
screen width rather than wrapping to a new line.
editing-mode (emacs)
Controls whether readline begins with a set of key
bindings similar to emacs or vi. editing-mode can
be set to either emacs or vi.
mark-modified-lines (Off)
If set to On, history lines that have been modified
are displayed with a preceding asterisk (*).
bell-style (audible)
Controls what happens when readline wants to ring
the terminal bell. If set to none, readline never
rings the bell. If set to visible, readline uses a
visible bell if one is available. If set to audi-
ble, readline attempts to ring the terminal's bell.
comment-begin (``#'')
The string that is inserted in vi mode when the
vi-comment command is executed.
meta-flag (Off)
If set to On, readline will enable eight-bit input
(that is, it will not strip the high bit from the.
completion-query-items (100)
This determines when the user is queried about
viewing the number of possible completions gener-
ated by the possible-completions command. It may
be set to any integer value greater than or equal
to zero. If the number of possible completions is
greater than or equal to the value of this vari-
able, the user is asked whether or not he wishes to
view them; otherwise they are simply listed on the
terminal..
expand-tilde (Off)
If set to on, tilde expansion is performed when
readline attempts word completion.
Readline implements a facility similar in spirit to the
conditional compilation features of the C preprocessor
which allows key bindings and variable settings to be
performed as the result of tests. There are three termi-
nal applica-
tion name, and an initialization file can
test for a particular value. This could be
used to bind key sequences to functions use-
ful for a specific program. For instance,
the following command adds a key sequence
that quotes the current or previous word in
Bash:
$if Bash
# Quote the current or previous word
"\C-xq": "\eb\"\ef\""
$endif
$endif This command, as you saw in the previous example,
terminates an $if command.
$else Commands in this branch of the $if directive are
executed if the test fails..
The following is a list of the names of the commands and
the default key sequences to which they are bound. this, or the previous,
word. Words are composed of alphanumeric charac-
ters (letters and digits).
clear-screen (C-l)
Clear the screen leaving the current line at the
top of the screen. With an argument, refresh the
current line without clearing the screen.
redraw-current-line
Refresh the current line. By default, this is
unbound., mov-
ing)
of characters between the start of the current line
and the current point. This is a non-incremental
search. By default, this command is unbound.
history-search-backward
Search backward through the history for the string
of characters between the start of the current line
and the current point. This is a non-incremental
search. By default, this command is unbound.
yank-nth-arg (M-C-y)
Insert the first argument to the previous command
(usually the second word on the previous line) at
point (the current cursor position). With an argu-
ment n, insert the nth word from the previous com-
mand (the words in the previous command begin with
word 0). A negative argument inserts the nth word
from the end of the previous command.
yank-last-arg (M-., M-_)
Insert the last argument to the previous command
(the last word on the previous line). With an
argument, behave exactly like yank-nth-arg. expansion.
insert-last-argument (M-., M-_)
A synonym for yank-last-arg.
operate-and-get-next (C-o)
Accept the current line for execution and fetch the
next line relative to the current line from the
history for editing. Any argument is ignored.
Commands for Changing Text
delete-char (C-d)
Delete the character under the cursor. If point is
at the beginning of the line, there are no charac-
ters in the line, and the last character typed was
not C-d, then return EOF.
backward-delete-char (Rubout)
Delete the character behind the cursor. When given
a numeric argument, save the deleted text on the
kill-ring.
quoted-insert (C-q, C-v)
Add the next character that you type. Point moves forward as well.
If point is at the end of the line, then transpose
the two characters before point. Negative argu-
ments don't work.
transpose-words (M-t)
Drag the word behind the cursor past the word in
front of the cursor moving the cursor over that
word as well.
upcase-word (M-u)
Uppercase the current (or following) word. With a
negative argument, do the previous word, but do not
move point.
downcase-word (M-l)
Lowercase the current (or following) word. With a
negative argument, do the previous word, but do not
move point.
capitalize-word (M-c)
Capitalize the current (or following) word. With a
negative argument, do the previous word, but do not
move point.
Killing and Yanking
kill-line (C-k)
Kill the text from the current cursor position to
the end of the line.
backward-kill-line (C-x C-Rubout)
Kill backward to the beginning of the line.
unix-line-discard (C-u)
Kill backward from point to the beginning of the
line.
kill-whole-line
Kill all characters on the current line, no matter
where the cursor is. By default, this is unbound.
kill-word (M-d)
Kill from the cursor to the end of the current
word, or if between words, to the end of the next
word. Word boundaries are the same as those used
by forward-word.
backward-kill-word (M-Rubout)
Kill the word behind the cursor. Word boundaries
are the same as those used by backward-word.
unix-word-rubout (C-w)
Kill the word behind the cursor, using white space
as a word boundary. The word boundaries are dif-
ferent from backward-kill-word.
delete-horizontal-space
Delete all spaces and tabs around point. By
default, this is unbound.
yank (C-y)
Yank the top of the kill ring into the buffer at
the cursor.
yank-pop (M-y)
Rotate the kill-ring, and yank the new top. Only
works following yank or yank-pop.
Numeric Arguments
digit-argument (M-0, M-1, ..., M--)
Add this digit to the argument already accumulat-
ing, or start a new argument. M-- starts a nega-
tive argument.
universal-argument
Each time this is executed, the argument count is
multiplied by four. The argument count is ini-
tially one, so executing this function the first
time makes the argument count four. By default,
this is not bound to a key.
Insert all completions of the text before point
that would have been generated by possible-comple-
tions. By default, this is not bound to a key.
complete-filename (M-/)
Attempt filename completion on the text before
point.
possible-filename-completions (C-x /)
List the possible completions of the text before
point, treating it as a filename.
complete-username (M-~)
Attempt completion on the text before point, it as a command name. Command completion
attempts to match the text against aliases,
reserved words, shell functions, builtins, and
finally executable filenames, in that order.
possible-command-completions (C-x !)
List the possible completions of the text before
point, treating it as a command name.
dynamic-complete-history (M-TAB)
Attempt completion on the text before point, com-
paring the text against lines from the history list
for possible completion matches.
complete-into-braces (M-{)
Perform filename completion and return the list of
possible completions, to contain no more than HISTFILESIZE lines. The
builtin command fc (see SHELL BUILTIN COMMANDS below) may
be used to list or edit and re-execute a portion of the
history list. The history builtin can be used to display
the history list and manipulate the history file. When
using the command-line editing, search commands are avail-
able in each editing mode that provide access to the his-
tory list. When an interactive shell exits, the last
HISTSIZE lines are copied from the history list to HIST-
FILE. If HISTFILE is unset, or if the history file is
unwritable, the history is not saved. as one word. Only backslash (\)
and single quotes can quote the history escape character,
which is ! by default.
The shell allows control of the various characters used by
the history expansion mechanism (see the description of
histchars above under Shell Variables).
Event Designators
An event designator is a reference to a command line entry
in the history list.
! Start a history substitution, except when followed
by a blank, newline, = or (.
!! Refer to the previous command. This is a synonym
for `!-1'.
!n Refer to command line n.
!-n Refer to the current command line minus n.
!string
Refer to the most recent command starting with
string.
!?string[?]
Refer to the most recent command containing string.
^string1^string2^
Quick substitution. Repeat the last command,
replacing string1 with string2. Equivalent to
``!!:s/string1/string2/'' (see Modifiers below).
!# The entire command line typed so far.
Word Designators
A : separates the event specification from the word desig-
nator. It can be omitted if the word designator begins
with a ^, $, *, or %. Words are numbered from the
beginning of the line, with the first word being denoted
by a 0 (zero).
0 (zero)
The zeroth word. For the shell, this is the com-
mand.
Modifiers
After the optional word designator, you can add a sequence
of one or more of the following modifiers, each preceded
by a `:'.
h Remove a trailing pathname component, leaving only
the head.
r Remove a trailing suffix of the form .xxx, leaving
the basename.
e Remove all but the trailing suffix.
t Remove all leading pathname components, leaving the
tail.
p Print the new command but do not execute it.
q Quote the substituted words, escaping further sub-
stit-
slash. If & appears in new, it is replaced by old.
A single backslash will quote the &.
&.
The shell allows arithmetic expressions to be evaluated,
under certain circumstances (see the let builtin command
and Arithmetic Expansion). Evaluation is done in long
integers with no check for overflow, though division by 0
is trapped and flagged as an error. The following list of
operators is grouped into levels of equal-precedence oper-
ators. The levels are listed in order of decreasing
precedence.
- + unary minus and plus
! ~ logical and bitwise negation
* / % multiplication, division, remainder
+ - addition, subtraction
<< >> left and right bitwise shifts
<= >= < >
comparison
== != equality and inequality
& bitwise AND | http://www.linuxonlinehelp.com/man/bash.html | crawl-001 | refinedweb | 4,625 | 65.32 |
BeamLargeFiles / src / com.example.android.beamlargefiles /
BeamLargeFilesFragment.java
1 /*.beamlargefiles; 18 19 import android.app.Activity; 20 import android.net.Uri; 21 import android.nfc.NfcAdapter; 22 import android.nfc.NfcEvent; 23 import android.os.Bundle; 24 import android.support.v4.app.Fragment; 25 import android.util.Log; 26 27 /** 28 * This class demonstrates how to use Beam to send files too large to transfer reliably via NFC. 29 * 30 * <p>While any type of data can be placed into a normal NDEF messages, NFC is not considered 31 * "high-speed" communication channel. Large images can easily take > 30 seconds to transfer. 32 * Because NFC requires devices to be in extremely close proximity, this is not ideal. 33 * 34 * <p>Instead, Android 4.2+ devices can use NFC to perform an initial handshake, before handing 35 * off to a faster communication channel, such as Bluetooth, for file transfer. 36 * 37 * <p>The tradeoff is that this application will not be invoked on the receiving device. Instead, 38 * the transfer will be handled by the OS. The user will be shown a notification when the transfer 39 * is complete. Selecting the notification will open the file in the default viewer for its MIME- 40 * type. (If it's important that your application be used to open the file, you'll need to register 41 * an intent-filter to watch for the appropriate MIME-type.) 42 */ 43 public class BeamLargeFilesFragment extends Fragment implements NfcAdapter.CreateBeamUrisCallback { 44 45 private static final String TAG = "BeamLargeFilesFragment"; 46 /** Filename that is to be sent for this activity. Relative to /assets. */ 47 private static final String FILENAME = "stargazer_droid.jpg"; 48 /** Content provider URI. */ 49 private static final String CONTENT_BASE_URI = 50 "content://com.example.android.beamlargefiles.files/"; 51 52 /** 53 * Standard lifecycle event. Registers a callback for large-file transfer, by calling 54 * NfcAdapter.setBeamPushUrisCallback(). 55 * 56 * Note: Like sending NDEF messages over standard Android Beam, there is also a non-callback 57 * API available. See: NfcAdapter.setBeamPushUris(). 58 * 59 * @param savedInstanceState Saved instance state. 60 */ 61 @Override 62 public void onCreate(Bundle savedInstanceState) { 63 super.onCreate(savedInstanceState); 64 setHasOptionsMenu(true); 65 Activity a = getActivity(); 66 67 // Setup Beam to transfer a large file. Note the call to setBeamPushUrisCallback(). 69 NfcAdapter nfc = NfcAdapter.getDefaultAdapter(a); 70 if (nfc != null) { 71 Log.w(TAG, "NFC available. Setting Beam Push URI callback"); 72 nfc.setBeamPushUrisCallback(this, a); 73 } else { 74 Log.w(TAG, "NFC is not available"); 75 } 77 } 78 79 /** 80 * Callback for Beam events (large file version). The return value here should be an array of 81 * content:// or file:// URIs to send. 82 * 83 * Note that the system must have read access to whatever URIs are provided here. 84 * 85 * @param nfcEvent NFC event which triggered callback 86 * @return URIs to be sent to remote device 87 */ 89 @Override 90 public Uri[] createBeamUris(NfcEvent nfcEvent) { 91 Log.i(TAG, "Beam event in progress; createBeamUris() called."); 92 // Images are served using a content:// URI. See AssetProvider for implementation. 93 Uri photoUri = Uri.parse(CONTENT_BASE_URI + FILENAME); 94 Log.i(TAG, "Sending URI: " + photoUri); 95 return new Uri[] {photoUri}; 96 } 98 } | https://developer.android.com/samples/BeamLargeFiles/src/com.example.android.beamlargefiles/BeamLargeFilesFragment.html | CC-MAIN-2018-13 | refinedweb | 521 | 53.27 |
LWC11 Submission
From whole
LWC11 Submission using the Whole Platform
This is the documentation of the Whole Platform submission to the Language Workbench Competition 2011 (LWC11). The assignment can be found at [1]. For more details and to find the others submissions see [2].
Solution Overview
The solution presented below has been developed having in mind two goals:
- to exploit the benefits of a graphical language workbench by providing a solution at the domain level and
- to support an agile approach by using model interpretation instead of code generation to apply the solution.
The solution consists of 8 artifacts:
- two grammars for the Entities and the Instances DSLs
- one metamodel for the ER DSL
- two actions for defining and exposing as tooling all of the generators, validator and content assist
- a deployer to hot deploy the solution in the running workbench
- a test suite for testing the grammars and the generators
- a custom DataType parser to support the date format used in the provided examples
With graphical domain languages, lines of code is no longer a suitable metric to measure the size and thus the conciseness of a solution. Let us show you that almost all of the solution code can be visualized in a 27 inches monitor.
Phase 0 - Basics
This phase is intended to demonstrate basic language design, including IDE support (code completion, syntax coloring, outlines, etc).
- Task 0.1 Simple (structural) DSL without any fancy expression language or such.
- Task 0.2 Code generation to GPL such as Java, C#, C++ or XML
- Task 0.3 Simple constraint checks such as name-uniqueness
- Task 0.4 Show how to break down a (large) model into several parts, while still cross-referencing between the parts
Phase 1 - Advanced
This phase demonstrates advanced features not necessarily available to the same extent in every LWB.
- Task 1.1 Show the integration of several languages
- Task 1.2 Demonstrate how to implement runtime type systems
- Task 1.3 Show how to do a model-to-model transformation
- Task 1.4 Some kind of visibility/namespaces/scoping for references
- Task 1.5 Integrating manually written code (again in Java, C# or C++)
- Task 1.6 Multiple generators
Phase 2 - Non-Functional
Phase 2 is intended to show a couple of non-functional properties of the LWB. The task outlined below does not elaborate on how to do this.
- Task 2.1 How to evolve the DSL without breaking existing models
- Task 2.2 How to work with the models efficiently in the team
- Task 2.3 Demonstrate Scalability of the tools
Phase 3 - Freestyle
Every LWB has its own special "cool features". In phase three we want the participants to show off these features. Please make sure, though, that the features are built on top of the task described below, if possible. | http://sourceforge.net/apps/mediawiki/whole/index.php?title=LWC11_Submission | CC-MAIN-2014-15 | refinedweb | 470 | 51.68 |
Consuming WCF methods using async and await in MVC 4 application
In this article we are going to focus on how to consume WCF methods using async and await in MVC 4 application. This a simple demo of how to use async/await concept in mvc 4 application while consuming WCF service.
In this article we are going to focus on how to consume WCF methods using async and await in MVC 4 application.
Step 1: Launch Visual Studio and create a WCF Service application as shown below.
Open the Service1.svc.cs file and modify the GetData method with the following code to make it simple for this demo.
public string GetData(int value)
{
return "hello";
}
Step 2: Now build the WCF Service and Press Ctrl +F5 to open it in browser window. Now copy the service url from the browser.
Step 3: Create MVC 4 internet application in Visual Studio.
Step 4: Right-Click References and select Add Service Reference -> Now paste the url we have copied in Step 2. Click on Go. This will discover the WCF Service.
Step 5: In the Add Service Reference window -> click on Advanced and then in the Service Reference Settings window, make sure "Generate task based operations" is checked. Then select "Reuse types in specified referenced assemblies and then select "Newtonsoft.Json" and click on ok. This will add wcf service reference in mvc project.
Step 6: Open HomeController.cs file. add the reference to below namespace at the top of the file.
using System.Threading.Tasks;
Step 7: Now use the async/await pattern to call the GetData method of the WCF service.
public async Task
{
ServiceReference1.Service1Client c = new ServiceReference1.Service1Client();
var taskAsync = await c.GetDataAsync(5);
ViewBag.Result = taskAsync.ToString();
return View();
}
Step 8: Now right click the above action method and select "Add View". Then click on OK. This adds the view named "CallMyWCFService" to the MVC project. Add the following code in your view.
@{
ViewBag.Title = "Test WCF Service";
}
<h2>Test WCF Service async await pattern</h2>
The response from the service is : @ViewBag.Result
Step 9: Open the browser and access this view as shown below. Here the port number may vary based on the port on which your mvc application is running.
Please note that if you are unable to add reference to wcf service and getting error as unable to find service in the solution then run svcutil.exe utility from the visualstudio command prompt.
svcutil.exe
| http://www.dotnetspider.com/resources/45913-Consuming-WCF-methods-using-async-and-await-in-MVC-4-application.aspx | CC-MAIN-2018-39 | refinedweb | 414 | 67.15 |
I've been getting questions and bug reports from people attempting to link their C++ code with readline. They're running into two problems: `const' and function pointers. I've fixed both, but this message is about the second topic. (Readline-4.2 does use `const' arguments where appropriate, but that's not very interesting.) Readline-4.1 uses four typedefs for function pointers: typedef int Function (); typedef void VFunction (); typedef char *CPFunction (); typedef char **CPPFunction (); This is good enough for C code, but C++ really wants to check that function prototypes match exactly. These definitions are shared between readline, the tilde library, and bash, and each must avoid redefining them. They also contribute to namespace pollution -- no code that links against readline can use these typedefs for anything else. I have just finished modifying the readline source to use a new set of typedefs for function pointers, all fully prototyped, and all using the `rl_' prefix. The readline and history libraries no longer use any of the four typedefs above internally. I also modified the tilde library, and it now uses a single new typedef, with a `tilde_' prefix (typedef char *tilde_hook_func_t __P((char *)); for the curious), replacing the previous use of `CPFunction *'. Here's the guts of the new `rltypedefs.h' header file, containing the new typedef declarations. #if !defined (_RL_FUNCTION_TYPEDEF) # define _RL_FUNCTION_TYPEDEF /* Bindable functions */ typedef int rl_command_func_t __P((int, int)); /* Typedefs for the completion system */ typedef char *rl_compentry_func_t __P((const char *, int)); typedef char **rl_completion_func_t __P((char *, int, int)); typedef char *rl_quote_func_t __P((char *, int, char *)); typedef char *rl_dequote_func_t __P((char *, int)); typedef int rl_compignore_func_t __P((char **)); typedef void rl_compdisp_func_t __P((char **, int, int)); /* Type for input and pre-read hook functions like rl_event_hook */ typedef int rl_hook_func_t __P((void)); /* Input function type */ typedef int rl_getc_func_t __P((FILE *)); /* Generic function that takes a character buffer (which could be the readline line buffer) and an index into it (which could be rl_point) and returns an int. */ typedef int rl_linebuf_func_t __P((char *, int)); /* `Generic' function pointer typedefs */ typedef int rl_intfunc_t __P((int)); #define rl_ivoidfunc_t rl_hook_func_t typedef int rl_icpfunc_t __P((char *)); typedef int rl_icppfunc_t __P((char **)); typedef void rl_voidfunc_t __P((void)); typedef void rl_vintfunc_t __P((int)); typedef void rl_vcpfunc_t __P((char *)); typedef void rl_vcppfunc_t __P((char **)); #endif /* _RL_FUNCTION_TYPEDEF */ There are so many because I was very careful not to change the return value or parameters of any of the existing functions. This should ensure that old code continues to work -- in fact, before I changed bash to use the new prototypes, I rebuilt the parts of bash that use readline, and they compiled correctly. The intent is that the old typedefs will continue to work for one more release, and then will be removed from the public header files. This should give everyone enough time to update his or her code. I did change the type of one readline variable: rl_completion_entry_function The code that called through this pointer always expected it to return a char *, which required a lot of messy casting, and which did not work on all architectures. rl_completion_entry_function is now a pointer to an rl_compentry_func_t, which, as above, is a function returning a char *. I should have done this a while ago. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ( ``Discere est Dolere'' -- chet) Chet Ramey, CWRU address@hidden | https://lists.gnu.org/archive/html/bug-bash/2000-10/msg00078.html | CC-MAIN-2020-10 | refinedweb | 554 | 55.78 |
The error I get tells me that I have an indentation error somewhere, but I can't see it... `from urllib2 import urlopenfrom json import load, dumps
url = '' key = 'API_KEY'url = url + keyurl += '&numResults=1&format=json&id=1007' #1007 is scienceurl += "&requiredAssets=image,text,audio"response = urlopen(url)json_obj = load(response)"] + "\n"`
I also had this issue. I resolved by editing in notepad++, then pasting back into the coding window. Running into this issue on #13 as well. Might also need an extra, unindented line at the end of #12. It just won't indent properly after some of the if statements.
from urllib2 import urlopen
from json import load, dumps
url = ''
key = 'API_KEY'
url = url + key
url += '&numResults=1&format=json&id=1007' #1007 is science
url += '&requiredAssets=image,text,audio'
response = urlopen(url)
json_obj = load(response)
# uncomment 3 lines below to see JSON output to file'] | https://discuss.codecademy.com/t/12-program-name-and-npr-org-url/30621 | CC-MAIN-2017-22 | refinedweb | 149 | 53.51 |
0
when i run my project in java. i keep getting this stuff.
No Console ..... I dont know how to get console in eclipse.
here is my code.
import java.io.*; public class WordGame { public WordGame() { } public static void main (String args[]) { String WordGuess; WordJudge gm = new WordJudge(); gm.pickword(); Console c = System.console(); if (c == null){ System.err.println("No Console."); System.exit(1); } while (!gm.gameEnded()){ gm.displayWord(); System.out.format("You have %d attempts remaining. \n", gm.getRemainingGuesses()); strGuess = c.readLine("Enter your guess : "); gm.judgeGuess(WordGuess); } if (gm.plyrwin()){ System.out.format("You won ! It took you %d attempts. \n", gm.nGuessesNeeded()); System.out.format(gm.GetTheWord()); } else{ System.out.format("You lost. The word was %s \n", gm.GetTheWord()); } } }
please help me ... badly needed your reply ASAP. Tnx in advance guys.
Edited by ~s.o.s~: Added code tags, learn to use them. | https://www.daniweb.com/programming/software-development/threads/367532/no-console | CC-MAIN-2017-13 | refinedweb | 148 | 56.82 |
Remove trailing whitespaces. 98: by way of a handful of simple examples. 99: 100: #### Client and server are one 101: 102: This simple example shows `alice` 103: [su(1)]()'ing to 104: `root`: 105: 106: $ whoami 107: alice 108: $ ls -l `which su` 109: -r-sr-xr-x 1 root wheel 10744 Dec 6 19:06 /usr/bin/su 110: $ su - 111: Password: xi3kiune 112: # whoami 113: root 114: 115: * The applicant is `alice`. 116: * The account is `root`. 117: * The [su(1)]() 118: process is both client and server. 119: * The authentication token is `xi3kiune`. 120: * The arbitrator is `root`, which is why 121: [su(1)]() is 122: setuid `root`. 123: 124: #### Client and server are separate 125: 126: The example below shows `eve` try to initiate an 127: [ssh(1)]() 128: connection to `login.example.com`, ask to log in as `bob`, and succeed. Bob 129: should have chosen a better password! 130: 131: $ whoami 132: eve 133: $ ssh bob@login.example.com 134: bob@login.example.com's password: god 135: Last login: Thu Oct 11 09:52:57 2001 from 192.168.0.1 136: NetBSD 3.0 (LOGIN) #1: Thu Mar 10 18:22:36 WET 2005 137: 138: Welcome to NetBSD! 139: $ 140: 141: * The applicant is `eve`. 142: * The client is Eve's 143: [ssh(1)]() 144: process. 145: * The server is the 146: [sshd(8)]() 147: process on `login.example.com` 148: * The account is `bob`. 149: * The authentication token is `god`. 150: * Although this is not shown in this example, the arbitrator is `root`. 151: 152: #### Sample policy 153: 154: The following is FreeBSD's default policy for `sshd`: 155: 156: sshd auth required pam_nologin.so no_warn 157: sshd auth required pam_unix.so no_warn try_first_pass 158: sshd account required pam_login_access.so 159: sshd account required pam_unix.so 160: sshd session required pam_lastlog.so no_fail 161: sshd password required pam_permit.so 162: 163: * This policy applies to the `sshd` service (which is not necessarily 164: restricted to the 165: [sshd(8)]() 166: server.) 167: 168: * `auth`, `account`, `session` and `password` are facilities. 169: 170: * `pam_nologin.so`, `pam_unix.so`, `pam_login_access.so`, `pam_lastlog.so` and 171: `pam_permit.so` are modules. It is clear from this example that `pam_unix.so` 172: provides at least two facilities (authentication and account management.) 173: 174: There are some differences between FreeBSD and NetBSD PAM policies: 175: 176: * By default, every configuration is done under `/etc/pam.d`. 177: 178: * If configuration is non-existent, you will not have access to the system, in 179: contrast with FreeBSD that has a default policy of allowing authentication. 180: 181: * For authentication, NetBSD forces at least one `required`, `requisite` or 182: `binding` module to be present. 183: 184: ## PAM Essentials 185: 186: ### Facilities and primitives 187: 188: The PAM API offers six different authentication primitives grouped in four 189: facilities, which are described below. 190: 191: * `auth` -- *Authentication.* This facility concerns itself with authenticating 192: the applicant and establishing the account credentials. It provides two 193: primitives: 194: 195: * [pam\_authenticate(3)]() 196: authenticates the applicant, usually by requesting an authentication token 197: and comparing it with a value stored in a database or obtained from an 198: authentication server. 199: 200: * [pam\_setcred(3)]() 201: establishes account credentials such as user ID, group membership and 202: resource limits. 203: 204: * `account` -- *Account management.* This facility handles 205: non-authentication-related issues of account availability, such as access 206: restrictions based on the time of day or the server's work load. It provides 207: a single primitive: 208: 209: * [pam\_acct\_mgmt(3)]() 210: verifies that the requested account is available. 211: 212: * `session` -- *Session management.* This facility handles tasks associated 213: with session set-up and tear-down, such as login accounting. It provides two 214: primitives: 215: 216: * [pam\_open\_session(3)]() 217: performs tasks associated with session set-up: add an entry in the `utmp` 218: and `wtmp` databases, start an SSH agent, etc. 219: 220: * [pam\_close\_session(3)]() 221: performs tasks associated with session tear-down: add an entry in the 222: `utmp` and `wtmp` databases, stop the SSH agent, etc. 223: 224: * `password` -- *Password management.* This facility is used to change the 225: authentication token associated with an account, either because it has 226: expired or because the user wishes to change it. It provides a single 227: primitive: 228: 229: * [pam\_chauthtok(3)]() 230: changes the authentication token, optionally verifying that it is 231: sufficiently hard to guess, has not been used previously, etc. 232: 233: ### Modules 234: 235: Modules are a very central concept in PAM; after all, they are the *M* in *PAM*. 236: A PAM module is a self-contained piece of program code that implements the 237: primitives in one or more facilities for one particular mechanism; possible 238: mechanisms for the authentication facility, for instance, include the UNIX® 239: password database, NIS, LDAP and Radius. 240: 241: #### Module Naming 242: 243: FreeBSD and NetBSD implement each mechanism in a single module, named 244: `pam_mechanism`.so (for instance, `pam_unix.so` for the UNIX mechanism.) Other 245: implementations sometimes have separate modules for separate facilities, and 246: include the facility name as well as the mechanism name in the module name. To 247: name one example, Solaris has a `pam_dial_auth.so.1` module which is commonly 248: used to authenticate dialup users. Also, almost every module has a man page with 249: the same name, i.e.: 250: [pam\_unix(8)]() 251: explains how the `pam_unix.so` module works. 252: 253: #### Module Versioning 254: 255: FreeBSD's original PAM implementation, based on Linux-PAM, did not use version 256: numbers for PAM modules. This would commonly cause problems with legacy 257: applications, which might be linked against older versions of the system 258: libraries, as there was no way to load a matching version of the required 259: modules. 260: 261: OpenPAM, on the other hand, looks for modules that have the same version number 262: as the PAM library (currently 2 in FreeBSD and 0 in NetBSD), and only falls back 263: to an unversioned module if no versioned module could be loaded. Thus legacy 264: modules can be provided for legacy applications, while allowing new (or newly 265: built) applications to take advantage of the most recent modules. 266: 267: Although Solaris PAM modules commonly have a version number, they're not truly 268: versioned, because the number is a part of the module name and must be included 269: in the configuration. 270: 271: #### Module Path 272: 273: There isn't a common directory for storing PAM modules. Under FreeBSD, they are 274: located at `/usr/lib` and, under NetBSD, you can find them in 275: `/usr/lib/security`. 276: 277: ### Chains and policies 278: 279: When a server initiates a PAM transaction, the PAM library tries to load a 280: policy for the service specified in the 281: [pam\_start(3)]() 282: call. The policy specifies how authentication requests should be processed, and 283: is defined in a configuration file. This is the other central concept in PAM: 284: the possibility for the admin to tune the system security policy (in the wider 285: sense of the word) simply by editing a text file. 286: 287: A policy consists of four chains, one for each of the four PAM facilities. Each 288: chain is a sequence of configuration statements, each specifying a module to 289: invoke, some (optional) parameters to pass to the module, and a control flag 290: that describes how to interpret the return code from the module. 291: 292: Understanding the control flags is essential to understanding PAM configuration 293: files. There are a number of different control flags: 294: 295: * `binding` -- If the module succeeds and no earlier module in the chain has 296: failed, the chain is immediately terminated and the request is granted. If 297: the module fails, the rest of the chain is executed, but the request is 298: ultimately denied. 299: 300: This control flag was introduced by Sun in Solaris 9 (SunOS 5.9), and is also 301: supported by OpenPAM. 302: 303: * `required` -- If the module succeeds, the rest of the chain is executed, and 304: the request is granted unless some other module fails. If the module fails, 305: the rest of the chain is also executed, but the request is ultimately denied. 306: 307: * `requisite` -- If the module succeeds, the rest of the chain is executed, and 308: the request is granted unless some other module fails. If the module fails, 309: the chain is immediately terminated and the request is denied. 310: 311: * `sufficient` -- If the module succeeds and no earlier module in the chain has 312: failed, the chain is immediately terminated and the request is granted. If 313: the module fails, the module is ignored and the rest of the chain is 314: executed. 315: 316: As the semantics of this flag may be somewhat confusing, especially when it 317: is used for the last module in a chain, it is recommended that the `binding` 318: control flag be used instead if the implementation supports it. 319: 320: * `optional` -- The module is executed, but its result is ignored. If all 321: modules in a chain are marked `optional`, all requests will always be 322: granted. 323: 324: When a server invokes one of the six PAM primitives, PAM retrieves the chain for 325: the facility the primitive belongs to, and invokes each of the modules listed in 326: the chain, in the order they are listed, until it reaches the end, or determines 327: that no further processing is necessary (either because a `binding` or 328: `sufficient` module succeeded, or because a `requisite` module failed.) The 329: request is granted if and only if at least one module was invoked, and all 330: non-optional modules succeeded. 331: 332: Note that it is possible, though not very common, to have the same module listed 333: several times in the same chain. For instance, a module that looks up user names 334: and passwords in a directory server could be invoked multiple times with 335: different parameters specifying different directory servers to contact. PAM 336: treat different occurrences of the same module in the same chain as different, 337: unrelated modules. 338: 339: ### Transactions 340: 341: The lifecycle of a typical PAM transaction is described below. Note that if any 342: of these steps fails, the server should report a suitable error message to the 343: client and abort the transaction. 344: 345: 1. If necessary, the server obtains arbitrator credentials through a mechanism 346: independent of PAM -- most commonly by virtue of having been started by `root`, 347: or of being setuid `root`. 348: 349: 2. The server calls 350: [pam\_start(3)]() 351: to initialize the PAM library and specify its service name and the target 352: account, and register a suitable conversation function. 353: 354: 3. The server obtains various information relating to the transaction (such as 355: the applicant's user name and the name of the host the client runs on) and 356: submits it to PAM using 357: [pam\_set\_item(3)](). 358: 359: 4. The server calls 360: [pam\_authenticate(3)]() 361: to authenticate the applicant. 362: 363: 5. The server calls 364: [pam\_acct\_mgmt(3)]() 365: to verify that the requested account is available and valid. If the password is 366: correct but has expired, 367: [pam\_acct\_mgmt(3)]() 368: will return `PAM_NEW_AUTHTOK_REQD` instead of `PAM_SUCCESS`. 369: 370: 6. If the previous step returned `PAM_NEW_AUTHTOK_REQD`, the server now calls 371: [pam\_chauthtok(3)]() 372: to force the client to change the authentication token for the requested 373: account. 374: 375: 7. Now that the applicant has been properly authenticated, the server calls 376: [pam\_setcred(3)]() 377: to establish the credentials of the requested account. It is able to do this 378: because it acts on behalf of the arbitrator, and holds the arbitrator's 379: credentials. 380: 381: 8. Once the correct credentials have been established, the server calls 382: [pam\_open\_session(3)]() 383: to set up the session. 384: 385: 9. The server now performs whatever service the client requested -- for 386: instance, provide the applicant with a shell. 387: 388: 10. Once the server is done serving the client, it calls 389: [pam\_close\_session(3)]() 390: to tear down the session. 391: 392: 11. Finally, the server calls 393: [pam\_end(3)]() 394: to notify the PAM library that it is done and that it can release whatever 395: resources it has allocated in the course of the transaction. 396: 397: ## PAM Configuration 398: 399: ### PAM policy files 400: 401: #### The `/etc/pam.conf` file 402: 403: The traditional PAM policy file is `/etc/pam.conf`. This file contains all the 404: PAM policies for your system. Each line of the file describes one step in a 405: chain, as shown below: 406: 407: login auth required pam_nologin.so no_warn 408: 409: The fields are, in order: service name, facility name, control flag, module 410: name, and module arguments. Any additional fields are interpreted as additional 411: module arguments. 412: 413: A separate chain is constructed for each service / facility pair, so while the 414: order in which lines for the same service and facility appear is significant, 415: the order in which the individual services and facilities are listed is not. The 416: examples in the original PAM paper grouped configuration lines by facility, and 417: the Solaris stock `pam.conf` still does that, but FreeBSD's stock configuration 418: groups configuration lines by service. Either way is fine; either way makes 419: equal sense. 420: 421: #### The `/etc/pam.d` directory 422: 423: OpenPAM and Linux-PAM support an alternate configuration mechanism, which is the 424: preferred mechanism in FreeBSD and NetBSD. In this scheme, each policy is 425: contained in a separate file bearing the name of the service it applies to. 426: These files are stored in `/etc/pam.d/`. 427: 428: These per-service policy files have only four fields instead of `pam.conf`'s 429: five: the service name field is omitted. Thus, instead of the sample `pam.conf` 430: line from the previous section, one would have the following line in 431: `/etc/pam.d/login`: 432: 433: auth required pam_nologin.so no_warn 434: 435: As a consequence of this simplified syntax, it is possible to use the same 436: policy for multiple services by linking each service name to a same policy file. 437: For instance, to use the same policy for the `su` and `sudo` services, one could 438: do as follows: 439: 440: # cd /etc/pam.d 441: # ln -s su sudo 442: 443: This works because the service name is determined from the file name rather than 444: specified in the policy file, so the same file can be used for multiple 445: differently-named services. 446: 447: Since each service's policy is stored in a separate file, the `pam.d` mechanism 448: also makes it very easy to install additional policies for third-party software 449: packages. 450: 451: #### The policy search order 452: 453: As we have seen above, PAM policies can be found in a number of places. If no 454: configuration file is found for a particular service, the `/etc/pam.d/other` is 455: used instead. If that file does not exist, `/etc/pam.conf` is searched for 456: entries matching he specified service or, failing that, the "other" service. 457: 458: It is essential to understand that PAM's configuration system is centered on 459: chains. 460: 461: ### Breakdown of a configuration line 462: 463: As explained in the [PAM policy files](chap-pam.html#pam-config-file "18.5.1. 464: PAM policy files") section, each line in `/etc/pam.conf` consists of four or 465: more fields: the service name, the facility name, the control flag, the module 466: name, and zero or more module arguments. 467: 468: The service name is generally (though not always) the name of the application 469: the statement applies to. If you are unsure, refer to the individual 470: application's documentation to determine what service name it uses. 471: 472: Note that if you use `/etc/pam.d/` instead of `/etc/pam.conf`, the service name 473: is specified by the name of the policy file, and omitted from the actual 474: configuration lines, which then start with the facility name. 475: 476: The facility is one of the four facility keywords described in the 477: [[Facilities and primitives|guide/pam#facilities-primitives]]] section. 478: 479: Likewise, the control flag is one of the four keywords described in the [[Chains 480: and policies|guide/pam#chains-policies]] section, describing how to interpret 481: the return code from the module. Linux-PAM supports an alternate syntax that 482: lets you specify the action to associate with each possible return code, but 483: this should be avoided as it is non-standard and closely tied in with the way 484: Linux-PAM dispatches service calls (which differs greatly from the way Solaris 485: and OpenPAM do it.) Unsurprisingly, OpenPAM does not support this syntax. 486: 487: ### Policies 488: 489: To configure PAM correctly, it is essential to understand how policies are 490: interpreted. 491: 492: When an application calls 493: [pam\_start(3)](), 494: the PAM library loads the policy for the specified service and constructs four 495: module chains (one for each facility.) If one or more of these chains are empty, 496: the corresponding chains from the policy for the `other` service are 497: substituted. 498: 499: When the application later calls one of the six PAM primitives, the PAM library 500: retrieves the chain for the corresponding facility and calls the appropriate 501: service function in each module listed in the chain, in the order in which they 502: were listed in the configuration. After each call to a service function, the 503: module type and the error code returned by the service function are used to 504: determine what happens next. With a few exceptions, which we discuss below, the 505: following table applies: 506: 507: [[!table data=""" 508: | `PAM_SUCCESS` | `PAM_IGNORE` | `other` 509: binding | if (!fail) break; | - | fail = true; 510: required | - | - | fail = true; 511: requisite | - | - | fail = true; break; 512: sufficient | if (!fail) break; | - | - 513: optional | - | - | - 514: """]] 515: 516: If `fail` is true at the end of a chain, or when a `break` is reached, the 517: dispatcher returns the error code returned by the first module that failed. 518: Otherwise, it returns `PAM_SUCCESS`. 519: 520: The first exception of note is that the error code `PAM_NEW_AUTHTOK_REQD` is 521: treated like a success, except that if no module failed, and at least one module 522: returned `PAM_NEW_AUTHTOK_REQD`, the dispatcher will return 523: `PAM_NEW_AUTHTOK_REQD`. 524: 525: The second exception is that 526: [pam\_setcred(3)]() 527: treats `binding` and `sufficient` modules as if they were `required`. 528: 529: The third and final exception is that 530: [pam\_chauthtok(3)]() 531: runs the entire chain twice (once for preliminary checks and once to actually 532: set the password), and in the preliminary phase it treats `binding` and 533: `sufficient` modules as if they were `required`. 534: 535: ## PAM modules 536: 537: ### Common Modules 538: 539: #### pam\_deny(8) 540: 541: The 542: [pam\_deny(8)]() 543: module is one of the simplest modules available; it responds to any request with 544: `PAM_AUTH_ERR`. It is useful for quickly disabling a service (add it to the top 545: of every chain), or for terminating chains of `sufficient` modules. 546: 547: #### pam\_echo(8) 548: 549: The 550: [pam\_echo(8)]() 551: module simply passes its arguments to the conversation function as a 552: `PAM_TEXT_INFO` message. It is mostly useful for debugging, but can also serve 553: to display messages such as `Unauthorized access will be prosecuted` before 554: starting the authentication procedure. 555: 556: #### pam\_exec(8) 557: 558: The 559: [pam\_exec(8)]() 560: module takes its first argument to be the name of a program to execute, and the 561: remaining arguments are passed to that program as command-line arguments. One 562: possible application is to use it to run a program at login time which mounts 563: the user's home directory. 564: 565: #### pam\_ftpusers(8) 566: 567: The 568: [pam\_ftpusers(8)]() 569: module successes if and only if the user is listed in `/etc/ftpusers`. 570: Currently, in NetBSD, this module doesn't understand the extended syntax of 571: [ftpd(8)](), but 572: this will be fixed in the future. 573: 574: #### pam\_group(8) 575: 576: The 577: [pam\_group(8)]() 578: module accepts or rejects applicants on the basis of their membership in a 579: particular file group (normally `wheel` for 580: [su(1)]()). It is 581: primarily intended for maintaining the traditional behaviour of BSD 582: [su(1)](), but has 583: many other uses, such as excluding certain groups of users from a particular 584: service. 585: 586: In NetBSD, there is an argument called `authenticate` in which the user is asked 587: to authenticate using his own password. 588: 589: #### pam\_guest(8) 590: 591: The 592: [pam\_guest(8)]() 593: module allows guest logins using fixed login names. Various requirements can be 594: placed on the password, but the default behaviour is to allow any password as 595: long as the login name is that of a guest account. The 596: [pam\_guest(8)]() 597: module can easily be used to implement anonymous FTP logins. 598: 599: #### pam\_krb5(8) 600: 601: The 602: [pam\_krb5(8)]() 603: module provides functions to verify the identity of a user and to set user 604: specific credentials using Kerberos 5. It prompts the user for a password and 605: obtains a new Kerberos TGT for the principal. The TGT is verified by obtaining a 606: service ticket for the local host. The newly acquired credentials are stored in 607: a credential cache and the environment variable KRB5CCNAME is set appropriately. 608: The credentials cache should be destroyed by the user at logout with 609: [kdestroy(1)](). 610: 611: #### pam\_ksu(8) 612: 613: The 614: [pam\_ksu(8)]() 615: module provides only authentication services for Kerberos 5 to determine whether 616: or not the applicant is authorized to obtain the privileges of the target 617: account. 618: 619: #### pam\_lastlog(8) 620: 621: The 622: [pam\_lastlog(8)]() 623: module provides only session management services. It records the session in 624: [utmp(5)](), 625: [utmpx(5)](), 626: [wtmp(5)](), 627: [wtmpx(5)](), 628: [lastlog(5)]() 629: and 630: [lastlogx(5)]() 631: databases. 632: 633: #### pam\_login\_access(8) 634: 635: The 636: [pam\_login\_access(8)]() 637: module provides an implementation of the account management primitive which 638: enforces the login restrictions specified in the 639: [login.access(5)]() 640: table. 641: 642: #### pam\_nologin(8) 643: 644: The 645: [pam\_nologin(8)]() 646: module refuses non-root logins when `/var/run/nologin` exists. This file is 647: normally created by 648: [shutdown(8)]() 649: when less than five minutes remain until the scheduled shutdown time. 650: 651: #### pam\_permit(8) 652: 653: The 654: [pam\_permit(8)]() 655: module is one of the simplest modules available; it responds to any request with 656: `PAM_SUCCESS`. It is useful as a placeholder for services where one or more 657: chains would otherwise be empty. 658: 659: #### pam\_radius(8) 660: 661: The 662: [pam\_radius(8)]() 663: module provides authentication services based upon the RADIUS (Remote 664: Authentication Dial In User Service) protocol. 665: 666: #### pam\_rhosts(8) 667: 668: The 669: [pam\_rhosts(8)]() 670: module provides only authentication services. It reports success if and only if 671: the target user's ID is not 0 and the remote host and user are listed in 672: `/etc/hosts.equiv` or in the target user's `~/.rhosts`. 673: 674: #### pam\_rootok(8) 675: 676: The 677: [pam\_rootok(8)]() 678: module reports success if and only if the real user id of the process calling it 679: (which is assumed to be run by the applicant) is 0. This is useful for 680: non-networked services such as 681: [su(1)]() or 682: [passwd(1)](), to 683: which the `root` should have automatic access. 684: 685: #### pam\_securetty(8) 686: 687: The 688: [pam\_securetty(8)]() 689: module provides only account services. It is used when the applicant is 690: attempting to authenticate as superuser, and the process is attached to an 691: insecure TTY. 692: 693: #### pam\_self(8) 694: 695: The 696: [pam\_self(8)]() 697: module reports success if and only if the names of the applicant matches that of 698: the target account. It is most useful for non-networked services such as 699: [su(1)](), where the 700: identity of the applicant can be easily verified. 701: 702: #### pam\_ssh(8) 703: 704: The 705: [pam\_ssh(8)]() 706: module provides both authentication and session services. The authentication 707: service allows users who have passphrase-protected SSH secret keys in their 708: `~/.ssh` directory to authenticate themselves by typing their passphrase. The 709: session service starts 710: [ssh-agent(1)]() 711: and preloads it with the keys that were decrypted in the authentication phase. 712: This feature is particularly useful for local logins, whether in X (using 713: [xdm(1)]() or 714: another PAM-aware X login manager) or at the console. 715: 716: This module implements what is fundamentally a password authentication scheme. 717: Care should be taken to only use this module over a secure session (secure TTY, 718: encrypted session, etc.), otherwise the user's SSH passphrase could be 719: compromised. 720: 721: Additional consideration should be given to the use of 722: [pam\_ssh(8)](). 723: Users often assume that file permissions are sufficient to protect their SSH 724: keys, and thus use weak or no passphrases. Since the system administrator has no 725: effective means of enforcing SSH passphrase quality, this has the potential to 726: expose the system to security risks. 727: 728: #### pam\_unix(8) 729: 730: The 731: [pam\_unix(8)]() 732: module implements traditional UNIX® password authentication, using 733: [getpwnam(3)]() 734: under FreeBSD or 735: [getpwnam\_r(3)]() 736: under NetBSD to obtain the target account's password and compare it with the one 737: provided by the applicant. It also provides account management services 738: (enforcing account and password expiration times) and password-changing 739: services. This is probably the single most useful module, as the great majority 740: of admins will want to maintain historical behaviour for at least some services. 741: 742: ### NetBSD-specific PAM Modules 743: 744: #### pam\_skey(8) 745: 746: The 747: [pam\_skey(8)]() 748: module implements S/Key One Time Password (OTP) authentication methods, using 749: the `/etc/skeykeys` database. 750: 751: ## PAM Application Programming 752: 753: This section has not yet been written. 754: 755: ## PAM Module Programming 756: 757: This section has not yet been written. 758: 759: ## Sample PAM Application 760: 761: The following is a minimal implementation of 762: [su(1)]() using PAM. 763: Note that it uses the OpenPAM-specific 764: [openpam\_ttyconv(3)]() 765: conversation function, which is prototyped in `security/openpam.h`. If you wish 766: build this application on a system with a different PAM library, you will have 767: to provide your own conversation function. A robust conversation function is 768: surprisingly difficult to implement; the one presented in the [Sample PAM 769: Conversation Function](chap-pam.html#pam-sample-conv "18.11. Sample PAM 770: Conversation Function") sub-chapter is a good starting point, but should not be 771: used in real-world applications. 772: 773: #include <sys/param.h> 774: #include <sys/wait.h> 775: 776: #include <err.h> 777: #include <pwd.h> 778: #include <stdio.h> 779: #include <stdlib.h> 780: #include <string.h> 781: #include <syslog.h> 782: #include <unistd.h> 783: 784: #include <security/pam_appl.h> 785: #include <security/openpam.h> /* for openpam_ttyconv() */ 786: 787: extern char **environ; 788: 789: static pam_handle_t *pamh; 790: static struct pam_conv pamc; 791: 792: static void 793: usage(void) 794: { 795: 796: fprintf(stderr, "Usage: su [login [args]]\n"); 797: exit(1); 798: } 799: 800: int 801: main(int argc, char *argv[]) 802: { 803: char hostname[MAXHOSTNAMELEN]; 804: const char *user, *tty; 805: char **args, **pam_envlist, **pam_env; 806: struct passwd *pwd; 807: int o, pam_err, status; 808: pid_t pid; 809: 810: while ((o = getopt(argc, argv, "h")) != -1) 811: switch (o) { 812: case 'h': 813: default: 814: usage(); 815: } 816: 817: argc -= optind; 818: argv += optind; 819: 820: if (argc > 0) { 821: user = *argv; 822: --argc; 823: ++argv; 824: } else { 825: user = "root"; 826: } 827: 828: /* initialize PAM */ 829: pamc.conv = &openpam_ttyconv; 830: pam_start("su", user, &pamc, &pamh); 831: 832: /* set some items */ 833: gethostname(hostname, sizeof(hostname)); 834: if ((pam_err = pam_set_item(pamh, PAM_RHOST, hostname)) != PAM_SUCCESS) 835: goto pamerr; 836: user = getlogin(); 837: if ((pam_err = pam_set_item(pamh, PAM_RUSER, user)) != PAM_SUCCESS) 838: goto pamerr; 839: tty = ttyname(STDERR_FILENO); 840: if ((pam_err = pam_set_item(pamh, PAM_TTY, tty)) != PAM_SUCCESS) 841: goto pamerr; 842: 843: /* authenticate the applicant */ 844: if ((pam_err = pam_authenticate(pamh, 0)) != PAM_SUCCESS) 845: goto pamerr; 846: if ((pam_err = pam_acct_mgmt(pamh, 0)) == PAM_NEW_AUTHTOK_REQD) 847: pam_err = pam_chauthtok(pamh, PAM_CHANGE_EXPIRED_AUTHTOK); 848: if (pam_err != PAM_SUCCESS) 849: goto pamerr; 850: 851: /* establish the requested credentials */ 852: if ((pam_err = pam_setcred(pamh, PAM_ESTABLISH_CRED)) != PAM_SUCCESS) 853: goto pamerr; 854: 855: /* authentication succeeded; open a session */ 856: if ((pam_err = pam_open_session(pamh, 0)) != PAM_SUCCESS) 857: goto pamerr; 858: 859: /* get mapped user name; PAM may have changed it */ 860: pam_err = pam_get_item(pamh, PAM_USER, (const void **)&user); 861: if (pam_err != PAM_SUCCESS || (pwd = getpwnam(user)) == NULL) 862: goto pamerr; 863: 864: /* export PAM environment */ 865: if ((pam_envlist = pam_getenvlist(pamh)) != NULL) { 866: for (pam_env = pam_envlist; *pam_env != NULL; ++pam_env) { 867: putenv(*pam_env); 868: free(*pam_env); 869: } 870: free(pam_envlist); 871: } 872: 873: /* build argument list */ 874: if ((args = calloc(argc + 2, sizeof *args)) == NULL) { 875: warn("calloc()"); 876: goto err; 877: } 878: *args = pwd->pw_shell; 879: memcpy(args + 1, argv, argc * sizeof *args); 880: 881: /* fork and exec */ 882: switch ((pid = fork())) { 883: case -1: 884: warn("fork()"); 885: goto err; 886: case 0: 887: /* child: give up privs and start a shell */ 888: 889: /* set uid and groups */ 890: if (initgroups(pwd->pw_name, pwd->pw_gid) == -1) { 891: warn("initgroups()"); 892: _exit(1); 893: } 894: if (setgid(pwd->pw_gid) == -1) { 895: warn("setgid()"); 896: _exit(1); 897: } 898: if (setuid(pwd->pw_uid) == -1) { 899: warn("setuid()"); 900: _exit(1); 901: } 902: execve(*args, args, environ); 903: warn("execve()"); 904: _exit(1); 905: default: 906: /* parent: wait for child to exit */ 907: waitpid(pid, &status, 0); 908: 909: /* close the session and release PAM resources */ 910: pam_err = pam_close_session(pamh, 0); 911: pam_end(pamh, pam_err); 912: 913: exit(WEXITSTATUS(status)); 914: } 915: 916: pamerr: 917: fprintf(stderr, "Sorry\n"); 918: err: 919: pam_end(pamh, pam_err); 920: exit(1); 921: } 922: 923: ## Sample PAM Module 924: 925: The following is a minimal implementation of 926: [pam\_unix(8)](), 927: offering only authentication services. It should build and run with most PAM 928: implementations, but takes advantage of OpenPAM extensions if available: note 929: the use of 930: [pam\_get\_authtok(3)](), 931: which enormously simplifies prompting the user for a password. 932: 933: #include <sys/param.h> 934: 935: #include <pwd.h> 936: #include <stdlib.h> 937: #include <stdio.h> 938: #include <string.h> 939: #include <unistd.h> 940: 941: #include <security/pam_modules.h> 942: #include <security/pam_appl.h> 943: 944: #ifndef _OPENPAM 945: static char password_prompt[] = "Password:"; 946: #endif 947: 948: #ifndef PAM_EXTERN 949: #define PAM_EXTERN 950: #endif 951: 952: PAM_EXTERN int 953: pam_sm_authenticate(pam_handle_t *pamh, int flags, 954: int argc, const char *argv[]) 955: { 956: #ifndef _OPENPAM 957: const void *ptr; 958: const struct pam_conv *conv; 959: struct pam_message msg; 960: const struct pam_message *msgp; 961: struct pam_response *resp; 962: #endif 963: struct passwd *pwd; 964: const char *user; 965: char *crypt_password, *password; 966: int pam_err, retry; 967: 968: /* identify user */ 969: if ((pam_err = pam_get_user(pamh, &user, NULL)) != PAM_SUCCESS) 970: return (pam_err); 971: if ((pwd = getpwnam(user)) == NULL) 972: return (PAM_USER_UNKNOWN); 973: 974: /* get password */ 975: #ifndef _OPENPAM 976: pam_err = pam_get_item(pamh, PAM_CONV, &ptr); 977: if (pam_err != PAM_SUCCESS) 978: return (PAM_SYSTEM_ERR); 979: conv = ptr; 980: msg.msg_style = PAM_PROMPT_ECHO_OFF; 981: msg.msg = password_prompt; 982: msgp = &msg; 983: #endif 984: password = NULL; 985: for (retry = 0; retry < 3; ++retry) { 986: #ifdef _OPENPAM 987: pam_err = pam_get_authtok(pamh, PAM_AUTHTOK, 988: (const char **)&password, NULL); 989: #else 990: resp = NULL; 991: pam_err = (*conv->conv)(1, &msgp, &resp, conv->appdata_ptr); 992: if (resp != NULL) { 993: if (pam_err == PAM_SUCCESS) 994: password = resp->resp; 995: else 996: free(resp->resp); 997: free(resp); 998: } 999: #endif 1000: if (pam_err == PAM_SUCCESS) 1001: break; 1002: } 1003: if (pam_err == PAM_CONV_ERR) 1004: return (pam_err); 1005: if (pam_err != PAM_SUCCESS) 1006: return (PAM_AUTH_ERR); 1007: 1008: /* compare passwords */ 1009: if ((!pwd->pw_passwd[0] && (flags & PAM_DISALLOW_NULL_AUTHTOK)) || 1010: (crypt_password = crypt(password, pwd->pw_passwd)) == NULL || 1011: strcmp(crypt_password, pwd->pw_passwd) != 0) 1012: pam_err = PAM_AUTH_ERR; 1013: else 1014: pam_err = PAM_SUCCESS; 1015: #ifndef _OPENPAM 1016: free(password); 1017: #endif 1018: return (pam_err); 1019: } 1020: 1021: PAM_EXTERN int 1022: pam_sm_setcred(pam_handle_t *pamh, int flags, 1023: int argc, const char *argv[]) 1024: { 1025: 1026: return (PAM_SUCCESS); 1027: } 1028: 1029: PAM_EXTERN int 1030: pam_sm_acct_mgmt(pam_handle_t *pamh, int flags, 1031: int argc, const char *argv[]) 1032: { 1033: 1034: return (PAM_SUCCESS); 1035: } 1036: 1037: PAM_EXTERN int 1038: pam_sm_open_session(pam_handle_t *pamh, int flags, 1039: int argc, const char *argv[]) 1040: { 1041: 1042: return (PAM_SUCCESS); 1043: } 1044: 1045: PAM_EXTERN int 1046: pam_sm_close_session(pam_handle_t *pamh, int flags, 1047: int argc, const char *argv[]) 1048: { 1049: 1050: return (PAM_SUCCESS); 1051: } 1052: 1053: PAM_EXTERN int 1054: pam_sm_chauthtok(pam_handle_t *pamh, int flags, 1055: int argc, const char *argv[]) 1056: { 1057: 1058: return (PAM_SERVICE_ERR); 1059: } 1060: 1061: #ifdef PAM_MODULE_ENTRY 1062: PAM_MODULE_ENTRY("pam_unix"); 1063: #endif 1064: 1065: ## Sample PAM Conversation Function 1066: 1067: The conversation function presented below is a greatly simplified version of 1068: OpenPAM's 1069: [openpam\_ttyconv(3)](). 1070: It is fully functional, and should give the reader a good idea of how a 1071: conversation function should behave, but it is far too simple for real-world 1072: use. Even if you're not using OpenPAM, feel free to download the source code and 1073: adapt 1074: [openpam\_ttyconv(3)]() 1075: to your uses; we believe it to be as robust as a tty-oriented conversation 1076: function can reasonably get. 1077: 1078: #include <stdio.h> 1079: #include <stdlib.h> 1080: #include <string.h> 1081: #include <unistd.h> 1082: 1083: #include <security/pam_appl.h> 1084: 1085: int 1086: converse(int n, const struct pam_message **msg, 1087: struct pam_response **resp, void *data) 1088: { 1089: struct pam_response *aresp; 1090: char buf[PAM_MAX_RESP_SIZE]; 1091: int i; 1092: 1093: data = data; 1094: if (n <= 0 || n > PAM_MAX_NUM_MSG) 1095: return (PAM_CONV_ERR); 1096: if ((aresp = calloc(n, sizeof *aresp)) == NULL) 1097: return (PAM_BUF_ERR); 1098: for (i = 0; i < n; ++i) { 1099: aresp[i].resp_retcode = 0; 1100: aresp[i].resp = NULL; 1101: switch (msg[i]->msg_style) { 1102: case PAM_PROMPT_ECHO_OFF: 1103: aresp[i].resp = strdup(getpass(msg[i]->msg)); 1104: if (aresp[i].resp == NULL) 1105: goto fail; 1106: break; 1107: case PAM_PROMPT_ECHO_ON: 1108: fputs(msg[i]->msg, stderr); 1109: if (fgets(buf, sizeof buf, stdin) == NULL) 1110: goto fail; 1111: aresp[i].resp = strdup(buf); 1112: if (aresp[i].resp == NULL) 1113: goto fail; 1114: break; 1115: case PAM_ERROR_MSG: 1116: fputs(msg[i]->msg, stderr); 1117: if (strlen(msg[i]->msg) > 0 && 1118: msg[i]->msg[strlen(msg[i]->msg) - 1] != '\n') 1119: fputc('\n', stderr); 1120: break; 1121: case PAM_TEXT_INFO: 1122: fputs(msg[i]->msg, stdout); 1123: if (strlen(msg[i]->msg) > 0 && 1124: msg[i]->msg[strlen(msg[i]->msg) - 1] != '\n') 1125: fputc('\n', stdout); 1126: break; 1127: default: 1128: goto fail; 1129: } 1130: } 1131: *resp = aresp; 1132: return (PAM_SUCCESS); 1133: fail: 1134: for (i = 0; i < n; ++i) { 1135: if (aresp[i].resp != NULL) { 1136: memset(aresp[i].resp, 0, strlen(aresp[i].resp)); 1137: free(aresp[i].resp); 1138: } 1139: } 1140: memset(aresp, 0, n * sizeof *aresp); 1141: *resp = NULL; 1142: return (PAM_CONV_ERR); 1143: } 1144: 1145: ## Further Reading 1146: 1147: ### Papers 1148: 1149: * *[sun-pam]: [Making Login Services Independent of Authentication Technologies]()*. Vipin Samar and Charlie Lai. Sun Microsystems. 1150: * *[opengroup-singlesignon]: [X/Open Single Sign-on Preliminary Specification]()*. The Open Group. 1-85912-144-6. June 1997. 1151: * *[kernelorg-pamdraft]: [Pluggable Authentication Modules]()*. Andrew G. Morgan. October 6, 1999. 1152: 1153: ### User Manuals 1154: 1155: * *[sun-pamadmin]: [PAM Administration]()*. Sun Microsystems. 1156: 1157: ### Related Web pages 1158: 1159: * *[openpam-website]: [OpenPAM homepage]()*. Dag-Erling Smørgrav. ThinkSec AS. 1160: * *[linuxpam-website]: [Linux-PAM homepage]()*. Andrew G. Morgan. 1161: * *[solarispam-website]: [Solaris PAM homepage]()*. Sun Microsystems. 1162: 1163: ### Networks Associates Technology's license on the PAM article 1164: 1165: Copyright (c) 2001-2003 Networks Associates Technology, Inc. 1166: All rights reserved. 1167: This software was developed for the FreeBSD Project by ThinkSec AS and 1168: Network Associates Laboratories, the Security Research Division of 1169: Network Associates, Inc. under DARPA/SPAWAR contract N66001-01-C-8035 1170: ("CBOSS"), as part of the DARPA CHATS research program. 1171: Redistribution and use in source and binary forms, with or without 1172: modification, are permitted provided that the following conditions 1173: are met: 1174: 1. Redistributions of source code must retain the above copyright 1175: notice, this list of conditions and the following disclaimer. 1176: 2. Redistributions in binary form must reproduce the above copyright 1177: notice, this list of conditions and the following disclaimer in the 1178: documentation and/or other materials provided with the distribution. 1179: 3. The name of the author may not be used to endorse or promote 1180: products derived from this software without specific prior written 1181: permission. 1182: THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND 1183: ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 1184: IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 1185: ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE 1186: FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 1187: DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 1188: OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 1189: HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 1190: LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 1191: OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 1192: SUCH DAMAGE. | https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/guide/pam.mdwn?hideattic=0;f=h;ln=1;rev=1.3 | CC-MAIN-2020-45 | refinedweb | 6,465 | 52.39 |
map()
map(function, list1)
[function(i) for i in list1]
[i.function() for i in list1]
map(.function, list1) # error!
map(run_method(function), list1) # error!
map
You'd use
operator.methodcaller():
from operator import methodcaller map(methodcaller('function'), list1)
methodcaller() accepts additional arguments that are then passed into the called method;
methodcaller('foo', 'bar', spam='eggs')(object) is the equivalent of
object.foo('bar', spam='eggs').
If all objects in
list1 are the same type or subclasses of that type, and the method you want to call doesn't take any arguments, you can pass in the unbound method to
map as the function to call. For example, to lowercase all strings in a list, you can use:
map(str.lower, list_of_strings)
where
str.lower is the unbound method on the
str type.
Note that a list comprehension is not really the equivalent of a
map() here.
map() can only do one loop, entirely in C.
map() will
zip() multiple iterable arguments, and
map() in Python 3 is itself an iterator.
A list comprehension on the other hand can do multiple (nested) loops and add in filtering, and the left-hand expression can be any valid Python expression including nested list comprehensions. | https://codedump.io/share/6ZiO6Dw4SVEo/1/python-calling-method-in-the-map-function | CC-MAIN-2017-04 | refinedweb | 202 | 66.84 |
Choosing Your Reporting Tool
Okay, once again I bring up the topic of reporting. Namely, because it seems as though there's no one best solution from what I've read here.
Most of you say that Crystal is a pain to use (the 'Devil's spawn someone said). From what we've been attempting to do, we're finding that to be true as well. In addition to making .net reporting more difficult, they've also make the licensing confusing as hell.
On MS Reporting Services, you are restricted to IIS and MS SQL. So, we're ruling that out for the moment because we have a Firebird database.
So, what are the shortcomings of Data Dynamics' ActiveReports product?
Minority Report
Wednesday, August 4, 2004
I use crayons and plain white paper.
Kermit
Wednesday, August 4, 2004
AFAIK MS Reporting Services will play nice with your Firebird DB through ODBC.
Just me (Sir to you)
Wednesday, August 4, 2004
I'm about to look at different charting and reporting objects, and you might want to look at ComponentOne VSVIEW Reporting Edition. If it's as good as their FlexGrid object, it might be a good solution.
Fred
Wednesday, August 4, 2004
Reporting services can report from OLEDB/ODBC services, but Reporting Services itself uses a SQL Server back-end, and requires (a) SQL Server license(s). If you don't currently use SQL Server in your enterprise then it can be an irritation.
Dennis Forbes
Wednesday, August 4, 2004
So far I haven't run into any show stoppers with ActiveReports (...yet). They seem to be a smallish company so their longevity might be a factor. We have created a pretty complicated web reporting system with user selectable criteria and grouping using ActiveReports over the last couple of months. Most of the problems we have ever had have been minor and are usually fixed relatively soon. They do release new versions rather often.
I think right now the worst thing about it is that you can't create subclasses of their Report class because if you do Visual Studio no longer recognizes it and you can no longer use the report designer on it. This is suppsed to be fixed in the next major (pay for it) upgrade.
Using the report designer is almost as good as using Access' report designer. Their query system allows you to put in variables that can be replaced later, we are currently using this technique to add on to where and group by clauses on the fly.
Overall there are anoyances but in comparison to Crystal Reports they can easily be worked around.
Justin
Wednesday, August 4, 2004
I tend to output my data in an XML format and apply an XSL transform to it - Then display it in a browser or embedded IE control in an app. Works well for my needs
Dan G
Wednesday, August 4, 2004
I would love to just use XML and XSL, but every single report we build always has some fancy nested 'group by' and summary data that requires an advanced reporting tool.
Crystal support is great, but I just want a tool with fewer bugs. Right now, Crystal is crashing aspnet_wp.exe when a strongly typed report is loaded inside a namespace.
I could scream right now. AAAAAAAAAAAHHHH!!!!!!!!!!!!!!!!!!!
survivor
Thursday, August 5, 2004
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware5/170947.html | CC-MAIN-2022-21 | refinedweb | 563 | 71.04 |
Search Type: Posts; User: oneofthelions
Search: Search took 0.01 seconds.
- 28 Dec 2012 11:46 PM
- Replies
- 1
- Views
- 1,409
Hi, I am trying to insert a TreeMap or GeoMap, part of GoogleMaps.
But I am not able to see the graph within ExtJS Layout. Is there any plugin for Google maps?
...
- 28 Apr 2011 2:11 AM
- Replies
- 0
- Views
- 1,679
I have a button at which when the user hovers over I display a tooltip.
function createTooltp(toolTipId) {
tooTip = new Ext.ToolTip({
target: toolTipId,
anchor: 'left',
...
- 17 Feb 2011 9:47 PM
Thanks to the information on Mask, I shall keep in mind.
But I called my function as you mentioned.
But this doesn't work. This return to the function is not happening?
- 13 Feb 2011 11:35 PM
It only Mask the text field to allow digits and hyphen. But the regex is not checked to allow only one hyphen and at most two digits after hyphen. Currently the user can enter as many digits and...
- 11 Feb 2011 2:56 AM
I have a text filed
var issueNoField = new Ext.form.TextField({
fieldLabel:'Issue No',
width: 120,
vtype: 'hyphen'
- 10 Feb 2011 4:38 AM
Jump to post Thread: Ext Js Numeric and hyphen by oneofthelions
- Replies
- 0
- Views
- 1,953
Hi, I was using TextField of the form. I also want to allow Hyphen. I need help in calling a regular expression which does a check that no alphabets and only numbers allowed and one hyphen as well.
...
- 6 Feb 2011 10:03 PM
- Replies
- 2
- Views
- 2,913
I did it with using a button feature.
<script type="text/javascript">
function openFAQPopup(){
var <portlet:namespace/>learnUrl = "<%=learnUrl%>";
var <portlet:namespace/>win...
- 4 Feb 2011 1:36 AM
- Replies
- 2
- Views
- 2,913
Hi - I am using Liferay Portal for my portlets. Navigating from JSP1 with onclick call a JS function. This would have Ext.Window().
The new JSP2(gadget) is poped up. There is a link, on click I...
- 3 Feb 2011 3:38 AM
- Replies
- 2
- Views
- 1,626
I have local var (Array list) which has three key, value pairs.
I want the second value to be displayed in the combo box and its key as default.
Ext.onReady(function(){
var group =...
Results 1 to 9 of 9 | http://www.sencha.com/forum/search.php?s=1178498d84f3105f6c7f128b555d7931&searchid=10603484 | CC-MAIN-2015-14 | refinedweb | 390 | 84.37 |
printf,
fprintf,
sprintf,
snprintf,
asprintf,
dprintf,
vprintf,
vfprintf,
vsprintf,
vsnprintf,
vasprintf,
vdprintf—
#include <stdio.h>int
printf(const char *format, ...); int
fprintf(FILE *stream, const char *format, ...); int
sprintf(char *str, const char *format, ...); int
snprintf(char *str, size_t size, const char *format, ...); int
asprintf(char **ret, const char *format, ...); int
dprintf(int fd, const char * restrict format, ...);
#include <stdarg.h>
#include <std); int
vdprintf(int fd, const char * restrict format, va_list ap);;
dprintf() and
vdprintf() write output to the given file descriptor;() and
vsnprintf():
$specifying the next argument to access. If this field is not provided, the argument following the last argument accessed will be used. Arguments are numbered starting at
1.
#’ character specifying that the value should be converted to an “alternate form”. For
oconversions, the precision of the number is increased to force the first character of the output string to a zero (except if a zero value is printed with an explicit precision of zero). For
xand
Xconversions, a non-zero result has the string ‘
0x’ (or ‘ all other formats, behaviour is undefined.
0’ character specifying zero padding. For all conversions except
n, the converted value is padded on the left with zeros rather than blanks. If a precision is given with a numeric conversion (
d,
i,
o,
u,
x, and
X), the ‘
0’ flag is ignored.
-’ indicates the converted value is to be left adjusted on the field boundary. Except for
nconversions, the converted value is padded on the right with blanks, rather than on the left with blanks or zeros. A ‘
-’ overrides a ‘
0’ if both are given.
d,
a,
A,
e,
E,
f,
F,
g,
G, or
i).
+’ character specifying that a sign always be placed before a number produced by a signed conversion. A ‘
+’ overrides a space if both are used.
.’ followed by an optional digit string. If the digit string is omitted, the precision is taken as zero. This gives the minimum number of digits to appear for
d,
i,
o,
u,
x, and
Xconversions, the number of digits to appear after the decimal-point for
a,
A,
e,
E,
f, and
Fconversions, the maximum number of significant digits for
gand
Gconversions, or the maximum number of characters to be printed from a string for
sconversions.
d,
i,
n,
o,
u,
x, or
Xconversions: Note: the
tmodifier, when applied to an
o,
u,
x, or
Xconversion, indicates that the argument is of an unsigned type equivalent in size to a ptrdiff_t. The
zmodifier, when applied to a
dor
iconversion, indicates that the argument is of a signed type equivalent in size to a size_t. Similarly, when applied to an
nconversion, it indicates that the argument is a pointer to a signed type equivalent in size to a size_t. The following length modifiers are valid for the
a,
A,
e,
E,
f,
F,
g, or
Gconversions: The following length modifier is valid for the
cor
sconversions:
*’
int(or appropriate variant) argument is converted to signed decimal (
dand
i), unsigned octal (
o), unsigned decimal (
u), or unsigned hexadecimal (
xand
X) notation. The letters
abcdefare used for
xconversions; the letters
ABCDEFare used for
Xconversions. The precision, if any, gives the minimum number of digits that must appear; if the converted value requires fewer digits, it is padded on the left with zeros.
DOU
long intargument is converted to signed decimal, unsigned octal, or unsigned decimal, as if the format had been
ld,
lo, or
lurespectively. These conversion characters are deprecated, and will eventually disappear.
eEconversion.
fF.
gG
doubleargument is converted in style
for
e(or
Efor
Gconversions). The precision specifies the number of significant digits. If the precision is missing, 6 digits are given; if the precision is zero, it is treated as 1. Style
e.
aA
doubleargument is rounded and converted to hexadecimal notation in the style [-]0xh
.hhh
pis a literal character ‘
p’, and the exponent consists of a positive or negative sign followed by a decimal number representing an exponent of 2. The
Aconversion.
c
intargument is converted to an
unsigned char, and the resulting character is written..
p
void *pointer argument is printed in hexadecimal (as if by ‘
%#x’ or ‘
%#lx’).
n
int *(or variant) pointer argument. No argument is converted.
%
%’ is written. No argument is converted. The complete conversion specification is ‘
%%’.
printf(),
dprintf(),
fprintf(),
sprintf(),
vprintf(),
vdprintf(),
vfprintf(),
vsprintf(),
asprintf(), and
vasprintf() functions return the number of characters printed (not including the trailing() family of functions may fail if:
EILSEQ]
ENOMEM]
EOVERFLOW]
fprintf(),
printf(),
snprintf(),
sprintf(),
vfprintf(),
vprintf(),
vsnprintf(), and
vsprintf() functions conform to ISO/IEC 9899:1999 (“ISO C99”). The
dprintf() and
vdprintf() functions conform to IEEE Std 1003.1-2008 (“POSIX.1”).
ftoa() and
itoa() first appeared in Version 1 AT&T UNIX. The function
printf() first appeared in Version 2 AT&T UNIX, and
fprintf() and
sprintf() in Version 7 AT&T UNIX. The functions
snprintf() and
vsnprintf() first appeared in 4.4BSD. The functions
asprintf() and
vasprintf() first appeared in the GNU C library. This implementation first appeared in OpenBSD 2.3. The functions
dprintf() and
vdprintf() first appeared in OpenBSD 5.3.
%D,
%O, and
%Uare not standard and are provided only for backward compatibility. The effect of padding the
%pformat with zeros (either by the ‘
0’ flag or by specifying a precision), and the benign effect (i.e., none) of the ‘
#’ flag on
%nand
%pconversions,
asprintf() interface is not available on all systems as it is not part of ISO/IEC 9899:1999 (“ISO C99”). It is important never to pass a string with user-supplied data as a format without using . | http://man.openbsd.org/printf.3 | CC-MAIN-2018-47 | refinedweb | 929 | 55.13 |
R/GRASS GIS issue with rgdal
2011-04-30 23:52:37 GMT
All: Sorry for posting the way I have, but I am not really sure where to send this, as it is NOT a GRASS issue, really, but… On a Mac running OS X 10.6.7, I have the following issue: (1) I've loaded the GRASS GRASS-6.4.1-1-Snow.dmg from the KyngChaos Wiki (2) GDAL Complete 1.8 framework package (3) rgdal 0.6.33-1 - R 2.12 package (4) installed R 2.13 Mac binary from CRAN The curious thing is, if I try to load rgdal through the Mac R GUI, I get: > library(rgdal) Error in dyn.load(file, DLLpath = DLLpath, ...) : unable to load shared object '/Library/Frameworks/R.framework/Versions/2.13/Resources/library/rgdal/libs/i386/rgdal.so': dlopen(/Library/Frameworks/R.framework/Versions/2.13/Resources/library/rgdal/libs/i386/rgdal.so, 6): Library not loaded: /Library/Frameworks/R.framework/Versions/2.12/Resources/lib/libR.dylib Referenced from: /Library/Frameworks/R.framework/Versions/2.13/Resources/library/rgdal/libs/i386/rgdal.so Reason: image not found Error: package/namespace load failed for 'rgdal' However, if I load rgdal in R through a Mac term window R prompt, life is good and I get:(Continue reading) | http://blog.gmane.org/gmane.comp.gis.grass.user/month=20110501 | CC-MAIN-2014-52 | refinedweb | 221 | 50.63 |
How to convert String (byte array as string) to short
java byte array to string utf-8
byte array to string c#
string to byte array golang
string to byte array python
add string to byte array java
string to byte array kotlin
java byte array to hex string
Hello here i want to convert Byte array ie 0x3eb to short so i considered 0x3eb as a string and tried to convert to short but its throwing Numberformat Exception...someone please help me
import java.io.UnsupportedEncodingException; public class mmmain { public static void main(String[] args) throws UnsupportedEncodingException { String ss="0x03eb"; Short value = Short.parseShort(ss); System.out.println("value--->"+value); } } Exception what im getting is Exception in thread "main" java.lang.NumberFormatException: For input string: "0x3eb" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:491) at java.lang.Short.parseShort(Short.java:117) at java.lang.Short.parseShort(Short.java:143) at mmmain.main(mmmain.java:14)
even i tried converting 0x3eb to bytes by
byte[] bytes = ss.getBytes();
but i didnt found any implementation for parsing bytes to short.
Thanks in advance
Since the string value that you're using is a hexadecimal value, to convert it into short, you need to remove the
0x using a substring and pass the radix as below:
Short.parseShort(yourHexString.substring(2), 16)
Here 16 is the radix. More info in the doc here.
Update
Since the OP asked for some more clarification, adding the below info..
3 ways to convert String to byte array in Java, Today, I am going to discuss one of the common tasks for programmers, converting a String to a byte array. You need to do that for multiple reasons e.g. for� The idea here is to create the byte stream from specified byte array. Then read all characters from the byte stream and return stream as a string. The following code example shows how to implement this. The solution automatically tries to determine the encoding used using byte order mark (BOM) in the byte stream. If not found, UTF-8 encoding is
See the doc of
parseShort:
Parses the string argument as a signed decimal short. The characters in the string must all be decimal digits, except that the first character may be an ASCII minus sign '-' ('\u002D') to indicate a negative value or an ASCII plus sign '+' ('\u002B') to indicate a positive value.
The string to be parsed should only contain decimal characters and sign characters, it can not contains the
0x prefix.
Try:
String ss="3eb"; Short value = Short.parseShort(ss, 16);
Convert String to Byte Array and Reverse in Java, A String is stored as an array of Unicode characters in Java. To convert it to a byte array, we translate the sequence of Characters into a sequence� The Encoding.GetString () method converts an array of bytes into a string. The following code snippet converts an ASCII byte array into a string and prints the converted string to the console. // Convert a byte array to a C# string. string str = Encoding.ASCII.GetString (bytes);
You have to cut "0x" from the beginning:
short.parseShort(yourHexString.Substring(2), 16)
Convert between byte array/slice and string � YourBasic Go, CODE EXAMPLE When you convert between a string and a byte slice (array), you get a brand new slice that contains the same bytes as the string, and vice� We refer to the process of converting a byte array to a String as decoding. Similar to encoding, this process requires a Charset. However, we cannot just use any charset for decoding a byte array. We should use the charset that was used to encode the String into the byte array. We can convert a byte array to a String in many ways.
Follow this document this may help you String to byte array, byte array to String in Java
How to convert Byte[] array to String in Java, println(“Text [Byte Format] : ” + bytes);. At this point you are creating new String by using bytes values not byte Object location. So the result is right� I have an array of unsigned characters, produced by hash function. I want to convert it to string or ascii code or hexdecimal. The value at the moment are gibberish and hash are nonsense. I need to know how I can convert them. Any help would be much appreciated. The array a byte array or unsigned
C# Convert String to Byte Array, Main: The character representations from the input string were first converted to fit in one byte elements each. Info: We see that the integer 68 corresponds to the� How to convert String to byte array in Java ? To convert from String to byte array , use String.getBytes() method . Please note that this method uses the platform’s default charset . We can use String class getBytes() method to encode the string into a sequence of bytes using the platform’s default charset .
Convert byte[] Array to String in Java, Learn to convert byte[] to String and String to byte[] in java – with examples. Conversion between byte array and string may be used in IO� This is the snippet Convert a Byte Array to a String on FreeVBCode. The FreeVBCode site provides free Visual Basic code, examples, snippets, and articles on a variety of other topics as well.
Convert.ToByte Method (System), Converts the value of the specified object to an 8-bit unsigned integer, using the example defines a string array and attempts to convert each string to a Byte. Convert String to Byte[] Array 1. Using String.getBytes() Method. To convert a string to a byte array, just use String.getBytes() method. This method uses the default character encoding to encode this string into a sequence of bytes. Here is an example: // create a string (demo purpose only) String str = "Hey, there!"; // convert string to byte
- bro after converting it is showing o/p as 1003 for 3eb. but thing is I want to take "0x3eb " from file and then i need to update this hexadecimal value in main code....so if i convert 3eb it is showing 1003...when there is a need of updation i need to update is as "0x3eb" ....how can i achieve this..? for example--> static final short PRODUCT_ID1 =0x3eb;
- @Ravikiran do you mean that you only need to get the hex value as a string, update the value and then use it again as a hex string itself at some other place? If that's the case, you can convert the hex string to int using
Integer.parseInt(hexString.substring(2), 16), then update the value and then convert it back to hex string using
Integer.toHexString(updatedInt)
- Bro i have some textfile in that user will store inputs in hexadecimal form(ie 0x3eb). I will fetch this data using fileinputstream and store the data as a string. Now actually i want to update this string(ie 0x3eb) into my code ie as Short(hexadecimal byte) for this variable --> static final short PRODUCT_ID1 = 0x3eb; so how can i achieve this..? the same data that i have received from user text file i have to update to the above variable PRODUCT_ID1 .
- @Ravikiran. Does this make it clear?
- No worries at all, @Ravikiran :) Please accept the answer if it helped you answer your query. Thanks!
- Link-only answers are not useful, as the URL may become stale over time, which would render your answer useless to future readers. You should add an explanation. | https://thetopsites.net/article/54958209.shtml | CC-MAIN-2021-25 | refinedweb | 1,256 | 63.59 |
Conditionals in Swift
In previous chapters your code led a relatively simple life: you declared some simple constants and variables and then assigned them values. But of course, an application really comes to life – and programming becomes a bit more challenging – when the application makes decisions based on the contents of its variables. For example, a game may let players leap a tall building if they have eaten a power-up. You use conditional statements to help applications make these kind of decisions.
if/else
if/else statements execute code based on a specific logical condition. You have a relatively simple either/or situation and depending on the result one branch of code or another (but not both) runs. Consider Knowhere, your small town from the previous chapter, and imagine that you need to buy stamps. Either Knowhere has a post office or it does not. If it has a post office, you will buy stamps there. If it does not have a post office, you will need to drive to the next town to buy stamps. Whether there is a post office is your logical condition. The different behaviors are “get stamps in town” and “get stamps out of town.”
Some situations are more complex than a binary yes/no. You will see a more flexible mechanism called switch in Chapter 5. But for now, let’s keep it simple.
Create a new OS X playground and name it Conditionals. Enter the code below, which shows the basic syntax for an if/else statement:
Listing 3.1 Big or small?
import Cocoa var population: Int = 5422 var message: String if population < 10000 { message = "\(population) is a small town!" } else { message = "\(population) is pretty big!" } print(message)
You first declare population as an instance of the Int type and then assign it a value of 5422. Next, you declare a variable called message that is of the String type. You leave this variable uninitialized at first, meaning that you do not assign it a value.
Next comes the conditional if/else statement. This is where message is assigned a value based on whether the “if” statement evaluates to true. (Notice that you use string interpolation to put the population into the message string.)
Figure 3.1 shows what your playground should look like. The console and the results sidebar show that message has been set to be equal to the string literal assigned when the conditional evaluates to true. How did this happen?
Figure 3.1 Conditionally describing a town’s population
The condition in the if/else statement tests whether your town’s population is less than 10,000 via the < comparison operator. If the condition evaluates to true, then message is set to be equal to the first string literal (“X is a small town!”). If the condition evaluates to false – if the population is 10,000 or greater – the message is set to be equal to the second string literal (“X is pretty big!”). In this case, the town’s population is less than 10,000, so message is set to “5422 is a small town!”
Table 3.1 lists Swift’s comparison operators.
Table 3.1 Comparison operators
You do not need to understand all of the operators’ descriptions right now. You will see many of them in action as you move through the book, and they will become clearer as you use them. Refer back to this table as a reference if you have questions.
Sometimes you only care about one aspect of the condition that is under evaluation. That is, you want to execute code if a certain condition is met and do nothing if it is not. Enter the code below. (Notice that new code, shown in bold, appears in two places.)
Listing 3.2 Is there a post office?
import Cocoa var population: Int = 5422 var message: String var hasPostOffice: Bool = true if population < 10000 { message = "\(population) is a small town!" } else { message = "\(population) is pretty big!" } print(message) if !hasPostOffice { print("Where do we buy stamps?") }
Here, you add a new variable called hasPostOffice. This variable has the type Bool, short for “Boolean.” Boolean types can take one of two values: true or false. In this case, the Boolean hasPostOffice variable keeps track of whether the town has a post office. You set it to true, meaning that it does.
The ! is called a logical operator. This operator is known as “logical not.” It tests whether hasPostOffice is false. You can think of ! as inverting a Boolean value: true becomes false, and false becomes true.
The code above first sets hasPostOffice to true, then asks whether it is false. If hasPostOffice is false, you do not know where to buy stamps, so you ask. If hasPostOffice is true, you know where to buy stamps and do not have to ask, so nothing happens.
Because the town does have a post office (because hasPostOffice was initialized to true), the condition !hasPostOffice is false. That is, it is not the case that hasPostOffice is false. Therefore, the print() function never gets called.
Table 3.2 lists Swift’s logical operators. | https://www.informit.com/articles/article.aspx?p=2473483 | CC-MAIN-2020-16 | refinedweb | 857 | 75 |
#include <FXButton.h>
Inheritance diagram for FX::FXButton:
When pressed, the button widget sends a SEL_COMMAND to its target. Passing the BUTTON_TOOLBAR style option gives buttons a "flat" look, and causes the edge of the button to be raised when the cursor moves over it. Passing BUTTON_DEFAULT allows the button to become the default button in a dialog, when the focus moves to it. The default widget in a dialog is the widget which will accept the RETURN key when it is pressed. The BUTTON_INITIAL flag makes the button the default widget when the focus moves to a widget which can not itself be a default widget. There should be only a single button in the dialog which is the initial default; typically this is the OK or CLOSE button. The option BUTTON_AUTOGRAY (BUTTON_AUTOHIDE) causes the button to be grayed out (hidden) if its handler does not respond to the SEL_UPDATE message. This is useful when messages are delegated, for example when using a multiple document interface, where the ultimaye destination of a message can be changed.
See also: | https://fox-toolkit.org/ref14/classFX_1_1FXButton.html | CC-MAIN-2021-25 | refinedweb | 179 | 57.5 |
Stepper motor controller with Arduino
A stepper motor is a kind of motor that, instead of rotating continuously in one direction, allows rotate in very small steps and stop and change direction very quickly, without problems of inertia. This makes these engines very suitable for mounting them in devices which perform movements that required high precision. In this article I will show how to build a simple controller to handle one of these motors through the Arduino board, along with a sample program written in CSharp that allows you to operate the motor from the computer where the plate is connected.
In this link you can download the source code and executable of the StepByStepMotorArduino project, written with Visual Studio 2013.
The model of Arduino board I used is the Arduino MEGA, which provides lots of inputs and outputs, although you can use any other board model connected to the computer with a USB connection.
There are many types of stepper motors. That which I will use in this article is a unipolar one, which is the easiest to control, since it is only necessary to activate in an order determined the different coils for the motor to move in the desired direction.
The motor I have used is recycled. I got it from an old hard drive, on a PC that had nothing less than the processor 8086, but it is similar to others that can be found in the market without problems.
The problem in this case is to find out which cables polarize the coils and what the common wire is. This is pretty simple; you only need a multimeter to measure the resistance between each pair of wires. Resistance will be the same between any two coil terminals, but it will be around one half between one of these terminals and the common one.
In this case, I have a common terminal and four cables that connect to each coil. To find out the order in which we polarize these wires, simply connect the common terminal to ground and keep trying to passing a current pulse through the other wires until you get four movements in the same direction. In the case of this motor, I used 12V, which is standard in this type of engine.
As the work of control and timing is all done through software, either on the Arduino board or in the PC program, all you need is a series of switches that allow you to control the 12 volts needed by the motor using the 5 volts of the Arduino outputs. This is the board that I have mounted:
Each switch is composed of a 2N2222 transistor, a BA 158 diode and a 1K resistor, as simple as that. This is the scheme of the board:
In the board you can see five transistors. In this example I only need 4, so I only show the scheme for this amount, the board also allows you to move motors with 5 terminals; all you have to do is to add another switch to the scheme.
The inputs S1 to S4 are connected to 4 output pins of the Arduino board. I used the pins 22, 24, 26 and 28, but you can use those you want instead.
The program for the Arduino board is as follows:
int pin1 = 22;
int pdir = LOW;
void setup() {
// Initialize pins
for (int ix = 22; ix <=28; ix+=2) {
pinMode(ix, OUTPUT);
digitalWrite(ix, LOW);
}
Serial.begin(9600);
}
void loop() {
if(Serial.available() > 1) {
int val = Serial.read();
int vt = Serial.read();
if (val & 1) { // move in one direction
for (int ip = 0; ip < vt; ip++) {
digitalWrite(pin1, HIGH);
delay(100);
digitalWrite(pin1, LOW);
pin1 += 2;
if (pin1 > 28) {
pin1 = 22;
}
}
}
if (val & 2) { // move in the other direction
for (int ip = 0; ip < vt; ip++) {
digitalWrite(pin1, HIGH);
delay(100);
digitalWrite(pin1, LOW);
pin1 -= 2;
if (pin1 < 22) {
pin1 = 28;
}
}
}
}
}
In the serial port we will write two bytes, the first is to indicate the direction of the movement, with bit 0 to 1 to move in one direction and bit 1 to 1 to move in the opposite. The two bits can be passed to 1 at time, in which case the engine will make some steps in one direction and then return to the starting point.
The second byte contains the number of steps we want to advance the motor. To take a step, we send a pulse of 100 milliseconds to the current pin and then move to the next pin, as simple as this.
The program which in turn controls the Arduino board from the computer is also very simple, it has a text box to indicate the port where is connected to the board, another to indicate the number of steps that we want to advance and a button for each direction:
As the number of steps can only be between 1 and 255, since it is a byte, I use a function to get the value or use a default value of 1 if an incorrect value is written:
private byte GetSteps()
{
int steps;
if (int.TryParse(txtSteps.Text, out steps))
{
if (steps > 255)
{
steps = 255;
}
if (steps < 1)
{
steps = 1;
}
}
else
{
steps = 1;
}
txtSteps.Text = steps.ToString();
return (byte)steps;
}
In the Click event handler of the buttons, the data is written to the serial port to pass them on to the Arduino board:
SerialPort p = new SerialPort(txtCOM.Text);
p.BaudRate = 9600;
p.Open();
byte[] val = new byte[2] { 1, GetSteps() };
p.Write(val, 0, 2);
p.Close();
Notice that the port BaudRate property is set to 9600 in both the program in the Arduino board and that in the PC program.
If you want to see the complete assembly where I used this circuit, you can follow this link where I explain how to build a robotic airsoft turret. | http://software-tecnico-libre.es/en/article-by-topic/all-sections/all-topics/arduino/stepper-motor-arduino | CC-MAIN-2018-17 | refinedweb | 982 | 62.21 |
Add instruction subset required for disassembling:
- All compact (16-bit) instruction formats;
- A couple of 32-bit instructions;
- Ability to print symbolic information for branch and call instructions.
clang-format is applied.
Add instruction subset required for disassembling:
clang-format is applied.
Hi Tatyana,
Thanks for working on this!
Can you add disassembly tests for the new 16-bit short instructions that you added here?
There are a few places in the ARCDisassembler code where you use 'auto' instead of standard scalar types (unsigned/uint64_t/etc.). Can you use these types instead of auto?
You create a new namespace (arc::inst_decoder) in the Disassembler. This looks different than the rest of the Disassemblers, can you use static routines instead?
Thanks again!
Hi Pete, thank you for review!
I'll do suggested corrections as soon as I'll have tests written.
Speaking of static and namespaces, I don't like this C-style code, but if you insist...)
Hello Tatyana,
I am mostly going with the style/conventions observed in the other backends/disassemblers. Can we use these current conventions for this patch? I feel like ARCDisassembler doesn't need many changes to enable the new instructions.
That will let us focus on the main new material here (new .td definitions/16-bit disassembly support). Perhaps suggestions on when do use C++ namespaces and MCDisassembler* instead of void* that would apply for all disassemblers could be proposed in a separate patch?
So, I view the goal of this patch is to provide enough 16-bit instruction format information so we can disassemble all 16-bit instructions with these formats.
Thanks again!
Hello Tatyana,
Thanks again for the work here.
I applied this patch, and am getting build errors (DecodeFromCyclicRange, ReadField used before declared, others).
Assuming these declarations get moved to the right places, my only remaining question is how others feel about the ReadField/ReadFields variadic templates.
Thanks for comment, Pete!
Fixed the order of declarations and using of functions.
Removed variadic template for better readability.
I'm getting a few errors when running these tests now?
misc.txt:25:10: error: expected string not found in input # CHECK: bl -2028 ^ <stdin>:9:2: note: scanning from here bl -1996 br.txt:6:10: error: expected string not found in input # CHECK: brlo %r10, %r4, -112 ^ <stdin>:3:2: note: scanning from here brlo %r10, %r4, -108 ^ compact.txt:81:10: error: expected string not found in input # CHECK: b_s 256 ^ <stdin>:28:2: note: scanning from here b_s 316 ^
I don't personally feel strongly about either of these.
I'm generally trying to match the style in other backends, which on observation I thinke would either just implement the logic inline, or create a new static routine...and omit auto.
I was wrong on other details myself originally.
But, I'm also a newbie at this reviewing bit (like I somehow mistakenly marked this as accepted?), so I'm happy to be told otherwise.
I agree that matching the style in other backends is very important, but it is hard to be consistent with last, because it was written long before even c++11 was released...
When I added instructions, I didn't care about properties like isBranch, isBarrier, etc., because didn't know its purpose. But it was found that debugger cannot step over a range of instructions correctly without this knowing, thus, I've added appropriate fields to instructions.
Also, replaced empty { } with ;
OK, I was aware of these...but didn't know that you'd need them for the debugger. There are a couple of others (mayLoad, mayStore, Defs STATUS32), but I was going to add them when the code generation uses them. Thanks!
Hi Pete,
Now I have commit after approval access and would land this revision. May I do it now? | https://reviews.llvm.org/D37983 | CC-MAIN-2018-26 | refinedweb | 637 | 66.03 |
New Features and Improvements:
- Let's welcome Wim Lemmens (didgiman). He's our new responsible for the ColdFusion integration. In this version we are introducing his new files with the following changes:
- The "Uploader", used for quick uploads, is now available natively for ColdFusion.
- Small bugs have been corrected in the File Browser connector.
- The samples now work as is, even if you don't install the editor in the "/FCKeditor" directory.
- And a big welcome also to "Andrew Liu", our responsible for the Python integration. This version is bringing native support for Python , including the File Browser connector and Quick Upload.
- The "IsDirty()" and "ResetIsDirty()" functions have been added to the JavaScript API to check if the editor content has been changed.*
- New language files:
- Hindi (by Utkarshraj Atmaram)
- Latvian (by Janis Klavin)
- For the interface, now we have complete RTL support also for the drop-down toolbar commands, color selectors and context menu.
- [SF BUG-1325113] [SF BUG-1277661] The new "Delete Table" command is available in the Context Menu when right-clicking inside a table.
- The "FCKConfig.DisableTableHandles" configuration option is now working on Firefox 1.5.
- The new "OnBlur" and "OnFocus" events have been added to the JavaScript API (IE only). See "_samples/html/sample09.html" *
- Attention: The "GetHTML" function has been deprecated. It now returns the same value as "GetXHTML". The same is valid for the "EnableXHTML" and "EnableSourceXHTML" that have no effects now. The editor now works with XHTML output only.
- Attention: A new "PreserveSessionOnFileBrowser".
- Attention: The "fun" smileys set has been removed from the package. If you are using it, you must manually copy it to newer installations and upgrades.
- Attention: The "mcpuk" file browser has been removed from the package. We have no ways to support it. There were also some licensing issues with it. Its web site can still be found at.
- It is now possible to set different CSS styles for the chars in the Special Chars dialog window by adding the "SpecialCharsOut" and "SpecialCharsOver" in the "fck_dialog.css" skin file.*
- [SF Patch-1268726] Added table "summary" support in the table dialog. Thanks to Sebastien-Mahe.
- [SF Patch-1284380] It is now possible to define the icon of a FCKToolbarPanelButton object without being tied to the skin path (just like FCKToolbarButton). Thanks to Ian Sullivan.
- [SF Patch-1338610] [SF Patch-1263009] New characters have been added to the "Special Characters" dialog window. Thanks to Deian.
- You can set the QueryString value "fckdebug=true" to activate "debug mode" in the editor (showing the debug window), overriding the configurations. The "AllowQueryStringDebug" configuration option is also available so you can disable this feature.
Fixed Bugs:
- [SF BUG-1363548] [SF BUG-1364425] [SF BUG-1335045] [SF BUG-1289661] [SF BUG-1225370] [SF BUG-1156291] [SF BUG-1165914] [SF BUG-1111877] [SF BUG-1092373] [SF BUG-1101596] [SF BUG-1246952] The URLs for links and images are now correctly preserved as entered, no matter if you are using relative or absolute paths.
- [SF BUG-1162809] [SF BUG-1205638] The "Image" and "Flash" dialog windows now loads the preview correctly if the "BaseHref" configuration option is set.
- [SF BUG-1329807] The alert boxes are now showing correctly when doing cut/copy/paste operations on Firefox installations when it is not possible to execute that operations due to security settings.
- A new "Panel" system (used in the drop-dowm toolbar commands, color selectors and context menu) has been developed. The following bugs have been fixed with it:
- [SF BUG-1186927] On IE, sometimes the context menu was being partially hidden.*
- On Firefox, the context menu was flashing in the wrong position before showing.
- On Firefox 1.5, the Color Selector was not working.
- On Firefox 1.5, the fonts in the panels were too big.
- [SF BUG-1076435] [SF BUG-1200631] On Firefox, sometimes the context menu was being shown in the wrong position.
- [SF BUG-1364094] Font families were not being rendered correctly on Firefox .
- [SF BUG-1315954] No error is thrown when pasting some case specific code from editor to editor.
- [SF BUG-1341553] A small fix for a security alert in the File Browser has been applied.
- [SF BUG-1370953] [SF BUG-1339898] [SF BUG-1323319] A message will be shown to the user (instead of a JS error) if a "popup blocker" blocks the "Browser Server" button. Thanks to Erwin Verdonk.
- [SF BUG-1370355] Anchor links that points to a single character anchor, like "#A", are now correctly detected in the Link dialog window. Thanks to Ricky Casey.
- [SF BUG-1368998] Custom error processing has been added to the file upload on the File Browser.
- [SF BUG-1367802] [SF BUG-1207740] A message is shown to the user if a dialog box is blocked by a popup blocker in Firefox.
- [SF BUG-1358891] [SF BUG-1340960] The editor not works locally (without a web server) on directories where the path contains spaces.
- [SF BUG-1357247] The editor now intercepts SHIFT + INS keystrokes when needed.
- [SF BUG-1328488] Attention: The Page Break command now produces different tags to avoid XHTML compatibility issues. Any Page Break previously applied to content produced with previous versions of FCKeditor will not me rendered now, even if they will still be working correctly.
- It is now possible to allow cut/copy/past operations on Firefox using the user.js file.
- [SF BUG-1336792] A fix has been applied to the XHTML processor to allow tag names with the "minus" char (-).
- [SF BUG-1339560] The editor now correctly removes the "selected" option for checkboxes and radio buttons.
- The Table dialog box now selects the table correctly when right-clicking on objects (like images) placed inside the table.
- Attention : A few changes have been made in the skins. If you have a custom skin, it is recommended you to make a diff of the fck_contextmenu.css file of the default skin with your implementation.
- Mouse select (marking things in blue, like selecting text) has been disabled on panels (drop-down menu commands, color selector and context menu) and toolbar, for both IE and Firefox.
- On Gecko, fake borders will not be applied to tables with the border attribute set to more than 0, but placed inside tables with border set to 0.
- [SF BUG-1360717] A wrapping issue in the "Silver" skin has been corrected. Thanks to Ricky Casey.
- [SF BUG-1251145] In IE, the focus is now maintained in the text when clicking in the empty area following it.
- [SF BUG-1181386] [SF BUG-1237791] The "Stylesheet Classes" field in the Link dialog window in now applied correctly on IE. Thanks to Andrew Crowe.
- The "Past from Word" dialog windows is now showing correctly on Firefox on some languages.
- [SF BUG-1315008] [SF BUG-1241992] IE, when selecting objects (like images) and hitting the "Backspace" button, the browser's "back" will not get executed anymore and the object will be correctly deleted.
- The "AutoDetectPasteFromWord" is now working correctly in IE. Thanks to Juan Ant. Gmez.
- A small enhancement has been made in the Word pasting detection. Thanks to Juan Ant. Gmez.
- [SF BUG-1090686] No more conflict with Firefox "Type-Ahead Find" feature.
- [SF BUG-942653] [SF BUG-1155856] The "width" and "height" of images sized using the inline handlers are now correctly loaded in the image dialog box.
- [SF BUG-1209093] When "Full Page Editing" is active, in the "Document Properties" dialog, the "Browse Server" button for the page background is now correctly hidden if "ImageBrowser" is set to "false" in the configurations file. Thanks to Richard.
- [SF BUG-1120266] [SF BUG-1186196] The editor now retains the focus when selecting commands in the toolbar.
- [SF BUG-1244480] The editor now will look first to linked fields "ids" and second to "names".
- [SF BUG-1252905] The "InsertHtml" function now preserves URLs as entered.
- [SF BUG-1266317] Toolbar commands are not anymore executed outside the editor.
- [SF BUG-1365664] The "wrap=virtual" attribute has been removed from the integration files for validation purposes. No big impact.
- [SF BUG-972193] Now just one click is needed to active the cursor inside the editor.
- The hidden fields used by the editor are now protected from changes using the "Web Developer Add-On > Forms > Display Forms Details" extension. Thanks to Jean-Marie Griess.
-.
- On some very rare cases, IE was loosing the "align" attribute for DIV tags. Fixed.
- [SF BUG-1388799] The code formatter was removing spaces on the beginning of lines inside PRE tags. Fixed.
- [SF BUG-1387135] No more "NaN" values in the image dialog, when changing the sizes in some situations.
- Corrected a small type in the table handler.
- You can now set the "z-index" for floating panels (toolbar dropdowns, color selectors, context menu) in Firefox, avoiding having them hidden under another objects. By default it is set to 10,000. Use the FloatingPanelsZIndex configuration option to change this value.
Special thanks to Alfonso Martinez, who have provided many patches and suggestions for the following features / fixes present in this version. I encourage all you to donate to Alfonso, as a way to say thanks for his nice open source approach. Thanks Alfonso!. Check out his contributions:
- [SF BUG-1352539] [SF BUG-1208348] With Firefox, no more "fake" selections are appearing when inserting images, tables, special chars or when using the "insertHtml" function.
- [SF Patch-1382588] The "FCKConfig.DisableImageHandles" configuration option is not working on Firefox 1.5.
- [SF Patch-1368586] Some fixes have been applied to the Flash dialog box and the Flash pre-processor.
- [SF Patch-1360253] [SF BUG-1378782] [SF BUG-1305899] [SF BUG-1344738] [SF BUG-1347808] On dialogs, some fields became impossible to select or change when using Firefox. It has been fixed.
- [SF Patch-1357445] Add support for DIV in the Format drop-down combo for Firefox.
- [SF BUG-1350465] [SF BUG-1376175] The "Cell Properties" dialog now works correctly when right-clicking in an object (image, for example) placed inside the cell itself.
- [SF Patch-1349166] On IE, there is now support for namespaces on tags names.
- [SF Patch-1350552] Fix the display issue when applying styles on tables.
- [SF Patch-1352320] Fixed a wrong usage of the "parentElement" property on Gecko.
- [SF Patch-1355007] The new "FCKDebug.OutputObject" function is available to dump all object information in the debug window.
- [SF Patch-1329500] It is now possible to delete table columns when clicking on a TH cell of the column.
- [SF Patch-1315351] It is now possible to pass the image width and height to the "SetUrl" function of the Flash dialog box.
- [SF Patch-1327384] TH tags are now correctly handled by the source code formatter and the "FillEmptyBlocks" configuration option.
- [SF Patch-1327406] Fake borders are now displayed for TH elements on tables with border set to 0. Also, on Firefox, it will now work even if the border attribute is not defined and the borders are not dotted.
- Hidden fields now get rendered on Firefox.
- The BasePath is now included in the debugger URL to avoid problems when calling it from plugins.
* This version has been partially sponsored by Alkacon Software. | https://ckeditor.com/cke4/release/22 | CC-MAIN-2019-13 | refinedweb | 1,853 | 65.22 |
i am writing a small tool for myself just to allow me to login to a phpbb forum from a C# application.
let's say i got:
forum address:
[textbox] - txtUser
[textbox] - txtPass
[checkbox] - chkRememberMe
[button] - btnLogin
how can I send those data above to the forum then get the "forum topic" listed into a [listbox] with webrequest and webrespond in c#?
please help me out because i have no clue at all.
Those samples and tutorial i found from internet only shows me how to POST, but did not show me how to retrive the respond.
Thanks.
Try this..
[CODE]
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
StreamReader reader = new StreamReader(response.GetResponseStream());
string str = reader.ReadLine();
while(str != null)
{
Console.WriteLine(str);
str = reader.ReadLine();
}
[/CODE]
Thanks. but it doesn't show me the page after i login.
it gave me the html without login. and not even the page that says error login.
it's like i've POST the data for login, but retrive a different non-login page.
how do i fix it?
check this link if this could help you ..
Cheers mate. :)
do u have msn? care to add me in? i am using hotmail with this username.
sorry, I dont have it.
using System;
using System.Text;
using System.Net;
using System.Collections.Specialized;
//...
string url = "";
NameValueCollection formData = new NameValueCollection();
formData["field-keywords"] = "Harry Potter";
// add more form field / values here
WebClient webClient = new WebClient();
byte[] responseBytes = webClient.UploadValues(url, "POST", formData);
string response = Encoding.UTF8.GetString(responseBytes);
Console.WriteLine(response);
cheers mate.
Next I'll have to figure how to read into the next page(into the forum topic to see the latest reply or threads), because i think the page needs to read cookie to varified I'm the logged in user and display the post/reply threads.
i am also trying to post values automatically to an asp page from asp.net application. I need to manipulate the search results.
How do i do this.
I tried ur sample code, but i don't know how to retrieve values from my search results.
NameValueCollection formData = new NameValueCollection();
formData["Last_Name"] = "SMITH";
formData["Social_Security"] = "244557719";
formData["Dob"] = "03/08/1982";
WebClient webClient = new WebClient();
byte[] responseBytes = webClient.UploadValues("" , "POST", formData); | http://www.nullskull.com/q/69239/help-me-how-to-use-c-to-post-data-and-retrive.aspx | CC-MAIN-2014-52 | refinedweb | 380 | 52.56 |
.Hand source files end in
.C
NOX::Abstract::Groupis in the file
NOX_Abstract_Group.H.
//@HEADER // ************************************************************************ // // NOX: An Object-Oriented Nonlinear Solver Package // Copyright (2002) Sandia Corporation // // LOCA: Library of Continuation Algorithms Package // Copyright (2005) Roger Pawlowski (rppawlo@sandia.gov) or // Eric Phipps (etphipp@sandia.gov), Sandia National Laboratories. // ************************************************************************ // CVS Information // $Source: /space/CVS/Trilinos/packages/nox/src/NOX_Description.H,v $ // $Author: rppawlo $ // $Date: 2006/09/12 17:38:11 $ // $Revision: 1.54.2.7 $ // ************************************************************************ //@HEADEROnce the file is committed to the CVS repository, the CVS Information lines will look something like the following:
// $Source: /space/CVS/Trilinos/packages/nox/src/NOX_Description.H,v $ // $Author: rppawlo $ // $Date: 2006/09/12 17:38:11 $ // $Revision: 1.54.2.7 $The header information is automatically filled in between the two
//@HEADERkeys when we run the
nox/maintenance/update_nox_headers.shcommand.
NOX_Abstract_Vector.Hheader file.
#ifndef NOX_ABSTRACT_VECTOR_H #define NOX_ABSTRACT_VECTOR_H ...body of include file goes here... #endif
iostream) directly in your files. Instead, include
NOX_Common.H. The goal is to better enable system portability since some machines have
math.hand others have
cmathand so on. Currently, we have the following system headers:
cstdlib
cstdio
cassert
iostream
iomanip
string
cmath
vector
map
deque
algorithm
sstream
fstream
#include "../test/utils/NOX_TestCompare.H"
#include "NOX_TestCompare.H"
*) or references (
&) should be declared using forward declarations, and not by including the header files.
NOXnamespace. No exceptions!
Abstract
Parameter
Solver
StatusTest
LineSearch
Direction
Epetra
NOX::Linesearch::MoreThuente).
_' or `
__').
{}function. The reason for this is that if the function does not inline correctly, it can actually lead to slower code rather than faster code.
void foo (); // No!! void foo(); // Better
int i,j; // No!! int i; // Yes int j;
*' and `
&' should be written together with the types of variables instead of with the names of variables in order to emphasize that they are part of the type definition. Instead of saying that
*iis an
int, say that
iis an
int*.
int *i; // No!! int* i; // Yes
if,
for,
while, etc.
if (a == b && c < d || e == f) // No! { /* Stuff */ } if (((a == b) && (c < d)) || (e == f)) // Yes { /* Stuff */ }
ifstatement should always follow on a separate line.
if ( /*Something*/ ) i++; // No!! if ( /*Something*/ ) // Yes! i++;
if ( /*Something*/ ) // Yes! { i++; j++; } if ( /*Something*/ ) { // Okay i++; j++; } if ( /*Something*/ ) // No! { i++; j++; }Adding the following line to your \c .emacs file will help:
(c-set-offset 'substatement-open 0)
=signs and all logical operators.
.' or `
->', nor between unary operators and operands.
constor
enum; never using
#define.
NOX::Utilsclass has utility functions related to printing. To use it, include
NOX_Utils.H.
NOX::Utils::outor
NOX::Utils::poutfunctions with the appropriate MsgType flag. The flags are:
NOX::Utils::Error
NOX::Utils::Warning
NOX::Utils::OuterIteration
NOX::Utils::InnerIteration
NOX::Utils::Parameters
NOX::Utils::Details
NOX::Utils::OuterIterationStatusTest
NOX::Utils::LinearSolverDetails
NOX::Utils::TestDetails
NOX::Utils::StepperIteration
NOX::Utils::StepperDetails
NOX::Utils::StepperParameters
NOX::Utils::Debug
NOX::Utils::error
NOX::Utils::perrand then throw an exception with the string
"NOX Error". For example if utils is a NOX::Utils object,
if (/* Error Condition */) { utils.err() << "ERROR: NOX::Epetra::Group::getNewton() - invalid Newton vector" << endl; throw "NOX Error"; }
/*) is followed by an exclamation mark to indicate that it's a Doxygen comment. The open and close comment markers are on lines by themselves, and the text of the comment is indented two spaces. Always include a
\briefdescription. The long description follows. Observe the use of the formatting tags
\cand
\e. The
\notetag is used for any special notes. The
\authortag is recommended.
/*! \brief Arbitrary combination of status tests. In the \c AND (see NOX::Status::Combo::ComboType) combination, the result is \c Unconverged (see NOX::Status::StatusType) if \e any of the tests is \c Unconverged. Otherwise, the result is equal to the result of the \e first test in the list that is either \c Converged or \c Failed. It is not recommended to mix \c Converged and \c Failed tests in an \c AND combination. In the \c OR combination, the result is \c Unconverged if \e all of the tests are \c Unconverged. Otherwise, it is the result of the \e first test in the list that is either \c Converged or \c Failed. Therefore, it will generally make sense to put the \c Failed -type tests at the end of the \c OR list. \note We always runs through all tests, even if we don't need to. This is useful so that the user knows which tests have and have not be satisfied. \author Tammy Kolda (SNL 8950) */ class Combo : public Test { ... }; // class Combo
/*! \brief %Newton-like solver with a line search. The following parameters are valid for this solver: - "Line Search" - Sublist of the line search parameters, passed to the NOX::Linesearch::Manager constructor. Defaults to an empty list. - "Linear Solver" - Sublist of the linear solver parameters, passed to Abstract::Group::computeNewton(). Furthermore, the "Tolerance" within this list may be modified by the resetForcingTerm(). Defaults to an empty list. - "Forcing Term Method" - Method to compute the forcing term, i.e., the tolerance for the linear solver. Defaults to "" (nothing). Choices are "Type 1" and "Type 2". - "Forcing Term Minimum Tolerance" - Minimum acceptable linear solver tolerance. Defaults to 1.0e-6. - "Forcing Term Maximum Tolerance" = Maximum acceptable linear solver tolerance. Default to 0.01. - "Forcing Term Alpha" - Used for the "Type 2" forcing term calculation. Defaults to 1.5. - "Forcing Term Gamma" - Used for the "Type 2" forcing term calculation. Defaults to 0.9. \author Tammy Kolda (SNL 8950), Roger Pawlowski (SNL 9233) */Here's a more complicated example to produce a two-tiered list.
/*! The parameters must specify the type of line search as well as all the corresponding parameters for that line search. <ul> <li> "Method" - Name of the line search. Valid choices are <ul> <li> "Full Step" (NOX::Linesearch::FullStep) <li> "Interval %Halving" (NOX::Linesearch::Halving) <li> "Polynomial" (NOX::Linesearch::Polynomial) <li> "More'-Thuente" (NOX::Linesearch::MoreThuente) </ul> </ul> */
\briefcomments. Those can be formatted in either of two ways, as follows.
/*! \brief The test can be either the AND of all the component tests, or the OR of all the component tests. */ enum ComboType {AND, OR}; //! Constructor Combo(ComboType t = OR);
Newtonto the NOX::Solver::Newton class.
//! Newton-like solver with a line search.To prevent that automatic link, insert a percent sign (
%) immediately before the word that is causing the link. For example,
//! %Newton-like solver with a line search. | http://trilinos.sandia.gov/packages/docs/r7.0/packages/nox/doc/html/coding.html | CC-MAIN-2014-15 | refinedweb | 1,070 | 52.26 |
In geometry the ratio of the circumference of a circle to its diameter is known as π. The value of π can be estimated from an infinite series of the form:
π / 4 = 1 - (1/3) + (1/5) - (1/7) + (1/9) - (1/11) + ... Math class has a random() method that can be used. This method returns random numbers between 0.0 (inclusive) to 1.0 (exclusive). There is an even better random number generator that is provided the Random class. We will first create a Random object called randomGen. This random number generator needs a seed to get started. We will read the time from the System clock and use that as our seed.
Random randomGen = new Random ( System.currentTimeMillis() );. There is a method nextDouble() that will return a double between 0.0 (inclusive) and 1.0 (exclusive). But we need random numbers between -1.0 and +1.0. The way we achieve that is:
double xPos = (randomGen.nextDouble()) * 2 - 1.0;
double yPos = (randomGen.nextDouble()) * 2 - 1.0;
To determine if a point is inside the circle its distance from the center of the circle must be less than the radius of the circle. The distance of a point with coordinates ( xPos, yPos ) from the center is Math.sqrt ( xPos * xPos + yPos * yPos ). The radius of the circle is 1 unit.
The class that you will be writing will be called CalculatePI. It will have the following structure:
import java.util.*;
public class CalculatePI
{
public static boolean isInside ( double xPos, double yPos )
{ ... }
public static double computePI ( int numThrows )
{ ... }
public static void main ( String[] args )
{ ... }
}
In your method, and 100,000. You will call the method computePI() with these numbers as input parameters. Your output will be of the following form:
Computation of PI using Random Numbers
Number of throws = 100, Computed PI = ..., Difference = ...
Number of throws = 1000, Computed PI = ..., Difference = ...
Number of throws = 10000, Computed PI = ..., Difference = ...
Number of throws = 100000, Computed PI = ..., Difference = ...
* Difference = Computed PI - Math.PI
In the method computePI() you will simulate the throw of a dart by generating random numbers for the x and y coordinates. You will call the method isInside() to determine if the point is inside the circle or not. This you will do as many times as specified by the number of throws. You will keep a count of the number of times a dart landed inside the circle. That figure divided by the total number of throws is the ratio π / 4. The method computePI() will return the computed value of PI.
Here is what I have done so far but the value of PI is coming out 0.0 and the difference is coming out the negative value of PI (-3.141592....) I having been debugging for the last day and a half. I've kinda have lost the ability to be objective about this and was wondering what anyone though about what I might be doing wrong?
import java.util.*; public class CalculatePI2 { public static boolean isInside (double xPos, double yPos) { boolean result; double distance = Math.sqrt((xPos * xPos) + (yPos * yPos)); if (distance < 1) result = false; return(distance < 1); } public static double computePI (int numThrows) { Random randomGen = new Random (System.currentTimeMillis()); int hits = 0; double PI = 0; for (int i = 0; i <= numThrows; i++) { double xPos = (randomGen.nextDouble()) * 2 - 1.0; double yPos = (randomGen.nextDouble()) * 2 - 1.0; if (isInside(xPos, yPos)) { hits++; } PI = (4 * (hits/numThrows)); } return PI; } public static void main (String[] args) { Scanner reader = new Scanner (System.in); System.out.println("This program approximates PI using the Monte Carlo method."); System.out.println("It simulates throwing darts at a dartboard."); System.out.print("Please enter number of throws: "); int numThrows = reader.nextInt(); double PI = computePI(numThrows); double Difference = PI - Math.PI; System.out.println ("Number of throws = " + numThrows + ", Computed PI = " + PI + ", Difference = " + Difference ); } }
I actually made the program to where the user could enter the number of throws and I really only needed to have the program approximate Pi for the values of 100, 1000, 10000, and 100000. Not sure if he is going to like that.
This post has been edited by KoreyAusTex: 10 July 2007 - 05:20 PM | http://www.dreamincode.net/forums/topic/30265-calculate-pi-using-random-numbers/ | CC-MAIN-2016-26 | refinedweb | 697 | 68.57 |
OLD DESCRIPTION:
I have a .NET app I wrote myself that I'm trying to get set up to run as a scheduled task on Windows Server 2008 R2. When I run the app myself from the command line, it works just fine. However, when I set up the task, it completes the task within 1 second of when it starts and says it completed successfully, though of course the app did NOT run. Because of this, I don't get any errors logged either by the scheduler or by the app. If I take out the argument ("auto") then it "runs" the task, but never opens the console to display the menu.
This is what I've tried so far:
I'm still pretty green with server administration, so it's possible I've overlooked something, but I don't know what that is if I did. I found one question on here that seemed like it was related (GUI doesn't load for a scheduled task) but it's a little different because at least that one actually ran part of the task.
UPDATE:
After some more digging, I discovered that the application actually has been running, but due to something I guess I didn't know about the default settings namespace in .NET, the location in the config file where the app stores/reads the web service credentials varies based on whether you're sitting there running the app or the app is being run through TS. Still trying to figure out a way around that...
Regardless, this is where I'm at now: the app spits out SSL/TLS errors whenever the task scheduler attempts to run the app. I have a certificate stored in a subdirectory of the app's home directory (E:\Appname), and as was the case with the credentials, running the app manually causes no problems with the connection. I've ensured that the cert and its folder have the task owner listed with full control.
Am I missing anything else here?
If you're trying to debug a failed task that's running in the SYSTEM security context (which is the default for a scheduled task) you should grab a copy of psexec.exe and run psexec -s cmd.exe. This will get you an interactive cmd session as SYSTEM. You can verify this by running whoami from this new command prompt.
psexec -s cmd.exe
whoami
Try running your application from here. You'll be able to see any output that it might be writing to the console. Since this is a custom app, I think you'll be hard pressed to find a definitive answer, since we don't know what your code is actually doing. Getting an interactive session as SYSTEM will at least show you if it's a permission problem, or a problem with the settings that you're using in the Task Scheduler.
As it turns out, the problem was that the config file was stored in the AppData\Local directory for the user which was running the app when the config settings were changed (a capability I'd put into the program when running it without the "auto" arg). Since I'd been logging myself on to do the configuration and not the user to which the task was assigned, there was no user.config file for the task user, hence the lack of useable config data. Aligning the task user and the presence of the user.config file fixed it.
By posting your answer, you agree to the privacy policy and terms of service.
asked
2 years ago
viewed
2923 times
active | http://serverfault.com/questions/473877/2008-r2-task-scheduler-wont-run-task-but-says-completed-successfully-w-o-error | CC-MAIN-2015-18 | refinedweb | 609 | 69.52 |
INILAH.COM, Pariaman - General Hospital (Dr) M Djamil Padang, the patient received an unexpected return (avian influenza) Bird flu. Patient initials BR (73) is a resident of Padang Pariaman district. Patients undergoing intensive treatment in the isolation room bird flu Dr M Djamil Padang.
Information compiled, BR was hospitalized on Friday (8/2) at 12.00 pm. Patients are referred from hospitals Pariaman, because infected have symptoms of H5N1 or bird flu virus. A few days before patient contact with his neighbor's birds-poultry that died suddenly.
Meanwhile Giving Information Officer Dr M Djamil Padang, said Gustafianof BR, newly suspected teridap H5NI virus, because the family name but a few ducks and BR neighbor died suddenly at that direct contact with the birds.
"A few days ago a dead duck and the corresponding direct high fever accompanied by cough and shortness of breath. Seeing this, the family was immediately rushed to a hospital in Padang Padang Pariaman. Next is the general hospital, referring to Dr BR M Djamil Padang because of suspected H5NI virus infected patients, "said Gustavianof explained.
The doctor who saw the symptoms of the disease in question BR agreed saying suspect bird flu. Gustavianof said, BR is not positive, but traits that led to it is quite clear and the hospital will do blood sampling, to ensure the 73-year-old man infected with the virus or not.
"For that, until now its status is still suspected bird flu. BR has been getting intensive treatment in the isolation room bird flu. Later if the blood sample is positive, the doctor Dr M Djamil will take further action, "said Gustavianof.
The care given to BR, the standards set are appropriate for the patient supeck bird flu. "We are still isolated, pending and see the condition of the patient and the doctor will soon take a blood sample," said Gustafianof stressed.
A few days ago in Sanur District Nan Sabaris Nagari, Padang Pariaman District, a total of 1207 ducks belonging to four families (KK) died suddenly. Duck population in the area is 3590 tails and owners there are six families. Extermination perpetrated against the surviving ducks newly done to the families that Doni, a total of 299 birds. Meanwhile, three more are underway socialization KK to be the destruction of the ducks are still alive. It also carried out spraying.
In Sumatra, there are two locations of ducks were found dead suddenly, ie Korong Payakumbuh and Kampung Lintang, Sanur Nagari, District Nan Sabaris, Padang Pariaman. Head of Animal Health Animal Husbandry Department Sumatra M Kamil said, from observation teams on the ground, the death of thousands of ducks will lead to a new type of bird flu virus that is coded H5NI 2.3.2.
Pascamatinya thousands of ducks in Pariaman due to the H5N1 virus, West Sumatra Husbandry Department to block the passage of ducks from the scene. The anticipation is done, in order to prevent and inhibit the spread of the disease similar to other areas in West Sumatra. | http://pandemicinformationnews.blogspot.com/2013/02/indonesia-pariaman-supect-bird-flu-h5n1.html | CC-MAIN-2013-48 | refinedweb | 503 | 60.65 |
On 30 September 2011 03:02, Claude Heiland-Allen <claude at goto10.org> wrote: > On 30/09/11 02:45, DukeDave wrote: >> >> 1. Is there some reason (other than 'safety') that "cabal install" cleans >> everything up? > > As far as I've experienced and understand it, it doesn't - it's more that > GHC can detect when Haskell modules don't need recompiling while the same is > not true for C or C++ sources. For example, I change one module and see GHC > report only that module and its dependents being recompiled, while the other > compiled modules are reused from previous 'cabal install' runs. The > "C-sources:" are recompiled every time even if unchanged, which I too find > it somewhat annoying even with my small projects. Excellent, that is consistent with what I'm seeing, and I'm glad I'm not the only one who finds it annoying. I have no familiarity with how cabal and GHC handle C-sources, but I presume that the job of building them is handed off to a local C/C++ compiler (I presume g++ in my case). Given this I can only assume that cabal is doing something: 1. Deleting the object files before calling the C compiler (and so everything must be rebuilt)? 2. Touching the source C files in some way, before calling the C compiler? 3. Passing some argument to the compiler which is telling it to rebuild everything? 4. Something else? > >> 2. Why does setting cleanHook to return () not have any effect? > > I think becausae the clean hook is probably not called by 'cabal install', > but by 'cabal clean'. Ah yes, that does make sense, my bad. > > > Claude > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > > | http://www.haskell.org/pipermail/haskell-cafe/2011-October/095844.html | CC-MAIN-2014-15 | refinedweb | 289 | 71.24 |
How constant are constants?
I got a rude surprise today when I discovered that constants in C# (.NET programs in general) are a little more constant than I thought. I know that sounds strange, but let me explain.
Here’s a C# class that defines two constants and a method that outputs those constants.
using System;
namespace OtherTest
{
public static class TestObj
{
public const int ConstInt = 42;
public const string ConstString = "First test";
public static void ShowConstants()
{
Console.WriteLine("TestObj.ShowConstants");
Console.WriteLine("ConstInt = {0}", ConstInt);
Console.WriteLine("ConstString = {0}", ConstString);
}
}
}
We can compile that into OtherTest.dll with a simple command line:
csc /t:library OtherTest.cs
If you’re having trouble running the compiler, go to your All Programs menu and select Visual Studio|Visual Studio Tools|Visual Studio Command Prompt. Then you should be able to run the command line compiler.
Okay so far. Here’s some code that references the constants in that class:
using System;
using OtherTest;
namespace testo
{
class Program
{
static void Main()
{
Console.WriteLine("Inside Program.Main");
Console.WriteLine("ConstInt = {0}", TestObj.ConstInt);
Console.WriteLine("ConstString = {0}", TestObj.ConstString);
TestObj.ShowConstants();
}
}
}
Compile that program and link with the OtherTest assembly with this command line:
csc /t:exe /reference:OtherTest.dll ConstTest.cs
That creates ConstTest.exe which, when run, provides this output:
C:\DevWork\ConstTest>ConstTest
Inside Program.Main
ConstInt = 42
ConstString = Firsttest
TestObj.ShowConstants
ConstInt = 42
ConstString = Firsttest
That’s exactly what we expect. Now, change the values of the constants in OtherTest.cs:
public const int ConstInt = 99604;
public const string ConstString = "Second test";
And re-compile the assembly:
And run the ConstTest.exe program again:
C:\DevWork\ConstTest>ConstTest.exe
Inside Program.Main
ConstInt = 42
ConstString = Firsttest
TestObj.ShowConstants
ConstInt = 99604
ConstString = Second test
If you’re as surprised as I am, raise your hand. The constant that the main program sees is different from the constant that is defined in the external assembly.
I understand what’s going on, but I’m pretty surprised by it. The compiler, when it compiled ConstTest.cs, reached into OtherTest.cs, got the values of the constants, and included them as constants in the compiled code. So when the code references TestObj.ConstInt at runtime, it’s really just getting the constant 42. That all makes sense. Except, of course, when you change the constant in the OtherTest.cs assembly, re-compile, and the values don’t match.
TestObj.ConstInt
What surprises me is that the C# compiler (CSC) hoists the constants from the assembly. After all, the compiler is just creating MSIL (intermediate code) that is later compiled to native code by the JIT compiler at runtime. I would have expected CSC to write MSIL that says, in effect, “Use the constant value defined in that other assembly here,” and let the JIT compiler figure out how to optimize everything. That would eliminate the confusion you see above, and I don’t understand why it’s not done that way.
The thing to remember here is that constants are evaluated at compile time. I don’t know all the ramifications of this, but one thing is for certain: if your program references constants that are defined in an external assembly, then your program is at risk any time that assembly is changed. Unless you make the rule that a publicly visible constant never changes value throughout the life of the source code, you’re running the risk of encountering a very difficult to find bug that will “go away” when you re-compile the project. And there’s nothing worse than bugs that just “go away.”
Jim: Section 10.5.2.2 of The C# Programming Language has the skinny on this. “Constants and readonly fields have different binary versioning semantics. When an expression references a constant, the value of the constant is obtained at compile time [as you discovered], but when an expression references a readonly field, the value of the field in not obtained until runtime.” In the Fourth Edition, Jon Skeet points out you should use constant for true constants (like the number of milliseconds in a second or the minimum value of an int) and readonly for everything that isn’t a constant in the physical sense (and so that might be changed in the manner you show in the future).
Cheers, Julian
Julian: Thanks for the reply. I asked the question on StackOverflow and got pretty much the same response from Eric Lippert: “Constants are supposed to be constant. For all time. That’s a much stricter interpretation of “constant” than I’m used to, and in the future I’ll be using readonly fields for things that could change.
Jim, nice job dragging this into the light and pointing it out. It’s very counter-intuitive, and it changes my concept of constants as immutable bit arrays into one more deeply rooted in the semantics of the grammar. In the new world they are apparently so universally absolute that the compiler is free to promote them to literals and ignore any future definitions from the declaring libraries. Good to know, thank you Julian.
Matthew: You’re right, it completely changes one’s view on the meaning of “constant.” It’s a complete departure from how programmers have treated constants–certainly how I’ve learned to treat constants. I’ll be writing more about this. | http://blog.mischel.com/2010/12/10/how-constant-is-a-constant/ | CC-MAIN-2015-11 | refinedweb | 898 | 57.57 |
Only redraw when necessary
RESOLVED FIXED in fennec1.0a1
Status
P1
normal
People
(Reporter: pavlov, Assigned: roc)
Tracking
(Depends on 1 bug, {dev-doc-complete})
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(3 attachments, 8 obsolete attachments)
Right now we have some code that redraws the whole visible area every 15 seconds.
Flags: blocking-fennec1.0+
er, every 1 second
er, every 1 second
Priority: -- → P1
We have some code in fennec that redraws the whole <canvas> from the hidden <browser> element every 100ms based on the current scroll position and zoom factor. We need a mechanism to repaint the right parts of the canvas only as necessary. We should hook this up by having the view manager(s) for the browser element's presshell(s) send notifications outward covering the entire browser element (things "offscreen" as well as on). Then clip those based on the visible area based on the location and zoom factor. Perhaps adding a browser onpaint() method would be nice.
I'm thinking of dispatching a MozDOMAfterPaint DOM event that is fired asynchronously at 'window'. It would have an attribute with a ClientRectList describing the area that has changed. Always firing this event would incur some overhead but I'm not sure how much. If the overhead is significant I'd prefer to optimize the event dispatching path (e.g. by remembering whether anyone has ever registered an event handler for this kind of event). This is something that would be generally useful for the Web IMHO. Is asynchronous dispatch of this event going to be adequate for Fennec's needs? It means there could be some delay between the paint and the event being processed. But firing it synchronously could cause all kinds of problems.
roc: I think async should be OK, as long as they aren't too far apart. queuing them up should help, and the interesting test cases will be heavy dhtml pages to see if they repaint fast enough, but I think they should be fine. would it be possible to query the current damage region to repaint as-needed if we had some spots that needed to update very frequently?
(In reply to comment #5) > would it be possible to query the current damage region to repaint as-needed if > we had some spots that needed to update very frequently? We could add a separate API for that, but it would be hard to use correctly. I think this DOM event would fire at least as fast as a timeout, so I'm not sure how you'd use the damage region API..
(In reply to comment #6) >. This actually isn't going to work. Subframes that are transformed by CSS transforms or SVG foreignobject make it impossibly difficult for a chrome event handler to figure out what a repaint in a subframe actually means for the toplevel document. So I'll have to bubble those up to the top level. Now, the tricky part is, I don't want to fire a DOM event at a toplevel content window when one of its subframes invalidates, because that's an information leak. So my current plan is to track "invalidation in subframes" separately from "invalidation in this document". There will be two "invalid regions" tracked in each presContext. When a region goes from empty to nonempty we'll spawn an XPCOM event to asynchronously dispatch a DOM event. When we dispatch the DOM event, if the "invalidation in this document" region is empty and the document is not trusted, we will target the event at the chrome event handler instead of the window. When we read the event's dirty rect list, if the caller is trusted then we return the union of the two regions, otherwise we just return the "invalidation in this document" region. >. What I have right now is that clipping and scrolled-out-of-view limit what's reported, but z-ordering both inside and outside the content window does not.
OK, here's where I'm at. This almost works. The mochitest currently has one failure; when we're privileged we should be able to see invalidation happening in an iframe, but we can't. Need to debug that. Also, this needs performance testing. If it causes significant overhead during rapid painting, then we need to find a way to fix that. But the API is basically here. You should be able to build on this.
Assignee: pavlov → roc
Status: NEW → ASSIGNED
Comment on attachment 337827 [details] [diff] [review] fix, WIP > void >+nsPresContext::FireDOMPaintEvent() >+{ >+ nsCOMPtr<nsIDocShell> docShell(do_QueryReferent(mContainer)); >+ if (!docShell) >+ return; >+ nsCOMPtr<nsPIDOMWindow> ourWindow = do_GetInterface(docShell); >+ nsISupports* eventTarget = ourWindow; >+ if (mSameDocDirtyRegion.IsEmpty() && !IsChrome()) { >+ // Don't tell the window about this event, it should not know that >+ // something happened in a subdocument >+ eventTarget = ourWindow->GetChromeEventHandler(); >+ } >+ >+ nsNotifyPaintEvent event(PR_TRUE, NS_AFTERPAINT, mSameDocDirtyRegion, >+ mCrossDocDirtyRegion); >+ // Empty our regions now in case dispatching the event causes more damage >+ // (hopefully it won't, or we're likely to get an infinite loop! At least >+ // it won't be blocking app execution though). >+ mSameDocDirtyRegion.SetEmpty(); >+ mCrossDocDirtyRegion.SetEmpty(); >+ event.target = do_QueryInterface(ourWindow); >+ nsEventDispatcher::Dispatch(ourWindow, this, &event); >+} I don't understand this. What is eventTarget? Should you dispatch the event to eventTarget, not to ourWindow? Is it always the top-level content window which dispatches these events? Or could/do iframes dispatch their own events (in which case event propagates from iframe to chromehandler, not to top window)?
hey roc, is this patch missing some previous patch with things like NotifiyInvalidates? I fail to apply in nsPresContext.cpp/h, nsScrollFrame.cpp
That was just an hg misfire. Here's the real patch.
Attachment #337827 - Attachment is obsolete: true
(In reply to comment #9) > I don't understand this. What is eventTarget? > Should you dispatch the event to eventTarget, not to ourWindow? Yes, that's a mistake, I want to dispatch to eventTarget. > Is it always the top-level content window which dispatches these events? Or > could/do iframes dispatch their own > events (in which case event propagates from iframe to chromehandler, not to top > window)? iframes can dispatch events, and what you describe is the desired behaviour.
Argh, missed a file
Attachment #337960 - Attachment is obsolete: true
(In reply to comment #12) > iframes can dispatch events, and what you describe is the desired behaviour. Ok, and that is what happens anyway (using normal event dispatch). Is the special dispatch-to-chrome-handler needed? Or are there cases when top-level content window is notified that iframe is painted, but iframe doesn't get that info. Or perhaps the dirty region handling needs that parent dispatches such event?
Fixed a leak. Added some more tests, which all pass. Basically good to go. Only thing not addressed yet is performance testing. There are various ways to disable this when it's not needed if perf is an issue.
Attachment #337990 - Flags: superreview?(mats.palmgren)
Attachment #337990 - Flags: review?(Olli.Pettay)
(In reply to comment #14) The important thing here is that we sometimes want to only notify the chrome event handler, not the window itself. In particular, invalidation occurring in a subdocument should not be reported to the content window, only to the chrome event handler.
Actually, would it make sense to just dispatch a simple "paint" or "MozPaint" or "MozAfterPaint" event and at that point the information about what was painted could be available in WindowUtils of the event target's window? That way there wasn't reason to create yet another event type, and nsDOMNotifyPaintEvent::GetClientRects and nsDOMNotifyPaintEvent::GetBoundingClientRect work anyway like static methods, depending just on prescontext. Or do we really want to expose the event *and* the paint information to web content?
{ &nsGkAtoms::onvolumechange, { NS_VOLUMECHANGE, ventNameType_HTML }}, #endif //MOZ_MEDIA +{ &nsGkAtoms::onMozDOMAfterPaint,{ NS_AFTERPAINT, EventNameType_HTML }} This will fail if MOZ_MEDIA not defined. -{ &nsGkAtoms::onvolumechange, { NS_VOLUMECHANGE, ventNameType_HTML }}, +{ &nsGkAtoms::onvolumechange, { NS_VOLUMECHANGE, ventNameType_HTML }} #endif //MOZ_MEDIA + ,{ &nsGkAtoms::onMozDOMAfterPaint,{ NS_AFTERPAINT, EventNameType_HTML }}
@@ -3610,17 +3610,17 @@ nsIFrame::InvalidateWithFlags(const nsRe if (nsSVGIntegrationUtils::UsingEffectsForFrame(this)) { nsRect r = nsSVGIntegrationUtils::GetInvalidAreaForChangedSource(this, aDamageRect + nsPoint(aX, aY)); - GetParent()->InvalidateInternal(r, mRect.x, mRect.y, this, aImmediate); + GetParent()->InvalidateInternal(r, mRect.x, mRect.y, this, aFlags);
(In reply to comment #18) > Actually, would it make sense to just dispatch a simple "paint" or "MozPaint" > or > "MozAfterPaint" event and at that point the information about what was painted > could be available in WindowUtils of the event target's window? Then when do you clear those stored dirty areas? You really want to clear them before firing the event, in case the event does more invalidation (you don't want the new invalidation to get mixed up with the area already reported, especially if there are multiple listeners). So you'd need two sets of dirty areas, the "area reporting now" and "next dirty area". Right now we have that, the "area reporting now" is just stored in the event. Having the data which is really part of the event be stored globally is ugly IMHO. > That way there wasn't reason to create yet another event type, and > nsDOMNotifyPaintEvent::GetClientRects and > nsDOMNotifyPaintEvent::GetBoundingClientRect work anyway like static methods, > depending just on prescontext. It really shouldn't be this hard to create a new event type. I would hate to work around that problem by exposing an ugly API to content/chrome. > Or do we really want to expose the event *and* the paint information to web > content? I do. People have asked for it. It's useful for debugging Web apps and logging timing data.
ok, this hooks things up to redraw with fennec. only 4 real issues left: need to fix up rounding a bit more, and sometimes we're unable to get the canvas's width and height which causes us to draw funny sometimes. we need to clear the canvas when a new page starts loading and we need to cancel the event listener when we switch to another tab
Attachment #338032 - Attachment is obsolete: true
(In reply to comment #21) > I do. People have asked for it. It's useful for debugging Web apps and logging > timing data. Ok, that was the most important part. I'll review the patch asap.
> Created an attachment (id=338182) [details] Looks very good, but some strange things happen after loading "linux.org.ru"... all interactions and scrolling works very randomly... May be it is because there are dynamic text frame on the left side?
Comment on attachment 337990 [details] [diff] [review] fix v4 >+[scriptable, uuid(dec5582e-5cea-412f-bf98-6b27480fb46a)] >+interface nsIDOMNotifyPaintEvent : nsIDOMEvent >+{ >+ /** >+ * Stores a list of rectangles which are affected. >+ */ >+ readonly attribute nsIDOMClientRectList clientRects; >+ /** >+ * Stores the bounding box of the rectangles which are affected. >+ */ >+ readonly attribute nsIDOMClientRect boundingClientRect;. Then when creating the event, use normal document.createEvent();event.initXXX();target.dispatchEvent(event) I'd like to avoid to add new things to nsGUIEvent, if possible. How often do we dispatch these events?
(In reply to comment #25) >. clientRects and boundingClientRect have to return different values depending on whether the caller is trusted or not. How would that fit in here? > How often do we dispatch these events? Every time we paint. So I'd like to make it as cheap as possible, which probably means not constructing a heap-allocated list of heap-allocated ClientRect objects unless we really have to...
Comment on attachment 337990 [details] [diff] [review] fix v4 >diff --git a/content/base/src/nsContentUtils.cpp b/content/base/src/nsContentUtils.cpp >--- a/content/base/src/nsContentUtils.cpp >+++ b/content/base/src/nsContentUtils.cpp >@@ -476,6 +476,7 @@ nsContentUtils::InitializeEventTable() { > { &nsGkAtoms::ondurationchange, { NS_DURATIONCHANGE, EventNameType_HTML }}, > { &nsGkAtoms::onvolumechange, { NS_VOLUMECHANGE, EventNameType_HTML }}, > #endif //MOZ_MEDIA >+ { &nsGkAtoms::onMozDOMAfterPaint, { NS_AFTERPAINT, EventNameType_HTML }} Maybe EventNameType_None. And I'd call the event "MozAfterPaint". >+NS_IMETHODIMP >+nsDOMNotifyPaintEvent::GetClientRects(nsIDOMClientRectList** aResult) ... >+ *aResult = rectList.forget().get(); Why not .swap() ? >+void >+nsPresContext::NotifyInvalidation(const nsRect& aRect, PRBool aIsCrossDoc) >+{ >+ if (aRect.IsEmpty()) >+ return; >+ >+ if (mSameDocDirtyRegion.IsEmpty() && mCrossDocDirtyRegion.IsEmpty()) { >+ // No event is pending. Dispatch one now. >+ nsCOMPtr<nsIRunnable> ev = >+ new nsRunnableMethod<nsPresContext>(this, >+ &nsPresContext::FireDOMPaintEvent); >+ NS_DispatchToCurrentThread(ev);. Also I think we don't want to fire events too often - not sure what is the best way to limit the frequency. attachment 338182 [details] [diff] [review] does quite a bit work for each event. >?
roc's patch merged to tip
Attachment #337964 - Attachment is obsolete: true
Attachment #337990 - Attachment is obsolete: true
Attachment #337990 - Flags: superreview?(mats.palmgren)
Attachment #337990 - Flags: review?(Olli.Pettay)
(In reply to comment #27) > Maybe EventNameType_None. > And I'd call the event "MozAfterPaint". OK > >+NS_IMETHODIMP > >+nsDOMNotifyPaintEvent::GetClientRects(nsIDOMClientRectList** aResult) > ... > >+ *aResult = rectList.forget().get(); > Why not .swap() ? OK >. Yes. But how would this flag work if the listener is a chrome event handler that's listening for events on all subframes? > Also I think we don't want to fire events too often - not sure what is the best > way to limit the frequency. > attachment 338182 [details] [diff] [review] does quite a bit work for each event. That is true. However, you could use JS timeouts to coalesce too-frequent events (pushing the bounding-rect and/or client-rects onto an array), and implementing coalescing there gives authors full control over the policy, which is nice. > >? Yes. The previous lack of return was a (mostly harmless) bug.
(In reply to comment #29) > > >+NS_IMETHODIMP > > >+nsDOMNotifyPaintEvent::GetClientRects(nsIDOMClientRectList** aResult) > > ... > > >+ *aResult = rectList.forget().get(); > > Why not .swap() ? > > OK Actually that doesn't work because rectList is an nsRefPtr<nsClientRectList>, not an nsIDOMClientRectList.
This testcase is my attempt to get a worst-case scenario for the overhead of this feature. We just repaint a small area about as fast as we possibly can (up to the limits of JS timers). I ran this in my Linux-opt build and sysprof says we're spending 0.6% of the time under nsPresContext::FireDOMPaintEvent. I'm not sure whether that's enough to worry about, but I lean towards not worrying about it, assuming this really is the worst case.
Updated to trunk again, updated to comments.
Attachment #338492 - Attachment is obsolete: true
Attachment #338572 - Flags: superreview?(mats.palmgren)
Attachment #338572 - Flags: review?(Olli.Pettay)
Stuart, note that you'll have to change your patch to use 'MozAfterScroll' as the event name.
Comment on attachment 338572 [details] [diff] [review] fix v5 >+ case NS_NOTIFYPAINT_EVENT: >+ { >+ newEvent = >+ new nsNotifyPaintEvent(PR_FALSE, msg, >+ ((nsNotifyPaintEvent*) mEvent)->sameDocRegion, >+ ((nsNotifyPaintEvent*) mEvent)->crossDocRegion); static_cast<> >diff --git a/layout/base/tests/Makefile.in.orig b/layout/base/tests/Makefile.in.orig This file shouldn't be here, I think. (About the chromehandler thing, if the optimization is needed; if chromehandler isn't a DOM node or window, QIing to nsPIDOMEventTarget and getting ELM from it and asking HasListenersFor(NS_LITERAL_STRING("MozAfterPaing")) should work.)
Attachment #338572 - Flags: review?(Olli.Pettay) → review+
this still uses "MozDOMAfterPaint", but should be otherwise correct and ready to land once roc's patch lands
Attachment #338182 - Attachment is obsolete: true
Attachment #338740 - Flags: review?(gavin.sharp)
Comment on attachment 338740 [details] [diff] [review] v1.0 for fennec to use roc's patch >diff --git a/chrome/content/deckbrowser.xml b/chrome/content/deckbrowser.xml > <method name="updateCanvasState"> >- if (aNewDoc) >+ if (aNewDoc) { Just get rid of this parameter, and remove all the callers that don't pass true, since the aNewDoc==false case is just a no-op now. >+ // FIXME: canvas needs to know it's actual width/height >+ var rect = this._canvas.getBoundingClientRect(); >+ var w = rect.right - rect.left; >+ var h = rect.bottom - rect.top; This shouldn't be needed... if the canvas isn't already sized correctly, there's nothing to clearRect anyways (browserToCanvas takes care of doing it initially). >+ <field name="browserRedrawHandler"> >+ handleEvent: function (aEvent) { >+ let self = this.deckbrowser; nit: get rid of this and just use this.deckbrowser directly? > <method name="_browserToCanvas"> >- ctx.clearRect(0,0,w,h); >+ //ctx.clearRect(0,0,w,h); Just remove this? >+ <method name="_browserToCanvas2"> This should have a better name... _redrawRect? >- if (isNaN(w) || isNaN(h)) >- throw "Can't get content width/height"; >- >+ if (isNaN(w) || isNaN(h)) { >+ return [this._canvas.width, this._canvas.height]; >+ } Ew, unnecessary brackets. You can get rid of this code too:
Attachment #338740 - Flags: review?(gavin.sharp) → review+
Comment on attachment 338572 [details] [diff] [review] fix v5 Stuart asked me over IRC to take this superreview request, so... In nsContentUtils.cpp, nsDOMEvent.cpp, and nsDOMEvent.h, please switch to not having any leading commas at all (by changing the comma situation at the other ifdef boundaries) or add before the MOZ_SVG ifdef. In nsDOMNotifyPaintEvent.h, please only use one _ at the end of the include guard; names with two consecutive underscores are reserved for the implementation. Should nsIDOMNotifyPaintEvent.idl say something about what coordinate system is used? Should it also say something about the cross-origin restrictions? The second change in nsDOMClassInfo.cpp should match the ordering of the entry in the first change and in the header; otherwise I think you risk crashes. (And the other two are adding to the end, which is good.) Please undo the license header munging in nsPresContext.cpp. nsPresContext::FireDOMPaintEvent is a bit confusing; it might be worth commenting that; * events sent to the window get propagated to the chrome event handler * event.target is intentionally always the window, and thus sometimes differs from eventTarget. Please don't hg add and commit layout/base/tests/Makefile.in.orig In the test, could you make the element with id="display" a div rather than a p, so that the actual document tree matches the indentation? (The start tag for the div following implicitly closes the p.) In the test, would you mind putting runTest1, runTest2, and runTest3 in order, rather than backwards? It's JavaScript, so that should be ok. (And maybe triggerPaint before check, twice.) In nsIFrame.h, could you document INVALIDATE_CROSS_DOC with its correct name (not INVALIDATE_CROSS_FRAME), and document INVALIDATE_NOTIFY_ONLY ? In nsIFrame.h, could you rename the parameter to InvalidateInternalAfterResize to be aFlags rather than aImmediate? Do you need to set the cross-document invalidation flag in nsSVGOuterSVGFrame::Paint (#ifdef XP_MACOSX code)? If not, why not? In nsGUIEvent.h, please fix the comment that says "pagetransition events" to say something correct. sr=dbaron with that
Attachment #338572 - Flags: superreview?(mats.palmgren) → superreview+
> Do you need to set the cross-document invalidation flag in > nsSVGOuterSVGFrame::Paint (#ifdef XP_MACOSX code)? If not, why not? I suppose I should. Good catch. I'll address all your comments, shouldn't be hard.
In nsIFrame.h, remove the old UUID comment. In nsSVGForeignObjectFrame.cpp, + nsRegion* region = (aFlags & INVALIDATE_CROSS_DOC) + ? &mSameDocDirtyRegion : &mCrossDocDirtyRegion; Looks odd - should it be the opposite? In nsDOMNotifyPaintEvent.cpp, + if ( aEvent ) { Remove the spaces around aEvent please. + if (mEventIsInternal) { + if (mEvent->eventStructType == NS_NOTIFYPAINT_EVENT) { For internal events you can just assert it's a NS_NOTIFYPAINT_EVENT? +NS_IMETHODIMP +nsDOMNotifyPaintEvent::GetClientRects(nsIDOMClientRectList** aResult) +{ + *aResult = nsnull; This assignment isn't necessary, the caller shouldn't use the result in case of an error. In nsDOMNotifyPaintEvent.h, + ~nsDOMNotifyPaintEvent(); Add an explicit "virtual" for clarity and consistency with other nsDOM*Event classes.
(In reply to comment #39) > In nsIFrame.h, remove the old UUID comment. > > In nsSVGForeignObjectFrame.cpp, > + nsRegion* region = (aFlags & INVALIDATE_CROSS_DOC) > + ? &mSameDocDirtyRegion : &mCrossDocDirtyRegion; > > Looks odd - should it be the opposite? Actually yes, it should. I guess the test needs to be better... > In nsDOMNotifyPaintEvent.cpp, > + if ( aEvent ) { > > Remove the spaces around aEvent please. OK > + if (mEventIsInternal) { > + if (mEvent->eventStructType == NS_NOTIFYPAINT_EVENT) { > > For internal events you can just assert it's a NS_NOTIFYPAINT_EVENT? Maybe, but other events don't do that. > +NS_IMETHODIMP > +nsDOMNotifyPaintEvent::GetClientRects(nsIDOMClientRectList** aResult) > +{ > + *aResult = nsnull; > > This assignment isn't necessary, the caller shouldn't use the result > in case of an error. OK > In nsDOMNotifyPaintEvent.h, > + ~nsDOMNotifyPaintEvent(); > > Add an explicit "virtual" for clarity and consistency with other > nsDOM*Event classes. Sure.
Updated to comments. I also added a test involving foreignObject. This is ready for checkin. Stuart, feel free to land if I don't get to it first.
Attachment #338572 - Attachment is obsolete: true
Attachment #338790 - Flags: superreview+
Attachment #338790 - Flags: review+
I popped my patch into the try server: "rocallahan@mozilla.com 1221546086" Unfortunately Windows was busted (not by me, I think). Mac numbers looked fine. Linux may be showing a Ts regression. Hard to say, I don't know what the noise range is. It would be surprising if this hurt Ts but not Tp, however --- Tp paints more. Maybe the throbber invalidation showing up? I'm trying another build to see if the regression is repeatable.
Looking at the main Tinderboxes, Linux Ts seems to jump around all over the place. So maybe we shouldn't be too concerned.
Pushed 9a46f2a17ddc. Watching Tinderbox.
Looking at a bunch of graphs, things seem to be basically fine except there's pretty clearly a Tdhtml regression on Mac: The Mac builder picked up my changeset for the build starting at 3:27 on the 18th. So, back out or add the optimization on top?
everything is checked in here. marking fixed.
Status: ASSIGNED → RESOLVED
Last Resolved: 11 years ago
Resolution: --- → FIXED
I just tried the attached test on the latest nightly, and the event never fires. What am I missing?
What testcase? The mochittest is passing on Tinderbox.
I'm apparently hallucinating, as there isn't an attached test case that resembles the one I tried running earlier. I'm very confused. Sorry. :)
Keywords: dev-doc-needed → dev-doc-complete | https://bugzilla.mozilla.org/show_bug.cgi?id=450930 | CC-MAIN-2019-22 | refinedweb | 3,536 | 58.08 |
Tutorial: Integrate Cognidox with Azure Active Directory
In this tutorial, you'll learn how to integrate Cognidox with Azure Active Directory (Azure AD). When you integrate Cognidox with Azure AD, you can:
- Control in Azure AD who has access to Cognidox.
- Enable your users to be automatically signed-in to Cognidox.
- Cognidox single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
- Cognidox supports SP and IDP initiated SSO
- Cognidox supports Just In Time user provisioning
Adding Cognidox from the gallery
To configure the integration of Cognidox into Azure AD, you need to add Cognid Cognidox in the search box.
- Select Cognidox from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
Configure and test Azure AD single sign-on
Configure and test Azure AD SSO with Cognidox using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cognidox.
To configure and test Azure AD SSO with Cognidox, complete the following building blocks:
- Configure Azure AD SSO - to enable your users to use this feature.
- Configure Cognidox Cognidox test user - to have a counterpart of B.Simon in Cognidox that is linked to the Azure AD representation of user.
- Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
In the Azure portal, on the Cognidox:
urn:net.cdox.<YOURCOMPANY>
b. In the Reply URL text box, type a URL using the following pattern:
https://<YOURCOMPANY>.cdox.net/auth/postResponse
Click Set additional URLs and perform the following step if you wish to configure the application in SP initiated mode:
In the Sign-on URL text box, type a URL using the following pattern:
https://<YOURCOMPANY>.cdox.net/
Note
These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact Cognidox Client support team to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
Cognidox application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click Edit icon to open User Attributes dialog.
In addition to above, Cognidox. In the Namespace textbox, type the namespace shown for that row.
d. Select Source as Transformation.
e. From the Transformation list, type the value shown for that row.
f. From the Parameter 1 list, type the value shown for that row.
g. Click Save.
On the Set up Single Sign-On with SAML page, in the SAML Signing Certificate section, find Federation Metadata XML and select Download to download the certificate and save it on your computer.
On the Set up Cognidox section, copy the appropriate URL(s) based on your requirement.
Configure Cognidox SSO
To configure single sign-on on Cognidox side, you need to send the downloaded Federation Metadata XML and appropriate copied URLs from Azure portal to Cognidox Cognidox.
In the Azure portal, select Enterprise Applications, and then select All applications.
In the applications list, select Cognid Cognidox test user
In this section, a user called B.Simon is created in Cognidox. Cognidox supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Cognidox, a new one is created after authentication.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the Cognidox tile in the Access Panel, you should be automatically signed in to the Cognidox for which you set up SSO. For more information about the Access Panel, see Introduction to the Access Panel.
Additional Resources
Feedback | https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/cognidox-tutorial | CC-MAIN-2019-39 | refinedweb | 666 | 56.35 |
Recently needed the ability to parse durations from human readable strings that were also context aware. The context being the date to start your duration calculation from so that if you started on January 1st 2017 and wanted 2 months you’d get exactly 31 (number of days in January 2017) + 28 (number of days in February). Then if I gave it the context of April 1st I’d get 61 days since there were two months with 31 days each.
I tried to find an existing library with no luck so I wrote
delta to take care
of the job and hopefully someone else would find it of use. You can get your
hands on delta easily through pypi like so:
pip install delta
Once installed you can use it like so:
import delta from datetime import datetime print(delta.parse('1 year 2 months and 3 days')) print(delta.parse('2 months and 3.5 weeks', datetime(2017, 3, 4)))
You can see that
delta allows you to easily include a context or not and when
you don’t supply the context it will assume the current date. Another thing you
may have noticed is you can get quite expressive with the duration expressions
being able to do all of the following:
1 year 2 months and 3 weeks 2 months, 3 weeks and 12 days 1y 2m 3w 4d 3.5 years and 2.7 days
delta will handle all of those without any issues.
If you find
delta useful then head over to the github
project and open any issues or contribute a PR for any additional features you’d
like. | http://rlgomes.github.io/work/python/date/parsing/2017/03/04/15.59-human-friendly-context-aware-date-parsing.html | CC-MAIN-2017-26 | refinedweb | 275 | 74.73 |
Important: Please read the Qt Code of Conduct -
QML ComboBox doesn't add text from its dropdown list to the text area
Hi,
I have the following code which displays its dropdown list from a db. When I choose an item from the list the text is highlighted for a moment, the dropdown closes but the chosen text is not copied to the text area.
The code:
import QtQuick 2.9 import QtQuick 2.0 import VPlayApps 1.0 import QtQuick.Controls 1.4 import QtQuick.Controls 2.2 import QtQuick.Controls.Styles 1.4 import QtQuick.Layouts 1.3 import QtQuick.LocalStorage 2.0 import QtQml.Models 2.3 import "Database.js" as JS import "Dropboxes.js" as DB App { Rectangle { id: root color: "#a1d9ea" anchors.fill: parent focus: false Text { id: title text: "Combobox" font.pixelSize: 25 anchors.horizontalCenter: root.horizontalCenter anchors.top: root.top anchors.topMargin: 30 } ComboBox { id: whatCombo anchors.horizontalCenter: title.horizontalCenter anchors.top: title.bottom anchors.topMargin: 30 editable: true textRole: "text" height: 50 width: 230 model: ListModel { id: listModel } delegate: ItemDelegate { width: whatCombo.width height: whatCombo.height text: model.what font.pixelSize: 18 } } Component.onCompleted: { var db = JS.dbGetHandle() db.transaction(function (tx) { var results = tx.executeSql( 'SELECT what FROM dropboxWhatIs order by what desc') for (var i = 0; i < results.rows.length; i++) { listModel.append({ what: results.rows.item(i).what, checked: "" }) } }) } } }
Am I missing something?
Thank you for your help.
@gabor53 Hi, you need to bind
textwith the current value of the combobox:
Text { ... text: whatCombo.currentText ... }
The problem is that when I click on a dropdown item it is either
not actually chosen or
chosen, but it doesn't appear in the combobox text area.
Your recommendation works right after there is an item chosen and it appears in the combobox's editable text area.
@gabor53 You can try with that:
delegate: ItemDelegate { ... onClicked: whatCombo.currentIndex = index highlighted: control.highlightedIndex === index }
I'm surprised it's not selected automatically.
- Diracsbracket last edited by Diracsbracket
@gabor53 said in QML ComboBox doesn't add text from its dropdown list to the text area:
textRole: "text"
Again, you don't have a role
textin your model. It should be
whatsince you use:
listModel.append({ what: results.rows.item(i).what, checked: "" })
Hi @Diracsbracket ,
Thank you. I changed it and it works perfectly. | https://forum.qt.io/topic/94513/qml-combobox-doesn-t-add-text-from-its-dropdown-list-to-the-text-area/3 | CC-MAIN-2021-39 | refinedweb | 389 | 54.49 |
After building and debugging in Microsoft Visual Studio 2010. The sum, average and product is not calculating right. Also the large and small is not showing up in results.
When I input: 13, 27, 14
it gives me this:
The Code I currently have:
Code:#include <iostream> // supports std using namespace std; // executes program int main () { int x; int y; int z; int sum; int average; int product; int smallest; int largest; cout << "Input three different intergers: "; cin >> x >> y >> z; sum = x + y + z; cout << "Sum is " << sum << endl; average = (x + y + z) / 3; cout << "Average is " << average << endl; product = (x * y * z); cout << "Product is " << product << endl; if(x < y && x < z) cout<<"Smallest: "<< x <<endl; else if(y < x && y < z) cout<<"Smallest: "<< y<<endl; else if(y < x && z < y) cout<<"Smallest: "<< z <<endl;
-TP | http://cboard.cprogramming.com/cplusplus-programming/140365-cplusplus-help.html | CC-MAIN-2014-15 | refinedweb | 139 | 55.1 |
C++11 brought Move Semantics. Since then we have extra capabilities to write faster code, support movable-only types, but also more headaches :). At least I have, especially when trying to understand the rules related to that concept. What’s more, we also have copy elision, which is a very common optimisation (and even mandatory in several cases in C++17). If you create an object based on another one (like a return value, or assignment), how do you know if that was copied or moved?
In this article I’ll show you two ways how to determine the status of a new object - copied, moved or copy-elision-ed. Let’s start!
Intro
Usually, when I try to show in my code samples that some object was moved or copied, I declared move operations for my type and then logged the message.
That worked, but how about built-in types? For example
std::string or
std::vector?
One day I was discussing a code sample related to
std::optional and JFT (a very experienced developer and very helpful!! See his articles here or here).
He showed me one trick that is simple but is very useful.
Let’s have a look at those two techniques now.
1) Logging Move
That’s the most “explicit” way of showing if something was moved: add extra code to log inside move/copy constructors.
If you have a custom type and you want to see if the object was moved or not, then you can implement all the required move operations and log a message.
For a sample class, we have to implement all special member methods (the rule of five):
- copy constructor
- move constructor
- copy assignment operator
- move assignment operator
- destructor
class MyType { public: MyType(std::string str) : mName(std::move(str)) { std::cout << "MyType::MyType " << mName << '\n'; } ~MyType() { std::cout << "MyType::~MyType " << mName << '\n'; } MyType(const MyType& other) : mName(other.mName) { std::cout << "MyType::MyType(const MyType&) " << mName << '\n'; } MyType(MyType&& other) noexcept : mName(std::move(other.mName)) { std::cout << "MyType::MyType(MyType&&) " << mName << '\n'; } MyType& operator=(const MyType& other) { if (this != &other) mName = other.mName; std::cout << "MyType::operator=(const MyType&) " << mName << '\n'; return *this; } MyType& operator=(MyType&& other) noexcept { if (this != &other) mName = std::move(other.mName); std::cout << "MyType::operator=(MyType&&) " << mName << '\n'; return *this; } private: std::string mName; };
(The above code uses a simple approach to implement all operations. It’s C++, and as usual, we have other possibilities, like the copy and swap idom).
Update: move and move assignment should be also marked with
noexcept. This improves exception safety guarantees and helps when you put your class in STL containers like vectors (see this comment: below the article). And also Core Guideline - C.66
When all of the methods are implemented, we can try using this type and checking the log output. Of course, if you have a more complicated class (more member variables), then you have to “inject” the logging code in the appropriate places.
One basic test:
MyType type("ABC"); auto tmoved = std::move(type);
The output:
MyType::MyType ABC MyType::MyType(MyType&&) ABC MyType::~MyType ABC MyType::~MyType
Here, the compiler used move constructor. The content was stolen from the first object, and that’s why the destructor prints empty name.
How about move assignment?
The second test:
MyType tassigned("XYZ"); MyType temp("ABC"); tassigned = std::move(temp);
And the log message:
MyType::MyType XYZ MyType::MyType ABC MyType::operator=(MyType&&) ABC MyType::~MyType MyType::~MyType ABC
This time the compiler created two objects and then the content of
XYZ is overridden by
ABC.
Play with the code @Coliru.
Or below:
Logging is relatively straightforward, but what’s the second option we could use?
2) Looking at the Address
In the previous section, we worked with a custom type, our class. But what if you have types that cannot be modified? For example: the Standard Library types, like
std::vector or
std::string. Clearly, you shouldn’t add any logging code into those classes :)
A motivating code:
#include <iostream> #include <string> std::string BuildString(int number) { std::string s { " Super Long Builder: " }; s += std::to_string(number); return { s }; } int main() { auto str42 = BuildString(42); std::cout << str42; }
In the above code, what happens to the returned value from
BuildString()? Is it copied, moved or maybe the copy is elided?
Of course, there are rules that specify this behaviour which are defined in the standard, but if we want to see it and have the evidence, we can add one trick.
What’s that?
Look at their
.data() property!
For example, you can add the following log statement:
std::cout << &s << ", data: " << static_cast<void *>(s.data()) << '\n';
To the
BuildString function and to
main(). With that we might get the following output:
0x7ffc86660010, data: 0x19fec40 0x7ffc866600a0, data: 0x19fec20 Super Long Builder: 42
The addresses of strings
0x7ffc86660010 and
0x7ffc866600a0 are different, so the compiler didn’t perform copy elision.
What’s more, the data pointers
0x19fec40 and
0x19fec20 are also different.
That means that the copy operation was made!
How about changing code from
return { s }; into
return s;?
In that context we’ll get:
0x7ffd54532fd0, data: 0xa91c40 0x7ffd54532fd0, data: 0xa91c40 Super Long Builder: 42
Both pointers are the same! So it means that the compiler performed copy elision.
And one more test:
return std::move(s);:
0x7ffc0a9ec7a0, data: 0xd5cc50 0x7ffc0a9ec810, data: 0xd5cc50
This time the object was moved only. Such behaviour is worse than having full copy elision. Keep that in mind.
You can play with code sample @Coliru
A similar approach will work with
std::vector - you can also look at
vector::data property.
All in all:
- if the address of the whole container object is the same, then copy elision was done
- if the addresses of containers are different, but
.data()pointers are the same, and then the move was performed.
One More Example
Here’s another example, this time the function returns
optional<vector>, and we can leverage the second technique and look at the address.
#include <iostream> #include <string> #include <vector> #include <optional> std::vector<int> CreateVec() { std::vector<int> v { 0, 1, 2, 3, 4 }; std::cout << std::hex << v.data() << '\n'; //return {std::move(v)}; // this one will cause a copy return (v); // this one moves //return v; // this one moves as well } std::optional<std::vector<int>> CreateOptVec() { std::vector<int> v { 0, 1, 2, 3, 4 }; std::cout << static_cast<void *>(v.data()) << '\n'; return {v}; // this one will cause a copy //return v; // this one moves } int main() { std::cout << "CreateVec:\n"; auto vec = CreateVec(); std::cout << static_cast<void *>(vec.data()) << '\n'; std::cout << "CreateOptVec:\n"; auto optVec = CreateOptVec(); std::cout << static_cast<void *>(optVec->data()) << '\n'; }
Or below:
The example uses two functions that create and return a vector of integers and optional of vector of integers. Depending on the return statement, you’ll see different output. Sometimes the vector is fully moved, and then the data pointer is the same, sometimes the whole vector is elided.
Summary
This article is a rather straightforward attempt to show the “debugging” techniques you might use to determine the status of the object.
In one case you might want to inject logging code into all of the copy/move/assignment operations of a custom class. In the other case, when code injections are not possible, you can look at the addresses of their properties.
In the example section, we looked at the samples with
std::optional,
std::vector and also a custom type.
I believe that such checks might help in scenarios where you are not sure about the state of the object. There are rules to learn. Still, if you see proof that an object was moved or copied, it’s more comfortable. Such checks might allow you to optimise code, improve the correctness of it and reduce some unwanted temporary objects.
Some extra notes:
- Since we log into constructors and other essential methods, we might get a lot of data to parse. It might be even handy to write some log scanner that would detect some anomalies and reduce the output size.
- The first method - logging into custom classes - can be extended as a class can also expose
.data()method. Then your custom class can be used in the context of the second debugging technique.
Once again, thanks to JFT for valuable feedback for this article!
Some references
- The View from Aristeia: The Drawbacks of Implementing Move Assignment in Terms of Swap
- Thomas Becker: C++ Rvalue References Explained
How about your code? Do you scan for move/copy operations and try to optimise it better? Maybe you found some other helpful technique? | https://www.bfilipek.com/2019/07/move-debug.html | CC-MAIN-2020-29 | refinedweb | 1,446 | 62.88 |
Heads up! To view this whole video, sign in with your Courses Plus account or enroll in your free 7-day trial. Sign In Enroll
Using the Built-in Observer and Observable Classes10:39 with Craig Dennis
In the java.util package there is a class named Observable. It's been there a while, let's dust it off and use it to make our Restaurant Simulator hummmm.
So in order to make all the different configuration changes to our restaurant 0:00 simulator throughout our day, 0:03 it's clear that we're going to need to make things more extensible. 0:05 Different clients are gonna have different needs, and 0:09 we need to be able to configure things for each of our clients specifically. 0:11 Observer to the rescue. 0:15 So I've got some great news for you. 0:16 The Observer pattern has been included in the JDK pretty much from the beginning. 0:18 It's fairly well documented, and it's pretty straightforward to implement. 0:23 There are some complaints about the way that it's implemented, 0:26 and we'll explore those after we get our hands dirty with it a little bit. 0:29 Now, a quick word of warning here. 0:32 There's been a somewhat recent trend in what is known as reactive extensions. 0:34 RxJava is the Java flavored version. 0:38 Reactive extensions make use of the Observer pattern combined 0:41 with the the Iterator pattern to deal with strands of data and events. 0:44 It's everywhere these days. 0:47 But I wanted to warn you that there's a bit of a namespace collision 0:49 around a term that we are about ready to use. 0:52 That term is observable. 0:55 The observable from the reactive extensions world 0:57 is different than what we're going to be exploring. 1:00 Check out the teacher's notes for more and 1:02 just know that they are two different things. 1:04 Ready? 1:07 Let's dive in. 1:07 All right, so in java.util there is a class named Observable, and 1:10 this is what we're going to use to mark our subjects. 1:15 So in our case, the staff and 1:17 the dashboard are observing changes to the table. 1:19 So the table is the subject. 1:22 So let's go ahead and let's pop open the table. 1:25 And what we'll do is we'll have it extend that Observable class. 1:29 So we'll say extends Observable. 1:32 Now we have all of the methods available to us from Observable. 1:37 Now, basically, we have now exposed the ability to have Observer subscribe and 1:40 have added the ability to notify them when things change. 1:46 While we're in here, why don't we set up what it is that we want to broadcast? 1:49 So we're interested when the status changes, right. 1:54 So let's go to that status setter here. 1:57 We'll come in here, so status newStatus, and 2:00 what we'll do is we'll notify the observers. 2:04 So let's say notify, and see, it's now part of this, Observers. 2:08 Now this notifyObservers takes an optional argument, which can be anything at all. 2:12 Now, the parameter is used to narrow the focus of 2:17 what it was specifically that was changed. 2:20 So in our case, it probably makes sense for us to push the status through. 2:23 You don't have to do this. 2:27 This is a totally fine call. 2:28 Let's do it anyway, though. 2:29 So let's say, we'll pass through the new status that got set. 2:30 Now, there is one thing that we need to remember to do when we're 2:34 using the Observable class. 2:37 And pay close attention here, because this will for sure bite you. 2:39 So the way that notifyObservers works is it will 2:42 only send out notifications only if it's been told that changes were made. 2:46 Now, this allows you to make sure that you can control when your observers 2:52 are notified. 2:56 You must call setChanged, and you need to do it before the notifyObservers happens. 2:57 So it's called setChanged. 3:04 So what that does is that it lets it know that things changed. 3:06 So after the notification happens, the state will turn back to unchanged. 3:09 So you can also check that by using a method called hasChanged. 3:15 Now, I'm not gonna add any protection here, 3:19 as I want any tweak in our status to trigger events, right? 3:21 So what do you say we look at how to create an observer to observe changes to 3:24 our table. 3:29 Okay, so let's go ahead and we'll tackle this first issue here, right? 3:30 The staff will not like needing to refresh the desk dashboard, right. 3:33 So let's do that. Let's go over to the dashboard. 3:37 So let's open it up, we'll go Shift Shift dashboard. 3:39 And what we'll do is we'll have the Dashboard implement 3:42 the Observer interface, also in java.util. 3:47 Now, immediately, we'll see a squiggly, 3:51 because the interface has a required method, which if we go ahead and 3:54 try to see what it's angry about, it'll ask us to implement it. 3:59 So implement methods. 4:04 And it's one named update, and it takes an Observable and an Object. 4:06 So this is a method that is called from within that notifyObservers method 4:14 on the other side, right. 4:19 So as you can see, it's passed an Observable, which it's called o here. 4:20 And the second parameter arg is an object. 4:26 So in our case that would be the status object that we passed across, or 4:29 it's null in most cases. 4:32 So the idea here is, in this update method, it's where you add the code that 4:34 will respond to the change in the class that this will eventually observe. 4:39 That make sense? 4:43 So what do we want the dashboard to do when a specific table changes? 4:44 Well, ideally and 4:49 probably realistically, we'd update that single table in the dashboard. 4:50 But we don't have a way to do that right now. 4:55 So let's just go ahead and call the render method, right? 4:57 So if I put in here, say, render. 5:00 Now any time a table changes, the dashboard will update automatically. 5:03 Well, that is, of course, if we're observing it. 5:08 So let's go take a look at our simulator, 5:11 the main method here, and take a look at what's happening. 5:14 First of all, we can get rid of this refreshDashboard, right? 5:18 Before, the server was having to do it, and the assistant was having to do it. 5:21 So we'll remove this. 5:26 There's no more dashboard refresh. 5:28 It looks like most of this is actually gonna start going away here pretty 5:30 soon, right? 5:33 Really what we want, is we want for each table, 5:35 we'll make the dashboard observe it. 5:38 And there's a method that was added by the Observable superclass named addObserver. 5:40 Our tables have that. 5:45 So let's go ahead and let's, right after we go here, 5:47 let's do tables.forEach, 5:52 for each table, we'll add 5:57 the observer of dashboard. 6:01 There we go. 6:08 Now the dashboard is watching all the tables, and when the table state changes, 6:09 it will notify all of its observers and the dashboard will render. 6:14 Nice. So 6:21 I'm pretty sure we can remove this concern, right? 6:21 The staff will not liking needing to refresh it, so 6:24 we'll automatically refresh now. 6:26 Awesome. 6:27 Okay, let's see if we can't take care of this next one. 6:29 The servers should be assigned to a table. 6:31 Well, now that we know about addObserver, 6:34 it seems like we just need to have them observe specific tables, right? 6:36 You know what, let's do that for server. 6:41 Let's go ahead and let's come over here to the simulator, 6:43 and let's cut this logic for the server out of the main loop here, right. 6:49 So we're gonna grab, these two cases are both server-based, right, so, 6:54 they're gonna not be in here, server-based solution there. 7:00 Okay, and now let's open up server. 7:05 So I'm just gonna do Shift Shift server. 7:07 And we'll make this server implement the observer. 7:12 Now, wait a second, all employees should be able to observe a table. 7:16 So let's do that, let's make the employee actually implement the observer. 7:21 So we'll say implements Observer. 7:26 And you'll notice that we don't get a squiggly, and 7:30 that's because it's abstract. 7:33 So it's not required to be there yet. 7:35 But if we come back to our server, 7:37 now our server needs to have the observer method there. 7:38 So let's go ahead and implement the methods, and we'll do the update. 7:42 And I have in my clipboard, I pasted that original. 7:48 I pasted it at the wrong place. 7:58 There we go, so we'll paste. 8:01 All right. 8:03 So we will do, first I'm gonna clean up this little bit. 8:05 Get rid of all of this optional stuff here. 8:15 So if it's available, we'll do a leadToTable and we'll break. 8:17 And if it's finished, we'll just close out the table. 8:24 And it's complaining about not knowing what the table is. 8:29 So we need to define that, don't we? 8:32 We also need to define the switch bit here. 8:35 So we definitely need to access the table, right? 8:38 So what can we do? 8:41 So we can say table, and remember, it's passed in through that observable there. 8:42 But it is an observable, so we need to do a downcast to the table. 8:47 Yuck, right? 8:53 And now let's go ahead and add that switch statement, now that we have that. 8:56 So we'll say switch, and we'll do table.getStatus. 9:00 And in here, we'll add our case statements that we had from before. 9:07 Tab that in once there. 9:13 There, that's looking a lot better. 9:15 Okay, so if the table's available, we'll view the table, 9:18 otherwise we'll finish on that. 9:20 Cool. 9:23 Okay, and now, we know how to assign these tables, right, we just assign them, right? 9:25 So we add observers to those tables. 9:29 So let's come in, and why don't we move our observer stuff down below. 9:31 So let's move all the stuff where we started doing the observers down here. 9:38 And so let's just for each table, we'll say, 9:42 table1.addObserver, and we'll have Alice watch the first three. 9:46 That sound good? 9:52 So we'll say, Alice, 2, 3. 9:53 And then let's for 4 and 5, let's have Bob watch those. 9:57 So Alice and Bob all both servers. 10:02 So Bob's got 4 and 5, and Alice only has 1 and 2. 10:05 So now they shouldn't pick up each other's checks, right? 10:09 That's what the client was concerned about here. 10:13 So let's flip over, server should be assigned to a table, bam. 10:17 All right, so why don't we take a quick break, 10:23 this has been going on for a little bit. 10:25 And we'll return and fix up the assistant to use the pattern as well. 10:26 You know, why don't you try to then tackle the assistant, right? 10:31 Try to make that assistant observable. 10:34 And then tune back in and check out how I did it. 10:36 You got this. 10:38 | https://teamtreehouse.com/library/using-the-builtin-observer-and-observable-classes | CC-MAIN-2020-40 | refinedweb | 2,270 | 83.46 |
Estimating.
Thanks to Craig Shoemaker for tagging me on the "Five Things"<g>. So, here we go, here are five things you probably didn't know about me:
1. I grew up in Iowa. I love the Midwest and I try to get back there to see friends whenever I can. I try to speak at the Des Moines, Iowa .NET User Group once a year, since that is where I have a lot of friends. I will be speaking there on March 7, 2007, so if you are in the area, stop by! ()
2. I snow ski. I try to go a few times a year to Mammoth with my friend, and the VP of my company, Michael. I love being out in the fresh air and getting exercise. I am an intermediate skiier. I tried snowboarding once, but I prefer skiing.
3. I play drums in a progressive rock band called Evolve (). We are an all-original music band. We have created about 20 songs so far and are working on our first CD which we hope to release in about 1 month. I picked up drums in January of 2004 after not having played for 20 years! Took awhile to get the chops back, but thanks to some good instructors, I am improving all the time. In fact, I am fortunate to have a local guy who is a Gene Krupa impersonator give me some lessons. Check out Randy Caputo if you are interested ().
4. I was going to become a theatre major before I discovered computers. While in college I was working in the school theatre doing lighting and stagecraft and we used an old Apple II+ to run the lights for the productions. I really enjoyed playing with the computer, and that is how I decided I liked computers better than theatre. However, to this day, I still love live theatre and go whenever I have a chance.
5. I have an 8 year old daughter named Maddie. She and I are like two peas in a pod. We love roller blading together, and in fact, we go 2 times a week to a local skating rink. We enjoying walking with the dog, going to movies, and just laughing and being together. Being recently divorced has cut down a little of our time together, but we still have some great quality time together.
Hopefully, this gives you a little insight to my life. I hope you find it interesting.
So, if I were to pick on 3 other people, here is who I would like to know more about.
Paul
In the last 2 weeks I have had two different clients complain that there are "memory leaks" in .NET. I tell them very politely, that most likely it is their code!<g> In both cases it was their code. The first case had to do with the programmers using a DataReader and not closing them when they were done with them, or they put the Close() method call within their Try block and not in a Finally. The second case involved the StreamWriter in the File.IO namespace where once again the file was not always being closed correctly due to the programmer not putting the close in the Finally block. Below is what the original code looked like:
using System.IO;private void CreateLogFile(){ try { StreamWriter sw = new StreamWriter(@"D:\Samples\Test.txt", true, System.Text.UTF8Encoding.UTF8);
sw.WriteLine("This is some text"); sw.Close(); } catch(Exception ex) { throw ex; }}
You can see in the above code that the sw.Close() method is within the Try block. If an exception occurs when trying to open or write to the file, the code would go immediately to the Catch block and the Close would never execute. If this method is being called many times, this can cause and "apparent" memory leak. With just a little re-factoring the code was fixed to close the file in the Finally block.
private void CreateLogFile(){ StreamWriter sw = null;
try { sw = new StreamWriter(@"D:\Samples\Test.txt", true, System.Text.UTF8Encoding.UTF8);
sw.WriteLine("This is some text"); } finally { if (sw != null) { sw.Close(); } }}
Notice that there are three items that we had to fix up here. First, the declaration of the StreamWriter was moved out of the Try block. If we did not do this, then the variable "sw" would not able to be accessed from within the Finally block since it would have block-level scope only within the Try portion. Secondly, we created a Finally block, checked the "sw" variable for null, and if it was not null then we closed it. Thirdly, since nothing was happening with the catch, we eliminated it all together.
Watch these types of scenarios as this can cause hard-to-find bugs in your code.
I learned something today... I was doing some Encryption of strings using DPAPI and converting them to a Base64 string and everything worked fine when I was encrypting and decrypting. However, when I saved the Base64 string to a file, then re-read the data and tried to decrypt the data, it would not work. It took me quite awhile to figure out what was going on.
I realized that when saving text using the WriteAllText method on the File class, you need to specify the Encoding type. For example:
File.WriteAllText(C:\Temp\Test.txt, "This is some text", System.Text.Encoding.UTF8)
Now the above example uses just plain text, but when you are using doing encryption and decryption and you are translating a string to an array of bytes, you use one of the encoding mechanisms to do the conversion. For example:
bytArray = Encoding.UTF8.GetBytes(strValue)
So in this case if you use UTF encoding to translate a string into an array of bytes prior to doing the encryption, you need to make sure that you store these characters to a file using the same encoding mechanism.
I hope this helps someone else, so you won't have to go through the same agony I did!
In our daily programming with .NET, we often find new things to use. In some cases Microsoft tells us there is something new to use. Take the case of moving from .NET 1.1 to .NET 2.0. Remember in .NET 1.1 how you used the ConfigurationSettings.AppSettings("MyValue") to retrieve values from your .Config files? Then when .NET 2.0 came out and you attempted to upgrade your project, now all those lines of code were marked as Obsolete and a bunch of warnings were generated in your project.
Change is inevitable in this industry, and in life! However, somethings like this we can avoid with a little careful planning. I am sure most of you have discovered the benefits of using a Data Layer. This is where you create a class to wrap up ADO.NET so you spend less time writing the same ADO.NET code over and over again. The same technique should be used with configuration settings as well.
In .NET 2.0 Microsoft wants you to now use the ConfigurationManager class now to retrieve application settings and connection strings. But should you? I say NO! Once you start writing this code all over the place you have locked yourself into that way of doing things. This means that using the ConfigurationManager class you have just locked yourself into only placing your configuration settings into a Config file.
What would happen if someone needed you to store all your settings in the registry, or in an XML file on another server, or in a database table? You would have to find all those places where you used the ConfigurationManager class and replace that code. It would be better if you wrap up the ConfigurationManager class into your own class and just have methods to retrieve your various settings. Keep it simple, something like the following:
Public Class AppConfig Public Shared Function ConnectString() As String Return System.Configuration.ConfigurationManager.ConnectionStrings("SQL").ConnectionString End Function
Public Shared Function DefaultStateCode() As String Return System.Configuration.ConfigurationManager.AppSettings("DefaultStateCode") End FunctionEnd Class
You would then use this class whenever you wished to retrieve these values, for example:
lblState.Text = AppConfig.DefaultStateCode
If you then need to change the location of where the StateCode is retrieved from, you only need to change the Shared Function DefaultStateCode to retrieve the value from the registry, a database table, or whereever.
So as you are programming your applications, think about wrapping up code that could potentially change in the future.
Have fun in your coding,
<a href="" style="display:none" rel="tag">CodeProject</a>.
In doing so, I am going with a complete provider model for each and every piece of my Framework. I really like this model. In fact, I just wrote a nice little chapter about how to create providers in my "Architecting ASP.NET 2.0 Applications" eBook (). So far, I have created a Data Provider, a Configuration Management Provider, an Exception Management Provider and a Cryptography provider. Yes, I know, I could have just used the Microsoft Enterprise Library, but have you seen the underlying code for that? Wow! Very complicated, way over-engineered. I knew I could make it simpler (and I have).
I will be writing an eBook on each one of these provider model blocks that I have created and releasing them this quarter (Q1 of 2007). Keep a watch on my web site for them. | http://weblogs.asp.net/psheriff/archive/2007/01.aspx | CC-MAIN-2013-20 | refinedweb | 1,587 | 73.68 |
I occasionally mention work projects on this site, and many of the posts that I write are directly influenced by things that I do at work. Currently, I'm spearheading a project rewriting the entire medical education technology suite for my institution's undergraduate medical education program. We started by aggregating the data from several systems into a single database with multiple schemas, and then skinned to previous learning management system (LMS) to provide a mobile-friendly experience, with an updated look-and-feel.
As a part of this process, we re-engineered the application portal with a card-based layout--each card being both an entry point into another application in the suite, as well as actionable items to limit steps and friction.
Several of these cards have data that can be based on timely interactions. For exam, "open exams" for currently open tests, or exam statistics to evaluate overall performance. During in-class exams, students need to know when an exam opens, and faculty need to know progress. For a timed exams, students would have to refresh the page to see when an exam is finally open, and faculty members would have to refresh the page to see the statistics update.
This is not idea. What we really need is real-time data, which means building a real-time application feed.
This post is the first on a series about building real-time application feeds in Node.JS with Socket.io. Maybe we'll even do a few videos on it, as well.
First, let's talk about getting started. I'm going to assume for this series that you're front-end and back-end are both JavaScript (or TypeScript). We'll need to install Socket.io.
npm install socket.io --save npm install @types/socket.io --save-dev
We're going to use Express as our back-end framework, and Socket.io and Express will run on the same port, so we'll use the
http package as the intermediary. If you're entry point is an
index.ts file (we're using TypeScript here), it might resemble something like this:
import * as express from "express"; import * as http from "http"; import * as socket from "socket.io"; const app = express(); const server = new http.Server(app); const io = socket(server); server.listen(process.env.PORT || 3000, () => { console.log(`Application listening on port ${process.env.PORT || 3000}!`); });
We can add Socket.io pretty easily. You can put this before the
server.listen() along with any of your Express routes:
io.on("connection", (socket) => { console.log("A user has connected to the socket!"); socket.on('disconnect', () => console.log('A user has disconnected from the socket!')); });
Here we're just setting up a connection event when someone connects to the Socket.io infrastructure. Inside, we simply log a message, and then we set up an event when someone disconnects from the socket.
Now we need to set up things on the front-end. You'll need to include a Socket.io client. One comes with the NPM package:
<script type="text/javascript" src="{{basePath}}/node_modules/socket.io-client/dist/socket.io.js"></script>
Then, you can set up the front-end logic for connecting inside of a
<script> tag:
let socketUrl = "{{basePath}}"; if(document.location.href.indexOf("localhost") != -1) { socketUrl = ""; } const socket = io(socketUrl);
- The
{{basePath}}you see is a Handlebar variable that tells me the server URL and base folder. I'm using Handlebars as the view engine for Express.
In the code above, we default the
socketUrl to the base path. If the current URL is
localhost, however, we set the
socketUrl to the localhost URL and port that is running.
Once we know what the URL is, we create a
socket variable by passing the URL into the
io() function.
This is all you need to get started. Spin up your Express instance, and then go to the localhost URL. You should see the "A user has connected to the socket!" string logged in the output window.
Now this does absolutely nothing of value, but it shows you how to create a real-time connection between a client and server. You can then begin to write real-time connection logic using
on() and
emit(). We'll begin to look at these in future posts as we build out real-time functionality, but as a quick example, you could set up connections like the following:
Client:
socket.on("open_exams", (data) => { //Receives data automatically from the server when "open_exams" is emitted to. });
Server:
io.on("connection", (socket) => { console.log("a user connected"); const hasChanged = await haveExamsChanged(); if(hasChanged) { socket.emit("open_exams", { refresh: true }); } });
On the server side,
hasChanged checks to see if any exams have since changed their status. If so, we "emit" to the "open_exams" channel, returning a single object telling the client that the refresh value is
true.
We'll see how this code evolves to actually do something useful in future posts.
(Photo by Taavi Randmaa) | https://codepunk.io/getting-started-with-node-js-and-socket-io-for-real-time-web-applications/ | CC-MAIN-2019-30 | refinedweb | 831 | 58.58 |
Hi,
I am having trouble trying to extend a non-abstract (though non-final)
java class. I get the following exception from JRuby:
Exception in thread “main” java.lang.RuntimeException: java_object
returned null for NewEdge
at org.jruby.javasupport.JavaUtil.convertRubyToJava(JavaUtil.java:825)
at
org.jruby.gen.InterfaceImpl343660132.create(org/jruby/gen/InterfaceImpl343660132.gen:13)
…
The object has a constructor that takes two arguments. Due to the
bug/feature that the number of arguments in the ruby class must be the
same as the number of arguments in the java class I am also overriding
self.new. Note that the types of asset1 and asset2 are java types of
the same type expected by the java ctor.
So in Java I have a class called Edge and in Ruby:
class NewEdge < Pairs::Edge
def initialize(asset1,asset2) @asset1, @asset2 = asset1, asset2 @passed = true end def self.new(asset1, asset2, ranges, mincor, maxadf) obj = self.allocate obj.send :initialize, asset1, asset2 obj.instance_variable_set(:@ranges, ranges) obj.instance_variable_set(:@mincor, mincor) obj.instance_variable_set(:@maxadf, maxadf) puts "created object", obj obj end ...
end
NewEdge.new(…)
The new object is being passed back to java, at which point the above
exception occurs. Why would convertRubyToJava ever fail? On another
note I do hope that the ctor issue is resolved as well at some point.
Writing self.new factories are ugly.
Thanks
Jonathan | https://www.ruby-forum.com/t/exception-thrown-from-jruby-when-attempting-to-extend-java-class/184387 | CC-MAIN-2022-33 | refinedweb | 228 | 51.65 |
I'm trying to do some consolidation but I'm not sure on the specific rules of consolidation so I'm having A LOT of difficulty on it. Here's what I'm trying to do.
Write a.
I'm having difficulty with closing the first account then Consolidating :(. Here's my code so far with no errors so far.
//*********************************************************** // TestAccounts1 // A simple program to test the numAccts method of the // Account class. //*********************************************************** import java.util.Scanner; public class TestConsolidation { public static void main(String[] args) { String name1; String name2; String name3; double balance = 100.00; int balance1; int balance2; int balance3; //declare and initialize Scanner object String message; Scanner scan = new Scanner (System.in); //Prompt for and read in the names System.out.println ("What is the first name? "); name1 = scan.next(); message = scan.nextLine(); //balance = balance1; System.out.println ("The first name is: \"" + name1 + "\""); System.out.println("The balance for this acoount is:" +balance); System.out.println ("What is the second name? "); name2 = scan.next(); message = scan.nextLine(); System.out.println ("The second name is: \"" + name2 + "\""); System.out.println("The balance for this acoount is:" +balance); System.out.println ("What is the third name? "); name3 = scan.next(); message = scan.nextLine(); System.out.println ("The first name is: \"" + name3 + "\""); System.out.println("The balance for this acoount is:" +balance); } } | https://www.daniweb.com/programming/software-development/threads/237104/testconsolidation | CC-MAIN-2017-09 | refinedweb | 220 | 54.29 |
Implementing an indexer for your object with C# .NET
Indexers are used extensively when accessing items in an array or List:
Friend f = friends[2];
It’s fairly easy to implement your own indexer. Imagine a table with guests sitting around. We could implement an indexer to easily access guest #n.
The Guest object is simple with only one property:
public class Guest { public string Name { get; set; } }
Here comes the implementation of the Table object including the indexers:
public class Table { public Table() { Guests = new List<Guest>() { new Guest(){Name = "John"} , new Guest(){Name = "Charlie"} , new Guest() {Name = "Jill"} , new Guest(){Name = "Jane"} , new Guest(){Name = "Martin"} , new Guest(){Name = "Ann"} , new Guest(){Name = "Eve"} }; } private List<Guest> Guests { get; set; } public Guest this[int index] { get { return Guests[index]; } set { Guests[index] = value; } } public Guest this[string index] { get { return (from g in Guests where g.Name.ToLower() == index.ToLower() select g).FirstOrDefault(); } } }
Let’s see what we have here:
- A constructor that fills up the private Guests list
- An integer indexer with get and set methods. This looks like a standard getter and setter property. The getter returns the Guest from the list in position “index”. The setter sets the incoming guest – value – at the appropriate index
- A string indexer that allows to extract a guest by a name
Here’s how you can call the indexers:
public class CustomIndexerService { public void RunDemo() { Table t = new Table(); Guest guest = t[2]; Guest replacement = new Guest() { Name = "Elvis" }; t[1] = replacement; Guest martin = t["martin"]; } }
We first extract Guest #2 which returns Jill as arrays start at 0. We then replace Guest #1 with another guest Elvis. Finally we retrieve the guest “Martin” using the string indexer.
View all various C# language feature related posts here.
Pingback: Implementing an indexer for your object with C# .NET | Dinesh Ram Kali. | https://dotnetcodr.com/2015/11/24/implementing-an-indexer-for-your-object-with-c-net-2/ | CC-MAIN-2019-04 | refinedweb | 311 | 60.45 |
golang自家的单元测试做的很好了,自需要"文件名_test.go" 就可以在里面写单元测试,而且go test命令也很强大,可以只运行单个测试函数,在goland 可以点击单元测试函数前面的图标,但是切换到vscode就需要自己动手了。go test 主要参考
单元测试写起来也比较容易,设定号 输入 判断 输出 与预想是否一致,一致则ok,否则 则报错。
单元测试是一门学问,考虑的问题也很多,比如很多边界问题、如何自动化、测试样本等等很多东西".
package testing
import "testing".
Within these functions, use the Error, Fail or related methods to signal failure.
To write a new test suite, create a file whose name ends _test.go that contains the TestXxx functions as described here. Put the file in the same package as the one being tested. The file will be excluded from regular package builds but will be included when the “go test” command is run. For more detail, run “go help test” and “go help testflag”.
Tests and benchmarks may be skipped if not applicable with a call to the Skip method of *T and *B:
func TestTimeConsuming(t *testing.T) { if testing.Short() { t.Skip("skipping test in short mode.") } ... }
Benchmarks
Functions of the form
func BenchmarkXxx(*testing.B)
are considered benchmarks, and are executed by the "go test" command when its -bench flag is provided. Benchmarks are run sequentially.
For a description of the testing flags, see.
A sample benchmark function looks like this:
func BenchmarkHello(b *testing.B) { for i := 0; i < b.N; i++ { fmt.Sprintf("hello") } }
The benchmark function must run the target code b.N times. During benchmark execution, b.N is adjusted until the benchmark function lasts long enough to be timed reliably. The output
BenchmarkHello 10000000 282 ns/op
means that the loop ran 10000000 times at a speed of 282 ns per loop.
If a benchmark needs some expensive setup before running, the timer may be reset:
func BenchmarkBigLen(b *testing.B) { big := NewBig() b.ResetTimer() for i := 0; i < b.N; i++ { big.Len() } }") } }) }
Examples
The package also runs and verifies example code. Example functions may include a concluding line comment that begins with "Output:" and is compared with the standard output of the function when the tests are run. (The comparison ignores leading and trailing space.) These are examples of an example:
func ExampleHello() { fmt.Println("hello") // Output: hello } func ExampleSalutations() { fmt.Println("hello, and") fmt.Println("goodbye") // Output: // hello, and // goodbye }
The comment prefix "Unordered output:" is like "Output:", but matches any line order:
func ExamplePerm() { for _, value := range Perm(4) { fmt.Println(value) } // Unordered output: 4 // 2 // 1 // 3 // 0 }
Example functions without output comments are compiled but not executed.
The naming convention to declare examples for the package, a function F, a type T and method M on type T are:
func Example() { ... } func ExampleF() { ... } func ExampleT() { ... } func ExampleT_M() { ... }
Multiple example functions for a package/type/function/method may be provided by appending a distinct suffix to the name. The suffix must start with a lower-case letter.
func Example_suffix() { ... } func ExampleF_suffix() { ... } func ExampleT_suffix() { ... } func ExampleT_M_suffix() { ... }
The entire test file is presented as the example when it contains a single example function, at least one other function, type, variable, or constant declaration, and no test or benchmark functions.
Subtests and Sub-benchmarks
The Run methods of T and B allow defining subtests and sub-benchmarks, without having to define separate functions for each. This enables uses like table-driven benchmarks and creating hierarchical tests. It also provides a way to share common setup and tear-down code:
func TestFoo(t *testing.T) { // <setup code> t.Run("A=1", func(t *testing.T) { ... }) t.Run("A=2", func(t *testing.T) { ... }) t.Run("B=1", func(t *testing.T) { ... }) // <tear-down code> }
Each subtest and sub-benchmark has a unique name: the combination of the name of the top-level test and the sequence of names passed to Run, separated by slashes, with an optional trailing sequence number for disambiguation.
The argument to the -run and -bench command-line flags is an unanchored regular expression that matches the test's name. For tests with multiple slash-separated elements, such as subtests, the argument is itself slash-separated, with expressions matching each name element in turn. Because it is unanchored, an empty expression matches any string. For example, using "matching" to mean "whose name contains":".
Subtests can also be used to control parallelism. A parent test will only complete once all of its subtests complete. In this example, all tests are run in parallel with each other, and only with each other, regardless of other top-level tests that may be defined:> }
Main, including those of the testing package, it should call flag.Parse explicitly.
A simple implementation of TestMain is:
func TestMain(m *testing.M) { // call flag.Parse() here if TestMain uses flags os.Exit(m.Run()) }
有疑问加站长微信联系(非本文作者)
感谢作者:zhishuai
查看原文:golang test 单元测试 | https://studygolang.com/articles/11920 | CC-MAIN-2022-40 | refinedweb | 773 | 59.3 |
Hi...
I am quite new to C++ (but no to programming or OOP) so, I have some questions specific to C++ syntax, etc ;) I hope you can help me as you did when I was trying to learn C ;)
I have a Class like: (I am using the C based openCV library)
(IplImage is a structure so I want to get returned a pointer to the structure created inside each method, This would have perfect sense in Objective-C)
And My questions areAnd My questions areCode:
#include "cv.h"
#include "highgui.h"
class CylindricalWarper{
private:
float fx,fy; //focal lenght
float cx,cy; //optical center
float k1,k2; //radial distorition coeficients
public:
void InitInstance(CString cameraString);
IplImage * CorrectDistortion(IplImage *image);
IplImage * CylindricalWarp(IplImage *undistorted);
};
1. Do I need a SuperClass? (ie: class CylindricalWarper:Warper{... )or is up to me?
and what is the most basic Class (which all objects inherit from) in C++
2. When I compile it I got two errors: error C2440: 'initializing' : cannot convert from 'void *' to 'IplImage *'
Each one in the first line inside CorrectDistortion method and CylindricalWarp method.
I think it is not just like C when you write something like
What is the correct way?What is the correct way?Code:
IplImage * correctDistortion(IplImage *);
3. What is the meaning and what is used for the word "virtual" ?
Any response would be very appreciated.
Regards
Ignacio.
PD: this task could be accomplished without classes but just for learning purposes I decided to make such a class | http://cboard.cprogramming.com/cplusplus-programming/116237-method-returning-pointer-structure-printable-thread.html | CC-MAIN-2014-35 | refinedweb | 251 | 62.68 |
Reactjs is UI library for Nodejs web projects which offer responsive web application. By default Reactjs project is one page application. You may wonder is all the React apps is one page apps ? No chance.
A route is a web page in a website / web app. A standard web app is consisting of many routes such as Home/Index Page, About, Contact etc
Routes
What is a route ? A route is a web page in a website / web app. A standard web app is consisting of many routes such as Home/Index Page, About, Contact etc. Each page has different content. If you were familiar with Python Flask app development, things becomes crystal clear.
How to create routes in React
To turn the one page Reactjs app into multi page app we have to use the BrouwserRouter and Router from the react-router-dom module.
import {BrowserRouter as Router,Route} from 'react-router-dom' import './bootstrap/dist/css/bootstrap.min.css' import Home from './components/home.component' import About from "./components/about.component"; function App() { return ( <Router> <div className="container"> <br/> <Route path="/" exact component={Home}/> <Route path="/about" exact component={About}/> </div> </Router> );
In the above example , we had created two routes with component, one for Home page and another for About Page.
The Route component is used to create new page in our app. The route has path and component props.
The path is used to define the route, ‘/’ is used for the default, which is used none of route is specified in the URL such as
component define the Reactjs component is being used in the route which can be defined in a separate file, usually under Component folder.
That is all you need, following React post deserve your attention
- How to clear all element in mongoose object array in Nodejs - How to clear elements in a mongodb object array
- Use dotenv to store configurations in Reactjs - use dotenv in Reactjs to store API base URL
- How to export multiple components in Reactjs - How to render export multiple components in React
- How to create ant row design for Reactjs - Quickly create a row design using Ant Design customize UI for React component
- Ant layout for Reactjs apps - How to create a app layout with Ant Design UI Kit in Reactjs
- How to render child components in Reactjs - How to render child render mongo object in Reactjs - How to render MongoDB object/ObjectId in React/MERN app
- How to render mongo document in Reactjs - How to render mongo document in react component | https://developerm.dev/2021/01/04/how-to-create-multi-page-app-with-reactjs/ | CC-MAIN-2021-17 | refinedweb | 424 | 57.3 |
Sahara can't login to nodes
Hi all
I'm using a full working openstack kilo release and I'm facing a strange thing. Sahara is able to instanciate a cluster but is stuck in 'Waiting' state. Seems it can't SSH to nodes :
[root@net000 ~]# grep 10.0.0.91 /var/log/messages Jan 13 16:06:32 net000 sahara-all: 2016-01-13 16:06:32.693 62723 DEBUG sahara.service.engine [-] Can't login to node cl1-ngt1-vanilla-hadoop-worker-001 10.0.0.91, reason SSHException: Error reading SSH protocol banner _is_accessible /usr/lib/python2.7/site-packages/sahara/service/engine.py:128
But I successfully run this from the network node (which owns the namespaces and the sahara server)
ip netns exec qrouter-26422538-e7ee-428d-b3b8-ed3b57e1e1d6 nc 10.0.0.91 22
I manually connect to the VM and ran tcpdump in the hadoop VM (10.0.0.91) : no SSH packet comes here (I have filtered my own SSH session)
[root@net000 ~]# ip netns exec qrouter-26422538-e7ee-428d-b3b8-ed3b57e1e1d6 ssh -i id_rsa.priv cloud-user@10.0.0.91 Warning: Permanently added '10.0.0.91' (RSA) to the list of known hosts. [cloud-user@fdssd-ngt-vanilla-hadoop-master-001 ~]$ tcpdump -i eth0 port 22 and not port 35571 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes -------nothing there------
So it seems sahara doesn't use the namespace benefits to connect to the VM, or what ?
Any idea to go further ???
Thanks
My sahara conf, I'm using neutron :
[DEFAULT] use_neutron = true use_namespaces = True use_floating_ips = False enable_notifications = False notification_driver = messaging rpc_backend = rabbit infrastructure_engine = direct | https://ask.openstack.org/en/question/87430/sahara-cant-login-to-nodes/ | CC-MAIN-2021-17 | refinedweb | 291 | 56.86 |
Control.Monad.Logic.Class
Description
A backtracking, logic programming monad.
Adapted from the paper /Backtracking, Interleaving, and Terminating Monad Transformers/, by Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, Amr Sabry ()
Synopsis
- class MonadPlus m => MonadLogic m where
- reflect :: MonadLogic m => Maybe (a, m a) -> m a
- lnot :: MonadLogic m => m a -> m ()
Documentation
class MonadPlus m => MonadLogic m whereSource
Minimal implementation: msplit
Methods
msplit :: m a -> m (Maybe (a, m a))Source
Attempts to split the computation, giving access to the first result. Satisfies the following laws:
msplit mzero == return Nothing msplit (return a `mplus` m) == return (Just (a, m))
interleave :: m a -> m a -> m aSource
Fair disjunction. It is possible for a logical computation to have an infinite number of potential results, for instance:
odds = return 1 `mplus` liftM (2+) odds
Such computations can cause problems in some circumstances. Consider:
do x <- odds `mplus` return 2 if even x then return x else mzero
Such a computation may never consider the 'return 2', and will therefore never terminate. By contrast, interleave ensures fair consideration of both branches of a disjunction
(>>-) :: m a -> (a -> m b) -> m bSource
Fair conjunction. Similarly to the previous function, consider the distributivity law for MonadPlus:
(mplus a b) >>= k = (a >>= k) `mplus` (b >>= k)
If 'a >>= k' can backtrack arbitrarily many tmes, (b >>= k) may never be considered. (>>-) takes similar care to consider both branches of a disjunctive computation.
ifte :: m a -> (a -> m b) -> m b -> m bSource
Logical conditional. The equivalent of Prolog's soft-cut. If its first argument succeeds at all, then the results will be fed into the success branch. Otherwise, the failure branch is taken. satisfies the following laws:
ifte (return a) th el == th a ifte mzero th el == el ifte (return a `mplus` m) th el == th a `mplus` (m >>= th)
Pruning. Selects one result out of many. Useful for when multiple results of a computation will be equivalent, or should be treated as such.
Instances
reflect :: MonadLogic m => Maybe (a, m a) -> m aSource
The inverse of msplit. Satisfies the following law:
msplit m >>= reflect == m
lnot :: MonadLogic m => m a -> m ()Source
Inverts a logic computation. If
m succeeds with at least one value,
lnot m fails. If
m fails, then
lnot m succeeds the value
(). | http://hackage.haskell.org/package/logict-0.4.2/docs/Control-Monad-Logic-Class.html | CC-MAIN-2014-23 | refinedweb | 384 | 52.29 |
Last updated on September 30th, 2017 |
Ionic Social Sharing is easy to set up, today we’re going to build a basic app that explains all those principles, you’ll get:
- A button that opens your phone’s share sheet and lets you pick the app you’ll use to share.
And a button bar with hot-links to share to Twitter, WhatsApp, and Instagram.
I think that’s enough talk, let’s get to coding!
Make sure to grab the code from Github to follow along:
The first thing you’ll want to do is navigate to your coding folder and start a new Ionic 2 app:
$ cd ~/Development $ ionic start IWantToShare blank --v2 $ cd IWantToShare/
That’s just the regular command to create a new app, it gives it the IWantToShare name and uses the blank template (since all we’ll be doing is using the
Now we’re going to install Social Sharing native plugin. You can find the entire docs to the plugin in the ionic-native docs.
$ ionic plugin add cordova-plugin-x-socialsharing $ npm install --save @ionic-native/social-sharing
Since Ionic released Ionic Native 3, you now need to import the plugin in
app.module.ts so it’s available to be used throughout the app:
import { SocialSharing } from '@ionic-native/social-sharing'; @NgModule({ declarations: [...], imports: [...], bootstrap: [IonicApp], entryComponents: [...], providers: [ SplashScreen, StatusBar, SocialSharing ] }) export class AppModule {}
Now we’re ready to rumble, go ahead and open your
HomePage folder inside your favorite text editor or IDE (I was using atom, but I’m giving VSCode a try these days)
Inside the
home.html file, we’re going to create something simple:
- A title.
- An Image
- A button to open the share sheet.
- A button bar with the social media icons we’ll use to share directly.
<ion-header> <ion-navbar> <ion-title> IWantToShare </ion-title> </ion-navbar> </ion-header> <ion-content padding> <p> <em>I'm the hulk!</em> </p> <img src="assets/img/hulk.jpg" /> <button ion-button Share me! </button> <ion-grid> <ion-row> <ion-col> <button ion-button icon-only (click)="twitterShare()" color="dark" clear> <ion-icon</ion-icon> </button> </ion-col> <ion-col> <button ion-button icon-only (click)="whatsappShare()" color="dark" clear> <ion-icon</ion-icon> </button> </ion-col> <ion-col> <button ion-button icon-only (click)="instagramShare()" color="dark" clear> <ion-icon</ion-icon> </button> </ion-col> </ion-row> </ion-grid> </ion-content>
As you can see there’s nothing weird or hard to understand in that picture, we’re just writing a title “I’m the Hulk!” and using an image that I have in the
assets/img/ folder and it’s called
hulk.jpg
Now we need to start adding those functions to our
home.ts file so we can start sharing!
The first thing we’ll do, and this will work for all the functions is to import the Sharing plugin from
ionic-native.
import { SocialSharing } from '@ionic-native/social-sharing';
Now we need to inject it into the constructor:
constructor(public navCtrl: NavController, private socialSharing: SocialSharing) {}
Share via share sheet:
Now we need to create the
regularShare() function. This is going to open the share sheet and let our users share using the apps they have installed.
/** * Opens up the share sheet so you can share using the app you like the most. */ regularShare(){ // share(message, subject, file, url) this.socialSharing.share("Testing, sharing this from inside an app I'm building right now", null, "www/assets/img/hulk.jpg", null); }
That function just calls
.share() from the
SocialSharing class, as you can see it takes four arguments:
(message, subject, file, URL) since all we need to pass is our message and our image, we set the other 2 to null.
We are going to set up now three hot-links so users can immediately share via Twitter, WhatsApp and Instagram.
Share via Twitter
For this all we need to know is call the Twitter function from the
SocialSharing class:
/** * This share's directly via twitter using the: * shareViaTwitter(message, image, url) */ twitterShare(){ this.socialSharing.shareViaTwitter("Testing, sharing this from inside an app I'm building right now", "www/assets/img/hulk.jpg", null); }
This one only takes three arguments, so we’re setting the
URL to null because we don’t have one here.
By the way, feel free to add your URL here so people can share it and you get all the social mojo.
Share via Instagram
This one is pretty similar to Twitter:
/** * This share's directly via Instagram using: * shareViaInstagram(message, image) */ instagramShare(){ this.socialSharing.shareViaInstagram(`Testing, sharing this from inside an app I'm building right now`, "www/assets/img/hulk.jpg"); }
But it only takes two arguments, the only two we need anyway.
Once you get sent to Instagram, you’ll have your picture ready, and the text will be copied to the clipboard, so you just need to paste it.
Share via WhatsApp
Whatsapp takes the same three arguments as Twitter:
/** * This share's directly via whatsapp using the: * shareViaWhatsapp(message, image, url) */ whatsappShare(){ this.socialSharing.shareViaWhatsApp("Testing, sharing this from inside an app I'm building right now", "www/assets/img/hulk.jpg", null); }
And there you have it. I usually have a share button for Twitter in the settings page of my apps, I get 3 or 4 tweets every month from people sharing them.
I guess I could go more aggressive and add the share buttons in other places, but that’s OK, those apps are dead to me anyway 😛 | https://javebratt.com/social-sharing-with-ionic/ | CC-MAIN-2017-43 | refinedweb | 928 | 60.24 |
New Build Documentation
Latest revision as of 18:50, 1 March 2007
This documentation last updated during the ASL 1.0.25 distribution.
[edit]
[edit] The Easy Way
[edit].
[edit].
[edit] The Other Way
[edit] Obtaining and Patching Boost
The Boost 1.33.1 distribution can be downloaded from
[edit])
[edit]
[edit] Patching under Win32
Win32 users can use the patchfile provided, but there are some caveats. First, the patchfile is distributed with Unix line endings, which must first be converted to DOS line endings. If you do not have a program to accomplish this, there is a small command line tool called
leconv that will do this for you. It is in:
C:\adobe-source\adobe\tools\
The GNUWin32 project has made a version of patch for Win32. It can be obtained at
If you already use boost build, please make sure that the adobe version of bjam will be the one found in your path, since ASL relies on patched support files found relative to the bjam executable.
[edit] Build Option 1: Using Boost Build (bjam)
Boost 1.33.1 ships with a single version of the build tool that supports both v1 and v2 syntax. When the Adobe patch instructions are followed, boost build will be upgraded to be able to produce universal binaries on the macintosh, but only when bjam invoked using v2 syntax. V1 files are untouched by the patch. If you will be moving from v1 to v2 syntax then the guide at may be of some use.
[edit].
[edit]
If you use the web install you can choose not to download most of the SDK components to save time. Only the Core SDK appears to be required.
2. Copy the "bin", "include" and "lib" directory from the PSDK-installation into "%ProgramFiles%\Microsoft Visual Studio 8\VC"
At this point, for example, windows.h should be available within your MSVC installation. This would be a good time to make sure that you can successfully build the IDE project "Begin" located in ...\adobe-source\ide_projects\vc8
Another issue is that boost build v2 from boost 1.33.1 does not provide a VC 8 Express savvy msvc-config, so we must configure boost build manually. Follow the next steps to complete VC 8 Express specific preparation:
3. Choose a "home" directory, and add a %HOME% environment variable referring to it. For example, if you chose "C:\Home", you would set the env var by opening the system control panel/advanced/environment variables and adding the variable HOME for all users with value C:\Home.
4. Now create a file in the chosen HOME directory named user-config.jam . The file must contain the following contents (assuming that you installed VC 8 Express in the default location):
import toolset : using ;.
[edit].
[edit]).
[edit].
[edit] bjam Details
Executing the build script will build various excecutables including bjam (boost build, compatible with version 1 and 2) in a sub-directory named after the platform, and copy it into the ASL tools directory. The build will then cause all libraries including libasl, libasl_dev, and the appropriate pieces of Boost to be built. It will also build and run several test applications. Copious output will be produced, indicating the success or failure of the build and the associated tests. Debug and Release targets are supported. The default is to build and use static libraries throughout, except that DLL versions of platform runtime libraries are employed.
The "projects" used by BBv2 are always named Jamfile.v2. Each Jamfile.v2 inherits settings from any Jamfile.v2's that appear in its parent directories, so the Jamfile.v2's in the test directories are relatively sparse.
[edit]
[edit] Build Option 2: Using an IDE
[edit].)
[edit].)
[edit] MacOS X Universal Binary Support
To disable building Universal Binaries with the XCode IDE projects, open the top-level xcconfig file for your respective binary. For example:
~/adobe-source/ide_projects/darwin/adobe_xconfig_application.xcconfig
and edit the value in the "Artifact Architecture" section. Reloading the XCode IDE projects at that time will cause Universal Binary support to be dropped, building only PowerPC versions of the binaries. Alternatively, you can drop "ppc" if you are only interested in building MacTel versions of the binaries.
[edit] MacOS X Binary Artifact Compatibility
On Mac OS X there is an environment variable
MACOSX_DEPLOYMENT_TARGET that can be optionally set to
10.1, 10.2, 10.3, or 10.4. When this variable is set to one of those
values, it establishes the minimum operating system version that is
supported by your binary artifact. For the Adobe Source Libraries, all
binaries are built.
[edit] Known Issues
[edit] General Issues
We are aware the release is generally quite cumbersome, and are still figuring out how to package releases more efficiently.
BBv2 intentionally builds the debug variant of ASL with "warnings as errors" turned on. The release variant is not built with this setting on because of warnings within some library headers in the GCC STL.
[edit]:
ADOBE_TEST_MICROSOFT_NO_DEPRECATE=0
The Jamfile at the top of the ASL distribution has this macro defined by default. Another option to disable the warnings in your code is by supressing the warnings with a pragma:
#pragma warning ( disable : 4996 )
[edit] Executing Adobe Begin on Windows XP on a Non-Development Machine
When an app is built using a version of MSVC, that version of Microsoft's Runtimes Libraries must be 'find do was not caught for a while because these runtime libraries were automatically found when smoke testing the apps. (A big thanks goes to Ken Silver for being the first one report Adobe Begin failing to load on his machine, which was not set up for development.)
[edit] Feedback).
Of course, feedback of any kind is highly requested at all times. Please contact one of the project leads. | http://stlab.adobe.com/wiki/index.php?title=New_Build_Documentation&diff=1880&oldid=1824 | CC-MAIN-2013-20 | refinedweb | 970 | 62.27 |
169392/initate-shutdown-instance-certein-started-example-started
I have instances in GCP. I can schedule a time to start and stop using the scheduler. But, I don't want a specific time of the day, I want a specific time after instance was started.
For example - Stop the instance after 8 hours the instance is up and running.
You can add the contents of a startup script directly to a VM when you create the VM,
You can also pass a Linux startup script directly to an existing VM:
In your Cloud Console go to VM Instance page and click on the instance you want to pass the startup script
#! /bin/bash
shutdown -P +60
- P instructs the system to shut down and then power down.
The time argument specifies when to perform the shutdown operation.
The time can be formatted in different ways.
Firstly it can be an absolute time in the format hh:mm where hh is the hour and mm is the minute of the hour.
Secondly it can be of the format +m where m is the number of minutes to wait.
Also, the word now is the same as specifying +0; it shuts the system down immediately.
You can create an instance from an ...READ MORE
If you configured an instance to allow ...READ MORE
Yes, migrating an instance from Azure to ...READ MORE
Cloud Storage uses a flat namespace to ...READ MORE
You would probably encounter the following error:
Disk ...READ MORE
You can only change the machine type ...READ MORE
In the GCP Console only, you can ...READ MORE
Check this out ...READ MORE
You can add the contents of a ...READ MORE
You might have a hotspot on Cloud ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/169392/initate-shutdown-instance-certein-started-example-started | CC-MAIN-2022-21 | refinedweb | 314 | 74.49 |
Plugin Development
Building plugins for Windows is very similar to the Capacitor plugin development process, though targeting the Windows App SDK and using a .NET language (C# being the recommended language).
#Concepts
#Languages
Programming a plugin for the Windows platform requires using a .NET compatible language. We strongly recommend C# and have not tested plugins in other .NET languages like F# or Visual Basic (nor do we support these languages).
#When to be Idiomatic, when to be consistent across platforms
When building a plugin, there are times to write code that is idiomatic to the language and platform. For example, following C# coding conventions. And then there are times to prefer consistency across multiple platforms.
To stay consistent, it is critical that plugin methods be named the same across platforms. For example, in the Example plugin below, this is why our method is named
test rather than the C# conventional
Test, which would result in a different method being exported to your cross-platform web app when running on windows.
However, in most other situations, being idiomatic makes sense. For example, notice below that the
call.Resolve uses a capital
R while on iOS and Android the method is
call.resolve. Since this is not cross-platform code, it makes sense to be idiomatic to the C# language which is what a C# developer would expect.
#Example plugin
Refer back to this example plugin written in C# as we explore the process of building plugins for the Windows platform.
using Capacitor; namespace MyPlugin { [CapacitorPlugin] public class TestPlugin : Plugin { [PluginMethod(PluginMethodReturnType.Promise)] public void test(PluginCall call) { call.Resolve(new JSObject() { { "key", "value" } }); } }}
#Creating a plugin
There are two ways to create a plugin: embedded directly into your app, or built as a standalone project.
Embedded plugins are better for plugins you don't plan to distribute or ones that add extra functionality to your app but where you don't intend to use them in other apps. Embedded plugins are also quick and easy to create since they are built directly into your app code.
Standalone plugins are better/required for plugins you plan to publish to other developers or use in other apps. The tradeoff is these plugins live outside of your app code as separate projects, and thus have higher development overhead.
#Embedding a Plugin
To embed a plugin in your app, create a new C# class in your app project and add copy in the example plugin above.
#Creating a Standalone Plugin:
Documentation for standalone Windows plugin development coming soon.
#Developing a plugin
The Ionic Windows platform targets the Windows App SDK which is Microsoft's new SDK for modern windows app development. Plugins will interface with the APIs and native UI components available in this SDK, so consult the Windows App SDK Reference as you develop your plugin.
Also note: the Windows App SDK overlaps with the last-generation Universal Windows Platform (UWP), so UWP documentation will often be used when building your plugin.
#Publishing a plugin
Plugins are published to npm, and use a local Nuget package system to resolve native plugin libraries locally rather than through the Nuget package repository. This makes distribution and usage natural for a JavaScript developer.
Thus, to publish your plugin, simply publish it to npm like any other npm package. | https://ionic.io/docs/windows/plugin-development | CC-MAIN-2022-05 | refinedweb | 551 | 53.92 |
While debugging the application, timer is working as expected.
After deploying the application in device. if I keep the application in background, then timer elapsed event is not triggering at given time interval. I have implemented sample code, which displays timer start time and every minute it will update the UI with start time and current time with count (Count should be the number of minutes after start button click). Output for 6 minutes should increment to count to 6, but it is not working as expected (Please find the attachment for the reference).
[Activity(Label = "App1", MainLauncher = true, Icon = "@drawable/icon")]
public class MainActivity : Activity
{
int count = 0;
Button btnStart;
Button btnStop;
TextView txtStatus;
TextView txtStartOrStop;
System.Timers.Timer _timer;
DateTime startTime;
protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); // Set our view from the "main" layout resource SetContentView(Resource.Layout.Main); // Get our button from the layout resource, // and attach an event to it btnStart = FindViewById<Button>(Resource.Id.btnStart); btnStop = FindViewById<Button>(Resource.Id.btnStop); txtStatus = FindViewById<TextView>(Resource.Id.txtStatus); txtStartOrStop = FindViewById<TextView>(Resource.Id.txtStartOrStop); btnStart.Click += StartTimer; btnStop.Click += StopTimer; } public void StartTimer(object sender, EventArgs e) { if (_timer == null) { startTime = DateTime.Now; _timer = new System.Timers.Timer(); _timer.Interval = 60000; _timer.Elapsed += bw_DoWork; _timer.Enabled = true; _timer.Start(); txtStartOrStop.Text = "Running"; count = 0; RunOnUiThread(() => txtStatus.Text = startTime.ToLongTimeString() + " " + count + " " + DateTime.Now.ToLongTimeString()); } } private void bw_DoWork(object sender, System.Timers.ElapsedEventArgs e) { count++; RunOnUiThread(() => txtStatus.Text = startTime.ToLongTimeString() + " " + count + " " + DateTime.Now.ToLongTimeString()); } public void StopTimer(object sender, EventArgs e) { if (_timer != null) { _timer.Stop(); _timer = null; txtStartOrStop.Text = ""; } }
If the operation is done repeatedly in background, I think background task is the solution. Even if you want the task to be executed when the app is killed this may be solution.
Here are quite good documentation, with some examples. take a look.
Answers
@sasikiranvarikunta,
I had a similar problem and finally found out that timer event is not triggered in the background.
So I did a little trick.
Catch the exact time when the application goes into background, lets say you have backgroundMilliseconds.
And when it comes to foreground => foregroundMilliseconds.
Here is the pseudo code:
if ((foregroundMilliseconds - backgroundMilliseconds) >= 6 Minutes) UpdateUI();
You can get the exact duration of being in background mode.
You can catch the app foreground and background mode in
OnResumeand
OnPausecorrespondingly.
Hi @ashalva .. Thanks for the reply. My requirement is to do some operation at background for every one minute (with start and stop buttons). I need that operation to be performed even if the application is in background mode, so I made this sample application to share with other developers.
If the operation is done repeatedly in background, I think background task is the solution. Even if you want the task to be executed when the app is killed this may be solution.
Here are quite good documentation, with some examples. take a look.
Hi @ashalva .... I tried with backgrounding service and System.Threading.Timer instead of System.Timers.Timer. Still I'm facing the same problem.. Not sure, it is issue with my device or code. I'm trying a lot to solve this issue.
Changing the Timer Reference will not solve the problem. Have you implemented the backgound service correctly? You should see your service in app manager > services
Yes. Service is working fine.
Hi @ashalva.. Timer is not working properly when device went to sleep mode. I was using location service to get the updates in my application.. I updated time in location update event to trigger at every one minute. This is not 100% accurate but it is working better compared to timers. Thanks for the help
@sasikiranvarikunta,
You are welcome! | https://forums.xamarin.com/discussion/comment/207089 | CC-MAIN-2021-17 | refinedweb | 616 | 52.87 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
first post here so bear with me :P
Now I'm not a programmer by trade, but my university offers a programming unit related to digital arts using processing. An assignment I'm working on requires the user to input an integer value which is then used to create a range for randomization. Ideally the output should look like a bunch of random high-rises, creating a city skyline (abstract art don't question it) with 5 "high-rises". However, only one is appearing and the GUI appears again. Any help would be much appreciated as I'm at wits end with my deadline approaching fast :|
import static javax.swing.JOptionPane.*; //GUI element class
void setup() { //setup canvas size(1250,750); //sets size of canvas
frameRate(10);
}
void draw(){
String input = showInputDialog("Type in integer number..."); int num = parseInt(input == null? "" : input, MIN_INT); if (input == null) showMessageDialog(null, "You didn't enter anything!", "Alert", ERROR_MESSAGE); else if (num == MIN_INT) showMessageDialog(null, "Entry \"" + input + "\" isn't a number!", "Alert", ERROR_MESSAGE); else showMessageDialog(null, "Number " + num + " has been registered.", "Info", INFORMATION_MESSAGE); for(int i = 0 ; i < 5 ; i++) { background(random(125,175),random(125,175),random(125,175)); //sets colour of canvas, ideally in rgb fill (random (150,255)); float X = random(num,num+400); float Y = random(num,num+450); rect(X,Y,150,750); }
Answers
Thanks, will try them out tomorrow (it's a bit late rn) and see if I can get it to work as intended :P | https://forum.processing.org/two/discussion/25246/accepting-user-input-for-visual-assignment | CC-MAIN-2019-47 | refinedweb | 269 | 53.61 |
I have a structure containing character arrays with no any other member functions. I am doing assignment operation between two instances of these structures. If I'm not mistaken, it is doing shallow copy. Is shallow copy safe in this case?
I've tried this in C++ and it worked but I would just like to confirm if this behavior is safe.
If by "shallow copy", you mean that after assignment of a
struct containing an array, the array would point to the original
struct's data, then: it can't. Each element of the array has to be copied over to the new
struct. "Shallow copy" comes into the picture if your struct has pointers. If it doesn't, you can't do a shallow copy.
When you assign a
struct containing an array to some value, it cannot do a shallow copy, since that would mean assigning to an array, which is illegal. So the only copy you get is a deep copy.
Consider:
#include <stdio.h> struct data { char message[6]; }; int main(void) { struct data d1 = { "Hello" }; struct data d2 = d1; /* struct assignment, (almost) equivalent to memcpy(&d2, &d1, sizeof d2) */ /* Note that it's illegal to say d2.message = d1.message */ d2.message[0] = 'h'; printf("%s\n", d1.message); printf("%s\n", d2.message); return 0; }
The above will print:
Hello hello
If, on the other hand, your
struct had a pointer,
struct assignment will only copy pointers, which is "shallow copy":
#include <stdio.h> #include <stdlib.h> #include <string.h> struct data { char *message; }; int main(void) { struct data d1, d2; char *str = malloc(6); if (str == NULL) { return 1; } strcpy(str, "Hello"); d1.message = str; d2 = d1; d2.message[0] = 'h'; printf("%s\n", d1.message); printf("%s\n", d2.message); free(str); return 0; }
The above will print:
hello hello
In general, given
struct T d1, d2;,
d2 = d1; is equivalent to
memcpy(&d2, &d1, sizeof d2);, but if the struct has padding, that may or may not be copied.
Edit: In C, you can't assign to arrays. Given:
int data[10] = { 0 }; int data_copy[10]; data_copy = data;
is illegal. So, as I said above, if you have an array in a
struct, assigning to the struct has to copy the data element-wise in the array. You don't get shallow copy in this case: it doesn't make any sense to apply the term "shallow copy" to a case like this.
Assigning structs does a member-wise assignment, and for arrays this means assigning each item. (And this is done recursively for "multiple dimension" arrays, which are really just arrays of arrays.)
You are correct that it does a shallow copy, even on arrays. (I'm assuming that you have not overloaded op= with respect to C++; if you overload it you can do anything you want.)
Remember that a shallow copy means copying the value of something, while a deep copy means copying the value to which something points or refers. The value of an array is each item in it.
The difference between shallow and deep is most meaningful when you have a type that does indirection, such as a pointer. I find my answer the most helpful way to look at this issue, but you could also say "shallow" vs "deep" doesn't even apply to other types, and they are just "copied".
struct S { int n; int* p; int a[2]; int* ap[2]; int xy[2][2]; }; void f() { S c, d; c = d; // equivalent to: c.n = d.n; c.p = d.p; c.a[0] = d.a[0]; // S::a is similar to your situation, only using c.a[1] = d.a[1]; // int instead of char. c.ap[0] = d.ap[0]; c.ap[1] = d.ap[1]; c.xy[0][0] = d.xy[0][0]; c.xy[0][1] = d.xy[0][1]; c.xy[1][0] = d.xy[1][0]; c.xy[1][1] = d.xy[1][1]; }
That I used int above doesn't change anything of the semantics, it works identically for char arrays, copying each char. This is the S::a situation in my code.
Note that p and ap are copied shallowly (as is every other member). If those pointers "own" the memory to which they point, then it might not be safe. ("Safe" in your question is vague, and really depends on what you expect and how you handle things.)
For an interesting twist, consider boost::shared_ptr and other smart pointers in C++. They can be copied shallowly, even though a deep copy is possible, and this can still be safe. | http://www.dlxedu.com/askdetail/3/383b12f256c4da99059813650e89670d.html | CC-MAIN-2018-34 | refinedweb | 778 | 73.07 |
The preliminary vfs stage 2 patch is ready. This patch is NOT for the weak of heart because I've just barely tested it. Don't run it if you aren't prepared to lose something! fetch This patch does a couple of things. First, it changes the per-filesystem vop_ops to a per-mount vop_ops, which will allow us to use the vop_ops structure to manage all sorts per-mount (future) features like cluster or other forms of cache coherency, (kernel layer) journaling, the storing of real filesystem statistics, and so forth. Second, it requires that all VOP calls pass the vop_ops as the first argument and the dispatch is based on that instead of needing a vnode to be passed as the first argument and using the dispatch in the vnode. This will allow us to start passing a namecache structure instead of a directory vnode for namespace VOP's (like open/rename/delete/etc). (However, I didn't want to rewrite all the VOP_*() calls immediately so the VOP_* macros add the required argument). Third, it removes all VCALL()'s, replacing them with wrapper calls. VCALLs were primarily used by null and union mounts to forward requests down through the layers. This type of forwarding will require kernel intervention to deal with the messaging layer and so these are all now wrapper calls. Forth, all VOCALL()'s are restricted to vfs ops chaining only (the vnoperate stuff). And I simplified the macro's arguments. Note that you have to blow away your kernel object hierarchy and then rebuild it to avoid make/make depend failures with a moved header file. (e.g. rm -rf /usr/obj/usr/src/sys/<KERNELNAME> before make buildkernel). Only serious developers should try this patch. If you do try it, please try to exercise as many VFS's as posible (e.g. union, null mounts), though keep in mind that some VFSs have stability issues as a matter of course that are not related to this work. -Matt Matthew Dillon <dillon@xxxxxxxxxxxxx> | http://leaf.dragonflybsd.org/mailarchive/kernel/2004-08/msg00178.html | CC-MAIN-2015-32 | refinedweb | 340 | 71.55 |
I’ve been lucky enough to work full-time with ASP.NET MVC 3 for the past couple of months, and I’ve completely fallen in love. It’s quite a change from WebForms, and it took a while to ‘click’ so I’m putting together a few examples both as a personal reference, but also to help anyone looking to take the leap from WebForms.
The first thing I struggled with was how to implement dropdown lists, so this post will walk through the steps required to make a simple form:
The first step is to build a model for the form. We need two properties for the dropdown, one to hold the list of options and the other for the ID of the selected option:
public class CreditCardModel { public List<SelectListItem> CardTypeOptions { get; set; } [Display(Name = "Card Type")] public string CardTypeID { get; set; } [Display(Name = "Card Number")] [Required(ErrorMessage = "Please provide your card number")] public string CardNumber { get; set; } }
The controller class requires two action methods, one to show the initial form (the GET request) and one to handle the form submission (the POST request):
public class CreditCardController : Controller { public ActionResult AddCard() { var model = new CreditCardModel(); // Populate the dropdown options model.CardTypeOptions = GetCardTypes("MS"); // Set the default to American Express return View(model); } [HttpPost] public ActionResult AddCard(CreditCardModel model) { // TODO - Handle the form submit // Populate the dropdown options model.CardTypeOptions = GetCardTypes("MS"); // Set the default to American Express return View(model); } // TODO - AddCardComplete goes here // TODO - GetCardTypes goes here }
I’ve hard-coded the dropdown values inside the controller to keep things simple:
private List<SelectListItem> GetCardTypes(string defaultValue) { List<SelectListItem> items = new List<SelectListItem>(); items.Add(new SelectListItem { Text = "American Express", Value = "AE", Selected = (defaultValue == "AE") }); items.Add(new SelectListItem { Text = "Mastercard", Value = "MS", Selected = (defaultValue == "MS") }); items.Add(new SelectListItem { Text = "Visa", Value = "VS", Selected = (defaultValue == "VS") }); return items; }
The view needs the ‘@model’ directive at the top to bind to the CreditCardModel. I’m using Razor as my view engine of choice – quite simply, it rocks! Note the use of ValidationSummary, any validation errors will automatically appear here:
@model Demo.Models.CreditCardModel @{ ViewBag. }
Inside the AddCard POST action we need a check that the model is valid before we save the card details. This can be checked by calling ModelState.IsValid which is available to all action methods.
Something to watch out for – it’s a recommended practice to redirect the user to an action following a post. The reason for this is to prevent the user accidentally re-submitting the form by refreshing the page. This is done by returning a call to RedirectToAction(), passing in the action name.
if (ModelState.IsValid) { // TODO - Save the data to your database... // Prevent the user resubmitting the card by asking their // browser to request a different page using a GET request return RedirectToAction("addcardcomplete"); }
Create a new view called AddCardComplete:
@{ ViewBag.Title = "Add Card Complete"; } <h2>Add Card Complete</h2> <p>Your card has been added</p>
Create an action method for the view inside the controller:
public ActionResult AddCardComplete() { return View(); }
All done! The selected value of the dropdown will be automatically bound to the CardTypeID property of the CreditCardModel instance.
Thanks very much, excellent. Can you give a little hint on using the viewmodels, could I use colon inherit from main model, just for the views?
Hi Stu,
I’m a little confused by your question. Are you asking if your model can inherit from a base class? If so, then yes. The model can inherit from almost anything. I think the only restriction is that the base type must have a default constructor.
Kind regards
James
Thank You very much man…. I have been looking around the internet for the past two day trying to learn about the MVC 3 framework and none of the tutorials I looked at tried to explain the dropdown list as you did. I am used to java spring so i was having trouble referencing the data being submitted from dropdowns in the view. The key info in this tutorial for me is the view model and how its properties reference the the values of selected options on a view. This solved a big part of my problem. Thank You.
Pingback: Blue Ray Plus - Latest Technology News
This was a very usefull blog. Works perfect, thanks :)
What does the SelectListItem class contains?
The SelectListItem class is part of MVC (System.Web.Mvc) and contains three properties: Selected (bool), Text (string) and Value (string).
Hi,
Thanks for this blog. It helps me.
Now, I want to access selected value of field into my controller, so how can i access that? Can you please provide me suggestion.
The selected value is automatically stored in the model by MVC3, so in the controller’s POST action method you can do:
var selectedValue = model.CardTypeID;
To get the selected text, you’ll need to perform a lookup:
var selectedText = (from x in model.CardTypeOptions where x.Value == model.CardTypeID select x.Text).FirstOrDefault();
Cannot implicitly convert type ‘System.Collections.Generic.List’ to ‘System.Collections.Generic.List
at
model.CardTypeOptions = GetCardTypes(“MS”);
Please Help Me
I’d check that the return type of GetCardTypes() matches the CardTypeOptions property. They should both be “List<SelectListItem>”. If that doesn’t work, try posting an example of the code to and let me know the URL of your question.
Thank you, It was helpful..:)
Hi, what if I need to send the CardType options via Ajax to an action method
say,
public JsonResult(CreditCardModel data)
how should I do this?
Normally you wouldn’t send *all* the card types options back to the server because the server already has these – it provided them during the initial GET request. What I suspect you really want to do is send the *selected* card type back to the server. This can be done elegantly with jQuery:
function save() {
var selectedCard = $(“#CardTypeID option:selected”).text();
$.post(“/creditcard/addcard”, { CardTypeID: selectedCard } );
}
Excellent man… | http://codeoverload.wordpress.com/2011/05/22/dropdown-lists-in-mvc-3/ | CC-MAIN-2014-15 | refinedweb | 1,001 | 55.13 |
On Wed, 2004-05-12 at 12:14 -0400, Paul Jarc wrote: > Andy Wingo <address@hidden> wrote: > > Consider a generic, `output'. I assert that x.output() in Python does > > not clobber the namespace. > > Agreed, because that name exists only within "x.". I'm not sure this is true. Of course python people will agree with you, but contrast it to the message-passing style of programming, where a message only has meaning in as much as the recipient knows how to deal with it. Likewise, output() only means something if `x' has an output field, and generics only have meaning in as much as a method has been specialized for a type. If the generic has not been specialized for a certain type, it won't apply. The corresponding situation would be if x (or a superclass) doesn't define an output() method. I'm not clear on the difference, except wrt collisions with non-generics, which I believe Andreas covered. There is still something inside me saying that modules should not be exporting identifiers like 'field or 'parent. But logic's leading me the other way! Cheers, -- Andy Wingo <address@hidden> | http://lists.gnu.org/archive/html/guile-user/2004-05/msg00036.html | CC-MAIN-2015-48 | refinedweb | 192 | 74.39 |
I downloaded and imported a csv file from an economics research database which contains three columns (screenshot of the table): “Country-Code”, “Time”, “Indicator”. There are basically two types of indicators (1. amount in local currency and 2. EUR exchange rate). How can I create a new column “EUR_amount” in Python that divdes the amount with the rate in case the countrycode and the month is the same for both items, e.g. EUR = amount/rate where country and time matches?
Any help highly appreciated! (Please keep in mind that I am quite a noob with python and this is my first question on stackoverflow ever.) Thanks a lot in advance.
Edit: Adding this code after receiving feedback from mozway (thanks for that):
import pandas as pd df = pd.DataFrame({'country_code':['EU','UK','US','EU','UK','US','EU','UK','US','EU','UK','US','EU','UK','US','EU','UK','US'], 'date':['2019-03','2019-03','2019-03','2019-04','2019-04','2019-04','2019-05','2019-05','2019-05','2019-03','2019-03','2019-03','2019-04','2019-04','2019-04','2019-05','2019-05','2019-05'], 'item':['exposure','exposure','exposure','exposure','exposure','exposure','exposure','exposure','exposure','FX-rate','FX-rate','FX-rate','FX-rate','FX-rate','FX-rate','FX-rate','FX-rate','FX-rate'], 'value':[15000,9000,13000,16500,8750,17000,17000,7999,25000,1.00,1.25,0.90,1,1.23,0.93,1.00,1.24,0.95]}) print(df)
So, to restate my question: How can I divide the item exposure with the item FX-rate under the condition of country_code AND date are matching?
Answer
You can first split the data frames into two parts – exposure and FX-rate
fx = df[df["item"]=="FX-rate"] exp = df[df["item"]!="FX-rate"]
After that, you can use
merged_df = pd.merge(fx,exp,on=["country_code","date"],how='outer')
See for other arguments and examples.
The above will result in
Next is just a matter of division
merged_df["Convert"] = merged_df["value_y"]/merged_df["value_x"] | https://www.tutorialguruji.com/python/using-python-how-to-divide-two-different-indicators-amount-and-exchange-rate-in-case-two-attributes-match-country-and-date/ | CC-MAIN-2021-43 | refinedweb | 335 | 50.33 |
SwiftOSC v1.1.3
SwiftOSC is a Swift Open Sound Control 1.1 client and server framework.
Installation
pod 'SwiftOSC', '~> 1.1'
OR
Step 1
Clone or download repository from Github.
Step 2
Open SwiftOSC.xcworkspace and build SwiftOSC frameworks.
Step 3
Embed SwiftOSC into project.
Quick Start
OSC Server
Step 1
Import SwiftOSC framework into your project
import SwiftOSC
Step 2
Create Server
var server = OSCServer(address: "", port: 8080)
Step 3
Start server
server.start()
Step 4
Setup server delegate to handle incoming OSC Data
class OSCHandler: OSCServerDelegate { func didReceive(_ message: OSCMessage){ if let integer = message.arguments[0] as Int { print("Received int (integer)" } else { print(message) } } } server.delegate = OSCHandler()
OSC Client
Step 1
Import SwiftOSC framework into your project
import SwiftOSC
Step 2
Create client
var client = OSCClient(address: "localhost", port: 8080)
Step 3
Create a message
var message = OSCMessage( OSCAddressPattern("/"), 100, 5.0, "Hello World", Blob(), true, false, nil, impulse, Timetag(1) )
Step 4
Send message
client.send(message)
Known Issues
- All OSC messages are delivered immediately. Timetags are ignored.
About
Devin Roth is a composer and programmer. When not composing, teaching, or being a dad, Devin attempts to make his life more efficient by writing programs.
For additional information on Open Sound Control visit.
Latest podspec
{ "name": "SwiftOSC", "version": "1.1.3", "summary": "SwiftOSC is an Open Sound Control client and server framework written in Swift.", "description": "SwiftOSC is an Open Sound Control client and server framework written in Swift. SwiftOSC impliments all the functionality of the OSC 1.0 specifications () and is also exteneded to include the features of OSC 1.1 ().", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Devin Roth": "[email protected]" }, "source": { "git": "", "tag": "1.1.3" }, "platforms": { "ios": "9.0", "osx": "10.10" }, "ios": { "source_files": [ "Framework/iOS/iOS}", "Framework/iOS/**/*.{c,h,m,swift}", "Framework/SwiftOSC/**/*.{c,h,m,swift}" ] }, "osx": { "source_files": [ "Framework/macOS/macOS", "Framework/macOS/**/*.{c,h,m,swift}", "Framework/SwiftOSC/**/*.{c,h,m,swift}" ] }, "pushed_with_swift_version": "4.0" }
Sun, 03 Dec 2017 00:20:07 +0000 | https://tryexcept.com/articles/cocoapod/swiftosc | CC-MAIN-2018-13 | refinedweb | 337 | 61.53 |
Eclipse Community Forums - RDF feed Eclipse Community Forums How do I write a search-path based scope? <![CDATA[I have a set of "models" which use libraries to define common parts implemented in Xtext. This works very well. To allow customers to create their own models, I created a project which contains only the libraries. All my unit tests succeed with this setup. This probably because I resolve resources myself and pass InputStreams to Xtext (not my code ) But when I create a new project, I get errors in the DSL editors even after adding the library project to the build path as a "required project". My guess is that this is because the library is references with "lib base;" in the DSL but the actual file is in the package "com.pany.product.config.dsl.libs". I read a lot of articles in this forum and the Xtext documentation. I think I need to implement a IContainer.Manager which takes a set of paths and tries to locate the library name in those paths. I looked at ResourceSetBasedAllContainersState which seems almost like what I need but how do I configure it? From the code in ResourceSetBasedAllContainersStateProvider, the container2Uris Multimap translates container names to a set of URIs. How do I map the library name "base" to the URI "classpath:/com/pany/product/config/dsl/libs/base.lib"? Or is the ResourceSetBasedAllContainersState smart enough to do that for me? Or am I on the wrong track here? All I want is a static mapping from library name to URI. ]]> Aaron Digulla 2011-11-30T14:55:23-00:00 Re: How do I write a search-path based scope? <![CDATA[Hi, what kind of scoping do you use? importUri based? customize ImportUriResolver or ImportUriGlobalScopeProvider to transpose the uri. namespace based? the it should work out of the box ~Christian]]> Christian Dietrich 2011-11-30T20:21:35-00:00 Re: How do I write a search-path based scope? <![CDATA[As I said, my approach is "search path based". The resources are in a certain package on the classpath. It works when the resources are in the same project but I get errors in the editor when I move them to a different project (even though they are still in the same place on the classpath).]]> Aaron Digulla 2011-12-01T08:54:37-00:00 Re: How do I write a search-path based scope? <![CDATA[And how does your "search path based" stuff work?]]> Christian Dietrich 2011-12-01T08:58:55-00:00 Re: How do I write a search-path based scope? <![CDATA[Remember that there are two modes of operation: Product and Eclipse editor. The product is web based, so no Eclipse/OSGi/p2 is involved. Product mode: I have a configuration which lists all packages where resources can be found. At startup, I read the config and create a "DSL loader" which searches and loads all configured libraries in a shared resource. Scripts are then loaded into the same resource. Since the libraries are already present, the proxy resolver has no problem to find them. Eclipse editor mode: Within the editor, something else happens. My guess is that the JavaProjectsState class is somehow involved but the code is hard to understand - this is one of the drawbacks of DI: If you don't know which part of the code contains a certain functionality, you're lost since you can't easily figure it out by looking the code. What I don't understand is why it works when the resources are in the same project but why it fails when I split the resources over several projects. In the JavaProjectsState class, I see WorkspaceProjectsStateHelper and JavaProjectsStateHelper which suggests that the Xtext runtime can resolve resources via the classpath but for some reason, this isn't happening. One odd thing that I noticed: There is no "Referenced Libraries" node in the Package Explorer :-/ Makes me wonder if I'm missing a plug-in when I start an Eclipse Application via "Run...". PS: The errors suddenly vanished. I cleaned the configuration, the workspace and built the plug-ins once more. No idea which of the three fixed the problem. Is there a way to debug the resource resolution?]]> Aaron Digulla 2011-12-01T10:19:18-00:00 Re: How do I write a search-path based scope? <![CDATA[Hi, id debug into DefaultGlobalScopeProvider.getVisibleContainers/getScope ~Christian]]> Christian Dietrich 2011-12-01T10:30:12-00:00 | https://www.eclipse.org/forums/feed.php?mode=m&th=263014&basic=1 | CC-MAIN-2021-04 | refinedweb | 745 | 72.87 |
Pandas dataframe is a powerful two dimensional data structure which allows you to store data in rows and columns format.
It also supports multi index for columns and rows.
In this tutorials, you'll learn how to create multi-index pandas data-frame and how to rename the columns of the multi index data-frame.
If you would like to rename columns of a single index dataframe read, How to Rename columns of Pandas Dataframe?
Creating Multi-Index Dataframe
To create a multi index dataframe, you need to follow two steps.
First, create a normal dataframe using
pd.DataFrame() method.
Second, set the columns for the dataframe using
pd.MultiIndex.from_tuples(). This allows you to set mutli index for the columns of the dataframe.
Snippet
import pandas as pd df = pd.DataFrame([[1,2,3], [4,5,6], [7,8,9]]) df.columns = pd.MultiIndex.from_tuples((("a", "b"), ("a", "c"), ("a", "d"))) df
You'll see the below output.
Output
a b c d 0 1 2 3 1 4 5 6 2 7 8 9
The multilevel column index dataframe is created. a is the first level column index and b, c, d are the second level column indexes.
Next, let's see how to rename these mutli-level columns.
Renaming the Multiindex Columns
To rename the multi index columns of the pandas dataframe, you need to use the set_levels() method.
Use the below snippet to rename the multi level columns.
Snippet
df.columns.set_levels(['b1','c1','d1'],level=1,inplace=True) df
where,
['b1','c1','d1']- New column names of the index
level=1- Level of the columns to be renamed
inplace=True- To perform the rename operation in the same dataframe rather than creating the new dataframe
Now the second level index of the columns will be renamed to b1, c1, d1 as shown below.
Output
a b1 c1 d1 0 1 2 3 1 4 5 6 2 7 8 9
This is how you can rename the columns of the multi index dataframe.
Conclusion
In this short tutorial, you've learnt how to rename the columns of the multi-index pandas data-frame using the
set_levels() method.
Discussion (0) | https://dev.to/vikramaruchamy/how-to-rename-multi-index-columns-in-pandas-dataframe-170 | CC-MAIN-2021-43 | refinedweb | 364 | 62.58 |
The concepts of attribute types and attribute syntax were mentioned briefly in the previous chapter. Attribute types and the associated syntax rules are similar to variable and data type declarations found in many programming languages. The comparison is not that big of a stretch. Attributes are used to hold values. Variables in programs perform a similar task?they store information.
When a variable is declared in a program, it is defined to be of a certain data type. This data type specifies what type of information can be stored in the variable, along with certain other rules, such as how to compare the variable's value to the data stored in another variable of the same type. For example, declaring a 16-bit integer variable in a program and then assigning it a value of 1,000,000 would make no sense (the maximum value represented by a signed 16-bit integer is 32,767). The data type of a 16-bit integer determines what data can be stored. The data type also determines how values of like type can be compared. Is 3 < 5? Yes, of course it is. How do you know? Because there exists a set of rules for comparing integers with other integers. The syntax of LDAP attribute types performs a similar function as the data type in these examples.
Unlike variables, however, LDAP attributes can be multivalued. Most procedural programming languages today enforce "store and replace" semantics of variable assignment, and so my analogy falls apart. That is, when you assign a new value to a variable, its old value is replaced. As you'll see, this isn't true for LDAP; assigning a new value to an attribute adds the value to the list of values the attribute already has. Here's the LDIF listing for the ou=devices,dc=plainjoe,dc=org entry from Figure 2-1; it demonstrates the purpose of multivalued attributes:
#
The LDIF file lists two values for the telephoneNumber attribute. In real life, it's common for an entity to be reachable via two or more phone numbers. Be aware that some attributes can contain only a single value at any given time. Whether an attribute is single- or multivalued depends on the attribute's definition in the server's schema. Examples of single-valued attributes include an entry's country (c), displayable name (displayName), or a user's Unix numeric ID (uidNumber).
An attribute type's definition lays the groundwork for answers to questions such as, "What type of values can be stored in this attribute?", "Can these two values be compared?", and, if so, "How should the comparison take place?"
Continuing with our telephoneNumber example, suppose you search the directory for the person who owns the phone number 555-5446. This may seem easy when you first think about it. However, RFC 2252 explains that a telephone number can contain characters other than digits (0-9) and a hyphen (-). A telephone number can include:
a-z
A-Z
0-9
Various punctuation characters such as commas, periods, parentheses, hyphens, colons, question marks, and spaces
555.5446 or 555 5446 are also correct matches to 555-5446. What about the area code? Should we also use it in a comparison of phone numbers?
Attribute type definitions include matching rules that tell an LDAP server how to make comparisons?which, as we've seen, isn't as easy as it seems. In Figure 2-3, taken from RFC 2256, the telephoneNumber attribute has two associated matching rules. The telephoneNumberMatch rule is used for equality comparisons. While RFC 2552 defines telephoneNumberMatch as a whitespace-insensitive comparison only, this rule is often implemented to be case-insensitive as well. The telephoneNumberSubstringsMatch rule is used for partial telephone number matches?for example, when the search criteria includes wildcards, such as "555*5446".
The SYNTAX keyword specifies the object identifier (OID) of the encoding rules used for storing and transmitting values of the attribute type. The number enclosed by curly braces ({ }) specifies the minimum recommended maximum length of the attribute's value that a server should support.
All entries in an LDAP directory must have an objectClass attribute, and this attribute must have at least one value. Multiple values for the objectClass attribute are both possible and common given certain requirements, as you shall soon see. Each objectClass value acts as a template for the data to be stored in an entry. It defines a set of attributes that must be present in the entry and a set of optional attributes that may or may not be present.
Let's go back and reexamine the LDIF representation of the ou=devices,dc=plainjoe,dc=org entry:
#
In this case, the entry's objectClass is an organizationalUnit. (The schema definition for this is illustrated by two different representations in Figure 2-5.) The listing on the right shows the actual definition of the objectClass from RFC 2256; the box on the left summarizes the required and optional attributes.
Here's how to understand an objectClass definition:
An objectClass possesses an OID, just like attribute types, encoding syntaxes, and matching rules.
The keyword MUST denotes a set of attributes that must be present in any instance of this object. In this case, "present" means "possesses at least one value."
The keyword MAY defines a set of attributes whose presence is optional in an instance of the object.
The keyword SUP specifies the parent object from which this object was derived. A derived object possesses all the attribute type requirements of its parent. Attributes can be derived from other attributes as well, inheriting the syntax of its parent as well as matching rules, although the latter can be locally overridden by the new attribute. LDAP objects do not support multiple inheritance; they have a single parent object, like Java objects.
It is possible for two object classes to have common attribute members. Because the attribute type namespace is flat for an entire schema, the telephoneNumber attribute belonging to an organizationalUnit is the same attribute type as the telephoneNumber belonging to some other object class, such as a person (which is covered later in this chapter). | http://etutorials.org/Server+Administration/ldap+system+administration/Part+I+LDAP+Basics/Chapter+2.+LDAPv3+Overview/2.2+What+Is+an+Attribute/ | CC-MAIN-2018-09 | refinedweb | 1,029 | 55.13 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
_rec_name - predefined fields
Hai Friends ,
First I thank everyone who give good guideline for newbie in OpenERP . I have question regards predefined fields - _rec_name in OpenERP .
- Its not clear for me , _rec_name predefined fields in OpenERP . Friends can you please explain with sample module .It more useful for clear understanding .
Thanks & Regards OMPRAKASH.A
_name is used to define object like
account.invoice, sale.order, purchase.order, res.users.
When you have name field in your columns, you don't need to define field in
_rec_name. OpenERP takes name field by default.
You have seen name in any form when you select
many2one field. For example in Sale Order when you select Customer, you can see Customer's Name in that
many2one field. Now if you want to show Customer's Phone Number in
many2one field, you have to define
phone field in
_rec_name like this:
_rec_name = 'phone'
If your columns don't have any name field then you have to define any field in
_rec_name.
There are other reasons which Prakash described in his answer.
Hi Sudhir Arya, Thanks for your reply . I am new for OpenERP , If my question are very silly please don't mistake . I am very interest to know about OpenERP . 1. > From your point I can understand that _rec_name not needed until _name is defined . But if i create a <objectname>.py class _name is mandatory . So I must declare the _name fields . 2.> If i define _rec_name how can i make use .
This is my sample code please explain with this ...< Test.py > from osv import fields,osv class test(osv.osv): _name = 'sodexis' _rec_name = 'data_id' _columns = { 'name' : fields.char('name' , required = 1 , size = 30 ), 'age' : fields.integer('age' , store = False), 'data_id':fields.many2one('test', 'data_id', required=False) } test()
Please explain with this example , And once i thank you both for such guideline ... Thanks regards ----OMPRAKASH.A
I am talking about
name field in
_columns not
_name. There is nothing to do with
_name.
In your code you have defined
name field
`name`: fields.char('name', required=1, size=30)
So you don't need to define
_rec_name because OpenERP takes
name field by by default.
Hi,
I have another doubt here.I want to know the way to find which field is applied in _rec_name in the particular model.
I have taken that model id from ir.model table and also I could find all the fields list created in that model through ir.model.fields table.But I could not find which field is used in _rec_name
Could you please help me?
In fields.many2one("table.name", output)
Default name_get method return the output value "name" field value in the table.
suppose in the table contain no "name" field then we can use _rec_name = field_name
Now in name_get method return _rec_name field value.
Hi Prakash , Thanks for your reply . I have doubt , _name is mandatory then how can I use _rec_name ? If you explain with sample code , It make more useful for me & once again thanks for your reply
Hi omprakash, Here is the sample code.
Class Test(models.Model):
_name = 'test'
_rec_name = field_1
field_1 = fields.Many2one('object',"Field 1")
field_2 = fields.Many2one('object',"Field 2")
date = fields.Date("Date")
As you can see no "name" field is there in above model.
So here we have 2 option :
Option 1: set rec_name :- Here field which is set in rec_name works as Model name field.
Option 2: override name_get method :- We need to override name_get method.(here we can concate more than 2 fields and create name for model)
@api.multi
@api.depends('field_1', 'date')
def name_get(self):
result = []
for test in self:
name = test.field_1 + ' ' + account.date
result.append((test.id, name))
return result
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/rec-name-predefined-fields-17172 | CC-MAIN-2017-39 | refinedweb | 678 | 76.82 |
pythos at bag.python.org wrote: >Newbie at python (but not programming) here... > >I have a program that has "import os" at the top, and then later a call to >utime() is made. The python interpreter says "name 'utime' is not defined". >But if I change "utime(...)" to "os.utime(...)" then it works fine. Perhaps I >am expecting the "import os" statement to work the same way as "import ><package_name>.*" does in Java. So is it the case that if I write "import os" >in python, then I still need to write "os.utime(...)"? Or is there something >else wrong? Thanks. > > > What you want is from os import utime which will make 'utime' a name in the local namespace. Ken | http://mail.python.org/pipermail/python-list/2004-August/240230.html | CC-MAIN-2013-20 | refinedweb | 120 | 94.66 |
In this post I want to explore the design space for error handling techniques in Scala. We previously posted about some basic techniques for error handling in Scala. That post generated quite a bit of discussion. Here I want to expand the concepts Jonathon introduced by showing how we can systematically design a mechanism for error handling, introduce some moderately advanced techniques, and discuss some of the tradeoffs.
Goals
Before we can design our system we must lay out the goals we hope to accomplish. There are two goals we are aiming for.
Our first goal is to stop as soon as we encounter an error, or in other words, fail-fast. Sometimes we want to accumulate all errors – for example when validating user input – but this is a different problem and leads to a different solution.
Our second goal is to guarantee we handle every error we intend to handle. As every programmer knows, if you want something to happen every time you get a computer to do it. In the context of Scala this means using the type system to guarantee that code that does not implement error handling will not compile.
There are two corollaries of our second goal:
if there are errors we don’t care to handle, perhaps because they are so unlikely, or we cannot take any action other than crashing, don’t model them; and
if we add or remove an error type that we do want to handle, the compiler must force us to update the code.
Design
There are two elements to our design:
- how we represent the act of encountering an error (to give us fail-fast behaviour); and
- how we represent the information we store about an error.
Failing Fast
Our two tools for fail-fast behaviour are throwing exceptions and sequencing computations using monads.
We can immediately discard using exceptions. Exceptions are unchecked in Scala, meaning the compiler will not force us to handle them. Hence they won’t meet our second goal.
This leaves us with monads. The term may not be familiar to all Scala programmers, but most will be familiar with
Option and
flatMap. This is essentially the behaviour we are looking for.
Option gives us fail-fast behaviour when we use
flatMap to sequence computations1.
scala> Option(1) flatMap { x => println("Got x") Option.empty[Int] flatMap { y => // The computation fails here and later steps do not run println("Got y") Option(3) map { z => println("Got z") x + y + z } } } Got x res0: Option[Int] = None
It’s normally clearer to write this using a for-comprehension:
for { x <- Option(1) y <- Option.empty[Int] z <- Option(3) } yield (x + y + z) res1: Option[Int] = None
There are a lot of data structures that implement variations of this idea. We might also use
Either or
Try from the standard library, or Scalaz’s disjuction, written
\/.
We want some information on errors for debugging. This means we can immediately drop
Option from consideration, as when we encounter an error the result is simply
None. We know that an error has happened, but we don’t know what error it is.
We can also drop
Try from consideration.
Try always stores a
Throwable to represent errors. What is a
Throwable? It can be just about anything. In particular, it’s not a sealed trait so the compiler can’t help us to ensure we handle all the cases we intend to handle. Therefore we can’t meet goal two if we use
Try.
Either allows us to store any type we want as the error case. Thus we could meet our goals with
Either, but in practice I prefer not to use it. The reason being it is cumbersome to use. Whenever you
flatMap on an
Either you have to decide which of the left and right cases is considered that success case (the so-called left and right projections). This is tedious and, since the right case is always considered the succesful case, only serves to introduce bugs2. Here’s an example of use, showing the continual need to specify the projection.
// Given a method that returns `Either`: def readInt: Either[String, Int] = try { Right(readLine.toInt) } catch { case exn: NumberFormatException => Left("Please enter a number") } // We can call right-biased flatMap... readInt.right.flatMap { number => } /// ...or left-biased flatMap: readInt.left.flatMap { errorMessage => // flatMap is left-biased here } // This makes for-comprehensions cumbersome: for { x <- readInt.right y <- readInt.right z <- readInt.right } yield (x + y + z)
My preferred choice is Scalaz’s
\/ type, which is right-biased. This means it always considers the right hand to be the successful case for
flatMap and
map. It’s much more convenient to use than
Either and can be used as a mostly drop-in replacement for it. Herea’s an example of use.
import scalaz.\/ def readInt: \/[String, Int] = // String error or Int success try { \/.right(readLine.toInt) // Creates a right-hand (success) value } catch { case exn: NumberFormatException => \/.left("Please enter a number") // Creates a left-hand (failure) value } // \/ is a monad, so it has a flatMap method and we can use it in for // comprehensions for { x <- readInt y <- readInt z <- readInt } yield (x + y + z)
Representing Errors
Having decided to use the disjunction monad for fail-fast error handling, let’s turn to how we represent errors.
Errors form a logical disjunction. For example, database access could fail because the record is not found or no connection could be made or we are not authenticated, and so on. As soon as we see this structure we should turn to an algebraic data type (a sum type in particular), which we implement in Scala with code like
sealed trait DatabaseError final case class NotFound(...) extends DatabaseError final case class CouldNotConnect(...) extends DatabaseError final case class CouldNotAuthenticate(...) extends DatabaseError ...
When we process a
DatabaseError we will typically use a
match expression, and because we have used a
sealed trait the compiler will tell us if we have forgotten a case. This meets our second goal, of handling every error we intend to handle.
I strongly recommend defining a separate error type for each logical subsystem. Defining a system wide error hierarchy quickly becomes unwieldy, and you often want to expose different information at different layers of the system. For example, it is useful to include authentication information if a login fails but making this information available in our HTTP service could lead to leaking confidential information if we make a programming error.
A complete code example is in this Gist.
Unexpected Errors
We have the basic structure in place – use
\/ for fail fast behaviour along with an algebraic data type to represent errors. However we still have a few issues to address to really polish our system. One is how we handle unexpected errors. This can either be legacy code throwing exceptions, or they can be errors that we just aren’t interested in dealing with. For example, running out of disk space may be possibility that we decide is so unlikely that we don’t care to devote error handling logic to it. To handle this case I like to add a case to our algebraic data types to store unexpected errors. This usually has a single field that stores a
Throwable.
Locating Error Messages
It is very useful to know the location (file name and line number) of an error. Exceptions provide this through the stack trace, but if we roll our own error types we must add the location ourselves. We can use macros to extract location information, but it is probably simpler to created a sealed subtype of
Exception as the root of our algebraic data types, and use
fillInStackTrace to capture location information. Wrap this up behind a convenience constructor and we’ll always have location information for debugging.
Union types
Finally, we see that we often repeat error types as we move between layers. For example, both the database and service layers in the example have
NotFound errors that mean essentially the same thing. Inheritance restricts us to tree shaped subtyping relationships. We can’t “reach into” the
DatabaseError type to pull out just the
NotFound case for inclusion in
ServiceError.
If we used a logical extension of
Either (or
\/) that we can piece together types in an ad-hoc way. For example, we could use
\/[NotFound, BadPassword] to represent our errors, and if we wanted to extend to more cases we could use
\/[NotFound, \/[BadPassword, NotFound]] and so on, forming a list structure. The shapeless
Coproduct provides a generalisation of this idea.
We can go one step further with unboxed union types to achieve the same effect with less runtime cost. This might be a step too far for most teams, but do note that union types are slated for inclusion in a future version of Scala.
Conclusions
We have seen how to construct an error handling framework that meets our two goals of failing fast and handling all the errors we intend to handle. As always, use techniques appropriate for the situation. For example, many people commented on
Try in our previous post.
Try won’t help us ensure we handle all the errors we want to handle, our second design in this post. For this reason I don’t like using it. However, if you can accept losing the guarantees on error handling it imposes then it is worth considering. If you are writing a one off script maybe you don’t need error handling at all.
We’ve also seen systematic application of Scala features. Whenever we have a structure that is this or that we should recognise it is a sum type and reach for a
sealed trait. Whenever we find ourselves sequencing computation there is probably a monad involved. Understanding these patterns is the foundation for successful programming in Scala. If you are interested in learning more they are explained in more depth in our books and courses, particularly Essential Scalaz. The next two Essential Scala courses are running in San Francisco and Edinburgh.
I use
Option.empty[A]to construct an instance of
Nonewith the type I want. If I instead used a plan
None(which is a sub-type of
Option[Nothing]) type inference in this case infers
Option[String]. This is due to the overloading of
+for string concatentation as well as numeric addition. ↩
Admittedly this is not a common source of bugs. However, I sometimes get my right and left mixed up (I’m left-handed) and this is the kind of mistake I could make. ↩ | http://underscore.io/blog/posts/2015/02/23/designing-fail-fast-error-handling.html | CC-MAIN-2017-26 | refinedweb | 1,768 | 63.29 |
This instructable will show how to use Visual Studio 2013 Community Technology Edition to compile a program using a GNU GCC compiler toolchain. In the resulting project template, Visual Studio retains the intellisense and code completion feature, however built in debugging is lost in this scheme. Scheme shown was developed to allow programs to be written and compiled on a windows computer and then run on a Raspberry Pi (Model 1). The procedure could be adapted to other purposes with relative ease.
Step 1: Download and Install the Tool Chain
Get a GNU GCC toolchain that compiles to a linux executable while running on windows. This instructable was created to compile programs to run on a Raspberry Pi. You could also get one that compiles to a windows executable, but given that Visual Studio is essentially free for most hobbyists it wouldn't make much sense.
We will use the tool chain provided by SysProg. Here is a link:. Pick the package that is appropriate for your target platform. I picked raspberry-gcc4.9.1-r2.exe. I installed it with the following options. Don't check the path option.
A quick word about SysProgs. They are selling a package that does all of this automatically and also provides remote debugging capabilities among other things. This instructable is fine for small applications that are not too complicated. However, if you intend to build Skynet to take over the world using your Raspberry Pi then I suggest you spend some money to buy their product. This author is not affiliated with them in any way, but they deserve credit for providing the tool chains in a free and easy to use manner.
Step 2: Install Visual Studio
If you haven't done so already already, Install the Visual Studio 2013 CTE version. Installing Visual Studio is well documented elsewhere, so it is not repeated here.
Open Visual Studio
Start a MakeFile Project, name it something that can be thrown away. You won't keep this particular project. Click OK. Click Next.
Step 3: Create the Build Batch Files
Enter Build.bat in the Build Command Line box. Enter Clean.bat in the Clean commands box. Delete the contents of the Output box. Skip to release configuration by leaving the same as debug box checked. Click Finish.
Visual Studio is now in an empty project.
Step 4: Setup the Search Paths
Goto the Project Properties. Delete all of the VC++ Directories. Goto Nmake. Delete the Preprocessor Definitions Add the following path to the Nmake Include Search Path (change the path to suit your machine):
C:\SysGCC\Raspberry\arm-linux-gnueabihf\include\c++\4.6.3
C:\SysGCC\Raspberry\lib\gcc\arm-linux-gnueabihf\4.6\include
C:\SysGCC\Raspberry\arm-linux-gnueabihf\sysroot\usr\include\arm-linux-gnueabihf
C:\SysGCC\Raspberry\arm-linux-gnueabihf\sysroot\usr\include
C:\SysGCC\Raspberry\arm-linux-gnueabihf\include\c++\4.6
Step 5: NMake Setup
Click OK. NMake should look like this.
Step 6: Create a Source Code File
Add a source file for the main function. Main.cpp
Add super simple hello world code:
#include <stdio.h>
int main()
{
printf(“Hello Pi from windows.\n”);
return(0);
}
Note that the code completion worked. It used the GCC header files as it's source.
Step 7: Setup the Batch Build Files
Add a batch file to build using Utility/Text File called build.bat. Make sure it is located on the project level. Add a batch file to build using Utility/Text File called clean.bat. Make sure it is located on the project level. This file will not be used in this version of this instructable but a clean function should be created eventually. Rebuild is also good but in a small program we can do without it.
Save all.
Type the following script to the build.bat file (you will need to change the path to your target machine).
set sources=*.cpp
set executable=Main
PATH=C:\SysGCC\Raspberry\arm-linux-gnueabihf\bin:$PATH
PATH=C:\SysGCC\Raspberry\bin:$PATH
C:\SysGCC\Raspberry\bin\arm-linux-gnueabihf-gcc-4.6.exe -Wall -O3 %sources% -o %executable%
move %executable% "\\PI\user\Home\Programs\%executable%"
Note that the script automatically move the executable to my Raspberry Pi.
Step 8: Create a Project Template
Most people would not want to go through this procedure every time to write a program for Pi. Luckily, Visual Studio has us covered. Save the project as a template (File/Export Template). The options are self explanatory. Close the throwaway project/solution.
Start a new project. Under VisualC++, select the “throwaway” template (you may have changed the name to something more sensible.) Rename the project as you would any other project. You should see a new project with the same files as the model project used to create the template.
Step 9: Thoughts and Comments
If you have ever used GNU GCC or most command line compilers, then you are probably asking what is wrong with this person right about now. Why isn't make being used to build this. Well here is a little secret, you don't have to use make. This batch file works fine and it isn't too complicated. Make is excellent, but it is too complicated for a very simple program like this. If you want to use make then by all means. However, you should start it from the build batch file in order to keep Visual Studios' make system separate from the GNU make system.
A word about compiling on this scheme. The debugger system will not work, this scheme is not designed to setup remote debugging. You will need to compile the code using "Build/Build Solution" and debug the old fashioned way, using printf statements. That is the biggest reason that this scheme is not suitable for complicated programs.
This may cause you to ask why you would want to do this in the first place. In this author's opinion Visual Studio puts the right options in front of a user but it has all of the options anyone could need somewhere in the menus. They can be hard to find from time to time, but they are there. It also hard to beat the intellisense and code completion features in Visual Studio.
Of course you could also use Eclipse or Code Blocks, or one of the other IDEs' out there. They are clearly very powerful, but they can be overwhelming. Microsoft has gotten a lot of stuff wrong over the years, but Visual Studio is one they have gotten right. | http://www.instructables.com/id/Compile-Using-GNU-GCC-from-Visual-Studio/ | CC-MAIN-2017-17 | refinedweb | 1,100 | 66.64 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.