No fullscreen on Mac OS X Lion with OpenGL
{{{
!python
>>> import pygame
>>> pygame.vernum
(1, 9, 2)
>>> pygame.display.set_mode((1344,840),pygame.FULLSCREEN,24)
<Surface(1344x840x24 SW)>
Killed.
>>> pygame.display.set_mode((1344,840),pygame.FULLSCREEN|pygame.OPENGL,24)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pygame.error: Error setting OpenGL fullscreen: invalid fullscreen drawable
}}}
Installed with the mpkg provided by the pygame website. Installed SDL.framework, SDL_image.framework, SDL_mixer.framework, SDL_ttf.framework from the SDL website, version 1.2.13.
Lion removed the API that libSDL uses for fullscreen mode:
I am having the same problem, except it doesn't crash; it just doesn't create the window.
Go update your SDL package to 1.2.15. Seems like the bug is fixed already. I just updated my SDL.framework package and it works fine.
I'll test it and report back. Thanks.
Works. I consider it fixed.
It seems like it was an SDL-dependent bug. Upgrade libSDL to 1.2.15 and everything works fine.
Reopen if necessary :) Thanks!
Andrej
Nowadays, all personal computers and workstations come with multiple cores. Most .NET applications fail to harness the full potential of this computing power. Even when developers attempt to do so, it is generally by means of low-level manipulation of threads and locks. This often leads to a situation where the code becomes either unreadable or full of potential threats. These threats often go undetected when running on a single-core machine.
The task parallel library allows you to write code which is human readable, less error prone, and adjusts itself with the number of Cores available.
So you can be sure that your software would auto-upgrade itself with the upgrading environment.
What is the first thing that you try to do when you see parts of your code not performing well? Lazy loading, LINQ queries, optimizing for loops, etc. We often overlook parallelization of the time-consuming, independent units of work.

Most often the CPU will show you the following story during your performance-intensive routines.
Shouldn’t your CPU be utilized more like this?
The Task Parallel Library (TPL) is a set of public types and APIs in the System.Threading and System.Threading.Tasks namespaces in the .NET Framework 4.0. The TPL scales the degree of concurrency dynamically to efficiently use all the cores that are available. By using the TPL, you can maximize the performance of your code while focusing on the work that your program is designed to accomplish.

The Task Parallel Library introduces the concept of the "Task". Task parallelism is the process of running these tasks in parallel. A Task is an independent unit of work which runs within a program. Benefits of identifying tasks within your system are:
The Task Parallel Library utilizes threads under the hood to execute these tasks in parallel. The decision of whether, and how many, threads to use is calculated dynamically by the runtime environment.

The creation of a thread comes with a significant cost. Creating a huge number of threads within your application also comes with the overhead of context switching. In a single-core environment it might even hurt performance, since a single core must serve all of the threads.

A task, on the other hand, dynamically determines whether it needs to create separate threads of execution. It uses the ThreadPool under the hood to distribute the work, avoiding the overhead of thread creation and unnecessary context switching when they are not required.
The following code snippet shows the creation of parallel tasks using Threads and Tasks.
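As a rough sketch (the work items are illustrative, not taken from the original sample):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadVsTask
{
    static void Main()
    {
        // Raw thread: we create, start and manage the thread ourselves.
        var thread = new Thread(() => Console.WriteLine("Work on a dedicated thread"));
        thread.Start();
        thread.Join();

        // Task: the runtime schedules the work onto the ThreadPool for us,
        // deciding how many threads to use based on the available cores.
        Task task = Task.Factory.StartNew(() => Console.WriteLine("Work on a pooled thread"));
        task.Wait();
    }
}
```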
You can download the sample used above.
So how is this different from creating a thread again? Well, one of the first advantages of using tasks over threads is that it becomes easier to guarantee that you are going to maximize the performance of your application on any given system. For example, if I fire off multiple threads that are all doing heavy CPU-bound work on a single-core machine, the work is likely to take significantly longer. Clearly, threading has overhead, and if you try to execute more CPU-bound threads on a machine than you have available cores to run them, you can run into problems. Each time the CPU has to switch from thread to thread there is a bit of overhead, and if you have many threads running at once this switching can happen quite often, causing the work to take longer than if it had just been executed synchronously. This diagram might help spell that out a bit better:
As you can see, each time we switch between pieces of work we incur a context switch between threads, so the total cumulative time to process in that manner is much longer, even though the same amount of work was done. If these pieces were being processed by two different cores, then we could simply execute them on two cores, and the two sets of work would get executed simultaneously, providing the highest possible efficiency.
Now that we have a slight idea of Tasks and their capacity, let us look into these Tasks in a little more detail and how they are different from ThreadPools.
Let us see how you can start a new execution on a ThreadPool.
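A minimal sketch (pre-TPL style; the callback body is illustrative):

```csharp
using System;
using System.Threading;

class ThreadPoolStart
{
    static void Main()
    {
        // Queue a work item; the pool picks a thread to run it on.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Working on a pool thread");
        });

        Console.ReadLine(); // Keep the process alive long enough to see the output.
    }
}
```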
Let us see what you will have to do if you wish to Wait() for the thread to finish.
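One common approach is to wire up a signalling primitive by hand, for example (a sketch):

```csharp
using System;
using System.Threading;

class WaitForPoolWork
{
    static void Main()
    {
        // We have to create and manage our own signal...
        var done = new ManualResetEvent(false);

        ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                // ... do the work ...
            }
            finally
            {
                done.Set(); // ...remember to signal completion...
            }
        });

        done.WaitOne(); // ...and block on it ourselves.
    }
}
```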
Messy! Isn't it?
What if you have to wait for 15 threads to finish? How do you capture the return values from multiple threads? How do you return control back to the GUI thread?

There are answers to these questions (delegates, raising events), but they lead to error-prone code once we drill into a chain of multi-threaded actions.
Let us see how Tasks handle this situation elegantly:
Waiting on Tasks:
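A sketch of what this looks like with the TPL (.NET 4.0 style; the work items are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class WaitingOnTasks
{
    static void Main()
    {
        var t1 = Task.Factory.StartNew(() => { /* work */ });
        var t2 = Task.Factory.StartNew(() => { /* work */ });

        // Wait for any number of tasks in a single call;
        // no manual signalling primitives required.
        Task.WaitAll(t1, t2);

        // Return values are captured directly on Task<T>.
        Task<int> sum = Task.Factory.StartNew(() => 40 + 2);
        Console.WriteLine(sum.Result); // Blocks until the task completes.
    }
}
```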
Execute another Async task when the current task is done:
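A sketch using ContinueWith (the fetch method and UI label are hypothetical):

```csharp
// Start an asynchronous operation.
Task<string> download = Task.Factory.StartNew(() => FetchData());

// Chain a continuation that runs when the first task completes;
// t.Result is the value returned by the antecedent task.
download.ContinueWith(t => Console.WriteLine(t.Result));

// A continuation can also be marshalled back onto the GUI thread:
download.ContinueWith(
    t => resultLabel.Text = t.Result,
    TaskScheduler.FromCurrentSynchronizationContext());
```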
In real world scenarios, we often have multiple operations which we want to perform asynchronously. Look at the following code snippet and see how you can model it alternatively.
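For example, several independent operations can run concurrently, with a single continuation once they have all finished (task names are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class MultipleOperations
{
    static void Main()
    {
        // Three independent, time-consuming operations run concurrently.
        var loadTextures = Task.Factory.StartNew(() => { /* ... */ });
        var loadSounds   = Task.Factory.StartNew(() => { /* ... */ });
        var loadLevel    = Task.Factory.StartNew(() => { /* ... */ });

        // A final step runs once all of them have completed.
        Task.Factory.ContinueWhenAll(
            new[] { loadTextures, loadSounds, loadLevel },
            completed => Console.WriteLine("All operations finished"));

        Console.ReadLine();
    }
}
```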
Parallel extensions have been introduced along with the Task Parallel Library to achieve data parallelism. Data parallelism refers to scenarios in which the same operation is performed concurrently (that is, in parallel) on elements in a source collection or array. .NET provides new constructs to achieve data parallelism: Parallel.For and Parallel.ForEach.
Let us see how we can use these:
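A self-contained sketch of both constructs (the data is illustrative):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class DataParallelism
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(1, 10).ToArray();
        var results = new int[numbers.Length];

        // Parallel.For partitions the index range across the available cores.
        Parallel.For(0, numbers.Length, i =>
        {
            results[i] = numbers[i] * numbers[i];
        });

        // Parallel.ForEach does the same for any IEnumerable<T>.
        Parallel.ForEach(numbers, n => Console.WriteLine(n));
    }
}
```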
The Parallel.ForEach construct shown above utilizes multiple cores and thus enhances performance in the same fashion.
The following graph shows, how parallel extensions improve the performance of the system:
You can download the code from the following link: [Download]
The parallel extensions and the Task Parallel Library help developers leverage the full potential of the available hardware. The same code can adjust itself to give you the benefits across various hardware. It also improves the readability of the code and thus reduces the risk of introducing nasty bugs which drive developers crazy.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
static void Main(string[] args)
{
const int SIZE = 1000000000;
Console.WriteLine("Assign");
byte[] buffer = new byte[SIZE];
Console.WriteLine("Parallel");
Parallel.For(0, SIZE, i => buffer[i] = 255);
Console.WriteLine("Traditional");
for (int i = 0; i < SIZE; ++i)
buffer[i] = 255;
Console.WriteLine("Finished");
}
Helpful information and examples on how to use SQL Server Integration Services.
The new Data Profiling Task in 2008 generates an XML output file. The output is easy to read, and could be used to make decisions within a control flow. For example, you could check whether or not to process your most recent data set based on criteria in the profile (are any values in certain columns null? Do my values fall within expected ranges?)
While you can parse this XML file in traditional ways, those of you who aren't afraid of using undocumented, unsupported APIs can make use of the classes in the Microsoft.SqlServer.DataProfiler assembly which we use to load the XML profile internally.
Let's use a simple scenario: We want to check one of the columns (AddressLine1) in our staging table to make sure that it contains no NULL values. If it is clean, we process it in a data flow task. If NULL values are found, we want to send an email to the DBA.
Here's what the control flow looks like:
We're first running our data profile, then processing the results in a script task. If we match our criteria (i.e. no nulls in the AddressLine1 column), the script task returns success and we run the "Process" data flow. If the criteria doesn't match, we fail the task, and run the send mail task instead.
Note, we can store the results of the data profiling task in a variable (as a string) instead of saving it out to disk.
Now the interesting part - the script task.
First, add a reference to the Microsoft.SqlServer.DataProfiler DLL. It can be found under %ProgramFiles%\Microsoft SQL Server\100\DTS\Binn\Microsoft.SqlServer.DataProfiler.dll (and should also be in the GAC).
The following code loads the data profile XML from the package variable, de-serializes it into a DataProfile object, and cycles through the profiles until it finds the one its looking for.
using Microsoft.DataDebugger.DataProfiling;
const string ColumnName = "AddressLine1";
readonly long Threshold = 0;
public void Main()
{
Dts.TaskResult = (int)ScriptResults.Success;
    // Retrieve the profile from the package variable
    string dataProfileXML = Dts.Variables["User::DataProfile"].Value.ToString();

    // De-serialize the XML into a DataProfile object and walk its profiles.
    // NOTE: this is the internal, unsupported API described above; the exact
    // de-serialization call is assumed here and may differ between releases.
    DataProfile profile = DataProfile.ImportFromXml(
        XmlReader.Create(new StringReader(dataProfileXML)));

    foreach (Profile p in profile.Profiles)
    {
// Check the profile type
if (p is ColumnNullRatioProfile)
{
// Match the column name
ColumnNullRatioProfile nullProfile = p as ColumnNullRatioProfile;
if (nullProfile.Column.Name.Equals(ColumnName, StringComparison.InvariantCultureIgnoreCase))
{
// Make sure it's within our threshold
if (nullProfile.NullCount > Threshold)
{
// Fail the task
Dts.TaskResult = (int)ScriptResults.Failure;
}
break;
}
}
}
}
Note, as I mentioned above, this API is for internal use, and is subject to change (it already has between CTP5 and CTP6, and will change again in the upcoming CTP-Refresh). I just thought some people out there might find this interesting.
Yeah, very interesting. I plan to put a post together outlining a supported way to do this -haven't got round to it yet- but if and when I do I'll definitely link back here.
-Jamie
I am trying to use 2008's data profiler XML output data in an SSRS report (I am under the impression my SSRS data source needs to be a SQL table). Is it possible for me to convert the XML profiler output to a SQL table using the API mentioned above? Regards,
There's nothing in the API that lets you automatically load the profile XML document into a sql table, however, you could do the mapping yourself with a little code.
I believe that SSRS supports a number of data source types, including XML and custom .NET objects. If you search the books online, you should be able to find a couple of different ways to access the data.
Thank you for the feedback. Since our reports already run off of SQL tables, I think I will try to convert the XML.
I tried following the steps mentioned in your post, but run into a problem due to a SSIS string variable memory limit error. (When I save the data profile results to an XML file, the sizes can get quite large for value frequency distros, which is the component I am using. One example I tried was over 200mb – is there a varchar(max) like variable in SSIS?) I spoke with a coworker who has a bit more XML experience, and he mentioned being able to bulk load XML files into a sql table using a schema mapping file.
What are your thoughts on going that route?
I created in VS a XSD file based on the data profiler XML output, but am getting stuck on adding the sql relationship code snippets for bulk loading.
The examples I have been trying to follow are here:
I can email you the XML, XSD, and create table statement if that would be useful.
Thanks for helping out a XML novice! Any feedback you have regarding the best approach would be much appreciated.
I made a bit of progress using the API. I'm stuck at the point where I try to access the freqdist values and counts - I believe some kind of collection / array is being used. If I can figure that out, plus solve the memory error issue, I think I'm close to having a solution. Regards,
using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Dts.Runtime.VSTAProxy;
using System.Windows.Forms;
using Microsoft.DataDebugger.DataProfiling;
string DataSource;
string DataBase;
string Schema;
string Table;
long RowCount;
string ColumnName;
long DistinctValues;
string Value;
long Count;
public void Main()
{
Dts.TaskResult = (int)ScriptResults.Success;
//SqlConnection sqlConn = new SqlConnection("DSN");
//sqlConn.Open();
    // Retrieve the profile from the package variable
    string dataProfileXML = Dts.Variables["User::xmlResults"].Value.ToString();

    // De-serialize the XML and walk its profiles.
    // NOTE: the exact de-serialization call on the unsupported API is
    // assumed here and may differ between releases.
    DataProfile profile = DataProfile.ImportFromXml(
        XmlReader.Create(new StringReader(dataProfileXML)));

    foreach (Profile p in profile.Profiles)
    {
if (p is ColumnValueDistributionProfile)
{
ColumnValueDistributionProfile ValueDist = p as ColumnValueDistributionProfile;
DataSource = ValueDist.Table.DataSource;
MessageBox.Show(DataSource,"DataSource");
DataBase = ValueDist.Table.Database;
MessageBox.Show(DataBase,"Database");
Schema = ValueDist.Table.Schema;
MessageBox.Show(Schema, "Schema");
Table = ValueDist.Table.Table;
MessageBox.Show(Table, "Table");
RowCount = ValueDist.Table.RowCount;
MessageBox.Show(RowCount.ToString(), "RowCount");
ColumnName = ValueDist.Column.Name;
MessageBox.Show(ColumnName, "ColumnName");
DistinctValues = ValueDist.NumDistinctValues;
MessageBox.Show(DistinctValues.ToString(), "DistinctValues");
// how do i access the ValueDistributionItem details (values and counts) ??
// the below does not seem to work
IEnumerator MyEnum = ValueDist.ValueDistribution.GetEnumerator();
while (MyEnum.MoveNext())
{
MessageBox.Show(MyEnum.Current.ToString());
}
// insert variables into sql table using ado connection
}
}
//sqlConn.Close();
}
Microsoft Research contributed the Data Profiling Task and Data Profile Viewer to Integration Services
Extending the Data Profiling Task
Is there anyway I can save the results to a SQL table?
I'm currently trying to use an ExecuteSqlTask to access the results from a package user variable and update the table but I don't know how this task can access package variables.
I would use a ScriptTask component but I need to be able to do this programmatically and as far as I am aware you cannot add a ScriptTask component programmatically.
Hi Liam,
It is possible to add a script task programmatically - see this post here:
You can use variables with the Execute SQL Task by binding variables to query parameters. You could create a stored procedure which takes in an XML document (which is how the Data Profiling Task will store the data), and then either store the value in a table, or perform some sort of XQuery on it.
I haven't tried out this approach myself, but theoretically it should work. :) I'd recommend using the ADO.NET connection type if you do - it should have the best handling for XML types.
Thanks mmasson. I managed to get this working with the ExecuteSQLTask. Works a treat :) | http://blogs.msdn.com/mattm/archive/2008/03/12/accessing-a-data-profile-programmatically.aspx | crawl-002 | refinedweb | 1,262 | 56.45 |
NOTE: Since writing this article we've started using Behaviour Designer. It's expensive (for the asset store) but it is really good and I highly recommend it.
Do you need an economical and effective way of using behavior trees? The fluent behavior trees API allows the coder-come-game-designer to have many of the benefits of traditional behavior trees with much less development time.
For many years I've been interested in behavior trees. They are an effective method for creating AI and game-logic. Many game or AI designers from the professional industry use a behavior tree editor to create behaviors.
Indie developers (or professional developers with budget constraints!) may not be able to afford to buy or build a (good) behavior tree editor. In many cases indie developers wear many hats and must stretch themselves across a number of roles. If this is your situation then you probably aren't a specialised game designer with a tools team at your disposal.
If this sounds familiar then you should consider fluent behavior trees. Your code editor becomes your behavior tree editor. You can achieve much of the power of behavior trees without having an editor or without needing specialist designers.
This article documents the technique and the open-source library. I present some theory and then practical examples. In addition I contrast against traditional behavior trees and other techniques.
Contents
Table of Contents generated with DocToc
- Contents
- Introduction
- Getting the code
- Theory
- Practice
- Conclusion
Introduction
Behavior trees have been in our consciousness for over 10 years now. They entered the games industry around 2005 thanks to the GDC talk on AI in Halo 2. Since then they have been popularized by multiple sources including the influential AiGameDev.com.
The technique presented here is a combination of a fluent API with the power of behavior trees. This is available in an open-source C# library. I personally have used the technique and the library for vehicle AI in a commercial driving simulator project that was built on Unity. The code, however, is general C# and is not limited to Unity. For instance you could easily use it with MonoGame. A similar library would work in C++ and other languages if anyone cares to port the code.
This article is for game developers who are looking for a cheap, yet expressive and robust method of building AI. It's cheap because you don't need to build an editor (your usual IDE will do fine) and because you don't then need to hire a game designer to use the editor. This methodology will suit indie devs who typically do a bit of everything: coding, art, design, etc.
If you are a professional game dev working for a company that has a tools or AI department, then it's possible this article won't help you. Those that can afford to build an editor and hire game designers, please do that. However fluent behavior trees could still be useful to you in other ways. For one thing they can allow you to experiment with behavior trees and test the waters before committing to building a fully data-driven behavior trees system. Fluent behavior trees can provide a cheap way of getting into behavior trees, before committing to the full-blown expensive system.
The idea for fluent behavior trees came to me during my work on promises for game development. After we discovered promises at Real Serious Games our use of them snowballed, eventually leading to a large article on the topic. We pushed promises far indeed, to the point where I realized that our promises were looking remarkably similar to behavior trees, as mentioned in the aforementioned article. This led me to think that a fluent API for behavior trees was viable, and more appropriate than promises in certain situations. Of course when a developer starts using fluent APIs they naturally start to see many opportunities to make use of them, and I am no exception. I know this sounds a bit like "if the only tool you have is a hammer, then every problem looks like a nail", but trust me, I have many other tools. Fluent behavior trees are another tool in the toolbox that can be brought out at appropriate times.
Getting the code
This C# library implements by the book behavior trees. That is to say: I did my research and implemented standard game dev behavior trees. The implementation is very simple and there is little in the way of additional embellishments. This library should work in any .Net 3.5 code-base. The reason it is .Net 3.5 is for compatibility with Unity. If needed the library can easily be upgraded to a higher version of .Net. The library has only been tested on Windows, however given how simple it is I expect it will also work on mobile platforms. If anyone has problems please log an issue on github.
The code for the library is shared via github:
From github you can download a zip of the code. To contribute please fork the code, make your changes and submit a pull-request.
You can build the code in Visual Studio and then move the DLL to your Unity project. Alternately you can simply put the code in a sub-directory of your Unity project.
A pre-compiled version the library is available on NuGet:
Theory
Before getting into the practical details of using fluent behavior trees I'll cover some brief theory and provide some resources for understanding behavior trees in more depth.
If you already have a good understanding of behavior trees please skip or skim these sections.
Behavior trees
Behavior trees are a fantastic way to construct and manage modular and reusable AI and logic. In the professional industry designers will use an editor to build, tweak and debug AI. During development they build a library of reusable and plugable behaviors. Over time this makes it quicker and easier to build AI for new entities, as the process of building AI becomes gluing together various pre-made behaviors then tweaking their properties. The editor makes it easy to quickly rewire AI, thus we can iterate faster to improve gameplay more quickly.
A behavior tree represents the AI or logic that makes an entity think. The behavior tree itself is kind of stateless in that it relies on the entity or the environment for storing state. Each update re-evaluates the behavior tree against the state of the entity and the environment. Each update the behavior tree picks up from where it left off last update. It must figure out what state it is in and what it should now be doing.
Behavior trees are scalable to massive AIs. They are a tree and trees can be nested hierarchically to an infinite level. This means they can represent arbitrarily complex and deeply nested AI. They are constructed from modular components so very large trees are manageable. The size of a mind that can be built with a behavior tree is only limited by the size of the tree that can be handled by our tools and our PCs.
There is much information and tutorials online about behavior trees: how they work, how they are structured and how they compare to other forms of AI. I've just covered the basics here. Here are links some for more indepth learning:
Fluent vs traditional
Traditional behavior trees are loaded from data. They are constructed and edited with a visual editor. They are stored in the file-system or a database. In contrast fluent behavior trees are constructed in code via an API. We can say they are code-driven rather than data-driven.
Why adopt this approach?
For a start it's cheap, and you still get many of the same benefits as traditional behavior trees. A usable first version of the fluent behavior tree library was constructed in a day. Contrast that to the many weeks it would take to build a functioning behavior tree editor. And you are a coder, right? (I've made that assumption already.) So why would you even want an editor like that when you can instead deal directly with a fluent API? You could buy an editor, and there are options, especially on the Unity asset store. Buying still costs though: maybe not much, but consider that you then have to invest time to learn that particular package. Not just how to use the editor, but also how to use its API to load and run the behavior tree.
I have found that fluent APIs make for a pleasant coding experience. They fit well with intellisense, where the compiler understands what your options are and can auto-complete expressions and statements for you.
Defining behavior trees in code gives you a structured way to hack together behaviors. That's right, I said hack together. Have you ever been in that situation where you designed something awesome, clean and elegant, only to find out in the months before completion that you had to hack the hell out of it because your original design couldn't cope with the changing requirements of the game? I've learned that it is very important to have an architecture that can cope with the hacking you must inevitably do at some point to make a great game. Finding a modular architectural mechanism that supports this increases our ability to work fast and adapt (we can hack when we need to) and ultimately it improves the design of our code (because we can chop out and rewrite the hacked-up modules if need be). I've heard this called designing for disposability, where we plan our system in a modular fashion with the understanding that some of those modules will end up a complete mess and we might want to throw them out. Fluent behavior trees support modular and compartmentalized behaviors, so I believe they help support the principle of disposability.
In any case, there is no single best approach to anything. You'll have to figure out the right way for your own project and situation, sometimes that will work out for you. Other times you'll find out the hard way how wrong you were. The benefit of trying fluent behavior trees is that you won't sink a lot of time into them. The time investment is minimal, but the potential return on the time is large. For that reason behavior trees can be an attractive option.
Ultimately there is nothing to stop you mixing and matching these approaches. Use the fluent API for fast prototyping while coding to test an idea. Then use a good editor when you need to build and deploy loads of behaviors into production.
For efficient turnaround time with fluent behavior trees you need a fast build time. You might want to work in a smaller testbed rather than working in your full game. Ultimately if you have a slow build time it's going to slow you down when coding and testing changes to fluent behavior trees. If that's going to be a problem then traditional behavior trees might be more suitable for you, but probably only if you can hot-load them into a running game. So if you are buying a behavior tree editor/system you should check for that feature!
Promises vs behavior trees
It would be remiss if I didn't spend some time on promises.
In the promises article we mentioned that we had pushed promises into the realm of behavior trees. We were already using promises to manage asynchronous operations such as loading levels, assets and data from the database. Of course, this is what promises are intended for and what they excel at.
It wasn't a huge leap for us to then start using promises to manage other operations such as video and sound playback. After all these are operations that also happen asynchronously over a period of time. When they complete they generally trigger some kind of callback that we handle to sequence the next operation (eg the next video or sound). Promises worked really well for this because they are great for sequencing chains of asynchronous operations.
The epiphany came when we started seeing the entire game as interwoven chains of asynchronous operations. After all, what is a game if not a complex web of logic-chains that endure over many frames. We extended promises to support composition of game logic. During this we realized that behavior trees may have been a better fit for some of what we were doing. However it was late in that project and at the time we didn't have a library for behavior trees. So we pushed on and completed the project.
We got a lot of mileage out of promises, although it would have been better if we could have combined promises and behavior trees.
We were able to do many behavior-tree-like things with promises, but there is a problem in doing so that is solved by the nature of behavior trees. Behavior trees are similar to promises in that they allow you to compose or chain logic that happens (asynchronously) over many frames or iterations of the game-loop. The main difference is in how they are ticked or updated, and that behavior trees are generally more suited to the step-by-step nature of the game-loop. Behavior trees are only ticked (or moved-along step-by-step) each iteration of the game-loop. When the behavior tree is no longer being ticked, the logic of the behavior tree is stopped and does not progress. So it's incredibly easy to cancel or otherwise back-out of a running behavior tree: you simply stop updating it.
A running chain of promises isn't quite as easy to get out of as a behavior tree. They are designed to represent in-flight asynchronous operations that just happen without being ticked or updated. This means you have little ongoing control over the operation until it either completes or errors. For example downloading a file from the internet... it's going to continue until it completes or errors. Aborting a chain of promises often involves injecting another promise specifically to throw an exception in the event that we need to reject the entire chain of promises. This can be achieved by racing the abortable-promise against the promise (or promises) that might need to be aborted. If this sounds complicated and painful, that's because it is. Alarm bells should be ringing.
Since then I've used fluent behavior trees in production and have also theorized that promises and behavior trees can easily be used in combination to achieve the benefits of both. Further on I'll show examples of how that might look.
Practice
The section describes how to use the fluent behavior tree library.
Behavior tree status
Behavior tree nodes may return the following status codes:
- Success: The node has finished what it was doing and succeeded.
- Failure: The node has finished, but failed.
- Running: The node is still working on something.
Basic usage
A behavior tree is created through BehaviourTreeBuilder. The root node for the completed tree is returned when the Build function is called:
```csharp
using FluentBehaviourTree;

...

IBehaviourTreeNode tree;

public void Startup()
{
    var builder = new BehaviourTreeBuilder();
    this.tree = builder
        .Sequence("my-sequence")
            .Do("action1", t =>
            {
                // .. do something ...
                return BehaviourTreeStatus.Success;
            })
            .Do("action2", t =>
            {
                // .. do something after ...
                return BehaviourTreeStatus.Success;
            })
        .End()
        .Build();
}
```
To move the behavior tree forward in time it must be ticked on each iteration of the game loop:
```csharp
public void Update(float deltaTime)
{
    this.tree.Tick(new TimeData(deltaTime));
}
```
Node names
Note the names that are specified when creating nodes:
```csharp
this.tree = builder
    .Sequence("my-sequence") // The node is named 'my-sequence'.
        // ... etc ...
    .End()
    .Build();
```
These names are purely for testing and debugging purposes. This allows a visualisation of the tree to be rendered, so we can more easily see the state of our AI. Debug visualisation is very important for understanding and debugging what our games are doing.
Node types
The following types of behavior tree nodes are supported.
Action / Leaf node
Call the Do function to create an action node at the leaves of the behavior tree. The return value (Success, Failure or Running) specifies the current status of the node.
```csharp
.Do("action", t =>
{
    // ... do something ...
    // ... query the entity, query the environment, then take some action ...

    // Return a status code indicating whether the action
    // succeeded, failed or is ongoing.
    return BehaviourTreeStatus.Success;
})
```
Sequence
Runs each child node in sequence. Fails for the first child node that fails. Moves to the next child when the current running child succeeds. Stays on the current child node while it returns running. Succeeds when all child nodes have succeeded.
.Sequence("my-sequence") .Do("action1", t => { // Sequential action 1. return BehaviourTreeStatus.Success; // Run this. }); .Do("action2", t => { // Sequential action 2. return BehaviourTreeStatus.Success; // Then run this. }) .End()
Parallel
Runs all child nodes in parallel. Continues to run until a required number of child nodes have either failed or succeeded.
int numRequiredToFail = 2; int numRequiredToSucceed = 2; .Parallel("my-parallel", numRequiredToFail, numRequiredToSucceed) .Do("action1", t => { // Parallel action 1. return BehaviourTreeStatus.Running; }); .Do("action2", t => { // Parallel action 2. return BehaviourTreeStatus.Running; }) .End()
Selector
Runs child nodes in sequence until it finds one that succeeds. Succeeds when it finds the first child that succeeds. For child nodes that fail it moves forward to the next child node. While a child is running it stays on that child node without moving forward.
.Selector("my-selector") .Do("action1", t => { // Action 1. return BehaviourTreeStatus.Failure; // Fail, move onto next child. }); .Do("action2", t => { // Action 2. return BehaviourTreeStatus.Success; // Success, stop here. }) .Do("action3", t => { // Action 3. return BehaviourTreeStatus.Success; // Doesn't get this far. }) .End()
Condition
The condition function is syntactic sugar for the Do function. It allows return of a boolean value that is then converted to a success or failure. It is intended to be used with Selector.
.Selector("my-selector") // Predicate that returns *true* or *false*. .Condition("condition", t => SomeBooleanCondition()) // Action to run if the predicate evaluates to *true*. .Do("action", t => SomeAction()) .End()
Inverter
Inverts the success or failure of the child node. Continues running while the child node is running.
.Inverter("inverter") // *Success* will be inverted to *failure*. .Do("action", t => BehaviourTreeStatus.Success) .End() .Inverter("inverter") // *Failure* will be inverted to *success*. .Do("action", t => BehaviourTreeStatus.Failure) .End()
Nesting behavior trees
Behavior trees can be nested to any depth, for example:
.Selector("parent") .Sequence("child-1") ... .Parallel("grand-child") ... .End() ... .End() .Sequence("child-2") ... .End() .End()
Behavior reuse
Separately created sub-trees can be spliced into parent trees. This makes it easy to build behavior trees from reusable functions.
private IBehaviourTreeNode CreateSubTree() { var builder = new BehaviourTreeBuilder(); return builder .Sequence("my-sub-tree") .Do("action1", t => { // Action 1. return BehaviourTreeStatus.Success; }); .Do("action2", t => { // Action 2. return BehaviourTreeStatus.Success; }); .End() .Build(); } public void Startup() { var builder = new BehaviourTreeBuilder(); this.tree = builder .Sequence("my-parent-sequence") .Splice(CreateSubTree()) // Splice the child tree in. .Splice(CreateSubTree()) // Splice again. .End() .Build(); }
Promises + behavior trees
Following are some theoretical examples of how to combine the power of fluent behavior trees and promises. They are integrated via the promise timer.
PromiseTimer promiseTimer = new PromiseTimer(); public void Update(float deltaTime) { promiseTimer.Update(deltaTime); } public IPromise StartActivity() { IBehaviourTreeNode behaviorTree = ... create your behavior tree ... return promiseTimer.WaitUntil(t => behaviorTree.Update(t.elapsedTime) == BehaviourTreeStatus.Success ); }
The
StartActivity function starts an activity that is represented by a behavior tree. We use the promise timer's
WaitUntil function to resolve the promise once the behavior tree has completed. This is a simple way to combine promises and behavior trees and have them work together.
This could be improved slightly with a overloaded
WaitUntil that is specific for behavior trees:
public IPromise StartActivity() { IBehaviourTreeNode behaviorTree = ... create your behavior tree ... return promiseTimer.WaitUntil( behaviorTree, BehaviourTreeStatus.Success ); }
A real example
Now I want to show a real world example. The code examples here are from the the driving simulator project. This is a larger example, but in the scheme of things it is actually quite a simple use of behavior trees. The vehicle AI in the driving sim was just complex enough that structuring it as a behavior tree made it much more manageable.
The behavior tree presented here makes use of many helper functions. So much of the actual work of querying and updating the entity and the environment is delegated to the helper functions. I won't show the detail of the helper functions, this means I can show the higher level logic of the behavior tree without getting overwhelmed by the details.
This code builds the vehicle AI behavior tree:
behaviourTree = builder .Parallel("All", 20, 20) .Do("ComputeWaypointDist", t => ComputeWaypointDist()) // Always try to go at at the speed limit. .Do("SpeedLimit", t => SpeedLimit()) .Sequence("Cornering") .Condition("IsApproachingCorner", t => IsApproachingCorner() ) // Slow down vehicles that are approaching a corner. .Do("Cornering", t => ApplyCornering()) .End() // Always attempt to detect other vehicles. .Do("Detect", t => DetectOtherVehicles()) .Sequence("React to blockage") .Condition("Approaching Vehicle", t => IsApproachingVehicle() ) // Always attempt to match speed with the vehicle in front. .Do("Match Speed", t => MatchSpeed()) .End() .Selector("Stuff") // Slow down for give way or stop sign. .Sequence("Traffic Light") .Condition("IsApproaching", t => IsApproachingSignal()) // Slow down for the stop sign. .Do("ApproachStopSign", t => ApproachStopSign()) // Wait for complete stop. .Do("WaitForSpeedZero", t => WhenSpeedIsZero()) // Wait at stop sign until the way is clear. .Do("WaitForGreenSignal", t => WaitForGreenSignal()) .Selector("NextWaypoint Or Recycle") .Condition("SelectNextWaypoint", t => TargetNextWaypoint() ) // If selection of waypoint fails, recycle vehicle. .Do("Recycle", t => RecycleVehicle()) .End() .End() // Slow down for give way or stop sign. .Sequence("Stop Sign") .Condition("IsApproaching", t => IsApproachingStopSign() ) // Slow down for the stop sign. .Do("ApproachStopSign", t => ApproachStopSign()) // Wait for complete stop. .Do("WaitForSpeedZero", t => WhenSpeedIsZero()) // Wait at stop sign until the way is clear. .Do("WaitForClearAwareness", t => WaitForClearAwarenessZone() ) .Selector("NextWaypoint Or Recycle") .Condition("SelectNextWaypoint", t => TargetNextWaypoint() ) // If selection of waypoint fails, recycle vehicle. 
.Do("Recycle", t => RecycleVehicle()) .End() .End() .Sequence("Follow path then recycle") .Do("ApproachWaypoint", t => ApproachWaypoint()) .Selector("NextWaypoint Or Recycle") .Condition("SelectNextWaypoint", t => TargetNextWaypoint() ) // If selection of waypoint fails, recycle vehicle. .Do("Recycle", t => RecycleVehicle()) .End() .End() .End() // Drive the vehicle based on desired speed and direction. .Do("Drive", t => DriveVehicle()) .End() .End() .Build();
This diagram describes the structure of the vehicle AI behavior tree:
Here is the expanded sub-tree for the traffic light logic:
Here is the expanded sub-tree for the follow path logic:
In this example I've glossed over a lot of the details. I wanted to show (at a high-level) a real world example and using the diagrams I have illustrated the tree structure of the code.
Conclusion
In this article I've explained how to create behavior trees in code through a fluent API. I've focused on my open-source C# library with examples in this article are built on. I've used these techniques in one commercial Unity product (the driving simulator). I look forward to finding future opportunities to use fluent behavior trees and continuing to build on the ideas presented here.
If you have ideas for improves, please fork the github repo and start contributing! If you use the library and find problems please log an issue.
Have fun! | https://codecapers.com.au/fluent-behavior-trees-for-ai-and-game-logic/ | CC-MAIN-2021-49 | refinedweb | 3,893 | 57.47 |
I'm transporting 4 tablespaces from a 8.1.6.2 production DB to an 8.1.6.2 dev DB on another server. I've completed all the prep steps, exported the metadata, and remote copied the datafiles to the target server. However, when I attempt to import the metadata, I get the following:
Export file created by EXPORT:V08.01.06 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and WE8ISO8859P1 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 1565:
"BEGIN sys.dbms_plugts.beginImpTablespace('TNT_DATA',10,'SYS',10,0,16384,2"
",211520805898,1,240,32,16,0,0,1,0,0,3042992015,1,0,32079,NULL,49,447319235,"
"NULL,NULL); END;"
IMP-00003: ORACLE error 1565 encountered
ORA-01565: error in identifying file '/d1u32/oracle/tntdev/datafiles/tnt_indx01.
dbf'
ORA-06512: at "SYS.DBMS_PLUGTS", line 1364
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully
It looks as though dbms_plugts is saying either looking for the "tnt_indx" datafile in the tnt_data tablespace, or the metadata is saying that "tnt_indx" datafile belongs to the tnt_data tablespace - both of which is not true. If you need any additional info, plz let me know. Thanx!
Tim Hussar
OS, bock_size and other requirement are the same?? secondly the transport tablespace must be in read only before you tnansprot!!...
nguyenjl
yes, all are the same between source and target. and yes, the tbs was in read-only mode. the only thing I can think of is that the char sets are supposed to be the same, yet our source export seems to have been done in US7ASCII whereas the target database import is using WE char set. the nat char sets are the same, though. also, I know that TTS has worked for these 2 DBs before. my source DB is no longer available (it was an EMC BCV mirror that's now been reestablished). looks like I'll haveta go with a regular export/import on this.
Forum Rules | http://www.dbasupport.com/forums/showthread.php?22989-transportable-tablespace-errors | CC-MAIN-2014-35 | refinedweb | 354 | 65.42 |
Angular 6 HttpClient assign resulting payload to array
angular httpclient post
angular 2 http get example
angular http get with body
angular/common/http
httpoptions angular
angular 6 http post example with parameters
angular-httpclient set headers
I am attempting to do a get call in angular the call itself works as I can put a log in the subscribe and see the returned data the issue I am having is I can't seem to assign the data to my existing array (Currently Empty) and as the code below is of type
Event[]. I have tried using a map on the data array which is also of type
Event[] but no luck and the same with push although I believe this is because you can't push an array. I am sure there is something simple I am missing or can't find.
Here is the call I am making and bellow that the Event model.
this.httpClient.get<Event[]>('').subscribe((data) => this.events = data); export class Event { constructor(public name: String, public date: Date, public time: Date) {} }
I am new to angular so I could be doing it all wrong any help is much appreciated.
EDIT
I have dome some more research but still no joy maybe this is something to do with having it in subscribe. I tried some of the array clone solutions from here
EDIT 2
Looking further I see that the contents of subscribe are a function is there something I am missing with scope does my
this.events need passing in some way, it's set at the class level.
EDIT 3
import { Event } from '../shared/event.model'; import { Injectable } from '@angular/core'; import { Http, Response } from '@angular/http'; import { Subject } from 'rxjs'; import { map } from 'rxjs/operators'; import { HttpClient } from '@angular/common/http' @Injectable() export class AdminService { eventsChanged = new Subject<Event[]>(); private events: Event[] = []; constructor(private http: Http, private httpClient: HttpClient) {} getEvents() { this.httpClient.get<Event[]>('') .pipe( map( (data: Event[]) => data.map(event => { // may need to coerce string to Date types return new Event(event.name, event.date, event.time) }) ) ) .subscribe((events: Event[]) => this.events = events); console.log(this.events); return this.events; }
I am then using this call in my component this hasn't changed from when it worked using a local array of
Event[].
this.events = this.adminService.getEvents();
The base issue is you are attempting to return the
Event[] data from your
AdminService.getEvents() method prior to
httpClient.get<Event[]>() resolving/emitting and
subscribe() executing/assigning, that is why it is always returning an empty array. This is simply the asynchronous nature of
HttpClient and RxJS.
@Injectable() export class AdminService { // ... getEvents() { // this happens after console.log() and return this.events .subscribe((events: Event[]) => this.events = events); // this executes before get()/subscribe() resolves, so empty [] is returned console.log(this.events); return this.events; }
Instead return the
get<Event[]>().pipe() instead for the
@Component to call and utilize:
import { Event } from '../shared/event.model'; import { Injectable } from '@angular/core'; import { Http, Response } from '@angular/http'; import { Subject, Observable } from 'rxjs'; import { map } from 'rxjs/operators'; import { HttpClient } from '@angular/common/http' @Injectable() export class AdminService { eventsChanged = new Subject<Event[]>(); constructor(private http: Http, private httpClient: HttpClient) {} getEvents(): Observable<Event[]> { return this.httpClient.get<Event[]>('') .pipe( map( (data: Event[]) => data.map(event => { // may need to coerce string to Date types return new Event(event.name, event.date, event.time) }) ) ); }
Component:
@Component({ /* ... */ }) export class EventsComponent implements OnInit { events$: Observable<Event[]>; constructor(private adminService: AdminService) {} ngOnInit() { this.events$ = this.adminService.getEvents(); } }
Template with async pipe:
<ul> <li *{{event.name}}</li> </ul>
Or:
@Component({}) export class EventsComponent implements OnInit { events: Event[] = []; constructor(private adminService: AdminService) {} ngOnInit() { this.adminService.getEvents() .subscribe(events => this.events = events); } }
Template:
<ul> <li *{{event.name}}</li> </ul>
On a side note,
HttpClient with a type will not automatically create instances of
Event class from a typed
get(), it is objects with a set type. You could use the RxJS
map operator in combination with
Array.prototype.map to create an instance of class
Event for each
Event typed object return from
get<Event[]>. Also be careful with naming it
Event as it could conflict with an existing symbol Event.
Hopefully that helps!
Communicating with backend services using HTTP, The HttpClient in @angular/common/http offers a simplified client HTTP API Re-subscribing to the result of an HttpClient method call has the effect of reissuing the HTTP request. Then import and add it to the AppModule providers array like this: Typescript disallows the following assignment because req.url is readonly Trying to send a multidimensional array to an API but angular http.get doesn't let me, this is total rubbish, time after time I get baffled how such simple things are so complicated or not possible at all with angular 2+, totally regretting not picking up something else like react.
There are multiple problems with your code.
As per your code, you are unable to understand the RxJS Observable. You call subscribe, when you are finally ready to listen. So your
getEvents()method should not subscribe, rather return the observable. i.e.
getEvents(): Observable<Event[]> { return this.httpClient .get<Event[]>("") .pipe( map(data => data.map(event => new Event(event.name, event.date, event.time)) ) ); }
Since you have used
<ul> <li *{{event.name}}</li> </ul>
The
async pipe does the
subscribe for you in the html. Just expose the
events$ like you have already done in the
ngOnInit().
I wouldn't define my interface as
Eventbecause
Eventis already part of RxJS.
Also, since you are calling your console.log outside the subscription, it will always be null, unless you add a
tap from RxJS like this.
import { Event } from "../shared/event.model"; import { Injectable } from "@angular/core"; import { Http, Response } from "@angular/http"; import { Subject } from "rxjs"; import { map, tap } from "rxjs/operators"; import { HttpClient } from "@angular/common/http"; @Injectable() export class AdminService { constructor(private http: Http, private httpClient: HttpClient) {} getEvents(): Observable<Event[]> { return this.httpClient .get<Event[]>("") .pipe( map(data => data.map(event => new Event(event.name, event.date, event.time)) ), tap(console.log) ); } }
Also, why are you calling both
Httpand
HttpClient. Either use the client or http.
Happy Coding
Angular HTTP Client - QuickStart Guide, This post will be a quick practical guide for the Angular HTTP Client module. How to do HTTP Requests in sequence, and use the result of the first As we can see this data is a JSON structure, with no arrays. Instead, a call to set will return a new HttpParams object containing the new value properties. Note: Angular 6 deprecated the old http client in favor of the newer http client module which is an improved version of the http client API that lives in the @angular/common/http package. The old API is still available in @angular/http in Angular 6, but is now removed in Angular 9.
From reviewing your third edit, I think the
map/pipe issues are a red-herring and the core issue is the implementation of an asynchronous
http call inside of a public method signature that behaves synchronously, i.e.,
getEvents().
For example, I would expect this code snippet to behave similarly, in that the method is able to immediately return
this.events, whose value is still an empty array, and then proceed with executing the specified behavior inside of the asynchronous
setTimeout
private events: Event[] = []; private event1: Event = new Event("event1", "date", "time"); private event2: Event = new Event("event2", "date", "time"); public getEvents(): Event[] { setTimeout(() => { this.events = [..., event1, event2]; }, 5000); return this.events; }
For your code example, are you able to get the desired results with a code implementation similar to this. Complete functional implementation available on StackBlitz:
export class AdminService { ... public getEvents(): Observable<Event[]> { // Returns an observable return this.httpClient.get<Event[]>(url); } } export class EventComponent { constructor(private adminService: AdminService) {} public events: Event[] = []; ngOnInit() { // Subscribe to the observable and when the asynchronous method // completes, assign the results to the component property. this.adminService.getEvents().subscribe(res => this.events = res); } }
HttpParams doesn't accept array · Issue #19071 · angular/angular , closca opened this issue on Sep 6, 2017 · 22 comments. Open let payload = new HttpParams(); payload.set('ids', [1,2,3,4]); feat(common): accept numeric value for HttpClient params #19595 new HttpParams({ fromObject: object, }); Previous result before PR: key=foo&key=bar New result after PR: key[]=foo&key[]=bar..
Consuming APIs in Angular: The Model-Adapter Pattern, A TypeScript-friendly pattern to improve how you integrate Angular apps and REST APIs. the resulting Object in our Angular components and HTML templates. As it turns out, Angular's HttpClient can do just this for you, so we wouldn't even have to We'll map the data array to an array of Course objects:. For this purpose, Angular provides the HttpClient service. HTTP client in Angular - Getting Started Import HttpClientModule. To use any service, you need to import the respective module. For HttpClient you can get started by importing the HttpClientModule. You may import it into any Angular module utilizing the HttpClient.
Observables Array Operations, Here we have used it to create a new result set by excluding any user whose id property is less than six. Now when our subscribe callback gets invoked, the data it That command will create a new Angular 8 app with the name `angular-httpclient` and pass all questions as default then the Angular CLI will automatically install the required NPM modules. After finished, go to the newly created Angular 8 folder then run the Angular 8 app for the first time.
Angular 2: Using the HTTP Service to Write Data to an API, You can find a tutorial for the HttpClient service in my post Angular 5: The entire operation will result in an error state if any single request fails Important: the In-memory Web API module has nothing to do with HTTP in Angular. If you're just reading this tutorial to learn about HttpClient, you can skip over this step. . If you're coding along with this tutorial, stay here and add the In-memory Web API n
- I see there are three votes to close, did I not give enough information? If not what do you need?
- In your original example, if you instead log inside the
subscribe(), are you seeing the data you expect?
subscribe((data) => console.log(data));
- Yes I see the json posted from the server it's just not saving to the other array and both are shown as type
Event[]
- Can you clarify the intent/need for the multiple
rxjsoperators? I'm familiar with seeing the
mapoperator in code that imports and uses the
HttpModule, but one of the advantages of the
HttpClientModuleis that it handles type-checking and serialization for you. The example StackBlitz I provide in my solution below removes the
rxjsoperators and is still able to correctly handle serialization to the specified type.
- I don't know I am only half way through the angular course and the instructor likes to show you multiple implementations then change them later showing you how to do it badly then how it should be done.
- Sorry but same issue this.events is still empty
- When/where exactly are you trying to use
this.events, if you are attempting to log it prior resolving it will be undefined.
- it's not undefined it's empty I define it at the top shown as
[]the get runs in a function that returns this.events
- You please would need to show how you are using this.events and calling the get() in your component exactly. You are absolutely sure you are getting data from the server? There must be something else going on.
- @bobthemac I've updated the answer, you simply cannot return
this.eventsfrom the service like that as it resolves before
subscribe()executes. Instead return the
Observable<Event[]>instead and execute the subscribe in the @Component. Thanks!
- I am part way through an angular course, haven't got to observables yet. I haven't used async in the template, and haven't gotten to the upgrade everything to httpclient part either. Thanks for the answer though some good point in their all taken onboard. | http://thetopsites.net/article/52129889.shtml | CC-MAIN-2021-04 | refinedweb | 2,034 | 55.24 |
Remove dependency on QTKit from wekitExitFullscreen()
Created attachment 95567 [details]
Patch
This is the first bug of two. The second one will build upon this one to implement the video full-screen API in WebKit2.
Created attachment 95570 [details]
Patch
Discovered a small problem with the volume slider; it was scaling between 0-100, and the previous patch was setting it to 0-1.
Attachment 95570 [details] did not pass style-queue:
Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'Source/WebCore/ChangeLog', u'Source/WebCor..." exit_code: 1
Source/WebKit/mac/WebView/WebMediaDelegate.h:32: Code inside a namespace should not be indented. [whitespace/indent] [4]
Total errors found: 1 in 18 files
If any of these errors are false positives, please file a bug against check-webkit-style.
Comment on attachment 95570 [details]
Patch
View in context:
Damn this patch will badly break
> Source/WebCore/ChangeLog:106
> + * platform/mac/WebWindowAnimation.h: Renamed from Source/WebKit/mac/WebView/WebWindowAnimation.h.
There is a previous patch doing almost that waiting for review. Perhaps we should cancel it.
(In reply to comment #4)
> (From update of attachment 95570 [details])
> View in context:
>
> Damn this patch will badly break
Yes, it probably will. However, it looks like QTKitFullScreenVideoHandler could just create an instance of the new WebMediaHandler class to pass along to the fullscreen controller (instead of passing an HTMLElement directly). Apart from that, and different include paths for WebVideoFullscreenController.h, nothing here should be insurmountable.
> > Source/WebCore/ChangeLog:106
> > + * platform/mac/WebWindowAnimation.h: Renamed from Source/WebKit/mac/WebView/WebWindowAnimation.h.
>
> There is a previous patch doing almost that waiting for review
>. Perhaps we should cancel it.
Lets see what happens to this bug first. It may get roundly rejected. :)
Comment on attachment 95570 [details]
Patch
Clearing r?; I'll break this bug into smaller pieces and reattach.
Created attachment 95648 [details]
[PATCH 1/5] [LANDED] Move Full Screen Controllers into WebCore.
Created attachment 95649 [details]
[PATCH 2/5] Add two new utility clasess: WebEventListener and WebMediaDelegate
Created attachment 95650 [details]
[PATCH 3/5] WebVideoFullscreenController and WebVideoFullscreenHUDController now use a delegate
Created attachment 95651 [details]
[PATCH 4/5] Disable calling into new full-screen API from video full-creen API. Enable video full-screen mode for AVFoundation.
There is no 5/5. :-D
Who should be CC'd to review these?
(In reply to comment #12)
> Who should be CC'd to review these?
Eric Carlson I believe.
Comment on attachment 95648 [details]
[PATCH 1/5] [LANDED] Move Full Screen Controllers into WebCore.
View in context:
> Source/WebCore/WebCore.exp.in:1371
> _wkMeasureMediaUIPart
> +_wkCreateMediaUIBackgroundView
> +_wkCreateMediaUIControl
> +_wkWindowSetAlpha
> +_wkWindowSetScaledFrame
> _wkMediaControllerThemeAvailable
It looks like exports in this file should be in sort order when not inside #if defs.
Jer could you land patch 1, so I can rebase my patch, it's quite needed for Qt? It seems that patch 2,3 and 4 are not related to us.
Thanks.
Patch 1/5 is blocking work on bug #61728, so I'll land that patch now.
Committed r89271: <>
Whoops, re-opening. Still have patches 2, 3, and 4 to review.
Comment on attachment 95648 [details]
[PATCH 1/5] [LANDED] Move Full Screen Controllers into WebCore.
Landed patch 1/5. Clearing r+.
Committed a 32-bit build fix for patch 1/5.
Committed r89292: <>
Committed a build fix for Leopard.
Committed r89296: <>
This bug is no longer being actively pursued. Clearing the r? flags from the outstanding patches and marking this bug as "not to be fixed". | https://bugs.webkit.org/show_bug.cgi?id=61843 | CC-MAIN-2017-13 | refinedweb | 584 | 51.24 |
normal()
Contents
normal()#
Sets the current normal vector.
Examples#
def setup(): py5.size(100, 100, py5.P3D) py5.no_stroke() py5.background(0) py5.point_light(150, 250, 150, 10, 30, 50) py5.begin_shape() py5.normal(0, 0, 1) py5.vertex(20, 20, -10) py5.vertex(80, 20, 10) py5.vertex(80, 80, -10) py5.vertex(20, 80, 10) py5.end_shape(py5.CLOSE)
Description#
Sets the current normal vector. Used for drawing three dimensional shapes and surfaces,
normal() specifies a vector perpendicular to a shape’s surface which, in turn, determines how lighting affects it. Py5 attempts to automatically assign normals to shapes, but since that’s imperfect, this is a better option when you want more control. This function is identical to
gl_normal3f() in OpenGL.
Underlying Processing method: normal
Signatures#
normal( nx: float, # x direction ny: float, # y direction nz: float, # z direction /, ) -> None
Updated on September 01, 2022 16:36:02pm UTC | https://py5.ixora.io/reference/sketch_normal.html | CC-MAIN-2022-40 | refinedweb | 152 | 62.34 |
Windows SharePoint Services (WSS) provides an easy way to create workspaces where
information can be collected to share information among users working toward
a common goal. This goal may be of different types, including making presentations
for an event, execution of a business process, or even materials for a project.
Always, organizations build presentations as an output of this collaboration
or as a part of regular reporting on the status of the team. Microsoft PowerPoint
provides users with a powerful platform to make presentations. However, the presentation
creator expend to much effort and time to locate, duplicate and organize this
information into the presentation. The presentation's authors are likely retyping,
cutting, and pasting or otherwise manually importing content. But this way is
somehow error prone and time consuming. So, in this article we not only look
for reducing the effort; we also look for increase of accuracy. This article details how we can extend PowerPoint to provide a tool that has capability of building
slides populated with content stored in a SharePoint site.
To understand how this will be useful, let's take an example. In almost any
organization there is a chain of command; people farther down are always summarizing
and reporting upward. Consider a software development firm running a long-term
project that requires a weekly status meeting, with the material presented as a
PowerPoint slide show. The team creates a SharePoint Meeting Workspace to support
this process and uses it to store meeting agendas, the list of attendees, and a good
deal of other information. Every week the presentation's author pulls the agenda
information from the site and builds the presentation in PowerPoint, mostly by
copying and pasting or manually importing content. Many similar scenarios exist
throughout the commercial world. In this article we will see how to reduce this kind
of work by giving PowerPoint a handy tool that fetches the SharePoint meeting agenda
and creates the slides.
Example Overview
For this type of solution, we will extend Microsoft PowerPoint to provide a tool
for the author to import data from a SharePoint site. We have focused our efforts
to build a briefing presentation made of the items which are imported from a
typical meeting workspace in WSS. Once the tool has been launched, it will examine
a site provided by user to see if it contains an objectives list, a list of agenda
items, or any slide libraries. The objectives list and agenda lists are default
list templates for meeting workspace in SharePoint. In this example, we also
have included slide library because users who are a part of this collaboration
may have built slides of their own that may be included in our briefing output
presentation. Once the site has been crawled, the custom import tool will give
a wizard process that offers the user to build slides for the content it finds
in the site.
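To make the crawling step concrete, here is a sketch (not the article's finished code) of how the tool might detect the relevant lists and pull objective items using the WSS Lists web service (`_vti_bin/Lists.asmx`). The `ListsService.Lists` proxy name is an assumption — it comes from whatever Web Reference you add to the project — and the list template IDs (201 for Agenda, 207 for Objectives, 2100 for Slide Library) should be verified against your environment.

```csharp
using System.Collections.Generic;
using System.Net;
using System.Xml;

// Sketch: query the WSS Lists web service to see which lists the site
// contains and to read the objective items. "ListsService.Lists" is the
// hypothetical proxy generated by adding a Web Reference to
// http://<site>/_vti_bin/Lists.asmx.
public class SiteCrawler
{
    private readonly ListsService.Lists proxy;

    public SiteCrawler(string siteUrl)
    {
        proxy = new ListsService.Lists();
        proxy.Url = siteUrl.TrimEnd('/') + "/_vti_bin/Lists.asmx";
        proxy.Credentials = CredentialCache.DefaultCredentials;
    }

    // True if the site has a list built from the given template
    // (e.g., 201 = Agenda, 207 = Objectives, 2100 = Slide Library).
    public bool SiteHasList(int templateId)
    {
        XmlNode lists = proxy.GetListCollection();
        foreach (XmlNode list in lists.ChildNodes)
            if (list.Attributes["ServerTemplate"].Value == templateId.ToString())
                return true;
        return false;
    }

    // Titles of the items in the site's default objectives list.
    public List<string> GetObjectiveTitles()
    {
        List<string> titles = new List<string>();
        XmlNode result = proxy.GetListItems(
            "Objectives", null, null, null, null, null, null);
        foreach (XmlNode child in result.ChildNodes)
            if (child.Name == "rs:data")
                foreach (XmlNode row in child.ChildNodes)
                    if (row.Name == "z:row")
                        titles.Add(row.Attributes["ows_Title"].Value);
        return titles;
    }
}
```

The same `GetListItems` call, pointed at the agenda list instead, yields the rows used to populate the agenda table described later in the walkthrough.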
The application is flexible enough that the author can build slides in whatever
presentation he is working on. To meet this requirement, the solution is packaged as
an application-level add-in, so the tool is always available to the author from
within PowerPoint. The controls that launch the tool are added to the Microsoft
Office ribbon interface. From the ribbon, the author can launch the tool, which
presents itself as a custom task pane. This pane walks the user through the wizard's
steps and prompts, building slides in the current presentation according to the
user's instructions.
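In VSTO terms, wiring up such a task pane takes only a few lines. The sketch below assumes a WinForms `UserControl` named `WizardControl` (a hypothetical name) that hosts the wizard's steps; the `ThisAddIn` class itself is generated for you by the VSTO project template.

```csharp
// Sketch: register the wizard as a custom task pane when the add-in starts.
public partial class ThisAddIn
{
    private Microsoft.Office.Tools.CustomTaskPane briefingPane;

    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        // WizardControl is a hypothetical UserControl hosting the wizard UI.
        briefingPane = this.CustomTaskPanes.Add(new WizardControl(), "Briefing Builder");
        briefingPane.Visible = false;  // the ribbon button will show it
    }

    // Called from the ribbon callback to show or hide the wizard pane.
    public void ToggleWizard()
    {
        briefingPane.Visible = !briefingPane.Visible;
    }

    private void ThisAddIn_Shutdown(object sender, System.EventArgs e)
    {
    }
}
```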
As the wizard runs, it asks the user whether he wants a presentation slide
built for the content it finds in the site. If the user opts in, the add-in
queries the SharePoint site for the objectives list items and places them in
a new slide as a bulleted list. In the same way, if the user chooses to build a slide
from the agenda, the add-in queries the SharePoint site for the agenda list
items and places them in a new slide, displayed within a table. The add-in
also populates the notes page of the slide with details such as the owner of each agenda
item and notes that are contained in the SharePoint site. In the last step, the
add-in displays a list of the slide libraries it finds on the site as links.
Each link opens the library in the browser so that the user can select slides
to import into the presentation he is working on.
Walking through the example development
In the following section we will detail the main elements of the solution. The walkthrough
will guide you through getting started with Visual Studio to create the briefing PowerPoint
add-in. We will also see how to extend the Office ribbon interface using
VSTO. The wizard process will be explained as well, so you can see how it is implemented.
Finally, the walkthrough will explain the work that goes into building each of the
PowerPoint slides for our briefing. This includes creating new slides, placing
content as a bulleted list, inserting and populating a table, placing content
on a slide's notes page and linking to the SharePoint slide libraries.
Creating the Project
Building a PowerPoint add-in project in Visual Studio is very easy using VSTO's
project types. Simply open Visual Studio 2008 and choose to create a new project.
Under the Visual C# language node, expand the Office node and select 2007. This
will display the Office 2007 VSTO project templates. Select PowerPoint 2007 Add-in.
Name the project and solution BriefingCS. Note that the framework selection
dropdown should show the value .NET Framework 3.5.
Once the project is created, you should see a few files already created by
the project template in the Solution Explorer. The PowerPoint node, when
expanded, contains a code file named ThisAddIn.cs. This file is the main entry
point for our add-in. In this file a developer can write code for events at the
add-in scope, such as Startup and Shutdown. There are also PowerPoint
application-level events such as SlideShowBegin and SlideShowEnd. These events
become accessible by registering handlers on the Application object in code.
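As a hedged illustration (this code is not from the original walkthrough), registering one of those application-level events in the Startup handler might look like the following; the message-box body is just a placeholder:

```csharp
private void ThisAddIn_Startup(object sender, System.EventArgs e)
{
    // Register for a PowerPoint application-level event.
    // EApplication_SlideShowBeginEventHandler is the delegate type
    // exposed by the PowerPoint primary interop assembly.
    this.Application.SlideShowBegin +=
        new Microsoft.Office.Interop.PowerPoint.EApplication_SlideShowBeginEventHandler(
            Application_SlideShowBegin);
}

private void Application_SlideShowBegin(
    Microsoft.Office.Interop.PowerPoint.SlideShowWindow wn)
{
    // Illustrative placeholder: react to the slide show starting.
    System.Windows.Forms.MessageBox.Show("Slide show started.");
}
```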
Ribbon Customization
First, we need a way for the user to launch our briefing application from within
PowerPoint. To provide this starting point, we will focus our attention on the
Microsoft Office ribbon. The ribbon interface occupies the top portion of most
Office applications and provides a different experience than the layered
menus and toolbars of previous Office products. This interface simplifies the
user experience by improving the organization of commands and providing a more
graphical representation of options.
In earlier versions of Office, developers spent enormous effort and time writing
code against the CommandBars object to create new buttons, extend menus, hide
existing options and make other customizations. Because these changes were made
with compiled code, developers found that the code was not portable or reusable across
different Microsoft Office applications. The ribbon interface therefore provides the
developer with an improved programming model for customizing it, in addition to
providing an enhanced end-user experience.
The biggest difference in the ribbon programming model is that customizations to the
UI are done using XML. Using a custom XML file, a developer can manipulate the
ribbon interface, controlling its tabs, groups and controls. This customization
includes adding controls to the ribbon as well as hiding them. Because the customizations
are XML based, they are reusable in different projects and different Office applications.
In addition to the XML declarations, a class that serves the XML is also provided and
includes event handlers for running code in response to user interaction. VSTO
does most of this plumbing automatically, allowing us to focus on just
the customization and saving us a great deal of labour.
To implement the ribbon add-in in our project, first add a new item. From the Add
New Item dialog box, select the Ribbon (Visual Designer) item template and
name the file RibbonAddin1.cs. When the process is complete, Visual Studio will
actually have added three files: RibbonAddin1.cs, RibbonAddin1.Designer.cs and
RibbonAddin1.resx. It then opens RibbonAddin1.cs in the designer, which looks
like the image below. In the designer, select the tab named "TabAddIns (Built-In)"
and change its Label to "Add-Ins" in the Properties window. Next, select
"group1" and set its Label property to "SharePoint". Add a ToggleButton
from the toolbox to this group. Set the ToggleButton's Label property to
"Briefing" and set its Image property to an appropriate image. Also set its ControlSize
property to "RibbonControlSizeLarge". Now double-click the toggle button to
generate its Click event handler in the RibbonAddin1.cs file, and write the
following code in that handler:
Globals.ThisAddIn.CustomTaskPane.Visible = toggleButton1.Checked;
This code takes responsibility for hiding and showing our custom task pane.
Of course, we need the task pane itself, which is nothing more
than a user control named ucBriefingTaskPane. If you want to test the application
at this stage, add a blank user control with the name ucBriefingTaskPane.
The following code snippet shows how the add-in creates an instance
of the custom task pane.
private CustomTaskPane _customTaskPane;
public CustomTaskPane CustomTaskPane
{
get { return _customTaskPane; }
set { _customTaskPane = value; }
}
private void ThisAddIn_Startup(object sender, System.EventArgs e)
{
_customTaskPane = Globals.ThisAddIn.CustomTaskPanes.Add(new ucBriefingTaskPane(), "Custom Briefing");
_customTaskPane.DockPosition = MsoCTPDockPosition.msoCTPDockPositionRight;
_customTaskPane.Width = 250;
}
By adding the ribbon support to the project, the user now has a way to start the
briefing application, which walks him through a series of steps for
building PowerPoint slides from SharePoint site content.
To test the application and see how RibbonAddin1 integrates with PowerPoint,
press F5 or click the Debug button on the top toolbar. This
starts PowerPoint and adds the Add-Ins tab at the end of PowerPoint's
ribbon bar, as shown in the image below.
Clicking the Briefing button in the ribbon shows the custom task pane on
the right side, which we added with the previous code.
Implementation of the Task Pane and Wizard Steps User Controls
Our solution presents its user interface through a custom task pane that loads on
the right-hand side of PowerPoint's application window. The wizard experience
for the user is a series of steps. The logic and actions required for each
step are written into a single user control. The primary goal of each step is
as follows.
1. Get the URL of the SharePoint site and check its contents.
2. Build a presentation slide for the items in the objectives list.
3. Build a presentation slide for the items in the agenda list.
4. Display a list of slide libraries so the user can import presentation slides
from the site.
5. Display a conclusion message.
The user may not need to follow or complete all the steps. Based on the crawl of
the site in step 1, the wizard may skip certain subsequent
steps. For example, if the site does not contain an objectives list, the user
should not even be exposed to step 2. For this reason, our architecture compartmentalizes
each of the steps into its own user control. Each step user control is responsible
for displaying an interface, gathering input from the user, doing its work and
reporting back to its parent. The task-pane control serves as a controller, taking
on the responsibility of orchestrating the step controls and collecting the information
they report back that may be needed later in the process.
To provide uniformity among the step user controls, we defined an object interface with
some common elements that each control must implement. This makes it easier for
our task-pane controller to interact with them, since it is guaranteed that
the elements of the interface are part of each step control.
public delegate void CompletedEventHandler(object sender, EventArgs e);
public interface IWizardStep
{
ucBriefingTaskPane Parent { get; }
event CompletedEventHandler Completed;
void WorkComplete();
void Start();
}
The IWizardStep interface requires that each step user control have the following:
• A read-only property that provides a reference to the task pane that is its
parent.
• An event named Completed that is used to inform the parent task pane that
a particular step has done its work and the task pane should move on. The Completed
event is declared using the CompletedEventHandler delegate.
• A WorkComplete() method used to support multithreading, so that the step control
can execute its work on a separate thread from the user-interface one.
• A Start() method that is called by the task pane, telling the step control
that it is its turn in the process.
The task pane controller is a user control with no user interface of its own. It is simply a container
control that organizes and manages the step controls that present themselves to the
user. An instance of each of the wizardStepN controls (where N represents
the sequence number) is loaded into its Controls collection. Because these are loaded
from code, each has its visibility turned off by default. In the task pane's Load event,
the wizardStep1 instance is made visible for start-up. The task pane also maintains
some information obtained from the step controls that is used later in the
process. This includes the URL of the SharePoint site entered by the user, information
about whether it had any of the lists content could be built from, as well as
a collection of information about the slide libraries contained in the site.
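The article does not list the controller code itself; a minimal sketch of what it describes might look like the following (the step class names and fields here are hypothetical):

```csharp
private int _currentStep = 0;
private IWizardStep[] _steps;

private void ucBriefingTaskPane_Load(object sender, EventArgs e)
{
    // Hypothetical step control instances; each implements IWizardStep.
    _steps = new IWizardStep[]
    {
        new ucWizardStep1(), new ucWizardStep2(), new ucWizardStep3(),
        new ucWizardStep4(), new ucWizardStep5()
    };

    foreach (IWizardStep step in _steps)
    {
        Control control = (Control)step;
        control.Visible = false;           // hidden until it is this step's turn
        this.Controls.Add(control);
        step.Completed += new CompletedEventHandler(Step_Completed);
    }

    _steps[0].Start();                     // hand control to step 1 on start-up
}

private void Step_Completed(object sender, EventArgs e)
{
    // A step reported it is done: hide it and start the next one.
    ((Control)sender).Visible = false;
    _currentStep++;
    if (_currentStep < _steps.Length)
        _steps[_currentStep].Start();
}
```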
WizardStep1: Examining the Site
The wizardStep1 control is responsible for examining the SharePoint site specified
by the URL the user entered. For our example, the wizardStep1 control looks
to see whether the site contains a list named "Objectives", a list named "Agenda",
and information about any slide libraries. We simply created a
meeting workspace to act as a test environment and populated it with some content.
The objectives and agenda lists are already there, created by the site template by default,
but you have to create at least one slide library if you want to test. The following
are the steps to add a slide library to your site:
1. Go to your meeting workspace site.
2. From the Site Actions menu, choose Create.
3. In the Libraries group, click the Slide Library link.
4. Enter a suitable name for the slide library, for example, Meeting Slides.
5. Click Create button.
6. You will be redirected to a page that displays the library’s items.
7. For testing purposes, upload a sample PowerPoint 2007 presentation.
Back in our step control, this step is designed to first ask the user for
the site URL of the meeting workspace. When the user enters it and clicks the Next button, the
step control tries to discover information about the lists and libraries contained
in the site. SharePoint's web services are used to get information about and content
from the lists in this example. To add a web reference to the project, go through
the following steps.
1. In Visual Studio's Solution Explorer window, right-click the project name and
select Add Web Reference or Add Service Reference. (If you have added a web
service reference to the project earlier, Add Web Reference will be
available; otherwise select Add Service Reference.)
2. If the Add Web Reference option was available and you selected it, jump
directly to step 5.
3. Click the Advanced… button.
4. Click the "Add Web Reference…" button.
5. Type the URL of the meeting workspace with a _vti_bin/lists.asmx ending. SharePoint's
web services are called by appending the _vti_bin folder to the end of any site
URL. This enables the web service call to discover its context and return answers
specific to the site referred to by the rest of the URL. So, for our example
the URL would look like this.
6. Click Go. The window below the URL will display a listing of all the methods in
the Lists Web service.
7. In the Web reference name text box, change the name to WSLists.
8. Click Add Reference.
With the web reference added, the following is the code for the Next button's Click
event.
this.UseWaitCursor = true;
WSLists.Lists listService = new BriefingCS.WSLists.Lists();
listService.Credentials = System.Net.CredentialCache.DefaultCredentials;
listService.Url = this.txtSiteUrl.Text.Trim("/".ToCharArray()) + "/_vti_bin/lists.asmx";
listService.GetListCollectionCompleted += new BriefingCS.WSLists.GetListCollectionCompletedEventHandler(listService_GetListCollectionCompleted);
listService.GetListCollectionAsync();
One more thing to keep in mind: when we examine the site content, this work
should not interfere with the PowerPoint application's user interface. Therefore,
this work is set up to run on a different thread. To support this, the Next
button's event handler first changes the user's cursor to the wait cursor. The
web-service proxy is then configured to make the call under the user's security
context and to access the service through the site URL. The GetListCollectionAsync
method is called so that the current thread does not wait for a response. The
listService_GetListCollectionCompleted method of this step user control is
called when the response is received.
Once the service has returned a response to the briefing add-in, our
code executes and examines the response, looking for the Objectives and Agenda
lists as well as any slide libraries. The response from the web-service call is
a string of XML that includes references to several different namespaces. Therefore
this XML is loaded into an XmlDocument object and an XmlNamespaceManager is set up
to make searching for the information easy. See the following code snippet.
XmlNode result = e.Result;
XmlDocument doc = new XmlDocument();
doc.LoadXml(result.OuterXml);
XmlNamespaceManager namespaceMrg = new XmlNamespaceManager(doc.NameTable);
namespaceMrg.AddNamespace(ucBriefingTaskPane.SharePointNamespacePrefix,
ucBriefingTaskPane.SharePointNamespaceUri);
XmlNodeList agendaListNode = doc.SelectNodes("//sp:List[@Title='Agenda']", namespaceMrg);
if (agendaListNode == null || agendaListNode.Count == 0)
Parent.HasAgendaList = false;
XmlNodeList objectiveListNode = doc.SelectNodes("//sp:List[@Title='Objectives']", namespaceMrg);
if (objectiveListNode == null || objectiveListNode.Count == 0)
Parent.HasObjectiveList = false;
XmlNodeList slideLibraries = doc.SelectNodes("//sp:List[@ServerTemplate='2100']", namespaceMrg);
After gathering all the required information, this step wraps up its work: the
wait cursor is turned off and the parent task pane invokes a delegate
referring to the WorkComplete method. By using the this.Parent.Invoke() technique,
we return control back to the original user-interface thread, leaving the one that
performed the background processing.
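As a rough sketch of that hand-off (the MethodInvoker wrapping is our assumption; the article does not show this code):

```csharp
// In the web-service Completed handler, which runs on a background thread:
// Control.Invoke marshals the call onto the thread that created the control,
// i.e. back onto the user-interface thread.
this.Parent.Invoke(new MethodInvoker(this.WorkComplete));

// WorkComplete then runs on the UI thread:
public void WorkComplete()
{
    this.UseWaitCursor = false;               // safe: we are on the UI thread again
    if (Completed != null)
        Completed(this, EventArgs.Empty);     // tell the parent task pane to move on
}
```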
WizardStep2: Building Objectives
The wizardStep2 user control is responsible for building a presentation slide containing
the content of the SharePoint site's Objectives list. The parent task pane passes
control to this step by calling its Start method. In this method, a check is first
made for an Objectives list in the user's site. If one exists, the step makes
itself visible, exposing its user interface. If not, the control
simply calls WorkComplete to raise its Completed event, skipping itself and moving
on to the next step in the process. Assuming your site contains an Objectives
list, the user is presented with two options. The first is for the application
to build a presentation slide that contains a bulleted list of the objective
items from the site. The other is to skip this feature and move on to
the next step.
If the user chooses to have an Objectives slide built, the add-in again uses SharePoint's
lists.asmx web service, this time calling the GetListItems method to get
the objectives list items. Here too, the asynchronous version of the
method is used to prevent the main thread from waiting for results. The code snippet
below shows how the objective items are retrieved using the lists.asmx web service.
WSLists.Lists listService = new BriefingCS.WSLists.Lists();
listService.Credentials = System.Net.CredentialCache.DefaultCredentials;
listService.Url = Parent.SiteUrl + "/_vti_bin/lists.asmx";
XmlDocument doc = new XmlDocument();
XmlNode nodeQuery = doc.CreateNode(XmlNodeType.Element, "Query", "");
XmlNode nodeViews = doc.CreateNode(XmlNodeType.Element, "ViewFields", "");
XmlNode nodeQueryOptions = doc.CreateNode(XmlNodeType.Element, "QueryOptions", "");
nodeQueryOptions.InnerXml = "<IncludeMandatoryColumns>FALSE</IncludeMandatoryColumns>";
nodeViews.InnerXml = "<FieldRef Name='Objectives'/>";
nodeQuery.InnerXml = "";
listService.GetListItemsCompleted += new BriefingCS.WSLists.GetListItemsCompletedEventHandler(listService_GetListItemsCompleted);
listService.GetListItemsAsync("Objectives", null, nodeQuery, nodeViews, null, nodeQueryOptions, null);
Once the GetListItemsAsync method has received its results, it transfers control to
the GetListItemsCompleted handler. After the result items are separated from the other
XML nodes returned in the response, a slide is created from the objective items
in the following way.
Microsoft.Office.Interop.PowerPoint.Presentation presentation =
Globals.ThisAddIn.Application.ActivePresentation;
Microsoft.Office.Interop.PowerPoint.Slide slide = presentation.Slides.Add(presentation.Slides.Count + 1,
Microsoft.Office.Interop.PowerPoint.PpSlideLayout.ppLayoutText);
slide.Shapes[1].TextFrame.TextRange.Text = "Objectives";
StringBuilder sb = new StringBuilder();
foreach (XmlNode node in objectiveNodeList)
{
sb.Append(node.Attributes["ows_Objective"].InnerText);
sb.Append(Environment.NewLine);
}
slide.Shapes[2].TextFrame.TextRange.Text = sb.ToString();
Globals.ThisAddIn.Application.ActiveWindow.View.GotoSlide(slide.SlideIndex);
To create the slide, the code obtains a reference to the user's active presentation
and uses its slide collection's Add method. The Add method takes two parameters:
the index of the new slide and the layout of the slide. In this case we want
to add the slide at the end of the presentation, and the layout should be the
one that has only a title and a text box. With the slide created, the content
is placed on it by accessing its different shapes. For our selected layout,
two shapes need attention. The first, at index one, represents the title
of the slide; this is set to "Objectives". The second shape, at index two, is
populated using a StringBuilder, with each objective separated by a carriage
return. The carriage-return character is what gives us each objective
as a bullet item. Finally, the newly created slide is made the active one.
WizardStep3: Building Agenda Items
The wizardStep3 control is not so different from wizardStep2. In its Start method,
it also decides whether it should show itself. But instead of simply building
another bulleted list, this step presents the agenda items in a PowerPoint table
while providing more detailed content in the Notes portion of the slide. To set
the slide up for a table, the layout passed to the Add method is ppLayoutTable.
After we set the title of the presentation slide, the table is added by accessing
the slide's Shapes collection and calling its AddTable method. See the following
code snippet:
Microsoft.Office.Interop.PowerPoint.Presentation presentation =
Globals.ThisAddIn.Application.ActivePresentation;
Microsoft.Office.Interop.PowerPoint.Slide slide = presentation.Slides.Add(presentation.Slides.Count + 1,
Microsoft.Office.Interop.PowerPoint.PpSlideLayout.ppLayoutTable);
slide.Shapes[1].TextFrame.TextRange.Text = "Agenda";
Microsoft.Office.Interop.PowerPoint.Shape tblAgenda = slide.Shapes.AddTable(agendaNodeList.Count, 2, -1, -1, -1, -1);
tblAgenda.Table.Columns[1].Width = 200;
tblAgenda.Table.Columns[2].Width = 400;
tblAgenda.Table.FirstRow = false;
The result of our efforts is a slide that presents the time and title of each agenda
item in a nicely formatted table.
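The article does not show the code that fills the table cells or the notes page. A plausible sketch, assuming the agenda items carry ows_EventDate and ows_Title attributes in the returned XML (attribute names are assumptions — verify them against your own site's response), might be:

```csharp
// Fill the table: one row per agenda item (Table rows/columns are 1-based).
// The ows_* attribute names below are assumptions based on typical
// SharePoint list responses; check them against your site's XML.
for (int row = 1; row <= agendaNodeList.Count; row++)
{
    XmlNode item = agendaNodeList[row - 1];
    tblAgenda.Table.Cell(row, 1).Shape.TextFrame.TextRange.Text =
        item.Attributes["ows_EventDate"].InnerText;   // time column
    tblAgenda.Table.Cell(row, 2).Shape.TextFrame.TextRange.Text =
        item.Attributes["ows_Title"].InnerText;       // title column

    // Append per-item details to the slide's notes page.
    slide.NotesPage.Shapes[2].TextFrame.TextRange.InsertAfter(
        item.Attributes["ows_Title"].InnerText + Environment.NewLine);
}
```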
WizardStep4: Integration with Slide Libraries
The step 4 user control is responsible for informing the user of the slide libraries
that exist in the SharePoint site. A slide library is a new SharePoint library
type that allows users to work on individual slides. This granularity allows
for a great collaboration experience, as users can work on just their
portion while others edit their own portions simultaneously. The slide library
also promotes slide reuse, because these slides can be imported into other decks.
Our assumption in this example is that the meeting workspace has at least one
slide library and that there are existing slides in it that should be imported
into the presentation.
Like the other steps, step 4 checks whether at least one slide library was discovered
previously and shows itself only if that is the case. This control's Start method
includes a call to the ListOutSlideLibrariesLinks method. This method populates
a panel with a LinkLabel control for each slide library in the site.
private void ListOutSlideLibrariesLinks()
{
pnlLinks.Controls.Clear();
LinkLabel[] linkLabels = new LinkLabel[Parent.SlideLibraries.Count];
for (int i = 0; i < Parent.SlideLibraries.Count; i++)
{
linkLabels[i] = new LinkLabel();
linkLabels[i].Location = new Point(5, 25 * i);
linkLabels[i].LinkClicked += new LinkLabelLinkClickedEventHandler(ucWizardStep4_LinkClicked);
linkLabels[i].LinkBehavior = LinkBehavior.AlwaysUnderline;
linkLabels[i].Text = ((LibraryItem)Parent.SlideLibraries[i]).Name;
LinkLabel.Link link = linkLabels[i].Links.Add(0, ((LibraryItem)Parent.SlideLibraries[i]).Name.Length);
link.LinkData = Parent.SiteUrl + ((LibraryItem)Parent.SlideLibraries[i]).Url;
}
pnlLinks.Controls.AddRange(linkLabels);
}
In the code above, linkLabels is an array of LinkLabel controls, with one control
for each slide library in the site. Within the loop, a new LinkLabel
control is created for each slide library. The LinkLabel's LinkBehavior is set
to always present its text as underlined, and the text value is set to
the name of the slide library. When the user clicks one of the LinkLabels,
our event handler responds by opening the browser and directing the user to the
URL of the slide library. The event handler launches the browser
by using the System.Diagnostics namespace and starting a process with the URL
as a parameter. Once in the browser, the user can use the out-of-the-box
functionality for selecting slides and importing them into the current presentation.
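The article does not list that LinkClicked handler; a minimal version, consistent with the link registration above, could be:

```csharp
private void ucWizardStep4_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
    // LinkData was set to the slide library's URL when the link was built.
    string url = e.Link.LinkData.ToString();

    // Hand the URL to the shell, which opens it in the default browser.
    System.Diagnostics.Process.Start(url);
}
```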
To perform this operation, the user selects the check boxes next to the slides
he wants to import and clicks the Copy Slide to Presentation button. The user
is then asked whether he wants the selected slides imported into the presentation
we have been constructing. Note that there is also an option to receive notifications
if a slide in the site changes after the import has been done. If that is
selected, each time the user opens the presentation, PowerPoint checks whether
the source slide has been modified since the last import. If so, the user
is alerted and asked whether he wishes to update the presentation.
Although there is a wizardStep5 user control, step 4 is the last step in which presentation
slides are constructed. The final step control simply informs the user that he has
completed the briefing application and that he can close the task pane by clicking
the button in the ribbon user interface.
You can download the source code for this example from here.
1. Origin of Shell sort
Shell sort is a kind of insertion sort, also known as diminishing increment sort, and is an improved version of the direct insertion sort algorithm. Shell sort is an unstable sorting algorithm. The method is named after D. L. Shell, who proposed it in 1959.
2. Shell sort is based on the following two properties of insertion sort
Insertion sort is efficient when applied to data that is already almost sorted: in that case it can run in close to linear time.
But insertion sort is generally inefficient, because it can only move data one position at a time.
public static void main(String[] args) {
    int[] a = {49, 38, 65, 97, 76, 13, 27, 49, 78, 34, 12, 64, 1};
    System.out.println("Before sorting:");
    for (int i = 0; i < a.length; i++) {
        System.out.print(a[i] + " ");
    }
    // Shell sort
    int d = a.length;
    while (true) {
        d = d / 2;
        // Direct insertion sort on each of the d gapped sub-sequences
        for (int x = 0; x < d; x++) {
            for (int i = x + d; i < a.length; i = i + d) {
                int temp = a[i];
                int j;
                for (j = i - d; j >= 0 && a[j] > temp; j = j - d) {
                    a[j + d] = a[j];
                }
                a[j + d] = temp;
            }
        }
        if (d == 1) {
            break;
        }
    }
    System.out.println();
    System.out.println("After sorting:");
    for (int i = 0; i < a.length; i++) {
        System.out.print(a[i] + " ");
    }
}
Or, alternatively:
package com.fantj.dataStruct.shellSort;

/**
 * Created by Fant.J.
 * 2017/12/21 20:49
 */
public class ShellSort {
    /**
     * Sorting method
     */
    public static void sort(long[] arr) {
        // Initialize an interval
        int h = 1;
        // Calculate the maximum interval (Knuth sequence: 1, 4, 13, 40, ...)
        while (h < arr.length / 3) {
            h = h * 3 + 1;
        }
        while (h > 0) {
            // Gapped insertion sort with interval h
            long temp = 0;
            for (int i = h; i < arr.length; i++) {
                temp = arr[i];
                int j = i;
                while (j > h - 1 && arr[j - h] >= temp) {
                    // Shift the element one gap to the right
                    arr[j] = arr[j - h];
                    j -= h;
                }
                arr[j] = temp;
            }
            // Decrease the interval
            h = (h - 1) / 3;
        }
    }
}
Created on 2010-10-26 15:48 by belopolsky, last changed 2010-11-01 17:43 by belopolsky. This issue is now closed.
On Tue, Oct 26, 2010 at 11:18 AM, Guido van Rossum <guido@python.org> wrote:
> On Tue, Oct 26, 2010 at 8:13 AM, Alexander Belopolsky
> <alexander.belopolsky@gmail.com> wrote:
>> The one demo that I want to find a better place for is Demo/turtle.
>
> Sure, go for it. It is a special case because the turtle module is
> also in the stdlib and these are intended for a particular novice
> audience. Anything we can do to make things easier for those people to
> get start with is probably worth it. Ideally they could just double
> click some file and the demo would fire up, with a command-line
> alternative (for the geeks among them) e.g. "python -m turtledemo" .
>
> --
> --Guido van Rossum (python.org/~guido)
>
-- "Move Demo scripts under Lib"
Before I prepare a patch, I would like to get an agreement on the location of the demo package. In the order of my preference:
1. turtle.demo Pro: obvious relationship to turtle. Cons: require converting turtle.py to a package.
2. turtledemo Pro: BDFL's suggestion; "Flat is better than nested". Cons: relationship to turtle module is less obvious than in #1; stdlib namespace pollution. (Turtle invasion! :-)
3. demo.turtle - probably not a good idea if not as a part of a general Demo reorganization.
Note that while I listed conversion of turtle.py to a package as a cons, it may not be a bad idea on its own. For example, toolkit abstraction naturally belongs to submodules and procedural interface belongs to package level while OOP classes may be separated into submodules. While I am not proposing any such changes as a part of this ticket, it may not be a bad idea to take the first step and rename turtle.py to turtle/__init__.py.
IMO converting turtle.py into a package, unless that's already planned anyway, is not a good project to undertake right now. (OTOH the demo itself already is a package, less an __init__.py file.) Note that the turtle module already runs some demo when invoked as a script -- maybe this can be made to fire up the demo viewer instead?
On Tue, Oct 26, 2010 at 12:00 PM, Guido van Rossum
<report@bugs.python.org> wrote:
..
> IMO converting turtle.py into a package, unless that's already planned anyway, is not a good project
> to undertake right now.
What are your reasons? I don't necessarily disagree, but want to
weigh pros and cons. It does not seem like a hard thing to do:
rename turtle.py to turtle/__init__.py and add __main__.py. As I
said, I don't intend to do anything more than that in the first step.
> (OTOH the demo itself already is a package, less an __init__.py file.)
Unfortunately it also relies on being run from directory containing
the main script, so converting it into a proper package is a bit more
involved than renaming the directory and adding an empty __init__.py
file. Still it is not that hard.
> Note that the turtle module already runs some demo when invoked as a script
> -- maybe this can be made to fire up the demo viewer instead?
Yes, I wanted to do that as well. Note that in this case, it would be
natural to move turtleDemo.py code into turtle/__main__.py. I would
also like to be able to run individual tdemo_* scripts without the
demo viewer or with an alternative viewer. Some naming decisions have
to be made for that as well:
$ python -m turtle.tdemo_chaos
$ python -m turtle.demo.chaos
$ python -m turtledemo.tdemo_chaos
...
I would like Gregor Lingl's approval of turning turtle.py into a package. It might make some things harder for novices, e.g. tracebacks and just browsing the source code.
IOW, yes, flat still seems better than nested here!
On Tue, Oct 26, 2010 at 12:31 PM, Guido van Rossum
<report@bugs.python.org> wrote:
..
> I would like Gregor Lingl's approval of turning turtle.py into a package.
Me too. :-) I added him to the "nosy list".
> It might make some things harder for novices, e.g. tracebacks and just browsing the source code.
If better tracebacks were the goal, I would start with removing
eval-generated module level functions:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in forward
..
TypeError: can't multiply sequence by non-int of type 'float'
Browsing source code may indeed be complicated by the package
structure. For example, I often find it difficult to navigate
unittest code once it became a package in the recent versions.
However, simple renaming of turtle.py to turtle/__init__.py is
probably a non-event since novice-oriented environments such as IDEs
are likely to take the user directly to turtle/__init__.py when he or
she clicks on "turtle".
> Also many people don't expect to find any code in a file named __init__.py (and most of the time I
> agree with this).
Well, logging, tkinter, and ctypes are clearly counterexamples to this rule.
> But the alternative isn't so great either, assuming we'll want strict backwards compatibility (I
> wouldn't want the instructions in Gregor's or anyone's book to start failing because of this).
Backwards compatibility can be restored by use of relative imports as
necessary. This is the unittest approach where backward compatibility
is paramount.
> You can't rename turtle to turtle/turtle.py either, because then there'd be horrible confusion
> between turtle/ and turtle.py.
Agree. We have too many turtles already. :-) On the other hand,
moving turtle object definitions into turtle/pen.py in some distant
future may not be a horrible idea. (Note that "pen" end "turtle" are
already mostly synonymous.)
> IOW, yes, flat still seems better than nested here!
For a six year old, turtledemo is still nested - just without a dot. :-)
PS: My six year old loves the turtle!
I thought this email-to-roundup bug was fixed some time ago. The mangled sample session was:
>>> turtle.forward('5 miles')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in forward
..
TypeError: can't multiply sequence by non-int of type 'float'
Just an FYI: the python.org installers for Mac OS X install the demos including the turtle demo (which is probably the most useful of the bunch these days) in /Applications/Python m.n/Extras/Demo. Depending on the default application association for ".py" files (something the user can change using the OS X Finder), double-clicking on the demo files will generally launch them in IDLE.app or with Python Launcher.app.
Various comments:
I usually expect things in stdlib to be usefully importable. Idlelib is clearly an exception.
>> Also many people don't expect to find any code in a file named
>> __init__.py (and most of the time I agree with this).
> Well, logging, tkinter, and ctypes are clearly counterexamples
> to this rule.
I think it a mistake that tkinter.__init__ is huge, about as big as the other 13 modules added together. It makes it really hard to dive into the code.
I think turtledemo would be fine, separate from turtle, if one could import and run things (one at a time) from within the interactive interpreter.
Yeah, I wish unittest hadn't been split up, and I really dislike the organization of the email package, though I think I understand how it came about historically. So I vote for flat :)
> I think it a mistake that tkinter.__init__ is huge,
> about as big as the other 13 modules added together.
> It makes it really hard to dive into the code
That is certainly *not* a best practice.
First of all: I'd not like to see turtle.py converted into a package. I think with the turtle module things should be as simple as possible and I don't see any need to put approx. 100kB of code into an __init__.py and be it only because there are hundreds of them and on cannot see from the name what's in there. I do not expect the avarage turtle user to look at the code, but certainly some teachers and instructors do even if they are not python specialists accustomed to the use of packages.
As far as I understood from the discussion the MAIN POINT is to make the turtleDemo accessible as easyly as possible.
So at first this needs a decision to put the demo code into the Windows-Distribution.
The next question is where to put it. In my opinion there are two possibilities.
1) The one mentioned by Alexander in msg119612: a turtledemo directory in Lib.
2) To put a turtledemo directory into the Tools directory (in some sense the demoViewer is sort of a tool, isn't it.)
I quickly prepared a 'prototypical' collection with a modified viewer, which accounts for some of the arguments which came up in the current discussion. It contains a turtledemo package. Please download it and have a look.
Each demo can be run by doubleclicking, as a standalone script, as well as from the turtledemo.viewer
If from the above options 1) were chosen or the package is somewhere else in the search path, on can - at the interactive prompt - do things like:
>>> from turtledemo import bytedesign
>>> bytedesign.main()
and also:
>>> from turtledemo import viewer
>>> viewer.run()
Morover one can load the examples into Idle and start them via the run-command. So one has a look at the code and moreover the possibility to make changes and try out, what happens.
I have renamed the sample scripts, omitting the tdemo_ - prefix, as
this looks nicer (at least for me). The previous version turtleDemo recognized scripts with this suffix as belonging to the demo-suite and
added it to the examples menu. This was necessary because of a script, which cannot be demonstrated in the demo_viewer (two canvases). I changed this in my proposition in the following way: scripts that are in the demodirectory and should not appear in the DemoViewer's example-Menu, should end with _.py or some other extension.
To make the demo_viewer importable creates a special problem: the demos are imported by the demoViewer via the __import__ function, so everyone can add his own demos to the demo-directory and they will show up in the viewer. The original version (and even this one) can do this also for demos in subdirectories when started directly. (I've included as an example the geometry directory). This does't work yet for importing the viewer, I only provided a very clumsy quick and dirty solution and I hope that someone has a better, more bright idea to do this more elegantly and correctly.
Now for something completely different: how to make it easily accessible.
1) Put remarks (and links) into the documentation, near the head of the turtle module's docs. (Referring to 24.5.7)
2) Put a hint into the commentary/docstring of the file turtle.py itself.
3) I do not consider it a good idea to put the viewer.py into the
if __name__ == "__main__" - part of turtle.py thus replacing the current demo. I think this part should demonstrate the possibilities of the module without recurring to extra use of Tkinter (as the demo viewer does). (I must also confess, that the quality of the code of the viewer, which I programmed back in 2006, does not reach the standards I tried to achieve for turtle.py)
4) BUT I'd find it useful to change this final demo code of turtle.py, so it - in the end - displays a message where to find more demos (i. e. turtledemo) or even to put there a question like ("Do you want to see more demos?") and two "turtle-generated" "buttons": "YES" and "NO". If one chooses "Yes", the turtledemo.viewer would pop up in a separate process.
Please consider this to be a proposal or a contribution. I'm very interested in discussing it further and bringing it to a quick and easy solution for Python 3.2. Unfortunately I'm currently involved in a project (also python-related) with very severe time constraints for the next four to five month. So ideas for enhancing the demoviewer and probably turtle.py have to wait until then and possibly will be an issue for Python 3.3
Best regards,
Gregor
Attached patch, issue10199.diff is mostly a copy of Gregor's turtledemo32.zip, with the main difference being that viewer code is in turtledemo/__main__.py so it can be run with just
$ python -m turtledemo
I would like to commit this before any further improvements. Just as Gregor, I have many ideas on how to improve the demo, but implementing them would be a subject of a new ticket.
Here is a quick list:
1. Add test_turtledemo to unittests.
2. Add means to run all examples in order without user input.
3. Prune the examples: do we need both "tree" and "forest"?
5. Turn the left panel to the expanded list of tests and add a "show source" button to display source in a separate window.
6. Improve names of examples. "I_dont_like_tiltdemo" is not really a good name. :-)
Imho it is very important to clarify the name convention for demoscripts to be added to the demo before committing (or at least before the apperance of beta1). It decides about adding scripts to the Examples Menu of the viewer.
We all know, that things once they have found their way into Lib cannot be changed easily afterwards. Guidos argument on backwards compatibility applies. So now is the only point in time to decide about this.?
Please note that there are other constraints also for demo_files anyway, like the size of the graphics window and the presence of a main()-function to be called by the viewer.
I'd like this to be decided actively.
What do you think?
Best regards,
Gregor
On Wed, Oct 27, 2010 at 2:37 PM, Gregor Lingl <report".
Committed in revision 86095. I included only those demo scripts that are described in the current manual. I am open to making further improvements prior to bata 1, and will open separate issues to track those. | https://bugs.python.org/issue10199 | CC-MAIN-2019-47 | refinedweb | 2,424 | 66.74 |
I'm trying to port this asset to the Windows store and I'm encountering a very odd error.
I have an error that only happens when trying to build for the Windows Store. Actually, I get three of the same error, on different lines.
public class StateMachine<T> : IStateMachine where T : struct, IConvertible, IComparable
public StateMachine<T> Initialize<T>(MonoBehaviour component) where T : struct, IConvertible, IComparable
public StateMachine<T> Initialize<T>(MonoBehaviour component, T startState) where T : struct, IConvertible, IComparable
The error message is:
The type or namespace 'IConvertable' could not be found (are you missing an assembly or directive reference?)
The error only happens when building for Windows Store. I can build for Windows Standalone and play in the editor just fine. I'm using Unity 64 bit, 5.3.5f1. Any help would be much appreciated. Thanks!
Have you explicitely included the System namespace ?
System
I have. Any.
Distribute terrain in zones
3
Answers
Profiling or debugging long hang at load on standalone build
2
Answers
Reliable debugging on Unity Remote.
2
Answers
Is there any way to view the console in a build?
14
Answers
Unity 2017 Windows Store option not showing up in build settings,Unity 2017 no Windows Store option in the build settings
1
Answer | https://answers.unity.com/questions/1209568/issues-with-porting-to-windows-store-and-iconverta.html | CC-MAIN-2019-51 | refinedweb | 212 | 55.95 |
How to deal with dirty side effects in your pure functional JavaScript
So, you’ve begun to dabble in functional programming. It won’t be long before you come across the concept of pure functions. And, as you go on, you will discover that functional programmers appear to be obsessed with them. “Pure functions let you reason about your code,” they say. “Pure functions are less likely to start a thermonuclear war.” “Pure functions give you referential transparency”. And on it goes. They are not wrong either. Pure functions are a good thing. But there’s a problem…
A pure function is a function that has no side effects.1 But if you know anything about programming, you know that side effects are the whole point. Why bother calculating 𝜋 to 100 places if there’s no way anyone can read it? To print it out somewhere, we need to write to a console, or send data to a printer, or something where someone can read it. And, what good is a database if you can’t enter any data into it? We need to read data from input devices, and request information from the network. We can’t do any of it without side effects. And yet, functional programming is built around pure functions. So how do functional programmers manage to get anything done?
The short answer is, they do what mathematicians do: They cheat.
Now, when I say they cheat, they technically follow the rules. But they find loopholes in those rules and stretch them big enough to drive a herd of elephants through. There’s two main ways they do this:
- Dependency injection, or as I call it, chucking the problem over the fence; and
- Using an Effect functor, which I think of as extreme procrastination.2
Dependency Injection
Dependency injection is our first method for dealing with side effects. In this approach, we take any impurities in our code, and shove them into function parameters. Then we can treat them as some other function’s responsibility. To explain what I mean, let’s look at some code:
```js
// logSomething :: String -> ()
function logSomething(something) {
    const dt = (new Date()).toISOString();
    console.log(`${dt}: ${something}`);
    return something;
}
```
Our `logSomething()` function has two sources of impurity: it creates a `Date()` and it logs to the console. So, not only does it perform IO, it also gives a different result every millisecond that you run it. So, how do you make this function pure? With dependency injection, we take any impurities and make them a function parameter. So instead of taking one parameter, our function will take three:
```js
// logSomething :: Date -> Console -> String -> ()
function logSomething(d, cnsl, something) {
    const dt = d.toISOString();
    cnsl.log(`${dt}: ${something}`);
    return something;
}
```
Then to call it, we have to explicitly pass in the impure bits ourselves:
```js
const something = "Curiouser and curiouser!";
const d = new Date();

logSomething(d, console, something);
// ⦘ Curiouser and curiouser!
```
Now, you may be thinking: “This is stupid. All we’ve done is shoved the problem one level up. It’s still just as impure as before.” And you’d be right. It’s totally a loophole.
It’s like feigning ignorance: “Oh no officer, I had no idea that calling `log()` on that `cnsl` object would perform IO. Someone else just passed it to me. I’ve got no idea where it came from.” It seems a bit lame.
It’s not quite as stupid as it seems though. Notice something about our `logSomething()` function: if you want it to do something impure, you have to make it impure. We could just as easily pass different parameters:
```js
const d = {toISOString: () => '1865-11-26T16:00:00.000Z'};
const cnsl = {
    log: () => {
        // do nothing
    },
};

logSomething(d, cnsl, "Off with their heads!");
// ← "Off with their heads!"
```
Now, our function does nothing (other than return the `something` parameter). But it is completely pure. If you call it with those same parameters, it will return the same thing every single time. And that is the point. To make it impure, we have to take deliberate action. Or, to put it another way, everything that function depends on is right there in the signature. It doesn’t access any global objects like `console` or `Date`. It makes everything explicit.
It’s also important to note that we can pass functions to our formerly impure function too. Let’s look at another example. Imagine we have a username in a form somewhere. We’d like to get the value of that form input:
```js
// getUserNameFromDOM :: () -> String
function getUserNameFromDOM() {
    return document.querySelector('#username').value;
}

const username = getUserNameFromDOM();
username;
// ← "mhatter"
```
In this case, we’re attempting to query the DOM for some information. This is impure, since `document` is a global object that could change at any moment. One way to make our function pure would be to pass the global `document` object as a parameter. But we could also pass a `querySelector()` function, like so:
```js
// getUserNameFromDOM :: (String -> Element) -> String
function getUserNameFromDOM($) {
    return $('#username').value;
}

// qs :: String -> Element
const qs = document.querySelector.bind(document);

const username = getUserNameFromDOM(qs);
username;
// ← "mhatter"
```
Now, again, you may be thinking “This is still stupid!” All we’ve done is move the impurity out of `getUserNameFromDOM()`. It hasn’t gone away. We’ve just stuck it in another function, `qs()`. It doesn’t seem to do much other than make the code longer. Instead of one impure function, we have two functions, one of which is still impure.
Bear with me. Imagine we want to write a test for `getUserNameFromDOM()`. Now, comparing the impure and pure versions, which one would be easier to work with? For the impure version to work at all, we need a global `document` object. And on top of that, it needs to have an element with the ID `username` somewhere inside it. If I want to test that outside a browser, then I have to import something like JSDOM or a headless browser. All to test one very small function. But using the second version, I can do this:
```js
const qsStub = () => ({value: 'mhatter'});
const username = getUserNameFromDOM(qsStub);

assert.strictEqual('mhatter', username, `Expected username to be ${username}`);
```
Now, this doesn’t mean that you shouldn’t also create an integration test that runs in a real browser (or, at least, a simulated one like JSDOM). But what this example does show is that `getUserNameFromDOM()` is now completely predictable. If we pass it `qsStub`, it will always return `mhatter`. We’ve moved the unpredictability into the smaller function `qs`.
If we want to, we can keep pushing that unpredictability further and further out. Eventually, we push them right to the very edges of our code. So we end up with a thin shell of impure code that wraps around a well-tested, predictable core. As you start to build larger applications, that predictability starts to matter. A lot.
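To make that “thin shell” idea concrete, here’s a minimal sketch (the function names are mine, not from the text above): a pure core that builds a log line, and a single impure wrapper at the edge that supplies the date and the console.

```javascript
// Pure core: given a date and a message, build the log line.
// Same inputs always give the same output, so it's trivial to test.
function formatLogLine(date, message) {
    return `${date.toISOString()}: ${message}`;
}

// Thin impure shell: the only place that touches Date and console.
function logSomethingNow(message) {
    console.log(formatLogLine(new Date(), message));
    return message;
}

// Testing the core needs no mocks or browser at all:
formatLogLine(new Date('1865-11-26T16:00:00.000Z'), 'hello');
// ← '1865-11-26T16:00:00.000Z: hello'
```

The shell stays so small that there’s almost nothing left in it that could break.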
The disadvantage of dependency injection
It is possible to create large, complex applications this way. I know because I’ve done it. Testing becomes easier, and it makes every function’s dependencies explicit. But it does have some drawbacks. The main one is that you end up with lengthy function signatures like this:
```js
function app(doc, con, ftch, store, config, ga, d, random) {
    // Application code goes here
}

app(document, console, fetch, store, config, ga, (new Date()), Math.random);
```
This isn’t so bad, except that you then have the issue of parameter drilling. You might need one of those parameters in a very low-level function. So you have to thread the parameter down through many layers of function calls. It gets annoying. For example, you might have to pass the date down through 5 layers of intermediate functions. And none of those intermediate functions uses the date object at all. It’s not the end of the world. And it is good to be able to see those explicit dependencies. But it’s still annoying. And there is another way…
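One common way to ease the drilling (a sketch of my own, not something from the text) is to bundle every dependency into a single environment object and thread that one parameter instead of eight. The layer names and stubs below are hypothetical:

```javascript
// Every impure dependency lives in one bag…
const deps = {
    now: () => new Date('1865-11-26T16:00:00.000Z'), // stub; use () => new Date() at the real edge
    log: (msg) => {},                                // stub; swap in console.log at the real edge
};

// …so intermediate layers thread a single parameter down.
function app(deps) {
    return reportLayer(deps, 'Tea time');
}

function reportLayer(deps, message) {
    // This layer never touches the date itself; it just passes deps along.
    return formatLayer(deps, message);
}

function formatLayer(deps, message) {
    const line = `${deps.now().toISOString()}: ${message}`;
    deps.log(line);
    return line;
}

app(deps);
// ← '1865-11-26T16:00:00.000Z: Tea time'
```

It doesn’t remove the drilling, but it collapses it to one parameter, and tests can swap the whole bag out in one go.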
Lazy Functions
Let’s look at the second loophole that functional programmers exploit. It starts like this: A side effect isn’t a side effect until it actually happens. Sounds cryptic, I know. Let’s try and make that a bit clearer. Consider this code:
```js
// fZero :: () -> Number
function fZero() {
    console.log('Launching nuclear missiles');
    // Code to launch nuclear missiles goes here
    return 0;
}
```
It’s a stupid example, I know. If we want a zero in our code, we can just write it. And I know you, gentle reader, would never write code to control nuclear weapons in JavaScript. But it helps illustrate the point. This is clearly impure code. It logs to the console, and it might also start thermonuclear war. Imagine we want that zero though. Imagine a scenario where we want to calculate something after missile launch. We might need to start a countdown timer or something like that. In this scenario, it would be perfectly reasonable to plan out how we’d do that calculation ahead of time. And we would want to be very careful about when those missiles take off. We don’t want to mix up our calculations in such a way that they might accidentally launch the missiles. So, what if we wrapped `fZero()` inside another function that just returned it? Kind of like a safety wrapper.
```js
// fZero :: () -> Number
function fZero() {
    console.log('Launching nuclear missiles');
    // Code to launch nuclear missiles goes here
    return 0;
}

// returnZeroFunc :: () -> (() -> Number)
function returnZeroFunc() {
    return fZero;
}
```
I can run `returnZeroFunc()` as many times as I want, and so long as I don’t call the return value, I am (theoretically) safe. My code won’t launch any nuclear missiles.
```js
const zeroFunc1 = returnZeroFunc();
const zeroFunc2 = returnZeroFunc();
const zeroFunc3 = returnZeroFunc();
// No nuclear missiles launched.
```
Now, let’s define pure functions a bit more formally. Then we can examine our `returnZeroFunc()` function in more detail. A function is pure if:
- It has no observable side effects; and
- It is referentially transparent. That is, given the same input it always returns the same output.
Let’s check out `returnZeroFunc()`. Does it have any side effects? Well, we just established that calling `returnZeroFunc()` won’t launch any nuclear missiles. Unless you go to the extra step of calling the returned function, nothing happens. So, no side effects here.
Is `returnZeroFunc()` referentially transparent? That is, does it always return the same value given the same input? Well, the way it’s currently written, we can test it:
```js
zeroFunc1 === zeroFunc2; // true
zeroFunc2 === zeroFunc3; // true
```
But it’s not quite pure yet. Our function `returnZeroFunc()` is referencing a variable outside its scope. To solve that, we can rewrite it this way:
```js
// returnZeroFunc :: () -> (() -> Number)
function returnZeroFunc() {
    function fZero() {
        console.log('Launching nuclear missiles');
        // Code to launch nuclear missiles goes here
        return 0;
    }
    return fZero;
}
```
Our function is now pure. But JavaScript works against us a little here. We can’t use `===` to verify referential transparency any more. This is because `returnZeroFunc()` will always return a new function reference. But you can check referential transparency by inspecting the code. Our `returnZeroFunc()` function does nothing other than return the same function, every time.
This is a neat little loophole. But can we actually use it for real code? The answer is yes. But before we get to how you’d do it in practice, let’s push this idea a little further. Going back to our dangerous `fZero()` function:
```js
// fZero :: () -> Number
function fZero() {
    console.log('Launching nuclear missiles');
    // Code to launch nuclear missiles goes here
    return 0;
}
```
Let’s try and use the zero that `fZero()` returns, but without starting thermonuclear war (yet). We’ll create a function that takes the zero that `fZero()` eventually returns, and adds one to it:
```js
// fIncrement :: (() -> Number) -> Number
function fIncrement(f) {
    return f() + 1;
}

fIncrement(fZero);
// ⦘ Launching nuclear missiles
// ← 1
```
Whoops. We accidentally started thermonuclear war. Let’s try again. This time, we won’t return a number. Instead, we’ll return a function that will eventually return a number:

```js
// fIncrement :: (() -> Number) -> (() -> Number)
function fIncrement(f) {
    return () => f() + 1;
}

fIncrement(fZero);
// ← [Function]
```
Phew. Crisis averted. Let’s keep going. With these two functions, we can create a whole bunch of ‘eventual numbers’:
const fOne = fIncrement(zero); const fTwo = fIncrement(one); const fThree = fIncrement(two); // And so on…
We could also create a bunch of `f*()` functions that work with eventual values:
```js
// fMultiply :: (() -> Number) -> (() -> Number) -> (() -> Number)
function fMultiply(a, b) {
    return () => a() * b();
}

// fPow :: (() -> Number) -> (() -> Number) -> (() -> Number)
function fPow(a, b) {
    return () => Math.pow(a(), b());
}

// fSqrt :: (() -> Number) -> (() -> Number)
function fSqrt(x) {
    return () => Math.sqrt(x());
}

const fFour        = fPow(fTwo, fTwo);
const fEight       = fMultiply(fFour, fTwo);
const fTwentySeven = fPow(fThree, fThree);
const fNine        = fSqrt(fTwentySeven);
// No console log or thermonuclear war. Jolly good show!
```
Do you see what we’ve done here? Anything we would do with regular numbers, we can do with eventual numbers. Mathematicians call this ‘isomorphism’. We can always turn a regular number into an eventual number by sticking it in a function. And we can get the eventual number back by calling the function. In other words we have a mapping between numbers and eventual numbers. It’s more exciting than it sounds. I promise. We’ll come back to this idea soon.
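To make the isomorphism concrete, here’s a tiny sketch (the helper names `toEventual` and `fromEventual` are mine, not part of the text): one function for each direction of the mapping, and a check that a round trip loses nothing.

```javascript
// Number -> eventual number: stick the value in a function.
const toEventual = (n) => () => n;

// Eventual number -> number: call the function.
const fromEventual = (f) => f();

// Round-tripping in either direction gives back what we started with:
fromEventual(toEventual(42)); // ← 42

const fFive = () => 5;
toEventual(fromEventual(fFive))(); // ← 5
```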
This function wrapping thing is a legitimate strategy. We can keep hiding behind functions as long as we want. And so long as we never actually call any of these functions, they’re all theoretically pure. And nobody is starting any wars. In regular (non-nuclear) code, we actually want those side effects, eventually. Wrapping everything in a function lets us control those effects with precision. We decide exactly when those side effects happen. But it’s a pain typing those brackets everywhere. And it’s annoying to create new versions of every function. We’ve got perfectly good functions like `Math.sqrt()` built into the language. It would be nice if there were a way to use those ordinary functions with our delayed values. Enter the Effect functor.
The Effect Functor
For our purposes, the Effect functor is nothing more than an object that we stick our delayed function in. So, we’ll stick our `fZero` function into an Effect object. But before we do that, let’s take the pressure down a notch:
```js
// fZero :: () -> Number
function fZero() {
    console.log('Starting with nothing');
    // Definitely not launching a nuclear strike here.
    // But this function is still impure.
    return 0;
}
```
Now we create a constructor function that creates an Effect object for us:
```js
// Effect :: Function -> Effect
function Effect(f) {
    return {};
}
```
Not much to look at so far. Let’s make it do something useful. We want to use our regular `fZero()` function with our Effect. We’ll write a method that will take a regular function, and eventually apply it to our delayed value. And we’ll do it without triggering the effect. We call it `map`. This is because it creates a mapping between regular functions and Effect functions. It might look something like this:
```js
// Effect :: Function -> Effect
function Effect(f) {
    return {
        map(g) {
            return Effect(x => g(f(x)));
        }
    };
}
```
Now, if you’re paying attention, you may be wondering about `map()`. It looks suspiciously like compose. We’ll come back to that later. For now, let’s try it out:
```js
const zero = Effect(fZero);
const increment = x => x + 1; // A plain ol' regular function.
const one = zero.map(increment);
```
Hmm. We don’t really have a way to see what happened. Let’s modify Effect so we have a way to ‘pull the trigger’, so to speak:
```js
// Effect :: Function -> Effect
function Effect(f) {
    return {
        map(g) {
            return Effect(x => g(f(x)));
        },
        runEffects(x) {
            return f(x);
        }
    };
}

const zero = Effect(fZero);
const increment = x => x + 1; // Just a regular function.
const one = zero.map(increment);

one.runEffects();
// ⦘ Starting with nothing
// ← 1
```
And if we want to, we can keep calling that map function:
```js
const double = x => x * 2;
const cube = x => Math.pow(x, 3);

const eight = Effect(fZero)
    .map(increment)
    .map(double)
    .map(cube);

eight.runEffects();
// ⦘ Starting with nothing
// ← 8
```
Now, this is where it starts to get interesting. We called this a ‘functor’. All that means is that Effect has a `map` function, and it obeys some rules. These rules aren’t the kind of rules for things you can’t do though. They’re rules for things you can do. They’re more like privileges. Because Effect is part of the functor club, there are certain things it gets to do. One of those is called the ‘composition rule’. It goes like this:
If we have an Effect `e`, and two functions `f` and `g`, then `e.map(g).map(f)` is equivalent to `e.map(x => f(g(x)))`.
To put it another way, doing two maps in a row is equivalent to composing the two functions. Which means Effect can do things like this (recall our example above):
```js
const incDoubleCube = x => cube(double(increment(x)));
// If we're using a library like Ramda or lodash/fp we could also write:
// const incDoubleCube = compose(cube, double, increment);

const eight = Effect(fZero).map(incDoubleCube);
```
And when we do that, we are guaranteed to get the same result as our triple-map version. We can use this to refactor our code, with confidence that our code will not break. In some cases we can even make performance improvements by swapping between approaches.
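We can check that guarantee mechanically. The sketch below restates just enough of the Effect definition from above, and verifies that the triple-map and the single composed map land on the same value (no effects run until the very end):

```javascript
// Minimal Effect, as defined earlier in the text.
function Effect(f) {
    return {
        map(g) { return Effect(x => g(f(x))); },
        runEffects(x) { return f(x); },
    };
}

const fZero = () => 0;
const increment = x => x + 1;
const double = x => x * 2;
const cube = x => Math.pow(x, 3);

// Two maps... three maps... one composed map — the composition rule
// says these must agree.
const viaThreeMaps = Effect(fZero).map(increment).map(double).map(cube);
const viaOneMap = Effect(fZero).map(x => cube(double(increment(x))));

viaThreeMaps.runEffects(); // ← 8
viaOneMap.runEffects();    // ← 8
```

Nothing happens when we build either pipeline; both only produce the `8` when we finally pull the trigger.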
But enough with the number examples. Let’s do something more like ‘real’ code.
A shortcut for making Effects
Our Effect constructor takes a function as its argument. This is convenient, because most of the side effects we want to delay are also functions. For example, `Math.random()` and `console.log()` are both this type of thing. But sometimes we want to jam a plain old value into an Effect. For example, imagine we’ve attached some sort of config object to the `window` global in the browser. We want to get a value out, but this will not be a pure operation. We can write a little shortcut that will make this task easier:3
```js
// of :: a -> Effect a
Effect.of = function of(val) {
    return Effect(() => val);
};
```
To show how this might be handy, imagine we’re working on a web application. This application has some standard features, like a list of articles and a user bio. But where in the HTML these components live changes for different customers. Since we’re clever engineers, we decide to store their locations in a global config object. That way we can always locate them. For example:
```js
window.myAppConf = {
    selectors: {
        'user-bio':     '.userbio',
        'article-list': '#articles',
        'user-name':    '.userfullname',
    },
    templates: {
        'greet':  'Pleased to meet you, {name}',
        'notify': 'You have {n} alerts',
    }
};
```
Now, with our `Effect.of()` shortcut, we can quickly shove the value we want into an Effect wrapper, like so:
```js
const win = Effect.of(window);
const userBioLocator = win.map(x => x.myAppConf.selectors['user-bio']);
// ← Effect('.userbio')
```
Nesting and un-nesting Effects
Mapping Effects can get us a long way. But sometimes we end up mapping a function that also returns an Effect. We’ve already defined `userBioLocator`, which returns an Effect containing a string. If we actually want to locate the DOM element, then we need to call `document.querySelector()`, another impure function. So we might purify it by returning an Effect instead:
```js
// $ :: String -> Effect DOMElement
function $(selector) {
    return Effect.of(document.querySelector(selector));
}
```
Now, if we want to put those two together, we can try using `map()`:
```js
const userBio = userBioLocator.map($);
// ← Effect(Effect(<div>))
```
What we’ve got is a bit awkward to work with now. If we want to access that div, we have to map with a function that also maps the thing we actually want to do. For example, if we wanted to get the `innerHTML`, it would look something like this:
```js
const innerHTML = userBio.map(eff => eff.map(domEl => domEl.innerHTML));
// ← Effect(Effect('<h2>User Biography</h2>'))
```
Let’s try picking that apart a little. We’ll back all the way up to `userBio` and move forward from there. It will be a bit tedious, but we want to be clear about what’s going on here. The notation we’ve been using, `Effect('.userbio')`, is a little misleading. If we were to write it as code, it would look more like so:
```js
Effect(() => '.userbio');
```
Except that’s not accurate either. What we’re really doing is more like:
```js
Effect(() => window.myAppConf.selectors['user-bio']);
```
Now, when we map, it’s the same as composing that inner function with another function (as we saw above). So when we map with `$`, it looks a bit like so:
```js
Effect(() => $(window.myAppConf.selectors['user-bio']));
```
Expanding that out gives us:
```js
Effect(
    () => Effect.of(document.querySelector(window.myAppConf.selectors['user-bio']))
);
```
And expanding `Effect.of` gives us a clearer picture:
```js
Effect(
    () => Effect(
        () => document.querySelector(window.myAppConf.selectors['user-bio'])
    )
);
```
Note: All the code that actually does stuff is in the innermost function. None of it has leaked out to the outer Effect.
Join
Why bother spelling all that out? Well, we want to un-nest these nested Effects. If we’re going to do that, we want to make certain that we’re not bringing in any unwanted side effects in the process. For Effect, the way to un-nest is to call `.runEffects()` on the outer function. But this might get confusing. We’ve gone through this whole exercise to check that we’re not going to run any effects. So we’ll create another function that does the same thing, and call it `join`. We use `join` when we’re un-nesting Effects, and `runEffects()` when we actually want to run effects. That makes our intention clear, even if the code we run is the same.
```js
// Effect :: Function -> Effect
function Effect(f) {
    return {
        map(g) {
            return Effect(x => g(f(x)));
        },
        runEffects(x) {
            return f(x);
        },
        join(x) {
            return f(x);
        }
    };
}
```
We can then use this to un-nest our user biography element:
```js
const userBioHTML = Effect.of(window)
    .map(x => x.myAppConf.selectors['user-bio'])
    .map($)
    .join()
    .map(x => x.innerHTML);
// ← Effect('<h2>User Biography</h2>')
```
Chain
This pattern of running `.map()` followed by `.join()` comes up often. So often, in fact, that it would be handy to have a shortcut function. That way, whenever we have a function that returns an Effect, we can use this shortcut. It saves us writing `map` then `join` over and over. We’d write it like so:
```js
// Effect :: Function -> Effect
function Effect(f) {
    return {
        map(g) {
            return Effect(x => g(f(x)));
        },
        runEffects(x) {
            return f(x);
        },
        join(x) {
            return f(x);
        },
        chain(g) {
            return Effect(f).map(g).join();
        }
    };
}
```
We call the new function `chain()` because it allows us to chain together Effects. (That, and because the standard tells us to call it that.)4 Our code to get the user biography inner HTML would then look more like this:
const userBioHTML = Effect.of(window) .map(x => x.myAppConf.selectors['user-bio']) .chain($) .map(x => x.innerHTML); // ← Effect('<h2>User Biography</h2>')
Unfortunately, other programming languages use a bunch of different names for this idea. It can get a little bit confusing if you’re trying to read up about it. Sometimes it’s called
flatMap. This name makes a lot of sense, as we’re doing a regular mapping, then flattening out the result with
.join(). In Haskell though, it’s given the confusing name of
bind. So if you’re reading elsewhere, keep in mind that
chain,
flatMap and
bind refer to similar concepts.
Combining Effects
There’s one final scenario where working with Effect might get a little awkward. It’s where we want to combine two or more Effects using a single function. For example, what if we wanted to grab the user’s name from the DOM? And then insert it into a template provided by our app config? So, we might have a template function like this (note that we’re creating a curried version):
// tpl :: String -> Object -> String const tpl = curry(function tpl(pattern, data) { return Object.keys(data).reduce( (str, key) => str.replace(new RegExp(
{${key}}, data[key]), pattern ); });
That’s all well and good. But let’s grab our data:
const win = Effect.of(window); const name = win.map(w => w.myAppConfig.selectors['user-name']) .chain($) .map(el => el.innerHTML) .map(str => ({name: str}); // ← Effect({name: 'Mr. Hatter'}); const pattern = win.map(w => w.myAppConfig.templates('greeting')); // ← Effect('Pleased to meet you, {name}');
We’ve got a template function. It takes a string and an object, and returns a string. But our string and object (
name and
pattern) are wrapped up in Effects. What we want to do is lift our
tpl() function up into a higher plane so that it works with Effects.
Let’s start out by seeing what happens if we call
map() with
tpl() on our pattern Effect:
pattern.map(tpl); // ← Effect([Function])
Looking at the types might make things a little clearer. The type signature for map is something like this:
map :: Effect a ~> (a -> b) -> Effect b
And our template function has the signature:
tpl :: String -> Object -> String
So, when we call map on
pattern, we get a partially applied function (remember we curried
tpl) inside an Effect.
Effect (Object -> String)
We now want to pass in the value from inside our pattern Effect. But we don’t really have a way to do that yet. We’ll write another method for Effect (called
ap()) that will take care of this:
// Effect :: Function -> Effect function Effect(f) { return { map(g) { return Effect(x => g(f(x))); }, runEffects(x) { return f(x); } join(x) { return f(x); } chain(g) { return Effect(f).map(g).join(); } ap(eff) { // If someone calls ap, we assume eff has a function inside it (rather than a value). // We'll use map to go inside off, and access that function (we'll call it 'g') // Once we've got g, we apply the value inside off f() to it return eff.map(g => g(f())); } } }
With that in place, we can run
.ap() to apply our template:
const win = Effect.of(window); const name = win.map(w => w.myAppConfig.selectors['user-name']) .chain($) .map(el => el.innerHTML) .map(str => ({name: str})); const pattern = win.map(w => w.myAppConfig.templates('greeting')); const greeting = name.ap(pattern.map(tpl)); // ← Effect('Pleased to meet you, Mr Hatter')
We’ve achieved our goal. But I have a confession to make… The thing is, I find
ap() confusing sometimes. It’s hard to remember that I have to map the function in first, and then run
ap() after. And then I forget which order the parameters are applied. But there is a way around this. Most of the time, what I’m trying to do is lift an ordinary function up into the world of applicatives. That is, I’ve got plain functions, and I want to make them work with things like Effect that have an
.ap() method. We can write a function that will do this for us:
// liftA2 :: (a -> b -> c) -> (Applicative a -> Applicative b -> Applicative c) const liftA2 = curry(function liftA2(f, x, y) { return y.ap(x.map(f)); // We could also write: // return x.map(f).chain(g => y.map(g)); });
We’ve called it
liftA2() because it lifts a function that takes two arguments. We could similarly write a
liftA3() like so:
// liftA3 :: (a -> b -> c -> d) -> (Applicative a -> Applicative b -> Applicative c -> Applicative d) const liftA3 = curry(function liftA3(f, a, b, c) { return c.ap(b.ap(a.map(f))); });
Notice that
liftA2 and
liftA3 don’t ever mention Effect. In theory, they can work with any object that has a compatible
ap() method.
Using
liftA2() we can rewrite our example above as follows:
const win = Effect.of(window); const user = win.map(w => w.myAppConfig.selectors['user-name']) .chain($) .map(el => el.innerHTML) .map(str => ({name: str}); const pattern = win.map(w => w.myAppConfig.templates['greeting']); const greeting = liftA2(tpl)(pattern, user); // ← Effect('Pleased to meet you, Mr Hatter')
So What?
At this point, you may be thinking ‘This seems like a lot of effort to go to just to avoid the odd side effect here and there.’ What does it matter? Sticking things inside Effects, and wrapping our heads around
ap() seems like hard work. Why bother, when the impure code works just fine? And when would you ever need this in the real world?
The functional programmer sounds rather like a mediæval monk, denying himself the pleasures of life in the hope it will make him virtuous.
Let’s break those objections down into two questions:
- Does functional purity really matter? and
- When would this Effect thing ever be useful in the real world?
Functional Purity Matters
It’s true. When you look at a small function in isolation, a little bit of impurity doesn’t matter. Writing
const pattern = window.myAppConfig.templates['greeting']; is quicker and simpler than something like this:
const pattern = Effect.of(window).map(w => w.myAppConfig.templates('greeting'));
And if that was all you ever did, that would remain true. The side effect wouldn’t matter. But this is just one line of code—in an application that may contain thousands, even millions of lines of code. Functional purity starts to matter a lot more when you’re trying to work out why your app has mysteriously stopped working ‘for no reason’. Something unexpected has happened. You’re trying to break the problem down and isolate its cause. In those circumstances, the more code you can rule out the better. If your functions are pure, then you can be confident that the only thing affecting their behaviour are the inputs passed to it. And this narrows down the number of things you need to consider… err… considerably. In other words, it allows you to think less. In a large, complex application, this is a Big Deal.
The Effect pattern in the real world
Okay. Maybe functional purity matters if you’re building a large, complex applications. Something like Facebook or Gmail. But what if you’re not doing that? Let’s consider a scenario that will become more and more common. You have some data. Not just a little bit of data, but a lot of data. Millions of rows of it, in CSV text files, or huge database tables. And you’re tasked with processing this data. Perhaps you’re training an artificial neural network to build an inference model. Perhaps you’re trying to figure out the next big cryptocurrency move. Whatever. The thing is, it’s going to take a lot of processing grunt to get the job done.
Joel Spolsky argues convincingly that functional programming can help us out here. We could write alternative versions of
map and
reduce that will run in parallel. And functional purity makes this possible. But that’s not the end of the story. Sure, you can write some fancy parallel processing code. But even then, your development machine still only has 4 cores (or maybe 8 or 16 if you’re lucky). That job is still going to take forever. Unless, that is, you can run it on heaps of processors… something like a GPU, or a whole cluster of processing servers.
For this to work, you’d need to describe the computations you want to run. But, you want to describe them without actually running them. Sound familiar? Ideally, you’d then pass the description to some sort of framework. The framework would take care of reading all the data in, and splitting it up among processing nodes. Then the same framework would pull the results back together and tell you how it went. This how TensorFlow.
When you use TensorFlow, you don’t use the normal data types from the programming language you’re writing in. Instead, you create ‘Tensors’. If we wanted to add two numbers, it would look something like this:
node1 = tf.constant(3.0, tf.float32) node2 = tf.constant(4.0, tf.float32) node3 = tf.add(node1, node2)
The above code is written in Python, but it doesn’t look so very different from JavaScript, does it? And like with our Effect, the
add code won’t run until we tell it to (using
sess.run(), in this case):
print("node3: ", node3) print("sess.run(node3): ", sess.run(node3))
⦘ node3: Tensor("Add_2:0", shape=(), dtype=float32)
⦘ sess.run(node3): 7.0
We don’t get 7.0 until we call
sess.run(). As you can see, it’s much the same as our delayed functions. We plan out our computations ahead of time. Then, once we’re ready, we pull the trigger to kick everything off.
We’ve covered a lot of ground. But we’ve explored two ways to handle functional impurity in our code:
- Dependency injection; and
- The Effect functor.
Dependency injection works by moving the impure parts of the code out of the function. So you have to pass them in as parameters. The Effect functor, in contrast, works by wrapping everything behind a function. To run the effects, we have to make a deliberate effort to run the wrapper function.
Both approaches are cheats. They don’t remove the impurities entirely, they just shove them out to the edges of our code. But this is a good thing. It makes explicit which parts of the code are impure. This can be a real advantage when attempting to debug problems in complex code bases. | http://brianyang.com/how-to-deal-with-dirty-side-effects-in-your-pure-functional-javascript/ | CC-MAIN-2018-43 | refinedweb | 5,590 | 67.45 |
Knuth-Morris-Pratt (KMP) String Matching Algorithm in Python
Knuth-Morris-Pratt (KMP) is a linear time string matching algorithm. The algorithm avoids unnecessary comparison and computation of the transition function by using prefix (Π) function . Before starting the actual comparison between the characters of pattern and text, it computes the Π (pi) values for each character of the pattern using prefix function algorithm. The code is based on the algorithm described in CLRS book, but it checks the multiple occurrences of the pattern along with mismatch.
def KMP_Matcher(t,p): n = len (t) m = len (p) pi = calculate_PI(p) k = 0 for i in xrange(n): while (k > 0 and p[k] != t[i]): k = pi[k] if (p[k] == t[i]): k = k+1 if (k == m): print "pattern",p + " found at index ", i-m+1 k = 0 else: if (i == n-1): print "Pattern", p + " not found" pat = "monkey" str = "five little monkey jumping on the bed. one monkey fell down and now there are just four monkey" KMP_Matcher(str,pat) pat = "donkey" str = "five little monkey jumping on the bed. one monkey fell down and now there are just four monkey" KMP_Matcher(str,pat)
The output of the code is as follows…
pattern monkey found at index 12
pattern monkey found at index 43
pattern monkey found at index 88
Pattern donkey not found | https://www.bitsdiscover.com/knuth-morris-pratt-kmp-string-matching-algorithm-in-python/ | CC-MAIN-2022-40 | refinedweb | 228 | 53.75 |
Content-type: text/html
putwc, putwchar, fputwc - Write a wide character to a stream
Standard C Library (libc.so, libc.a)
#include <wchar.h>
wint_t putwc(
wint_t wc,
FILE *stream);
wint_t putwchar(
wchar_t wc );
wint_t fputwc(
wint_t wc,
FILE *stream);
For the fputwc() and put. For the putwchar() function, the XPG4 standard specifies the parameter type wint_t rather than wchar_t, which is required by the current version of the ISO C standard. On Tru64 UNIX systems, both types resolve to int; however, wchar_t and wint_t may not be equivalent types on other systems that conform to these standards.
Interfaces documented on this reference page conform to industry standards as follows:
fputwc(), putwc(), putwchar(): XPG4, XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies the wide character to be converted and written. Points to the output data.
The fputwc() function converts the wchar_t specified by the wc parameter to its equivalent multibyte character and then writes the multibyte character to the file or terminal associated with the stream parameter. The function also advances the file position indicator for stream if the associated file supports positioning requests. If the file does not support positioning requests or was opened in append mode, the function appends the character to the end of stream. The st_ctime and st_mtime fields of the FILE structure are marked for update between a successful execution of fputwc() and completion of one of the following: A successfully executed call to fflush() or fclose() on the same stream A call to exit() or abort()
If an error occurs while the character is being written, the shift state of the output file is undefined. See the RESTRICTIONS section for information about support for shift-state encoding.
The putwc() function performs the same operation as fputwc(), but can be implemented as a macro on some implementations that conform to X/Open standards. If implemented as a macro, this function may evaluate stream more than once; therefore, stream should never be implemented as an expression with side effects (for example, as in putwc(wc,*f++)).
The putwchar() macro works like the putwc() function, except that putwchar() writes the character to the standard output stream (stdout). The call putwchar(wc) is equivalent to putwc(wc, stdout).
[Digital] With the exception of stderr, output streams are, by default, buffered if they refer to files, or line buffered if they refer to terminals. The standard error output stream, stderr, is unbuffered by default, but using the freopen() function causes it to become buffered or line buffered. Use the setbuf() function to change the stream's buffering strategy.
Currently, the Tru64 UNIX product does not include locales whose codesets use shift-state encoding. Some sections of this reference page refer to function behavior with respect to shift sequences. This information is included only for your convenience in developing portable applications that run on multiple platforms, some of which may supply locales whose codesets do use shift-state encoding.
On successful completion, these functions return the value written. If these functions fail, they return the constant WEOF, set the error indicator for the stream, and set errno to indicate the error.
If any of the following conditions occur, the putwc(), fputwc(), and putwchar() functions set errno to the corresponding value: wide-character code specified by the wc parameter does not correspond to a valid character in the current locale. One of the following errors occurred: The process is a member of a background process group attempting to write to its controlling terminal; TOSTOP is set; the process is neither ignoring nor blocking SIGTTOU; and the process group of the process is orphaned. [XPG4-UNIX] A physical I/O error occurred. There was no free space remaining on the device containing the file. An attempt was made to write to a pipe or FIFO that is not open for reading by any process. A SIGPIPE signal will also be sent to the process.
Functions: getc(3), getwc(3), printf(3), putc(3), puts(3), wctomb(3), wprintf(3)
Others: i18n_intro(5), l10n_intro(5), standards(5) delim off | http://backdrift.org/man/tru64/man3/putwc.3.html | CC-MAIN-2017-22 | refinedweb | 687 | 50.57 |
import "gopkg.in/src-d/go-vitess.v1/vt/wrangler/testlib"
Package testlib contains utility methods to include in unit tests to deal with topology common tasks, like fake tablets and action loops.
fake_tablet.go vtctl_pipe.go
type FakeTablet struct { // Tablet and FakeMysqlDaemon are populated at NewFakeTablet time. // We also create the RPCServer, so users can register more services // before calling StartActionLoop(). Tablet *topodatapb.Tablet FakeMysqlDaemon *fakemysqldaemon.FakeMysqlDaemon RPCServer *grpc.Server // The following fields are created when we start the event loop for // the tablet, and closed / cleared when we stop it. // The Listener is used by the gRPC server. Agent *tabletmanager.ActionAgent Listener net.Listener // These optional fields are used if the tablet also needs to // listen on the 'vt' port. StartHTTPServer bool HTTPListener net.Listener HTTPServer *http.Server }
FakeTablet keeps track of a fake tablet in memory. It has: - a Tablet record (used for creating the tablet, kept for user's information) - a FakeMysqlDaemon (used by the fake event loop) - a 'done' channel (used to terminate the fake event loop)
func NewFakeTablet(t *testing.T, wr *wrangler.Wrangler, cell string, uid uint32, tabletType topodatapb.TabletType, db *fakesqldb.DB, options ...TabletOption) *FakeTablet
NewFakeTablet creates the test tablet in the topology. 'uid' has to be between 0 and 99. All the tablet info will be derived from that. Look at the implementation if you need values. Use TabletOption implementations if you need to change values at creation. 'db' can be nil if the test doesn't use a database at all.
StartActionLoop will start the action loop for a fake tablet, using ft.FakeMysqlDaemon as the backing mysqld.
func (ft *FakeTablet) StopActionLoop(t *testing.T)
StopActionLoop will stop the Action Loop for the given FakeTablet
func (ft *FakeTablet) Target() querypb.Target
Target returns the keyspace/shard/type info of this tablet as Target.
type TabletOption func(tablet *topodatapb.Tablet)
TabletOption is an interface for changing tablet parameters. It's a way to pass multiple parameters to NewFakeTablet without making it too cumbersome.
func ForceInitTablet() TabletOption
ForceInitTablet is the tablet option to set the 'force' flag during InitTablet
func StartHTTPServer() TabletOption
StartHTTPServer is the tablet option to start the HTTP server when starting a tablet.
func TabletKeyspaceShard(t *testing.T, keyspace, shard string) TabletOption
TabletKeyspaceShard is the option to set the tablet keyspace and shard
VtctlPipe is a vtctl server based on a topo server, and a client that is connected to it via gRPC.
NewVtctlPipe creates a new VtctlPipe based on the given topo server.
Close will stop listening and free up all resources.
Run executes the provided command remotely, logs the output in the test logs, and returns the command error.
RunAndOutput is similar to Run, but returns the output as a multi-line string instead of logging it.
RunAndStreamOutput returns the output of the vtctl command as a channel. When the channcel is closed, the command did finish.
Package testlib imports 29 packages (graph) and is imported by 2 packages. Updated 2019-06-13. Refresh now. Tools for package owners. | https://godoc.org/gopkg.in/src-d/go-vitess.v1/vt/wrangler/testlib | CC-MAIN-2019-35 | refinedweb | 502 | 68.47 |
>>
1 & 3: EGL itself is pretty clean, and resembles GLX quite a lot. You need a platform-specific stub to get a window handle and a display connection, which you then use to create the platform-independent EGL context. All in all, it fits quite nicely into the IWindowInfo, IGraphicsContext and IGraphicsMode abstractions.
Of course, a) you'd still have to implement the IPlatformFactory interface for a complete port and b) the iPhone doesn't use EGL. Fortunately, getting a simple context up and running looks fairly simple.
2: The issue is that regular OpenTK.Graphics.GL takes up about 1.8MB of disk space. When your hard limit is 10MB, you really can't afford to waste that amount to code you aren't using.
Conceptually, OpenTK is split into a distinct parts:
Right now, the core depends on OpenTK.Graphics. Once this dependency is broken we'd be able to compile each part independently: either into different dlls (as you propose), or into a single one using conditional compilation.
4: Pretty much, yes. The enums are right there, but the C signatures do not use them - I'm afraid we'll have to fix that by hand.
Error checking is simple enough to implement - no problems there!
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
1&3) The khronos website was down earlier and thus no access to the specs. Got them now, will read the essential pieces this weekend. According to your assessment (and my impression of the situation), no major problem there.
2) Sorry, totally forgot about the size constraints being a problem (the 500GB HDD that came with my PC last year is nowhere near half-full).
Actually I was referring to namespaces, not necessarily dlls. Moving ES closer to the root namespace will make the difference more obvious and I'd also like to suggest not re-use the name GL for ES-related functions (applications may have code-paths for both APIs) and rather name it GLES or ES. So if programmers browse code, they don't have to know which using statements the current file has, but can identify on first sight whether the code is currently related to GL or ES.
I'd prefer to have 1-2 big .dlls (as it currently is) allowing conditional compilation if you feel something is just wasting space (e.g. people might want to use FMOD instead of OpenAL and simply remove it from OpenTK for their projects). For Desktops/Laptops disk space is not a problem, but allowing to shrink OpenTK - when the application is ready to ship - will certainly help if the app is meant to be distributed by internet downloads and not hardcopies.
4) I'll take a look at that, since I've done that for GL already it should be pretty obvious what belongs where.
P.S. Mind my interest in GL ES is fairly limited, although I don't have a PS3 that's the only platform I'd be interested porting my renderer for.
Offtopic, but before I forget it: Who did the OpenCL functions? The person should be slapped for adding tons of meaningful
// OpenCL 1.0comments over the function signatures, rather than c&p the native C. :P
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
2. Agree 100%. I'll fix that.
For the size issues, take a look into ilmerge. It's an awesome little tool that can merge all assemblies into a single executable. There's an open-source alternative, called mono-linker, but it's a little hard to get (it's buried deep into the Mono repository) and it crashed on
[return: ]attributes last time I tried (maybe this is fixed now).
4. I actually removed the C signatures on purpose. The C headers changed every couple of weeks (we were at version 43 last time I checked), so the signatures fell out of sync rapidly.
I've given up on the hand-written approach: I'll just translate the C headers into xml and run them through the generator. Ultimately, this will save us a lot of trouble once OpenCL 1.1 is released.
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
[ILMerge]
If there's a parameter that strips the resulting assembly of unused classes/structs/methods/etc., I cannot see it. (I knew about ILMerge, but that missing feature is why I didn't bother with it)
[OpenCL]
Apologies, so whoever is responsible for the frequent changes should be slapped. :P
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
[ILMerge]
As far as I can tell, this feature is enabled by default.
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
Assuming this is true, I take back all statements regarding conditional compilaton of OpenTK. (Waste of our time doing that, if you can simply reduce size by a post-build step).
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
ILMerge is windows-only, unfortunately, and I don't know if the resulting binaries will work on Mono. (I'll have to check mono-linker again.)
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
beta in august. release later.
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
what's mono-linker?
Re: OpenTK gains support for OpenGL ES 1.0, 1.1 and 2.0
It's a tool that can strip and merge assemblies into a single one. Depends on Mono.Merge and Cecil.
The project exits somewhere deep inside the mono repository.
Edit: or was it Mono.Merge that depends on Mono.Linker? I can never get this right. | http://www.opentk.com/blog/1/opentk-gains-opengl-es-support?page=1&%24Version=1&%24Path=/ | CC-MAIN-2015-48 | refinedweb | 981 | 76.11 |
This patch allows LLDB to parse the $qXfer:Libraries: packet supported by GDBServer. From adding this support, lldb will try to resolve all modules currently loaded by the target upon attach, and correctly set their load address. LLDB will also search the shared library search path for suitable images which is particuarly nice.
The PosixDynamicLoader has been modified (as per Gregs suggestions) to query the loaded modules from ProcessGdbRemote. The PosixDynamicLoader has also had its load address computation changed making it suitable for use when lldb has only shared librarys in its module list.
The content of the fix is good, just needs a bit of re-org.
I would like to see the following changes:
lldb_private::Process should get a new virtual function:
class Process {
// Sometimes processes know how to retrieve and load shared libraries.
// This is normally done by DynamicLoader plug-ins, but sometimes the
// connection to the process allows retrieving this information. The dynamic
// loader plug-ins can use this function if they can't determine the current
// shared library load state.
// Returns the number of shared libraries that were loaded
virtual size_t
LoadModules ()
{
return 0;
}
}
Then ProcessGDBRemote should implement this function
Remove this.
Move this to the ProcessGDBRemote::LoadModules() function.
call m_process->LoadModules(). You might also want to check if this returns a non-zero value and if so skip the code below?
You need to set the executable first _before_ setting and load locations. If the target clears the image list in Target::SetExecutableModule() it will clear the section load list. So you need to first set the executable, then load images.
Remove this and call new Process::LoadModules() functions described in comments.
Chage this function to use a local copy of GDBLoadedModuleInfoList and do the loading all within this function. You will probably need to set the executable in the target if needed, then add all other modules, then update the load location _after_ changing the executable. When calling Target::SetExecutableModule() it will clear the target module list and if any modules were loaded before this, they will be unloaded (because they are being removed from the target, so we don't want to have section load info for a module that the target doesn't have anymore...).
Move this to the ProcessGDBRemote.cpp file and use in ProcessGDBRemote::LoadModules().
Remove this and add the:
size_t
LoadModules() override;
to the ProcessGDBRemote.h header file and implement the functionality in there.
Hi greg, I have pushed the loading code back inside of ProcessGDB remote as you suggested.
I wasnt quite following your suggestion regarding SetExecutable in the PosixDYLD, could you elaborate more on that please.
Thanks,
Aidan
Hi Greg,
I forgot to remove two changes which accidentally made it threw in this patch. Could you please ignore the changes to PlatformWindows.cpp involving the software breakpoint opcode, and the change to GDBRemoteCommunicationClient.cpp which was committed this morning.
Sorry for the confusion.
See inlined comments for what needs to be fixed
Not part of this patch.
Don't need this as a member variable right? We will call Process::LoadModules() from the dynamic loader when needed.
We might need to call Target::SetExecutable(..) in here if the executable changes. Take care of that here _before_ you load anything in the target. If you call Target:SetExecutable() after this, all of your load info will get blown away.
Move to .cpp file in anonymous namespace.
Shouldn't need this as a member variable right? Just use one locally in the ProcessGDBRemote::LoadModules()?
Hi Greg,
Thanks again for looking over my previous patches. Here's another revision which I hope manages to take on board all of the things you have suggested.
See inlined comments.
A few things to fix:
1 - remove spaces in if statements after the '(' and before the ')' to keep things consistent
2 - check the comments about setting the target correctly by creating a target with one executable (a.out) then attach to another (b.out) and make sure that works.
Do we need to check the return value of this and not do stuff below if non-zero is returned?
Please add "_sp" suffix to shared pointer variables. So this should be "executable_sp"
You don't need he ".get()" you can just write:
if (!executable_sp)
Should this be negated? Why would we check if the file doesn't exist and return it?
Doesn't this code need to get the executable from the GDB info? How can we continue to use the wrong executable? What if we do:
(lldb) target create a.out
Then we attach and the program is "b.out"? We would have a valid executable, but it needs to be changed to "b.out".
We need to get the executable from the GDBLoadedModuleInfoList and verify our target has the right executable. Is this being done somewhere that I am missing? See my example above where we create a target with "a.out" then we attach to a program that is "b.out". Make sure that works.
Hi greg,
I have changed the SetExecutable() code, so that it will now search the library list for the first executable, set it, and then try to load all of the modules into the target. Will this have the behaviour you are wanting or have I still miss understood something. I hope I have addresses your other concerns too.
Looks good. | https://reviews.llvm.org/D9471?id=25292 | CC-MAIN-2022-27 | refinedweb | 897 | 66.54 |
If you have had a chance to play with Visual Studio Whidbey, you may have noticed some of the cool new features in the Forms designer - snap lines, smart tags and so on. Now what if you want to expose some of this design-time UI in your own application? Guess what, Whidbey makes this super easy to do! As I will show below, with fewer than 10 lines of code, you can display a form that hosts the Windows Forms designer.
The designer classes you will need are located in the various System.*.Design namespaces, particularly System.ComponentModel.Design. These are part of System.Design.dll, so you will first need to add a reference to this assembly (which ships as part of the .NET Framework SDK).
The first thing you need to do is create a DesignSurface. This is the class that represents the user interface that you would think of as the “designer”.
DesignSurface surface = new DesignSurface();
There, now we have a DesignSurface. Next, we need to indicate what the type of the root component is, and ask the DesignSurface to load it. We want a Form designer, so:
surface.BeginLoad(typeof(Form));
BeginLoad begins the process of loading the designer, and will continue asynchronously. However, the “view” that represents the root component at design time is immediately available after this call.
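(If the load fails, the surface does not throw at this point; it records the problems instead. A quick sanity check using the surface's IsLoaded and LoadErrors members might look like this:)

```csharp
if (!surface.IsLoaded)
{
    // Each entry is typically an Exception describing what went wrong.
    foreach (object error in surface.LoadErrors)
        Console.WriteLine(error);
}
```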
Control view = (Control) surface.View;
Cool, now we have a Control that represents the View. We can manipulate this Control just like any other Windows Forms Control. In this case, let's dock-fill it and add it to a Form.
view.Dock = DockStyle.Fill;
Form myDesigner = new Form();
myDesigner.Controls.Add(view);
myDesigner.Show();
That's it! You now have a form hosting the Windows Forms designer, complete with support for snap lines, smart tags etc! Of course, to create a little more functional designer, you need to provide a way to add components to the Form, set properties etc. You may also need to implement a couple of designer services along the way. But the 7 lines of code above are all you need to get started!
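For convenience, here is the whole walkthrough gathered into one self-contained program. The class name, window setup, and the button-creation lines are my own illustrative additions, not part of the 7 lines above; in particular, the hand-parenting of the new Button is a shortcut that real toolbox support would normally handle for you.

```csharp
using System;
using System.ComponentModel.Design;
using System.Windows.Forms;

class DesignerHostDemo
{
    [STAThread]
    static void Main()
    {
        // The DesignSurface is the designer UI; load a Form as the root component.
        DesignSurface surface = new DesignSurface();
        surface.BeginLoad(typeof(Form));

        // One way to "add components to the Form": go through the
        // IDesignerHost service that the surface provides.
        IDesignerHost host = (IDesignerHost) surface.GetService(typeof(IDesignerHost));
        Button button = (Button) host.CreateComponent(typeof(Button), "button1");
        // Parenting is normally handled by toolbox support; for this sketch
        // we attach the new control to the root component by hand.
        ((Form) host.RootComponent).Controls.Add(button);

        // The view is an ordinary Control; dock-fill it in a plain Form.
        Control view = (Control) surface.View;
        view.Dock = DockStyle.Fill;

        Form myDesigner = new Form();
        myDesigner.Controls.Add(view);
        Application.Run(myDesigner);
    }
}
```

Compile with references to System.Design.dll and System.Windows.Forms.dll.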
This is very interesting! What about support for alternative forms of serialization? I have heard the CodeDom serializer will become available outside of VS in Whidbey. This should make it possible to do undo/redo, correct?
But is there any support for alternative serialization formats. It seems that anything that can save and restore property values and object graphs might be hooked into a generic framework, if this has been addressed. For example, an xml format such as MyXaml.
I forwarded your question to our designer guru, Brian Pepin, and here is what he said:
"Yes, we’re still using the code dom. It is supported to serialize designers using any format you like, but integrating this into VS takes work and access to the VSIP program. Frank’s MyXaml should work fine, and should even work with undo / redo provided he implements a ComponentSerializationService based on MyXaml."
This is great, just need some more information on how to implement the service and integrate in VS. I could find almost no information. We are in the VSIP, but I was hoping a package would not be necesary, simply because packages are very user-unfriendly for installation/uninstallation. Plus we prefer to use managed code as much as possible.
From the entry above:
"Of course, to create a little more functional designer, you need to provide a way to add components to the Form, set properties etc. You may also need to implement a couple of designer services along the way."
Is there any more documentation or samples on the little details mentioned above?
Frank/Peter: I don’t know if there is any documentation already available, but I do know folks in my team are writing some cool samples that will soon be published on. Hopefully, those will help you get started.
Thank you very much! Can’t wait to see those samples 🙂
Can you give any more information on those samples ?
In particular, I’ve managed to integrate a designer into our application, but Whidbey appears to generate Code that fails to compile as soon as you have a property that refers to a resource.
As there is so little documentation available, I don’t know if it’s me or a bug in Beta 1.
David: Can you please report your problem through the MSDN Product Feedback site ()? That would help us investigate further.
but, i want to hosting web form, just like ASP.NET web Matirx, if i can use the class DesignSurface, how to do it ?
xzw: I suggest contacting Nikhil () or Bradley () and they should be able to point you in the right direction.
Let’s see if you still read these comments ;).
I have a Form that I loaded using a custom serialization process with a bunch of controls on it, something like
Form from = MySerializer.Load(myFile);
now I want to show this form in the designer and have ALL it’s controls in ‘design mode’, etc that the user can move it around, edit it,…
I tried doing it like this:
IContainer container = DesignSurface.ComponentContainer;
// (snip)remove everything
container.Add(form);
but that way, only the form itself and not the controls on it can be designed. What am I doing wrong?
Frederik.
Frederik: Yes, I see comments since I get email notifications for them 🙂
Regarding your question: you need to walk the controls and site each of them in the designer (by adding them to the DesignerHost). That way, all the controls will be in design mode and not just the form.
BTW, you might want to check out Mike Harsh’s ‘Design Mode Dialog’ sample that does something like this. Its up on the samples page on windowsforms.net:
Yup, I found out about that sample shortly after posting my previous message. After updating it a little but so it would work on Beta 2 and making sure that *all* possible properties would get copied (I don’t see why the sample limits that scope), I put all the funtionality in a class.
I’m, right now, seeing something quite weird. When I load a form and copy the controls, they show up in design mode indeed. However, *clicking* on the controls will not select them! If I make a selection around them with my mouse (eg. left click left of the control, keep the mouse button down and then select the control), they get selected.
Does it sound familiar? Am I missing something obvious or did I just hit a bug that should get logged through ProductFeedback?
Okay, I figured out that blindly copying all properties won’t work, since that will copy, for example, the Site property etc, things that the designer will set. Or at least, that’s what I think (I’m not really into design-time experiences).
I switched back to copying only a select set of properties and tadaa… it works :).
The fact that you have to recursely copy all controls to the to be designed form should, in my opinion, *really* be mentioned on the MSDN docs…
Glad you got it working. I agree the behavior is non-obvious. We are working on having a more detailed sample and whitepaper up on MSDN on this topic by the end of this summer. That should serve as a good starting point for folks doing this sort of thing.
Okay, I’m back and I’m seeing some really weird behaviour again.
I am still trying to show controls in the designer. As a backgrounder, I am trying to design Windows Installer dialogs as defined in WiX source code.
For every control that MSI supports, I added a new WinForms control in my DLL. For example, ‘PushButton’, that just inherits from Button.
Every control as an Element property that will hold the source code that defined the control, so that I can track where the control that is being shown in the designer, is defined in the souce code.
I create a new form and add all the required controls on it, set the Element property and so on. My dialog displays well and I get more or less what I expected.
When I copy the form and the controls to the designer without copying the Element property (well, only copying a select set of properties), it all works well.
Once I do copy the Element property, the form still displays in the designer, but I can’t select a control by clicking on it and ‘weird’ things seem to be happening.
Setting the Element property does more than just assigning a value to a field, to be exact, it also sets the Size and Location properties.
Do you have any clue as to what could be going on here? I can send you the source code that reproduces the behaviour, or submit a bug on ProductFeedback or whatever. I really have no clue what I’m doing wrong here…
All I can say is that it’s weird, very weird and smells like a bug 🙂
Hmm – can’t say off the top of my head what the problem might be. Perhaps an exception is thrown at some point when setting the Element property and that leaves things in a bad state?
I would try catching all exceptions and seeing if there are any stray ones.
If you can’t get to the bottom of it, please report the problem through Product Feedback (with the project attached that reproduces the issue) and we can try to help.
Cool stuff, quick question:
You Said:
Regarding your question: you need to walk the controls and site each of them in the designer (by adding them to the DesignerHost). That way, all the controls will be in design mode and not just the form.
My question:
Can you give a couple of lines of code showing this?
Thanks!
Mike Harsh’s ‘Design Mode Dialog’ sample that I linked to above has the code that does this.
I’m trying to figure out how to hook into the code that handles drawing the control as it is being dragged. I have an overlay control that is mostly transparent and is placed over the top of a background image. It works great in design mode, but when I start to drag it, something is taking a snapshot of the current state of the control and using that as the picture as it is being dragged around. So in effect, it looks like I’m moving a portion of my background image around the screen when I move my control (once I drop it, the control refreshes and becomes transparent again). Ideally, I would like to simply draw a rectangle that represents the location of the control as it is being moved around, preserving transparency inside the control and leaving the background image alone. Any idea how I might be able to override this behavior?
I forwarded your question to Martin Thorsen, who has a good knowledge of the drag/drop implementation. Here is what he had to say:
"The only way for him to do this is to handle the WM_PRINT command. Preferably he should do that in his control designer’s wndproc. When we generate the drag image, we send a WM_PRINT to the control."
If you have further questions on this, you may want to post them on the WinForms Designer forum, where you should hopefully get a quick answer:
This is great but I’m working on a WPF project. Is there any support for this using the Cider (or other XAML) designer?
Gary: Sorry, I don’t know, but you can contact one of the Cider bloggers listed here or ask on their forum:
PingBack from
PingBack from
PingBack from | https://blogs.msdn.microsoft.com/rprabhu/2004/06/15/designer-hosting-in-whidbey/ | CC-MAIN-2017-51 | refinedweb | 1,976 | 71.34 |
I'm experimenting with flocking bots, and I don't want to go to the trouble of coming up with a vector class. Is there something that works well already out there? Thanks.
Is there a built-in Python vector class?
Started by silverphyre673, Mar 09 2008 12:57 AM
6 replies to this topic
#1 Members - Reputation: 454
Posted 09 March 2008 - 12:57 AM
Sponsor:
#2 Crossbones+ - Reputation: 1160
Posted 09 March 2008 - 01:30 AM
If you want, here's the vector class that I wrote. No guarantees on anything, but it's working great for me. It's only for 2D vectors, but I'm guessing that that will be enough for your work. Note that I also define a class Point to be the same as Vector, which you might not want..
import math
class Vector:
'Represents a 2D vector.'
def __init__(self, x = 0, y = 0):
self.x = float(x)
self.y = float(y)
def __add__(self, val):
return Point( self[0] + val[0], self[1] + val[1] )
def __sub__(self,val):
return Point( self[0] - val[0], self[1] - val[1] )
def __iadd__(self, val):
self.x = val[0] + self.x
self.y = val[1] + self.y
return self
def __isub__(self, val):
self.x = self.x - val[0]
self.y = self.y - val[1]
return self
def __div__(self, val):
return Point( self[0] / val, self[1] / val )
def __mul__(self, val):
return Point( self[0] * val, self[1] * val )
def __idiv__(self, val):
self[0] = self[0] / val
self[1] = self[1] / val
return self
def __imul__(self, val):
self[0] = self[0] * val
self[1] = self[1] * val
return self
def __getitem__(self, key):
if( key == 0):
return self.x
elif( key == 1):
return self.y
else:
raise Exception("Invalid key to Point")
def __setitem__(self, key, value):
if( key == 0):
self.x = value
elif( key == 1):
self.y = value
else:
raise Exception("Invalid key to Point")
def __str__(self):
return "(" + str(self.x) + "," + str(self.y) + ")"
Point = Vector
def DistanceSqrd( point1, point2 ):
'Returns the distance between two points squared. Marginally faster than Distance()'
return ( (point1[0]-point2[0])**2 + (point1[1]-point2[1])**2)
def Distance( point1, point2 ):
'Returns the distance between two points'
return math.sqrt( DistanceSqrd(point1,point2) )
def LengthSqrd( vec ):
'Returns the length of a vector sqaured. Faster than Length(), but only marginally'
return vec[0]**2 + vec[1]**2
def Length( vec ):
'Returns the length of a vector'
return math.sqrt( LengthSqrd(vec) )
def Normalize( vec ):
'Returns a new vector that has the same direction as vec, but has a length of one.'
if( vec[0] == 0. and vec[1] == 0. ):
return Vector(0.,0.)
return vec / Length(vec)
def Dot( a,b ):
'Computes the dot product of a and b'
return a[0]*b[0] + a[1]*b[1]
def ProjectOnto( w,v ):
'Projects w onto v.'
return v * Dot(w,v) / LengthSqrd(v).
#3 Moderators - Reputation: 8917
Posted 09 March 2008 - 01:31 AM
What kind of vector class are you looking for? A mathematical vector class for doing orientations and dot products, etc. or a container vector class for storing stuff in?
#4 Members - Reputation: 454
Posted 09 March 2008 - 09:29 AM
Sorry -- that could have been a little more clear. I'm looking for a mathematical vector class, not a container. I think I would want to be familiar with pretty much one of the most basic features of the language before I go about writing programs in it :)
#5 Moderators - Reputation: 1674
Posted 28 March 2008 - 05:16 AM
Quote:
One of the built-in Python types is 'complex', and complex numbers are (2D) points (or vectors).
#6 Members - Reputation: 340
Posted 31 March 2008 - 10:27 PM
In almost all cases which make use of vector math, the array type in the numpy module is appropriate. All of the appropriate functions are implemented in optimized C, and the types are closely compatible with python lists.
Some 'common' operations on numpy arrays that you would expect on a custom 2 or 3 dimensional vector class aren't present (dot product), but are very easy to reproduce.
Some 'common' operations on numpy arrays that you would expect on a custom 2 or 3 dimensional vector class aren't present (dot product), but are very easy to reproduce.
#7 Moderators - Reputation: 3115
Posted 08 April 2008 - 04:48 AM
I just found pyeuclid. "Vector, matrix and quaternion classes for use with 2D and 3D games and graphics applications". | http://www.gamedev.net/topic/486122-is-there-a-built-in-python-vector-class/ | CC-MAIN-2014-10 | refinedweb | 757 | 72.36 |
Anyone know that c program got wat type of error?
What do you mean?
Erm, such like syntax error that kind, got what other type of error else?
What on earth are you talking about?
Follow the rules of the forum and ask a proper question.
You may not pass this time , good luck next time.
Maybe semantic errors? that the code works but doesn't do what you want?
no idea if sematic is english, (i translated it loosely from dutch)
>Got exam tomorrow
Cool! I've exams too within 3-and-a-half months, wish me the best :P.
>Erm, such like syntax error that kind, got what other type of error else?
So, if I don't misunderstand you, you want to know some kind of syntax error?
Well, that's not very difficult, just try to compile this program and learn the error messages by heart:
#include <stdio.h>
int main(void)
{
/* Hey, /* this is a very nice comment*/ */
printf("I would like to get some syntax errors in this program...")
return something;
}}
:P
>Got exam tomorrow
So you've been screwed.
Now did you remember when your instructor told you to read a text book?
>Anyone know that c program got wat type of error?Yes.
Perhaps you're inquiring about a runtime error?
When I think about programming, the two types of errors that I consider to be significant. These are syntactical errors and logical errors.
Syntax concerns the format of your code and whether the computer can understand what your code says.
Logic concerns whether your program does what you intended it to do.
In order to compile, your code requires correct syntax. Logic, however, it does ... | https://www.daniweb.com/programming/software-development/threads/214405/got-exam-tomorrow | CC-MAIN-2017-17 | refinedweb | 284 | 76.93 |
/* * @ */ /* * DNSNameList.h * - convert a list of DNS domain names to/from the compact * DNS form described in RFC 1035 */ /* * Modification History * * January 4, 2006 Dieter Siegmund (dieter@apple) * - created */ #ifndef _S_DNSNAMELIST_H #define _S_DNSNAMELIST_H #include <stdint.h> /* * Function: DNSNameListBufferCreate * * Purpose: * Convert the given list of DNS domain names into the compact form * described in RFC 1035. If "buffer" is NULL, this routine allocates * a buffer of sufficient size and returns its size in "buffer_size". * Use free() to release the memory. * * If "buffer" is not NULL, this routine places at most "buffer_size" * bytes into "buffer". If "buffer" is too small, NULL is returned, and * "buffer_size" reflects the number of bytes used in the partial conversion. * * Returns: * NULL if the conversion failed, non-NULL otherwise. */ uint8_t * DNSNameListBufferCreate(const char * names[], int names_count, uint8_t * buffer, int * buffer_size); /* * Function: DNSNameListCreate * * Purpose: * Convert compact domain name list form described in RFC 1035 to a list * of domain names. The memory for the list and names buffer area is * dynamically allocated in a single allocation. Use free() to release * the memory. * * Returns: * NULL if an error occurred i.e. buffer did not contain a valid encoding. * non-NULL if the conversion was successful, and "names_count" contains * the number of names in the returned list. */ const char * * DNSNameListCreate(const uint8_t * buffer, int buffer_size, int * names_count); #endif _S_DNSNAMELIST_H | http://opensource.apple.com/source/bootp/bootp-198.2/bootplib/DNSNameList.h | CC-MAIN-2016-30 | refinedweb | 219 | 56.25 |
Journal — Sympolymathesy, by Chris Krycho
https://v5.chriskrycho.com/journal/
Reflections and practice: on faith, art, and technology

Waiting for Communion (March 22, 2020)
A coronavirus reminder of our place in the time between the times.
Assumed audience: Other theologically-orthodox Christians, especially (but not only) those in traditions which link the sacraments to the gathered church.\n
Epistemic status: Humbly confident in the application… if a bit less persuaded by the Westminster system’s official view on the subject.\n
In the midst of the 2019–2020 coronavirus pandemic, many ordinary Christian practices are difficult. Many of us are unable to gather together or take communion because of government mandates during the coronavirus crisis. Whatever our theologies of the Lord’s Supper, this is a great loss—but one that can shape our hearts in the right direction.

Ulysses Publishing With WordPress on Linux (February 2, 2020)
A tech tip for other folks using WordPress on custom Linux setups.
Assumed audience: Anyone who uses XML-RPC with WordPress on Linode, especially would-be publishers-from Ulysses.\n
I spent some time today migrating my wife’s long-dormant (but soon to be no-longer dormant!) website from a shared hosting setup to a dedicated Linode setup I can manage myself. I ran into an interesting issue when finishing the setup, and figured I’d document it clearly to explain what the issue is and how to fix it.\n
The defaults supplied out of the box on Linode’s One-Click app setup for WordPress have two issues for supporting this flow, both of which need to be fixed:
Right out of the box, you will see Ulysses respond with an error indicating that there is a “compatibility problem.” This shows up because Ulysses communicates with WordPress via its XML-RPC API… and the default configuration from Linode blocks XML-RPC:

```apacheconf
<files xmlrpc.php>
    order allow,deny
    deny from all
</files>
```
You can fix this by simply deleting that block.\n
(There are a couple other blog posts out there on this same subject, and they recommend doing a bunch of other workarounds, all intended basically to allow XML-RPC connections to work while not exposing this particular file name. These workarounds as well as the original default exist because XML-RPC is occasionally a vector for attacks on servers. In my case, I’m not currently all that concerned about that; if it comes up I’ll deal with it then.)\n
The `ServerName` value in the Apache config does not correctly work for Certbot to set up your site to automatically forward HTTP connections to HTTPS. Unfortunately, HTTPS connections are a (soft, but highly recommended) requirement for Ulysses to connect to the setup, and if forwarding isn’t enabled, Ulysses complains (as it should!). The problem here is that the default Apache config on the Linode One-Click WordPress app supplies the *IP address* of the server—rather than the domain name for your site—as the `ServerName` value. Changing that fixes the Certbot issue, and thereby unblocks the Ulysses–WordPress connection.

In our case, I needed to change it from `ServerName <ip address>` to `ServerName jaimiekrycho.com` in both the `/etc/apache2/sites-enabled/wordpress.conf` and `/etc/apache2/sites-enabled/wordpress-le-ssl.conf` files, and then to run Certbot again to reinstall the certificate and configure it to forward all HTTP connections to HTTPS. At least on my machine, it wouldn’t do that last step until I had rewritten those `ServerName` entries.
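For reference, the relevant part of each of those files ends up looking something like this after the edit. This is a sketch only: everything except the `ServerName` line is a placeholder, and the IP shown is a documentation address rather than a real one.

```apacheconf
# /etc/apache2/sites-enabled/wordpress.conf
# (and the same change in wordpress-le-ssl.conf)
<VirtualHost *:80>
    # Before: ServerName 203.0.113.10   (the Linode's IP address)
    ServerName jaimiekrycho.com

    # ...the rest of the stock Linode WordPress configuration is unchanged...
</VirtualHost>
```

With that in place, re-running Certbot against the domain can both reinstall the certificate and set up the HTTP-to-HTTPS redirect.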
Once I had made those two changes, everything worked nicely! I hope this ends up helping you if you run into the same.\n
If you’re thinking that it would be really nice if WordPress offered a modern JSON API instead of being stuck with XML-RPC, well… I’m with you.\n
A correction from a reader! WordPress does have a JSON API, and has for almost half a decade now! I have no idea why Ulysses is using XML-RPC instead of that API; at first blush it certainly looks like it could. My bad for not checking this and just assuming the problem was on WordPress’ end rather than Ulysses’.\n
Perhaps weirdly, I haven’t done much of this before!
Assumed audience: Others who might be interested in making their lives easier by automating repetitive (and meaningless) tasks that are a regular part of their work.\n
I built an Alfred workflow to generate Alibris affiliate links tonight. It’s convenient, and I learned a bit, because this is new to me.
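The heart of a workflow like this is tiny: take the search text Alfred hands the script, build the link, and hand Alfred back a JSON item whose `arg` is the URL. The sketch below is hypothetical rather than the actual workflow: `AFFILIATE_ID` and the query parameter names are placeholders, since I am not reproducing the real Alibris affiliate URL scheme here.

```javascript
// Hypothetical sketch of an Alfred Script Filter body (run with node).
// AFFILIATE_ID and the query parameter names are placeholders.
const AFFILIATE_ID = "YOUR-ID-HERE";

function alibrisLink(searchTerm) {
  const query = new URLSearchParams({
    keyword: searchTerm,
    mid: AFFILIATE_ID,
  });
  return `https://www.alibris.com/booksearch?${query}`;
}

// Alfred reads JSON from stdout; the chosen item's `arg` is what
// gets copied or pasted when you action it.
function alfredItems(searchTerm) {
  return JSON.stringify({
    items: [
      { title: `Alibris link for “${searchTerm}”`, arg: alibrisLink(searchTerm) },
    ],
  });
}

console.log(alfredItems(process.argv[2] || "The Hobbit"));
```

Wiring that into a Script Filter in Alfred’s workflow editor is then just a matter of passing `{query}` through as the script’s argument.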
…now to get back to drafting the rest of the newsletter that sent me down this rabbit hole in the first place.

Please Don’t Just Screenshot Books! (January 18, 2020)
A PSA to writers-on-the-web about how we share text.
Assumed audience: everyone who blogs, writes a newsletter, tweets, you name it. No shame for not knowing this—many people don’t!
In short, if you share an image of text, rather than actual text, you’re making it difficult-to-impossible to read for some of your readers!
Thank you for listening to this PSA from your fellow writer-on-the-internet!\n
Which, let’s be honest, is nearly all of us at some point in the future, just given how human vision degrades over our lives! ↩︎
This is the part where I reiterate my comments about Twitter, but now generalize them to social media in general. ↩︎\n
Making explicit just how confident I am (or am not).
Assumed audience: people who care about thinking well and communicating clearly.
Epistemic status: extremely confident.
My Assumed Audience headings were partly inspired by the way Sarah Constantin deploys “epistemic status,” so this is an appropriate turn of events! ↩︎\n
Doubly embarrassing for having now messed up both of my feeds.\n
Assumed audience: people who are annoyed that all my feed items just showed up in their readers again. I’m *really* sorry, everyone.\n
While fixing another issue with my Atom feed, I discovered that I was rendering bad item IDs. It’s fixed now; sorry it happened.

…but Running is Sunlight (November 27, 2019)
(Why yes, that is a riff on Superman.)\n
Assumed audience: people interested in discussions of travel and of maintaining good habits.
A final note, illustrating the point here: whereas until yesterday’s post I had managed no writing at all on this trip, today alone I have written over 1,200 words between this post and a long note in my Zettelkasten.
Because I am such a coffee snob that I just can’t do the coffee available where I am this week. ↩︎
Sheer delight—by way of writing in a Pano Totebook with a Uniball Signo RT1 0.38mm.
Assumed audience: users of notebooks and pens and people who care about the aesthetics of writing.
I have only experienced one downside to capless pens at this size tip: they occasionally get a bit dried-out, especially in the hyper-dry Colorado air. However, that condition has never been permanent.\n
The net is that these are my favorite pens in the world. It’s hard for me to imagine going back to earlier pens.

Dear reader, it was a great choice.
Their first, the Panobook, is presumably great as well—but I’m a notebook-in-my-bag kind of guy; I want the notebook on my desk to be the notebook I have everywhere. ↩︎\n
It ruins all my normal habits and rhythms and tanks my productivity.\n
Assumed audience: people interested in discussions of travel and of maintaining good habits.\n
An observation on travel (which every trip confirms anew): travel absolutely tanks my ability to maintain my habits. Whether the habit in question is writing 500 words daily, running, or even just eating normal amounts, I struggle to maintain it when away from home. A few of the reasons:
There is no extra mental hurdle to go running from my house because I know all the roads and paths near me quite well at this point. I don’t have to plan a route; I can just run a route.\n
Travel itself is tiring, and fatigue is disruptive to all those good habits as well. None of us makes our best choices when we are tired. This is one reason that sleeping well is so critical to doing well in life more generally!
My tentative thoughts on doing better on this going forward:\n
Those are tentative, though. If you have tips, especially if you’re a more experienced traveler than I am, I’d love to hear them!
Time to stop leaning so hard on search.
I’m now actively working on changing this, because I want to be able to remember passages specifically—not just a vague sense of “this is somewhere in one of Paul’s letters.”
There is an interesting point to trace out in more detail here, around the difference between computing-as-replacement of human abilities and computing-as-extension of human abilities. Got essays or books on the topic? Please send them my way! ↩︎\n
The PCA’s tests for licensure and ordination include a fairly rigorous Bible knowledge exam. This is part of what got me thinking about this. ↩︎\n
The first two, last night: 1 Corinthians 6:19–20 and Matthew 22:29–30. ↩︎\n
I like the service. But I’m not using it.
Assumed audience: folks who are thinking about social media and blogging and the IndieWeb movement.
In short, if you are going to be on a public social network at all, I think micro.blog is a great choice. But that’s the fundamental question: do I want to be on a social network at all?\n
The answer I came to back in June was very simple: no.
All my best efforts and this is still where we end up!\n
Assumed audience: people who care about the details of web publishing.\n
To my very great annoyance, I realized today that I managed to ship a broken version of JSON Feed with this version of my site.\n
For those of you who don’t care about any of the details: it’s fixed now!\n
For those of you who do care about the details, the rest of this post is a deep dive into what went wrong and why, and how I fixed it.

The problem came down to how I was calling the WHATWG `URL` constructor, `new URL(input[, base])`; the Node documentation describes its parameters like this:
- `input`: The absolute or relative input URL to parse. If `input` is relative, then `base` is required. If `input` is absolute, the `base` is ignored.
- `base`: The base URL to resolve against if the `input` is not absolute.
Note that the argument order is the reverse of what I instinctively expected. An API shaped the way I assumed it worked, with the base URL coming first, might look something like this:

```typescript
class URL {
  static withBase(base: string | URL, relativePath: string): URL;
  static fromAbsolute(path: string): URL;
}
```
In my site’s build code, I wrapped the constructor in a small helper so that a failed parse logs an error and falls back to the raw path instead of throwing:

```typescript
import { Result } from 'true-myth'
import { URL } from 'url'
import { logErr, toString } from './utils'

const absoluteUrl = (path: string, baseUrl: string): string =>
  Result.tryOrElse(logErr, () => new URL(path, baseUrl))
    .map(toString)
    .unwrapOr(path)
```
A further improvement would be to take the arguments as a single object with named fields, so that the argument order cannot be confused at all:

```typescript
type Components = {
  path: string,
  baseUrl: string | URL
}

const absoluteUrl = ({ path, baseUrl }: Components): string =>
  // ...
```
Reflections prompted by Michael Sacasas’ wrapping up a decade of blogging.
Assumed audience: other writers and thinkers-aloud with long-running public projects, or fans and followers of the same.
I’m grateful to Sacasas for his example in many ways. This ending is on that list. Here’s to ends of projects and to edges and to new beginnings.\n
If you’re thinking that this blog itself, in its various permutations, is an example of such an endlessness… I’m not going to argue with you. ↩︎\n
How and why I switched to mostly decaf coffee.
Assumed audience: lovers of coffee, tea, and other caffeinated beverages… and good health.
Thanksgiving week, I went off coffee cold turkey, and it hurt. It literally hurt.

My body has gotten used to—worse, come to depend on—the daily dose. The result is an addiction, however mild. Take away the hit, and the withdrawals set in.
Normally, anyway. There have been times when I have reached for caffeine as a help in particularly exhausting phases, but I have always been careful to dial back after those times. ↩︎\n
They don’t make a big deal about this, and they reserve the right to change it at will, but at the moment my favorite coffee shop uses a single-origin coffee for their decaf. I can hardly say how happy this makes me. ↩︎\n
Explaining how I run this site—everything.
Assumed audience: People interested in the nerdy details of how to get a website like this up and running. Here I get into everything from getting a domain and setting up DNS to how I use Markdown and Git!\n
On seeing this site relaunch back in November, my friend John Shelton asked if I had anywhere I’d listed out the whole of my setup for hosting this site. The answer is: I hadn’t, but as of now I have!\n
If you want the super short version, this is it (with the topics covered in this post marked with a *). (Hopefully, I say, because I started this post three months ago!)
My costs are pretty low for this setup. Cloudflare is free for setups like mine. GitHub is free for setups like mine. Netlify is free for setups like mine. The code font, Hack, is also free. (Sensing a theme here?)\n
In terms of things I do actually pay for (or have in the past), though:\n
I pay $15/year for the domain at Hover.
As will become clear in the next section, I have spent a… non-trivial… amount of money on writing applications over the last decade.\n
In general, I’m not opposed to paying for good services, but if there is a good service with a freemium model and I fit comfortably in the free tier, I’m happy to take advantage of it.

Ulysses’ way of working with Markdown is much more bespoke, but it Just Works.
As mentioned, though, both iA Writer and Ulysses are slower (and just feel heavier) than I’d like. As a result, I also reach for other tools at times (more on that in the Workflow section below!).
Equally important to me at this point though: writing in Markdown means I am writing in a plain text document. I can write it in any text editor anywhere which handles UTF-8—and while UTF-8 support isn’t perfectly universal, it is very close!
My publishing work flow feels relatively straightforward to me at this point, but that’s entirely a function of the fact that I’ve been using a variant on this same approach for over half a decade now, and that it’s a programmer’s work flow.\n
When I write all that Markdown material, it goes in one of two places, depending on how far along in the process I am. If it’s a big post or essay that I don’t intend to publish for a while, it stays in my writing app until it’s ready; everything else goes directly into the Git repository where the entire site lives. I have copies of that on every machine I use, as well as on GitHub.
I usually create a Git branch for new posts before I publish them, so that I can take advantage of some of the Netlify features described in the next section. On a Mac, I use mostly the command line and occasionally the Fork Git UI for interacting with the repository. On iOS, I use Working Copy to interact with the repository. It exposes the repository as a location which I can open from other apps which support document provider locations:
Then I can work with it directly in any app which has mounted the folder. For example, viewing this very post in iA Writer:\n
When I’m done, I can just commit that file to the repository on whatever branch I’m working in and push it up to GitHub, and it will trigger the publication workflow that I built with Netlify (described in the next section).\n
I had used a similar approach in the past for managing the content and design of the site, but it was never a full workflow because I couldn’t use it to publish the site. For that, I needed to switch up how I published the site. So: Netlify!\n\n
I use Netlify to actually host and deploy the site. Netlify is a developer-focused tool, which makes it super easy to take a Git repository which defines how to build a website with some basic tools, and turn it into a deployed website.
… particular about separating my DNS from my hosting/deployment setup (as discussed below), I could do that on Netlify as well, and that is also an incredibly simple setup process.
This is handy for content, of course, but it was even handier during the design process for the site, when I could set up two options with different approaches to a particular design question, each with its own URL, and share them for others to give feedback.
I don’t normally need a CMS.3 For the last few years, I got by managing everything just via command line tools and building everything on my home machine.
Forestry has far and away the better UI of the two. In fact, it has such a reasonable UI that my friend Stephen said of it:
Wow. I am impressed with this CMS.
It ... it makes sense. It's laid out like ... normal people would lay it out. I shouldn't be so shocked, but lo, I'm shocked.
you can use Netlify CMS to accept contributions from GitHub users without giving them access to your repository.
This is fine so far as it goes, but if the users already have to be GitHub users, well… then the GitHub UI gets the job done well enough.4 As such I just ended up making the links for editing a post take users straight to GitHub. It’s not perfect, but it’s good enough.
I buy all my domains at Hover.5 I first tried Hover after a podcast ad half a decade ago, and it worked out so well that in that span I have steadily moved everything … different registrar), and they even have a nice website!
I switched all of my DNS name servers to Cloudflare.
When you put all those pieces together, what you have is:
… v5.chriskrycho.com to the corresponding Netlify setup! ↩︎
I like being able to generate things which aren’t web pages from my content sometimes! ↩︎
Internet users can be obnoxious. When one of my posts hit the top of Hacker News a few months ago, I had people “signing up” for rewrite updates with email addresses—I kid you not—like
dont-be-arrogant@rewrite.software ↩︎
Picking up a dropped thread from Winning Slowly 7.13
Assumed audience: people who care about how we speak and think about tech—and also listeners to Winning Slowly!
Early on in the episode, I noted that people’s feeling of decline is itself a kind of actual decline, and Stephen disagreed:
I don’t think that’s necessarily even true: because we have Twitter bots that make things seem true, and… there’s literally not even anybody making that idea. Twitterbots picked it up out of the air and made it a thing….
Stephen has since responded with a very thoughtful piece: On Twitter Bots and the Presence of Disinformation. This paragraph in particular gets quite clearly at both what I aimed to get at on the episode and what I gestured at a bit above.
I commend the whole post to you.
…as my November-writing adventures make clear!
Assumed audience: people following along with me as I try to write every day, or who are interested in writing generally.
Realistically: not just rarely but in fact never. ↩︎
A year of rest and recovery, for which I’m profoundly grateful.
Assumed audience: mostly my future self!—but you’re welcome to read along and see my thoughts on how 2019 went for me and what I hope 2020 will look like.
You can find my past years’ write-ups here:
If you’d like to skip to a specific section of this year’s write-up, have at it!
In 2019, I aimed to do two things in podcasting:
I also expected—but did not commit!—to recording a number of episodes of Mass Affection with Jaimie. … truly Christianly about technology and ethics.
This year’s book list (so far as I remember it) with links to reviews where I wrote them—
Fiction:
Nonfiction:
In my goals for this year, I wrote: …
The two essays I did write, and of which I remain quite proud:
All told I wrote about 60,000 words across the two versions of this site, and about 30,000 words in Across the Sundering Seas. Those two essays come out to about 3,600 words—a mere 4% of my writing this year.
… always feel well-rested.
My health-oriented goals for the year ahead are relatively tame:
I’ll keep sleeping exactly as much as I need—no alarm. I’ll keep off the caffeine. Between the two, I should be at a sustainable level in terms of rest and alertness.4
The “work” bucket this year breaks down into two big categories: LinkedIn and my side project, rewrite.
I also had a chance to step into actively mentoring a couple more junior engineers, and that’s been one of the most satisfying things I’ve ever done. Helping them find their strengths (and helping them deal with their weaknesses) is both really fun and really challenging.
The net is that a year in, I’m really loving working at LinkedIn—even more than I guessed or hoped when I started. It’s the best job I’ve ever had, and I’m so grateful for that.
My big “resolution” for 2020, then, is to do these harder things. Doing the harder things will make them easier, and it will make doing other hard things easier, too.
I wrote last year:
Once I’m through that list, I’ll have covered the entirety of the language and quite a few of the most important crates in the ecosystem. But there are always new things happening, so I’ll have some interesting decisions to make about where to take the show.
What I knew at the time was that “interesting decisions” almost certainly meant “the exact timing of wrapping things up,” though I reserved the final decision until a later point in the year. ↩︎
I wrote a year ago:
[One] of my goals for the year was to publish a few longer-form essays, possibly even getting paid for them. That certainly did not happen; I did not manage to publish even a single essay at Mere Orthodoxy.
Sounds… familiar. ↩︎
I originally accidentally left off the end of this sentence, everything after “sustainable”; the reader who caught it said he was choosing to read it “I should be at a sustainable wind farm”. I laughed out loud. ↩︎
I also wanted to make sure that my team at Olo heard from me and not from a random blog post! ↩︎
Read: hopefully, the TypeScript/Ember.js story will get a lot better this year. But those pieces have to fall into place, so we’ll see. ↩︎
Rethinking this site—and my own vocations—as the 2010s give way to the 2020s.
Assumed audience: others reflecting on the decade change, and especially other Christians thinking about their callings.
In the decade ahead, I’m ready to build on the foundations I set…
At the end of the 2020s, I will be in my early 40s, a third of the way through my adult life, and I hope that I will have done some things, made some differences that matter then…
I wrote yesterday in the final 2019 issue of Across the Sundering Seas:
One of the ways we can serve each other in the internet era is by just shutting up. That goes for books, the subject of our discussion a few years ago. It also goes for newsletters and blogs, though.
It is that responsibility which I feel, and it weighs heavily on me. I know what I want my public theological work both on this site specifically and when working in public generally to be: a better voice, helping people think in a more genuinely Christ-like way.
The whole thing is well worth your time. ↩︎
More central still in my life are my role as husband and father, but the shape of goals and aims in that space is and must be very different. ↩︎
Reflections on a month of writing.
Assumed audience: people who care about writing, accomplishing longer-term goals, or both.
So much for the stats rundown. What (if anything) did I learn from this process? A handful of things, it turns out!
I am slowly turning this around, figuring out how to get the studying wires in my brain connected once again after burnout short-circuited some of them. But it is painfully slow.
Getting these numbers required finishing this post, since they’re inclusive of its totals. Self-referentiality FTW! ↩︎
I count those in those totals because the point was not to publish but to write—and my notes are definitely writing. ↩︎
Yes, I was the kind of kid who liked writing papers even in late elementary school. Nerdy from the beginning. ↩︎
This is a recurring theme of my blog over the past few years. Given my love of writing, I don’t expect it to stop being a challenge for me. ↩︎
A new website design and implementation for 2020 and beyond—with a new title to boot!
Assumed audience: people who care about things like new website designs.
Welcome to Sympolymathesy—the fifth version of this website! I’m happy to have it in the world at last.
There are a few layers to this choice of name.
The primary mandate for this redesign, then, was to accommodate that variety. I am now sectioning the site by medium, instead of by subject:
In the midst of the refresh, I also switched up the ways I'm building and deploying the site.
As a result, each post will soon actually include a direct link to its source on GitHub, along with a "Suggest an edit" link that will allow people to send in corrections!
So now: time to get down to the business of actually filling up this space with words and photography and more!
Fourteen! Years! I find it rather astounding that I have been at this so long—longer than any other endeavor in my life. ↩︎
You know, once I have those set up: soon! It's a fairly high priority for me—but lower priority than just getting this out the door! ↩︎
I also went back and tweaked to do the same on v4 as one final act of curation and maintenance for the future! ↩︎
Hat tip to Chris Coyier on this—I learned it from this CSS Tricks article! ↩︎
Not a paid promotion, I promise. 😂 Netlify is just doing really great work and I can't help but enthuse about it. ↩︎
I'm attempting to learn Java. I'm finding it to be frustrating and quite foreign from most other languages. I have something here typed exactly as the book has it. According to the book, it should be working, but it's not. Any tips would be SO great! I really feel like an idiot asking for help with this simplistic of a program.
import java.util.Scanner;

public class Addition {
    public static void main(String[] args) {
        Scanner input == new Scanner(System.in);

        int num1; //first num to add
        int num2; //second num to add
        int sum;  //sum of added nums

        System.out.print("Enter first integer: ");
        num1 = input.nextInt(); //read first num from user

        System.out.print("Enter second integer: ");
        num2 = input.nextInt(); //read second num from user

        sum = num1 + num2; //add numbers
        System.out.printf("Sum is %d\n", sum); //display sum
    }
}
It's the lines
Scanner input == new Scanner(System.in);
num1 = input.nextInt(); //read first num from user
that NetBeans isn't liking. It says "variable input might not have been initialized". | https://www.daniweb.com/programming/software-development/threads/221781/truly-a-newb-issue | CC-MAIN-2017-51 | refinedweb | 176 | 61.93 |
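The flagged line is the culprit: == is Java's comparison operator, not assignment, so input never gets assigned, and every later use of it is then reported as possibly uninitialized. A single = fixes it. A minimal sketch of the corrected program, reading from a fixed string instead of System.in so it runs without typed input:

```java
import java.util.Scanner;

public class Addition {
    public static void main(String[] args) {
        // '=' assigns; '==' only compares two values and assigns nothing
        Scanner input = new Scanner("3 4"); // stand-in for new Scanner(System.in)

        int num1 = input.nextInt(); // read first num
        int num2 = input.nextInt(); // read second num
        int sum = num1 + num2;      // add numbers

        System.out.printf("Sum is %d%n", sum); // prints: Sum is 7
    }
}
```

With System.in restored in place of the fixed string, it behaves exactly as the book describes.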
What's a hero without a villain? How to add one to your Python game
In part five of this series on building a Python game from scratch, add a bad guy for your good guy to battle.
In the previous articles in this series (see part 1, part 2, part 3, and part 4), you learned how to use Pygame and Python to spawn a playable character. … You can download some pre-built assets from Open Game Art. Here are some of the assets I use:
Creating the enemy sprite
Yes, whether you realize it or not, you basically already know how to implement enemies. The process is very similar to creating a player sprite:
- Make a class so enemies can spawn.
- Create an update function so enemies can detect collisions.
- Create a move function so your enemy can roam around.
Start with the class. Conceptually, it's mostly the same as your Player class. You set an image or series of images, and you set the sprite's starting position.
Before continuing, make sure you have a graphic for your enemy, even if it's just a temporary one. Place the graphic in your game project's images directory (the same directory where you placed your player image).
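The class listing itself didn't survive in this copy of the article. A sketch of what an Enemy class along these lines typically looks like (the ALPHA color key, the images folder, and the counter attribute are assumptions carried over from the series' player sprite, not necessarily the article's exact code):

```python
import os
import pygame

ALPHA = (0, 255, 0)  # assumed transparency color key from the earlier articles

class Enemy(pygame.sprite.Sprite):
    """Spawn an enemy sprite at position x, y using image file img."""
    def __init__(self, x, y, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0  # used later to drive enemy movement
```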
A game looks a lot better … the sprite should appear. This means you can use this same enemy class to generate any number of enemy sprites anywhere in the game world. All you have to do is make a call to the class, and tell it which image to use and the X and Y coordinates of your desired spawn point.
In the setup section of your script, add this code:
enemy = Enemy(20,200,'yeti.png') # spawn enemy
enemy_list = pygame.sprite.Group() # create enemy group
enemy_list.add(enemy) # add enemy to group
In that sample code, 20 is the X position and 200 is the Y position. You might need to adjust these numbers, depending on how big your enemy sprite is, but try to get it to spawn in a place so that you can reach it with your player sprite. Yeti.png is the image used for the enemy.
Next, draw all enemies in the enemy group to the screen. Right now, you have only one enemy, but you can add more later if you want. As long as you add an enemy to the enemies group, it will be drawn to the screen during the main loop. The middle line is the new line you need to add:
player_list.draw(world)
enemy_list.draw(world) # refresh enemies
pygame.display.flip()
Launch your game. Your enemy appears in the game world at whatever X and Y coordinate you chose.
Level one
Your game is in its infancy, but you will probably want to add another level. … will be called along with each new level. It requires some modification so that each time you create a new level, you can create several enemies:
class Level():
    def bad(lvl,eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy at eloc
            enemy_list = pygame.sprite.Group()        # create enemy group
            enemy_list.add(enemy)                     # add enemy to group
        return enemy_list

eloc = [200,20]
You can adjust the distance and speed as needed.
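The move function's code is likewise missing from this copy. The usual pattern in this series walks the enemy a fixed distance one way, then back, driven by a counter. A dependency-free sketch of that logic, with placeholder numbers for distance and speed:

```python
class PatrollingEnemy:
    """Minimal stand-in for the enemy's move logic: walk right for a
    fixed distance, walk back left, then repeat."""
    def __init__(self, x):
        self.x = x
        self.counter = 0  # how far along the patrol cycle we are

    def move(self):
        distance = 80  # placeholder patrol distance
        speed = 8      # placeholder movement speed
        if 0 <= self.counter <= distance:
            self.x += speed      # walk right
        elif distance < self.counter <= distance * 2:
            self.x -= speed      # walk back left
        else:
            self.counter = 0     # restart the patrol
        self.counter += 1

e = PatrollingEnemy(200)
for _ in range(20):
    e.move()
# after 20 frames of walking right, e.x is 200 + 20 * 8 == 360
```

In the real game this logic lives in the Enemy class's move method, adjusts self.rect.x instead of a plain attribute, and gets called once per frame from the main loop.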
Will this code work if you launch your game now?
Of course not, and you probably … Ninja-IDE.
4 Comments
I've just started learning Python. What the books you can advise for me? I know a little Java.
This is a good article about python, im newbie and learning :=)
This is a good article about python , like this
Thanks, all. The next Python article in this series will be out very soon. | https://opensource.com/article/18/5/pygame-enemy | CC-MAIN-2020-05 | refinedweb | 626 | 73.68 |
Steem Developer Portal
PY: Search Accounts
How to pull a list of the active user accounts or trending tags from the blockchain using Python.
Full, runnable src of Search Accounts can be downloaded as part of the PY tutorials repository.
This tutorial will explain and show you how to access the Steem blockchain using the steem-python library to fetch a list of active authors or trending tags, starting the search from a specified value, and displaying the results on the console.
Intro
We are using the lookup_accounts and get_trending_tags functions that are built in to the official steem-python library. These functions allow us to query the Steem blockchain in order to retrieve either a list of active authors or a list of trending tags. The option is available to either get a complete list starting from the first value on the blockchain or starting the list from any other closest match string value as provided by the user. Both of these functions have only two parameters:
- account/aftertag - The string value from where to start the search. If this value is left empty the search will start from the first value available
- limit - The maximum number of names/tags that the query retrieves
Steps
- App setup - Library import and Steem class initialisation
- List selection - Selection of the type of list
- Get and display account names - Get a list of account names from the blockchain
- Get and display trending tags - Get a list of trending tags from the blockchain
1. App setup
In this tutorial we use 2 packages:
pick - helps us to select the query type interactively.
steem - steem-python library for interaction with the Blockchain.
First we import both libraries and initialize the Steem class:
from steem import Steem
from pick import pick

s = Steem()
2. List selection
The user is given the option of which list to create,
active accounts or
trending tags. We create this option list and setup
pick.
#choose list type title = 'Please select type of list:' options = ['Active Account names', 'Trending tags'] #get index and selected list name option, index = pick(options, title)
This will show the two options as a list to select in terminal/command prompt. From there we can determine which function to execute.
3. Get and display account names
Once the user selects the required list, a simple if statement is used to execute the relevant function. Based on the selection we then run the query. The parameters for the lookup_accounts function are captured in the if statement via the terminal/console.
if option=='Active Account names' :
    #capture starting account
    account = input("Enter account name to start search from: ")
    #input list limit
    limit = input("Enter max number of accounts to display: ")
    lists = s.lookup_accounts(account, limit)
    print('\n' + "List of " + option + '\n')
    print(*lists, sep='\n')
Once the list is generated it is displayed on the UI with line separators along with a heading of what list it is.
4. Get and display trending tags
The query for a list of trending tags is executed in the second part of the if statement. Again, the parameters for the query are captured via the terminal/console.
else :
    #capture starting tag
    aftertag = input("Enter tag name to start search from: ")
    #capture list limit
    limit = input("Enter max number of tags to display: ")
    lists = s.get_trending_tags(aftertag, limit)
    print('\n' + "List of " + option + '\n')
    for names in lists :
        print(names["name"])
The query returns an array of objects. We use the for loop to build a list of only the tag names from that array and then display the list on the UI with line separators. This creates an easy to read list of tags.
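The same name-extraction step can be tried with a hard-coded stand-in for the query result, no Steem connection needed (the field values here are made up):

```python
# Stand-in for the array of objects returned by s.get_trending_tags(...)
lists = [
    {"name": "photography", "top_posts": 1200},
    {"name": "life", "top_posts": 900},
    {"name": "steem", "top_posts": 750},
]

# Same loop as in the tutorial: print only the "name" field of each object
for names in lists:
    print(names["name"])
```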
That’s it!
To Run the tutorial
- review dev requirements
- clone this repo
cd tutorials/15_search_accounts
pip install -r requirements.txt
python index.py
- After a few moments, you should see output in terminal/command prompt screen. | https://developers.steem.io/tutorials-python/search_accounts | CC-MAIN-2018-47 | refinedweb | 650 | 58.21 |
When it comes to developing a Java EE app in Eclipse you have two choices. You can use the native project structure of Eclipse or you can use a Maven managed project. If you are using a third party framework like Spring or Struts2, you might find that Maven is the easier of the two options. I found Maven is also a better option when your Java EE project depends on other in-house Java projects. If you are not sure if you should move to Maven, this tutorial will be useful.
There are tons of tutorials on how to create a basic web or Spring MVC project using Maven. There are a few things missing. Very few properly show how to do this from Eclipse and almost none I have seen show how to build a full stack Java EE 6 app with CDI, JPA and other API. Hopefully, this tutorial will fill that gap.
Software Setup
I am using Eclipse Kepler. It comes with Maven already integrated. You do not need to install any extra plug-ins or have Maven installed on the OS. Although, if you choose to build your project from the command line then you will need to install Maven on the OS. We will get to command line build later in this tutorial.
Since I will be using full stack Java EE, I am using TomEE as the test server. You can use JBoss as well. But, Tomcat will not work.
Create a Maven Managed Java EE 6 Web Project
In Eclipse, choose File > New > Project from the menubar.
Choose Maven > Maven Project and click Next.
Accept the defaults. Click Next.
Now, we will need to choose an archetype for the project. The good old “maven-archetype-webapp” is a good start for any web project. We will use it.
Enter webapp in the Filter. Select maven-archetype-webapp. Click Next.
Now, we will need to enter the coordinates of this Maven artifact.
Enter a group ID and an artifact ID. The artifact ID will also be used as the Eclipse project name. In our case, the name of the project will be SimpleWeb. Also, if you build the WAR file for the project, Maven will call it SimpleWeb.war.
You don’t need to worry about the package name.
Click Finish. This will create the SimpleWeb project.
Unfortunately, the project is woefully incomplete. We will need to do a few things before we can start coding.
First, we will need to add dependency for the Java EE API.
Double click pom.xml of the project to open it in the editor.
At the bottom of the editor, click the pom.xml tab.
Add the following lines within the <dependencies></dependencies> section.
<dependency> <groupId>javax</groupId> <artifactId>javaee-web-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency>
Note that, we set the scope to “provided”. We do not want to ship the JAR file with the WAR. We expect the server to provide the Java EE API classes in the run time classpath.
Save changes.
Finally, we will need to create the directory structure where Java code will go.
Within the src/main folder, create a folder called java. As per Maven’s conventions, all Java code should go in that folder.
Write Some Code
Let’s first write a basic Servlet.
Right click SimpleWeb project and select New > Class.
Enter a package and class name as shown above.
Note that Eclipse has correctly chosen the right source folder.
Click Finish.
Extend HttpServlet and add the @WebServlet annotation as shown below.
@WebServlet("/hello") public class HelloServlet extends HttpServlet { }
Add a simple doGet method as follows.
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { System.out.println("Servlet is called."); }
Organize imports. Save changes. There should be no compilation error. Maven has added all necessary Java EE API classes to the build path.
Test the Application
In the Servers view, right click TomEE and select Add Remove. Add SimpleWeb project to the server this way.
Restart the server.
Using curl, execute the Servlet:
curl
The console should show the output from the server.
If you change any code in the web project, Eclipse will automatically redeploy the app and restart it within a few seconds.
Build a Utility Library
If you have a very large web application it can help to develop some of the code in other projects. This code may need to access Java EE API such as CDI, EJB and JPA. Eclipse doesn’t provide any easy way to create a Java project that depends on Java EE API. Also, consider the challenge of constantly building these utility projects and adding the resulting JAR files to the WEB-INF/lib folder of your web project. Eclipse Maven plug-in really shines in this department.
Create another Maven project. This time, choose maven-archetype-quickstart as the archetype.
Enter SimpleUtility as the artifact ID. This will also be the name of the project.
Open the pom.xml file and add the dependency for the Java EE 6 API the same way we did for the web project.
<dependency> <groupId>javax</groupId> <artifactId>javaee-web-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency>
In the SimpleUtility project, add a new class called com.mobiarch.util.Greeter.
Add the following method to the class.
public void sayHello() { System.out.println("Hello from utility library."); }
Save changes.
We intend to inject this class into our Servlet. For CDI to work, we need to add beans.xml to the utility JAR file. In a Maven project, the file needs to be in src/main/resources/META-INF/beans.xml.
Create an empty beans.xml file as shown above. After a build, the file will be in the META-INF folder of the JAR file.
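An empty file is enough to enable CDI; if you prefer an explicit, schema-declared version, the conventional Java EE 6 skeleton is equivalent here:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
</beans>
```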
Use the Utility Library
Back in the SimpleWeb project, open pom.xml and add the dependency to the utility library project.
<dependency> <groupId>com.mobiarch</groupId> <artifactId>SimpleUtility</artifactId> <version>0.0.1-SNAPSHOT</version> </dependency>
Save changes.
Open the HelloServlet class. Add an injected instance of the Greeter class as shown below.
public class HelloServlet extends HttpServlet { @Inject Greeter greeter; ... }
From the doGet method, use the Greeter.
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { greeter.sayHello(); }
Save changes.
Before we can test the changes, we have to do a bit of project management. Every time dependency between projects changes you need to update the projects. This is a pain in the neck and hopefully will go away some day. Right click SimpleWeb and select Maven > Update Project.
Select both SimpleUtility and SimpleWeb and click OK.
Restart the server.
Use curl to execute the Servlet.
curl
Console should show the output from the injected class.
A huge advantage of Maven based utility project is that you can update the code in these projects and Eclipse will automatically redeploy the web application and restart it within a few seconds. Try changing the message printed by the Greeter class and re-execute the URL from curl. You should see changes.
Building the Final Package
At the end of development, you need to produce a WAR file. The WAR file should include all dependent JAR files in WEB-INF/lib unless they are marked as provided. This can be done from within Eclipse. More commonly, this will be done from a command line in a separate build machine.
First, we will learn how to package from Eclipse. The depndent projects need to be built first and installed in the local Maven repository. Right click SimpleUtility project and select Run As > Maven Install. Next, do the same for the SimpleWeb project.
If you expand the target folder, you will see the WAR file. The WAR file will include the JAR file for SimpleUtility.
Now, we will do packaging from the command line.
Install Maven in your OS. For example, in Ubuntu:
sudo apt-get install maven2
Then, from a terminal window, go into the SimpleUtility and run:
mvn clean install
This will do a clean build and install the articfact in the local repository.
Go into SimpleWeb and run:
mvn clean install
The WAR file will be created in the target folder.
Benefits of Maven Based Eclipse Projects
In summary, these are benefits. All of them are quite significant in my mind:
- Ability to build projects from command line without the need for Eclipse. This is essential for an automated git clone/pull and clean build of the projects.
- Ability to easily write Java code in external projects and use these projects from a web application. The Java projects can have full access to the Java EE API. Any change made to the external Java projects are reflected immediately during testing. | https://mobiarch.wordpress.com/2014/08/20/with-maven-eclipse-and-java-ee/ | CC-MAIN-2018-13 | refinedweb | 1,463 | 77.13 |
Hi everyone. I am attempting to configure my Jetson Nano to run a Python script once the device has power. We’re trying to use the Jetson as a vision coprocessor for our team’s robot, and so we don’t want to have login each time we want the script to run.
I’ve followed the thread here and set up a systemd service that runs python3 /usr/local/bin/main.py under the user robot.
However, I’m running into an error. The Python code, when run by the service, is unable to import my Python module pynetworktables. The terminal prints out ModuleNotFoundError: No module named 'networktables' when I check using sudo systemctl status robot. When not run by the service, when I simply say python3 /usr/local/bin/main.py, it runs without error.
Here is the beginning of main.py:
import cv2
from networktables import NetworkTables
Oddly, cv2 works fine. It’s successfully getting the OpenCV library. NetworkTables is not. I installed the library using pip3 install pynetworktables (from these instructions).
So now I guess my question is, how do I fix this issue? Do I need to install the library in a different way? Or use docker or something else like that? I’m guessing that part of the issue is that I’ve installed the library under my regular user, not robot, and so it can’t access it. (?)
Any help is appreciated. | https://forums.developer.nvidia.com/t/run-python-script-on-boot-without-logging-in-but-with-the-installed-python-libraries/197120 | CC-MAIN-2022-27 | refinedweb | 238 | 67.55 |
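For what it's worth, that guess matches the usual cause: pip3 install without sudo drops the package into the installing user's ~/.local/lib/pythonX.Y/site-packages, which a service running as robot never looks at. Two common fixes, sketched with illustrative paths (adjust the user name and Python version to your system):

```ini
# Option 1: install the package system-wide instead
#   sudo pip3 install pynetworktables

# Option 2: point the service at the existing per-user install
# (in the unit file, e.g. /etc/systemd/system/robot.service)
[Service]
User=robot
Environment=PYTHONPATH=/home/youruser/.local/lib/python3.6/site-packages
ExecStart=/usr/bin/python3 /usr/local/bin/main.py
```

After editing the unit, run sudo systemctl daemon-reload and restart the service.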
explain_fopen_or_die - open file and report errors
#include <libexplain/fopen.h>

FILE *explain_fopen_or_die(const char *pathname, const char *flags);
The explain_fopen_or_die() function opens the file whose name is the string pointed to by pathname and associates a stream with it. See fopen(3) for more information. This is a quick and simple way for programs to consistently report file open errors in a detailed fashion.
Upon successful completion explain_fopen_or_die returns a FILE pointer. If an error occurs, explain_fopen will be called to explain the error, which will be printed onto stderr, and then the process will terminate by calling exit(EXIT_FAILURE).
fopen(3)
    stream open functions
explain_fopen(3)
    explain fopen(3) errors
exit(2)
    terminate the calling process
libexplain version Copyright (C) 2008 Peter Miller
Written by Peter Miller <pmiller@opensource.org.au>
explain_fopen_or_die(3)
The people behind the Inspiration Mars Foundation -- which on Wednesday announced plans to send a manned spacecraft on a 510-day fly-by mission to Mars -- say this on their website: "We are steadfastly committed to the safety, health and overall well-being of our crew. We will only fly this mission if we are convinced that it is safe to do." Let's hope that's true, because launching humans on such a long and faraway mission into space before we're technologically capable and reasonably certain about the health effects of such a prolonged journey just isn't worth it, at least in my opinion. The foundation, headed by U.S. multimillionaire and first space tourist Dennis Tito, wants to send a two-person crew (a man and a woman) to Mars in 2018, when a rare planetary alignment would allow for a relatively short round-trip of about 500 days. The craft wouldn't even go into Mars orbit, but instead would fly within 100 miles and then "sling-shot" its way back toward Earth. The problem is, even while the Inspiration Mars Foundation assures it won't go through with the mission if it is unconvinced it would be safe, Tito tells Space.com that the two-person crew essentially are going to be guinea pigs.
And let's not forget all the other things that happen to the human body in space. A Russian experiment in which participants lived in the equivalent of deep space for 17 months showed that long trips in space can have drastic effects on sleep patterns and fitness. Given that prolonged sitting can be fatal, this is something to think about. Then there's bone loss, heart atrophy, nausea and headaches -- all conditions of modern space travel. While we're at it, let's throw in the recent NASA-supported study reporting that space travel is harmful to the brain and could accelerate Alzheimer's disease. And the "impact of radiation," as Tito puts it, is described in Wikipedia:
The potential acute and chronic health effects of space radiation, as with other ionizing radiation exposures, involve both direct damage to DNA and indirect effects due to generation of reactive oxygen species. ...By one NASA estimate, for each year that astronauts spend in deep space, about one-third of their DNA will be hit directly by heavy ions. Thus, loss of critical cells in highly complex and organized functional structures like the central nervous system (CNS) could result in compromised astronaut function, such as changes in sensory perception, proprioception, and behavior or longer term decrements in cognitive and behavioral functions.
So you lift off from Earth as a fully functioning human astronaut and you return (if you return) as ... what?

I've said it before, and I'll say it again: As eager as I am to see us explore the stars, rushing into it is only going to lead to unnecessary lives lost. I understand exploration requires risk, but it shouldn't require recklessness.

But that's just me. What do readers think? In our eagerness to go to Mars, are we rushing into disaster?
UX Intensive is a four-day workshop series for designers wanting to take their practice to the next level.
We examine the four key elements that contribute to a successful interactive experience: Design Strategy, Design Research, Interaction Design and Digital Service Design. Each day is led by our team of experts and includes fun, hands-on activities designed to build skills and provide a framework that participants can immediately put into practice. Get ready to roll up your sleeves and get to work.
We are happy to offer anyone from IxDA a 15% discount when you register with the discount code "IXDA". For more information and to register, visit the UXI Berlin website here:
Cheers!
h
The .h extension denotes a header file, used mainly by the C and C++ programming languages (Java native-interface tooling can also generate them). You can open .h files with development environments such as Borland C++ or Visual C++, or simply view them with a text editor like Notepad.

Header files let your code use functions and libraries that are declared elsewhere. To use them, it is enough to include the relevant file at the beginning of the code you write:
For example C++ code:
#include <iostream>
#include <time.h>
What to use to open .h files
Borland C++, Visual C++, Java, C++ Builder
{-# LANGUAGE RecursiveDo #-}

module Data.Future
  ( Future(..), newFuture, future
  , never, race, race'
  , runFuture
  ) where

import Control.Concurrent
import Data.Monoid (Monoid(..))
import Control.Applicative
import Control.Monad (join)
import System.IO.Unsafe (unsafePerformIO)  -- needed by 'future', 'never', 'race'

newtype Future a = Future
  { force :: IO a  -- ^ Get a future value.  Blocks until the value is
                   -- available.  No side-effect.
  }

-- | Make a 'Future' and a way to fill it.  The filler should be invoked
-- only once.  Later fillings may block.
newFuture :: IO (Future a, a -> IO ())
newFuture = do
  v <- newEmptyMVar
  return (Future (readMVar v), putMVar v)

-- | Make a 'Future', given a way to compute a (lazy) value.
future :: IO a -> Future a
future mka = unsafePerformIO $ do
  (fut,snk) <- newFuture
  -- let snk' a = putStrLn "sink" >> snk a
  -- putStrLn "fork"
  forkIO $ mka >>= snk
  return fut
{-# NOINLINE future #-}

instance Functor Future where
  fmap f (Future get) = future (fmap f get)

instance Applicative Future where
  pure a = Future (pure a)
  Future getf <*> Future getx = future (getf <*> getx)

-- Note Applicative's pure uses 'Future' as an optimization over
-- 'future'.  No thread or MVar.

instance Monad Future where
  return = pure
  Future geta >>= h = future (geta >>= force . h)

instance Monoid (Future a) where
  mempty  = never
  mappend = race'

-- | A future that will never happen
never :: Future a
never = fst (unsafePerformIO newFuture)
{-# NOINLINE never #-}

-- | A future equal to the earlier available of two given futures.  See also 'race\''.
race :: Future a -> Future a -> Future a
Future geta `race` Future getb = unsafePerformIO $ do
  (w,snk) <- newFuture
  let run get = forkIO $ get >>= snk
  run geta
  run getb
  return w
{-# NOINLINE race #-}

-- | Like 'race', but the winner kills the loser's thread.
race' :: Future a -> Future a -> Future a
Future geta `race'` Future getb = unsafePerformIO $ do
  (w,snk) <- newFuture
  let run get tid = forkIO $ do
        a <- get
        killThread tid
        snk a
  mdo ta <- run geta tb
      tb <- run getb ta
      return ()
  return w
{-# NOINLINE race' #-}

-- TODO: make race & race' deterministic, using explicit times.  Figure
-- out how one thread can inquire whether the other is available by a
-- given time, and if so, what time.

-- | Run an 'IO'-action-valued 'Future'.
runFuture :: Future (IO ()) -> IO ()
runFuture = join . force
Let me start by saying this is my first post here at daniweb and I am pretty new at C++. With that said I will dive right in.
I have a program due next week and am trying to get it hammered out. I have to create a line editor that will keep the entire text on a linked list, one line in a separate node. (I think I have that part.) It starts with "Edit" and entering the file name to open. (I think I have that as well.) After this line, a prompt should appear with the line number (i.e. 1>). If the letter 'I' is entered with a number 'n' following it, then insert the text that follows before line 'n'. If 'D' is entered with two numbers 'n' and 'm', one number 'n', or no number following it, then delete lines 'n'-'m', line 'n', or the current line. Do the same with command 'L' for listing lines. If 'A' is entered then append to the existing line, and finally if 'E' is entered, exit and save to a text file.
I have most of the functions put together, just not real sure how to approach the prompts and such for the insert, delete, list, and append. (or exit for that matter). My code is below. I appreciate any help with this.
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <string>
#include <stdio.h>
using namespace std;

struct node{
    node *next;
    string sentence;
    //char sentence[80];
};

void append (struct node **, string);
//void append (struct node **, char);

node *add_node(node *list, string x){ //insert at front or in front of current line
//node *add_node(node *list, char x){
    node *p = new node;
    p->next = list;
    p->sentence = x;
    return p;
}

node *find_item(node *list, string x){
//node *find_item(node *list, char x){
    node *p = list;
    while (p != NULL){
        if (p->sentence == x) return p;
        p = p->next;
    }
    return NULL;
}

node *find_item_before(node *list, string x){
//node *find_item_before (node *list, char x){
    node *p = list, *q = list;
    while (q != NULL){
        if (q->sentence == x) return p;
        p = q;
        q = q->next;
    }
    return NULL;
}

node *delete_node (node *list, string x){ //delete line
//node *delete_node (node *list, char x){
    node *p = find_item_before(list, x);
    if (p != NULL){
        node *q;
        if ((p == list) && (p->sentence == x)){
            q = p->next;
            delete p;
            return q;
        }else{
            q = (p->next)->next;
            delete p->next;
            p->next = q;
        }
    }else return list;
    return list;
}

node *insert_after (node *list, string x, string y){
//node *insert_after (node *list, char x, char y){
    node *p = find_item(list, x);
    if (p !=NULL){
        node *q = p->next;
        p->next = new node;
        (p->next)->next = q;
        (p->next)->sentence = y;
        return p->next;
    }
    return NULL;
}

void append (struct node **p, string x)
//void append (struct node **p, char x)
{
    struct node *tmp, *r;
    if (*p == NULL)
    {
        tmp = new node;
        tmp->sentence = x;
        tmp->next = NULL;
        *p = tmp;
    }else{
        tmp = *p;
        while (tmp->next !=NULL)
            tmp = tmp->next;
        r = new node;
        r->sentence = x;
        r->next = NULL;
        tmp->next = r;
    }
}

void print_list(node *list){ // print lines
    node *p = list;
    int l = 1;
    //cout << l << ">";
    while (p != NULL){
        cout << l << ">";
        cout << p->sentence << endl;
        p = p->next;
        ++l;
    }
    //cout << "tail." << endl;
    return;
}

int main(){
    char fName[20];
    cout << "Edit: ";
    cin >> fName;
    ofstream outfile;
    outfile.open(fName,ios::out);
    if (outfile.fail())
        outfile.open(fName,ios::out);
    string temp;
    //char temp[80];
    node *lst = new node;
    cout << "1> ";
    cin >> temp;
    getline (cin, temp);
    //cin >> temp;
    lst->sentence = temp;
    //lst->sentence = "The first sentence";
    lst->next = NULL;
    print_list(lst);

    /*cout << "now i'll add a sentence to the front" << endl;
    lst = add_node(lst, "try here");
    print_list(lst);

    cout << "try and add another sentence to the front" << endl;
    lst = add_node(lst, "another sentence");
    print_list(lst);

    cout << "try and append sentence" << endl;
    append(&lst, "to this sentence");
    print_list(lst);

    cout << "try and delete node" << endl;
    lst = delete_node(lst, "try here");
    print_list(lst);

    cout << "try and delete another node" << endl;
    lst = delete_node(lst, "try again");
    print_list(lst);*/

    delete lst;
    cout << "done" << endl;
    cin.ignore();
    return 0;
}
print LaTeX in notebook from script
I've read through
but still can't seem to find the equivalent to using "view" or "pretty-print" to display equations as LaTeX in a notebook cell when running a Python script from the notebook. (I do have "Typeset" checked at the start of the notebook.) For example, suppose my file foo.py has a function:
def show_all(self):
    y = var('y')
    x = 2.0*y**2
    view(x)
    pretty_print(x)
    show(x)
    JSMath()(x)
    JSMath().eval(x)
    print JSMath()(x)
    print JSMath().eval(x)
    html(JSMath()(x))
    html(JSMath().eval(x))
    print html(JSMath()(x))
    print html(JSMath().eval(x))
Running this function from a notebook cell returns lines of the form either
\newcommand{\Bold}[1]{\mathbf{#1}}2.0 \, y^{2}
or
\newcommand{\Bold}[1]{\mathbf{#1}}2.0 \, y^{2} </font></html>
But I just want a nice LaTeX display of 2y**2 equivalent to typing (from a notebook cell)
view(2.0*y**2)
which returns
2.00000000000000y^2 (prettier than this of course)
So I'm still missing the right combination of LaTeX library functions. As a follow-up question, is dealing with LaTeX macros the only way to get the "2.00000000000" in the above to display as 2.0, 2.00, etc.? | https://ask.sagemath.org/question/9357/print-latex-in-notebook-from-script/ | CC-MAIN-2020-45 | refinedweb | 215 | 69.07 |
Scripts Deep Dive
In this section we’ll take a more in-depth view at scripts and how they can be used most effectively for CloudShell orchestration.
How CloudShell handles scripts
CloudShell executes a Python script in a very straightforward way: it simply runs it with a Python executable. To send information to the script, CloudShell sets environment variables in the scope of the script process. These environment variables include information about the sandbox reservation, as well as the script parameters. The script's standard output is returned as the command result. If an exception is raised, or if a non-zero process result code is returned by the script, the execution will be considered a failure.
Using a main function and packaging multiple files
As scripts become more complex, instead of structuring them as one big function, it is advisable to create a main function and separate the rest of the logic to different functions. Python requires including some boilerplate code in addition to the main function to make this work. Here is some example code demonstrating how to use main functions with scripts:
import cloudshell.helpers.scripts.cloudshell_scripts_helpers as helpers
import os

def print_keys():
    for key in os.environ:
        print key + " : " + os.environ[key]

def print_app_names(session):
    reservation_details = session.GetReservationDetails().ReservationDescription
    for app in reservation_details.Apps:
        print app.Name

def main():
    session = helpers.get_api_session()
    print_keys()
    print_app_names(session)

if __name__ == "__main__":
    main()
As you're writing more complex orchestration scripts, it may become prudent to also separate the code into multiple files. To do that, we can take advantage of Python's ability to support executing .zip archives containing multiple scripts. The only requirement is that one of the files is named __main__.py, which is how the entry point of the Python process is determined.
Referencing other packages
As opposed to Shell drivers, CloudShell doesn't look for a requirements.txt file for scripts and doesn't attempt to retrieve dependencies from PyPI. The script dependencies must be installed on the Python environment used by the Execution Server.
On Windows machines, the ES will by default use the Python included with the ES installation. It can be found in the \Python\2.7.10 directory under the ES installation folder. If you're using Linux, the Execution Server will use the default Python configured in the OS. In both cases it is possible, however, to specify a different Python environment. To do so, add the following key to the execution server customer.config file:
<add key="ScriptRunnerExecutablePath" value="PATH_TO_EXECUTABLE" />
To install a dependency, run the following using the Python executable referenced by the ES:
python -m pip install [PACKAGE_NAME]
As dependencies can get complex, it is recommended to keep a centralized requirements.txt file where you can catalog the requirements of all of the orchestration scripts and add new ones if needed. This will make it easier both to keep track of the dependencies used by the orchestration scripts, avoiding version conflicts, and to deploy new Execution Servers. Instead of installing each dependency independently you'll then be able to run:
python -m pip install -r [REQUIREMENTS_FILE_PATH]
Setup and teardown scripts
Setup and teardown are a special types of orchestration scripts. There are two things that make them special:
- They can’t have any inputs as they are being launched automatically
- If you use the default ‘Python Setup & Teardown’ driver, then simply including a teardown or setup script in the reservation and setting a duration for the setup/teardown is enough for CloudShell to launch it.
To set a script as a teardown or setup script, you need to edit it from the script management page. One of the fields allows you to select the Script Type. By choosing 'Setup/Teardown' the script will take on that special behavior. Notice that you'll not be able to run it separately from the environment's built-in setup and teardown commands, and you won't be able to add any inputs to it.
Debugging scripts
CloudShell includes some helper functions to make it easier to debug a script by running it on real sandbox reservation data. The helper functions allow the script to “attach” to a CloudShell Sandbox, by filling in all of the environment variables of the script so that the same information is available to it as would be if CloudShell launched it.
To attach to a CloudShell sandbox, first create a sandbox reservation, then add the following code and fill in the required data for the function parameters.
import cloudshell.helpers.scripts.cloudshell_dev_helpers as dev_helpers

dev_helpers.attach_to_cloudshell_as(user="CLOUDSHELL_USER",
                                    password="CLOUDSHELL_PWD",
                                    domain="DOMAIN",
                                    reservation_id="RESERVATION_ID",
                                    server_address="ADDRESS",
                                    command_parameters={"NAME": "VALUE"})
If we include the above code in the example script we provided earlier, we'll be able to run it locally as well as from the CloudShell sandbox. The attach_to_cloudshell_as function will populate all of the blueprint data as CloudShell would, so from the code's perspective it doesn't make a difference where it's being run from. Furthermore, the code will ignore the special attach_to_cloudshell_as function if you run it from CloudShell, so that there is no adverse effect to leaving the statement there.
One drawback of using this strategy is that it's probably not a good idea to leave your CloudShell credentials in the code itself in plain sight. That is why we recommend you use a similar function which takes the same information from a file. Make sure to add that file to the .gitignore list so that it doesn't get on source control, of course. The following code will have the same effect as the lines above, only it will look for the information in a JSON file named quali_config.json which should be in the project root.
import cloudshell.helpers.scripts.cloudshell_dev_helpers as dev_helpers

dev_helpers.attach_to_cloudshell()
The quali_config.json should have the following structure:
{ "user" : "USER", "password" : "PASSWORD", "domain" : "DOMAIN", "server_address" : "SERVER_ADDRESS", "cloudshell_api_port" :"CLOUDSHELL_API_PORT", "reservation_id" : "reservation_id", "command_parameters" : { "PARAM_NAME" : "PARAM_VALUE" } } | https://devguide.quali.com/orchestration/7.1.0/scripts-deep-dive.html | CC-MAIN-2018-34 | refinedweb | 991 | 53.92 |
Last week I salvaged a magnetic sensor from a stationary bike and connected it to my Arduino Uno in order to analyze the sensor’s behavior with the ADC converters.
I wanted to capture a good enough signal, so I searched about the Arduino capabilities:
From analogRead() reference description:
It takes about 100 microseconds (0.0001 s) to read an analog input, so the maximum reading rate is about 10,000 times a second.
In order to include the overhead of the code, I decided to read a value every 200 μs. The ADC converters read a 10-bit value, so if I pack the 10-bit values in 2 bytes, I need 80,000 bps to transmit the readings through the serial port. The right setting for the "Serial.begin()" Arduino function will then be 115200, which is the smallest standard value bigger than 80000. With this information, I wrote the following sketch:
enum {
  ANALOG_INPUT_PIN = A0,
  UPDATE_INTERVAL_MICROS = 200,
  MODE_2_CONSECUTIVE_READS = 5000,
};

unsigned long next_update;
int reads;
int mode;

void setup() {
  Serial.begin(115200);
  analogReference(EXTERNAL);
  mode = 0;
}

void mode1() {
  int analog_value = analogRead(ANALOG_INPUT_PIN);
  Serial.println(analog_value);
  delay(100);
}

void mode2() {
  unsigned long current_micros;
  current_micros = micros();
  if(current_micros >= next_update) {
    byte b1, b2;
    int analog_value;
    next_update = next_update + UPDATE_INTERVAL_MICROS;
    analog_value = analogRead(ANALOG_INPUT_PIN);
    reads--;
    b1 = analog_value & 0xFF;
    b2 = ( analog_value >> 8 ) & 0xFF;
    Serial.write(b1);
    Serial.write(b2);
    if(reads <= 0) {
      mode = 0;
      Serial.write(0xFF);
      Serial.write(0xFF);
    }
  }
}

void loop() {
  if(Serial.available() > 0) {
    int rcv;
    rcv = Serial.read();
    if(rcv == '1') {
      mode = 1;
    } else if(rcv == '2') {
      mode = 2;
      reads = MODE_2_CONSECUTIVE_READS;
      next_update = micros();
    } else if(rcv == '0') {
      if(mode == 2) {
        Serial.write(0xFF);
        Serial.write(0xFF);
      }
      mode = 0;
    }
  }
  switch(mode) {
    case 1: mode1(); break;
    case 2: mode2(); break;
    default: delay(200); break;
  }
}
The program has two main working modes that can be enabled/disabled by transmitting a character to the serial port. The mode “1” consists of reading the analog pin and sending the decimal value through the serial port, and it’s for rough readings and debug, the mode “2” reads 5000 times the analog pin, once every 200 microseconds, in order to have a full second of captured signal; it transmits the data through the serial port for each sample that is read.
From the PC side I setup a python script that uses the pySerial library (Debian/Ubuntu package: python-serial) to communicate with Arduino, and then displays the data on a plot using matplotlib (Debian/Ubuntu package: python-matplotlib). A tricky fact is that when pySerial opens the serial port, the Arduino is reset, and the script must wait some time (1.5 seconds is enough) before communicating.
#!/usr/bin/python

import time
import serial
import matplotlib.pyplot as plt

max_valid_read = 1023
reference_voltage = 3.3
update_interval = 2e-4

ser = serial.Serial(
    port = '/dev/ttyACM0',
    baudrate = 115200,
)
time.sleep(1.5)
ser.flush()
ser.write('2')

r = 0
r_time = 0
analog_reads = []
analog_times = []
while r <= max_valid_read:
    b1 = ord(ser.read(1))
    b2 = ord(ser.read(1))
    r = b1 + b2*256
    if(r <= max_valid_read):
        analog_reads.append((r*reference_voltage)/max_valid_read)
        analog_times.append(r_time)
        r_time = r_time + update_interval

plt.plot(analog_times, analog_reads)
plt.show()
The script sends a character that enables mode "2" to the Arduino, which then sends the analog data back to the PC. The collected data is then plotted on a figure like in the screenshot.
The figure illustrates the data captured while I waved a magnet in front of the sensor. In the normal working condition the sensor’s connectors are open, so the readings are high because of the pull-up resistors that I put. When a magnetic field is applied, the connectors are closed, so the readings are low because the analog pin is connected to ground. According to its behavior this sensor can actually be connected to the digital inputs of Arduino, but in the meantime I got some useful tools ready to analyze analog input.
Brandon
2011/09/29
Thanks, this was very helpful. I am fairly new to the Arduino and hadn't been able to find good code and an accompanying explanation of how to use Python with the Arduino's serial data. While the Arduino obviously has limitations, I'm pretty excited to have a cheap and simple tool for data acquisition.
coloneldeguerlass
2015/07/14
Well, this was very helpful and I managed, with Patolin’s blog (for the texas side using Energia + pure C) and your help -using pyserial and Matplotlib- , to get a very cheap “oscilloscope”. Differences are the following:
* it samples at 1M samples per second (maximum lm4F120 can achieve)
* I did not worry about the transmission speed and therefore chose to transmit ASCII -very useful with gtkterm under Mageia5 linux and Teraterm under XP to debug- . That was because it is not necessary to transmi ‘on the fly’ : samples are stored into an array, because lm4F120 has much more RAM than Arduino UNO.
When transmission of a 1024-sample buffer has ended, the interrupt routine is again allowed to sample (it always samples) and store (when the buffer is full, it does not store any more into the buffer…)
on the python side, port and speed can be specified -syntax is gtkterm like -s -p – :
from sys import argv

Speed = 115200
Port = '/dev/ttyUSB0'
iarg = 1
for a in argv:
    if (a == '-s'):
        Speed = int(argv[iarg])
    if (a == '-p'):
        Port = argv[iarg]
    iarg = iarg + 1
print Speed, Port

reference_voltage = 3.3
update_interval = 1000e-9
taille = 1024
# after it is like balau's script
you may notice (taille) the size of the buffer is much shorter than 5K (a screen is less than 1000 pixel is X resolution).
Advantages are one can achieve a very cheap ‘oscilloscope’ with a comfortable number of samples per second (Stellaris Tiva is about 20 E$): sound card oscilloscopes -no extra cost; no over voltage protection- sample 20 times ‘slower’ 44Ks/sec (arduino uno is about 13 K samples /sec IIRC, but protects the PC with a 10 E$ -with clones- extra cost)
gyan
2016/02/09
Please explain the significance of the line "r = b1 + b2*256"…
Balau
2016/02/09
We have to send and receive a value contained in 2 bytes, but can only send one byte at a time. So we split it in two bytes in function mode1, and reconstruct the value in python with the line you mention. b1 contains the least significant byte, that weighs less, and b2 contains the most significant byte, that weighs 256 times b1. See also endianness
gyan
2016/02/10
Thanks… could you please explain how the calculation is done, i.e. the underlying math of why only 256 is multiplied? And if I have more bytes, is it like "b1+b2*256+b3*256+b4*256" and so on…?
coloneldeguerlass
2016/02/10
If you have more bytes to receive (very unlikely with an AD converter), and you then put them, say, in a 32-bit number result32, the formula should look like:
result32 = b1 + 256*(b2 + 256*(b3 + 256*b4))
- see for the way of having efficient computations and avoiding knowing each and every power of 2…
Welcome to my Factory design pattern tutorial. This is a continuation of my design patterns video tutorial.
You use the Factory design pattern when you want to define the class of an object at runtime. It also allows you to encapsulate object creation so that you can keep all object creation code in one place.
The Factory pattern is presented in many ways to help you learn. Refer to the code that follows the video to completely understand it.
Code from the Video
ENEMYSHIP.JAVA
public abstract class EnemyShip {

    private String name;
    private double speed;
    private double directionX;
    private double directionY;
    private double amtDamage;

    public String getName() { return name; }

    public void setName(String newName) { name = newName; }

    public double getDamage() { return amtDamage; }

    public void setDamage(double newDamage) { amtDamage = newDamage; }

    public void followHeroShip(){
        System.out.println(getName() + " is following the hero");
    }

    public void displayEnemyShip(){
        System.out.println(getName() + " is on the screen");
    }

    public void enemyShipShoots() {
        System.out.println(getName() + " attacks and does " + getDamage()
            + " damage to hero");
    }
}
UFOENEMYSHIP.JAVA
public class UFOEnemyShip extends EnemyShip {

    public UFOEnemyShip(){
        setName("UFO Enemy Ship");
        setDamage(20.0);
    }
}
ROCKETENEMYSHIP.JAVA
public class RocketEnemyShip extends EnemyShip {

    public RocketEnemyShip(){
        setName("Rocket Enemy Ship");
        setDamage(10.0);
    }
}
ENEMYSHIPTESTING.JAVA
import java.util.Scanner;

public class EnemyShipTesting {

    public static void main(String[] args){

        // Create the factory object
        EnemyShipFactory shipFactory = new EnemyShipFactory();

        // Enemy ship object
        EnemyShip theEnemy = null;

        Scanner userInput = new Scanner(System.in);

        System.out.print("What type of ship? (U / R / B)");

        if (userInput.hasNextLine()){
            String typeOfShip = userInput.nextLine();
            theEnemy = shipFactory.makeEnemyShip(typeOfShip);

            if(theEnemy != null){
                doStuffEnemy(theEnemy);
            } else System.out.print("Please enter U, R, or B next time");
        }

        /*
        EnemyShip theEnemy = null;

        // Old way of creating objects
        // When we use new we are not being dynamic
        EnemyShip ufoShip = new UFOEnemyShip();
        doStuffEnemy(ufoShip);
        System.out.print("\n");

        // -----------------------------------------

        // This allows me to make the program more dynamic
        // It doesn't close the code from being modified
        // and that is bad!

        // Defines an input stream to watch: keyboard
        Scanner userInput = new Scanner(System.in);

        String enemyShipOption = "";

        System.out.print("What type of ship? (U or R)");

        if (userInput.hasNextLine()){
            enemyShipOption = userInput.nextLine();
        }

        if (enemyShipOption == "U"){
            theEnemy = new UFOEnemyShip();
        }
        else if (enemyShipOption == "R"){
            theEnemy = new RocketEnemyShip();
        }
        else {
            theEnemy = new BigUFOEnemyShip();
        }

        doStuffEnemy(theEnemy);

        // --------------------------------------------
        */
    }

    // Executes methods of the super class
    public static void doStuffEnemy(EnemyShip anEnemyShip){
        anEnemyShip.displayEnemyShip();
        anEnemyShip.followHeroShip();
        anEnemyShip.enemyShipShoots();
    }
}
BIGUFOENEMYSHIP.JAVA
public class BigUFOEnemyShip extends UFOEnemyShip {

    public BigUFOEnemyShip(){
        setName("Big UFO Enemy Ship");
        setDamage(40.0);
    }
}
ENEMYSHIPFACTORY.JAVA
// This is a factory whose only job is creating ships
// By encapsulating ship creation, we only have one
// place to make modifications

public class EnemyShipFactory{

    // This could be used as a static method if we
    // are willing to give up subclassing it
    public EnemyShip makeEnemyShip(String newShipType){
        EnemyShip newShip = null;
        if (newShipType.equals("U")){
            return new UFOEnemyShip();
        }
        else if (newShipType.equals("R")){
            return new RocketEnemyShip();
        }
        else if (newShipType.equals("B")){
            return new BigUFOEnemyShip();
        }
        else return null;
    }
}
Factory Design Pattern Diagram
Hi Derek, many many thanks for all these awesome video tutorials on Design Patterns. Just wondering why not also you provide the Alien Spaceship diagrams related to individual Tutorial?
Those will be really helpful. Thanks once again.
Do you want to see the UML diagrams I made? I can upload them if I still have them
Yes Derek, something like what you have for “Abstract Factory Design Pattern Diagram” (at the very bottom of that article)
Thanks
I’ll see if I can find all of those. Thanks for pointing that out. I don’t know what works most of the time and what doesn’t
hi this is really great and helpful. Thx for the dedication.
1 question though, in ENEMYSHIPFACTORY.JAVA class, why do u need line 12: EnemyShip newShip = null;
i don’t see you use this local variable?
thx
Hi Derek. Thank you so much for the tutorial. I have started to learn your tutorials on Design Patterns. Though I have not learned Java, I am able to follow your code. I know C++. So what I do is after going through your tutorial, I implement the code example for the design pattern in C++. Oh My! I am learning a lot.
Now coming to your tutorial on Factory Design Pattern. I have tried very hard to get a solid understanding on this design pattern for quite some time now. But I never succeeded. At last your tutorial provided me much needed succor and it took just 11 minutes to understand the entire thing. Thank you so much for this, Derek. I am a big fan of yours. As always you are always succinct with your tutorials.
Thank you very much for taking the time to write such a nice comment. I very much appreciate it! I’m very happy that I’ve been able to help you 🙂
Line 12 EnemyShipTesting.java
EnemyShip theEnemy = null;
Since EnemyShip class is abstract, How can we create objects of the same?
We are actually calling the constructor for either UFOEnemyShip or RocketEnemyShip. In the above I’m declaring that I will use EnemyShip to refer to data and nothing else since it isn’t a primitive. Later when I call the constructors mentioned actually creates an object. Does that make sense?
Hi Derek,
Thank you so much for all the work you put into your tutorials, having been a software developer in Java for 12 years I can honestly say I’ve learnt more from your site than at work!
Believe me it’s frustrating beyond belief trying to code blind not knowing how things fit together and not knowing where to go!
My question is related to why the EnemyShip class is abstract, is the idea of the pattern to have it abstract? The reason I ask is that the class has concrete implementations so it could be just as well a concrete class?
Thank you very much for taking the time to write such a nice message 🙂
I’m very happy to have been able to help.
I basically made it abstract because there is no reason to instantiate EnemyShip. That is basically the only reason.
Derek your videos are extremely helpful, just a small suggestion, while organizing the classes, please group them together in a way that makes the code more readable. For example while reading from top down I would like to read UFOEnemyShip, rocketShip and BigUFOEnemyShip together, then the EnemyShipFactory and finally the class containing the main method.
Thank you 🙂 Sorry if that was confusing. I’m constantly working to make everything understandable. Thank you for the input
Your videos are clear, precise, straight and has lot of information. Thank you.
You’re very welcome 🙂 Thank you
hello, derek
once again, thanks for the great tutorial
I have a question: what is the purpose of making the EnemyShip class abstract? since there are no abstract methods, or final or static fields inside the EnemyShip, I guess it is to make this class not to be instantiated, but if so, why do it that way?
thanks you very much(as always:-))
Thank you 🙂 Yes the only reason is to keep the EnemyShip from being instantiated. In this situation that is the only goal. Sorry about any confusion
Hi Derek,
First of all, thank you very much for another excellent tutorial.
My question is, in EnemyShipFactory.java. You are using if statements here. I remember you have a refactor tutorial about this. In there, you used switch (same as if) first and then used reflection. I think that you were required not to use any if or switch statements in a coding challenge.
What is the best practice to create ships when applying Factory Method design pattern?
Thanks,
Fong
Hi Fong,
You’re very welcome 🙂 Thank you for visiting my site.
It changes depending on the situation. I know as programmers we start off learning a ton of rules. In the beginning there is always a right and wrong way of doing things. That pretty much ends once you get advanced and start using things like design patterns and when you refactor.
You learn when to use these things through a ton of experimenting. It is more art then science at this point. Even the GOF design pattern book says as much. Now is when you truly become an advanced programmer. Enjoy the journey.
Sorry I couldn’t give a more definite answer, but this is truly what i believe. I hope that helps
Derek
Hi Derek,
Thank you very much for video.
All design pattern concepts are very nicely presented with exeample .
Just I have a request, is there any place where I can download
all code(zip, jar..) ?
Do you have videos of webservices (CXF) ?
Thnaks and Best Regards,
Kitty
Hi Kitty,
You’re very welcome 🙂 Sorry, but I don’t have them all in one zipped file because that would be quite confusing because there are so many files. I do however have everything on one page here Design Pattern Video Tutorial.
I plan on cover web services and J2EE soon. Thank you 🙂 Derek
Hello Sir,
One question, in a scenario where one factory to produce multiple products.
In this case, Factory class job to produce EnemyShip as well as HeroShip (multiple products). Then to accommodate this requirement I guess we need to have another method makeHeroShip() with return type HeroShip in the same Factory class (suppose ShipFactory). am I right? Or there is more better way to achieve this. Please share your views.
Hello, It may be best to see more examples of the factory pattern. I have 2 additional videos on this refactoring tutorial page. I hope that helps 🙂
I don’t need a college degree. I just need DEREK BANAS! Awesome tutorials dude. I am going to Donate five bucks. I will donate more when I can. I have been watching your tutorials everynight and I’m enjoying them very much. Thanks! Eric.
Thank you 🙂 The compliment is all I ask for. There is no reason to donate. That is there only because my wife forced me to use it.
Big thanks to you
I have exams next week and your videos really helped me.
best of luck for you
Thank you 🙂 I’m glad I could help. Best of luck on your exams.
Hey Derek, a big thank you all the way from Scotland, your video’s have been a great help.
Hey Stuart, I will always be amazed that people all over the world are watching my videos! You’re very welcome 🙂
Thanks so much!
You’re very welcome 🙂
Thank you very much, these videos are really helpful for me.
You’re very welcome 🙂 I’m glad they are helping so many people.
Hi Derek,
Thanks for this video. It helps in understanding factory pattern with ease in short time.
Can you also list down cons of factory pattern, complexity introduced by factory pattern, and other pattern that can be used as alternative.
Thanks.
Hi, You may find that dependency injection is often a better option over the factory pattern. We use factory to separate the use of a component from the implementation and instance management of that component. The major drawback of the factory versus dependency injection is that it can be complex and hard to test.
I was curious what you used to create the UML diagrams in your videos?
Normally just paper and pencil. UMLet is nice though if I need it. Visual Paradigm is the best program, but expensive. | http://www.newthinktank.com/2012/09/factory-design-pattern-tutorial/?replytocom=10514 | CC-MAIN-2019-26 | refinedweb | 1,923 | 58.28 |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 2.0-beta-2
-
- Component/s: changes.xml
- Labels:None
Description
A DTD/XSD for changes.xml would be extremely useful for IDE auto-completion. It should be hosted on the apache site.
Issue Links
- is depended upon by
MCHANGES-86 Create a changes-validate mojo
- Closed
- relates to
MCHANGES-47 Add support for multiple <issue> and <due-to> tags in changes.xml
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
Cool, any chance of hosting this somewhere convenient? We would also ideally need a namespace.
Please note that the Maven 2 version of the plugin does not yet have all the features of the Maven 1 plugin.
Work has been done by Denis Cabasson in
MCHANGES-47 to try to use a Modello model in the changes plugin. With this in place it would be easy to generate an xsd for changes.xml. I've been working a bit on Modello to apply patches that are required by Denis' patch for the changes plugin.
If we use the patch from
MCHANGES-47 the changes xml need changes and this will break backward compatibility.
To preserve backward comp, we have to fix MODELLO-113 (I'm working on providing a patch for this).
fixed in rev 680301.
will the xsd be uploaded to or will it be elsewhere?
Yep good idea !
It will included in the site generation too.
Note using the Xpp3Reader has the following side effect.
People must add
<action type="fix" dev="toto"> <![CDATA[ Add <i>foo</i> in the bean . ]]> </action> Whereas the following format was supported before : <action type="fix" dev="toto"> Add <i>foo</i> in the bean . </action>
I will see if I can find a solution in modello if not I will add note in the release note.
The m1 plugin has an xsd already: | https://issues.apache.org/jira/browse/MCHANGES-61 | CC-MAIN-2017-13 | refinedweb | 315 | 67.86 |
Elijah Manor | January 4th, 2010
jQuery supports a wide variety of built-in events and selectors. But despite the rich functionality that jQuery provides, you eventually come to a spot where you need an event or a selector that isn’t supported natively. The good news is that jQuery makes it easy to extend the library to support custom events and selectors.
In this article, I’ll take a look at some of the events and selectors that jQuery makes available and then move on to show how you can extend jQuery to support your custom needs. You can view, run, and edit the code samples on JS Bin (jsbin.com). Each example points to the specific URL to use.
In the sections that follow, I’ll cover some of the standard events you can use in jQuery. These are foundational concepts I want to be sure you know about before I dive into creating custom events.
I imagine that most readers are familiar with event helpers such as blur, change, click, dblclick, error, focus, keydown, and many more. These shortcut methods make it easy to wire up handlers when one of these events occurs. Here is a quick example of attaching the click event helper to a selection and displaying a message to the console when the event fires (jsbin.com/uduwa/edit).
<script type="text/javascript">
$(function() {
$(".clickMe").click(function(e) {
console.log("I've been clicked!");
});
});
</script>
If you want to bind multiple events to your selection, you need to use the bind method instead of the event helpers. In the following example (jsbin.com/olono3/edit), I bind the click and dblclick events to a selection and display a message to the console when the event is fired.
<script type="text/javascript">
$(function() {
$(".clickMe").bind("click dblclick", function(e) {
console.log("I've been clicked!");
});
});
</script>
Another method you need to know about when it comes to events is the live method. Using the preceding bind code snippet, let's say that you created a new DIV with the class clickMe after the DOM is loaded. Do you think the text "I've been clicked!" will be printed to the console when it is clicked? The answer is no. You have to register the event handler again, which isn't what you may have intended. If you use the live method instead of the bind method, any new dynamic DOM elements matching your criteria execute the event handler you specified. So let's try the preceding code again, but use the live method this time ().
<script type="text/javascript">
$(function() {
$(".clickMe").live("click", function(e) {
console.log("I've been clicked!");
});
$("#addClickMe").click(function(e) {
$("<p class='clickMe'>Click Me</p>").appendTo('body');
});
});
I added a #addClickMe button that, when you click it, dynamically creates and appends a new paragraph element with the class clickMe, which happens to match the selector that I defined when registering the live event. Not only is "I've been clicked!" displayed in the console when I click on the initial paragraph, this text is also displayed when I click on any of the dynamically created paragraphs.
You might notice that I didn't provide both click and dblclick in the preceding live example, as I did with the bind example. I did this because the live method can bind only one event per call, whereas the bind method can handle multiple events.
If you are dealing with dynamically created DOM elements, you definitely want to use the live method, but what if that isn't the situation you face? Should you use the live or the bind technique? To answer that question, you need to think about some performance concerns. The bind method attaches your event handler during page load to the DOM element you selected. If a lot of bind methods are being set during page load, performance might be hurt. The live method, on the other hand, is registered at the document level and does most of its processing when the event has fired to figure out which event handler to call. The live method might speed up page loading, but it might also slow down performance while your events are being fired. Your choice often rests on how many DOM elements you have attached events to, how often events are fired, and how deep your DOM tree is. I would encourage you to test each technique on your site for the best performance.
As you've just seen, lots of different options are available in jQuery when it comes to events. However, creating your own event can come in very handy from time to time.
I think Rebecca Murphey says it best:
It turns out that custom events offer a whole new way of thinking about event-driven JavaScript. Instead of focusing on the element that triggers an action, custom events put the spotlight on the element being acted upon.
You can find a overview of what event-driven programming looks like in jQuery by watching a quick, five-minute video by Benson Wong entitled Event Driven Programming with jQuery Tutorial. His video provides a quick overview of using bind, live, and trigger, and then he goes into how these methods can work together when you use custom events and event-driven programming.
Even if you have not heard the term event-driven programming, you still might be familiar with the observer design pattern, which is also known as the publish and subscribe pattern. Design patterns are a set of reusable techniques that commonly occur in software development. I'll leave it to you to read further about this and many other patterns.
Before we start writing an application using a custom jQuery event, let's take a look at how to define and call a simple custom event. Instead of using the event helpers, you need to use either the bind or live method to assist you in making a custom event. So let's create our first custom event. It’s as easy as calling the bind or live method but with a custom name.
$(".clickMe").bind("quadrupleClick", function(e) {
console.log("Wow, you really like to click!");
});
Now that we've attached a quadrupleClick event handler to the results of the .clickMe selector, how do we go about making the event handler fire? Well, you use the trigger method (jsbin.com/abiye3/edit).
var numberOfClicks = 0;
$(".clickMe").click(function() {
numberOfClicks++;
if (numberOfClicks >= 4) {
$(this).trigger("quadrupleClick");
numberOfClicks = 0;
}
});
This code attaches an already defined click event to the .clickMe element and then records the number of times it is clicked. Once the number of clicks is four or greater, it triggers the quadrupleClick event, which fires the event handler defined previously, and the message "Wow, you really like to click" is displayed in the console.
Now let's change gears and see how you might use this technique in a more complicated scenario.
I am going to write a small program that calculates how much a cup of coffee costs based on the extras that a customer adds to it. Here is what the no-frills UI looks like.
You first select the items that you want in your coffee (such as an extra espresso shot, flavored syrup, or soy milk), and then you select the type of coffee you want. After you make your selections, the total price shows up at the bottom of the page. A user is not required to select an extra feature first. Users should be able to just as easily select a coffee first and then select one of the extras.
In addition, the application has some weird rules, such as:
Here is the HTML for the UI. Again, my focus is not on making it pretty but on the interaction between the elements and in showing how a jQuery custom event can help you create a cleaner, more extensible code base.
<!DOCTYPE html>
<html>
<head>
<script src=""></script>
<meta charset=utf-8 />
<title>Event Oriented Programming: Old Style</title>
</head>
<body>
<h3>Welcome to *$s</h3>
<div id="extras">
<input id="extraShot" type="checkbox">Extra Shot ($0.25)
<input id="syrup" type="checkbox">Syrup (varies)
<input id="soyMilk" type="checkbox">Soy Milk (varies)
</div>
<div>
<select id="coffee" size="5">
<option value="2.00">Caffe Latte ($2.00)</option>
<option value="2.50">Caffe Mocha ($2.50)</option>
<option value="2.20">Cappuccino ($2.20)</option>
<option value="1.45">Espresso ($1.45)</option>
</div>
<div>
Total Price: <strong>$<span id="totalPrice">0.00</span></strong>
</div>
</body>
</html>
First, I am going to write the code in a non-event-driven way (jsbin.com/eqoci/edit), and then I'll rewrite it using event-driven programming and utilize a custom jQuery event.
$(function() {
$('#coffee').change(function(e) {
var updatedPrice = parseFloat($(this).val());
if(!isNaN(updatedPrice)) {
if($('#extraShot').is(':checked')){
updatedPrice += extraShotPrice();
}
if($('#syrup').is(':checked')){
updatedPrice += syrupPrice();
}
if($('#soyMilk').is(':checked')){
updatedPrice += soyMilkPrice();
}
$('#totalPrice').text(updatedPrice);
}
});
$('#extras input').change(function(e) {
$('#coffee').trigger('change');
});
});
function extraShotPrice() {
var price = 0.25;
console.log('Extra Shot $0.25');
return price;
}
function syrupPrice() {
var price = 0;
var day = new Date().getDay();
switch (day) {
case 0: case 6:
price = 0.50;
console.log('Syrup (Sat/Sun) $0.50');
break;
case 1: case 5:
price = 1.00;
console.log('Syrup (Mon/Fri) $1.00');
break;
case 2: case 3: case 4:
price = 0.25;
console.log('Syrup (Tue/Wed/Thu) $0.25');
break;
}
return price;
}
function soyMilkPrice() {
var price = 0;
var month = new Date().getMonth();
if (month === 11) {
price = 0.50;
console.log('Soy Milk (Dec) $0.50');
} else {
price = 1.00;
console.log('Soy Milk (Jan/Feb/Mar/Apr/May/Jun/Jul/Sep/Nov) $1.00');
}
return price;
}
This code is fairly readable. You can see that most of the work is performed in the change event handler of the #coffee select element. First, the cost from the select option is applied and then a series of checks are made to see whether the cost of any additional shots, syrups, or soy milk should be added to the price. By the end of the event handler, the final price is applied to the #totalPrice span element and is displayed to the user.
The good things about this code are that it is fairly simple to read, you can find out pretty much all that it is doing in one event handler, and the flow is straightforward. The downside to the code is that it requires changes in multiple places if you want to add another coffee extra or modify one of the existing extra items. This violates the open closed principle, which is one of Bob Martin's S.O.L.I.D. principles. The open closed principle states that:
Software Entitles (Classes, Modules, Functions, etc.) should be open for extension, but closed for modification
Now let’s take a look using a custom jQuery event to change the coffee selection code and apply event-driven programming or the observer pattern.
I reused the extraShotPrice, syrupPrice, and soyMilkPrice methods from the previous code, so I am not going to list them again here. I’ll just show the code that I replaced (jsbin.com/asude/edit).
$(function() {
$('#coffee').change(function(e) {
$('#totalPrice').text($(this).val());
$('#extras input').trigger('selected:coffee', this);
});
$('#extraShot').bind('selected:coffee change', function(e, caller) {
updatePrice(extraShotPrice(), this, caller);
});
$('#syrup').bind('selected:coffee change', function(e, caller) {
updatePrice(syrupPrice(), this, caller);
});
$('#soyMilk').bind('selected:coffee change', function(e, caller) {
updatePrice(soyMilkPrice(), this, caller);
});
});
function updatePrice(price, checkBox, caller) {
if ($('#coffee option').is(':selected')) {
var isChecked = $(checkBox).is(':checked');
var currentPrice = $('#totalPrice').text();
if (isChecked || !caller) {
var updatedPrice = calculatePrice(currentPrice, price, isChecked);
$('#totalPrice').text(updatedPrice);
}
}
}
function calculatePrice(currentPrice, extraPrice, isSelected) {
extraPrice = isSelected ? parseFloat(extraPrice) : (parseFloat(extraPrice) * -1);
return parseFloat(currentPrice) + extraPrice;
}
As you can see, this code is quite different from the previous version. The change event handler off the #coffee select element doesn’t do much work compared to the first example. Really, the only thing it does is set the starting price and then trigger a custom event called selected:coffee.
The rest of the work is performed by the event handlers attached to the particular extras that are selected by the user. For example, if the user selects an extra shot and soy milk, the code first updates the base price and then triggers an event to anyone listening that a coffee order has been selected. The #extraShot and #soyMilk event handlers then respond to the event and run their code, which individually updates the base price with the price of the extra. After all the events are consumed, the final price reflects the amount the user needs to purchase the product.
This code is still fairly easy to read, but you can't go to one location any longer and see the big picture of what’s going on. As a result, debugging can be a little trickier.
Note: The updatePrice method is a little more complicated than I wanted in this example, mainly because I needed to account for the source of who triggers the event. If a user first selects a coffee (without selecting an extra), I didn't want to subtract the price of unselected extras from the base price, which is why you see me checking the caller object, which represents a new coffee being selected.
This piece of code is also very extensible. I can add another coffee extra without touching any of the existing methods. For example, if I want to add chocolate as an extra, all I need to do is add the following two methods and HTML (jsbin.com/ediye3/edit). Notice that I don’t touch any of the existing code. By coding like this, you help ensure that you don't break any of the code that you had previously.
<input id="chocolate" type="checkbox">Chocolate ($0.45)
$('#chocolate').bind('selected:coffee change', function(e, caller) {
updatePrice(chocolatePrice(), this, caller);
});
function chocolatePrice() {
var price = 0.45;
console.log('Chocolate $0.45');
return price;
}
On a side note, it’s a good idea to have unit tests wrapping your code. The jQuery unit testing framework, called QUnit, is a good start if you are interested in researching this. Adding unit tests around your code can give you confidence that you didn't break anything when you refactor code as we just did.
Now that we’ve tackled custom jQuery events, let's look at selector filters. From what I've seen, selector filters are one of those things that are very cool, but not a lot of people use them to their full potential.
Some commonly known selector filters are :first, :even, :odd, :eq(), :contains(), and some lesser-known ones are :header, :animated, :nth-child(), and [attribute*=].
Just as Paul Irish encourages you to learn about the lesser-known methods in his article 10 Advanced jQuery Performance Tuning Tips, you should also take time to learn about the lesser-known selector filters.
You can easily use these selector filters to narrow down your selection to the elements you are targeting in the DOM. For example, here are some examples of selectors you might use:
//Selects all the input elements (text, password, radio, etc...)
$(':input')
//Selects all the disabled input elements
$(':disabled')
//Selects the 4th (0 based) list item
$('li:eq(3)')
//Selects list items that have links inside of them
$('li:has(a)')
Not only do you have access to a myriad of selector filters, but you can also define your own and use them to assist you in your selections.
First, let's look at how to create a really simple selector filter. Then I’ll focus on some nice custom selector filters created by developers in the community.
The following example shows the basic template layout of a custom selector filter. Four parameters are passed into the method, but you probably won’t use many of them. I have mimicked the parameter names found in the jQuery source code. The parameters that I focus on in this article are the elem and match parameters that contain the DOM element being tested and any parameters that may have been passed to the selector filter.
/// <summary>
/// This is a jQuery Custom Selector Filter Template
/// </summary>
/// <param name="elem">Current DOM Element</param>
/// <param name="i">Current Index in Array</param>
/// <param name="match">MetaData about Selector</param>
/// <param name="array">Array of all Elements in Selection</param>
/// <returns>true to include current element or false to exclude</returns>
$.expr[":"].test = function(elem, i, match, array) {
return true;
};
//Example usage
$(".clickMe:test").fadeIn("slow");
Here is a simple example of using the selector filter template to create a custom jQuery selector filter to find all the cheap coffees from the *$s application (jsbin.com/evefa/edit). Cheap is defined as $2.00 or less.
<select id="coffee" size="5" multiple="multiple">
<option value="2.00">Caffe Latte ($2.00)</option>
<option value="2.50">Caffe Mocha ($2.50)</option>
<option value="2.20">Cappuccino ($2.20)</option>
<option value="1.45">Espresso ($1.45)</option>
$.expr[":"].cheap = function(elem, i, match, array) {
return parseFloat($(elem).val()) <= 2.00;
};
//Example Usage
$("#coffee option:cheap").attr("selected", "selected");
The custom selector filter ends up selecting the caffe latte and espresso coffees because they are both $2.00 or less in base value.
You have probably seen other selector filters that use parameters, such as the :contains(text) and :eq(index) filters. Let's tweak our simple selector filter and pass in a parameter to define what cheap means (jsbin.com/epixu/edit).
$.expr[":"].lessThanEqualTo = function(elem, i, match, array) {
return parseFloat($(elem).val()) <= parseFloat(match[3]);
};
//Example Usage
$("#coffee option:lessThanEqualTo('2.20')").attr("selected", "selected");
As you can see, we can now pass a value to the new custom jQuery selector filter called :lessThanEqualTo and have the filter match against that value instead of hard coding $2.00, as we did previously. The parameters passed to the selector filter are stored in the third index of the match argument. If you were to examine the match argument from the previous selection, it would look like this:
$("#coffee option:lessThanEqualTo('2.20')").attr("selected", "selected");
/// <param name="match">MetaData about Selector</param>
[
":lessThanEqualTo('2.20')", //Selector Filter with Arguments
"lessThanEqualTo", //Selector Filter Only
"'", //Type of Quote Used (if any)
"2.20" //Arguments
]
Now on to some interesting jQuery custom selector filters that I've seen recently. Probably the coolest one is James Padolsey's Regex Selector for jQuery. Here are some examples of using this selector filter (jsbin.com/enohu/edit):
//Select all the title attributes that end in an !
$("div:regex(title,!$)").addClass('highlight');
//Select all the text fields that have non a-ZA-Z0-9_ characters
$(":text:regex(value,[^\\w])").addClass('highlight');
There are many more examples of the types of selections you can make. Check out James Padolsey’s blog post for more information about this handy selector filter. James wrote another blog post, entitled Extending jQuery's Selector Capabilities, in which he lists many examples of custom selectors that you might find helpful.
Another selector filter that I think is neat is John Sheehan's Custom jQuery Selector for ASP.NET WebForms. One of the pains of ASP.NET WebForms 3.5 and earlier versions is that your HTML element IDs are mangled by INamingContainer, resulting in long and unpredictable ID values.
//Example of a mangled id attribute
<h3 id="ctl00_MainContent_ctl20_ctl00_UserLabel">Selector for ASP.NET WebForms</h3>
You can navigate your way around these altered IDs in several ways (as indicated in John's blog post), but I found John's approach of using a selector very interesting (jsbin.com/osozi/edit).
String.prototype.endsWith = function(str) {
return (this.match(str + '$') == str)
}
jQuery.expr[":"].asp = function(a, i, m) {
return jQuery(a).attr('id') && jQuery(a).attr('id').endsWith(m[3]);
};
//Example usage
$(":asp('UserLabel')").addClass('highlight');
One of the downsides of this selector is that it might select other DOM elements that also end with the specified string, which is the same disadvantage of using the [attribute*=value] selector filter (jsbin.com/ileta/edit).
$("[id$='UserLabel']").addClass('highlight')
I'm not sure which technique is faster, but neither of them is particularly fast because they use an implied universal selector as their first argument. You can find out more information about this in Paul Irish's jQuery Anti-Patterns presentation on slide 32.
Despite my concerns about speed, I haven't found a good way of selecting the mangled elements gracefully and fast. I have used the [attribute*=value] selector when writing ASP.NET WebForms, and I find that John's approach has a quicker and easier syntax. I encourage you to look at John's blog post because he does a good job explaining the pros and cons of each approach.
ASP.NET WebForms 4.0 solves a lot of these problems by allowing you to control your own ID names and has numerous options for how to control your ID attributes.
Another jQuery selector I like is from Rick Strahl's blog post Using jQuery to Search Content and Creating Custom Selector Filters. In the article he creates a selector filter called :containsNoCase. The reason he developed the selector is that the built-in :contains(text) selector is case sensitive, and Rick wanted "SQL" to match "Sql", "sql", and so on. Here is the selector filter that he created to solve this need (jsbin.com/ajedu/edit):
$.expr[":"].containsNoCase = function(el, i, m) {
var search = m[3];
if (!search) return false;
return eval("/" + search + "/i").test($(el).text());
};
//Example Usage
$("tr:containsNoCase(jquery)").addClass("highlight");
This code searches for all table rows with the content "jQuery", "jquery", "JQuery", and so on. If the code finds a match, it adds a highlight class to the rows. Rick examines some other neat tidbits in his article, so I recommend you give it a read.
Events and selector filters are powerful and easy to use features of jQuery. We can be thankful that we aren't tied down to the core library implementations. jQuery is open and extensible, and developers can define and use their own events and selector filters in their applications.
Some topics from this article that you might consider for future learning are unit testing concepts such as QUnit, common design patterns to use in your projects, and the S.O.L.I.D principles of software. In addition to these, I highly recommend reading the blog posts that I linked to throughout the article. These men and woman are at the top of their field and regularly give back to the community..
More MSDN Magazine Blog entries >
Browse All MSDN Magazines
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | http://msdn.microsoft.com/en-us/magazine/ff452700.aspx | CC-MAIN-2014-42 | refinedweb | 3,824 | 65.12 |
We conclude this series of blog posts by demonstrating how to take a set of feeds, and mash them up into a single RSS feed using RSSBus.
If you’ve been following this blog series, you’ll know that I was asked by my leadership to prove that RSSBus could generate a 360° view of a “contact” by (a) producing RSS feeds from disparate data sources such as databases, web services and Excel workbooks and (b) combining multiple feeds to produce a unified view of a data entity. Our target architecture looks a bit like this:
In this post, I’ll show you how to mash up all those individual feeds, and also how to put a friendly HTML front end on the resulting RSS data.
Building the Aggregate Feed
First off, my new aggregate feed asks for two required parameters: first name and last name of the desired contact.
Next, I’m ready to call my first sub-feed. Here, I set the input parameter required by the feed (“in.lastname”), and make a call to the existing feed. Recall that this feed calls my “object registry service,” which tells me every system that knows about this contact. I’ve taken the values I get back and put them into a “person” namespace. The “call” block executes for each response value (e.g. if the user is in 5 systems, this block will execute 5 times), so I have a conditional statement (see red box) that looks to see which system is being returned and sets a specific feed value based on that.
I set unique feed items for each system (e.g. “person:MarketingID”) so that I can later do a check to see if a particular item exists prior to calling the feed for that system. Here you can see that I do a “check” to see if “MarketingID” exists and, if so, set the input parameter for that feed and call it.
You may notice that I have “try … catch” blocks in the script. Here I’m specifically catching “access denied” errors and writing a note to the feed instead of just blowing up with a permission error.
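To make that flow concrete, here is a minimal Python sketch of the same dispatch logic — this is not RSSBus script, and the names (`build_aggregate`, `registry_results`, `feeds`) are hypothetical stand-ins for the registry call, the per-system existence checks, and the try/catch handling described above:

```python
def build_aggregate(registry_results, feeds):
    # Record a system-specific ID for every system the registry says
    # knows about this contact (mirrors the "person:MarketingID" items).
    person = {}
    for entry in registry_results:
        person[entry["system"] + "ID"] = entry["id"]

    items = []
    for system, fetch in feeds.items():
        key = system + "ID"
        if key in person:  # mirrors the "check" for the item's existence
            try:
                items.extend(fetch(person[key]))
            except PermissionError:
                # Instead of failing the whole feed, write a note item.
                items.append({"note": "Access denied to " + system + " data"})
    return items
```

The payoff of the existence check is that systems the contact isn’t in are simply skipped, and a permission failure in one system doesn’t spoil the rest of the mashup.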
Next, I called the other data feeds in the same manner as the one above. That is, I checked to see if the system-specific attribute existed, and if so, called the feed corresponding to that system. My “reference data” feed, which serves up Microsoft Excel data, returns a data node that holds the blog feed URL for the contact. I took that value (if it exists) and then called the built-in RSSBus Feed Connector’s feedGet operation and passed in the URL of my contact’s blog feed. This returns me whatever is served up by my contact’s external blog.
Neat. So, now I have a single RSS feed that combines data from web services, Google web queries, Excel workbooks, SQL Server databases, and external blog feeds. If I view this new, monster feed, I get a very denormalized, flat data set.
You can see (in red) that when data repeating occurred (for example, multiple contact “interactions”), the related values, such as which date goes with which location, isn’t immediately obvious. Nonetheless, I have a feed that can be consumed in SharePoint, Microsoft Outlook 2007, Newsgator, or any of your favorite RSS readers.
Building a RSSBus HTML Template
How about presenting this data entity in a business-friendly HTML template instead of a scary XML file? No problem. RSSBus offers the concept of “templates” where you can design an HTML front end for the feed.
Much like an ASP.NET page, you can mix script and server side code in the HTML form. Here, I call the mashup feed in my template, and begin processing the result set (from the “object registry service”). Notice that I can use an enumeration to loop through, and print out, each of the systems that my contact resides in. This enumeration (and being able to pull out the “_value” index) is a critical way to associate data elements that are part of a repeating result set.
To further drive that point home, consider the repeating set of “interactions” I have for each contact. I might have a dozen sets of “interaction type + date + location” values that must be presented together in order to make sense. Here you can see that I once again use an enumeration to print out each date/type/location that are related.
The result? I constructed a single “dashboard” that shows me the results of each feed as a different widget on the page. For a sales rep about to visit a physician, this is a great way for them to get a holistic customer view made up of attributes from every system that knows anything about that customer. This even includes a public web (Google) query and a feed from their personal, professional, or organization’s blog. No need for our user to log into 6 different systems to get data, rather, I present my own little virtual data store.
Conclusion
In these four blog posts, I explained a common data visibility problem, and showed how RSSBus is one creative tool you can use to solve it. I suspect that no organization has all their data in an RSS-ready format, so applications like RSSBus are a great example of adapter technology that makes data extraction and integration seamless. Mashups are a powerful way to get a single real-time look at information that spans applications/systems/organizations and they enable users to make more informed decisions, faster.
Technorati Tags: RSSBus, Mashup
Categories: General Architecture, SOA
Richard,
Excellent work!
I thought you might be wondering how multiple attributes representing repeating subgroups stay synchronized. When you add items to a multi-valued attribute (e.g. repeatedly set values to interactions:location#) they are added to the end of a sequence pointed to from a single attribute slot in the item hashtable. As long as you add grouped items to attributes in sequence and read them in sequence, as you did, they will stay synchronized. The XML linearizers and readers in RSSBus take care to do the same, so that order is preserved in transit.
Ralph James (ralphj)
rssbus.com | https://seroter.wordpress.com/2008/09/17/building-enterprise-mashups-using-rssbus-part-iv/ | CC-MAIN-2017-39 | refinedweb | 1,044 | 67.38 |
My problem is that I need to write a program that asks the user to input a number from 1-99 and outputs the number spelled in english. I believe that I have the general outline down in that I will break it up into statements using If and Switch statements. I want the computer to recognize on class by having all numbers bigger than 9 but less than 20 in a class, which would require recognizing the first digit of a number. Thus the 20's would be in one class, the 30's in another, the 40's in another, etc.
Without having the user input two digits for a number separately, is there a way to recognize a first digit so that a general format would be If (first digit equals 1), else?
People here know more than I do. I would like to keep this to using only these basic directives.
#include <iostream>
using namespace std;
int main()
{}
Edited by skips: n/a | https://www.daniweb.com/programming/software-development/threads/344505/but-need-to-use-single-digit-from-mulidigit-inpu | CC-MAIN-2017-17 | refinedweb | 167 | 67.79 |
Duck type compatibility¶
In Python, certain types are compatible even though they aren’t subclasses of
each other. For example,
int objects are valid whenever
float objects
are expected. Mypy supports this idiom via duck type compatibility. As of
now, this is only supported for a small set of built-in types:
intis duck type compatible with
floatand
complex.
floatis duck type compatible with
complex.
- In Python 2,
stris duck type compatible with
unicode.
Note
Mypy support for Python 2 is still work in progress.
For example, mypy considers an
int object to be valid whenever a
float object is expected. Thus code like this is nice and clean
and also behaves as expected:
def degrees_to_radians(x: float) -> float: return math.pi * degrees / 180 n = 90 # Inferred type 'int' print(degrees_to_radians(n)) # Okay!
Note
Note that in Python 2 a
str object with non-ASCII characters is
often not valid when a unicode string is expected. The mypy type
system does not consider a string with non-ASCII values as a
separate type so some programs with this kind of error will
silently pass type checking. In Python 3
str and
bytes are
separate, unrelated types and this kind of error is easy to
detect. This a good reason for preferring Python 3 over Python 2!
See Text and AnyStr for details on how to enforce that a value must be a unicode string in a cross-compatible way. | http://mypy.readthedocs.io/en/stable/duck_type_compatibility.html | CC-MAIN-2017-26 | refinedweb | 239 | 64.41 |
Python Programming, news on the Voidspace Python Projects and all things techie.
Pytte
It looks like Fredrik Lundh has been working on my idea for a fast extension language for Python, using Python syntax :
Pytte
Of course he's been working on it since long before I had the idea...
Fredrik has been doing some very interesting experiments with Pyrex recently, so definitely worth staying tuned.
Actually, Pytte doesn't look like a Python extension language, so maybe my idea still has some merit. I had another wild idea, which I'm equally likely to try implementing.
I've not used the Python C API, how about embedding a Python interpreter in an extension module ? It could be used instead of threading (although implementing the callback would be interesting, but you can release the GIL around calls into the embedded interpreter).
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-26 00:09:14 | |
Categories: Python, Hacking
Python, Metaclasses and Overloaded Methods
One of the nice features of C# is that you can provide multiple overloads for methods. You provide multiple methods (including constructors of course) with the same name, but with a different type signature for the different overloads.
Because the computer knows the type of your arguments at compile time, it knows which of your overloads to call.
The equivalent in Python is to do type checking of arguments in the body of your method, which can get ugly or feel unpythonic; as Ned Batchelder reported recently.
I've been contemplating metaclasses. They felt like deep black magic, but I know that all they do is customise class creation and I've been itching for a reason to try them out. Upon further investigation the basics are actually quite straightforward.
Create a class that derives from type. The __new__ method takes the arguments (cls, classname, bases, classDict). classDict is a dictionary of the attributes (including methods) of the class.
Your __new__ method should end with a call to the type.__new__ method with similar, but possibly modified, arguments.
Once I'd worked out a way to provide method overloading using a metaclass I had to try it. It's a little rough around the edges, but it works.
class OverloadedError(Exception):
pass
def GetMethodName(name):
digit = name.rstrip('_')[-1]
return ''.join(name.rsplit(digit, 1))
class MethodOverloading(type):
def __new__(cls, classname, bases, classDict):
newDict = {}
overloads = {}
for attr, item in classDict.items():
if callable(item) and attr.rstrip('_') and attr.rstrip('_')[-1].isdigit():
newName = GetMethodName(attr)
if newName in classDict:
raise OverloadedError("Method '%s' is also overloaded." % newName)
overloadList = overloads.setdefault(newName, [])
overloadList.append((attr, item))
else:
newDict[attr] = item
storedOverloads = {}
newDict['_overloads'] = storedOverloads
for methodName, overloadList in overloads.items():
thisMethod = {}
for entry in overloadList:
name, method = entry
(args, _, __, defaults) = inspect.getargspec(method)
args = args[1:]
if len(args) != len(defaults):
raise OverloadedError("Overloaded method '%s' has non-keyword arguments." % name)
thisSignature = []
for arg, val in zip(args, defaults):
if not isinstance(val, type):
break
thisSignature.append(val)
if not thisSignature:
raise OverloadedError("Overloaded method '%s' has no types specified." % name)
thisSignature = tuple(thisSignature)
if thisSignature in thisMethod:
raise OverloadedError("Overloaded method '%s' has a conflicting signature "
"with another method." % name)
thisMethod[thisSignature] = method
storedOverloads[methodName] = thisMethod
def typeCheck(self, *args, **keywargs):
thisMethod = self._overloads[methodName]
signature = tuple([type(arg) for arg in args])
realMethod = thisMethod.get(signature)
if realMethod is None:
raise OverloadedError("No overload matches '%s' signature for method '%s'."
% (tuple(signature), attr))
return realMethod(self, *args, **keywargs)
newDict[methodName] = typeCheck
return type.__new__(cls, classname, bases, newDict)
Using it is simple. Here's an example of a class with two possible constructors, one takes three integers, the other takes a string :
class SomeClass(object):
__metaclass__ = MethodOverloading
def __init1__(self, x=int, y=int, z=int):
self.test = (x, y, z)
def __init2__(self, string=str):
self.test = string
This creates a class that has a single __init__ method, and no __init1__ or __init2__ methods. When you create an instance, the faked up __init__ does the type checking for you and calls the appropriate method under-the-hood.
For example, this :
b = SomeClass('Bright Red')
print a.test, b.test
should print (1, 2, 3) Bright Red.
Because it is a relatively quick hack (when I should be doing other things), there are a few rules and restrictions about how it works. Most of these could be relaxed by some extra work.
In your overloaded method declarations all arguments must be keyword arguments (except for self).
Those default arguments are evaluated from left to right, arguments that are types are collected as the method signature until a non-type argument is found.
For example, in the following 'example1' has a signature of (int, int, float), 'example2' has a signature of (int,). Calling 'example(3)' calls 'example2' under the hood. On the instance the method 'example' exists, but 'example1' and 'example2' don't :
pass
def example2(self, w=int)
pass
When calling the method the keywords must not be used. Untyped arguments must be passed in as keyword arguments. Not following this pattern will cause an exception to be raised or the wrong method to be called. This restriction can be removed, but only by annoyingly fiddly code, so it will have to wait for the next revision.
Other thigns still to do include :
- Method attributes (like docstrings etc) are not yet preserved (which of the overloaded docstrings to use ?).
- Add name mangling to the '_overloads' attribute.
- Currently a maximum of ten overloads per method (0-9).
- Allow providing a default where no overloads match: a method with no typed arguments which is currently not supported.
- Allow tuples of types.
- You currently can't have an overloaded method which takes no arguments.
You can download this as a Python file, along with some basic unit tests :
There is a runtime cost of course. Calling an overloaded method involves an extra method call plus a bit of jiggery pokery.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-25 21:45:36 | |
Categories: Python, General Programming, Hacking
IronPython, Subprocesses and Switching Off the Internet
Over the last couple of days, a colleague and I have been adding the final finishing touches to a user story. Part of this means sensibly informing the user when a connection to a server fails, rather than unceremoniously crashing.
The relevant line of the functional test looks like this :
# Harriet pulls out Harold's ethernet cable
So how do we simulate this from our test framework ?
(Functional tests mustn't delve into the application internals to handle this sort of thing, they should be 'black box' tests that test functionality and are not dependent on implementation. That means we can't simply find the relevant connection object and close it.)
After a few abortive attempts Andrzej thinks he has come up with something that works. It also illustrates launching a sub-process from IronPython. It uses the System.Diagnostics.Process class and the ipconfig command line tool.
clr.AddReference('System')
from System.Diagnostics import Process
def disconnect():
runProcess = Process()
runProcess.StartInfo.FileName = "ipconfig"
runProcess.StartInfo.Arguments = " /release *local*"
runProcess.Start()
def connect():
runProcess = Process()
runProcess.StartInfo.FileName = "ipconfig"
runProcess.StartInfo.Arguments = " /renew *local*"
runProcess.Start()
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-24 15:29:44 | |
Categories: Work, IronPython, Python
Google Co-op Search for Python Programming
Google have just unveiled a new service: Google Co-op.
This allows you to create and customise search engines for specific subjects, a bit like Skimpy my Python search engine.
Given that their technology might be marginally better than my homebrew CGI, I've created a Python Programming Search Engine.
This allows you to search the web for Python related topics, giving priority to results from the 1500 or so domains that I've pulled in from Skimpy. It has three 'annotations', letting you narrow your search results for pages relevant to 'algorithms', 'tutorials' or 'examples'.
If you have any suggestions for this, perhaps further annotations, or would like to help adding sites and tuning the results (adding annotation labels for specific sites) then get in touch or use the edit this search engine link to volunteer as a collaborator.
Here is a quick link to the Python Programming Search Engine :
The html to add this to your own site is :
<!-- Google CSE Search Box Begins --> <form id="searchbox_018413290510798844940:k69bxcfofe0" action=""> <input type="hidden" name="cx" value="018413290510798844940:k69bxcfofe0" /> <input name="q" type="text" size="40" /> <input type="submit" name="sa" value="Search" /> <input type="hidden" name="cof" value="FORID:1" / </form <script type="text/javascript" src="" > </script> <!-- Google CSE Search Box Ends -->
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-24 11:19:43 | |
Categories: Python, Website
Movable Python Docs and Movable IDLE Homepage
I've finally got round to updating the Movable python Documentation. Hmmm... at least I'm part way through the update.
There are still several glaring omissions, but there is enough up there to be useful. I can complete the rest in my lunch breaks.
There are also now homepages for Movable IDLE and the Mega-Pack.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-23 00:59:14 | |
Categories: Python, Projects
Selling Pair Programming
At Resolver Systems we use various development practises taken straight from mainstream Extreme Programming.
Although some of these are principles that you learn fairly quickly as a programmer anyway, like YAGNI, both Test Driven Development and Pair Programming were completely new to me.
I'm thoroughly sold on both of them.
Test Driven Development produces code that is not only better tested, but in my experience better structured and with a better API. With a test framework in place, it is also easier to refactor. It does feel like harder work than not writing tests at all, but the benefits become obvious fairly quickly as soon as you embark on a bug hunt.
For the programmer, so long as you like people and the other members of your team aren't awkward s*ds [1], pair programming delivers a more obvious benefit. Pair programming is so much more fun than programming by yourself.
It also feels like it delivers a much better product, but feelings wouldn't be enough to sell it to a team manager concerned that switching to pair programming is about to half his team's productivity. I've been thinking (a bit) about how to rationalise the benefits of pair programming, and whether I could sell it as a development technique to someone who is sceptical.
I certainly much prefer it, and there are some obvious benefits that it delivers. Proponents of pair programming say that net productivity is more than double, if you factor in that code produced by a pair is generally less buggy. This sort of claim needs backing up of course [2].
As well as the feel good factor, the obvious benefits that I see in practise at Resolver are :
Less Distraction
Sometimes, despite your best intentions, it's hard to focus on the job in hand. Joel Spolsky discusses this in his article Fire and Motion..
[...] What drives me crazy is that ever since my first job I've realized that as a developer, I usually average about two or three hours a day of productive coding.
Pair programming almost completely eradicates this problem. Ok, so if both of you are in a distracted mood then it is possible to conspire together to avoid coding, but basically if two of you are sat together to program; that's what happens.
Code Review
I've not worked in a commercial programming environment other than Resolver, but I know that many companies handle checking code quality through a process of code review. This will either be done before check-in, or periodically (at least in theory).
Pair programming means that all code has been worked on by at least two programmers, this at least reduces the chance of major problems being checked in. Test driven development, where all tests must pass before code can be checked in, and continuous integration where the tests are automatically run after every checkin, reinforce this.
There is another way that pair programming raises overall code quality.
Code familiarity and ownership.
Part of pair programming is regularly rotating programmers. This means that everyone in a team becomes familiar with the whole codebase.
Everyone in the team knows the code, and feels a sense of ownership for it. You are less likely to get patches of the code that no-one dares touch because 'only Bob has any idea of how it works'. You also get more pairs of eyes going over the code.
Additionally, we all have our own favourite programming tools and techniques. Some parts of the code may require particular idioms that individuals may be unaware of or unfamiliar with. Regularly changing pairs means that this knowledge is shared throughout the team.
Until I worked at Resolver I had never used databases. By pairing with other team members I've just been the lead on a couple of User Stories allowing Resolver to interact with databases in various ways.
Mentoring
Integrating new team members can be difficult in an ongoing project. Pairing means that new members can quickly be brought up to speed without having to spend months confused or working on non-productive tasks. Perhaps only weeks instead.
Within the last three weeks we've taken on two new developers at Resolver. They are both experienced programmers, but neither had worked on a similar project before. (Both had .NET familiarity, but one was primarily a C++ programmer with little Python experience, the other has used Python a lot but mainly done web development.)
After a few weeks of pairing with other team members on different parts of our application they have just got to the point where they can pair with each other on a user story.
Catching Mistakes
Whilst pairing one team member 'drives': has control of the keyboard and monitor. (Regularly switching the driver is another important part of pair programming.) Although you will both be actively engaged in creating the code, the one who isn't driving is much more likely to spot typos, mistakes and errors in logic.
However, the main reason to program in pairs is that two heads think better than one. In fact two heads think very differently to one.
Joel Spolsky, in the article mentioned above, says :
Once you get into flow it's not too hard to keep going.
You know the flow, those times when you wonder why flames aren't coming out of the keyboard, when nothing else matters but the code and time seems to disappear.
Well you don't get the same flow when programming as a pair, but you can definitely get into a zone together, and you form ideas between you. This is a very satisfying process, and much more fun than coding in isolation.
The benefit comes from a pair looking at any problem from two slightly different viewpoints, with different experiences and approaches. The areas where your understanding is the same gives you the ability to communicate, the areas where your understanding is different gives you the ability to innovate.
Here's a simple practical example. Andrzej and I were working together on a problem last week. I metioned out loud one possibility, which Andrzej recognised as being a plausible solution. This is something that neither of us are likely to have tried working on our own.
I can't fully articulate the dynamics of working in pairs, and I'm not convinced that I'm (yet) working at one hundred percent capacity as much as I'd like to, but I'm learning a great deal and will continue to examine pair programming with a semi-critical eye in order to understand it.
This has deliberately not been an attempt to exhaustively expose the pros and cons of pair programming [3]. I'm biased, but only because my experience of pair programming has been so positive. This is my attempt to evaluate pair programming based on my own observations.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-22 18:17:57 | |
Categories: General Programming, Work
Python Jobs with the Hidden Network
If you've been reading The Daily WTF [1] recently, then you might have heard of The Hidden Network. This is a job advertising network using :
over fifty, high-quality on-line publications ("blogs"). These blogs are written by experts in their field and read by professionals with a passion for their career and industry.
Hey, their words not mine.
In their wisdom, they've accepted me into the hidden network. If you're looking for a new job, then you should check out the Hidden Network Job Board. There should be no agencies looking for CVs advertising here, only decent employers with (hopefully) a woeful absence of pointy haired bosses.
If you're looking to hire a new programmer, then this network is the ideal way to find candidates. You can post job adverts via the job board.
The job adverts appear here on the Techie Blog, you can see them on the sidebar [2], and the other sites in the network. It would be good to start to see some Python programming jobs appear there [3]. The hidden network provides an excellent way to get your job advert in front of good candidates (hey, especially if they are readers of my blog).
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-22 14:35:18 | |
Categories: General Programming, Python, Website
Latest Toys
I moved house about six months ago and it wasn't until last week that I got round to unpacking my inkjet printer. Unfortunately they don't cope with being left for this long with their ink cartridges left in.
Despite not unpacking it for six months, I really do need a printer. In the UK the Computer Shopper Magazine has a good reputation for impartially reviewing computer hardware. But why is it that almost none of the models they review can actually be purchased, and none of the models available have been reviewed. sigh
In the end I found a Mono laser multi-function printer, the Samsung SCX-4200 which did have a good review and was available. The wife is out shopping, when she comes back with some paper I'll let you know if it works.
Update
It works fine so far...
My other latest toy is a Nintendo DS Lite. I know, I know.
There are now five of us at Resolver with these (Andrzej is the only one who hasn't succumbed to the peer pressure so far). I'm determined to beat Christian at Mario Kart in the next few days. Giles of course claims that the only reason he's bought one is for the Brain Training game.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-21 14:07:28 | |
Categories: Computers, Fun
Video Podcasting and Youtube
I've never got into video podcasting, I've not even got as far as podcasting. When I sit at my computer I tend to have a film or an episode of Stargate (up to season 8 so far) on in the background.
This means I have a special abhorrence that I reserve for programs or web-services which don't provide reference documentation, but instead provide screencasts.
However, there is a lot of fun stuff on YouTube, despite it recently becoming part of the evil empire.
These are two of my latest 'finds', actually they've both been sent to me by friends. I just don't have the patience to trawl through it. I like the way you can embed videos in web pages though, thusly :
Stormtroopers Cop Show
White and Nerdy
Of course the best of YouTube, and lots more live content, can be found on SlippedStream.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-21 13:51:26 | |
New Releases
It's been another hectic week for new releases in the Python world.
- Python 2.4.4 is now out. This contains a host of bugfixes, many of them exposed by the Coverity report on Python. A new release of Movable Python will follow soon.
- wxPython 2.7.1.1 - the GUI toolkit. This looks like a major new release, including the removal of several previously deprecated features. It's nice to see projects updated, let's hope it doesn't break too much code.
- PythonCE for Python 2.5 - Python for mobile devices. pySQLite - is now available.
- SPE 0.8.3b - the Python IDE. This is the first release for a long time. This release brings Python 2.5 and wxPython 2.7 compatibility, although there seem to be reported issues with wxGlade.
Of course the most important release of the week is rest2web 0.5.0.
Like this post? Digg it or Del.icio.us it. Looking for a great tech job? Visit the Hidden Network Jobs Board.
Posted by Fuzzyman on 2006-10-21 13:40:11 | |
Archives
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
Counter... | http://www.voidspace.org.uk/python/weblog/arch_d7_2006_10_21.shtml | crawl-002 | refinedweb | 3,709 | 65.42 |
Syntax Highlight and Content Completion
To create an XQuery document, select and when the New document wizard appears, select XQuery entry.
Oxygen XML Developer provides syntax highlight for keywords and all known XQuery functions and operators. A Content Completion Assistant is also available and can be activated with the (Ctrl (Meta on Mac OS)+Space) shortcut. The functions and operators are presented together with a description of the parameters and functionality. For some supported database engines such as eXist and Berkeley DB, the content completion list offers the specific XQuery functions implemented by that engine. This feature is available when the XQuery file has an associated transformation scenario that uses one of these database engines or the XQuery validation engine is set to one of them via a validation scenario or in the XQuery Preferences page.
The extension functions included in the Saxon product are available on content completion if one of the following conditions are true:
- The edited file has a transformation scenario associated that uses as transformation engine Saxon 9.6.0.7 PE or Saxon 9.6.0.7 EE.
- The edited file has a validation scenario associated that use as validation engine Saxon 9.6.0.7 PE or Saxon 9.6.0.7 EE.
- The validation engine specified in Preferences is Saxon 9.6.0.7 PE or Saxon 9.6.0.7 EE.
If the Saxon namespace () is mapped to a prefix, the functions are presented using this
prefix. Otherwise, the default prefix for the Saxon namespace (
saxon) is
used.
If you want to use a function from a namespace mapped to a prefix, just type that prefix and the content completion displays all the XQuery functions from that namespace. When the default namespace is mapped to a prefix, the XQuery functions from this namespace offered by content completion are also prefixed. Otherwise, only the function name being used.
The content completion pop-up window presents all the variables and functions from both the edited XQuery file and its imports.
Figure: XQuery Content Completion
| http://www.oxygenxml.com/doc/versions/18.0/ug-developer/topics/syntax-highlight-and-content-completion.html | CC-MAIN-2016-40 | refinedweb | 340 | 54.42 |
Microsoft's Patterns & Practices Summit 2007: Development Day
Microsoft's Patterns & Practices Summit highlighted the best strategies for architecting .NET applications. This report focuses on the summit's day devoted to development.
Why Ruby?
John Lam kicked things off with a look at IronRuby, which is an open-source implementation of Ruby that runs .NET objects.
Lam identified three goals for IronRuby. The first is to be as compatible with Ruby as possible. The second is to peacefully coexist with the .NET Framework and, as a result, allow .NET to interoperate with the Mac, via Silverlight, and with Linux, via Mono. The third is to achieve acceptable performance -- that is, faster than it currently runs and with additional features.
Lam then demonstrated a bit of IronRuby's potential. In one scenario, he showed the audience how to build IronRuby class libraries. This involved adding classes and then marking constructor methods, which, like C# 3.0 extension methods, are static.
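A plain-Ruby sketch of the kind of class library Lam demonstrated might look like the following. The `Document` class, its methods, and the `from_title` factory are illustrative assumptions, not code from the talk; the IronRuby-specific .NET interop is omitted, and the class-level factory method stands in for the static constructor methods Lam described.

```ruby
# A minimal Ruby class of the kind that an IronRuby class library exposes.
# In IronRuby such classes become consumable from .NET; here we show only
# the plain-Ruby side. All names below are illustrative.
class Document
  attr_reader :title, :body

  def initialize(title, body = "")
    @title = title
    @body  = body
  end

  # A class-level factory method, analogous to the static constructor
  # methods Lam described (conceptually similar to C# static methods).
  def self.from_title(title)
    new(title)
  end

  def word_count
    @body.split.length
  end
end

doc = Document.from_title("Summit Notes")
puts doc.word_count  # => 0, since the body defaults to an empty string
```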
The Right Tools for the Right Job
"I don't know about you, but I feel utterly overwhelmed at the technology being thrown at us," Rockford Lhotka said at the beginning of his talk. "The only way I feel I can maintain any sort of sanity is to plug the new technology into an architectural framework."
At that, Lhotka launched a discussion of where and how it is best to use object-oriented design, service-oriented design or workflow design.
Object-oriented design is essentially message-based communication between objects, he said. It tends not to fit within a distributed computing model without a bit of planning.
Service-oriented design, meanwhile, offers loosely coupled interactions between services that lack tiers. (Think of the basic tiers of an application -- the presentation, UI, business, data access and database layers.) This provides a nice, non-deterministic way to model how applications, and the world, tend to operate, and it is cross-platform and -language. On the other hand, Lhotka noted, both the concept and the tools are immature.
Workflow design, finally, represents an orchestration of ordered activities that are designed in terms of input and output. This is not a new concept per se -- remember poster-sized flowcharts? -- but mastering it requires architects to unlearn object-oriented and event-driven programming, Lhotka said.
Designing for Workflow
Picking up where Lhotka left off, Ted Neward offered some "proto-patterns and proto-practices" for workflow design -- so described because there is no critical mass rallying around Windows Workflow Foundation yet.
Neward first identified four goals of workflow design.
- Capture long-running processes over which end users will want some control
- Give so-called "knowledge workers" the capacity to edit the workflow
- Keep workflows decoupled from their stored environment
- Avoid tying a workflow to a single host environment, since that limits its usefulness
He then offered seven ruminations on the subject.
- Don't ignore the prior art. Workflow Patterns, for example, outlines more than 100 patterns and practices.
- Make sure any action outside the workflow is captured as a workflow service. Remember, you will never know who will want to interact with your workflow and how they will interact, Neward noted.
- Decide whether architects or knowledge workers will be writing the workflow.
- Use state-machine workflows for open-ended processes.
- Look for hosting scenarios, from batch execution toWinForms to stored procedures, which stretch Windows through the rest of your enterprise.
- Set timeout options for non-processing workflow activities. Otherwise -- if, say, a user wins the lottery and retires -- that task may never be completed, Neward said.
- Keep in mind that workflows are hardware-independent and, therefore, can be persisted and restored in entirely different locations.
The Future of Model-Based Design
David Trowbridge and Suhail Dutta focused primarily on Microsoft's plans for the role of modeling in Rosario, the version of Visual Studio Team System coming out after VS 2008.
Rosario includes a logical class designer and a sequence designer, it supports Domain Specific Languages and those DSLs wrap around tool adaptors for a nice, loosely coupled interface. All this functionality can be used in standalone applications, Trowbridge said, or it can be used in a software factory.
The duo also demoed Progression, which is the code name for a tool that aims to bridge the gap between architecture and development -- the former focusing on requirements and models, the latter on refactoring, visualizing, filtering and analyzing code.
In Progression, architects can organize the components of a solution by namespaces and classes, links the references among all the classes and identify so-called "hubs," or important pieces of code on which numerous classes rely.
Dependency Injection Frameworks
Through dependency injection, objects that need to find each other, well, find each other. It works thanks to what Peter Provost described as the Hollywood Principle: "Don't call me. I'll call you."
Dependencies can be provided several ways -- through an interface and a contract, via setter methods, via constructor parameters or through method injections. These processes promote flexible code that is easy to test, maintain and reuse, but, Provost noted, they do lend themselves to the creation of lots and lots of objects.
Enter ObjectBuilder. This runtime framework for building dependency injection systems will manage the way objects are built up. The Event Broker, for example, lets an object raise an event and then pass data along with it, while the Interceptor, as Scott Densmore put it, allows the Rebel Forces to get in the way of the Death Star's method call to destroy Alderaan.
Enterprise Library Devolved
Densmore returned to the dais to discuss the Enterprise Library, which is a collection of application blocks. The Enterprise Library Factory, meanwhile, is essentially a modified version of the dependency injection container and is used by all other factories.
Configuration of an application's plug-ins, Densmore said, boils down to the container in which they are found and not the library. This means configuration should be easy to do without changing code.
The Future of Design Patterns
Much more was said during this hour-long panel discussion than can be reported here. Stay tuned for a full-fledged, honest-to-goodness story on future of Microsoft's Patterns and Practices program in the next few days.
Start the conversation | https://searchwindevelopment.techtarget.com/news/1281356/Microsofts-Patterns-Practices-Summit-2007-Development-Day | CC-MAIN-2019-39 | refinedweb | 1,043 | 53.61 |
14 June 2011 05:13 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
“We shut it down on the morning of 13 June, and will restart it on schedule this Sunday,” the source said.
The estimated production loss during the shutdown is around 10,000 tonnes and Sam Nam will reduce supply by around 20% to its contracted customers in June, the source added.
The company recently shut its No 2 350,000 tonne/year purified terephthalic acid (PTA) plant from 21 May to 4 June for a planned two-week turnaround.
Its other two QTA units, with a combined capacity of 900,000 tonnes/year, are running at full rates.
Sam Nam is a joint venture that was set up by
Additional reporting by Becky Zhang | http://www.icis.com/Articles/2011/06/14/9468785/s-koreas-sam-nam-shuts-no-4-qta-unit-for-one-week-turnaround.html | CC-MAIN-2014-49 | refinedweb | 126 | 68.6 |
Create simple GUIs using the Rofi application
Project description
A Python module to make simple GUIs using Rofi.
What is Rofi?
Rofi is a popup window switcher with minimal dependencies. Its basic operation is to display a list of options and let the user pick one. The following screenshot is shamelessly hotlinked from the Rofi website (which you should probably visit if you want actual details about Rofi!) and shows it being used by the teiler screenshot application.
What is this module?
It simplifies making simple GUIs using Rofi. It provides a class with a number of methods for various GUI actions (show messages, pick one of these options, enter some text / a number). These are translated to the appropriate Rofi command line options, and then the standard subprocess module is used to run Rofi. Any output is then processed and returned to you to do whatever you like with.
Examples
Data entry
The simplest example is to create a Rofi instance and prompt the user to enter a piece of text:
from rofi import Rofi r = Rofi() Rofi.escape() class method:
msg = Rofi Rofi.
Requirements
You need to have the rofi executable available on the system path (i.e., install Rofi!). Everything else that python-rofi Rofi itself.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/python-rofi/ | CC-MAIN-2018-26 | refinedweb | 236 | 66.13 |
On Wed, Aug 10, 2011 at 06:40:31PM +0100, Vince Weaver wrote:> HelloHi Vince,> Sam Wang reported to me that my perf_event validation tests were failing > with branches on ARM Cortex A9.> > It turns out the branches event used (ARMV7_PERFCTR_PC_WRITE) only seems> to count taken branches.It also counts exceptions and instructions that write to the PC.> ARMV7_PERFCTR_PC_IMM_BRANCH seems to do a better job of counting both > taken and not-taken. So I've attached a patch to change the definition> for Cotex A9.Well, it also only considers immediate branches so whilst it might satisyyour test, I think that overall it's a less meaningful number.> This might be needed for Cortex A8 but I don't have a machine to test on > (yet).We use the same event encoding for HW_BRANCH_INSTRUCTIONS on the A8.> I'm assuming this is a proper fix. The "generalized" events aren't > defined very well so there's always some wiggle room about what they mean.I'm really not a big fan of the generalised events. I appreciate that theymake perf easier to use but *only* if you can actually provide a sensibledefinition of the event which can (ideally) be compared between twodifferent CPU implementations for the same architecture.So, my take on this is that we should either:(a) leave it like it is since taken branches is probably a more useful metric than number of immediate branches executed.(b) start replacing our generalised events with HW_OP_UNSUPPORTED and force the user to use raw events. I agree this isn't very friendly, but it's better than giving them crazy results [for example, we currently report more cache misses than cache references on A9 iirc].Personally, I'm favour of (b) and getting userspace to provide the user witha CPU-specific event listing and then translate this to raw events usingsomething like libpfm.As an aside, I also think this is part of a bigger problem. 
For example, thesoftware event PERF_COUNT_SW_EMULATION_FAULTS would be much more useful ifwe could describe different types of emulation faults. These would probablybe architecture-specific and we would need a way for userspace to communicatethe event subclass to the kernel rather than having separate ABI events forthem. So not only would we want raw events, we'd also want a way to specifythe PMU to handle them (given that a global event namespace across PMUs isunrealistic).Will | http://lkml.org/lkml/2011/8/10/395 | CC-MAIN-2017-34 | refinedweb | 401 | 60.85 |
xml.etree.ElementTree — The ElementTree XML API¶.
Tutorial¶
This is a short tutorial for using
xml.etree.ElementTree (
ET in
short). The goal is to demonstrate some of the building blocks and basic
concepts of the module.
XML tree and elements¶.
Parsing XML¶ can import this data by reading from a file:
import xml.etree.ElementTree as ET tree = ET.parse('country_data.xml') root = tree.getroot()
Or directly'
Note
Not. A document type declaration may be accessed by passing a
custom
TreeBuilder instance to the
XMLParser
constructor..
Finding interesting elements¶.
Modifying an XML File¶>
Building XML documents¶
The
SubElement() function also provides a convenient way to create new
sub-elements for a given element:
>>> a = ET.Element('a') >>> b = ET.SubElement(a, 'b') >>> c = ET.SubElement(a, 'c') >>> d = ET.SubElement(c, 'd') >>> ET.dump(a) <a><b /><c><d /></c></a>
Parsing XML with Namespaces.
Example¶]")
Reference¶
Functions¶
xml.etree.ElementTree.
Comment(text=None)¶.)¶, parser=None)¶
Parses an XML section from a string constant. Same as
XML(). text is a string containing XML data. parser is an optional parser instance. If not given, the standard
XMLParserparser is used. Returns an
Elementinstance.
xml.etree.ElementTree.
fromstringlist(sequence, parser=None)¶)¶
Checks if an object appears to be a valid element object. element is an element instance. Returns a true value if this is an element object.
xml.etree.ElementTree.
iterparse(source, events=None, parser=None)¶)¶)¶)¶ 3.2.
xml.etree.ElementTree.
SubElement(parent, tag, attrib={}, **extra)¶.)¶)¶"?>
xml.etree.ElementTree module:
from xml.etree”:
< Objects¶
- class
xml.etree.ElementTree.
Element(tag, attrib={}, **extra)¶.
tag¶
A string identifying what kind of data this element represents (the element type, in other words).
text¶
tail¶
These attributes can be used to hold additional data associated with the element. Their values are usually strings but may be any application-specific object. If the element is created from an XML file, the text attribute holds either the text between the element’s start tag and its first child or end tag, or
None, and the tail attribute holds either the text between the element’s end tag and the next tag, or
None. For the XML data
<a><b>1<c>2<d/>3</c></b>4</a>
the a element has
None¶.
clear()¶
Resets an element. This function removes all subelements, clears all attributes, and sets the text and tail attributes to
None.
get(key, default=None)¶
Gets the element attribute named key.
Returns the attribute value, or default if the attribute was not found.
items()¶
Returns the element attributes as a sequence of (name, value) pairs. The attributes are returned in an arbitrary order.
keys()¶
Returns the elements attribute names as a list. The names are returned in an arbitrary order.
The following methods work on the element’s children (subelements).
append(subelement)¶
Adds the element subelement to the end of thisiterator(tag=None)¶
Deprecated since version 3.2: Use method
Element.iter()instead.
insert(index, subelement)¶
Inserts subelement at the given position in this element. Raises
TypeErrorif subelement is not an
Element.
iter(tag=None)¶.
makeelement(tag, attrib)¶
Creates a new element object of the same type as this element. Do not call this method, use the
SubElement()factory function instead.
remove(subelement)¶")
ElementTree Objects¶
- class
xml.etree.ElementTree.
ElementTree(element=None, file=None)¶
ElementTree wrapper class. This class represents an entire element hierarchy, and adds some extra support for serialization to and from standard XML.
element is the root element. The tree is initialized with the contents of the XML file if given.
_setroot(element)¶
Replaces the root element for this tree. This discards the current contents of the tree, and replaces it with the given element. Use with care. element is an element instance.
find(match,)¶. Objects¶
- class
xml.etree.ElementTree.
QName(text_or_uri, tag=None)¶ a URI, and this argument is interpreted as a local name.
QNameinstances are opaque.
TreeBuilder Objects¶
- class
xml.etree.ElementTree.
TreeBuilder(element_factory=None)¶)¶
Adds text to the current element. data is a string. This should be either a bytestring, or a Unicode string.
start(tag, attrs)¶
Opens a new element. tag is the element name. attrs is a dictionary containing element attributes. Returns the opened element. 3.2.. | https://docs.python.org/3.7/library/xml.etree.elementtree.html | CC-MAIN-2019-39 | refinedweb | 700 | 54.49 |
#include <mega8.h> #include <delay.h> #define pinWrite(i,j,port)port=(i)?port|(1<<j):port&(~(1<<j)) #define pinPulse(j,port)port|=(1<<j)\ port&=(~(1<<j)) #define sport PORTB #define sclock PORTB1 #define slatch PORTB2 #define sdata PORTB0 uint8_t j,i=0,a[]={255, 255, 193, 221, 221, 255, 255, 255}; void dshift(uint8_t data) { for(uint8_t i=0;i<8;i++) { if(data & 0b10000000) { pinWrite(1,sdata,sport); } else { pinWrite(0,sdata,sport); } pinPulse(sclock,sport); data=data<<1; //Now bring next bit at MSB position } pinPulse(slatch,sport); }) { for(i=0;i<8;i++) { j=a[i]; dshift(j); pinPulse(PORTB4,PORTB); } } }
The above code gives the error
Error: C:\Users\deepa\Desktop\matrix\ledm.c(12): '(' expected
When I convert the first uint8_t to int, it gives even more errors:
Error: C:\Users\deepa\Desktop\matrix\ledm.c(13): declaration syntax error
Error: C:\Users\deepa\Desktop\matrix\ledm.c(15): ')' expected, but ';' found
Error: C:\Users\deepa\Desktop\matrix\ledm.c(15): ';' expected, but ')' found
Error: C:\Users\deepa\Desktop\matrix\ledm.c(28): ';' expected
Error: C:\Users\deepa\Desktop\matrix\ledm.c(29): operand types 'int (*)()' and 'int' are incompatible with the '<<' or '<<=' operator
Error: C:\Users\deepa\Desktop\matrix\ledm.c(34): ';' expected
Error: C:\Users\deepa\Desktop\matrix\ledm.c(137): too many arguments in function call
Error: C:\Users\deepa\Desktop\matrix\ledm.c(138): ';' expected
I am just a beginner, so sorry for any mistakes i might have made.
Why on earth are you making life so complicated when you have Codevision?
can just be:
then later in the code you do things like:
to write those individual port bits. No need for all this nonsense:
Top
- Log in or register to post comments
The error you need to take care of first is the first listed errror:
Error: C:\Users\deepa\Desktop\matrix\ledm.c(12): '(' expected
I cannot tell which line is line 12 so I can't help you more.
I cannot tell what you are trying to do. This is a place where comments would be REALLY useful.
Jim
Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA
Top
- Log in or register to post comments
You make life very complicated. Surely this is clearer:
But it would be even clearer if you defined the pins e.g.
There are several problems with your approach. Please look at the comments marked .kbv
Top
- Log in or register to post comments
The error is resolved. But someone said that since I'm using CodeVision, I can directly use PORTB.3=1;
Could anyone point me to some kind of documentation or website for codevision?
I have been studying directly from avr tutorials.
Top
- Log in or register to post comments
Okay, turns out you were right on all counts. I also had 1 question.
Why did I have to use <stdint.h> to use uint8_t?
Isn't it just another name for char?(Source:Google)
Top
- Log in or register to post comments
No, uint8_t etc are not part of the C language. They need to be declared from < stdint.h >
GCC is naughty. It silently includes several headers via < avr/io.h > e.g. < stdint.h >
I suggest that you get into the habit of specifying which headers to include.
It does no harm. You will find that your code works seamlessly with other toolchains.
David.
Top
- Log in or register to post comments
Top
- Log in or register to post comments
C is a very simple language with very few keywords.
Everything is built with header files and libraries.
Without includes you have nothing more than char, int, long, ...
If you want access to special function registers you have to include the relevant header e.g. < avr/io.h >
Yes, < stdint.h > is a wise include in almost every source file. It removes ambiguity from the "implementation defined" int etc.
There is nothing wrong with putting all your "usual" includes into a special "david.h" or "cliff.h"
In fact I quite like the idea that "Arduino.h" or "mbed.h" tend to know EVERYTHING.
But this is not part of the C language. The user should determine which headers that she wants to include.
David.
Top
- Log in or register to post comments
The manual for Codevision, all 900 pages of it, is installed when you installed the compiler. It's in the 'documentation' folder.
Everything else is just plain 'C' about which there are 100s of websites. Some good, some bad.
Just out of interest how would you pass something like PORTB.3 into a function?
Say you wanted to be able to:
How could you pass that? Presumably by reference but as what type? Is it something like:
Or what exactly? Or is it even do-able?
Top
- Log in or register to post comments...
and...
Maybe "fallen" is not the right word. You had an "epiphany", perhaps?
Anyway, early responders here seem to have concentrated in OP's style and approach. I don't disagree, but first tell why there is a syntax error.
Re uint8_t and friends -- I have to do some digging but I seem to recall that it might depend on what level (e.g. C99) the toolchain generally follows.
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
Top
- Log in or register to post comments
I don't think that I have ever wanted to pass a "sbit" to a function. I don't think that I have even tried with Keil 8051
It is a lot easier to use straightforward statements or macros.
Of course you can do anything in CV that is possible with GCC. It just requires the appropriate address dereferencing.
The CV SFR.# extension makes the code look intuitive. GCC can generate the same ASM sequences but requires convoluted construction to satisfy C syntax.
David.
Top
- Log in or register to post comments
That is indeed an "interesting" question. Doesn't your menu bar have a "Help" section? What does that say?
/>
Note the "Help Topics" and "HP InfoTech on the Web" -- doesn't that address the situation?
How did you get your copy of CV? What version?
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
Top
- Log in or register to post comments
Lee,
Having a bad hair day? Or is it a long time chip on the shoulder?
I've always thought that compilers that offer the .n access provide a nice feature for micro work. The only argument against as I've stated before is that it is non-standard so it can miseducate folks, esp. beginners if they don't realise that it is "something special" and they can't rely on it if they see other architectures/compilers in their future.
But if your plan is to work solely with a particular architecture and a particular compiler then I'd always use it if possible as it makes life a lot easier.
At one stage you may remember I actually wrote a whole header file generating system for AVRs purely to deliver the kind of Adc.Aden = 1 sort of syntax which is as close as you can get if limited to "standard C" but "normal" C limits this to PortB.b1 = 0 rather than the preferable PortB.1 = 0 which isn't quite achievable.
Top
- Log in or register to post comments
So I took everyone's advice and used the function TSTBIT to get input for the given circuit
I'm trying to move the dot left by one at every "ON" setting. But the dot is not moving.
Top
- Log in or register to post comments
I do not know what TSTBIT() does. I suspect that you want to use PINB.4
I strongly advise you to format your code. One click from the Codevision IDE.
Life is much better when your code looks neat and tidy.
David.
Top
- Log in or register to post comments
And right again you are.
The help section never showed me anything along the lines of PINB.
BTW, why does PORTB.4 not work in the same manner as the PINB.4 command?
Something to do with input/output?
Also, what kind of formatting are we talking about here? I'm pretty sure I have my code indented out properly.
Top
- Log in or register to post comments
Continue here please...
Never mind, unlocked now by popular demand.
John Samperi
Ampertronics Pty. Ltd.
* Electronic Design * Custom Products * Contract Assembly
Top
- Log in or register to post comments | https://www.avrfreaks.net/comment/2635201 | CC-MAIN-2019-22 | refinedweb | 1,479 | 68.47 |
I was asked to talk about how to use GMarkup. This is a brief introduction; there are many people more qualified to talk about it than I am. These are my opinions and not those of the project or my employer. If you want to suggest a change or report a mistake, suggest away.
Firstly, why you shouldn’t use GMarkup.
Don’t use GMarkup if all you want is to store a simple list of settings. Instead, either use gconf, or if what you want is a file on disk, use GKeyFile, which lets you write things like:
[favourites]
icecream=chocolate
film=Better Than Chocolate
poem=Jenny kiss'd me when we met
in the style of .ini files. These are much more user-friendly.
Don’t use GMarkup if you want to parse actual arbitrary XML files. Instead, use libxml, which is beautiful and wonderful and fast and accurate. GMarkup is made to be easy to use.
Do use GMarkup if you want a reasonably complicated way to store files on disk, in a new format you’re making up.
Why GMarkup files are not XML.
XML is big and scary and complicated and spiky. People pretend it is simple. It isn’t. GMarkup files differ in many ways from XML, which makes them easier to use but also less flexible. Here are some ways in which a file can be XML but not GMarkup:
- There is no character code but Unicode, and UTF-8 is its encoding. GMarkup does not attempt to screw around with UTF-16, ASCII, ISO646, or, heaven help us, EBCDIC. That way madness lies.
- There are five predefined entities: & for &, < for <, > for >, " for ", and ' for '. You cannot define any new ones, but you can use character references (giving the code point explicitly, like ☃ or ☃ for a snowman, ☃).
- Processing instructions (including doctypes and comments) aren’t specially treated, and there is no validation.
There are also a few subtle ways in which a file can be parsable by GMarkup but not be valid XML. However, these are officially invalid GMarkup even though they work fine, if you can follow that. Many people don’t care, but they should.
Okay, so how do we get going?
There are two ways people deal with XML: either as a tree, or as a series of events. GMarkup always sees them as a series of events. There are five kinds of event which can happen:
- The start of an element
- The end of an element
- Some text (inside an element)
- Some other stuff (processing instructions, mainly, including comments and doctypes)
- An error
Let’s imagine we have this file, called simple.xml:
<zoo> <animal noise="roar">lion</animal> <animal noise="sniffle">bunny</animal> <animal noise="lol">cat</animal> <keeper/> </zoo>
This will be seen by the parser as a series of events, as follows:
- Start of “zoo”.
- Start of “animal”, with a “noise” attribute of “roar”.
- The text “lion”.
- End of “animal”.
- Start of “animal”, with a “noise” attribute of “sniffle”.
- The text “bunny”.
- End of “animal”.
- Start of “animal”, with a “noise” attribute of “lol”.
- The text “cat”.
- End of “animal”.
- Start of “keeper”.
- End of “keeper”.
- End of “zoo”.
(Actually there’ll be some extra text which is just whitespace, but let’s ignore that for now.)
There are two kinds of objects to deal with.
One is a GMarkupParser: it lists what to do in each of the five cases given above. In each case we give a function which knows how to handle opening elements, or closing elements, or whatever. If we don’t care about that case, we can say NULL. The signatures needed for each of these functions are given in the API documentation.
The second kind of object is a GMarkupParseContext. You construct this, feed it text, which it will parse, and then eventually destroy it. It would be nice if there was a function which would just read in a file and deal with it, but there isn’t. Fortunately, we have g_file_get_contents(), which is almost as good, if we can assume there’s memory available to store the whole file at once.
So let’s say we want to print the animals’ noises from the file above.
- Decide which kinds of events we need to know about. We need to know when elements open so that we can pick up the animal noise, and when text comes past giving the animal name, so we can print it. It would be possible to free the noise when we need to get the next noise, but it would be easier to free it when we see </animal>, so let’s do it like that. Processing instructions and errors we can ignore for the sake of example.
- Write functions to handle each one.
- Write a GMarkupParser listing the name of each function.
- Write something to load the file into memory and parse it.
Here’s some less-than-beautiful example code to do that.
#include <glib.h> #include <stdio.h> #include <stdlib.h> #include <string.h> gchar *current_animal_noise = NULL; /* The handler functions. */ void start_element (GMarkupParseContext *context, const gchar *element_name, const gchar **attribute_names, const gchar **attribute_values, gpointer user_data, GError **error) { const gchar **name_cursor = attribute_names; const gchar **value_cursor = attribute_values; while (*name_cursor) { if (strcmp (*name_cursor, "noise") == 0) current_animal_noise = g_strdup (*value_cursor); name_cursor++; value_cursor++; } } void text(GMarkupParseContext *context, const gchar *text, gsize text_len, gpointer user_data, GError **error) { /* Note that "text" is not a regular C string: it is * not null-terminated. This is the reason for the * unusual %*s format below. */ if (current_animal_noise) printf("I am a %*s and I go %s. Can you do it?\n", text_len, text, current_animal_noise); } void end_element (GMarkupParseContext *context, const gchar *element_name, gpointer user_data, GError **error) { if (current_animal_noise) { g_free (current_animal_noise); current_animal_noise = NULL; } } /* The list of what handler does what. */ static GMarkupParser parser = { start_element, end_element, text, NULL, NULL }; /* Code to grab the file into memory and parse it. */ int main() { char *text; gsize length; GMarkupParseContext *context = g_markup_parse_context_new ( &parser, 0, NULL, NULL); /* seriously crummy error checking */ if (g_file_get_contents ("simple.xml", &text, &length, NULL) == FALSE) { printf("Couldn't load XML\n"); exit(255); } if (g_markup_parse_context_parse (context, text, length, NULL) == FALSE) { printf("Parse failed\n"); exit(255); } g_free(text); g_markup_parse_context_free (context); } /* EOF */
Save that as simple.c. If you have the GNOME libraries properly installed, then typing
gcc simple.c $(pkg-config glib-2.0 --cflags --libs) -o simple
will compile the program, and running it with ./simple will give you
I am a lion and I go roar. Can you do it?
I am a bunny and I go sniffle. Can you do it?
I am a cat and I go lol. Can you do it?
I think that was enough to whet your appetite, but there’s a whole lot more to know. You can read more here. If you want to see a real-life example, Metacity uses exactly this sort of arrangement for its theme files. (Later: Julien Puydt shares memories of how schema handling in gconf was written using GMarkup.) Any questions?
Photo: Day-old chick, GFDL, from here, by Fir0002, modified by Dcoetzee, Editor at Large, and tthurman.
4 thoughts on “XML, GMarkup, and all that jazz”
While your parser functions correctly when you feed the entire file, it won’t necessarily give the right results if the input is fed to the parser in chunks.
If I split the animal name over a chunk boundary, GMarkup may call your text() handler twice, which might lead to output like:
I am a li and I go roar. Can you do it?
I am a on and I go roar. Can you do it?
To handle this case correctly, you’d need to accumulate the text (maybe using GString) when you’re inside an interesting element and then print the message from the end_element() handler.
@James: That’s a very good point.
It might be nice if there was a flag you could pass into GMarkup to make it do the buffering for itself, the same way it clearly does for tags it hasn’t seen the whole of.
I guess the reason the SAX APIs are structured like this is so that they can pass through character data directly from the input buffer without copying (returning spans delimited by tags or entities).
In contrast with element start/end tags, the text content can be quite large (unless your XML format does everything through attributes …) so this makes sense. If you can avoid having to make a copy of every byte of the input, you probably should … | https://blogs.gnome.org/tthurman/2008/02/14/gmarkup/ | CC-MAIN-2018-05 | refinedweb | 1,426 | 74.29 |
This file defines the Orocos plugin API. More...
#include <string>
#include "../rtt-config.h"
Go to the source code of this file.
This file defines the Orocos plugin API.
A plugin is a dynamic library which has a unique name and can be loaded in a running process. In case the loading is done in an Orocos TaskContext, the plugin is notified of the TaskContext. A plugin can reject to load in a process, in which case the library will be unloaded from the process again. Once loaded, a plugin remains in the current process until the process exits.
Definition in file Plugin.hpp.
Return the unique name of this plugin.
No two plugins with the same name will be allowed to live in a single process.
Returns the target name for which this plugin was built.
Instructs this plugin to load itself into the process or a component.
This function will first be called with t being equal to zero, giving the plugin the opportunity to load something in the whole process. Implement in this function any startup code your plugin requires. This function should not throw. | http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/Plugin_8hpp.html | CC-MAIN-2016-07 | refinedweb | 188 | 76.62 |
Bug Description
I got a "RuntimeError: dictionary changed size during iteration":
Traceback (most recent call last):
  ...
  File "/app/lib/
    find_spec = FindSpec(cls_spec)
  File "/app/lib/
    info.
  File "/app/lib/
    cls.
  File "/app/lib/
    column = getattr(cls, attr, None)
  File "/app/lib/
    self._cls = _find_descripto
  File "/app/lib/
    for attr, _descr in cls.__dict_
RuntimeError: dictionary changed size during iteration
storm.info.
I wrote pache:
Index: info.py
=======
--- info.py (rev 5903)
+++ info.py (working)
@@ -19,6 +19,7 @@
# along with this program. If not, see <http://
#
from weakref import ref
+from threading import RLock
from storm.exceptions import ClassInfoError
from storm.expr import Column, Desc, TABLE
@@ -30,7 +31,9 @@
__all__ = ["get_obj_info", "set_obj_info", "get_cls_info",
+_lock = RLock()
+
def get_obj_info(obj):
try:
return obj.__storm_
@@ -50,8 +53,13 @@
# Can't use attribute access here, otherwise subclassing won't work.
return cls.__dict_
else:
- cls.__storm_
- return cls.__storm_
+ _lock.acquire()
+ try:
+ if "__storm_
+ cls.__storm_
+ return cls.__storm_
+ finally:
+ _lock.release()
class ClassInfo(dict):
I don't think much of Storm is thread-safe. You should probably not be sharing stores and model objects between threads, but someone with more experience may correct me. We certainly avoid that in Launchpad.
I did not share stores and model objects. I use other stores each threads.
get_cls_info() stores model information to class. Class is shared every thread.
But, get_cls_info does not have protection for threadsafe.
Ah yes, good point :-)
It must be thread safe indeed. The fix is going a good direction, but it doesn't look entirely correct. Reference.__get__ isn't called by get_cls_info.
FWIW, that'd be easy to fix on 2.X, since we can call items() which is atomic. In 3.X a more traditional approach is necessary. It's a pity that the classes itself offers no way to reflect a list of key/value pairs in a safe way.
There are several problems, not only `cls.__
For example ClassInfo.columns. That value will be used by many places.
But that value is setted after first initialization, in rare cases, because thread-unsafe.
I'm probably missing what you actually mean. Apparently ClassInfo.columns is set in ClassInfo's constructor.
Yes, ClassInfo.columns is setted in ClassInfo's constructor.
And ClassInfo's constructor called from get_cls_info().
ClassInfo.columns is stored __storm_
that value shold not changed after initialized.
Because ObjectInfo use ClassInfo.columns.
A ClassInfo object is stored in __storm_
Maybe what you're trying to say is that get_cls_info today may return two different values if it's called concurrently?
It shouldn't indeed, so let's fix that too.
There's still something else to be done in addition to the suggested fix, though. Iterating over the class attributes may happen within Storm despite the lock being acquired there, so the lock isn't a guarantee.
ClassInfo object is stored in __storm_
If __storm_
lock to initialize ClassInfo | https://bugs.launchpad.net/storm/+bug/919059 | CC-MAIN-2014-15 | refinedweb | 484 | 54.08 |
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also
#include <stdlib.h> int mblen(const char *s, size_t n);
If s is not a null pointer, mblen() determines the number of bytes constituting the character pointed to by s. It is equivalent to:
mbtowc((wchar_t *)0, s, n);
A call with s as a null pointer causes this function to return 0. The behavior of this function is affected by the LC_CTYPE category of the current locale.
If s is a null pointer, mblen() returns 0. It s is not a null pointer, mblen() returns 0 (if s points to the null byte), the number of bytes that constitute the character (if the next n or fewer bytes form a valid character), or -1 (if they do not form a valid character) and may set errno to indicate the error. In no case will the value returned be greater than n or the value of the MB_CUR_MAX macro.
The mblen() function may fail if:
Invalid character sequence is detected.
The mblen() function can be used safely in multithreaded applications, as long as setlocale(3C) is not being called to change the locale.
See attributes(5) for descriptions of the following attributes:
mbstowcs(3C), mbtowc(3C), setlocale(3C), wcstombs(3C), wctomb(3C), attributes(5), standards(5)
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also | http://docs.oracle.com/cd/E19082-01/819-2243/mblen-3c/index.html | CC-MAIN-2015-27 | refinedweb | 226 | 58.21 |
xkcd's 13-Gigapixel Webcomic 193
New submitter Nomen writes "Today's xkcd: Click and Drag (Google Maps version) is probably the world's biggest web comic at an RSI-inducing resolution of 165,888x79,872 pixels. It's made up of 225 different images that take up 5.52MB of space. Now, if only the mines were powered by nethack..."
In unrelated news, (Score:5, Insightful)
there was a huge, worldwide drop in productivity today. Especially at universities, research labs, software development companies etc.
Re:In unrelated news, (Score:5, Funny)
And a world-wide increase in carpal tunnel syndrome.
Insane (Score:4, Insightful)
Re: (Score:2)
How do you know he isn't? He is on slashdot after all. Oh, wait...
Hack (Score:4, Interesting)
Some guy made a keyboard-controllable fullscreen interface:
Obligatory xkcd (Score:4, Funny)
Re:Obligatory xkcd (Score:5, Funny)
Mother of God (Score:2)
It will take all day to explore this whole thing using the Google Maps version, I can't imagine finding everything using the dragging method.
Re:Mother of God (Score:5, Informative)
Note that is privately hosted, and since a Slashdotting is likely to turn his suspicions about hosting costs into reality you might want to consider a donation, or at least a like to help him with the Facebook "Like" for the image linked from the page.
Re: (Score:2)
Probably because every nerd on the internet just got the link from Slashdot and is likely melting the server as we speak. It doesn't seem like it's actually hosted by Google.
Re: (Score:2)
Here are the URLs: [nopaste.info]
Re:Mother of God (Score:5, Informative)
Re: (Score:2)
You just need more monitors. It's almost manageable at 7680x1440. The GM version, that is.
Thumbnail? (Score:2)
Anybody create or know of a thumbnail of the whole thing? I don't want to miss anything!
That comic should have been put out on a Friday - I only like to waste a lot of time on Fridays. I can only afford a cursory amount of wasted time on a Wednesday. All that clicking and dragging used up most of it, so I'll have to cut this
/. post short.
wow, massive finger cramps... (Score:2)
only got a fraction of the way through it when my index and middle finger started to seize up like an engine without oil.
needs a zoom function!
Re:wow, massive finger cramps... (Score:5, Informative)
Re: (Score:3)
needs a zoom function!
I've heard this said over 50 times today.
In a way, it's a clear statement that the speaker doesn't get the point. (Or doesn't care to get the point.)
The word balloon in the "start position" of the final frame says it all: "I just didn't expect [the world] to be so big."
Your graphospasms are the webcomic equivalent of becoming footsore after attempting to walk the entire surface of the Earth. Zooming out is implicitly cheating.
Re: (Score:2)
i'd be more than footsore if i tried to walk the earth my friend!
fortunately we have google earth to view the world, which is the exact metaphor of the google eath overlay!
Re: (Score:2)
Not sure if it's a good thing that I saw most of the content yesterday by clicking and dragging. It took quite a while. I missed the stuff in the sky, but saw all the underground and surface events.
is it funny? (Score:1)
Re: (Score:3)
I was disappointed by the whales. I thought should have been a whale and a bowl of Petunias.
Re:is it funny? (Score:5, Funny)
Impossible to tell until they reach the ground and we see whether they miss it or not.
Re: (Score:2)
Only there are 2 whales (falling) and no bowl of petunias.
There were of course two missiles in the source material.
Re:is it funny? (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Retirement (Score:4, Insightful)
Zoomable version (Score:2, Informative)
Someone created a zoomable version of today's comic. Makes it a LOT faster to see everything:
Webcomics will come to exploit web tech (Score:5, Insightful)
I kinda regret seeing this comic immediately when it became live, as just as the working day had just begun here and I sunk an enormous amount of time into exploring it, but I'm nonetheless thrilled whenever webcomics do something that extends the comic format beyond the limitations of paper.
Earlier XKCD strips could easily be converted to print format, just see the first collection XKCD Volume 0 [amazon.com] that Randall published (yes, he found a way to put the alt text in there too). But taking advantage of HTML and Javascript, making the comic interactive to a degree, feels like something fresh. Cyanide and Happiness have also been employing animated GIF elements. There's a lot of room for creativity in the webcomic format.
Re: (Score:3)
Look at the bright side (besides new stuff for the format): This would make one hell of a poster.
Re: (Score:2)
OMG (Score:5, Funny)
I have just found a bigger time waster than Slashdot.
Re: (Score:2)
Slashdot used to link directly to xkcd (it was in the QuickLinks frame). They stopped that a couple of years ago for some reason. Maybe because the slashdot time waster + xkcd time waster equalled, well, too much wasted time...
Ooh, look, a Google Maps Version linked from /. (Score:2)
Randall Munroe is my hero (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Big View in Chrome: (Score:5, Informative)
- in the console, expand the "middleContainer" div
- select the "comic" div
- in the box on the right of the console, uncheck "overflow:hidden"
Re: (Score:3)
I feel better (Score:3)
I, for one, am relieved to know that the bottomless pits in Mario Bros. are not truly bottomless. If any of my Mario's survived the fall, they got to live out their days in comfortable pagodas buried deep within the earth.
Randall is truly evil (Score:2)
In a good way....
Commander Mark (Score:2)
This reminds me of the PBS show Secret City, hosted by COMMANDER MARK! [youtube.com]
I loved his pen murals.
Magnum Opus (Score:2)
This is Randall's Magnum Opus.
The statistics (Score:2)
165,888x79,872 pixels.
Wow!
It's made up of 225 different images
Wow.
that take up 5.52MB of space.
W- meh.
Re: (Score:2)
"colour", so it compresses really well. Uncompressed, it's quite a bit more.
Re: (Score:2)
I have a friend who does images on a similar scale (Score:3)
Lode Runner? (Score:2)
Randall must have played too many games back in the day.
For the impatient... (Score:2)
Mouse wheel tilt left/right (Score:2)
art (Score:2)
This was one of the coolest things I found on the Internet since quite a while.
And, despite the RSI-effect and all, it kind of loses something in the various maps, deep zoom, etc. versions. As soon as you can zoom, it doesn't have the scale/size feeling anymore. Something is lost.
Drawing (Score:2)
When I was younger, my brother and I would often have a sort of pseudo-game for those rainy afternoons.
We would draw a simplified piece of terrain on A4 paper, usually just a wobbly line that went up and down (a bit like a Scorched Earth kinda outline, similar to the game we used to play on the Spectrum: Tank Trax).
Then we'd fill it it with various stick figures, with various comedy elements. With two of you, you could work on different bits and then show each other what you'd done and tie it in with the
Corcovado (Score:3)
The mountain seems to be the famous Corcovado Montain (the one with the big Jesus statue on top) in Rio de Janeiro, Brazil. See here [fbcdn.net] for comparison.
Re:Not all that impressive (Score:5, Insightful)
Your face isn't all that impressive.
Re: (Score:2)
The fact this got modded +5 Insightful makes the burn all that more impressive.
Ohh, and the guy in the grass is masturbating.
Re: (Score:2)
Ohh, and the guy in the grass is masturbating.
On a related note, Randall watches Peep Show.
Re: (Score:2)
On a related note, Randall watches Peep Show.
Fucko, we like to call it inter-species erotica.
Re: (Score:2)
This comic is very morish.
Re: (Score:2)
That's what I was referring to
:)
Re:Not all that impressive (Score:5, Informative)
Re: (Score:2)
IMO nicer version: [rent-a-geek.de]
Re:Not all that impressive (Score:5, Insightful)
Of course it is. It's a digital scavenger hunt
:)
Quite fun, and will probably be the most-bookmarked webcomic of the year as people realize the size of it and flag it for perusal in their off hours.
Well played Mr. Munroe, well played indeed.
Re:Not all that impressive (Score:5, Insightful)
Re:Not all that impressive (Score:5, Insightful)
scavenger hunt! (Score:2)
Someone needs to arrange a scavenger hunt list for this image. I'll get you started:
find two x-wings
find three whales
find one submarine
find three comets
Re: (Score:2)
I know I found a blue whale, an x-wing and the submarine (it was subterranean).
Re: (Score:3)
It was extremely cool, but having to actually scroll all over, combined with a small scrollable window, made my hands hurt enough that I eventually gave up. I'm very thankful for the (multiple) people that repackaged it as a zoomable map.
Re:Not all that impressive (Score:5, Informative)
Give Randall some credit, this must have taken ages to put together, and is all the more impressive once you realise that it's more or less all draw to the scale. The real mindfsck is when you read the comment from the girl on the far left and work out that, despite all of that apparently vast area to explore, the whole thing represents is only about five miles from one side to the other. I just didn't expect it [the drawing] to be so small!
Re:Not all that impressive (Score:5, Informative)
Clever. Another way to work out the dimensions would be to scale it to the Burj Khalifa [wikipedia.org] located in the center.
Re: (Score:2)
Or the Saturn V just above it.
Re:Not all that impressive (Score:5, Funny)
Soooo... Glass half empty kind of guy, huh?
Oblig: [xkcd.com]
Re:Not all that impressive (Score:5, Interesting)
It's so big. Yet so really, really small when you compare it to the real world. More size comparisons:
Based on the Burj Khalifa being 829 meters, each image (cell) is about 100 meters.
The full image is 81 cells wide, 32 cells high. The highest point where the whales are at is 13 cells high (including the initial ground level), while the lowest point is around 19 cells deep (not including the initial ground level).
Mt. Everest is slightly higher than the image is wide (88.5 cells).
The deepest mine in the world is about twice the depth of the caves from ground level (39 cells).
In fact, the deepest hole ever drilled is about six times as deep (122 cells).
If the jumbo jets' cruise altitude were drawn to scale, they would be close to ten times the height of the whale from ground level (124 cells).
If this was a map of Manhattan starting at the tip of Battery Park, it would end near the southern parts of Central Park, specifically the whereabouts of the skating rink (according to Google Maps anyway).
Also, apparently, some forum-goers have found the images at the four 11x11 corners to be blank (present but blank, whereas the rest of the empty space is not an actual image). The theory is that this one, 1110, will be either the last or the penultimate comic (with 1111 being the last comic, or just blank). The last comic theory comes from the obvious reference to Calvin and Hobbes, and a reference to the very first comic at the eastern-most cell. There are more blank images at 1n4e, 1n5e, 2n1w, 2n3w, 8n1w, but its meaning has not been cracked yet.
Re: (Score:2, Funny)
Really?
That'll be the day I stop reading it.
Re:Not all that impressive (Score:5, Insightful)
No life? Drawing XKCD is his job.
But did you find Waldo? (Score:3)
He's there.
Re: (Score:2)
As is his cane.
Re: (Score:2)
That's whose cane it is? I saw the cane, but didn't make the connection.
Wonder if there'll ever be a poster version of it. (Probably multiple posters you have to assemble yourself..)
Re: (Score:2)
Re: (Score:3)
Hey that is just uncool.
He might damage a car if he plays in traffic.
Re: (Score:2)
I love the way you crafted this slam without actually specifying what you consider to be "modern culture", other than that you agree with the xkcd fans that Lady Gaga doesn't qualify. All the reward, none of the risk. Excellent.
Re:typifies xkcd (Score:5, Funny)
WTF is an "artefact"?
It's very similar to an artifact, except artefacts are only found by people who don't know what the little red, squiggly line under the word is for.
Re: (Score:3)
More accurately, it
/is/ an artifact. There are multiple acceptable spellings of artifact.
Re: (Score:2)
More accurately, it
/is/ an artifact. There are multiple acceptable spellings of artifact.
Ha, yea, I actually remembered that shortly after hitting the submit button. Where's a time-limited edit button when you need one?
My high school English teacher is probably spinning in her grave...
Re: (Score:2)
Re: (Score:3)
It's the normal British spelling of artefact. No red squiggly line for me
:-)
Re: (Score:2)
Re: (Score:3)
Well, you can always enjoy the initial joke, then bookmark it and come back later. My complaint is that all the clicking and dragging gets boring real fast, and isn't adequately repaid by the little jokes you discover along the way. Maybe it would be more fun on a tablet.
Re: (Score:2)
Re:I was actually disappointed by this. (Score:5, Interesting)
That's easy, try finding these without zooming out:
2 MD-80s
2 other airliners, possibly 767s
Apollo 13
Two X-Wings
Re:I was actually disappointed by this. (Score:4, Interesting)
Re: (Score:2)
I knew I had forgotten something!
Re: (Score:2)
You forgot the Q400 in a drastic nose-up pitch. (possibly about to stall)
Re:I was actually disappointed by this. (Score:5, Funny)
Facebook? Get out of here.
Re: (Score:2)
Re:I was actually disappointed by this. (Score:5, Interesting)
My buddy wrote up a script that pulls the whole map into a big clickable image: [drawert.net]
Re:I was actually disappointed by this. (Score:5, Funny)
I was hoping someone would do something like that. I hope that server can handle some traffic.
Anyone esle wish that one of the whales in the sky was a flower pot thinking to itself "not again"? (HHGttG)
Re: (Score:2)
Re:I was actually disappointed by this. (Score:5, Interesting)
Here's the python script I threw together. It's crude, but gets the job done. (Note that it needs wget on the path or in the same directory, didn't feel like tinkering with binary writes.)
import os, urllib
baseUrl=""
def convert(coords):
st = ''
if coords[0]>0:
st+=str(coords[0])+'n'
else:
st+=str(abs(coords[0]))+'s'
if coords[1]>0:
st+=str(coords[1])+'e'
else:
st+=str(abs(coords[1]))+'w'
st+='.png'
return st
x=1
y=1
flipX = 1
flipY = 1
while True:
coords = (x*flipX, y*flipY)
print coords
u = urllib.urlopen(baseUrl+convert(coords))
firstLine = True
img = False
for line in u:
if firstLine:
firstLine = False
if line == '\x89PNG\r\n':
print 'Found Image!'
os.spawnl(os.P_WAIT, "wget"," -nc ",baseUrl+convert(coords))
elif line == '<?xml version="1.0" encoding="iso-8859-1"?>\n':
pass
else:
print line
u.close()
if flipY==-1:
flipY = 1
y+=1
if y>x:
y=1
if flipX==-1:
flipX=1
x+=1
else:
flipX=-1
else:
flipY = -1
Re:I was actually disappointed by this. (Score:5, Funny)
You must be one of those persons who read tomorrow's comic on their dilbert calendar.
Re: (Score:2)
This does it already: [mrphlip.com] | https://tech.slashdot.org/story/12/09/19/2026223/xkcds-13-gigapixel-webcomic?sdsrc=prev | CC-MAIN-2016-50 | refinedweb | 2,799 | 74.08 |
If you contribute to more than one blog, you may want to have all of your posts show up in a single stream, even if they aren’t hosted on that same site.
In my case, I like to have the “external posts” I write here for Giant Robots to show up on my personal blog, so that I can direct people to a single place to see everything I write. Having the posts all in one place also allows me to use Google Analytics to track clicks from my site to posts on Giant Robots.
At first glance, Jekyll doesn’t seem suited to this, but let’s dive in and see if we can make it work.
Meta Refresh
The first thing that comes to mind isn’t always the best, but let’s take a look at my original attempt.
The desire was to keep our Google Analytics page views by setting up a custom layout for external posts that will use a
<meta> tag to redirect to the post externally after we get our pageview:
YAML front-matter in
_posts/2013-07-12-external-posts-in-jekyll.md:
--- title: External Posts in Jekyll layout: external external_url: ---
_layouts/external.html:
<!DOCTYPE html> <html> <head> <title>{{ page.title }}</title> <meta http- </head> <body><!-- Google Analytics JavaScript --></body> </html>
This does the trick: We get Google Analytics to tell us that we had a page view, but the user is still redirected to the post externally before too long. Even with the
0 as the refresh delay, most browsers will load the entire page and render before redirecting, so unless we want to add an ugly “You are being redirected” message, we need a new solution.
Conditional when rendering
site.posts
We already know whether a post is “external” to our site based on the
external_url key in the front matter. Let’s see if we can use that knowledge when rendering the list of posts on our
index.html.
{% for post in site.posts %} <dt><time>{{ post.date | date: "%d %b %Y" }}</time></dt> <dd> {% if post.external_url %} [{{ post.title }}]({{ post.external_url }}) {% else %} [{{ post.title }}]({{ post.url }}) {% endif %} </dd> {% endfor %}
Okay, now we have posts rendering in our main post list, but if they have the
external_url front-matter they will automatically link wherever the post was originally published. Sweet.
Unfortunately, there are a couple of downsides to this approach. First, we are probably confusing the user by linking them to an external post that looks just like the links within our site. Second, we have lost the ability to track page views.
Solutions
Fortunately, Jekyll has our back.
We can add another bit of front-matter to tell us what the actual site is that we’re linking to. This allows us to style the links that will take the user away from the domain differently:
--- ... external_site: thoughtbot
We can take advantage of the
onclick attribute to trick Google Analytics into thinking that by clicking on the link, we’re actually viewing that page:
[{{ post.title }}]({{ post.external_url }})
We use
post.url which actually comes out looking like
/external-posts-in-jekyll since Google Analytics requires that page views be from a single site.
We could also achieve this without using
onclick by setting a data attribute such as
data-analytics-path and write JavaScript to find links with that attribute and add a click handler.
Bringing it all together
index.html:
{% for post in site.posts %} <dt><time>{{ post.date | date: "%d %b %y" }}</time></dt> {% if post.external_url %} <dd class="{{ post.external_site }}"> [{{ post.title }}]({{ post.external_url }}) </dd> {% else %} <dd>[{{ post.title }}]({{ post.url }})</dd> {% endif %} {% endfor %}
style.css:
dl dd.thoughtbot:before { height: 26px; width: 27px; content: " "; background: url("ralph.png") no-repeat; background-size: cover; position: relative; display: inline-block; top: 4px; margin-right: 6px; }
front-matter:
--- title: External Posts in Jekyll layout: post external_url: external_site: thoughtbot ---
If you’d like to see what this looks like in practice, all of this code was adapted from my implementation at. | https://thoughtbot.com/blog/external-posts-in-jekyll | CC-MAIN-2020-45 | refinedweb | 675 | 64 |
JBoss.org Community Documentation
Abstract
This book describes the STM feature that is available for use with Narayana.
This guide is most relevant to engineers who want to use STM in their applications. It is assumed that the reader is already familiar with core Narayana.
In this chapter we shall look at the Software Transactional Memory (STM) implementation that ships as part of Narayana. We won't go into the theoretical details behind STM as they would take up an entire book and there are sufficient resources available for the interested reader to find out more for themselves. But suffice it to say that STM offers an approach to developing transactional applications in a highly concurrent environment with some of the same characteristics as ACID transactions. Typically though, the Durability property is relaxed (removed) within STM implementations.
The Narayana STM implementation builds on the Transactional Objects for Java (TXOJ) framework, which has offered building blocks for the construction of transactional objects via inheritance. The interested reader should look at the text on TXOJ within the ArjunaCore documentation for more in-depth details. However, within TXOJ an application class can inherit from the LockManager class to obtain persistence (D) and concurrency (I), whilst at the same time having the flexibility to change some of these capabilities. For example, an object could be volatile, i.e., have no durability, and yet still maintain the other transactional properties.
If you look at the abilities that TXOJ offers to developers then it shares many aspects with STM. However, the downside is that developers need to modify their classes through class inheritance (something which is not always possible), add suitable extension methods (for saving and restoring state), set locks, etc. None of this is entirely unreasonable, but it represents a barrier to some and hence is one of the reasons we decided to provide a separate STM implementation.
In order to illustrate the Narayana STM implementation we shall use a worked example throughout the rest of this chapter. We'll make it simple to start with: just an atomic integer that supports set, get and increment methods:
public interface Atomic
{
    public void incr (int value) throws Exception;

    public void set (int value) throws Exception;
    public int get () throws Exception;
}
We'll throw exceptions from each method just in case, but obviously you could just as easily catch any problems which occur and return booleans or some other indicator from the increment and set methods.
In this example we'll next create an implementation class:
public class ExampleInteger implements Atomic
{
    public int get () throws Exception
    {
        return state;
    }

    public void set (int value) throws Exception
    {
        state = value;
    }

    public void incr (int value) throws Exception
    {
        state += value;
    }

    private int state;
}
The implementation is pretty straightforward and we won't go into it here. However, so far, apart from implementing our Atomic interface, there's nothing to call this implementation out as being atomic. That's because we haven't actually done anything STM related to the code yet.
Now let's start to modify it by adding in STM specific elements.
All class scope annotations should be applied to the interface whereas method scope annotations should be applied to the implementation class.
Let's start by looking at the Atomic interface. First of all any transactional objects must be instrumented as such for the underlying STM implementation to be able to differentiate them from non-transactional objects. To do that you use the Transactional annotation on the class. Next we need to ensure that our transactional object(s) is free from conflicts when used in a concurrent environment, so we have to add information about the type of operation, i.e., whether or not the method modifies the state of the object. You do this using either the ReadLock or WriteLock annotations.
If you do not add locking annotations to the methods on your Transactional interface then Narayana will default to assuming they all potentially modify the object's state.
At this stage we end up with a modified interface:
@Transactional
public interface Atomic
{
    public void incr (int value) throws Exception;

    public void set (int value) throws Exception;
    public int get () throws Exception;
}
And class:
public class ExampleInteger implements Atomic
{
    @ReadLock
    public int get () throws Exception
    {
        return state;
    }

    @WriteLock
    public void set (int value) throws Exception
    {
        state = value;
    }

    @WriteLock
    public void incr (int value) throws Exception
    {
        state += value;
    }

    private int state;
}
As you can see, these are fairly straightforward (and hopefully intuitive) changes to make. Everything else is defaulted, though we will discuss other annotations later once we go beyond the basic example.
We are contemplating allowing method annotations to be applied on the interface and then overridden on the implementation class. For now if you follow the above conventions you will continue to be compatible if this change is eventually supported.
Now that we have a transactional class, by virtue of its dependency on the Atomic interface, how do we go about creating instances of the corresponding transactional object and using it (them) within transactions?
Container<Atomic> theContainer = new Container<Atomic>();
ExampleInteger basic = new ExampleInteger();
Atomic obj = theContainer.create(basic);
AtomicAction a = new AtomicAction();

a.begin();
obj.set(1234);
a.commit();

if (obj.get() == 1234)
    System.out.println("State changed ok!");
else
    System.out.println("State not changed!");

a = new AtomicAction();

a.begin();
obj.incr(1);
a.abort();

if (obj.get() == 1234)
    System.out.println("State reverted to 1234!");
else
    System.out.println("State is wrong!");
For clarity we've removed some of the error checking code in the above example, but let's walk through exactly what is going on.
Some of the discussions around AtomicAction etc. are deliberately brief here because you can find more information in the relevant ArjunaCore documentation.
First we need to create an STM Container: this is the entity which represents the transactional memory within which each object will be maintained. We need to tell each Container about the type of objects for which it will be responsible. Then we create an instance of our ExampleInteger. However, we can't use it directly because at this stage its operations aren't being monitored by the Container. Therefore, we pass the instance to the Container and obtain a reference to an Atomic object through which we can operate on the STM object.
At this point if we called the operations such as incr on the Atomic instance we wouldn't see any difference in behaviour: there are no transactions in flight to help provide the necessary properties. Let's change that by creating an AtomicAction (transaction) and starting it. Now when we operate on the STM object all of the operations, such as set, will be performed within the scope of that transaction because it is associated with the thread of control. At this point if we commit the transaction, the state changes will be made permanent (well not quite, but that's a different story and one you can see when we discuss the Container in more detail later.)
The rest of the example code simply repeats the above, except this time instead of committing the transaction we roll it back. What happens in this case is that any state changes which were performed within the scope of the transaction are automatically undone and we get back the state of the object(s) as it existed prior to the operations being performed.
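The walkthrough glosses over how the Container actually interposes on calls. Those details are internal to Narayana, but the general idea can be pictured with a dynamic proxy that records undo information before a state-changing call, so that an abort can restore the previous value. Everything below (ToyContainer, the single undo slot, the simplified exception-free interface) is a hypothetical, heavily simplified sketch for illustration only; it is not the Narayana implementation:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Simplified copies of the example's types (exceptions omitted for brevity).
interface Atomic {
    void set(int value);
    int get();
}

class ExampleInteger implements Atomic {
    private int state;
    public void set(int value) { state = value; }
    public int get()           { return state; }
}

// Hypothetical stand-in for the real Container: it hands back a proxy that
// snapshots the old state before the first write, so rollback() can undo it.
class ToyContainer {
    private Integer undo;  // state as it was before the first write, if any

    Atomic create(Atomic impl) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("set") && undo == null)
                undo = impl.get();           // record old value for rollback
            return method.invoke(impl, args);
        };
        return (Atomic) Proxy.newProxyInstance(
                Atomic.class.getClassLoader(),
                new Class<?>[] { Atomic.class }, handler);
    }

    void rollback(Atomic impl) {
        if (undo != null) {
            impl.set(undo);                  // restore the snapshot
            undo = null;
        }
    }
}

public class ProxySketch {
    public static void main(String[] args) {
        ExampleInteger basic = new ExampleInteger();
        ToyContainer container = new ToyContainer();
        Atomic obj = container.create(basic);

        obj.set(1234);               // proxy snapshots the old value (0) first
        container.rollback(basic);   // "abort": the write is undone
        System.out.println(basic.get());
    }
}
```

The real Container additionally deals with locking, nesting, multiple concurrent users and (optionally) persistence, but the shape of the API — wrap an implementation, then operate only through the returned reference — matches what the sketch shows.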
Pretty simple and not too much additional work on the part of the developer. Most of the ways in which you will use the Narayana STM implementation come down to similar approaches to what we've seen in the example. Where things may differ is in the various advanced options available to the developer. We'll discuss those next as we look at all of the user classes and annotations that are available.
All of the classes, interfaces and annotations that you should be using can be located within the org.jboss.stm and org.jboss.stm.annotations packages. All other classes etc. located within org.jboss.stm.internal are private implementation specific aspects of the framework and subject to change without warning.
The following annotations are available for use on STM interfaces or classes.
@Transactional: Used on the interface. Defines that implementations of the interface are to be managed within a transactional container. Unless specified using other annotations, all public methods will be assumed to modify the state of the object, i.e., require write locks. All state variables will be saved and restored unless marked explicitly using the @State annotation or SaveState/RestoreState. This assumes currently that all state modification and locking occurs through public methods, which means that even if there are private, protected or package scope methods that would change the state, they will not be tracked. Therefore, the implementation class should not modify state unless by calling its own public methods. All methods should either be invoked within a transactional context or have the Nested annotation applied, wherein the system will automatically create a new transaction when the method is invoked.
@Optimistic: Used on the interface. Specifies that the framework should use optimistic concurrency control for managing interactions on the instances. This may mean that a transaction is forced to abort at the end due to conflicting updates made by other users. The default is @Pessimistic.
@Pessimistic. Used on the interface. Specifies that pessimistic concurrency control should be used. This means that a read or write operation may block or be rejected if another user is manipulating the same object in a conflicting manner. If no other annotation appears to override this, then pessimistic is the default for a transactional object.
@Nested: Used on the interface or class. Defines that the container will create a new transaction for each method invocation, regardless of whether there is already a transaction associated with the caller. These transactions will then either be top-level transactions or nested automatically depending upon the context within which they are created.
@NestedTopLevel: Used on the interface or class. Defines that the container will create a new transaction for each method invocation, regardless of whether there is already a transaction associated with the caller. These transactions will always be top-level transactions even if there is a transaction already associated with the invoking thread.
@ReadLock: Used on the class method. The framework will grab a read lock when the method is invoked.
@WriteLock: Used on the class method. The framework will grab a write lock then the method is invoked.
@LockFree: Used on the class method. No locks will be obtained on this method, though any transaction context will still be on the thread when the method is invoked.
@TransactionFree: Used on the class method. This means that the method is not transactional, so no context will exist on the thread or locks acquired/released when the method is invoked.
@Timeout: time between each retry attempt in milliseconds.
@Retry: number of retry attempts.
@State: Used on the class member variables to define which state will be saved and restored by the transaction system. By default, all member variables (non-static, non-volatile) will be saved.
@NotState: Used on the class member variables to define which state to ignore when saving/restoring instance data. Note that any member variable that is not annotated with NotState will be saved and restored by the transaction system, irrespective of whether or not it has the State annotation. You should use these annotations cautiously because if you limit the state which is saved (and hence restored) you may allow dirty data to cross transaction boundaries.
@SaveState: Used on the class method to define the specific save_state method for the class. This is used in preference to any @State indications on the class state. This is the case no matter where in the class hierarchy it occurs. So if you have a base class that uses save/restore methods the inherited classes must have them too if their state is to be durable. In future we may save/restore specifically for each class in the inheritance hierarchy.
@RestoreState: Used on the class method to define the specific restore_state method for the class. This is used in preference to any @State indications on the class state.
By default objects created within STM do not possess the Durable aspect of traditional ACID transactions, i.e., they are volatile instances. This has an obvious performance benefit since there is no disk or replicated in-memory data store involved. However, it has disadvantages. If the objects are Pessimitic or Optimistic then they can be shared between threads in the same address space (JVM instance). At the time of writing Optimistic objects cannot be shared between address spaces.
Most of the time you will want to create volatile STM objects, with the option of using optimistic of pessimistic concurrency control really down to the type of application you are developing. As such you use of Containers will be very similar to that which we have seen already:
TestContainer<Sample> theContainer = new TestContainer<Sample>(); SampleLockable tester = new SampleLockable(); Sample proxy = theContainer.enlist(tester);
However, the Container class has a number of extensibility options available for the more advanced user and requirements, which we shall discuss in the rest of this section.
By default when you create a Container it is used to manage volatile objects. In STM language we call these objects recoverable due to the fact their state can be recovered in the event of a transaction rolling back, but not if there is a crash. The Container therefore supports two types:
public enum TYPE { RECOVERABLE, PERSISTENT };
You can therefore use the TYPE constructore to create a Container of either type. You can always determine the type of a Container later by calling the type() method.
All Containers can be named with a String. We recommend uniquely naming your Container instances and in fact if you do not give your Container a name when it is created using the default constructure then the system will assign a unique name (an instance of a Narayana Uid). If you want to give you Container a name then you can use the constructor that takes a String and you can get the name of any Container instance by calling the name() method. The default type of a Container is RECOVERABLE.
The Container also supports two sharing models for objects created:
public enum MODEL { SHARED, EXCLUSIVE };
SHARED means the instance may be used within multiple processes. It must be PERSISTENT too; if not then the framework. EXCLUSIVE means that the instance will only be used within a single JVM, though it can be PERSISTENT or RECOVERABLE. You can get the model used by your container by calling the model() method. The default model for a Container is EXCLUSIVE.
Given the above information, you should now be able to understand what the various constructors of the Container class do, since they provide the ability to modify the behaviour of any created instance through combinations of the above three parameters. Where a given parameter is not available in a specific constructor, the default value discussed previously is used.
Once a Container is created, you can use the create() method to create objects (handles) within the STM. As shown in the previous example, you pass in an unmodified (with the possible exception of annotations) class instance which corresponds to the interface type given to the Container when it was created and the Container will return a reference to an instance of the same type:
Sample1 obj1 = theContainer.create(new Sample1Imple(10));
All objects thus created are uniquely identified by the system. You can obtain their identifier (an instance of the Uid class) at any time by calling the getIdentifier method of the corresponding Container:
Uid id = theContainer.getIdentifier(obj1)
This can be useful for debugging purposes. However, it can also be useful if you want to create a duplicate handle to the object for another thread to use. This is not strictly necessary when using the default Pessimistic concurrency control, but is a requirement when using Optimistic (MVCC) (see relevant section).
Do not share the same reference for an Optimistic object with multiple threads. You must use the clone() operation for each thread.
There are two variants of the clone() operation. Both of them require an empty instance of the original non-STM class to clone the data in to (this does not actually happen for Pessimistic instances, but is still required at present for uniformity):
public synchronized T clone (T member, T proxy)
This version requires a reference to the STM object that is being cloned as the second parameter:
Sample1 obj2 = theContainer.clone(new Sample1Imple(), obj1);
The second version is similar:
public synchronized T clone (T member, Uid id)
This time instead of a reference you can provide the object's identifier:
Sample1 obj2 = theContainer.clone(new Sample1Imple(), theContainer.getIdentifier(obj1));
You are free to use either clone() operation depending upon what information your program has available.
Earlier in this chapter we discussed how you can instrument your implementation class member variables with the State and NotState annotations to indicate what state should be saved and restored by the transaction system. In some situations you may want even more control over this process and this is where the @SaveState and @RestoreState annotations come in. These annotations let you define a method which will be called when the system needs to save your objec's state and likewise when it needs to restore it.
You must use SaveState and RestoreState annotations together, i.e., you cannot just define one without the other.
Your methods can be called whatever you want but they must have the following signatures.
@SaveState
public void save_state (OutputObjectState os) throws IOException
@RestoreState
public void restore_state (InputObjectState os) throws IOException
Each operation is then given complete control over which state variables are saved and restored at the appropriate time. Any state-related annotations on member instance variables are ignored by the framework so you must ensure that all state which can be modified within the scope of a transaction must be saved and restored if you want it to be manipulated appropriately by the transaction.
For instance, look at the following example:
public class DummyImple implements Dummy { public DummyImple () { _isNotState = false; _saved = 1234; } @ReadLock public int getInt () { return _saved; } @WriteLock public void setInt (int value) { _saved = value; } @ReadLock public boolean getBoolean () { return _isNotState; } @WriteLock public void setBoolean (boolean value) { _isNotState = value; } @SaveState public void save_state (OutputObjectState os) throws IOException { os.packInt(_saved); } @RestoreState public void restore_state (InputObjectState os) throws IOException { _saved = os.unpackInt(); } public int _saved; public boolean _isNotState; }
In this example, only the int member variable is saved and restored. This means that any changes made to the other member variable(s) within the scope of any transaction, in this case the boolean, will not be undone in the event the transaction(s) rolls back.
Use the SaveState and RestoreState annotations with care as you could cause dirty data to be visible between transactions if you do not save and restore all of the necessary state.
Per object concurrency control is done through locks and type specific concurrency control is available. You can define locks on a per object and per method basis, and combined with nested transactions this provides for a flexible way of structuring applications that would typically not block threads unless there is really high contention. All but the @Transactional annotation are optional, with sensible defaults taken for everything else including locks and state.
However, the locking strategy we had originally was pessimistic..
The obvious alternative to this approach is optimistic or MV.
As discussed previously, there are two annotations: @Optimistic and @Pessimistic, with Pessimistic being the default, i.e., if no annotation is present, then the STM framework will assume you want pessimistic concurrency control. These are defined on a per interface basis and define the type of concurrency control implementation that is used whenever locks are needed.
@Transactional @Optimistic public class SampleLockable implements Sample { public SampleLockable (int init) { _isState = init; } @ReadLock public int value () { return _isState; } @WriteLock public void increment () { _isState++; } @WriteLock public void decrement () { _isState--; } @State private int _isState; }
And that's it. No other changes are needed to the interface or to the implementation. However, at present there is a subtle change in the way in which you create your objects. Recall how that was done previously and then compare it with the style necessary when using optimistic concurrency control:
Container theContainer = new Container(); Sample obj1 = theContainer.create(new SampleLockable(10)); Sample obj2 = theContainer.clone(new SampleLockable(10),obj1);
In the original pessimistic approach the instance obj1 can be shared between any number of threads and the STM implementation will ensure that the state is manipulated consistently and safely. However, with optimistic concurrency we need to have one instance of the state per thread. So in the above code we first create the object (obj1) and then we create a copy of it (obj2), passing a reference to the original to the container.
Remember that the same reference to Optimistic (MVCC) objects cannot be shared between different threads: you must use the clone() operation on the corresponding Container for each thread which wishes to use the object.
In this chapter we have considered all of the publicly available interfaces and classes for the STM framework within Narayana. There is deliberately a lot of flexibility on offer but much of it will only be needed by more advanced users and use cases. In this section we shall consider the most typical way in which we believe users will want to use the STM implementation. Let's consider the interface first:
@Transactional public interface Sample { public void increment (); public void decrement (); public int value (); }
Whilst MVCC (optimistic concurrency control) is available, it is most useful in environments with a high degree of contention. Even then, with the ability to control the timeout and retry values of the locking used by the pessimistic concurrency control option, the surety of making progress in a longer running transaction and not being forced to roll back later can be an advantage. Therefore, pessimistic (the default) is probably the approach you will want to take initially.
Now let's look at the implementation class:
public class MyExample implements Sample { public MyExample () { this(0); } public MyExample (int init) { _isState = init; } @ReadLock public int value () { return _isState; } @WriteLock public void increment () { _isState++; } @WriteLock public void decrement () { _isState--; } private int _isState; }
By this point it should look fairly straightforward. We've kept it simple deliberately, but it can be as complex as your application requires. There are no nested transactions at work here, but you can easily add them using the Nested annotation. Remember that they give you improved modularity as well as the ability to better control failures.
Because STM implementations typically relax or remove the durability aspect, you are more likely to want to create volatile objects, i.e., objects that do not survive the crash and repair of the JVM on which they are created. Therefore, you should use the default Container constructor, unless you want to control the name of the instance and in which case you can pass in an arbitrary string. Then all that is left is the creation and manipulation of AtomicActions as you invoke the relevant methods on your object(s).
MyExample ex = new MyExample(10); Container<Sample> theContainer = new Container<Sample>(); Sample obj1 = theContainer.create(ex); AtomicAction act = new AtomicAction(); act.begin(); obj1.increment(); act.commit(); | http://docs.jboss.org/jbosstm/5.0.2.Final/guides/stm_guide/index.html | CC-MAIN-2017-04 | refinedweb | 3,929 | 51.78 |
Deciding on something that becomes a public interface of a developer-oriented technology is a tricky task. Not only does the resulting design need to be correct and complete, but also there are various aspects that are more around aesthetics and personal preference. The URI format used by Astoria will need to survive both sets of challenges…
The Astoria REST “protocol” is made up of a URI addressing scheme, HTTP-based interaction model and payload formats (Web3S/XML,JSON , ATOM/APP). In the interest of staying focused on the URI format, this write-up will only touch on the URI format used by Astoria and leave discussion of the interaction model to a future post. See this post for a discussion around payload formats used by Astoria.
In general, Astoria takes a conceptual model expressed in terms of entities in an EDM schema and surfaces data that follows that model over an HTTP interface, representing entities as resources and associations between entities as links. The URI interface needs to provide a rich yet simple way of addressing those resources.
The URI format in Astoria has a few specific goals:
a)
b) Allow for simple queries to be formulated. That is, instead of pointing to a particular resource, allow URIs that express filtered sets of resources satisfying certain criteria
c) Support manipulating the presentation of results. This includes things such as sorting resources, paging over them and expanding related resources.
This “part I” write up focuses on item a) above; pointing to resources and their members. We will discuss b) and c) in future blog posts
NOTE: the following descriptions use EDM terminology and constructs. Regardless of the underlying data access layer (Entity Framework, Custom LINQ provider, etc) an Astoria service is exposing, the service is described using an EDM schema, so this description applies equally to any data source. In addition, typical REST verbiage (as is done above) refers to items pointed to by URIs as resources. In the remainder of this write up the term ‘entity’ should be interpreted as a synonym for ‘resource’.
Starting from the root
At the root of the service we are thinking of carrying the behavior of the CTP forward and putting all of the resource sets, which are simply the list of entity sets we find in the EDM schema. These are addressed by name, separated by a forward-slash (“/”) from the service root URI. (e.g. …/northwind.svc/Customers, where “Customers” is a resource container).
A detail: In an EDM schema an entity set is contained within a single entity container and there may be multiple entity containers in the schema. If that’s the case, to access non-default containers, the names need to be container-qualified (e.g. “/NorthwindContainer.Customers”). The default one *cannot* be qualified, to avoid introducing a redundant way of getting to it.
Pointing to a particular entity
Every entity in an EDM schema has a key which consists of one or more of the properties in the entity. An entity key is unique within the containing entity set, so to identify an entity with a URI we need to include at least the entity-set and the key values.
--Location of keys
The key value could go before or after the question mark. That is, the URI could be built by adding a query parameter after the URI question mark as in:
…/northwind.svc/Customers?key=23
Or we could consider the entity-set-plus-value construct part of the URI namespace of the service and write it as
…/northwind.svc/Customers(23)
We prefer the second approach with the entity-sets and keys form a URI namespace and there is no query parameter required. One of the reasons for leaning towards the second approach is that it makes it explicit which entity set the key is associated with, especially when the URI path becomes quite long.
--a bit of syntax
Now, assuming we go with that approach, there is now the question of the syntax. The May 2007 CTP used values in square-brackets (e.g. “…/Customers[ALFKI]”). We got “generous” feedback saying that square-brackets were a bad choice.
The approach we are currently thinking to take is to attach the key directly after the entity set name and using the ‘!’ character as the separator (e.g. …/Customers!23 ). That said, as per our last “design” posting on formats, we are looking to support the Web3S format. Web3S has a more flexible data model in that it allows heterogeneous sets while an Astoria server supports homogenous sets. To enable interoperability between any servers implementing Web3S and an Astoria server a URI scheme flexible enough to address heterogeneous sets is required. Therefore, we are thinking to expand on our current approach and allow a “full” form of URI and a “compressed” form, where the full form supports heterogeneous sets and the compressed form can be used as a shorthand notation when the set being addressed is on an Astoria server and is thus homogenous. For example:
“Full” form: …/Customers/Customer(123) would identify the instance 123 of type Customer within the Customers set.
“Compressed” form: …/Customers!123 would identify the same resource as above in Astoria because the ‘Customer’ type is implied since Astoria Servers support homogenous sets.
--composite keys
One option would be to encode name/value pairs, but that would result in verbose URIs and extra syntax to be invented. We are leaning toward a simple approach: we only use the values, separated by semicolon. The values are listed in the same order as they appear in the metadata document which describes the service. Metadata will be the topic of a future post, however, for now it’s enough to say in the typical case the description of a service will be available by making a GET request to …/$metadata. The following is an example of a URI which contains a composite key:
…/Customers!’ALFKI’,2
Some folks love this, some hate it. The main concern from folks who hate it is readability: you cannot interpret the URI without the schema. Is that an issue? The alternate option of using name-value pairs is more explicit in this sense. We could have:
…/Customers!CustomerID='ALFKI',Key=2
The single-key case would still not require the name, and given that most cases will be single-key cases, compactness won’t suffer too much. An aspect of this that’s both good and bad is the fact that by using names you can specify the values in different order. That’s “handy”, but it means that these URIs are not useful for comparison as strings when trying to determine identity.
-- literal forms
Using just literal values in a composite key doesn’t really work, because now you cannot tell whether that’s a 2-element key or a 1 element key that happens to have a comma in it. So we need to use proper literal forms for the values. We will need that when we want to express query expressions such as filter predicates anyway, so we may as well be consistent and use a single literal form everywhere.
Literals:
· Strings: a string surrounded by single-quotes, (e.g. 'ALFKI')
· Numbers: just the number, using US style (dot separates decimal digits)
· Dates: quoted as strings. Inside the string, use format described in RFC3339
· Guids: use the form “dddddddd-dddd-dddd-dddd-dddddddddddd”
· Binary: “0x” followed by two hex digits per byte (e.g. 0x1AB4)
So the examples above would actually be, for single- and composite-key respectively:
…/Customers('ALFKI')
…/Customers('ALFKI',2)
Probably is a good idea to not allow spaces, as it would help making sure that URIs that mean the same thing are easily comparable.
Addressing members
To address a member of an entity, simply append the member to a URI that points to the entity, separated by a forward-slash. For example, if Customers have a CompanyName property:
…/Customers!'ALFKI'/CompanyName
Note that addressing a member like that would return the member appropriately wrapped to conform to the negotiated MIME type. For example, if using XML you’d get the value wrapped in an XML element and annotated with the required namespaces and such.
If you want just the value with no wrappers, you can use the /$value “magic member”. For example:
…/Customers('ALFKI')/CompanyName/$value
For a string, this would just return the string (text/plain) by default (same applies to all types but binary, which would return application/octet-stream). The developer can customize this by annotating the schema and indicating which MIME type a given value should be treated as. That would allow for example a text field to store HTML or a binary field to store an image, and HTTP responses would include the proper MIME type for them.
Association traversal
A special form of member access is when the member being accessed is actually a navigation property (a link in non EDM terms). Such a property can be considered a hard-link that resolves into the related entity (for associations that have a cardinality of 1 on the other end) or a set of entities (if the other end is “many”).
The syntax is the same as in regular members, independent of the cardinality of the other end. So, if a customer has sales orders, the URI to access the orders would be:
…/Customers!'ALFKI'/Orders
Keep drilling down
When the result of a given URI is a single object, the members of those objects can be accessed by adding the member name to the URI. For example, if a customer has a “Contact” navigation property that points to a single Person object, which has a Name property, the name can be retrieved directly by using this URI:
…/Customers!'ALFKI'/Contact/Name
For the case where traversing an association yields a set, you need to further scope the set by providing a key to point to a single element of the set in order to traverse further using the URI path. For example if a customer has a set of orders and each order has an order date property, one valid URI to access an order date would be:
…/Customers!'ALFKI'/Orders!123/OrderDate
A note on escaping
Quite a bit of escaping beyond basic URI encoding is necessary for the whole scheme to work. Things like “=” and “?” need to be carefully handled to not confuse URI translators and agents. Details go beyond this write up, but specific thoughts are welcome. Although the trickiest one deserves to be brought up: should we escape “/” inside a quoted string? The same question, asked more deeply, is “should we assume that consumers of Astoria URIs understand their syntax?”.
In general, we are leaning towards requiring characters of special meaning in a URI path (ex. ‘/’) to be escaped even when such a character is within a quoted string. If the character is not escaped we will treat it as per its predefined meaning in path segments in RFC 3986. We believe this would provide a consistent method for developers to craft and interpret URIs.
Wrapping Up...
The ideas presented above represent our current thinking in the space. As always, feedback and comments are most welcome. We look forward to hearing your thoughts. In follow up posts we will discuss the query string section of the URI and dig into addressing service operations.
Mike Flasko
Program Manager
Pablo Castro
Tech Lead
This post is part of the transparent design exercise in the Astoria Team. To understand how it works and how your feedback will be used please look at this post.
PingBack from
The relationship between "customers" and "23" is a hierarchical one, so "/customers/23" should be the proper way, if you want to avoid using a query. Ditto for "…/Customers/ALFKI/Orders/123".
I don’t get the need for quoting. Quotes aren’t special in the URI generic syntax (which is why "/" does need escaping), and I can’t even see why your examples need them. Is it because of spaces? You could always escape them.
FWIW, you should probably use a query too, because HTML forms (without script hacks) can only build queries, and I fully expect your users will want browser (and HTML, using conneg) access to these parameterized URLs.
Some of the other things you talk about give me the impression that you’re exposing too much "database" here. Why should "key" matter? Why are parentheses needed? I think you’re making this harder than it needs to be.
Thanks for the feedback Mark. We’ll take this back to the team and discuss. I beleive you are right in that the quotes are not strictly needed in this context. In a future post we will touch on the use of the query string where we need to have quoted literals. Part of the reason we adopted this in the URI path was for consistency between literals in the path and query string
From a readability perspective it’s ‘less surprising’ to have something like Customers/Customer(ALFKI)/Orders/Order(123) since this lets someone just pick up a URL and have a decent idea of what it’s encoding. It’s all about the trade offs, in this case between compactness and readability.
It is time for another weekly roundup of news that focuses on .NET, agile and general development related
We’re trying to keep up posting regularly on the design aspects of Astoria we have on the table week
We're trying to keep up posting regularly on the design aspects of Astoria we have on the table week
Hmm, one thing I’d suggest bearing in mind is what the URIs identify – web resources may be information resources (things that can be completely represented in a document) or real-world things, concepts etc. Even if a Customer entity would always be a bunch of data, you might want to consider what the relationship between the person and the bunch of data might be.
See
Presentations given in Irvine and Riverside, CA – here’s the deck and links for more information about | https://blogs.msdn.microsoft.com/odatateam/2007/09/21/uri-format-part-1-addressing-resources-using-uri-path-segments/?replytocom=2453 | CC-MAIN-2018-30 | refinedweb | 2,361 | 59.74 |
Make sure to get the most up-to-date version. CodeBlocks just came out with version 12.11 (late november 2012) which includes the compiler GCC version 4.7.1 (the MinGW TDM-GCC distribution) which should work for Windows 8 (at least, according to the website). Other than that, you'd have to try the Clang compiler by following these instructions. Any IDE (like CodeBlocks, or even Visual Studio) can be configured use the Clang compiler instead (I'm sure you can find instruction for that on the web too).
Other than that, there aren't really too many options that are free.
In theory, anything that works on Windows 7 should also work on Windows 8, as far as I know. So, if you say that you "keep getting build errors", it might not be because the compiler is not working but rather that your code has errors (or things that are now errors in the newer version of the compiler). Make sure you test it with a very simple program:
#include <iostream> int main() { std::cout << "Hello World!" << std::endl; return 0; };
you can download dos-box turbo c++ from internet. it will work on all versions of window including win 7 and win 8 also
try it now
you can download dos-box turbo c++ from internet. it will work on all versions of window including win 7 and win 8 also
try it now
Why doing so many effort for something that is worse?
This is not because my programs have errors, this was just when i installed code blocks. It gives me a g++ error and says all files are up to date and won't build. Programming is easy so I barely make mistakes.
I wonder if somebody has tried any version of Intel C++ compiler integrated with VS 2005, or VS 2008, or VS 2010 or
VS 2012 on Windows 8?
Here is a summary:
** Operating systems **
Windows Server 2012* - supported
Windows Vista* - no longer supported
** Visual Studios **
So has anyboady found a working compiler. I start school in a week. I hate MS Visual. I am in college taking 4 programming classes and need a compiler to work on my new computer. I could always install windows 7 on my virtual box and download code blocks but I don't want to have to store data and write code from a virtual machine and possibly have it lost. Any help would be much appreciated. | https://www.daniweb.com/programming/software-development/threads/444641/c-compilers-for-windows-8 | CC-MAIN-2016-50 | refinedweb | 413 | 70.84 |
|
>
C++
>
Compilers
Compilers - Page 4
61-71 of 71
Size of C++ executable
by DevX Pro
Does one need a runtime DLL for an .exe file written with MFC in Visual C++? Can one make a standalone .exe program with C++ that is small and easily portable?
A Workaround for Namespace-less Compilers
by Danny Kalev
Since namespaces are relatively new in C++, some compilers do not support this feature yet. To workaround this lack of support, you can use struct holding static member ...
C/C++ Linking
by DevX Pro
If I make a C++ library and supply it to a client who does not have a C++ compiler, can they call my C++ functions from their C routines? That is, is it possible to use the C++ library without having a C++ compiler installed (on UNIIX).
What Happens When an Inline Function Cannot be Inlined?
by Danny Kalev
Not every function that is declared inline can actually be inlined by the compiler. A function that has local variables, loops, or an extensive amount of code cannot be inlined efficiently. In this ...
Enhancing Performance of Legacy Software
by Danny Kalev
When you port pure C code to a C++ compiler, you may discover slight performance degradation. This is not a fault in the programming language or the compiler, but a matter of compiler tuning. All you ...
Member Alignment
by Danny Kalev
Manual calculation of a struct/class size is not just an indication of poor design and a source of maintenance problems; it may also lead to bugs that are very hard to ...
Explicit Template Instantiation
by Danny Kalev
...
Pre-Defined Macros
by Danny Kalev
All C/C++ compilers define the following ...
Allow the Java Compiler to Remove Debug Code
by Randy Kahle
The Java compiler will optimize your application by removing code ...
Compilers: Which is best?
by DevX Pro
Which model of C/C++ compiler offers the best value for a beginning programmer? (It must be ANSI C compatible.)
Exception specifications
by DevX Pro
I recently learned that the C++ draft standard requires that an implementation not perform any static checking on a function's exception specification (aside from inherited virtual functions and function pointers). This makes the following legal (copied from the standard, sec 15.4 par 10): extern void f() throw(X, Y); void g() throw(X) { f(); // OK } Question 1: Why would they do such a thing? Question 2: Are lint-like tools available that can give warnings about these situations? It seems to me that without any static checking, the throw specifer is all but useless for a typical project.
61-71 of 71
Thanks for your registration, follow us on our social networks to keep up-to-date | http://www.devx.com/tips/cpp/compilers/4 | CC-MAIN-2015-27 | refinedweb | 454 | 62.78 |
Building a tiny GraphQL API using Next.js
With the release of Next.js v9, API routes are easier than ever.
I'm a huge fan of Next.js, it's my go-to framework if I'm building a web application these days - I tend to leave Gatsby for blogs and heavily content-driven sites, and create-react-app for more single page-y applications. When v9 dropped the other day, I was reading through the blog post and immediately wanted to see if I could hack something together using the new API routes feature.
If you're not familiar with Next.js one of it's advantages over other frameworks is the minimal approach to configuration. All you need is a
./pages/index.js file, containing:
export default () => <h1>Hello world!</h1>;
and you have a working, server-rendered React page for your Next.js app. Version 9 brings API routes, meaning all you need is a
./pages/api/index.js file, containing:
export default (req, res) => res.end("Hello world");
and you have a working API endpoint. It was possible to do this in previous versions of Next.js by editing the underlying express server, but having applied the ethos from the rest of the framework, this is available with zero configuration.
What if...
This got me thinking, what if you could use the API routing to expose a GraphQL API? Especially with the advent of the serverless paradigm and the return of the monolith, it's becoming increasing normal to bundle your API endpoints with the rest of your application. So I set myself the task of exposing a small GraphQL API from some Next.js API routing.
If you want to skip to the end, here's the finished repo: and here's a demo:
Development
Start off by creating a new Next.js application:
mkdir next-graphql-api cd next-graphql-api
Then initialise a new
yarn project (you can use
npm, I use
yarn 🤷♂️):
yarn init -y
Then install the dependencies needed for Next.js, plus
axios for HTTP requests and
graphql for parsing and creating GraphQL schemas:
yarn add next react react-dom axios graphql
Create an test index page -
./pages/index.js:
import React from "react"; export default () => <h1>Hello world!</h1>;
Then boot the server by running
yarn next dev and visit to check everything went smoothly. You should see
Hello world!.
GraphQL
Now we want to create our GraphQL endpoint. Let's create
./pages/api/graphql.js with the following content, which we will step through line-by-line:
import { graphql, buildSchema } from "graphql"; const schema = buildSchema(` type Query { hello: String } `); const root = { hello: () => "Hello world!" }; export default async (req, res) => { const query = req.body.query; const response = await graphql(schema, query, root); return res.end(JSON.stringify(response)); };
Firstly by creating the file with the name
graphql.js Next.js will create an endpoint with the URL. Ace!
Then we import the necessary functions needed from the
graphql library.
buildSchema allows us to define (using GraphQL - the query language itself) through types what kind of queries we can allow to be requested - or a "schema". In this case, we create a query with
the name
hello that will respond with a
string.
We then need to define a resolver for this query, always think about these in pairs - here's my query and here's what will resolve my query. These are objects where the properties are the names of
the queries we defined in our schema, and the values are functions which will return the data. Our query
hello should return
string, so we return
Hello world!, but this could be any string -
feel free to try out changing this!
Once we have our GraphQL bootstrapping done, all is left is to create (and export) the function which will serve our requests to.
If you've ever used the express (or koa) Node.js HTTP frameworks, this function signature should look fairly familiar. We are given access to the request (
req) and response (
res), so we want to take the
supplied GraphQL query out of the request body, then pass this (along with the
schema and
root resolver) to the
graphql function, to resolve our query.
If successful, the response is returned as a plain JavaScript object. We then need to convert this into JSON if we are to send it back to the client (be it a browser, mobile application, Postman etc).
So now, if you hit the endpoint in Postman (I am using Insomnia below), you should receive a response. Hooray!
Consuming this data
Back to
./pages/index.js, we want to use our newly created GraphQL endpoint to populate our UI. Add the following and again we will go through line-by-line:
import React from "react"; import axios from "axios"; const query = `{ hello }`; const App = props => <h1>The response from the server is: {props.hello}</h1>; App.getInitialProps = async () => { const response = await axios.post(``, { query }); return { ...response.data.data }; }; export default App;
This is a fairly standard React Function Component. The juicy bit is when we hook into Next.js'
getInitialProps method. This is fired by the main Next.js application component, which wraps all
components in our application. It's great for things like data fetching, where you want to do some pre-fetching before a component is rendered or indeed given props. We need this!
We hook into this and use the
axios library, a wrapper around
fetch, to request our newly created GraphQL endpoint. We supply the raw GraphQL as the body using the
query field, then once we
get a response, destructure that into the props which this method will return - so anything that is returned will end up as a
prop in the main component. We don't deal with errors or anything like
that, so it's something to think about if you were to take this further.
Then on the lines above we know, having used
getInitialProps to supply this, that we will have a
prop named
hello, as this is returned from the GraphQL query, with a value of
Hello world!.
Conclusion
Next.js is an excellent framework which allows you to get up and running very quickly, with minimal configuration needed. Creating UI and API routes are trivial, and as we are in full control of what response is returned from API routes, there's nothing to stop us creating a GraphQL API which sits inside the rest of our application. | https://mikefrancis.dev/blog/tiny-graphql-api-using-next-js | CC-MAIN-2020-50 | refinedweb | 1,087 | 72.36 |
#1 Members - Reputation: 560
Posted 11 July 2005 - 08:30 PM
#2 Members - Reputation: 1312
Posted 11 July 2005 - 09:32 PM
Anyways it appears you need the MS Platform SDK (required for win32 apps).
Once you've obtained that you must follow these instructions.
After which
create a project, then set-up the project settings for SDL:
C/C++ --> General --> Additional Include Directories = < insert sdl include directory >
Linker --> General --> Additional Library Directoires = < insert sdl library directory >
Linker --> Input --> Additional Dependencies = SDLmain.lib SDL.lib
You can also setup SDL include/library directories to be always set like you do for the platform sdk given in the instructions at the link above.
#3 Members - Reputation: 175
Posted 11 July 2005 - 10:03 PM
"c:\Program Files\SDL\include\"
"c:\Program Files\SDL\lib"
Then in Visual Studio 2005, you need to go to the Tools>Options menu, and under Projects and Solutions>VC++ Directories, you add those two directories...
Set Include Files to "c:\Program Files\SDL\include\" and
set Library files to "c:\Program Files\SDL\lib".
If you actually meant that you're using Visual C++ 2005 Express Edition (Beta 2), then there's a bug where doing the above isn't possible...it doesn't offer a way to set the directory in the options. What you need to do instead is to edit a text file. Assuming you installed VC++ Express to Program Files, you need to edit this file:
"c:\Program Files\Microsoft Visual Studio 8\VC\vcpackages\VCProjectEngine.Dll.Express.Config"
and when you're done editing that file, you need to DELETE the file below or your changes won't work:
"c:\Documents and Settings\Administrator\Local Settings\Application Data\Microsoft\VCExpress\8.0\VCComponents.dat"
You may need to change "Administrator" in that path to whatever your login name is. Once you've edited the file and deleted that .dat file, you have to restart Visual C++.
Editing the VCProjectEngine.Dll.Express.Config file should be easy enough to figure out, but just in case...I'll show you a section of mine where I've added SDL, DirectX 9.0c, and the Platform SDK:
<Directories
Include="c:\Program Files\SDL\include;c:\Program Files\Microsoft DirectX 9.0c SDK (June 2005)\Include;c:\Program Files\Microsoft Platform SDK\Include;$(VCInstallDir)include;$(VCInstallDir)PlatformSDK\include;$(FrameworkSDKDir)include"
Library="c:\Program Files\SDL\lib;c:\Program Files\Microsoft DirectX 9.0c SDK (June 2005)\Lib\x86;c:\Program Files\Microsoft Platform SDK\Lib;$(VCInstallDir)lib;$(VCInstallDir)PlatformSDK\lib;$(FrameworkSDKDir)lib;$(VSInstallDir);$(VSInstallDir)lib"
Path="c:\Program Files\Microsoft Platform SDK\Bin;$="c:\Program Files\Microsoft Platform SDK\src\crt;$(VCInstallDir)crt\src"
/>
Once again, remember to delete VCComponents.dat when you're done editing!
As for the linker error that you got, I'd say its because either you haven't installed the Platform SDK, or you haven't set up the paths to the include and lib files for it. uuid.lib is from that Platform SDK. Check out this FAQ on how to install it for Visual C++ 2005 Express.
There's a little bit more to getting SDL working, but this reply is already getting quite long, so I'll just quickly tell you. In your project's settings, you need to change the Runtime Library to Multi-threaded DLL which should be under C/C++>Code Generation. And if I remember right, for SDL, you can't just use "int main()", you need to add the parameters for the command line arguments, like this:
int main(int argc, char *argv[])
Then of course you need to add sdl.lib sdlmain.lib to the Additional Dependencies section of the Linker settings.
I actually made a project wizard to setup an SDL project automatically, 'cause it's quite a bit of work. Unfortunitly I have released it, but maybe I will someday. oh well...I'll bookmark this thread in case you need any other help. Good luck.
#4 Crossbones+ - Reputation: 4277
Posted 12 July 2005 - 08:24 AM
That should work out for ya.
#5 Members - Reputation: 560
Posted 12 July 2005 - 09:53 AM
I want to thank you all for your prompt response to this issue.
It's fixed, and I am now working through some tutorials from
This is what I didn't do (for future reference)
¤ Change Runtime Library to 'Multithreaded DLL'
¤ Ignore Specific Library -> 'uuid.lib'
¤ Placing SDL.DLL within the Visual Studio 8 Directory
Thanks for the help everybody!
#6 Anonymous Poster_Anonymous Poster_* Guests - Reputation:
Posted 25 July 2006 - 09:40 AM
i keep on getting this error message:
------ Build started: Project: vtest, Configuration: Debug Win32 ------
Linking...
LINK : fatal error LNK1561: entry point must be defined
Build log was saved at ":\Documents and Settings\Alan\Desktop\vtest\vtest\Debug\BuildLog.htm"
vtest - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
#7 Members - Reputation: 134
Posted 25 July 2006 - 11:22 AM
Regards.
#8 Members - Reputation: 318
Posted 29 January 2007 - 03:29 AM
#9 Members - Reputation: 1062
Posted 29 January 2007 - 04:15 AM
Quote:
It sounds like you didn't install the SDK properly. Try making a blank project and including <windows.h>, and it should fail. The header search paths probably just need to be set up properly.
#10 Members - Reputation: 122
Posted 13 March 2007 - 09:01 AM
SDL_teste.cpp
Linking...
LINK : fatal error LNK1561: entry point must be defined"
#include "SDL.h" /* Todas as aplicações SDL precisam deste cabeçalho */
#include <stdio.h>
int main(void) {
printf("Initializing SDL.\n");
/* Inicializa padrões, Vídeo e Audio */
if((SDL_Init(SDL_INIT_VIDEO|SDL_INIT_AUDIO)==-1)) {
printf("Não foi possível inicializar SDL: %s.\n", SDL_GetError());
exit(-1);
}
printf("SDL inicializada.\n");
printf("Terminando SDL.\n");
/* Termina todos os subsistemas */
SDL_Quit();
printf("Terminando....\n");
exit(0);
}
Anyone can help me??
Im with the libs linked, with uuid.lib ignored, with /MD and with no precompiled header.
Thanks!
#11 Members - Reputation: 122
Posted 13 March 2007 - 09:21 AM
I have to set the CONSOLE subsystem.
And then, use the main like this: "int main(int argc, char *argv[])"
"int main()" does not work.
;)
#12 Members - Reputation: 150
Posted 18 March 2007 - 04:24 PM
I kept getting really annoying manifest errors when the debugger attempted to use/load the SDL.dll, and I spent too much time trying to track down these errors online.
The only workaround that immidiately fixed up these issues was to build the SDL source in VS2005.
hth,
#13 Members - Reputation: 151
Posted 20 August 2007 - 09:42 AM
Quote:
I've got the same problem. I lloked into the VC/Includes directorie and it's just missing. No "windors.h" file. They aren't GL libraries too, so I think it could be related to the "Express" think (My guess is that in the free relase the just missed many libraries). Does anyone know how to fix it? I'm just trying to move from VC++ 6 to the 2005, but my projects don't work 'cause of that...
Thank you
#14 Members - Reputation: 1113
Posted 20 August 2007 - 01:19 PM
Learn to make games with my SDL 2 Tutorials
#15 Members - Reputation: 151
Posted 20 August 2007 - 01:37 PM
first, thanks to lazy foo and his tutorials, which made me learn SDL.
Now that I can run my old project in VC++ 2005, SDL_image seems to have forgot how to load an jpg...
loadedImage = IMG_Load( "Data/secsi.jpg" );
then loadedImage is just NULL u.u
(I'm getting off topic, no? should I open a new thread? xDDDD)
#16 Members - Reputation: 758
Posted 20 August 2007 - 01:51 PM
Quote:
Try moving where the data folder is. The working directory may be different than where you are used to in VS 6.0. Your working directory is by default I think the same as where the source files are. That is where it is for me but I am not sure if I moved anything.
#17 Members - Reputation: 162
Posted 30 July 2008 - 09:14 AM
Quote:
I realize this is an old thread, but I recently had the exact same issue, and found a solution. Chances are you are missing jpeg.dll on your system, which is included as part of the binary download of SDL_image from the SDL_image download page
You either need to include jpeg.dll in the same folder as your executable, or could put it somewhere "visible" to applications on your system, like in the windows\system32 folder.
Hope this helps anyone else that may be encountering the same issue. Sometimes SDL is a little too good at failing gracefully, and in this situation it simply doesn't load in the image...sadly it doesn't return an error code that I can see, or any way to tell that it happened because the jpeg.dll file wasn't found. | http://www.gamedev.net/topic/331596-setting-up-sdl-for-visual-c-2005/ | CC-MAIN-2016-18 | refinedweb | 1,494 | 64 |
SAP PI - Quick Guide
SAP PI - Introduction
SAP PI/XI enables you to set up cross-system communication and integration, and allows you to connect SAP and non-SAP systems based on different programming languages such as Java and ABAP. It provides an open environment that is necessary in a complex system landscape for the integration of systems and for communication.
SAP Process Integration is a middleware to allow seamless integration between SAP and non-SAP application in a company or with systems outside the company.
Example
An application that is run on different systems that are part of different business units in a company or implemented in a distributed environment between different companies that.
Why do We Need SAP PI?.
The
SAP PI is used to connect different applications or systems in a distributed environment that can be set up between different companies, so there is a possibility that the structure of data exchange between two components differs from each other.
Mapping determines the structure of data in a source system to structure of data in a target system. It also determines the conversion rules that are applied to the data between source and target system.
SAP PI - Installation Options
When you run a scenario in SAP PI, the communication and processing capabilities depend on runtime engines that are installed with the installation of SAP PI. You can install one or more runtime engines on a host system. SAP PI provides the following two installation options −
Type 1 — Dual Usage Type
This installation is based on ABAP and Java and provides tools for designing and configuring integration content and also these runtime engines −
- Integration Engine
- Business Process Engine
- Advanced Adapter Engine
Type 2 — Advance Adapter Engine Extended AEX
This installation is based on Java and provides tools for designing and configuring integration content and contains Advance adapter engine as runtime engine.
SAP PI - NetWeaver PI Architecture
SAP PI architecture consists of multiple components which are used at design time, configuration time and runtime. In SAP PI, the sender system is known as the source and the receiver is called the target system and the architecture is known as Hub and Spoke structure. The Spoke is used to connect with external systems and Hub is used to exchange messages.
A SAP PI system is divided into the following components −
- Integration Server
- Integration Builder
- System Landscape Directory SLD
- Configuration and Monitoring
Runtime Workbench
This is a tool used to provide central monitoring of PI components and messages.
Integration Server
This is one of the key components of the SAP PI system and is used for processing of messages.
It consists of the following three engines −
- Business Process Engine
- Integration Engine
- Central Advanced Adapter Engine AAE
Business Process Engine
This engine is used for message correlation and deals with the processing of messages in ccBPM.
Integration Engine
This engine is used for routing and mapping and provides central Integration Server services. If the source structure is different from the target structure, the Integration Engine calls the mapping runtime, which converts the source structure into the target structure.
The mapping runtime is based on the Java stack, as mentioned under the SAP PI platform topic.
A message can be of the following two types −
Synchronous message − A message that contains both the request and the response part.
Asynchronous message − A message that contains either the request or the response part only.
In SAP PI, a message is represented by an interface. An interface defines the structure of the message in XML format and the direction of communication.
Central Advanced Adapter Engine (AAE)
The Integration Engine handles messages in the XML and SOAP protocols. If a business system does not provide data in this format, adapters are used to convert the messages to the protocol and message format required by the Integration Engine.
In the SAP PI architecture, you can consider the Adapter Engine as the Spoke and the Integration Engine as the Hub used to connect to external systems.
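As a rough sketch of what an adapter's content conversion does, the following turns one line of a hypothetical CSV order record into a simple XML payload that an Integration Engine could process. The element names are invented; a real adapter is configured in PI, not hand-coded like this.

```java
// Illustrative sketch: adapters turn a native format (here, one CSV record)
// into the XML a PI pipeline works with. Element names are made up.
class CsvToXml {
    static String convert(String csvLine) {
        String[] f = csvLine.split(",");
        return "<Order>"
             + "<Number>" + f[0] + "</Number>"
             + "<Material>" + f[1] + "</Material>"
             + "<Quantity>" + f[2] + "</Quantity>"
             + "</Order>";
    }
}
```

The point is only the direction of the conversion: native format in, XML out; the reverse happens on the receiver side.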
In older, dual-stack releases of SAP PI, most of the adapters were part of the Java stack and only two adapters were part of the ABAP stack.
Java Stack Adapters
The following adapters run on Java Stack −
RFC adapter, SAP Business Connector adapter, file/FTP adapter, JDBC adapter, JMS adapter, SOAP adapter, Marketplace Adapter, Mail adapter, RNIF adapter, CIDX adapter
ABAP Stack Adapters
The following adapters run on ABAP Stack −
SAP PI - UI Tools
You can use different SAP PI user interface tools to access different components of SAP PI system architecture. The most common UI tools are −
ES Builder − This tool provides Java user interface for working in Enterprise Service Repository ESR.
SAP NW Developer Studio − This is Java Eclipse-based tool to view and edit some object types in Enterprise Service Repository.
Integration Builder − This tool provides Java-based user interface to work in the Integration Directory.
SAP GUI
This is the SAP client tool used to access the ABAP stack of the SAP PI system.
The following illustration shows the different UI tools of SAP PI and the components that can be accessed using these tools −
SAP PI - Platform
Single Stack Vs Dual Stack
In older SAP PI releases, not all the components were based on a single platform. A few components, such as the Integration Engine, the Business Process Engine and the Integration Builder, were based on the ABAP stack, while other components, such as the Enterprise Services Repository (ESR), the Integration Directory, the SLD and the Adapter Engine, were based on the Java stack. These systems were therefore called dual-stack systems, as PI required both the ABAP and the Java stack to run.
In the latest releases of SAP PI, the ABAP stack components have been modified to work on the Java stack, so SAP PI needs only the Java stack to run; such a system is called a single-stack system.
SAP PI - Home Page
To open SAP PI Tools home page, use the following URL −
http://<host>:5<instance#>00/dir/start/index.jsp
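The port in this URL follows the pattern 5&lt;instance#&gt;00. As a quick sanity check, a helper can assemble the address from a host name and a two-digit instance number (the host name used in the test is a placeholder):

```java
// Builds the PI tools start page URL: port = "5" + two-digit instance + "00",
// e.g. instance "00" gives port 50000.
class PiUrl {
    static String startPage(String host, String instance) {
        return "http://" + host + ":5" + instance + "00/dir/start/index.jsp";
    }
}
```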
SAP PI home page has the following four Java links −
- Enterprise Services Repository (ESR)
- Integration Directory (ID)
- System Landscape (SL)
- Configuration and Monitoring (CM)
Enterprise Services Repository (ESR)
In SAP PI, Enterprise Service Repository is used to design and create objects to be used in the integration scenario. You can design Interface Objects, Mapping Objects and the different integration processes.
Interface Objects
The following are the Interface Objects −
- Service Interface
- Data type
- Message type
Mapping Objects
Mapping of messages is done as per the sender and the receiver data structure
Integration Processes
Operation Mapping is used for converting the source structure to target structure if data structure is different. Complex Operation Mapping can be simplified using Message Mapping.
Message Mapping can be implemented in the following ways −
- Graphical Mapping
- Java Mapping
- XSLT Mapping
- ABAP Mapping
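Of the options above, XSLT mapping can be demonstrated with the JDK alone: a stylesheet converts the source structure to the target structure. The element names (Order, PurchaseOrder) are invented for illustration; in PI the stylesheet would be imported into the ESR rather than embedded in code.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Illustrative XSLT mapping: rename the root element and one child element.
class XsltMapping {
    static final String XSLT =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='xml' omit-xml-declaration='yes'/>"
      + "<xsl:template match='/Order'>"
      + "<PurchaseOrder><Id><xsl:value-of select='Number'/></Id></PurchaseOrder>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    static String transform(String sourceXml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(sourceXml)),
                        new StreamResult(out));
            return out.toString().trim();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Java mapping and ABAP mapping follow the same idea, but the transformation is expressed in program code instead of a stylesheet.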
Under Enterprise Service Repository, you can see different UI tools — Enterprise Service Builder and Web UI and Service Registry.
When you launch the Enterprise Services (ES) Builder application for the first time, you are prompted to run the application. Click Run.
Once the application is launched, you get the following options −
- Main Menu Bar and Standard Toolbar at the top
- Navigation Area on the left side
- Work Area on the right side
The object editors are displayed in the work area. These object editors include functions that relate specifically to the objects that are open.
When you run Web UI, you will be prompted to enter the username and password.
In Web-based interface, you can perform the following tasks −
Search − Search for service interfaces, data types, and so on.
Subscribe − Subscribe for Notifications.
Manage − Manage lifecycle status of service interfaces, data types, and so on.
Integration Directory
Integration Directory is used for the configuration of objects that are created in Enterprise Service Repository and configuration is executed by the Integration Engine at runtime. To configure ESR objects, you need to import object — Service and Communication Channel.
Service allows you to address the sender or the receiver of messages. Depending on how you want to use the service, you can select from the following service types −
- Business System
- Business Service
- Integration Process Service
A communication channel determines the inbound and outbound processing of messages by converting external native messages to SOAP XML format using the Adapter Engine. There are two types of communication channels − the sender channel and the receiver channel.
In the Integration Directory, you can make four types of configuration −
Sender Agreement − This determines how the message is transformed by Integration server.
Receiver Determination − This is used to determine information of receiver to whom message to be sent.
Interface Determination − This is used to determine the inbound interface to which the message is to be sent. This also determines the interface mapping for processing the message.
Receiver agreement − This defines how a message is to be transformed and processed by the receiver.
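Conceptually, receiver determination is a lookup from the sender system and its outbound interface to one or more receiver systems. A toy version with invented system and interface names:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy receiver determination: (sender, outbound interface) -> receiver systems.
// The system and interface names are invented for illustration.
class ReceiverDetermination {
    static final Map<String, List<String>> RULES = new HashMap<>();
    static {
        RULES.put("CRM|OrderOut", Arrays.asList("ERP", "Warehouse"));
        RULES.put("CRM|InvoiceOut", Arrays.asList("ERP"));
    }
    static List<String> receivers(String sender, String outboundInterface) {
        return RULES.getOrDefault(sender + "|" + outboundInterface,
                                  Collections.emptyList());
    }
}
```

Interface determination then picks the inbound interface and mapping for each receiver found in this step; the real rules are maintained in the Integration Directory, not in code.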
Under Integration Directory, you can see the Integration Builder. When you click the Integration Builder, you can see the different options to configure the objects created in ESR.
SAP PI - System Landscape Directory
The
Landscape
Under Landscape, you can find the following options −
Technical Systems − You can view and define systems and servers.
Landscapes − You can view and configure group of systems.
Business Systems − You can view and configure business systems for use in Process Integration.
Software Catalog
Products − This is to view products in SAP software catalog.
Software components − This is to view software components in SAP Software catalog.
Development
Name Reservation − This is used for name reservation and also for NW development.
CIM Instances − This is used to view and maintain data on CIM level.
Configuration and Monitoring
Monitoring Tab − This provides access to message monitoring, component monitoring and performance monitoring.
Configuration and Administration Tab − This is used to administer and configure the components of the PI infrastructure.
Testing Tab
Under the Testing tab, you get the following two options −
Send Test Message
To simulate a message flow and verify that the SAP NetWeaver Process Integration runtime is functioning correctly by sending a test message to the Integration Engine or the Advanced Adapter Engine.
Cache Connectivity Test
This is used to inspect the cache connectivity status of the infrastructure components of SAP NetWeaver Process Integration and test their connectivity with the runtime caches.
SAP PI - Communication
In SAP PI, you can define two types of communication — Synchronous and Asynchronous.
Synchronous Communication
In synchronous communication, the sender sends a request and waits until a response is received before continuing.
In this approach, there is a possibility that the sender might resend the message after time out and a duplicate message may exist. This approach in PI is known as BE (Best Effort).
Consider two systems, A and B, and introduce an intermediate system I between them. Communication between System A and System I is synchronous, while communication between System I and System B is asynchronous.
The following types of errors can occur in this communication scenario −
Application Error − An error occurs at the receiver end while processing a message; the sender is not aware of this error and keeps waiting for the reply.
Network level Error − An error occurs in the communication network between the sender and the receiver. The sender is not aware of this, the message is stuck in between, and the sender waits until the operation times out.
Error in Response Message − An error occurs and the response message gets stuck in between, while the sender keeps waiting.
Advantages
The following are the key advantages of using Synchronous Communication −
There is no need to configure response message routing.
There is no need to correlate a response to its request.
In this communication, the response is received immediately.
Recommended Scenario
This is suitable for read operations, for example, viewing a purchase order.
Disadvantages
The following are the key disadvantages of using Synchronous Communication −
In case of a failure, the sender needs to send the message again.
The receiving system should be configured to check for duplicate messages.
In this scenario, the sender application is blocked till a response is received or a time out error occurs.
You can’t configure multiple receivers.
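The blocking behavior described above can be sketched with a future and a timeout: the caller waits for the reply and gives up with a timeout error if none arrives in time. This is a generic Java illustration, not the PI runtime itself.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a synchronous call: the sender blocks on the response
// and raises a timeout error if none arrives in time.
class SyncCall {
    static String call(CompletableFuture<String> response, long timeoutMs) {
        try {
            // The calling thread is blocked here until the response arrives.
            return response.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "TIMEOUT";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```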
Asynchronous Communication
In Asynchronous Communication, you add an intermediate system or a middleware between two systems. When a Sender Application sends a request, it does not wait for the Receiver Application to send the response. If there is a failure due to some reason, the middleware is responsible for resending the message. If required, the receiving system can send a response back to Sender as a separate asynchronous call.
This approach in SAP PI is called Exactly Once (EO) or Exactly Once in Order (EOIO).
The intermediate system is a queue: a message from System A is first added to the queue, and at the receiver end it is pulled from the queue and sent to the receiver. A response message from System B follows the same path in the opposite direction.
You can also maintain order in certain situations as per business requirement by using First In First Out (FIFO). This scenario is called Asynchronous with order maintained or Exactly Once in Order (EOIO).
Asynchronous communication assures guaranteed delivery. If the receiver system is not available for some time, the intermediate queue retains the message until the receiver system becomes available again, at which point the message is pulled from the queue and delivered.
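The queue in the middle can be sketched with a FIFO buffer: the sender enqueues and moves on; the receiver drains the messages later, in order (the EOIO behavior). Again a generic Java illustration of the pattern, not the PI queueing infrastructure.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of asynchronous, order-preserving delivery: messages wait in a
// FIFO queue until the receiver is ready to pull them.
class AsyncChannel {
    private final Queue<String> queue = new ArrayDeque<>();

    void send(String message) {        // the sender does not wait for delivery
        queue.add(message);
    }

    List<String> drain() {             // the receiver pulls messages later, in order
        List<String> delivered = new ArrayList<>(queue);
        queue.clear();
        return delivered;
    }
}
```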
Recommended Scenario
This is recommended for modify operations, like creating a purchase order or modifying a purchase order.
Advantages
The following are the key advantages of asynchronous communication −
In case of failure, the SAP PI system ensures guaranteed delivery and will resend the message.
No configuration required for duplicate checks.
You can configure multiple receivers in this scenario.
Both the sender system and the receiver system need not be online at same time.
PI logs all the messages sent via asynchronous communication.
No timeout, as the intermediate system holds the message and the response request.
Disadvantages
The following are the key disadvantages of asynchronous communication −
In this scenario, the sender needs to correlate responses to request on its own.
Response message needs to be implemented and routed separately.
It doesn’t provide an immediate response.
SAP PI — Technologies
SAP provides a middleware based on NetWeaver called the SAP NetWeaver Process Integration. SAP NetWeaver PI delivers a message in specific format called the Simple Object Access Protocol (SOAP-HTTP). This message contains a header and payload. The header contains general information such as the sender and receiver information and the payload contains the actual data.
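The header-plus-payload shape can be illustrated with Python's standard XML library. This is a rough sketch of the idea only; the element names below are invented for illustration, not PI's actual SOAP schema:

```python
import xml.etree.ElementTree as ET

def build_message(sender, receiver, payload_xml):
    """Wrap a payload in an envelope with routing data in the header."""
    envelope = ET.Element("Envelope")
    header = ET.SubElement(envelope, "Header")
    ET.SubElement(header, "Sender").text = sender      # general information
    ET.SubElement(header, "Receiver").text = receiver
    body = ET.SubElement(envelope, "Body")
    body.append(ET.fromstring(payload_xml))            # the actual data
    return ET.tostring(envelope, encoding="unicode")

msg = build_message("ERP_DEV", "CRM_PRD", "<PurchaseOrder id='1001' />")
```

The middleware routes on the header alone; the payload is opaque business data that only the receiving application interprets.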
System can communicate with SAP NetWeaver PI directly or with the use of adapters −
- Communication using Application Adapters
- Communication using Technical Adapters
- Communication using Industry Standard Adapters
- Communication using Transaction Adapters
- Direct communication using Proxies
SAP PI - Securing Objects
For transferring information in the form of objects from one Enterprise Service Repository to another, you can select from the three means of transport −
- File System Based Transport
- Change Management Service (CMS)
- Change and Transport System (CTS)
SAP PI - Creating Objects
SAP PI is a runtime environment that assigns inbound messages to receivers and maps them to another structure or protocol. SAP NW PI requires information about how the messages are to be processed. This information is held in design objects in the PI Enterprise Service Repository (ESR) and the Integration Directory.
Systems that are connected to SAP PI are maintained in the System Landscape Directory (SLD). The data in SLD can be divided into the following categories −
- Software Component Information
- System Landscape Description
Software Component Information
It includes information about all available SAP software modules, as well as the possible combinations of software modules along with their dependencies, for example, software component, release, support packages, OS versions, and databases.
To check this, go to the System Landscape Directory (SLD).
To see all technical systems, their type, versions and last update, go to the Technical System tab.
To view products and software components, go to the Software Catalog option.
Once you click the Product tab, you can see all the products, and also their version and vendor name.
You can also check Software components, versions and Vendor name.
System Landscape Description
The System Landscape Description holds the information about the individual system landscape. A data supplier provides the SLD with up-to-date system information at regular intervals.
SAP PI - Modeling Scenarios
A model in SAP PI can be created using one of the following approaches −
- Process Component Architecture Model
- Process Integration Scenario Model
SAP PI - Design Objects
A Design Object is uniquely identified by its name and namespace and is assigned to one software component. You can create different design objects to perform various functions; some are mandatory in an integration scenario and others are optional. The most common design objects include process integration scenarios, message types, data types, mappings, etc.
The following table defines common design objects, their functions and use in a scenario −
These Design Objects are often represented in the form of hierarchy.
SAP PI - Display Design Objects
Consider a company that has many interfaces, each running over a middleware. To see the different types of middleware that are used, log in to the SAP PI Tools start page.
Step 1 − Use Integration Builder URL or T-code — SXMB_IFR.
Step 2 − Go to System Landscape Directory on SAP PI 7.3 screen.
Step 3 − Click Product as shown in the following screenshot. If you are prompted to enter username and password enter the details.
Step 4 − To see the technical system that the software component is assigned to, enter the product name and click Go.
Step 5 − Click the product name and go to the Installed System tab on the details pane. You can check the name of the assigned technical system there.
Step 6 − If you want to see which business system is derived from this technical system, select the technical system. You can also check it by going to the SLD home page → Business Systems.
Step 7 − Enter the technical system name and click Go.
Step 8 − Note the field that determines the name of the business system.
SAP PI - Integration Scenarios
To create an integration scenario in SAP PI, you need to create technical and business system in System Landscape Directory.
The SLD is implemented as a Java software component (SAP_JTECHT) on the SAP NetWeaver Application Server Java. It is based on the open Common Information Model (CIM) standard, which is defined and published by the Distributed Management Task Force, Inc.
The SLD is the central listing tool for application component information, products and software components, and system landscape data (technical and business systems).
In the SLD, to move from the business model to the technical model, you use the relationship between a process step and a software component.
How to Transfer Software Components in SLD?
When you install the System Landscape Directory, the initial catalog is installed.
From SAP Market Place, you can import more up to date catalog.
You can also import your own software components and products depending on the project and integration scenario.
For A2A scenarios, business systems are used and they exist in SLD. For B2B scenario, you use business objects and they reside in Integration Directory.
Technical System
Technical systems are part of the System Landscape Directory (SLD) and contain information about version, database and patch levels, operating system, etc.
There are different modes on the technical system −
- AS ABAP System
- AS Java System
- Standalone Java system
- Third Party
There are different import tools that can be used to transfer data from the technical system to the SLD. SAP NetWeaver Administrator is the common import tool for SAP AS Java versions above 7.1.
Business System
Business system acts as a sender and a receiver in SLD. They inherit the software components from technical systems as products. No new software components can be added to the business systems in SLD.
With SAP AS ABAP, each client is defined as one business system. In SAP AS Java, each technical system acts as a business system.
SAP PI - File to File Scenario
In SAP PI file to file scenario, we transfer a file from source system to target system. Once the components are built in SAP PI, you can transfer a file in SAP PI system by creating objects in the Enterprise Service Builder.
SAP PI — File to File Scenario Execution
Step 1 − Go to SAP PI Tools Page → Enterprise Service Builder under ESR.
Step 2 − To find the component under which objects have to be created, expand the component tree to find the software component version.
Step 3 − Select the component → Right click, click New to create an object under this component.
Step 4 − The first object that we create is a namespace. Enter the namespace in the form of URL and click Create button at the bottom.
Step 5 − Once the object is created under software component, save and activate the object.
Step 6 − To Activate, click Activate as shown in the following screenshot −
Step 7 − Once the Namespace is saved and activated, create a data type. Go to software component → Right click → New. In the next window, select interface objects → data type.
Step 8 − Enter the name of Data Type and Namespace and click Create as above. Next is to insert sub element into the Data Type.
Step 9 − Enter the name of the element.
Step 10 − Insert sub element to add child employee id and name.
Step 11 − Define the type and occurrence. Occurrence defines how many times that element will appear in the file. You can select minimum occurrence and maximum occurrence value.
Step 12 − Click the Save button.
Step 13 − Activate the data type. Go to Data type → Activate.
Creating a Message Type
Step 1 − Right click Namespace → New
Step 2 − Under Interface Objects, select Message Type. Enter the fields.
Step 3 − Enter the name of Message Type.
Step 4 − By default, it takes the name of Namespace and Software components. If it doesn’t, you can select manually. Then, click Create.
Step 5 − Now, define the Data Type that you will be using for the Message Type. Drag the Data Type from the left bar to the Data Type option under Message Type. Click the Save button.
Step 6 − Activate Message Type → Activate.
Note − If the structure of your input file and output file is the same, you can use one Data Type and one Message Type only. If the structure is different, you have to create two data types and message types for inbound and outbound. In this example we are using the same structure for both input and output file.
SAP PI - Creating Service Interface
SAP PI - Creating Message Mapping
Let us now understand how to create Message Mapping to map inbound process to outbound process.
Step 1 − Go to Object → New → Mapping Objects → Message Mapping.
Step 2 − Enter the mapping name and click Create as shown above. Now, define the source and target messages. Drag the messages under Message Type to the source and target message areas under mapping.
Step 3 − Now, map these messages using the available mapping options. Select the function from dropdown and you can see different options available under each tab.
Example − You have first name and last name in the source file and you want the full name in the target file. Here you can use Concatenate under Text function.
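In plain Python, the effect of that Concatenate mapping looks like the sketch below (the field names are made up for illustration):

```python
def map_record(source):
    """Map a source record with FirstName/LastName to a FullName target."""
    # Concatenate with a single space as the delimiter
    return {"FullName": source["FirstName"] + " " + source["LastName"]}

print(map_record({"FirstName": "Jane", "LastName": "Doe"}))
# {'FullName': 'Jane Doe'}
```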
Step 4 − We are now implementing a file-to-file scenario, so we will just select the source and target mapping and map directly, since the name and the structure are identical.
Step 5 − As the structure is the same, we will use the above method. In the next window, click Apply.
Step 6 − You can see all icons turn green and mapping is shown. Now, save the mapping.
Step 7 − Activate the Message Mapping. Go to Message Mapping → Activate. Once this Message Mapping is activated, create Operation Mapping in ESR.
SAP PI - Creating Operation Mapping
Let us now understand how to create Operation Mapping.
Step 1 − Go to Object → New → Message Objects → Operation Mapping.
Step 2 − Enter the name of Operation Mapping and click Create button.
Step 3 − In the next window, you need to enter Source Operation and Target Operation. Drag the Service Interface from the left pane to Source Operation and Target Operation. Inbound Service Interface will be dragged to Target Operation and Outbound Service Interface will be dragged to Source Operation.
Step 4 − Drag the Message Mapping to the Mapping Program option as in the following screenshot. Once you make these settings, click the Save button at the top.
Step 5 − Now, go to Operation Mapping → Activate → Activate → Close.
Step 6 − Go to Integration Builder under Integration Directory on SAP PI Tools Home Page to configure a scenario.
Step 7 − Go to Configuration Scenario View of Integration Builder.
Step 8 − To configure a scenario, go to Object → New → Under Administration tab → Configuration Scenario.
Step 9 − Enter the name of Configuration Scenario and click Create button.
Step 10 − Save and Activate the Configuration Scenario as shown in the following screenshot.
SAP PI - Creating Integrated Configuration
SAP PI - Connectivity
SAP PI provides you with a wide range of adapters that allows you to connect applications from different protocols. In the case of the sender, the adapter converts the inbound message encrypted in the sender protocol into a PI-SOAP message; in the case of the receiver, the PI-SOAP message is converted then into the receiver's protocol.
Available Adapters in SAP NetWeaver PI
SAP PI supports different adapters; some of them are processed in the Advanced Adapter Engine and others in the Integration Engine. IDoc, HTTP, and XI are a few adapters that are processed in the Integration Engine.
The following are the available adapters in SAP NetWeaver PI −
How to Check Existing Adapter Engines?
You can check the list of existing adapter engines in System Landscape Directory (SLD) by performing the following steps −
Step 1 − Use Integration Builder URL or T-code — SXMB_IFR.
Step 2 − Go to the System Landscape directory on SAP PI 7.3 screen and click Product as shown below. If you are prompted to enter the username and password, enter the details.
Step 3 − Navigate to the Technical System area on the left pane of the System Landscape Directory.
Step 4 − Select Process Integration as the type of Technical System.
Step 5 − Check how many Adapter Engines are listed.
Step 6 − There is only one Adapter Engine of type XIAdapterFramework, which corresponds to the Central Adapter Engine on the Integration Server.
Step 7 − You can also check the list of existing adapters on Runtime Workbench. Go to Configuration and Monitoring.
Step 8 − In the next window, go to component monitor option.
Step 9 − Select components with ‘All’ as status.
SAP PI - ccBPM Overview
SAP PI - ccBPM Configuration
SAP PI — Integration Processes
The Display Integration Process screen opens, and the Graphical Definition of the Integration Process is displayed.
The Business Process Editor starts when you double click an Integration Process. It consists of the following areas −
- Area with header data
- Graphical definition area
- Properties pane
- Process overview area
- Processing log
- Object area
Steps in an Integration Process
The steps that are configured in an integration process are either message steps or steps related to a process.
The following are message-relevant steps −
- Receive a message
- Send a Message
- Determine receivers for subsequent send steps in the process
- Transform a message
The following are process-relevant steps −
- Switch
- Block
- Control (trigger exceptions or alerts)
- Fork
- Container operation (processing of data)
- While loop
- Wait
SAP PI - Monitoring Integration Processes
Example
The process monitor T-code — SWF_XI_SWI1 expects the workflow number of the integration process.
You can determine the runtime cache by using the T-code — SXI_CACHE as shown in the following screenshot −
SAP PI - Web Services
A Web service is an application function or a service and can be used through Internet standard. It is an independent, modular, and self-describing application function or service.
It can be described, made available, located and transformed or called using standard Internet Protocols.
Each Web service encapsulates a function that can be used to perform tasks. A service provider provides access to a Web service and has a WSDL document that describes it.
A Web service user is called a service requester, and uses the Web service with the help of a Web browser. In a normal scenario, a service requester is an application that accesses the Web service. The application takes all the details necessary to access the Web service from the service description, and this information is maintained in the service registry.
The following illustration shows a common Web service scenario −
Web Service – Key Features
The following are the key features of a Web service −
Web service allows programs running on different platforms, operating systems and different languages to communicate with each other.
Web service is an application function or a service.
Web service can be used through internet standard.
Web services can be published and traced.
Web services form a basis for Enterprise Services Architecture (ESA), which is known as SAP's enhanced version of service-oriented architecture (SOA).
How to Analyze Different Web Services?
Perform the following steps to analyze different Web services −
Step 1 − Login to ECC system, use Transaction code — SOAMANAGER
Step 2 − Select the Web service checkbox → Apply Selection.
Step 3 − Verify that the Overview tab contains the entry SERVICE and that a binding is displayed. If the SERVICE binding is not displayed, the binding must still be completed.
Step 4 − To show the Web service and its binding, choose the Open WSDL document for the selected binding or service link.
Step 5 − A Web browser showing the WSDL opens. Scroll down to the end of the WSDL; you will find the endpoint under the WSDL port node.
Where SAP PI is Not Recommended?
SAP PI is not recommended for a synchronous request/response scenario. In synchronous communication, a call is invoked with a request-and-response operation, and the process output is returned immediately after the operation. Synchronous communication also places a heavier load on the infrastructure.
With a non-SAP backend such as Java or .NET, SAP PI is not recommended as the middleware tool in UI-driven scenarios. Likewise, when a backend system is exposed as a UI service, SAP PI is not recommended for UI-driven scenarios.
javascript custom indicator for stock trading
Budget $10-30 USD
I have a simple custom indicator in thinkorswim and need it converted for Tradovate. I hope you have knowledge of trading software. Must start and finish within a few hours.
Here is code from thinkorswim that needs to be converted
input RiskUnit = 150;
input buffer = .00;
input digits = 0;
def price = close(priceType = [login to view URL]);
def candleRangeBull = price - low + buffer;
def candleRangeBear = high - price + buffer;
def BullRisk = (RiskUnit) / round(candleRangeBull);
def BearRisk = (RiskUnit) / round(candleRangeBear);
def BullRisk1 = rounddown(Bullrisk, digits);
def BearRisk1 = rounddown(Bearrisk, digits);
def o = open;
AddLabel(yes, "Risk: " + AsDollars(RiskUnit) + " Ele : "+ (if price > o then BullRisk1 else BearRisk1), if price > o then [login to view URL] else [login to view URL]);
5 freelancers are bidding an average of $60/hour for this job
Hello, I have gone through your project JS CUSTOM INDICATOR description. I CAN START WORK RIGHT NOW. Kindly contact me so that we can discuss more on the specifics. Thank you
Hi Client thanks for your job posting. TOP SKILL IS ALGORITHM IN JAVASCRIPT/C/C++/C#. I can do it very fast. I not only took part but also won in many programing contest. So it's easy for me. Please contact me. Thanks | https://www.dk.freelancer.com/projects/javascript/javascript-custom-indicator-for-stock | CC-MAIN-2022-33 | refinedweb | 214 | 64.2 |
Simply create a subroutine that initializes something, blesses a reference to it into the appropriate class, and returns the blessed reference. There is no enforced naming scheme (as there is in C++ or Java, for example), but many Perl hackers use new:
sub new {
    my $class = shift;               # allow for inheritance
    $class = ref($class) || $class;  # allow indirect or direct calling
    my $self = {};                   # create an anonymous hash to hold member data
    bless($self, $class);            # bless the reference into the class
    return $self;                    # return new object
}
You will probably want to be in your own package before you declare this constructor, though.
I tend to use:
sub new {
    my $class = shift;
    my $self  = {};
    if (bless($self, $class)->init(@_)) {
        return $self;
    } else {
        # throw some sort of error
    }
}

sub init { 1; }
sub init {
    my $self = shift;
    if ($self->SUPER::init(@_)) {
        ## do some sort of initialization
        ## or return false
        return 1;
    } else {
        return 0;
    }
}
I'm a pragmatist, and see nothing wrong with supporting $obj->new(). In fact, perlobj even mentions it.
If your class takes args that are in addition to what the parent class uses in ->new(), then you need a bit more code.
# Create a new object
sub new
{
    my $thing = shift;
    my $class = ref($thing) || $thing;

    my $self = {};
    bless($self, $class);

    if (! $self->_init($thing, @_)) {
        # Failed to initialize
        # Throw some sort of error, or
        return;   # Returns 'undef'
    }

    return ($self);
}

# Initialize a new object
sub _init
{
    my $self  = shift;
    my $thing = shift;

    # Separate '@_' into args for parent class and args for this subclass
    my @parent_args = ...;
    my @my_args     = ...;

    # Perform parent class initialization
    if (! $self->SUPER::_init($thing, @parent_args)) {
        # Parent class initialization failed
        return (0);
    }

    # Perform subclass initialization
    # Making use of '@my_args', if any
    if (ref($thing)) {
        # $thing->new( ... ) was called
        # Make use of '@my_args', if any
        # And make use of object's data, if applicable
    } else {
        # CLASS->new( ... ) was called
        # Make use of '@my_args', if any
    }

    return (1);
}
Don't call your constructors explicitly throughout the code.
Use Class Factory instead.
Odoo Help
How to customize populating dropdown in OpenERP ?
Hi guys,
I am a newbie in OpenERP. I now have a task to customize populating a dropdown in OpenERP; the specific task is displaying uom name + uom_category in the dropdown instead of uom_name.
Example: field="uom_id"
The GUI currently displays a dropdown that just contains the name of the uom, like below:
Unit(s)
kg
g
...
But now I would like to customize to see both uom name and uom category when user click on dropdow. Example:
Unit(s) - Unit
kg - Weight
g - Weight
cm- Length
...
Could we do that? I saw OpenERP populating it automatically and don't know how to customize it. I appreciate your help.
Many thanks, Duy.
Hi Duy,
First of all, you have to make a custom module and put the following code in your .py file.
class product_uom(osv.osv):
    _inherit = 'product.uom'

    def name_get(self, cr, uid, ids, context=None):
        res = []
        for rec in self.browse(cr, uid, ids, context):
            name = (rec.name) + ' - ' + (rec.category_id and rec.category_id.name or '')
            res.append((rec.id, name))
        return res
and in your .xml file, use widget="selection". For example,
<field name="uom_id" position="attributes">
    <attribute name="widget">selection</attribute>
</field>
Hope this work for you.
Hi all,
First, I would like to thank you for your very quick responses (Keyur and Ghanshyam Prajapati). Looks like the second solution is what I need. But I have a question: why don't we write it like below:
name = (rec.name ) + ' - ' + (rec.category_id.name)
I don't know why you get
rec.category_id and rec.category_id.name or '') ? Why we need 'and' and 'or' operator while category_id.name is enough. Actually, I am still not familiar with query in OpenERP yet.
Thanks again, Duy.
In the product.uom object, category_id is a required field, so you can use rec.category_id.name directly. But if category_id were not a required field, you could not use category_id.name directly: if no category_id has been selected, fetching the name from category_id raises an error because category_id is null. Please mark my answer if your problem is solved. Thanks
how can I mark your answer, vote it ? Because this is the first time I use this Q & A page. Thanks
Click on tick mark on my answer.
I define it in my class this way and it works for me.
class stock_picking(osv.osv):
    _name = 'stock.picking'
    _inherit = 'stock.picking'
    _columns = {
        'aux_almacen_orig': fields.selection(_buscar_shortname_alm,
            method="True", type="char", size=256, string="Almacen Origen"),
    }
then i define the function this way:
def _buscar_shortname_alm(self, cr, uid, context=None):
    obj = self.pool.get('stock.location')
    ids = obj.search(cr, uid, [])
    res = obj.read(cr, uid, ids, ['shortcut', 'id'], context)
    res = [(r['id'], r['shortcut']) for r in res if r['shortcut'] != False]
    return res
Note: I define shortcut in my class stock.location; this field is only to shorten the complete name in my case, but you can change it to another field that you have in your class:
class stock_location(osv.osv):
    _name = "stock.location"
    _inherit = "stock.location"
    _columns = {
        'shortcut': fields.char('Nombre Corto', size=50),
    }
and XML you show
<field name="aux_almacen_orig" />
and finally this is the result. orchidshouseperu.com/screenshots/Captura%20de%20pantalla%20de%202014-03-20%2015:52:20.png
Hi Duy,
Of course you can customize your selection field. The ORM provides a method fields_get().
This method gets the description of all fields. Once you have the description of the field, you can easily customize its functionality.
You can find a short description of this method in the ORM documentation.
def fields_get(self, cr, uid, fields=None, context=None):
    res = super(your_class, self).fields_get(cr, uid, fields, context)
    if res['uom_id']:
        res['uom_id']['selection'].append(('your_key', 'your_value'))
    return res
Hope this solution solve your problem.
Thanks.
I want to write the following program in Python. The main objective is keyboard input to enter elements into the matrix. Thanks in advance

#include <iostream>
#include <math.h>
#include <cstdlib>
using namespace std;

int main()
{
    double **a;
    int i, j, n;
    cout << "Enter size of the matrix\n";
    cin >> n;
    a = new double*[n];
    for (i = 0; i < n; i++)
        a[i] = new double[n+1];
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            cin >> a[i][j];
        cout << endl;
    }
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            cout << a[i][j] << "\t";
        cout << endl;
    }
    for (i = 0; i < n; i++)
        delete [] a[i];
    delete [] a;
}

Thank you, Chris, for your program. After running your program the following errors occur; can you explain them in detail, please?

Enter size of the matrix: 2
Traceback (most recent call last):
  File "two.py", line 16, in <module>
    stringified = "\n".join("\t".join(row) for row in a)
  File "two.py", line 16, in <genexpr>
    stringified = "\n".join("\t".join(row) for row in a)
TypeError: sequence item 0: expected string, float found

Thanks in Advance
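A straightforward Python translation of the C++ program might look like the sketch below (our version, not the code from the thread). It also shows the cause of the quoted traceback: str.join() accepts only strings, so each float must be passed through str() first.

```python
def format_matrix(a):
    # str.join() only accepts strings -- converting each float with str()
    # is the fix for the TypeError shown in the traceback above.
    return "\n".join("\t".join(str(x) for x in row) for row in a)

def read_matrix():
    # Keyboard input, one number per line, after asking for the size n.
    n = int(input("Enter size of the matrix\n"))
    return [[float(input()) for _ in range(n)] for _ in range(n)]

# Example with a fixed matrix; read_matrix() would prompt on stdin instead.
print(format_matrix([[1.0, 2.0], [3.0, 4.0]]))
```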
Full file at
Starting Out with Java: From Control Structures through Data Structures
Answers to Review Questions
Chapter 2

Multiple Choice and True/False
1. c
2. b
3. a
4. b and c
5. a, c, and d
6. a
7. c
8. b
9. a
10. d
11. b
12. a
13. a
14. c
15. a
16. True
17. True
18. False
19. True
20. False
21. False

Predict the Output
1. 0
   100
2. 8
   2
3. I am the incrediblecomputing machine
   and I will amaze you.
4. Be careful
   This might/n be a trick question.
5. 23
   1

Find the Error
- The comment symbols in the first line are reversed. They should be /* and */.
© 2007 Pearson Education
- The word class is missing in the second line. It should read public class MyProgram.
- The main header should not be terminated with a semicolon.
- The fifth line should have a left brace, not a right brace.
- The first four lines inside the main method are missing their semicolons.
- The comment in the first line inside the main method should begin with forward slashes (//), not backward slashes.
- The last line inside the main method, a call to println, uses a string literal, but the literal is enclosed in single quotes. It should be enclosed in double quotes, like this: "The value of c is".
- The last line inside the main method passes C to println, but it should pass c (lowercase).
- The class is missing its closing brace.
Algorithm Workbench
1. double temp, weight, age;
2. int months = 2, days, years = 3;
3. a) b = a + 2;
   b) a = b * 4;
   c) b = a / 3.14;
   d) a = b - 8;
   e) c = 'K';
   f) c = 66;
4. a) 12  b) 4  c) 4  d) 6  e) 1
5. a) 3.287E6  b) -9.7865E12  c) 7.65491E-3
6. System.out.print("Hearing in the distance\n\n\n");
   System.out.print("Two mandolins like creatures in the\n\n\n");
   System.out.print("dark\n\n\n");
   System.out.print("Creating the agony of ecstasy.\n\n\n");
   System.out.println(" - George Barker");
7. int speed, time, distance;
   speed = 20;
   time = 10;
   distance = speed * time;
   System.out.println(distance);
8. double force, area, pressure;
   force = 172.5;
   area = 27.5;
   pressure = area / force;
   System.out.println(pressure);
9. double income;
   // Create a Scanner object for keyboard input.
   Scanner keyboard = new Scanner(System.in);
   // Ask the user to enter his or her desired income
   System.out.print("Enter your desired annual income: ");
   income = keyboard.nextDouble();
10. String str;
    double income;
    str = JOptionPane.showInputDialog("Enter your desired " +
          "annual income.");
    income = Double.parseDouble(str);
11.
total = (float)number;
Short Answer
1. Multi-line style
2. Single line style
3. A self-documenting program is written in such a way that you get an understanding of what the program is doing just by reading its code.
4. Java is a case sensitive language, which means that it regards uppercase letters as being entirely different characters than their lowercase counterparts. This is important to know because some words in a Java program must be entirely in lowercase.
5. The print and println methods are members of the out object. The out object is a member of the System class. The System class is part of the Java API.
6. A variable declaration tells the compiler the variable’s name and the type of data it will hold.
7. You should always choose names for your variables that give an indication of what they are used for. The rather nondescript name, x, gives no clue as to what the variable’s purpose is.
8. It is important to select a data type that is appropriate for the type of data that your program will work with. Among the things to consider are the largest and smallest possible values that might be stored in the variable, and whether the values will be whole numbers or fractional numbers.
9. In both cases you are storing a value in a variable. An assignment statement can appear anywhere in a program. An initialization, however, is part of a variable declaration.
10. Comments that start with // are single-line style comments. Everything appearing after the // characters, to the end of the line, is considered a comment. Comments that start with /* are multi-line style comments. Everything between these characters and the next set of */ characters is considered a comment. The comment can span multiple lines.
11. Programming style refers to the way a programmer uses spaces, indentations, blank lines, and punctuation characters to visually arrange a program's source code. An inconsistent programming style can create confusion for a person reading the code.
12. One reason is that the name PI is more meaningful to a human reader than the number 3.14. Another reason is that any time the value that the constant represents needs to be changed, we merely have to change the constant's initialization value. We do not have to search through the program for each statement that uses the value.
13. javadoc SalesAverage.java
14. The result will be an int.
#include <db.h>

int
db_copy(DB_ENV *dbenv, const char *dbfile,
    const char *target, const char *password);
The db_copy() routine copies the named database file to the target directory. An optional password can be specified for encrypted database files. This routine can be used on operating systems that do not support atomic file system reads to create a hot backup of a database file. If the specified database file is for a QUEUE database with extents, all extent files for that database will be copied as well.
dbfile
The path name to the file to be backed up. The file name is resolved using the usual BDB library name resolution rules.
target
The directory to which you want the database copied. This is specified relative to the current directory of the executing process or as an absolute path.
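The "atomic file system reads" caveat is about copying a live file in whole pages rather than as an arbitrary byte stream. A toy, self-contained illustration of page-wise copying — the function name and logic are invented for this sketch and are NOT libdb code; the real routine additionally verifies pages:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy sketch only -- NOT libdb code. Copies a file in fixed-size
   "pages", the page-at-a-time loop that a hot backup relies on. */
static int copy_in_pages(const char *src, const char *dst, size_t pagesize)
{
    FILE *in = fopen(src, "rb");
    FILE *out = in ? fopen(dst, "wb") : NULL;
    char *page = malloc(pagesize);
    int rc = (in && out && page) ? 0 : -1;
    size_t n;

    while (rc == 0 && (n = fread(page, 1, pagesize, in)) > 0)
        if (fwrite(page, 1, n, out) != n)
            rc = -1;               /* short write: report failure */

    free(page);
    if (in) fclose(in);
    if (out) fclose(out);
    return rc;
}
```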
HOUSTON (ICIS)--Here is Monday's midday markets summary:
CRUDE: Apr WTI: $104.64/bbl, up $2.05 Apr Brent: $111.32/bbl, up $2.25
NYMEX WTI crude futures surged along with various commodities such as gold and grains as investors engaged in a flight-to-safety in response to the escalating tensions in Ukraine. WTI topped out at $105.22/bbl before retreating.
RBOB: Apr $3.0186/gal, higher by 4.12 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures moved higher on stronger crude oil futures. The threat of war between Russia and Ukraine sent oil markets much higher in early trading, despite weak economic reports from the US and China.
NATURAL GAS: Apr $4.564/MMBtu, down 4.5 cents/MMBtu
Natural gas futures on the NYMEX were trending downward on the expectations that the gas production for the spring season should begin resuming. However, weather forecasts are unchanged, with anticipations of below-normal temperatures across much of the northeast still in the forecast.
ETHANE: narrower at 32.63 cents/gal
Ethane spot prices moved to a tighter range in early trading, as buying activity remained thin.
AROMATICS: benzene down at $4.80-4.90/gal
Prompt benzene spot prices were discussed within a lower range early in the day. There were no fresh trades heard, but the morning range was down from $4.90-4.95/gal FOB (free on board) the previous session.
OLEFINS: March ethylene flat at 52.5 cents/lb, March PGP bid flat at 67 cents/lb
March ethylene was flat at 52.5 cents/lb on Monday, based on the most recent reported trade. March polymer-grade propylene (PGP) was bid flat at 67.00 cents/lb on Monday compared with the price at the close of the previous week, against offers at 69.75 cents/lb.
For more pricing intelligence please visit | https://www.icis.com/resources/news/2014/03/03/9758875/noon-snapshot-americas-markets-summary/ | CC-MAIN-2017-13 | refinedweb | 315 | 69.99 |
System freezes when using SDK 3.0 and an extended (open) menu of Commands
I am developing a midlet for a touch screen phone and using more Commands than there are softkeys, and so they form a menu.
When I use the Java(TM) Platform Micro Edition SDK 3.0 for the emulator (as is recommended), and in particular the Default FX Phone 1 (which is a touch screen phone emulator), the system keeps crashing. If I leave the menu extended, then after a small period of time, say a few minutes (it varies), the whole thing freezes and I have to close down and then restart my computer and reload NetBeans. I haven't tried every phone in the list but at least four of them give the same results.
When I use Sun Java(TM) WirelessToolkit 2.5.2_01 for CLDC for the emulator and in particular the DefaultColorPhone, the problem doesn't occur. This is not an emulator for a touch phone and so I cannot really test without going on to my target device.
I have included a bare bones version of an app that will illustrate what happens.
import com.sun.lwuit.Display;
import com.sun.lwuit.Command;
import com.sun.lwuit.Form;
import com.sun.lwuit.events.ActionEvent;
import com.sun.lwuit.events.ActionListener;
import javax.microedition.midlet.MIDlet;
public class Cash_Balance extends MIDlet implements ActionListener {
public void startApp() {
Display.init(this);
Form TitleForm = new Form("");
TitleForm.addCommand(new Command("EXIT",0));
TitleForm.addCommand(new Command("ONE",1));
TitleForm.addCommand(new Command("TWO",2));
TitleForm.setCommandListener(this);
TitleForm.show();
}
public void actionPerformed(ActionEvent ae) {
Command cmd = ae.getCommand();
switch (cmd.getId())
{ case 0: notifyDestroyed();
// case 1:
}
}
public void pauseApp() { }
public void destroyApp(boolean unconditional) { }
}
Is this a problem with SDK 3.0 or is there something else that I can do to get around it. | https://www.java.net/node/703584 | CC-MAIN-2015-32 | refinedweb | 312 | 51.95 |
There is no class or function called vil_property.
The image class vil_image_resource has the method :
bool get_property(char const *tag, void *property_value = 0) const;
which allows format extensions to be added without cluttering the interface to vil_image_resource. The idea is that properties are identified by a "tag"; the set of tags is a namespace in the general sense of the word. We only have one namespace, so try not to clutter it. All property tags described in this file should begin with "vil_property_", and that chunk of the namespace is reserved.
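The calling convention can be sketched with a toy resource type — names invented, not the real vil_image_resource: the caller passes a tag plus an optional out-pointer, and the boolean result says whether the tag is recognised.

```cpp
#include <cassert>
#include <cstring>

// Toy resource type -- names invented, NOT the real vil_image_resource.
struct demo_resource {
  float pixel_size[2];    // metres per pixel in i and j
  unsigned quant_depth;   // bits of information per pixel component

  bool get_property(const char* tag, void* property_value = 0) const {
    if (std::strcmp(tag, "demo_pixel_size") == 0) {
      if (property_value) {
        float* out = static_cast<float*>(property_value);
        out[0] = pixel_size[0];
        out[1] = pixel_size[1];
      }
      return true;          // tag recognised (writing the value is optional)
    }
    if (std::strcmp(tag, "demo_quantisation_depth") == 0) {
      if (property_value)
        *static_cast<unsigned*>(property_value) = quant_depth;
      return true;
    }
    return false;           // unknown tag: property unsupported
  }
};
```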
Definition in file vil_property.h.
Original image origin in pixels.
Measured right from left edge, and down from top - i.e. in i and j directions Type is float[2].
Definition at line 58 of file vil_property.h.
Pixel width in metres.
Strictly this is the pixel spacing, and not some function of the sensor's spatial sampling kernel. Type is float[2].
Definition at line 53 of file vil_property.h.
true if image resource is a pyramid image.
Definition at line 82 of file vil_property.h.
The quantisation depth of pixel components.
This is the maximum information per pixel component. (Bear in mind that a particular image may not even be using all of it.) E.g. an image with true vil_rgb<vxl_byte>-style pixels would return 8. If a file image has this property implemented, and purports to supply an unsigned type, you can assume that it will give you pixels valued between 0 and 2^quantisation_depth - 1. Type is unsigned int.
Definition at line 67 of file vil_property.h.
For unblocked images, the following properties are not implemented.
It is assumed that all blocks are the same size and padded with zeros if necessary. Thus, n_block_i = (ni() + size_block_i - 1)/size_block_i and n_block_j = (nj() + size_block_j - 1)/size_block_j. Both properties must be implemented for blocked images. Type is unsigned int. Block size in columns.
Definition at line 76 of file vil_property.h.
Block size in rows.
Definition at line 79 of file vil_property.h. | http://public.kitware.com/vxl/doc/release/core/vil/html/vil__property_8h.html | crawl-003 | refinedweb | 358 | 69.79 |
linalg 0.3.2
A Simple Linear Algebra Package enabling easy Matrix calculations.
linalg #
A Simple Linear Algebra Package.
This package is intended to be a portable, easy-to-use linear algebra package. The library does not have any dependencies outside of Dart itself, thus making it portable and easy to integrate. Internally all numbers are stored as Dart doubles (64-bit, as specified in the IEEE 754 standard).
Our goal is to keep the code readable, documented and maintainable.
Short Example #
Just a quick example on how to do matrix multiplication.
final Matrix a = Matrix([[1, 2], [3, 4]]);
final Vector b = Vector.column([2, 3]);
final Matrix e = Matrix([[8], [18]]);
Matrix result = a * b;
print(result);
print(result == e);
This prints
[[8.0], [18.0]] true
Complete Example #
A more extensive example with various matrix operations. See the Matrix API and Vector API for the full details.
import 'package:linalg/linalg.dart';

void example() {
  // *****************************
  // Let's solve a linear equation.
  // *****************************
  //
  // A * B = E
  //
  // We have B and E, we have to find A.
  //
  // A * B * B' = E * B'
  //
  // Where B' = inverse of B.
  //
  // A * I = E * B'
  //
  // Where I = identity matrix
  //
  // A = E * B'

  final Matrix B = Matrix([[2, 0], [1, 2]]);
  final Matrix E = Matrix([[4, 4], [10, 8]]);
  Matrix Bi = B.inverse();
  Matrix A_calc = E * Bi;
  final Matrix A = Matrix([[1, 2], [3, 4]]);
  print("The calculated A_calc = $A_calc, the expected A is $A, they are ${A_calc == A ? '' : 'not'} the same.");
  // Expected: The calculated A_calc = [[1.0, 2.0], [3.0, 4.0]], the expected A is [[1.0, 2.0], [3.0, 4.0]], they are the same.

  // *****************************
  // Let's do some more matrix math
  // *****************************

  // Next let's multiply Matrix A by 3.
  Matrix Am = A * 3.0;
  print(Am); // Expecting: [[3.0, 6.0], [9.0, 12.0]]

  // Now add matrix B to A
  Matrix AmPlusB = Am + B;
  print(AmPlusB); // Expecting: [[5.0, 6.0], [10.0, 14.0]]

  // What is the determinant of the A matrix?
  print("The determinant of A = ${A.det()}");
  // Expecting: The determinant of A = -2.0
}
Installation #
Add linalg as a dependency to your Flutter project:

dependencies:
  linalg: ^0.3.2

and at the top of your Dart file add:
import 'package:linalg/linalg.dart';
Attribution #
Original code came from:
We ended up rewriting considerable parts of the code and adding tests and documentation. | https://pub.dev/packages/linalg | CC-MAIN-2020-40 | refinedweb | 393 | 60.11 |
This tutorial shows various styles of presenting data when using SCaVis. As usual, we make a small Jython code snippets to illustrate various Canvas styles.
The base SCaVis code which makes data for the examples below is the same. It looks as:
from java.util import Random
from jhplot import *

c1 = HPlot("Canvas")
c1.visible()
c1.setRange(0, 100, 0, 100)
h1 = H1D("Histogram", 20, 50.0, 100.0)
f1 = F1D("cos(x)*x", 1, 50)
p1 = P1D("X-Y data")
rand = Random(10)
for i in range(500):
    h1.fill(85 + 10*rand.nextGaussian())
    if i < 200:
        p1.add(56 + 7*rand.nextGaussian(), 70 + 7*rand.nextGaussian())
c1.draw(f1)
c1.draw(h1)
c1.draw(p1)
Below we show how to apply various graphic styles when presenting these 3 objects: a histogram, a function and data points.
Article styles
Here are “scientific” styles: plots are all in black and white, nothing fancy.
Presentation styles
Here are “presentation” styles: plots are colourful and look attractive.
Here is a double plot with linear and log scale. We use jhplot.HPlot after rescaling the canvas sizes. Errors are shown as a shaded band.
Here is another double plot with linear and log scale. We use jhplot.HPlotJa, which is the most flexible. Errors are shown as a shaded band.
I'm developing Xamarin.Mac(Cocoa) using C#. I want to develop application waiting seconds sometimes, so I developed waiting function using Task.Delay.
But repeating Task.Delay causes a large delay.
This is a test code for Cocoa.
using System;
using System.Threading.Tasks;
using System.Threading;
using AppKit;
using Foundation;

namespace TaskDelayTest
{
    public partial class ViewController : NSViewController
    {
        public ViewController(IntPtr handle) : base(handle) { }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();
        }

        public override NSObject RepresentedObject
        {
            get { return base.RepresentedObject; }
            set { base.RepresentedObject = value; }
        }

        async partial void Execute(AppKit.NSButton sender)
        {
            while (true)
            {
                int millisecond = 1000;
                Console.WriteLine($"WaitTime : {millisecond}");
                var startDate = DateTime.Now;
                await Task.Delay(millisecond).ConfigureAwait(false);
                var endDate = DateTime.Now;
                var diff = (endDate - startDate);
                Console.WriteLine($"WaitEnd : {endDate.ToString("yyyy/MM/dd HH:mm:ss.fff")}");
                Console.WriteLine($"DiffTime : {diff.TotalMilliseconds}");
            }
        }
    }
}
Console result:
.
.
.
(Repeating a lot of times)
.
.
.
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:25.784 DiffTime : 1000.192
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:26.784 DiffTime : 1000.193
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:27.785 DiffTime : 1000.232
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:28.785 DiffTime : 1000.462
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:39.785 DiffTime : 10999.643
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:42.259 DiffTime : 2473.116
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:43.582 DiffTime : 1322.505
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:44.582 DiffTime : 1000.173
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:45.582 DiffTime : 1000.147
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:48.973 DiffTime : 3389.941
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:49.973 DiffTime : 1000.186
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:50.973 DiffTime : 1000.186
WaitTime : 1000 WaitEnd : 2018/02/01 01:00:52.083 DiffTime : 1108.889
.
.
.
Sometimes very very delayed. This did not happen on Windows. It only occurred with Xamarin.Mac.
And I have to implement this as an asynchronous function because it's a Cocoa application. If I implement it as a non-async function, it locks the UI.
Answers
So waking up significantly after your scheduled wake up time is not unexpected when writing code of that nature.
Task.Delay(millisecond) is saying "wait at least X ms", not "wake me up at this time".
You may want to read this documentation:
But to answer the root question, you need to integrate more closely with the underlying OS. NSTimer doesn't have real-time guarantees, and can shift your wake-up times when your device is on batteries, but it will likely get you roughly what you want:
Since Task.Delay gets scheduled on the thread pool, it has higher jitter if you need that sort of precision.
Hi, I encountered the same problem: when I repeat Task.Delay(100) or just Thread.Sleep(100) in a just-empty Xamarin.Mac app, it will delay even up to 10 s. Have you solved it yet?
I also tried @ChrisHamons's solution of using NSTimer instead, but got the same:
delay time: 98
delay time: 100
delay time: 99
delay time: 99
delay time: 99
delay time: 99
delay time: 99
delay time: 99
delay time: 99
delay time: 10098
delay time: 6321
delay time: 4078
delay time: 1284
delay time: 2869
delay time: 1293
delay time: 1078
It seems that if the Xamarin.Mac app's screen is off for a while, Task.Delay() or NSTimer() will not work correctly, but if I click the app to bring it to the top of the screen, the timing becomes correct again. How weird it is. Will a Cocoa app also enter a sleeping-like state?
Sorry for reposting so often, but I think I already figured out what the issue is. macOS will enable 'App Nap' automatically, so if the app is not in the topmost foreground, it will eventually take a nap and the timer will become unpredictable; that is the reason why Task.Delay() and other timing functions do not work. There is an API that can control this behaviour.
The accepted answer works like a charm.
Not sure whether yours is due to the same issue; hope this will be helpful for anyone who suffers from this problem.
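For reference, the App Nap control API being described is -[NSProcessInfo beginActivityWithOptions:reason:] (macOS 10.9+). A rough, untested Xamarin.Mac sketch — the option choice and reason string below are illustrative, not prescriptive:

```csharp
// Untested sketch: hold an activity token while timing-sensitive work runs,
// so App Nap does not throttle the process.
NSObject activity = NSProcessInfo.ProcessInfo.BeginActivity(
    NSActivityOptions.UserInitiated | NSActivityOptions.LatencyCritical,
    "timing loop must not be napped");
try
{
    // ... the Task.Delay / NSTimer timing-sensitive work ...
}
finally
{
    NSProcessInfo.ProcessInfo.EndActivity(activity);
}
```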
Dimitry likes it when the bad code he finds can be sung aloud. This particular line can be sung to the tune of “Rule Britannia”.
for (var html in data["html"]) $(data["html"][html][0]).html(data["html"][html][1]);
Blake was surprised by this line. I guess you could say… he was shell-shocked .
if ($error = shell_exec("$mysql -h'$db_host' -u'$grantor_user' -p'$grantor_pass' mysql < $tmppath 2>&1 1>/dev/null")) { #handle the error… }
Jeremy’s ex-co-worker is the sort of person who would wear a belt with suspenders.
initialTab = 0; if (isNaN(initialTab)) { initialTab = 0; }
The ternary operator is a convenient shorthand for conditionals. When people discover it for the first time, some of them get a little carried away with it. Bernard had to give someone a little lesson on “order of operations”.
public Money getFinanceCredit() { return (isEdited() ? ((!usePercentage()) ? editedFixedAmount : ((percentageOfBasis == null) ? NullMoney.NULL_AMOUNT : basis.multiply(percentageOfBasis))) : NullMoney.NULL_AMOUNT); } | https://thedailywtf.com/articles/Ternary-Over-a-New-Leaf | CC-MAIN-2017-51 | refinedweb | 154 | 51.14 |
Re: mysql in c++ initialize error occurs a simple program is executed in redhat9.0 , using gcc 3.2.2 compiler version ...
From:
"=?iso-8859-1?q?Erik_Wikstr=F6m?=" <eriwik@student.chalmers.se>
Newsgroups:
comp.lang.c++
Date:
14 Feb 2007 23:55:34 -0800
Message-ID:
<1171526134.769070.190370@v33g2000cwv.googlegroups.com>
On Feb 15, 8:25 am, "yogesh" <yogeshkum...@gmail.com> wrote:
mysql in c++ initialize error occurs a simple program is executed in
redhat9.0 , using gcc 3.2.2 compiler version ...
#include <stdio.h>
#include <mysql.h>
#include <string.h>
int main()
{
MYSQL* mysql;
MYSQL_RES* res;
MYSQL_ROW row;
char query[80];
mysql = mysql_init( NULL );
if( mysql != NULL ) {
mysql_real_connect( mysql, "localhost", "username",
"password","dbname", 0, "/tmp/mysql.sock", 0 );
sprintf( query, "SELECT * FROM tablename" );
mysql_real_query( mysql, query, (unsigned
int)strlen(query) );
res = mysql_use_result( mysql );
while( row = mysql_fetch_row( res ) ) {
printf( "%s %s\n", row[0], row[1] );
}
mysql_free_result( res );
mysql_close( mysql );
}
else {
printf( "mysql_init returned NULL\n" );
}
return 0;}
"n1.cpp" 34L, 656C written
if i run the code iam getting the error as follows
[root@localhost yog]# gcc n1.cpp
/tmp/cccUdCdL.o(.text+0x16): In function `main':
: undefined reference to `mysql_init'
/tmp/cccUdCdL.o(.text+0x4b): In function `main':
: undefined reference to `mysql_real_connect'
/tmp/cccUdCdL.o(.text+0x81): In function `main':
: undefined reference to `mysql_real_query'
/tmp/cccUdCdL.o(.text+0x8f): In function `main':
: undefined reference to `mysql_use_result'
/tmp/cccUdCdL.o(.text+0xa0): In function `main':
: undefined reference to `mysql_fetch_row'
/tmp/cccUdCdL.o(.text+0xd8): In function `main':
: undefined reference to `mysql_free_result'
/tmp/cccUdCdL.o(.text+0xe6): In function `main':
: undefined reference to `mysql_close'
/tmp/cccUdCdL.o(.eh_frame+0x11): undefined reference to
`__gxx_personality_v0'
collect2: ld returned 1 exit status
This is a bit off topic here, you should ask these kinds of questions
in a group for either gcc/g++ or for mysql. Your problem however is
that you have forgotten to add the right libraries to the path of the
linker.
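Concretely, two things are missing from `gcc n1.cpp`: the MySQL client library (the undefined mysql_* symbols) and the C++ runtime (the undefined `__gxx_personality_v0`, which appears when C++ code is linked with gcc instead of g++). Assuming the MySQL client development package is installed, something like this should link — exact paths vary by distribution:

```shell
# Let mysql_config (shipped with the MySQL development package) print the
# right -I/-L/-l flags for the local installation, and link with g++:
g++ n1.cpp $(mysql_config --cflags --libs) -o n1

# Or spell the flags out by hand (library locations are illustrative):
g++ n1.cpp -I/usr/include/mysql -L/usr/lib/mysql -lmysqlclient -o n1
```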
--
Erik Wikström
Problem
Have you wondered how Database Copy Wizard works behind the scenes? Do you have a requirement to create a copy of your database (say copy of your production database for development or testing) programmatically? In this tip, I am going to show you how you can use SMO (SQL Server Management Objects) classes to transfer database objects and data to another server or database.
Solution
- Before you start writing your code using SMO, you need to take reference of several assemblies which contains different namespaces to work with SMO. For more details on what these assemblies are and how to reference them in your code, refer to my tip Getting started with SQL Server Management Objects (SMO).
- Location of assemblies in SQL Server 2005 is C:\Program Files\Microsoft SQL Server\90\SDK\Assemblies folder.
- Location of assemblies in SQL Server 2008 is C:\Program Files\Microsoft SQL Server\100\SDK\Assemblies folder.
- In SQL Server 2005, the Transfer class is available under Microsoft.SqlServer.Management.Smo namespace and in Microsoft.SqlServer.Smo (microsoft.sqlserver.smo.dll) assembly.
- In SQL Server 2008, the Transfer class is available under Microsoft.SqlServer.Management.Smo namespace and in Microsoft.SqlServer.SmoExtended (microsoft.sqlserver.smoextended.dll) assembly.
- To transfer data, the Transfer class creates an SSIS package dynamically; you can specify the location of this SSIS package using the TemporaryPackageDirectory property of the Transfer class instance.
- If you try to connect to SQL Server 2008 from SMO 2005, you will get the exception "SQL Server <10.0> version is not supported".
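Pulling those pieces together, a minimal sketch might look like the following. This is untested illustrative code, not the tip's downloadable sample: server and database names are placeholders, and it assumes references to the SMO assemblies listed above.

```csharp
// Untested sketch of the Transfer class described in this tip.
Server server = new Server("SourceServer");
Database sourceDb = server.Databases["SourceDatabase"];

Transfer transfer = new Transfer(sourceDb);
transfer.CopyAllObjects = true;                   // schema objects
transfer.CopyData = true;                         // data, via the generated SSIS package
transfer.Options.WithDependencies = true;         // keep object dependencies intact
transfer.DestinationServer = "DestinationServer";
transfer.DestinationDatabase = "DestinationDatabase";
transfer.DestinationLoginSecure = true;           // Windows authentication
transfer.TemporaryPackageDirectory = @"C:\Temp";  // where the SSIS package is written

transfer.TransferData();                          // create the objects and copy the data
```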
Next Steps
- Download the code
- Review these other SMO tips
- Review Transfer class on MSDN.
- Review all of my previous tips
Last Update: 12/29/2009
The status line normally shows its permanent message.
#include <FXStatusLine.h>
A semi-permanent message can override this permanent message, for example to indicate the application is busy or in a particular operating mode. The status line obtains the semi-permanent message by sending its target (if any) a SEL_UPDATE message. An ID_SETSTRINGVALUE message can be used to change the status message. When the user moves the cursor over a widget which has status-line help, the status line can flash a very temporary message with help about the widget. For example, the status line may flash the "Quit Application" message when the user moves the cursor over the Quit button. The status line obtains the help message from the control by sending it an ID_QUERY_HELP message with type SEL_UPDATE. Unless the value is overridden, the status line will display the normal text, i.e. the string set via setNormalText(). If the message contains a newline (\n), then the part before the newline will be displayed in the highlight color, while the part after the newline is shown using the normal text color.
#include <playerclient.h>
Inherits ClientProxy.
List of all members.
SonarProxy [inline]
Constructor. Leave the access field empty to start unconnected.
Enable/disable the sonars. Set state to 1 to enable, 0 to disable. Note that when sonars are disabled the client will still receive sonar data, but the ranges will always be the last value read from the sonars before they were disabled.
Returns 0 on success, -1 if there is a problem.
Request the sonar geometry.
Range access operator. This operator provides an alternate way of accessing the range data. For example, given a SonarProxy named sp, the following expressions are equivalent: sp.ranges[0] and sp[0].
[virtual] All proxies must provide this method. It is used internally to parse new data when it is received.
Reimplemented from ClientProxy.
Print out current sonar range data.
The number of sonar readings received.
The latest sonar scan data. Range is measured in m.
Number of valid sonar poses
Sonar poses (m,m,radians) | http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classSonarProxy.php | CC-MAIN-2013-20 | refinedweb | 167 | 72.22 |
#include <Teuchos_BLAS.hpp>
Definition at line 295 of file Teuchos_BLAS.hpp.
Definition at line 407 of file Teuchos_BLAS.hpp.
Return ABS(x) if y > 0 or y is +0, else -ABS(x) (if y is -0 or < 0).
Note that SIGN respects IEEE 754 floating-point signed zero. This is a hopefully correct implementation of the Fortran type-generic SIGN intrinsic. ROTG for complex arithmetic doesn't require this function. C99 provides a copysign() math library function, but we are not able to rely on the existence of C99 functions here.
We provide this method on purpose only for the real-arithmetic specialization of GivensRotator. Complex numbers don't have a sign; they have an angle.
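Since the text mentions C99's copysign(), here is how the described behaviour reads when copysign is available — a stand-alone sketch, not the Teuchos implementation:

```cpp
#include <cassert>
#include <cmath>

// Stand-alone sketch, NOT the Teuchos implementation: with C99/C++11
// copysign available, the SIGN behaviour described above -- including the
// IEEE 754 signed-zero cases -- is a one-liner.
double sign_transfer(double x, double y) {
  return std::copysign(std::fabs(x), y);  // |x| carrying the sign of y
}
```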
Definition at line 316 of file Teuchos_BLAS.hpp. | http://trilinos.sandia.gov/packages/docs/r10.10/packages/teuchos/browser/doc/html/classTeuchos_1_1details_1_1GivensRotator_3_01ScalarType_00_01false_01_4.html | CC-MAIN-2014-15 | refinedweb | 124 | 60.01 |
Martin Wegner wrote:
<snip/>
>>A classic place to start is to see if the log4j config file can be moved into
>>an LDAP server. Imagine being able to edit the log4j config file using an
>>LDAP client instead of using a text editor. And then it can be shared
>>amongst multiple apps (if need be).
>>
>>
>>
Yeah that's one of the massive benefits of using a distributed database
like LDAP. Plus the hierarchical aspect works well with log4j's Logger
hierarchy as well (or was it Categories don't remember). Anyhow there
is a good fit here. Also Enrique in an email below made mention of
using the Java Preferences API. The Preferences API is also an ideal
candidate for being backed by a directory. Funny thing is I remember
reading somewhere in the Preferences API documentation about not having
to use heavy weight facilities like LDAP for storing this kind of data;
hence the reason for the Preferences API. I disagree with those remarks
though - they never knew about Eve as an embeddable, lighter-weight directory.
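For context, the Preferences API mentioned here is java.util.prefs: a small
hierarchical key/value store whose backing implementation is pluggable --
exactly the slot a directory server could fill. A minimal self-contained
sketch (node path and key are invented):

```java
import java.util.prefs.Preferences;

// Minimal sketch of the hierarchical Preferences API. The default backend
// is platform-specific; a directory-backed implementation would expose the
// same tree-of-nodes shape.
public class PrefsDemo {
    public static String storedLevel() {
        Preferences node = Preferences.userRoot().node("com/example/demo");
        node.put("rootLevel", "DEBUG");        // write into the hierarchy
        return node.get("rootLevel", "INFO");  // read back, with a default
    }

    public static void main(String[] args) {
        System.out.println(storedLevel());     // prints DEBUG
    }
}
```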
Anyhow this is a most excellent topic that only scratches the surface
for configuration. Really any piece of information that is...
o relatively static,
o needs to be read optimized,
o highly available for multiple systems to access
o and might be hierarchical in nature
is an ideal candidate for being backed by LDAP.
Another very unique fit for LDAP are resource bundles for distributed
applications. LDAP filters can use language tags to select attributes
specific to a partiticular language. For example I can have a label on
a webpage where the entry for the label has the following commonNames;
cn;lang-en: hello
cn;lang-fr: bonjour
cn;lang-de: hallo
cn;lang-es: hola
Just asking for cn may default to the language code associated with the
current locale or the locale of the user making the request. I could
explicitely ask for a specific version of the attribute as well such as
cn;lang-fr. Plus bundles can go the distance with internationalization
since LDAP strings are Unicode strings. There's definately lots of
synergy here but LDAP has been the child left behind for too long: most
people have just given up on it unfortunately.
>>The slippery slope of this suggestion is that no one has ever codified a
>>best practices in this area (server config in an LDAP server).
>>
True no one has done this for the gamut of J2EE application
configurations or component configurations out there. This is one of
the those things this project will have to address and address well.
IMO having the expertise of folks like Henri and Phil from the commons
will give us the vision we need to do that right. People like yourself
who see this vision are also critical.
>>So my
>>guess is that this group will stir up a number of hornets with this idea.
>>But that is a good sign.
>>
Oh I think so. Look M$ has leveraged the directory in .Net pretty well
behind the scenes to help orchestrate component-oriented systems. The
open source and Java world has lagged behind in this area for far too
long. Almost to a crippling degree. It's about time we leveraged this
core technology to help manage and orchestrate our distributed
components and systems. Otherwise with the geometric explosion of
components, users, systems, nodes and much more we're not going to fare
well in comparison to other technologies already leveraging the directory.
>>If we can get a set of best practices defined
>>(LDAP tree structure, DN construction, live update handling, etc.) then I
>>think the Eve server will have provided a benefit above and beyond the
>>LDAP server.
>>
>>
Great thoughts Marty. DIT structure and using LDAP properly is
something that is lacking en mass. It's pretty hard to make even
seasoned DBAs see how to contrain and manage a directory information
tree (DIT). The key lies in using Schema structures correctly. One of
our goals is to make LDAP in general easier to understand and hence
DIT's easier to design.
Take for example things like NameForms which define those attributes
that may be used for the RDN of an entry (structural objectclass) hence
affecting the possible DNs within your DIT namespace. Many people just
don't know about these things. I guess we are going to have to
predefine schemas for the configuration stuff to make sure they don't
have to grok these concepts to store simple configurations in their
directory. In the end though I don't want to dumb it down to the point
where things are not intuitive. Directory users should not lose power
for ease of use.
So yeah if we can overcome these hurdles gracefully we can really make a
big impact to the OS community and especially the Java community. These
and other reasons (i.e. lack of LDAP triggers) are the driving forces
behind the genesis of the Directory project.
Alex | http://mail-archives.us.apache.org/mod_mbox/directory-dev/200411.mbox/%3C41A38A4D.1010305@bellsouth.net%3E | CC-MAIN-2020-29 | refinedweb | 832 | 62.78 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Oct 21, 2014, at 7:17 AM, Bernd Schmidt <bernds@codesourcery.com> wrote:
> Some tests use stdio functions which are unavailable with the cut-down newlib I'm using for ptx testing. I'm somewhat uncertain what to do with these; they are by no means the only unavailable library functions the testsuite tries to use (signal is another example). Here's a patch which deals with parts of the problem, but I wouldn't mind leaving this one out if it doesn't seem worthwhile.

So, it is nice if an entire newlib port can be done; then you don't have to worry about subsets of it.

I think the patch is Ok. I would change:

+#ifndef __nvptx__
 fprintf (stderr, "Test failed: %s\n", msg);
+#endif

to just commenting out the line.

If you want to try your hand at removing stdio in some fashion, you can try; I am skeptical that there is a meaningful way to do that, and it seems larger and harder than the patch you have. The usual fashion I would recommend would be a stub library for testing that simply disappears such calls as uninteresting:

int fprintf (...) { return 0; }

would go a long way. For test cases where someone cares about what fprintf actually does, those can be marked as requiring stdio, and likely those would be a far smaller set to so mark.
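The stub-library idea can be made concrete. The function is renamed stub_fprintf here only so the sketch stays self-contained next to the host's real stdio; in an actual stub library it would be named fprintf and match the newlib prototype:

```c
#include <assert.h>

/* Sketch of the stub-library idea: a do-nothing fprintf that lets
   stdio-using tests link on a target without stdio. */
typedef void STUB_FILE;   /* stand-in for FILE on a target without stdio */

int stub_fprintf(STUB_FILE *stream, const char *format, ...)
{
    (void)stream;   /* swallow the call as uninteresting... */
    (void)format;
    return 0;       /* ...and report zero characters written */
}
```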
Import StyleBooks
This is the second section of your StyleBook and lets you declare which other StyleBook you want to refer to from your current StyleBook. This enables you to import and reuse other StyleBooks instead of rebuilding the same configuration in your own StyleBook. This is a mandatory section.
You must declare the namespace and version number of the StyleBook(s) that you want to refer to in your current StyleBook. Every StyleBook must refer to the netscaler.nitro.config namespace if it uses any of the NITRO configuration objects directly. This namespace contains all the Citrix ADC NITRO types, such as lbvserver service or monitor. StyleBooks for Citrix ADC versions 10.5 and later are supported, which means that you can use your StyleBook to create and run configurations on any Citrix ADC instance running release 10.5 or later.
The prefix attribute used in the import-stylebooks section is a shorthand to refer to the combination of namespace and version. For example, the “ns” prefix can be used to refer to the namespace netscaler.nitro.config with version 10.5. In the later sections of your StyleBook, instead of using the namespace and version each time you want to refer to a StyleBook with this namespace and version, you can simply use the prefix string chosen together with the name of the StyleBook to uniquely identify it.
Example:
import-stylebooks:
  -
    namespace: netscaler.nitro.config
    version: "10.5"
    prefix: ns
  -
    namespace: com.acme.stylebooks
    version: "0.1"
    prefix: stlb
In the above example, the first prefix defined is called ns and refers to the namespace netscaler.nitro.config and version 10.5. The second prefix that is defined is called stlb, and refers to the namespace com.acme.stylebooks and version 0.1.
After you define a prefix, everytime you want to refer to a type or a StyleBook that belongs to a certain namespace and version, you can use the notation <namespace-shorthand>::<type-name>. For example, ns::lbvserver refers to the type lbvserver that is defined in the namespace netscaler.nitro.config, version 10.5.
Similarly, if you want to refer to a StyleBook with version “0.1” in the com.acme.stylebooks namespace , you can use the notation stlb::<stylebook-name>.
Note
By convention, the prefix “ns” is used to refer to the NITRO namespace of Citrix ADC. | https://docs.citrix.com/en-us/citrix-application-delivery-management-service/stylebooks/stylebooks-grammar/import-stylebooks-section.html | CC-MAIN-2019-04 | refinedweb | 393 | 58.89 |
I am writing a program that counts the number of words in a text file.
How would I go about useing a command line argument as the text file name that will be opened? Right now i am opening a file in the same directory as the program.(test.txt file)
for example: wordcount somefile.txt
would open "somefile" and count the words in it.
Code:#include <iostream> #include <fstream> #include <cstdlib> #include <cctype> using namespace std; int main (int argc, char * argv[] ) { int wordcount = 0; int letters = 0; double average = 0; char ch; ifstream infile; infile.open("test.txt"); if (infile.fail( )) { cout << "Input file opening failed.\n"; system("pause"); exit(1); } } | http://cboard.cprogramming.com/cplusplus-programming/83485-more-unix-woes.html | CC-MAIN-2014-52 | refinedweb | 113 | 76.93 |
In my last posting about code contracts I introduced you how to force code contracts to classes through interfaces. In this posting I will go step further and I will show you how code contracts work in the case of inherited classes.
As a first thing let’s take a look at my interface and code contracts.
[ContractClass(typeof(ProductContracts))]
public interface IProduct
{
int Id { get; set; }
string Name { get; set; }
decimal Weight { get; set; }
decimal Price { get; set; }
}
[ContractClassFor(typeof(IProduct))]
internal sealed class ProductContracts : IProduct
private ProductContracts() { }
int IProduct.Id
{
get
{
return default(int);
}
set
Contract.Requires(value > 0);
}
string IProduct.Name
return default(string);
Contract.Requires(!string.IsNullOrWhiteSpace(value));
Contract.Requires(value.Length <= 25);
decimal IProduct.Weight
return default(decimal);
Contract.Requires(value > 3);
Contract.Requires(value < 100);
decimal IProduct.Price
And here is the product class that inherits IProduct interface.
public class Product : IProduct
public int Id { get; set; }
public string Name { get; set; }
public virtual decimal Weight { get; set; }
public decimal Price { get; set; }
if we run this code and violate the code contract set to Id we will get ContractException.
public class Program
static void Main(string[] args)
var product = new Product();
product.Id = -100;
Now let’s make Product to be abstract class and let’s define new class called Food that adds one more contract to Weight property.
public class Food : Product
public override decimal Weight
return base.Weight;
{
Contract.Requires(value > 1);
Contract.Requires(value < 10);
base.Weight = value;
Now we should have the following rules at place for Food:
Interesting part is what happens when we try to violate the lower and upper limits of Food weight. To see what happens let’s try to violate rules #2 and #4. Just comment one of the last lines out in the following method to test another assignment.
var food = new Food();
food.Weight = 12; food.Weight = 2;
And here are the results as pictures to see where exceptions are thrown. Click on images to see them at original size.
As you can see for both violations we get ContractException like expected.
Code contracts inheritance is powerful and at same time dangerous feature. Although you can always narrow down the conditions that come from more general classes it is possible to define impossible or conflicting contracts at different points in inheritance hierarchy. Code contracts and inheritance - Gunnar Peipman's ASP.NET blog [asp.net] on Topsy.com
Very nice! I'm curious, what's the performance cost of a contract vs non-contract? I know you don't get something for nothing, but I was curious just how "heavy" it is.
Thanks for question James!
I plan to make some performance measuring during next couple of days. So, stay tuned :)
why would we use class code contracts vs data annotation attributes?
Code contracts is interesting topic to discover for me. Although it is new technology that is currently
Isn´t it a violation of the Liskov substitution principle, when you are tightening the preconditions from w < 100 to w < 10?
It seems to me like something the code contract framework should catch and disallow.
I heard from Mike Barnett, that Code Contracts does not support the loosening of preconditions in derived classes, which seem even more weird if they allow them to be tightened. | http://weblogs.asp.net/gunnarpeipman/archive/2010/05/12/code-contracts-and-inheritance.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29 | crawl-003 | refinedweb | 552 | 58.99 |
Yes and this is what i get:
Breakpoint 1 (main.cpp:24) pending.
Child process PID: 4864
Program exited normally.
Debugger finished with status 0
Yes and this is what i get:
Breakpoint 1 (main.cpp:24) pending.
Child process PID: 4864
Program exited normally.
Debugger finished with status 0
Thanks, i forgot something
the main looks like this now:
The problem is , that it counts the character w-s and not the words.The problem is , that it counts the character w-s and not the words.Code:nt main() { string hol="input.txt"; Enor t(hol); t.First(); while (!t.End()) { t.Next(); } cout<<darabw<<endl; return 0; }
in the input the number should be 1 at the end, but it will be six.
the darabw should only be added to if wk is false, and the character we are on is w or W.
Last edited by Psyho; 05-31-2011 at 01:48 PM.
It reads in from a file, and counts the words, which contain w or W, so if within the same word there are more than one, then that should be counted as 1.
My idea was, that: find the first 'real' character, and if it is w or W, than we found one word that's good. wk =true (we found a w here already); ++darabw
In Next(), we look at the next character. If it is "w" or "W" and we already found one, we don't do anything.
If it is a delimiter, then we aren't in a word, so wk=false, and nemlimit=false since the next 'real' character will be a start of the new word.
We only count one up in darabw, if ch is "w" or "W", and we haven't already found one, so wk is false.
Last edited by Psyho; 05-31-2011 at 01:59 PM.
Why so complicated then? Are you trying to do it with a state machine or something? The problem can be solved by storing characters up to a word boundary, then checking for 'w' or 'W' and starting over.
Code:#include <ctype.h> #include <fstream> #include <iostream> #include <string> using namespace std; #define TEST int main() { ifstream ifs("test.txt"); int count = 0; string word; while (ifs) { char ch; if (!ifs.get(ch) || isspace(ch)) { #if defined(TEST) cout << "Checking '"<< word << "'" << endl; #endif count += word.find_first_of("wW") != string::npos; word.clear(); } else { word.push_back(ch); } } cout << count << endl; }
It is an order, that i can't do that. Don't know why, but I mustn't.
Anyway in:
even if wk is true it increases darabw in the first if.even if wk is true it increases darabw in the first if.Code:void Enor::Next() { char ch; f.get(ch); if (((ch=='w')|| ((ch=='W')) && (wk==false))) {++darabw; wk=true;} if ((ch==' ') || (ch== '\t') || (ch=='\n')) {nemlimit=false; wk=false;}
I just tried to play with it for a bit, it doesn't even care, what wk is. i rewrote it to if...&&(wk==true)... ;wk=false;} and it didn't change anything.
Last edited by Psyho; 05-31-2011 at 02:26 PM.
thanks i managed to do it in the end.
If your interested it was this:
from : if (((ch=='w')|| ((ch=='W')) && (wk==false))) => ((ch=='w'|| ch=='W')&& (wk==false))
those which are red, had to go.
Thank you for all the help you have given me.
Hope to hear from you again. | http://cboard.cprogramming.com/cplusplus-programming/138498-need-help-2.html | CC-MAIN-2016-36 | refinedweb | 580 | 83.36 |
Introduction to Squares in Java
When a number is multiplied by itself, the resulting number formed is the Square of the Number. Squares of a number are very easy to find. Generally, whenever we find the square root of an Integer number, we get the result in Integer only. Similarly, whenever we find the square of a decimal number, we get the answer in decimal as well. An interesting fact about the square of a number is that whenever we do a square of an integer number the value of the resulting number increases. However, when we do the square of decimals between 0 and 1, the resulting number decreases. An example would be that of a squaring of 0.5. When we square 0.5, the number gets decreased to 0.25. In this article, we are going to see the various methods of how we can square a number using the Java programming language.
Working – Square of a number can be found out in Java by a variety of techniques. We would like to see some examples related to the square of a number by which we can understand the square of a number better.
How to calculate Square in Java?
Let us learn how to calculate square in java:
Example #1
The simplest way of finding the square of a number is Math.pow() where it can be used to calculate any power of a number.
Code:
import java.util.*;
public class Square
{
public static void main(String args[])
{
Scanner sc=new Scanner(System.in);
int num;
System.out.print("Enter a number which is integer format: ");
num=sc.nextInt();
System.out.println("The square of "+ num + " is: "+ Math.pow(num, 2));
}
}
Output:
Example #2
In the next program, we are going to calculate the square of a number in the usual form such that it multiplies two numbers sequentially and finds the square of the respective number.
Code:
import java.util.*;
public class Square2
{
public static void main(String args[])
{
Scanner sc=new Scanner(System.in);
int no;
System.out.print("Enter a number which is integer format: ");
no=sc.nextInt();
System.out.println("Square of "+ no + " is: "+(no*no));//the number is multiplied with its own
}
}
Output:
Example #3
In this example, we are going to check if a number is a perfect square or not. This is a little bit complex program as it checks if a number is a square of another number.
Code:
import java.util.Scanner;
class JavaExample {
static boolean checkPerfectSquare(double x)
{
// finding the square root of given number
double s= Math.sqrt(x);
return ((s - Math.floor(s)) == 0); //Math.floor() is used here to calculate the lower value.
}
public static void main(String[] args)
{
System.out.print("Enter any number:");
Scanner scanner = new Scanner(System.in);
double no= scanner.nextDouble();
scanner.close();
if (checkPerfectSquare(no))
System.out.print(no+ " is a perfect square number");
else
System.out.print(no+ " is not a perfect square number");
}
}
Output:
Example #4
In this program, we find the number of square numbers within a specific range. We enter the range of numbers and the code would produce the square number in that specific range. In the below program, we find the number of square integers between 0 and 100.
Code:
// Finding the range of perfect square numbers in Java programming language
import java.io.IOException;
public class SquareNumbersInRange {
public static void main(String[] args) throws IOException {
int starting_number = 1;
int ending_number = 100;
System.out.println("Perfect Numbers between "+starting_number+ " and "+ending_number);
for (int i = starting_number; i <= ending_number; i++) {
int number = i;
int sqrt = (int) Math.sqrt(number);
if (sqrt * sqrt == number) {
System.out.println(number+ " = "+sqrt+"*"+sqrt);
}
}
}
}
Output:
Example #5
In this program, we are going to see the sum of squares of the first N natural numbers. We enter the value of N and the program calculates the sum of squares of the first N natural numbers.
Code:
// Java Program to find sum of
// square of first n natural numbers
import java.io.*;
class SumofSquares
{
// Return the sum of the square of first n natural numbers
static int square sum(int n)
{
// Move the loop of I from 1 to n
// Finding square and then adding it to 1
int sum = 0;
for (int i = 1; i <= n; i++)
sum += (i * i);
return sum;
}
// Main() used to print the value of sum of squares
public static void main(String args[]) throws IOException
{
int n = 6;
System.out.println("The sum of squares where N value is 6 is "+ squaresum(n));
}
}
Output:
Conclusion
- In this article, we see a list of methods by which we can square a number, how we can find whether a number is square or not within a specific range and also the sum of integers of the first N natural numbers. However, there are also some other techniques that can be used to find the square of a number. The name of a technique that can be used to see and check if a number is square or not is the Recursion technique which uses a function within a function to check if the number is a perfect square or not.
- Although the recursion technique is difficult to use, it can be used to calculate the square of a number within a few lines of code. Further, using square numbers we can generate a lot of pattern programs. We can print a square pattern in a spiral format or a zig-zag format. Similarly, the square numbers can be used in the source code to generate the double square such as the number 16 where the double square is number 2.
Recommended Articles
This is a guide to the Squares in Java. Here we have discussed the Introduction along Examples and codes with Output of Squares in Java. You can also go through our other suggested articles to learn more– | https://www.educba.com/squares-in-java/ | CC-MAIN-2020-29 | refinedweb | 983 | 64.1 |
django.js 0.8.1
Django JS Tools
Dj.
Compatibility
Django.js requires Python 2.6+ and Django 1.4.2+.
Installation')), ... )
Documentation
The documentation is hosted on Read the Docs
Changelog
0.8.1 (2013-10-19)
- Fixed management command with Django < 1.5 (fix issue #23 thanks to Wasil Sergejczyk)
- Fixed Django CMS handling (fix issue #25 thanks to Wasil Sergejczyk)
- Cache Django.js views and added settings.JS_CACHE_DURATION
- Allow customizable Django.js initialization
- Allow manual reload of context and URLs
- Published Django.js on bower (thanks to Wasil Sergejczyk for the initial bower.json file)
- Do not automatically translate languages name in context
0.8.0 (2013-07-14)
- Allow features to be disabled with:
- settings.JS_URLS_ENABLED
- settings.JS_USER_ENABLED
- settings.JS_CONTEXT_ENABLED)
0.7.5 (2013-06-01)
- Handle Django 1.5+ custom user model
- Upgraded to jQuery 2.0.2 and jQuery Migrate 1.2.1
0.7.4 (2013-05-11)
0.7.3 (2013-04-30)
- Upgraded to jQuery 2.0.0
- Package both minified and unminified versions.
- Load minified versions (Django.js, jQuery and jQuery Migrate) when DEBUG=False
0.7.1 (2013-04-25)
- Optionnaly include jQuery with {% django_js_init %}.
0.7.0 (2013-04-25)
- Added RequireJS/AMD helpers and documentation
- Added Django Pipeline integration helpers and documentation
- Support unnamed URLs resolution.
- Support custom content types to be passed into the js/javascript script tag (thanks to Travis Jensen)
- Added coffee and coffescript template tags
- Python 3 compatibility
0.6.5 (2013-03-13)
- Make JsonView reusable
- Unescape regex characters in URLs
- Fix handling of 0 as parameter for Javasript reverse URLs
0.6.4 (2013-03-10)
- Support namespaces without app_name set.
0.6.3 (2013-03-08)
- Fix CSRF misspelling (thanks to Andy Freeland)
- Added some client side CSRF helpers (thanks to Andy Freeland)
- Upgrade to jQuery 1.9.1 and jQuery Migrate 1.1.1
- Do not clutter url parameters in js, javascript and js_lib template tags.
0.6.2 (2013-02-18)
- Compatible with Django 1.5
0.6.1 (2013-02-11)
- Added static method (even if it’s a unused reserved keyword)
0.6 (2013-02-09)
- Added basic user attributes access
- Added permissions support
- Added booleans context processor
- Added jQuery 1.9.0 and jQuery Migrate 1.0.0
- Upgraded QUnit to 1.11.0
- Added QUnit theme support
- Allow to specify jQuery version (1.8.3 and 1.9.0 are bundled)
0.5 (2012-12-17)
Added namespaced URLs support
Upgraded to Jasmine 1.3.1
- Refactor testing tools:
- Rename test/js into js/test and reorganize test resources
- Renamed runner_url* into url* on JsTestCase
- Handle url_args and url_kwargs on JsTestCase
- Renamed JasmineMixin into JasmineSuite
- Renamed QUnitMixin into QUnitSuite
- Extracted runners initialization into includable templates
Added JsFileTestCase to run tests from a static html file without live server
Added JsTemplateTestCase to run tests from a rendered template file without live server
- Added some settings to filter scope:
- Serialized named URLs whitelist: settings.JS_URLS
- Serialized named URLs blacklist: settings.JS_URLS_EXCLUDE
- Serialized namespaces whitelist: settings.JS_URLS_NAMESPACES
- Serialized namespaces blacklist: settings.JS_URLS_NAMESPACES_EXCLUDE
- Serialized translations whitelist: settings.JS_I18N_APPS
- Serialized translations blacklist: settings.JS_I18N_APPS_EXCLUDE
Expose PhantomJS timeout with PhantomJsRunner.timeout attribute
0.4 (2012-12-04)
Upgraded to jQuery 1.8.3
Upgraded to Jasmine 1.3.0
Synchronous URLs and context fetch.
Use django.utils.termcolors
- Class based javascript testing tools:
- Factorize JsTestCase common behaviour
- Removed JsTestCase.run_jasmine() and added JasmineMixin
- Removed JsTestCase.run_qunit() and added QUnitMixin
- Extract TapParser into djangojs.tap
Only one Django.js test suite
Each framework is tested against its own test suite
Make jQuery support optionnal into JsTestCase
Improved JsTestCase output
Drop Python 2.6 support
Added API documentation
0.3.2 (2012-11-10)
- Optionnal support for Django Absolute
0.3.1 (2012-11-03)
- Added JsTestView.django_js to optionnaly include django.js
- Added js_init block to runners to templates.
0.3 (2012-11-02)
- Improved ready event handling
- Removed runners from urls.py
- Added documentation
- Added ContextJsonView and Django.context fetched from json.
- Improved error handling
- Added DjangoJsError custom error type
0.2 (2012-10-23)
- Refactor template tag initialization
- Provides Jasmine and QUnit test views with test discovery (globbing)
- Provides Jasmine and QUnit test cases
- Added Django.file()
- Added {% javascript %}, {% js %} and {% css %} template tags
0.1.3 (2012-10-02)
- First public release
- Provides django.js with url() method and constants
- Provides {% verbatim %} template tag
- Patch jQuery.ajax() to handle CSRF tokens
- Loads the django javascript catalog for all apps supporting it
- Loads the django javascript i18n/l10n tools in the page
- Downloads (All Versions):
- 128 downloads in the last day
- 544 downloads in the last week
- 2702 downloads in the last month
- Author: Axel Haustant
- Download URL:
- Keywords: django javascript test url reverse helpers
- License: LGPL
- Categories
- Development Status :: 4 - Beta
- Environment :: Web Environment
- Framework :: Django
- Intended Audience :: Developers
- License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python
- Programming Language :: Python :: 3
- Topic :: Software Development :: Libraries :: Python Modules
- Topic :: System :: Software Distribution
- Package Index Owner: noirbizarre
- DOAP record: django.js-0.8.1.xml | https://pypi.python.org/pypi/django.js/ | CC-MAIN-2015-18 | refinedweb | 858 | 52.36 |
[
]
Shai Erera commented on LUCENE-1614:
------------------------------------
I don't like that code also. But if we allow to return any value, except -1 or NO_MORE_DOCS
before the iteration started, someone will have very hard time trying to write such code (to
determine if the iterator has started).
The current contract already specifies what this method should return when nextDoc or advance
were not called. We just need to make it more explicit:
* Return -1 if the iteration did not start, and there are documents to return
* Return NO_MORE_DOCS if there are no more docs to return (whether the iteration started or
not).
* Return the doc ID on any other case..
Note that I also wrote that this method should not throw any exception, but I think of relaxing
that either, and say "it is better if the implementation does not throw any exception in case
there are no more documents to return". The reason is, we cannot force "don't throw exception"
in the code ... What do you think?
I will update the patch if you agree to these changes.
> | http://mail-archives.apache.org/mod_mbox/lucene-dev/200905.mbox/%3C1128825542.1243594125933.JavaMail.jira@brutus%3E | CC-MAIN-2015-22 | refinedweb | 180 | 68.4 |
This is a C Program to find first and last occurrence of given character in a string.
This program takes a string and a character as input and finds the first and last occurrence of the input character in a string.
1. Take a string and a character as input.
2. Using for loop search for the input character.
3. When the character is found, then print its corresponding position.
4. Again keep on searching for the input character. Now keep on incrementing a variable whenever the input character encounters.
5. Do step-4 until the end of string. when it is done, print the value of the variable.
Here is source code of the C Program to find first and last occurrence of given character in a string. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C Program to find First and Last Occurrence of given
* Character in a String
*/
#include <stdio.h>
#include <string.h>
void main()
{
int i, count = 0, pos1, pos2;
char str[50], key, a[10];
printf("enter the string\n");
scanf(" %[^\n]s", str);
printf("enter character to be searched\n");
scanf(" %c", &key);
for (i = 0;i <= strlen(str);i++)
{
if (key == str[i])
{
count++;
if (count == 1)
{
pos1 = i;
pos2 = i;
printf("%d\n", pos1 + 1);
}
else
{
pos2 = i;
}
}
}
printf("%d\n", pos2 + 1);
}
1. Take a string and a character as input and store it in the array str[] and variable key respectively.
2. Using for loop search for the variable key. If it is found then increment the variable count.
3. If the value of count is equal to 1, then copy the value of i into the variables pos1 and pos2 and print the value (pos+1) as the first position.
4. If the value of count is not equal to 1, then just copy the value of i into the variable pos2. Do this step until the end of string.
5. Print the value (pos2+1) as the last position and exit.
enter the string welcome to sanfoundry's c programming class! enter character to be searched m 6 34
Sanfoundry Global Education & Learning Series – 1000 C Programs.
Here’s the list of Best Reference Books in C Programming, Data-Structures and Algorithms | http://www.sanfoundry.com/c-program-first-and-last-occurrence-character-string/ | CC-MAIN-2018-09 | refinedweb | 386 | 73.68 |
]>
NAME
SYNOPSIS
REQUEST ARGUMENTS
DESCRIPTION
RETURN VALUE
ERRORS
SEE ALSO
AUTHOR
xcb_grab_key − Grab keyboard key(s)
#include <xcb/xproto.h>
Request function
owner_events
If 1, the grab_window will still get the pointer events. If 0, events are not reported to the grab_window.
grab_window
Specifies the window on which the pointer should be grabbed.
Using the special value XCB_MOD_MASK_ANY means grab the pointer with all possible modifier combinations.
The special value XCB_GRAB_ANY means grab any key.
pointer.
keyboard.
Establishes a passive grab on the keyboard. In the future, the keyboard is actively grabbed (as for Grab.
Returns an xcb_void_cookie_t. Errors (if any) have to be handled in the event loop.
If you want to handle errors directly with xcb_request_check instead, use xcb_grab_key_checked. See xcb-requests(3) for details.
xcb_access_error_t
Another client has already issued a GrabKey with the same button/key combination on the same window.
xcb_window_error_t
The specified window does not exist.
xcb_value_error_t
TODO: reasons?
xcb-requests(3), xcb_grab_keyboard(3)
Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements. | http://www.x.org/releases/current/doc/man/man3/xcb_grab_key.3.xhtml | CC-MAIN-2014-35 | refinedweb | 173 | 53.47 |
mysql database in asp.net web application using c#, vb.net with example.
To use MySQL database in our asp.net web application first install MySQL database in your system using following url Download & Install MySQL. In case already if you MySQL installed server available means that would be fine.
We installed full MySQL server including MySQL Server, Work Bench and Visual Studio connector in our system. Once we finished MySQL installation that would be like as shown below
Now we will create new database in our MySQL Server for that click on Local instance server in MySQL Connections section like as shown in above image and enter credential details to connect our server.
Once we connect Go to the File --> New Model like as shown below.
Once we select New Model that will open Model editor like as shown below
If you want to rename database enter required name in Name field and click on Rename References button. Now we will create new table for that Double Click on Add Table like as shown below.
Once we click on Add Table new editor will open in that enter required table columns and click Save like as shown below
Once we finished creation of Employee table insert some dummy data in our table to show that data in our application like as shown below
Now we will see how to connect MySQL database from our asp.net application for that open visual studio and create new project like as shown below
Now right click on your project and select Manage Nuget Package reference like as shown below
Once we select Manage NuGet Packages it will open new window in that search for mysql and install MySQL.Data once installation done that will show tick mark like as shown below.
Once we install our MySQL.Data.dll reference will add it in our application like as shown below.
Now open your Default.aspx page and write the code like as shown below
Now open code behind file and write the code like as shown below.
C# Code
If you observe above code we added “MySql.Data.MySqlClient” namespace to access MySQL connection commands.
Here our MySQL connection commands are same as SQL Connection but only difference is we added My at the starting of commands (for example MySqlConnection).
VB.NET Code
Now we will run and the see the result of our application that will be like as shown below
Demo
Following is the result of connecting MuSQL database in asp.net web applications.
1 comments :
Very Nice, Clear and informative Article.Thanks Much. You are doing wonderful job.
Note: Only a member of this blog may post a comment. | https://www.aspdotnet-suresh.com/2016/11/aspnet-get-fetch-data-from-mysql-database-using-csharp-vbnet.html | CC-MAIN-2021-31 | refinedweb | 449 | 71.34 |
.
Free Trial
The evaluation version is the same as the purchased one – the trial version simply becomes licensed when you add a few lines of code to apply the license.
The Trial version of Aspose.Words without the specified license provides full product functionality, but inserts an evaluative watermark at the top of the document upon loading and saving and limits the maximum document size to a few hundred paragraphs.
Temporary License
If you wish to test Aspose.Words without the limitations of the Trial version, you can also request a 30-day Temporary License. For more details, see the “Get a Temporary License” page.
Purchased License
After purchase, you need to apply the license file or stream. This section describes options of how this can be done, and also comments on some common questions. python script that calla Aspose.Words for Python via .NET
- Stream
- As a Metered License – a new licensing mechanism
Use the set_license method to license a component.
Calling set_license multiple times is not harmful, it just wastes processor time.
Apply License Using a File or Stream Object
When developing your application, call set_license in your startup code before using Aspose.Words classes.
Load a License from a File
Using the set_license method, you can try to find the license file in the embedded resources or assembly folders for further use.
The following code example shows how to initialize a license from a folder:
lic = aw.License() # Try to set license from the folder with the python script. try : lic.set_license("Aspose.Words.Python.NET.lic") print("License set successfully.") except RuntimeError as err : # We do not ship any license with this example, visit the Aspose site to obtain either a temporary or permanent license. print("\nThere was an error setting the license: {0}".format(err))
Load a License from a Stream Object
The following code example shows how to initialize a license from a stream using another set_license method:
lic = aw.License() # Try to set license from the stream. try : lic_stream = io.FileIO("C:\\Temp\\Aspose.Words.Python.NET.lic") lic.set_license(lic_stream) lic_stream.close() print("License set successfully.") except RuntimeError as err : # We do not ship any license with this example, visit the Aspose site to obtain either a temporary or permanent license. print("\nThere was an error setting the license: {0}".format(err)):
# Set metered public and private keys metered = aw.Metered() # Access the setMeteredKey property and pass public and private keys as parameters metered.set_metered_key("*****", "*****") # Load the document from disk. doc = aw.Document(docs_base.my_dir + "Document.docx") #Get the page count of document print(doc.page_count)
Changing the License File Name
The license filename does not have to be “Aspose.Words.Python.NET.lic”. You can rename it to your liking and use that name when setting a license in your application.
“Cannot find license filename” Exception
When you purchase and download a license, the Aspose website names the license file “Aspose.Words.Python.NET.lic”. You download the license file using your browser. In this case, some browsers recognize the license file as XML and append the .xml extension to it, so the full file name on your computer becomes “Aspose.Words.Python.NET.lic.XML”.
When Microsoft Windows is configured to hide extensions for known file types (unfortunately, this is the default in most Windows installations), the license file will appear as “Aspose.Words.Python.NET.lic” in Windows Explorer. You will probably think that this is the real file name and call set_license passing it “Aspose.Words.Python.NET_license separately for each Aspose product that you use in your application.
- Use the Fully Qualified License Class Name. Each Aspose product has a License class in its own namespace. For example, Aspose.Words has aspose.words.License and Aspose.Cells has aspose.cells.License class. Using the fully qualified class name allows you to avoid confusion as to which license applies to which product. | https://docs.aspose.com/words/python-net/licensing/ | CC-MAIN-2022-27 | refinedweb | 654 | 58.48 |
There's been lots of debate about use of DataSets with web services. The issue is interoperability: the default WSDL representation of a DataSet is an arbitrary schema followed by arbitrary XML. That's just a bit on the loose side for most toolkits to deal with in the general case. However, one use case that drives the desire to use a DataSet in the first place is that the type of data to be returned is not known precisely until runtime. Lately, I've discovered yet another corner of XmlSerializer that helps me deal with some of these situations without having to resort to using a DataSet.
It turns out that sometimes, although you don't know the exact type of data you'll be returning, you do know that it's going to be one of a known set of types. This is particularly common when you're returning the results of a database query: you might know that all your columns are either strings, booleans, integers, dates, or floating point numbers. If that's the case, there's an XML schema construct that allows you to say, “I want to have one of the following elements appear here.” It's called <xs:choice>, and here's a schema fragment that demonstrates its use:
<xs:element name=“item“> <xs:complexType> <xs:choice> <xs:element minOccurs=“1“ maxOccurs=“1“ name=“string“ type=“xs:string“ /> <xs:element minOccurs=“1“ maxOccurs=“1“ name=“integer“ type=“xs:int“ /> <xs:element minOccurs=“1“ maxOccurs=“1“ name=“boolean“ type=“xs:boolean“ /> <xs:element minOccurs=“1“ maxOccurs=“1“ name=“date“ type=“xs:dateTime“ /> <xs:element minOccurs=“1“ maxOccurs=“1“ name=“float“ type=“xs:float“ /> </xs:choice> </xs:complexType></xs:element>
What this schema means is that when an “item” element appears, it must have as a child exactly one of the elements <string>, <integer>, <boolean>, <date>, or <float>. Further, if the child element is <string>, the contents of that element must be a string. But if the child element is <integer>, then the contents must be an integer, and so on and so forth. It's a reasonably “normal“ bit of schema, and while I have no idea what the support in various non-.NET toolkits looks like, support for it is a lot more likely to exist than for the vagaries of DataSet.
From an XML standpoint, this is a pretty nice thing. It lets us express exactly what we want, namely, that we'll tell you at runtime what our choice for the type of the data is. Where this gets really great is that it's supported by System.Xml.Serialization, which powers the ASP.NET Web Services stack.
Here's how it maps: you define a class with the usual attributes from the System.Xml.Serialization namespace. But when you get to the element that you want to represent as an <xs:choice>, you simply use more than one [XmlElement] attribute, and you use the overload that lets you specify a type. Here's what I mean:
[XmlRoot(”item”)]public class Item{ private object value; [XmlElement(“string“, typeof(string))] [XmlElement(“integer“, typeof(int))] [XmlElement(“boolean“, typeof(bool))] [XmlElement(“date“, typeof(DateTime))] [XmlElement(“float“, typeof(float))] public object Value { get { return value; } set { this.value = value; } } }
When the serializer encounters this type during either serialization or deserialization, it will use the attributes to control the mapping between element names and .NET types. Which is to say, when the Value is a float, you'll get <float>, when the Value is string, you'll get a <string>, and (better still) vice versa! You have to do a type test and typecast at the other end, like this:
if (foo.Value is string) { DealWithString((string) foo.Value); }else if (foo.value is float) { DealWithFloat((float) foo.Value); }// etc.
But big deal. :)
By itself, this is a fairly powerful feature. And there's still more we can do, but I think I'll stop this entry now while this entry is still reasonably short. More later. | http://www.pluralsight.com/community/blogs/craig/archive/2004/12/14/3931.aspx | crawl-002 | refinedweb | 670 | 60.14 |
Introduction to parsing Swift code with the SwiftSyntax Library
A quick tutorial for parsing Swift code from Swift code.
How would you build a tool to automatically convert Swift code to Javascript? This is a hard problem that becomes even harder/impossible if you use the wrong tools. To do it correctly, we will need to steal some ideas from how compilers work.
The first step is to parse Swift code into an Abstract Syntax Tree. For this task we can use SwiftSyntax — a library that lets you “parse, inspect, generate, and transform Swift source code.”
In this guide I will show you how to use SwiftSyntax to parse code into an AST, and how to pretty-print the AST to see how it works.
My Setup
Here is the setup I’m using:
macOS Version 10.15.6 (19G73) Xcode Version 11.6 (11E708) Swift version 5.2.4 (bundled with Xcode)
If you have recently downloaded a new Xcode, make sure to switch to it using:
sudo xcode-select -s "/<Some Path>/Xcode.app/"
If you are using the Xcode 12 beta, this guide may not work for you. These instructions did not work when I tried using
Version 12.0 beta 3 (12A8169g).
Setting up SwiftSyntax
To start, set up a new project called
SwiftCodeAnalyzer:
mkdir SwiftCodeAnalyzer cd SwiftCodeAnalyzer swift package init --type executable
Inside of Package.swift include
SwiftSyntax as a package dependency, and also as a dependecy of the
SwiftCodeAnalyzer target.
// swift-tools-version:5.2 // The swift-tools-version declares the minimum version of Swift required to build this package. import PackageDescription let package = Package( name: "SwiftCodeAnalyzer", dependencies: [ .package(name: "SwiftSyntax", url: "", .exact("0.50200.0")), ], targets: [ // Targets are the basic building blocks of a package. A target can define a module or a test suite. // Targets can depend on other targets in this package, and on products in packages this package depends on. .target( name: "SwiftCodeAnalyzer", dependencies: ["SwiftSyntax"]), .testTarget( name: "SwiftCodeAnalyzerTests", dependencies: ["SwiftCodeAnalyzer"]), ] )
To ensure you have everything set up propertly, try importing
SwiftSyntax, and then use
swift run to run the project.
// Add this code inside of Sources/SwiftCodeAnalyzer/main.swift import SwiftSyntax print("SwiftSyntax successfully imported!")
Parsing code with SwiftSyntax
Now that we have successfully imported SwiftSyntax into our project, we can begin using it to parse some Swift code. In this project we will be parsing code stored as a string, but SwiftSyntax also supports parsing code stored in a file too.
Converting code to an AST
Here is how you can can parse code with SwiftSyntax:
import SwiftSyntax let swiftSource = """ import Foundation func test() { print("hello world") } test() """ let rootNode: SourceFileSyntax = try! SyntaxParser.parse(source: swiftSource) // We will replace this in the next step. print(rootNode.description)
The output of
SyntaxParser.parse is the root node of the AST. It contains a list of children nodes, and each child node also contains children. This structure repeats recursively until we hit the leaf nodes. These are typically atomic syntax units like numbers, or variable names.
If you run this code,
rootNode.description will recursively traverse each node in the AST and print out the corresponding string associated with it. The result is a string that precisely matches what is stored inside of
swiftSource. This isnt very interesting since we already know what the source code looks like. Let’s try pretty-sprinting out the AST directly and display the names of each node.
Pretty-Printing the AST
Here is how you can pretty-print the AST:
import SwiftSyntax let swiftSource = """ import Foundation func test() { print("hello world") } test() """ let rootNode: SourceFileSyntax = try! SyntaxParser.parse(source: swiftSource) recursivePrint(node: Syntax(rootNode), indent: 0) func recursivePrint(node: Syntax, indent: Int) { let indentString = String(repeating: " ", count: indent) let nodeName = String(describing: node.customMirror.subjectType) print(indentString + nodeName) for child in node.children { recursivePrint(node: child, indent: indent + 1) } }
This code recursively traverses the AST and prints out the name of each node. Each level is indented by an additional 2 spaces.
You might notice that the code does something weird here:
String(describing: node.customMirror.subjectType). This is is a hack and you probably should not use it in production. This was just the simplest way I could get the name of the node for the purposes of this tutorial.
If you run the code, the output should look something like this:
SourceFileSyntax CodeBlockItemListSyntax CodeBlockItemSyntax ImportDeclSyntax TokenSyntax AccessPathSyntax AccessPathComponentSyntax TokenSyntax CodeBlockItemSyntax FunctionDeclSyntax TokenSyntax ...
Congratulations! You just built a tool that can parse Swift code and visualize its AST.
Explaining how the AST works is out of scope for this guide, but I would encourage you to play changing the source code to see how it affects the AST.
If you want to learn more about SwiftSyntax and how you can use it properly, please take a look at the
SyntaxVisitor class, and also read up on the visitor pattern. If I was writing real production code using SwiftSyntax, this is what I would to parse and analyze the AST of some code. If I get around to writing another article on SwiftSyntax I will likely cover the Visitor pattern and how to use it. | https://vivekseth.com/swift-syntax-intro/ | CC-MAIN-2021-31 | refinedweb | 861 | 56.35 |
Introduction: Connect PS/2 Keyboard to Arduino
Hi everyone, this is also an Interesting project that brings 106 Inputs to your Arduino. Can't believe? Follow the project and see how this happens with a PS/2 Keyboard.
OK First of all you need
- Arduino (UNO)
- PS/2 Keyboard
- PS/2 Keyboard connector
Step 1: Keyboard Conection
Following is the pin-out of the Connector. There are 4 wires coming from the keyboard and their connections to arduino Digital pins are as follows.
- 5V :- Arduino 5V out
- Ground :- Arduino GND
- Clock :- Arduino Pin 3
- Data :- Arduino Pin 8
Step 2: Code
First include this library to Arduino Software.
#include < PS2Keyboard.h>
const int DataPin = 8; const int IRQpin =); } } }
Step 3: Testing
So we have finished our coding, Upload it to arduino and keep the Arduino connected to PC.Then open the Serial Monitor on Arduino Software and Press some keys on the Keyboard connected to Arduino and you will see It prints what you type on that keyboard.Comment your ideas.
2 People Made This Project!
VarghaH made it!
thereminhero made it!
Recommendations
We have a be nice policy.
Please be positive and constructive.
9 Comments
Please help me! I followed all the steps and did not succeed! I attach you to the pictures
And The keyboard sometimes blink when is connected to arduino
Did you get a solution? I have the same problem.
I resolved. My keyboard was fried because I inverted + and - I tried with another keyboard and SUCCESS! :)
Hi sir,
I have pannel look like keyboard ..can you help to interface with it..Thank you
excelente!, muchas gracias, funciona perfecto!, pero no con adaptador USB a PS/2.
It works nicely, thanks
My Keyboard doesn't seem to turn on when connecting the 5V and GND, but it's an RJ12 keyboard, so I'm not really sure if it's the same pinout (I looked up the pinout of a PS/2 to RJ12 adaptor and it seemed pretty straightforward)
hi, I have done this proyect and it doesn´t work :c | http://www.instructables.com/id/Connect-PS2-Keyboard-to-Arduino/ | CC-MAIN-2018-26 | refinedweb | 346 | 74.49 |
An Idiot's Guide to Managing Money
Disclaimer - I don't know diddly about managing my money!
It's true! So why, then, would I be writing on this subject? Well, for starters because it is one of my goals for 2009 to write a hub on each and every hubmob weekly topic, so even though I am now, always have been, and probably always will be, a complete idiot when it comes to finances, I am going to give this a go. Plus, I actually know a fair amount about such subjects, it's just that I have a hard time taking my own advice when it comes to money management.
The Best Way to Make Money Is The Old Fashioned Way - Work!
I know the ultra-conservative neo-nazi fascist pigs out there would love to believe that every family on welfare is secretly living the high life on the public dole, but let me tell you, nothing could be further from the truth. Welfare, or Transitional Aid for Families in Need (I think that's what TANF means), is barely a subsistence level existence. And the notion that having more babies is going to help is laughable. Sure, you'll see an increase in your monthly TANF check, but it will not come close to matching the increased expenditures that new member of the family for whom you are now responsible will incur.
So the bottom line is that if you want security for your family, getting a job is the biggest step you can take toward financial security. Of course, that's easier said than done, in today's economy. That does not give you permission to sit on your butt and wait for things to get better. Instead, you should be starting your job search immediately! Start networking! Go to job fairs! You never know which contact you make today that may result in an opportunity down the line.
I'll give you an example. I am presently teaching for the University of Phoenix Online. I love online teaching because you can't beat the commute, I get to work at home and be there when my kids get off the school bus, and as long as I have a computer with an internet connection, I'm golden. I got my foot in the door last year when the University of Phoenix needed people with communications and journalism background to teach one of their introductory classes, called "Contemporary Business Communications." I believe that just about every new student to enroll in Axia College of the University of Phoenix must take this class, whether they want to or not. So as long as I do the job that is expected of me, I've got job security. Online education is booming, for the same reasons why I love teaching online.
Some time ago, and I'm not real sure when, I apparently sent a copy of my curriculum vitae (an educator's expanded resume) to DeVry University online. Earlier this afternoon, I checked my email, and it turns out that DeVry matched my qualifications against their needs, and found a match. So I received an invitation to submit additional information about my teaching and professional experience, with the hope that I can teach Interpersonal Communications for DeVry.
A Poor Economy Is A Great Time To Pursue A Degree
The reason why people like me are getting work for these online programs is because many people who are either unemployed, underemployed, or afraid they'll become either of the above, decide that the time is now to go back to school and get their degree. But they don't want to uproot their family, and if they are employed, they don't want to quit their job just to pursue a degree. So the solution is to take classes from the comfort of your own home, online. You could also take classes at a local community college, as most of them tend to be pretty affordable. A degree will make you that much more marketable when the economy improves (and it will, it always does, even after the Great Depression).
The Next Topic - Taxes
The worst way to manage your money is to let the government take extra money out of your paycheck every week or two, depending upon how often you get paid. Sure, I'm the first to admit that a near sexual thrill overcomes me when I get my tax return back, but in reality what has happened is that you've let the government take your money, use it to pay for $200 toilet seats, and then give it back to you a year later, without interest. Instead, the best way to manage your money is to have just enough money withheld from your check so you can break even, or perhaps get a small tax return. In that way, you'll be keeping more money in your pocket on a paycheck-by-paycheck basis, which you can use to buy your own $200 toilet seat, if that's what you want to do. The bottom line (no pun intended) is that it's always better in a bad economy to have access to every dime you have at your disposal.
While we're talking about taxes, I've got to say a word about the Earned Income Credit. When I get my taxes back, just about all of the return (which I actually mailed out today) will be in the form of the Earned Income Credit. The amount of your credit depends upon how much or how little earned income you actually had, your family size, etc. It's a great way to really blow up that return, but you have to be careful to be honest about your income. If our friendly auditors at the IRS find that you have exaggerated your income in order to qualify for a higher earned income credit, there can be serious consequences, not the least of which is that they can ban you from taking the credit in future years (I think it's two years, but I could be wrong there). The important thing, as I said before, is to be brutally honest, and in the end, you'll have some gravy to put on your mashed potatoes.
This is HUGE! Check Your Credit Annually!
A couple years ago, I started getting into the habit, just after New Year's, to request copies of my credit report from all three of the major credit reporting agencies. These companies - TransUnion, Experian and Equifax - make their money by selling reports on you to potential employers, landlords, bank officers, etc. The quality of their report, therefore, is inextricably linked to how much value any of those potential clients place on the information. In that regard, each of those companies actually want people to dispute information contained in their credit report if in fact it is erroneous. Depending upon whom you talk to, up to 80 percent of all credit reports contain at least one error.
It is against the law to try to mislead the credit bureaus to remove items from your credit report when it is actually accurate. Fortunately, though, they give you options you can choose from, including one where you can stipulate that you have no knowledge of a particular debt. Such a claim is very nearly impossible to refute, because it's impossible to know what a person can or can not remember. Therefore, when you go through your personal credit report, paying particular attention to the negative items listed, if you have any doubt about whether a particular debt is actually yours, it is in your interest to dispute it. In an initial dispute, the less you say the better. The credit bureaus then have 30 days from the day you submitted your dispute, to contact the creditors and verify the information. If they do verify the information, sometimes the items are updated, to reflect interest that has accrued, or perhaps to reflect payments you may have made to reduce the debt.
If a creditor can not provide documentation within 30 days that you owe this money, by law the item must be removed from your credit report.
The down side of this approach, challenging everything under the Sun, is that once an item has been verified, if you challenge the debt again, particularly too soon, the credit bureau can legally refuse to verify your debt, labeling it "frivolous," since they just jumped through the hoops to prove it was your debt.
Sometimes, however, it is appropriate to contact the creditor directly. I'll give you an example that showed up on my credit report when I pulled them for 2009. According to the TransUnion report, I had a collections debt for over $300 to Maine Medical Center. I checked with Maine Medical Center to find out when this debt was incurred, and they said it was February of 2007. The problem was that in February of 2007, I had full health insurance. So I contacted my insurer and verified that my insurance was, in fact, in force on the date of service.Then I called back Maine Medical Center and informed them that they should have been paid by my insurance company, and added that this negative item on my credit report was affecting my ability to get affordable credit today.
In the end, Maine Medical Center agreed to drop the debt, as it was their error in billing that resulted in not being paid for the service, and they notified their collections agency that the matter had been resolved. All-in-all, I spent about a half-hour making phone calls, and I got that item permanently removed from my credit report.
You can also write letters to creditors and collections agencies, asking them to verify a debt. I have one item on my credit report claiming that I once bounced a check to some company in Seattle for some pictures. I know, based upon the date this check was allegedly written by me, that I did not even have a checking account at that time. So this is one that I am sending a certified letter to, asking them to verify the debt. I'm also asking them to provide me with a copy of the original check, so I can compare the signature on the check against the signature on my driver's license. I fully expect to find that this check was written by someone else with a similar name as mine, and ended up in my credit report.
Of course, the best way to improve your credit is to pay your bills. Even if you have accounts that have been charged off, you can pay off the debt and see your credit worthiness increase. Sometimes you can negotiate with a creditor or collections agency, and reach a deal whereby if you pay down the debt, meeting payment due dates, at the end of the process not only will the negative item be removed from your credit report, but you can also ask the creditor to agree to report it as a positive account. In an economy like we have today, creditors will be much more likely to make concessions like that as long as it results in their company getting money they had long since given up hope on collecting.
One last point about credit - for most negative items on your report, the longest they can remain on your report is seven years. Bankruptcies can stay on longer, up to 10 years, and public judgements by courts can stay on significantly longer (I believe 20 years, but don't quote me on that). So if you check your credit report and see that there is a collectons account on your record that is more than seven years old, you should immediately write to the credit bureau and remind them that the item must be removed from your credit report.
UPDATE!
I just received the results back from my challenges to accounts that appeared in one of my credit reports, and found that 10 items I disputed could not be verified. And in this case, it was not because the creditors simply ran out of time. They still had three days left before the items would have been dropped due to the 30 day requirement. So it pays to challenge items you believe to be inaccurate. Just because an item is on your credit report does not mean that you actually owe the money. Credit reports are generated by creditors and collections agencies that are run by human beings. Human beings are prone to making mistakes, and in this case, I would have been doing a disservice to myself by simply accepting the accuracy of these items.
Last, but certainly not least, we have investing!
There's always a safe place to put money you want to invest, but the key is to know where to put it, depending upon the condition of the economy. With the way the stock market has been going up and down like a manic depressant patient, now would not be the time to jump into the stock market. Real estate, too, is not a great option for the short-term because property values are down in most places. On the other hand, if you are looking for a bargain, and intend to keep the property for a while, this is a great time to buy. I contacted a realtor in Florida recently and found that this realtor alone has HUNDREDS of foreclosed, bank-owned properties, many of which can be had for less than the debt that was originally owed.
The key in this case is to have good enough credit in order to qualify for a mortgage, assuming you don't have cash on hand to buy the property outright. That said, there are plenty of properties in the greater Orlando area that you could buy for cash for what would normally be considered a healthy downpayment. The key, as I've heard it before, is to try to qualify for the best mortgage you can, with the smallest down payment you can get away with, as long as you can save the money for a rainy day, such as when you may lose your job and need a safety net.
Another area that is booming right now is precious metals. The problem is, it may be too late to cash in on that boom, because if you get into the gold market at $800 an ounce, you may not make as much money if the price is near its ceiling.
You also have the option of going with a mutual fund, spreading around the types of companies you invest in. I used to have a mutual fund that was split up four ways - 25 percent each for: aggressively managed, conservatively managed, utilities and common stock. By diversifying your portfolio, when the market drops, you've got a fighting chance to weather the storm on Wall Street.
So just remember, folks, that I am by no means an expert on money management, and anything I've said here could very well be worth less than, say, cow patties, and I strongly encourage anyone who reads this to check my facts with people who are not financial idiots and actually know what they're talking about.....of course, George W. Bush has an MBA, and he obviously didn't have a clue on the economy, so who's to say who is an expert these days?
There are a few nice tips down there. Shocking at first but the content makes a lot of sense !
I love this guide, this is mine. Great hub!! :-)
Hey crash
Did you send it to Washington?
Good man.
Useful and practical information here in the Idiots Guide. Great Hub:)
Work??? How does work get you money? LOL! Kidding! Great advice in here!! Especially when it comes to credit reports. A few years ago, I had a few things on my credit report that I had to dispute, and I had no idea that it had ever been there.
What's a HubMob? :)
For somebody who claims to know little about money, you have written some good advice! I especially liked your point about using the recession as a good time to improve qualifications. Of course, this would have to be at the lowest cost possible! But you are right, the current situation will not last forwever.
Good point about how working is the best way to make money. I am willing to work several part time jobs and write online just to make ends meet, although I would love a great paying full time job if I can ever have one again.
13 | https://hubpages.com/money/HubMob-Weekly-TopicAn-Idiots-Guide-to-Managing-Money | CC-MAIN-2017-39 | refinedweb | 2,808 | 65.05 |
The intravoxel incoherent motion (IVIM) model describes diffusion and perfusion in the signal acquired with a diffusion MRI sequence that contains multiple low b-values. The IVIM model can be understood as an adaptation of the work of Stejskal and Tanner [Stejskal65] in biological tissue, and was proposed by Le Bihan [LeBihan84]. The model assumes two compartments: a slow moving compartment, where particles diffuse in a Brownian fashion as a consequence of thermal energy, and a fast moving compartment (the vascular compartment), where blood moves as a consequence of a pressure gradient. In the first compartment, the diffusion coefficient is \(\mathbf{D}\) while in the second compartment, a pseudo diffusion term \(\mathbf{D^*}\) is introduced that describes the displacement of the blood elements in an assumed randomly laid out vascular network, at the macroscopic level. According to [LeBihan84], \(\mathbf{D^*}\) is greater than \(\mathbf{D}\).
The IVIM model expresses the MRI signal as follows:
\[S(b)=S_0(fe^{-bD^*}+(1-f)e^{-bD})\]
where \(\mathbf{b}\) is the diffusion gradient weighting value (which depends on the measurement parameters), \(\mathbf{S_{0}}\) is the signal in the absence of diffusion gradient sensitization, \(\mathbf{f}\) is the perfusion fraction, \(\mathbf{D}\) is the diffusion coefficient and \(\mathbf{D^*}\) is the pseudo-diffusion constant, due to vascular contributions.
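As a quick numerical illustration of this bi-exponential model, the signal equation can be written directly in NumPy. The parameter values below are hypothetical, tissue-like choices (not fitted values): a 10% perfusion fraction and a \(\mathbf{D^*}\) an order of magnitude larger than \(\mathbf{D}\), as the model assumes.

```python
import numpy as np

def ivim_signal(b, S0, f, D_star, D):
    """Bi-exponential IVIM signal: fast (perfusion) + slow (diffusion) compartments."""
    return S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D))

# Hypothetical values for illustration only.
bvals = np.array([0., 10., 50., 100., 400., 1000.])
signal = ivim_signal(bvals, S0=1.0, f=0.1, D_star=0.01, D=0.001)
print(signal)  # starts at S0 and decays monotonically with b
```

Note the steep initial drop at low b-values (driven by the \(e^{-bD^*}\) term) followed by the slower mono-exponential decay — this is exactly the behavior the multi-b-value acquisition below is designed to capture.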
In the following example we show how to fit the IVIM model on a diffusion-weighted dataset and visualize the diffusion and pseudo-diffusion coefficients. First, we import all relevant modules:
import matplotlib.pyplot as plt import numpy as np from dipy.reconst.ivim import IvimModel from dipy.core.gradients import gradient_table from dipy.data import get_fnames from dipy.io.gradients import read_bvals_bvecs from dipy.io.image import load_nifti_data
We get an IVIM dataset using DIPY's data fetcher get_fnames('ivim').
This dataset was acquired with 21 b-values in 3 different directions.
Volumes corresponding to different directions were registered to each
other, and averaged across directions. Thus, this dataset has 4 dimensions,
with the length of the last dimension corresponding to the number
of b-values. In order to use this model the data should contain signals
measured at 0 bvalue.
fraw, fbval, fbvec = get_fnames('ivim')
The gtab contains a GradientTable object (information about the gradients e.g.
b-values and b-vectors). We get the data from the file using
load_nifti_data.
data = load_nifti_data(fraw) bvals, bvecs = read_bvals_bvecs(fbval, fbvec) gtab = gradient_table(bvals, bvecs, b0_threshold=0) print('data.shape (%d, %d, %d, %d)' % data.shape)
The data has 54 slices, with 256-by-256 voxels in each slice. The fourth dimension corresponds to the b-values in the gtab. Let us visualize the data by taking a slice midway (z=33) at \(\mathbf{b} = 0\).
z = 33 b = 0 plt.imshow(data[:, :, z, b].T, origin='lower', cmap='gray', interpolation='nearest') plt.axhline(y=100) plt.axvline(x=170) plt.savefig("ivim_data_slice.png") plt.close()
The region around the intersection of the cross-hairs in the figure contains cerebral spinal fluid (CSF), so it should have a very high \(\mathbf{f}\) and \(\mathbf{D^*}\), the area just medial to that is white matter so that should be lower, and the region more laterally contains a mixture of gray matter and CSF. That should give us some contrast to see the values varying across the regions.
x1, x2 = 90, 155 y1, y2 = 90, 170 data_slice = data[x1:x2, y1:y2, z, :] plt.imshow(data[x1:x2, y1:y2, z, b].T, origin='lower', cmap="gray", interpolation='nearest') plt.savefig("CSF_slice.png") plt.close()
Now that we have prepared the datasets we can go forward with the IVIM fit. We provide two methods of fitting the parameters of the IVIM multi-exponential model explained above. We first fit the model with a simple fitting approach by passing the option fit_method='trr'. This method uses a two-stage approach: first, a linear fit is used to get quick initial guesses for the parameters \(\mathbf{S_{0}}\) and \(\mathbf{D}\), by considering b-values greater than split_b_D (default: 400) and assuming a mono-exponential signal. This is based on the assumption that at high b-values the signal can be approximated as a mono-exponential decay, so that taking the logarithm of the signal values yields a linear fit. Another linear fit for S0 (bvals < split_b_S0, default: 200) follows, and f is estimated using \(1 - S0'/S0\). Then a non-linear least-squares fitting is performed to fit D_star and f. If the two_stage flag is set to True while initializing the model, a final non-linear least-squares fitting is performed for all the parameters. All initializations for the model, such as split_b_D, are passed while creating the IvimModel. If you are using Scipy 0.17, you can also set bounds by passing bounds=([0., 0., 0., 0.], [np.inf, 1., 1., 1.]) while initializing the IvimModel.
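To make the two-stage idea concrete, here is a minimal, self-contained sketch in plain NumPy (an illustration of the principle, not DIPY's actual implementation). A log-linear fit over the high b-values (b > 400) gives \(\mathbf{D}\) and the extrapolated intercept \(S0'\) of the diffusion compartment, and the perfusion fraction follows from \(1 - S0'/S0\):

```python
import numpy as np

# Simulate a noise-free bi-exponential IVIM signal with known ground truth.
bvals = np.linspace(0, 1000, 21)
S0_true, f_true, D_star_true, D_true = 1.0, 0.1, 0.01, 0.001
signal = S0_true * (f_true * np.exp(-bvals * D_star_true)
                    + (1 - f_true) * np.exp(-bvals * D_true))

# Stage 1: at b > 400 the perfusion term has essentially decayed, so the
# signal is approximately mono-exponential and log(S) is linear in b.
high = bvals > 400
slope, intercept = np.polyfit(bvals[high], np.log(signal[high]), 1)
D_est = -slope                  # estimated diffusion coefficient
S0_prime = np.exp(intercept)    # diffusion compartment extrapolated to b=0

# Stage 2: the perfusion fraction from the intercept gap at b=0.
f_est = 1 - S0_prime / signal[0]

print(D_est, f_est)  # close to the true values 0.001 and 0.1
```

In DIPY these quick estimates then seed the non-linear least-squares stage; with noisy data the non-linear refinement matters much more than in this clean simulation.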
For brevity, we focus on a small section of the slice as selected above, to fit the IVIM model. First, we instantiate the IvimModel object.
ivimmodel = IvimModel(gtab, fit_method='trr')
To fit the model, call the fit method and pass the data for fitting.
ivimfit = ivimmodel.fit(data_slice)
The fit method creates an IvimFit object which contains the parameters of the model obtained after fitting. These are accessible through the model_params attribute. The parameters are arranged as a 4D array, corresponding to the spatial dimensions of the data, with the last dimension (of length 4) corresponding to the model parameters: \(\mathbf{S_{0}}\), \(\mathbf{f}\), \(\mathbf{D^*}\) and \(\mathbf{D}\).
ivimparams = ivimfit.model_params print("ivimparams.shape : {}".format(ivimparams.shape))
As we see, we fitted a 65-by-80 section of the slice at height z = 33, i.e. 5200 voxels. We will now plot the parameters obtained from the fit for a single voxel and also various maps for the entire section. This will give us an idea about the diffusion and perfusion in that region. Let (i, j) denote the in-plane coordinates of a voxel. We have already fixed the z component at 33, so all values come from the slice 33 units up.
i, j = 10, 10
estimated_params = ivimfit.model_params[i, j, :]
print(estimated_params)
Now we can map the perfusion and diffusion maps for the slice. We
will plot a heatmap showing the values using a colormap. It will be
useful to define a plotting function for the heatmap here since we
will use it to plot for all the IVIM parameters. We will need to specify
the lower and upper limits for our data. For example, the perfusion
fractions should be in the range (0,1). Similarly, the diffusion and
pseudo-diffusion constants are much smaller than 1. We pass an argument
called variable to our plotting function, which gives the label for
the plot.
def plot_map(raw_data, variable, limits, filename):
    fig, ax = plt.subplots(1)
    lower, upper = limits
    ax.set_title('Map for {}'.format(variable))
    im = ax.imshow(raw_data.T, origin='lower', clim=(lower, upper),
                   cmap="gray", interpolation='nearest')
    fig.colorbar(im)
    fig.savefig(filename)
Let us get the various plots with fit_method = ‘trr’ so that we can visualize them in one page
plot_map(ivimfit.S0_predicted, "Predicted S0", (0, 10000), "predicted_S0.png")
plot_map(data_slice[:, :, 0], "Measured S0", (0, 10000), "measured_S0.png")
plot_map(ivimfit.perfusion_fraction, "f", (0, 1), "perfusion_fraction.png")
plot_map(ivimfit.D_star, "D*", (0, 0.01), "perfusion_coeff.png")
plot_map(ivimfit.D, "D", (0, 0.001), "diffusion_coeff.png")
Next, we will fit the same model with a more refined optimization process, fit_method=’VarPro’ (for “Variable Projection”). VarPro computes the IVIM parameters using the MIX approach [Farooq16]. This algorithm uses three different optimizers: it starts with a differential evolution algorithm that fits the parameters appearing in the exponents. The parameters fitted in the first step are then used to construct a linear convex problem, and the volume fractions are determined using convex optimization. The last step is a non-linear least-squares fit over all the parameters, with the results of the first two optimizers used as initial values for this final step.
As opposed to the ‘trr’ fitting method, this approach does not need to set any thresholds on the bvals to differentiate between the perfusion (pseudo-diffusion) and diffusion portions and fits the parameters simultaneously. Making use of the three step optimization mentioned above increases the convergence basin for fitting the multi-exponential functions of microstructure models. This method has been described in further detail in [Fadnavis19] and [Farooq16].
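The core variable-projection observation, namely that for fixed decay rates the volume fractions enter the model linearly and can be solved in closed form, can be sketched as follows. This is a toy illustration with hypothetical values, not DIPY's MIX implementation:

```python
import numpy as np

# Synthetic bi-exponential IVIM signal with hypothetical parameters.
bvals = np.linspace(0, 1000, 21)
S0, f, D_star, D = 1000.0, 0.1, 0.01, 0.001
signal = S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))

def linear_amplitudes(rates, bvals, signal):
    # For fixed nonlinear parameters (the decay rates), the signal is
    # linear in the amplitudes: solve the resulting least-squares problem.
    A = np.exp(-np.outer(bvals, rates))        # design matrix, one column per rate
    amps, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return amps

# Suppose the differential-evolution stage recovered the true rates;
# the amplitudes (and hence S0 and f) then follow from a linear solve.
amps = linear_amplitudes(np.array([D_star, D]), bvals, signal)
S0_est = amps.sum()           # total signal at b = 0
f_est = amps[0] / S0_est      # perfusion fraction
print(S0_est, f_est)
```

Separating the problem this way shrinks the nonlinear search space to the decay rates alone, which is what widens the convergence basin for multi-exponential fits.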
ivimmodel_vp = IvimModel(gtab, fit_method='VarPro')
ivimfit_vp = ivimmodel_vp.fit(data_slice)
Just like the ‘trr’ fit method, ‘VarPro’ creates an IvimFit object which contains the parameters of the model obtained after fitting. These are accessible through the model_params attribute. The parameters are in the following order: \(S_{0}\), \(f\), \(D^{*}\), \(D\).
i, j = 10, 10
estimated_params = ivimfit_vp.model_params[i, j, :]
print(estimated_params)
To compare the fit using fit_method=’VarPro’ and fit_method=’trr’, we can plot one voxel’s signal and the model fit using both methods.
We will use the predict method of the IvimFit objects, to get the predicted signal, based on each one of the model fit methods.
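The predicted signal follows the bi-exponential IVIM equation \(S(b) = S_{0}\,(f e^{-b D^{*}} + (1 - f) e^{-b D})\). A minimal stand-alone version of this forward model (mirroring, but not identical to, the predict method) is:

```python
import numpy as np

def ivim_signal(bvals, S0, f, D_star, D):
    """Bi-exponential IVIM signal for an array of b-values."""
    return S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))

# Hypothetical parameter values; at b = 0 the signal equals S0,
# and it decays with increasing b.
bvals = np.array([0., 100., 400., 1000.])
pred = ivim_signal(bvals, S0=1000., f=0.1, D_star=0.01, D=0.001)
print(pred)
```

Both fit methods produce parameters for this same forward model, so plotting the two predicted curves against the measured signal is a direct visual comparison of the fits.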
fig, ax = plt.subplots(1)
ax.scatter(gtab.bvals, data_slice[i, j, :],
           color="green", label="Measured signal")
ivim_trr_predict = ivimfit.predict(gtab)[i, j, :]
ax.plot(gtab.bvals, ivim_trr_predict, label="trr prediction")
S0_est, f_est, D_star_est, D_est = ivimfit.model_params[i, j, :]
text_fit = """trr param estimates: \n S0={:06.3f} f={:06.4f}\n D*={:06.5f} D={:06.5f}""".format(S0_est, f_est, D_star_est, D_est)
ax.text(0.65, 0.80, text_fit, horizontalalignment='center',
        verticalalignment='center', transform=plt.gca().transAxes)
ivim_predict_vp = ivimfit_vp.predict(gtab)[i, j, :]
ax.plot(gtab.bvals, ivim_predict_vp, label="VarPro prediction")
ax.set_xlabel("bvalues")
ax.set_ylabel("Signals")
S0_est, f_est, D_star_est, D_est = ivimfit_vp.model_params[i, j, :]
text_fit = """VarPro param estimates: \n S0={:06.3f} f={:06.4f}\n D*={:06.5f} D={:06.5f}""".format(S0_est, f_est, D_star_est, D_est)
ax.text(0.65, 0.50, text_fit, horizontalalignment='center',
        verticalalignment='center', transform=plt.gca().transAxes)
fig.legend(loc='upper right')
fig.savefig("ivim_voxel_plot.png")
Let us get the various plots with fit_method = ‘VarPro’ so that we can visualize them in one page
plt.figure()
plot_map(ivimfit_vp.S0_predicted, "Predicted S0", (0, 10000), "predicted_S0.png")
plot_map(data_slice[..., 0], "Measured S0", (0, 10000), "measured_S0.png")
plot_map(ivimfit_vp.perfusion_fraction, "f", (0, 1), "perfusion_fraction.png")
plot_map(ivimfit_vp.D_star, "D*", (0, 0.01), "perfusion_coeff.png")
plot_map(ivimfit_vp.D, "D", (0, 0.001), "diffusion_coeff.png")
References:
[Stejskal65] Stejskal, E. O.; Tanner, J. E. (1 January 1965). “Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient”. The Journal of Chemical Physics 42 (1): 288. Bibcode: 1965JChPh..42..288S. doi:10.1063/1.1695690.
[LeBihan88] Le Bihan, Denis, et al. “Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging.” Radiology 168.2 (1988): 497-505.
[Fadnavis19] Fadnavis, Shreyas, et al. “MicroLearn: Framework for machine learning, reconstruction, optimization and microstructure modeling.” Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.
[Farooq16] Farooq, Hamza, et al. “Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI.” Scientific Reports 6 (2016).
Example source code
You can download
the full source code of this example. This same script is also included in the dipy source distribution under the
doc/examples/ directory.