text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
Survey period: 11 Mar 2013 to 18 Mar 2013
Pay a one-time $50 charge or pay $5 a month - but only on the months you need it. Which model works best for you?
public class Naerling : Lazy<Person>{
public void DoWork(){ throw new NotImplementedException(); }
}
Naerling wrote:Unfortunately there aren't so many good products around nowadays (not that I'm interested in anyway).
Naerling wrote:So long single player game that I can't play because of ridiculous DRM...
And WHY are they doing this on the new Sim City too...
CDP1802 wrote:Remember the times when their ideas of DRM included installing root kits on your computer
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/script/Surveys/View.aspx?srvid=1428&msg=4514712
|
CC-MAIN-2015-22
|
refinedweb
| 138
| 59.13
|
Await, SynchronizationContext, and Console Apps
Stephen
When I discuss the new async language features of C# and Visual Basic, one of the attributes I ascribe to the await keyword is that it “tries to bring you back to where you were.” For example, if you use await on the UI thread of your WPF application, the code that comes after the await completes should run back on that same UI thread.
There are several mechanisms that are used by the async/await infrastructure under the covers to make this marshaling work: SynchronizationContext and TaskScheduler. While the transformation is much more complicated than what I’m about to);
In other words, before the async method yields to asynchronously wait for the Task ‘t’, we capture the current SynchronizationContext. When the Task being awaited completes, a continuation will run the remainder of the asynchronous method. If the captured SynchronizationContext was null, then RestOfMethod() will be executed in the original TaskScheduler (which is often TaskScheduler.Default, meaning the ThreadPool). If, however, the captured context wasn’t null, then the execution of RestOfMethod() will be posted to the captured context to run there.
Both SynchronizationContext and TaskScheduler are abstractions that represent a “scheduler”, something that you give some work to, and it determines when and where to run that work. There are many different forms of schedulers. For example, the ThreadPool is a scheduler: you call ThreadPool.QueueUserWorkItem to supply a delegate to run, that delegate gets queued, and one of the ThreadPool’s threads eventually picks up and runs that delegate. Your user interface also has a scheduler: the message pump. A dedicated thread sits in a loop, monitoring a queue of messages and processing each; that loop typically processes messages like mouse events or keyboard events or paint events, but in many frameworks you can also explicitly hand it work to do, e.g. the Control.BeginInvoke method in Windows Forms, or the Dispatcher.BeginInvoke method in WPF.
SynchronizationContext, then, is just an abstract class that can be used to represent such a scheduler. The base class exposes several virtual methods, but we’ll focus on just one: Post. Post accepts a delegate, and the implementation of Post gets to decide when and where to run that delegate. The default implementation of SynchronizationContext.Post just turns around and passes it off to the ThreadPool via QueueUserWorkItem. But frameworks can derive their own context from SynchronizationContext and override the Post method to be more appropriate to the scheduler being represented. In the case of Windows Forms, for example, the WindowsFormsSynchronizationContext implements Post to pass the delegate off to Control.BeginInvoke. For DispatcherSynchronizationContext in WPF, it calls to Dispatcher.BeginInvoke. And so on.
That’s how await “brings you back to where you were.” It asks for the SynchronizationContext that’s representing the current environment, and then when the await completes, the continuation is posted back to that context. It’s up to the implementation of the captured context to run the delegate in the right place, e.g. in the case of a UI app, that means running the delegate on the UI thread. This explanation also helps to highlight what happens if the environment didn’t set a SynchronizationContext onto the current thread (and if there’s not special TaskScheduler, as there isn’t in this case). If the context comes back as null, then the continuation could run “anywhere”. I put anywhere in quotes because obviously the continuation can’t run “anywhere,” but logically you can think of it like that… it’ll either end up running on the same thread that completed the awaited task, or it’ll end up running in the ThreadPool.
All of the UI application types you can create in Visual Studio will end up having a special SynchronizationContext published on the UI thread. Windows Forms, Windows Presentation Foundation, Metro style apps… they all have one. But there’s one common kind of application that doesn’t have a SynchronizationContext: console apps. When your console application’s Main method is invoked, SynchronizationContext.Current will return null. That means that if you invoke an asynchronous method in your console app, unless you do something special, your asynchronous methods will not have thread affinity: the continuations within those asynchronous methods could end up running “anywhere.”
As an example, consider this application:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
class Program
{
static void Main()
{
DemoAsync().Wait();
}
static async Task DemoAsync()
{
var d = new Dictionary<int, int>();
for (int i = 0; i < 10000; i++)
{
int id = Thread.CurrentThread.ManagedThreadId;
int count;
d[id] = d.TryGetValue(id, out count) ? count+1 : 1;
await Task.Yield();
}
foreach (var pair in d) Console.WriteLine(pair);
}
}
Here I’ve created a dictionary that maps thread IDs to the number of times we encountered that particular thread. For thousands of iterations, I get the current thread’s ID and increment the appropriate element of my histogram, then yield. The act of yielding will use a continuation to run the remainder of the method. Here’s some representative output I see from executing this app:
[1, 1]
[3, 2687]
[4, 2399]
[5, 2397]
[6, 2516]
Press any key to continue . . .
We can see here that the execution of this code used 5 threads over the course of its run. Interestingly, one of the threads only had one hit. Can you guess which thread that was? It’s the thread running the Main method of the console app. When we call DemoAsync, it runs synchronously until the first await the yields, so the first time we check the ManagedThreadId for the current thread, we’re still on the thread that invoked DemoAsync. Once we hit the await, the method returns back to Main(), which then blocks waiting on the returned Task to complete. The continuations used by the remainder of the async method’s execution would have been posted to SynchronizationContext.Current, except that it a console app, it’s null (unless you explicitly override that with SynchronizationContext.SetSynchronizationContext). So the continuations just get scheduled to run on the ThreadPool. That’s where the rest of those threads are coming from… they’re all ThreadPool threads.
Is it a problem then that using async like this in a console app might end up running continuations on ThreadPool threads? I can’t answer that, because the answer is entirely up to what kind of semantics you need in your application. For many applications, this will be perfectly reasonable behavior. Other applications, however, may require thread affinity, such that all of the continuations run on the same thread. For example, if you invoked multiple async methods concurrently, you might want all the continuations they use to be serialized, and an easy way to guarantee that is to ensure that only one thread is used for executing all of the continuations. If your application does demand such behavior, are you out of luck? Thankfully, the answer is ‘no’. You can add such behavior yourself.
If you’ve made it this far in reading, hopefully the components of a solution here have started to become obvious. You effectively need a message pump, a scheduler, something that runs on the Main thread of your app processing a queue of work. And you need a SynchronizationContext (or a TaskScheduler if you prefer) that feeds the await continuations into that queue. With that framework in place, let’s build a solution.
First, we need our SynchronizationContext. As described in the previous paragraph, we’ll need a queue to store the work to be done. The work provided to the Post method comes in the form of two objects: a SendOrPostCallback delegate, and an object state that is meant to be passed into that delegate when it’s invoked. As such, we’ll have our queue store a KeyValuePair<TKey,TValue> of these two objects. What kind of queue data structure should we use? We need something ideally suited to handle producer/consumer scenarios, as our asynchronous method will be “producing” these pairs of work, and our pumping loop will need to be “consuming” them from the queue and executing them. .NET 4 saw the introduction of the perfect type for the job: BlockingCollection<T>. BlockingCollection<T> is a data structure that encapsulates not only a queue, but also all of the synchronization necessary to coordinate between a producer adding elements to that queue and a consumer removing them, including blocking the consumer attempting a removal while the queue is empty.
With that, the pieces fall into place: a BlockingCollection<KeyValuePair<SendOrPostCallback,object>> instance; a Post method that adds to the queue; another method that sits in a consuming loop, removing each work item and processing it; and finally another method that lets the queue know that no more work will arrive, allowing the consuming loop to exit once the queue is empty.
private sealed class SingleThreadSynchronizationContext :
SynchronizationContext
{
private readonly
BlockingCollection<KeyValuePair<SendOrPostCallback,object>>
m_queue =
new BlockingCollection<KeyValuePair<SendOrPostCallback,object>>();
public override void Post(SendOrPostCallback d, object state)
{
m_queue.Add(
new KeyValuePair<SendOrPostCallback,object>(d, state));
}
public void RunOnCurrentThread()
{
KeyValuePair<SendOrPostCallback, object> workItem;
while(m_queue.TryTake(out workItem, Timeout.Infinite))
workItem.Key(workItem.Value);
}
public void Complete() { m_queue.CompleteAdding(); }
…
}
Believe it or not, we’re already half done with our solution. We need to instantiate one of these contexts and set it as current onto the current thread, so that when we then invoke the asynchronous method, that method’s awaits will see this context as Current. We need to alert the context to when there won’t be any more work arriving, which we can do by using a continuation to call Complete on our context when the Task returned from the async method is compelted. We need to run the processing loop via the context’s RunOnCurrentThread method. And we need to propagate any exceptions that may have occurred during the async method’s processing. All in all, it’s just a few lines:); }
}
That’s it. With our solution now available, I can change the Main method of my demo console app from:
static void Main()
{
DemoAsync().Wait();
}
to instead use our new AsyncPump.Run method:
static void Main()
{
AsyncPump.Run(async delegate
{
await DemoAsync();
});
}
When I then run my app again, this time I get the following output:
[1, 10000]
Press any key to continue . . .
As you can see, all of the continuations have run on just one thread, the main thread of my console app.
The AsyncPump sample class described in this post is available as an attachment to this post.
|
https://devblogs.microsoft.com/pfxteam/await-synchronizationcontext-and-console-apps/
|
CC-MAIN-2019-18
|
refinedweb
| 1,767
| 54.22
|
A functional reactive alternative to Spring
Options Just Ahead sign image via Shutterstock.
When you want to stray away slightly from what the magic annotations allow, you suddenly hit a wall: you start debugging through hundreds of lines of framework code to figure out what it’s doing, and how you can convince the framework to do what you want instead.
datamill is a Java web framework that is a reaction to that approach. Unlike other modern Java frameworks, it makes the flow and manipulation of data through your application highly visible. How does it do that? It uses a functional reactive style built on RxJava. This allows you to be explicit about how data flows through your application, and how to modify that data as it does. At the same time, if you use Java 8 lambdas (datamill and RxJava are intended to be used with lambdas), you can still keep your code concise and simple.
Let’s take a look at some datamill code to illustrate the difference:
public static void main(String[] args) { OutlineBuilder outlineBuilder = new OutlineBuilder(); Server server = new Server( rb -> rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok())) .elseIfMatchesBeanMethod(outlineBuilder.wrap(new TokenController())) .elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController())) .orElse(r -> r.respond(b -> b.notFound())), (request, throwable) -> handleException(throwable)); server.listen(8081); }
A few important things to note:
- datamill applications are primarily intended to be started as standalone Java applications – you explicitly create the HTTP server, specify how requests are handled, and have the server start listening on a port. Unlike traditional JEE deployments where you have to worry about configuring a servlet container or an application server, you have control of when the server itself is started. This also makes creating a Docker container for your server dead simple. Package up an executable JAR using Maven and stick it in a standard Java container.
- When a HTTP request arrives at your server, it is obvious how it flows through your application. The line
rb.ifMethodAndUriMatch(Method.GET, "/status", r -> r.respond(b -> b.ok()))
says that the server should first check if the request is a HTTP GET request for the URI /status, and if it is, return a HTTP OK response.
- The next two lines show how you can organize your request handlers while still maintaining an understanding of what happens to the request.For example, the line
.elseIfMatchesBeanMethod(outlineBuilder.wrap(new UserController()))
says that we will see if the request matches a handler method on the UserControllerinstance we passed in. To understand how this matching works, take a look at the UserController class, and one of the request handling methods:
@Path("/users") public class UserController { ... @GET @Path("/{userName}") public Observable<Response> getUser(ServerRequest request) { return userRepository.getByUserName(request.uriParameter("userName").asString()) ()))) .switchIfEmpty(request.respond(b -> b.notFound())); } ... }
You can see that we use @Path and @GET annotations to mark request handlers. But the difference is that you can pin-point where the attempt to match the HTTP request to an annotated method was made. It was within your application code – you did not have to go digging through hundreds of lines of framework code to figure out how the framework is routing requests to your code.
- Finally, in the code from the
UserController, notice how the response is created – and how explicit the composition of the JSON is within datamill:
())))
You have full control of what goes into the JSON. For those who have ever tried to customize the JSON output by Jackson to omit properties, or for the poor souls who have tried to customize responses when using Spring Data REST, you will appreciate the clarity and simplicity.
Just one more example from an application using datamill – consider the way we perform a basic select query:
public class UserRepository extends Repository<User> { ... public Observable<User> getByUserName(String userName) { return executeQuery( (client, outline) -> client.selectAllIn(outline) .from(outline) .where().eq(outline.member(m -> m.getUserName()), userName) .execute() .map(r -> outline.wrap(new User()) .set(m -> m.getId(), r.column(outline.member(m -> m.getId()))) .set(m -> m.getUserName(), r.column(outline.member(m -> m.getUserName()))) .set(m -> m.getEmail(), r.column(outline.member(m -> m.getEmail()))) .set(m -> m.getPassword(), r.column(outline.member(m -> m.getPassword()))) .unwrap())); } ... }
A few things to note in this example:
- Notice the visibility into the exact SQL query that is composed. For those of you who have ever tried to customize the queries generated by annotations, you will again appreciate the clarity. While in any single application, a very small percentage of the queries need to be customized outside of what a JPA implementation allows, almost all applications will have at least one of these queries. And this is usually when you get the sinking feeling before delving into framework code.
- Take note of the visibility into how data is extracted from the result and placed into entity beans.
- Finally, take note of how concise the code remains, with the use of lambdas and RxJava Observable operators.
Hopefully that gives you a taste of what datamill offers. What we wanted to highlight was the clarity you get on how requests and data flows through your application, and the clarity into how data is transformed.
datamill is still in an early stage of development but we’ve used it to build several large web applications. We find it a joy to work with.
We hope you’ll give it a try – we are looking for feedback. Go check it out.
This post was originally published on Stacks and Foundations.
|
https://jaxenter.com/a-functional-reactive-alternative-to-spring-127054.html
|
CC-MAIN-2020-40
|
refinedweb
| 925
| 52.19
|
The Setup
I just spent a week writing some editor code that allows me to place spawn points around my map (in the editor).
The editor script, SpawnEditorWindow, resides in the Editor folder.
The manager for that script, called SpawnPointManager (resides outside Editor folder), is connected to an empty game object in my scene.
Also connected to that empty game object is another script called SpawnManager, which communicates with my in-game pauseMenu to spawn all the characters in the map at runtime.
SpawnManager draws information from SpawnPointManager, which in turn figure out how many spawns there are from SpawnEditorWindow (basically, from the editor slider bars).
The Problem
So, on to my problem... This works beautifully inside the editor. I hook everything up, open the spawn editor window, adjust how many spawns I want, position them, etc.
However... when I go to run this, the List those spawns isn't initalized... or maybe it's reintialized, I don't know. Anyway, the List count is zero, thus no spawns exist.
The Question
What are my options here? How can I get this information into runtime? It's a List.
Make sure that all the relevant information is being stored in serialized objects in the scene. Information that is only in the editor window will be lost whenever the scripts reload.
Answer by PaxNemesis
·
Mar 16, 2012 at 03:37 PM
If you put your own custom class into a list it have to be Serialized not to disappear. So if you have a custom SpawnPoint script you need to use the [System.Serializable] attribute:
[System.Serializable]
public class SpawnPoint : MonoBehaviour
{
// Stuff you want to do.
}
public SpawnPointManager : MonoBehaviour
{
public List<SpawnPoint> spawns = new List<SpawnPoint>();
// Do the rest of your stuff.
}
Thanks for the snippet and the explaination. Your tip about the custom class list was spot on. It's working now, thanks!
PS: Anyone reading this in the future may want to know that I had to delete the game objects and reattach my scripts after this for it to recognize it. Not sure why.
Answer by DaveA
·
Mar 16, 2012 at 07:16 AM
You could have a game object with a script containing a List and have your editor script Find it and copy to it.
That's actually how it's set up now. SpawnPointManager (records spawn points) and SpawnManager (what actually manages the spawning), are both attached to an empty game object. The editor script is finding the SpawnPointManager script and setting that game objects position (the spawn node) to what the slider in the editor window says. No dice..
How to stop game from running in editor
1
Answer
How to create EditorWindow on game load
1
Answer
scripts not working at runtime?
1
Answer
Game mechanic Tips
0
Answers
how to create handles like editor gizmos in runtime?
6
Answers
|
http://answers.unity3d.com/questions/228077/getting-data-from-editor-into-runtime.html
|
CC-MAIN-2016-07
|
refinedweb
| 477
| 65.22
|
Chinese Remainder Theorem with solution in C++
Today we will see a rather interesting problem called the Chinese remainder problem and try to understand the theory behind it. The Chinese Remainder theorem was first introduced by a Chinese mathematician Sun Tzu. Let us define the problem properly.
Chinese Remainder Problem in C++
Let us consider a set of numbers p1,p2,…..,pn such that they are co-prime. Our objective is to find an unknown,x, given the following data:
x= a1 mod p1 meaning that x gives remainder a1 when divided by p1
x=a2 mod p2 x gives remainder a2 when divided by p2
……
x= an mod pn
with this information, we are supposed to determine x. We will see shortly, however, that multiple values of x are possible which satisfy the given constraints. Note that the solution does not always exist as the constraints may be contradicting.
Solution to the Chinese Remainder Problem
So as usual we can solve it using brute force or efficiently. Let us see the brute force first to get an idea.
Brute force implementation
So we have a set of numbers which are coprime and their corresponding remainders as the input. We need to find a number that satisfies these conditions.
The obvious idea would be to go through each number starting from 1 and checking if it satisfies the conditions. But, of course, it is not a very refined method as it becomes computationally expensive as we go to larger numbers.
#include<iostream> #include<vector> using namespace std; int main() { vector<int> pr, rem; // vector that holds the coprimes and the remainders int i,num=1,n,flag=1; int p,r; cin>>n; // number of inputs for(i=0;i<n;i++) { cin>>p>>r; pr.push_back(p); rem.push_back(r); } while(1) { flag=1; for(i=0;i<n;i++) if(num%pr[i] != rem[i]) // if the number does not satisfy we flag it { flag=0; } if(flag==1) // if the flag does not change it means all conditions are satisfied break; num++; } cout<< num; return 0; }
Now let’s get to the proper solution.
Efficient method
Let us take an example to understand the problem more deeply.
Say,
1.x= 3 mod 4
2.x= 5 mod 6
3.x= 2 mod 5
We will start the manual calculation with the first statement.
1.x= 3 mod 4. Now when we take natural numbers, we can see that 3, 7, 11,15,… and so on can satisfy this. Or, in other words, all numbers from 3 as we add 4 will satisfy this condition. Let us call this a 3(+4 ) set.
Now let us take the second statement and filter out the possible numbers from the existing set of 3(+4) numbers.
2.x= 5 mod 6. The first number from the 3 (+4) set to satisfy this is 11, then we have 23, and so on. We can see that the common difference has become 12 which is the LCM of 6 and 4. Also notice that we haven’t checked all the natural numbers that satisfy this condition this time, only the ones in the previous set. This will reduce the number of checkings we have to do.
So now the set becomes 11 (+12) set.
To this let us add the final condition.
3.x= 2 mod 5. If we do this similarly we can check that 47 is the first number in the 11(+12 ) set that satisfies this condition. Now we can calculate the common difference as LCM of 5,12 which is 60. So the final set is 47 (+60 ).
The solution is 47, 107,167, etc. as all these numbers will satisfy the conditions. As we have discussed there are multiple solutions possible in this scenario. But generally we take the least number as the solution as we have done in the brute force method.
Let us now see the c++ code for this.
#include<iostream> #include<vector> using namespace std; int gcd(int n1,int n2) // calculating gcd using eucledian's formula { if(n2==0) return n1; return gcd(n2,n1%n2); } int lcm(int n1,int n2) { //caculating lcm using the formula: lcm x gcd = n1 x n2 int gc,lc; gc=gcd(n1,n2); lc=n1*n2/gc; return lc; } int main() { int x,cd,i,p,r,n; int cnt,flag; // we use this to count the max checks, we can keep it as much as we want vector<int> pr,rem; cin>>n; // number of inputs for(i=0;i<n;i++) { cin>>p>>r; pr.push_back(p); rem.push_back(r); } x=rem[0]; // let us say the first number will be the starting and cd= pr[0]; // the first remainder will be the common difference like we saw in the example for(i=1;i<n;i++) { cnt=0; flag=0; while(cnt<100000) // Here we have taken a max value for 10^5 because we are dealing with large numbers { if(x%pr[i]==rem[i]) // condition for finding the next start value { flag=1; break; } x=x+cd; } if(flag==0) // This means that no number until cnt range has matched the condition. { cout<<"No solution."; // Hence no solution is possible return 0; } cd=lcm(pr[i],cd); // updating the cd } cout<<x<<" "<<cd; return 0; }
So we can see the output for the example we have seen earlier:
3 4 3 6 5 5 2 47 60
We have used the Euclidean method of finding the HCF and then using the well-known formula to calculate the lcm. For more understanding of how to calculate the HCF using the Eucledian Method, visit this page
|
https://www.codespeedy.com/chinese-remainder-theorem-with-solution-in-cpp/
|
CC-MAIN-2022-27
|
refinedweb
| 954
| 70.02
|
Understanding BigDecimal round() Method in Java
Hello Learners, today we are going to learn about the BigDecimal round method in Java. The round method belongs to the BigDecimal class. The class resides under java.math package.
BigDecimal class provides methods to operate on larger or even very smaller floating-point numbers. The round method is one of those methods that used to round off a floating-point number up to the given precision.
Let’s see a simple code to check how the round function works.
import java.math.*; public class bigdeciround { public static void main(String[] args) { BigDecimal bg1 = new BigDecimal("23.426").round(new MathContext(4)); BigDecimal bg2 = new BigDecimal("508.283").round(new MathContext(5)); BigDecimal bg3 = new BigDecimal("57.05").round(new MathContext(3)); BigDecimal bg4 = new BigDecimal("33.5").round(new MathContext(2)); System.out.println("The value after rounding off "+bg1); System.out.println("The value after rounding off "+bg2); System.out.println("The value after rounding off "+bg3); System.out.println("The value after rounding off "+bg4); } }
OUTPUT:
The value after rounding off 23.43 The value after rounding off 508.28 The value after rounding off 57.1 The value after rounding off 34
Explanation
The round method follows the basic math rules to round off any number. Let’s examine these numbers.
- First, you need to create an object of big decimal class with the value you want to round off under double-quotes. Then you have to call the round method.
- Inside the round method, you must pass the object of MathContext class for passing the precision and you are done.
- In the bg1, we passed 23.426 and gave the precision as 4. So the round method will round off the number after 4th digit from left. It will remove everything starting from the 5th position which is 6.
- Here, 6 is greater than 2. So while rounding off it will add one at the last digit which is 2. Now the new number after rounding off is 23.43
- Similarly, with bg2 we have given the precision as 5. It will remove everything after 5th digit. Now here, 3 is smaller than 8 so while rounding off one was not added to 8 and 3 is simply removed. The new number after rounding off is 508.28 and the same way the other numbers are rounded off.
That’s it, Done. Try to do it on your own, it’s a simple code.
So, that’s all for now about the BigDecimal round method in Java. Till then Keep Learning, Keep Practicing, Keep Reading, Keep Coding!
“THINK TWICE CODE ONCE!”
|
https://www.codespeedy.com/bigdecimal-round-method-in-java/
|
CC-MAIN-2020-45
|
refinedweb
| 442
| 69.68
|
> From: Jan.Materne@rzf.fin-nrw.de [mailto:Jan.Materne@rzf.fin-nrw.de]
>
> > I agree with the goal to make as many things antlibs as possible, I
> > could even be convinced that we start to break up our
> current set of
> > core/optional tasks into antlibs with independent release cycles.
>
> Then we´ll earn the whole inter-project-dependency-problems.
> We have to ensure that an AntLib works with all AntCore
> releases or fails with a defined error.
>
Which means having some sort of version dependency infrastructure.
> I agree that breaking Ant into several modules would improve the
> development process, especially for the optional tasks. An
> AntLib for the task implementation AND the correct version of
> the 3rd party lib.
>
>
> But I see two things:
> 1. Plugging in AntLibs should be very easy - especially if they were
> part of earlier Ant releases. Or we have to use namespaces
> everywhere...
> <project xmlns: xmlns: ...>
> is a little bit too long, I think.
>
> So dropping the AntLibs into a directory and they will be
> "auto-deployed"
> into Ant (I like that feature of JBoss :-) will be fine.
>
Agreed. What I would like is for this to be done by antlib itself
and not by core. So what I envision is some sort of <import>
directive that allows one antlib to import into its namespace
definitions of other antlibs.
a) Being able to add to the antlib.xml file something like:
<antlib>
<import basedir="${ant.home}/libs">
<include name="*.jar"/>
</import>
</antlib>
Where the import directive will look on each JAR file for and antlib.xml
and load the jar into its classloaders and the antlib.xml definitions
into its own.
b) once we have such a mechanism, third parties can provide their
own cluster of related files that can be load globally or in a more
per project organization.
> 2. It should be easy for the user to get a "complete" working
> version of
> Ant. So we should provide a bundle of the AntCore and a
> set of AntLibs.
>
That would be simple.
Jose Alberto
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org
|
http://mail-archives.apache.org/mod_mbox/ant-dev/200402.mbox/%3CF3D701FEF0483B4AAA11A18CE328461B078184@leeds.cellectivity.com%3E
|
CC-MAIN-2015-18
|
refinedweb
| 362
| 67.15
|
This.
Step-by-step instruction
Possible Causes
The SCR source and the SCR target servers have FQDNs with disjointed domain names
Resolution).
Possible Causes
Windows firewall settings are blocking the command
Resolution
Add the "Windows PowerShell" to the Exceptions list under Windows Firewall settings.
Step-by-step instruction
Add a Program to the Exceptions List
SCR Hidden Network Share is not created in a Cluster with Event id 2074
Possible Causes
Resources in the default Cluster group, such as Cluster IP Address, Cluster name and Quorum disk were moved to a different cluster group.
Resolution
Move the Cluster IP Address, Cluster name and Quorum disk to the default Cluster group.
Step-by-step instruction
I have implemented an SCR solution at my company and have been doing some DR testing on our failover server. It seems that the:
Get-Mailbox -Database hub-casSSGMBX-SSG |where {$_.objectClass -NotMatch ‘(SystemAttendantMailbox |ExOleDBSystemMailbox)’}| Move-Mailbox -ConfigurationOnly -TargetDatabase DRE2K7DRSSGDR?
Hi orliville
I can see in your case you are utlizing database portability, right?
well, database portability by design to be done when a storage group or mailbox database has been corrupted for anyreason, so that’s mean the server is up and running but thie specific storage group is dead, in this case this command will work fine..
Let Product Team correct me if i’m mistaken
."
This is true for the SCR Target PC in a scenario where we would have a total hardware failure or Site outage?
Hi Orliville,
I believe the clarification in technet article below will help:.
/RecoverServer isn’t what Orliville is trying to do here. You said that the above command doesn’t work. Can you clarify what you mean it doesn’t work? What was the error/ what makes you think that it only works when the original is active. With the -ConfigurationOnly switch the cmdlet shouldn’t even be talking to the source. Is the target (DRMBX) mounted?
Hi,
I believe you should read this document, it sums up all cases with what you can do :
I’m really waiting for SP2 to have some features of SCR visible in EMC as many clients are "scared" by EMS.
Cheers,
I second Jaycee’s comment: it would be great to get SCR into a more friendly state. The concept is great but if a site is slightly non-standard, the mysterious PowerShell incantations fail and you’re on your own.
I’ve had an incident open with Product Support for SIX WEEKS and we’ve made no progress (database won’t seed). I suspect disjoint namespace but, despite what’s said above, fix 951955 isn’t being distributed.
Hi
will the get-mailbox as orliville stated with the -configurationonly switch works if the source server is unaccessible? in our case we have 2 exchange servers located on 2 different sites, both servers are used in production and has 50/50 split mailboxes for users.
both servers also has SCR enabled between the First Storage Group on each servers. I have tried database portability between the two and it seems to be working, but i am not sure if get-mailbox will work if we totally lost the other site? /Recoverserver seems to be only use if there is a standbymachine dedicated only for recovery?
Hi
I am in the process of setting up Ex2K7 and am looking at implementing SCR in our main office, I understand the Database portability method in the vent of a corrupt database, but wwhat is the process if the source server has a sudden hardware failure is it the same or are there any differences
Thanks
Hi
When I run the command eseutil /r E02 to place the database in a clean shutdown I get an error stating
Recovery has indicated that there may be a lossy recovery option run recovery with the /a argument
Operation termainated with error -528
Do you have any idea what might be causing this
Thanks
|
https://blogs.technet.microsoft.com/exchange/2008/05/28/troubleshooting-top-exchange-2007-sp1-scr-issues/
|
CC-MAIN-2018-05
|
refinedweb
| 659
| 57.91
|
#include <sensors/sensors.hh>
An IMU sensor.
Constructor.
Destructor.
Returns the angular velocity in the IMU sensor local frame.
Get the category of the sensor.
Connect a signal that is triggered when the sensor is updated.
fills a msgs::Sensor message.
Get the sensor's ID.
Returns the imu message.
Return last measurement time.
Return last update time.
Returns the imu linear acceleration in the IMU sensor local frame.
Get name.
Return true if the sensor needs to be updated.
Get the sensor's noise model for a specified noise type.
get orientation of the IMU relative to a reference pose Initially, the reference pose is the boot up pose of the IMU, but user can call either SetReferencePose to define current pose as the reference frame, or call SetWorldToReferencePose to define transform from world frame to reference frame.
Get the sensor's parent's ID.
Returns the name of the sensor parent.
The parent name is set by Sensor::SetParent.
Reset the lastUpdateTime to zero.
Get fully scoped name of the sensor.
Set whether the sensor is active or not.
Set the sensor's parent.
Sets the current IMU pose as the reference NED pose, i.e.
X axis of the IMU is aligned with North, Y axis of the IMU is aligned with East, Z axis of the IMU is aligned with Downward (gravity) direction.
Set the update rate of the sensor.
Sets the rotation transform from world frame to IMU's reference frame.
For example, if this IMU works with respect to NED frame, then call this function with the transform that transforms world frame to NED frame. Subsequently, ImuSensor::Orientation will return identity transform if the IMU is aligned with the NED frame. This call replaces SetWorldToReferencePose.
Returns the topic name as set in SDF.
Reimplemented in GpuRaySensor, RaySensor, CameraSensor, ForceTorqueSensor, LogicalCameraSensor, MultiCameraSensor, SonarSensor, and WirelessTransceiver..
|
http://gazebosim.org/api/dev/classgazebo_1_1sensors_1_1ImuSensor.html
|
CC-MAIN-2017-51
|
refinedweb
| 310
| 61.02
|
Prev
C++ VC ATL STL Socket Code Index
Headers
Your browser does not support iframes.
Re: Simplest way to download a web page and print the content to stdout with boost
From:
"Francesco S. Carta" <entuland@gmail.com>
Newsgroups:
comp.lang.c++
Date:
Sun, 13 Jun 2010 15:03:42 -0700 (PDT)
Message-ID:
<18a07df9-ce87-4a2b-9c5a-cd51eb856881@x27g2000yqb.googlegroups.com>
"Francesco S. Carta" <entul...@gmail.com> wrote:
gervaz <ger...@gmail.com> wrote:
On Jun 13, 1:42 pm, "Francesco S. Carta" <entul...@gmail.com> wrote:
gervaz <ger...@gmail.com> wrote:
Hi all,
can you provide me the easiest way to download a web page (e.g.http=
://) and print the output to stdout using the boost
library?
Thanks,
Mattia
Yes, we can :-)
Sorry, but you should try to find the way by yourself first - that's
not hard, split the problem and ask Google, find pointers and follow
them, try to write some code and compile it. If you don't succeed you
can post here your attempts and someone will eventually point out the
mistakes.
--
FSC
Ok, nice advice :P
Here what I've done (adapted from what I've found reading the doc and
googling):
#include <iostream>
#include <boost/asio.hpp>
int main()
{
boost::asio::io_service io_service ;
boost::asio::ip::tcp::resolver resolver(io_service) ;
boost::asio::ip::tcp::resolver::query query("",
"http");
boost::asio::ip::tcp::resolver::iterator iter =
resolver.resolve(query);
boost::asio::ip::tcp::resolver::iterator end;
boost::asio::ip::tcp::endpoint endpoint;
while (iter != end)
{
endpoint = *iter++;
std::cout << endpoint << std::endl;
}
boost::asio::ip::tcp::socket socket(io_service);
socket.connect(endpoint);
boost::asio::streambuf request;
std::ostream request_stream(&request);
request_stream << "GET / HTTP/1.0\r\n";
request_stream << "Host: localhost \r\n";
request_stream << "Accept: */*\r\n";
request_stream << "Connection: close\r\n\r\n";
boost::asio::write(socket, request);
boost::asio::streambuf response;
boost::asio::read_until(socket, response, "\r\n\r\n");
std::cout << &response << std::endl;
return 0;
}
But I'm not able to retrieve the entire web content.
Other questions:
- the while loop seems like an iterator loop, but what
boost::asio::ip::tcp::resolver::iterator end stands for? Is a zero
value?
Whatever the value, in the framework of STL iterators the "end" one is
simply something used to match the end of the container / stream /
whatever so that you know there isn't more data / objects to get. You
shouldn't worry about its actual value - I ignore the details too,
maybe there is something wrong with your program and I'll have a look,
but I'm pressed and I wanted to drop in my 2 cents.
- to see the output I had to use &response, why?
That's not good to pass the address of a container to an ostream
unless you're sure its actual representation matches that of a null-
terminated c-style string. In this case I suppose you have to convert
that buffer to something else, in order to print its data.
There is also the chance that you have to
- call "read_until" to fill the buffer
- pick out the data from the buffer (eventually flushing / emptying
it)
multiple times, until there is no more data to fill it.
Hope that helps you refining your shot.
I've played with your program a bit. Up to the line:
request_stream << "GET / HTTP/1.0\r\n";
should be all fine.
In particular, the loop that checks for the end of the endpoint list
is fine because, as it seems, those iterators get automatically set to
mean "end" if you don't assign them to anything - it works differently
from, say, a std::list, where you have to explicitly refer to the
end() method of a list instantiation.
The first problem with your code is where you send the server the
"Host" header. You should replace "localhost" with the domain name you
want to read from - in this case:
request_stream << "Host:\r\n";
Then we have the (missing) loop to retrieve the data.
The function "read_until" that you are calling will throw when the
socket has no more data to read, and consider also that all overloads
of that function return a size_t with the amount of bytes that it has
transferred to the buffer.
Seems like you have to intercept the throw, in order to know when to
stop calling it. Another option is to use the "read_until" overload
that doesn't throw (it takes an error_code argument, instead) and
maybe check if the returned size_t is not null - then you would break
the loop.
So far we're just filling the buffer. For printing it out you have to
build an std::istream out of it and get the data out through the
istream.
Try to read_until "\r\n", not _until "\r\n\r\n", then getline on the
istream to a string.
If you want I'll post my (working?) code, but since I've learned a lot
by digging my way, I think you can take advantage of doing the same.
Have good coding and feel free to ask further details if you want -
heck, reading boost's template declarations is not very good time...
(don't exclude the fact that I could have said something wrong, it's
something new for me too, I hope to be corrected by more experienced
users out there, in such case)
--
FSC
Generated by PreciseInfo ™
)
|
https://preciseinfo.org/Convert/Articles_CPP/Socket_Code/C++-VC-ATL-STL-Socket-Code-100614010342.html
|
CC-MAIN-2022-05
|
refinedweb
| 900
| 70.23
|
Welcome to my brief homepage for CS 352. Announcements, homework hints, etc. may appear here.
My office hours will soon consist of Tuesday 12:30-1:30 (formerly 11:30-1:00) and Thursday 2:00-3:30 on the 20th floor of the tower.).
The class website is:
Goto for login and server information related to accessing the class newsgroup: utexas.class.cs352-hunt. You will need a USENET newsreader, like Mozilla's Thunderbird to read and post to the newgroup. Thunderbird is installed on the department linux machines and can be started by typing "/lusr/bin/thunderbird" from a terminal. Here is a short graphical tutorial on how to setup newsgroup reading in Thunderbird. Your user name and password can be retrieved from the first link of my paragraph.
Past Homework Grading Criteria Discussion Notes Homework Solutions
If you become stuck on a homework problem, and there is no clarification on this page, I recommend you look at the previous class's homepage. HOMEWORK HINTS (CS 352 Fall 2006)
Test #2 review sheet
Test #1 review sheet
Test 1 Distribution
Again, see CS 352 Fall 2006 for a more comprehensive list of homework tips.
Here's a helpful template for 2.51 that Dr. Hunt gave me.
/* Experimental Harness */ #include "stdio.h" int problem_2_51 ( int k ) { return( -1 << k ); } #define K ( 2 ) int main () { printf("Problem_2_51 for %d is: 0x%x.\n", K , problem_2_51( K ) ); }
#include <stdio.h>at the top of your file.
|
http://www.cs.utexas.edu/users/ragerdl/cs352-2008/index.html
|
CC-MAIN-2019-35
|
refinedweb
| 246
| 67.45
|
If you like this article, check out my work to solve personal finance at fiskal.app.
A while ago my 8 year old daughter was getting frustrated trying to alphabetize her spelling list. She’s constantly misplacing of the word and starting all over again. Being an opportunistic father, I told her I’d teach her how to tell the computer to do it for her. We opened up a ClojureScript REPL and she wrote her first line of code (sort [“truth” “simple” “powerful” “learn” “happy”]). If an 8 year old can learn basic ClojureScript so can you.
On Capital One’s Level Money team we just use ClojureScript for our front-end JavaScript single page web app. React handles our view layer (bit.ly/fluxless) and ClojureScript handles the data layer. Our ClojureScript library is called Triforce. If you want to see how our Javascript interacts with ClojureScript, check out our other post. ClojureScript is amazing at handling asynchronous work, dependency management, optionally does static type checking, event propagation, data manipulation, writes reusable/refactorable code, low memory footprint and is very performant. First up:
Data Fetching
That’s it. If you want to enable CORS just add a simple Object (called a “map” in ClojureScript).
Async
Async work in ClojureScript is a joy. There’s no callback hell or wrapping data in Promises. ClojureScript uses GoLang’s CSP for async. It takes normal synchronous looking code and just parks until that line is done executing. The core construct used for parking is a channel. By default a channel has one piece of data on it and the other side has to “park” there’s something on the channel to take. put! will put data on a channel and take! will take it off (>! is synonymous for put! and <! is for take!). The last concept to understand is async work can only happen inside a go block, which tell us that any code in here does not have to be synchronous.
So in the code above, it parks until it can take! the response the calls the print function to show the response. When we want the response to be set to a variable we just call let and then a variable name. We can rewrite it like this:
REST calls
Unfortunately we don’t have a shiny GraphQL or Datomic back-end. We have standard REST endpoints which work just fine. The ClojureScript way to do things is to pass data around to other functions. So for each endpoint we do just that. We’re create a simple map that details the minor differences of each endpoint.
defn creates a function called “get-repos” with required parameters of hostname, auth and request. Then it calls another function called “call-network” with what we need to for this endpoint.
Now we’ll make a few tweaks to make our network call agnostic. This composability is one of the key features that makes ClojureScript so easy to refactor and reuse; Write less code and change that code quickly.
Static typing in a loosely typed world
So all that should be fairly simple to do but there a lot that is unspoken here. What is request? What is auth? We can’t have those ambiguities on our public interfaces. We’ll add a library from Prismatic called Schema to give us opt-in static typing when it’s really needed. We use static typing on the edge of our namespaces where additional clarity becomes extremely important. It also doubles as validation when the code runs.
To conclude our network layer, we’ll update our http call to be a little more dynamic. We deconstruct the function parameters with the {:keys } syntax and then merge the default-params with the custom params for the call. Also notice that the last line of any function is the return value.
Dependencies
Dependency management isn’t a strong suit of most pure functional languages. In this case, ClojureScript has taken the idea of Classes in OO languages to control dependences better. Internally it’s just a closure in a function. This gives the power of encapsulation with the trade-offs classes can have. At the top level, we express what are our dependencies and how they relate. Then we inject them into components to create a system.
Next we take the components and put them into a system that manages the hierarchy of dependencies.
Event Propagation
The last core piece we use is localized caching and event proration. This is important to save data locally to be reused and modified along with the network requests. When the data is changed in any way we need to expose an interface to allow other functions to be triggered as a response to a data change.
For this we use atoms (mutable data) and atom-watchers.
CLJ->JS
To expose all of this to the JavaScript world we use ^:export on a single function and we wrap the core.async and immutable data in a Promise with mutable JSON.
Small, simple code is more stable and flexible
ClojureScript’s immutable data and focus on small, simple, pure functions make it very easy to compose and understand what the code is doing. We also handle data transforms and other work locally in isolated functions with just standard ClojureScript map/reduce type functions. ClojureScript has a steep learning curve for those with only an object-oriented background. ClojureScript has kept our View layer in sync while only exposing what’s needed. We’re excited for DataScript, OM Next reconciler and Datomic to make things easier and more flexible for our Data layer running in the browser and on Node.
ClojureScript is a fantastic language! To get up and running fast, check out figwheel, read the online ClojureScript book and use the ClojureDocs examples when you get stuck.
|
https://medium.com/@puppybits/clojurescript-is-the-triforce-of-power-984ac29da3d7
|
CC-MAIN-2020-40
|
refinedweb
| 976
| 65.73
|
The rise of automation, along with increased computational power, novel application of statistical algorithms, and improved accessibility to data, have resulted in the birth of the personal digital assistant market, popularly represented by Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant, and Amazon’s Alexa.
While each assistant may specialize in slightly different tasks, they all seek to make the user’s life easier through verbal interactions so you don’t have to search out a keyboard to find answers to questions like “What’s the weather today?” or “Where is Switzerland?”. Despite the inherent “cool” factor that comes with using a digital assistant, you may find that the aforementioned digital assistants don’t cater to your specific needs. Fortunately, it’s relatively easy to build your own.
This tutorial will walk you through the basics of building your own digital virtual assistant in Python, complete with voice activation plus response to a few basic inquiries. From there, you can customize it to perform whatever tasks you need most.
Installing Python
To follow along with the code in this tutorial, you’ll need to have a recent version of Python installed. I’ll be using ActivePython, for which you have two choices:
Download and install the pre-built Virtual Assistant runtime environment (including Python 3.6) for Windows 10 or CentOS 7, or
Build your own custom Python runtime with just the packages you’ll need for this project, by creating a free ActiveState Platform account, after which you will see the following image:
Click the Get Started button and choose Python 3.6 and the OS you’re working in. In addition to the standard packages included in ActivePython, we’ll need to add a few third party packages, including something that can do speech recognition, convert text to speech and playback audio:
- Speech Recognition Package – when you voice a question, we’ll need something that can capture it. The SpeechRecognition package allows Python to access audio from your machine’s microphone, transcribe audio, save audio to an audio file, and other similar tasks.
- Text to Speech Package – our assistant will need to convert your voiced question to a text one. And then, once the assistant looks up an answer online, it will need to convert the response into a voiceable phrase. For this purpose, we’ll use the gTTS package (Google Text-to-Speech). This package interfaces with Google Translate’s API. More information can be found here.
- Audio Playback Package – All that’s left is to give voice to the answer. The mpyg321 package allows for Python to play MP3 files.
- Google Assistant and Siri, which focus primarily on helping users with non-work related tasks like leisure and fitness.
- Cortana, which focuses on work efficiency.
- Alexa, which is more concerned with retail.
Once the runtime builds, you can download the State Tool and use it to install your runtime into a virtual environment.
And that’s it! You now have Python installed, as well as everything you need to build the sample application. In doing so, ActiveState takes the (sometimes frustrating) environment setup and dependency resolution portion out of your hands, allowing you to focus on actual development.
All the code used in this tutorial can be found in my Github repo.
All set? Let’s go.
Digital Assistant Voice Input
The first step in creating your own personal digital assistant is establishing voice communication. We’ll create two functions using the libraries we just installed: one for listening and another for responding. Before we do so, let’s import the libraries we installed, along with a few of the standard Python libraries:
import speech_recognition as sr from time import ctime import time import os from gtts import gTTS import requests, json
Now let’s define a function called listen. This uses the SpeechRecognition library to activate your machine’s microphone, and then converts the audio to text in the form of a string. I find it reassuring to print out a statement when the microphone has been activated, as well as the stated text that the microphone hears, so we know it’s working properly. I also include conditionals to cover common errors that may occur if there’s too much background noise, or if the request to the Google Cloud Speech API fails.
def listen(): r = sr.Recognizer() with sr.Microphone() as source: print("I am listening...") audio = r.listen(source) data = "" try: data = r.recognize_google(audio) print("You said: " + data) except sr.UnknownValueError: print("Google Speech Recognition did not understand audio") except sr.RequestError as e: print("Request Failed; {0}".format(e)) return data
For the voice response, we’ll use the gTTS library. We’ll define a function respond that takes a string input, prints it, then converts the string to an audio file. This audio file is saved to the local directory and then played by your operating system.
def respond(audioString): print(audioString) tts = gTTS(text=audioString, lang='en') tts.save("speech.mp3") os.system("mpg321 speech.mp3")
The listen and respond functions establish the most important aspects of a digital virtual assistant: the verbal interaction. Now that we’ve got the basic building blocks in place, we can build our digital assistant and add in some basic features.
Digital Assistant Voiced Responses
To construct our digital assistant, we’ll define another function called digital_assistant and provide it with a couple of basic responses:
def digital_assistant(data): if "how are you" in data: listening = True respond("I am well") if "what time is it" in data: listening = True respond(ctime()) if "stop listening" in data: listening = False print('Listening stopped') return listening return listening
This function takes whatever phrase the listen function outputs as an input, and checks what was said. We can use a series of if statements to understand the voice query and output the appropriate response. To make our assistant seem more human, the first thing we’ll add is a response to the question “How are you?” Feel free to change the response to your liking.
The second basic feature included is the ability to respond with the current time. This is done with the ctime function from the time package.
I also build in a “stop listening” command to terminate the digital assistant. The listening variable is a Boolean that is set to True when the digital assistant is active, and False when not. To test it out, we can write the following Python script, which includes all the previously defined functions and imported packages:
time.sleep(2) respond("Hi Dante, what can I do for you?") listening = True while listening == True: data = listen() listening = digital_assistant(data)
Save the script as digital_assistant.py. Before we run the script via the command prompt, let’s check that ActiveState Python is running correctly by entering the following on the command line:
$ Python3.6
If ActivePython installed correctly, you should obtain an output that looks like this:
ActivePython 3.6.6.3606 (ActiveState Software Inc.) based on Python 3.6.6 (default, Dec 19 2018, 08:04:03) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin Type "help", "copyright", "credits" or "license" for more information.
Note that if you have other versions of Python already installed on your machine, ActivePython may not be the default version. For instructions on how to make it the default for your operating system, ActiveState provides procedures here.
With ActivePython now the default version, we can run the script at the command prompt using:
$ Python3.6 digital_assistant.py
You should see and hear the output:
Hi Dante, what can I do for you? I am listening...
Now you can respond with one of the three possibilities we defined in the digital_assistant function, and it will respond appropriately. Cool, right?
How To Create A Digital Assistant Google Maps Query
I find myself frequently wondering where a certain city or country is with respect to the rest of the world. Typically this means I open a new tab in my browser and search for it on Google Maps. Naturally, if my new digital assistant could do this for me, it would save me the trouble.
To implement this feature, we’ll add a new if statement into our digital_assistant function:
def digital_assistant(data): "stop listening" in data: listening = False print('Listening stopped') return listening return listening
The new if statement picks up if you say “where is” in the voice query, and appends the next word to a Google Maps URL. The assistant replies, and a command is issued to the operating system to open Chrome with the given URL. Google Maps will open in your Chrome browser and display the city or country you inquired about. If you have a different web browser, or your applications are in a different location, adapt the command string accordingly.
How To Create A Digital Assistant Weather Query
If you live in a place where the weather can change on a dime, you may find yourself searching for the weather every morning to ensure that you are adequately equipped before leaving the house. This can eat up significant time in the morning, especially if you do it every day, when the time can be better spent taking care of other things.
To implement this within our digital assistant, we’ll add another if statement that recognizes the phrase “What is the weather in..?”
def digital_assistant(data): global listening "what is the weather in" in data: listening = True api_key = "Your_API_key" weather_url = "?" data = data.split(" ") location = str(data[5]) url = weather_url + "appid=" + api_key + "&q=" + location js = requests.get(url).json() if js["cod"] != "404": weather = js["main"] temp = weather["temp"] hum = weather["humidity"] desc = js["weather"][0]["description"] resp_string = " The temperature in Kelvin is " + str(temp) + " The humidity is " + str(hum) + " and The weather description is "+ str(desc) respond(resp_string) else: respond("City Not Found") if "stop listening" in data: listening = False print('Listening stopped') return listening time.sleep(2) respond("Hi Dante, what can I do for you?") listening = True while listening == True: data = listen() listening = digital_assistant(data)
For the weather query to function, it needs a valid API key to obtain the weather data. To get one, go here and then replace Your_API_key with the actual value. Once we concatenate the URL string, we’ll use the requests package to connect with the OpenWeather API. This allows Python to obtain the weather data for the input city, and after some parsing, extract the relevant information.
Conclusions
There is a myriad of digital assistants currently on the market, including:
With modest expectations in mind, each does its job relatively well. If you require more specificity, designing your own digital assistant is far from a pipe dream. Recent advances in speech recognition and converting text to speech make it viable even for hobbyists. And working in Python greatly simplifies the task, giving you the ability to make any number of customization to tailor your assistant to your needs.
|
https://sweetcode.io/how-build-digital-virtual-assistant-python/
|
CC-MAIN-2020-50
|
refinedweb
| 1,839
| 52.9
|
Subject: Re: [boost] [review] Review of Outcome (starts Fri-19-May)
From: Robert Ramey (ramey_at_[hidden])
Date: 2017-05-27 17:56:06
On 5/27/17 6:29 AM, Niall Douglas via Boost wrote:
> On 27/05/2017 01:22, Robert Ramey via Boost wrote:
>>> *- Error-handling algorithmic composition with-or-without C++ exceptions
>>> enabled
>>
>> I would drop just about all the references to exceptions. It's an
>> orthogonal issue. If you've decided you need exceptions or that they
>> are convenient in your context, you're not going to change to outcome.
>> If you've already decided on returning error indicators (for whatever
>> reason), you're not going to switch to exceptions. You might want to
>> switch to outcome. So comparison with exceptions isn't really relevant
>> at all.
>
> You may have a missed a major use case for outcome: calling STL code in
> a C++ project which is C++ exceptions disabled.
These applications are not using exceptions right now. They might use
outcome as an alternative to an integer return code. But exceptions are
not relevant to this case.
Much of the games and
> finance industry are interested in Outcome precisely because of the ease
> Outcome enables passing through C++ exceptions thrown in a STL using
> island through code with exceptions disabled.
I have to confess it never occurred to me what happens when a program
compiled with exceptions disabled uses the "throw" statement. I would
have assumed that it would fail to compile.
> Even if you're not doing that, another major use case is to keep
> exception throws, and having to deal with control flow inverting
> unexpectedly, isolated into manageable islands. The rest of your
> intermediate code can then assume no exception throws, not ever. That
> means no precautionary try...catch, no need to employ smart pointers, no
> need to utilise RAII to ensure exception safety. It can be a big time
> saver, especially during testing for correctness because it eliminates a
> whole load of complexity.
I get this. But this is likely already being addressed with code like:
// presume that sqrt throws when passed a negative number or Nan or???
float safe_sqrt(float x, int & error_flag){
float y;
try {
y = sqrt(x);
error_flag = 0;
}
catch(const std::domain_error & de){
error_flag = 1;
}
return y;
}
which you would recommend replacing with something like:
// presume that sqrt throws when passed a negative number or Nan or???
outcome<float, int>
safe_sqrt(float x){
float y;
try {
y = sqrt(x);
return outcome(y);
}
catch(const std::domain_error & de){
return outcome(0);
}
}
Which I would agree would be an improvement. BUT it really has nothing
to do with exceptions per se. outcome is replacing exceptions, it's
replacing an ad hoc integer error code/result combination with a better one.
So I would personeally have limited references to exceptions to a small
example similar to the above.
>> Pretty much the same for error codes. You might or might not use them -
>> but this is a totally orthogonal question as to whether you use outcome
>> or not as it is (I think) designed to handle any type as an error return
>> type
> You are thinking of expected<T, E> where E is selectable by the end
> user.
I wasn't thinking of Expected. I saw Vicent's presentation of it some
years ago and found it unconvincing so I forgot about it.
> Outcome's outcome<T> and result<T> hard code the error type to
> error_code_extended. So error codes are the only game in town for those
> refinements.
Hmmm - it didn't get that the E parameter had to be error_code. I
thought it could be anything - which would have decoupled outcome from
error_code.
>> Eliminating all the irrelevant material would make the package much,
>> much easier to evaluate and use.
>
> With respect Robert, I don't think you understood all the use cases.
Right - that's my complaint. I read the documentation. I saw a 3 cases
in the introduction. It wasn't apparent that it would be useful for
things other than that.
> Most have called for significantly *more* documentation to cover in
> detail even more use cases, not less.
Maybe - Either there other use cases that aren't in the docs or they're
wrong. If they're other uses cases which aren't obvious, that's sort of
red flag for me. It suggests that either the library might have
non-obvious behavior. But I'm just commenting on what I currently see.
> (Incidentally I tried less documentation three times previously. For
> some reason, people need to be hand held more when it comes to
> describing really simple and obvious-to-me at least stuff)
More is not necessarily better
>>> *- No dependencies (not even on Boost)
>>
>> Yowwww. This terrible. I looked a little through boost-lite.
>> config.hpp - repeats a lot of stuff from boost/config. Doesn't seem to
>> provide for compilers other than gcc, clang and some flavors of msvc.
>
> It actually provides for all the C++ 14 compilers currently available.
> If they are not MSVC, GCC nor clang, they pretend to be one of those, so
> the same detection routines work.
> You may also have missed the extensive use of C++ 17 feature test
> macros. All C++ 14 compilers, even MSVC, support those now.
Right - this is the wrong approach. If you think that boost/config.hpp
needs fixing you can address that. But mixing configuration into a
particular library just generates more work rather than factoring out
into the best implementations.
>> Looks like some stuff might have some value - but with not docs/comments
>> it's pretty much useless if something goes wrong - like a compiler upgrade.
>
> The C++ 17 feature test macros ought to be observed by all future
> compiler upgrades.
>
>> chrono.hpp - what is this doing in here?
>>
>> precompiling <tuple> etc..... Looks like you're trying to solve some
>> other problem here -
>>
>> goes on an on.
> I think you didn't realise those are the namespace bind files allowing
> the C++ 11 STL to be switched with the Boost STL. They are auto
> generated by a clang AST parser.
gratuitious complexity.
>> The questions of pre-compiled headers/modules/C++11 subset of boost have
>> nothing to do with outcome and shouldn't be included here. Focus on one
>> and only one thing: The boost.outcome library. It's quite a small
>> library and trying to make it carry the freight for all these other
>> ideas will deprive it of the value it might have.
I'd like to. But it looks like accepting outcome into boost will
effectively mean accepting a boat load of other unrelated stuff. So if
you want to address just outcome, remove all the other stuff.
> That would not be a widely held opinion. Most end users of Outcome don't
> want any Boost connection at all.
No problem. If you believe that don't submit it to Boost.
> They do want excellent top tier cmake
LOL - which would require CMake being quality product - which I don't
believe it is.
> support, with automatic usage of compiler technologies such as
> precompiled headers,
These are available to users - not required to add to libraries
C++ Modules,
Not ready yet
clang-tidy passes,
don't know what that is.
unit test
All libraries support unit test without adding confusing stuff to their
library
> dashboarding,
not related to the library
sanitisers,
already supported.
ABI guarantees,
This is interesting and I might accept this. But but I'm not crazy
about it being snuck in the back door.
and lots of other goodies.
nope
> boost-lite provides lots of common cmake machinery used by all my
> libraries to save me reinventing the wheel for each.
LOL - that's not what it looks like. I don't think library code should
be tied to anything outside it.
> Again, none of this matters to an end user who just wants to #include
> and go. You can do exactly this with Outcome. It neither uses nor
> interferes with anything in Boost. It is an excellent neighbour,
> including to other versions of itself.
>
> Just because you don't understand the value of exception_ptr and nested
> exceptions doesn't mean you are in a majority. Nested exceptions support
> in the C++ 11 STL wouldn't be there unless WG21 thought them highly useful.
Right - I'm just relating the experience of one user - myself. But I
don't think I'm untypical. I have to say I've never seen exception_ptr
or nested_exception in any code. If you think this is important, you
need to provide information on or at least point to references which
describe how this stuff because most of us who don't read the standard
text don't even knows it exists.
>> I expect to see type requirements (aka concepts) for T and E. Can any
>> types be substituted? It's a mystery. Hmmm maybe initial example
>> and/or looking at the source code will be useful. First of all, the
>> example has no #include <output.hpp> or some such. I go looking for
>> that and .... I can't find it. It's somewhere hidden through a maze of
>> ... preprocessing?
>
> There are static asserts which fire if the types used don't meet their
> requirements.
LOL - so one doesn't know what the requirements are until you compile
the code? How can write code this way?
> You are correct they are not documented in the reference API docs. There
> is an open issue to fix that.
OK - but
>
>>> - What is your evaluation of the implementation?
>>
>> Needs to be slimmed down and all the extraneous stuff removed. This
>> isn't a vehicle for upending boost. This is trying to get as modest
>> simple header only library into boost in a way which makes it useful to
>> users nothing more.
>
> As I have said many times now, end users can #include and go.
That's certainly not apparent from the examples.
> They don't care,
LOL - this user cares
nor need to care,
I'm not convinced of this.
> how the implementation works so long as it cannot cause them misoperation.
Up to a point this is true. But for a simple type such as this one I
have some expectation what the implemenation should look like and when I
look here - I can't see anything that makes much sense to me.
>> If one thinks I'm being two harsh, I propose the following experiment:
>> Take a pair of left over interns from C++Now and direct them to the
>> outcome git hub package. Give them a simple task to use the outcome
>> library. See if they can complete it in one hour or less.
>
> I have seen completely uninitiated users get up and running with Outcome
> in less than five minutes.
Honestly I can't believe this. I would have to see it for myself.
> If the end user has programmed in Rust or
> Swift before, they just need to be told:
>
> #include "boost.outcome/include/boost/outcome.hpp"
>
> boost::outcome::expected<T, E>
>
> ... and they're good to go.
LOL - I'll start learning Rust so I can use outcome.
>
>> On the other hand, I believe it wouldn't be too tough to take the pieces
>> of this library already made and compose them into a more tightly
>> focused package that would fit into boost in a natural way that users
>> would useful and easy to use. But the look and feel of the package
>> would be entirely different than the current one so It would really have
>> to be reviewed again. I also believe that if the package were
>> re-formulated as I suggest, the review process would be much, much less
>> onerous for everyone involved.
>
> Thanks for the review.
You're welcome
Robert Ramey
>
> Niall
>
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2017/05/235368.php
|
CC-MAIN-2019-39
|
refinedweb
| 1,987
| 65.83
|
I will explain how to build an environment that allows you to call C ++ functions from Python using Pybind11. There weren't many articles for people using Windows and Visual Studio Code (hereinafter VSCode), so I'll summarize what I've researched. Click here for debugging ().
We have confirmed in the following environment. The installation method below CMake is explained in this article.
CMake is a tool that makes the settings for building C ++ etc. available to various compilers. However, I'm not sure about it myself, so please see here for details.
Download and install the Windows win64-x64 Installer from the Download Page ().
Also, install the extension CMake Tools for VS Code in VS Code. If your device is offline, please refer to here.
msvc is a Microsoft C ++ compiler. The C ++ compiler part of Visual Studio.
For the installation method, see here when the device is online, and here when the device is offline.
If you are in an online environment, install with
pip.
Powershell
> pip install pybind11
For offline environment
Download
pybind11-x.y.z-py2.py3-none-any.wh (x, y, z are numbers) from the PyPI Download Page () and then install with
pip.
Powershell
>pip install download destination/pybind11-2.5.0-py2.py3-none-any.whl #2.5.0 part should be read as appropriate
I will also download the official sample for later use. If you are online (and have Git installed), go to the location you want to download on Powershell and
Powershell
git clone --recursive
Download as. If you are in an offline environment, press the
Clone or download button in the Official Sample (). ~~ Also, download the pybind11 folder in the same way. ~~
~~ ** [Important].)
From the
cmake_exmple folder you downloaded earlier, copy
pybind11,
src, and
CMakeLists.txt to an appropriate folder (hereinafter referred to as
project_root). ~~ In addition, if Japanese is included in the path name, Configure and Genarate cannot be performed, so please use half-width alphanumeric characters (spaces and symbols are also OK). ~~ (If you use g ++ for the compiler, Japanese is not good, but msvc seems to be OK)
Folder structure
project_root ├ pybind11 ├ src │ └main.cpp └ CMakeLists.txt
Comment out or delete the
#ifdef VERSION_INFO to
#endif part of
main.cpp because it causes a compile error (I don't know why).
project_root/src/main.cpp
#include <pybind11/pybind11.h> int add(int i, int j) { return i + j; } namespace py = pybind11; PYBIND11_MODULE(cmake_example, m) { (Abbreviation) m.def("add", &add, R"pbdoc( Add two numbers Some other explanation about the add function. )pbdoc"); (Abbreviation) //Comment out or delete the following because it will cause a compile error (I do not know why) //#ifdef VERSION_INFO // m.attr("__version__") = MACRO_STRINGIFY(VERSION_INFO); //#else // m.attr("__version__") = "dev"; //#endif }
project_root/CMakeLists.txt
cmake_minimum_required(VERSION 3.4...3.18) project(cmake_example) add_subdirectory(pybind11) pybind11_add_module(cmake_example src/main.cpp) # EXAMPLE_VERSION_INFO is defined by setup.py and passed into the C++ code as a # define (VERSION_INFO) here. target_compile_definitions(cmake_example PRIVATE VERSION_INFO=${EXAMPLE_VERSION_INFO})
Open
project_root with VS Code and build using CMake Tools from the command palette (
Ctrl + p).
Debugwith
Cmake: Select Variant
Visal Studio Build Tools 2019 Release --amd64in
Cmake: Select a Kit
Cmake: Build
The above can also be set by clicking the VS Code status bar (blue bar at the bottom).
If successful,
cmake_example.cp37-win_amd64.pyd will be created in the
./build/Debug folder. Open a terminal with
Ctrl + @ in VS Code and give it a try.
Powershell
> python #Launch python >>> from build.Debug.cmake_example import add >>> add(1,2) 3 >>> exit() #Return to Powershell
If you can call the
add function, you are successful.
The debugging method of the
add function is explained in Debugging, so please refer to that as well.
cmake -G" MinGW Makefiles "..cannot be done ~~
~~ If the path name contains Japanese, Configure and Genarate cannot be performed. The path name should be quiet and half-width alphanumeric characters (spaces and symbols are OK). I want you to support Japanese. ~~ (If you use g ++ for the compiler, Japanese is not good, but msvc seems to be OK)
~~3.2.1. warning: 'void PyThread_delete_key_value(int)' is deprecated [-Wdeprecated-declarations]~~
~~ As I wrote above,.)
3.2.2. error: '::hypot' has not been declared If you are using any version of Python released before December 2018, the build will fail with this error. The cause is pyconfig.h, and it is fixed as follows.
Python installation destination/include/pyconfig.h
#define COMPILER "[gcc]" - #define hypot _hypot ← Delete this line. #define PY_LONG_LONG long long #define PY_LLONG_MIN LLONG_MIN #define PY_LLONG_MAX LLONG_MAX
This issue has been officially discussed and merged after a pull request (actual commit).
Thank you for reading to the end.
This article only used the official sample, but please refer to other people's articles and modify
main.cpp and
CMakeLists.txt.
I'm not familiar with C ++, so it took me a while to be able to do just this. I would like to make my C ++ debut.
I referred to the following article.
-Introduction to CMake -How to use Cmake (1) -I tried using CMake (2) A little more decent project
-Official sample -Use C ++ functions from python with pybind11 -How to execute C ++ code from Python using pybind11
Recommended Posts
|
https://memotut.com/en/01d425c06f990e34870d/
|
CC-MAIN-2021-49
|
refinedweb
| 873
| 59.09
|
chmod, fchmod - change permissions of a file
#include <sys/types.h>
#include <sys/stat.h>
int chmod(const char *path, mode_t mode);
int fchmod(int fildes, mode_t mode);
The mode of the file given by path or:
EPERM The effective UID does not match the owner of the file, and is
not zero.
EROFS The named file resides on a read-only file system.
EFAULT path points outside your accessible address space.
ENAMETOOLONG [Toc] [Back]
path path.
EIO An I/O error occurred.
The general errors for fchmod are listed below:
EBADF The file descriptor fildes is not valid.
EROFS See above.
EPERM See above.
EIO See above.
The chmod call conforms to SVr4, SVID, POSIX, X/OPEN, 4.4BSD. SVr4
documents EINTR, ENOLINK and EMULTIHOP returns, but no ENOMEM. POSIX.1
does not document EFAULT, ENOMEM, ELOOP or EIO error conditions, or the
macros S_IREAD, S_IWRITE and S_IEXEC.
The fchmod call conforms to 4.4BSD and SVr4. SVr4 documents additional
EINTR and ENOLINK error conditions. POSIX requires the fchmod)
Linux 2.0.32 1997-12-10 CHMOD(2)
|
http://nixdoc.net/man-pages/Linux/man2/chmod.2.html
|
crawl-002
|
refinedweb
| 179
| 79.77
|
In the previous part of my Object Oriented Design Tutorial, I showed you how to build a Use Case, Object Model, Sequence Diagram and Class Diagram from scratch.
In this tutorial, I show you how to turn those diagrams into code and a working program. This is the process a person goes through to create excellent Object Oriented Designs. The code follows the video along with the digrams from last time.
If you like topics like this, it helps to tell Google with a click here [googleplusone]
If you know anyone interested in Object Oriented Design, feel free to share
Code from the Video
COIN.JAVA
public class Coin { private String coinOption = ""; public String[] coinValue = {"Heads", "Tails"}; Coin(){ // A random value of 0 or 1 is calculated // The value of coinOption is set based on // the random index chosen from coinValue[] int randNum = (Math.random() < 0.5)?0:1; coinOption = coinValue[randNum]; } public String getCoinOption(){ return coinOption; } }
PLAYER.JAVA
public class Player { private String name = ""; private String coinOption = ""; public String[] coinValue = {"Heads", "Tails"}; Player(String newName){ name = newName; } public String getCoinOption(){ return coinOption; } // Set coinOption to the opposite of what is sent public void setCoinOption(String opponentFlip){ coinOption = (opponentFlip == "Heads")?"Tails":"Heads"; } public String getRandCoinOption(){ // Get a random 0 or 1 int randNum = (Math.random() < 0.5)?0:1; // Set the value based on the index chosen at random // for the array coinValue which will be either // Heads or Tails coinOption = coinValue[randNum]; return coinValue[randNum]; } public void didPlayerWin(String winningFlip){ if(coinOption == winningFlip){ System.out.println(name + " won with a flip of " + coinOption); } else { System.out.println(name + " lost with a flip of " + coinOption); } } }
COINGAME.JAVA
public class CoinGame { Player[] players = new Player[2]; Coin theCoin = new Coin(); CoinGame(String player1Name, String player2Name){ players[0] = new Player(player1Name); players[1] = new Player(player2Name); } public void startGame(){ // Pick a random player to choose the face value guess int randIndex = (Math.random() < 0.5)?0:1; String playersPick = players[randIndex].getRandCoinOption(); // Set the opponents coinOption to the opposite value int opponentsIndex = (randIndex == 0)?1:0; players[opponentsIndex].setCoinOption(playersPick); // Flip the coin to find the winning side String winningFlip = theCoin.getCoinOption(); // See the results of the flip players[0].didPlayerWin(winningFlip); players[1].didPlayerWin(winningFlip); } }
COINFLIPPINGGAME.JAVA
import java.util.Scanner; public class CoinFlippingGame { public static void main(String[] args){ // Create a coin game with the 2 players provided CoinGame theCoinGame = new CoinGame("Mark", "Tom"); String usersAnswer; do { theCoinGame.startGame(); System.out.println("Play Again? "); Scanner playGameAgain = new Scanner(System.in); usersAnswer = playGameAgain.nextLine(); } while ((usersAnswer.startsWith("y")) || (usersAnswer.startsWith("Y"))); } }
Object Oriented Design Diagrams
Click on the image below a few times to see it full screen
hi derek, too bad you switched to java, i really liked your video’s
What language did you prefer? I’m constantly planning for the future
C++ would be great, if you can do it.
Thanks.
That tutorial is coming in the games tutorials. I’ll have to cover C first though
I’d like to have it on PHP. But you’re videos were great. Thanks for sharing.
You’re very welcome 🙂 I’ll cover php in detail after I finish with Java
Great video series (as they all are)! One slight bug in the code is that it appears the coin is only flipped one time instead of once per game. (Random coin flip occurs in Coin’s constructor and a Coin object is only created once.) Still, I wish I could write virtually bug-free code off the top of my head as you can!
Thank you very much 🙂 I’ll look into that bug and thank you for pointing it out. Sometimes I focus so much on the core subject that I let little silly things slip. Sorry about that
The code in the video had players[0] and player[0] in CoinGame constructor instead of player[0] and player[1]. The code is correct in the hand out.
Also the code ran correctly in video, so you may have edited it. Just an observation.
As always awesome video.
Sorry about that. Yes I do these tutorials out of my head so on occasion a little error slips in.
the string winningFlip in class CoinGame doesn’t change, so I moved line 5 inside startGame method.
Is that correct?
Sorry about that error
I just completed this. I tried to write the codes on my own using the SD/CD before looking at yours. It was fun! <3
Niki
That is the perfect way to use this tutorial. I’m glad you enjoyed it 🙂
Hello Derek,
thank you so much for the effort of putting those videos together! I’m in the process of learning Java right now and your tutorials are making it so much easier to digest. Awesome work!
Marc
Hello Marc, I’m very happy that they are helping 🙂 Many more videos are coming.
Hello Mr. Banas,
I have a question to you which I was asked in an informal interview by a professor from NASA, he asked me, “can you write a compiler that takes your C# code and compiles it” (I suppose he meant without .Net framework or JVM and JDK?) . I was clueless and so much ambressed. I am just a university student not a great developer. I want to know is it possible that I can write my own compiler for my java or C# code? if yes please recommend any books for guidance. Humble thanks
Raj
Hello Raj,
Yes you could, but it wouldn’t be that easy. You’d just convert the C# code into the native assembly language of the host computer that runs it. Engineering a Compiler is a pretty good book on compilers.
|
http://www.newthinktank.com/2012/12/object-oriented-design-2/?replytocom=17408
|
CC-MAIN-2021-04
|
refinedweb
| 956
| 65.01
|
12 November 2007 10:46 [Source: ICIS news]
MOSCOW (ICIS news)--Belarusian Potash Company (BPC) has signed potash supply contracts for next year after problems with a rail link were resolved, major producer Uralkali said on Monday.
BPC also said it would raise its standard MOP (muriate of potash) spot offers for Asia from $360/tonne to $400/tonne and granular MOP spot offers for ?xml:namespace>
Russian producer Silvinit said last month that it could halt its potash shipments in the second half of November due to the possible shutdown of a railway line in the country’s
After Silvinit's announcement, BPC moved to halt negotiations on new potash supply contracts.
However, the construction of a 1.8 km section of a new 6 km rail line to bypass the affected area has been accelerated and is due to be completed by 1 January.
BPC signed a joint-venture deal with Uralkali in 2005 and exports potash to countries
|
http://www.icis.com/Articles/2007/11/12/9077760/bpc-signs-2008-potash-supply-contracts.html
|
CC-MAIN-2014-15
|
refinedweb
| 161
| 54.26
|
Announcing Pegasus Frontend
said in Announcing Pegasus Frontend:
!
I haven't tried using scripts yet but they might be what you're looking for..
@fluffypillow I'm experiencing a strange problem loading arcade roms (both FBA and MAME) on one PC (but not another). It stems from the launcher using forward slashes in {file.path} rather than backslashes which Windows typically uses for directory paths. If I manually change the argument to be all backslashes then it will work from the command line, but that doesn't really help me with Pegasus.
What's most strange is that it works fine on my primary PC. Either way it might make sense to change this for the windows build so it uses backslashes instead?
- fluffypillow last edited by fluffypillow
@PlayingKarrde That's certainly interesting, as according to the source code of MAME, both slashes are accepted. But other than that, yes, using backslashes on Windows sounds like a good idea.
- fluffypillow last edited by fluffypillow
Okay, after quite some time one of the largest weekly updates has landed! This is yet another breaking change, so feel free to keep an older version around if any of these changes affect you negatively.
- Major changes in the theme Api
- It is now possible to present games and collections in multiple locations with custom ordering and filtering
- It is now possible to present a list of all existing games, and similarly, sort or filter it with custom parameters
- See below for implementation details
- Updated the 9999999 and ES2 Simple themes to these themeing changes (I should really make an updater eventually)
- Bugfixes and optimizations
- Fixed a bug where Pegasus crashed on exit
- Fixed a bug where log files weren't saved properly before reboot/shutdown
- Fixed a bug where the ES2 support could cause a crash during loading
The bad news:
- Custom filters support and searching/filtering in the default theme is temporarily disabled (haven't finished updating that part yet)
- At the moment, Pegasus doesn't remember the selected game and its position in the menu when you return from playing. This will be fixed in the future, likely when theme options get added.
Theme Api changes:
- Previously, collection list objects and game list objects had
index,
current,
increment/decrementIndex()fields and methods (eg.
api.collections.current). Because the lists can now appear in multiple places with different sorting and filtering, having one index isn't particularly meaningful anymore, and thus all these fields and methods were removed.
- Because the only remaining field would have been
model(eg.
api.collections.model), the whole intermediate object got removed, and the object itself can now be used as model (eg.
api.collections).
- These models are no longer JavaScript arrays, but so called "item models". They can be used without any difference in List/Grid/PathViews'
modelproperty. As for manual operations,
- to get a single item, instead of
themodel[anindex], you'd use
themodel.get(anindex)
- to get the count of items, instead of
themodel.length, you'd use
themodel.count
- In addition, the list of all games is now accessible as
api.allGames, which is a list of game objects similarly to a single collection's
gamesfield.
- With these changes, the current data structure is:
api.collectionsis a list of collections, which is an item model
- each collection has the same fields as before (see the Api docs)
- each collection has a
gamesfield, a list of games, which is an item model
- each game has the same fields and
launch()method as before
api.allGamesis a list of all games, independent of collections. Also an item model.
- For inspriation with updating theme code, you can take a look at the different ways it's handled in the commits of ES2 Simple and 9999999-in-1. The main theme is a mess, but if you wish, you can take a look at these commits too.
Sorting and filtering:
- SortFilterProxyModel is now available. Take a look at the example there to see how it's used. You can find the list of sorters/filters here.
- For example, to get the list of all games ordered by play time, you could write something like
import SortFilterProxyModel 0.2 ... SortFilterProxyModel { id: mysorter sourceModel: api.allGames sorters: RoleSorter { roleName: "playTime" } } ListView { model: mysorter ... }
- Because filtering can now be done in themes,
api.filtersis now removed. In the future, it might be updated to hold the user-defined custom filters instead.
@fluffypillow amazing thanks for this. Been excited about these changes coming. I'll try and update my theme tomorrow if I have time.
PS. Also fixed the slashing issue,
{file.path}and
{file.dir}now uses backslashes as directory separator on Windows
@fluffypillow said in Announcing Pegasus Frontend:
PS. Also fixed the slashing issue,
{file.path}and
{file.dir}now uses backslashes as directory separator on Windows
<3
No major updates this week yet, only a few memory optimizations. I've started the work on the metadata changes however, but will likely get finished in the next year. Until then, happy holidays!
@fluffypillow I'm using your frontend in my LE fork and I start it with this script
Once I've exit a game the frontend resets to the first system. So e.g. I play NES games and everytime it resets to GB. Is it supposed to be this way? Emulationstation for example keeps the last used system once you exit an emulator so for example a lr-core, Dolphin or else. Maybe it's a problem of the theme itself? I'm not sure if this happened with the gameOS theme too but I can't test it untill the dev adopted your latest API changes.
@5schatten Yes, remembering the launched game is temporarily removed due to the changes in the theme API (see here), but will be re-added soon. If it's urgent, you can try using the stable release until then, or build commit
3ab18e0, which is right before this whole patch set.
@fluffypillow thx for the explanation :-) otherwise it's a great frontend once you've polished the minor issues ;-) Btw. I've opened a new "issue" though it's more a feature request. Can you skip the mame/fba bios files like recent ES versions do?
@5schatten Ah sorry, missed the mail about a new issue somehow. Will take a look.
Contributions to the project are always appreciated, so if you would like to support us with a donation you can do so here.
Hosting provided by Mythic-Beasts. See the Hosting Information page for more information.
|
https://retropie.org.uk/forum/topic/9598/announcing-pegasus-frontend/1023?lang=en-US
|
CC-MAIN-2021-39
|
refinedweb
| 1,095
| 63.9
|
E-File Magic 2.1.501
Sponsored Links
Download location for E-File Magic 2.1.501
Create, Print, Mail, and E-File all 1098, 1099, 5498, and W-2G tax forms. No Pre Printed Forms Required! FAST, import 10000+ forms in minutes. Requires e-file service to be purchased from and provided by E-File Magic.....read more
NOTE: You are now downloading E-File Magic 2.1.501. This trial download is provided to you free of charge. Please purchase it to get the full version of this software.
Select a download mirror
E-File Magic 2.1.501 description
E-File Magic allows you to create, print, Mail, and E-File all 1098, 1099, 5498, and W-2G tax forms. Major Features: No PrePrinted Form is Required! Support for ALL 1098, 1099, 5498, and W-2G forms. FAST, import 10000+ forms in minutes. Simpli...read more
E-File Magic 2.1.501 Screenshot
E-File Magic 2.1.501 Keywords
Bookmark E-File Magic 2.1.501
E-File Magic 2.1.501 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for E-File Magic 2.1.501. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Featured Software
Want to place your software product here?
Please contact us for consideration.
Contact WareSeeker.com
Related Software
Specializes in printing, mailing, and electronically filing business IRS forms Free Download
Collection System is personal Finance tool Free Download
Like Criss Angel? Now you can have Criss Angel even closer to you at the palm of your hand. Criss Angel, one of the greatest il... Free Download
**IGN: “If you grew up on NES and Genesis, this game is a love letter, written expressly to you.” **Toucharcade: “…a 2.5D sides... Free Download
import QuickBooks data Free Download
MostFun Magic Ball 3 is a game assigning you the task of capturing bank robbers, saving damsels in distress, and battling pirates on the open seas. Free Download
Screenshot Magic allows image capture from the following sources: Full Desktop, Active Window (full or contents only) and DirectX Fullscreen (games). Screenshot Magic can save images in the following formats (24-bit): BMP, JPG and PNG. Free Download
ZeallSoft Magic Mirror is funny and easy to use photo distorting software. You can distorting your friend or family face and producing surreal effects! These help to disfigure and distort any ordinary image beyond recognition. Free Download
Latest Software
- Magic Camera 6.4.0
- 1099 Pro Professional 2009.06.15
- SyncMaster 1701mp,SyncMaster Magic CX700MP 1.0
- SyncMaster 720XT, SyncMaster Magic CX716XT 2.0
- SyncMaster 732NW,SyncMaster Magic CX732NW 2.0
- SyncMaster 510M,SyncMaster Magic CX501BM 1.0
- SyncMaster 193T/197T/191Tplus,SyncMaster Magic CX9 1.0.0.0
- SyncMaster 173mp,SyncMaster Magic CX712MP 1.0
Popular Software
Favourite Software
|
http://wareseeker.com/download/e-file-magic-2.1.501.rar/7c4e5bb55
|
CC-MAIN-2014-52
|
refinedweb
| 471
| 62.24
|
A pretty sweet search bar for React Native
react-native-searchbar.
Works on both iOS and Android.
Installation
npm install react-native-searchbar --save
- Install
react-native-vector-iconsif the project doesn't have them already. The search bar accesses MaterialIcons.
- Now you can require the search bar with
import SearchBar from 'react-native-searchbar'or
var SearchBar = require('react-native-searchbar')
Available Props
Usage
Use a ref to show and hide the search bar and set the text input value
ref={(ref) => this.searchBar = ref}
this.searchBar.show()
this.searchBar.hide()
this.searchBar.setValue("text to set")
Write your own search logic with
handleSearchor provide some
dataand use the results handed back from
handleResults.
Use your powers for good!
Notes for Android
- Render the search bar component after the component it is supposed to display over. iOS handles this nicely with a
zIndexof
10. Android elevation is set to 2.
- The bottom of the search bar will have a thin border instead of a shadow.
Example
Full example at
example/
import SearchBar from 'react-native-searchbar'; const items = [ 1337, 'janeway', { lots: 'of', different: { types: 0, data: false, that: { can: { be: { quite: { complex: { hidden: [ 'gold!' ], }, }, }, }, }, }, }, [ 4, 2, 'tree' ], ]; ... _handleResults(results) { this.setState({ results }); } ... ... <SearchBar ref={(ref) => this.searchBar = ref} data={items} handleResults={this._handleResults} showOnLoad /> ...
Contributing
Contributing to
react-native-searchbar is easy! With four simple steps:
Create a branch
- Fork the repository
git clone <your-repo-url>to clone your GitHub repo to your local one
git pull origin masterto pull the latest code
npm installto install the project's dependencies
git checkout -b the-name-of-my-branchto create a branch (use something short and comprehensible, such as:
fix-styling-of-search-bar).
Make the change
Test the change
- Run
npm run fixfrom the project root (This will run Prettier and ESLint and automatically fix any issues).
- If possible, test any visual changes in Android and iOS.
Push the change!
git add -A && git commit -m "My message"(replacing
My messagewith a commit message, such as
Fixed styling on search bar) to stage and commit your changes
git push my-fork-name the-name-of-my-branch
|
https://reactnativeexample.com/a-pretty-sweet-search-bar-for-react-native/
|
CC-MAIN-2019-13
|
refinedweb
| 359
| 57.37
|
I. Objective:
Simple 1-layer multi layer perceptron (MLP). f is the activation function and introduces the non-linearity to our system.
II. Linear Model:
A simple linear model with a softmax layer on top. The main difference here is the lack of a non-linear activation function (ReLU, tanh, etc.). Thanks to Karpathy for the data and code structure, but we will break down the math behind the lines for better understanding. You can check out the code for loading the data on the Github repo but here we will focus on the main model operations.
# Class scores [NXC] logits = np.dot(X,*W)
# Backpropagation dscores = probs dscores[range(len(probs)), y] -= 1 dscores /= config.DATA_SIZE
dW = np.dot(X.T, dscores) dW += config.REG*W
W += -config.LEARNING_RATE * dW
Results:
We can see that the decision boundary of our classifier is linear and cannot adapt to the non-linear contortions of the data.
III. Neural Network:
Now we introduce a neural net with a softmax on the last layer for class probabilities. We use a ReLU unit to introduce non-linearity. Our network will have two layers, where the shape of the input will be manipulated as follows:
Once again, let’s break down the code.
z_2 = np.dot(X, W_1)
a_2 = np.maximum(0, z_2) # ReLU
logits = np.dot(a_2, W_1*W_1) loss += 0.5 * config.REG * np.sum(W_2*W_2)
# Backpropagation dscores = probs dscores[range(len(probs)), y] -= 1 dscores /= config.DATA_SIZE dW2 = np.dot(a_2.T, dscores)
dhidden = np.dot(dscores, W_2.T) dhidden[a_2 &amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;= 0] = 0 # ReLu backprop dW1 = np.dot(X.T, dhidden)
dW2 += config.REG * W_2 dW1 += config.REG * W_1
W_1 += -config.LEARNING_RATE * dW1 W_2 += -config.LEARNING_RATE * dW2
Our accuracy is very simple and involves doing the forward pass and then comparing the predicted class with the target class.
def accuracy(X, y, W_1, W_2=None): logits = np.dot(X, W_1) if W_2 is None: predicted_class = np.argmax(logits, axis=1) print "Accuracy: %.3f" % (np.mean(predicted_class == y)) else: z_2 = np.dot(X, W_1) a_2 = np.maximum(0, z_2) logits = np.dot(a_2, W_2) predicted_class = np.argmax(logits, axis=1) print "Accuracy: %.3f" % (np.mean(predicted_class == y))
Results:
The resulting decision boundary is able to classify the non-linear data really well.
IV. Tensorflow Implementation:
We will start by setting up our tensorflow model but we will have an extra function called summarize() which will store the progress as we training through the epochs. We will decide which values to store with tf.scalar_summary() so we can see the changes later.
def create_model(sess, FLAGS): model = mlp(FLAGS.DIMENSIONS, FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_CLASSES, FLAGS.REG, FLAGS.LEARNING_RATE) sess.run(tf.initialize_all_variables()) return model class mlp(object): def __init__(self, input_dimensions, num_hidden_units, num_classes, regularization, learning_rate): # Placeholders self.X = tf.placeholder("float", [None, None]) self.y = tf.placeholder("float", [None, None]) # Weights W1 = tf.Variable(tf.random_normal( [input_dimensions, num_hidden_units], stddev=0.01), "W1") W2 = tf.Variable(tf.random_normal( [num_hidden_units, num_classes], stddev=0.01), "W2") with tf.name_scope('forward_pass') as scope: z_2 = tf.matmul(self.X, W1) a_2 = tf.nn.relu(z_2) self.logits = tf.matmul(a_2, W2) # Add summary ops to collect data W_1 = tf.histogram_summary("W1", W1) W_2 = tf.histogram_summary("W2", W2) with tf.name_scope('cost') as scope: self.cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(self.logits, self.y)) \ + 0.5 * regularization * tf.reduce_sum(W1*W1) \ + 0.5 * regularization * tf.reduce_sum(W2*W2) tf.scalar_summary("cost", self.cost) with tf.name_scope('train') as scope: self.optimizer = tf.train.AdamOptimizer( learning_rate=learning_rate).minimize(self.cost) def step(self, sess, batch_X, batch_y): input_feed = {self.X: batch_X, self.y: batch_y} output_feed = [self.logits, self.cost, self.optimizer] outputs = sess.run(output_feed, input_feed) return outputs[0], outputs[1], outputs[2] def summarize(self, sess, batch_X, batch_y): # Merge all summaries into a single operator merged_summary_op = tf.merge_all_summaries() return sess.run(merged_summary_op, feed_dict={self.X:batch_X, self.y:batch_y})
Then we will train for several epochs and save the summary each time.
def train(FLAGS): # Load the data FLAGS, X, y = load_data(FLAGS) with tf.Session() as sess: model = create_model(sess, FLAGS) summary_writer = tf.train.SummaryWriter( FLAGS.TENSORBOARD_DIR, graph=sess.graph) # y to categorical Y = tf.one_hot(y, FLAGS.NUM_CLASSES).eval() for epoch_num in range(FLAGS.NUM_EPOCHS): logits, training_loss, _ = model.step(sess, X, Y) # Display if epoch_num%FLAGS.DISPLAY_STEP == 0: print "EPOCH %i: \n Training loss: %.3f, Accuracy: %.3f" \ % (epoch_num, training_loss, np.mean(np.argmax(logits, 1) == y)) # Write logs for each epoch_num summary_str = model.summarize(sess, X, Y) summary_writer.add_summary(summary_str, epoch_num) if __name__ == '__main__': FLAGS = parameters() train(FLAGS)
Finally, we can view our training progress using:
$ tensorboard --logdir=logs
and then heading over to on your browser to view the results. Here are a few:
Extras (DropOut and DropConnect):
There are many add on techniques to this vanilla neural network that works to increase optimization, robustness and overall performance. We will be convering many of them in future posts but I will briefly talk about a very common regularization technique: dropout.
What is it? Dropout is a regularization technique that allows us to nullify the outputs of certain neurons to zero. This will effectively be the same as the neuron not existing in the network. We will do this for p% of the total neurons in each layer and for each batch, a new p% of the neurons in each layer are “dropped”.
Why do we do this? It works out to be a great regularization technique because for each input batch, we are sampling from a different neural net since a whole new set of neurons are dropped. By repeating this, we are preventing the units from co-adapting too much to the data. The original paper describes each iteration as a “thinned” network because p% of the neurons are dropped. Note: Dropout is only for training time. At test time, we will not be dropping any neurons.
In the image above, the layer has p=0.5 which means half of it’s units are dropped. In an other iteration, a different set of 1/2 of the neurons will be dropped. Let’s take a look at masking code to really understand what’s happening.
We use a Bernoulli distribution to generation 0/1 with probability p for 0. We apply this mask to the outputs from our layer. The parts that are multiplied by zero are our “dropped” neurons since they will yield an output of 0 when multiplied by the next set of weights.
Another regularization method, which is an extension of dropout, is dropconnect. It also involves a similar mechanism but is applied to the weights instead.
Notice that here, a set of weights are dropped instead of the neurons.
We apply a similar bernoulli mask to the weights and we use those weights for the layers. Any inputs that are dot producted with the zeroed weights will result in 0. You can see the similarity with dropout and so, empirically, both techniques offer similar results. Dropconnect was proposed because you always have more weights than neurons, so there are more ways to create “thinned” models thus results in more robust training. However, in more papers you will see mostly dropout being utilized and very rarely drop connect since results are similar.
You can read more about dropout here and dropconnect here.
V. Raw Code:
GitHub Repo (Updating all repos, will be back up soon!)
2 thoughts on “Vanilla Neural Network”
One of the most succinct and clearest NN guides I’ve seen so far, great material
LikeLiked by 1 person
Thanks 😀 made it for myself initially just to be able to look at it and recall everything quickly
|
https://theneuralperspective.com/2016/10/02/03-vanilla-neural-network/
|
CC-MAIN-2018-22
|
refinedweb
| 1,295
| 52.56
|
Search the Community
Showing results for tags 'fileinstall'.
Found 10 results
CreateFilesEmbedded.au3 - Like FileInstall...
JScript posted a topic in AutoIt Example ScriptsCreateFilesEmbedded.exe Application CreateFilesEmbedded_(RedirectLink).html (Previous downloads: 575) Example using the binary return (without writing the file in HDD) BinarySoundTest.zip Sample: Fixes: Free Software João Carlos.
FileInstall Strange behaviour
lonardd posted a topic in AutoIt General Help and SupportHi, I have a source Script where I inserted the following code lines to be able to extract the script source code anytime later if I run it with the /ExtractSourceCode: #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=n #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Res_SaveSource=Y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** If StringInStr($cmdlineRaw, "/ExtractSourceCode") Then FileInstall(@ScriptFullPath, @ScriptDir & "\" & @ScriptName & ".txt", 1) ;FileInstall("C:\Test.txt", @ScriptDir & "\Test.txt") Exit EndIf I used to be able to compile it on another computer some years ago without problems. Now I wanted to modify the code, so I extracted it, renamed the file *.au3, performd my little modification and rebuild. Strangely, I get this popup with Caption: Aut2Exe Error and Invalid FileInstall() function. BEfore I hit the OK button on the Popup, I can see the file is actually Built as I can see that an EXE file is created, but as I hit OK in that error dialog, the EXE disappears. Any advice? Thanks David I can't remember if I did it with Autoit 2 EPP_NF_Replacer_UBI.au3
-.
AutoIt Archiver (like winrar sfx)
careca posted a topic in AutoIt General Help and SupportHey.
Create Self Extracting Encrypted EXE files
Jfish posted a topic in AutoIt Example ScriptsHello
Help needed for FileInstall funtion
TheDcoder posted a topic in AutoIt General Help and SupportHello! If FileInstall('image.png', @ScriptDir) = 0 Then MsgBox(0, "Unexcepted Error", "Damaged file") Exit Will it make any difference if the file was unable to extract?
Problem with invalid? FileInstall
Graeme posted a topic in AutoIt General Help and SupportHi, I have a very simple line in a complex script. If Not FileInstall("Thunderbird update.jpg",$ScriptTargetDir,1) Then SplashOff() MsgBox(0x10,$UpdateID,"Update failed - could not install Thunderbird update.jpg in " & $ScriptTargetDir) Exit(0) EndIf Every time I try to compile this script I get the message: But the file Thunderbird update.jpg exists in the same directory as the script. I tried making it a bmp like the example in the help file - no difference. I tried ".Thunderbird update.jpg" and I tried making it "ThunderbirdUpdate.jpg" gettting rid of the space. The destination file is defined as well. The error message from SciTE isn't very helpful - >"C:\Program Files (x86)\AutoIt3\SciTE\AutoIt3Wrapper\AutoIt3Wrapper.exe" /ShowGui /in "U:Documents5 GraemeQAupdate2_2_0.au3" +>17:38:08 Starting AutoIt3Wrapper v.2.1.4.4 SciTE v.3.3.7.0 ; Keyboard:00000809 OS:WIN_7/Service Pack 1 CPU:X64 OS:X64 Environment(Language:0409 Keyboard:00000809 OS:WIN_7/Service Pack 1 CPU:X64 OS:X64) -> No changes made.. >Running AU3Check (1.54.22.0) from:C:Program Files (x86)AutoIt3 +>17:38:11 AU3Check ended.rc:0 >Running:(3.3.8.1):C:\Program Files (x86)\AutoIt3\Aut2Exe\aut2exe_x64.exe /in "U:Documents5 GraemeQAupdate2_2_0.au3" /out "C:UsersGraemeAppDataLocalAutoIt v3Aut2exe~AU3tngpptu.exe" /nopack /comp 4 !>17:38:21 Aut2exe.exe ended errors because the target exe wasn't created, abandon build. (C:UsersGraemeAppDataLocalAutoIt v3Aut2exe~AU3tngpptu.exe)rc:9999 +>17:38:21 AutoIt3Wrapper Finished.. >Exit code: 0 Time: 13.756 Any ideas as to what the problem is would be very appreciated. Blessings Graeme
Installation script with .msi's
lilx posted a topic in AutoIt General Help and SupportHi, )
FileInstall Limit
Roshith posted a topic in AutoIt General Help and SupportHi, I'm trying to create an installer that extracts about 586 files (3.2 GB) to temp directory, then executes the msi file. For this, I created a script with 586 fileinstall() functions. Script runs perfectly when executed without compiling. But when executed after compiling, only 263 (~2 GB) files are extracted to the temp directory. The remaining fileinstall() functions returned 0. Is there any limit for usage of fileinstall() in a single script. If so, any work around other than creating a separate script for the remaining files. -Roshith
Invalid FileInstall
johnmcloud posted a topic in AutoIt General Help and SupportHi guys, i have problem with FileInstall: Global $Dest = @TempDir & "_Temp" DirCreate($Dest) For $i = 0 To 9 FileInstall("._TempX0" & $i & ".bmp", $Dest & "X0" & $i & ".bmp", 1) FileInstall("._TempY0" & $i & ".bmp", $Dest & "Y0" & $i & ".bmp", 1) Next For $i = 10 To 60 FileInstall("._TempX" & $i & ".bmp", $Dest & "X" & $i & ".bmp", 1) FileInstall("._TempY" & $i & ".bmp", $Dest & "Y" & $i & ".bmp", 1) Next I have a folder with X01.bmp X02.bmp Y01.bmp etc... Work fine, it install the files but when i'm trying to compile i have the error: INVALID FILEINSTALL FUNCTION() So work if the file is .au3 but i can't compile into .exe What is the problem? Thanks
|
https://www.autoitscript.com/forum/tags/fileinstall/
|
CC-MAIN-2020-05
|
refinedweb
| 822
| 51.95
|
01 August 2005 11:33 [Source: ICIS news]
LONDON (CNI)--Total confirmed on Monday that its 160,000 bbl/day ?xml:namespace>
A spokeswoman for the French oil and petrochemicals company said units were restarted one by one from last Wednesday, and that normal production was reached over the weekend.
The refinery went down just after midday local time on Tuesday (26 July) after a malfunction of a power switch at the Borselle nuclear plant, which supplies Vlissingen.
Benzene feedstocks were hit by the outage, but a company official said last week that benzene production was not expected to be affected as the upstream units were restarted shortly after the power cut. The spokeswoman did not have any further information on the impact on downstream products.
The refinery has two gasoil hydrotreaters, producing 120,000 tonne of diesel/month. The refinery is operated as a joint venture between Total
|
http://www.icis.com/Articles/2005/08/01/2005432/totals-vlissingen-refinery-back-to-normal-production.html
|
CC-MAIN-2014-35
|
refinedweb
| 149
| 59.33
|
4.8 Creating Threads
The Pthreads library can be used to create, maintain, and manage the threads of multithreaded programs and applications. When creating a multithreaded program, threads can be created any time during the execution of a process because they are dynamic. The pthread_create() function creates a new thread in the address space of a process. The thread parameter points to a thread handle or thread id of the thread that will be created. The new thread will have the attributes specified by the attribute object attr. The thread parameter will immediately execute the instructions in start_routine with the arguments specified by arg. If the function successfully creates the thread, it will return the thread id and store the value in the thread parameter.
If attr is NULL, the default thread attributes will be used by the new thread. The new thread takes on the attributes of attr when it is created. If attr is changed after the thread has been created, it will not affect any of the thread's attributes. If start_routine returns, the thread returns as if pthread_exit() had been called using the return value of start_routine as its exit status.
Synopsis
#include <pthread.h> int pthread_create(pthread_t *restrict thread, const pthread_attr_t *restrict attr, void *(*start_routine)(void*), void *restrict arg);
If successful, the function will return 0. If the function is not successful, no new thread is created and the function will return an error number. If the system does not have the resources to create the thread or the thread limit for the process has been reached, the function will fail. The function will also fail if the thread attribute is invalid or the caller thread does not have permission to set the necessary thread attributes.
These are examples of creating two threads with default attributes:
pthread_create(&threadA,NULL,task1,NULL); pthread_create(&threadB,NULL,task2,NULL);
These are the two pthread_create() function calls from Example 4.1. Both threads are created with default attributes.
Program 4.1 shows a primary thread passing an argument from the command line to the functions executed by the threads.
Program 4.1
#include <iostream> #include <pthread.h> #include <stdlib.h> int main(int argc, char *argv[]) { pthread_t ThreadA,ThreadB; int N; if(argc != 2){ cout << "error" << endl; exit (1); } N = atoi(argv[1]); pthread_create(&ThreadA,NULL,task1,&N); pthread_create(&ThreadB,NULL,task2,&N); cout << "waiting for threads to join" << endl; pthread_join(ThreadA,NULL); pthread_join(ThreadB,NULL); return(0); }
Program 4.1 shows how the primary thread can pass arguments from the command line to each of the thread functions. A number is typed in at the command line. The primary thread converts the argument to an integer and passes it to each function as a pointer to an integer as the last argument to the pthread_create() functions. Program 4.2 shows each of the thread functions.
Program 4.2
void *task1(void *X) { int *Temp; Temp = static_cast<int *>(X); for(int Count = 1;Count < *Temp;Count++){ cout << "work from thread A: " << Count << " * 2 = " << Count * 2 << endl; } cout << "Thread A complete" << endl; } void *task2(void *X) { int *Temp; Temp = static_cast<int *>(X); for(int Count = 1;Count < *Temp;Count++){ cout << "work from thread B: " << Count << " + 2 = " << Count + 2 << endl; } cout << "Thread B complete" << endl; }
In Program 4.2, task1 and task2 executes a loop that is iterated the number of times as the value passed to the function. The function either adds or multiplies the loop invariant by 2 and sends the results to standard out. Once complete, each function outputs a message that the thread is complete. The instructions for compiling and executing Programs 4.1 and 4.2 are contained in Program Profile 4.1.
Program Profile 4.1
Program Name
program4-12.cc
Description
Accepts an integer from the command line and passes the value to the thread functions. Each function executes a loop that either adds or multiples the loop invariant by 2 and sends the result to standard out. The main line or primary thread is listed in Program 4.1 and the functions are listed in Program 4.2.
Libraries Required
libpthread
Headers Required
<pthread.h> <iostream> <stdlib.h>
Compile and Link Instructions
c++ -o program4-12 program4-12.cc -lpthread
Test Environment
SuSE Linux 7.1, gcc 2.95.2,
Execution Instructions
./program4-12 34
Notes
This program requires a command-line argument.
This is an example of passing a single argument to the thread function. If it is necessary to pass multiple arguments to the thread function, create a struct or container containing all the required arguments and pass a pointer to that structure to the thread function.
4.8.1 Getting the Thread Id
As mentioned earlier, the process shares all its resources with the threads in its address space. Threads have very few resources of their own. The thread id is one of the resources unique to each thread. The pthread_self() function returns the thread id of the calling thread.
Synopsis
#include <pthread.h> pthread_t pthread_self(void);
This function is similar to getpid() for processes. When a thread is created, the thread id is returned to the creator or calling thread. The thread id will not know the created thread. Once the thread has its own id, it can be passed to other threads in the process. This function returns the thread id with no errors defined.
Here is an example of calling this function:
//... pthread_t ThreadId; ThreadId = pthread_self();
A thread calls this function and the function returns the thread id stored in the variable ThreadId of type pthread_t.
4.8.2 Joining Threads
The pthread_join() function is used to join or rejoin flows of control in a process. The pthread_join() causes the calling thread to suspend its execution until the target thread has terminated. It is similar to the wait() function used by processes. This function can be called by the creator of a thread. The creator thread waits for the new thread to terminate and return, thus rejoining flows of control. The pthread_join() can also be called by peer threads if the thread handle is global. This will allow any thread to join flows of control with any other thread in the process. If the calling thread is canceled before the target thread returns, the target thread will not become a detached thread (discussed in the next section). If different peer threads simultaneously call the pthread_join() function on the same thread, this behavior is undefined.
Synopsis
#include <pthread.h> int pthread_join(pthread_t thread, void **value_ptr);
The thread parameter is the thread (target thread) the calling thread is waiting on. If the function returns successfully, the exit status is stored in value_ptr. The exit status is the argument passed to the pthread_exit() function called by the terminated thread. The function will return an error number if it fails. The function will fail if the target thread is not a joinable thread or, in other words, created as a detached thread. The function will also fail if the specified thread thread does not exist.
There should be a pthread_join() function called for all joinable threads. Once the thread is joined, this will allow the operating system to reclaim storage used by the thread. If a joinable thread is not joined to any thread or the thread that calls the join function is canceled, then the target thread will continue to utilize storage. This is a state similar to a zombied process when the parent process has not accepted the exit status of a child process, the child process continues to occupy an entry in the process table.
4.8.3 Creating Detached Threads
A detached thread is a terminated thread that is not joined or waited upon by any other threads. When the thread terminates, the limited resources used by the thread, including the thread id, are reclaimed and returned to the system pool. There is no exit status for any thread to obtain. Any thread that attempts to call pthread_join() for a detached thread will fail. The pthread_detach() function detaches the thread specified by thread. By default, all threads are created as joinable unless otherwise specified by the thread attribute object. This function detaches already existing joinable threads. If the thread has not terminated, a call to this function does not cause it to terminate.
Synopsis
#include <pthread.h> int pthread_detach(pthread_t thread thread);
If successful, the function will return 0. If not successful, it will return an error number. The pthread_detach() function will fail if thread is already detached or the thread specified by thread could not be found.
This is an example of detaching an already existing joinable thread:
//... pthread_create(&threadA,NULL,task1,NULL); pthread_detach(threadA); //...
This causes threadA to be a detached thread. To create a detached thread, as opposed to dynamically detaching a thread, requires setting the detachstate of a thread attribute object and using that attribute object when the thread is created.
4.8.4 Using the Pthread Attribute Object
The thread attribute object encapsulates the attributes of a thread or group of threads. It is used to set the attributes of threads during their creation. The thread attribute object is of type pthread_attr_t. This structure can be used to set these thread attributes:
size of the thread's stack
location of the thread's stack
scheduling inheritance, policy, and parameters
whether the thread is detached or joinable
the scope of the thread
The pthread_attr_t has several methods that can be invoked to set and retrieve each of these attributes. Table 4-3 lists the methods used to set the attributes of the attribute object.
The pthread_attr_init() and pthread_attr_destroy() functions are used to initialize and destroy a thread attribute object.
Synopsis
#include <pthread.h> int pthread_attr_init(pthread_attr_t *attr); int pthread_attr_destroy(pthread_attr_t *attr);
The pthread_attr_init() function initializes a thread attribute object with the default values for all the attributes. The attr parameter is a pointer to a pthread_attr_t object. Once attr has been initialized, its attribute values can be changed by using the pthread_attr_set functions listed in Table 4-3. Once the attributes have been appropriately modified, attr can be used as a parameter in any call to the pthread_create() function. If successful, the function will return 0. If not successful, the function will return an error number. The pthread_attr_init() function will fail if there is not enough memory to create the object.
The pthread_attr_destroy() function can be used to destroy a pthread_attr_t object specified by attr. A call to this function deletes any hidden storage associated with the thread attribute object. If successful, the function will return 0. If not successful, the function will return an error number.
4.8.4.1 Creating Detached Threads Using the Pthread Attribute Object
Once the thread object has been initialized, its attributes can be modified. The pthread_attr_setdetachstate() function can be used to set the detachstate attribute of the attribute object. The detachstate parameter describes the thread as detached or joinable.
Synopsis
#include <pthread.h> int pthread_attr_setdetachstate(pthread_attr_t *attr, int *detachstate); int pthread_attr_getdetachstate(const pthread_attr_t *attr, int *detachstate);
The detachstate can have one of these values:
PTHREAD_CREATE_DETACHED PTHREAD_CREATE_JOINABLE
The PTHREAD_CREATE_DETACHED value will cause all the threads that use this attribute object to be detached. The PTHREAD_CREATE_JOINABLE value will cause all the threads that use this attribute object to be joinable. This is the default value of detachstate. If successful, the function will return 0. If not successful, the function will return an error number. The pthread_attr_setdetachstate() function will fail if the value of detachstate is not valid.
The pthread_attr_getdetachstate() function will return the detachstate of the attribute object. If successful, the function will return the value of detachstate to the detachstate parameter and 0 as the return value. If not successful, the function will return an error number. In Example 4.2, the threads created in Program 4.1 are detached. This example uses an attribute object when creating one of the threads.
Example 4.2 Using an attribute object to create a detached thread.
//... int main(int argc, char *argv[]) { pthread_t ThreadA,ThreadB; pthread_attr_t DetachedAttr; int N; if(argc != 2){ cout << "error" << endl; exit (1); } N = atoi(argv[1]); pthread_attr_init(&DetachedAttr); pthread_attr_setdetachstate(&DetachedAttr,PTHREAD_CREATE_DETACHED); pthread_create(&ThreadA,NULL,task1,&N); pthread_create(&ThreadB,&DetachedAttr,task2,&N); cout << "waiting for thread A to join" << endl; pthread_join(ThreadA,NULL); return(0); }
Example 4.2 declares an attribute object DetachedAttr. The pthread_attr_init() function is used to allocate the attribute object. Once initialized, the pthread_attr_detachstate() function is used to change the detachstate from joinable to detached using the PTHREAD_CREATE_DETACHED value. When creating ThreadB, the Detached-Attr is the second argument in the call to the pthread_create() function. The pthread_join() call is removed for ThreadB because detached threads cannot be joined.
|
http://www.informit.com/articles/article.aspx?p=169479&seqNum=8
|
CC-MAIN-2019-26
|
refinedweb
| 2,117
| 65.42
|
Dart: a New Web Programming Experience
JavaScript.
In comes Google Dart, a new JavaScript replacement language. Dart is a ground-up re-imagining of what JavaScript should be. It requires its own runtime environment (which Google makes available for free under the three-clause BSD license) and has its own syntax and libraries. Dart is an object-orientated language with a heavy Java-like feel, but it maintains many of the loved JavaScript paradigms like first-class functions.
So, why have I chosen to use Dart? Good question! I have chosen Dart because it is a clean break from JavaScript, and it has the object-orientated programming style I have come to know and love. Because I have a Java background, the learning curve didn't seem as steep with Dart as it does with JavaScript. Also, because Dart is so new, it gives me a chance to become an early adopter of the language and watch its evolution.
Installing Dart
Before you can program with Dart, you need to grab a copy from. I chose to install only the SDK; however, there is an option to grab the full Dart integrated development environment with the SDK included. Eclipse users will feel right at home with the IDE, because it is based on Eclipse components.
To install the SDK, I just unzipped the files and copied the whole directory to $HOME/bin. Then I modified my path variable to look in the folder I created:
PATH=$PATH:$HOME/bin/dart-sdk/bin
Now I can run dart, dart2js and dartdoc from anywhere.
Language Features
The core Dart language is pretty straightforward. The basic data types available are var (stores any object), num (stores any number type), int, double, String, bool, List (arrays) and Map (associative array). All of these data types are declared in the dart:core library. dart:core is always available and does not need to be imported. Functions also can be considered a data type, because Dart treats them as first-class objects. You can assign functions to variables and pass them as parameters to other functions or write anonymous functions in-line.
For flow control, you have the "usual" if, else if, else, for, while and do-while loops, break, continue, switch and assert. Exceptions are handled through try-catch blocks.
Dart has a lot of support for object-oriented programming. Classes are defined with the class keyword. Every object is an instance of some class, and all classes descend from the Object type. Dart allows only for single inheritance. The extends keyword is used to inherit from a parent class other than Object. Abstract classes can be used to define an interface with some default implementation. They cannot be instantiated directly, but can make use of factory constructors to create the appearance of direct instantiation. Abstract classes are defined with the abstract modifier in front of the class declaration.
Standard Library
Dart ships with an impressive standard library. I use a few of the libraries and classes in my examples here, so it will be helpful to have an idea of what they can do ahead of time. I can't cover all of them, but I cover the ones I personally find most useful.
As I said earlier, dart:core defines all of the core data types that are available. That's not all though! dart:core also contains the regular expression classes that are an invaluable addition to any standard library. Dart uses the same regular expression syntax as JavaScript.
dart:io provides classes that let your program reach out to the world. The File and Directory classes can be used to interact with the local filesystem. The File class also will allow you to open input and output streams on the specific file. If you want to write cross-platform code and allow users to specify the path of a file native to their operating system, the Path class provides a really nice Path.fromNative(String path) constructor that will convert Windows and UNIX paths to their Dart counterparts. Also included in this library are the Socket and ServerSocket classes that can be used for traditional network communications. The HttpServer class allows you to program a Web server quickly. This is great if you want to add a custom rest API to your application. dart:io can be imported and used only on server-side applications, so don't try to use it in your browser apps!
dart:html contains all of the classes necessary to interact with a client browser's document object model. This library is required to write any client-side code that runs in a browser. The library defines two static methods: Element query(String selector) and List<Element> queryAll(String selector). These methods allow you to grab HTML5 elements from the browser's DOM using cascading-stylesheet selectors. (I show an example of this later.)
dart:math, dart:json and dart:crypto provide helpers that are hard to live without. dart:math provides all of the static math methods that programmers have come to expect. dart:json provides the JSON helper class. It has only three static methods: parse(String json), which returns a Map containing the parsed document; String stringify(Object object) and void printOn(Object object, StringBuffer output) can be used to serialize an object into JSON. Any object can be made serializable by implementing a toJson() method. dart:crypto has helpers for performing md5, sha1 and sha256 hashing. There also is a CryptoUtils class with methods for converting bytes to hex and bytes to base64.
Server-Side Programming
Let's jump into Dart by looking at some server-side programming:
import 'dart:io'; void main(){ String fileName = './test.txt'; File file = new File(fileName); var out = file.openOutputStream(); out.writeString("Hello, Dart!\n"); out.close(); }
Does it look pretty familiar? It's not much different from a Java program at this point. You start by importing the dart:io library. This gives you access to the File and OutputStream classes. Next, you declare a main method. Just like Java and C, main acts as the entry point for all programs. The File object is used to hold a reference to a single file on the filesystem. In order to write to this file, you open the file's output stream. This method will create the file if it does not exist or clears the contents of the file if it does. It returns an OutputStream object that then can be used to send data to the file. You write a single string and close the OutputStream.
To run this program, save it to a file called first.dart. Then use the Dart runtime environment provided with the SDK:
$ dart first.dart
When your program is done, you should see a file called test.txt in the same directory. Open it, and you will see your text.
What's interesting about Dart is that all I/O is event-based. This is much in the same style as Node.js. Every time you call a method that performs I/O, it is added to the event queue and the method returns immediately. Almost every single I/O method takes in a callback function or returns a Future object. The reason for this design choice is for scalability. Because Dart runs your code in a single thread, non-blocking asynchronous I/O calls are the only way to allow the software to scale to hundreds or even thousands of users.
Listing 1. wunder.dart
import 'dart:io'; import 'dart:uri'; import 'dart:json'; void main(){ List jsonData = []; String apiKey = ""; String zipcode = ""; //Read the user supplied data form the options //object try { apiKey = new Options().arguments[0]; zipcode = new Options().arguments[1]; } on RangeError { print("Please supply an API key and zipcode!"); print("dart wunder.dart <apiKey> <zipCode>"); exit(1); } //Build the URI we are going to request data from Uri uri = new Uri("" "api/${apiKey}/conditions/q/${zipcode}.json"); HttpClient client = new HttpClient(); HttpClientConnection connection = client.getUrl(uri); connection.onResponse = (HttpClientResponse response) { //Our client has a response, open an input //stream to read it InputStream stream = response.inputStream; stream.onData = () { //The input stream has data to read, //read it and add it to our list jsonData.addAll(stream.read()); }; stream.onClosed = () { //response and print the location and temp. try { Map jsonDocument = JSON.parse(new String.fromCharCodes(jsonData)); if (jsonDocument["response"].containsKey("error")){ throw jsonDocument["response"]["error"]["description"]; } String temp = ↪jsonDocument["current_observation"]["temperature_string"]; String location = jsonDocument["current_observation"] ["display_location"]["full"]; print('The temperature for $location is $temp'); } catch(e) { print("Error: $e"); exit(2); } }; //Register the error handler for the InputStream stream.onError = () { print("Stream Failure!"); exit(3); }; }; //Register the error handler for the HttpClientConnection connection.onError = (e){ print("Http error (check api key)"); print("$e"); exit(4); }; }
In Listing 1, you can see this evented I/O style put to work with the
HttpClientConnection object returned by the
HttpClient.getUrl(Uri url)
method. This object is working in the background waiting for a response
from the HTTP server. In order to know when the response has been received,
you must register an
onResponse(HttpClientResponse
response) callback method.
I created an anonymous method to handle this. Also notice that toward the
bottom of the program, I register an
onError() callback
as well. Don't worry;
all of the callbacks are registered before the HTTP transaction begins.
>.
|
http://www.linuxjournal.com/content/dart-new-web-programming-experience?quicktabs_1=0
|
CC-MAIN-2015-22
|
refinedweb
| 1,568
| 65.62
|
note baku <p> Well, that's 99% true :-) </p> <p> <em>If</em> you make the assumption that the user has JavaScript (but code for the case where they don't, as well!), you could open up a new, miniature browser window (with <code>window.open</code>) and set its URL to something like <code></code> with an HTTP <code>Refresh: </code> header in that script. If you refresh every 3-5 seconds, you could display a 'status bar' and/or percentage/number of bytes/transfer speed... </p> <p> Just link the JavaScript to the form's <code>submit</code> button. (Sorry, my E-262's really rusty or I'd try to offer some sample code, but I'd probably do more harm than good in this case! :-) ) -- I forget which way it goes, but if you return (true | false ? ) from your <code>window.open</code> call in the <code>onClick</code> (?) handler, the form won't submit, but run your code instead, so make sure to RTFM :-) </p> 57335 57357
|
http://www.perlmonks.org/?displaytype=xml;node_id=57407
|
CC-MAIN-2015-14
|
refinedweb
| 172
| 82.04
|
Broken section links
- 2.8 Subpages in the main namespace
- 2.9 Drop of i686 support
- 3 Bot requests)
- I've created a draft in my user pages User:ToxygeneB/Perl. If anyone is still interested, please have a read and make any comments as necessary. If it is considered "good enough" I will move it to a wiki page. ToxygeneB (talk) 01:58, 10 December 2017 (UTC)
- Since your page was deemed "good enough" to move, should this section be stricken through or removed? Perl now exists, and recommendations for improvements can go on its respective page. CorporalKobold (talk) 22:59, 4 February 2018 )
- See User:Isacdaavid/Linux_Console for a newer draft. -- Alad (talk) 09:14, 28 August 2016 article status
Gummiboot
Gummiboot is included in systemd since 220-2 as systemd-boot. Relevant search: gummiboot -- Lahwaacz (talk) 14:05, 30 August 2015 [5] [6],)
Template:Broken package link has been created so this section can be probably closed. -- Lahwaacz (talk) 19:37, 4 February 2018 (UTC)
Subpages in the main namespace
Subpages in the main namespace are finally enabled [7],)
Drop of i686 support
Following [8] I've done these edits for the moment:
There are so many other articles that need updating, and also the edits above will need to be amended after November 2017. I think it's better to decide here whether we remove all the i686 content immediately, or we keep it until the final deprecation and do the cleanup then.
— Kynikos (talk) 08:57, 26 January 2017 (UTC)
- Hi, I think for some of content it will depend on further decisions of the devs along the timeline, e.g. will there be changes to
arch=options. We should wait a little to see how the migration plan looks like. IMHO it is useful to start updating once a topic is clear, i.e. before the deadline. Another moving target is whether a community effort to keep i686 somewhat establishes itself; any such would have an impact on what to change how.
- In general I believe the related content changes will be so wide ranging that we should open a Archwiki:Requests/Drop of i686 support (or a top-level link like Archwiki:Drop of i686 support - easier to crosslink) to link to from here. Better to keep overview.
- --Indigo (talk) 09:32, 27 January 2017 (UTC)
I think it's good to keep Migrating between architectures around, it's also linked at least from the FAQ.
— Kynikos (talk) 08:57, 26 January 2017 (UTC)
About Makepkg#Build 32-bit packages on a 64-bit system and Install bundled 32-bit system in 64-bit system I'm not sure, perhaps they may be useful for somebody during the transition?
— Kynikos (talk) 08:57, 26 January 2017 (UTC)
- schroot redirects to Install bundled 32-bit system in 64-bit system. I think we should continue to have a page showing an example of setting up an schroot. It's sometimes useful to run Fedora, Ubuntu, etc in schroot. I suppose the article should be altered (and re-titled) to use something other than 32-bit Arch as the example. Bobpaul (talk) 17:46, 17 July 2017 (UTC)
-)
For some x86_64 capable hardware there are 32-bit UEFI restrictions. Example section: Unified Extensible Firmware Interface#UEFI Firmware bitness.
It needs to be checked whether
i386.efi bootloader files will continue to be built after i686 is dropped (FS#52772). Depending on result, it may be useful to rebase Category:Boot loaders content to x86_64 early on?
--Indigo (talk) 08:45, 30 January 2017 (UTC)
- Turns out my prior research was bad, 32-bit efi files are not packaged anyhow. Hence, users requiring those need to generate them, see FS#52772 for details. Still it is useful to weave this info in when references to i686 are eliminated in Category:Boot loaders articles. --Indigo (talk) 09:24, 2 February 2017 (UTC)
Bot requests
Here, list requests for repetitive, systemic modifications to a series of existing articles to be performed by a wiki bot.
|
https://wiki.archlinux.org/index.php?title=ArchWiki:Reports&diff=470176&oldid=450663
|
CC-MAIN-2018-09
|
refinedweb
| 677
| 61.77
|
I reset it and the all the routers in the ATF lab about three times yesterday . Something is configured wrong somewhere that is jamming up the DHCP issuing of IP addresses.
We need to address the way the ATF network is structured, but its a real pain and will take a lot of time with little science dividend.
The gateway router went down sometime since yesterday. I reset it, and it is accessible again.
This morning, the Linksys gateway router was hung. Craig went into the ATF lab to reset it, and foolishly pushed the "Reset" button on the Linksys router.
DO NOT push the reset button on the gateway router! This factory-resets the settings of the router, and does not cycle the internet connection as Craig expected. To cycle the internet connection, simply unplug and replug the power cable.
Fortunately, awade had taken a screenshot of the Linksys settings on ws2 in the ATF, and was able to quickly get everything working again.
Craig took some similar screenshots and is posting them here for future reference.
Here they are for future reference.
Somehow I lost access to the 3com switch in the PSL lab. The default config IP was lost and from there we have been unable to edit its settings.
The 3com model 2920-SFP switches don't have a reset button. They can only be reset from the serial port. Larry Wallace lent me a USB serial to RJ45 cable but I have been unable to obtain a connection through a terminal PuTTY session until now.
The instructions for resetting the router can be found HERE
I found that serial connections through MacOs terminal 'screen' utility just returned random characters. It turns out that the problems with the USB serial connection were the baud rate. In this case it must be set to 38400.
To launch a serial USB session with a converter device. Run
> ls /dev/*usb*
then plug in the USB converter device (USB to serial DB9 + DB9 to RJ45) and run the above command again to see what appears when the device is initialized. You should see something like /dev/cu.usbserial-xyz appear, where cu.usbserial-xyz is serial converter you want to connect to.
Then launch screen with
> screen -L /dev/cu.usbserial-A4007BdM 38400 -L
Power cycle the 3com router and it should return a bunch of sensible startup dialog like
Starting......
************************************************************************
* *
*
As instructed hit Control-b (Not command-b) then follow the instructions in the LINK above. Note that where they say "<blank>" for password they actually mean enter nothing.
From there you can configure IP addresses etc from command line. However, it is probably just easier to let the top level router DHCP allocate an IP address and then do it directly from the browser.
YOU MUST SET A NEW ADMIN PASSWORD
FB4 is down. The DAQD appears to be running on the machine but no data is being written to file.
I was looking yesterday and discovered that FB4 had stopped. Some digging revealed that the last recorded frames were from May-29.
We started migrating equipment for the FB4 Cymacs to the QIL on Friday. See attached list and images.
I moved a new Dell Precision 3430 onto the lab bench near the door. It's replacing the older unused machine that was in that spot (IP=10.0.1.33). The intention is for the new machine to provide MEDM (sitemap) access to the QIL cymac and to run the full CDS utils suite (awg, NDS, etc.).
There are two operating systems that have CDS package support: Debian 9 and Scientific Linux 7. Unfortunately neither of these operating systems is officially supported for this model computer according to Dell.
I attemped to install base Debian 9.9, which can be done successfully and booted. However, the installer is unable to locate the drivers for the network card, leaving the machine without network capability. This is likely because the network card (Intel i219-LM) is brand new and support hasn't been incorporated into the distributions yet. I next tried installing the latest weekly snapshot (testing build) of Debian with additional commercial firmware included. This time the network card was recognized, and the installation appeared to complete entirely successfully. However, the system then failed to boot the new OS. I tried first just reinstalling the GRUB (boot) loader, then entirely reinstalling the OS, both with the same result. Something in the test build is preventing the hard disk from being recognized as a bootable device.
I have one more idea to try next week: Again install base Debian 9.9 (without network capability), then attempt to manually install the i219-LM drivers provided by Intel.
I finally succeeded getting Debian installed on the new workstation with a working network card. I installed Debian 10.0, which was just released last week and will be supported for five years. After installing the OS, I
The user name is controls as usual and it has the standard W. Bridge password. The lscsoft repo for Buster (Debian 10.0) is still missing many packages, so I installed the cds packages for Stretch (Debian 9.9) instead. They seem to be compatible with 10.0 as far as I can tell. The machine is at the same IP as the one it replaced, 10.0.1.33.
To be able to interface with the cymac, there is still an RTS environment (environment variables and an NFS mount) that needs to be set up. I'm looking into what this involves.
Chris and I set up the LIGO RTS environment on the QIL cymac, using code copied from the cryolab cymac. Specifically, the script /opt/rtcds/rtcds-user-env.sh was edited to match the cryolab version and added to the /home/controls/.bashrc file. We also downloaded a copy of the CDS user apps SVN to /opt/rtcds/userapps/release. Tools like dataviewer and ndscope now work on the cymac (fb4: 10.0.1.156).
opt/rtcds/rtcds-user-env.sh
/home/controls/.bashrc
/opt/rtcds/userapps/release
Our plan is to set up a network drive on a third machine to host the /opt/rtcds directory currently located on the cymac. This way, the directory can be shared with any number of workstations as well as the cymac itself, and the NFS mounts will be unaffected by frequent reboots of the cymac.
/opt/rtcds
I also unsuccessfully attempted to diagnose the race condition that occurs between all the RTS services on boot. Right now the services all start correctly only about 1/3 of the time. I tried setting the order that the services are started and adding a 15-second delay after each service start. However, this did not make things become deterministic.
Two new 27" LED monitors arrived today for the QIL workstation. I've installed them.
I rebuilt one of our old desktop machines to serve as NFS server for the cymac. It is running Debian 10.0 and assigned IP 10.0.1.169 (hostname qil-nfs). I installed a new 2 TB hard drive dedicated to hosting the LIGO RTS software and frame builder archive, which is shared with all other lab machines via NFS.
I have moved the new machine into the server rack and copied the contents of /opt/rtcds on the cymac into the shared location. Functionality like sitemap and the CDS tools can now be run directly from the QIL workstation (plus any other machine on which we add the NFS mount).
Someone (not me) has recently changed the IP addresses of the lab machines. I see the new assignments are the following:
Recently Duo wanted to make an arbitrary waveform excitation using the QIL cymac, but it wasn't working. An excitation would die after 10 seconds or so, with awgtpman reporting that the data was too far in the future.
It turns out this was caused by a missing leap second in the RTS software. It is now fixed upstream, and we're running a patched version of awgtpman on fb4, until the change propagates to the packaged version.
MATLAB license had expired on QIL-WS2 so I had to activate it again.
Adding fb4:/usr/share/advligorts to QIL-WS2 to /etc/fstab file
Should help access to CDS_PARTS model file in Simulink on QIL-WS2
Except access is denied by FB4
Noticed that the DAC channels were not producing a corresponding output in the real world (I changed the Laser Current FM12 value and got not corresponding change on the laser diode driver display).
Sent the following to Chris: "Can you log into the QIL FB4 workstation to see if there is an issue with the DAC? I restarted the C4TST model last week and I don’t seem to have working DAC outputs anymore. The ADC channels still work and the model appears to be running. It just seems that I can’t output any voltages."
After observing that the "DK" (DACKILL) bit in the state word on the IOP status screen was red, the resolution to this was to restart the IOP and TST models.
Added Simulink > Model-Wide Utilities > Model Info block to c4tst.mdl. Text inside that block is:
#DAQ Channels
FM31_OUT 16384
--
Now following
And it failed. See attached screenshot. Then I copied c4tst.mdl to the simLink directory. Compile still failed.
[Aidan, Jon, Chris W, Ian]
Summary: We rebuilt the Cymacs C4TST today to get FM31_OUT into frames
Main points:
1982 sudo /sbin/rmmod c4tst c4iop
1983 cd /opt/rtcds/caltech/c4/target/c4iop/scripts/
1984 ./startupC4rt
1985 cd ../../c4tst/scripts/
1986 ./startupC4rt
1987 systemctl start rts-awgtpman@c4iop.service
1988 cd .././../
1989 ls
1990 cd gds
1991 ls
1992 cd awgtpman_startup/
1993 ls
1994 ./awgtpman_c4iop.cmd
1995 ./awgtpman_c4tst.cmd
1996 systemctl restart daqd@standiop.service
1997 systemctl
1998 systemctl status daqd@standiop.service
1999 systemctl stop daqd@standiop.service
2000 sudo systemctl restart daqd@standiop.service
Got the DAC working by reactivating entries in the C4TST_cdsMuxMatrix.
No problems with channels 12-14. However, channel 15 doesn't output anything at the AI chassis.
Using channel 14 on the AI chassis with FM15 input into it.
[Aidan]
I was working in the QIL on Friday and I heard a clicking sound coming from the rack where the DAQ is installed. It turned out to be the DC power supply for the AI/AA chassis. One of the voltage was floating around from ~14.2V to ~14.8V and the unit was clicking as it did this. Since the AA/AI chassis expect +/-18V which is regulated down to +/-15V, this was, to use the scientific term, bad.
I set the low voltage channel back to 18V. We have noticed previous drifts DAC channels - it's possible this was the cause.
We should not have a bench power supply installed permanently. Can you install a Sorensen in that rack or use one of the nearby ones?
The attached files are the scripts used to take data during the PD temperature cycling/testing and to retrieve and analyze data after the fact.
#diode name
i=1001
diode=A1
caput C4:TST-FM15_OFFSET 0
sleep 1
while :; do
#-----------------------------------------------------
# dark current
echo =======================
echo ----- TOP OF LOOP -----
# script to maximize the output power of the piezo
import serial
import time
import os, sys, subprocess
import numpy as np
def slowDownJog(ser):
ser.write('1SU50\r\n')
time.sleep(0.1)
# analysis od the A1 JPL PD diode
# Aidan Brooks - 10-Sept-2021
import cdsutils
import numpy as np
import matplotlib.pyplot as plt
import os, glob
import scipy.signal
you can put these in the GIT repo for the QIL Cryo tests that Radhika set up. Otherwise, they'll get lost. And we should probably change autorun to a .py script and document these in the README on the repo..
What I have so far is a scatterometer using two InGaAs photodiodes.
There is a square plate that I found, and screwed onto a rotating plate. I then put that on 2 4inch posts, once I have the motor running I will but it underneath. The laser comes in from the right through the fiber, which is carefully screwed into the table via a fiber organizer. The laser then moves through a collimator, a waveplate polarizer, another polarizer (making sure we have horizontally polarized light), and then reaches a 90/10 beam splitter. 10% of the beam is directed onto a post that is screwed into the square plate, through an iris, and focused on a photodiode. This gives us a baseline of how much power is coming out of the laser. The other 90% of the beam is sent through the chopper, which is being driven at 164Hz and produces a square wave (the 3rd image is from testing the chopper, ch.1 is the chopper driver, ch.2 is the chopped beam). The beam then goes through the silicon sample, where the light transmitted is dumped, and the light scattered is focused via an iris and recorded with another photodetector. I made sure to dump any light that was reflected off of the sample and off of the chopper.
The osciolloscope is triggered off of the choppers driver, which produces a square wave at 164Hz. Initially I looked at the light transmitted through the sample, and then slowly rotated the sample to see how the light changed. After integrating, I was able to see the scattered light at theta = 0, at other angles it quickly dropped to zero (maybe the photodetector isn't sensitive enough, or there is a steep drop off at other angles). For theta = 0, there was a delta of 5.04 mV. The trigger's high was -19.7mV, adn the signal's high was -14.6mV, this is the 4th image.
The raspberry pi is up and running, once I have the camera I should be able to set up a script that triggers the camera and spins the sample using the stepper motor. I would like to test the camera to see what kind of signal I get from it, and if that differs from the photodetectors. I need to find a way to connect the stepper motor to the rotation stage, maybe 3D print an attachable washer I can screw into the rotation stage. Also I will try to 3D print a large box that I can paste beam-dump wrap to so that I can block out room lights. Any other thoughts to improve the experiment are welcome!
Great, its good to see a first setup.
For your power monitoring, it my be a good idea to characterize the W/V conversion of the what you see on the monitor photo diode voltage vs the power incident on the sample.
The Thorlabs PD that you are using will have a certain responsivity and gain, and, your beam splitter will be 90:10 ± 1-2 %;
it is easiest to measure power incident on the sample directly with a power meter and compare voltage on your photo detector for a few different power settings. Make an elog of this.
Separate verbose elogs are good, as soon as you take the data post something. Also be verbose, include model numbers of things like photo detectors, lasers, lenses, oscilloscopes etc, this makes posts searchable: more information is better.
For beam dumping from chopper and other optics, maybe see if you can fit aluminum beam dumps panels to provide maximum coverage.
Now that you have a camera a nice side project would be to get the full 40 mW of your laser and have a look how it scatters of different beam dumping materials.
I've bought some black foil from Thorlabs, some brand new razor blade dumps, black glass dumps and anodized aluminum panels. I have a feeling they not as black as you'd think at wavelengths that matter.
That would be a one day project to make up an imaging lens system for the camera and shine some light on some black stuff. A detailed elog entry of course.
With the data, we're going to need to find something that lets you get files that you can plot in Matlab or python. You do see photo screen shots often in the elogs, but there is a strong preference for logging the data and plotting it nicely.
That way you can also characterize the noise. Maybe you would also like to measure the dark noise of the detector. I can show you how to use the SR785 to do this. (Another thing for your todo list).
The schematic is good, but it would be nice to include dimensions and part model numbers. There are number of different Thorlabs detectors for instance.
Sorry, this is a lot of stuff. But easier to respond on elog altogether.
I noticed that the sample isn't centered along the axis of rotation, how do you think this well affect the measurement? Is there any way to mitigate this?
Good news on the Rasberry Pi, if you're able to get a python package up and running to activate certain pins then maybe we can work out how to
make a buffering/conditioning circuit that will trigger the camera properly from a digital output signal. I've ordered another 16 Gb SD card, but the order is
taking forever in shipping, it should be here in a day or two. Let me know if you need access to Solidworks or other cad stuff; we have a windows box that you can
get a username for and work over screen forwarding. Otherwise onshape is a good free solution for drawing things.
Good work.
Set Up Changes Over the Past 2 weeks:
I built a new cable to connect the raspberry pi to the camera so that we have an external trigger.
Also drilled a hole in one of our rotation stages so that we can fit the motor inside of it, and drive it from the raspberry pi. There is another rotation stage that I took apart to see how it worked, ball bearings went everywhere but I think I got them all back in.
The transistor chip to drive the motor from the pi came in this week, so that should be running by today.
I changed the set up last week, so now the camera is rotating around the sample, instead of the sample mounted on an optics stage with the other optics. This will make motorizing the camera's rotation extremely easy.
When I ran the camera without the lense everything was in black/white and unfocused. The camera now has a lense with a 10cm focal length, an aperture, and a 10cm tube connected to it. I glued two tubes together since we needed a male/male connection between the camera and the lense. I need to play around with the camera settings and get that all together by today, so we have some initial images.
I have the code ready for the raspberry pi already, the only thing left to do is to set up a static ip address on it and connect the stepper motor.
So right now the camera is set up with a 10cm tube, a 10cm focal length coated lense, and an aperture.
I tried to mimic Aidan's experiment, and set up the camera 25 cm away from the sample, then taking 100 images with the laser on, and 100 with the laser off. I then averaged the light images, averaged the dark images, and subtracted them from each other.
Below is the averaged light image:
Below is the averaged dark image:
And this is our subtracted image (it's nasty):
I think what's happening here, is that the tube is causing a major loss in aperture controle, and the focal length on the lense is way too large for a short range image. This is probably causing some major distortions, and just giving a super noisy image. I'm going to look for a lense that has a shorter focal length, and do this a few times today until we get a better image.
Another note is that this is using the factory calibration file (NUC file). I tried to generate my own NUC file earlier last week. This was done by putting the cap on the camera and taking multiple dark shots, then shining the laser directly on the camera and taking multiple light shots. Wimby 3.3 has some script to generate a NUC file from those images, but it only led to the camera's view being completely orange. So I stuck with the factory settings. Seeing as though there's no real difference in the light/dark images in this report, I might try a different NUC file.
So 2 ways I'm trying to clean up this image right now:
1. A new lens system.
I would like to build a telecentric lens system. Right now I have two Newport lenses with focal lengthes of 23.31mm for 1550, with AR coatings for 1000-15000 nm. By placing an aperture at their cofocal lengths, I should be able to create a bi-telectric effect. However I can't find the correct tube length, or an aperture that will fit into a tube (and I don't think Andrew reaaaally wants me to saw one of the tubes in half). So I'm fine will just making a telecentric system for now, and wait for an adjustable aperture/tube to come in from Thorlabs.
Telecentric lens: Just put an aperture at the focal length of the lens
Basically all of the rays that hit the camera will now be parrallel with the optical axis.
2. Calibration of the Camera
I'm going to look into how the Wimby software calibrates its images before I try to do it myself, for now I'm going to stick with the factory calibration. Looking at Aidan's image, I'm guessing that's what he did so I don't want to mess with it too much.
Okay so I tried it again with the new lens setup, and you can ~almost~ see the silicon.
exposure time: 200,000us
# of images with laser on: 100
# of images with laser off: 100
subtracted image:
This, at least does not look so random, the light orange rectangle in the center corresponds with the silicon block. I wrote a code that smooths the image
So I think this would benefit from calibration. I talked with Jigyasa at the 40m yesterday, and she showed me how she calibrated her camera using white matte paper. I'll write up another post explaining that, and the set up that I'll make for it today.
Ok so I should have a calibration file done by today. The idea is to measure the surface scatter distribution from a known scatter element. White paper can be approximated as a lambertian scatter source, meaning that it's BRDF is a constant 1/pi sr^-1.
By definition:
Where Ps is the measured scattered light, Pi is the incindent light on the sample, is the solid angle, and is the angle the light is scattered through. There is a cosine corrected version of the BRDF in Stover that drops the cosine term, and accounts for bulk scatter.
Regardless, a lambertian scatter source is a constant independent of angle:
The calibration function can then be calculated using this constant:
The ARB of the cameara is defined as the sum of the photon counts divided by the exposure time, over the incindent power, this basically gives a ration between the power recorded on the camera to the power incindent on the sample. :
Which means the the calibration function is a ratio of the power of the scattered light and the light recorded by the camera. Once we have this function, we can multiply by the images we have in order to calibrate them.
I have already set up the paper to be used to measure the scatter. However I realize that I messed up the lens set up so I'm redoing that now, should be done with re-measurements in a bit. lots of thanks to Jigyasa.
Results from camera calibration!
After some more direction from Rana I was able to record the BRDF of the white matte paper and compare that with the expected BRDF of a lambertian surface.
With a calibration constant Fc_array of
[ 5.80173338e-13 -8.13104927e-13 8.88762806e-13 -6.81240016e-13
-6.89416046e-13 9.02894516e-13 -8.46491614e-13 4.95011517e-13]
I made the following plot of the data points, and it lines up nearly perfectly:
Fonts too small - and theres no such thing as negative BRDF.
This calibration was done using the setup constructed above, and replacing the sample with a white piece of paper. Following the procedure above resulted in a calibration constant of:
F_c = 3.73*10^{-12} Watt*sec*counts^{-1}*str^{-1}
In the two images above, we have the normalized ARB of the camera plotted against the cosine function, and the BRDF of the sample plotted against an ideal Lambertian BRDF. From the first graph, we see that the ARB falls off along cosine theta as expected. From the second graph we can see that there is a small constant offset between the measured BRDF and the ideal Lambertian BRDF. This is due to the assumption that the reflectivity of white paper is 1. However, typical reflectivity constants of white paper are closer to .8. We can recalculate our calibration constant, using a recorded value of paper reflectivity: .8.
F_c = 2.99*10^{-12} Watt*sec*counts^{-1}*str^{-1}
Finally got an image of the beam through the sample. I thought I had it before, but it was actually the beam reflected by the surface of the scatter. So far the beam through the sample has been imaged. This was done by summing up 100 images of the sample at an exposure of 2000\(\mu s\), and subtracting 100 dark images of the sample.
One can see a small scatter feature in the silicon along the laser's path at (400,200). One can also see the reflection of the laser beam going in and out of the edges of the silicon sample. The red background is due to an error in rounding 0 and the max. The signal to noise ratio of the scatter feature here is about 90:40 counts.
I thought that I was imaging the beam here, but if you rotate it a bit you can see that this is coming out of reflection off of the back image. If you take a laser card and look at the reflected beam, you see that there are actually 2 one reflected off of the front surface and one coming off of the back surface. Going straight through the beam gives a dark image, since it will be coming in normal to the surface. We can see this in the 3rd image below.
.tif files, 16bit
Each image is at a different exposure time, naming convention goes: exposureTimeus_0.tif
I worked on adapting the HDR code to use tiff images, which store 16 bit pixel values, also using matplotlib to view those images.
Below is an image of the beam going through the silicon at the camera's highest exposure time: 100,000 us. In the code plot_tiff.py, I convert pixel space to detector space, and counts to Joules/str. In the image you can see two splotches higher than the background, those are the beam going in and out through the sample.
I also created an HDR image with exposure times: 2000, 4000, 8000, 10000, 20000, 100000 us. The signal looks a bit clearer, some background subtraction might be needed. Also still can't see the laser inside of the silicon.
The highest exposure time on the camera is 200ms, I reran all of the sum_images scripts I had to use the tiff images with it. The first image is a background subtracted image of the beam through the sample. The second image is the HDR code run with all subtracted background images at all of the exposure times up to 200ms. The first image is obviously less saturated than the HDR image, with a higher signal to noise ratio.
I think I've finally imaged scatter in the silicon sample.
This image was taken as a tiff with the Wimby camera, with an exposure time of 200ms, and all of the roomlights off. The camera is looking at the sample from an angle, so you can see the beam going through the back of the sample. I first summed 100 images, and then subtracted the sum of 100 background images.The background image was taken by just turning the laser off. Background subtraction was able to get rid of hot pixels.
On the side you see 2 bright spots, they are in a white box which I drew on the image. You can tell that these are intrinsic to the sample and not just noise, I repeated the same process at 5 different angles, 3 are shown below. Near a scattered angle of 90 degrees you can't see the scatter anymore.
Images that created this image:
folders 1-5 : each contain 100 tif images of the sample from 5 different angles
folder dark: containt 100 dark images of the sample with the laser off
folder outputs: graphs of the outputted background subtracted files
So this is an image of the silicon with my iphone camera from about the same angle that the camera was viewing it.
Assuming Rayleigh Scattering (which is a rough approximation), I calculated how much scatter would come from SiO2.
n = 1.431, d = 100nm
The rayleigh cross section is 2.36*10^-18 m^2
For a number density of about 10 si02 scatter sites/ volume of silicon, N = 1.736 * 10^5 cm^-3
N*cross section = amount of light scatted per distance traveled
That's 4.09*10^-15 ammount of light scattered per centimeter. The distance between the silicon and the lens is 9cm
Over 9cm the intensity of light recorded is 1.87*10^-13 Watts.
Our calibration constant is 2.99 *10^-12 counts/m^2/str.
This means that for a scatter source of this size, index of refraction, and at 1550, the camera will record less than 1/10th of a count.
Moving Forward:
I will get a signal before I leave Saturday morning! It will happen!
I'm going to play around with the distance between the lens and the silicon, try to really zoom in on the scatter. If that doesn't work, maybe put a larger lens in front of it. I don't know, I'll do anything. I'm desperate. :D
So Rana and I talked about moving the camera closer to the Silicon to try and observe scatter. I varied the distance by 1cm increments, refocused the camera, keeping the camera's angle at 30 degrees from the normal. These images are being taken from the same angle as the image I uploaded yesterday in the visible. I will attach the zipped folders of the raw images in another elog. These subtracted images use the summed dark image file that is attached here, and are a result of 100 images at that distance being summed, and the dark image being subtracted from it. There seems to be some over subtracting here, since there are values less than zero. The dark image was taken at the Silicon with the laser turned off.
So I talked to Aidan this morning!
So I took the original image of the Silicon, 16.5cm away from it, 200ms exposure. I took 100 images with the laser on, and 100 with the laser off. I then summed the "laser on" images and the "laser off" imags. I subtracted the laser off images from the laser on images. The background subtraction code now accounts for oversubtraction, so any oversubtracted pixels are set to zero. This produced the first image.
I then made a graph that plots the sum of each row over the y-axis, to see where the peak counts were. This is the first plot. This showed me that the laser was between pixel space 250 and 320. I then summed over the x-axis the laser space. The edges gave a high pixel count of over 200, while there was a lower distribution between 20-100 counts. I then rescaled the colormap to show between 0 and 100 counts, multiplied by the calibration factor, and that gives us the image of the laser through the sample below.
The beam is around 1.8*10^-10 W/str, which is a few factors off what I calculated yesterday. I will look into that...
I've restarted the Diabolo and am checking the alignment into fibers. The current configuration coming out of the WOPO breadboard is a fiber 50:50 beam splitter followed by two matched F240APC-1064 nm fiber collimators. There is a HT532HR1064 dichroic mirrors in each of the split arms remove any remaining residual green. The plan is to use a single NF1811 in one arm to see if we can see SQZ out at RF. It will be lossy and susceptible to RIN, but we will be measuring at very high frequency.
Power of 1064 nm after the power-control PBS is 3.12 mW, at the other end of the fiber I am seeing 300 uW. At the output of the HD fiber colimators there is an even split of about 148 uW: about what we should expect. I will try to check the alignment tomorrow and see if I can identify shot noise on the NF1811 above the dark noise. I haven't don the calculations, will check these number tonight.
---
I also tracked down the Newport 3040 temperature controller (found in the PSL lab). I've reattached this to the WOPO butterfly mount and am able to get a temperature readout from the 10kΩ thermistor with a 10 µA test current (this should deliver 0.1V to the NP3040 ADC). There is an option for 100 µA excitation of the sensor (have used this in the past), but I figured less current means less self heating. Not sure what the situation is with S/N inside the box, its an expensive mystery.
Settings on the Newport 3040 are basically the same as before, see ATF:2124, for good measure here are the full settings list:
The NP3040 does give you explicit gain levels for P and I terms in the feedback loop real values. It just has mystery numbers 0.2, 0.6, 1, 2, 3, 5, 6, 10...300 with either "fast" or "slow". I used 2 Fast, and then gain 10 Fast. Integration doesn't seem to be aggressive enough as its not reaching the set point. Any more proportional gain and it overshoots and hits the shutdown rail on loop startup. A current of 0.4 A is needed to reach a set point of 61.93 C, so there is plenty of actuation headroom. Its not an ideal PID loop but I'll leave this for now, it is enough to just move the set point a little higher.
There is a 1064 nm mirror mounted on a PZT just before coupling into the fiber. Wires have been soldered to a BNC and solidly mounted to a L-bracket on the table. I have obtained a thorlabs HV driver that can do upto 150 V from 10 V . There is an adjustable range with a switch on the back (75V, 100V or 150V), I need to check the voltage range allowable for this PZT before powering up. The plan is to scan 1064 nm phase over a few wavelengths to scan the detected SQZ phase. About 100V will do it.
Something to check is the impact of banana shapped motion of the mirror+PZT, in the past this changed power through modecleaners by misaligning with the voltage scan. However, that was on very long PZT stacks. Might expect a similar effect coupling into fiber, its just something to calibrate out in the baseline shotnoise curve as a function of scan voltage.
I haven't checked the 532 nm coupling efficiency or made a shot noise measurement. I have a NF1811 + power supply and will try to look at this tomorrow with a spectrum analyzer.
Summary, here are some numbers relating to requirements for for the WOPO squeezing detection:
In order to obtain some coherent light inducing some measurable shot noise, 1064 nm light is coupled in to fiber and injected into one of the legs of the fiber 50:50 beam splitter; the other leg of the 50:50 splitter is to be connected to the WOPO directly. If we can see shot noise then injecting a 1 dB squeezed state with 50% loss should give use a roughly 0.47 dB of squeezing. The variance of the prepared state will go as
where is the total loss from the point of amplification. Computing dB of squeezing is then found by normalizing the squeezing the the LO shot noise level, taking 10*log10(V_in/V_shot).
Initially I will try to see squeezing with a single detector out of one of the legs of the fiber beam splitter. This means a 50% hit in terms of loss and also that the detector is susceptible to intensity noise of the 1064 nm local oscillator (although we might expect this to be much low in the >1MHz range). With about 1 mW of light on an ideal (eta=1, lossless) photo diode we should we photocurrent of order
The shot noise current at this power is given by:
We want to look at the quantum noise at around 1 MHz and for it to be above all of the typical noise sources with reasonable margin (i.e. 6 to 10 dB clearance). A New Focus 1811 detector is an OK choice as its quoted dark noise, or NEP, is 2.5 pW/sqrtHz. This is given for peak responsivity 1.05 A/W @ 1550 nm. Scaling to equivalent NEP at 1064 nm amounts to rescaling NEP by the relative responsivity ratio
The equivalent current is 2.6 pA/sqrtHz. Which gives a shot noise clearance above PD dark noise of 8.0 dB. For order of 10 dB shot noise clearance we would need 2.5 mW: this is still within a factor of two of the maximum power of the detector.
At the moment I can't find at spare working NF1811. Here are some options that don't work (for the record):
A New Focus 1611 is not such a good choice. Peak responsivity NEP is 20 pW/sqrtHz; gain is also lower 700V/A. Here the current dark noise for the given responsivity of 1.05 A/W is is 21 pA/sqrtHz. To get shot noise equivalent to detector dark noise we would need 1.48 mW. To get to 10 dB clearance I would need 148 mW. So no good here.
The Thorlabs PDA10CF has a NEP of 1.2e-11 W/sqrtHz @ 1.04 A/W. At this current noise of 12.5 pA/sqrtHz we would need 58 mW for 10 dB clearance.
Maybe...
Looking at Zach's M2 ISS board (ATF:1888) it looks like it will clear the dark noise with 10 dB clearance, but nobody can tell me where that already built unit is or the spare PCB. From what I know, the main issue with ordering more is sorting out whether using a MAX333A in place of the MAX333 will have sufficiently low through resistance (the MAX333A is about 20 Ω, compaired to the 100Ω MAX333 that Zach used in the initial test model. There was also an issue with the size of some of the inductors compared to the design footprint, not sure if that is resolved. Its also not RF.
At the moment with the best alignment (30 minutes effort) the efficiency of coupling into the fibers is 75% for 1064 nm and 50 % for 532 nm. The 50:50 fiber beam splitter appears to have close to 1-2% loss. Its rated up to 1W so we can easily get 1-10 mW out the other end.
Some more effort needs to be put into the 532 nm in coupling. We will need order 10s mW. I just don't want to burn the ends by just jamming a bunch of power in. Need to check the cleanness of fiber ends with a microscope before upping the power: it seems this is how the last fibers were damaged.
Today, I studied noise reduction in transimpedance amplifiers in Jerald Graeme's book, and I intend to use the techniques presented in this text to improve my photodiode circuits. In addition, I learned how to model photodiodes in LTspice and simulate noise within the circuits. Afterwards, I attended laser safety training, and then my mentors and I cleaned up the lab area where I will be working. We began setting up the laser and optical elements that will be used in the construction of the balanced homodyne detector, and I learned how to clean optical elements with First Contact.
One of the factors we're taking into account when figuring out the optimal fiber cable length to use in the 2um laser characterization project is the power loss present as a function of such length. Andrew and I worked through some figures and came up with the following plots, sampling a few values of the attenuation coefficient alpha. The process was relatively straightfoward, we introduced some loss, , into a signal. Thus, at one of the outputs of the MZ, the signal we receive would be:
Next, since we ideally want our signal to be locked at mid-fringe, we take the derivative of the function with respect to frequency and observe the maxima.
In order to best visualize the points at which the slope is of highest sensitivity, we take the derivative once more and observe the zero points.
Through ThorLabs data on the SM2000 fiber optic cable ( ), we came to a good approximation that our attenuation coefficient is approximately 8.63*10^-3 dB/m. The orange line in the above graph is a close approximation to this value, but the sensitivity slope for the approximation we obtained is shown in the following graph:
When considering power loss in the fiber optic cable, the optimal fiber cable length is roughly 116.8 meters. If we are willing to sacrifice roughly 10% of the calculated sensitivity*, then we can drop the cable length to approximately 72 meters.
*This was done by subtracting 10% of the maximum value of the derivative of the output power w.r.t frequency (using the actual attenuation coefficient from ThorLabs). Maximum was 8.776*10^-8 W/Hz , 90% of max = 7.893*10^-8, which falls around 72 meters.
**First elog, critiques are very much welcome!
|
https://nodus.ligo.caltech.edu:8081/QIL/?id=2705&sort=Type
|
CC-MAIN-2022-33
|
refinedweb
| 7,224
| 71.55
|
> To the original poster.... what environment are you running this in? Linux. Xubuntu if that matters. > When I put your program in notepad and run it from the windows command > prompt it works. yeah yeah...same here. After I got the tip that it actually worked, I went into the eclipse directory where the program lives, ran it there from the shell, and it worked. Meaning, I didn't modify anything on the file itself (by accident or on purpose). > But when I paste it into eclipse and run it > eclipse's console, it doesn't work because answer seems to have a > stray '\r' carriage return (CR) and therefore the comparison to 'no' > fails. I get no 'compile' errors there. I get regular execution but it just doesn't change the condition to False at the very end. Therefore it loops forever. I used other values like zeros and ones to make sure I could print the values when the interpreter got down to that line. Everything checked. Just didn't change the condition on the main loop.
|
https://mail.python.org/pipermail/python-list/2008-February/502331.html
|
CC-MAIN-2014-15
|
refinedweb
| 179
| 73.78
|
Post-nonlinear causal models
Algorithm Introduction
Causal discovery based on the post-nonlinear (PNL 1) causal models. If you would like to apply the method to more than two variables, we suggest you first apply the PC algorithm and then use pair-wise analysis in this implementation to find the causal directions that cannot be determined by PC.
(Note: there are some potential issues in the current implementation of PNL. We are working on them and will update as soon as possible.)
Usage
from causallearn.search.FCMBased.PNL.PNL import PNL pnl = PNL() p_value_foward, p_value_backward = pnl.cause_or_effect(data_x, data_y)
Parameters
data_x: input data (n, 1), n is the sample size.
data_y: output data (n, 1), n is the sample size.
Returns
pval_forward: p value in the x->y direction.
pval_backward: p value in the y->x direction.
|
https://causal-learn.readthedocs.io/en/latest/search_methods_index/Causal%20discovery%20methods%20based%20on%20constrained%20functional%20causal%20models/pnl.html
|
CC-MAIN-2022-40
|
refinedweb
| 137
| 58.69
|
Just a few things that I have found over the last few days that may help make your products easier for people to use.
1. Change'Adafruit_i2c.py' to take a 1 or 0 for the i2c driver so that smbus is not needed in externial code:
Original:
def __init__(self, address, bus=smbus.SMBus(0), debug=False):
self.address = address
self.bus = bus
self.debug = debug
2. New:
def __init__(self, address, bus=1, debug=False): # rev 2
self.address = address
self.bus =smbus.SMBus(bus)
self.debug = debug
3. Servo demo:
a. You say in the tutorial that if you have rev 2 boards, that you must change:
self.i2c = Adafruit_I2C(address)
to
self.i2c = Adafruit_I2C(address, bus=smbus.SMBus(1))
In order for the tutorial to work 'import smbus' must also be added to the
Adafruit_PWM_Servo_Driver/Adafruit_PWM_Servo_Driver.py
Should be modified as per 1. above
b. ^C should shut down the demo, not leave it running! Haven't dug into the code yet.
4. I2c.py:
a. Adafruit_I2C.py file should be the same in:
/Adafruit-Raspberry-Pi-Python-Code/Adafruit_I2C
as in:
/Adafruit-Raspberry-Pi-Python-Code/Adafruit_BMP085(and elsewhere, and it's not!)
should use the same base file.
Seriously screws up the tabs!!!!!
|
https://forums.adafruit.com/viewtopic.php?f=26&t=35060&start=0
|
CC-MAIN-2016-18
|
refinedweb
| 209
| 71.41
|
The unsafe package
SafeHaskell introduced the notion of safe and unsafe modules. In order to make as many as possible modules "safe", the well-known unsafe functions were moved to distinguished modules. This makes it hard to write packages that work with both old and new versions of GHC. This package provides a single module System.Unsafe that exports the unsafe functions from the base package. It provides them in a style ready for qualification, that is, you should import them by
import qualified System.Unsafe as Unsafe
The package also contains a script called rename-unsafe.sh. It replaces all occurrences of the original identifiers with the qualified identifiers from this package. You still have to adapt the import commands. It uses the darcs-replace-rec script from the darcs-scripts package.
Properties
Modules
- System
Downloads
- unsafe-0.0.tar.gz [browse] (Cabal source package)
- Package description (included in the package)
Maintainer's Corner
For package maintainers and hackage trustees
|
https://hackage.haskell.org/package/unsafe
|
CC-MAIN-2016-26
|
refinedweb
| 161
| 58.99
|
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | ATTRIBUTES | SEE ALSO | NOTES
#include <sys/types.h> #include <sys/stat.h>int stat(const char *path, struct stat *buf);.
This field uniquely identifies the file in a given file system. The pair st_ino and st_dev uniquely identifies regular files.
This field uniquely identifies the file system that contains the file. Its value may be used as input to the ustat() function to determine more information about this file system. No other meaning is associated with this value.
This field should be used only by administrative commands. It is valid only for block special or character special files and only has meaning on the system where the file was configured.
This field should be used only by administrative commands.
The user ID of the file's owner.
The group ID of the file's group.
For regular files, this is the address of the end of the file. For block special or character special, this is not defined. See also pipe(2).
Time when file data was last accessed.().
A hint as to the "best" unit size for I/O operations. This field is not defined for block special or character special files.
The total number of physical blocks of size 512 bytes actually allocated on disk. This field is not defined for block special or character special files.
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The stat(), fstat(), lstat(), and fstatat()functions will fail if:
The file size in bytes or the number of blocks allocated to the file or the file serial number cannot be represented correctly in the structure pointed to by buf.
The stat(), lstat(), and fstatat() functions will fail if:
Search permission is denied for a component of the path prefix.
The buf or path argument points to an illegal address.
A signal was caught during the execution of the stat() or lstat() function. also have the valid value of AT_FDCWD.
The buf argument points to an illegal address.
A signal was caught during the execution of the fstat() function.
The fildes argument points to a remote machine and the link to that machine is no longer active.
A component is too large to store in the structure pointed to by buf..
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | ATTRIBUTES | SEE ALSO | NOTES
|
https://docs.oracle.com/cd/E19683-01/816-0212/6m6nd4n9g/index.html
|
CC-MAIN-2019-47
|
refinedweb
| 396
| 68.06
|
#include <diskLight.h>
Light emitted from one side of a circular disk. The disk is centered in the XY plane and emits light along the -Z axis.
Definition at line 58 of file diskLight.h.
Construct a UsdLuxDiskLight on UsdPrim
prim . Equivalent to UsdLuxDiskLight::Get(prim.GetStage(), prim.GetPath()) for a valid
prim, but will not immediately throw an error for an invalid
prim
Definition at line 70 of file diskLight.h.
Construct a UsdLuxDiskLight on the prim held by
schemaObj . Should be preferred over UsdLuxDiskLight(schemaObj.GetPrim()), as it preserves SchemaBase state.
Definition at line 78 of file diskLight.h.
Destructor.
Returns the type of schema this class belongs to.
Reimplemented from UsdGeomXformable.DiskLight holding the prim adhering to this schema at
path on
stage. If no prim exists at
path on
stage, or if the prim at that path does not adhere to this schema, return an invalid schema object. This is shorthand for the following:
Radius of the disk.
Return a vector of names of all pre-declared attributes for this schema class and all its ancestor classes. Does not include attributes that may be authored by custom/extended methods of the schemas involved.
Definition at line 142 of file diskLight.h.
Compile time constant representing what kind of schema this class is.
Definition at line 64 of file diskLight.h.
|
https://www.sidefx.com/docs/hdk/class_usd_lux_disk_light.html
|
CC-MAIN-2021-10
|
refinedweb
| 224
| 69.28
|
On 4/10/06, Steven Bethard <steven.bethard at gmail.com> wrote: > On 4/10/06, Guido van Rossum <guido at python.org> wrote: > > Are there other proto-PEPs being worked on? I would appreciate if the > > authors would send me a note (or reply here) with the URL and the > > status. > > This is the Backwards Incompatibility PEP discussed earlier. I've > submitted it for a PEP number, but haven't heard back yet: > > > I like this! I hope it can be checked in soon. > This is potentially a Python 2.6 PEP, but it has some optional > extensions for Python 3000 and may be relevant to the > adaptation/overloading/interfaces discussion. It proposes a make > statement such that: > make <callable> <name> <tuple>: > <block> > would be translated into the assignment: > <name> = <callable>("<name>", <tuple>, <namespace>) > much in the same way that the class statement works. I've posted it > to comp.lang.python and had generally positive feedback. I've > submitted it for a PEP number, but I haven't heard back yet: > > > I don't like this. It's been proposed many times before and it always ends up being stretched until it breaks. Also, I don't like the property declaration use case; IMO defining explicit access method and explicitly defining a property makes more sense. In particular it bugs me that the proposed syntax indents the access methods and places them in their own scope, while in fact they become (unnamed) methods. Also, I expect that the requirement that the accessor methods have fixed names will make debugging harder, since now the function name in the traceback doesn't tell you which property was being accessed. I expect that the PEP will go forward despite my passive aggressive negativism; there are possible rebuttals for all of my objections. But I don't have to like it. I wish the community efforts for Python 3000 were focused more on practical things like the effects of making all strings unicode, designing a bytes datatype, a new I/O stack, and the view objects to be returned by keys() etc. These things need thorough design as well as serious prototyping efforts in the next half year. -- --Guido van Rossum (home page:)
|
https://mail.python.org/pipermail/python-3000/2006-April/000704.html
|
CC-MAIN-2022-33
|
refinedweb
| 371
| 73.17
|
Opened 7 years ago
Closed 7 years ago
#14218 closed (wontfix)
Paginator just implement the __iter__ function
Description (last modified by )
Right now, when you want to iter into all the pages of a Paginator object you to use the page_range function. It would be more logical and naturel to use the normal python of doing that by implementing the iter function like that:
def __iter__(self): for page_num in self.page_range: yield self.page(page_num)
Change History (2)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
I'm not sure that's common enough functionality to worry about.
Note: See TracTickets for help on using tickets.
Reformatted, please use the preview button in the future.
|
https://code.djangoproject.com/ticket/14218
|
CC-MAIN-2017-13
|
refinedweb
| 122
| 69.11
|
Dependency injection is all about connecting abstractions with implementations.
Carefully defining dependencies produces a codebase with more abstractions (interfaces/abstract classes) and just more classes in general.
Ideally the abstractions should be independent of the implementations at an assembly level (no reference between the assembly that contains the abstractions and the implementations). This way it is impossible to use a concrete class instead of an abstraction, even by “accident” (be this a co-worker who is not familiar with the code base or your future self, perhaps during a bad day).
Why is this a problem?
Depending on an interface/abstract class is more flexible than depending on concrete classes. Object oriented languages provide ways in which you can replace those abstractions with concrete implementations at runtime. You want to do this as much as possible, since this is the best way to make your codebase flexible and reusable (see this and this).
When a new object is required, it’s dependencies (abstract classes/interfaces it references) need to be assigned concrete classes. This task can be delegated to an IoC container. When an instance of a particular type is requested to an IoC container, it will “inject” (usually through the class’ constructor) the implementations required by that type (it’s dependencies). And those implementations are defined in a set of mappings that can easily be changed.
But how can you then have these mappings without referencing the assemblies with the implementations?
If, for example you create an ASP.NET MVC project and install StructureMap as your IoC container (I usually use the NuGet package StrucutreMap.MVC5)
Install-Package StrucutreMap.MVC5
You get this
Registry class (Registry classes is where you define the mappings in StructureMap).
public DefaultRegistry() { Scan( scan => { scan.TheCallingAssembly(); scan.WithDefaultConventions(); scan.With(new ControllerConvention()); }); //For<IExample>().Use<Example>(); }
This registry class is in your main assembly (the ASP.NET MVC web project). For you to use it this way means that your main assembly has to reference the abstractions (
IExample) and the implementations (
Example).
But if you reference implementation and abstractions, even if you just use the abstractions it becomes easy to use a concrete class where an abstraction should be used instead. In big enough projects with many people working on them this is almost a guarantee. It might feel like a shortcut or it might just be because the person doing the code is not familiar with it. It’s as easy as just writing
new MyConcreteClass(...).
If your assembly does not reference the assembly containing the implementation of the abstract classes/interfaces there’s no chance of this happening.
Setting yourself up for success
If your main assembly does not reference the implementations there is no way of getting this wrong, and it’s very easy to do.
When defining the mappings there’s no way to avoid referencing the abstractions and their implementations, but we can push that out of our main assembly. Sticking with the example of using the ASP.NET MVC as our main assembly, we can it like this (the arrows represent references between assemblies):
Notice that the Abstractions does not depend on anything “concrete” (no implementations), ASP.NET MVC does not depend on anything “concrete” and that Implementations only depends on abstractions (and so does the ASP.NET MVC assembly).
An example will help make this clearer. Imagine you want to create a web page where the user can enter some numbers, click a button and have them sorted. Let’s say that you have this functionality right in your homepage, so if we stick with MVC that usually is your
HomeController.
HomeController has a dependency on something that can sort an array of numbers, let’s call it
ISorter. Because
ISorter is an abstraction we’ll put it in another assembly (Abstractions) and we’ll make the MVC project reference that assembly.
We need to create an implementation for
ISorter, let’s call it
BubbleSort. This is a concrete class so we’ll put it in an another assembly (Implementations). This assembly references Abstractions (it needs to because it needs to get
ISorter from there).
Finally we need to tie all this together and we do it by creating a Mappings assembly. Mappings references both Abstractions and Implementations. Also, the main assembly (MVC project) reference Mappings.
If we use StructureMap the mappings are specified in a registry class, and it will be as simple as this:
using Abstractions; using Implementations; using StructureMap.Configuration.DSL; namespace Mappings { public class MappingsRegistry : Registry { public MappingsRegistry() { For<ISorter>().Use<BubbleSort>(); } } }
You’ll have to configure StructureMap in the MVC project to go looking for this registry. This is how you can do that:
public DefaultRegistry() { Scan( scan => { scan.TheCallingAssembly(); scan.WithDefaultConventions(); scan.With(new ControllerConvention()); //You have to add these two lines, they will add the mappings from the Mappings assembly scan.Assembly(typeof(Mappings.MappingsRegistry).Assembly); scan.LookForRegistries(); }); }
And that’s it. If you follow this structure you can not get it wrong. In the MVC project writing
new BubbleSort() won’t event compile, and better yet, it won’t be possible to do something such as:
public class HomeController : Controller { private readonly ISorter _sorter; public HomeController(ISorter sorter) { _sorter = sorter; } public ActionResult Index() { var bubbleSort = (BubbleSort)_sorter; //compilation error
Because let’s face it, if you have to cast, it means your abstraction is not good enough and you should take care of that, and not use a workaround.
Have a look at a working project that has a page where you can enter numbers and click Sort, and that uses this structure.
git clone
Conclusion
When you follow the dependency inversion principle you want to make sure that you do not depend on anything concrete.
You can do that on willpower alone, or you can set your projects up so that this is guaranteed.
By making sure your main project only references the assembly that contains high-level classes and by using an assembly that “ties” those high-level classes to their implementations you can make sure that no high-level class depends on anything concrete.
|
https://www.blinkingcaret.com/2016/02/03/dependency-injection-without-referencing-implementations/
|
CC-MAIN-2021-04
|
refinedweb
| 1,018
| 54.02
|
Crash in GraphicsMode.get_Default when creating GLWidgetPosted Tuesday, 25 May, 2010 - 17:34 by cdhowie in
I'm attempting to build a simple GLWidget project so that I can experiment with some node layout algorithms, and I get the following right out of the gate:
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object at OpenTK.Graphics.GraphicsMode.get_Default () [0x00000] in <filename unknown>:0 at Gtk.GLWidget..ctor () [0x00000] in <filename unknown>:0
I am running inside of an Xvnc server. glxinfo reports quite a bit of information, and glxgears seems to work just fine, so I doubt it's the GLX extension that's failing somewhere.
So basically... is this a bug?
Re: Crash in GraphicsMode.get_Default when creating GLWidget
It seems that this stems from the fact that Xvnc does not implement either Xinerama or XRandR. Is there a way to work around this in X11DisplayDevice? I find it rather odd that either Xinerama or XRandR would be required at all. I'd expect at least a fallback on another method that is guaranteed to exist in every X server.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
The exception originates in GraphicsMode.get_Default which doesn't use Xinerama/XRandR. Can you please compile a debug version from SVN () and test with that? The stacktrace and debug output should reveal more information on this error.
(OpenTK should work without Xinerama (1999) and XRandR (2001) anyway. It is possible to implement a XF86 fallback (1991, see X11DisplayDevice.cs:201) but I won't be doing that any time soon.)
Re: Crash in GraphicsMode.get_Default when creating GLWidget
Sorry, I should have given more info. I did compile my own version and tinkered (quite a bit actually) with the sources. I don't have the code in front of me at the moment, but I can tell you the basic sequence of events.
GLWidget:.ctor() calls GraphicsMode.get_Default, which calls DisplayDevice.get_Default, which is null because the X11 DisplayDevice factory (I forget exactly what it's called...) doesn't find any devices, in turn because Xvnc has no XRandR or Xinerama. Since GraphicsMode.get_Default does not check the return value from DisplayDevice.get_Default, the NRE is thrown.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
Ah, yes, you are right.
I'll modify X11DisplayDevice to set some sensible defaults when it Xinerama/XRandR aren't available. Chances are xvnc doesn't support XF86 either, so that's the only solution that would allow it to work.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
Sounds good. I know that GLX can provide info on what color depths and bits-per-rgb are available. And in Xvnc, the current video mode may not necessarily represent available GLX modes.
For example, even when running Xvnc with -depth 24, there is no GLX mode with a depth of 24. 16 is the highest depth supported, so even if it did support the XF86 call you might be left with data that cannot be used to create a working GL context anyway.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
I'm expecting exactly the same NRE in slightly different situation. I downloaded OpenTK v1.0 (named October, 6 2010) and unzipped it. I am trying to start Example.exe to see your amazing samples.
The demo shell works, but when I try to call any sample (Julia Set, or any other) I receive the same error.
Then I extracted GameWindowsSimple.cs in separate solution -- the same issue with stack trace like this:
My environment is Ubuntu 10.04 64bit, some nVidia GPU (I just don't remember it's model). glxgears works fine.
I have dual monitor configuration, but it is NOT Xinerama. Just two completely separate workspaces. Is it the root of the problem? How should I overcome the issue?
Have a fast code!
Anton.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
Can you please test with SVN trunk? It contains several fixes for non-xrandr, non-xinerama setups. If it still doesn't work, please file a bug report and I'll look into this.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
Can you please test with SVN trunk?
Nice bet! Svn version is working. =)
I have realized, that there is no way to reference OpenTK.Compatibility.dll, isn't it? There are huge amount of duplicate class name conflicts in this dll.
To be exact, a lot of classes in OpenTK.Graphics namespace conflict with classes in OpenTK.Graphics.OpenGL namespace. Is it by design? Must I avoid using Compatibility assembly? At current point I'd like to use TextRenderer class to print some overlay text.
BTW, I'd like to ask you to keep TextRenderer class (maybe refactored?), because it is extremely easy to use. RenderText sample in OpenTK examples is... well... a little bit ugly. In my case I'd like to print FPS statistics which changes every frame. So it is very unnaturally to prepare full-featured texture to display 4 digits just to dispose it some milliseconds later.
I've experienced FPS about 1000 using this class, so from my POV there is no obvious performance issues with it. I feel it is very useful when printing constantly changing values (physical measurements and other realtime data streams).
Where should we expect OpenTK v1.0.1 with all this sweet bug-fixes? =)
Have a fast code!
Anton.
Re: Crash in GraphicsMode.get_Default when creating GLWidget
OpenTK.Compatibility is there to help port applications from older OpenTK or Tao versions. It's not meant to be used for new applications.
TextPrinter used to take up about 50% of the time I put into OpenTK and I simply don't have enough time to work on it anymore. However, feel free to split and use the code from OpenTK.Compatibility (copy Source/Compatibility/Fonts and Source/Compatibility/Graphics/Text*).
I'm working on the new release.
|
http://www.opentk.com/node/1811
|
CC-MAIN-2016-30
|
refinedweb
| 994
| 61.12
|
Red Hat Bugzilla – Bug 825902
[FEAT] Support for separate namespace for 'hooks' friendly keys.
Last modified: 2013-12-18 19:08:11 EST
Description of problem:
Currently glusterd supports 'hooks' for every operation. Using this, user can execute some scripts 'pre' and 'post' an operation.
We need to support special/separate namespace for few keys which user wants to be passed to these hook scripts, but glusterd need not interpret them. Also, this should not result in a failure in both staging and commiting phase. That way, we can provide more flexibility to users.
patch fixes the issue on master. Any 'user.*' commands will be passed on to Hooks scripts now.
the bug fix is only in upstream, not in release-3.3. Hence moving it out of the ON_QA, and setting MODIFIED (as a standard practice @ Red Hat)
CHANGE: (glusterd: Persisted hooks friendly user.* keys) merged in master by Anand Avati (avati@redhat.com)
|
https://bugzilla.redhat.com/show_bug.cgi?id=825902
|
CC-MAIN-2016-26
|
refinedweb
| 155
| 66.44
|
Log in to like this post! How to use the Calendar control as a Slicer for the XamPivotGrid – part 1 Atanas Dyulgerov / Wednesday, January 04, 2012 Recently we had a customer that had a requirement to build an app using the Infragistics Pivot Grid and filter the data in it using a calendar. He needed to be able to select specific dates that should be included in the results of the pivot grid and show results for selected month, year or all periods. As additional functionality he wanted to hide the totals and all header cells that are not on the level that need to be shown in the Pivot Grid UI. For example if year 2008 is selected in the calendar the hierarchy in the pivot grid should show only the specific months and not the 2008 total, half year and quarters groups and results. This article will show you in two parts how to realize all those requirements – implement a slicer control that has the UI of a calendar, hide total columns and unneeded header cells in the columns and rows. This first part will do the basic filtering based on selected dates and the second part will show the rest. So let’s get straight to it. What is the easiest way to filter the data in the PivotGrid using a control that provides a fast, accurate and easy to comprehend alternative to the standard filtering mechanisms, separate from the PivotGrid’s own UI? The answer to that question is no doubt using a slicer. In short a slicer is a control that lists the available values for a given level in a given hierarchy and allows you to select the ones that you want to include in the results of the bound SlicerProvider (this is the DataSource instance that the PivotGrid uses). You can read details about the existing XamPivotDataSlicer control here:. The default template shows all items in a ListBox and internally the selection of those items triggers the filtering. In our case however we know that those items will be years, months and days and we want them shown on a calendar instead of a ListBox. Retemplating is what we need to do. We’re going to replace the list box with a calendar and hook up the events of the calendar with the filtering logic we need to make it filter. In this article we will create a separate generic control that we’ll call CalendarSlicer. With very little modification and with the same success we can use a UserControl or just work with an existing XamPivotDataSlicer. I hope it will be much cleaner and understandable however to have it as a separate control. The first step is to create an empty control that inherits from XamPivotDataSlicer and edit its template. In the template we’ll add a calendar control and set it a TemplatePart named “PART_Calendar”. Here is how this part of the code in Themes/Generic.xaml will look like: <Style TargetType="local:CalendarSlicer"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:CalendarSlicer"> <mscontrols:Calendar x: </ControlTemplate> </Setter.Value> </Setter> </Style> Note that the local namespace is where you have defined the CalendarSlicer control and mscontrols is where the calendar is: xmlns:mscontrols="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls" If we run the app using this newly created slicer we will have a slicer that looks just like a calendar, but nothing will happen when you start interacting with the calendar. The second step is to initialize the calendar properly and hook up the events we want to work with. To get the instance of the calendar we need to override the OnApplyTemplate method and in it we’ll have the instance easily. Before I show you the code how to do that exactly we need to establish what we need to handle. If you just want to filter based on selected dates and not care about whole months or years you just need to subscribe to the Calendar’s SelectedDatesChanged event. The SelectionMode property on the calendar controls whether you can select single dates only, a whole range or non-continuous ranges of dates. The list of all selected dates is contained in the SelectedDates collection. If you want to take action based on which mode you are in you need to deal with the DisplayMode property and DisplayModeChanged event. The valid values for DisplayMode are: Decade, Year and Month. The event is raised when you click on a year or month, or the button to go up one level in the UI of the calendar. Another thing that is important, especially if you use XMLA data source is the min and max dates that should be displayed in the calendar. They are set through the DisplayDateStart and DisplayDateEnd properties on the calendar. If you allow the display of dates that are outside the min and max date in the analysis server while using an XMLA data source you will get an exception on clicking those dates, because the MDX query constructed by the slicer will be invalid. A normal not retemplated slicer does not allow situation in which no items are selected it will also take care to show all the valid values, but since we’re replacing the ListBox that shows them with a calendar we need to take care of that limitation ourselves. Having said all that I can show you the whole OnApplyTemplate method: public override void OnApplyTemplate() { base.OnApplyTemplate(); var cal = GetTemplateChild("PART_Calendar") as Calendar; cal.SelectedDatesChanged += SelectedDatesChanged; cal.DisplayModeChanged += DisplayModeChanged; cal.DisplayDateStart = this._minDate; cal.DisplayDateEnd = this._maxDate; cal.DisplayMode = this._displayMode; cal.SelectionMode = this._selectionMode; } Note how I get instance of the calendar and then set all the values I’d need. You can replace those with actual values or use properties set from the parent code as in this demo. You’ll be able to see the definitions of those in the full solution at the end of the article. Now that we know when a date is selected in the calendar or the mode has changed, lets do some actual filtering. The third step is to implement the SelectedDatesChanged event handler. The sender for that event handler is the calendar itself. From it we can take the collection of dates that are selected. Once we do that we can cycle through the Items collection of the slicer and check if this date is in the list with selected dates. If the date is selected we need to set the IsSelected property of the FilterSource behind the current item to true. If not – to false. While we are cycling through the list of items we also monitor if there actually are selected items. In the end after the cycle is done we have to call the RefreshGrid method of the DataSource to reflect the changes in the PivotGrid UI. This will be done only if the selection is valid on the server, otherwise there would be an exception. Here is the code that does all this: void SelectedDatesChanged(object sender, SelectionChangedEventArgs e) { if (this.SlicerProvider == null) return; var cal = sender as Calendar; var selectedDates = cal.SelectedDates; bool isNullSelection = true; foreach (var item in this.Items) { DateTime itemDate; if (!DateTime.TryParse(item.DisplayName, out itemDate)) return; if (selectedDates.Contains(itemDate)) { item.FilterSource.IsSelected = true; isNullSelection = false; } else { item.FilterSource.IsSelected = false; } } if (!isNullSelection) (this.SlicerProvider as DataSourceBase).RefreshGrid(); else MessageBox.Show("No data in the server for the selection!"); } Note that we are parsing the DisplayName property of the item. This is the actual caption of the member behind that item. In most analysis databases that will be the date string. In the Adventure Works sample data it is and if you use flat data source it will also probably be like that. But if for some reason this is not a parsable value you will need to access the member of the item and do the comparison in your own custom way. So far we have created a slicer inheriting control that looks like a calendar and on selection of a number of dates in the Month DisplayMode it will filter the data in the pivotgrid, provided that the proper SlicerProvider, Hierarchy Name and Level are specified. In many cases this will be a sufficient functionality for a calendar slicer. If you want to delve more into this scenario lookout for part two of this article in the next few days. We’ll continue on with implementing month and year filtering, expanding to the proper level and hiding unnecessary columns and cells. And you can find the solution with all the code that was explained in this article here: CalendarSlicer Solution When you open the solution make sure you provide the right assembly references to the dlls you have. It does not matter if they are trial or not. Also in MainPage make sure the values for ServerURI, Database, etc are set according to the data you want to experiment with. It is predefined with the Infragistics’ sample data server. If you use your own data the TargetHierarchyName and level index might also need to be changed accordingly. I hope this has been useful and interesting.
|
https://www.infragistics.com/community/blogs/b/atanas_dyulgerov/posts/how-to-use-the-calendar-control-as-a-slicer-for-the-xampivotgrid-part-1
|
CC-MAIN-2018-22
|
refinedweb
| 1,537
| 61.46
|
Importing external functions from a Dynamic Link Library(DLL) that has no exposed methods can easily be accomplished with the right know-how. Any C function, in C++, declared “extern C” can be called directly from CM by using an "import dll" definition. Thus, the basic solution is to write an "extern C" wrapper DLL for the desired DLL.
In the following example a function "add" will be imported into CM from an external DLL named Maths. If the Type Library doesn't exist you can create it by running the following command in the command prompt:
regasm Maths.dll /tlb:Maths.tlb
The header for the wrapper class.
MathsWrapper.h
#import "Maths.tlb" using namespace Maths extern "C" __declspec(dllexport) int add(int addend1, int addend2);
Once the type library is imported the name of the CoClass that holds the function is needed. Visual Studio will be able to expand upon the available typedefs imported from the Maths type library. (Use Maths::)
For this example, the CoClass will be called MathFunc.
MathsWrapper.cpp
#include "MathsWrapper.h" int add(int addend1, int addend2) { int sum; //run Maths.dll add function// CoInitialize(NULL); //Initialize COM MathFuncPointer m; //CoClass Pointer HRESULT hres = m.CoCreateInstance(__uuidof(MathFunc)); //Instantiate if (SUCCEEDED (hres)) { hres = m->addFunc(addend1, addened2); //call desired function a = 0; } CoUninitialize(); //Uninitialize COM return a; }
Finally, the new MathsWrapper.dll can be directly imported and called in CM.
import dll "MathsWrapper.dll" { public int add(int a1, int a2); } public class MathsClass { public constructor(); public int addUp(int an1, int an2) { return add(an1, an2); } }
Instructions on how to make a basic DLL in Visual Studio can be found here.
Please sign in to leave a comment.
|
https://support.configura.com/hc/en-us/articles/360056079914-Import-DLL
|
CC-MAIN-2022-21
|
refinedweb
| 285
| 56.86
|
Hi On Mon, May 21, 2012 at 03:12:58PM +0000, Babic, Nedeljko wrote: > Hi, > > CPU features can be detected at runtime in MIPS architectures, however this is quite cumbersome. > For example, information stored in status registers, can't be read from user space. > On the other hand we could obtain some information from /proc/cpuinfo. This was done in MIPS optimizations for pixman. However, this solution also has problems. For example, some vendors (like Broadcom) have their own version of the /proc/cpuinfo description, where they don't mention at all on which MIPS core these platforms are based on. So, this way of runtime detection would prevent MIPS optimizations although they are available for use. > You can see discussion regarding this problem on pixman mailing list (). that is a shitty design, i would suggest that this is fixed either in kernel or hardware so that theres a portable way to get this information from user space at least in the medium term future. > > > disabling the C functions like this is quite unpractical, consider > > how this would look with 7 cpu architectures (not to mention the > > maintaince burden) > > I think there either should be function pointers in a structure > > like reimar suggested (like LPCContext) or if this causes a > > "meassureable" overhead on MIPS and runtime cpu feature detection > > isnt possible or doesnt make sense for MIPS then > > There are a lot of functions that we optimized that are not optimized for other architectures. For most of these functions structures with appropriate function pointers don’t exit. In order to use this approach I would have to make probably a lot of changes in architecture independent parts of code and I am not sure if this is justifiably to do just for our optimizations. for functions that could be optimized in SIMD SSE* easily, structures with function pointers should be added. Because SSE is runtime detectable and once SSE optimizations for them are written there will be function pointers in structures for them. It would be quite inconvenient if we had a different system for these cases for MIPS in place at that point. > > > there simply could be a: > > > > #ifndef ff_acelp_interpolatef > > void ff_acelp_interpolatef(float *out, const float *in, > > const float *filter_coeffs, int precision, > > int frac_pos, int filter_length, int length){ > > ...C code here ... > > } > > #endif > > > > and a > > > > void ff_acelp_interpolatef_mips(float *out, const float *in, > > const float *filter_coeffs, int precision, > > int frac_pos, int filter_length, int length){ > > ... MIPS code here ... > > } > > > > and in some internal header a > > #define ff_acelp_interpolatef ff_acelp_interpolatef_mips > > This is maybe bather approach for us, but we have a lot of code to submit. This will also create maintenance burden. > > What do you suggest how should we proceed regarding this? what kind of maintaince burden do you see with this case: <>
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-May/124934.html
|
CC-MAIN-2017-30
|
refinedweb
| 458
| 60.14
|
File Adapters fetch or copy files to and from different file
systems. They pick a file from a file system and turn into Framework’s
Message to publish onto a channel and
vice versa. Framework supports a declarative model using
file namespace. It also provides a few classes
for reading and writing files, but using namespace is advised.
The
file namespace provides the
respective elements to create the objects declaratively and easily. In
order to use the
file namespace, you should add the
respective schema urls to your XML file, highlighted in bold
below:
<?xml version="1.0" encoding="UTF-8"?> <beans .... xmlns: ... </beans>
Framework provides two adapters to read and write the files. The
inbound-channel-adapter element is
used for reading the files and publishing them onto a channel as
File payload messages. The
outbound-channel-adapter is used for picking
up the
File payload messages from a
channel, extracting them as files, and writing them to the file
systems.
The following snippet demonstrates the inbound adapter:
<!-- Adapter using namespace --> <file:inbound-channel-adapter <int:poller </file:inbound-channel-adapter> ...
No credit card required
|
https://www.oreilly.com/library/view/just-spring-integration/9781449335403/ch07s02.html
|
CC-MAIN-2019-13
|
refinedweb
| 186
| 65.52
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Store char in field ?
Please tell me how to store char in a field so i can relate it to other field in other model but same module
My py :
from osv import fields, osv import time class notebook_project(osv.osv): _name = "notebook.project" _description = "Notebook Project ID" _columns = { 'project_name' : fields.many2one('project.project', 'Project Name'), 'project_name_id' : fields.char('Project ID', size=32, required=True, store=True), } notebook_project() class notebook_member(osv.osv): _name = "notebook.member" _description = "Notebook Member of Project" _columns = { 'project_member' : fields.many2one('hr.employee', 'Member Name',required=True, store=True), 'project_id' : fields.related('project_name_id', 'project_id', type='many2one', relation='notebook.project', string='Project ID', store=True, readonly=True',), } notebook_member()
I need the value of project ID ( char ) to be stored so I can call it on other model ( in notebook.member , from notebook.project ) . Thanks in advance :)
You need to set the attribute "store" only on functional fields and only if you need to search on them. Fields like char, many2x are already stored. At the moment I can't see any relation between notebook_project and notebook_member so you can't get the project_name_id using a related
|
https://www.odoo.com/forum/help-1/question/store-char-in-field-23999
|
CC-MAIN-2017-09
|
refinedweb
| 217
| 52.26
|
I have create a CMP Bean, I have a "ejbCreate" method with some parameters on the Bean :
public class Share extends EntityBean {
public String name;
public Object ejbCreate(String aName) {
name = aName;
}
}
and I think that the home interface should not have and ejbCreate method !
public interface ShareHome extends EJBHome {
public Share findByPrimaryKey(Object pk) throws ...
}
How can I create a new instance of a share bean from the home, I have no creation method !! Do I need to add one ? and how, I have no PK !
Can someone explain me how to do this ! Thanks
Create a CMP Bean (1 messages)
- Posted by: Christophe Demez
- Posted on: December 11 2000 11:32 EST
Threaded Messages (1)
- Create a CMP Bean by Paula Polinario on December 11 2000 12:05 EST
Create a CMP Bean[ Go to top ]
Hi, Christophe,
- Posted by: Paula Polinario
- Posted on: December 11 2000 12:05 EST
- in response to Christophe Demez
In the interface home you must have a method create and in the Bean, a method ejbCreate. The method create in the home corresponds to the method ejbCreate in the Bean; both must have the same parameters.
About the primary key, if the primary key is only one field, you don't need to create a class - though you can do it if you want -, but if the primary key is compose by two or more fields, you must create a Primary Key class.
Regards,
Poli
|
https://www.theserverside.com/discussions/thread/2598.html
|
CC-MAIN-2020-50
|
refinedweb
| 243
| 70.16
|
Dave wrote: > > This. > > > > The MS VC stdlib.h has > > /* function prototypes */ > > #if _MSC_VER >= 1200 > _CRTIMP __declspec(noreturn) void __cdecl abort(void); > _CRTIMP __declspec(noreturn) void __cdecl exit(int); > #else > _CRTIMP void __cdecl abort(void); > _CRTIMP void __cdecl exit(int); > #endif > > > if you are interested Thanks. Can someone with access to VC try the patch below? board.o should get smaller if the patch is effective. Arend Index: engine/board.h =================================================================== RCS file: /cvsroot/gnugo/gnugo/engine/board.h,v retrieving revision 1.9 diff -u -p -r1.9 board.h --- engine/board.h 12 Apr 2004 15:22:27 -0000 1.9 +++ engine/board.h 20 Apr 2004 18:30:59 -0000 @@ -392,7 +392,15 @@ void simple_showboard(FILE *outfile); /* Our own abort() which prints board state on the way out. * (pos) is a "relevant" board position for info. + * + * Marking it "noreturn" allows better optimization, reducing the cost + * of leaving assertions enabled all the time. */ +#ifdef __GNUC__ + __attribute__((noreturn)) +#elif (defined(_MSC_VER) && _MSC_VER >= 1200) + __declspec(noreturn) +#endif void abortgo(const char *file, int line, const char *msg, int pos); #ifdef GG_TURN_OFF_ASSERTS
|
http://lists.gnu.org/archive/html/gnugo-devel/2004-04/msg00148.html
|
CC-MAIN-2015-06
|
refinedweb
| 181
| 69.38
|
Create a very large FITS file from scratch¶
This example demonstrates how to create a large file (larger than will fit in
memory) from scratch using
astropy.io.fits.
By: Erik Bray
License: BSD
Normally to create a single image FITS file one would do something like:
import os import numpy as np from astropy.io import fits data = np.zeros((40000, 40000), dtype=np.float64) hdu = fits.PrimaryHDU(data=data)
Then use the
astropy.io.fits.writeto() method to write out the new
file to disk
hdu.writeto('large.fits')
However, a 40000 x 40000 array of doubles is nearly twelve gigabytes! Most systems won’t be able to create that in memory just to write out to disk. In order to create such a large file efficiently requires a little extra work, and a few assumptions.
First, it is helpful to anticipate about how large (as in, how many keywords) the header will have in it. FITS headers must be written in 2880 byte blocks, large enough for 36 keywords per block (including the END keyword in the final block). Typical headers have somewhere between 1 and 4 blocks, though sometimes more.
Since the first thing we write to a FITS file is the header, we want to write enough header blocks so that there is plenty of padding in which to add new keywords without having to resize the whole file. Say you want the header to use 4 blocks by default. Then, excluding the END card which Astropy will add automatically, create the header and pad it out to 36 * 4 cards.
Create a stub array to initialize the HDU; its exact size is irrelevant, as long as it has the desired number of dimensions
data = np.zeros((100, 100), dtype=np.float64) hdu = fits.PrimaryHDU(data=data) header = hdu.header while len(header) < (36 * 4 - 1): header.append() # Adds a blank card to the end
Now adjust the NAXISn keywords to the desired size of the array, and write
only the header out to a file. Using the
hdu.writeto() method will cause
astropy to “helpfully” reset the NAXISn keywords to match the size of the
dummy array. That is because it works hard to ensure that only valid FITS
files are written. Instead, we can write just the header to a file using the
astropy.io.fits.Header.tofile method:
header['NAXIS1'] = 40000 header['NAXIS2'] = 40000 header.tofile('large.fits')
Finally, grow out the end of the file to match the length of the data (plus the length of the header). This can be done very efficiently on most systems by seeking past the end of the file and writing a single byte, like so:
with open('large.fits', 'rb+') as fobj: # Seek past the length of the header, plus the length of the # Data we want to write. # 8 is the number of bytes per value, i.e. abs(header['BITPIX'])/8 # (this example is assuming a 64-bit float) # The -1 is to account for the final byte that we are about to # write: fobj.seek(len(header.tostring()) + (40000 * 40000 * 8) - 1) fobj.write(b'\0')
More generally, this can be written:
shape = tuple(header['NAXIS{0}'.format(ii)] for ii in range(1, header['NAXIS']+1)) with open('large.fits', 'rb+') as fobj: fobj.seek(len(header.tostring()) + (np.product(shape) * np.abs(header['BITPIX']//8)) - 1) fobj.write(b'\0')
On modern operating systems this will cause the file (past the header) to be filled with zeros out to the ~12GB needed to hold a 40000 x 40000 image. On filesystems that support sparse file creation (most Linux filesystems, but not the HFS+ filesystem used by most Macs) this is a very fast, efficient operation. On other systems your mileage may vary.
This isn’t the only way to build up a large file, but probably one of the safest. This method can also be used to create large multi-extension FITS files, with a little care.
Finally, we’ll remove the file we created:
os.remove('large.fits')
Total running time of the script: ( 0 minutes 0.000 seconds)
Gallery generated by Sphinx-Gallery
|
http://docs.astropy.org/en/stable/generated/examples/io/skip_create-large-fits.html
|
CC-MAIN-2019-26
|
refinedweb
| 696
| 66.13
|
Creating a Component-Based Application with Rails
- 2.1 The Entire App Inside a Component
- 2.2 ActiveRecord and Handling Migrations within Components
- 2.3 Handling Dependencies within Components
This chapter excerpt tells the story of creating a full Rails application within a component. From the first steps to migrations and dependency management, it covers the common pitfalls of the unavoidable aspects of component-based Rails.
Save 35% off the list price* of the related book or multi-format eBook (EPUB + MOBI + PDF) with discount code ARTICLE.
* See informit.com/terms
Photo: mattiaath/123RF
In preparation for creating your first'
2.1 our, we. See Appendix A for a full explanation of these creation parameters and to learn how to switch the tests to RSpec. 1, we notice that among the many gems that are being used, we also see our our, our local filesystem). Commonly, path is used only for gems under development, but as we will see, it works just fine for use in CBRA applications.
Back to AppComponent. While it is now hooked up within, our welcome controller is created inside the AppComponent namespace. See Appendix B for an in-depth discussion of the folder structure and the namespace created by the engine.. See Appendix B for a discussion of how routing works and what options you have.
./config/routes.rb
1 Rails.application.routes.draw do 2 mount AppComponent::Engine, at: "/" 3 end
That’s all. Let’s start up our server!
Start up the Rails server. Execute in ./
$ rails s =>
Now, when you open with a browser, you should see (as in Figure 2.1) your new welcome controller’s index page. That wasn’t too bad, was it?
Figure 2.1. Your first CBRA-run web page
Having the component separated from the container application also allows us to draw a component diagram (see Figure 2). In subsequent sections of the book we will see how we create a web of components with dependencies among them.
Figure 2.2. Our first component diagram
Generating this graph yourself at this stage of the app is a bit tricky. It is the way we are referencing the component in Gemfile currently. If you still want to do it, there are a couple of steps involved.
First, you will need to install the cobradeps gem (), a gem I wrote to allow for the generation of Rails component diagrams. This gem depends on graphviz (), an open-source graph visualization software. You install these two applications as follows (assuming that you are on OSX and are homebrew, which I recommend you do:).] will behave normally with bundler and is used by cobradeps to determine that the gem is indeed a direct dependency (we will see in Section 2.3.
|
https://www.informit.com/articles/article.aspx?p=2928188&seqNum=2
|
CC-MAIN-2021-10
|
refinedweb
| 460
| 57.27
|
I'm currently trying the Amazon Web Services (AWS) with .NET and of course I'm browsing the german catalog using the german locale. The strange thing is that strings containing german umlaut characters (like ä,ö,ü ...) arrived in .NET strings as '??'. I traced the protocols and found that AWS correctly states the use of UTF-8 in the Content-Type HTTP Header and the XML processing instruction also states UTF-8. The arrived response contains all umlaut characters correctly, so to me it looks like something is going wrong with the Encoding in the Deserialization step that maps the XML into .NET class members.
I found a solution that works. I'm using WSE-2.0 for the SOAP client and wrote a custom input filter that is very simple. I still believe it shouldn't be that way, although it works for me now.
public class EncodingFilter : SoapInputFilter { public override void ProcessMessage(SoapEnvelope envelope) { envelope.Encoding = System.Text.Encoding.Unicode; }}
|
http://blogs.msdn.com/b/juergenp/archive/2004/03/10/87273.aspx?Redirected=true&title=Web%20Services%20and%20I18N%20strangeness&summary=&source=Microsoft&armin=armin
|
CC-MAIN-2013-48
|
refinedweb
| 163
| 58.99
|
#include <Factory.hh>
Inherits fatalmind::SQL::SQLFactoryInterface.
List of all members.
Thats the place where you configure which database user and password to use.
The following event attributes are used:
EXECUTE
BATCH
ERROR- One or more of the statements in the batch had an error. The error details will be written in the next argument.
SLMDC- (not an error) A abbrevation for SelectListMetaDataCache. Means that the meta data for the select list (types and sizes of selectd columns) has been used from the cache so that an additional database round-trip could be avoided.
EXACT- (not an error) SQLSelectRow did successfully query for a single row (a database roundtrip was saved).
CACHED- (not an error) Means that the cursor was cached. e.g. the same statement was executed in the same session before, the prepare round-trip could be avoided.
NO
DATA- a SQLSelectRow command did not return any data
MORE
DATA- a SQLSelectRow command did return more then one row (emulated ORA-01422 error)
IMPLICIT- The commit was performed in a single round trip with the previous command (database roundtrip saved).
|
http://www.fatalmind.com/software/ResourcePool/cplusplus/doc/classfatalmind_1_1oracleFactory.html
|
CC-MAIN-2018-22
|
refinedweb
| 182
| 55.34
|
One of the most exciting pronouncements at the 2014 BUILD conference was Microsoft’s introduction of “universal apps,” which run on PCs, tablets, and phones. According to Microsoft, universal apps will one day run on Xboxes, too. The universal app represents the first step in a convergence that Microsoft has been seeking for a long time – a convergence of platforms and APIs that allows you to write an application one time and have it run on a variety of Windows devices and form factors.
From a consumer standpoint, a universal app is one that has a single app identity across all devices. Imagine that you buy it once (assuming it’s not free) from the Windows Store, and then download it to your Windows tablet and your Windows phone. Once there, the app offers an optimized experience for each form factor, shares data across devices through the cloud, supports in-app purchases, and more. A consumer could care less whether the same binary is being installed on each device; all he or she knows is that the same app works on a PC, a tablet, or a phone.
From a developer’s perspective, a universal app is not what you might think. It’s not a single binary that runs on multiple platforms. Rather, it takes the form of a Visual Studio solution containing multiple projects: one project for each targeted platform, plus a project containing code and resources shared between platforms. Because Windows Phone 8.1 implements the vast majority of the WinRT APIs that Windows 8.1 implements, a LOT of code can be shared between a Windows project and a Windows Phone project. Most of the platform-specific code you write is UI-related, which, as a developer, I’m perfectly fine with because a UI that looks great on a 30” monitor must be tweaked to look equally great on a 5” phone, and vice versa. Even there, Microsoft has done a lot of work to bridge the gap. For example, Windows Phone 8.1 includes the same Hub control featured in Windows 8.1, meaning you can use similar markup on both platforms to produce a UI tailored to each form factor.
You can download the RC release of Visual Studio 2013 Update 2 and see first-hand what’s involved in building a universal app. I did, and to help developers get acquainted with universal apps, I built a XAML and C# version of the Contoso Cookbook app that I originally wrote for Microsoft to help introduce developers to WinRT and Windows Store apps. The screen shot below shows Contoso Cookbook running side by side on a Windows tablet and a Windows phone. What’s remarkable is how little platform-specific code I had to write, and how similar the XAML markup is for both platforms.
You can download a zip file containing the Visual Studio solution for Contoso Cookbook. Or you can follow along as I highlight some of the more interesting aspects of the development process. Either way, I hope you come away excited about universal apps and the prospects that they hold for the future of Windows development. The future is now, and the future has “universal apps” written all over it.
Getting Started (and Learning to Share)
The first step in implementing Contoso Cookbook as a universal app was to use Visual Studio 2013 Update 2’s New Project command to create a solution. From the list of project types available, I selected “Hub App (Universal Apps)” to create basic 3-page navigation projects anchored by Hub controls:
The result was a solution containing three projects: a Windows 8.1 project named ContosoCookbookUniversal.Windows, a Windows Phone 8.1 project named ContosoCookbookUniversal.WindowsPhone, and a third project named ContosoCookbookUniversal.Shared. Here’s how it looked in Solution Explorer after I added a few more files to the Windows and Windows Phone projects representing flyouts and pages that are specific to each platform:
The Shared project doesn’t target any specific platform, but instead contains resources that are shared between the other two projects via shared (linked) files. In an excellent blog post, my friend and colleague Laurent Bugnion documented some of the ins and outs of working with the Shared project. It can include source code files, image assets, and other files that are common to the other projects. Source code files placed in the Shared project must be able to compile in the other projects – in this case, they must be able to compile in a Windows 8.1 project and a Windows Phone 8.1 project. You can’t add references in the Shared project, but you can add references in the other projects and use those references in the Shared project. Those references can refer to platform-specific assemblies, portable class libraries (PCLs), and even Windows Runtime components. What’s really cool is that if you add references to platform-specific assemblies to the Windows project and the Windows Phone project and the assemblies expose matching APIs, you can call those APIs from source code files in the Shared project.
You can even use #if directives to include platform-specific code in a shared file. In the case of Contoso Cookbook, I needed to add some Windows-specific code to App.xaml.cs to add commands to the system’s Settings pane – something that doesn’t exist in the Windows Phone operating system. So in App.xaml.cs, I added the following conditional using directive:
#if WINDOWS_APP using Windows.UI.ApplicationSettings; #endif
And then, in the OnLaunched override, I added this:
#if WINDOWS_APP // Add commands to the settings pane SettingsPane.GetForCurrentView().CommandsRequested += (s, args) => { // Add an About command to the settings pane var about = new SettingsCommand("about", "About", (handler) => new AboutSettingsFlyout().Show()); args.Request.ApplicationCommands.Add(about); // Add a Preferences command to the settings pane var preferences = new SettingsCommand("preferences", "Preferences", (handler) => new PreferencesSettingsFlyout().Show()); args.Request.ApplicationCommands.Add(preferences); }; #endif
The result was that the code compiled just fine and executed as expected in the Windows app, but effectively doesn’t exist in the Windows Phone app. You can prove it by running the Windows version of Contoso Cookbook, selecting the Settings charm, and verifying that the Settings flyout contains the About and Preferences commands registered in App.xaml.cs:
I added other files to the Shared project so they could be used in the other projects. For example, I added CS and JSON files to the DataModel folder in the Shared project to represent the data source shared by the Windows app and the Windows Phone app, and I added CS files to the Common folder representing value converters and helper components used in both projects. I even added an Images folder containing images used by both apps. Here’s what the Shared project looked like at the completion of the project:
Again, thanks to file linking, you can use anything in the Shared project from the other projects as if it were part of those projects. This provides a common base to build from, and to the extent that you can build what each project needs into the Shared project, you can minimize the amount of platform-specific code and resources required.
Building the UIs
Most of the UI work takes place in the platform-specific projects, allowing you to craft UIs that look great on PC, tablets, and phones alike, but that share common data, resources, components, and even view-models.
Even though the UI for the Windows version of Contoso Cookbook is defined separately from the UI for the Windows Phone version, they have a lot in common that reduced the amount of work required to build them. Each project, for example, contains a start page named HubPage.xaml that uses a Hub control to present content to the user. I used different data templates to fine-tune the UIs for each platform, but the basic structure of the XAML was the same in both projects.
Another UI component ported from Windows 8.1 to Windows Phone 8.1 is the CommandBar class. To implement command bars in both projects, I simply copied the following XAML from ItemPage.xaml in the Windows project to ItemPage.xaml in the Windows Phone project:
<Page.BottomAppBar> <CommandBar> <AppBarButton Icon="ReShare" Label="Share" Click="OnShareButtonClicked" /> </CommandBar> </Page.BottomAppBar>
The click handler calls DataTransferManager.ShowShareUI to display the system’s Sharing pane. (In Windows, you can show the Sharing pane programmatically, or you can rely on the user to show it by tapping the Share charm in the charms bar. There is no charms bar in Windows Phone, so if you wish to share content, you must present the system’s Sharing page programmatically.) Why did I use Click events rather than commanding? Because I couldn’t get commanding on AppBarButtons to work reliably. I assume this is a consequence of the fact that we’re working with tools and platforms that aren’t quite finished yet, and that commanding will work as expected in the final releases.
Both projects use DataTransferManager.DataRequested events to share recipe images and text from the items page. The code to share content is identical on both platforms, so after registering a handler for DataRequested in each project’s ItemPage.xaml.cs, I factored the sharing code out into a static method in the ShareManager class I added to the Shared project, and called that method from each project’s DataRequested event handler. Here’s the relevant code in ShareManager:
public static void ShareRecipe(DataRequest request, RecipeDataItem item) { request.Data.Properties.Title = item.Title; request.Data.Properties.Description = "Recipe ingredients and directions"; // Share recipe text var recipe = "rnINGREDIENTSrn"; recipe += String.Join("rn", item.Ingredients); recipe += ("rnrnDIRECTIONSrn" + item.Directions); request.Data.SetText(recipe); // Share recipe image var reference = RandomAccessStreamReference.CreateFromUri(new Uri(item.ImagePath)); request.Data.Properties.Thumbnail = reference; request.Data.SetBitmap(reference); }
In cases where UI capabilities varied significantly between platforms, I wrote platform-specific code. For example, as noted earlier, since Windows Phone doesn’t have a charms bar, I used #if to include Windows-specific code in the shared App.xaml.cs file to hook into the charms bar. Along those same lines, I added settings flyouts based on Windows’ SettingsFlyout class to the Windows project in files named AboutSettingsFlyout.xaml and PreferencesSettingsFlyout.xaml. Since SettingsFlyout wasn’t ported to the Jupiter (XAML) run-time in Windows Phone 8.1, I added a settings page named SettingsPage.xaml to the Windows Phone project. The screen shot below shows the Preferences flyout in the Windows app and the Settings page in the Windows Phone app side by side:
In each case, the user is presented with a UI that allows him or her to choose to load data locally from in-package resources or remotely from Azure. The code that loads the data and parses the JSON is found in the shared RecipeDataSource class and works identically on both platforms. (Fortunately, the Windows.Data.Json and Windows.Web.Http namespaces are present in WinRT on the phone and in Windows, so the code just works in both places.) And in each case, I built the settings UI around the ToggleSwitch control that’s present on both platforms. Same concept, different implementation, and a perfect example of how platform-specific projects retain the ability to use platform-specific APIs and controls without impacting the other projects.
I didn’t include search functionality in the apps even though it was present in the original Contoso Cookbook. I will probably add it later, but the reason I chose not to for now is that while Windows has a SearchBox control, Windows Phone does not. That means I’ll need to build my own search UI for the phone – not a big deal, really, since I can easily put the search logic in a component that’s shared by both projects.
The Bottom Line
Most of the code that drives the Windows app and the Windows Phone app is shared, and while the UIs are separate, they’re similar enough that building both was less work than building two UIs from scratch. If I had built a Windows Phone version of Contoso Cookbook for Windows Phone 7 or 8, it would have been a LOT more work since Windows Phone 7 contained no WinRT APIs and Windows Phone 8 contained only a small subset.
If you’re interested, I’ll be delivering a session on universal apps at the Software Design & Development conference in London next month. It should be a fun time for all as we delve into what universal apps are and how they’re structured. I’ll have plenty of samples to share as we party on universal apps and learns the ins and outs of writing apps that target separate but similar run-times.
Microsoft has talked a lot about “convergence” in recent months, and now we see evidence of what it means: one API – the WinRT API – for multiple platforms, and a high degree of fidelity between UI elements for each platform that doesn’t preclude developers from using platform-specific elements to present the best possible experience on every device. This is the future of Windows development. And for many of us, it couldn’t have come a moment too soon.
|
https://www.wintellect.com/building-universal-apps-with-visual-studio-2013-update-2/
|
CC-MAIN-2019-09
|
refinedweb
| 2,223
| 60.55
|
Strings in .NET are Immutable and can be very inefficient
Once you assign an initial value to a System.String object the data cannot be changed. This may seem incorrect because it looks like the value can be changed in code, but in reality any change made to a String results in a brand new String with the changed value. Understanding this is important so that we are aware of the inefficiencies of using the String type. If we had an application doing a lot of string processing there would be a performance penalty with using String types.
There is a great example of how String.Append is exponentially inefficient as string size increases on this web blog.
Thankfully .NET has the StringBuilder type in the System.Text namespace. The StringBuilder contains methods that help you with basic string manipulation and when you modify a string in code you are modifying the internal representation of that string in memory rather than making copies of it every time you make a change.
String manipulation example
String myString = "This string is immutable!";
myString = "Wait a second, I just changed the immutable string, didn't I?";
If you were to look at the CIL code generated in the above example, you’ll see two different strings being stored in memory.
StringBuilder Manipulation Example
using System.Text;
StringBuilder sb = new StringBuilder("This string can be changed");
sb.Append(".\n");
sb.Appendsb.AppendLine("Like this.");
sb.Replace(".","!");
myString = sb.ToString();
The above example modifies the string in memory and then assigns the resultant value to a string.
|
http://www.displacedguy.com/tech/net-system-string-vs-system-text-stringbuilder/
|
CC-MAIN-2015-35
|
refinedweb
| 262
| 63.19
|
Matplotlib has a testing infrastructure based on nose, making it easy to write new tests. The tests are in matplotlib.tests, and customizations to the nose testing infrastructure are in matplotlib.testing. (There is other old testing cruft around, please ignore it while we consolidate our testing to these locations.)
The following software is required to run the tests:
- nose, version 1.0 or later
- Ghostscript (to render PDF files)
- Inkscape (to render SVG files)
Running the tests is simple. Make sure you have nose installed and run the script tests.py in the root directory of the distribution. The script can take any of the usual nosetest arguments, such as
To run a single test from the command line, you can provide a dot-separated path to the module followed by the function separated by a colon, e.g., (this is assuming the test is installed):
python tests.py matplotlib.tests.test_simplification:test_clipping
If you want to run the full test suite, but want to save wall time try running the tests in parallel:
python ../matplotlib/tests.py -sv --processes=5 --process-timeout=300
as we do on Travis.ci.
An alternative implementation that does not look at command line arguments works from within Python:
import matplotlib matplotlib.test()
Running tests by any means other than matplotlib.test() does not load the nose “knownfailureif” (Known failing tests) plugin, causing known-failing tests to fail for real.
Many elements of Matplotlib can be tested using standard tests. For example, here is a test from matplotlib.tests.test_basic:
from nose.tools import assert_equal def test_simple(): """ very simple example test """ assert_equal(1+1,2)
Nose determines which functions are tests by searching for functions beginning with “test” in their name.
If the test has side effects that need to be cleaned up, such as creating figures using the pyplot interface, use the @cleanup decorator:
from matplotlib.testing.decorators import cleanup @cleanup def test_create_figure(): """ very simple example test that creates a figure using pyplot. """ fig = figure() ...
Writing an image based test is only slightly more difficult than a simple test. The main consideration is that you must specify the “baseline”, or expected, images in the image_comparison() decorator. For example, this test generates a single image and automatically tests it:
import numpy as np import matplotlib from matplotlib.testing.decorators import image_comparison import matplotlib.pyplot as plt @image_comparison(baseline_images=['spines_axes_positions']) def test_spines_axes_positions(): # SF bug 2852168 fig = plt.figure() x = np.linspace(0,2*np.pi,100) y = 2*np.sin(x) ax = fig.add_subplot(1,1,1) ax.set_title('centered spines') ax.plot(x,y) ax.spines['right'].set_position(('axes',0.1)) ax.yaxis.set_ticks_position('right') ax.spines['top'].set_position(('axes',0.25)) ax.xaxis.set_ticks_position('top') ax.spines['left'].set_color('none') ax.spines['bottom'].set_color('none')
The first time this test is run, there will be no baseline image to compare against, so the test will fail. Copy the output images (in this case result_images/test_category/spines_axes_positions.*) to the correct subdirectory of baseline_images tree in the source directory (in this case lib/matplotlib/tests/baseline_images/test_category). Note carefully the * at the end: this will copy only the images we need to include in the git repository. The files ending in _pdf.png and _svg.png are converted from the pdf and svg originals on the fly and do not need to be in the respository. Put these new files under source code revision control (with git add). When rerunning the tests, they should now pass.
There are two optional keyword arguments to the image_comparison decorator:
- extensions: If you only wish to test some of the image formats (rather than the default png, svg and pdf formats), pass a list of the extensions to test.
- tol: This is the image matching tolerance, the default 1e-3. If some variation is expected in the image between runs, this value may be adjusted.
If you’re writing a test, you may mark it as a known failing test with the knownfailureif() decorator. This allows the test to be added to the test suite and run on the buildbots without causing undue alarm. For example, although the following test will fail, it is an expected failure:
from nose.tools import assert_equal from matplotlib.testing.decorators import knownfailureif @knownfailureif(True) def test_simple_fail(): '''very simple example test that should fail''' assert_equal(1+1,3)
Note that the first argument to the knownfailureif() decorator is a fail condition, which can be a value such as True, False, or ‘indeterminate’, or may be a dynamically evaluated expression.
We try to keep the tests categorized by the primary module they are testing. For example, the tests related to the mathtext.py module are in test_mathtext.py.
Let’s say you’ve added a new module named whizbang.py and you want to add tests for it in matplotlib.tests.test_whizbang. To add this module to the list of default tests, append its name to default_test_modules in lib/matplotlib/__init__.py.
Travis CI is a hosted CI system “in the cloud”.
Travis is configured to receive notifications of new commits to GitHub repos (via GitHub “service hooks”) and to run builds or tests when it sees these new commits. It looks for a YAML file called .travis.yml in the root of the repository to see how to test the project.
Travis CI is already enabled for the main matplotlib GitHub repository – for example, see its Travis page.
If you want to enable Travis CI for your personal matplotlib GitHub repo, simply enable the repo to use Travis CI in either the Travis CI UI or the GitHub UI (Admin | Service Hooks). For details, see the Travis CI Getting Started page. This generally isn’t necessary, since any pull request submitted against the main matplotlib repository will be tested.
Once this is configured, you can see the Travis CI results at – here’s an example.
Tox is a tool for running tests against multiple Python environments, including multiple versions of Python (e.g., 2.6, 2.7, 3.2, etc.) and even different Python implementations altogether (e.g., CPython, PyPy, Jython, etc.)
Testing all versions of Python (2.6, 2.7, 3.*) requires having multiple versions of Python installed on your system and on the PATH. Depending on your operating system, you may want to use your package manager (such as apt-get, yum or MacPorts) to do this.
tox makes it easy to determine if your working copy introduced any regressions before submitting a pull request. Here’s how to use it:
$ pip install tox $ tox
You can also run tox on a subset of environments:
$ tox -e py26,py27.
|
http://matplotlib.org/devel/testing.html
|
CC-MAIN-2015-11
|
refinedweb
| 1,108
| 57.06
|
> I'd slightly prefer the name iterdir_stat(), as that almost makes the (name, stat) return values explicit in the name. But that's kind of bikeshedding -- scandir() works too.
I find iterdir_stat() ugly :-)
I like the scandir name, which has some precedent with POSIX.
> That's right: if we have a separate scandir() that returns (name, stat) tuples, then a plain iterdir() is pretty much unnecessary -- callers just ignore the second stat value if they don't care about it.
Hum, wait.
scandir() cannot return (name, stat), because on POSIX, readdir() only
returns d_name and d_type (the type of the entry): to return a stat,
we would have to call stat() on each entry, which would defeat the
performance gain.
And that's the problem with scandir: it's not portable. Depending on
the OS/file system, you could very well get DT_UNKNOWN (and on Linux,
since it uses an adaptive heuristic for NFS filesystem, you could have
some entries with a resolved d_type and some others with DT_UNKNOWN,
on the same directory stream).
That's why scandir would be a rather low-level call, whose main user
would be walkdir, which only needs to know the entry time and not the
whole stat result.
Also, I don't know which information is returned by the readdir
equivalent on Windows, but if we want a consistent API, we have to
somehow map d_type and Windows's returned type to a common type, like
DT_FILE, DT_DIRECTORY, etc (which could be an enum).
The other approach would be to return a dummy stat object with only
st_mode set, but that would be kind of a hack to return a dummy stat
result with only part of the attributes set (some people will get
bitten by this).
|
https://bugs.python.org/msg188431
|
CC-MAIN-2019-09
|
refinedweb
| 294
| 59.67
|
v2: BFS to find gems, avoiding dead ends (detected by Kosaraju) solution in Clear category for Inertia by Phil15
# Tested on a 100x100 grid too! Too long on a 100x200 grid, I interrupted.
from typing import Tuple, Iterable
GEM, ROUGH, ICE, ROCK, MINE = '$. X*'
MOVES = {'NW': (-1, -1), 'N': (-1, 0), 'NE': (-1, 1),
'W': ( 0, -1), 'E': ( 0, 1),
'SW': ( 1, -1), 'S': ( 1, 0), 'SE': ( 1, 1)}
def inertia(grid: Tuple[str], start: Tuple[int]) ->), gems, dead = position, MOVES[move],: # avoid deadends.
yield move, new_position, gems
def find_gem(start):
""" Find the closest gem to start, thanks to BFS. """
queue = [([], start)] # (path from start, current position) queue
visited = set()
while queue:
path, position = queue.pop(0)
visited.add(position)
for move, new_position, gems in neighbors(position):
collect = gems & uncollected_gems
if collect:
return path + [move], new_position, collect
if new_position not in visited:
queue.append((path + [move], new_position))
deadends = set()
# 1) Compute the graph of possible moves thanks to DFS from start.
graph, stack = {}, [start]
while stack:
position = stack.pop()
graph[position] = [new_pos for _, new_pos, _ in neighbors(position)]
stack.extend(n for n in graph[position] if n not in graph)
# 2) Know all dead ends in this graph thanks to Kosaraju's algorithm.
# A bit too powerful for this task but we could (with improvements)
# solve this task without the first precondition.
for component in kosaraju(graph):
if start not in component:
deadends |= component
# 3) Now simply collect gems avoiding deadends.
uncollected_gems = {(i, j) for i, row in enumerate(grid)
for j, cell in enumerate(row) if cell == GEM}
while uncollected_gems:
path, start, gems = find_gem(start)
uncollected_gems -= gems
yield from path
def kosaraju(graph):
""" Kosaraju's algorithm to know strongly connected components.
Post-order DFS on the graph, and DFS on the transpose graph. """
order = list(iterative_postorder_dfs(graph))
# DFS on the transpose graph following the order.
transposed = transpose_graph(graph)
visited = {v: False for v in transposed}
while order:
start = order.pop()
if not visited[start]:
component = set()
stack = [start]
while stack:
v = stack.pop()
visited[v] = True
component.add(v)
for w in transposed[v]:
if not visited[w]:
stack.append(w)
yield component
def transpose_graph(graph):
""" Reverse all edges from the directed graph, into a new one. """
gT = {}
for v in graph:
for neighbor in graph[v]:
gT.setdefault(neighbor, []).append(v)
return gT
def iterative_postorder_dfs(graph):
""" An iterative implementation of DFS with post-order:
A node is yielded after exploration of its neighbors. """
# Iterative implementation complicates things but recursion have a limit.
visited = {node: False for node in graph}
while True:
try:
start = next(node for node, visit in visited.items() if not visit)
except StopIteration:
break
to_do = [(True, start)] # dfs stack
while to_do:
continue_exploration, node = to_do.pop()
if not continue_exploration:
yield node
else:
to_do.append((False, node)) # will be popped after its neighbors
visited[node] = True
neighbors = [n for n in graph[node] if not visited[n]]
for n in reversed(neighbors):
to_do.append((True, n))
Dec. 26, 2018
Forum
Price
For Teachers
Global Activity
ClassRoom Manager
Leaderboard
Jobs
Coding games
Python programming for beginners
|
https://py.checkio.org/mission/inertia/publications/Phil15/python-3/v2-bfs-to-find-gems-avoiding-dead-ends-detected-by-kosaraju/share/fb3bfc910006760baa71e443da86f1a0/
|
CC-MAIN-2022-21
|
refinedweb
| 510
| 56.15
|
import sysimport twython #from nltk.corpus import wordnet as wn #import nltk nounsstring = open("Nouns.txt", "r").read() adjectivesstring = open("Adjectives.txt", "r").read() nouns = list() adjectives = list() nouns = nounsstring.split("\r\n") adjectives = adjectivesstring.split("\n") #print nouns results = dict() def isNoun(word): if word == "": return False if word.lower() in nouns: return True else: return False def isAdjective(word): if word == "": return False if word.lower() in adjectives: return True else: return False api_key, api_secret, access_token, token_secret = sys.argv[1:] twitter = twython.Twython(api_key, api_secret, access_token, token_secret) response = twitter.get_user_timeline(screen_name='aparrish', count=200) for tweet in response: mynouns = list() myadjectives = list() if tweet['retweeted'] is False and tweet['text'][0:2] != "RT": #print tweet['text'] words = tweet['text'].split(" ") for word in words: if isNoun(word): mynouns.append(word) #print word + " is noun" elif isAdjective(word): myadjectives.append(word) #print word + " is adjective" for noun in mynouns: nounindex = words.index(noun) for adjective in myadjectives: adjectiveindex = words.index(adjective) if adjectiveindex < nounindex: results[noun] = adjective print "MY RESULTS" for key in results: print results[key] + "\t\t" + key
Basically, I was trying to get every tweet (in the case above, I got 200 tweets) from an account, and pick up the nouns and adjectives from each tweet, then matched them together in a corresponding way to show the writer's opinions toward different objects (nouns).
I downloaded the nouns list and adjectives list from the Internet, by checking the words in tweets with words in nouns and adjectives list, python will pick the words in tweets as nouns or adjectives and match them together.
To be honest, I was not that satisfied about the outcomes. Because it is really hard to cover one's taste by using limited pieces of words. I have been thinking about making more structures and logics to make the outcomes more reasonable and accurate. However, I found that the grammar and many idiomatic expressions are totally beyond my power. Anyway, it was a good shot for me to further this idea and deepen my thesis idea as well. Maybe it works not good, but the experience I have got from this mid-term project would definitely be helpful for me to do some other experiments along with this idea.
|
http://www.ranmoo.com/mrrm-2/276
|
CC-MAIN-2017-26
|
refinedweb
| 377
| 57.47
|
When your model query API don't go well or you want more performance, you can use raw sql queries in django. The Django Object Relational Mapper (ORM) helps bridge the gap between the database and our code Performing raw queries.
Using Facebook integration, we can get the user verified email id, general information, friends, pages, groups and you can post on his profile, facebook pages, groups with out user entering a details in a less span of time.
Dj:
Django model is the single, definitive source of data about your data. It contains the essential fields and behaviors of the data you’re storing. Generally, each model maps to a single database table. And an instance of that class represents a particular record in the database table. Django Manager is the interface through which database query operations are provided to Django models.By default, Django adds a Manager with the name "objects" to every Django model class.
Python-reCaptcha is a pythonic and well-documented reCAPTCHA client that supports all the features of the remote API to generate and verify CAPTCHA challenges. To add into your django project, you need captcha public, private keys are required.
Python decorators supports aspect-oriented programming. It is used to add or modify code in functions or classes. Using decorators will provide security, tracing, looking ..etc Let see an example:
@fundecorator
def myfunc():
print "This is my function"
Having a good environment setup is important for effective, fast and easy coding. We have different IDE's like eclipse, pycharm, sublime etc.. which are powerful and easy to use.
IDE's like eclipse, pycharm, sublime etc.. are resource intensive as they run many features, this is not a problem if you have really great system with powerfull resources.
Python Properties is a class for managing class attributes in Python. Property( ) is a built-in function that creates and returns a property object
Syntax:
attribute_name = property(get_Attribute, set_Attribute,del_Attribute(Optional), doc_Attribue(Optional))
where, get_Attritube is function to get value of the attribute,.
|
https://micropyramid.com/blog/?page=17
|
CC-MAIN-2017-09
|
refinedweb
| 337
| 55.84
|
Code. Collaborate. Organize.
No Limits. Try it Today.
We often use the XmlDataSource control together with the TreeView control.
XmlDataSource
TreeView
If the data source is static, it works quite well. But, when the data source needs to be dynamic, there seems to be some problems. This article will introduce one such problem and give you a hack solution.
If the data source needs to be dynamic, you may choose to:
DataSourceID
DataFile
Data
Obviously, the first way is more limited and only applicable for some special situations. It's not a good idea to put a lot of data source controls on one form and, more importantly, the count of the data source controls must be a fixed number.
But, what will happen when you change the value of the DataFile or Data property of the XmlDataSource control? From Microsoft's documentations, you may find the following information:
If you change the value of the DataFile/Data property, the DataSourceChanged event is raised. If caching is enabled and you change the value of DataFile/Data, the cache is invalidated.
If you change the value of the DataFile/Data property, the DataSourceChanged event is raised. If caching is enabled and you change the value of DataFile/Data, the cache is invalidated.
DataSourceChanged
Unfortunately, in my test (you may download the samples to see it yourself), I found that, the cache will not be automatically invalidated when you change the value of the Data property of the XmlDataSource control, although DataFile makes no trouble.
First, you may choose to change the value of the DataFile property of the XmlDataSource, instead of Data. This is an easy solution.
Otherwise, if you stick to changing the property of Data for some reason (for example, your data source is not a real XML file, but just a dynamically-populated string), you may try the following hack method:
public static void XmlDataSourceCacheHack(XmlDataSource dataSource)
{
try
{
Type t = typeof(XmlDataSource);
MethodInfo m = t.GetMethod("CreateCacheKey",
BindingFlags.Instance | BindingFlags.NonPublic);
string key = (string)m.Invoke(dataSource, null);
PropertyInfo p = t.GetProperty("Cache",
BindingFlags.Instance | BindingFlags.NonPublic);
object cache = p.GetValue(dataSource, null);
Type t2 = t.Assembly.GetType("System.Web.UI.DataSourceCache");
MethodInfo m2 = t2.GetMethod("Invalidate",
BindingFlags.Instance | BindingFlags.Public);
m2.Invoke(cache, new object[] { key });
}
catch
{
}
}
(It's sure that "using System.Reflection;" is required at the beginning of the file.)
using System.Reflection;
This is just a hack for this problem. I spent more than one day on debugging to find the source of this strange problem.
(P.S.: If you find a more perfect way to solve it, avoiding Reflection, do tell me. Thanks!)
You may surely entirely disable the cache feature of the XmlDataSource. It also works fine. This is another easy solution.
But, if your situation is just like mine: the change possibilities are lower than other postback possibilities (for example, the TreeView is quite big and always triggers postbacks more frequently than data sources change), you will prefer using Reflection than disabling the cache.
I'm not sure whether this is a bug from Microsoft, or if it is just applied to the design specification. But, either the implementation is wrong, or the documentation is wrong..
C# 6: First reactions
|
http://www.codeproject.com/Articles/26987/Hack-to-enforce-the-cache-of-an-XmlDataSource-to-i?PageFlow=FixedWidth
|
CC-MAIN-2014-15
|
refinedweb
| 539
| 56.05
|
Hi All,
I'm currently working on some duplicate prevention scripts in Python, for this i am looking for a simple Python code example that would allow bypassing the max number of events set in limits.conf
Using the "search.py" provided in example won't allow bypassing the limits.conf max event limit, i found this link:
And some others for C# and Java, but i don't get it to be honest...
A simple code sample would be very helpful for me...and others with the same need 🙂
Thanks in advance for you help !
For people looking for the same need, i finally found my solution on Splunk dev using REST API with a simple Python script, i can retrieve the full event no matters the number of events 🙂
Works great and very simple !
Python sample script:
#!/usr/bin/env python
import urllib, urllib2
from xml.dom import minidom
base_url = ''
username = 'admin'
password = 'changeme'
search_query = 'search error | head 10'
# Login and get the session key
request = urllib2.Request(base_url + '/servicesNS/%s/search/auth/login' % (username),
data = urllib.urlencode({'username': username, 'password': password}))
server_content = urllib2.urlopen(request)
session_key = minidom.parseString(server_content.read()).\
getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue
print "Session Key: %s" % session_key
# Perform a search
request = urllib2.Request(base_url + '/servicesNS/%s/search/search/jobs/export' % (username),
data = urllib.urlencode({'search': search_query,'output_mode': 'csv'}),
headers = { 'Authorization': ('Splunk %s' %session_key)})
search_results = urllib2.urlopen(request)
print search_results.read()
View solution in original post
|
https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/Python-SDK-How-to-bypass-max-count-in-limits-conf-to-return-all/td-p/140032
|
CC-MAIN-2021-39
|
refinedweb
| 242
| 51.65
|
Logging AWS Cloud Map API Calls with AWS CloudTrail
AWS Cloud Map is integrated with AWS CloudTrail, a service that provides a record of the actions that are taken by a user, a role, or an AWS service in AWS Cloud Map. CloudTrail captures all API calls for most AWS Cloud Map API actions as events. This includes calls from the AWS Cloud Map console and all programmatic access, such as the AWS Cloud Map API and AWS SDKs. (CloudTrail doesn't capture calls to the AWS Cloud Map DiscoverInstances API.)
If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for AWS Cloud Map. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to AWS Cloud Map, the IP address that the request was made from, who made the request, when it was made, and additional details.
Topics
AWS Cloud Map Information in CloudTrail
CloudTrail is enabled on your AWS account when you create the account. When activity occurs in AWS Cloud Map, Cloud Map,:
Most AWS Cloud Map actions are logged by CloudTrail and are documented in the AWS Cloud Map API Reference. For example, calls to the
CreateHttpNamespace,
DeleteService, and
RegisterInstance actions generate entries in the CloudTrail log files. (CloudTrail doesn't capture
calls to the
AWS Cloud Map DiscoverInstances API.).
Viewing AWS Cloud Map Events in Event History
CloudTrail lets you view recent events in Event history. To view events for AWS Cloud Map API requests, you must choose the AWS Region where you created your namespaces in the Region selector at the top of the console. If you created namespaces in multiple AWS Regions, you must view the events for each Region separately. For more information, see Viewing Events with CloudTrail Event History in the AWS CloudTrail User Guide.
Understanding AWS Cloud Map
eventName element identifies the action that occurred. CloudTrail supports all AWS Cloud Map
API actions.
The following example shows a CloudTrail log entry for
CreatePublicDnsNamespace.
{ "Records": [ { "eventVersion": "1.05", "userIdentity": { "type": "IAMUser", "principalId": "A1B2C3D4E5F6G7EXAMPLE", "arn": "arn:aws:iam::111122223333:user/smithj", "accountId": "111122223333", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "userName": "smithj" }, "eventTime": "2018-01-16T00:44:17Z", "eventSource": "servicediscovery.amazonaws.com", "eventName": "CreatePublicDnsNamespace", "awsRegion": "us-west-2", "sourceIPAddress": "192.0.2.92", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Firefox/52.0", "requestParameters": { "description": "test", "creatorRequestId": "1234567890123456789", "name": "example.com" }, "responseElements": { "operationId": "unmipghn37443trlkgpf4idvvitec6fw-2example" }, "requestID": "35e1872d-c0dc-11e7-99e1-03e9fexample", "eventID": "409b4d91-34e6-41ee-bd97-a816dexample", "eventType": "AwsApiCall", "recipientAccountId": "444455556666" } ] }
|
https://docs.aws.amazon.com/cloud-map/latest/dg/logging-using-cloudtrail.html
|
CC-MAIN-2020-40
|
refinedweb
| 450
| 55.44
|
If anyone could help me with this I would be really thankful!
Write a program that stores the names of these artists in a String array. The program should prompt the user to enter the name of an artist and output that artist's position in the charts or a message saying that they are not present.
You should choose the most appropriate search method introduced in the lecture and modify it so that it handles strings, rather than integers.
Ads
import java.util.*; class SearchingAndSorting { public static void main(String[] args) { String names[]={"AAAA","DDDD","FFFF","ZZZZ","KKKKK","PPPP"}; ArrayList<String> list=new ArrayList<String>(); for(int i=0;i<names.length;i++){ list.add(names[i]); } Scanner input=new Scanner(System.in); System.out.print("Enter Name: "); String name=input.nextLine(); int index; if(list.contains(name)){ index=list.indexOf(name); System.out.println("Position: "+(index+1)); } else{ System.out.println("Artist is not present"); } } }
Ads
Ads
|
http://roseindia.net/answers/viewqa/Java-Beginners/22689-Java-Sorting-and-Searching-.html
|
CC-MAIN-2017-13
|
refinedweb
| 159
| 52.26
|
# Go Quiz
In this series, we will be discussing interesting aspects and corner cases of Golang. Some questions will be obvious, and some will require a closer look even from an experienced Go developer. These question will help to deeper the understanding of the programming language, and its underlying philosophy. Without much ado, let's start with the first part.
Value assignment
----------------
What value `y` will have at the end of the execution?
```
func main() {
var y int
for y, z := 1, 1; y < 10; y++ {
_ = y
_ = z
}
fmt.Println(y)
}
```
According to the specification, `for` loop creates its own scope. Therefore, we are dealing with two different scopes there: one inside the `main` function, and one inside the `for` loop. Therefore, we don't reassign `y` inside the `for` loop initialization, but instead creating new `y` that shadows the one from the outer scope. Therefore, the outer `y` is not affected, and the program will output `0`.
A part of a string
------------------
In this example, we have a string and would like to access a part of it. What would be the result of the following snippet?
```
s := "9"
v1 := s[0]
for _, v2 := range s {
fmt.Println(v1)
fmt.Println(v2)
fmt.Println(v1 == v2)
break // a single loop iteration
}
```
The first two print statement would output the same result. A string in Golang is an immutable array of bytes and every character is encoded in UTF-8. In this case, we are dealing with the ASCII-only string, therefore, character `9` will be encoded as a single byte with the value equal to `57`. Therefore, the first print statement would output `57`. Exactly the same value would be printed at the second line, as in this case, we will have rune `r` that consists of a single byte.
However, the program won't compile due to the third line, as we are dealing with different types: uint8 (under alias `byte`) and int32 (under alias `rune`). The numeric value of the variables is equal, but their types are different, therefore, they cannot be compared without the explicit type conversion.
Struct Conversion
-----------------
In this example, we have two similar structs that differ only in struct tags. Such an approach could be used in a real life. For example, you can have a separate representation of a single domain model in different packages: package `db` that is responsible for database persistence and package `api` that is responsible for handling the incoming requests. In this case, the structs would be equal save for the struct tags. What would be the result of the following code snippet? `#v` outputs the full Golang representation of the value, including the type of the struct and its field names.
```
type Struct1 struct {
A int `db:"a"`
}
type Struct2 struct {
A int `json:"a"`
}
func main() {
s1 := Struct1{A: 1}
s2 := Struct2(s1)
fmt.Printf("%#v", s2)
}
```
That's a tricky question because according to the Golang specification a struct tag is a part of the struct definition. Therefore, at some point, it wasn't possible to do the conversion. However, later the Go team decided to relax the constraint (without changing the definition of the struct in the spec), and now such conversion is permitted.
`main.Struct2{A:1}`
How about this snippet? We are trying to convert `Struct1` to `Struct2`. All information necessary for `Struct2` is available in `Struct1`. However, there is also a redundant field `B` in `Struct1`.
```
type Struct1 struct {
A int
B int
}
type Struct2 struct {
A int
}
func main() {
s1 := Struct1{}
s2 := Struct2(s1)
fmt.Printf("%#v", s2)
}
```
In this case, the specification does not care whether we have all the information to instantiate `Struct2` from `Struct1`. `Struct1` has an extra field, and that's the end of the deal: the operation is not permitted, and the code won't compile.
JSON Unmarshalling
------------------
Will the existing records in the map be preserved when we unmarshal JSON-encoded values into it? What happens in the case of a collision (note key `Field1`) ?
```
s := map[string]int{
"Field1": 1,
"Field2": 2,
}
data := `{"Field2": 202}`
err := json.Unmarshal([]byte(data), &s)
if err != nil {
panic(err)
}
fmt.Println(s)
```
Existing records in the map will be preserved. In the case of a collision, the value will be overwritten.
`map[Field1:1 Field2:202]`
What about structs?
```
type request struct {
Field1, Field2 int
}
r := request{Field1: 1, Field2: 2}
data := `{"Field2": 202}`
err := json.Unmarshal([]byte(data), &r)
if err != nil {
panic(err)
}
fmt.Println(r)
```
The same logic is valid here:
`{Field1:1 Field2:202}`
And that all the question for today :) How many right answers did you get out of four?
|
https://habr.com/ru/post/550786/
| null | null | 782
| 64.81
|
On Apr 18, 2006, at 2:52 PM, William A. Rowe, Jr. wrote:
>> @@ -240,7 +240,7 @@
>> const char *encoding;
>> /* only work on main request/no subrequests */
>> - if (!ap_is_initial_req(r)) {
>> + if (r->main != NULL) {
>> ap_remove_output_filter(f);
>
> Actually, explain to me how this code successfully leaves the http
> protocol
> layer output_filter in the filter chain for subrequest components?
Using ap_is_initial_req:
AP_DECLARE(int) ap_is_initial_req(request_rec *r)
{
return (r->main == NULL) /* otherwise, this is a sub-
request */
&& (r->prev == NULL); /* otherwise, this is an internal
redirect */
}
it will remove the filter for both sub-request and internal
redirects. The patch just removes the filter if it is a sub request.
> I'd think
> this code (original, and even the patched flavor) could break the
> filter stack
> by yanking the deflate filter out from the middle of servicing a
> request, e.g.
> when a subrequest is included midstream.
The patched block of code is only called when f->ctx is NULL and
hasn't been setup yet by mod_deflate. I would assume when a sub
request would get added the ctx for its ap_filter_t struct would be
NULL and f->r->main would be the top request so the deflate filter
would be removed.
> It seems this should be a conditional add-filter, never a
> conditional remove
> filter event. add-filter on the top level request, noop on nested
> requests.
Not sure I have the expertise to comment on that.
Brian
|
http://mail-archives.apache.org/mod_mbox/httpd-dev/200604.mbox/%3CBC0CE5EA-2920-465A-AA59-8CCC59E892AB@firehawksystems.com%3E
|
CC-MAIN-2015-32
|
refinedweb
| 237
| 62.98
|
COM Interop
Yeah, I’ve Done That:
Calling .NET Code From ASP
Updated: 12/16/09
Visual Studio 2008 SP1
Windows 7
ASP
IIS
COM has to be one of the most finicky, frail, and brittle function calling mechanisms ever invented. Register this; find the programmatic ID; do you have all your GUIDs? However, it is on many millions of computers and even with the advances made in the Microsoft area around component development it is still the only way to get some tasks done. To wit, while you may have to work on an ASP site, you can still utilize the underpinnings of .NET and make your site changes and enhancements far more robust and much more maintainable.
COM is a function calling mechanism. Nothing more; nothing less. In its attempt to be a calling mechanism for everyone and to allow versioning of components and to achieve language neutrality it has become a cumbersome technology to administer. If you have not had the joy of spending a night attempting to get one function call to work via COM on a production system then you just don’t know COM. With that in mind, let’s delve into the implementation of COM and create a .NET class that can be called from an ASP page using VBScript.
A Sample Library
We’ll create a .NET library written in C# with the goal of calling the methods on a class named HelloCom from VBScript code on an ASP page. So fire up Visual Studio and create a C# class library project named ComLibrary. As always, I like to create a Visual Studio “blank solution” named ComLibrarySolution to house the ComLibrary project.
Visual Studio creates a file called AssemblyInfo.cs when it creates a project. Open that file and change the ComVisible attribute from false to true:
[assembly: ComVisible(true)]
It is not strictly necessary to enable this attribute for the initial class we are going to create because all public members of a class are already visible to COM clients, but it will come in handy later so go ahead and set the ComVisible attribute to true. Also take note of the GUID that Visual Studio has conveniently placed in that file for you as well.
Now add a class to the project and name it HelloCom. In order for COM to be able to create an instance of this class, the class must have a default (parameterless) constructor, so add one to the class.
This class is going to have two functions: a read-only property named ComputerName which will return the name of the computer and a method named Echo which will take in a string, append a “2” to it, and return the resultant string to the caller. These methods are quite straightforward, so here’s the entire class:
using System;
namespace ComLibrary
{
public class HelloCom
{
string _echoSuffix;
/// <summary>
/// COM requires a default constructor.
/// </summary>
public HelloCom()
{
_echoSuffix = "2";
}
/// <summary>
/// Gets the name of the computer.
/// </summary>
/// <value>
/// A string containing the name of the computer.
/// </value>
public string ComputerName
{
get
{
return Environment.MachineName;
}
}
/// <summary>
/// “Echo” the string passed in, appending a “2” to it.
/// </summary>
/// <param name="s">
/// The string to be returned.
/// </param>
/// <returns>
/// The sting that was passed in, with a “2” appended to it.
/// </returns>
public string Echo(string s)
{
if (s == null) s = string.Empty;
return s + _echoSuffix;
}
}
}
Register With COM
In order to call this class from a COM client, you must register the generated assembly with COM. There is a command-line tool distributed with the .NET Framework for doing this. It is called the assembly registration tool, and can be found here:
%SystemRoot%\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe
To have this directory automatically added to your PATH so that you can run this utility just by typing its name, bring up a Visual Studio command prompt window. Type regasm in the window and you will see a list of the switches displayed.
To register the above C# library with COM, set your directory to the Debug directory for the solution, for example:
cd/d H:\vsprojects\ComLibrarySolution\ComLibrary\bin\Debug
Let’s try some of the utility’s switches to understand its behavior. Go ahead and register the assembly ComLibrary.dll using this command:
regasm ComLibrary.dll /tlb:ComLibrary.tlb /registered /verbose
You will see output similar to the following:
Microsoft (R) .NET Framework Assembly Registration Utility 2.0.50727.4927
Types registered successfully
Type 'C' exported.
Assembly exported to 'H:\vsprojects\ComLibrarySolution\ComLibrary\bin\Debug\ComLibrary.tlb', and the type library was registered successfully
This command has done two things:
Created a type library named ComLibrary.tlb for the class.
Added entries to the Windows registry to allow the COM run-time to find this class.
Programmatic ID & Class ID
One primary way that COM finds the information necessary to create an instance of a class is by using a string called a programmatic ID or ProgID. In our sample library, the ProgID for our class is ComLibrary.HelloCom because the namespace we created the class in is ComLibrary and the name of the class is HelloCom.
Lets look at the registry keys that have been added. Fire up the Windows registry editor (regedit.exe) and navigate to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ComLibrary.HelloCom
The HKLM\SOFTWARE\Classes key is the node in the registry that COM looks for ProgIDs. Looking at this key you will see it has one sub value, the (Default) key, which simply echos the ProgID name. It also has one sub-key named CLSID which also has one default value which in this case is set to:
{48EACCF2-D486-35BC-ABD9-F40E9E6EBB80}
This GUID was created by regasm. Every time you run regasm a different GUID will be generated and assigned to the class (we’ll look at how to specify your own GUID below). GUIDs hit it big-time with the first release of COM. A GUID is the internal identifier that COM uses to find a class, and these are known in COM parlance as a class ID or CLSID for short. You can find a COM-enabled class by knowing its ProgID or its CLSID. Since a ProgID is much easier to work with, but since COM really works with the CLSID under the covers, the HKLM\SOFTWARE\Classes node in the registry simply translates ProgIDs to CLSIDs, much like DNS translates host names to IP addresses.
So now let’s find where COM gets all the information about our class. Find this key:
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Wow6432Node\CLSID
\{48EACCF2-D486-35BC-ABD9-F40E9E6EBB80}
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Classes\CLSID
\{48EACCF2-D486-35BC-ABD9-F40E9E6EBB80}
Tip: When I’m scouring through the registry to debug issues with a COM object not working properly, I’ll go to the Classes node for my ProgID and highlight the CLSID node in the registry editor. Now double-click the (Default)Edit String window is displayed with the GUID highlighted. Press Ctrl+C, click the Cancel button, press Ctrl+F, Press Ctrl+V, then press Enter. The registry editor should now bring you to the key we just navigated to manually above.
The figure shows this key displayed in the registry editor. The top key has a default value which simply contains the ProgID of the class, as does the ProgId sub-key.
The most important sub-key—which allows the COM run-time to find the class—is the InprocServer32 sub-key. The information found in this sub-key is used to create an instance of the class. This sub-key has the following values:
(Default)—The name of the host DLL. For .NET classes, this will always be mscoree.dll.
Assembly—The name of the .NET assembly which contains the class.
Class—The fully-qualified name of the class.
RuntimeVersion—The version of the .NET run-time that the class requires.
ThreadingModel—The threading model to be used.
The InprocServer32 key will have a sub-key named the same as the version number assigned to the assembly. In our case this is 1.0.0.0, which is the version number specified in the AssemblyVersion attribute found in the AssemblyInfo.cs file. Notice that the version number sub-key has the same entries as the InprocServer32 key has, except for the ThreadingModel value.
The Implemented Categories sub-key specifies the component category that the class falls under. The GUID shown is the name of a sub-key under this key:
HKEY_CLASSES_ROOT\Component Categories
\{62C8FE65-4EBB-45e7-B440-6E39B2CDBF29}
This sub-key simply specifies that this is a .NET class.
Test Harness Project
Now let’s create a test harness project to make sure that we can call our class using the COM run-time. Add a new project of type VB.NET Test Project to your solution and call it ComLibrary.Test. (If your edition of Visual Studio does not have this project type then you will have to create a console application to use as your test harness project.)
Right-click the ComLibrary project node and select Add then New Test. Select the Unit Test template and name the test HelloComTests.cs. The first thing I do is rip out all the auto-generated in the class except for the TestMethod1 method. To this method add the following code:
Dim comObject As Object = CreateObject("ComLibrary.HelloCom")
Dim computer As String = comObject.ComputerName
Console.WriteLine("Computer Name: {0}", computer)
You will also have to add Option Strict Off to the top of the file to allow the VB compiler to late-bind to the ComputerName property of the COM object. If you do not have this reference then you will get the following error when trying to run the unit test:
Cannot create ActiveX component.
Add a project reference to the ComLibrary project. Right-click the ComLibrary.Test node and select Properties. Then go to the References tab and add the project reference.
We are now ready to run the unit test we just created. In the HelloComTests.vb file, right-click somewhere within the confines of the TestMethod1 method, and select Run Tests. Look at the test run results window and you’ll see your computer name displayed.
So let’s think about what we just did. We created a run-of-the-mill .NET class written in C#. The only COM-specific change we made to that class was to enable the ComVisisble attribute, and that isn’t even necessary at this point since the C# class and its methods we wish to call are public. Then, using regasm, we created a type library file and added appropriate entries to the registry so that the class can be called from the COM infrastructure. We then called this class from a VB.NET test harness. We are one step away from calling this .NET class written in C# from the VBScript code on our ASP page.
Before jumping into the VBScript, let’s examine the registry key that directs a scripting engine to the type library for our class. Look back in the AssemblyInfo.cs file that VIsual Studio generated for us. That file contains a GUID (attached to the Guid attribute) that Visual Studio created for us. This GUID is used to define a key like this:
HKEY_CLASSES_ROOT\TypeLib
\{7E48B9D4-B26A-4340-85B5-B442679AD903}
A scripting engine might know the GUID for a type library that it is going to reference. For example, an ASP website’s Global.asa file might have the following type library reference in it:
<!--METADATA TYPE="TypeLib"
NAME="Microsoft ActiveX Data Objects 2.1 Library"
UUID="{00000201-0000-0010-8000-00AA006D2EA4}"
VERSION="2.1"-->
This reference will cause the scripting engine to look up this GUID under the TypeLib key where it can find out where the type library file resides.
The regasm tool took the GUID out of the assembly file’s metadata section and created this key for us as well as populated the key with the appropriate information. Navigate to the that key now. Notice that there is also a version number sub-key under this key. In our case, the version number is 1.0.0, so our type library information would be under this key:
HKEY_CLASSES_ROOT\TypeLib
\{7E48B9D4-B26A-4340-85B5-B442679AD903}
\1.0
\0
\win32
Notice how there is a entry for the “major + minor” version number and then there is a separate sub-key for each release number. The Win32 key points to the location of the type library file for the version of interest, in my case this is:
\\s2\doten\vsprojects\ComLibrarySolution\ComLibrary
\bin\Debug\ComLibrary.tlb
Unfortunately, this scheme does not have a friendly name for the type library like the ProgID does for the class information. It would be nice if there were a “friendly type name”-to-GUID translator, but there is no such mechanism. It’s GUIDs all the way for the type library information.
Sign the Assembly
In order to use COM to call our library, the library assembly must be signed. If you do not have a key-pair file create one with this command:
sn -k ComLibraryKeyPair.snk
This will create a key-pair file that you can add to the Visual Studio solution and then specify that the ComLibrary should be signed with this file on the Signing tab of the Properties pages.
The regasm codebase switch must also be used when registering the library for use on an ASP page. So first un-register the library:
regasm ComLibrary.dll /unregister /tlb /verbose
and then register it again, but with the codebase switch:
regasm ComLibrary.dll /tlb:ComLibrary.tlb /registered /verbose /codebase
Calling COM From VBScript
To create an ASP page that calls our library, you will need to configure IIS to host an ASP website (refer to my article Debugging ASP With Visual Studio for one way to set up this environment). With the required environment in place, create an ASP page which we’ll call test-com.asp. Add this code to the page:
<%
option explicit
on error goto 0
%>
<html>
<head>
<title>COM Test Page</title>
</head>
<body>
<h1>COM Test Page</h1>
<%
dim comLib
set comLib = Server.CreateObject("")
Response.Write "Computer Name: " & comLib.ComputerName & "<br>"
%>
</body>
</html>
Now browse to this test page. If you get this error message:
Server object error 'ASP 0177 : 80070002'
Server.CreateObject Failed
/pm/admin/test-com.asp, line 16
then it means that your ASP website does not have a reference to the library (error code 80070002 is “file not found”). So add a reference to the library DLL file to your website, then browse to the page again.
Calling .NET Framework Classes
Not only can you call your own classes from VBScript, you can also call the .NET Framework classes from VBScript. You may be stuck in the world of ASP and VBScript, but you still have the .NET Framework at your disposal.
dim shell, env
set shell = Server.CreateObject("WScript.Shell")
set env = shell.Environment("Process")
dim systemRoot : systemRoot = env("systemroot")
dim fileSys
set fileSys = Server.CreateObject("Scripting.FileSystemObject")
dim filename : filename = fileSys.GetTempName()
TBS.
References
Patrick Steele over at 15 Seconds wrote an article called COM Interop Exposed which gives a good account of the history of COM and and its relation to VB6. He explains how VB6 exposes a class as a COM coclass and how VB6 generates interfaces. More importantly, he describes how to expose a .NET class to COM, how COM early- and late-bingind work and why. And in Part 2, he describes COM events (the COM connection point protocol).
Essential COM by Don Box, Addison-Wesley, 1998, ISBN 0-201-63446-5.
Also refer to the Microsoft COM page and the COM, COM+, and DTC page at MSDN.
|
https://www.glenndoten.net/asp/com-interop
|
CC-MAIN-2022-27
|
refinedweb
| 2,641
| 64.81
|
- Training Library
- Microsoft Azure
- Courses
- 70-534 - Architect An Azure Compute Infrastructure
ARM Templates defining infrastructure as code with Azure Resource Manager templates.
As solution architects, we tend to think about things holistically. For example, if you have back-end application that's comprised of some virtual machines, a queue, and some persistent storage, these are all part of the same group of resources.
Since we consider these things to be conceptually one unit, it makes sense to manage them as one unit. That's what Azure Resource Manager offers. It allows you to manage all of the different resources that make up your solutions, as a single group.
Resource Manage has a concept called resource groups, which consists of one or more resources that we want to manage as a single unit. A single solution can have as many resource groups as needed.
As an example, if you had an implementation of the competing consumers pattern, you might use one resource group for the application resources, another for the message queue, and another for the resources in the messages processing pool. And since each resource is in its own group, those groups can all be managed independently.
If the application group consists of virtual machines or virtual network et cetera, then you can manage that group as a single entity. What that means is that you can create all of those resources at the same time. You could monitor all of those resources as a single unit and you can manage the role based access, to all of the resources for the entire group.
Resource Manager has given us, as engineers, a way to treat solutions in the same way we think about them, conceptually, and it's added a lot of value. With this new way to work with resources, Azure also provides a native infrastructure as code solution based on Resource Manger, making it easier to create consistent solutions.
Azure Resource Manage templates, also called ARM templates, are a JSON based template that declares which resources are required for a solution. ARM templates allow you to specify which resource you need and then have them created by having the template process to be at the portal, PowerShell, the rest API, or the command line interface.
Let's check out the ARM template structure. All templates will follow this same basic pattern and this here, is just a skeleton for that pattern. The schema property defines the URL to a JSON file that defines the templating language for the specific version of this file.
The content version is something that you can set for yourself and its purpose is to make sure that you're using the correct version of the file for your deployment, based on whichever versioning you use. Parameters allow you to click information at deployment time.
Now, this can be anything you want, however, as an example, you could use something like the name of a resource or maybe a password that you need provided at deployment time.
Now, variables here are no different than variables anywhere else in the development world. There are values that you can use elsewhere in the template. Resources are where you define which resources you actually want to create. Each resource will be its own JSON object with optional child resources. The resource object becomes even more powerful due to a few properties, such as the copy property, which allows a resource to be looped over, thereby creating multiple copies.
So, if you wanted to create multiple virtual machines, then you could do that with the copy. There's also the dependsOn property, that allows a resource to have its creation delayed while it waits for the creation of another resource or resources that use specify.
Then we have outputs. This property allows you to specify the values that you want returned after deployment and this can be particularly useful if you're using PowerShell or the command line interface to automate the creation of resources, because then you can get back the object with the values you need, to maybe move on to the next step in the process.
Let's walk through a simple template and then we'll deploy it. Instead of writing all of this out, I'm going to grab an existing template from Azure's github page. Azure has a repo on github named, Azure quick start templates, and it has an extensive collection of templates to create all kinds of different resources. It's a fantastic reference and I recommend that you check it out.
Looking through the list, you can see that there are a lot templates here. I'm going to use the one named simple windows vm template and then I'll click on this. And I'll click on the azuredeploy.json file. It starts out the same way as the skeleton that I just showed. And it starts with a schema end version.
Then we have some parameters here. Now, since the goal of this template is to start up a windows vm, it needs the administrative username and password. Notice the username has a type of string and the password has a type of secured string. Now the difference is what you'd expect, the secure string would be masked anywhere you're prompted to enter it.
This meditative section is where you can specify the description that will be displayed to whomever performs the deployment. Now, looking at the Windows OS version parameter, you can see that you can even specify a list of allowed values to select from and that comes in handy when you don't want to allow free formed text.
Okay, down here in the variables section, you get a glimpse of the first function that is used in this template, which is the concatenate function. And you can specify functions by putting them inside square brackets.
In this example, the concat function, which combines strings, takes this hard coded string and joins it with the result of this other function, which creates a deterministically unique string. The function themselves in this example aren't really that important. What's important is that you understand that you can execute functions by putting them inside square brackets.
So, there are several variables here which are used throughout the rest of the template and then in the next section, we have the resources and this is where all the resources are specified.
The first resource is going to be used to ensure that there is a storage account based on the name that's pulled from the storageAccountname variable. The type property here represents the namespace for the resource joined by the resource type.
The API version is the property that specifies the version of the rest API to use for this specific resource. The location property is the location where your resource will be created and you'll notice that here, it's using a function that fetches the location from the resource group that's actually being used.
As I scroll down, notice that the resource declarations roughly look the same and that's because they all follow the same basic pattern. Looking at the network interface resource here, you can see the dependsOn property, that I mentioned previously, which allows you to delay the creation of a resource until dependent resources are created.
Here, the public IP and virtual networks are required before the NIC can be created. If I jump down to the outputs section, you can see that you can create you own output here and in this example, the host name as a string data type and it returns the fully qualified domain. So, this template will allow you to input a username, password, DNS prefix, and then select the version of Windows, and then it's going to create the resources listed here.
Alright, let's see how this actually works by doing this in the portal. I'm going to start by searching for templates and then I'll select it here from the list. You can see that I don't have any templates at the moment, so I'm going to click add and I'll fill out a name and description, perfect. And with that done, I'll paste in the template.
It's worth noting that you could have also done this by clicking on the button in the github repo and have Azure pull the results of the JSON file automatically and it does this because it passes Azure the URL to that JSON file via a URL parameter. Now, if I save this, it will take just a second to create and once created, it doesn't show up right away. So, I'll just need to refresh this list.
And now, if I click on it, it's going to open up a blade and I'll have this deploy button. If I click it, I can fill out this form. Notice that these are the parameters from the template. The secure string data type for the password, causes it to be masked. Also, the allowed options for the Windows version, cause a drop down to be created. So, I'm going to populate this and agree to the license and then I'll click purchase.
So, this is just going to take a moment to complete, however, by looking at the resource group that I created for this and clicking refresh, you can see that it's creating everything specified in the template and in order.
Because all of the resources are a part of the same resource group, you can manage the permissions for all of these resources by setting permissions on the group itself. And you can monitor all of the resources at the group level too. Again, since all of the resources belong to the same group, if you delete the group itself, all of the resources in that group are deleted. So, that's how you deploy through the portal.
Now, if you want to use PowerShell, there are two commandlets worth knowing about. The first is the Test-AzureRmResourceGroupDeployment and it allows you to validate you deployment. And then, if you want to actually perform the deployment, you can use the New-AzureRmResourceGroupDeployment commandlet. If you want to do it from the command line, it's pretty simple. You can use the group deployment create sub commands of the Azure executable.
Okay, that's going to wrap up this lesson. In the next lesson, I'm going to cover availability. So, if you're ready to keep learning, then let's get started.
|
https://cloudacademy.com/course/architect-an-azure-compute-infrastructure/arm-templates/
|
CC-MAIN-2020-05
|
refinedweb
| 1,782
| 68.4
|
Count all Prime Length Palindromic Substrings
Given a string str, the task is to count all the sub-strings of str which are palindromes and their length is prime.
Examples:
Input: str = “geeksforgeeks”
Output: 2
“ee” and “ee” are the only valid sub-strings
Input: str = “abccc”
Output: 3
Approach: Using Sieve of Eratosthenes, find all the primes till the length of str because that is the maximum length a sub-string of str can have. Now starting from the smallest prime i.e. j = 2 till j ≤ len(str). If j is prime then count all the palindromic sub-strings of str whose length = j. Print the total count in the end.
Below is the implementation of the above approach:
C++
Java
Python3
C#
// C# implementation of the approach
using System;
class GfG
{
// Function that returns true if
// sub-string starting at i and
// ending at j in str is a palindrome
static bool isPalindrome(string str,
int i, int j)
{
while (i < j) { if (str[i] != str[j]) return false; i++; j--; } return true; } // Function to count all palindromic // substring whose lwngth is a prime number static int countPrimePalindrome(string str, int len) { bool[] prime = new bool[len + 1]; Array.Fill(prime, true); // 0 and 1 are non-primes prime[0] = prime[1] = false; for (int p = 2; p * p <= len; p++) { // If prime[p] is not changed, // then it is a prime if (prime[p]) { // Update all multiples of p greater // than or equal to the square of it // numbers which are multiple of p // and are less than p^2 are already // been marked. for (int i = p * p; i <= len; i += p) prime[i] = false; } } // To store the required number // of sub-strings int count = 0; // Starting from the smallest prime // till the largest length of the // sub-string possible for (int j = 2; j <= len; j++) { // If j is prime if (prime[j]) { // Check all the sub-strings of // length j for (int i = 0; i + j - 1 < len; i++) { // If current sub-string is a // palindrome if (isPalindrome(str, i, i + j - 1)) count++; } } } return count; } // Driver code public static void Main() { string s = "geeksforgeeks"; int len = s.Length; Console.WriteLine(countPrimePalindrome(s, len)); } } // This code is contributed by Code_Mech [tabby title = "PHP"]
2
Recommended Posts:
- Count of Palindromic substrings in an Index range
- Rearrange the string to maximize the number of palindromic substrings
- Check if all the palindromic sub-strings are of odd length
- Number of palindromic subsequences of length k where k <= 3
- Sum of all odd length palindromic numbers within the range [L, R]
- Convert all substrings of length 'k' from base 'b' to decimal
- Largest palindromic prime in an array
- Number of strings of length N with no palindromic sub string
- Find number of substrings of length k whose sum of ASCII value of characters is divisible by k
- Count substrings with same first and last characters
- Number of substrings with count of each character as k
- Count of substrings of a binary string containing K ones
- Count Substrings with equal number of 0s, 1s and 2s
- Count of total anagram substrings
- Count distinct substrings that contain some characters at most, rituraj_jain, Code_Mech
|
https://www.geeksforgeeks.org/count-all-prime-length-palindromic-substrings/
|
CC-MAIN-2019-22
|
refinedweb
| 534
| 50.57
|
#include "page0size.h"
#include "sync0rw.h"
#include "ut0byte.h"
#include "ut0mutex.h"
#include "ut0new.h"
#include <atomic>
#include <queue>
#include <set>
#include <vector>
Go to the source code of this file.
Transaction system global type definitions
Created 3/26/1996 Heikki Tuuri
printf(3) format used for printing DB_TRX_ID and other system fields
Maximum transaction identifier.
Page number of the transaction system page.
Rollback pointer (DB_ROLL_PTR, DATA_ROLL_PTR)
Row identifier (DB_ROW_ID, DATA_ROW_ID)
Transaction identifier (DB_TRX_ID, DATA_TRX_ID)
Rollback segment header.
Rollback segment array header.
File objects.
Transaction system header
Undo log header.
Undo log record.
Undo log page header.
Undo segment header.
Undo number.
Type of data dictionary operation.
Transaction execution states when trx->state == TRX_STATE_ACTIVE.
Transaction states (trx_t::state)
Mark the transaction for forced rollback.
Was the transaction rolled back asynchronously or by the owning thread.
This flag is relevant only if TRX_FORCE_ROLLBACK is set.
If this flag is set then the transaction cannot be rolled back asynchronously.
For masking out the above four flags.
maximum length that a formatted trx_t::id could take, not including the terminating NUL character.
Random value to check for corruption of trx_t.
Space id of the transaction system page (the system tablespace)
|
https://dev.mysql.com/doc/dev/mysql-server/latest/trx0types_8h.html
|
CC-MAIN-2020-16
|
refinedweb
| 197
| 54.79
|
1.1 anton 1: \input texinfo @c -*-texinfo-*- 2: @comment %**start of header (This is for running Texinfo on a region.) 3: @setfilename gforth-info 4: @settitle GNU Forth Manual 5: @setchapternewpage odd 6: @comment %**end of header (This is for running Texinfo on a region.) 7: 8: @ifinfo 9: This file documents GNU Forth 0.0 10: 11: Copyright @copyright{} 1994 GNU Forth Development Group 12: 13: Permission is granted to make and distribute verbatim copies of 14: this manual provided the copyright notice and this permission notice 15: are preserved on all copies. 16: 17: @ignore 18: Permission is granted to process this file through TeX and print the 19: results, provided the printed document carries a copying permission 20: notice identical to this one except for the removal of this paragraph 21: (this paragraph not being relevant to the printed manual). 22: 23: @end ignore 24: Permission is granted to copy and distribute modified versions of this 25: manual under the conditions for verbatim copying, provided also that the 26: sections entitled "Distribution" and "General Public License" are 27: included exactly as in the original, and provided that the entire 28: resulting derived work is distributed under the terms of a permission 29: notice identical to this one. 30: 31: Permission is granted to copy and distribute translations of this manual 32: into another language, under the above conditions for modified versions, 33: except that the sections entitled "Distribution" and "General Public 34: License" may be included in a translation approved by the author instead 35: of in the original English. 36: @end ifinfo 37: 38: @titlepage 39: @sp 10 40: @center @titlefont{GNU Forth Manual} 41: @sp 2 42: @center for version 0.0 43: @sp 2 44: @center Anton Ertl 45: 46: @comment The following two commands start the copyright page. 47: @page 48: @vskip 0pt plus 1filll 49: Copyright @copyright{} 1994 GNU Forth Development Group 50: 51: @comment !! Published by ... or You can get a copy of this manual ... 52: 53: Permission is granted to make and distribute verbatim copies of 54: this manual provided the copyright notice and this permission notice 55: are preserved on all copies. 56: 57: Permission is granted to copy and distribute modified versions of this 58: manual under the conditions for verbatim copying, provided also that the 59: sections entitled "Distribution" and "General Public License" are 60: included exactly as in the original, and provided that the entire 61: resulting derived work is distributed under the terms of a permission 62: notice identical to this one. 63: 64: Permission is granted to copy and distribute translations of this manual 65: into another language, under the above conditions for modified versions, 66: except that the sections entitled "Distribution" and "General Public 67: License" may be included in a translation approved by the author instead 68: of in the original English. 69: @end titlepage 70: 71: 72: @node Top, License, (dir), (dir) 73: @ifinfo 74: GNU Forth is a free implementation of ANS Forth available on many 75: personal machines. This manual corresponds to version 0.0. 76: @end ifinfo 77: 78: @menu 79: * License:: 80: * Goals:: About the GNU Forth Project 81: * Other Books:: Things you might want to read 82: * Invocation:: Starting GNU Forth 83: * Words:: Forth words available in GNU Forth 84: * ANS conformance:: Implementation-defined options etc. 85: * Model:: The abstract machine of GNU Forth 86: @comment * Emacs and GForth:: The GForth Mode 87: * Internals:: Implementation details 88: * Bugs:: How to report them 89: * Pedigree:: Ancestors of GNU Forth 90: * Word Index:: An item for each Forth word 91: * Node Index:: An item for each node 92: @end menu 93: 94: @node License, Goals, Top, Top 95: @unnumbered License 96: !! Insert GPL here 97: 98: @iftex 99: @unnumbered Preface 100: This manual documents GNU Forth. The reader is expected to know 101: Forth. This manual is primarily a reference manual. @xref{Other Books} 102: for introductory material. 103: @end iftex 104: 105: @node Goals, Other Books, License, Top 106: @comment node-name, next, previous, up 107: @chapter Goals of GNU Forth 108: @cindex Goals 1.5 ! anton 109: The goal of the GNU Forth Project is to develop a standard model for ! 110: ANSI Forth. This can be split into several subgoals: 1.1 anton 111: 1.5 ! anton: 1.1 anton: !! somtime in spring or summer 1994. If you are lucky, you can still get 161: dpANS6 (the draft that was approved as standard) by aftp from 162:. 163: 164: @cite{Forth: The new model} by Jack Woehr (!! Publisher) is an introductory 165: book based on a draft version of the standard. It does not cover the 166: whole standard. It also contains interesting background information 167: (Jack Woehr was in the ANS Forth Technical Committe). 168: 169: @node Invocation, Words, Other Books, Top 170: @chapter Invocation 171: 172: You will usually just say @code{gforth}. More generally, the default GNU 173: Forth image can be invoked like this 174: 175: @example 176: gforth [--batch] [files] [-e forth-code] 177: @end example 178: 179: The @code{--batch} option makes @code{gforth} exit after processing the 180: command line. Also, the startup message is suppressed. @file{files} are 181: Forth source files that are executed in the order in which they 182: appear. The @code{-e @samp{forth-code}} or @code{--evaluate 183: @samp{forth-code}} option evaluates the forth code; it can be freely 184: mixed with the files. This option takes only one argument; if you want 185: to evaluate more Forth words, you have to quote them or use several 186: @code{-e}s. !! option for suppressing default loading. 187: 188: You can use the command line option @code{-i @samp{file}} or 189: @code{--image-file @samp{file}} to specify a different image file. Note 190: that this option must be the first in the command line. The rest of the 191: command line is processed by the image file. 192: 193: If the @code{--image-file} option is not used, GNU Forth searches for a 194: file named @file{gforth.fi} in the path specified by the environment 195: variable @code{GFORTHPATH}; if this does not exist, in 196: @file{/usr/local/lib/gforth} and in @file{/usr/lib/gforth}. 197: 198: @node Words, , Invocation, Top 199: @chapter Forth Words 200: 201: @menu 202: * Notation:: 203: * Arithmetic:: 204: * Stack Manipulation:: 205: * Memory access:: 206: * Control Structures:: 207: * Local Variables:: 208: * Defining Words:: 209: * Vocabularies:: 210: * Files:: 211: * Blocks:: 212: * Other I/O:: 213: * Programming Tools:: 214: @end menu 215: 216: @node Notation, Arithmetic, Words, Words 217: @section Notation 218: 1.3 anton 219: The Forth words are described in this section in the glossary notation 1.1 anton 220: that has become a de-facto standard for Forth texts, i.e. 221: 222: @quotation 223: @samp{word} @samp{Stack effect} @samp{pronunciation} @samp{wordset} 224: @samp{Description} 225: @end quotation 226: 227: @table @samp 228: @item word 229: The name of the word. BTW, GNU Forth is case insensitive, so you can 230: type the words in in lower case. 231: 232: @item Stack effect 233: The stack effect is written in the notation @code{@samp{before} -- 234: @samp{after}}, where @samp{before} and @samp{after} describe the top of 235: stack entries before and after the execution of the word. The rest of 236: the stack is not touched by the word. The top of stack is rightmost, 237: i.e., a stack sequence is written as it is typed in. Note that GNU Forth 238: uses a separate floating point stack, but a unified stack 239: notation. Also, return stack effects are not shown in @samp{stack 240: effect}, but in @samp{Description}. The name of a stack item describes 241: the type and/or the function of the item. See below for a discussion of 242: the types. 243: 244: @item pronunciation 245: How the word is pronounced 246: 247: @item wordset 248: The ANS Forth standard is divided into several wordsets. A standard 249: system need not support all of them. So, the fewer wordsets your program 250: uses the more portable it will be in theory. However, we suspect that 251: most ANS Forth systems on personal machines will feature all 252: wordsets. Words that are not defined in the ANS standard have 253: @code{gforth} as wordset. 254: 255: @item Description 256: A description of the behaviour of the word. 257: @end table 258: 259: The name of a stack item corresponds in the following way with its type: 260: 261: @table @code 262: @item name starts with 263: Type 264: @item f 1.5 ! anton 265: Bool, i.e. @code{false} or @code{true}. 1.1 anton 266: @item c 267: Char 268: @item w 269: Cell, can contain an integer or an address 270: @item n 271: signed integer 272: @item u 273: unsigned integer 274: @item d 275: double sized signed integer 276: @item ud 277: double sized unsigned integer 278: @item r 279: Float 280: @item a_ 281: Cell-aligned address 282: @item c_ 283: Char-aligned address (note that a Char is two bytes in Windows NT) 284: @item f_ 285: Float-aligned address 286: @item df_ 287: Address aligned for IEEE double precision float 288: @item sf_ 289: Address aligned for IEEE single precision float 290: @item xt 291: Execution token, same size as Cell 292: @item wid 293: Wordlist ID, same size as Cell 294: @item f83name 295: Pointer to a name structure 296: @end table 297: 298: @node Arithmetic, , Notation, Words 299: @section Arithmetic 300: Forth arithmetic is not checked, i.e., you will not hear about integer 301: overflow on addition or multiplication, you may hear about division by 302: zero if you are lucky. The operator is written after the operands, but 303: the operands are still in the original order. I.e., the infix @code{2-1} 304: corresponds to @code{2 1 -}. Forth offers a variety of division 305: operators. If you perform division with potentially negative operands, 306: you do not want to use @code{/} or @code{/mod} with its undefined 307: behaviour, but rather @code{fm/mod} or @code{sm/mod} (probably the 308: former). 309: 310: @subsection Single precision 311: + 312: - 313: * 314: / 315: mod 316: /mod 317: negate 318: abs 319: min 320: max 321: 322: @subsection Bitwise operations 323: and 324: or 325: xor 326: invert 327: 2* 328: 2/ 329: 330: @subsection Mixed precision 331: m+ 332: */ 333: */mod 334: m* 335: um* 336: m*/ 337: um/mod 338: fm/mod 339: sm/rem 340: 341: @subsection Double precision 342: d+ 343: d- 344: dnegate 345: dabs 346: dmin 347: dmax 348: 349: @node Stack Manipulation,,, 350: @section Stack Manipulation 351: 352: gforth has a data stack (aka parameter stack) for characters, cells, 353: addresses, and double cells, a floating point stack for floating point 354: numbers, a return stack for storing the return addresses of colon 355: definitions and other data, and a locals stack for storing local 356: variables. Note that while every sane Forth has a separate floating 357: point stack, this is not strictly required; an ANS Forth system could 358: theoretically keep floating point numbers on the data stack. As an 359: additional difficulty, you don't know how many cells a floating point 360: numkber takes. It is reportedly possible to write words in a way that 361: they work also for a unified stack model, but we do not recommend trying 1.3 anton 362: it. Also, a Forth system is allowed to keep the local variables on the 363: return stack. This is reasonable, as local variables usually eliminate 364: the need to use the return stack explicitely. So, if you want to produce 365: a standard complying program and if you are using local variables in a 1.1 anton 366: word, forget about return stack manipulations in that word (see the 367: standard document for the exact rules). 368: 1.2 pazsan 369: @subsection Data stack 1.1 anton 370: drop 371: nip 372: dup 373: over 374: tuck 375: swap 376: rot 377: -rot 378: ?dup 379: pick 380: roll 381: 2drop 382: 2nip 383: 2dup 384: 2over 385: 2tuck 386: 2swap 387: 2rot 388: 389: @subsection Floating point stack 390: fdrop 391: fnip 392: fdup 393: fover 394: ftuck 395: fswap 396: frot 397: 398: @subsection Return stack 399: >r 400: r> 401: r@ 402: rdrop 403: 2>r 404: 2r> 405: 406: @subsection Locals stack 407: 408: @subsection Stack pointer manipulation 409: sp@ 410: sp! 411: fp@ 412: fp! 413: rp@ 414: rp! 415: lp@ 416: lp! 417: 418: @node Memory access 419: @section Memory access 420: 421: @subsection Stack-Memory transfers 422: @ 423: ! 424: +! 425: c@ 426: c! 427: 2@ 428: 2! 429: f@ 430: f! 431: sf@ 432: sf! 433: df@ 434: df! 435: 436: @subsection Memory block access 437: 438: move 1.5 ! anton 439: erase ! 440: ! 441: While the previous words work on address units, the rest works on ! 442: characters. ! 443: ! 444: cmove ! 445: cmove> 1.1 anton 446: fill 1.5 ! anton 447: blank 1.1 anton 448: 449: @node Control Structures 450: @section Control Structures 451: 452: Control structures in Forth cannot be used in interpret state, only in 453: compile state, i.e., in a colon definition. We do not like this 454: limitation, but have not seen a satisfying way around it yet, although 455: many schemes have been proposed. 456: 457: @subsection Selection 458: 459: @example 460: @var{flag} 461: IF 462: @var{code} 463: ENDIF 464: @end example 1.3 anton 465: or 1.1 anton 466: @example 467: @var{flag} 468: IF 469: @var{code1} 470: ELSE 471: @var{code2} 472: ENDIF 473: @end example 474: 475: You can use @code{THEN} instead of {ENDIF}. Indeed, @code{THEN} is 476: standard, and @code{ENDIF} is not, although it is quite popular. We 477: recommend using @code{ENDIF}, because it is less confusing for people 478: who also know other languages (and is not prone to reinforcing negative 479: prejudices against Forth in these people). Adding @code{ENDIF} to a 480: system that only supplies @code{THEN} is simple: 481: @example 482: : endif POSTPONE then ; immediate 483: @end example 484: 1.5 ! anton 485: [According to @cite{Webster's New Encyclopedic Dictionary}, @dfn{then ! 486: (adv.)} has the following meanings: ! 487: @quotation ! 488: ... 2b: following next after in order ... 3d: as a necessary consequence ! 489: (if you were there, then you saw them). ! 490: @end quotation ! 491: Forth's @code{THEN} has the meaning 2b, @code{THEN} in Pascal ! 492: and many other programming languages has the meaning 3d.] 1.4 anton 493: 1.5 ! anton 494: We also provide the words @code{?dup-if} and @code{?dup-0=-if}, so you 1.1 anton 495: can avoid using @code{?dup}. 496: 497: @example 498: @var{n} 499: CASE 500: @var{n1} OF @var{code1} ENDOF 501: @var{n2} OF @var{code2} ENDOF 502: @dots 503: ENDCASE 504: @end example 505: 506: Executes the first @var{codei}, where the @var{ni} is equal to 507: @var{n}. A default case can be added by simply writing the code after 508: the last @code{ENDOF}. It may use @var{n}, which is on top of the stack, 509: but must not consume it. 510: 511: @subsection Simple Loops 512: 513: @example 514: BEGIN 515: @var{code1} 516: @var{flag} 517: WHILE 518: @var{code2} 519: REPEAT 520: @end example 521: 522: @var{code1} is executed and @var{flag} is computed. If it is true, 523: @var{code2} is executed and the loop is restarted; If @var{flag} is false, execution continues after the @code{REPEAT}. 524: 525: @example 526: BEGIN 527: @var{code} 528: @var{flag} 529: UNTIL 530: @end example 531: 532: @var{code} is executed. The loop is restarted if @code{flag} is false. 533: 534: @example 535: BEGIN 536: @var{code} 537: AGAIN 538: @end example 539: 540: This is an endless loop. 541: 542: @subsection Counted Loops 543: 544: The basic counted loop is: 545: @example 546: @var{limit} @var{start} 547: ?DO 548: @var{body} 549: LOOP 550: @end example 551: 552: This performs one iteration for every integer, starting from @var{start} 553: and up to, but excluding @var{limit}. The counter, aka index, can be 554: accessed with @code{i}. E.g., the loop 555: @example 556: 10 0 ?DO 557: i . 558: LOOP 559: @end example 560: prints 561: @example 562: 0 1 2 3 4 5 6 7 8 9 563: @end example 564: The index of the innermost loop can be accessed with @code{i}, the index 565: of the next loop with @code{j}, and the index of the third loop with 566: @code{k}. 567: 568: The loop control data are kept on the return stack, so there are some 569: restrictions on mixing return stack accesses and counted loop 570: words. E.g., if you put values on the return stack outside the loop, you 571: cannot read them inside the loop. If you put values on the return stack 572: within a loop, you have to remove them before the end of the loop and 573: before accessing the index of the loop. 574: 575: There are several variations on the counted loop: 576: 577: @code{LEAVE} leaves the innermost counted loop immediately. 578: 579: @code{LOOP} can be replaced with @code{@var{n} +LOOP}; this updates the 580: index by @var{n} instead of by 1. The loop is terminated when the border 581: between @var{limit-1} and @var{limit} is crossed. E.g.: 582: 583: 4 0 ?DO i . 2 +LOOP prints 0 2 1.3 anton 584: 1.1 anton 585: 4 1 ?DO i . 2 +LOOP prints 1 3 586: 587: The behaviour of @code{@var{n} +LOOP} is peculiar when @var{n} is negative: 588: 589: -1 0 ?DO i . -1 +LOOP prints 0 -1 1.3 anton 590: 1.1 anton 591: 0 0 ?DO i . -1 +LOOP prints nothing 592: 593: Therefore we recommend avoiding using @code{@var{n} +LOOP} with negative 594: @var{n}. One alternative is @code{@var{n} S+LOOP}, where the negative 595: case behaves symmetrical to the positive case: 596: 597: -2 0 ?DO i . -1 +LOOP prints 0 -1 1.3 anton 598: 1.1 anton 599: -1 0 ?DO i . -1 +LOOP prints 0 1.3 anton 600: 1.1 anton 601: 0 0 ?DO i . -1 +LOOP prints nothing 602: 603: The loop is terminated when the border between @var{limit-sgn(n)} and 604: @var{limit} is crossed. However, @code{S+LOOP} is not part of the ANS 605: Forth standard. 606: 1.4 anton 607: @code{?DO} can be replaced by @code{DO}. @code{DO} enters the loop even 608: when the start and the limit value are equal. We do not recommend using 609: @code{DO}. It will just give you maintenance troubles. 1.1 anton 610: 1.5 ! anton 611: @code{UNLOOP} is used to prepare for an abnormal loop exit, e.g., via ! 612: @code{EXIT}. @code{UNLOOP} removes the loop control parameters from the ! 613: return stack so @code{EXIT} can get to its return address. ! 614: ! 615: Another counted loop is ! 616: @example ! 617: @var{n} ! 618: FOR ! 619: @var{body} ! 620: NEXT ! 621: @end example ! 622: This is the preferred loop of native code compiler writers who are too ! 623: lazy to optimize @code{?DO} loops properly. In GNU Forth, this loop ! 624: iterates @var{n+1} times; @code{i} produces values starting with @var{n} ! 625: and ending with 0. Other Forth systems may differently, even if they ! 626: support @code{FOR} loops. ! 627: 1.1 anton 628: 629: @contents 630: @bye 631:
|
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/Attic/gforth.texi?annotate=1.5;hideattic=0;f=h;only_with_tag=MAIN;ln=1
|
CC-MAIN-2021-31
|
refinedweb
| 3,328
| 70.84
|
In this article, I will show you how to control dual servo SG90 motors in a pan-tilt camera stand with an Arduino Uno and a simple joystick module.
The Joystick Module
A joystick is one of the easiest ways to control a servo motor. While some tutorials require a motor shield or other extra things, this tutorial only requires basic peripherals. The joystick aka 'the thumbstick' is a cool control interface for a project, especially for robotics. The X and Y axes are two ~10k potentiometers that can control 2D movement by generating analog signals. There is also a push button that could be used for controlling other commands or movements.
The HC-SR04
In this tutorial, I am setting up the servo-controlled pan-tilt stand with an HC-SR04 ultrasonic sensor as an example. However, keep in mind that the stand can be used for sensors, cameras, and more!
Let’s Get Started
Collect Hardware
- Arduino UNO
- 2 Servo Motors SG90
- Joystick Module
- Dual Servo Stand
- Some Jumper Wires
- HC-SR04
The Arduino UNO
Joystick module
Servo motor
The breadboard
The pan-tilt bracket kit
Assemble the dual servo stand first. I am using the pan/tilt bracket kit from Sparkfun.
Connecting the Hardware
Let’s start with the 2 servos. I am using a breadboard to make the connections easy.
Connections for the servo motors and the Arduino Uno.
Follow the connection diagrams above and below to complete the connections.
The connections for the joystick module and the Arduino Uno.
The final wiring will look like this:
The final project schematic.
Once everything is connected it will look like this!
Source Code
Copy and paste the following code to the Arduino software. Always test (compile) the code before uploading it to your Arduino board.
When you upload the code to the Arduino, the servos should not move until you use the joystick.
#include <Servo.h> const int servo1 = 11; // ("----------------"); }
|
https://maker.pro/arduino/tutorial/how-to-set-up-pan-tilt-camera-stand-with-arduino-uno-and-joystick-module
|
CC-MAIN-2019-26
|
refinedweb
| 323
| 64.81
|
16 November 2010 14:15 [Source: ICIS news]
TORONTO (ICIS)--DNP Green Technology has restructured its Bioamber succinic acid joint venture with France's Agro-industrie Recherches et Developpements (ARD), the US-based renewable chemicals producer said on Tuesday.
Under the restructuring, DNP Green acquired 100% of Bioamber while ARD became a stakeholder in DNP. At the same time, DNP Green changed its name to BioAmber Inc, it said.
Financial terms were not disclosed.
Bioamber was established in 2008 as an equal joint venture to produce bio-based succinic acid. Earlier this year, it commissioned a bio-based succinic acid plant in ?xml:namespace>
“As Bioamber moves into its commercial phase, the partners have decided to entrust commercialisation to DNP Green and have ARD focus on process optimisation in the plant and the scale-up of a next-generation organism producing succinic acid,” the companies
|
http://www.icis.com/Articles/2010/11/16/9411097/dnp-green-acquires-100-of-bioamber-joint-venture.html
|
CC-MAIN-2014-52
|
refinedweb
| 145
| 51.89
|
In the old visual studio (probably before 2010). I was able to add a control event in the code-behind page using the dropdown list at the top of the page. The dropdown list gives me all events of a control and I just selected the one and the empty event
handler appeared in the ocde page. Now in the Visual Studio 2019, in the code behind page (c#), I see three dropdown lists, the first one just has the namespace (my solution), the second one just show the current page partial class name, and the third one
contains the objects in the page, as well as event procedures I have created. What did I miss? How do I add an event procedure without having to type the empty procedure myself?
OK. I figured it out. I have to look into the Design tab of the page (not the Source tab), then the Lightning button appears in the Properties page of the control I selected in the Design tab. Clicking the lightning button gives me all events.
|
https://social.msdn.microsoft.com/Forums/en-US/f2d0d6a9-31ca-448f-b79c-516fe13bf7c8/how-to-add-a-control-event-using-visual-studio-2019?forum=aspvisualstudio
|
CC-MAIN-2022-05
|
refinedweb
| 177
| 76.96
|
inkscape generates no latex formula
Bug Description
Binary package hint: inkscape
Using "effects -> render -> latex formula" does not render anything on screen. Instead, following error message in console is given:
(inkscape:9793): GLib-CRITICAL **: g_utf8_collate: assertion `str2 != NULL' failed
Extension::Script: Unknown error for pclose
: Success
Inkscape does not crash, it is still usable. Googling shows, that /usr/share/
Versions:
inkscape: 0.44-1ubuntu1
Ubuntu: egdy, recently updated
As for me, I have this when using "effects -> render -> latex formula"
pstoedit: version 3.45 / DLL interface 108 (build Oct 18 2007 - = xml.dom.
File "/usr/lib/
return expatbuilder.
File "/usr/lib/
fp = open(file, 'rb')
IOError: [Errno 2] No such file or directory: '/tmp/inkscape-
Similar problems here in Ubuntu Gutsy (=> Inkscape 0.45): When trying to render ANY valid latex string, inkscape hangs after pressing the OK button still showing the input dialog.
No error message in the console but in a file under /tmp (ink_ext_
pstoedit: version 3.44 / DLL interface 108 (build Apr 29 2007 - release build - g++ 4.1.3 20070423 (prerelease) (Ubuntu 4.1.2-3ubuntu3)) : Copyright (C) 1993 - 2006 Wolfgang Glunz
PostScript/PDF Interpreter finished. Return status 2 executed command : /usr/bin/gs -q -dDELAYBIND -dWRITESYSTEMDICT -dNODISPLAY -dNOEPS /home/jules-
The interpreter seems to have failed, cannot proceed !
Traceback (most recent call last):
File "/usr/share/
e.affect()
File "/usr/share/
self.effect()
File "/usr/share/
svg_open(self, svg_file)
File "/usr/share/
doc = xml.dom.
File "/usr/lib/
return expatbuilder.
File "/usr/lib/
result = builder.
File "/usr/lib/
parser.
xml.parsers.
Sorry, if I miss something obvious, it's in the middle of the night here ;)
The problem should be resolved by installing pstoedit version 3.45... It worked like a charm for me!
Curious I do not have this option in my inkscape on gutsy. So I cannot help.
oups pstoedit was not installed... After installation of the gutsy version 3.44, the option appeared in the menu but inkscape freeze when I'm trying to use it. So if Kenshiro is right that means I will have to wait for Hardy for this option and if pstoedit will be update.
Nope, no need to wait for Hardy.
1) download pstoedit 3.45 here: http://
2) uninstall pstoedit 3.44
3) install libplot-dev and librsvg2-dev
4) install pstoedit 3.45
see this post if you need any complement: https:/
et bon courage ;-)
I don't like to install stuff like this but it's because I forget about checkinstall. It's working fine now. Thanks.
I also don't like messing with my package system in a way like this so I built my own custom (lib)pstoedit(-dev) package set based on the ubuntu source package and the upstream tarball.
Is there any chance, that this bug will get fixed in the repositories before Hardy?
My favoured workaround right now is to install the Debian Sid version of this package.
-------
1. Go to http://
pstoedit_
2. Unpack the source using:
$ dpkg-source -x pstoedit_3.45-2.dsc
3. Grab the packages you'll need to build this tool:
$ sudo aptitude install build-essential debhelper dh-buildinfo docbook-to-man g++ libwmf-dev libmagick++9-dev libplot-dev libpng12-dev pkg-config gs fakeroot
but don't take my word for it. You can find out exactly what's required by looking at debian/control in the pstoedit-3.45 folder.
4. Build:
$ cd pstoedit-3.45
$ fakeroot debian/rules binary
5. Install the generated package files (libpstoedit-
-------
Other bugs addressing this problem: bug #156365 (contains links to a prebuilt binary package, if you're a trusting sort), bug #78737, bug #136950, bug #123499.
Hopefully the upstream package will hit Ubuntu before Hardy is released.
I have build the packages following your instructions.
You can download them at http://
We've just released 0.46, which has packages available for Ubuntu Gutsy and Hardy (see http://
still not working. I did an upgrade to hardy and the latex render doesn't work.
I just have the message:
Inkscape has received additional data from the script executed. The script did not return an error, but this may indicate the results will not be as expected.
pstoedit: version 3.45 / DLL interface 108 (build Feb 28 2008 - release build - g++ 4.2.3 (Ubuntu 4.2.3-2ubuntu1)) : Copyright (C) 1993 - 2007 Wolfgang Glunz
and nothing done
Same as above.
Still not working with 0.46 from Hardy repositories. (not from ppa)
I think this bug should get a little more love.
Many mathematicans use inkscape to create drawings for their papers. Not being able to include LaTeX formulae is a showstopper.
Not being able to use latex in inkscape is definitely a show-stopper for me. I used to use the textext extension, but sadly it seems to be broken in Ubuntu Harty (inkscape v0.46). So I was happy to learn inkscape included a new built-in option for rendering latex. Except that doesn't work either! I had to install the python lxml package to get past the first error. But now, I get another error I am unable to resolve.
pstoedit: version 3.45 / DLL interface 108 (build Feb 13 2008 -22, in etree._
IOError: Error reading file '/tmp/inkscape-
This happens when I click Effects -> Render -> Latex Formula and then click apply (accepting the default content).
If anyone can find a workaround/fix for this, that would rock. I need latex to annotate my paper figures.
I agree with Sebastian. Inkscape should be the software replacing xfig but unfortunatly because of this bug I cannot tell to anyone to use it. Last week my wife did have a discussion in her lab to choose the tools they will need for publication. They are looking at something to replace the adobe suite but because of stupid bug like this there are big chance they will stay on windows + adobe and honestly it's very difficult to blame them.
This bug is present since gutsy. A workaround has been found (recompile pstoedit) but this time it doesn't seem to be enough.
At least for v0.46 this seems to be bug #195052 in Inkscape,
https:/
There, the latex renderer does not report an error message (the pstoedit output is not an error), but no formula is displayed. Essentially, as far as I understood the problem, inkscape couldn't import the generated svg properly. A patch has been commited on April 08, and I can confirm that it solves the problem for me on Hardy. It would be nice, if this patch could be taken to the ubuntu packages soon. As many people have already said, the latex formula capabiliy of inkscape is essential.
I'm not sure if this solves the problem of Cuchaz. There it rather seems that no svg is generated, which might come from some missing software ...
What still does not work is the preview capability of the new latex formula renderer. Whenever you just type a single character into the input line, inkscape immediately tries to generate latex output -- but then most of the time the syntax is incorrect, because you haven't finished typing yet, and many error messages pop up. Obviously, the way the preview is done, does not make much sense.
As a mathematician, I and my colleges really want this to be fixed. Let's get it as a upgrade for Gutsy.
Yep the new file eqtexsvg.py is doing the job (at least on the 0.46 version for gutsy coming from ppa). In the worst case scenario, if the file is not corrected for hardy, it's very easy to do the change yourself, you just have to copy the file in the directory /usr/share/
The patch work fine for me, too.
I extended it to remove the annoying "pstoedit: version ..." message window. My version is attached.
Final version of /usr/share/
I tried using Sebastian's eqtexsvg.py and I get this:27, in etree._
etree.XMLSyntax
I am using Hardy Beta. Any help would be much appreciated since LaTeX formula rendering is extremely important to me as well.
This happens when your SHELL environment is not set to /bin/bash
Try "export SHELL=/bin/bash" before running inkscape.
I will write a patch for this problem soon.
Thank-you, Sebastian! It works now. I appreciate your help very much!
Ok, this actually has nothing to do with SHELL. Problem was that pstoedit writes temporary files in CWD and this fails if CWD is not writeable.
I solved this by CDing into the temporary directory.
Fixed version is attached.
Patch for inkscape package.
I do not want to appear impatient but Hardy will be released in 6 days and this bug affects many people. A working patch is already available.
Could the package maintainer please include the patch so that it will be released with Hardy? Or are there any problems with the patch? If yes, please notify us, so that we can work on a fix.
The patch only affects eqtexsvg.py which is only used by the LaTeX-plugin, so there should be no side effects or regressions.
100% agree with Sebastian and I would be very worry for the quality of Ubuntu if this patch is not include and that will be impossible to told to my friend to install Hardy (I'm a scientist as most of my friend) and after to explain them how to correct a problem in one of the package for the only reason that the maintener didn't change a stupid file. Hardy is supposed to be a LTS and this kind os stupid mistake is a huge backward for my point of view!
If you don't want to change the file incriminate, delete the function it's better to not have something apparent than something which is not working!
Hardy is out and still there are no change in inkscape... This problem is not a securtiy one so that means only one thing: Inkscape will be break for one more year for Latex inclusion in a LTS version.
I must admit that I do not understand it. Someone must explain me the problem to apply Sebastian patch? As he told no side effect can appeard so why this correction has bot been applied?
Anyway thanks for ubuntu it was fun to use it.
I've applied this to trunk - revision 18416, so it should be in for 0.47 unless it breaks stuff. Half the above patch had already been applied, so I've attached the patch I applied.
It's a shame that this patch came after after the March 11 release of 0.46 - the hard deadline for ubuntu was pretty early (although I noticed they later busted a gut to get Abiword 2.6 in post deadline!). There are some other great features and fixes that didn't quite make the cut either for similar reasons. There has been some discussion of six monthly releases loosely tied to the Ubuntu and Fedora release schedules, so hopefully that means it won't be 12 months.
There is some mention of being able to fix this by upgrading pstoedit - eg: Kenshiro said this;
> The problem should be resolved by installing pstoedit version 3.45... It worked like a charm for me!
Is that a viable fix for Hardy? If so it could be added to the known issues section of the readme on the Inkscape wiki, mentioning the workaround?
Milestoned for 0.47
I think this can be included as SRU.
quoting https:/
(1) Patch is safe because it only affects eqtexsvg.py. This file is only used by the LaTeX-plugin, which does not work anyway without the patch. So it can't be broken any further.
(2) Inkscape is defintely not critical infrastructure.
The line that begins
os.system('cd ' + base_dir + ' ; pstoedit
isn't going to work on Windows, since Windows doesn't use semicolons like that (it uses && instead).
I've attached a patch, but it's untested (as I haven't got pstoedit installed).
As suggested in my last post, the current SVN version of eqtexsvg.py is broken on Windows. Here's a revised patch, which adds double quotes around the file names (since they may contain spaces or other awkward characters). I've also rewritten the argument to the os.system() call, as it was hard to read, and adding all those quotes just made it worse.
Tested on Windows XP (now that I've installed pstoedit). Someone should test it on Linux.
There were still places where filenames were not quoted, which can cause problems on Windows. Here's an updated version of eqtexsvg.py with more quoting. (I'm attaching the whole file to facilitate testing. Patch to follow.)
Here's the patch.
Applied updated patch to SVN trunk - revision 18440
Does the fix committed status for ubuntu mean the earlier version of this patch was committed to ubuntu's source? Or something else?
If someone had somehow committed this to the ubuntu source tree, then it needs to be updated as per the most recent patch.
@sas: Your patch works fine for me on Ubuntu Hardy.
Sebastian: Just to clarify: Do you mean the latest patch at comment 37 (ie:sas' last comment)?
This is what you should have if you use SVN trunk 18440 or newer.
Confirming that sas' latest patch does work on 0.46 Ubuntu. Didn't build from scratch, but simply copied the .py file into /usr/share/
Will talk to Bryce about milestoning this for 0.46.1
I use Hardy, with Inkscape 0.46. I have the eqtexsvg.py file (version that was posted by sas on 2008-04-27) in the folder ~/.inkscape/
Traceback (most recent call last):
File "/home/
import inkex, os, tempfile, sys, xml.dom.minidom
ImportError: No module named inkex
What is this module called inkex? Can I find it somewhere and can there be a reason why I don't seem to have it?
Help much appreciated :)
inkex.py is supplied with Inkscape, in the extensions directory (on Ubuntu I think that's /usr/share/
To copy the file into /usr/share/
Here's a patch to apply easily to Linux versions of Inkscape. Stay tuned for the details for applying it next.
Try that patch again...If this works, hopefully we can follow what Kees Cook did on bug 195052, only this should apply all changes since the version in 0.46, where his only does some...
=====
cd /usr/share/
curl -s '{patch url}' | sudo patch -p0
=====
Have to reboot into Linux after this to test.
OK the url of that patch is http://
=====
cd /usr/share/
curl -s 'http://
=====
Time to try.
For all the mathematicians out there in ubuntu land... The procedure in my last post works on an ubuntu live CD setup. Here's the steps;
* Installed Inkscape and all the recommended and suggested packages for that, then installed kile to satisfy the latex requirements (this in turn installed texlive, which may have been enough). Couldn't find miktex as recommended in the script itself, but I think any tex environment will do.
* Go to Applications -> Accessories -> Terminal
* cd /usr/share/
* curl -s 'http://
* complained that curl is not installed
* sudo apt-get install curl (installs curl)
* (up arrow x 2) curl -s 'http://
* complained that patch is not installed
* sudo apt-get install patch (installs patch)
* (up arrow x 2) curl -s 'http://
* reports "patching file eqtexsvg.py"
* run Inkscape and test Effects -> Render -> LaTeX formula... (Apply using defaults)
* Looks great.
Thank you, sas and Rygle, for your help!
Putting the eqtexsvg.py file into /usr/share/
Thanks all for this patch! It works perfectly on my Hardy install.
Sadly, the patch did not fix my problem so I did some more digging on my own. Mike Wimmer suggested that my system is missing software. I dug around in the /tmp directory and it looks like everything works correctly up to the part where pstoedit generates the svg. After toying around with pstoedit for a bit, I got it to tell me:
Unsupported output format plot-svg
Apparently Hardy's pstoedit was compiled without svg support! After getting some dev packages (libplot-dev, librsvg2-dev) and recompiling from source (http://
This should fix Kenshiro's problem as well.
Now, I just wish the UI had a bigger box to type in. Oh, and that you could edit already-rendered equations like textext could.
Cuchaz: I don't know how your Hardy system doesn't work, as I was using the release version of the 8.04 live CD. No need to compile anything or use any dev packages. Just simply follow the instructions in my last post here.
The only think I did do was to update the repositories in synaptic.
I think there must be something else going on for you.
I was following this bug over at Bug #195052 for a while (https:/
Thanks for your time.
I updated the patch with the target hardy-proposed and included the new, longer patch. Packages will be on https:/
I'm not sure if this placed right here, but on my debian machine textext (?) fails to notice, that pstoedit was compiled without plot-svg support and tries to use it which leads to the "Unsupported output format plot-svg"-error.
Removing pstoedit from the CONVERTERS-line (924 in textext.py) induces textext to ignore the pstoedit/plot-svg method and to use the pstoedit/
I can't believe that this is still an issue in Ubuntu Intrepid.
Why is "Inkscape (Ubuntu)" marked as "Fix Committed"? That's a bit irritating. What has been committed there? Well, definitely not Rygle's patch, because you still have to apply it to enable the Latex extension.
Sorry to disappoint you, but current practice is to mark bug "fix released" when patch is committed to trunk. This is simply to reduce the time some developer would need to waste marking "fix committed" bugs as "fix released". This has been discussed, this probably will not change.
If you really need this patch you can compile inkscape yourself, this isn't that hard at all, just follow http://
OK, only "fix released" means that the package includes the patch.
BTW you do not need to compile inkscape package yourself to apply Rygle's patch. Only a pythen scrpt has been changed.
No way. It's still broken? Unbelievable!
@mahfiaz if I want to compile every soft on my linux I don't think I will use a distribution like ubuntu but something a little more like gentoo...
We are speaking about a bug which has been corrected for inkscape two version ago...
Just press it on, there is reason enough to release 0.46.1 version, also ubuntu guys are known to be of hacker type, they can simply replace the python script according to xylo.
Milestoned for 0.46.1, I think it would get in as soon as a release warden take a look at it.
Thanks!
Thanks for the patch. I'm a Debian testing user, and I installed Inkscape 0.46-3 from unstable, just to notice that the bug is still present in that version :(
Had to apply the patch once again...
Well thanks anyway !
Looks like this was fixed in 0.46-4
I'm using: inkscape_
I've tryied to apply the patch, but I get this message:
curl -s 'thttp:
patching file eqtexsvg.py
Reversed (or previously applied) patch detected! Assume -R? [n] y
Hunk #3 FAILED at 95.
Hunk #4 FAILED at 108.
2 out of 4 hunks FAILED -- saving rejects to file eqtexsvg.py.rej
Or (after reinstalling inkscape), I apply the patch answering no to the question, I get:
Reversed (or previously applied) patch detected! Assume -R? [n] n
Apply anyway? [n] y
Hunk #1 FAILED at 49.
Hunk #2 FAILED at 88.
Hunk #3 FAILED at 98.
Hunk #4 FAILED at 113.
4 out of 4 hunks FAILED -- saving rejects to file eqtexsvg.py.rej
The patch does not work in either case.
The error is this:
Traceback (most recent call last):
File "/usr/share/
e.affect()
File "/usr/share/
self.effect()
File "/usr/share/
svg_open(self, svg_file)
File "/usr/share/
doc = inkex.etree.
File "lxml.etree.pyx", line 2583, in lxml.etree.parse (src/lxml/
File "parser.pxi", line 1465, in lxml.etree.
File "parser.pxi", line 1494, in lxml.etree.
File "parser.pxi", line 1394, in lxml.etree.
File "parser.pxi", line 968, in lxml.etree.
File "parser.pxi", line 542, in lxml.etree.
File "parser.pxi", line 628, in lxml.etree.
File "parser.pxi", line 566, in lxml.etree.
IOError: Error reading file '/tmp/inkscape-
Three years on, this is still an issue in Ubunty Jaunty.
In Karmic I could not find the latex plugin anymore. Has it been removed from the inkscape package?
For those who have still problem of getting the latex plugin to work, there is an alternative latex plugin available on http://
It allows you also to edit the formulas anytime after creation, because it also stores the tex code beside the formula in the svg file.
Additional information: This would not be seen in earlier versions of inkscape, which do not include this effects menu.
My installation of Inkscape did have an effects menu, but under the effects->render menu, there was not an entry for latex-formulae. Which prerequisite packages might I need for this? I have installed tex-base and a number the latex packages already.
|
https://bugs.launchpad.net/inkscape/+bug/55273
|
CC-MAIN-2018-39
|
refinedweb
| 3,614
| 75.61
|
Blue Remembered Hills - Character Notes
Blue Remembered Hills
Teacher recommended
Peter
Peter is the bully of the gang, very proud of his position as “Number Two” (though later “Number Three” after John beats him in a fight) after Wallace Wilson. Most of what he says is said to boast to the other boys. He is one of the strongest physically but is not too bright as Willie tricks him into believing that dirty apples are dropped all over Germany. He is the most violent of the gang, as he is the most enthusiastic about killing the squirrel and cutting off his tail. He regularly bullies Donald Raymond and is annoyed when John stands up to him. Peter is jealous of the others when it looks like they might be better at him in any way.
When it looks like Raymond may be going to beat him in a bet, he cheats to ensure that he wins which annoys John, who is fair. This leads to a fight between them that he loses. It is Peter that is the most keen to pretend that he was miles away from the barn where Donald is finally killed.
John
John is the fairest of the gang. At first he ranks under Peter but after fighting with him he becomes “Number Two”. He backs up Raymond when Peter bullies him, and when the children are hiding from the escaped Prisoner of War he is looked up to as the person who is going to look after them. However, he sometimes gets into little arguments with Audrey as she doesn’t like his taking the lead.
He gets into an argument with Peter because Peter has been bullying Raymond and cheating him. John is also cynical and isn’t fooled when the other boys begin to pretend or suggest unlikely comments.
Willie
Willie is light-hearted and easy-going. He is often playing, and loves to pretend to be a Spitfire. He isn’t very strong physically but is intelligent enough to trick Peter into not eating his apple. He is easily overpowered by Peter but is able to stand up to him, for example when Peter starts threatening Willie and he is able to say “Oh leave I alone will ya!”, which makes Peter back down.
Willie is often the one who makes the best suggestions, and often manages to make the others laugh, for example the mimicked Italian voice. He is the first to realise what is happening with Donald in the barn, and is the first to try and open the door and encourage Donald out.
Raymond
Raymond is the lowest member of the main gang. He is the gentlest of the gang and also he stutters, and he is often teased because of this. He is quiet and a pacifist; he is very upset that the others have killed the squirrel and might be going to cut off its tail. He is pleased that John is standing up for him when Peter cheats him when on their bet.
Raymond is pleased to be accepted by the boys as a friend, even if it does mean that he is teased occasionally and is sometimes bullied by Peter. He is probably the one that feels the most sorry for Donald at the end of the play and doesn’t, as the others do, claim that he had seen nothing and had been miles away.
Donald
Donald is not really accepted by the other boys as a friend. He is bullied and ridiculed by the boys and is abused and beaten up by his mother. He is desperate to be liked by the others and is happy to play House with the girls in order to be accepted by the others. However, when the girls turn on him and start tormenting him horribly he turns scared.
He tries to flatter Peter when he is in the barn but Peter still doesn’t accept him, scowling and shouting at Donald. He is regularly being threatened by his mother and by Peter, and he desperately wants his father to return (he has been taken by the Japanese). Donald is a pyromaniac and spends a lot of his time in the barn, lighting the hay, trying to start a fire. When he finally manages to get one started, it spirals out of control and when the other children lock him in, it ends in his death.
Angela
Angela is the prettier and more popular of the two girls. When the children have to hide in the hollow Angela is the most scared of the children but she enjoys getting pampered and looked after by the boys. She isn’t so interested in the boys’ personal arguments and is generally nice, but she and Audrey gang up on Donald when they are playing House in the barn.
Audrey
Audrey is overshadowed by Angela’s prettiness. She is a bit of a tomboy and very independent; she doesn’t mind getting dirty or muddy. She feels a bit annoyed that she is not given the same protective treatment as Angela when they are hiding in the hollow: John is very comforting towards Angela when she wails, but no-one seems to listen to Audrey. This gives her a bit of a chip on her shoulder, and she often has a go at the boys, accusing them of being afraid and saying “Wallace Wilson would go” accusingly to John.
|
https://getrevising.co.uk/revision-cards/blue-remembered-hills-character-notes
|
CC-MAIN-2017-22
|
refinedweb
| 909
| 67.18
|
Opened 10 years ago
Closed 10 years ago
Last modified 10 years ago
#3664 closed (fixed)
UnicodeDecodeError in contrib/syndication/feeds.py
Description
I'm using contrib.syndication for making feeds for Flickr photos and Ma.gnolia links that both have tags which have funky characters (tags like 'pärnu' and 'työ'). Django dies with UnicodeDecodeError when trying to make a feed that has url with funky characters.
The error message is:
UnicodeDecodeError at /syndicate/tag/pärnu/ 'ascii' codec can't decode byte 0xc3 in position 24: ordinal not in range(128) ... Exception Location: /usr/lib/python2.4/site-packages/Django-0.95-py2.4.egg/django/contrib/syndication/feeds.py in add_domain, line 9
add_domain function is very simple, and the problem seems to be with line that is:
url = u'' % (domain, url)
I tested this and found that when decoding the url with latin1 (iso-8859-1) like:
url = u'' % (domain, url.decode('latin1'))
but I'm not very confident of this being a good fix for this.
Attachments (1)
Change History (13)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
I wrote a workaround for myself for this. Details are at
It would have been better to write a good patch to resolve the problem and not it's causes, but I'm still not really sure how this should be fixed "right".
comment:3 Changed 10 years ago by
This is a documentation bug, rather than a code bug.
Anything you pass up as a link, including things returned from item_link() in syndication classes and get_absolute_urls() on models, must already be in the character set specified in RFC 1738 (the URL spec). So you must already have done the necessary conversion from non-ASCII characters to ASCII and called urllib.quote() if necessary. In the above example, you are passing non-ASCII characters to something expecting content for a URL, so it is failing.
We cannot perform the conversion to utf-8 and/or url quoting, because, for example, the standard IRI -> URI conversion process is that you convert first and then quote(), so we don't want to accidently do it twice (and there are lots of other places where get_absolute_url() needs to already be returning the correctly quoted string).
I will update the documentation.
comment:4 Changed 10 years ago by
Changed 10 years ago by
wording fix
comment:5 Changed 10 years ago by
comment:6 Changed 10 years ago by
comment:7 Changed 10 years ago by
I can't see how this is fixed now. Still makes errors for me, I have quoted everything correctly but feeds.py still seems to get in trouble because of the request URL containing urlencoded unicode.
Why is it even
url = u'' % (domain, url)
and not
url = '' % (domain, url)
if the urls shouldnt be unicode??
comment:8 Changed 10 years ago by
It sounds like you haven't fully URL and IRI encoded your "url" fragment. Please ask support questions on the mailing list (django-users), though, rather than in Trac.
comment:9 Changed 10 years ago by
I still have this error, I think the ticket should be reopened.
From what I can tell the error has nothing to do with fully encoding your url fragments and so on. The problem seems to be that the feed object gets a somehow not URL-quoted feed_url where it says
def __init__(self, slug, feed_url):
when I do a print feed_url it does not show me a URL which is "ASCII and URL-quoted". So the part after
# 'url' must already be ASCII and URL-quoted, so no need for encoding
throws an error. Maybe no one ever discovered the bug because you don't have to do with foreign-language sites!?
comment:10 Changed 10 years ago by
Please read the Unicode URI/IRI documentation carefully; if you have Unicode inside URLs, you are responsible for ensuring that you call the proper function to escape it before handing it off to anything else. If you have further questions, please follow Malcolm's suggestion and ask them on the django-users mailing list.
comment:11 Changed 10 years ago by
That would mean I can't use the feeds as described in the docs!?
The request URL has encoded and quoted Unicode, so what can I do when it is passed wrong to the feed object which throws an error?
All my other URLs are completely correct.
comment:12 Changed 10 years ago by
We have asked a number of times in the comments to please ask questions on the django-users list. You can post an example of how your code is generating the URL and what the problem is. The lack of examples you have provided makes it impossible to debug anything and Trac is not a good place to have support and debugging conversations. Certainly the earlier examples in this ticket were cases of bad user code, rather than a bug in Django, and yours may well be similar.
Post to django-users. Give an example of what the URL string is and how you are generating it. Then you will get help with fixing it.
This looks to be another unicode issue that we're going to look into after 0.96 is released.
|
https://code.djangoproject.com/ticket/3664
|
CC-MAIN-2017-04
|
refinedweb
| 887
| 69.11
|
Run pending scripts (services) in a
child_processand talk to them (and kill those bastards)
import Forrest from 'forrest'; // run a simple command that exits by itself Forrest.run('ls').then(output => console.log(output)); // run a service and do soemthing when ready var service = Forrest.run(, { service: './my-server.js', expect: /listening on 3000/g, }); // do something when the service is ready service.then(() => console.log('MyServer is ready!')); // do something else if the service fails service.catch(err) => console.log('Error', err));
[stdout|stderr]
some services send logs to stderr, strange but true!
listen to stderr and fail the promise if anything is sent here.
gracefully stop the service, sends in a
SIGTERM and return a Promise.
hard kill
Write here your ES6 source files.
Write here your ES6 unit tests.
This is the target folder for ES5 transpiled files.
(ES6) (ES5) /src/foo.js -> /lib/foo.js
This is the target folder for the test coverage report.
There are a couple of NPM scripts which make your developer life easier:
It transpiles
/src into ES5 compatible files in
/lib.
It transpiles and monitor
/src files for new changes.
It checks your code for any possible problem or style errors accordingly to
.eslintrc.
It removes all the generated files in
/lib and
/coverage.
It runs the tests and produces a test coverage report in
/coverage.
|
https://www.npmjs.com/package/forrest
|
CC-MAIN-2017-17
|
refinedweb
| 226
| 68.47
|
The
@staticmethod decorator is nothing new. In fact, it was added in version 2.2. However, it's not till now in 2012 that I have genuinely fallen in love with it.
First a quick recap to remind you how
@staticmethod works.
class Printer(object): def __init__(self, text): self.text = text @staticmethod def newlines(s): return s.replace('\n','\r') def printer(self): return self.newlines(self.text) p = Printer('\n\r') assert p.printer() == '\r\r'
So, it's a function that has nothing to do with the instance but still belongs to the class. It belongs to the class from an structural point of view of the observer. Like, clearly the
newlines function is related to the
Printer class. The alternative is:
def newlines(s): return s.replace('\n','\r') class Printer(object): def __init__(self, text): self.text = text def printer(self): return newlines(self.text) p = Printer('\n\r') assert p.printer() == '\r\r'
It's the exact same thing and one could argue that the function has nothing to do with the
Printer class. But ask yourself (by looking at your code); how many times do you have classes with methods on them that take
self as a parameter but never actually use it?
So, now for the trump card that makes it worth the effort of making it a
staticmethod: object orientation. How would you do this neatly without OO?
class UNIXPrinter(Printer): @staticmethod def newlines(s): return s.replace('\n\r', '\n') p = UNIXPrinter('\n\r') assert p.printer() == '\n'
Can you see it? It's ideal for little functions that should be domesticated by the class but have nothing to do with the instance (e.g.
self). I used to think it looked like it's making a pure looking thing like something more complex that it needs to be. But now, I think it looks great!
Follow @peterbe on Twitter
Consider using classmethod instead. Guido has indicated that staticmethod is the result of a misunderstanding and he'd take it back if he could.
Certainly by passing the conversion function?
def newlines(s):
return s.replace('\n', '\r')
class Printer(object):
def __init__(self, text, to_newline=newlines):
self.text = text
self.to_newline = to_newline
def printer(self):
return self.to_newline(self.text)
def _to_unix_newline(s):
return s.replace('\r\n', '\n')
def UNIXPrinter(text):
return Printer(text, _to_unix_newline)
Yes, the fact that you can call a static method through an instance in Python means you get "virtual static" methods. I'm not sure that's an especially compelling use for them. Consider the case where UNIXPrinter is-a Printer, but provides a definition of the printer method that doesn't call the newlines static method. Unless UNIXPrinter provides the unneeded newlines staticmethod anyway, a caller rationally expecting newlines to perform '\r\n' -> '\n' conversion will be awfully surprised when it does not.
That is not to say such an approach never makes sense, but you've increased the maintenance burden on all of your child classes in support of one specific implementation of the printer method. That's a violation of the open/closed principal.
A good use of @classmethod/@staticmethod is when someone is naturally supplied a type (instead of an instance), and they need to perform useful operations on the type. This is common with plugins: you may have a list / dict of registered plugin types, and each type has a static/class method "add_options" that adds command-line options to an argparse / optparse instance, so the plugin can manipulate the command-line. They're necessary in this example because:
1) It's (typically) silly to instantiate an instance of the Plugin just to do command-line parsing. The Plugin shouldn't be instantiated until there is useful work to do.
2) The Plugin class must necessarily provide the entire interface between the main application and the plugin. If it doesn't exist on the Plugin class, the main application cannot call it.
and don't forget @classmethod that will pass you the class it's invoked on as first argument (usually named 'cls')
In addition to the philosophical satisfaction from having a related "bag of functions", static methods are better than module-level functions because subclasses can override them.
if there is a static method inside a class. Then say I have another method within the same class which happens to be a classmethod. Now if I am calling the static method from my classmethod why do I have to qualify the static method with a cls.<staticmethod name> syntax?
let me give an example for clarity: say I have a class A:
class A:
@staticmethod
def s_method():
print("this is static method message")
@classmethod
def c_method(cls):
cls.s_method()
print("this is class method message")
my question is why do I have to qualify the s_method() with a "cls" prefix, eventhough I am calling it from the same class?
Do you refer to the line `cls.s_method()`? Are you asking why you can't write `s_method()`?
|
https://www.peterbe.com/plog/newfound-love-of-staticmethod
|
CC-MAIN-2019-09
|
refinedweb
| 839
| 65.83
|
NAME
atan, atanf, atanl − arc tangent function
SYNOPSIS
#include <math.h>
double
atan(double x);
float atanf(float x);
long double atanl( long double x);
Link with −l.
ERRORS
No errors occur.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO
C99, POSIX.1-2001, POSIX.1-2008.
The variant returning double also conforms to SVr4, 4.3BSD, C89.
SEE ALSO
acos(3), asin(3), atan2(3), carg(3), catan(3), cos(3), sin(3), tan(3)
COLOPHON
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at−pages/.
|
https://man.cx/atan(3)
|
CC-MAIN-2017-47
|
refinedweb
| 120
| 77.13
|
As I told Jesse on IRC, the patch isn't going in. I'm not including OS-specific code into s6, even with a compile-time option. The main reason for it is that it changes the API: the choice to spawn the service in a new namespace or not should be made at run time, so it would introduce a new file in the service directory that would only be valid under Linux, and the file would need to be supported by s6-rc and friends even on other systems, etc. This is exactly the kind of complexity created by OS divergences that plagues the Unix world and that I very much want to avoid. This change itself looks quite simple, but it would be a precedent and the slope is extremely slippery.
Advertising
Though as Jesse explained, this requires some sort of exit/signal proxing, which isn't the case here. Here the direct child of s6-supervise remains the daemon itself - in its own pid ns - which is much better.
It would unarguably be more elegant, yes, but since there's a way to do without it, it's only about elegance, not feasability - and I really think the cost for elegance is too high. execline's 'trap' binary can adequately perform the needed proxying at low resource cost. If more various namespace feature requests come in, at some point I will look for a way to integrate some namespace functions into skalibs, with proper compile-time guards and stubs, and then reconsider; but as long as there are ways to achieve the desired outcome with external tools, it's not a priority. -- Laurent
|
https://www.mail-archive.com/skaware@list.skarnet.org/msg01009.html
|
CC-MAIN-2017-39
|
refinedweb
| 277
| 61.6
|
Agenda
See also: IRC log
<oeddie> scribe: Gregory_Rosmaita
<oeddie> ScribeNick: oeddie
<Steven> changed your nick Gregory?
i'm on my laptop
on my main machine i'm oedipus, on my laptop, oeddie, and sometimes on my linux box, i'm gregor_samsa
<Steven> Scribe: Steven
:-)
<oeddie> RM: Roland Merrick, chair, work for IBM
<oeddie> JG: Jeff Gerald, observer
<oeddie> SP: work for CWI/W3C, co-chair of this group and staff contact for many others
<oeddie> NVB: observer, member of Forms WG
<scribe> Scribenick: oeddie
RM: bunch of docs to reveiw to
ensure ready for CR -- spend as long as it takes to get them to
final stage
... old actions in agenda -- go through and clean out and migrate to trackbot those that need further consideration
... this afternoon: script: feature, what it means to refer to a feature
SP: good session at TPAC yesterday on this topic; TBL proposed standard way to record in namespace document which javascript implements this namespace so wouldn't need script line in doc source
RM: XHTML 1.2 discussion - what
makes the cut into that
... tomorrow: close off Forms issues ; moving forward on XHTMLMime document, how to get XHTML to work in UAs not optimized for XHTML, then get back to XHTML2 - what to do with it
RM: Shane did a very good job upsating them all
SM: start with CURIEs
<Steven>
CURIEs latest draft (18 october 2008) -
<Roland_>
SM: this draft, along with Access
and Role Modules, say in prose they are CR - they are not, but
used CSS to mark that;
... received number of comments over past year from variety of sources
RM: LC in May -- received some comments, extended period for TAG; received formal comments from TAG, replied, have not formally stated they are satisfiedwith what we did --- TAG meeting now, hopefully on their agenda
SM: couple of issues in CURIE
annotaed with 4 @ signs:
... namespace specifications and jeremy's request for prefix; my opinion is that it is not needed or necessary
SP: not prefix for CURIEs, but use XMLNS to define; if lang uses diff prefix mechanism, ok to use on that start with x m l, but good thing to warn off people from using this
SM: question for group - questions or concerns about what SP just read
quote from 18 October draft: "When CURIES are used in an XML-based host language, and that host language supports XML Namespaces, prefix values MUST be able to be defined using the 'xmlns:' syntax specified in [XMLNAMES]. Such host languages MAY also provide additional prefix mapping definition mechanisms.
@@@@The XML Namespaces specification states that prefix names are not permitted to begin with the characters 'xml' (see Leading XML). betw
When CURIES are used in a non-XML host language, the host language MUST provide a mechanism for defining the mapping from the prefix to an IRI.
A host language MAY interpret a reference value that is not preceded by a prefix and a colon as being a member of a host-language defined set of reserved values. Such reserved values MUST translate into an IRI, just as with any other CURIE.
A host language MAY declare a default prefix value, or MAY provide a mechanism for defining a default prefix value. This default prefix value MAY be different than the language's default namespace. In such a host language, when the prefix is omitted from a CURIE, the default prefix value MUST be used. Conversely, if such a language does not define a default prefix value mechanism and does not define a set of reserved values, CURIEs MUST NOT be used without a lea
RM: sounds reasonable to me
SP: me too, gets message over
GJR: plus 1
SM: next is also in this section "Syntax"
SP: reads from draft
SM: only added because continuing source of confusion for intelligent people
SP: should not be used as default namespace for curies, because default namespace...
SM: good addition
... should note be in separate paragraph
SP: separate, and change SHOULD NOT to MUST NOT
SM: in which case it is not a note
GJR: fine by me too - plus 1
SM: "MUST NOT used as
default..."
... those who like low-level specs as flexible as possible may not like this
RM: SHOULD NOT suggests may do anyway
GJR: SHOULD NOT allows if "compelling reason"
RM: allowed an opt-out to do what can no longer do
SM: stopped people using xml default namespace as CURIE default prefix, which i like
GJR: agree that should be -- er, must be -- a MUST NOT
SM: no compelling usecase for
using default ns - all using xmlns for is to associate prefix
string with IRI prefix
... good suggestion Steven
SP: no mechanism for defining default prefix - comes from language
SM: allow default prefix, so possible to have 2 prefixes; could have default prefix implied by CURIE with NO colon at all, and a defaullt prefix when there is a colon
SP: default prefix and empty prefix
SM: yes; reserved values conceopt
SP: declare by fiat in XML, but don't give opportunity to change
SM: host language may do it
SP: but this spec doesn't provide mechanisms for doing it; no mechanism for empty or default
SM: proposal from RDFa proposes changing prefix on fly -- good idea in their case
SP: if they are proposing method of changing default on fly, what is argument against using xmlns mechanism for doing that
RM: trying to stop confusion and work arounds
SM: namespace of xml elements and attributes has nothing to do with prefix or default prefix associated with CURIEs in document; coupling them can't be a good idea
RM: prefixed and unprifixed -- unprefixed means?
SM: i have to fix that - should
use term "reserved value"
... would also make consistent with RDFa
RM: that was at back of my
mind
... are we satisfied with our new statement?
SM: plus 1
GJR: plus 1
RM: disagree?
(no disagreement logged)
SP: only 2 diff bits of text
SM: yes, have to discuss exit criteria
SP: basically have implementations in RDFa, so just cound number of implementations that RDFa used to get through CR; find 2 interoperable implementations and cite them
RM: just want to make sure that everything talk about in CURIEs is used in RDFa
SM: believe every feature in here is used by RDFa
RM: still want 2 interop implementations of each feature; have confidence can achieve because of RDFa implementations
SM: test suite for RDFa could be adapted for CURIE test suite
<Steven>
SP: can't we point to that and say this is the test suite for CURIEs as well
SM: building block - doesn't don
anything on its own
... Roland, datatype we defined not used in RDFa --
SP: if click on details of example 64
SM: think safe to leave - hybrid
datatype - so i don't think is a big deal/obstacle
... where might run into trouble is old issue of value space versus lexical space will continue to come up and problem may be that people who raise issue speak diff laguage than we do; talking past each other; we don't understand problem and they can't understand why we do't understand
... removed "lexical space" and "value space" from document at advice of TAG
SP: ok
... then don't have to test, although test 64 of RDFa test harness does do that
RM: RDfa doesn't use safe_CURIEs
SP: rel is allowed to be list of CURIEs -- is there a test that has a list of CURIEs?
SM: yes, there is - not sure test of rel that combines use of reserved values and things that use colons
SP: write down what we need to test in what combinations and compare to RDFa harness
RM: could test against RDFa - if
don't support feature we've added have to go elsewhere or
create own
... Role?
SM: list of CURIEs not safe_CURIEs
SP: rel?
SM: doesn't use safe_CURIEs
SP: why in test harness
SM: probably to fail
SP: test 65 title is wrong - has
a resource with safe_CURIE, but no rel with safe_CURIE
... all safe_CURIEs they test are blank mode -- not testing real safe_CURIEs in realistic manner
... test: regular CURIE, safe_CURIE, reserved word CURIE and empty prefixes
... and a bnode CURIE
... combinations of those, too
... 4 cases: bnode, reserved word CURIE, empty prefix CURIE, regurla CURE and safe_CURIE - 16 combinations in all
RM: Role will cover some of those
SM: hang on - would be sufficient to test CURIEs in context of RDFa; if can get away without fostering bi-directional ??? to get CURIEs through process
RM: 16 items in test pile
<Steven> The 16 combinations are the 4 basic curies, then safe versions of them, and then lists of all those
SM: proposal: why don't i take
the action to work with the RDFa test people to expand
collection to cover these 16 variations
... they are open to that; plus all RDFa implementors are watching and can double-check what we are doing
SP: problem with list of safe_CURIEs
RM: still need matrix of things
need to be tested; hope don't need to write own test suite -
identify holes in existing test harnesses
... also need interoperable implemenations of all of those tests
SP: interoperability likely to be done -- at least 6 implementations of RDFa
<ShaneM> the datatypes not covered in RDFa are URIorSafeCURIEs, SafeCURIE, and SafeCURIEs
SP: 9 implementations of RDFa in test reqport
SM: is operator in there?
SP: yes
... check to see if any failed CURIE test
... none of the CURIE test fail
SM: none fail in reality - have to validate some by eye, but almost everyone passes
SP: even XML Literal test?
SM: SPARQL engine evaluating it incorrectly
SP: is the test suite right only for Michael who created it?
SM: whole harness is open source and available -- emailing michael right now that i'm going to id more tests
SP: mention test 65 has wrong title, please
RM: what else is needed?
SP: 2 interoperable implementations of all features
RESOVLED: Send CURIE to CR
RESOLUTION: Send CURIE to CR
SP: talking to people yesterday after giving last talk of day essentially about RDFa - some people suggested if could do RDFa stylesheet as well - external RDFa resource
SM: what would that mean?
SP: decorating DOM with RDFa
attributes, but not having RDFa in documents itself; use
selectors to add the necessary properties
... RDFa has to decorate every single node in XOXO, but if have RDFa sheet (this has class of XX or id of YY) should do this
SM: XOXO?
<unl> XOXO
<unl>
SP: microformat -- all sorts of
implied meaning that get inherited because at top of tree
marked as XOXO document; RDFa doesn't propagate down
notes
... like to represent family tree as nested lists (UL or OL) -- if want to add relationships to that in RDFa, on every node, ahve to say this node is child of parent element; adding loads of identical info at each LI -- would like to say if find UL class="expanded-tree" propogate RDFa
SM: started work on something
like that on different problem; defining mechanisms for
following nodes
... use proflie mechanism and defining rules to help RDFa processors to evaluate in profile -- brining extra intro into content of document
SP: RDFa in external RDF document
<ShaneM>
SM: Mark, Manu and some others working on this; interesting idea that dovetails nicely
GJR: working on similar concept for reuse of ABBR, ACRONYM, and DFN etc.
SP: Harry Halpen been sending posts to RDFa list - came up to me and said that we ought to go th HTML5 meeting this afternoon and talke about getting RDFa into HTML5
GJR: @profile stilll under negotiation
SP: rel=profile works beeter tan
@profile
... loss of rev is a bigger problem; but if in DOM, shouldn't care at all; remaining question is: if attribute is in markup, does it end up in DOM; i think it has to, otherwise break dojo
GJR: yes would break dojo
SP: as long as stays in DOM,
don't have leg to stand on
... HTML5 has bad validation story, but we are in position to say "this is valid" because provide schemas (validate this markup against this schema); can validate, browsers don't choke, as long as stuff ends up in DOM, we've all won
... if browsers not going to implement stuff, then will be done with javascript
... we provide language, HTML5 UA ignores what doesn'tknow about, but still gets into DOM, so can be extracted
... they say we chuck stuff away and don't do anything with it, but since appears in DOM, can be used
SM: ensuring Semantic Web is
first class citizen is only tangible deliverable W3C has this
decade
... behooves W3C to ensure that HTML5 accept RDFa
SP: don't think RDFa under threat if HTML5 ignores
<Steven>
SM: no outstanding issues; no @ signs in document; diff marks from LC working draft interesting thing to look at -- no changes, save schema is normative now
<Steven>
SM: very thin
specificiation
... Roland raised issue of conformance testing for CR
GJR: role examples in ARIA test suites could be leveraged
codetalks.org
<Steven>
<Steven>
SP: none use CURIEs
GJR: willing to work with someone on that
SP: WAI-ARIA allows CURIE values for role
GJR: have to check -- may have
been dropped
... will check at break
SP: Overkill for role
... only need 2 tests: prefix CURIE and reserved word CURIE
SM: Lot of reserved words in
vocab document
... what does it mean to implement surpport for role?
SP: spec for Role does not require any behavior
<Steven> s/sepc/spec/
GJR: role=search mockup-plans
SP: don't have to show implemented, but using it
SM: by using it, incorporating modules we have defined into markup languages and ensuring that such hybrids validate - might need for CR, but no behavioral action defined; CURIEs just tokens
SP: who is using role other than ARIA and us?
GJR: mobile very interested in role
SM: langugage that allows it - like class in HTML -- has no semantics to it; CSS and other microformats use class to achieve ends, but HTML has no req performance or usage for class
RM: depends upon how define -- implementation through languages;
SP: never know what is going to arise in CR
SM: problems: can't be dependent on WAI because WAI dependent upon us; think could demonstrate markup language that uses model -- XHTML2, XHTML11+RDFa -- could get extermal group to do this (Mark's hybrid language - XH)
SP: role doesn't have any semantics attached to it by spec itself; that sort of spec always causes trouble at transition
RM: show been combined into 2 languages (interoperability) and 2 implementations as well
SP: exit criteria will be: languages adopting this module and furthermore 2 interop implementations of one or more languages
RM: WAI-ARIA and mobile profile
SP: sounds good
RESOLUTION: Request CR for Role Module
SM: 3 exit criteria langs, 2 implementations, 2 interop implementations that support one or more of those languages; all comments received during LC have been disposed of
SP: SVG Tiny does have "role' attribute
<Steven>
SP: SVGT 1.2 has adopted role as well
GJR: check that 'role' in both are identical -- list of strings
SP: intention the same
RM: include implementation of role in SVG?
SP: reference 'role' informatively - don't want a dependency on us, but in future version will be able to reference Role Module normatively once through rec
BREAK FOR 30 MINUTES
rrsagend, draft minutes
<ShaneM> CR Ready draft of the CURIE spec is available at
<ShaneM> CR Ready draft of the XHTML Role Attribute Module spec is available at
SM: updated CURIE criteria
section
... 15 January 2008 as target for leaving CR
... takes into account holiday season
RN: disposition of comments?
SM: was going to do during break but GJR and i got to exchanging music pointers
mute Executive_3
unmute Executive3
SM: could markup CURIE syntax document with RDFa - could be its own test suite
<Roland_>
diff from editor's draft:
diff from previous wd:
SM: ok, Role needs diff info
RM: change made to earlier item "MUST NOT" - what is allowed to change after LC -- this was in rsponse to LC comment, so should be ok?
SP: LC comment from TAG
SM: follow up from Henry Thompson
RM: have we responded to all LC commentors?
SM: happy with responses
RM: XHTML Role Module still has prefix forms for reference
SM: good catch -- will take care of right now
RM: request from XForms group
<Steven> ()
<Roland_> Minutes of joint session to discuss XForms comments on XML Events 2
<Steven> (For later consideration)
Roland: XML events 2 was intended to be incorporated in XForms
<Roland_>
SP: happy with CURIE for the record
GJR: me too
unmute Executive_3
RM: prefix issue needs clean up before CURIE is ready
SM: came up with something uploadig now
SP: with RDFa we say host
language decides about prefix
... role defines cases for appropriate use
... SVG has "role"
RM: host language should define
SP: Host language must define
RM: unless specified by host language, default is...
SM: concern about RM's suggestion is have situation not sure how would affect implementatoin; value in flexibility, but what does mean for implemenattiono of ARIA -- if encounters CURIE relying on default prefix how does it now to interpret
RM: interesting questions -- if SVG changes role with different prefixed, ARIA wouldn't pick them up
SM: one way to address this is instead of talking about default prefix can say "default prefix kicks in when colon (foo:)" there is also collection of reserved values defined in vocabl
RM: reserved, default, and prefix
SP: section doesn't mention this
SM: trying to simplify, but maybe
have to be explained in more complicated detail
... could take language from RDFa in strong normative way - reserve values over here, host language responsible to define prefix and context -- reference or resereved value (which is in vocab doc), but SVG will want own reserved values
SP: as long as MLs don't keep asking us to add to vocab namespace
RM: but SVG should follow same
rules -- SVG Tiny 1,2 does't do this
... no predifined values for 'role' in SVG
RM: if want role attribute, who takes responsibility for expansion of role -- should be role processor
SP: not necessarily one processer
RM: one processor for that
SM: CURIE spec says specifically this is NOT in DOM
RM: creation of URI
SM: no power over DOM implementations -- trying to make as useful as possible today
RM: behavior from role attribute easier, but would be quite nice if people didn't have to understand about CURIEs and could just let processor handle URIs
SM: great role, but have to get there
<ShaneM> The CURIE spec says: Note that if CURIEs are to be used in the context of scripting, accessing a CURIE via standard mechanisms such as the XML DOM will return the lexical form,
<ShaneM> not its value as IRI. In order to develop portable applications that evaluate CURIEs, a script author must transform CURIEs into their value as IRI
<ShaneM> before evaluating them (e.g., dereferencing the resulting IRI or comparing two CURIEs).
from PF to SVG on 'role': * reference
* t
[suggested hyperlinking: link the first instance of WAI-ARIA in this paragraph to an entry in the references section which itself leads to the 'latest version' link on the Technical Reports page. Whatever you work out with the Comm Team is fine, here.]
RM: script to translate from CURIE to URI
SP: work on this from UbiWeb
RM: here is bit of javascript
that does it -- reuse it
... making life easier for people
GJR: crossover with RWAB XG library plans?
RM: not required
SM: won't be easy to write script
RM: why need pre-canned one
SM: requesting put in before CR
RM: no, but for REC -- help with adoption
GJR: ubiquity-xforms model for ubiquity-curies
RM: develop supporting materials to make adaptation easier
<ShaneM> ACTION: Shane to craft an example script to generically transform a string to a CURIE using the XML DOM for inclusion or reference from the CURIE spec. [recorded in]
<trackbot> Created ACTION-13 - Craft an example script to generically transform a string to a CURIE using the XML DOM for inclusion or reference from the CURIE spec. [on Shane McCarron - due 2008-10-30].
<Steven> And Ben Adida's RDFa implementaiton in javascript
RM: important thing: make script
that can do this and then make universally available;
informative section would be nice, but would be better if
people could witness the execution of the script
... status of Role?
SM: didn't agree on what we want to say? restrict role to specific set of values? restrict to specific prefix?
RM: default or specify
options;
... need to change paragraph to -- prefix version and default, and one cannot change default
SP: don't feel strongly enough either way
SM: for flexibility, should allow host languages to define their own default prefix if they so choose, BUT should require that our collection of reserved values are always respected
RM: value not found in default prefix, fall back to default default prefix?
<ShaneM> Three forms... foo:bar, :bar, and bar
SM: don't define
RM: default order - look in language namespace the role vocab?
<ShaneM> role="bar" that's from our list.
SM: not what i want; ways of extending but out of scope: for this spec, collection of reserved values; 3 forms of CURIE syntax - we know what foo:bar is, suggesting if goo:bar is fine, but prefix for that is host language definable; XHTML Role doesn't care what default prefix is, but do have reserved values,
RM: is that now in CURIE spec
SM: says is up to language using it to determine what it means
<ShaneM> role=":bar" is not defined. host languages can define it if they like.
RM: haven't discussed particular
feature in this spec so far -- haven't said what happens if
define foo:bar
... either invalid or it default values in vocab -- those are 2 options
SM: true
RM: if defaulted to vocab, one less error which doesn't need to be tested
SM: prefix used in that case is blah - host languge may override that
RM: yes
SM: will make changes so can revisit in context
<Roland_>
<ShaneM>
SM: status: XHTML Acess new draft 18 October 2008 - should be diff marks from LC draft; implemented all changes requested and agreed to by WG; included revamping of introduction; suggest we reveiw those changes
SP: don't have to worry about being adopted in more than one language; enough to say 2 interoperable imnplementations
SM: ok
RM: review period depends upon implementation -- need 2 -- do we have any idea where to find?
SP: Shane talking of doing one himself
SM: not hard to implement in
browser context
... open issue in agenda - what happens with regard to intrisic events and access element? if going to incorporate into other languages, how, and what does having event in mnodule?
... intrinsic event module - brings in all intrinsic events from HTML
<ShaneM>
SM: One could imagine could be useful
SP: Access Element not exposed to
markup - onKeyPress means something very specific in HTML - if
keyPressEvent passes through or bubbles
... can put events on there, but won't happen unless include script that says "put on this element" so is moot
events in HTML 4.01 can be found at:
SM: not sure how we get to point
where people other than me implementing access
... is mozilla working on it
SM: tricky bit is require UAs provide mechanism for overriding access keys
GJR: Opera satisfies that requirement, but not for access elemet
SM: implementing in plug-in tough to make portable
SP: use script?
SM: use developement framework like google gears, user can explicitly add access, but no support at javascript level and no portable plugin
SP: we do say MUST on override
RM: average feature - if that aspect important do implementation for it
GJR: something along ubiquity-xforms model for access?
SM: consistency/permanance is a SHOULD
RM: could stick into session cookie
SM: one could....
SP: write page that allows user to specify binding
SM: specific web site or collection of pages -- script has to be embedded in page
SP: cookie doesn't have to be site speficic
RM: demonstrate that effect can be achived in User Agent
SM: in terms of schedule, Access going to take longer to get out of CR than the rest
RM: 6 months?
SM: accepting input through 15
March 2009
... can always extend, but can never contract
SP: date by which may reach exit criteria
proposed resolution: send Access to CR; exit criteria 15 March 2009 ????
RM: in intro final paragraph remove :the needs of the Accessiblity community."
SM: ok
... Logical successor to accesskey - then put in pointers
RM: good bits in conformance -
chameleon version again, but all in native side - if not this,
and not this, have to infer it is the other?
... possible to write things if not in XHTML NS do this; if not in XML NS can still use
SM: huh?
RM: 1) if not in XHTML NS do this, if are in XHTML NS, do this
SM: don't think 2 pieces: document, not host language conformance
RM: does this statement say you
couldn't incorporate XHTML Access into HTML5?
... makes PF request for Access Module in HTML5 more difficult
SM: HTML5 in same namespace
... if write document not in XHTML NS and doesn not have appropriate NS, have to use our NS
RM: if document not in XHTML NS
the following is required; HTML5 in XHTML NS, so don't need to
worry; implied by spec that use in HTML5 covered
... if don't have that, must do this; if are in XHTML NS don't worry -- nothing needed to do because Access Module in same NS
... trying to find positive take on spec, which i think this does
... new attribute @order
SP: added one new attribute, @order in response to specific LC request
RM: "keys or other events" -- keys in quotes because ?
GJR: explained in 3.1.2
... key is an abstraction and historical
RM: SHOULD persist over sessions?
SM: yes
... a lot of changes in reaction to comments from SVG group
RM: MediaDesc link
<Steven>
SM: it is to M12n
SP: derrived from CSS?
SM: no
1. The value is a comma-separated list of entries. For example, media="screen, 3d-glasses, print and resolution > 90dpi" is mapped to:
"screen"
"3d-glasses"
"print and resolution > 90dpi"
2. Each entry is truncated just before the first character that isn't a US ASCII letter [a-zA-Z] (ISO 10646 hex 41-5a, 61-7a), digit [0-9] (hex 30-39), or hyphen-minus (hex 2d). In the example, this gives:
"screen"
"3d-glasses"
"print"
3. A case-insensitive match is then made with the set of media types defined above. User agents may ignore entries that don't match. In the example we are left with screen and print.
Note. Style sheets may include media-dependent variations within them (e.g., the CSS @media construct). In such cases it may be appropriate to use "media =all".
RM: seems ok to me
SP: activate attribute change?
SM: yes and no
... DougS suggested "true" and "false" which struck me as better
RM looks ok to me -- other comments
SM: has same issues with default
prefix that role did
... host language could override, but preserve both values
RM: only CURIEs used for roles,
so brings in role
... agree on criteria?
SP: yes
RM: so need implementation
... do need to set up test plan and test cases
SM: do we have to do that before CR?
SP: no, but have to have them before exit CR
RM: create disposition of comments doc?
SM: yes
RESOLUTION: Access Module should move to CR with exit criteria 15 March 2009
<Steven> #html-wg
BREAK FOR LUNCH: HTML5 Joint Session in 70 minutes time; discussions in #html-wg
<ShaneM> CR ready draft of XHTML Access is at
by the way, i pointed out to the HTML5 people during the PF joint meeting is that we need a text/html+ARIA profile for validation, NOT an HTML+ARIA
3 Resolutions logged so far: 1) send CURIE to CR; 2) send Role to CR; 3) send Access to CR with exit date of 15 March 2009
<Steeeven> hi
<Steeeven> now in #html-wg
<gregor_samsa> thanks - i forgot
<Steven> hi there
<ShaneM> once again I get to say "low tech crap"
<inserted> ScribeNick: oedipus
READJOURN
SM: plan to adjourn in approx 1 hour
SP: that's the plan
RM: couple of related items -
script @implements features
... put on SCRIPT element in XML Events 2; step back to 1.2 script module?
SP: pleased with positive reaction yesterday of using this method of implementing XML technologies;
RM: have idea -- feature can
refer to namespace
... compatible with that idea
<Roland_>
RM: Script Module in XML Events 2
- talking about @implements attribute
... only additional attribute; rest inherited
<ShaneM> call it "Script implements Attribute Module"
RM: optional attribute; provides implementation of feature defined by that URI; script should be loaded if UA doesn't have implementation of the URI referenced
SM: simple extension to scripting module for 1.1
RM: also suggest that say URI
SM: applicability outside of XHTML 1.2?
RM: XHTML5
GJR: Expert Handlers for Specialized Markup Languages
unmute me
SM: @implements is a fine thing; can provide a script that would help support it
RM: access module that way -- script implements XHTML Access
SM: need script to implement implements
<Steven> <script src="access.js" implements="xhtml:access"/>
SM: script that implements
implements and when gets initiallized (puts something into
onLoadEventQueue) and disables all other events
... is there a converse to @implements - @required?
GJR: real life example is MathML - local script or URI
RM: what are @implements features?
<Roland_>;%20charset=utf-8#soap-binding
RM: define features in
specification - URI that can be abbreviated
... define features of this spec on same basis
... what features ought to be -- off end of xmlns -- could be bad, could be good; feature/featureName
SP: using QNames
RM: no
... SOAP happens to be used for QNames,but important thing is URI
SM: action to produce features document
<ShaneM> ACTION: ShaneM to produce a features document that describes the various feature names for the XHTML collection. [recorded in]
<trackbot> Sorry, couldn't find user - ShaneM
<ShaneM> ACTION: Shane to produce a features document that describes the various feature names for the XHTML collection. [recorded in]
<trackbot> Created ACTION-14 - Produce a features document that describes the various feature names for the XHTML collection. [on Shane McCarron - due 2008-10-30].
RM: how to name URIs - should go
in our namespace - shane will have look to see if synonomous
with modules; did write XML Events as 3 module document:
Handler Module, Listener Module, Script Module
... what features for each module
SM: gives rise to a CURIE
issue
... issue: CURIE allows host lang to define a default prefix or collection of reserve values that will map into a CURIE
... if have multiple attributes using CURIEs, do we hvave multiple collectoins ?
RM: no
SM: for default prefixing
definitely no, but then CURIE processor can't follow its nose
and learn them
... do we need reserved values for a feature list?
RM: could reference URI
SM: reasonable check but doesn't
get to definitive source
... hang features off namespace -- agree we should, but given CURIE architecture today, have to put in vocab document
... what need in RDF of vocab is direct mapping and convention - these functions are associated with these attributes on these elements;
RM: when i'm processing script @implments, not concerned about issue you raised?
SM: are if =rdfa -- how do i deference that
SP: xhtml:rdfa
RM: better off without prefix
SM: can't say xhtml:rdfa - no xhtml: prefix
SP: CURIE - would have had to define
SM: in normal course of using
html, never define an html prefix
... can say have to define prefix or just use full URIs
<Steven> implements=":rdfa"
SP: empty prefix? what's the prefix in xhtml when just use colon
SM: just colon is the xhtml vocab
SP: ok
<Roland_>
SP: RDFa a feature of our namespace
<Roland_>
SM: not suggesting put features
in vocab doc unless good way to scope them;
... need to invent way to scope them, but for most other people dealing with @implements, using URIs -- we can always use URIs
RM: trivial compared to script
SM: if use URI like
<ShaneM>
RM: looks fine to me
SM: URIs have to de-reference to something
SP: someone can turn that into a prefix and use CURIEs to write that down
<Steven> that's what I meant
RM: point to specification so that machine aware of rdfa-syntax
SM: is ok if use CURIE or
URI
... will write this up as part of previous action item
... script module - what form
RM: like access element
<ShaneM> ACTION: Shane to create a Script implements Attribute Module [recorded in]
<trackbot> Created ACTION-15 - Create a Script implements Attribute Module [on Shane McCarron - due 2008-10-30].
RM: if something untoward happens
and don't get to do 2, then we have it as a module
... what does script implement? the feature
SM: doesn't just have to provide
mechanism
... we want to say: tie to HasFeature of DOM somehow
... how?
RM: don't know, but agree -- register DOM somewhere, somehow
SM: need to get UA devs to use well defined URIs or won't work
RM: should be encouraging into the future; becomes part of our spec - HasFeature should be reflected in DOM
SM: DOM3 - who developing?
RM: WebAps
SM: DOM Level 3 Core has the HasFeature method as part of implementation interface, so good idea to specify in our documents implementations return this value when DOM called
<Roland_> var isImplemented = document.implementation.hasFeature("feature", "version");
RM: has tied to HasFeature
registered in DOM that this feature is supported
... so SM going to create script implements attribute module
SM: can get infrastructure in place then you can edit it to your heart's content
RM: ok
... anything else need to discuss on @implements?
SP: not going to say what URIs are for us; defined elsewhere
<ShaneM> From DOM3: To avoid possible conflicts, as a convention, names referring to features defined outside the DOM specification should be made unique.
SP: meaningful accessible needs to be changed because Philippe thinks attack on HTML5
RM: XHTML (tm) 1.2 and leave at that
GJR: likes subtitle, but understands politics
SM: "Semantically Rich HTML That Works In Your Browser Today"
SP: doc most close to 1.0 Strict is 1.1
RM: small increments adding features to make superset of XHTML 1.1 Basic
<Steven> rdfa, inputmode, target, access, role
RM: adding inputmode, return of target, access and role
SM: didn't update Abstract
RM: first thing people read
GJR: also @lang -- can't find a single AT dev that triggers off xml:lang
RM: incremental update, why 1.1 - superset of 1.1 Basic - motivation behind XHTML 1.2 plus other features developed in interim that make XHTML stronger and more robust
<ShaneM> This specification builds upon XHTML 1.1 and XHTML Basic 1.1,
<ShaneM> incorporating new technologies to improve accessibility and
<ShaneM> integration with the semantic web.
RM: making a true super-set of
features in Basic 1.1
... should have inputmode in 1.2
... Basic 1.1 should run in 1.2 processor; hook to rationale as to why did in first place; had wrinkle in Basic 1.1 and this is way of ironing out wrinkle
SM: reintroduces some features left out of XHTML 1.1 -- @target and @lang
GJR: but more tightly describing as author proposed/suggestion user disposes
<ShaneM> <p>This specification builds upon XHTML 1.1 and XHTML Basic 1.1,
<ShaneM> helping to create an environment that is a superset of XHTML Basic
<ShaneM> 1.1. It also reintroduces widely requested features that were
<ShaneM> not included in XHTML 1.1. Finally, it
<ShaneM> incorporating new technologies to improve accessibility and
<ShaneM> integration with the semantic web.</p>
RM: differences summarized as:
reintroduction of @target and @lang, addition of @inputmode
@implements on src and question of absorbing all the ARIA
attributes
... status of ARIA attributes?
... candidate items: @target, @lang, @inputmode, @implements (from Recs or borrowed from past); Access and ARIA in second category
... in process, a candidate - if don't get into 1.2 wait until get into XHTML2
<Steven> Al is here
SM: put ARIA stuff in or not?
RM: should have "Candidate, but not Yet" section for ARIA and other modules that haven't progressed far enough yet to be normative
WAI-ARIA
RM: Access Module very much in the Candidate status --
SM: don't know value of 1.2 without Access and ARIA
RM: time-scales all theoretical - access implementations? aria?
SM: fair point; role and RDFa no problem
GJR: once ARIA spec frozen in CR for all intents and purposes -- already have 2 implementations
SP: start 9 CET
SM: update 1.2 and send out notice
<ShaneM> just remembered something....
whazzat?
<ShaneM> XHTML 1.2 - should we be disabling @accesskey if we include the access element?
<ShaneM> or include both as a transition?
the only way to kill it off is to disable it
<Steven> Roland: No
<Steven> ... we still want it to be a superset of the predecessors
<ShaneM> yeah, that's what I was thinking. Okay.
i have asked HTML WG several times for support for the access module as well as accesskey, but to no avail
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/ie/y/ Succeeded: s/Gerald Edger/Jeff Gerald/ Succeeded: s/bnote/bnode/ Succeeded: s/ShowShow/XOXO/G Succeeded: s/sepc/spec/ FAILED: s/sepc/spec/ Succeeded: s/TOPIC: XHTML Access Module/TOPIC: Role Module, continued/ Succeeded: s/access did/role did/ Succeeded: s/Access Module should move to CR/Access Module should move to CR with exit criteria 15 March 2009/ Succeeded: s/Intos/Intros/ Succeeded: s/xml/xhtml/ Succeeded: s/should be reflected in dom/HasFeature should be reflected in DOM/ Succeeded: i/READJOURN/ScribeNick: oedipus Found Scribe: Gregory_Rosmaita Found ScribeNick: oeddie Found Scribe: Steven Inferring ScribeNick: Steven Found ScribeNick: oeddie Found ScribeNick: oedipus Scribes: Gregory_Rosmaita, Steven ScribeNicks: oeddie, Steven, oedipus WARNING: Replacing list of attendees. Old list: Cannes ShaneM Gregory_Rosmaita New list: Gregory_Rosmaita Executive_3 ShaneM WARNING: Replacing list of attendees. Old list: Gregory_Rosmaita Executive_3 ShaneM New list: Executive_3 ShaneM oedipus Default Present: Executive_3, ShaneM, oedipus Present: Roland Uli Klaus Alan_Hauser Jeff_Gerald Nick_vd_Bleeker Steven Shane Gregory Masataka Yakura (remote) WARNING: Replacing previous Regrets list. (Old list: Tina) Use 'Regrets+ ... ' if you meant to add people without replacing the list, such as: <dbooth> Regrets+ Alessio Regrets: Alessio Tina MarkB Agenda: Got date from IRC log name: 23 Oct 2008 Guessing minutes URL: People with action items: shane shanem WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
|
http://www.w3.org/2008/10/23-xhtml-minutes.html
|
CC-MAIN-2015-32
|
refinedweb
| 6,554
| 52.23
|
In this chapter, we provide a broad overview of the different data types available in the R environment. This material is introductory in nature, and this chapter ensures that important information on implementing algorithms is available to you. There are roughly five parts in this chapter:
Working with variables in the R environment: This section gives you a broad overview of interacting with the R shell, creating variables, deleting variables, saving variables, and loading variables
Discrete data types: This section gives you an overview of the principle data types used to represent discrete data
Continuous data types: This section gives you an overview of the principle data types used to represent continuous data
Introduction to vectors: This section gives you an introduction to vectors and manipulating vectors in R
Special data types: This section gives you a list of other data types that do not fit in the other categories or have other meanings
The R environment is an interactive shell. Commands are entered using the keyboard, and the environment should feel familiar to anyone used to MATLAB or the Python interactive interpreter. To assign a value to a variable, you can usually use the = symbol in the same way as these other interpreters. The difference with R, however, is that there are other ways to assign a variable, and their behavior depends on the context.
Another way to assign a value to a variable is to use the
<- symbols (sometimes called operators). At first glance, it seems odd to have different ways to assign a value, but we will see that variables can be saved in different environments. The same name may be used in different environments, and the name can be ambiguous. We will adopt the use of the
<- operator in this text because it is the most common operator, and it is also the least likely to cause confusion in different contexts.
The R environment manages memory and variable names dynamically. To create a new variable, simply assign a value to it, as follows:
> a <- 6 > a [1] 6
A variable has a scope, and the meaning of a variable name can vary depending on the context. For example, if you refer to a variable within a function (think subroutine) or after attaching a dataset, then there may be multiple variables in the workspace with the same name. The R environment maintains a search path to determine which variable to use, and we will discuss these details as they arise.
The
<- operator for the assignment will work in any context while the
= operator only works for complete expressions. Another option is to use the
<<- operator. The advantage of the
<<- operator is that it instructs the R environment to search parent environments to see whether the variable already exists. In some contexts, within a function for example, the
<- operator will create a new variable; however, the
<<- operator will make use of an existing variable outside of the function if it is found.
Another way to assign variables is to use the
-> and
->> operators. These operators are similar to those given previously. The only difference is that they reverse the direction of assignment, as follows:
> 14.5 -> a > 1/12.0 ->> b > a [1] 14.5 > b [1] 0.08333333
The R environment keeps track of variables as well as allocates and manages memory as it is requested. One command to list the currently defined variables is the
ls command. A variable can be deleted using the
rm command. In the following example, the
a and
b variables have been changed, and the
a variable is deleted:
> a <- 17.5 > b <- 99/4 > ls() [1] "a" "b" > objects() [1] "a" "b" > rm(a) > ls() [1] "b"
If you wish to delete all of the variables in the workspace, the list option in the
rm command can be combined with the
ls command, as follows:
> ls() [1] "b" > rm(list=ls()) > ls() character(0)
A wide variety of other options are available. For example, there are directory options to show and set the current directory, as follows:
> getwd() [1] "/home/black" > setwd("/tmp") > getwd() [1] "/tmp" > dir() [1] "antActivity.R" "betterS3.R" [3] "chiSquaredArea.R" "firstS3.R" [5] "math100.csv" "opsTesting.R" [7] "probabilityExampleOne.png" "s3.R" [9] "s4Example.R"
Another important task is to save and load a workspace. The
save and
save.image commands can be used to save the current workspace. The
save command allows you to save a particular variable, and the
save.image command allows you to save the entire workspace. The usage of these commands is as follows:
> save(a,file="a.RData") > save.image("wholeworkspace.Rdata")
These commands have a variety of options. For example, the
ascii option is a commonly used option to ensure that the data file is in a (nearly) human-readable form. The
help command can be used to get more details and see more of the options that are available. In the following example, the variable
a is saved in a file,
a.RData, and the file is saved in a human-readable format:
> save(a,file="a.RData",ascii=TRUE) > save.image(" wholeworkspace.RData",ascii=TRUE) > help(save)
As an alternative to the
help command, the
? operator can also be used to get the help page for a given command. An additional command is the
help.search command that is used to search the help files for a given string. The
?? operator is also available to perform a search for a given string.
The information in a file can be read back into the workspace using the
load command:
> load("a.RData") > ls() [1] "a" > a [1] 19
Another question that arises with respect to a variable is how it is stored. The two commands to determine this are
mode and
storage.mode. You should try to use these commands for each of the data types described in the following subsections. Basically, these commands can make it easier to determine whether a variable is a numeric value or another basic data type.
The previous commands provide options for saving the values of the variables within a workspace. They do not save the commands that you have entered. These commands are referred to as the history within the R workspace, and you can save your history using the
savehistory command. The history can be displayed using the
history command, and the
loadhistory command can be used to replay the commands in a file.
The last command given here is the command to quit,
q(). Some people consider this to be the most important command because without it you would never be able to leave R. The rest of us are not sure why it is necessary.
One of the features of the R environment is the rich collection of data types that are available. Here, we briefly list some of the built-in data types that describe discrete data. The four data types discussed are the integer, logical, character, and factor data types. We also introduce the idea of a vector, which is the default data structure for any variable. A list of the commands discussed here is given in Table 2 and Table 3.
It should be noted that the default data type in R, for a number, is a double precision number. Strings can be interpreted in a variety of ways, usually as either a string or a factor. You should be careful to make sure that R is storing information in the format that you want, and it is important to double-check this important aspect of how data is tracked.
The first discrete data type examined is the integer type. Values are 32-bit integers. In most circumstances, a number must be explicitly cast as being an integer, as the default type in R is a double precision number. There are a variety of commands used to cast integers as well as allocate space for integers. The
integer command takes a number for an argument and will return a vector of integers whose length is given by the argument:
> bubba <- integer(12) > bubba [1] 0 0 0 0 0 0 0 0 0 0 0 0 > bubba[1] [1] 0 > bubba[2] [1] 0 > bubba[[4]] [1] 0 > b[4] <- 15 > b [1] 0 0 0 15 0 0 0 0 0 0 0 0
In the preceding example, a vector of twelve integers was defined. The default values are zero, and the individual entries in the vector are accessed using braces. The first entry in the vector has index
1, so in this example,
bubba[1] refers to the initial entry in the vector. Note that there are two ways to access an element in the vector: single versus double braces. For a vector, the two methods are nearly the same, but when we explore the use of lists as opposed to vectors, the meaning will change. In short, the double braces return objects of the same type as the elements within the vector, and the single braces return values of the same type as the variable itself. For example, using single braces on a list will return a list, while double braces may return a vector.
A number can be cast as an integer using the
as.integer command. A variable's type can be checked using the
typeof command. The
typeof command indicates how R stores the object and is different from the
class command, which is an attribute that you can change or query:
> as.integer(13.2) [1] 13 > thisNumber <- as.integer(8/3) > typeof(thisNumber) [1] "integer"
Note that a sequence of numbers can be automatically created using either the
: operator or the
seq command:
> 1:5 [1] 1 2 3 4 5 > myNum <- as.integer(1:5) > myNum[1] [1] 1 > myNum[3] [1] 3 > seq(4,11,by=2) [1] 4 6 8 10 > otherNums <- seq(4,11,by=2) > otherNums[3] [1] 8
A common task is to determine whether or not a variable is of a certain type. For integers, the
is.integer command is used to determine whether or not a variable has an integer type:
> a <- 1.2 > typeof(a) [1] "double" > is.integer(a) [1] FALSE > a <- as.integer(1.2) > typeof(a) [1] "integer" > is.integer(a) [1] TRUE
Logical data consists of variables that are either true or false. The words
TRUE and
FALSE are used to designate the two possible values of a logical variable. (The
TRUE value can also be abbreviated to
T, and the
FALSE value can be abbreviated to
F.) The basic commands associated with logical variables are similar to the commands for integers discussed in the previous subsection. The
logical command is used to allocate a vector of Boolean values. In the following example, a logical vector of length 10 is created. The default value is
FALSE, and the Boolean not operator is used to flip the values to evaluate to
TRUE:
> b <- logical(10) > b [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE > b[3] [1] FALSE > !b [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE > !b[5] [1] TRUE > typeof(b) [1] "logical" > mode(b) [1] "logical" > storage.mode(b) [1] "logical" > b[3] <- TRUE > b [1] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
To cast a value to a logical type, you can use the
as.logical command. Note that zero is mapped to a value of
FALSE and other numbers are mapped to a value of
TRUE:
> a <- -1:1 > a [1] -1 0 1 > as.logical(a) [1] TRUE FALSE TRUE
To determine whether or not a value has a logical type, you use the
is.logical command:
> b <- logical(4) > b [1] FALSE FALSE FALSE FALSE > is.logical(b) [1] TRUE
The standard operators for logical operations are available, and a list of some of the more common operations is given in Table 1. Note that there is a difference between operations such as
& and
&&. A single
& is used to perform an
and operation on each pairwise element of two vectors, while the double
&& returns a single logical result using only the first elements of the vectors:
> l1 <- c(TRUE,FALSE) > l2 <- c(TRUE,TRUE) > l1&l1 [1] TRUE FALSE > l1&&l1 [1] TRUE > l1|l2 [1] TRUE TRUE > l1||l2 [1] TRUE
Tip
You can download the example code files for all Packt books you have purchased from your account at. An additional source for the examples in this book can be found at. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.
The following table shows various logical operators and their description:
Table 1 â list of operators for logical variables
One common way to store information is to save data as characters or strings. Character data is defined using either single or double quotes:
> a <- "hello" > a [1] "hello" > b <- 'there' > b [1] "there" > typeof(a) [1] "character"
The
character command can be used to allocate a vector of character-valued strings, as follows:
> many <- character(3) > many [1] "" "" "" > many[2] <- "this is the second" > many[3] <- 'yo, third!' > many[1] <- "and the first" > many [1] "and the first" "this is the second" "yo, third!"
A value can be cast as a character using the
as.character command, as follows:
> a <- 3.0 > a [1] 3 > b <- as.character(a) > b [1] "3"
Finally, the
is.character command takes a single argument, and it returns a value of
TRUE if the argument is a string:
> a <- as.character(4.5) > a [1] "4.5" > is.character(a) [1] TRUE
Another common way to record data is to provide a discrete set of levels. For example, the results of an individual trial in an experiment may be denoted by a value of
a,
b, or
c. Ordinal data of this kind is referred to as a factor in R. The commands and ideas are roughly parallel to the data types described previously. There are some subtle differences with factors, though. Factors are used to designate different levels and can be considered ordered or unordered. There are a large number of options, and it is wise to consult the help pages for factors using the
(help(factor)) command. One thing to note, though, is that the
typeof command for a factor will return an integer.
Factors can be defined using the
factor command, as follows:
> lev <- factor(x=c("one","two","three","one")) > lev [1] one two three one Levels: one three two > levels(lev) [1] "one" "three" "two" > sort(lev) [1] one one two three Levels: one two three > lev <- factor(x=c("one","two","three","one"),levels=c("one","two","three")) > lev [1] one two three one Levels: one two three > levels(lev) [1] "one" "two" "three" > sort(lev) [1] one one two three Levels: one two three
The techniques used to cast a variable to a factor or test whether a variable is a factor are similar to the previous examples. A variable can be cast as a factor using the
as.factor command. Also, the
is.factor command can be used to determine whether or not a variable has a type of factor.
The data types for continuous data types are given here. The double and complex data types are given. A list of the commands discussed here is given in Table 2 and Table 3.
The default numeric data type in R is a double precision number. The commands are similar to those of the integer data type discussed previously. The
double command can be used to allocate a vector of double precision numbers, and the numbers within the vector are accessed using braces:
> d <- double(8) > d [1] 0 0 0 0 0 0 0 0 > typeof(d) [1] "double" > d[3] <- 17 > d [1] 0 0 17 0 0 0 0 0
The techniques used to cast a variable to a double precision number and test whether a variable is a double precision number are similar to the examples seen previously. A variable can be cast as a double precision number using the
as.double command. Also, to determine whether a variable is a double precision number, the
as.double command can be used.
Arithmetic for complex numbers is supported in R, and most math functions will react properly when given a complex number. You can append
i to the end of a number to force it to be the imaginary part of a complex number, as follows:
> 1i [1] 0+1i > 1i*1i [1] -1+0i > z <- 3+2i > z [1] 3+2i > z*z [1] 5+12i > Mod(z) [1] 3.605551 > Re(z) [1] 3 > Im(z) [1] 2 > Arg(z) [1] 0.5880026 > Conj(z) [1] 3-2i
The
complex command can also be used to define a vector of complex numbers. There are a number of options for the
complex command, so a quick check of the help page,
(help(complex)), is recommended:
> z <- complex(3) > z [1] 0+0i 0+0i 0+0i > typeof(z) [1] "complex" > z <- complex(real=c(1,2),imag=c(3,4)) > z [1] 1+3i 2+4i > Re(z) [1] 1 2
The techniques to cast a variable to a complex number and to test whether or not a variable is a complex number are similar to the methods seen previously. A variable can be cast as complex using the
as.complex command. Also, to test whether or not a variable is a complex number, the
as.complex command can be used.
There are two other common data types that occur that are important. We will discuss these two data types and provide a note about objects. The two data types are
NA and
NULL. These are brief comments, as these are recurring topics that we will revisit many times.
The first data type is a constant,
NA. This is a type used to indicate a missing value. It is a constant in R, and a variable can be tested using the
is.na command, as follows:
> n <- c(NA,2,3,NA,5) > n [1] NA 2 3 NA 5 > is.na(n) [1] TRUE FALSE FALSE TRUE FALSE > n[!is.na(n)] [1] 2 3 5
Another special type is the
NULL type. It has the same meaning as the
null keyword in the C language. It is not an actual type but is used to determine whether or not an object exists:
> a <- NULL > typeof(a) [1] "NULL"
Finally, we'll quickly explore the term
objects. The variables that we defined in all of the preceding examples are treated as objects within the R environment. When we start writing functions and creating classes, it will be important to realize that they are treated like variables. The names used to assign variables are just a shortcut for R to determine where an object is located.
For example, the
complex command is used to allocate a vector of complex values. The command is defined to be a set of instructions, and there is an object called
complex that points to those instructions:
> complex function (length.out = 0L, real = numeric(), imaginary = numeric(), modulus = 1, argument = 0) { if (missing(modulus) && missing(argument)) { .Internal(complex(length.out, real, imaginary)) } else { n <- max(length.out, length(argument), length(modulus)) rep_len(modulus, n) * exp((0+1i) * rep_len(argument, n)) } } <bytecode: 0x2489c80> <environment: namespace:base>
There is a difference between calling the
complex()function and referring to the set of instructions located at
complex.
Two common tasks are to determine whether a variable is of a given type and to cast a variable to different types. The commands to determine whether a variable is of a given type generally start with the
is prefix, and the commands to cast a variable to a different type generally start with the
as prefix. The list of commands to determine whether a variable is of a given type are given in the following table:
Table 2 â commands to determine whether a variable is of a particular type
The commands used to cast a variable to a different type are given in Table 3. These commands take a single argument and return a variable of the given type. For example, the
as.character command can be used to convert a number to a string.
The commands in the previous table are used to test what type a variable has. The following table provides the commands that are used to change a variable of one type to another type:
Table 3 â commands to cast a variable into a particular type
In this chapter, we examined some of the data types available in the R environment. These include discrete data types such as integers and factors. It also includes continuous data types such as real and complex data types. We also examined ways to test a variable to determine what type it is.
In the next chapter, we look at the data structures that can be used to keep track of data. This includes vectors and data types such as lists and data frames that can be constructed from vectors.
|
https://www.packtpub.com/product/r-object-oriented-programming/9781783986682
|
CC-MAIN-2020-40
|
refinedweb
| 3,543
| 61.36
|
You Don’t Need Redux, MobX, RxJS, Cerebral
For state management, try this simple pattern instead of a library
Don’t get me wrong. Those libraries are great. But I’m suggesting a different, unique approach.
I know what you’re thinking: Redux alternatives are a dime a dozen. But this isn’t yet another library. In fact, it’s not a library at all.
It’s just a simple pattern: Meiosis.
Why a Pattern?
Using a pattern instead of a library means that you have more freedom. You are not dependent on a library’s features, bugfixes, and release dates. You are not worried about backward compatibility, deprecation, upgrade migration paths, or project abandonment. You are never waiting for a missing feature.
No More Black Box
Sometimes when using a framework or library, you get that uneasy “black box” feeling where you stop knowing what’s happening with your code. You call a framework function, something magical happens in the black box, and out comes the result. If the result is unexpected, you have to start searching for answers, or (gasp) digging into the framework’s source code.
With Meiosis, you can easily follow everything that is happening with the code. Because there is no black box. You can fully and completely implement the pattern from scratch, so you know exactly what the code is doing. There is no mystery. You get that satisfying feeling of being in control, of understanding how all of the code works.
No Library Lock-In
Using a pattern means that you are not tying your state management code to a library-specific API. No more annotations, wrapping in proprietary objects, using a dozen different stream operators, or piling up plugins, middleware, and other dependencies.
Instead, Meiosis is based on first principles: use plain JavaScript objects and functions to manage state in your web application. No
import or
require of a state management library.
This also means that you can achieve the equivalent of such things as React Context and Render Props without having to use React or even needing to upgrade your version of React to use these types of features.
Less Boilerplate
No more action constants,
if/
else or
switch statements, providers, connectors, thunks, sagas, annotations, mappers, etc.
No special code needed for asynchronous actions!
Use with any virtual DOM library
It is very simple to use the Meiosis pattern with any virtual DOM library: React, Preact, Inferno, Mithril, Snabbdom, and so on. Other view libraries such as lit-html and hyperHTML work just as well.
All you need to know is the library’s function to render/re-render into a DOM node. For example, with React this is
ReactDOM.render. So really it’s one line of code that sets up the pattern with a view library.
So What is the Meiosis Pattern?
The Meiosis pattern is a reactive loop, illustrated below:
- We start with a model, which is a plain JavaScript object. This represents our application state.
- Then, we have a view, which is a function of the model that produces a virtual-DOM node suitable for our chosen view library to render. Above, you can see that with React, this would be
ReactDOM.render.
- When an event happens, such as the user clicking on a button, we call an
updatefunction to produce an updated model.
- The view is automatically re-rendered, by calling
viewand
ReactDOM.render.
What Do We Gain?
By using the Meiosis pattern, we reach some important goals in implementing a web application:
- We have a single root model. This is our application state, as a plain JavaScript object. This is our single source of truth.
- The view is a function of the model. The view can be fully determined from the model.
- We have a controlled, well-defined way to update the model.
- Updating the model automatically re-renders the view. Because we implement the pattern ourselves, we know exactly how this works.
Implementing the Meiosis Pattern
The Meiosis Pattern can be implemented in different ways. First, you decide how you want to implement the reactive loop, and second, you choose how to structure model updates.
For most of the examples, I implement the reactive loop with a simple, minimal stream library. I only use
map and
scan, and I only use them in one place. You do not have to use stream operators anywhere else in the application!
However, you can also implement a bare-bones
map and
scan from scratch with just a handful of lines of code, and that is sufficient to implement the Meiosis Pattern.
I explain all of this in the Meiosis Tutorial.
Time-Travel Tool: The Meiosis Tracer
Meiosis is not a library. However, Meiosis does include a time-travel debugging tool, called the Meiosis Tracer. If you are already using a time-travel tool, know that you don’t have to abandon that possibility when using Meiosis.
You can see an example of the tracer below. Notice how you can not only trace back and forth through the history of application states, you can even directly type in a model and see the resulting view.
This example also demonstrates that routing can be implemented with Meiosis such that the application state still fully determines the view, including the route in the location bar.
Yes, You Can Do That With Meiosis
Meiosis is a simple pattern. Beyond that, Meiosis is documentation and examples to show how to achieve useful features in web application development:
- Computed properties
- Reusable components as plain objects with functions
- Nesting component models within the single state object
- Routing
- Render Props equivalent
- React Context equivalent
- Using view library lifecycle methods
- Imperative widgets (Bootstrap, jQuery, …)
- Preventing re-renders of unchanged components
- and so on.
Try It Out
You can learn the Meiosis pattern with the Meiosis Tutorial. Once you are comfortable with the pattern, you can learn more techniques by visiting the Meiosis Wiki.
Give it a try. I hope you find it useful!
|
https://medium.com/@foxdonut00/you-dont-need-redux-mobx-rxjs-cerebral-6a735b150a02
|
CC-MAIN-2021-21
|
refinedweb
| 997
| 65.32
|
I'm still very new to c++ so this might be a stupid question. In the code below why is it that when i change the type of index to a signed char,
index > 25
#include <iostream>
using namespace std;
char lowercase [26] = {'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'};
int main() {
short index;
cout << "Enter a number 0 to 25: ";
cin >> index;
if (index > 25 || index < 0) {
cout << "That number is out of range." << endl;
return 0;
}
cout << "The lowercase letter for this number is " << lowercase[index] << "." << endl;
return 0;
}
Let's rephrase the problem just a bit:
char index; cin >> index;
It may be easier to see the problem. When you read input into a
char, you get the character code for the first character entered by the user. In a typical system, this is ASCII, and the code for digits is between 48 and 57.
So when you get input into
index when it is a signed character, you'll get a value that is >= 48.
|
https://codedump.io/share/f2mRghlCXMTT/1/c---data-types-evaluate-differently
|
CC-MAIN-2017-39
|
refinedweb
| 188
| 73.81
|
Compatibility
- 1.2.0 and master5.35.25.15.04.2
- 1.2.0 and masteriOSmacOS(Intel)macOS(ARM)LinuxtvOSwatchOS
Elo Rating System written in Swift for Swift Package Manager
This is an Elo rating system built using Swift. It has win estimation and rating calculations for an unlimited amount of players. The rating calculator can handle games for 1 vs 1, 2 vs 2, 3 vs 100 or etc. It will provide you with the chance of winning as a percentage for player A versus player B.
.package(url: "", from: "1.2.0"
In Package.swift find main target and add
"EloRatingSystem"
Add this to any file where you would like to use the rating system:
import EloRatingSystem
Build / Run, then enjoy
To update the package manually open terminal and run
swift package update
If you have issues you can run this in termial to reset your packages
rm -rf .build/ *.xcodeproj/ Package.resolved
EloRating().chanceOfWinning(forPlayer: 100.0, vs: 1.0) -> chance of winning as a percentage in decimal form
We have two players. Bob with a rating of 800 and Jake with a rating of 1500.
To find out the likely hood that Bob will beat Jake you would use this
let chanceOfBobBeatingJake = chanceOfWinning(forPlayer 800, vs ratingB: 1500)
Then chanceOfBobBeatingJake = 0.17472092062234879 or 17.5% chance Bob will beat Jake
The chance of Jake beating Bob would be
let chanceOfJakeBeatingBob = chanceOfWinning(forPlayer 1500, vs ratingB: 800)
So chanceOfJakeBeatingBob = 0.98252791166305542 or 98.25% chance Jake will beat Bob
EloRating().calculateWinLossRatings(_ players: [EloPlayer])
This will take an array of EloPlayers and will return back an array of EloPlayers with their updated rating. You will need to create an EloPlayer for each of the players that played. The EloPlayer is just for that game. You can use your own player models.
Important: EloPlayer has an optional property id and uuid to help you identify your user. The function will work if you do not assign the EloPlayer your user's id or uuid, but it will be harder to identify your user when the rating calculations are returned
The players will be sorted into a winning team vs losing team based on the game result you assign the player (This is how it can handle any amounts of players). Then it will calculate the rating for everyone based on the ratings each player had before the start of the game. So you would then update your player model with their new rating, which would be
EloPlayer.ratingAfter
let players = [ EloPlayer(gameResult: .won, ratingBefore: 900, ratingAfter: 0, ratingChange: 0, id: 112), EloPlayer(gameResult: .won, ratingBefore: 600, ratingAfter: 0, ratingChange: 0, id: 200), EloPlayer(gameResult: .lost, ratingBefore: 1500, ratingAfter: 0, ratingChange: 0, id: 6), EloPlayer(gameResult: .lost, ratingBefore: 1100, ratingAfter: 0, ratingChange: 0, id: 22), EloPlayer(gameResult: .lost, ratingBefore: 1100, ratingAfter: 0, ratingChange: 0, id: 56) \] let calculatedPlayers = EloRating().calculateWinLossRatings(players)
This would return these players
calculatedPlayers = [ EloPlayer(gameResult: .won, ratingBefore: 900, ratingAfter: 943.9107, ratingChange: 43.9107, id: 112), EloPlayer(gameResult: .won, ratingBefore: 600, ratingAfter: 647.1034, ratingChange: 47.1034, id: 200), EloPlayer(gameResult: .lost, ratingBefore: 1500, ratingAfter: 1468.2899, ratingChange: -31.7100, id: 6), EloPlayer(gameResult: .lost, ratingBefore: 1100, ratingAfter: 1070.3479, ratingChange: -29.6520, id: 22), EloPlayer(gameResult: .lost, ratingBefore: 1100, ratingAfter: 1070.3479, ratingChange: -29.6520, id: 56) ]
|
https://swiftpackageindex.com/BryanNorden/elo-rating-swift
|
CC-MAIN-2021-10
|
refinedweb
| 553
| 60.51
|
No Half-Life 2 on Steam? 374
Karl the Pagan writes "Following on the heels of a previous Steam-related story, Vivendi Universal may block Half-Life 2 distribution via Steam. Additional motions can be filed until November 18th, but since Sierra/VU have final QA approval on the HL2 gold is it possible they could delay the game until after the court decides on these motions?"
Re:Coming Soon (Score:5, Funny)
Release Schedule (Score:4, Funny)
.
.
.
Doom V
Duke Nukem Forever
Half Life 2
Re:Release Schedule (Score:3, Informative)
nope... (Score:5, Informative)
Also, they've already said they are releasing it on Steam regardless of this case.
read here for more:
article on bluesnews.com [bluesnews.com]
Re:nope... (Score:4, Insightful)
Re:nope... (Score:3, Interesting)
I'd really like to see Valve dump Vivendi and stick it out themselves. Online distribution IS possible, as steam has shown. Pox and all, it is possible.
Also, I'm not sure of Gabe Newell's motives of saying what he did, but back in the days of 2000 Broadband adoption was nowhere near what it is today (especially in the states). Maybe he a
Re:nope... (Score:3, Insightful)
So perhaps, just perhaps, it did go gold and it wasn't Gabe Newell's fault that it was six months late? Frankly I don't know, but I strongly suspect you don't either.
Re:nope... (Score:5, Informative)
It's not like HL2 is the only iron that Vivendi has in the fire. Dark Age of Camelot, World of Warcraft -- heck ALL the Blizzard games, the Empire Earth series, Tribes: Vengeance, and so on. There are literally dozens of titles, some of which have the potential to be bigger than HL. Sure, they don't want to lose the HL2 revenue, but it's hardly going to kill them if it happens.
"I don't think I am wrong when I say Sierra exisits because of HL1."
Sierra doesn't really exist anymore other than as a vague shadow of their former selves. Now they're simply a small vassal of Vivendi in the grand scheme of things. In fact, Vivendi closed down the former Sierra offices and killed Dynamix off a few months ago. All that's left of Sierra, really, is the name.
it's possible they might delay the release.. (Score:5, Funny)
heh
Ok what about (Score:2, Funny)
Release Date (Score:4, Funny)
Re:Release Date (Score:3)
Re:Release Date (Score:2, Funny)
Duke Nukem is happy (Score:5, Funny)
Re:Duke Nukem is happy (Score:5, Funny)
Even Half Life (the original) came out *after that* in 1998!
Sorry, but DNF is still king. Shake it, baby!
Re:Duke Nukem is happy (Score:4, Funny)
OTOH, I think HL2 is actually going to ship within the next month.
Re:Duke Nukem is happy (Score:5, Insightful)
hl2 however has been 'just around the corner' and 'almost finished' and 'in the stores by fall' for quite some time.
Re:Duke Nukem is happy (Score:2)
Re:Duke Nukem is happy (Score:4, Insightful)
the code theft was just bullshit reasoning, they didn't have the thing ready back then.
Re:Duke Nukem is happy (Score:4, Funny)
That still beats most
Great news (Score:4, Interesting)
Re:Great news (Score:3, Informative)
valve's been piss poor to deliver anything and lusting over the collecting the fees from the cybercafes.
they're pissing on their feet though, with the hl key system horribly sucking too(it's not really that uncommon that you lose your key to someone running some keygen, leading into some major suckery to get it back, in some cases people have bought the game st
Re:Great news (Score:5, Informative)
The serial code for Half-Life is 14 digits, meaning a total of 289.254.654.976 possible combinations.. giving that the game has sold something like 20 million copies, that would turn out to try at least 20.000 keys before hitting one successful.. and as far as i know, no key generators checked with the WON network, so you'd just have to try (and that takes at least 15 seconds)..
No, most keys that people experienced that already were in use, were because of a handful of different things:
1. sloppy caretaking of covers etc on local LANs
2. getting their computers exploited (there were several worms afaik that stole cdkeys)
3. people writing down serial keys in stores (many stores used to have such things on display)
4. employees at mentioned stores, also writing down and supplying keys to friends
5. etc etc etc
The keygens were useless.
And Steam is the best thing to happen to Valve since Counter-Strike.
Re:Great news (Score:3, Informative)
ATI bundle? (Score:5, Interesting)
Re:ATI bundle? (Score:5, Funny)
Shit! You mean some of those guys are still alive?!
Re:ATI bundle? (Score:2)
Re:ATI bundle? (Score:2)
Also valid to get a retail-box of HL2 instead. If you pay shipping fees and allow 4-6 weeks for delivery. I wonder how expensive that is going to be.
Incidentially the graphics card I got the voucher with boke down over the weekend (noisy fan).
Re:ATI bundle? (Score:2, Informative)
Also of interest the ATI voucher gives you the a bundle of the original half-life as well as the expansions made by valve.
If you activate with steam you can download the whole bundle and play it right away. And well pre-load HL2 and hope it gets unlocked some day.
Impatience and gamergeeks. (Score:4, Insightful)
How would Valve be harmed by giving in on this issue? How would the consumers be harmed?
IMHO, neither would, in any important way.
Re:Impatience and gamergeeks. (Score:5, Insightful)
Re:Impatience and gamergeeks. (Score:3, Insightful)
While the actual contract language (probably impenetrable to the layperson, anyway) wasn't in the linked article, the answer to your question is that Valve would be harmed by loss of income. According to the article, Valve renegotiated what turned out to be a bad contract with Sierra (bad because the game turned out to be a huge hit - like musicians signing a contrast for a big front-end payday but a tiny percentage o
Re:Impatience and gamergeeks. (Score:3, Interesting)
Consumers? No harm (mostly benefits, actually). Valve? All the difference in the world.
Steam is, if you haven't noticed, Valve's way of getting rid of publishers/distributors altogether. If they can release the game simply by p2p-ing it to the buyers there is no need for deals with publishers. And publishers take in _most_ of the money you plunk down besides the cash register in the 'brick and mortar' store. So, their
Re:Impatience and gamergeeks. (Score:3, Insightful)
However, in this case my perspective is that of a Mac gamer. Since the chances of Steam working with the Mac are virtually nil, the more incentive Valve has to steer everything through Steam, the less chance there is that HL2 will ever be available for the Mac.
Not like I ever expected that it would be, given the history with the original Half-Life.
Re:Impatience and gamergeeks. (Score:2)
it is an issue for VALVE, because they would like to take 100% of the profit made(like with cybercafes and stuff why this lawsuit is going on).
Re:Impatience and gamergeeks. (Score:2)
Use your same logic against the music industry. Valve would be artists and VUG would be the 'evil people that take your money like the RIAA'. So sure, when it comes to music RIAA is evil! But with games, the 'artist' should bend over backwards and take it in the ass.
Re:Impatience and gamergeeks. (Score:2)
Cutting out the middleman is always good for the manufacturer. Not so good for the traditional distribution models, tho.
Gotta say that I kinda like the fact that EB employs about thirty people here in town who would otherwise probably not be able to get jobs. So, in the final analysis, I'm going to have to side *against* Valve on this one.
Re:Impatience and gamergeeks. (Score:2)
Re:Impatience and gamergeeks. (Score:2)
That was for HalfLife if you read the story. Sierra have been funding Valve with several million $'s per year since 1999.
(It's amazing what you hear smoking on the loading bay at Sierra's HQ).
why Steam? (Score:4, Insightful)
I'd much rather have a nice CD/DVD in my hand with the install on then a little code (which I could lose) to let me spend hours downloading it.
I'm trying not to sound like a troll but I really see no sane reason to download HL2 through steam and not just buy the damn CD. Preloading makes sense (install it faster) but why not get a nice shiney CD?
Re:why Steam? (Score:5, Insightful)
Steam has given me absolutely ZERO problems for months. It hasn't crashed, locked up, anything.
I feel the same way about the typical Slashdot BSOD jokes. I run a 2 year-old Win2k install that hasn't needed any real maintenence. I haven't gotten a mystery reboot or BSOD *once*, yet all I hear whenever the discussion about Windows comes up is how X Slashdotter can't even get the thing to boot.
So, you're either all stupid as hell (likely), or really unlucky.
Re:why Steam? (Score:2)
It's not the bugs, it's the DRM (Score:5, Insightful)
If Sierra goes belly up next week, how long do you think the Steam master server is going to be around? Probably not long. How can you sell a game you don't play anymore if it's on Steam? You can't! You don't actually have anything to sell, you've just been paying for access to someone else's game.
Re:It's not the bugs, it's the DRM (Score:3, Insightful)
Probably about as long as the verification servers that check your CD-Key and allow you to play any Half Life based game online. Which means your tangible property becomes a shiny coaster.
Re:It's not the bugs, it's the DRM (Score:3, Insightful)
Why? Steam supports offline play, so there's no issue there. Can you go to any computer, merely log in, and suddenly have access to every Valve product you've ever bought when you buy
Re:why Steam? (Score:4, Funny)
Re:why Steam? (Score:2)
And updates download and install themselves...
I consider my self a fair tech head (I have 3 pc's in my room that I built, and a cupboard full of spare parts)... and most tech heads I know hate steam.. but I love it.
Re:why Steam? (Score:2)
Lose your copy? Just redownload it. You can start playing as soon as the first level is downloaded, and on increasingly fast connections the download time won't be an issue. For 56kers, you can always get the CD. But as a Cable user I find Steam easier.
It gets rid of the pre-ordering / limited copies at shop / queueing
Re:why Steam? (Score:3, Insightful)
What happens if Valve goes out of business, or just doesn't feel like paying for the infrastructure to support steam anymore?
Re:why Steam? (Score:2)
Re:why Steam? (Score:2, Redundant)
Oh I'm sorry, you meant good reasons for us, the customers? Well, tough luck, because apart from being able to install directly after paying for it online, there aint none
Re:why Steam? (Score:3, Insightful)
Steam is handy, I think (Score:5, Insightful)
Re:Steam is handy, I think (Score:2)
I don't. I also don't like online constant activation of my programs. People dislike the Windows XP activation, but don't seem to balk at the Counter Strike activation process that has to happen at some time, even for LAN play. And before you say "offline mode", I've seen it fail so many times while running the helpdesk at Quakecon. If it decides it
Re:Steam is handy, I think (Score:3, Funny)
Just think of all the script-kiddy wanna-be "hackers" that directed attention at HL2 when it was delayed. Can you really blame them for having their MS software exploited? That's like hanging a piece of steak from your crotch and running into a dog kennel with the
Re:Steam is handy, I think (Score:3, Interesting)
Yes, yes I can. The guy got exploited on a machine that had access to their single most valuable resource - the HL2 source repository.
Why was something that precious, and that big a target, on a machine that was net-accessible? Why was he running a known vulnerable piece of software on it?
Sure, I take the odd chance with my machine too - but I'm not given access to that sort of stuff. If I was, I hope I'd be a little more careful.
Re:Steam is handy, I think (Score:2, Interesting)
Why was he running a known vulnerable piece of software on it?
The game is developed on Windows right? Makes it kind of hard to avoid the "known vulnerable piece of software"...
Re:Steam is handy, I think (Score:3, Interesting)
Do you trust handing your credit card to someone at a restaurant, store, etc. who is making minimum wage? At any rate, who cares? If your credit card gets stolen, you are liable for at most $50 and usually $0. It is the merchant who takes the stolen credit card who loses big time.
Re:Steam is handy, I think (Score:3, Insightful)
Nope, you just have to keep track of your account name and password. One of my friends has already been burned for having tied his old HL key to a Steam account that he no longer has access to, which is registered to an e-mail address he no longer has access to. Basically, he has no way of recovering that key for a Steam account unless and until he sends back the entire HL jewelcase (on which the original key is printed) to Valve, and he's not going to get another jewe
Preloading (Score:3, Funny)
I never imaged.... (Score:2, Redundant)
Geez. (Score:5, Funny)
October fucking 8th? (Score:5, Insightful)
So this means it's not coming out till at least October? WTF! I had my hopes up with this release candidate news, now this bullshit! Dammit, I'm going to be out of the country by the time it comes out! I may not be able to get it in any timely manner BUT via Steam.
Fer fucksake, games are perishible. Hype even moreso. The more they delay this thing, the less they're going to make off of it. The hype is at it's peak now, without ever having boiled over to the point of insanity (Phantom Menace, FF7). If they don't release this thing soon, they're gonna have another Daikatana on their hands.
Start selling the goddamn game, and settle out who gets how much in court!
Cut out the publishers (Score:3, Insightful)
well (Score:5, Interesting)
make it so that people can burn half life 2 cd's legally, then give them to their friends BUT with the catch that in order to decrypt it they gotta go pay valve directly online for the small program to activate it (they could sell it alot cheaper than normal and still make more money than normal, too)
I think piracy is why Steam takes flak Re:well (Score:2)
Re:well (Score:2)
Delayware (Score:2, Interesting)
its part of teh business model (Score:2)
and since when did valve become the good guys? they stopped giving a fuck about their customer base years ago when they turned them into the biggest guinnea pig test bed since the gov't was dumping acid in the water supply in the 50's
So, in short... (Score:3, Interesting)
Sierra: Oh no you don't...
I hope valve wins, it'd be nice to see these large game publishers dissapear.
Just like the music industry Re:So, in short... (Score:2)
Re:So, in short... (Score:2)
Sierra: Oh no you don't...
I hope valve wins, it'd be nice to see these large game publishers dissapear.
You missed the part where Valve signed a large contract with Sierra, and have been paid millions since 1999 to develop Half Life 2 for Siera... and are now trying to breach that contract.
But hey, stick
Awesome! (Score:5, Insightful)
Re:Awesome! (Score:3, Insightful)
Who gives a rat's ass? (Score:5, Funny)
I personally recommend a few hundred rar files (and one or two with checksum errors of course) on a few hundred floppies.
Don't tell SCO... (Score:5, Funny)
I saw an endif and a return near each other in the leaked version.
Good news? maybe (Score:2, Insightful)
Here's an Idea (Score:5, Insightful)
License the Steam technology and platform from Valve and use it to distribute the other games in your library. That way you gain the benefits of an electronic distribution channel without having to do the blood and sweat part yourself and you reward one of your forward-thinking business partners.
Or you can sue said customer and make yourself look like the idiotic, money grubbing, fear-mongering institutions of the MPAA and RIAA, which are locked in the past despite all signs customer preferences are pointing the other way. Oh, that's right. Universal is a RIAA member. No wonder.
This is what you get when crotchety septegenarians managing a confused, out of focus multinational try to sell entertainment "to the kids". Heavy handed, out of touch business practices that alienate more people than they are trying to attract.
Repeat (Score:2)
Valve's woes are punishment from God... (Score:4, Funny)
Yes, God is a Mac Gamer. And He is pissed.
That is Nihilism (Score:2, Funny)
Re:Valve's woes are punishment from God... (Score:3, Funny)
Another phrase that would make Him disappear in a puff of logic.
- shazow
I DON'T CARE. (Score:2)
Just delay? (Score:2)
Maybe that's just a high bid and they expect to be talked down between legal proceedings, but that's seriously scary.
It sounds like Valve intended to use Steam as its own little online marketplace. It didn't tell Sierra about this until a year after an agreement was filed because that would like scare them out
Vivendi Universal (Score:5, Funny)
I mean, who wanted all those free MP3s anyway? Most of them were made by artists who would never sell albums anyway! VU was actually being polite, by helping those musicians who never would have 'made it' to get a real job, like making the Fajita Sandwich Wrap Melts that Vivendi executives get at Wendys.
Onos (Score:2, Informative)
Re:I hope not (Score:2)
Re:I hope not (Score:2)
I wanna push my TOOL into some sexy onos.
Re:Worth the wait. (Score:5, Informative)
Sort of amusing. I wonder if Id's getting a kickback from ATI, Nvidia, etc.
:)
Re:Worth the wait. (Score:5, Interesting)
Re:Worth the wait. (Score:5, Insightful)
Re:Worth the wait. (Score:2, Interesting)
Perhaps once you start playing HL2 (haven't you seen any of the videos?), you'll realize then that even the engine is better. I'm not trying to diss DIII, it has it's place
Re:Worth the wait. (Score:2)
Everyone also seems to forget the community is what makes a game great. Most will be bored with the single-player mode several weeks into the game, it is only when the mod community works its magic that a game becomes legendary.
Just look at how well Counter-Strike did on a modded Quake eng
Re:Worth the wait. (Score:2)
the shadowing is what makes D3 worthwhile and it is the gift carmack has given to the gaming world.
mad props to him
Re:Worth the wait. (Score:3, Funny)
Maybe I'll check it out.
Re:This is news? (Score:2)
Re:Valve may also have unhappy Steam customers (Score:2)
If you pay per MB you really should install a tool to help you monitor your usage. I remember them back in the day, so I'm sure there is some really slick stuff available now.
Wrong (Score:5, Interesting)
Bottom line: HL2 is going to be delayed until this is resolved.
Re:DL games is much better (Score:2)
|
https://games.slashdot.org/story/04/09/20/2012208/no-half-life-2-on-steam
|
CC-MAIN-2017-13
|
refinedweb
| 3,484
| 73.27
|
wifi_set_sta_power()
Turn the Wi-Fi radio power on while the device is in client STA operational mode.
Synopsis:
#include <wifi/wifi_service.h>
WIFI_API int wifi_set_sta_power(bool on_off)
Since:
BlackBerry 10.0.0
Arguments:
- on_off
If true, the power will be turned on.
Library:libwifi (For the qcc command, use the -l wifi option to link against this library)
Description:
This function attempts to set the Wi-Fi radio power. When the device is in an operational mode other than client STA, this function fails, errno is set to EBUSY, and the Wi-Fi power remains unchanged.
When the function returns successfully in response to a power-on request the device will operate in the client STA mode.
This function does not provide a method to turn Wi-Fi power off.
Returns:
WIFI_SUCCESS upon success, WIFI_FAILURE with errno set otherwise.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
https://developer.blackberry.com/native/reference/core/com.qnx.doc.wifi_service.lib_ref/topic/wifi_set_sta_power.html
|
CC-MAIN-2019-35
|
refinedweb
| 160
| 67.55
|
Four simple tips for web page providers
Sure, the web is full of "silos", but let's not over-dramatize. We can fix this. The web is perfectly capable of supporting social use cases. On the social web there is no clear distinction between information publisher and information consumer: users are generally doing both simultaneously.
This is easy to implement if all users are publishing (and consuming) at the same domain name. This essay will highlight what I think are the four main topics to address when we want users to communicate with each other seamlessly, even if they happen to be at different domain names. And they are suprisingly simple tricks. Sure, it requires some technology and standards, but much of that already existed a decade ago. The important lessons are in how you design your application for your users.
The following are four tips for web page providers. If you run a website where people can create a web page (you probably call them "accounts" or "logins"), then this is what you should do to make your website a better participant in the social web:
Don't login wall me
The first trick is to make public information public, and not put it behind a login wall. Twitter currently allows you to view my web page there (), but if you want to see who I follow (click 'following' while not logged in), then you need to log in. Why?
Quora is even worse in this respect, prompting people to log in as a first thing. Why? This means that conversations are drawn into the website where they start, rather than staying decentralized. When I consume public information on Quora or any other website, I should not be forced to log in.
Consuming limited-audience content can also be implemented without login walls: for instance with unguessable URLs. Picassa used to do this - around a decade ago it was possible to share a Picassa album by sending its URL to someone. Now, the consumer of my photo album is prompted to create a Google account for this. Again: why? All web page providers should switch back to displaying content without login walls.
Of course, once a web page provider offers its users the ability to actually publish content (i.e., without a login wall in front of it), this is big progress compared to what most users get now from their web page provider. And it's an easy win. A next concern is whether the format of this web page follows standards like semantic markup and web feed support. But once web page providers commit to breaking down their login walls, this would then be a small additional step.
The problem is not that a web page provider like Facebook is unable to engineer a product where you can actually publish stuff, or that there are not enough standards describing how to actually publish something on the web. The power to redecentralize the social web does not lie with engineers, it lies with product managers, who sometimes decide that the ability to actually publish something on the web is not what users need.
Don't name space me
So removing login walls resolves decentralized consumption of information. Once that works, we only need a few more tricks so that I can also produce content on the web, even if this content is in reaction to existing content elsewhere (for instance when replying to a Quora question). I should be able to do so on my own web page, regardless of who my web page provider is.
A key concept for this is namespace. Web page providers don't just provide hosting space, they also provide a user interface that allows users to interact with other users. This application is usually namespaced: it will allow you only to interact with users on the same domain name, not with users on other domain names. The buttons, options, and controls of this user interface usually omit the domain name part when identifying other users. For instance, Twitter's @-mention syntax only works when mentioning a user whose web page is also on Twitter.
StatusNet introduced an important improvement to the @-mention, by changing it to an '@user@host' format. This defaults to a local @-mention when the '@host' part is omitted. Granted, the syntax is maybe not as simple and clean, but in return for that, it is more powerful.
An important field of research is how such user interactions can be made name space independent. Once web page providers commit to improving their user interfaces in this way, then its technical implementation is not so difficult.
Of course, there is also the underlying technical question of how a cross-domain @-mention is "delivered": the person being @-mentioned should somehow become aware of this. In fact, it's an engineering task that was already solved for blogs, a decade ago. That is what my third tip is about.
Polyglot rel-aware linkback
It's not only mentioning or messaging another person which should trigger a cross-domain notification. The same is true for mentioning content. That way, conversations can be threaded together across domains. And hyperlinks are also used in machine-readable data, so there we need something similar too. When my data document represents a move in a chess game, then it will reference the URI of that game with a hyperlink. This idea is the essence of linked data.
The data document representing the game will also want to link to all the moves that make up the game. How can it do that? My publication of a chess move somehow needs to result in it being linked to from the main data document of the game in which the move occurs.
Notifying the other document of your link, so that it can link back to you, is called Linkback. The wikipedia page there mentions three protocols. WebMention is a simplification of Pingback, which recently gained some real-world traction.
We also have the Salmon protocol which does the same thing, but using cryptographic signatures. And then there is the rdf trackers proposal, which solves the same problem, adding a slightly more powerful query engine than the other five protocols mentioned.
OK, so that's a problem. Six protocols for linkback? And it will only work if both hosts offer the same one? That would mean as many as 83% of all linkback requests might fail due to incompatibility of the emitting host with the receiving host. How can we solve that?
We can try to invent a sort of universal standard, in the same way Esperanto tries to allow people from all nations to communicate with each other. But this would be unlikely to succeed for organizational reasons. And there is a simpler option: make each server support all six linkback protocols, plus any new ones that might pop up.
If your server speaks all linkback protocols, then even if the other server supports only one of them, the communication will be successful. This works both if you are the emitter and if you are the receiver.
To make hyperlinks more meaningful, we can use the 'rel' attribute. For instance, my blog post may have a link with rel="author", indicating that it was written by me. A useful link relation for blog posts that are a reply to another blog post is "in-reply-to".
In rdf, the predicate fulfills this role, for instance 'foaf:knows'. In an ActivityStreams document, each field indicates a relationship, for instance 'actor'. When your server receives a linkback notification for a URI, it can look up in which relation the URI was linked to, and display the hyperlink back to it in a corresponding way.
Using polyglot rel-aware linkback, adding pages into a specific conversation, chess game, or photo tag list becomes possible on the web.
Decentralized search
There is however one fourth thing that's needed before the cross-domain experience on the federated social web is as good as the same-origin one: (friend) discovery. Sometimes you want to search a content-addressable index, for instance, searching for a user by name.
We need a decentralized search index which is publically available, like a phone book, which allows search by name, or by location, or other criteria. This search index cannot be hosted at one single point of failure: several mirrors need to offer an interface to it, and exchange index data with each other. To get started with this, a few existing web page providers could pool their search indexes into one, and publish that index on bittorrent in one or more well-known formats (SQL, csv, xml, json, ...). Anybody could then take this data and instantiate a search engine.
If we only index rows of "full name, nickname, location, avatar, URL", and create a full-text index on at least the "full name" and "nick name" fields, then typing in the first few letters of a name, could yield a list of full names, with avatars and locations, with each result row linked to a URL. It is then up to the web page owner what they publish on that URL, but at least they will be findable.
It will probably not be feasible to make the whole web searchable for topics this way, in the way Google Search does, but for the specific domain of friend discovery, this would definitely be doable.
There are of course important privacy concerns that need to be looked at carefully, and one should only index pages which are intended to be public. Only people who want to be findable on the web should ever end up in this database. And they should have the option of being removed from it; it is up to the ethics of the search provider to only display results from the latest version of the database, and "erase" historic results whenever this is requested. Also, we should make sure that surfacing information that is currently "public but buried" doesn't lead to unintended side-effects.
But the core question of decentralizing (social) search, stands: if I'm findable on Twitter, then what you see there is already "publically" findable within a specific domain name silo. I then have no problem with also being findable in this decentralized search database.
|
http://www.w3.org/2013/socialweb/papers/michiel-de-jong-position.html
|
CC-MAIN-2016-44
|
refinedweb
| 1,725
| 60.45
|
Automating XML/Java mapping with Jato
Java applications that deal with XML, often must resort to tedious parsing and mapping code. Data binding is gradually emerging as an effective mechanism for incorporating XML data into Java applications. The plus side to this approach is that XML data will look like Java objects which makes it easier for the Java programmer to manipulate and interact with that data. Such mapping are usually a two-way street in that they can also take a Java object and serialize it into XML. The negative side would be performance (by introducing some overhead) and the inherent incompatibility that exists among the different binding mechanism.
Jato, is an open-source effort focusing on XML/Java conversion ( ). Jato encapsulates the mapping between XML and Java objects into an XML file (using Jato-specific tags). This allows the developer to focus strictly on the mapping without paying too much attention to implementation. As the author of Jato observes, conversion implementations often have similar patterns. By capturing the mapping in a single place and then allowing Jato handle the parsing, conversion, and generation, you save many lines of code.
Writing a Jato script is similar to writing an XSLT stylesheet. When going from Java to XML, you can use the JavaToXml class. After instantiating it, you need to specify the root element to be used for the output XML and specify a "helper" class via the setHelperclass() method. The helper class is the interface between the Jato script (containing the mapping and instructions) and the Java application. When going the other way (from XML to Java), you can use the XmlToJava class. Again, there is a helper class you can use as the interface between Jato script and the application.
Within a Jato script, anything outside the Jato namespace, directly goes to the output. This allows you to easily provides a structure around the application specific data and change it without modifying any Java code. The rest of the script consists of Jato-specific tags. For example, the <Jato:attribute> element adds an attribute to the output XML file. The <Jato:if> element along with the <Jato:else> element allow for a conditional test to be done which could alter the output XML file. Inside the Jato script, you have access to instances of objects from the Java application and can retrieve their properties or invoke their methods. If the underlying objects are JavaBeans, then the naming convention can be used to make the mapping easier to follow.
The engine responsible for parsing Jato scripts and performing the instructions actually uses JDOM to interact with XML. JDOM is an open-source API for XML manipulation from Java (). Jato script elements all have a Java implementation which you can examine by looking at the source. Furthermore, you can create your own Jato tags by implementing specific interfaces. This is very important, because you can extend the functionality of Jato to fit your needs. Furthermore, the behavior of the application can be modified by merely changing the Jato script file without touching any of the actual Java code. This is also a promise of XSLT, but Jato offers a two-way street and the binding between XML elements and Java objects is more intuitive and natural. In that regards, Jato is closer to Java/XML data binding tools like Zeus (from Enhydra) than transformations done via XSLT.
Jato is still under development, but you can get a glimpse of what it can do by looking at the examples and the sample code. And of course, in the spirit of open-source software, you can always join the team of Jato developers to make it even better.
About the Author
Piroz Mohseni is president of Bita Technologies, focusing on business improvement through the effective use of technology. His areas of interest include enterprise Java, XML, and e-commerce applications.
|
http://www.developer.com/xml/article.php/742821/Automating-XMLJava-mapping-with-Jato.htm
|
CC-MAIN-2013-20
|
refinedweb
| 648
| 53.1
|
but i'm stuck with importing QtWidgets. I had found a nice method somewhere but can't find it back (below my result).
The problem i encounter now is that QtWidgets gives an error by importing it.
Code: Select all
def Activated(self): #"Do something here" #"for examples checkmanipulator wb" from PySide2 import QtUiTools from PySide2 import QtGui from PySide2 import QtCore from PySide2 import QtWidgets # from PySide import QtWidgets w = QtWidgets.QWidget() ui = QtUiTools.QUiLoader() # ui = FreeCADGui.UiLoader() # change later on. First swap spinbox to quantity spinbox ui.load(os.path.join(ui_path, 'ui_widget.ui'), w) return
I got this error message.
When i type from PySide2 import QtWidgets directly in the python console in FC i don't get an error message.
Code: Select all
Running the Python command 'command_1' failed: Traceback (most recent call last): File "C:\Users\USER\AppData\Roaming\FreeCAD\Mod\custom_workbench\command_1.py", line 31, in Activated from PySide2 import QtWidgets cannot import name 'QtWidgets'
Can anybody explain what happens?
|
https://forum.freecadweb.org/viewtopic.php?t=32947
|
CC-MAIN-2020-29
|
refinedweb
| 166
| 59.3
|
I'm Ajay Kumar. I presently work at Microsoft GTSC , Bangalore in the WebData team ( SQL Developer) which deals with client-side connectivity to Databases from many kind of applications and using several Data Access Technologies. Although the team name implies web applications, it also deals with stand-alone applications and in some cases, with other applications such as MS Access or SQL Server (and Oracle) using Data Access components to talk to other back-ends. The main technologies supported are ODBC, JDBC, OLEDB, .Net managed providers, and ADO.NET. The team also owns the MSXML parser and some pieces of the System.Xml namespace in .Net. Also includes Sql Reporting Service (SRS).The team handles issues that tend to get range from coding to configuration and tend to be difficult to place into classification buckets due to their complexity.
Apart from Technology, it's the randomness in art that I thrive upon for my survival.
|
http://blogs.msdn.com/b/ajaykumarks/about.aspx
|
CC-MAIN-2014-52
|
refinedweb
| 156
| 56.45
|
In addition to creating windows, buttons, and menus, Swing sets up what is called an event-based program. That means the top level program sits in a loop and waits for things to happen. When the user clicks a button or uses the keyboard, the program responds to that event. Swing itself runs the main loop and handles lots of stuff you don't want to deal with.
Somehow, though, you need to tell Swing what to do in response to certain events. For example, if there is a button that should run one iteration of a simulation, you need to be able to tell Swing what to do--what code to execute--when the user clicks on it. You tell Swing how to respond to certain events by hooking your own methods onto events or visual pieces of a window. For a simple interface widget such as a button, you add something called an ActionListener to the button. The ActionListener has a single method that gets executed only when the user clicks a button. By writing your own version of that function, your own code gets executed on a button click.
The visual part of Swing lets you create a window and place widgets and drawing areas according to a layout plan. The layout is hierarchical, like a tree, with the window itself as the parent of all window elements. Some elements are, themselves, hierarchical. For example, a tool bar is a container object that can include multiple widgets like buttons and text fields. To add a new widget to a window, you create a new instance of the proper class, set up its properties and then add it to the desired parent visual element. When designing your layout, it is useful to draw the tree of elements first in order to establish the proper hierarchy and layout.
The following sections go through some of the specifics of creating a Swing application, including creating a window and responding to events. The complete implementation is in Circles.java
The window is at the root of the visual hierarchy. The parent window class is called a JFrame. In order to create your own window, you have to extend the JFrame class by creating a constructor and a main method. The constructor sets up the window layout tree and any action hooks, and the main method creates a window and starts up the run loop.
In order to demonstrate the concepts, we're going to go through the process of creating an application that draws lots of circles in random locations. We'll go through the program in a top-down order to help break down the problem into smaller pieces that are easy to solve.
Our main class will be called Circles. Think of the Circles class as representing the application window. The Circles class extends JFrame in order to inherit all of the window creation and management code.
public class Circles extends JFrame {
The Circles class will have three methods. One is the constructor, one is the main method, and one is a helper method for the constructor. The Circle class will also contain two private classes to implement parts of the window.
The main method is straightforward: create a new instance of the Circles class, pack the layout (traverse the layout tree), and then make the window visible. Almost all of the hard work is done in the Circles class constructor. As part of creating a JFrame object (which Circles inherits), Swing starts up a thread that listens events once the window becomes visible. That thread keeps the program running even after the main method below is complete.
public static void main(String[] args) { int dx = 500; int dy = 500; // create a window Circles vw = new Circles(dx, dy); // generate the layout and set the window to visible vw.pack(); vw.setVisible(true); // run loop is going now }
The constructor for Circles is not much more complex. The first step is to call the constructor for the parent class (JFrame). Giving a string argument to the parent constructor gives the window that title. The second step is to call a method that tells the window to terminate the program when the window is closed. If you have a program that creates many windows, you may not want to do that, but in this case it's the proper behavior. The third step is to create a place on the screen in which to draw and then add the panel to the layout hierarchy by making it a child of the Circles JFrame. The final step is to create a toolbar and add it to the layout hierarchy as a child of Circles.
The first two steps will be common to many programs. The GraphPanel class and the createControlBar method take care of most of the rest of the tasks.
// constructor for the window public Circles(int dx, int dy) { // call the JFrame constructor with the window title super("My Cool Circles"); // set the program to terminate when the window is closed setDefaultCloseOperation(EXIT_ON_CLOSE); // create a drawing canvas and add it to the window GraphPanel mapCanvas = new GraphPanel(dx, dy); add(mapCanvas, BorderLayout.CENTER); // create the control bar JToolBar controlBar = createControlBar(mapCanvas); add(controlBar, BorderLayout.SOUTH); // done }
Next we're going to look at the private GraphPanel class. The GraphPanel class extends the JPanel class, which is a widget in which you can draw things like circles. We'll give the class one field, which is the number of circles to draw.
private class GraphPanel extends JPanel { int numCircles;
The constructor for the GraphPanel class starts by calling the parent constructor and setting the initial number of circles to 10. Then it sets the size of the drawing area (dx, dy), the background color (white), and tells it to have an small border.
// constructor for the graph panel public GraphPanel(int dx, int dy) { super(); numCircles = 10; // set its size, background color and border type this.setPreferredSize(new Dimension(dx, dy)); this.setBackground(Color.white); this.setBorder(BorderFactory.createEtchedBorder()); }
Every widget that is inherits the JComponent class has a method called paintComponent. The paintComponent method is called any time that particular widget needs to be updated. All of the typical widgets in a window are children of the JComponent class, so all of them have a paintComponent method. Our GraphPanel is of type JPanel, whose parent is JComponent.
In order to draw the circles into our window, we need to override the paintComponent method. However, we still need to let the parent paintComponent method draw first, because there are housekeeping items that need to be executed before we draw anything. The method shown below does just that. First, it calls the parent's paint component method, then it calls a method that draws the circles.
// function that gets called to refresh the screen public void paintComponent(Graphics g) { // paint background super.paintComponent(g); // draw the vertices drawCircles(); }
The final piece is the drawCircles function. The function first gets the Graphics context. Think of the graphics context as the actual canvas on which things are drawn. It also has state information contained in it, such as the current pen color.
Most of the rest of the function is involved with picking where to draw the circles and how big to make them. Note that the GraphPanel object knows how large it is and we can use the getWidth() and getHeight() functions to access that information. Within the for loop, once the location and diameter of the circle has been chosen, the pen color is set to a random RGB value and the circle is drawn into the graphics context.
public void drawCircles() { Graphics g = this.getGraphics(); Random gen = new Random(); for(int i=0;i
The last piece is to look at the control bar, how it is created, and how we tell the GraphPanel object when to draw the circles and how many to draw.
The createControlBar method takes a GraphPanel as its argument because the action for the draw button needs to know how to communicate with it. The first thing the method does is create a JToolBar objects, which is a container object that can hold other widgets.
The first widget in the toolbar is a JLabel object that holds static text. This particular label tells the user what to put in the text field. After creating it, the label is added to the toolbar, making it the first element in the toolbar. The order in which items are added is the order in which they are drawn from left to right or top to bottom.
The second widget is a text field, which takes a default string and a size. The text field is added to the toolbar next, putting it next to the label.
If you want some space between elements in a toolbar, you have to create a JToolbar.Separator, which takes a width and height as arguments. We use this to put some space between the text field and the button.
The button is the final element of the toolbar except for a final separator object. The button is the only action item on the toolbar. The user expects something to occur when the button is clicked. Here is where we need a hook that tells the GraphPanel to update when the button is clicked.public JToolBar createControlBar(GraphPanel canvas) { // create a new toolbar object JToolBar toolbar = new JToolBar(); // create a label field JLabel rootLabel = new JLabel("Number of Circles:"); toolbar.add(rootLabel); // create an input field and add it to the bar JTextField rootid = new JTextField("10", 10); toolbar.add(rootid); toolbar.add(new JToolBar.Separator(new Dimension(60, 1))); // create a button and add it to the bar JButton runButton = new JButton(" Draw "); toolbar.add(runButton); // hook up a method to the button runButton.addActionListener(new RunListener(rootid, canvas) ); toolbar.add(new JToolBar.Separator(new Dimension(30, 1))); return(toolbar); }
In order to create the hook, we create a private class that implements the interface for an ActionListener. A class that implements ActionListener must have a function called actionPerformed, that responds appropriately to an event.
In this case, the actionPerformed function does two things. First, it gets the number of circles to draw from the text field and puts that number into the GraphPanel's numCircles field. Second, it tells the graphics context of the GraphPanel to update itself. That forces the paintComponent method of GraphPanel to be executed, drawing the circles.
Because we need access to both the GraphPanel and the text field, the RunListener class has fields to hold them, and the constructor assigns those fields with the arguments passed into it.// private CircleListener hooks code with the button private class RunListener implements java.awt.event.ActionListener { GraphPanel canvas; JTextField field; public RunListener(JTextField tf, GraphPanel cf) { field = tf; canvas = cf; } public void actionPerformed( java.awt.event.ActionEvent evt ) { // get the new number of circles from the text field Integer numC = Integer.decode(field.getText()); canvas.numCircles = numC; // tell the canvas to update to a new number of circles Graphics g = canvas.getGraphics(); canvas.update(g); } }
That completes the Circles class. The important pieces to remember are the paintComponent method, which is where any widget draws itself, and the actionPerformed method, which is how a widget responds to an event.
|
http://cs.colby.edu/courses/F07/cs231/labs/lab04/SwingOverview.php
|
CC-MAIN-2017-51
|
refinedweb
| 1,890
| 62.27
|
Segmentation fault and no module named onboard
Bug Description
Hi. I use 64bits archlinux and today I've updated my system and onboard refuses to work. I recceive this error
Traceback (most recent call last):
File "/usr/bin/onboard", line 12, in <module>
from Onboard.Exceptions import chain_handler
ImportError: No module named 'Onboard'
I used a onboard trunk version built by myself. Changed to the current 0.98.1 from arch repo and now the error message is more cryptic
[1] 726 segmentation fault (core dumped) onboard
I've tested with lots of different onboard versions (0.98.0, 0.98.1 and 2 different trunks), some of them from Arch's repos and some of them built by myself (using Arch's PKGBUILD's) and the pattern was the same: all the onboard version built BEFORE the update give the no-module error, all the versions built AFTER the update give the segmentation fault.
Probably it is not an onboard bug, it seems pretty obvious that some updated package broke onboard, and maybe downgrading that buggy package will make onboard work again. However there are too muck packages to test blindly. -> 2.5.1-2)
[2012-10-17 14:40] upgraded python2-cssutils (0.9.9-2 -> 0.9.9-3)
[2012-10-17 14:40] upgraded python2-cherrypy (3.2.2-1 -> 3.2.2-2)
[2012-10-17 14:40] installed python2-mechanize (0.2.5-3)
[2012-10-17 14:40] upgraded python2-lxml (2.3.5-1 -> 3.0-1)
[2012-10-17 14:40] installed python2-imaging (1.1.7-5)
[2012-10-17 14:40] upgraded sip (4.13.3-2 -> 4.14-2)
[2012-10-17 14:40] upgraded python2-sip (4.13.3-2 -> 4.14-2)
[2012-10-17 14:40] upgraded python-dbus-common (1.1.1-1 -> 1.1.1-2)
[2012-10-17 14:40] upgraded python2-dbus (1.1.1-1 -> 1.1.1-2)
[2012-10-17 14:40] upgraded pyqt-common (4.9.4-2 -> 4.9.5-2)
[2012-10-17 14:40] upgraded python2-pyqt (4.9.4-2 -> 4.9.5-2)
[2012-10-17 14:40] upgraded python2-psutil (0.6.1-1 -> 0.6.1-2)
[2012-10-17 14:40] upgraded calibre (0.9.2-1 -> 0.9.2-2)
[2012-10-17 14:40] upgraded hplip (3.12.10.a-2 -> 3.12.10.a-3)
[2012-10-17 14:40] upgraded python2-xdg (0.23-1 -> 0.23-2)
[2012-10-17 14:40] installed python2-notify (0.1.1-12)
[2012-10-17 14:40] upgraded ibus (1.4.2-1 -> 1.4.2-2)
[2012-10-17 14:40] upgraded pygobject-devel (3.2.2-1 -> 3.2.2-2)
[2012-10-17 14:40] upgraded pygobject2-devel (2.28.6-6 -> 2.28.6-7)
)
[2012-10-17 14:40] upgraded python2-crypto (2.6-2 -> 2.6-3)
[2012-10-17 14:40] upgraded python2-distribute (0.6.28-1 -> 0.6.28-3)
[2012-10-17 14:40] upgraded python2-
[2012-10-17 14:40] upgraded python2-gobject (3.2.2-1 -> 3.2.2-2)
[2012-10-17 14:40] upgraded python2-gobject2 (2.28.6-6 -> 2.28.6-7)
[2012-10-17 14:40] upgraded python2-httplib2 (0.7.4-1 -> 0.7.6-1)
[2012-10-17 14:40] upgraded python2-pyinotify (0.9.3-2 -> 0.9.3-3)
[2012-10-17 14:40] upgraded python2-pyopenssl (0.13-1 -> 0.13-2)
[2012-10-17 14:40] upgraded python2-virtkey (0.61.0-1 -> 0.61.0-2)
[2012-10-17 14:40] upgraded python2-
[2012-10-17 14:40] upgraded system-
[2012-10-17 14:40] upgraded system-
Any idea of the wrong package? Maybe ibus? Thanks
Thanks. I've done what you said but honestly I can't understand anything from the display results. It's a bit long, I uploaded to http://
Do you have any idea of what's going on?
It segfaults somewhere in gobject introspection code. On Ubuntu this is the package python3-gi. It would perhaps help to install debug symbols and re-run it in gdb to get symbols for the topmost two entries of the stack. On Ubuntu the symbols are inpython3-gi-dbg. I guess the Arch package is python-gobject-dbg or something.
I've removed the unrelated packages from you list, that leaves these:
[2012-10-17 14:40] upgraded python-dbus-common (1.1.1-1 -> 1.1.1-2)
[2012-10-17 14:40] upgraded ibus (1.4.2-1 -> 1.4.2-2)
)
The big thing that sticks out is python itself, apparently Arch just upgraded to python 3.3. That's a first for Onboard, I haven't tried it yet. Will do here and let you know. The second is perhaps python-gobject, it's 3.2.2 there, while Ubuntu's python-gi is at 3.4.0.
Nope, python3-gi isn't ready for python 3.3. No segfault here, just import errors, but I can't fix this from Onboard. Are you able to install python3.2 in parallel to the python package? If yes, you can probably run
$ python3.2 ./setup.py build
$ python3.2 ./onboard
or modify the shebang in setup.py, onboard and onboard-settings to read
#!/usr/
You are my savior (again). I don't know how to install 3.2.3 and 3.3.0 versions simultaneously. However downgrading python to 3.2.3 and replacing python-
Using python-gobject 3.4 solves the segmentation fault, but Onboard still fails to run with python 3.3:
Traceback (most recent call last):
File "/usr/bin/onboard", line 15, in <module>
from Onboard.OnboardGtk import OnboardGtk as Onboard
File "/usr/lib/
from Onboard.Indicator import Indicator
File "/usr/lib/
config = Config()
File "/usr/lib/
cls.self = object.__new__(cls, args, kwargs)
TypeError: object.__new__() takes no parameters
@György,I think this is fixed in trunk now, though I can't really test it on Ubuntu. I don't even come that far due to python-gi import errors.
@marmuta: thanks! I applied your fix, and now Onboard works well with python 3.3 and pygobject 3.4.
I can also confirm that the stable 0.98.1 release with 1012 and 1013 revisions that György packed for the archlinux repo works with python 3.3 and pygobject 3.4 . I've also built the trunk version and onboard works. Thanks, marmuta
The fix is available in the alpha 1 preview release of Onboard 0.99.0. Thus, I am marking this bug as Fix Released. Please, do not hesitate to reopen it or file a new bug if this problem is still an issue for you.
onboard (0.99.0~
* New upstream alpha release. (LP: #1089396)
+ Fix Onboard becoming empty when system font dpi changes
-- Francesco Fumanti <email address hidden> Wed, 12 Dec 2012 21:33:43 +0100
onboard (0.99.0~
* Sponsorship request for Ubuntu Raring (LP: #1089396)
* debian/control: raise virtkey run dependency to 0.63.0 or above
* debian/patches: refresh patch and change default theme
* Onboard requires now virtkey >= 0.63.0
* Add example file with system defaults for the nexus7
* Various changes to get acceptable speeds on the nexus7 (LP: #1070760)
* Add docking feature (LP: #405034)
* Add sliding feature for docking and auto-repositioning
* Add multitouch support
* Add a toggle to stop listening to touch events in case of many problems
* Add popup on long press for key variants like diacritics
* New option to choose popup vs repeat for keys with variants
* New gsettings key for the popup delay
* Make move, frame and touch handles work on the nexus7
* Perform simulated clicks on correct touch position
* Auto-release pointer grab after timeout in case nexus7 is unresponsive
* Fix xserver memory leaking
* Improve speed when typing and moving the pointer (LP: #1055448)
* Fix rendering being slowed by emboss effect on keycaps (LP: #890221)
* Fix for not being able to move/resize Onboard on touchscreens (LP: #959035)
* Have Onboard respect launcher icon size (LP: #1078554)
* Auto-show Onboard by clicking already selected text entries (LP: #1078602)
* Make default shortcut for language/layout work from Onboard (LP: #1078629)
* New design of the Preferences dialog with more options (LP: #1053496)
* Disable click buttons when mousetweaks is not installed
* Add D-Bus service to show and hide the keyboard (LP: 1032042)
* Don't export dbus service for embedded instances
* Set NumLock's default sticky behavior to LOCK_ONLY
* Keep state of NumLock across restarts
* New attribute in layout files for sticky key behaviour
* New layout tags key_template and keysym_rule defining keysym-specific labels
* New window tag for color schemes to define border of popups
* New layout tag for language specific overrides in the layouts
* Move common key definitions into template for import by layout files
* Sync modifier states of Onboard with changes by hardware keyboard or tools
* Fix keys not re-rendered when releasing latched modifiers (LP: #1069990)
* Send key strokes for all modifiers (LP: #1067797)
* Blacklist Ctrl-LAlt+Fn keys by default
* Add alternative key generation by at-spi2
* Try to improve struts handling for metacity and mutter
* Fix getpreferredenc
* Build for all python3 versions, by Matthias Klose
* Add work arounds for some problems with the search box of firefox
* Improve startup sequence to fix Onboard showing up so...
Try running in the debugger:
$ gdb --args python3 /usr/bin/onboard
gdb$ r
wait for the segfault, then:
gdb$ bt
the topmost entries in the backtrace should hint at which modules are involved in the crash.
|
https://bugs.launchpad.net/onboard/+bug/1067797
|
CC-MAIN-2016-50
|
refinedweb
| 1,613
| 65.73
|
LORD WHITTY,
DR MARION
WOOLDRIDGE AND
MS JILL
WORDLEY
TUESDAY 2 JULY 2002
180. A good politician can spot the problems.
(Lord Whitty) I think it was recognised it was important
that we did start taking action on this front and what we have
done is bring together the various parties who are involved in
this, both private and public sector, to try and have a more co-ordinated
approach to it. We started that with the direct enforcement agencies
during the course of the disease but from earlier this year we
put together them, the airlines, the port authorities, the shipping
companies and so on to get everybody involved in facing up to
the problem. Amongst the stakeholders I think it has been received
quite well, and there has been a recognition that there is a shared
responsibility for dealing with it. If I can put this delicately
and following on from my opening remarks, dealing with the disease
internally in terms of stopping items getting into the food chain
and stopping the spread has led to pretty tight internal movements
and a biosecurity regime relating principally to the farming sector,
and they have felt that the parallel responsibilities of government
in relation to the position at the border has not been as strong
or effective as it might be, and they have been pretty effective
in drawing that to our attention and the attention of the general
public. From their perspective that is a valid comment but we
do have to proceed in co-operation with these other bodies and,
if we are to make any major shift in policy, including allocation
of substantial resources, that needs to be on the basis of sound
science and that is where the risk assessment fits in. On the
rest of the action plan, there is a greater visibility: there
is substantially greater co-ordination and greater intelligence
sharing amongst the agencies. We need another notch-raising of
awareness and we are intending before the big summer holiday rush,
probably next week, to announce a further stage in the public
awareness programme which involves both information at the airports,
with travel agencies, with airlines and at the point of embarkation.
So there will be another significant notching-up of that effort
which I think hitherto has not achieved the level of awareness
I would have liked
181. Minister, I do not want to put words in
your mouth but I want to be clear I understand what you have just
said. We had classical swine fever and we all said it was due
to imported foodin fact, it was some mythical Chinese ham
sandwich at some stage which was blamed for it. We then said foot
and mouth disease had to be caused by imported product because
we did not have the disease, therefore it could only have come
in from outside. Now, you have just said I think that the first
priority of the government was not, in fact, to address imports,
even though you agreed that it was the government who made that
diagnosis that it was caused by imports, but it was to deal with
it once it had got here. Is that what you said: that we needed
to deal with it when it got in the food chain?
(Lord Whitty) No. What I said is that the first priority
was to stop it getting into the food chain. However draconian
the border control, our first priority is to ensure that, if anything
does get through, it does not get into the food chain and the
second priority is, if it does get into the food chain and we
are in a disease situation, to stop the disease spreading, and
that was certainly our major priority for most of last year in
terms of containing the disease.
182. You quoted the example of the pig swill
and you said that the priority was to stop it getting into the
food chain, assuming it had got across the frontier. The implication
I had understood was that it was not your priority to stop it
getting here. It was to cope with it once it is here, not to stop
it getting here.
(Lord Whitty) The Chairman did ask me what the balance
of risk was.
183. Yes. I am going to pursue that; do not
worry. First of all, I know that the Department is strapped for
cash because it always has been and still is but I do not quite
understand why these two operations could not be carried out simultaneously.
Immediately in the aftermath of foot and mouth disease, what steps
were taken?
(Lord Whitty) It is not an either/or situation. The
burden of effort from the outbreak of the disease was clearly
in containing the disease. The immediate regulatory change which
that brought was to stop it getting into the food chain through
the most obvious and direct route which is the pig swill route,
but we did from before and the very early stages of the disease
start taking steps to co-ordinate amongst the various enforcement
authorities to reduce the risk of it getting in and from before
my time raising the regulatory dimension of import controls and
import checks with the European Union from a very early stage
in the outbreak. From memory my colleague Joyce Quin was raising
this issue in March of last year with the European Union and we
have pursued that, so it is not an either/or position. I was saying
that the burden of effort must have been, and there was no alternative,
during last year to stop the spread of the disease, to contain
it and eventually to eradicate it. The lessons from that relate
to stopping things getting into the food chain and minimising
the risk of it getting into the country.
184. Let us agree that neither of us would argue
that you should not have devoted your efforts to stamping out
foot and mouth disease: it is not a proposition that you should
not have been doing that. The point I was making is that it did
seem possible that some other work might have been going on at
the same time. But you said that the NFU had raised the profile
of this and had argued that the government had not been proactive
enough, and you said, "From their point of view I can see
why they did it", but at some stage you are going to have
to turn round to the NFU and say, "We have got this one fettled,
sorted". What degree of checks, what intensity, do you think
would enable you to turn round to the NFU and say, "Given
that we do not wish to bring every airport and port to a halt
and that trade has to carry on, we think we have just about got
actuarily the level of checks to give maximum assurance for an
acceptable degree of disruption"?
(Lord Whitty) I do not know that there is a straight
answer to that. As far as we can get close to it that would follow
on the risk assessment. What I think is not very much in the consciousness
of the public in general and farmers in general is the level of
checks which goes on at the moment, particularly in relation to
the commercial trade, because most meat comes into the country
and most trade comes in through commercial activities, and although
there is a lot of attention on the passenger, and rightly so,
probably the most likely entry is through the commercial trade
one way or another. In that respect there is already a check,
a 1:5 (20 per cent) minimum check, on all meat products that come
into this country and higher for certain species. It is higher
for poultry at the moment. That is partly an EU arrangement and
partly our own enforcement priority so there is a pretty high
level of checking where the bulk of the meat comes in. Where I
think the public and the farmers are conscious of a lack of the
appearance of a high level of checking is in relation to the passenger
traffic, and I think there it is as much a matter of deterrence
as of the actual level of detection that is likely to be achieved
by a high level of checking. Personally, although I need the risk
assessment to prove this to me, I think there should be a higher
level of checking. There is already a higher level of co-ordination
achieved amongst the various agencies since the outbreak, and
I think that if we were simply to move across without proper scientific
base to a different form of checking then it might well have a
minimum impact on the problem.
185. Did you ever in your most private thoughts,
when you were shaving in the morning or whatever, say to yourself,
"There has been a hell of a song and dance about this, the
NFU has gone on about it ad infinitum, yet we have been disease-free
despite all the trade for decades; we have tried to make sure
if anything does happen we have dealt with it at the point of
entry into the food chain; in terms of good old government and
treasury value for money and public expenditure, whatever the
pressures upon us there may not be value for money in simply multiplying
the checks at airports compared with spending the money on R&D"?
(Lord Whitty) Yes. In my more logical private moments
I think there is no point in simply throwing money and resources
at it because you get reducing returns, but what I do think is
that it is quite important to change the atmospherics and the
feeling of both importers and individual passengers, if they come
in, that there is a problem if they are carrying food and that
is why I think a public awareness campaign is very important and
the visibility of checks, which of course is constantly urged
on us by the farming unions and others, is probably an issue and
one that I would wish to tackle. But I wish to tackle it on the
basis of as sound a science as we can establish in this area.
186. We will come to that in a moment but let
us stick with the perception just for now, because some of the
more sexy parts of the action plan lie around sniffer dogs and
X-rays and disposal bins, honesty bins. Where have we got to on
those three measures?
(Lord Whitty) To take sniffer dogs first, clearly
there are some countries which I think relatively recently have
relied quite heavily on sniffer dogs in this area, and we have
in other areasdrugs and explosiveswhich have hitherto
had a higher priority at the point of enforcement. We are now
embarked on an experiment of using sniffer dogs: we have just
started an exercise of training those dogs which will last for
eight weeks and, before the end of the summer, we will have a
presence of sniffer dogs. That will be a pilot and we will have
to see how it works and how effective it is as detection, and
how effective it is in terms of deterrence. On X-rays, there is
of course some degree of X-ray activity already but the normal
X-ray machinery, even with an expert person looking at the screen,
is not very effective at picking out meat as distinct from other
things. There are suggestions that combining earlier forms of
technology can change that but we do not have a validated machine
which could do that. In terms of X-raying whole containers, this
would be an enormous job and one which could only be done on a
fairly limited random basis, even if we were to make the capital
investment without completely disrupting the four million container
consignments that come into the country. I think, therefore, although
there is a role for more X-ray, it is a limited one, one we are
looking at and which we may well wish to take a bit further. I
think the idea that X-rays are going to be a panacea for this
is probably not as valid as some people claim.
187. And honesty bins and discarding your stuff
and boarding cards on aeroplanes may not achieve anything but
they would respond to the kind of cry, "Something must be
done", and the balancing act is what the something is because
it is never going to be one hundred per cent. What is the necessary
deterrent, as it were?
(Lord Whitty) The issue of honesty bins is one where
the jury is still out and we are still discussing it with other
authorities. Hitherto both the airport authorities and Customs
& Excise have not been particularly keen on honesty bins and
I think the reality is they would be symbolic but may be part
of the public awareness campaign. They are unlikely to have huge
effects on the real amount that is coming in but I would not dismiss
their use as part of an overall package. In relation to landing
cards, what we have to recognise as distinct from the situation
in America or New Zealand is that the vast bulk of the incoming
traffic is European-based or has come from a European airport
and is therefore subject to the single market and this does not
apply. With landing cards, once you start discussing what should
be on the landing cards which are there solely at the moment for
immigration purposes, there may well be other questions which
government departments and others would wish to put on the landing
cards. Again, we are still in discussion on that. More directly,
and something which I think we can probably pursue more effectively,
is to persuade the airlines to do as they do in other countries
to make announcements themselves. Part of the next stage of our
public awareness campaign will be to provide in-flight messages.
We will have to get legal authority to enforce it on airlines
but we are hoping they will co-operate on this, and to produce
a video which could be used on long-haul flights, and in travel
agents and in the rather long hours that many passengers spend
waiting in UK airports on the way out. So getting the consciousness
of the incoming passenger raised is important, and that we can
do without necessarily changing the rules on landing cards or
immigration requirements.
188. All these things are fraught with difficulties,
and posters have not been easy either because airports are good
advertising venues and you are competing against others?
(Lord Whitty) Yes. I think there is more we can do
on posters as well. Part of the next campaign will involve posters
on the outgoers because we are particularly aiming at holidaymakers.
On incoming flights, of course, we have not yet but we are about
to put the posters on to the carousels at the main airports. There
is a commercial implication of that for the airport, for us and
for Customs & Excise. This shows I do not travel very much:
I am told by my colleague they have been on the carousels for
a few days now.
189. But what is the commercial implication,
because there is nothing else on the carousel at the moment. If
one is sitting at the airport, there is a big carousel which is
60 yards long, and there is absolutely nothing in the middle of
it. Instead there is a poster, in extraordinarily complicated
language, English only, revolving around the end of the carousel,
so it is competing with two other messages because it is a triangular
poster. What is to stop a bloody great message sitting along the
length of the carousel like it is in Los Angeles to tell people
what they cannot do? What is so offensive about that?
(Lord Whitty) It is not offensive
190. BA is not advertising anything else at
all at the moment. There is nothing there.
(Lord Whitty) I do not think that is true, with respect.
I think there are mainly BAA or Customs & Excise announcements
on that carousel.
191. No, there is nothing there. I looked at
it. We watched it going round. It was fascinatingjust like
old times!
(Lord Whitty) In any case, it is our determination
to get the message on the carousel but there is a cost to the
airport authorities in doing that.
192. Why? What is the cost? What is the cost
of having stiff cardboard
(Lord Whitty) You would have to do that through the
Customs & Excise arrangements that they have with BAA so I
cannot give you a fee. If I can give you more information I will
do it in writing.
193. Yes, please. Let us just locate to Heathrow
because we have been to Heathrow and we have seen the triangular
sign that goes round and it is quite right that on one of them
there is a poster, but we would be interested to know why you
cannot do more on the David Curry model, and what the cost implications
would be.
(Lord Whitty) Yes.
(Ms Wordley) I think it is fair to say that is not
one of the options that we have explored previously so we can
certainly look into whether there is any scope for that.
Mr Curry: That just shows how creative select
committees can be!
194. You said the question of deterrent was
important. We have just had a witness before us who said that
they checked 30 airlines at Gatwick and they found well over a
ton of illegal product but there was no prosecution whatsoever
of any of those people. Where is the deterrent in that?
(Lord Whitty) There are two channels. In the commercial
channel the deterrent is confiscation of the whole load, so there
is a deterrent in that respect. There are, of course, sanctions
in relation to individual travellers as well but there are very
few prosecutions and this is something that we need to address.
There are sanctions in relation to bringing in anything that is
above the legally entitled minimum, or bringing anything that
is illegal through CITES or anything else. But there have been
very few prosecutions, you are quite right.
195. So the reality is the worst that is going
to happen is it will be confiscated?
(Lord Whitty) For most people it has been that.
196. That is the message we send out, is it?
(Lord Whitty) Not on the posters because that says
you are going to be fined £5,000, so we are trying to up
the deterrent effect of prosecutions. However, DEFRA is not an
enforcing agency on the floor but what the enforcement agencies
will say is that catching the people and enforcing the fine is
a diversion of resources, whereas confiscation and deterrence
will be more effective. Now, I have heard this argument in other
contexts and I do not always agree with it, and I do think the
level of prosecutions is rather strangely low and certainly if
we raise the profile and the awareness nobody can say that they
did not know and, even if ignorance is not normally a mitigation
in law, in practice a lot of people will say they did not know
that was the situation and they would be let off. I think that
is the presumption of the prosecuting authorities. I think we
ought to change that presumption by raising the profile in-flight,
on the point of embarkation and when you land, and I think that
we could increase the number of prosecutions that are likely to
be successful.
197. Obviously we are sitting round doing this
investigation today and you are here because of foot and mouth
diseasethat is the essence of it, is it notand very
soon after the outbreak the feeding of swill was banned. If my
memory serves me right, we were considering banning swill before
but there was representation from the industry that stopped it.
(Lord Whitty) Yes.
198. Now it seemed a very easy thing to do to
ban swill, and perhaps you could ask Mr Curry why he did not do
it when he was Minister, but it never was done. Now that we have
stopped that particular way of getting waste product into the
food chain, what is happening to the product that would have gone
to swill? Is there not a danger that it could still illegally
get into the food chain or be put on landfill and be carried across
and still contaminate farm animals?
(Lord Whitty) Yes, there is, but it is a much lesser
danger than if perfectly legal and normal channels of feeding
certain animals were based on catering and food waste which was
the case up until we banned swill, although it had to be treated,
and as you will know the farm where the origin probably occurred
in the original case would have been illegal because they had
not treated the swill and it would have been illegal under pre-existing
rules. So all rules can be broken but we have stopped in legal
terms the most obvious and substantial way in which potentially
diseased food got into the animal food chain.
199. So you considered doing this before, or
the government did, and there was representation from the farming
industry to say that you should not ban swill, is that correct?
(Lord Whitty) Yes. Although by the end most representational
elements of the farming industry had accepted that something needed
to be done and there were European developments in parallel. But
it is still a matter of some resentment in parts of the farming
sector
|
https://www.publications.parliament.uk/pa/cm200102/cmselect/cmenvfru/968/2070211.htm
|
CC-MAIN-2017-09
|
refinedweb
| 3,658
| 59.37
|
Re: how to get the best sound quality
Expand Messages
- This question comes up often, and since it really boils down to a sound
comparison, here's an idea for anyone who cares to participate:
1) Take a song you have recorded that really shows off your preferred sound
module or soft synth
2) make an mp3 out of it
3) post it to the files section under a DEMO directory
4) write a short description of any special techniques, BIAB styles or
sound module patches used to achieve the sound
5) if you want to go the extra mile, post snippets of the song at different
stages
ie, raw BIAB output, and again as doctored midi sequence, and maybe again with
acoustic audio tracks added
6) you may even consider posting the final midi sequence so others can see
and examine the use of continuous controllers to enhance the song
There are lots of techniques & technologies that work together in the music
production
arena, and it could be interesting to see how BIAB can lay a solid
groundwork for
an end product which many people might not think is possible on a home PC
The combination of midi, looping and acoustic audio recording works for me.
I look forward to seeing (hearing, actually) what works for others.
If it is true that a picture is worth a thousand words, then perhaps it is
also true
that a song is worth a thousand HOW-TO posts
Pat Marr
who will start working on a demo for the Roland XV-3080
(or not, depending on time constraints)
- At 01:19 PM 3/31/2006, "Chris Laarman" <v.c.laarman@...> wrote:
><...>that it is all but useless to use the best-soundingI have to agree. Band-in-a-Box has a resolution
>synthesizers with BiaB, and that it is far more rewarding to export songs
>from BiaB to MIDI, then tweak those files in a sequencer, ultimately use
>some software synthesizer (or even an array of these) to render your file
>(to speakers or file).
of 120ppq (pulses per quarter note) and if the
style is written with the drum grid the
resolution of the drums is 4ppq with a very
limited special case resolution of 8ppq (note:
all Norton Music disks starting with disk 8 use
the "live drum" feature which allows a full GMidi
drum kit at 120ppq resolution).
IMHO 240ppq is the minimum amount for truly
expressive music. To make things worse, some of
the styles appear to be 100% quantized, no groove
at all. These are useless to me, unless they are
one of the few types of music that should be
quantized (techno, trance, some disco, etc.). YMMV
Band-in-a-Box also is lacking in continuous
controller support, many synthesizers support
different continuous controllers making the use
of some in BiaB not practical (BiaB doesn't know
what synth or sound card you are using), and in
addition, the fact that many patterns in BiaB
must serve multiple uses, the style writer is
also limited in the usage of the continuous controllers he/she can use.
An instrument gets its expression not only from
the sound it makes, but IMHO, more importantly
from the way it treats the notes. I'd rather hear
a lame sound with good emulative expression than
a great sound played without the proper expression.
Play a guitar patch or a sax patch like a piano
(note on and note off), and you won't fool
anyone. Conversely, play a near-perfect piano
patch with the scoops, variable vibrato, sustain,
volume changes while sustaining, and various
kinds of distortion that a sax player uses, and
it will definitely not sound like a piano.
Add to that the fact that a patch of the same
name will react very differently in two different
synthesizers and you will find that BiaB or any
other auto-accompaniment application simply
cannot create extremely realistic sounding
instruments, no matter what sound source you are
using. They have to be more "generic".
But by exporting your BiaB song into a software
sequencer (Power Tracks Pro, Sonar, Cubase,
Master Tracks Pro or whatever) you can refine the
expressive devices that are in BiaB to your sound
module plus add the expressive devices that BiaB
ignores into the parts -- for example, the
scoops, etc., into sax parts, -- the swells,
etc,. into brass parts, -- the hammer ons, etc.,
into guitar parts, -- the glissandos, etc., into trombone parts, and so on.
Granted, it's not instant gratification, but then
there are not many worthy art forms that are
instant gratification. Even the easiest thing we
can do, sing, takes a lot of practice, a lot of
learning, and a lot of technique to master. You
don't come out singing like Mark Murphy, Mariah
Carey, Luther Vandross or Rene Fleming without
paying your dues. It takes a lot of practice and
knowledge to be a Da Vinci, Rembrandt, Hendrix, Dvorak, Elfman, or BT.
Now I don't want to sound as if I am unhappy with
BiaB. Nothing could be farther from the truth. I
think the output of BiaB (especially when using
the better styles) is perfectly adequate for many
uses, especially amateur and some part-time
professional musicians. But if you want to make
the difference between adequate and excellent,.
But don't get discouraged. Although I studied
arranging in school, I learned more about
arranging in a software sequencer that it is
possible to be taught in school. School and all
the theory and arranging books are great, they
give you the "left brain" knowledge that you can
utilize to make good arrangements, but a
sequencer and multi-timbral sound module lets you
experiment with what you learned and instantly
hear what it sounds like. Sure, it isn't a real
orchestra, but who can afford a real orchestra to
be on your payroll just to hear what your
arrangements are going to sound like.
So the sequencer becomes a learning tool as well
as a performance tool. You can never know too much about music.
My formula:
1) If there is an appropriate style, I will start
my sequence in BiaB (if no appropriate style for
the particular song, I'll skip to step 3)
2) Refine the song in BiaB and then save and
export it in one or more different styles (so I can mix and match)
3) Either start the sequence from scratch, or
import either an instrument, multiple instruments
from one or more different BiaB arrangements, or
an entire BiaB sequence and assign the
instruments ***. I use either a keyboard, wind
controller or drum controller to input the parts into the sequencer.
*** When I assign the instruments, I use my
sequencer and an array of different sound modules
and samplers, picking what I think is the most
appropriate voice for each part and for that particular song.
4) Add song-specific parts that are usually not
in the more generic BiaB sequences.
5) Add the expressive devices to the instrument parts.
6) Balance, "Master" (get them to sound about the
same volume of my other sequences) and then record them to a WAV file
7) Turn the WAV file into a 192kbps mp3 using CDex (LAME encoder).
8) Put the mp3 file on my laptop and use it on stage
Once again, that sounds like hard work. It's time
consuming and brain involving, but it isn't hard,
in fact, when listening to what has been
achieved, the work is most enjoyable. I can work
at the computer with my music and hours go by
without me looking at the clock. When I'm done, I
can't believe the time went by that quickly. To
me, that means I'm involved in life.
I play in a duo () and I know that
if I have chosen the song wisely, and if the
audience reacts the way I hope it will to the
song, I am going to perform the song thousands of
times. If I hear something night after night that
I know that I could have "fixed", it would bug me forever.
Plus the time I spend gives me other rewards. The
Sophisticats work steadily and the two of us
makes as much as some 5 piece groups in the area.
So don't be afraid of the sequencer. The
combination of BiaB and a sequencer is very powerful indeed.
Insights and incites by Notes
>>>»»»O«««<<<Bob "Notes" Norton owner, Norton Music
BiaB user styles with live entered parts for that
live music groove for musicians who want BiaB to
sound like real musicians and not robots.
- Bob 'Notes' Norton (norton@...), Saturday, April 01, 2006 6:32
PM
> An instrument gets its expression not only fromAt this point I would like those interested to listen to the demo songs of
> the sound it makes, but IMHO, more importantly
> from the way it treats the notes. I'd rather hear
> a lame sound with good emulative expression than
> a great sound played without the proper expression.
software synthesizers (sorry, Bob) ;-) like Synful or (Kontakt-based)
Garritan. However, as these products are limited in scope, I would like to
remember users of Sonar of the included Style Enhancer Lite (actually part
of Onyx Arranger).
> 2) Refine the song in BiaB and then save andGreat tip! (Why didn't I think of this?)
> export it in one or more different styles (so I can mix and match)
(Novices: this way requires a proper sequencer, which BiaB is not, or
wetting your feet in juggling with StyleMaker inside BiaB - but you'll still
want to edit things in a sequencer.)
--
Chris Laarman
- Have you tried the "Musica Teoria" sound font with your SoundBlaster Live!
card? That’s the cheapest and best update you can do for BIAB use. I
downloaded several different font packages, but this one stood out as the
best.
I downloaded it from
-----Original Message-----
From: Band-in-a-Box@yahoogroups.com [mailto:Band-in-a-Box@yahoogroups.com]
On Behalf Of Dave Hoskins
Sent: 31. mars 2006 20:26
To: Band-in-a-Box@yahoogroups.com
Subject: [Band-in-a-Box] Re: how to get the best sound quality
Hi Jason
Now that 2006 has been implemented with VST there are many options
that you can use,the only problem with VST modules is the length of
time to set them up,where audigy with some good soundfonts is my
favorite at the moment because you load your song and play without to
much setting up,but vst like Sampletank 2 is tops
regards Dave Hoskins
- --- In Band-in-a-Box@yahoogroups.com, Bob 'Notes' Norton <norton@...>
wrote:
"............................
When I'm done, I can't believe the time went by that quickly. To me,
that means I'm involved in life........
So don't be afraid of the sequencer. The combination of BiaB and a
sequencer is very powerful indeed.".
- At 01:48 PM 4/10/2006, "P.R. Merrill" <prsings@...> wrote:
><...>Thanks so much for the kind words.
.
If you care to post it anywhere, you definitely have my permission.
<slight rant>
It seems that so many people are into instant
gratification these days. For some things instant
gratification is fine, but for making any kind of
art, making the art is the process and it
shouldn't be rushed. If Leo had rushed the Mona
Lisa, it wouldn't be hanging in Paris this day.
If Beethoven had rushed his symphonies, we
wouldn't want to hear them today. Take your time,
enjoy the process of making the art, enjoy the
process of learning how to improve your art, and
I think in the end you will find your art more rewarding.
</slight rant>
Insights and incites by Notes
---===o0O0o===---
Bob "Notes" Norton owner, Norton Music
Download your FREE BiaB song file of the week
(usually from one of my fake disks)
- Thanks for all the replies. I've been exporting the midi to Cakewalk
and can get something that sounds decent. I still have a lot to learn
about midi and working with soft synths though. ;-)
--- In Band-in-a-Box@yahoogroups.com, "Jason Brown" <JasonB5232@...>
wrote:
>
> Hi, I'm new to the group and new to BIAB.
>
> Wanted to find out if anyone has any opinions/comments/suggestions on
> getting the best midi sound quality? I have tried my SoundBlaster
> Live! and a software synth and both are acceptable, but not great.
>
> Thanks.
>
> Jason
>
- Hey Bob, I enjoyed you essay and your "slight rant", rich sentiments
indeed and beautifully expressed in a very "true" voice.
To pervert Billy Shakespeare: If ranting be the fruit of life rave on
(and I love the way you are on first name terms with "Leo" Davinci).
Keep em coming.
:0)
Your message has been successfully submitted and would be delivered to recipients shortly.
|
https://groups.yahoo.com/neo/groups/Band-in-a-Box/conversations/topics/25556
|
CC-MAIN-2015-18
|
refinedweb
| 2,152
| 65.86
|
I am trying to create a program that is transferring a value to a function only to have that function modify and then return that value. The program is very simple but the tricky part is that I am trying to spread the code over 3 different files, 1 header file and then 2 C++ source files. The header file is containing the decleariton of the function, the first source file contains the calling of the function and the last source file contains the modification aswell as the return value within the function.
There might be several reasons to why this error occurs but I thought I would try to ask you guys since I dont know what I am doing wrong.
The code on the header file is the following:
#ifndef G1_H_INCLUDED #define G1_H_INCLUDED int add(int x); #endif // G1_H_INCLUDED
The code on the first C++ source file is the following:
#include <iostream> #include "g1.h" using namespace std; int main() { int a(5), b; b = add(a); cout << "b = " << b << endl; }
And finally, the code of the second C++ source file is this:
#include <iostream> #include "g1.h" using namespace std; int add(int x) { return x+5; };
The problem that I am having is that there is an error in the first C++ source file(the one with the main in it). Specifically
at
b = add(a);. The error says that unfedined refrence to add(int). So basically what is happening is that the main part of the program in the first source file cant use the function when it is called. The only reason to remove the error was to put
int add(int x) { return x+5; };
in the header file, but that isnt correct according to the assignment that I am working on. Any advise to why I cant call the function in the main?
|
http://www.dreamincode.net/forums/topic/409700-functions-in-multiple-files/
|
CC-MAIN-2018-13
|
refinedweb
| 311
| 71.58
|
Saving time with a pythonic API of lxml.objectify
Over at the #pyugat IRC channel - Python User Group Austria - there was a question how Zato handles XML namespaces given that working with SOAP doesn’t require juggling any.
So for instance, given this SOAP request a typical Zato service can just freely access the elements without any particular effort.
{% gist 6369147 %}
{% gist 6369159 %}
{% gist 6369214 %}
The trick is, Zato uses lxml as its underlying XML parsing library and what can strike one as a smack of pure brilliancy, lxml.objectify’s default mode of operation is to assume any child elements are in the same namespace their parent is.
It may seem nothing big at first but this is exactly the feature that makes working with SOAP in lxml as elegant (pythonic) as it can possibly get.
90% of time one will be working in the same namespace so why remember about it at all?
Yes, namespaces come in handy and yes, there can be multiple namespaces in one document but given that the prevailing majority of SOAP processing is to do with the single one namespace this particular document’s business payload is in, why not forget about the whole thing?
Not having to deal with it at all was an excellent idea on lxml’s part that greatly reduces time spent on development and improves the resulting code’s readability, hence lowering the total maintenance costs.
The code above looks like accessing regular Python objects, or perhaps JSON. There is nothing really XML-specific to it.
This is what Zato actually does - it finds the first child in soapenv:Body, turns it into a Python object and makes it available to a service in self.request.input.
If you’re coming to Zato or Python with background in other programming languages, this single feature will make SOAP processing a new experience for you.
What if you really need to access multiple namespaces? This, of course, is possible, as in the example below (which is a standalone program, not a Zato service, but the same thing can be done in Zato too).
{% gist 6369792 %}
{% gist 6369815 %}
|
https://zato.io/blog/posts/saving-time-with-a-pythonic-api-of-lxmlobjectify.html
|
CC-MAIN-2022-27
|
refinedweb
| 357
| 58.01
|
This interface is an SCF interface for encapsulating csObject. More...
#include <iutil/object.h>
Add a name change listener.
Look for a child object that implements the given interface.
You can optionally pass a name to look for..
Return the first child object with the given name.
Get the unique ID associated with this object.
Return an iterator for all child objects.
Note that you should not remove child objects while iterating.
Query object name.
Add all child objects of the given object.
Remove all child objects.
Remove a name change listener.
Set object name.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1
|
http://www.crystalspace3d.org/docs/online/api-1.4/structiObject.html
|
CC-MAIN-2014-52
|
refinedweb
| 118
| 63.86
|
#include <fame.h> void fame_start_frame(fame_context_t *context, fame_yuv_t *yuv, unsigned char *shape);.
context is the context handle previously returned by fame_open
yuv is a pointer to the input frame. Currently, only YV12 planar format, also called YUV 4:2:0, is supported. The YV12 planar format consists in three plane, one for the Y (luminance) and two for the Cr and Cb (chrominance) components, the chrominance planes being subsampled by 2x2. These three planes are mapped linearly in memory:
width yuv -> +=========+ | | | Y | height | | +====+====+ | Cr | height / 2 +====+ | Cb | height / 2 +====+ width / 2
The process of converting RGB pictures to YUV12 will not be detailled here.
shape represents the shape in case of video with arbitrary shape. It consists in a bitmap of width x height bytes, with 255 representing an opaque pixel and 0 representing a transparent pixel. Values between 0 and 255 are not supported yet. For rectangular video, this parameter must be set to NULL.
|
http://www.linuxmanpages.com/man3/fame_start_frame.3.php
|
crawl-003
|
refinedweb
| 156
| 62.78
|
07 September 2012 17:20 [Source: ICIS news]
WASHINGTON (ICIS)--?xml:namespace>
In its monthly jobs report, the department said that the chemicals industry shed about 300 jobs last month, dropping the total industry workforce to 797,400.
The August decline in chemicals industry hiring followed two months of modest jobs growth for the sector.
In contrast, the plastics and rubber products industry saw a net gain of 800 jobs in August, bringing the sector’s total workforce to 650,400.
Overall, the department’s employment report showed a weakening jobs picture for the nation, with only 96,000 new hires last month, a figure well below expectations.
The department also revised downward
The July figure was revised from the original report of 163,000 new jobs down to 141
|
http://www.icis.com/Articles/2012/09/07/9593888/us-chemicals-sector-jobs-decline-in-august-but-plastics-see-gains.html
|
CC-MAIN-2015-06
|
refinedweb
| 130
| 50.97
|
Let's make our applet a little more interactive, shall we? The
following improvement, HelloWeb2, allows us to drag
the message around with the mouse. HelloWeb2 is
also customizable. It takes the text of its message from a parameter
of the <applet> tag of the HTML document.
HelloWeb2 is a new applet--another subclass
of the Applet class. In that sense, it's a
sibling of HelloWeb. Having just seen inheritance
at work, you might wonder why we aren't creating a subclass of
HelloWeb and exploiting inheritance to build upon
our previous example and extend its functionality. Well, in this
case, that would not necessarily be an advantage, and for clarity we
simply start over.[2]
Here is HelloWeb2:
[2]
You are left to consider whether such a subclassing would even make
sense. Should HelloWeb2 really be a kind of
HelloWeb? Are we looking for refinement or just
code reuse?
[2]
You are left to consider whether such a subclassing would even make
sense. Should HelloWeb2 really be a kind of
HelloWeb? Are we looking for refinement or just
code reuse?
import java.applet.Applet;
import java.awt.*;
import java.awt.event.*;
public class HelloWeb2 extends Applet implements MouseMotionListener {
int messageX = 125, messageY = 95;
String theMessage;
public void init() {
theMessage = getParameter("message");
addMouseMotionListener(this);
}
public void paint( Graphics graphics ) {
graphics.drawString( theMessage, messageX, messageY );
}
public void mouseDragged( MouseEvent e ) {
messageX = e.getX();
messageY = e.getY();
repaint();
}
public void mouseMoved( MouseEvent e ) { }
}
Place the text of this example in a file called
HelloWeb2.java and compile it as before. You
should get a new class file, HelloWeb2.class, as
a result. We need to create a new <applet> tag for
HelloWeb2. You can either create another
HTML document (copy
HelloWeb.html and modify it) or simply add a
second <applet> tag to the existing
HelloWeb.html file. The <applet>
tag for HelloWeb2 has to use a parameter; it should
look like:
<applet code=HelloWeb2 width=300 height=200>
<param name="message" value="Hello Web!" >
</applet>
Feel free to substitute your own salacious comment for the value of
Run this applet in your Java-enabled Web browser, and enjoy many hours
of fun, dragging the text around with your mouse.
So, what have we added? First you may notice that a few lines
are now hovering above our class:
import java.applet.Applet;
import java.awt.*;
import java.awt.event.*;
public class HelloWeb2 extends Applet implements MouseMotionListener {
...
The import statement lists external classes to use
in this file and tells the compiler where to look for them. In our
first HellowWeb example, we designated the Applet
class as the superclass of
HelloWeb. Applet was not defined
by us and the compiler therefore had to look elsewhere for it. In
that case, we referred to Applet by its fully
qualified name, java.applet.Applet, which told the
compiler that Applet belongs to the
java.applet package so it knew where to find it.
The import statement is really just a convenience; by
importing java.applet.Applet in our newer example,
we tell the compiler up front we are using this class and,
thereafter in this file, we can simply refer to it as
Applet. The second import statement makes use of
the wildcard "*" to tell the compiler to import all of
the classes in the java.awt package. But
don't worry, the compiled code doesn't contain any excess
baggage. Java doesn't do things like that. In fact, compiled
Java classes don't contain other classes at all; they simply
note their relationships. Our current example uses only the
java.awt.Graphics class. However, we are
anticipating using several more classes from this package in the
upcoming examples. We also import all the classes from the package
java.awt.event; these classes provide the
Event objects that we use to communicate
with the user. By listening for events, we find out when the user
moved the mouse, clicked a button, and so on. Notice that importing
java.awt.* doesn't automatically import
the event package. The star only imports the classes in a particular
package, not other packages.
The import statement may seem a bit like the C or C++
preprocessor #include statement, which injects
header files into programs at the appropriate places. This is not
true; there are no header files in Java. Unlike compiled C or C++
libraries, Java binary class files contain all necessary type
information about the classes, methods, and variables they
contain, obviating the need for prototyping.
We have added some variables to our example:
public class HelloWeb2 extends Applet {
int messageX = 125, messageY = 95;
String theMessage;
...
messageX and messageY are
integers that hold the current coordinates of our movable message.
They are initialized to default values, which should place a message
of our length somewhere near the center of the applet. Java integers
are always 32-bit signed numbers. There is no fretting about what
architecture your code is running on; numeric types in Java are
precisely defined. The variable theMessage is of
type String and can hold instances of the
String class.
You should note that these three variables are declared inside the
braces of the class definition, but not inside any particular method
in that class. These variables are called instance
variables because they belong to the entire class, and
copies of them appear in each separate instance of the class.
Instance variables are always visible (usable) in any of the methods
inside their class. Depending on their modifiers, they may also be
accessible from outside the class.
Unless otherwise initialized, instance variables are set to a default
value of 0 (zero), false, or
null. Numeric types are set to zero,
boolean variables are set to
false, and class type variables always have their
value set to null, which means "no
value." Attempting to use an object with a
null value results in a run-time error.
Instance variables differ from method arguments and other variables
that are declared inside of a single will generate a compile-time error. Local
variables live only as long as the method is executing and then
disappear (which is fine since nothing outside of the method can see
them anyway). Each time the method is invoked, its local variables
are recreated and must be assigned values.
We have made some changes to our previously stodgy
paint() method. All of the arguments in the call
to drawString() are now variables.
Several new methods have appeared in our class. Like
paint(), these are methods of the base
Applet class we override to add our own
functionality.
init() is an important method of the
Applet class. It's called once, when our applet is
created, to give us an opportunity to do any work needed to set up
shop. init() is a good place to allocate resources
and perform other activities that need happen only once in the
lifetime of the applet. A Java-enabled Web browser calls
init() when it prepares to place the
Applet on a page.
Our overridden init() method does two things;
it sets the text of the theMessage instance
variable, and it tells the system "Hey, I'm interested in anything
that happens involving the mouse":
public void init() {
theMessage = getParameter("message");
addMouseMotionListener(this);
}
When an applet is instantiated, the parameters given in the
<applet> tag of the HTML document
are stored in a table and made available through the
getParameter() method. Given the name of a
parameter, this method returns the value as a
String object. If the name is not found, it
returns a null value.
So what, you may ask, is the type of the argument to the
getParameter() method? It, too, is a
String. With a little magic from the Java
compiler, quoted strings in Java source code are turned into
String objects. A bit of funny-business is going
on here, but it's simply for convenience. (See Chapter 7, Basic Utility Classes for a complete discussion of the
String class.)
getParameter() is a public method we inherited
from the Applet class. We can use it from any of
our methods. Note that the getParameter() method
is invoked directly by name; there is no object name prepended to it
with a dot. If a method exists in our class, or is inherited from a
superclass, we can call it directly by name.
In addition, we can use a special read-only variable, called
this, to explicitly refer to our object. A method
can use this to refer to the instance of the object
that holds it. The following two statements are therefore equivalent:
theMessage = getParameter("message");
or
theMessage = this.getParameter("message");
I'll always use the shorter form. We will need the
this variable later when we have to pass a
reference to our object to a method in another class. We often do
this so that methods in another class can give us a call back later or can watch our public variables.
The other method that we call in init() is
addMouseMotionListener(). This method is
part of the event mechanism, which we discuss next.
The last two methods of HelloWeb2 let us
get information from the mouse. Each time the user performs an action,
such as hitting a
key on the keyboard, moving the mouse, or perhaps banging his or her
head against a touch-sensitive screen, Java generates an
event. An event represents an action that has
occurred; it contains information about the action, such as its time
and location. Most events are associated with a particular
graphical user interface
(GUI) component in an application. A keystroke, for
instance, could correspond to a character being typed into a
particular text entry field. Pressing a mouse button could cause a
certain graphical button on the screen to activate. Even just moving
the mouse within a certain area of the screen could be intended to
trigger effects such as highlighting or changing the cursor to a
special mouse cursor.
The way events work is one of the biggest changes between Java 1.0 and
Java 1.1. We're going to talk about the Java 1.1 events only; they're
a big improvement, and there's no sense in learning yesterday's news.
In Java 1.1, there are many different event classes:
MouseEvent,
KeyEvent,
ActionEvent, and many others. For the most
part, the meaning of these events is fairly intuitive. A
MouseEvent occurs when the user does
something with the mouse, a KeyEvent
occurs when the user types a key, and so on.
ActionEvent is a little special; we'll see
it at work in our third applet. For now, we'll focus on dealing with a
MouseEvent.
The various GUI components in Java generate events. For example, if
you click the mouse inside an applet, the applet generates a mouse
event. (In the future, we will probably see events as a general
purpose way to communicate between Java objects; for the moment, let's
limit ourselves to the simplest case.)
In Java 1.1, any object can ask to receive the events generated by
some other
component. We call the object that wants to receive events a
"listener." To declare that it wants to receive some component's
mouse motion events, you call that component's
addMouseMotionListener() method. That's what our
applet is doing in init(). In this case,
the applet is calling its own
addMouseMotionListener() method, with the
argument this, meaning "I want to receive
my own mouse motion events." (For the time being, we're ignoring a
minor distinction between "mouse events" and "mouse motion events."
Suffice it to say that in this example, we only care about mouse
motions.)
That's how we register to receive events. But how do we actually get
them? That's what the two remaining methods in our applet are for.
The mouseDragged() method is called
automatically whenever the user drags the mouse--that is, moves the
mouse with any button pressed. The
mouseMoved() method is called whenever the
user moves the mouse without pressing a button. Our
mouseMoved() method is simple: it doesn't
do anything. We're ignoring simple mouse motions.
mouseDragged() has a bit more meat to it.
It is called repeatedly. These are saved in the
messageX and
messageY instance variables. Now, having
changed the coordinates for the message, we would like
HelloWeb2 to redraw itself. We do this by
calling repaint(), which asks the system
to redraw the screen at a later time. We can't call
paint() directly because we don't have a
graphics context to pass to it.
The real beauty of this event model is that you only have to handle
the kinds of events you want. If you don't care about keyboard events,
you just don't register a listener for them; the user can type all he
or she wants, and you won't be bothered. Java doesn't go around asking
potential recipients whether they might be interested in some event,
as it did in older versions. If there are no
listeners for a particular kind of event, Java won't even generate it.
The result is that event handling in Java 1.1 is quite efficient.
I've danced around one question that should be bothering you by now:
how does the system know to call
mouseDragged() and
mouseMoved()? And why
do we have to supply a mouseMoved() method
that doesn't do anything? The
answer to these questions has to do with "interfaces." We'll discuss
interfaces after clearing up some unfinished business with
repaint().
We can use the repaint() method of the
Applet class to request our applet be
redrawn. repaint() causes the Java windowing system
to schedule a call to our paint() method at the
next possible time; Java supplies the necessary
Graphics object, as shown in Figure 2.5.
This mode of operation isn't just an inconvenience brought about
by not having the right graphics context handy at the moment. The
foremost advantage to this mode of operation is that the repainting of the painting functionality can be kept in our
paint() method; we aren't tempted to spread
it throughout the application.
Now it's time to face up to the question I avoided earlier: how does
the system know to call mouseDragged()
when a mouse event occurs? Is it simply a matter of knowing that
mouseDragged() is some magic name that our
event handling method must have? No; the answer to the question lies
in the discussion of interfaces, which are one of the most important
features of the Java language.
The first sign of an interface comes on the line of code that
introduces the HelloWeb2 applet: we say
that the applet implements
MouseMotionListener.
MouseMotionListener
is an interface that the applet implements. Essentially, it's a list
of methods that the applet must have; this particular interface
requires our applet to have methods called
mouseDragged() and
mouseMoved(). The interface doesn't say
what these methods have to do--and indeed,
mouseMoved() doesn't do anything. It does
say that the methods must take a
MouseEvent as an argument and return
void (i.e., no return value).
Another way of looking at an interface is as a contract between you,
the code developer, and the compiler. By saying that your applet
implements the MouseMotionListener
interface, you're saying that these methods will be available for
other parts of the system to call. If you don't provide them, the
compiler will notice and give you an error message.
But that's not the only impact interfaces have on this program. An
interface also acts like a class. For example, a method could return a
MouseMotionListener, or take a
MouseMotionListener as an argument. This
means that you don't care about the object's class; the only
requirement is that the object implement the given interface. In fact,
that's exactly what the method
addMouseMotionListener() does. If you look
up the documentation for this method, you'll find that it takes a
MouseMotionListener as an argument. The
argument we pass is this, the applet
itself. The fact that it's an applet is irrelevant, it could be a
Cookie, an
Aardvark, or any other class we dream
up. What.
In other languages, you'd handle this problem by passing a function
pointer; for example, in C, the argument to
addMouseMotionListener() might be a
pointer to the function you want to have called when an event
occurs. This technique is called a "callback." For security reasons,
the Java language has eliminated function pointers. Instead, we use
interfaces to make contracts between classes and the compiler. (Some
new features of the language make it easier to do something similar to
a callback, but that's beyond the present discussion.)
The Java distribution comes with many interfaces that define what
classes have to do in various situations. Furthermore,
in Chapter 5, Objects in Java, you'll see how to define your own interfaces. It turns
out that this idea of a contract between the compiler and a class is
very important. There are many situations like the one we just saw,
where you don't care what class something is, you just care that it
has some capability, like listening for mouse events. Interfaces give
you a way of acting on objects based on their capabilities, without
knowing or caring about their actual type.
Furthermore, interfaces provide an important escape clause to the rule
that any new class can only extend a single class (formally called
"single inheritance"). They provide most of the advantages of multiple
inheritance (a feature of languages like C++) without the confusion.
A class in Java can only extend one class, but can implement as many
interfaces as it wants; our next applet will implement two interfaces,
and the final example in this chapter will implement three. In many
ways, interfaces are almost like classes, but not quite. They can be
used as data types, they can even extend other interfaces (but not
classes), and can be inherited by classes (if class A implements
interface B, subclasses of A also implement B). The crucial
difference is that applets don't actually inherit methods from
interfaces; the interfaces only specify the methods the applet must
have.
|
https://docstore.mik.ua/orelly/java/exp/ch02_02.htm
|
CC-MAIN-2019-30
|
refinedweb
| 3,050
| 64.2
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.