Quartz Scheduler – Introduction
Quartz Scheduler is an open-source job scheduling library that can work with any Java application to create simple or complex CRON schedules for executing a large number of jobs. The Quartz Scheduler also includes many enterprise-class features, such as support for JTA transactions and clustering.
To put it simply, if your application has tasks that need to occur at given moments in time, or if it has recurring maintenance jobs, then Quartz may be your ideal solution. The Quartz scheduler provides out-of-the-box job scheduling via a Java API, supporting CRON schedules, simple recurrence schedules, and even single-run schedules.
Download and Install Quartz libraries
To set up a quartz scheduler in your plain java application, you need to download the latest stable release distribution. Locate and add the quartz-***.jar available under lib directory to your application classpath. Also, quartz depends on a couple of libraries that are also available as part of the distribution. Make sure you add them as well.
Alternatively, you could use maven to manage quartz libraries for you. Just add the following dependency to your maven project.
<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>2.3.2</version>
</dependency>

<!-- Optional logging dependency: add this if you don't see quartz logs in your application -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
I recommend the Maven-based approach, as it automatically fetches the needed jars and adds them to the application classpath.
Create a quartz.properties file
By default, the Quartz scheduler doesn't require a properties file to work. However, you need one if you want to override the default configuration. To configure the scheduler, first add a
quartz.properties file to your classpath. For example, take a look at this configuration that uses an in-memory job store.
org.quartz.scheduler.instanceName = custom-scheduler
org.quartz.threadPool.threadCount = 10
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
In the above example, we named our scheduler, set the thread count to 10, and let Quartz store its data in RAM.
Sample Application that uses Quartz Scheduler
To test the setup, Let’s write a simple Java Application that creates a scheduler.
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzApplication {
    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.shutdown();
    }
}
When you run this program, You would see the following output showing the quartz scheduler starting and immediately shutting down as there are no jobs in the scheduler.
Here the
start and
shutdown methods are important. They are the entry and exit points of the Quartz scheduler instance, respectively. There is also a
standby method that temporarily stops the scheduler from firing triggers. You should invoke this method as shown here.
scheduler.standby();
By default, the Scheduler instantiates in standby mode. The scheduler will begin firing jobs only after the start method is called.
Also, The Scheduler cannot be restarted after shutdown() has been called.
Understand the Quartz Scheduler API
There are a few main classes and interfaces you should get familiar with: Scheduler, Job, JobDetail, Trigger, and the JobBuilder/TriggerBuilder helper classes. The following sections walk through each of them.
Write a Job Class
As we saw earlier, the first step in scheduling a piece of code is to create a job for it. For example, here is a
HelloWorldJob that prints data from the job detail.
package com.springhow.examples.quartz;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloWorldJob implements Job {

    private static final Logger logger = LoggerFactory.getLogger(HelloWorldJob.class);

    public void execute(JobExecutionContext context) {
        String who = context.getJobDetail().getJobDataMap().getString("who");
        logger.info("Hello {}!", who);
    }
}
Here, the job has access to the
JobExecutionContext, which holds all necessary metadata about the current run.
Create a JobDetail
Next, you should create an appropriate JobDetail to submit to the scheduler. You can do this by using the JobBuilder class.
JobDetail jobDetail = JobBuilder.newJob(HelloWorldJob.class)
    .withIdentity("my-first-job")
    .usingJobData("who", "World!")
    .build();
As you see, the example creates a job with the name "my-first-job" and uses
HelloWorldJob as the job template. Also, we are passing job data so that we can access it through the context in the execute method of the job.
Also note that when an identity is not specified, the Quartz scheduler auto-generates a UUID. You can also provide a group name for the job in the
withIdentity method.
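As a sketch of the group-name variant (the group name "reporting-jobs" here is just an example, not something from the original article):

```java
// Same job as before, but now placed in an explicit group via the
// two-argument withIdentity overload. If no identity is given at all,
// Quartz generates a unique name automatically.
JobDetail groupedJob = JobBuilder.newJob(HelloWorldJob.class)
    .withIdentity("my-first-job", "reporting-jobs")
    .usingJobData("who", "World!")
    .build();
```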
Create a Trigger
For a job to execute, the scheduler should know when to invoke it. This information comes from Trigger objects. A trigger describes either a simple recurrence schedule or a CRON schedule. For example, we could create a simple recurring schedule as shown below.
Trigger trigger = TriggerBuilder.newTrigger()
    .withIdentity("my-first-trigger")
    .startNow()
    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
        .withIntervalInSeconds(3)
        .repeatForever())
    .build();
Cron Trigger
You could also create a CRON trigger, as shown in this snippet. For example, the trigger below fires every even minute between 9 AM and 6 PM on weekdays.
Trigger trigger = TriggerBuilder.newTrigger()
    .withIdentity("my-cron-trigger")
    .withSchedule(CronScheduleBuilder.cronSchedule("0 0/2 9-18 ? * MON-FRI"))
    .build();
Schedule The Job with Trigger
Finally, you need to let the Quartz scheduler know which trigger fires which job. In this case, we are scheduling "my-first-job" using "my-first-trigger".
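The call itself is a one-liner on the Scheduler interface (a sketch, assuming the jobDetail and trigger objects built in the earlier snippets):

```java
// Associate the trigger with the job and hand both to the scheduler.
// The job starts firing on the trigger's schedule once
// scheduler.start() has been called.
scheduler.scheduleJob(jobDetail, trigger);
```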
As you see, the job gets called every three seconds, as we configured in the simple trigger.
Remove a Quartz job from the scheduler
Sometimes, you might want to remove the quartz job from the scheduler. You can do this in two ways.
Remove the trigger associated with a job.
scheduler.unscheduleJob(new TriggerKey("my-first-trigger"));
Note that there may be more than one trigger for a given job. For example, a job in the Quartz scheduler may have one simple trigger that runs every day and another cron trigger that runs only on weekends.
Remove the job itself from the scheduler by deleting it.
scheduler.deleteJob(new JobKey("my-first-job"));
Note that this approach deletes the job as well as the triggers associated with it. This way, you don't have to delete the triggers separately.
Listing jobs and triggers
The quartz java API allows developers to query the jobs and triggers within a given scheduler instance. For instance, you could query all jobs within a scheduler as shown in this snippet.
for (String group : scheduler.getJobGroupNames()) {
    for (JobKey jobKey : scheduler.getJobKeys(groupEquals(group))) {
        logger.info(jobKey.toString());
    }
}
Similarly, we can also query all triggers within a scheduler. Just use
getTriggerGroupNames and
getTriggerKeys instead.
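A sketch of the trigger-side equivalent of the job loop above (same pattern, using the GroupMatcher helper from org.quartz.impl.matchers):

```java
// Enumerate every trigger in the scheduler, group by group.
for (String group : scheduler.getTriggerGroupNames()) {
    for (TriggerKey triggerKey : scheduler.getTriggerKeys(GroupMatcher.triggerGroupEquals(group))) {
        logger.info(triggerKey.toString());
    }
}
```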
You could also lookup all the triggers associated with a given job.
List<? extends Trigger> jobTriggers = scheduler.getTriggersOfJob(jobKey("my-first-job"));
There are many useful methods on the
Scheduler interface and the JobBuilder/TriggerBuilder classes. Make sure you explore them for a better understanding.
Summary
To summarize, the Quartz scheduler is a one-stop tool for all of your scheduling needs in a Java application. Some of its features include:
- Quartz can be embedded into any Java application, as it is written in pure Java.
- By providing a crontab-like approach, Quartz can handle complex recurrence schedules. With multiple triggers for the same job, The possibilities are limitless.
- Job definitions are done via java classes and their respective quartz API methods. This makes the library more developer-friendly.
- The JobStore interface allows job information to be persisted. The most common approach is to use JDBCJobStore to keep job data available between application restarts. The API also gives developers the option to provide a custom JobStore.
- The library is JTA(Java Transaction API) compatible.
- The quartz scheduler can run in cluster mode and can be load balanced. This makes the library scalable.
- With the help of listeners and plugins, job failures can be handled accordingly.
You can find all the above examples in the quartz-example github repository.
Rosetta Code:Add a Language
Thanks to a system of templates, adding a language on Rosetta Code is fairly simple. To begin with, consider the name of the language; This will be part of the name of the page that represents your language.
After checking to see if the language is already on Rosetta Code, You're going to need a category page to list all the examples, a redirect in the main namespace to redirect to the category page, and, finally, an example or two.
Notice that, for the purpose of instruction, we call the language "Ayrch", but that's almost certainly not going to be the name of the language you're adding; Replace "Ayrch" with the name of your language.
Prerequisites
For inclusion on Rosetta Code, the requirements are reasonable: There should be an existing implementation of that language that is either mature or, at the very least, under active development. Language notability is unimportant; if the number of languages on the site grows enough to require distinction, they can be differentiated by example coverage.
Basic Information
These are the bits that should be done for every language.
Category Page
Once you're sure the language doesn't already have a page on Rosetta Code, you'll need to create a category page for the language. Let's say you're adding a language called Ayrch (This is a hypothetical language name; please change it to your actual language name when you actually add your language.) The first thing you need to do is create the category page. The easiest way is to click on the Search field, type Category:Ayrch, and click Go. It will tell you there is no current page with that name. Click "create this page", and it will give you an empty page to edit.
One simple way to start is to make this the entire body of the page:
{{stub}}{{language|Ayrch}}
That will automatically give you a basic language page, and even a nice little stub notification reminding people who visit to fill in more information.
Redirect
The next step is to create a redirect page. This is important, because the actual page for your Ayrch language is at Category:Ayrch, and we want people to be able to go to the plain Ayrch page in the main namespace, and be able to use syntax like [[Ayrch]] within the wiki to refer to it.
As before, click on the Search field on the left, but this time type Ayrch, and click Go. Again, click "create this page", and it will again give you an empty page to edit.
This time, make the entire body of the page:
#REDIRECT [[:Category:Ayrch]]
Now, when anyone goes to the Ayrch page, they will be immediately redirected to the category page for Ayrch.
Examples
You're not done yet!
You've created a language category page and have ensured that people who visit the page in the main namespace will reach the right place. You might even have gone back to the category page and filled in a few more details like some history and links to the official sites and resources for the language.
What could be missing? Code!
You need to provide at least one or two token examples, to give people a taste of the language. Otherwise, there really isn't much of a point for the language to be mentioned on the wiki; Nobody is likely to notice it.
If you're pressed for time, browse the list of tasks and find a couple simple ones you can implement. User Output, Loop Structures and Conditional Structures are some common ones that most languages support. For the sake of this demonstration, let's suppose that Ayrch looks a lot like BASIC, and implement User Output.
We would need to go to that page, find where the language would fit, and add this code:
=={{header|Ayrch}}== <lang ayrch>PRINT "GOODBYE, WORLD!"</lang>
That's a very simple example; You might try adding some descriptive information before the <lang>, such as what compiler it works with, or perhaps some interesting information of how Ayrch does things differently from other languages. Whatever helps to illustrate the language and identify what makes this example interesting.
Tasks Not Implemented
Finally, you're going to want to create an easy way for other people to discover and add tasks that have not yet been implemented in your language. In the bottom right of your page, click the link that says, "If you know Ayrch, please write code for some of the tasks not implemented in Ayrch." In the new page that opens, enter the following for the page contents:
{{unimpl_Page|Ayrch}}
More Advanced
These aren't strictly necessary, but are generally a plus if you want to increase awareness and penetration of your language on the site.
User Boxes
You created a user page before doing your edits, didn't you? You don't have to, but it generally helps in identifying who created and contributed what.
In your user page (not your user talk page), try adding a user box. That generally looks something like this:
{{mylangbegin}} {{mylang|Visual Basic|Active}} {{mylang|BASIC|Very Very Rusty}} {{mylang|Brainf***|Rusty}} {{mylang|C++|Very Active}} {{mylang|Perl|Very Active}} {{mylang|PHP|Semi-Active}} {{mylang|UNIX Shell|Very Active}} {{mylang|C|Semi-Active}} {{mylang|Java|Rusty}} {{mylang|JavaScript|Active}} {{mylang|SQL|Active}} {{mylang|Visual Basic .NET|Rusty}} {{mylangend}}
Of course, you don't have to use words like "Active" or "Rusty"; You can use "Expert", "Novice" or "Author" (or any other way you want to describe your proficiency), if you like. If the only language you really know happens to be Ayrch, then your language box is pretty simple:
{{mylangbegin}} {{mylang|Ayrch|Replace this with something reflective of your experience level}} {{mylangend}}
If you just copy and paste that, you'll probably get the idea fairly quickly.
Implementations
A language is only theoretical until it has an implementation. An implementation might be a compiler, an interpreter, or even a piece of silicon. It helps users tremendously if they can find implementations of the language you're trying to show them. One good way to do that is to create an implementation page.
Let's say you have a compiler named ayrchc, and you want to create a page for it. Click on the Search field on the left, type ayrchc, and click Go. Click "create this page", and give the page a body:
{{stub}}{{implementation|Ayrch}}
That's a start, but if you're this far, then you can go a step or two beyond that. Instead of using {{stub}}, give a couple lines of description about the implementation, and, preferably, a link to the official page for the implementation.
Conclusion
If you've done all that, there's only one more thing you really ought to do: Get more people familiar with your language to fill in more tasks. Remember that page you created for "Unimplemented Tasks"? Pass that around to interested parties, and things will generally start happening.
We'll be watching for you! | http://rosettacode.org/wiki/Rosetta_Code:Add_a_Language | CC-MAIN-2017-04 | refinedweb | 1,188 | 58.21 |
This series of posts describes how you can use UI Automation (UIA) as part of your solution to help people who find some aspect of working with text to be a challenge.
Introduction
A while back I had a chat with someone with a lot of experience in education, and she was telling me of the value to students of tools which allow text to be spoken. So I downloaded some of my earlier UIA client sample apps, and built a new app which could speak text shown in some apps. Details on what I did to make that happen are at A recipe for an exciting assistive technology app: Throw three UIA samples together and stir vigorously! And I made the app available at, along with a short video. (I’ve not used the app for a long time. Hopefully it still works…)
You may feel that a simple tool could help someone that you know work with text. Perhaps that’s having the text spoken, highlighted, magnified, or its definition spoken. So it’s definitely worth considering whether you could build the tool yourself, and tune it to be as useful as possible for the person you know.
And in fact, maybe you feel that the tool could be useful to you as the customer. For example, when proof-reading an important e-mail before sending it, I always want it to be read out to me. That really helps me to spot errors that I don’t spot simply by reading it. So for a while I used my own sample at Windows 7 UI Automation Client API C# sample (e-mail reader) Version 1.1 to improve the quality of e-mails I send. (Since then I’ve discovered that Outlook and Word have built-in ways of having text content spoken, so I no longer use my own tool for that.)
Below are some details on how you can use UIA to interact with text in apps. The discussion does not focus on the various ways of triggering the UIA action, (for example, through keyboard or mouse action,) or on action taken with the text once you’ve accessed it, (for example, calling into some web service to get the definition of the word).
Building a UIA client app
When building a UIA client app, I tend to build a WinForms C# app. I could write a C++ Win32 app, but WinForms makes so many things quick ‘n’ easy for me, that I usually go with WinForms. (I’m not familiar with building WPF apps.)
It’s interesting to note that I’m using desktop UI frameworks here rather than Windows Store app frameworks like XAML and WinJS. That’s because XAML and WinJS apps don’t have access to the Windows UIA client API.
But because I choose to build a C# app, I’ll need a managed wrapper around the native Windows UIA API. So I use the tlbimp.exe tool to generate the wrapper for me. The tlbimp.exe tool will be somewhere in your Windows SDK folders. On my Windows 10 machine I’d run the following to generate the wrapper:
"C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\x64\tlbimp.exe" c:\windows\system32\uiautomationcore.dll /out:Interop.UIAutomationCore.dll
To illustrate this, I created a new WinForm app called “SpeakWord”. I then ran the command above to generate the wrapper, and included a reference to the output wrapper in my new project.
I can then see all the interesting UIA classes available to me by using Visual Studio’s Object Browser.
Figure 1: Using the Visual Studio Object Browser to see what UIA classes are available to my app.
I should say that once I’ve created this wrapper, I can use it in future projects too. I don’t have to explicitly generate it every time I start working on a new app. So while this can seem complicated, it takes no time to get going on an app once you’re familiar with the steps.
So having created my new app, I thought it might be useful if the customer could move the mouse over a word of interest, and press a key to hear that word spoken. The only time-consuming bit about implementing that was adding a global hotkey handler, as I’ve not done that from C# before. The rest was really quick for me. All I had to do was paste in a code snippet I’d uploaded to When I try to use UI Automation for PowerPoint 2013, I can only get the first character/word when I use RangeFromPoint recently, and add a few lines to call the .NET SpeechSynthesizer.
Overall this was pretty quick to do, and has real potential given that an app which helps someone know how a word is pronounced can be really useful. (I’m assuming here that the text-to-speech engine being used does a good job at pronouncing the text as expected.)
The contents of the file containing the code for the bulk of the app can be found at the end of this post.
So what is the app doing?
When an assistive technology (AT) app is working with an app showing text, it’s not enough just to know what that text is. The AT app may need to be able to access the text in different ways. For example, get the text beneath the mouse cursor, or at the text insertion point (ie at the caret,) or get the selected text. The UIA client app can do this through use of the UIA “Text pattern”. Details on the Text pattern can be found at IUIAutomationTextPattern, and that interface has a variety of methods useful for accessing text. (There’s also an IUIAutomationTextPattern2 interface with a couple more methods in it.)
More details around the UIA Text pattern can be found at Text and TextRange Control Patterns.
A UIA “pattern” is used to describe the programmatic behavior exposed by a UIA element. For example, a button should support the “Invoke” pattern, allowing it to be programmatically invoked. In the case of text, if a UIA element is to programmatically expose its text in the most useful way possible, then it will support the Text pattern.
However, not all document-related apps which show text support the UIA Text pattern. So if I’m interested in a particular app, I’ll point the Inspect SDK tool to it first. If the app claims to support the Text pattern, then it’ll expose an IsTextPatternAvailable property of true. A value of true on that property doesn’t necessarily mean the app will do a good job at supporting the Text pattern, but at least it’s claiming to support it.
So when building the app described at A recipe for an exciting assistive technology app: Throw three UIA samples together and stir vigorously! I first pointed Inspect to WordPad and Word 2013. In both cases, Inspect showed me that the IsTextPatternAvailable property was true. I also pointed Inspect to Word 2010 and found that the IsTextPatternAvailable property was false in that app. So my helpful AT tool just isn’t going to work with Word 2010.
Figure 2: The Inspect SDK tool showing that Word 2013 claims to support the UIA Text pattern.
So having learned that the provider app that I’m interested in does claim to support the UIA Text pattern, I want my app to get that Text pattern from the provider app. A pattern is accessed through the UIA element that’s implementing the pattern. So first I need to get at that UIA element.
I’m going to find the element by asking UIA to return to me the element beneath the mouse cursor. I could then ask UIA to go back to the provider app and get me the Text pattern from the element. But that would involve two cross-process calls, and I like to keep the number of cross-process calls I make to a minimum. So I’m going to ask UIA to cache a reference to the Text pattern when it gets the element.
By the way, I tend to use explicit values for pattern and property ids in my client code, pulled from UIAutomationClient.h. I don’t have to do that, and instead I could use some value accessed through the managed wrapper I generated earlier. But years ago, VS gave me some warning when I did that. I don’t remember the details there, and I’ve simply got into the habit of using the values directly.
So this is how I got the UIA element of interest:
// Get the point of interest from the mouse cursor position.
tagPOINT pt;
pt.x = Cursor.Position.X;
pt.y = Cursor.Position.Y;
// Ask for the Text pattern to be cached with the element, to avoid
// a second cross-process call.
int patternIdText = 10014; // UIA_TextPatternId, from UIAutomationClient.h.
IUIAutomationCacheRequest cacheRequest = _uiAutomation.CreateCacheRequest();
cacheRequest.AddPattern(patternIdText);
// Get the UIA element beneath the mouse cursor, with the Text pattern cached.
IUIAutomationElement element = _uiAutomation.ElementFromPointBuildCache(pt, cacheRequest);
Having got the element, I then try to access the Text pattern from it. I didn’t bother first checking the element’s IsTextPatternAvailable property to see whether the element claims to support the Text pattern. In this simple app, I’m only interested in whether I can get a Text pattern or not.
IUIAutomationTextPattern textPattern =
element.GetCachedPattern(patternIdText);
if (textPattern != null)
{
…
So there we have it. I now have a Text pattern associated with the text beneath the mouse cursor, and that works in WordPad and Word 2013, and some other important apps too.
Having got access to the text through the Text pattern, I can then have some fun working with the text. This is done through a TextRange, (or TextRange2 if you need the additional method in that). MSDN describes a TextRange as an interface that “Provides access to a span of continuous text”. You work with a TextRange through the IUIAutomationTextRange interface, and that has all sorts of interesting members. For example:
GetText() - Get the text associated with a range.
FindText() - Find text within a range.
FindAttributes() - Find text with specific UIA text attribute within a range.
And there are also very helpful ways to move through the text. For example:
ExpandToEnclosingUnit() – Expand the range to include more text. E.g. expand a range containing a word to contain all the text in the paragraph in which the word lies.
Move() – Move the range forward or backward in the text by some unit such as a word or line.
So going back to the quick app I wrote, I wanted to get the word beneath the mouse cursor. In order to do this, I needed to use the Text pattern that I got earlier, and then get the TextRange from that Text pattern where the mouse cursor is.
IUIAutomationTextRange range = textPattern.RangeFromPoint(pt);
if (range != null)
{
…
Now, this is where things can get interesting. While the provider app that I’m working with might provide me with a Text pattern and TextRange, that doesn’t necessarily mean the app has implemented these UIA interfaces as I expect. MSDN says that RangeFromPoint() should return a “degenerative” TextRange. A degenerative TextRange is zero-length, and I can expand it or move from it through the text in a number of ways. But someone pointed out at When I try to use UI Automation for PowerPoint 2013, I can only get the first character/word when I use RangeFromPoint, that PowerPoint 2013 doesn’t return a degenerative TextRange following the call to RangefromPoint(). Rather it returns a TextRange which includes all the text in the text box beneath the mouse cursor, and that means I can’t actually tell which word is beneath the mouse cursor. So while some apps may do what you expect when RangeFromPoint() is called, (eg PowerPoint Online, Word 2013, Outlook 2013,) others may not.
Actually, it’s worth focusing on this…
When you hit unexpected text data being returned from provider apps, it might not be obvious as to whether the provider app is really the problem, or somehow your client code is not requesting the data you intended. So before writing your UIA client code, it can be worth pointing the SDK Text Explorer tool at the provider app. That tool really isn’t the most intuitive to use, but it can help confirm that unexpected data is being returned from the provider app.
Another example of inconsistent behavior relates to WordPad and Word 2013 behaving differently in the data they return when a UIA client app calls IUIAutomationTextPattern::GetVisibleRanges(). By using the Text Explorer tool, I found that WordPad running on Windows 10 doesn't include the text that's clipped out of view, but Word 2013 does return the clipped text.
Ok, once again going back to the app, having got (what should be) a degenerative TextRange for the text beneath the mouse cursor, I can expand that to include the word of interest. I then get the text for that word, and have it spoken.
range.ExpandToEnclosingUnit(TextUnit.TextUnit_Word);
// Set a reasonable limit on the length of the word returned.
wordToSpeak = range.GetText(100);
So with those few lines of code above, I can access the word beneath the mouse cursor.
Figure 3. The simple app speaking the word beneath the mouse cursor in MSDN documentation shown in the Edge browser.
A word of warning around working with UIA events
So far I’ve mentioned something of UIA properties and patterns, but another very important aspect of UIA relates to events. Events allow your UIA client app to react to things that are going on in the provider app’s UI.
For example, your app might want to be notified whenever your customer moves the caret around the provider app’s text. So you could register for the UIA_Text_TextSelectionChangedEventId, (as listed at Event Identifiers). But as helpful as event handlers are, sometimes they need to be used with care. Historically there have been constraints around what your events handlers should do. A classic constraint was not to call back into UIA from inside your UIA event handler. Some of these constraints were relaxed in Windows 8.1, and relaxed further in Windows 10. But if you hit unexpected delays in your event handler, you may be interested in reading the discussion at UI Automation events stop being received after a while monitoring an application and then restart after some time.
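As a sketch of what that registration can look like through the tlbimp-generated wrapper described earlier (the handler class name and the element variable are placeholders for your own code, and the hard-coded 20014 is my reading of UIA_Text_TextSelectionChangedEventId from UIAutomationClient.h):

```csharp
// Hypothetical handler implementing the wrapper's IUIAutomationEventHandler.
public class TextSelectionHandler : IUIAutomationEventHandler
{
    public void HandleAutomationEvent(IUIAutomationElement sender, int eventId)
    {
        // Keep the work done here to a minimum, given the historical
        // constraints around calling back into UIA from an event handler.
        System.Diagnostics.Debug.WriteLine("Text selection changed.");
    }
}

// Register for selection changes beneath some provider element of interest.
_uiAutomation.AddAutomationEventHandler(
    20014, // UIA_Text_TextSelectionChangedEventId.
    element, // The provider element you located earlier.
    TreeScope.TreeScope_Descendants,
    null, // No cache request needed for this simple handler.
    new TextSelectionHandler());
```

Remember to remove the handler (for example with RemoveAllEventHandlers) before your app shuts down.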
Summary
Once you’ve recognized how some tool could help someone you know work with text, consider how you can achieve your goals using the UIA Text pattern and TextRange.
If you want to access the text beneath the mouse cursor, or the text that’s currently selected, take a look at A recipe for an exciting assistive technology app: Throw three UIA samples together and stir vigorously! For more advanced interaction with text, take a look at Windows 7 UI Automation Client API C# sample (e-mail reader) Version 1.1. That mail-related sample sequentially speaks each paragraph, and uses the Magnification API to magnify the paragraph.
And given that it’s been quite a while since I’ve run that mail-related sample, I just downloaded it and built it in VS 2015. (I had to agree to building it with a more recent version of .NET, and add a reference to Microsoft.CSharp.) I then tweaked it to look at Outlook 2013 e-mail UI rather than the Windows Mail app UI that I’d originally targeted when building the sample. And of course I had to run the Inspect SDK tool when doing this in order to learn about the properties of the e-mail UI that I wanted to get the text from. Having done that, my sample app sequentially spoke and highlighted the paragraphs in the e-mail.
Figure 4: A UIA client app accessing text paragraphs shown in an e-mail composition window.
So a polished-up version of this mail-reading app could be a valuable tool for many people, (and like I said, I used it myself for a while). It was also a ton of fun to build! 🙂
I hope you find building these sorts of apps as rewarding as I do.
Guy
Posts in this series:
So how will you help people work with text? Part 1: Introduction
So how will you help people work with text? Part 2: The UIA Client
So how will you help people work with text? Part 3: The UIA Provider
P.S. Here’s the code of interest for the simple app that I built to speak the text beneath the mouse cursor.
using System;
using System.Drawing;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Speech.Synthesis;
using System.Windows.Forms;
using Interop.UIAutomationCore;
namespace SpeakWord
{
public partial class FormSpeakWord : Form
{
private IUIAutomation3 _uiAutomation;
private SpeechSynthesizer _speechSynthesizer;
private IntPtr hotkeyIdSpeakWordBeneathMouseCursor = (IntPtr)1001;
public FormSpeakWord()
{
InitializeComponent();
}
private void buttonClose_Click(object sender, EventArgs e)
{
this.Close();
}
private void FormSpeakWord_Load(object sender, EventArgs e)
{
// Get an IUIAutomation3 interface for all interaction with UIA.
_uiAutomation = (IUIAutomation3)new CUIAutomation8();
// Get a SpeechSynthesizer in order to speak the word accessed through UIA.
_speechSynthesizer = new SpeechSynthesizer();
// Get notified when the F8 key is pressed.
Win32.RegisterHotKey(this.Handle, (int)hotkeyIdSpeakWordBeneathMouseCursor, 0, 0x77 /* VK_F8 */);
}
protected override void OnClosing(CancelEventArgs e)
{
Win32.UnregisterHotKey(this.Handle, (int)hotkeyIdSpeakWordBeneathMouseCursor);
base.OnClosing(e);
}
protected override void WndProc(ref Message m)
{
base.WndProc(ref m);
if (m.Msg == 0x0312) // WM_HOTKEY
{
if (m.WParam == hotkeyIdSpeakWordBeneathMouseCursor)
{
// Our hotkey's been pressed!
SpeakWordBeneathMouseCursor();
}
}
}
private void SpeakWordBeneathMouseCursor()
{
string wordToSpeak = GetWord(); // Get the word beneath mouse cursor.
if (wordToSpeak != "")
{
_speechSynthesizer.SpeakAsync(wordToSpeak);
}
}
private string GetWord()
{
string wordToSpeak = "";
// Get the current position of the mouse cursor.
tagPOINT pt;
pt.x = Cursor.Position.X;
pt.y = Cursor.Position.Y;
// Cache the Text pattern while finding the element beneath the cursor,
// so no further cross-process call is needed to retrieve the pattern.
int patternIdText = 10014; // UIA_TextPatternId
IUIAutomationCacheRequest cacheRequest = _uiAutomation.CreateCacheRequest();
cacheRequest.AddPattern(patternIdText);
IUIAutomationElement element =
    _uiAutomation.ElementFromPointBuildCache(pt, cacheRequest);
// Does the element support the Text pattern?
IUIAutomationTextPattern textPattern =
    element.GetCachedPattern(patternIdText) as IUIAutomationTextPattern;
if (textPattern != null)
{
// Now get the degenerate (empty) TextRange where the mouse is.
IUIAutomationTextRange range = textPattern.RangeFromPoint(pt);
if (range != null)
{
// Expand the TextRange to include the word around it.
range.ExpandToEnclosingUnit(TextUnit.TextUnit_Word);
// Set a reasonable limit for speaking the word returned.
wordToSpeak = range.GetText(100);
}
}
return wordToSpeak;
}
}
public class Win32
{
[DllImport("user32.dll")]
public static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);
[DllImport("user32.dll")]
public static extern bool UnregisterHotKey(IntPtr hWnd, int id);
}
}
Guy, this is great. I find though that I cannot get Windows UI Automation (not the .NET one either) to get text from Edge. Does it use a different pattern? When I list patterns using .NET I get no supported patterns.
Is there something I need to turn on to get Edge to provide ui Automation?
caret browsing, I turned on caret browsing and now my Win UIA (but still not .NET UIA) seems to pull text from Edge
Now it works, even with caret off, maybe I was just imagining things. I do find I have to use the Windows UIA and not .NET UIA. Now the next thing that doesn't seem to work is
CompareEndpoints
I use
range.ExpandToEnclosingUnit(TextUnit.TextUnit_Character); // for chinese
text += "character: " + range.GetText(-1).Trim() + Environment.NewLine;
var charRange = range.Clone();
range.ExpandToEnclosingUnit(TextUnit.TextUnit_Word); // for chinese
text += "word: " + range.GetText(-1).Trim() + Environment.NewLine;
var wordRange = range.Clone();
var rects = wordRange.GetBoundingRectangles();
int charStartPoint = wordRange.CompareEndpoints(TextPatternRangeEndpoint.TextPatternRangeEndpoint_Start, charRange, TextPatternRangeEndpoint.TextPatternRangeEndpoint_Start);
int charEndPoint = wordRange.CompareEndpoints(TextPatternRangeEndpoint.TextPatternRangeEndpoint_End, charRange, TextPatternRangeEndpoint.TextPatternRangeEndpoint_End);
text += charStartPoint + ", " + charEndPoint + Environment.NewLine;
and this works great in .NET UIA but not in Windows UIA, the start and end points are always -1 -1.
Hi Tim,
Are you finding you get the unexpected -1 results for all UI where you run this code, or only with specific UI? I'd like to try to reproduce this myself using the Windows UIA API, so that I can investigate further. If you could give me an example of specific UI where the unexpected values are returned, that would help me.
Thanks,
Guy
Dear Guy,
I realized what was happening. I can get text from plain old paragraphs, but try going to a table or even to Google News. I cannot get the TextPattern to work. Perhaps I need to drill down more? Any advice would be helpful.
Some issues this morning:
1. Try to get text from the article snippet in news.google.com – I get no textPattern
2. Try to get text from the article headline, it retrieves the text into "Name" (because it is a link? can I get a RangeFromPoint from it?)
3. Try to get text from the left most column in msdn.microsoft.com/…/gg701984(v=vs.85).aspx
4. Why is class name missing for Edge?
5. Is it possible to get AriaRoles?
6. Why can I not use the predefined IDs? I can look up UIA_PropertyIds.UIA_ClassNamePropertyId in VS, but when I try to compile it says Interop type cannot be embedded, use applicable interface instead…
I do find that the Windows UIA is an improvement over .NET UIA in terms of accessibility though.
Are you in Redmond, would love to buy you lunch. I am not far in Bellingham.
Thanks,
Tim
BTW, I resolved the -1 issues. It seems that Edge just returns whether you are behind or in front. I was able to move a clone of my target range around until it matched. This is probably because this is html based and exact numbers may not be computable ahead of time.
I think I need to dig down with something like this, though I'm not sure yet how to get my cached request to have children.
var children = element.GetCachedChildren();
if (children == null) return text;
for (int i = 0; i < children.Length; i++)
{
textPattern = children.GetElement(i).GetCurrentPattern(patternIdText);
if (textPattern != null)
{
wordFromRange(sw, pt, text, textPattern);
}
}
Guy I feel nervous about switching from .NET UIA to native – is there no .NET UIA switch to make it equivalent to native? I'm trying now to get the cached children, so I think I need to set the TreeScope in the cache request. When I do
cacheReq.TreeScope = TreeScope.TreeScope_Children; // causes E_FAIL
cacheReq.TreeScope = TreeScope.TreeScope_Descendants; // causes E_FAIL
I get HRESULT E_FAIL.
Sorry for all the messages here. I figured it out, I did not read the instructions. I needed cacheReq.TreeScope = TreeScope.TreeScope_Children | TreeScope.TreeScope_Element;
I'm no longer getting E_FAIL but items such as Hyperlinks in Edge do not seem to have children that I can parse. 🙁
I think these are embedded objects msdn.microsoft.com/…/ms788739(v=vs.110).aspx
I want to use msdn.microsoft.com/…/ee671665(v=vs.85).aspx – but hyperlinks, titles, table cells, they all seem to have no children when I GetCachedChildren even with the TreeScope in the cache request.
Is it possible to get RangeFromPoint if there is no text pattern?
RangeFromPoint() is only available to UIA clients if the provider supports the Text pattern. (Most of what the Text pattern does is provide a variety of ways for a client to access TextRanges.) | https://blogs.msdn.microsoft.com/winuiautomation/2015/09/29/so-how-will-you-help-people-work-with-text-part-2-the-uia-client/?replytocom=1811 | CC-MAIN-2018-13 | refinedweb | 3,785 | 64.81 |
Microsoft Visual C# Step by Step
Ninth Edition
John Sharp
Microsoft Visual C# Step by Step, Ninth Edition
Published with the authorization of Microsoft Corporation by: Pearson Education, Inc.
ISBN-13: 978-1-5093-0776-0
ISBN-10: 1-5093-0776-1
Library of Congress Control Number: 2018944197
The author, the publisher, and Microsoft Corporation shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book.
Brett Bartow
Acquisitions Editor
Trina MacDonald
Development Editor
Rick Kughen
Managing Editor
Sandra Schroeder
Senior Project Editor
Tracey Croom
Copy Editor
Christopher Morris
Indexer
Erika Millen
Proofreader
Jeanine Furino
Technical Editor
David Franson
Editorial Assistant
Courtney Martin
Cover Designer
Twist Creative, Seattle
Compositor
codemantra
Contents at a Glance
Acknowledgments
About the Author
Introduction
PART I
INTRODUCING MICROSOFT VISUAL C# AND MICROSOFT VISUAL STUDIO 2017
CHAPTER 1 Welcome to C#
CHAPTER 2 Working with variables, operators, and expressions
CHAPTER 3 Writing methods and applying scope
CHAPTER 4 Using decision statements
CHAPTER 5 Using compound assignment and iteration statements
CHAPTER 6 Managing errors and exceptions
PART II
UNDERSTANDING THE C# OBJECT MODEL
CHAPTER 7 Creating and managing classes and objects
CHAPTER 8 Understanding values and references
CHAPTER 9 Creating value types with enumerations and structures
CHAPTER 10 Using arrays
CHAPTER 11 Understanding parameter arrays
CHAPTER 12 Working with inheritance
CHAPTER 13 Creating interfaces and defining abstract classes
CHAPTER 14 Using garbage collection and resource management
PART III
DEFINING EXTENSIBLE TYPES WITH C#
CHAPTER 15 Implementing properties to access fields
CHAPTER 16 Handling binary data and using indexers
CHAPTER 17 Introducing generics
CHAPTER 18 Using collections
CHAPTER 19 Enumerating collections
CHAPTER 20 Decoupling application logic and handling events
CHAPTER 21 Querying in-memory data by using query expressions
CHAPTER 22 Operator overloading
PART IV
BUILDING UNIVERSAL WINDOWS PLATFORM APPLICATIONS WITH C#
CHAPTER 23 Improving throughput by using tasks
CHAPTER 24 Improving response time by performing asynchronous operations
CHAPTER 25 Implementing the user interface for a Universal Windows Platform app
CHAPTER 26 Displaying and searching for data in a Universal Windows Platform app
CHAPTER 27 Accessing a remote database from a Universal Windows Platform app
Index
Contents
Acknowledgments
About the Author
Introduction
PART I
INTRODUCING MICROSOFT VISUAL C# AND MICROSOFT VISUAL STUDIO 2017
Chapter 1 Welcome to C#
Beginning programming with the Visual Studio 2017 environment
Writing your first program
Using namespaces
Creating a graphical application
Examining the Universal Windows Platform app
Adding code to the graphical application
Summary
Quick reference
Chapter 2 Working with variables, operators, and expressions
Understanding statements
Using identifiers
Identifying keywords
Using variables
Naming variables
Declaring variables
Specifying numeric values
Summary
Quick reference
Chapter 3 Writing methods and applying scope
Creating methods
Declaring a method
Returning data from a method
Using expression-bodied methods
Calling methods
Specifying the method call syntax
Returning multiple values from a method
Applying scope
Defining local scope
Defining class scope
Overloading methods
Writing methods
Refactoring code
Nesting methods
Using optional parameters and named arguments
Defining optional parameters
Passing named arguments
Resolving ambiguities with optional parameters and named arguments
Summary
Quick reference
Chapter 4 Using decision statements
Summary
Quick reference
Chapter 5 Using compound assignment and iteration statements
Using compound assignment operators
Writing while statements
Writing for statements
Understanding for statement scope
Writing do statements
Summary
Quick reference
Chapter 6 Managing errors and exceptions
Coping with errors
Trying code and catching exceptions
Unhandled exceptions
Using multiple catch handlers
Catching multiple exceptions
Filtering exceptions
Propagating exceptions
Using checked and unchecked integer arithmetic
Writing checked statements
Writing checked expressions
Throwing exceptions
Using throw exceptions
Using a finally block
Summary
Quick reference
PART II
UNDERSTANDING THE C# OBJECT MODEL
Chapter 7 Creating and managing classes and objects
Understanding classification
The purpose of encapsulation
Defining and using a class
Controlling accessibility
Working with constructors
Overloading constructors
Deconstructing an object
Understanding static methods and data
Creating a shared field
Creating a static field by using the const keyword
Understanding static classes
Static using statements
Anonymous classes
Summary
Quick reference
Chapter 8 Understanding values and references
Copying value type variables and classes
Understanding null values and nullable types
The null-conditional operator
Using nullable types
Understanding the properties of nullable types
Using ref and out parameters
Creating ref parameters
Creating out parameters
How computer memory is organized
Using the stack and the heap
The System.Object class
Boxing
Unboxing
Casting data safely
The is operator
The as operator
The switch statement revisited
Summary
Quick reference
Chapter 9 Creating value types with enumerations and structures
Working with enumerations
Declaring an enumeration
Using an enumeration
Choosing enumeration literal values
Choosing an enumeration’s underlying type
Working with structures
Declaring a structure
Understanding differences between structures and classes
Declaring structure variables
Understanding structure initialization
Copying structure variables
Summary
Quick reference
Chapter 10 Using arrays
Accessing arrays that contain value types
Summary
Quick reference
Chapter 11 Understanding parameter arrays
Overloading—a recap
Using array arguments
Declaring a params array
Using params object[]
Using a params array
Comparing parameter arrays and optional parameters
Summary
Quick reference
Chapter 12 Working with inheritance
What is inheritance?
Using inheritance
The System.Object class revisited
Calling base-class constructors
Assigning classes
Declaring new methods
Declaring virtual methods
Declaring override methods
Understanding protected access
Creating extension methods
Summary
Quick reference
Chapter 13 Creating interfaces and defining abstract classes
Understanding interfaces
Defining an interface
Implementing an interface
Referencing a class through its interface
Working with multiple interfaces
Explicitly implementing an interface
Interface restrictions
Defining and using interfaces
Abstract classes
Abstract methods
Sealed classes
Sealed methods
Implementing and using an abstract class
Summary
Quick reference
Chapter 14 Using garbage collection and resource management
The life and times of an object
Writing destructors
Why use the garbage collector?
How does the garbage collector work?
Recommendations
Resource management
Disposal methods
Exception-safe disposal
The using statement and the IDisposable interface
Calling the Dispose method from a destructor
Implementing exception-safe disposal
Summary
Quick reference
PART III DEFINING EXTENSIBLE TYPES WITH C#
Chapter 15 Implementing properties to access fields
Implementing encapsulation by using methods
What are properties?
Using properties
Read-only properties
Write-only properties
Property accessibility
Understanding the property restrictions
Declaring interface properties
Replacing methods with properties
Generating automatic properties
Initializing objects by using properties
Summary
Quick reference
Chapter 16 Handling binary data and using indexers
What is an indexer?
Storing binary values
Displaying binary values
Manipulating binary values
Solving the same problems using indexers
Understanding indexer accessors
Comparing indexers and arrays
Indexers in interfaces
Using indexers in a Windows application
Summary
Quick reference
Chapter 17 Introducing generics
The problem: Misusing the object type
The generics solution
Generics vs. generalized classes
Generics and constraints
Creating a generic class
The theory of binary trees
Building a binary tree class by using generics
Creating a generic method
Defining a generic method to build a binary tree
Variance and generic interfaces
Covariant interfaces
Contravariant interfaces
Summary
Quick reference
Chapter 18 Using collections
The forms of lambda expressions
Comparing arrays and collections
Using collection classes to play cards
Summary
Quick reference
Chapter 19 Enumerating collections
Enumerating the elements in a collection
Manually implementing an enumerator
Implementing the IEnumerable interface
Implementing an enumerator by using an iterator
A simple iterator
Defining an enumerator for the Tree<TItem> class by using an iterator
Summary
Quick reference
Chapter 20 Decoupling application logic and handling events
Understanding delegates
Examples of delegates in the .NET Framework class library
The automated factory scenario
Implementing the factory control system without using delegates
Implementing the factory by using a delegate
Declaring and using delegates
Lambda expressions and delegates
Creating a method adapter
Enabling notifications by using events
Declaring an event
Subscribing to an event
Unsubscribing from an event
Raising an event
Understanding user interface events
Using events
Summary
Quick reference
Chapter 21 Querying in-memory data by using query expressions
What is LINQ?
Using LINQ in a C# application
Selecting data
Filtering data
Ordering, grouping, and aggregating data
Joining data
Using query operators
Querying data in Tree<TItem> objects
LINQ and deferred evaluation
Summary
Quick reference
Chapter 22 Operator overloading
Summary
Quick reference
PART IV BUILDING UNIVERSAL WINDOWS PLATFORM APPLICATIONS WITH C#
Chapter 23 Improving throughput by using tasks
Why perform multitasking by using parallel processing?
The rise of the multicore processor
Implementing multitasking by using the Microsoft .NET Framework
Tasks, threads, and the ThreadPool
Creating, running, and controlling tasks
Using the Task class to implement parallelism
Abstracting tasks by using the Parallel class
When not to use the Parallel class
Canceling tasks and handling exceptions
The mechanics of cooperative cancellation
Using continuations with canceled and faulted tasks
Summary
Quick reference
Chapter 24 Improving response time by performing asynchronous operations
Implementing asynchronous methods
Defining asynchronous methods: The problem
Defining asynchronous methods: The solution
Defining asynchronous methods that return values
Asynchronous method gotchas
Asynchronous methods and the Windows Runtime APIs
Tasks, memory allocation, and efficiency
Using PLINQ to parallelize declarative data access
Using PLINQ to improve performance while iterating through a collection
Canceling a PLINQ query
Synchronizing concurrent access to data
Locking data
Synchronization primitives for coordinating tasks
Canceling synchronization
The concurrent collection classes
Using a concurrent collection and a lock to implement thread-safe data access
Summary
Quick reference
Chapter 25 Implementing the user interface for a Universal Windows Platform app
Features of a Universal Windows Platform app
Using the Blank App template to build a Universal Windows Platform app
Implementing a scalable user interface
Applying styles to a UI
Summary
Quick reference
Chapter 26 Displaying and searching for data in a Universal Windows Platform app
Implementing the Model–View–ViewModel pattern
Displaying data by using data binding
Modifying data by using data binding
Using data binding with a ComboBox control
Creating a ViewModel
Adding commands to a ViewModel
Searching for data using Cortana
Providing a vocal response to voice commands
Summary
Quick reference
Chapter 27 Accessing a remote database from a Universal Windows Platform app
Retrieving data from a database
Creating an entity model
Creating and using a REST web service
Inserting, updating, and deleting data through a REST web service
Reporting errors and updating the UI
Summary
Quick reference
Index
Acknowledgments
Well, here we are again, in what appears to have become a biennial event; such is the pace of change in
the world of software development! As I glance at my beloved first edition of Kernighan and Ritchie
describing The C Programming Language (Prentice Hall), I occasionally get nostalgic for the old times.
In those halcyon days, programming had a certain mystique, even glamour. Nowadays, in one form or
another, the ability to write at least a little bit of code is fast becoming as much a requirement in many
workplaces as the ability to read, write, or add up. The romance has gone, to be replaced by an air of
“everyday-ness.” Then, as I start to hanker after the time when I still had hair on my head and the
corporate mainframe required a team of full-time support staff just to pander to its whims, I realize that if
programming were restricted to a few elite souls, then the market for C# books would have disappeared
after the first couple of editions of this tome. Thus cheered, I power up my laptop, my mind mocking the
bygone era when such processing power could have navigated many hundreds of Apollo spacecraft
simultaneously to the moon and back, and get down to work on the latest edition of this book!
Despite the fact that my name is on the cover, authoring a book such as this is far from a one-man
project. I’d like to thank the following people who have provided unstinting support and assistance
throughout this exercise.
First, Trina MacDonald at Pearson Education, who took on the role of prodding me into action and
ever-so-gently tying me down to well-defined deliverables and hand-off dates. Without her initial impetus
and cajoling, this project would not have got off the ground.
Next, Rick Kughen, the tireless copy editor who ensured that my grammar remained at least semi-understandable, and picked up on the missing words and nonsense phrases in the text.
Then, David Franson, who had the unenviable task of testing the code and exercises. I know from
experience that this can be a thankless and frustrating task at times, but the hours spent and the feedback
that results can only make for a better book. Of course, any errors that remain are entirely my
responsibility, and I am happy to listen to feedback from any reader.
As ever, I must also thank Diana, my better half, who keeps me supplied with caffeine-laden hot drinks
when deadlines are running tight. Diana has been long-suffering and patient, and has so far survived my
struggle through nine editions of this book; that is dedication well beyond the call of duty. She has recently
taken up running. I assumed it was to keep fit, but I think it is more likely so she can get well away from
the house and scream loudly without my hearing her!
And lastly, to James and Frankie, who have both now flown the nest. James is trying to avoid gaining a
Yorkshire accent while living and working in Sheffield, but Frankie has remained closer to home so she
can pop in and raid the kitchen from time to time.
About the Author
John Sharp is a principal technologist for CM Group Ltd, a software development and consultancy
company in the United Kingdom. He is well versed as a software consultant, developer, author, and
trainer, with more than 35 years of experience, ranging from Pascal programming on CP/M and C/Oracle
application development on various flavors of UNIX to the design of C# and JavaScript distributed
applications and development on Windows 10 and Microsoft Azure. He also spends much of his time
writing courseware for Microsoft, focusing on areas such as Data Science using R and Python, Big Data
processing with Spark and CosmosDB, and scalable application architecture with Azure.
Introduction
C# 1.0 made its public debut in 2001.
C# 2.0, with Visual Studio 2005, provided several important new features, including generics,
iterators, and anonymous methods.
C# 3.0, which was released with Visual Studio 2008, added extension methods, lambda
expressions, and most famously of all, the Language-Integrated Query facility, or LINQ.
C# 4.0 was released in 2010 and provided further enhancements that improved its interoperability
with other languages and technologies. These features included support for named and optional
arguments and the dynamic type, which indicates that the language runtime should implement late
binding for an object. An important addition to the .NET Framework, and released concurrently
with C# 4.0, were the classes and types that constitute the Task Parallel Library (TPL). Using the
TPL, you can build highly scalable applications that can take full advantage of multicore
processors.
C# 5.0 added native support for asynchronous task-based processing through the async method
modifier and the await operator.
C# 6.0 was an incremental upgrade with features designed to make life simpler for developers.
These features include items such as string interpolation (you need never use String.Format
again!), enhancements to the ways in which properties are implemented, expression-bodied
methods, and others.
C# 7.0 adds further enhancements to aid productivity and remove some of the minor anachronisms
of C#. For example, you can now implement property accessors as expression-bodied members,
methods can return multiple values in the form of tuples, the use of out parameters has been
simplified, and switch statements have been extended to support pattern- and type-matching. There
are other updates as well, which are covered in this book.

The key notion in Windows 10 is
Universal Windows Platform (UWP) apps—applications designed to run on any Windows 10 device,
whether a fully fledged desktop system, a laptop, a tablet, or even an IoT (Internet of Things) device with
limited resources. Once you have mastered the core features of C#, gaining the skills to build applications
that can run on all these platforms is important.
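As a quick taste of some of the language features listed above, here is a short, hypothetical sketch (it is not one of the book's sample projects) that combines C# 6 string interpolation and an expression-bodied method with C# 7 tuple return values and a pattern-matching switch:

```csharp
using System;

public class FeatureTour
{
    // C# 6: an expression-bodied method.
    public static int Square(int n) => n * n;

    // C# 7: a method that returns multiple values as a tuple.
    public static (int min, int max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (int v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }

    // C# 7: a switch statement extended with type patterns and guards.
    public static string Describe(object item)
    {
        switch (item)
        {
            case int i when i < 0:
                return "a negative integer";
            case int i:
                return "an integer";
            case string s:
                return $"a string of length {s.Length}";
            default:
                return "something else";
        }
    }

    public static void Main()
    {
        // C# 7: deconstruct the returned tuple into two local variables.
        var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });

        // C# 6: string interpolation instead of String.Format.
        Console.WriteLine($"min = {min}, max = {max}, square of max = {Square(max)}");
        // Prints: min = 1, max = 5, square of max = 25
        Console.WriteLine(Describe(42));      // Prints: an integer
        Console.WriteLine(Describe("hello")); // Prints: a string of length 5
    }
}
```

The type names and values here are invented purely for illustration; the corresponding features are covered properly in Chapters 3, 8, and 15 onward.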
Voice activation is another feature that has come to the fore, and Windows 10 includes Cortana, your
personal voice-activated digital assistant. You can integrate your own apps with Cortana to allow them to
participate in data searches and other operations. Despite the complexity normally associated with
natural-language speech analysis, enabling your apps to respond to Cortana’s requests is surprisingly
easy; I cover this in Chapter 26. Also, the cloud has become such an important element in the architecture
of many systems—ranging from large-scale enterprise applications to mobile apps running on portable
devices—that I decided to focus on this aspect of development in the final chapter of the book.
The development environment provided by Visual Studio 2017 makes these features easy to use, and
the many new wizards and enhancements included in the latest version of Visual Studio can greatly
improve your productivity as a developer. I hope you have as much fun working through this book as I had
writing it!.
Who should read this book

This book is aimed at developers new to C# but not completely new to programming. As such, it concentrates primarily on the C# language.

Who should not read this book

This book is not intended to provide detailed coverage of the
multitude of technologies available for building enterprise-level and global applications for Windows,
such as ADO.NET, ASP.NET, Azure, or Windows Communication Foundation. If you require more
information on any of these items, you might consider reading some of the other titles available from
Microsoft Press.
Organization of this book
This book is divided into four sections:
Part I, “Introducing Microsoft Visual C# and Microsoft Visual Studio 2017,” provides an
introduction to the core syntax of the C# language and the Visual Studio programming environment.
Part II, “Understanding the C# object model,” goes into detail on how to create and manage new
types in C# and how to manage the resources referenced by these types.
Part III, “Defining extensible types with C#,” includes extended coverage of the elements that C#
provides for building types that you can reuse across multiple applications.
Part IV, “Building Universal Windows Platform applications with C#,” describes the universal
Windows 10 programming model and how you can use C# to build interactive applications for this
model.
Finding your best starting point in this book
This book is designed to help you build skills in a number of essential areas. You can use this book if you
are new to programming or if you are switching from another programming language such as C, C++,
Java, or Visual Basic. Use the following table to find your best starting point.
If you are new to object-oriented programming:

1. Install the practice files as described in the upcoming section, "Code samples."
2. Work through the chapters in Parts I, II, and III sequentially.
3. Complete Part IV as your level of experience and interest dictates.

If you are familiar with procedural programming languages, such as C, but new to C#:

1. Install the practice files as described in the upcoming section, "Code samples."
2. Skim the first five chapters to get an overview of C# and Visual Studio 2017, and then concentrate on Chapters 6 through 22.
3. Complete Part IV as your level of experience and interest dictates.

If you are migrating from an object-oriented language such as C++ or Java:

1. Install the practice files as described in the upcoming section, "Code samples."
2. Skim the first seven chapters to get an overview of C# and Visual Studio 2017, and then concentrate on Chapters 8 through 22.
3. For information about building Universal Windows Platform applications, read Part IV.

If you are switching from Visual Basic to C#:

1. Install the practice files as described in the upcoming section, "Code samples."
2. Work through the chapters in Parts I, II, and III sequentially.
3. For information about building Universal Windows Platform applications, read Part IV.
4. Read the Quick Reference sections at the end of the chapters for information about specific C# and Visual Studio 2017 constructs.

If you are referencing the book after working through the exercises:

1. Use the index or the table of contents to find information about particular subjects.
2. Read the Quick Reference sections at the end of each chapter to find a brief review of the syntax and techniques presented in the chapter.
Most of the book’s chapters include hands-on samples that let you try out the concepts you just learned.
No matter which sections you choose to focus on, be sure to download and install the sample applications
on your system.
Conventions and features in this book
This book presents information by using conventions designed to make the information readable and easy
to follow.
Each exercise consists of a series of tasks, presented as numbered steps (1, 2, and so on) listing
each action you must take to complete the exercise.
Boxed elements with labels such as “Note” provide additional information or alternative methods
for completing a step successfully.
Text that you type (apart from code blocks) appears in bold.
A plus sign (+) between two key names means that you must press those keys at the same time. For
example, “Press Alt+Tab” means that you hold down the Alt key while you press the Tab key.
System requirements
You will need the following hardware and software to complete the practice exercises in this book:
Windows 10 (Home, Professional, Education, or Enterprise) version 1507 or higher.
The most recent build of Visual Studio Community 2017, Visual Studio Professional 2017, or
Visual Studio Enterprise 2017 (make sure that you have installed any updates). As a minimum, you
should select the following workloads when installing Visual Studio 2017:
• Universal Windows Platform development
• .NET desktop development
• ASP.NET and web development
• Azure development
• Data storage and processing
• .NET Core cross-platform development
Note All the exercises and code samples in this book have been developed and tested using Visual
Studio Community 2017. They should all work, unchanged, in Visual Studio Professional 2017 and
Visual Studio Enterprise 2017.
A computer that has a 1.8 GHz or faster processor (dual-core or better recommended)
2 GB RAM (4 GB RAM recommended, add 512 MB if running in a virtual machine)
10 GB of available hard disk space after installing Visual Studio
5400 RPM hard-disk drive (SSD recommended)
A video card that supports a 1024 × 768 or higher resolution display
Internet connection to download software or chapter examples
Depending on your Windows configuration, you might require local Administrator rights to install or
configure Visual Studio 2017.
You also need to enable developer mode on your computer to be able to create and run UWP apps. For
details on how to do this, see “Enable Your Device for Development,” at.
Code samples
Most of the chapters in this book include exercises with which you can interactively try out new material
learned in the main text. You can download all the sample projects, in both their pre-exercise and post-exercise formats, from the following page:
Note In addition to the code samples, your system should have Visual Studio 2017 installed. If
available, install the latest service packs for Windows and Visual Studio.
Installing the code samples
Follow these steps to install the code samples on your computer so that you can use them with the
exercises in this book:
1. Unzip the CSharpSBS.zip file that you downloaded from the book’s website, extracting the files into
your Documents folder.
2. If prompted, review the end-user license agreement. If you accept the terms, select the Accept option
and then click Next.
Note If the license agreement doesn’t appear, you can access it from the same webpage from which
you downloaded the CSharpSBS.zip file.
Using the code samples
Each chapter in this book explains when and how to use the code samples for that chapter. When it’s time
to use a code sample, the book will list the instructions for how to open the files.
Important Many of the code samples depend on NuGet packages that are not included with the
code. These packages are downloaded automatically the first time you build a project. As a result, if
you open a project and examine the code before doing a build, Visual Studio might report a large
number of errors for unresolved references. Building the project will resolve these references, and
the errors should disappear.
For those of you who like to know all the details, here’s a list of the sample Visual Studio 2017
projects and solutions, grouped by the folders in which you can find them. In many cases, the exercises
provide starter files and completed versions of the same projects that you can use as a reference. The
completed projects for each chapter are stored in folders with the suffix “- Complete.”
Project/Solution: Description

Chapter 1
TestHello: This project gets you started. It steps through the creation of a simple program that displays a text-based greeting.
Hello: This project opens a window that prompts the user for his or her name and then displays a greeting.

Chapter 2
PrimitiveDataTypes: This project demonstrates how to declare variables by using each of the primitive types, how to assign values to these variables, and how to display their values in a window.
MathsOperators: This program introduces the arithmetic operators (+ – * / %).

Chapter 3
Methods: In this project, you'll reexamine the code in the MathsOperators project and investigate how it uses methods to structure the code.
DailyRate: This project walks you through writing your own methods, running the methods, and stepping through the method calls by using the Visual Studio 2017 debugger.
DailyRate Using Optional Parameters: This project shows you how to define a method that takes optional parameters and call the method by using named arguments.

Chapter 4
Selection: This project shows you how to use a cascading if statement to implement complex logic, such as comparing the equivalence of two dates.
SwitchStatement: This simple program uses a switch statement to convert characters into their XML representations.

Chapter 5
WhileStatement: This project demonstrates a while statement that reads the contents of a source file one line at a time and displays each line in a text box on a form.
DoStatement: This project uses a do statement to convert a decimal number to its octal representation.

Chapter 6
MathsOperators: This project revisits the MathsOperators project from Chapter 2 and shows how various unhandled exceptions can make the program fail. The try and catch keywords then make the application more robust so that it no longer fails.

Chapter 7
Classes: This project covers the basics of defining your own classes, complete with public constructors, methods, and private fields. It also shows how to create class instances by using the new keyword and how to define static methods and fields.

Chapter 8
Parameters: This program investigates the difference between value parameters and reference parameters. It demonstrates how to use the ref and out keywords.

Chapter 9
StructsAndEnums: This project defines a struct type to represent a calendar date.

Chapter 10
Cards: This project shows how to use arrays to model hands of cards in a card game.

Chapter 11
ParamsArray: This project demonstrates how to use the params keyword to create a single method that can accept any number of int arguments.

Chapter 12
Vehicles: This project creates a simple hierarchy of vehicle classes by using inheritance. It also demonstrates how to define a virtual method.
ExtensionMethod: This project shows how to create an extension method for the int type, providing a method that converts an integer value from base 10 to a different number base.

Chapter 13
Drawing: This project implements part of a graphical drawing package. The project uses interfaces to define the methods that drawing shapes expose and implement.

Chapter 14
GarbageCollectionDemo: This project shows how to implement exception-safe disposal of resources by using the Dispose pattern.

Chapter 15
Drawing Using Properties: This project extends the application in the Drawing project developed in Chapter 13 to encapsulate data in a class by using properties.
AutomaticProperties: This project shows how to create automatic properties for a class and use them to initialize instances of the class.

Chapter 16
Indexers: This project uses two indexers: one to look up a person's phone number when given a name and the other to look up a person's name when given a phone number.

Chapter 17
BinaryTree: This solution shows you how to use generics to build a type-safe structure that can contain elements of any type.
BuildTree: This project demonstrates how to use generics to implement a type-safe method that can take parameters of any type.

Chapter 18
Cards: This project updates the code from Chapter 10 to show how to use collections to model hands of cards in a card game.

Chapter 19
BinaryTree: This project shows you how to implement the generic IEnumerator<T> interface to create an enumerator for the generic Tree class.
IteratorBinaryTree: This solution uses an iterator to generate an enumerator for the generic Tree class.

Chapter 20
Delegates: This project shows how to decouple a method from the application logic that invokes it by using a delegate. The project is then extended to show how to use an event to alert an object to a significant occurrence, and how to catch an event and perform any processing required.

Chapter 21
QueryBinaryTree: This project shows how to use LINQ queries to retrieve data from a binary tree object.

Chapter 22
ComplexNumbers: This project defines a new type that models complex numbers and implements common operators for this type.

Chapter 23
GraphDemo: This project generates and displays a complex graph on a UWP form. It uses a single thread to perform the calculations.
Parallel GraphDemo: This version of the GraphDemo project uses the Parallel class to abstract out the process of creating and managing tasks.
GraphDemo With Cancellation: This project shows how to implement cancellation to halt tasks in a controlled manner before they have completed.
ParallelLoop: This application provides an example showing when you should not use the Parallel class to create and run tasks.

Chapter 24
GraphDemo: This is a version of the GraphDemo project from Chapter 23 that uses the async keyword and the await operator to perform the calculations that generate the graph data asynchronously.
PLINQ: This project shows some examples of using PLINQ to query data by using parallel tasks.
CalculatePI: This project uses a statistical sampling algorithm to calculate an approximation for pi. It uses parallel tasks.

Chapter 25
Customers: This project implements a scalable user interface that can adapt to different device layouts and form factors. The user interface applies XAML styling to change the fonts and background image displayed by the application.

Chapter 26
DataBinding: This is a version of the Customers project that uses data binding to display customer information retrieved from a data source in the user interface. It also shows how to implement the INotifyPropertyChanged interface so that the user interface can update customer information and send these changes back to the data source.
ViewModel: This version of the Customers project separates the user interface from the logic that accesses the data source by implementing the Model-View-ViewModel pattern.
Cortana: This project integrates the Customers app with Cortana. A user can issue voice commands to search for customers by name.

Chapter 27
Web Service: This solution includes a web application that provides an ASP.NET Web API web service that the Customers application uses to retrieve customer data from a SQL Server database. The web service uses an entity model created with the Entity Framework to access the database.
Errata and book support
We’ve made every effort to ensure the accuracy of this book and its companion content. Any errors that
have been reported since this book was published are listed on our Microsoft Press site at:
If you find an error that is not already listed, you can report it to us through the same page.
If you need additional support, email Microsoft Press Book Support at mspinput@microsoft.com.
Please note that product support for Microsoft software and hardware is not offered through the
previous addresses. For help with Microsoft software or hardware, go to:
Stay in touch
Let’s keep the conversation going! We’re on Twitter:
PART I
Introducing Microsoft Visual C# and Microsoft
Visual Studio 2017
This introductory part of the book covers the essentials of the C# language and shows you how to get
started building applications with Visual Studio 2017.
In Part I, you’ll learn how to create new projects in Visual Studio and how to declare variables, use
operators to create values, call methods, and write many of the statements you need when implementing
C# programs. You’ll also learn how to handle exceptions and how to use the Visual Studio debugger to
step through your code and spot problems that prevent your applications from working correctly.
CHAPTER 1
Welcome to C#
After completing this chapter, you will be able to:
Use the Microsoft Visual Studio 2017 programming environment.
Create a C# console application.
Explain the purpose of namespaces.
Create a simple graphical C# application.
This chapter introduces Visual Studio 2017, the programming environment and toolset designed to help
you build applications for Microsoft Windows. Visual Studio 2017 is the ideal tool for writing C# code,
and it provides many features that you will learn about as you progress through this book. In this chapter,
you will use Visual Studio 2017 to build some simple C# applications and get started on the path to
building highly functional solutions for Windows.
Beginning programming with the Visual Studio 2017 environment
Visual Studio 2017 is a tool-rich programming environment containing the functionality that you need to
create large or small C# projects running on Windows. You can even construct projects that seamlessly
combine modules written in different programming languages, such as C++, Visual Basic, and F#. In the
first exercise, you will open the Visual Studio 2017 programming environment and learn how to create a
console application.
Note A console application is an application that runs in a Command Prompt window instead of
providing a graphical user interface (GUI).
Create a console application in Visual Studio 2017
1. On the Windows taskbar, click Start, type Visual Studio 2017, and then press Enter. Alternatively,
you can click the Visual Studio 2017 icon on the Start menu.
Visual Studio 2017 starts and displays the Start page, similar to the following. (Your Start page
might be different, depending on the edition of Visual Studio 2017 you are using.)
2. On the File menu, point to New, and then click Project.
The New Project dialog box opens, displaying the set of installed templates.
3. In the left pane, expand the Installed node (if it is not already expanded), and then click Visual C#. In
the middle pane, verify that the combo box at the top of the pane displays .NET Framework 4.6.1,
and then click Console App (.NET Framework).
Note Make sure that you select Console App (.NET Framework) and not Console App (.NET
Core). You use the .NET Core template for building portable applications that can also run on
other operating systems, such as Linux. However, .NET Core applications do not provide the
range of features available to the complete .NET Framework.
4. In the Location box, type C:\Users\YourName\Documents\Microsoft Press\VCSBS\Chapter 1.
Replace the text YourName in this path with your Windows username.
Note To avoid repetition and save space, throughout the rest of this book I will refer to the path
C:\Users\YourName\Documents simply as your Documents folder.
Tip If the folder you specify does not exist, Visual Studio 2017 creates it for you.
5. In the Name box, type TestHello (type over the existing name, ConsoleApplication1).
6. Ensure that the Create Directory For Solution check box is selected and that the Add To Source
Control check box is clear, and then click OK.
Visual Studio creates the project by using the Console Application template. Visual Studio then
displays the starter code for the project, like this:
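The printed book shows this generated listing as a screenshot. For reference, the Console App (.NET Framework) template in Visual Studio 2017 typically produces starter code along these lines (the namespace matches the project name you chose):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace TestHello
{
    class Program
    {
        static void Main(string[] args)
        {
            // The template generates an empty Main method; you add your code here.
        }
    }
}
```

You will examine each part of this listing, including the using directives at the top, later in this chapter.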
The menu bar at the top of the screen provides access to the features you’ll use in the programming
environment. You can use the keyboard or mouse to access the menus and commands, exactly as you can
in all Windows-based programs. The toolbar is located beneath the menu bar. It provides button shortcuts
to run the most frequently used commands.
The Code and Text Editor window, occupying the main part of the screen, displays the contents of
source files. Solution Explorer appears on the right side of the IDE, adjacent to the Code and Text Editor
window:
Solution Explorer displays the names of the files associated with the project, among other items. You
can also double-click a file name in Solution Explorer to bring that source file to the foreground in the
Code and Text Editor window.
Before writing any code, examine the files listed in Solution Explorer, which Visual Studio 2017 has
created as part of your project:
Solution ‘TestHello’ This is the top-level solution file. Each application contains a single
solution file. A solution can contain one or more projects, and Visual Studio 2017 creates the
solution file to help organize these projects. If you use File Explorer to look at your
Documents\Microsoft Press\VCSBS\Chapter 1\TestHello folder, you’ll see that the actual name of
this file is TestHello.sln.
TestHello This is the C# project file. Each project file references one or more files containing
the source code and other artifacts for the project, such as graphics images. You must write all the
source code in a single project in the same programming language. In File Explorer, this file is
actually called TestHello.csproj, and it is stored in the \Microsoft Press\VCSBS\Chapter 1\
TestHello\TestHello folder in your Documents folder.
Properties This is a folder in the TestHello project. If you expand it (click the arrow next to
Properties), you will see that it contains a file named AssemblyInfo.cs. You can use this file to add
attributes to a program, such as the name of the author and the date the program was written.
Explaining how to use these attributes is beyond the scope of this book.
References This folder contains references to libraries of compiled code that your application
can use. When your C# code is compiled, it is converted into a library and given a unique name. In
the Microsoft .NET Framework, these libraries are called assemblies. Developers use assemblies
to package useful functionality that they have written so that they can distribute it to other
developers who might want to use these features in their own applications. If you expand the
References folder, you will see the default set of references that Visual Studio 2017 adds to your
project. These assemblies provide access to many of the commonly used features of the .NET
Framework and are provided by Microsoft with Visual Studio 2017. You will learn about many of
these assemblies as you progress through the exercises in this book.
App.config This is the application configuration file. It is optional, and it might not always be
present. You can specify settings that your application uses at run time to modify its behavior, such
as the version of the .NET Framework to use to run the application. You will learn more about this
file in later chapters of this book.
Program.cs This is a C# source file, and it is displayed in the Code and Text Editor window
when the project is first created. You will write your code for the console application in this file. It
also contains some code that Visual Studio 2017 provides automatically, which you will examine
shortly.
Writing your first program
The Program.cs file defines a class called Program that contains a method called Main. In C#, all
executable code must be defined within a method, and all methods must belong to a class or a struct. You
will learn more about classes in Chapter 7, “Creating and managing classes and objects,” and you will
learn about structs in Chapter 9, “Creating value types with enumerations and structures.”
The Main method designates the program’s entry point. This method should be defined in the manner
specified in the Program class as a static method; otherwise, the .NET Framework might not recognize it
as the starting point for your application when you run it. (You will look at methods in detail in Chapter 3,
“Writing methods and applying scope,” and Chapter 7 provides more information on static methods.)
Important C# is a case-sensitive language. You must spell Main with an uppercase M.
In the following exercises, you write the code to display the message “Hello World!” to the console
window, build and run your Hello World console application, and learn how namespaces are used to
partition code elements.
Write the code by using Microsoft IntelliSense
1. In the Code and Text Editor window displaying the Program.cs file, place the cursor in the Main
method, immediately after the opening curly brace ( { ), and then press Enter to create a new line.
2. On the new line, type the word Console; this is the name of another class provided by the assemblies
referenced by your application. It provides methods for displaying messages in the console window
and reading input from the keyboard.
As you type the letter C at the start of the word Console, an IntelliSense list appears. This list
contains all of the C# keywords and data types that are valid in this context. You can either continue
typing or scroll through the list and double-click the Console item with the mouse. Alternatively,
after you have typed Cons, the IntelliSense list automatically homes in on the Console item, and you
can press the Tab or Enter key to select it.
Main should look like this:
static void Main(string[] args)
{
Console
}
Note Console is a built-in class.
3. Type a period immediately following Console.
Another IntelliSense list appears, displaying the methods, properties, and fields of the Console
class.
4. Scroll down through the list, select WriteLine, and then press Enter. Alternatively, you can continue
typing the characters W, r, i, t, e, L until WriteLine is selected, and then press Enter.
The IntelliSense list closes, and the word WriteLine is added to the source file. Main should now
look like this:
static void Main(string[] args)
{
Console.WriteLine
}
5. Type ( and another IntelliSense tip will appear.
This tip displays the parameters that the WriteLine method can take. In fact, WriteLine is an
overloaded method, meaning that the Console class contains more than one method named WriteLine
—it provides 19 different versions of this method. You can use each version of the WriteLine method
to output different types of data. (Chapter 3 describes overloaded methods in more detail.) Main
should now look like this:
static void Main(string[] args)
{
Console.WriteLine(
}
Tip You can click the up and down arrows in the tip to scroll through the different overloads of
WriteLine.
6. Type ); and Main should now look like this:
static void Main(string[] args)
{
Console.WriteLine();
}
7. Move the cursor and type the string "Hello World!" (including the quotation marks) between the left
and right parentheses following the WriteLine method.
Main should now look like this:
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
}
Tip Get into the habit of typing matched character pairs, such as parentheses—( and )—and curly
brackets—{ and }—before filling in their contents. It’s easy to forget the closing character if you
wait until after you’ve entered the contents.
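As a sketch of the overloading described in step 5, here are a few of the standard WriteLine overloads in action (the class name is illustrative, not part of the exercise):

```csharp
using System;

class WriteLineOverloads
{
    public static void Main()
    {
        Console.WriteLine();                                // no arguments: writes a blank line
        Console.WriteLine("Hello World!");                  // string overload
        Console.WriteLine(99);                              // int overload
        Console.WriteLine(3.14159);                         // double overload
        Console.WriteLine("{0} + {1} = {2}", 2, 3, 2 + 3);  // composite format string overload
    }
}
```

Each overload converts its arguments to their text representation before writing them to the console.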
IntelliSense icons
When you type a period after the name of a class, IntelliSense displays the name of every member of
that class. To the left of each member name is an icon that depicts the type of member. Common
icons and their types include the following:
Icon Meaning
Method (discussed in Chapter 3)
Property (discussed in Chapter 15, “Implementing properties to access fields”)
Class (discussed in Chapter 7)
Struct (discussed in Chapter 9)
Enum (discussed in Chapter 9)
Extension method (discussed in Chapter 12, “Working with inheritance”)
Interface (discussed in Chapter 13, “Creating interfaces and defining abstract classes”)
Delegate (discussed in Chapter 17, “Introducing generics”)
Event (discussed in Chapter 17)
Namespace (discussed in the next section of this chapter)
You will also see other IntelliSense icons appear as you type code in different contexts.
You will frequently see lines of code containing two forward slashes (//) followed by ordinary text.
These are comments, which are ignored by the compiler but are very useful for developers because they
help document what a program is actually doing. Take, for instance, the following example:
Console.ReadLine(); // Wait for the user to press the Enter key
The compiler skips all text from the two slashes to the end of the line. You can also add multiline
comments that start with a forward slash followed by an asterisk (/*). The compiler skips everything until
it finds an asterisk followed by a forward slash sequence (*/), which could be many lines further down.
You are actively encouraged to document your code with as many meaningful comments as necessary.
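A short example showing both comment styles together (the class and messages are illustrative):

```csharp
using System;

class CommentsDemo
{
    public static void Main()
    {
        // A single-line comment: the compiler ignores everything from // to the end of the line
        Console.WriteLine("Starting"); // Comments can also follow code on the same line

        /* A multiline comment starts with a forward slash followed by an
           asterisk and continues until the compiler finds an asterisk
           followed by a forward slash, possibly many lines further down. */
        Console.WriteLine("Finished");
    }
}
```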
Build and run the console application
1. On the Build menu, click Build Solution.
This action compiles the C# code, resulting in a program that you can run. The Output window
appears below the Code and Text Editor window.
Tip If the Output window does not appear, click Output on the View menu to display it.
In the Output window, you should see messages similar to the following, indicating how the program
is being compiled:
1>------ Build started: Project: TestHello, Configuration: Debug Any CPU ------
1> TestHello -> C:\Users\John\Documents\Microsoft Press\Visual CSharp Step By
Step\Chapter 1\TestHello\TestHello\bin\Debug\TestHello.exe
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
If you have made any mistakes, they will be reported in the Error List window. The following image
shows what happens if you forget to type the closing quotation marks after the text Hello World in the
WriteLine statement. Notice that a single mistake can sometimes cause multiple compiler errors.
Tip To go directly to the line that caused the error, you can double-click an item in the Error
List window. You should also notice that Visual Studio displays a wavy red line under any
lines of code that will not compile when you enter them.
If you have followed the previous instructions carefully, there should be no errors or warnings, and
the program should build successfully.
Tip There is no need to save the file explicitly before building because the Build Solution
command automatically saves it. An asterisk after the file name in the tab above the Code and
Text Editor window indicates that the file has been changed since it was last saved.
2. On the Debug menu, click Start Without Debugging.
A command window opens, and the program runs. The message “Hello World!” appears. The
program now waits for you to press any key, as shown in the following graphic:
Note The “Press any key to continue” prompt is generated by Visual Studio; you did not write
any code to do this. If you run the program by using the Start Debugging command on the Debug
menu, the application runs, but the command window closes immediately without waiting for
you to press a key.
3. Ensure that the command window displaying the program’s output has the focus (meaning that it’s the
window that’s currently active), and then press Enter.
The command window closes, and you return to the Visual Studio 2017 programming environment.
4. In Solution Explorer, click the TestHello project (not the solution), and then, on the Solution
Explorer toolbar, click the Show All Files button. Be aware that you might need to click the double-arrow button on the right edge of the Solution Explorer toolbar to make this button appear.
Entries named bin and obj appear above the Program.cs file. These entries correspond directly to
folders named bin and obj in the project folder (Microsoft Press\VCSBS\Chapter
1\TestHello\TestHello). Visual Studio creates these folders when you build your application; they
contain the executable version of the program together with some other files used to build and debug
the application.
5. In Solution Explorer, expand the bin entry.
Another folder named Debug appears.
Note You might also see a folder named Release.
6. In Solution Explorer, expand the Debug folder.
Several more items appear, including a file named TestHello.exe. This is the compiled program,
which is the file that runs when you click Start Without Debugging on the Debug menu. The other
files contain information that is used by Visual Studio 2017 if you run your program in debug mode
(when you click Start Debugging on the Debug menu).
Using namespaces
The example you have seen so far is a very small program. However, small programs can soon grow into
much bigger programs. As a program grows, two issues arise. First, it is harder to understand and
maintain big programs than it is to understand and maintain smaller ones. Second, more code usually
means more classes, with more methods, requiring you to keep track of more names. As the number of
names increases, so does the likelihood of the project build failing because two or more names clash. For
example, you might try to create two classes with the same name. The situation becomes more
complicated when a program references assemblies written by other developers who have also used a
variety of names.
In the past, programmers tried to solve the name-clashing problem by prefixing names with some sort
of qualifier (or set of qualifiers). Using prefixes as qualifiers is not a good solution because it’s not
scalable. Names become longer, you spend less time writing software and more time typing (there is a
difference), and you spend too much time reading and rereading incomprehensibly long names.
Namespaces help solve this problem by creating a container for items such as classes. Two classes
with the same name will not be confused with each other if they live in different namespaces. You can
create a class named Greeting inside the namespace named TestHello by using the namespace keyword
like this:
namespace TestHello
{
class Greeting
{
...
}
}
You can then refer to the Greeting class as TestHello.Greeting in your programs. If another developer
also creates a Greeting class in a different namespace, such as NewNamespace, and you install the
assembly that contains this class on your computer, your programs will still work as expected because
they are using your TestHello.Greeting class. If you want to refer to the other developer’s Greeting class,
you must specify it as NewNamespace.Greeting.
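A minimal sketch of this idea, with both Greeting classes placed in one file for brevity (the Text property is illustrative, not from the book):

```csharp
using System;

namespace TestHello
{
    public class Greeting
    {
        public string Text => "Hello from TestHello";
    }
}

namespace NewNamespace
{
    public class Greeting
    {
        public string Text => "Hello from NewNamespace";
    }
}

class NamespaceDemo
{
    public static void Main()
    {
        // Fully qualified names keep the two Greeting classes distinct.
        var mine = new TestHello.Greeting();
        var theirs = new NewNamespace.Greeting();
        Console.WriteLine(mine.Text);
        Console.WriteLine(theirs.Text);
    }
}
```

Because the two classes live in different namespaces, the compiler never confuses them, even though both are named Greeting.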
It is good practice to define all your classes in namespaces, and the Visual Studio 2017 environment
follows this recommendation by using the name of your project as the top-level namespace. The .NET
Framework class library also adheres to this recommendation; every class in the .NET Framework lives
within a namespace. For example, the Console class lives within the System namespace. This means that
its full name is actually System.Console.
Of course, if you had to write the full name of a class every time you used it, the situation would be no
better than prefixing qualifiers or even just naming the class with some globally unique name such as
SystemConsole. Fortunately, you can solve this problem with a using directive in your programs. If you
return to the TestHello program in Visual Studio 2017 and look at the file Program.cs in the Code and
Text Editor window, you will notice the following lines at the top of the file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
These lines are using directives. A using directive brings a namespace into scope. In the subsequent
code in the same file, you no longer need to explicitly qualify objects with the namespace to which they
belong. The five namespaces shown contain classes that are used so often that Visual Studio 2017
automatically adds these using directives every time you create a new project. You can add more using
directives to the top of a source file if you need to reference other namespaces.
Note You might notice that some of the using directives appear grayed-out. These directives
correspond to namespaces that are not currently used by your application. If you don’t need them
when you have finished writing your code, you can safely delete them. However, if you require
items that are held in these namespaces later, you will have to add the using directives back in
again.
The following exercise demonstrates the concept of namespaces in more depth.
Try longhand names
1. In the Code and Text Editor window displaying the Program.cs file, comment out the first using
directive at the top of the file, like this:
//using System;
2. On the Build menu, click Build Solution.
The build fails, and the Error List window displays the following error message:
The name 'Console' does not exist in the current context.
3. In the Error List window, double-click the error message.
The identifier that caused the error is highlighted in the Program.cs source file with a red squiggle.
4. In the Code and Text Editor window, edit the Main method to use the fully qualified name
System.Console.
Main should look like this:
static void Main(string[] args)
{
System.Console.WriteLine("Hello World!");
}
Note When you type the period after System, IntelliSense displays the names of all the items in
the System namespace.
5. On the Build menu, click Build Solution.
The project should build successfully this time. If it doesn’t, ensure that Main is exactly as it appears
in the preceding code, and then try building again.
6. Run the application to be sure that it still works by clicking Start Without Debugging on the Debug menu.
7. When the program runs and displays “Hello World!” in the console window, press Enter to return to
Visual Studio 2017.
Namespaces and assemblies
A using directive simply brings the items in a namespace into scope and frees you from having to
fully qualify the names of classes in your code. Classes are compiled into assemblies. An assembly
is a file that usually has the .dll file name extension, although strictly speaking, executable programs
with the .exe file name extension are also assemblies.
An assembly can contain many classes. The classes that the .NET Framework class library
includes, such as System.Console, are provided in assemblies that are installed on your computer
together with Visual Studio. You will find that the .NET Framework class library contains thousands
of classes. If they were all held in the same assembly, the assembly would be huge and difficult to
maintain. (If Microsoft were to update a single method in a single class, it would have to distribute
the entire class library to all developers!)
For this reason, the .NET Framework class library is split into a number of assemblies,
partitioned by the functions that they perform or the technology that they implement. For example, a
“core” assembly (actually called mscorlib.dll) contains all the common classes, such as
System.Console, and other assemblies contain classes for manipulating databases, accessing web
services, building GUIs, and so on. If you want to make use of a class in an assembly, you must add
a reference to that assembly to your project. You can then add using directives to your code that
bring the items in namespaces in that assembly into scope.
You should note that there is not necessarily a 1:1 equivalence between an assembly and a
namespace: A single assembly can contain classes defined in many namespaces, and a single
namespace can span multiple assemblies. For example, the classes and items in the System
namespace are actually implemented by several assemblies, including mscorlib.dll, System.dll, and
System.Core.dll, among others. This all sounds very confusing at first, but you will soon get used to
it. When you use Visual Studio to create an application, the template you select automatically
includes references to the appropriate assemblies. For example, in Solution Explorer for the
TestHello project, expand the References folder. You will see that a console application
automatically contains references to assemblies called Microsoft.CSharp, System, System.Core,
System.Data, System.Data.DataSetExtensions, System.Net.Http, System.Xml, and
System.Xml.Linq. You might be surprised to see that mscorlib.dll is not included in this list. The
reason for this is that all .NET Framework applications must use this assembly because it contains
fundamental runtime functionality. The References folder lists only the optional assemblies; you can
add or remove assemblies from this folder as necessary.
To add references for additional assemblies to a project, right-click the References folder and
then click Add Reference. You will perform this task in later exercises. You can remove an
assembly by right-clicking the assembly in the References folder and then clicking Remove.
Creating a graphical application
So far, you have used Visual Studio 2017 to create and run a basic console application. The Visual Studio
2017 programming environment also contains everything you need to create graphical applications for
Windows 10. These templates are referred to as Universal Windows Platform (UWP) apps because they
enable you to create apps that function on any device that runs Windows, such as desktop computers,
tablets, and phones. You can design the user interface (UI) of a Windows application interactively. Visual
Studio 2017 then generates the program statements to implement the user interface you’ve designed.
Visual Studio 2017 provides you with two views of a graphical application: the design view and the
code view. You use the Code and Text Editor window to modify and maintain the code and program logic
for a graphical application, and you use the Design View window to lay out your UI. You can switch
between the two views whenever you want.
In the following set of exercises, you’ll learn how to create a graphical application by using Visual
Studio 2017. This program displays a simple form containing a text box where you can enter your name
and a button that when clicked displays a personalized greeting in a message box.
If you want more information about the specifics of writing UWP apps, the final few chapters in
Part IV of this book provide more detail and guidance.
Create a graphical application in Visual Studio 2017
1. Start Visual Studio 2017 if it is not already running.
2. On the File menu, point to New, and then click Project.
The New Project dialog box opens.
3. In the left pane, expand the Installed node (if it is not already expanded), expand Visual C#, and then
click Windows Universal.
4. In the middle pane, click the Blank App (Universal Windows) icon.
5. Ensure that the Location field refers to the \Microsoft Press\VCSBS\Chapter 1 folder in your
Documents folder.
6. In the Name box, type Hello.
7. Ensure that the Create Directory For Solution check box is selected, and then click OK.
8. At this point, you will be prompted with a dialog box asking you to specify on which builds of
Windows 10 your application is going to run. Later builds of Windows 10 have more and newer
features available. Microsoft recommends that you always select the latest build of Windows 10 as
the target version, but if you are developing enterprise applications that also need to run on older
versions then select the oldest version of Windows 10 that users are using as the minimum version.
However, do not automatically select the oldest version of Windows 10 as this might restrict some of
the functionality available to your application:
If this is the first time that you have created a UWP application, you might also be prompted to
enable developer mode for Windows 10, and the Windows 10 settings screen will appear. Select
Developer Mode. A dialog box will appear confirming that this is what you want to do, as it
bypasses some of the security features of WindowsClick Yes. Windows will download and install
the Developer Mode package, which provides additional features for debugging UWP applications:
9. Note External apps that are not downloaded from the Windows Store could potentially expose
personal data and pose other security risks, but it is necessary to enable Developer Mode if you are
building and testing your own custom applications.Return to Visual Studio. After the app has been
created, look in the Solution Explorer pane.
Don’t be fooled by the name of the application template. Although it is called Blank App, this
template actually provides a number of files and contains some code. For example, if you expand the
MainPage.xaml folder, you will find a C# file named MainPage.xaml.cs. This file is where you add
the initial code for the application.
10. In Solution Explorer, double-click MainPage.xaml.
This file contains the layout of the UI. The Design View window shows two representations of this
file:
Note XAML stands for eXtensible Application Markup Language, which is the language that
Universal Windows Platform applications use to define the layout for the GUI of an
application. You will learn more about XAML as you progress through the exercises in this
book.
At the top is a graphical view depicting the screen of, by default, a Surface Book. The lower pane
contains a description of the contents of this screen using XAML. XAML is an XML-like language
used by UWP applications to define the layout of a form and its contents. If you have knowledge of
XML, XAML should look familiar.
In the next exercise, you will use the Design View window to lay out the UI for the application, and
you will examine the XAML code that this layout generates.
Tip Close the Output and Error List windows to provide more space for displaying the Design
View window.
Note Before going further, it is worth explaining some terminology. In traditional Windows
applications, the UI consists of one or more windows, but in a Universal Windows Platform
app, the corresponding items are referred to as pages. For the sake of clarity, I will simply
refer to both items by using the blanket term form. However, I will continue to use the word
window to refer to items in the Visual Studio 2017 IDE, such as the Design View window.
In the following exercises, you will use the Design View window to add three controls to the form
displayed by your application. You will also examine some of the C# code automatically generated by
Visual Studio 2017 to implement these controls.
Create the user interface
1. Click the Toolbox tab that appears in the margin to the left of the form in the Design View window.
The Toolbox appears and displays the various components and controls that you can place on a form.
By default, the General section of the toolbox is selected, which doesn’t contain any controls (yet).
2. Expand the Common XAML Controls section.
This section displays a list of controls that most graphical applications use.
TipThe All XAML Controls section displays a more extensive list of controls.
3. In the Common XAML Controls section, click TextBlock, and then drag the TextBlock control onto
the form displayed in the Design View window.
Tip Be sure that you select the TextBlock control and not the TextBox control. If you
accidentally place the wrong control on a form, you can easily remove it by clicking the item
on the form and then pressing Delete.
A TextBlock control is added to the form (you will move it to its correct location in a moment), and
the Toolbox disappears from view.
Tip If you want the Toolbox to remain visible but not hide any part of the form, at the right end
of the Toolbox title bar, click the Auto Hide button (it looks like a pin). The Toolbox is docked
on the left side of the Visual Studio 2017 window, and the Design View window shrinks to
accommodate it. (You might lose a lot of space if you have a low-resolution screen.) Clicking
the Auto Hide button once more causes the Toolbox to disappear again.
4. The TextBlock control on the form is probably not exactly where you want it. You can click and drag
the controls you have added to a form to reposition them. Using this technique, move the TextBlock
control so that it is positioned toward the upper-left corner of the form. (The exact placement is not
critical for this application.) Notice that you might need to click away from the control and then click
it again before you can move it in the Design View window.
The XAML description of the form in the lower pane now includes the TextBlock control, together
with properties such as its location on the form, governed by the Margin property, the default text
displayed by this control in the Text property, the alignment of text displayed by this control as
specified by the HorizontalAlignment and VerticalAlignment properties, and whether text should
wrap if it exceeds the width of the control.
Your XAML code for the TextBlock will look similar to this (your values for the Margin property
might be slightly different, depending on where you positioned the TextBlock control on the form):
<TextBlock HorizontalAlignment="Left" Margin="150,180,0,0" Text="TextBlock"
TextWrapping="Wrap" VerticalAlignment="Top"/>
The XAML pane and the Design View window have a two-way relationship with each other. You
can edit the values in the XAML pane, and the changes will be reflected in the Design View window.
For example, you can change the location of the TextBlock control by modifying the values in the
Margin property.
5. On the View menu, click Properties Window (it is the last item in the menu).
If it was not already displayed, the Properties window appears at the lower right of the screen, under
the Solution Explorer pane. You can specify the properties of controls by using the XAML pane
under the Design View window, but the Properties window provides a more convenient way for you
to modify the properties for items on a form, as well as other items in a project.
The Properties window is context sensitive in that it displays the properties for the currently
selected item. If you click the form displayed in the Design View window (outside the TextBlock
control), you can see that the Properties window displays the properties for a Grid element. If you
look at the XAML pane, you should see that the TextBlock control is contained within a Grid
element. All forms contain a Grid element that controls the layout of displayed items; for example,
you can define tabular layouts by adding rows and columns to the Grid.
6. In the Design View window, click the TextBlock control. The Properties window displays the
properties for the TextBlock control again.
7. In the Properties window, expand the Text property of the TextBlock control. Change the FontSize
property to 20 pt and then press Enter. This property is located next to the drop-down list box
containing the name of the font, which will show Segoe UI:
Note The suffix pt indicates that the font size is measured in points, where 1 point is equal to
1/72 of an inch.
8. In the XAML pane below the Design View window, examine the text that defines the TextBlock
control. If you scroll to the end of the line, you should see the text FontSize=”26.667”. This is an
approximate conversion of the font size from points to pixels (3 points is assumed to be roughly 4
pixels, although a precise conversion would depend on your screen size and resolution). Any
changes that you make using the Properties window are automatically reflected in the XAML
definitions, and vice versa.
Type over the value of the FontSize attribute in the XAML pane and change it to 24. The font size of
the text for the TextBlock control in the Design View window and the Properties window changes.
9. In the Properties window, examine the other properties of the TextBlock control. Feel free to
experiment by changing them to see their effects.
Notice that as you change the values of properties, these properties are added to the definition of the
TextBlock control in the XAML pane. Each control that you add to a form has a default set of
property values, and these values are not displayed in the XAML pane unless you change them.
10. Change the value of the Text property of the TextBlock control from TextBlock to Please enter your
name. You can do this either by editing the Text element in the XAML pane or by changing the value
in the Properties window (this property is located in the Common section in the Properties window).
Notice that the text displayed in the TextBlock control in the Design View window changes.
11. Click the form in the Design View window, and then display the Toolbox again.
12. In the Toolbox, click and drag the TextBox control onto the form. Move the TextBox control so that it
is directly below the TextBlock control.
Tip When you drag a control on a form, alignment indicators appear automatically. These give
you a quick visual cue to ensure that controls are lined up neatly. You can also manually edit
the Margin property in the XAML pane to set the left-hand margin to the same value of that for
the TextBlock control.
13. In the Design View window, place the mouse over the right edge of the TextBox control. The mouse
pointer should change to a double-headed arrow, indicating that you can resize the control. Drag the
right edge of the TextBox control until it is aligned with the right edge of the TextBlock control
above.
14. While the TextBox control is selected, at the top of the Properties window, change the value of the
Name property from textBox to userName, as illustrated here:
Note You will learn more about naming conventions for controls and variables in Chapter 2,
“Working with variables, operators, and expressions.”
15. Display the Toolbox again, and then click and drag a Button control onto the form. Place the Button
control to the right of the TextBox control on the form so that the bottom of the button is aligned
horizontally with the bottom of the text box.
16. Using the Properties window, change the Name property of the Button control to OK and change the
Content property (in the Common section) from Button to OK, and then press Enter. Verify that the
caption of the Button control on the form changes to display the text OK.
The form should now look similar to the following figure:
Note The drop-down menu in the upper-left corner of the Design View window enables you to
view how your form will render on different screen sizes and resolutions. In this example, the
default view of a 13.5-inch Surface Book with a 3000 x 2000 resolution is selected. To the
right of the drop-down menu, two buttons enable you to switch between portrait view and
landscape view. The projects used in subsequent chapters will use a 13.3-inch Desktop view
as the design surface, but you can keep the Surface Book form factor for this exercise.
17. On the Build menu, click Build Solution, and then verify that the project builds successfully.
18. Ensure that the Debug Target drop-down list is set to Local Machine as shown below. (It might
default to Device and attempt to connect to a Windows phone device, and the build will probably
fail). Then, on the Debug menu, click Start Debugging.
The application should run and display your form. The form looks like this:
Note When you run a Universal Windows Platform app in debug mode, a debug toolbar
appears near the top of the form. You can use this toolbar to track how the user is navigating
through the form and monitor how the contents of the controls on the form change. You can
ignore this menu for now; click the double bar at the bottom of the toolbar to minimize it.
In the text box, you can overtype the existing text with your name, and then click OK, but nothing
happens yet. You need to add some code to indicate what should happen when the user clicks the OK
button, which is what you will do next.
19. Return to Visual Studio 2017. On the Debug menu, click Stop Debugging.
Note You can also click the Close button (the X in the upper-right corner of the form) to close
the form, stop debugging, and return to Visual Studio.
You have managed to create a graphical application without writing a single line of C# code. It does
not do much yet (you will have to write some code soon), but Visual Studio 2017 actually generates a lot
of code for you that handles routine tasks that all graphical applications must perform, such as starting up
and displaying a window. Before adding your own code to the application, it helps to have an
understanding of what Visual Studio has produced for you. The following section describes these
automatically generated artifacts.
Examining the Universal Windows Platform app
In Solution Explorer, expand the MainPage.xaml node. The file MainPage.xaml.cs appears; double-click
this file. The following code for the form is displayed in the Code and Text Editor window: Hello
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage : Page
{
public MainPage()
{
this.InitializeComponent();
}
}
}
In addition to a good number of using directives bringing into scope some namespaces that most UWP
apps use, the file contains the definition of a class called MainPage but not much else. There is a little bit
of code for the MainPage class known as a constructor that calls a method named InitializeComponent.
A constructor is a special method with the same name as the class. It runs when an instance of the class is
created and can contain code to initialize the instance. You will learn about constructors in Chapter 7.
The class actually contains a lot more code than the few lines shown in the MainPage.xaml.cs file, but
much of it is generated automatically based on the XAML description of the form and is hidden from you.
This hidden code performs operations such as creating and displaying the form and creating and
positioning the various controls on the form.
Tip You can also display the C# code file for a page in a UWP app by clicking Code on the View
menu when the Design View window is displayed.
At this point, you might be wondering where the Main method is and how the form gets displayed
when the application runs. Remember that in a console application Main defines the point at which the
program starts. A graphical application is slightly different.
In Solution Explorer, you should notice another source file called App.xaml. If you expand the node
for this file, you will see another file called App.xaml.cs. In a UWP app, the App.xaml file provides the
entry point at which the application starts running. If you double-click App.xaml.cs in Solution Explorer,
you should see some code that looks similar to this:
using
using
using
using
using
using
using
using
using
using
using
System;
System.Collections.Generic;
System.IO;
System.Linq;
System.Runtime.InteropServices.WindowsRuntime;
Windows.ApplicationModel;
Windows.ApplicationModel.Activation;
Windows.Foundation;
Windows.Foundation.Collections;
Windows.UI.Xaml;
Windows.UI.Xaml.Controls;
using
using
using
using
using
Windows.UI.Xaml.Controls.Primitives;
Windows.UI.Xaml.Data;
Windows.UI.Xaml.Input;
Windows.UI.Xaml.Media;
Windows.UI.Xaml.Navigation;
namespace Hello
{
/// ();
}
}
}
Much of this code consists of comments (the lines beginning “///”) and other statements that you don’t
need to understand just yet, but the key elements are located in the OnLaunched method, highlighted in
bold. This method runs when the application starts and the code in this method causes the application to
create a new Frame object, display the MainPage form in this frame, and then activate it. It is not
necessary at this stage to fully comprehend how this code works or the syntax of any of these statements,
but it’s helpful that you simply appreciate that this is how the application displays the form when it starts
running.
Adding code to the graphical application
Now that you know a little bit about the structure of a graphical application, the time has come to write
some code to make your application actually do something.
Write the code for the OK button
1. In the Design View window, open the MainPage.xaml file (double-click MainPage.xaml in Solution
Explorer).
2. While still in the Design View window, click the OK button on the form to select it.
3. In the Properties window, click the Event Handlers button for the selected element.
This button displays an icon that looks like a bolt of lightning, as demonstrated here:
The Properties window displays a list of event names for the Button control. An event indicates a
significant action that usually requires a response, and you can write your own code to perform this
response.
4. In the box adjacent to the Click event, type okClick, and then press Enter.
The MainPage.xaml.cs file appears in the Code and Text Editor window, and a new method named
okClick is added to the MainPage class. The method looks like this:
private void okClick(object sender, RoutedEventArgs e)
{
}
Do not worry too much about the syntax of this code just yet—you will learn all about methods in
Chapter 3.
5. Add the following using directive shown in bold to the list at the top of the file (the ellipsis
character […] indicates statements that have been omitted for brevity):
using System;
...
using Windows.UI.Xaml.Navigation;
using Windows.UI.Popups;
6. Add the following code shown in bold to the okClick method:
private void okClick(object sender, RoutedEventArgs e)
{
MessageDialog msg = new MessageDialog("Hello " + userName.Text);
msg.ShowAsync();
}
This code will run when the user clicks the OK button. Again, do not worry too much about the
syntax, just be sure that you copy the code exactly as shown; you will find out what these statements
mean in the next few chapters. The key things to understand are that the first statement creates a
MessageDialog object with the message “Hello <YourName>”, where <YourName> is the name that
you type into the TextBox on the form. The second statement displays the MessageDialog, causing it
to appear on the screen. The MessageDialog class is defined in the Windows.UI.Popups namespace,
which is why you added it in step 5.
Note You might notice that Visual Studio 2017 adds a wavy green line under the last line of
code you typed. If you hover over this line of code, Visual Studio displays a warning that states
“Because this call is not awaited, execution of the current method continues before the call is
completed. Consider applying the ‘await’ operator to the result of the call.” Essentially, this
warning means that you are not taking full advantage of the asynchronous functionality that the
.NET Framework provides. You can safely ignore this warning.
7. Click the MainPage.xaml tab above the Code and Text Editor window to display the form in the
Design View window again.
8. In the lower pane displaying the XAML description of the form, examine the Button element, but be
careful not to change anything. Notice that it now contains an attribute named Click that refers to the
okClick method.
<Button x:
9. On the Debug menu, click Start Debugging.
10. When the form appears, in the text box, type your name over the existing text, and then click OK.
A message dialog box appears displaying the following greeting:
11. Click Close in the message box.
12. Return to Visual Studio 2017 and then, on the Debug menu, click Stop Debugging.
Other types of graphical applications
Apart from Universal Windows apps, Visual Studio 2017 also lets you create other types of
graphical applications. These applications are intended for specific environments and do not
include the adaptability to enable them to run across multiple platforms unchanged.
The other types of graphical applications available include:
WPF App. You can find this template in the list of Windows Classic Desktop templates in
Visual Studio 2017. WPF stands for “Windows Presentation Foundation.” WPF is targeted at
applications that run on the Windows desktop, rather than applications that can adapt to a
range of different devices and form factors. It provides an extremely powerful framework
based on vector graphics that enable the user interface to scale smoothly across different
screen resolutions. Many of the key features of WPF are available in UWP applications,
although WPF provides additional functionality that is only appropriate for applications
running on powerful desktop machines.
Windows Forms App. This is an older graphical library that dates back to the origins of the
.NET Framework. You can also find this template in the Class Desktop template list in Visual
Studio 2017. As its name implies, the Windows Forms library is intended for building more
classical forms-based applications using the Graphics Device Interface (GDI) libraries
provided with Windows at that time. While this framework is quick to use, it provides neither
the functionality and scalability of WPF nor the portability of UWP.
If you are building graphical applications, unless you have good reasons not to do so, I would
suggest that you opt for the UWP template.
Summary
In this chapter, you saw how to use Visual Studio 2017 to create, build, and run applications. You created
a console application that displays its output in a console window, and you created a Universal Windows
Platform application with a simple GUI.
If you want to continue to the next chapter, keep Visual Studio 2017 running and turn to Chapter 2.
If you want to exit Visual Studio 2017 now, on the File menu, click Exit. If you see a Save dialog
box, click Yes to save the project.
Quick reference
To
Create a new
console
application
using Visual
Studio 2017
Create a new
Universal
Windows app
using Visual
Studio 2017
Build the
application
Run the
application in
debug mode
Run the
application
without
debugging
Do this
On the File menu, point to New, and then click Project to open the New Project dialog
box. In the left pane, expand Installed, and then click Visual C#. In the middle pane,
click Console Application. In the Location box, specify a directory for the project files.
Type a name for the project, and then click OK.
On the File menu, point to New, and then click Project to open the New Project dialog
box. In the left pane, expand Installed, expand Visual C#, expand Windows, and then
click Universal. In the middle pane, click Blank App (Universal Windows). In the
Location box, specify a directory for the project files. Type a name for the project, and
then click OK.
On the Build menu, click Build Solution.
On the Debug menu, click Start Debugging.
On the Debug menu, click Start Without Debugging.
CHAPTER 2
Working with variables, operators, and
expressions 2017 another!");
Tip C# is a “free format” language, which means that white space, such as a space character or a
new line, is not significant except as a separator. In other words, you are free to lay out your
statements in any style you choose. However, you should adopt a simple, consistent layout style to
make your programs easier to read and understand..
Using identifiers
Identifiers are the names that you use to identify the elements in your programs, such as namespaces,
classes, methods, and variables. (You will learn about variables shortly.) In C#, you must adhere to the
following syntax rules when choosing identifiers:
You can use only letters (uppercase and lowercase), digits, and underscore characters.
An identifier must start with a letter or an underscore.
For example, result, _score, footballTeam, and plan9 are all valid identifiers, whereas result%,
footballTeam$, and 9plan are not.
Important C# is a case-sensitive language: footballTeam and FootballTeam are two different
identifiers.
Identifying keywords
The C# language reserves 77 identifiers for its own use, and you cannot reuse these identifiers for your
own purposes. These identifiers are called keywords, and each has a particular meaning. Examples of
keywords are class, namespace, and using. You’ll learn the meaning of most of the C# keywords as you
proceed through this book. The following is the list of keywords:
abstract
as
base
bool
break
do
double
else
enum
event
in
int
interface
internal
is
protected
public
readonly
ref
return
true
try
typeof
uint
ulong
byte
case
catch
char
checked
class
const
continue
decimal
default
delegate
explicit
extern
false
finally
fixed
float
for
foreach
goto
if
implicit
lock
long
namespace
new
null
object
operator
out
override
params
private
sbyte
sealed
short
sizeof
stackalloc
static
string
struct
switch
this
throw
unchecked
unsafe
ushort
using
virtual
void
volatile
while
Tip In the Visual Studio 2017 Code and Text Editor window, keywords are colored blue when you
type them.
C# also uses the following identifiers. These identifiers are not reserved by C#, which means that you
can use these names as identifiers for your own methods, variables, and classes, but you should avoid
doing so if at all possible.
add
alias
ascending
async
await
descending
dynamic
from
get
global
group
into
join
let
nameof
orderby
partial
remove
set
value
var
when
where
yield
Using variables
A variable is a storage location that holds a value. You can think of a variable as a box in the computer’s
memory that holds temporary information. You must give each variable in a program an unambiguous
name that uniquely identifies it in the context in which it is used. stored there earlier.
Naming variables
You should adopt a naming convention for variables that helps you avoid confusion concerning the
variables you have defined. This is especially important if you are part of a project team with several
developers working on different parts of an application; a consistent naming convention helps to avoid
confusion and can reduce the scope for bugs. The following list contains some general recommendations:
Don’t start an identifier with an underscore. Although this is legal in C#, it can limit the
interoperability of your code with applications built by using other languages, such as Microsoft
Visual Basic.
Don’t create identifiers that differ only by case. For example, do not create one variable named
myVariable and another named MyVariable for use at the same time because it is too easy to
confuse one with the other. Also, defining identifiers that differ only by case can limit the ability to
reuse classes in applications developed with other languages that are not case-sensitive, such as
Visual Basic.
Start the name with a lowercase letter.
In a multi-word identifier, start the second and each subsequent word with an uppercase letter.
(This is called camelCase notation.)
Don’t use Hungarian notation. (If you are a Microsoft Visual C++ developer, you are probably
familiar with Hungarian notation. If you don’t know what Hungarian notation is, don’t worry about
it!)
For example, score, footballTeam, _score, and FootballTeam are all valid variable names, but only
the first two are recommended.
Declaring variables
Variables hold values. C# has many different types of values that it can store and process: integers,
floating-point numbers, and strings of characters, to name three. When you declare a variable, you must
specify the type of data it will hold.
You declare the t | https://b-ok2.org/book/3610184/726ac5?dsource=mostpopular | CC-MAIN-2020-10 | refinedweb | 14,553 | 51.28 |
import "go.chromium.org/luci/vpython/spec"
Package files: env.go, load.go, match.go, spec.go
const (
	// DefaultInlineBeginGuard is the default loader InlineBeginGuard value.
	DefaultInlineBeginGuard = "[VPYTHON:BEGIN]"

	// DefaultInlineEndGuard is the default loader InlineEndGuard value.
	DefaultInlineEndGuard = "[VPYTHON:END]"
)
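These guards delimit a specification embedded inline in a script's comments. The following is a minimal, self-contained sketch of how such a block might be extracted; the `extractInlineSpec` helper and its comment-stripping rule are illustrative assumptions, not the actual loader implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// extractInlineSpec returns the text between the BEGIN and END guards,
// stripping a leading "# " comment prefix from each enclosed line.
// Illustrative sketch only — not vpython's real loader logic.
func extractInlineSpec(script, begin, end string) (string, bool) {
	var body []string
	inside := false
	for _, line := range strings.Split(script, "\n") {
		switch {
		case strings.Contains(line, begin):
			inside = true
		case strings.Contains(line, end):
			return strings.Join(body, "\n"), true
		case inside:
			body = append(body, strings.TrimPrefix(line, "# "))
		}
	}
	return "", false
}

func main() {
	script := `#!/usr/bin/env vpython
# [VPYTHON:BEGIN]
# wheel: <
#   name: "infra/python/wheels/six"
#   version: "1.10.0"
# >
# [VPYTHON:END]
print("hello")`
	spec, ok := extractInlineSpec(script, "[VPYTHON:BEGIN]", "[VPYTHON:END]")
	fmt.Println(ok)
	fmt.Println(spec)
}
```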
DefaultPartnerSuffix is the default filesystem suffix for a script's partner specification file.
See LoadForScript for more information.
DefaultCommonSpecNames is the name of the "common" specification file.
If a script doesn't explicitly specify a specification file, "vpython" will automatically walk up from the script's directory towards the filesystem root and will use the first file named CommonName that it finds. This enables repository-wide and shared environment specifications.
Hash hashes the contents of the supplied "spec" and "rt" and returns the result as a hex-encoded string.
If not empty, the contents of extra are prefixed to hash string. This can be used to factor additional influences into the spec hash.
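The exact byte layout isn't spelled out above, but the shape of the operation — fold an optional extra prefix plus the spec and runtime contents into a hex-encoded digest — can be sketched with the standard library. The function name and separator below are assumptions for illustration, not the package's actual implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashSpec sketches the idea behind Hash: mix an optional "extra" prefix
// with the spec and runtime contents, and return a hex-encoded digest.
func hashSpec(extra, spec, rt string) string {
	h := sha256.New()
	if extra != "" {
		fmt.Fprintf(h, "%s:", extra) // extra factors into the hash only when present
	}
	fmt.Fprintf(h, "%s:%s", spec, rt)
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(hashSpec("", "wheel { name: \"example\" }", "cpython2.7"))
}
```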
Load loads a specification file text protobuf from the supplied path.
func LoadEnvironment(path string, environment *vpython.Environment) error
LoadEnvironment loads an environment file text protobuf from the supplied path.
func NormalizeEnvironment(env *vpython.Environment) error
NormalizeEnvironment normalizes the supplied Environment such that two messages with identical meaning will have identical representation.
NormalizeSpec normalizes the specification Message such that two messages with identical meaning will have identical representation.
If multiple wheel entries exist for the same package name, they must also share a version. If they don't, an error will be returned. Otherwise, they will be merged into a single wheel entry.
NormalizeSpec will prune any Wheel entries that don't match the specified tags, and will remove the match entries from any remaining Wheel entries.
PEP425Matches returns true if match matches at least one of the tags in tags.
A match is determined if the non-zero fields in match equal the equivalent fields in a tag.
PackageMatches returns true if the package's match constraints are compatible with tags. A package matches if:
- None of the tags matches any of the "not_match_tag" entries, and
- Every "match_tag" entry matches at least one tag.
As a special case, if the package doesn't specify any match tags, it will always match regardless of the supplied PEP425 tags. This handles the default case where the user specifies no constraints.
See PEP425Matches for information about how tags are matched.
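Reduced to plain string tags (rather than the real PEP425 messages), the matching rules above can be sketched as a stand-alone function — this is an illustrative simplification, not the package's code:

```go
package main

import "fmt"

// packageMatches mirrors the described semantics with tags reduced to plain
// strings: no notMatch entry may be present in tags, every match entry must
// be present, and an empty match list means "always matches".
func packageMatches(match, notMatch, tags []string) bool {
	has := make(map[string]bool, len(tags))
	for _, t := range tags {
		has[t] = true
	}
	for _, t := range notMatch {
		if has[t] {
			return false // a not_match_tag matched
		}
	}
	if len(match) == 0 {
		return true // special case: unconstrained packages always match
	}
	for _, t := range match {
		if !has[t] {
			return false // every match_tag must match at least one tag
		}
	}
	return true
}

func main() {
	tags := []string{"cp27", "manylinux1_x86_64"}
	fmt.Println(packageMatches(nil, nil, tags))                          // true
	fmt.Println(packageMatches([]string{"cp27"}, nil, tags))             // true
	fmt.Println(packageMatches(nil, []string{"manylinux1_x86_64"}, tags)) // false
}
```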
Parse loads a specification message from a content string.
func ParseEnvironment(content string, environment *vpython.Environment) error
ParseEnvironment loads an environment protobuf message from a content string.
Render creates a human-readable string from spec.
type Loader struct {
    // InlineBeginGuard is a string that signifies the beginning of an inline
    // specification. If empty, DefaultInlineBeginGuard will be used.
    InlineBeginGuard string

    // InlineEndGuard is a string that signifies the end of an inline
    // specification. If empty, DefaultInlineEndGuard will be used.
    InlineEndGuard string

    // CommonFilesystemBarriers is a list of filenames. During common spec, Loader
    // walks directories towards root looking for a file named CommonName. If a
    // directory is observed to contain a file in CommonFilesystemBarriers, the
    // walk will terminate after processing that directory.
    CommonFilesystemBarriers []string

    // CommonSpecNames, if not empty, is the list of common "vpython" spec files
    // to use. If empty, DefaultCommonSpecNames will be used.
    //
    // Names will be considered in the order that they appear.
    CommonSpecNames []string

    // PartnerSuffix is the filesystem suffix for a script's partner spec file. If
    // empty, DefaultPartnerSuffix will be used.
    PartnerSuffix string
}
Loader implements the generic ability to load a "vpython" spec file.
func (l *Loader) LoadForScript(c context.Context, path string, isModule bool) (*vpython.Spec, error)
LoadForScript attempts to load a spec file for the specified script. If nothing went wrong, a nil error will be returned. If a spec file was identified, it will also be returned. Otherwise, a nil spec will be returned.
Spec files can be specified in a variety of ways. This function will look for them in the following order, and return the first one that was identified:
- Partner File
- Inline
Partner File
============
LoadForScript traverses the filesystem to find the specification file that is naturally associated with the specified path.
If the path is a Python script (e.g, "/path/to/test.py"), isModule will be false, and the file will be found at "/path/to/test.py.vpython".
If the path is a Python module (isModule is true), findForScript walks upwards in the directory structure, looking for a file that shares a module directory name and ends with ".vpython". For example, for module:
/path/to/foo/bar/baz/__init__.py
/path/to/foo/bar/__init__.py
/path/to/foo/__init__.py
/path/to/foo.vpython
LoadForScript will first look at "/path/to/foo/bar/baz", then walk upwards until it either hits a directory that doesn't contain an "__init__.py" file, or finds the ES path. In this case, for module "foo.bar.baz", it will identify "/path/to/foo.vpython" as the ES file for that module.
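The walk can be sketched as a small pure function; the filesystem is abstracted behind an `exists` callback so the logic stays easy to follow, and all names here are illustrative rather than the package's internals:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// findModuleSpec sketches the upward walk described above. Starting at the
// module's directory, while each directory still looks like a package
// (contains __init__.py), it checks for a sibling "<dir>.vpython" file.
func findModuleSpec(moduleDir string, exists func(string) bool) string {
	dir := moduleDir
	for exists(filepath.Join(dir, "__init__.py")) {
		if spec := dir + ".vpython"; exists(spec) {
			return spec
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			break // reached filesystem root
		}
		dir = parent
	}
	return "" // no partner spec found
}

func main() {
	files := map[string]bool{
		"/path/to/foo/bar/baz/__init__.py": true,
		"/path/to/foo/bar/__init__.py":     true,
		"/path/to/foo/__init__.py":         true,
		"/path/to/foo.vpython":             true,
	}
	exists := func(p string) bool { return files[p] }
	fmt.Println(findModuleSpec("/path/to/foo/bar/baz", exists)) // /path/to/foo.vpython
}
```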
Inline
======
LoadForScript scans through the contents of the file at path and attempts to load specification boundaries.
If the file at path does not exist, or if the file does not contain spec guards, a nil spec will be returned.
The embedded specification is a text protobuf embedded within the file. To parse it, the file is scanned line-by-line for a beginning and ending guard. The content between those guards is minimally processed, then interpreted as a text protobuf.
[VPYTHON:BEGIN]
wheel {
  path: ...
  version: ...
}
[VPYTHON:END]
To allow VPYTHON directives to be embedded in a language-compatible manner (with indentation, comments, etc.), the processor will identify any common characters preceding the BEGIN and END clauses. If they match, those characters will be automatically stripped out of the intermediate lines. This can be used to embed the directives in comments:
// [VPYTHON:BEGIN]
// wheel {
//   path: ...
//   version: ...
// }
// [VPYTHON:END]
In this case, the "// " characters will be removed.
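The guard-and-prefix stripping described above can be sketched as follows — a simplified stand-in for illustration, not the loader's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// extractInline finds the BEGIN and END guard lines and, if both share the
// same prefix (e.g. a comment leader such as "// "), strips that prefix from
// the lines between them.
func extractInline(lines []string, begin, end string) []string {
	beginIdx, endIdx := -1, -1
	var prefix string
	for i, l := range lines {
		if p := strings.Index(l, begin); p >= 0 && beginIdx < 0 {
			beginIdx = i
			prefix = l[:p]
		} else if q := strings.Index(l, end); q >= 0 && beginIdx >= 0 {
			if l[:q] != prefix {
				prefix = "" // guards don't share a prefix; strip nothing
			}
			endIdx = i
			break
		}
	}
	if beginIdx < 0 || endIdx < 0 {
		return nil // no inline spec found
	}
	var out []string
	for _, l := range lines[beginIdx+1 : endIdx] {
		out = append(out, strings.TrimPrefix(l, prefix))
	}
	return out
}

func main() {
	src := []string{
		"// [VPYTHON:BEGIN]",
		"// wheel {",
		"//   path: \"infra/example\"",
		"// }",
		"// [VPYTHON:END]",
	}
	for _, l := range extractInline(src, "[VPYTHON:BEGIN]", "[VPYTHON:END]") {
		fmt.Println(l)
	}
}
```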
Common
======
LoadForScript will examine successive parent directories starting from the script's location, looking for a file named in CommonSpecNames. If it finds one, it will use that as the specification file. This enables scripts to implicitly share a specification.
Package spec imports 17 packages and is imported by 9 packages. Updated 2020-01-18.
Hi, I'm sorta new to C++. I've tried messing around with Java, but I like C++ better for its speed. I think the reason why I'm still a newbie at this is because I can't grasp the scientific concept of the syntax. But enough small talk......
I can't seem to find how to fix this program????
(I've googled and searched for hours on this subject)
I'm trying to query 10 numbers then output all the numbers back. It works but not like a query.
I just don't understand what I'm doing wrong; I'm sure I'll kick myself once I get it sorted out.
here's my code:
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int i, numEntered;
    int number[10];

    for (i = 0; i <= 9; i++) //--Here's where I think the problem is
    {
        cout<<"Number: ";
        cin>>numEntered; //Ask for a number
        numEntered = number[i];
    }
    cout<<"Numbers entered: "<< numEntered<<endl; //output 10 inputed numbers
    system("pause"); //Pause the screen so one can see the result
    return 0;
}
It gives me some weird output like a string of numbers like... 1988145407.
I'd really like to know the reason for the bug.
Thanks in advance..... | https://www.daniweb.com/programming/software-development/threads/233307/very-simple-array-program-but-with-a-minor-glitch | CC-MAIN-2018-13 | refinedweb | 208 | 77.98 |
Use this guide to quickly start a basic video call with the Agora SDK for Unity.
The following video demonstrates how to build an app that implements the Agora video call from scratch.
Unity 2017 or later
Operating system and IDE requirements:
A valid Agora account and an App ID
In this section, we create a Unity project and integrate the SDK into the project.
Use the following steps or follow the Unity official manual to build a project from scratch. Skip to Integrate the SDK if you have already created a Unity project.
Ensure that you have downloaded and installed Unity before the following steps. If not, click here to download.
Open Unity and click New.
Fill in and select the options in the following fields, and click Create project.
Choose either of the following approaches to integrate the Agora Unity SDK into your project.
Approach 1: Automatically integrate the SDK with Unity Asset Store
Approach 2: Manually add the SDK files
Go to SDK Downloads, download the Agora SDK for Unity under Video SDK, and extract the files from the downloaded SDK package.
Copy the Plugins subfolder from the samples/Hello-Video-Unity-Agora/Assets/AgoraEngine directory of the downloaded SDK to the Assets subfolder of your project.
Also copy the BL_BuildPostProcess.cs file from the samples/Hello-Video-Unity-Agora/Assets/AgoraEngine/Editor directory.
This section provides instructions on using the Agora Video SDK for Unity to implement a basic one-to-one video call, as well as an API call sequence diagram.
Create the user interface (UI) for a one-to-one call in your project. If you already have one UI in your project, skip to Get the device permission (Android only) or Initialize IRtcEngine.
We recommend adding the following elements to the UI:
If you use the Unity Editor to build your UI, ensure that you bind VideoSurface.cs to the GameObjects designated for local and remote videos. In the following example, the VideoSurface.cs is applied to the cube for the local video and to the cylinder for the remote video.
If you build for Android, you should add code to check and request the device permissions. For all other platforms, this is handled by the engine, and you can skip to Initialize IRtcEngine.
Unity versions later than UNITY_2018_3_OR_NEWER do not automatically request camera and microphone permissions from your Android device. If you are using one of these versions, call the CheckPermission method to request access to the camera and microphone of your Android device.
using System.Collections;
#if(UNITY_2018_3_OR_NEWER)
using UnityEngine.Android;
#endif

// The list of permissions to request.
private ArrayList permissionList = new ArrayList();

void Start () {
    #if(UNITY_2018_3_OR_NEWER)
    permissionList.Add(Permission.Microphone);
    permissionList.Add(Permission.Camera);
    #endif
}

private void CheckPermission() {
    #if(UNITY_2018_3_OR_NEWER)
    foreach(string permission in permissionList) {
        if (Permission.HasUserAuthorizedPermission(permission)) {
        } else {
            Permission.RequestUserPermission(permission);
        }
    }
    #endif
}

void Update () {
    #if(UNITY_2018_3_OR_NEWER)
    // Ask for your Android device's permissions.
    CheckPermission();
    #endif
}
Initialize the IRtcEngine object before calling any other Agora APIs.
Call the GetEngine method and pass in the App ID to initialize the IRtcEngine object.
Listen for callback events, such as when the local user joins the channel, and when the first video frame of a remote user is decoded.
// Pass an App ID to create and initialize an IRtcEngine object.
mRtcEngine = IRtcEngine.GetEngine (appId);

// Listen for the OnJoinChannelSuccessHandler callback.
// This callback occurs when the local user successfully joins the channel.
mRtcEngine.OnJoinChannelSuccessHandler = OnJoinChannelSuccessHandler;

// Listen for the OnUserJoinedHandler callback.
// This callback occurs when the first video frame of a remote user is received
// and decoded after the remote user successfully joins the channel.
// You can call the SetForUser method in this callback to set the remote video.
mRtcEngine.OnUserJoinedHandler = OnUserJoinedHandler;

// Listen for the OnUserOfflineHandler callback.
// This callback occurs when the remote user leaves the channel or drops offline.
mRtcEngine.OnUserOfflineHandler = OnUserOfflineHandler;
After initializing an IRtcEngine object, set the local video before joining a channel so that you can see yourself in the call. Follow these steps to configure the local video:

Call EnableVideo to enable the video module.

Call EnableVideoObserver to get the local video.
// Enable the video module.
mRtcEngine.EnableVideo();
// Get the local video and pass it on to Unity.
mRtcEngine.EnableVideoObserver();
Choose an object to display the local video, and drag the VideoSurface.cs file to Script, so that you can bind the VideoSurface.cs file to the object and see the local video. Unity renders 3D objects by default, such as Cube, Cylinder and Plane. To render the object in other types, modify the renderer in the VideoSurface.cs file.
// The default renderer is Renderer.
Renderer rend = GetComponent<Renderer>();
rend.material.mainTexture = nativeTexture;

// Change the renderer to RawImage.
RawImage rend = GetComponent<RawImage>();
rend.texture = nativeTexture;
After initializing an IRtcEngine object and setting the local video, call JoinChannelByKey to join a channel. Set the following parameters when calling this method:
channelKey: The token for identifying the role and privileges of a user. For this guide, set channelKey as "".
channelName: The unique name of the channel to join. Users that input the same channel name join the same channel.
uid: Integer. The unique ID of the local user. If you set uid as 0, the SDK automatically assigns one user ID and returns it in the OnJoinChannelSuccessHandler callback.
// Join a channel.
mRtcEngine.JoinChannelByKey(null, channel, null, 0);
In a video call, you should be able to see remote users too. Call SetForUser in VideoSurface.cs after joining a channel.
When you are in a channel and a remote user joins this channel, the SDK triggers the OnUserJoinedHandler callback, and you can get the user's uid.
In the VideoSurface.cs file, call the SetForUser method in the callback, and pass in the uid to set the video of the remote user.
Drag the VideoSurface.cs script to the target object, so that you can bind the VideoSurface.cs to the object and see the remote video.
The video capture and render frame rate of the Agora Unity SDK is 15 fps by default. Call SetGameFps in the VideoSurface.cs file to adjust the video refresh rate based on your scenario.
// This callback occurs when the first video frame of a remote user is received
// and decoded after the remote user successfully joins the channel.
// You can call the SetForUser method in this callback to set up the remote video view.
private void OnUserJoinedHandler(uint uid, int elapsed)
{
    Debug.Log ("OnUserJoinedHandler: uid = " + uid);
    GameObject go = GameObject.Find (uid.ToString ());
    if (!ReferenceEquals (go, null))
    {
        return;
    }
    go = GameObject.CreatePrimitive (PrimitiveType.Plane);
    if (!ReferenceEquals (go, null))
    {
        go.name = uid.ToString ();
        VideoSurface remoteVideoSurface = go.AddComponent<VideoSurface> ();
        // Set the remote video.
        remoteVideoSurface.SetForUser (uid);
        // Adjust the video refreshing frame rate. The video capture and render
        // frame rate of the Agora Unity SDK is 15 fps by default.
        // For example, in the game scenario, the refreshing frame rate of the
        // game is 60 fps, which causes 3 times redundancy compared to the video
        // render frame rate. Call SetGameFps to adjust the refreshing frame
        // rate of the game to 15 fps.
        remoteVideoSurface.SetGameFps (60);
        remoteVideoSurface.SetEnable (true);
    }
    mRemotePeer = uid;
}
To remove the VideoSurface.cs script from the object, see the following sample codes.
private void OnUserOfflineHandler(uint uid, USER_OFFLINE_REASON reason)
{
    // Remove the video stream.
    // Call this in the main thread.
    GameObject go = GameObject.Find (uid.ToString());
    if (!ReferenceEquals (go, null))
    {
        Destroy (go);
    }
}
According to your scenario, such as when the call ends and when you need to close the app, call LeaveChannel to leave the current call, and call DisableVideoObserver to disable the video.
public void leave()
{
    Debug.Log ("calling leave");
    if (mRtcEngine == null)
        return;
    // Leave the channel.
    mRtcEngine.LeaveChannel();
    // Disable the video.
    mRtcEngine.DisableVideoObserver();
}
After leaving the channel, if you want to exit the app or release the memory of IRtcEngine, call Destroy to destroy the IRtcEngine object.
void OnApplicationQuit()
{
    if (mRtcEngine != null)
    {
        // Destroy the IRtcEngine object.
        IRtcEngine.Destroy();
        mRtcEngine = null;
    }
}
You can find the complete code in the sample project provided by Agora on GitHub.
Run the project in Unity. You should be able to see both the local and remote video if you successfully start a one-to-one video call.
When using the Agora Unity SDK, you can also use the following documents as a reference:
Table of Contents
TurboGears widgets are a simple, yet powerful way to bundle up bits of HTML, CSS and Javascript into reusable components. As a consumer of Widgets, you can use them to create everything from HTML Forms, to Ajax based AutoComplete fields.
Perhaps the most common use of Widgets right now is as a convenient way to include form controls in a web page.
In this case, each field is a widget, and the whole form is also a widget - a compound widget that contains the field widgets. In a simple case the fields would use standard form controls such as text boxes.
Widget based forms have a number of useful properties. For example, you can set up the individual form fields to know their validation rules, and when you do this, the Form Widget knows how to display any validation error messages next to the appropriate field if a user enters bad data.
The beauty of widgets is that replacing standard text entry fields with “fancy” alternatives is easy for the programmer. For example, you could use a CalendarDatePicker instead of a TextField, to get a pop-up calendar.
Widgets are python objects, which need to be instantiated and setup before you can use them in your view code. The standard way to do this is to instantiate a widget in your controllers.py file. We’ll start with a simple TextField widget that isn’t very exciting, but is easy to understand in full. Don’t worry, though this isn’t all there is. We’ve got fancy javascript heavy widgets that do animation, lightboxes, and autocomplete fields.
To instantiate a TextField widget instance in your controller you’ll need to import turbogears.widgets into your controller, and create a widget instance like this:
fname = TextField(default="Enter your First Name")
This will create a widget instance, which you can pass into a template and display using its .display() method:
fname.display()
This will automatically add a text input field to your page. The rendered HTML should look something like this:
<INPUT ID="widget" TYPE="text" NAME="widget" VALUE="Enter your First Name" CLASS="textfield">
The VALUE is picked up from the default value we provided at instantiation time.
But you can override the default at render time, just by passing a value to the display method, either as the first parameter, or explicitly as the named parameter value:
fname.display("mark")
Which would create the following HTML:
<INPUT ID="widget" TYPE="text" VALUE="mark" CLASS="textfield">
As we mentioned, you can also assign values with the value parameter:
fname.display(value="karl")
So, now that we’ve seen a couple of simple examples how we create a widget and assign parameters, it’s probably worth taking a deeper look at how all of this works.
Warning
Widgets instances are stateless, which means that they should not try to have any request specific data stored in a widget.
Because widgets are stateless, it’s really only safe to assign widget attributes at instantiation time or display time.
In fact, to keep you from shooting yourself in the foot, if you try to modify an attribute after the widget is instantiated (other than as a .display() option), you’ll receive a friendly warning message reminding you that widgets are display logic, not a place to store data.
Let’s take a look at the various attributes all widgets have, and what they do.
In addition, several widgets have other attributes which you can set up. For example, form widgets have an action attribute which defines what URL the form will send its HTTP POST to for processing. Another common example is the select field widget, which has an options parameter that takes a list of tuples defining the value returned by the drop-down list and the name that should be displayed in the list.
We will be documenting the complete list of attributes for each of the built-in widgets on the WidgetList page.
But for now, let’s just use the options attribute of the SelectField widget as an example of the final way that you can pass information to widget attributes. You can define a callable (generally a function, but any callable will do) which returns the data needed by the widget, and pass that callable into the widget at instantiation time. The widget will automatically call that function whenever you display the widget on a page.
So, you might want to create a function which gets data from the database, and creates an options list for display in some widget. In this case, we’re just going to define list statically in our function, but extending this to do interesting stuff is just standard python:
def get_options(): options= [] for item in range(11): options.append((item, "item %s" %item)) return options my_selector = widgets.SingleSelectField(options=get_options)
This creates a new my_selector widget with a bunch of selection options. When you do a my_selector.display on your page template you’ll get code like this:
<SELECT CLASS="singleselectfield" NAME="widget" ID="widget"> <OPTION VALUE="0">item 0</OPTION> <OPTION VALUE="1">item 1</OPTION> <OPTION VALUE="2">item 2</OPTION> ... <OPTION VALUE="10">item 10</OPTION> </SELECT>
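The part worth internalizing is that the widget stores the callable itself and only invokes it at display time, so the options are always fresh. A minimal pure-Python stand-in (not the real TurboGears widget class) makes the behaviour concrete:

```python
class FakeSelectField:
    """Toy stand-in for SingleSelectField: options may be a list or a callable."""

    def __init__(self, options):
        self._options = options  # stored as-is; not evaluated yet

    def display(self):
        # Resolve the callable at display time, so the data is always fresh.
        options = self._options() if callable(self._options) else self._options
        return "".join(
            '<option value="%s">%s</option>' % (value, label)
            for value, label in options
        )


def get_options():
    return [(item, "item %s" % item) for item in range(3)]


selector = FakeSelectField(options=get_options)
print(selector.display())
```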
In addition to the above attributes, there are a couple more, which define CSS and JavaScript files that need to be imported into your template for the widget itself to work properly. These can only be setup at instantiation time (at render time, it’s too late to inject them into the form itself).
If you’re just using existing widgets, you shouldn’t ever have to worry about css and javascript attributes, they should already be set up for you.
One of the most common customization needs when working with Widgets is some slight modification of the widget template code. The TurboGears Toolbox includes a WidgetBrowser application which, among other things, shows you all the template code for every widget you have installed. If, for instance, you need to edit the TextField widget’s template to include a <br /> tag after the field, you can easily do that by grabbing the existing template code from the widget browser and modifying it.
Once you’ve got a modified template, you can either create a new file (say widgettemplate.kid) and saving it in your templates directory, or you can pass it to the template attribute as a string.
So if you’ve saved your new template as widgettemplate.kid you can instantiate a new text field widget that uses this new template with code like this:
field1 = widgets.TextField(name="Field one", template="yourapp.templates.widgettemplate")
Or if you don’t want to bother with the extra template file for short templates, you can just pass your template definition to the template attribute as a string like this:
field1 = widgets.TextField(name="Field one", template="""
    <input type="text" name="${name}" value="${value}" class="textfield"/><br />
""")
Single widgets are pretty simple, and you can use them to create reusable view elements pretty easily.
But there’s a whole lot more that’s a available to you if you’re willing to go beyond single widgets, and look at the way that Widget based Form handling integrates Form handling tasks, making it easy to reuse the same widget to get new data, edit existing data, and handle the display of validation errors.
Your next step in the Widgets Journey is creating a Widget-based form. | http://www.turbogears.org/1.0/docs/Widgets/Overview.html | CC-MAIN-2016-18 | refinedweb | 1,237 | 58.62 |
The for loop in Python is an iterating function. If you have a sequence object like a list, you can use the for loop to iterate over the items contained within the list.
The functionality of the for loop isn’t very different from what you see in multiple other programming languages.
In this article, we’ll explore the Python for loop in detail and learn to iterate over different sequences including lists, tuples, and more. Additionally, we’ll learn to control the flow of the loop using the break and continue statements.
Basic Syntax of the Python for loop
The basic syntax of the for loop in Python looks something similar to the one mentioned below.
for iterator_variable in sequence_name: Statements . . . Statements
Let me explain the syntax of the Python for loop better.
- The first word of the statement starts with the keyword “for” which signifies the beginning of the for loop.
- Then we have the iterator variable which iterates over the sequence and can be used within the loop to perform various functions
- The next is the “in” keyword in Python which tells the iterator variable to loop for elements within the sequence
- And finally, we have the sequence variable which can either be a list, a tuple, or any other kind of iterator.
- The statements part of the loop is where you can play around with the iterator variable and perform various functions
1. Print individual letters of a string using the for loop
Python string is a sequence of characters. If within any of your programming applications, you need to go over the characters of a string individually, you can use the for loop here.
Here’s how that would work out for you.
word="anaconda" for letter in word: print (letter)
Output:
a n a c o n d a
This loop works because Python considers a “string” to be a sequence of characters, instead of looking at the string as a whole.
2. Using the for loop to iterate over a Python list or tuple
Lists and Tuples are iterable objects. Let’s look at how we can loop over the elements within these objects now.
words= ["Apple", "Banana", "Car", "Dolphin" ] for word in words: print (word)
Output:
Apple Banana Car Dolphin
Now, let’s move ahead and work on looping over the elements of a tuple here.
nums = (1, 2, 3, 4) sum_nums = 0 for num in nums: sum_nums = sum_nums + num print(f'Sum of numbers is {sum_nums}') # Output # Sum of numbers is 10
3. Nesting Python for loops
When we have a for loop inside another for loop, it’s called a nested for loop. There are multiple applications of a nested for loop.
Consider the list example above. The for loop prints out individual words from the list. But what if we want to print out the individual characters of each of the words within the list instead?
This is where a nested for loop works better. The first loop (parent loop) will go over the words one by one. The second loop (child loop) will loop over the characters of each of the words.
words = ["Apple", "Banana", "Car", "Dolphin"]
for word in words:  # This loop fetches a word from the list
    print("The following lines will print each letter of " + word)
    for letter in word:  # This loop fetches a letter from the word
        print(letter)
    print("")  # This print is used to print a blank line

Output:

The following lines will print each letter of Apple
A
p
p
l
e

The following lines will print each letter of Banana
B
a
n
a
n
a

The following lines will print each letter of Car
C
a
r

The following lines will print each letter of Dolphin
D
o
l
p
h
i
n
4. Python for loop with range() function
Python range() is one of the built-in functions. When you want the for loop to run a specific number of times, or you need to specify a range of objects to print out, the range function works really well. Consider the following example, where the loop runs three times, printing the numbers 0, 1, and 2.
for x in range(3): print("Printing:", x) # Output # Printing: 0 # Printing: 1 # Printing: 2
The range function also takes another parameter apart from the start and the stop. This is the step parameter. It tells the range function how many numbers to skip between each count.
In the below example, I’ve used number 3 as the step and you can see the output numbers are the previous number + 3.
for n in range(1, 10, 3): print("Printing with step:", n) # Output # Printing with step: 1 # Printing with step: 4 # Printing with step: 7
5. break statement with for loop
The break statement is used to exit the for loop prematurely. It’s used to break the for loop when a specific condition is met.
Let’s say we have a list of numbers and we want to check if a number is present or not. We can iterate over the list of numbers and if the number is found, break out of the loop because we don’t need to keep iterating over the remaining elements.
In this case, we’ll use the Python if else condition along with our for loop.
nums = [1, 2, 3, 4, 5, 6] n = 2 found = False for num in nums: if n == num: found = True break print(f'List contains {n}: {found}') # Output # List contains 2: True
6. The continue statement with for loop
We can use continue statements inside a for loop to skip the execution of the for loop body for a specific condition.
Let’s say we have a list of numbers and we want to print the sum of positive numbers. We can use the continue statements to skip the for loop for negative numbers.
nums = [1, 2, -3, 4, -5, 6] sum_positives = 0 for num in nums: if num < 0: continue sum_positives += num print(f'Sum of Positive Numbers: {sum_positives}')
6. Python for loop with an else block
We can use an else block with a Python for loop. The else block is executed only when the for loop is not terminated by a break statement.
Let’s say we have a function to print the sum of numbers if and only if all the numbers are even.
We can use the break statement to terminate the for loop if an odd number is present. We can print the sum in the else part so that it gets printed only when the for loop is executed normally.
def print_sum_even_nums(even_nums):
    total = 0
    for x in even_nums:
        if x % 2 != 0:
            break
        total += x
    else:
        print("For loop executed normally")
        print(f'Sum of numbers {total}')

# this will print the sum
print_sum_even_nums([2, 4, 6, 8])

# this won't print the sum because of an odd number in the sequence
print_sum_even_nums([2, 4, 5, 8])

# Output
# For loop executed normally
# Sum of numbers 20
Conclusion
The for loop in Python is very similar to other programming languages. We can use break and continue statements with for loop to alter the execution. However, in Python, we can have optional else block in for loop too.
I hope you have gained some interesting ideas from the tutorial above. If you have any questions, let us know in the comments below.
I keep getting an invalid syntax error when trying to run the code for the Python for use with optional else block. It is saying the inner first bracket is the invalid syntax error- def print_sum_even_nums ([2, 4, 6, 8]):
How to use for loop in calculator program in python using functions
optimized code
Thanks for this lesson. I’ve learned a lot.
Hi Pankaj, problem lies with this line “print letter”. It should be print (letter)
Thanks for pointing out the typo error, I have corrected it now. Appreciate it.
Thank you Imtiaz Abedin for this very useful tutorial series on python.
Python tutorials are really very helpful. But there are some errors in codes.Like in python nested loop example in print function.
Can you please point out which method is causing the error and what is the error? Could it be because of white spaces in copy paste? White spaces matter a lot in python programming. | https://www.journaldev.com/14136/python-for-loop-example | CC-MAIN-2021-04 | refinedweb | 1,425 | 69.92 |
GaugeStyle QML Type
Provides custom styling for Gauge. More...
Properties
- background : Component
- control : Gauge
- foreground : Component
- minorTickmark : Component
- tickmark : Component
- tickmarkLabel : Component
- valueBar : Component
- valuePosition : real
Detailed Description
You can create a custom gauge by replacing the following delegates:
Below, you'll find an example of how to create a temperature gauge that changes color as its value increases:
import QtQuick 2.2 import QtQuick.Controls 1.4 import QtQuick.Controls.Styles 1.4 import QtQuick.Extras 1.4 Rectangle { width: 80 height: 200 Timer { running: true repeat: true interval: 2000 onTriggered: gauge.value = gauge.value == gauge.maximumValue ? 5 : gauge.maximumValue } Gauge { id: gauge anchors.fill: parent anchors.margins: 10 value: 5 Behavior on value { NumberAnimation { duration: 1000 } } style: GaugeStyle { valueBar: Rectangle { implicitWidth: 16 color: Qt.rgba(gauge.value / gauge.maximumValue, 0, 1 - gauge.value / gauge.maximumValue, 1) } } } }
The gauge displaying values at various points during the animation.
See also Styling Gauge.
Property Documentation
The bar that represents the foreground of the gauge.
This component is drawn above every other component.
Each minor tickmark displayed by the gauge.
To set the size of the minor tickmarks, specify an implicitWidth and implicitHeight.
For layouting reasons, each minor tickmark should have the same
implicitHeight. If different heights are needed for individual tickmarks, specify those heights in a child item of the component.
In the example below, we decrease the width of the minor tickmarks:
minorTickmark: Item { implicitWidth: 8 implicitHeight: 1 Rectangle { color: "#cccccc" anchors.fill: parent anchors.leftMargin: 2 anchors.rightMargin: 4 } }
Each instance of this component has access to the following property:
Each tickmark displayed by the gauge.
To set the size of the tickmarks, specify an implicitWidth and implicitHeight.
The widest tickmark will determine the space set aside for all tickmarks. For this reason, the
implicitWidth of each tickmark should be greater than or equal to that of each minor tickmark. If you need minor tickmarks to have greater widths than the major tickmarks, set the larger width in a child item of the minorTickmark component.
For layouting reasons, each tickmark should have the same
implicitHeight. If different heights are needed for individual tickmarks, specify those heights in a child item of the component.
In the example below, we decrease the height of the tickmarks:
tickmark: Item { implicitWidth: 18 implicitHeight: 1 Rectangle { color: "#c8c8c8" anchors.fill: parent anchors.leftMargin: 3 anchors.rightMargin: 3 } }
Each instance of this component has access to the following properties:
See also minorTickmark.
This defines the text of each tickmark label on the gauge.
Each instance of this component has access to the following properties:
The bar that represents the value of the gauge.
To height of the value bar is automatically resized according to value, and does not need to be specified.
When a custom valueBar is defined, its implicitWidth property must be set.
This property holds the value displayed by the gauge as a position in pixels.
It is useful for custom styling.. | http://doc.qt.io/qt-5/qml-qtquick-controls-styles-gaugestyle.html | CC-MAIN-2018-34 | refinedweb | 489 | 50.63 |
How to Serialize an Object in Java ?.
Introduction
In this tutorial, you will learn how Serialization process work in Java. Serialization is one of the most important aspect in dealing with Java Web Applications. So, what do we mean by Serialization in Java ?.
Serialization is a process of saving state of an Java object to a sequence of bytes. These sequence of bytes can be sent over a network, stored in a database or stored in a file. Writing an Java object to a file, database or sent over a network is called as Serialization. The process of reading back a Java object from file, database or network is called as De-serialization.
Conditions for Serialization in Java
- The class instance which we want to serialize must implement an interface by name Serializable. This interface is in the Java package java.io. It is a marker interface and does not contain any method to implement. It only signifies that whichever class implements it can be serialized to a file, database, memory or network.
- All of the fields or properties in a class must implement serializable interface. If not, that field has to be made transient.
Serialization process in Java
De-Serialization process in Java
What is Serialization in Java ?.
Reading and Writing a Java object to a file
Employee object to serialize
import java.io.Serializable; // Create a class Employee, whose objects we want to // serialize. // Employee class must implement Serializable interface. // It is a marker interface with no methods. It only provides // information that Employee objects can be serialized. public class Employee implements Serializable { // Create two properties say, name and age. // These properties will be saved to a file. private String name; private int age; // Create getters and setters of name and age. public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } }
Java code - Serialization Demo
import java.io.FileOutputStream; import java.io.IOException; import java.io.ObjectOutputStream; // Create a Java class to demonstrate process of Serialization public class SerializationDemo { // Create a main method in which steps for serializing an object // will be followed. public static void main(String[] args) { // Create an instance of Employee class which we want to serialize. Employee employee = new Employee(); // Set name and age of Employee instance as "Dinesh Varyani" and // 27. These values will be saved to a file to demonstrate serialization process. employee.setName("Dinesh Varyani"); employee.setAge(27); try { // In order to serialize Employee instance created above to a file. We have // to provide location to a file and attach a file output stream which will help in // serializing object to the file. The file along with location is used as "/employee.ser" FileOutputStream fos = new FileOutputStream("/employee.ser"); // In order to serialize an object we have to create an instance of Object Output // Stream. This Object Output Stream will be taking in help of File Output Stream // instance to serialize a Java object. ObjectOutputStream oos = new ObjectOutputStream(fos); // Object Output Stream has a method by name writeObject. This method takes in // a object of Employee which we want to serialize. writeObject method takes in help // File Output Stream and saves values of Employee such as name and age to // employee.ser file. oos.writeObject(employee); // close Object Output Stream. oos.close(); // close File Output Stream. fos.close(); } catch (IOException e) { e.printStackTrace(); } } }
Java code - Deserialization Demo
import java.io.FileInputStream; import java.io.IOException; import java.io.ObjectInputStream; // Create a Java class to demonstrate process of DeSerialization. public class DeserializationDemo { // Create a main method in which steps for de-serializing an object // will be followed. public static void main(String[] args) { try { // In order to de-serialize an Employee instance serialized to a file in above class. We have // to provide location to a file and attach a file input stream which will help in // de-serializing object from the file. The file along with location is used as "/employee.ser" FileInputStream fis = new FileInputStream("/employee.ser"); // In order to de-serialize an object we have to create an instance of Object Input // Stream. This Object Input Stream will be taking in help of File Input Stream // instance to de-serialize a Java object. ObjectInputStream ois = new ObjectInputStream(fis); // Object Input Stream has a method by name readObject. This method reads back a // serialized object from file. readObject method takes in help File Input Stream and // reads values of Employee such as name and age from employee.ser file. // It returns back an instance of Object class which we have to typecast back to // Employee class. Employee employee = (Employee) ois.readObject(); // close Object Input Stream. ois.close(); // close File Input Stream fis.close(); // Print the information of the de-serialized object on the Java console. System.out.println("Employee Name: " + employee.getName()); System.out.println("Employee Age: " + employee.getAge()); } catch(Exception e) { e.printStackTrace(); } } }
Output of the Deserialization Demo class
On running SerializationDemo class, code saves properties of Employee instance such as
- name - Dinesh Varyani
- age - 27
to employee.ser file. On running DeserializationDemo class it reads back the properties values from employee.ser file and prints output on the Java console as shown in above image.
Watch my complete Youtube channel below -
- Java Hubberspot - YouTube
Hello friends, I am Dinesh Varyani. Owner of blog . This channel will have Java Programming Tutorials for beginners ... Visit my Ja...
Popular | https://hubpages.com/technology/How-to-Serialize-an-Object-in-Java | CC-MAIN-2018-22 | refinedweb | 905 | 51.44 |
PEP 564 – Add new time functions with nanosecond resolution
- Author:
- Victor Stinner <vstinner at python.org>
- Status:
- Final
- Type:
- Standards Track
- Created:
- 16-Oct-2017
- Python-Version:
- 3.7
- Resolution:
- Python-Dev message
Table of Contents
- Abstract
- Rationale
- Changes
- Alternatives and discussion
- Annex: Clocks Resolution in Python
- Links
Abstract
Add six new “nanosecond” variants of existing functions to the
time
module:
clock_gettime_ns(),
clock_settime_ns(),
monotonic_ns(),
perf_counter_ns(),
process_time_ns() and
time_ns(). While similar to the existing functions without the
_ns suffix, they provide nanosecond resolution: they return a number of
nanoseconds as a Python
int.
The
time.time_ns() resolution is 3 times better than the
time.time()
resolution on Linux and Windows.
Rationale
Float type limited to 104 days
The clocks resolution of desktop and laptop computers is getting closer to nanosecond resolution. More and more clocks have a frequency in MHz, up to GHz for the CPU TSC clock.
The Python
time.time() function returns the current time as a
floating-point number which is usually a 64-bit binary floating-point
number (in the IEEE 754 format).
The problem is that the
float type starts to lose nanoseconds after 104
days. Converting from nanoseconds (
int) to seconds (
float) and
then back to nanoseconds (
int) to check if conversions lose
time.time() returns seconds elapsed since the UNIX epoch: January
1st, 1970. This function hasn’t had nanosecond precision since May 1970
(47 years ago):
>>> import datetime >>> unix_epoch = datetime.datetime(1970, 1, 1) >>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9)) 1970-04-15 05:59:59.254741
Previous rejected PEP
Five years ago, the PEP 410 proposed a large and complex change in all
Python functions returning time to support nanosecond resolution using
the
decimal.Decimal type.
The PEP was rejected for different reasons:
- The idea of adding a new optional parameter to change the result type was rejected. It’s an uncommon (and bad?) programming practice in Python.
- It was not clear if hardware clocks really had a resolution of 1 nanosecond, or if that made sense at the Python level.
- The
decimal.Decimaltype is uncommon in Python and so requires to adapt code to handle it.
Issues caused by precision loss
Example 1: measure time delta in long-running process
A server is running for longer than 104 days. A clock is read before and after running a function to measure its performance to detect performance issues at runtime. Such benchmark only loses precision because of the float type used by clocks, not because of the clock resolution.
On Python microbenchmarks, it is common to see function calls taking less than 100 ns. A difference of a few nanoseconds might become significant.
Example 2: compare times with different resolution
Two programs “A” and “B” are running on the same system and use the system clock. The program A reads the system clock with nanosecond resolution and writes a timestamp with nanosecond resolution. The program B reads the timestamp with nanosecond resolution, but compares it to the system clock read with a worse resolution. To simplify the example, let’s say that B reads the clock with second resolution. If that case, there is a window of 1 second while the program B can see the timestamp written by A as “in the future”.
Nowadays, more and more databases and filesystems support storing times with nanosecond resolution.
Note
This issue was already fixed for file modification time by adding the
st_mtime_ns field to the
os.stat() result, and by accepting
nanoseconds in
os.utime(). This PEP proposes to generalize the
fix.
CPython enhancements of the last 5 years
Since the PEP 410 was rejected:
- The
os.stat_resultstructure got 3 new fields for timestamps as nanoseconds (Python
int):
st_atime_ns,
st_ctime_nsand
st_mtime_ns.
- The PEP 418 was accepted, Python 3.3 got 3 new clocks:
time.monotonic(),
time.perf_counter()and
time.process_time().
- The CPython private “pytime” C API handling time now uses a new
_PyTime_ttype: simple 64-bit signed integer (C
int64_t). The
_PyTime_tunit is an implementation detail and not part of the API. The unit is currently
1 nanosecond.
Existing Python APIs using nanoseconds as int
The
os.stat_result structure has 3 fields for timestamps as
nanoseconds (
int):
st_atime_ns,
st_ctime_ns and
st_mtime_ns.
The
ns parameter of the
os.utime() function accepts a
(atime_ns: int, mtime_ns: int) tuple: nanoseconds.
Changes
New functions
This PEP adds six new functions to the
time module:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time: int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the version without the
_ns suffix,
but return a number of nanoseconds as a Python
int.
For example,
time.monotonic_ns() == int(time.monotonic() * 1e9) if
monotonic() value is small enough to not lose precision.
These functions are needed because they may return “large” timestamps,
like
time.time() which uses the UNIX epoch as reference, and so their
float-returning variants are likely to lose precision at the nanosecond
resolution.
Unchanged functions
Since the
time.clock() function was deprecated in Python 3.3, no
time.clock_ns() is added.
Python has other time-returning functions. No nanosecond variant is
proposed for these other functions, either because their internal
resolution is greater or equal to 1 us, or because their maximum value
is small enough to not lose precision. For example, the maximum value of
time.clock_getres() should be 1 second.
Examples of unchanged functions:
osmodule:
sched_rr_get_interval(),
times(),
wait3()and
wait4()
resourcemodule:
ru_utimeand
ru_stimefields of
getrusage()
signalmodule:
getitimer(),
setitimer()
timemodule:
clock_getres()
See also the Annex: Clocks Resolution in Python.
A new nanosecond-returning flavor of these functions may be added later if an operating system exposes new functions providing better resolution.
Alternatives and discussion
Sub-nanosecond resolution
time.time_ns() API is not theoretically future-proof: if clock
resolutions continue to increase below the nanosecond level, new Python
functions may be needed.
In practice, the 1 nanosecond resolution is currently enough for all structures returned by all common operating systems functions.
Hardware clocks with a resolution better than 1 nanosecond already exist. For example, the frequency of a CPU TSC clock is the CPU base frequency: the resolution is around 0.3 ns for a CPU running at 3 GHz. Users who have access to such hardware and really need sub-nanosecond resolution can however extend Python for their needs. Such a rare use case doesn’t justify to design the Python standard library to support sub-nanosecond resolution.
For the CPython implementation, nanosecond resolution is convenient: the
standard and well supported
int64_t type can be used to store a
nanosecond-precise timestamp. It supports a timespan of -292 years
to +292 years. Using the UNIX epoch as reference, it therefore supports
representing times since year 1677 to year 2262:
>>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25) 1677.728976954687 >>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25) 2262.271023045313
Modifying time.time() result type
It was proposed to modify
time.time() to return a different number
type with better precision.
The PEP 410 proposed to return
decimal.Decimal which already exists and
supports arbitrary precision, but it was rejected. Apart from
decimal.Decimal, no portable real number type with better precision
is currently available in Python.
Changing the built-in Python
float type is out of the scope of this
PEP.
Moreover, changing existing functions to return a new type introduces a risk of breaking the backward compatibility even if the new type is designed carefully.
Different types
Many ideas of new types were proposed to support larger or arbitrary precision: fractions, structures or 2-tuple using integers, fixed-point number, etc.
See also the PEP 410 for a previous long discussion on other types.
Adding a new type requires more effort to support it, than reusing
the existing
int type. The standard library, third party code and
applications would have to be modified to support it.
The Python
int type is well known, well supported, easy to
manipulate, and supports all arithmetic operations such as
dt = t2 - t1.
Moreover, taking/returning an integer number of nanoseconds is not a
new concept in Python, as witnessed by
os.stat_result and
os.utime(ns=(atime_ns, mtime_ns)).
Note
If the Python
float type becomes larger (e.g. decimal128 or
float128), the
time.time() precision will increase as well.
Different API
The
time.time(ns=False) API was proposed to avoid adding new
functions. It’s an uncommon (and bad?) programming practice in Python to
change the result type depending on a parameter.
Different options were proposed to allow the user to choose the time
resolution. If each Python module uses a different resolution, it can
become difficult to handle different resolutions, instead of just
seconds (
time.time() returning
float) and nanoseconds
(
time.time_ns() returning
int). Moreover, as written above,
there is no need for resolution better than 1 nanosecond in practice in
the Python standard library.
A new module
It was proposed to add a new
time_ns module containing the following
functions:
time_ns.clock_gettime(clock_id)
time_ns.clock_settime(clock_id, time: int)
time_ns.monotonic()
time_ns.perf_counter()
time_ns.process_time()
time_ns.time()
The first question is whether the
time_ns module should expose exactly
the same API (constants, functions, etc.) as the
time module. It can be
painful to maintain two flavors of the
time module. How are users use
supposed to make a choice between these two modules?
If tomorrow, other nanosecond variants are needed in the
os module,
will we have to add a new
os_ns module as well? There are functions
related to time in many modules:
time,
os,
signal,
resource,
select, etc.
Another idea is to add a
time.ns submodule or a nested-namespace to
get the
time.ns.time() syntax, but it suffers from the same issues.
Annex: Clocks Resolution in Python
This annex contains the resolution of clocks as measured in Python, and not the resolution announced by the operating system or the resolution of the internal structure used by the operating system.
Script
Example of script to measure the smallest difference between two
time.time() and
time.time_ns() reads ignoring differences of zero:
import math import time LOOPS = 10 ** 6 print("time.time_ns(): %s" % time.time_ns()) print("time.time(): %s" % time.time()) min_dt = [abs(time.time_ns() - time.time_ns()) for _ in range(LOOPS)] min_dt = min(filter(bool, min_dt)) print("min time_ns() delta: %s ns" % min_dt) min_dt = [abs(time.time() - time.time()) for _ in range(LOOPS)] min_dt = min(filter(bool, min_dt)) print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))
Linux
Clocks resolution measured in Python on Fedora 26 (kernel 4.12):
Notes on resolutions:
clock()frequency is
CLOCKS_PER_SECONDwhich is 1,000,000 Hz (1 MHz): resolution of 1 us.
times()frequency is
os.sysconf("SC_CLK_TCK")(or the
HZconstant) which is equal to 100 Hz: resolution of 10 ms.
resource.getrusage(),
os.wait3()and
os.wait4()use the
ru_usagestructure. The type of the
ru_usage.ru_utimeand
ru_usage.ru_stimefields is the
timevalstructure which has a resolution of 1 us.
Windows
Clocks resolution measured in Python on Windows 8.1:
The frequency of
perf_counter() and
perf_counter_ns() comes from
QueryPerformanceFrequency(). The frequency is usually 10 MHz: resolution of
100 ns. In old Windows versions, the frequency was sometimes 3,579,545 Hz (3.6
MHz): resolution of 279 ns.
Analysis
The resolution of
time.time_ns() is much better than
time.time(): 84 ns (2.8x better) vs 239 ns on Linux and 318 us
(2.8x better) vs 894 us on Windows. The
time.time() resolution will
only become larger (worse) as years pass since every day adds
86,400,000,000,000 nanoseconds to the system clock, which increases the
precision loss.
The difference between
time.perf_counter(),
time.monotonic(),
time.process_time() and their respective nanosecond variants is
not visible in this quick script since the script runs for less than 1
minute, and the uptime of the computer used to run the script was
smaller than 1 week. A significant difference may be seen if uptime
reaches 104 days or more.
resource.getrusage() and
times() have a resolution greater or
equal to 1 microsecond, and so don’t need a variant with nanosecond
resolution.
Note
Internally, Python starts
monotonic() and
perf_counter()
clocks at zero on some platforms which indirectly reduce the
precision loss.
Links
This document has been placed in the public domain.
Source:
Last modified: 2022-01-21 11:03:51 GMT | https://peps.python.org/pep-0564/ | CC-MAIN-2022-27 | refinedweb | 2,051 | 58.69 |
Levels of abstraction
I decided to compare Twisted and Allegra for a simple task using asynchronous I/O. As a start for that I wanted to download a web page and display the results to stdout.
Fetch a web page with Twisted
I did some Twisted programming in the 1.x days but haven't done async I/O since then. Here's the Twisted code. The interface I'm using passes the response body into the callback as a single string.
from twisted.internet import reactor
from twisted.web import client

def handleCallback(response):
    print response
    reactor.stop()

def handleErrback(err):
    print "Error:", err
    reactor.stop()

get_page = client.getPage("")
get_page.addCallbacks(handleCallback, handleErrback)
reactor.run()

It wasn't that hard to figure out, mostly because once I found the "getPage" method I used Google to find working code by Richard Townsend. It did take a while to find that method. Here's my tale. I went to the API page and looked for "http". Nothing. Going down the list, the best fit by name was "web", but that's labeled "Twisted Web: a Twisted Web Server" and I want a client. I checked out "internet" -- nope, that's where the reactors are and interfaces to TCP, threads, the serial port and quite a few non-internet things. I assume this is historical, with TCP support first and the others only added later.
Looking under "protocols" I see the following for "http":
"This module is DEPRECATED. It has been split off into a third party package, Twisted Web. Please see"

The URL goes to a bz2 file dated 2005-03-22 with version number 0.5.0. Assuming it's wrong, I took the name "web" as a hint to look in the "twisted.web" package, which I skipped earlier because it's "a Twisted Web Server." That title's wrong and it does contain client code. Oh, and I looked at "web2" but that's apparently incomplete. The "log" module description says
"This is still in flux (even moreso than the rest of web2)"

and quite a few modules, including "client", say "Undocumented".

The documentation irks me with its use of "I". Consider these from twisted.web:

I can't recall any other library documented in first person from the view of the code. There might be one, but it's rare. I don't like it, but suppose it's because of my lack of experience with it. If it's useful then it should be consistent. I see no consistency here. Why isn't "client" documented as "I contain HTTP client functionality" or something like that? The description for "errors" is grammatically incorrect. It looks like someone liked first person and prefixed "I am the" to the front of "Twisted.Web error resources and exceptions". The latter alone would be grammatical, shorter, easier to read and more consistent with existing practices.
The documentation irks me with its use of "I". Consider these from twisted.web:I can't recall any other library documented in first person from the view of the code. There might be one but it's rare. I don't like it but suppose it's because of my lack of experience with it. If it's useful then it should be consistent. I see no consistency here. Why isn't "client" documented as "I contain HTTP client functionality" or something like that? The description for "errors" is grammatically incorrect. It looks like someone liked first-person and prefixed "I am the" to the front of "Twisted.Web error resources and exceptions". The latter alone would be grammatical, shorter, easier to read and more consistent with existing practices.
In English there are differences between "I am", "I hold" and "I contain". Is it important here? I don't think so. The descriptions would be no less useful asOr better, IMO, as I left out "trp" because the description makes no sense (Python objects aren't unpicklable nor are they named files) and the function is otherwise undocumented.
To make my point clear, using first person singular like this in the documentation adds nothing but noise and its inconsistent usage makes it all the more jarring. Luckily, it seems mostly limited to the twisted.web code.
For those keeping track at home:
- "twisted.internet" has relatively little to do with the internet
- "twisted.web"'s description implies it contains only server code
- "twisted.protocols.http" refers to an outdated 2nd party package
- "twisted.web2" does not appear ready for use
- first person singular documentation style is distracting and improves nothing
Fetch a web page with Allegra
I've not used Allegra before. Its author, Laurent Szyster, started with Sam Rushing's old Medusa code, parts of which were incorporated into Python's standard library as asyncore and asynchat. Quoting the author:
Twisted and Allegra are two very different things. Twisted is a large set of complex libraries with support for a vast number of protocols and systems. Allegra is a small set of simple modules that supports only a minimal collection of web protocols and focuses on a single application.
Allegra's core delivers marginal but practical improvements over the original library, in all directions. So, even stripped off its applications, it still fully deserves its own name. Precisely because it is as simple as its predecessor.
There have been various vocal back-and-forths in blogspace between the Allegra developer and some of the Twisted people. The details are easy enough to dig up so I'm not going to bother with additional links.
What got me interested in Allegra is its support for HTTP/1.1 pipelining. In another project we have a search service which returns document hits. I wanted it to return a list of URLs, one per record, and have the client fetch the URLs it needs. Others pointed out that pipelining support isn't common enough for our goals so we decided the default would return all records combined into a single response.
They were right too. Twisted doesn't support HTTP/1.1 pipelining and neither does urllib. According to the comments the twisted.web2 code will support 1.1 but I don't know the schedule nor if that includes pipeline support. In the debate Laurent makes the strong claim that full HTTP/1.1 support in Twisted is hard. I am not competent enough to evaluate those claims. I just want to try out HTTP/1.1.
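To make concrete what pipelining means at the wire level, here's a small sketch in plain Python (neither Twisted nor Allegra): the client writes several requests back to back on one connection before reading any response. The host and paths here are invented for illustration.

```python
# Build the bytes a pipelining client would write in one go -- several
# GET requests back to back, before reading any response.  A non-pipelined
# client would wait for each response before sending the next request.
def pipelined_requests(host, paths):
    requests = []
    for path in paths:
        requests.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "\r\n" % (path, host)
        )
    return "".join(requests)

wire = pipelined_requests("127.0.0.1:8081", ["/search?page=1", "/search?page=2"])
print(wire.count("GET "))  # -> 2
```

The server is then required by the HTTP/1.1 spec to answer the requests in the same order they were sent, which is what keeps the back-to-back responses unambiguous.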
I also want to try out chunked encoding. In implementing my server (in TurboGears) I found that while I know the number of records to return, I don't directly know the total byte size of the response. I'm going to precompute the size of each record but I would like to return, say, 100 records at a time. That puts an upper limit on memory use no matter the total number of records in the search results, and it means I don't have a possible mismatch between the precomputed size and the actual size.
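Chunked transfer encoding is exactly what that calls for: each chunk carries its own hex-encoded length, and a zero-length chunk ends the body, so the total size never has to be known up front. A minimal sketch of the encoding in plain Python -- the batching of records per chunk is my own invention, not any framework's API:

```python
def chunked_body(batches):
    # Each chunk is "<hex length>\r\n<data>\r\n"; a zero-length
    # chunk ("0\r\n\r\n") terminates the body.
    out = []
    for batch in batches:
        data = "".join(batch)
        out.append("%x\r\n%s\r\n" % (len(data), data))
    out.append("0\r\n\r\n")
    return "".join(out)

# Three records sent as two chunks; the total size is never computed.
body = chunked_body([["rec1\n", "rec2\n"], ["rec3\n"]])
print(repr(body))
# -> 'a\r\nrec1\nrec2\n\r\n5\r\nrec3\n\r\n0\r\n\r\n'
```

With this framing the server could flush each batch of 100 records as its own chunk and keep memory use bounded.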
Here's the Allegra code for fetching a page. It uses a freshly checked out version from Subversion.
import sys

from allegra import (http_client, async_loop, finalization)

class CopyToFile(object):
    collector_is_simple = True

    def __init__(self, file=sys.stdout):
        self.file = file

    def collect_incoming_data(self, text):
        self.file.write(text)

    def found_terminator(self):
        return True

dispatcher = http_client.connect("127.0.0.1", 8081)
req = http_client.GET(dispatcher, "/")
req(CopyToFile(sys.stdout))
async_loop.dispatch()

I used my own collector class "CopyToFile" instead of the standard allegra.collector.File because the latter closes the file in found_terminator and I want to continue using sys.stdout after I've received the file.
Allegra's documentation pretty much does not exist. There are some hints in Laurent's blog but as I've not used asyncore I'm missing the basic understanding of how to put things together. There are no examples of using the HTTP client library, not even tests.
I am not a test-driven developer. I've tried to write the tests before writing the code. I almost invariably hate it. My understanding of how the code is supposed to be implemented changes while I write it. I end up spending more time rewriting the tests than I like and I find no benefit to that approach. I wait until the code has started to firm up before putting those tests in.
Perhaps that's the case here, as Allegra is very new code. I don't think so, as the API looks pretty stabilized. As a hint to the author, tests can also make for a good demo of how to use the library. (Though those are more like functional tests than unit tests.)
I still don't know the purpose of the collector object or how to compose collectors.
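From the code above, the collector contract appears to be just two methods: collect_incoming_data receives the body as it arrives, and found_terminator fires when the response is complete. Here's how I checked that understanding by driving CopyToFile by hand -- the feeding loop simulates what the pipeline does and is mine, not Allegra's:

```python
from io import StringIO

# CopyToFile as defined earlier, writing to an explicit file object.
class CopyToFile(object):
    collector_is_simple = True

    def __init__(self, file):
        self.file = file

    def collect_incoming_data(self, text):
        self.file.write(text)

    def found_terminator(self):
        return True

buf = StringIO()
collector = CopyToFile(buf)
# Simulate the pipeline delivering the response body in fragments.
for fragment in ["<html>", "hello", "</html>"]:
    collector.collect_incoming_data(fragment)
collector.found_terminator()
print(buf.getvalue())  # -> <html>hello</html>
```

That much works as expected; what I can't work out from the source alone is how collectors are meant to nest or chain.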
When I run the above code I get a wait of a few seconds before the program ends. I assume it's waiting for the pipeline timeout but I've not looked into that. I can't find something which says "I'm done; stop everything and shut down" which might be used by the proverbial "big red button" in a server's administrative interface.
Allegra enables debug messages so I got to see things like
debug http-client-pipeline id="72a50" connect
debug async_dispatch_start
...
debug async_dispatch_stop

in the output. Those messages weren't helpful to me. The author says
# The Loginfo interface and implementation provide a simpler, yet more
# powerfull and practical logging facility than the one currently integrated
# with Python.

Why is it that everyone thinks they have a simpler, more powerful and more practical logging system? Then again, I don't like logging systems. The ones I've seen are usually configured to dump just about everything, ending up in a spew of data which people end up ignoring. Like warnings from lint.
I don't like two things about Allegra's code style. I don't like the space after the function name in the def statement and I don't like the 8 character indentation. Here's an example of both along with the uncommon use of "== None" instead of "is None".
def GET (pipeline, url, headers=None):
        if headers == None:
                headers = {}
        return Request (pipeline, url, headers, 'GET', None)
For those keeping track at home:
- No documentation
- No documentation
- Minimal tests
- No documentation
- Yet another logging system
- Unusual code style
Error handling in Allegra
Here's an example of Allegra logging an exception. In this case I used the wrong port number. (Newlines added to prevent overly wide text.)
traceback http-client-pipeline id="72a50" socket.error (61, 'Connection refused')
/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/
site-packages/allegra/async_chat.py | handle_read | 152
/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/
site-packages/allegra/async_core.py | recv | 160

After a decade of working with Python's normal traceback text I can parse it by eye very quickly. This new format is more terse but harder to read because it's new, and I hope that errors are rare enough that I won't get all that much practice reading the new format.
Emacs and other IDEs can parse Python's normal traceback message and bring up the correct file to see the error location. They can't do that with this new format, at least not without someone writing new parser code.
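Until someone does, the pipe-separated frames are at least regular enough to rewrite mechanically into the standard format those tools already parse. A rough sketch, assuming every frame really is three pipe-separated fields:

```python
def to_standard_frame(line):
    # "path | function | lineno"  ->  '  File "path", line lineno, in function'
    path, func, lineno = [part.strip() for part in line.split("|")]
    return '  File "%s", line %s, in %s' % (path, lineno, func)

frame = "site-packages/allegra/async_chat.py | handle_read | 152"
print(to_standard_frame(frame))
# prints:   File "site-packages/allegra/async_chat.py", line 152, in handle_read
```

A filter like this over Allegra's log output would let existing traceback parsers jump to the failing line unchanged.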
Here's the context so you can see what the code looks like.
    def send (self, data):
            "try to send data through a stream socket"
            try:
                    result = self.socket.send (data)
                    return result
            except socket.error, why:
                    if why[0] == EWOULDBLOCK:
                            return 0
                    else:
                            raise
                    return 0

This code makes sense to me.
Error handling in Twisted
By comparison, here's what the same error produces in Twisted. (I added a couple of newlines to make it easier to see on this page.)
    Error: [Failure instance: Traceback (failure with no frames):
           twisted.internet.error.ConnectionRefusedError:
           Connection was refused by other side: 22: Invalid argument.
    ]

The "Error: " prefix was from me; the rest is from Twisted.
That's all it shows me. There are "no frames" so I don't even know which part of the code gave the problem. That's probably why the error object is so richly decorated. Each low-level error maps to a high-level error class which has some information about the error, though not the location of the error.
I noticed something strange about the exception. It reports error code 22 "Invalid argument" and not error code 61 "Connection refused" even though the error class is correct. That's strange.
I tried to track down why through code inspection. About an hour later I gave up. There's too much abstraction going on for my straight and narrow brain. In addition to being decomposable in every which way some of the protocols are restartable, like twisted.internet.tcp.BaseClient which has
    # If I have reached this point without raising or returning, that means
    # that the socket is connected.
    del self.doWrite
    del self.doRead
    # we first stop and then start, to reset any references to the old doRead
    self.stopReading()
    self.stopWriting()
    self._connectDone()

Why does a newly created connection need to stop reading and writing? And that's an ugly trick setting instance variables doRead and doWrite to the bound method doConnect at the start, shadowing the class method of the same name. I assume so the connection starts automatically on doRead or doWrite. Why not just use a "I've initialized" flag?
It can't be for the performance. If I read this code correctly then Twisted HTTP requests have a lot of overhead. There are dozens of constructors and method calls. Doing a connectTCP creates a tcp.Connector which is-a abstract.FileDescriptor and _SocketCloser. A FileDescriptor is-a log.Logger and styles.Ephemeral.
Anyway, I gave up and inserted some code in the ConnectionRefusedError class to print the traceback. Here's the traceback (with extra newlines).

This is stacked too deep for my tastes but I'm getting used to that with TurboGears. The raised error comes from
    try:
        connectResult = self.socket.connect_ex(self.realAddress)
    except socket.error, se:
        connectResult = se.args[0]
    if connectResult:
        if connectResult == EISCONN:
            pass
        # on Windows EINVAL means sometimes that we should keep trying:
        #
        elif ((connectResult in (EWOULDBLOCK, EINPROGRESS, EALREADY)) or
              (connectResult == EINVAL and platformType == "win32")):
            self.startReading()
            self.startWriting()
            return
        else:
            self.failIfNotConnected(
                error.getConnectError((connectResult, os.strerror(connectResult))))
            return

When I displayed the value for connectResult I was surprised
    connectResult 36 realAddress ('127.0.0.1', 8082)
    connectResult 22 realAddress ('127.0.0.1', 8082)

I got two connectResult attempts, and neither is ECONNREFUSED (61); they are EINPROGRESS (36) and EINVAL (22). Let's see - non-blocking, so the EINPROGRESS says to come back a bit later. That makes sense. But why EINVAL? My best guess comes from the possible error conditions mentioned on this man page:
    The AF_INET socket is of type SOCK_STREAM, and a previous connect() has
    already completed unsuccessfully. Only one connection attempt is allowed
    on a connection-oriented socket.

My intuition suggests next examining whether the shutdown/restart of a newly opened Connection ends up doing duplicate connect_ex calls on the file handle. However, at Twisted's level of abstraction it feels like I'm looking at the world through distantly separated tiny windows and I'm having a hard time figuring out what's going on. I don't want to figure it out and only got this far through stubbornness.
At least I was able to figure out why I get a ConnectionRefusedError class even when the errno is EINVAL:
    errnoMapping = {
        errno.ENETUNREACH: NoRouteError,
        errno.ECONNREFUSED: ConnectionRefusedError,
        errno.ETIMEDOUT: TCPTimedOutError,
        # for FreeBSD - might make other unices in certain cases
        # return wrong exception, alas
        errno.EINVAL: ConnectionRefusedError,
    }

I'm on a MacOS X 10.3.9 box, which is a BSD derivative.
Twisted's deferreds are based on the except/else model of Python exceptions. That's good for control flow but it doesn't capture the execution stack, which is useful for debugging. Twisted is such a maze of twisty little functions, all different, that the lack of a traceback makes it hard for me to debug or even understand the source of errors. Allegra's shallower stack and lack of framework generality made it much easier for me to see what actually does occur vs. seeing all of the alternatives which are not precluded from occurring.
Allegra and the GPL
I don't like the GPL. My clients include pharmaceutical companies doing drug research. I work for computational chemists. Some develop methods to model how a chemical compound works in people and (hopefully) identify ones which might be good drug leads. These models may take weeks and months to develop.
My clients do not (usually) sell software. They are consumers of software. But they do buy and sell companies. Here's an example based somewhat on a group I worked with years ago.
Biotech X developed some interesting new technology so Pharma A decided to buy X. The employees of the intermingled company started working together, and some of the people from A started using software developed by the people from X.
Suppose one of those is a web server which combines a GPL'ed Python library and a set of chemical prediction models. Clause 3 in the GPL says that the prediction models need not be put under the GPL because they are "reasonably considered independent and separate works in themselves." Does the purchase of X by A count as a distribution of that web server under the GPL? Why or why not?
Suppose now Pharma A sells part of what was Biotech X to Pharma B. There are now people at A and B using the software. Assume that the agreement of sale says that both sides can use the chemistry software. Does the inclusion of a GPL'ed Python library in the web server affect things? Will Pharma B's use of the chemistry software be under the GPL?
Some research code circulates for decades. I have used code written in the 1960s. Rarely is the provenance of such code well tracked. I've seen academic software include parts of source-available commercial software without attribution. (It was a hand translation from Fortran to C and the variable names and ordering of operations were identical.) There is the potential that a company could, through mergers and acquisitions, discover that most of its research software has become GPL'ed and that such a discovery may prevent the company from being bought in the future.
I don't know if the above is a valid legal consequence. I do know that it's something I am obligated to mention to my clients should I ever want to use a GPL licensed library. I have used GPL-based systems but these are through binary executable interfaces or web services where the GPL v2 does not apply. The GPL code and the chemistry code never commingle in the same process space.
I bring this up because Laurent said:
When people start to dismiss a library because of its licence, it's a sure sign that they don't have much else to say about its sources.
Anyway, let's make that licencing issue clear.
There are three ways to go with the GNU Public Licence:
- If you want to write free software for a greater good using Allegra sources, the GPL will suite your needs perfectly. That's the GNU way.
- If you want to make a buck installing or distributing Allegra's applications, you're free to do so as long as you comply with the GPL. That's the Linux distro way.
- If you want to use Allegra sources to write commercial applications to make a profit, buy a commercial licence. That's the MySQL way.
He's right. My clients won't care much about the technical superiority of Allegra for a given task over Twisted. They'll rightly do risk management and probably say 1) it's too expensive to consult our lawyers over this and/or 2) the likely benefits aren't worth the uncertain costs.
#1 is right out. Pharmas are developing drugs "for the greater good" and there's nothing other than faith which says that the GPL leads to at least as good results as existing practices. There's nothing the other way either, but it's a wager few want to take. (As companies, pharmas have some nasty practices. The people doing drug development want to cure diseases, understand how organisms work, do good science and get paid well for their hard and honest work.)
#2 is not going to happen. There's not enough software to make a distro worthwhile. With perhaps 10,000 purchasers world-wide that's a sale price of about $300 to keep someone employed at a decent salary for several years. There's only a small body of existing free software upon which to base such a distribution and it's unlikely that such a distribution will have an incremental advantage worth the cost.
#3? This is the most likely. It worked for Sam Rushing, and the Bobo folks (now Zope) finally paid him to donate the code to Python. What's the price? There's no sales link on the Allegra page, no documentation, no estimate of the prices. The latter being just like MySQL - "Call our sales teams and we'll work with you to figure out how much you can pay .. err, figure out the best deal for your unique requirements."
Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
Installing Node.js and npm on Windows is very straightforward.
First, download the Windows installer from the Node.js website. You will have the choice between the LTS (Long Term Support) or Current version.
- The Current version receives the latest features and updates more rapidly
- The LTS version forgoes feature changes to improve stability, but receives patches such as bug fixes and security updates
Once you have selected a version that meets your needs, run the installer. Follow the prompts to select an install path and ensure the npm package manager feature is included along with the Node.js runtime. This should be the default configuration.
Restart your computer after the installation is complete.
If you installed under the default configuration, Node.js should now be added to your PATH. Run Command Prompt or PowerShell and input the following to test it out:
> node -v
The console should respond with a version string. Repeat the process for npm:
> npm -v
If both commands work, your installation was a success, and you can start using Node.js!
More info on Node.js
According to its GitHub repository, Node.js is:
Node.js is an open-source, cross-platform, JavaScript runtime environment. It executes JavaScript code outside of a browser. For more information on using Node.js, see the Node.js Website.
A breakdown of Node.js facts:
- Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Every browser has a JavaScript engine built into it to process JavaScript files contained in websites. Google Chrome uses the V8 engine, which is built using C++. Node.js also uses this super-fast engine to interpret JavaScript files.
- Node.js uses an event-driven model.
This means that Node.js waits for certain events to take place. It then acts on those events. Events can be anything from a click to an HTTP request. We can also declare our own custom events and make Node.js listen for those events.
- Node.js uses a non-blocking I/O model.
We know that I/O tasks take much longer than processing tasks. Node.js uses callback functions to handle such requests.
Let us assume that a particular I/O task takes 5 seconds to execute, and that we want to perform this I/O twice in our code.
Python
    import time

    def my_io_task():
        time.sleep(5)
        print("done")

    my_io_task()
    my_io_task()
Node.js
    function my_io_task() {
        setTimeout(function() {
            console.log('done');
        }, 5000);
    }

    my_io_task();
    my_io_task();
Both look similar, but the times taken to execute are different. The Python code takes 10 seconds to execute while the Node.js code takes only 5 seconds.
Node.js takes less time because of its non-blocking I/O model. The first call to my_io_task() starts the timer and leaves it there. It does not wait for the response from the function. Instead, it moves on to call the second my_io_task(), starts the timer and leaves it there.
When the timer completes its execution after 5 seconds, it calls the function and prints done on the console. Since both timers are started together, they complete together and therefore take the same amount of time.
Socket.io
Socket.io is a Node.js library made to help make real-time communication between computers possible. To achieve this, Socket.io uses WebSockets to establish a connection between the client's browser and the server. The library uses Engine.IO for building the connection.
Demos
To get a taste of what is possible, Socket.io provides two demos to show its possible use-cases. You can find the demos at and find the link to the whiteboard demo on the left.
Since Socket.io is a Node.js library, you have to make sure that Node.js is installed. If it's not set up yet, get the latest version at Nodejs.org
macOS
Node.js can also be installed via Homebrew, a package manager for macOS.
Just type brew install node to install Node.js.
A getting-started guide can also be found on Socket.io's page. It shows how to easily build a real-time chat in just a couple of lines.
More information
More information about Socket.io and its documentation can be found at:
It would be a lot nicer if javascript had namespaces built in, but I find that organizing things like Dustin Diaz describes here helps me a lot.
    var DED = (function() {
        var private_var;

        function private_method() {
            // do stuff here
        }

        return {
            method_1 : function() {
                // do stuff here
            },
            method_2 : function() {
                // do stuff here
            }
        };
    })();
I put different “namespaces” and sometimes individual classes in separate files. Usually I start with one file and as a class or namespace gets big enough to warrant it, I separate it out into its own file. Using a tool to combine all you files for production is an excellent idea as well.
I try to avoid including any javascript with the HTML. All the code is encapsulated into classes and each class is in its own file. For development, I have separate <script> tags to include each js file, but they get merged into a single larger package for production to reduce the overhead of the HTTP requests.
Typically, I’ll have a single ‘main’ js file for each application. So, if I was writing a “survey” application, i would have a js file called “survey.js”. This would contain the entry point into the jQuery code. I create jQuery references during instantiation and then pass them into my objects as parameters. This means that the javascript classes are ‘pure’ and don’t contain any references to CSS ids or classnames.
    // file: survey.js
    $(document).ready(function() {
        var jS = $('#surveycontainer');
        var jB = $('#dimscreencontainer');
        var d = new DimScreen({container: jB});
        var s = new Survey({container: jS, DimScreen: d});
        s.show();
    });
I also find naming convention to be important for readability. For example: I prepend ‘j’ to all jQuery instances.
In the above example, there is a class called DimScreen. (Assume this dims the screen and pops up an alert box.) It needs a div element that it can enlarge to cover the screen, and then add an alert box, so I pass in a jQuery object. jQuery has a plug-in concept, but it seemed limiting (e.g. instances are not persistent and cannot be accessed) with no real upside. So the DimScreen class would be a standard javascript class that just happens to use jQuery.
    // file: dimscreen.js
    function DimScreen(opts) {
        this.jB = opts.container;
        // ...
    }; // need the semi-colon for minimizing!

    DimScreen.prototype.draw = function(msg) {
        var me = this;
        me.jB.addClass('fullscreen').append('<div>'+msg+'</div>');
        //...
    };
I've built some fairly complex applications using this approach.
You can break up your scripts into separate files for development, then create a “release” version where you cram them all together and run YUI Compressor or something similar on it.
Inspired by earlier posts I made a copy of Rakefile and vendor directories distributed with WysiHat (a RTE mentioned by changelog) and made a few modifications to include code-checking with JSLint and minification with YUI Compressor.
The idea is to use Sprockets (from WysiHat) to merge multiple JavaScripts into one file, check syntax of the merged file with JSLint and minify it with YUI Compressor before distribution.
Prerequisites
Now do the following:
- Download Rhino and put the JAR (“js.jar”) to your classpath
- Download YUI Compressor and put the JAR (build/yuicompressor-xyz.jar) to your classpath
- Download WysiHat and copy “vendor” directory to the root of your JavaScript project
- Download JSLint for Rhino and put it inside the “vendor” directory
Now create a file named “Rakefile” in the root directory of the JavaScript project and add the following content to it:
    require 'rake'

    ROOT = File.expand_path(File.dirname(__FILE__))
    OUTPUT_MERGED = "final.js"
    OUTPUT_MINIFIED = "final.min.js"

    task :default => :check

    desc "Merges the JavaScript sources."
    task :merge do
      require File.join(ROOT, "vendor", "sprockets")

      environment = Sprockets::Environment.new(".")
      preprocessor = Sprockets::Preprocessor.new(environment)

      %w(main.js).each do |filename|
        pathname = environment.find(filename)
        preprocessor.require(pathname.source_file)
      end

      output = preprocessor.output_file
      File.open(File.join(ROOT, OUTPUT_MERGED), 'w') { |f| f.write(output) }
    end

    desc "Check the JavaScript source with JSLint."
    task :check => [:merge] do
      jslint_path = File.join(ROOT, "vendor", "jslint.js")
      sh 'java', 'org.mozilla.javascript.tools.shell.Main',
         jslint_path, OUTPUT_MERGED
    end

    desc "Minifies the JavaScript source."
    task :minify => [:merge] do
      sh 'java', 'com.yahoo.platform.yui.compressor.Bootstrap', '-v',
         OUTPUT_MERGED, '-o', OUTPUT_MINIFIED
    end
If you've done everything correctly, you should be able to use the following commands in your console:
- rake merge — to merge different JavaScript files into one
- rake check — to check the syntax of your code (this is the default task, so you can simply type rake)
- rake minify — to prepare a minified version of your JS code
On source merging
Using Sprockets, the JavaScript pre-processor, you can include (or require) other JavaScript files. Use the following syntax to include other scripts from the initial file (named "main.js", but you can change that in the Rakefile):
    (function() {
        //= require "subdir/jsfile.js"
        //= require "anotherfile.js"

        // some code that depends on included files
        // note that all included files can be in the same private scope
    })();
And then…
Take a look at Rakefile provided with WysiHat to set the automated unit testing up. Nice stuff 🙂
And now for the answer
This does not answer the original question very well. I know and I’m sorry about that, but I’ve posted it here because I hope it may be useful to someone else to organize their mess.
My approach to the problem is to do as much object-oriented modelling as I can and separate implementations into different files. Then the handlers should be as short as possible. The example with the List singleton is also a nice one.
And namespaces… well they can be imitated by deeper object structure.
    if (typeof org === 'undefined') {
        var org = {};
    }
    if (!org.hasOwnProperty('example')) {
        org.example = {};
    }

    org.example.AnotherObject = function () {
        // constructor body
    };
I'm not a big fan of imitations, but this can be helpful if you have many objects that you would like to move out of the global scope.
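One way to avoid repeating those guards at every level is a small helper that builds the nested objects on demand; this is a common hand-rolled sketch, not part of any library:

```javascript
// Hypothetical helper: builds (or reuses) nested namespace objects.
function namespace(root, pathStr) {
  return pathStr.split(".").reduce(function (parent, part) {
    if (!parent[part]) {
      parent[part] = {};   // create the level only if it doesn't exist yet
    }
    return parent[part];
  }, root);
}

var org = {};
var widgets = namespace(org, "example.widgets"); // creates org.example.widgets
widgets.AnotherObject = function () {
  // constructor body
};
```

Because existing levels are reused rather than overwritten, separate files can all call `namespace` on the same path without clobbering each other's additions.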
The code organization requires adoption of conventions and documentation standards:
1. Namespace code for a physical file;
Exc = {};
2. Group classes in these JavaScript namespaces;
3. Set Prototypes or related functions or classes for representing real-world objects;
    Exc = {};
    Exc.ui = {};

    Exc.ui.maskedInput = function (mask) {
        this.mask = mask;
        ...
    };

    Exc.ui.domTips = function (dom, tips) {
        this.dom = dom;
        this.tips = tips;
        ...
    };
4. Set conventions to improve the code. For example, group all of its internal functions or methods in its class attribute of an object type.
    Exc.ui.domTips = function (dom, tips) {
        this.dom = dom;
        this.tips = tips;
        this.internal = {
            widthEstimates: function (tips) {
                ...
            },
            formatTips: function () {
                ...
            }
        };
        ...
    };
5. Write documentation for namespaces, classes, methods and variables. Where necessary, also discuss some of the code (some ifs and fors usually implement important logic of the code).
    /**
     * Namespace <i>Example</i> created to group other namespaces of the "Example".
     */
    Exc = {};

    /**
     * Namespace <i>ui</i> created with the aim of grouping user-interface namespaces.
     */
    Exc.ui = {};

    /**
     * Class <i>maskedInput</i> used to add to an HTML input formatting capabilities
     * and validation of data and information.
     * @param {String} mask - mask for validation of input data.
     */
    Exc.ui.maskedInput = function (mask) {
        this.mask = mask;
        ...
    };

    /**
     * Class <i>domTips</i> used to add to an HTML element the ability to present tips
     * and information about its function, input rules, etc.
     * @param {String} id - id of the HTML element.
     * @param {String} tips - tips on the element that will appear when the mouse is
     *                        over the element whose identifier is <i>id</i>.
     */
    Exc.ui.domTips = function (id, tips) {
        this.domID = id;
        this.tips = tips;
        ...
    };
These are just some tips, but that has greatly helped in organizing the code. Remember you must have discipline to succeed!
Following good OO design principles and design patterns goes a long way to making your code easy to maintain and understand.
But one of the best things I’ve discovered recently are signals and slots aka publish/subscribe.
Have a look at
for a simple jQuery implementation.
The idea is well used in other languages for GUI development. When something significant happens somewhere in your code you publish a global synthetic event which other methods in other objects may subscribe to.
This gives excellent separation of objects.
I think Dojo (and Prototype?) have a built in version of this technique.
see also What are signals and slots?
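The publish/subscribe idea is small enough to sketch without any framework; this hand-rolled bus (topic names are illustrative) shows the decoupling described above:

```javascript
// Minimal publish/subscribe bus: publishers and subscribers share only a topic name.
function PubSub() {
  this.topics = {};
}

PubSub.prototype.subscribe = function (topic, handler) {
  (this.topics[topic] = this.topics[topic] || []).push(handler);
};

PubSub.prototype.publish = function (topic, data) {
  (this.topics[topic] || []).forEach(function (handler) {
    handler(data);
  });
};

var bus = new PubSub();
var received = [];

// A widget subscribes without knowing who will publish.
bus.subscribe("user:login", function (name) { received.push(name); });

// Some other object publishes without knowing who is listening.
bus.publish("user:login", "alice");
```

Neither side holds a reference to the other, which is exactly the separation of objects the answer describes.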
I was able to successfully apply the Javascript Module Pattern to an Ext JS application at my previous job. It provided a simple way to create nicely encapsulated code.
Dojo has had the module system from day one. In fact it is considered to be a cornerstone of Dojo, the glue that holds it all together:
- dojo.require — the official doc.
- Understanding dojo.declare, dojo.require, and dojo.provide.
- Introducing Dojo.
Using modules Dojo achieves following objectives:
- Namespaces for Dojo code and custom code (dojo.declare()) — do not pollute the global space, coexist with other libraries, and user's non-Dojo-aware code.
- Loading modules on demand (dojo.require()).
- Custom builds by analyzing module dependencies to create a single file or a group of interdependent files (so-called layers) to include only what your web application needs. Custom builds can include Dojo modules and customer-supplied modules as well.
- Transparent CDN-based access to Dojo and user’s code. Both AOL and Google carry Dojo in this fashion, but some customers do that for their custom web applications as well.
You can:

- split up your code into model, view and controller layers
- compress all code into a single production file
- auto-generate code
- create and run unit tests
- and lots more…
Best of all, it uses jQuery, so you can take advantage of other jQuery plugins too.
My boss still speaks of the times when they wrote modular code (C language), and complains about how crappy the code is nowadays! It is said that programmers can write assembly in any framework. There is always a strategy for code organisation. The basic problem is with people who treat JavaScript as a toy and never try to learn it.
In my case, I write js files on a UI theme or application screen basis, with a proper init_screen(). Using proper id naming conventions, I make sure that there are no namespace conflicts at the root element level. In the unobtrusive window.load(), I tie things up based on the top-level id.
I strictly use JavaScript closures and patterns to hide all private methods. After doing this, I have never faced a problem of conflicting properties/function definitions/variable definitions. However, when working with a team it is often difficult to enforce the same rigour.
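The closure technique mentioned above can be sketched like this (a generic example, not code from that project): the state and the helper are unreachable from outside, so property definitions can't conflict.

```javascript
// Private state behind a closure: only the returned methods can touch `count`.
function makeCounter() {
  var count = 0;          // private variable
  function bump(by) {     // private helper method
    count += by;
  }
  return {
    increment: function () { bump(1); return count; },
    value: function () { return count; }
  };
}

var counter = makeCounter();
counter.increment();
counter.increment();
```

Two counters created with `makeCounter()` each get their own `count`, so there is no shared mutable global to collide over.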
I’m surprised no one mentioned MVC frameworks. I’ve been using Backbone.js to modularize and decouple my code, and it’s been invaluable.
There are quite a few of these kinds of frameworks out there, and most of them are pretty tiny too. My personal opinion is that if you’re going to be writing more than just a couple lines of jQuery for flashy UI stuff, or want a rich Ajax application, an MVC framework will make your life much easier.
“Write like crazy and just hope it works out for the best?”, I’ve seen a project like this which was developed and maintained by just 2 developers, a huge application with lots of javascript code. On top of that there were different shortcuts for every possible jquery function you can think of. I suggested they organize the code as plugins, as that is the jquery equivalent of class, module, namespace… and the whole universe. But things got much worse, now they started writing plugins replacing every combination of 3 lines of code used in the project.
Personally I think jQuery is the devil and it shouldn't be used on projects with lots of javascript because it encourages you to be lazy and not think of organizing code in any way. I'd rather read 100 lines of javascript than one line with 40 chained jQuery functions (I'm not kidding).
Contrary to popular belief, it's very easy to organize JavaScript code into equivalents of namespaces and classes. That's what YUI and Dojo do. You can easily roll your own if you like. I find YUI's approach much better and more efficient. But you usually need a nice editor with support for snippets to compensate for YUI naming conventions if you want to write anything useful.
I create singletons for everything I really do not need to instantiate several times on screen, and classes for everything else. All of them are put in the same namespace in the same file. Everything is commented and designed with UML and state diagrams. The JavaScript code is clear of HTML, so no inline javascript, and I tend to use jQuery to minimize cross-browser issues.
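A screen-wide service as a singleton might be sketched like this; `StatusBar` and its methods are invented for illustration:

```javascript
// Lazily created single instance behind a closure; every caller shares it.
var StatusBar = (function () {
  var instance = null;

  function create() {
    return {
      message: "",
      show: function (msg) { this.message = msg; }
    };
  }

  return {
    getInstance: function () {
      if (instance === null) {
        instance = create();   // built once, on first use
      }
      return instance;
    }
  };
})();

StatusBar.getInstance().show("loading...");
```

Any part of the page that calls `StatusBar.getInstance()` gets the same object, which is the point of using a singleton for one-per-screen UI pieces.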
Organising your code in a jQuery-centric namespaced way may look as follows, and will not clash with other JavaScript APIs like Prototype or Ext either.
    <script src="jquery/1.3.2/jquery.js" type="text/javascript"></script>
    <script type="text/javascript">
        var AcmeJQ = jQuery.noConflict(true);
        var Acme = {fn: function(){}};

        (function($){
            Acme.sayHi = function() {
                console.log('Hello');
            };
            Acme.sayBye = function() {
                console.log('Good Bye');
            };
        })(AcmeJQ);

        // Usage
        //     Acme.sayHi();
        // or
        // <a href="#" onclick="Acme.sayHi();">Say Hello</a>
    </script>
Hope this helps.
Good OO principles + MVC would definitely go a long way toward managing a complex JavaScript app.
Basically I am organizing my app and javascript to the following familiar design (which exists all the way back from my desktop programming days to Web 2.0)
Description for the numeric values on the image:
- Widgets representing the views of my application. These should be extensible and separated out neatly, resulting in the good separation that MVC tries to achieve, rather than turning my widgets into spaghetti code (the web-app equivalent of putting a large block of JavaScript directly in HTML). Each widget communicates with others by listening to the events generated by other widgets, thus reducing the strong coupling between widgets that could lead to unmanageable code (remember the days of adding onclick everywhere, pointing to global functions in the script tag? Urgh…)
- Object models representing the data that I want to populate in the widgets and pass back and forth to the server. By encapsulating the data in its model, the application becomes data-format agnostic. For example: while in JavaScript these object models are mostly serialized and deserialized into JSON, if the server were somehow using XML for communication, all I would need to change is the serialization/deserialization layer, and not necessarily all the widget classes.
- Controller classes that manage the business logic and communication to the server, plus occasionally a caching layer. This layer controls the communication protocol to the server and puts the necessary data into the object models.
- Classes are wrapped neatly in their corresponding namespaces. I am sure we all know how nasty the global namespace can be in JavaScript.
In the past, I would separate the files into their own js files and use common practice to apply OO principles in JavaScript. The problem I soon found was that there are multiple ways to write JS OO, and it's not necessarily the case that all team members have the same approach. As the team got larger (in my case more than 15 people), this got complicated, as there was no standard approach for object-oriented JavaScript. At the same time I didn't want to write my own framework and repeat some of the work that I am sure people smarter than I have already solved.
jQuery is incredibly nice as a JavaScript framework and I love it; however, as a project gets bigger, I clearly need additional structure for my web app, especially to facilitate standardized OO practice. For myself, after several experiments, I find that the YUI3 Base and Widget ( and) infrastructure provides exactly what I need. A few reasons why I use them:
- It provides Namespace support. A real need for OO and neat organization of your code
- It supports the notion of classes and objects
- It gives a standardized means to add instance variables to your class
- It supports class extension neatly
- It provides constructor and destructor
- It provides render and event binding
- It has base widget framework
- Each widget is now able to communicate with other widgets using a standard event-based model
- Most importantly, it gives all the engineers an OO Standard for Javascript development
Contrary to many views, I don’t necessarily have to choose between jQuery and YUI3. These two can peacefully co-exist. While YUI3 provides the necessary OO template for my complex web app, jQuery still provides my team with easy to use JS Abstraction that we all come to love and familiar with.
Using YUI3, I have managed to create an MVC pattern by separating classes that extend Base as the Model, classes that extend Widget as the View, and of course you have Controller classes that make the necessary logic and server-side calls.
Widget can communicate with each other using event based model and listening to the event and doing the necessary task based on predefined interface. Simply put, putting OO + MVC structure to JS is a joy for me.
Just a disclaimer, I don’t work for Yahoo! and simply an architect that is trying to cope with the same issue that is posed by the original question. I think if anyone finds equivalent OO framework, this would work as well. Principally, this question applies to other technologies as well. Thank God for all the people who came up with OO Principles + MVC to make our programming days more manageable.
In my last project -Viajeros.com- I’ve used a combination of several techniques. I wouldn’t know how to organize a web app — Viajeros is a social networking site for travellers with well-defined sections, so it’s kind of easy to separate the code for each area.
I use namespace simulation and lazy loading of modules according to the site section. On each page load I declare a “vjr” object, and always load a set of common functions to it (vjr.base.js). Then each HTML page decides which modules need with a simple:
vjr.Required = ["vjr.gallery", "vjr.comments", "vjr.favorites"];
Vjr.base.js gets each one gzipped from the server and executes them.
vjr.include(vjr.Required); vjr.include = function(moduleList) { if (!moduleList) return false; for (var i = 0; i < moduleList.length; i++) { if (moduleList[i]) { $.ajax({ type: "GET", url: vjr.module2fileName(moduleList[i]), dataType: "script" }); } } };
Every “module” has this structure:
vjr.comments = {} vjr.comments.submitComment = function() { // do stuff } vjr.comments.validateComment = function() { // do stuff } // Handlers vjr.comments.setUpUI = function() { // Assign handlers to screen elements } vjr.comments.init = function () { // initialize stuff vjr.comments.setUpUI(); } $(document).ready(vjr.comments.init);
Given my limited Javascript knowledge, I know there must be better ways to manage this, but until now it’s working great for us.
I use Dojo’s package management (
dojo.require and
dojo.provide) and class system (
dojo.declare which also allows for simple multiple inheritance) to modularize all of my classes/widgets into separate files. Not only dose this keep your code organized, but it also lets you do lazy/just in time loading of classes/widgets.
A few days ago, the guys at 37Signals released a RTE control, with a twist. They made a library that bundles javascript files using a sort of pre-processor commands.
I’ve been using it since to separate my JS files and then in the end merge them as one. That way I can separate concerns and, in the end, have only one file that goes through the pipe (gzipped, no less).
In your templates, check if you’re in development mode, and include the separate files, and if in production, include the final one (which you’ll have to “build” yourself).
Create fake classes, and make sure that anything that can be thrown into a separate function that makes sense is done so. Also make sure to comment a lot, and not to write spagghetti code, rather keeping it all in sections. For example, some nonsense code depicting my ideals. Obviously in real life I also write many libraries that basically encompass their functionality.
$(function(){ //Preload header images $('a.rollover').preload(); //Create new datagrid var dGrid = datagrid.init({width: 5, url: 'datalist.txt', style: 'aero'}); }); var datagrid = { init: function(w, url, style){ //Rendering code goes here for style / width //code etc //Fetch data in $.get(url, {}, function(data){ data = data.split('\n'); for(var i=0; i < data.length; i++){ //fetching data } }) }, refresh: function(deep){ //more functions etc. } };
I think this ties into, perhaps, DDD (Domain-Driven Design). The application I’m working on, although lacking a formal API, does give hints of such by way of the server-side code (class/file names, etc). Armed with that, I created a top-level object as a container for the entire problem domain; then, I added namespaces in where needed:
var App; (function() { App = new Domain( 'test' ); function Domain( id ) { this.id = id; this.echo = function echo( s ) { alert( s ); } return this; } })(); // separate file (function(Domain) { Domain.Console = new Console(); function Console() { this.Log = function Log( s ) { console.log( s ); } return this; } })(App); // implementation App.Console.Log('foo');
For JavaScript organization been using the following
- Folder for all your javascript
- Page level javascript gets its’ own file with the same name of the page. ProductDetail.aspx would be ProductDetail.js
- Inside the javascript folder for library files I have a lib folder
- Put related library functions in a lib folder that you want to use throughout your application.
- Ajax is the only javascript that I move outside of the javascript folder and gets it’s own folder. Then I add two sub folders client and server
- Client folder gets all the .js files while server folder gets all the server side files.
I’m using this little thing. It gives you ‘include’ directive for both JS and HTML templates. It eleminates the mess completely.
$.include({ html: "my_template.html" // include template from file... }) .define( function( _ ){ // define module... _.exports = function widget( $this, a_data, a_events ){ // exporting function... _.html.renderTo( $this, a_data ); // which expands template inside of $this. $this.find( "#ok").click( a_events.on_click ); // throw event up to the caller... $this.find( "#refresh").click( function(){ widget( $this, a_data, a_events ); // ...and update ourself. Yep, in that easy way. }); } });
You can use jquery mx (used in javascriptMVC) which is a set of scripts that allows you to use models, views, and controllers. I’ve used it in a project and helped me create structured javascript, with minimal script sizes because of compression. This is a controller example:
$.Controller.extend('Todos',{ ".todo mouseover" : function( el, ev ) { el.css("backgroundColor","red") }, ".todo mouseout" : function( el, ev ) { el.css("backgroundColor","") }, ".create click" : function() { this.find("ol").append("<li class='todo'>New Todo</li>"); } }) new Todos($('#todos'));
You can also use only the controller side of jquerymx if you aren’t interested in the view and model parts.
Your question is one that plagued me late last year. The difference – handing the code off to new developers who had never heard of private and public methods. I had to build something simple.
The end result was a small (around 1KB) framework that translates object literals into jQuery. The syntax is visually easier to scan, and if your js grows really large you can write reusable queries to find things like selectors used, loaded files, dependent functions, etc.
Posting a small framework here is impractical, so I wrote a blog post with examples (My first. That was an adventure!). You’re welcome to take a look.
For any others here with a few minutes to check it out, I’d greatly appreciate feedback!
FireFox recommended since it supports toSource() for the object query example.
Cheers!
Adam
I use a custom script inspired by Ben Nolan’s behaviour (I can’t find a current link to this anymore, sadly) to store most of my event handlers. These event handlers are triggered by the elements className or Id, for example.
Example:
Behaviour.register({ 'a.delete-post': function(element) { element.observe('click', function(event) { ... }); }, 'a.anotherlink': function(element) { element.observe('click', function(event) { ... }); } });
I like to include most of my Javascript libraries on the fly, except the ones that contain global behaviour. I use Zend Framework’s headScript() placeholder helper for this, but you can also use javascript to load other scripts on the fly with Ajile for example.
You don’t mention what your server-side language is. Or, more pertinently, what framework you are using — if any — on the server-side.
IME, I organise things on the server-side and let it all shake out onto the web page. The framework is given the task of organising not only JS that every page has to load, but also JS fragments that work with generated markup. Such fragments you don’t usually want emitted more than once – which is why they are abstracted into the framework for that code to look after that problem. 🙂
For end-pages that have to emit their own JS, I usually find that there is a logical structure in the generated markup. Such localised JS can often be assembled at the start and/or end of such a structure.
Note that none of this absolves you from writing efficient JavaScript! 🙂
Lazy Load the code you need on demand. Google does something like this with their google.loader
Tags: java, javascript, sed | https://exceptionshub.com/commonly-accepted-best-practices-around-code-organization-in-javascript-closed.html | CC-MAIN-2022-05 | refinedweb | 4,246 | 65.42 |
Introduction
Sequence prediction is one of the hottest application of Deep Learning these days. From building recommendation systems to speech recognition and natural language processing, its potential is seemingly endless. This is enabling never-thought-before solutions to emerge in the industry and is driving innovation.
There are many different ways to perform sequence prediction such as using Markov models, Directed Graphs etc. from the Machine Learning domain and RNNs/LSTMs from the Deep Learning domain.
In this article, we will see how we can perform sequence prediction using a relatively unknown algorithm called Compact Prediction Tree (CPT). You’ll see how this is a surprisingly simple technique, yet it’s more powerful than some very well known methods, such as Markov Methods, Directed Graphs, etc.
I recommend reading this article before going further – A Must-Read Introduction to Sequence Modelling(with use cases). In this, Tavish introduced us to an entirely new class of problems called Sequence Modelling, along with some very good examples of their use cases and applications.
Table of Contents
- Primer about Sequence Prediction
- Compact Prediction Tree Algorithm (CPT)
- Understanding the Data Structures in CPT
- Understanding how training and prediction works in CPT
- Training Phase
- Prediction Phase
- Creating Model and Making Predictions
Primer about Sequence Prediction
Sequence prediction is required whenever we can predict that a particular event is likely to be followed by another event and we need to predict that.
Sequence prediction is an important class of problems which finds application in various industries. For example:
- Web Page Prefetching – Given a sequence of web pages that a user has visited, browsers can predict the most likely page that a user will visit and pre-load it. This will, in turn, save time and improve the user experience
- Product Recommendation – The sequence in which a user has added products into his/her shopping list can be used to recommend products that might be of interest to the user
- Sequence Prediction of Clinical Events – Given the medical history of a patient, Sequence Prediction can be leveraged to perform differential diagnosis of any future medical conditions
- Weather Forecasting – Predicting the weather at the next time step given the previous weather conditions.
There are numerous additional areas where Sequence Prediction can be useful.
Current landscape of solutions
To see different kinds of solutions available for solving problems in this field, we had launched the Sequence Prediction Hackathon. The participants came up with different approaches and the most popular of them was LSTMs/RNNs which was used by most of the people in the top 10 on the private leaderboard.
LSTMs/RNNs have become a popular choice for modelling sequential data, be it text, audio, etc. However, they suffer from two fundamental problems:
- They take a long time to train, typically tens of hours
- They need to be re-trained for sequences containing items not seen in the previous training iteration. This is a very costly process and is not feasible for scenarios where new items are encountered frequently
Enter CPT (Compact Prediction Tree)
Compact Prediction Tree (CPT) is one such algorithm which I found to be more accurate than traditional Machine Learning models, like Markov Models, and Deep Learning models like Auto-Encoders.
The USP of CPT algorithm is its fast training and prediction time. I was able to train and make predictions within 4 minutes on the Sequence Prediction Hackathon dataset mentioned earlier.
Unfortunately, only a Java implementation of the algorithm exists and therefore is not as popular among Data Scientists in general (especially those who use Python).
So, I have created a Python version of the library using the documentation developed by the algorithm creator. The Java code certainly helped in understanding certain sections of the research paper.
The library for public usage is present here and you are most welcome to make contributions to it. The library is still incomplete in the sense that it does not have all recommendations of the author of the algorithm, but is good enough to get a decent score of 0.185 on the hackathon leaderboard, all within a few minutes. Upon completion, I believe the library should be able to match the performance of RNNs/LSTMs for this task.
In the next section, we will go through the inner workings of the CPT algorithm, and how it manages to perform better than some of the popular traditional machine learning models like Markov Chains, DG, etc.
Understanding the Data Structures in CPT
As a prerequisite, it is good to have an understanding of the format of the data accepted by the Python Library CPT. CPT accepts two .csv files – Train and Test. Train contains the training sequences while the test file contains the sequences whose next 3 items need to be predicted for each sequence. For the purpose of clarity, the sequences in both Train and Test files are defined as below:
1,2,3,4,5,6,7 5,6,3,6,3,4,5 . . .
Note that the sequences could be of varying length. Also, One-hot encoded sequences will not give appropriate results.
The CPT algorithm makes use of three basic data structures, which we will talk about briefly below.
1. Prediction Tree
A prediction tree is a tree of nodes, where each node has three elements:
- Item – the actual item stored in the node.
- Children – list of all the children nodes of this node.
- Parent – A link or reference to the Parent node of this node.
A Prediction Tree is basically a trie data structure which compresses the entire training data into the form of a tree. For readers who are not aware of how a trie structure works, the trie structure diagram for the below two sequences will clarify things.
Sequence 1: A, B, C
Sequence 2: A, B, D
The Trie data structure starts with the first element A of the sequence A,B,C and adds it to the root node. Then B gets added to A and C to B. The Trie again starts at the root node for every new sequence and if an element is already added to the structure, then it skips adding it again.
The resulting structure is shown above. So this is how a Prediction Tree compresses the training data effectively.
2. Inverted Index (II)
Inverted Index is a dictionary where the key is the item in the training set, and value is the set of the sequences in which this item has appeared. For example,
Sequence 1: A,B,C,D
Sequence 2: B,C
Sequence 3: A,B
The Inverted Index for the above sequence will look like the below:
II = {
‘A’:{‘Seq1’,’Seq3’},
’B’:{‘Seq1’,’Seq2’,’Seq3’},
’C’:{‘Seq1’,’Seq2’},
’D’:{‘Seq1’}
}
3. LookUp Table (LT)
A LookUp Table is a dictionary with a key as the Sequence ID and value as the terminal node of the sequence in the Prediction Tree. For example:
Sequence 1: A, B, C
Sequence 2: A, B, D
LT = {
“Seq1” : node(C),
“Seq2” : node(D)
}
Understanding how Training and Prediction works in CPT
We will go through an example to solidify our understanding of the Training and Prediction processes in the CPT algorithm. Below is the training set for this example:
As you can see, the above training set has 3 sequences. Let us denote the sequences with ids: seq1, seq2 and seq3. A, B, C, and D are the different unique items in the training dataset.
Training Phase
The training phase consists of building the Prediction Tree, Inverted Index (II), and the LookUp Table (LT) simultaneously. We will now look at the entire training process phase.
Step 1: Insertion of A,B,C.
We already have a root node and a current node variable set to root node initially.
We start with A, and check if A exists as the child of the root node. If it does not, we add A to the child list of the root node, add an entry of A in Inverted Index with value seq1, and then move the current node to A.
We look at the next item, i.e B, and see if B exists as the child of the current node, i.e, A. If not, we will add B to the child list of A, add an entry of B in the Inverted Index with value seq1 and then move the current node to B.
We repeat the above procedure till we are done adding the last element of seq1. Finally, we will add the last node of seq1, C, to the LookUp table with key = “seq1” and value = node(C).
Step 2: Insertion of A,B.
Step 3: Insertion of A,B,D,C.
Step 4: Insertion of B,C.
)
We do keep doing this till we exhaust every row in the training dataset (remember, a single row represents a single sequence). We now have all the required data structures in place to start making predictions on the test dataset. Let’s have a look at the prediction phase now.
Prediction Phase
The Prediction Phase involves making predictions for each sequence of the data in the test set in an iterative manner. For a single row, we find sequences similar to that row using the Inverted Index(II). Then, we find the consequent of the similar sequences and add the items in the consequent in a Counttable dictionary with their scores. Finally, the Counttable is used to return the item with the highest score as the final prediction. We will see each of these steps in detail to get an in-depth understanding.
Target Sequence – A, B
Step 1: Find sequences similar to the Target Sequence.
Sequences similar to the Target Sequences are found by making use of the Inverted Index. These are identified by:
- finding the unique items in the target sequence,
- finding the set of sequence ids in which a particular unique item is present and then,
- taking an intersection of the sets of all unique items
For example:
Sequences in which A is present = {‘Seq1’,’Seq2’,’Seq3’}
Sequences in which B is present = {‘Seq1’,’Seq2’,’Seq3’,’Seq4’}
Similar sequences to Target Sequence = intersection of set A and set B = {‘Seq1’,’Seq2’,’Seq3’}
Step 2: Finding the consequent of each similar sequence to the Target Sequence
For each similar sequence, consequent is defined as its longest sub-sequence after the last occurrence of the last item of the Target Sequence in the similar sequence minus the items present in the Target Sequence.
** Note this is different from what the developers have mentioned in their research paper, but this has worked for me better than their implementation.
Let’s understand this with the help of an example:
Target Sequence = [‘A’,’B’,’C’]
Similar Sequence = [‘X’,’A’,’Y’,’B’,’C’,’E’,’A’,’F’]
Last item in Target Sequence = ‘C’
Longest Sub-Sequence after last occurrence of ‘C’ in Similar Sub-Sequence = [‘E’,’A’,’F’]
Consequent = [‘E’,’F’]
Step 3: Adding elements of the Consequent to the Counttable dictionary along with their score.
The elements of consequent of each similar sequence is added to the dictionary along with a score. Let’s continue with the above example for instance. The score for the items in the Consequent [‘E’,’F’] is calculated as below:
current state of counttable = {}, an empty dictionary
So for element E, i.e. the first item in the consequent, the score will be
score[E] = 1 + (1/1) + 1/(0+1)*0.001 = 2.001
score[F] 1 + (1/1) + 1/(1+1)*0.001 = 2.0005
After the above calculations, counttable looks like,
counttable = {'E' : 2.001, 'F': 2.0005}
Step 4: Making Prediction using Counttable
Finally, the key is returned with the greatest value in the Counttable as the prediction. In the case of the above example, E is returned as a prediction.
Creating Model and Making Predictions
Step 1: Clone the GitHub repository from here.
git clone
Step 2: Use the below code to read the .csv files, train your model and make the predictions.
#Importing everything from the CPT file
from CPT import *
#Importing everything from the CPT file
from CPT import *
#Creating an object of the CPT Class
model = CPT()
#Reading train and test file and converting them to data and target.
data, target = model.load_files(“./data/train.csv”,”./data/test.csv”)
#Training the model
model.train(data)
#Making predictions on the test dataset
predictions = model.predict(data,target,5,1)
End Notes
In this article, we covered a highly effective and accurate algorithm for sequence predictions – Compact Prediction Tree. I encourage you to try it out yourself on the Sequence Prediction Hackathon dataset and climb higher on the private leaderboard!
If you want to contribute to the CPT library, feel free to fork the repository or raise issues. If you know of any other methods to perform Sequence Predictions, write them in the comments section below. And do not forget to star the CPT library. 🙂
You can also read this article on our Mobile APPYou can also read this article on our Mobile APP
9 Comments
Hi…
Nice article. Tried to clone the library and walk through the codes.
I was wondering , I am getting the following errors
>>> import PredictionTree
>>> import CPT
>>> model = CPT()
Traceback (most recent call last):
File “”, line 1, in
TypeError: ‘module’ object is not callable
Thanking inanticipation..
Hi Shan,
Thanks for the appreciation.
The correct import statement is from CPT import *
or
from CPT import CPT
and then,
model = CPT()
Hi,
Excellent post!
But when I try to run it the result looks like:
[[23880, 25125, 24944],
[24530],
[26950, 26953, 26951],
[24532, 24138, 24915],
[],
[23663, 23691, 24138],
………
]
1. Few series provide one element instead of 3?
2. Some of the series is completely missing
I have used your train & test files
Hi GSB,
I am glad you liked the post.
The reason you are getting 1 or 0 prediction at some places is that for these particular target sequences, there are not enough similar sequences to make predictions.
A remedy for this problem is introduce a noise reduction strategy – for example – keep remove elements from the target sequence which appear rarely in the training dataset till we have enough similar sequences.
This is one of the future work in the library and will try to implement it soon.
Thank you NSS,
Nice job, theory explanation then practice, I like this approach, Keep this way, you are doing great !
Hi,
It is really interesting article. I am new in Python. I can’t understand some part of your code. Can you please help me?
Hi Sam,
Which part did you not understand?
Hi NSS,
Thank you so much for this great work, the only question I have is, why the number of predictions made is always len(target) – 1?
Hi Bo,
Can you please specify where did you use len(target) – 1? In the article following code is used to make predictions on test file:
predictions = model.predict(data,target,5,1)
The predictions will be similar to the length of test file. | https://www.analyticsvidhya.com/blog/2018/04/guide-sequence-prediction-using-compact-prediction-tree-python/ | CC-MAIN-2020-50 | refinedweb | 2,508 | 60.24 |
PyOpenCL is an open-source package (MIT license) that enables developers to easily access the OpenCL API from Python. The latest stable version of PyOpenCL provides features that make it one of the handiest OpenCL wrappers for Python because you can easily start working with OpenCL kernels without leaving your favorite Python environment. In this first article of a two-part series on PyOpenCL, I explain how you can easily build and deploy an OpenCL kernel, that is, a function that executes on the GPU device.
More Than an OpenCL Wrapper
PyOpenCL enables you to access the entire OpenCL 1.2 API from Python. Thus, you are able to retrieve all the information that the OpenCL API offers for platforms, contexts, devices, programs, kernels, and command queues. One of the main goals of PyOpenCL is to make sure that you can access in Python the same features that a C++ OpenCL host program can. However, PyOpenCL is not just an OpenCL wrapper for Python: It also provides enhancements and shortcuts for the most common tasks. With PyOpenCL, you usually need only a few lines of Python code to perform tasks that require dozens of C++ lines.
PyOpenCL reduces the number of OpenCL calls needed to retrieve all the information usually required to build and deploy a kernel for OpenCL execution on the GPU device. It provides automatic object cleanup tied to the lifetime of objects, so you don't need to worry about writing cleanup code. And PyOpenCL automatically translates all OpenCL errors to Python exceptions.
You can use PyOpenCL to create programs and build kernels as you would with a C++ OpenCL host program, or you can take advantage of the many OpenCL kernel builders that simplify the creation of kernels that need to perform common parallel algorithms. PyOpenCL provides kernel builders for the following parallel algorithms:
- Element-wise expression evaluation builder (map)
- Sum and counts builder (reduce)
- Prefix sums builder (scan)
- Custom scan kernel builder
- Radix sort
Gathering Information About Platforms, Contexts, and Devices
You probably have some basic knowledge of how OpenCL works and the it organizes the underlying drivers and hardware, such as platforms, contexts, and devices. If not, I suggest Matthew Scarpino's A Gentle Introduction to OpenCL, which is a good introductory tutorial; you will then be able to understand the examples I provide in this series. Scarpino's analogy of OpenCL processing and a game of cards makes it easy to understand the way OpenCL works.
One of the problems I found in the good documentation provided by PyOpenCL is that it assumes you have just one OpenCL platform available in your development workstation. Sometimes, you have more than one platform. For example, in my laptop I have two OpenCL platforms:
- AMD Accelerated Parallel Processing: The drivers for my ATI GPU, which include support for OpenCL.
- Intel OpenCL: The Intel OpenCL runtime which provides a CPU-only OpenCL runtime for my Intel Core i7 CPU.
Thus, it is good practice to prepare your code to run on computers that might have more than one OpenCL platform. If you want to easily check the different OpenCL platforms available in any computer, you can use a simple and useful utility to list them and check their features, GPU Caps Viewer.Download the latest version of GPU Caps Viewer and then read about its features in GPU Caps Viewer v1.8.6 Dives Deep on OpenCL Support. It is wise to learn about the OpenCL features that are available in your development workstation before you start diving deeper into PyOpenCL.
I'll use the
import pyopencl as cl import for all the code snippets in this article. If you want to retrieve all the OpenCL platforms, you can use
platforms = cl.get_platforms().
The
get_platforms() method returns a list of
pyopencl.Platform instances that include all the information you need about each platform. In my case,
get_platforms returns a list with two instances, and the
pyopencl.Platform instances have the following values for their
name property:
'AMD Accelerated Parallel Processing' and
'Intel(R) OpenCL'.
A common requirement for OpenCL host programming is to obtain different platform information parameters, such as the list of extensions supported by the platform. In C++, retrieving this information requires many lines of code. PyOpenCL makes it easier because each
pyopencl.Platform instance includes all the properties you might need to check. Table 1 shows the
pyopencl.Platform property names that provide the equivalent information to an OpenCL platform parameter name. As you can see, you just need to remove the
CL_PLATFORM_ prefix and use lowercase letters to generate the equivalent property name.
Table 1. PyOpenCL property names
For example, the following line retrieves a string with the list of extensions supported by the first OpenCL platform found. Because you need at least one OpenCL platform to be able to work with PyOpenCL and OpenCL, the line will work on any OpenCL development workstation:
platform_extensions = platforms[0].extensions
The following lines show examples of the string value of the
extensions property for two different platforms:
'cl_khr_icd cl_amd_event_callback cl_amd_offline_devices cl_khr_d3d10_sharing' cl_khr_gl_sharing cl_intel_dx9_media_sharing cl_khr_dx9_media_sharing cl_khr_d3d11_sharing'
You can check whether the platform supports the
cl_khr_icd extension with the following line:
supports_cl_khr_icd = platform_extensions.__contains__('cl_khr_icd')
Now that you have a platform, you can access devices that can receive tasks and data from the host. If you want to retrieve all the OpenCL devices available for a specific platform, you can call the
get_devices method for the
pyopencl.Platform instance. For example, the following line retrieves all the devices for the first OpenCL platform found:
devices = platforms[0].get_devices()
The
get_devices() method returns a list of
pyopencl.Device instances that include all the information you need about each device. When you call
get_devices() without parameter, it is equivalent to the following line that retrieves devices without filtering by device type:
devices = platforms[0].get_devices(cl.device_type.ALL)
For example, in my case, when I don't specify the desired device type,
get_devices() returns a list with two instances of the
pyopencl.Device class one for the GPU, and the other for the CPU. If you only want to retrieve the available GPU devices, you can specify the desired filter:
gpu_devices = platforms[0].get_devices(cl.device_type.GPU)
As with the platforms, PyOpenCL makes it easy to obtain different device information parameters, such as the device's global memory size. Each
pyopencl.Device instance includes all the properties you might need to check. Table 2 shows some of the
pyopencl.Device property names that provide the equivalent information to an OpenCL platform parameter name. As you can see, you just need to remove the
CL_DEVICE_ prefix and use lowercase letters to generate the equivalent property name.
Table 2. Typical device properties returned in PyOpenCL
The following line retrieves a string with the list of extensions supported by the first OpenCL GPU device in the selected platform. You need at least one OpenCL GPU device to run the next line in any OpenCL development workstation:
gpu_device_extensions = gpu_devices[0].extensions
The following line shows examples of the string value of the
extensions property for one device:
'cl_khr_gl_sharing cl_amd_device_attribute_query cl_khr_d3d10_sharing'
You can check whether the device supports the
cl_khr_gl_sharing extension with the following line:
supports_cl_khr_gl_sharing = gpu_device_extensions.__contains__('cl_khr_gl_sharing')
It is very common to check some extensions related to graphics for a device, such as
cl_khr_d3d10_sharing and
cl_khr_gl_sharing. If you've ever written a OpenCL host application in C++, you will definitely notice how much simpler things are with PyOpenCL.
Building and Deploying a Kernel
To build and deploy a basic OpenCL kernel, you usually need to follow these steps in a typical OpenCL C++ host program:
- Obtain an OpenCL platform.
- Obtain a device id for at least one device (accelerator).
- Create a context for the selected device or devices.
- Create the accelerator program from source code.
- Build the program.
- Create one or more kernels from the program functions.
- Create a command queue for the target device.
- Allocate device memory and move input data from the host to the device memory.
- Associate the arguments to the kernel with kernel object.
- Deploy the kernel for device execution.
- Move the kernel's output data to host memory.
- Release context, program, kernels and memory.
These steps represent a simplified version of the tasks that your host program must perform (each step is a bit more complex in real life). For example, the first step (obtain an OpenCL platform) usually requires checking the properties for the platforms, as I explained in the previous section. In addition, each step requires error checking. Because you can work with PyOpenCL from any Python console, you can execute the different steps with an interactive environment that makes it easy for you to learn both OpenCL and the way PyOpenCL exposes the features in the API. | http://www.drdobbs.com/database/ternary-search-trees/database/easy-opencl-with-python/240162614 | CC-MAIN-2014-23 | refinedweb | 1,463 | 52.6 |
I am writing a game in which I need to draw text onto the screen. As I understand it there are two main ways to do this using Graphics2D - using GlyphVector and using drawString(). Of the two I prefer the former because it allows me to define text as a Shape object by using GlyphVector's getOutline() method.
However, GlyphVector is giving me very poor quality output. I am not sure what I am doing wrong, but the text is severely jagged and aliased, especially at small font sizes.
Here is an applet to quickly show what I am trying to do.
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;
import java.awt.font.TextLayout;
import java.awt.geom.AffineTransform;
import javax.swing.JApplet;

public class Test extends JApplet {
    public void paint(Graphics gr) {
        Graphics2D g = (Graphics2D) gr.create();
        AffineTransform trans = AffineTransform.getTranslateInstance(100, 100);
        trans.concatenate(AffineTransform.getRotateInstance(0.5));
        g.setTransform(trans);
        g.setColor(Color.red);
        Font f = new Font("Serif", Font.PLAIN, 15);
        GlyphVector v = f.createGlyphVector(g.getFontRenderContext(), "Hello");
        Shape shape = v.getOutline();
        g.setPaint(Color.red);
        g.fill(shape);
    }
}
If there are any other suggestions for drawing text I would love to hear them. However, I do need the final result to be a Shape.
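As a reply-style suggestion: the jaggedness most likely comes from filling the outline without antialiasing. Filling a Shape honors KEY_ANTIALIASING (the text-specific hint does not apply to a filled outline), so turning it on for the Graphics2D should smooth the glyphs while still letting you keep the Shape from getOutline(). A sketch, with class and method names of my own choosing:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

class SmoothText {
    // Enable antialiasing on a Graphics2D before filling glyph outlines.
    static Graphics2D smoothed(BufferedImage img) {
        Graphics2D g = img.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                           RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        return g;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(60, 20, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = smoothed(img);
        // g.fill(shape) here would now draw smoothed glyph outlines.
        System.out.println(g.getRenderingHint(RenderingHints.KEY_ANTIALIASING)
                == RenderingHints.VALUE_ANTIALIAS_ON);  // true
    }
}
```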
Before we can write and execute our first C++ program, we need to understand in more detail how C++ programs get developed. Here is a graphic outlining a simplistic approach:
When you sit down and start coding right away, you're typically thinking "I want to do <something>".
Why is int main() always there at the beginning of every program?
Hello Alex and NASCAR,
I don't know if it's a typo or not, if not please ignore,
In Step 1 example:-
“I want to write a program that reads in a file of stock prices and predicts whether the stock will go up or down.”
Typo(probably):-
It doesn't make sense to add the "in" word,it could go independent like this,
“I want to write a program that reads a file(*Removed in) of stock prices and predicts whether the stock will go up or down.”
Thank you
Good evening,
I don't know if it's a writing error, but it's about this phrase: "I want to do _this_" at step 2, third paragraph. Isn't it supposed to be in bold letters?
If not, please ignore this message.
Best regards.
I think the first code of your is so cool, not using namespace is awesome.
I think so, it is good!
Thank you !
...
First we need knowledge of a programming language
...
Possible typo: no comma after "First".
I believe it's correct as written (see).
When i open (just to see the main site, after downloading mini-lesson):
"As of June..." blinks and then I see the page with links. If I click any, I see "This Connection is Not Secure" at https links.
Can you please check if that's true for you too? My ISP sometimes makes "magic", so I'm not sure if I need to bother the "Benchmark education" tech support.
"First we need" - "mainly", not "step 1"
"knowledge ..." - subj.
"Second," - after considering "First"
"we need an editor." - enumeration.
If a sentence were "First, we need to acquire the knowledge ...", then that were procedural, with comma.
PS: comrades pupils, full list is at (as a time of writing, that is running Debian):
About "As an aside..."
I've visited:
The "Log Book With Computer Bug" is available online, but is "Currently not on view". Could you please add a link to the museum's online view to the "As an aside..." part of the lesson?
Cool! Thanks for the link. I added it to the aside box.
I think that "I want to write a program", repeated in every bullet point, draws attention away from the examples themselves. How about this:
Also (proposal):
Hi!
Are the full stops at the end of the Step 1 and Step 2 section headings intended?
The web says this is either a matter of the writer's taste or of a style guide, so I wonder if it's because of the word "you" inside them.
Do you mean the fact that the steps are separated by a section divider or something else?
Step 1: Define the problem that you would like to solve.
Step 2: Determine how you are going to solve the problem.
No, sir, I mean the full stops themselves. Other headings didn't have them, and so far I have been unable to find out what makes the difference.
Section divider (<hr>, "ornamented"?) is fine.
... In order to write the program, we need two things: First ...
Maybe a typo: capital letter F in 'first ...'.
Capitalizing the letter following a colon is acceptable grammar if the sentence following the colon is a complete sentence (which in this case, it is).
...Consequently, it’s worth your time to spend a little extra time up front before you start coding thinking about the best way to tackle a problem,...
Maybe better insert two '-' there:
... time up front - before you start coding - thinking about...
?
Fixed, thanks!
I think it's necessary to provide namespace identification
why do you give[ "return 0;"]
and [std:: ]
.
.
.
you can directly give ...
[#include<iostream>
main()
{
cout<<"colored text!";
}]
is there any purpose for it
`using namespace std;` can cause name collisions.
The return should be added for consistency.
Out of all the websites and programming PDFs, this is the only one that explains everything step by step, the way it should be!!
Thank you so much
This tutorial is very comprehensive and all what you need as a beginner programmer like me.
I was actually reading a book on c++ when I came across "composition" which the author did not explain extensively. So I decided to browse for the topic online and finally found myself in your website. In fact, your teaching style is so mesmerising that I have to abandon the book and continue with your tutorials.
Thanks a lot
Your way of teaching is really awesome!
Thank you!!
Thank you very much for this lovely initiative!
sir,
You mentioned above that C++ is a compiled language, so why do I need to compile my program before running it?
The definition of a compiled language is that the code needs to be compiled by a compiler before it can be executed by the computer.
Can you make "Dive in" topics that we can read from? I need a quick tutorial on how to make a declaration.
You can use the site index to find specific topics.
:thinking:
Well. I'm trying to get how to make a declaration from scratch.
And I don't get it, even if I try my hardest.
So maybe give people some tips on how to get topics when I read something and I don't get it?
Find the article on declarations (using the site index), and leave a comment there with what you're trying to do, and what you've tried that didn't work. I'm sure some nice soul will help.
Sir, I have a doubt about the "what and how" section.
If my "what" section says "I want to create a program which removes a particular type of file from my PC", what does the "how" section consist of?
Please give me some more examples.
Hi,
Some of the "how" examples can be:
1. (naive, slow) Find all the files on your PC, then check for those you want to remove and remove them.
2. (pro-active) Make a system service that would look for file creation events and trigger removal program when specific file type is created.
3. (somewhat strange) Modify file system code to make that file type work as if it was free space.
4. (resembling actual solution development) Limit possible folders/locations to reasonable ones (for example, you're probably not looking for .pdf in Windows DLL folders), find out the best time to run such program (for example, if you need to delete temp files that some software creates once a month, you don't need to run removal every second), think about speed and concurrency issues (what happens if you delete the file that your operating system is currently trying to read? Is it a problem for everything else on your PC if you try to delete 1M files simultaneously?), then check for theoretically possible bugs, then implement solution 1 with all the data you collected.
I work in a unix environment with Mac OS X.
How can i execute step 5 if the terminals gives back this message:
<< MacBook-Air-di-Iman:calcoloipotenusa iman.rosignoli$ g++ -o prog 2numeri.o file.o
duplicate symbol _main in:
2numeri.o
file.o
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation) >>
In summary, there's this message about a "duplicate symbol _main".
Can you explain it to me so I can fix the files and execute step 5?
Good day.
Hi Iman!
Can you show us the source code you're trying to compile?
It looks like the code files for 2numeri and file both have a function named main.
A program can only have one main function.
Is it possible to create your own coding engine? (e.g. Java, Python, C++, etc. etc.)
What's a "coding engine"?
If you mean programming language, sure. You can write your own language, develop a parser that converts it into C++ (or some other language), and then use an existing compiler for that language to compile it.
If you mean compiler, yes, but this is a lot more complicated. I wouldn't even try to do this.
Hi there Mr. Alex, first of all, many thanks for creating these lessons. I have faith that they will turn out very useful (i.e. They will actually work). However, I have a yearning question. Why shouldn't we use .cc as a file extension? What is the difference between .cc and .cpp. Please tell us, we are curious minds.
*raises right eyebrow and gives sinister smile
Because the website is named learncpp.com, not learncc.com.
I kid, I kid.
There's really no difference. If you want to use .cc, you can. Just be consistent.
I started learning, or trying to learn c++ like a week ago and I started with this one site, that I won't name but it really sucked. Anyways so far I'm enjoying this one but before I start learning to code in c++ do I need to learn HTML first? Also is using ideone just as good as an IDE, or would you prefer the IDE?
Hi Metthew!
HTML has nothing at all to do with c++, you can start with c++.
I have no experiences with ideone, judging by a quick glance it doesn't seem to let you choose the compiler or compiler options for your program and I don't see any file management. I'd go with a regular IDE.
Linux: eclipse or JetBrains CLion, I prefer eclipse
Windows: Visual Studio
There are many other IDEs around but these are the major ones which you'll find the most help for should you encounter any problems.
Yea I actually went with the free visual studio 2017 and so far I love it. Thanks to whoever started this site and took their time to actually teach beginners on how to step by step. It means a lot and I'll learn way more here with it being hands-on than I ever would've in some classroom. I just hope I'll be able to learn enough to get a job with it...This is something I've been wanting to do for a long time and I now have an opportunity to do it...
You definitely don't need to know HTML.
Ideone is an online compiler, and looks pretty limited. It will work for simple programs, may not be able to support multiple files or robust debugging. I'd install an IDE application if that's an option for you.
How can I do the steps in 0.4?
We cover this in future chapters. This just gives you an outline for the process in general.
Sir!
I wanted to know if I could program in C++ without using any IDE.
I do use Windows 7
Yes, you can use any editor you like. But IDE's have several functions that make them better (namely, an integrated compiler, syntax highlighting, and integrated debuggers).
Dear Sirs!
Sub: Tutorial on C++
Re: Lesson 0.4 -Step: 3
I have learnt HTML and CSS online from html.net, where they said (and I agree):
" are Google Chrome, Firefox, and Internet Explorer. But there are others such as Opera and Safari and they can all be used and they are all free...
"
I started learning from your tutorial today only with great enthusiasm and liking it and enjoying it; because I myself is the proof of the system(way of learning anything).
Still I would appreciate your comment in the context.
Thank you.
Best regards.
[N.B.: I have given my website (hosted for FREE) which i have done myself based on their tutorial. I have seen that you also offer HTML Tutorial which I will learn to enrich my knowledge further.]
I'm not sure what you would like me to comment on.
With websites, there are many programs that exist to help people layout website visually without having to write HTML and CSS. These programs generate the code for you. However, if you want the best level of control, you need to write the HTML and CSS yourself.
The C++ core language doesn't include visual elements. So in these tutorials, you'll write all of your own code.
Would it be advisable to try designing the solution by writing in a different language (e.g. Python) than what you intend to use for the ACTUAL solution, so you can see how it completely fails?
That sounds like a lot of work for a questionable benefit. So no, I don't think I'd advise that.
If you're interested in making sure your solutions are robust and error-free, I'd focus your time on learning how to _really_ test your code well. Lesson 5.11 has some tips in this regard.
Playing with React Hooks and Web Workers
You can try that too.
Introduction
React Hooks is something I’ve been working on lately. What’s wonderful is creating custom hooks. If you encapsulate logic nicely in a hook, it can be shared among components and used intuitively. You can find my custom hooks in my GitHub repos, some of which are very experimental.
This time, my experiment is to combine React Hooks and Web Workers. I know it’s not too difficult, but let me explain a bit in this short article.
The custom hook
Let me first introduce the library. It’s called “react-hooks-worker”. We won’t go into details about the implementation, but it’s pretty simple. Check out the code if you are interested.
The library lives in the dai-shi/react-hooks-worker repository on GitHub.
How to use it
You first need a worker script. It's written somewhat differently from scripts that run in the page. Basically, it receives a message and sends a message. Messages are typically serializable. Note that message passing does not have to be a request/response style.
const fib = i => (i <= 1 ? i : fib(i - 1) + fib(i - 2));
self.onmessage = (e) => {
const count = e.data;
self.postMessage(fib(count));
};
The above code is to receive a number, calculate a fibonacci number and send it back. Notice this “fib” is a slow version of the algorithm.
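As an aside, the exponential recursion is deliberate here; it keeps the worker busy so the demo is meaningful. A memoized variant (my own sketch, not part of the library) computes the same values instantly:

```javascript
// Same series, but each value is computed once and cached.
const memoFib = (() => {
  const cache = [0, 1];
  return function fib(i) {
    if (cache[i] === undefined) cache[i] = fib(i - 1) + fib(i - 2);
    return cache[i];
  };
})();

console.log(memoFib(10)); // 55
console.log(memoFib(30)); // 832040
```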
Now, we want to call this function from React components.
import React from 'react';
import ReactDOM from 'react-dom';
import { useWorker } from 'react-hooks-worker';
const CalcFib = ({ count }) => {
const { result, error } = useWorker('./slow_fib.js', count);
if (error) return <div>Error:{error}</div>;
return <div>Result:{result}</div>;
};
const App = () => (
<div>
<CalcFib count={5} />
</div>
);
ReactDOM.render(<App />, document.getElementById('app'));
This is everything. The count is just 5 in this code, but you can change it and pass it to CalcFib.
Comparison
I want to compare how a web worker works with the normal JS main thread, and made a small example to run the same code. The below is a screencast.
On the top left is an FPS chart. The web worker mode doesn't drop the rate, but the normal mode does. You can try it yourself. Just open the following link.
CodeSandbox, the online code editor tailored for web applications (codesandbox.io).
The code is in the repository here.
Some final notes
I’m not very satisfied with the current API of the custom hook. It might not be well encapsulated. It’s not very UX oriented either, for example, we might want an easy way to show “Loading…” for a request/response style invocation.
Feedback, including possible use cases, is welcome; it will give hints to improve the library.
DropDown menu to control a column in the filter panel and a Reset button that clears all the filters
Hello All,
I have 2 requirements:
- The dropdown menu needs to filter a column in the filter panel. For example, the dropdown menu contains [Years], and when you select a year from the dropdown, it also filters the [Years] column in the filter panel based on your dropdown selection.
- A Reset button that unmarks and clears all filters. The issue im running into here is that when i hit the reset button, it unmarks and clears all the filters, but the dropdown selection still shows a single year which is misleading because the data is no longer displaying values for only that year. Either the reset button needs to unmark and clear all the filters except for the year that is displayed in the dropdown OR the reset button unmarks and clears filters for everything AND also displays and "All" value in the dropdown menu so that the user knows that "All" data is being displayed.
Please tell me how I can accomplish this! My main goal is to be able to filter the filter panel by selecting a value from a dropdown menu AND be able to reset and unmark all data with a button.....and somehow the dropdown menu needs to reflect "All" values. Please let me know if i need to elaborate more.
Below is the python script that I used. I've also added a sample .dxp file of what I have thus far.
Thanks in advance for your help!
SCRIPT:
# import the ListBoxFilter class
from Spotfire.Dxp.Application.Filters import ListBoxFilter
# locate the data table and grab the filter collection
dt = Document.Data.Tables["YearTable"]
filters = Document.FilteringSchemes.DefaultFilteringSchemeReference[dt]
# repeat the below lines for any other filters you would like to change
# choose the filter we are interested in
f = filters["Year"].As[ListBoxFilter]()
# unset "IncludeAllValues" or nothing we change will matter
f.IncludeAllValues = False
# set the value we are interested in
f.SetSelection(YearParam)
SCRIPT PARAMETER:
- Name is YearParam
- Type is Integer
- Debug Value is my dropdown menu property control
Create a Timer class that counts down from n to 0 (where n is specified by the constructor, 5 by default for the default constructor). You will need 2 fields: n & time_left. Write a tick method that decrements the time left if time left is greater than zero; otherwise it prints: “You have no time left!”

This is what I have, but the program is not counting down. How do I test it in main?

public class Timer {
    private int n;
    private static int time_left;

    public Timer() {
        n = 5;
        //time_left = 0;
    }

    public void tick(int n) {
        if (time_left > 0)
            time_left--;
        else
            System.out.println("You have no time left");
    }

    public static void main(String[] args) {
        Timer t = new Timer();
        t.tick(4);
    }
}
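One way the class could be wired up so that it actually counts down (a sketch; in the original, time_left is never initialized from n, and tick takes an unused parameter — the getter name below is mine):

```java
class Timer {
    private final int n;        // kept to match the assignment's two fields
    private int timeLeft;

    public Timer() { this(5); }                      // default: count down from 5
    public Timer(int n) { this.n = n; this.timeLeft = n; }

    public void tick() {
        if (timeLeft > 0) {
            timeLeft--;
            System.out.println("Time left: " + timeLeft);
        } else {
            System.out.println("You have no time left!");
        }
    }

    public int getTimeLeft() { return timeLeft; }

    public static void main(String[] args) {
        Timer t = new Timer(3);
        for (int i = 0; i < 4; i++) {
            t.tick();  // prints 2, 1, 0, then the "no time left" message
        }
    }
}
```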
Interview question For L&T Infotech
I recently attended a telephonic interview for a technical round at L&T. The job profile was for an SSRS developer with SQL Server and ASP.NET knowledge as well, but the interview was all about OOP concepts. The interviewer didn't even ask me anything about project experience. Here are the questions, with elaborate explanations of the answers.
ASP.Net
1. Explain ASP.Net page lifecycle.
You can refer to the article
I found this article hands-on and very useful.
2. In which event the master page controls are loaded?
A master page is itself a page, so it follows the page lifecycle stages. However, a master page has fewer events than a normal page.
Only the following events are triggered, and in this order:
Page_Init
Page_Load
Page_PreRender
Render
Page_Unload.
The master page events are fired after the page events in the same sequence.
3. In which event the HTML is loaded?
The HTML output is generated in the Render stage of the page lifecycle, which comes after the PreRender event.
4. Why do we use master page in our application?
We create a master page to give a uniform structure to our application. Suppose we have a header or menu bar which should appear on all pages of the application; a master page helps create that uniformity and avoids coding redundancy.
5. How do we access the master page in our page?
To access the master page functionalities in our page we have to add the 'MasterPageFile' property in the page directive.
<%@ Page Language="C#" MasterPageFile="~/MasterPages/Master1.master" Title="Content Page"%>
Then we add Content controls to our page; each maps to a ContentPlaceHolder control defined in the master page.
6. Can an application have more than one master page?
Yes, we can have more than one master page in our application, but a page cannot use more than one master page at a time. However, one master page can use another master page, i.e. nested master pages can be created.
7. What is polymorphism?Explain the types of polymorphism.
Literally, "poly" means many and "morph" means form. So, in .NET it signifies many forms of the same function/method or operator.
There are two types of polymorphism: compile-time polymorphism and run-time polymorphism. Examples of compile-time polymorphism are method overloading and operator overloading. An example of run-time polymorphism is method overriding.
8. Difference between method overloading and method overriding.
Method Overloading
Method overloading means using the same method name with different signatures, i.e. different parameters. The number of parameters or their data types may vary.
Method Overriding
In method overriding we redefine a base-class method in a derived class; in C# this uses the virtual and override keywords.
The base class declares the method with the virtual keyword, and the derived class overrides it with the override keyword.
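To make the distinction concrete, here is a minimal sketch. It is written in Java for brevity (Java instance methods are overridable by default, whereas the C# described above needs virtual/override), and all names are mine:

```java
class Shape {
    double area() { return 0.0; }                   // to be overridden
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; } // overriding: run time
}

class MathUtil {
    static int twice(int x) { return 2 * x; }           // overloading: same name,
    static double twice(double x) { return 2.0 * x; }   // different signatures
}

class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Square(3);                 // static type Shape, dynamic type Square
        System.out.println(s.area());            // 9.0 (resolved at run time)
        System.out.println(MathUtil.twice(2));   // 4 (overload picked at compile time)
        System.out.println(MathUtil.twice(2.5)); // 5.0
    }
}
```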
9. Difference between abstract class and interface.With example.
You can refer to my previous article in the below link:
10. How we do exception handling in ASP.Net?
We use Try...Catch blocks for exception handling in .NET.
11. If we write two catch blocks, first a generalized one and second a specific one (e.g. for a math exception), which one will get executed: the first, the second, or both?
In C#, a catch block for a general exception type cannot precede one for a more specific type; the compiler rejects it with "A previous catch clause already catches all exceptions of this or of a super type".
So we must always catch the more specific exceptions before the less specific, generalized ones.
12. If the Exception handling contains only Try and Finally block and no catch block, will it get executed?
Yes, it will get executed. If a Try statement does not contain at least one Catch block, it must contain a Finally block. The Finally statement in a block of code will be executed even though the exception is not caught or handled here. Any Finally block between the throwing of an exception and the handling of that exception will always be executed.
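A quick demonstration of that guarantee (Java syntax for brevity; the Try/Finally semantics described above are the same, and the names are mine):

```java
class FinallyDemo {
    static final StringBuilder log = new StringBuilder();

    static int risky(boolean boom) {
        try {
            if (boom) throw new IllegalStateException("boom");
            return 1;
        } finally {
            log.append("cleanup;");  // runs on the normal AND the throwing path
        }
    }

    public static void main(String[] args) {
        risky(false);                             // normal exit
        try {
            risky(true);                          // finally runs, then it throws
        } catch (IllegalStateException e) {
            log.append("caught;");
        }
        System.out.println(log);                  // cleanup;cleanup;caught;
    }
}
```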
13. What is the use of "Using" keyword in asp.net?
The most common use of the using keyword is importing namespaces at the top of a code-behind file.
Beyond that, the using statement can be used for guaranteed disposal of resources. A using block behaves like a Try...Finally block: it guarantees disposal of the resource no matter how you exit the block.
Syntax:
Using resource As New resourceType
' Insert code to work with resource.
End Using
14. What is State Management? What are the types of state management?
You can refer to my previous article in the below link:
15. What do you mean by View State?
You can refer to my previous article in the below link:
16. Difference between ViewState and Hidden Field?
ViewState
Viewstate is client-side state management where data is stored between postbacks.
There is no extra data encryption and decryption involved by default.
The data values of viewstate-enabled controls are visible in the page source when viewstate is enabled.
Hidden Field
Hidden Field is a server control which stores some data, so data retrieval hits the server.
A hidden field is a single control, so compared to viewstate it cannot store the data for a whole page; it stores a smaller amount of data than viewstate.
The hidden field uses a serialization and deserialization mechanism for storing and retrieving data.
A hidden field stores a single variable in its value property and must be explicitly added to the page.
17. What is Web Services?
A web service is a platform independent web application which is basically a class consisting of web methods that could be used by other applications. Any application with any language can utilise the methods of a web service.
18. Why do we use JavaScript in our application?
We use JavaScript for client-side scripting in our application, for example when we need a popup window such as an alert or a confirmation message. We also use JavaScript for client-side validation.
19. What is SVN?
SVN (Subversion) is a source version control system. It is used when we want different people to work on the same page without interfering with each other's code and losing data. Each person can check out a copy of the page to their local machine, edit it, and check it back in.
SQL SERVER
1. What is the.
SSRS
1. What do you mean by RDL?
2. What is a dataset?
3. What is a drill down report?
For SSRS answers you can refer to my article from the below link:
11. If we write 2 error messages in a catch block, first a generalized one and second a math expression, which one will get executed: 1st, 2nd or both?
A) If we write the general exception first, the compiler will not allow us to write the second exception class. The error will show "A previous catch clause already catches all exceptions of this or of a super type". Only one general exception class serves all exceptions.
12. If the exception handling contains only Try and Finally blocks and no Catch block, will it get executed?
A) The Try block logic will be executed; if an exception occurs, there is no Catch block to handle it, so the exception will be thrown. However, the Finally block will execute regardless of the exception.
13. What is the use of the "using" keyword in ASP.NET?
A) The using directive imports the specified namespace, for example:
using System.Data;
using System.Collections.Generic;
regards.
Sridhar.
DNS Member.
I'm learning C++ and I found a behavior I don't understand. If I write the following program in C:
#include <stdio.h>
int main() {
char question[] = "What is your name? ";
char answer[2];
printf(question);
scanf("%ls", answer);
printf("%s\n", answer);
return 0;
}
When I type a name longer than two bytes the answer is something gibberish, but even if I don't know exactly why, I know that something went wrong and it tried to recover.
Instead, if I write this C++ program (somewhat equivalent to the former):
#include <iostream>
using namespace std;
int main() {
char question[] = "What is your name? ";
char answer[2];
cout << question;
cin >> answer;
cout << answer << endl;
return 0;
}
I'd expect a similar behavior, since I declared
answer as a char array and not a string (which can adjust its size dynamically). But when I type something very long, it is printed back as I entered it. An example:
$ ./test
What is your name? asdfa
asdfa
$ ./test
What is your name? sdhjklwertiuoxcvbnm
sdhjklwertiuoxcvbnm
So, what's going on here? As a secondary question, what happens in the C one, when I type something longer?
EDIT: Just to clarify, I know I can use
std::string instead of char arrays (I had written it above ^^). I was interested in knowing why the programs exhibited that behavior. Now I know it's undefined behavior. Also, I corrected the error in the C program (scanf).
This is undefined behavior (UB):
scanf(answer);
scanf function will interpret the uninitialized content of
answer as the format string, causing UB.
It should be like this:
scanf("%1s", answer);
Note that when you declare a character array of size 2, it means that it could fit a C string of length at most 1, because you need one character for null terminator.
Note that when you enter more than two characters for the name in your C++ program, you get undefined behavior too: writing past the end of the array is UB. Fortunately, it is very rare to need to read a string into a character array in C++, because the standard C++ library supplies a dynamically resizing class
std::string, a much better choice for representing strings.
char answer[2]; means your array can contain only 2 characters. If you push more than that, the memory is overrun and it is undefined behavior. Either reserve enough space in the array or, better, use std::string if using an array is not mandatory. Also, you are taking input in the wrong way, as the other answer pointed out.
You cannot expect similar behaviour.
You can expect undefined behaviour in both cases: overrunning your memory buffer is undefined behaviour in both languages, so absolutely anything is allowed to happen.
char answer[2]; contains space for just 2 bytes (1 byte + 1 NUL character, in the case of a NUL-terminated string).
In both C and C++, accessing data beyond the array size is undefined. So rather than asking why or how, you should just not do it.
The correct way to handle this undefined behaviour in C++ would be to use std::string.
Hi all,
I just got into Java. At the moment I'm coding little games and loops trying to increase my skill level, and this has been puzzling me for a few hours. I'm sure it's something small, but I cannot find what is wrong with the code below. I'm starting to program a simple snake game from scratch. I can print the board just fine but am having trouble printing the head of the snake's initial position. Also, if anyone has any tips or tricks for what I've already done or might encounter, please share!
I want it to output something like this...
Code :
***************
*             *
*             *
*             *
*             *
***************
With an "O" in the center being the head. The code is a bit messy, and I am well aware that I will have to make changes to formatting. Once I get this thing to print, I will take care of the rest. Any help would be awesome! Thanks!
package snake.game;
Code :
public class Game {

    public boolean running = false;
    public static boolean newGame = true;
    public static final int boardX = 40;
    public static final int boardY = 20;

    private static void Game() {
        updateBoard();
    }

    public static void updateBoard() {
        int xPos = 0, yPos = 1;
        int[] playerPos = new int[2];
        playerPos = updatePlayer();
        System.out.println("X: " + playerPos[xPos] + "Y: " + playerPos[yPos]);
        for (int y = 0; y < boardY; y++) {
            for (int x = 0; x < boardX; x++) {
                if (x == 0 || x == boardX - 1 || y == 0 || y == boardY - 1) {
                    if (y == playerPos[yPos] && x == playerPos[xPos])
                        System.out.print("O");
                    else
                        System.out.print("*");
                } else
                    System.out.print(" ");
            }
            System.out.println(" ");
        }
    }

    public static int[] updatePlayer() {
        int[] startPosition = new int[2];
        int[] position = new int[2];
        startPosition[0] = boardX / 2;
        startPosition[1] = boardY / 2;
        if (newGame)
            for (int i = 0; i < 2; i++) {
                position[i] = startPosition[i];
                newGame = false;
            }
        return position;
    }

    public static void main(String[] args) {
        Game();
    }
}
EDIT: Solved it while reading my own post haha. Still any tips? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36803-beginners-snake-code-im-stuck-printingthethread.html | CC-MAIN-2015-11 | refinedweb | 320 | 71.04 |
From Documentation
ID Space
It is common to decompose a visual presentation into several subsets or ZUML pages. For example, you may use a page to display a purchase order, and a modal dialog to enter the payment term. If all components are uniquely identifiable in the same desktop, developers have to maintain the uniqueness of all identifiers for all pages that might create in the same desktop. This step can be tedious, if not impossible, for a sophisticated application.
The concept of ID space is hence introduced to resolve this issue. An ID space is a subset of components of a desktop. The uniqueness is guaranteed only in the scope of an ID space. Thus, developers could maintain the subset of components separately without the need to worry if there is any conflicts with other subsets.
Window (Window) is a typical component that is an ID space. All descendant components of a window (including the window itself) form an independent ID space. Thus, you could use a window as the topmost component to group components. This way developers only need to maintain the uniqueness of each subset separately.
By and large, every component can form an ID space as long as it implements IdSpace. This type of component is called the space owner of the ID space after the component is formed. Components in the same ID space are called "fellows".
When a page implements IdSpace, it becomes a space owner. In additions, the macro component and the include component (Include) can also be space owners.
Another example is
idspace (Idspace). It derives from
div, and is the simplest component implementing IdSpace. If you don't need any feature of window, you could use
idspace instead.
You could make a standard component as a space owner by extending it to implement IdSpace. For example,
public class IdGrid extends Grid implements IdSpace { //no method implementation required }
Tree of ID Space
If an ID space has a child ID space, the components of the child space are not part of the parent ID space. But the space owner of the child ID space will be an exception in this case. For example, if an ID space, let's say X, is a descendant of another ID space, let's say Y, then space X's owner is part of space Y. However, the descendants of X is not a part of space Y.
For example, see the following ZUML page
<?page id="P"?> <zk> <window id="A"> <hbox id="B"> <button id="D" /> </hbox> <window id="C"> <button id="E" /> </window> </window> <hbox id="F"> <button id="G" /> </hbox> </zk>
will form ID spaces as follows:
As depicted in the figure, there are three spaces: P, A and C. Space P includes P, A, F and G. Space A includes A, B, C and D. Space C includes C and E.
Components in the same ID spaces are called fellows. For example, A, B, C and D are fellows of the same ID space.
getFellow and getSpaceOwner
The owner of an ID space could be retrieved by Component.getSpaceOwner() and any components in an ID space could be retrieved by Component.getFellow(String), if it is assigned with an ID (Component.setId(String)).
Notice that the getFellow method can be invoked against any components in the same ID space, not just the space owner. Similarly, the getSpaceOwner method returns the same object for any components in the same ID space, no matter if it is the space owner or not. In the example above, if C calls getSpaceOwner it will get C itself, if C calls getSpaceOwnerOfParent it will get A.
Composer and Fellow Auto-wiring
With ZK Developer's Reference/MVC, you generally don't need to look up fellows manually. Rather, they could be wired automatically by using the auto-wiring feature of a composer. For example,
public class MyComposer extends SelectorComposer { @Wire private Textbox input; //will be wired automatically if there is a fellow named input public void onOK() { Messsagebox.show("You entered " + input.getValue()); } public void onCancel() { input.setValue(""); } }
Then, you could associate this composer to a component by specifying the apply attribute as shown below.
<window apply="MyComposer"> <textbox id="input"/> </window>
Once the ZUML document above is rendered, an instance of MyComposer will be instantiated and the
input member will also be initialized with the fellow named
input. This process is called "auto-wiring". For more information, please refer to the Wire Components section.
Find Component Manually
There are basically two approaches to look for a component: by use of CSS-like selector and filesystem-like path. The CSS-like selector is more powerful and suggested if you're familiar with CSS selectors, while filesystem-like path is recommended if you're familiar with filesystem's path.
Selector
Component.query(String) and Component.queryAll(String) are the methods to look for a component by use of CSS selectors. For example,
comp.queyr("#ok"); //look for a component whose ID's ok in the same ID space comp.query("window #ok"); //look for a window and then look for a component with ID=ok in the window comp.queryAll("window button"); //look for a window and then look for all buttons in the window
Component.query(String) returns the first matched component, or null if not found. On the other hand, Component.queryAll(String) returns a list of all matched components.
Path
ZK provides a utility class called Path to simplify the location of a component among ID spaces. The way of using it is similar to java.io.File. For example,
//Two different ways to get the same component E Path.getComponent("/A/C/E");//if call Path.getComponent under the same page. new Path("/A/C", "E").getComponent(); //the same as new Path("/A/C/E").getComponent()
Notice that the formal syntax of the path string is "/[/]<space_owner>/[<space_owner>...]/felow" and only the last element could fellow because it is not space owner. For example,
// B and D are fellows in the Id space of A Path.getComponent("/A/B"); // get B Path.getComponent("/A/D"); // get D
If a component belongs to another page, we can retrieve it by starting with the page's ID. Notice that double slashes have to be specified in front of the page's ID.
Path.getComponent("//P/A/C/E");//for page, you have to use // as prefix
Notice that the page's ID can be assigned with the use of the page directive as follows.
<?page id="foo"?> <window/>
UUID
A component has another identifier called UUID (Universal Unique ID). It is assigned automatically when the component is attached to a page. UUID of a component is unique in the whole desktop (if it is attached).
Application developers rarely need to access it.
In general, UUID is independent of ID. UUID is assigned automatically by ZK, while ID is assigned by the application. However, if a component implements RawId, ID will become UUID if the application assigns one. Currently, only components from the XHTML component set implements RawId.
Version History
Last Update : 2014/7/11 | http://books.zkoss.org/wiki/ZK_Developer's_Reference/UI_Composing/ID_Space | CC-MAIN-2015-11 | refinedweb | 1,189 | 57.67 |
Welcome to LLVM! In order to get started, you first need to know some basic information.
First, LLVM comes in two pieces. The first piece is the LLVM suite. This contains all of the tools, libraries, and header files needed to use the low level virtual machine. It contains an assembler, disassembler, bytecode analyzer, and bytecode optimizer. It also contains a test suite that can be used to test the LLVM tools and the GCC front end.
The second piece is the GCC front end. This component provides a version of GCC that compiles C and C++ code into LLVM bytecode. Currently, the GCC front end is a modified version of GCC 3.4 (we track the GCC 3.4 development). Once compiled into LLVM bytecode, a program can be manipulated with the LLVM tools from the LLVM suite.
Here's the short story for getting up and running quickly with LLVM:
Specify the full pathname of where the LLVM GCC frontend is installed.:
The LLVM suite may compile on other platforms, but it is not guaranteed to do so. If compilation is successful, the LLVM utilities should be able to assemble, disassemble, analyze, and optimize LLVM byte:
There are some additional tools that you may want to have when working with LLVM:
If you want to make changes to the configure scripts, you will need GNU autoconf (2.57 or higher), and consequently, GNU M4 (version 1.4 or higher). You:
LLVM is very demanding of the host C++ compiler, and as such tends to expose bugs in the compiler. In particular, several versions of GCC crash when trying to compile LLVM. We routinely use GCC 3.3.3 and GCC 3.4.0 and have had success with.3.2: This version of GCC suffered from a serious bug which causes it to crash in the "convert_from_eh_region_ranges_1" GCC function. cfrontend/platform/llvm-gcc.
In order to compile and use LLVM, you have access to our CVS repository, you can get a fresh copy of the entire source code. All you need to do is check it out from CVS specify a label. The following releases have the following label:
If you would like to get the GCC C front-end.
If the main CVS server is overloaded or inaccessible, you can try one of these user-hosted mirrors:
Before configuring and compiling the LLVM suite, you need to extract the LLVM GCC front end from the binary distribution. It is used for building the bytecode libraries later used by the GCC front end for linking programs, and its location must be specified when the LLVM suite is configured.
To install the GCC front end, do the following:
If you are using Solaris/Sparc or MacOS X/PPC, you will need to fix the header files:
cd cfrontend/platform
./fixheaders
The binary versions of the. This is not for the faint of heart, so be forewarned.
Once checked out from the CVS repository, the LLVM suite source code must be configured via the configure script. This script sets variables in:. known broken version of GCC to compile LLVM with.!
One useful source of information about the LLVM source base is the LLVM doxygen documentation available at. The following is a brief introduction to code layout:
Every directory checked out of CVS will contain a CVS directory; for the most part these can just be ignored. libraries which are compiled into LLVM byte.
- gccld
- gccld links together several LLVM bytecode files into one bytecode file and does some optimization. It is the linker invoked by the GCC frontend when multiple .o files need to be linked together. Like gccas, the command line interface of gccld is designed to match the system linker, to aid interfacing with the GCC frontend.
This directory contains utilities for working with LLVM source code, and some of the utilities are actually required as part of the build process because they are code generators for parts of LLVM infrastructure.
#include <stdio.h> int main() { printf("hello world\n"); return 0; }
Next, compile the C file into a LLVM bytecode file:
% llvmgcc.
Run the program. To make sure the program ran, execute one of the following commands:
% ./hello
or
%: | http://www.llvm.org/releases/1.3/docs/GettingStarted.html | CC-MAIN-2014-49 | refinedweb | 702 | 64.61 |
Sockets: Usenet Support
This is a really old draft from 1997.
Pulling Documents and Images off Usenet
Another source for information and images is the part of Internet called Usenet, or News. Usenet is a distributed bulletin-board, where messages can be read from, and posted to special news servers. Messages posted to a given news server are propagated to other servers, but as with the Web, you have to connect to a server to be able to read the messages.
The protocol used to fetch messages (“articles”) from a news server is called Network News Transfer Protocol (NNTP). <RFC977>. Here’s a typical session, in which the client application connects, reads the standard headers for new messages in the newsgroup called comp.lang.python, downloads one of them, and then posts a message to the server (possibly in response to the other message):
Client: connects Server: 200 news.spam.egg PyNNTP 1.0 ready (posting ok) Client: GROUP comp.lang.python Server: 211 367 13887 14268 comp.lang.python Client: XOVER 14211-14268 Client: 204 data follows Server: (sends overview information for articles 14211 to 14268) Server: . Client: ARTICLE 14220 Server: 220 14220 <5qj8v5$8dd@news.spam.egg > article Server: (sends message) Server: . Client: POST Server: 340 OK Client: (sends message) Client: . Server: 240 Article posted Client: QUIT Client: disconnects
Note that each command from the client starts with a command keyword, and each reply from the server starts with a status code. Messages and listings are terminated with a line containing a single dot.
The server assigns a serial number to each message (in this case, the comp.lang.python newsgroup currently contains 367 messages, having numbers between 13887 to 14268), and it’s usually up to the client to keep track of which messages it has already seen.
News Message Format
We’ll implement an NNTP client class in a moment, but before we do that, let’s see what the news messages look like. Here’s a simple example:
Path: news.myisp.se!newsfeed.internetmci.com!news.spam.egg From: user@spam.egg Newsgroups: comp.lang.python Subject: Re: Where's the bacon? Date: 17 Jul 1999 09:25:53 -0400 Lines: 12 Sender: user@spam.egg Message-ID: <lqsoxd95em.ach@news.spam.egg> References: <199907152100.RAA14304@foobar.spam.egg> Xref: news.spam.egg comp.lang.python:14304 Fredrik wrote: > Haven't got a clue. Maybe someone else knows more. You could check the list of contributed software at. ...
As in HTTP, the message starts with a list of headers, followed by an empty line, and the message body itself. Python’s standard library contains a module designed to represent messages like this. This module is named rfc822, after the Internet specification with the same name (the full name of which is Standard for the Format of ARPA Internet Text Messages, by the way).
RFC822 only specifies the general layout of the message; another specification, RFC1036, defines what headers to use in a news message.
<FIXME: header field summary: From, Date, Newsgroups, Subject, Message-ID, and Path>
The Message class defined in the rfc822 module takes a file handle, extracts the header fields, and leaves the file pointer positioned on the first line in the message, after the empty line. Basically, an instance of the Message class behaves like a dictionary of header fields, but also provides a set of utility functions and members.
The following code snippet reads a message from a file, and dumps the header dictionary to the screen:
import rfc822 fp = open("sample.news") msg = rfc822.Message(fp) for k, v in msg.items(): print k, "=", v
If applied to the above example, this script prints something like:
path = news.myisp.se!newsfeed.internetmci.com!news.spam.egg newsgroups = comp.lang.python from = user@spam.egg sender = user@spam.egg xref = news.spam.egg comp.lang.python:14304 date = 17 Jul 1999 09:25:53 -0400 references = <199907152100.RAA14304@foobar.egg> lines = 12 message-id = <lqsoxd95em.ach@news.spam.egg> subject = Re: Where's the bacon?
Sending Binary Data via News
The RFC822 specification (published in 1982) explicitly specifies that only 7-bit US ASCII characters can be used in news messages (it also applies to mail, something we will discuss later in this chapter). Nevertheless, binary files can be posted anyway, by first encoding them using one of the following methods:
- Use the Unix uuencode utility to encode the data.
- Use the Multipurpose Internet Mail Extension (MIME) encoding standard. Especially the base64 encoding scheme is becoming popular as a slightly more convenient alternative to uuencode.
- [FIXME: Use the yEnc format]
In both uuencode and base64, each group of 3 data bytes is converted to 4 ASCII characters, storing 6 bits of original data in each character. While uuencode stores each 6-bit value as chr(value+32), the base64 encoding uses a character table designed to minimize the risk for errors if the message is to be converted to other character sets. Python’s standard library supports both formats, via the uu and base64 modules, and a low-level support module called binascii.
The uuencode format is line-oriented, and the encoded data starts with a begin line, which also contains the Unix file mode (in octal), and the original filename. Then follows the encoded lines (the first character gives the number of bytes encoded on the rest of the line, and is usually an “M” for a full line of 45 binary bytes), and the encoded block ends with a line containing the word end. Here’s an example:
begin 600 can.jpg M_]C_X `02D9)1@`!``$`4P!3``#__@`752U,96%D(%-Y<W1E;7,L($EN8RX` M_]L`A `#`@("`@(#`@("`P,#`P0(!00$! 0)!P<%" L*# P+"@L+# X2#PP- M$0T+"Q 5$!$3$Q04% P/%A@6%!@2%!03`0,#`P0$! D%!0D3#0L-$Q,3$Q,3 ... typically a few hundred similar lines ... M?E3;Y52UNG1$5E2,`A1QT_7W]SZFL8?"O4N"3C)LBTHEW ?YL<#=SCGMZ=!^ M50M-*NH_*Y3##&WC'TQT_P#U53BN9JQ7*K19J:ZB0PV3Q*(RZ$ML&,G*GM]? =Y#L*S)I9$E9%D8!20,GIS6>'2:5T;Q24I6\@_]FB ` end
The MIME format is a bit different; it uses special message headers to indicate what the message contains, and how it is encoded. If the message header contains a field named MIME-Version, the document is encoded using the MIME specification. We’ll get back to MIME and base64-encoding later in this chapter, when we look closer on how to send and receive images and other documents via electronic mail.
Decoding uuencoded messages
To figure out if a message contains uuencoded data, we need to scan the message body for a line starting with begin, followed by a number and a filename. We can then use the binascii module to convert each line to a chunk of binary data, and write it to a file, or, as in the following example, store it in a list. The getuubody function shown below also returns the filename. If the message is not encoded, this function sets the filename to None, and returns the message body as is.
Example: extract uuencoded data (from messageutils.py)
import regex, string begin = regex.compile("begin [0-9]+ \(.*\)") def getuubody(msg): "Given a uuencoded message, extract and decode the message body" msg.rewindbody() while 1: s = msg.fp.readline() if not s: break if begin.match(s) > 0: # decode uuencoded message body body = [] file = begin.group(1) for s in msg.fp.readlines(): if s[:3] == "end": break try: body.append(binascii.a2b_uu(s)) except: # workaround for broken encoders bytes = (((ord(s[0])-32) & 63) * 4 + 3) / 3 body.append(binascii.a2b_uu(s[:bytes])) return file, string.join(body, "") msg.rewindbody() return None, msg.fp.read()
Note that some encoders sometimes adds extra padding characters to lines containing less than 45 bytes of binary data. In earlier versions of Python, the binascii module raises an exception if it stumbles upon such a line; the above try/except clause works around this problem by explicitly truncating the line to the appropriate length.
[FIXME: explain why uu.py cannot be used: it assumes that the file is already positioned on the begin line, and it doesn’t handle offending encoders well either (this will probably be fixed in binascii in 1.5 final)]
An NNTP Client Library
Creating a client library for the NNTP protocol is a straight-forward task. Again, the SimpleClient takes care of the socket configuration issues, and provides getline and putline primitives.
The code shown here includes a minimal set of commands only; list to get a list of newsgroups available on the server, group to select which group to read, overview to get an overview of all or some messages in a group, and retrieve to read a given message. The overview method uses an NNTP command called XOVER, which is an extension to the original NNTP protocol. Virtually every modern news server supports this command, though, and some news clients won’t work without it. The retrieve method uses either HEAD, BODY, or ARTICLE, to read parts or all of a message. The default is ARTICLE, which reads both headers and body in a single call.
Example: File: NNTPClient.py
from string import * import SimpleClient ARTICLE, HEAD, BODY = tuple(range(3)) class NNTPClient(SimpleClient.SimpleClient): def __init__(self, host, port = 119): # connect SimpleClient.SimpleClient.__init__(self, host, port) s, self.welcome = self.getstatus() if s not in [200, 201, 205]: raise IOError, (s, "NNTP connection error", self.welcome) self.may_post = (s == 200) self.must_login = (s == 205) def close(self): "Quit." try: stat = self.command(None, "QUIT") except IOError: pass # self.destroy() def command(self, ok, *args): self.putline(join(args)) s, m = self.getstatus() if ok and s not in ok: raise IOError, (s, args[0]+" command failed", m) return m def getstatus(self): info = self.getline() return atoi(info[:3]), info def getmessage(self, newline = ""): text = [] while 1: s = self.getline() if s[:1] == ".": s = s[1:] if not s: break text.append(s + newline) return text def _range(self, lo, hi): if hi is None: return str(lo) return "%s-%s" % (lo, hi) # # NNTP commands (subset) def group(self, group): "Select group. Returns number of messages, range, and group name." m = split(self.command([211], "group", group)) self.groupinfo = group, (atoi(m[2]), atoi(m[3])) return (atoi(m[1]), # number of messages (est.) atoi(m[2]), atoi(m[3]), # message number range m[4]) # group name def list(self): "List groups. Returns list of (group, lo, hi, may_post) tuples" self.command([215], "LIST") data = [] for s in self.getmessage(): s = split(s) data.append((s[0], # group name atoi(s[1]), atoi(s[2]),# message number range s[3] in "yY")) # may post return data def overview(self, lo, hi = None): "Get message overview (extension)." 
self.command([224], "XOVER", self._range(lo, hi)) data = [] for s in self.getmessage(): s = split(s, "\t") data.append((atoi(s[0]), # message number s[1], # subject s[2], # from s[3], # date s[4], # message id tuple(split(s[5])), # references atoi(s[6]), # byte count atoi(s[7]))) # line count return data def retrieve(self, msgid, mode = ARTICLE): "Get article (mode argument controls which part)" if mode == HEAD: self.command([221], "HEAD", str(msgid)) elif mode == BODY: self.command([222], "BODY", str(msgid)) else: self.command([220], "ARTICLE", str(msgid)) return self.getmessage("\n")
Messages are returned as a list of strings, where each string ends with a newline. In this way, messages obtained via retrieve looks like messages read from a file using readlines.
An NNTP Robot
The following example uses the NNTPClient module to download messages from a news server. It fetches overview information from the server (including the From and Subject header fields, and size information), passes that information to a user-defined filter function, and downloads messages as indicated by the filter. The messages are stored in files named group-serial.mail. [FIXME: redesign NNTPClient so it returns Article instances, and move the processing into that class.
Example: File: newsrobot.py
# # user configuration HOST = "news.spam.egg" GROUP = "alt.binaries.pictures.bacon" def messagefilter(info): serial, subject, _from, date, msgid, ref, bytes, lines = info # assume everything larger than 10k is an image, but don't # download things larger than 60k return 10000 <= bytes <= 60000 # # main program import NNTPClient import string nntp = NNTPClient.NNTPClient(HOST) count, lo, hi, name = nntp.group(GROUP) # get last message number, if saved try: fp = open(GROUP + ".last") lo = max(lo, string.atoi(fp.readline())+1) fp.close() except (IOError, ValueError): pass # scan whole group # loop over new messages for info in nntp.overview(lo, hi): serial = info[0] if messagefilter(info): print "fetching", info[2], "(%d bytes)" % info[6] message = nntp.retrieve(serial) fp = open("%s-%d.news" % (GROUP, serial), "w") fp.writelines(message) fp.close() nntp.close() # store last message number try: fp = open(GROUP + ".last", "w") fp.write(str(serial) + "\n") fp.close() except IOError: pass
Note that the we store the last message number seen in a file named group.last, to avoid downloading the same messages over and over again. To start all over again, for example if you change the filter, simply remove that file.
[FIXME: instead of storing the raw message to disk, this code should call the getuubody method and store the message body in the “incoming” directory] | http://www.effbot.org/zone/socket-intro-nntp.htm | CC-MAIN-2015-48 | refinedweb | 2,237 | 58.28 |
{- | , runOrRaise, raiseMaybe, module XMonad.ManageHook ) where import XMonad (Query(), X(), withWindowSet, spawn, runQuery, focus) import Control.Monad (filterM) import qualified XMonad.StackSet as W (allWindows) import XMonad.ManageHook {- $usage Import" For detailed instructions on editing your key bindings, see "XMonad.Doc.Extending#Editing_key_bindings". -} -- | 'action' is an executable to be run via 'spawn' if the Window cannot be found. -- Presumably this executable is the same one that you were looking for. runOrRaise :: String -> Query Bool -> X () runOrRaise action = raiseMaybe $ spawn action -- | See 'raiseMaybe'. If the Window can't be found, quietly give up and do nothing. raise :: Query Bool -> X () raise = raiseMaybe $ return () {- | "XMonad.Utils.Run"'s 'runInTerm'): > , ((modm, xK_m), raiseMaybe (runInTerm "-title mutt" "mutt") (title =? "mutt")) -} raiseMaybe :: X () -> Query Bool -> X () raiseMaybe f thatUserQuery = withWindowSet $ \s -> do maybeResult <- filterM (runQuery thatUserQuery) (W.allWindows s) case maybeResult of [] -> f (x:_) -> focus x | http://hackage.haskell.org/package/xmonad-contrib-0.7/docs/src/XMonad-Actions-WindowGo.html | CC-MAIN-2016-07 | refinedweb | 142 | 52.97 |
CornucopiaCornucopia
Cornucopia is a controller for Redis cluster that performs auto-sharding when adding and removing Redis cluster nodes.
This project is originally a fork from
kliewkliew/cornucopia.
OperationsOperations
The following keys for task messages correspond to operations to be performed in Cornucopia.
Add or Remove a NodeAdd or Remove a Node
+master: Add Master node
+slave: Add Slave node
-master: Remove Master node
-slave: Remove Slave node
The value contained in the message is the URI of the node to operate on. See Redis URI and connection details. When adding a node to the cluster, the URI value indicates the Redis cluster node to be added to the cluster. When removing a node from the cluster, the URI value indicates the cluster node to be removed. If the cluster node to be removed is not of the node type (master or slave) indicated in the task message, then the cluster node is converted into that node type by performing a manual failover. Cornucopia does not support removing a slave node from the cluster if the URI value in the task message hosts a master node that is not replicated by a slave node. Technically this is a limitation. That said, it could be considered a best practice to have all master nodes replicated by a slave at any given time in the cluster.
Adding or removing a master node will automatically trigger a cluster reshard event.
New slave nodes will initially be assigned to the master with the least slaves. Beyond that, Redis Cluster itself has the ability to migrate slaves to other masters based on the cluster configuration.
Note that Redis cluster will automatically assign or reassign nodes between master or slave roles, or migrate slaves between masters or do failover. You may see errors due to Redis doing reassignment when the cluster is small. For example, when testing with only two nodes, after adding the second node as a master, the first node can become a slave. If you then try to remove the second node, there will be no masters left. The behaviour is more predictable as more nodes are added to the cluster. It is generally advisable to maintain a cluster with at least three master nodes at all times.
Using Cornucopia as a microserviceUsing Cornucopia as a microservice
Cornucopia can be run as a stand-alone microservice. The interface to this microservice is a HTTP Rest interface.
For example, assume that the micro service is running on HTTP port 9001 on localhost (which is the default). To add a new master node to the cluster on 172.10.0.5:7006, run the following command:
curl -X POST \ \ -H 'content-type: application/json' \ -d '{ "operation": "+master", "redisNodeIp": "redis://172.10.0.5:7006" }'
Using Cornucopia as a library in your applicationUsing Cornucopia as a library in your application
Include Cornucopia in your
build.sbt file:
"com.adendamedia" %% "cornucopia" % "0.6.0". Control messages are sent to Cornucopia using an ActorRef that must be imported.
import com.adendamedia.cornucopia.Library import com.adendamedia.cornucopia.actors.Gatekeeper.{Task, TaskAccepted, TaskDenied}
From within your own AKKA actor you can send a message to Cornucopia. Note that the library requires an implicit Actor System. This actor system can be reused from within your own application.
implicit val system: ActorSystem = ActorSystem() val library: com.adendamedia.cornucopia.Library = new Library val cornucopiaRef = library.ref cornucopiaRef ! Task("+master", "redis://localhost:7006") | https://index.scala-lang.org/adenda/cornucopia/cornucopia/0.6.2?target=_2.11 | CC-MAIN-2021-25 | refinedweb | 570 | 55.95 |
06 July 2011 16:49 [Source: ICIS news]
TORONTO (ICIS)--?xml:namespace>
Compared with May 2010, chemicals and chemical product prices increased by 6.8% year on year, Statistics Canada said.
Prices for plastics and rubber products rose by 1.0% in May on a month-on-month basis, an increase of 1.0% from May last year.
Increases in the chemical, plastics and rubber sectors are part of an overall decline in
Statistics
The agency also said that its raw materials price index fell by 5.2% in the month from April.
Meanwhile, Canadian chemical railcar shipments are up by 11.8% to 289,383 year-to-date to 25 June, according to the latest data from the Association of American Railroads. | http://www.icis.com/Articles/2011/07/06/9475607/canadas-chemical-prices-rise-by-2.1-month-on-month-in-may.html | CC-MAIN-2015-06 | refinedweb | 123 | 68.77 |
popover_location and window title bars
I'm noticing that present()'ing a view with style = 'popover' doesn't seem to take the window title bar into account when popover_location is specified.
I could be getting this wrong, but I'm trying to show the popover with its little arrow pointing at the icon-only button from which it is spawned. The view that button is in is a ui.View presented with style="sheet" on top of the main ui.View of my app. I don't add the popover view as a subview of anything, and even if I do add it as a subview of the sheet, it still shows up in the "wrong" place.
First, I tested where popover_location = (0,0) placed the arrow, and that appears to be the top-left of the sheet view, not the upper-left of the screen. So I figured the coordinates were in the space of the current key window, which in this case is the sheet view. So I tried to get the position of the icon button in coordinates which would be correct for the sheet by doing:
```python
pos = iconButton.center
pos = ui.convert_point(pos, parentView, None)
pos = ui.convert_point(pos, None, sheetView)
```
The idea there was to take the center of the button, which is in parent coordinates, convert it to screen coordinates, and then convert from screen coordinates to sheetView coordinates. But that places it too high: I have to add a value which accounts for the sheetView's title bar height (when it's displayed as a sheet) in order to get the right placement. So that leaves me with 2 questions:
- Is there a way to get the title bar height programmatically? Right now I'm just hardcoding a number.
- Is there a correct way to get the popover_location value which will properly take into account the title bar? The sheetView has no superview() when presented, so I have no way of directly converting the center of the iconButton to a position in the presented sheet window...but perhaps there is a way I'm unaware of.
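The coordinate math being attempted above can be sketched in plain Python. This is a hypothetical stand-in `View` class, not Pythonista's `ui.View`: converting a point from a child view to an ancestor's coordinate space just accumulates the frame origins on the way up the superview chain.

```python
# Pure-Python sketch of point conversion up a view tree.
# View is a hypothetical stand-in, not Pythonista's ui.View.
class View:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y      # origin inside the superview
        self.superview = None

    def add_subview(self, child):
        child.superview = self

def convert_to_ancestor(point, view, ancestor):
    """Convert `point` from `view` coordinates to `ancestor` coordinates."""
    x, y = point
    while view is not ancestor:
        x += view.x
        y += view.y
        view = view.superview
    return (x, y)
```

In this model, the missing title-bar offset is exactly the root view's nonzero y origin inside its wrapper - the value that otherwise ends up hardcoded.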
@shinyformica, no time to look at this deeper now, and no iPad, but did you try @JonB’s view browser to look at the objc views for the popover?
popover works funny in sheet mode. The popover is with respect to the containing viewwrapper's superview, not the window. This might be different on iPhone.
On iPad, this works:
pt=v.objc_instance.convertPoint_toView_(CGPoint(b.center.x, b.center.y), v.objc_instance.superview())
where b is the button, v is root sheet. if b is several levels down, you'd convert to v first.
See the sheet viewheirarchy with viewbrowser -- the highlighted view is v, note it has nonzero y.
i am not sure if there is a reliable way to detect "sheet"... though i recall a thread about this.
@shinyformica I answer obviously too late 😢
import ui from objc_util import * def GetTitleBarHeight(ui_view): vo = ObjCInstance(ui_view) def UpViews(v): sv = v.superview() if sv._get_objc_classname().startswith(b'UINavigation'): return sv.superview() nv = UpViews(sv) if nv: return nv def DownViews(v): for sv in v.subviews(): if sv._get_objc_classname().startswith(b'UINavigationBar'): return sv nb = DownViews(sv) if nb: return nb nv = UpViews(vo) nb = DownViews(nv) if nb: h = nb.size().height else: h = 10 # should be 50, set 10 just for testing return h mv = ui.View() mv.name = 'main view' mv.background_color = 'white' mv.present('full_screen') v = ui.View() v.name = 'view' v.frame = (0,0,500,500) b = ui.Button() b.frame = (20,20,80,80) b.image = ui.Image.named('typb:Grid') def b_action(sender): pv = ui.View() pv.name = 'popover' pv.frame = (0,0,200,200) x = sender.x + sender.width/2 y = sender.superview.titlebar_height + sender.y + sender.height/2 pv.present('popover',popover_location = (x,y)) b.action = b_action v.add_subview(b) v.present('sheet',title_bar_color='yellow') v.titlebar_height = GetTitleBarHeight(v)
pt=v.objc_instance.convertPoint_toView_(CGPoint(b.center.x, b.center.y), v.objc_instance.superview())
Good, good stuff! Works like a charm - I have to do a double conversion, as you mentioned, from the parent of the button to the top-level view which is presented as a sheet:) p = topobjc.convertPoint_toView_(p, topobjc.superview()) return (p.x,p.y)
Anyway, lovely and clear. Thanks @JonB (and @cvp)
You actually should be able to go direct from srcobj to topobjc.superview(). ConvertPoint is associative ..
As long as the coords you pass in are in the frame of the object you call convertPoint on, you are good to go.
Out of curiosity... Are you creating a "help" overlay? One of my many unfinished projects was a menu system with a "help" button that popped up hints when touching a button, or perhaps long tapping a button shows help for that button.
@JonB true enough...double conversion isn't necessary, was just writing fast...here's a simplified version:.superview()) return (p.x,p.y)
This is a popup to display a list of options to the user, so the popover has a TableView subview, and tapping an item in that view dismisses the popover. Working nicely now that it shows up where expected. | https://forum.omz-software.com/topic/5431/popover_location-and-window-title-bars | CC-MAIN-2021-49 | refinedweb | 876 | 68.16 |
Terms defined: Liskov Substitution Principle, attribute, cache, confirmation bias, design by contract, easy mode, layout engine, query selector, signature, z-buffering
You might be reading this as an HTML page, an e-book (which is basically the same thing), or on the printed page. In all three cases, a layout engine took some text and some layout instructions and decided where to put each character and image. We will build a small layout engine in this chapter based on Matt Brubeck's tutorial to explore how browsers decide what to put where.
Our inputs will be a very small subset of HTML and an equally small subset of CSS. We will create our own classes to represent these instead of using those provided by various Node libraries; to translate the combination of HTML and CSS into text on the screen, we will label each node in the DOM tree with the appropriate styles, walk that tree to figure out where each visible element belongs, and then draw the result as text on the screen.
Upside down
The coordinate systems for screens put (0, 0) in the upper left corner instead of the lower left. X increases to the right as usual, but Y increases as we go down, rather than up (). This convention is a holdover from the days of teletype terminals that printed lines on rolls of paper; as Mike Hoye has repeatedly observed, the past is all around us.
How can we size rows and columns?
Let's start on easy mode without margins, padding, line-wrapping, or other complications. Everything we can put on the screen is represented as a rectangular cell, and every cell is either a row, a column, or a block. A block has a fixed width and height:
export class Block { constructor (width, height) { this.width = width this.height = height } getWidth () { return this.width } getHeight () { return this.height } }
A row arranges one or more cells horizontally; its width is the sum of the widths of its children, while its height is the height of its tallest child ():
export class Row { constructor (...children) { this.children = children } getWidth () { let result = 0 for (const child of this.children) { result += child.getWidth() } return result } getHeight () { let result = 0 for (const child of this.children) { result = Math.max(result, child.getHeight()) } return result } }
Finally,
a column arranges one or more cells vertically;
its width is the width of its widest child
and its height is the sum of the heights of its children.
(Here and elsewhere we use the abbreviation
col when referring to columns.)
export class Col { constructor (...children) { this.children = children } getWidth () { let result = 0 for (const child of this.children) { result = Math.max(result, child.getWidth()) } return result } getHeight () { let result = 0 for (const child of this.children) { result += child.getHeight() } return result } }
Rows and columns nest inside one another: a row cannot span two or more columns, and a column cannot cross the boundary between two rows. Any time we have a structure with that property we can represent it as a tree of nested objects. Given such a tree, we can calculate the width and height of each cell every time we need to. This is simple but inefficient: we could calculate both width and height at the same time and cache those values to avoid recalculation, but we called this "easy mode" for a reason.
As simple as it is, this code could still contain errors (and did during development), so we write some Mocha tests to check that it works as desired before trying to build anything more complicated:
import assert from 'assert' import { Block, Row, Col } from '../easy-mode.js' describe('lays out in easy mode', () => { it('lays out a single unit block', async () => { const fixture = new Block(1, 1) assert.strictEqual(fixture.getWidth(), 1) assert.strictEqual(fixture.getHeight(), 1) }) it('lays out a large block', async () => { const fixture = new Block(3, 4) assert.strictEqual(fixture.getWidth(), 3) assert.strictEqual(fixture.getHeight(), 4) }) it('lays out a row of two blocks', async () => { const fixture = new Row( new Block(1, 1), new Block(2, 4) ) assert.strictEqual(fixture.getWidth(), 3) assert.strictEqual(fixture.getHeight(), 4) }) it('lays out a column of two blocks', async () => { const fixture = new Col( new Block(1, 1), new Block(2, 4) ) assert.strictEqual(fixture.getWidth(), 2) assert.strictEqual(fixture.getHeight(), 5) }) it('lays out a grid of rows of columns', async () => { const fixture = new Col( new Row( new Block(1, 2), new Block(3, 4) ), new Row( new Block(5, 6), new Col( new Block(7, 8), new Block(9, 10) ) ) ) assert.strictEqual(fixture.getWidth(), 14) assert.strictEqual(fixture.getHeight(), 22) }) })
> stjs@1.0.0 test /u/stjs > mocha */test/test-*.js "-g" "easy mode" lays out in easy mode ✓ lays out a single unit block ✓ lays out a large block ✓ lays out a row of two blocks ✓ lays out a column of two blocks ✓ lays out a grid of rows of columns 5 passing (7ms)
How can we position rows and columns?
Now that we know how big each cell is we can figure out where to put it. Suppose we start with the upper left corner of the browser: upper because we lay out the page top-to-bottom and left because we are doing left-to-right layout. If the cell is a block, we place it there. If the cell is a row, on the other hand, we get its height and then calculate its lower edge as y1 = y0 + height. We then place the first child's lower-left corner at (x0, y1), the second child's at (x0 + width0, y1), and so on (). Similarly, if the cell is a column we place the first child at (x0, y0), the next at (x0, y0 + height0), and so on.
To save ourselves some testing we will derive the classes that know how to do layout from the classes we wrote before. Our blocks are:
export class PlacedBlock extends Block { constructor (width, height) { super(width, height) this.x0 = null this.y0 = null } place (x0, y0) { this.x0 = x0 this.y0 = y0 } report () { return [ 'block', this.x0, this.y0, this.x0 + this.width, this.y0 + this.height ] } }
while our columns are:
export class PlacedCol extends Col { constructor (...children) { super(...children) this.x0 = null this.y1 = null } place (x0, y0) { this.x0 = x0 this.y0 = y0 let yCurrent = this.y0 this.children.forEach(child => { child.place(x0, yCurrent) yCurrent += child.getHeight() }) } report () { return [ 'col', this.x0, this.y0, this.x0 + this.getWidth(), this.y0 + this.getHeight(), ...this.children.map(child => child.report()) ] } }
and our rows are:
export class PlacedRow extends Row { constructor (...children) { super(...children) this.x0 = null this.y0 = null } place (x0, y0) { this.x0 = x0 this.y0 = y0 const y1 = this.y0 + this.getHeight() let xCurrent = x0 this.children.forEach(child => { const childY = y1 - child.getHeight() child.place(xCurrent, childY) xCurrent += child.getWidth() }) } report () { return [ 'row', this.x0, this.y0, this.x0 + this.getWidth(), this.y0 + this.getHeight(), ...this.children.map(child => child.report()) ] } }
Once again, we write and run some tests to check that everything is doing what it's supposed to:
import assert from 'assert' import { PlacedBlock as Block, PlacedCol as Col, PlacedRow as Row } from '../placed.js' describe('places blocks', () => { it('places a single unit block', async () => { const fixture = new Block(1, 1) fixture.place(0, 0) assert.deepStrictEqual( fixture.report(), ['block', 0, 0, 1, 1] ) }) it('places a large block', async () => { const fixture = new Block(3, 4) fixture.place(0, 0) assert.deepStrictEqual( fixture.report(), ['block', 0, 0, 3, 4] ) }) it('places a row of two blocks', async () => { const fixture = new Row( new Block(1, 1), new Block(2, 4) ) fixture.place(0, 0) assert.deepStrictEqual( fixture.report(), ['row', 0, 0, 3, 4, ['block', 0, 3, 1, 4], ['block', 1, 0, 3, 4] ] ) }) it('places a column of two blocks', async () => { const fixture = new Col( new Block(1, 1), new Block(2, 4) ) fixture.place(0, 0) assert.deepStrictEqual( fixture.report(), ['col', 0, 0, 2, 5, ['block', 0, 0, 1, 1], ['block', 0, 1, 2, 5] ] ) }) })
> stjs@1.0.0 test /u/stjs > mocha */test/test-*.js "-g" "places blocks" places blocks ✓ places a single unit block ✓ places a large block ✓ places a row of two blocks ✓ places a column of two blocks ✓ places a grid of rows of columns 5 passing (8ms)
How can we render elements?
We drew the blocks on a piece of graph paper in order to figure out the expected answers for the tests shown above. We can do something similar in software by creating a "screen" of space characters and then having each block draw itself in the right place. If we do this starting at the root of the tree, child blocks will overwrite the markings made by their parents, which will automatically produce the right appearance (). (A more sophisticated version of this called z-buffering keeps track of the visual depth of each pixel in order to draw things in three dimensions.)
Our pretended screen is just an array of arrays of characters:
const makeScreen = (width, height) => { const screen = [] for (let i = 0; i < height; i += 1) { screen.push(new Array(width).fill(' ')) } return screen }
We will use successive lower-case characters to show each block, i.e., the root block will draw itself using 'a', while its children will be 'b', 'c', and so on.
const draw = (screen, node, fill = null) => { fill = nextFill(fill) node.render(screen, fill) if ('children' in node) { node.children.forEach(child => { fill = draw(screen, child, fill) }) } return fill } const nextFill = (fill) => { return (fill === null) ? 'a' : String.fromCharCode(fill.charCodeAt() + 1) }
To teach each kind of cell how to render itself,
we have to derive a new class from each of the ones we have
and give the new class a
render method with the same signature:
import { PlacedBlock, PlacedCol, PlacedRow } from './placed.js' // [keep] export class RenderedBlock extends PlacedBlock { render (screen, fill) { drawBlock(screen, this, fill) } } export class RenderedCol extends PlacedCol { render (screen, fill) { drawBlock(screen, this, fill) } } export class RenderedRow extends PlacedRow { render (screen, fill) { drawBlock(screen, this, fill) } } const drawBlock = (screen, node, fill) => { for (let ix = 0; ix < node.getWidth(); ix += 1) { for (let iy = 0; iy < node.getHeight(); iy += 1) { screen[node.y0 + iy][node.x0 + ix] = fill } } } // [/keep]
These
render methods do exactly the same thing,
so we have each one call a shared function that does the actual work.
If we were building a real layout engine,
a cleaner solution would be to go back and create a class called
Cell with this
render method,
then derive our
Block,
Row, and
Col classes from that.
In general,
if two or more classes need to be able to do something,
we should add a method to do that to their lowest common ancestor.
Our simpler tests are a little easier to read once we have rendering in place, though we still had to draw things on paper to figure out our complex ones:
it('renders a grid of rows of columns', async () => { const fixture = new Col( new Row( new Block(1, 2), new Block(3, 4) ), new Row( new Block(1, 2), new Col( new Block(3, 4), new Block(2, 3) ) ) ) fixture.place(0, 0) assert.deepStrictEqual( render(fixture), [ 'bddd', 'bddd', 'cddd', 'cddd', 'ehhh', 'ehhh', 'ehhh', 'ehhh', 'eiig', 'fiig', 'fiig' ].join('\n') ) })
The fact that we find our own tests difficult to understand is a sign that we should do more testing. It would be very easy for us to get a wrong result and convince ourselves that it was actually correct; confirmation bias of this kind is very common in software development.
How can we wrap elements to fit?
One of the biggest differences between a browser and a printed page is that the text in the browser wraps itself automatically as the window is resized. (The other, these days, is that the printed page doesn't spy on us, though someone is undoubtedly working on that.)
To add wrapping to our layout engine, suppose we fix the width of a row. If the total width of the children is greater than the row's width, the layout engine needs to wrap the children around. This assumes that columns can be made as big as they need to be, i.e., that we can grow vertically to make up for limited space horizontally. It also assumes that all of the row's children are no wider than the width of the row; we will look at what happens when they're not in the exercises.
Our layout engine manages wrapping by transforming the tree. The height and width of blocks are fixed, so they become themselves. Columns become themselves as well, but since they have children that might need to wrap, the class representing columns needs a new method:
export class WrappedBlock extends PlacedBlock { wrap () { return this } } export class WrappedCol extends PlacedCol { wrap () { const children = this.children.map(child => child.wrap()) return new PlacedCol(...children) } }
Rows do all the hard work. Each original row is replaced with a new row that contains a single column with one or more rows, each of which is one "line" of wrapped cells (). This replacement is unnecessary when everything will fit on a single row, but it's easiest to write the code that does it every time; we will look at making this more efficient in the exercises.
Our new wrappable row's constructor takes a fixed width followed by the children and returns that fixed width when asked for its size:
export class WrappedRow extends PlacedRow { constructor (width, ...children) { super(...children) assert(width >= 0, 'Need non-negative width') this.width = width } getWidth () { return this.width } }
Wrapping puts the row's children into buckets, then converts the buckets to a row of a column of rows:
wrap () { const children = this.children.map(child => child.wrap()) const rows = [] let currentRow = [] let currentX = 0 children.forEach(child => { const childWidth = child.getWidth() if ((currentX + childWidth) <= this.width) { currentRow.push(child) currentX += childWidth } else { rows.push(currentRow) currentRow = [child] currentX = childWidth } }) rows.push(currentRow) const newRows = rows.map(row => new PlacedRow(...row)) const newCol = new PlacedCol(...newRows) return new PlacedRow(newCol) }
Once again we bring forward all the previous tests and write some new ones to test the functionality we've added:
it('wrap a row of two blocks that do not fit on one row', async () => { const fixture = new Row( 3, new Block(2, 1), new Block(2, 1) ) const wrapped = fixture.wrap() wrapped.place(0, 0) assert.deepStrictEqual( wrapped.report(), ['row', 0, 0, 2, 2, ['col', 0, 0, 2, 2, ['row', 0, 0, 2, 1, ['block', 0, 0, 2, 1] ], ['row', 0, 1, 2, 2, ['block', 0, 1, 2, 2] ] ] ] ) })
> stjs@1.0.0 test /u/stjs > mocha */test/test-*.js "-g" "wraps blocks" wraps blocks ✓ wraps a single unit block ✓ wraps a large block ✓ wrap a row of two blocks that fit on one row ✓ wraps a column of two blocks ✓ wraps a grid of rows of columns that all fit on their row ✓ wrap a row of two blocks that do not fit on one row ✓ wrap multiple blocks that do not fit on one row 7 passing (10ms)
The Liskov Substitution Principle
We are able to re-use tests like this because of the Liskov Substitution Principle, which states that it should be possible to replace objects in a program with objects of derived classes without breaking anything. In order to satisfy this principle, new code must handle the same set of inputs as the old code, though it may be able to process more inputs as well. Conversely, its output must be a subset of what the old code produced so that whatever is downstream from it won't be surprised. Thinking in these terms leads to a methodology called design by contract.
What subset of CSS will we support?
It's finally time to style pages that contain text. Our final subset of HTML has rows, columns, and text blocks as before. Each text block has one or more lines of text; the number of lines determines the block's height and the length of the longest line determines its width.
Rows and columns can have attributes just as they can in real HTML, and each attribute must have a single value in quotes. Rows no longer take a fixed width: instead, we will specify that with our little subset of CSS. Together, these three classes are just over 40 lines of code:
export class DomBlock extends WrappedBlock { constructor (lines) { super( Math.max(...lines.split('\n').map(line => line.length)), lines.length ) this.lines = lines this.tag = 'text' this.rules = null } findRules (css) { this.rules = css.findRules(this) } } export class DomCol extends WrappedCol { constructor (attributes, ...children) { super(...children) this.attributes = attributes this.tag = 'col' this.rules = null } findRules (css) { this.rules = css.findRules(this) this.children.forEach(child => child.findRules(css)) } } export class DomRow extends WrappedRow { constructor (attributes, ...children) { super(0, ...children) this.attributes = attributes this.tag = 'row' this.rules = null } findRules (css) { this.rules = css.findRules(this) this.children.forEach(child => child.findRules(css)) } }
We will use regular expressions to parse HTML (though as we explained in , this is a sin). The main body of our parser is:
import assert from 'assert' import { DomBlock, DomCol, DomRow } from './micro-dom.js' const TEXT_AND_TAG = /^([^<]*)(<[^]+?>)(.*)$/ms const TAG_AND_ATTR = /<(\w+)([^>]*)>/ const KEY_AND_VALUE = /\s*(\w+)="([^"]*)"\s*/g const parseHTML = (text) => { const chunks = chunkify(text.trim()) assert(isElement(chunks[0]), 'Must have enclosing outer node') const [node, remainder] = makeNode(chunks) assert(remainder.length === 0, 'Cannot have dangling content') return node } const chunkify = (text) => { const raw = [] while (text) { const matches = text.match(TEXT_AND_TAG) if (!matches) { break } raw.push(matches[1]) raw.push(matches[2]) text = matches[3] } if (text) { raw.push(text) } const nonEmpty = raw.filter(chunk => (chunk.length > 0)) return nonEmpty } const isElement = (chunk) => { return chunk && (chunk[0] === '<') } export default parseHTML
while the two functions that do most of the work are:
const makeNode = (chunks) => { assert(chunks.length > 0, 'Cannot make nodes without chunks') if (!isElement(chunks[0])) { return [new DomBlock(chunks[0]), chunks.slice(1)] } const node = makeOpening(chunks[0]) const closing = `</${node.tag}>` let remainder = chunks.slice(1) let child = null while (remainder && (remainder[0] !== closing)) { [child, remainder] = makeNode(remainder) node.children.push(child) } assert(remainder && (remainder[0] === closing), `Node with tag ${node.tag} not closed`) return [node, remainder.slice(1)] }
and:
const makeOpening = (chunk) => { const outer = chunk.match(TAG_AND_ATTR) const tag = outer[1] const attributes = [...outer[2].trim().matchAll(KEY_AND_VALUE)] .reduce((obj, [all, key, value]) => { obj[key] = value return obj }, {}) let Cls = null if (tag === 'col') { Cls = DomCol } else if (tag === 'row') { Cls = DomRow } assert(Cls !== null, `Unrecognized tag name ${tag}`) return new Cls(attributes) }
The next step is to define a generic class for CSS rules
with a subclass for each type of rule.
From highest precedence to lowest,
the three types of rules we support identify specific nodes via their ID,
classes of nodes via their
class attribute,
and types of nodes via their element name.
We keep track of which rules take precedence over which through the simple expedient of numbering the classes:
export class CssRule { constructor (order, selector, styles) { this.order = order this.selector = selector this.styles = styles } }
An ID rule's query selector is written as
#name
and matches HTML like
<tag id="name">...</tag> (where
tag is
row or
col):
export class IdRule extends CssRule { constructor (selector, styles) { assert(selector.startsWith('#') && (selector.length > 1), `ID rule ${selector} must start with # and have a selector`) super(IdRule.ORDER, selector.slice(1), styles) } match (node) { return ('attributes' in node) && ('id' in node.attributes) && (node.attributes.id === this.selector) } } IdRule.ORDER = 0
A class rule's query selector is written as
.kind and matches HTML like
<tag class="kind">...</tag>.
Unlike real CSS,
we only allow one class per node:
export class ClassRule extends CssRule { constructor (selector, styles) { assert(selector.startsWith('.') && (selector.length > 1), `Class rule ${selector} must start with . and have a selector`) super(ClassRule.ORDER, selector.slice(1), styles) } match (node) { return ('attributes' in node) && ('class' in node.attributes) && (node.attributes.class === this.selector) } } ClassRule.ORDER = 1
Finally, tag rules just have the name of the type of node they apply to without any punctuation:
export class TagRule extends CssRule { constructor (selector, styles) { super(TagRule.ORDER, selector, styles) } match (node) { return this.selector === node.tag } } TagRule.ORDER = 2
We could build yet another parser to read a subset of CSS and convert it to objects, but this chapter is long enough, so we will write our rules as JSON:
{ 'row': { width: 20 }, '.kind': { width: 5 }, '#name': { height: 10 } }
and build a class that converts this representation to a set of objects:
export class CssRuleSet { constructor (json, mergeDefaults = true) { this.rules = this.jsonToRules(json) } jsonToRules (json) { return Object.keys(json).map(selector => { assert((typeof selector === 'string') && (selector.length > 0), 'Require non-empty string as selector') if (selector.startsWith('#')) { return new IdRule(selector, json[selector]) } if (selector.startsWith('.')) { return new ClassRule(selector, json[selector]) } return new TagRule(selector, json[selector]) }) } findRules (node) { const matches = this.rules.filter(rule => rule.match(node)) const sorted = matches.sort((left, right) => left.order - right.order) return sorted } }
Our CSS ruleset class also has a method for finding the rules for a given DOM node. This method relies on the precedence values we defined for our classes in order to sort them so that we can find the most specific.
Here's our final set of tests:
it('styles a tree of nodes with multiple rules', async () => { const html = [ '<col id="name">', '<row class="kind">first\nsecond</row>', '<row>third\nfourth</row>', '</col>' ] const dom = parseHTML(html.join('')) const rules = new CssRuleSet({ '.kind': { height: 3 }, '#name': { height: 5 }, row: { width: 10 } }) dom.findRules(rules) assert.deepStrictEqual(dom.rules, [ new IdRule('#name', { height: 5 }) ]) assert.deepStrictEqual(dom.children[0].rules, [ new ClassRule('.kind', { height: 3 }), new TagRule('row', { width: 10 }) ]) assert.deepStrictEqual(dom.children[1].rules, [ new TagRule('row', { width: 10 }) ]) })
If we were going on,
we would override the cells'
getWidth and
getHeight methods to pay attention to styles.
We would also decide what to do with cells that don't have any styles defined:
use a default,
flag it as an error,
or make a choice based on the contents of the child nodes.
We will explore these possibilities in the exercises.
Where it all started
This chapter's topic was one of the seeds from which this entire book grew (the other being debuggers discussed in ). After struggling with CSS for several years, Greg Wilson began wondering whether it really had to be so complicated. That question led to others, which eventually led to all of this. The moral is, be careful what you ask.
Exercises
Refactoring the node classes
Refactor the classes used to represent blocks, rows, and columns so that:
They all derive from a common parent.
All common behavior is defined in that parent (if only with placeholder methods).
Handling rule conflicts
Modify the rule lookup mechanism so that if two conflicting rules are defined,
the one that is defined second takes precedence.
For example,
if there are two definitions for
row.bold,
whichever comes last in the JSON representation of the CSS wins.
Handling arbitrary tags
Modify the existing code to handle arbitrary HTML elements.
The parser should recognize
<anyTag>...</anyTag>.
Instead of separate classes for rows and columns, there should be one class
Nodewhose
tagattribute identifies its type.
Recycling nodes
Modify the wrapping code so that new rows and columns are only created if needed. For example, if a row of width 10 contains a text node with the string "fits", a new row and column are not inserted.
Rendering a clear background
Modify the rendering code so that only the text in block nodes is shown, i.e., so that the empty space in rows and columns is rendered as spaces.
Clipping text
Modify the wrapping and rendering so that if a block of text is too wide for the available space the extra characters are clipped. For example, if a column of width 5 contains a line "unfittable", only "unfit" appears.
Extend your solution to break lines on spaces as needed in order to avoid clipping.
Bidirectional rendering
Modify the existing software to do either left-to-right or right-to-left rendering upon request.
Equal sizing
Modify the existing code to support elastic columns, i.e., so that all of the columns in a row are automatically sized to have the same width. If the number of columns does not divide evenly into the width of the row, allocate the extra space as equally as possible from left to right.
Padding elements
Modify the existing code so that:
Authors can define a
paddingattribute for row and column elements.
When the node is rendered, that many blank spaces are added on all four sides of the contents.
For example, the HTML
<row>text</row> would render as:
+------+ | | | text | | | +------+
where the lines show the outer border of the rendering.
Drawing borders
Modify the existing code so that elements may specify
border: trueor
border: false(with the latter being the default). If an element's
borderproperty is
true, it is drawn with a dashed border. For example, if the
borderproperty of
rowis
true, then
<row>text</row>is rendered as:
+----+ |text| +----+
Extend your solution so that if two adjacent cells both have borders, only a single border is drawn. For example, if the
borderproperty of
colis
true,
<row><col>left</col><col>right</col></row>is rendered as:
+----+-----+ |left|right| +----+-----+ | https://stjs.tech/layout-engine/ | CC-MAIN-2021-39 | refinedweb | 4,264 | 64.51 |
A tool to draw ascii line chart in terminal
Project description
TermChart
Draw ascii line charts in terminal.
Install
pip3 install termchart
USage
Create a Python file :
import termchart graph = termchart.Graph([1,2,3,2,5,1,-1,-5,-3]) graph.draw()
You can change the plot (default is
+):
graph.setDot('|')
Change the width and height (default cols is 160x50)
graph.setCols(200) graph.setRows(40)
Add values whenever you need it with
addData(<Float>). Here is a full example for a live graph with random values :
import termchart import time import os from random import randint graph = termchart.Graph([]) while True: rand = randint(0, 9) graph.addData(rand) graph.draw() time.sleep(1) os.system('cls' if os.name == 'nt' else 'clear')
Donate
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/termchart/ | CC-MAIN-2021-17 | refinedweb | 154 | 67.15 |
Fried!
Occurrence
I would argue that the frequency of occurrence of words and other linguistic elements is the fundamental measure on which much NLP is based. In essence, we want to answer “How many times did something occur?” in both absolute and relative terms. Since words are probably the most familiar “linguistic elements” of a language, I focused on word occurrence; however, other elements may also merit counting, including morphemes (“bits of words”) and parts-of-speech (nouns, verbs, …).
Note: In the past I've been confused by the terminology used for absolute and relative frequencies; I'm pretty sure it's used inconsistently in the literature. I use count to refer to absolute frequencies (whole, positive numbers: 1, 2, 3, …) and frequency to refer to relative frequencies (rational numbers between 0.0 and 1.0). These definitions sweep certain complications under the rug, but I don't want to get into it right now…
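To make that note concrete, here's a toy illustration (a made-up six-word "corpus", nothing from the Friedman data):

```python
from collections import Counter

words = "the cat sat on the mat".split()

counts = Counter(words)                             # absolute counts: 1, 2, 3, ...
total = sum(counts.values())                        # 6 words in this toy corpus
freqs = {w: c / total for w, c in counts.items()}   # relative frequencies in [0.0, 1.0]

print(counts['the'])   # 2
print(freqs['the'])    # 0.333...
```

The two measures carry the same ranking information; frequencies just rescale the counts so they sum to 1.0, which makes corpora of different sizes comparable.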
Anyway, in order to count individual words, I had to split the corpus text into a list of its component words. I’ve discussed tokenization before, so I won’t go into details. Given that I scraped this text from the web, though, I should note that I cleaned it up a bit before tokenizing: namely, I decoded any HTML entities; removed all HTML markup, URLs, and non-ASCII characters; and normalized white-space. Perhaps controversially, I also unpacked contractions (e.g., “don’t” => “do not”) in an effort to avoid weird tokens that creep in around apostrophes (e.g., “don”+”’”+”t” or “don”+”‘t”). Since any mistakes in tokenization propagate to results downstream, it’s probably best to use a “standard” tokenizer rather than something homemade; I’ve found NLTK’s defaults to be good enough (usually). Here’s some sample code:
from itertools import chain
from nltk import clean_html, sent_tokenize, word_tokenize

# combine all articles into single block of text
all_text = ' '.join([doc['full_text'] for doc in docs])
# partial cleaning as example: this uses nltk to strip residual HTML markup
cleaned_text = clean_html(all_text)
# tokenize text into sentences, sentences into words
tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(cleaned_text)]
# flatten list of lists into a single words list
all_words = list(chain(*tokenized_text))
Now I had one last set of decisions to make: Which words do I want to count? Depends on what you want to do, of course! For example, this article explains how filtering for and studying certain words helped computational linguists identify J.K. Rowling as the person behind the author Robert Galbraith. In my case, I just wanted to get a general feeling for the meaningful words Friedman has used the most. So, I filtered out stop words and bare punctuation tokens, and I lowercased all letters, but I did not stem or lemmatize the words; the total number of words dropped from 2.96M to 1.43M. I then used NLTK's handy FreqDist() class to get counts by word. Here are both counts and frequencies for the top 30 "good" words in my Friedman corpus:
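The count/frequency bookkeeping itself needs nothing fancy; here is a pure-Python sketch of the same idea on a toy word list (the words and numbers here are made up, not taken from the corpus). NLTK's FreqDist behaves much like this Counter, with extra convenience and plotting methods on top:

```python
from collections import Counter

# toy stand-in for the filtered list of "good" words
good_words = ["mr.", "said", "mr.", "israel", "said", "mr.", "world"]

counts = Counter(good_words)                        # absolute counts: 1, 2, 3, ...
total = sum(counts.values())
freqs = {w: c / total for w, c in counts.items()}   # relative frequencies in [0.0, 1.0]

print(counts.most_common(2))  # [('mr.', 3), ('said', 2)]
```

The frequencies necessarily sum to 1.0, which is why the two distributions below differ only in their y-axis scale.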
You can see that the distributions are identical, except for the y-axis values: as discussed above, counts are the absolute number of occurrences for each word, while frequencies are those counts divided by the total number of words in the corpus. It's interesting but not particularly surprising that Friedman's top two meaningful words are mr. and said; he's a journalist, after all, and he's quoted a lot of people. (Perhaps he met them on the way to/from a foreign airport…) Given what we know about Friedman's career (as discussed in (1)), most of the other top words also sound about right: Israel/Israeli, president, American, people, world, Bush, …
On a lark, I compared word counts for the five presidents that have held office during Friedman’s NYT career: Ronald Reagan, George H.W. Bush, Bill Clinton, George W. Bush, and Barack Obama:
- “reagan”: 761
- “bush”: 3582
- “clinton”: 2741
- “obama”: 964
Yes, the two Bushes got combined, and Hillary is definitely contaminating Bill's counts (I didn't feel like doing reference disambiguation on this, sorry!). I find it more interesting to plot conditional frequency distributions, i.e. a set of frequency distributions, one for each value of some condition. So, taking the article's year of publication as the condition, I produced this plot of presidential mentions by year:
Nice! You can clearly see frequencies peaking during a given president's term(s), which makes sense. Plus, they show Friedman's change in focus over time: early on, he covered Middle Eastern conflict, not the American presidency; in 1994, a year in which Clinton was mentioned particularly frequently, Friedman was specifically covering the White House. I'm tempted to read further into the data, such as the long decline of W. Bush mentions throughout (and beyond) his second term possibly indicating his slide into irrelevance, but I shouldn't without first inspecting context. Some other time, perhaps.
I made a few other conditional frequency distributions using NLTK's ConditionalFreqDist() class, just for kicks. Here are two, presented without comment (only hints of a raised eyebrow on the author's part):
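For readers who'd rather not pull in NLTK, a conditional frequency distribution is just a mapping from condition values to per-condition counts. A sketch with made-up (year, word) pairs standing in for the presidential mentions:

```python
from collections import Counter, defaultdict

# made-up (publication year, word) pairs; not actual corpus data
mentions = [(1989, "reagan"), (1989, "bush"), (1994, "clinton"),
            (1994, "clinton"), (1994, "bush"), (2009, "obama")]

cond_dist = defaultdict(Counter)  # condition (year) -> frequency distribution
for year, word in mentions:
    cond_dist[year][word] += 1

print(cond_dist[1994].most_common(1))  # [('clinton', 2)]
```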
These plots-over-time lead naturally into the concept of dispersion.
Dispersion
Although frequencies of (co-)occurrence are fundamental and ubiquitous in corpus linguistics, they are potentially misleading unless one also gives a measure of dispersion, i.e. the spread or variability of a distribution of values. It’s Statistics 101: You shouldn’t report a mean value without an associated dispersion!
Counts/frequencies of words or other linguistic elements are often used to indicate importance in a corpus or language, but consider a corpus in which two words have the same counts, except that the first word occurs in 99% of corpus documents while the second is concentrated in just 5%. Which word is "more important"? And how should we interpret subsequent statistics based on these frequencies if the second word's high value is unrepresentative of most of the corpus?
In the case of my Friedman corpus, the conditional frequency distributions over time (above) visualize, to a certain extent, those terms’ dispersions. But we can do more. As it turns out, NLTK includes a small module to plot dispersion; like so:
from nltk.draw import dispersion_plot

dispersion_plot(all_words, ['reagan', 'bush', 'clinton', 'obama'], ignore_case=True)
To be honest, I'm not even sure how to interpret this plot; for starters, why does Obama appear at what I think is the beginning of the corpus?! Clearly, it would be nice to quantify dispersion as, like, a single, scalar value. Many dispersion measures have been proposed over the years (see [1] for a nice overview), but in the context of linguistic elements, most are poorly known, little studied, and suffer from a variety of statistical shortcomings. Also in [1], the author proposes an alternative, conceptually simple measure of dispersion called DP, for deviation of proportions, whose derivation he gives as follows:
- Determine the sizes s of each of the n corpus parts (documents), which are normalized against the overall corpus size and correspond to expected percentages which take differently-sized corpus parts into consideration.
- Determine the frequencies v with which word a occurs in the n corpus parts, which are normalized against the overall number of occurrences of a and correspond to an observed percentage.
- Compute all n pairwise absolute differences of observed and expected percentages, sum them up, and divide the result by two. The result is DP, which can theoretically range from approximately 0 to 1, where values close to 0 indicate that a is distributed across the n corpus parts as one would expect given the sizes of the n corpus parts. By contrast, values close to 1 indicate that a is distributed across the n corpus parts exactly the opposite way one would expect given the sizes of the n corpus parts.
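The three steps translate almost line-for-line into code. This is my own pure-Python sketch on a toy three-document corpus, not the implementation behind the numbers in this post:

```python
def deviation_of_proportions(word, parts):
    """Gries' DP for `word` over a corpus split into `parts` (token lists)."""
    corpus_size = sum(len(p) for p in parts)
    expected = [len(p) / corpus_size for p in parts]        # step 1: normalized part sizes
    word_total = sum(p.count(word) for p in parts)
    observed = [p.count(word) / word_total for p in parts]  # step 2: where the word occurs
    # step 3: half the sum of pairwise absolute differences
    return sum(abs(o - e) for o, e in zip(observed, expected)) / 2

parts = [["a", "b", "a", "c"], ["a", "b"], ["b", "c"]]
print(deviation_of_proportions("b", parts))  # ~0.17: close to the expected spread
print(deviation_of_proportions("a", parts))  # 0.25: more unevenly distributed
```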
Sounds reasonable to me! (Read the cited paper if you disagree; I found it very convincing.) Using this definition, I calculated DP values for all words in the Friedman corpus and plotted those values against their corresponding counts:
As expected, the most frequent words tend to have lower DP values (be more evenly distributed in the corpus), and vice-versa; however, note the wide spread in DP for a fixed count, particularly in the middle range. Many words are definitely distributed unevenly in the Friedman corpus!
A common (but not entirely ideal) way to account for dispersion in corpus linguistics is to compute the adjusted frequency of words, which is often just frequency multiplied by dispersion. (Other definitions exist, but I won't get into it.) Such adjusted frequencies are by definition some fraction of the raw frequency, and words with low dispersion are penalized more than those with high dispersion. Here, I plotted the frequencies and adjusted frequencies of Friedman's top 30 words from before:
You can see that the rankings would change if I used adjusted frequency to order the words! This difference can be quantified with, say, a Spearman correlation coefficient, for which a value of 1.0 indicates identical rankings and -1.0 indicates exactly opposite rankings. I calculated a value of 0.89 for frequency-ranks vs adjusted frequency-ranks: similar, but not the same! It’s clear that the effect of (under-)dispersion should not be ignored in corpus linguistics. My big issue with adjusted frequencies is that they are more difficult to interpret: What, exactly, does frequency*dispersion actually mean? What units go with those values? Maybe smarter people than I will come up with a better measure.
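To make the ranking comparison concrete, here's a sketch with invented frequency and dispersion values (dispersion on a 0-1 scale where 1 means perfectly even) and a tie-free Spearman implementation; none of these numbers come from the actual corpus:

```python
freq = {"mr.": 0.012, "said": 0.011, "israel": 0.004, "bush": 0.003}
disp = {"mr.": 0.95, "said": 0.25, "israel": 0.90, "bush": 0.85}

adj = {w: freq[w] * disp[w] for w in freq}  # adjusted frequency = freq * dispersion

def spearman(xs, ys):
    """Spearman rho for two tie-free lists of values."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

words = sorted(freq)
rho = spearman([freq[w] for w in words], [adj[w] for w in words])
print(rho)  # 0.8: the under-dispersed "said" drops in the adjusted ranking
```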
Well, I’d meant to include word co-occurrence in this post, but it’s already too long. Congratulations for making it all the way through! :) Next time, then, I’ll get into bigrams/trigrams/n-grams and association measures. And after that, I get to the fun stuff!
[1] Gries, Stefan Th. “Dispersions and adjusted frequencies in corpora.” International journal of corpus linguistics 13.4 (2008): 403-437. | http://bdewilde.github.io/blog/blogger/2013/11/03/friedman-corpus-1-occurrence-and-dispersion/ | CC-MAIN-2017-43 | refinedweb | 1,671 | 51.58 |
#include <LabelMgr.hpp>
Inherits StelModule.
Initialize the LabelMgr object.
Implements StelModule.
Draw user labels.
Reimplemented from StelModule.
Update time-dependent parts of the module.
Implements StelModule.
Defines the order in which the various modules are drawn.
Reimplemented from StelModule.
Create a label which is attached to a StelObject.
Create a label at fixed screen coordinates.
Find out if a label identified by id is presently shown.
Set a label identified by id to be shown or not.
Set text of label identified by id to be newText.
Delete a label by the ID which was returned from addLabel.
Delete all labels. | http://www.stellarium.org/doc/0.10.4/classLabelMgr.html | CC-MAIN-2014-35 | refinedweb | 101 | 62.34 |
Usually I intro with some inane comment like:
There are two types of people - those who love Amarok, and those that don't matter.
But now I get to say:
I use Amarok, as recommended (and generally given fan-boy loving) by Wil Wheaton.
Anyway, I use Amarok (formerly amaroK), and I love that it makes exploring my music fun. My noisy work environment (grr!) means that I'm spending almost all my time listening to music, which has certainly made me appreciate Amarok more. But occasionally I'm summoned from my other, productive world by real-world "needs" like food, drink, the toilet, and having to find out what someone means when there's no spec to consult (grr!).
Being a former systems administrator (and, indeed, a former card-carrying security specialist - the card is now a bookmark...), I lock my console for even the smallest interruption. After the first few hundred interruptions (ie, the first two or three days), I got irritated at locking my screen without having paused my music, then having to unlock it, pause the music, and lock again. So, I wrote something to automatically pause when I lock my screen - I'm using GNOME's screensaver (aka gnome-screensaver) on Ubuntu.
Unlike xscreensaver, it doesn't have a -watch option - you have to listen to dbus events. Hint to gnome-screensaver people - dbus is a nice behind-the-scenes way of doing things, but sometimes it is nice to have a specific way to watch for things. Even if it just runs dbus-monitor with the right commands for you. Let's not forget our Unix heritage...
Getting the pausing working from dbus messages was actually quite simple - I just combined a Perl regex from one source, and Amarok command line options from another, in a simple Python program:
#!/usr/bin/env python
import subprocess
import re

DBUS_MONITOR = ["dbus-monitor", "--session",
                "type='signal',interface='org.gnome.ScreenSaver',member='SessionIdleChanged'"]
PAUSE_AMAROK = ["amarok", "--pause"]
PLAY_AMAROK = ["amarok", "--play-pause"]

screensaver_on = re.compile("boolean true")
screensaver_off = re.compile("boolean false")

def main():
    a = subprocess.Popen(DBUS_MONITOR, bufsize=1, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT, close_fds=True)
    out = a.stdout
    while a.poll() is None:
        line = out.readline()
        if screensaver_on.search(line):
            subprocess.Popen(PAUSE_AMAROK).communicate()
        if screensaver_off.search(line):
            subprocess.Popen(PLAY_AMAROK).communicate()
Simply, dbus-monitor watches the dbus events and delivers the events that are asked for (otherwise all of them), and send them to stdout. When the screen saver turns on, I tell Amarok to pause. When it turns back off, I tell Amarok to unpause. To be utterly random, I used the subprocess module to call dbus-monitor and Amarok's command line.
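One small refactoring worth considering: pull the matching logic out into a pure function, so it can be tested without a live dbus-monitor. A sketch using the same patterns and commands as above (the function name is mine, not from the original script):

```python
import re

SCREENSAVER_ON = re.compile("boolean true")
SCREENSAVER_OFF = re.compile("boolean false")

def command_for(line):
    """Map one line of dbus-monitor output to an Amarok command (or None)."""
    if SCREENSAVER_ON.search(line):
        return ["amarok", "--pause"]
    if SCREENSAVER_OFF.search(line):
        return ["amarok", "--play-pause"]
    return None
```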
Amarok also offers a DCOP interface to tell it what to do and find out what it is doing. Between the dbus and dcop Python modules, we could get rid of all the silly command line stuff. But it works fine now. (And since dbus is replacing DCOP in KDE4, there will almost certainly be an Amarok plugin built-in to do this.)
I also added simple Python daemonising code, stolen from the ActiveState Python Cookbook, so that I can just fire-and-forget it:
def daemonize(func):
    import os
    import sys
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)
    # decouple from parent environment
    os.setsid()
    try:
        pid = os.fork()
        if pid > 0:
            print "Daemon PID %d" % pid
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)
    # start the daemon main loop
    func()

if __name__ == "__main__":
    daemonize(main)
Actually, there are two kinds of people: Those who love Amarok, and those who use systems on which iTunes runs natively. :-) The general concept of "type managers" () is very interesting, though. I think we'll see more of their kind as we head off into metadata-indexed filesystems on ever-bigger hard drives. Oh, yes, and I love Amarok too. (The album, not the software package.)
Why do you use regular expressions for simple substring matching?
This could be easily done with the "in" operator as well:
Also, did you know that there are Python bindings for dbus (though they are probably overkill for the task at hand):
Another why question: why use subprocess and call dbus-monitor instead of python's D-BUS bindings?
A much neater solution (for rhythmbox, of course): | http://nxsy.org/getting-amarok-to-pause-when-the-screen-locks-using-python-of-course | crawl-002 | refinedweb | 712 | 65.22 |
I'm racking my brain to find a way to redirect DEBUGOUT and PRINTF to the UART using newlib on LPCXpresso 7.9.2. I'm using LPCOpen for the lpc_board_ea_devkit_4088 platform from embedded artists Quickstart board.
I'm running code that has .cpp files so I cannot use redlib, so I've switched to newlib, but I for the life of me cannot find a way to override the printf function to be directed to the UART.
I'm running newlib (NoHost).
Examples given say to override the _write function, but they don't seem to work in any meaningful way. I've altered the retarget.h header in my board library as follows, but I'm not sure it takes any effect. All the examples seem to be directed towards redlib in particular (which does work), but don't seem to work with newlib. I'm not entirely convinced that the documentation for the printf stuff is up to date for newlib (referring to this: Using printf()). Anyone have suggestions? Am I missing something basic here? Seems like something so simple shouldn't be that hard.
#if defined( __NEWLIB__ )

/* Include stdio.h to pull in __REDLIB_INTERFACE_VERSION__ */
#include <stdio.h>

#if defined(DEBUG_ENABLE)
#if defined(DEBUG_SEMIHOSTING)
/* Do nothing, semihosting is enabled by default in LPCXpresso */
#endif /* defined(DEBUG_SEMIHOSTING) */
#endif /* defined(DEBUG_ENABLE) */

#if !defined(DEBUG_SEMIHOSTING)
int _write(int iFileHandle, char *pcBuffer, int iLength)
{
#if defined(DEBUG_ENABLE)
    unsigned int i;
    for (i = 0; i < iLength; i++) {
        Board_UARTPutChar(pcBuffer[i]);
    }
#endif
    return iLength;
}

/* Called by bottom level of scanf routine within RedLib C library to read
   a character. With the default semihosting stub, this would read the character
   from the debugger console window (which acts as stdin). But this version reads
   the character from the LPC1768/RDB1768 UART. */
int _read(void)
{
#if defined(DEBUG_ENABLE)
    char c = Board_UARTGetChar();
    return (int) c;
#else
    return (int) -1;
#endif
}
#endif /* !defined(DEBUG_SEMIHOSTING) */
#endif /* defined ( __NEWLIB__ ) */
It looks like the retarget layer provided in the latest LPCOpen package releases (certainly the 2.19 LPCXpresso4337 one) will now work with both Redlib and Newlib printf() functions without needing you to supply a new retarget layer.
You simply need to ensure that you have configured the chip, board and application projects to be built for Newlib, and also that the application project is built for Newlib (Nohost).
Certainly if I do the above, then a simple LPCOpen hello world project I have created and configured for Newlib will send printf output out over the UART/vcom port to a terminal program running on my PC.
Regards,
LPCXpresso Support
So other than changing the C library type to Newlib(Nohost) for your application project, have you done anything else?
You certainly need to make sure that you have reconfigured the LPCOpen chip and board libraries for Newlib, as well as your main application project.
If you need more assistance then please tell us what your target board and MCU are, plus what version of LPCOpen you are using and which example you are trying to change to use Newlib printfs
Also tell us what version of LPCXpresso IDE you are using and post the .map file generated inside the Debug (or Release) directory of your application project when you build it.
Regards,
LPCXpresso Support
I'm using LPCXpresso 4337 board, LPCOpen v2.19 and LPCXpresso IDE v8.22. First just created an empty LPCOpen C Project and retargeted printf using RedLib, everything ok. So I switched to NewLib (nohost) and implemented the _write method but it doesn't work. I tried enabling/disabling the flags DEBUG_SEMIHOSTING, DEBUG_ENABLE, commenting/uncommenting the line #include "retarget.h" on board.c but nothing worked
I'm sending the map file with the project zip.
Hi, I'm trying to retarget printf using NewLib but I'm facing the same problem. I tried to follow the replies but couldn't find an answer... So what should I do to retarget printf? With RedLib it works perfectly, but when I change to NewLib (nohost) it simply ignores the printf call...
okay.
Including board.h fixes the issue and allows the override of _write() to work. But I still need retarget_uart.c to reside within my project folder. If I try to relocate it to the lpc_board_ea_devkit_4088 project, it doesn't override the function. I tried to make a header file with function prototypes for the _write and _read functions but it doesn't seem to take effect. How can I get this behavior to be more like a "library" function rather than something living in my individual project?
Basically my understanding of the situation is like this:
By default running with newlib (nohost) libraries includes some pre-built binaries for the newlib stdlib which includes some default functions for the syscalls. Either the default behavior is to direct these syscalls to the debug console or somewhere there is a set of functions that overrides this behavior. By making my own source file I am overriding these functions to use the UART buffer instead of the debug console. My confusion is basically what determines the precedence of these overridden behavior? I would expect the standard "retarget.h" #define descriptions of the _write function would work as well as the compiled c functions, but it does not. I would also expect it to make a difference whether the .c function is in the board library or the main project. I think part of my confusion is that I don't really know when the "newlib" includes actually occur and what the linker is actually doing.
DEBUG_ENABLE is defined in lpc_board_ea_devkit_4088\board.h. It definitely does something when selecting semihost and not trying to override the function, so I don't think that's the specific issue. I can get it to print to the debug console if I do semi hosting, but to write data to the application using scanf, I still have to use a UART console, which is super annoying to have printf go one place, but scanf to come from another place.
The root project is selected when I try to use the quick settings menu. This behavior seems to be true for every project I select in the workspace. I briefly tried to figure it out, as I was able to select the library headers on a brand new library immediately after creating it, but after setting the library headers one time, the application permanently grays out the options for that project and every other project. After that I gave up on using the quick settings menu to change library headers.
But board.h is not #include'd into retarget_itm.c which means that your _write function does not use the Board_UARTPutChar function as it is #ifdef'd out. Have you tried it?
On further investigation, retarget_itm.c does not include board.h. If it did:
1. DEBUG_ENABLE would be defined
2. The 'implicit declaration' warning on Board_UARTPutChar would disappear.
Conclusion: include board.h in retarget_itm.c
From a very quick look at your project, you do not have DEBUG_ENABLE defined and so _write() does nothing.
P.S. Quick Settings (and all it sub-menus) are all enabled for me. What do you have selected in your workspace when you try to do this?
Okay.
I tried this example too and it still does not work. Did the following: Created the attached .c file for overriding the _write and _read functions. Copied it into the source folder of my project (not the board library). Commented out #include retarget.h in board.c. Uncommented #define DEBUG_SEMIHOSTING.
Switched library between newlib (semihosting), newlib (nohost), newlib (none) and get the following results
newlib (semihosting) - complains there are multiple definition of _read and _write, line 290 external location E:\jenkins-slave\workspace\LibrariesFresco\Lewlib_hostings_newlib_stub_semihost\srt\syscalls.c and does not build
newlib (nohost) - compiles and runs, but any calls to printf disappear into the air and are not put out on the UART or the debug window or the LPCXpresso Console
newlib (none) - does not compile with an unspecified error.
It seems like the linker is still overriding my _write function with something else. It should be noted that for whatever reason I cannot change my library headers from the Quick Settings toolbar. The options are all grayed out. I'm manually changing the project settings for MCU C Compiler ->Miscellaneous, MCU Assembler -> Architecture & Headers, and MCU Linker -> Managed Linker Script.
retarget_uart.c
//*****************************************************************************
// retarget_itm.c - Provides retargeting of C library printf/scanf
// functions via ITM / SWO Trace
//*****************************************************************************
//
// Software that is described herein is for illustrative purposes only
// which provides customers with programming information regarding the
// LPC products. This software is supplied "AS IS" without any warranties of
// any kind, and NXP Semiconductors and its licensor disclaim any and
// all warranties, express or implied, including all implied warranties of
// merchantability, fitness for a particular purpose and non-infringement of
// intellectual property rights. NXP Semiconductors assumes no responsibility
// or liability for the use of the software, conveys no license or rights under
// any patent, copyright, mask work right, or any other intellectual property
// rights in or to any products. NXP Semiconductors reserves the right to make
// changes in the software without notification. NXP Semiconductors also makes
// no representation or warranty that such application will be suitable for the
// specified use without further testing or modification.
//
// Permission to use, copy, modify, and distribute this software and its
// documentation is hereby granted, under NXP Semiconductors' and its
// licensor's relevant copyrights in the software, without fee, provided that it
// is used in conjunction with NXP Semiconductors microcontrollers. This
// copyright, permission, and disclaimer notice must appear in all copies of
// this code.
//*****************************************************************************
#include <stdint.h>
// ******************************************************************
// Cortex-M SWO Trace / Debug registers used for accessing ITM
// ******************************************************************
// CoreDebug - Debug Exception and Monitor Control Register
#define DEMCR (*((volatile uint32_t *) (0xE000EDFC)))
// DEMCR Trace Enable Bit
#define TRCENA (1UL << 24)
// ITM Stimulus Port Access Registers
#define ITM_Port8(n) (*((volatile uint8_t *) (0xE0000000 + 4 * n)))
#define ITM_Port16(n) (*((volatile uint16_t *) (0xE0000000 + 4 * n)))
#define ITM_Port32(n) (*((volatile uint32_t *) (0xE0000000 + 4 * n)))
// ITM Trace Control Register
#define ITM_TCR (*((volatile uint32_t *) (0xE0000000 + 0xE80)))
// ITM TCR: ITM Enable bit
#define ITM_TCR_ITMENA (1UL << 0)
// ITM Trace Enable Register
#define ITM_TER (*((volatile uint32_t *) (0xE0000000 + 0xE00)))
// ITM Stimulus Port #0 Enable bit
#define ITM_TER_PORT0ENA (1UL << 0)
// ******************************************************************
// Buffer used for pseudo-ITM reads from the host debugger
// ******************************************************************
// Value identifying ITM_RxBuffer is ready for next character
#define ITM_RXBUFFER_EMPTY 0x5AA55AA5
// variable to receive ITM input characters
volatile int32_t ITM_RxBuffer = ITM_RXBUFFER_EMPTY;
// ******************************************************************
// Redlib C Library function : __sys_write
// Newlib C library function : _write
//
// Function called by bottom level of printf routine within C library.
// With the default semihosting stub, this would write the
// character(s) to the debugger console window (which acts as
// stdout). But this version writes the character(s) from the Cortex
// M3/M4 SWO / ITM interface for display in the ITM Console.
// ******************************************************************
#if defined (__REDLIB__)
int __sys_write(int iFileHandle, char *pcBuffer, int iLength) {
#elif defined (__NEWLIB__)
int _write(int iFileHandle, char *pcBuffer, int iLength) {
#endif
#if defined(DEBUG_ENABLE)
    unsigned int i;
    for (i = 0; i < iLength; i++) {
        Board_UARTPutChar(pcBuffer[i]);
    }
#endif
    return iLength;
}
#if defined (__REDLIB__)
// ******************************************************************
// Redlib C Library function : __sys_readc
//
// Called by bottom level of scanf routine within RedLib C library
// to read a character. With the default semihosting stub, this
// would read the character from the debugger console window (which
// acts as stdin). But this version reads the character from a buffer
// which acts as a pseudo-interface to the Cortex-M3/M4 ITM.
// ******************************************************************
int __sys_readc(void) {
    int32_t c = -1;
    // check if debugger connected and ITM channel enabled for tracing
    if ((DEMCR & TRCENA) &&
            // ITM enabled
            (ITM_TCR & ITM_TCR_ITMENA) &&
            // ITM Port #0 enabled
            (ITM_TER & ITM_TER_PORT0ENA)) {
        do {
            if (ITM_RxBuffer != ITM_RXBUFFER_EMPTY) {
                // Read from buffer written to by tools
                c = ITM_RxBuffer;
                // Flag ready for next character
                ITM_RxBuffer = ITM_RXBUFFER_EMPTY;
            }
        } while (c == -1);
    }
    return c;
}
// #endif REDLIB __sys_readc()
#elif defined (__NEWLIB__)
// ******************************************************************
// Function _read
//
// Called by bottom level of scanf routine within Newlib C library
// to read multiple characters. With the default semihosting stub, this
// would read characters from the debugger console window (which
// acts as stdin). But this version reads the characters from a buffer
// which acts as a pseudo-interface to the Cortex-M3/M4 ITM.
// ******************************************************************
int _read(void)
{
#if defined(DEBUG_ENABLE)
    char c = Board_UARTGetChar();
    return (int) c;
#else
    return (int) -1;
#endif
}
#endif // NEWLIB _read()
Post your project, so we can see what you have done and what may be the cause of your problems
You might also want take a look at the retargeting example code that we provide for ITM printf:
which should help you to see how to modify the code inside LPCOpen for Newlib.
Regards,
LPCXpresso Support | https://community.nxp.com/t5/LPCXpresso-IDE/printf-redirection-to-UART-using-newlib/m-p/543082 | CC-MAIN-2022-05 | refinedweb | 2,108 | 60.75 |
#include <qmenubar.h>
A menu bar consists of a list of pull-down menu items. You add menu items with insertItem(). For example, assuming that menubar is a pointer to a QMenuBar and filemenu is a pointer to a QPopupMenu, the following statement inserts the menu into the menu bar:
menubar->insertItem( "&File", filemenu );
Items are either enabled or disabled. You toggle their state with setItemEnabled().
There is no need to lay out a menu bar. It automatically sets its own geometry to the top of the parent widget and changes it appropriately whenever the parent is resized.
See also: insertItem(), removeItem(), clear(), insertSeparator(), setItemEnabled(), isItemEnabled(), setItemVisible(), isItemVisible().
Example of creating a menu bar with menu items (from menu/menu.cpp).
In most main window style applications you would use the menuBar() provided in QMainWindow, adding QPopupMenus to the menu bar and adding QActions to the popup menus.
Example (from action/application.cpp).
Menu items can have text and pixmaps (or iconsets), see the various insertItem() overloads, as well as separators, see insertSeparator(). You can also add custom menu items that are derived from QCustomMenuItem.
Menu items may be removed with removeItem() and enabled or disabled with setItemEnabled().
Definition at line 51 of file qmenubar.h. | http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQMenuBar.html | CC-MAIN-2018-22 | refinedweb | 223 | 55.13 |
Allegro - Recent changes
See
changes._tx for changes in earlier versions of Allegro. These lists serve as summaries; the full histories are in the git repository.
Changes from 5.0.10 to 5.0.11 (January 2015)
The main developers this time were: SiegeLord and Peter Wang.
Core:
- Fix OSX backend on OSX 10.10 (lordnoriyuki).
Audio addon:
Fix/avoid all sound examples freezing on OSX with the aqueue driver (Elias Pschernig).
Fix a deadlock in Pulseaudio driver.
Other:
Fix build warnings.
Improve documentation (syntax highlighting).
Changes from 5.0.6 to 5.0.7 (June 2012)
The main developers this time were: Trent Gamblin, Elias Pschernig, Peter Wang.
Core:
Fix ALLEGRO_STATIC_ASSERT collisions from different files included in the same translation unit. Reported by tobing.
Make al_ref_cstr, al_ref_ustr and al_ref_buffer return const ALLEGRO_USTR instead of just an ALLEGRO_USTR (Paul Suntsov).
Make al_ustr_empty_string const correct.
Fix many memory leak/warnings on MacOS X (Pär Arvidsson).
Fix typo preventing get_executable_name from using System V procfs correctly. Reported by Max Savenkov.
Displays:
Add ALLEGRO_FRAMELESS as a preferred synonym for the confusing ALLEGRO_NOFRAME flag.
Rename al_toggle_display_flag to al_set_display_flag, retaining the older name for compatibility.
Set WM_NAME for some window managers (X11).
Graphics:
Force al_create_bitmap to not create oversized bitmaps, to mitigate integer overflow problems.
Removed initial black frame on all Allegro programs.
OpenGL:
Texture should be ‘complete’ (min/mag and wrap set) before glTexImage2D.
Fixed a bug in al_unlock_bitmap where the pixel alignment mistakenly was used as row length.
Fixed typo in names of some OpenGL extension functions.
Display list of OpenGL extensions in allegro.log also for OpenGL 3.0.
Direct3D:
- Fixed a bug in the D3D driver where separate alpha blending was being tested for when it shouldn’t have been (Max Savenkov).
Input:
Monitor /dev/input instead of /dev on Linux for hotplugging joysticks (Jon Rafkind).
Do not permanently change the locale for the X11 keyboard driver. Set LC_CTYPE only, not LC_ALL.
Audio addon:
Fix desynchronization due to inaccurate sample positions when resampling. Thanks to _Bnu for discovering the issue and Paul Suntsov for devising the correct solution.
Fix linear interpolation across audio stream buffer fragments.
Fix two minor memory leaks in the PulseAudio driver.
Image I/O addon:
Improve compatibility of BMP loader. In particular, support bitmaps with V2-V5 headers and certain alpha bit masks.
Fix TGA loader using more memory than necessary. Reported by Max Savenkov.
Font addon:
- Use user set pixel format for fonts.
Native dialogs addon:
Clear mouse state after dialogs or else it gets messed up (OSX).
Fix some warnings in gtk_dialog.c.
Wrap use of NSAlert so it can be run on the main thread with return value.
Examples:
Add ex_resample_test.
ex_audio_props: Add bidir button.
ex_joystick_events: Support hotplugging and fix display of 3-axis sticks.
Add test_driver –no-display flag. (Tobias Hansen)
Other:
Various documentation updates.
Other minor bug fixes.
Fix whatis entries of man pages. (Tobias Hansen)
Changes from 5.0.5 to 5.0.6 (March 2012)
The main developers this time were: Trent Gamblin, Matthew Leverton, Elias Pschernig, Paul Suntsov, Peter Wang.
Core:
- Added al_register_assert_handler.
Graphics:
Added al_draw_tinted_scaled_rotated_bitmap_region.
Added al_reset_clipping_rectangle convenience function.
Added al_get_parent_bitmap.
Fixed a bug in the OpenGL driver when drawing the backbuffer outside the clipping rectangle of the target bitmap.
Make blitting from backbuffer work when using multisampling on Windows/D3D.
Displays:
Set ALLEGRO_MINIMIZED display flag on Windows (Zac Evans).
Don’t generate bogus resize events after restoring minimised window on Windows.
Prevent a deadlock during display creation on X.
Fall back to the ‘old’ visual selection method on X instead of crashing if the ‘new’ visual selection doesn’t work.
Input:
- Use the same logic in set_mouse_xy for FULLSCREEN_WINDOW as was used for FULLSCREEN. (Mac OS X)
Filesystem:
Added al_fopen_slice.
Added al_set_exe_name which allows overriding Allegro’s idea of the path to the current executable.
Make al_get_standard_path(ALLEGRO_TEMP_PATH) treat the value of TMPDIR et al. as a directory name even without a trailing slash. (Unix)
Make stdio al_fopen implementation set proper errno on failure.
Audio addons:
Add mixer gain property and functions.
Improve code to check that DLL symbols are loaded in the acodec addon. The old method was hacky and broke under -O2 using GCC 4.6.1.
Image I/O addon:
- Improved accuracy of un-alpha-premultiplying in the native OSX bitmap loader.
Primitives addon:
Added al_draw_filled_pieslice and al_draw_pieslice.
Added al_draw_elliptical_arc.
TTF addon:
Added al_load_ttf_font_stretch functions (tobing).
Added ALLEGRO_TTF_NO_AUTOHINT font loading flag to disable the Auto Hinter which is enabled by default in newer versions of FreeType (Michał Cichoń).
Clear locked region so pixel borders aren’t random garbage that can be seen sometimes with linear filtering on.
Unlock glyph cache page at end of text_length and get_text_dimensions (jmasterx).
Examples:
Added new examples: ex_audio_chain, ex_display_events, ex_file_slice.
ex_ogre3d: Make it work under Windows (AMCerasoli).
a5teroids: Support gamepads that report small non-zero values for sticks that are at rest.
Other:
Added index to HTML version of the reference manual (Jon Rafkind).
Various documentation updates.
Other minor bug fixes.
Changes from 5.0.2.1 to 5.0.3 (May 2011)
Input:
Fixed keyboard repeat for extended keys on Windows. Added ALLEGRO_KEY_MENU. (torhu)
Make Delete key in Windows send KEY_CHAR event with unichar code 127 (Peter Wang).
Filesystem:
- al_remove_filename returned false even if successful (reported by xpolife).
Graphics:
- On OpenGL ES 1.1, glReadPixels can only read 4 byte pixels (Trent Gamblin).
Font addon:
- Fix a small memory leak when unregistering a handler with al_register_font_loader (Trent Gamblin).
Primitives addon:
- Fix assertion failures when drawing al_draw_ellipse, al_draw_arc, al_draw_rounded_rectangle, al_draw_filled_rounded_rectangle at very small scales (reported by Carl Olsson).
Native dialogs addon:
- gtk: Fix truncated string if the final button contained a non-ASCII character (Peter Wang).
Other:
- Minor build fixes and documentation updates.
Changes from 5.0.2 to 5.0.2.1 (April 2011)
- Fix regression on Windows where the keyboard state was not updated unless the keyboard event source is registered to an event queue.
Changes from 5.0.1 to 5.0.2 (April 2011)
Input:
Fix inverted mouse wheel on X11.
Make unicode field in KEY_CHAR events zero for Fn, arrow keys, etc. for OS X (jmasterx, Peter Hull).
Support ALLEGRO_KEY_PAD_ENTER and detect left/right Alt and Ctrl keys independently on Windows (torhu, Matthew Leverton).
Changes from 5.0.0 RC5 to 5.0.0 (February 2011)
Color addon:
- Use colour names from CSS. This is the same as the previous list but with all grey/gray alternatives.
Documentation:
- Minor documentation updates.
Changes from 5.0.0 RC4 to 5.0.0 RC5 (February 2011)
The main developers this time were: Thomas Fjellstrom, Trent Gamblin, Peter Hull, Matthew Leverton and Peter Wang. Other contributions noted in-line.
System:
- Load allegro5.cfg from the directory containing the executable, not the initial working directory.
Graphics:
Make al_get_monitor_info return success code.
Replace al_set_new_display_adaptor(-1) with a named constant ALLEGRO_DEFAULT_DISPLAY_ADAPTER.
Fix numerous bugs in X mode setting and multi monitor related code, and introduce new xrandr code.
Generate ALLEGRO_EVENT_MOUSE_LEAVE_DISPLAY when mouse leaves OS X window (Evert Glebbeek).
Hide the OS X window while exiting fullscreen window mode, to prevent a hideous misaligned animation from showing.
Fix erroneous assertions in OpenGL backend.
Added a hack which makes OpenGL mode work under Wine for me (Elias Pschernig).
Add support for some al_get_display_options in D3D port.
Keyboard:
Don’t send KEY_CHAR events for modifier and dead keys (with contributions from torhu).
Don’t send KEY_DOWN events for non-physical key events.
osx: Allow unicode entry (single keypresses only).
x11: Set the keycode field in KEY_CHAR events to the code of the last key pressed, as stated in the documentation, even if the char is due to a compose sequence.
x11: Get rid of historic behaviour where the unicode field is always zero when Alt is held down.
Rename ALLEGRO_KEY_EQUALS_PAD to ALLEGRO_KEY_PAD_EQUALS for consistency.
Mouse:
Add al_grab_mouse and al_ungrab_mouse. Implemented on X11 and Windows.
Allow the user to configure a key binding to toggle mouse grabbing on a window.
Support horizontal mouse wheel on Windows (jmasterx).
Calculate Y position for al_set_mouse_xy correctly in OS X windowed mode (X-G).
Use more appropriate CURSOR_LINK cursor on OS X (X-G).
Assign different button IDs for separate touches on iPhone (Michał Cichoń).
iphone: Remove fake mouse move events as they’re unnecessary and can cause problems with user input tracking.
Filesystem:
Clean up al_get_standard_path(): remove SYSTEM_DATA, SYSTEM_SETTINGS, PROGRAM paths; add RESOURCES and USER_DOCUMENTS paths. Use system API calls if possible.
Implement ALLEGRO_USER_DATA_PATH under Linux. Honor XDG_DATA/CONFIG_HOME environment variables.
Fix al_make_directory on Windows due to problems with calls to stat() with trailing slashes.
Native dialogs addon:
Use string arguments to al_create_native_file_dialog() and al_get_native_file_dialog_path() instead of ALLEGRO_PATH.
Enhance the Windows file selector (initial patch by Todd Cope):
Use Windows’ folder selector for ALLEGRO_FILECHOOSER_FOLDER.
Implement patterns.
Display the title of the dialog that the user specified.
Primitives addon:
- Fix changing the D3D blender state without updating the cached state.
TTF addon:
- Align glyphs in ttf font sheets so as to work around problems with forced S3TC compression with some OpenGL drivers (Elias Pschernig).
Examples:
- SPEED: add full screen flag, use standard paths for highscores and data.
Build system:
Check PulseAudio backend will compile before enabling support.
Give a louder warning if FLAC/Vorbis/DUMB compile tests fail.
Other:
Many leaks fixed on OS X.
Minor bug fixes and documentation updates.
Changes from 5.0.0 RC2 to 5.0.0 RC3 (December 2010)
The main developers this time were: Michał Cichoń, Trent Gamblin, Peter Wang.
Graphics:
Honour subbitmap clipping rectangle under OpenGL (Elias Pschernig).
Fix an error in the Direct3D primitives blending.
al_set_new_window_position() did not have an effect for resizable windows on X11.
Fix windows only showing up on first monitor on X11 (Thomas Fjellstrom).
Implement al_get_monitor_info() for iDevices.
Input:
Separate character inputs from physical key down events. This removes unichar and modifiers fields from ALLEGRO_EVENT_KEY_DOWN, and replaces ALLEGRO_EVENT_KEY_REPEAT by ALLEGRO_EVENT_KEY_CHAR. We decided this design flaw was better fixed now than later.
Make Mac OS X keyboard driver properly distinguish key down and key repeat events.
TTF addon:
- Respect ALLEGRO_NO_PREMULTIPLIED_ALPHA flag when loading TTF fonts.
Other:
Fix returning a garbage pointer in maybe_load_library_at_cwd (Vadik Mironov).
Remove dependency on dxguid.
Minor compilation fixes and documentation updates.
Changes from 5.0.0 RC1 to 5.0.0 RC2 (December 2010)
The developers this time were: Trent Gamblin, Elias Pschernig and Peter Wang.
System:
Add al_is_system_installed and hide al_get_system_driver.
Prevent ‘DLL hijacking’ security issue on Windows.
Graphics:
Change to using premultiplied alpha by default. The new bitmap flag ALLEGRO_NO_PREMULTIPLIED_ALPHA was added.
Change the value of ALLEGRO_VIDEO_BITMAP to non-zero.
Change al_get_opengl_version to return a packed integer.
Made al_get_opengl_version return an OpenGL ES version (if using OpenGL ES) rather than an attempted estimate at a desktop GL version.
Added function al_get_opengl_variant that returns either ALLEGRO_DESKTOP_OPENGL or ALLEGRO_OPENGL_ES.
Make al_have_opengl_extension return bool.
Fix OpenGL graphics mode height value on Windows.
Only try to create one Direct3D display at a time.
Make screensaver activate on Windows Vista and above unless inhibited.
Avoid potential divide-by-zeroes when computing the refresh rate in X11 video modes.
Events:
- Delete an incorrect mutex destroy in al_unregister_event_source.
Input:
- Revert the joystick driver used on OS X 10.4 to the pre-hotplugging version, rather than one which contained an earlier attempt at implementing hotplugging. Select the 10.4 or greater joystick driver at runtime.
iPhone:
- Added two iPhone-specific functions: al_iphone_override_screen_scale and al_iphone_program_has_halted.
Image I/O addon:
- Made the iPhone and OSX image loaders not try to correct colors to some arbitrary color space, but instead use the values directly from the image.
Native dialogs addon:
Tolerate null display in al_show_native_file_dialog on Windows.
Make GTK native dialog implementation only call GTK from a single thread.
Define al_show_native_message_box to be usable without installing Allegro.
Primitives addon:
- Make primitives addon compilable again without OpenGL.
Examples:
ex_ttf: Test the monochrome flag.
Work around problems with MSVC and UTF-8 string literals. ex_utf8 is now not built under MSVC.
Don’t use WIN32 executable type on examples that require the console (David Capello).
Other:
- Minor bug fixes and documentation updates.
Changes from 4.9.22 to 5.0.0 RC1 (November 2010)
The developers this time were: Trent Gamblin, Evert Glebbeek, Elias Pschernig, Paul Suntsov, Peter Wang.
Graphics:
Make al_resize_display keep the original resolution (or change back) if it can’t satisfy the user’s request, on Windows.
Do not emit ALLEGRO_DISPLAY_RESIZE events from the Windows and X11 display drivers when using al_resize_display.
[X11] Make al_get_num_display_modes and al_get_display_mode work if the adapter is set to default. Previously there was no way to query the modes of the default monitor.
[X11] Use _NET_WM_STATE_FULLSCREEN hint for “true” fullscreen displays. Enable mouse grabbing in fullscreen modes.
Added ALLEGRO_EVENT_DISPLAY_ORIENTATION and implement it on iOS.
Dennis Busch fixed a problem with displays not showing up on the primary display by default in some dual head setups, on Windows.
Increase the precision of texture coordinate deltas in the software triangle renderer, from floats to doubles.
Remove al_get_frontbuffer(). It wasn’t implemented anywhere.
Implement min/mag filter and mipmap flags for Direct3D.
Input:
Report correct initial mouse position if a display is created with the mouse pointer inside, or if the mouse routines are installed after a display is created (X11, Windows).
John Murphy fixed improperly mapped axes on the Windows joystick driver.
Events:
- Do not register user event sources with the destructor system as it cannot work reliably. User event sources must be destroyed manually.
Filesystem:
Make al_get_fs_entry_name and al_get_current_directory return strings instead of ALLEGRO_PATH.
Make al_make_directory create parent directories as needed.
Fix al_create_fs_entry to not trim the root path “/” down to the empty string with the stdio backend.
Path routines:
Remove al_make_path_absolute and replace it by al_rebase_path.
Remove undocumented behavior of setting a default organization name of “allegro” for all apps.
Correctly return standard paths as directories on OS X.
Threads:
- Rename al_wait_cond_timed to al_wait_cond_until to match al_wait_for_event_until.
Config routines:
- Add a blank line between sections when writing out a config file.
Other core:
Move the Windows event loops back into the same thread as the D3D event loop. It’s a requirement of D3D; otherwise you can get crashes when resetting the device (e.g. when tabbing away from a fullscreen app).
Add some missing standard entries to the OS X menu bar (the “hide”, “hide others” and the window list, mainly).
Audio addon:
Automatically stop sample instances which point to a buffer of a sample which is about to be destroyed with al_destroy_sample.
alsa: Resume properly after suspending.
Image I/O addon:
Make GDI+ support compile cleanly with the headers that come with MinGW package w32api-3.15.
Speed up PNG and BMP saving, and NSImageFromAllegroBitmap loading.
TTF addon:
- Add a flag for loading TTFs without anti-aliasing (ALLEGRO_TTF_MONOCHROME).
Primitives addon:
Fix several failed sub-bitmap related unit tests on Windows.
Made thick outlined triangles look nicer when the triangles are very thin.
Add a debug-only check for primitives addon initialization to aid in code portability.
Examples:
Added example demonstrating the effect of premultiplied alpha.
Make ‘F’ toggle fullscreen window in SPEED (in-game).
Minor improvements to the a5teroids demo.
Documentation:
Many documentation updates.
Add list of contributors and a readme for packagers.
Make make_doc tool build cleanly on MSVC, and work around a problem with recent version of Pandoc on Windows.
Improve styling of PDF output.
Add generated man pages to archives.
Bindings:
- Implemented array types in the Python wrapper.
Changes from 4.9.21 to 4.9.22 (September 2010)
The developers this time included: Michał Cichoń, Trent Gamblin, Evert Glebbeek, Angelo Mottola, Elias Pschernig, Paul Suntsov and Peter Wang.
System:
- Allow the X11 port to initialise without an X server connection.
Graphics:
Fix many bugs with bitmap locking.
Fix many bugs to do with transformations, flipping and clipping.
Fix many bugs to do with sub-bitmaps as source and destination.
Renamed al_draw_[tinted_]rotated_scaled_bitmap to al_draw_[tinted_]scaled_rotated_bitmap to match the parameter order.
Reimplemented bitmap software rendering routines using newly optimised software triangle renderer, formerly in the primitives addon.
Add pixel_size field to ALLEGRO_LOCKED_REGION.
Fix bugs to do with pixel alignment on OpenGL.
Fix OpenGL pixel transfer of 15 bpp formats: Allegro does not care whether the unused bit is set or not, but when transferring to OpenGL it will be interpreted as an alpha bit.
Disabled support for drawing a bitmap into itself.
Changed specification of al_draw_*bitmap to not allow transformation and ignore blending/tinting when the screen itself is being drawn (except when drawn into a memory bitmap).
Allow bitmap regions to be outside the bitmap area in drawing routines.
Added al_add_new_bitmap_flag convenience function.
Added three new bitmaps flags ALLEGRO_MAG_LINEAR, ALLEGRO_MIN_LINEAR, ALLEGRO_MIPMAP. Removed the config settings for linear/anisotropic min/mag filtering. DirectX side not yet updated.
Register destructors for bitmaps, so they will be implicitly destroyed when Allegro is shut down. This was only true for some bitmaps previously.
Don’t allocate memory buffers for video bitmaps when using OpenGL.
Make al_get_opengl_extension_list() return NULL if called on a non-GL display.
Fix al_create_display for OpenGL forward compatible contexts.
Add al_set_current_opengl_context as an explicit way to set the OpenGL context.
Rename al_is_opengl_extension_supported to al_have_opengl_extension.
Produce more accurate/correct color when going from less to more bits per component.
Fix al_set_new_window_position() everywhere.
Avoid potential deadlock if resizing window to the same size on X11.
Fixed turning off vsync in X11.
Added al_is_d3d_device_lost function.
Dynamically load dinput and d3d DLLs on Windows.
Replaced PeekMessage with GetMessage in window event loops for the D3D and WGL drivers (David Capello).
Input:
Added hotplugging support for joysticks on Linux, Windows and OS X.
Added al_reconfigure_joysticks function.
Merged all joystick devices under a single joystick event source.
Removed al_get_joystick_number.
Add al_is_joystick_installed.
The OS X joystick driver was rewritten; it requires OS X 10.5. The older driver still exists for OS X 10.4 and earlier but is in a semi-updated state with regards to hotplugging.
Allow user to override joystick device paths in the config file (Linux).
Fix iphone touch input and clipping for modes other than w=768,h=1024.
Fixed missing mouse movement messages on iPhone on touch-up/down. Also changed how mouse buttons are reported - always as button 1 now.
Config:
Give config iterators proper types instead of void *.
Make al_get_system_config() always return non-NULL if a system driver is installed.
Events:
- Rename al_event_queue_is_empty to al_is_event_queue_empty (with compatibility define).
Timers:
Add al_add_timer_count function.
Rename al_timer_is_started to al_get_timer_started.
Rename al_current_time to al_get_time (with compatibility define).
File I/O:
Add al_open_fs_entry to open a file handle from an FS_ENTRY.
Add al_fclearerr.
Set ALLEGRO_FILEMODE_HIDDEN flag on entries for file names beginning with dot (OS X).
Remove al_is_path_present, al_fs_entry_is_directory, al_fs_entry_is_file (all trivial).
Primitives addon:
Optimised most of the software rendering routines by a lot.
Triangle drawer was skipping pixels in very thin triangles.
Handle lost d3d devices better.
Fix some bugs found during testing.
Image I/O addon:
Fix native image loader on Mac OS X: images that were not 72 dpi would be rescaled to a smaller size.
Added native bitmap saving support for OSX.
Fix jpeg saving when locked region has negative pitch.
Native dialogs addon:
Add Windows and OS X text log implementations.
Add ALLEGRO_FILECHOOSER and ALLEGRO_TEXTLOG types instead of conflating them into ALLEGRO_NATIVE_DIALOG.
Fix race condition in al_open_native_text_log.
Rename al_destroy_native_dialog to al_destroy_native_file_dialog.
Rename al_get_native_dialog_event_source to al_get_native_text_log_event_source.
Speed up text log appending by making the reader/writers asynchronous.
Register destructors for file chooser and text log dialogs.
Fix file chooser on Windows returning multiple selections with slashes appended to filenames. If an initial path was specified then the dialog wouldn’t open at all; fixed.
Let native dialog functions fail gracefully.
Audio addons:
Init destructors even if audio driver fails to install.
Dynamically load dsound DLL on Windows.
Font addons:
Added al_shutdown_ttf_addon.
Prevent SIGSEGV for double-closing a file in the TTF addon if it is not a valid font file.
Make al_grab_font_from_bitmap not cause a segmentation fault if the bitmap is garbage.
Some TTF fonts would not render right at small sizes; fixed.
Make al_destroy_font ignore NULL.
Tests:
Added a test suite (finally).
Add a shell script to produce test coverage results using lcov.
Examples:
Add ex_haiku, an example based on Mark Oates’s Haiku game. Mark generously agreed to let us include it as an Allegro example.
Added a new example ex_joystick_hotplugging.
Added a new example ex_filter.
Make ex_fs_window work on MSVC.
Allow a5teroids to run without audio, or if audio data doesn’t load.
Build system:
Re-added CMake option that allows forced static linking of libFLAC.
Replaced the old iphone xcode project with a cmake iphone toolchain.
Documentation:
- Many updates to the reference manual.
Bindings:
- Added a workaround to the Python wrapper for a Mingw bug.
Changes from 4.9.20 to 4.9.21 (July 2010)
The main developers this time were: Trent Gamblin, Matthew Leverton, Elias Pschernig, Paul Suntsov, Peter Wang.
Graphics:
Fixed the mis-termed “blend color”. There is no more color state.
al_set*_blender functions lose the color parameter.
Added 5 new bitmap drawing functions al_draw_tinted*_bitmap with a color parameter. The parameter is used just like the “blend color” before.
All text drawing functions gain a color parameter and use it like they used the “blend color” before.
Primitive drawing functions previously sometimes (and sometimes not) used the “blend color”. Not any longer.
Make the current blending mode thread-local state instead of per-display state.
Add explicit display arguments to functions which require a display, but don’t require the rendering context to be current.
Make al_set_target_bitmap change the current display as necessary. al_set_target_bitmap(NULL) releases the rendering context.
Add al_set_target_backbuffer as a convenience.
Remove al_set_current_display.
Give each bitmap its own transformation, i.e. every bitmap has a transformation, which is in effect when that bitmap is the target.
Remove sub-bitmap clip-to-parent restriction on create. Add out-of-bounds blitting support to memory bitmaps.
Merge sub-bitmap and parent bitmap clipping; clip source bitmap to (0,0)-(w,h); fix flipping to/from clipped bitmaps.
Made mouse cursors independent of displays. You may create cursors without a display, and you may use a cursor with any display.
Rename al_{set,get}_current_video_adapter to *new_display_adapter for consistency.
Move the new display video adapter and new window position to thread-local state, like other new display parameters. Make al_store_state also save those parameters with ALLEGRO_STATE_NEW_DISPLAY_PARAMETERS.
Rename al_transform_transform to al_compose_transform. Switched the order of parameters in al_compose_transform and al_copy_transform to match the rest of the transform functions.
Made memory bitmap manipulation without a display possible (again?).
Fixed window resizing in D3D driver. Simplify resize-postponing on Windows.
Make al_create_display abort early when the new_display_adapter is greater than the screen count (X11).
Added ALLEGRO_MINIMIZED flag to the X11 port.
Fixed OpenGL version string parsing (bug #3016654).
Other core:
Renamed al_install_timer to al_create_timer, and al_uninstall_timer to al_destroy_timer.
Rename al_{get,set}_{appname,orgname} to *app_name and *org_name.
Fix assertion failure in al_create_mutex_recursive on Windows (spoofle).
Primitives addon:
Made the D3D driver of the primitives addon work with multiple displays. Also made it handle the display being destroyed properly.
Simplified shader recreating on thread destruction when using the primitives addon with D3D.
Avoid double free when shutting down the primitives addon multiple times.
Older Intel cards don’t implement DrawIndexedPrimitiveUP correctly. Altered the D3D code to work around that.
Audio addon:
- Allow setting the DirectSound buffer size via allegro5.cfg.
Image addon:
- Make GDI+ image loader work with MinGW.
Font addon:
- Nicolas Martyanoff added al_get_font_descent/ascent functions which query per-font properties. Previously it was necessary to call al_get_text_dimensions (which now just reports the text dimensions as it should).
Native dialogs addon:
- Add text log window functions (GTK only for now).
Documentation:
Many updates to the reference manual.
Improve styling and add Allegro version to HTML pages.
Separated readme_a5.txt into multiple files, hopefully improving them.
Build system:
Remove INSTALL_PREFIX. Windows users can now use CMAKE_INSTALL_PREFIX to set the install path.
Allow the user to place dependencies in a subdirectory “deps”, which will be automatically searched.
Examples:
Use text log windows in many examples.
Add ex_noframe: test bitmap manipulation without a display.
Bindings:
- Update Python bindings.
Changes from 4.9.19 to 4.9.20 (May 2010)
The developers this time were: Thomas Fjellstrom, Evert Glebbeek, Matthew Leverton, Milan Mimica, Paul Suntsov, Trent Gamblin, Elias Pschernig, Peter Wang. With significant contributions from Michał Cichoń.
Core:
Add al_malloc, al_free, et al. These are now used consistently throughout Allegro and its addons.
Replace al_set_memory_management_functions by a simpler function, al_set_memory_interface.
Renamed some D3D/Windows specific functions to follow the al_{verb}_{stuff} convention.
Graphics:
Move image I/O framework to core, i.e. al_load_bitmap, al_save_bitmap and bitmap file type registration. Image codecs remain in allegro_image.
Added simple display capability queries to al_get_display_option: ALLEGRO_MAX_BITMAP_SIZE, ALLEGRO_SUPPORT_NPOT_BITMAP, ALLEGRO_CAN_DRAW_INTO_BITMAP, ALLEGRO_SUPPORT_SEPARATE_ALPHA. (OpenGL only for now)
Fix in OpenGL 3.0 context creation.
Make the extensions mechanism compatible with OpenGL version >= 3. Declared symbols needed by OpenGL 3.2 and 3.3 and brought OpenGL extensions up to date.
Fix an assertion in _al_draw_bitmap_region_memory so it does not trigger when source and destination are the same bitmap.
Fix some locking issues by setting GL_PACK_ALIGNMENT and GL_UNPACK_ALIGNMENT before reading/writing pixels.
Partial implementation of ALLEGRO_FULLSCREEN_WINDOW on OS X (Snow Leopard, probably Leopard).
Started X11 fullscreen support (resolution switching).
Fix handling of X11 size hints.
Fixed a deadlock related to fullscreen windows under X11 caused by using a nested lock for a condition variable.
Use _NET_WM_ICON to set icon on X11 instead of XSetWMHints.
Get the iPhone OpenGL version more properly. Only use separate blending on iPhone with OpenGL ES 2.0+.
Release the splash view and windows on iPhone, which makes backgrounding Allegro apps on OS 4.0 work.
Updated the iPhone port for the iPad (only tested in the simulator).
Input:
Disabled Raw Input code in Windows. Mouse events now reflect system cursor movements even in fullscreen mode.
Prevent late WM_MOUSELEAVE notifications from overriding mouse state display field (Windows).
Update pollable mouse state with axes events as well as button events on iPhone.
Filesystem:
- Made the filesystem entry functions work under Windows even if the name passed to al_create_fs_entry has a trailing slash or backslash.
Config routines:
Add al_{load,save}_config_file_f.
Reorder al_save_config_file* arguments to match al_save_bitmap and al_save_sample.
Optimise config routines to work well for thousands of keys/sections.
Image addon:
Added a GDI+ implementation of the image codecs, which will be used in favour of libjpeg/libpng if Allegro is compiled with MSVC. Then allegro_image will not require JPEG/PNG DLLs at runtime.
Removed format specific image functions.
Fixed bug in native png loader on iphone: was using the source color space instead of the target color space which made it fail whenever they differed (alpha-less paletted pictures).
Add an autorelease pool around iphone native image loading to stop memory leaks.
Font addons:
Sever the tie between allegro_font and allegro_image. The user needs to initialise the image addon separately now.
Rename al_load_ttf_font_entry to al_load_ttf_font_f.
Fixed problem with glyph precision after applying transformations in the ttf addon.
Primitives addon:
Added al_init_primitives addon function. This is now required.
Removed ALLEGRO_PRIM_COLOR; ALLEGRO_COLOR can now be used where it was required.
v texture coordinates were off for OpenGL non-power-of-two textures.
Free the vertex cache in al_destroy_display on X11.
Added the dummy vertex shader support to D3D driver of the primitives addon. Without this, custom vertices either resulted in warnings or outright crashes on some systems.
Bring D3D driver up to speed a little bit: transformations now work properly with sub-bitmap targets; the half-pixel offset now properly interacts with transformations; al_set_target_bitmap does not clear the transformation; the proper transformation is set at display creation.
Cull the primitives that are completely outside the clipping region.
Scale the numbers of vertices for the curvy primitives with the scale of the current transformation.
Audio addon:
Remove driver parameter from al_install_audio.
Rename al_get_depth_size to al_get_audio_depth_size.
Rename al_get_audio_stream_buffer to al_get_audio_stream_fragment.
Many improvements to AQueue driver.
Audio codecs:
Add MOD/S3M/XM/IT file support, using the DUMB library.
Revert to a monolithic allegro_acodec addon, i.e. remove separate allegro_flac, allegro_vorbis addons. WAV file support is in allegro_acodec.
Implement DLL loading for FLAC/Vorbis/DUMB on Windows. allegro_acodec will load the DLL at runtime to enable support for that format. If your program does not require said format, you don’t need to distribute the DLL.
Remove format-specific loader/saver audio codec functions.
Make acodec loaders have consistent file closing behaviour.
Optimised wave file loading.
Examples:
- Make SPEED port run acceptably on graphics drivers without FBOs.
Documentation:
Added documentation for the public Direct3D specific functions.
Documented ALLEGRO_OPENGL_3_0 and ALLEGRO_OPENGL_FORWARD_COMPATIBLE.
Other:
- Many bug and documentation fixes.
Changes from 4.9.18 to 4.9.19 (April 2010)
The main developers this time were: Milan Mimica, Trent Gamblin, Paul Suntsov, Peter Wang. Other contributions from: Evert Glebbeek and Shawn Hargreaves.
Graphics:
Implemented support for transformations for memory bitmaps.
Transformations now work properly when the target bitmap is a sub-bitmap in OpenGL (still broken in D3D). Also fixed OpenGL bitmap drawing in the same scenario (it used to always revert to software drawing).
Use the memory drawers when the source bitmap is the backbuffer with the rotated/scaled bitmaps.
Make al_put_pixel clip even if the bitmap is locked, which was the reason why software primitives were not clipping.
Added al_put_blended_pixel, the blended version of al_put_pixel.
Sub bitmaps of sub bitmaps must be clipped to the first parent.
Don’t clear the transformation when setting the target bitmap in OpenGL.
Implemented ALLEGRO_NOFRAME and ALLEGRO_FULLSCREEN_WINDOW in WGL.
Set the ALLEGRO_DISPLAY->refresh_rate variable for fullscreen modes under D3D.
Make d3d_clear return immediately if the display is in a lost state.
Rewrote the function that reads the OpenGL version so it works for previously unrecognised versions, and future versions.
Check for framebuffer extension on iPhone properly.
Fixed locking bugs on iPhone. allegro_ttf works now.
Input:
Removed al_set_mouse_range.
Don’t call al_get_mouse_state if the mouse driver isn’t installed (Windows).
Send events even when the mouse cursor leaves the window, while any buttons are held down (Windows and Mac OS X; X11 already did this).
Allow mouse presses and accelerometer data simultaneously. (iPhone)
File I/O:
Optimise al_fread{16,32}* by using only one call to al_fread each.
Optimise al_fgetc() for stdio backend.
Path:
- Fix an infinite loop in _find_executable_file when searching for the executable on the PATH (Alan Coopersmith).
Primitives addon:
Made the software driver for the primitives addon check for blending properly. Also, fixed two line shaders.
Made the D3D driver thread-safe. The whole addon should be thread-safe now.
The addon now officially supports 3D vertices, even though the software component can’t draw them yet.
Changed the way the primitives addon handles the OpenGL state (fixes a few bugs and makes life easier for raw-OpenGL people).
Image addon:
Optimised BMP, PCX, TGA loaders.
Fix loading 16-bit BMP files.
Fix loading grayscale TGA images.
Nial Giacomelli fixed a bug where images could be corrupt using the native Apple image loader (iPhone).
Audio addon:
Add al_is_audio_installed.
Fix al_attach_sample_instance_to_mixer for int16 mixers.
Implement attaching an INT16 mixer to another INT16 mixer.
Handle conversion when an INT16 mixer is attached to a UINT16 voice.
Build system:
Add an option to disable Apple native image loader (iPhone and OS X).
Add ttf addon target to iPhone xcode project.
Examples:
- Special contribution from Shawn “the Progenitor” Hargreaves.
Changes from 4.9.17 to 4.9.18 (March 2010)
The main developers this time were: Trent Gamblin, Elias Pschernig, Evert Glebbeek, Peter Wang. Other contributions from: Milan Mimica, Paul Suntsov, Peter Hull.
Graphics:
Fixed broken drawing into memory bitmaps as access to the global transformation required an active display. Now both transformation and current blending mode are stored in the display but provisions are made for them to also work if the current thread has no display.
Fixed a bunch of clipping problems with OpenGL, especially with sub-bitmaps.
Fix bug in OpenGL FBO setup when the target bitmap is the sub-bitmap of a bitmap with an FBO.
Fixed crash in al_get_num_display_modes under OSX 10.5.
Fixed some problems in _al_convert_to_display_bitmap that caused problems in WGL FS display resize.
Fixed al_set_current_display(NULL) on WGL.
Added subtractive blending. al_set_blender() takes another parameter.
Added ALLEGRO_FULLSCREEN_WINDOW display flag (X11 and D3D for now).
Allow changing ALLEGRO_FULLSCREEN_WINDOW with al_toggle_display_flag.
Figured out how to switch display modes using Carbon on OS X 10.6.
Stop the OpenGL driver on iPhone from changing the currently bound FBO behind our back when locking a bitmap.
Prevent screen flicker at app startup by simulating the splash screen (iPhone).
Input:
Added “pressure” field to the mouse event struct and mouse state, which can be used with pressure sensitive pointing devices such as tablet styluses (currently OS X only).
Report scroll ball “w” position in mouse event struct, on OS X.
Removed OS X 10.1 specific code from mouse driver. We don’t support OS X 10.1 any more.
Fix building of Linux joystick driver on some systems.
Threads:
Fix a problem when al_join_thread() is called immediately after al_start_thread(). The thread could be joined before the user’s thread function starts at all.
Fix a possible deadlock with al_join_thread() on Windows (thanks to Michał Cichoń for the report).
Fix some error messages running threading examples on OS X.
Other core:
Added version check to al_install_system.
Rename al_free_path to al_destroy_path for consistency.
Make it possible to have an empty organization name with al_set_org_name().
Changed implementation of AL_ASSERT to use POSIX-standard assert instead.
Removed al_register_assert_handler.
Fix timer macros which did not parenthesize their arguments.
Make stricmp, strlwr, strupr macros conditionally defined.
Audio addon:
Rename al_attach_sample_to_mixer to al_attach_sample_instance_to_mixer.
Fix a premature free() when detaching samples and other audio objects.
Fix mixers attaching to mixers.
Pass correct number of samples to mixer postprocess callback.
AudioQueue code was not compiled even though version requirements may have been met (OS X).
Primitives addon:
Make high-level primitives functions thread safe. (note: the DirectX driver is not yet thread safe)
Fix a bug causing crashes on Windows 7 when using the primitives addon and Direct3D (Invalid vertex declarations were being used).
Fixed another issue with primitives drawing to memory bitmaps.
Hopefully fix the bitmap clipping bugs, and make the D3D and OGL/software outputs near-identical again.
Image addon:
Added a “native loader” for MacOS X that uses the NSImage bitmap loading functions. In addition to .png and .jpg, this allows us to read a whole zoo of image formats (listed in allegro.log).
Add native support for tif, jpg, gif, png, BMPf, ico, cur and xbm formats under iPhone.
Fixed an over-zealous ASSERT() that disallowed passing NULL to al_register_bitmap_loader() despite this being an allowed value.
Avoid using a field which is deprecated in libpng 1.4.
Color addon:
- Make al_color_name_to_rgb return a bool.
Native dialogs addon:
- Fixed some erratic behaviour and crashes on OS X.
Build system:
Set VERSION and SOVERSION properties on targets to give Unix shared libraries proper sonames. e.g. liballegro[_addon].so.4.9, liballegro[_addon].4.9.dylib
Static libraries are now named without version number suffixes to minimise the differences with the shared libraries, which no longer have the versions in their base names. e.g. liballegro[_addon]-static.a, allegro[_addon]-static.lib
Windows import libraries are also named without version suffixes, e.g. liballegro[_addon].a, allegro[_addon].lib
DLLs are named with a short version suffix, not the full version. e.g. allegro-4.9.dll instead of allegro-4.9.18.dll
Add support for Mac OS X frameworks (untested), which are enabled with WANT_FRAMEWORKS and WANT_EMBED. There is one framework per addon.
Search for static OGG/Vorbis libraries built with MSVC named libogg_static.lib, etc.
Updated iPhone XCode project.
Examples:
ex_mixer_pp: New example to test mixer postprocessing callbacks.
ex_threads: Make it more visually interesting and test out per-display transformations.
ex_ogre3d: New example demonstrating use of Ogre graphics rendering alongside Allegro (currently GLX only). Commented out in the build system for now.
ex_fs_window: New example to test ALLEGRO_FULLSCREEN_WINDOW flag and al_toggle_display_flag.
ex_blend2: Updated to test subtractive blending, including scaled/rotated blits and the primitives addon.
ex_mouse_events: Show “w” field.
ex_prim: Added possibility to click the mouse to advance screens (for iPhone).
ex_vsync: Display config parameters and warning.
ex_gldepth: Make the textures appear again though we’re not sure why they disappeared.
Documentation:
Many documentation updates.
Add which header file and which library to link with for each page of the reference manual.
Minor improvements to HTML styling and man page output.
Changes from 4.9.16 to 4.9.17 (February 2010)
The main developers this time were: Trent Gamblin, Elias Pschernig, Evert Glebbeek, Paul Suntsov, Peter Wang.
Core:
Removed END_OF_MAIN() everywhere.
For MSVC, we pass a linker option through a #pragma.
On Mac OS X, we rename main() and call it from a real main() function in the allegro-main addon. The prototype for main() for C++ applications should be "int main(int, char **)", or the code will not compile on OS X. For C, either of the normal ANSI forms is fine.
#define ALLEGRO_NO_MAGIC_MAIN disables the #pragma or name mangling, so you can write a WinMain() or use al_run_main() yourself.
Graphics:
Fixed a bug in the OpenGL driver where al_draw_bitmap() wouldn’t handle blitting from the back buffer.
Changing the blending color now works with deferred drawing (Todd Cope).
Avoid some problems with window resizing in Windows/D3D.
Added al_get_d3d_texture_position.
Fixed bug under X11 where al_create_display() would always use the display options from the first al_create_display() call.
Properly implemented osx_get_opengl_pixelformat_attributes().
Fixed automatic detection of colour depth on OS X.
Fixed al_get_num_display_modes() on Mac OS X 10.6.
Removed al_get_num_display_formats, al_get_display_format_option, al_set_new_display_format functions as they can’t be implemented on OSX/iPhone/GPX ports (and were awkward to use).
Replaced al_toggle_window_frame function with a new function al_toggle_display_flags.
al_load_bitmap() and al_convert_mask_to_alpha() no longer reset the current transformation.
Add a minimize button to all non-resizable windows on Windows.
The wgl display switch-in/out vtable entries were swapped (Milan Mimica).
Input:
Some keycodes were out of order in src/win/wkeyboard.c
Fixed mouse range after resizing window on Windows.
Fixed (or worked around) a joystick axis detection problem on Mac OS X.
Change timer counts from ‘long’ to ‘int64_t’.
File I/O:
- Remove `ret_success’ arguments from al_fread32be/le.
allegro-main addon:
- Added an “allegro-main” addon to hold the main() function that is required on Mac OS X. This way the user can opt out of it.
Primitives addon:
Added support for sub-bitmap textures in OpenGL driver.
Added support for sub-bitmap textures in D3D driver. Made D3D sub-bitmaps work better with user D3D code.
Audio addons:
Changed the _stream suffix to _f in the audio loading functions.
Added the stream versions of loading functions for wav, ogg and flac.
Rename audio I/O functions to al_load_{format}, al_save_{format}, al_load_{format}_f and al_save_{format}_f.
Added al_load_sample_f, al_save_sample_f, al_load_audio_stream_f and the related functions.
Fixed a bug where al_save_sample was improperly handling the extension.
al_drain_audio_stream would hang on an audio stream in the ‘playing’ state (the default) which wasn’t attached to anything.
Fixed a potential deadlock on destroying audio streams by shutting down the audio driver.
Comment out PA_SINK_SUSPENDED check, which breaks the PulseAudio driver, at least on Ubuntu 9.10.
Replace unnecessary uses of `long’ in audio interfaces.
Image addons:
Fixed return values of al_save_bmp_f and al_save_pcx_f being ignored.
Changed the _stream suffix to _f in the image loading functions.
TTF addon:
- Drawing TTF fonts no longer resets the current transformation.
Build system:
Add the CMake option FLAC_STATIC, required when using MSVC with a static FLAC library.
Link with zlib if linking with PhysicsFS is not enough.
Updated iPhone project files.
Documentation:
- Many documentation updates.
Examples:
- ex_display_options: Added mouse support, query current display settings, display error if a mode can’t be set.
Bindings:
Made the Python wrapper work under OSX.
Added a CMake option to build the Python wrapper.
Added al_run_main() mainly to support the Python wrapper on OSX.
Changes from 4.9.15.1 to 4.9.16 (November 2009)
The main developers this time were: Trent Gamblin and Paul Suntsov.
Graphics:
Fixed clipping of the right/bottom edges for the software primitives.
Enable sub-pixel accuracy for rotated blitting in software.
Made the D3D output look closer to the OGL/Software output.
OpenGL driver now respects the ‘linear’ texture filtering configuration option. Anisotropic is interpreted as linear at this time.
Added deferred bitmap drawing (al_hold_bitmap_drawing).
Made the font addons use deferred bitmap drawing instead of the primitives addon, removing the link between the two addons.
Changed al_transform_vertex to al_transform_coordinates to make the function more versatile.
Transferred transformations from the primitives addon into the core. Added documentation for that, as well as a new example, ex_transform. Transformations work for hardware accelerated bitmap drawing (including fonts), but the software component is not implemented yet. Also fixed some bugs in the transformation code.
Increase performance of screen->bitmap blitting in the D3D driver.
Fixed a strange bug with textured primitives (u/v repeat flags were being ignored on occasion).
Added ALLEGRO_VIDEO_BITMAP for consistency.
Input:
- Work around a memory leak in iPhone OS that occurs when the accelerometer is on during touch input.
Other core:
- Don’t #define true/false/bool macros in C++.
Audio addon:
- Some minor cleanups to the Audio Queue driver.
Changes from 4.9.15 to 4.9.15.1 (October 2009)
- Fixed a problem building on MinGW (dodgy dinput.h).
Changes from 4.9.14 to 4.9.15 (October 2009)
The main developers this time were: Trent Gamblin, Elias Pschernig, Matthew Leverton, Paul Suntsov, Peter Wang.
Core:
Set the initial title of new windows to the app name.
Add al_set/get_event_source_data.
Make locking work on bitmaps that didn’t have an FBO prior to the call on iPhone.
Make al_get_opengl_fbo work on iPhone.
Made iPhone port properly choose a visual (so depth buffer creation works).
Font addon:
- Improved drawing speed for longish strings. The font addon now depends on the primitives addon.
Primitives addon:
Made the ribbon drawer handle the case of extremely sharp corners more gracefully.
Make al_draw_pixel use blend color in the D3D driver.
Added POINT_LIST to the primitive types.
Various fixes for the D3D driver: fixed line loop drawing; made the indexed primitives a little faster; added workarounds for people with old/Intel graphics cards.
Fall back to software if given a memory bitmap as a texture.
Removed OpenGL state saving code; it was causing massive slowdown when drawing. Also removed glFlush for the same reason.
Audio addon:
Added PulseAudio driver.
Support AudioQueue driver on Mac OS X.
Add al_uninstall_audio exit function.
Added ALLEGRO_EVENT_AUDIO_STREAM_FINISHED event to signify when non-looping streams made with al_load_audio_stream reach the end of the file.
Fixed a deadlock when destroying voices.
Handle underruns in the r/w ALSA updater.
Minor change to the ALSA driver to improve compatibility with PulseAudio.
Documentation:
- Replaced awk/sh documentation scripts with C programs.
Changes from 4.9.13 to 4.9.14 (September 2009)
The main developers this time were: Trent Gamblin, Elias Pschernig, Paul Suntsov, Peter Wang. Other contributions from: Evert Glebbeek, Matthew Leverton.
Ports:
- Elias Pschernig and Trent Gamblin started an iPhone port.
Graphics:
Added al_get_opengl_texture_size and al_get_opengl_texture_position functions.
Try to take into account GL_PACK_ALIGNMENT, GL_UNPACK_ALIGNMENT when locking OpenGL bitmaps.
Fixed all (hopefully) conversion mistakes in the color conversion macros.
Sped up memory blitting, which was using conversion even with identical formats (in some cases).
Make al_set_current_display(NULL); unset the current display.
Added ALLEGRO_LOCK_READWRITE flag for al_lock_bitmap (in place of 0).
Fixed window titles which contain non-ASCII characters in X11.
Added OES_framebuffer_object extension.
Input:
Added a lot more system mouse cursors.
Renamed al_get_cursor_position to al_get_mouse_cursor_position.
Prevent Windows from intercepting ALT for system menus.
Filesystem:
Make the path returned by al_get_entry_name() owned by the filesystem entry so the user doesn’t need to free it manually.
Renamed the filesystem entry functions, mainly to include “fs_entry” in their names instead of just “entry”.
Reordered and renamed ALLEGRO_FS_INTERFACE members.
Make al_read_directory() not return . and .. directory entries.
Renamed al_create_path_for_dir to al_create_path_for_directory.
Added al_set_standard_file_interface, al_set_standard_fs_interface.
Events:
- Exported ALLEGRO_EVENT_TYPE_IS_USER.
Threads:
- Added a new function al_run_detached_thread.
Other core:
Put prefixes on register_assert_handler, register_trace_handler.
Added functions to return the compiled Allegro version and addon versions.
Added al_ prefix to fixed point routines and document them.
Added al_get_system_config().
Renamed al_system_driver() to al_get_system_driver().
Added 64-bit intptr_t detection for Windows.
Added work-around to make OS X port compile in 64 bit mode.
Addons:
- Renamed addons from a5_* to allegro_*.
Image addon:
Renamed the IIO addon to “allegro_image”.
Renamed *_entry functions that take ALLEGRO_FILE * arguments to *_stream.
Fixed off-by-one error in greyscale JPEG loader.
Audio addons:
Renamed the kcm_audio addon to “allegro_audio”.
Renamed ALLEGRO_STREAM and stream functions to ALLEGRO_AUDIO_STREAM and al_*audio_stream*.
Renamed al_stream_from_file to al_load_audio_stream.
Added int16 mixing and configurable frequency and depth for default mixer/voice (see configuration file).
Fixed FLAC decoding and added FLAC streaming support.
Changed the function signature of al_get_stream_fragment() to be more straightforward.
Fixed bug in kcm audio that caused data to be deleted that was still used.
Made the ALSA audio driver work when the driver does not support mmap (commonly because ALSA is really PulseAudio).
Removed al_is_channel_conf function.
Font addons:
Optimized font loading by converting the mask color on a memory copy instead of having to lock a texture.
Made the ttf addon read from file streams.
Primitives addon:
Fixed the direction of the last segment in software line loop, and fixed the offsets in the line drawer.
Fixed the thick ribbon code in the primitives addon, was broken for straight segments.
Various fixes/hacks for the D3D driver of the primitives addon: hack to make the indexed primitives work and a fix for the textured primitives.
Various enhancements to the transformations API: const correctness, transformation inverses and getting the current transformation.
Added support for custom vertex formats.
Flipped the v axis of texture coordinates for primitives. Now (u=0, v=0) correspond to (x=0, y=0) on the texture bitmap.
Added a way to use texture coordinates measured in pixels. Changed ALLEGRO_VERTEX to use them by default.
PhysicsFS:
PhysFS readdir didn’t prepend the parent directory name to the returned entry’s path.
Set execute bit on PhysFS directory entries.
Examples:
Made examples report errors using the native dialogs addon if WANT_POPUP_EXAMPLES is enabled in CMake (default).
Added new example ex_draw_bitmap, which simply measures FPS when drawing a bunch of bitmaps (similar to exaccel in A4).
Documentation:
Many documentation updates and clarifications.
Fixed up Info and PDF documentation.
Bindings:
- Added a script to generate a 1:1 Python wrapper (see the python directory).
Changes from 4.9.12 to 4.9.13 (August 2009)
The main developers this time were: Trent Gamblin, Elias Pschernig, Paul Suntsov, Peter Wang. Other contributions from: Todd Cope, Evert Glebbeek, Michael Harrington, Matthew Leverton.
Ports:
- Trent Gamblin started a port to the GP2X Wiz handheld console.
Graphics:
Some OpenGL bitmap routines were not checking whether the target bitmap was locked.
Scaled bitmap drawer was not setting the blend mode.
Fixed a bug where al_map_rgb followed by al_unmap_rgb would return different values.
Fixed problems with sub-sub-bitmaps.
Fixed window placement on OS X, which did not properly translate the coordinates specified by the user with al_set_new_window_position().
Made is_wgl_extension_supported() fail gracefully.
Added ALLEGRO_ALPHA_TEST bitmap flag.
Minor optimizations in some memory blitting routines.
Input:
Replaced (ALLEGRO_EVENT_SOURCE *) casting with type-safe functions, namely al_get_keyboard_event_source, al_get_mouse_event_source, al_get_joystick_event_source, al_get_display_event_source, al_get_timer_event_source, etc.
Made it so that users can derive their own structures from ALLEGRO_EVENT_SOURCE. al_create_user_event_source() is replaced by al_init_user_event_source().
Fixed a problem on Windows where the joystick never regains focus when tabbing away from a window.
Fixed a problem with missing key repeat with broken X.Org drivers.
Implemented ALLEGRO_EVENT_MOUSE_ENTER_DISPLAY, ALLEGRO_EVENT_MOUSE_LEAVE_DISPLAY for X11.
Image I/O addon:
Changed return type of al_save_bitmap() to bool.
Separated al_add_image_handler into al_register_bitmap_loader, al_register_bitmap_saver, etc.
Made JPEG and PNG loaders handle al_create_bitmap() failing.
Speed up JPEG loading and saving.
Fixed a reinitialisation issue in iio.
Audio addons:
Moved basic sample loading/saving routines to kcm_audio from acodec and added file type registration functions.
Moved WAV support into kcm_audio.
Made WAV loader not choke on extra chunks in the wave.
Separated acodec into a5_flac, a5_vorbis addons. You need to initialise them explicitly. Removed sndfile support.
Renamed al_*_oggvorbis to al_*_ogg_vorbis.
Changed argument order in al_save_sample and al_stream_from_file.
Reordered parameters in al_attach_* functions to follow the word order.
Renamed a few streaming functions to refer to fragments/buffers as fragments consistently.
Added missing getters for ALLEGRO_SAMPLE fields.
Fixed mutex locking problems with kcm_audio objects.
Avoid underfilling a stream when it is fed with a short looping stream.
Other addons:
Added glyph advance caching for the TTF addon.
Renamed al_register_font_extension to al_register_font_loader. Match the file name extension case insensitively.
Documentation:
Lots of documentation updates.
Added a short “Getting started guide” to the reference manual.
Examples:
- Added another example for kcm_audio streaming: ex_synth.
Build system:
Fix pkg-config .pc files generated for static linking.
DLL symbols are now exported by name, not ordinals.
Changes from 4.9.11 to 4.9.12 (July 2009)
Fixed bugs in Windows keyboard driver (Todd Cope).
Fixed problems with ALLEGRO_MOUSE_STATE buttons on Windows (Milan Mimica).
Fixed problems with PhysicsFS addon DLL on MSVC (Peter Wang).
Set the D3D texture address mode to CLAMP (Todd Cope).
Fixed hang if Allegro was initialized more than once on Windows (Michael Harrington).
Added a CMake option to force the use of DllMain style TLS on Windows, for use with C# bindings (Michael Harrington).
Fixed a bug where drawing circles with a small radius would crash (Elias Pschernig).
Fixed several memory leaks throughout the libraries (Trent Gamblin).
Fixed some compilation warnings on Mac OS X (Evert Glebbeek).
Small documentation updates.
Changes from 4.9.10.1 to 4.9.11 (June 2009)
The main developers this time were: Trent Gamblin, Milan Mimica, Elias Pschernig, Paul Suntsov, Peter Wang. Other contributions from: Christopher Bludau, David Capello, Todd Cope, Evert Glebbeek, Peter Hull.
Graphics:
Changed rotation direction in memory blitting routines to match D3D/OpenGL routines.
Made al_set_target_bitmap not create an FBO if the bitmap is locked.
Added explicit FBO handling functions al_get_opengl_fbo and al_remove_opengl_fbo in case we weren’t quite clever enough above and the user has to intervene manually.
Added OpenGL 3.1 support and a bunch of new OpenGL extensions.
Fixed al_inhibit_screensaver on Windows.
Fixed selection of X pixel formats with WGL if a new bitmap has alpha.
Made X11 icon work regardless of the backbuffer format.
Input:
Ditched DirectInput keyboard driver and replaced it with WinAPI. Fixes several issues the old driver had.
Rewrote Windows mouse driver to use WinAPI instead of DirectInput.
Added al_get_joystick_number.
Filesystem:
Merged ALLEGRO_FS_HOOK_ENTRY_INTERFACE into ALLEGRO_FS_INTERFACE and made ALLEGRO_FS_INTERFACE public.
Added al_set_fs_interface and al_get_fs_interface.
Made al_opendir take an ALLEGRO_FS_ENTRY.
Removed functions which are obsolete or probably have no use.
Made al_get_standard_path(ALLEGRO_PROGRAM_PATH) return a path with an empty filename.
Path routines:
- Renamed functions to follow conventions.
File I/O:
- Fix al_fgets() returning wrong pointer value on success.
Primitives addon:
Added support for textured primitives in software.
Introduced ALLEGRO_PRIM_COLOR, removed ALLEGRO_VBUFFER.
Exposed the software line and triangle drawers to the user.
Added rounded rectangles.
Fix an extraneous pixel bug in the triangle drawer.
Audio addon:
Change from using generic get/set audio property functions to specific getter/setter functions.
Change return types on many functions to return true on success instead of zero. (Watch out when porting your code, the C compiler won’t help.)
Native dialogs:
Added a Windows implementation.
Added a title to al_show_native_message_box().
Other addons:
Implemented the filesystem interface for PhysicsFS and demonstrate its use in ex_physfs.
Fixed al_color_html_to_rgb.
Examples:
- Added an OpenGL pixel shader example.
Build system:
- Output separate pkg-config .pc files for static linking.
Changes from 4.9.10 to 4.9.10.1 (May 2009)
Fixed uses of snprintf on MSVC.
Disabled ex_curl on Windows as it requires Winsock.
Changes from 4.9.9.1 to 4.9.10 (May 2009)
The main developers this time were: Trent Gamblin, Evert Glebbeek, Milan Mimica, Elias Pschernig, Peter Wang. Other contributions from: Peter Hull, Paul Suntsov.
Graphics:
Renamed al_clear() to al_clear_to_color().
Renamed al_opengl_version() to al_get_opengl_version().
Changed the direction of rotation for al_draw_rotated* from counter-clockwise to clockwise.
Added new pixel format ALLEGRO_PIXEL_FORMAT_ABGR_8888_LE which guarantees component ordering.
Added ALLEGRO_NO_PRESERVE_TEXTURE flag.
Fixed horizontal flipping in plain software blitting routines.
Fixed some blending bugs in the OpenGL driver.
Made OpenGL driver fall back to software rendering if separate alpha blending is requested but not supported.
Added a config option which allows pretending a lower OpenGL version.
Implemented al_get_num_display_formats(), al_get_display_format_option() and al_set_new_display_format() for WGL.
Fixed bug in al_get_display_format_option() with the GLX driver.
Fixed a bug in the D3D driver that made display creation crash if the first scored mode failed.
Made the OpenGL driver prefer the backbuffer format for new bitmaps.
Defer FBO creation to when first setting a bitmap as target bitmap.
Input:
Renamed some joystick functions.
Account for caps lock state in OS X keyboard driver.
Made UTF-8 input work on X11.
File I/O:
Separated part of fshook API into a distinct file I/O API (actually generic streams).
Make the file I/O API match stdio more closely and account for corner cases (incomplete).
Made it possible to set a stream vtable on a per-thread basis, which affects al_fopen() for that thread.
Added al_fget_ustr() to read a line conveniently.
Change al_fputs() not to do its own CR insertion.
Add al_fopen_fd() to create an ALLEGRO_FILE from an existing file descriptor.
Filesystem:
Changed al_getcwd, al_get_entry_name to return ALLEGRO_PATHs.
Renamed al_get_path to al_get_standard_path, and to return an ALLEGRO_PATH.
Changed al_readdir to return an ALLEGRO_FS_ENTRY.
Added al_path_create_dir.
Removed some filesystem querying functions which take string paths (ALLEGRO_FS_ENTRY versions will do).
Config routines:
Added functions to traverse configurations structures.
Change al_save_config_file() return type to bool.
Removed an arbitrary limit on the length of config values.
Renamed configuration files to allegro5.cfg and allegro5rc.
String routines:
Allegro 4-era string routines removed.
Added al_ustr_to_buffer().
Other core:
Renamed al_thread_should_stop to al_get_thread_should_stop.
Added a new internal logging mechanism with configurable debug “channels”, verbosity levels and output formatting.
Cleaned up ASSERT namespace pollution.
Font addons:
Renamed font and TTF addon functions to conform to conventions.
Added al_init_ttf_addon.
Implemented slightly nicer text drawing API:
- functions are called “draw_text” instead of “textout”
- centre/right alignment handled by a flag instead of functions
- functions accepting ALLEGRO_USTR arguments provided
- substring support is removed, so ‘count’ arguments are not needed in the usual case; the ALLEGRO_USTR functions provide a similar facility.
Removed al_font_is_compatible_font.
Sped up al_grab_font_from_bitmap() by five times.
ttf: Fixed a possible bug with kerning of unicode code points > 127.
Image I/O addon:
Renamed everything in the IIO addon.
Exposed al_load_bmp/al_save_bmp etc.
Audio addon:
Renamed al_mixer_set_postprocess_callback.
Added two config options to OSS driver.
Made ALSA read config settings from [alsa] section.
Native dialogs:
- Added al_show_native_message_box() which works like allegro_message() in A4. Implemented for GTK and OS X.
PhysicsFS addon:
- Added PhysicsFS addon.
Primitives addon:
Removed global state flags.
Removed normals from ALLEGRO_VERTEX.
Removed read/write flags from vertex buffers.
Examples:
Added an example that tests al_get_display_format_option().
Added an example which shows playing a sample directly to a voice.
Added an example for PhysicsFS addon.
Added a (silly) example that loads an image off the network using libcurl.
Added ex_dir which demonstrates the use of al_readdir and al_get_entry_name.
Other:
- Many bug and documentation fixes.
Changes from 4.9.9 to 4.9.9.1 (March 2009)
Made it compile and work with MSVC and MinGW 3.4.5.
Enabled SSE instruction set in MSVC.
Fixed X11 XIM keyboard input (partially?).
Fall back on the reference (software) rasterizer in D3D.
Changes from 4.9.8 to 4.9.9 (March 2009)
The main developers this time were: Trent Gamblin, Evert Glebbeek, Milan Mimica, Elias Pschernig, Paul Suntsov, Peter Wang. Other contributions from: Todd Cope, Angelo Mottola, Trezker.
Graphics:
Added display options API and scoring, based on AllegroGL, for finer control over display creation.
Added API to query possible display formats (implemented on X, Mac OS X).
Changed the bitmap locking mechanism. The caller can choose a pixel format.
Added support for multisampling.
Simplified the semantics of al_update_display_region().
Optimised software blitting routines.
Optimised al_map_rgb/al_map_rgba.
Replaced al_draw_rectangle() and al_draw_line() from core library with al_draw_rectangle_ex() and al_draw_line_ex() from the primitives addon.
Implemented al_wait_for_vsync() everywhere except WGL.
Fixed problems with sub-bitmaps with the OpenGL driver.
Fixed bugs in software scaled/rotated blit routines.
Added a new pixel format ALLEGRO_PIXEL_FORMAT_ABGR_F32. Removed ALLEGRO_PIXEL_FORMAT_ANY_15_WITH_ALPHA, ALLEGRO_PIXEL_FORMAT_ANY_24_WITH_ALPHA.
Added support for creating OpenGL 3.0 contexts (untested; only WGL/GLX for now). Relevant display flags are ALLEGRO_OPENGL_3_0 and ALLEGRO_OPENGL_FORWARD_COMPATIBLE.
Allow disabling any OpenGL extensions from allegro.cfg to test alternative rendering paths.
Fixed problem with windows only activating on title bar clicks (Windows).
Fixed a minimize/restore bug in D3D (with the help of Christopher Bludau).
Input:
Implemented al_set_mouse_xy under X11.
Added ALLEGRO_EVENT_MOUSE_WARPED event for al_set_mouse_xy().
Path routines:
Made al_path_get_drive/filename return the empty string instead of NULL if the drive or filename is missing.
Changed al_path_set_extension/al_path_get_extension to include the leading dot.
Made al_path_get_extension(), al_path_get_basename(), al_path_to_string() return pointers to internal strings.
Unicode:
Changed type of ALLEGRO_USTR; now you should use pointers to ALLEGRO_USTRs.
Added UTF-16 conversion routines.
Other core:
Added ALLEGRO_GET_EVENT_TYPE for constructing integers for event type IDs.
Renamed configuration function names to conform to conventions.
Removed public MIN/MAX/ABS/MID/SGN/CLAMP/TRUE/FALSE macros.
Replaced AL_PI by ALLEGRO_PI and documented it as part of public API.
Audio addons:
Added stream seeking and stream start/end loop points.
Add panning support for kcm_audio (stereo only).
Font addons:
Made al_font_grab_font_from_bitmap() accept code point ranges.
Made font routines use new UTF-8 routines; lifted some arbitrary limits.
Fixed artefacts in bitmap font and TTF rendering.
Image I/O addon:
Made the capability to load/save images from/to ALLEGRO_FS_ENTRYs public.
Added missing locking to .png save function.
Other addons:
Added native_dialog addon, with file selector dialogs.
Fixed many bugs in the primitives addon.
Made hsv/hsl color functions accept angles outside the 0..360° range.
Fixed a bug in al_color_name_to_rgb.
Examples:
New programs: ex_audio_props, ex_blend_bench, ex_blend_test, ex_blit, ex_clip, ex_draw, ex_font_justify, ex_gl_depth, ex_logo, ex_multisample, ex_native_filechooser, ex_path_test, ex_rotate, ex_stream_seek, ex_vsync, ex_warp_mouse. (ex_draw currently demonstrates known bugs.)
Updated programs: ex_joystick_events, ex_monitorinfo, ex_pixelformat, ex_scale, ex_subbitmap.
Build system:
Added pkg-config support. .pc files are generated and installed on Unix.
Allowed gcc to generate SSE instructions on x86 by default. For Pentium 2 (or lower) compatibility you must uncheck a CMake option before building Allegro.
Other:
- Many other bug fixes.
Changes from 4.9.7.1 to 4.9.8 (February 2009)
The main developers this time were: Thomas Fjellstrom, Trent Gamblin, Evert Glebbeek, Matthew Leverton, Milan Mimica, Elias Pschernig, Paul Suntsov, Peter Wang.
General:
- Lots of bug fixes.
File system hooks:
Rationalised file system hook functions. Failure reasons can be retrieved with al_get_errno().
Enable large file support on 32-bit systems.
Converted the library and addons to use file system hook functions.
Path functions:
Added al_path_clone(), al_path_make_canonical(), al_path_make_absolute(), al_path_set_extension(), al_{get,set}_org_name and al_{get,set}_app_name functions.
Made al_path_get_extension() not include the leading "." of the extension.
Add AL_EXENAME_PATH, AL_USER_SETTINGS_PATH, AL_SYSTEM_SETTINGS_PATH enums for al_get_path().
String routines:
Added a new, dynamically allocating UTF-8 string API. This uses bstrlib internally, which is distributed under a BSD licence. Allegro 5 will expect all strings to be either ASCII compatible, or in UTF-8 encoding.
Removed many old Unicode string functions. (Eventually they will all be removed.)
Config routines:
- Clarified behaviour of al_config_add_comment, al_config_set_value with regards to whitespace and leading comment marks.
Graphics:
Bug fixes on Windows and Mac OS X for resizing, switching away, setting full screens, multi-monitor, etc.
Added an al_get_opengl_texture() convenience function.
Added separate alpha blending.
Added ALLEGRO_PIXEL_FORMAT_ANY.
Honour al_set_new_window_position() in X11 port.
Made the X11 port fail to set a full screen mode if the requested resolution cannot be set rather than falling back to a windowed mode.
Input:
Added a field to the mouse state struct to indicate the display the mouse is currently on.
Made DirectX enumerate all joysticks/gamepads properly by using a device type new to DirectInput 8.
Fixed a bug in wmouse.c where y was not changed in al_set_mouse_xy.
Support ALLEGRO_EVENT_MOUSE_ENTER/LEAVE_DISPLAY events in Windows.
Addons:
Added a primitives addon.
Revamp interface for kcm_audio addon to make simple cases easier.
Added native .wav support and save sample routines to acodec addon.
Added a colors addon.
Added memory file addon and example.
TTF addon:
Added al_ttf_get_text_dimensions() function.
Allow specifying the font size more precisely by passing a negative font size.
Guess the filenames of kerning info for Type1 fonts.
Documentation:
Added a new documentation system using Pandoc. Now we can generate HTML, man, Info and PDF formats.
Added and fixed lots of documentation.
Examples:
Added ex_prim, ex_mouse_focus examples.
Made ex_blend2 more comprehensive.
Updated ex_get_path example.
Made ex_ttf accept the TTF file name on the command line.
Build system:
- Use the official CMAKE_BUILD_TYPE method of selecting the build configuration. This should work better for non-make builds; however, it's no longer possible to build multiple configurations with a single configuration step as we could previously.
Removals:
Remove outdated A4 tools.
Remove icodec addon.
SCons build was unmaintained and not working.
Changes from 4.9.7 to 4.9.7.1 (December 2008)
- Scan aintern_dtor.h for export symbols, needed for MSVC.
Changes from 4.9.6 to 4.9.7 (December 2008)
The main developers this time were: Trent Gamblin, Evert Glebbeek, Peter Hull, Milan Mimica, Peter Wang.
Graphics:
Fixed a bug where the "display" field of a bitmap was not correctly reset when it was transferred to another display on OS X.
Made al_create_display() respect al_set_new_window_position() on OS X.
Fixed the bug that caused input focus to be lost in OS X when a window was resized.
Made resizable Allegro windows respond properly to the green “+” button at the top of the screen on OS X.
Properly implemented fullscreen resize in WGL.
Made the memory blenders work the same as the hardware ones.
Made al_get_pixel()/al_draw_pixel() handle sub bitmaps in case the bitmap was locked.
In the OpenGL driver, if the bitmap is locked by the user, use memory drawing on the locked region.
Added implementations of al_inhibit_screensaver() for the X and Mac OS X ports.
Added multi-monitor support to Mac OS X port (untested!).
Other fixes.
Input:
Made al_get_keyboard_state() return structures with the `display’ field correctly set.
Made keyboard event member ‘unichar’ uppercase when Shift/CapsLock is on, in Windows.
Made mouse cursor show/hide work with Mac OS X full screen.
Config routines:
- Preserve comment and empty lines in config files when writing.
Addons:
Add a simple interface layer for kcm_audio.
Made kcm_audio objects automatically be destroyed when it is shut down.
Renamed functions in kcm_audio to conform better with the rest of the library.
Made the TTF addon aggregate glyph cache bitmaps into larger bitmaps for faster glyph rendering (less source bitmap switching).
Examples:
Add an example to test the ALLEGRO_KEYBOARD_STATE `display’ field.
Add an example for testing config routines.
Add an example for checking software blending routines against hardware blending.
Add an example for the simple interface for kcm_audio.
Changes from 4.9.5 to 4.9.6 (November 2008)
The core developers this time were: Thomas Fjellstrom, Trent Gamblin, Evert Glebbeek, Peter Hull, Milan Mimica, Jon Rafkind, Peter Wang.
Allegro 4.9.6 and onwards are licensed under the zlib licence (see LICENSE.txt). This is a simple permissive free software licence, close in spirit to the ‘giftware’ licence, but is clearer and more well-known.
General:
Added filesystem hook (fshook) and path API functions.
Many minor bug fixes.
Graphics:
Added allegro5/a5_opengl.h, which has to be included by programs to use OpenGL specifics. ALLEGRO_EXCLUDE_GLX and ALLEGRO_EXCLUDE_WGL can be #defined to exclude GLX and WGL OpenGL extensions respectively.
Added allegro/a5_direct3d.h, which has to be included by programs to use D3D specifics.
Fixed some drawing from and onto sub-bitmaps.
Fixed blending with the wrong color in case of sub-bitmaps.
Fixed a bug in the D3D driver where the transformation matrix was not reset after drawing a bitmap.
Added draw pixel to OpenGL driver.
Added more OpenGL extensions.
Added function to inhibit screen saver (currently Windows only).
Config routines:
Added al_config_create().
Deleted al_config_set_global(). Made empty section name equivalent to the global section.
Read system wide and home directory config files on Unix (Ryan Patterson).
Events:
- Added support for injecting user-defined events into event queues.
Audio addon:
- Made the ALSA driver read the device name from the config file (Ryan Patterson).
Examples:
Added ex_subbitmap example.
Added ex_disable_screensaver example.
Build system:
- Rationalised library names and made CMake and SCons build systems agree on the names.
Changes from 4.9.4 to 4.9.5 (October 2008)
The core developers this time were: Trent Gamblin, Evert Glebbeek, Peter Hull, Milan Mimica, Elias Pschernig, Jon Rafkind, Peter Wang.
Graphics:
Added fullscreen support on Mac OS X.
Added support for resizable windows on Mac OS X.
Made frameless windows respond to events on Mac OS X.
Fixed a problem with D3D blending.
Made D3D driver work on systems without hardware vertex processing.
Made WGL driver fail more gracefully.
Implemented sprite flipping for OpenGL drivers (Steven Wallace).
Added al_is_sub_bitmap() function.
Input:
Fixed input with multiple windows on Windows.
Fixed keyboard autorepeat events in X11.
Added al_is_keyboard_installed().
Fixed key shifts on Windows (ported from 4.2).
Fixed mouse button reporting on Mac OS X.
Implemented system mouse cursors on MacOS X.
Fixed mouse cursors with alpha channels on X11.
Some work on Mac OS X joystick support (incomplete).
Events:
- Simplified internals of events system further. At the same time, this change happens to also allow event queues to grow unboundedly. (You should still avoid letting them get too big, of course.)
Audio addons:
Made ALLEGRO_STREAM objects emit events for empty fragments that need to be refilled.
Added the possibility to drain a stream created by al_stream_from_file().
Added a function to rewind a stream.
Added gain support to ALLEGRO_STREAM and ALLEGRO_SAMPLE objects.
Made it possible to attach a sample to a mixer that isn’t already attached to something.
Fixed Ogg Vorbis loader on big-endian systems.
Made the OpenAL driver the least preferred driver, as it doesn’t play stereo samples properly.
Image addons:
Added JPEG support to iio addon, using libjpeg.
Fixed TGA loader on big-endian systems.
Fixed image loading in icodec addon.
Font addon:
Fixed count-restricted text output functions calculations on non-ASCII strings.
Made al_textout* functions always take a 'count' parameter.
Renamed al_font_text_length* to al_font_text_width*.
Harmonised the order of al_font_textout* and al_font_textprintf* arguments.
Examples:
Added ex_bitmap_flip example (Steven Wallace).
Added ex_mixer_chain example.
Split ex_events into smaller examples.
Made the demo use ALLEGRO_STREAM to play music.
Build an app bundle from the demo, on Mac OS X.
Build system:
Improved detection of external dependencies in CMake build.
Guess compiler locations for MinGW and MSVC (CMake).
Many improvements to scons build, including install support.
General:
- Many other bug fixes.
Changes from 4.9.3 to 4.9.4 (September 2008)
The core developers this time were: Trent Gamblin, Peter Hull, Milan Mimica, Elias Pschernig and Peter Wang. Ryan Dickie and Jon Rafkind also contributed.
General:
Many bug fixes all around.
Added a public threads API.
Added a basic configuration API.
Added al_store_state/al_restore_state functions.
Added al_get_errno/al_set_errno (not used much yet).
Renamed some functions/structures to be more consistent.
Code formatting improvements.
Added more debugging messages.
Removed a lot of A4 code that is no longer used.
Graphics:
Added support for some new OpenGL extensions.
Multihead support on Windows (preliminary support on OSX and Linux).
Many enhancements to all drivers.
Merged common parts of WGL and D3D drivers.
Borderless windows, setting window positions and titles.
Fullscreen support on OSX and Linux.
Do not clear bitmaps when they are created.
Improved compile times and DLL sizes by simplifying “memblit” functions.
Added EXPOSE, SWITCH_IN and SWITCH_OUT display events.
Build system:
Many bug fixes and enhancements to SCons and CMake build systems.
Support for Turbo C++ 2006.
Support for cross-compiling on Linux to MinGW (CMake).
Events:
Filled in a display field for all relevant events.
Added al_wait_for_event_until.
Addons:
Added an ImageMagick addon
Added iio (Image IO) addon
Supports BMP, PCX, TGA.
Supports PNG with libpng.
Added new audio addon, kcm_audio. (The 'audio' addon was taken in a new direction between the 4.9.3 and 4.9.4 releases, but we decided against that, so kcm_audio is actually a continuation of the old audio addon.)
Added audio streaming functionality.
Added OSS, ALSA, DirectSound drivers.
A lot of reorganisation, internally and externally.
Added TTF font addon, using FreeType.
Made all addons use “al_” prefix.
Examples:
Lots of new examples.
Wait for keypress in some examples instead of arbitrary delay.
Clean up files when done in some examples.
Changes from 4.9.2 to 4.9.3 (April 2008)
Changes from 4.9.1 to 4.9.2 (November 2007)
Made al_draw_rotated_(scaled_)bitmap only lock the region of the destination it needs to, for memory bitmaps.
Trent Gamblin made al_draw_scaled_bitmap use the AL_FLIP_* flags.
Removed the 'msvc' target from the fix.sh help message as it should not be used by users any more. Added a comment that it is used by zipup.sh.
Anthony ‘Timorg’ Cassidy made d_menu_proc fill up its assigned area with the gui_bg_color.
Changes from 4.9.0 to 4.9.1 (March 2007).
Changes from 4.2 series to 4.9.0 (July 2006)

https://liballeg.org/changes.html
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Fri, Sep 12, 2014 at 11:04:05AM -0700, Roland McGrath wrote:
> _REGEX_VERSION is never defined and in fact it was removed from the
> standard.  For backward compatibility we must support _SC_REGEX_VERSION and
> return -1 without setting errno, but I don't think we should treat this
> like the real cases any more.  That is, just unconditionally return -1
> and a comment about it having been removed in 1003.1-2004.

This is what I have committed.

Siddhesh

commit 61fe374a44a92621e0b75ec1f011ff1fba6c2148
Author: Siddhesh Poyarekar <siddhesh@redhat.com>
Date:   Mon Sep 15 10:16:14 2014 +0530

    Remove _POSIX_REGEX_VERSION

    There is no _POSIX_REGEX_VERSION, so don't check for it.  _REGEX_VERSION
    has been removed as well[1], so only keep the -1 return for backward
    compatibility.  I found this when trying to make the getconf environment
    variables typo-proof.

        * sysdeps/posix/sysconf.c (__sysconf): Return -1 for
        _SC_REGEX_VERSION.

    [1]

diff --git a/ChangeLog b/ChangeLog
index d010cc9..e316f8c 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,8 @@
 2014-09-15  Siddhesh Poyarekar  <siddhesh@redhat.com>

+	* sysdeps/posix/sysconf.c (__sysconf): Return -1 for
+	_SC_REGEX_VERSION.
+
 	* posix/getconf.c (vars): Add _POSIX_IPV6 and _POSIX_RAW_SOCKETS.

diff --git a/sysdeps/posix/sysconf.c b/sysdeps/posix/sysconf.c
index cd2fb5a..e815cd5 100644
--- a/sysdeps/posix/sysconf.c
+++ b/sysdeps/posix/sysconf.c
@@ -983,12 +983,10 @@ __sysconf (name)
 #else
       return -1;
 #endif

+      /* _REGEX_VERSION has been removed with IEEE Std 1003.1-2001/Cor 2-2004,
+	 item XSH/TC2/D6/137.  */
     case _SC_REGEX_VERSION:
-#if _POSIX_REGEX_VERSION > 0
-      return _POSIX_REGEX_VERSION;
-#else
       return -1;
-#endif

     case _SC_SHELL:
 #if _POSIX_SHELL > 0
Attachment:
pgpXY4GyolsGj.pgp
Description: PGP signature

https://sourceware.org/legacy-ml/libc-alpha/2014-09/msg00328.html
Description
When entering #!/bin/bash into a code block, MoinMoin does not show the line in the HTML view.
Steps to reproduce
- Create a new page (or edit the SandBox)
- Create a new code block containing a sample bash script (for example)
- Save and view the page
- The hashbang line is not rendered in the HTML
I expect to see the #!/bin/bash
Example
This isn't showing the #!/bin/bash line.
echo foobar
Nor does this show the #!/usr/bin/env python line.
def foo: bar
This does as the formatter doesn't think that this is a formatting line.
#!/bin/bash
echo foobar
Component selection
- general - formatter
Details
This Wiki.
Workaround
Add something else on the line preceding #!/bin/bash
Or apply this HORRIBLE patch (against 1.6.3):
--- formatter/__init__.py-orig	2008-07-23 19:07:03.000000000 +0100
+++ formatter/__init__.py-new	2008-07-23 19:06:12.000000000 +0100
@@ -317,6 +317,8 @@
         return self.text(errmsg)

     def _get_bang_args(self, line):
+        if line.startswith('#!/'):
+            return None
         if line.startswith('#!'):
             try:
                 name, args = line[2:].split(None, 1)
Discussion
The parser successfully negotiates that the content type should be 'text', but the formatter removes the line if it begins with '#!'. Since many wiki's probably are used for code snippets, these lines are probably missing.
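To make the mechanism concrete, here is a simplified Python model of the bang-line handling (an illustration only, not the actual MoinMoin source): any first line starting with '#!' is consumed as a parser/format specification instead of being emitted as content, so '#!/bin/bash' is swallowed as a (bogus) parser named '/bin/bash'.

```python
def get_bang_args(line):
    """Return the parser name if the line looks like a '#!name args' spec."""
    if line.startswith('#!'):
        parts = line[2:].split(None, 1)
        return parts[0] if parts else ''
    return None  # not a spec; the line is kept as content

def render_code_block(lines):
    """Mimic the buggy rendering: a leading bang line is dropped."""
    if lines and get_bang_args(lines[0]) is not None:
        return lines[1:]
    return lines
```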
It doesn't seem to affect the version of MoinMoin running at though... (1.5.9)
if your stuff begins with #! and you want to see that as content (or it is not meant as parser format spec for moin), you have to add another #! line specifying the parser you want to use:
{{{#!python
#!/usr/bin/env python
def foo: bar
}}}
Renders as:
Plan
- Priority:
- Assigned to:
- Status:

http://www.moinmo.in/MoinMoinBugs/ParserDoesNotShowHashBangSlash
This post outlines details regarding Elasticsearch ECK Operator 1.3 deployment. It assumes that you have good understanding of Kubernetes and Elasticsearch.
Part-1: Kubernetes Controller
If you have worked on Kubernetes, then you might remember one special thing: if you kill a running pod, Kubernetes will spin up another one. This happens because of control loops. A control loop is a non-terminating loop that regulates the state of the system. Control loops are managed by Kubernetes controllers. Every Kubernetes cluster runs a group of controllers simultaneously, each responsible for a particular resource in the cluster. A controller talks to the Kubernetes API and keeps reconciling until the current state becomes the desired state.
All controllers are packaged and shipped in a single daemon named kube-controller-manager.
Part-2: Kubernetes Custom Resources
A custom resource extends Kubernetes capabilities by adding new kinds of objects specific to your application's requirements. By default, Kubernetes comes with multiple objects to support deploying containers, e.g. pods, deployments, jobs, etc. Using CRDs (Custom Resource Definitions) you can build your own object types to perform specific tasks.
Part-3: Kubernetes Operator
An operator is a way of packaging, deploying, and managing a Kubernetes application. Operators are the application-specific controllers.
An operator extends the Kubernetes API to manage applications on behalf of users. This means that instead of you managing the application components yourself, the operator manages them as required. Operators use the concept of custom resources to manage applications and their components.
Part-4: What is ECK?
Elastic Cloud on Kubernetes (ECK) extends the basic Kubernetes orchestration capabilities to support the deployment and management of Elasticsearch, Kibana, and APM Server on the Kubernetes platform. It is built on the Kubernetes Operator pattern.
Part-5: Why ECK Operator?
You can deploy Elasticsearch on Kubernetes in multiple ways; the operator-based mechanism is the most advanced of them, and it has greatly simplified the deployment of Elasticsearch on Kubernetes.
Part-6: What is special in Release 1.3.0
This new release has a lot of improvements and features to watch out for. I am going to cover those which I find important to know. For the full list of features, please visit the ECK release page.
Dynamic Volume Expansion
Earlier, Elasticsearch volumes were fixed at the point of deployment; in the case of a size increase, the existing deployment had to be migrated to a new one. Elasticsearch volumes can now be dynamically expanded. If you want to resize a volume, just update the manifest file. This feature depends entirely on your storage provisioner, which should support volume expansion. One more point: make sure your storage class has the property
allowVolumeExpansion: true
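For illustration, a StorageClass that permits expansion might look like the following sketch (the name and provisioner here are placeholders — use the provisioner from your own cluster, and confirm that it supports expansion):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd        # placeholder name
provisioner: ebs.csi.aws.com  # placeholder; must support volume expansion
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```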
Official Helm Chart
Earlier, ECK operator deployment was performed using an all-in-one YAML file.
> kubectl apply -f
Starting with ECK 1.3.0, you can do it using the official Helm chart. This chart is in an experimental state but is a good starting point.
> helm repo add elastic
> helm repo update
> kubectl create namespace elastic-system
> helm install elastic-operator elastic/eck-operator -n elastic-system
IPv6 is supported
If your Kubernetes environment supports IPv6, then ECK now has no issues with it. This feature will be very useful as large K8s deployments move to IPv6.
Support for OpenShift Deployments
In this release, the ECK operator image is based on the Red Hat Universal Base Image (ubi-minimal), which is the standard for OpenShift deployments. ECK is now a certified operator on OperatorHub.
docker.elastic.co/eck/eck-operator
ECK Operator timeout
In the Helm chart's values.yaml, you can now also configure a timeout for the ECK operator. This can help in situations where the ECK operator takes time to respond because of ongoing platform issues.
That’s all for this post. In the next post, I will cover steps to deploy Elasticsearch/Kibana using operator.
Keep Learning and Stay Safe and Secure :)
https://dev.to/arunksingh16/getting-started-with-elastic-cloud-for-kubernetes-eck-1-3-0-36ie
Opened 9 years ago
Closed 9 years ago
#12285 closed defect (duplicate)
Update darwin_memory_usage.c for Lion header files
Description (last modified by )
Problem:
I got a pre-built 10.6 Sage from here:. Even though I'm on Lion, this installs and runs correctly.
But when I tried
./sage -b, it fails when compiling
devel/sage/misc/darwin_memory_usage.c.
Details:
The error is that that file contains the line
#include <mach/task_info.h>, which, on Lion, assumes that the symbol
vm_extmod_statistics_data_t has already been defined.
Fix:
To define that symbol, we need to first
#include <mach/vm_statistics.h>. It is simple to insert such a line at the beginning of
darwin_memory_usage.c, right before the first
#include. Doing so fixes the problem. See tiny attached patch. Jason Grout was kind enough to test the same change on his 10.6 machine, and verify that the change doesn't screw up the build there.
Attachments (1)
Change History (7)
Changed 9 years ago by
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
Looks good for OSX 10.6 (since sage -b is successful with this change and then sage starts up). Someone with 10.7 should probably also review this.
comment:3 Changed 9 years ago by
- Status changed from new to needs_review
comment:4 Changed 9 years ago by
comment:5 Changed 9 years ago by
- Milestone changed from sage-4.8 to sage-duplicate/invalid/wontfix
- Status changed from needs_review to positive_review
You are correct, that is a better fix.
comment:6 Changed 9 years ago by
- Resolution set to duplicate
- Reviewers set to Nathan Carter
- Status changed from positive_review to closed
patch making the one-line change described in the ticket

https://trac.sagemath.org/ticket/12285
Question:
I am running an ASP.Net page on IIS7, and developing in VS 2008. Currently, I have user authentication being done through an LDAP connection. Once the user logs in, on one page they have a form with some basic information about them (such as their name, email address, country, and the like) and I wish to pre populate some of these fields from information already stored in the LDAP. In particular their given name and email addresses. The question is, using C#, how do I actually retrieve this information?
Solution 1:
Sounds like you're on .Net 3.5 SP1, in that case you can use the System.DirectoryServices.AccountManagement namespace that greatly simplifies this.
Here's a sample:
var pc = new PrincipalContext(ContextType.Domain, "mydomaincontroller");
var u = UserPrincipal.FindByIdentity(pc, userName);
var email = u.EmailAddress;
var name = u.DisplayName;
Here's a full list of properties you can grab.
http://www.toontricks.com/2019/02/tutorial-how-to-programmatically.html
Migrating an App from Google Maps v2
Introduction
The Amazon Maps API provides mapping functionality for Android apps on Fire tablets and Fire phone. If your app uses Google Maps, you can migrate your app to the Amazon Maps API by making some minor code changes and then re-compiling your app against the Amazon Maps API. You can then run your app on Fire phone and Fire tablet devices and distribute your app in the Amazon Appstore for Android.
The Amazon Maps API offers interface parity with version 2 of the Google Maps API. Most classes and method calls in your Google Maps app work the same on Amazon devices. For information about the differences, see Differences between Amazon Maps and Google Maps.
Steps to Migrate Your App
To migrate your app:
- Configure your project with the Amazon Maps API Library. See Configuring Your Project to Use the Amazon Maps API. These steps include:
- Making sure your app permissions in the
AndroidManifest.xmlare correct.
- Updating the Fire OS API level in your
AndroidManifest.xml.
- Importing the Amazon Maps API Support Library and configuring your project to compile against it.
- Rename Google–specific namespaces and classes to the Amazon–named versions as described below.
- Rename Google–specific XML attributes in your resource files as described below.
- Remove the project's existing dependency on the Google Play services library. This ensures that the compiler flags any code that still references Google Play services classes.
- Register your app to download map tiles. See Registering and Testing Your App.
- Test your app on an Amazon device. Pay special attention to any areas where you used features that are not supported in the Amazon Maps API. For a list of these areas, see Differences between Amazon Maps and Google Maps.
Renaming Namespaces and Classes
The following table lists the Google–specific namespaces and classes that you must rename to the Amazon versions. You may find it useful to use find and replace tools in your IDE to make these updates.
Renaming XML Attributes
A Google Maps–compatible app can use custom XML attributes in a layout XML file to set initial map options. For example, you can specify
map:mapType="satellite" to set the initial map type to satellite.
The Amazon Maps API supports the same set of XML attributes, but they are named with an
amzn_ prefix. If you are using the XML attributes in your app, you need to rename them with the Amazon versions.
For details about using XML attributes for map settings, see Displaying an Interactive Map with the Amazon Maps API v2. The available XML attributes are also documented with their related
AmazonMapOptions methods in the Maps API Reference.
Testing Your Migrated App
When testing your migrated app on an Amazon device, pay special attention to any areas where you used features that are not supported in the Amazon Maps API. You can turn on strict mode to help find these areas. Strict mode causes the methods used for these features to log warnings or throw exceptions instead of silently failing.
For details about the unsupported features and using strict mode, see Differences between the Amazon Maps API v2 and Google Maps API v2.
For details about registering your app, see Registering and Testing Your Amazon Maps API v2 App.
For additional help and information, see the Amazon Maps API Frequently Asked Questions.

https://developer.amazon.com/docs/maps/migrate.html
In the last post in this series we saw some simple examples of linear programs, derived the concept of a dual linear program, and saw the duality theorem and the complementary slackness conditions which give a rough sketch of the stopping criterion for an algorithm. This time we’ll go ahead and write this algorithm for solving linear programs, and next time we’ll apply the algorithm to an industry-strength version of the nutrition problem we saw last time. The algorithm we’ll implement is called the simplex algorithm. It was the first algorithm for solving linear programs, invented in the 1940’s by George Dantzig, and it’s still the leading practical algorithm, and it was a key part of a Nobel Prize. It’s by far one of the most important algorithms ever devised.
As usual, we’ll post all of the code written in the making of this post on this blog’s Github page.
Slack variables and equality constraints
The simplex algorithm can solve any kind of linear program, but it only accepts a special form of the program as input. So first we have to do some manipulations. Recall that the primal form of a linear program was the following minimization problem.

$$ \min \langle c, x \rangle \quad \text{s.t.} \quad Ax \ge b,\ x \ge 0 $$

where the brackets mean "dot product." And its dual is

$$ \max \langle y, b \rangle \quad \text{s.t.} \quad A^T y \le c,\ y \ge 0 $$
The linear program can actually have more complicated constraints than just the ones above. In general, one might want to have “greater than” and “less than” constraints in the same problem. It turns out that this isn’t any harder, and moreover the simplex algorithm only uses equality constraints, and with some finicky algebra we can turn any set of inequality or equality constraints into a set of equality constraints.
We'll call our goal the "standard form," which is as follows:

$$ \max \langle c, x \rangle \quad \text{s.t.} \quad Ax = b,\ x \ge 0 $$
It seems impossible to get the usual minimization/maximization problem into standard form until you realize there's nothing stopping you from adding more variables to the problem. That is, say we're given a constraint like:

$$ \langle a_i, x \rangle \le b_i $$

we can add a new variable $\xi_i$, called a slack variable, so that we get an equality:

$$ \langle a_i, x \rangle + \xi_i = b_i $$

And now we can just impose that $\xi_i \ge 0$. The idea is that $\xi_i$ represents how much "slack" there is in the inequality, and you can always choose it to make the condition an equality. So if the equality holds and the variables are nonnegative, then the $x_j$ will still satisfy their original inequality. For "greater than" constraints, we can do the same thing but subtract a nonnegative variable. Finally, if we have a minimization problem "$\min_x \langle c, x \rangle$" we can convert it to "$\max_x \langle -c, x \rangle$".
So, to combine all of this together, if we have the following linear program with each kind of constraint,

$$ \max \langle c, x \rangle \quad \text{s.t.} \quad A_1 x = b_1,\ A_2 x \le b_2,\ A_3 x \ge b_3,\ x \ge 0 $$

We can add new variables $\xi_2, \xi_3 \ge 0$, and write it as

$$ \max \langle c, x \rangle \quad \text{s.t.} \quad A_1 x = b_1,\ A_2 x + \xi_2 = b_2,\ A_3 x - \xi_3 = b_3,\ x, \xi_2, \xi_3 \ge 0 $$

By defining the vector variable $x' = (x, \xi_2, \xi_3)$ and $c' = (c, 0, 0)$ and $A'$ to have the extra $\pm 1$ "identity" columns as appropriate for the new variables, we see that the system is written in standard form.
This is the kind of tedious transformation we can automate with a program. Assuming there are $n$ variables, the input consists of the vector $c$ of length $n$, and three matrix-vector pairs $(A_1, b_1), (A_2, b_2), (A_3, b_3)$ representing the three kinds of constraints. It's a bit annoying to describe, but the essential idea is that we compute a rectangular "identity" matrix whose diagonal entries are $\pm 1$, and then join this with the original constraint matrix row-wise. The reader can see the full implementation in the Github repository for this post, though we won't use this particular functionality in the algorithm that follows.
There are some other additional things we could do: for example there might be some variables that are completely unrestricted. What you do in this case is take an unrestricted variable $x_i$ and replace it by the difference of two nonnegative variables, $x_i = y_i - z_i$ with $y_i, z_i \ge 0$. For simplicity we'll ignore this, but it would be a fruitful exercise for the reader to augment the function to account for these.
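To make the bookkeeping concrete, here is a minimal Python sketch of such a conversion (a simplified stand-in for illustration — the function name, input format, and details are my own, not the version in the post's repository). It handles slack variables and the min-to-max sign flip, but not unrestricted variables:

```python
def standard_form(cost, equalities=((), ()), less_thans=((), ()),
                  greater_thans=((), ()), maximization=True):
    """Convert an LP with mixed constraints to: max <c', x'> s.t. A'x' = b', x' >= 0.

    Each constraint argument is a pair (matrix, vector), given as lists.
    """
    eq_A, eq_b = equalities
    lt_A, lt_b = less_thans
    gt_A, gt_b = greater_thans
    num_slack = len(lt_A) + len(gt_A)

    new_A, new_b = [], []

    # Equality rows get zero coefficients for every slack variable.
    for row, b_i in zip(eq_A, eq_b):
        new_A.append(list(row) + [0] * num_slack)
        new_b.append(b_i)

    # "<=" rows get a +1 column for their own slack: <a_i, x> + xi_i = b_i.
    for i, (row, b_i) in enumerate(zip(lt_A, lt_b)):
        slack = [0] * num_slack
        slack[i] = 1
        new_A.append(list(row) + slack)
        new_b.append(b_i)

    # ">=" rows get a -1 column instead: <a_i, x> - xi_i = b_i.
    for i, (row, b_i) in enumerate(zip(gt_A, gt_b)):
        slack = [0] * num_slack
        slack[len(lt_A) + i] = -1
        new_A.append(list(row) + slack)
        new_b.append(b_i)

    # Slack variables never appear in the objective; a minimization
    # problem becomes a maximization of the negated costs.
    sign = 1 if maximization else -1
    new_cost = [sign * c_i for c_i in cost] + [0] * num_slack
    return new_cost, new_A, new_b
```

For example, maximizing $\langle (1,1), x \rangle$ subject to $x_1 + x_2 \le 4$ and $x_1 \ge 1$ yields a system with four variables (two original, two slack) and two equality rows.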
What happened to the slackness conditions?
The “standard form” of our linear program raises an obvious question: how can the complementary slackness conditions make sense if everything is an equality? It turns out that one can redo all the work one did for linear programs of the form we gave last time (minimize w.r.t. greater-than constraints) for programs in the new “standard form” above. We even get the same complementary slackness conditions! If you want to, you can do this entire routine quite a bit faster if you invoke the power of Lagrangians. We won’t do that here, but the tool shows up as a way to work with primal-dual conversions in many other parts of mathematics, so it’s a good buzzword to keep in mind.
In our case, the only difference with the complementary slackness conditions is that one of the two is trivial: $\langle y, Ax - b \rangle = 0$. This is because if our candidate solution $x$ is feasible, then it will have to satisfy $Ax = b$ already. The other one, that $\langle x, c - A^T y \rangle = 0$, is the only one we need to worry about.
Again, the complementary slackness conditions give us inspiration here. Recall that, informally, they say that when a variable is used at all, it is used as much as it can be to fulfill its constraint (the corresponding dual constraint is tight). So a solution will correspond to a choice of some variables which are either used or not, and a choice of nonzero variables will correspond to a solution. We even saw this happen in the last post when we observed that broccoli trumps oranges. If we can get a good handle on how to navigate the set of these solutions, then we’ll have a nifty algorithm.
Let’s make this official and lay out our assumptions.
Extreme points and basic solutions
Remember that the graphical way to solve a linear program is to look at the line (or hyperplane) given by c^T x = q and keep increasing q (or decreasing it, if you are minimizing) until the very last moment when this line touches the region of feasible solutions. Also recall that the "feasible region" is just the set of all solutions to Ax = b with x >= 0, that is, the solutions that satisfy the constraints. We imagined this picture (the original post shows a convex polygonal region with the objective line sliding across it):
With this geometric intuition it’s clear that there will always be an optimal solution on a vertex of the feasible region. These points are called extreme points of the feasible region. But because we will almost never work in the plane again (even introducing slack variables makes us relatively high dimensional!) we want an algebraic characterization of these extreme points.
If you have a little bit of practice with convex sets the correct definition is very natural. Recall that a set X is convex if for any two points x, y in X, every point on the line segment between x and y is also in X. An algebraic way to say this (thinking of these points now as vectors) is that every point tx + (1 - t)y is in X when 0 <= t <= 1. Now an extreme point is just a point that isn't in the interior of any such line segment, i.e. can't be written this way with x != y and 0 < t < 1. For example, the corner points of a convex polygon are extreme points, while the points on its edges and in its interior are not.
Another way to say this is that if z is an extreme point, then whenever z can be written as z = tx + (1 - t)y for some 0 < t < 1, then actually x = y = z. Now since our constraints are all linear (and there are a finite number of them) they won't define a convex set with weird curved boundaries. This means that there are a finite number of extreme points that just correspond to the intersections of some of the constraints. So there are at most (n choose m) possibilities.
Indeed we want a characterization of extreme points that's specific to linear programs in standard form, "Ax = b, x >= 0." And here is one.
Definition: Let A be an m x n matrix with n >= m. A solution x to Ax = b is called basic if at most m of its entries are nonzero.
The reason we call it "basic" is because, under some mild assumptions we describe below, a basic solution corresponds to a vector space basis of R^m. Which basis? The one given by the m columns of A used in the basic solution. We don't need to talk about bases like this, though, so in the event of a headache just think of the basis as a set B of m column indices corresponding to the nonzero entries of the basic solution.
Indeed, what we’re doing here is looking at the matrix
formed by taking the columns of
whose indices are in
, and the vector
in the same way, and looking at the equation
. If all the parts of
that we removed were zero then this will hold if and only if
. One might worry that
is not invertible, so we’ll go ahead and assume it is. In fact, we’ll assume that every set of
columns of
forms a basis and that the rows of
are also linearly independent. This isn’t without loss of generality because if some rows or columns are not linearly independent, we can remove the offending constraints and variables without changing the set of solutions (this is why it’s so nice to work with the standard form).
Moreover, we’ll assume that every basic solution has exactly
nonzero variables. A basic solution which doesn’t satisfy this assumption is called degenerate, and they’ll essentially be special corner cases in the simplex algorithm. Finally, we call a basic solution feasible if (in addition to satisfying
) it satisfies
. Now that we’ve made all these assumptions it’s easy to see that choosing
nonzero variables uniquely determines a basic feasible solution. Again calling the sub-matrix
for a basis
, it’s just
. Now to finish our characterization, we just have to show that under the same assumptions basic feasible solutions are exactly the extremal points of the feasible region.
Proposition: A vector x is a basic feasible solution if and only if it's an extreme point of the set {x : Ax = b, x >= 0}.
Proof. For one direction, suppose you have a basic feasible solution x, and say we write it as x = ty + (1 - t)z for some 0 < t < 1 and feasible y, z. We want to show that this implies y = z. Since all of these points are in the feasible region, all of their coordinates are nonnegative. So whenever a coordinate x_i = 0 it must be that both y_i = z_i = 0. Since x has exactly n - m zero entries, it must be that y, z both have at least n - m zero entries, and hence y, z are both basic. By our non-degeneracy assumption they both then have exactly m nonzero entries. Let B be the set of the nonzero indices of x. Because Ay = Az = b, we have A(y - z) = 0. Now y - z has all of its nonzero entries in B, and because the columns of A_B are linearly independent, the fact that A(y - z) = 0 implies y - z = 0, i.e. y = z.
In the other direction, suppose that you have some extreme point x which is feasible but not basic. In other words, there are more than m nonzero entries of x, and we'll call their indices j_1, ..., j_r with r > m. The corresponding columns of A are linearly dependent (since they're r > m vectors in R^m), and so let sum_i v_{j_i} A_{j_i} = 0 be a nontrivial linear combination of these columns. Add zeros to make the v_{j_i} into a length-n vector v, so that Av = 0. Now

A(x + eps*v) = Ax + eps*Av = Ax = b

and the same holds for x - eps*v. And if we pick eps sufficiently small, x + eps*v and x - eps*v will still be nonnegative, because the only entries of x we're changing are the strictly positive ones. Then x = (1/2)(x + eps*v) + (1/2)(x - eps*v) with the two endpoints distinct, but this is very embarrassing for x, who was supposed to be an extreme point.
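To make the construction concrete, here is a tiny check of the argument on the standard-form example used later in the post. The vector v below is one hand-picked element of the nullspace of A; this illustration is my own addition, not the post's.

```python
from fractions import Fraction as F

A = [[1, 2, 1, 0], [1, -1, 0, 1]]
b = [4, 1]

def Ax(x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# a feasible point with more than m = 2 nonzero entries
x = [F(1), F(1), F(1), F(1)]
assert Ax(x) == b

# a nontrivial vector in the nullspace of A, so A(x +/- eps*v) = b
v = [F(1), F(0), F(-1), F(-1)]
assert Ax(v) == [0, 0]

eps = F(1, 2)
plus = [xi + eps * vi for xi, vi in zip(x, v)]
minus = [xi - eps * vi for xi, vi in zip(x, v)]
assert Ax(plus) == b and Ax(minus) == b
assert all(t >= 0 for t in plus + minus)  # both perturbed points stay feasible
assert x == [(p + q) / 2 for p, q in zip(plus, minus)]  # x is a strict midpoint
print("x is a strict midpoint of two feasible points, so it is not extreme")
```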
Now that we know extreme points are the same as basic feasible solutions, we need to show that any linear program that has some solution has a basic feasible solution. This is clear geometrically: any time you have an optimum it has to either lie on a line or at a vertex, and if it lies on a line then you can slide it to a vertex without changing its value. Nevertheless, it is a useful exercise to go through the algebra.
Theorem. Whenever a linear program is feasible and bounded, it has an optimal basic feasible solution.
Proof. Let x be an optimal solution to the LP. If x has at most m nonzero entries then it's a basic solution, and by the non-degeneracy assumption it must have exactly m nonzero entries. In this case there's nothing to do, so suppose that x has more than m nonzero entries. It can't be a basic feasible solution, and hence is not an extreme point of the set of feasible solutions (by the proposition above). So write it as x = ty + (1 - t)z for some feasible y != z and 0 < t < 1.

The only thing we know about x is that it's optimal. Let c be the cost vector; optimality says that c^T x >= c^T y and c^T x >= c^T z. We claim that in fact these are equal, that y, z are both optimal as well. Indeed, say y were not optimal, so that c^T y < c^T x. Since c^T x = t c^T y + (1 - t) c^T z, this can be rearranged to show that c^T z > c^T x. Unfortunately for x, this implies that it was not optimal all along.

An identical argument works to show z is optimal, too. Now we claim we can use y, z to get a new optimal solution that has fewer nonzero entries than x. Once we show this we're done: inductively repeat the argument with the smaller solution until we get down to exactly m nonzero variables. As before we know that y, z must have at least as many zeros as x. If one of them has more zeros we're done, since it's an optimal solution with fewer nonzero entries. And if they have exactly as many zeros we can do the following trick. Write w = dy + (1 - d)z for a real number d we'll choose later. Note that no matter the d, w is optimal. Rewriting w = z + d(y - z), we just have to pick a d that ensures one of the nonzero coefficients of z is zeroed out while maintaining nonnegativity. Indeed, we can just look at the index i which minimizes the ratio z_i / (z_i - y_i) over indices with z_i - y_i > 0, and use d = z_i / (z_i - y_i) for that index.
So we have an immediate (and inefficient) combinatorial algorithm: enumerate all subsets B of size m, compute the corresponding basic feasible solution x_B = A_B^{-1} b, and see which gives the biggest objective value. The problem is that there are (n choose m) subsets to check, and it's not uncommon for m to be in the tens or hundreds, so this trivial search is exponential.
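To make the cost of this brute force concrete, here is a small sketch (mine, not from the post's repository) that enumerates every basis, solves A_B x_B = b exactly with fractions, and keeps the best feasible solution. It is run on the example used later in the post, already in standard form with slack variables.

```python
from fractions import Fraction
from itertools import combinations

def solveExactly(M, v):
    """Solve the square system M x = v by Gaussian elimination over the
    rationals; return None when the columns are linearly dependent."""
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(v[i])]
           for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            return None
        aug[col], aug[pivot] = aug[pivot], aug[col]
        aug[col] = [entry / aug[col][col] for entry in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [aug[i][n] for i in range(n)]

def bruteForceLP(c, A, b):
    """Try every basis B, keep the feasible basic solutions, return the best."""
    m, n = len(A), len(A[0])
    bestX, bestValue = None, None
    for B in combinations(range(n), m):
        A_B = [[A[i][j] for j in B] for i in range(m)]
        x_B = solveExactly(A_B, b)
        if x_B is None or any(entry < 0 for entry in x_B):
            continue  # singular or infeasible basis
        x = [Fraction(0)] * n
        for j, entry in zip(B, x_B):
            x[j] = entry
        value = sum(cj * xj for cj, xj in zip(c, x))
        if bestValue is None or value > bestValue:
            bestX, bestValue = x, value
    return bestX, bestValue

# the example used later in the post, in standard form with slack variables
x, value = bruteForceLP([3, 2, 0, 0], [[1, 2, 1, 0], [1, -1, 0, 1]], [4, 1])
print(x, value)  # the optimum is x = (2, 1, 0, 0) with objective value 8
```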
So we have to be smarter, and this is where the simplex tableau comes in.
The simplex tableau
Now say you have any basis B and any feasible solution x. For now, x might not be a basic solution, and even if it is, its basis of nonzero entries might not be the same as B. We can decompose the equation Ax = b into the basis part and the non-basis part (writing B' for the complement of B):

A_B x_B + A_{B'} x_{B'} = b

and solving the equation for x_B gives

x_B = A_B^{-1} (b - A_{B'} x_{B'})

It may look like we're making a wicked abuse of notation here, but both A_B x_B and A_{B'} x_{B'} are vectors of length m, so the dimensions actually do work out. Now our feasible solution x has to satisfy Ax = b, and the entries of x are all nonnegative, so it must be that x_B >= 0 and x_{B'} >= 0, and by the equality above A_B^{-1} (b - A_{B'} x_{B'}) >= 0 as well. Now let's write the maximization objective c^T x by expanding it first in terms of the parts x_B, x_{B'}, and then expanding x_B:

c^T x = c_B^T x_B + c_{B'}^T x_{B'}
      = c_B^T A_B^{-1} (b - A_{B'} x_{B'}) + c_{B'}^T x_{B'}
      = c_B^T A_B^{-1} b + (c_{B'}^T - c_B^T A_B^{-1} A_{B'}) x_{B'}
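As a quick sanity check (my own addition, not from the post), we can verify this last identity numerically with exact arithmetic: fix a basis B, pick a few values of x_{B'}, solve for x_B, and confirm the two sides agree.

```python
from fractions import Fraction as F

# The running example in standard form; basis B = {0, 2} (x_1 and x_3).
A = [[F(1), F(2), F(1), F(0)],
     [F(1), F(-1), F(0), F(1)]]
b = [F(4), F(1)]
c = [F(3), F(2), F(0), F(0)]
B, Bp = [0, 2], [1, 3]

cols = lambda M, idx: [[row[j] for j in idx] for row in M]
dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

def inv2(M):
    # exact inverse of a 2x2 matrix
    (p, q), (r, s) = M
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

A_B, A_Bp = cols(A, B), cols(A, Bp)
A_B_inv = inv2(A_B)
c_B, c_Bp = [c[j] for j in B], [c[j] for j in Bp]

# y^T = c_B^T A_B^{-1} (multiply c_B by the transpose of A_B^{-1})
yT = matvec([list(col) for col in zip(*A_B_inv)], c_B)

for x_Bp in ([F(0), F(0)], [F(1), F(2)], [F(3), F(1)]):
    # x_B is forced by the choice of the non-basis part
    x_B = matvec(A_B_inv, [bi - ti for bi, ti in zip(b, matvec(A_Bp, x_Bp))])
    x = [F(0)] * 4
    for j, v in zip(B, x_B):
        x[j] = v
    for j, v in zip(Bp, x_Bp):
        x[j] = v
    lhs = dot(c, x)  # c^T x
    # c_B^T A_B^{-1} b + (c_{B'}^T - c_B^T A_B^{-1} A_{B'}) x_{B'}
    reduced = [cj - dot(yT, col) for cj, col in zip(c_Bp, zip(*A_Bp))]
    rhs = dot(yT, b) + dot(reduced, x_Bp)
    assert lhs == rhs
print("objective decomposition verified")
```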
If we want to maximize the objective, we can just maximize this last line. There are two cases. In the first, the vector c_{B'}^T - c_B^T A_B^{-1} A_{B'} is nonpositive and A_B^{-1} b >= 0. In the above equation, this tells us that making any component of x_{B'} bigger will decrease the overall objective. In other words, c^T x <= c_B^T A_B^{-1} b for every feasible x. Picking x = (A_B^{-1} b, 0) (with zeros in the non-basis part) meets this bound and hence must be optimal. In other words, no matter what basis B we've chosen (i.e., no matter the candidate basic feasible solution), if the two conditions hold then we're done.

Now the crux of the algorithm is the second case: if the conditions aren't met, we can pick a positive index of c_{B'}^T - c_B^T A_B^{-1} A_{B'} and increase the corresponding entry of x_{B'} to increase the objective value. As we do this, other variables in the solution will change as well (by decreasing), and we have to stop when one of them hits zero. In doing so, this changes the basis by removing one index and adding another. In reality, we'll figure out how much to increase ahead of time, and the change will correspond to a single elementary row-operation in a matrix.
Indeed, the matrix we’ll use to represent all of this data is called a tableau in the literature. The columns of the tableau will correspond to variables, and the rows to constraints. The last row of the tableau will maintain a candidate solution
to the dual problem. Here’s a rough picture to keep the different parts clear while we go through the details.
But to make it work we do a slick trick, which is to "left-multiply everything" by A_B^{-1}. In particular, if we have an LP given by c, A, b, then for any basis it's equivalent to the LP given by c, A_B^{-1} A, A_B^{-1} b — left-multiplying by an invertible matrix doesn't change the set of solutions. And so the actual tableau will be of this form.
When we say it’s in this form, it’s really only true up to rearranging columns. This is because the chosen basis will always be represented by an identity matrix (as it is to start with), so to find the basis you can find the embedded identity sub-matrix. In fact, the beginning of the simplex algorithm will have the initial basis sitting in the last few columns of the tableau.
Let’s look a little bit closer at the last row. The first portion is zero because
is the identity. But furthermore with this
trick the dual LP involves
everywhere there’s a variable. In particular, joining all but the last column of the last row of the tableau, we have the vector
, and setting
we get a candidate solution for the dual. What makes the trick even slicker is that
is already the candidate solution
, since
is the identity. So we’re implicitly keeping track of two solutions here, one for the primal LP, given by the last column of the tableau, and one for the dual, contained in the last row of the tableau.
I told you the last row was the dual solution, so why all the other crap there? This is the final slick in the trick: the last row further encodes the complementary slackness conditions. Now that we recognize the dual candidate sitting there, the complementary slackness conditions simply ask for the last row to be non-positive (this is just another way of saying what we said at the beginning of this section!). You should check this, but it gives us a stopping criterion: if the last row is non-positive then stop and output the last column.
The simplex algorithm
Now (finally!) we can describe and implement the simplex algorithm in its full glory. Recall that our informal setup has been:
- Find an initial basic feasible solution, and set up the corresponding tableau.
- Find a positive index of the last row, and increase the corresponding variable (adding it to the basis) just enough to make another variable from the basis zero (removing it from the basis).
- Repeat step 2 until the last row is nonpositive.
- Output the last column.
This is almost correct, except for some details about how increasing the corresponding variables works. What we'll really do is represent the basis variables as pivots (ones in the tableau); the value of a basic variable is the entry in the last column of the row containing the 1 of its pivot column. So, for example, the last entry in the first row may be the optimal value for x_5, if the fifth column is a pivot column whose 1 sits in the first row.
As we describe the algorithm, we’ll illustrate it running on a simple example. In doing this we’ll see what all the different parts of the tableau correspond to from the previous section in each step of the algorithm.
Our example will be to maximize 3x_1 + 2x_2 subject to x_1 + 2x_2 <= 4 and x_1 - x_2 <= 1, with x_1, x_2 >= 0. Spoiler alert: the optimum is (x_1, x_2) = (2, 1) and the value of the max is 8.
So let’s be more programmatically formal about this. The main routine is essentially pseudocode, and the difficulty is in implementing the helper functions
def simplex(c, A, b):
    tableau = initialTableau(c, A, b)

    while canImprove(tableau):
        pivot = findPivotIndex(tableau)
        pivotAbout(tableau, pivot)

    return primalSolution(tableau), objectiveValue(tableau)
Let’s start with the initial tableau. We’ll assume the user’s inputs already include the slack variables. In particular, our example data before adding slack is
c = [3, 2]
A = [[1, 2], [1, -1]]
b = [4, 1]
And after adding slack:
c = [3, 2, 0, 0]
A = [[1, 2, 1, 0], [1, -1, 0, 1]]
b = [4, 1]
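Adding the slack variables by hand is error-prone, so here is a small helper (my own sketch, not part of the post's repository) that mechanically converts less-than-or-equal constraints into the standard equality form:

```python
def addSlackVariables(c, A, b):
    """Turn  max c^T x  s.t.  Ax <= b, x >= 0  into standard equality form
    by appending one slack variable per constraint."""
    m = len(A)
    slackA = [row[:] + [1 if i == j else 0 for j in range(m)]
              for i, row in enumerate(A)]
    slackC = c[:] + [0] * m  # slack variables don't affect the objective
    return slackC, slackA, b[:]

c, A, b = addSlackVariables([3, 2], [[1, 2], [1, -1]], [4, 1])
print(c)  # [3, 2, 0, 0]
print(A)  # [[1, 2, 1, 0], [1, -1, 0, 1]]
print(b)  # [4, 1]
```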
Now to set up the initial tableau we need an initial feasible solution in mind. The reader is recommended to work this part out with a pencil, since it's much easier to write down than it is to explain. Since we introduced slack variables, our initial basis B can just be the slack variables' indices {3, 4}. And so x_B is just the slack variables, x_{B'} is the zero vector, and A_B is the 2x2 identity matrix. Now A_B^{-1} A_{B'} = A_{B'}, which is just the original two columns of A we started with, and A_B^{-1} b = b. For the last row, c_B is zero, so the part under the basis columns is the zero vector. The part under the non-basis columns is just c_{B'}^T = (3, 2).
Rather than move columns around every time the basis B changes, we'll keep the tableau columns in the fixed order x_1, ..., x_4. In other words, for our example the initial tableau should look like this.
[[ 1,  2,  1,  0,  4],
 [ 1, -1,  0,  1,  1],
 [ 3,  2,  0,  0,  0]]
So implementing initialTableau is just a matter of putting the data in the right place.
def initialTableau(c, A, b):
    tableau = [row[:] + [x] for row, x in zip(A, b)]
    tableau.append(c[:] + [0])
    return tableau
As an aside: in the event that we don't start with the trivial basic feasible solution of "trivially use the slack variables," we'd have to do a lot more work in this function. Next, the primalSolution() and objectiveValue() functions are simple, because they just extract the encoded information out from the tableau (some helper functions are omitted for brevity).
def primalSolution(tableau):
    # the pivot columns denote which variables are used
    columns = transpose(tableau)
    indices = [j for j, col in enumerate(columns[:-1]) if isPivotCol(col)]
    return [(j, variableValueForPivotColumn(tableau, columns[j]))
            for j in indices]

def variableValueForPivotColumn(tableau, column):
    # a basic variable's value sits in the last column of the row
    # that holds the 1 of its pivot column
    pivotRow = [i for i, x in enumerate(column) if x == 1][0]
    return tableau[pivotRow][-1]

def objectiveValue(tableau):
    return -(tableau[-1][-1])
Similarly, the canImprove() function just checks if there's a positive entry in the last row:
def canImprove(tableau):
    lastRow = tableau[-1]
    return any(x > 0 for x in lastRow[:-1])
Let’s run the first loop of our simplex algorithm. The first step is checking to see if anything can be improved (in our example it can). Then we have to find a pivot entry in the tableau. This part includes some edge-case checking, but if the edge cases aren’t a problem then the strategy is simple: find a positive entry corresponding to some entry
of
, and then pick an appropriate entry in that column to use as the pivot. Pivoting increases the value of
(from zero) to whatever is the largest we can make it without making some other variables become negative. As we’ve said before, we’ll stop increasing
when some other variable hits zero, and we can compute which will be the first to do so by looking at the current values of
(in the last column of the tableau), and seeing how pivoting will affect them. If you stare at it for long enough, it becomes clear that the first variable to hit zero will be the entry
of the basis for which
is minimal (and
has to be positve). This is because, in order to maintain the linear equalities, every entry of
will be decreased by that value during a pivot, and we can’t let any of the variables become negative.
All of this results in the following function, where we have left out the degeneracy/unboundedness checks.
def findPivotIndex(tableau):
    # pick the first column with a positive entry in the last row
    column = [i for i, x in enumerate(tableau[-1][:-1]) if x > 0][0]

    # among the rows with a positive entry in that column,
    # pick the one minimizing the quotient b_i / A[i][column]
    quotients = [(i, r[-1] / r[column])
                 for i, r in enumerate(tableau[:-1]) if r[column] > 0]
    row = min(quotients, key=lambda x: x[1])[0]

    return row, column
For our example, the minimizer is the entry at the second row, first column — the pivot that brings x_1 into the basis. Pivoting is just doing the usual elementary row operations (we covered this in a primer a while back on row-reduction). The pivot function we use here is no different, and in particular mutates the list in place.
def pivotAbout(tableau, pivot):
    i, j = pivot

    pivotDenom = tableau[i][j]
    tableau[i] = [x / pivotDenom for x in tableau[i]]

    for k, row in enumerate(tableau):
        if k != i:
            pivotRowMultiple = [y * tableau[k][j] for y in tableau[i]]
            tableau[k] = [x - y for x, y in zip(tableau[k], pivotRowMultiple)]
And in our example pivoting around the chosen entry gives the new tableau.
[[ 0.,  3.,  1., -1.,  3.],
 [ 1., -1.,  0.,  1.,  1.],
 [ 0.,  5.,  0., -3., -3.]]
In particular, the basis is now {x_1, x_3}, since our pivot removed the second slack variable x_4 from the basis and brought x_1 in. Currently our solution has x_1 = 1, x_3 = 3. Notice how the identity submatrix is still sitting in there, the columns are just swapped around.
There’s still a positive entry in the bottom row, so let’s continue. The next pivot is (0,1), and pivoting around that entry gives the following tableau:
[[ 0.        ,  1.        ,  0.33333333, -0.33333333,  1.        ],
 [ 1.        ,  0.        ,  0.33333333,  0.66666667,  2.        ],
 [ 0.        ,  0.        , -1.66666667, -1.33333333, -8.        ]]
And because all of the entries in the bottom row are negative, we’re done. We read off the solution as we described, so that the first variable is 2 and the second is 1, and the objective value is the opposite of the bottom right entry, 8.
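For a sanity check, here is the whole pipeline assembled into one runnable script. The helpers transpose, isPivotCol, and variableValueForPivotColumn are my own fill-ins for functions the post omits; the repository's versions may differ in details.

```python
def initialTableau(c, A, b):
    tableau = [row[:] + [x] for row, x in zip(A, b)]
    tableau.append(c[:] + [0])
    return tableau

def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

def isPivotCol(col):
    # a pivot column contains a single 1 and zeros everywhere else
    return sum(col) == 1 and len([x for x in col if x == 0]) == len(col) - 1

def variableValueForPivotColumn(tableau, column):
    pivotRow = [i for i, x in enumerate(column) if x == 1][0]
    return tableau[pivotRow][-1]

def primalSolution(tableau):
    columns = transpose(tableau)
    indices = [j for j, col in enumerate(columns[:-1]) if isPivotCol(col)]
    return [(j, variableValueForPivotColumn(tableau, columns[j]))
            for j in indices]

def objectiveValue(tableau):
    return -(tableau[-1][-1])

def canImprove(tableau):
    return any(x > 0 for x in tableau[-1][:-1])

def findPivotIndex(tableau):
    column = [i for i, x in enumerate(tableau[-1][:-1]) if x > 0][0]
    quotients = [(i, r[-1] / r[column])
                 for i, r in enumerate(tableau[:-1]) if r[column] > 0]
    row = min(quotients, key=lambda x: x[1])[0]
    return row, column

def pivotAbout(tableau, pivot):
    i, j = pivot
    pivotDenom = tableau[i][j]
    tableau[i] = [x / pivotDenom for x in tableau[i]]
    for k, row in enumerate(tableau):
        if k != i:
            pivotRowMultiple = [y * tableau[k][j] for y in tableau[i]]
            tableau[k] = [x - y for x, y in zip(row, pivotRowMultiple)]

def simplex(c, A, b):
    tableau = initialTableau(c, A, b)
    while canImprove(tableau):
        pivotAbout(tableau, findPivotIndex(tableau))
    return primalSolution(tableau), objectiveValue(tableau)

solution, value = simplex([3, 2, 0, 0], [[1, 2, 1, 0], [1, -1, 0, 1]], [4, 1])
print(solution)  # [(0, 2.0), (1, 1.0)]  ->  x_1 = 2, x_2 = 1
print(value)     # 8.0
```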
To see all of the source code, including the edge-case-checking we left out of this post, see the Github repository for this post.
Obvious questions and sad answers
An obvious question is: what is the runtime of the simplex algorithm? Is it polynomial in the size of the tableau? Is it even guaranteed to stop at some point? The surprising truth is that nobody knows the answer to all of these questions! Originally (in the 1940’s) the simplex algorithm actually had an exponential runtime in the worst case, though this was not known until 1972. And indeed, to this day while some variations are known to terminate, no variation is known to have polynomial runtime in the worst case. Some of the choices we made in our implementation (for example, picking the first column with a positive entry in the bottom row) have the potential to cycle, i.e., variables leave and enter the basis without changing the objective at all. Doing something like picking a random positive column, or picking the column which will increase the objective value by the largest amount are alternatives. Unfortunately, every single pivot-picking rule is known to give rise to exponential-time simplex algorithms in the worst case (in fact, this was discovered as recently as 2011!). So it remains open whether there is a variant of the simplex method that runs in guaranteed polynomial time.
But then, in a stunning turn of events, Leonid Khachiyan proved in the 70’s that in fact linear programs can always be solved in polynomial time, via a completely different algorithm called the ellipsoid method. Following that was a method called the interior point method, which is significantly more efficient. Both of these algorithms generalize to problems that are harder than linear programming as well, so we will probably cover them in the distant future of this blog.
Despite the celebratory nature of these two results, people still use the simplex algorithm for industrial applications of linear programming. The reason is that it’s much faster in practice, and much simpler to implement and experiment with.
The next obvious question has to do with the poignant observation that whole numbers are great. That is, you often want the solution to your problem to involve integers, and not real numbers. But adding the constraint that the variables in a linear program need to be integer valued (even just 0-1 valued!) makes the problem NP-complete. This problem is called integer linear programming, or just integer programming (IP). So we can't hope to solve IP efficiently, and rightly so: the reader can verify easily that boolean satisfiability instances can be written as integer programs where each clause corresponds to a constraint.
This brings up a very interesting theoretical issue: if we take an integer program and just remove the integrality constraints, and solve the resulting linear program, how far away are the two solutions? If they’re close, then we can hope to give a good approximation to the integer program by solving the linear program and somehow turning the resulting solution back into an integer solution. In fact this is a very popular technique called LP-rounding. We’ll also likely cover that on this blog at some point.
Oh there’s so much to do and so little time! Until next time. | http://jeremykun.com/tag/row-reduction/ | CC-MAIN-2015-48 | refinedweb | 4,800 | 59.74 |
ccp 0.4b
A Python client for the Changelog API
Send an event to a Changelog server.
Installation
To install ccp, simply:
$ pip install ccp
Supported severities
- INFO
- NOTIFICATION
- WARNING
- ERROR
- CRITICAL
Example
It is pretty easy to use:
from ccp.client import Client

client = Client("localhost", "80")
client.send("This is a simple message", "INFO", "category")
You can pass in a dict to specify additional HTTP headers, for example to do authentication:

client.send("Message", "INFO", "category", {"Authorization": "Basic base64encoded"})
Logging
Logging happens into the logger called changelog_client by default. You can override it by setting the logger property of a client instance to a Logger object.
- Version 0.5b
- Added SSL support
- Version 0.4b
- Added better logging thanks to abesto ()
- Added support for passing severity directly as an int thanks to abesto ()
- Add support for passing extra headers thanks to abesto ()
- Version 0.3b
- Initial release
- Author: Adam Papai
- License:
Copyright (c) 2014 Adam Papai
Hello,

On OI we have the file /usr/include/gmp.h. Today I tried to compile the Glasgow Haskell Compiler and compilation stopped with the following error message:

# error WORD_SIZE_IN_BITS != GMP_LIMB_BITS not supported

I am using the 32bit version of the compiler since the 64bit version fails (at least that was reported on their web site). Now the problem is that in the file gmp.h there is the following definition:

#define GMP_LIMB_BITS 64

while it should be

#define GMP_LIMB_BITS 32

I think that it would be better to have something like this
#if defined(__amd64)
#define GMP_LIMB_BITS 64
#else
#define GMP_LIMB_BITS 32
#endif

At least this solves the problem with GHC.

A.S.
----------------------
Apostolos Syropoulos
Xanthi, Greece

_______________________________________________
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
Bob nightlies are failing
Created by: tiagofrepereira2012
As you can see here, our nightlies are broken (this is just one example).

Below is a short example of how to reproduce the error:
import numpy
from bob.io.base import File

a3 = numpy.random.normal(size=(3, 4)).astype('complex128')  # not supported
File("xuxu.bindata", 'w').write(a3)
Debugging a bit (with André's help), I found that a runtime error is raised here () and we have a catch here () ready to catch the std::runtime_error, but for some reason this macro is not working (the macro is defined at this point of the code).

If you explicitly write a try{} catch(...){} block in () and raise a PyExc_RuntimeError (which is exactly what the macro does), the code just works fine.
I don't know what is going on here. | https://gitlab.idiap.ch/bob/bob.io.base/-/issues/9 | CC-MAIN-2021-31 | refinedweb | 134 | 73.68 |
Hi,
after upgrading to 1.1 everything seems to be working. But when I started RubyMine today, my list of rake tasks didn't show up correctly. My colleague has the same problem. All other menus are shown correctly. As you can see in the screenshot, the elements are still there, but "a little bit" too small.
As you can see in the log, there is no exception.
Do you guys have any idea what's going on here?
Bye,
neogrande
Attachment(s):
log.txt
rake_task_db.jpg
Is this problem still reproducible? What version of java do you use?
Hi Dennis,
yes, the problem still exists on our two machines.
# java -version
java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
I think I now know the reason for those very small Rake tasks: there are simply too many to fit on the screen. I'm sitting in front of a MacBook and also have way too many entries - but I am able to scroll them so they appear in normal size. All of this started to appear after I chose "Reload Rake Tasks"; now it not only shows Rake tasks but also seems to refer to entries in the file system. I see "tmp" and "tmp/war" for example, which are directories in my Rails project (I also have a "tmp" 'folder' for the Rake task namespace). But I also see entries like "rails_env" that don't refer to files (see attached picture).
The log doesn't show anything useful (a 'Reload Rake Tasks' doesn't produce any output) and my Java version is still 1.5.
I may have to note that I'm a colleague of neogrande and this is for the same project...
Attachment(s):
Strange Rake tasks.png
Hello,
RubyMine shows both documented rake tasks (red rake icon) and hidden rake tasks (grey rake icon with a lock). Launch "rake -P" and compare the list of tasks.
I think that these tasks are File Tasks (e.g. see)
We will check it today on a Windows computer. Actually, if you know which rake task you want to execute you should use RubyMine | Tools | Run Rake Tasks... (Alt+Shift+R / Option+R)
Hi Roman,
I have tried "rake -P" and this shows the reason - it has the same output.
Looks like we'd at least need the option to suppress undocumented tasks. It should be sufficient to be able to access them via "Run Rake Tasks.." as you may search for the namespace. Alternatively it might be a good idea to put undocumented top-level tasks into some folder 'undocumented' to avoid polluting the top level...
Is there any way to override the Rake tasks RubyMine is aware of until this is fixed?
Btw, I assume that the warbler plugin we're using for JRuby deployment to Tomcat is causing those numerous tasks.
Cheers,
Uwe
Ok, we will think about how to better implement this
The "Run Rake Tasks..." dialog was designed to let you quickly find and execute rake tasks. You can search using any prefix of any part of the namespace, and you can also use the '*' character.
Tools | Rake Tasks was designed to show/investigate all tasks in the application; obviously it shouldn't have problems like the one in your first screenshot. But using the full list of rake tasks won't be as productive as the "Run Rake Tasks..." action.
RubyMine saves the list of rake tasks in XML format to [project_root]/.idea/.rakeTasks. You can try to remove useless tasks from this file. So open .rakeTasks in RubyMine as an XML file, then ask RubyMine to reformat the code in it. Now you can edit the list of tasks and task groups. I also recommend setting the read-only attribute on .rakeTasks and backing up the patched version, because sometimes RubyMine will try to update the list of available rake tasks and may overwrite the content of the file. The new list of rake tasks will be applied after you reopen your project.
Ok, we decided to hide undocumented ('private') tasks from the Tools | Rake Tasks menu. Tools | Run Rake Task... is still able to show undocumented tasks if you enable the "include undocumented" checkbox. We also noticed that "Run Rake Task..." shows only the first 30 search results, so we've fixed this and now it behaves similarly to the "Go to file" dialog. The fix will be available in RubyMine 1.1.1
Thanks for thinking about this!
Got the point about preferably running Rake tasks via the dialog rather than the menu
In the meantime I've created an XSL file to remove the useless tasks, see attached. On a Mac (or Linux) you should be able to run it like this:
As you may need to provide some settings section for Rake for the first point, it may be a good idea to provide some type of exclude patterns (just like the content of the XSL).
Attachment(s):
RubyMine_RakeTasks.xsl
as usual, you're faster than anticipated | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206737545-Raketask-Menu-shows-strange-things?sort_by=created_at | CC-MAIN-2020-45 | refinedweb | 831 | 82.65 |
Hello everyone! In the last post I talked about the Observable pattern, and today I'm going to talk about another pattern called Pub-Sub. There are some differences between Pub-Sub and Observable, and my plan is to explain these differences, show you how Pub-Sub works, and show how you can implement it using javascript.
How does Pub-Sub work?
This pattern helps when you want to dispatch events and you want the components interested in those events to know what is happening. An Observable can only dispatch one event to everyone, but Pub-Sub can dispatch many different events, and whoever is interested must subscribe to a specific event.
An Analogy
Ford, Volkswagen, and BMW were interested in advertising new positions in their factories and decided to announce them in the newspaper.
Ford's announcement: At Ford, we are very happy to announce a new senior position available. Please apply for this opportunity, and come and work with us, ford@ford.com
Volkswagen's announcement: At Volkswagen, we are very happy to announce a new senior position available. Please, apply for this opportunity, and come and work with us, volkswagen@volkswagen.com
BMW's announcement: At BMW, we are very happy to announce a new senior position available. Please apply for this opportunity, and come and work with us, bmw@bmw.com
After a few days, many candidates applied for the opportunities and each company answered their candidates by e-mail, giving them more details about the job.
Ford email: At Ford, we are very pleased that you are interested in our new position. Thank you for applying; and we will be in contact soon.
Volkswagen email: At Volkswagen, we are very pleased that you are interested in our new position. Thank you for applying; and we will be in contact soon.
BMW email: At BMW, we are very pleased that you are interested in our new position. Thank you for applying; and we will be in contact soon.
So, at the end of the process, every company sent a message to the candidates subscribed to their opportunity, telling them about the end of the process.
Applying the analogy
Let's understand how Pub-Sub works. The first thing that we need to understand is that the newspaper was the Pub-Sub, the announcement was the event, the email was the message, the company was the publisher, and the candidate was the subscriber.

After the candidates' subscriptions, the companies dispatched the events, and the candidates subscribed to each event received the message. This example shows us that Pub-Sub is not about just one event, but many events, and each subscriber should subscribe to a specific event.
So, now that we know how Pub-Sub works, we can go on and implement it using javascript.

Implementing Pub-Sub with javascript
The first thing that we need to implement is the PubSub class; this class will be the base of our implementation. So, let's do it:
class PubSub { constructor() { this.subscribers = {}; } subscribe(event, fn) { if (Array.isArray(this.subscribers[event])) { this.subscribers[event] = [...this.subscribers[event], fn]; } else { this.subscribers[event] = [fn]; } return () => { this.unsubscribe(event, fn); }; } unsubscribe(event, fn) { this.subscribers[event] = this.subscribers[event].filter( (sub) => sub !== fn ); } publish(event, data) { if (Array.isArray(this.subscribers[event])) { this.subscribers[event].forEach((sub) => { sub(data); }); } return false; } } export default new PubSub();
The constructor of the class will create an empty object, and this object will be the base of our implementation, and we will store all the subscribers in this object.
The subscribe method will receive an event and a function, and we will store the function in the subscribers object, every event should be a property of this object and the value of this property should be an array of functions. After that, we will return a function that will filter the function that we want to remove from the subscribers array.
The unsubscribe method will receive an event and a function, and we will select the property of the subscriber object that matches the event received as a argument, and we will filter the function that we want to remove from the subscribers array.
The publish method will receive an event and data, and we will iterate over the subscribers object, and for each subscriber that matches the event received, we will call the function with the data.
The export default new PubSub(); will create a new instance of the class, and we will export it.
Implementing a use case
Now that we have the PubSub class, we can implement our use case, and we will create a basic use case. Using some html elements and javascript we will create a simple page to show us the subscription, unsubscription, and publishing of events working.
import "./styles.css"; import PubSub from "./PubSub"; const firstInput = document.getElementById("first-input"); const secondInput = document.getElementById("second-input"); const firstSubscriberBtn = document.getElementById("first-subscriber-btn"); const secondSubscriberBtn = document.getElementById("second-subscriber-btn"); const firstUnSubscriberBtn = document.getElementById("first-un-subscriber-btn"); const secondUnSubscriberBtn = document.getElementById( "second-un-subscriber-btn" ); const textFirstSubscriber = document.getElementById("first-subscriber"); const textSecondSubscriber = document.getElementById("second-subscriber"); const firstText = (e) => (textFirstSubscriber.innerText = `${e}`); const secondText = (e) => (textSecondSubscriber.innerText = `${e}`); firstInput.addEventListener("input", (e) => PubSub.publish("first-event", e.target.value) ); secondInput.addEventListener("input", (e) => PubSub.publish("second-event", e.target.value) ); firstSubscriberBtn.addEventListener("click", (e) => { e.preventDefault(); PubSub.subscribe("first-event", firstText); }); firstUnSubscriberBtn.addEventListener("click", (e) => { e.preventDefault(); PubSub.unsubscribe("first-event", firstText); }); secondSubscriberBtn.addEventListener("click", (e) => { e.preventDefault(); PubSub.subscribe("second-event", secondText); }); secondUnSubscriberBtn.addEventListener("click", (e) => { e.preventDefault(); PubSub.unsubscribe("second-event", secondText); });
The firstInput will listen for the input event, and when it happens, it will publish the first-event event, and the secondInput will listen for the same event, and when it happens, it will publish the second-event event.
The firstSubscriberBtn will listen for the click event, and when it happens, it will subscribe the first-event event, and the firstUnSubscriberBtn will listen for the click event, and when it happens, it will unsubscribe the first-event event.
The secondSubscriberBtn will listen for the click event, and when it happens, it will subscribe the second-event event, and the secondUnSubscriberBtn will listen for the click event, and when it happens, it will unsubscribe the second-event event.
The textFirstSubscriber will listen for the first-event event, and when it happens, it will update the text with the value of the event, and the textSecondSubscriber will listen for the second-event event, and when it happens, it will update the text with the value of the event.
The firstInput will listen for the input event, and when it happens, it will publish the first-event event, and the secondInput will listen for the same event, and when it happens, it will publish the second-event event.
You can see the result of the use case working in the link below:
Conclusion
Even if you don't know how to implement it, it's very important to understand how
Pub-Sub works, as
Pub-Sub is a very common pattern in many programming languages and libraries.
I hope that you found this article helpful, and if you have any questions, please let me know in the comments section.
Top comments (2)
You cant compare a function like that , function will be compared based on reference just like object
So it will return false even though is same
You are right, function will be compared base on reference, like an object.
But here I need to validate if the fn argument in the subscription and the fn argument in the unsubscription are located ate the same point of memory, if true, it means that is the same function. | https://dev.to/jucian0/pub-sub-pattern-a-brief-explanation-21ed | CC-MAIN-2022-40 | refinedweb | 1,292 | 54.52 |
Code:
#include <stdio.h>
#include <stdlib.h>
/* self referential structure */
struct listNode {
char data; /* each listNode contains a character */
struct listNode *nextPtr; /* pointer to the next node */
struct listNode *prevPtr; /* pointer to previous node */
}; /* end structure listNode */
typedef struct listNode ListNode; /* synonym for struct listNode */
typedef ListNode *ListNodePtr; /* synonym for ListNode* */
/* prototypes */
void insert( ListNodePtr *sPtr, char value );
char delete( ListNodePtr *sPtr, char value );
int isEmpty( ListNodePtr sPtr );
void printList( ListNodePtr currentPtr );
void instructions( void );
void printBackwards (ListNodePtr currentPtr );
int main()
{
ListNodePtr startPtr = NULL; /* initially there are no nodes */
int choice; /* user's choice */
char item; /* char entered by user */
instructions(); /* display the menu */
printf( "? " );
scanf( "%d", &choice );
/* loop while user does not choose 3 */
while ( choice != 3 ) {
switch ( choice )
{
case 1 :
printf( "Enter a character: ");
scanf( "\n%c", &item );
insert( &startPtr, item ); /* insert item in the list */
printList( startPtr );
printBackwards( startPtr );
break;
case 2:
/* if list is not empty */
if (!isEmpty(startPtr )) {
printf( "Enter character to be deleted: ");
scanf( "\n%c",&item);
/* if character is found remove it */
if (delete( &startPtr, item ) ) {
printf( "%c deleted.\n", item);
printList( startPtr );
printBackwards( startPtr );
} /* end if*/
else {
printf( " %c not found.\n\n", item);
} /* end else */
} /* end if */
else {
printf( "List is empty.\n\n" );
} /* end else */
break;
default:
printf( "Invalid choice.\n\n" );
instructions();
break;
} /* end switch */
printf("? ");
scanf( "%d", &choice );
} /* end while */
printf( "end of run.\n" );
return 0; /* indicate successful termination */
} /* end main */
/* display program instructions to user */
void instructions ( void )
{
printf( "Enter your choice:\n"
" 1 to insert an element into the list.\n"
" 2 to delete an element from the list.\n"
" 3 to end.\n" );
} /* end of instructions */
/* insert a new value into the list in sorted order */
void insert ( ListNodePtr *sPtr, char value)
{
ListNodePtr newPtr; /* pointer to a new node */
ListNodePtr previousPtr; /* pointer to previous node in list */
ListNodePtr currentPtr; /* pointer to current node on list */
newPtr = malloc( sizeof( ListNode )); /* create node */
if ( newPtr != NULL ) { /* is space available */
newPtr->data = value; /*place value in node */
newPtr->nextPtr = NULL; /*node does not link to another node */
newPtr->prevPtr = NULL;
previousPtr = NULL;
currentPtr = *sPtr;
/* loop to find the correct location in the list */
while (currentPtr != NULL && value > currentPtr->data) {
previousPtr = currentPtr; /* walk to........*/
currentPtr = currentPtr->nextPtr; /* .....next node */
/* add here , point up stream */
} /* end while */
/* insert new node at beginning of list */
if ( previousPtr == NULL ) {
newPtr->nextPtr = *sPtr;
if(*sPtr != NULL)
(*sPtr)->prevPtr = newPtr;
*sPtr = newPtr;
} /* end if */
else { /* insert new node between previousPtr and currentPtr */
newPtr->prevPtr = previousPtr;
previousPtr->nextPtr = newPtr;
newPtr->nextPtr = currentPtr;
if (currentPtr != NULL)
currentPtr->prevPtr = newPtr;
} /* end else */
} /* end if */
else {
printf( "%c not inserted. No memory available.\n", value );
} /* end else */
} /* end function insert */
/* delete a list element */
char delete ( ListNodePtr *sPtr, char value )
{
ListNodePtr previousPtr; /* pointer to previous node on list */
ListNodePtr currentPtr; /* pointer to current node on list */
ListNodePtr tempPtr; /* temporary node pointer */
/* delete first node */
if (value == ( *sPtr )->data) {
tempPtr = *sPtr; /* hold onto node being removed */
*sPtr = ( *sPtr )->nextPtr; /* de-thread the node */
free( tempPtr ); /* free the de-threaded node */
return value;
} /* end if */
else {
previousPtr = *sPtr;
currentPtr = ( *sPtr )->nextPtr;
/* loop to find correct location on list */
while ( currentPtr != NULL && currentPtr->data != value ) {
previousPtr = currentPtr; /* walk to ....*/
currentPtr = currentPtr->nextPtr; /*....next node*/
} /* end while */
/* delete node at currentPtr */
if ( currentPtr != NULL ) {
tempPtr = currentPtr;
previousPtr->nextPtr = currentPtr->nextPtr;
free ( tempPtr );
return value;
} /* end if */
} /* end else */
return '\0';
} /* end function delete */
/* return 1 if list is empty, 0 otherwise */
int isEmpty ( ListNodePtr sPtr )
{
return sPtr == NULL;
} /* end function isEmpty */
/* Print the list */
void printList ( ListNodePtr currentPtr )
{
/* if list is empty */
if ( currentPtr == NULL) {
printf( " The list is empty.\n\n" );
} /* end if */
else {
printf( " The list is:\n" );
/* while not the end of the list */
while ( currentPtr != NULL ) {
printf( "%c --> ", currentPtr->data );
currentPtr = currentPtr->nextPtr;
} /* end while */
printf( "NULL\n\n" );
}/* end else */
} /* end function printlist */
void printBackwards ( ListNodePtr currentPtr )
{
ListNodePtr temp = NULL;
while ( currentPtr != NULL ) {
temp = currentPtr;
currentPtr = currentPtr->nextPtr;
}
printf( "\nThe list in reverse is:\n" );
printf( "NULL" );
currentPtr = temp;
while ( currentPtr != NULL) {
printf( " <-- %c", currentPtr->data );
currentPtr = currentPtr->prevPtr;
}
printf("\n\n");
}
Wow, thank you so much. This topic in C has been driving me crazy; I've been stumped on that part for a while. Thanks a lot, that's one less thing to worry about. Now it's time to work on the delete function, heh :)
thank you again
Why did you remove your original question, now no one else knows for sure what this was about.
I agree with citizen you should not have removed your original post. You might want to repost your question for the sake of everyone.
I'm not a fan of playing Jeopardy either.
Next time we'll either quote your entire post to reply, or we simply won't answer at all.
If I wanted to take a stab at "why", my guess would be that the teacher wanders through the forums periodically. But who knows.
My original problem was that I was missing part of the linking in the program for prevPtr. When I ran the program, printing forward worked fine, but printing backwards wasn't working correctly. I was losing part of my list in my insert function and couldn't figure out why. For example, when I entered the word "dog", the "g" would be lost when printed in reverse, which showed that something was wrong with my prevPtr links.
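The delete function will need the same kind of re-linking on the prevPtr side. Here's a rough, untested sketch of what that might look like, using the same node layout as the program above (named deleteNode here so it doesn't clash with the menu-driven delete):

```c
#include <stdlib.h>

/* same shape as the listNode in the program above */
struct listNode {
    char data;
    struct listNode *nextPtr; /* pointer to the next node */
    struct listNode *prevPtr; /* pointer to previous node */
};

/* Sketch: remove the first node matching value, keeping BOTH link
   directions consistent. Returns the value, or '\0' if not found. */
char deleteNode(struct listNode **sPtr, char value)
{
    struct listNode *currentPtr = *sPtr;

    /* walk forward until the value is found or the list ends */
    while (currentPtr != NULL && currentPtr->data != value)
        currentPtr = currentPtr->nextPtr;

    if (currentPtr == NULL)
        return '\0'; /* value not in the list */

    /* re-thread the forward links */
    if (currentPtr->prevPtr != NULL)
        currentPtr->prevPtr->nextPtr = currentPtr->nextPtr;
    else
        *sPtr = currentPtr->nextPtr; /* deleting the head node */

    /* re-thread the backward links */
    if (currentPtr->nextPtr != NULL)
        currentPtr->nextPtr->prevPtr = currentPtr->prevPtr;

    free(currentPtr); /* release the de-threaded node */
    return value;
}
```

The key difference from the singly linked version is the second re-threading step: without it, printBackwards would still follow a stale prevPtr into freed memory.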
Okay cool. Thanks for coming back to put that back in.
I have a question about the doubly linked list. I was told each node points to both the previous and the next node, but I don't fully understand how that works, and the same goes for the circular linked list.
I use circular linked lists for memory management frequently. The proven advantage of circular linked lists is that it is very easy to insert elements without having any sort of "special case".
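To make that concrete, here's a small illustrative sketch (not taken from this thread) of a circular doubly linked list built around a sentinel node. Because the ring always contains at least the sentinel, insertion is a single code path with no empty-list, head, or tail special cases:

```c
#include <stdlib.h>

/* node in a circular doubly linked list */
struct cnode {
    int data;
    struct cnode *next;
    struct cnode *prev;
};

/* initialise an empty ring: the sentinel points at itself */
void ring_init(struct cnode *sentinel)
{
    sentinel->next = sentinel;
    sentinel->prev = sentinel;
}

/* splice a new node in just before `pos`; passing the sentinel as
   `pos` appends to the ring. Same four pointer updates everywhere,
   no special cases. Returns the new node, or NULL if malloc fails. */
struct cnode *ring_insert_before(struct cnode *pos, int data)
{
    struct cnode *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->data = data;
    n->prev = pos->prev;
    n->next = pos;
    pos->prev->next = n;
    pos->prev = n;
    return n;
}
```

Traversal also gets simpler: you loop until you reach the sentinel again instead of testing for NULL.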
Welcome to part 14 of my Android Development tutorial!
Here is the final complete Android Address Book App Package. All of the specific code I worked with in this tutorial can be found below.
Code From the Video
EditContact.java
// EditContact.java

package com.newthinktank.contactsapp;

import java.util.HashMap;

import android.os.Bundle;
import android.widget.EditText;
import android.app.Activity;
import android.content.Intent;
import android.view.View;

public class EditContact extends Activity {

	// Allows access to data in the EditTexts
	EditText firstName;
	EditText lastName;
	EditText phoneNumber;
	EditText emailAddress;
	EditText homeAddress;

	// The database tool class
	DBTools dbTools = new DBTools(this);

	// Sets up everything when the Activity is displayed
	public void onCreate(Bundle savedInstanceState) {

		// Get saved data if there is any
		super.onCreate(savedInstanceState);

		// Designate that edit_contact.xml is the interface used
		setContentView(R.layout.edit_contact);

		//");

		// Get the HashMap of data associated with the contactId
		HashMap<String, String> contactList = dbTools.getContactInfo(contactId);

		// Make sure there is something in the contactList
		if (contactList.size() != 0) {

			// Put the values in the EditText boxes
			firstName.setText(contactList.get("firstName"));
			lastName.setText(contactList.get("lastName"));
			phoneNumber.setText(contactList.get("phoneNumber"));
			emailAddress.setText(contactList.get("emailAddress"));
			homeAddress.setText(contactList.get("homeAddress"));
		}
	}

	public void editContact(View view) {

		HashMap<String, String> queryValuesMap = new HashMap<String, String>();

		//");

		// Put the values in the EditTexts in the HashMap
		queryValuesMap.put("contactId", contactId);

		());

		// Send the HashMap to update the data in the database
		dbTools.updateContact(queryValuesMap);

		// Call for MainActivity
		this.callMainActivity(view);
	}

	public void removeContact(View view) {

		Intent theIntent = getIntent();
		String contactId = theIntent.getStringExtra("contactId");

		// Call for the contact with the contactId provided
		// to be deleted
		dbTools.deleteContact(contactId);

		// Call for MainActivity
		this.callMainActivity(view);
	}

	// Calls for a switch to the MainActivity
	public void callMainActivity(View view) {

		// getApplication returns an Application object which allows
		// you to manage your application and respond to different actions.
		// It returns an Application object which extends Context.
		// A Context provides information on the environment your application
		// is currently running in. It provides services like how to obtain
		// access to a database and preferences.
		// Google says a Context is an entity that represents various
		// environment data. It provides access to local files, databases,
		// class loaders associated to the environment, services including
		// system-level services, and more.
		// The following Intent states that you want to switch to a new
		// Activity being the MainActivity
		Intent objIntent = new Intent(getApplication(), MainActivity.class);
		startActivity(objIntent);
	}
}
In NewContact.java at line 56 it must be called “phoneNumber” instead of “pnoneNumber”
I'll check that. Sorry about the error.
Hi Derek
I saw someone asking you to do a UI like Feedly reader's UI. The only thing exotic about the Feedly reader UI is its sliding menu, and I am interested in that too. I wonder if you could do a sliding menu tutorial. Also, you have done XML parsing. Can you do a JSON parsing app too? Thanks.
Mike
I’m going to cover fancy menus and much more. Sure I can cover JSON parsing. I should have done it in the previous tutorial. I may just add it on now. I don’t know if that would confuse people or not?
Thank you. Let me say it one more time. I do appreciate your efforts and I am forever grateful.
Thank you for being a part of my awesome community here 🙂 I appreciate it!
Thank you so much for the excellent tutorials.
You are very correct sir! Sorry about that. I need to hire you to catch my little bugs 🙂 You are very talented and have an excellent eye!
Vince,
Could you send me the solution with regards to the issue in callMainActivity?
Thank you,
Anton
Derek,
Could you send me the solution please.
Please tell me exactly what you need and I’ll provide it
Hi Derek. I have been watching your Android tutorials since you released your first one. Since then I have been hooked on developing for Android, and I have released 3 apps. The first one is a simple app for tracking your income and expenses, and it gives an overview in the form of a pie chart and a bar chart. The other two are quiz apps, a flag quiz and a soccer quiz. Although I'm mainly developing for fun and experience, it would be cool if someone actually downloaded my apps. Do you have any marketing tips to get people to notice your apps? Be it Play Store search rankings or other tricks. I know my apps aren't exactly the only ones of their kind, but I'm certain I would get more downloads if people actually noticed them. Being on the 20th page of search results doesn't exactly help me get attention.
Wow, I’m happy that the videos have helped that much! Great Job! As per marketing the apps I’d say you’re better off trying to make a YouTube video about them using keyword tricks. I mainly make Android apps for private companies so I have never had to market on the Android marketplace.
Try using the Google keyword tool to find keywords that are in high demand, but have a low level of competition.
I went there and typed in the following
android expense tracker app
best expense tracker app for android
spending tracker app android
Based off the results a YouTube video on your app named “on track app for android” might work if your app is named On Track?
I think YouTube is the best way to market for free online. It is very hard to use keywords to drive traffic anymore to a regular site. As more people start using YouTube to market seriously though that will probably no longer be true.
I hope that helps
Thanks. That tool is actually a really good resource. I’ll definitely look into uploading some videos on Youtube.
You should definitely do that!
Is there anything that I need to change in the Manifest? Thank you for the tutorials…
I provide all of the code in a complete package that you can download. I show everything that needs to be changed over the course of the tutorials. Sorry if they were confusing. I’ll do my best to improve 🙂
Hi Derek,
sorry to bother you with a question, but I’m having an issue with an application that I’m doing in android.
The application was coded and is working on my smartphone (Android 4.1.2) and on other devices with this version of Android.
But when I try it on a device with Android 2.3.7, the application fails when I go to the second activity.
I debugged the application, and it fails in the "onCreate" method (screenshot: the error is happening in the selected line).
The strange thing is that a similar method is being used for inflating the previous activity, and it seems to be working fine there…
If it helps, the manifest for the app is set to Min SDK Version = 8.
please let me know if this information is enough, or if you want me to share my project folder for you to check.
I’m sorry if I’m going to far with this, but I tried to google how to do this and I’m not getting anywhere.
Thank you and my best regards. I really appreciate what you do for us newbies.
Gastón-
Hi Derek, I wanted to say that I fixed my error. thank you!
Great I’m glad to hear that 🙂
Hi Gastón
Do you have a layout xml file named activity_uniflow_results.xml?
Without one you would get an error, but if you do have one I don’t think that line is were your error lies.
I hope that helps
Derek
Yes, I have the .XML file, and the error was there. I was using a "Space" element in my GUI, and I found out that this element was introduced in SDK version 14, hence it was not working with Android 2.3 (SDK 8). I changed the Space to a View and it started working 🙂
Great! I’m glad you fixed it 🙂
Receiving error "addressbookapp has stopped unexpectedly". I ran your contactsapp on the same AVD using Eclipse, but in your case it's working fine. What am I doing wrong?
You didn't mention changing the Android manifest file, but I changed that as well. All the code and layouts are entirely the same.
What error are you seeing in the LogCat panel in Eclipse?
Hi Derek,
Good Day to you!
I have also followed your tutorials from Tutorial 10 to Tutorial 14. I am able to run the addressbookapp, but after I click on the "ADD" button on the "Contact List" page, the app also stopped unexpectedly with the following error message: "Unfortunately, Contact List has stopped."
Below is the info from the LogCat panel:
08-12 01:28:45.745: E/Trace(873): error opening trace file: No such file or directory (2)
08-12 01:28:46.305: D/dalvikvm(873): GC_CONCURRENT freed 43K, 7% free 2777K/2964K, paused 13ms+11ms, total 77ms
08-12 01:28:46.496: D/gralloc_goldfish(873): Emulator without GPU emulation detected.
08-12 01:28:48.075: D/AndroidRuntime(873): Shutting down VM
08-12 01:28:48.075: W/dalvikvm(873): threadid=1: thread exiting with uncaught exception (group=0x40a71930)
08-12 01:28:48.115: E/AndroidRuntime(873): FATAL EXCEPTION: main
08-12 01:28:48.115: E/AndroidRuntime(873): java.lang.IllegalStateException: Could not execute method of the activity
08-12 01:28:48.115: E/AndroidRuntime(873): at android.view.View$1.onClick(View.java:3599)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.view.View.performClick(View.java:4204)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.view.View$PerformClick.run(View.java:17355)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.os.Handler.handleCallback(Handler.java:725)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.os.Handler.dispatchMessage(Handler.java:92)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.os.Looper.loop(Looper.java:137)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.ActivityThread.main(ActivityThread.java:5041) com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
08-12 01:28:48.115: E/AndroidRuntime(873): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
08-12 01:28:48.115: E/AndroidRuntime(873): at dalvik.system.NativeStart.main(Native Method)
08-12 01:28:48.115: E/AndroidRuntime(873): Caused by: java.lang.reflect.InvocationTargetException android.view.View$1.onClick(View.java:3594)
08-12 01:28:48.115: E/AndroidRuntime(873): … 11 more android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1618)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.Instrumentation.execStartActivity(Instrumentation.java:1417)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.Activity.startActivityForResult(Activity.java:3370)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.Activity.startActivityForResult(Activity.java:3331)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.Activity.startActivity(Activity.java:3566)
08-12 01:28:48.115: E/AndroidRuntime(873): at android.app.Activity.startActivity(Activity.java:3534)
08-12 01:28:48.115: E/AndroidRuntime(873): at com.newthinkjuly.addressbookapp.MainActivity.showAddContact(MainActivity.java:72)
08-12 01:28:48.115: E/AndroidRuntime(873): … 14 more
Just wondering, do you have any ideas or suggestions on how I should go about resolving this error and getting the app running?
Feel free to let me know if there is any other info you may require.
Thank you
Best Regards,
Chen
Hi Chen,
When you look at the LogCat, always look for your classes in the errors. A lot of junk errors show up there. Based on what you sent, look here for the error: com.newthinkjuly.addressbookapp.MainActivity.showAddContact(MainActivity.java:72)
I also have working code in package form that I’m sure works. I hope that helps
Derek
I was also having that problem. To fix, make sure to include the EditContact and NewContact activities in the manifest file:
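(The XML snippet itself looks to have been stripped by the comment form. For reference, the declarations would go inside the application element of AndroidManifest.xml and look roughly like this; the package and label values here are assumptions based on the tutorial's code, so adjust them to match your own project:)

```xml
<!-- Inside <application> ... </application> in AndroidManifest.xml -->
<!-- Names and labels below are assumed from the tutorial code -->
<activity
    android:name="com.newthinktank.contactsapp.NewContact"
    android:label="Add Contact" >
</activity>
<activity
    android:name="com.newthinktank.contactsapp.EditContact"
    android:label="Edit Contact" >
</activity>
```

Without these entries, calling startActivity() for either screen throws an ActivityNotFoundException, which is the crash described above.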
I want to develop an app in Android for a college campus, but I don't have any idea how I can connect to my server, because there will be dynamic updates in my app.
So I have no idea how I can do that.
Can you please give me an idea of how I can do dynamic updates in my app?
Look into BroadcastReceivers. I’ll cover them soon
Hello Derek..
DroidDraw is one app for drawing the UI.
Can we use that app to draw the UI for our Android app?
And what are the advantages and disadvantages of this app?
Hello, Droiddraw is in Beta and I haven’t found it to be worth covering. The main reason why is that it may or may not receive the support required to work with future versions of Android. I personally avoid using anything except those tools specifically recommended by Google. It is very common for tools like this to break and the support for them is most always very poor. This is just a personal preference and I know others who prefer to always use the newest tools. I don’t because I’ve been burned to many times. I hope that helps explain my position.
Thank you so much..
You’re very welcome 🙂
I want to create an AlarmManager and I have a question: will this work if, instead of a name, number, or email, I put a TimePicker and DatePicker there?
Derek,
Excellent tutorials – great job.
As Vince mentioned earlier about the addressbook app: "…".
Could you please show how this is accomplished?
Thank you – keep up the great work!
Thank you very much 🙂 It is pretty easy to refresh an activity. You just need these 2 lines of code
finish();
startActivity(getIntent());
I hope that helps. I’ll try to fit that into a tutorial
Instead of declaring contactId and the EditText objects in each method, does it work to just make them instance variables and assign them in onCreate()?
Yes you could do that. I often write everything out the long way to help people understand what is going on at each step
Hi Derek, thank you so much for those tutorials. Can you please help me with adding a blocked contact between two time slots on any day of the week? I hope that makes sense.
Sorry but I’m not sure what you are trying to do.
Hi Derek, first thank you for the tutorials, they’re quite helpful and you’re a very good teacher xD
I’m stuck in this problem that stop my application everytime I click on the Contact Name. Here is the LogCat.
At first I thought it was a problem in the xml file, but I called the edit_ficha.xml (same as edit_contact.xml) in another function and it was totally normal. So I'm thinking it might be a problem in the EditFicha class, but I don't know what the problem could be; the class is the same as your EditContact class :/
If you could help I would be very grateful o/
Oops, problems with the blockquote xD
Here's the LogCat:
09-12 19:14:46.025: E/AndroidRuntime(663): FATAL EXCEPTION: main
09-12 19:14:46.025: E/AndroidRuntime(663): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.labpet.petapp/com.labpet.petapp.EditFicha}: java.lang.NullPointerException
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread.access$600(ActivityThread.java:130)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.os.Handler.dispatchMessage(Handler.java:99)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.os.Looper.loop(Looper.java:137)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread.main(ActivityThread.java:4745)
09-12 19:14:46.025: E/AndroidRuntime(663): at java.lang.reflect.Method.invokeNative(Native Method)
09-12 19:14:46.025: E/AndroidRuntime(663): at java.lang.reflect.Method.invoke(Method.java:511)
09-12 19:14:46.025: E/AndroidRuntime(663): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
09-12 19:14:46.025: E/AndroidRuntime(663): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
09-12 19:14:46.025: E/AndroidRuntime(663): at dalvik.system.NativeStart.main(Native Method)
09-12 19:14:46.025: E/AndroidRuntime(663): Caused by: java.lang.NullPointerException
09-12 19:14:46.025: E/AndroidRuntime(663): at com.labpet.petapp.EditFicha.onCreate(EditFicha.java:41)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.Activity.performCreate(Activity.java:5008)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079)
09-12 19:14:46.025: E/AndroidRuntime(663): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023)
09-12 19:14:46.025: E/AndroidRuntime(663): … 11 more
09-12 19:14:48.115: E/Trace(678): error opening trace file: No such file or directory (2)
I looked through the errors and I don’t see any errors specific to the classes created. Try downloading my whole finished package and run that code to see what happens.
Tell me if you have any other problems.
Hi, can you provide the code for the AndroidManifest.xml file?
On this page I have a link to the whole finished package which has the manifest file in it. Here is a link to the package.
Hi Derek… I see nothing of the manifest?!?
The Manifest file is in the package for the app. Here is the package
Hello Derek,
How about using startActivityForResult() rather than startActivity()?
Is it possible?
I just wanted to know how startActivityForResult() is different from startActivity().
Umair
Hi Umair,
You use startActivity() when you simply want to start an activity and nothing else.
Use startActivityForResult() when you start an activity just to have it do something and then end. Take a picture, get input from the user, etc.
Hey Derek!
Thank you so much for all of your videos. I find them so helpful as I start this android experience.
I was wondering how to create a search bar to search for a contact. Thank you
Hey James,
You’re very welcome 🙂 Basically you just need to put an EditText box in the app that excepts input. Then when it triggers an event handler use that input in the query. Give it a try and I’m sure you’ll get it. Everything you need has been covered.
Hi derek, my apologies to bother you and whatnot but the code that you put in the .zip file actually does not work when imported into Eclipse. Are you aware of any such malfunctions?
When I first import the file, it comes up with hundreds of Build Path errors. If I use the option to 'Fix Project Automatically' (Eclipse), I get a persistent [2013-10-29 11:22:13 – ContactsApp] Unable to resolve target 'android-17'
I’d make sure you have Eclipse set up properly because I’m not getting any errors. How are you importing the code and what version of Eclipse are you using?
What error are you getting?
Hi Derek, I am having the problem of'
when I go to show all the data of my table in a ListView.
Do you know what mistake I made? Thanks.
Try changing the list id in the layout file to android:id=”@android:id/list”
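For reference, when an Activity extends ListActivity the framework looks up the built-in list id, so the layout entry would look something like this (the width/height attributes here are just illustrative):

```xml
<!-- The id must be the framework's "@android:id/list",
     not an app-defined "@+id/list", when using ListActivity -->
<ListView
    android:id="@android:id/list"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```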
Sir Derek, can you help me with this? When I run the app, the emulator shows a pop-up box that says "application unfortunately stopped", and there's an error that says'
but in my xml file I already have this ID:
android:id="@+id/list"
but still, it will just stop.
Can you please help me with this?
THANKS 😀
Try using android:id=”@android:id/list”
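For context: a ListActivity (as used in this tutorial) looks its list up by the framework id, so the layout must use exactly @android:id/list rather than a custom @+id/list. A minimal sketch of the relevant layout element (the width/height attributes are typical defaults, not copied from the tutorial):

```xml
<ListView
    android:id="@android:id/list"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```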
Hello Mr. Banas, I have a problem with my address book.
When I press the “ADD” button my address book suddenly stops.
11-15 20:39:14.820: E/AndroidRuntime(1047): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:777)
11-15 20:39:14.820: E/AndroidRuntime(1047): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:593)
11-15 20:39:14.820: E/AndroidRuntime(1047): at dalvik.system.NativeStart.main(Native Method)
11-15 20:39:14.820: E/AndroidRuntime(1047): Caused by: java.lang.reflect.InvocationTargetException
11-15 20:39:14.820: E/AndroidRuntime(1047): at java.lang.reflect.Method.invokeNative(Native Method)
11-15 20:39:14.820: E/AndroidRuntime(1047): at java.lang.reflect.Method.invoke(Method.java:515)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.view.View$1.onClick(View.java:3809)
11-15 20:39:14.820: E/AndroidRuntime(1047): … 11 more
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1628)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Instrumentation.execStartActivity(Instrumentation.java:1424)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Activity.startActivityForResult(Activity.java:3423)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Activity.startActivityForResult(Activity.java:3384)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Activity.startActivity(Activity.java:3626)
11-15 20:39:14.820: E/AndroidRuntime(1047): at android.app.Activity.startActivity(Activity.java:3594)
11-15 20:39:14.820: E/AndroidRuntime(1047): at com.vinceport.addressbookapp.AddressBook.showAddContact(AddressBook.java:71)
11-15 20:39:14.820: E/AndroidRuntime(1047): … 14 more
Here’s what the LogCat says.
Here is a link to the whole package. Try importing it into Eclipse and running it. That will clear the errors. Then use a website like diffnow.com to compare your class files, layout files and manifest to mine to find the error.
It seems from the log that the
manifest file is missing the declarations of the other activities you created; declare those there as well:
EditContact
NewContact
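The declarations go inside the &lt;application&gt; element of AndroidManifest.xml; a sketch (the package-relative names assume the classes are in the app's root package):

```xml
<!-- inside <application> … </application> in AndroidManifest.xml -->
<activity android:name=".NewContact" />
<activity android:name=".EditContact" />
```

Without these entries, startActivity() throws the ActivityNotFoundException seen in logs like the one below.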
Thank you for your VERY GOOD tutorials, Mr. Derek, and I really appreciate them :DD
but I have a little problem …
when I input some records and press the SAVE button,
my address book suddenly STOPS ..
can you please help me with this ?
here the errors . . .
11-18 17:01:18.460: E/AndroidRuntime(872): at android.view.View$1.onClick(View.java:3809)
11-18 17:01:18.460: E/AndroidRuntime(872): … 11 more
11-18 17:01:18.460: E/AndroidRuntime(872): Caused by: java.lang.NullPointerException
11-18 17:01:18.460: E/AndroidRuntime(872): at com.vinceport.addressbookapp.NewContact.addNewContact(NewContact.java:42)
11-18 17:01:18.460: E/AndroidRuntime(872): … 14 more
THANK YOU in advance Mr. Derek 😀
You’re very welcome 🙂 You get that error when something has the value of null when it shouldn’t. You have a typo some place in your code. Compare your class files to mine using something like diffnow.com.
I hope that helps 🙂
Hi! I wrote on your YouTube video about posting errors. I’ve managed to fix most of the ones myself but I’m just stuck on this one. I have used the diffnow site to try and spot differences between my code and what you have supplied but now I’m struggling to see any! When I add a contact I get taken back to blank screen and errors appear on my LogCat. This doesn’t happen with your code, only with mine! The errors are as follows…
11-30 15:08:06.555: E/SQLiteLog(7504): (1) table contacts has no column named emailAddress
11-30 15:08:06.565: E/SQLiteDatabase(7504): Error inserting emailAddress=joe@email.com lastName=Bloggs phoneNumber=01111111111 firstName=Joe homeAddress=1 test street
11-30 15:08:06.565: E/SQLiteDatabase(7504): android.database.sqlite.SQLiteException: table contacts has no column named emailAddress (code 1): , while compiling: INSERT INTO contacts(emailAddress,lastName,phoneNumber,firstName,homeAddress) VALUES (?,?,?,?,?)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:1013)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteConnection.prepare(SQLiteConnection.java:624)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteSession.prepare(SQLiteSession.java:588)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteProgram.(SQLiteProgram.java:58)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteStatement.(SQLiteStatement.java:31)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteDatabase.insertWithOnConflict(SQLiteDatabase.java:1467)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.database.sqlite.SQLiteDatabase.insert(SQLiteDatabase.java:1339)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at com.example.youtubeaddressbook.DBTools.insertContact(DBTools.java:43)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at com.example.youtubeaddressbook.NewContact.addNewContact(NewContact.java:47)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.view.View$1.onClick(View.java:3655)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.view.View.performClick(View.java:4162)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.view.View$PerformClick.run(View.java:17152)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.os.Handler.handleCallback(Handler.java:615)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.os.Handler.dispatchMessage(Handler.java:92)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.os.Looper.loop(Looper.java:137)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at android.app.ActivityThread.main(ActivityThread.java:4867)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1007)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:774)
11-30 15:08:06.565: E/SQLiteDatabase(7504): at dalvik.system.NativeStart.main(Native Method)
Any tips? I’m baffled!
The emailAddress wasn’t created for some reason in the database. I have the whole package here which you can compare to your code to find the typo. Compare your class files to the ones in the package using a site like diffnow.com and you’ll see the error. I hope that helps 🙂
I retyped emailAddress and then it somehow worked! very bizarre! But thank you 🙂
In the meantime I went through all the tutorials again and made another one, and with that one I have an issue with the list view: it just shows the first name twice for an added item, and when I go to edit it, it crashes with errors about a null pointer in onCreate.
Sorry for all the questions, I just really want to understand whats going on with all aspects of it!
The best thing to do is to get the class files in my src folder and my layout files and compare yours to mine using something like diffnow.com. It will point out the differences. I hope that helps 🙂
I noticed in the youtube comments that others had the same problem, the manifest file does not seem to update automatically for everyone, maybe this should be mentioned in the video? 🙂
I’m talking about the two activity tags:
Another question: if I want the entries to show up like “lastName, firstName” instead, is there any “pretty” way of doing this?
I changed the getAllContacts method to do
contactMap.put(“lastName”, cursor.getString(2) + “, “);
But this doesn’t feel like “best practice”.
I did everything pretty nice. But when I run the application in my Emulator it shows and error and says forced to close the application, and crashes. Why is that?
What error shows up in the LogCat panel in Eclipse? Try using the included package I provide and see if you get any errors.
Your package works fine..
Actually the Problem was the ‘list view name’..
I missed the ‘@android:…’ format for list view..
Thanks for your great tutorials(even though you are as fast as a metro train! 🙂 )
Thank you 🙂 Yes I’m working on finding the best speed for the tutorials.
great stuff as always Derek!
Thanks a lot for all of the fantastic tutorials ~
I have tried testing the app on my phone as well as an AVD and get the same result.
I click “ADD” to add a contact and it lets me walk through the fields and click “Save”. But then it does not display the contact I just added. However it adds a record because it allows me to press where the contact name should be and then gives me an error about EditContact. I figure I am getting an error on EditContact because there is no contact to edit!
Any idea on what I may be missing?
Thank you very much 🙂 Have you tried testing the final complete package for my Android Address App? Tell me if that doesn’t work. What errors are you seeing?
AHA! I finally got it ~
I tried importing your code but kept getting errors (I tried cleaning the project and everything but couldn’t get it to work), so I just started a new project, hand-created the files, copied and pasted, and got it to work and run properly, yay!
So then I had to figure out what the differences were in your code and mine. And keep in mind when I say “my code”, that was just your code I copied and pasted from each page of this tutorial so when I mention these differences below, you may want to change them on your tutorial pages if that matters to you 🙂
1) DBTools.java: added @Override before oncreate and onupgrade
2) DBTools.java: added ContactID field on the contactMap.put
3) MainActivity.java: changed theIntent to theIndent (maybe just a typo but I changed it anyways)
4) MainActivity.java: changed getApplication() to getApplicationContext in showAddContact (I thought this for sure was going to fix immediately but I saved and ran it and still the same result)
— Now these next 2 changes are what really made the difference —
5) contact_entry.xml: changed contactName textview into 2 separate ones (firstName and lastName) – when I did this, all my contact names showed up in the list, but
still had error on edit screen
6) edit_contact.xml: changed all editText names from lastNameEditText format to just lastName format (started displaying everything then and working properly!)
(one last side note: in NewContact.java you have phoneNumber spelled as pnoneNumber)
Thanks again Derek; after I figured out how to compare files within Eclipse, it made this process much quicker for me to find what was different.
I leave this comment with one question too: I see how you use the SQLite3 db to add and edit records locally, but how do you set up a database that anyone in the world can edit and add records to? So for example, take this same app idea and allow multiple people all over the world to add contact names to the same database?
thanks~!
Great I’m glad you fixed it and thank you for posting such a detailed comment to help others.
To set up a shared database it is better to access a database on an outside web server. I plan on covering how to do that soon.
Great, and no problem on the post…
I can connect to DB with web development, just do not know the steps for app development..
Going to continue taking your tutorials and look forward to that topic someday! 🙂
thanks again ~
Hi Derek,
Thanks for the cool tutorials.
I had one problem with this address book app.
When I create new contact, it will be saved.
But in the list view it only shows the contact id.
What is the problem?
Appreciate ur help
You’re very welcome 🙂 Try downloading the completed package and testing it. Here it is Android Address Book App. Tell me if that doesn’t work.
Derek
Hey Sir Derek! I like your tutorials about Android. I like them so much because you explain very well and it helped me a lot on my projects. Thank you so much. I hope you could do a tutorial about searching sqlite database. Thank you!!
Thank you very much 🙂 I covered SQLite pretty well over the course of a few videos. Have you seen these SQLite Tutorials?
Hello Derek.
First of all, I just wanted to tell you that you are amazing (and I know you hear it a lot).
Now for my issue: I have a project to make and it’s very similar to what you’ve done; the difference is that I need to set the value of my contact from a spinner and a radio button.
to get the values from the radio button and the spinner I used queryValuesMap.put(“productQuality”, rgQuality.getContext().toString());
I just don’t know how to set them back to their original state.
I hope you understood me correctly, if not I would be more than happy to send you my code so you can review it.
Thank you 🙂 Why not take the value and record it some place and then set the spinner / radio button back to the default? I may not understand the problem. I hope that helps in some way.
Thank you for trying anyway 🙂
I’m kind of new to development writing, so it’s really hard for me to explain what I need.
My teacher helped me with that issue; the thing is that I had a spinner that depended on another spinner, so it was hard for me to turn it back to its original state, and I did save the values in an SQLite database.
I wanted to ask you if you can make a guide for sliding menu using a navigation drawer.
thank you in advance
I will definitely cover that when i get back into regular Java Android tutorials.
Hello Derek. I loved learning how to build a SQLite project with Java. Unfortunately, this thing won’t exactly run and I don’t know why. When I attempt to, it takes me to the first screen. The second I hit the “ADD” button, it crashes, and it’s really hurting my head why. I compared my code to yours line by line and I can’t think of why this still fails.
It’s like a cruel April Fools’ joke. =(
The debugger was giving me errors along the lines of me not defining my source properly. After messing with that enough, I thought I fixed it, but it still refuses to work.
Allow me to paste what the debugger presently has to say.
04-01 19:36:57.919: W/dalvikvm(2770): threadid=1: thread exiting with uncaught exception (group=0x40a71930)
04-01 19:36:58.006: E/AndroidRuntime(2770): FATAL EXCEPTION: main
04-01 19:36:58.006: E/AndroidRuntime(2770): java.lang.IllegalStateException: Could not execute method of the activity
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.view.View$1.onClick(View.java:3599)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.view.View.performClick(View.java:4204)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.view.View$PerformClick.run(View.java:17355)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.os.Handler.handleCallback(Handler.java:725)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.os.Handler.dispatchMessage(Handler.java:92)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.os.Looper.loop(Looper.java:137)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.ActivityThread.main(ActivityThread.java:5041)
04-01 19:36:58.006: E/AndroidRuntime(2770): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
04-01 19:36:58.006: E/AndroidRuntime(2770): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
04-01 19:36:58.006: E/AndroidRuntime(2770): at dalvik.system.NativeStart.main(Native Method)
04-01 19:36:58.006: E/AndroidRuntime(2770): Caused by: java.lang.reflect.InvocationTargetException
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.view.View$1.onClick(View.java:3594)
04-01 19:36:58.006: E/AndroidRuntime(2770): … 11 more
04-01 19:36:58.006: E/AndroidRuntime(2770): Caused by: android.content.ActivityNotFoundException: Unable to find explicit activity class {com.codecrunchcorner.addressbookapp/com.codecrunchcorner.addressbookapp.NewContact}; have you declared this activity in your AndroidManifest.xml?
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1618)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Instrumentation.execStartActivity(Instrumentation.java:1417)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Activity.startActivityForResult(Activity.java:3370)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Activity.startActivityForResult(Activity.java:3331)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Activity.startActivity(Activity.java:3566)
04-01 19:36:58.006: E/AndroidRuntime(2770): at android.app.Activity.startActivity(Activity.java:3534)
04-01 19:36:58.006: E/AndroidRuntime(2770): at com.codecrunchcorner.addressbookapp.MainActivity.showAddContact(MainActivity.java:79)
04-01 19:36:58.006: E/AndroidRuntime(2770): … 14 more
Can you help me out here? =/
This is the main error: Unable to find explicit activity class {com.codecrunchcorner.addressbookapp/com.codecrunchcorner.addressbookapp.NewContact}; have you declared this activity in your AndroidManifest.xml?
Hi Derek!
First of all, I’d like to thank you for providing these awesome tutorials!
I have a problem. When I’m trying to run the app on my phone/emulator, it says “Unfortunately (app name) has stopped working”. What should I do?
Here’s the LogCat.
04-18 06:40:05.854: D/AndroidRuntime(932): Shutting down VM
04-18 06:40:05.854: W/dalvikvm(932): threadid=1: thread exiting with uncaught exception (group=0x414c4700)
04-18 06:40:05.944: E/AndroidRuntime(932): FATAL EXCEPTION: main
04-18 06:40:05.944: E/AndroidRuntime(932): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.vitteachersdatabase/com.example.vitteachersdatabase.MainActivity}: android.view.InflateException: Binary XML file line #16: Error inflating class
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2211)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2261)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread.access$600(ActivityThread.java:141)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1256)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.os.Handler.dispatchMessage(Handler.java:99)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.os.Looper.loop(Looper.java:137)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread.main(ActivityThread.java:5103)
04-18 06:40:05.944: E/AndroidRuntime(932): at java.lang.reflect.Method.invokeNative(Native Method)
04-18 06:40:05.944: E/AndroidRuntime(932): at java.lang.reflect.Method.invoke(Method.java:525)
04-18 06:40:05.944: E/AndroidRuntime(932): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:737)
04-18 06:40:05.944: E/AndroidRuntime(932): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
04-18 06:40:05.944: E/AndroidRuntime(932): at dalvik.system.NativeStart.main(Native Method)
04-18 06:40:05.944: E/AndroidRuntime(932): Caused by: android.view.InflateException: Binary XML file line #16: Error inflating class
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.createView(LayoutInflater.java:620)
04-18 06:40:05.944: E/AndroidRuntime(932): at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.onCreateView(LayoutInflater.java:669)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:694)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.rInflate(LayoutInflater.java:755)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.rInflate(LayoutInflater.java:758)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.inflate(LayoutInflater.java:492)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.inflate(LayoutInflater.java:397)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.inflate(LayoutInflater.java:353)
04-18 06:40:05.944: E/AndroidRuntime(932): at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:267)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.Activity.setContentView(Activity.java:1895)
04-18 06:40:05.944: E/AndroidRuntime(932): at com.example.vitteachersdatabase.MainActivity.onCreate(MainActivity.java:25)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.Activity.performCreate(Activity.java:5133)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1087)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2175)
04-18 06:40:05.944: E/AndroidRuntime(932): … 11 more
04-18 06:40:05.944: E/AndroidRuntime(932): Caused by: java.lang.reflect.InvocationTargetException
04-18 06:40:05.944: E/AndroidRuntime(932): at java.lang.reflect.Constructor.constructNative(Native Method)
04-18 06:40:05.944: E/AndroidRuntime(932): at java.lang.reflect.Constructor.newInstance(Constructor.java:417)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.view.LayoutInflater.createView(LayoutInflater.java:594)
04-18 06:40:05.944: E/AndroidRuntime(932): … 25 more
04-18 06:40:05.944: E/AndroidRuntime(932): Caused by: android.content.res.Resources$NotFoundException: File #FFF from drawable resource ID #0x7f05000e: .xml extension required
04-18 06:40:05.944: E/AndroidRuntime(932): at android.content.res.Resources.loadColorStateList(Resources.java:2255)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.content.res.TypedArray.getColorStateList(TypedArray.java:342)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.widget.TextView.(TextView.java:949)
04-18 06:40:05.944: E/AndroidRuntime(932): at android.widget.TextView.(TextView.java:607)
04-18 06:40:05.944: E/AndroidRuntime(932): … 28 more
04-18 06:40:11.974: I/Process(932): Sending signal. PID: 932 SIG: 9
You’re very welcome 🙂 Have you tried downloading the package I have on this page? If it runs, then that means you have a typo some place. Use a website like diffnow to compare my files to yours. Just check the manifest, layout folder and src folder. I hope that helps.
Hi, your tutorials are absolutely amazing. The application runs, but the list does not populate when I add a contact; even if I leave all the EditTexts empty, it still goes back to the main activity. What am I doing wrong?
Thank you 🙂 I have the whole package available for download on this page. Have you tried that? If my code works, then there must be a typo. Then you could compare the files in the layout folder and src folder using something like diffnow. I hope that helps.
Hi Derek,
Thanks a lot for these awesome tutorials. It really helped a newbie like me.
I have 2 scenarios.
I coded the address book app by following your videos.
But when I try to run it, it quits unexpectedly.
The reason may be ‘fragment_main.xml’. Whenever I create a new package, it creates ‘fragment_main.xml’ by default, and I am not able to create a package without it.
Secondly, I downloaded the package from your website but still was not able to run it. I followed your instructions in tutorial 14 to import it. After the import, it shows lots of errors, in the ‘NewContact.java’ file only. Also, the console shows ‘Unable to resolve target ‘android-17’’. Please let me know if I am doing something wrong.
I show how to fix the Android fragment error here. I’ll be making a new Java Android tutorial very soon.
Hi Derek, how can I add a simple Contact search in this app Thxs in advance.
You’ll have to issue a query to the database. I have an SQLite tutorial that will show how.
That worked out pretty well, thanks.
“String selectQuery = “SELECT * FROM contacts WHERE lastname LIKE ‘”+ text + “%'”;”
This query did the job. Now, is it possible to filter the contact list without using this method?
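One caveat with building the query by string concatenation as above: a last name containing an apostrophe will break the SQL. Android's SQLiteDatabase.rawQuery() can bind the value instead; a sketch (the database/text variable names are placeholders for whatever the surrounding code uses):

```java
// "?" is a bound parameter; SQLite escapes the value for us.
String selectQuery = "SELECT * FROM contacts WHERE lastName LIKE ?";
Cursor cursor = database.rawQuery(selectQuery, new String[] { text + "%" });
```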
Hi Derek!
Thanks for the great tutorials; they were really helpful.
I would like to know if it is possible to create two databases in the same Android application, or to create two tables (in the same database) in the same Android app.
Can you please provide me with some links to tutorials covering that subject?
Thanks a lot!
Yes you can have multiple databases in one app and multiple tables. Then you can use adb to view the databases. I have an SQLite tutorial here.
Hi Derek!
I’m new to android programming and I was wondering why when I imported your whole project to my eclipse the R.java is missing in the generated package?
Great tutorials btw. thanks.
Jed
Hi Jed, Click Project -> Clean and it will be generated
Hi MASTER DEREK.
I’m fiddling with my first app and another question arose:
suppose I’d like to have several phones/emails per contact.
I guess I create another contact-phone TABLE (with contactId as the foreign key).
How should I deal with that added table regarding inserting,
updating and deleting data?
Thank you so much man!
Hi,
I’d go this route for designing that in the database, but you could also just list each possible phone number in one table
Contact (
    ContactId
)
ContactPhone (
    ContactId references Contact(ContactId)
    PhoneType
    PhoneNumber
)
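Spelled out as SQLite DDL, the schema above might look like this (the column types and the extra surrogate key column are assumptions, not from the comment):

```sql
CREATE TABLE Contact (
    ContactId INTEGER PRIMARY KEY AUTOINCREMENT
);

CREATE TABLE ContactPhone (
    ContactPhoneId INTEGER PRIMARY KEY AUTOINCREMENT,
    ContactId INTEGER REFERENCES Contact(ContactId),
    PhoneType TEXT,
    PhoneNumber TEXT
);
```

With this layout, inserting a contact with two phone numbers means one row in Contact plus two rows in ContactPhone, and deleting the contact should also delete its ContactPhone rows (SQLite can do this automatically with ON DELETE CASCADE once foreign keys are enabled via PRAGMA foreign_keys = ON).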
First, thanks for this series.
Second I am having an issue with the listView in activity_main.
Everything works but nothing is displayed. I can tap on a row and it goes to the edit screen (after adding records). I am not sure if the listAdapter is not filling in the list (my guess) or if the list is white for both text and background.
I suspect all of this is the result of some bug/quirk with Android Studio.
Thoughts?
Thanks
The code is the same on Android Studio. Have you tried importing the package I provide for this tutorial? | http://www.newthinktank.com/2013/06/android-development-14/?replytocom=26801 | CC-MAIN-2021-31 | refinedweb | 7,856 | 53.88 |
Author: arigo
Date: Wed Oct 8 14:58:21 2008
New Revision: 58816
Modified:
psyco/www/Makefile
psyco/www/content/index.rst
psyco/www/last-updated
psyco/www/show/psycowriter.py
Log:
Hack until "easy_install psyco" guesses the correct url.
Modified: psyco/www/Makefile
==============================================================================
--- psyco/www/Makefile (original)
+++ psyco/www/Makefile Wed Oct 8 14:58:21 2008
@@ -39,7 +39,8 @@
-rm -fr htdocs/*
sf: $(all-htdocs-files)
- rsync -r --rsh=ssh --delete -z htdocs arigo@...:/home/groups/p/ps/psyco
+ cat /home/arigo/octogone/locked/sourceforge
+ rsync -r --rsh=ssh --delete -z htdocs arigo,psyco@...:/home/groups/p/ps/psyco
touch $@
htdocs:
Modified: psyco/www/content/index.rst
==============================================================================
--- psyco/www/content/index.rst (original)
+++ psyco/www/content/index.rst Wed Oct 8 14:58:21 2008
@@ -4,7 +4,6 @@
High-level languages need not be slower than low-level ones.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
**Psyco** is a Python extension module which can massively speed up the execution of any Python code.
@@ -76,127 +75,6 @@
Released `Psyco 1.5.2@` with Python 2.5 support. (Windows binaries: contribution welcome.)
-12 October 2006
-
- A snapshot of the latest sources is now maintained at
- .
- See `installing from sources@.../sources.html`.
- Version 1.5.2 is not quite released yet, although the number 1.5.2
- is already used on some web pages (just for confusion).
- The snapshot is a release candidate: please report any problem.
-
-11 October 2006
-
- In case you're trying out Psyco on a new Mac OS/X using the Intel
- processor, I said somewhere that it should "just work". Well, chances
- are that it would if someone would like to invest a little bit of time
- fixing the following known issue:
-
-
-03 October 2006
-
- Fixed a problem about threads suddenly going into restricted mode.
- I have no bugs left on my to-fix list; the current
- `subversion head@` seems
- quite stable again. Please report any issue :-)
-
:-)
-
-23 November 2005
- Added a Windows binary for `Psyco 1.5@` on Python 2.4 (thanks Alexander!).
-
-30 October 2005
- Tagged the current Subversion head as `Psyco 1.5@`. This is probably the last release of Psyco (unless incompatibilities with the upcoming CPython 2.5 show up later, but it works with the CVS CPython 2.5 at the moment). This release contains nothing new if you already got the Subversion version.
-
.
-
!
-
-30 July 2004
- Psyco will be presented at the `PEPM'04@`
- conference, part of ACM SIGPLAN 2004.
- The paper is available (compressed Postscript `[A4]@psyco-pepm-a.ps.gz`
- or `[Letter]@psyco-pepm-l.ps.gz`).
-
-29 April 2004
- Following the Python UK conference at
- `ACCU 2004@` here are some
- `animated slides@...` that are, as far as I can tell, my
- best attempt so far at trying to explain how Psyco works.
- (`Pygame@` required)
-
-4 March 2004
- Bugfix release `Psyco 1.2@`. Includes support for Fedora, plus a number of smaller bug fixes. This version does not yet work correctly on platforms other than PCs. I will need to spend some time again on the 'ivm' portable back-end before that dream comes true :-)
-
-21 Aug 2003
- The Linux binaries have been compiled for the recent 'glibc-2.3', although a lot of systems still have 'glibc-2.2'. See the `note about Linux binaries@.../binaries.html`.
-
-19 Aug 2003
- Fixbug release `Psyco 1.1.1@`. Fixes `loading problems@` both on Windows and Red Hat Linux.
-
-15 Aug 2003
- Released `Psyco 1.1@`. Contains the enhancements described below, the usual subtle bug fixes, and complete Python 2.3 support.
-
-16 Jun 2003
- Enough new things that I would like to make a release 1.1 soon. Top points: Psyco will now inline calls to short functions, almost cancelling the cost of creating small helpers like 'def f(x): return (x+1) & MASK'. And I have rewritten the string concatenation implementation, as the previous one was unexpectedly inefficient: now using 's=s+t' repeatedly to build a large string is at least as efficient as filling a cStringIO object (and more memory-conservative than using a large list of small strings and calling '"".join()' at the end).
-
-5 May 2003
- `Release 1.0@` is out. Note that Psyco is distributed under the MIT License, and no longer under the GPL as it used to be.
-
- The plan for the next release is to include a fast low-level interpreter that can be used on non-Intel processors. It will finally make Psyco portable -- althought of course not as fast as it could possibly be if it could emit real machine code.
-
- IRC users, try irc.freenode.net channel #psyco.
-
-1 May 2003
- Psyco is now compatible with the new `Python 2.3b1@`. This and other bug fixes, plus positive feedback, allow me to officially announce the release of Psyco 1.0 (which should take place in a few hour's time, please come back soon!).
-
-17 Mar 2003
- Major new `beta release 1.0.0b1@` containing the accumulated enhancements from the CVS tree! Also comes with a `complete guide@.../index.html`! The web site has been updated; outdated information was removed. I will soon tell more about how I currently see Psyco's future.
-
-12 Sep 2002
-
- Various bug fixes have been committed in CVS. Next release soon. See also the new `links` page.
-
-11 Aug 2002
-
- `Release 0.4.1@` is out. A major new feature I recently added is the reduced memory consumption. On some examples, Psyco uses several times less memory than it used to!
-
-7 Aug 2002
-
- The new site is up and running. I will take the current CVS source and release it as a stable version within the next few days.
-
-24 Jul 2002
-
- Psyco talk at the Open Source Convention 2002, San Diego. This talk will eventually be turned into a written document; in the meantime, you can see the `slides@.../header.html` (or `download them@...`).
-
-26 Jun 2002
-
- Psyco talk at the EuroPython, Charleroi. Same `slides@.../header.html` as above.
About
=====
Modified: psyco/www/last-updated
==============================================================================
--- psyco/www/last-updated (original)
+++ psyco/www/last-updated Wed Oct 8 14:58:21 2008
@@ -1 +1 @@
-Mon, 7 Jan 2008
+Wed, 8 Oct 2008
Modified: psyco/www/show/psycowriter.py
==============================================================================
--- psyco/www/show/psycowriter.py (original)
+++ psyco/www/show/psycowriter.py Wed Oct 8 14:58:21 2008
@@ -15,6 +15,10 @@
FOOTER = '''<A href=""> <IMG src="";</A>
'''
+CUSTOM_HACK_HTML = '''
+<div style="visibility:hidden" blame="easy_install"> <a href="">psyco snapshot</a> </div>
+'''
+
class HTMLTranslator(HTMLTranslatorBase):
TDOpen = (
@@ -33,6 +37,7 @@
HTMLTranslatorBase.__init__(self, doctree)
self.title = title
self.body = ['</HEAD>\n<BODY TEXT="#000000" BGCOLOR="#FFFFFF" LINK="#0000EE" VLINK="#551A8B" ALINK="#FF0000">\n']
+ self.body.append(CUSTOM_HACK_HTML)
if not self.title:
self.foot.insert(0, FOOTER) | http://sourceforge.net/p/psyco/mailman/psyco-checkins/thread/20081008125822.27FBB169E7D@codespeak.net/ | CC-MAIN-2014-35 | refinedweb | 1,146 | 68.47 |
CFD Online Discussion Forums - FLUENT - Interpret three UDF for property
Atsu
April 21, 2006 12:19
Interpret three UDF for property
Hi all,
I am trying to use UDFs for three properties: density, thermal conductivity and viscosity. When I compile the UDFs I end up with only one; the last one compiled overwrites the previous one, so I cannot define three different properties using the UDFs. I compiled them with the "Interpreted UDF" option. How can I use the three UDFs at the same time?
I would appreciate any help and suggestions
thank you Atsu
Atsu
April 21, 2006 13:41
Help: Interpret three UDF for property
Hi again,
I tried to set each UDF one at a time in another way: 1. interpret density.c -> set density on Mat. prop. 2. interpret viscosity.c -> set viscosity on it 3. interpret thermcond.c -> set therm. cond. on it. I then tried to calculate, but it would not run. How can I use the three UDFs at the same time?
Please help, Atsu
Markus
April 22, 2006 05:31
Re: Help: Interpret three UDF for property
Hi Atsu,
You don't have to use different c-files for each UDF. Just put all the DEFINE_PROPERTY macros in one single c-file, compile this file, and all the UDFs will be available. Cheers Markus
Atsu
April 22, 2006 12:12
Re: Help: Interpret three UDF for property
Hi Markus,
Thank you for your help. Could you show me more about how I should make one single c-file for several properties? Should I make a c-file for the three properties as follows?
#include "udf.h"
DEFINE_PROPERTY(prop_name, cell, thread)
{
real temp, press, density, thermalcond, visco;
temp = C_T(cell, thread);
press = C_P(cell, thread);
{/* equation for density */}
{/* equation for thermalcond */}
{/* equation for visco */}
C_MU_L(cell, thread) = visco;
C_R(cell, thread) = density;
C_K_L(cell, thread) = thermalcond;
}
Atsu
April 22, 2006 15:04
Re: Help: Interpret three UDF for property
Hi Markus,
It runs now!! Thank you very much for your advice.
I had to write it as listed below:
#include "udf.h"
DEFINE_PROPERTY(density, cell, thread)
{/* equation for density */}
DEFINE_PROPERTY(thermalcond, cell, thread)
{/* equation for thermalcond */}
DEFINE_PROPERTY(viscosity, cell, thread)
{/* equation for visco */}
Structured Streaming Programming Guide [Alpha]
- Overview
- Quick Example
- Programming Model
- API using Datasets and DataFrames
- Creating streaming DataFrames and streaming Datasets
- Operations on streaming DataFrames/Datasets
- Starting Streaming Queries
- Managing Streaming Queries
- Recovering from Failures with Checkpointing
- Where to go from here

Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. You can use the Dataset/DataFrame API in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, and more.
Spark 2.0 is the ALPHA RELEASE of Structured Streaming and the APIs are still experimental. In this guide, we are going to walk you through the programming model and the APIs. Let's start with a simple example - a streaming word count over text data received from a server listening on a TCP socket. If you download Spark, you can directly run the example. In any case, let's walk through the example step-by-step and understand how it works. First, we have to import the necessary classes and create a local SparkSession, the starting point of all functionalities related to Spark.
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .appName("StructuredNetworkWordCount")
  .getOrCreate()

import spark.implicits._
Next, let’s create a streaming DataFrame that represents text data received from a server listening on localhost:9999, and transform the DataFrame to calculate word counts.
// Create DataFrame representing the stream of input lines from connection to localhost:9999
Dataset<Row> lines = spark
  .readStream()
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load();

// Split the lines into words
Dataset<String> words = lines
  .as(Encoders.STRING())
  .flatMap(
    new FlatMapFunction<String, String>() {
      @Override
      public Iterator<String> call(String x) {
        return Arrays.asList(x.split(" ")).iterator();
      }
    }, Encoders.STRING());

// Generate running word count
Dataset<Row> wordCounts = words.groupBy("value").count();

This lines DataFrame represents an unbounded input table containing the streaming text data; each line becomes a row with a single column named "value". The .as(Encoders.STRING()) call converts the DataFrame to a Dataset of String so that flatMap can split each line into words. Finally, we have defined the
wordCounts DataFrame by grouping by the unique values in the Dataset and counting them. Note that this is a streaming DataFrame which represents the running word counts of the stream.
We have now set up the query on the streaming data. All that is left is to actually start receiving data and computing the counts. To do this, we set it up to print the complete set of counts (specified by
outputMode(“complete”)) to the console every time they are updated. And then start the streaming computation using
start().
// Start running the query that prints the running counts to the console
val query = wordCounts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()
After this code is executed, the streaming computation will have started in the background. In Structured Streaming's model, a query on the input stream generates a "Result Table", and the "Output" - what gets written out to the external storage - can be defined in different modes:

- Complete Mode - The entire updated Result Table will be written to the external storage.

- Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage.

- Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (not available yet in Spark 2.0). Note that this is different from the Complete Mode in that this mode does not output the rows that are not changed.
Note that each mode is applicable on certain types of queries. This is discussed in detail later.
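The Result Table mechanics can be sketched outside Spark in a few lines of plain Python (an illustration of the model only, not Spark's implementation): each trigger folds just the new rows into a running table, and complete mode emits the entire table every time.

```python
from collections import Counter

def run_query(batches):
    """Incremental word count: fold each micro-batch into the Result Table."""
    result = Counter()               # the Result Table: word -> running count
    for batch in batches:            # each trigger sees only the new rows
        for line in batch:
            result.update(line.split())
        yield dict(result)           # "complete" mode: emit the whole table

for table in run_query([["cat dog", "dog"], ["owl"]]):
    print(table)
# first trigger:  {'cat': 1, 'dog': 2}
# second trigger: {'cat': 1, 'dog': 2, 'owl': 1}
```

The key point the sketch makes concrete: the engine never re-reads old batches; the previous aggregate plus the new rows is enough to produce the updated result.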
To illustrate the use of this model, let's understand the model in context of the Quick Example above. The first lines DataFrame is the input table, and the final wordCounts DataFrame is the result table. Note that the query on the streaming lines DataFrame to generate wordCounts is exactly the same as it would be on a static DataFrame. However, when this query is started, Spark will continuously check for new data from the socket connection. If there is new data, Spark will run an "incremental" query that combines the previous running counts with the new data to compute updated counts, as shown below.

This model allows window-based aggregations (e.g. the number of events every minute) to be just a special type of grouping and aggregation on the event-time column. Furthermore, this model naturally handles data that has arrived later than expected based on its event-time. Since Spark is updating the Result Table, it has full control over updating/cleaning up the aggregates when there is late data. While not yet implemented in Spark 2.0, event-time watermarking will be used to manage this data. These are explained later in more detail.

Delivering end-to-end exactly-once semantics was one of the key design goals. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers) to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure end-to-end exactly-once semantics under any failure.

Creating streaming DataFrames and streaming Datasets

Streaming DataFrames can be created through the DataStreamReader interface returned by SparkSession.readStream(). Similar to the read interface for creating a static DataFrame, you can specify the details of the source – data format, schema, options, etc. In Spark 2.0, there are a few built-in sources.
File source - Reads files written in a directory as a stream of data. Supported file formats are text, csv, json, and parquet.
Socket source (for testing) - Reads UTF8 text data from a socket connection. The listening server socket is at the driver. Note that this should be used only for testing as this does not provide end-to-end fault-tolerance guarantees.

Basic Operations - Selection, Projection, Aggregation

Most of the common operations on DataFrame/Dataset are supported for streaming. For example, given a streaming DataFrame of IOT device data with schema { device: string, type: string, signal: double, time: string }:

In Scala:

case class DeviceData(device: String, type: String, signal: Double, time: DateTime)

val df: DataFrame = ... // streaming DataFrame with IOT device data
val ds: Dataset[DeviceData] = df.as[DeviceData] // streaming Dataset with IOT device data

// Select the devices which have signal more than 10
df.select("device").where("signal > 10") // using untyped APIs
ds.filter(_.signal > 10).map(_.device) // using typed APIs

// Running count of the number of updates for each device type
df.groupBy("type").count() // using untyped API

// Running average signal for each device type
import org.apache.spark.sql.expressions.scalalang.typed._
ds.groupByKey(_.type).agg(typed.avg(_.signal)) // using typed API

In Java:

Dataset<Row> df = ...; // streaming DataFrame with IOT device data
Dataset<DeviceData> ds = ...; // streaming Dataset with IOT device data

// Select the devices which have signal more than 10
df.select("device").where("signal > 10"); // using untyped APIs
ds.filter(new FilterFunction<DeviceData>() { // using typed APIs
  @Override
  public boolean call(DeviceData value) throws Exception {
    return value.getSignal() > 10;
  }
}).map(new MapFunction<DeviceData, String>() {
  @Override
  public String call(DeviceData value) throws Exception {
    return value.getDevice();
  }
}, Encoders.STRING());

// Running count of the number of updates for each device type
df.groupBy("type").count(); // using untyped API

// Running average signal for each device type
ds.groupByKey(new MapFunction<DeviceData, String>() { // using typed API
  @Override
  public String call(DeviceData value) throws Exception {
    return value.getType();
  }
}, Encoders.STRING()).agg(typed.avg(new MapFunction<DeviceData, Double>() {
  @Override
  public Double call(DeviceData value) throws Exception {
    return value.getSignal();
  }
}));
df = ... # streaming DataFrame with IOT device data with schema { device: string, type: string, signal: double, time: DateType }

# Select the devices which have signal more than 10
df.select("device").where("signal > 10")

# Running count of the number of updates for each device type
df.groupBy("type").count()
Window Operations on Event Time
Aggregations over a sliding event-time window are straightforward with Structured Streaming. The key idea to understand about window-based aggregations is that they are very similar to grouped aggregations: instead of one aggregate per unique grouping value, an aggregate is maintained for every window that a row's event-time falls into. For example, counting words within 10 minute windows that slide every 5 minutes can be expressed as:

val words = ... // streaming DataFrame of schema { timestamp: Timestamp, word: String }

// Group the data by window and word and compute the count of each group
val windowedCounts = words.groupBy(
  window($"timestamp", "10 minutes", "5 minutes"), $"word"
).count()
Now consider what happens if one of the events arrives late to the application. For example, a word that was generated at 12:04 but it was received at 12:11. Since this windowing is based on the time in the data, the time 12:04 should be considered for windowing. This occurs naturally in our window-based grouping – the late data is automatically placed in the proper windows and the correct aggregates are updated as illustrated below.
Join Operations
Streaming DataFrames can be joined with static DataFrames to create new streaming DataFrames. Here are a few examples.
val staticDf = spark.read. ...
val streamingDf = spark.readStream. ...
streamingDf.join(staticDf, "type") // inner equi-join with a static DF
streamingDf.join(staticDf, "type", "right_join") // right outer join with a static DF

Dataset<Row> staticDf = spark.read. ...;
Dataset<Row> streamingDf = spark.readStream. ...;
streamingDf.join(staticDf, "type"); // inner equi-join with a static DF
streamingDf.join(staticDf, "type", "right_join"); // right outer join with a static DF

staticDf = spark.read. ...
streamingDf = spark.readStream. ...
streamingDf.join(staticDf, "type") # inner equi-join with a static DF
streamingDf.join(staticDf, "type", "right_join") # right outer join with a static DF
Unsupported Operations
However, note that all of the operations applicable on static DataFrames/Datasets are not supported in streaming DataFrames/Datasets yet. While some of these unsupported operations will be supported in future releases of Spark, there are others which are fundamentally hard to implement on streaming data efficiently. For example, sorting is not supported on the input streaming Dataset, as it requires keeping track of all the data received in the stream. This is therefore fundamentally hard to execute efficiently. As of Spark 2.0, some of the unsupported operations are as follows.
Outer joins between a streaming and a static Datasets are conditionally supported.
Full outer join with a streaming Dataset is not supported
Left outer join with a streaming Dataset on the left is not supported
Right outer join with a streaming Dataset on the right is not supported
Any kind of joins between two streaming Datasets are not yet supported. If you try any of these unsupported operations, you will see an AnalysisException like "operation XYZ is not supported with streaming DataFrames/Datasets".
Starting Streaming Queries
Once you have defined the final result DataFrame/Dataset, all that is left is for you to start the streaming computation. To do that, you use the DataStreamWriter returned through Dataset.writeStream(), specifying the output sink, the output mode, and optionally a trigger interval. If no trigger interval is specified, the system will check for availability of new data as soon as the previous processing has completed. If a trigger time is missed because the previous processing has not completed, then the system will attempt to trigger at the next trigger point, not immediately after the processing has completed.

Output Modes

There are two types of output mode currently implemented.
Append mode (default) - This is the default mode, where only the new rows added to the result table since the last trigger will be outputted to the sink. This is only applicable to queries that do not have any aggregations (e.g. queries with only
where,
map,
flatMap,
filter,
join, etc.).
Complete mode - The whole result table will be outputted to the sink.This is only applicable to queries that have aggregations.
Output Sinks
There are a few types of built-in output sinks.
File sink - Stores the output to a directory. As of Spark 2.0, this only supports Parquet file format, and Append output mode.
Foreach sink - Runs arbitrary computation on the records in the output. See later in the section for more details.
Memory sink (for debugging) - The output is stored in memory as an in-memory table. Both, Append and Complete output modes, are supported. This should be used for debugging purposes on low data volumes as the entire output is collected and stored in the driver’s memory after every trigger.
Here is a table of all the sinks, and the corresponding settings.
Finally, you have to call start() to actually start the execution of the query. This returns a StreamingQuery object, which is a handle to the continuously running execution.
Using Foreach
The
foreach operation allows arbitrary operations to be computed on the output data. As of Spark 2.0, this is available only for Scala and Java. To use this, you will have to implement the interface
ForeachWriter (Scala/
Java docs), which has methods that get called whenever there is a sequence of rows generated as output after a trigger. Note the following important points.
The writer must be serializable, as it will be serialized and sent to the executors for execution.
All the three methods, open, process and close, will be called on the executors.
The writer must do all the initialization (e.g. opening connections, starting a transaction, etc.) only when the
open method is called. Be aware that, if there is any initialization in the class as soon as the object is created, then that initialization will happen in the driver (because that is where the instance is being created), which may not be what you intend.
version and partition are two parameters in open that uniquely represent a set of rows that needs to be pushed out. version is a monotonically increasing id that increases with every trigger. partition is an id that represents a partition of the output, since the output is distributed and will be processed on multiple executors.
open can use the version and partition to choose whether it needs to write the sequence of rows. Accordingly, it can return true (proceed with writing), or false (no need to write). If false is returned, then process will not be called on any row. For example, after a partial failure, some of the output partitions of the failed trigger may have already been committed to a database. Based on metadata stored in the database, the writer can identify partitions that have already been committed and accordingly return false to skip committing them again.
Whenever open is called, close will also be called (unless the JVM exits due to some error). This is true even if open returns false. If there is any error in processing and writing the data, close will be called with the error. It is your responsibility to clean up state (e.g. connections, transactions, etc.) that have been created in open such that there are no resource leaks.
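The lifecycle just described can be simulated without Spark. This plain-Python sketch (illustrative only — not Spark's ForeachWriter API) shows how the version/partition decision in open lets a writer skip partitions that a failed trigger already committed, while close still runs for cleanup:

```python
class ConsoleWriter:
    """Mimics the ForeachWriter lifecycle: open -> process* -> close."""
    def __init__(self, committed):
        self.committed = committed  # (version, partition) pairs already written

    def open(self, version, partition):
        # Skip partitions this trigger already committed (partial-failure replay)
        return (version, partition) not in self.committed

    def process(self, row):
        print("writing", row)

    def close(self, error=None):
        if error:
            print("cleaning up after", error)

def deliver(writer, version, partition, rows):
    """What an executor does for one output partition of one trigger."""
    ok = writer.open(version, partition)
    try:
        if ok:
            for row in rows:
                writer.process(row)
    finally:
        writer.close()  # close runs even when open returned False
    return ok

w = ConsoleWriter(committed={(1, 0)})
print(deliver(w, 1, 0, ["a"]))  # False: already committed, process skipped
print(deliver(w, 1, 1, ["b"]))  # True: new partition, rows are written
```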
Managing Streaming Queries
The
StreamingQuery object created when a query is started can be used to monitor and manage the query.
val query = df.writeStream.format("console").start() // get the query object

query.awaitTermination() // block until the query is terminated, with stop() or with error
query.stop() // stop the query
Finally, for asynchronous monitoring of streaming queries, you can create and attach a
StreamingQueryListener (Scala/
Java docs), which will give you regular callback-based updates when queries are started and terminated.

Recovering from Failures with Checkpointing

In case of a failure or intentional shutdown, you can recover the previous progress and state of a previous query, and continue where it left off. This is done using checkpointing and write ahead logs. You can configure a query with a checkpoint location, and the query will save all the progress information and the running aggregates to the checkpoint location. As of Spark 2.0, this checkpoint location has to be a path in an HDFS compatible file system, and can be set as an option in the DataStreamWriter when starting a query.

aggDF
  .writeStream
  .outputMode("complete")
  .option("checkpointLocation", "path/to/HDFS/dir")
  .format("memory")
  .start()
Where to go from here
- Examples: See and run the Scala/Java/Python examples.
- Spark Summit 2016 Talk - A Deep Dive into Structured Streaming | https://spark.apache.org/docs/2.0.0/structured-streaming-programming-guide.html | CC-MAIN-2021-17 | refinedweb | 2,005 | 56.96 |
I have a simple question that I imagine has a simple answer, but I have not been able to find it. Is it possible for two or more beans to share the same environment variable? And if so, how? If, for example, I want BeanA and BeanB to both have access to a variable called "taxRate" that will be utilized as a java.lang.Float, how would I make that happen?
Thanks for any insight you can provide.
Sharing Environment Variables (4 messages)
- Posted by: Neil Chaudhuri
- Posted on: March 04 2003 14:09 EST
Threaded Messages (4)
- Join A&B by C by Aleh Bykhavets on March 05 2003 04:16 EST
- Join A&B by C by Neil Chaudhuri on March 06 2003 00:08 EST
- Version 2 by Aleh Bykhavets on March 06 2003 03:26 EST
- Sharing Environment Variables by sachin gupta on March 17 2003 06:07 EST
Join A&B by C
A simpler solution is to create an additional EJB that will supply the others with *named* variables. That way, there is only one place where you can/must manage the values.
- Posted by: Aleh Bykhavets
- Posted on: March 05 2003 04:16 EST
- in response to Neil Chaudhuri
There are a lot of possible implementations; let's try to use a stateless EJB:
class SharedValues ... {
...
public Float getTaxRate() {...}
}
and in BeanA & BeanB just ask "SharedValues" about desired variable:
...
SharedValues shared = ...;
Float f = shared.getTaxRate();
...
That's all. Other in Your hand.
P.S.
But TaxRate can be changed with time -- economics has float nature. Hence, environment of container isn't good place.
Join A&B by C
OK, I follow your solution. But 2 questions:
- Posted by: Neil Chaudhuri
- Posted on: March 06 2003 00:08 EST
- in response to Aleh Bykhavets
1) What did you mean that the float nature of a tax rate makes the container a poor place for the value?
2) By suggesting this solution, are you implying that there is no way to share variables inherently?
It seems though that your solution would demand code changes every time you add environment variables, for each new variable would require a new method for retrieving it. Is this not so?
Thanks for the insights.
Version 2
1) I mean that after deployment, if the TaxRate must be changed, someone must do a big amount of brute work -- examine all EJBs and change every "taxRate" entry. Even if TaxRate exists in one container only, you must replace it with the new value (much easier).
- Posted by: Aleh Bykhavets
- Posted on: March 06 2003 03:26 EST
- in response to Neil Chaudhuri
But not all Application Servers allow executing such operations on the fly. In the common case you must redeploy the whole application or some of its containers (depends on the AppServer).
In any case, check it all in practice (I can also be mistaken).
2) Not exactly. You can use JNDI to store all shared variables (global environment).
class ... {
final static String SHARED = "java:comp/env/shared_values/";
void DoSomethingUseful() {
try {
Context ctx = new InitialContext();
Float taxRate = (Float) ctx.lookup(SHARED + "taxRate");
} catch (NamingException e) { /* handle lookup failure */ }
}
}
But in such case You must take care about this global environment. You even can make special framework to reflect some DB table into this JNDI branch.
More complicated problem here -- How to store this shared variables in JNDI?
3) About code changes... Are all clear?
class Shared ... {
...
Object getObject(String name) throws NoSuchVariable {
Object obj = ...; // retrieve/fetch *named* value from Hash/JNDI/DB
if (obj == null) throw new NoSuchVariable("No variable associated with " + name);
return obj;
}
Float getFloat(String name) throws NoSuchVariable {
return (Float) getObject(name);
}
String getString(String name) throws NoSuchVariable {
return (String)getObject(name);
}
...
}
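A standalone sketch of this typed-accessor idea (illustrative: the JNDI/DB lookup is replaced with an in-memory map so it runs anywhere, and the class and method names simply follow the post above):

```java
import java.util.HashMap;
import java.util.Map;

class NoSuchVariable extends Exception {
    NoSuchVariable(String msg) { super(msg); }
}

// In a real application the values would come from JNDI or a database;
// a HashMap stands in for that backing store here.
class SharedValues {
    private final Map<String, Object> values = new HashMap<>();

    void put(String name, Object value) { values.put(name, value); }

    Object getObject(String name) throws NoSuchVariable {
        Object obj = values.get(name);
        if (obj == null) {
            throw new NoSuchVariable("No variable associated with " + name);
        }
        return obj;
    }

    Float getFloat(String name) throws NoSuchVariable {
        return (Float) getObject(name);
    }

    String getString(String name) throws NoSuchVariable {
        return (String) getObject(name);
    }
}

public class Demo {
    public static void main(String[] args) throws NoSuchVariable {
        SharedValues shared = new SharedValues();
        shared.put("taxRate", Float.valueOf(0.07f));
        // BeanA and BeanB would both go through the same accessor:
        System.out.println(shared.getFloat("taxRate")); // prints 0.07
    }
}
```

The typed getters keep the casts in one place, so adding a variable means adding one entry to the backing store rather than touching every bean.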
Sharing Environment Variables
The simplest way to do this is to have a class with a static variable "taxRate" which can be updated/read from either of the two EJBs. Here you can share the same variable within the same JVM without any hassle.
- Posted by: sachin gupta
- Posted on: March 17 2003 06:07 EST
- in response to Neil Chaudhuri | http://www.theserverside.com/discussions/thread/18175.html | CC-MAIN-2017-26 | refinedweb | 678 | 72.16 |
02 December 2010 19:17 [Source: ICIS news]
(adds updates throughout with Canadian, Mexican and overall North American shipment data)
Canadian chemical railcar loadings for the week ended on 27 November were 14,205, up from 13,659 in the same week last year, according to data released by the Association of American Railroads (AAR).
The increase for the week came after a 16.3% year-on-year increase in Canadian chemical carloads in the previous week ended 20 November.
The weekly chemical railcar loadings data are seen as an important real-time measure of chemical industry activity and demand. In
For the year-to-date period to 27 November, Canadian chemical railcar shipments were up 22.6% to 683,097, from 557,049 in the same period in 2009.
The association said that chemical railcar traffic in
For the year-to-date period, Mexican shipments were down 2.3% to 51,726, from 52,929 in the same period last year. Mexican railcar shipments were hindered by flooding in past weeks.
The AAR reported earlier on Thursday that
Overall chemical railcar shipments for all of North America - US,
For the year-to-date period to 27 November, overall North American chemical railcar traffic was up 13.1% to 2,088,086, from 1,846,530 in the year-earlier period.
Overall, the
From the same week last year, total US weekly railcar traffic for the 19 carload commodity groups tracked by the AAR rose 3.2% to 254,121 from 246,180, and was up 7.1% to 13,462,287 year-to-date to 27 November.
Evaluating Tools for Developing with SOAP in Python
Originally published in Python Magazine Volume 3 Issue 9, September 2009
Greg Jednaszewski was my co-author for this article.
In order to better meet the needs of partners, Racemi needed to
build a private web service to facilitate tighter integration
between our applications and theirs. After researching the state of
SOAP development in Python, we were able to find a set of tools
that met our needs quite well. In this article, we will describe
the criteria we used to evaluate the available tools and the
process we followed to decide which library was right for us.
Racemi’s product, DynaCenter, is a server provisioning and data center
management software suite focusing on large private installations
where automation is key for our end-users. Because we are a small
company, our business model is organized around partnering with larger
companies in the same industry and acting as an OEM. Those partners
typically provide their own user interface, and drive DynaCenter’s
capture and provision services through our API.
Many of our partners’ automation and workflow management systems are
designed to call scripts or external programs, so the first version of
our API was implemented as a series of command line programs.
However, we are increasingly seeing a desire for more seamless
integration through web service APIs. Since most of our partners are
Java shops, in their minds the term web service is synonymous with
SOAP (Simple Object Access Protocol), an HTTP and XML-based protocol
for communicating between applications. Since Python’s standard
library does not include support for SOAP, we knew we would need to
research third-party libraries to find one suitable for creating a web
service interface to DynaCenter. Our first step was to develop a set
of minimum requirements.
Basic Requirements
DynaCenter is designed with several discrete layers that communicate
with each other as needed. The command line programs that comprise the
existing OEM API communicate with internal services running in daemons
on a central control server or on the managed systems. This layered
approach separates the exposed interface from the implementation
details, allowing us to change the implementation but maintain a
consistent API for use by partners. All of the real work for capturing
and provisioning server images is implemented inside the DynaCenter
core engine, which is invoked by the existing command line programs.
The first requirement we established was that the new web service
layer had to be thin so we could reuse as much existing code as
possible, and avoid re-implementing any of the core engine
specifically for the web service.
This project was unique in that many of the features of full-stack web
frameworks would not be useful to meeting our short-term requirements.
We have our own ORM for accessing the DynaCenter’s database, so any
potential solution needed to be able to operate without a fully-
configured ORM component. In addition, we were not building a human
interface, so full-featured templating languages and integration with
Javascript toolkits were largely irrelevant to the project. On the
other hand, while we recognized that SOAP was a short-term requirement
from some of our partners, we did anticipate wanting to support other
protocols like JSON in the future without having to write a new
service completely from scratch.
We also knew that creating a polished product would require
comprehensive documentation. The WSDL (Web Service Definition
Language) file for the SOAP API, which is a formal machine-readable
declaration of what calls and data types an API supports, would be
helpful, but only as a reference. We planned to document the entire
API in a reference manual as well as with sample Java and Python code
bundled in a software development kit (SDK). We could write that
documentation manually, but integration with the documentation tools
was considered a bonus feature.
Finally, we needed support for complex data structures. Our data model
uses a fairly sophisticated representation of image meta-data,
including networking and storage requirements. DynaCenter also
maintains data about the peripherals in a server so that we can
reconfigure the contents of images as they are deployed to run under
new hardware configurations. This information is used as parameters
and return values throughout the API, so we needed to ensure that the
tool we chose would support data types beyond the simple built-ins
like strings and integers.
Meet the Candidates
Through our research, we were able to identify three viable candidate
solutions for building SOAP-based web services in Python.
The Zolera SOAP Infrastructure (ZSI) is a part of the pywebsvcs
project. It provides complete server and client libraries for working
with SOAP. To use it, a developer writes the WSDL file (by hand or
using a WSDL editor), and then generates Python source for the client
and stubs for the server. The data structures defined in the WSDL file
are converted into Python classes that can be used in both client and
server code.
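For example, a WSDL fragment for a hypothetical capture call might declare a complex input type like this (the type and element names are invented for illustration); ZSI's code generator turns such a complexType into a Python class usable on both the client and server sides:

```xml
<types>
  <xsd:schema xmlns:
              targetNamespace="urn:example:dynacenter">
    <!-- Hypothetical image meta-data structure -->
    <xsd:complexType name="ImageInfo">
      <xsd:sequence>
        <xsd:element name="name"   type="xsd:string"/>
        <xsd:element name="sizeMB" type="xsd:int"/>
        <xsd:element name="networks" type="xsd:string"
                     minOccurs="0" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:schema>
</types>
```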
soaplib is a lightweight library from Optio Software. It also
supports only SOAP, but in contrast to ZSI it works by generating the
WSDL for your service based on your Python source code. soaplib is not
a full-stack solution, so it needs to be coupled with another
framework such as TurboGears or Pylons to create a service.
TGWebServices (TGWS) is a TurboGears-specific library written by
Kevin Dangoor and maintained by Christophe de Vienne. It provides a
special controller base class to act as the root of the service. It is
similar to soaplib in that it generates the WSDL for a service from
the source at runtime. In fact, we found a reference to the idea of
merging soaplib and TGWebServices, but that work seems to have stalled
out. One difference between the libraries is that TGWS also supports
JSON and “raw” XML messages for the same back-end code.
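The appeal of that design is easy to show without any framework: keep one back-end function and bolt protocol-specific serializers on the outside. This standalone sketch (names invented; it is not TGWS code) uses only the standard library:

```python
import json
import xml.etree.ElementTree as ET

def server_status():
    """One back-end function, reused by every protocol front-end."""
    return {"name": "control01", "state": "running"}

def as_json(payload):
    return json.dumps(payload, sort_keys=True)

def as_xml(payload, root="result"):
    elem = ET.Element(root)
    for key, value in sorted(payload.items()):
        ET.SubElement(elem, key).text = value
    return ET.tostring(elem, encoding="unicode")

data = server_status()
print(as_json(data))  # {"name": "control01", "state": "running"}
print(as_xml(data))   # <result><name>control01</name><state>running</state></result>
```

SOAP would be a third serializer over the same return value, which is exactly why a multi-protocol library can expose one controller three ways.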
Now that we had the basic requirements identified and a few candidates
to test, we were able to create a list of evaluation criteria to help
us make our decision.
Installing
A primary concern was whether or not a tool could be installed and
made to work at all using any tutorial or guide from the
documentation. We used a clean virtualenv for each application and
used Python 2.6.2 for all tests. Initial evaluations were made under
Mac OS X 10.5 and eventually prototype servers were set up under
CentOS 4 so the rest of Racemi’s libraries could be used and the
service could work with real data.
The latest official release of ZSI (2.0-rc3) installed using
easy_install, including all dependencies and C extensions. A newer
alpha release (2.1-a1) also installed correctly from a source archive
we downloaded manually. The sample code provided with the source
archive had us up and running a test server in a short time.
We were less successful using easy_install with TGWS because we
did not start out with TurboGears installed and the dependencies were
not configured to bring it in automatically. After modifying the
dependencies in the package by hand, we were able to install it and
configure a test server following the documentation. Once we overcame
that problem, we found that the official distribution of TGWS is only
compatible with TurboGears 1.0. By asking on the support mailing list,
we found patches to make it compatible with TurboGears 1.1 and were
then able to bring up a test server. Since TurboGears 2.x has moved
away from CherryPy, and TGWS uses features of CherryPy, we did not try
to use TurboGears 2.x.
We never did get soaplib to install. It depends on lxml, and
installation on both of our our test platforms failed with compilation
and link errors. At this point, soaplib was moved off of the list of
primary candidates. We kept it open as an option in case the other
tools did not pan out, but not being able to install it hurt our
ability to evaluate it completely.
Feature Completeness
Since we anticipated other web-related work, we also considered the
completeness of the stack. Although ZSI provides a full SOAP server,
it does not easily support other protocols. Since our only hard
requirement for protocols in the first version of the service was
SOAP, this limitation did not rule ZSI out immediately.
Because TGWS sits on top of TurboGears, we knew that if we eventually
wanted to create a UI for the service we could use the same stack. It
also supports JSON out of the box, so third-party JavaScript
developers could create their own UI as well.
Interoperability
Another concern was whether the tool would be inter-operable with a
wide variety of clients. We were especially interested in the Java
applications we expected our partners to be writing. Since we are
primarily a Python shop, we also wanted to be able to test the SOAP
API using Python libraries. In order to verify that both sets of
clients would work without issue, we constructed prototype servers
using each tool and tested them using SOAP clients in Python and Java
(using the Axis libraries).
Both ZSI and TGWS passed the compatibility tests we ran using both
client libraries. The only interoperability issue we came across was
with the SOAP faults generated by TGWS, which did not pass through the
strict XML parser used by the Java Axis libraries. We were able to
overcome this with a few modifications to TGWS (which we have
published for possible inclusion in a future version of TGWS).
Freshness
Our investigations showed that there had not been much recent
development of SOAP libraries in Python, even from the top contenders
we were evaluating. It wasn’t clear whether this was because the
existing tools were stable and declared complete, or because the Python
community had largely moved on to other protocols like JSON. To get a
sense of the “freshness” of each project, we looked for the last
commit to the source repository and also examined mailing list
archives for recent activity. We were especially interested in
responses from developers to requests for support.
The recent activity on the ZSI forums on Sourceforge seemed mostly to
be requests for help. The alpha release we used for one of the tests
was posted to the project site in November of 2007. There had been
more recent activity in the source tree, but we did not want to use an
unreleased package if we could avoid it.
The situation with TGWS was confusing at first because we found
several old sites. By following the chain of links from the oldest to
the newest, we found the most recent code in a BitBucket repository
being maintained by Christophe de Vienne. As mentioned earlier, the
project mailing list was responsive to questions about making TGWS
work with TurboGears 1.1, and pointed us towards a separate set of
patches that were not yet incorporated in the official release.
Documentation
As new users, we wanted to find good documentation for any tool we
selected. Having the source is useful for understanding how you’re
doing something wrong, but learning what to do in the first place
calls for separate instructions. All of the candidates provided enough
documentation for us to create a simple prototype server without too
much trouble.
Just as we expect to have documentation for third-party tools we use,
we need to provide API references and tutorials for the users of our
web service. We use Sphinx for all customer-facing documentation at
Racemi, since it allows us to manage the documentation source along
with our application code, and to build HTML and PDF versions of all
of our manuals. TGWS includes a Sphinx extension that adds directives
for generating documentation for web service controllers, so we could
integrate it with our existing build process easily. ZSI has no native
documentation features. We did consider building something to parse
the WSDL file and generate API docs from that, but the existing Sphinx
integration TGWS provided was a big bonus in our eyes.
Deployment Complexity
We evaluated the options for deploying all of the tools, including how
much the deployment could be automated and how flexible they were. We
decided to run our service behind an Apache proxy so we could encrypt
the traffic with SSL. All of the tools support the standard options
for doing this (mod_proxy, mod_python, and in some cases mod_wsgi),
so there was no clear winner for this criterion.
In addition to simple production deployment, we also needed an option
for running a server in “development” mode without requiring root
access or modifications to a bunch of system services. We found that
both ZSI and TGWS have good development server configurations, and
could be run directly out of a project source tree (in fact, that is
how the prototype servers were tested).
Packaging Complexity
As a packaged OEM product, DynaCenter is a small piece of a larger
software suite being deployed on servers outside of our control. It
needs to play well with others and be easy to install in the field.
Most installations are performed by trained integrators, but they are
not Python programmers and we don’t necessarily want to make them deal
with a lot of our implementation details. We definitely do not want
them downloading dependencies from the Internet, so we package our own
copy of Python and the libraries we use so that installation is
simpler and avoids version conflicts.
ZSI’s only external dependencies are PyXML and zope.interface. We
were already packaging PyXML for other reasons, and zope.interface
was easy to add. TGWS depends on TurboGears, which is a collection of
many separate packages. This made re-distribution less convenient,
since we had to grab the sources for each component separately.
Fortunately, the complete list is documented clearly in the
installation script for TurboGears and we were able to distill it down
to the few essential pieces we would actually be using. Those packages
were then integrated with our existing processes so they could be
included in the Python package we build.
Licensing
Although Racemi does contribute to open source tools when possible,
DynaCenter is not itself open source. We therefore had to eliminate
from consideration any tool that required the use of the GNU General
Public License (GPL). ZSI uses a BSD-like license, which matched our
requirements.
The zope.interface package is licensed under the Zope Public
License, which is also BSD-like. TGWS and most of the TurboGears
components are licensed under a BSD or MIT license. The only component
that even mentioned GNU was SQLObject, which uses the LGPL. That would
have been acceptable, but since we have our own ORM and do not need
SQLObject, we decided to skip including it in our package entirely to
avoid any question.
Elegance
SOAP toolkits tend to fall in one of two camps: Those that generate
source from a WSDL file and those that generate a WSDL document from
source. We didn’t particularly care which solution we ended up with,
as long as we didn’t have to write both the WSDL and the source code.
We also wanted to avoid writing vast amounts of boilerplate code, if
possible. As you will see from the examples below, the tools that
generated the WSDL from Python source turned out to be much more
elegant in the long run.
We also considered the helpfulness of the error messages as part of
evaluating the elegance and usability of the tools. With TGWS, most of
what we were writing was Python. Many of the initial errors we saw
were from the interpreter, and so the error types and descriptions
were familiar. Once those were eliminated, the errors we saw generated
by TGWS code were usually direct and clear, although they did not
always point at the parts of our source code where the problem could
be fixed.
In contrast, we found ZSI’s errors to be very obscure. It seemed many
were caused by a failure of the library to trap problems in the
underlying code, such as indexing into a None value as a tuple.
Even the errors that were generated explicitly by the ZSI code left us
scratching our heads on occasion. We continued evaluating both tools,
but by this time we were leaning towards TGWS and growing more
frustrated with ZSI.
Testing
Automated testing is especially important for a complex product like
DynaCenter, so being able to write tests for the new web service and
integrate them with our existing test suite was an important feature.
ZSI does not preclude writing automated tests, but does not come with
any obvious framework or features for supporting them, so we would
need to roll our own. TGWS takes advantage of TurboGears’ integration
with WebTest to let the developer write unit and integration tests in
Python without even needing to start a test daemon.
Performance
Once we established the ease of creating and testing services with
TGWS, we had basically made our choice for that library. However,
there was one last criterion to check: performance. Using the prototype
servers we had set up for experimenting with the tools, we took some
basic timing measurements by writing a SOAP client in Python to invoke
a service that returned a large data set (500 copies of a complex type
with several properties of different types). We measured the time it
took for the client to ask for the data and then parse it into usable
objects.
The data structure definition was the same for both services, and we
found no significant difference in the performance of the two SOAP
implementations. Interestingly, as the amount of data increased, the
JSON performance reached a 10x improvement over SOAP. Our hypothesis
for the performance difference is that there was less data to parse,
the parser was more efficient, and the objects being created in the
client were simpler because JSON does not try to instantiate user-
defined classes.
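The kind of timing comparison described above can be sketched with nothing but the standard library. This is not Racemi's actual benchmark — the record shape, run counts, and XML layout are illustrative assumptions — but it shows the same idea: build 500 copies of a record with mixed-type properties, serialize them as JSON and as a SOAP-like XML document, and time the client-side parse of each.

```python
import json
import time
import xml.etree.ElementTree as ET

# Synthetic data set in the spirit of the article's test:
# 500 copies of a record with several properties of different types.
records = [
    {"id": i, "name": "server-%d" % i, "active": True, "load": i * 0.5}
    for i in range(500)
]

# Serialize once as JSON and once as a SOAP-like XML document.
json_payload = json.dumps(records)
root = ET.Element("records")
for r in records:
    item = ET.SubElement(root, "record")
    for key, value in r.items():
        ET.SubElement(item, key).text = str(value)
xml_payload = ET.tostring(root)

def time_parse(parse, payload, runs=50):
    # Time repeated parses of the same payload.
    start = time.perf_counter()
    for _ in range(runs):
        parse(payload)
    return time.perf_counter() - start

json_time = time_parse(json.loads, json_payload)
xml_time = time_parse(ET.fromstring, xml_payload)
print("JSON parse: %.4fs, XML parse: %.4fs" % (json_time, xml_time))
```

Note that this only measures raw parsing; the article's 10x observation also reflects that a SOAP client instantiates typed objects from the XML, which the JSON path skips.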
Prototyping with ZSI
We were somewhat familiar with ZSI because we had used it in the past
for building a client for interacting with the VMware Virtual Center
web service, so we started with ZSI as our first prototype. For both
prototypes, we implemented a simple echo service that returns as
output whatever it gets as input from the client. Listing 1 contains
the hand-crafted WSDL inputs for the ZSI version of this service.
Listing 1
<?xml version="1.0" encoding="UTF-8"?>
...
<message name="EchoResponse">
    <part name="parameters" element="tns:Echo"/>
</message>
...
To generate the client and server code from the WSDL, feed it into the
wsdl2py program (included with ZSI). To add support for complex
types, add the -b option, but it isn’t required for this simple
example. wsdl2py will, in response, produce three files:
Listing 2
EchoServer_client.py is the code needed to build a client for the
SimpleEcho web service.
##################################################
# file: EchoServer_client.py
#
# client stubs generated by
# "ZSI.generate.wsdl2python.WriteServiceModule"
#
##################################################

from EchoServer_types import *
import urlparse, types
from ZSI.TCcompound import ComplexType, Struct
from ZSI import client
from ZSI.schema import GED, GTD
import ZSI
from ZSI.generate.pyclass import pyclass_type

# Locator
class EchoServerLocator:
    EchoServer_address = ""
    def getEchoServerAddress(self):
        return EchoServerLocator.EchoServer_address
    def getEchoServer(self, url=None, **kw):
        return EchoServerSOAP(
            url or EchoServerLocator.EchoServer_address, **kw)

# Methods
class EchoServerSOAP:
    def __init__(self, url, **kw):
        kw.setdefault("readerclass", None)
        kw.setdefault("writerclass", None)
        # no resource properties
        self.binding = client.Binding(url=url, **kw)
        # no ws-addressing

    # op: Echo
    def Echo(self, request, **kw):
        if isinstance(request, EchoRequest) is False:
            raise TypeError, "%s incorrect request type" % (request.__class__)
        # no input wsaction
        self.binding.Send(None, None, request, soapaction="Echo", **kw)
        # no output wsaction
        response = self.binding.Receive(EchoResponse.typecode)
        return response

EchoRequest = GED("urn:ZSI", "Echo").pyclass

EchoResponse = GED("urn:ZSI", "Echo").pyclass
Listing 3
EchoServer_server.py contains code needed to build the
SimpleEcho web service server.
##################################################
# file: EchoServer_server.py
#
# skeleton generated by
# "ZSI.generate.wsdl2dispatch.ServiceModuleWriter"
#
##################################################

from ZSI.schema import GED, GTD
from ZSI.TCcompound import ComplexType, Struct
from EchoServer_types import *
from ZSI.ServiceContainer import ServiceSOAPBinding

# Messages
EchoRequest = GED("urn:ZSI", "Echo").pyclass

EchoResponse = GED("urn:ZSI", "Echo").pyclass

# Service Skeletons
class EchoServer(ServiceSOAPBinding):
    soapAction = {}
    root = {}

    def __init__(self, post='', **kw):
        ServiceSOAPBinding.__init__(self, post)

    def soap_Echo(self, ps, **kw):
        request = ps.Parse(EchoRequest.typecode)
        return request,EchoResponse()

    soapAction['Echo'] = 'soap_Echo'
    root[(EchoRequest.typecode.nspname,EchoRequest.typecode.pname)] = 'soap_Echo'
Listing 4
EchoServer_types.py has type definitions used by both the client
and server code.
##################################################
# file: EchoServer_types.py
#
# schema types generated by
# "ZSI.generate.wsdl2python.WriteServiceModule"
#
##################################################

import ZSI
import ZSI.TCcompound
from ZSI.schema import (LocalElementDeclaration, ElementDeclaration,
                        TypeDefinition, GTD, GED)
from ZSI.generate.pyclass import pyclass_type

##############################
# targetNamespace
# urn:ZSI
##############################

class ns0:
    targetNamespace = "urn:ZSI"

    class Echo_Dec(ZSI.TCcompound.ComplexType, ElementDeclaration):
        literal = "Echo"
        schema = "urn:ZSI"
        def __init__(self, **kw):
            ns = ns0.Echo_Dec.schema
            TClist = [ZSI.TC.AnyType(pname=(ns,"value"),
                                     aname="_value",
                                     minOccurs=1,
                                     maxOccurs=1,
                                     nillable=False,
                                     typed=False,
                                     encoded=kw.get("encoded"))]
            kw["pname"] = ("urn:ZSI","Echo")
            kw["aname"] = "_Echo"
            self.attribute_typecode_dict = {}
            ZSI.TCcompound.ComplexType.__init__(self,None,TClist,
                                                inorder=0,**kw)
            class Holder:
                __metaclass__ = pyclass_type
                typecode = self
                def __init__(self):
                    # pyclass
                    self._value = None
                    return
            Holder.__name__ = "Echo_Holder"
            self.pyclass = Holder

# end class ns0 (tns: urn:ZSI)
Once generated, these files are not meant to be edited, because they
will be regenerated as part of a build process whenever the WSDL input
changes. The code in the files grows as more types and calls are added
to the service definition.
The implementation of the server goes in a separate file that imports
the generated code. In the example, the actual service is the
soap_Echo() method in Listing 5. The @soapmethod decorator defines the input
(an EchoRequest) and the output (an EchoResponse) for the call.
In the example, the implementation of soap_Echo() just fills in
the response value with the request value, and returns both the
request and the response. From there, ZSI takes care of building the
SOAP response and sending it back to the client.
Listing 5
import os
import sys

from EchoServer_client import *
from ZSI.twisted.wsgi import (SOAPApplication,
                              soapmethod,
                              SOAPHandlerChainFactory)

class EchoService(SOAPApplication):
    factory = SOAPHandlerChainFactory
    wsdl_content = dict(name='Echo',
                        targetNamespace='urn:echo',
                        imports=(),
                        portType='',
                        )

    def __call__(self, env, start_response):
        self.env = env
        return SOAPApplication.__call__(self, env, start_response)

    @soapmethod(EchoRequest.typecode,
                EchoResponse.typecode,
                operation='Echo',
                soapaction='Echo')
    def soap_Echo(self, request, response, **kw):
        # Just return what was sent
        response.Value = request.Value
        return request, response

def main():
    from wsgiref.simple_server import make_server
    from ZSI.twisted.wsgi import WSGIApplication
    application = WSGIApplication()
    httpd = make_server('', 7000, application)
    application['echo'] = EchoService()
    print "listening..."
    httpd.serve_forever()

if __name__ == '__main__':
    main()
Listing 6 includes a sample of how to use the ZSI client libraries to
access the servers from the client end. All that needs to be done is
to create a handle to the EchoServer web service, build an
EchoRequest, send it off to the web service, and read the
response.
Listing 6
from EchoServer_client import *
import sys, time

loc = EchoServerLocator()
port = loc.getEchoServer(url='')

print "Echo: ",
msg = EchoRequest()
msg.Value = "Is there an echo in here?"
rsp = port.Echo(msg)
print rsp.Value
Prototyping with TGWebServices
To get started with TGWebServices, first create a TurboGears project
by running tg-admin quickstart which will prompt you to name the
new project and Python package, and then produce a directory structure
full of skeleton code. The directory names are based on the project
and package names chosen when running tg-admin. The top-level
directory contains sample configuration files and a script for
starting the server, and a subdirectory containing all the Python code
for the web service.
tg-admin will generate several Python files, but the important
file for defining the web service is controllers.py. Listing 7
shows the controllers.py file for our prototype echo server. The
@wsexpose decorator exposes the web service call and defines the
return type as a string, while @wsvalidate defines the data types for
each parameter. As with the ZSI example,
the actual implementation of the echo call just returns what is passed
in.
Listing 7
from turbogears import controllers, expose, flash
from tgwebservices.controllers import (WebServicesRoot,
                                       wsexpose, wsvalidate)

class EchoService(WebServicesRoot):
    """EchoService web service definition"""

    @wsexpose(str)
    @wsvalidate(value=str)
    def echo(self, value):
        "Echo the input back to the caller."
        return value

class Root(controllers.RootController):
    """The root controller of the application."""

    echo = EchoService('')
The auto-generated WSDL for the web service is accessible via
http://<server>/echo/soap/api.wsdl. Listing 8 shows an example of
the WSDL generated by TGWS for the prototype EchoService. It includes
definitions of all types used in the API, the request and response
message wrappers for each call, as well as the ports and a service
definition pointing to the server generating the WSDL document. Each
port includes the docstring from the method implementing it.
Listing 8
...
<wsdl:documentation>Echo the input back to the caller.</wsdl:documentation>
...
<wsdl:documentation>WSDL File for EchoService</wsdl:documentation>
<wsdl:port ...>
    <soap:address .../>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
The tgwsdoc extension to Sphinx, distributed with TGWS, adds
several auto-documentation directives to make it easy to keep your
documentation in sync with your code. By using autotgwstype,
autotgwscontroller, and autotgwsfunction, you can insert
definitions of the complex types, controllers, or individual API calls
in with the rest of your hand-written documentation. This was
especially useful for us because we already had a lot of text
explaining our existing command line interface. We were able to reuse
a lot of the material and document all three interfaces (command line,
SOAP, and JSON) with a single tool.
Implementation Considerations
Once we had chosen TGWS as our framework, we set about working on the
first implementation of our real service. This helped us uncover a few
small problems with our original “pure” design, and some details we
had not considered while prototyping.
For example, we wanted to make sure that our web service was not only
interoperable with Java clients, but also that the API made sense to a
Java developer. One tool that they might be using, the Java Axis
client, is built by feeding the WSDL file into a code generator to
produce source code for client classes. After we tried working with
the generated Java code, we adjusted our web service API to make it
more usable. For instance, Java doesn’t allow you to specify defaults
for method arguments, which caused problems with a couple of web
service calls that had a handful of required arguments along with many
optional keyword arguments. On the Java side, the caller would have to
pass in all 23 parameters to the call, most of them null placeholders
for the optional parameters. To address that, we moved all the
optional parameters to a separate “options” object that could be
populated and passed in for advanced operations.
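The "options object" refactoring described above can be sketched in a few lines. The names here (migrate, MigrateOptions, and their fields) are purely hypothetical stand-ins, not Racemi's actual API; the point is the shape of the change — required arguments stay in the signature, and everything optional travels in one object that Java callers can populate selectively instead of passing rows of null placeholders.

```python
class MigrateOptions:
    """Hypothetical bundle of optional parameters, so callers in
    languages without keyword defaults (like Java) never pass a long
    row of null placeholders."""
    def __init__(self, timeout=300, preserve_ip=False, notes=""):
        self.timeout = timeout
        self.preserve_ip = preserve_ip
        self.notes = notes

def migrate(source, target, options=None):
    """Only the required arguments appear in the signature; every
    optional setting rides in the options object."""
    options = options or MigrateOptions()
    return {
        "source": source,
        "target": target,
        "timeout": options.timeout,
        "preserveIp": options.preserve_ip,  # camelCase on the wire
    }

# Simple call uses defaults; advanced call populates options first.
print(migrate("host-a", "host-b"))
print(migrate("host-a", "host-b", MigrateOptions(timeout=60)))
```

From the Java side this maps to one generated "options" bean plus a three-argument call, rather than a 23-argument method.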
There were other minor annoyances, such as the way a camelCase
naming convention resulted in nicer-looking Java code than the
under_scored naming convention typically used by Python
programmers. We ended up going with camelCase names for attributes
and methods of classes used in the public side of the web service.
After making these tweaks, it is not difficult to design an API with
TGWS that makes sense to both Java and Python client developers.
Testing in Java was another challenge for us to work out. We have a
large suite of Python tests driven by nose, and we ultimately were
able to automate the client-side Java testing using junit. We then
integrated the two suites by writing a single Python test to run all
of the junit tests in a separate process and parse the results
from the output.
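A bridge test like the one just described can be sketched as follows. This is a hedged reconstruction, not Racemi's actual test code: the command line and class name are illustrative, and the summary strings parsed here follow the text output format of JUnit 3/4 ("OK (n tests)" on success, "Tests run: ..., Failures: ..." otherwise). The parser can be exercised without a JVM present.

```python
import re
import subprocess

def parse_junit_output(text):
    """Return (passed, failure_count) from junit's text summary."""
    if re.search(r"^OK \(\d+ tests?\)", text, re.MULTILINE):
        return True, 0
    m = re.search(r"Failures: (\d+)", text)
    if m:
        return False, int(m.group(1))
    raise ValueError("unrecognized junit output")

def run_java_suite(cmd=("java", "org.junit.runner.JUnitCore", "EchoTest")):
    """Run the Java suite in a separate process and parse its output.
    The class name EchoTest is a placeholder."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return parse_junit_output(proc.stdout)

# Exercising just the parser, with no JVM required:
print(parse_junit_output("OK (12 tests)"))
print(parse_junit_output("Tests run: 12,  Failures: 2"))
```

Wrapped in a single nose test, a failure count greater than zero fails the Python suite, so the Java results surface in the same report as everything else.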
In addition to developer tests, Racemi has a dedicated group of test
engineers who perform QA and acceptance tests before each new version
of DynaCenter is released. The QA team needed a client library to use
for testing the new web service. None of them are Java programmers, so
the Dev team took on the task of basic Java integration testing. But
for full-on regression testing and automation, QA needed something
lightweight and easy to get up and running with quickly. Suds fit
this bill quite nicely. It is a client-only SOAP interface for Python
that reads the WSDL file at runtime and provides client bindings for
the web service API. Armed with our WSDL and the Suds documentation,
our QA team was able to start building a client test harness almost
immediately.
Conclusions
At the beginning of our evaluation process, we knew there were a lot
of ways to compare the available tools. At first, we weren’t sure if
the code-from-WSDL model used by ZSI or the WSDL-from-code model used
by TGWebServices and soaplib would be easier to use. After creating
the simple echo service prototype with both tools, we found that
writing Python and generating the WSDL worked much better for us.
Because WSDL is an XML format primarily concerned with types, we found
it excessively verbose compared to the Python needed to back it up.
It felt much more natural to express our API with Python code and then
generate the description of it. Starting with the code also led to
fewer situations where translating to WSDL produced errors, unlike
when we tried to manage the WSDL by hand.
As mentioned earlier, we ended up needing to patch TGWebServices to
make it work correctly with TurboGears 1.1. Those patches were
available on the Internet as separate downloads, but we decided to
“fork” the original Mercurial repository and create a new version
that included them directly. We have also added a few other
enhancements, such as the option of specifying which formats (JSON
and/or XML) to use when documenting sample types, and better SOAP
error message handling. We are working with Christophe de Vienne to
move those changes upstream.
Aside from the ease of use benefits and technical merits of
TGWebServices, there were several bonus features that made it
appealing. The integration with Sphinx for generating documentation
meant that not only would we not have to write the reference guide as
a completely separate task, but it would never grow stale as code
(especially data structures) changed during the evolution of the
API. Getting the JSON for “free” was another big win for us because it
made testing easier and did not lock us in to a SOAP solution for all
of our partners. Couple that with the benefit of having the TurboGears
framework already in place for a possible web UI down the road, and
TGWebServices stood out as the clear winner for our needs. | https://doughellmann.com/blog/tag/python-magazine/ | CC-MAIN-2017-43 | refinedweb | 5,197 | 52.7 |
/*
 * Copyright (c) 2001, ...
 */

...

import java.io.*;
import java.net.*;

/**
 * Instances of this class are returned to applications for the purpose of
 * sending user data for a HTTP request (excluding TRACE). This class is used
 * when the content-length will be specified in the header of the request.
 * The semantics of ByteArrayOutputStream are extended so that
 * when close() is called, it is no longer possible to write
 * additional data to the stream. From this point the content length of
 * the request is fixed and cannot change.
 *
 * @author Michael McMahon
 */

public class PosterOutputStream extends ByteArrayOutputStream {

    private boolean closed;

    /**
     * Creates a new output stream for POST user data
     */
    public PosterOutputStream () {
        super (256);
    }

    /**
     * Writes the specified byte to this output stream.
     *
     * @param b the byte to be written.
     */
    public synchronized void write(int b) {
        if (closed) {
            return;
        }
        super.write (b);
    }

    /**
     * Writes <code>len</code> bytes from the specified byte array
     * starting at offset <code>off</code> to this output stream.
     *
     * @param b the data.
     * @param off the start offset in the data.
     * @param len the number of bytes to write.
     */
    public synchronized void write(byte b[], int off, int len) {
        if (closed) {
            return;
        }
        super.write (b, off, len);
    }

    /**
     * Resets the <code>count</code> field of this output
     * stream to zero, so that all currently accumulated output in the
     * output stream is discarded. The output stream can be used again,
     * reusing the already allocated buffer space. If the output stream
     * has been closed, then this method has no effect.
     *
     * @see java.io.ByteArrayInputStream#count
     */
    public synchronized void reset() {
        if (closed) {
            return;
        }
        super.reset ();
    }

    /**
     * After close() has been called, it is no longer possible to write
     * to this stream. Further calls to write will have no effect.
     */
    public synchronized void close() throws IOException {
        closed = true;
        super.close ();
    }
}
Last modified on 6 June 2010, at 23:43
See original RRSAgent log and preview nicely formatted version.
09:57:00 <LeeF> Chair: LeeF
09:57:00 <LeeF> Regrets: Axel, Alex, Souri, SteveH
09:57:00 <sandro> scribe: sandro
09:57:00 <sandro> rrsagent, make log public
09:57:00 <sandro> meeting: SPARQL Working Group
09:57:00 <Zakim> SW_(SPARQL)10:00AM has now started
09:57:00 <Zakim> +??P4
09:58:00 <Zakim> +??P5
09:58:00 <NicholasHumfrey> zakim, ??P4 is me
09:58:00 <Zakim> +NicholasHumfrey; got it
09:58:00 <Zakim> +Lee_Feigenbaum
09:58:00 <AndyS> zakim, ??P5 is me
09:58:00 <Zakim> +AndyS; got it
09:58:00 <Zakim> +OlivierCorby
09:58:00 <Zakim> +Sandro
09:59:00 <LeeF> zakim, who's on the phone?
09:59:00 <Zakim> On the phone I see NicholasHumfrey, AndyS, Lee_Feigenbaum, OlivierCorby, Sandro
09:59:00 <Zakim> +kasei
10:00:00 <ivan> zakim, dial ivan-voip
10:00:00 <Zakim> ok, ivan; the call is being made
10:00:00 <Zakim> +Ivan
10:00:00 <Zakim> +MattPerry
10:00:00 <Zakim> -MattPerry
10:00:00 <chimezie> Zakim, passcode?
10:00:00 <Zakim> the conference code is 77277 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), chimezie
10:01:00 <Zakim> +MattPerry
10:01:00 <Zakim> +Chimezie_Ogbuji
10:01:00 <Zakim> +pgearon
10:02:00 <chimezie> Zakim, mute me
10:02:00 <Zakim> Chimezie_Ogbuji should now be muted
10:02:00 <LeeF> zakim, who's on the phone?
10:02:00 <Zakim> On the phone I see NicholasHumfrey, AndyS, Lee_Feigenbaum, OlivierCorby, Sandro, kasei (muted), Ivan, MattPerry, pgearon, Chimezie_Ogbuji (muted)
10:02:00 <sandro> zakim, list attendees
10:02:00 <Zakim> As of this point the attendees have been NicholasHumfrey, Lee_Feigenbaum, AndyS, OlivierCorby, Sandro, kasei, Ivan, MattPerry, Chimezie_Ogbuji, pgearon
10:03:00 <sandro> lee: not a big agenda today, but let's try to make progress on all our issues
10:03:00 <Zakim> +dcharbon2
10:03:00 <LeeF> Agenda:
10:03:00 <sandro> lee: we should have time for extra matters, if they come up.
10:03:00 <LeeF> PROPOSED: Approve minutes at
10:04:00 <LeeF> RESOLVED: Approve minutes at
10:04:00 <LeeF> Next meeting: 2010-06-08 @ 15:00 UK / 10:00 EST (scribe: Steve Harris)
10:04:00 <chimezie> i will be out next tuesday
10:05:00 <Zakim> +??P25
10:06:00 <sandro> lee: I'm hoping to advertise this next round a lot better. we're just a round or two from last call.
10:06:00 <AndyS> +1 to more feedback - last call is *last* call :-)
10:07:00 <sandro> sandro: if someone is only going to review it once, then just last call
10:07:00 <Zakim> + +1.540.412.aaaa
10:08:00 <sandro> lee: I don't want a repeat of DAWG's three last calls!
10:08:00 <Zakim> -pgearon
10:08:00 <pgearon> Zakim, aaaa is me
10:08:00 <Zakim> +pgearon; got it
10:08:00 <sandro> sandro: Okay, then in the publicity now, highlight whatever is likely to be controvercial.
10:08:00 <sandro> lee: yeah, I was going to contact the editors to talk about that.
10:09:00 <LeeF> sandro: AC reps encouraged to give feedback on RIF PR if interested
10:11:00 <AndyS> q+
10:12:00 <LeeF> ack AndyS
10:13:00 <sandro> sandro: I think linked data has turned the corner and is rolling downhill in the govt space now
10:13:00 <NicholasHumfrey> I will pass things on to other SPARQL users in the BBC
10:13:00 <sandro> lee: Any particular groups we should contact there?
10:13:00 <kasei> I can send the draft pointers to the data.gov group for comments.
10:13:00 <sandro> sandro: I can't think of any right now. I suggest a blog post and careful use of twitter to get the word out,
10:14:00 <sandro> (rdb2rdf stuff missed)
10:15:00 <AndyS> AndyS: maybe data.gov.uk (Government Linked Data Kernel Project)
10:15:00 <sandro> Lee: Dedicate call on something planned, did scheduling poll.
10:15:00 <LeeF> Monday, June 7th
10:15:00 <LeeF> at 10am ET / 3pm UK time
10:15:00 <sandro> topic: # HTTP RDF Update dedicate TC scheduling
10:15:00 <sandro> lee: all interested parties be there.
10:16:00 <AndyS> Is this HTTP update or all update?
10:16:00 <sandro> lee: This is specifically HTTP update, not "all update".
10:17:00 <sandro> topic: Property paths dedicated TC
10:17:00 <sandro> lee: very successful. thanks Andy for the suggestion of these dedicated TCs.
10:17:00 <sandro> lee: we got consensus around all the issues
10:17:00 <LeeF>
10:18:00 <sandro> lee: I'd like to run through these proposals and get WG approval now.
10:18:00 <Zakim> +bglimm
10:18:00 <LeeF> PROPOSED: The ^ inverse path operator is strictly a unary operator.
10:19:00 <LeeF> . /^
10:20:00 <AndyS> q+
10:20:00 <LeeF> ack AndyS
10:20:00 <sandro> sandro: Could this change be adopted back into n3 ?
10:20:00 <LeeF> AndyS: N3 doesn't use forward slash (/) anyway for paths, it uses !
10:20:00 <LeeF> ... so the differences in syntax are already there
10:21:00 <LeeF> ... N3 uses ^ both as unary reverse operator and as a combining operator
10:21:00 <sandro> andy: n3 uses ! instead of / so there are differences there anyway. n3 uses ^ in a different way, both in paths and in an binary sense, where we're not using it.
10:21:00 <LeeF> ... SPARQL WG prefers single combining syntax (/) and operator (^) for reversing
10:22:00 <kasei> +1
10:22:00 <bglimm> +1
10:22:00 <sandro> +0
10:22:00 <pgearon> abstain
10:22:00 <AndyS> +0
10:22:00 <ivan> 0
10:22:00 <OlivierCorby> +1
10:22:00 <dcharbon2> 0
10:22:00 <MattPerry> +1
10:23:00 <sandro> lee: do we need more +1s than +0s?
10:23:00 <LeeF> RESOLVED: The ^ inverse path operator is strictly a unary operator, with sandro, AndyS, ivan, dcharbon2, pgearon abstaining
10:23:00 <sandro> sandro: no, it's okay
10:23:00 <LeeF> PROPOSED: Property paths do not preserve the order of underlying graph structures (no change to spec).
10:24:00 <sandro> lee: To returns list items in order using property paths would require deep changes to the algebra, etc.
10:25:00 <sandro> ivan: I am disappointed that this is a problem that SPARQL 1.1 cannot solve.
10:25:00 <sandro> lee: We all wish we had a solution.
10:25:00 <AndyS> Needs more than SPARQL to solve!
10:26:00 <MattPerry> +1
10:26:00 <AndyS> +1
10:26:00 <pgearon> +1
10:26:00 <bglimm> +1
10:26:00 <ivan> 0
10:26:00 <ivan> 0 :-(
10:26:00 <sandro> +0 sounds okay, but I don't really understand the issues
10:26:00 <OlivierCorby> +1
10:26:00 <LeeF> RESOLVED: Property paths do not preserve the order of underlying graph structures (no change to spec), ivan, sandro abstaining
10:27:00 <sandro> thanks, pgearon, but the real problem is that I don't have time to think about it.
10:27:00 <LeeF> PROPOSED: Postpone (beyond this WG) any work on returning the length of a matched property path.
10:28:00 <sandro> +1
10:28:00 <bglimm> +1
10:28:00 <pgearon> +1
10:28:00 <AndyS> +1
10:28:00 <MattPerry> +1
10:28:00 <LeeF> RESOLVED: Postpone (beyond this WG) any work on returning the length of a matched property path.
10:28:00 <OlivierCorby> +1
10:28:00 <LeeF> PROPOSED: The cardinality of solutions to fixed-length paths is the same as the cardinality of solutions to the path expanded into
10:28:00 <LeeF> triple patterns (with all variables projected); the cardinality of solutions to variable-length paths is the cardinality of solutions
10:28:00 <LeeF> via paths that do not repeat nodes; the cardinality of solutions to paths combining fixed and variable length (elt{n,} ) is a combination
10:28:00 <LeeF> of the fixed definition plus the variable definition for paths longer than the fixed length.
10:30:00 <sandro> PROPOSED: Cardinality of solutions to property paths is as in
10:30:00 <AndyS> And see also Birte's
10:31:00 <sandro> lee: Note this does not affect the answers you get, just the cardinality of the answers.
10:31:00 <LeeF> {n,}
10:32:00 <sandro> +1
10:32:00 <bglimm> So what is the cardinality of query with BGP a r+ ?x ? on graph: a r b. b r c . b r d. ? 2? because the path uses b twice, but it is still a different path for a to c and a to d.
10:32:00 <LeeF> PROPOSED: Cardinality of solutions to property paths is as in 10:32:00 <bglimm> Zakim, unmute me 10:32:00 <Zakim> bglimm should no longer be muted 10:34:00 <LeeF> b r b. 10:34:00 <sandro> lee: ?x bound to c and d, each soln has card 1. 10:34:00 <sandro> (really, lee, where are you goging? :-) 10:34:00 <sandro> chime: resonsible for detecting cycles and excluding paths that use them 10:35:00 <sandro> lee: yes 10:35:00 <sandro> +1 10:36:00 <LeeF> PROPOSED: Cardinality of solutions to property paths is as in 10:36:00 <sandro> lee: obviously if new information is unearthed in the future, we can revisit these decisions. But let's move forward now unless someone sees an actual problem. 10:36:00 <AndyS> +1 10:36:00 <bglimm> +1 10:36:00 <OlivierCorby> +1 10:36:00 <dcharbon2> +1 10:36:00 <MattPerry> +1 10:36:00 <pgearon> +1 10:36:00 <LeeF> RESOLVED: Cardinality of solutions to property paths is as in 10:37:00 <sandro> (it seems to me that one needs to keep track of cycles anyway.) 10:38:00 <LeeF> PROPOSED: Property paths include an operator to negate paths consisting of URIs and reverse URIs only. 
10:38:00 <sandro> lee: I missed the discussion of this next one, about negating paths of URIs only -- no negation of +/* 10:38:00 <AndyS> Yes - accurate 10:39:00 <LeeF> PROPOSED: Property paths include an operator to negate paths consisting of URIs and reverse URIs only; more complex paths (such as those including * or +) cannot be negated 10:39:00 <AndyS> Douglas Reid / BBN / 10:39:00 <sandro> +1 10:39:00 <AndyS> +1 10:39:00 <MattPerry> +1 10:39:00 <OlivierCorby> +1 10:39:00 <ivan> +1 10:39:00 <bglimm> 0 (I am scared of path negation) 10:39:00 <dcharbon2> +1 10:39:00 <pgearon> +1 10:39:00 <LeeF> RESOLVED: Property paths include an operator to negate paths consisting of URIs and reverse URIs only; more complex paths (such as those incliuding * or +) cannot be negated, bglimm abstaining 10:40:00 <LeeF> ?s :p{0} ?o 10:40:00 <sandro> lee: is this bound to every node in the data set? every node in the universe? nothing? 10:41:00 <sandro> (I think: every node/pair in the dataset.) 10:41:00 <LeeF> zakim, who's on the phone? 10:41:00 <Zakim> On the phone I see NicholasHumfrey, AndyS, Lee_Feigenbaum, OlivierCorby, Sandro, kasei (muted), Ivan, MattPerry, Chimezie_Ogbuji, dcharbon2, ??P25, pgearon, bglimm (muted) 10:41:00 <sandro> lee: anything else on property paths? no.... 10:42:00 <bglimm> I would say that ?s :p{0} ?o is every pair of nodes in the graph 10:42:00 <sandro> lee: no point in talking about Steve's issue on the WHERE part of sparql update, with SERVICE, since Steve isn't here. 10:43:00 <sandro> right, I think so too, bglimm 10:43:00 <LeeF> 10:43:00 <sandro> lee: anyone have any progress on Steve's query issue here? 10:43:00 <sandro> topic: SPARQL Update and SERVICE (move this up) 10:44:00 <sandro> pgearon: I think it would be fine! 10:44:00 <sandro> pgearon: Since they occur one after the other, it would see the deletion. 10:44:00 <sandro> lee: please reply to Steve on the mailing list. 
10:44:00 <sandro> topic: Open Issues 10:45:00 <sandro> 10:45:00 <ivan> q+ 10:45:00 <LeeF> ack ivan 10:46:00 <bglimm> Zakim, unmute me 10:46:00 <Zakim> bglimm should no longer be muted 10:46:00 <sandro> ivan: should we talk about which URIs to use, for the entailment regimes and for import 10:46:00 <bglimm> is wrong for describing RDF entailment in SDs? 10:47:00 <bglimm> Zakim, unmute me 10:47:00 <Zakim> bglimm was not muted, bglimm 10:47:00 <sandro> sandro: RIF will review the published version and give its feedback in general, and on the URIs in specific. 10:48:00 <bglimm> The namespace and URI used for rif:imports is still under discussion within the group 10:48:00 <bglimm> Should that entailment URI be instead, since the choice between Core, BLD, or PLD does not mean a difference in entailment semantics but rather a difference in syntactic subsets of RIF? 10:48:00 <sandro> sandro: if the URIs are reasonable straw proposals, then leave them. if we know they're bad, then use example.org. 10:48:00 <bglimm> These are the 2 ed notes we have for RIF 10:49:00 <sandro> chime: we could use entailment/rif and change the note to clarify 10:49:00 <sandro> ivan: but for import we have nothing in mind. 10:49:00 <sandro> ivan: Okay. 10:51:00 <sandro> lee: so are we changing entailment regimes doc today, or not? 10:51:00 <sandro> chime: Yes, change from rdf-core (rif-core?) to rif 10:52:00 <bglimm> is the URI to describe the RIF regime in SDs 10:52:00 <bglimm> That I guess should be changed 10:53:00 <bglimm> Zakim, mute me 10:53:00 <Zakim> bglimm should now be muted 10:53:00 <sandro> sandro: for the webmaster, it seems like we should prepare something like: sparql/round_nnn/WD-sparql-update-20100601/(all the files there) 10:53:00 <AndyS> Can I start editing rq25 again now? 10:54:00 <AndyS> There is an HTML copy anyway. 10:54:00 <sandro> lee: Yes, you can start editing rq25 again. # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. 
SRCLINESUSED=00001141 | https://www.w3.org/2009/sparql/wiki/index.php?title=Chatlog_2010-06-01&mobileaction=toggle_view_mobile | CC-MAIN-2015-40 | refinedweb | 2,457 | 65.86 |
HOME HELP PREFERENCES
SearchSubjectsFromDates
[ImagePlug doesn't work]
> plugin ZIPPlug
> plugin GAPlug
> plugin TEXTPlug
> plugin HTMLPlug
> plugin EMAILPlug
> plugin PDFPlug
> plugin RTFPlug
> plugin WordPlug
> plugin PSPlug
> plugin ArcPlug
> plugin ImagePlug
> plugin RecPlug
Your problem is that HTMLPlug (by default) "blocks" image files.
Greenstone plugins decide what files to look at with a "process
expression" but can also specify that certain files should be ignored
using a "block expression" or block_exp. This is so that if you
import a collection of HTML documents with HTMLPlug, for example, the
GIF and JPEG images will be ignored because you're really only
interestedin HTML files.
In your case, the HTMLPlug plugin appears before the Image PlugIn.
When a new JPEG file is found it is passed first to ZIPPlug, then
GAPlug, then TEXTPlug, all of which ignore it. It is then passed to
HTMLPlug, which blocks it, and it never gets to any of the other
plugins, like ImagePlug.
To fix this, you should remove the plugins you don;t need from the
plugin list. If you're only making a collection of images, then a
collect.cfg like this is probably appropriate:
plugin GAPlug
plugin ImagePlug
plugin ArcPlug
plugin RecPlug
Note also that after an import, the message that says how many
documents are imported is usually wrong when you use ImagePlug.
I don't know why.
Gordon | http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-1l--11-en-50---20-about-Antolin%2C+Trinidad--00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&cl=CL3.1.2&d=0202211616030H-07980-tuatara | CC-MAIN-2014-52 | refinedweb | 226 | 57.61 |
Mapping as a Service in HCI
Those who are familiar with PI would know that we could expose an operation mapping as a service by enabling AEX on BPM with NetWeaver 731 on wards. In this blog, I will show how to expose a mapping as a service in HCI
A little bit of background. I have more than a decade of experience with SAP NetWeaver PI, was fortunate enough to work on all most all releases starting with XI 2.0 to the latest 7.5. Since last couple of years, I have been working with HCI and I am primarily responsible for Cloud for Customer integration content for PI and HCI. From my limited experience, I can say from a developer and an end user perspective HCI really signs mainly with its ease of use and a very flexible pipeline. I am totally in love with HCI 🙂 , Sorry NetWeaver PI/PO. More on this bit later
So now, how do I expose a mapping as a service? Turns out to be very simple, I just need to have an Integration Flow with a sender and a mapping as shown below
The sender system here is the tenant on which I will deploy the Integration Flow project. The sender system uses a SOAP sender adapter with message protocol set to 1.x and a service path over which we can later consume the mapping as a service.
Import to note do not specify a WSDL in the sender channel else it will be treated as an asynchronous scenario and the consumer will not receive a response back.
For the mapping, I am using Groovy script, another reason I am in love with HCI. Groovy makes building and parsing XML very simple. Once you are familiar with Groovy trust me you will not use JAVA for working with XML. Groovy is shorter and dynamic. Add to it you can run any java code inside Groovy
For this demo, I am using a very simple mapping, which will simply add two values and return us the result. But same can be done via a message/graphical mapping as well.
The Script code to achieve the same is:
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.xml.StreamingMarkupBuilder
def Message processData(Message message) {
def body = message.getBody(java.lang.String);
def root = new XmlSlurper().parseText(body);
def outputBuilder = new StreamingMarkupBuilder()
outputBuilder.encoding = “UTF-8”
def outxml = {
mkp.declareNamespace(‘ns0′:’‘)
ns0.Target{
outarg(root.arg1.toInteger() + root.arg2.toInteger())
}
}
String result = outputBuilder.bind(outxml)
message.setBody(result)
return message;
}
As you can see here in Groovy, we have a nearly 1:1 ratio of code to XML.
As a last step, we deploy the Integration Flow project and test it. The result of a test run from SoapUI is, shown below for reference.
To call this mapping from another integration project we can simply configure a service call as we call any other SOAP service.
Now why do we need to expose a mapping as a service, well simple answer is reusability. In 1611 release, we plan to ship a new way of replication employees from SuccessFactors employee central to SAP Hybris Cloud for Customer and there we are using this mapping as a service to convert the source format to target format. I will explain this later in a different blog once we have the official release in place.
As a disclaimer, the views expressed here are based on my personal experience.
Hi Abhinash,
I had used a similar technique to call one Integration Flow of HCI from another Integration Flow ( as they were independent pieces ) in my blog series on HCI & SFDC Integration to segregate my Login Integration Flow to get sessionID from my Business Processing Integration Flow HCI -Integrating SalesForce (SFDC) using HCI -Part 1
What I am wondering is - considering that the ServiceCall makes the call from HCI over the internet back to HCI it would be great if there is an option of a "Local Service Call". Something between the Process Call / Service Call to enable one HCI Integration Flow trigger another HCI Integration Flow ( possibly using SOAP Adapter as you have defined )..
Are you aware of any such functionality in planned releases?
Regards,
Bhavesh
Hello Bhavesh,
I will check on this and get back. But as far as I know when we call a URL on the same system it it not routed via the whole Internet infrastructure, the proxy/DNS servers are smart enough to know the source and target system in the call are same system. Still the traffic leaves the teant to the proxy before it is routed back to the tenant. I will update my comment here when I have an answer for you.
Best regards, Abinash
Thanks Abhinash! The reason I thought this authentication happens via the Internet is because I had to provide the Basic Authentication details in my SOAP Receiver Adapter ( Service Call ) else the message would not invoke the additional service!
Look forward to your feedback from the inside! Thanks!
Regards
Bhavesh
Hello Bhavesh, a quick update. For iFlow to iFlow communication within the same Tenant we are developing ‘Direct’ Adapter. As of now the tentative plan is, development to be done in the next 2 months and should be available for public usage by the end of the year.
Best regards, Abinash
Thank You for your feedback! Look forward to this feature getting rolled out!
Regards,
Bhavesh
Hi Abinash,
Good to hear that you guys are creating the adapter for internal communications, once it is done please help us to understand how to create and how it is done.
Regards,
Vijay
Hi Abinash,
Do you know if this functionality is available yet?
Thanks,
Sanjeev | https://blogs.sap.com/2016/08/11/mapping-as-a-service-in-hci/ | CC-MAIN-2021-43 | refinedweb | 963 | 61.56 |
Today, we will be discussing the optimization technique in Python. In this article, you will get to know to speed up your code by avoiding the re-evaluation inside a list and dictionary.
Here I have written the decorator function to calculate the execution time of a function.
import functools import time def timeit(func): @functools.wraps(func) def newfunc(*args, **kwargs): startTime = time.time() func(*args, **kwargs) elapsedTime = time.time() - startTime print('function - {}, took {} ms to complete'.format(func.__name__, int(elapsedTime * 1000))) return newfunc
let's move to the actual function
Avoid Re-evaluation in Lists
Evaluating
nums.append inside the loop
@timeit def append_inside_loop(limit): nums = [] for num in limit: nums.append(num) append_inside_loop(list(range(1, 9999999)))
In the above function
nums.append function references that are re-evaluated each time through the loop. After execution, The total time taken by the above function
o/p - function - append_inside_loop, took 529 ms to complete
Evaluating
nums.append outside the loop
@timeit def append_outside_loop(limit): nums = [] append = nums.append for num in limit: append(num) append_outside_loop(list(range(1, 9999999)))
In the above function, I evaluate
nums.append outside the loop and used
append inside the loop as a variable. Total time is taken by the above function
o/p - function - append_outside_loop, took 328 ms to complete
As you can see when I have evaluated the
append = nums.append outside the
for loop as a local variable, it took less time and speed-up the code by
201 ms.
The same technique we can apply to the dictionary case also, look at the below example
Avoid Re-evaluation in Dictionary
Evaluating
data.get each time inside the loop
@timeit def inside_evaluation(limit): data = {} for num in limit: data[num] = data.get(num, 0) + 1 inside_evaluation(list(range(1, 9999999)))
Total Time taken by the above function -
o/p - function - inside_evaluation, took 1400 ms to complete
Evaluating
data.get outside the loop
@timeit def outside_evaluation(limit): data = {} get = data.get for num in limit: data[num] = get(num, 0) + 1 outside_evaluation(list(range(1, 9999999)))
Total time taken by the above function -
o/p - function - outside_evaluation, took 1189 ms to complete
As you can see we have speed-up the code here by
211 ms.
I hope you like the explanation of the optimization technique in Python for the list and dictionary. Still, if any doubt or improvement regarding it, ask in the comment section. Also, don't forget to share your optimization technique.
Posted on May 3 by:
Prashant Sharma
passionate coder | CS Post-Graduate
Read Next
Data Engineering 101: Automating Your First Data Extract
SeattleDataGuy -
Covering these topics makes you a Javascript Interview Boss - Part 1
Abdelrhman Yousry -
🎉 Deno: 1.0 officially scheduled on May, 13! Review of the features
Olivier -
Discussion
It's was fixed in python3.8
Do you have some source on this? How it was fixed?
so v3.8 is faster? (like the article says)
Wow thanks! I had no idea this was even possible.
For the list
append, is it because of the use of
len()? Just checked the code because I wanted to know why this is happening. Couldn't figure out the dictionary though.
Both are actually the same effect. When you call
data.get(...), you're calling the method, of course, but also looking up the method in the
dataobject's internal dictionary, called
__dict__. What the above article is showing is that there's some savings to be made if you cache the lookup.
He's not optimizing a list at all, but he is optimizing a dict - in both cases.
Because
data[num] = ...is actually
data.set(num, ...), there's another candidate for optimization there, as well - but by now you can probably guess what it is.
Thanks for sharing.
I wonder though if this reduces code readability.
If we are not talking about mission critical paths, then it may not worth the 10-15 % reduction.
Yes it does - idiomatic Python code is typically not written this way.
This. Before applying this technique, make sure to measure the performance of your entire application!
Finally, a better advice for beginners (in my opinion), is to advise them to use comprehensions when they can:
is more idiomatic and even faster than the technique presented here.
@Dimitri Did you tried this -
Kindly Look at the result below -
and as you said -
Excellent! You took the time to verify what I was saying without giving proof - and you we right to do so!
The problem is that the last line in the second function is actually building a list.
Here's a better example:
Great article!
Check out the dis module and disassemble the code and it explains why this happens. TLDR: it's one less instruction inside the hot loop.
This makes me thing of how slow really Python must be if the evaluation takes so long... Anyone got more info that, how much that affects performance and why?
The evaluation runs in the native binary, understand CPython runtime written in C.
Python is slow because of its need to examine each object since the type of object is not known in advance.
Well, this is actually a lie. The underlying type is known. If I am not mistaken, for C it is the py_object with uknown set of properties which the runtime has to evaluate. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/sharmapacific/speedup-python-list-and-dictionary-12kd | CC-MAIN-2020-29 | refinedweb | 896 | 66.54 |
Hello
As a rather recent bike enthusiast, I have recently come across Kawasaki's ZX-6R available for sale at approx Rs 850K. While, the mileage is almost 8,000 Kms, the bike is still unregistered. Other than that, the bike looks in prestine condition.
I would appreciate some guidance on buying the bike, like the things to look for in a bike before buying one etc. As I'm in Lahore, some referrals will be much appreciated.
Regards
hello...im foosa...nice to see another biker here...
now for the bike...i think the asking is wayyyyyyyyyy too much for the bike...the bike should be around 500k or even 550k depending upon the condition...others factors come in when buying a bike but since you mentioned ur new to all this so i'll state only the basic one...BUY A BIKE WHICH YOU CAN CONTROL
here are a couple of good bikes for beginners
a suzuki gs-500f price=525000 (including all taxes)
an Aprilia Rs-125 (very very fast 125 that is price 250k including all taxes )
another good bike is the 2 stroke aprilia RS-250...but to my suprise they stopped making new ones :|...my bad...
and this info is for you all of you...suzuki launched the new GSX650F...not gsxr...but gsx650f...and it costs 700k for a brand new 2008 model including all taxes
here is a pic
Thank you foosa for the feedback.
As you have mentioned that aprilia has discontinued making 250CC bikes, what other alternative do you suggest?
Since aprilia is no more available, I'll try negotiating with the seller of Kawasaki but I doubt it that he will accept an offer of Rs 500K. However, if you know him personally, then please put in a word for me, which may help in the negotiation.
Welcome to the zoo..Since how long have you been driving bikes? No matter for how long, I wouldn't suggest a start from a 600cc directly. As I mentioned on some other thread, jumping from a normal bike to a 600cc directly is like shifting from a glider to an F-16, you know the results. Control is everything and you need some time to get on it aptly.That price for a non-registered 600 is way too much, however a person like me would pay a bit more if the condition is superb.About Aprilias, I don't think theres is even a single bike by Aprilia in Pakistan, plus the Italian bikes can be costly in the long run.That being said, good luck with the purchase, do share the pics when the purchase has been made.
Well then, thanks for pointing me in the right direction. Now it seems I have to hunt for a 250cc. Any fellow member here who has a 250cc bike to spare?
You are not looking for brand new ones?Kawasaki is coming with a brilliant 250 ZXR model this year. Should be a great choice.
Yes I read about it on the internet. I agree its an option worthwhile exploring.
Its retailed at USD 3,500 (PKR 217,000) and with duties and other charges, I figure it'll cost me around PKR 400,000? Is there anyway to get a precise estimate of its landed price?
rats...how could i forget my favrite zx-2r :|..bad bad foosa...
hamie a brand new 2008 zx-2r would cost u 325K including all taxes ...and its one kick ass bike i assure ya (Y)...also try to persuade 7thgear here into selling his bike :P...
@7th..
thanx for reminding about the zx2r ninja...coz i was totally into getting a yamaha fzr-400EXUP....i have 250k in bank...so need another 100k and i'll surely have the zx-2r
hey hamei i have a kawasaki zx6r and its a very kool bike i bought it for 600000 and i am selling it now if intrested do email me at sfjafri27@yahoo.com will gvive a you a gud price
@foosa
I was rather surprised as to how you could forget the 2R being a Kawi fan, as its such a nice machine.
@Hamie
Its always upto the purchaser what he wants for a beginner's bike. But this is why I wouldn't suggest a 600 to start with. Learning to control is everything.
Aprilia 125 is looking great. is it 2 Stroke or 4 Stroke???
Just checkout this Video
@7th gear... I agree so thats why i'm trying to find a dealer who'll help me in importing a Kawasaski's ZX-2R.
@foosa... hey man, the aftab guy i told you about gave me an estimate of approx Rs 475K for the above bike.
@saya88... sure i'll be interested. Can you pls post some pictures of your bike here and share some details like mileage, accidental history (try being honest), year of make etc. Much appreciate it.
@foosa again... i'll call you.
^Lets hope you do import one soon..
ha ha ha... yeah!
import we shall...and import we must...break we shall the monoply of the local sports bikes dealers :P...ALL HAIL MEGATRON (Y)
my personal view, any bike below 600cc looks ugly to me. If you want a real sports bike, which also looks and sounds like a real sports bike go for atleast 600cc bike. And regarding controling a 600cc bike.I have known atleast 2 ppl who had yamaha yzf-r6 and cbr-600f4 as their first bike. Both of these guys never drove even a normal cd70 in their life. And if you ride these bikes at normal crusing speeds you wont have any problem. Just dont try to race.
I would still say get a 600cc R6 preferably. It will look and sound good. You will have real fun on it and it would surely be a head turner.
O! maverick! you spoke rather the very words that resonate how I feel right now. Who knows, I might end up buying 600cc at the end of the day. Being a tall heavy built (180lbs) man, I am a bit aprehensive with buying a 250cc.
I am also considering Suzuki's GS-500F (recommended by foosa above) as a viable option. Lets see... working on it.
I would like to add my views in addition to support Maverickk... control is in your hands and mind ..... just dun't let ur craziness drive ur bike while you are on it ....... in my personal experience even heavier bike should not be matter for u to control.... i moved from 100cc yamaha to 750 katana which weight 480 lbs(dry weight) but was not very difficult to control.... and my self only 150 lbs.... so gud luck watever u get.....
hamir, dont even think of buying a 250cc bike. Its waste of money. Believe you me. Buy a real thing. Atleast a 600cc bike by yamaha, honda , suzuki or kawasaki all will look ang sound good and will be head turner. No one looks at you on a 250cc bike bhai. Anything below 600cc is waste of money.
And as attiq said control is in your hands. Doesnt matter what your age is. Belive you me, i have seen a 18 year old riding a yamaha R6 in lahore at fairly good speed, i was at above 100km/h on car following him and he must be well above that speed. As long as you are careful, wont matter at all. YOu can even handle hayabusa 1300cc. Makes no difference.
My personal choice would be yamaha R6. Rest is up to you. But plz dont waste your money on anything below 600cc.Good luck with your purchase. Do post pictures of your bike when you get it. | https://www.pakwheels.com/forums/t/looking-to-buy-kawasaki-zx-6r/57986 | CC-MAIN-2017-04 | refinedweb | 1,304 | 84.68 |
Successful software projects do not end with the product's rollout. In most projects, new versions that are based on their predecessors are periodically released. Moreover, previous versions have to be supported, patched, and adjusted to operate with new operating systems, locales, and hardware. Web browsers, commercial databases, word processors, and multimedia tools are examples of such products. It is often the case that the same development team has to support several versions of the same software product simultaneously. Usually, a considerable amount of software can be shared among different versions of the same product, but each version also has its specific components. Namespace aliases can be used in these cases to switch swiftly from one version to another. Namespace aliases can provide dynamic namespaces; that is, a namespace alias can point at a given time to a namespace of version X and, at another time, it can refer to a different namespace. For example:
namespace ver_3_11 //16 bit
{
class Winsock{/*..*/};
class FileSystem{/*..*/};
};
namespace ver_95 //32 bit
{
class Winsock{/*..*/};
class FileSystem{/*..*/};
}
int main()//implementing 16 bit release
{
namespace current = ver_3_11; // current is an alias of ver_3_11
using current::Winsock;
using current::FileSystem;
FileSystem fs; // ver_3_11::FileSystem
//...
}
In this example, the alias 'current' is a symbol that can refer to either ver_3_11 or ver_95. To switch to a different version, you only have to assign a different namespace to 'current'.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/tips/Tip/13060 | CC-MAIN-2017-43 | refinedweb | 259 | 56.35 |
I'm stuck badly in the code given below....i need to apply guassain filter in my results for good video result ?
import cv2
def inside(r, q): rx, ry, rw, rh = r qx, qy, qw, qh = q return rx > qx and ry > qy and rx + rw < qx + qw and ry + rh < qy + qh
def draw_detections(img, rects, thickness = 1): for x, y, w, h in rects: # the HOG detector returns slightly larger rectangles than the real objects. # so we slightly shrink the rectangles to get a nicer output. pad_w, pad_h = int(0.15w), int(0.05h) cv2.rectangle(img, (x+pad_w, y+pad_h), (x+w-pad_w, y+h-pad_h), (0, 255, 0), thickness)
if __name__ == '__main__':
optical flow algorithm
hog = cv2.HOGDescriptor() hog.setSVMDetector( cv2.HOGDescriptor_getDefaultPeopleDetector() ) cap=cv2.VideoCapture(0) while True: _,frame=cap.read() found,w=hog.detectMultiScale(frame, winStride=(8,8), padding=(32,32), scale=(1.05)) draw_detections(frame,found) cv2.imshow('feed',frame) ch = 0xFF & cv2.waitKey(1) if ch == 27: break cv2.destroyAllWindows()
hog != optical flow. (it seems, you blindly c/p code snippets)
no i cant understand why my code is diviided into optical flow alo.
there is no optical flow in your code.
let's not talk about your current code - what are you trying to achieve ?
kindly give me your email id so that i can take help from you ... i shall be very thankful to you.
sorry, but no, let's keep it on this site.
again, what are you trying to do ?
ok umm im trying to detect the people and for this reson i want clear result for which i have to go for guasssain filter....which is making me a problem....secondly in some people i want to detect specfic people on the baises of people clothes colour.
thanks for your useless help | https://answers.opencv.org/question/103903/im-stuck-badly-in-the-code-given-belowi-need-to-apply-guassain-filter-in-my-results-for-good-video-result/ | CC-MAIN-2019-26 | refinedweb | 306 | 76.52 |
Back
What makes using range() to make the process of generating a quadrillion values in a instant of time, I am amazed by this can anyone answer this question?
I also tried to run my own function but it didn't helped me:
def my_crappy_range(N): o = 0 while o< N: yield o o += 1 return
The reason behind the speed is that in Python 3+ we're using mathematical reasoning about the bounds instead of a direct iteration. It just check all the objects between start and stop and stride value doesn't step over the numbers.
For Ref. you can check the following code:
>>> y, x = 10000000000000, range(10000000000001)>>> class MyInt(int):... pass...>>> y_ = MyInt(y)>>> y in x # calculates immediately :)True>>> y_ in x # iterates for ages.. :(^\Quit (core dumped)
Happy Learning.
4 <= 997 < 1000, and(997 - 4) % 3 == 0.
4 <= 997 < 1000, and
(997 - 4) % 3 == 0.
import collections.abca = range(5)isinstance(a, collections.abc.Sequence)
import collections.abc
a = range(5)
isinstance(a, collections.abc.Sequence)
You can use the following video tutorials to clear all your. | https://intellipaat.com/community/214/why-is-1000000000000000-in-range-1000000000000001-so-fast-in-python-3 | CC-MAIN-2021-43 | refinedweb | 183 | 66.03 |
array = ( [0] => array ( country_code = 'DE', country_name = 'Germany', city_name = 'Munich' ), [1] => array ( country_code = 'DE', country_name = 'Germany', city_name = 'Berlin' ), [2] => array ( country_code = 'US', country_name = 'United States', city_name = 'New York' ), );
I used two methods of sorting this dataset by subkey (e.g. "country_name"), one is a custom sort function:
var cfg_sortby_key = 'country_name';
function cSort(a, b) {
var y = a[cfg_sortby_key]; var x = b[cfg_sortby_key];
return ((x < y) ? -1 : ((x > y) ? 1 : 0));
}
The other is an implementation of quicksort in javascript.
Here's the resultset of sorting 3300 rows on my PC (Intel Quadcore Q9540 @ 2.66 GHz, 4GB RAM using Windows Vista 64 bit)
Opera 9.52: - sorting took 46 ms. using quicksort - sorting took 32 ms. using custom sorting
FF 3.03: - sorting took 40 ms. using quicksort - sorting took 18 ms. using custom sorting
Safari 3.1.2: - sorting took 84 ms. using quicksort - sorting took 57 ms. using custom sorting
Google Chrome 0.2.149.30: - sorting took 54 ms. using quicksort - sorting took 84 ms. using custom sorting
IE 7.0.6001.18000 - sorting took 196 ms. using quicksort - sorting took 298 ms. using custom sorting
Couldn't test out IE8 beta, since I cant get it to install on my system (something with a wups.dll error)
I was slightly disappointed by Google Chrome - didn't expect anything better off IE7. Other then that nothing spectacular, allthough it seems quicksort is not always as quick as people say it is...
What is good news is that the race is on to improve Javascript performance in future browser versions, an issue which is critical to the success of web applications but had previously been neglected by browser makers. | http://www.webmasterworld.com/html/3757332.htm | CC-MAIN-2014-52 | refinedweb | 280 | 68.87 |
Computer Programming - Basic Syntax
Let’s start with a little coding, which will really make you a computer programmer. We are going to write a single-line computer program to write Hello, World! on your screen. Let’s see how it can be written using different programming languages.
Hello World Program in C
Try the following example using our online compiler option available at.
For most of the examples given in this tutorial, you will find a Try it option in our website code sections at the top right corner that will take you to the online compiler.
Try to change the content inside printf(), i.e., type anything in place of Hello World! and then check its result. It just prints whatever you keep inside the two double quotes.Live Demo
#include <stdio.h> int main() { /* printf() function to write Hello, World! */ printf( "Hello, World!" ); }
which produces the following result −
Hello, World!
This little Hello World program will help us understand various basic concepts related to C Programming.
Program Entry Point
For now, just forget about the #include <stdio.h> statement, but keep a note that you have to put this statement at the top of a C program.
Every C program starts with main(), which is called the main function, and then it is followed by a left curly brace. The rest of the program instruction is written in between and finally a right curly brace ends the program.
The coding part inside these two curly braces is called the program body. The left curly brace can be in the same line as main(){ or in the next line like it has been mentioned in the above program.
Functions
Functions are small units of programs and they are used to carry out a specific task. For example, the above program makes use of two functions: main() and printf(). Here, the function main() provides the entry point for the program execution and the other function printf() is being used to print an information on the computer screen.
You can write your own functions which we will see in a separate chapter, but C programming itself provides various built-in functions like main(), printf(), etc., which we can use in our programs based on our requirement.
Some of the programming languages use the word sub-routine instead of function, but their functionality is more or less the same.
A C program can have statements enclosed inside /*.....*/. Such statements are called comments and these comments are used to make the programs user friendly and easy to understand. The good thing about comments is that they are completely ignored by compilers and interpreters. So you can use whatever language you want to write your comments.
Whitespaces
When we write a program using any programming language, we use various printable characters to prepare programming statements. These printable characters are a, b, c,......z, A, B, C,.....Z, 1, 2, 3,...... 0, !, @, #, $, %, ^, &, *, (, ), -, _, +, =, \, |, {, }, [, ], :, ;, <, >, ?, /, \, ~. `. ", '. Hope I'm not missing any printable characters from your keyboard.
Apart from these characters, there are some characters which we use very frequently but they are invisible in your program and these characters are spaces, tabs (\t), new lines(\n). These characters are called whitespaces.
These three important whitespace characters are common in all the programming languages and they remain invisible in your text document −
A line containing only whitespace, possibly with a comment, is known as a blank line, and a C compiler totally ignores it. Whitespace is the term used in C to describe blanks, tabs, newline characters, and comments. So you can write printf("Hello, World!" ); as shown below. Here all the created spaces around "Hello, World!" are useless and the compiler will ignore them at the time of compilation.Live Demo
#include <stdio.h> int main() { /* printf() function to write Hello, World! */ printf( "Hello, World!" ); }
which produces the following result −
Hello, World!
If we make all these whitespace characters visible, then the above program will look like this and you will not be able to compile it −
#include <stdio.h>\n \n int main()\n { \n \t/* printf() function to write Hello, World! */ \n \tprintf(\t"Hello, World!"\t);\n \n }\n
Semicolons
Every individual statement in a C Program must be ended with a semicolon (;), for example, if you want to write "Hello, World!" twice, then it will be written as follows −Live Demo
#include <stdio.h> int main() { /* printf() function to write Hello, World! */ printf( "Hello, World!\n" ); printf( "Hello, World!" ); }
This program will produce the following result −
Hello, World! Hello, World!
Here, we are using a new line character \n in the first printf() function to create a new line. Let us see what happens if we do not use this new line character −Live Demo
#include <stdio.h> int main() { /* printf() function to write Hello, World! */ printf( "Hello, World!" ); printf( "Hello, World!" ); }
This program will produce the following result −
Hello, World! Hello, World!
We will learn identifiers and keywords in next few chapters.
Program Explanation
Let us understand how the above C program works. First of all, the above program is converted into a binary format using C compiler. So let’s put this code in test.c file and compile it as follows −
$gcc test.c -o demo
If there is any grammatical error (Syntax errors in computer terminologies), then we fix it before converting it into binary format. If everything goes fine, then it produces a binary file called demo. Finally, we execute the produced binary demo as follows −
$./demo
which produces the following result −
Hello, World!
Here, when we execute the binary a.out file, the computer enters inside the program starting from main() and encounters a printf() statement. Keep a note that the line inside /*....*/ is a comment and it is filtered at the time of compilation. So printf() function instructs the computer to print the given line at the computer screen. Finally, it encounters a right curly brace which indicates the end of main() function and exits the program.
Syntax Error
If you do not follow the rules defined by the programing language, then at the time of compilation, you will get syntax errors and the program will not be compiled. From syntax point of view, even a single dot or comma or a single semicolon matters and you should take care of such small syntax as well. In the following example, we have skipped a semicolon, let's try to compile the program −Live Demo
#include <stdio.h> main() { printf("Hello, World!") }
This program will produce the following result −
main.c: In function 'main': main.c:7:1: error: expected ';' before '}' token } ^
So the bottom-line is that if you are not following proper syntax defined by the programming language in your program, then you will get syntax errors. Before attempting another compilation, you will need to fix them and then proceed.
Hello World Program in Java
Following is the equivalent program written in Java. This program will also produce the same result Hello, World!.Live Demo
public class HelloWorld { public static void main(String []args) { /* println() function to write Hello, World! */ System.out.println("Hello, World!"); } }
which produces the following result −
Hello, World!
Hello World Program in Python
Following is the equivalent program written in Python. This program will also produce the same result Hello, World!.Live Demo
# print function to write Hello, World! */ print "Hello, World!"
which produces the following result −
Hello, World!
Hope you noted that for C and Java examples, first we are compiling the programs and then executing the produced binaries, but in Python program, we are directly executing it. As we explained in the previous chapter, Python is an interpreted language and it does not need an intermediate step called compilation.
Python does not require a semicolon (;) to terminate a statement, rather a new line always means termination of the statement. | https://www.tutorialspoint.com/computer_programming/computer_programming_syntax.htm | CC-MAIN-2019-09 | refinedweb | 1,311 | 66.84 |
SOAP::Lite 0.67, namespaces, .NET and jwsdp 1.6
Expand Messages
- I'm creating a soap service with SOAP::Lite. For the time being
the primary client is .NET.
I've been able to get Perl and Ruby working just fine, with all
kinds of ways of sending the data.
I spent a lot of time trying to get things to work for .NET and jwsdp
1.6 in ways suggested in the documentation (primarily fully qualifying
the uri and type of my SOAP::Data). Eventually I started making
progress by setting prefix('') and not using uri() for each of the data
elements returned.
This left the <method_name>Response element in the soap body.
By changing SOAP::Lite as follows:
--- /opt/local/lib/perl5/site_perl/5.8.7/SOAP/Lite.pm 2006-01-27 13:31:57.000000000 -0800
+++ /tmp/Lite.pm 2006-05-08 22:46:42.000000000 -0700
@@ -2566,7 +2566,7 @@
my $result = $self->serializer
->prefix('s') # distinguish generated element names between client and server
- ->uri($method_uri)
+ #->uri($method_uri)
->envelope(response => $method_name . 'Response', @results);
$self->destroy_context();
return $result;
I was able to get to a fully working state.
This suggests to me that I'm doing something wrong, but what?
If I'm not, is there a way to make it so the response envelope is
as "clean" as both .NET and Java seem to want it, without hacking
the SOAP::Lite code?
I can think of two possible pathways here:
* The wsdl file I have created, which is used to generate proxy
code in both .NET and Java, is not adequately reporting what I
actually send from the server.
* I'm sending the wrong stuff from the server for my correct
wsdl file.
I created the wsdl file based on ones I have used in the past and
Google's GoogleSearch.wsdl.
The server code and wsdl are attached. Note these are complete
and utter prototypes and shouldn't be taken as an example of my
excellent coding style and knowledge ;) It's been a while since
I've used SOAP::Lite and I don't think I ever got it.
Thanks for input.
--
Chris Dent
[...]
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/5392?l=1 | CC-MAIN-2015-35 | refinedweb | 374 | 67.35 |
Overview. Since I’m focusing here on Scala sbt projects, I’m also assuming that sbt is installed.
The only “trick” required for calling back to R from Scala is telling sbt where the jvmr jar file is located. You can find the location from the R console as illustrated by the following session:
> library(jvmr) > .jvmr.jar [1] "/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar"
This location (which will obviously be different for you) can then be added in to your sbt classpath by adding the following line to your build.sbt file:
unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar")
Once this is done, calling out to R from your Scala sbt project can be carried out as described in the jvmr documentation. For completeness, a working example is given below.
Example
In this example I will use Scala to simulate some data consistent with a Poisson regression model, and then push the data to R to fit it using the R function glm(), and then pull back the fitted regression coefficients into Scala. This is obviously a very artificial example, but the point is to show how it is possible to call back to R for some statistical procedure that may be “missing” from Scala.
The dependencies for this project are described in the file build.sbt
name := "jvmr test" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "", "Sonatype Releases" at "" ) unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar") scalaVersion := "2.11.2"
The complete Scala program is contained in the file PoisReg.scala
import org.ddahl.jvmr.RInScala import breeze.stats.distributions._ import breeze.linalg._ object ScalaToRTest { def main(args: Array[String]) = { // first simulate some data consistent with a Poisson regression model val x = Uniform(50,60).sample(1000) val eta = x map { xi => (xi * 0.1) - 3 } val mu = eta map { math.exp(_) } val y = mu map { Poisson(_).draw } // call to R to fit the Poission regression model val R = RInScala() // initialise an R interpreter R.x=x.toArray // send x to R R.y=y.toArray // send y to R R.eval("mod <- glm(y~x,family=poisson())") // fit the model in R // pull the fitted coefficents back into scala val beta = DenseVector[Double](R.toVector[Double]("mod$coefficients")) // print the fitted coefficents println(beta) } }
If these two files are put in an empty directory, the code can be compiled and run by typing sbt run from the command prompt in the relevant directory. The commented code should be self-explanatory, but see the jvmr documentation for further details. | https://darrenjw.wordpress.com/tag/rstats/ | CC-MAIN-2015-14 | refinedweb | 489 | 50.63 |
[using FreeCommander XE build 665 public beta]
In the Settings I've configured an editor as follows:
Program: C:\emacs\bin\emacsclientw.exe
Parameters: -n %ActiveItem%
When I select a file in FreeCommand and press F4 to invoke the editor, the selected file is passed to the editor as expected. If the file name contains spaces, it needs to be quoted so I add double quotes around the %ActiveItem%, as follows:
Parameters: -n "%ActiveItem%"
This fixes this particular problem. Using procmon (from sysinternals) I can see the process is started as expected with:
Command line: "C:\emacs\bin\emacsclientw.exe" -n "a b.txt"
However, if I select a file in the search results and press F4, the full path to the file is specified on the command line BUT a second set of double quotes are added around the file argument, so that the process is started as follows:
Command line: "C:\emacs\bin\emacsclientw.exe" -n ""C:\Users\John\Dropbox\a b.txt""
This second set of double quotes corrupts the command line and the file fails to load in the editor. I can work around this by removing the double quotes around %ActiveItem% in the Settings, but then this breaks file loading in the main FreeCommander window.
Can this easily be fixed, so that parameters are quoted consistently when F4 is pressed, either in the main window or in the search results window?
Thanks,
--- John.
Inconsistent file quoting in main window vs search results
Bugs and issues - current donor version.
Post Reply
3 posts • Page 1 of 1
- Posts: 22
- Joined: 30.03.2009, 08:37
- Posts: 22
- Joined: 30.03.2009, 08:37
Re: Inconsistent file quoting in main window vs search resul
In case others run into the same problem, the following short program (C#, built as a Windows app) can be used as a work around. After compiling, I register it with FreeCommander as the program to be invoked when pressing F4. The program fixes up any inconsistency in the arg list prior to invoking emacsclientw.exe, so it now works properly when I press F4 from one of the main panes and also when I press F4 on an item in the search results.
Code: Select all
using System; using System.Diagnostics; namespace RunFromFC { static class Program { [STAThread] static void Main(string[] args) { //Debugger.Launch(); if (args == null || args.Length == 0) return; string filename = (args.Length == 1) ? args[0] : string.Join(" ", args); if (filename[0] != '"') filename = string.Format("\"{0}\"", filename); Process.Start(@"C:\emacs\bin\emacsclientw.exe", string.Format("-n {0}", filename)); } } }
Re: Inconsistent file quoting in main window vs search resul
Post Reply
3 posts • Page 1 of 1
Who is online
Users browsing this forum: No registered users and 3 guests | http://www.forum.freecommander.com/viewtopic.php?f=7&t=6120&p=19581 | CC-MAIN-2020-40 | refinedweb | 460 | 62.38 |
Based on a handout by Eric Roberts
Karel starts in the world on the left. How would you program Karel to pick up the beeper and transport it to the top of the ledge? Karel should drop the beeper at the corner of 2nd Street and 4th Avenue and then continue one more corner to the east, ending up on 5th Avenue. At the end of your program, Karel's world should look like the picture on the right.
If Karel only knows the commands:
move()
pickBeeper()
putBeeper()
turnLeft()
A solution is provided for this example, so that you can see what a full program looks like!
/** * Program: Step Up * ---------------- * Your first example Karel program. Have Karel pick up the beeper infront * of her and place it on top of the ledge. * This is a comment. Your computer will ignore it. */ public class StepUp extends Karel { // When you start your program, this code will be executed. public void run() { move(); pickBeeper(); turnLeft(); move(); turnLeft(); turnLeft(); turnLeft(); move(); putBeeper(); move(); } } | http://web.stanford.edu/class/cs106a/examples/stepUp.html | CC-MAIN-2019-09 | refinedweb | 169 | 81.33 |
Creating a window fit for the all space of the screen
Hi guys,
Here is a simple window in QML. It occupies only a space with 800 pixels as the width and 600 as height:
import QtQuick 2.9 import QtQuick.Window 2.2 Window { visible: true width: 800; height: 600 color: "gray" // ... }
How to make it fit the whole screen of the target device, please? That is, it occupies the whole display screen.
- SGaist Lifetime Qt Champion
Hi,
You can set the visibility property:
visibility: Window.FullScreen
You can also to use Screen QML Type
like
Window{ width:Screen.width; height:Screen.height; } | https://forum.qt.io/topic/86294/creating-a-window-fit-for-the-all-space-of-the-screen | CC-MAIN-2019-22 | refinedweb | 104 | 76.22 |
PlayMovieTexture
From Unify Community Wiki
Author: Jake Bayer (BakuJake14)
Description
This is a simple script that tells a texture (in this case, a Movie Texture) to play at the start of your game. Requires Unity Pro.
Usage
The process goes as followed:
- If you haven't done so already, set up a Movie Texture in the Unity editor. Documentation for Movie Textures can be found here.
- Once you have a Movie Texture set up, attach the script below to the object holding the Movie Texture.
- Hit Play to see your Movie Texture.
PlayMovie.cs
//Written by Jake Bayer //Posted July 15, 2013 //This is a simple script that plays a Movie Texture at the start of your game. using UnityEngine; using System.Collections; public class PlayMovie : MonoBehaviour { private MovieTexture _texture; void Awake() { _texture = renderer.material.mainTexture as MovieTexture; } // Use this for initialization void Start () { _texture.Play(); } } | https://wiki.unity3d.com/index.php/PlayMovieTexture | CC-MAIN-2020-34 | refinedweb | 146 | 65.73 |
.
Lot....
#include "vector"using namespace std;int main(){ vector<int> vectorI; vectorI.push_back(5); vector<vector<int>> vectorV; vectorV.push_back(vectorI); return 0;}
#include
int had said before that I miss case fall through in C# and I promised that I'd write about some weird uses of this feature in C++.
I think the weirdest use of case statement was made by Tom Duff. He needed to write some code in C that copied an array of shorts into a pre-programmed IO data register. The code looked like
void loopSend(register short* to, register short* from, register int count){ do *to = *from++; while(--count > 0);}
void
*to = *from++;
The compiler emitted instructions to compare the count with zero after each copying each short. This had a considerable perf hit and so he decided to apply common loop unrolling technique. This is where things became extremely weird. He used a little-known C feature in which switch-case statements can be inter-leaved with other statements. The optimized code looked like); }}
}
If you are wondering if its legal C, then think again, as this is not only legal C, it's legal C++ as well. If you change all *to to *to++ this becomes a memcpy code which is more relevant to most programmers. When I first saw this is college, I had to actually step through the code to figure out how it worked and then kept wondering why there was no obscene code warning before the code and so I made it a point to add it in the blog title :)
Some people took this to the extreme and created a stack based co-routine. People who take yield return for granted should try understanding the following
#define crBegin static int state=0; switch(state) { case 0:#define yieldReturn(i,x) do { state=i; return x; case i:; } while (0)#define crFinish }int function(void) { static int i; crBegin; for (i = 0; i < 10; i++) yieldReturn(1, i); crFinish;}
#define
crBegin;
yieldReturn(1, i);
crFinish;
Even);//}
//foreach (IMotionDetector detector in list) :(
#if <token> ... #endif
#if commentforeach (IMotionDetector detector in list){ Console.WriteLine("{0} ({1})", detector.Name, detector.Description);}#endif
#if falsefore.
We...).); }
string.
One of the issues with properties in C#1.x is that the get and the set accessors get the same accessibility. In some situations you'd want to impose more usage restriction on the set accessor then on get . In the following sample both the get and set are public.
class Employee{ private string m_name; public string Name { get { return m_name; } set { m_name = value; } }}class Program{ static void Main(string[] args) { Employee emp = new Employee(); emp.Name = "Abhinaba Basu"; }}
class
emp.Name =
In case you want set to be more restricted than get you have no direct way of getting it done in C#1.x (other than splitting them into different properties). In C# 2.0 its possible to have different accessibility on get and set accessors using accessor-modifier. The following code gives protected access to set and public to get
class Employee{ private string m_name; public string Name { get { return m_name; } protected set { m_name = value; } }}class Program{ static void Main(string[] args) { Employee emp = new Employee(); emp.Name = "Abhinaba Basu"; // Will fail to compile }}
class Program
This is really cool because in many designs I have felt the need to make set accessor protected and allow get to be internal or public. With this addition to C# this becomes easy to do.
There are however lot of gotchas and cases to consider before going ahead and blindly using this. I have listed some which I found need to be considered....
Be sure to check the C# spec before using this feature as there are some subtleties specially in terms of member lookup....
Someone posted a comment in the internal alias on protected member access. The question is that the following code does not compile with the error "cannot access protected member Class1.Foo()"
public class Class1{ protected void Foo() { }}public class Class2 : Class1{ private void Test() { Class2 a = new Class2(); Class1 b = new Class2(); a.Foo(); b.Foo(); // Fails to compile }}
public
a.Foo();
b.Foo(); // Fails to compile
The argument that this code should work is that since Class1 is the base class of Class2 the access should be allowed. Long back (in another life) I had seen the same issue in C++. This is what I replied to the question....
class Class1{protected: void Foo () { }};class Class2 : public Class1{public: void Test () { Class2* a = new Class2 (); Class1* b = new Class2 (); a->Foo(); b->Foo(); }};
protected
};
Class2* a =
Class1* b =
a->Foo();
b->Foo();
This is because b is pointer (or reference) to a base class. For all we know b could point to an object of any class derived from Class1. If this call was allowed then it would break encapsulation as you would get access to a method inside the derived class
int i;Console.WriteLine(i);
The above code will fail to compile. This is because the C# compiler requires a variable to be definitely assigned at the location where it is used. It figures this out using static flow analysis and the above case is the easiest catch for it.
However, there is a small trivia regarding this. Lets consider the following code
class MyClass{ public int i; public MyClass() { }}class Program{ static void Main(string[] args) { MyClass myclass = new MyClass(); Console.WriteLine(myclass.i); }}
So I won an Oscar too :) and was honored with a link from Raymond Chen. Sometime back I posted on including try/catch/retry block in C# and he is recommending not using such automatic retrying. However, I do not agree with the flat recommendation and I think this varies based on the scenario.
This is what I replied to his post....
I agree that just retrying does not work in all situation. In example I used does not arbitrarily retry the operation 3 times. It uses an Exception class which explicitly uses a public member to signal whether the operation is retryable.
In all situations amounting to 10s of GB of data situations like an interactive program (UI client) prompting the user for retry is the correct thing to do. In long running un-attended batch conversion job where we know for sure that transient failures occur and get resolved on retrying, using retry is the right approach..
Thanks to James Manning's blog I remembered the new ?? operator introduced in C#2.0 and that very few people seem to use it.
This new ?? operator is mainly used for nullable types but can also be used with any reference type.
Use with nullable type
In case you are converting a nullable type into a non-nullable type using code as follows
int? a = null;// ...int e = (int)a;
// ...
int e = (int)a;
You will land into trouble since a is null and an System.InvalidOperationException will be thrown. To do this correctly you first need to define what is the value of int which you'll consider as invalid (I choose -1) and then use any of the following expressions.
int? a = null;// ...int e = (a != null) ? (int)a : -1;int f = a.HasValue ? a.Value : -1;int g = a.GetValueOrDefault(-1);
This is where the ?? operator comes into play. Its a short hand notation of doing exactly this and you can just use
int? a = null;// ...int c = a ?? -1;
Use with any reference type
In code many times we compare reference to null and assign values based on it. I made a quick search on some code and found the following code used to write function arguments to a log file
string localPath = (LocalPath == null ? "<null>" : LocalPath);
string localPath = (LocalPath ==
This can be coded easily using
string localPath = LocalPath ?? "<null>";
<Also read the later posting on the same topic>
Frequently in code we do string comparisons which is culture-agnostic. While we should follow the string comparison guidelines, there is a much faster way of getting it done using string interning.
As a sample lets take method that accepts a string and does some action based on the string.
static void ExecuteCommand(string command){ if (command == "START") Console.WriteLine("Starting Build..."); else if (command == "STOP") Console.WriteLine("Stopping Build..."); else if (command == "DELETE") Console.WriteLine("Deleting Build..."); else Console.WriteLine("Invalid command...");}
static
The problem with this code is that this actually does a full string comparison using string.Equals(command, "START", StringComparison.Ordinal); which results in iterating through each of the bytes of the two strings and comparing them. In case the strings are long and there are a lot of strings to compare, this becomes a slow process. However if we are sure that the string command is an interned string we can make this much faster. Using the following code
static void ExecuteCommand(string command){ if (Object.ReferenceEquals(command,"START")) Console.WriteLine("Starting Build..."); else if (Object.ReferenceEquals(command, "STOP")) Console.WriteLine("Stopping Build..."); else if (Object.ReferenceEquals(command, "DELETE")) Console.WriteLine("Deleting Build..."); else Console.WriteLine("Invalid command...");}
This uses just the reference comparison (memory address comparison) and is much faster. However, the catch is that the command has to be an interned string. In case the command is not a literal string and is either generated or accepted from the user as a command line argument then this will not be an interned string and the comparisons will always fail. However we can intern it using the following
string command = string.Intern(args[0].ToUpperInvariant());ExecuteCommand(command);. | http://blogs.msdn.com/b/abhinaba/archive/2005/11.aspx | CC-MAIN-2015-14 | refinedweb | 1,596 | 64.1 |
Hi/2. Diego Biurrun wrote: > On Mon, Sep 17, 2007 at 04:41:34PM -0700, Dave Yeo wrote: > >> Reimar Doeffinger wrote: >> >>> On Mon, Sep 17, 2007 at 12:35:46PM +0200, Diego Biurrun wrote: >>> >>>> Even more important: Reimar came up with a header file that provides the >>>> correct definition. So why can't this be used instead of adding this >>>> (possibly brittle as explained by Mans) check? >>>> >>> I'm not sure that is a proper header file, at least it's not a system >>> one... >>> But it seems that _socklen_t is in some header file, maybe using that is >>> good enough? >>> >> Grepping include I found in <386/ansi.h> >> /* >> * Types which are fundamental to the implementation and must be declared >> * in more than one standard header are defined here. Standard headers >> * then use: >> * #ifdef _BSD_SIZE_T_ >> * typedef _BSD_SIZE_T_ size_t; >> * #undef _BSD_SIZE_T_ >> * #endif >> */ >> ... >> #define _BSD_SOCKLEN_T_ __uint32_t /* socklen_t (duh) */ >> ... >> >> And in <sys/_type.h> >> ... >> typedef __uint32_t __socklen_t; >> ... >> So it seems that it should be uint32_t. Also __uint32_t is typedef as >> unsigned int in <386/_types.h>. >> > > I get to repeat my question: Is there a way to include this header file > instead of adding the check from the patch? > > That header file doesn't include 'socklen_t' but '__socklen_t'. So we need not to include it for 'socklen_t'. -- KO Myung-Hun Using Mozilla SeaMonkey 1.1.4 Under OS/2 Warp 4 for Korean with FixPak #15 On AMD ThunderBird 750 MHz with 512 MB RAM Korean OS/2 User Community : | http://ffmpeg.org/pipermail/ffmpeg-devel/2007-September/039280.html | CC-MAIN-2015-32 | refinedweb | 247 | 74.79 |
c# interview questions and answers paper 229 - skillgun
Note: Paper virtual numbers may be different from actual paper numbers. In the page numbers section, the website displays virtual numbers.
Which is the best way to copy data from an integer type to a short type?
Use explicit conversion.
Ex: int i=900;
short s=(short) i;
Use implicit conversion.
Ex: int i=10;
short s=i;
Use Boxing conversion .
Use methods present Convert class.
Use explicit conversion.
Ex: int i=900;
short s=(short) i;
Always use explicit conversion when copying data from a higher-range data type to a lower-range data type.
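A minimal runnable sketch of the point above, using the values from the answer (the commented-out line shows what the compiler rejects):

```csharp
using System;

public class ConversionDemo
{
    public static void Main()
    {
        int i = 900;
        // short s = i;        // does not compile: no implicit conversion from int to short
        short s = (short)i;    // explicit cast is required; 900 fits in short's range (-32768..32767)
        Console.WriteLine(s);  // prints 900
    }
}
```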
What is the size of the int data type in C#?
2 bytes
4 bytes
8 bytes
1 byte
The size of the int data type in C# is 4 bytes.
What is the size of the sbyte data type in C#?
1 bit
1 byte
0 bytes
The size of the sbyte data type in C# is 1 byte.
Which data type is a user-defined data type in C#?
enum
interface
struct
All the above.
enum, interface and struct are user-defined data types in C#.
What is a reference type ?
A reference type is a data type where the variable is stored in one memory location and the actual data is stored in a different memory location.
A reference type variable always holds an address which points to the actual data location.
Ex: if you declare a string as follows: string s = "skillgun";
then s is present in one memory location and holds the address of the location where "skillgun" is stored.
This means reference type variables do not store the actual data themselves.
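To illustrate that a reference type variable only holds an address, here is a small sketch; the Person class is made up for illustration:

```csharp
using System;

public class Person
{
    public string Name;
}

public class ReferenceDemo
{
    public static void Main()
    {
        Person a = new Person { Name = "skillgun" };
        Person b = a;              // copies the reference (address), not the object
        b.Name = "changed";
        Console.WriteLine(a.Name); // prints "changed": both variables point to the same object
    }
}
```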
What is the difference between a value type and a reference type?
A value type variable always holds the actual data, but a reference type variable always holds the address of the actual data.
A value type is a data type whose variable name and data are both stored in the same memory location, whereas a reference type is a data type whose variable name and data are stored in different memory locations.
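The difference shows up when you copy variables. A sketch, assuming a hypothetical struct (value type) and class (reference type) with the same field:

```csharp
using System;

public struct PointValue { public int X; }   // value type: variable holds the data
public class PointRef    { public int X; }   // reference type: variable holds an address

public class CopyDemo
{
    public static void Main()
    {
        PointValue v1 = new PointValue { X = 1 };
        PointValue v2 = v1;        // the data itself is copied
        v2.X = 99;
        Console.WriteLine(v1.X);   // prints 1: v1 keeps its own copy

        PointRef r1 = new PointRef { X = 1 };
        PointRef r2 = r1;          // only the address is copied
        r2.X = 99;
        Console.WriteLine(r1.X);   // prints 99: both point to the same data
    }
}
```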
How do you return an array from a C# method?
public int GetSalaries()
{
int[] salary=new int[] {56000,8900,66000};
return salary ;
}
public int[] GetSalaries()
{
int[] salary=new int[] {56000,8900,66000};
return salary ;
}
It is not possible to return an array from C# methods.
public int[] GetSalaries()
{
int[] salary=new int[] {56000,8900,66000};
return salary ;
}
Refer to the C# Arrays topic.
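For completeness, here is the correct method from the answer embedded in a runnable program, with a caller added for illustration:

```csharp
using System;

public class SalaryDemo
{
    public static int[] GetSalaries()
    {
        int[] salary = new int[] { 56000, 8900, 66000 };
        return salary;
    }

    public static void Main()
    {
        int[] salaries = GetSalaries();
        foreach (int s in salaries)
            Console.WriteLine(s);   // prints 56000, 8900 and 66000 on separate lines
    }
}
```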
Which of the following is not a valid data type in C# ?
int[]
void
decimal
None of the above
int[] , decimal and void are data types in C#.
Note: void is a special data type which cannot store any values or data, hence it is considered as a virtual data type.
Note: void data type is converted into System.Void struct during compilation time.
Which of the following method never gives exception while converting string to integer?
Convert.ToInt32(string s);
Int32.Parse(string s);
Int32.TryParse(string s,out x);
Convert.ToInt32(string s); have a possibility to give 2 Exceptions, Int32.Parse(string s); have a possibility to give 3 Exception, Int32.TryParse(string s,out x); never gives exception. If it is executed successfully returns true else returns false.
What is a Primitive type ?
A primitive type is short hand type for MSIL type.
Primitive type is a data type just like other normal data types in c#.
Primitive type is a type which is directly understandable to the JIT Compiler.
None of the above.
Note: There are lot arguments going on primitive types definition .
see article primitive types
What is the output for the following code?
public class Test
{
public static void Main(string[] args)
{
int x = 258;
byte b = (byte)x;
Console.WriteLine(b);
}
}
1
2
Runtime error.
Compilation error.
Byte data type range is from 0 to 255 means it can store total 256 values. When you try to assign 258 into byte it cannot accommodate this value and hence clr deducts total allowed values from the given value.
In this case 258-256
What is the output for the following code?
public class Test
{
public static void Main(string[] args)
{
int x = 258;
short s = x;
Console.Write(s);
}
}
258.
0.
Higher range data type cannot be converted to lower range data type using Implicit conversion. Explicit conversion is required for converting higher range data type to lower range data type.
Int capacity is 4 bytes and short capacity is 2 bytes and hence compiler will not allow to copy data from int type to short type
Which data type is not a value type in csharp ?
int
interface is a reference type in csharp language .
Back To Top | http://skillgun.com/csharp/interview-questions-and-answers/paper/229 | CC-MAIN-2018-05 | refinedweb | 773 | 66.64 |
This survey will be put online in a SurveyMonkey-like system whenever it is ready...
Part 1, how do you work ?
- Who are you ? (one answer)
- Professional developer using Python exclusively.
- Professional developer using Python sometimes.
- Professional developer using Python unable to use Python "at work".
- Hobbyist using Python.
- How do you organize your application code most of the time ? (one answer)
- I put everything in one package
- I create several packages and use a main package or script to launch the application
- I create several packages and use a tool like zc.buildout or Paver to distribute the whole application
- I use my own mechanism for aggregating packages into a single install.
- For libraries you don't distribute publicly, do you you create a setup.py script? (one answer)
- Yes
- No
- What is the main tool or combination of tools you are using to package and distribute your Python application ? (one answer)
- None
- distutils
- setuptools
- zc.buildout and setuptools
- zc.buildout and distutils
- Paver and distutils
- Paver and setuptools
Other : <say which>
- How do you install a package that does not provide an standalone installer (but provides a standard setup.py script) most of the time ? (one answer)
- I use easy_install
- I use pip
I download it and manually run the python setup.py install command
- I use the packaging tool provided in my system (apt, yum, etc)
- I move files around and create symlinks manually
Other : <say which>
- How do you remove a package ? (check all that apply)
- manually, by removing the folder and fixing the .pth files
- using the packaging tool (apt, yum, etc)
- I use one virtualenv per application, so the main python is never polluted, and only remove entire environments
- I change PYTHONPATH to include a directory of the packages used by my application, then remove just that directory
- I don't know / I fail at uninstallation
- How do you manage using more than one version of a library on a system? (check all that apply)
- I don't use multiple versions of a library
- I use setuptools' multi-version features
- I use virtualenv
- I use zc.buildout
- I build fresh Python interpreter from source for each project
- I set PYTHONPATH to select particular libraries
- I set sys.path in my scripts
Other: <say what>
Do you work with setuptools' namespaced packages ? A namespace package is a package that may be split across multiple project distributions. For example, Zope 3's zope package is a namespace package, because subpackages like zope.interface and zope.publisher may be distributed separately (see) (one answer)
- Yes
- No
- Has PyPI become mandatory in your everyday work (if you use zc.buildout for example) ? (one answer)
- Yes
- No
- If you previously answered Yes, did you set up an alternative solution (mirror, cache..) in case PyPI is down ? (one answer)
- Yes
- No
- Do you register your packages to PyPI ? (one answer)
- Yes
- No
- Do you upload your package to PyPI ? (one answer)
- Yes
- No
- If you previously answered No, how do you distribute your packages ? (one answer)
- One my own website, using simple links
- One my own website, using a PyPI-like server
- On a forge, like sourceforge
- Where are you located ?
<open question>
Part 2, What's missing ? What is wrong ?
- What are in your opinion, the 5 most important problems (bad behaviors or missing features) in Distutils today ?
<open question>
- What are the 5 most important features that exists in third-party tools, you would like to see included by the Python standard Library ?
<open question>
- What are the other things you like to say in order to help building Distutils roadmap ?
<open question>
Questions
- Regarding the questions that ask "what are the 5 most important X", is it possible for the survey to instead list a number of choices and let the person taking the survey rank them from most- to least-important?
- You probably don't know all the possible answers. Ask the user "What are they?" and they will tell you. Ask the user "Which of these", and they tend to think only of the choices on your list. | https://wiki.python.org/moin/Packaging%20Survey?highlight=SurveyMonkey | CC-MAIN-2017-30 | refinedweb | 683 | 64.71 |
Long container scrolling wrapped in another long oneelrad screen name Aug 6, 2009 1:38 AM
Bug issue link:
As you can see on previous link i have found a bug on flex 4 when i try to create a long Canvas/Container in another long one.
In this case the horizontal scrollbar disappears from the middle of its width and when i try to insert some other Canvas/Container child, it disappears too at some coordinates.
They change according to width modifying. Then if the container canvas is 8000 px long the error will be at x px, on a longer one the error could be at (x + y ) px.
I couldn't solve my problem using many narrower canvas and i couldn't use flex 3, because i need speex and other flex 4 features.
1. Re: Long container scrolling wrapped in another long oneelrad screen name Aug 19, 2009 4:06 AM (in response to elrad screen name)
I noticed that the canvas container width can be at most 4095 px. If wider than this width the horizontal scrollbar disappears and the contained objects aren't correctly displayed.
2. Re: Long container scrolling wrapped in another long oneelrad screen name Aug 21, 2009 4:48 AM (in response to elrad screen name)
I tried to rewrite my component using the new class Group in place of Canvas expecting to solve my problem with Canvas bugs.
Now the most important one is that the children disappear if the container is longer than 4095 px.
But if you leave the .clipAndEnableScrolling property with its default false value your solve the bugs although the children that have their area out of Group container control will be all visible(!). If you reset it with true value the Canvas bugs will be in Group object too(!!).
Thanks you Adobe for these funny moments!
And thanks you for your answers too!!
3. Re: Long container scrolling wrapped in another long oneelrad screen name Aug 24, 2009 10:34 PM (in response to elrad screen name)
New update!
I've written a tiny example to let you show the bug using Group class/object:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:Script>
<![CDATA[
import flash.events.*;
import mx.controls.Alert;
import mx.containers.Canvas;
import mx.core.*;
import spark.components.Group;
import spark.components.Scroller;
private var group:Group = new Group();
private function main() : void {
group.clipAndEnableScrolling = true;
group.width = 32000;
group.height = 300;
this.group.graphics.beginFill(0xff0000);
this.group.graphics.drawRect(0, 0, group.width, group.height);
this.group.graphics.endFill();
this.addElement(group);
}
]]>
</mx:Script>
</mx:Application>
Bug link:
Now my problem is to find an another class/object that i can use to create my program.
Will i find it?
Will Adobe answer me?
Will i can answer "Yes, my question has been answered"?
4. Re: Long container scrolling wrapped in another long oneShongrunden
Aug 26, 2009 9:59 AM (in response to elrad screen name)
Hi elrad,
Thanks for filing a bug with your sample code.
The Group component manages its own DisplayObjects so calling beginFill, drawRect, endFill, etc. are not supported and will likely result in unexpected behavior.
I reduced your code sample to reproduce the problem without using those calls:
<s:Application
xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<s:Scroller
<s:Group>
<s:Group
<s:Rect
<s:fill><s:SolidColor</s:fill>
</s:Rect>
</s:Group>
</s:Group>
</s:Scroller>
</s:Application>
I believe this is running into the Flash Player limit on the maximum size of a component. I believe that limit is roughly 8192x8192 pixels. Anything bigger than that size may act unpredictably.
Do you have a specific use case in mind for where you need such large components?
5. Re: Long container scrolling wrapped in another long oneelrad screen name Aug 26, 2009 11:05 PM (in response to Shongrunden)
> I reduced your code sample to reproduce the problem without using those calls
But you have written your code using mxml and i have a big program all written in pure actionscript.
I couldn't write it in mxml, it's too complex and i need many different classes.
> I believe this is running into the Flash Player limit on the maximum size of a component.
> I believe that limit is roughly 8192x8192 pixels.
In your documentation i found that the 8191x8191 limit is on BitmapData class.
> [...] unexpected behavior.
> [...] may act unpredictably.
Maybe i misunderstand what are you trying to tell me with these your expressions, but do you mean that you don't know what your code/product could do? (!)
> Do you have a specific use case in mind for where you need such large components?
YES!
(...and i couldn't code it in a different way).
6. Re: Long container scrolling wrapped in another long oneelrad screen name Sep 9, 2009 12:55 AM (in response to elrad screen name)
Maybe i found an usefull solution by myself.
I tried to use UIComponent as base class for my own and now the objects aren't disappearing anymore.
But i hope Adobe will solve these their bugs because UIComponent doesn't have limits on contained ojects visualization and i had to rewrite many part of my code. | https://forums.adobe.com/thread/473327 | CC-MAIN-2018-39 | refinedweb | 883 | 63.7 |
Here is a basic program I compiled
and tried to run.
import javax.swing.*;
import java.awt.*; public class GUITest extends Frame{ public GUITest(){ super("A basic GUI"); JPanel panel = new JPanel(); panel.setLayout(new GridLayout(2,2)); setVisible(true); } public static void main(String [] args){ new GUITest(); } }
When I run it, the window is there but it quickly stops responding and I have to forcibly quit it. I started getting this problem after I tried running the exact same program on Eclipse.
I noticed that the icon of the window in the upper-leftmost corner is not a coffee mug as I would normally expect, rather its a blank window. How can I fix this? | https://www.daniweb.com/programming/software-development/threads/200777/help-basic-java-gui-problem | CC-MAIN-2017-09 | refinedweb | 116 | 61.97 |
Why can’t you lose weight? (Part 3)
The final part of our 3 part series on weight loss will be 10 practical tips for weight loss and confidence.
1) Our Prophet (PBUH) once said that cleanliness is a part of our faith. Of course we know this as ghusl and wudhu have been made obligatory in our lives — but how often do we consider it outside of what we perceive? When many people are asked about what makes them feel good about themselves they say it is the fact that they really do feel good externally, which has an impact on how they feel about themselves internally. Now I know how easy it is for hijabi’s to let their hair get greasy to a point of no return (I know how it is with the oil!). Or the moustaches and mono-brows peeping through, and I’ve seen how men let their beards and moustaches look astray. However it’s surprising how tiny little things like that can make a huge difference to your everyday — and it’s not a sudden process, it’s a gradual process. So ladies it’s okay to ‘let it go’ but make sure you get yourself some pick me up! Splash out on that shampoo, go ahead and get that yummy smelling body wash, and you know you want that plush loofah. Men, it’s also okay to get good washing products, and those fine bladed razors, and making yourself smell good! You’re here to impress yourself and Allah — Allah loves beauty, so make yourself beautiful!
2) Are you presentable? I’m not saying you have to be prim and proper when you’re at home (If any of you reading this saw me at home, you would either laugh, cry or be totally disgusted!) but it’s nice to “look good”, within boundaries of course — there’s no need for luxurious brands, but just to “look good”. I know how easy it is to just slip any old jilbab and that same old hijab with the same bag and the same shoes — and I’m pretty sure men are the same too. However, when I have felt really down before, I give myself something worth complimenting myself about! It’s not vain to say “Yeah, you look good today!”. If you can compliment yourself about something, it gives others a reason to compliment you too! (Not that you’re doing it for the attention, it’s still nice though) Whether you’re meeting a group of friends, going shopping, a coffee date or even jummah — it’s nice to make an effort for yourself, and others. You’ll feel great, and you’ll begin to carry yourself more confidently. So, spice it up a little!
3) Have you got that vibe around you? Scientific studies have shown that friends can have an impact on how you feel about yourself and how healthy you are. More importantly, Islam says this too. The Prophet (pbuh) had stressed that the company we choose has a massive impact on our imaan — which of course influences how we feel about ourselves and how we live our lives. If you’ve got friends that don’t give off a good vibe about confidence, or even put you down a lot for all the wrong reasons — then maybe it’s time you become the friend that changes everything. Having friends for the sake of Allah helped me hugely and I couldn’t thank Allah more. My ladies and I work out together, send each other motivational quotes, recipe ideas, and tag each other on Instagram (it’s a lifeline for us) and all sorts. It’s different when someone really understands — and especially the society I live in where it is difficult for Muslim women to be physically active and generally healthy, so we all cram into my summer shed and get our sweat on! (It’s also been proven having a workout/diet buddy increases success — motivational bonus!)
4) It’s NOT a temporary change. We’ve all been there where we just go cold turkey, try every diet, routine and timetable out there and it’s because we expect a change, and fast. However, you can’t treat it like a ‘go to slimming world’, you have to treat it like a lifestyle, and a lifestyle takes a while to change, so have patience, set long term goals, and go with the flow. You’ll find that the more you pace yourself — the less urges and cravings you’ll have and the more strength and love you have for your new lifestyle (and your new self!). When a brother told me this, he also mentioned that you should take everything with a pinch of salt. Just because something worked for someone else, it doesn’t mean it will work for you. Everyone’s body is different, has different requirements and works with different things — that’s why making it a lifestyle will ensure that you find what works best for you.
5) “Water, water, water! Make water your best friend. Before I eat a meal I always drink a huge glass of water and another glass half way through (the sunnah is 1/3 food 1/3 water 1/3 air).” Water has so many benefits, and I think we all secretly know this but we are just too stubborn to drink a few more sips a day. I have a few friends who were very anti-water, but started drinking a few more glasses each day, and now can’t live without a 2 litre bottle by their side! So it’s all about baby steps. You can even make things a little bit more exotic by adding mint leaves, lemons, oranges, grapefruit, cucumber and all sorts so you can detox at the same time. You also know water is your best friend when it keeps you full and stops you from unnecessary snacking.
6) My personal favourite thing about a healthy lifestyle is cheat day! Yes, even health and fitness instructors have encouraged one day of the week or even every 2 weeks you just let it go! No exercise and eat whatever you want! (With consideration, don’t over eat and don’t eat your whole weeks intake of calories!). My tips for cheat days after much trial and error is have one grossly fatty meal in your cheat day, whether it be a full English, a burger and chips or a pizza and a movie, I choose one of the 3. For the rest of the day I choose things that are slightly higher in calories or carbs than what I usually would have e.g instead of veggies, I’ll make potato wedges, or instead of grilled chicken, a juicy steak. I also choose a few snacks in my cheat day that I’ve been craving all week, such as crisps (they are my ultimate weakness). Cheat days give the body a small break, and will make the body work harder the following day. This also prevents me from over indulging and cheating myself during the week. However, only give yourself a cheat day if you know you deserve it, don’t cheat yourself!
7) Assure yourself! One thing that never came to mind that a sister told me who had been struggling with her weight for many years is that you need to convince yourself that you can do it. Don’t simply ‘tell’ yourself that you should become healthy, and you ‘can’ get through these last 10 squats — rather you need to scream from where it burns! You will do this! She told me that nothing helped her more than being her own motivator, and no it’s not crazy to talk to yourself. She said that she would ask herself if she was really hungry or really needed that Krispy Kreme donut, and she would think about how hard she worked to then throw it all away with an unnecessary treat. Whilst she worked out, she would push through the burn and say to herself “Tomorrow will be easier. Tomorrow I will do one more. Tomorrow I’ll be stronger. I will do this”. It’s mind over body!
8) Say no to carbs, yes to protein. I used to live off fresh bread, and pasta, and so it was really difficult for me to even comprehend not eating them on a daily. However, I found my lifesaver, complex carbs! I still have all the same carbs I used to, but a healthier and much tastier alternative. Everything I have is wholemeal, brown is good! Paired with proteins such as chicken, eggs, lots of fish and fat burning vegetables and fruits such as kale, spinach, grapefruit and more — all vegetables and fruits are excellent. Healthy alternatives are not expensive, and are actually much cheaper! I never used to cook before, but now I love it, I know exactly how many calories and how much protein I’m consuming, and I enjoy experimenting with herbs and spices. I feel like Jamie in my 15 minute meals sometimes!
9) My workout buddy made a very good point in our morning workout the other day which gave me more motivation to work harder and strive for better. We were talking about Ramadhan whilst stretching and she said “when I put on more weight, or stop working out for a little while, I get lazy in everything. It’s really bad, I keep salah to only fardh or last minute, I procrastinate in all my work and I just find it difficult to focus or get up generally”, and I totally agreed with her because it does happen. She carried on saying “but when I’m active like this, I find that I have more time, I’m more awake, my salah is on point, and I have much more gratitude. Let’s get fit for Ramadhan, because the sahaba used to spend months in advance preparing for Ramadhan spiritually, but that doesn’t mean we can’t do it physically? We spend a long time standing in taraweeh, and our posture needs to be correct, we also spend long periods of time reading Quran, dhikr, and on top of that we still need to work and do our normal chores. If we can build up our stamina and pace now, it will help us spiritually because even now, being healthy helps me — we’ll fly through Ramadhan In Shaa Allah.” — motivation alert!
10) I actually received this tip a long time ago, and I didn’t really take notice of it until recently because I didn’t think it would help a lot. How many of us are on social media? From Facebook, Instagram, Twitter or whatever else there is out there now, and whatever we may use it for, personal or business, we could take advantage and use it for our health too! Recently I cleaned out my Instagram and started following healthy fitness profiles, not ones that promote extreme weight disorders, and it has a made huge impact! I usually have a quick flick through Instagram in the morning whilst I’m making breakfast, or whilst I’m walking or have a spare minute — and it just gives me a quick inspiring boost! I either see an inspirational quote, a short workout video, or a healthy recipe — and I feel much better than what I did when I used to ogle at clothes and jewellery as a time passer. Have a social media clean out! The point is to have positivity and inspiration in all aspects of your life.
I hope these tips help all of you that are reading, and although that most of them may seem simple and straight forward, everyone is different and so take healthy baby steps. Try to make one small change every day, and by the end of the week you’ll already be a new you. This is a lifestyle journey, so set small goals to achieve to become stronger, healthier and fitter. Remember that it’s okay to flop every now and again, but don’t treat it as a failure, simply an experiment where you don’t have to start from the beginning — just pick up from where you left off. Be an example for all, not just for Muslims, but everyone who struggles with weight loss and confidence issues — and show people who you are and what you can achieve.
If you have any advice or tips that worked for you, then a generous comment below will go a long way!
Originally published at inspiritedminds.org.uk on April 29, 2015.
Meanha Begum | Inspirited Minds | Public Relations Manager
Meanha Begum is currently studying a degree in Islamic Psychology where she has been given the blessing to explore her passions, Islam and Psychology. She relishes in the insight of an Islamic perspective to incorporate into psychology, to help those who have never been given a chance that every devout muslim, and non muslim deserves. Which is why she considers Inspirited Minds to be a huge blessing in her life. She has been brought up in a heavy western environment, where Islam was once far from her reach, but through trials and tribulations, she has managed to come out stronger and closer to Allah than ever before. It’s simply her experiences, ideas, and open nature that pushes her towards wanting to help others out of their vulnerable places, through their journey, and into happiness, with tranquil souls. | https://medium.com/inspirited-minds/why-can-t-you-lose-weight-part-3-c792b97647e2 | CC-MAIN-2018-47 | refinedweb | 2,268 | 75.64 |
This series is about sharing some of the challenges and lessons I learned during the development of Prism, and how some functional concepts taken from Haskell helped us along the way. In the previous post, I introduced fp-ts in Prism with logging as the primary use case. In this post we'll take a look at how the usage of fp-ts slowly spread through the whole codebase, how we misunderstood some concepts, how some of the coworkers took the adoption of fp-ts, and how it helped us refactor problematic parts.
First Expansion: Router
Time passed after the merging of the PR introducing
fp-ts in Prism; in the meantime teams in Stoplight were reshuffled a little bit. As a result, I got a new teammate on the project. Curiously, he was previously working on the initial new Prism design; then he was reallocated somewhere else when I took Prism and now he was coming back.
Essentially, I had a fresh member to onboard on the new direction I wanted to give to the code base. I quickly realized this was an incredible occasion to show the company that picking up functional concepts is not an impossible mission and I wanted to play my cards in the most efficient way.
As the first step for the onboarding, I decided to let my new comrade review a PR I would write that would migrate a component to a functional approach.
From there, I would then observe his reactions and of course answer his questions.
This time, identifying the next candidate component to refactor was easy. As I explored in part 1, Prism has the following components:
- Router
- Input Validator
- Negotiator
- Output Assembler
- Output Validator
The negotiator was already partially done in the first PR introducing fp-ts in Prism, and I was well aware that the validation (both input and output) would require a major refactor, since it was all state-class based and objectively complicated (more on this later).
I decided to go with the router. Being the first part in the whole flow, it would have almost no dependencies on previous steps, meaning there would be no plumbing code and/or weird wrappers to match inputs and outputs. Further, its logic was not complicated and the refactor was exclusively about bringing it into the functional world, with no changes to its behaviour; this way my comrade would only review the actual fp-ts-related changes.
Expand fp-ts in Prism's router #402
The following PR extends the usage of fp-ts to the routing package as well, basically making sure it does not throw exceptions anymore but rather uses the Either object to express errors.
With this, the router and the mocker finally compose because the types match (they both return an Either<Error, T>).
Extending the Either usage to the router was indeed the easy part:
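A minimal sketch of the kind of change involved, using hand-rolled stand-ins and hypothetical names rather than Prism's actual router API:

```typescript
// Hand-rolled stand-ins for fp-ts's Either, so the sketch is self-contained.
type Either<E, A> = { _tag: 'Left'; left: E } | { _tag: 'Right'; right: A };
const left = <E>(e: E): Either<E, never> => ({ _tag: 'Left', left: e });
const right = <A>(a: A): Either<never, A> => ({ _tag: 'Right', right: a });

interface IHttpOperation {
  method: string;
  path: string;
}

// Before: the router threw when no operation matched the request.
// After: it returns an Either and never throws, so callers must handle both sides.
function route(
  operations: IHttpOperation[],
  request: { method: string; url: string }
): Either<Error, IHttpOperation> {
  const match = operations.find(
    o => o.method === request.method && o.path === request.url
  );
  return match
    ? right(match)
    : left(new Error(`Route not matched: ${request.method} ${request.url}`));
}
```

The win is in the signature: callers can no longer forget to handle the miss, because the compiler forces them to deal with the Left.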
The problems started when I tried to integrate and compose the new function in the mega-file-to-split:
The whole flow is sync apart from the edge case when we need to employ the forwarder, and this requires an additional abstraction layer
What's really preventing us from having a clean and functional flow is the validation process, which basically creates an empty array, gives it to the mocker and expects to receive a filled array back. This forces me to keep some stuff here and some stuff there; if the mocker could just return the validations, that'd improve the code a lot.
In order to keep the API compatible with what we have, I have to do some wrapping I'd like to avoid
That said, the funny thing is that, although this Pull Request is meant to be an improvement, you can argue that the code is effectively uglier than it was. (Well, I do not think it is, but your mileage may vary.)
The good news though is that (I'm not sure if you remember) we were discussing how to refactor this part, and nobody (me included) really came up with good ideas.
By trying to extend the functional parts to the router — I now know exactly what needs to be done and how to move forward. This is freaking awesome, to be honest.
The conversation was not as long and chatty as the first one. I also remember there was almost no discussion at all in our internal Slack channel.
It is hard to tell why exactly. It could either be because the team had assimilated the concepts, or maybe they had resigned themselves to the fact that this was happening, so arguing would not have changed much.
I find the first one very improbable and I would say the truth is somewhere in between, but clearly leaning towards the latter hypothesis. The regret I have today is that I did not ask about this explicitly, and instead just took advantage of the situation to merge the PR right away.
My teammate observed:
the code looks complicated because it is long and deeply nested
It is interesting because the code has been long and deeply nested since forever.
fp-ts made that visible to a point where it could not be ignored anymore. We'll see an example of a successful refactor later.
The feedback I was receiving in our internal channels was that generally the code would look dirtier than it was previously. This was mostly happening at the "edges" between the regular code and the functional one because of required bridging to maintain compatibility with the current Prism functionality.
For instance, there was a function of Prism that could have thrown an exception in case something went wrong.
```typescript
function foo() {
  // a lot of code
  if (!condition) throw new Error('This is not ok');
  // a lot of code again
  if (!anotherCondition) throw new Error('This is not ok');
}
```
When such a part got refactored to use Either<Error, T>, exceptions wouldn't be thrown at all. On the other hand, callers of this function might have relied on the thrown exception. Until all the callers had been refactored, foo would still ultimately have to throw. This is what I called "bridging".
For the foo function, the bridging would probably look like this:
```typescript
import * as E from 'fp-ts/lib/Either';
import { pipe } from 'fp-ts/lib/pipeable';
import { identity } from 'lodash';

function foo() {
  pipe(
    operation1(arg1, arg2),
    E.chain(result => operation2(result.outstandingBalance)),
    E.chain(operation3),
    // the bridging part: rethrow so callers relying on exceptions keep working
    E.fold(error => { throw error; }, identity)
  );
}
```
There were cases with some monads where the bridging code would look even uglier. On the positive side, this would clearly communicate to the developer that this function was still impure exclusively because of something relying on the impure behaviour. This facilitated the search for refactoring opportunities significantly.
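As an illustration of one of those uglier cases (hypothetical names again, with a hand-rolled TaskEither, which in fp-ts is essentially a lazy promise of an Either): bridging an asynchronous step meant running the task and rethrowing the Left, so that Promise-based callers kept seeing rejections:

```typescript
type Either<E, A> = { _tag: 'Left'; left: E } | { _tag: 'Right'; right: A };

// In fp-ts, TaskEither<E, A> is essentially a lazy () => Promise<Either<E, A>>.
type TaskEither<E, A> = () => Promise<Either<E, A>>;

// Hypothetical upstream call that never rejects: failures travel in the Left.
const fetchUpstream: TaskEither<Error, string> = () =>
  Promise.resolve<Either<Error, string>>({ _tag: 'Right', right: 'response body' });

// The bridge: run the task and rethrow the Left, needed only while
// callers still expect a plain Promise that rejects on failure.
async function legacyForward(): Promise<string> {
  const result = await fetchUpstream();
  if (result._tag === 'Left') {
    throw result.left;
  }
  return result.right;
}
```

The ugliness is instructive: the manual unwrap marks exactly where the impure, exception-based world still leaks in, which makes the remaining refactoring targets easy to spot.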
Return on Investment: Validation
Finally, after some time, we got the down payment on a series of returns on the investment of employing fp-ts in Prism.
I have already stated that validation in Prism is hard, and the way it was initially implemented made it even harder. We complained and tried to do something about it (with no results) multiple times:
You can see that ultimately the whole team agreed that passing on the opportunity was the best idea for the time being, since it would have been too time consuming.
The real problem was that nobody knew where to start. That piece of code was terrible, but fp-ts gave me the key to move forward and finally refactor that part of the validation.
One of the good things about using category theory constructs is that things tend to compose naturally. It's like having two pieces of code with magnets at the extremities: they naturally want to bond. Such a property suggests that, when things do not compose, something is probably not going well.
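Here is the magnet metaphor in code, with a hand-rolled chain mirroring the fp-ts one: two functions that both speak Either snap together, and any function that doesn't simply won't fit:

```typescript
type Either<E, A> = { _tag: 'Left'; left: E } | { _tag: 'Right'; right: A };
const left = <E>(e: E): Either<E, never> => ({ _tag: 'Left', left: e });
const right = <A>(a: A): Either<never, A> => ({ _tag: 'Right', right: a });

// chain is the "magnet": it feeds a Right into the next step and short-circuits on Left.
const chain = <E, A, B>(f: (a: A) => Either<E, B>) =>
  (ma: Either<E, A>): Either<E, B> => (ma._tag === 'Left' ? ma : f(ma.right));

// Two steps with matching shapes, so they snap together.
const parsePort = (s: string): Either<Error, number> => {
  const n = Number(s);
  return Number.isInteger(n) ? right(n) : left(new Error(`not a number: ${s}`));
};
const checkRange = (n: number): Either<Error, number> =>
  n > 0 && n < 65536 ? right(n) : left(new Error(`out of range: ${n}`));

const port = chain(checkRange)(parsePort('8080')); // Right(8080)
```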
Let's take yet another look at some of the Prism components:
- Router (fp-ts-ized)
- Input Validator
- Negotiator (fp-ts-ized)
We were fundamentally in the situation where two pieces that wanted to compose (the router and the negotiator) couldn't because the Validator had not the right interface. The lack of composability became the driving factor that I used to refactor the input validation.
What happened was fascinating: I was about to ask for suggestions in the Slack channel where I used (and still use) to hangout and talk about functional stuff. While writing the message, I wrote the solution without getting any input from outside:
The last sentence I wrote is kind of memorable
I think fp-ts made me understand why the code is so horrible and how I should move forward.
This, indeed, ultimately happened some time ago:
When It Went Wrong: Security Checks
This is not a story where we did everything right. It would mean that's either invented or it omits details. Although I am inclined to say we did most of the things right, we clearly made some mistakes along the journey.
One of these was the porting of the security checks. It's essentially the part that checks if the call can go through the negotiator for mocking or being rejected with a
401.
This part didn't go very well. Although after the conversion the feature was still working correctly from a functionality standpoint, the resulting code was really hard to reason about, resulting in two additional refactors to bring it back to a reasonable state.
Looking at this now I think there were some factors that brought the things out of control:
- Although familiar with functional concepts, the developer working on the security checks didn't grasp enough of it to be able to complete the feature alone. This resulted in frustration on his side and ultimately brought him to switch in the “get it done, no matter what” mode.
- I also had some blanks. For instance, I thought that passing a
Left<T>as a function argument would be totally legit — it turns out that 99% it is not.
- In order to get the feature done and give some relief to the comrade, I lowered my code review standards and merged it in anyway.
This episode costed me a lot of time to clean it up:
The second PR, although the changes are minimal, took me ages to put together. I still have a vivid remembering of me working on it. The logic behind that was so complicated that I would lose the context quickly and had to restart from scratch. Multiple times.
What are the lessons learned?
- It is inevitable that things will go wrong during any journey. Keep that in consideration and allocate some time to clean up stuff.
- Short term solutions will bring long term conflicts. Short term conflicts will bring long term solutions. I decided to give my coworker a relief by merging something that wasn’t really ok. I had to pay that back with a very high interest rate.
In the next article, we'll respond to some of the FAQ that I have received while talking about this and showing Prism around.
Discussion (0) | https://dev.to/vncz/the-road-to-the-return-of-the-investment-lbd | CC-MAIN-2022-21 | refinedweb | 1,857 | 56.89 |
[Solved :]UART read
Original Post:
I have a serial device that I would like to integrate with my LoPy via UART ( P3 and P4 pins). Currently, if I connect the serial device's Tx and Rx with my computer via 9 pin DB9 connector ( COM 1 port: pin 2 and 3 ) and TeraTerm or similar with 9600/8/none/stop serial configuration..... on sending two ~~ or "escape" key sequence the serial device wakes up and then if I type "R it gives output. Or instead of ~~R I could also use CTRL+E
So I tried to replicate that in code but I don't seem to get ANY response except "None" using ~~R or CTRL-E, what am I missing here. Also, I tried swapping the Tx and Rx line as well just in case:
from machine import UART # Tx and Rx (``P3`` and ``P4``) import struct import time import pycom pycom.heartbeat(False) uart1 = UART(1, baudrate=9600) uart1.init(9600,bits=8,parity=None, stop=1) uart1.write(b'0x1B') # LETS TRY THE ESCAPE ESCAPE R which should be 27 27 82 in ASCII or 1B 1B 82 in Hex uart1.write(b'0x1B') uart1.write(b'0x82') # Or try CTRL-E which is Decimal 05 or hex 05 #uart1.write(b'0x05') recv=uart1.read(15) # read up to 5 bytes print(recv)
Thanks a lot.
Per comments from @robert-hh & @timh , I used a MAX3232, RS232 to TTL converter chip to resolve the issue.
@dda Yep that will be it. Some sort of RS232 -TTL will be required, unless the device has a TTL connection buried inside. They usually do then use a TTL -> RS232 or RS485 to present the external serial interface.
@timh
Its a 9 pin RS232, so I see the problem now.....the voltage and logic is way different for both. So, I guess the way around might be to use a RS232 to TTL-UART converter, unless there is another work around to it.
Might be a stupid question but is the computer serial port a TTL UART (0-3V) or an RS232 serial port.
Many serial ports (DB9) on a PC are RS232 (very different voltage) and not a TTL UART which is on the PyCOM. | https://forum.pycom.io/topic/3108/solved-uart-read | CC-MAIN-2020-40 | refinedweb | 376 | 70.43 |
From our sponsor: Market smarter with Mailchimp's automated messaging tools.
In this tutorial I will show you how to take a couple of established techniques (like tying things to the scroll-offset), and cast them into re-usable components. Composition will be our primary focus.
In this tutorial we will:
- build a declarative scroll rig
- mix HTML and canvas
- handle async assets and loading screens via React.Suspense
- add shader effects and tie them to scroll
- and as a bonus: add an instanced variant of Jesper Vos multiside refraction shader
Setting up
We are using React, hooks, Three.js and react-three-fiber. The latter is a renderer for Three.js which allows us to declare the scene graph by breaking up tasks into self-contained components. However, you still need to know a bit of Three.js. All there is to know about react-three-fiber you can find on the GitHub repo’s readme. Check out the tutorial on alligator.io, which goes into the why and how.
We don’t emulate a scroll bar, which would take away browser semantics. A real scroll-area in front of the canvas with a set height and a listener is all we need.
I decided to divide the content into:
- virtual content
sections
- and
pages, each
100vhlong, this defines how long the scroll area is
function App() { const scrollArea = useRef() const onScroll = e => (state.top.current = e.target.scrollTop) useEffect(() => void onScroll({ target: scrollArea.current }), []) return ( <> <Canvas orthographic>{/* Contents ... */}</Canvas> <div ref={scrollArea} onScroll={onScroll}> <div style={{ height: `${state.pages * 100}vh` }} /> </div>
scrollTop is written into a reference because it will be picked up by the render-loop, which is carrying out the animations. Re-rendering for often occurring state doesn’t make sense.
A first-run effect synchronizes the local scrollTop with the actual one, which may not be zero.
Building a declarative scroll rig
There are many ways to go about it, but generally it would be nice if we could distribute content across the number of sections in a declarative way while the number of pages defines how long we have to scroll. Each content-block should have:
- an
offset, which is the section index, given 3 sections, 0 means start, 2 means end, 1 means in between
- a
factor, which gets added to the offset position and subtracted using scrollTop, it will control the blocks speed and direction
Blocks should also be nestable, so that sub-blocks know their parents’ offset and can scroll along.
const offsetContext = createContext(0) function Block({ children, offset, factor, ...props }) { const ref = useRef() // Fetch parent offset and the height of a single section const { offset: parentOffset, sectionHeight } = useBlock() offset = offset !== undefined ? offset : parentOffset // Runs every frame and lerps the inner block into its place useFrame(() => { const curY = ref.current.position.y const curTop = state.top.current ref.current.position.y = lerp(curY, (curTop / state.zoom) * factor, 0.1) }) return ( <offsetContext.Provider value={offset}> <group {...props} position={[0, -sectionHeight * offset * factor, 0]}> <group ref={ref}>{children}</group> </group> </offsetContext.Provider> ) }
This is a block-component. Above all, it wraps the offset that it is given into a context provider so that nested blocks and components can read it out. Without an offset it falls back to the parent offset.
It defines two groups. The first is for the target position, which is the height of one section multiplied by the offset and the factor. The second, inner group is animated and cancels out the factor. When the user scrolls to the given section offset, the block will be centered.
We use that along with a custom hook which allows any component to access block-specific data. This is how any component gets to react to scroll.
function useBlock() { const { viewport } = useThree() const offset = useContext(offsetContext) const canvasWidth = viewport.width / zoom const canvasHeight = viewport.height / zoom const sectionHeight = canvasHeight * ((pages - 1) / (sections - 1)) // ... return { offset, canvasWidth, canvasHeight, sectionHeight } }
We can now compose and nest blocks conveniently:
<Block offset={2} factor={1.5}> <Content> <Block factor={-0.5}> <SubContent /> </Block> </Content> </Block>
Anything can read from block-data and react to it (like that spinning cross):
function Cross() { const ref = useRef() const { viewportHeight } = useBlock() useFrame(() => { const curTop = state.top.current const nextY = (curTop / ((state.pages - 1) * viewportHeight)) * Math.PI ref.current.rotation.z = lerp(ref.current.rotation.z, nextY, 0.1) }) return ( <group ref={ref}>
Mixing HTML and canvas, and dealing with assets
Keeping HTML in sync with the 3D world
We want to keep layout and text-related things in the DOM. However, keeping it in sync is a bit of a bummer in Three.js, messing with createElement and camera calculations is no fun.
In three-fiber all you need is the
<Dom /> helper (@beta atm). Throw this into the canvas and add declarative HTML. This is all it takes for it to move along with its parents’ world-matrix.
<group position={[10, 0, 0]}> <Dom><h1>hello</h1></Dom> </group>
Accessibility
If we strictly divide between layout and visuals, supporting a11y is possible.
Dom elements can be behind the canvas (via the
prepend prop), or in front of it. Make sure to place them in front if you need them to be accessible.
Responsiveness, media-queries, etc.
While the DOM fragments can rely on CSS, their positioning overall relies on the scene graph. Canvas elements on the other hand know nothing of the sort, so making it all work on smaller screens can be a bit of a challenge.
Fortunately, three-fiber has auto-resize inbuilt. Any component requesting size data will be automatically informed of changes.
You get:
viewport, the size of the canvas in its own units, must be divided by
camera.zoomfor orthographic cameras
size, the size of the screen in pixels
const { viewport, size } = useThree()
Most of the relevant calculations for margins, maxWidth and so on have been made in
useBlock.
Handling async assets and loading screens via React.Suspense
Concerning assets, Reacts Suspense allows us to control loading and caching, when components should show up, in what order, fallbacks, and how errors are handled. It makes something like a loading screen, or a start-up animation almost too easy.
The following will suspend all contents until each and every component, even nested ones, have their async data ready. Meanwhile it will show a fallback. When everything is there, the
<Startup /> component will render along with everything else.
<Suspense fallback={<Fallback />}> <AsyncContent /> <Startup /> </Suspense>
In three-fiber you can suspend a component with the
useLoader hook, which takes any Three.js loader, then loads (and caches) assets with it.
function Image() { const texture = useLoader(THREE.TextureLoader, "/texture.png") // It will only get here if the texture has been loaded return ( <mesh> <meshBasicMaterial attach="material" map={texture} />
Adding shader effects and tying them to scroll
The custom shader in this demo is a Frankenstein based on the Three.js MeshBasicMaterial, plus:
- the RGB-shift portion from DigitalGlitch
- a warping effect taken from Jesper Landberg
- and a basic UV-coordinate zoom
The relevant portion of code in which we feed the shader block-specific scroll data is this one:
material.current.scale = lerp(material.current.scale, offsetFactor - top / ((pages - 1) * viewportHeight), 0.1) material.current.shift = lerp(material.current.shift, (top - last) / 150, 0.1)
Adding Diamonds
The technique is explained in full detail in the article Real-time Multiside Refraction in Three Steps by Jesper Vos. I placed Jesper’s code into a re-usable component, so that it can be mounted and unmounted, taking care of all the render logic. I also changed the shader slightly to enable instancing, which now allows us to draw dozens of these onto the screen without hitting a performance snag anytime soon.
The component reads out block-data like everything else. The diamonds are put into place according to the scroll offset by distributing the instanced meshes. This is a relatively new feature in Three.js.
Wrapping up
This tutorial may give you a general idea, but there are many things that are possible beyond the generic parallax; you can tie anything to scroll. Above all, being able to compose and re-use components goes a long way and is so much easier than dealing with a soup of code fragments whose implicit contracts span the codebase. | https://tympanus.net/codrops/2019/12/16/scroll-refraction-and-shader-effects-in-three-js-and-react/ | CC-MAIN-2020-10 | refinedweb | 1,388 | 56.66 |
Are you sure?
This action might not be possible to undo. Are you sure you want toology.".
TYNDALE NEW TESTAMENT
COMMENTARIES
VOLUME 13
1 AND 2 THESSALONIANS
TYNDALE NEW TESTAMENT
COMMENTARIES
VOLUME 13
GENERAL EDITOR: LEON MORRIS
AN INTRODUCTION AND COMMENTARY
LEON MORRIS
General preface
Author’s preface to the first edition
Author’s preface to the second edition
Chief abbreviations
Introduction
1. Background
2. Date of composition of 1 Thessalonians
3. The authenticity of 1 Thessalonians
4. The purpose of 1 Thessalonians
5. The authenticity of 2 Thessalonians
1. Eschatology
2. The combination of likeness and difference
3. Difference in tone
6. The relation between the two epistles
1. A church in two sections
2. Co-authorship
3. Reversal of order
7. The occasion and purpose of 2 Thessalonians
1 Thessalonians: Analysis
Commentary
1 Thessalonians
1. Greeting (1:1)
2. Prayer of thanksgiving (1:2–3)
3. Reminiscences of Thessalonica (1:4 – 2:16)
a. Response of the Thessalonians (1:4–10)
b. The preaching of the gospel at Thessalonica (2:1–16)
1. The preachers’ motives (2:1–6)
2. The preachers’ maintenance (2:7–9)
3. The preachers’ behaviour (2:10–12)
4. The preachers’ message (2:13)
5. Persecution (2:14–16)
4. The relationship of Paul to the Thessalonians (2:17 – 3:13)
a. Paul’s desire to return (2:17–18)
b. Paul’s joy (2:19–20)
c. Timothy’s mission (3:1–5)
d. Timothy’s report (3:6–8)
e. Paul’s satisfaction (3:9–10)
f. Paul’s prayer (3:11–13)
5. Exhortation to Christian living (4:1–12)
a. General (4:1–2)
b. Sexual purity (4:3–8)
c. Brotherly love (4:9–10)
d. Earning one’s living (4:11–12)
6. Problems associated with the parousia (4:13 – 5:11)
a. Believers who died before the parousia (4:13–18)
b. The time of the parousia (5:1–3)
c. Children of the day (5:4–11)
7. General exhortations (5:12–22)
8. Conclusion (5:23–28)
2 Thessalonians: Analysis
Commentary
2 Thessalonians
1. Greeting (1:1–2)
2. Prayer (1:3–12)
a. Thanksgiving (1:3–5)
b. Divine judgment (1:6–10)
c. Paul’s prayer (1:11–12)
3. The parousia (2:1–12)
a. The day of the Lord not yet present (2:1–2)
b. The great rebellion (2:3–12)
1. The man of lawlessness (2:3–10a)
2. The man of lawlessness’s followers (2:10b–12)
4. Thanksgiving and encouragement (2:13–17)
a. Thanksgiving (2:13–15)
b. Prayer for the converts (2:16–17)
5. The faithfulness of God (3:1–5)
a. Request for prayer (3:1–2)
b. God’s faithfulness (3:3–5)
6. Godly discipline (3:6–15)
a. The disorderly (3:6–13)
b. The disobedient (3:14–15)
7. Conclusion (3:16–18)
Notes
Praise for Tyndale Commentaries
About the Author
Tyndale Commentary Volumes
More Titles from InterVarsity Press
The original Tyndale Commentaries aimed at providing help for the general reader of the Bible. They concentrated on the meaning of the text without going into scholarly technicalities. They sought to avoid‘the extremes of being unduly technical or unhelpfully brief ‘. Most who have used the books agree that there has been a fair measure of success in reaching that aim.
Times, however, change. A series that has served so well for so long is perhaps not quite as relevant as when it was first launched. New knowledge has come to light. The discussion of critical questions has moved on. Bible-reading habits have changed. When the original series was commenced it could be presumed that most readers used the Authorized Version and one could make one’s comments to understand his Bible better. They do not presume a knowledge of Greek, and all Greek words discussed are transliterated; but the authors have the Greek text before them and their comments are made on the basis of the originals.
The epistles to the Thessalonians are all too little studied today. It may be true that they lack the theological profundity of Romans and the exciting controversy of Galatians; but nevertheless their place in Scripture is an important one. No other writing of the great apostle provides a greater insight into his missionary methods and message. Here we see Paul the missionary and Paul the pastor, faithfully proclaiming the gospel of God, concerned for the welfare of his converts, scolding them, praising them, guiding them, exhorting them, teaching them; thrilled with their progress, disappointed with their slowness. Though the continuous exposition of great doctrines is not a characteristic of the Thessalonian writings, yet it is fascinating to see how most, if not all, of the great Pauline doctrines are present, either by implication or direct mention. When we consider the undoubtedly early date of these letters this is a fact of importance in the history of Christian thought.
Especially important is the teaching of these epistles on eschatology; and in view of the revival of interest in this doctrine in recent times it is imperative that we understand and appreciate the contribution of Thessalonians to this difficult subject. It is my earnest hope that this short commentary may help to direct the attention of Christian people to the importance of these epistles and the relevance of their message for the men of today.
Every commentator, I suppose, bases his work on that of his predecessors, and in this I am certainly no exception. I have learned much from those who have written on these epistles before me, and cannot hope to have acknowledged all my indebtednesses. I have found particularly helpful the commentaries by Milligan, Frame (I.C.C.), Denney (Expositor’s Bible), Findlay (who wrote two commentaries, one in the Cambridge Bible for Schools and Colleges, and the other in the Cambridge Greek Testament series), and Neil (Moffatt New Testament Commentary), while Lightfoot’s Notes on Epistles of St Paul is a veritable treasure house.
Finally may I express my indebtedness to a number of my friends who have interested themselves in this project and made helpful suggestions. Especially am I indebted to the Very Rev. Dr S. Barton Babbage, the Rev. David Livingstone, and Mr I. Siggins, who read the typescript, and suggested many improvements.
Leon Morris
In the years since this commentary first appeared there have been some notable contributions to the literature on these epistles, particularly the great commentaries by Rigaux in French and Best in English. I am grateful to both, and also to those who produced smaller commentaries, such as Ward, Moore, Whiteley and Bruce. These and others have been a great help to me as I worked over the material again. I have indicated my principal indebtednesses in the footnotes.
The revision has also enabled me to rewrite the whole and there are many minor verbal alterations. Some things have been omitted as being of less importance now than in 1956 and this has given me space to include new material. Substantially this is the commentary I wrote in the 1950s, but I trust improved by what I have learned from the scholars I have mentioned and others. The English version used is the New International Version. I trust that in this new form this little book will prove useful to another generation of readers.
I am grateful to Mrs Dorothy Wellington, my former secretary, for her expert typing of the manuscript.
Leon Morris
Chief abbreviations
Introduction
Thessalonica in the first century was the capital of Macedonia and its largest city. The geographical importance of its site may be gauged from the fact that Thessaloniki (until 1937, Salonika ¹) is still an important city. It is usually said that the name of the city in earlier days was Therma (from its hot springs), and that c. 315 BC it was renamed by Cassander after his wife Thessalonica, half-sister to Alexander the Great. But as the elder Pliny refers to Therma and Thessalonica as existing together, ² it would seem that Cassander founded a new town which in due course extended and swallowed up the more ancient one nearby. Under the Romans it was the capital of the second of the four divisions of Macedonia, and when these were united to form one single province in 146 BC it became the capital, as well as the largest city of the province. Thessalonica was a free city, and inscriptions confirm the accuracy of Luke in calling its rulers ‘politarchs’. It was strategically situated on the Via Egnatia, the great Roman highway to the East.
To this city came Paul in company with Silas and Timothy. The former was Paul’s partner on his second missionary journey, chosen after the great apostle had separated from Barnabas. We first read of him when he and Judas Barsabbas, ‘leaders among the brothers’ and ‘prophets’ (Acts 15:22, 32), were sent to Antioch after the council of Jerusalem to convey to the believers there, both by letter and by word of mouth, the decisions the council had taken. He accompanied Paul on that apostle’s second missionary journey, and Paul makes approving mention of his preaching (2 Cor. 1:19). In later times he was associated with Peter in the writing of 1 Peter (1 Pet. 5:12). It is interesting that Paul and Peter both use the more formal name Silvanus, while Luke calls him Silas (perhaps Latin and Greek forms of a Semitic name, BDF, 125 (2)).
Timothy first comes under notice when Paul met him at Lystra and had him circumcised as a preliminary to his accompanying the apostle for the remainder of his second missionary journey (Acts 16:1–3). He came to be closely associated with Paul, as we see from the joint salutations in 2 Corinthians, Philippians, Colossians, 1 and 2 Thessalonians and Philemon. From the general tone of references to him we gather that Timothy was somewhat timid in disposition (cf. 1 Cor. 16:10). But he was high in Paul’s confidence, for Paul sent him on missions (Acts 19:22; 1 Cor. 4:17; Phil. 2:19), and could link his preaching with his own (2 Cor. 1:19). Paul speaks warmly of Timothy’s attitude to those to whom he ministered and to Paul himself (Phil. 2:20–22).
The three men preached at Philippi, but were compelled to leave after the imprisonment of Paul and Silas (Acts 16). They then came to Thessalonica, where Paul followed his usual practice of going to the synagogue. He preached there on three (apparently successive) sabbaths (Acts 17:2), with some success. His converts included some Jews, ‘a large number’ of devout Greeks, and ‘not a few’ chief women (Acts 17:4). ³ The chief success of the mission clearly lay among those Greeks who had attached themselves to the synagogue. These people were dissatisfied satisfied. Some of the converts came of high-class families, but it is probable that most were from the lower classes, for Paul stresses his refusal to be dependent on them in any way (1 Thess. 2:9), and his letters to them contain no warnings about the dangers of riches.
The Jewish | https://www.scribd.com/book/377945898/1-and-2-Thessalonians | CC-MAIN-2019-43 | refinedweb | 1,877 | 66.03 |
# Making Git for Windows work in ReactOS
Good day to you! 
My name is Stanislav and I like to write code. This is my first english article on Habr which I made due to several reasons:
* [Habr is now in English](https://habr.com/en/company/tm/blog/435764/)
* Lack of technical articles in the ReactOS hub
* Recent [return of Geektimes to Habr](https://habr.com/company/tm/blog/93947/)
* Possibility of [building ReactOS in ReactOS](https://habr.com/company/reactos/blog/413461/)
* A quite interesting case of fixing a problem in ReactOS in which I was directly involved
This article is an English version of my [very first](https://habr.com/ru/company/reactos/blog/414947/) article, which was written in Russian.
Let me introduce the main figures in this story, who actually fixed the bug preventing Git from running in ReactOS: the French developer Hermès Bélusca-Maïto (or just Hermes, with the `hbelusca` nickname) and, of course, me (with the `x86corez` nickname).
The story begins with the following messages from the ReactOS Development IRC channel:
```
Jun 03 18:52:56 Anybody want to work on some small problem? If so, can someone figure out why this problem https://jira.reactos.org/browse/CORE-12931 happens on ReactOS? :D
Jun 03 18:53:13 That would help having a good ROS self-hosting system with git support.
Jun 03 18:53:34 (the git assertion part only).
```
Debriefing
----------
Since ReactOS target platform is Windows Server 2003, Git version 2.10.0 was chosen for the investigation — it's the [last one](https://gitforwindows.org/requirements.html) supporting Windows XP and 2003.
Testing was done in the ReactOS Command Prompt, external symptoms of the problem were rather ambiguous. For instance, when you run git without additional parameters, it displays the help message in the console without any problems. But once you try `git clone` or even just `git --version` in most cases the console just remains empty. Occasionally it displayed a broken message with an assertion:
```
git.exe clone -v "https://github.com/minoca/os.git" "C:\Documents and Settings\Administrator\Bureau\minocaos"
A s s e r t i o n f a i l e d !
P r o g r a m : C : \ P r o g r a m F i l e s \ G i t \ m i n g w 3 2 \ b i n \ g i t . e x e
F i l e : e x e c _ c m d . c , L i n e 2 3
E x p r e s s i o n : a r g v 0 _ p a t h
This application has requested the Runtime to terminate in an unusual way.
Please contact the application's support team for more information.
```
Fortunately, git is an open-source project, and that helped a lot in our investigation. It was not too hard to locate the actual code block where the unhandled exception happened: <https://github.com/git-for-windows/git/blob/4cde6287b84b8f4c5ccb4062617851a2f3d7fc78/exec_cmd.c#L23>
```
char *system_path(const char *path)
{
/* snip */
#ifdef RUNTIME_PREFIX
assert(argv0_path); // it asserts here
assert(is_absolute_path(argv0_path));
/* snip */
#endif
strbuf_addf(&d, "%s/%s", prefix, path);
return strbuf_detach(&d, NULL);
}
```
And `argv0_path` variable value is assigned by this code:
```
const char *git_extract_argv0_path(const char *argv0)
{
const char *slash;
if (!argv0 || !*argv0)
return NULL;
slash = find_last_dir_sep(argv0);
if (slash) {
argv0_path = xstrndup(argv0, slash - argv0);
return slash + 1;
}
return argv0;
}
```
Once I obtained all this information, I sent some messages to the IRC channel:
```
Jun 03 19:04:36 hbelusca: https://github.com/git-for-windows/git/blob/4cde6287b84b8f4c5ccb4062617851a2f3d7fc78/exec\_cmd.c#L23
Jun 03 19:04:41 assertion is here
Jun 03 19:04:57 yes I know, I've seen the code yesterday. The question is why it's FALSE on ROS but TRUE on Windows.
Jun 03 19:06:02 argv0\_path = xstrndup(argv0, slash - argv0);
Jun 03 19:06:22 xstrndup returns NULL %-)
Jun 03 19:06:44 ok, so what's the values of argv0 and slash on windows vs. on ROS? :P
Jun 03 19:08:48 good question!
```
The variable name suggests that the actual value is derived from `argv[0]`, which usually contains the executable name at index zero. But subsequently everything turned out to be not so obvious...
```
Jun 03 20:15:21 hbelusca: surprise... git uses its own xstrndup implementation
Jun 03 20:15:35 so I can't simply hook it xD
Jun 03 20:15:56 well, with such a name "xstrndup" it's not surprising it's its own implementation
Jun 03 20:16:04 probably I would need an user-mode debugger... like OllyDbg
Jun 03 20:16:09 that's everything but standardized function.
Jun 03 20:16:24 x86corez: ollydbg should work on ROS.
Jun 03 20:16:30 what are you breaking today?
Jun 03 20:16:44 mjansen: https://jira.reactos.org/browse/CORE-12931
Jun 03 20:16:51 (of course if you also are able to compile that git with symbols and all the stuff, it would be very nice)
```
Action
------
After that I decided to compile git straight from the source, so I would be able to print the variables of interest "on the fly" directly to the console. I followed this [step-by-step tutorial](http://www.drupalonwindows.com/en/blog/build-git-windows-sources) and, in order to compile a compatible git version, selected this branch: <https://github.com/git-for-windows/git/tree/v2.10.0-rc2>
Setting up the mingw32 toolchain went smoothly, and I just started building. However, such step-by-step tutorials almost always have undocumented pitfalls, and I immediately encountered one of them:

Through trial and error, as well as some hints from the audience (the IRC channel), all compile-time errors were fixed. If somebody wants to follow my steps, here is the diff that makes the compiler happy: <https://pastebin.com/ZiA9MaKt>
In order to avoid calling multiple functions during initialization and to reproduce the bug easily, I decided to put several debug prints right at the beginning of the `main()` function, which in the case of git is located in `common-main.c`:
```
int main(int argc, const char **argv)
{
/*
* Always open file descriptors 0/1/2 to avoid clobbering files
* in die(). It also avoids messing up when the pipes are dup'ed
* onto stdin/stdout/stderr in the child processes we spawn.
*/
//DebugBreak();
printf("sanitize_stdfds(); 1\n");
sanitize_stdfds();
printf("git_setup_gettext(); 1\n");
git_setup_gettext();
/*
* Always open file descriptors 0/1/2 to avoid clobbering files
* in die(). It also avoids messing up when the pipes are dup'ed
* onto stdin/stdout/stderr in the child processes we spawn.
*/
printf("sanitize_stdfds(); 2\n");
sanitize_stdfds();
printf("git_setup_gettext(); 2\n");
git_setup_gettext();
printf("before argv[0] = %s\n", argv[0]);
argv[0] = git_extract_argv0_path(argv[0]);
printf("after argv[0] = %s\n", argv[0]);
restore_sigpipe_to_default();
printf("restore_sigpipe_to_default(); done\n");
return cmd_main(argc, argv);
}
```
I've got the following output:
```
C:\>git --version
sanitize_stdfds(); 1
git_setup_gettext(); 1
sanitize_stdfds(); 2
git_setup_gettext(); 2
before argv[0] = git
after argv[0] = git
restore_sigpipe_to_default(); done
A s s e r t i o n f a i l e d !
(cut the part of the error message that is the same as the one above)
```
One could assume everything is fine here: the `argv[0]` value is correct. I got the idea to run git inside a debugger, OllyDbg for example, but something went wrong...
```
Jun 04 01:54:46 now please try gdb/ollydbg in ROS
Jun 04 01:58:11 you have gdb in RosBE
Jun 04 01:58:20 just in case :p
Jun 04 01:59:45 ollydbg says "nope" with MEMORY_MANAGEMENT bsod
Jun 04 02:00:07 !bc 0x0000001A
Jun 04 02:00:08 KeBugCheck( MEMORY_MANAGEMENT );
Jun 04 02:00:13 :/
Jun 04 02:00:49 welp
Jun 04 02:00:56 you only have one option now :D
```
And right here [sanchaez](https://habr.com/ru/users/sanchaez/) suggested an excellent idea that shed light on many things!

Running git under gdb, the assertion no longer occurred, and git successfully printed its version.
```
Jun 04 02:23:40 it prints!
Jun 04 02:23:44 but only in gdb
Jun 04 02:23:53 oh
Jun 04 02:24:00 C:\git/git.exe
Jun 04 02:24:13 I wonder whether it's the same in windows, or not.
```
Things finally got moving: I tried different ways to run git in the command prompt, and found the right one!

The problem was clearly that git expected the full path on the command line. So I compared its debug output with the output on Windows, and the results surprised me a bit.

For some reason, the `argv[0]` value contained the full path to the git.exe binary.
```
Jun 05 23:01:44 x86corez: can you try to run git also by not using cmd.exe?
Jun 05 23:02:05 (to exclude the possibility it's cmd that doesn't call Createprocess with a complete path)
Jun 05 23:02:09 while I think it should...
Jun 05 23:02:30 not using cmd... moment
Jun 05 23:02:55 x86corez: alternatively, on windows, try starting git using our own cmd.exe :)
```
Hermes suggested checking whether ReactOS cmd.exe was the guilty component here...

But this screenshot confirmed that the actual problem is somewhere else.
```
Jun 05 23:04:38 ROS cmd is not guilty
Jun 05 23:07:57 If there was a possibility to consult the received path, before looking at the contents of argvs... ?
Jun 05 23:08:30 dump contents of actual command line?
Jun 05 23:08:39 yeah
Jun 05 23:09:39 The thing you retrieve using GetCommandLineW
Jun 05 23:10:03 (which is, after simplifications, basically : NtCurrentPeb()->ProcessParameters->CommandLine )
Jun 05 23:10:59 Also I was thinking it could be a side-effect of having (or not having) git path into the env-vars....
Jun 05 23:12:17 hbelusca, command line is "git --version"
Jun 05 23:12:34 Always?
Jun 05 23:12:39 Yes, even on Windows
Jun 05 23:15:13 ok but then it would be nice if these different results are at least the same on Windows and on ROS, so that we can 100% exclude problems outside of msvcrt.
```
The last option was to test the ReactOS msvcrt.dll on Windows. I tried placing the file in the same directory as git.exe, but it didn't help. Mark suggested adding a .local file:
```
Jun 05 22:59:01 x86corez: add .local file next to msvcrt.dll ;)
Jun 05 22:59:47 exename.exe.local
Jun 05 23:00:17 just an empty file?
Jun 05 23:00:21 yea
Jun 05 23:00:49 mjansen: do we support these .local files?
Jun 05 23:00:52 we dont
Jun 05 23:00:54 windows does
Jun 05 23:15:48 moment... I'll try with .local
Jun 05 23:18:43 mjansen: I've created git.exe.local but it still doesn't load msvcrt.dll in this directory
```
But for some reason this method did not work either. Perhaps that's because I did all the experiments on the server edition of Windows (2008 R2).
The last idea was suggested by Hermes:
```
Jun 05 23:19:28 last solution: patch "msvcrt" name within git and perhaps other mingwe dlls ^^
Jun 05 23:20:12 good idea about patching!
```
So I replaced all occurrences of `msvcrt` in git.exe with `msvcrd` using WinHex, renamed the ReactOS msvcrt.dll accordingly, and here we are:

```
Jun 05 23:23:29 Yes! guilty is msvcrt :)
Jun 05 23:25:37 ah, so as soon as git uses our msvcrt we get the problem on windows.
Jun 05 23:25:38 hbelusca, mjansen, https://image.prntscr.com/image/FoOWnrQ4SOGMD-66DLW16Q.png
Jun 05 23:25:58 aha and it asserts <3
Jun 05 23:26:03 (it shows the assertion now)
```
Now we hit the same assertion, but on Windows! And that means the source of our problems is in one of the ReactOS msvcrt functions.
It's also worth noting that the assertion message is displayed correctly on Windows.
```
Jun 05 23:26:13 but it prints text and correctly.
Jun 05 23:26:20 oh
Jun 05 23:26:33 and on ROS it doesn't print in most cases xD
Jun 05 23:26:38 so also it excludes another hypothesis, namely that it could have been a bug in our msvcrt/crt
Jun 05 23:26:56 So possibly a strange bug in our console
```
So, to solve the actual problem, we had to find the part of msvcrt that provides the full path of the current application. I googled a bit and assumed the problem was with the `_pgmptr` variable.
```
Jun 06 00:07:43 https://msdn.microsoft.com/en-us/library/tza1y5f7.aspx
Jun 06 00:07:57 When a program is run from the command interpreter (Cmd.exe), _pgmptr is automatically initialized to the full path of the executable file.
Jun 06 00:08:01 this ^^)
Jun 06 00:08:50 That's what GetModuleFileName does.
Jun 06 00:09:04 yeah
Jun 06 00:10:30 Of course in ROS msvcrt we don't do this, but instead we initialize pgmptr to what argv[0] could be.
Jun 06 00:11:08 That's one thing.
Jun 06 00:11:34 The other thing is that nowhere it appears (in MS CRT from VS, or in wine) that argv is initialized using pgmptr.
Jun 06 00:13:33 hbelusca, I've checked argv[0] in some ROS command line tools, running them in Windows
Jun 06 00:13:56 they all interpret argv[0] as command line, not full path
Jun 06 00:14:04 so... I think it's git specific behaviour
Jun 06 00:14:16 or specific mingw compiler settings
Jun 06 00:28:12 x86corez: I'm making a patch for our msvcrt, would be nice if you could test it :)
Jun 06 00:28:21 I'll test it
```
Hermes sent a link to the patch; I applied it manually and rebuilt the system, and after that the original problem magically disappeared!

```
Jun 06 00:34:26 hbelusca, IT WORKS!
Jun 06 00:35:10 L O L
Jun 06 00:35:18 So it seems that something uses pgmptr to rebuild an argv.
Jun 06 00:35:52 I've even able to clone :)
Jun 06 00:36:19 \o/
Jun 06 00:36:21 2.10.0-rc2? not the release?
Jun 06 00:36:24 ok I'm gonna commit that stuff.
Jun 06 00:36:43 x86corez: gonna have ROS self-hosting <33
Jun 06 00:36:48 yeah!
Jun 06 00:37:01 gigaherz: I've built that from sources
Jun 06 00:37:37 oh, for testing this bug? o_O
Jun 06 00:37:50 yes, you missed the fun :p
Jun 06 00:39:46 git 2.10.0-windows.1 (release) works too!
Jun 06 00:39:54 commit!!!
```
Afterword
---------
And so, thanks to a collective effort, another bug that indirectly prevented ReactOS from building itself has been fixed. Funnily enough, not long before, another bug was fixed in the same msvcrt dynamic library (namely, in the `qsort` function) that had prevented the USB drivers in ReactOS from compiling.
I participate in the development of many projects written in different programming languages, both closed and open source. I've been contributing to the ReactOS project since 2014, but I only began to actively help and actually write code in 2017. It's especially interesting to work in this area because it's an entire operating system! You feel the huge scale of the result your efforts went into, as well as the pleasant feeling that there's one less bug! :)
Someone may wonder why I'm contributing to ReactOS and not, for example, Linux. Historically, I mostly write programs for Windows, and my favorite programming language is Delphi. Perhaps that's why the architecture of Windows NT together with the Win32 API is so interesting to me, and the ReactOS project, a free Windows alternative, makes an old dream come true: it lets you find out in practice how everything works under the hood.
I hope you enjoyed my first English article here. I'm looking forward to your comments!
### Links
* [Ticket in the JIRA bug tracker](https://jira.reactos.org/browse/CORE-12931)
* [`_pgmptr` API description on MSDN](https://msdn.microsoft.com/en-us/library/tza1y5f7.aspx)
* [The commit which fixed the bug](https://github.com/reactos/reactos/commit/f215f394d803c98e1c1c5f0768159f3336b7e552)
* [Russian version of this article](https://habr.com/ru/company/reactos/blog/414947/)
I have a time series that I have pulled from a netCDF file and I'm trying to convert it to a datetime format. The format of the time series is 'days since 1990-01-01 00:00:00 +10' (+10 meaning GMT+10).
```
time = nc_data.variables['time'][:]
time_idx = 0  # first timestamp
print time[time_idx]
# 9465.0
```
The expected result is `2015-12-01 00:00:00`. My attempt:
```
import time
time_datetime = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(time[time_idx]*24*60*60))
```
The `datetime` module's `timedelta` is probably what you're looking for.
For example:
```
from datetime import date, timedelta

# This may work for floats in general, but using integers
# is more precise (e.g. days = int(9465.0))
days = 9465

start = date(1990, 1, 1)  # This is the "days since" part
delta = timedelta(days)   # Create a time delta object from the number of days
offset = start + delta    # Add the specified number of days to 1990

print(offset)        # >>> 2015-12-01
print(type(offset))  # >>> <class 'datetime.date'>
```
You can then use and/or manipulate the offset object, or convert it to a string representation however you see fit.
You can use the same format for this `date` object as you do for your `time_datetime`:
```
print(offset.strftime('%Y-%m-%d %H:%M:%S'))
```
Output:
```
2015-12-01 00:00:00
```
Instead of using a `date` object, you could use a `datetime` object if, for example, you were later going to add hours/minutes/seconds/timezone offsets to it.
The code would stay the same as above with the exception of two lines:
```
# Here, you're importing datetime instead of date
from datetime import datetime, timedelta

# Here, you're creating a datetime object instead of a date object
start = datetime(1990, 1, 1)  # This is the "days since" part
```
Note: Although you don't state it, the other answer suggests you might be looking for timezone-aware datetimes. If that's the case, `dateutil` is the way to go in Python 2, as the other answer suggests. In Python 3, you'd want to use the `datetime` module's `tzinfo`.
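Tying this back to the original question: the units string ("days since 1990-01-01 00:00:00 +10") describes an epoch at UTC+10, so in Python 3 a fixed-offset `timezone` can be combined with `timedelta`. This is a sketch using only the standard library; the value 9465.0 is the sample from the question:

```python
from datetime import datetime, timedelta, timezone

# Epoch from the units string: 1990-01-01 00:00:00 at UTC+10.
epoch = datetime(1990, 1, 1, tzinfo=timezone(timedelta(hours=10)))

days_since = 9465.0                      # value read from the netCDF 'time' variable
ts = epoch + timedelta(days=days_since)  # timedelta accepts floats, too

print(ts.strftime('%Y-%m-%d %H:%M:%S %z'))  # 2015-12-01 00:00:00 +1000
print(ts.astimezone(timezone.utc))          # same instant expressed in UTC
```

Because `ts` is timezone-aware, comparisons and conversions against other aware datetimes (e.g. UTC observations) work correctly without manual offset arithmetic.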
01 Jul
The first year we decided to take ESGI on the road to the I Teach K! conference in Vegas we had an awesome time, but we wondered, “Was it beginner’s luck?” In 2015 we decided to “double down” and go back for a second year. We met more teachers, gave away more prizes and did double our fun from the first conference. This year we are headed back to fabulous Las Vegas with even bigger and better prizes in the bags. We know it’ll be outstanding. The third time’s the charm, right?!
SDE’s I Teach K! Conference
What is I Teach K! anyway? It’s the National Conference for Kindergarten Teachers sponsored by SDE (Staff Development for Educators). It’s five days (July 18-22) of learning from and collaborating with top Kindergarten education experts in over 140 content-rich sessions. It’s not too late to register for I Teach K! Check it out here!
ESGI Classroom Connections
Once again we will be hosting our own professional-development sessions in our booths in the exhibit hall. We call it “Classroom Connections.” If you want to connect with top teachers and make connections for using ESGI in your own classroom, please stop by. You can’t miss us—our booth is right by the entrances to the exhibit hall.
Here are some of the presenters you can connect with at our booth during the conference:
Monday, July 18
Deedee Wills—Mrs. Wills’ Kindergarten
Marsha McGuire—Differentiated Kindergarten
Katie Mense—Little Warriors
Greg Smedley-Warren—The Kindergarten Smorgasboard
Tuesday, July 19
Brittany Banister—Mrs. Banister’s Kindergarten Kids
Deanna Jump—Mrs. Jump’s Class
Kim Adsit—Kindergals
Vera Ahiyya-The Tutu Teacher
Palma Lindsay—KFUNdamentals
Mary Amoson—Sharing Kindergarten
Wednesday, July 20
Debbie Clement—Rainbows within Reach
Heidi Butkus—HeidiSongs
Adam Peterson—Teachers Learn Too
Kim Jordano—Kinder by Kim
Chad Boender—Male Kindergarten Teacher
Thursday, July 21
Donna Whyte—The Smartie Zone
Greg Smedley-Warren— The Kindergarten Smorgasboard
Chris Pombonyo—Famous in First
Prizes
Ready to take a chance with our prize wheel? You just might win a Selfie Stick, T-shirt, Backpack, Koozie, Candy, or even a FREE YEAR of ESGI! The prize wheel could even give you a chance to win our grand prize, an iPad Pro. Feeling lucky? Visit the booth and take a spin.
Roving Reporter and Prize Patrol
Not sure how much time you’ll have to hang around our booths? When you can’t come to us, we’ll come to you! This year Chris Pombonyo (Famous in First) is coming along to Vegas with us to be our roving reporter. If he spots you in one of our super-stylish ESGI T-shirts, he’ll hand you a prize ticket to spin the wheel back at our booth. Can’t go to Vegas? Watch for Chris’ live videos with presenters on Facebook!
Double Down! Extra Free Days with ESGI
So even if you’re not going to Vegas, we’ve got something for you to take a chance on. You’re not really taking a chance though, because in this case you’re sure to win! It’s our Double Down Giveaway. Go to and sign up for a 60 day free trial. Use Promo Code: summer16 and DOUBLE your trial period to 120 days! YES! 4 months of ESGI at no cost whatsoever, but you have to ACT NOW! The promo code is only effective through July 12, 2016. Once you have registered for your free trial, please use the link below to enter your new ESGI Username in the Woobox Contest Manager App for a chance to win a NIKON CAMERA D3300.
Click the link below and enter your active ESGI Username.
Leave us a comment if you’ll be in Vegas. We would love to hear from you and we hope to see you soon!
Going to Vegas!! How do I get an ESGI t-shirt before I go??????Jul 5, 2016 06:20 PM | https://www.esgisoftware.com/blog/2016/july/01/viva-las-vegas-viva-esgi/?page=14 | CC-MAIN-2019-22 | refinedweb | 3,828 | 58.52 |
The usual practice is to construct an address of the proper namespace-specific type, then cast a pointer to struct sockaddr * when you call bind or getsockname.
The one piece of information that you can get from the struct sockaddr data type is the address format designator. This tells you which data type to use to understand the address fully.
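The cast described above can be made concrete with a small C sketch (our own illustration, not part of the manual). It fills an IPv4 address using the namespace-specific struct sockaddr_in, then reads the format designator back through the generic struct sockaddr view, exactly as bind or getsockname would receive it:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Fill an IPv4 loopback address and return its address format
   designator, read through the generic struct sockaddr view. */
sa_family_t address_format(uint16_t port) {
    struct sockaddr_in addr;                 /* namespace-specific type */
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    /* bind and getsockname take this generic pointer: */
    const struct sockaddr *generic = (const struct sockaddr *) &addr;
    return generic->sa_family;               /* the format designator */
}
```

Because sa_family occupies the same position in every address format, code that receives a generic struct sockaddr * can dispatch on it to pick the right namespace-specific type.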
The symbols in this section are defined in the header file sys/socket.h.
The struct sockaddr type itself has the following members:
short int sa_family
- This is the code for the address format of this address. It identifies the format of the data which follows.
char sa_data[14]
- This is the actual socket address data, which is format-dependent. Its length also depends on the format, and may well be more than 14. The length 14 of sa_data is essentially arbitrary.
AF_UNSPEC designates the address format that does not correspond to any particular namespace. The corresponding namespace designator symbol PF_UNSPEC exists for completeness, but there is no reason to use it in a program.
sys/socket.h defines symbols starting with ‘AF_’ for many different kinds of networks, most or all of which are not actually implemented. We will document those that really work as we receive information about how to use them.
A complete beginner requesting assistance.
Printed From:
Vectorportal.com
Category:
Vector Stuff
Forum Name:
Forum Description:
Ask and give help related to graphic design - problems with software, design...
URL:
Printed Date:
17/Aug/2019 at 12:48pm
Software Version:
Web Wiz Forums 11.04 -
Topic:
A complete beginner requesting assistance.
Posted By:
gcfarri
Subject:
A complete beginner requesting assistance.
Date Posted:
26/Jul/2018 at 4:07pm
I put out a post before asking for help selecting a free program, but as I did not get any concrete suggestions, I will try again with different wording and hope for the best.
As a complete beginner at designing T-shirts, I would like to find out about some of the better free T-shirt design software out there. Most of the free ones that I have looked at seem very complicated, and I have found during my research that they are more for advanced design, which is not for me. What I would like is some kind of program that I can use with Amazon's Merch program, as I have been accepted into their program.
Some of the programs that I like are Vectr and Paint.NET. I don't know if there are other easy-to-use options (maybe not as effective or advanced as ones like GIMP or Inkscape) that will still do the job for designing Merch T-shirts.
The main things I would like to do in my T-shirt designs are:
Edit text
3D text
Import photos
Wrap text around an object
Change the background color
as well as any other tasks that you can suggest.
As many of you know, Amazon's Merch program requires 15 x 18” designs to be saved as .PNG at 300 dpi, with a resolution of 5,400 x 4,500 and a transparent background. (If any of this is wrong, please let me know.) I don't really understand most of Merch's requirements, so if anyone out there could please explain them to me, and how I might be able to utilize them with another, simpler program, I would really appreciate it.
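For what it's worth, the two sets of numbers are consistent with each other: the pixel resolution is just the print size in inches multiplied by the DPI. A quick, purely illustrative Python check:

```python
def pixels(inches, dpi=300):
    """Pixel count for one print dimension at the given DPI."""
    return inches * dpi

# Merch's 15" x 18" artwork at 300 DPI is exactly the required
# 4,500 x 5,400 pixel canvas:
print(pixels(15), pixels(18))  # 4500 5400
```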
Thank you Craig
Replies:
Posted By:
Spencer
Date Posted:
09/Aug/2018 at 3:03pm
Craig, I don't know what your background is, but just jumping into design work like this is quite a big step. And yes, Adobe Illustrator and Photoshop are not free; GIMP is, so start with it. They have tutorials online for it. And you can use the template provided. I would not use Photoshop for design of this type. I've done this kind of stuff for over 30 years (design for businesses and some colleges) and I still don't know enough.
So take some online classes or something. There is no easy way to do it.
Good luck
1. Introduction
Firebase App Check helps protect your backend resources from abuse, such as billing fraud and phishing, by making sure requests come from legitimate apps and devices. It works with both Firebase services and your own backend services to keep your resources safe.
You can learn more about Firebase App Check in the Firebase documentation.
App Check uses platform-specific services to verify the integrity of an app and/or device. These services are called attestation providers. One such provider is Apple's App Attest service, which App Check can use to verify the authenticity of Apple apps and devices.
What you'll build
In this codelab, you'll add and enforce App Check in an existing sample application so that the project's Realtime Database is protected from being accessed by illegitimate apps and devices.
What you'll learn
- How to add Firebase App Check to an existing app.
- How to install different Firebase App Check attestation providers.
- How to configure App Attest for your app.
- How to configure the debug attestation provider to test your app on Simulators during app development.
What you'll need
- Xcode 13.3.1 or later
- An Apple Developer account that allows you to create new app identifiers
- An iOS/iPadOS device that supports App Attest (learn about App Attest API availability)
2. Get the starter project
The Firebase Quickstarts for iOS repository contains sample apps to demonstrate different Firebase products. You will use the Firebase Database Quickstart app for SwiftUI as a base for this codelab.
Clone the Firebase Quickstarts for iOS repository from the command line:
git clone
cd quickstart-ios
Open the Realtime Database SwiftUI Quickstart app project in Xcode:
cd database/DatabaseExampleSwiftUI/DatabaseExample
xed .
3. Add App Check to your app
- Wait for Swift Package Manager to resolve the dependencies of the project.
- Open the General tab of the DatabaseExample (iOS) app target. Then, in the Frameworks, Libraries, and Embedded Content section, click the + button.
- Select to add FirebaseAppCheck.
4. Create and install the App Check provider factory
- In the Shared file group, add a new group named AppCheck.
- Inside this group, create a factory class in a separate file, e.g. MyAppCheckProviderFactory.swift, making sure to add it to the DatabaseExample (iOS) target:

class MyAppCheckProviderFactory: NSObject, AppCheckProviderFactory {
  func createProvider(with app: FirebaseApp) -> AppCheckProvider? {
    #if targetEnvironment(simulator)
      // App Attest is not available on simulators; use the debug provider.
      return AppCheckDebugProvider(app: app)
    #else
      // Use App Attest provider on real devices.
      return AppAttestProvider(app: app)
    #endif
  }
}
- Next, in DatabaseExampleApp.swift, make sure to import FirebaseAppCheck, and set an instance of the MyAppCheckProviderFactory class as the App Check provider factory.
import SwiftUI
import FirebaseCore
import FirebaseAppCheck

@main
struct DatabaseExampleApp: App {
  init() {
    // Set an instance of MyAppCheckProviderFactory as an App Check
    // provider factory before configuring Firebase.
    AppCheck.setAppCheckProviderFactory(MyAppCheckProviderFactory())
    FirebaseApp.configure()
  }
  ...
}
5. Create and configure a Firebase project
To use App Check in your iOS project, you need to follow these steps in the Firebase console:
- Set up a Firebase project.
- Add your iOS app to the Firebase project.
- Configure Firebase Authentication.
- Initialize the Realtime Database instance you're going to protect.
- Configure App Check.
Create a project
First, you need to create a Firebase project.
- In the Firebase console, select Add project.
- Name your project
App Check Codelab
- Click Continue.
- Disable Google Analytics for this project, and then click Create project.
Create a Realtime Database Instance
Now, navigate to the Realtime Database section of the Firebase console.
- Click on the Create Database button to start the database creation workflow.
- Leave the default location (us-central1) for the database unchanged, and click on Next.
- Make sure Locked Mode is selected and click the Enable button to enable the Security Rules for your database.
- Navigate to the Rules tab of the Realtime Database browser, and replace the default rules with the following:
{
  "rules": {
    // User profiles are only readable/writable by the user who owns it
    "users": {
      "$UID": {
        ".read": "auth.uid == $UID",
        ".write": "auth.uid == $UID"
      }
    },
    // Posts can be read by anyone but only written by logged-in users.
    "posts": {
      ".read": true,
      ".write": "auth.uid != null",
      "$POSTID": {
        // UID must match logged in user and is fixed once set
        "uid": {
          ".validate": "(data.exists() && data.val() == newData.val()) || newData.val() == auth.uid"
        },
        // User can only update own stars
        "stars": {
          "$UID": {
            ".validate": "auth.uid == $UID"
          }
        }
      }
    },
    // User posts can be read by anyone but only written by the user that owns it,
    // and with a matching UID
    "user-posts": {
      ".read": true,
      "$UID": {
        "$POSTID": {
          ".write": "auth.uid == $UID",
          ".validate": "data.exists() || newData.child('uid').val() == auth.uid"
        }
      }
    },
    // Comments can be read by anyone but only written by a logged in user
    "post-comments": {
      ".read": true,
      ".write": "auth.uid != null",
      "$POSTID": {
        "$COMMENTID": {
          // UID must match logged in user and is fixed once set
          "uid": {
            ".validate": "(data.exists() && data.val() == newData.val()) || newData.val() == auth.uid"
          }
        }
      }
    }
  }
}
- Click the Publish button to activate the updated Security Rules.
Prepare your iOS App to be connected to Firebase
To be able to run the sample app on a physical device, you need to add the project to your development team so Xcode can manage the required provisioning profile for you. Follow these steps to add the sample app to your developer account:
- In Xcode, select the DatabaseExample project in the project navigator.
- Select the DatabaseExample (iOS) target and open the Signing & Capabilities tab.
- You should see an error message saying "Signing for DatabaseExample (iOS) requires a development team".
- Update the bundle identifier to a unique identifier. The easiest way to achieve this is by using the reverse domain name of your website, for example com.acme.samples.firebase.quickstart.DatabaseExample (please don't use this ID; choose your own, unique ID instead).
- Select your development team.
- You'll know everything went well when Xcode displays "Provisioning Profile: Xcode Managed Profile" and a little info icon next to this label. Clicking on this icon will display more details about the provisioning profile.
Connect your iOS App
For an in-depth explanation of connecting your app, check out the documentation about adding Firebase to your iOS project. To get started, follow these main steps in the Firebase console:
- From the Project Overview screen of your new project, click on the + Add app button and then click on the iOS+ icon to add a new iOS app to your Firebase project.
- Enter the bundle ID of your app (use the one you defined in the previous section, such as com.acme.samples.firebase.quickstart.DatabaseExample - keep in mind this must be a unique identifier)
- Click Register App.
- Firebase generates a GoogleService-Info.plist file containing all the necessary Firebase metadata for your app.
- Click Download GoogleService-Info.plist to download the file.
- In Xcode, you will see that the project already contains a file named GoogleService-Info.plist. Delete this file first - you will replace it with the one for your own Firebase project in the next step.
- Copy the GoogleService-Info.plist file that you downloaded in the previous step into the root folder of your Xcode project and add it to the DatabaseExample (iOS) target, making sure it is named GoogleService-Info.plist.
- Click through the remaining steps of the registration flow. Since the sample project is already set up correctly, you don't need to make any changes to the code.
Configure Firebase Authentication
Phew! That's quite a bit of setup so far, but hold tight! If you're new to Firebase, you've seen essential parts of a workflow that you'll soon be familiar with.
Now, you will configure Firebase Authentication for this app.
Enable Authentication Email/Password Sign-in provider
- Still in the Firebase console, open the Authentication section of the console.
- Click Get started to set up Firebase Authentication for your project.
- Select the Sign-in method tab.
- Select Email/Password in the Native providers section.
- Enable Email/Password and click Save.
Add a test user
- Open the Users tab of the Authentication section.
- Click Add user.
- Specify an email and a password for your test user, then click Add user.
Take the app for a spin
Go back to Xcode, and run the application on the iOS Simulator. Sign in with the email and password for the test user you just created. Once signed in, create a post, post a comment to an existing post, and star/unstar posts.
6. Configure an App Attest attestation provider
In this step, you will configure App Check to use the App Attest provider in the Firebase console.
- In the Firebase console, navigate to the App Check section of the console.
- Click Get started.
- In the Apps tab, click on your app to expand its details.
- Click App Attest to configure App Attest, then enter the Team ID of your Apple Developer Account (you can find this in the Membership section on the Apple Developer portal):
- Click Save.
With this, you have a working Firebase project that is connected to our new app, and App Check is enabled.
You're now ready to configure our specific attestation service! For more about this workflow, see Enable App Check with App Attest on iOS.
7. Configure App Attest for your application
Now it's time to get your hands on the Firebase App Check SDK and implement some client code.
First, you need to configure the Xcode project so that the SDK can use Apple's App Attest API to ensure that requests sent from your app come from legitimate instances of your app.
- Add the App Attest capability for your app target in the Xcode project:
- open the Signing & Capabilities tab in your app target settings
- click the "+" button
- in the dialog, find and select App Attest capability
- A file DatabaseExample (iOS).entitlements will appear in the root folder of your Xcode project after performing the previous step.
- In the DatabaseExample (iOS).entitlements file, change the value for the App Attest Environment key to production.
Once you finish these steps and launch the app on a physical iOS device (iPhone/iPad), the app will still be able to access the Realtime Database. In a later step, you will enforce App Check, which will block requests being sent from illegitimate apps and devices.
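If you want to verify locally that token acquisition works before turning enforcement on, you can fetch a token by hand using the public FirebaseAppCheck API. This call site is our own addition for debugging, not a step in the codelab:

```swift
import FirebaseAppCheck

// Anywhere after FirebaseApp.configure(), e.g. behind a debug-only button:
AppCheck.appCheck().token(forcingRefresh: false) { token, error in
  if let error = error {
    print("App Check token error: \(error)")
  } else if let token = token {
    // Avoid logging full tokens in release builds.
    print("App Check token expires at \(token.expirationDate)")
  }
}
```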
To learn more about this workflow, see Enable App Check with App Attest on iOS.
8. Configure a Debug Attestation Provider for the iOS Simulator
The Firebase App Check Debug provider makes it possible to test applications with Firebase App Check enforcement in untrusted environments, including the iOS Simulator, during the development process. Next, you will configure the debug provider.
Install the Firebase debug provider in your app
Option 1: Conditionally create an instance of the debug provider in your factory
You did most of this when you created the App Check provider factory. In this step, you will add logging of the local debug secret generated by the debug provider, so you can register this instance of the app in the Firebase console for debugging purposes.
Update MyAppCheckProviderFactory.swift with the following code:

class MyAppCheckProviderFactory: NSObject, AppCheckProviderFactory {
  func createProvider(with app: FirebaseApp) -> AppCheckProvider? {
    #if targetEnvironment(simulator)
      // App Attest is not available on simulators; use a debug provider.
      let provider = AppCheckDebugProvider(app: app)

      // Print only the locally generated token to avoid leaking a valid token in logs.
      print("Firebase App Check debug token: \(provider.localDebugToken())")

      return provider
    #else
      // Use App Attest provider on real devices.
      return AppAttestProvider(app: app)
    #endif
  }
}
This approach gives us more flexibility for configuring App Check depending on the environment. For instance, you may use other attestation providers like DeviceCheck, or a custom attestation provider, on OS versions where App Attest is not available. See an example below:

class MyAppCheckProviderFactory: NSObject, AppCheckProviderFactory {
  func createProvider(with app: FirebaseApp) -> AppCheckProvider? {
    #if targetEnvironment(simulator)
      // App Attest is not available on simulators; use a debug provider.
      return AppCheckDebugProvider(app: app)
    #else
      if #available(iOS 14.0, *) {
        // Use App Attest provider on real devices.
        return AppAttestProvider(app: app)
      } else {
        return DeviceCheckProvider(app: app)
      }
    #endif
  }
}
Option 2: Install AppCheckDebugProviderFactory
For simpler cases you can temporarily or conditionally install the AppCheckDebugProviderFactory before configuring the Firebase application instance:
init() {
  #if targetEnvironment(simulator)
    let providerFactory = AppCheckDebugProviderFactory()
  #else
    let providerFactory = MyAppCheckProviderFactory()
  #endif
  AppCheck.setAppCheckProviderFactory(providerFactory)
  FirebaseApp.configure()
}
This will save you a couple of lines of code on creating your own App Check provider factory.
Register your debug secret in the Firebase console
Get the debug secret from your iOS Simulator
- If you chose to install AppCheckDebugProviderFactory (option 2 above), you need to enable debug logging for your app by adding -FIRDebugEnabled to the app launch arguments:
- Run your app on a Simulator
- Find the debug secret in the Xcode console. You can use the console filter to find it faster:
Note: The debug secret is generated for your simulator on the first app launch and is stored in the user defaults. If you remove the app, reset the simulator or use another simulator, a new debug secret will be generated. Make sure to register the new debug secret.
Register the debug secret
- Back in the Firebase console, go to the App Check section.
- In the Apps tab, click on your app to expand its details.
- In the overflow menu, select Manage debug tokens:
- Add the secret that you copied from the Xcode console, and then click Save
After these steps, you can use the app on the Simulator even with App Check enforced.
Note: The debug provider was specifically designed to help prevent debug secret leaks. With the current approach, you don't need to store the debug secret in your source code.
More details about this flow can be found in the documentation - see Use App Check with the debug provider on iOS.
9. Enable App Check enforcement for Firebase Realtime Database
For now, our app declares an AppCheckProviderFactory that returns an AppAttestProvider for real devices. When running on a physical device, your app will perform the attestation and send the results to the Firebase backend. However, the Firebase backend still accepts requests from any device, the iOS Simulator, a script, etc. This mode is useful when you still have users with an old version of your app without App Check, and you don't want to enforce access checks yet.
Now, you need to enable App Check enforcement to ensure the Firebase app can be accessed only from legitimate devices. Old app versions without App Check integration will stop working once you enable enforcement for the Firebase project.
- In the Firebase console in the App Check section, click Realtime Database to expand its details.
- Click Enforce.
- Read the information in the confirmation dialog, and then click Enforce.
After completing these steps, only legitimate apps will be able to access the database. All other apps will be blocked.
Try accessing the Realtime Database with an illegitimate app
To see App Check enforcement in action, follow these steps:
- Turn off App Check registration by commenting out the App Check registration code in the init method of your app entry point in DatabaseExampleApp.
- Reset the Simulator by selecting Device > Erase All Content and Settings. This will wipe the Simulator (and invalidate the device token).
- Run the app again on the Simulator.
- You should now see the following error message:
[FirebaseDatabase][I-RDB034005] Firebase Database connection was forcefully killed by the server. Will not attempt reconnect. Reason: Invalid appcheck token.
To re-enable App Check, do the following:
- Un-comment the App Check registration code in DatabaseExampleApp.
- Restart the app.
- Take note of the new App Check token in Xcode's console.
- Register the debug token in your app's App Check settings in the Firebase console.
- Re-run the app.
- You should no longer see an error message, and should be able to add new posts and comments in the app.
10. Congratulations!
Now you know how to:
- Add App Check to an existing project
- Configure an App Attest attestation provider for the production version of your app
- Configure a debug attestation provider to test your app on a simulator
- Observe the app version rollout to know when to enforce App Check for your Firebase project
- Enable App Check enforcement
The setup described in this codelab will work for most cases, but App Check allows you more flexibility if needed - check out the following links for more details:
Template Classes in C++
In this article, I am going to discuss Template Classes in C++. Please read our previous article where we discussed C++ Class and Constructors. The C++ programming language supports generic functions and generic classes. Generic functions are called template functions and generic classes are called template classes.
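Before the class example, the generic-function half of that sentence can be sketched in a few lines. This snippet is our own illustration (it is not part of the article's Arithmetic example):

```cpp
#include <cassert>

// A template (generic) function: one definition works for any type
// that supports operator+.
template <class T>
T addValues(T x, T y)
{
    return x + y;
}
```

Calling addValues(10, 5) makes the compiler instantiate an int version, while addValues(2.5, 0.5) instantiates a double version; template classes, discussed next, apply the same idea to a whole class.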
Example:
Let us understand generic classes and functions with an example. First, we will explain the example below; then we will see what it means to be generic and how to convert the class into a generic class. Please have a look at the code below. As you can see in the image below, there is a class called Arithmetic with two data members ‘a’ and ‘b’ of integer type. Then there is a constructor taking two parameters; again, we have used parameters with the same names ‘a’ and ‘b’. Finally, we have two simple functions for adding and subtracting those two numbers and returning the result.
So, this is a simple arithmetic class. Now let us focus on the template. Let us implement the three functions (2 member functions and 1 constructor) outside the class using the scope resolution operator as shown below.
If you look at the above image, we have one parameterized constructor which taking two parameters with the same name a and b and initializing the class data members a and b. To differentiate the class members and parameter we have used this object and while initializing we are using the arrow (->) operator. The next two functions i.e. add and sub are straightforward. They simply do the arithmetic addition and subtraction and returning the result. Now let us proceed and understand the Generic.
Generic Classes:
Now, let us talk about generic classes. To understand them, first look at the Arithmetic class. The Arithmetic class performs arithmetic operations, i.e. addition and subtraction, on integer-type data. What about float-type data? What if we want to use a long integer, or a double? For those, this Arithmetic class will not work. We would have to write a separate class for floating-point data that performs the same arithmetic operations.
Shall we write two different classes just for a change of data type?
The answer is a big No. C++ says that you can use the same class for multiple data types. A given object uses only one data type at a time, but the class itself works for any type of data. Such a class is called a generic class, and it is defined as a template.
Now, let us convert this Arithmetic class into a generic class using a template so that it can operate on different data types and perform arithmetic operations.
How to change an existing class into a generic class?
We have written the Arithmetic class. Let us convert this Arithmetic class into generic. For understanding this please have a look at the following code. The left-hand side code is without a generic class and the right-hand side code with a generic class.
As you can see at the top of the class, we need to write template <class T> to make the class a template class. Here T is the generic type, and wherever we want a generic type we use T. It is not mandatory to replace every data type in the class with T. As you can see in the generic class, I have used type T to declare the data members, and in the constructor I have also used type T for the parameters. Finally, the return type of each method is also changed to T.
Changes in Constructor and Methods:
Now we need to use the same template in the constructor as well as in the methods. For a better understanding, please have a look at the following image and observe the changes carefully. Note that for every function and constructor where we want a template, we first need to declare the template. The second change is that wherever we use the Arithmetic class name, we need to mention the template type. The final change is that we have replaced all the types with T.
Note: In our example, we have replaced all the types with generic type T, but it is not mandatory. So, whenever you are using a template, do the changes carefully.
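As a small illustration of that note (our own sketch, not from the article), a template class can mix the generic type T with concrete types:

```cpp
#include <cassert>

// Only the stored value is generic; the read counter deliberately
// stays a concrete int no matter what T is.
template <class T>
class Tracked
{
private:
    T value;
    int reads;
public:
    Tracked(T v) : value(v), reads(0) {}
    T get()
    {
        reads++;
        return value;
    }
    int readCount()
    {
        return reads;
    }
};
```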
Main Method:
Let us see how to use the template in our main method. For better understanding, please have a look at the below code. First, we are creating an object of Arithmetic. Arithmetic is a template, so while creating the object, we need to mention the data type of the template. In our example, we mention the type as int. So, wherever we used the type T, it will be replaced with the data type int. As the Template is of type int, now while creating the object, we can pass two integer numbers to initialize the data members a and b through the parameterized constructor. Once the object is created then we can call the member functions using the object and dot operator.
Now, if you want to perform arithmetic operations on the float data type, then you need to use float while creating the object, as shown below.
The complete code is given below.
#include <iostream>
using namespace std;

template <class T>
class Arithmetic
{
private:
    T a;
    T b;
public:
    Arithmetic(T a, T b);
    T add();
    T sub();
};

template <class T>
Arithmetic<T>::Arithmetic(T a, T b)
{
    this->a = a;
    this->b = b;
}

template <class T>
T Arithmetic<T>::add()
{
    T c;
    c = a + b;
    return c;
}

template <class T>
T Arithmetic<T>::sub()
{
    T c;
    c = a - b;
    return c;
}

int main()
{
    Arithmetic<int> ar1(10, 5);
    cout << ar1.add();
    cout << ar1.sub();

    Arithmetic<float> ar2(10.56, 5.25);
    cout << ar2.add();
    cout << ar2.sub();
}
That’s it. This is the end of this section, i.e. the essentials required in C and C++ to learn data structures. In the next section, I am going to discuss the environment setup. Here, in this article, I tried to explain template classes in C++ with an example. I hope you enjoy this article.
django-viewflow 0.12.2
Reusable workflow library for django
Reusable workflow library for Django.
Viewflow is a workflow library based on BPMN concepts. BPMN (business process modeling and notation) is a widely adopted industry standard for business process modeling. BPMN provides a standard notation readily understandable by all business stakeholders. Viewflow bridges the gap between picture and executable, ready-to-use web application.
Demo:
After the more than 10-year history of the BPMN standard, it contains a whole set of battle-proven primitives for all occasions and helps you describe all real-life business process scenarios. Viewflow helps you build a BPMN diagram in code and keep business logic separate from Django forms and views code.
Documentation
Read the documentation at the
License
Viewflow is an Open Source project licensed under the terms of the AGPL license - The GNU Affero General Public License v3.0
Viewflow Pro has a commercial-friendly license allowing private forks and modifications of Viewflow. You can find the commercial license terms in COMM-LICENSE. Please see FAQ for more detail.
Latest changelog
0.12.0 - 2017-02-14
This is a cumulative release with many backward-incompatible changes.
- Django 1.6 is no longer supported.
- The frontend is now part of the open-source package.
- Flow chart visualization added.
- Every _cls suffix, e.g. in flow_cls and activation_cls, was renamed to _class. The reason for that is simply to be consistent with the Django naming theme.
- Django-Extra-Views integration is removed. This was a pretty creepy way to handle formsets and inlines within Django class-based views. Instead, django-material introduces a new way to handle form inlines, same as a standard form field. See details in the documentation.
- Views no longer inherit and implement an Activation interface. This change makes things much simpler internally and fixes inconsistencies in different scenarios. The @flow_view and @flow_start_view decorators are no longer callable.
- The activation is now passed as a request attribute. You need to remove the explicit activation parameter from the view function signature and use request.activation instead.
- Built-in class-based views are renamed to be more consistent. Check the documentation to find the new view names.
- If().OnTrue().OnFalse() renamed to If().Then().Else()
- All conditions in If, Switch and other nodes now receive a node activation instance instead of a process, so you can get access to the current task via the activation.task variable.
- The same goes for callables in the .Assign() and .Permissions definitions.
- task_loader is now an attribute of a flow task. It makes functions and signal handlers reusable across different flows.
- Flow namespaces are no longer hardcoded. Flow views can now be attached to any namespace in a URL config.
- The flow_start_func and flow_start_signal decorators need to be used for start node handlers. The decorators establish proper locking that avoids concurrent flow process modifications in background tasks.
- To use a Celery job with Django 1.8, django-transaction-hooks needs to be enabled.
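The If().Then().Else() chain mentioned above is a fluent-builder pattern. As a rough, viewflow-independent illustration of the naming (entirely a sketch, not viewflow's actual implementation), such a node can be modeled like this:

```python
class If:
    """Minimal fluent conditional node, illustrating Then()/Else()."""

    def __init__(self, cond):
        self.cond = cond          # receives the node activation
        self.on_true = None
        self.on_false = None

    def Then(self, node):
        self.on_true = node
        return self               # returning self enables chaining

    def Else(self, node):
        self.on_false = node
        return self

    def next_node(self, activation):
        return self.on_true if self.cond(activation) else self.on_false


node = If(lambda activation: activation["approved"]) \
    .Then("approve_task") \
    .Else("reject_task")
print(node.next_node({"approved": True}))   # approve_task
```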
- Author: Mikhail Podgurskiy
- Keywords: workflow,django,bpm,automaton
- License: AGPLv3
- Platform: Any
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)
- Natural Language :: English
- Operating System :: OS Independent
- Programming Language :: Python :: 3
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: kmmbvnr
- DOAP record: django-viewflow-0.12.2.xml
Hi,
I need to make a main menu for this tower defense game that I am making. I have 2 parts to the menu that I need to put together. I have it as follows:
I have a start screen where the player presses the start button. I now need it to take the user to the main menu itself. I have both the start menu and main menu in the same document but on different layers. I have a button labeled as start which I have set up to where when it is clicked, it changes colors but I also need it to hide/show the menu layer. I just need the that start button to take users to the menu layer where I have 3 more buttons which are resume, new, and options. I will need those buttons to go to their different layers also. After users hit the resume or new buttons, I need the game itself to start which I will start making after I figure out the other issues.
I am new to Flash and I really want to learn how to make tower defense games. For now, I am using as a guide to make the game stuff but it doesn't say anything about a main menu. I am using a trial version of Flash Pro CS6 and it is due to expire in 28 days.
Any and all help will be great! Thanks, xp3tp85
convert your menu to a movieclip (select it, right click, click convert to symbol, movieclip). assign an instance name in the properties panel (eg, main_menu);
assign your start button an instance name in the properties panel (eg, start_button).
you can then use:
main_menu.visible = false;
start_button.addEventListener(MouseEvent.CLICK,startF);
function startF(e:MouseEvent):void{
main_menu.visible = true;
}
Ok. I did that code and I now only have 1 problem: the screen flashes between the start screen and the main menu until I click start; it then stays still at the main menu. Is there AS that will prevent the timeline from moving, or do I need to have the menu and start screen on separate layers? Help please?
add:
stop()
to that first frame that has your start button.
Thank you! Problem solved!
you're welcome.
Ok, hold on. I am at a friend's house trying the same code and now we are getting an error #1009: Cannot access a property or method of a null object reference.
click file>publishing settings>swf and tick "permit debugging". retest.
the problematic line of code will be in the error message. that will allow you to quickly find the object that does not exist when that line of code executes.
I used this and it worked:
import flash.events.MouseEvent;
start_button.addEventListener(MouseEvent.CLICK,startF);
function startF (e:MouseEvent):void{
gotoAndStop("main_menu");
}
On the next layer I used this and it worked too:
back_button.addEventListener(MouseEvent.CLICK, buttonClick);
function buttonClick(event:MouseEvent):void{
gotoAndStop(1);
};
Apparently it won't let me use the same code twice on one timeline. Is there any way around this? I need a few more buttons on the main menu for the game.
there is never any reason to declare the same variable or function more than once on one timeline and you should expect to trigger a compiler error if you try.
The problem is that I need 4 buttons on the main menu but it won't let me use the same code on the same layer, even on different frames. I can upload an example image if that would help you see what the problem is.
Is there any way to turn off/bypass the error? If so, please let me know.
I figured everything out. It is all running smoothly. Thank you for the help!
you're welcome. | http://forums.adobe.com/thread/1103278 | CC-MAIN-2014-15 | refinedweb | 636 | 81.53 |
Tens and twos
Want to share your content on python-bloggers? click here.
Only three months ago, market pundits were getting lathered up about the potential for an inverted yield curve. We discussed that in our post Fed up. But a lot has changed since then.
- One oft-used measure of the yield curve, the time spread (10-year Treasury yields less 3-month yields), has inverted (gone negative).
- The NY Fed’s yield curve model sets the probability of recession 12 months hence above 31%, up from over 27% in May.
- The US/China trade war escalated.
- And now another yield curve recession predictor — the ten-year yield minus the two-year yield (ten-twos) — is close to inverting.
The potential ten-twos inversion has received plenty of attention in the financial press. Interestingly, one well-known quantitative investor is arguing that 10-year Treasuries are expensive based, in part, on the inverted yield curve. We can’t do justice to this argument here, but we wanted to flag it since it is a countervailing view.
Is it time to be worried? Sell all your stocks and use your cash to buy gold, crossbows, and tons of Dinty Moore stew? Let the data answer that one.
First, we’ll look at a graph of both the time spread and the ten-twos. Note the data on the ten-twos doesn’t go as far back due to the shorter two-year yield time series.
As in the past, let’s ask how frequently an inverted yield curve presages a recession. Instead of showing the contingency tables, as we have done previously, we’ll just show a graph of how often a recession occurs given a yield curve inversion. This is also known as specificity and, in effect, measures the number of months in which the economy is in recession and the yield curve is inverted, divided by the total number of recession months. If an inverted yield curve always coincided with (or preceded) a recession, that number would be 100%.
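The specificity calculation described above is simple enough to sketch in a few lines. (This is a hedged illustration in Python rather than the post's own R; the toy monthly flags below are made up, not drawn from the FRED data used later.)

```python
# Specificity as used here: of all recession months, what share also had an
# inverted yield curve? Inputs are monthly 0/1 flags.
def specificity(inverted, recession):
    recession_months = sum(recession)
    both = sum(1 for i, r in zip(inverted, recession) if i and r)
    return both / recession_months

# Toy data: 3 recession months, 2 of which coincided with an inversion.
inv = [0, 1, 1, 0, 1, 0]
rec = [0, 1, 1, 1, 0, 0]
print(round(specificity(inv, rec) * 100, 1))  # → 66.7
```

The R code at the end of the post computes the same ratio from the contingency tables, e.g. `tab[2,2]/(tab[2,2] + tab[1,2])`.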
Before we show the graph, we need to explain the data we collected. The first series is the time spread, as a reference; then the ten-twos. In other words, the percentages we calculate are how often a recession occurs when either the time spread or the ten-twos are inverted. We then look at the likelihood of a recession when both the time spread and the ten-twos are inverted, labeled "combined". Next, we calculate the likelihood of recession six and twelve months later. Finally, we interpolate the ten-twos for the period prior to 1976 to see if the likelihood increases.[1] The likelihoods are ordered in the chart for ease of analysis.
Interestingly, the likelihood the economy is in recession is the lowest when both curves are inverted. That’s good and to be expected. Yield curve inversion is meant to be a leading indicator. Still, that the economy is in recession 7.1% of the time when both curves are inverted should at least tell you that inversion can sometimes be a coincident indicator.
Moving on, the economy is a bit more likely to be in recession when the ten-twos are inverted than when the time spread is. That makes sense: if we hit a recession, the Fed is likely to lower rates over time, so the time spread will revert faster than the ten-twos, as short-term rates react faster to Fed policy than longer-term rates do.
If both curves are inverted, there’s almost a 30% chance of recession in 6 months and over a 40% chance in twelve months. But when we include the interpolated data, that likelihood decreases to just under 35%. The main reason is that while the number of months in recession increases by 84%, the number of months of inverted yield curves only increases by 52%.
What if one curve is inverted and the other is not, as is now the case? The chance of recession in 12 months based on the interpolated data is 2.9%. The chance of a recession in 6 or 12 months is 4.9%.
Such a low rate of occurrence does not mean that there’s only a small chance of a recession in the future. Nor does it mean that the probability of a recession in 6 to 12 months can’t rise to better than 50/50. It does suggest, however, that if we believe the probability is higher than the historical rate plus some fudge factor[2] for the potential of being wrong, we’d need to have a good reason why it is otherwise.
To do that, we could look at additional variables such as industrial production, unemployment, or consumer sentiment. We could also look at overall yield levels. Perhaps a negative yield curve is more meaningful when 10-year yields are closer to the historical average of 5.8% vs. the average of the last 8 months of 2.4%, which is 1.5 standard deviations below the historical average prior to the global financial crisis.
Alternatively, instead of trying to decide if the likelihood of recession is greater than the historical record, we could look at whether other data support a notion that it’s different this time. For example, we could analyze the stock market, which is forward-looking and thus one such barometer of whether a recession is looming. If the market believes a recession is likely to occur in the next few months, then prices would likely fall. For example, we could ask how the market trended after the yield curve inverted. Once we answer that question we can then look at how the present aligns with the past to note any major differences. For example, here is a graph of the range of average returns in the S&P500 on succeeding periods once time spread has inverted.
Since the time spread first inverted in June, the returns to the S&P 500 have been a bit worse than the historical record, off by about 1%. But these averages take into account any month with a yield curve inversion, as opposed to starting only at the inception of the inverted curve. Still, this suggests a potential analytical point of departure. But that will have to wait for another post.
The takeaway for now is that it's okay to keep calm so long as the ten-twos don't go negative. Until then, and until our next post, here is the code underlying the previous analysis and graphical display.
```r
# Load packages
library(tidyquant)
library(printr)

# Load data
df <- readRDS("~/Data Science/Blog_3/yield_curve.rds")

# Get daily data
symbols <- c("T10Y2Y", "T10Y3M", "GS1", "GS2", "GS10")
for(symbol in symbols){
  x <- getSymbols(symbol, src = "FRED", from = "2019-01-01", auto.assign = FALSE)
  names(x) <- tolower(symbol)
  assign(tolower(symbol), x)
}

# Add updated data
last_row <- data.frame(date = as.Date("2019-08-01"),
                       usrec = 0,
                       time_spread = as.numeric(mean(t10y3m["2019-08"])),
                       ten_one = 0.1,
                       ten_two = as.numeric(mean(t10y2y["2019-08"])))

df_1 <- df %>%
  bind_rows(last_row)

# Plot data
df_1 %>%
  filter(!is.na(ten_two)) %>%
  ggplot(aes(x = date)) +
  geom_ribbon(aes(ymin = usrec * min(time_spread),
                  ymax = usrec * max(time_spread)), fill = "lightgrey") +
  geom_line(aes(y = ten_two, color = "Ten-two")) +
  geom_line(aes(y = time_spread, color = "Time spread")) +
  scale_colour_manual("", breaks = c("Ten-two", "Time spread"),
                      values = c("red", "blue")) +
  geom_hline(yintercept = 0, color = "black") +
  theme(legend.position = "top", legend.box.spacing = unit(0.05, "cm")) +
  labs(y = "Spread (%)", x = "",
       title = "Yield spreads vs. \nUS recessions") +
  ylim(c(min(df$time_spread), max(df$time_spread)))

# Old table
tab_old <- table(Inversions = ifelse(df$time_spread < 0, 1, 0),
                 Recessions = df$usrec)
tab_old_spec <- round(tab_old[2,2]/(tab_old[2,2] + tab_old[1,2]), 3)*100 # specificity

# Basic table
tab <- table(Inversions = ifelse(df$ten_two < 0, 1, 0),
             Recessions = df$usrec)
tab_spec <- round(tab[2,2]/(tab[2,2] + tab[1,2]), 3)*100 # specificity

# New table
df_tab <- df_1 %>% na.omit()
tab_new <- table(Inversions = ifelse(df_tab$time_spread < 0 & df_tab$ten_two < 0, 1, 0),
                 Recessions = df_tab$usrec)
tab_new_spec <- round(tab_new[2,2]/(tab_new[2,2] + tab_new[1,2]), 3)*100 # specificity

# Table: 6 months forward
df_6 <- df_1 %>% mutate(usrec = lead(usrec, 6, default = 0))
df_tab_6 <- df_6 %>% na.omit()
tab_6 <- table(Inversions = ifelse(df_tab_6$time_spread < 0 & df_tab_6$ten_two < 0, 1, 0),
               Recessions = df_tab_6$usrec)
tab_6_spec <- round(tab_6[2,2]/(tab_6[2,2] + tab_6[1,2]), 3)*100 # specificity

# Table: 12 months forward
df_12 <- df_1 %>% mutate(usrec = lead(usrec, 12, default = 0))
df_tab_12 <- df_12 %>% na.omit()
tab_12 <- table(Inversions = ifelse(df_tab_12$time_spread < 0 & df_tab_12$ten_two < 0, 1, 0),
                Recessions = df_tab_12$usrec)
tab_12_spec <- round(tab_12[2,2]/(tab_12[2,2] + tab_12[1,2]), 3)*100 # specificity

# Table: 12 months forward, interpolated
prem_2y <- mean(gs2 - gs1)
df_tab_12i <- df_12 %>%
  mutate(ten_two = ifelse(is.na(ten_two), ten_one + prem_2y, ten_two))
tab_12i <- table(Inversions = ifelse(df_tab_12i$ten_two < 0 & df_tab_12i$time_spread < 0, 1, 0),
                 Recessions = df_tab_12i$usrec)
tab_12i_spec <- round(tab_12i[2,2]/(tab_12i[2,2] + tab_12i[1,2]), 3)*100 # specificity

# Table: time spread negative, ten-twos positive, recession in 12 months
tab_12ia <- table(Inversions = ifelse(df_tab_12i$time_spread < 0 & df_tab_12i$ten_two > 0, 1, 0),
                  Recessions = df_tab_12i$usrec)
tab_12ia_spec <- round(tab_12ia[2,2]/(tab_12ia[2,2] + tab_12ia[1,2]), 3)*100 # specificity

# Recession in 6 or 12 months
df_tab_6_12i <- df_1 %>%
  mutate(usrec_6 = lead(usrec, 6, default = 0),
         usrec_12 = lead(usrec, 12, default = 0),
         usrec_6_12 = ifelse(usrec_6 > 0 | usrec_12 > 0, 1, 0),
         ten_two = ifelse(is.na(ten_two), ten_one + prem_2y, 0))
tab_6_12i <- table(Inversions = ifelse(df_tab_6_12i$time_spread > 0 & df_tab_6_12i$ten_two < 0, 1, 0),
                   Recessions = df_tab_6_12i$usrec_6_12)
tab_6_12i_spec <- round(tab_6_12i[2,2]/(tab_6_12i[2,2] + tab_6_12i[1,2]), 3)*100 # specificity

## Specificity for all tests
# Create data frame
specs <- data.frame(tab_old_spec, tab_spec, tab_new_spec,
                    tab_6_spec, tab_12_spec, tab_12i_spec)

# Graph
specs %>%
  gather(key, value) %>%
  mutate(key = case_when(key == "tab_new_spec" ~ "Combined",
                         key == "tab_old_spec" ~ "Time spread",
                         key == "tab_spec" ~ "Ten-twos",
                         key == "tab_6_spec" ~ "Combined \n6 month lead",
                         key == "tab_12i_spec" ~ "Interpolated \n12 month lead",
                         key == "tab_12_spec" ~ "Combined \n12 month lead")) %>%
  ggplot(aes(reorder(key, value), value)) +
  geom_bar(stat = 'identity', position = "dodge", fill = "blue") +
  labs(x = "", y = "Occurrence (%)",
       title = "Likelihood of recession based on yield curve inversion") +
  geom_text(aes(label = value), vjust = -0.25, size = 4)

# Data change
rec_inc <- round(as.numeric(colSums(tab_12i)[2])/as.numeric(colSums(tab_12)[2]) - 1, 2)*100
inv_inc <- round(tab_12i[2,2]/tab_12[2,2] - 1, 2)*100

## Add stocks
# Adjust dates
date_adj <- c(df_tab_6_12i$date - 1, as.Date("2019-08-31"))
df_eq <- df_tab_6_12i %>% mutate(date = date_adj[-1])

# Add S&P
sp <- getSymbols("^GSPC", from = "1953-04-01", auto.assign = FALSE)
sp_m <- to.monthly(Cl(sp), indexAt = "lastof", OHLC = FALSE)
df_eq <- df_eq %>% mutate(sp = as.numeric(sp_m))
df_eq <- df_eq %>% mutate(sp_1m = lead(sp, 1, default = 0)/sp - 1,
                          sp_3m = lead(sp, 3, default = 0)/sp - 1,
                          sp_6m = lead(sp, 6, default = 0)/sp - 1,
                          sp_1y = lead(sp, 12, default = 0)/sp - 1)

# Graph of average return after time_spread goes negative
df_eq %>%
  mutate(time_spread = ifelse(time_spread < 0, 1, 0)) %>%
  group_by(time_spread) %>%
  filter(date < "2019-01-01") %>%
  mutate(sp_1y = ifelse(sp_1y == -1, NA, sp_1y)) %>%
  select(time_spread, contains("sp_")) %>%
  summarise_all(mean, na.rm = TRUE) %>%
  gather(key, value, -time_spread) %>%
  mutate(key = factor(key, levels = c("sp_1m", "sp_3m", "sp_6m", "sp_1y"))) %>%
  ggplot(aes(key, value*100, fill = as.factor(time_spread),
             label = format(round(value, 3)*100, nsmall = 1))) +
  geom_bar(stat = 'identity', position = "dodge") +
  scale_fill_manual("", labels = c("No inversion", "Inversion"),
                    values = c("blue", "purple")) +
  scale_x_discrete(labels = c("1 month", "3 months", "6 months", "1 year")) +
  labs(x = "Time frame", y = "Mean return (%)",
       title = "S&P 500 returns after time spread inversion") +
  theme(legend.position = "top") +
  geom_text(aes(hjust = ifelse(value < 0, -2, 2),
                vjust = ifelse(value < 0, 1, -.25)))
```
[1] For the interpolation we add the historical average premium of two-year over one-year yields to the ten-year minus one-year yield spread. This isn’t the most accurate method. But it would be more involved (and beyond the scope of the post) to bootstrap (or use some other method) to interpolate the historical two-year yield.

[2] Or error rate or randomness distribution.
Stacks token
Stacks is the name of a token developed by Blockstack Token LLC in 2017 and activated in the third quarter of 2018. This page discusses a brief history of the Stacks token and deployment on the Blockstack network as well as the current role of the Stacks token.
If you are a developer interested in the specific technical changes related to the 2018 launch, see the announcement in the Blockstack forum.
A brief history of the Stacks token
In 2017 Blockstack did a token sale. Participants became token holders when they received allocations of Stacks tokens in the genesis block. A genesis block is the first block of a blockchain.
During the draft genesis block period, token holders set up a seed phrase (sometimes referred to as a recovery phrase or a recovery seed) using the Stacks Wallet software or their own hardware wallet.
It was each token holder’s responsibility to store their own seed phrase in a private and secure location. Holders could use their wallet to verify their holdings and allocations on the genesis block explorer. Beyond that, while in draft state, token holders were in a lock down period.
State of the Stacks blockchain V1
The initial block in the Stacks blockchain V1 allocates 1.32 billion tokens. The launch is the culmination of two years’ hard work across the greater Blockstack community. With the launch, Stacks tokens unlock for accredited token holders under a predetermined unlocking schedule. The events on the unlocking schedule are the same for each investor; the dates of those events depend on the holder's purchase date.
Note: If you are a token holder and would like to review your unlocking schedule, visit the For current token holders page in this documentation.
The genesis block launch makes possible the following interactions:
Token holders can purchase names and namespaces with the Stacks token. Previously, names and namespaces required the purchaser to hold Bitcoin. Initially, this process relies on the Blockstack command-line interface (CLI).
Application developers can earn Stacks by building an application on the Blockstack ecosystem.
Any Stacks tokens held at the time of launch or after remain usable under the Stacks Blockchain platform.
Finally, in addition to the development of the Stacks token, this launch enables further development of the Stacks blockchain itself.
The Investment Matrix Revelations
Since I normally simply analyze Fed actions rather than prescribe them (I assume Greenspan does not really care about my opinions) I was brought up a little short, and answered that I would like to see the Fed tell us whether they are going to work to bring down long rates, instead of merely hinting or suggesting or threatening. The answer would give us a real indication as to whether we will have a recovery or at least a continuation of the Muddle Through Economy or will slide into a recession. They told us absolutely nothing, which in my opinion is a very risky option. The bond and stock markets seem to agree.
Since that interview, I have given a great deal of thought to that question. The answer is far more complex, and has to do with how a number of factors, many of them beyond Fed control, interact. I started to write on this today, but realize I need to let this topic cook in my mind some more. The economy of the world and the US is at an "inflection point." Since next week is the beginning of the second half, we will also discuss whether there is a hint of the elusive "second half recovery."
That will wait for next week. Today we will deal with a far more important matter than my thoughts on the Fed: What kind of returns can we expect from the stock market over the next 10-20 years? This week's letter will require you to put on your thinking caps, but will help you to be a better investor if you grasp the import of what we are saying.
(As a side note, I will be in Paris, Geneva, Boston, Halifax, San Francisco and New Orleans within the next few months. Already I am tired. Details below.)
The essay below is part two (of four parts) of a series from my upcoming book-in-progress. Warning: this letter is a little longer than most, but this section needed to be kept together. This section is co-authored with Ed Easterling of Crestmont Holdings.
This research into stock market and economic cycles will give us insight into how secular bear markets actually work. It will also give us a clue on how to invest in stocks even in a bear market cycle. (Note: when the pronoun "I" is used, it denotes a personal comment by John Mauldin.)
The Investment Matrix: The Real Truth about Stock Market Returns
(This section will reference charts available at. Click on "Stock Markets" and the graph called "Long Term Returns." We will provide large fold-out versions of the graphs in the book. Readers of this e-letter can hopefully get the sense of what we are saying without looking at the graphs, but if you have the time, we would suggest reviewing them. If you are not going to be able to look at them, you might skip the first sections which explain what you are looking at and go on to the analysis following the subhead: The Investment Matrix Revelations. [Note from John - you will need Adobe Acrobat. I prefer to greatly increase the viewing size. You can also get Kinko's (or other similar firms) to print these on large color graphs.])
The past 103 years have provided over 5,000 investment period scenarios-that is, the combination of investment periods from any start year to every year since that time. This provides an extensive history across which to assess the potential and likely outcomes.
Like the movie, The Matrix, this Investment Matrix slows down the fast-paced motion of the markets, letting us see the ebb and flow of the economic tides over long periods of time.
There are several versions of the chart on the web site. We call your attention to two of them: one, called "Tax-Payer Real" is the S&P 500 index including dividends and transaction costs adjusted to reflect the net return after inflation and taxes (see details on taxes below). You will not see this one in a mutual fund sales presentation. The second is called "Tax-Exempt Nominal." It assumes your money is all in tax sheltered retirement accounts, there is no inflation (thus "nominal"), and you don't pay taxes when you take out your money. This is the "long run" numbers you are most likely to see in marketing brochures. (You can view other versions of the chart which show "Tax-Payer Nominal" and "Tax-Exempt Real" at)
Let's take a moment to explain the layout of the charts. There are three columns of numbers on the left hand side of the page and three rows of numbers on the top of the page. The column and row closest to the main chart reflect every year from 1900 through 2002. The column on the left side will serve as our start year and the row on the top represents the ending year. The row on the top has been abbreviated to the last two numbers of the year due to space constraints. Therefore, if you wanted to know the annual compounded return from 1950 to 1973, look for the row represented by the year '1950' on the left and look for the intersecting column designated by '73' (for 1973). The result on the version titled "S&P Index Only" is 6, reflecting an annual compounded return of 6% over that 23 year period. Looking out another 9 years the number drops to 2% for an after tax, inflation adjusted return over 32 years.
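To make the matrix arithmetic concrete, here is a small sketch (ours, not from the book) of how an annualized compound return such as that 6% figure is derived from start- and end-year index values. The index levels below are hypothetical, chosen only to reproduce a roughly 6% result over a 23-year span.

```python
# Compound annual growth rate from a start value, an end value, and the span
# in years: (end / start) ** (1 / years) - 1.
def annualized_return(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

start_level, end_level = 100.0, 381.0  # hypothetical index levels
print(f"{annualized_return(start_level, end_level, 23):.1%}")  # → 6.0%
```

Each cell in the matrix is just this calculation applied to the average index values (plus dividends, taxes, and inflation in the other chart versions) for its start and end years.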
There is a thin black diagonal line going from the top left to the lower right. This line shows you what the returns are 20 years after an initial investment. This will help you see what returns have been over the "long run" of 20 years.
Also note the color of the cell represents the level of the return. If the annual return is less than 0%, the cell is shaded red. When the return is between 0% and 3%, the shading is pink. Blue is used for the range 3% to 7%, light green when the returns are between 7% and 10%, and dark green indicates annual returns in excess of 10%. This enables us to look at the big picture. Whereas, long-term returns tend to be shaded blue, shorter-term periods use all of the colors.
As well, note that our original number 6 mentioned above was presented with a black-colored font, while some of the numbers are presented in white. If the P/E ratio for the ending year is higher than the P/E for the starting year-representing rising P/E ratios-the number is black. For lower P/E ratios, the color is white. In general, red and pink most often have white numbers and the greens and blues share a space with black numbers. The P/E ratio for each year is presented along the left side of the page and along the top of the chart.
Lastly, there's additional data included on the chart. On the left side of the page, note the middle column. As well, on the top of the page, note the middle row. Both series represent the index values for each year. This is used to calculate the compounded return from the start period to the end period. Along the bottom of the page, we've included the index value, dividend yield, inflation (Consumer Price Index), real GDP, nominal GDP, and the ten-year annual compounded average for both GDP measures. For the index value, keep in mind that the S&P 500 Index value for each year represents the average across all trading days of the year.
Along the right, there's an arbitrary list of developments for each of the past 103 years. In compiling the list of historical milestones, it's quite interesting to reflect upon the past century and recall that the gurus of the 1990's actually believed that we were in a "New Economy" era. Looking at the historical events, it could be argued that almost every period had a reason to be called a "New Economy." But that's an argument for another chapter.
The Investment Matrix Revelations

Ten years, twenty years, and even longer aren't long enough to ensure positive or acceptable returns.
Note also that we've recently completed the longest run of green (very high market return) years in the record. On the Tax-Payer Real version, which is what you experience in your actual accounts, you will notice that the returns tend to be in the 3-5% range after long periods of time. Often real returns are 2% or less over multiple decades. Again, the charts clearly show the most important thing you can do to positively affect your long term returns is to begin investing in times of low P/E ratios.
The Matrix assumes an estimate of each year's taxes at the then current rates over this period (details below). We are aware that the income tax did not exist in 1901. This was a tricky number to assume, as taxes on stocks are comprised of both long term and short term gains, and are taxed at different rates for different times. Some of you pay additional state taxes. While we estimated taxes for each individual year, the average over time was about 20%.
Why not just assume all long-term gains? If you buy your stocks through mutual funds, as most individuals do, then you are probably seeing a lot of turnover in your portfolio. Remember Peter Lynch of Magellan fame? His reported average holding period was about 7 months during the 70's. Some of you will pay higher taxes, and some of you will pay lower, depending upon your investment styles. The recent average we assume is around 20%. You can adjust your expectations accordingly.
Now, what can we learn from these tables?
First, there are very clear periods when returns are better than others. These relate to secular bull and bear markets. No big insight there. But what you should notice is the correlation with P/E ratios. In general, when P/E ratios begin to rise, you want to be in the stock market. When they are falling, total returns over the next decade will be below par. (More on that phenomenon below.)
With the exception of WWII, when these periods of falling P/E ratios start, they just keep going until the P/E ratios top out. Generally, this topping period comes prior to a recession.
Can you use the P/E ratios to signal a precise turn from a secular bull to a secular bear? No, but you can use them to assist you in confirming other signals. And once that turn has begun, the historical evidence is that the trend continues. Investors are advised to change their stock investing habits. As noted above, there will be bear market rallies which will momentarily halt the decline of the P/E ratios. These always end as reversion to the mean (trend) is simply too strong a force.
Long Term Money In Stocks Again?
When can you profitably begin to be a long term investor, even in a secular bear market? Look at the tables. You have excellent chances of getting above average returns from the stock market if you buy when P/E ratios are 10-12 or below. You might have to suffer in the short term, but long term you will probably be OK. (I will deal with stock market investment strategies at length in a later chapter.)
For index investors, a good strategy would be to start averaging in when the market values begin approaching a P/E of 10-12. Even in the worst of the Depression, you would have done well over the next 20 years using this strategy. Investors who want to own individual stocks should focus on stocks with deep value and rising dividends, although the evidence indicates you will have periods where you will still need patience.
Death and Taxes
If you look at just the nominal returns without thinking about taxes, some would make the case that trying to time the market is pointless. Over enough time, the returns tend to be the same. And we agree, if you have 50 years, time can heal a lot of mistakes. Historically, investing through full cycles would give you a 10% compound return after many decades, and 6-7% in inflation adjusted terms.
However, if you take into account inflation, transaction costs and taxes, real in-your-account returns tend to be in the 3% annual average range. And if you begin to invest at the beginning of a secular bear, real returns over the next 20 years are likely to be negative! You lose buying power.
Let's look at some other points. First, these tables include dividends. The 6-7% returns show up over time primarily because much of the last century had very high dividends of 4% or more. Given that dividends are under 2% or non-existent for many NASDAQ stocks, the 3% long-term return number becomes far more realistic. Periods of high dividends greatly increase the return potential over using simple S&P 500 Index returns.
Second, inflation did a great deal to mask the seriousness of the 1970 bear markets. It took 16 years for the index to make new highs after 1966, but it was another 10 years, or 1992, before investors saw a rise in their actual buying power in terms of the S&P 500 index.
On the table, you see that the compound return over the 26 years from 1966 to 1992 was 8% without adjusting for inflation and 2% taking inflation into account. If you take into account taxes and other costs, the return to the investor was zero. For a period of 26 years, investors in index funds did not see a real increase in their buying power.
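The gap between the 8% nominal and 2% real figures is just the standard inflation adjustment. Here is a sketch of the arithmetic; note that the ~5.9% inflation rate is back-solved from the two quoted numbers for illustration, not taken from the chart itself.

```python
# Real (inflation-adjusted) return from a nominal return and an inflation
# rate: (1 + nominal) / (1 + inflation) - 1.
def real_return(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

print(f"{real_return(0.08, 0.059):.1%}")  # → 2.0%
```

Simply subtracting inflation from the nominal return gives a close approximation at these levels, but the ratio form is the exact relationship.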
The bulk of earnings on this table over that period came from dividends. The compounding effect of dividends upon returns was huge.
The 8% returns an investor apparently got from 1966 through 1992 depended largely upon inflation. The 2% real returns are almost entirely due to dividends. During much of that period, dividends were in the 4-5% range.
Today, dividends on the S&P are less than 2%, instead of the 4-5% of the 70's. Further, we are not in a period of high inflation, although that may change by the end of the decade. The clear implication is that we are facing a period where stock market returns are going to be difficult.
If you look at the tax-deferred account real returns table for the 1966 through 1992 period, the after-inflation return numbers for the majority of that period are negative for a long time, until you begin to get to periods of low P/E ratios. You would be fortunate to get 2% a year from the average stock or index fund from where we are today. The table suggests it might even be unsafe to assume 2%!
How can we even think that stocks might not compound at 2% a year over the next ten years? It is because there has only been one time when investors have made more than a 2% real return (after taxes and other costs) over the following ten years when P/E ratios started above 21, which is easily where they are today (no matter who is figuring them). That one lone example is from the mid-90's through today. If we are right and returns become flat, then even that one period will turn out to be closer to 2%. Of course, all of my readers are above average, but you might want to warn your brother-in-law. Just food for thought.
"What type of returns should you expect from the stock market for the next 5, 10, or 20 years?"
(This next section is authored by John Mauldin, as it is a tad on the acerbic side and Ed is a rather gentle soul.) Ask that question of most brokers and fund marketers and you will be shown the famous Ibbotson long-term returns study, which is used by them to urge investors to buy some more stocks or mutual fund shares today and hold them as well.
This is the sweet buy and buy sales pitch. The Ibbotson study (and numerous similar studies) is one of the most misused pieces of market propaganda ever foisted on innocent investors. If I thought for one minute you really could get 7% compound annualized returns over the next 20 years by simply buying and holding, I would agree that it would be a smart thing to do.
I cannot tell you how many soon-to-be-retired couples I have talked to, after their retirement savings have been hit 30-40-50% and their comfortable retirement dreams are shattered, who tell me their brokers or advisors told them if they just hold on the market would come back. Soon, they are promised. These were the investment professionals they trusted and they assumed had done their homework.
Now they know these guys flunked Stock Market Returns 101, or possibly skipped class in order to attend lectures by Jack Grubman on "How To Buy Telecommunications Stocks." Today I give you the class notes they should have shown you.
The Most Dangerous Threat to Your Retirement
Typical is the email I got from a reader. Quote:
"My wife and I just heard another presentation by an investment firm recommending that retired people, needing income from their sheltered funds, place enough assets in fixed instruments for 5 years living expenses and the rest in stock funds. The hope is that within 5 years, there will be an upturn such that stock funds can be sold at a gain, from which to draw income. Ibbotson data was, of course, used to show how unlikely it was for there to be many consecutive years of down markets. The firm had a CPA and several financial advisors who had been working in the field for 20 - 30 years.
"It drove me nuts also, especially at this meeting where heads were nodding around the room as these advisors (looking for people to give them their money to manage) explained how scientific their approach was. The CPA member of the firm said (in comparison to 1966 -1982) that the market could be that bad or even worse, but that this was very unlikely, and went on to recommend the strategy described above."
Let's review this for a moment. I will leave aside the question of making a one size fits all recommendation for retirees, as I assume such stupidity is self-evident. That alone should be enough to make you run, not walk, to the exits.
In 1976, a young Roger Ibbotson co-authored a research paper predicting that during the following two decades the stock market would produce a return of about 10% a year, and that the Dow Jones Industrial Average would hit 10,000 in 1999. Ibbotson, now a professor at Yale, currently forecasts a compounded return on stocks during the next two decades of 9.4% - about 1 percentage point a year lower than his earlier projections.
"I'm neither an optimist nor a pessimist," Ibbotson said recently in an interview. "I'm a scientist, and I am not telling people to buy or sell stocks now. I'm saying that over the long run stocks will outperform bonds by about four percentage points a year." (from AdvisorSites, Inc.)
It turned out Ibbotson was right about 1999, and with the imprimatur of a Yale professor, investment managers everywhere use this "scientific" study to show investors why they should put money in the stock market and leave it. (I am not sure how economists get to be scientists, or how investment predictions can be scientific, but that is a debate for another time.)
If the S&P were to grow at 9.4% for the next two decades, it would be in the range of 4,500 and the Dow would be at 42,000 or so in 2023 (give or take a few thousand). Of course, that's starting at today's market values. If we start out with the market tops in 2000, we get around 8,200 and 63,000 respectively in 2020. Thus, investors shouldn't worry about the short term. Ibbotson assures us, as a scientist, that things will get better buy and buy.
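For readers who want to check this sort of arithmetic themselves, compounding is a one-line calculation. This is just an illustrative sketch; the exact index targets you get depend entirely on which starting level and starting date you plug in, which is why the text's figures shift depending on whether you start from today's values or the 2000 tops.

```python
def compound(start, annual_return, years):
    """Value of `start` after compounding at `annual_return` for `years` years."""
    return start * (1 + annual_return) ** years

# At 9.4% a year, money multiplies roughly six-fold over two decades.
print(f"20-year multiple at 9.4%: {compound(1.0, 0.094, 20):.1f}x")  # ~6.0x

# Over the 26 years since 1976, the same rate is better than a ten-fold
# multiple - which is why small differences in the assumed rate matter so much.
print(f"26-year multiple at 9.4%: {compound(1.0, 0.094, 26):.1f}x")  # ~10.3x
```

Multiply either multiple by whatever starting index level you prefer and you can reproduce (or dispute) any of the headline projections above.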
The Investment Matrix clearly demonstrates why you should leave the room whenever an investment advisor brings out this study to sell you on an investment strategy. If your advisor actually believes this nonsense, then this will help you understand why you should fire him. (That should get me a few letters.)
There may be reasons to think the markets might go up, but the Ibbotson study is not one of them, in my opinion. Further, over the next 70 years, the market may in fact rise 9.4% a year. But to suggest to retirees it will do so over the next few years based upon "scientific analysis" is irresponsible and misleading.
Let's start our analysis in 1976, the year Ibbotson did the study. (I could make a much better case starting with another year, but 1976 works just fine.) From 1976 through 2002, the S&P 500 returned 12% a year (including dividends), even better than Ibbotson predicted, and after a rather significant drop over the last few years. However, 5% of that annual return is due simply to inflation. In real, inflation adjusted terms the S&P was up 7% a year.
The Price to Earnings (P/E) ratio was a rather low 12 in 1976. It ended up around 22 last year, using pro forma numbers. Thus almost half the return from the last 26 years has been because investors value a dollar of earnings almost twice as much in 2003 as they did in 1976.
At a similar P/E ratio to 1976, the S&P would be less than 500 today (around 466 or so as I glance at the screen, again using pro forma earnings numbers.) Thus, without increased investor optimism, the compound growth would be around 6.3%-7% over the last 26 years, or only a few points over inflation during that time. The point is not the exact number but that a significant part of the growth in the stock market is due to increased P/E valuations.
In fact, if you back out dividends, the growth is almost entirely due to inflation and increased P/E valuations. The stock market has been a good investment since 1976 primarily because of these two factors. The question that investors must ask today is, "To what extent will these two factors, plus dividends, contribute to the return from the stock market over the next 5-10-20 years?"
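The P/E contribution can be separated out with simple arithmetic. The sketch below uses the round numbers from the text (a 12% nominal total return and a P/E moving from 12 to 22 over 26 years); actual index levels and earnings figures vary by data source.

```python
years = 26                # 1976 through 2002
pe_start, pe_end = 12, 22

# Annualized return attributable purely to the P/E ratio rising from 12 to 22.
pe_expansion = (pe_end / pe_start) ** (1 / years) - 1
print(f"P/E expansion contributed roughly {pe_expansion:.1%} per year")

# Stripping that out of the 12% nominal total return leaves the portion
# earned from earnings growth, inflation, and dividends combined.
nominal_total = 0.12
ex_expansion = (1 + nominal_total) / (1 + pe_expansion) - 1
print(f"Return excluding P/E expansion: {ex_expansion:.1%} per year")
```

Back dividends of roughly 3%-4% out of that ex-expansion figure and you land in the same neighborhood as the 6.3%-7% price-only growth estimated above: most of the period's return came from inflation, rising valuations, and dividends rather than real earnings growth.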
How to Lose 20% in Five Years - Guaranteed
Before I attempt to answer that, let's look at the advice the investment managers were suggesting to retirees at the seminar my reader attended. Assume that you can make 5% (today) on your investment portfolio. You can take that 5% and live on it in retirement (plus social security and any pensions) and not touch your original principal. It doesn't make any difference in this example what the amount is. I simply assume you live on a budget of what you actually get.
If that 5% is what you need for the next five years, then according to the analysis given at the seminar, you will need to put about 22% or so of your savings in bonds, which will be consumed over the next five years (remember the 22% will grow because of interest). The other 78% or so will be put in stocks. Since the Ibbotson studies show stocks grow around an average of 9.4% per year, your total portfolio will have grown to 122% of where it is today. For this advice, we want you to pay us 2% a year.
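Running the seminar's own 22%/78% arithmetic under different market assumptions makes the section title concrete. This is a hypothetical sketch: the 9.4% figure is the Ibbotson average cited above, and the flat-market case is our own what-if, not anything the seminar presented.

```python
def retiree_outcome(stock_return, years=5, bond_share=0.22, fee=0.0):
    """Portfolio value after `years`, as a fraction of the starting value.

    The bond sleeve (22%) is assumed fully consumed as living expenses;
    the stock sleeve (78%) compounds at `stock_return`, net of any fee.
    """
    return (1 - bond_share) * ((1 + stock_return - fee) ** years)

# The seminar's pitch: stocks compound at the Ibbotson 9.4% average.
print(f"{retiree_outcome(0.094):.0%}")           # ~122% of starting value

# The same plan in a flat market: the bond sleeve is spent, stocks go nowhere.
print(f"{retiree_outcome(0.0):.0%}")             # ~78% - a loss of about 20%

# A flat market plus the 2% annual fee makes it worse still.
print(f"{retiree_outcome(0.0, fee=0.02):.0%}")
```

If stocks merely tread water for five years, the retiree has consumed the bond sleeve and is left with roughly 78 cents on the dollar before fees - the "guaranteed" 20% loss of the heading.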
(And now back to the collaboration with Ed.)
Slip-Sliding Away
"The more you near your destination, the more you slip-slide away." - Paul Simon
The long run profits we read about in the brochures don't seem to match what we see in our accounts. The closer we get to retirement and the need for those funds, the more those profits seem to slip away.
Before we start looking at cycles, let's explore the impact of dividends, transaction costs, slippage, taxes and other factors on total return.
Ever notice how quickly we're reminded, while looking at the change in the index or basket of stocks, not to forget the added return from the dividends? As we seek to translate the language of benchmark returns into changes in our account balances, let's also not forget a few other components. While annual dividends have averaged 4.4% over the course of the past century, transaction costs and taxes have imposed their share of impact on the portfolio as well.
For individual investors, taxes can affect the realized return. To provide a reasonable assessment of the impact of taxes, we considered several factors and included a number of simplifying assumptions. The objective was to estimate the effect on a typical taxpayer. In general, the average tax rate was approximately 20% across the entire period starting in 1913, when the income tax was introduced.
For each year, we assumed that 80% of gains were long-term capital gains and 20% were short-term capital gains. Only 90% of gains each year were realized and the long-term capital gain portion was lagged by one-year to simulate the effect of longer holding periods. For a measure of conservatism, 10% of gains are never taxed. Most of the capital losses are used to offset gains in future years. Dividends were assumed to be taxed at the short-term rate.
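Those assumptions can be collapsed into a single blended rate on each year's gains. The sketch below uses hypothetical long-term and short-term rates chosen for illustration only - historical rates varied widely over the period since 1913 - and it ignores the one-year lag on long-term gains, which is a timing effect rather than a rate effect.

```python
def effective_tax_on_gains(lt_rate, st_rate,
                           lt_share=0.80, st_share=0.20, realized=0.90):
    """Blended tax rate on a year's equity gains under the text's assumptions:
    80% of gains long-term, 20% short-term, 90% realized (10% never taxed)."""
    return realized * (lt_share * lt_rate + st_share * st_rate)

# Hypothetical rates for illustration: 20% long-term, 35% short-term.
print(f"{effective_tax_on_gains(lt_rate=0.20, st_rate=0.35):.1%}")  # ~20.7%
```

Plugging in plausible rates lands near the roughly 20% average tax rate used across the full period, which is why the simplifying assumptions do not change the broad conclusion.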
Transaction costs include: (a) commissions, (b) asset management fees, (c) bid/ask spreads, (d) execution slippage, and (e) numerous other hidden costs. Commissions are well recognized by most investors as a cost of buying and selling stocks or mutual funds. The commission cost can be-and certainly was historically-greater for individual investors than for larger institutional investors (i.e. pension plans, mutual funds, etc.) Even with today's low rates, for active and/or small traders, they can be very significant.
Asset management fees are charges levied by an advisor, the investment fund, trustees, the pension fund managers and/or other constituents in the investment process. These can run anywhere from 0.5% to 3%. Mutual fund fees of 2% or more are quite common. (As asset managers, we are not against fees, as that is how we make our living. But we do think investors should get a "bang" for their commission "buck.")
The third cost, bid/ask spreads, represents the difference between the price that one pays for a stock and the price at which the stock could be sold at the same time. Index returns are based upon the last price traded for each stock, some on "the bid" (the price at which one can sell) and some on "the ask" (the price at which one can buy). We can refer to the blend of bid and ask prices as the "mid" price-averaging near the middle. However, investors bear the cost within their account or mutual fund of slightly higher prices for purchases and slightly lower prices for sales.
The cost of the spread is often far more than the commissions. There are studies beginning to surface which show the cost of spreads actually increasing after the conversion to decimalization of stock prices last year, which was NOT what was expected.
The fourth element listed above, slippage, affects larger buyers of stock more so than individual investors. While large accounts may pay less in commissions, some of the advantages of larger scale asset management require the often under-recognized costs of scale. Slippage is the impact of buying or selling hundreds of thousands of shares-the average cost of completing a large purchase in comparison to the market price for a few shares of a stock. When large buyers of a stock, a mutual fund for example, decide to buy or sell a position, the size of the order can push the market price in one direction or another. Slippage is the difference in the average price when buying or selling 100 shares compared to buying or selling 100,000 shares. If you are a large manager trying to beat an index, or a hedge fund getting a piece of the profits, we can guarantee you that slippage is the cause of a great deal of frustration, if not acrimony, on the trading floor.
Finally, you have lots of hidden costs. Account opening fees and loads can add up. Funds of all types have auditing and accounting fees, which are passed directly to the fund and thus to investors. Mutual funds have "independent boards" whose members must be paid. Most off-shore hedge funds are required to have one, if not two, independent directors, who get small fees. What about custodial or administrative fees from your fund? Is there a consultant in the mix? Does your fund pay higher commissions (so-called soft dollar arrangements) to get access to research or free rent and technology? (This happens a LOT more than you think. It is a way to pass operating expenses to the fund without showing the actual expense. Investors would object to a line item that says "rent" but never see the extra penny on the commission or the spread.) Attorney fees are often fund related costs.
If you are a typical individual investor, you have your own accounting costs, investment newsletters, books, planners, consultants and a host of investment related expenses. That is not to say these expenses are unnecessary for doing your job as manager of your portfolio, but they do cost money. While they are not always deducted directly from your investment accounts, they are an expense nevertheless.
Our analysis in the "Tax-Payer Real" chart assumed that the total cost of commissions, asset management fees, bid/ask spreads, and execution slippage equaled 2% per year. Although there are a few (somewhat limited) examples of those investors that can demonstrate a lower overall transaction cost on their stock investments, most professional investors have indicated that we are being too conservative-the effect of which would overstate the returns in the matrix. We believe that a rate of 2% is reasonable, with a bias toward being conservative. With current dividend yields averaging considerably less than 2%, the net effect of transaction costs may well exceed the benefit of dividends.
Paris, Geneva and Points Beyond
I have to go to Geneva for business, so I will leave a few days early to go to Paris to visit with my friend Bill Bonner in his countryside chateau, otherwise known as a money pit. His new book, The Day of Financial Reckoning, will soon be out and I look forward to a few good vintages and conversation. I will be available both in Paris and Geneva for a limited number of meetings. My host, Constantin Felder of Safra Banque in Geneva, may be arranging a more formal gathering as well. I will be available in Paris on Monday, July 21 and will be in Geneva for the next two days. I introduced Constantin to Texas barbecue when he was here last month and he promises to reciprocate with a taste of the local cuisine.
Then I travel to Boston for a day to meet with a hedge fund (Monday, July 28) and on to Halifax for a two week working vacation. I have promised my bride some relief from the Texas heat, so we will see if I can actually work outside the office.
I will also be in San Francisco August 13-17 at the 2003 Agora Wealth Symposium. This should be a very interesting conference for active investors. You can learn more by going to. Again, I will set aside time to meet with investors. You can email me (if you have not already done so) if you are interested in meeting. I will be speaking at the New Orleans investment conference October 18-21. More details later.
Many thanks to Art Cashin of CNBC fame for allowing me to follow him around on the New York Stock Exchange floor. As well as head trader for UBS PaineWebber, he is also a NYSE governor. This means he is part sheriff, part justice of the peace on the NYSE. I was amazed at the real authority these elected members have to police the place. To watch him gave me a great deal more confidence in the fairness of the exchanges.
And yes, (assuming it is still culturally permissible to compliment a lady) Sue Herera is as pretty and gracious in person as she appears to be on TV.
Tonight is a guy's night out, so the boys (9 and 14) and I will be looking for meat and fun. Time to run, and remember the words of John Ruskin, "The highest reward for a man's toil is not what he gets for it, but what he becomes."
Your hoping to get the book finished in three weeks analyst,