Customizing the Django Admin site
Django’s official doc here
Let’s start by saying that Django’s admin site is spectacular. It can literally save you months of work. Having said that, one thing you need to understand is that Django’s admin site is only intended as an internal tool. It is not, and never has been intended to be, an end-user tool. Or in the words of Django’s docs:
The admin’s recommended use is limited to an organization’s internal management tool. It’s not intended for building your entire front end around.
The reason is that the admin site works by doing introspection on your Model classes, so it will expose all your data and DB schema. And while it provides ways to customize it to your heart’s content, if you need to customize it heavily you will probably be better off replacing it.
What I am going to explain next are ways to tailor it a little to make it more powerful but not with the intention of changing the way it works.
The AdminSite object
If you’ve been using the admin app, you have probably already created an admin.py file in your Django app and added to it a bunch of calls like these (if not, you’ll need to check the admin tutorial):
# admin.py
from django.contrib import admin
admin.site.register(Book)
admin.site.register(Author)
The admin.py file is automatically looked for and imported by the Django admin application when it launches, so this is the place where you hook your models into the admin app.
Those admin.site.register() calls do exactly that.
That admin.site object is the main object representing your admin site, and it provides customization options (documentation).
These are the ones that I always change:
admin.site.site_header, this is the “Django Administration” <h1> text that you find when you enter the site.
admin.site.site_title, same thing, but as <title>
admin.site.index_title, this is the heading that appears with a default text of “Site administration” (set it to blank if you don’t want to use it)
If you want to change those views you can set your own templates by modifying admin.site.index_template and admin.site.app_index_template.
If you just want to change the CSS, what you can do is specify your own template here and have that template extend the default one. Then, in your new template, you would add your own CSS file.
(Same thing with the other template options)
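Put together, those tweaks are just a few assignments in admin.py. A minimal sketch — the strings here are placeholders I made up, not anything Django requires:

```python
# admin.py — example values only; pick whatever wording fits your project
from django.contrib import admin

admin.site.site_header = "Bookstore back office"  # replaces "Django Administration"
admin.site.site_title = "Bookstore admin"         # used for the <title> tag
admin.site.index_title = ""                       # blank hides the index heading
```

This fragment assumes it runs inside a configured Django project, so it is shown as a snippet rather than a standalone script.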
Customizing the list views
The thing I care most about customizing is the list view (or change list view, as the documentation calls it). This is the view that is used when you select a Model and get a list of all that Model’s instances in your DB.
And to do that, what you need is to create a ModelAdmin subclass. Like this:
# admin.py
from django.contrib import admin

from . import models

@admin.register(models.Author)
class AuthorAdmin(admin.ModelAdmin):
    date_hierarchy = 'created'
    search_fields = ['first_name', 'last_name', 'email']
    list_display = ('first_name', 'last_name', 'email')
    list_filter = ('status',)
Let’s dissect that example.
the @admin.register() decorator does the same thing as
admin.site.register(models.Author, AuthorAdmin)
It’s just shorter. What we are doing is registering the Author model in the admin app, but also telling the app to use AuthorAdmin as its ModelAdmin.
The ModelAdmin class is how you customize the model’s appearance in the admin.
The date_hierarchy (here), when set, will make the app show a “timeline browser”, so you can use it to filter by date (i.e., all model instances from July of 2015). The value needs to be the name of a DateField or DateTimeField in your model.
The search_fields (here) is a list of fields that will be used to search and filter your result set. Here you can use any QuerySet join expression as a value, for instance “coauthor__name”.
The list_display (here) is a tuple of field names. If set, instead of the view being a single list, that list is turned into a table, and list_display defines the fields to be used as the columns of that table.
Now, I said that it is a tuple of field names, but it can contain Model fields as well as ModelAdmin methods. So you can extend what is shown here by adding a method to your ModelAdmin that returns a computed value (this is useful because you can’t use QuerySet join operations here, like the “coauthor__first_name” kind of thing. Check the doc for more info).
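To make that concrete, here is a tiny plain-Python sketch of the lookup rule — it mimics how a list_display entry resolves to either a model field or a ModelAdmin method. This is an illustration, not Django’s actual code, and the Author/AuthorAdmin names are just stand-ins:

```python
# Plain-Python sketch of list_display resolution -- not Django code.
class Author:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

class AuthorAdmin:
    list_display = ('first_name', 'last_name', 'full_name')

    def full_name(self, obj):
        # A computed column: combines two model fields in Python,
        # which a plain "field__lookup" string can't do in list_display.
        return f"{obj.first_name} {obj.last_name}"

def resolve_column(admin_obj, obj, name):
    # Try a ModelAdmin method first, then fall back to a model attribute.
    attr = getattr(admin_obj, name, None)
    if callable(attr):
        return attr(obj)
    return getattr(obj, name)

author = Author("Jane", "Doe")
admin_instance = AuthorAdmin()
row = [resolve_column(admin_instance, author, col)
       for col in AuthorAdmin.list_display]
print(row)  # ['Jane', 'Doe', 'Jane Doe']
```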
The list_filter (here) is another tuple. If present, then at the right of the table a filter box will be added with options to filter on the given fields. This is particularly useful for choice fields (it doesn’t make sense for free-text fields).
Rounding up. You can customize the Django admin and turn it into a very powerful DB console. It will still be something of an internal tool, but it will be a very powerful internal tool.
The options mentioned above are my favorites, but there are plenty more, and you can also customize the look and feel to your heart’s content by replacing the default templates and adding custom CSS (I don’t usually do it because it’s an internal tool, after all).
You can find all that in the official doc. Here.
PS: this talk seems interesting: here
10.5 Checking Hypertext (HTTP) Links
If you look back at the guestbook example in Chapter 7, Advanced Form Applications, you will notice that one of the fields asked for the user's HTTP server. At that time, we did not discuss any methods to check if the address given by the user is valid. However, with our new knowledge of sockets and network communication, we can, indeed, determine the validity of the address. After all, web servers have to use the same Internet protocols as everyone else; they possess no magic. If we open a TCP/IP socket connection to a web server, we can pass it commands it recognizes, just as we passed a command to the finger daemon (server). Before we go any further, here is a small snippet of code from the guestbook that outputs the user-specified URL:
if ($FORM{'www'}) {
    print GUESTBOOK <<End_of_Web_Address;
<P>
$FORM{'name'} can also be reached at:
<A HREF="$FORM{'www'}">$FORM{'www'}</A>
End_of_Web_Address
}
Here is a subroutine that utilizes the socket library to check for valid URL addresses. It takes one argument, the URL to check.
sub check_url
{
    local ($url) = @_;
    local ($current_host, $host, $service, $file, $first_line);

    if (($host, $service, $file) =
        ($url =~ m|http://([^/:]+):{0,1}(\d*)(\S*)$|)) {
This regular expression parses the specified URL and retrieves the hostname, the port number (if included), and the file.
Let's continue with the program:
        chop ($current_host = `/bin/hostname`);
        $host = $current_host if ($host eq "localhost");
        $service = "http" unless ($service);
        $file = "/" unless ($file);
If the hostname is given as "localhost", the current hostname is used. In addition, the service name and the file are set to "http", and "/", respectively, if no information was specified for these fields.
        &open_connection (HTTP, $host, $service) || return (0);
        print HTTP "HEAD $file HTTP/1.0", "\n\n";
A socket is created, and a connection is attempted to the remote host. If it fails, an error status of zero is returned. If it succeeds, the HEAD command is issued to the HTTP server. If the specified document exists, the server returns something like this:
HTTP/1.0 200 OK
Date: Fri Nov 3 06:09:17 1995 GMT
Server: NCSA/1.4.2
MIME-version: 1.0
Content-type: text/html
Last-modified: Sat Feb 4 17:56:33 1995 GMT
Content-length: 486
All we are concerned about is the first line, which contains a status code. If the status code is 200, a success status of one is returned. If the document is protected, or does not exist, error codes of 401 and 404, respectively, are returned (see Chapter 3, Output from the Common Gateway Interface). Here is the code to check the status:
        chop ($first_line = <HTTP>);
        close (HTTP);
        if ($first_line =~ /200/) {
            return (1);
        } else {
            return (0);
        }
    } else {
        return (0);
    }
}
This is how you would use this subroutine in the guestbook:
if ($FORM{'www'}) {
    &check_url ($FORM{'www'}) ||
        &return_error (500, "Guestbook File Error",
                       "The specified URL does not exist. Please enter a valid URL.");
    print GUESTBOOK <<End_of_Web_Address;
<P>
$FORM{'name'} can also be reached at:
<A HREF="$FORM{'www'}">$FORM{'www'}</A>
End_of_Web_Address
}
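As an aside, the same HEAD-based check can be sketched in modern Python using only the standard library. This is a loose re-implementation for comparison, not the book's code; like the original, it treats anything other than a 200 status as invalid:

```python
# Loose Python re-implementation of the Perl check_url above.
# Only plain http:// URLs are handled, mirroring the original regex;
# connection errors count as "invalid", like the original's zero status.
from urllib.parse import urlparse
import http.client

def check_url(url, timeout=5):
    parts = urlparse(url)
    if parts.scheme != "http" or not parts.hostname:
        return 0
    try:
        conn = http.client.HTTPConnection(parts.hostname, parts.port or 80,
                                          timeout=timeout)
        conn.request("HEAD", parts.path or "/")
        status = conn.getresponse().status
        conn.close()
    except OSError:
        return 0
    return 1 if status == 200 else 0
```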
Now, let's look at an example that creates a gateway to the Archie server using pre-existing client software.
Back to: CGI Programming on the World Wide Web
© 2001, O'Reilly & Associates, Inc. | http://oreilly.com/openbook/cgi/ch10_05.html#CGI-CHP-10-SECT-5 | crawl-003 | refinedweb | 551 | 58.62 |
This reply is a little bit late (just got back from holiday) but I think
it's an important topic and I was wondering if anybody has given this more
thought in the meantime.
With some minor adaptions I think your general description of the usage of
XML namespaces sounds very good.
> -----Original Message-----
> From: Nicola Ken Barozzi [mailto:nicolaken@apache.org]
> Sent: Freitag, 6. September 2002 23:54
> To: Ant Developers List
> Subject: Re: xml namespace support in ant2
>
> So instead we could say that
> 1) all ant: namespace elements are core ant.
>
Well, what the namespace prefix is in the buildfile is really up to the
user. It's just the URI that matters. So all of the following should work:
<project xmlns="urn:ant-namespace">
<target name="test"/>
</project>
<xyz:project xmlns:xyz="urn:ant-namespace">
<xyz:target name="test"/>
</xyz:project>
But specifying it as the default namespace as in the first example or using
"ant" as the prefix will probably be the most common usages.
When you say "core ant" I assume you're talking about <project>, <target>,
all core and optional tasks and types in the distribution.
> 2) different namespaced elements have to be defined before use as
> taskdefs, typedefs or antlibs, and ant treats them accordingly
>
I think saying it the other way around would be more correct. I.e. Custom
tasks and types have to be declared to use a different XML namespace.
> 3) all namespaced tags that are not declared as in (2) must
> be declared
> beforehand as <namespacedef namespace="nms" handler="..."/>,
> so that the
> Ant ProjectHelper can use that particular Handler to process the
> extension. A convenient nullHandler is created when the
> handler is not
> specified.
>
What would this be used for? Isn't it enough with task and type
extensions? Other extensions would certainly raise confusion...
The primary benefits I see of using XML namespaces in the described way are:
- Mechanism to avoid name clashes of different tasks and types
- Schemas (grammars) could be written for Ant buildfiles covering the Ant
core elements and external tasks and types.
The schemas in turn would allow the validation of tasks and types to take
place externally and at an earlier stage. By externally I mean that the
tasks and types wouldn't need code to do the validation themselves.
At a later stage the schemas could maybe even be the basis for the
documentation. Also, the different usages of a task could maybe be bound to
the different processing modes somehow.
Playing around with RELAX NG and Sun's MSV validator, I found that it was quite
easy to specify grammars for the different Ant tasks and types. Even
writing grammars that support other namespaces isn't very hard. That way a
grammar which validates buildfiles using other namespaces for external tasks
and types can be constructed easily. It should also be quite easy to allow
external tasks to specify their own grammar for validation.
Any thoughts?
Cheers,
--
knut | http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200209.mbox/%3C36E996B162DAD111B6AF0000F879AD1A76BF9F@nts_par1.paranor.ch%3E | CC-MAIN-2019-51 | refinedweb | 487 | 61.67 |
How do I output colored text to the terminal in Python?
There is also the Python termcolor module. Usage is pretty simple:
from termcolor import colored

print colored('hello', 'red'), colored('world', 'green')
Or in Python 3:
print(colored('hello', 'red'), colored('world', 'green'))
It may not be sophisticated enough, however, for game programming and the “colored blocks” that you want to do…
To get the ANSI codes working on Windows, first run
os.system('color')
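For reference, these libraries ultimately just emit ANSI escape sequences, which you can also write yourself — a minimal sketch with two hard-coded colors:

```python
# Raw ANSI escape sequences -- what termcolor/colorama wrap for you.
# \033[ starts a control sequence; 31/32 select red/green, 0 resets.
RED = "\033[31m"
GREEN = "\033[32m"
RESET = "\033[0m"

def colored(text, code):
    return f"{code}{text}{RESET}"

print(colored("hello", RED), colored("world", GREEN))
```

On terminals without ANSI support (e.g. plain cmd.exe without the color trick above), the escape bytes print as-is — which is exactly the problem colorama solves.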
- Since it’s emitting ANSI codes, does it work on Windows (DOS consoles) if ansi.sys is loaded? support.microsoft.com/kb/101875 (Jul 29, 2011)
- Just noticed that as of 13/01/2011, it’s now under MIT license (Oct 28, 2011)
- doesn’t have unittests (unlike colorama) and not updated since 2011 (Jul 20, 2013)
The answer is Colorama for all cross-platform coloring in Python.
It supports Python 3.5+ as well as Python 2.7.
And as of January 2021 it is maintained.
Example Code:
from colorama import Fore
from colorama import Style

print(f"This is {Fore.GREEN}color{Style.RESET_ALL}!")
- As the author of Colorama, thanks for the mention @nbv4. On other platforms, Colorama does nothing. Hence you can use ANSI codes, or modules like Termcolor, and with Colorama, they ‘just work’ on all platforms. That’s the idea, anyhow. (Sep 13, 2010)
- @Jonathan, This is truly an awesome library! The ability to cross-platform color Python output is really nice and useful. I am providing tools for a library that colors its own console. I can redirect the output of that console to the terminal and colorize the output. Now I can even one-up the library and let the user select colors. This will allow color-blind people to set things up so they can actually see the output correctly. Thanks (Nov 30, 2012)
- This should be in the standard library… Cross-platform colour support is important, I think. (Jun 28, 2013)
- Colorama is great! Also have a look at ansimarkup, which is built on colorama and allows you to use a simple tag-based markup (e.g. <b>bold</b>) for adding style to terminal text (Feb 19, 2017)
- This doesn’t work without calling colorama.init(). Vote up! (Feb 19, 2018)
- This symbol would make a great colored block: █ Only problem is that it is extended ASCII, maybe you could get it to work using… (Oct 5, 2013)
- Some terminals also can display Unicode characters. If that is true for your terminal, the possible characters are almost unlimited. (Nov 19, 2013)
- This answer came fairly late, but it seems to be the best to me… the ones voted above it require special hacks for Windows whereas this one just works: stackoverflow.com/a/3332860/901641 (Dec 16, 2013)
- How about stackoverflow.com/a/42528796/610569 using pypi.python.org/pypi/lazyme? (disclaimer: shameless plug) (Mar 1, 2017)
- If you don’t want to install an extra package, follow this new answer. (Mar 24, 2021)
FLUTTER I - INTRO AND INSTALL
WHAT IS FLUTTER ?
Flutter is a new tool offered by Google that lets developers build cross-platform applications that can be executed on different systems, such as Android and iOS, from just a common codebase.
This tool is built in C and C++ and provides a 2D rendering engine, a React-inspired functional-reactive framework, and a set of Material Design widgets. It is currently being distributed in its alpha 0.0.20 version, but in spite of its early stage it already allows you to create complex interfaces, perform networking and even do file management.
Flutter’s approach is different from other solutions, for example Cordova, which runs over a WebView executing HTML, CSS and JavaScript. Unlike these tools, it just uses Dart as a single programming language. Dart is pretty easy to learn, and if you have Java knowledge, 75% of the job is almost done; getting used to Dart will just take you a couple of days.
Applications will not execute Dart code directly. When an app gets a release build, the code will be compiled to native, resulting in better performance and better UI response. While developing in debug mode (checking for potential bugs), Flutter also performs several tasks that may make the application run slower. In this situation, Flutter will let you know by placing a red ribbon in the top right of the screen with the words “Slow Mode” written on it.
WHY USING FLUTTER ?
It is more than just creating both Android and iOS applications from a single project: very little code is needed compared with native programming on both platforms, due to Flutter’s high expressiveness.
Performance and UI response.
Another good feature is that Flutter is Material Design oriented and offers lots of its specifications.
Google also appears to be using Flutter to develop the UI of their new system, called Fuchsia, judging by their repository.
INSTALLATION
This process may vary since Flutter is at an early stage and periodically updated; in order to keep up to date with the latest changes, visit the Flutter site and go through the installation process.
This one and the following posts will be based on the alpha 0.0.20+ version. So, to get Flutter installed, the next steps need to be done:
STEP 1. CLONING
Clone the alpha branch of the Flutter repository using Git (SourceTree, GitHub Desktop…) and add the bin folder to your PATH.
$ git clone -b alpha
$ export PATH=`pwd`/flutter/bin:$PATH
STEP 2. CHECKING DEPENDENCIES
Run Flutter doctor in order to install all the dependencies needed.
$ flutter doctor
STEP 3. INSTALLING PLATFORMS
The next step is installing the platforms to develop for; we can have both, or just the one we want to build applications for.
The Android choice requires an Android SDK installation. You may just install Android Studio, which already provides the SDK. In case the SDK is installed in a location different from the default, an ANDROID_HOME environment variable must be set, pointing to the new location where the SDK was installed.
The iOS choice requires Xcode 7.2 or higher to be installed. In order to run applications on a physical device, additional tools are needed. These tools can be installed using Homebrew.
$ brew tap flutter/flutter
$ brew install ideviceinstaller ios-deploy
STEP 4. ATOM CONFIGURATION
It is recommended to use Atom editor with Flutter and Dart plugins installed.
Flutter Atom plugin installation:
- Run Atom.
- Packages > Settings View > Install Packages/Themes.
- Fill Install Packages field with the word ‘flutter’, then click Packages button.
- Select Flutter package and install.
Open Packages > Flutter > Package Settings and set FLUTTER_ROOT to the path where Flutter’s SDK was cloned.
Then open Packages > Dart > Package Settings, and set the variable with the Dart SDK location, typically bin/cache/dart-sdk inside the Flutter directory.
If using Mac, then Atom > Install Shell Commands in order to install shell commands.
The last thing to do is run Flutter doctor again in order to check everything is OK.
Following output shows how the installation process was successful and how iOS environment does not meet the requirements yet.
[✓] Flutter (on Mac OS, channel alpha)
• Flutter at /Users/XensS/dev-dart/flutter-sdk
• Framework revision 9a0a0d9903 (5 days ago), engine revision f8d80c4617
[✓] Android toolchain — develop for Android devices (Android SDK 24.0.1)
• Android SDK at /Users/XensS/Library/Android/sdk
• Platform android-N, build-tools 24.0.1
• Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
[✓] iOS toolchain — develop for iOS devices (Xcode 6.4)
• XCode at /Applications/Xcode.app/Contents/Developer
• Xcode 6.4, Build version 6E35b
x Flutter requires a minimum XCode version of 7.0.0.
Download the latest version or update via the Mac App Store.
x ideviceinstaller not available; this is used to discover connected iOS devices.
Install via ‘brew install ideviceinstaller’.
x ios-deploy not available; this is used to deploy to connected iOS devices.
Install via ‘brew install ios-deploy’.
[✓] Atom — a lightweight development environment for Flutter
• flutter plugin version 0.2.4
• dartlang plugin version 0.6.37
FIRST STEPS (Hello World App!)
Let’s see how to create a new project and code some examples in order to see how Flutter works. The following posts should bring more complicated and interesting examples.
Go to Packages > Flutter > Create New Flutter Project.
Inside the lib folder there is a file called main.dart; open the file and erase its content.
Dart code starts its execution from the main function, which should be included in the main.dart file.
void main() { }
Now import Dart’s material library; this will provide a function to launch applications.
import 'package:flutter/material.dart';
That function is called runApp and accepts a Widget as a parameter. To get an idea, a Widget could be compared to a View in Android or iOS, although of course there are differences between them. In Flutter the whole UI is built using widgets and everything is coded using just Dart, whereas Android, for instance, uses XML for view specification.
Start by creating an example that uses a Text Widget in order to show some text in the application:
void main() {
  runApp(
    new Text('Hello World!')
  );
}
Now run the application using Atom.
As seen, the text appears over the status bar; this is because that is where Flutter’s (0,0) coordinate is.
Let’s add some padding to fix it. As the Flutter UI is built using widgets, the padding should be a widget as well. This may sound weird if you have some previous experience on Android or iOS, where padding is just a View attribute. What has to be done here is adding a new Padding Widget and nesting our Text Widget inside it, as follows:
void main() {
  runApp(
    new Padding(
      padding: const EdgeInsets.only(top: 24.0),
      child: new Text('Hello World!')
    )
  );
}
In the code above a Padding Widget is created; its padding is set to 24 using an EdgeInsets object, and the Text Widget is nested as its child. Run the app and see how the text is placed lower.
Note: if you come from a Java background, the const EdgeInsets.only(top: 24.0) instruction is a call to only, a named EdgeInsets constructor. It returns an instance which is a constant at compilation time. This is a difference between Java and Dart; more information about Dart constructors here.
To place the text in the center of the screen, make use of another specific widget called Center:
void main() {
  runApp(
    new Center(
      child: new Text('Hello World!')
    )
  );
}
Both the Padding and Center Widgets have an attribute called child. This is, in fact, one of the features that make Flutter so powerful. Each widget can have its own child, which allows nesting widgets: a Text nested in a Padding nested in a Center, for example.
TO SUM UP
In this first article about Flutter we have seen how easy it is to create an application that shows some text, writing very little code. The following posts are meant to target more complex interfaces and show how easy it is (compared to the native way) to implement them.
We will work with a statistics array. All elements are of double data type.
- After you have figured out the dimension of the array, create the array with exactly enough elements. This way you will not waste unnecessary space in memory.
- The elements of the array will be read from the user, but the array will be kept sorted at all times.
- Print all elements of the array in sorted order, and calculate the median in statistical terms.
When we say that we calculate the median in statistical terms, we mean that half of the elements of the array are less than, and half are greater than, the determined value.
If we deal with an array that has an odd number of elements, the median will be one value that belongs to the array. If we deal with an array that has an even number of elements, we should take the two values in the middle of the sorted array and find their average; in this case the median does not need to be a value that is in the array.
Also, this algorithm might not be the fastest in terms of speed, but it will be a good investment when we calculate some other important properties of the given set of numbers.
For very large sets, arrays would not be an appropriate container; even though they are useful, they have their limits as well. To apply the program in the real world you should analyze it more carefully and find suitable applications if possible.
Example Code to Calculate Median
#include <iostream>
using namespace std;

void intakeOfElements(double*, int);
double calculateMedian(double*, int);
void printElements(double*, int);

int main()
{
    cout << "How many elements will you input->";
    int nElements;
    cin >> nElements;

    double *ptrArray = new double[nElements];

    intakeOfElements(ptrArray, nElements);
    double dMedian = calculateMedian(ptrArray, nElements);
    printElements(ptrArray, nElements);
    cout << "The median of the set is=" << dMedian << endl;

    delete [] ptrArray;
    return 0;
}

void intakeOfElements(double* ptr, int iN)
{
    double *ptrI, *ptrJ, dTemp;
    for (ptrI = ptr; ptrI < ptr + iN; ptrI++) {
        cout << "Next element->";
        cin >> dTemp;
        // Insertion sort: walk back from ptrI-1, shifting larger
        // elements one place right until dTemp's position is found.
        for (ptrJ = ptrI - 1; ptrJ >= ptr && *ptrJ > dTemp;
             *(ptrJ + 1) = *ptrJ, ptrJ--);
        *(ptrJ + 1) = dTemp;
    }
}

double calculateMedian(double* ptr, int iN)
{
    double dResult;
    int iHalf = iN / 2;
    if (iN % 2 == 0) {
        dResult = (*(ptr + iHalf - 1) + *(ptr + iHalf)) / double(2);
    } else {
        dResult = *(ptr + iHalf);
    }
    return dResult;
}

void printElements(double* ptr, int iN)
{
    for (double* d = ptr; d < ptr + iN; cout << *d << endl, d++);
}
Explanation of the Code
The main function does the following:
- nElements serves to keep the size of an array.
- We create the array ptrArray with the right amount of space in memory.
- The function intakeOfElements will provide the input of the elements. This function will sort the array as well.
- After the elements are sorted, we call the function calculateMedian, in which we find the value we are looking for.
- We print the elements of the sorted array on the screen, then print the median.
- Finally, apply the delete operator on the array.
Now we will look at those functions and explain how they work:
- The most important function is intakeOfElements. It takes one pointer and one int, and returns void.
- In the function we have two pointers *ptrI and *ptrJ of double data type and one variable to contain the result.
- The first pointer's job is to advance from the start of the array towards its end.
- The start is given by the address kept in the name of the array. The end is bounded by the simple operation of adding the number of elements to the pointer; this way you prevent the pointer ptrI from going beyond the right limit of the array.
- After this we take one element after another. Each number is kept in dTemp, and after we have the next value we go back toward the beginning of the array; the elements we go through are already sorted. So, that part of the array in memory is always sorted, and every element looks for its place in the ordered part from the back. In other words, it is inserted at its appropriate place.
- The function calculateMedian takes two values: a pointer to the beginning of the array and the number of elements in it.
- The return value is dResult, which is returned to the main function as a double.
- After we have sorted the array it is easy to calculate the value of the median. Even though this might not be the fastest way to achieve that task, it pays off when we calculate the frequency of each element or if we wish to remove elements that are repeated.
- printElements() is the function that presents the elements. The pointer d gets the address of the array, and ptr + iN is the marker for the end of the array, so that you don't go over the array's limits.
- Each element of the array is printed, one after another, and the pointer is moved toward the end marker. It would even be possible to do this without the "," operator, but that might be way too much for some people.
Additional Exercises:
- Find the average value of the set; you should also calculate the geometric and harmonic means.
- Find how often each element is repeated in an array.
- Figure out which element is most often repeated in the array.
- Find the element that has the lowest frequency in an array.
- Print the elements of the original set in input order, without sorting.
- Reduce an array to show just the elements without repetitions.
- If the average of the set is denoted avgValue, try to calculate the value of the sum of (avgValue – dArray[i]) * (avgValue – dArray[i]), where i goes from zero to the end of the array. After this, denote medValue as the already-mentioned median of the set and calculate the similar value: the sum of (medValue – dArray[i]) * (medValue – dArray[i]).
Excellent article.
A natural followup for this article is finding the median by sorting the data and picking the exact middle value. I know that some places have legislated that the median is the exact middle value (which depends on whether the data is even or odd in number).
THX a lot!
This article is a bit more on the C side of programming, but it could still be called C++. The next logical thing could be the vector container used instead of an array, or writing it in pure C, or perhaps developing a complete line of methods and classes. It is up to you; it depends on what level you are at, and so on.
It is just a bare push toward the goal.
Those who did not like it could also try to replace cout<<*d<<endl, d++ with something like cout<<*d++<<endl.
But it is just up to the reader to see how she/he wants to go from this point.
I also recommend some reading to those who don't know enough about statistics; the article intends to show a few tricks that would not be so easily achieved in some other languages.
This is one of the reasons why C++ has beautiful and powerful syntax to achieve great things while producing fast code.
Thank you for your work. I agree with duskoKoscica that this is more of a C approach. All of you who want to see it done the C++ way should look into the Boost accumulator template library, here.
To be more precise, this could be part of developing one or a few methods for one of the classes. And because I would like to show how to do it in C++11, this was my omission: you could do something like this for(type i: tContainer) cout<<i<<endl; or, if you want to access the locations, you could use something like this for(type& i: tContainer) i=something; As one might notice, to develop an OOP solution it would take way more than just one class with a few methods. But, this is the way it gets tough to explain things through OOP… THX and have a nice day!!!
I have been in a few situations where results from statistics have helped me create better programs! Just look at the case of creating Morse code from a text file. It is faster if you know what language you are dealing with.
This is not connected to the article, but somehow I think it is natural for people to like things like this: lunar eclipse
It is a moon! It looks just too cool not to be noticed. Planets look too cool too…
On 07/03/2016 09:20 PM, Hillf Danton wrote:
...
>> When we have 64-bit PTEs (64-bit mode or 32-bit PAE), we were able
>> to move the swap PTE format around to avoid these troublesome bits.
>> But, 32-bit non-PAE is tight on bits. So, disallow it from running
>> on this hardware. I can't imagine anyone wanting to run 32-bit
>> on this hardware, but this is the safe thing to do.
>
> <jawoff>
>
> Isn't this work from Mr. Tlb?

I have no idea what you mean.

>> +	if (!err)
>> +		err = check_knl_erratum();
>>
>> 	if (err_flags_ptr)
>> 		*err_flags_ptr = err ? err_flags : NULL;
>> @@ -185,3 +188,32 @@ int check_cpu(int *cpu_level_ptr, int *r
>>
>> 	return (cpu.level < req_level || err) ? -1 : 0;
>> }
>> +
>> +int check_knl_erratum(void)
>
> s/knl/xeon_knl/ ?

Nah. I mean we could say xeon_phi_knl, but I don't think it's worth
worrying too much about a function called in one place and commented
heavily.

>> +	puts("This 32-bit kernel can not run on this processor due\n"
>> +	     "to a processor erratum. Use a 64-bit kernel, or PAE.\n\n");
>
> Give processor name to the scared readers please.

Yeah, that's a pretty good idea. I'll be more explicit.
01 June 2007 09:31 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
?xml:namespace>
Reach represents a new, radical approach to chemicals control built more on the ‘precautionary principle’ than sound science. It will replace more than 40 separate pieces of legislation.
From the outset Reach has looked and sounded like a bureaucratic nightmare and unfortunately it is. Competent authorities across the European Union now are charged with helping make Reach work.
The burden on companies to pre-register and finally register many tens of thousands of substances over the next four years is immense. The practical implications of Reach compliance will soon come to the fore.
At first the costs of complying with Reach pre-registration and registration are likely to be significant. Reach requirements will demand a considerable amount of time from company participants and their legal and consultant advisors.
Yet to be determined are the international and trade implications of the new rules and just who will be covered by what. Foreign companies as well as downstream users operating in the EU have yet to get to grips with the implications of Reach.
And it is not yet clear whether Reach will become the accepted model for global chemicals control. The EU would certainly like it to be.
Across the Atlantic, the
Jack Gerard, president of the American Chemistry Council (ACC), argues that Reach first will undermine
Reach-like proposals appeared in nearly 80 state-level legislative initiatives last year. Even though all but a few were defeated, Gerard says he expects some 150 local measures mimicking Reach to arise this year.
Even if the American chemicals industry can stop the Reach tide at US shores, they fear there will be no escaping the contagion in the global market.
“Reach will affect US companies broadly - even those that do not do business in Europe - by forcing changes in their market strategies, R&D and in product selection and substitutions,” Washington, DC-based attorney Robert Matthews, a specialist in chemicals regulation, says
He contends that the new EU regulatory system will have worldwide impact because chemicals that in time will be banned in
The EU has taken a great leap forward with Reach. In doing so it has been accused of putting its economy at risk by tilting the competitive playing field against its industrial and research base.
Reach will have a considerable impact on EU chemicals. Its greater impact may be wider still. | http://www.icis.com/Articles/2007/06/01/9033823/insight-industry-has-a-long-way-to-go-with-reach.html | CC-MAIN-2015-22 | refinedweb | 407 | 51.28 |
Hi, I have recently started to teach myself Arduino programming. I am aiming to build my own vehicle simulator for use with the game ‘Euro Truck Simulator’
To achieve this I need to get data from the game to gauges and lights etc, so I though an Arduino would be the best way to do this.
The game has an API that broadcasts game variables via a JSON file.
This API is accessed via an ethernet server.
I have used the code below to successfully connect to the server port, but I am not sure what to do now to get to the JSON file.
#include <Ethernet.h> #include <SPI.h> byte mac[] = { 0xBE, 0xAD, 0xBE, 0xEF, 0xFE, 0xEC }; byte server[] = { 192, 168, 0, 13 }; // ETS2 Telemetry Server int tcp_port = 25555; EthernetClient client; void setup() { Ethernet.begin(mac); Serial.begin(9600); delay(1000); Serial.println("Connecting..."); if (client.connect(server, tcp_port)) { // Connection to ETS2 telemetry Server Serial.println("Connected to ETS2 Telemetry Server"); } else { Serial.println("connection failed"); } } void loop() { if (client.available()) { char c = client.read(); Serial.print(c); } if (!client.connected()) { Serial.println(); Serial.println("disconnecting."); client.stop(); for(;;) ; } }
The full URL to get to the JSON data is:
I have tried adding a GET command after connecting to the server but can’t seem to get the syntax right.
Hopefully someone can point me in right direction | https://forum.arduino.cc/t/arduino-connecting-to-url/597166 | CC-MAIN-2022-40 | refinedweb | 230 | 56.66 |
I like to use it feel free to grab a copy.
If you have any questions regarding this component, component development or components in general you can always sign up for the Flash Components mailing list
License
This work is published under a CC-GNU LGPL.
Can you please provide an example of how to import the FGButton component and if possible a sample file as a guide? Thanks!
Nathan, all the information, the class and SWC as well as an example are in the MXP (zipped) that is available in the downloads section.
If you wanted to import the component via code, you need to make sure the component is in the library then just use a standard import:
import com.flashgen.components.FGButton
All of my classes are installed by default in the Classes folder so you shouldn’t have an issue with providing the path to them.
FG
Hi there - I was trying to download the component (roll over and out for components) - and I get a corrupted zip file when trying to expand… | http://blog.flashgen.com/2006/05/05/fgbutton-component-finally-available/ | crawl-001 | refinedweb | 177 | 63.43 |
EIR Memory Card Support
Hardware
As explained on this page, the MMC socket is not connected to the hardware SPI. The following table shows the signals and pins:
Two additional GPIO pins are connected to the card detection and write protect switches of the memory card socket:
SPI Bit Banging Driver
This driver is available since Nut/OS 4.8 in the following source code files:
This portable driver can be configured with the Nut/OS Configurator. Typically this is done when loading the EIR configuration file eir10c.conf.
As expected, the performance of the bit banging driver is lousy. The following figures had been attained from a no-name micro-SD card.
Writing 64 byte blocks for 10s...9 kBytes/s Writing 128 byte blocks for 10s...9 kBytes/s Writing 256 byte blocks for 10s...10 kBytes/s Writing 512 byte blocks for 10s...16 kBytes/s Reading 64 byte blocks...24 kBytes/s Reading 128 byte blocks...24 kBytes/s Reading 256 byte blocks...28 kBytes/s Reading 512 byte blocks...28 kBytes/s
SPI Bus Based Driver
In opposite to the bit banging driver explained above, the newer SPI bus based drivers allow to concurrently access several devices on the same SPI bus. Actually this had been possibly with the old driver model as well, but the application needs to take care that accesses do not overlap. In the new driver model, the bus controller driver is separated from the device driver and serializes their access.
The required drivers for the EIR are available since Nut/OS 5.0 in the following source code files:
No special configuration required for the bus controller driver, because it has been specifically designed for the AT91SAM7SE CPU that is mounted on the EIR board. All memory card specific configuration items are either included in the eir10c.conf file or used with default values. However, earlier versions of the configuration file do not include the settings for the card detect and write protect switches. You may add them manually.
The performance gain compared to the bit banging driver is quite impressive. The following figures had been attained from the same micro-SD card as above.
Writing 64 byte blocks for 10s...28 kBytes/s Writing 128 byte blocks for 10s...26 kBytes/s Writing 256 byte blocks for 10s...30 kBytes/s Writing 512 byte blocks for 10s...37 kBytes/s Writing 1024 byte blocks for 10s...256 kBytes/s Writing 2048 byte blocks for 10s...359 kBytes/s Writing 4096 byte blocks for 10s...436 kBytes/s Reading 64 byte blocks...114 kBytes/s Reading 128 byte blocks...161 kBytes/s Reading 256 byte blocks...198 kBytes/s Reading 512 byte blocks...229 kBytes/s Reading 1024 byte blocks...242 kBytes/s Reading 2048 byte blocks...242 kBytes/s Reading 4096 byte blocks...242 kBytes/s
Application Code
An application that wants to use an MMC driver, must register it first. For the bit banging driver this is done by simply calling
#include <dev/sbi_mmc.h> NutRegisterDevice(&devSbi0MmCard0, 0, 0);
When using the new SSC based driver, the registration requires a different call and different header files to attach the MMC driver to the SPI bus:
#include <dev/spibus_ssc.h> #include <dev/spi_mmc_gpio.h> NutRegisterSpiDevice(&devSpiMmcGpio, &spiBus0Ssc, 0);
All these registration functions initialize the hardware and return 0 on success or -1 in case of an error. The following part is identical for all MMC drivers.
Typically, applications will use the FAT file system to access the data on the card. In fact, Nut/OS currently doesn't support any other. The Nut/OS PHAT filesystem driver supports FAT12, FAT16 and FAT32 including long filename support. Also this driver needs to be registered:
NutRegisterDevice(&devPhat0, 0, 0);
Now the application is ready to mount the file system on the memory card by using a C standard function:
#include <io.h> #include <fcntl.h> int volid; volid = _open("MMC0:1/PHAT0", _O_RDWR | _O_BINARY);
The path is composed of following parts: Name of the memory card driver, followed by a colon, followed by the number of the partition, followed by a slash and finally followed by the name of the file system driver.
The function returns a file handle (volume identifier) or -1 if the mounting failed. Depending on the format of the memory card, this call may take several seconds, or even a few minutes with the bit banging driver.
When successfully mounted, the application code can use all supported C standard functions to create, delete, read or write files.
Before removing the card or switching off the power supply, the application should close all open files and unmount the file system with
_close(volid);
This is mandatory, if data had been written. Otherwise the file system may become corrupted. There is currently no support in Nut/OS to repair corrupted FAT filesystems.
Known Problems
While MultiMedia and SD-Cards seem to work, I'm not able to mount SDHC cards.
Support for SDHC cards had been added since Nut/OS 5.0.5.
When trying to mount cards formatted with FAT12 or FAT16, the related open call never returns.
This bug has been fixed in Nut/OS 5.0.5.
The bit banging driver works on the EIR board, but the SPI/SSC bus controller doesn't.
When using the bus controller driver, it is required to short pins 17 and 20 at port A connector K1, as explained on this page.
The bit banging driver fails to read blocks larger than 512 bytes.
There is currently no fix available.
Return to the EIR project page. | http://www.ethernut.de/en/hardware/eir/mmc.html | CC-MAIN-2018-43 | refinedweb | 937 | 66.84 |
Like: [meta.wikimedia.org]
[ [en.wikipedia.org]. That address on its own is redirected to the Main Page.
The page name may include a namespace prefix (such as "Help:" in this page). With some special pages it may also include a parameter, as in [en.wikipedia.org] (but for most special page parameters, see below).
Other URLs associated with a page are constructed by adding a query string. The string can be added to either of the above forms (as in [en.wikipedia.org] ), but in this case the system defaults to the second form, i.e. it extends the index.php query string "title=Page_name".
Extended URLs are.
Other projects use similar URLs to those of English Wikipedia, except that the domain names vary: [meta.wikimedia.org] (Meta), [fr.wikipedia.org] (French Wikipedia), [de.wiktionary.org] [en.wikipedia.org] are used. For details, see mw:API. | https://readtiger.com/wkp/en/Help:URL | CC-MAIN-2020-10 | refinedweb | 147 | 79.77 |
We used an old Android smartphone, the Scripting Language for Android (SL4A), and Python to auto trigger the phones camera while suspended from a balloon. This was a nice alternative to purchasing a camera since I had the old phone sitting around and no rubber banding the camera button was needed. While there was an app available from the Google Play store to auto capture images, the app only captured low resolution images with a max 10 second delay.
The phone needs to have SL4A installed () and Python (). The installation instructions are available from the download sites.
The script when executed, will prompt the user for a delay time (seconds before image capture begins), the total number of images to capture, and the number of seconds of delay between captures.
Be sure to set the phone settings to ensure that the phone will not go into sleep mode. I had to design a holder out of foam, duct tape, and a sheet of plastic to ensure that the screen did not touch the foam, nor were any buttons pressed during flight.
You are welcome to write me for the script. I tired to post the script below but this site is not treating it very well. I am sure you hackers can figure it out! Gerry: geraldg@lummi-nsn.gov
try: print "Android/SL4A Balloon Camera Auto Shooter" print "Created by Gerry Gabrisch GISP, GIS Manager Lummi Indian Business Council." print "geraldg@lummi-nsn.gov" print "Copy? Right! 2014, No rights reserved."
import android import time droid = android.Android() delay = droid.dialogGetInput('Input 1','Delay before starting?','60').result numberOfShots = droid.dialogGetInput('Input 2','Total images to capture?', '20').result delayBetweenShots = droid.dialogGetInput('Input 3','Delay (Seconds) between captures','30').result droid.ttsSpeak('taking pictures in'+ delay +'seconds') time.sleep(int(delay)) counter = 1 droid.ttsSpeak('taking pictures now') while counter <=int(numberOfShots): droid.cameraCapturePicture('/sdcard/DCIM/Camera/'+str(counter)+ ".jpg") counter +=1 if counter != int(numberOfShots): time.sleep(int(delayBetweenShots)) print "done without error..." droid.ttsSpeak('Finished without error...') del droid
except: print "error!!!!!!!!!!!!!!!!" del droid
3 Comments
This is really neat. have you tried our own Skycam? it will take higher res photos.
Is this a question? Click here to post it to the Questions page.
Reply to this comment...
Log in to comment
Be sure to put parantheses around the print or it will error out in line 23 of the code.
print "done without error..."
Reply to this comment...
Log in to comment | https://publiclab.org/notes/LummiGIS/07-31-2014/android-smartphone-python-script-to-capture-images-from-a-baloon%20 | CC-MAIN-2019-39 | refinedweb | 417 | 69.89 |
Namespaces provide hierarchical clarity and organization of types and other members. A container of hundreds of classes, the .NET Framework Class Library (FCL) is an example of effective use of namespaces. The Framework Class Library would sacrifice clarity if planned as a single namespace with a flat hierarchy. Instead, the Framework Class Library organizes its members into numerous namespaces. System, which is the root namespace of the FCL, contains the classes ubiquitous to .NET, such as the Console class. Types related to data services are grouped in the System.Data namespace. Data services are further delineated in the System.Data.SqlClient namespace, which contains types specific to Microsoft SQL. The remaining types are organized similarly in other namespaces.
A namespace identifier must be unique within the namespace declaration space, which contains the current namespace but not a nested namespace. A nested namespace is considered a member of the containing namespace. Use the dot punctuator (.) to access members of the namespace.
A namespace at file scope, not nested within another namespace, is considered part of the compilation unit and included in the global namespace. A compilation unit is a source code file. A program partitioned into several source files has multiple compilation units—one compilation unit for each source file. Any namespace, including the global namespace, can span multiple compilation units. For example, all types defined at file scope are included into a single global namespace that spans separate source files.
The following code has two compilation units and three namespaces. Because of identical identifiers sharing the same scope, errors occur when the program is compiled.
// file1.cs public class ClassA { } public class ClassB { } namespace NamespaceZ { public class ClassC { } } // file2.cs public class ClassB { } namespace NamespaceY { public class ClassA { } } namespace NamespaceZ { public class ClassC { } public class ClassD { } }
Compile the code into a library:
csc /t:library file1.cs file2.cs
In the preceding code, the global namespace has four members. NamespaceY and NamespaceZ are members. The classes ClassA and ClassB are also members of the global namespaces. The members span the File1.cs and File2.cs compilation units, which both contribute to the global namespace.
ClassB and ClassC are ambiguous. ClassB is ambiguous because it is defined twice in the global namespace, once in each compilation unit. ClassC is defined in the NamespaceZ namespace in both compilation units. Because NamespaceZ is one cohesive namespace, ClassC is also ambiguous.
The relationship between compilation units, the global namespace, and nonglobal namespaces are illustrated in the Figure 1-3.
The using directive makes a namespace implicit. You can access members of the named namespace directly without the fully qualified name. Do you refer to members of your family by their fully qualified names or just their first names? Unless your mother is the queen, you probably refer to everyone directly by simply using their first names, for the sake of convenience. The using directive means that you can treat members of a namespace like family members.
The using directive must precede any members in the namespace where it is defined. The following code defines the namespace member ClassA. The fully qualified name is NamespaceZ.NamespaceY.ClassA. Imagine having to type that several times in a program!
using System; namespace NamespaceZ { namespace NamespaceY { class ClassA { public static void FunctionM() { Console.WriteLine("FunctionM"); } } } } namespace Application { class Starter { public static void Main() { NamespaceZ.NamespaceY.ClassA.FunctionM(); } } }
The following using directive makes NamespaceZ.NamespaceY implicit. Now you can directly access ClassA.
namespace Application { using NamespaceZ.NamespaceY; class Starter { public static void Main() { ClassA.FunctionM(); } } }
Ambiguities can occur when separate namespaces with identically named members are made implicit. When this occurs, the affected members can be assessed only with their fully qualified names.
The using directive can also define an alias for a namespace or type. Aliases are typically created to resolve ambiguity or as a convenience. The scope of the alias is the declaration space where it is declared. The alias must be unique within that declaration space. In this source code, an alias is created for the fully qualified name of ClassA:
namespace Application { using A=NamespaceZ.NamespaceY.ClassA; class Starter { public static void Main() { A.FunctionM(); } } }
In this code, A is a nickname for NamespaceZ.NamespaceY.ClassA and can be used synonymously.
Using directive statements are not cumulative and are evaluated independently. | http://etutorials.org/Programming/programming+microsoft+visual+c+sharp+2005/Part+I+Core+Language/Chapter+1+Introduction+to+Visual+C+Programming/Namespaces/ | crawl-001 | refinedweb | 716 | 51.55 |
Codeforces Round #446 Editorial
There's also another solution using dp by Kan: 32407269
Instead of sorting and finding the biggest two, isn't it faster to just find the biggest two directly?
Correct me if I'm wrong..
~~~~~
int maxCapacity[2];
maxCapacity[0] = maxCapacity[1] = INT_MIN;
if (capacity >= maxCapacity[0]) {
    maxCapacity[1] = maxCapacity[0];
    maxCapacity[0] = capacity;
} else if (maxCapacity[1] < capacity) {
    maxCapacity[1] = capacity;
}
~~~~~
This should work, right?
Yes that works but it's just easier to code if you sort it.
Kaneki_04's method: O(n). NotMoreThanANoob's method: O(n*log n) if you use quicksort / sort from <algorithm>, etc. They could have made the time limit shorter so you can't sort and get max points, but it works both ways.
you wrote solution of C/Div2 and A/Div1 separately !
Thanks it's edited now
Could you explain the Div2 D problem to me?

Input: n = 5, array = 1 3 4 5 2
Participant's output: 3 4 5 2 1
why WA?
Consider the index subset [1, 4]. The sum for the given array is 3 + 2 = 5. The sum for your array is 4 + 1 = 5.
Auto comment: topic has been updated by NotMoreThanANoob (previous revision, new revision, compare).
if we follow editorial then answer for 6,10,15 is 5 but the answer is actually 4, where am i understanding wrong, can someone plz explain.(div2 C)
Run second loop(nested one) from i. Not i+1.
still not clear
If you can find the minimum-length subarray whose gcd is 1, you can make a 1 in len-1 moves. Now that there is a 1 in the array, you can make the whole array 1 in n-1 moves, so the total is (len - 1) + (n - 1) moves.
thanks, understood.
This contest was organized well.. well done!
A Div1 can be solved with dp in O(n*sqrt(max(ai)) + n^2 * log^3(n)); however, it's almost impossible to actually hit this many operations, so my solution is pretty fast: 32408288
we could solve in n^2*log(n) by tutorial method
We can solve this in nlog^2(n) with sparse table + binary search or segment tree.
Thank You to all the authors and to the CF team. Great contest! I enjoyed it. :D
someone plz explain div2 C in detail.. with possibly proof..
Which part do you not understand? When cnt1>0 (as given in the editorial) or in the other case?
the other case, i.e. how we can make the elements 1 in R-L+1+n-1 turns, and can you plz explain with an example, like the case when we have 3 elements, i.e. 6 10 15
(Considering that none of the elements are one in the given array)
As soon as we are able to make any of the array elements to 1 the answer will be the steps till now + n-1 as now we can make all of the elements to 1.
So if we have a case in which there are 2 consecutive elements such that their gcd is 1 then the answer will be n (one move for obtaining 1 and n-1 moves for the rest).
If we have no such pairs then we look for triplets. In this case the answer will be 2+n-1=n+1.
Just extend this and you'll get what the author is trying to explain.
For your case gcd(6,10)=2 and gcd(10,15)=5 but gcd(6,10,15)=1 so answer is n+1=4
Hope this helps :)
Hello, I understand what the algorithm is and why it works, but I don't understand how the editorial writer goes about checking the pairs and triples. They basically say to evaluate gcd(L, R) and keep increasing R? How does that work, doesn't that mean you miss pairs at the end? I am probably misunderstanding something.
What the editorial says is that we can fix a L (we do this n times) and by iterating through all the values of R(takes n steps so complexity=n*n) we can find the "smallest subarray" with GCD=1. (Which is what we need to do).
Let's call F(L, R) the gcd of the subarray between L and R, ok?
What is happening is that we can calculate F(L, R) recursively:
F(L, R) = V[L] if L == R; otherwise F(L, R) = gcd(F(L, R-1), V[R])
Now, look at this: --> if F(L, R) = 1, then for every R' >= R, F(L, R') = 1
About the problem: the smartest thing to do is get a 1, as fast as possible, so that we can "carry" this 1 for all the other positions. So, if F(L, R) = 1, we can get a 1, by doing V[L+1] = gcd(V[L], V[L+1]), then V[L+2] = gcd(V[L+1], V[L+2]) and so on, until reaching V[R] = 1.
Now we want to find the minimum R-L+1 that F(L, R) = 1 to get the answer, and we can brute force at each possible L to find the first R that solves the problem.
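For small n, the whole approach fits in a short brute force. A sketch (function and variable names are mine, not from the editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of operations to turn the whole array into 1s,
// or -1 if impossible (the gcd of the whole array is not 1). O(n^2).
long long minOps(const vector<long long>& a) {
    int n = a.size();
    long long ones = count(a.begin(), a.end(), 1LL);
    if (ones > 0) return n - ones;       // each remaining element fixed in one move
    long long best = LLONG_MAX;          // shortest subarray with gcd 1
    for (int l = 0; l < n; ++l) {
        long long g = a[l];
        for (int r = l; r < n; ++r) {
            g = __gcd(g, a[r]);
            if (g == 1) {                // F(l, r) = 1, no need to extend further
                best = min(best, (long long)(r - l + 1));
                break;
            }
        }
    }
    if (best == LLONG_MAX) return -1;    // overall gcd != 1
    return (best - 1) + (n - 1);         // create one 1, then spread it
}
```

For the {6, 10, 15} case discussed above this returns 4, and for the sample {2, 2, 3, 4, 6} it returns 5.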
Can someone provide a solution for Div1 E using generating functions?
Sure. Say that we end in the state (a1 - b1, a2 - b2, ..., an - bn), obviously with b1 + b2 + ... + bn = k. Then one can check by induction that regardless of the path taken to get there, we will have res = a1*a2*...*an - (a1 - b1)*(a2 - b2)*...*(an - bn).

Therefore, it suffices to compute E[(a1 - b1)*...*(an - bn)] = (k! / n^k) * sum over all b1 + ... + bn = k of prod_i (ai - bi) / bi!.

Note that the last sum is precisely the [x^k] coefficient of the following generating function: prod_i sum_{b >= 0} (ai - b) * x^b / b! = prod_i (ai - x) * e^x = e^(n*x) * prod_i (ai - x).

Now multiply out the last polynomial: say that prod_i (ai - x) = c0 + c1*x + ... + cn*x^n.

This is O(n^2) time to do, or faster with FFT (unnecessary here). Then we can compute that the [x^k] coefficient of the last expression (by expanding out e^(n*x)) is sum_{j=0}^{min(n,k)} cj * n^(k-j) / (k-j)!.

Putting this all together gives that the answer is a1*...*an - (k! / n^k) * sum_j cj * n^(k-j) / (k-j)!.
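For what it's worth, here is a quick numeric sanity check of the resulting formula answer = a1*...*an - (k!/n^k) * sum_j cj * n^(k-j)/(k-j)!, where prod_i (ai - x) = sum_j cj * x^j (plain doubles here instead of the modular arithmetic the actual problem requires; all names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Expected final value of res, computed via the closed formula above.
double expectedRes(const vector<double>& a, int k) {
    int n = a.size();
    vector<double> c = {1.0};                  // coefficients of prod_i (a_i - x)
    for (double ai : a) {
        vector<double> nc(c.size() + 1, 0.0);
        for (size_t j = 0; j < c.size(); ++j) {
            nc[j]     += c[j] * ai;            // multiply by a_i
            nc[j + 1] -= c[j];                 // multiply by -x
        }
        c = nc;
    }
    double prod = 1.0;
    for (double ai : a) prod *= ai;
    double kfact = tgamma(k + 1.0);            // k!
    double sum = 0.0;
    for (int j = 0; j <= min((int)c.size() - 1, k); ++j)
        sum += c[j] * pow(n, k - j) / tgamma(k - j + 1.0);  // cj * n^(k-j)/(k-j)!
    return prod - kfact / pow(n, k) * sum;
}
```

For a = {5, 5} and k = 2 this gives 9.5, which matches averaging res over the four equally likely pick sequences by hand.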
Very beautiful solution!
damn, this is beautiful
It's solution without generating function is really nice!
:D
How do you solve it without generating functions? I think the most beautiful part is that the sum is the same regardless of the path. I can't believe I didn't see it. It's a really nice problem anyway.
I've solved it without generating functions, just with some combinatorics. Every time we increase res, we add to it a product of terms like (ai - 1 - 1 - ... - 1 - 1). So in every bracket we have to decide what we will take: ai or -1. Let's assume that we will take -1 p times. Then we have to take a product of some n - 1 - p numbers from the input, and here goes dp. So we have p + 1 empty places where we should put a -1 (p + 1, because we must account for res while decreasing some element). The number of ways to choose them is k * (k - 1) * (k - 2) * ... * (k - p). But we have to decide from which -1 we will be decreasing while increasing res this time. It appears that we can do it in exactly one way, because we should choose the latest -1 (since all the other -1s must already be added to the numbers). The other k - 1 - p of the -1s can go anywhere, so n^(k - 1 - p).
Could anyone explain me the solution to div2 D. I can't seem to understand why this always works. For 4 4 1 2 3 wouldn't the output be 4 1 2 3?
It doesn't mean sort and then shift, even though that's what it says. It means: for a sorted array we can just cyclic shift. For an unsorted array, we do the same thing: replace the i-th smallest element with the (i + 1)-th smallest, and replace the largest with the smallest.
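That rule can be sketched like this (assuming distinct values, as the problem guarantees; names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Replace the i-th smallest value with the (i+1)-th smallest,
// and the largest with the smallest, keeping positions fixed.
vector<long long> shuffleArray(const vector<long long>& a) {
    int n = a.size();
    vector<int> idx(n);
    iota(idx.begin(), idx.end(), 0);
    sort(idx.begin(), idx.end(),
         [&](int x, int y) { return a[x] < a[y]; });  // indices ordered by value
    vector<long long> b(n);
    for (int i = 0; i < n; ++i)
        b[idx[i]] = a[idx[(i + 1) % n]];              // cyclic shift in value order
    return b;
}
```

For a = {4, 1, 2, 3} it produces {1, 2, 3, 4}: each element is replaced by the next one in sorted order, and the maximum by the minimum.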
thanks for the explanation, but can you suggest me,how to approach these type of problems. It will be beneficial for future contests XD
With that output a[1] = b[1], but the summations should differ.
I finally understand. Thank you so much.
Can anyone tell me why my (supposedly) nlogn solution for E keeps getting TLE?
In the beginning of your main function you put "ios::sync_with_stdio(false);" without "cin.tie(0); cout.tie(0);". This may be the reason.
What do those do? Is having cout.flush() at the end enough?
Explicitly, I don't know.
What I know is that those three instructions become useful for fast I/O when they come together :D.
Lol thanks, it worked :D
Haha welcome, Congrats :D
I am unable to understand the solution for 892-B Wrath. Can anybody please explain?
I did the following in B:
We know the last person in line will be alive no matter what, because they are attacking only people, who have lesser indexes;
The amount of alive people (int alive) is n before the bell rings;
We know that the Ln people who go right before the last one are guaranteed to be dead; let's denote that value as int kill.
Starting from the last person, let's iterate over the array with j = n-1 to 0 like this:
So, on each step we kill somebody, but the number of remaining people also decreases with each person, because we are getting farther and farther away from the killer. Also, we don't really care who the killer is; the most important thing is to maximize the reach of the claw.
32387222
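The same sweep, written a bit more directly: person i survives iff i < j - Lj for every j > i (0-indexed), so one pass from the right that tracks the minimum of (j - Lj) is enough. A sketch with my own naming:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of survivors after the bell. One right-to-left sweep, O(n).
long long countAlive(const vector<long long>& L) {
    int n = L.size();
    long long alive = 0;
    long long reach = n;                 // leftmost index attacked by anyone on the right
    for (int i = n - 1; i >= 0; --i) {
        if (i < reach) ++alive;          // nobody to the right can hit person i
        reach = min(reach, (long long)i - L[i]);
    }
    return alive;
}
```

The last person always survives, since everyone attacks only to the left.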
892B Wrath: why is my code getting TLE? It is O(n).
Use faster I/O. Include the following lines in your code: ios::sync_with_stdio(false); cin.tie(0); Otherwise use printf & scanf for input.
bhpra : try fast I/O, since cin and cout is slow.. use these lines at beginning,- inside main() : ios_base::sync_with_stdio(false); cin.tie(NULL);
hope it helps ! :)
Thanks. Till now I have used CodeChef for CP; this was never a problem there.
Can someone explain Div 2E / Div 1C solution a bit more detail ?
I can explain my online solution — it would be a good way to verify if it's correct.
An edge is in some MST if and only if it is not the heaviest edge of some cycle. To check that I used Kruskal. I process all the edges with weight X at the same time. If an edge connects 2 vertices which are already in the same component, it is the heaviest in a cycle. Otherwise it will not be the heaviest in any cycle: we constructed everything we could so far using edges with weight < X, and these vertices are still in separate components. It means that every cycle in which that edge can exist would have other edges of weight >= X.
The following part has been modified — not fully verified:
In addition to that, not all of these edges with weight X can exist within the same MST but they can exist separately in some MST's. The problem is that they can create a cycle, when we include cheaper edges. In order to detect such edges we can remember the state of each vertex (the id of components of both vertices when we process an edge in Kruskal).
Now I process queries — I check that every edge can exist in some MST. If that's the case, there is one more thing to check: that these edges do not form cycles — I just run another Kruskal on them, using remembered component numbers.
I understand your idea! But could you explain more about the way you create graph G'? I mean, how do you do it? For example, I have some edges:

~~~~~
(1 2 1) (1 3 1) (3 4 1) (2 5 3) (5 6 3) (6 4 3)
~~~~~

Then all the edges with weight 3 (2-5, 5-6, 6-4) must be in graph G', right?
Thanks for that question, after more thinking I found a bug in my solution. I provided a test which fails it above.
In A Div.1/C Div.2 you make a little mistake, i think the result should be "(R - L) + (n - 1)"
In div2c, what if n<=10^5 and a[i]<=10^6?
n<=10^5 and a[i]<=10^6
We can solve using segment tree with binary search in O(nlogn*logn). And also we can solve it in O(nlogn).
actually the original problem was with limitation n < 1e5 and a[i] < 1e9 but we decided to change it. You can see O(nlogn) solution here by omidazadi
A brief explanation please, thanks!.
you can solve it with binary search and sparse table..
Why binary search? If there exists a segment with length l whose gcd is 1, then there also exists a segment with length l+1, l+2, ... whose gcd is 1.
If you don't know about sparse tables, the competitive programming handbook HERE has a good explanation of them. Check it.
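For completeness, a sketch of the sparse table + binary search variant, O(n*log^2(n)); the binary search is valid because, for a fixed L, gcd(a[L..R]) is non-increasing as R grows. Names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Length of the shortest subarray with gcd 1, or -1 if none exists.
// Sparse table of gcds: O(n log n) build, O(1) range query.
long long shortestGcd1(const vector<long long>& a) {
    int n = a.size(), LOG = 1;
    while ((1 << LOG) < n) ++LOG;
    vector<vector<long long>> sp(LOG + 1, vector<long long>(n));
    sp[0] = a;
    for (int k = 1; (1 << k) <= n; ++k)
        for (int i = 0; i + (1 << k) <= n; ++i)
            sp[k][i] = __gcd(sp[k - 1][i], sp[k - 1][i + (1 << (k - 1))]);
    auto query = [&](int l, int r) {          // gcd of a[l..r], inclusive
        int k = 31 - __builtin_clz(r - l + 1);
        return __gcd(sp[k][l], sp[k][r - (1 << k) + 1]);
    };
    long long best = LLONG_MAX;
    for (int l = 0; l < n; ++l) {
        if (query(l, n - 1) != 1) continue;   // no valid R for this L
        int lo = l, hi = n - 1;               // binary search the first r with gcd 1
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (query(l, mid) == 1) hi = mid; else lo = mid + 1;
        }
        best = min(best, (long long)(lo - l + 1));
    }
    return best == LLONG_MAX ? -1 : best;
}
```

It returns the length of the shortest subarray with gcd 1; the editorial's answer is then (len - 1) + (n - 1) when no element is already 1.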
Thanks for the detailed explanation of Div-1 C. Hope your fingers didn't start cramping after writing all this.
Div1-C says "It is guaranteed that the sum of ki for 1 ≤ i ≤ q does not exceed 5·10^5", but test case #58 seems to violate this rule.
Can someone prove the validity of the solution of the 891C-Envy question?
In DIV1-C "If ans only if", I think it may be "If and only if"?
can someone explain the DIV 2-E in detail as I am not getting the tutorial solution
Can you tell me how to come up with solution for div 2 D? I have read the editorial, understood how it works but don't know how to deal with same problems.
Can someone explain in Div 2 problem B — Wrath how to maintain the min(j — Lj) for i + 1 <= j <= n in O(n) ?
You can maintain two arrays start[] and end[], initialised with 0. Let diff = i - L[i]; whenever diff > 0 && diff < i, do start[diff]++ and end[i-1]++. Now diff <= 0 is also possible; in this case do start[1]++ and end[i-1]++. At the end, for each position: if some interval is still open over it, the person is dead; otherwise ans++.
Here is the code snippet:

~~~~~
for (int i = 1; i <= n; i++) {
    scanf("%lld", &L[i]);
    if (i == 1 || L[i] == 0)
        continue;
    ll diff = i - L[i];
    if (diff <= 0) {
        st[1]++;
        en[i - 1]++;
    } else if (diff > 0 && diff < i) {
        st[diff]++;
        en[i - 1]++;
    }
}
ll diff = 0;
int ct = 0;
for (int i = 1; i <= n; i++) {
    diff += st[i];
    if (diff > 0)
        flag[i] = 0;
    diff -= en[i];
    // cout << flag[i] << " ";
    if (flag[i])
        ct++;
}
printf("%d", ct);
~~~~~

But I think we can also solve it without maintaining those two arrays, just keeping variables; did not implement it though ;)
You can solve it this way: for each index i, check if there exists an index j (j > i) such that i >= j - arr[j].

This is the same as finding the next minimum element (j - arr[j]) for each i. You can use stacks for that: HERE.

Check the GeeksforGeeks tutorial HERE on how to find the next greater or smaller element.
You can do that without additional memory. Iterate line from the end and maintain killer index. Set index to current person if current person's claw extends further than current killer's claw. See this submission for implementation example.
Thanks now I got it.
I still not clear about the problem DIV2 E/ DIV1 C.
Why is "_It can be proven that there's a MST containing these edges if ans only if there are MSTs that contain edges with same weight_" and "_if we remove all edges with weight greater than or equal to X, and consider each connected component of this graph as a vertex, the edges given in query with weight X should form a cycle in this new graph_."
Can someone explain a little bit more detail about this?
I can give you my understanding of that, consider edges with weight X in query, we remove all edges with weight greater than or equal to X in graph. In this new graph, we consider if the edges with weight X in query is "neccessary". If the edge connect two nodes which have already been connected, we called this edge "unnecessary", if there is an "unnecessary" edge in query, the answer will be NO.
Oh now I got it! For the edges with weight i in each query we need to check all of them to see if they form a cycle or not. Of course we need to process weight i in non-decreasing order and eliminate all remaining edges that have weight >= i.
Thank you very much!
can u please explain me a bit more! why we are considering the edges which have weight = 'i' ? why we are not checking the edges with weight < i?
Can anyone explain the solution idea of problem 891C — Envy (Div 2 — E / Div 1 — C) ? I have read the editorial but I can't understand anything about the idea. Please help.
according to above explaination my code should give correct answer but it is giving wrong answer. Anyone can explain plzz. This is my code
using namespace std;
int main(){ ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); long t,num,res=0; cin >> t; vector b(t); for(int i=0;i<t;i++){ cin >> num; res+=num; } for(int i=0;i<t;i++){ cin >> b[i]; } sort(b.begin(),b.end()); long res2=b[t-1]+b[t-2]; if(res<=res2) cout << "YES" << endl; else cout << "NO" << endl; return 0; }
You need to select highest available value from vector b. - instead of selecting smallest values from vector b. Reverse the vector or select last 2 elements from vector b.
Soooo Div1 B could be solved in O(N * log N). How come N was only 22? This really bamboozled me during the contest :D
Reason is here.
Ah, that kinda makes sense. Interesting to see problems in which checking the correctness of an answer is (exponentially) harder than generating a correct answer.
can anyone please explain me DIV2-D a bit more ? I didn't understand it completely !
div 2 c can anyone help me in finding the error for the particular test case 37 where all the 2000 values are 1870 its working fine in my compiler but not in codeforces.
In the editorial of PRIDE, should n't the answer be (R - L) + (n - 1) instead of (R - L + 1) + (n - 1) as in (R-L) steps. We can get a 1 in the segment by taking gcd of consecutive numbers.
Is 891A(Pride) solvable in O(n) ? Though it does not matter. Any explanation is appreciated.
For 891B — Gluttony What if the input array is already in the form of sorted then shifted by one? Wouldn't it be the same as the submitted array, leading to WA?
Agree with you (unless we wrongly understood what was mentioned in the editorial). In fact I tried to implement it the way described in the editorial, but it does not pass the tests.
Good Contest
Please explain Div. 1 C , unable to comprehend from editorial.
Hope this helps.
could anyone explain to me Div2-D and why Input 5 1 3 4 5 2 Participant's output 3 4 5 2 1 is WA | http://codeforces.com/blog/entry/55841 | CC-MAIN-2017-51 | refinedweb | 3,248 | 71.65 |
A common exception is dividing by zero. Exceptions are generally linked to an action. For dividing by zero, the action is a divisor of zero. Integer division, where the divisor is zero, triggers a divide by zero exception. (However, floating-point division by zero does not cause an exception; instead, infinity is returned.) The following code causes an unhandled divide by zero exception, which terminates the application:
using System; namespace Donis.CSharpBook{ public class Starter{ public static void Main(){ int var1=5, var2=0; var1/=var2; // exception occurs } } }
Place code that is likely to raise an exception in a try block because code in a try block is protected from exceptions. Exceptions raised in the try block are trapped. The stack is then walked by the CLR, searching for the appropriate exception handler. Code not residing in a try block is unprotected from exceptions. In this circumstance, the exception eventually evolves into an unhandled exception. As demonstrated in the previous code, an unhandled exception is apt to abort an application.
In the following code, the divide by zero exception is caught in a try block. Trapping, catching, and handling an exception are separate tasks. The catch statement consists of a catch filter and block. The DivideByZeroException filter catches the divide by zero exception. The catch block handles the exception, which is to display the stack trace. The proximity of the infraction is included in the stack trace. Execution then continues at the first statement after the catch block.
using System; namespace Donis.CSharpBook{ public class Starter{ public static void Main(){ try { int var1=5, var2=0; var1/=var2; // exception occurs } catch(DivideByZeroException except) { Console.WriteLine("Exception "+except.StackTrace); } } } } | http://etutorials.org/Programming/programming+microsoft+visual+c+sharp+2005/Part+III+More+C+Language/Chapter+9+Exception+Handling/Exception+Example/ | CC-MAIN-2018-09 | refinedweb | 278 | 50.23 |
TableEdit
Arrays, Vectors and TablesEdit
array Vector table
Arrays are a generally useful data structure, but they suffer from two important limitations:
itemize
The size of the array does not depend on the number of items in it. If the array is too big, it wastes space. If it is too small it might cause an error, or we might have to write code to resize it.
Although the array can contain any type of item, the indices of the array have to be integers. We cannot, for example, use a String to specify an element of an array.
itemize
In Section vector we saw how the built-in Vector class solves the first problem. As the user adds items it expands automatically. It is also possible to shrink a Vector so that the capacity is the same as the current size.
But Vectors don't help with the second problem. The indices are still integers.
That's where the Table ADT comes in. The Table is a generalization of the Vector that can use any type as an index. These generalized indices are called keys.
Just as you would use an index to access a value in an array, you use a key to access a value in a Table. So each key is associated with a value, which is why Tables are sometimes called associative arrays.
dictionary associative array key entry index
A common example of a table is a dictionary, which is a table that associates words (the keys) with their definitions (the values). Because of this example Tables are also sometimes called Dictionaries. Also, the association of a particular key with a particular value is called an entry.
The Table ADTEdit
Table ADT ADT!Table
Like the other ADTs we have looked at, Tables are defined by the set of operations they support:
description
[constructor:] Make a new, empty table.
[put:] Create an entry that associates a value with a key.
[get:] For a given key, find the corresponding value.
[containsKey:] Return true if there is an entry in the Table with the given Key.
[keys]: Return a collection that contains all the keys in the Table.
description
The built-in HashtableEdit
Hashtable class!Hashtable
Java provides an implementation of the Table ADT called Hashtable. It is in the java.util package. Later in the chapter we'll see why it is called Hashtable.
To demonstrate the use of the Hashtable we'll write a short program that traverses a String and counts the number of times each word appears.
We'll create a new class called WordCount that will build the Table and then print its contents. Naturally, each WordCount object contains a Hashtable:
verbatim public class WordCount
Hashtable ht;
public WordCount () ht = new Hashtable ();
verbatim
The only public methods for WordCount are processLine, which takes a String and adds its words to the Table, and print, which prints the results at the end.
processLine breaks the String into words using a StringTokenizer and passes each word to processWord.
verbatim
public void processLine (String s) StringTokenizer st = new StringTokenizer (s, " ,."); while (st.hasMoreTokens()) String word = st.nextToken(); processWord (word.toLowerCase ());
verbatim
The interesting work is in processWord.
verbatim
public void processWord (String word) if (ht.containsKey (word)) Integer i = (Integer) ht.get (word); Integer j = new Integer (i.intValue() + 1); ht.put (word, j); else ht.put (word, new Integer (1));
verbatim
If the word is already in the table, we get its counter, increment it, and put the new value. Otherwise, we just put a new entry in the table with the counter set to 1.
Enumeration class class!Enumeration traverse
To print the entries in the table, we need to be able to traverse the keys in the table. Fortunately, the Hashtable implementation provides a method, keys, that returns an Enumeration object we can use. Enumerations are very similar to the Iterators we saw in Section iterator. Both are abstract classes in the java.util package; you should review the documentation of both. Here's how to use keys to print the contents of the Hashtable:
verbatim
public void print () Enumeration enum = ht.keys (); while (enum.hasMoreElements ()) String key = (String) enum.nextElement (); Integer value = (Integer) ht.get (key); System.out.println (" " + key + ", " + value + " ");
verbatim
Each of the elements of the Enumeration is an Object, but since we know they are keys, we typecast them to be Strings. When we get the values from the Table, they are also Objects, but we know they are counters, so we typecast them to be Integers.
Finally, to count the words in a string:
verbatim
WordCount wc = new WordCount (); wc.processLine ("da doo ron ron ron, da doo ron ron"); wc.print ();
verbatim
The output is
verbatim
ron, 5 doo, 2 da, 2
verbatim
The elements of the Enumeration are not in any particular order. The only guarantee is that all the keys in the table will appear.
A Vector implementationEdit
implementation!Table table!vector implementation KeyValuePair
An easy way to implement the Table ADT is to use a Vector of entries, where each entry is an object that contains a key and a value. These objects are called key-value pairs.
A class definition for a KeyValuePair might look like this:
verbatim class KeyValuePair
Object key, value;
public KeyValuePair (Object key, Object value) this.key = key; this.value = value;
public String toString () return " " + key + ", " + value + " ";
verbatim
Then the implementation of Table looks like this:
verbatim public class Table
Vector v;
public Table () v = new Vector ();
verbatim
To put a new entry in the table, we just add a new KeyValuePair to the Vector:
verbatim
public void put (Object key, Object value) KeyValuePair pair = new KeyValuePair (key, value); v.add (pair);
verbatim
Then to look up a key in the Table we have to traverse the Vector and find a KeyValuePair with a matching key:
verbatim
public Object get (Object key) Iterator iterator = v.iterator (); while (iterator.hasNext ()) KeyValuePair pair = (KeyValuePair) iterator.next (); if (key.equals (pair.key)) return pair.value; return null;
verbatim
The idiom to traverse a Vector is the one we saw in Section iterator. When we compare keys, we use deep equality (the equals method) rather than shallow equality (the == operator). This allows the key class to specify the definition of equality. In our example, the keys are Strings, so it will use the built-in equals method in the String class.
traverse
For most of the built-in classes, the equals method implements deep equality. For some classes, though, it is not easy to define what that means. For example, see the documentation of equals for Doubles.
equals equality
Because equals is an object method, this implementation of get does not work if key is null. We could handle null as a special case, or we could do what the build-in Hashtable does---simply declare that null is not a legal key.
Speaking of the built-in Hashtable, it's implementation of put is a bit different from ours. If there is already an entry in the table with the given key, put updates it (give it a new value), and returns the old value (or null if there was none. Here is an implementation of their version:
verbatim
public Object put (Object key, Object value) Object result = get (key); if (result == null) KeyValuePair pair = new KeyValuePair (key, value); v.add (pair); else update (key, value); return result;
verbatim
The update method is not part of the Table ADT, so it is declared private. It traverses the vector until it finds the right KeyValuePair and then it updates the value field. Notice that we don't have to modify the Vector itself, just one of the objects it contains.
verbatim
private void update (Object key, Object value) Iterator iterator = v.iterator (); while (iterator.hasNext ()) KeyValuePair pair = (KeyValuePair) iterator.next (); if (key.equals (pair.key)) pair.value = value; break;
verbatim
The only methods we haven't implemented are containsKey and keys. The containsKey method is almost identical to get except that it returns true or false instead of an object reference or null.
As an exercise, implement keys by building a Vector of keys and returning the elements of the vector. See the documentation of elements in the Vector class for more information.
The List abstract classEdit
abstract class!List List abstract class
The java.util package defines an abstract class called List that specifies the set of operations a class has to implement in order to be considered (very abstractly) a list. This does not mean, of course, that every class that implements List has to be a linked list.
Not surprisingly, the built-in LinkedList class is a member of the List abstract class. Surprisingly, so is Vector.
The methods in the List definition include add, get and iterator. In fact, all the methods from the Vector class that we used to implement Table are defined in the List abstract class.
That means that instead of a Vector, we could have used any List class. In Table.java we can replace Vector with LinkedList, and the program still works!
This kind of type generality can be useful for tuning the performance of a program. You can write the program in terms of an abstract class like List and then test the program with several different implementations to see which yields the best performance.
Hash table implementationEdit
implementation!Table implementation!hash table hash table!implementation table!hash table implementation
The reason that the built-in implementation of the Table ADT is called Hashtable is that it uses a particularly efficient implementation of a Table called a hashtable.
Of course, the whole point of defining an ADT is that it allows us to use an implementation without knowing the details. So it is probably a bad thing that the people who wrote the Java library named this class according to its implementation rather than its ADT, but I suppose of all the bad things they did, this one is pretty small.
Anyhoo, you might be wondering what a hashtable is, and why I say it is particularly efficient. We'll start by analyzing the performance of the List implementation we just did.
Looking at the implementation of put, we see that there are two cases. If the key is not already in the table, then we only have to create a new key-value pair and add it to the List. Both of these are constant-time operations.
In the other case, we have to traverse the List to find the existing key-value pair. That's a linear time operation. For the same reason, get and containsKey are also linear.
Although linear operations are often good enough, we can do better. It turns out that there is a way to implement the Table ADT so that both put and get are constant time operations!
The key is to realize that traversing a list takes time proportional to the length of the list. If we can put an upper bound on the length of the list, then we can put an upper bound on the traverse time, and anything with a fixed upper bound is considered constant time.
analysis!hashtable
But how can we limit the length of the lists without limiting the number of items in the table? By increasing the number of lists. Instead of one long list, we'll keep many short lists.
As long as we know which list to search, we can put a bound on the amount of searching.
Hash FunctionsEdit
hash function mapping
And that's where hash functions come in. We need some way to look at a key and know, without searching, which list it will be in. We'll assume that the lists are in an array (or Vector) so we can refer to them by index.
The solution is to come up with some mapping---almost any mapping---between the key values and the indices of the lists. For every possible key there has to be a single index, but there might be many keys that map to the same index.
For example, imagine an array of 8 lists and a table made up of keys that are Integers and values that are Strings. It might be tempting to use the intValue of the Integers as indices, since they are the right type, but there are a whole lot of integers that do not fall between 0 and 7, which are the only legal indices.
modulus operator!modulus
The modulus operator provides a simple (in terms of code) and efficient (in terms of run time) way to map all the integers into the range . The expression
verbatim
key.intValue()
verbatim
is guaranteed to produce a value in the range from -7 to 7 (including both). If you take its absolute value (using Math.abs) you will get a legal index.
For other types, we can play similar games. For example, to convert a Character to an integer, we can use the built-in method Character.getNumericValue and for Doubles there is intValue.
shifted sum
For Strings we could get the numeric value of each character and add them up, or instead we might use a shifted sum. To calculate a shifted sum, alternate between adding new values to the accumulator and shifting the accumulator to the left. By ``shift to the left I mean ``multiply by a constant.
To see how this works, take the list of numbers
and compute their shifted sum as
follows. First, initialize the accumulator to 0. Then,
enumerate
Multiply the accumulator by 10.
Add the next element of the list to the accumulator.
Repeat until the list is finished.
enumerate
As an exercise, write a method that calculates the shifted sum of the numeric values of the characters in a String using a multiplier of 32.
For each type, we can come up with a function that takes values of that type and generates a corresponding integer value. These functions are called hash functions, because they often involve making a hash of the components of the object. The integer value for each object is called its hash code.
There is one other way we might generate a hash code for Java objects. Every Java object provides a method called hashCode that returns an integer that corresponds to that object. For the built-in types, the hashCode method is implemented so that if two objects contain the same data, they will have the same hash code (as in deep equality). The documentation of these methods explains what the hash function is. You should check them out.
deep equality hash function hash code
For user-defined types, it is up to the implementor to provide an appropriate hash function. The default hash function, provided in the Object class, often uses the location of the object to generate a hash code, so its notion of ``sameness is shallow equality. Most often when we are searching a hash table for a key, shallow equality is not what we want.
Regardless of how the hash code is generated, the last step is to use modulus and absolute value to map the hash code into the range of legal indices.
Resizing a hash tableEdit
resizing hash table!resizing
Let's review. A Hash table consists of an array (or Vector) of Lists, where each List contains a small number of key-value pairs. To add a new entry to a table, we calculate the hash code of the new key and add the entry to the corresponding List.
To look up a key, we hash it again and search the corresponding list. If the lengths of the lists are bounded then the search time is bounded.
So how do we keep the lists short? Well, one goal is to keep them as balanced as possible, so that there are no very long lists at the same time that others are empty. This is not easy to do perfectly---it depends on how well we chose the hash function---but we can usually do a pretty good job.
Even with perfect balance, the average list length grows linearly with the number of entries, and we have to put a stop to that.
The solution is to keep track of the average number of entries per list, which is called the load factor; if the load factor gets too high, we have to resize the table.
load factor rehashing
To resize, we create a new table, usually twice as big as the original, take all the entries out of the old one, hash them again, and put them in the new table. Usually we can get away with using the same hash function; we just use a different value for the modulus operator.
Performance of resizingEdit
analysis!Hashtable
How long does it take to resize the table? Clearly it is linear with the number of entries. That means that most of the time put takes constant time, but every once in a while ---when we resize---it takes linear time.
At first that sounds bad. Doesn't that undermine my claim that we can perform put in constant time? Well, frankly, yes. But with a little wheedling, I can fix it.
Since some put operations take longer than others, let's figure out the average time for a put operation. The average is going to be , the constant time for a simple put, plus an additional term of , the percentage of the time I have to resize, times , the cost of resizing.
equation t(n) = c + p kn equation
I don't know what and are, but we can figure out what
is. Imagine that we have just resized the hash table by
doubling its size. If there are entries, then we can add an addition entries before we have to resize again. So the percentage of the time we have to resize is .
Plugging into the equation, we get
equation t(n) = c + 1/n kn = c + k equation
In other words, is constant time!
GlossaryEdit
table entry key value dictionary associative array hash table hash function hash code shifted sum load factor
description
[table:] An ADT that defines operations on a collection of entries.
[entry:] An element in a table that contains a key-value pair.
[key:] An index, of any type, used to look up values in a table.
[value:] An element, of any type, stored in a table.
[dictionary:] Another name for a table.
[associative array:] Another name for a dictionary.
[hash table:] A particularly efficient implementation of a table.
[hash function:] A function that maps values of a certain type onto integers.
[hash code:] The integer value that corresponds to a given value.
[shifted sum:] A simple hash function often used for compounds objects like Strings.
[load factor:] The number of entries in a hashtable divided by the number of lists in the hashtable; i.e. the average number of entries per list.
description | http://en.m.wikibooks.org/wiki/The_Way_of_the_Java/Table | CC-MAIN-2014-52 | refinedweb | 3,165 | 64.61 |
20110331
	+ various build-fixes for the rpm/dpkg scripts.
	+ add "--enable-rpath-link" option to Ada95/configure, to allow packages to suppress the rpath feature which is normally used for the in-tree build of sample programs.
	+ corrected definition of libdir variable in Ada95/src/Makefile.in, needed for rpm script.
	+ add "--with-shared" option to Ada95/configure script, to allow making the C-language parts of the binding use appropriate compiler options if building a shared library with gnat.

20110329
	> portability fixes for Ada95 binding:
	+ add configure check to ensure that SIGINT works with gnat.  This is needed for the "rain" sample program.  If SIGINT does not work, omit that sample program.
	+ correct typo in check of $PKG_CONFIG variable in Ada95/configure
	+ add ncurses_compat.c, to supply functions used in the Ada95 binding which were added in 5.7 and later.
	+ modify sed expression in CF_NCURSES_ADDON to eliminate a dependency upon GNU sed.

20110326
	+ add special check in Ada95/configure script for ncurses6 reentrant code.
	+ regen Ada html documentation.
	+ build-fix for Ada shared libraries versus the varargs workaround.
	+ add rpm and dpkg scripts for Ada95 and test directories, for test builds.
	+ update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and CF_X_ATHENA_LIBS.
	+ add configure check to determine if gnat's project feature supports libraries, i.e., collections of .ali files.
	+ make all dereferences in Ada95 samples explicit.
	+ fix typo in comment in lib_add_wch.c (patch by Petr Pavlu).
	+ add configure check for, ifdef's for math.h which is in a separate package on Solaris and potentially not installed (report by Petr Pavlu).
	> fixes for Ada95 binding (Nicolas Boulenguez):
	+ improve type-checking in Ada95 by eliminating a few warning-suppress pragmas.
	+ suppress unreferenced warnings.
	+ make all dereferences in binding explicit.

20110319
	+ regen Ada html documentation.
	+ change order of -I options from ncurses*-config script when the --disable-overwrite option was used, so that the subdirectory include is listed first.
	+ modify the make-tar.sh scripts to add a MANIFEST and NEWS file.
	+ modify configure script to provide value for HTML_DIR in Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is distributed separately (report by Nicolas Boulenguez).
	+ modify configure script to add -g and/or -O3 to ADAFLAGS if the CFLAGS for the build has these options.
	+ amend change from 20070324, to not add 1 to the result of getmaxx and getmaxy in the Ada binding (report by Nicolas Boulenguez for thread in comp.lang.ada).
	+ build-fix Ada95/samples for gnat 4.5
	+ spelling fixes for Ada95/samples/explain.txt
	> fixes for Ada95 binding (Nicolas Boulenguez):
	+ add item in Trace_Attribute_Set corresponding to TRACE_ATTRS.
	+ add workaround for binding to set_field_type(), which uses varargs.  The original binding from 990220 relied on the prevalent implementation of varargs which did not support or need va_copy().
	+ add dependency on gen/Makefile.in needed for *-panels.ads
	+ add Library_Options to library.gpr
	+ add Languages to library.gpr, for gprbuild

20110307
	+ revert changes to limit-checks from 20110122 (Debian #616711).
	> minor type-cleanup of Ada95 binding (Nicolas Boulenguez):
	+ corrected a minor sign error in a field of Low_Level_Field_Type, to conform to form.h.
	+ replaced C_Int by Curses_Bool as return type for some callbacks, see fieldtype(3FORM).
	+ modify samples/sample-explain.adb to provide explicit message when explain.txt is not found.

20110305
	+ improve makefiles for Ada95 tree (patch by Nicolas Boulenguez).
	+ fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes for compiler warnings (report by Nicolas Boulenguez).
	+ modify Ada95/gen/gen.c to declare unused bits in generated layouts, needed to compile when chtype is 64-bits using gnat 4.4.5

20110226	5.8 release for upload to

20110226
	+ update release notes, for 5.8.
	+ regenerated html manpages.
	+ change open() in _nc_read_file_entry() to fopen() for consistency with write_file().
	+ modify misc/run_tic.in to create parent directory, in case this is a new install of hashed database.
	+ fix typo in Ada95/mk-1st.awk which causes error with original awk.

20110220
	+ configure script rpath fixes from xterm #269.
	+ workaround for cygwin's non-functional features.h, to force ncurses' configure script to define _XOPEN_SOURCE_EXTENDED when building wide-character configuration.
	+ build-fix in run_tic.sh for OS/2 EMX install
	+ add cons25-debian entry (patch by Brian M Carlson, Debian #607662).

20110212
	+ regenerated html manpages.
	+ use _tracef() in show_where() function of tic, to work correctly with special case of trace configuration.

20110205
	+ add xterm-utf8 entry as a demo of the U8 feature -TD
	+ add U8 feature to denote entries for terminal emulators which do not support VT100 SI/SO when processing UTF-8 encoding -TD
	+ improve the NCURSES_NO_UTF8_ACS feature by adding a check for an extended terminfo capability U8 (prompted by mailing list discussion).

20110122
	+ start documenting interface changes for upcoming 5.8 release.
	+ correct limit-checks in derwin().
	+ correct limit-checks in newwin(), to ensure that windows have nonzero size (report by Garrett Cooper).
	+ fix a missing "weak" declaration for pthread_kill (patch by Nicholas Alcock).
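The U8 capability added under 20110205 is an extended numeric capability: an entry can advertise that its emulator does not honor VT100 SI/SO character-set switching in a UTF-8 locale. An illustrative terminfo fragment (the exact xterm-utf8 entry in terminfo.src may be worded differently):

```
# U8#1 tells ncurses the emulator cannot do VT100 SI/SO line-drawing
# under UTF-8, so NCURSES_NO_UTF8_ACS-style Unicode fallback is used.
xterm-utf8|xterm demo of the U8 feature,
	U8#1, use=xterm,
```

Because U8 is a user-defined capability, such an entry has to be compiled with "tic -x" and displayed with "infocmp -x".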
	+ improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted by discussion with Kevin Martin).

20110115
	+ modify Ada95/configure script to make the --with-curses-dir option work without requiring the --with-ncurses option.
	+ modify test programs to allow them to be built with NetBSD curses.
	+ document thick- and double-line symbols in curs_add_wch.3x manpage.
	+ document WACS_xxx constants in curs_add_wch.3x manpage.
	+ fix some warnings for clang 2.6 "--analyze"
	+ modify Ada95 makefiles to make html-documentation with the project file configuration if that is used.
	+ update config.guess, config.sub

20110108
	+ regenerated html manpages.
	+ minor fixes to enable lint when trace is not enabled, e.g., with clang --analyze.
	+ fix typo in man/default_colors.3x (patch by Tim van der Molen).
	+ update ncurses/llib-lncurses*

20110101
	+ fix remaining strict compiler warnings in ncurses library ABI=5, except those dealing with function pointers, etc.

20101225
	+ modify nc_tparm.h, adding guards against repeated inclusion, and allowing TPARM_ARG to be overridden.
	+ fix some strict compiler warnings in ncurses library.

20101211
	+ suppress ncv in screen entry, allowing underline (patch by Alejandro R Sedeno).
	+ also suppress ncv in konsole-base -TD
	+ fixes in wins_nwstr() and related functions to ensure that special characters, i.e., control characters are handled properly with the wide-character configuration.
	+ correct a comparison in wins_nwstr() (Redhat #661506).
	+ correct help-messages in some of the test-programs, which still referred to quitting with 'q'.

20101204
	+ add special case to _nc_infotocap() to recognize the setaf/setab strings from xterm+256color and xterm+88color, and provide a reduced version which works with termcap.
	+ remove obsolete emacs "Local Variables" section from documentation (request by Sven Joachim).
	+ update doc/html/index.html to include NCURSES-Programming-HOWTO.html (report by Sven Joachim).

20101128
	+ modify test/configure and test/Makefile.in to handle this special case of building within a build-tree (Debian #34182):
	  mkdir -p build && cd build && ../test/configure && make

20101127
	+ miscellaneous build-fixes for Ada95 and test-directories when built out-of-tree.
	+ use VPATH in makefiles to simplify out-of-tree builds (Debian #34182).
	+ fix typo in rmso for tek4106 entry -Goran Weinholt

20101120
	+ improve checks in test/configure for X libraries, from xterm #267 changes.
	+ modify test/configure to allow it to use the build-tree's libraries e.g., when using that to configure the test-programs without the rpath feature (request by Sven Joachim).
	+ repurpose "gnome" terminfo entries as "vte", retaining "gnome" items for compatibility, but generally deprecating those since the VTE library is what actually defines the behavior of "gnome", etc., since 2003 -TD

20101113
	+ compiler warning fixes for test programs.
	+ various build-fixes for test-programs with pdcurses.
	+ updated configure checks for X packages in test/configure from xterm #267 changes.
	+ add configure check to gnatmake, to accommodate cygwin.

20101106
	+ correct list of sub-directories needed in Ada95 tree for building as a separate package.
	+ modify scripts in test-directory to improve builds as a separate package.

20101023
	+ correct parsing of relative tab-stops in tabs program (report by Philip Ganchev).
	+ adjust configure script so that "t" is not added to library suffix when weak-symbols are used, allowing the pthread configuration to more closely match the non-thread naming (report by Werner Fink).
+ modify configure check for tic program, used for fallbacks, to give a warning if not found. This makes it simpler to use additional scripts to bootstrap the fallbacks code using tic from the build tree (report by Werner Fink).
+ fix several places in configure script using ${variable-value} form.
+ modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders which do not support selectively linking against static libraries (report by John P. Hartmann).
+ fix an unescaped dash in man/tset.1 (report by Sven Joachim).

20101009
+ correct comparison used for setting 16-colors in linux-16color entry (Novell #644831) -TD
+ improve linux-16color entry, using "dim" for color-8, which makes it gray rather than black like color-0 -TD
+ drop misc/ncu-indent and misc/jpf-indent; they are provided by an external package "cindent".

20101002
+ improve linkages in html manpages, adding references to the newer pages, e.g., *_variables, curs_sp_funcs, curs_threads.
+ add checks in tic for inconsistent cursor-movement controls, and for inconsistent printer-controls.
+ fill in no-parameter forms of cursor-movement where a parameterized form is available -TD
+ fill in missing cursor controls where the form of the controls is ANSI -TD
+ fix inconsistent punctuation in form_variables manpage (patch by Sven Joachim).
+ add parameterized cursor-controls to linux-basic (report by Dae) -TD
> patch by Juergen Pfeifer:
+ document how to build 32-bit libraries in README.MinGW
+ fixes to filename computation in mk-dlls.sh.in
+ use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven Joachim).
+ add a check in mk-dlls.sh.in to obtain the size of a pointer to distinguish between 32-bit and 64-bit hosts. The result is stored in mingw_arch.

20100925
+ add "XT" capability to entries for terminals that support both xterm-style mouse- and title-controls, for "screen", which special-cases TERM beginning with "xterm" or "rxvt" -TD
> patch by Juergen Pfeifer:
+ use 64-Bit MinGW toolchain (recommended package from TDM, see README.MinGW).
+ support pthreads when using the TDM MinGW toolchain.

20100918
+ regenerated html manpages.
+ minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
+ add manpage for sp-funcs.
+ add sp-funcs to test/listused.sh, for documentation aids.

20100911
+ add manpages summarizing public variables of curses-, terminfo- and form-libraries.
+ minor fixes to manpages for consistency (patch by Jason McIntyre).
+ modify tic's -I/-C dump to reformat acsc strings into canonical form (sorted, unique mapping) (cf: 971004).
+ add configure check for pthread_kill(), needed for some old platforms.

20100904
+ add configure option --without-tests, to suppress building test programs (request by Frederic L W Meunier).

20100828
+ modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
+ add check in terminfo source-reader to provide a more informative message when someone attempts to run tic on a compiled terminal description (prompted by Debian #593920).
+ note in infotocap and captoinfo manpages that they read terminal descriptions from text-files (Debian #593920).
+ improve acsc string for vt52, show arrow keys (patch by Benjamin Sittler).

20100814
+ document in manpages that "mv" functions first use wmove() to check the window pointer and whether the position lies within the window (suggested by Poul-Henning Kamp).
+ fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch by Tim van der Molen).
+ modify configure script to transform library names for tic- and tinfo-libraries so that those build properly with the Mac OS X shared library configuration.
+ modify configure script to ensure that it removes the conftest.dSYM directory left over on checks with Mac OS X.
+ modify configure script to clean up after check for symbolic links.

20100807
+ correct a typo in mk-1st.awk (patch by Gabriele Balducci) (cf: 20100724).
+ improve configure checks for location of tic and infocmp programs used for installing database and for generating fallback data, e.g., for cross-compiling.
+ add Markus Kuhn's wcwidth function for compiling MinGW.
+ add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
+ modify initialization check for win32con driver to eliminate need for special case for TERM "unknown", using terminal database if available (prompted by discussion with Roumen Petrov).
+ for MinGW port, ensure that terminal driver is set up if tgetent() is called (patch by Roumen Petrov).
+ document tabs "-0" and "-8" options in manpage.
+ fix Debian "lintian" issues with manpages reported in

20100724
+ add a check in tic for missing set_tab if clear_all_tabs given.
+ improve use of symbolic links in makefiles by using "-f" option if it is supported, to eliminate temporary removal of the target (prompted by)
+ minor improvement to test/ncurses.c, reset color pairs in 'd' test after exit from 'm' main-menu command.
+ improved ncu-indent, from mawk changes, allows more than one of GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.

20100717
+ add hard-reset for rs2 to wsvt25 to help ensure that reset ends the alternate character set (patch by Nicholas Marriott).
+ remove tar-copy.sh and related configure/Makefile chunks, since the Ada95 binding is now installed using rules in Ada95/src.
20100703
+ continue integrating changes to use gnatmake project files in Ada95.
+ add/use configure check to turn on project rules for Ada95/src.
+ revert the vfork change from 20100130, since it does not work.

20100626
+ continue integrating changes to use gnatmake project files in Ada95.
+ old gnatmake (3.15) does not produce libraries using project-file; work around by adding script to generate alternate makefile.

20100619
+ continue integrating changes to use gnatmake project files in Ada95.
+ add configure --with-ada-sharedlib option, for the test_make rule.
+ move Ada95-related logic into aclocal.m4, since additional checks will be needed to distinguish old/new implementations of gnat.

20100612
+ start integrating changes to use gnatmake project files in Ada95 tree.
+ add test_make / test_clean / test_install rules in Ada95/src.
+ change install-path for adainclude directory to /usr/share/ada (was /usr/lib/ada).
+ update Ada95/configure.
+ add mlterm+256color entry, for mlterm 3.0.0 -TD
+ modify test/configure to use macros to ensure consistent order of updating the LIBS variable.

20100605
+ change search order of options for Solaris in CF_SHARED_OPTS, to work with 64-bit compiles.
+ correct quoting of assignment in CF_SHARED_OPTS case for aix (cf: 20081227).

20100529
+ regenerated html documentation.
+ modify test/configure to support pkg-config for checking X libraries used by PDCurses.
+ add/use configure macro CF_ADD_LIB to force consistency of assignments to $LIBS, etc.
+ fix configure script for combining --with-pthread and --enable-weak-symbols options.

20100522
+ correct cross-compiling configure check for CF_MKSTEMP macro, by adding a check for the cache variable set by AC_CHECK_FUNC (report by Pierre Labastie).
+ simplify include-dependencies of make_hash and make_keys, to reduce the need for setting BUILD_CPPFLAGS in cross-compiling when the build- and target-machines differ.
+ repair broken-linker configuration by restoring a definition of SP variable to curses.priv.h, and adjusting for cases where sp-funcs are used.
+ improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment variable to override (prompted by report by Pablo Cazallas).

20100515
+ add configure option --enable-pthreads-eintr to control whether the new EINTR feature is enabled.
+ modify logic in pthread configuration to allow EINTR to interrupt a read operation in wgetch() (Novell #540571, patch by Werner Fink).
+ drop mkdirs.sh, use "mkdir -p".
+ add configure option --disable-libtool-version, to use the "-version-number" feature which was added in libtool 1.5 (report by Peter Haering). The default value for the option uses the newer feature, which makes libraries generated using libtool compatible with the standard builds of ncurses.
+ updated test/configure to match configure script macros.
+ fixes for configure script from lynx changes:
  + improve CF_FIND_LINKAGE logic for the case where a function is found in predefined libraries.
  + revert part of change to CF_HEADER (cf: 20100424).

20100501
+ correct limit-check in wredrawln, accounting for begy/begx values (patch by David Benjamin).
+ fix most compiler warnings from clang.
+ amend build-fix for OpenSolaris, to ensure that a system header is included in curses.h before testing feature symbols, since they may be defined by that route.

20100424
+ fix some strict compiler warnings in ncurses library.
+ modify configure macro CF_HEADER_PATH to not look for variations in the predefined include directories.
+ improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work with gcc 4.x's c89 alias, which gives warning messages for cases where older versions would produce an error.

20100417
+ modify _nc_capcmp() to work with cancelled strings.
+ correct translation of "^" in _nc_infotocap(), used to transform terminfo to termcap strings.
+ add configure --disable-rpath-hack, to allow disabling the feature which adds rpath options for libraries in unusual places.
+ improve CF_RPATH_HACK_2 by checking if the rpath option for a given directory was already added.
+ improve CF_RPATH_HACK_2 by using ldd to provide a standard list of directories (which will be ignored).

20100410
+ improve win_driver.c handling of mouse:
  + discard motion events
  + avoid calling _nc_timed_wait when there is a mouse event
  + handle 4th and "rightmost" buttons.
+ quote substitutions in CF_RPATH_HACK_2 configure macro, needed for cases where there are embedded blanks in the rpath option.

20100403
+ add configure check for exctags vs ctags, to work around pkgsrc.
+ simplify logic in _nc_get_screensize() to make it easier to see how environment variables may override system- and terminfo-values (prompted by discussion with Igor Bujna).
+ make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
+ improve handling of color-pairs embedded in attributes for the extended-colors configuration.
+ modify MKlib_gen.sh to build link_test with sp-funcs.
+ build-fixes for OpenSolaris aka Solaris 11, for wide-character configuration as well as for rpath feature in *-config scripts.

20100327
+ refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more reusable.
+ improve configure CF_REGEX, similar fixes.
+ improve configure CF_FIND_LINKAGE, adding a check between system (default) and explicit paths, where we can find the entrypoint in the given library.
+ add check if Gpm_Open() returns a -2, e.g., for "xterm". This is normally suppressed but can be overridden using $NCURSES_GPM_TERMS. Ensure that Gpm_Close() is called in this case.

20100320
+ rename atari and st52 terminfo entries to atari-old, st52-old; use newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan Hourihane).

20100313
+ modify install-rule for manpages so that *-config manpages will install when building with --srcdir (report by Sven Joachim).
+ modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks option is not the same as --disable-leaks (GenToo #305889).
+ modify #define's for build-compiler to suppress cchar_t symbol from compile of make_hash and make_keys, improving cross-compilation of ncursesw (report by Bernhard Rosenkraenzer).
+ modify CF_MAN_PAGES configure macro to replace all occurrences of TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders Kaseorg).

20100306
+ generate manpages for the *-config scripts, adapted from help2man (suggested by Sven Joachim).
+ use va_copy() in _nc_printf_string() to avoid conflicting use of va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
+ add Ada95/configure script, to use in tar-file created by Ada95/make-tar.sh
+ fix typo in wresize.3x (patch by Tim van der Molen).
+ modify screen-bce.XXX entries to exclude ech, since screen's color model does not clear with color for that feature -TD

20100220
+ add make-tar.sh scripts to Ada95 and test subdirectories to help with making those separately distributable.
+ build-fix for static libraries without dlsym (Debian #556378).
+ fix a syntax error in man/form_field_opts.3x (patch by Ingo Schwarze).

20100213
+ add several screen-bce.XXX entries -TD

20100206
+ update mrxvt terminfo entry -TD
+ modify win_driver.c to support mouse single-clicks.
+ correct name for termlib in ncurses*-config, e.g., if it is renamed to provide a single file for ncurses/ncursesw libraries (patch by Miroslav Lichvar).

20100130
+ use vfork in test/ditto.c if available (request by Mike Frysinger).
+ miscellaneous cleanup of manpages.
+ fix typo in curs_bkgd.3x (patch by Tim van der Molen).
+ build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
+ for term-driver configuration, ensure that the driver pointer is initialized in setupterm so that terminfo/termcap programs work.
+ amend fix for Debian #542031 to ensure that wattrset() returns only OK or ERR, rather than the attribute value (report by Miroslav Lichvar).
+ reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making _nc_screen_of() compatible between normal/wide libraries again (patch by Miroslav Lichvar).
+ review/fix include-dependencies in modules files (report by Miroslav Lichvar).

20100116
+ modify win_driver.c to initialize acs_map for win32 console, so that line-drawing works.
+ modify win_driver.c to initialize TERMINAL struct so that programs such as test/lrtest.c and test/ncurses.c which test string capabilities can run.
+ modify term-driver modules to eliminate forward-reference declarations.

20100109
+ modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS consistently to add new -D's while removing duplicates.
+ modify a few configure macros to consistently put new options before older ones in the list.
+ add tiparm(), based on review of X/Open Curses Issue 7.
+ minor documentation cleanup.
+ update config.guess, config.sub from
  (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
+ minor improvement to tic's checking of similar SGR's to allow for the most common case of SGR 0.
+ modify getmouse() to act as its documentation implied, returning on each call the preceding event until none are left. When no more events remain, it will return ERR.

20091227
+ change order of lookup in progs/tput.c, looking for terminfo data first. This fixes a confusion between termcap "sg" and terminfo "sgr" or "sgr0", originally from 990123 changes, but exposed by 20091114 fixes for hashing. With this change, only "dl" and "ed" are ambiguous (Mandriva #56272).

20091226
+ add bterm terminfo entry, based on bogl 0.1.18 -TD
+ minor fix to rxvt+pcfkeys terminfo entry -TD
+ build-fixes for Ada95 tree for gnat 4.4 "style".

20091219
+ remove old check in mvderwin() which prevented moving a derived window whose origin happened to coincide with its parent's origin (report by Katarina Machalkova).
+ improve test/ncurses.c to put mouse droppings in the proper window.
+ update minix terminfo entry -TD
+ add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler).

20091212
+ correct transfer of multicolumn characters in multirow field_buffer(), which stopped at the end of the first row due to filling of unused entries in a cchar_t array with nulls.
+ updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta).
+ modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character nulls.
+ use strdup() in set_menu_mark(), restore .marklen struct member on failure.
+ eliminate clause 3 from the UCB copyrights in read_termcap.c and tset.c per
  (patch by Nicholas Marriott).
+ replace a malloc in tic.c with strdup, checking for failure (patch by Nicholas Marriott).
+ update config.guess, config.sub from

20091205
+ correct layout of working window used to extract data in wide-character configuration by set_field_buffer (patch by Rafael Garrido Fernandez).
+ improve some limit-checks related to filename length in reading and writing terminfo entries.
+ ensure that filename is always filled in when attempting to read a terminfo entry, so that infocmp can report the filename (patch by Nicholas Marriott).

20091128
+ modify mk-1st.awk to allow tinfo library to be built when term-driver is enabled.
+ add error-check to configure script to ensure that sp-funcs is enabled if term-driver is, since some internal interfaces rely upon this.

20091121
+ fix case where progs/tput is used while sp-funcs is configured; this requires save/restore of out-character function from _nc_prescreen rather than the SCREEN structure (report by Charles Wilson).
+ fix typo in man/curs_trace.3x which caused incorrect symbolic links.
+ improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT.

20091114
+ updated man/curs_trace.3x
+ limit hashing for termcap-names to 2 characters (Ubuntu #481740).
+ change a variable name in lib_newwin.c to make it clearer which value is being freed on error (patch by Nicholas Marriott).

20091107
+ improve test/ncurses.c color-cycling test by reusing attribute- and color-cycling logic from the video-attributes screen.
+ add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form library which help make it compatible with interop applications (patch by Juergen Pfeifer).
+ add configure option --enable-interop, for integrating changes for generic/interop support to form-library by Juergen Pfeifer.

20091031
+ modify use of $CC environment variable, which is defined by X/Open as a curses feature, to ignore it if it is not a single character (prompted by discussion with Benjamin C W Sittler).
+ add START_TRACE in slk_init.
+ fix a regression in _nc_ripoffline which made test/ncurses.c not show soft-keys, broken in 20090927 merging.
+ change initialization of "hidden" flag for soft-keys from true to false, broken in 20090704 merging (Ubuntu #464274).
+ update nsterm entries (patch by Benjamin C W Sittler, prompted by discussion with Fabian Groffen in GenToo #206201).
+ add test/xterm-256color.dat

20091024
+ quiet some pedantic gcc warnings.
+ modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a SIGWINCH, and discard that value, to avoid confusing the application (patch by Eygene Ryabinkin, FreeBSD bin/136223).

20091017
+ modify handling of $PKG_CONFIG_LIBDIR to use only the first item in a possibly colon-separated list (Debian #550716).

20091010
+ supply a null-terminator to buffer in _nc_viswibuf().
+ fix a sign-extension bug in unget_wch() (report by Mike Gran).
+ minor fixes to error-returns in default function for tputs, as well as in lib_screen.c.

20091003
+ add WACS_xxx definitions to wide-character configuration for thick- and double-lines (discussion with Slava Zanko).
+ remove unnecessary kcan assignment to ^C from putty (Sven Joachim).
+ add ccc and initc capabilities to xterm-16color -TD
> patch by Benjamin C W Sittler:
+ add linux-16color
+ correct initc capability of linux-c-nc end-of-range
+ similar change for dg+ccc and dgunix+ccc

20090927
+ move leak-checking for comp_captab.c into _nc_leaks_tinfo(), since that module is in libtinfo since 20090711.
+ add configure option --enable-term-driver, to allow compiling with terminal-driver. That is used in the MinGW port, and (being somewhat more complicated) is an experimental alternative to the conventional termlib internals. Currently, it requires the sp-funcs feature to be enabled.
+ completed integrating "sp-funcs" by Juergen Pfeifer in ncurses library (some work remains for forms library).

20090919
+ document return code from define_key (report by Mike Gran).
+ make some symbolic links in the terminfo directory-tree shorter (patch by Daniel Jacobowitz, forwarded by Sven Joachim).
+ fix some groff warnings in terminfo.5, etc., from recent Debian changes.
+ change ncv and op capabilities in sun-color terminfo entry to match Sun's entry for this (report by Laszlo Peter).
+ improve interix smso terminfo capability by using reverse rather than bold (report by Kristof Zelechovski).

20090912
+ add some test programs (and make these use the same special keys by sharing linedata.h functions):
  test/test_addstr.c
  test/test_addwstr.c
  test/test_addchstr.c
  test/test_add_wchstr.c
+ correct internal _nc_insert_ch() to use _nc_insert_wch() when inserting wide characters, since the wins_wch() function that it used did not update the cursor position (report by Ciprian Craciun).

20090906
+ fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not work.
+ add null-pointer checks to other opaque-functions.
+ add is_pad() and is_subwin() functions for opaque access to WINDOW (discussion with Mark Dickinson).
+ correct merge to lib_newterm.c, which broke when sp-funcs was enabled.

20090905
+ build-fix for building outside source-tree (report by Sven Joachim).
+ fix Debian lintian warning for man/tabs.1 by making section number agree with file-suffix (report by Sven Joachim).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090829
+ workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on amd64 (Debian #542031).
+ fix typo in curs_mouse.3x (Debian #429198).

20090822
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090815
+ correct use of terminfo capabilities for initializing soft-keys, broken in 20090509 merging.
+ modify wgetch() to ensure it checks SIGWINCH when it gets an error in non-blocking mode (patch by Clemens Ladisch).
+ use PATH_SEPARATOR symbol when substituting into run_tic.sh, to help with builds on non-Unix platforms such as OS/2 EMX.
+ modify scripting for misc/run_tic.sh to test configure script's $cross_compiling variable directly rather than comparing host/build compiler names (prompted by comment in GenToo #249363).
+ fix configure script option --with-database, which was coded as an enable-type switch.
+ build-fixes for --srcdir (report by Frederic L W Meunier).

20090808
+ separate _nc_find_entry() and _nc_find_type_entry() from implementation details of hash function.

20090803
+ add tabs.1 to man/man_db.renames
+ modify lib_addch.c to compensate for removal of wide-character test from unctrl() in 20090704 (Debian #539735).

20090801
+ improve discussion in INSTALL for use of system's tic/infocmp for cross-compiling and building fallbacks.
+ modify test/demo_termcap.c to correspond better to options in test/demo_terminfo.c.
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ fix logic for 'V' in test/ncurses.c tests f/F.

20090728
+ correct logic in tigetnum(), which caused tput program to treat all string capabilities as numeric (report by Rajeev V Pillai, cf: 20090711).

20090725
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090718
+ fix a null-pointer check in _nc_format_slks() in lib_slk.c, from 20070704 changes.
+ modify _nc_find_type_entry() to use hashing.
+ make CCHARW_MAX value configurable, noting that changing this would change the size of cchar_t, and would be ABI-incompatible.
+ modify test-programs, e.g., test/view.c, to address subtle differences between Tru64/Solaris and HPUX/AIX getcchar() return values.
+ modify length returned by getcchar() to count the trailing null which is documented in X/Open (cf: 20020427).
+ fixes for test programs to build/work on HPUX and AIX, etc.

20090711
+ improve performance of tigetstr, etc., by using hashing code from tic.
+ minor fixes for memory-leak checking.
+ add test/demo_terminfo, for comparison with demo_termcap.

20090704
+ remove wide-character checks from unctrl() (patch by Clemens Ladisch).
+ revise wadd_wch() and wecho_wchar() to eliminate dependency on unctrl().
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090627
+ update llib-lncurses[wt] to use sp-funcs.
+ various code-fixes to build/work with --disable-macros configure option.
+ add several new files from Juergen Pfeifer which will be used when integration of "sp-funcs" is complete. This includes a port to MinGW.
20090613
+ move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to make includes of term.h without curses.h work (report by "Nix").
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090607
+ fix a regression in lib_tputs.c, from ongoing merges.

20090606
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090530
+ fix an infinite recursion when adding a legacy-coding 8-bit value using insch() (report by Clemens Ladisch).
+ free home-terminfo string in del_curterm() (patch by Dan Weber).
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090523
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090516
+ work around antique BSD game's manipulation of stdscr, etc., versus SCREEN's copy of the pointer (Debian #528411).
+ add a cast to wattrset macro to avoid compiler warning when comparing its result against ERR (adapted from patch by Matt Kraii, Debian #528374).

20090510
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090502
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ add vwmterm terminfo entry (patch by Bryan Christ).

20090425
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090419
+ build fix for _nc_free_and_exit() change in 20090418 (report by Christian Ebert).

20090418
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090411
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). This change finishes merging for menu and panel libraries, does part of the form library.

20090404
+ suppress configure check for static/dynamic linker flags for gcc on Darwin (report by Nelson Beebe).
20090328
+ extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving function key definitions from emx-base for consistency -TD
+ correct missing final 'p' in pfkey capability of ansi.sys-old (report by Kalle Olavi Niemitalo).
+ improve test/ncurses.c 'F' test, show combining characters in color.
+ quiet a false report by cppcheck in c++/cursesw.cc by eliminating a temporary variable.
+ use _nc_doalloc() rather than realloc() in a few places in ncurses library to avoid leak in out-of-memory condition (reports by William Egert and Martin Ettl based on cppcheck tool).
+ add --with-ncurses-wrap-prefix option to test/configure (discussion with Charles Wilson).
+ use ncurses*-config scripts if available for test/configure.
+ update test/aclocal.m4 and test/configure
> patches by Charles Wilson:
+ modify CF_WITH_LIBTOOL configure check to allow unreleased libtool version numbers (e.g. which include alphabetic chars, as well as digits, after the final '.').
+ improve use of -no-undefined option for libtool by setting an intermediate variable LT_UNDEF in the configure script, and then using that in the libtool link-commands.
+ fix a missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk from 2009031 changes.
+ improve mk-1st.awk script by writing separate cases for the LIBTOOL_LINK command, depending on which library (ncurses, ticlib, termlib) is to be linked.
+ modify configure.in to allow broken-linker configurations, not just enable-reentrant, to set public wrap prefix.

20090321
+ add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to build with tic and term libraries (patch by Charles Wilson).
+ add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX (report by Charles Wilson).
+ fix definition for c++/Makefile.in's SHLIB_LIST, which did not list the form, menu or panel libraries (patch by Charles Wilson).
+ add configure option --with-wrap-prefix to allow setting the prefix for functions used to wrap global variables to something other than "_nc_" (discussion with Charles Wilson).

20090314
+ modify scripts to generate ncurses*-config and pc-files to add dependency for tinfo library (patch by Charles Wilson).
+ improve comparison of program-names when checking for linked flavors such as "reset" by ignoring the executable suffix (reports by Charles Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing list).
+ suppress configure check for static/dynamic linker flags for gcc on Solaris 10, since gcc is confused by absence of static libc, and does not switch back to dynamic mode before finishing the libraries (reports by Joel Bertrand, Alan Pae).
+ minor fixes to Intel compiler warning checks in configure script.
+ modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works.
+ modify set_curterm() to make broken-linker configuration work with changes from 20090228 (report by Charles Wilson).

20090228
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
+ modify declaration of cur_term when broken-linker is used, but enable-reentrant is not, to match pre-5.7 (report by Charles Wilson).

20090221
+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090214
+ add configure script --enable-sp-funcs to enable the new set of extended functions.
+ start integrating patches by Juergen Pfeifer:
  + add extended functions which specify the SCREEN pointer for several curses functions which use the global SP (these are incomplete; some internals work is needed to complete these).
  + add special cases to configure script for MinGW port.

20090207
  + update several configure macros from lynx changes
    + append (not prepend) to CFLAGS/CPPFLAGS
    + change variable from PATHSEP to PATH_SEPARATOR
  + improve install-rules for pc-files (patch by Miroslav Lichvar).
    + make it work with $DESTDIR
    + create the pkg-config library directory if needed.

20090124
  + modify init_pair() to allow caller to create extra color pairs beyond
    the color_pairs limit, which use default colors (request by Emanuele
    Giaquinta).
  + add misc/terminfo.tmp and misc/*.pc to "sources" rule.
  + fix typo "==" where "=" is needed in ncurses-config.in and
    gen-pkgconfig.in files (Debian #512161).

20090117
  + add -shared option to MK_SHARED_LIB when -Bsharable is used, for
    *BSD's, without which "main" might be one of the shared library's
    dependencies (report/analysis by Ken Dickey).
  + modify waddch_literal(), updating line-pointer after a multicolumn
    character is found to not fit on the current row, and wrapping is
    done.  Since the line-pointer was not updated, the wrapped
    multicolumn character was written to the beginning of the current row
    (cf: 20041023, reported by "Nick" regarding problem with ncmpc).

20090110
  + add screen.Eterm terminfo entry (GenToo #124887) -TD
  + modify adacurses-config to look for ".ali" files in the adalib
    directory.
  + correct install for Ada95, which omitted libAdaCurses.a used in
    adacurses-config
  + change install for adacurses-config to provide additional flavors
    such as adacursesw-config, for ncursesw (GenToo #167849).

20090105
  + remove undeveloped feature in ncurses-config.in for setting
    prefix variable.
  + recent change to ncurses-config.in did not take into account the
    --disable-overwrite option, which sets $includedir to the
    subdirectory and using just that for a -I option does not work - fix
    (report by Frederic L W Meunier).

20090104
  + modify gen-pkgconfig.in to eliminate a dependency on rpath when
    deciding whether to add $LIBS to --libs output; that should be shown
    for the ncurses and tinfo libraries without taking rpath into
    account.
  + fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk,
    used in static libraries (report by Marty Jack).

20090103
  + add a configure-time check to pick a suitable value for
    CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen).
  + add configure --with-pkg-config and --enable-pc-files options, along
    with misc/gen-pkgconfig.in which can be used to generate ".pc" files
    for pkg-config (request by Jan Engelhardt).
  + use $includedir symbol in misc/ncurses-config.in, add --includedir
    option.
  + change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a
    configure check to detect whether a "-" is needed before "ar"
    options.
  + update config.guess, config.sub from

20081227
  + modify mk-1st.awk to work with extra categories for tinfo library.
  + modify configure script to allow building shared libraries with gcc
    on AIX 5 or 6 (adapted from patch by Lital Natan).

20081220
  + modify to omit the opaque-functions from lib_gen.o when
    --disable-ext-funcs is used.
  + add test/clip_printw.c to illustrate how to use printw without
    wrapping.
  + modify ncurses 'F' test to demo wborder_set() with colored lines.
  + modify ncurses 'f' test to demo wborder() with colored lines.

20081213
  + add check for failure to open hashed-database needed for db4.6
    (GenToo #245370).
  + corrected --without-manpages option; previous change only suppressed
    the auxiliary rules install.man and uninstall.man
  + add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from
    GenToo #250454).
  + fixes from NetBSD port at
    patch-ac (build-fix for DragonFly)
    patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config).
  + improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH
    by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the
    search-lists.
  + correct title string for keybound manpage (patch by Frederic Culot,
    OpenBSD documentation/6019).

20081206
  + move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to
    work for progs/clear, progs/tabs, etc.
  + correct buffer-size after internal resizing of wide-character
    set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
  + add "-i" option to test/filter.c to tell it to use initscr() rather
    than newterm(), to investigate report on comp.unix.programmer that
    ncurses would clear the screen in that case (it does not - the issue
    was xterm's alternate screen feature).
  + add check in mouse-driver to disable connection if GPM returns a
    zero, indicating that the connection is closed (Debian #506717,
    adapted from patch by Samuel Thibault).

20081129
  + improve a workaround in adding wide-characters, when a control
    character is found.  The library (cf: 20040207) uses unctrl() to
    obtain a printable version of the control character, but was not
    passing color or video attributes.
  + improve test/ncurses.c 'a' test, using unctrl() more consistently to
    display meta-characters.
  + turn on _XOPEN_CURSES definition in curses.h
  + add eterm-color entry (report by Vincent Lefevre) -TD
  + correct use of key_name() in test/ncurses.c 'A' test, which only
    displays wide-characters, not key-codes since 20070612 (report by
    Ricardo Cantu).

20081122
  + change _nc_has_mouse() to has_mouse(), reflect its use in C++ and
    Ada95 (patch by Juergen Pfeifer).
  + document in TO-DO an issue with Cygwin's package for GNAT (report
    by Mike Dennison).
  + improve error-checking of command-line options in "tabs" program.

20081115
  + change several terminfo entries to make consistent use of ANSI
    clear-all-tabs -TD
  + add "tabs" program (prompted by Debian #502260).
  + add configure --without-manpages option (request by Mike Frysinger).

20081102 5.7 release for upload to

20081025
  + add a manpage to discuss memory leaks.
  + add support for shared libraries for QNX (other than libtool, which
    does not work well on that platform).
  + build-fix for QNX C++ binding.

20081018
  + build-fixes for OS/2 EMX.
  + modify form library to accept control characters such as newline
    in set_field_buffer(), which is compatible with Solaris (report by
    Nit Khair).
  + modify configure script to assume --without-hashed-db when
    --disable-database is used.
  + add "-e" option in ncurses/Makefile.in when generating source-files
    to force earlier exit if the build environment fails unexpectedly
    (prompted by patch by Adrian Bunk).
  + change configure script to use CF_UTF8_LIB, improved variant of
    CF_LIBUTF8.

20081012
  + add teraterm4.59 terminfo entry, use that as primary teraterm entry,
    rename original to teraterm2.3 -TD
  + update "gnome" terminfo to 2.22.3 -TD
  + update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD
  + add "aterm" terminfo -TD
  + add "linux2.6.26" terminfo -TD
  + add logic to tic for cancelling strings in user-defined capabilities,
    overlooked til now.

20081011
  + regenerated html documentation.
  + add -m and -s options to test/keynames.c and test/key_names.c to test
    the meta() function with keyname() or key_name(), respectively.
  + correct return value of key_name() on error; it is null.
  + document some unresolved issues for rpath and pthreads in TO-DO.
  + fix a missing prototype for ioctl() on OpenBSD in tset.c
  + add configure option --disable-tic-depends to make explicit whether
    tic library depends on ncurses/ncursesw library, amends change from
    20080823 (prompted by Debian #501421).

20081004
  + some build-fixes for configure --disable-ext-funcs (incomplete, but
    works for C/C++ parts).
  + improve configure-check for awks unable to handle large strings, e.g.
    AIX 5.1 whose awk silently gives up on large printf's.

20080927
  + fix build for --with-dmalloc by workaround for redefinition of
    strndup between string.h and dmalloc.h
  + fix build for --disable-sigwinch
  + add environment variable NCURSES_GPM_TERMS to allow override to use
    GPM on terminals other than "linux", etc.
  + disable GPM mouse support when $TERM does not happen to contain
    "linux", since Gpm_Open() no longer limits its assertion to terminals
    that it might handle, e.g., within "screen" in xterm.
  + reset mouse file-descriptor when unloading GPM library (report by
    Miroslav Lichvar).
  + fix build for --disable-leaks --enable-widec --with-termlib
  > patch by Juergen Pfeifer:
  + use improved initialization for soft-label keys in Ada95 sample code.
  + discard internal symbol _nc_slk_format (unused since 20080112).
  + move call of slk_paint_info() from _nc_slk_initialize() to
    slk_intern_refresh(), improving initialization.

20080925
  + fix bug in mouse code for GPM from 20080920 changes (reported in
    Debian #500103, also Miroslav Lichvar).

20080920
  + fix shared-library rules for cygwin with tic- and tinfo-libraries.
  + fix a memory leak when failure to connect to GPM.
  + correct check for notimeout() in wgetch() (report on linux.redhat
    newsgroup by FurtiveBertie).
  + add an example warning-suppression file for valgrind,
    misc/ncurses.supp (based on example from Reuben Thomas)

20080913
  + change shared-library configuration for OpenBSD, make rpath work.
  + build-fixes for using libutf8, e.g., on OpenBSD 3.7

20080907
  + corrected fix for --enable-weak-symbols (report by Frederic L W
    Meunier).

20080906
  + corrected gcc options for building shared libraries on IRIX64.
  + add configure check for awk programs unable to handle big-strings,
    use that to improve the default for --enable-big-strings option.
  + makefile-fixes for --enable-weak-symbols (report by Frederic L W
    Meunier).
  + update test/configure script.
  + adapt ifdef's from library to make test/view.c build when mbrtowc()
    is unavailable, e.g., with HPUX 10.20.
  + add configure check for wcsrtombs, mbsrtowcs, which are used in
    test/ncurses.c, and use wcstombs, mbstowcs instead if available,
    fixing build of ncursesw for HPUX 11.00

20080830
  + fixes to make Ada95 demo_panels() example work.
  + modify Ada95 'rain' test program to accept keyboard commands like the
    C-version.
  + modify BeOS-specific ifdef's to build on Haiku (patch by Scott
    Mccreary).
  + add configure-check to see if the std namespace is legal for cerr
    and endl, to fix a build issue with Tru64.
  + consistently use NCURSES_BOOL in lib_gen.c
  + filter #line's from lib_gen.c
  + change delimiter in MKlib_gen.sh from '%' to '@', to avoid
    substitution by IBM xlc to '#' as part of its extensions to digraphs.
  + update config.guess, config.sub from
    (caveat - its maintainer removed support for older Linux systems).

20080823
  + modify configure check for pthread library to work with OSF/1 5.1,
    which uses #define's to associate its header and library.
  + use pthread_mutexattr_init() for initializing pthread_mutexattr_t,
    makes threaded code work on HPUX 11.23
  + fix a bug in demo_menus in freeing menus (cf: 20080804).
  + modify configure script for the case where tic library is used (and
    possibly renamed) to remove its dependency upon ncurses/ncursesw
    library (patch by Dr Werner Fink).
  + correct manpage for menu_fore() which gave wrong default for
    the attribute used to display a selected entry (report by Mike Gran).
  + add Eterm-256color, Eterm-88color and rxvt-88color (prompted by
    Debian #495815) -TD

20080816
  + add configure option --enable-weak-symbols to turn on new feature.
  + add configure-check for availability of weak symbols.
  + modify linkage with pthread library to use weak symbols so that
    applications not linked to that library will not use the mutexes,
    etc.  This relies on gcc, and may be platform-specific (patch by Dr
    Werner Fink).
  + add note to INSTALL to document limitation of renaming of tic library
    using the --with-ticlib configure option (report by Dr Werner Fink).
  + document (in manpage) why tputs does not detect I/O errors (prompted
    by comments by Samuel Thibault).
  + fix remaining warnings from Klocwork report.

20080804
  + modify _nc_panelhook() data to account for a permanent memory leak.
  + fix memory leaks in test/demo_menus
  + fix most warnings from Klocwork tool (report by Larry Zhou).
  + modify configure script CF_XOPEN_SOURCE macro to add case for
    "dragonfly" from xterm #236 changes.
  + modify configure script --with-hashed-db to let $LIBS override the
    search for the db library (prompted by report by Samson Pierre).

20080726
  + build-fixes for gcc 4.3.1 (changes to gnat "warnings", and C inlining
    thresholds).

20080713
  + build-fix (reports by Christian Ebert, Funda Wang).

20080712
  + compiler-warning fixes for Solaris.

20080705
  + use NCURSES_MOUSE_MASK() in definition of BUTTON_RELEASE(), etc., to
    make those work properly with the "--enable-ext-mouse" configuration
    (cf: 20050205).
  + improve documentation of build-cc options in INSTALL.
  + work-around a bug in gcc 4.2.4 on AIX, which does not pass the
    -static/-dynamic flags properly to linker, causing test/bs to
    not link.

20080628
  + correct some ifdef's needed for the broken-linker configuration.
  + make debugging library's $BAUDRATE feature work for termcap
    interface.
  + make $NCURSES_NO_PADDING feature work for termcap interface (prompted
    by comment on FreeBSD mailing list).
  + add screen.mlterm terminfo entry -TD
  + improve mlterm and mlterm+pcfkeys terminfo entries -TD

20080621
  + regenerated html documentation.
  + expand manpage description of parameters for form_driver() and
    menu_driver() (prompted by discussion with Adam Spragg).
  + add null-pointer checks for cur_term in baudrate() and
    def_shell_mode(), def_prog_mode()
  + fix some memory leaks in delscreen() and wide acs.

20080614
  + modify test/ditto.c to illustrate multi-threaded use_screen().
  + change CC_SHARED_OPTS from -KPIC to -xcode=pic32 for Solaris.
  + add "-shared" option to MK_SHARED_LIB for gcc on Solaris (report
    by Poor Yorick).

20080607
  + finish changes to wgetch(), making it switch as needed to the
    window's actual screen when calling wrefresh() and wgetnstr().  That
    allows wgetch() to get used concurrently in different threads with
    some minor restrictions, e.g., the application should not delete a
    window which is being used in a wgetch().
  + simplify mutex's, combining the window- and screen-mutex's.

20080531
  + modify wgetch() to use the screen which corresponds to its window
    parameter rather than relying on SP; some dependent functions still
    use SP internally.
  + factor out most use of SP in lib_mouse.c, using parameter.
  + add internal _nc_keyname(), replacing keyname() to associate with a
    particular SCREEN rather than the global SP.
  + add internal _nc_unctrl(), replacing unctrl() to associate with a
    particular SCREEN rather than the global SP.
  + add internal _nc_tracemouse(), replacing _tracemouse() to eliminate
    its associated global buffer _nc_globals.tracemse_buf now in SCREEN.
  + add internal _nc_tracechar(), replacing _tracechar() to use SCREEN in
    preference to the global _nc_globals.tracechr_buf buffer.

20080524
  + modify _nc_keypad() to make it switch temporarily as needed to the
    screen which must be updated.
  + wrap cur_term variable to help make _nc_keymap() thread-safe, and
    always set the screen's copy of this variable in set_curterm().
  + restore curs_set() state after endwin()/refresh() (report/patch
    Miroslav Lichvar)

20080517
  + modify configure script to note that --enable-ext-colors and
    --enable-ext-mouse are not experimental, but extensions from
    the ncurses ABI 5.
  + corrected manpage description of setcchar() (discussion with
    Emanuele Giaquinta).
  + fix for adding a non-spacing character at the beginning of a line
    (report/patch by Miroslav Lichvar).

20080503
  + modify screen.* terminfo entries using new screen+fkeys to fix
    overridden keys in screen.rxvt (Debian #478094) -TD
  + modify internal interfaces to reduce wgetch()'s dependency on the
    global SP.
  + simplify some loops with macros each_screen(), each_window() and
    each_ripoff().

20080426
  + continue modifying test/ditto.c toward making it demonstrate
    multithreaded use_screen(), using fifos to pass data between screens.
  + fix typo in form.3x (report by Mike Gran).

20080419
  + add screen.rxvt terminfo entry -TD
  + modify tic -f option to format spaces as \s to prevent them from
    being lost when that is read back in unformatted strings.
  + improve test/ditto.c, using a "talk"-style layout.

20080412
  + change test/ditto.c to use openpty() and xterm.
  + add locks for copywin(), dupwin(), overlap(), overlay() on their
    window parameters.
  + add locks for initscr() and newterm() on updates to the SCREEN
    pointer.
  + finish table in curs_thread.3x manpage.

20080405
  + begin table in curs_thread.3x manpage describing the scope of data
    used by each function (or symbol) for threading analysis.
  + add null-pointer checks to setsyx() and getsyx() (prompted by
    discussion by Martin v. Lowis and Jeroen Ruigrok van der Werven on
    python-dev2 mailing list).

20080329
  + add null-pointer checks in set_term() and delscreen().
  + move _nc_windows into _nc_globals, since windows can be pads, which
    are not associated with a particular screen.
  + change use_screen() to pass the SCREEN* parameter rather than
    stdscr to the callback function.
  + force libtool to use tag for 'CC' in case it does not detect this,
    e.g., on aix when using CC=powerpc-ibm-aix5.3.0.0-gcc
    (report/patch by Michael Haubenwallner).
  + override OBJEXT to "lo" when building with libtool, to work on
    platforms such as AIX where libtool may use a different suffix for
    the object files than ".o" (report/patch by Michael Haubenwallner).
  + add configure --with-pthread option, for building with the POSIX
    thread library.

20080322
  + fill in extended-color pair two more places in wbkgrndset() and
    waddch_nosync() (prompted by Sedeno's patch).
  + fill in extended-color pair in _nc_build_wch() to make colors work
    for wide-characters using extended-colors (patch by Alejandro R
    Sedeno).
  + add x/X toggles to ncurses.c C color test to test/demo
    wide-characters with extended-colors.
  + add a/A toggles to ncurses.c c/C color tests.
  + modify test/ditto.c to use use_screen().
  + finish modifying test/rain.c to demonstrate threads.

20080308
  + start modifying test/rain.c for threading demo.
  + modify test/ncurses.c to make 'f' test accept the f/F/b/F/</> toggles
    that the 'F' accepts.
  + modify test/worm.c to show trail in reverse-video when other threads
    are working concurrently.
  + fix a deadlock from improper nesting of mutexes for windowlist and
    window.

20080301
  + fixes from 20080223 resolved issue with mutexes; change to use
    recursive mutexes to fix memory leak in delwin() as called from
    _nc_free_and_exit().

20080223
  + fix a size-difference in _nc_globals which caused hanging of mutex
    lock/unlock when termlib was built separately.

20080216
  + avoid using nanosleep() in threaded configuration since that often
    is implemented to suspend the entire process.

20080209
  + update test programs to build/work with various UNIX curses for
    comparisons.  This was to reinvestigate statement in X/Open curses
    that insnstr and winsnstr perform wrapping.  None of the Unix-branded
    implementations do this, as noted in manpage (cf: 20040228).

20080203
  + modify _nc_setupscreen() to set the legacy-coding value the same
    for both narrow/wide models.  It had been set only for wide model,
    but is needed to make unctrl() work with locale in the narrow model.
  + improve waddch() and winsch() handling of EILSEQ from mbrtowc() by
    using unctrl() to display illegal bytes rather than trying to append
    further bytes to make up a valid sequence (reported by Andrey A
    Chernov).
  + modify unctrl() to check codes in 128-255 range versus isprint().
    If they are not printable, and locale was set, use a "M-" or "~"
    sequence.

20080126
  + improve threading in test/worm.c (wrap refresh calls, and KEY_RESIZE
    handling).  Now it hangs in napms(), no matter whether nanosleep()
    or poll() or select() are used on Linux.

20080119
  + fixes to build with --disable-ext-funcs
  + add manpage for use_window and use_screen.
  + add set_tabsize() and set_escdelay() functions.

20080112
  + remove recursive-mutex definitions, finish threading demo for worm.c
  + remove a redundant adjustment of lines in resizeterm.c's
    adjust_window() which caused occasional misadjustment of stdscr when
    softkeys were used.

20080105
  + several improvements to terminfo entries based on xterm #230 -TD
  + modify MKlib_gen.sh to handle keyname/key_name prototypes, so the
    "link_test" builds properly.
  + fix for toe command-line options -u/-U to ensure filename is given.
  + fix allocation-size for command-line parsing in infocmp from 20070728
    (report by Miroslav Lichvar)
  + improve resizeterm() by moving ripped-off lines, and repainting the
    soft-keys (report by Katarina Machalkova)
  + add clarification in wclear's manpage noting that the screen will be
    cleared even if a subwindow is cleared (prompted by Christer Enfors
    question).
  + change test/ncurses.c soft-key tests to work with KEY_RESIZE.

20071222
  + continue implementing support for threading demo by adding mutex
    for delwin().

20071215
  + add several functions to C++ binding which wrap C functions that
    pass a WINDOW* parameter (request by Chris Lee).

20071201
  + add note about configure options needed for Berkeley database to the
    INSTALL file.
  + improve checks for version of Berkeley database libraries.
  + amend fix for rpath to not modify LDFLAGS if the platform has no
    applicable transformation (report by Christian Ebert, cf: 20071124).

20071124
  + modify configure option --with-hashed-db to accept a parameter which
    is the install-prefix of a given Berkeley Database (prompted by
    pierre4d2 comments).
  + rewrite wrapper for wcrtomb(), making it work on Solaris.  This is
    used in the form library to determine the length of the buffer needed
    by field_buffer (report by Alfred Fung).
  + remove unneeded window-parameter from C++ binding for wresize (report
    by Chris Lee).

20071117
  + modify the support for filesystems which do not support mixed-case to
    generate 2-character (hexadecimal) codes for the lower-level of the
    filesystem terminfo database (request by Michail Vidiassov).
  + add configure option --enable-mixed-case, to allow overriding the
    configure script's check if the filesystem supports mixed-case
    filenames.
  + add wresize() to C++ binding (request by Chris Lee).
  + define NCURSES_EXT_FUNCS and NCURSES_EXT_COLORS in curses.h to make
    it simpler to tell if the extended functions and/or colors are
    declared.

20071103
  + update memory-leak checks for changes to names.c and codes.c
  + correct acsc strings in h19, z100 (patch by Benjamin C W Sittler).

20071020
  + continue implementing support for threading demo by adding mutex
    for use_window().
  + add mrxvt terminfo entry, add/fix xterm building blocks for modified
    cursor keys -TD
  + compile with FreeBSD "contemporary" TTY interface (patch by
    Rong-En Fan).

20071013
  + modify makefile rules to allow clear, tput and tset to be built
    without libtic.  The other programs (infocmp, tic and toe) rely on
    that library.
  + add/modify null-pointer checks in several functions for SP and/or
    the WINDOW* parameter (report by Thorben Krueger).
  + fixes for field_buffer() in formw library (see Redhat Bugzilla
    #310071, patches by Miroslav Lichvar).
  + improve performance of NCURSES_CHAR_EQ code (patch by Miroslav
    Lichvar).
  + update/improve mlterm and rxvt terminfo entries, e.g., for
    the modified cursor- and keypad-keys -TD

20071006
  + add code to curses.priv.h ifdef'd with NCURSES_CHAR_EQ, which
    changes the CharEq() macro to an inline function to allow comparing
    cchar_t struct's without comparing gaps in a possibly unpacked
    memory layout (report by Miroslav Lichvar).

20070929
  + add new functions to lib_trace.c to setup mutex's for the _tracef()
    calls within the ncurses library.
  + for the reentrant model, move _nc_tputs_trace and _nc_outchars into
    the SCREEN.
  + start modifying test/worm.c to provide threading demo (incomplete).
  + separated ifdef's for some BSD-related symbols in tset.c, to make
    it compile on LynxOS (report by Greg Gemmer).
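The 20071006 CharEq() change above illustrates a general C issue: a struct may contain padding bytes whose contents are indeterminate, so a byte-wise comparison (memcmp) can report two logically equal values as different, while a field-by-field comparison cannot. A minimal sketch of the distinction, using a stand-in `cell` type rather than ncurses' actual cchar_t:

```c
#include <string.h>

/* "cell" is illustrative only: a char followed by an int leaves padding
 * between the members on most ABIs.  memcmp() would compare those
 * padding bytes; a field-wise comparison (the idea behind an inline
 * CharEq()) ignores them. */
struct cell {
    char ch;    /* typically followed by padding bytes */
    int attr;
};

static int cell_eq(const struct cell *a, const struct cell *b)
{
    return a->ch == b->ch && a->attr == b->attr;
}
```

Filling two such structs with different garbage before assigning equal field values shows why the macro-to-function change mattered for unpacked layouts.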

20070915
  + modify Ada95/gen/Makefile to use shlib script, to simplify building
    shared-library configuration on platforms lacking rpath support.
  + build-fix for Ada95/src/Makefile to reflect changed dependency for
    the terminal-interface-curses-aux.adb file which is now generated.
  + restructuring test/worm.c, for use_window() example.

20070908
  + add use_window() and use_screen() functions, to develop into support
    for threaded library (incomplete).
  + fix typos in man/curs_opaque.3x which kept the install script from
    creating symbolic links to two aliases created in 20070818 (report by
    Rong-En Fan).

20070901
  + remove a spurious newline from output of html.m4, which caused links
    for Ada95 html to be incorrect for the files generated using m4.
  + start investigating mutex's for SCREEN manipulation (incomplete).
  + minor cleanup of codes.c/names.c for --enable-const
  + expand/revise "Routine and Argument Names" section of ncurses manpage
    to address report by David Givens in newsgroup discussion.
  + fix interaction between --without-progs/--with-termcap configure
    options (report by Michail Vidiassov).
  + fix typo in "--disable-relink" option (report by Michail Vidiassov).

20070825
  + fix a sign-extension bug in infocmp's repair_acsc() function
    (cf: 971004).
  + fix old configure script bug which prevented "--disable-warnings"
    option from working (patch by Mike Frysinger).

20070818
  + add 9term terminal description (request by Juhapekka Tolvanen) -TD
  + modify comp_hash.c's string output to avoid misinterpreting a null
    "\0" followed by a digit.
  + modify MKnames.awk and MKcodes.awk to support big-strings.
    This only applies to the cases (broken linker, reentrant) where
    the corresponding arrays are accessed via wrapper functions.
  + split MKnames.awk into two scripts, eliminating the shell redirection
    which complicated the make process and also the bogus timestamp file
    which was introduced to fix "make -j".
  + add test/test_opaque.c, test/test_arrays.c
  + add wgetscrreg() and wgetparent() for applications that may need it
    when NCURSES_OPAQUE is defined (prompted by Bryan Christ).

20070812
  + amend treatment of infocmp "-r" option to retain the 1023-byte limit
    unless "-T" is given (cf: 981017).
  + modify comp_captab.c generation to use big-strings.
  + make _nc_capalias_table and _nc_infoalias_table private accessed via
    _nc_get_alias_table() since the tables are used only within the tic
    library.
  + modify configure script to skip Intel compiler in CF_C_INLINE.
  + make _nc_info_hash_table and _nc_cap_hash_table private accessed via
    _nc_get_hash_table() since the tables are used only within the tic
    library.

20070728
  + make _nc_capalias_table and _nc_infoalias_table private, accessed via
    _nc_get_alias_table() since they are used only by parse_entry.c
  + make _nc_key_names private since it is used only by lib_keyname.c
  + add --disable-big-strings configure option to control whether
    unctrl.c is generated using the big-string optimization - which may
    use strings longer than supported by a given compiler.
  + reduce relocation tables for tic, infocmp by changing type of
    internal hash tables to short, and make those private symbols.
  + eliminate large fixed arrays from progs/infocmp.c

20070721
  + change winnstr() to stop at the end of the line (cf: 970315).
  + add test/test_get_wstr.c
  + add test/test_getstr.c
  + add test/test_inwstr.c
  + add test/test_instr.c

20070716
  + restore a call to obtain screen-size in _nc_setupterm(), which
    is used in tput and other non-screen applications via setupterm()
    (Debian #433357, reported by Florent Bayle, Christian Ohm,
    cf: 20070310).

20070714
  + add test/savescreen.c test-program
  + add check to trace-file open, if the given name is a directory, add
    ".log" to the name and try again.
  + add konsole-256color entry -TD
  + add extra gcc warning options from xterm.
  + minor fixes for ncurses/hashmap test-program.
  + modify configure script to quiet c++ build with libtool when the
    --disable-echo option is used.
  + modify configure script to disable ada95 if libtool is selected,
    writing a warning message (addresses FreeBSD ports/114493).
  + update config.guess, config.sub

20070707
  + add continuous-move "M" to demo_panels to help test refresh changes.
  + improve fix for refresh of window on top of multi-column characters,
    taking into account some split characters on left/right window
    boundaries.

20070630
  + add "widec" row to _tracedump() output to help diagnose remaining
    problems with multi-column characters.
  + partial fix for refresh of window on top of multi-column characters
    which are partly overwritten (report by Sadrul H Chowdhury).
  + ignore A_CHARTEXT bits in vidattr() and vid_attr(), in case
    multi-column extension bits are passed there.
  + add setlocale() call to demo_panels.c, needed for wide-characters.
  + add some output flags to _nc_trace_ttymode to help diagnose a bug
    report by Larry Virden, i.e., ONLCR, OCRNL, ONOCR and ONLRET.

20070623
  + add test/demo_panels.c
  + implement opaque version of setsyx() and getsyx().
20070612
    + corrected xterm+pcf2 terminfo modifiers for F1-F4, to match xterm
      #226 -TD
    + split-out key_name() from MKkeyname.awk since it now depends upon
      wunctrl() which is not in libtinfo (report by Rong-En Fan).

20070609
    + add test/key_name.c
    + add stdscr cases to test/inchs.c and test/inch_wide.c
    + update test/configure
    + correct formatting of DEL (0x7f) in _nc_vischar().
    + null-terminate result of wunctrl().
    + add null-pointer check in key_name() (report by Andreas Krennmair,
      cf: 20020901).

20070602
    + adapt mouse-handling code from menu library in form-library
      (discussion with Clive Nicolson).
    + add a modification of test/dots.c, i.e., test/dots_mvcur.c to
      illustrate how to use mvcur().
    + modify wide-character flavor of SetAttr() to preserve the
      WidecExt() value stored in the .attr field, e.g., in case it
      is overwritten by chgat (report by Aleksi Torhamo).
    + correct buffer-size for _nc_viswbuf2n() (report by Aleksi Torhamo).
    + build-fixes for Solaris 2.6 and 2.7 (patch by Peter O'Gorman).

20070526
    + modify keyname() to use "^X" form only if meta() has been called, or
      if keyname() is called without initializing curses, e.g., via
      initscr() or newterm() (prompted by LinuxBase #1604).
    + document some portability issues in man/curs_util.3x
    + add a shadow copy of TTY buffer to _nc_prescreen to fix applications
      broken by moving that data into SCREEN (cf: 20061230).

20070512
    + add 'O' (wide-character panel test) in ncurses.c to demonstrate a
      problem reported by Sadrul H Chowdhury with repainting parts of
      a fullwidth cell.
    + modify slk_init() so that if there are preceding calls to
      ripoffline(), those affect the available lines for soft-keys (adapted
      from patch by Clive Nicolson).
    + document some portability issues in man/curs_getyx.3x

20070505
    + fix a bug in Ada95/samples/ncurses which caused a variable to
      become uninitialized in the "b" test.
    + fix Ada95/gen/Makefile.in adahtml rule to account for recent
      movement of files, fix a few incorrect manpage references in the
      generated html.
    + add Ada95 binding to _nc_freeall() as Curses_Free_All to help with
      memory-checking.
    + correct some functions in Ada95 binding which were using return value
      from C where none was returned: idcok(), immedok() and wtimeout().
    + amend recent changes for Ada95 binding to make it build with
      Cygwin's linker, e.g., with configure options
      --enable-broken-linker --with-ticlib

20070428
    + add a configure check for gcc's options for inlining, use that to
      quiet a warning message where gcc's default behavior changed from
      3.x to 4.x.
    + improve warning message when checking if GPM is linked to curses
      library by not warning if its use of "wgetch" is via a weak symbol.
    + add loader options when building with static libraries to ensure that
      an installed shared library for ncurses does not conflict. This is
      reported as problem with Tru64, but could affect other platforms
      (report Martin Mokrejs, analysis by Tim Mooney).
    + fix build on cygwin after recent ticlib/termlib changes, i.e.,
      + adjust TINFO_SUFFIX value to work with cygwin's dll naming
      + revert a change from 20070303 which commented out dependency of
        SHLIB_LIST in form/menu/panel/c++ libraries.
    + fix initialization of ripoff stack pointer (cf: 20070421).

20070421
    + move most static variables into structures _nc_globals and
      _nc_prescreen, to simplify storage.
    + add/use configure script macro CF_SIG_ATOMIC_T, use the corresponding
      type for data manipulated by signal handlers (prompted by comments
      in mailing.openbsd.bugs newsgroup).
    + modify CF_WITH_LIBTOOL to allow one to pass options such as -static
      to the libtool create- and link-operations.

20070414
    + fix whitespace in curs_opaque.3x which caused a spurious ';' in
      the installed aliases (report by Peter Santoro).
    + fix configure script to not try to generate adacurses-config when
      Ada95 tree is not built.

20070407
    + add man/curs_legacy.3x, man/curs_opaque.3x
    + fix acs_map binding for Ada95 when --enable-reentrant is used.
    + add adacurses-config to the Ada95 install, based on version from
      FreeBSD port, in turn by Juergen Pfeifer in 2000 (prompted by
      comment on comp.lang.ada newsgroup).
    + fix includes in c++ binding to build with Intel compiler
      (cf: 20061209).
    + update install rule in Ada95 to use mkdirs.sh
    > other fixes prompted by inspection for Coverity report:
    + modify ifdef's for c++ binding to use try/catch/throw statements
    + add a null-pointer check in tack/ansi.c request_cfss()
    + fix a memory leak in ncurses/base/wresize.c
    + corrected check for valid memu/meml capabilities in
      progs/dump_entry.c when handling V_HPUX case.
    > fixes based on Coverity report:
    + remove dead code in test/bs.c
    + remove dead code in test/demo_defkey.c
    + remove an unused assignment in progs/infocmp.c
    + fix a limit check in tack/ansi.c tools_charset()
    + fix tack/ansi.c tools_status() to perform the VT320/VT420
      tests in request_cfss().  The function had exited too soon.
    + fix a memory leak in tic.c's make_namelist()
    + fix a couple of places in tack/output.c which did not check for EOF.
    + fix a loop-condition in test/bs.c
    + add index checks in lib_color.c for color palettes
    + add index checks in progs/dump_entry.c for version_filter() handling
      of V_BSD case.
    + fix a possible null-pointer dereference in copywin()
    + fix a possible null-pointer dereference in waddchnstr()
    + add a null-pointer check in _nc_expand_try()
    + add a null-pointer check in tic.c's make_namelist()
    + add null-pointer checks in test/cardfile.c
    + fix a double-free in ncurses/tinfo/trim_sgr0.c
    + fix a double-free in ncurses/base/wresize.c
    + add try/catch block to c++/cursesmain.cc

20070331
    + modify Ada95 binding to build with --enable-reentrant by wrapping
      global variables (bug: acs_map does not yet work).
    + modify Ada95 binding to use the new access-functions, allowing it
      to build/run when NCURSES_OPAQUE is set.
    + add access-functions and macros to return properties of the WINDOW
      structure, e.g., when NCURSES_OPAQUE is set.
    + improved install-sh's quoting.
    + use mkdirs.sh rather than mkinstalldirs, e.g., to use fixes from
      other programs.

20070324
    + eliminate part of the direct use of WINDOW data from Ada95 interface.
    + fix substitutions for termlib filename to make configure option
      --enable-reentrant work with --with-termlib.
    + change a constructor for NCursesWindow to allow compiling with
      NCURSES_OPAQUE set, since we cannot pass a reference to
      an opaque pointer.

20070317
    + ignore --with-chtype=unsigned since unsigned is always added to
      the type in curses.h; do the same for --with-mmask-t.
    + change warning regarding --enable-ext-colors and wide-character
      in the configure script to an error.
    + tweak error message in CF_WITH_LIBTOOL to distinguish other programs
      such as Darwin's libtool program (report by Michail Vidiassov)
    + modify edit_man.sh to allow for multiple substitutions per line.
    + set locale in misc/ncurses-config.in since it uses a range
    + change permissions for libncurses++.a install (report by Michail
      Vidiassov).
    + corrected length of temporary buffer in wide-character version
      of set_field_buffer() (related to report by Bryan Christ).

20070311
    + fix mk-1st.awk script install_shlib() function, broken in 20070224
      changes for cygwin (report by Michail Vidiassov).

20070310
    + increase size of array in _nc_visbuf2n() to make "tic -v" work
      properly in its similar_sgr() function (report/analysis by Peter
      Santoro).
    + add --enable-reentrant configure option for ongoing changes to
      implement a reentrant version of ncurses:
      + libraries are suffixed with "t"
      + wrap several global variables (curscr, newscr, stdscr, ttytype,
        COLORS, COLOR_PAIRS, COLS, ESCDELAY, LINES and TABSIZE) as
        functions returning values stored in SCREEN or cur_term.
      + move some initialization (LINES, COLS) from lib_setup.c,
        i.e., setupterm() to _nc_setupscreen(), i.e., newterm().

20070303
    + regenerated html documentation.
    + add NCURSES_OPAQUE symbol to curses.h, will use to make structs
      opaque in selected configurations.
    + move the chunk in lib_acs.c which resets acs capabilities when
      running on a terminal whose locale interferes with those into
      _nc_setupscreen(), so the libtinfo/libtinfow files can be made
      identical (requested by Miroslav Lichvar).
    + do not use configure variable SHLIB_LIBS for building libraries
      outside the ncurses directory, since that symbol is customized
      only for that directory, and using it introduces an unneeded
      dependency on libdl (requested by Miroslav Lichvar).
    + modify mk-1st.awk so the generated makefile rules for linking or
      installing shared libraries do not first remove the library, in
      case it is in use, e.g., libncurses.so by /bin/sh (report by Jeff
      Chua).
    + revised section "Using NCURSES under XTERM" in ncurses-intro.html
      (prompted by newsgroup comment by Nick Guenther).

20070224
    + change internal return codes of _nc_wgetch() to check for cases
      where KEY_CODE_YES should be returned, e.g., if a KEY_RESIZE was
      ungetch'd, and read by wget_wch().
    + fix static-library build broken in 20070217 changes to remove "-ldl"
      (report by Miroslav Lichvar).
    + change makefile/scripts for cygwin to allow building termlib.
    + use Form_Hook in manpages to match form.h
    + use Menu_Hook in manpages, as well as a few places in menu.h
    + correct form- and menu-manpages to use specific Field_Options,
      Menu_Options and Item_Options types.
    + correct prototype for _tracechar() in manpage (cf: 20011229).
    + correct prototype for wunctrl() in manpage.

20070217
    + fixes for $(TICS_LIST) in ncurses/Makefile (report by Miroslav
      Lichvar).
    + modify relinking of shared libraries to apply only when rpath is
      enabled, and add --disable-relink option which can be used to
      disable the feature altogether (reports by Michail Vidiassov,
      Adam J Richter).
    + fix --with-termlib option for wide-character configuration, stripping
      the "w" suffix in one place (report by Miroslav Lichvar).
    + remove "-ldl" from some library lists to reduce dependencies in
      programs (report by Miroslav Lichvar).
    + correct description of --enable-signed-char in configure --help
      (report by Michail Vidiassov).
    + add pattern for GNU/kFreeBSD configuration to CF_XOPEN_SOURCE,
      which matches an earlier change to CF_SHARED_OPTS, from xterm #224
      fixes.
    + remove "${DESTDIR}" from -install_name option used for linking
      shared libraries on Darwin (report by Michail Vidiassov).

20070210
    + add test/inchs.c, test/inch_wide.c, to test win_wchnstr().
    + remove libdl from library list for termlib (report by Miroslav
      Lichvar).
    + fix configure.in to allow --without-progs --with-termlib (patch by
      Miroslav Lichvar).
    + modify win_wchnstr() to ensure that only a base cell is returned
      for each multi-column character (prompted by report by Wei Kong
      regarding change in mvwin_wch() cf: 20041023).

20070203
    + modify fix_wchnstr() in form library to strip attributes (and color)
      from the cchar_t array (field cells) read from a field's window.
      Otherwise, when copying the field cells back to the window, the
      associated color overrides the field's background color (report by
      Ricardo Cantu).
    + improve tracing for form library, showing created forms, fields, etc.
    + ignore --enable-rpath configure option if --with-shared was omitted.
    + add _nc_leaks_tinfo(), _nc_free_tic(), _nc_free_tinfo() entrypoints
      to allow leak-checking when both tic- and tinfo-libraries are built.
    + drop CF_CPP_VSCAN_FUNC macro from configure script, since C++ binding
      no longer relies on it.
    + disallow combining configure script options --with-ticlib and
      --enable-termcap (report by Rong-En Fan).
    + remove tack from ncurses tree.

20070128
    + fix typo in configure script that broke --with-termlib option
      (report by Rong-En Fan).

20070127
    + improve fix for FreeBSD gnu/98975, to allow for null pointer passed
      to tgetent() (report by Rong-en Fan).
    + update tack/HISTORY and tack/README to tell how to build it after
      it is removed from the ncurses tree.
    + fix configure check for libtool's version to trim blank lines
      (report by sci-fi@hush.ai).
    + review/eliminate other original-file artifacts in cursesw.cc, making
      its license consistent with ncurses.
    + use ncurses vw_scanw() rather than reading into a fixed buffer in
      the c++ binding for scanw() methods (prompted by report by Nuno Dias).
    + eliminate fixed-buffer vsprintf() calls in c++ binding.

20070120
    + add _nc_leaks_tic() to separate leak-checking of tic library from
      term/ncurses libraries, and thereby eliminate a library dependency.
    + fix test/mk-test.awk to ignore blank lines.
    + correct paths in include/headers, for --srcdir (patch by Miroslav
      Lichvar).

20070113
    + add a break-statement in misc/shlib to ensure that it exits on the
      _first_ matched directory (report by Paul Novak).
    + add tack/configure, which can be used to build tack outside the
      ncurses build-tree.
    + add --with-ticlib option, to build/install the tic-support functions
      in a separate library (suggested by Miroslav Lichvar).

20070106
    + change MKunctrl.awk to reduce relocation table for unctrl.o
    + change MKkeyname.awk to reduce relocation table for keyname.o
      (patch by Miroslav Lichvar).

20061230
    + modify configure check for libtool's version to trim blank lines
      (report by sci-fi@hush.ai).
    + modify some modules to allow them to be reentrant if _REENTRANT is
      defined: lib_baudrate.c, resizeterm.c (local data only)
    + eliminate static data from some modules: add_tries.c, hardscroll.c,
      lib_ttyflags.c, lib_twait.c
    + improve manpage install to add aliases for the transformed program
      names, e.g., from --program-prefix.
    + used linklint to verify links in the HTML documentation, made fixes
      to manpages as needed.
    + fix a typo in curs_mouse.3x (report by William McBrine).
    + fix install-rule for ncurses5-config to make the bin-directory.

20061223
    + modify configure script to omit the tic (terminfo compiler) support
      from ncurses library if --without-progs option is given.
    + modify install rule for ncurses5-config to do this via "install.libs"
    + modify shared-library rules to allow FreeBSD 3.x to use rpath.
    + update config.guess, config.sub

20061217 5.6 release for upload to

20061217
    + add ifdef's for <wctype.h> for HPUX, which has the corresponding
      definitions in <wchar.h>.
    + revert the va_copy() change from 20061202, since it was neither
      correct nor portable.
    + add $(LOCAL_LIBS) definition to progs/Makefile.in, needed for
      rpath on Solaris.
    + ignore wide-acs line-drawing characters that wcwidth() claims are
      not one-column.  This is a workaround for Solaris' broken locale
      support.

20061216
    + modify configure --with-gpm option to allow it to accept a parameter,
      i.e., the name of the dynamic GPM library to load via dlopen()
      (requested by Bryan Henderson).
    + add configure option --with-valgrind, changes from vile.
    + modify configure script AC_TRY_RUN and AC_TRY_LINK checks to use
      'return' in preference to 'exit()'.

20061209
    + change default for --with-develop back to "no".
    + add XTABS to tracing of TTY bits.
    + updated autoconf patch to ifdef-out the misfeature which declares
      exit() for configure tests.  This fixes a redefinition warning on
      Solaris.
    + use ${CC} rather than ${LD} in shared library rules for IRIX64,
      Solaris to help ensure that initialization sections are provided for
      extra linkage requirements, e.g., of C++ applications (prompted by
      comment by Casper Dik in newsgroup).
    + rename "$target" in CF_MAN_PAGES to make it easier to distinguish
      from the autoconf predefined symbol.  There was no conflict,
      since "$target" was used only in the generated edit_man.sh file,
      but SuSE's rpm package contains a patch.

20061202
    + update man/term.5 to reflect extended terminfo support and hashed
      database configuration.
    + updates for test/configure script.
    + adapted from SuSE rpm package:
      + remove long-obsolete workaround for broken-linker which declared
        cur_term in tic.c
      + improve error recovery in PUTC() macro when wcrtomb() does not
        return usable results for an 8-bit character.
    + patches from rpm package (SuSE):
      + use va_copy() in extra varargs manipulation for tracing version
        of printw, etc.
      + use a va_list rather than a null in _nc_freeall()'s call to
        _nc_printf_string().
    + add some see-also references in manpages to show related
      wide-character functions (suggested by Claus Fischer).

20061125
    + add a check in lib_color.c to ensure caller does not increase COLORS
      above max_colors, which is used as an array index (discussion with
      Simon Sasburg).
    + add ifdef's allowing ncurses to be built with tparm() using either
      varargs (the existing status), or using a fixed-parameter list (to
      match X/Open).

20061104
    + fix redrawing of windows other than stdscr using wredrawln() by
      touching the corresponding rows in curscr (discussion with Dan
      Gookin).
    + add test/redraw.c
    + add test/echochar.c
    + review/cleanup manpage descriptions of error-returns for form- and
      menu-libraries (prompted by FreeBSD docs/46196).

20061028
    + add AUTHORS file -TD
    + omit the -D options from output of the new config script --cflags
      option (suggested by Ralf S Engelschall).
    + make NCURSES_INLINE unconditionally defined in curses.h

20061021
    + revert change to accommodate bash 3.2, since that breaks other
      platforms, e.g., Solaris.
    + minor fixes to NEWS file to simplify scripting to obtain list of
      contributors.
    + improve some shared-library configure scripting for Linux, FreeBSD
      and NetBSD to make "--with-shlib-version" work.
    + change configure-script rules for FreeBSD shared libraries to allow
      for rpath support in versions past 3.
    + use $(DESTDIR) in makefile rules for installing/uninstalling the
      package config script (reports/patches by Christian Wiese,
      Ralf S Engelschall).
    + fix a warning in the configure script for NetBSD 2.0, working around
      spurious blanks embedded in its ${MAKEFLAGS} symbol.
    + change test/Makefile to simplify installing test programs in a
      different directory when --enable-rpath is used.

20061014
    + work around bug in bash 3.2 by adding extra quotes (Jim Gifford).
    + add/install a package config script, e.g., "ncurses5-config" or
      "ncursesw5-config", according to configuration options.

20061007
    + add several GNU Screen terminfo variations with 16- and 256-colors,
      and status line (Alain Bench).
    + change the way shared libraries (other than libtool) are installed.
      Rather than copying the build-tree's libraries, link the shared
      objects into the install directory.  This makes the --with-rpath
      option work except with $(DESTDIR) (cf: 20000930).

20060930
    + fix ifdef in c++/internal.h for QNX 6.1
    + test-compiled with (old) egcs-1.1.2, modified configure script to
      not unset the $CXX and related variables which would prevent this.
    + fix a few terminfo.src typos exposed by improvements to "-f" option.
    + improve infocmp/tic "-f" option formatting.

20060923
    + make --disable-largefile option work (report by Thomas M Ott).
    + updated html documentation.
    + add ka2, kb1, kb3, kc2 to vt220-keypad as an extension -TD
    + minor improvements to rxvt+pcfkeys -TD

20060916
    + move static data from lib_mouse.c into SCREEN struct.
    + improve ifdef's for _POSIX_VDISABLE in tset to work with Mac OS X
      (report by Michail Vidiassov).
    + modify CF_PATH_SYNTAX to ensure it uses the result from --prefix
      option (from lynx changes) -TD
    + adapt AC_PROG_EGREP check, noting that this is likely to be another
      place aggravated by POSIXLY_CORRECT.
    + modify configure check for awk to ensure that it is found (prompted
      by report by Christopher Parker).
    + update config.sub

20060909
    + add kon, kon2 and jfbterm terminfo entry (request by Till Maas) -TD
    + remove invis capability from klone+sgr, mainly used by linux entry,
      since it does not really do this -TD

20060903
    + correct logic in wadd_wch() and wecho_wch(), which did not guard
      against passing the multi-column attribute into a call on waddch(),
      e.g., using data returned by win_wch() (cf: 20041023)
      (report by Sadrul H Chowdhury).

20060902
    + fix kterm's acsc string -TD
    + fix for change to tic/infocmp in 20060819 to ensure no blank is
      embedded into a termcap description.
    + workaround for 20050806 ifdef's change to allow visbuf.c to compile
      when using --with-termlib --with-trace options.
    + improve tgetstr() by making the return value point into the user's
      buffer, if provided (patch by Miroslav Lichvar (see Redhat Bugzilla
      #202480)).
    + correct libraries needed for foldkeys (report by Stanislav Ievlev)

20060826
    + add terminfo entries for xfce terminal (xfce) and multi gnome
      terminal (mgt) -TD
    + add test/foldkeys.c

20060819
    + modify tic and infocmp to avoid writing trailing blanks on terminfo
      source output (Debian #378783).
    + modify configure script to ensure that if the C compiler is used
      rather than the loader in making shared libraries, the $(CFLAGS)
      variable is also used (Redhat Bugzilla #199369).
    + port hashed-db code to db2 and db3.
    + fix a bug in tgetent() from 20060625 and 20060715 changes
      (patch/analysis by Miroslav Lichvar (see Redhat Bugzilla #202480)).

20060805
    + updated xterm function-keys terminfo to match xterm #216 -TD
    + add configure --with-hashed-db option (tested only with FreeBSD 6.0,
      e.g., the db 1.8.5 interface).

20060729
    + modify toe to access termcap data, e.g., via cgetent() functions,
      or as a text file if those are not available.
    + use _nc_basename() in tset to improve $SHELL check for csh/sh.
    + modify _nc_read_entry() and _nc_read_termcap_entry() so infocmp
      can access termcap data when the terminfo database is disabled.

20060722
    + widen the test for xterm kmous a little to allow for other strings
      than \E[M, e.g., for xterm-sco functionality in xterm.
    + update xterm-related terminfo entries to match xterm patch #216 -TD
    + update config.guess, config.sub

20060715
    + fix for install-rule in Ada95 to add terminal_interface.ads
      and terminal_interface.ali (anonymous posting in comp.lang.ada).
    + correction to manpage for getcchar() (report by William McBrine).
    + add test/chgat.c
    + modify wchgat() to mark updated cells as changed so a refresh will
      repaint those cells (comments by Sadrul H Chowdhury and William
      McBrine).
    + split up dependency of names.c and codes.c in ncurses/Makefile to
      work with parallel make (report/analysis by Joseph S Myers).
    + suppress a warning message (which is ignored) for systems without
      an ldconfig program (patch by Justin Hibbits).
    + modify configure script --disable-symlinks option to allow one to
      disable symlink() in tic even when link() does not work (report by
      Nigel Horne).
    + modify MKfallback.sh to use tic -x when constructing fallback tables
      to allow extended capabilities to be retrieved from a fallback entry.
    + improve leak-checking logic in tgetent() from 20060625 to ensure that
      it does not free the current screen (report by Miroslav Lichvar).

20060708
    + add a check for _POSIX_VDISABLE in tset (NetBSD #33916).
    + correct _nc_free_entries() and related functions used for memory leak
      checking of tic.

20060701
    + revert a minor change for magic-cookie support from 20060513, which
      caused unexpected reset of attributes, e.g., when resizing test/view
      in color mode.
    + note in clear manpage that the program ignores command-line
      parameters (prompted by Debian #371855).
    + fixes to make lib_gen.c build properly with changes to the configure
      --disable-macros option and NCURSES_NOMACROS (cf: 20060527)
    + update/correct several terminfo entries -TD
    + add some notes regarding copyright to terminfo.src -TD

20060625
    + fixes to build Ada95 binding with gnat-4.1.0
    + modify read_termtype() so the term_names data is always allocated as
      part of the str_table, a better fix for a memory leak (cf: 20030809).
    + reduce memory leaks in repeated calls to tgetent() by remembering the
      last TERMINAL* value allocated to hold the corresponding data and
      freeing that if the tgetent() result buffer is the same as the
      previous call (report by "Matt" for FreeBSD gnu/98975).
    + modify tack to test extended capability function-key strings.
    + improved gnome terminfo entry (GenToo #122566).
    + improved xterm-256color terminfo entry (patch by Alain Bench).

20060617
    + fix two small memory leaks related to repeated tgetent() calls
      with TERM=screen (report by "Matt" for FreeBSD gnu/98975).
    + add --enable-signed-char to simplify Debian package.
    + reduce name-pollution in term.h by removing #define's for HAVE_xxx
      symbols.
    + correct typo in curs_terminfo.3x (Debian #369168).
20060603
    + enable the mouse in test/movewindow.c
    + improve a limit-check in frm_def.c (John Heasley).
    + minor copyright fixes.
    + change configure script to produce test/Makefile from data file.

20060527
    + add a configure option --enable-wgetch-events to enable
      NCURSES_WGETCH_EVENTS, and correct the associated loop-logic in
      lib_twait.c (report by Bernd Jendrissek).
    + remove include/nomacros.h from build, since the ifdef for
      NCURSES_NOMACROS makes that obsolete.
    + add entrypoints for some functions which were only provided as macros
      to make NCURSES_NOMACROS ifdef work properly: getcurx(), getcury(),
      getbegx(), getbegy(), getmaxx(), getmaxy(), getparx() and getpary(),
      wgetbkgrnd().
    + provide ifdef for NCURSES_NOMACROS which suppresses most macro
      definitions from curses.h, i.e., where a macro is defined to override
      a function to improve performance.  Allowing a developer to suppress
      these definitions can simplify some applications (discussion with
      Stanislav Ievlev).
    + improve description of memu/meml in terminfo manpage.

20060520
    + if msgr is false, reset video attributes when doing an automargin
      wrap to the next line.  This makes the ncurses 'k' test work properly
      for hpterm.
    + correct caching of keyname(), which was using only half of its table.
    + minor fixes to memory-leak checking.
    + make SCREEN._acs_map and SCREEN._screen_acs_map pointers rather than
      arrays, making ACS_LEN less visible to applications (suggested by
      Stanislav Ievlev).
    + move chunk in SCREEN ifdef'd for USE_WIDEC_SUPPORT to the end, so
      _screen_acs_map will have the same offset in both ncurses/ncursesw,
      making the corresponding tinfo/tinfow libraries binary-compatible
      (cf: 20041016, report by Stanislav Ievlev).

20060513
    + improve debug-tracing for EmitRange().
    + change default for --with-develop to "yes".  Add NCURSES_NO_HARD_TABS
      and NCURSES_NO_MAGIC_COOKIE environment variables to allow runtime
      suppression of the related hard-tabs and xmc-glitch features.
    + add ncurses version number to top-level manpages, e.g., ncurses, tic,
      infocmp, terminfo as well as form, menu, panel.
    + update config.guess, config.sub
    + modify ncurses.c to work around a bug in NetBSD 3.0 curses
      (field_buffer returning null for a valid field).  The 'r' test
      appears to not work with that configuration since the new_fieldtype()
      function is broken in that implementation.

20060506
    + add hpterm-color terminfo entry -TD
    + fixes to compile test-programs with HPUX 11.23

20060422
    + add copyright notices to files other than those that are generated,
      data or adapted from pdcurses (reports by William McBrine, David
      Taylor).
    + improve rendering on hpterm by not resetting attributes at the end
      of doupdate() if the terminal has the magic-cookie feature (report
      by Bernd Rieke).
    + add 256color variants of terminfo entries for programs which are
      reported to implement this feature -TD

20060416
    + fix typo in change to NewChar() macro from 20060311 changes, which
      broke tab-expansion (report by Frederic L W Meunier).

20060415
    + document -U option of tic and infocmp.
    + modify tic/infocmp to suppress smacs/rmacs when acsc is suppressed
      due to size limit, e.g., converting to termcap format.  Also
      suppress them if the output format does not contain acsc and it
      was not VT100-like, i.e., a one-one mapping (Novell #163715).
    + add configure check to ensure that SIGWINCH is defined on platforms
      such as OS X which exclude that when _XOPEN_SOURCE, etc., are
      defined (report by Nicholas Cole)

20060408
    + modify write_object() to not write coincidental extensions of an
      entry made due to it being referenced in a use= clause (report by
      Alain Bench).
    + another fix for infocmp -i option, which did not ensure that some
      escape sequences had comparable prefixes (report by Alain Bench).

20060401
    + improve discussion of init/reset in terminfo and tput manpages
      (report by Alain Bench).
    + use is3 string for a fallback of rs3 in the reset program; it was
      using is2 (report by Alain Bench).
    + correct logic for infocmp -i option, which did not account for
      multiple digits in a parameter (cf: 20040828) (report by Alain
      Bench).
    + move _nc_handle_sigwinch() to lib_setup.c to make --with-termlib
      option work after 20060114 changes (report by Arkadiusz Miskiewicz).
    + add copyright notices to test-programs as needed (report by William
      McBrine).

20060318
    + modify ncurses.c 'F' test to combine the wide-characters with color
      and/or video attributes.
    + modify test/ncurses to use CTL/Q or ESC consistently for exiting
      a test-screen (some commands used 'x' or 'q').

20060312
    + fix an off-by-one in the scrolling-region change (cf: 20060311).

20060311
    + add checks in waddchnstr() and wadd_wchnstr() to stop copying when
      a null character is found (report by Igor Bogomazov).
    + modify progs/Makefile.in to make "tput init" work properly with
      cygwin, i.e., do not pass a ".exe" in the reference string used
      in check_aliases (report by Samuel Thibault).
    + add some checks to ensure current position is within scrolling
      region before scrolling on a new line (report by Dan Gookin).
	+ change some NewChar() usage to static variables to work around stack garbage introduced when cchar_t is not packed (Redhat #182024).

20060225
	+ workarounds to build test/movewindow with PDCurses 2.7.
	+ fix for nsterm-16color entry (patch by Alain Bench).
	+ correct a typo in infocmp manpage (Debian #354281).

20060218
	+ add nsterm-16color entry -TD
	+ updated mlterm terminfo entry -TD
	+ remove 970913 feature for copying subwindows as they are moved in mvwin() (discussion with Bryan Christ).
	+ modify test/demo_menus.c to demonstrate moving a menu (both the window and subwindow) using shifted cursor-keys.
	+ start implementing recursive mvwin() in movewindow.c (incomplete).
	+ add a fallback definition for GCC_PRINTFLIKE() in test.priv.h, for movewindow.c (report by William McBrine).
	+ add help-message to test/movewindow.c

20060211
	+ add test/movewindow.c, to test mvderwin().
	+ fix ncurses soft-key test so color changes are shown immediately rather than delayed.
	+ modify ncurses soft-key test to hide the keys when exiting the test screen.
	+ fixes to build test programs with PDCurses 2.7, e.g., its headers rely on autoconf symbols, and it declares stubs for nonfunctional terminfo and termcap entrypoints.

20060204
	+ improved test/configure to build test/ncurses on HPUX 11 using the vendor curses.
	+ documented ALTERNATE CONFIGURATIONS in the ncurses manpage, for the benefit of developers who do not read INSTALL.

20060128
	+ correct form library Window_To_Buffer() change (cf: 20040516), which should ignore the video attributes (report by Ricardo Cantu).

20060121
	+ minor fixes to xmc-glitch experimental code:
	  + suppress line-drawing
	  + implement max_attributes
	  tested with xterm.
	+ minor fixes for the database iterator.
	+ fix some buffer limits in c++ demo (comment by Falk Hueffner in Debian #348117).

20060114
	+ add toe -a option, to show all databases. This uses new private interfaces in the ncurses library for iterating through the list of databases.
	+ fix toe from 20000909 changes which made it not look at $HOME/.terminfo
	+ make toe's -v option parameter optional as per manpage.
	+ improve SIGWINCH handling by postponing its effect during newterm(), etc., when allocating screens.

20060111
	+ modify wgetnstr() to return KEY_RESIZE if a sigwinch occurs. Use this in test/filter.c
	+ fix an error in filter() modification which caused some applications to fail.

20060107
	+ check if filter() was called when getting the screensize. Keep it at 1 if so (based on Redhat #174498).
	+ add extension nofilter().
	+ refined the workaround for ACS mapping.
	+ make ifdef's consistent in curses.h for the extended colors so the header file can be used for the normal curses library. The header file installed for extended colors is a variation of the wide-character configuration (report by Frederic L W Meunier).

20051231
	+ add a workaround to ACS mapping to allow applications such as test/blue.c to use the "PC ROM" characters by masking them with A_ALTCHARSET. This worked up until 5.5, but was lost in the revision of legacy coding (report by Michael Deutschmann).
	+ add a null-pointer check in the wide-character version of calculate_actual_width() (report by Victor Julien).
	+ improve test/ncurses 'd' (color-edit) test by allowing the RGB values to be set independently (patch by William McBrine).
	+ modify test/configure script to allow building test programs with PDCurses/X11.
	+ modified test programs to allow some to work with NetBSD curses. Several do not because NetBSD curses implements a subset of X/Open curses, and also lacks much of SVr4 additions. But it's enough for comparison.
	+ update config.guess and config.sub

20051224
	+ use BSD-specific fix for return-value from cgetent() from CVS where an unknown terminal type would be reported as "database not found".
	+ make tgetent() return code more readable using new symbols TGETENT_YES, etc.
	+ remove references to non-existent "tctest" program.
	+ remove TESTPROGS from progs/Makefile.in (it was referring to code that was never built in that directory).
	+ typos in curs_addchstr.3x, some doc files (noticed in OpenBSD CVS).

20051217
	+ add use_legacy_coding() function to support lynx's font-switching feature.
	+ fix formatting in curs_termcap.3x (report by Mike Frysinger).
	+ modify MKlib_gen.sh to change preprocessor-expanded _Bool back to bool.

20051210
	+ extend test/ncurses.c 's' (overlay window) test to exercise overlay(), overwrite() and copywin() with different combinations of colors and attributes (including background color) to make it easy to see the effect of the different functions.
	+ corrections to menu/m_global.c for wide-characters (report by Victor Julien).

20051203
	+ add configure option --without-dlsym, allowing developers to configure GPM support without using dlsym() (discussion with Michael Setzer).
	+ fix wins_nwstr(), which did not handle single-column non-8bit codes (Debian #341661).

20051126
	+ move prototypes for wide-character trace functions from curses.tail to curses.wide to avoid accidental reference to those if _XOPEN_SOURCE_EXTENDED is defined without ensuring that <wchar.h> is included.
	+ add/use NCURSES_INLINE definition.
	+ change some internal functions to use int/unsigned rather than the short equivalents.
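The TGETENT_* names introduced under 20051224 label the classic tgetent() return convention. The values below mirror ncurses' term.h and are repeated here purely as illustration; the `tgetent_found` helper is a hypothetical example, not an ncurses function:

```c
#include <assert.h>

/* Values mirror ncurses' term.h definitions for the classic
 * tgetent() return convention. */
#define TGETENT_ERR (-1)  /* error, e.g., terminfo/termcap database not found */
#define TGETENT_NO  (0)   /* no such terminal type */
#define TGETENT_YES (1)   /* entry found */

/* Hypothetical helper in the readable style the change aims for: */
static int tgetent_found(int code)
{
    return code == TGETENT_YES;
}
```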
20051119
	+ remove a redundant check in lib_color.c (Debian #335655).
	+ use ld's -search_paths_first option on Darwin to work around odd search rules on that platform (report by Christian Gennerat, analysis by Andrea Govoni).
	+ remove special case for Darwin in CF_XOPEN_SOURCE configure macro.
	+ ignore EINTR in tcgetattr/tcsetattr calls (Debian #339518).
	+ fix several bugs in test/bs.c (patch by Stephen Lindholm).

20051112
	+ other minor fixes to cygwin based on tack -TD
	+ correct smacs in cygwin (Debian #338234, report by Baurzhan Ismagulov, who noted that it was fixed in Cygwin).

20051029
	+ add shifted up/down arrow codes to xterm-new as kind/kri strings -TD
	+ modify wbkgrnd() to avoid clearing the A_CHARTEXT attribute bits since those record the state of multicolumn characters (Debian #316663).
	+ modify werase to clear multicolumn characters that extend into a derived window (Debian #316663).

20051022
	+ move assignment from environment variable ESCDELAY from initscr() down to newterm() so the environment variable affects timeouts for terminals opened with newterm() as well.
	+ fix a memory leak in keyname().
	+ add test/demo_altkeys.c
	+ modify test/demo_defkey.c to exit from loop via 'q' to allow leak-checking, as well as fix a buffer size in winnstr() call.

20051015
	+ correct order of use-clauses in rxvt-basic entry which made codes for f1-f4 vt100-style rather than vt220-style (report by Gabor Z Papp).
	+ suppress configure check for gnatmake if Ada95/Makefile.in is not found.
	+ correct a typo in configure --with-bool option for the case where --without-cxx is used (report by Daniel Jacobowitz).
	+ add a note to INSTALL's discussion of --with-normal, pointing out that one may wish to use --without-gpm to ensure a completely static link (prompted by report by Felix von Leitner).

20051010	5.5 release for upload to

20051008
	+ document in demo_forms.c some portability issues.

20051001
	+ document side-effect of werase() which sets the cursor position.
	+ save/restore the current position in form field editing to make overlay mode work.

20050924
	+ correct header dependencies in progs, allowing parallel make (report by Daniel Jacobowitz).
	+ modify CF_BUILD_CC to ensure that pre-setting $BUILD_CC overrides the configure check for --with-build-cc (report by Daniel Jacobowitz).
	+ modify CF_CFG_DEFAULTS to not use /usr as the default prefix for NetBSD.
	+ update config.guess and config.sub from

20050917
	+ modify sed expression which computes path for /usr/lib/terminfo symbolic link in install to ensure that it does not change unexpected levels of the path (Gentoo #42336).
	+ modify default for --disable-lp64 configure option to reduce impact on existing 64-bit builds. Enabling the _LP64 option may change the size of chtype and mmask_t. However, for ABI 6, it is enabled by default (report by Mike Frysinger).
	+ add configure script check for --enable-ext-mouse, bump ABI to 6 by default if it is used.
	+ improve configure script logic for bumping ABI to omit this if the --with-abi-version option was used.
	+ update address for Free Software Foundation in tack's source.
	+ correct wins_wch(), which was not marking the filler-cells of multi-column characters (cf: 20041023).

20050910
	+ modify mouse initialization to ensure that Gpm_Open() is called only once. Otherwise GPM gets confused in its initialization of signal handlers (Debian #326709).
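The Gpm_Open() fix above (20050910) amounts to a call-once guard around the GPM connection. A hedged sketch follows — the names are illustrative and the real ncurses logic in lib_mouse.c differs in detail:

```c
#include <stdbool.h>

/* Illustrative call-once guard: a second Gpm_Open() would confuse
 * GPM's signal-handler setup, so remember the first connection. */
static bool gpm_connected = false;

static bool connect_mouse_once(void)
{
    if (gpm_connected)
        return true;    /* already open: do not call Gpm_Open() again */
    /* ... a real implementation would call Gpm_Open() here ... */
    gpm_connected = true;
    return true;
}
```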
20050903
	+ modify logic for backspacing in a multiline form field to ensure that it works even when the preceding line is full (report by Frank van Vugt).
	+ remove comment about BUGS section of ncurses manpage (Debian #325481)

20050827
	+ document some workarounds for shared and libtool library configurations in INSTALL (see --with-shared and --with-libtool).
	+ modify CF_GCC_VERSION and CF_GXX_VERSION macros to accommodate cross-compilers which emit the platform name in their version message, e.g.,
	  arm-sa1100-linux-gnu-g++ (GCC) 4.0.1
	  (report by Frank van Vugt).

20050820
	+ start updating documentation for upcoming 5.5 release.
	+ fix to make libtool and libtinfo work together again (cf: 20050122).
	+ fixes to allow building traces into libtinfo
	+ add debug trace to tic that shows if/how ncurses will write to the lower corner of a terminal's screen.
	+ update llib-l* files.

20050813
	+ modify initializers in c++ binding to build with old versions of g++.
	+ improve special case for 20050115 repainting fix, ensuring that if the first changed cell is not a character that the range to be repainted is adjusted to start at a character's beginning (Debian #316663).

20050806
	+ fixes to build on QNX 6.1
	+ improve configure script checks for Intel 9.0 compiler.
	+ remove #include's for libc.h (obsolete).
	+ adjust ifdef's in curses.priv.h so that when cross-compiling to produce comp_hash and make_keys, no dependency on wchar.h is needed. That simplifies the build-cppflags (report by Frank van Vugt).
	+ move modules related to key-binding into libtinfo to fix linkage problem caused by 20050430 changes to MKkeyname.sh (report by Konstantin Andreev).
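The CF_GCC_VERSION change above (20050827) solves a parsing problem: cross-compilers prefix their version banner with the platform name, so the version must be located after the "(GCC) " marker rather than at a fixed position. The configure macro itself is written in shell/sed; this C helper is only a hypothetical sketch of the same idea, using the banner quoted in the entry:

```c
#include <string.h>
#include <stddef.h>

/* Return a pointer to the version text following "(GCC) " in a
 * compiler banner, or NULL if the marker is absent.  Illustrative
 * only; not part of ncurses. */
static const char *gcc_banner_version(const char *banner)
{
    const char *p = strstr(banner, "(GCC) ");
    return (p != NULL) ? p + 6 : NULL;
}
```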
20050723
	+ updates/fixes for configure script macros from vile -TD
	+ make prism9's sgr string agree with the rest of the terminfo -TD
	+ make vt220's sgr0 string consistent with sgr string, do this for several related cases -TD
	+ improve translation to termcap by filtering the 'me' (sgr0) strings as in the runtime call to tgetent() (prompted by a discussion with Thomas Klausner).
	+ improve tic check for sgr0 versus sgr(0), to help ensure that sgr0 resets line-drawing.

20050716
	+ fix special cases for trimming sgr0 for hurd and vt220 (Debian #318621).
	+ split-out _nc_trim_sgr0() from modifications made to tgetent(), to allow it to be used by tic to provide information about the runtime changes that would be made to sgr0 for termcap applications.
	+ modify make_sed.sh to make the group-name in the NAME section of form/menu library manpage agree with the TITLE string when renaming is done for Debian (Debian #78866).

20050702
	+ modify parameter type in c++ binding for insch() and mvwinsch() to be consistent with underlying ncurses library (was char, is chtype).
	+ modify treatment of Intel compiler to allow _GNU_SOURCE to be defined on Linux.
	+ improve configure check for nanosleep(), checking that it works since some older systems such as AIX 4.3 have a nonworking version.

20050625
	+ update config.guess and config.sub from
	+ modify misc/shlib to work in test-directory.
	+ suppress $suffix in misc/run_tic.sh when cross-compiling. This allows cross-compiles to use the host's tic program to handle the "make install.data" step.
	+ improve description of $LINES and $COLUMNS variables in manpages (prompted by report by Dave Ulrick).
	+ improve description of cross-compiling in INSTALL
	+ add NCURSES-Programming-HOWTO.html by Pradeep Padala (see).
	+ modify configure script to obtain soname for GPM library (discussion with Daniel Jacobowitz).
	+ modify configure script so that --with-chtype option will still compute the unsigned literals suffix for constants in curses.h (report by Daniel Jacobowitz).
	+ patches from Daniel Jacobowitz:
	  + the man_db.renames entry for tack.1 was backwards.
	  + tack.1 had some 1m's that should have been 1M's.
	  + the section for curs_inwstr.3 was wrong.

20050619
	+ correction to --with-chtype option (report by Daniel Jacobowitz).

20050618
	+ move build-time edit_man.sh and edit_man.sed scripts to top directory to simplify reusing them for renaming tack's manpage (prompted by a review of Debian package).
	+ revert minor optimization from 20041030 (Debian #313609).
	+ libtool-specific fixes, tested with libtool 1.4.3, 1.5.0, 1.5.6, 1.5.10 and 1.5.18 (all work except as noted previously for the c++ install using libtool 1.5.0):
	  + modify the clean-rule in c++/Makefile.in to work with IRIX64 make program.
	  + use $(LIBTOOL_UNINSTALL) symbol, overlooked in 20030830
	+ add configure options --with-chtype and --with-mmask-t, to allow overriding of the non-LP64 model's use of the corresponding types.
	+ revise test for size of chtype (and mmask_t), which always returned "long" due to an uninitialized variable (report by Daniel Jacobowitz).

20050611
	+ change _tracef's that used "%p" format for va_list values to ignore that, since on some platforms those are not pointers.
	+ fixes for long-formats in printf's due to largefile support.

20050604
	+ fixes for termcap support:
	  + reset pointer to _nc_curr_token.tk_name when the input stream is closed, which could point to free memory (cf: 20030215).
	  + delink TERMTYPE data which is used by the termcap reader, so that extended names data will be freed consistently.
	  + free pointer to TERMTYPE data in _nc_free_termtype() rather than its callers.
	  + add some entrypoints for freeing permanently allocated data via _nc_freeall() when NO_LEAKS is defined.
	+ amend 20041030 change to _nc_do_color to ensure that optimization is applied only when the terminal supports back_color_erase (bce).

20050528
	+ add sun-color terminfo entry -TD
	+ correct a missing assignment in c++ binding's method NCursesPanel::UserPointer() from 20050409 changes.
	+ improve configure check for large-files, adding check for dirent64 from vile -TD
	+ minor change to configure script to improve linker options for the Ada95 tree.

20050515
	+ document error conditions for ncurses library functions (report by Stanislav Ievlev).
	+ regenerated html documentation for ada binding. see

20050507
	+ regenerated html documentation for manpages.
	+ add $(BUILD_EXEEXT) suffix to invocation of make_keys in ncurses/Makefile (Gentoo #89772).
	+ modify c++/demo.cc to build with g++ -fno-implicit-templates option (patch by Mike Frysinger).
	+ modify tic to filter out long extended names when translating to termcap format. Only two characters are permissible for termcap capability names.

20050430
	+ modify terminfo entries xterm-new and rxvt to add strings for shift-, control-cursor keys.
	+ workaround to allow c++ binding to compile with g++ 2.95.3, which has a broken implementation of static_cast<> (patch by Jeff Chua).
	+ modify initialization of key lookup table so that if an extended capability (tic -x) string is defined, and its name begins with 'k', it will automatically be treated as a key.
	+ modify test/keynames.c to allow for the possibility of extended key names, e.g., via define_key(), or via "tic -x".
	+ add test/demo_termcap.c to show the contents of a given entry via the termcap interface.

20050423
	+ minor fixes for vt100/vt52 entries -TD
	+ add configure option --enable-largefile
	+ corrected libraries used to build Ada95/gen/gen, found in testing gcc 4.0.0.

20050416
	+ update config.guess, config.sub
	+ modify configure script check for _XOPEN_SOURCE, disable that on Darwin whose header files have problems (patch by Chris Zubrzycki).
	+ modify form library Is_Printable_String() to use iswprint() rather than wcwidth() for determining if a character is printable. The latter caused it to reject menu items containing non-spacing characters.
	+ modify ncurses test program's F-test to handle non-spacing characters by combining them with a reverse-video blank.
	+ review/fix several gcc -Wconversion warnings.

20050409
	+ correct an off-by-one error in m_driver() for mouse-clicks used to position the mouse to a particular item.
	+ implement test/demo_menus.c
	+ add some checks in lib_mouse to ensure SP is set.
	+ modify C++ binding to make 20050403 changes work with the configure --enable-const option.

20050403
	+ modify start_color() to return ERR if it cannot allocate memory.
	+ address g++ compiler warnings in C++ binding by adding explicit member initialization, assignment operators and copy constructors. Most of the changes simply preserve the existing semantics of the binding, which can leak memory, etc., but by making these features visible, it provides a framework for improving the binding.
	+ improve C++ binding using static_cast, etc.
	+ modify configure script --enable-warnings to add options to g++ to correspond to the gcc --enable-warnings.
	+ modify C++ binding to use some C internal functions to make it compile properly on Solaris (and other platforms).
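The Is_Printable_String() change above (20050416) hinges on a subtle point: a combining (non-spacing) character occupies zero columns, so a wcwidth()-based test rejects it even though it is printable. A sketch of the revised predicate — the form library's actual code differs:

```c
#include <wctype.h>

/* Printability test in the spirit of the 20050416 change: ask
 * iswprint() rather than requiring a positive column width, so
 * zero-width combining characters are accepted too. */
static int is_printable_wchar(wint_t wc)
{
    return iswprint(wc) != 0;
}
```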
20050327
	+ amend change from 20050320 to limit it to configurations with a valid locale.
	+ fix a bug introduced in 20050320 which broke the translation of nonprinting characters to uparrow form (report by Takahashi Tamotsu).

20050326
	+ add ifdef's for _LP64 in curses.h to avoid using wasteful 64-bits for chtype and mmask_t, but add configure option --disable-lp64 in case anyone used that configuration.
	+ update misc/shlib script to account for Mac OS X (report by Michail Vidiassov).
	+ correct comparison for wrapping multibyte characters in waddch_literal() (report by Takahashi Tamotsu).

20050320
	+ add -c and -w options to tset to allow user to suppress ncurses' resizing of the terminal emulator window in the special case where it is not able to detect the true size (report by Win Delvaux, Debian #300419).
	+ modify waddch_nosync() to account for locale zh_CN.GBK, which uses codes 128-159 as part of multibyte characters (report by Wang WenRui, Debian #300512).

20050319
	+ modify ncurses.c 'd' test to make it work with 88-color configuration, i.e., by implementing scrolling.
	+ improve scrolling in ncurses.c 'c' and 'C' tests, e.g., for 88-color configuration.

20050312
	+ change tracemunch to use strict checking.
	+ modify ncurses.c 'p' test to test line-drawing within a pad.
	+ implement environment variable NCURSES_NO_UTF8_ACS to support miscellaneous terminal emulators which ignore alternate character set escape sequences when in UTF-8 mode.

20050305
	+ change NCursesWindow::err_handler() to a virtual function (request by Steve Beal).
	+ modify fty_int.c and fty_num.c to handle wide characters (report by Wolfgang Gutjahr).
	+ adapt fix for fty_alpha.c to fty_alnum.c, which also handled normal and wide characters inconsistently (report by Wolfgang Gutjahr).
	+ update llib-* files to reflect internal interface additions/changes.

20050226
	+ improve test/configure script, adding tests for _XOPEN_SOURCE, etc., from lynx.
	+ add aixterm-16color terminfo entry -TD
	+ modified xterm-new terminfo entry to work with tgetent() changes -TD
	+ extended changes in tgetent() from 20040710 to allow the substring of sgr0 which matches rmacs to be at the beginning of the sgr0 string (request by Thomas Wolff). Wolff says the visual effect in combination with pre-20040710 ncurses is improved.
	+ fix off-by-one in winnstr() call which caused form field validation of multibyte characters to ignore the last character in a field.
	+ correct logic in winsch() for inserting multibyte strings; the code would clear cells after the insertion rather than push them to the right (cf: 20040228).
	+ fix an inconsistency in Check_Alpha_Field() between normal and wide character logic (report by Wolfgang Gutjahr).

20050219
	+ fix a bug in editing wide-characters in form library: deleting a nonwide character modified the previous wide-character.
	+ update manpage to describe NCURSES_MOUSE_VERSION 2.
	+ correct manpage description of mouseinterval() (Debian #280687).
	+ add a note to default_colors.3x explaining why this extension was added (Debian #295083).
	+ add traces to panel library.

20050212
	+ improve editing of wide-characters in form library: left/right cursor movement, and single-character deletions work properly.
	+ disable GPM mouse support when $TERM happens to be prefixed with "xterm". Gpm_Open() would otherwise assert that it can deal with mouse events in this case.
	+ modify GPM mouse support so it closes the server connection when the caller disables the mouse (report by Stanislav Ievlev).

20050205
	+ add traces for callback functions in form library.
	+ add experimental configure option --enable-ext-mouse, which defines NCURSES_MOUSE_VERSION 2, and modifies the encoding of mouse events to support wheel mice, which may transmit buttons 4 and 5. This works with xterm and similar X terminal emulators (prompted by question by Andreas Henningsson, this is also related to Debian #230990).
	+ improve configure macros CF_XOPEN_SOURCE and CF_POSIX_C_SOURCE to avoid redefinition warnings on cygwin.

20050129
	+ merge remaining development changes for extended colors (mostly complete, does not appear to break other configurations).
	+ add xterm-88color.dat (part of extended colors testing).
	+ improve _tracedump() handling of color pairs past 96.
	+ modify return-value from start_color() to return OK if colors have already been started.
	+ modify curs_color.3x to list error conditions for init_pair(), pair_content() and color_content().
	+ modify pair_content() to return -1 for consistency with init_pair() if it corresponds to the default-color.
	+ change internal representation of default-color to allow application to use color number 255. This does not affect the total number of color pairs which are allowed.
	+ add a top-level tags rule.

20050122
	+ add a null-pointer check in wgetch() in case it is called without first calling initscr().
	+ add some null-pointer checks for SP, which is not set by libtinfo.
	+ modify misc/shlib to ensure that absolute pathnames are used.
	+ modify test/Makefile.in, etc., to link test programs only against the libraries needed, e.g., omit form/menu/panel library for the ones that are curses-specific.
	+ change SP->_current_attr to a pointer, adjust ifdef's to ensure that libtinfo.so and libtinfow.so have the same ABI. The reason for this is that the corresponding data which belongs to the upper-level ncurses library has a different size in each model (report by Stanislav Ievlev).

20050115
	+ minor fixes to allow test-compiles with g++.
	+ correct column value shown in tic's warnings, which did not account for leading whitespace.
	+ add a check in _nc_trans_string() for improperly ended strings, i.e., where a following line begins in column 1.
	+ modify _nc_save_str() to return a null pointer on buffer overflow.
	+ improve repainting while scrolling wide-character data (Eungkyu Song).

20050108
	+ merge some development changes to extend color capabilities.

20050101
	+ merge some development changes to extend color capabilities.
	+ fix manpage typo (FreeBSD report docs/75544).
	+ update config.guess, config.sub
	> patches for configure script (Albert Chin-A-Young):
	+ improved fix to make mbstate_t recognized on HPUX 11i (cf: 20030705), making vsscanf() prototype visible on IRIX64. Tested on HP-UX 11i, Solaris 7, 8, 9, AIX 4.3.3, 5.2, Tru64 UNIX 4.0D, 5.1, IRIX64 6.5, Redhat Linux 7.1, 9, and RHEL 2.1, 3.0.
	+ print the result of the --disable-home-terminfo option.
	+ use -rpath when compiling with SGI C compiler.

20041225
	+ add trace calls to remaining public functions in form and menu libraries.
	+ fix check for numeric digits in test/ncurses.c 'b' and 'B' tests.
	+ fix typo in test/ncurses.c 'c' test from 20041218.

20041218
	+ revise test/ncurses.c 'c' color test to improve use for xterm-88color and xterm-256color, added 'C' test using the wide-character color_set and attr_set functions.

20041211
	+ modify configure script to work with Intel compiler.
	+ fix a limit-check in wadd_wchnstr() which caused labels in the forms-demo to be one character short.
	+ fix typo in curs_addchstr.3x (Jared Yanovich).
	+ add trace calls to most functions in form and menu libraries.
	+ update working-position for adding wide-characters when window is scrolled (prompted by related report by Eungkyu Song).

20041204
	+ replace some references on Linux to wcrtomb() which use it to obtain the length of a multibyte string with _nc_wcrtomb, since wcrtomb() is broken in glibc (see Debian #284260).
	+ corrected length-computation in wide-character support for field_buffer().
	+ some fixes to frm_driver.c to allow it to accept multibyte input.
	+ modify configure script to work with Intel 8.0 compiler.

20041127
	+ amend change to setupterm() in 20030405 which would reuse the value of cur_term if the same output was selected. This now reuses it only when setupterm() is called from tgetent(), which has no notion of separate SCREENs. Note that tgetent() must be called after initscr() or newterm() to use this feature (Redhat Bugzilla #140326).
	+ add a check in CF_BUILD_CC macro to ensure that developer has given the --with-build-cc option when cross-compiling (report by Alexandre Campo).
	+ improved configure script checks for _XOPEN_SOURCE and _POSIX_C_SOURCE (fix for IRIX 5.3 from Georg Schwarz, _POSIX_C_SOURCE updates from lynx).
	+ cosmetic fix to test/gdc.c to recolor the bottom edge of the box for consistency (comment by Dan Nelson).

20041120
	+ update wsvt25 terminfo entry -TD
	+ modify test/ins_wide.c to test all flavors of ins_wstr().
	+ ignore filler-cells in wadd_wchnstr() when adding a cchar_t array which consists of multi-column characters, since this function constructs them (cf: 20041023).
	+ modify winnstr() to return multibyte character strings for the wide-character configuration.

20041106
	+ fixes to make slk_set() and slk_wset() accept and store multibyte or multicolumn characters.

20041030
	+ improve color optimization a little by making _nc_do_color() check if the old/new pairs are equivalent to the default pair 0.
	+ modify assume_default_colors() to not require that use_default_colors() be called first.

20041023
	+ modify term_attrs() to use termattrs(), add the extended attributes such as enter_horizontal_hl_mode for WA_HORIZONTAL to term_attrs().
	+ add logic in waddch_literal() to clear orphaned cells when one multi-column character partly overwrites another.
	+ improved logic for clearing cells when a multi-column character must be wrapped to a new line.
	+ revise storage of cells for multi-column characters to correct a problem with repainting. In the old scheme, it was possible for doupdate() to decide that only part of a multi-column character should be repainted since the filler cells stored only an attribute to denote them as fillers, rather than the character value and the attribute.

20041016
	+ minor fixes for traces.
	+ add SP->_screen_acs_map[], used to ensure that mapping of missing line-drawing characters is handled properly. For example, ACS_DARROW is absent from xterm-new, and it was coincidentally displayed the same as ACS_BTEE.

20041009
	+ amend 20021221 workaround for broken acs to reset the sgr, rmacs and smacs strings as well. Also modify the check for screen's limitations in that area to allow the multi-character shift-in and shift-out which seem to work.
	+ change GPM initialization, using dl library to load it dynamically at runtime (Debian #110586).
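The 20041023 storage revision above can be sketched as follows: filler cells of a multi-column character now record the character value and the attribute, not just a filler attribute, so doupdate() can tell that a partial repaint must cover the whole character. The struct and field names below are illustrative, not ncurses' cchar_t:

```c
/* Illustrative cell model for the 20041023 repainting fix. */
struct demo_cell {
    unsigned ch;    /* base character, stored in fillers as well */
    unsigned attr;  /* video attributes */
    int is_filler;  /* nonzero for trailing columns of a wide char */
};

/* Write a width-column character into a row, marking trailing
 * columns as fillers that still carry the character and attribute. */
static void demo_put_wide(struct demo_cell *row, int col,
                          unsigned ch, unsigned attr, int width)
{
    int i;
    for (i = 0; i < width; ++i) {
        row[col + i].ch = ch;    /* the old scheme dropped this in fillers */
        row[col + i].attr = attr;
        row[col + i].is_filler = (i != 0);
    }
}

/* Self-check: a two-column character at column 1 leaves a filler at
 * column 2 with the same character and attribute. */
static int demo_check(void)
{
    struct demo_cell row[4] = {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0}};
    demo_put_wide(row, 1, 0x4E2D, 7, 2);
    return row[1].is_filler == 0 && row[2].is_filler == 1
        && row[2].ch == 0x4E2D && row[2].attr == 7;
}
```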
20041002
	+ correct logic for color pair in setcchar() and getcchar() (patch by Marcin 'Qrczak' Kowalczyk).
	+ add t/T commands to ncurses b/B tests to allow a different color to be tested for the attrset part of the test than is used in the background color.

20040925
	+ fix to make setcchar() work when its wchar_t* parameter is pointing to a string which contains more data than can be converted.
	+ modify wget_wstr() and example in ncurses.c to work if wchar_t and wint_t are different sizes (report by Marcin 'Qrczak' Kowalczyk).

20040918
	+ remove check in wget_wch() added to fix an infinite loop; it appears to have been working around a transitory glibc bug, and interferes with normal operation (report by Marcin 'Qrczak' Kowalczyk).
	+ correct wadd_wch() and wecho_wch(), which did not pass the rendition information (report by Marcin 'Qrczak' Kowalczyk).
	+ fix aclocal.m4 so that the wide-character version of ncurses gets compiled as libncursesw.5.dylib, instead of libncurses.5w.dylib (adapted from patch by James J Ramsey).
	+ change configure script for --with-caps option to indicate that it is no longer experimental.
	+ change configure script to reflect the fact that --enable-widec has not been "experimental" since 5.3 (report by Bruno Lustosa).

20040911
	+ add 'B' test to ncurses.c, to exercise some wide-character functions.

20040828
	+ modify infocmp -i option to match 8-bit controls against its table entries, e.g., so it can analyze the xterm-8bit entry.
	+ add morphos terminfo entry, improve amiga-8bit entry (Pavel Fedin).
	+ correct translation of "%%" in terminfo format to termcap, e.g., using "tic -C" (Redhat Bugzilla #130921).
3034 + modified configure script CF_XOPEN_SOURCE macro to ensure that if 3035 it defines _POSIX_C_SOURCE, that it defines it to a specific value 3036 (comp.os.stratus newsgroup comment). 3037 3038 20040821 3039 + fixes to build with Ada95 binding with gnat 3.4 (all warnings are 3040 fatal, and gnat does not follow the guidelines for pragmas). 3041 However that did find a coding error in Assume_Default_Colors(). 3042 + modify several terminfo entries to ensure xterm mouse and cursor 3043 visibility are reset in rs2 string: hurd, putty, gnome, 3044 konsole-base, mlterm, Eterm, screen (Debian #265784, #55637). The 3045 xterm entries are left alone - old ones for compatibility, and the 3046 new ones do not require this change. -TD 3047 3048 20040814 3049 + fake a SIGWINCH in newterm() to accommodate buggy terminal emulators 3050 and window managers (Debian #265631). 3051 > terminfo updates -TD 3052 + remove dch/dch1 from rxvt because they are implemented inconsistently 3053 with the common usage of bce/ech 3054 + remove khome from vt220 (vt220's have no home key) 3055 + add rxvt+pcfkeys 3056 3057 20040807 3058 + modify test/ncurses.c 'b' test, adding v/V toggles to cycle through 3059 combinations of video attributes so that for instance bold and 3060 underline can be tested. This made the legend too crowded, added 3061 a help window as well. 3062 + modify test/ncurses.c 'b' test to cycle through default colors if 3063 the -d option is set. 3064 + update putty terminfo entry (Robert de Bath). 3065 3066 20040731 3067 + modify test/cardfile.c to allow it to read more data than can be 3068 displayed. 3069 + correct logic in resizeterm.c which kept it from processing all 3070 levels of window hierarchy (reports by Folkert van Heusden, 3071 Chris Share). 3072 3073 20040724 3074 + modify "tic -cv" to ignore delays when comparing strings. 
Also 3075 modify it to ignore a canceled sgr string, e.g., for terminals which 3076 cannot properly combine attributes in one control sequence. 3077 + corrections for gnome and konsole entries (Redhat Bugzilla #122815, 3078 patch by Hans de Goede) 3079 > terminfo updates -TD 3080 + make ncsa-m rmacs/smacs consistent with sgr 3081 + add sgr, rc/sc and ech to syscons entries 3082 + add function-keys to decansi 3083 + add sgr to mterm-ansi 3084 + add sgr, civis, cnorm to emu 3085 + correct/simplify cup in addrinfo 3086 3087 20040717 3088 > terminfo updates -TD 3089 + add xterm-pc-fkeys 3090 + review/update gnome and gnome-rh90 entries (prompted by Redhat 3091 Bugzilla #122815). 3092 + review/update konsole entries 3093 + add sgr, correct sgr0 for kterm and mlterm 3094 + correct tsl string in kterm 3095 3096 20040711 3097 + add configure option --without-xterm-new 3098 3099 20040710 3100 + add check in wget_wch() for printable bytes that are not part of a 3101 multibyte character. 3102 + modify wadd_wchnstr() to render text using window's background 3103 attributes. 3104 + improve tic's check to compare sgr and sgr0. 3105 + fix c++ directory's .cc.i rule. 3106 + modify logic in tgetent() which adjusts the termcap "me" string 3107 to work with ISO-2022 string used in xterm-new (cf: 20010908). 3108 + modify tic's check for conflicting function keys to omit that if 3109 converting termcap to termcap format. 3110 + add -U option to tic and infocmp. 3111 + add rmam/smam to linux terminfo entry (Trevor Van Bremen) 3112 > terminfo updates -TD 3113 + minor fixes for emu 3114 + add emu-220 3115 + change wyse acsc strings to use 'i' map rather than 'I' 3116 + fixes for avatar0 3117 + fixes for vp3a+ 3118 3119 20040703 3120 + use tic -x to install terminfo database -TD 3121 + add -x to infocmp's usage message. 3122 + correct field used for comparing O_ROWMAJOR in set_menu_format() 3123 (report/patch by Tony Li). 
3124 + fix a missing nul check in set_field_buffer() from 20040508 changes. 3125 > terminfo updates -TD 3126 + make xterm-xf86-v43 derived from xterm-xf86-v40 rather than 3127 xterm-basic -TD 3128 + align with xterm patch #192's use of xterm-new -TD 3129 + update xterm-new and xterm-8bit for cvvis/cnorm strings -TD 3130 + make xterm-new the default "xterm" entry -TD 3131 3132 20040626 3133 + correct BUILD_CPPFLAGS substitution in ncurses/Makefile.in, to allow 3134 cross-compiling from a separate directory tree (report/patch by 3135 Dan Engel). 3136 + modify is_term_resized() to ensure that window sizes are nonzero, 3137 as documented in the manpage (report by Ian Collier). 3138 + modify CF_XOPEN_SOURCE configure macro to make Hurd port build 3139 (Debian #249214, report/patch by Jeff Bailey). 3140 + configure-script mods from xterm, e.g., updates to CF_ADD_CFLAGS 3141 + update config.guess, config.sub 3142 > terminfo updates -TD 3143 + add mlterm 3144 + add xterm-xf86-v44 3145 + modify xterm-new aka xterm-xfree86 to accommodate luit, which 3146 relies on G1 being used via an ISO-2022 escape sequence (report by 3147 Juliusz Chroboczek) 3148 + add 'hurd' entry 3149 3150 20040619 3151 + reconsidered winsnstr(), decided after comparing other 3152 implementations that wrapping is an X/Open documentation error. 3153 + modify test/inserts.c to test all flavors of insstr(). 3154 3155 20040605 3156 + add setlocale() calls to a few test programs which may require it: 3157 demo_forms.c, filter.c, ins_wide.c, inserts.c 3158 + correct a few misspelled function names in ncurses-intro.html (report 3159 by Tony Li). 3160 + correct internal name of key_defined() manpage, which conflicted with 3161 define_key(). 3162 3163 20040529 3164 + correct size of internal pad used for holding wide-character 3165 field_buffer() results. 3166 + modify data_ahead() to work with wide-characters. 
3167 3168 20040522 3169 + improve description of terminfo if-then-else expressions (suggested 3170 by Arne Thomassen). 3171 + improve test/ncurses.c 'd' test, allow it to use external file for 3172 initial palette (added xterm-16color.dat and linux-color.dat), and 3173 reset colors to the initial palette when starting/ending the test. 3174 + change limit-check in init_color() to allow r/g/b component to 3175 reach 1000 (cf: 20020928). 3176 3177 20040516 3178 + modify form library to use cchar_t's rather than char's in the 3179 wide-character configuration for storing data for field buffers. 3180 + correct logic of win_wchnstr(), which did not work for more than 3181 one cell. 3182 3183 20040508 3184 + replace memset/memcpy usage in form library with for-loops to 3185 simplify changing the datatype of FIELD.buf, part of wide-character 3186 changes. 3187 + fix some inconsistent use of #if/#ifdef (report by Alain Guibert). 3188 3189 20040501 3190 + modify menu library to account for actual number of columns used by 3191 multibyte character strings, in the wide-character configuration 3192 (adapted from patch by Philipp Tomsich). 3193 + add "-x" option to infocmp like tic's "-x", for use in "-F" 3194 comparisons. This modifies infocmp to only report extended 3195 capabilities if the -x option is given, making this more consistent 3196 with tic. Some scripts may break, since infocmp previous gave this 3197 information without an option. 3198 + modify termcap-parsing to retain 2-character aliases at the beginning 3199 of an entry if the "-x" option is used in tic. 3200 3201 20040424 3202 + minor compiler-warning and test-program fixes. 3203 3204 20040417 3205 + modify tic's missing-sgr warning to apply to terminfo only. 3206 + free some memory leaks in tic. 3207 + remove check in post_menu() that prevented menus from extending 3208 beyond the screen (request by Max J. Werner). 
3209 + remove check in newwin() that prevents allocating windows 3210 that extend beyond the screen. Solaris curses does this. 3211 + add ifdef in test/color_set.c to allow it to compile with older 3212 curses. 3213 + add napms() calls to test/dots.c to make it not be a CPU hog. 3214 3215 20040403 3216 + modify unctrl() to return null if its parameter does not correspond 3217 to an unsigned char. 3218 + add some limit-checks to guard isprint(), etc., from being used on 3219 values that do not fit into an unsigned char (report by Sami Farin). 3220 3221 20040328 3222 + fix a typo in the _nc_get_locale() change. 3223 3224 20040327 3225 + modify _nc_get_locale() to use setlocale() to query the program's 3226 current locale rather than using getenv(). This fixes a case in tin 3227 which relies on legacy treatment of 8-bit characters when the locale 3228 is not initialized (reported by Urs Jansen). 3229 + add sgr string to screen's and rxvt's terminfo entries -TD. 3230 + add a check in tic for terminfo entries having an sgr0 but no sgr 3231 string. This confuses Tru64 and HPUX curses when combined with 3232 color, e.g., making them leave line-drawing characters in odd places. 3233 + correct casts used in ABSENT_BOOLEAN, CANCELLED_BOOLEAN, matches the 3234 original definitions used in Debian package to fix PowerPC bug before 3235 20030802 (Debian #237629). 3236 3237 20040320 3238 + modify PutAttrChar() and PUTC() macro to improve use of 3239 A_ALTCHARSET attribute to prevent line-drawing characters from 3240 being lost in situations where the locale would otherwise treat the 3241 raw data as nonprintable (Debian #227879). 3242 3243 20040313 3244 + fix a redefinition of CTRL() macro in test/view.c for AIX 5.2 (report 3245 by Jim Idle). 3246 + remove ".PP" after ".SH NAME" in a few manpages; this confuses 3247 some apropos script (Debian #237831). 
3248 3249 20040306 3250 + modify ncurses.c 'r' test so editing commands, like inserted text, 3251 set the field background, and the state of insert/overlay editing 3252 mode is shown in that test. 3253 + change syntax of dummy targets in Ada95 makefiles to work with pmake. 3254 + correct logic in test/ncurses.c 'b' for noncolor terminals which 3255 did not recognize a quit-command (cf: 20030419). 3256 3257 20040228 3258 + modify _nc_insert_ch() to allow for its input to be part of a 3259 multibyte string. 3260 + split out lib_insnstr.c, to prepare to rewrite it. X/Open states 3261 that this function performs wrapping, unlike all of the other 3262 insert-functions. Currently it does not wrap. 3263 + check for nl_langinfo(CODESET), use it if available (report by 3264 Stanislav Ievlev). 3265 + split-out CF_BUILD_CC macro, actually did this for lynx first. 3266 + fixes for configure script CF_WITH_DBMALLOC and CF_WITH_DMALLOC, 3267 which happened to work with bash, but not with Bourne shell (report 3268 by Marco d'Itri via tin-dev). 3269 3270 20040221 3271 + some changes to adapt the form library to wide characters, incomplete 3272 (request by Mike Aubury). 3273 + add symbol to curses.h which can be used to suppress include of 3274 stdbool.h, e.g., 3275 #define NCURSES_ENABLE_STDBOOL_H 0 3276 #include <curses.h> 3277 (discussion on XFree86 mailing list). 3278 3279 20040214 3280 + modify configure --with-termlib option to accept a value which sets 3281 the name of the terminfo library. This would allow a packager to 3282 build libtinfow.so renamed to coincide with libtinfo.so (discussion 3283 with Stanislav Ievlev). 3284 + improve documentation of --with-install-prefix, --prefix and 3285 $(DESTDIR) in INSTALL (prompted by discussion with Paul Lew). 
3286 + add configure check if the compiler can use -c -o options to rename 3287 its output file, use that to omit the 'cd' command which was used to 3288 ensure object files are created in a separate staging directory 3289 (prompted by comments by Johnny Wezel, Martin Mokrejs). 3290 3291 20040208 5.4 release for upload to 3292 + update TO-DO. 3293 3294 20040207 pre-release 3295 + minor fixes to _nc_tparm_analyze(), i.e., do not count %i as a param, 3296 and do not count %d if it follows a %p. 3297 + correct an inconsistency between handling of codes in the 128-255 3298 range, e.g., as illustrated by test/ncurses.c f/F tests. In POSIX 3299 locale, the latter did not show printable results, while the former 3300 did. 3301 + modify MKlib_gen.sh to compensate for broken C preprocessor on Mac 3302 OS X, which alters "%%" to "% % " (report by Robert Simms, fix 3303 verified by Scott Corscadden). 3304 3305 20040131 pre-release 3306 + modify SCREEN struct to align it between normal/wide curses flavors 3307 to simplify future changes to build a single version of libtinfo 3308 (patch by Stanislav Ievlev). 3309 + document handling of carriage return by addch() in manpage. 3310 + document special features of unctrl() in manpage. 3311 + documented interface changes in INSTALL. 3312 + corrected control-char test in lib_addch.c to account for locale 3313 (Debian #230335, cf: 971206). 3314 + updated test/configure.in to use AC_EXEEXT and AC_OBJEXT. 3315 + fixes to compile Ada95 binding with Debian gnat 3.15p-4 package. 3316 + minor configure-script fixes for older ports, e.g., BeOS R4.5. 3317 3318 20040125 pre-release 3319 + amend change to PutAttrChar() from 20030614 which computed the number 3320 of cells for a possibly multi-cell character. The 20030614 change 3321 forced the cell to a blank if the result from wcwidth() was not 3322 greater than zero. However, wcwidth() called for parameters in the 3323 range 128-255 can give this return value. 
The logic now simply 3324 ensures that the number of cells is greater than zero without 3325 modifying the displayed value. 3326 3327 20040124 pre-release 3328 + looked good for 5.4 release for upload to (but see above) 3329 + modify configure script check for ranlib to use AC_CHECK_TOOL, since 3330 that works better for cross-compiling. 3331 3332 20040117 pre-release 3333 + modify lib_get_wch.c to prefer mblen/mbtowc over mbrlen/mbrtowc to 3334 work around core dump in Solaris 8's locale support, e.g., for 3335 zh_CN.GB18030 (report by Saravanan Bellan). 3336 + add includes for <stdarg.h> and <stdio.h> in configure script macro 3337 to make <wchar.h> check work with Tru64 4.0d. 3338 + add terminfo entry for U/Win -TD 3339 + add terminfo entries for SFU aka Interix aka OpenNT (Federico 3340 Bianchi). 3341 + modify tput's error messages to prefix them with the program name 3342 (report by Vincent Lefevre, patch by Daniel Jacobowitz (see Debian 3343 #227586)). 3344 + correct a place in tack where exit_standout_mode was used instead of 3345 exit_attribute_mode (patch by Jochen Voss (see Debian #224443)). 3346 + modify c++/cursesf.h to use const in the Enumeration_Field method. 3347 + remove an ambiguous (actually redundant) method from c++/cursesf.h 3348 + make $HOME/.terminfo update optional (suggested by Stanislav Ievlev). 3349 + improve sed script which extracts libtool's version in the 3350 CF_WITH_LIBTOOL macro. 3351 + add ifdef'd call to AC_PROG_LIBTOOL to CF_WITH_LIBTOOL macro (to 3352 simplify local patch for Albert Chin-A-Young).. 3353 + add $(CXXFLAGS) to link command in c++/Makefile.in (adapted from 3354 patch by Albert Chin-A-Young).. 3355 + fix a missing substitution in configure.in for "$target" needed for 3356 HPUX .so/.sl case. 3357 + resync CF_XOPEN_SOURCE configure macro with lynx; fixes IRIX64 and 3358 NetBSD 1.6 conflicts with _XOPEN_SOURCE. 
3359 + make check for stdbool.h more specific, to ensure that including it 3360 will actually define/declare bool for the configured compiler. 3361 + rewrite ifdef's in curses.h relating NCURSES_BOOL and bool. The 3362 intention of that is to #define NCURSES_BOOL as bool when the 3363 compiler declares bool, and to #define bool as NCURSES_BOOL when it 3364 does not (reported by Jim Gifford, Sam Varshavchik, cf: 20031213). 3365 3366 20040110 pre-release 3367 + change minor version to 4, i.e., ncurses 5.4 3368 + revised/improved terminfo entries for tvi912b, tvi920b (Benjamin C W 3369 Sittler). 3370 + simplified ncurses/base/version.c by defining the result from the 3371 configure script rather than using sprintf (suggested by Stanislav 3372 Ievlev). 3373 + remove obsolete casts from c++/cursesw.h (reported by Stanislav 3374 Ievlev). 3375 + modify configure script so that when configuring for termlib, programs 3376 such as tic are not linked with the upper-level ncurses library 3377 (suggested by Stanislav Ievlev). 3378 + move version.c from ncurses/base to ncurses/tinfo to allow linking 3379 of tic, etc., using libtinfo (suggested by Stanislav Ievlev). 3380 3381 20040103 3382 + adjust -D's to build ncursesw on OpenBSD. 3383 + modify CF_PROG_EXT to make OS/2 build with EXEEXT. 3384 + add pecho_wchar(). 3385 + remove <wctype.h> include from lib_slk_wset.c which is not needed (or 3386 available) on older platforms. 3387 3388 20031227 3389 + add -D's to build ncursew on FreeBSD 5.1. 3390 + modify shared library configuration for FreeBSD 4.x/5.x to add the 3391 soname information (request by Marc Glisse). 3392 + modify _nc_read_tic_entry() to not use MAX_ALIAS, but PATH_MAX only 3393 for limiting the length of a filename in the terminfo database. 3394 + modify termname() to return the terminal name used by setupterm() 3395 rather than $TERM, without truncating to 14 characters as documented 3396 by X/Open (report by Stanislav Ievlev, cf: 970719). 
3397 + re-add definition for _BSD_TYPES, lost in merge (cf: 20031206). 3398 3399 20031220 3400 + add configure option --with-manpage-format=catonly to address 3401 behavior of BSDI, allow install of man+cat files on NetBSD, whose 3402 behavior has diverged by requiring both to be present. 3403 + remove leading blanks from comment-lines in manlinks.sed script to 3404 work with Tru64 4.0d. 3405 + add screen.linux terminfo entry (discussion on mutt-users mailing 3406 list). 3407 3408 20031213 3409 + add a check for tic to flag missing backslashes for termcap 3410 continuation lines. ncurses reads the whole entry, but termcap 3411 applications do not. 3412 + add configure option "--with-manpage-aliases" extending 3413 "--with-manpage-aliases" to provide the option of generating ".so" 3414 files rather than symbolic links for manpage aliases. 3415 + add bool definition in include/curses.h.in for configurations with no 3416 usable C++ compiler (cf: 20030607). 3417 + fix pathname of SigAction.h for building with --srcdir (reported by 3418 Mike Castle). 3419 3420 20031206 3421 + folded ncurses/base/sigaction.c into includes of ncurses/SigAction.h, 3422 since that header is used only within ncurses/tty/lib_tstp.c, for 3423 non-POSIX systems (discussion with Stanislav Ievlev). 3424 + remove obsolete _nc_outstr() function (report by Stanislav Ievlev 3425 <inger@altlinux.org>). 3426 + add test/background.c and test/color_set.c 3427 + modify color_set() function to work with color pair 0 (report by 3428 George Andreou <gbandreo@tem.uoc.gr>). 3429 + add configure option --with-trace, since defining TRACE seems too 3430 awkward for some cases. 3431 + remove a call to _nc_free_termtype() from read_termtype(), since the 3432 corresponding buffer contents were already zeroed by a memset (cf: 3433 20000101). 
3434 + improve configure check for _XOPEN_SOURCE and related definitions, 3435 adding special cases for Solaris' __EXTENSIONS__ and FreeBSD's 3436 __BSD_TYPES (reports by Marc Glisse <marc.glisse@normalesup.org>). 3437 + small fixes to compile on Solaris and IRIX64 using cc. 3438 + correct typo in check for pre-POSIX sort options in MKkey_defs.sh 3439 (cf: 20031101). 3440 3441 20031129 3442 + modify _nc_gettime() to avoid a problem with arithmetic on unsigned 3443 values (Philippe Blain). 3444 + improve the nanosleep() logic in napms() by checking for EINTR and 3445 restarting (Philippe Blain). 3446 + correct expression for "%D" in lib_tgoto.c (Juha Jarvi 3447 | https://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;h=15436e70b79dbabd7ef81c19aed24d09e11c87ad;hb=d91d170b303a5e8acb8c3e1327f6d881974fecdc | CC-MAIN-2022-40 | refinedweb | 25,528 | 66.44 |
xdv 0.4b3
XDV implements a subset of Deliverance using a pure XSLT engine. With XDV, you "compile" your theme and ruleset in one step, then use a superfast, simple transform on each request thereafter. Alternatively, you can compile your theme during development, check it into Subversion, and never touch XDV during deployment.

Consider a scenario where you have a dynamic website, to which you want to apply a theme built by a web designer. The web designer is not familiar with the technology behind the dynamic website, and so has supplied a "static HTML" version of the site. This consists of an HTML file with more-or-less semantic markup, one or more style sheets, and perhaps some other resources like images or JavaScript files.
Using XDV, you could apply this theme to your dynamic website as follows:
- Identify the placeholders in the theme file that need to be replaced with dynamic elements. Ideally, these should be clearly identifiable, for example with a unique HTML id attribute.
- Identify the corresponding markup in the dynamic website. Then write a “replace” or “copy” rule using XDV’s rules syntax that replaces the theme’s static placeholder with the dynamic content.
- Identify markup in the dynamic website that should be copied wholesale into the theme. CSS and JavaScript links in the <head /> are often treated this way. Write an XDV “append” or “prepend” rule to copy these elements over.
- Identify parts of the theme and/or dynamic website that are superfluous. Write an XDV “drop” rule to remove these elements.
The rules file is written using a simple XML syntax. Elements in the theme and “content” (the dynamic website) can be identified using CSS3 or XPath selectors.
Once you have a theme HTML file and a rules XML file, you compile these using the XDV compiler into a single XSLT file. You can then deploy this XSLT file with your application. An XSLT processor (such as mod_transform in Apache) will then transform the dynamic content from your website into the themed content your end users see. The transformation takes place on-the-fly for each request.
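The per-request step is just an XSLT transform, so any XSLT processor can run it. Below is a minimal Python sketch using lxml; the tiny inline stylesheet merely stands in for a real compiled theme.xsl (which the XDV compiler would generate and which would be far larger):

```python
from lxml import etree

# Toy "compiled theme": an identity transform plus one rule that
# replaces the <title> element, to show a rule firing.
theme_xslt = etree.XSLT(etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <xsl:template match="/html/head/title">
    <title>Themed site</title>
  </xsl:template>
</xsl:stylesheet>"""))

def render(dynamic_html):
    """Apply the compiled theme to one response body."""
    return str(theme_xslt(etree.HTML(dynamic_html)))

print(render("<html><head><title>Raw app</title></head><body/></html>"))
```

In a deployment with mod_transform, the same stylesheet is applied by Apache instead of application code; the principle is identical.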
Bear in mind that:
- You never have to write, or even read, a line of XSLT (unless you want to).
- The XSLT transformation that takes place for each request is very fast.
- Static theme resources (like images, stylesheets or JavaScript files) can be served from a static webserver, which is normally much faster than serving them from a dynamic application.
- You can leave the original theme HTML untouched, which makes it easier to re-use for other scenarios. For example, you can stitch two unrelated applications together by using a single theme file with separate rules files. This would result in two compiled XSLT files. You could use location match rules or similar techniques to choose which one to invoke for a given request.

The rules file's root element is <rules />:
<rules xmlns="…"
       xmlns:css="…">
    ...
</rules>
Here we have defined two namespaces: the default namespace is used for rules and XPath selectors. The css namespace is used for CSS3 selectors. These are functionally equivalent. In fact, CSS selectors are replaced by the equivalent XPath selector during the pre-processing step of the compiler. Thus, they have no performance impact.
XDV supports complex CSS3 and XPath selectors, including things like the nth-child pseudo-selector. You are advised to consult a good reference if you are new to XPath and/or CSS3.
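To make the CSS/XPath equivalence concrete, here is a small illustration (using lxml directly, not xdv): the CSS child selector `#portal-content > *` corresponds to the XPath expression used below. The element ids are made up for the example:

```python
from lxml import etree

doc = etree.XML(
    '<html><body>'
    '<div id="portal-content"><h1>Title</h1><p>Body</p></div>'
    '</body></html>'
)

# CSS "#portal-content > *" selects the direct children of the element
# with that id; this is the equivalent XPath:
children = doc.xpath('//*[@id="portal-content"]/*')
print([el.tag for el in children])
```

The compiler performs this kind of translation for every `css:` attribute before generating XSLT, which is why the two notations cost the same at request time.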
The following elements are allowed inside the <rules /> element:
<theme />
Used to specify the theme file. For example:
<theme href="theme.html"/>
Relative paths are resolved relative to the rules.xml file. For http/https URLs, the --network switch must be supplied to xdvcompiler/xdvrun.
<replace />
Used to replace an element in the theme entirely with an element in the content. For example:
<replace theme="/html/head/title" content="/html/head/title"/>
The (near-)equivalent using CSS selectors would be:
<replace css:theme="title" css:content="title" />
The result of either is that the <title /> element in the theme is replaced with the <title /> element in the (dynamic) content.
<copy />
Used to replace the contents of a placeholder tag with a tag from the theme. For example:
<copy css:theme="#main" css:content="#portal-content > *" />
This would replace any placeholder content inside the element with id main in the theme with all children of the element with id portal-content in the content. The usual reason for using <copy /> instead of <replace />, is that the theme has CSS styles or other behaviour attached to the target element (with id main in this case).
<append /> and <prepend />
Used to copy elements from the content into an element in the theme, leaving existing content in place. <append /> places the matched content directly before the closing tag in the theme; <prepend /> places it directly after the opening tag. For example:
<append theme="/html/head" content="/html/head/link" />
This will copy all <link /> elements in the head of the content into the theme.
As a special case, you can copy individual attributes from a content element to an element in the theme using <prepend />:
<prepend theme="/html/body" content="/html/body/@class" />
This would copy the class attribute of the <body /> element in the content into the theme (replacing an existing attribute with the same name if there is one).
<before /> and <after />
These are equivalent to <append /> and <prepend />, but place the matched content before or after the matched theme element, rather than immediately inside it. For example:
<before css:theme=”#content” css:content=”#info-box” />
This would place the element with id info-box from the content immediately before the element with id content in the theme. If we wanted the box below the content instead, we could do:
<after css:
<drop />
Used to drop elements from the theme or the content. This is the only element that accepts either theme or content attributes (or their css: equivalents), but not both:
<drop css: <copy css:
This would copy all children of the element with id portal-content in the theme into the element with id content in the theme, but only after removing any element with class about-box inside the content element first. Similarly:
<drop theme="/html/head/base" />
Would drop the <base /> tag from the head of the theme.
Order of rule execution
In most cases, you should not care too much about the inner workings of the XDV compiler. However, it can sometimes be useful to understand the order in which rules are applied.
- <before /> rules are always executed first.
- <drop /> rules are executed next.
- <replace /> rules are executed next, provided no <drop /> rule was applied to the same theme node.
- <prepend />, <copy /> and <append /> rules execute next, provided no <replace /> rule was applied to the same theme node.
- <after /> rules are executed last.
Behaviour if theme or content is not matched
If a rule does not match the theme (whether or not it matches the content), it is silently ignored.
If a <replace /> rule matches the theme, but not the content, the matched element will be dropped in the theme:
<replace css:
Here, if the element with id header-element is not found in the content, the placeholder with id header in the theme is removed.
Similarly, the contents of a theme node matched with a <copy /> rule will be dropped if there is no matching content. Another way to think of this is that if no content node is matched, XDV uses an empty nodeset when copying or replacing.
If you want the placeholder to stay put in the case of a missing content node, you can make this a conditional rule:
<replace css:
See below for more details on conditional rules.
Advanced usage
The simple rules above should suffice for most use cases. However, there are a few more advanced tools at your disposal, should you need them.
Conditional rules
Sometimes, it is useful to apply a rule only if a given element appears or does not appear in the markup. The if-content attribute can be used with any rule to make it conditional.
if-content should be set an XPath expression. You can also use css:if-content with a CSS3 expression. If the expression matches a node in the content, the rule will be applied:
<copy css: <drop css:
This will copy all elements with class portlet into the portlets element. If there are no matching elements in the content we drop the portlet-wrapper element, which is presumably superfluous.
Here is another example using CSS selectors:
<copy css:
This will copy the children of the element with id header-box in the content into the element with id header in the theme, so long as an element with id personal-bar also appears somewhere in the content.
Above, we also saw the special case of an empty if-content (which also works with an empty css:if-content). This is a shortcut that means “use the expression in the content or css:content` attribute as the condition”. Hence the following two rules are equivalent:
<copy css: <copy css:
If multiple rules of the same type match the same theme node but have different if-content expressions, they will be combined as an if..else if…else block:
<copy theme="/html/body/h1" content="/html/body/h1/text()" if- <copy theme="/html/body/h1" content="//h1[@id='first-heading']/text()" if- <copy theme="/html/body/h1" content="/html/head/title/text()" />
These rules all attempt to fill the text in the <h1 /> inside the body. The first rule looks for a similar <h1 /> tag and uses its text. If that doesn’t match, the second rule looks for any <h1 /> with id first-heading, and uses its text. If that doesn’t match either, the final rule will be used as a fallback (since it has no if-content), taking the contents of the <title /> tag in the head of the content document.: </rules>
Including external content
Normally, the content attribute of any rule selects nodes from the response being returned by the underlying dynamic web server. However, it is possible to include content from a different URL using the href attribute on any rule (other than <drop />). For example:
<append css:
This will resolve the URL /extra.html, look for an element with id portlet and then append to to the element with id left-column in the theme.
The inclusion can happen in one of three ways:
Using the XSLT document() function. This is the default, but it can be explicitly specified by adding an attribute method="document" to the rule element. Whether this is able to resolve the URL depends on how and where the compiled XSLT is being executed:
<append css:
Via a Server Side Include directive. This can be specified by setting the method attribute to ssi:
<append css:
The output will render like this:
<!--#include virtual="/extra.html?;filter_xpath=descendant-or-self::*[@id%20=%20'portlet']"-->
This SSI instruction would need to be processed by a fronting web server such as Apache or Nginx. Also note the ;filter_xpath query string parameter. Since we are deferring resolution of the referenced document until SSI processing takes place (i.e. after the compiled XDV XSLT transform has executed), we need to ask the SSI processor to filter out elements in the included file that we are not interested in. This requires specific configuration. An example for Nginx is included below..
Via an Edge Side Includes directive. This can be specified by setting the method attribute to esi:
<append css:
The output is similar to that for the SSI mode:
<esi:include</esi:include>
Again, the directive would need to be processed by a fronting server, such as Varnish. Chances are an ESI-aware cache server would not support arbitrary XPath filtering. If the referenced file is served by a dynamic web server, it may be able to inspect the ;filter_xpath parameter and return a tailored response. Otherwise, if a server that can be made aware of this is placed in-between the cache server and the underlying web server, that server can perform the necessary filtering.
For simple ESI includes of a whole document, you may omit the content selector from the rule:
<append css:
The output then renders like this:
<esi:include</esi:include>
Modifying the theme on the fly
Sometimes, the theme is almost perfect, but cannot be modified, for example because it is being served from a remote location that you do not have access to, or because it is shared with other applications.
XDV allows you to modify the theme using “inline” markup in the rules file. You can think of this as a rule where the matched content is explicitly stated in the rules file, rather than pulled from the response being styled.
For example:
<rules xmlns="" xmlns: <append theme="/html/head"> <style type="text/css"> /* From the rules */ body > h1 { color: red; } </style> </append> </xdv:rules>
In the example above, the <append /> rule will copy the <style /> attribute and its contents into the <head /> of the theme. Similar rules can be constructed for <copy />, <replace />, <prepend />, <before /> or <after />.
It is even possible to insert XSLT instructions into the compiled theme in this manner. Having declared the xsl namespace as shown above, we can do something like this:
‘document’, ‘esi’ or ‘ssi’ -XDV header was set so the backend server may choose to serve different different CSS resources.
Including external content; | https://pypi.python.org/pypi/xdv | CC-MAIN-2017-09 | refinedweb | 2,225 | 60.35 |
Underlying of "doc" and "op" ?
I'm confused about
docand
c4d.documents.GetActiveDocuments()in a
Python Tag.
Here is the code in a
Python Tag:
import c4d #Welcome to the world of Python def main(): aDoc = c4d.documents.GetActiveDocument() print("Active Doc:\n{}\ntime:{}".format(aDoc, aDoc.GetTime().Get())) print(" ") print("doc:\n{}\ntime:{}".format(doc, doc.GetTime().Get())) print(" ")
If I scroll playhead in the scene, both
aDocand
docwith their time is same.
If I start an image sequence render,
aDocand
docreport the same object,
but "time" of
aDocstill represents the playhead in the scene,
and "time" of
docrepresents the current render frame's time.
Is there some difference between the two of them?
And Is there any function that can produce the same result of
opin a
Python Tag?
I guess the "op" can be obtained from a tag via GetObject()
Hello,
a Python Tag is part of the scene.
When you render a scene in the Picture Viewer, Cinema 4D will create a copy of that scene. That copy is then rendered. The copy of the scene includes a copy of your Python Tag.
GetActiveDocument()returns the active document. The active document is the document you are currently looking at in the viewport. So this document is not the same document as the document you are currently rendering.
For example: you can start rendering a document in the Picture Viewer and then close the currently open document while the rendering is still going on. That is why you should never reference the active document in any code that might be executed in the rendering pipeline.
docis the document that currently owns the Python Tag. When you are rendering in the Picture Viewer,
docis the document that is currently rendered. Not the active document.
The
opvariable references the tag itself. Again, when you are rendering, the scene (including the tag) is copied.
best wishes,
Sebastian
Hi, @C4DS and @s_bach, thank you for replies!
.
@s_bach, after reading your post, I understand that
Active Documentand
Scene being Renderingis different. And I should never use
ActiveDocumentwithin a
render pipeline.
Here I still have one more question:
There is no other way in a
Python Tagto produce the same result as
docand
op; they are very special variables. Am I correct?
Hello,
in a Python tag,
oppoints to the tag itself. Right now I don't see any other way to get this reference.
docreferences to the document that owns the tag. So it is the same as op.GetDocument().
best wishes,
Sebastian
@eziopan said in Underlying of "doc" and "op" ?:
.
Sorry, was away for a few days and couldn't respond earlier.
What I assumed you wanted to achieve was a way to obtain the "op" as if creating a regular script.
There the "op" is the current active object.
I misinterpreted your question, and didn't see your reference to the Python tag. Hence, I wrongly assumed you wanted to access the "op" as if from a user script, thus the active object, and therefore I mentioned to use "op.GetObject()" to obtain the object the tag was attached to.
I made too many wrongly assumptions. Sorry about that.
@c4ds, don’t worry! Every steps (even the off-road one) make some progress, right? I appreciate the way you focusing on the problem solving, which I believe is also the wonderful thing of this forum: we do our best to solving the problem itself, discussing about the better/more accurate answers rather than “who is right, who is wrong”. Last but not least, thank you for taking time to reply! ;) | https://plugincafe.maxon.net/topic/11112/underlying-of-doc-and-op | CC-MAIN-2020-29 | refinedweb | 600 | 67.35 |
From: Eric Niebler (eric_at_[hidden])
Date: 2008-03-27 14:48:58
Noah Stein wrote:
> My first stab at forming a vector type was, roughly:
>
> template<typename T>
> class vector
> : vector_expr< typename proto::terminal< vector<T> >::type >
>
> This fails to compile.
Ah, I see now.
> So it would appear that the terminals of my
> expressions cannot be terminals in expressions (so much for clarity in this
> post!). Thus I need to split my class into two parts: one that stores the
> data and one that defines the DSEL. I guess my lack of understanding in this
> case boils down to the question: Why isn't a lazy_vector a terminal in
> expressions composed of lazy_vectors? I don't see the difference
> conceptually.
Well, first let me address the compile error by analyzing what the above
code means, piece by piece:
typename proto::terminal< vector<T> >::type
This is a proto terminal that holds a vector<T> by value. That is,
inside this terminal there is a vector<T> object.
vector_expr< typename proto::terminal< vector<T> >::type >
As defined previously, vector_expr<E> "extends" the expression E. It
behaves like an E and, as far as Proto is concerned, it *is* an E.
vector_expr<E> holds an E object by value.
template<typename T>
class vector
: vector_expr< typename proto::terminal< vector<T> >::type >
This says that vector<T> is ... something that holds a vector<T> by
value. That's clearly not going to work.
The larger question is, should it be possible to define a type that is a
proto terminal, but that doesn't extend some other proto terminal? That
would be a good thing because it wouldn't force you to separate your
data from your user-visible types. Another way to say this is, can I
non-intrusively make a type a proto terminal?
The answer is yes. There is an (undocumented) way to non-intrusively
make non-proto types behave like proto terminals. Consider:
#include <boost/proto/proto.hpp>
using namespace boost;
using namespace proto;
template<typename T>
struct vector
{
// vector impl here
};
template<typename T>
struct is_vector : mpl::false_ {};
template<typename T>
struct is_vector<vector<T> > : mpl::true_ {};
BOOST_PROTO_DEFINE_OPERATORS(is_vector, default_domain)
int main()
{
vector<int> vi;
// OK, vi is a proto terminal!
// This builds a proto expression tree:
vi + 32;
}
The operator overloads that Proto defines requires at least one operand
to be either expr<> or some type that actually extends expr<>. If
neither is true for your terminal, as is the case with vector<T> above,
you'll need to define your own operator overloads (with
BOOST_PROTO_DEFINE_OPERATORS) and provide a Boolean metafunction that
can be used to contrain the overloads (e.g., is_vector<>).
> I definitely see the potential for handling so many domains. After iterating
> a few times to figure out how to handle expression trees properly for a
> single domain, I definitely saw the value in a general-purpose system. It's
> a gigantic step from a single custom-written system to a general system. I'm
> very excited to get to the stage where I can play around with transforms as
> expression optimization was something I wanted to play with before, but I
> just hadn't gotten around to it yet.
>
> The vector class was what I figured would be the gentlest introduction to
> your library, seeing as I've already written an expression tree handler for
> it.
Right, and it turned out to be not-so-gentle because Proto does things
differently.
> There are a lot of uses for proto that I see. I'd like to try adapting
> phoenix to create a library that implements co-routines. There are some
> interesting possibilities for state machines. I just need more time!
Me too.
-- Eric Niebler Boost Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2008/03/135027.php | CC-MAIN-2021-43 | refinedweb | 645 | 55.44 |
TextField API
API documentation for the React TextField component. Learn about the available props and the CSS API.
Import
You can learn about the difference by reading this guide on minimizing bundle size.
import TextField from '@mui/material/TextField'; // or import { TextField } from '@mui/material';
The
TextField is a convenience wrapper for the most common cases (80%).
It cannot be all things to all people, otherwise the API would grow out of control.
Advanced Configuration
It's important to understand that the text field is a simple abstraction on top of the following components:
If you wish to alter the props applied to the
input element, you can do so as follows:
const inputProps = { step: 300, };
return <TextField id="time" type="time" inputProps={inputProps} />;
For advanced cases, please look at the source of TextField by clicking on the "Edit this page" button above. Consider either:
- using the upper case props for passing values directly to the components
- using the underlying components directly as shown in the demos
Component nameThe name
MuiTextFieldcan be used when providing default props or style overrides in the theme.
Props
Props of the FormControl component are also available.
The
refis forwarded to the root element.
InheritanceWhile not explicitly documented above, the props of the FormControl component are also available on TextField.. | https://mui.com/api/text-field/ | CC-MAIN-2021-43 | refinedweb | 215 | 50.36 |
14.4. Implementation of Word2vec¶
In this section, we will train a skip-gram model defined in Section 14.1.
First, import the packages and modules required for the experiment, and load the PTB dataset.
import d2l from mxnet import autograd, gluon, np, npx from mxnet.gluon import nn npx.set_np() batch_size, max_window_size, num_noise_words = 512, 5, 5 data_iter, vocab = d2l.load_data_ptb(512, 5, 5)
14.4.1. The Skip-Gram Model¶
We will implement the skip-gram model by using embedding layers and minibatch multiplication. These methods are also often used to implement other natural language processing applications.
14.4.1\).^\mathrm{th}\) row of the weight matrix as its word vector. Below we enter an index of shape (\(2\), \(3\)) into the embedding layer. Because the dimension of the word vector is 4, we obtain a word vector of shape (\(2\), \(3\), \(4\)).
x = np.array([[1, 2, 3], [4, 5, 6]]) embed(x)
array([[[]]])
14.4.1.2. Minibatch Multiplication¶
We can multiply the matrices in two minibatches one by one, by the
minibatch multiplication operation
batch_dot. Suppose the first
batch contains \(n\) matrices
\(\mathbf{X}_1, \ldots, \mathbf{X}_n\) with a shape of
\(a\times b\), and the second batch contains \(n\) matrices
\(\mathbf{Y}_1, \ldots, \mathbf{Y}_n\) with a shape of
\(b\times c\). The output of matrix multiplication on these two
batches are \(n\) matrices
\(\mathbf{X}_1\mathbf{Y}_1, \ldots, \mathbf{X}_n\mathbf{Y}_n\) with
a shape of \(a\times c\). Therefore, given two
ndarrays of
shape (\(n\), \(a\), \(b\)) and (\(n\), \(b\),
\(c\)), the shape of the minibatch multiplication output is
(\(n\), \(a\), \(c\)).
X = np.ones((2, 1, 4)) Y = np.ones((2, 4, 6)) npx.batch_dot(X, Y).shape
(2, 1, 6)
14.4.1 minibatch
multiplication. Each element in the output is the inner product of the
central target word vector and the context word vector or noise word
vector.
def skip_gram(center, contexts_and_negatives, embed_v, embed_u): v = embed_v(center) u = embed_u(contexts_and_negatives) pred = npx.batch_dot(v, u.swapaxes(1, 2)) return pred
Verify that the output shape should be (batch size, 1,
max_len).
skip_gram(np.ones((2, 1)), np.ones((2, 4)), embed, embed).shape
(2, 1, 4)
14.4.2. Training¶
Before training the word embedding model, we need to define the loss function of the model. minibatch:.
pred = np.array([[.5]*4]*2) label = np.array([[1, 0, 1, 0]]*2) mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]]) loss(pred, label, mask)
array([0.724077 , 0.3620385])
We can normalize the loss in each example due to various lengths in each example.
loss(pred, label, mask) / mask.sum(axis=1) * mask.shape[1]
array([0.724077, 0.724077])
14.4.2.2. Initializing Model Parameters¶
We construct the embedding layers of the central and context words,
respectively, and set the hyperparameter word vector dimension
embed_size to 100.
embed_size = 100 net = nn.Sequential() net.add(nn.Embedding(input_dim=len(vocab), output_dim=embed_size), nn.Embedding(input_dim=len(vocab), output_dim=embed_size))) metric.add(l.sum(),.330, 27707 tokens/sec on gpu(0)
14.4.
def get_similar_tokens(query_token, k, embed): W = embed.weight.data() x = W[vocab[query_token]] # Compute the cosine similarity. Add 1e-9 for numerical stability cos = np.dot(W, x) / np.sqrt(np.sum(W * W, axis=1) * np.sum(x * x) + 1e-9) topk = npx.topk(cos, k=k+1, ret_typ='indices').asnumpy().astype('int32') for i in topk[1:]: # Remove the input words print('cosine sim=%.3f: %s' % (cos[i], (vocab.idx_to_token[i]))) get_similar_tokens('chip', 3, net[0])
cosine sim=0.549: intel cosine sim=0.505: mips cosine sim=0.503: chips dataset is large, we usually sample the context words and the noise words for the central target word in the current minibatch only when updating the model parameters. In other words, the same central target word may have different context words or noise words in different epochs. What are the benefits of this sort of training? Try to implement this training method. | https://d2l.ai/chapter_natural-language-processing/word2vec-gluon.html | CC-MAIN-2019-51 | refinedweb | 678 | 52.87 |
#include <searchquery.h>
Detailed Description
Search term represents the actual condition within query.
SearchTerm can either have multiple subterms, or can be so-called endterm, when there are no more subterms, but instead the actual condition is specified, that is have key, value and relation between them.
- Since
- 4.13
Definition at line 39 of file searchquery.h.
Constructor & Destructor Documentation
Constructs a term where all subterms will be in given relation.
Definition at line 141 of file searchquery.cpp.
Constructs an end term.
Definition at line 147 of file searchquery.cpp.
Member Function Documentation
Adds a new subterm to this term.
Subterms will be in relation as specified in SearchTerm constructor.
If there are subterms in a term, key, value and condition are ignored.
Definition at line 206 of file searchquery.cpp.
Returns relation between key and value.
Definition at line 191 of file searchquery.cpp.
Returns whether the entire term is negated.
Definition at line 201 of file searchquery.cpp.
Returns key of this end term.
Definition at line 181 of file searchquery.cpp.
Returns relation in which all subterms are.
Definition at line 216 of file searchquery.cpp.
Sets whether the entire term is negated.
Definition at line 196 of file searchquery.cpp.
Returns all subterms, or an empty list if this is an end term.
Definition at line 211 of file searchquery.cpp.
Returns value of this end term.
Definition at line 186 of file searchquery.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2019 The KDE developers.
Generated on Thu Dec 5 2019 04:14:06 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/kdepim/akonadi/html/classAkonadi_1_1SearchTerm.html | CC-MAIN-2019-51 | refinedweb | 287 | 61.83 |
Search Criteria
Package Details: chrome-shutdown-hook 1.3.2-1
Dependencies (4)
- gnome-python
- procps-ng (procps-ng-git, procps-ng-static, procps-ng-nosystemd)
- python2
- gnome-tweak-tool (gnome-tweaks) (optional) – to enable this hook as a Gnome Startup Application
Latest Comments
1 2 3 Next › Last »
notuxius commented on 2019-03-02 19:15
Chromium shows restore session popup even if the hook is enabled in Tweaks app, Chromium 72, Manjaro Gnome
zerophase commented on 2017-06-16 21:41
The latest version of Chrome has seemed to have mostly fixed the issue. Only once every couple boots require restoring tabs.
saivert commented on 2017-06-16 20:59
This hasn't been an issue in Chromium in over a year. Just reset your Chromium profile to defaults in Settings. That solved it for me.
Usually a corrupt profile or some crappy Extension is the culprit.
I no longer need this tool.
Terence commented on 2017-04-27 23:08
@elimpfor I'm slowly gathering information and reading docs, any help is appreciated :).
ncoder-2 commented on 2017-04-27 23:06
Any luck on this @Terence?
ncoder-2 commented on 2017-04-12 17:02
Cool thanks!
Terence commented on 2017-04-12 16:47
@elimpfo I also have this error and I think it's because it's now using deprecated APIs so I will try to write from scratch.
ncoder-2 commented on 2017-04-12 16:39
I'm getting this error running the hook:
import gnome
ImportError: No module named gnome
zerophase commented on 2016-08-31 18:06
@Terence I have it in startup for Cinnamon. Seemed to be running, since Chrome at shutdown would crash. Wonder if running it through systemd would change the behavior on Cinnamon.
Terence commented on 2016-08-31 18:01
@zerophase Good you ask, the script is triggered when you click on the shutdown button and the confirmation prompt appears. | https://aur.tuna.tsinghua.edu.cn/packages/chrome-shutdown-hook/ | CC-MAIN-2020-29 | refinedweb | 325 | 62.78 |
Provided by: liblcgdm-dev_1.8.10-1build3_amd64
NAME
netwrite - send a message on a socket
SYNOPSIS
#include "net.h" int netwrite (int s, char *buf, int nbytes); ssize_t netwrite_timeout (int s, void *buf, size_t nbytes, int timeout);
DESCRIPTION
netwrite sends a message on a socket.
RETURN VALUE
This routine returns the number of bytes if the operation was successful, 0 if the connection was closed by the remote end or -1 if the operation failed. In the latter case, serrno is set appropriately.
ERRORS
EINTR The function was interrupted by a signal. EBADF s is not a valid descriptor. EAGAIN The socket is non-blocking and there is no space available in the system buffers for the message. EFAULT buf is not a valid pointer. EINVAL nbytes is negative or zero. ENOTSOCK s is not a socket. SECONNDROP Connection closed by remote end. SETIMEDOUT Timed out.
SEE ALSO
send(2), neterror(3)
AUTHOR
LCG Grid Deployment Team | http://manpages.ubuntu.com/manpages/xenial/man3/netwrite.3.html | CC-MAIN-2019-30 | refinedweb | 157 | 69.18 |
Moving your code towards a more functional style can have a lot of benefits – it can be easier to reason about, easier to test, more declarative, and more. One thing that sometimes comes out worse in the move to FP, though, is organization. By comparison, Object Oriented Programming classes are a pretty useful unit of organization – methods have to be in the same class as the data they work on, so your code is pushed towards being organized in pretty logical ways.
In a modern JavaScript project, however, things are often a little less clear-cut. You’re generally building your application around framework constructs like components, services, and controllers, and this framework code is often a stateful class with a lot of dependencies. Being a good functional programmer, you pull your business logic out into small pure functions, composing them together in your component to transform some state. Now you can test them in isolation, and all is well with the world.
But where do you put them?
The first answer is often “at the bottom of the file.” For example, say you’ve got your main component class called UserComponent.js. You can imagine having a couple pure helper functions like fullName(user) at the bottom of the file, and you export them to test them in UserComponent.spec.js.
Then as time goes on, you add a few more functions. Now the component is a few months old, the file is 300 lines long and it's more pure functions than it is component. It's clearly time to split things up. So hey, if you've got a UserComponent, why not toss those functions into a UserComponentHelpers.js? Now your component file looks a lot cleaner, just importing the functions it needs from the helper.
So far so good – though that UserComponentHelpers.js file is kind of a grab-bag of functions, where you've got fullName(user) sitting next to formatDate(date).
And then you get a new story to show users’ full names in the navbar. Okay, so now you’re going to need that fullName function in two places. Maybe toss it in a generic utils file? That’s not great.
And then, a few months later, you're looking at the FriendsComponent, and find out someone else had already implemented fullName in there. Oops. So now the next time you need a user-related function, you check to see if there's one already implemented. But to do that, you have to check at least UserComponent, UserComponentHelpers, and FriendsComponent, and also UserApiService, which is doing some User conversion.
So at this point, you may find yourself yearning for the days of classes, where a User would handle figuring out its own fullName. Happily, we can get the best of both worlds by borrowing from functional languages like Elixir.
Elixir has a concept called structs, which are dictionary-like data structures with pre-defined attributes. They’re not unique to the language, but Elixir sets them up in a particularly useful way. Files generally have a single module, which holds some functions, and can define a single struct. So a User module might look like this:
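Something like this, perhaps, assuming the struct just holds a first and last name:

```elixir
defmodule User do
  # The struct defines the data: a User has a first and last name.
  defstruct first_name: "", last_name: ""

  # The logic that operates on Users lives right next to the data.
  def full_name(%User{} = user) do
    "#{user.first_name} #{user.last_name}"
  end
end
```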
Even if you’re never seen any Elixir before, that should be pretty easy to follow. A User struct is defined as having a
last name, and
full_name function that takes a
User and operates on it. The module is organized like a class – we can define the data that makes up a
User, and logic that operates on
Users, all in one place. But, we get all that without trouble of mutable state.
There’s no reason we can’t use the same pattern in JavaScript-land. Instead of organizing your pure functions around the components they’re used in, you can organize them around the data types (or domain objects in Domain Driven Design parlance) that they work on.
So, you can gather up all the user-related pure functions, from any component, and put them together in a User.js file. That's helpful, but both a class and an Elixir module define their data structure, as well as their logic.
In JavaScript, there’s no built-in way to do that, but the simplest solution is to just add a comment. JSDoc, a popular specification for writing machine-readable documentation comments, lets you define types with the
@typedef tag:
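Here's a sketch of what User.js might look like, assuming a User is just a first and last name:

```javascript
// User.js – the type definition sits right next to the logic.

/**
 * @typedef {Object} User
 * @property {string} firstName
 * @property {string} lastName
 */

/**
 * @param {User} user
 * @returns {string}
 */
function fullName(user) {
  // In the real module you'd export this.
  return `${user.firstName} ${user.lastName}`;
}
```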
With that we’ve replicated all the information in an Elixir module in JavaScript, which will make it easier for future developers to keep track of what a
User looks like in your system. But the problem with comments is they get out of date. That’s where something like TypeScript comes in. With TypeScript, you can define an interface, and the compiler will make sure it stays up-to-date:
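A sketch, again assuming a User is just a first and last name:

```typescript
// User.ts – the interface and the functions that use it live side by side.
interface User {
  firstName: string;
  lastName: string;
}

function fullName(user: User): string {
  return `${user.firstName} ${user.lastName}`;
}
```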
This also works great with propTypes in React. PropTypes are just objects that can be exported, so you can define your User propType as a PropTypes.shape in your User module.
Then you can use the User’s type and functions in your components, reducers, and selectors.
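For instance, a Redux-style selector can lean on the module's logic. The module is inlined here so the sketch is self-contained; in a real project these functions would live in User.js and be imported from there, and the state shape is made up:

```javascript
// Stand-in for the User module (really User.js).
const User = {
  fullName: (user) => `${user.firstName} ${user.lastName}`,
};

// A selector that reuses the module's logic instead of re-deriving it.
const selectUserDisplayName = (state) => User.fullName(state.currentUser);

selectUserDisplayName({ currentUser: { firstName: 'Ada', lastName: 'Lovelace' } });
// → 'Ada Lovelace'
```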
You could do something very similar with Facebook’s Flow, or any other library that lets you define the shape of your data.
However you define your data, the key part is to put a definition of the data next to the logic on the data in the same place. That way it's clear where your functions should go, and what they're operating on. Also, since all your user-specific logic is in one place, you'll probably be able to find some shared logic to pull out that might not have been obvious if it was scattered all over your codebase.
It’s good practice to always put the module’s data type in a consistent position in your functions – either always the first parameter, or always the last if you’re doing a lot of currying. It’s both helpful just to have one less decision to make, and it helps you figure out where things go – if it feels weird to put user in the primary position, then the function probably shouldn’t go into the User module.
Functions that deal with converting between two types – pretty common in functional programming – would generally go into the module of the type being passed in –
userToFriend(user, friendData) would go into the
User module. In Elixir it would be idiomatic to call that
User.to_friend, and if you’re okay with using wildcard imports, that’ll work great:
On the other hand, if you’re following the currently popular JavaScript practice of doing individual imports, then calling the function
userToFriend would be more clear:
However, I think that with this functional module pattern, wildcard imports make a lot of sense. They let you prefix your functions with the type they’re working on, and push you to think of the collection of User-related types and functions as one thing like a class.
But if you do that and declare types, one issue is that then in other classes you’d be referring to the type
User.User or
User.userType. Yuck. There’s another idiom we can borrow from Elixir here – when declaring types in that language, it’s idiomatic to name the module struct’s type
t.
We can replicate that with React PropTypes by just naming the propType
t, like so:
It also works just fine in TypeScript, and it’s nice and readable. You use t to describe the type of the current module, and
Module.t to describe the type from Module.
Using
t in TypesScript does break a popular rule from the TypeScript Coding Guidelines to “use PascalCase for type names.” You could name the type
T instead, but then that would conflict with the common TypeScript practice of naming generic types
T. Overall,
User.t seems like a nice compromise, and the lowercase
t feels like it keeps the focus on the module name, which is the real name of the type anyway. This is one for your team to decide on, though.
Decoupling your business logic from your framework keeps it nicely organized and testable, makes it easier to onboard developers who don’t know your specific framework, and means you don’t have to be thinking about controllers or reducers when you just want to be thinking about users and passwords.
This process doesn’t have to happen all at once. Try pulling all the logic for just one module together, and see how it goes. You may be surprised at how much duplication you find!
So in summary:
Try organizing your functional code by putting functions in the same modules as the types they work on.
Put the module’s data parameter in a consistent position in your function signatures.
Consider using
import * as Module wildcard imports, and naming the main module type
t.
Will Ockelmann-Wagner is a software developer at Carbon Five. He’s into functional programming and testable code.
Interested in a Career at Carbon Five? Check out our job openings. | https://blog.carbonfive.com/a-proposal-elixir-style-modules-in-javascript/?utm_content=75686967&utm_medium=social&utm_source=twitter | CC-MAIN-2021-25 | refinedweb | 1,521 | 69.31 |
Instant Simple Botting with PHP [Instant] — Save 50%
Enhance your botting skills and create your own web bots with PHP with this book and ebook.
(For more resources related to this topic, see here.)
With the knowledge you have gained, we are now ready to develop our first bot, which will be a simple bot that gathers data (documents) based on a list of URLs and datasets (field and field values) that we will require.
First, let's start by creating our bot package directory. So, create a directory called WebBot so that the files in our project_directory/lib directory look like the following:
'-- project_directory|-- lib | |-- HTTP (our existing HTTP package) | | '-- (HTTP package files here) | '-- WebBot | |-- bootstrap.php| |-- Document.php | '-- WebBot.php |-- (our other files)'-- 03_webbot.php
As you can see, we have a very clean and simple directory and file structure that any programmer should be able to easily follow and understand.
The WebBot class
Next, open the file WebBot.php file and add the code from the project_directory/lib/WebBot/WebBot.php file:
In our WebBot class, we first use the __construct() method to pass the array of URLs (or documents) we want to fetch, and the array of document fields are used to define the datasets and regular expression patterns. Regular expression patterns are used to populate the dataset values (or document field values). If you are unfamiliar with regular expressions, now would be a good time to study them. Then, in the __construct() method, we verify whether there are URLs to fetch or not. If there , we set an error message stating this problem.
Next, we use the __formatUrl() method to properly format URLs we fetch data. This method will also set the correct protocol: either HTTP or HTTPS ( Hypertext Transfer Protocol Secure ). If the protocol is already set for the URL, for example.[dom].com, we ignore setting the protocol. Also, if the class configuration setting conf_force_https is set to true, we force the HTTPS protocol again unless the protocol is already set for the URL.
We then use the execute() method to fetch data for each URL, set and add the Document objects to the array of documents, and track document statistics. This method also implements fetchdelay logic that will delay each fetch by x number of seconds if set in the class configuration settings conf_delay_between_fetches. We also include the logic that only allows distinct URL fetches, meaning that, if we have already fetched data for a URL we won't fetch it again; this eliminates duplicate URL data fetches. The Document object is used as a container for the URL data, and we can use the Document object to use the URL data, the data fields, and their corresponding data field values.
In the execute() method, you can see that we have performed a \HTTP\Request::get() request using the URL and our default timeout value—which is set with the class configuration settings conf_default_timeout. We then pass the \HTTP\Response object that is returned by the \HTTP\Request::get() method to the Document object. Then, the Document object uses the data from the \HTTP\Response object to build the document data.
Finally, we include the getDocuments() method, which simply returns all the Document objects in an array that we can use for our own purposes as we desire.
The WebBot Document class
Next, we need to create a class called Document that can be used to store document data and field names with their values. To do this we will carry out the following steps:
- We first pass the data retrieved by our WebBot class to the Document class.
- Then, we define our document's fields and values using regular expression patterns.
- Next, add the code from the project_directory/lib/WebBot/Document.php file.
Our Document class accepts the \HTTP\Response object that is set in WebBot class's execute() method, and the document fields and document ID.
- In the Document __construct() method, we set our class properties: the HTTP Response object, the fields (and regular expression patterns), the document ID, and the URL that we use to fetch the HTTP response.
- We then check if the HTTP response successful (status code 200), and if it isn't, we set the error with the status code and message.
- Lastly, we call the __setFields() method.
The __setFields() method parses out and sets the field values from the HTTP response body. For example, if in our fields we have a title field defined as $fields = ['title' => '<title>(.*)<\/title>'];, the __setFields() method will add the title field and pull all values inside the <title>*</title> tags into the HTML response body. So, if there were two title tags in the URL data, the __setField() method would add the field and its values to the document as follows:
['title'] => [ 0 => 'title x', 1 => 'title y' ]
If we have the WebBot class configuration variable—conf_include_document_field_raw_values—set to true, the method will also add the raw values (it will include the tags or other strings as defined in the field's regular expression patterns) as a separate element, for example:
['title'] => [ 0 => 'title x', 1 => 'title y', 'raw' => [ 0 => '<title>title x</title>', 1 => '<title>title y</title>' ] ]
The preceding code is very useful when we want to extract specific data (or field values) from URL data.
To conclude the Document class, we have two more methods as follows:
- getFields(): This method simply returns the fields and field values
- getHttpResponse(): This method can be used to get the \HTTP\Response object that was originally set by the WebBot execute() method
This will allow us to perform logical requests to internal objects if we wish.
The WebBot bootstrap file
Now we will create a bootstrap.php file (at project_directory/lib/WebBot/) to load the HTTP package and our WebBot package classes, and set our WebBot class configuration settings:
<?php namespace WebBot; /** * Bootstrap file * * @package WebBot */ // load our HTTP package require_once './lib/HTTP/bootstrap.php'; // load our WebBot package classes require_once './lib/WebBot/Document.php'; require_once './lib/WebBot/WebBot.php'; // set unlimited execution time set_time_limit(0); // set default timeout to 30 seconds \WebBot\WebBot::$conf_default_timeout = 30; // set delay between fetches to 1 seconds \WebBot\WebBot::$conf_delay_between_fetches = 1; // do not use HTTPS protocol (we'll use HTTP protocol) \WebBot\WebBot::$conf_force_https = false; // do not include document field raw values \WebBot\WebBot::$conf_include_document_field_raw_values = false;
We use our HTTP package to handle HTTP requests and responses. You have seen in our WebBot class how we use HTTP requests to fetch the data, and then use the HTTP Response object to store the fetched data in the previous two sections. That is why we need to include the bootstrap file to load the HTTP package properly.
Then, we load our WebBot package files. Because our WebBot class uses the Document class, we load that class file first.
Next, we use the built-in PHP function set_time_limit() to tell the PHP interpreter that we want to allow unlimited execution time for our script. You don't necessarily have to use unlimited execute time. However, for testing reasons, we will use unlimited execution time for this example.
Finally, we set the WebBot class configuration settings. These settings are used by the WebBot object internally to make our bot work as we desire. We should always make the configuration settings as simple as possible to help other developers understand. This means we should also include detailed comments in our code to ensure easy usage of package configuration settings.
We have set up four configuration settings in our WebBot class. These are static and public variables, meaning that we can set them from anywhere after we have included the WebBot class, and once we set them they will remain the same for all WebBot objects unless we change the configuration variables. If you do not understand the PHP keyword static, now would be a good time to research this subject.
- The first configuration variable is conf_default_timeout. This variable is used to globally set the default timeout (in seconds) for all WebBot objects we create. The timeout value tells the \HTTP\Request class how long it continue trying to send a request before stopping and deeming it as a bad request, or a timed-out request. By default, this configuration setting value is set to 30 (seconds).
- The second configuration variable—conf_delay_between_fetches—is used to set a time delay (in seconds) between fetches (or HTTP requests). This can be very useful when gathering a lot of data from a website or web service. For example, say, you had to fetch one million documents from a website. You wouldn't want to unleash your bot with that type of mission without fetch delays because you could inevitably cause—to that website—problems due to massive requests. By default, this value is set to 0, or no delay.
- The third WebBot class configuration variable—conf_force_https—when set to true, can be used to force the HTTPS protocol. As mentioned earlier, this will not override any protocol that is already set in the URL. If the conf_force_https variable is set to false, the HTTP protocol will be used. By default, this value is set to false.
- The fourth and final configuration variable—conf_include_document_field_raw_values—when set to true, will force the Document object to include the raw values gathered from the ' regular expression patterns. We've discussed configuration settings in detail in the WebBot Document Class section earlier in this article. By default, this value is set to false.
Summary
In this article you have learned how to get started with building your first bot using HTTP requests and responses.
Resources for Article :
Further resources on this subject:
- Installing and Configuring Jobs! and Managing Sections, Categories, and Articles using Joomla! [Article]
- Search Engine Optimization in Joomla! [Article]
- Adding a Random Background Image to your Joomla! Template [Article]
About the Author :
Shay Michael Anderson
Sh,.
Post new comment | http://www.packtpub.com/article/creating-our-first-bot-webBot | CC-MAIN-2013-48 | refinedweb | 1,654 | 51.89 |
[armel] ICE immed_double_const at emit-rtl.c (-mfpu=neon -g)
Bug Description
I've tried this with linaro 2010.09 and 2011.03 and the one that is currently available from Ubuntu Natty repo:
kaltsi@
arm-linux-
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The problem comes when using -mfpu=neon and -g (and at least -O1).
kaltsi@
optimized.c: In function ‘move_16bit_
optimized.c:4:6: internal compiler error: in immed_double_const, at emit-rtl.c:552
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:/
I do not get the ICE with linaro gcc-4.4 available from Natty.
Also confirmed on upstream trunk.
Testing a patch, would you mind also submitting to upstream bugzilla too? Thanks.
Created attachment 23707
pre-processed source
When compiling the attached pre-processed source for arm (-march=armv7-a -mtune=cortex-a8) and using options -mfpu=neon -g -O1 I get an ICE.
kaltsi@
optimized.c: In function ‘move_16bit_
optimized.c:4:6: internal compiler error: in immed_double_const, at emit-rtl.c:552
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:/
This does not happen if I leave the -g option out.
Also reported to linaro: https:/
Richard, Chung-Lin: could you follow up on this one please?
I am out of the office until 17/04/2011.
Note: This is an automated response to your message "[Bug 736007] Re:
[armel] ICE immed_double_const at emit-rtl.c (-mfpu=neon -g)" sent on
15/4/2011 4:50:25.
This is the only notification you will receive while this person is away.
Michael Hope <email address hidden> writes:
> Richard, Chung-Lin: could you follow up on this one please?
I think the status here is that we (Chung-Lin, Julian and myself)
have proposed three different patches on the list. They all work
around the underlying problem rather than fix it.
I was hoping the maintainers would pick one, but I suppose the
hackish nature of the patches means that no-one really wants to.
FWIW, I'm happy to defer to Chung-Lin's change for Linaro. I think
his patch has received the most testing and I agree it should be safe.
Richard
IMHO, I think that among the proposals, Richard Guenther's actually seems the most elegant (avoid the seemingly unneeded zero-set completely), though I think his patch is not yet really a intended tested fix currently...
OK. Can you move this bug forward then please Chung-Lin? That might be by pinging Richard Guenther and asking him to finish the patch or taking it over yourself.
Richard, could you follow this one up please?
Still present in gcc-linaro-
michaelh@
optimized.c: In function 'move_16bit_
optimized.c:4:6: internal compiler error: in immed_double_const, at emit-rtl.c:550
Adding a test case which Chung-Lin's patch does not appear to fix.
Correction: Chung-Lin's patch does fix this test case. (I was using a stale compiler build when I re-checked.)
Withdrawn from stable due to the size of the change and it not affecting a user of the stable branch.
I have some additional info related to this bug:
If I compile the following:
#include "arm_neon.h"
#include "stdlib.h"
int main ()
{
float r [] = {0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f};
float32x4x2_t d;
d = vld2q_f32 (r);
vst2q_f32 (r, d);
return 0;
}
using: arm-linux-
I get:
test.c: In function 'main':
test.c:4:5: internal compiler error: in immed_double_const, at emit-rtl.c:550
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://
However, if I comment out vst2q_f32 (r, d);, the problem goes away. Also by not using -g or -O1.
$ arm-linux-
Using built-in specs.
COLLECT_
COLLECT_
Target: arm-linux-
Configured with: /Users/
Thread model: posix
gcc version 4.6.2 (GCC)
Shouldn't the upstream bug be resolved now?
Confirmed in gcc-linaro-
4.5+bzr99489:
ichaelh@
ursa1:/ scratch/ michaelh/ bugs$ ../toolchains/ gcc-linaro- 4.5+bzr99489- armv7l- maverick- cbuild79- carina5- cortexa8r1/ bin/gcc -mfpu=neon -O1 -g -c emit-rtl-ice.i to_32bit' :
optimized.c: In function 'move_16bit_
optimized.c:4:6: internal compiler error: in immed_double_const, at emit-rtl.c:552
Also appears on a cross compiler. The backtrace is:
#1 0x0000000000580d19 in immed_double_const (i0=0, i1=0, mode=OImode)
./src/gcc- linaro- 4.5-2011. 03-0/gcc/ emit-rtl. c:552 1ca8, target=0x0, ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ expr.c: 8465 1ca8) ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ expr.h: 558 1ca8) ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ cfgexpand. c:2329 debug_locations () ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ cfgexpand. c:3122 ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ cfgexpand. c:3885 ./src/gcc- linaro- 4.5-2011. 03-0/gcc/ passes. c:1572
at ../../.
#2 0x00000000005a34c3 in expand_expr_real_1 (exp=0x7ffff4dd
tmode=<value optimized out>, modifier=<value optimized out>, alt_rtl=0x0)
at ../../.
#3 0x0000000000521425 in expand_expr (exp=0x7ffff4dd
at ../../.
#4 expand_debug_expr (exp=0x7ffff4dd
at ../../.
#5 0x00000000005237d8 in expand_
at ../../.
#6 gimple_expand_cfg () at ../../.
#7 0x00000000006715dc in execute_one_pass (pass=0xf8a720)
at ../../.
Note that mode is OImode (a 32 byte integer) which makes sense with the int32x4x2 types in the example code. | https://bugs.launchpad.net/gcc/+bug/736007 | CC-MAIN-2020-50 | refinedweb | 907 | 70.19 |
Everyone uses MessageBox - it's been a fixture of Windows since day 1, and its format and invocation have changed very little over the years.
MessageBox
For me, the single biggest drawback of MessageBox has been that it centers on the screen, not on its parent, and there's no way to tell it to center on its parent:
Here's what I want:
You'd think that at least one of MessageBox.Show()'s 21 overloads would have some way of doing this, or perhaps that there'd be a MessageBoxOptions.CenterOnParent flag, but no such luck.
MessageBox.Show()
MessageBoxOptions.CenterOnParent
This article presents a simple mechanism for implementing MessageBoxes that appear centered on their parent.
My technique uses a custom class - I call it MsgBox - that wraps the standard MessageBox.Show call with a Windows hook. The hook is set before popping the MessageBox, the hookproc finds and centers the MessageBox before it's initially displayed, then the hook is released.
MsgBox
MessageBox.Show
For simplicity, my example works only on a single thread. If you have multiple threads that pop MessageBoxes in an uncoordinated fashion, you'll need to add some code to handle that situation.
For additional references/articles, CodeProject has many articles on hooking. Search on "SetWindowsHookEx".
Microsoft's docs are available here:.
As mentioned above, the core mechanism is a Windows hook. For the uninitiated, this is a Windows mechanism that allows your code to get access to some low-level Windows functionality; you essentially inject a bit of your app's code into the inner workings of Windows.
There are may types of hooks; my code uses the WH_CBT hook and acts on the HCBT_ACTIVATE event. (Read the Microsoft page linked above for details on WH_CBT.)
WH_CBT
HCBT_ACTIVATE
Hooks are a not part of .NET. To use them, you must use PInvoke to access the Win32 hooking APIs SetWindowsHookEx(), CallNextHookEx(), and UnhookWindowsHookEx(). These set local hooks, meaning that they only operate on windows within our process. This is exactly what we want - we do not want to handle message boxes displayed by other applications, only our own.
SetWindowsHookEx()
CallNextHookEx()
UnhookWindowsHookEx()
When you set a WH_CBT hook, your callback will receive notifications of window events such as creation, activation, moving/sizing, and destruction. We're interested in activation: when a message box is first activated (but before it's initially visible), we'll reposition it, and then we're done.
The code snippets below are lifted from the attached sample. In them, you'll see me using a Win32.* syntax - in the sample, I've collected all P/Invoke methods and defs in a separate class named Win32, a common practice that I've adopted for my projects.
Win32.*
Win32
First, you need to import the hooking APIs:
using System.Runtime.InteropServices;
public class Win32
{
public const int WH_CBT = 5;
public const int HCBT_ACTIVATE = 5;
public delegate int WindowsHookProc(int nCode, IntPtr wParam,
IntPtr lParam);
[DllImport("user32.dll", CharSet = CharSet.Auto,
CallingConvention = CallingConvention.StdCall)]
public static extern int SetWindowsHookEx(int idHook,
WindowsHookProc lpfn, IntPtr hInstance, int threadId);
[DllImport("user32.dll", CharSet = CharSet.Auto,
CallingConvention = CallingConvention.StdCall)]
public static extern bool UnhookWindowsHookEx(int idHook);
[DllImport("user32.dll", CharSet = CharSet.Auto,
CallingConvention = CallingConvention.StdCall)]
public static extern int CallNextHookEx(int idHook, int nCode,
IntPtr wParam, IntPtr lParam);
}
and define a few variables for managing the hook:
private int _hHook = 0;
private Win32.WindowsHookProc _hookProcDelegate;
private static string _title = null;
private static string _msg = null;
Create a callback delegate then calls SetWindowsHookEx() to set the hook. It returns a hook ID that you'll use in your callback and when you release the hook.
// Remember the title & message that we'll look for.
// The hook sees *all* windows, so we need
// to make sure we operate on the right one.
_msg = msg;
_title = title;
Win32.WindowsHookProc hookProcDelegate =
new Win32.WindowsHookProc(HookCallback);
_hHook = Win32.SetWindowsHookEx(Win32.WH_CBT, hookProcDelegate,
IntPtr.Zero, AppDomain.GetCurrentThreadId());
Your hook callback looks something like this. Once you're done processing the notification, you must pass the event along to the next hook via CallNextHookEx(). (Note the use here of your hook ID, _hHook.)
_hHook
private static int HookCallback(int code, IntPtr wParam, IntPtr lParam)
{
if (code == Win32.HCBT_ACTIVATE)
{
// wParam is the handle to the Window being activated.
if(TestForMessageBox(wParam))
{
CenterWindowOnParent(wParam);
Unhook(); // Release hook - we've done what we needed
}
}
return Win32.CallNextHookEx(_hHook, code, wParam, lParam);
}
Then, finally, when you're done looking for the message box, you release the hook:
private static void Unhook()
{
Win32.UnhookWindowsHookEx(_hHook);
_hHook = 0;
_hookProcDelegate = null;
_title = null;
_msg = null;
}
Simply watch for a dialog box which has the correct title and message:
private static bool TestForMessageBox(IntPtr hWnd)
{
string cls = Win32.GetClassName(hWnd);
if (cls == "#32770") // MessageBoxes are Dialog boxes
{
string title = Win32.GetWindowText(hWnd);
string msg = Win32.GetDlgItemText(hWnd, 0xFFFF); // -1 aka IDC_STATIC
{
if ((title == _title) && (msg == _msg))
{
return true;
}
}
}
return false;
}
Centering one window on another - nothing special here:
private static void CenterWindowOnParent(IntPtr hChildWnd)
{
// Get child (MessageBox) size
Win32.RECT rcChild = new Win32.RECT();
Win32.GetWindowRect(hChildWnd, ref rcChild);
int cxChild = rcChild.right - rcChild.left;
int cyChild = rcChild.bottom - rcChild.top;
// Get parent (Form) size & location
IntPtr hParent = Win32.GetParent(hChildWnd);
Win32.RECT rcParent = new Win32.RECT();
Win32.GetWindowRect(hParent, ref rcParent);
int cxParent = rcParent.right - rcParent.left;
int cyParent = rcParent.bottom - rcParent.top;
// Center the MessageBox on the Form
int x = rcParent.left + (cxParent - cxChild) / 2;
int y = rcParent.top + (cyParent - cyChild) / 2;
uint uFlags = 0x15; // SWP_NOSIZE | SWP_NOZORDER | SWP_NOACTIVATE;
Win32.SetWindowPos(hChildWnd, IntPtr.Zero, x, y, 0, 0, uFlags);
}
and that's about it!
In your calling code, just call MsgBox.Show() instead of MessageBox.Show(), and you'll get parent-centered pop-ups.
MsgBox.Show()
I've frequently wondered why message boxes appear centered on the screen rather than on their parent. I assumed that there must be some method to the madness - Microsoft wouldn't let a bug like this slip for 20 years! (Well, maybe they would... )
While developing this code, I found a plausible explanation: say, for example, that your window is positioned outside the display boundaries and an error occurs that pops a message box. In this case, if the message box were centered on its parent, you'd never see it. I think that this is a valid concern, and you should consider it when using centered message boxes, maybe choosing to not parent-center some error messages, just in. | http://www.codeproject.com/Articles/59483/How-to-make-MessageBoxes-center-on-their-parent-fo?fid=1561265&df=90&mpp=50&sort=Position&spc=Relaxed&tid=4234030 | CC-MAIN-2016-07 | refinedweb | 1,075 | 50.12 |
Some Projective Geometry for our laser scanner
We’re building a simple laser scanner. Camera attached to a board, with line laser hot glued to a servo on same board.
The line laser will scan. This laser line defines a plane. a pixel on the camera corresponds to a ray originating from the camera. Where the ray hits the plane is the 3d location of the scanned point.
Projective geometry is mighty useful here. To take a point to homogenous coordinates add a 1 as a fourth coordinate.
Points with 0 in the fourth coordinate are directions aka points on the plane at infinity (that’s a funky projective geometry concept, but very cool).
Planes are also defined by 4 coordinates. The first 3 coordinates are the normal vector of the plane. $ a \cdot x = c $. The fourth coordinate is that value of c, the offset from the origin. We can also find the plane given 3 points that lie on it. This is what I do here. What we are using is the fact that a determinant of a matrix with two copies of the same row will be zero. Then we’re using the expansion of a determinant in term of its minors, probably the formula they first teach you in school for determinants. Because of these facts, this minor vector will dot to zero for all the points on that plane.
Then finally we’re finding the intersection of the ray with the plane. The line is described as a line sum of two homogenous points on the line. we just need to find the coefficients in the sum. You can see by dotting the result onto the plane vector that the result is zero.
Then we dehomogenize the coordinates by dividing by the fourth coordinate.
import numpy as np #origin is camera position. z is direction camera is looking. x is to the right. y is up. #Waiiiiiit. That's a left handed cooridnate system? Huh. Whatever. May come out mirrore PCameraHomog = np.array([0.,0.,0.,1.]) #Baseline distance of #Let's use units of meters PLaser = np.array([ 0.3 ,0,0]) #I have measured my angle from -x going clockwise. God that is dumb. laserAngle = 60. laserRadian = laserAngle *np.pi/180. PLaserHomog = np.append(PLaser, [1.]) upDirHomog = np.array([0.,0.,1.,0.]) laserDirHomog = np.array([-1.* np.cos(laserRadian), np.sin(laserRadian), 0., 0.]) planeMat = np.stack((PLaserHomog, upDirHomog, laserDirHomog)) def colminor(mat,j): subMat = np.delete(mat, j, axis=1) return (-1.)**j * np.linalg.det(subMat) #The homogenous vector describing the plane coming off of the line laser. p dot x = 0 if x is on plane laserPlaneHomog = np.array(map(lambda j: colminor(planeMat, j) , range(4))) #Should all be zero print np.dot(laserPlaneHomog, laserDirHomog) print np.dot(laserPlaneHomog, upDirHomog) print np.dot(laserPlaneHomog, PLaserHomog) def pixelDir(x,y): # pix / f = objsize / objdist # f = pix * objdist / objsize f = 100. #camera Width of 1m object at 1m in pixels, or 8m object at 8m. return np.array([ x / f , y / f , 1., 0.]) cameraRay = pixelDir(10,20) #pos is on line between camera pos and ray and lies on laserplane. Hence pos dot plane = 0, which you can see will happen posHomog = np.dot(cameraRay, laserPlaneHomog) * PCameraHomog - np.dot(PCameraHomog, laserPlaneHomog) * cameraRay print posHomog def removeHomog(x): return x[:3]/x[3] pos3 = removeHomog(posHomog) print pos3 | https://www.philipzucker.com/some-projective-geometry-for-our-laser-scanner/ | CC-MAIN-2021-43 | refinedweb | 568 | 69.07 |
Provided by: manpages-dev_5.01-1_all
NAME
bindresvport - bind a socket to a privileged IP port
SYNOPSIS
#include <sys/types.h> #include <netinet/in.h> int bindresvport(int sockfd, struct sockaddr_in *sin);
DESCRIPTION->sin_port returns the port number actually allocated. calling process was not privileged (on Linux: the calling process did not have the CAP_NET_BIND_SERVICE capability in the user namespace governing its network namespace). EADDRINUSE All privileged ports are in use. EAFNOSUPPORT (EPFNOSUPPORT in glibc 2.7 and earlier) sin is not NULL and sin->sin_family is not AF_INET.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7). ┌───────────────┬───────────────┬─────────────────────────┐ │Interface │ Attribute │ Value │ ├───────────────┼───────────────┼─────────────────────────┤ │bindresvport() │ Thread safety │ glibc >= 2.17: MT-Safe │ │ │ │ glibc < 2.17: MT-Unsafe │ └───────────────┴───────────────┴─────────────────────────┘.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2017-09-15 BINDRESVPORT(3) | http://manpages.ubuntu.com/manpages/eoan/man3/bindresvport.3.html | CC-MAIN-2019-30 | refinedweb | 151 | 61.22 |
Asynchronous HTTP Programming with Play Framework
Last modified: July 9, 2020
1. Overview
Often our web services need to use other web services in order to do their job. It can be difficult to serve user requests while keeping a low response time. A slow external service can increase our response time and cause our system to pile up requests, using more resources. This is where a non-blocking approach can be very helpful.
In this tutorial, we'll fire multiple asynchronous requests to a service from a Play Framework application. By leveraging Java's non-blocking HTTP capability, we'll be able to smoothly query external resources without affecting our own main logic.
In our example we'll explore the Play WebService Library.
2. The Play WebService (WS) Library
WS is a powerful library that lets our Java code make asynchronous HTTP calls.
Using this library, our code sends these requests and carries on without blocking. To process the result of the request, we provide a consuming function, that is, an implementation of the Consumer interface.
This pattern shares some similarities with JavaScript's implementation of callbacks, Promises, and the async/await pattern.
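To see this callback pattern in isolation, here is a minimal, self-contained Java sketch that simulates an asynchronous call with a CompletableFuture and registers a Consumer on it. The fakeRequest method is a hypothetical stand-in for a real HTTP call, not part of Play's API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class ConsumerPattern {

    // Hypothetical stand-in for an asynchronous HTTP call: Play's WS
    // returns a CompletionStage<WSResponse>; here we simulate one with
    // a CompletableFuture that completes with a fake response body.
    static CompletableFuture<String> fakeRequest() {
        return CompletableFuture.supplyAsync(() -> "response-body");
    }

    // Registers the Consumer and waits for completion (demo purposes only).
    static String run() {
        StringBuilder result = new StringBuilder();
        Consumer<String> handler = result::append;
        // thenAccept returns immediately; the handler runs when the
        // "request" completes, without blocking the calling thread.
        CompletableFuture<Void> done = fakeRequest().thenAccept(handler);
        done.join(); // block here only so the demo can read the result
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

In real WS code the Consumer receives a WSResponse rather than a String, but the registration and non-blocking behavior are the same.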
Let's build a simple Consumer that logs some of the response data:
ws.url(url)
  .get()
  .thenAccept(r ->
    log.debug("Thread#" + Thread.currentThread().getId()
      + " Request complete: Response code = " + r.getStatus()
      + " | Response: " + r.getBody()
      + " | Current Time:" + System.currentTimeMillis()));
Our Consumer is merely logging in this example. The consumer could do anything that we need to do with the result, though, like store the result in a database.
If we look deeper into the library's implementation, we can observe that WS wraps and configures AsyncHttpClient, a standalone HTTP client library that does not depend on Play.
3. Prepare an Example Project
To experiment with the framework, let's create some unit tests to launch requests. We'll create a skeleton web application to answer them and use the WS framework to make HTTP requests.
3.1. The Skeleton Web Application
First of all, we create the initial project by using the sbt new command:
sbt new playframework/play-java-seed.g8
In the new folder, we then edit the build.sbt file and add the WS library dependency:
libraryDependencies += javaWs
Now we can start the server with the sbt run command:
$ sbt run
...
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
Once the application has started, we can check everything is ok by browsing to http://localhost:9000, which will open Play's welcome page.
3.2. The Testing Environment
To test our application, we'll use the unit test class HomeControllerTest.
First, we need to extend WithServer which will provide the server life cycle:
public class HomeControllerTest extends WithServer {
Thanks to its parent, this class now starts our skeleton webserver in test mode and on a random port, before running the tests. The WithServer class also stops the application when the test is finished.
Next, we need to provide an application to run.
We can create it with Guice‘s GuiceApplicationBuilder:
@Override
protected Application provideApplication() {
    return new GuiceApplicationBuilder().build();
}
And finally, we set up the server URL to use in our tests, using the port number provided by the test server:
@Override
@Before
public void setup() {
    OptionalInt optHttpsPort = testServer.getRunningHttpsPort();
    if (optHttpsPort.isPresent()) {
        port = optHttpsPort.getAsInt();
        url = "https://localhost:" + port;
    } else {
        port = testServer.getRunningHttpPort().getAsInt();
        url = "http://localhost:" + port;
    }
}
Now we're ready to write tests. The comprehensive test framework lets us concentrate on coding our test requests.
4. Prepare a WSRequest
Let's see how we can fire basic types of requests, such as GET or POST, and multipart requests for file upload.
4.1. Initialize the WSRequest Object
First of all, we need to obtain a WSClient instance to configure and initialize our requests.
In a real-life application, we can get a client, auto-configured with default settings, via dependency injection:
@Autowired WSClient ws;
In our test class, though, we use WSTestClient, available from Play Test framework:
WSClient ws = play.test.WSTestClient.newClient(port);
Once we have our client, we can initialize a WSRequest object by calling the url method:
ws.url(url)
The url method does enough to allow us to fire a request. However, we can customize it further by adding some custom settings:
ws.url(url)
  .addHeader("key", "value")
  .addQueryParameter("num", "" + num);
As we can see, it's pretty easy to add headers and query parameters.
After we've fully configured our request, we can call the method to initiate it.
4.2. Generic GET Request
To trigger a GET request we just have to call the get method on our WSRequest object:
ws.url(url) ... .get();
As this is a non-blocking code, it starts the request and then continues execution at the next line of our function.
The object returned by get is a CompletionStage instance, which is part of the CompletableFuture API.
Once the HTTP call has completed, this stage executes just a few instructions. It wraps the response in a WSResponse object.
Normally, this result would be passed on to the next stage of the execution chain. In this example, we have not provided any consuming function, so the result is lost.
For this reason, this request is of type “fire-and-forget”.
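The fire-and-forget behavior comes from the CompletionStage contract itself rather than from Play. Here is a rough JDK-only sketch of the same pattern (no Play classes involved; fakeRequest is a stand-in for ws.url(url).get()):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class FireAndForget {

    // Stand-in for an async HTTP call: completes on a background thread.
    static CompletionStage<String> fakeRequest() {
        return CompletableFuture.supplyAsync(() -> "response-body");
    }

    public static void main(String[] args) {
        // Fire-and-forget: the request runs, but nothing consumes the result.
        fakeRequest();

        // Consumed: thenAccept attaches a Consumer that runs on completion.
        // Nothing waits for it, so it may or may not print before the JVM exits.
        fakeRequest().thenAccept(body -> System.out.println("Got: " + body));

        // Either way, the calling thread continues immediately.
        System.out.println("main continues without blocking");
    }
}
```

In both cases the stage starts running as soon as it is created; attaching a consumer only decides whether anyone observes the result.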
4.3. Submit a Form
Submitting a form is not very different from the get example.
To trigger the request we just call the post method:
ws.url(url)
  ...
  .setContentType("application/x-www-form-urlencoded")
  .post("key1=value1&key2=value2");
In this scenario, we need to pass a body as a parameter. This can be a simple string, a file, a JSON or XML document, a BodyWritable, or a Source.
4.4. Submit a Multipart/Form Data
A multipart form requires us to send both input fields and data from an attached file or stream.
To implement this in the framework, we use the post method with a Source.
Inside the source, we can wrap all the different data types needed by our form:
Source<ByteString, ?> file = FileIO.fromPath(Paths.get("hello.txt"));
FilePart<Source<ByteString, ?>> filePart =
    new FilePart<>("fileParam", "myfile.txt", "text/plain", file);
DataPart dataPart = new DataPart("key", "value");

ws.url(url)
  ...
  .post(Source.from(Arrays.asList(filePart, dataPart)));
Though this approach adds some more configuration, it is still very similar to the other types of requests.
5. Process the Async Response
Up to this point, we have only triggered fire-and-forget requests, where our code doesn't do anything with the response data.
Let's now explore two techniques for processing an asynchronous response.
We can either block the main thread, waiting for a CompletableFuture, or consume asynchronously with a Consumer.
5.1. Process Response by Blocking With CompletableFuture
Even when using an asynchronous framework, we may choose to block our code's execution and wait for the response.
Using the CompletableFuture API, we just need a few changes in our code to implement this scenario:
WSResponse response = ws.url(url)
  .get()
  .toCompletableFuture()
  .get();
This could be useful, for example, to provide a strong data consistency that we cannot achieve in other ways.
5.2. Process Response Asynchronously
To process an asynchronous response without blocking, we provide a Consumer or Function that is run by the asynchronous framework when the response is available.
For example, let's add a Consumer to our previous example to log the response:
ws.url(url)
  .addHeader("key", "value")
  .addQueryParameter("num", "" + 1)
  .get()
  .thenAccept(r ->
      log.debug("Thread#" + Thread.currentThread().getId()
          + " Request complete: Response code = " + r.getStatus()
          + " | Response: " + r.getBody()
          + " | Current Time:" + System.currentTimeMillis()));
We then see the response in the logs:
[debug] c.HomeControllerTest - Thread#30 Request complete:
Response code = 200 | Response: {
  "Result" : "ok",
  "Params" : {
    "num" : [ "1" ]
  },
  "Headers" : {
    "accept" : [ "*/*" ],
    "host" : [ "localhost:19001" ],
    "key" : [ "value" ],
    "user-agent" : [ "AHC/2.1" ]
  }
} | Current Time:1579303109613
It's worth noting that we used thenAccept, which requires a Consumer function since we don't need to return anything after logging.
When we want the current stage to return something, so that we can use it in the next stage, we need thenApply instead, which takes a Function.
These use the conventions of the standard Java Functional Interfaces.
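The distinction is again plain java.util.concurrent behavior; a minimal sketch without any Play types:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class StageChaining {

    // thenApply takes a Function: it transforms the value and returns a new
    // stage carrying the result, so the chain can continue.
    static CompletionStage<Integer> bodyLength(CompletionStage<String> body) {
        return body.thenApply(String::length);
    }

    public static void main(String[] args) {
        CompletionStage<String> response = CompletableFuture.completedFuture("hello");

        bodyLength(response)
            // thenAccept takes a Consumer: it ends the chain with no return value.
            .thenAccept(len -> System.out.println("length = " + len)); // prints length = 5
    }
}
```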
5.3. Large Response Body
The code we've implemented so far is a good solution for small responses and most use cases. However, if we need to process a few hundreds of megabytes of data, we'll need a better strategy.
We should note: Request methods like get and post load the entire response in memory.
To avoid a possible OutOfMemoryError, we can use Akka Streams to process the response without letting it fill our memory.
For example, we can write its body in a file:
ws.url(url)
  .stream()
  .thenAccept(response -> {
      try {
          OutputStream outputStream = Files.newOutputStream(path);
          Sink<ByteString, CompletionStage<Done>> outputWriter =
              Sink.foreach(bytes -> outputStream.write(bytes.toArray()));
          response.getBodyAsSource().runWith(outputWriter, materializer);
      } catch (IOException e) {
          log.error("An error happened while opening the output stream", e);
      }
  });
The stream method returns a CompletionStage whose WSResponse has a getBodyAsSource method that provides a Source<ByteString, ?>.
We can tell the code how to process this type of body by using Akka's Sink, which in our example will simply write any data passing through in the OutputStream.
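Stripped of the Akka machinery, the underlying idea is simply chunked copying: read a bounded buffer at a time and write it out, so memory use stays constant regardless of body size. A JDK-only illustration of that idea:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class StreamCopy {

    // Copies a body to an output stream in fixed-size chunks, so memory
    // use stays bounded by the buffer size (same idea as the Akka Sink
    // above, minus the async machinery).
    static long copy(InputStream body, OutputStream out) {
        try {
            byte[] buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = body.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
            return total;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        InputStream body = new ByteArrayInputStream(
            "large response".getBytes(StandardCharsets.UTF_8));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copy(body, out) + " bytes copied"); // prints 14 bytes copied
    }
}
```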
5.4. Timeouts
When building a request, we can also set a specific timeout, so the request is interrupted if we don't receive the complete response in time.
This is a particularly useful feature when we see that a service we're querying is particularly slow and could cause a pile-up of open connections stuck waiting for the response.
We can set a global timeout for all our requests using tuning parameters. For a request-specific timeout, we can call setRequestTimeout on the request:
ws.url(url)
  .setRequestTimeout(Duration.of(1, SECONDS));
There's still one case to handle, though: We may have received all the data, but our Consumer may be very slow processing it. This might happen if there is lots of data crunching, database calls, etc.
In low throughput systems, we can simply let the code run until it completes. However, we may wish to abort long-running activities.
To achieve that, we have to wrap our code with some futures handling.
Let's simulate a very long process in our code:
ws.url(url)
  .get()
  .thenApply(result -> {
      try {
          Thread.sleep(10000L);
          return Results.ok();
      } catch (InterruptedException e) {
          return Results.status(SERVICE_UNAVAILABLE);
      }
  });
This will return an OK response after 10 seconds, but we don't want to wait that long.
Instead, with the timeout wrapper, we instruct our code to wait for no more than 1 second:
CompletionStage<Result> f = futures.timeout(
    ws.url(url)
      .get()
      .thenApply(result -> {
          try {
              Thread.sleep(10000L);
              return Results.ok();
          } catch (InterruptedException e) {
              return Results.status(SERVICE_UNAVAILABLE);
          }
      }),
    1L, TimeUnit.SECONDS);
Now our future will return a result either way: the computation result if the Consumer finished in time, or the exception due to the futures timeout.
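Play's futures.timeout helper is framework-specific, but since Java 9 the JDK has a comparable primitive, CompletableFuture.orTimeout, which completes the future with a TimeoutException if it isn't done in time. Here is a rough JDK-only analogue of the wrapper above (the method names here are illustrative, not Play API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;

public class TimeoutDemo {

    // Simulates a slow consumer/computation by sleeping on a worker thread.
    static CompletableFuture<String> slowComputation(long workMillis) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(workMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "ok";
        });
    }

    // Returns "ok", or "TimeoutException" if the work doesn't finish
    // within timeoutMillis.
    static String runWithTimeout(long workMillis, long timeoutMillis) {
        return slowComputation(workMillis)
            .orTimeout(timeoutMillis, TimeUnit.MILLISECONDS)
            .handle((result, e) -> {
                if (e == null) {
                    return result;
                }
                // Exceptions may arrive wrapped in a CompletionException.
                Throwable cause = e instanceof CompletionException && e.getCause() != null
                        ? e.getCause() : e;
                return cause.getClass().getSimpleName();
            })
            .join();
    }

    public static void main(String[] args) {
        System.out.println(runWithTimeout(10_000, 1_000)); // prints TimeoutException
        System.out.println(runWithTimeout(100, 1_000));    // prints ok
    }
}
```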
5.5. Handling Exceptions
In the previous example, we created a function that either returns a result or fails with an exception. So, now we need to handle both scenarios.
We can handle both success and failure scenarios with the handleAsync method.
Let's say that we want to return the result, if we've got it, or log the error and return the exception for further handling:
CompletionStage<Object> res = f.handleAsync((result, e) -> {
    if (e != null) {
        log.error("Exception thrown", e);
        return e.getCause();
    } else {
        return result;
    }
});
The code should now return a CompletionStage containing the TimeoutException thrown.
We can verify it by simply calling an assertEquals on the class of the exception object returned:
Class<?> clazz = res.toCompletableFuture().get().getClass();
assertEquals(TimeoutException.class, clazz);
When running the test, it will also log the exception we received:
[error] c.HomeControllerTest - Exception thrown
java.util.concurrent.TimeoutException: Timeout after 1 second
...
6. Request Filters
Sometimes, we need to run some logic before a request is triggered.
We could manipulate the WSRequest object once initialized, but a more elegant technique is to set a WSRequestFilter.
A filter can be set during initialization, before calling the triggering method, and is attached to the request logic.
We can define our own filter by implementing the WSRequestFilter interface, or we can add a ready-made one.
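A request filter is essentially a decorator around the request executor: it receives the next executor in the chain and returns a wrapped one. Play's actual WSRequestFilter interface differs in its exact types, but the wrapping idea can be sketched with plain JDK function types:

```java
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class FilterSketch {

    // A "filter" maps one executor (URL -> status code) to another that
    // adds behavior before and after delegating to the wrapped executor.
    static UnaryOperator<Function<String, Integer>> loggingFilter() {
        return next -> url -> {
            System.out.println("about to execute: " + url);
            int status = next.apply(url); // run the wrapped request logic
            System.out.println("completed with status " + status);
            return status;
        };
    }

    public static void main(String[] args) {
        Function<String, Integer> executor = url -> 200; // pretend HTTP call
        Function<String, Integer> filtered = loggingFilter().apply(executor);
        filtered.apply("http://example.com"); // logs before and after the call
    }
}
```

Because each filter returns the same executor shape it received, several filters can be layered by applying them in sequence.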
A common scenario is logging what the request looks like before executing it.
In this case, we just need to set the AhcCurlRequestLogger:
ws.url(url)
  ...
  .setRequestFilter(new AhcCurlRequestLogger())
  ...
  .get();
The resulting log has a curl-like format:
[info] p.l.w.a.AhcCurlRequestLogger -
curl \
  --verbose \
  --request GET \
  --header 'key: value' \
  ''
We can set the desired log level by changing our logback.xml configuration.
7. Caching Responses
WSClient also supports the caching of responses.
This feature is particularly useful when the same request is triggered multiple times and we don't need the freshest data every time.
It also helps when the service we're calling is temporarily down.
7.1. Add Caching Dependencies
To configure caching we need first to add the dependency in our build.sbt:
libraryDependencies += ehcache
This configures Ehcache as our caching layer.
If we don't want Ehcache specifically, we can use any other JSR-107 cache implementation.
7.2. Force Caching Heuristic
By default, Play WS won't cache HTTP responses if the server doesn't return any caching configuration.
To circumvent this, we can force the heuristic caching by adding a setting to our application.conf:
play.ws.cache.heuristics.enabled=true
This will configure the system to decide when it's useful to cache an HTTP response, regardless of the remote service's advertised caching.
8. Additional Tuning
Making requests to an external service may require some client configuration. We may need to handle redirects, a slow server, or some filtering depending on the user-agent header.
To address that, we can tune our WS client, using properties in our application.conf:
play.ws.followRedirects=false
play.ws.useragent=MyPlayApplication
play.ws.compressionEnabled=true
# time to wait for the connection to be established
play.ws.timeout.connection=30
# time to wait for data after the connection is open
play.ws.timeout.idle=30
# max time available to complete the request
play.ws.timeout.request=300
It's also possible to configure the underlying AsyncHttpClient directly.
The full list of available properties can be checked in the source code of AhcConfig.
9. Conclusion
In this article, we explored the Play WS library and its main features. We configured our project, learned how to fire common requests, and learned how to process their responses, both synchronously and asynchronously.
We worked with large data downloads and saw how to cut short long-running activities.
Finally, we looked at caching to improve performance, and how to tune the client.
As always, the source code for this tutorial is available over on GitHub. | https://www.baeldung.com/java-play-asynchronous-http-programming | CC-MAIN-2020-40 | refinedweb | 2,507 | 56.15 |
Animating with keyTimes and keySplines
This topic documents a feature of HTML+TIME 2.0, which is obsolete as of Windows Internet Explorer 9.
The timing and animation features provided by HTML+TIME (Timed Interactive Multimedia Extensions) make it easier to add basic animations to Web pages—just set values for a particular property of a target element over a simple duration, and you've created an animation. What's not immediately obvious is that HTML+TIME also incorporates some of the vector-drawing capabilities of the Scalable Vector Graphics (SVG) syntax to provide Web authors with more sophisticated control over timing intervals and paths. Using the keyTimes and keySplines attributes, you can divide an animation element's simple duration into multiple segments, speed up or slow down the animation at multiple rates during a single duration, and specify values for the animation to reach at particular points in its duration. Even better, HTML+TIME makes it possible to use these features without writing script. Animating with keyTimes and keySplines enables you to do the following:
- Apply spline interpolation to timing segments of an animation. In other words, you can vary the rate at which the animation function calculates the values that drive the animation. This provides much closer control over timing and positioning during animations, but keeps your code remarkably simple.
- Match time values with Bézier control points to include multiple smooth changes in element motion over the course of a single animation.
- Use splines to set varied rates of change when animating colors.
Although you can accomplish some of these effects by using script or separate animateMotion elements with the accelerate and decelerate attributes, it quickly becomes difficult to coordinate the timing of the various animation elements. The keyTimes and keySplines attributes provide a more flexible, immediate way to control your animation's interpolation.
- Prerequisites
- Terms
- A Quick Look at Paths and Splines
- Manage Animation Intervals with keyTimes
- Use keyTimes with a Values List
- What is a keySpline?
- Use keyTimes and keySplines with animateColor
- Multiple Animations with Multiple Timing Segments
- Related Topics
Prerequisites
To use this overview most effectively, you should have some understanding of the HTML+TIME time2 behavior and Introduction to DHTML Behaviors. Specifically, this overview assumes that you know how to create an XML namespace and import and reference the time2 behavior. For more information about how to do this, see Incorporate the time2 Behavior and Authoring HTML+TIME.
Terms
Bézier curve. A curve defined by cubic equations using the coordinates of two endpoints and two control points. Bézier curves are named after mathematician Pierre Bézier. If you've used software with vector-drawing capabilities, like Microsoft Visio, you've probably used Bézier curves.
Interpolation. An operation that estimates a value of a series or function between two known values. In the context of HTML+TIME, an animation interpolates between given coordinates or color values over a duration.
Scalable Vector Graphics (SVG). A language for describing vector graphics in XML. SVG is currently a World Wide Web Consortium candidate. HTML+TIME's path attribute uses a subset of the SVG path syntax.
Simple duration. The duration of an animation element or time container, as set by the dur property, which distinguishes simple time from individual timing segments within an animation.
Spline. Another term for a curve drawn with Bézier equations. In the context of HTML+TIME, spline might also mean the set of Bézier control points, corresponding to a values list, that define the interval pacing of an animation. A more detailed description of HTML+TIME's implementation of splines follows this section.
Vector Markup Language (VML). An XML-based markup language, supported in Windows and Microsoft Office 2000, that uses Cascading Style Sheets, Level 2 (CSS2) to determine layout. Some of the examples in this overview animate a simple Vector Markup Language (VML) shape for demonstration purposes.
A Quick Look at Paths and Splines
If you know all about splines and are familiar with HTML+TIME's implementation of the Scalable Vector Graphics (SVG) path and spline syntax, you might prefer to skip to the Manage Animation Intervals with keyTimes section, which describes how to use splines for timing intervals with keyTimes and keySplines. If you'd like a quick explanation of splines as they apply to paths and motion, continue reading this section.
To help clarify HTML+TIME's implementation of the keyTimes and keySplines attributes, it's useful to first discuss splines in the context of animating motion and using the path attribute. HTML+TIME implements SVG syntax for paths. The following example animates a DIV element along a line between two points. Remember that any property you animate with HTML+TIME must be explicitly set on the element. You can set the property with a style class, but the examples that follow use inline style settings for simplicity's sake.
<HTML XMLNS:t="urn:schemas-microsoft-com:time">
<HEAD>
<STYLE>
    .time {behavior: url(#default#time2);}
</STYLE>
<?IMPORT namespace="t" implementation="#default#time2">
</HEAD>
<BODY>
<t:ANIMATEMOTION targetElement="oDiv" path="M 0,0 L 200,200"/>
<DIV id="oDiv" class="time" style="background-color:#FFCC00; position:absolute;
    height:40; width:120; top:110; left:10; text-align:center;
    font-size:18; border-style:solid">Moving DIV</DIV>
</BODY>
</HTML>
The path attribute starts a new subpath at (0,0) with the absolute "move to" command M, and draws a line to (200,200) with the L command. In much the same way, you could set vertical or horizontal paths, specify multiple values in order to draw polylines, or specify Bézier curves. For detailed information about the PATH attribute, its possible values, and related elements and properties, see HTML+TIME: Animation.
The following example animates the same DIV element along a curved path.
In this case, the PATH attribute draws a cubic Bézier curve from (0,0) to (250,250) using two control points at (150,0) and (250,150). Because the DIV is offset from the top and left by 10 pixels, it finishes the animation's duration at (260,260). Here's a diagram that shows a similar curve with the control points.
In much the same way, you can change the control points to produce a recurved path.
Here's the same idea in a diagram showing the control points.
You can create a similar path by using two curve segments and the corresponding control points. As your curves become more complex, this becomes a more useful strategy. In the following example, the animation begins at (0,0) as before, and animates the DIV along a curve to the endpoint (150,150) using the control points (250,50) and (250,100). The first segment's endpoint becomes the start point of the next segment, which draws another curve between (150,150) and (250,300) using two more control points, (0,200) and (0,250).
Manage Animation Intervals with keyTimes
Now that you've acquainted yourself with the basic ideas that govern cubic Bézier curves, you can apply those ideas to timing intervals. In the same way that a spline describes a curve by specifying change in the path between two endpoints over a given range of coordinate values, the keyTimes and keySplines attributes can help you specify change in other values that you want to animate over time—color, speed, position, style, and so on. The following discussion concerns some ways to use keyTimes and keySplines.
Use keyTimes with a Values List
The keyTimes attribute enables you to designate intervals, or timing segments, that subdivide an animation's simple duration. The keyTimes attribute specifies a semicolon-delimited list of time values. These values represent a proportional offset into the simple duration of the animation. When used with calcMode settings of linear or spline, the following restrictions apply to keyTimes settings: the first value must be 0; subsequent values must be greater than 0 and less than 1; and the last value must be 1.

If you use the keyTimes attribute with calcMode set to discrete, the rules are slightly different. The first keyTimes value must be 0; subsequent values must be greater than 0 and less than 1. The last value, however, must be greater than the preceding value, but can be less than 1. This is because the discrete setting for the calcMode attribute specifies that the animation reflects each value in a values list exactly at the corresponding keyTimes value. If you set the last keyTimes value to 1, your animation will only attain the last value in the values list at the time the animation's duration ends—the viewer won't see it.
Keep in mind that if the values you specify for keyTimes do not meet these requirements, the animation either will not work as expected or will not work at all. Similarly, if you use keyTimes with the values attribute, you must specify the same number of values in keyTimes as in the values list. Otherwise, the animation will not work.
Suppose you want to animate the width of a simple vector-drawn shape. Use the values attribute to set a list of values for the shape's width; use the keyTimes list to specify the length of the time segments for each interval between width values during the course of the animation.
How the animation interpolates the shape's width during those intervals depends on the value set for the animation's calcMode attribute, which can be set to linear, discrete, or spline. Note that setting calcMode to paced overrides any keyTimes list, since the paced value specifies that the animation interpolate the width at an even pace through the animation's duration.
In this example, the keyTimes list divides the animation's duration into two sections: the first, from the start of the animation to 3.5 seconds (or .7 of the duration); the second, from 3.5 seconds to the end at 5 seconds. Notice that the oval widens more quickly toward the end of the duration. Because calcMode is set to linear, the animation uses a linear interpolation to calculate the oval's width during each time segment. During the first segment, the width increases by 200 pixels in 3.5 seconds, but during the second segment, the width increases by 200 pixels in 1.5 seconds, requiring a faster rate of change.
There are several ways to use keyTimes. You might set calcMode to discrete and use the keyTimes list to set specific timing for many quick changes to a particular property during an animation. In this case, keyTimes lets you specify all the timing changes for a single property with a single animation element, helping simplify your markup if you have other complex timelines involved with the same animation.
The keyTimes attribute is also useful when you want to begin or end other animation timelines when the property you're animating reaches a certain value. You can associate a value in the values list with a value in the keyTimes list and organize the timing of other animation elements accordingly.
What is a keySpline?
To understand the keySplines property, it's helpful to keep in mind the preceding discussion of Bézier curves and control points. The keySplines property enables you to determine how an animation is interpolated during a time segment. With keySplines, instead of the location coordinates you specify with the PATH attribute, you specify timing coordinates for the pacing of an animation's simple duration. In other words, the values you set for keySplines are the Bézier "control points" for a curve. The curve represents the rate of change in the animation's target attribute over the time segment.
The values you set with keyTimes represent the anchor points of this curve. Because each successive keyTimes value represents a proportional offset into the simple duration of the animation, the key times begin at 0 and end at 1, with 1 representing the whole duration. For this reason, the coordinates of the control points you specify with keySplines must be greater or equal to 0 and less than or equal to 1. Each interval between keyTimes values defines a time segment for which you can set keySplines values, so you'll always have one fewer set of keySplines values than keyTimes values.
Though keyTimes and keySplines determine the interval pacing of the animation, there's one more set of values to keep in mind: the values for the property you want to animate. In the case of an animateMotion element, you could provide these values with the path attribute or with coordinates in a values list. For other animation elements, the values to be interpolated by the animation element appear in the values list. To put it still another way: the values list is the set of "hoops" through which you want the animation to jump. It can jump according to whatever interpolation you set—setting keyTimes and keySplines ensures that your interpolation takes on a certain "shape" (rate of change) between hoops.
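Concretely, a keySplines setting (x1 y1 x2 y2) defines the cubic Bézier curve from (0,0) to (1,1) with control points (x1,y1) and (x2,y2), where x is the fraction of the time segment elapsed and y is the fraction of the value change completed. The following small sketch (plain Python, nothing HTML+TIME-specific) evaluates such a pacing curve:

```python
def bezier(t, p0, p1, p2, p3):
    """Evaluate one coordinate of a cubic Bezier at parameter t in [0, 1]."""
    mt = 1.0 - t
    return mt**3 * p0 + 3 * mt**2 * t * p1 + 3 * mt * t**2 * p2 + t**3 * p3

def key_spline_progress(elapsed, x1, y1, x2, y2, steps=10000):
    """Fraction of the value change completed after `elapsed` fraction of
    the time segment, for keySplines control points (x1, y1), (x2, y2)."""
    # The curve is parametric, so invert x(t) = elapsed numerically,
    # then evaluate y at that parameter.
    best_t = min((i / steps for i in range(steps + 1)),
                 key=lambda t: abs(bezier(t, 0.0, x1, x2, 1.0) - elapsed))
    return bezier(best_t, 0.0, y1, y2, 1.0)

# keySplines="0 0 1 1" is a straight line: linear pacing.
print(round(key_spline_progress(0.5, 0, 0, 1, 1), 3))  # 0.5

# keySplines=".5 0 .5 1" eases in and out: at a quarter of the segment's
# time, less than a quarter of the value change has happened.
print(round(key_spline_progress(0.25, 0.5, 0, 0.5, 1), 3))
```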
In the following example, the keySplines settings for a simple straight-line animateMotion element specify that the animation moves quickly at the start of its duration and decelerates toward the end. You can also compare the animation using keySplines to the same animation with calcMode set to linear or discrete.
The diagrams that follow plot the keySplines control points as pacing curves: the horizontal axis represents the proportion of the time segment elapsed, and the vertical axis represents the proportion of the animated value's change completed.
To see how these values for keySplines affect an animation, click the appropriate buttons in the following example.
Use keyTimes and keySplines with animateColor
You can control the interpolation of an animated color property with keyTimes and keySplines just as you can control interpolation with motion animations. Ordinarily, you might use the to, from, or by attributes of animateColor to change the target element's color attribute from one color value to another. The animation then proceeds using linear, discrete, or paced interpolation, depending on the setting you specify for the animateColor element's calcMode attribute. However, you might want to smoothly vary the rate at which the target element's color value is interpolated across the animation's duration. To accomplish this, set one or a combination of several attributes on the animateColor element: either values and keyTimes, or keyTimes and keySplines. The following example uses animateColor to animate a shape's fill color through a series of color values.
Color values in the list can be specified as hexadecimal RGB values or as named colors (for example, #8A2BE2 or blueviolet).
By setting the keyTimes attribute on aColor, you gain finer control over the timing of the oval's color changes. To make the oval's color change slowly from red to green, and quickly from green to blue, set a shorter timing segment between the values #00FF00 and #0000FF in the keyTimes list.
By adding keySplines to this example, you can change the rate of interpolation for each segment as well as its length. In the following example, the first time segment animates the oval's fill color from red to #00FF00 (lime) over the first seven-tenths of the simple duration; it uses the keySplines values (.75 0 1 .25) to specify that the color change happens slowly at first, then gradually speeds up. At the second segment, from seven-tenths of the duration to the end, the animation uses the keySplines values (0 0 1 1) to animate the fill color from #00FF00 (lime) to #0000FF (blue). Because these values describe a straight line from (0,0) to (1,1), you see a linear interpolation from #00FF00 to #0000FF at the end of the animation. Click the Show Me button to compare the animation using keyTimes and keySplines to discrete and paced animation settings.
<v:oval <t:ANIMATECOLOR
Multiple Animations with Multiple Timing Segments
It's fairly straightforward to make a basic animation more complex by adding further animation elements and timelines. Here's a VML shape animated along a curved path. In this case, keyTimes and keySplines allow smooth acceleration and deceleration of the object to suggest inertia.
<t:ANIMATEMOTION <v:oval </t:ANIMATEMOTION>
To make this more interesting, add a couple of short animation blocks to change the object's shape and color.
<!-- bang animation for vm1 object --> <t:ANIMATE <t:ANIMATE <t:ANIMATECOLOR <t:ANIMATE <t:ANIMATE <t:ANIMATECOLOR
After animating the first arc, it's easier to add subsidiary shapes and animations. Though the elements in the example that follow are all grouped in a basic t:PAR time container and use specific begin times, you could also set some of the animation elements to begin relative to other elements, or relative to other events on the page. For example, you might want to play a short animation when the user moves the mouse pointer over part of your table of contents, or when the user clicks a particular part of the page. For more information, see Initiating Timed Elements with the begin Attribute.
Here's a more elaborate version of the preceding examples, with some additional shapes in motion.
RSA, Blind Signatures, and a VolgaCTF Crypto Challenge
Maple Bacon participated in VolgaCTF 2019, which ran for 48 hours from March 29th at 15:00 UTC. We were all pretty busy, with it being the last week of classes, but we managed to finish 41st out of 1097 teams. In addition to the challenge in this writeup, I also solved Store (Web 100), Fakegram Star (Antifake 75), and Horrible Retelling (Antifake 50).
I wrote this since it was my first time looking in depth at a crypto problem. I researched and tried to understand how the RSA encryption algorithm worked (including all the math - keep reading for an actual proof), as well as the blinding and unblinding functions, and incorporated these into my exploit.
Table of Contents
- Understanding the Problem
- Diving into RSA
- RSA Signatures
- Blinding and Unblinding Functions
- The Exploit
Blind - Crypto 200
Description
Pull the flag…if you can.
nc blind.g.2019.volgactf.ru 7070
Understanding the Problem
I like to connect to the box to take a look at what happens and the input/outputs that we’re looking at. Using netcat, we connect to the port and see this:
Looks like it’s asking for a command. Perhaps a Linux command might work?
That didn’t work. Guess that might have been too easy. Let’s take a look at the server.py file that was helpfully provided for us. Let’s skip down to the
main function and try to figure out what the server’s doing.
if __name__ == '__main__':
    signature = RSA(e, d, n)
    check_cmd_signatures(signature)
    try:
        while True:
            send_message('Enter your command:')
            message = read_message().strip()
            (sgn, cmd_exp) = message.split(' ', 1)
There are a couple points of interest here:
- signature is initialized to an object of the RSA class, with the variables e, d, and n. Looking through the script, we see that n and e are local variables in the section nicely commented as Keys. However, d is imported in from another Python module, private_key, in the line from private_key import d.
- check_cmd_signatures(signature) just seems to do a verification that the signing process is working properly, so we can ignore that.
- We see the message that we get when we connect to the box: "Enter your command". When we put in an input, read_message().strip() removes the whitespace. More importantly, the next line tells us that the expression is split into sgn and cmd_exp.
That last point is pretty significant. When we scroll down to the bottom, we see that one of the catch statements is:
except Exception as ex:
    send_message('Something must have gone very, very wrong...')
    eprint(str(ex))
Since our input was only a single word, it wasn’t able to unpack the message into the two separate variables and threw an Exception.
After the input is unpacked, the server uses shlex to get an array of the command in a shell-like syntax.
cmd_l = shlex.split(cmd_exp)
cmd = cmd_l[0]
shlex.split is similar to split(), but it respects quoting instead of blindly splitting on whitespace. For example, shlex.split("cd 'my folder'") will give you ['cd', 'my folder'], while "cd 'my folder'".split() gives you ['cd', "'my", "folder'"]. We'll have to keep this in mind when writing our exploit, since the quirks of quote handling may cause problems.
Skimming through the rest of the function, we see that it's designed to work with commands that start with ls, dir, cd, cat, sign, exit, or leave. Any other command will cause the script to exit with "Unknown command".
If we look at the if statement for ls and dir, we see:
if cmd == 'ls' or cmd == 'dir':
    ret_str = run_cmd(cmd_exp)
    send_message(ret_str)
This seems to say that these commands aren't verified and will just run on the server. Let's just give this a shot. I use 1 ls because we know that the script splits the command into two parts, and the second part is used as the cmd in the if statements. We don't know anything about the sgn right now, so we'll just put something random, which works because the script won't do anything with it if the command is ls.
Nice! Looks like the private_key.py file I mentioned earlier is there, along with the flag file that we'll probably want to read. Can we run cat flag to just easily grab it?
😞
So the server doesn’t just let us run any command we want. Let’s go back to the script and try to figure out how it does this signature verification check.
elif cmd == 'cat':
    try:
        sgn = int(sgn)
        if not signature.verify(cmd_exp, sgn):
            raise SignatureException('Signature verification check failed')
        if len(cmd_l) == 1:
            raise Exception('Nothing to cat')
        ret_str = run_cmd(cmd_exp)
        send_message(ret_str)
    except Exception as ex:
        send_message(str(ex))
sgn is converted into an integer, and then sent to signature.verify along with the command. If it passes that check, the server will run our command, so we have to find a way for our command to verify successfully.
How do commands get signed? Let's take a look at the script for the sign command.
elif cmd == 'sign':
    try:
        send_message('Enter your command to sign:')
        message = read_message().strip()
        message = message.decode('base64')
        cmd_l = shlex.split(message)
        sign_cmd = cmd_l[0]
        if sign_cmd not in ['cat', 'cd']:
            sgn = signature.sign(sign_cmd)
            send_message(str(sgn))
        else:
            send_message('Invalid command')
    except Exception as ex:
        send_message(str(ex))
In short, this command will sign any message encoded in base64 except for the ones that start with cat or cd. Looks like they really don't want us reading any of the other files on the server.
The signature can be analyzed in the RSA class.
class RSA:
    def __init__(self, e, d, n):
        self.e = e
        self.d = d
        self.n = n

    def sign(self, message):
        message = int(message.encode('hex'), 16)
        return pow(message, self.d, self.n)

    def verify(self, message, signature):
        message = int(message.encode('hex'), 16)
        verify = pow(signature, self.e, self.n)
        return message == verify
I had no idea what RSA cryptography was when I started looking into this challenge, so I started researching some background information on how it works.
Diving into RSA
RSA (Rivest-Shamir-Adleman) is an asymmetric encryption algorithm, which uses prime factorization as the trapdoor function.
The trapdoor function refers to a very important concept in cryptography: it is trivial to go from one state to another, but going the opposite direction, without specific information, becomes unfeasible. In other words, the function is one-way. You can imagine this being extremely useful, since you want to be able to quickly encrypt a message, but make it difficult for just anyone to decrypt it.
Prime factorization (or integer factorization) is the number theory concept that every integer greater than 1 can be broken down into a product of prime numbers. No efficient algorithm is known for factoring extremely large numbers, and the hardest instances are semiprimes, numbers that are the product of exactly two primes. If the two primes are close together, the number can be factored quickly using Fermat's method; if they aren't, trial division can be more efficient than Fermat's, which, suffice to say, isn't efficient at all. RSA's security rests on this difficulty; the task of recovering the plaintext from only the ciphertext and public key is known as the RSA problem.
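Here is a minimal sketch of Fermat's method, just to illustrate why close primes are dangerous (the function name and test numbers are my own, not from the challenge):

```python
import math

def fermat_factor(n):
    # Look for a, b with n = a^2 - b^2 = (a - b)(a + b).
    # When p and q are close, a starts near sqrt(n) and the loop finishes quickly.
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(10403))  # 101 * 103: two close primes, found immediately
```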
RSA Encryption and Decryption
The RSA algorithm works with the following four steps:
1. Key Generation
- Two large prime numbers, p and q, are picked. These should be similar in magnitude but differ in length by a few digits, so that Fermat's method will not work.
You use these numbers to compute n = p * q.
n will be the modulus for both the public and the private keys. Its length is usually expressed in bits and is known as the key length. n will be made public.
Compute the Euler totient function: φ(n) = (p - 1) * (q - 1).
Carmichael’s totient function λ(n) can also be used, since φ(n) is always divisible by λ(n).
Select e between 3 and n - 1 such that it is relatively prime to p - 1 and q - 1. Relatively prime, or coprime, means that the only common factor between them is 1, i.e., their greatest common divisor is 1. Equivalently: gcd(e, p - 1) = 1 and gcd(e, q - 1) = 1, or gcd(e, λ(n)) = 1.
Compute d as the multiplicative inverse of e modulo λ(n): d = e^(-1) mod λ(n), or equivalently, d * e = 1 mod λ(n).
The public key will consist of modulus n and public exponent e. The private key will consist of the private exponent d.
2. Key Distribution
Let’s say Bob wants to send Alice his message M. Bob needs Alice’s public key to encrypt the message, and Alice uses her private key to decrypt the message.
Alice sends her public key (n, e) to Bob, while keeping her private key secret.
3. Encryption
To encrypt message M, first turn the message into an integer m, such that 0 ≤ m < n, with a reversible padding scheme known by both parties. This turns the message into a numeric form for encryption.
The ciphertext C is computed by raising m to the eth power modulo n: C = m^e mod n.
4. Decryption
You can recover message m by using the private key d. Compute: m = C^d mod n.
Given m, you can easily compute M by reversing the padding scheme.
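Putting the four steps together, here's a toy round trip in Python (the primes are textbook-sized values picked for illustration; real keys use primes hundreds of digits long):

```python
p, q = 61, 53                 # two (tiny) primes
n = p * q                     # modulus: 3233
phi = (p - 1) * (q - 1)       # Euler totient: 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: inverse of e mod phi (Python 3.8+)

m = 65                        # padded message as an integer, 0 <= m < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption: c^d mod n recovers m
```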
For a proof of correctness of these encryption and decryption algorithms, section VI, The Underlying Mathematics, of the original RSA paper is a very interesting read.
RSA Signatures
Phew. That was quite a lot of information. But that still doesn’t explain how our challenge is going to be solved. The server implements RSA signing.
- When you are encrypting a message, you use the recipient’s public key, and they decrypt it with their private key.
- When signing a message, you use your private key, and the recipient verifies that the message is yours using your public key.
What the server is doing when signing is using its private key d to essentially encrypt the input, which we can verify by looking at the function in the server script (yes, we're still working on this challenge):
def sign(self, message):
    message = int(message.encode('hex'), 16)
    return pow(message, self.d, self.n)
So the message M is transformed into an integer m by encoding it to hex, and the result is computed by raising m to the dth power mod n. This gives you the signed message, the signature S. Mathematically: S = m^d mod n.
To verify the signature, it computes the inverse of this in the verify function.
def verify(self, message, signature):
    message = int(message.encode('hex'), 16)
    verify = pow(signature, self.e, self.n)
    return message == verify
Which gives you m' = S^e mod n.
If m' = m, then you can verify that the signature is correct. The verify function compares your message (the command cat flag in our case) with the signed version of it, and will return true if it matches.
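The same sign/verify round trip can be checked with a toy key (the server script is Python 2, where message.encode('hex') works; in Python 3 the equivalent is message.encode().hex(), and the modulus must be larger than the encoded message, hence the one-character message below):

```python
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                       # Python 3.8+ modular inverse

def sign(message):
    m = int(message.encode().hex(), 16)   # message -> integer
    return pow(m, d, n)                   # S = m^d mod n

def verify(message, signature):
    m = int(message.encode().hex(), 16)
    return m == pow(signature, e, n)      # check m == S^e mod n

assert verify("a", sign("a"))             # a genuine signature verifies
assert not verify("b", sign("a"))         # a signature on a different message does not
```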
However, our server refuses to sign cat flag. How can we trick the server into signing it? I spent a long time trying to figure this out; I even tried to factor p and q from n before I knew any better. Just when I was about to give up and click on another challenge, I realized that the challenge was named Blind. Why? That led me to my next search.
Blind Signatures
Many times, for privacy and anonymity, you want to be able to have a message signed without the signer knowing what the message is – for example, in electronic voting or digital currency. This is called a blind signature.
Let’s say Bob wants Alice to sign a message, but he doesn’t want her to read it. These steps are followed to obtain Alice’s signature:
- Bob “blinds” the message m with a random number b, which is known as the blinding factor. Let’s call this blind(m, b).
- Alice signs the message, so we get signed(blind(m, b), d). This is signed with her private key d, so Bob does not know this.
- Bob “unblinds” the message with b, getting unblinded(signed(blind(m, b), d), b). The blind and unblind functions must reduce to signed(m, d), which is Alice’s signature on m.
That sounds like exactly what we want. Let’s analyze our problem:
- We want the server to sign cat flag.
- The server refuses to sign any message starting with cat.
- But the server will sign any other message. If we blind cat flag, send the blinded message to the server to sign (it won't know it's one of the blacklisted commands), and then unblind the result, we will be able to get its signature on our command.
Let’s go about building the exploit.
Blinding and Unblinding Functions
These are going to work like this.
For a message M, convert it to an integer equivalent m. We will also choose a random value k, which can be any integer that is coprime to n; k^e mod n will be the blinding factor. Given e as the public exponent and n as the modulus, same as in the RSA signing process, we blind the message m by multiplying it with the blinding factor as follows:

m' = m * k^e mod n    (Blind:1)

We get the blinded message m'. We will send m' to be signed, and it will return as:

S' = (m')^d mod n = m^d * k mod n    (Blind:2)

where S' is the signed blinded message. Because the message doesn't start with cat or any other blacklisted command, the server will happily do this for us. Now, the unblinded signature can be calculated by dividing by k:

S = S' * k^(-1) mod n = m^d mod n    (Blind:3)
Proof
More mathy stuff, but I found it interesting so wanted to write it here. Let's prove that this actually works. Recall the blinding equations: m' = m * k^e (Blind:1), S' = (m')^d (Blind:2), and S = S' * k^(-1) (Blind:3), all mod n. The verify function will raise the signed message S to the eth power and compare this with the original message m to check that they are equal. The verification can be written as follows, substituting in (Blind:3):

S^e = (S' * k^(-1))^e mod n = (S')^e * k^(-e) mod n    (Blind:4)

With (Blind:2), we can substitute S':

S^e = ((m')^d)^e * k^(-e) mod n    (Blind:5)

Because RSA keys satisfy the equation (x^d)^e = x mod n, we can reduce (Blind:5) further:

S^e = m' * k^(-e) mod n    (Blind:6)

Substituting (Blind:1) into (Blind:6), we get:

S^e = m * k^e * k^(-e) mod n = m mod n

This demonstrates that when the unblinded signature is passed through the verify function, it will prove to be equal to the original message m.
The Exploit
If you’ve skipped all the way down here, here’s the lowdown:
- We need to build the function blind, which will multiply your command by the blinding factor k^e mod n.
- We need to build the function unblind, which will divide by k mod n.
- We send blind(m) to the server, which will sign it. We take the response and unblind it, giving us the signature on the original command.
- We then send the signature and the original command to the server, which will successfully pass the verification and run the command.
Blind
def blind(message):
    message = int(message.encode('hex'), 16)  # original message in hex
    message = message * pow(k, e, n)          # blinded message in hex
    message = long_to_bytes(message)          # back to string
    message = "'" + message + "'"             # to get around shlex.split
    message = base64.b64encode(message)       # input to sign must be base64 encoded
    return message
As mentioned earlier, since the server uses shlex.split, we need to put single quotes around the command for it to be parsed properly. The inline comments explain each step.
Unblind
def unblind(blinded_sgn):
    # unblind by m / (k % n)
    return str(int(blinded_sgn, 10) * inverse(k, n))
I originally had int(blinded_sgn, 10) / (k % n), but Python's division operator performs integer division, not modular division. Equivalently, we can multiply by the modular inverse of k mod n instead, using inverse from PyCrypto. (Shoutout to our amazing coach Robert for helping me with this!)
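To see why plain integer division can't undo the blinding while the modular inverse can, here's a small check with made-up numbers:

```python
n, k, x = 3233, 7, 1234
blinded = (x * k) % n              # the multiplication happens mod n...

assert blinded // k != x           # ...so floor division does not recover x

k_inv = pow(k, -1, n)              # modular inverse (PyCrypto's inverse(k, n) does the same)
assert (blinded * k_inv) % n == x  # multiplying by k^-1 mod n does recover x
```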
Script
Here’s the full script that connects to the server and spits out the flag.
import binascii
from pwn import *
from Crypto.Util.number import inverse, long_to_bytes
import base64

n = 26507591511689883990023896389022361811173033984051016489514421457013639621509962613332324662222154683066173937658495362448733162728817642341239457485221865493926211958117034923747221236176204216845182311004742474549095130306550623190917480615151093941494688906907516349433681015204941620716162038586590895058816430264415335805881575305773073358135217732591500750773744464142282514963376379623449776844046465746330691788777566563856886778143019387464133144867446731438967247646981498812182658347753229511846953659235528803754112114516623201792727787856347729085966824435377279429992530935232902223909659507613583396967
e = 65537
k = 5  # change this if you get "no closing quotation"

def blind(message):
    # blinds the message by multiplying by k^e mod n
    message = int(message.encode('hex'), 16)  # original message in hex
    message = message * pow(k, e, n)          # blinded message in hex
    message = long_to_bytes(message)          # back to string
    message = "'" + message + "'"             # to get around shlex.split
    message = base64.b64encode(message)
    return message

def unblind(blinded_sgn):
    # unblind by m / (k % n)
    return str(int(blinded_sgn, 10) * inverse(k, n))

msg = "cat flag"

r = remote("blind.q.2019.volgactf.ru", 7070)
r.recvuntil("Enter your command:")
r.send("1 sign\n")
r.recvuntil("Enter your command to sign:")
blinded_msg = blind(msg)
r.send(blinded_msg + "\n")
signed_blinded = r.recvuntil("Enter").strip("Enter")
print("signed blinded: " + signed_blinded)
r.close()

signature = unblind(signed_blinded)
print("unblinded signature: " + signature)

r = remote("blind.q.2019.volgactf.ru", 7070)
r.recvuntil("Enter your command:")
r.send(signature + " " + msg + "\n")
a = r.recvuntil("Enter").strip("Enter")
print(a)  # flag should print here!
r.close()
Sometimes, if you’re unlucky, your randomly blinded message contains
', and the server’s
shlex.split will get confused; you’ll get the error “No closing quotation”. If this happens, just change the
k value until it works.
You can also send the message cat private_key.py to see the value of d that the server is using to sign the commands, just for funsies.
We finally get the flag: VolgaCTF{B1ind_y0ur_tru3_int3nti0n5} ✨
References
The Mathematics of RSA Public-Key Cryptosystem (Kaliski, RSA Laboratories)
A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (Rivest, Shamir & Adleman, MIT)
Twenty Years of Attacks on the RSA Cryptosystem (Boneh, Stanford)
Blind Signatures (Ryan, University of Birmingham)
RSA Signing is Not RSA Decryption (Cornell)
Wikipedia pages for RSA, Diffie-Hellman Key Exchange, Fermat’s Factorization, Integer factorization, and Coprime integers | https://kristen.dev/blog/RSA-Blind-Signatures | CC-MAIN-2020-16 | refinedweb | 2,962 | 64.51 |
This walkthrough is a transcript of the Column Visibility video available on the DevExpress YouTube Channel.
The DevExpress Grid Control supports the Microsoft Outlook style Column Chooser window, which can be invoked in the column header context menu.
You can drag a column header onto that window to hide the column from the View.
Drag it back to make the column visible again.
Note that you can also drop a column header just a little below the column header panel when the cross cursor appears. This will also hide the column and its header will appear in the customization window.
The same features are available to you at design time in Visual Studio. You can use drag-and-drop or select the Remove This Column item from the context menu. And just as at runtime, you can drag the headers back into the View.
If you need to change column visibility in code, the simplest way is to use the column's GridColumn.Visible property. Note that setting it to false also changes the VisibleIndex property value to -1.
Switch GridColumn.Visible back and see how the GridColumn.VisibleIndex is restored to its previous value.
Run the application and invoke the Column Chooser dialog. One column header is displayed there, so you can drag it back to the View.
If you don't want end-users to do that, you can hide the header even from the Column Chooser window. Select the desired column and disable its OptionsColumn.ShowInCustomizationForm option.
Now open the Column Chooser again to see that the header is no longer there.
Now hide a column by dragging it down a bit. This functionality is turned on by default, but you can disable it using the View's GridOptionsCustomization.AllowQuickHideColumns option. In that case, the cross cursor never appears and end-users can only hide columns by dragging them onto the Column Chooser form.
You can also disable column drag and drop as described in a previous tutorial (see GridOptionsCustomization.AllowColumnMoving). The Column Chooser dialog is not available in this case. But you can still hide the column using the context menu.
Since hiding a column is essentially changing its position to -1, then the event to use to respond to visibility change is the View's ColumnView.ColumnPositionChanged. This tutorial illustrates this event's usage with a simple sample. The handler will calculate the total width of all columns currently visible within the View. Note that the View provides you with the ColumnView.VisibleColumns property to make this easier. Then, the View's Auto Column Width feature gets enabled if the total column width is less than the control's width.
using DevExpress.XtraGrid.Views.Grid;
//...
private void gridView1_ColumnPositionChanged(object sender, EventArgs e) {
GridView view = sender as GridView;
if(view == null) return;
int totalWidth = view.VisibleColumns.Sum(column => column.Width);
view.OptionsView.ColumnAutoWidth = totalWidth < gridControl.Width;
}
Run the application. Horizontal scrolling is enabled by default. Now hide a few columns. Once horizontal scrolling is no longer necessary, the Auto Column Width mode gets enabled. Bring the columns back into the View to see the horizontal scrollbar again.
It's worth noting that similar visibility customization functionality is available in Banded Views and Advanced Banded Views. You can drag column or band headers until the cross cursor appears or directly onto the Column Chooser dialog and in the same manner drag them back into the View. | https://documentation.devexpress.com/WindowsForms/114721/Controls-and-Libraries/Data-Grid/Get-Started-With-Data-Grid-and-Views/Walkthroughs/Grid-View-Columns-Rows-and-Cells/Tutorial-Column-Visibility | CC-MAIN-2019-43 | refinedweb | 567 | 57.57 |
Opened 9 years ago
Last modified 2 weeks ago
#10060 new Bug
Multiple table annotation failure
Description
Annotating across multiple tables results in wrong answers. i.e.
In [110]: total = Branch.objects.all().annotate(total=Sum('center__client__loan__amount'))

In [111]: total[0].total
Out[111]: 3433000

In [112]: repaid = Branch.objects.all().annotate(repaid=Sum('center__client__loan__payment_schedule__payments__principal'))

In [113]: repaid[0].repaid
Out[113]: 1976320.0

In [114]: both = Branch.objects.all().annotate(total=Sum('center__client__loan__amount'), repaid=Sum('center__client__loan__payment_schedule__payments__principal'))

In [115]: both[0].repaid
Out[115]: 1976320.0

In [116]: both[0].total
Out[116]: 98816000
          ^^^^^^^^^^^
Compare the output of total in 116 vs. 111 (the correct answer).
Attachments (1)
Change History (57)
comment:1 Changed 9 years ago by
Changed 9 years ago by
Test project for issue.
comment:2 Changed 9 years ago by
if you unzip agg_test.tar.gz, start a django shell in the project and run count.py in the shell, you will see the error being reproduced.
cheers
Sid
comment:3 Changed 9 years ago by
(formatting)
comment:4 Changed 9 years ago by
comment:6 Changed 8 years ago by
This is a fairly big problem, but it's not going to be an easy one to fix - the solution may just have to be documentation to say "don't do that".
comment:7 Changed 8 years ago by
If anyone were to fix this, the first question to ask would be how you would structure the SQL. I'm not an SQL efficiency expert, but I know one way to get the correct result would be to use subqueries in the SELECT clause:
For example, let's say I have a User, PointEarning w/ ForeignKey(User), and PointExpense w/ ForeignKey(User). Let's say I want a query to get the user's email, total points earned, and total points spent:

SELECT auth_user.id,
       auth_user.email,
       (SELECT SUM(points) FROM app_pointearning WHERE user_id=auth_user.id) AS points_earned,
       (SELECT SUM(points) FROM app_pointexpense WHERE user_id=auth_user.id) AS points_spent
FROM auth_user
Another way, I believe, is to join on derived tables (sub-queries). Whatever way we decide to write the SQL, django could perhaps detect when we're using aggregates accross multiple joins, and alter the structure of the query accordingly. Seems like it could be kinda messy, but at least the results would be more what the user expected.
comment:8 Changed 8 years ago by
comment:9 follow-up: 57 Changed 8 years ago by
From what I've researched, joining on subqueries would be faster than subqueries in the SELECT clause. So instead of this:
SELECT u.email,
       (SELECT SUM(points) FROM point_earning WHERE user_id=u.id) AS points_earned,
       (SELECT SUM(points) FROM point_expense WHERE user_id=u.id) AS points_spent
FROM "user" u
...we would want this:
SELECT u.email,
       a.points AS points_earned,
       b.points AS points_spent
FROM "user" u
LEFT OUTER JOIN (SELECT user_id, SUM(points) AS points FROM point_earning GROUP BY user_id) a ON a.user_id=u.id
LEFT OUTER JOIN (SELECT user_id, SUM(points) AS points FROM point_expense GROUP BY user_id) b ON b.user_id=u.id
ORDER BY u.id
What this does, essentially, is move the aggregate function into a derived table which we join onto the main table. I would imagine a solution to this bug would detect if annotations are being used across multiple relations, and if so, would adjust the join and select clauses as necessary. I may attempt to write a patch for this soon, but it seems like it would be tough, so there's no guarantees :-p
Thoughts?
comment:10 Changed 8 years ago by
Ok, after spending several hours looking through django/db/models/sql/query.py, I feel like my brain has turned to mush. This is definitely better left up to the ppl that wrote it. I hope that at some point someone who is capable can write a patch for it.
I'm against simply adding a "don't do this" in the documentation, because users would simply expect something like this to work -- it's hard to understand at a high level why simply adding another annotation from a different relation would skew the results.
comment:11 Changed 8 years ago by
comment:12 Changed 8 years ago by
Would be nice to know why the milestone was deleted.
If we are pushing this to 1.2 we need to add it in "Known Bugs" in the doc page:
comment:13 Changed 8 years ago by
The milestone was deleted because this is a fairly big problem, without an obvious (or particularly easy) solution. There is no potential for data loss, and there is a viable workaround - either "don't do it",or use two queries rather than one. One of the side effects of a time-based release schedule is that some bugs will always exist in releases - unfortunately, this is one of those bugs.
Regarding adding this to a "known bugs" section - we don't document bugs. We document intended behavior. The ticket tracker is for bugs.
comment:14 Changed 8 years ago by
I'm going to second adding this to the aggregation documentation, because I just wasted two hours re-discovering this bug.
comment:15 Changed 8 years ago by
I also agree that this should be documented, but perhaps an exception should be raise when attempting to do multiple annotations on different tables/joins? I can't think of a situation where that would ever guarantee accurate results.
comment:16 Changed 8 years ago by
Regarding the example sql queries in comment 9 above, I've discovered that the first query is actually faster. My database has grown fairly large in the past few months, so I've been able to test with more clear results (this is using MySQL, by the way).
Using the JOIN method (where the aggregates are done in a derived join), the query was extremely slow. It took over 30 minutes to execute with 57,000 rows in the users table. When I changed the query to the SELECT method (where the aggregates are done in a subquery in the select clause), the query took only 2 seconds with the exact same results. I spent a whole day on this, and couldn't find a solid explanation for why the derived join method was taking so long. I'd say at this point if anyone moves forward with fixing this bug we should put the aggregate in the subquery of the select clause.
I've looked around for an official stance on how to write a query with multiple aggregations across different joined tables, and haven't come up with a solid answer. If anyone does find "the right way" to do it, I'd be interested to know -- I think that definitely needs to be settled before this bug can be fixed.
comment:17 Changed 8 years ago by
+1 victim
comment:18 Changed 8 years ago by
I can confirm this one... It happens with aggregate too wrong values in 2 - 5% range.
comment:19 Changed 7 years ago by
comment:20 Changed 7 years ago by
A big bug needs a big shoe...
comment:21 Changed 7 years ago by
chalk up another victim... ::sigh::
comment:22 Changed 7 years ago by
comment:23 Changed 7 years ago by
I also just ran into this bug and it almost went unnoticed. An exception would indeed be very nice indicating that this does not work yet.
comment:24 Changed 7 years ago by
comment:25 Changed 7 years ago by
comment:26 Changed 7 years ago by
Another victim here. Just spent several hours with finding this bug, which had existed several days. This definitely should be either fixed or prevented: values that the incorrect query provides are often somewhat in the sensible range, depending on your row structures, which can make this bug hard to spot.
I also tested both queries that bendavis78 outlined above and they give correct results. Didn't do extensive performance testing yet, with 20K rows in the largest table, both queries were between 0.05 and 0.10 secs.
comment:27 Changed 7 years ago by
Another victim.
This bug means that you can't use annotations at all unless you know exactly what is going to happen to your QuerySet later. Functions cannot expect that any QuerySet passed as a parameter is capable of being annotated as it may already have been done.
I am reasonably new to Django, but it seems odd to me that this has remained a known bug for 2 years without anyone working on it, or the documentation being changed to reflect the lack of functionality. I understand russell's logic that bugs should not be documented, but if this cannot be fixed, then surely it should be removed from intended behaviour.
comment:28 Changed 6 years ago by
comment:29 Changed 6 years ago by
Another victim. I have a model Foo. I also have models Bar and Baz, which each have a foreign key pointing to Foo with related_names 'bars' and 'bazs', respectively:
for foo in Foo.objects.all().annotate(bar_count=Count('bars'), baz_count=Count('bazs')):
    print foo.bar_count  # Was correct
    print foo.baz_count  # Was completely wrong, but not by any apparent pattern (ie, the numbers seemed to be almost random)
I'm unable to display the baz_count in my template now, unless I want to introduce an N+1 problem (ie, foo.bazs.count() during each iteration)
comment:30 Changed 6 years ago by
Whoa, thought I was doing something wrong in my code until I searched and found this. Isn't this kind of a big deal? As in, with this bug, isn't Django's aggregation basically worthless for anyone who's not doing it on a single table, or did I miss something?
comment:31 follow-up: 32 Changed 6 years ago by
Is this something that will ever be fixed?
comment:32 Changed 6 years ago by
Is this something that will ever be fixed?
It will only ever get fixed if someone cares enough to write a patch (complete with tests). I think at this point a patch which threw an exception for the failure case would be accepted. Obviously ideally this would be replaced with a proper fix at some point.
comment:33 follow-up: 34 Changed 6 years ago by
Really, it seems there are plenty of ticket with patches (complete with tests), and with offers to do the work, but the core team doesn't care.
comment:34 Changed 6 years ago by
Really, it seems there are plenty of ticket with patches (complete with tests), and with offers to do the work, but the core team doesn't care.
Sorry you're feeling frustrated.
There are currently 15 tickets in Trac marked "Ready for checkin." The _only_ task in Django's development process that's limited to core developers is committing those 15 tickets. Fifteen tickets doesn't seem like such a terrible backlog for an entirely-volunteer crew, frankly.
Any other step in the process (writing a patch, adding tests to an existing patch, reviewing a patch and marking it "Ready for checkin" if there are tests, they pass, and it fixes the problem for you) can be done by anyone in the community. See
There is a larger backlog of "design decision needed" tickets (298), but anyone can contribute usefully to those as well by researching pros and cons and presenting a balanced summary to the developers mailing list to start a discussion.
comment:36 Changed 6 years ago by
comment:37 Changed 5 years ago by
comment:38 Changed 5 years ago by
Hmm, another victim here. Spent 2 hours trying to understand why my numbers were so odd to finally find this bug. This is the first time in several django projects I am really feeling left alone :-(
I am no database expert (which is one reason i LOVE django and it's awesome Queryset) so I don't see any way that I could provide implementation help or anything, but seeing that this bug is now open for several years really makes me shake my head. Even if there is no obvious fix available, PLEASE at least update the documentatin to point out this issue!
comment:39 Changed 5 years ago by
comment:40 Changed 5 years ago by
This is a somewhat hard problem. We could do automatic subquery pushdown for aggregates - it isn't exactly easy but most of the pieces are already there. I am not sure how well such queries would perform. Although that isn't perhaps too important: if the results aren't correct, speed doesn't matter...
I think a good start to this would be documentation + detecting the problematic situations and throwing an error in those cases. Basically if we have two multijoins in the query (from filtering, aggregation or something else) and aggregation is used then the query is likely wrong. However this condition is a little too restrictive - something like:
qs.filter(translations__lang='fi', translations__name='foo')
introduces a multijoin to translations, yet the query is safe as translations.lang is unique. Detecting if we have a unique multijoin is going to be nasty. So, we need a way to bypass this error checking.
Proposal: aggregates get a new kwarg "inline" - default False. If inline is False, then multiple multijoins in single query will raise an error. If it is True, the aggregate is forced to be joined to the query (that is, to do what aggregates do today). Later on we will likely add another kwarg "subquery" - defaults again to False. If set to True the aggregate is forced to subquery (something like in comment:9). We could also add kwarg "subselect". Later on maybe also "lateral_subquery" and so on.
The proposal allows us to throw an error, yet give users the ability to force the ORM to generate the queries they are using now. It also allows for extending the same API if we get subquery aggregate support some day.
I think we need to add the information about multijoins to JoinInfo, and add the info about used joins to aggregates when they are added to the query. Then at compiler stage, check for all inline=False aggregates if there are multijoins in the query not used by the aggregate itself. If so, throw an error. Changes should be needed at query.setup_joins(), compiler.get_from_clause() and then a way to annotate the used joins into the aggregate (this should not be too hard).
comment:41 Changed 4 years ago by
At this point the documentation on annotations can be considered to be intentionally misleading.
It is hard not to think this given the severity of the issue and the comments above.
Yes a code fix is hard but at least an obviously tentative documentation "fix" can be pushed fairly easily.
comment:42 Changed 4 years ago by
comment:46 Changed 3 years ago by
Agreed with anonymous 01/28/2013 that there needs to be at least some documentation on this:
comment:47 Changed 3 years ago by
comment:48 Changed 3 years ago by
comment:49 Changed 3 years ago by
comment:50 follow-up: 51 Changed 2 years ago by
comment:51 Changed 2 years ago by
Replying to camillobruni:
Sorry for the noise, I was wrong (in hindsight: "obviously") describes the problem I thought of dealing with in more detail.
comment:52 Changed 2 years ago by
comment:53 Changed 20 months ago by
If I understood Anssi correctly, the existing documentation warning isn't correct, so I submitted a revision.
comment:54 Changed 19 months ago by
comment:55 Changed 19 months ago by
comment:56 Changed 17 months ago by
comment:57 Changed 15 months ago by
Replying to bendavis78:
In comment:16 bendavis78 recommends this query to solve the issue:
SELECT u.email, (SELECT SUM(points) FROM point_earning WHERE user_id=u.id) AS points_earned, (SELECT SUM(points) FROM point_expense WHERE user_id=u.id) AS points_spent FROM "user" u
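The fan-out that makes this subquery form necessary is easy to reproduce outside Django. Here is a sketch using Python's stdlib sqlite3 module, with made-up tables mirroring the ones above:

```python
import sqlite3

# Reproduce the join fan-out with tables mirroring the hypothetical
# user / point_earning / point_expense schema above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE point_earning (user_id INTEGER, points INTEGER);
    CREATE TABLE point_expense (user_id INTEGER, points INTEGER);
    INSERT INTO user VALUES (1, 'a@example.com');
    INSERT INTO point_earning VALUES (1, 10), (1, 20);   -- true sum: 30
    INSERT INTO point_expense VALUES (1, 5), (1, 7);     -- true sum: 12
""")

# Naive double join: every earning row pairs with every expense row,
# so each SUM sees the other table's rows as a multiplier.
naive = con.execute("""
    SELECT SUM(pe.points), SUM(px.points)
    FROM user u
    JOIN point_earning pe ON pe.user_id = u.id
    JOIN point_expense px ON px.user_id = u.id
    GROUP BY u.id
""").fetchone()

# Correlated subqueries (the form recommended above) aggregate each table alone.
correct = con.execute("""
    SELECT (SELECT SUM(points) FROM point_earning WHERE user_id = u.id),
           (SELECT SUM(points) FROM point_expense WHERE user_id = u.id)
    FROM user u
""").fetchone()

print(naive)    # (60, 24)
print(correct)  # (30, 12)
```

The naive form returns (60, 24) even though the true sums are 30 and 12: each SUM is inflated by the row count of the other joined table.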
For others who might arrive here looking for a reasonable way to work around this issue, the following worked well for me. I added a custom Queryset method to the Manager class, and used a RawSQL() expression to create the annotations. This at least encapsulates the SQL code and allows the annotation to be integrated with a normal django queryset. Here's a sample for the example given above:
def annotate_sum_for_user(self, user_related_modelClass, field_name, annotation_name):
    raw_query = """
        SELECT SUM({field})
        FROM {model} AS model
        WHERE model.user_id = user.id
    """.format(
        field=field_name,
        model=user_related_modelClass._meta.db_table,
    )
    annotation = {annotation_name: RawSQL(raw_query, [])}
    return self.annotate(**annotation)
Usage for above query on presumed User model:
users = models.User.objects\
    .annotate_sum_for_user(PointEarning, 'points', 'points_earned')\
    .annotate_sum_for_user(PointExpense, 'points', 'points_spent')
Hope someone finds this useful, and a word of HUGE thanks to those who worked to get a statement of this issue and a link to this thread into the django documentation -- probably saved me hours.
comment:58 Changed 12 months ago by
comment:59 Changed 11 months ago by
The solution proposed by powderflask is very interesting and worked well.
This ticket helped me a lot to understand the big problem.
Thank you all, guys.
comment:60 Changed 8 months ago by
Not sure this will work for all the different cases, but at least it worked well for me and the simplified test case below.
Model definitions for the test case:
class ModelA(models.Model):
    title = models.CharField(max_length=255, blank=False, null=False, unique=True)

class ModelB(models.Model):
    parent = models.ForeignKey(ModelA, on_delete=models.CASCADE)
    value = models.IntegerField()

class ModelC(models.Model):
    parent = models.ForeignKey(ModelA, on_delete=models.CASCADE)
and sample data:
instance_a_1 = ModelA(title="instance1")
instance_a_1.save()
instance_b = ModelB(parent=instance_a_1, value=1)
instance_b.save()
instance_b = ModelB(parent=instance_a_1, value=3)
instance_b.save()
instance_b = ModelB(parent=instance_a_1, value=5)
instance_b.save()
instance_c = ModelC(parent=instance_a_1)
instance_c.save()
instance_c = ModelC(parent=instance_a_1)
instance_c.save()
instance_c = ModelC(parent=instance_a_1)
instance_c.save()
instance_c = ModelC(parent=instance_a_1)
instance_c.save()
instance_c = ModelC(parent=instance_a_1)
instance_c.save()
instance_a_2 = ModelA(title="instance2")
instance_a_2.save()
instance_b = ModelB(parent=instance_a_2, value=7)
instance_b.save()
instance_b = ModelB(parent=instance_a_2, value=11)
instance_b.save()
instance_c = ModelC(parent=instance_a_2)
instance_c.save()
instance_c = ModelC(parent=instance_a_2)
instance_c.save()
instance_c = ModelC(parent=instance_a_2)
instance_c.save()
Trying to get two independent annotations from two different tables (sum of values from ModelB and number of ModelC instances per ModelA instance):
for a in ModelA.objects.all() \
        .annotate(sumB=Sum('modelb__value')) \
        .annotate(countC=Count('modelc', distinct=True)):
    print "%s: sumB: %s countC: %s" % (a.title, a.sumB, a.countC)
we get these results
instance1: sumB: 45 countC: 5
instance2: sumB: 54 countC: 3
instead of these:
instance1 sumB: 9 countC: 5
instance2 sumB: 18 countC: 3
No surprise so far.
As you can see, the 'sumB' values are repeated (multiplied) by a factor; let's account for that factor inside a single query:
for a in ModelA.objects.all() \
        .annotate(countC=Count('modelc', distinct=True)) \
        .annotate(countB=Count('modelb')) \
        .annotate(countB_distinct=Count('modelb', distinct=True)) \
        .annotate(sumB_multiplied=Sum('modelb__value')) \
        .annotate(sumB=(F('sumB_multiplied') * F('countB_distinct')) / F('countB')):
    print "%s sumB: %s countC: %s" % (a.title, a.sumB, a.countC)
and get the correct values:
instance1 sumB: 9 countC: 5
instance2 sumB: 18 countC: 3
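The correction factor in that last query is plain arithmetic, and can be sanity-checked without Django at all:

```python
# Sanity-check the correction factor for instance1:
# 3 ModelB rows joined against 5 ModelC rows gives 15 joined rows.
b_values = [1, 3, 5]   # ModelB.value rows for instance1
c_count = 5            # ModelC rows for instance1

sum_multiplied = sum(b_values) * c_count   # Sum('modelb__value') -> 45
count_b = len(b_values) * c_count          # Count('modelb') -> 15
count_b_distinct = len(b_values)           # Count('modelb', distinct=True) -> 3

# sumB = sumB_multiplied * countB_distinct / countB
sum_b = sum_multiplied * count_b_distinct // count_b
print(sum_b)   # 9, the true sum of [1, 3, 5]
```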
comment:61 Changed 2 weeks ago by
Just spent 5 hours debugging this bug.
+1
It would help a great deal if you gave us the actual models and sample data. You have highlighted an inconsistency, but you haven't given us the ability to reproduce the failure.
Problem:
I have a bunch of files that were downloaded from an org. Halfway through their data directory, the org changed the naming convention (reasons unknown). I am looking to create a script that will take the files in a directory and rename each file the same way, but simply "go back one day".
Here is a sample of how one file is named:
org2015365_res_version.asc
For example, 2015365 should become 2015364, and 2015001 should become 2014365; presumably the datetime module can handle this.
import glob

# open all files
all_data = glob.glob('/somedir/org*.asc')
# empty lists to be appended to
day = []
year = []
# loop through all files
for f in all_data:
    # get first part of string - renders /somedir/org2015365
    f_split = f.split('_')[0]
    # get only year and day (the last 7 characters) - renders 2015365
    year_day = f_split[-7:]
    # get only day - renders 365
    days = year_day[4:]
    # get only year - renders 2015
    years = year_day[:4]
    day.append(days)
    year.append(years)
# convert to int for easier processing
day = [int(i) for i in day]
year = [int(i) for i in year]
# decrement each (year, day) pair, rolling the year back on day 1
for i in range(len(day)):
    if day[i] == 1 and year[i] == 2016:
        day[i] = 365
        year[i] = 2015
    elif day[i] == 1 and year[i] == 2015:
        day[i] = 365
        year[i] = 2014
    else:
        day[i] = day[i] - 1
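A hedged sketch of the "go back one day" logic using the stdlib datetime module and its %j (day-of-year) format. Unlike the hard-coded 365 above, it also handles leap years. The three-character prefix and the suffix split are assumptions based on the sample filename:

```python
import datetime

def previous_day(year_day):
    """Take a 'YYYYDDD' string and return the previous day in the same format."""
    d = datetime.datetime.strptime(year_day, "%Y%j") - datetime.timedelta(days=1)
    return d.strftime("%Y%j")

def rename_back_one_day(filename):
    # filename like 'org2015365_res_version.asc' (assumed layout)
    prefix, rest = filename[:3], filename[3:]   # 'org', '2015365_res_version.asc'
    year_day, suffix = rest[:7], rest[7:]       # '2015365', '_res_version.asc'
    return prefix + previous_day(year_day) + suffix

print(previous_day("2015365"))  # 2015364
print(previous_day("2015001"))  # 2014365
print(previous_day("2016001"))  # 2015365 (2015 is not a leap year)
print(rename_back_one_day("org2015365_res_version.asc"))  # org2015364_res_version.asc
```

Since datetime does the calendar math, there is no need to special-case each year boundary by hand.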
import glob
import'c:\temp\xx', r'*.doc', r'new(%s)') | https://codedump.io/share/8kqrzOOozcdO/1/python---rename-files-incrementally-based-on-julian-day | CC-MAIN-2017-22 | refinedweb | 201 | 72.66 |
This year I've decided to focus on IoT, mainly the Raspberry PI (rPi) and Beaglebone Single Board Computers (SBC's). I'll call it my "personal year of IoT." My interest is initially taking me down the path of testing whether one of these devices can be used as a simple web server (yes, of course it can) and whether it has enough horsepower to support a SQL database and a "real" web application. As a test case, I'm planning on using a website that I developed for a client a few years back, but for the purposes of this article, I just want to vet a variety of technologies. At the end of the day (not this article-day), I should be able to determine the viability of using a very low cost SBC as a decent web server for low-bandwidth needs. Along the way I may play with some of the hardware features as well.
For my primary mission, I'm planning on using PostgreSQL (I have a personal historic distaste for MySQL) and .NET Core for the web server. Three initial challenges present themselves:
So there will be some challenges not related to SBC's, and some definitely related!
The first step is to assemble the KanaKit rPi and get it fired up. After that, I want to be able to boot the OS from a USB drive rather than use the micro SD card.
Opening the big box, we find inside the rPi itself, power supply, HDMI cable, SD card, case, and heat sinks:
Unpacking those boxes, we get to the actual hardware!
Assembling the rPi in the case took a good 10 or 15 minutes of struggling with the case (I refused to watch the video!) Turns out you have to sort of slide the rPi into position so that the SD card slot is properly positioned in the access hole for the case. Finally it was assembled (including heat sinks):
The SD card comes installed with NOOBS, "New Out Of the Box Software" which lets you choose the OS you wish to install:
When I tried installing the recommended Raspbian OS (the first checkbox) the installation process hung (I did wait several minutes.) With trepidation I pulled the power and restarted the rPi, which happily booted back into NOOBS. Selecting the second option, Raspbian Full, installed without any problems. Great!
One of the nice things about this version of the rPi is the built in WiFi, which was very easy to configure. I've never been able to get WiFi working on the BeagleBone Black, and WiFi was a requirement as I want to be able to work on this project without being confined to my office where I have cable connectivity (my office is cluttered enough as it is with BeagleBone projects for another client.)
The KanaKit (at least as far as I could tell) did not actually tell me what version rPi I had, so after some digging, I found this site that lists the different versions. Opening a command line window and typing:
cat /proc/cpuinfo
I determined that the "version" reported is "a020d3" and looking that up on the site (link above) I verified that I have an rPi 3B+:
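If you ever want to script that lookup, the Revision line is easy to pull out of /proc/cpuinfo. The sample text and the one-entry model table below are purely illustrative:

```python
# Illustrative excerpt of /proc/cpuinfo output; real output has more fields.
SAMPLE = """\
processor : 0
model name : ARMv7 Processor rev 4 (v7l)
Revision : a020d3
Serial : 00000000xxxxxxxx
"""

def revision(cpuinfo_text):
    """Return the value of the Revision line, or None if absent."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("revision"):
            return line.split(":", 1)[1].strip()

# A one-entry subset of the revision table; the full table is on the site above.
MODELS = {"a020d3": "Raspberry Pi 3 Model B+"}

print(MODELS.get(revision(SAMPLE), "unknown"))  # Raspberry Pi 3 Model B+
```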
After much googling, I discovered that the 3A+ and 3B+ rPi's can boot from USB without any configuration changes. While the configuration changes looked simple, I decided to verify, based on those instructions, that the rPi was USB boot capable. Turns out that if you type in:
vcgencmd otp_dump | grep 17:
and you get back this code: 0x3020000a, it means that the OTP (One Time Programmable) bit has been programmed to boot from USB. This bit says to try and boot from the SD card and if that doesn't work, scan the USB devices for bootcode. Apparently, given that this is "one time programmable", you can still change to boot from the SD card with a little finagling.
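A tiny sketch of that check in Python. The official guidance is simply to compare against 0x3020000a; treating bit 29 as the USB-boot flag is my reading and should be taken as an assumption:

```python
# Assumption: the USB-boot flag in OTP register 17 is bit 29 (mask 0x20000000).
USB_BOOT_BIT = 0x20000000

def usb_boot_enabled(otp_register_17):
    """True if the USB-boot bit appears to be set in the OTP value."""
    return bool(otp_register_17 & USB_BOOT_BIT)

print(usb_boot_enabled(0x3020000a))  # True  (programmed, as on this rPi)
print(usb_boot_enabled(0x1020000a))  # False (typical unprogrammed value)
```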
The next step was to image the OS onto a USB drive. I already had a 2TB spinny (not SSD!) drive lying around, and given that the rPi doesn't support USB 3, the performance hit of using a mechanical drive vs. SSD seemed irrelevant, at least at this stage of the game -- and besides, I didn't have any unused SSD drives and I wanted to move along with the project.
Etcher UI:
After imaging the drive and removing the SD card, the rPI booted from the USB drive!
Mandarin oranges, scented candles (behind the oranges) and specialty chocolate (the bag behind the oranges) are only required if other incantations are required to get things working.
As much as possible, always gracefully shutdown your rPi with:
sudo shutdown -h now
On the command line or terminal window, type:
sudo reboot
Ultimately I don't want to have to hook up a monitor, keyboard, and mouse to the rPi (and therefore it also doesn't need a desktop UI). Instead, work with the rPi will be done using PuTTY (telnet client) and WinSCP for file transfers.
The Debian OS has an SSH server built into it, however we need to enable SSH. One way to enable SSH is to enter:
sudo raspi-config
which brings up a simple UI:
Read more about the raspi-config app here.
Now we can telnet directly into the rPi once we know the IP address, which is determined by using:
ifconfig
in the console window. Since I'm using WiFi, I find the IP address in the wlan0 section as inet 192.168.0.15
Other options to determine the IP address include using hostname -I which gives you the network IP, the local IP, and the IPv6 addresses.
Fire up PuTTY and enter the IP address of your rPi:
I find it useful to save the IP address.
The first time you connect (or whenever the rPi's IP address changes) you'll see a dialog like this:
Click on "Yes" and proceed to login:
The username is "pi" (unless you changed it) and the password is whatever you used when setting up the OS the first time it booted.
If you want to automatically log in, you can either use (change the IP address for your rPi of course!) : putty pi@192.168.0.15 -pw [your password] on the command line or create a desktop shortcut. As an exercise for the reader (mwahaha), a better approach would be to use key pairs.
Setup for WinSCP (SCP stands for Secure Copy Protocol) is essentially the same except that here you can enter in the username and password with the option to save both:
When you save the session as a site, you have the option to save your password as well:
Once you've logged in, you'll see on the left the Windows directory structure and on the right, the rPi's OS directory structure. This makes it really easy to copy and paste files between the two and even edit files with a simple text editor.
WinSCP, on the first time connecting, will also display a security alert:
Select "Yes".
The first big step is to install PostgreSQL. Happily there is a great article that tells me how to do this, as otherwise I'd never be able to figure anything out. In the PuTTY terminal (or from a terminal window on your rPi itself), type in:
sudo apt install postgresql libpq-dev postgresql-client postgresql-client-common -y
Hint: If you're using PuTTY, right-click the mouse to copy text from the clipboard into the terminal window. If you want to copy text from the terminal window to the clipboard, select the text with your mouse and left-click.
At this point, just follow the rest of the steps up to, but not including, "Now connect to Postgres using the shell and create a test database." The steps I performed are:
sudo su postgres
createuser pi -P --interactive
and respond the prompts as indicated in the article or the screenshot of what I did (obviously, enter your own password):
Hint: once logged in as a superuser ("su"), you can type exit to, well, exit as that user and return to the "pi" user. Don't do this yet though if you're following the steps in this section.
We all want to work remotely, right? I mean, in this day and age, with all this tech, driving in the office seems absurd. Sorry, off topic. Continuing further down the tutorial, let's enable Postgres so we can connect to it remotely. As per the article I linked to above:
1. Edit the PostgreSQL config file /etc/postgresql/9.6/main/postgresql.conf to uncomment the listen_addresses line and change its value from localhost to *.
Using nano, a simple terminal editor, edit the line as indicated in step 1. We're using nano as the superuser so we have the permissions to save the file.
postgres@raspberrypi:/home/pi$ nano /etc/postgresql/9.6/main/postgresql.conf
Hint: The current path is on the left of the $, what you type in is on the right of the $.
Don't forget to remove the # at the start of the line!
2. Edit the /etc/postgresql/9.6/main/pg_hba config file and change 127.0.0.1/32 to 0.0.0.0/0 for IPv4 and ::1/128 to ::/0 for IPv6.
postgres@raspberrypi:/home/pi$ nano /etc/postgresql/9.6/main/pg_hba.conf
Make sure you edit the lines that are NOT commented out!
Regarding the third step, restarting the postgres service, I was unable to get this step to work -- it kept asking for the postgres user's password, and there isn't one. After rebooting the rPi, I tried the command again and it said the service hadn't started! So I don't know what's going on.
Download pgadmin 3 (very important that you download the no-longer-supported pgadmin 3, as it is a desktop app, not a localhost web app, yuck) from here (or pgadmin 4 from here) and install on your Windows box. Certainly you can use pgadmin 4, but I personally prefer an application that doesn't require being run in my browser. Regardless, the UIs are similar. Using pgadmin 3, create a connection to Postgres on your rPi (change the IP address and password accordingly):
If you modified the files above correctly (I didn't the first time), you should be able to connect and explore the database:
Unfortunately, I'm finding pgadmin 3 to be buggy (it's reporting query errors when I try to add tables, etc., but it doesn't seem to prevent the operation from completing) but pgadmin 4 (the browser version) works a lot better. Here's a screenshot from pgadmin 4:
Note that when I first ran pgadmin 4, it couldn't start the "pgadmin 4 server." When I tried it a second time, it worked. Go figure. The point being, we can connect to the Postgres server running on the rPi!
Let's use free -h to see how the memory allocation looks:
The 32M of shared memory is for the display driver. Given that there's 521M free, we're doing pretty well after installing Postgres.
And df -h to see how our disk free space looks:
Hint: the "-h" tells both commands to shorten the amount displayed to the byte, K (kilobyte), M (megabyte), G (gigabyte), or T (terabyte) with 1 digit of precision (past the decimal point) which makes for a much more readable display.
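For the curious, that rounding behavior can be mimicked in a few lines of Python (a sketch of the idea, not what free or df actually do internally):

```python
def human(n_bytes):
    """Format a byte count with 1024-based units and one decimal digit."""
    n = float(n_bytes)
    for unit in ["B", "K", "M", "G"]:
        if abs(n) < 1024:
            # whole bytes need no decimal point
            return "%d%s" % (n, unit) if unit == "B" else "%.1f%s" % (n, unit)
        n /= 1024
    return "%.1f%s" % (n, "T")

print(human(512))                # 512B
print(human(32 * 1024 * 1024))   # 32.0M
print(human(2 * 1024 ** 4))      # 2.0T
```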
Again, "I know nothing!" and rely on others to tell how to do things. "Dave the Engineer" has a great MSDN blog post on setting up the .NET Core runtime on the rPi. But this is for .NET Core 2.0, and I'd like to install the latest stable version, which is 2.2 (I'm tempted to install 3.0, but I'll wait.) More googling finds Scott Hanselman's post on installing .NET Core 2.1. Skip all the Docker stuff and go down to the bottom of the post where you find the section "Second." I particularly don't want to use Docker as it's an added complexity and as Scott writes, I want to use .NET Core "on the metal." Interestingly, Scott's article includes installing the SDK, which from all I've read is unsupported. Dejan Stojanovic has an article for installing 2.2! If you think space junk in orbit is a problem, it's becoming a real nuisance to find posts on the most current technologies on the Internet!
Skipping down the middle of the article, we're supposed to run this:
wget
Good grief. How do you even figure out all these crazy numbers? Amazingly, that worked:
Following his post, we now run:
sudo mkdir -p /bin/dotnet && sudo tar zxf dotnet-sdk-2.2.101-linux-arm.tar.gz -C /bin/dotnet
export DOTNET_ROOT=/bin/dotnet
export PATH=$PATH:/bin/dotnet
and edit the .bashrc file:
sudo nano ~/.bashrc
and add this line at the end:
export PATH=$PATH:/bin/dotnet
Wow, it worked! (I'm always suprised when all this stuff works in Linux.)
Following the instructions here:
cd
dotnet new console
dotnet publish -r linux-arm
Then:
chmod 755 ./consoleApp
./consoleApp
If all went well, you should see Hello World emitted:
And there was much rejoicing!!!
Now let's see if we can connect to the Postgres database with an app running on the rPi. We should be able to test the code in Visual Studio, connecting to Postgres on the rPi -- hah, there's a secondary application, using the rPi as a database server!
Before we get started with the C# side of things, let's first create a database and a simple table for testing. In the pgadmin SQL window, execute this:
CREATE TABLE public."TestTable"
(
"ID" serial primary key NOT NULL,
"FirstName" text COLLATE pg_catalog."default",
"LastName" text COLLATE pg_catalog."default"
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public."TestTable" OWNER to pi;
Hint: in pgadmin 4, select Tools -> Query Tool; in pgadmin 3, click on the SQL icon.
Hint: The serial (or bigserial) keyword is Postgres' way of specifying an auto-increment field.
Using the "consoleApp" project created above, right-click on the Dependencies:
and add two dependencies:
Your project should now reference these in the NuGet sub-folder:
Add the following namespaces:
using System.ComponentModel.DataAnnotations.Schema;
using System.Diagnostics;
using Microsoft.EntityFrameworkCore;
Diagnostics will be used to time the database operations.
public class TestTable
{
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int ID { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
}
public class Context : DbContext
{
public DbSet<TestTable> TestTable { get; set; }
public Context(DbContextOptions options) : base(options) { }
}
Main is modified to execute TestPostgres which we'll see next.
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
TestPostgres();
Console.ReadLine();
}
This is an async void method, normally to be avoided! This method will automatically insert a couple rows if no data is found and time the operations. In the code below, replace [your password] with the password you used when creating the Postgres pi user.
static async void TestPostgres()
{
var contextBuilder = new DbContextOptionsBuilder();
// Database name is case-sensitive
contextBuilder.UseNpgsql("Host=192.168.0.15;Database=Test;Username=pi;Password=[your password]");
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
using (var context = new Context(contextBuilder.Options))
{
Console.WriteLine(stopwatch.ElapsedMilliseconds + "ms");
stopwatch.Restart();
var items = await context.TestTable.ToListAsync();
Console.WriteLine("First query: " + stopwatch.ElapsedMilliseconds + "ms");
stopwatch.Restart();
Console.WriteLine("Number of items: " + items.Count);
if (items.Count == 0)
{
TestTable t1 = new TestTable() { FirstName = "Marc", LastName = "Clifton" };
TestTable t2 = new TestTable() { FirstName = "Kelli", LastName = "Wagers" };
context.Add(t1);
context.Add(t2);
context.SaveChanges();
Console.WriteLine("Insert: " + stopwatch.ElapsedMilliseconds + "ms");
Console.WriteLine("t1 ID = " + t1.ID);
Console.WriteLine("t2 ID = " + t2.ID);
stopwatch.Restart();
}
else
{
items.ForEach(t => Console.WriteLine("ID: " + t.ID + " FirstName: " + t.FirstName + " LastName " + t.LastName));
}
// Query again to see how long a second query takes.
var items2 = await context.TestTable.ToListAsync();
Console.WriteLine("Second query: " + stopwatch.ElapsedMilliseconds + "ms");
stopwatch.Restart();
}
using (var context = new Context(contextBuilder.Options))
{
// Query again to see how long a second query takes.
var items2 = await context.TestTable.ToListAsync();
Console.WriteLine("Third query: " + stopwatch.ElapsedMilliseconds + "ms");
}
}
Run the program, and you should see something similar to this:
Notice the first query, which establishes a connection to the database, takes almost 1.5 seconds. I've seen this as high as 4 seconds.
Don't forget to run the publish command again:
Using WinSCP as before, copy everything over to the rPi.
Hint: Normally we don't need to copy everything, only the most recent changes, but since we added a couple NuGet packages, it's best to copy the whole kit and caboodle, as there will probably be dlls and .so files that are timestamped by their release date and won't show up if you sort the folder contents by descending date.
Hint: Interestingly, once consoleApp has been chmod'd to be an executable, we don't have to do this again.
After deleting the test data created in the run from Visual Studio, you should see something similar to this:
Hint: Press the Enter key to exit the program, as it's sitting on the Console.ReadLine() call.
Notice the rPi takes a whopping 8 seconds to create the connection to Postgres. Fortunately, once the connection has been established, the queries run in under 100ms, but insert took over a second.
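The numbers suggest the obvious mitigation: pay the connection cost once and reuse the connection. The shape of that pattern, sketched generically in Python with a stand-in for the handshake (in practice, drivers such as Npgsql do this for you via built-in connection pooling):

```python
import functools

handshakes = {"count": 0}

@functools.lru_cache(maxsize=None)
def get_connection(dsn):
    # Stand-in for the expensive part: TCP connect plus Postgres auth.
    handshakes["count"] += 1
    return {"dsn": dsn}

# First call pays the cost; later calls with the same DSN reuse the result.
a = get_connection("Host=192.168.0.15;Database=Test")
b = get_connection("Host=192.168.0.15;Database=Test")
print(a is b, handshakes["count"])  # True 1
```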
Running the test program again, we get our data back instead:
I played around a little with forcing turbo mode (which voids your rPi warranty!) as described in this writeup but found no performance improvement. I didn't try the overclocking options as those may make the rPi unbootable and I didn't want to fuss with recovering the /boot/config.txt file, which is where you set the configuration options.
There already is an excellent server on GitHub here, and it already supports HTTPS, so why reinvent the wheel? Because my curiosity takes me to strange corners of programming, and I want to see (and learn) how one can use .NET Core to write a server that is not based on the ASP.NET Core stack. Having written a simple service using the ASP.NET Core stack, I found it easy to use; the dependency injection (DI) is cool, and the automatic extraction of parameters from the HTTP headers, JSON POST body, etc., is all really snazzy. Definitely recommended! But still, I want to try out a bare metal implementation for no other reason than it's what I like to do. So if you're curious like me, read on.
An HTTP server is trivial. First, let's modify Main to start up an HttpListener:
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
// TestPostgres();
StartServer();
Console.WriteLine("Press ENTER to exit.");
Console.ReadLine();
}
We'll start the server and respond with the GET path as well as write the verb and path to the console (remember your rPi's IP may be different):
static void StartServer()
{
HttpListener listener = new HttpListener();
listener.Prefixes.Add("");
listener.Start();
Task.Run(() => WaitForConnection(listener));
}
static void WaitForConnection(object objListener)
{
HttpListener listener = (HttpListener)objListener;
while (true)
{
HttpListenerContext context = listener.GetContext();
string verb = context.Request.HttpMethod;
string path = context.Request.RawUrl.LeftOf("?").RightOf("/");
Console.WriteLine($"Verb: {verb} Path: {path}");
byte[] buffer = Encoding.UTF8.GetBytes(path);
context.Response.StatusCode = (int)HttpStatusCode.OK;
context.Response.ContentLength64 = buffer.Length;
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
context.Response.Close();
}
}
Add at the top of the file:
using System.Net;
using System.Text;
And for completeness, the two extension methods that I use everywhere:
public static class ExtensionMethods
{
public static string LeftOf(this String src, string s)
{
string ret = src;
int idx = src.IndexOf(s);
if (idx != -1)
{
ret = src.Substring(0, idx);
}
return ret;
}
public static string RightOf(this String src, string s)
{
string ret = String.Empty;
int idx = src.IndexOf(s);
if (idx != -1)
{
ret = src.Substring(idx + s.Length);
}
return ret;
}
}
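For comparison, the same echo behavior can be sketched with Python's standard library, which is handy for experimenting with the nginx routing without redeploying the C# app:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    """Echo the request path, mirroring the C# LeftOf("?") / RightOf("/") logic."""
    def do_GET(self):
        path = self.path.split('?', 1)[0].lstrip('/')
        body = path.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

server = HTTPServer(('127.0.0.1', 0), EchoHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/hello/world?x=1' % server.server_port
body = urlopen(url).read().decode('utf-8')
print(body)  # hello/world
server.shutdown()
```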
Publish, WinSCP, and test:
Great! From there, the unsecure world of HTTP is our playground!
It's interesting to use htop, an interactive process viewer, that comes with Debian to see how many processes are started by .NET Core running the web server application (there are 10 ./consoleApp processes):
Implementing an HTTPS server however requires a reverse proxy server such as nginx or Apache which will take the incoming IP and route it to a specified localhost port and return responses back to the client making the request. Before diving into HTTPS and SSL certificates, let's get a basic reverse proxy working with our C# code.
To install nginx on the rPi, enter this in a terminal window (this I learned from the rPi site):
sudo apt-get install nginx
Start the server with:
sudo /etc/init.d/nginx start
Now navigate to your rPi's IP address and you should see the nginx welcome screen:
Since I know nothing about nginx, I'm going to explore some of the basics. Obviously nginx created a default website and webpage somewhere. The configuration file lives in /etc/nginx/sites-available, where we see a file called default:
If we inspect this file using WinSCP (or if you're in the terminal, use nano /etc/nginx/sites-available/default or cat /etc/nginx/sites-available/default), we see some important configuration lines (the following is truncated from the actual file):
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
Great, so it's serving pages from /var/www/html, where we see the index file:
/var/www/html
So now we know where the default website is serving up static content.
Next, I want to route anything that isn't static content to my server. Reading the Beginner's Guide on nginx, I'll change the try_files command to proxy_pass and specify the URL that the consoleApp program is listening on. This has to be done as a superuser, so we use this command line: sudo nano /etc/nginx/sites-available/default to edit the file, commenting out the try_files command and adding proxy_pass;
and after saving the file, restart nginx with:
sudo /etc/init.d/nginx reload
Hint: If you get an error reloading nginx, go back into the editor and undo your edits, save the file, and see if reload now works. Then try changing the file again. I don't know what I did, but the first time I made the file change above, I got an error (didn't bother to follow the instructions to see what the error was), so I reverted my changes and re-applied them, and suddenly no error!
Hint: using does not work - you get a "Not Found" error from nginx, you must use <a href=""></a>.
Lastly, change the IP address that the consoleApp server is listening to, to 127.0.0.1:
listener.Prefixes.Add("");
Publish, WinSCP the changed files over, start the consoleApp, and when you try a URL as depicted in the screenshot, you should get:
and in the terminal window:
Hint: I got a few errors before getting this right, and a very helpful Digital Ocean page led me to this command:
sudo tail -30 /var/log/nginx/error.log
which displays the last 30 lines of errors reported by nginx. Very useful in figuring out what's going wrong!
Hint: auto-start nginx (from here):
Auto-start NGINX at boot:
cd /etc/init.d
sudo update-rc.d nginx defaults
Remove auto-start:
sudo update-rc.d -f nginx remove
The great thing about a reverse proxy is that it handles HTTPS for you: your local server can stay plain HTTP. Also, nginx supports Server Name Indication (SNI), which means that I can run several domains, each with a different SSL certificate, from the same physical device and IP address. For now, we'll just create a self-signed certificate for testing. First, let's prove that HTTPS isn't working:
Yup, not working. Next, let's create a self-signed certificate following (yes, once again I defer to people that know a lot more than me) Karlo van Wyk's instructions here. Basically (forgive me for essentially reproducing his instructions): create a folder and, in that folder, create a file called localhost.conf. You can use the nano editor; for some reason the instructions say to launch nano as the superuser via sudo nano, I'm not sure why. Next, copy the configuration file he provides (I feel really bad posting it here again, but for completeness of this article it seems necessary):
[req]
default_bits = 2048
default_keyfile = localhost.key
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ca
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = New York
localityName = Locality Name (eg, city)
localityName_default = Rochester
organizationName = Organization Name (eg, company)
organizationName_default = localhost
organizationalUnitName = organizationalunit
organizationalUnitName_default = Development
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = localhost
commonName_max = 64
[req_ext]
subjectAltName = @alt_names
[v3_ca]
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = 127.0.0.1
Feel free to edit your locality information.
Next, create the certificate key pairs (public and private):
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout localhost.key -out localhost.crt -config localhost.conf
Then copy them to the /etc/ssl folders as the superuser (so you have write permissions):
sudo cp localhost.crt /etc/ssl/certs/localhost.crt
sudo cp localhost.key /etc/ssl/private/localhost.key
Lastly, using sudo nano, edit the default file like we did above, uncommenting the SSL configuration lines and adding a couple of extra lines, so it looks like this:
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/ssl/certs/localhost.crt;
ssl_certificate_key /etc/ssl/private/localhost.key;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
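Pieced together, the TLS-relevant part of the server block then looks roughly like this (a sketch; the rest of the block stays as in the stock Debian file):

```nginx
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    ssl_certificate     /etc/ssl/certs/localhost.crt;
    ssl_certificate_key /etc/ssl/private/localhost.key;
    ssl_protocols       TLSv1.2 TLSv1.1 TLSv1;

    # ... root, index, and location blocks as before ...
}
```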
Finally, reload the nginx configuration as we did earlier:
Then browse to the rPi's IP using HTTPS, and you should get this:
Clearly the connection is now being made and Chrome is alerting you that the certificate is not valid, which is expected because we created a test certificate rather than one from a certificate authority such as Let's Encrypt. So, click on "Advanced" and "proceed...." to get to the site:
And there you go!
Ideally, the web server should start up automatically when the rPi boots. Implementing this is straightforward, though I did have to read through both this link on hosting ASP.NET Core and this link on setting up a simple console app as a service. The "trick" here is not to perform a Console.ReadLine, as this times out the startup process, and obviously not to exit the application! Following the guidance from the second link, add these namespaces:
using System.IO;
using System.Reflection;
using System.Threading; // also required below: Thread and CancellationTokenSource live here
and edit Program.cs:
static readonly CancellationTokenSource tokenSource = new CancellationTokenSource();
static void Main(string[] args)
{
StartService();
}
and the implementation (trimmed down from the example in the second link):
static void StartService()
{
AppDomain.CurrentDomain.ProcessExit += CurrentDomain_ProcessExit;
StartServer();
while (!tokenSource.Token.IsCancellationRequested)
{
Thread.Sleep(1000);
}
}
private static void CurrentDomain_ProcessExit(object sender, EventArgs e)
{
tokenSource.Cancel();
}
Hint: The while loop can probably be improved with a task that waits until the token is cancelled.
Next, create a .service file in the /lib/systemd/system folder that looks like this (use sudo nano to create and edit the file):
[Unit]
Description=Web Server
After=nginx.service
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/webserver
ExecStart=/home/pi/webserver/consoleApp
Restart=always
[Install]
WantedBy=multi-user.target
As you can see, I place consoleApp (and its files) into a folder called webserver and I also called the service file webserver.service.
Once the service file is created, create a symlink for easy reference:
sudo systemctl enable webserver
You can manually start the server right away with:
sudo systemctl start webserver
and check on its status with:
systemctl status webserver.service
Hint: Any console output is logged and displayed when you view the service status!
You should see something similar to this:
We can also see the service running using htop:
I found that the errors are fairly indicative of the problem.
1/11/2019: Added setting up the web server as a service.
This article accomplishes quite a few things.
And the big accomplishment here is that we did all this without using ASP.NET Core. Frankly, it's damn hard to find any articles that are not related to ASP.NET Core with regards to nginx, setting up HTTPS, etc., so hopefully the reader will appreciate the bare-metal approach that I've taken here. My next article will dive more into creating a real website (a port from an existing website), working with performance issues (that horrid 8 second connect delay to Postgres) and who knows what else.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Created on 2021-04-08 11:09 by larry, last changed 2021-04-11 03:00 by gvanrossum. This issue is now closed.
The implementation of the | operator for TypeVar objects is as follows:
def __or__(self, right):
return Union[self, right]
def __ror__(self, right):
return Union[self, right]
I think the implementation of __ror__ is ever-so-slightly inaccurate. Shouldn't it be this?
def __ror__(self, left):
return Union[left, self]
I assume this wouldn't affect runtime behavior, as unions are sets and are presumably unordered. The only observable difference should be in the repr() (and only then if both are non-None), as this reverses the elements. The repr for Union does preserve the order of the elements it contains, so it's visible to the user there.
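The operand-order contract behind this (for the reflected method, the left operand should come first) can be illustrated with a toy class (illustrative only, not CPython code):

```python
class Tagged:
    """Toy class whose | operator records the operand order."""
    def __init__(self, name):
        self.name = name

    def __or__(self, right):
        return f"{self.name}|{getattr(right, 'name', right)}"

    def __ror__(self, left):
        # Reflected case: `left | self` fell back to us, so `left` goes first.
        return f"{getattr(left, 'name', left)}|{self.name}"

print(1 | Tagged("T"))  # int.__or__ returns NotImplemented, Tagged.__ror__ runs -> 1|T
print(Tagged("T") | 2)  # Tagged.__or__ runs -> T|2
```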
Yes.
--Guido (mobile)
New changeset 9045919bfa820379a66ea67219f79ef6d9ecab49 by Jelle Zijlstra in branch 'master':
bpo-43772: Fix TypeVar.__ror__ (GH-25339) | https://bugs.python.org/issue43772 | CC-MAIN-2021-43 | refinedweb | 152 | 54.83 |
Mainly because of the fact that I had to configure it in XML. If you ever did a JSF project you know that this is something you do on top later on. Or never, with the last option being the one I have seen a lot. Rewrite is going to change that: programmatic, easy to use and highly customizable. Exactly what I was looking for.
Getting Started
Nothing is as easy as getting started with stuff coming from one of the RedHat guys. Fire up NetBeans, create a new Maven-based webapp, add JSF and PrimeFaces to the mix and run it on GlassFish.
First step for adding rewriting magic to your application is to add the rewrite dependencies to your project.
<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-servlet</artifactId>
    <version>1.1.0.Final</version>
</dependency>
That isn’t enough since I am going to use it together with JSF, you also need the jsf-integration.
<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-integration-faces</artifactId>
    <version>1.1.0.Final</version>
</dependency>
Next, implement your own ConfigurationProvider. This is the central piece where most of the magic happens. Let's call it TricksProvider for now, and we also extend the abstract HttpConfigurationProvider. A simple first version looks like this:
public class TricksProvider extends HttpConfigurationProvider {

    @Override
    public int priority() {
        return 10;
    }

    @Override
    public Configuration getConfiguration(final ServletContext context) {
        return ConfigurationBuilder.begin()
            .addRule(Join.path("/").to("/welcomePrimefaces.xhtml"));
    }
}
Now you have to register your ConfigurationProvider. You do this by adding a simple text file named org.ocpsoft.rewrite.config.ConfigurationProvider to your application's /META-INF/services/ folder. Add the fully qualified name of your ConfigurationProvider implementation to it, and you are done; it gets picked up when you fire up your application.
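For reference, the registration file is just the provider's fully qualified class name on a single line (the package name here is hypothetical):

```
# /META-INF/services/org.ocpsoft.rewrite.config.ConfigurationProvider
com.example.rewrite.TricksProvider
```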
The Rewriting Basics
While copying the above provider you implicitly added your first rewriting rule. By requesting the application root ("/") you get directly forwarded to the PrimeFaces welcome page generated by NetBeans. All rules are based on the same principle: every single rule consists of a condition and an operation. Something like "If X happens, do Y". Rewrite knows two different kinds of rules: some preconfigured ones (Join) starting with "addRule()", and a fluent interface starting with defineRule(). This is a bit confusing because the next major release will deprecate defineRule() and rename it to addRule(). So most of the examples you find (especially the test cases in the latest trunk) are not working with the 1.1.0.Final.
Rewrite knows about two different Directions: Inbound and Outbound. Inbound works much like every rewriting engine you know (e.g. mod_rewrite): a request arrives and is forwarded or redirected to the resources defined in your rules. The Outbound direction does a little less: it basically has a hook in the encodeURL() method of the HttpServletResponse and rewrites the links you have in your pages (if they get rendered with the help of encodeURL() at all). JSF does this out of the box. If you are thinking of using it with JSPs you have to make sure to call it yourself.
Forwarding .html to .xhtml with some magic
Let’s look at some stuff you could do with rewrite. First we add the following to the TricksProvider:
.defineRule()
.when(Direction.isInbound()
    .and(Path.matches("{name}.html").where("name").matches("[a-zA-Z/]+")))
.perform(Forward.to("{name}.xhtml"));
This is a rule which looks at inbound requests, checks for all Path matches {name}.html that conform to the regular expression pattern [a-zA-Z/]+, and forwards those to {name}.xhtml files.
If this rule is in place, all matching .html requests end up being forwarded to the corresponding .xhtml file. Now your users will no longer know that you are using fancy JSF stuff underneath and will believe you are working with plain HTML :) If a URL which doesn't match the regular expression is requested, it simply isn't forwarded, and if something123.html isn't present in your application you will end up receiving a 404 error.
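The effect of that path constraint can be sanity-checked with plain regex (illustrative sketch, outside of Rewrite):

```java
public class Main {
    public static void main(String[] args) {
        // Same pattern as in the rule: letters and slashes only, no digits.
        String pattern = "[a-zA-Z/]+";
        System.out.println("faces/welcome".matches(pattern)); // true  -> forwarded
        System.out.println("something123".matches(pattern));  // false -> left alone
    }
}
```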
Rewriting Outbound Links
The other way round you could also add the following rule:
.defineRule()
.when(Path.matches("test.xhtml")
    .and(Direction.isOutbound()))
.perform(Substitute.with("test.html"))
You can imagine what this is doing, right? If you have a facelet which contains something like this:
<h:outputLink value="test.xhtml">Normal Test</h:outputLink>
The link that is rendered to the user will be rewritten to test.html. This is the most basic action for outbound links you will ever need. Most of the magic happens with inbound links. Not a big surprise looking at the very limited reach of the encodeURL() hook.
The OutputBuffer
The most astonishing stuff in rewrite is called OutputBuffer. At least until the release we are working with at the moment. It is going to be renamed in 2.0 but for now let’s simply look at what you could do. The OutputBuffer is your hook to the response. Whatever you would like to do with the response before it actually arrives at your client’s browser could be done here. Thinking about transforming the markup? Converting css? Or even GZIP compression? Great, that is exactly what you could do. Let’s implement a simple ZipOutputBuffer
public class ZipOutputBuffer implements OutputBuffer {

    private final static Logger LOGGER = Logger.getLogger(ZipOutputBuffer.class.getName());

    @Override
    public InputStream execute(InputStream input) {
        String contents = Streams.toString(input);
        LOGGER.log(Level.FINER, "Content {0} Length {1}", new Object[]{contents, contents.getBytes().length});
        byte[] compressed = compress(contents);
        LOGGER.log(Level.FINER, "Length: {0}", compressed.length);
        return new ByteArrayInputStream(compressed);
    }

    public static byte[] compress(String string) {
        ByteArrayOutputStream os = new ByteArrayOutputStream(string.length());
        byte[] compressed = null;
        try {
            try (GZIPOutputStream gos = new GZIPOutputStream(os)) {
                gos.write(string.getBytes());
            }
            compressed = os.toByteArray();
            os.close();
        } catch (IOException iox) {
            LOGGER.log(Level.SEVERE, "Compression Failed: ", iox);
        }
        return compressed;
    }
}
As you can see, I am messing around with some streams and use the java.util.zip.GZIPOutputStream to shrink the stream received in this method. Next we have to add the relevant rule to the TricksProvider:
.defineRule()
.when(Path.matches("/gziptest").and(Direction.isInbound()))
.perform(Forward.to("test.xhtml")
    .and(Response.withOutputBufferedBy(new ZipOutputBuffer())
    .and(Response.addHeader("Content-Encoding", "gzip"))
    .and(Response.addHeader("Content-Type", "text/html"))))
An inbound rule (we are not willing to rewrite links in pages here, so it has to be inbound) which adds the ZipOutputBuffer to the Response. Also take care of both additional response headers, unless you want to see your browser complaining about the content. That is it. The request now delivers the test.xhtml with GZIP compression. That is 2.6 KB vs. 1.23 KB, less than half of the size! It's not very convenient to work with streams and byte[], and I am not sure if this will work with larger page sizes in terms of memory fragmentation, but it is an easy way out if you don't have a compression filter in place or only need to compress single parts of your application.
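If you want to convince yourself the produced bytes really are valid gzip, a standalone round-trip check looks like this (a sketch, separate from the provider code above):

```java
import java.io.*;
import java.util.zip.*;

public class Main {

    // Compress a byte array with gzip.
    public static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (GZIPOutputStream gos = new GZIPOutputStream(os)) {
            gos.write(data);
        }
        return os.toByteArray();
    }

    // Decompress a gzip byte array back to the original bytes.
    public static byte[] gunzip(byte[] data) throws IOException {
        try (GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream os = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gis.read(buf)) > 0) os.write(buf, 0, n);
            return os.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "<html>hello hello hello</html>".getBytes("UTF-8");
        byte[] packed = gzip(original);
        // Repetitive markup compresses well and survives the round trip intact.
        System.out.println(java.util.Arrays.equals(gunzip(packed), original)); // true
    }
}
```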
Enhance Security with Rewrite
But that is not all you could do: you could also enhance security with Rewrite. Lincoln has a great post up about securing your application with Rewrite. There are plenty of possible examples around how to use this. I came up with a single use-case where I didn't want to use the welcome-file feature and preferred to dispatch users individually. While doing this I would also inspect their paths and check if the stuff they are entering is malicious or not. You could either do it with the .matches() condition or with a custom constraint. Add the following to the TricksProvider:
Constraint<String> selectedCharacters = new Constraint<String>() {
    @Override
    public boolean isSatisfiedBy(Rewrite event, EvaluationContext context, String value) {
        return value.matches("[a-zA-Z/]+");
    }
};
And define the following rule:
.defineRule()
.when(Direction.isInbound()
    .and(Path.matches("{path}").where("path").matches("^(.+)/$")
    .and(Path.captureIn("checkChar").where("checkChar").constrainedBy(selectedCharacters))))
.perform(Redirect.permanent(context.getContextPath() + "{path}index.html"))
Another inbound modification: it checks whether the path looks like a folder, captures it in a variable, and checks that variable against the custom constraint. Great! Now you have a safe and easy forwarding mechanism in place. All matching folder requests are now rewritten to {path}index.html. If you look at the other rules from above you see that the .html is forwarded to .xhtml ... and you are done!
Bottom Line
I like working with rewrite a lot. It feels easier than configuring the xml files of prettyfaces and I truly enjoyed the support of Lincoln and Christian during my first steps with it. I am curious to see what the 2.0 is coming up with and I hope that I get some more debug output for the rules configuration just to see what is happening. The default is nothing and it could be very tricky to find the right combination of conditions to have a working rule.
Looking for the complete sources? Find them on github. Happy to read about your experiences.
Where is the GlassFish Part?
Oh, yeah. I mentioned it in the headline, right? That should be more like a default. I was running everything with the latest GlassFish 3.1.2.2, so you can be sure that this is working. And NetBeans is at 7.2 at the moment and you should give it a try if you haven't. I didn't come across a single issue related to GlassFish and I am very pleased to stress this here. Great work! One last remark: before you implement the OutputBuffer like crazy, take a look at what your favorite appserver has in stock already. GlassFish knows about GZIP compression already and it can simply be switched on! Might be a good idea to think twice before implementing here.
Reference: Rewrite to the edge – getting the most out of it! On GlassFish! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog. | http://www.javacodegeeks.com/2012/08/rewrite-to-edge-getting-most-out-of-it.html | CC-MAIN-2015-18 | refinedweb | 1,658 | 58.89 |
MergeSort
Explanation: In each round, we divide our array into two parts and sort them. So after "int cnt = mergeSort(nums, s, mid) + mergeSort(nums, mid+1, e);", the left part and the right part are sorted, and now our only job is to count how many pairs of numbers (leftPart[i], rightPart[j]) satisfy leftPart[i] > 2*rightPart[j].
For example,
left: 4 6 8 right: 1 2 3
so we use two pointers to travel the left and right parts. For each leftPart[i], while j<=e && nums[i]/2.0 > nums[j], we keep moving j forward, so j ends up just past the last rightPart[j] that forms a reverse pair with leftPart[i]. In our example, left's 4 pairs with 1; left's 6 pairs with 1 and 2; and left's 8 pairs with 1, 2 and 3. So in this particular round, 6 pairs are found, and we increase our total by 6.
public class Solution {
    public int reversePairs(int[] nums) {
        return mergeSort(nums, 0, nums.length - 1);
    }

    private int mergeSort(int[] nums, int s, int e) {
        if (s >= e) return 0;
        int mid = s + (e - s) / 2;
        int cnt = mergeSort(nums, s, mid) + mergeSort(nums, mid + 1, e);
        for (int i = s, j = mid + 1; i <= mid; i++) {
            while (j <= e && nums[i] / 2.0 > nums[j]) j++;
            cnt += j - (mid + 1);
        }
        Arrays.sort(nums, s, e + 1); // sort [s, e] so the parent call sees a sorted range
        return cnt;
    }
}
Or:
Because the left part and right part are sorted, you can replace the Arrays.sort() part with an actual merge process. The previous version is easier to write, while this one is faster.
public class Solution {
    int[] helper;

    public int reversePairs(int[] nums) {
        this.helper = new int[nums.length];
        return mergeSort(nums, 0, nums.length - 1);
    }

    private int mergeSort(int[] nums, int s, int e) {
        if (s >= e) return 0;
        int mid = s + (e - s) / 2;
        int cnt = mergeSort(nums, s, mid) + mergeSort(nums, mid + 1, e);
        for (int i = s, j = mid + 1; i <= mid; i++) {
            while (j <= e && nums[i] / 2.0 > nums[j]) j++;
            cnt += j - (mid + 1);
        }
        myMerge(nums, s, mid, e);
        return cnt;
    }

    private void myMerge(int[] nums, int s, int mid, int e) {
        for (int i = s; i <= e; i++) helper[i] = nums[i];
        int p1 = s;        // pointer for left part
        int p2 = mid + 1;  // pointer for right part
        int i = s;         // pointer for sorted array
        while (p1 <= mid || p2 <= e) {
            if (p1 > mid || (p2 <= e && helper[p1] >= helper[p2])) {
                nums[i++] = helper[p2++];
            } else {
                nums[i++] = helper[p1++];
            }
        }
    }
}
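A standalone, runnable condensation of the counting idea, checked against the problem's sample inputs:

```java
import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        // [1,3,2,3,1] contains the reverse pairs (3,1) at indices (1,4) and (3,4).
        System.out.println(count(new int[]{1, 3, 2, 3, 1})); // 2
        System.out.println(count(new int[]{2, 4, 3, 5, 1})); // 3
    }

    public static int count(int[] nums) {
        return mergeSort(nums, 0, nums.length - 1);
    }

    static int mergeSort(int[] nums, int s, int e) {
        if (s >= e) return 0;
        int mid = s + (e - s) / 2;
        int cnt = mergeSort(nums, s, mid) + mergeSort(nums, mid + 1, e);
        // Both halves are sorted; count cross pairs with two pointers.
        for (int i = s, j = mid + 1; i <= mid; i++) {
            while (j <= e && nums[i] / 2.0 > nums[j]) j++;
            cnt += j - (mid + 1);
        }
        Arrays.sort(nums, s, e + 1);
        return cnt;
    }
}
```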
BST
The BST solution is no longer acceptable, because its performance can be very bad, O(n^2) actually, for extreme cases like [1,2,3,4......49999], due to the tree's imbalance, but I am still providing it below just FYI.
We build the Binary Search Tree from right to left, and at the same time, search the partially built tree with nums[i]/2.0. The code below should be clear enough.
public class Solution { public int reversePairs(int[] nums) { Node root = null; int[] cnt = new int[1]; for(int i = nums.length-1; i>=0; i--){ search(cnt, root, nums[i]/2.0);//search and count the partially built tree root = build(nums[i], root);//add nums[i] to BST } return cnt[0]; } private void search(int[] cnt, Node node, double target){ if(node==null) return; else if(target == node.val) cnt[0] += node.less; else if(target < node.val) search(cnt, node.left, target); else{ cnt[0]+=node.less + node.same; search(cnt, node.right, target); } } private Node build(int val, Node n){ if(n==null) return new Node(val); else if(val == n.val) n.same+=1; else if(val > n.val) n.right = build(val, n.right); else{ n.less += 1; n.left = build(val, n.left); } return n; } class Node{ int val, less = 0, same = 1;//less: number of nodes that less than this node.val Node left, right; public Node(int v){ this.val = v; } } }
Similar to this. But the main difference is: here, the number to add and the number to search are different (add nums[i], but search nums[i]/2.0), so not a good idea to combine build and search together.
This is nice, I find the BST a clear approach. I wonder if anybody did it in python? Sadly mine was Time Limit Exceeded and I even translated your code into python below but it still gets TLE.
class TreeNode: def __init__(self, val): self.val = val self.less = 0 self.same = 1 self.left = None self.right = None class Solution(object): def reversePairs(self, nums): root = None cnt = [0] for i in range(len(nums)-1, -1, -1): self.search(cnt, root, nums[i]/2.0) root = self.build(nums[i], root) return cnt[0] def search(self, cnt, node, target): if not node: return if target == node.val: cnt[0] += node.less elif target < node.val: self.search(cnt, node.left, target) else: cnt[0] += (node.less + node.same) self.search(cnt, node.right, target) def build(self, val, n): if not n: return TreeNode(val) elif val == n.val: n.same += 1 elif val > n.val: n.right = self.build(val, n.right) else: n.less += 1 n.left = self.build(val, n.left) return n
Impressive!!! Your solution reminds me how much practice I still need to do.... I did count of smaller numbers after self before, but I even cannot remember it during the contest....WTF
Below is my solution using merge sort.
public class Solution { public int reversePairs(int[] nums) { if (nums == null || nums.length == 0) return 0; return mergeSort(nums, 0, nums.length - 1); } private int mergeSort(int[] nums, int l, int r) { if (l >= r) return 0; int mid = l + (r - l)/2; int count = mergeSort(nums, l, mid) + mergeSort(nums, mid + 1, r); int[] cache = new int[r - l + 1]; int i = l, t = l, c = 0; for (int j = mid + 1; j <= r; j++, c++) { while (i <= mid && nums[i] <= 2 * (long)nums[j]) i++; while (t <= mid && nums[t] < nums[j]) cache[c++] = nums[t++]; cache[c] = nums[j]; count += mid - i + 1; } while (t <= mid) cache[c++] = nums[t++]; System.arraycopy(cache, 0, nums, l, r - l + 1); return count; } }
@yorkshire
Try not to use recursive. I had the same issue when I used recursive method. So I rewrited it using while loops, which speeds up the code by x1.5, and now it finishes within time limit.
class TreeNode(object): def __init__(self, val): self.val = val #the value at this node self.same = 1 #number of keys with this value self.less = 0 #number of keys in the left subtree self.left = None #left subtree (nodes with less value) self.right = None #right subtree (nodes with higher value) def search(node, val): n_temp = 0 while node != None: if node.val == val: n_temp += node.less #keys with the same value doesn't count node = None elif val > node.val: n_temp += node.less + node.same node = node.right else: node = node.left return n_temp def insert(node, val): while True: if node.val == val: node.same += 1 return elif val > node.val: if node.right == None: node.right = TreeNode(val) return else: node = node.right else: node.less += 1 if node.left == None: node.left = TreeNode(val) return else: node = node.left class Solution(object): def reversePairs(self, nums): length = len(nums) if length <= 1: return 0 n = 0 root = TreeNode(2*nums[-1]) for i in reversed(xrange(length - 1)): a = nums[i] n += search(root, a) insert(root, 2*a) return n
@louis925 The python code you posted does not pass the TLE. I wrote my own code and also test directly with your posted code. Neither passed the test. Thus, the BST method should not be considered as standard solution.
@tbjc The original java code from @Chidong now also gives a Time Limit Exceeded for the test case where nums = [i for i in range(5000)]. In this situation the tree is effectively linear and so O(n**2) time complexity.
At least the results are standardised now so that both languages are not accepted and we have to use something like mergesort that is O(n log n). It's just a shame that Java was accepted in the contest and Python was not.
@tbjc @yorkshire
Thanks for notifying me this! I think they change the test cases.
In the contest, the person who got second place actually only used a BST and passed all the test cases simply because it was written in C++. I tried to do the same thing in Python and it got TLE.
Anyway, now I will try the mergesort method next.
@Chidong Hi, thanks for posting your solution. I'm just confused about one thing..maybe I'm missing something but where is the merge in your mergeSort? I see Arrays.sort()..isn't that O(nlogn) so total time complexity would be O(nlognlogn)?
@jonathan82
The process is first partitioning [s, e] to [s, mid] and [mid+1, e]. Then the entire [s, e] are "merged/sorted. The merging process in merge sort is essentially a sorting process. Yes you can use Arrays.sort() to do the job, which is easier to write but take more time to run. Or you can implement this part yourself, since left and right parts are all sorted, it only it takes O(n) to finish the merge/sort job. I added that in my post.
@Chidong oh, got it thx. I also thought about using a fenwick tree but couldn't seem to get it to fit this problem.
Why is it Arrays.sort(nums, s, e+1); instead of Arrays.sort(nums, s, e); ?
How is that +1 coming from? I tried to remove that +1 but get error. I can debug it to find out why. But how did you know it before debugging? Thank you very much!
@coder2 Java Api says: "public static void sort(short[] a, int fromIndex, int toIndex)
Sorts the specified range of the array into ascending order. The range to be sorted extends from the index fromIndex, inclusive, to the index toIndex, exclusive. If fromIndex == toIndex, the range to be sorted is empty. .... "
My Python merge sort solution:
class Solution(object): def reversePairs(self, nums): def mergeSort(s,e): if s >= e: return 0 mid = (s+e)/2 cnt,j = mergeSort(s,mid) + mergeSort(mid+1,e),mid+1 for i in xrange(s,mid+1): while j <= e and nums[i] > 2*nums[j]: j += 1 cnt += j-(mid+1) nums[s:e+1] = sorted(nums[s:e+1]) return cnt return mergeSort(0,len(nums)-1)
Thanks for the explanations. Post my 239ms C++ version:
class Solution { public: int reversePairs(vector<int>& nums) { return mergeSort(nums, 0, nums.size()-1); } int mergeSort(vector<int>& nums, int l, int r) { if (l >= r) return 0; int mid = l + (r - l) / 2; int res = 0; res = mergeSort(nums, l, mid) + mergeSort(nums, mid+1, r); for (int i = mid+1; i <= r; i++) { auto it = upper_bound(nums.begin()+l, nums.begin()+mid+1, nums[i] * 2L); int dis = distance(nums.begin()+l, it); if (dis > mid-l) break; res += mid-l+1 - dis; } inplace_merge(nums.begin()+l, nums.begin()+mid+1, nums.begin()+r+1); return res; } };
@Chidong Thanks for your post,
I have a question: When you work on this solution, how do you know whether your sorting could introduce duplicate or miss counting, since sorting changes the position of numbers in original array?
Thanks in advance.
@Chidong For the MergeSort Solution, could you explain the complexity? I think it's not O(nlogn), since the recurrence seems to be T(n) = 2T(n/2) + O(n^2), instead of T(n) = 2T(n/2) + O(n). Thanks.
Both merge sort versions are O(n log n).
Summary: wgNamespaceIds should also contain canonical namespaces
Product: MediaWiki
Version: unspecified
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: Normal
Component: Javascript
AssignedTo: d...@ucsc.edu
ReportedBy: llam...@gmail.com
CC: tpars...@wikimedia.org

From: wgNamespaceIds - Gives a mapping from namespace names to namespace IDs. For each namespace name, ***including aliases***, the object has one entry that has the namespace name as the key and the namespace ID as its integer value. ***Canonical names are not included***.

So it is supposed to contain all aliases but not canonical namespaces. That is illogical to me: canonical namespaces also work as aliases! I'm writing a script for the Polish Wikipedia which extracts a link from an article and works out the ID of the namespace it leads to. The most straightforward way would be to extract the namespace from the link and search for it in wgNamespaceIds. But it's not that simple, because the link may contain a namespace in its canonical form. So now I'm going to define my own table, put canonical namespaces into it by hand, and merge it with wgNamespaceIds... Quite easy, but not nice. An alternative solution would be to define a new table with the IDs of canonical namespace names.

Wikibugs-l mailing list, Wikibugs-l@lists.wikimedia.org
my $obj = lockpan();                         # interacts with a VB6 DLL
my $enc_data = $obj->encryptval($data);      # returns a binary value
$enc_data = '0x' . unpack('H*', $enc_data);  # bin -> hex
# $enc_data is then written to a SQL db as 7-bit ASCII, even parity
// enc_data is a hex-formatted string generated by the COM object my .NET apps
// interact with. In order for the .NET app to decrypt the hex value returned
// by the same COM object to Perl, the hex string must be transformed into the
// appropriate parity format by doing the following:
enc_data = Regex.Replace(enc_data, "^0x", "");
enc_data = FromUnicodeByteArray( hex_to_bin(enc_data) );
// now enc_data contains a value that my .NET app can send to the COM object
// for decryption and get back the correct value

private static byte[] hex_to_bin ( string s )
{
    int stringLength = s.Length;
    if ( (stringLength & 0x1) != 0 )
    {
        Console.WriteLine("not even numbers");
    }
    byte[] b = new byte[ stringLength / 2 ];
    for ( int i = 0, j = 0; i < stringLength; i += 2, j++ )
    {
        int high = charToNibble( (s.Substring(i, 1).ToCharArray())[0] );
        int low  = charToNibble( (s.Substring(i + 1, 1).ToCharArray())[0] );
        b[j] = (byte)( (high << 4) | low );
    }
    return b;
}

/**
 * convert a single char to corresponding nibble.
 *
 * @param c char to convert. must be 0-9 a-f A-F, no
 * spaces, plus or minus signs.
 *
 * @return corresponding integer
 */
private static int charToNibble ( char c )
{
    if ( '0' <= c && c <= '9' )
    {
        return c - '0';
    }
    else if ( 'a' <= c && c <= 'f' )
    {
        return c - 'a' + 0xa;
    }
    else if ( 'A' <= c && c <= 'F' )
    {
        return c - 'A' + 0xa;
    }
    else
    {
        Console.WriteLine( "Invalid hex character: " + c );
        return 0;
    }
}

private static string FromUnicodeByteArray(byte[] characters)
{
    string constructedString = Encoding.Default.GetString(characters);
    return (constructedString);
}
My guess is that this is an encoding problem. 0x84 is DOUBLE LOW-9 QUOTATION MARK in CP1252. CP1252 is a common character set with Windows adds smart quotes and other stuff to Latin-1. My suspicion is that the data you are decoding is a string containing a smart quote. If you want 7-bit ASCII, then you will need to strip or convert the 8-bit bytes.
The easy was to convert from data+parity to data is to & your input with 0x7F. Note that this destroys any benifit the parity might have (in this case, none).
If you've got a bunch of data in $data, and want to strip the high bit, try $data = chr(0x7F) x length($data);.
Note that I suspect your actual problem has little to do with parity vs non-parity, but rather with utf-8 encoding. I know of nowhere in perl that cares about parity.
Perl is almost certainly not converting to "7-bit even parity" because it doesn't know about any such thing. The only place I have seen "even parity" used is with modems.
Not to mention that the string you presented is not even partity.
That string is not in UTF-8 but it is still likely an encoding issue. Win32::OLE does have an option to set the code page for translating Unicode strings to Perl strings. You might want to check its value or explicitly set it the code | http://www.perlmonks.org/?node_id=335194 | CC-MAIN-2015-22 | refinedweb | 525 | 64.1 |
I am a new programmer, and I have a couple of questions.
Before I ask, im sorry if its been done before, but im at a loss as to what to search for, as i have no idea how to even begin what id like to do.
Here is my code...
Code:#include <iostream> #include <string.h> #include <string> #include <dos.h> #include <process.h> #include <fstream> // for files #include <stdio.h> #include <conio.h> using namespace std; string user; string pass; int main() { cout << "Username :"; cin >> user; if(user == "admin") { cout << "Password :"; } else { cout << "Incorrect username\n"; return main(); } cin>> pass; if(pass == "12345") { cout<< "Welcome.\n"; } else { return main(); } system("pause"); return 0; }
After the console displays "welcome" id like it to allow the user to "press any key to continue..." ( getch;? ) to clear the screen.
After the screen clears, id like the user to be able to type in text that can be saved within the program, not an outside text file, so the text can only be accessed through logging in.
This will serve as a personal journal if you will.
How do i go about doing this?
Thank you guys. | https://cboard.cprogramming.com/cplusplus-programming/131149-cplusplus-console-program-help.html | CC-MAIN-2017-39 | refinedweb | 195 | 84.37 |
April 20th-23rd 2012 :: 10 Year Anniversary! :: Theme: Tiny World
Furaingu Kiby RiamuDiMentis - Jam Entry
Windows release is really just the source. In one nice little .py I'll work on getting it ported ASAP
Instructions: Get the red things, avoid the laser, and the trees.
Python comes pre-installed on Mac, for linux there's a version (Kinda) on this page:
Download Python:
Download Pygame:
Windows:
Mac:
Ubuntu and others: (Scroll down a bit)
Downloads and Links
Ratings
First things first - I'm pretty sure you need to compose your own music rather than using someone else's, so you might want to get rid of the music here. (Not 100% sure for the Jam - someone want to correct me?)
Nice game though! Simple and straightforward and pretty fun. Good work. My highest score was 12k. Stuff starts getting pretty tricky towards the end!
I would really encourage you to package up your game with p2exe:. Most Ludum Darers are allergic to figuring out how to get python and pygame working, and py2exe alleviates this worry ;-) I in fact did this for my game, so if you have any questions or bugs, let me know.
Ah, darn... I wasn't thinking there, since he puts his things in public domain... >.< Anyone have anymore input?
"Fonts, drum loops, drum samples, and sampled instruments are allowed IF you have the legal right to use them." Does public domain music fall into that?
"For Jam games, you are free to use whatever artwork or content you like (preferably something you have the legal rights to), but you must accept all responsibility for its use."
Okay, great, that clears things up. Kevin Macleod only requests that I give him credit. Which I did! Boo-Yah! Thanks Transmit.
Nice. Could've used some more sound effects, maybe, but the gameplay seemed fairly tight and solid.
Liked the relaxing background music and overall concept/game play. Movement of the planet is a bit too predictable.
Gets you hooked though. Especially after johnfn announced his high score and I desperately tried breaking it without success :)
hurry with the porting to .exe plz, elsewise here on windows i can't play :( (and i don't want to install python)
I don't have windows, couldn't test.
I'm on Linux, and there's a couple things I had to do to get your game to run. I had to disable the music. Why not consider using pygame's built-in music module rather than mp3play? Also, I had to rename some of the files because capitalization matters on Linux systems. The music was well chosen. Kevin Macleod's stuff is great.
The reason being for mp3play, was simply because I forgot pygame can play mp3s, and I used mp3play before I used pygame... So, yeah. Sorry about the capitalization too. On another note, I'm working on the .exe but I've been really sick... Maybe Nick wants to work on it? (Wink wink, nudge nudge)
Can you make binaries please? I refuse to launch Python code - you py developers every compo seem to expect everyone to compile your code. Why is it such a difficulty to make an executable?
:C Unfortunately I cannot play this on my mac.
Good luck for this Ludum Dare though! I hope whatever happens you're happy with your game!
@demonpants listen, I'm trying very hard not to be angry here. But I'll do my best to explain myself. First off, you don't need to compile my code, you install python, you install pygame, and it runs, kay? Secondly, I don't EXPECT you to DO anything. If you have the capability to a) run .py with pygame or b) run two installers, then I would be happy to get your ratings, and feedback. And, it's proven difficult to make an .exe for me because, I've been sick (as I said above) and also, because the instructions aren't clear, and I just haven't gotten it working. My apologies, but if you had read above, you would've known these things. Thank you for your input, and I'll be sure to learn how to make an .exe, so people who "refuse to launch Python code" can give some appropriate feedback on my game. I'm sorry that I'm only 14, and I can't yet figure out everything that you great programmers can do. Thanks for your time though.
It feels like inverse "space invaders". Quiet difficult to stay alive.
I have the same comments as Cosmologicon. I've uploaded my version of the source here (for anyone else on Linux):
The game itself could use a bit more variation though.
Oh man, thanks for the linux changes. I actually don't know why it was taking "LASER.png" when the image file is "laser.png" on Windows... It shouldn't have done that. That "LASER.png" was an alternate image before we changed it... Sorry. Thanks for the great input. This has been a really great first experience.
Doesn't work for me on Linux (Ubuntu 10.04 64 with pygame installed):
Traceback (most recent call last):
File "Furaingu ki.py", line 9, in <module>
import pygame, random, sys, mp3play
File "/tmp/Furaingu Ki_Distro/mp3play/__init__.py", line 6, in <module>
raise Exception("mp3play can't run on your operating system.")
Exception: mp3play can't run on your operating system.
Something you might find useful for next time: pygame can play .ogg files itself, so using that format probably would have avoided this problem :)
I would try it if you had a Web or OSX version.
You must sign in to comment.
The music was cool, as was the beam sound effect. I felt like power-ups that made me go faster actually had the effect of making it harder to control. | http://ludumdare.com/compo/ludum-dare-23/?action=preview&uid=9968 | CC-MAIN-2019-35 | refinedweb | 981 | 76.32 |
NAME
vm_page_protect - lower a page’s protection
SYNOPSIS
#include <sys/param.h> #include <vm/vm.h> #include <vm/vm_page.h> void vm_page_protect(vm_page_t mem, int prot);
DESCRIPTION
The vm_page_protect() function lowers a page’s protection. The protection is never raised by this function; therefore, if the page is already at VM_PROT_NONE, the function does nothing. Its arguments are: mem The page whose protection is lowered. prot The protection the page should be reduced to. If VM_PROT_NONE is specified, then the PG_WRITABLE and PG_MAPPED flags are cleared and the pmap_page’s protection is set to VM_PROT_NONE. If VM_PROT_READ is specified, then the PG_WRITABLE flag is cleared and the pmap_page’s protection is set to VM_PROT_READ. Higher protection requests are ignored.
AUTHORS
This manual page was written by Chad David 〈davidc@acns.ab.ca〉. | http://manpages.ubuntu.com/manpages/hardy/man9/vm_page_protect.9.html | CC-MAIN-2014-42 | refinedweb | 132 | 60.61 |
August 29, 2018 Neural Arithmetic Logic Unit
Neural networks are being widely adopted across disciplines with regards to computation. One fundamental flaw with neural networks is that they are unable to count. The neural arithmetic logic unit (NALU) was created to extend the neural accumulator model (NAC) to allow for better arithmetic computation.
Neural networks have been shown to compute very well with the training sets they are given. However, outside of the training sets, even simple tasks like learning the scalar identity function is impossible outside of the training sets. However, with the introduction of NACs and NALUs, the ability for neural networks to train and retain mathematically complex models outside of their original training sets has been increased.
Building a NALU involves building a NAC. NACs outputs the linear transformation of the inputs. Simple arithmetic operations such as additions and subtractions are performed. In Python, the following code is provided for the NAC. W is the transformation matrix, M is the matrix of interest, A is the accumulation vector, and G is the learned sigosmodial gate. For some freedom, one can determine the standard of deviation for the layers. Note that one will need TensorFlow and NumPY for the implementation:
import tensorflow as tensor import numpy as np #This function will calculate the vector for W and M that will be outputted. We use the layers #provided through the NAC chip. def vector_calc(layer, sdev): return tensor.Variable(tensor.truncated_normal(layer, sdev)) #This is the product for the transformation matrix provided in the paper. def W_calc(W_vector, M_vector): return tensor.tanh(W_vector) * tensor.sigmoid(M_vector) def nac(sdev, array_layers, outputs): #The layer is taken based on the amount of outputs that are needed. layer = (int(array_layers.shape[-1]), outputs) W_vector = vector_calc(layer, sdev) M_vector = vector_calc(layer, sdev) W = W_calc(W_vector, M_vector) A = tensor.matmul(array_layers, W) G = vector_calc(layer, sdev)
Now for the NALU code. The NALU code will take the output from the NAC cells where the layers and standard of deviation as well as the number of inputs is given. The NALU will learn the weighted sums of the two subcells. The NALU will output what is passed through the NACs.
import tensorflow as tensor import numpy as np def vector_calc(layer, sdev): return tensor.Variable(tensor.truncated_normal(layer, sdev)) def W_calc(W_vector, M_vector): return tensor.tanh(W_vector) * tensor.sigmoid(M_vector) def nalu(ep, sdev, array_layers, outputs): layer = (int(array_layers.shape[-1]), outputs) #NAC cell. Should have seen it before. W_vector = vector_calc(layer, sdev) M_vector = vector_calc(layer, sdev) W = W_calc(W_vector, M_vector) A = tensor.matmul(array_layers, W) G = vector_calc(layer, sdev) #Take the output of the cells. M_out will take the exponent of W and the layers in the #tensor. abs_val = tensor.abs(array_layers) + ep M_out = tensor.exp(tensor.matmul(tensor.log(abs_val), W)) #Learned sigmoidal gate. G_out = tensor.sigmoid(tensor.matmul(array_layers, G)) #The output from the NALU as per the paper. out = G_out * A + (1-G_out) * M return out
This NALU can now be used in Python-based applications for any type of training and then used in general. The NALU implemented comes from this Python implementation. The NALU can be trained and tested to do the scalar identity function. If you want to learn more about NALUs, check out Andrew Trask’s paper. These concepts may be useful for challenges related to machine learning and neural networks.
maxwells_daemon
Guest Blogger | https://www.topcoder.com/neural-arithmetic-logic-unit/ | CC-MAIN-2019-43 | refinedweb | 567 | 50.02 |
Making plots and static or interactive visualizations is one of the most important tasks in data analysis. It may be a part of the exploratory process; for example, helping identify outliers, needed data transformations, or coming up with ideas for models.
matplotlib.pyplot is a plotting library used for 2D graphics in python programming language. It can be used in python scripts, shell, web application servers and other graphical user interface toolkits.
Matploitlib is a Python Library used for plotting, this python library provides and objected-oriented APIs for integrating plots into applications.
Before start plotting let us understand some basics
- With Pyplot, you can use the xlabel() and ylabel() functions to set a label for the x- and y-axis.
- With Pyplot, you can use the grid() function to add grid lines to the plot.
- You can use the keyword argument linestyle, or shorter ls, to change the style of the plotted line:
- The plot() function is used to draw points (markers) in a diagram. By default, the plot() function draws a line from point to point.
- You can use the keyword argument marker to emphasize each point with a specified marker.
Figures and Subplots
1.The subplots() function takes three arguments that describes the layout of the figure.
2.The layout is organized in rows and columns, which are represented by the first and second argument.
3.The third argument represents the index of the current plot.
Example-1: plt.subplot(1, 2, 1) #the figure has 1 row, 2 columns, and this plot is the first plot.
Example-2: plt.subplot(1, 2, 2) #the figure has 1 row, 2 columns, and this plot is the second plot.
Example program
import matplotlib.pyplot as plt import numpy as np #plot 1: x = np.array([0, 1, 2, 3]) y = np.array([3, 8, 1, 10]) plt.subplot(1, 2, 1) plt.plot(x,y) plt.title("SALES") #plot 2: x = np.array([0, 1, 2, 3]) y = np.array([10, 20, 30, 40]) plt.subplot(1, 2, 2) plt.plot(x,y) plt.title("INCOME") plt.suptitle("MY SHOP") plt.show()
Note: only a member of this blog may post a comment. | http://www.tutorialtpoint.net/2021/12/creating-subplots-in-python.html | CC-MAIN-2022-05 | refinedweb | 365 | 67.65 |
I'll get you up to speed. I'm trying to setup a windows dev environment. I've successfully installed python, django, and virtualenv + virtualenwrapper(windows-cmd installer)
workon env
Python 2.7.6 (default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> django.VERSION
(1,6,1, 'final',0)
>>> quit()
python manage.py runserver
Traceback (most recent call last)"
File "manage.py", line 2, in (module)
from django.core.management import execute_manager
ImportError: cannot import name execute_manager
...C:\Python27\;C:\Python27\Scripts\;C:\PYTHON27\DLLs\;C:\PYTHON27\LIB\;C:\Python27\Lib\site-packages\;
execute_manager deprecated in Django 1.4 as part of the project layout refactor and was removed in 1.6 per the deprecation timeline:
To fix this error you should either install a compatible version of Django for the project or update the
manage.py to new style which does not use
execute_manager: Most likely if your
manage.py is not compatible with 1.6 then neither is the rest of the project. You should find the appropriate Django version for the project. | https://codedump.io/share/AEGEusAOr9Fu/1/import-error-cannot-import-name-executemanager-in-windows-environment | CC-MAIN-2016-50 | refinedweb | 193 | 53.78 |
omelette ignores eggs which share namespace
Bug #271069 reported by Taylor McKay on 2008-09-16
This bug affects 2 people
Bug Description
I'm attempting to install repoze, which bundles parts of zope into several different eggs, and restructure the eggs with omelette. Omelette should be placing some of these eggs under the 'omelette/zope' directory, but zope becomes a symlink when the first egg is omelette'd -- subsequent packages are then skipped and left out of the 'omelette/zope' since zope is a symlink. I am attaching my buildout, which is meant to be run in a sandbox created by virtualenv.
collective.
recipe. omelette decides for each namespace path whether or
not it contains further namespace paths. If so, it creates directories
for them, otherwise it creates links to package contents (__init__.py,
around line 118 as of version 0.7). However, a namespace package may
contain further namespace packages and non-namespace packages and
modules at the same time, delivered by the same egg. This is a case not
handled by the omelette. The simplest fix would be to unconditionally
and recursively create any namespace directories as well as links to
all other package content for every namespace path. | https://bugs.launchpad.net/collective.buildout/+bug/271069 | CC-MAIN-2015-32 | refinedweb | 202 | 58.21 |
Hi Ant developers,
I have implemented for myself modifications of tasks XMLValidate and SchemaValidate that use
JAXP 1.3 validation techniques, i.e. package javax.xml.validation which is part of standard
JRE since 1.5. The main reason for this was the possibility to use interface LSResourceResolver
to resolve XML schemas using XML Catalog based on the XML namespace that they define (i.e.
in XMLCatalog you specify a mapping from XML namespace to XML schema document and whenever
an element in this namespace is encountered in the XML document being validated, this XML
schema is used for validating this document). I've implemented this in a way that is probably
not compatible with Ant policy, namely that the code has to compile on Java 1.2, or am I wrong?
Anyway, this would need to be somehow finished and before I spend the time to do it I'd like
to ask whether you think this could be useful for inclusion in Ant.
The modification itself is not large, but if I must use only reflection to access non-1.2
classes, it could get messy. So what do you think?
Thanks for your opinions.
David | http://mail-archives.apache.org/mod_mbox/ant-dev/200903.mbox/%3C5d11930d616f4dac97ed52138fc79c43@bb542131206749199963074b39f8af36%3E | CC-MAIN-2014-23 | refinedweb | 198 | 64.1 |
Square Triple
February 14, 2020
We have a simple homework problem today:
Given a list of distinct integers, find all triples (x y z) where x, y and z are in the list and x * y = z².
Your task is to find the list of triples. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Easiest way to make this o(n2) is to first compute an array of squares and use this to filter results… as usual in Perl…
Here’s an O(n²) solution that doesn’t require any auxiliary storage. Assumes input is sorted and all positive:
Here is my approach to the problem using Julia, as usual:
Although I iterate through the data points using a triple loop, not all combinations are examined since I take advantage of the fact that the integers are distinct (so there is one zero at most), simplifying the whole process. Cheers!
Here’s a O(n^2) solution in Python.
I’ve assumed x, y, z are distinct (e.g., no (x=1,y=1,z=1) nor (x=0, y=2, z=0)), which I think is suggested by the problem specifying a “list of distinct integers” for the input.
Output:
@programmingpraxis, I think that using square roots for this problem can be problematic, since as-is it won’t handle negative values for z.
(where “square roots” in my comment refers to the conventional function that returns a single non-negative value)
Klong version
The problem says to find ALL triples, but the model answer finds only half of them.
Nothing whatsoever in the problem says that the numbers are all positive or all non-negative.
In fact by using “integer” instead of “natural number” it suggests that negative numbers are allowed.
Given [-2,1,2,4], (1,4,2) and (1,4,-2) are both legal triples, but the model answer will not find the second.
I thought it might be interesting to explain some of the facilities of the Klong programming language. I like to learn this about other languages – they all seem to have their strengths – and Klong is one of the more unusual ones I’ve come upon.
[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]
[[1 1] [1 2] [1 3] [1 4] [1 5] [1 6] [1 7] [1 8] [1 9] [1 10] [1 11] [1 12] [1 13] [1 14] [1 15] [1 16] [1 17] [1 18] [1 19] [1 20]]
[4 5 6]
[1 3 5 7 9 11 13 15 17 19]
[0]
[1 2 3]?4
[]
*/[2 3]
6
Richard noticed that the example solution did not find all solutions. Mine didn’t either. Here’s version 2 for Klong.
This is a common lisp solution. It assumes the input list is sorted in accending order.
Sample run:
Python Solution:
def find_triples(int_list):
triples = []
# Loop through all integers in list
for k in range(len(int_list)):
for i in range(len(int_list)):
for j in range(len(int_list)):
# Detect valid pair
if int_list[k] * int_list[i] == int_list[j] ** 2:
same = False
# Check each tuple to see if different permutation of pair is already present
for tup in triples:
if int_list[k] in tup and int_list[i] in tup and int_list[j] in tup:
same = True
break
if not same:
triples.append((int_list[k], int_list[i], int_list[j]))
else:
continue
return triples
print(find_triples(range(1, 15)))
OUTPUT:
[(1, 1, 1), (1, 4, 2), (1, 9, 3), (2, 8, 4), (3, 12, 6), (4, 9, 6), (5, 5, 5), (7, 7, 7), (10, 10, 10), (11, 11, 11), (13, 13, 13), (14, 14, 14)] | https://programmingpraxis.com/2020/02/14/square-triple/ | CC-MAIN-2020-50 | refinedweb | 658 | 61.7 |
Qt WebEngine Platform Notes
Building Qt WebEngine from Source
Static builds are not supported.
The requirements for building Qt 5 modules from source are listed separately for each supported platform::
Windows
On Windows, the following additional tools are required:
- Visual Studio 2017 version 15.8 or later
- Windows 10 SDK++ and
linux-clang..11 can be built with Qt 5.9.x, Qt 5.10.x, and Qt 5 and macOS. Sandboxing is currently not supported on Windows due to a limitation in how the sandbox is set up and how it interacts with the host process provided by the Qt WebEngine libraries.
On macOS, there are no special requirements for enabling sandbox support.
On Linux, the kernel has to support the anonymous namespaces feature (kernel version >= 3.8) and seccomp-bpf feature (kernel version >= 3.5). Setuid sandboxes are not supported and are thus disabled.
To explicitly disable sandboxing, the
QTWEBENGINE_DISABLE_SANDBOX environment variable can be set to 1 or alternatively the
--no-sandbox command line argument can be passed to the user application executable... | https://doc-snapshots.qt.io/qt5-dev/qtwebengine-platform-notes.html | CC-MAIN-2019-51 | refinedweb | 175 | 57.57 |
Market Risk. A financial firm's market risk is the potential volatility in its income due to changes in market conditions such as interest rates, liquidity, economic growth, etc. It is typically measured for a time period of one year or less.
Reasons for Market Risk Measurement
1. Management information.
2. Setting trading limits.
3. Resource allocation.
4. Trader and management performance measurement.
5. Regulatory capital requirements.
1. Assume you own a zero-coupon bond with a 7-year maturity, $1,000,000 market value and a 7.243% yield.
2. Find its modified duration = Duration/(1+Y) = 7/1.07243 = 6.527
Assume yields move daily by .001 (10 basis points) on average; then
Standard Deviation = 6.527(.001) = .006527
3. Assume 95% confidence so
price volatility = 1.65(.006527) = .01077
4. Daily VAR = $1,000,000(.01077) = $10,770
5. VAR over 5 days = $10,770 (5)^.5 = $24,082
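The five steps can be checked with a few lines of Python (a sketch; the 1.65 multiplier is the one-tailed 95% z-value used throughout these examples):

```python
# Daily earnings at risk (DEAR) for the zero-coupon bond position above.
market_value = 1_000_000
maturity = 7                     # zero-coupon bond: duration = maturity
bond_yield = 0.07243

modified_duration = maturity / (1 + bond_yield)   # ~6.527
daily_yield_move = 0.001                          # 10 basis points
z_95 = 1.65                                       # one-tailed 95% multiplier

price_volatility = z_95 * modified_duration * daily_yield_move
daily_var = market_value * price_volatility       # ~ $10,770
var_5_day = daily_var * 5 ** 0.5                  # ~ $24,082

print(f"daily VAR: ${daily_var:,.0f}, 5-day VAR: ${var_5_day:,.0f}")
```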
1. Assume you have a well-diversified $500,000 portfolio of stocks.
2. Assume the portfolio has a beta of 2 and that the market portfolio’s return has a daily standard deviation of 2%.
Portfolio standard deviation = (Beta)(Market Stand. Dev.)
= 2(.02) = .04
3. Assume 95% confidence so
Price volatility = .04(1.65) = .066
4. Daily VAR = $500,000(.066) = $33,000
5. VAR over 5 days = $33,000 (5)^.5 = $73,790
1. Assume you have a 1.6 million Swiss Francs and the exchange rate is 0.625 dollars per Swiss Franc.
Position value in dollars = 1,600,000(0.625) = $1,000,000
2. Assume the daily standard deviation in the exchange rate is .00565.
3. Assume 95% confidence so
Price volatility = .00565(1.65) = .00932
4. Daily VAR = $1,000,000(.00932) = $9,320
5. VAR over 5 days = $9,320 (5)^.5 = $20,840
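All three single-position examples share the same pattern, dollar exposure times daily price volatility times a confidence multiplier, so one small helper reproduces them (a sketch using the numbers from the text):

```python
def dear(exposure, daily_sd, z=1.65):
    """Daily earnings at risk: exposure x confidence multiplier x volatility."""
    return exposure * z * daily_sd

bond  = dear(1_000_000, 6.527 * 0.001)   # modified duration x daily yield move
stock = dear(500_000, 2 * 0.02)          # beta x market standard deviation
fx    = dear(1_000_000, 0.00565)         # daily FX-rate standard deviation
print(round(bond), round(stock), round(fx))
```

Any other position fits the same mold: just supply its dollar exposure and an appropriately scaled daily volatility.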
To get the combined risk for the three positions together, we need to find the correlations of the three assets. Assume that the correlation between the bond and the Franc is (r = -.2), between the bond and the stock portfolio is (r = .4), and between the Franc and the stock portfolio is (r = .1). Then using the definition of a portfolio standard deviation
VAR(portfolio) = [Σi VARi² + 2 Σi<j corrij(VARi)(VARj)]^.5
we get
Combined Daily VAR = [(10,770)² + (33,000)² + (9,320)² + (2(-.2)(10,770)(9,320)) + (2(.4)(10,770)(33,000)) + (2(.1)(9,320)(33,000))]^.5 = $39,969
VAR(portfolio) over 5 days = $39,969 (5)^.5 = $89,373
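The aggregation formula is just a correlation-weighted quadratic form, which makes it easy to verify numerically (NumPy is used here for the matrix product; plain loops would work as well):

```python
import numpy as np

# Daily VARs of the three positions: bond, stock portfolio, Swiss franc.
var = np.array([10_770.0, 33_000.0, 9_320.0])

# Assumed correlations: bond/stock 0.4, bond/franc -0.2, stock/franc 0.1.
corr = np.array([
    [ 1.0,  0.4, -0.2],
    [ 0.4,  1.0,  0.1],
    [-0.2,  0.1,  1.0],
])

portfolio_var = float(np.sqrt(var @ corr @ var))   # ~ $39,969
var_5_day = portfolio_var * np.sqrt(5)             # ~ $89,373
print(round(portfolio_var), round(var_5_day))
```

Note that the negative bond/franc correlation shrinks the combined figure below the simple sum of the three VARs ($53,090), which is the diversification benefit.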
The VAR measure has been criticized mainly because it assumes price changes are normally distributed and for some assets this is not true.
One alternative that does not assume normal price changes is Back Simulation. This method takes a firm's current position values and revalues them using percent price changes observed for each type of asset for each of 500 days in the past. Now we have 500 values for the portfolio assuming the types of changes that actually occurred in the recent past. This gives us a range of values from which we can determine the 25 (5%) largest negative value changes.
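A sketch of the back-simulation mechanics in Python: the percentage-change history below is randomly generated stand-in data (in practice you would use the actual last 500 days of market moves), but the steps are as described — revalue today's positions under each day's changes, sort the simulated P&L, and read off the 25th-worst outcome as the 5% VAR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Today's position values: bond, stock portfolio, Swiss franc (in dollars).
positions = np.array([1_000_000.0, 500_000.0, 1_000_000.0])

# 500 days of observed percent changes per asset. Synthetic stand-in data
# here; in practice this matrix comes from actual market history.
pct_changes = rng.normal(0.0, [0.0065, 0.04, 0.0057], size=(500, 3))

pnl = pct_changes @ positions    # one simulated daily P&L per historical day
pnl_sorted = np.sort(pnl)        # ascending: worst outcomes first
var_5pct = -pnl_sorted[24]       # 25th-worst of 500 days = 5% VAR

print(f"5% one-day VAR: ${var_5pct:,.0f}")
```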
Monte Carlo simulation and the BIS regulatory models are alternatives that require more specialized information.
Assume that we think the factors affecting the firm's percent stock return (Rt) are the percent changes in the general stock market (SP500t), the 3 Month vs. 30 Year Treasury Bond Spread (TBt), the Baa-Aaa yield spread (BAt), the dollar exchange rate (FXt), and the money supply (MSt). A regression of the firm's stock returns on these factors is

Rt = a + b1(Δ%SP500t) + b2(Δ%TBt) + b3(Δ%FXt) + b4(Δ%BAt) + b5(Δ%MSt) + et
Suppose we run the regression for Husky Financial with the following results (T-statistics below coefficients)
R = 0.0003 + 0.871(Δ%S&P) + 1.124(Δ%TB)
(1.35) (9.87) (5.52)
- 0.062(Δ%FX) + 0.007(Δ%BA) + 0.013(Δ%MS)
(2.47) (0.92) (0.01)
(2.47) (0.92) (0.01)
These results show that Husky’s equity value is significantly positively related to stock market return and Treasury spread but significantly negatively related to changes in the dollar. Baa-Aaa spread and money supply changes have no effect.
Question: Can you explain these results?
Besides showing which risks the firm is significantly exposed to, we can use the results to eliminate some risk.
Example: Suppose Husky has 1 million shares that trade for $50 per share and we wish to eliminate half of its Treasury spread risk over the next month. Assume that a spread contract price changes by less than 10 percent during a month 95% of the time.
1. Equity value = $50(1,000,000) = $50,000,000
2. ΔEquity = 1.124(0.10)($50,000,000) = $5,620,000
3. To eliminate half the risk, sell spread contracts. How much? .5($5,620,000) = (0.10)X => X = $28,100,000
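The sizing arithmetic can be double-checked in a couple of lines (a sketch using the example's figures):

```python
equity = 1_000_000 * 50.0        # 1 million shares at $50
beta_tb = 1.124                  # estimated Treasury-spread sensitivity
contract_move = 0.10             # 95% bound on a monthly spread-contract move

equity_at_risk = beta_tb * contract_move * equity        # ~ $5.62 million
hedge_notional = 0.5 * equity_at_risk / contract_move    # ~ $28.1 million to sell
print(equity_at_risk, hedge_notional)
```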
(This article was first published on Milk Trader, and kindly contributed to R-bloggers.)

Well, sorta. More precisely, money is the sum of coin$flip divided by the number of coin$flip. But we'll get to that later. For now, let me introduce you to a new algorithm written in R. This one is another "quote" -- simple few lines of code -- whose theme you can expand to include something more interesting than what I'm presenting here.
Let's say you're having a really bad spell trading and basically have no money left. You have a computer, but you don't have any spare change to spend on software or data. Well, you're in luck. R is free, open source software and Yahoo Finance offers free data. Yes the free-ness means that you get delayed quotes, but let's not quibble about a few minutes. It's free!
The following lines of code enable you to create a list of stocks that may be of interest to you. I've included the venerable Dow 30. You can include the entire S&P 500 or just some select sector ETFs that give you a broad-stroke view of the current day's market action. In any case, it returns a value that I've named 'money'. This value is a percentage of our list that is trading above yesterday's close. Yeah, kinda boring, but as I mentioned, it's a theme for you to play with.
require("quantmod")

# Dow 30 tickers -- a representative subset is shown here (the original list
# was lost in extraction); extend the vector to the full index if you like.
bank <- c("MMM", "AXP", "BA", "CAT", "CVX", "DD", "DIS", "GE", "HD", "IBM",
          "JNJ", "JPM", "KO", "MCD", "MRK", "MSFT", "PG", "T", "WMT", "XOM")

coin <- getQuote(bank, what = yahooQF(c("Last Trade (Price Only)", "Change")))

# Score each ticker 1 if it is up on the day, 0 otherwise.
random <- function(change) {
  if (change > 0) {
    return(1)
  } else {
    return(0)
  }
}

coin$flip <- mapply(random, coin[, 3])  # column 3 holds the daily change

# Percentage of the list trading above yesterday's close.
money <- sum(coin$flip) / length(coin$flip) * 100
money
Earlier today we shipped a public beta of our upcoming .NET 3.5 SP1 and VS 2008 SP1 releases.
Wow, this SP is really great 8)
Good job guys. Does it show intellisense and comments for the standard jQuery distribution, or do you need one with special XML comments?
Thanks!
@Mike: You'll get a basic level of IntelliSense for standard jQuery, but we still recommend using a version that has the special XML comments. In SP1, you no longer get errors, but providing full IntelliSense for non-commented code isn't something that can be gained from a bug fix. =)
Hi.
I have two more points which are not really bugs, but gotchas which often cost time during development.
1. I often copy some code and paste it directly after it, or in other words I replicate code, especially in the HTML editor with tables, because I don't want to type the raw table code again and again for each row, but paste and modify it. With VS 2005 SP1 it worked fine: Ctrl+C, then 2, 3 or 4 times Ctrl+V. Now in VS 2008 Ctrl+V does not work if any text is selected; I first have to move the cursor to the end of the selection, press Ctrl+V, move to the new end, etc. It would be nice to have Ctrl+V working again also if some text/code is currently selected.
2. I use generic lists etc. very often, and since I use VS 2008 IntelliSense creates a ListBox<int> for me instead of a List<int>. This is because the C# class template in ASP.NET projects by default contains using statements for System.Web.UI and sub-namespaces but not System.Collections.Generic. These ASP.NET related namespaces are not needed most times in "normal" classes I guess, especially compared to other more common namespaces. If I use Add -> New Item -> Web Server Control it's ok, but not if I use Add -> Class.
.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
<add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
</assemblies>
The box to develop or host an ASP.NET 5 (vNext) web application:
Note: For the sake of writing I am using a Ubuntu 14.04 Trusty x64 - DigitalOcean droplet (highly recommended for Linux based hosting).
SSH into your Linux box and lets get started...
Step (optional, but recommended): Let us first update/upgrade the Linux box:
sudo apt-get update
sudo apt-get upgrade
Step: Install some pre-requisite packages that will be required for the following setup to work
sudo apt-get install make -y
sudo apt-get install zip unzip curl git libtool autoconf automake build-essential zsh gyp -y
Step: ASP.NET 5 runs on top of the Mono Framework (the .NET equivalent for non-Windows platforms) using the .NET tools (DNVM, DNU, and DNX).
First, let us install Mono Framework
Update/upgrade the apt-get package manager references:
sudo apt-get update
Install Mono Framework:
sudo apt-get install mono-complete -y
Once the installation completes you can check the Mono version that was installed:
mono --version
output should be similar to following (version number might vary)
Mono JIT compiler version 4.0.3 (Stable 4.0.3.20/d6946b4 Tue Aug 4 09:43:57 UTC 2015)
Step: Import a few certificates into the CERT manager for URL(s) that the .NET tools (DNVM/DNU/DNX) and your application may connect to and need SSL trust for (may not work in all cases):
sudo yes | certmgr -ssl -m -v sudo yes | certmgr -ssl -m -v sudo yes | certmgr -ssl -m -v sudo yes | certmgr -ssl -m -v sudo yes | certmgr -ssl -m -v sudo mozroots --import --sync --quiet
IMP note: There is no IIS on Linux (as you all know), so ASP.NET 5 uses the web server package Kestrel to run the application and serve requests.
This is a development-ready/internal self-host web server that initiates from the application and hosts the ASP.NET 5 web application. If you follow this article step by step, then Kestrel will be proxy-hosted behind Nginx to serve requests publicly (the general practice followed by other frameworks on Linux, like Node).
The Kestrel package uses the Libuv cross-platform asynchronous I/O library to self-host the ASP.NET 5 web application. This package may or may not already be present or set up on your machine.
Step: (Optional if Libuv is already present) Set up the Libuv package on Linux [Reference: ASP.NET GitHub Home repo docs]:
curl -sSL | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.7.0
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.7.0 && cd ~/
sudo ldconfig
At the time of this writing the latest version of Libuv was 1.7.0. The above commands should be updated for the version of Libuv you are installing.
You can find your currently installed Libuv version at /usr/lib/libuv.so.x.x.x.
Step: Install and set up the .NET tools (DNVM, DNU, DNX). You can follow the instructions on the ASP.NET Home GitHub repo or continue here. (Remember, the K tools are older versions of the .NET tools from before the big rename.)
sudo apt-get install unzip
curl -sSL | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
Get latest version of .NET execution environment (DNX) for MONO / Linux:
dnvm upgrade
At the time of writing, dnvm upgrade installed dnx-mono.1.0.0-beta6. Running dnvm list then shows the following output:
Active Version     Runtime Arch OperatingSystem Alias
------ -------     ------- ---- --------------- -----
  *    1.0.0-beta6 mono         linux/darwin    default
Step: (Optional if you do not want to publicly host the ASP.NET 5 web application) Install Nginx and configure it to proxy-host the ASP.NET 5 web application, which runs using Kestrel, the .NET tools (DNVM, DNX, DNU), and the Mono Framework.
sudo apt-get update
sudo apt-get install nginx -y
This should install the Nginx web server. To start/stop the Nginx web server, the following commands can be used:
sudo service nginx start
sudo service nginx stop
This command prints the machine's IP address, which you can open in a browser to check whether Nginx is currently running:
ifconfig eth0 | grep inet | awk '{ print $2}'
Configure Nginx to start on machine start or reboot:
update-rc.d nginx defaults
Create a Nginx specific configuration for your ASP.NET 5 application:
sudo nano /etc/nginx/conf.d/<domain-name>.conf
Replace <domain-name> with your public domain name on which you want your ASP.NET 5 web application to be hosted.
In the NANO editor paste / write following code:
server {
    listen 80;
    server_name <domain-name> www.<domain-name>;
    client_max_body_size 10M;

    location / {
        proxy_pass http://localhost:5004;
        proxy_redirect off;
        proxy_set_header HOST $host;
        proxy_buffering off;
    }
}
Note: Replace <domain-name> with your actual domain name.
Create a cache directory for the Nginx web server:
sudo mkdir /var/cache/nginx
sudo chown www-data:www-data /var/cache/nginx
Create a web root directory for your ASP.NET 5 web application to reside:
sudo mkdir /var/www
sudo chown www-data:www-data /var/www
sudo mkdir /var/www/<domain-name>
sudo chown www-data:www-data /var/www/<domain-name>
Note: Replace <domain-name> with your actual domain name. This is the folder where your application will reside.
You should also remove the default file if it is found in any of the following directories: /etc/nginx/conf.d/, /etc/nginx/sites-available/ or /etc/nginx/sites-enabled/, using sudo rm -rf <directory or filename>. Also, do not forget to restart Nginx with sudo service nginx restart for the changes to take effect.
Step: Create the simplest possible ASP.NET 5 web application to get started; it will display the brand-new ASP.NET 5 vNext welcome page.
cd /var/www/<domain-name>/
sudo nano project.json
Paste the following code in the newly created project.json file:
{ "dependencies": { "Kestrel": "1.0.0-beta6", "Microsoft.AspNet.Diagnostics": "1.0.0-beta6", "Microsoft.AspNet.Hosting": "1.0.0-beta6", "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6", "Microsoft.AspNet.StaticFiles": "1.0.0-beta6" }, "frameworks": { "dnx451": { }, "dnxcore50": { }, }, "commands": { "kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls" } }
This is the minimum required for the web application to run. At the time of writing, ASP.NET 5 is at beta-6 (the most stable beta); update the package versions accordingly if required.
Next create Startup.cs file:
sudo nano Startup.cs
Paste the following code into the newly created Startup.cs file and save it with Ctrl + x => Yes => Enter:
using Microsoft.AspNet.Builder;

namespace HostKWebOnLinux
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseStaticFiles();
            app.UseWelcomePage();
        }
    }
}
Run dnu restore to download and restore all packages mentioned in the project.json file.
Last and final step to BOOM-start it all: run dnx . kestrel, hit your <domain-name>, and voilà, you see the default ASP.NET 5 vNext welcome page.
Updates [18th March, 2015] - Article updated to use Mono 3.12.1, ASP.NET beta-3 and libuv-v1.4.2 (thanks to JUNSUI - for reaching out - refer comments).
Updates [19th March, 2015] - Linux VM users on Azure who wish to make their ASP.NET application publicly accessible need to enable or add entries for specific endpoints in the Azure management portal.
Following screenshot explains this part while creating VM:
Things to understand here:
- The cloud service DNS name (configuration [1]): for example, junsui-test.cloudapp.net is a default domain name assigned to this VM by the Azure DNS service and can be used as your so-called domain name to access anything hosted on this VM.
- To make anything publicly accessible (on Azure), you need to enable ENDPOINTS for it.
- If you followed the NGINX path from the above walkthrough, then you should enable configuration [2], marked red in the above screenshot, considering that your /etc/nginx/conf.d/<domain-name>.conf has listen 80. In this case, the localhost:5004 in our project.json should remain the same.
If you do not want to use the NGINX server as the public web server or to proxy-host your application, then you should try configuration [3] in the above screenshot: use PUBLIC PORT as whatever you wish to access the app with and PRIVATE PORT as 5004, as mentioned in your application's project.json file. In this case, you need to use the public IP address provided by Azure instead of localhost at the following line in project.json:
"kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls http://<azure-public-ipaddress>:5004"
You can find the Public (Virtual) IP Address of your VM in that VM's dashboard page in Microsoft Azure Management Portal:
Umm:
Why do I need Nginx and Kestrel? That is two web servers, right? Can't I just host the application using dnx . kestrel and point it at a public URL?
Yes, you can, but Kestrel is an ASP.NET 5-specific, in-place self-hosting web server. It may or may not offer ways to configure many of the generic web server settings, and even where it does, you need to change/set them in your application or somewhere in your project.json.
Nginx plays the role of middleman and takes care of the generic web server request handling. It also enables you to host more than one web application (any other web application: Node.js, ASP.NET 5, PHP, or WordPress) alongside your ASP.NET 5 web application.
Kestrel takes care of the ASP.NET-specific and application-specific parts in that case. This is much like hosting a CoreCLR-based application (web command) in IIS/IIS Express using the IIS Helios hook.
Where is the frameworks node in the project.json file?
As noted on Github ASP.NET Home repository:
NOTE: There is no Core CLR currently available on OSX/Linux. There is only a single platform (mono45) and a single architecture (x86).
I removed the frameworks node and it still works, sue me.
Update [11th May, 2015] - Sorry, don't sue me: the frameworks node was optional on Linux/Mono till beta-3, but now it's mandatory, or else you will encounter an error similar to the following when you run dnx . kestrel:
System.InvalidOperationException: Failed to resolve the following dependencies for target framework 'DNX,Version=v4.5.1':
There are so many commands to just get up and running, arrggh
Yes, there are. Most of them are pretty straightforward. There is also a shell script created by Punit Ganshani (unfortunately not updated since 6th Dec, 2014; try or modify at your own risk) to do most of this automatically for you. Use it with a few modifications/updates if you are feeling lazy or are a big fan of automation. You should also give Chef, Puppet, and Docker a look if you feel automation is your way. There is an ASP.NET 5 Preview Docker image, and a way to configure it, which you can use to run an ASP.NET 5 vNext application in a Microsoft Azure Linux Docker image.
Anything else Mr. Umm?
Yes: use the DNX_TRACE environment variable if anything related to the .NET tools (DNVM, DNU, DNX) or Kestrel has gone bananas. Export the environment variable on Linux using export DNX_TRACE=1. Also look at dnu pack / dnu help pack to create deployment packages for your ASP.NET 5 web application, which you can then x-copy/FTP to wherever you wish to host your application.
Updates [11th May, 2015] - Article updated to use Mono 4.0.1, ASP.NET 5 beta-4 and libuv-v1.5.0.
Updates [17th August, 2015] - Article updated to use Mono 4.0.3, ASP.NET 5 beta-6 and libuv-v1.7.0.
Please come forward with suggestions, improvements, questions, or criticism, or throw eggs; I will appreciate everything except the eggs (comments below).
Happy coding !! | https://blog.jsinh.in/hosting-asp-net-5-web-application-on-linux/ | CC-MAIN-2018-13 | refinedweb | 1,899 | 57.67 |
Installing DNS
DNS can be installed in several ways. It can be added during the installation of Windows Server 2003, after installation using the Configure Your Server Wizard, or through the Add or Remove Program applet in the Control Panel. DNS can also be installed when promoting a server to a domain controller using the DCPROMO command.
The only real requirement for installing DNS is Windows Server 2003. After installing the DNS service, you can configure DNS server options through the server's Properties dialog box (see Figure 3.3).
Figure 3.3 After installing the DNS service, you can configure DNS server options through the server's Properties dialog box
The available tabs from the DNS server Properties sheet and their uses are summarized as follows:
- Interfaces— Using this tab, you can configure the interfaces on which the DNS server will listen for DNS queries.
- Forwarders— From this tab, you can configure where a DNS server can forward DNS queries that it cannot resolve.
- Advanced— This tab allows you to configure advanced options, determine the method of name checking, determine the location from which zone data is loaded, and enable automatic scavenging of stale records.
- Root Hints— This tab enables you to configure root name servers that the DNS server can use and refer to when resolving queries.
- Debug Logging— From this property tab, you can enable debugging. When this option is enabled, packets sent and received by the DNS server are recorded in a log file. You can also configure the type of information to record in the file.
- Event Logging— The Event Logging tab enables you to configure the type of events that should be written to the DNS event log. You can log errors, warnings, and all events. You can also turn off logging by selecting No Events.
- Monitoring— This tab enables you to test and verify the DNS server configuration by performing simple and recursive queries against it.
- Security— This tab enables you to assign permissions to users and groups for the DNS server.
Advanced DNS Server Options
There are several options that can be configured using the Advanced tab of the DNS server's properties window. Generally, the default settings should be acceptable and require no modifications. The advanced settings that can be configured are summarized in the following list:
- Disable Recursion— This determines whether the DNS server uses recursion. If recursion is disabled, the DNS server will always use referrals, regardless of the type of request from clients.
- BIND Secondaries— This determines whether fast transfers are used when transferring zone data to a BIND server. Versions of BIND earlier than 4.9.4 do not support fast zone transfers.
- Fail on Load if Bad Zone Data— This option determines whether the DNS server continues to load a zone if the zone data is determined to have errors. By default, the DNS server will continue to load the zone.
- Enable Round Robin— This option determines whether the DNS server will rotate and reorder a list of resource records when multiple resource records exist for a query answer.
- Enable Netmask Ordering— This determines whether the DNS server reorders host (A) records within the same resource record set in response to a query based on the IP address of the source query.
- Secure Cache Against Pollution— This determines whether the DNS server attempts to clean up responses to avoid cache pollution. This option is enabled by default.

Windows Server 2003 DNS supports the following zone types:

- Standard primary zone— This type of zone maintains the master writable copy of the zone in a text file. An update to the zone must be performed from the primary zone.
- Standard secondary zone— This zone type stores a copy of an existing zone in a read-only text file. To create a secondary zone, the primary zone must already exist, and you must specify a master name server. This is the server from which the zone information is copied.
- Active Directory–integrated zone— This zone type stores zone information within Active Directory. This enables you to take advantage of additional features, such as secure dynamic updates and replication. Active Directory–integrated zones can be configured on Windows Server 2003 domain controllers running DNS. Each domain controller maintains a writable copy of the zone information, which is stored in the Active Directory database.
- Stub zone— This zone type maintains only those resource records needed to identify the authoritative DNS servers for the zone: the SOA record, the NS records, and the glue A records for those name servers.
Stub Zones Versus Conditional Forwarding
Figure 3.4 If you are creating a reverse lookup zone, you must supply the network ID
- In the Zone File screen, select whether to create a new zone file or to use an existing one (see Figure 3.5). This option appears when creating a forward or reverse lookup zone. Click Next.
Figure 3.5 You must provide a filename for the zone file or select an existing file
- Specify how the DNS zone will receive updates from DNS client computers. Three options are available, as shown in Figure 3.6.
Figure 3.6 You must configure how the DNS zone will receive dynamic updates
- Click Finish.

To view all the resource records supported by Windows Server 2003 DNS, right-click a zone and select Other New Records (see Figure 3.7).
Figure 3.7 The next step in zone creation is populating the zone with DNS resource records
The following list summarizes some of the more common resource records you might encounter:
- Host Address (A) record— Maps a DNS name to an IP address. An A record represents a specific device on the network.
- Start of Authority (SOA) record— Identifies the primary DNS server for the zone. This is the first resource record in a zone file.
- Mail Exchanger (MX) record— Routes messages to a specified mail exchanger for a specified DNS domain name.
- Pointer (PTR) record— Points to a location in the DNS namespace. PTR records map an IP address to a DNS name and are commonly used for reverse lookups.
- Alias (CNAME) record— Specifies another DNS domain name for a name that is already referenced in another resource record.
- Service Locator (SRV) record— Used to identify network services offered by hosts, the port used by the service, and the protocol. SRV records are used to locate domain controllers in an Active Directory domain.

To add a new host (A) record, right-click the zone in the DNS management console and select New Host (see Figure 3.8).
Figure 3.8 You can add a new host record via the DNS management console
To create additional resource records, simply select the type of record you want to create and fill in the required information.
Configuring DNS Simple Forwarding
As you learned earlier in the chapter, a DNS server can be configured to send all queries that it cannot resolve locally to a forwarder.
The pygame.draw.lines method, which takes in 1) the screen surface, 2) the line color, 3) a Boolean value indicating whether to close the lines, 4) the line coordinates, and 5) the line width, will produce the outcome below.
Just like pygame.draw.line, there is also an antialiased version of pygame.draw.lines which will draw smooth lines on the screen.
The drawlines script is as follows:
import pygame
from pygame.locals import *
from sys import exit

pygame.init()
screen = pygame.display.set_mode((640, 480), 0, 32)
coordinate = []

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            exit()
        if event.type == MOUSEMOTION:
            coordinate.append(event.pos)
    screen.fill((255, 255, 255))
    if len(coordinate) > 1:
        pygame.draw.lines(screen, (0, 255, 0), False, coordinate, 3)
    pygame.display.update()
The script above will append a tuple which consists of x, y coordinate to the coordinate array every time we touch a point on the screen, the coordinate array will then be used in the pygame.draw.lines method. | http://gamingdirectional.com/blog/2016/08/28/draw-lines-with-pygame/ | CC-MAIN-2019-18 | refinedweb | 170 | 62.04 |
Python.
The packages you’ll learn about in this tutorial are:
pudb: An advanced, text-based visual debugger
requests: A beautiful API for making HTTP requests
parse: An intuitive, readable text matcher
dateutil: An extension for the popular datetime library
typer: An intuitive command-line interface parser
You’ll begin by looking at a visual and powerful alternative to
pdb.
pudb for Visual Debugging
Christopher Trudeau is an author and course creator at Real Python. At work he’s a consultant who helps organizations improve their technical teams. At home, he spends his time with board games and photography.
I spend a lot of time working SSHed into remote machines, so I can't take advantage of most IDEs. My debugger of choice is pudb, which has a text-based user interface. I find its interface intuitive and easy to use.
Python ships with pdb, which was inspired by gdb, which itself was inspired by dbx. While pdb does the job, the strongest thing it has going for it is that it ships with Python. Because it's based on the command line, you have to remember a lot of shortcut keys and can only see small amounts of the source code at a time.
An alternative Python package for debugging is pudb. It displays a full screen of source code along with useful debugging information. It has the added benefit of making me feel nostalgic for the old days when I coded in Turbo Pascal:
The interface is divided into two main parts. The left panel is for source code, and the right panel is for context information. The right-hand side is subdivided into three sections:
- Variables
- Stack
- Breakpoints
Everything you need in a debugger is available on one screen.
Interacting With pudb
You can install pudb through pip:
$ python -m pip install pudb
If you’re using Python 3.7 or above, then you can take advantage of
breakpoint() by setting the
PYTHONBREAKPOINT environment variable to
pudb.set_trace. If you’re using a Unix-based operating system such as Linux or macOS, then you set the variable as follows:
$ export PYTHONBREAKPOINT=pudb.set_trace
If you’re based in Windows, the command is different:
C:\> set PYTHONBREAKPOINT=pudb.set_trace
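The dispatch behind this is plain standard-library machinery: breakpoint() calls sys.breakpointhook(), which consults PYTHONBREAKPOINT on every call. A minimal sketch (my own example, no debugger required) sets the variable to "0", which turns the calls into no-ops so the script runs straight through:

```python
import os

# PEP 553: breakpoint() dispatches through sys.breakpointhook(), which
# reads PYTHONBREAKPOINT on every call. "0" disables the hook entirely.
os.environ["PYTHONBREAKPOINT"] = "0"

def average(values):
    total = sum(values)
    breakpoint()  # a no-op here; set PYTHONBREAKPOINT=pudb.set_trace to debug
    return total / len(values)

print(average([2, 4, 6]))  # prints 4.0
```

Swap the "0" for pudb.set_trace and the same breakpoint() call drops you into pudb instead.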
Alternatively, you can insert import pudb; pudb.set_trace() directly into your code.
When your running code hits a breakpoint, pudb interrupts execution and shows its interface:
You can navigate and execute the source code with the keyboard:
If you restart your code, then pudb remembers breakpoints from the previous session. Right and Left allow you to move between the source code and control area on the right-hand side.
Inside the Variables box, you see all the variables currently in scope:
By default, the view of a variable is shortened, but you can see the full contents by pressing \. Expanding the view will show you the items in a tuple or list, or it will show the complete contents of a binary variable. T and R switch back and forth between repr and type display modes.
Using Watch Expressions and Accessing the REPL
While the Variables area on the right-hand side is focused, you can also add a watch expression. A watch can be any Python expression. It’s useful for examining data buried deep within an object while the object is still in its shortened form or for evaluating complex relationships between variables.
Note: You add a watch expression by pressing N. Since N is also used to execute the current line of code, you have to make sure the right area of the screen is in focus before pressing the key.
Pressing ! escapes you out to a REPL within the context of your currently running program. This mode also shows you any output that was sent to the screen before the debugger triggered. By navigating the interface or using shortcut keys, you can also modify breakpoints, change where you are in the stack frame, and load other source code files.
Why pudb Is Awesome
The pudb interface requires less memorization of shortcut keys than pdb and is designed to display as much code as possible. It has most of the features of debuggers found in IDEs but can be used in a terminal. As the installation of this Python package is only a short call to pip away, you can quickly bring it into any environment. The next time you're stuck on the command line, check it out!
requests for Interacting With the Web
Martin Breuss is an author and course creator at Real Python. He works as a programming educator at CodingNomads, where he teaches bootcamps and online courses. Outside of work, he likes seatrekking, walking, and recording random sounds.
My number one pick for a Python package from outside the standard library is the popular requests package. It holds a special status on my computer since it's the only external package that I installed system-wide. All other packages live in their dedicated virtual environments.
I’m not alone in favoring
requests as the primary tool for Python web interactions: according to the
requests documentation, the package has around 1.6 million downloads per day!
This number is so high because programmatic interactions with the Internet offer many possibilities, whether that's posting your writing through a web API or fetching data via web scraping. But Python's standard library already includes the urllib package to help accomplish these tasks. So why use an external package? What makes requests such a popular choice?
requests Is Readable
The requests library presents a well-developed API that closely follows Python's aim to be as readable as plain English. The requests developers summarized that idea in their slogan, "HTTP for Humans."
You can use pip to install requests on your computer:
$ python -m pip install requests
Let’s explore how
requests holds up in terms of readability by using it to access the text on a website. When tackling this task with your trusty browser, you’d follow these steps:
- Open the browser.
- Enter the URL.
- Look at the text of the website.
How could you achieve the same outcome with code? First, you lay out the necessary steps in pseudocode:
- Import the tooling you need.
- Get the website’s data.
- Print the text of the website.
After clarifying the logic, you translate the pseudocode to Python using the requests library:
>>> import requests
>>> response = requests.get("")
>>> response.text
The code reads almost like English and is concise and clear. While this basic example isn't much more difficult to build with the standard library's urllib package, requests maintains its straightforward, human-centric syntax even in more complex scenarios.
In the next example, you’ll see that there’s a lot you can achieve with just a few lines of Python code.
requests Is Powerful
Let’s step up the game and challenge
requests with a more complex task:
- Log in to your GitHub account.
- Persist that login information to handle multiple requests.
- Create a new repository.
- Create a new file with some content.
- Run the second request only if the first one succeeds.
Challenge accepted and accomplished! The code snippet below does all the above tasks. All you need to do is replace the two strings "YOUR_GITHUB_USERNAME" and "YOUR_GITHUB_TOKEN" with your GitHub username and personal access token, respectively.
Note: To create a personal access token, click on Generate new token and select the repo scope. Copy the generated token and use it alongside your username to authenticate.
Read over the code snippet below, copy and save it to your own Python script, fill in your credentials, and run it to see requests in action:
import requests

session = requests.Session()
session.auth = ("YOUR_GITHUB_USERNAME", "YOUR_GITHUB_TOKEN")

payload = {
    "name": "test-requests",
    "description": "Created with the requests library"
}

api_url = "https://api.github.com/user/repos"
response_1 = session.post(api_url, json=payload)

if response_1:
    data = {
        "message": "Add README via API",
        # The 'content' needs to be a base64 encoded string
        # Python's standard library can help with that
        # You can uncover the secret of this garbled string
        # by uploading it to GitHub with this script :)
        "content": "UmVxdWVzdHMgaXMgYXdlc29tZSE=",
    }
    repo_url = response_1.json()["url"]
    readme_url = f"{repo_url}/contents/README.md"
    response_2 = session.put(readme_url, json=data)
else:
    print(response_1.status_code, response_1.json())

html_url = response_2.json()["content"]["html_url"]
print(f"See your repo live at: {html_url}")

session.close()
After you run the code, go ahead and navigate to the link that it prints out at the end. You'll see that a new repository was created on your GitHub account. The new repository contains a README.md file with some text in it, all generated with this script.
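As the code comments note, the content field must be Base64-encoded. That encoding is a one-liner with the standard library; a small sketch with my own sample text (not the string from the script above, so its secret stays safe):

```python
import base64

text = "Hello from the API"

# GitHub's contents API expects file content as a Base64 string
encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
print(encoded)

# round-trip back to the original text to verify
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)
```

Encoding your own README text this way and dropping the result into the data dictionary is all it takes to upload different content.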
Note: You might have noticed that the code authenticates only once but is still able to send multiple requests. This is possible because the requests.Session object allows you to persist information over multiple requests.
As you can see, the short code snippet above accomplishes a lot and still remains understandable.
Why requests Is Awesome
Python’s
request library is one of Python’s most widely used external libraries because it’s a readable, accessible, and powerful tool for interacting with the Web.
To learn more about the many possibilities of working with requests, check out Making HTTP Requests With Python.
parse for Matching Strings
Geir Arne Hjelle is an author and reviewer at Real Python. He works as a data science consultant in Oslo, Norway, and is particularly happy when his analysis involves maps and images. Away from the keyboard, Geir Arne enjoys board games, hammocks, and walking aimlessly into the forest.
I love the power of regular expressions. With a regular expression, or regex, you can search for virtually any pattern in a given string. However, with great power comes great complexity! Constructing a regex can take a lot of trial and error, and understanding the subtleties of a given regex is possibly even harder.
parse is a library that packs much of the power of regular expressions but uses a syntax that is clearer and perhaps more familiar. In short, parse is f-strings in reverse. You can use essentially the same expressions to search and parse strings as you use to format them. Let's have a look at how that works in practice!
Find Strings That Match a Given Pattern
You need some text that you want to parse. In these examples, we’ll use PEP 498, the original f-strings specification.
pepdocs is a small utility that can download the text of Python Enhancement Proposal (PEP) documents.
Install parse and pepdocs from PyPI:
$ python -m pip install parse pepdocs
To get started, download PEP 498:
>>> import pepdocs
>>> pep498 = pepdocs.get(498)
Using parse you can, for example, find the author of PEP 498:
>>> import parse
>>> parse.search("Author: {}\n", pep498)
<Result ('Eric V. Smith <eric@trueblade.com>',) {}>
parse.search() searches for a pattern, in this case "Author: {}\n", anywhere in the given string. You can also use parse.parse(), which matches the pattern to the complete string. Similarly to f-strings, you use curly braces ({}) to indicate variables that you want to parse.
While you can use empty curly braces, most of the time you want to add names to your search patterns. You can split out the name and email address of PEP 498 author Eric V. Smith like this:
>>> parse.search("Author: {name} <{email}>", pep498) <Result () {'name': 'Eric V. Smith', 'email': 'eric@trueblade.com'}>
This returns a Result object with information about the match. You can access all results of your search with .fixed, .named, and .spans. You can also use [] to get a single value:
>>> result = parse.search("Author: {name} <{email}>", pep498)
>>> result.named
{'name': 'Eric V. Smith', 'email': 'eric@trueblade.com'}
>>> result["name"]
'Eric V. Smith'
>>> result.spans
{'name': (95, 108), 'email': (110, 128)}
>>> pep498[110:128]
'eric@trueblade.com'
.spans gives you the indices in your string that match your pattern.
Use Format Specifiers
You can find all matches of a pattern with parse.findall(). Try to find the other PEPs that are mentioned in PEP 498:
>>> [result["num"] for result in parse.findall("PEP {num}", pep498)] ['p', 'd', '2', '2', '3', 'i', '3', 'r', ..., 't', '4', 'i', '4', '4']
Hmm, that doesn’t look very useful. PEPs are referenced using numbers. You can therefore use the format syntax to specify that you’re looking for numbers:
>>> [result["num"] for result in parse.findall("PEP {num:d}", pep498)] [215, 215, 3101, 3101, 461, 414, 461]
Adding :d tells parse that you're looking for an integer. As a bonus, the results are even converted from strings into numbers. In addition to :d, you can use most of the format specifiers used by f-strings.
You can also parse dates using special two-character specifications:
>>> parse.search("Created: {created:tg}\n", pep498) <Result () {'created': datetime.datetime(2015, 8, 1, 0, 0)}>
:tg looks for dates written as day/month/year. You can use :ti and :ta, as well as several other options, if the order or the format is different.
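For comparison, the standard library expresses the same day/month/year parse with an explicit strptime format string. This is my own sketch, not part of the parse API:

```python
from datetime import datetime

# Roughly what parse's ":tg" (day/month/year) specifier does for you,
# except here the format must be spelled out by hand.
created = datetime.strptime("01-08-2015", "%d-%m-%Y")
print(created)  # 2015-08-01 00:00:00
```

The two-character specifiers save you from writing, and remembering, format strings like this one.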
Access Underlying Regexes
parse is built on top of Python's regular expressions library, re. Every time you do a search, parse builds the corresponding regex under the hood. If you need to do the same search several times, then you can build the regex once up front with parse.compile.
The following example prints out all the descriptions of references to other documents in PEP 498:
>>> references_pattern = parse.compile(".. [#] {reference}")
>>> for line in pep498.splitlines():
...     if result := references_pattern.parse(line):
...         print(result["reference"])
...
%-formatting
str.format
[ ... ]
PEP 461 rejects bytes.format()
The loop uses the walrus operator, available in Python 3.8 and later, to test each line against the provided template. You can look at the compiled pattern to see the regex lurking behind your newfound parsing capabilities:
>>> references_pattern._expression
'\\.\\. \\[#\\] (?P<reference>.+?)'
The original parse pattern, ".. [#] {reference}", is more straightforward to both read and write.
Why parse Is Awesome
Regular expressions are clearly useful. However, thick books have been written to explain the subtleties of regexes. parse is a small library that provides most of the capabilities of regexes but with a much friendlier syntax.
If you compare ".. [#] {reference}" and "\\.\\. \\[#\\] (?P<reference>.+?)", then you'll see why I love parse even more than I love the power of regular expressions.
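That comparison can be made concrete with runnable code. Here is the same reference extraction done with the standard library's re module, writing by hand the style of regex that parse generates; note that the pep_excerpt lines below are made-up stand-ins for PEP 498's actual reference section:

```python
import re

# A few invented lines shaped like PEP 498's reference section.
pep_excerpt = """\
.. [#] %-formatting
.. [#] str.format
.. [#] PEP 461 rejects bytes.format()
"""

# The hand-written equivalent of the pattern parse compiled for us.
references_pattern = re.compile(r"\.\. \[#\] (?P<reference>.+?)$")

references = [
    match.group("reference")
    for line in pep_excerpt.splitlines()
    if (match := references_pattern.fullmatch(line))
]
print(references)
```

The result is the same list of descriptions, but every dot and bracket had to be escaped manually, which is exactly the bookkeeping parse hides from you.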
dateutil for Working With Dates and Times
Bryan Weber is an author and reviewer for Real Python and a professor in mechanical engineering. When not writing Python or teaching, he can most likely be found cooking, playing with his family, or going for a hike, and on good days, all three.
If you’ve ever had to do any programming with time, then you know the convoluted knots that it can tie you into. First, you have to deal with time zones, in which two different points on Earth will have a different time at any given instant. Then you have daylight saving time, a twice-yearly event in which an hour either happens twice or doesn’t happen at all, but only in certain countries.
You also have to account for leap years and leap seconds to keep human clocks in sync with the Earth’s revolutions around the Sun. You have to program around the Y2K and Y2038 bugs. The list goes on and on.
Note: If you want to keep going down this rabbit hole, then I highly recommend The Problem with Time & Timezones, a video explanation of some of the ways that time is hard to handle by the wonderful and hilarious Tom Scott.
Fortunately, Python includes a really useful module in the standard library called datetime. Python's datetime is a nice way to store and access information about dates and times. However, datetime has a few places where the interface isn't quite so nice.
In response, Python's awesome community has developed several different libraries and APIs for handling dates and times in a sensible way. Some of these extend the built-in datetime, and some are complete replacements. My favorite library of them all is dateutil.
To follow along with the examples below, install dateutil like this:
$ python -m pip install python-dateutil
Now that you've got dateutil installed, the examples in the next few sections will show you how powerful it is. You'll also see how dateutil interacts with datetime.
Set a Time Zone
dateutil has a couple of things going for it. First, it's recommended in the Python documentation as a complement to datetime for handling time zones and daylight saving time:
>>> from dateutil import tz
>>> from datetime import datetime
>>> london_now = datetime.now(tz=tz.gettz("Europe/London"))
>>> london_now.tzname()  # 'BST' in summer and 'GMT' in winter
'BST'
But dateutil can do so much more than provide a concrete tzinfo instance. This is really fortunate, because not until Python 3.9 does the Python standard library have its own ability to access the IANA time zone database.
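On Python 3.9 and later, the standard library's zoneinfo module covers this same ground. A minimal sketch of the stdlib equivalent, assuming IANA time zone data is available on your system:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # in the standard library as of Python 3.9

# The same "Europe/London" lookup, with no third-party dependency.
london_summer = datetime(2020, 7, 1, 12, 0, tzinfo=ZoneInfo("Europe/London"))
london_winter = datetime(2020, 1, 1, 12, 0, tzinfo=ZoneInfo("Europe/London"))
print(london_summer.tzname(), london_winter.tzname())  # BST GMT
```

On platforms without a system time zone database (notably Windows), the first-party tzdata package supplies the data.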
Parse Date and Time Strings
dateutil makes it much more straightforward to parse strings into datetime instances using the parser module:
>>> from dateutil import parser
>>> parser.parse("Monday, May 4th at 8am")  # May the 4th be with you!
datetime.datetime(2020, 5, 4, 8, 0)
Notice that dateutil automatically infers the year for this date even though you don't specify it! You can also control how time zones are interpreted or added using parser, or work with ISO 8601 formatted dates. This gives you a lot more flexibility than you'd find in datetime.
Calculate Time Differences
Another excellent feature of dateutil is its ability to handle time arithmetic with the relativedelta module. You can add or subtract arbitrary units of time from a datetime instance or find the difference between two datetime instances:
>>> from dateutil.relativedelta import relativedelta
>>> from dateutil import parser
>>> may_4th = parser.parse("Monday, May 4th at 8:00 AM")
>>> may_4th + relativedelta(days=+1, years=+5, months=-2)
datetime.datetime(2025, 3, 5, 8, 0)
>>> release_day = parser.parse("May 25, 1977 at 8:00 AM")
>>> relativedelta(may_4th, release_day)
relativedelta(years=+42, months=+11, days=+9)
This is more flexible and powerful than datetime.timedelta because you can specify intervals of time larger than a day, such as a month or a year.
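You can see the limitation directly: the standard library's timedelta tops out at weeks, so calendar units like months can only be approximated. A quick stdlib-only demonstration:

```python
from datetime import datetime, timedelta

may_4th = datetime(2020, 5, 4, 8, 0)

# timedelta's largest unit is weeks, so "a month from now" can only be
# approximated with a fixed number of days.
print(may_4th + timedelta(weeks=4))  # lands on June 1st, not June 4th

# Calendar-aware units simply don't exist here:
try:
    timedelta(months=1)
except TypeError:
    print("timedelta has no notion of months")
```

relativedelta fills exactly this gap by doing the calendar math for you.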
Calculate Recurring Events
Last but by no means least, dateutil has a powerful module called rrule for calculating dates into the future according to the iCalendar RFC. Let's say you want to generate a regular standup schedule for the month of June, occurring at 10:00 a.m. on Mondays and Fridays:
>>> from dateutil import rrule
>>> from dateutil import parser
>>> list(
...     rrule.rrule(
...         rrule.WEEKLY,
...         byweekday=(rrule.MO, rrule.FR),
...         dtstart=parser.parse("June 1, 2020 at 10 AM"),
...         until=parser.parse("June 30, 2020"),
...     )
... )
[datetime.datetime(2020, 6, 1, 10, 0), ..., datetime.datetime(2020, 6, 29, 10, 0)]
Notice that you don't have to know whether the start or end dates are Mondays or Fridays; dateutil figures that out for you. Another way to use rrule is to find the next time that a particular date will occur. Let's find the next time that the leap day, February 29th, will happen on a Saturday like it did in 2020:
>>> list(
...     rrule.rrule(
...         rrule.YEARLY,
...         count=1,
...         byweekday=rrule.SA,
...         bymonthday=29,
...         bymonth=2,
...     )
... )
[datetime.datetime(2048, 2, 29, 22, 5, 5)]
The next leap day on a Saturday will happen in 2048. There are a ton more examples in the dateutil documentation, as well as a set of exercises to try out.
Why dateutil Is Awesome
You've just seen four features of dateutil that make your life easier when you're dealing with time:
- A convenient way of setting time zones compatible with datetime objects
- A useful method for parsing strings into dates
- A powerful interface for doing time arithmetic
- An awesome way of calculating recurring or future dates
Next time you're going gray trying to program with time, give dateutil a shot!
typer for Command-Line Interface Parsing
Dane Hillard is a Python book and blog author and a lead web application developer at ITHAKA, a nonprofit organization supporting higher education. In his free time, he does just about anything but likes cooking, music, board games, and ballroom dance in particular.
Python developers often get their start in command-line interface (CLI) parsing using the sys module. You can read sys.argv to obtain the list of arguments a user supplies to your script:
# command.py
import sys

if __name__ == "__main__":
    print(sys.argv)
The name of the script and any arguments a user supplies end up as string values in sys.argv:
$ python command.py one two three
['command.py', 'one', 'two', 'three']

$ python command.py 1 2 3
['command.py', '1', '2', '3']
As you add features to your script, though, you may want to parse your script’s arguments in a more informed way. You may need to manage arguments of several different data types or make it more clear to users what options are available.
argparse Is Clunky
Python's built-in argparse module helps you create named arguments, cast user-supplied values to the proper data types, and automatically create a help menu for your script. If you haven't used argparse before, then check out How to Build Command Line Interfaces in Python With argparse.
One of the big advantages of argparse is that you can specify your CLI's arguments in a more declarative manner, reducing what would otherwise be a fair amount of procedural and conditional code.
Consider the following example, which uses sys.argv to print a user-supplied string a user-specified number of times, with minimal handling of edge cases:
# string_echo_sys.py
import sys

USAGE = """
USAGE: python string_echo_sys.py <string> [--times <num>]
"""

if __name__ == "__main__":
    if len(sys.argv) == 1 or (len(sys.argv) == 2 and sys.argv[1] == "--help"):
        sys.exit(USAGE)
    elif len(sys.argv) == 2:
        string = sys.argv[1]  # First argument after script name
        print(string)
    elif len(sys.argv) == 4 and sys.argv[2] == "--times":
        string = sys.argv[1]  # First argument after script name
        try:
            times = int(sys.argv[3])  # Argument after --times
        except ValueError:
            sys.exit(f"Invalid value for --times! {USAGE}")
        print("\n".join([string] * times))
    else:
        sys.exit(USAGE)
This code provides a way for users to see some helpful documentation about using the script:
$ python string_echo_sys.py --help

USAGE: python string_echo_sys.py <string> [--times <num>]
Users can provide a string and an optional number of times to print the string:
$ python string_echo_sys.py HELLO! --times 5
HELLO!
HELLO!
HELLO!
HELLO!
HELLO!
To achieve a similar interface with argparse, you could write something like this:
# string_echo_argparse.py
import argparse

parser = argparse.ArgumentParser(
    description="Echo a string for as long as you like"
)
parser.add_argument("string", help="The string to echo")
parser.add_argument(
    "--times",
    help="The number of times to echo the string",
    type=int,
    default=1,
)

if __name__ == "__main__":
    args = parser.parse_args()
    print("\n".join([args.string] * args.times))
The argparse code is more descriptive, and argparse also provides full argument parsing and a --help option explaining the usage of your script, all for free.
Although argparse is a big improvement over dealing with sys.argv directly, it still forces you to think quite a bit about CLI parsing. You're usually trying to write a script that does something useful, so energy spent on CLI parsing is energy wasted!
Why typer Is Awesome
typer provides several of the same features as argparse but uses a very different development paradigm. Instead of writing any declarative, procedural, or conditional logic to parse user input, typer leverages type hinting to introspect your code and generate a CLI, so that you don't have to spend much effort thinking about handling user input.
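The introspection trick is less magical than it sounds: Python exposes a function's type hints at runtime, so a framework can read them and cast raw argv strings before calling your function. Here is a toy sketch of the idea, not typer's actual implementation, with the echo and run names chosen purely for illustration:

```python
import inspect
import typing

def echo(string: str, times: int = 1) -> str:
    """Echo a string some number of times."""
    return "\n".join([string] * times)

def run(func, raw_args):
    # Read the hints off the function and use them to cast the raw
    # command-line strings before calling it.
    hints = typing.get_type_hints(func)
    params = list(inspect.signature(func).parameters)
    converted = [hints[name](raw) for name, raw in zip(params, raw_args)]
    return func(*converted)

print(run(echo, ["HELLO!", "3"]))  # the "3" arrives as an int, not a string
```

A real framework layers option parsing, help text, and error reporting on top, but the type-hint-driven casting is the core idea.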
Start by installing typer from PyPI:
$ python -m pip install typer
Now that you've got typer at your disposal, here's how you could write a script that achieves a result similar to the argparse example:
# string_echo_typer.py
import typer

def echo(
    string: str,
    times: int = typer.Option(1, help="The number of times to echo the string"),
):
    """Echo a string for as long as you like"""
    typer.echo("\n".join([string] * times))

if __name__ == "__main__":
    typer.run(echo)
This approach uses even fewer functional lines, and those lines are mostly focused on the features of your script. The fact that the script echoes a string some number of times is more apparent.
typer even provides the ability for users to generate Tab completion for their shells, so that they can get to using your script’s CLI that much more quickly.
You can check out Comparing Python Command-Line Parsing Libraries – Argparse, Docopt, and Click to see if any of those might be right for you, but I love typer for its brevity and power.
Conclusion: Five Useful Python Packages
The Python community has built so many awesome packages. Throughout this tutorial, you’ve learned about several useful packages that are alternatives, or extensions, to common packages in Python’s standard library.
In this tutorial, you learned:
- Why pudb might help you debug your code
- How requests can improve the way you communicate with web servers
- How you can use parse to simplify your string matching
- What features dateutil offers for working with dates and times
- Why you should use typer for parsing command-line arguments
We’ve written dedicated tutorials and sections of tutorials for some of these packages for further reading. We encourage you to dive deeper and share some of your favorite standard library alternatives with us in the comments!
Further Reading
Here are some tutorials and video courses that you can check out to learn more about packages covered in this tutorial:
Details
- Type:
Bug
- Status: Resolved
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: 2.2.4
- Component/s: karaf-shell
- Labels:
- Environment:
Java 1.6.0_29 on a Windows environment.
Description
When using the TAB auto-complete key after commands that do not take any Arguments/Options, a message is printed saying "Error executing command: -1".
The shell then becomes unresponsive and the TAB auto-complete option is inactive.
Activity
I think so, but it can happen on a Karaf command if the description attribute is not defined.
I will override the OsgiCommandSupport to define a default empty description. Anyway, the Karaf console (jline) should handle that, I will enhance it as well.
The problem doesn't occur on trunk (link to the enhancement around the shell namespace). I'm testing on karaf-2.2.x.
I'm not able to reproduce the issue.
Could you provide your command code, especially the annotated class and the blueprint declaration of the command ?
This issue is still not resolved.
Will attach the command code.
attached the command and a printscreen of the shell client.
This was reproduced on 2.2.5, 2.2.6 and 3.0.0.
How do you install the command? Do you use the blueprint namespace? Can you provide a full project?
Sure I can. basically it's an opensource project that you can check out using git from, or I can just send it to you directly.
just tell me what you prefer.
Thanks,
Adam
Checking out is ok. Can you provide some more informations where the sources are?
In the cloudify project, you will find a karaf based project called CLI. You will find everything you need there.
Please let me know if you need anything else.
Adam
I checked out the source and tried to compile the CLI project.
This is the error I get:
[ERROR] Failed to execute goal on project CLI: Could not resolve dependencies for project org.cloudifysource:CLI:jar:2.1.1: Failed to collect dependencies for [junit:junit:jar:4.8.2 (test), com.gigaspaces:gs-openspaces:jar:9.0.1-m1-6694-50 (provided), org.apache.karaf.shell:org.apache.karaf.shell.console:jar:2.2.5 (compile), org.slf4j:slf4j-api:jar:1.6.1 (compile), org.slf4j:slf4j-nop:jar:1.6.1 (compile), commons-io:commons-io:jar:2.0.1 (provided), org.hyperic:sigar:jar:1.6.5 (provided), org.codehaus.groovy:groovy:jar:1.8.5 (provided), org.apache.httpcomponents:httpclient:jar:4.1.1 (compile), org.apache.httpcomponents:httpmime:jar:4.1.1 (compile), org.codehaus.jackson:jackson-core-asl:jar:1.3.0 (compile), org.codehaus.jackson:jackson-mapper-asl:jar:1.3.0 (compile), org.apache.commons:commons-exec:jar:1.1.1-SNAPSHOT (compile), org.cloudifysource:dsl:jar:2.1.1 (provided), org.cloudifysource:rest-client:jar:2.1.1 (compile), commons-logging:commons-logging:jar:1.0.3 (provided), com.sun:deploy:jar:1 (system)]: Failed to read artifact descriptor for org.cloudifysource:dsl:jar:2.1.1: Could not transfer artifact org.cloudifysource:dsl:pom:2.1.1 from/to org.openspaces (repository.openspaces.org): Access denied to:, ReasonPhrase:Forbidden. -> [Help 1]
Moving to 3.0.1. If I can reproduce the error I will move it back to 3.0.0
I have added instructions in the git cloudify page on how to build the project, import it to the eclipse and debug it.
The reason you got the error is that you can't build the CLI by itself. It depends on another project (DSL) that has to be compiled first.
If you follow the instructions on git you should not have any problems. ant will do all the work for you.
Please let me know if you are still having problems with it.
Forgot to publish the link:
Just to confirm, this error is occurring only on new commands that don't have any options that you've specially created for Karaf? Because AFAIK every Karaf command has the --help option, which would mean this error would never occur for any built-in Karaf commands, correct?
I am trying to figure out the best way to keep objects in memory without having them scattered everywhere within the code.
For example: I have a PyQT menu system which interacts with objects. Currently, these menus are able to create and modify an object by hitting code outside of their menu-related files, but afterwards the objects are being held inside the menu objects themselves.
e.g.
from foo_obj import Foo

class Menu:
    foo = Foo()
    ...
Some objects that need to be accessed from multiple menus sit in their own file, where they are saved to a variable and imported into the "highest-level" file (main.py) so they aren't destroyed till the program terminates.
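That module-level pattern can be sketched in a few lines. The Foo class, app_state module, and menu function below are hypothetical, with the two "files" collapsed into one so the sketch runs:

```python
# app_state.py -- imagined module that owns the app's long-lived objects.
# Python caches imported modules in sys.modules, so every file that does
# `from app_state import foo` sees the same instance, and it survives
# until the interpreter exits.

class Foo:
    def __init__(self):
        self.clicks = 0

foo = Foo()  # created once, on first import

# settings_menu.py -- a menu mutates the shared instance via the module.
def on_button_pressed():
    foo.clicks += 1

on_button_pressed()
on_button_pressed()
print(foo.clicks)  # both "menus" saw the same object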
I'm sure this is not the best way to go about this. I have heard of MVC design, and I have tried to follow something like that where almost all controller logic (modifying the objects, conditionals, etc.) lives in its own "controller-like" files, and menu files are almost entirely views. Unfortunately, the issue still remains that I cannot find a good way to store these objects without creating some sort of abstract concept like object_container.py.
What is a good standard for this type?
Hi, I have this project for class in which I have to ask the user to input a file name, add 5 lines to it, close the file, open it again and add another 5 lines, then post everything to the screen. Here is my code:
import java.io.*;
import java.util.*;

public class siasJorge_project_03 {
    public static void main(String[] args) {
        PrintWriter outputStream = null;
        Scanner keyboard = new Scanner(System.in);
        String line = " ";
        int input = 0;
        int input2 = 0;
        String name = " ";

        System.out.println("Please enter the name of the file we will be working on, dont forget to add the extensiuon type...\n");
        name = keyboard.next();

        try {
            BufferedReader inputStream = new BufferedReader(new FileReader(name));

            System.out.println("Please enter 5 lines of text: ");
            for (input = 0; input < 5; input++) {
                line = keyboard.nextLine();
                outputStream.println(line);
            }
            inputStream.close();

            outputStream = new PrintWriter(new FileOutputStream(name, true));
            System.out.println("Please enter 5 more lines of text: ");
            for (input2 = 0; input2 < 5; input2++) {
                line = keyboard.nextLine();
                outputStream.println(line);
            }
            inputStream.close();

            line = inputStream.readLine();
            while (line != null) {
                System.out.println("The content of " + name + " is : " + line);
                line = inputStream.readLine();
            }
        } catch (FileNotFoundException e) {
            System.out.println("File " + name + " was not found");
            System.out.println(" or could not be opened. ");
        } catch (IOException e) {
            System.out.println("Error reading from file " + name);
        }
    }
}
The code compiles but crashes, giving me:

Exception in thread "main" java.lang.NullPointerException at siasJorge_project_03.main(siasJorge_project_03.java:27)
Any clue what I am doing wrong?
Thank you
26 March 2006 22:47 [Source: ICIS news]
SAN ANTONIO --
"Developments in biofuels in
If goals set by the government were achieved, biodiesel capacity in the country will increase to 10m tonne/year by 2015 from 6m tonne/year today, he added.
As the economy in
New projects in
The French government has already launched a large-scale project for the production of biofuels.
To help this development, the French government has in its initial draft for the 2006 budget taken further steps to bolster the research tax credit, which underpins its efforts to promote and support research and development in
The portion linked to total expenditures has been increased to 10% from 5%. The portion linked to the increase in spending has been set at 40%. The ceiling on the research tax credit has been raised to Euro10m from Euro6m, he added.
The industry now needs to move into the second tier of biofuels production, Girard said. "We need to go to the second generation [of biofuels]."
A French oil and chemicals major signed an agreement to develop new-generation biofuels with Finland's Neste Oil last year. This will be based on Neste Oil's proprietary NExBTL technology, in which high-quality biodiesel fuel is produced from renewable raw materials, such as vegetable oils and animal fats.
The next step would be to move to the next generation, tier three, and then to tier five, to enhance the quality of biodiesel and increase the usage of the fuel, possibly in the long-term to use for the production of downstream chemicals products.
This would include making biofuels from other natural resources, such as trees in the forests, Girard said on the sidelines of the 31st International Petrochemical Conference.
"About 30% of the growth of the forests remain in the forest every
In my iOS code in Xamarin I am getting a JSON string in response from the REST API, and its structure is like below.
{
    "base": "gdps stations",
    "id": 1269515,
    "name": "Jaipur",
    "weather": [
        {
            "description": "haze",
            "icon": "50n",
            "id": 721,
            "main": "Haze"
        }
    ],
    "wind": {
        "deg": 70,
        "speed": 2.1
    }
}
How can I use this string and extract the value for some key? Maybe I can save this string in a dictionary and use that to get the key value. (Please specify which namespace I need to use and what assembly I need to add.)
You should check out RestSharp from the component store:
With that said, Json.NET also comes in quite handy in combination with RestSharp, at least for starters...
Have a look at Introduction to Web Services, specifically the section titled "Options for consuming RESTful data".
@vishnusharma.8852 I would also recommend the Json.NET component. You could go the easy route using JObject.Parse(), or if your data format is static you can create a class that you can easily deserialize with JsonConvert.DeserializeObject.
The code below was created from your JSON.
I am new to .NET and C#. I updated my code to something like below:
using System;
using System.Drawing;
using System.IO;
using MonoTouch.Foundation;
using MonoTouch.UIKit;
using System.Collections.Generic;
using Newtonsoft.Json;
using System.Web;

// and in my class
public class WeatherReport
{
    public string name { get; set; }
}
But when I run the app I get an exception:
Could not load file or assembly 'System.Dynamic.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies.
I tried to add "System.Dynamic.Runtime" but there is nothing like this in Xamarin.
Did you add the component from the store? There are some discussions on a similar thread:
why the ?
I came to know that System.Web.Helpers.Json (in System.Web.Helpers.dll) is not part of MonoTouch or Mono for Android, so we cannot use it in our platform-specific projects. But there are several other options that can be used for this task:
1. System.Json from System.Json.dll; you would need to add the System.Json assembly to your project.
2. DataContractJsonSerializer from System.ServiceModel.Web.dll; add the System.ServiceModel assembly.
3. A third-party JSON library like JSON.NET for MonoTouch (namespace Newtonsoft.Json); download the code from this link, build it, and you will find the lib in the bin folder, then add the assembly to your project.
I used the third option; some of the code is below.
public string ObjToJSON(object oObject)  // signature reconstructed; the method name in the original post was lost
{
    string sJSON = "";
    sJSON = Newtonsoft.Json.JsonConvert.SerializeObject(oObject);
    return sJSON;
}

public WeatherReport JSONToObj(String JSONString)
{
    WeatherReport deseri = JsonConvert.DeserializeObject<WeatherReport>(JSONString);
    return deseri;
}
@RockMeAmadeus sorry, it was a typo. I'm from iOS, and there strings start with @.
Hello World!
So today we are going to do something really awesome: operationalize Keras with Azure Machine Learning. Why in the world would we want to do this? Well, we can configure deep neural nets and train them on GPU. In fact, in this article, we will train a two-hidden-layer neural network which outputs a linear prediction of the energy efficiency of buildings, and then operationalize that GPU-trained network on Azure Machine Learning for production API usage.
So Why GPU?
Well on Azure we can get low level access to a single K80 and 12 cores for only $1.33/hour. Beyond this, you get a series of choices. Below are those choices.
Ok, so what is the realistic performance difference? Well here is a pretty standard comparison chart.
Notice that tiny little blue sliver at 1x. the K80 is 25x faster than top of the line CPUs. In Azure you can get 2 physical K80s. This is fantastic; 50x performance.
Why Azure Machine Learning
Azure Machine Learning is a great way to operationalize your deep nets. Inference does not necessarily need GPU acceleration, however training does. Azure ML will provide an easy way to scale up and out as well as generate security tokens. Other nice things are the ability to tie directly into things like Stream Analytics. By taking advantage of the Azure PaaS platform, you can tie into various components quickly and easily as they are designed to plug into each other quickly and easily. By playing well with this platform it becomes significantly easier to add new components drastically reducing maintenance and time to market.
What is Keras?
Keras is a deep learning framework in which you can choose a Tensor Flow or Theano back end. I definitely prefer the Theano back end. Tensor Flow still does not work on Windows…Lame. I like to be able to do my normal day job while I do deep learning, so Keras + Theano it is for single GPU workloads (CNTK for larger). If you are looking for instructions for how to get Theano + Keras working on Windows 10, here you go.
The Keras documentation can be found here. I found it easier to get up and running with the Sequential model first. There is a ton of flexibility in the Sequential model and I have yet to need to go to functional.
Training The Model
So to train the model, it makes the most sense to start with some data. You can download the data we are going to use here.
The first thing we need to do is to load the data into our environment for training. Notice that we set y to be Heating Load, but we also drop out the Cooling Load column. We do this because Cooling Load and Heating Load likely should be predicted together. As both should likely be predicted together and are likely predictive of each other, we drop both out of our training data so that we aren’t “Cheating”. You will also notice that we load in our testing data. You should always split your data across training and testing.
import pandas as pd
import numpy as np

# Train data
ee_train = pd.read_csv('C:\\data\\EE_Regression_Train.csv')
ee_train.drop('Cooling Load', axis=1, inplace=True)
ee_y = ee_train['Heating Load']
ee_train.drop('Heating Load', axis=1, inplace=True)

# Test data
ee_test = pd.read_csv('C:\\data\\EE_Regression_Test.csv')
ee_test.drop('Cooling Load', axis=1, inplace=True)
ee_test_y = ee_test['Heating Load']
ee_test.drop('Heating Load', axis=1, inplace=True)
Next we build up the model graph. This code does not execute immediately, but rather builds a compute graph which when compiled generates all the partial derivatives and cuda code we need to train the model. If you have ever done this before by hand, it is painful and having a tool that can do this for us that works on Windows 10 is fantastic!
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense

dataLen = len(ee_train.columns.values)

model = Sequential()
model.add(Dense(256, input_dim=dataLen))
model.add(Activation('relu'))
model.add(Dense(256, input_dim=dataLen))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('linear'))
model.compile(optimizer='rmsprop', loss='mse')
Without getting too deep here, basically what we are doing is using the Sequential API in Keras for neural networks. Our first layer takes an input whose dimension is the number of data columns and feeds it into a 256-node fully connected layer. We then add a second 256-node fully connected layer. The final layer has a single node with a linear activation function. This allows us to produce regression-like outputs as opposed to classifier outputs. Finally, we use a gradient descent variant optimizer function, "rmsprop". We use mean squared error as our loss function.
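As a sanity check on the graph, you can count its trainable weights by hand. Assuming eight predictor columns remain after dropping the two target columns (as in the energy-efficiency data above), the tally works out as:

```python
# Each Dense layer holds (inputs * outputs) weights plus one bias per output.
input_features = 8  # assumption: 8 predictors remain after dropping the two targets

layer1 = input_features * 256 + 256  # first fully connected layer
layer2 = 256 * 256 + 256             # second fully connected layer
head = 256 * 1 + 1                   # single-node linear output

total = layer1 + layer2 + head
print(total)  # 68,353 trainable parameters
```

At roughly 68k parameters against a few hundred training rows, the capacity-to-data ratio also foreshadows the overfitting discussed below.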
The final step here is to transform the inputs into numpy arrays, transpose the y (it's a linear algebra thing), fit the model, and then see what our performance is like. Notice that we also transform our test data.
train_x = ee_train.as_matrix()
train_y = ee_y.as_matrix().transpose()
test_x = ee_test.as_matrix()
test_y = ee_test_y.as_matrix().transpose()

model.fit(train_x, train_y, validation_data=(test_x, test_y), nb_epoch=10000, batch_size=576)
train_y.var()
Run Some Code
If you are using a GPU w/ Theano backend, you should see a message similar to below after importing anything from Keras.
As soon as you perform "model.fit" you will see numbers begin flying across the screen. You will notice the loss and val_loss initially decreasing together. BUT OH NO, val_loss starts increasing while loss continues to decrease! This means our model has started overfitting, probably around epoch 5,000 or so. We need to simplify our data, and there is also categorical data to handle, etc. This article is not about perfecting a model, so we will move on; but this demonstrates the need for a validation set, and why having a GPU is so beneficial in this phase: if you were to do this on CPU, you would easily take 100x longer to perform the same task.
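Rather than eyeballing where val_loss turns around, you can codify the stopping rule; Keras's EarlyStopping callback is essentially a patience counter over the validation loss. A dependency-free sketch of that logic, with an invented loss history for illustration:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch to keep: the best one, once `patience`
    consecutive epochs have failed to improve on it."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            return best_epoch  # stalled long enough -- stop here
    return best_epoch

# Validation loss dips, then climbs as the model starts overfitting.
history = [40.0, 25.0, 18.0, 14.0, 13.1, 13.5, 14.2, 15.8, 17.0]
print(early_stop_epoch(history))  # epoch 4, where val_loss bottomed out at 13.1
```

In real Keras code the equivalent is passing an EarlyStopping callback to model.fit so training halts automatically instead of running all 10,000 epochs.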
Just to show it can be done, here it is with some very minor fine tuning…
Our variance in the target values is 102, but our mse (not rmse) is 12; so really, our error is in fact 3.46, FANTASTIC! This is actually not half bad. OK, we have a good one; let's operationalize this thing.
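The arithmetic behind that claim, spelled out: the reported loss is mean squared error, so the typical error in the target's own units is its square root, which you can then weigh against the spread of the labels:

```python
import math

mse = 12.0               # validation loss reported by Keras (mean squared error)
target_variance = 102.0  # variance of the Heating Load labels, from train_y.var()

rmse = math.sqrt(mse)
print(round(rmse, 2))  # typical prediction error in the target's units

# For scale, the labels themselves have a standard deviation of about 10.1,
# so an error of ~3.5 is a big improvement over just guessing the mean.
print(round(math.sqrt(target_variance), 2))
```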
Persisting a Keras Model
So, we need to persist this thing a little bit differently than the way you see on the docs page. I really wish Keras would persist like this out of the box. We are going to persist the model architecture and the model weights separately. The reason we do this is to support either GPU or CPU, as well as not be reliant on this crazy .h5 storage thing. We just want regular ol' out-of-the-box stuff with minimal dependencies.
So here we go…
##################
# Persist Model  #
##################
from keras.models import model_from_json
import pickle

model_json = model.to_json()
model_weights = model.get_weights()
pickle.dump(model_json, open('C:\\data\\EE_Keras_Model\\model_json.pkl', 'wb'))
pickle.dump(model_weights, open('C:\\data\\EE_Keras_Model\\model_weights.pkl', 'wb'))
So just to verify it all works locally before we move to the cloud, let's try creating a verification model and ensure it predicts the same values our trained model predicts.
# Test loading the persisted model
verify_weights = pickle.load(open('C:\\data\\EE_Keras_Model\\model_weights.pkl', 'rb'))
verify_model_json = pickle.load(open('C:\\data\\EE_Keras_Model\\model_json.pkl', 'rb'))
verify_model = model_from_json(verify_model_json)
verify_model.set_weights(verify_weights)
verify_model.predict(train_x)
model.predict(train_x)
Excellent! Everything looks good so far. Time to put this into Azure ML.
Zipping The Files
To get Theano + Keras + our model into Azure ML, we need to zip it all up. As always, it's not as simple as sticking it in a folder and zipping that. Who knows why, but it is what it is. You need to select all items in a flat view as shown below, zip those together, and name the resulting zip whatever you want.
I named the resulting zip file: “keras_theano_ee_model.zip”. I have uploaded some files so you can more easily do this:
Azure ML – Part 1 – Upload Data
I will not be doing an intro to Azure ML; I am assuming you know how to create a blank experiment. Begin by creating a new data set and uploading your files: click the NEW button in the bottom left corner and select Data Set.
Select the zip which contains keras + theano + the operationalized model.
Repeat this process with the .csv file used to train the model. This will be our data set for identifying the schema.
Azure ML – Part 2 – Create Experiment
- Create a new experiment.
- Drag and drop the .csv file into the work space
- Drag and drop the .zip file into the work space
- Drag and drop an “Execute Python Script” into the work space. Change the Anaconda/Python version to 4.0/3.5.
- Inside the python script add the following imports:
from keras.models import model_from_json
import pickle
Your experiment should look as below.
You should get a green checkbox. If you did, CONGRATULATIONS! You have theano + keras + your model in Azure ML. This is one of the harder parts. If not, the biggest thing to check is that you zipped your files EXACTLY as described in the zipping section.
Azure ML – Part 3 – Python Code to Operationalize Model
The code is almost exactly the same as we had in our experiment on our GPU box to load the model and test it. Here is the code within the Azure ML framework.
import pandas as pd
import numpy as np
from keras.models import model_from_json
import pickle

def azureml_main(dataframe1 = None, dataframe2 = None):
    dataframe1.drop('Cooling Load', axis = 1, inplace=True)
    dataframe1.drop('Heating Load', axis = 1, inplace=True)
    weights = pickle.load(open('./Script Bundle/model_weights.pkl', 'rb'))
    json = pickle.load(open('./Script Bundle/model_json.pkl', 'rb'))
    model = model_from_json(json)
    model.set_weights(weights)
    x = dataframe1.as_matrix()
    result = pd.DataFrame(model.predict(x))
    return result,
Notice how we drop the Cooling Load as well as the Heating Load in the Python code. We could have done this with a drag and drop module in the Azure ML workspace, which would have been more ideal as it will reduce our api input parameters. I’ll leave that to you to figure out :D.
Notice that we look into the /Script Bundle/ folder. All extracted files are dropped here. If a Python library is dropped here, it is usable immediately, just as we did with Keras. Numpy and pickle come out of the box. Let's take a look at the visualized results…
Producing a Web Endpoint
Drag and drop a Web Service Input & Web Service Output into the experiment and link up to the input and output of the python module as shown below and push the Deploy Web Service Button. You may need to run the experiment after dropping the web service modules into the work space so the deploy button becomes available.
Create your Name and select your pricing plan.
Try out the EndPoint Request Response Tester!
I used the dummy values and got back a bad value (of course). You now get all the goodness of Azure ML and the dev portals etc etc.
Suggested Next Steps
Try standing up API Management in front of the Azure ML API and add Stripe payment processing :D.
Pretty awesome stuff! Can’t wait to try it.
Hi David, I just used your brilliant post to operationalize a Keras model trained with the TensorFlow backend, and can confirm that this works just as well with TF as it does with Theano. The only thing I had to do differently was to add the “google” package to the zip, and of course replace Theano with the CPU version of TF.
I wonder how well this approach would work for live scoring in a production scenario. The Azure ML backend appears to persist the model in memory at least for a little while after making a prediction, and as long as the web service is kept "warm", it returns predictions within a few seconds. I would like to get the best of both worlds and take advantage of Keras/TF/XGBoost for modeling and Azure ML for web service endpoints, and it appears to work very well, but I'm a bit concerned about how well it would scale. What are your thoughts on this?
I’ve been working on operationalization via Service Fabric with a stateless C# ASP.NET Core web gateway and a guest executable Python program which listens to the Azure Service Bus. Versioning of the model weights is done via a remote lookup table. I’ll have full docs on this later; the example will be posted here as I am building it out:
Checking the unitary matrix of the quantum circuit on Qiskit
To create an application on a quantum computer, we have to create a quantum circuit, which is a combination of quantum gates. We usually start all the qubits in the 0 state, and after applying these quantum gates we finally get the result of the calculation by sampling the circuit.
Behind this process we have the “state vector” as the state of the qubits and the “unitary matrix” as the operation of the quantum circuit itself.
Sometimes we need to check what kind of operation these quantum gates implement. For small problems we can do this by inspecting the unitary matrix.
Now let's walk through the process using Qiskit, a popular quantum computing toolkit for Python.
The quantum circuit
The circuit looks like this:
import numpy as np
from qiskit import *
from qiskit import Aer
backend_sim = Aer.get_backend('qasm_simulator')
#prepare the circuit
circ = QuantumCircuit(1, 1)
# hadamard gate
circ.h(0)
# the measurement
circ.measure(range(1), range(1))
#job execution
job_sim = execute(circ, backend_sim, shots=1024)
result_sim = job_sim.result()
counts = result_sim.get_counts(circ)
print(counts)
We can only get the final result of the circuit, as counts:
{'1': 474, '0': 550}
Now we want to see the operation inside.
Getting the unitary matrix
import numpy as np
from qiskit import *
from qiskit import Aer
#Changing the simulator
backend = Aer.get_backend('unitary_simulator')
#The circuit without measurement
circ = QuantumCircuit(1)
circ.h(0)
#job execution and getting the result as an object
job = execute(circ, backend)
result = job.result()
#get the unitary matrix from the result object
print(result.get_unitary(circ, decimals=3))
We now get,
[[ 0.707+0.j 0.707+0.j]
[ 0.707+0.j -0.707+0.j]]
This is the unitary matrix of the quantum circuit. We can check the process looking at this matrix.
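As a cross-check independent of Qiskit, this is the Hadamard matrix H = (1/√2)[[1, 1], [1, −1]], and we can confirm with plain NumPy that it is unitary (H†H = I) and that it maps |0⟩ to the equal superposition behind the roughly 50/50 counts seen earlier:

```python
import numpy as np

# The matrix printed by the unitary_simulator above
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

# Unitary means U† U = I
print(np.allclose(H.conj().T @ H, np.eye(2)))   # True

# Applied to |0> = (1, 0): equal superposition, hence the ~50/50 counts
print(H @ np.array([1, 0]))                     # [0.70710678 0.70710678]
```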
import numpy as np
from qiskit import *
from qiskit import Aer
backend = Aer.get_backend('unitary_simulator')
# prepare 2 qubits
circ = QuantumCircuit(2)
circ.h(0)
circ.x(1)
job = execute(circ, backend)
result = job.result()
print(result.get_unitary(circ, decimals=3))
Now we have 2 qubits, and the unitary matrix grows to 4×4:
[[ 0. +0.j 0. +0.j 0.707+0.j 0.707+0.j]
[ 0. +0.j 0. +0.j 0.707+0.j -0.707+0.j]
[ 0.707+0.j 0.707+0.j 0. +0.j 0. +0.j]
[ 0.707+0.j -0.707+0.j 0. +0.j 0. +0.j]]
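This 4×4 matrix can be reproduced by hand. With Qiskit's little-endian qubit ordering, the gate on the higher qubit is the left factor of the Kronecker product, so the circuit's unitary is X ⊗ H. A NumPy sketch:

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)   # circ.h(0)
X = np.array([[0, 1],
              [1, 0]])                 # circ.x(1)

# Qiskit is little-endian: qubit 1's gate goes on the left of the product
U = np.kron(X, H)
print(np.round(U, 3))
```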
We can only check small unitary matrices on a classical computer; the matrix size grows exponentially with the number of qubits, so for larger circuits we cannot obtain it at all. It is therefore just a sanity check for small problems.
Threads and Delegates
Part of Standard Library Category
Example
import std.c.time;
import std.stdio;
import std.thread;

class Foo
{
    int _v;
    this(int n) { _v = n; }
    public int glob() { return _v; }
}

class WorkerThread
{
    int _v;
    this(int n) { _v = n; }

    // Here's a private member function - we'll call this from the run() method
    void identify() { writefln("WorkerThread(%d)", _v); }

    // When run is called via the delegate it will have the appropriate this
    // pointer, and so has access to all the fields and methods of its class.
    public int run()
    {
        for (;;)
        {
            sleep(2);
            identify();
        }
        return 0;
    }
}

// In the thread 2 case, we override the thread class, and provide the run method
// explicitly
class Thread2 : Thread
{
    int _v;
    this(int n) { _v = n; }

    public int run()
    {
        for (;;)
        {
            sleep(2);
            writefln("Thread2 %d", _v);
        }
        return 0;
    }
}

void bar(int delegate() a)
{
    writefln("rv = %d", a());
}

// In the case of thread 3, this will serve as the run method, and
// it will be called with the value of the second argument to the
// thread constructor (the documentation is not correct in this respect).
int x(void *p)
{
    int *ip = cast(int*) p;
    for (;;)
    {
        sleep(2);
        writefln("Thread3 %d", *ip);
    }
    return 0;
}

void main(char[][] args)
{
    // Before we get to threads, these first few lines just illustrate the
    // general ideas of delegates as applied to class member functions.
    Foo foo = new Foo(42);
    int delegate() dg = &foo.glob;
    int n = dg();
    writefln("n = %d", n);
    bar(dg);

    int n3 = 123;

    // Now the different ways of instantiating threads.
    WorkerThread wt = new WorkerThread(22);
    dg = &wt.run;
    Thread t1 = new Thread(dg);
    t1.start();

    Thread2 t2 = new Thread2(321);
    t2.start();

    Thread t3 = new Thread(&x, &n3);
    t3.start();

    sleep(10);
}
Sample Batch File
@echo off
set pgm=ThreadsAndDelegatesExample
dmd %pgm%.d
%pgm%.exe
pause
erase *.obj
erase *.map
Tested Environment
Tested with Digital Mars D Compiler v1.014 on Windows 2000.
Source
Based on "Threads, delegates, and all that" by teales. | http://www.dsource.org/projects/tutorials/wiki/ThreadsAndDelegatesExample | crawl-001 | refinedweb | 328 | 71.04 |
Ada Programming/Platform/VM/Java
Ada and the JVM
The Ada→J-Code compilers translate Ada programs directly to Java bytecode for the Virtual Machine. Both Ada programmers and Java programmers can use each other's classes almost seamlessly, including inheritance. Ada adds desirable language features like a strong base type system and full generics, Java VMs add garbage collection and a rich set of libraries.
“Almost every Ada 95 feature has a very direct mapping using J-code, and almost every capability of J-code is readily represented in some Ada 95 construct.”—Taft, 1996
There are (at least) two compilers that support this: AppletMagic and JGNAT. The following example assumes AppletMagic. Changes required to make them work with JGNAT should be minimal. Some are explained in due course.
Ada programs for JVMs (see for example Programming:J2ME) appear to be surprisingly smooth. Unlike creating, and sometimes using, a binding to some library, using Ada→J-code compilers will make binding issues disappear.
Naming Conventions
AppletMagic and JGNAT use naming conventions for deciding how to distribute compiled code across Java class files. When a (tagged) type together with its surrounding package should be mapped to one Java class file, AppletMagic requires the type name to have a known suffix, and otherwise to be the same as the package name:
package Foo is type Foo_Obj is tagged ... ... end Foo;
Similarly, JGNAT has some conventions that effectively turn type names into key words. These names tend to be short, and general, here is Typ:
package Foo is type Typ is tagged ... ... end Foo;
Java Uses References, So Does Ada, Then
The mapping of Java's primitive types to corresponding Ada types is straightforward. (The semantics of integer types differ between the languages when the values approach a subtype's limits. JVM integers can jump from positive to negative when adding in excess of the maximum integer, Ada programs should not permit this. The compiler docs will explain to what extent Ada semantics will be preserved on the JVM.)
For reference types—everything is allocated in the heap—there is another convention.
package Foo is type Foo_Obj is tagged null record; type Foo_Ptr is access all Foo_Obj'Class; procedure Op(x: access Foo_Obj); end Foo;
(JGNAT sources use Ref instead of Foo_Ptr.) The classwide pointers can be used just like Java references. Notice that primitive operations can have access parameters. This is a natural choice when interfacing to Java.
When Ada programmers wish to use Java objects in their programs, they will see the Java classes as normal packages, and the objects following the conventions outlined above. The compilers offer tools for creating Ada packages from Java .class files. Given the Java class Foo, the Ada package will be as show below.
public class Foo { public void op() {}; }
with java.lang; use java.lang; package Foo is -- pragma Preelaborate; -- uncomment where possible type Foo_Obj is new Object with null record; type Foo_Ptr is access all Foo_Obj'Class; function new_Foo(this : Foo_Ptr := null) return Foo_Ptr; procedure op(this : access Foo_Obj); pragma Import (Java, op); pragma Import (Java_Constructor, new_Foo); end Foo;
- new_Foo
- This is a constructor function corresponding to the default constructor of the Java type. (Usually you won't be concerned with default constructors.) There is a special convention identifier for constructors, Java_Constructor.
- pragma Import
- As both the constructor and the op method are defined in the Java class, they will be imported just like when importing C, or Fortran functions, using convention identifiers Java_Constructor, and Java, respectively.
Constructors
To be continued.
Further reading
- S. Tucker Taft (1996). Programming the Internet in Ada 95
- Cyrille Comar, Gary Dismukes, and Franco Gasperoni (1997). Targeting GNAT to the Java Virtual Machine
Advanced Namespace Tools blog
07 March 2018
History of ANTS, part 3
This post, 3rd in the series, will complete the story of the work done prior to the 2013 synthesis of disparate components into the first ANTS release.
Writable /proc/pid/ns
I got this idea from a 9fans post by Roman Shaposhnik in 2008 that occurred in a discussion of automounting. He wrote: "I would imagine that making '#p'/
When I was looking for something else to work on after I recovered from the intensity of creating the pipe-muxing code, I remembered that idea and thought I would try to implement it. It was my first in-kernel hacking project and I'm glad that I was naive enough to just dive right in. The implementation of namespaces in /sys/src/9/port/chan.c is more or less the heart of the system, and deciding to modify it was a bit presumptuous for a relative newcomer. Recognizing my limitations though, I decided to adopt as conservative approach to doing the modification as I could work out - rather than modifying the existing functions in chan.c to take an additional process parameter, I would just create a duplicate set of near-identical functions which would only be invoked via the /proc mechanism. Furthermore, for the implementation of invoking them from within devproc.c, I would copy-paste most of the code used by the standard userspace invocation of the bind/mount syscalls.
In general, copypaste style development is regarded as a poor approach, but I believe it is justified in some contexts. In particular, I desperately wanted to avoid breaking or destabilizing the kernel. The way to ensure that I didn't cause some kind of breakage that wasn't apparent in my tests was simply to avoid modifying the existing working codepaths. By using a copy-and-modify system for implementing my patch, I could guarantee that the new logic I was adding didn't change the behavior of any existing code. The modified code was entirely isolated to the execution path triggered by writing to the /proc/pid/ns files.
"Rootless" boot, plan9rc, and the admin namespace
This is what I think of as the "core idea" of ANTS - restructuring the boot process so that a self-sufficient environment independent of the main root filesystem is created and persists independently of the standard user namespace. It all got started because I was frustrated that I couldn't respecify the ip address of my venti server to fossil at boot time. The parameter telling the fossil fs where to dial was set in plan9.ini, and if that changed for some reason while the system was off, the system would boot, fossil would dial the wrong address and fail to be able to provide a root, and the kernel would panic. The only way to fix this was to boot from the livecd so that I could add the right data to plan9.ini.
This struck me as an unnecessary hassle - if you were tcp booting, you could enter the ip of the fileserver to dial at boot time, why wasn't there a similar ability to prompt for a venti address? So, I dove into the boot process and made a simple mod to let you set venti=ask in plan9.ini for a prompt. I also applied the same =ask mechanism to the sysname variable, because many things are run from /cfg/sysname so it seemed like a useful knob for parameterizing system behavior. This got me thinking about the boot process in general, and other issues that were problematic at the time - fossil unreliability (thankfully much improved since then).
It seemed to me unfortunate that if a system had a problem with the root fileserver - either because it was tcp booted and there was a network interruption, or because fossil ran into an issue - that even though the kernel was still working fine, the whole system had to be rebooted. If the kernel was still running properly, why couldn't we start new processes and spawn a new environment? With the standard namespace setup, however, this was impossible - every process on the box was fundamentally dependent on the file descriptor for the root fs, and if data stopped flowing through it, the whole system would freeze up into an unresponsive state.
But Plan 9 has independent per process namespaces, so if there was a way of creating a namespace that was truly independent of the main root, we could use it as an "escape pod" to keep working and rebuild a new environment. To make this work, the boot process had to be changed. Standard Plan 9 from Bell Labs (this was all before 9front was created) had the fileservers and factotum compiled into the kernel, and used C programs to control boot. What if we compiled rc into the kernel, and put boot under the control of rc, with the ability to drop into an interactive shell that wasn't attached to the disk or network root?
Getting this to happen was a lot of challenging and very educational work for me. Just compiling in rc and starting it was relatively simple - but the kernel alone didn't provide enough of an environment for rc to work properly. The namespace of kernel-devices-only was inadequate. What was needed was to create a workable mini-environment so rc could operate normally. I decided to start a ramdisk at boot, and make a skeleton fs with a workable minimal environment within it. This led to an intense period of trial-and-error as I learned exactly what parts of the namespace were essential. Some of my most satisfying moments in software development were when things started to actually work and I was able to boot a kernel and drop into an interactive rc session with a minimal set of standard tools and do things with no standard root fs at all.
I got excited by the possibilities, and realized that I could make the minimal boot namespace truly self-sufficient if I was able to get rio working in that environment, and allow the user to access the independent namespace via cpu. Once that was worked out, I realized that it would be very powerful to be able to create a subrio that would be inside the standard user namespace - this led to the creation of the "rerootwin" script, a tool I find very useful but which hasn't caught on with other users as much as I expected. Plan 9 has a nice mechanism to enter a new namespace, auth/newns, which uses a namespace file to rebuild the namespace entirely. Unfortunately, trying to use newns with the standard namespace file when you are cpu in remote doesn't work right - when you are cpu/drawterm in, your input/output is coming from devices bound from /mnt/term, and not only does standard ns file not know to bind from there, but /mnt/term won't even exist. The solution is one that I still think is quite clever - what is needed is to use srvfs of /mnt/term to create a mountable connection to the originating fs of the machine being cpu'd from, and then have a customized namespace file that mounts the main root from /srv and also mounts the srvfs of the /mnt/term and binds the devices in place. This is the "rerootwin" script which I find to be an essential component of working with multiple independent namespaces.
Release, and development hiatus
I announced both the writable ns mod and the rootless boot system to 9fans at the start of January 2010, and received little response other than skepticism that these mechanisms were useful. In the years since, I have learned that even though feedback from others can be invaluable, it is generally a mistake to be emotionally dependent on the reactions of other people to feel fulfilled or satisfied by one's own work. Due to the lack of demand for additional work on my projects, as well as other factors in life, I didn't really continue to build on what I had done. I played around with a few projects and ideas for the rest of 2010, but released nothing of note. Personal factors also started to supervene: I had remained close to family in adulthood, and my father's worsening health began to sap my energy and attention for other things. He died in early 2012, and the rest of that year I was focused on helping my mother in the aftermath, as well as moving through my own grieving process. The story of ANTS resumes in 2013.
Here’s a C program to count the number of digits in an integer with output and proper explanation. The program uses while loop.
# include <stdio.h>
# include <conio.h>

void main()
{
   int n, count = 0 ;
   clrscr() ;
   printf("Enter a number: ") ;
   scanf("%d", &n) ;
   while(n > 0)
   {
      count++ ;
      n = n / 10 ;
   }
   printf("\nThe number of digits is: %d", count) ;
   getch() ;
}
Output of above program
Enter a number: 52
The number of digits is: 2
Explanation of above program
In this program we are first inputting an integer, counting the total number of digits in that number and finally displaying the output to the user. Let’s see how the above program does this.
There are two variables - n to hold the number and count to count the number of digits in that number (n).
Next we are asking the user to enter a number and storing it in the variable n. After that our while loop calculates the number of digits in n.
To understand what is happening inside while loop it’s better to take an example. Suppose, user entered 52. The process of while loop with n = 52 is as follows -
- First n > 0 i.e. 52 > 0, so the program will enter the while loop.
- In the next step, value of count (initialized to 0 at the start of program) is incremented by 1 with the help of C’s auto incrementing operator (++). So after this step, value of count is 1.
- Now the number (n) is divided by 10 and result is stored again in n i.e. n = n / 10 or n = 52 / 10. Now as you know when you divide two numbers and store the result in an integer variable, only integer part of that result is stored in that integer variable not the fractional part. So when you divide 52 by 10, even when the correct result is 5.2 the new value of n will be 5 i.e. after this step n = 5 and count = 1.
- Now that all statements inside the while loop has been executed once, the looping condition is again checked.
- Again n > 0 i.e. 5 > 0, so the program will again enter the while loop and above steps are followed in the same manner, each time incrementing the value of count by 1 and dividing the number by 10.
- This process repeats, each time incrementing count and dividing n by 10, until n becomes 0. At that point the condition n > 0 fails, the loop exits, and count holds the total number of digits, which is then printed.
We can use log10 (logarithm base 10) to count the number of digits of positive numbers (the logarithm is not defined for negative numbers or zero).
Digit count of N = log10(N) + 1
munmap - unmap pages of memory
#include <sys/mman.h> int munmap(void *addr, size_t len);
The function munmap() removes any mappings for those entire pages containing any part of the address space of the process starting at addr and continuing for len bytes. Further references to these pages result in the generation of a SIGSEGV signal to the process. If there are no mappings in the specified address range, then munmap() has no effect.
The implementation will require that addr be a multiple of the page size {PAGESIZE}.
If a mapping to be removed was private, any modifications made in this address range will be discarded.
Any memory locks (see mlock() and mlockall()) associated with this address range will be removed, as if by an appropriate call to munlock().
The behaviour of this function is unspecified if the mapping was not established by a call to mmap().
Upon successful completion, munmap() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
The munmap() function will fail if:
- [EINVAL]
- Addresses in the range [addr, addr + len) are outside the valid range for the address space of a process.
- [EINVAL]
- The len argument is 0.
- [EINVAL]
- The addr argument is not a multiple of the page size as returned by sysconf().
None.
The third form of EINVAL above is marked EX because it is defined as an optional error in the POSIX Realtime Extension.
None.
mmap(), sysconf(), <signal.h>, <sys/mman.h>.
10 Oct 16:40 2013
Re: HTqPCR normalization issues - third posting
alessandro.guffanti@... <alessandro.guffanti@...>
2013-10-10 14:40:15 GMT
2013-10-10 14:40:15 GMT
Thanks, much appreciated ! It would be important for us to understand wether we are doing something fundamental wrong, or if there actually is a bug on the software (happens), because we are using heavily this package for validating NGS gene expression analysis findings.. Thanks you so much for the excellent work ! Keep in touch Alessandro & Elena On 10/10/2013 3:56 PM, James W. MacDonald wrote: > Hi Allesandro, > > I believe this package is still maintained, and it is unfortunate that > you have not received a reply. The expectation is that package > maintainers will subscribe (and pay attention) to the Bioc listserv, > but the list is fairly high traffic, so it never hurts to add a CC to > the maintainer as well (which I have done for you). > > Best, > > Jim > > > > On Thursday, October 10, 2013 8:35:06 AM, Alessandro Guffanti [guest] > wrote: >> >> Dear all, this is our third posting without a real reply so we wonder >> if this package is actually not maintained anymore ? if yes, it would >> be useful for us to know... >> >> >> We are using HTqPCR to analyze a set of cards which we trasformed in >> this format, which is accepted by HtQPCR: >> >> 2 Run05 41 Passed sample 41 ABCC5 Target 30 >> 3 Run05 41 Passed sample 41 ADM Target 31.3 >> 4 Run05 41 Passed sample 41 CEBPB Target 29.8 >> 5 Run05 41 Passed sample 41 CSF1R Target 31.2 >> 6 Run05 41 Passed sample 41 CXCL16 Target 26.9 >> 7 Run05 41 Passed sample 41 CYC1 Target 25.7 >> >> [...] 
>> >> The total number of files and groups is as follows - summarized in >> the file "Elenco_1.txt" which is used below: >> >> File Group >> 41.txt Sano >> 39.txt Sano >> 37.txt Sano >> 35.txt Sano >> 43.txt Sano >> 34.txt Sano >> 44.txt Sano >> 38.txt Sano >> 48.txt Sano >> 40.txt Sano >> 47.txt Sano >> 6.txt Non Responder DISEASE >> 26.txt Non Responder DISEASE >> 2.txt Non Responder DISEASE >> 69.txt Non Responder DISEASE >> 68.txt Non Responder DISEASE >> 5.txt Non Responder DISEASE >> 71.txt Responder DISEASE >> 3.txt Responder DISEASE >> 17.txt Responder DISEASE >> 1.txt Responder DISEASE >> 19.txt Responder DISEASE >> >> The comparison is DISEASE vs non DISEASE, but what leaves us >> dubious is the normalization part. >> Note that sample 41 is the *first* of the list. >> >> Here is the code up to the dump of the normalized values matrices: >> >> library("HTqPCR") >> path <- ("whatever/") >> files <- read.delim (file.path(path, "Elenco_1.txt")) >> files >> filelist <- as.character(files$File) >> filelist >> raw <- readCtData(files = filelist, path = path, n.features=46, >> type=7, flag=NULL, feature=6, Ct=8, header=FALSE, n.data=1) >> featureNames (raw) >> raw.cat <- setCategory(raw, Ct.max=36, Ct.min=9, replicates=FALSE, >> quantile=0.9, groups =files$Group, verbose=TRUE) >> >> s.norm <- normalizeCtData(raw.cat, norm="scale.rank") >> exprs(s.norm) >> write.table(exprs(s.norm),file="Ct norm scaling.txt") >> >> g.norm <- normalizeCtData(raw.cat, norm="geometric.mean") >> exprs(g.norm) >> write.table(exprs(g.norm),file="Ct norm media geometrica.txt") >> >> Now if we look at the content of the two expression value files, it >> looks like that the first column >> (corresponding to the first sample) is always unchanged, while all >> the others have been normalized. >> >> In this case the first dataset is sample 41 so you can check >> comparing between the corresponding column >> above and below what is happening. 
>> We do not include all the columns here; however, you can see that
>> all the samples *except the first (number 41)* have all their values
>> normalized.
>>
>> Ct norm scaling:
>>
>>          41    39          37          35          43          34          44          38
>> ABCC5    30    27.37706161 26.47393365 29.7721327  31.20189573 26.39260664 26.32436019 27.54274882
>> ADM      31.3  30.36540284 28.51753555 32.31241706 34.40473934 26.29800948 29.82796209 28.60208531
>> CEBPB    29.8  28.53383886 26.65971564 27.84151659 30.06540284 27.3385782  27.36597156 26.29080569
>> CSF1R    31.2  27.66625592 28.05308057 37.18976303 36.98767773 31.0278673  34.56255924 29.75772512
>> CXCL16   26.9  27.56985782 24.15165877 30.28018957 28.82559242 25.91962085 26.89251185 26.96492891
>>
>> Ct norm geometric:
>>
>>          41    39          37          35          43          34          44          38
>> ABCC5    30    27.73443878 26.93934246 29.88113261 30.76352197 26.51166676 26.8989347  27.49219508
>> ADM      31.3  30.76178949 29.01887064 32.4307173  33.92136694 26.41664286 30.47900874 28.5495872
>> CEBPB    29.8  28.90631647 27.12839047 27.94344824 29.64299633 27.46190571 27.96328103 26.24254985
>> CSF1R    31.2  28.0274082  28.5462506  37.32591991 36.46801611 31.16783762 35.31694663 29.70310587
>> CXCL16   26.9  27.92975172 24.57624224 30.39104955 28.42060473 26.03654728 27.47948724 26.91543574
>>
>> This looks odd: why does the first sample seem to be taken as a
>> 'reference' for both normalization methods, and hence is left
>> unchanged? This happens with ANY normalization procedure selected.
>>
>> Another (related?) oddity is that in the final differential analysis
>> result the same sample ID is always reported in the feature.pos
>> field, as you can see below:
>>
>>    genes   feature.pos  t.test        p.value      adj.p.value
>> 22 NUCB1   41           -1.998838921  0.077900837  0.251381346
>> 8  ERH     41           -1.958143348  0.091329532  0.251381346
>> 16 MAFB    41           -1.887142703  0.09421993   0.251381346
>> 28 RNF130  41           -1.904866754  0.099644523  0.251381346
>> 3  CEBPB   41           -1.853176708  0.103563968  0.251381346
>> 18 MSR1    41           -1.80887129   0.10432619   0.251381346
>>
>> Are we doing something wrong in the data input or subsequent
>> elaboration here? Can we actually trust these normalizations?
>>
>> Many thanks in advance - kind regards
>>
>> Alessandro & Elena
>>
>> -- output of sessionInfo():
>>
>> R version 3.0.1 (2013-05-16)
>>
>> [1] HTqPCR_1.14.0         limma_3.16.8    RColorBrewer_1.0-5
>> [4] Biobase_2.20.1        BiocGenerics_0.6.0
>>
>> loaded via a namespace (and not attached):
>> [1] affy_1.38.1           affyio_1.28.0   BiocInstaller_1.10.3
>> [4] gdata_2.13.2          gplots_2.11.3   gtools_3.0.0
>> [7] preprocessCore_1.22.0 stats4_3.0.1    zlibbioc_1.6.0
>>
>> --
>> Sent via the guest posting facility at bioconductor.org.
>>
>> _______________________________________________
>> Bioconductor mailing list
>> Bioconductor@...
>> Search the archives:
>
> --
> James W. MacDonald, M.S.
> Biostatistician
> University of Washington
> Environmental and Occupational Health Sciences
> 4225 Roosevelt Way NE, # 100
> Seattle WA 98105-6099

--
Alessandro Guffanti
| http://permalink.gmane.org/gmane.science.biology.informatics.conductor/50862 | CC-MAIN-2015-06 | refinedweb | 1,163 | 69.68 |
This sample shows how to integrate search into your application using the Google Web Service API. I will discuss doing this using VS.NET, which makes all this happen like magic, and also discuss doing this using the .NET Framework SDK. I have also integrated a browser into the application using the WebBrowser control, so after doing a search the user can right-click on a ListView item and use one of the context menu items "Goto Page" or "Cached Page". The Browser menu item "Hide" can be used to hide the browser and go back to the search screen.

The sample as it stands will not compile. In the file SearchService.cs you need to replace the line

myKey = <REPLACE WITH YOUR GOOGLE KEY>;

with a search key string, which you can obtain from Google.

The Web Service

Web services are described using a WSDL file (which is sort of an equivalent of IDL for web services), and the Google WSDL can be obtained from Google. If you are using VS.NET you can just right-click on the project and choose "Add Web Reference". In the popup which follows, you type the above URL, and once the contract has been loaded you click "Add Reference" and you are done. If you are using the SDK, you can run "wsdl.exe" with this URL as the first argument and it will generate a C# stub for you which you can use in your program.

The main methods of interest in the API are

doGoogleSearch(string key, string query, int startIndex, int maxResults, ....., string ie, string oe)
doGetCachedPage(string key, string url)

The arguments which I have marked with "..." allow more tweaking of the search, and you can get more information on them at the Google site. The last two arguments were meant for input and output encoding, but these are not used anymore: the web service expects input and sends output in UTF-8.

The SearchService class wraps these APIs and increases the maximum results returned per query. The Google API allows at most 10 results to be returned.
This class changes this by making multiple queries.

Integrating the Browser Control

To integrate the browser control in VS.NET, right-click on the ToolBox and choose "Customize ToolBox". Under the "COM Components" tab choose the "Microsoft Web Browser" (SHDocVw.dll). This process will add an IE icon under "General" in the toolbox. This icon can be dragged and dropped on your form and used like any other control.

Note that for ActiveX components to be used in forms, a wrapper is needed to create a Control apart from the COM interop layer. To do this manually using the SDK requires a few steps. You run

aximp c:\winnt\system32\shdocvw.dll

This will generate two assemblies: AxSHDocVw.dll (the ActiveX control wrapper for use in forms) and SHDocVw.dll (the interop assembly). The Forms designer seems to generate a lot of code for using the browser, but this is what I found is needed in a small C# program which I wrote.

using System.Windows.Forms;
using System;
using AxSHDocVw;

public class TestForm : Form
{
    public AxWebBrowser browser;

    public TestForm()
    {
        browser = new AxWebBrowser();
        browser.BeginInit();
        browser.Dock = DockStyle.Fill;
        browser.Visible = true;
        this.Controls.Add(browser);
        browser.EndInit();

        object a = string.Empty;
        object url = "about:blank"; // placeholder; the sample's original URL was dropped
        browser.Navigate2(ref url, ref a, ref a, ref a, ref a);
    }

    public static void Main(String[] args)
    {
        Application.Run(new TestForm());
    }
}

You can compile this program with

csc TestForm.cs /r:AxSHDocVw.dll /r:SHDocVw.dll

I have also integrated some ToolTip text so that when you hover over any ListView item you see a snippet of the page.
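That "fetch a page at a time until you have enough" wrapper is language-neutral. Here is a sketch in Python, for illustration only; `search_all` and `fake_search` are made-up names, not part of the Google API:

```python
def search_all(do_search, query, want, page_size=10):
    """Collect up to `want` results by issuing one page-sized query at a time."""
    results = []
    start = 0
    while len(results) < want:
        batch = do_search(query, start, min(page_size, want - len(results)))
        if not batch:          # source exhausted, stop early
            break
        results.extend(batch)
        start += len(batch)
    return results

# Stand-in for the web-service call: a fixed pool of 25 fake hits.
def fake_search(query, start, max_results):
    hits = ["%s-hit-%d" % (query, i) for i in range(25)]
    return hits[start:start + max_results]

print(len(search_all(fake_search, "csharp", 30)))  # -> 25 (only 25 exist)
print(len(search_all(fake_search, "csharp", 15)))  # -> 15
```

The real class would substitute the doGoogleSearch call for `fake_search` and pass the key along with each request.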
Integrating Google Search using the Google Web Services
| http://www.c-sharpcorner.com/UploadFile/mchandramouli/IntegratingGoogleSearch11242005033911AM/IntegratingGoogleSearch.aspx | crawl-003 | refinedweb | 617 | 66.44 |
Business Rules / C# and Visual Basic
The core framework of your application always tries to create a class with the name YourProjectNamespace.Rules.SharedBusinessRules whenever it receives a request to process data on the server. The framework interacts with the class instance in order to determine if any custom business logic is implemented in the application. This happens when data is selected, updated, inserted, deleted, and when custom actions are requested.
The name YourProjectNamespace is the namespace entered in the project settings. The default namespace is MyCompany.
You can enable shared business rules by selecting the Settings of your project and choosing the Business Logic Layer option. Toggle the check box in the Shared Business Rules section, then click the Finish button to save the changes.
If you generate the app, the class file SharedBusinessRules.cs (or .vb) will be created either in the ~/App_Code/Rules folder or in the ~/Rules folder of the application class library, depending on the type of your project.
Class SharedBusinessRules provides a convenient placeholder for the business logic that may apply to all data controllers of your web application.
Enter the following code in the code file of shared business rules.
C#:
using System;
using System.Data;
using System.Collections.Generic;
using System.Linq;
using MyCompany.Data;

namespace MyCompany.Rules
{
    public partial class SharedBusinessRules : MyCompany.Data.BusinessRules
    {
        [ControllerAction("", "grid1", "Select", ActionPhase.Before)]
        protected void TellTheUserWhatJustHappened()
        {
            if (!IsTagged("UserIsInformed"))
            {
                Result.ShowMessage(
                    "You just have requested data from '{0}' controller",
                    ControllerName);
                AddTag("UserIsInformed");
            }
        }
    }
}
Visual Basic:
Imports MyCompany.Data
Imports System
Imports System.Collections.Generic
Imports System.Data
Imports System.Linq

Namespace MyCompany.Rules

    Partial Public Class SharedBusinessRules
        Inherits MyCompany.Data.BusinessRules

        <ControllerAction("", "grid1", "Select", ActionPhase.Before)>
        Protected Sub TellTheUserWhatJustHappened()
            If (Not IsTagged("UserIsInformed")) Then
                Result.ShowMessage(
                    "You just have requested data from '{0}' controller",
                    ControllerName)
                AddTag("UserIsInformed")
            End If
        End Sub

    End Class
End Namespace
Save the file and navigate to any page of your web application that contains data. Here is what you will see at the top of the page if you view the Suppliers in the Northwind sample.
If you continue sorting, filtering, and paging the suppliers, the message will continue to be displayed at the top. If you start editing or select a supplier, the message from the child view will pop up.
We are using IsTagged and AddTag methods to display the messages once in the lifecycle of the page instance in the browser window. If the data view on the page is not tagged then the message is displayed and the tag UserIsInformed is assigned to it.
The message will disappear as soon as the user invokes any of the available data view actions.
Method TellTheUserWhatJustHappened is called every time the data is selected. This happens when the data is sorted, paged, changed, or if a filter is applied. Data view tagging ensures that the message will be sent to the client one time only.
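The tag check above amounts to a simple idempotence guard. A minimal sketch of the idea, in Python with made-up names rather than the actual framework types:

```python
class DataView:
    """Toy stand-in for a client-side data view with tag support."""

    def __init__(self):
        self.tags = set()
        self.messages = []

    def is_tagged(self, tag):
        return tag in self.tags

    def add_tag(self, tag):
        self.tags.add(tag)

    def on_select(self):
        # Fires on every sort/page/filter request, but messages only once.
        if not self.is_tagged("UserIsInformed"):
            self.messages.append("You just have requested data")
            self.add_tag("UserIsInformed")

view = DataView()
for _ in range(5):          # five selects: sort, page, filter, ...
    view.on_select()
print(len(view.messages))   # -> 1
```

The guard fires on every request but mutates state only on the first pass, which is exactly why the message shows a single time per page lifecycle.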
If you navigate away from the page and come back using the Back button of your web browser, the tag is restored and no message will be displayed. If you click ahead and arrive at a non-historical version of the page, the data view tags will be blank and the message will show up once, when the first attempt to select data is executed on the server.
The business logic implemented in this shared business rules example will be engaged for all data pages of your web application whenever the grid1 view of any controller requires data. This applies to an application with a single data page, and it is just as true in an application with three hundred data pages.
If you have existing controller-specific business rules, then make sure that their base class is changed from BusinessRules to SharedBusinessRules. If you create new controller-specific business rules, then the code generator will automatically specify the SharedBusinessRules class as their base. | http://codeontime.com/learn/business-rules/shared-business-rules | CC-MAIN-2018-22 | refinedweb | 646 | 57.67 |
The Telly Terminator
January 3, 2010
I received an interesting gift recently. It is a universal IR remote control which can be used to turn a large number of televisions/VCRs etc. on or off and it calls itself the Telly Terminator (strongly suggesting that the “off”, rather than the “on”, function seems to be the intended use of this little device).
Front and back side of the Telly Terminator (click for larger image)
The Telly Terminator is certainly not the only such product on the market. As far as I can tell the original is the TV-B-Gone which is even available in a hacker-friendly kit from Adafruit and has the honour of having its own tag on hackaday (furthermore, as far as I can see, the Telly Terminator has simply ripped its data from TV-B-Gone and I can’t help wondering if the European Database Directive might have something to say about this).
Like Mitch Altman, the inventor of the TV-B-Gone, I find it too easy to waste time with a television and so I don’t own one. For this reason I think the Telly Terminator was an apt gift and it works well. Also, having recently taken up a bit of a hobby in electronics, the Telly Terminator provided me with an excellent opportunity to “teardown” a very simple electronic toy. I wanted to know what it was doing and find out if its creators had simply ripped the TV-B-Gone data.
The USBee SX logic analyser
Several months ago I bought CWAV‘s USBee SX logic analyser. I had had some trouble getting Telit‘s GM862 GSM module and Maxstream/Digi‘s XBee module to communicate (when I was working on this project) and I bought the logic analyser to help solve the problem. As it turned out I solved the problem without the logic analyser and so my USBee ended up being a bit of a solution waiting for a problem. So this was another reason why I was eagre to have a poke around inside the Telly Terminator.
The USBee SX
Incidentally, although I like both the USBee SX and its accompanying USBee Suite, I can’t resist mentioning the alternative Saleae logic analyzer. I found it hard to decide which of these two logic analyzers to buy and I was tempted to buy the Saleae analyzer simply because the inventor writes an interesting blog discussing his experiences creating and selling it. In the end I opted for the USBee SX because I was more confident of software support going forward and because it can also act as a signal generator which, as far as I could tell, the Saleae Logic cannot.
Inside the Telly Terminator
Inside the Telly Terminator there is a single simple circuit board:
Inside the Telly Terminator (click for larger image)
The board runs off a CR2032 3V cell battery (smoothed by a large 47uF electrolyic capacitor). There are 3 LEDs: an IR LED which is used to transmit the on/off signals, a small red LED which flashes while the IR LED is transmitting (as a signal to the user) and a white (phosphor based, I think) LED which enables the user of the Telly Terminator to pass it off as a simple torch in the event that he/she is foolish enough to turn off the wrong television. The brains of the operation is a Helios H5A02HP chip (in a 16 pin DIP package). A bit of poking around with a multimeter reveals the circuit details:
Telly Terminator schematic (click for larger image)
We see that pin 10 controls the IR LED (and so is the pin we are most interested in). Pin 16 controls the small red LED and pin 1 is connected to the switch K1. The part of the circuit connected to pin 15 puzzled me a little. The transistor Q2 will be on at power-on when the capacitor C5 is empty, and so pin 15 will be grounded, however once C5 is charged enough, Q2 switches off and never switches on again (indeed C5 will continue to charge through R6 even after Q2 switches off and so the voltage at the base of Q2 will drop well below the base emitter junction voltage). After Q2 switches off, C4 will charge till it reaches the voltage at pin 15. This much is obvious, but the question I asked myself was, why? I showed this to a friend who has more experience with these things and he suggested the plausible explanation that pin 15 is the reset pin, it is active low, it is internally pulled up and that it is desirable to have reset triggered for a brief while at power-on till things have stabilised. So my best guess is that pin 15 is the reset pin. In any case, it doesn’t matter really and the rest of the circuit is straightforward.
Unfortunately Helios do not currently make the datasheet for their H5A02HP chip generally available [UPDATE: see link in comments below] and did not respond to an email I sent them asking for it. For this reason I was unable to probe the chip in as much detail as I would have liked. All that I could get my hands on was the following block diagram:
H5A02HP block diagram
Incidentally, Helios create the H5A02HP chip as a simple speech synthesis chip so it’s interesting to see it is the choice used for the Telly Terminator (presumably it was chosen because it was cost effective). It would be interesting to see exactly what the chip thinks it’s doing when sending the on/off signals.
I used my USBee SX to probe the chip in standard fashion by connecting its various digital lines to the pins of the H5A02HP chip, triggering off several different events (including the state of the IR LED). According to my USBee SX, only pins 1, 10 and 16 change their state. Pin 10 sends out the on/off signals and carries the data I am most interested in. Pin 1 is grounded when the push switch is pressed and pin 16 sends a signal at ~2Hz which is responsible for causing the small red LED to flash (oddly though this signal is not completely regular). I would really like to investigate using an oscilloscope but unfortunately I do not own one (yet).
Screenshot from USBee Suite showing data on pins 1, 10, 16, shown as digital 0, 1, 2 respectively (click for larger image)
If you count the orange groups of spikes in above screenshot from USBee Suite (digital 1, i.e. pin 10), you will find that there are 46. These are the 46 on/off signals that the Telly Terminator sends out when it is triggered. They are all equally spaced approximately 390ms apart. To get a better look at the data, we need to zoom in. Zooming in on just the first spike in the above diagram we can see this on/off signal in more detail (evidently, it is sent twice).
On/off signal 1 (click for larger image)
Zooming in further on the first two pulses in the above diagram we can see that they are themselves a series of high/low states, i.e. we can see the carrier signal.
Signal 1 carrier (click for larger image)
Finally we zoom in on the carrier signal and see that it is a simple repeated square wave at ~38kHz with a 50% duty cycle.
Signal 1 carrier close-up (click for larger image)
Having sniffed the IR data (i.e. the data sent out pin 10), the next step is to try and decode it, at least partially. Before we do this, I thought it might be useful to carry this out for an on/off signal from an IR remote control whose protocol specification is available.
An RC6 remote control
My laptop is a HP Pavilion dv8000 and it came with a remote control which uses an extension of the Philips RC6 protocol. In this case, the chip which is running the show is Renesas Technology‘s M34283G2 chip (clocked using a 4MHz (4.00L) ceramic oscillator). As far as I can tell, the protocol that the remote control uses is RC6 mode 6 with 32 data bits, which seems to be referred to both as RC6-6-32 and as MCE (a variation pioneered by Microsoft, I think).
RC6-6-32 on/off data (click for larger image)
The RC6 protocol has a carrier frequency of 36kHz and uses bursts of 16 periods to transmit data. The basic timing unit is thus 16/(36kHz) = 444.44(4)us and bits are encoded using Manchester encoding. The signal contains the following fields which I have marked in corresponding colours in the above screen shot from USBee Suite:
- Leader pulse
- 1 start bit
- 3 mode bits
- 1 trailer bit
- 32 data bits
The leader pulse and the start bit should always be as above. We read the data in the mode bits by remembering the Manchester encoding and find that the data is 110 in binary, i.e. 6 in decimal. This is expressed by saying that we have an instance of RC6 in mode 6. The trailer bit is a twice as long as a normal bit and contains the value 0. Finally this leaves the data bits, of which there are 32. In the above case these contain the data: 1000 0000 0001 0001 0000 0100 0000 1100 in binary or 0x8011040C in hex. Presumably these 32 bits of information can be broken down further into address/control/information fields etc. but I’m happy to leave things at this point. I can at least confirm that my laptop understands what 0x8011040C means since I accidentally switched it off using the remote the first time I was trying to sniff this signal using my USBee.
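To make the decoding rule concrete, here is a small Python sketch of my own (not from the protocol documentation); the helper name and argument convention are made up, and the on-then-off convention for a logical 1 follows the RC6 description above:

```python
# Basic RC6 timing unit: one half-bit is 16 carrier periods at 36 kHz.
unit_us = 16 / 36e3 * 1e6
print("half-bit: %.2f us" % unit_us)        # -> half-bit: 444.44 us

def manchester_decode(halves, one_is_high_low=True):
    """Turn a sequence of half-bit carrier levels (1 = carrier on,
    0 = carrier off) into logical bits.  RC6 sends a logical 1 as
    on-then-off, the opposite of RC5."""
    assert len(halves) % 2 == 0, "need an even number of half-bits"
    bits = []
    for i in range(0, len(halves), 2):
        first, second = halves[i], halves[i + 1]
        assert first != second, "invalid Manchester pair"
        bits.append(1 if (first == 1) == one_is_high_low else 0)
    return bits

# The three mode bits read off the screenshot: on/off, on/off, off/on.
mode_bits = manchester_decode([1, 0, 1, 0, 0, 1])
print(mode_bits, int("".join(map(str, mode_bits)), 2))   # -> [1, 1, 0] 6
```

The same helper, fed the 32 data half-bits, would recover the 0x8011040C payload discussed above.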
It is pretty easy to decode the above data for my laptop remote by hand just because there is so little of it. However I also wanted to decode the signals sent by the Telly Terminator a little so I wrote a quick python script to help analyse the data a little. The script reads the sampled data generated by the USBee, separates out the 46 different on/off signals and in each case calculates the carrier frequency, the total signal length, the carrier duty cycle and the timing data for the pulses of carrier signal in each case. For example, the script generates the following using the data from my laptop remote control.
Signal 0 : 36.17kHz, 36.921ms, 35%
On time (periods)    Off time (periods)
2.654ms (96.0)       0.885ms (32.0)
0.442ms (16.0)       0.443ms (16.0)
0.442ms (16.0)       0.443ms (16.0)
0.442ms (16.0)       0.889ms (32.1)
0.443ms (16.0)       0.881ms (31.9)
1.327ms (48.0)       0.889ms (32.1)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.885ms (32.0)       0.893ms (32.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.885ms (32.0)       0.893ms (32.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.442ms (16.0)       0.451ms (16.3)
0.885ms (32.0)       0.892ms (32.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.443ms (16.0)       0.450ms (16.3)
0.885ms (32.0)       0.446ms (16.1)
0.443ms (16.0)       0.889ms (32.1)
0.442ms (16.0)       0.451ms (16.3)
0.443ms (16.0)
The script has thus estimated the carrier frequency as 36.17kHz, the total signal length as 36.921ms and the duty cycle of the carrier as 35%. The script has also calculated that the first burst of carrier marking lasts for 2.654ms which is 96.0 periods of carrier frequency, this is followed by 0.885ms of space which is 32.0 periods of carrier frequency and so on.
Python source code
Although it’s not exactly a work of art (to say the least) I thought I might as well make the source code to the decoding script available.
import sys, csv

SPACE = 0
MARK = 1

class StreamWithPush:
    def __init__(self, data):
        self.data = data
        self.buffer = []

    def __iter__(self):
        return self

    def next(self):
        if len(self.buffer) == 0:
            return self.data.next()
        else:
            x = self.buffer[-1]
            del self.buffer[-1]
            return x

    def push(self, x):
        self.buffer.append(x)

    def push_l(self, l):
        self.buffer.extend(l)

class Signal:
    def __init__(self, frequency, duty_cycle, data):
        self.frequency = frequency
        self.duty_cycle = duty_cycle
        self.data = data

    def __str__(self):
        s = 'On time (periods)\tOff time (periods)\n'
        tm = 0
        assert len(self.data) % 2 == 1, 'Uh oh.'
        for i in range(len(self.data) / 2):
            s += '%.3fms (%.1f)\t\t%.3fms (%.1f)\n' % (
                self.data[2*i][1] * 1000,
                self.data[2*i][1] * self.frequency,
                self.data[2*i+1][1] * 1000,
                self.data[2*i+1][1] * self.frequency)
            tm += self.data[2*i][1] + self.data[2*i+1][1]
        s += '%.3fms (%.1f)' % (self.data[-1][1] * 1000,
                                self.data[-1][1] * self.frequency)
        return ('%.2fkHz, %.3fms, %d%%\n' % (self.frequency * 0.001,
                                             tm * 1000,
                                             int(self.duty_cycle * 100))) + s

def get_freq_dcycle(mark_data, sample_tm_ns):
    # TODO Check mark_data is consistent, i.e. all same frequency and duty cycle (or even same shape).
    tot = on = 0
    for (h, l) in mark_data:
        tot += h + l
        on += h
    return (len(mark_data) / (tot * sample_tm_ns * 1e-9), float(on) / tot)

def get_signals(pattern, sample_tm_ns):
    NEW_SIG_TM = 0.38  # I measure a gap of ~391ms between signals. TODO learn this from the data.
    signals = []
    i_pattern = iter(pattern)
    done = False
    while not done:
        signal_data = []
        mark_data = []
        l_mark = 0
        while True:
            try:
                packet = i_pattern.next()
            except StopIteration:
                signal_data.append((MARK, l_mark * sample_tm_ns * 1e-9))
                done = True
                break
            if packet[0] == SPACE:
                signal_data.append((MARK, l_mark * sample_tm_ns * 1e-9))
                l_mark = 0
                if packet[1] * sample_tm_ns * 1e-9 >= NEW_SIG_TM:
                    # End of this signal.
                    break
                else:
                    signal_data.append((SPACE, packet[1] * sample_tm_ns * 1e-9))
            else:  # packet[0] == MARK
                l_mark += packet[1] + packet[2]
                mark_data.append((packet[1], packet[2]))
        (frequency, duty_cycle) = get_freq_dcycle(mark_data, sample_tm_ns)
        signals.append(Signal(frequency, duty_cycle, signal_data))
    return signals

def markate(csv_data, channel):
    pattern = []
    marking = False
    l_not_marking = l_last_period = 0
    slippage = 15  # samples.
    sample_tm_ns = 333.333333333  # nanoseconds between samples. TODO learn this from data and verify all samples consistent.
    for row in csv_data:
        (tm, v) = (int(''.join(row[0].split('.'))), row[channel])
        if not marking:
            if v == '0':
                l_not_marking += 1
            else:
                marking = True
                pattern.append((SPACE, l_not_marking))
                l_not_marking = l_last_period = 0
                csv_data.push(row)
        else:  # marking == True
            if v == '1':
                for (n_ones, row) in enumerate(csv_data):
                    if row[channel] != '1':
                        break
                n_ones += 1
                buffer = []
                for (n_zeros, row) in enumerate(csv_data):
                    if row[channel] != '0':
                        csv_data.push(row)
                        break
                    if l_last_period > 0:
                        d = n_ones + n_zeros + 2 - l_last_period
                        if d > 0:
                            buffer.append(row)
                            if d > slippage:
                                csv_data.push_l(buffer)
                                n_zeros -= slippage
                                marking = False
                                break
                n_zeros += 1
                pattern.append((MARK, n_ones, n_zeros))
                l_last_period = n_ones + n_zeros
            else:  # v == '0'
                sys.stderr.write('Uh oh\n')  # Shouldn't happen and causes us to lose data.
                marking = False
                csv_data.push(row)
    return (sample_tm_ns, pattern[1:])  # Exclude first element which is just ('SPACE', 0).

def main(argv=None):
    if argv is None:
        argv = sys.argv
    if len(argv) != 3:
        sys.stderr.write('Usage: %s <usbee file.csv> <channel number>\n' % argv[0])
        sys.exit(1)
    csv_data = StreamWithPush(csv.reader(open(argv[1])))
    csv_data.next()  # Field headings.
    (sample_tm_ns, pattern) = markate(csv_data, int(argv[2]) + 1)
    signals = get_signals(pattern, sample_tm_ns)
    for (i, sig) in enumerate(signals):
        print 'Signal %d : %s\n\n' % (i, sig)

if __name__ == '__main__':
    sys.exit(main())
The Telly Terminator data
Using the above script, I decoded the data for the 46 separate on/off signals which the Telly Terminator sends. The results are here. Unsurprisingly, the Telly Terminator data looks like a rip of the North American TV-B-Gone data (rather than the European TV-B-Gone data). I spot checked the first five signals sent by the Telly Terminator and found that they match those sent by the North American TV-B-Gone in the same order and with the same repetitions. Not quite all signals match (indeed the TV-B-Gone sends 56 signals, 10 more than the Telly Terminator) but this is presumably the result of the Telly Terminator data having been ripped from an earlier version of the TV-B-Gone data.
Finally I thought I’d also mention that USBee Suite can be downloaded for free and so I might as well make the session file with my Telly Terminator data and my laptop’s remote control data available for download.
Pingback: Reverse engineering the Telly Terminator - Hack a Day
I took a closer look at the “Screenshot from USBee Suite showing data on pins 1, 10, 16, shown as digital 0, 1, 2 respectively (click for larger image)”. As it turns out, the visible LED is turning on for the entire duration of code transmission, then turning off. I counted 44 distinct codes in that screenshot, along with 44 distinct LED on/off events.
The open source TV-B-Gone kit blinks its LED in between code transmissions.
I stand corrected. It seems LED on that thing is active low. That means it is blinking the LED after each code is transmitted.
Interesting. I had thought that the manufacturers of the Telly Terminator had probably just ripped the IR codes. Based your observation that the blinking LED has the same behaviour as TV-B-Gone, this suggests they may well have taken the full sourcecode. It’s a pity I can’t get a datasheet from Helios for their H5A02HP as I might be able to extract the code from the chip using it (though even then I’d have to disassemble etc)
I am 99% confident that this came from the open source TV-B-Gone kit, version 1.0 source code. The exact same codes out of 56 in that source code are skipped over, as commented in that source code. (There was not enough room to fit ALL 56 of those codes, as designed to the ATtiny85. (Version 1.2 of the open source kit fits ~112 NA codes and ~112 EU codes, on the same ATtiny85 chip, through the use of compression.))
Congrats on the HackADay mention, I’ve posted this there as well:
This, I believe, is the datasheet you’re looking for.
Cheers
Hi Oliver,
I was just wondering if you could somehow put a link to my charity website either on facebook or (). I am asking purely on the basis that when people have been searching for my book (via my name), they have been hitting your telly-terminator page via 123 people. I would not normally be so brazen in my request but I am trying really hard to raise cash for Gt Ormond Street as they saved my life as a kid. So I’ve written a kids book and am producing an audio book to do that. No worries if you cannot. Anyway, I can only say good luck and it is nice to know another Oliver Nash is out there, Thanks a lot, Oli | http://ocfnash.wordpress.com/2010/01/03/the-telly-terminator/ | CC-MAIN-2013-20 | refinedweb | 3,337 | 61.56 |
This file is shift.def, from which is created shift.c. It implements the builtin "shift" in Bash.

$PRODUCES shift.c

#include <config.h>

#if defined (HAVE_UNISTD_H)
#  ifdef _MINIX
#    include <sys/types.h>
#  endif
#  include <unistd.h>
#endif

#include "../bashansi.h"
#include "../bashintl.h"
#include "../shell.h"
#include "common.h"

$BUILTIN shift
$FUNCTION shift_builtin
$SHORT_DOC shift [n]
The positional parameters from $N+1 ... are renamed to $1 ...
If N is not given, it is assumed to be 1.
$END

int print_shift_error;

/* Shift the arguments ``left''.  Shift DOLLAR_VARS down then take one
   off of REST_OF_ARGS and place it into DOLLAR_VARS[9].  If LIST has
   anything in it, it is a number which says where to start the
   shifting.  Return > 0 if `times' > $#, otherwise 0. */
int
shift_builtin (list)
     WORD_LIST *list;
{
  intmax_t times;
  register int count;
  WORD_LIST *temp;

  times = get_numeric_arg (list, 0);

  if (times == 0)
    return (EXECUTION_SUCCESS);
  else if (times < 0)
    {
      sh_erange (list ? list->word->word : NULL, _("shift count"));
      return (EXECUTION_FAILURE);
    }
  else if (times > number_of_args ())
    {
      if (print_shift_error)
        sh_erange (list ? list->word->word : NULL, _("shift count"));
      return (EXECUTION_FAILURE);
    }

  while (times-- > 0)
    {
      if (dollar_vars[1])
        free (dollar_vars[1]);

      for (count = 1; count < 9; count++)
        dollar_vars[count] = dollar_vars[count + 1];

      if (rest_of_args)
        {
          temp = rest_of_args;
          dollar_vars[9] = savestring (temp->word->word);
          rest_of_args = rest_of_args->next;

          temp->next = (WORD_LIST *)NULL;
          dispose_words (temp);
        }
      else
        dollar_vars[9] = (char *)NULL;
    }

  return (EXECUTION_SUCCESS);
}
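A quick illustration of the semantics this builtin implements, runnable in any POSIX shell:

```shell
set -- a b c d
shift           # default count is 1
echo "$1"       # prints: b
shift 2
echo "$1 $#"    # prints: d 1

# A count greater than $# fails without shifting (run in a subshell
# so a strict shell's error handling cannot abort this script).
( shift 5 ) 2>/dev/null || echo "shift count out of range"
```

This matches the C above: a zero count succeeds immediately, a negative or too-large count reports a range error and returns EXECUTION_FAILURE, and otherwise the parameters are renamed down one slot per iteration.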
Hi guys,
Well, it's World Cup time and it seems only fitting that I submit something related to that. I was intrigued by the whole "Group of Death" concept. Intuitively (for me), a group of death should be a very unlikely event - yet they are always found in the World Cup. On talking to a mathematician, I was fascinated to find that they are supposedly unsurprising and quite predictable. Unconvinced, I decided to have a stab at simulating it. Here it is:
/* * File: main.cpp * Author: daniel * Group of Death Simulation * * Created on June 15, 2010, 3:34 PM */ #include <stdlib.h> #include <vector> #include <algorithm> #include <iostream> #include <ctime> #include <cstdlib> using namespace std; int main(int argc, char** argv) { // variable to record total groups of death for each run int groupTotal = 0; // holds total for number of death groups of overall simulation int deathGroup = 0; // random seed - how good is this I wonder? srand(unsigned(time(NULL))); // array to populate the vector int box[32] = {0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3}; // vector to store and randomize values vector< int > group(box, box + 32); // three nested loops - outermost is number of simulations, inner one is groups, inner most is teams for(int i = 0; i < 10000; i++) { // randomize team rankings which are between 0 and 3 random_shuffle(group.begin(), group.end()); for(int j = 0; j < 8; j++) // number of groups { groupTotal = 0; for(int k = 0; k < 4; k++) // number of teams { groupTotal += group.back(); group.pop_back(); } if(groupTotal > 8) // threshold for group of death { deathGroup++; } } group.assign(box, box + 32); } cout << "Number of groups of death: " << deathGroup << endl; return (EXIT_SUCCESS); }
Now, after a fair bit of mucking around (with simulations at 100, 10000 up to 100000000 times) I get a figure around 98 - 99%!?!? Is something going drastically wrong somewhere. Surely it can't be that likely. I should add that my definition of a group of death is a total greater than 8. Thanks in advance.
Daniel
PS: Maths was never my forte, so please be gentle! | https://www.daniweb.com/programming/software-development/threads/290283/group-of-death-simulation | CC-MAIN-2017-17 | refinedweb | 368 | 61.16 |
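For what it's worth, the per-group probability under this exact setup can also be computed by brute force instead of simulation. The following Python check (mine, not part of the original post) enumerates all C(32,4) = 35960 possible groups:

```python
from itertools import combinations
from math import comb

# Pot of 32 team rankings: eight teams in each seeding band 0..3,
# exactly as in the simulation's `box` array.
pot = [0] * 8 + [1] * 8 + [2] * 8 + [3] * 8

# Enumerate every possible 4-team group and count those whose ranking
# total exceeds 8 -- the post's "group of death" threshold.
death = sum(1 for g in combinations(pot, 4) if sum(g) > 8)
total = comb(32, 4)

p = death / total
print("P(group of death) = %d/%d = %.4f" % (death, total, p))
print("Expected death groups per 8-group draw = %.3f" % (8 * p))
```

This gives P ≈ 0.1234 per group, i.e. roughly 0.99 expected groups of death per eight-group draw. Over 10,000 simulated draws the deathGroup counter should therefore land near 9,870, which, read against the iteration count, looks like "98-99%" even though each iteration can contribute up to eight groups.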
A Linq to Orcas
Okay, okay, so it’s early in the morning and I’m not funny. I get that. Back on 1 May, I wrote about creating some helper functions for using delgates to Find/FindAll on collection objects. Well, that’s fine and dandy for the .NET 2.0 framework, but I decided to kick the library that I’ve been dinking with up to 3.5. My goal? Remove some of the excess code and, when we move forward (rollout for this library isn’t planned until the fall—plenty of time for me to break things), have a framework that matches current .NET standards. On the first, I discussed that I could simplify:
StudentRoster studentsInGrade9 = new StudentRoster(roster.FindAll( delegate(StudentRecord x) { return x.Grade.Equals(“09”); }));
Down to:
StudentCollection studentsInGrade9 = new StudentCollection(collection.FindAll(StudentCollection.GetByGrade(“09”)));
By adding static method to the StudentCollection object.
public static Predicate<StudentRecord> GetByGrade(string grade) { return (delegate(StudentRecord x) { return x.Grade.Equals(grade); }); }
Dandy. But… that’s kinda annoying and it requires there to be a “GetBy” method for EVERY parameter that I might want to search by. That gets messy and requires maintenance. Why not, if we’re using Framework 3.5, use LINQ? Now, I’ll admit, this is my first venture into LINQ. Laugh, I can take it, but this is pretty cool to me. Also, a preface: many will wonder why I’m not using DLINQ and instantiating my database as an object using Metal or the little “I’m a DataSet, just cooler” designer tool. Two painful words: Oracle database. Until I can find a spiffy way to get either the designer or Metal to hit an Oracle database, I appear to be sunk. If there is an Oracle “HOWTO” out there, I’d REALLY appreciate a postup. Now, on to the fun. Similar to the example above, if we want to search for all students in the ninth (“09”) grade, we can assign our results to the new var keyword. What is ‘var’? It’s a strongly typed variable that takes on the type of whatever you first assign to it. Thus, if I type:
var i = 1;
i is of type Int32. If I try to pass it a string value (after it’s initialized), I’ll receive an error "Cannot implicitly convert type ‘string’ to ‘int’". The same goes for passing objects—it allows us to create a container that will accept whatever is passed to it and then strongly type it—avoiding the boxing and unboxing that occurs with objects. Cool beans.
var ninthGraders = from s in students where s.Grade == "09" select s; foreach (var s in ninthGraders) { Console.WriteLine(" --- {0}", s.FullName); }
If I wanted to search for a specific last name, first name, teacher, or whatever property there may be in our “StudentsCollection” object, I could replace .Grade and go from there. Now, what if I didn’t want the entire object returned? I’m just displaying the FullName property of the object—why return all that extra data?
var ninthGraders = from s in students where s.Grade == "09" select s.FullName; foreach (var s in ninthGraders) { Console.WriteLine(" --- {0}", s); }
Since our ‘var’ accepts whatever we’ve placed into it, it is now a collection of Strings rather than a collection of StudentRecord objects.
Inaccessible Due to its protection Level
greee
December 4th, 2008, 02:38 PM
I've read a bunch of posts indicating this is mostly about needing to have an other-than-private member... but I can't seem to make this work no matter what I make public... what am I doing wrong? I'm a newbie to 2008 .NET and C#! Any help is greatly appreciated.
using System;
using System.Text;
using System.IO;
using System.Web;
using System.Net;
using System.Collections.Specialized;
class Program
{
    public static void Main(string[] args)
    {
        // Set the 'Method' property of the 'Webrequest' to 'POST'.
        HttpWebRequest myHttpWebRequest = new HttpWebRequest();
        myHttpWebRequest.Method = "POST";
        Console.WriteLine("\nPlease enter the data to be posted to the () Uri :");
        // Create a new string object to POST data to the Url.
        string inputData = Console.ReadLine();
        string postData = "Request=" +.Close();
    }
}
BigEd781
December 4th, 2008, 02:46 PM
Could you make it clear which line is causing the problem?
dahwan
December 5th, 2008, 06:16 AM
And use Code blocks
cilu
December 5th, 2008, 06:44 AM
HttpWebRequest's constructor is protected. So you cannot do this:
HttpWebRequest myHttpWebRequest = new HttpWebRequest();
It should be something like this:
HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
greee
December 5th, 2008, 07:41 AM
Thanks cilu... that was it exactly, and it got me going and finished on this project...
I think at one point i tried this:
HttpWebRequest myHttpWebRequest = new HttpWebRequest(url);
But it didn't work, so I gave up on that... can you briefly tell me why, so I can get some direction on what I need to do more reading on?
Thanks for your help!
codeguru.com | http://forums.codeguru.com/archive/index.php/t-466619.html | crawl-003 | refinedweb | 290 | 60.72 |
I really do need help and am trying my hardest to learn. I hope I'm not annoying anyone too much here, anyways.....
I need to write a program that determines whether or not a number is prime, and either way it will say "XXX is / is not a prime number". I was able to get the program running exactly the way it's supposed to, but he wants us to use int Prime(void).
My question is: since I have the following program, how do I go about converting it to use the void function?
OK, there ya go, all help is welcome and I am at your mercy.
Code:
#include <iostream>
using namespace std;

int number;

int main()
{
    cout << "You have selected the prime option." << "\nEnter any number you wish to test.";
    cin >> number;
    if ((number % 2 == 0) || (number % 3 == 0) || (number % 5 == 0))
        cout << "Not a prime number. ";
    else if ((number % 2 != 0) || (number % 3 != 0) || (number % 5 != 0))
        cout << " " << number << " is a prime number. ";
    return 0;
}
-steve
Recently kusti8 published an extension for the Chromium browser that offers the same functionality. It also uses the kweb package (omxplayerGUI and youtube-dl-server). Unfortunately it doesn't work any more with the recent RPF chromium-browser version.
I have built a similar solution for Firefox, which can also be used for chromium-browser now (although it is less elegant than kusti8's solution).
Installation
1) You need the kweb package, if you don't have it installed already: viewtopic.php?t=40860
During installation you will be asked, if you want to install the github version of youtube-dl. It's important to answer with "y". If you missed that, you can install it later from kweb's application page.
Simple method:
(based on a solution for Firefox suggested by MartinLaclaustra):
In chromium right click into the bookmarks toolbar and select "Add Page". In the form set name to "PlayVideo" and paste the following into the URL field:
Code: Select all
javascript:(function(){var target_url=window.location.href;var ytsvr=" final=ytsvr.concat(encodeURIComponent(target_url));var myWindow=window.open(final,'_top')})();
Select "Bookmarks Toolbar" below and click "Save". You now should have a new button "PlayVideo".
That's all. You can skip steps 2) and 3) below.
Userscript method:
2) Start chromium-browser and select "Settings" from the menu. Select "Extensions". At the bottom there is a link "Download more extensions" (or similar, I'm on a German system). Click it. A new tab will open with the Chrome Web Store. Search for "Tampermonkey" and install it. Restart chromium-browser afterwards.
3) Go to extensions again, find the Tampermonkey extension and click on "Options". On Tampermonkey's menu bar click the icon on the left (it's the only one, the other menu items are displayed as text). The script editor will open with the "New Script" heading. Remove the default content and paste the following into the editor:
Code: Select all
// ==UserScript==
// @name         omxVideo
// @namespace
// @description  Play web video with omxplayerGUI
// @include      *
// @version      1
// @grant        GM_registerMenuCommand
// @grant        GM_getValue
// @grant        GM_setValue
// ==/UserScript==
GM_registerMenuCommand("Play Video", playvideo, "p");
GM_registerMenuCommand("Video Button on/off", togglebutton);
if (GM_getValue('playbutton', false) === true) {
    var input = document.createElement("input");
    input.type = "button";
    input.value = "Play";
    input.onclick = playvideo;
    input.setAttribute("style", "font-size:10px;position:absolute;top:100px;right:20px;");
    document.body.appendChild(input);
}
function playvideo() {
    var newuri = " + window.location.href;
    window.top.location.href = newuri;
}
function togglebutton() {
    if (GM_getValue('playbutton', false) === false) {
        GM_setValue('playbutton', true);
    } else {
        GM_setValue('playbutton', false);
    }
}
Click the "Save" icon (second from the left). If you now browse to any web page, the Tampermonkey icon will display the content shown in the attached image. It contains two new commands: "Play Video" and "Video Button on/off". In order to use them, the youtube-dl-server of the kweb package must be running (see below).
4) This step is optional, but it is the easiest way to use Chromium-Browser with the new feature.
Open a terminal to create a script:
Code: Select all
cd Desktop
nano chromium-omx
Enter the following into it (you can use right click and "Paste"):
Code: Select all
#!/bin/bash
ytdl_server.py > /dev/null 2>&1 &
chromium-browser
wget -O /dev/null
Press CTRL+o to save it and CTRL+x to leave the editor.
We have to make the script executable:
Code: Select all
chmod +x chromium-omx
This script will start the youtube-dl-server (running in the background), then chromium-browser and will stop the youtube-dl-server again, if you quit chromium.
Usage
Note: In order to use the extension, the youtube-dl-server must be running. There are different ways to do this:
1) Use the script created in step 4 above to start chromium-browser together with the server. This will also stop the server cleanly when you close the browser.
2) If you have started chromium-browser from the application menu, you can start the server separately in two ways:
Open a terminal and enter:
ytdl_server.py
( Clicking "Start Server" frome kweb's applications page will do the same)
Alternatively you can also start the omxplayerGUI frontend from the application menu. This will also start the server (and stop it, if you close omxplayerGUI frontend).
Now go to a video page on youtube (or any other supported website). If you are using the simple method, click "PlayVideo" in the bookmarks toolbar. If you are using the userscript method, select "Play Video" from the Tampermonkey menu. After a few seconds (2-3 on a RPi 3, but it may take a little bit longer for the first video) omxplayerGUI will be opened and start playing the video. The video in the web page will stop playing, because now it shows a very simple web page from the youtube-dl-server with a message "Playing: " followed by the title of the video and a "Go Back" button below. Click this button to return to the original video web page after you have finished watching the video.
The second user script command "Video Button on/off" gives you a faster way to start playing a video. If you click it and reload the page, a small "Play" button will appear on the top right side of (almost) every web page. Clicking this "Play" button will start the video player in the same way as using the menu command. To disable the button again, select the user script command "Video Button on/off" again (it's a toggle command).
Note: On some websites (vimeo.com, for example) the "Play" button may not be visible, because it is hidden by something else. In this case you have to use the menu command.
Note 2: The "Play" button will also appear within frames or iframes contained in a page. This may be irritating, but also has one advantage: if a web page uses an embedded video, the "Play" button inside the embedding frame can be used to start playing the video. A good example are the embedded videos in the Raspberry Pi Blog. Clicking the "Play" button on top of the page will not start any video, but clicking the "Play" button inside the video frame will work.
Stopping the youtube-dl server
If you have started chromium-browser with the script created in step 4 above, you don't have to do anything. If you have started it manually you can simply close the terminal window. Or you can stop it from inside chromium-browser:
go to " (add this as a bookmark!).
Using and Configuring omxplayerGUI
If you are new to omxplayerGUI you should start the frontend and click the "Help" button. This will open the omxplayerGUI manual in your preferred PDF viewer. It will give you all the information you need about using and configuring omxplayerGUI. There are really a lot of fine tuning options.
Updating youtube-dl
You should update youtube-dl from time to time. Do not use the "youtube-dl -U" command! You can do it in two ways:
1) Start a terminal and run
update-ytdl
(no sudo!)
2) Start kweb, click on "Applications". At the bottom ("Youtube-dl Tools") click the button "Update (git)".
Note: If kusti8's extension will be working again, you should prefer it. It starts and stops the youtube-dl-server automatically for you. | https://forums.raspberrypi.com/viewtopic.php?t=163018&sid=1e91a9c0f70eefe5881ff9a40ae179e1 | CC-MAIN-2022-21 | refinedweb | 1,231 | 64.3 |
Nokia Asha web apps - known issues
App installation
Problems while deploying test apps with Bluetooth Launcher
Note that the Bluetooth Launcher is only for Series 40; phones based on the Nokia Asha Software Platform 1.0 do not support it.
Get the latest Bluetooth Launcher:
If you have installed Nokia Xpress Browser to your phone () and still encounter an error note "Ovi Browser by Nokia was not found on your device..." when deploying a test app to the device over a Bluetooth connection, you can still test apps by following chapter 2.6.2 "Deploy by use of a short URL".
In this case you open Nokia Xpress Browser manually and enter the short URL as the page address. This will point the browser to your web app preview front page.
Nokia Asha Web App Tools
Release Notes
See release notes for possible known issues for Nokia Web Tools
- Nokia Asha Web App Tools 3.0 Beta releasenotes
- Nokia Web Tools 2.3 releasenotes
- Nokia Web Tools 2.0 releasenotes
- Nokia Web Tools 1.5 releasenotes
- Nokia Web Tools 1.2 releasenotes
- Nokia Web Tools 1.2.1 releasenotes (China servers version)
Preview server
Web app life span on the preview server
A Web App that is uploaded to the test server during development from Nokia Web Tools is removed after two weeks of inactivity. Note that this does not apply to Web Apps published to Nokia Store.
Web app size limit on the preview server
- Situation: When previewing a web app in WAS or uploading a web app to the test server, the message Upload failed - 10 Jun 2011 02:47 PM Error Details: Bad Request may be observed.
- Issue: There is a size limit on *.wgt files of 500Kb. Files of greater size will not be uploaded to the test server.
- Workaround: Reduce the size of the *.wgt file.
Web app download and execution
Downloadable content size limit in Nokia Asha web apps
Maximum size for downloadable content is 20 MB. Any file bigger than that will result in HTTP Error 413: Request entity too large.
Page expired - Reloaded automatically note
A session is initiated when you start a web app. If you do not use the web app for five minutes, the session expires, which causes the web app to be reloaded.
Device rendering
- Issue: web app UI elements may render incorrectly after backstepping to the app from an external URL. This is due an issue with translating CSS styles in this scenario.
- Workaround: No current workaround. Developers are advised to experiment with alternate CSS styles, if they encounter this issue.
APIs
XMLHttpRequest fails to follow HTTP 302 Redirect
If the web server redirects an XMLHttpRequest to another URL, the current client does not automatically load the new URL. The HTTP status code 302 can be handled in code, but the new location cannot be read from the response headers. No known workaround.
Geolocation API anonymous callback functions
Please note that anonymous functions cannot be passed to getCurrentPosition() as arguments.
//does not work
if(navigator.geolocation){
navigator.geolocation.getCurrentPosition(
function(position){
//use position
},
function(error){
//check error code
});
}
Correct way to obtain location is introduced in Nokia Asha web apps – W3C Geolocation API.
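As a minimal sketch of the working named-callback form (the typeof guard is an addition so the snippet also loads outside the phone runtime; the handler bodies are placeholders):

```javascript
// Named callbacks, declared up front and passed by reference;
// inline anonymous functions are rejected, as shown above.
function onSuccess(position) {
    // use position.coords.latitude / position.coords.longitude here
}

function onError(error) {
    // inspect error.code here
}

// Guarded so the sketch can also load outside the phone runtime.
if (typeof navigator !== "undefined" && navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(onSuccess, onError);
}
```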
Widget preferences
- Widget preferences values are not stored and retrieved correctly in some cases when deploying the app using the short URL method. As a workaround, use Bluetooth or USB deployment.
- Stored preference values should be under 256 characters long, and a single web app can use at most 500 keys.
For details please see Using widget.preferences in Nokia Asha Web Apps
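As an illustration of working defensively within these limits, a small pair of helpers (the helper names are invented here; widget is the runtime-provided object, guarded so the sketch also runs outside the runtime):

```javascript
// Defensive wrappers around widget.preferences, honoring the limits
// noted above (values under 256 characters). 'widget' only exists
// inside the web app runtime, so it is checked via typeof first.
function savePref(key, value) {
    if (typeof widget === "undefined" || String(value).length >= 256) {
        return false;  // nothing stored
    }
    widget.preferences[key] = String(value);
    return true;
}

function loadPref(key, fallback) {
    if (typeof widget === "undefined") {
        return fallback;
    }
    var v = widget.preferences[key];
    return (v === undefined || v === null) ? fallback : v;
}
```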
Base64 encoding and decoding
The native binary-to-ASCII btoa() and ASCII-to-binary atob() functions are broken. Use the JavaScript implementations freely available on the internet instead of the native functions. Be cautious when using 3rd party libraries, as they might use the native btoa or atob functions internally.
This issue is fixed in the Nokia Asha web apps runtime proxy server update deployed in April 2013.
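For example, a minimal pure-JavaScript Base64 encoder for single-byte input, the kind of drop-in replacement meant here (the function name is illustrative, and this is a sketch rather than a full btoa/atob substitute):

```javascript
// Pure-JavaScript Base64 encoder for strings of single-byte chars.
// Packs each group of three bytes into four 6-bit indexes into the
// Base64 alphabet, padding the tail with '=' as needed.
var B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function b64encode(s) {
    var out = "";
    for (var i = 0; i < s.length; i += 3) {
        var c1 = s.charCodeAt(i);
        var c2 = i + 1 < s.length ? s.charCodeAt(i + 1) : 0;
        var c3 = i + 2 < s.length ? s.charCodeAt(i + 2) : 0;
        out += B64.charAt(c1 >> 2);
        out += B64.charAt(((c1 & 3) << 4) | (c2 >> 4));
        out += i + 1 < s.length ? B64.charAt(((c2 & 15) << 2) | (c3 >> 6)) : "=";
        out += i + 2 < s.length ? B64.charAt(c3 & 63) : "=";
    }
    return out;
}
```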
Parsing XML document with namespace
<product xmlns:
<prd:title>Lawnmover</prd:title>
</product >
When executing a getElementsByTagName("title") query against the above XML file, the Simulator local preview returns the correct NodeList with 1 element. However, in Cloud Preview and on the device, zero nodes are returned.
As a workaround getElementsByTagNameNS('','title'); can be used in both environments.
CSS
CSS transitions
Animating multiple properties at once is not supported. For example following CSS property does not work in the device or in the cloud preview.
-webkit-transition-property: margin, height, width ; /*Does not work*/
Solution: Animate only a single property at a time.
For margin animation use explicit margin-top and margin-left instead of margin.
/* works (one property per rule) */
-webkit-transition-property: margin-top;
-webkit-transition-property: margin-left;
-webkit-transition-property: width;
-webkit-transition-property: height;
Text Overflow Mode
text-overflow-mode: ellipsis || clip, does not work in the current version of Nokia Asha web apps
Text inside an element having overflow: hidden; and width: 0px; is not rendered correctly. The text should be hidden completely, but instead it is drawn letter by letter vertically. As a workaround, make the element at least 1px wide.
CSS Background image issues
Background images are up- or down-scaled to match the element (e.g. DIV) size, not repeated. If padding is applied to the element, the background image is scaled to the element size without taking the padding into account. The scaled background image is then repeated to fill the whole element area.
background-position property does not work in the device, but it does on local and cloud preview.
Web App Simulator preview issues
Reloading
The reload button in the simulator toolbar restarts the web app based on the last version launched in the simulator. Using this button doesn’t include any changes in the web app code made in the Web Developer environment after the simulation was launched. This ensures consistence between the local and cloud previews - as the cloud preview will always be based on the last version of the web app uploaded to the server. To include any changes in code in the simulation, it must be run again from the Web Developer Environment.
Large Images are not scaled for the Simulator preview
Images served to the simulator in a cloud preview are the original ones in the original size. However, for the device, large images are resized by the proxy to fit the device screen. One must take this into account when previewing a web app in the simulator.
Secure callbacks when previewing apps on the simulator
- Situation: When submitting information over a secure (SSL) connection during web app testing, in WAS.
- Issue: Secure callbacks using SSL to servers via the MWL are currently not supported from the simulator. As a result secured data, such as Facebook user name and password, may not be secure when web apps are previewed in the simulator. This isn't an issue when using the app on a phone. As a result it's recommended that only test user accounts, instead of personal ones, be used during development and previewing of applications.
- Workaround: Use test accounts or change passwords after testing.
Phone based testing
Preference values do not work correctly when deploying apps via short URL
- Issue: Web apps launched from the short URL are unable to make use of the Widget API preferences attribute to save and restore data between sessions. This means that web app features making use of persistent data — such as preference settings — will not work when a web app is launched in this way during development.
- Workaround: Use the Bluetooth Launcher to launch the web app on a phone. Alternatively, you can open the original deployment URL in S40 device Nokia Browser (See image below)
Archived items
setItem() and getItem() methods in local previews
This issue is fixed in Web Tools 2.0
In a local preview, the widget preferences attribute doesn’t provide support for setItem() and getItem() methods. If you want to use preferences in a local preview, use the widget.preferences["key"] syntax instead. Alternatively, use a cloud based preview only with web apps using these methods.
Preview rendering
This issue is fixed in Web Tools 1.5
- Issue: Pages with vertical scrolling will cause a scrollbar to rendered in the Simulator. This will also cause horizontal scrolling, since the content is extended to a width larger than the window width.
- Workaround: Always try to test on Nokia Asha phones.
Issues after upgrades
Some users may encounter issues with the deployment, previewing, and uploading of web apps crashing after upgrading from 1.2 to 1.5.
This issue seems to be related to the memory available to Eclipse, in some installations. You should be able to fix the issue by setting MAXPermSize to a higher value. Locate the NokiaWDE.ini file in:
- C:\Program Files\Nokia Web Tools 1.5.0\Web Developer Environment\ under Windows.
- /applications/Nokia Web Tools 1.5.0/Web Developer Environment/ under Mac.
- /usr/local/NokiaWebTools-1.5.0/Web Developer Environment/ under Ubuntu.
Edit the ini file and look for:
--launcher.XXMaxPermSize
256m
Change 256m to 512m, save and restart WDE.
You should now be able to deploy, preview, and upload without issues.
You can check the current settings from the WDE About dialogue (from the Help menu click About NokiaWDE then in the dialogue click Installation Details and open the Configuration tab) as shown here:
Issues with non-standard WDE launches
In some cases, such as BAT execution of the WDE, the “--launcher.XXMaxPermSize” argument seems to be ignored. This results in similar issues to those described above. To fix such issues, in the NokiaWDE.ini file set the parameters to:
-vmargs
-Xms256m
-Xmx384m
-XX:PermSize=256M
Templates include incorrect code for page refreshing
This issue is fixed in Web Tools 1.5
A number of the web app templates provided in Nokia Web Tools 1.5 use the syntax <a href="javascript:window.location.reload();"> to implement a page refresh within the web app. This method results in the web app being switched to browser mode and no longer functioning as expected. This code should be replaced with <a onclick="refreshPageContent();"> to ensure the web app behaves as expected.
Hi All. In Windows 7 (And probably Win10 too) if you need to quickly open an additional Rhino program session
i.e. to open a 2nd model:
Move the mouse pointer down to the windows taskbar and middle click on the current running rhino toolbar/icon.
An additional rhino program will startup immediately which you can use to open a 2nd, 3rd etc. model.
This tip works for some other programs too.
Useful for Windows Explorer too to open multiple windows without going to desktop or start menu.
Also, Someone else on the forum posted a tip to use:_Run Rhino.exe
from the command line, which does the same job. Michael VS
Hi All. In Windows 7 (And probably Win10 too) if you need to quickly open an additional Rhino program session
That’s a great tip. Thanks.
If you’re in Windows 8 or 10, then you can go the start screen [Windows Key] and SHIFT-click on the Rhino icon.
Well, if your start screen looks like the following (Win8), you can just hit the Windows key and click on the Rhino icon.
–Mitch
I made a button: right click starts another instance of Rhino, left exports part of the design to a temp file and starts Rhino with that.
Right Button
_Run "C:\Program Files\Rhinoceros 5 (64-bit)\System\Rhino.exe"
Left Button.
!_-Export
_-pause
c:\temp\temp.3dm
_Run "C:\Program Files\Rhinoceros 5 (64-bit)\System\Rhino.exe c:\temp\temp.3dm"
Mark
Or this:
import rhinoscriptsyntax as rs

def OpenRhino():
    message = rs.MessageBox("Are you sure you want to open another session?", 4 + 32 + 256, "Rhinoceros 5.0")
    if message == 6:
        rs.Command("Run rhino.exe")
        print "Please wait while a new Rhino session loads...."

if __name__ == "__main__":
    OpenRhino()
An RMS file may contain more than one record with the same key. Dfindnk(C-3) returns successive records with the same key as the previous dfindk or dfindnk call. When no more records exist with the same key, -1 is returned. This function is useful for retrieving details for a master record, where the key of the detail record is the same as the master record or the detail key is stored in the master record. To do this, your program first calls dfindk to retrieve the first detail record and successively calls dfindnk until -1 is returned. If, in the cycle of dfindnk calls, the program decides to start looking somewhere else (a new key) or to start at the beginning of the key again, it can simply call dfindk again. Your program can read as many of the records with dfindnk as it needs, up through the last record. The following example illustrates dfindnk by reading the subscriber file and finding all of the subscriptions for each subscriber.
#include <stdio.h>
#include <cbase/dtypes.h>
#include <cbase/dirio.h>
#include "sub.h"
#include "script.h"
#include "mag.h"
int linecount = 0;
main (argc, argv)
int argc;
char *argv[];
{
long recno;
char *magdescr;
if ((sub = dlopen ("sub", "r")) == NULL) {
puts (derrmsg());
fatal ("can't open subscriber file\n");
}
if (drlist (sublist, sub) < 0L) {
puts (derrmsg());
fatal ("bad field list for subscriber file\n");
}
if ((script = dlopen ("script", "r")) == NULL) {
puts (derrmsg());
fatal ("can't open subscription file\n");
}
if (drlist (scriptlist, script) < 0L) {
puts (derrmsg());
fatal ("bad field list for subscription file\n");
}
if ((mag = dlopen ("mag", "r")) == NULL) {
puts (derrmsg());
fatal ("can't open magazine file\n");
}
if (drlist (maglist, mag) < 0L) {
puts (derrmsg());
fatal ("bad field list for magazine file\n");
}
/* loop through the subscriber file */
recno = dfind (&subbuf, sub);
while (recno > 0L) {
/* process subscriber record */
do_master (&subbuf);
/* get first subscription record */
strncpy (scriptbuf.subscriber,
subbuf.subscriber,
sizeof (scriptbuf.subscriber));
recno = dfindk (&scriptbuf, script);
/* process each subscription record */
while (recno > 0L) {
/* look up title for magazine
subscription */
strncpy (magbuf.magazine,
scriptbuf.magazine,
sizeof (magbuf.magazine));
if (dfindk (&magbuf, mag) < 0)
magdescr = "No such magazine"
else
magdescr = magbuf.title;
/* process subscription record */
do_details (&scriptbuf, magdescr);
/* get next subscription */
recno = dfindnk (&scriptbuf, script);
}
/* get next subscriber record */
recno = dfindn (&subbuf, sub);
}
fatal(NULL);
}
do_master (sb)
SubRec *sb;
{
headings();
printf ("\n%s %-35s\n", sb->subscriber,
sb->name);
linecount += 2;
footings();
}
do_details (sc, t)
ScriptRec *sc;
char *t;
{
headings();
printf ("%-14s%-36s %3d %s\n", "", t,
sc->issues, datetoa (sc->started));
linecount++;
footings();
}
headings()
{
if (linecount != 0)
return;
printf ("\n Subscription List\n");
printf ("Subscriber %-36s Issues Started\n\n", "Title");
linecount = 4;
}
footings()
{
if (linecount <= 63)
return;
printf ("\f");
linecount = 0;
}
fatal (p)
char *p;
{
if (p)
printf (p);
if (sub != NULL) {
dclose (sub);
sub = NULL;
}
if (script != NULL) {
dclose (script);
script = NULL;
}
if (mag != NULL) {
dclose (mag);
mag = NULL;
}
if (p)
exit (1);
else
exit (0);
}
Like sequential file reading, a program can read duplicate keys in reverse order (last key to first). The function dfindlk(C-3) returns the last record matching the key contents stored in the user record. This function is analogous to the dfindk function. Like dfindk, the program must place the key values in the user record prior to calling dfindlk. The record returned matches the last record that would be returned by calling dfindk and repeatedly calling dfindnk. A sample call to dfindlk is:
recno = dfindlk (&scriptbuf, sfp);
When a key search pattern has been established with dfindk or dfindlk, the function dfindpk(C-3) can be used to find a previous matching record. Typically, dfindlk would be used to find the last matching record and dfindpk would be called repeatedly to find previous matching keys. When all of the matching keys have been found, dfindpk returns -1. A sample call to dfindpk is:
recno = dfindpk (&scriptbuf, sfp);
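Putting dfindlk and dfindpk together, a reverse scan of the subscriptions for one subscriber might look like the following pseudocode-style sketch (it reuses the buffers and open files from the larger example above and is not compilable on its own):

```
/* Reverse scan: last matching record first. */
strncpy (scriptbuf.subscriber, subbuf.subscriber,
         sizeof (scriptbuf.subscriber));
recno = dfindlk (&scriptbuf, script);      /* last record with this key */
while (recno > 0L) {
        /* process scriptbuf here */
        recno = dfindpk (&scriptbuf, script);  /* previous match; -1 past the first */
}
```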
After calling dfindk or dfindlk, it is permissible to mix calls to dfindnk and dfindpk provided that the previous function call returned a valid record. If not, the functions return an error: "previous call canceled sequence". This error indicates that the functions dfindnk and dfindpk are reading a sequence of keyed records. When an error is encountered while reading records in this sequence (beginning of key values, end of key values, etc.), RMS abandons the current sequence. Your program must call dfindk or dfindlk to restart the sequence.
On January 13, 2002 09:39 pm, Alexander Viro wrote:
> On 13 Jan 2002, Eric W. Biederman wrote:
> > "H. Peter Anvin" <hpa@zytor.com> writes:
> > > This is an update to the initramfs buffer format spec I posted
> > > earlier. The changes are as follows:
> > Comments. Endian issues are not specified, is the data little, big
> > or vax endian?
> Data is what you put into files, byte-by-byte. Headers are ASCII.

In a perfect world we would settle on one of big or little-endian and byte-swap as appropriate, as we do with, e.g., Ext2 filesystems. However it seems that cpio in its current form has no concept of byte-swapping. Cpio(1) can neither generate nor decode a cpio file in the 'foreign' byte sex. So if we are determined to use cpio as it stands, then we are stuck with the goofy ASCII encoding, does that sum up the situation?

Too bad about that, otherwise cpio seems quite reasonable.

I just can't get over that ASCII encoding though, and I can't shake the feeling that relying on never having a file named TRAILER!!! is strange. It's gratuitous pollution of the namespace.

What was the reason for going with cpio again - so we can use standard tools? How hard would it be to fix cpio to get rid of the warts? What would we break? Is the problem that we would have to, ugh, go into user space or, eww, cooperate with non-kernel developers?

--
Daniel
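As an aside on the format under discussion: the ASCII "newc" cpio header is a 6-byte magic "070701" followed by thirteen 8-digit hex fields and the NUL-terminated name, with everything padded to 4 bytes, and the archive closed by a member literally named TRAILER!!!. A hypothetical Python sketch (field values simplified):

```python
# Sketch of one member of an ASCII "newc" cpio archive. All-ASCII
# header fields sidestep byte order entirely.
def newc_member(name, data):
    fields = [
        0,             # c_ino
        0o100644,      # c_mode (regular file)
        0, 0, 1,       # c_uid, c_gid, c_nlink
        0,             # c_mtime
        len(data),     # c_filesize
        0, 0, 0, 0,    # c_devmajor, c_devminor, c_rdevmajor, c_rdevminor
        len(name) + 1, # c_namesize (includes the trailing NUL)
        0,             # c_check (unused in newc)
    ]
    out = b"070701" + b"".join(("%08X" % f).encode("ascii") for f in fields)
    out += name.encode("ascii") + b"\0"
    out += b"\0" * (-len(out) % 4)           # pad header + name to 4 bytes
    out += data + b"\0" * (-len(data) % 4)   # pad file data to 4 bytes
    return out

# The archive ends with an empty member named TRAILER!!!
archive = newc_member("hello.txt", b"hi\n") + newc_member("TRAILER!!!", b"")
print(archive[:6])  # b'070701'
```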
Hi All,

Thanks to all who helped with my URL encoding problem, even though I had trouble getting it working :)

Here is a solution that I found, although it relies on the msxsl namespace.

<msxsl:script
<![CDATA[
function urlEncode(strURL) {
    return encodeURI(strURL);
}
]]>
</msxsl:script>

..that's it, and then down below I call it like this:

<xsl:value-of

One weird problem that I had was that I *had* to use the string function; I thought that (//parentCategory) would have been fine as is, but I was mistaken.

The script relies on version 5.5 of the Microsoft Script Engine.

Regards,
Serdar
Opened 11 months ago
Closed 10 months ago
#22448 closed Bug (fixed)
django test command runs wrong tests if test module has no tests
Description
The Django test command seems to behave incorrectly if passed a test module that has no tests.
For example, if passed test label foo.bar corresponding to module foo.bar and foo.bar has no tests, the command seems to discover and run all tests in foo.*, which is more than it should.
This behavior made it much harder to troubleshoot the fact that one of my test modules mistakenly had no tests. If given a module with no tests, Django should report back with a message like "0 tests found in ---" or simply run no tests.
Change History (11)
comment:1 Changed 11 months ago by cjerdonek
- Cc chris.jerdonek@… added
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 11 months ago by prestontimmons
comment:3 Changed 11 months ago by cjerdonek
Thanks for the response. No, I'm not using a custom test runner. But it looks like this was fixed in 1.7? I'm using 1.6 as the bug report indicates. And indeed, I don't have the fix when I manually checked the Django code I'm using.
comment:4 Changed 11 months ago by prestontimmons
Although we added the test case in the 1.7 cycle, this wasn't a bug in 1.6 that I'm aware of. That's the version I'm using to try to reproduce what you're seeing.
I'm using the following layout to test:
foo/
    __init__.py
    bar.py
    car.py
The bar.py file is empty and car.py has:
import unittest

from django.test import TestCase

class SampleTest(unittest.TestCase):
    def test_one(self):
        assert True

class DjangoTest(TestCase):
    def test_one(self):
        assert True
Running python manage.py test foo.bar returns:
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
Is this a correct interpretation of the scenario you're describing?
comment:5 Changed 11 months ago by cjerdonek
Thanks for putting the test case together! I ran it and got 0 tests, too. But then I realized it's not discovering tests because the filenames don't match the default test pattern. When I renamed the files to test_bar.py and test_car.py, I was able to reproduce the problem. (Sorry, I probably should have said foo.test_bar in my original post above.)
comment:6 Changed 11 months ago by prestontimmons
- Triage Stage changed from Unreviewed to Accepted
Yes, I can reproduce this now. That explains why the existing test reports back without error as well.
comment:7 Changed 11 months ago by prestontimmons
After some further testing I realized this is fixed in 1.7. That's because we now only run discovery if the test label is a module or directory.
comment:8 Changed 10 months ago by cjerdonek
So just to clarify, what changes need to be made in what versions? From the comments, it seems the test needs to be fixed in 1.7 so that it has a chance of failing, and the bug can be fixed in versions prior to 1.7.
(By the way, I don't quite understand the explanation for why it's working in 1.7. In the test case, the test label is a module, which you say will cause discovery to be run. But you also said that Python's unittest discovery is what has the bug?)
comment:9 Changed 10 months ago by carljm
I don't think the severity of this bug justifies backporting, so the relevant question is what still needs fixing in 1.7.
comment:10 Changed 10 months ago by prestontimmons
Sorry for not being clear. I meant package, not module.
No change is needed for 1.7. The behavior described was fixed at the same time as #21206.
When the test runner is given a label, two things happen.
1) Tests are first loaded using unittest loadTestsFromName
2) If none are found, and the path is a package or folder, unittest discovery is run.
Before #21206 we didn't check if a path was a package or folder. That's what exposed the buggy unittest discovery behavior.
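The first of those steps can be sketched with the standard library alone. The module name below is made up for illustration; an empty module yields an empty suite, which is exactly the case where the runner then fell back to discovery:

```python
import sys
import types
import unittest

# A stand-in for a test module that defines no tests
# (the module name is made up for this sketch).
empty_mod = types.ModuleType("fake_empty_tests")
sys.modules["fake_empty_tests"] = empty_mod

loader = unittest.defaultTestLoader

# Step 1: loadTestsFromName imports the module and finds no
# TestCase classes, so the resulting suite is empty.
suite = loader.loadTestsFromName("fake_empty_tests")
print(suite.countTestCases())  # 0
```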
comment:11 Changed 10 months ago by prestontimmons
- Resolution set to fixed
- Status changed from new to closed
Hi Chris,
The behavior you describe is a bug in how Python's unittest discovery works, but it isn't an issue with Django's test runner than I'm aware of.
There's even a test to ensure that:
Are you using a custom test runner by any chance?
I can't reproduce the problem in my own testing. If you can give me a case where it discovers incorrectly, I'll take another look. | https://code.djangoproject.com/ticket/22448 | CC-MAIN-2015-11 | refinedweb | 804 | 75.71 |
Using Sage with TensorFlow
I've written something in TensorFlow that makes use of some nice group theory functions that work very easily in Sage (and seem prohibitively difficult to code from scratch). However, I can't get TensorFlow and Sage to work together. Each works on its own, but I think they rely on different Python versions and therefore won't run together. I think Sage uses Python 2.6 and TensorFlow uses Python 2.7.
Specifically, I can make a small Python script test.py that uses some Sage functions and run it using
sage --python test.py
and it runs with no problem. But trying to import the TensorFlow module in test.py throws an error saying the tensorflow module doesn't exist. Similarly, I get errors trying to import sage.all inside my .py script that uses TensorFlow. So I can neither add TensorFlow to Sage nor add Sage to Tensorflow.
I first encountered this problem in Sage 6.10 and upgrading to Sage 7.0 hasn't helped.
I'm not sure if this is relevant, but if I fire up normal Python (the kind TensorFlow uses), I get this:
from sage.env import SAGE_LOCAL
SAGE_LOCAL
which outputs
'$SAGE_ROOT/local'.
However if I fire up Sage first I get this:
sage: SAGE_LOCAL
which outputs
'/usr/lib/sagemath/local'.
Any possible workaround?
Thanks!
Note: also asked at...
Note: to display code blocks, indent them by 4 spaces. You can also select full lines and click the "code" button, which is the button with '101 010'.
The Python version shouldn't be the problem: | https://ask.sagemath.org/question/32743/using-sage-with-tensorflow/ | CC-MAIN-2018-26 | refinedweb | 265 | 69.99 |
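One quick diagnostic is to print the interpreter path and version from each environment; running this snippet both under plain `python` and under `sage --python` shows whether the two really are different interpreters:

```python
import sys

# Compare this output between `python` and `sage --python`:
# different executables or versions mean different environments.
print(sys.executable)
print("%d.%d" % sys.version_info[:2])
```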
Created on 2010-01-13.10:58:09 by WA, last changed 2015-04-16.16:10:33 by zyasoft.
This code always crashes on my machine:
from javax.swing import *
class PanelTest(JPanel):
def paintComponent(self, g):
super(PanelTest, self).paintComponent(g)
if __name__ == '__main__':
frame = JFrame("Jython Swing Test")
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE)
pnl = PanelTest()
frame.add(pnl)
frame.pack()
frame.setVisible(True)
Also seen in trunk
The workaround is to use super__PROTECTED_METHOD. This is an old form in Jython, that predates the use of the super function in Python 2.2.
Of course it's reasonable that super should simply work, or the alternative, to specify the super type directly, without super(), that is
JPanel.paintComponent(self, g)
But they don't, yet. This looks like a gap that opened when we implemented new-style classes in Jython, and made Java types new-style as well.
Presumably we need to look at modifying the code in or around org.python.core.PyReflectedFunction.
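For comparison, the explicit-base-class spelling works as expected between plain Python classes; it is only against Java base types that both forms fail here. The class names below are illustrative:

```python
class Base(object):
    def to_string(self):
        return "Base"

class Derived(Base):
    def to_string(self):
        # Explicit base-class call, the pre-super() spelling:
        return Base.to_string(self) + " via Derived"

print(Derived().to_string())  # Base via Derived
```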
In particular, this is related to our attempt to work around this issue:
This issue still exists in v2.7 beta 1...
Simpler example:
import java

class D(java.util.Date):
    def toString(self):
        return super(D, self).toString()
Then you can reproduce the issue with:
>>> D()
Target beta 4
Top priority for 2.7.1
Fixed as of
On Tue, 9 Jul 2002, Bruce Atherton wrote:
> The reason that Ant 1 is an exception to this is because of the level of
> backward compatibility that has to be maintained while refactoring. It is
> far more extreme in this regard than almost any other project. Any public
> class must be maintained in perpetuity. Any method that is not private
> should be maintained, signature and all, although the implementation can
> change. This approach is very beneficial for custom task writers, but it
> puts a straightjacket on developers and the refactorings that are possible.
> The real reason Ant 2 is needed is because it allows us to break this
> extreme backwards compatibility and go for one that is less intense. Yes,
> Ant 1 build files should be supported or at least be convertible with a
> script. Yes, custom tasks in Gump should either work as in Ant 1 or
> replacements for them offered long before the Ant 2 release. But that is a
> far lower bar than Ant 1 must meet.
If we do accept breaking the API backward compatibility in future versions
- that can be done with the Ant1 codebase as well.
But I don't agree you need to break backward compatibility in order
to refactor - we can easily create a new core ( hopefully cleaner -
which is not the case with the current proposals IMHO ) and
then keep the old API as just a wrapper.
Tomcat5 proposal is doing exactly that ( with the goal of
having binary compatibility with both tomcat3.3 and 4.0 !).
> So rather than using one of the rewrite proposals, should we just take a
> copy of the Ant 1 code and start refactoring on that? Given the difficulty
> of rewriting from scratch, I'd support that were it not for the fact that
> we already have not one but two rewrites already completed (which is part
> of the problem in moving forward). Thus there is no need to refactor Ant 1
> to get Ant 2 to a running state, we are already there.
The problem is not to get a proposal in a running state. It is to make
sure the proposal is what we need.
In the main trunk all code changes are carefully discussed ( look
at the antlib, or ProjectHelper hooks !) and they have to pass a very high
bar. That's normal - we'll have to live with them and maintain them
for a long time in future.
Both proposals have a huge amount of changes that will get in with
very little review and discussion. That's a serious problem for me.
And at the moment, they both suffer (IMHO again ) from serious
overengineering.
> As for the one real issue that the current codebase can't address (without
> breaking Ant 1-level compatibility), I've already complained here about the
> problems I had with PatternSet that could not be fixed without potentially
> breaking compatibility. Then there are the modularization and reuse issues
> others have mentioned. Believe me, there is a need for Ant 2.
I agree there is a need for ant2 - all I'm saying is that I disagree with the
procedure of getting there by swallowing a different codebase.
For modularization and reuse - there are plenty of solutions that can
be done in ant1. Adding namespace support is one part, and
I think it can be addressed in ant1.6 ( it already works with ant1.5 ).
Antlib is the other part.
I don't know the details of the PatternSet problem, but even if there
are problems, we must put in balance the amount of changes in the
user build files to the benefit of making a change. There are dozens
of broken things in the servlet spec - and sometimes they get
corrected ( if they affect security, etc ) but most of the time they
are preserved. Same for the JDK itself. I think ant must be
at the same level.
The 'backward compatibility' rule is not written in stone - if all
committers ( or all but 1 - using the 'valid' veto rule :-) agree
something needs changed - it can be changed.
The 'ant2 revolution' is just a way to replace consensus voting with
majority voting for breaking backward compatibility. But what's
really bad is that instead of having each change individually voted,
we'll have to vote on a bundle.
If the voting is the issue ( i.e. we deadlock with 2 -1s blocking
a change ) - we can still use a micro-revolution and use majority
voting to get past the issue.
Costin
--
Static resources with Morepath
Introduction
A modern client-side web application is built around JavaScript and CSS. A web server is responsible for serving these and other types of static content such as images to the client.
Morepath does not itself include a way to serve these static resources. Instead it leaves the task to other WSGI components you can integrate with the Morepath WSGI component. Examples of such systems that can be integrated through WSGI are BowerStatic, Fanstatic, Webassets, and webob.static.
Examples will focus on BowerStatic integration to demonstrate a method for serving JavaScript and CSS. To demonstrate a method for serving other static resources such as an image we will use webob.static.
We recommend you read the BowerStatic documentation, but we provide a small example of how to integrate it here that should help you get started. You can find all the example code in the github repo.
Application layout
To integrate BowerStatic with Morepath we can use the more.static extension.
First we need to include more.static as a dependency of our code in setup.py. Once it is installed, we can create a Morepath application that subclasses from more.static.StaticApp to get its functionality:
from more.static import StaticApp

class App(StaticApp):
    pass
We give it a simple HTML page on the root that contains a <head> section in its HTML:
@App.path(path='/')
class Root(object):
    pass

@App.html(model=Root)
def root_default(self, request):
    return ("<!DOCTYPE html><html><head></head><body>"
            "jquery is inserted in the HTML source</body></html>")
It’s important to use @App.html as opposed to @App.view, as that sets the Content-Type header to text/html, something that BowerStatic checks before it inserts any <link> or <script> tags. It’s also important to include a <head> section, as that’s where BowerStatic includes the static resources by default.
We store the app configuration code in the app.py module of the Python package.
In the run.py module of the Python package we set up a run() function that, when run, serves the WSGI application to the web:

import morepath

from .app import App

def run():
    morepath.autoscan()
    App.commit()
    wsgi = App()
    morepath.run(wsgi)
Manual scan
We recommend you use morepath.autoscan to make sure that all code that uses Morepath is automatically scanned. If you do not use autoscan but use manual morepath.scan() instead, you need to scan more.static explicitly, like this:

import morepath
import more.static

from .app import App

def run():
    morepath.scan(more.static)
    App.commit()
    wsgi = App()
    morepath.run(wsgi)
Bower
BowerStatic integrates the Bower JavaScript package manager with a Python WSGI application such as Morepath.
Once you have bower installed, go to your Python package directory (where the app.py lives), and install a Bower component. Let's take jquery:
bower install jquery
You should now see a bower_components subdirectory in your Python package. We placed it here so that when we distribute the Python package that contains our application, the needed bower components are automatically included in the package archive. You could place bower_components elsewhere however and manage its contents separately.
Registering bower_components
BowerStatic needs a single global bower object that you can register multiple bower_components directories against. Let's create it first:
import bowerstatic

bower = bowerstatic.Bower()
We now tell that bower object about our bower_components directory:
components = bower.components(
    'app',
    os.path.join(os.path.dirname(__file__), 'bower_components'))
The first argument to bower.components is the name under which we want to publish them. We just pick app. The second argument specifies the path to the bower_components directory. The os.path business here is a way to make sure that we get the bower_components directory next to this module (app.py) in this Python package.
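To make the path handling concrete, here is what that os.path expression computes for a hypothetical module location (the path below is made up; inside app.py, __file__ would supply it):

```python
import os

# __file__ would normally be the path of app.py; this value is made up.
module_file = "/project/myapp/app.py"

# Same expression as in the registration call above.
components_dir = os.path.join(os.path.dirname(module_file), "bower_components")
print(components_dir)  # /project/myapp/bower_components (on a POSIX system)
```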
BowerStatic now lets you refer to files in the packages in bower_components to include them on the web, and also makes sure they are available.
Saying which components to use
We now need to tell our application to use the components object. This causes it to look for static resources only in the components installed there. We do this using the @App.static_components directive, like this:

@App.static_components()
def get_static_components():
    return components
You could have another application that uses another components object, or share this components object with the other application. Each app can only have a single components object registered to it, though.
The static_components directive is not part of standard Morepath. Instead it is part of the more.static extension, which we enabled before by subclassing from StaticApp.
Including stuff
Now we are ready to include static resources from bower_components into our application. We can do this using the include() method on the request. We modify our view to add an include() call:

@App.html(model=Root)
def root_default(self, request):
    request.include('jquery')
    return ("<!DOCTYPE html><html><head></head><body>"
            "jquery is inserted in the HTML source</body></html>")
When we now open the view in our web browser and check its source, we can see it includes the jquery we installed in bower_components.
Note that just like the static_components directive, the include() method is not part of standard Morepath, but has been installed by the more.static.StaticApp base class as well.
Local components
In many projects we want to develop our own client-side JS or CSS code, not just rely on other people's code. We can do this by using local components. First we need to wrap the existing components object in an object that allows us to add local ones:
local = bower.local_components('local', components)
We can now add our own local components. A local component is a directory that needs a bower.json in it. You can create a bower.json file most easily by going into the directory and using the bower init command:

$ mkdir my_component
$ cd my_component
$ bower init
You can edit the generated bower.json further, for instance to specify dependencies. You now have a bower component. You can add any static files you are developing into this directory.
Now you need to tell the local components object about it:
local.component('/path/to/my_component', version=None)
See the BowerStatic local component documentation for more of what you can do with version – it's clever about automatically busting the cache when you change things.
You need to tell your application that you want to use local instead of the plain components, so we modify our static_components directive:

@App.static_components()
def get_static_components():
    return local
When you now use request.include(), you can include local components by their name (as in bower.json) as well:
request.include('my_component')
It automatically pulls in any dependencies declared in bower.json too.
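A minimal bower.json for such a local component might look like the following sketch; the field values are illustrative, and it is the dependencies entry that drives the automatic inclusion:

```json
{
  "name": "my_component",
  "version": "1.0.0",
  "main": "my_component.js",
  "dependencies": {
    "jquery": "^2.1.0"
  }
}
```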
As mentioned before, check the morepath_static github repo for the complete example.
A note about mounted applications
more.static uses a tween to inject scripts into the response (see Tweens). If you use more.static in a view in a mounted application, you need to make sure that the root application also derives from more.static.StaticApp, otherwise the resources aren't inserted correctly:

from more.static import StaticApp

class App(StaticApp):  # this needs to subclass StaticApp too
    pass

class Mounted(StaticApp):
    pass

@App.mount(app=Mounted, path='mounted')
def mount():
    return Mounted()
Other static content
In essence, Morepath doesn’t enforce any particular method for serving static content to the client as long as the content eventually ends up in the response object returned. Therefore, there are different approaches to serving static content.
Since a Morepath view returns a WebOb response object, that object can be loaded with any type of binary content in the body along with the necessary HTTP headers to describe the content type and size.
In this example, we use a WebOb helper class webob.static.FileApp to serve a PNG image:
from webob import static

@App.path(path='')
class Image(object):
    path = 'image.png'

@App.view(model=Image)
def view_image(self, request):
    return request.get_response(static.FileApp(self.path))
In the above example FileApp does the heavy lifting by opening the file, guessing the MIME type, updating the headers, and returning the response object, which is in turn returned by the Morepath view. Note that the same helper class can be used to serve most types of MIME content.
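The MIME-type guessing FileApp performs can be approximated with the standard library's mimetypes module. This is only a sketch of the Content-Type side; FileApp additionally handles caching headers and ranges:

```python
import mimetypes

# Guess the Content-Type header for a few typical static files.
for path in ("image.png", "styles.css", "archive.tar.gz"):
    content_type, encoding = mimetypes.guess_type(path)
    print(path, content_type, encoding)
```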
This example is one way to serve an image, but it is not the only way. In cases that require a more elaborate method for serving the content this WebOb File-Serving Example may be helpful. | http://morepath.readthedocs.io/en/latest/more.static.html | CC-MAIN-2016-40 | refinedweb | 1,410 | 50.33 |
How To Code in JavaScript
JavaScript is a high-level, object-based, dynamic scripting language popular as a tool for making webpages interactive.
- August 20, 2021: This tutorial will go over how to work with the Console in JavaScript within the context of a browser, and provide an overview of other built-in development tools you may use as part of your web development process.
- August 20, 2021: This tutorial will go over how to incorporate JavaScript into your web files, both inline into an HTML document and as a separate file.
- August 23, 2021: This.
- August 23, 2021: In this tutorial, we'll go over many of the rules and conventions of JavaScript syntax and code structure.
- August 30, 2021: JavaScript comments are annotations in the source code of a program that are ignored by the interpreter, and therefore have no effect on the actual output of the code. Comments can be immensely helpful in explaining the intent of what your code is or should be doing.
- August 23, 2021: In this tutorial, we will go over how data types work in JavaScript as well as the important data types native to the language.
- August 24, 2021: A string is a sequence of one or more characters that may consist of letters, numbers, or symbols. Strings in JavaScript are primitive data types and immutable, which means they are unchanging. As strings are the way we display and work with text, and text is our main...
- August 24, 2021: In this tutorial, we will learn the difference between string primitives and the String object, how strings are indexed, how to access characters in a string, and common properties and methods used on strings.
- August 24, 2021: This tutorial will guide you through converting JavaScript's primitive data types, including numbers, strings, and Booleans.
- August 24, 2021: This tutorial covers what variables are, how to declare and name them, and also takes a closer look at the difference between var, let, and const. It also goes over the effects of hoisting and the significance of global and local scope to a variable's behavior.
- August 24, 2021: In this JavaScript tutorial, we will go over arithmetic operators, assignment operators, and the order of operations used with number data types.
- August 24, 2021: The field of computer science has many foundations in mathematical logic. If you have a familiarity with logic, you know that it involves truth tables, Boolean algebra, and comparisons to determine equality or difference. The JavaScript programming language uses operators…
- August 24, 2021: In this tutorial, we will learn how to create arrays; how they are indexed; how to add, modify, remove, or access items in an array; and how to loop through arrays.
- August 25, 2021: JavaScript has many useful built-in methods to work with arrays. Methods that modify the original array are known as mutator methods, and methods that return a new value or representation are known as accessor methods. In this tutorial, we will focus on mutator methods.
- August 25, 2021: This tutorial will go over methods that will concatenate arrays, convert arrays to strings, copy portions of an array to a new array, and find the indices of arrays.
- August 25, 2021: In JavaScript, the array data type consists of a list of elements. There are many useful built-in methods available for JavaScript developers to work with arrays. In this tutorial, we will use iteration methods to loop through arrays, perform functions on each item in an array, filter the desired results of an array, reduce array items down to a single value, and search through arrays to find values or indices.
- August 25, 2021: Objects.
- August 25, 2021: JavaScript comes with the built-in Date object and related methods. This tutorial will go over how to format and use date and time in JavaScript.
- August 25, 2021: Events are actions that take place in the browser that can be initiated by either the user or the browser itself. In this JavaScript article.
- August 25, 2021: This tutorial provides an introduction to working with JSON in JavaScript. Some general use cases of JSON include: storing data, generating data from user input, transferring data from server to client and vice versa, configuring and verifying data.
- August 25, 2021: ...
- August 26, 2021: Conditional statements are among the most useful and common features of all programming languages. "How To Write Conditional Statements in JavaScript" describes how to use the...
- August 26, 2021: Automation is the technique of making a system operate automatically; in programming, we use loops to automate repetitious tasks. Loops are one of the most useful features of programming languages, and in this article we will learn about the while and do...while...
- August 26, 2021: Loops are used in programming to automate repetitive tasks. In this tutorial, we will learn about the for statement, including the for...of and for...in statements, which are essential elements of the JavaScript programming language.
- August 26, 2021: A function is a block of code that performs an action or returns a value. Functions are custom code defined by programmers that are reusable, and can therefore make your programs more modular and efficient. In this tutorial, we will learn several ways to define a...
- August 26, 2021: JavaScript is a prototype-based language, meaning object properties and methods can be shared through generalized objects that have the ability to be cloned and extended. This is known as prototypical inheritance and differs from class inheritance. Among popular...
- August 26, 2021: Understanding prototypical inheritance is paramount to being an effective JavaScript developer. Being familiar with classes is extremely helpful, as popular JavaScript libraries such as React make frequent use of the class syntax.
- August 26, 2021: Objects in JavaScript are collections of key/value pairs. The values can consist of properties and methods, and may contain all other JavaScript data types, such as strings,...
- August 26, 2021: The `this` keyword is a very important concept in JavaScript, and also a particularly confusing one to both new developers and those who have experience in other programming languages. In JavaScript, `this` is a reference to an object. In this article, you'll learn what `this` refers to based on context, and you'll learn how you can use the `bind`, `call`, and `apply` methods to explicitly determine the value of `this`.
- August 27, 2021: Introduced in ECMAScript 2015, Maps in JavaScript are ordered collections of key/value pairs, and Sets are collections of unique values. In this article, you will go over the Map and Set objects, what makes them similar or different to Objects and Arrays, the properties and methods available to them, and examples of some practical uses.
- August 27, 2021: In ECMAScript 2015, generators were introduced to the JavaScript language. A generator is a process that can be paused and resumed and can yield multiple values. They can maintain state, providing an efficient way to make iterators, and are capable of dealing with infinite data streams. In this article, we'll cover how to create generator functions, how to iterate over Generator objects, the difference between yield and return inside a generator, and other aspects of working with generators.
- August 27, 2021: In ECMAScript 2015, default function parameters were introduced to the JavaScript programming language. These allow developers to initialize a function with default values if the arguments are not supplied to the function call. Initializing function parameters in this way will make your functions easier to read and help you avoid errors caused by undefined arguments and the destructuring of objects that don't exist. In this article, you will learn how to use default parameters.
- Introduced in ECMAScript 2015 (ES6), destructuring, rest parameters, and spread syntax provide more direct ways of accessing the members of an array or an object, and can make working with data structures quicker and more succinct. In this article, you will learn how to destructure objects and arrays in JavaScript, how to use the spread operator to unpack objects and arrays, and how to use rest parameters in function calls.
- August 27, 2021: The 2015 edition of the ECMAScript specification (ES6) added template literals to the JavaScript language. Template literals are a new form of making strings in JavaScript that add a lot of powerful new capabilities, such as creating multi-line strings, using placeholders to embed expressions in a string, and parsing dynamic string expressions with tagged template literals. In this article, you will go over the differences between single/double-quoted strings and template literals.
- August 27, 2021: Arrow functions are a new way to write anonymous function expressions in JavaScript, and are similar to lambda functions in some other programming languages like Python. They differ from traditional functions in the way their scope is determined and how their syntax is expressed, and are particularly useful when passing a function as a parameter to a higher-order function, like an array iterator method. In this article, you will find examples of arrow function behavior and syntax.
- In order to avoid blocking code in JavaScript development, asynchronous coding techniques must be used for operations that take a long time, such as network requests made from Web APIs like Fetch. This article will cover asynchronous programming fundamentals, teaching you about the event loop, the original way of dealing with asynchronous behavior through callbacks, the updated ECMAScript 2015 (ES6) addition of promises, and the modern practice of using async/await.
- As the importance of JavaScript in web development grows, there is a bigger need to use third-party code for common tasks, to break up code into modular files, and to avoid polluting the global namespace. To account for this, ECMAScript 2015 (ES6) introduced modules to the JavaScript language, which allowed for the use of import and export statements. In this tutorial, you will learn what a JavaScript module is and how to use import and export to organize your code.
On my computer, the following code
#include <cstdlib>
int main()
{
    int i = 5;
    int j = std::abs(i);
}
raises the warning "Values of type 'long' may not fit into the receiver type 'int'". However, there is an int-valued abs function, so why is CLion raising this warning? Thanks for the help!
Hi Spencer.
It correctly works in my environment. Could you please specify which OS and CLion version ('About CLion') you are using?
Hi Anna,
I am using Arch Linux with CLion version:
CLion 2016.1.2
Build #CL-145.972, built on May 18, 2016
JRE: 1.8.0_92-b14 amd64
JVM: OpenJDK 64-Bit Server VM by Oracle Corporation
Could you please try CLion 2016.2 EAP? Does the problem occur?
Yes, unfortunately the problem still occurs. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/207565445-Incorrect-Value-may-not-fit-into-receiver-inspection-for-std-abs | CC-MAIN-2020-34 | refinedweb | 130 | 77.53 |
Necessary for benchmarks.
This is currently the most needed opcode reported by abort messages in the v8 & sunspider test suites when they are executed with the '--ion -n' flags.
Created attachment 584358 [details] [diff] [review]
Implement JSOP_THIS
Comment on attachment 584358 [details] [diff] [review]
Implement JSOP_THIS
Review of attachment 584358 [details] [diff] [review]:
-----------------------------------------------------------------
JSOP_THIS is unfortunately more complicated - see ComputeThis(). The logic is something like:
* If |this| is an object, return |this|.
* If |this| is null or undefined, |this| is globalObj->thisObject()
* If |this| is a primitive, return js_PrimitiveToObject(this)
So any time |this| is used where we don't already know that |this| has been computed, we need to replace it with a new SSA name. In this case it's okay to use the MIR node for |this| to determine whether |this| is already computed.
One option is to have a ComputeThis(Value) -> Value instruction that has a guard, and an out-of-line path for returning the new |this|. With TypeInference we can also determine the type ComputeThis will return (however there wouldn't be a type barrier, so we'd have to manually unbox).
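The three rules above can be sketched in Python for illustration; every name here (Obj, Global, primitive_to_object) is hypothetical and not SpiderMonkey's actual API:

```python
class Obj:
    """Hypothetical stand-in for a JS object."""

class Global(Obj):
    def this_object(self):
        # models globalObj->thisObject()
        return self

def primitive_to_object(value):
    # models js_PrimitiveToObject: box the primitive in a wrapper object
    boxed = Obj()
    boxed.value = value
    return boxed

def compute_this(this_value, global_obj):
    if isinstance(this_value, Obj):          # already an object: use as-is
        return this_value
    if this_value is None:                   # null/undefined (modelled as None)
        return global_obj.this_object()
    return primitive_to_object(this_value)   # any other primitive gets boxed
```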
Created attachment 585484 [details] [diff] [review]
Implement JSOP_THIS
Specialize this with type inference and compile JSOP_THIS only if the type is an
object. Otherwise, abort the compilation with a message.
Comment on attachment 585484 [details] [diff] [review]
Implement JSOP_THIS
Review of attachment 585484 [details] [diff] [review]:
-----------------------------------------------------------------
::: js/src/ion/IonBuilder.cpp
@@ +362,5 @@
> // -- ResumePoint(v0)
> //
> // As usual, it would be invalid for v1 to be captured in the initial
> // resume point, rather than v0.
> + current->add(actual);
Whoops, good catch.
@@ +3004,5 @@
> +{
> + // initParameters only initialized "this" after the following check, make
> + // sure we can safely access thisSlot.
> + if (!info().fun())
> + return false;
Should this be an abort? Or is this an error? (As written it'll be OOM)
@@ +3011,5 @@
> + MDefinition *thisParam = current->getSlot(info().thisSlot());
> +
> + if (thisParam->type() != MIRType_Object) {
> + IonSpew(IonSpew_Abort, "Cannot compile this, not an object.");
> + return false;
Instead: return abort("..."); otherwise this will act as OOM.
(In reply to David Anderson [:dvander] from comment #6)
> @@ +3004,5 @@
> > +{
> > + // initParameters only initialized "this" after the following check, make
> > + // sure we can safely access thisSlot.
> > + if (!info().fun())
> > + return false;
>
> Should this be an abort? Or is this an error? (As written it'll be OOM)
I think this should be an assertion, because the bytecode is badly produced. So I just removed the check as the assertion is already done by "info().thisSlot()". (forgot reviewer, sorry)
*** Bug 713855 has been marked as a duplicate of this bug. *** | https://bugzilla.mozilla.org/show_bug.cgi?id=701961 | CC-MAIN-2017-09 | refinedweb | 424 | 56.66 |
Freeing the memory of a casted object?
2011-06-22 22:18:57 GMT
I use ctypes intensively for wrapping a lot of cdlls and I love the module!
However I face a painful problem of memory leak when I cast an object to something other.
I post the following example which, running on win32, is perfectly stable in memory when not casting, but leaks 10MB per cycle when casting.
I couldn't find help on the internet for this, nor a free function in the ctypes module.
Please see and test if possible the following code:
"""
import ctypes
import os
import sys
import time
if __name__ == '__main__':
if sys.argv[1] == '0':
cast_flag = False
else:
cast_flag = True
while 1:
print 'creating 1MB buffer'
tmp_buffer = ctypes.create_string_buffer(10*1024*1024)
if cast_flag:
print 'casting it to c_void_p'
tmp_pointer = ctypes.cast(tmp_buffer, ctypes.POINTER(ctypes.c_void_p))
time.sleep(1)
"""
running:
python myscript.py 0
it will run without leaking
running:
python myscript.py 1
will leak more than 10MB at each cycle (seen with windows task manager or process explorer).
Can someone please give me a tip on how to free that memory?
I really thank you for any tip!
Patricio
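If the leak comes from reference cycles that ctypes.cast creates internally, an explicit garbage-collection pass should reclaim them; this is an assumption to test, not a confirmed fix. A minimal Python 3 sketch of the idea:

```python
import ctypes
import gc

buf = ctypes.create_string_buffer(b"abc", 16)
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_char))
aliased = ptr[0]          # the cast pointer reads the same memory as buf
del ptr                   # drop the extra reference created by the cast
collected = gc.collect()  # sweep any cycles ctypes may have left behind
```

In the original loop, calling gc.collect() after each iteration would show whether the growth is reclaimable garbage or a true leak.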
ctypes-users mailing list: ctypes-users <at> lists.sourceforge.net
PyAccess Module¶
The PyAccess module provides a CFFI/Python implementation of the PixelAccess Class. This implementation is far faster on PyPy than the PixelAccess version.
Note
Accessing individual pixels is fairly slow. If you are looping over all of the pixels in an image, there is likely a faster way using other parts of the Pillow API.
Example¶
The following script loads an image, accesses one pixel from it, then changes it.
from PIL import Image

im = Image.open('hopper.jpg')
px = im.load()
print (px[4,4])
px[4,4] = (0,0,0)
print (px[4,4])
Results in the following:
(23, 24, 68)
(0, 0, 0)
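As an illustration of the note above, a whole-image operation such as inverting a grayscale image can be done with Image.eval instead of a per-pixel loop. This sketch uses a small synthetic image rather than hopper.jpg:

```python
from PIL import Image

im = Image.new("L", (4, 4), 100)              # small grayscale test image
inverted = Image.eval(im, lambda v: 255 - v)  # evaluated once per possible pixel value
assert inverted.getpixel((0, 0)) == 155
```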
How to reverse words in a sentence using Python and C
This is a technical problem I attempted recently. The problem was to reverse the words in a sentence. For example, The quick brown fox jumped over the lazy dog. becomes dog. lazy the over jumped fox brown quick The. I had to solve the problem first using Python, and then using C. In addition, the C version could only use 1 extra character of memory. I solved the Python version easily, but the C version was too difficult for me. Here are possible solutions.
Python version
sentence = "The quick brown fox jumped over the lazy dog."
words = sentence.split()
sentence_rev = " ".join(reversed(words))
print sentence_rev
C version
Credit for this solution goes to Hai Vu
#include <stdio.h>

/* function declarations */
void reverse_words(char *sentence);
void reverse_chars(char *left, char *right);

/* main program */
int main()
{
    char mysentence[] = "The quick brown fox jumped over the lazy dog.";
    reverse_words(mysentence);
    printf("%s\n", mysentence);
    return 0;
}

/* reverse the words in a sentence */
void reverse_words(char *sentence)
{
    char *start = sentence;
    char *end = sentence;

    /* find the end of the sentence */
    while (*end != '\0') {
        ++end;
    }
    --end;

    /* reverse the characters in the sentence */
    reverse_chars(start, end);

    /* reverse the characters in each word */
    while (*start != '\0') {
        /* move start pointer to the beginning of the next word */
        for (; *start != '\0' && *start == ' '; start++)
            ;
        /* move end pointer to the end of the next word */
        for (end = start; *end != '\0' && *end != ' '; end++)
            ;
        --end;
        /* reverse the characters in the word */
        reverse_chars(start, end);
        /* move to next word */
        start = ++end;
    }
}

/* reverse the characters in a string */
void reverse_chars(char *left, char *right)
{
    char temp;
    while (left < right) {
        temp = *left;
        *left = *right;
        *right = temp;
        ++left;
        --right;
    }
}
Related posts
- Find all combinations of a set of lists with itertools.product — posted 2011-11-01
- Some more python recursion examples — posted 2011-10-05
- Free Computer Science courses online — posted 2009-06-30
- Find the N longest lines in a file with Python — posted 2009-06-28
- Python recursion example to navigate tree data — posted 2008-08-19
Reposted a C version so it looks better. Just to show the power of C.
Here is a C version with better formatting:
#include <string.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char **pstr, *ptok[20];
    int i;
    char pinstr[] = "Quick brown fox jumps over the lazy dog.";

    for (i = 0, ptok[i++] = strtok(pinstr, " "); ptok[i] = strtok(NULL, " "); i++)
        ;
    for (i--; i > -1; i--) {
        printf("%s ", ptok[i]);
    }
}
Karsten: Thanks for your solution. I removed your first comment with the broken formatting and I added a single space to your second comment to fix the formatting.
Python is really great and a very good language. Look how it solved the problem in just four lines... Python is a high-level programming language, yet it is very easy to learn and use.
I am trying to run this program but it keeps cutting off the first word every time I do it. Would anyone know why?
Python:
a = "Quick brown fox jumps over the lazy dog." #reverse b = a[::-1]
ElroySmith:
Your solution actually reverses the characters of the string and not the words.
a = "The quick brown fox jumped over the lazy dog." b = a[::-1] print b
Results:
.god yzal eht revo depmuj xof nworb kciuq ehT
This is a neat trick though. For others, the slicing notation used is start:end:step and a negative step indexes from the end of the list instead of the beginning. See Note 5 in the Python documentation for Sequence Types
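A few quick cases of that start:end:step notation:

```python
s = "abcdef"
assert s[1:4] == "bcd"      # start:end, end excluded
assert s[::2] == "ace"      # every second character
assert s[::-1] == "fedcba"  # negative step walks from the end
```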
Hi, You're right :)
But what will you say about this :
a = "The quick brown fox jumped over the lazy dog."
# reversed order of words
b = " ".join( a.split()[::-1] )
I think it will do the trick :)
But it's the same as b = " ".join( reversed( a.split() ) )
(only you don't need to remember the magic word reversed, and because the first version has fewer parentheses it looks more readable to me)
What do you think ? :) BR, Elroy
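For what it's worth, the two spellings discussed above do agree:

```python
a = "The quick brown fox jumped over the lazy dog."
b1 = " ".join(a.split()[::-1])
b2 = " ".join(reversed(a.split()))
assert b1 == b2 == "dog. lazy the over jumped fox brown quick The"
```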
Three Lines of code
sentence = "The quick brown fox jumped over the lazy dog."
For reversing the word order:
" ".join(sentence.split()[::-1]) | https://www.saltycrane.com/blog/2009/04/how-reverse-words-sentence-using-python-and-c/ | CC-MAIN-2019-30 | refinedweb | 697 | 72.36 |