Hi Andrew,

For your metadata question, JDBC metadata calls can help you. ResultSet.getMetaData will give you a ResultSetMetaData object. ResultSetMetaData has various APIs, like getColumnName, getColumnType, etc., to get the metadata information. Also, _I think_, if the table does not have indexes defined on it, then an import into it will be much faster. But maybe someone else more familiar with import/export can help you better.

Mamta

On 5/16/05, Andrew Shuttlewood <andrew.shuttlewood@futureroute.co.uk> wrote:
>
> Also, is it possible to get some notification when a database is
> recovering? It just seems to take forever in the boot stage and progress
> information would be desirable.
>
> Sorry for the laundry list of questions, just wondering if there are any
> good answers :)
>
http://mail-archives.apache.org/mod_mbox/db-derby-user/200505.mbox/%3Cd9619e4a05051607057328470e@mail.gmail.com%3E
Upload accounting records to an EUDAT accounting server

Administrators of (storage) resources provided through the EUDAT Common Data Infrastructure can use this tool to conveniently report current resource consumption per registered resource. Accounting records are submitted per individual resource, identified by its (P)ID, which is available as soon as the resource has been registered. Default settings are such that only the resource id and the consumed value need to be provided. The default unit is byte and the default resource type is storage. The options supported are fully documented in the next section.

Installation

The easiest way to install the tool is via pip or easy_install. It is usually best to do this in a virtualenv:

$ pip install eudat.accounting.client

Command line interface

As a result of the above there is now a console script called addRecord. Invoke it with -h to see its usage pattern and options:

$ bin/addRecord -h
usage: addRecord [-h] [--version] [-b BASE_URL] [-u USER] [-p PASSWORD]
                 [-d DOMAIN] [-k KEY] [-t TYPE] [-n NUMBER] [-m MEASURE_TIME]
                 [-c COMMENT] [-v]
                 account value [unit]

positional arguments:
  account               account to be used. Typically the (P)ID of the
                        resource to be accounted
  value                 the value to be recorded
  unit                  the unit of measurement for the value provided.
                        Default: "byte"

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  -b BASE_URL, --base_url BASE_URL
                        base URL of the accounting server to use
  -u USER, --user USER  user id used for logging into the server. If not
                        provided it is looked up in the environment variable
                        "ACCOUNTING_USER". Default: "" - aka not set
  -p PASSWORD, --password PASSWORD
                        password used for logging into the server. If not
                        provided it is looked up in the environment variable
                        "ACCOUNTING_PW". Default: "" - aka not set
  -d DOMAIN, --domain DOMAIN
                        name of the domain holding the account. Default: eudat
  -k KEY, --key KEY     key used to refer to the record. If not set the
                        accounting server will create the key. Specifying an
                        existing key will overwrite the existing record.
                        Default: "" - not set
  -t TYPE, --type TYPE  type of the resource accounted. Default: storage
  -s SERVICE, --service SERVICE
                        UID (or PID) of the registered service component
                        reporting the record. Default: "" - not set
  -n NUMBER, --number NUMBER
                        number of objects associated with this accounting
                        record. This is EUDAT specific. Default: "" - not set
  -o OBJECT_TYPE, --object_type OBJECT_TYPE
                        object type for the number of objects specified with
                        "-n". This is EUDAT specific. Default: "registered
                        objects"
  -m MEASURE_TIME, --measure_time MEASURE_TIME
                        measurement time of the accounting record if
                        different from the current time. Default: "" - not set
  -c COMMENT, --comment COMMENT
                        arbitrary comment (goes into the meta dictionary).
                        Default: "" - not set
  -v, --verbose         return the key of the accounting record created.
                        Default: off

Most of this should be self-explanatory. Note that you need to provide credentials for the accounting service; if you do not have any, contact the EUDAT accounting manager. Basic usage information as well as error messages are logged to a file named .accounting.log in the current working directory from which addRecord has been invoked.

Developer notes

Please use a virtualenv to maintain this package, but I should not need to say that. The package can be installed directly from GitHub:

$ pip install git+git://github.com/EUDAT-DPMT/eudat.accounting.client

The code is organized in a nested namespace package, i.e., the real action is happening in the subdirectory:

$ cd src/eudat/accounting/client

Start looking around there. Run the tests (not really that meaningful so far):

$ python setup.py test
$ python run_tests.py

Links: Project home page | Source code | Issues tracker

1.0.0rc1 (2016-07-22) - Initial release [raphael-ritz]

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://test.pypi.org/project/eudat.accounting.client/
When we want to improve the performance of a system, there are many ways to enhance it, such as architecture design, optimization algorithms, ... But the first thing we usually think about is multithreading. So, in this article, we will find out something about multithreading in C++: the difference between a process and a thread, how to create a thread in C++, and more.

Table of contents
- Introduction to multithread
- The difference between process and thread
- Launching a thread
- Transferring ownership of a thread
- What std::thread::join() does
- Disadvantages when using the join() method

Introduction to multithread

Before going deeper into multithreading, we need to understand the concept of a process. In computing, a process is an instance of a computer program that is being executed. It contains the program code and its activity. Depending on the OS, a process may be made up of multiple threads of execution that execute instructions concurrently. According to wikipedia.org, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler; the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time. A thread stores all this information in a Thread Control Block (TCB):
- Thread identifier: a unique id (TID) is assigned to every new thread.
- Stack pointer: points to the thread's stack in the process. The stack contains the local variables under the thread's scope.
- Program counter: a register which stores the address of the instruction currently being executed by the thread.
- Thread state: can be running, ready, waiting, start or done.
- Thread's register set: registers assigned to the thread for computations.
- Parent process pointer: a pointer to the Process Control Block (PCB) of the process that the thread lives in.

The difference between process and thread

The main differences between them are:
- A process is an independent program in execution with its own address space; a thread is a lightweight unit of execution within a process.
- Processes do not share memory with each other by default; threads of the same process share the process's code, data and heap (each thread has its own stack and registers).
- Creating and switching between processes is relatively expensive; creating and switching between threads is cheaper.
- If one process crashes, other processes are unaffected; if one thread crashes, it can take down the whole process.
Launching a thread

With a normal function

Assuming that we have:

void hello() { std::cout << "Hello, world with multithread."; }

std::thread t(hello);
t.join(); // or t.detach();

or

void printSomething(const std::string& strInput) { std::cout << strInput << "\n"; }

std::thread t(printSomething, "hello");

So, we pass the name of a function to the constructor of std::thread, as in the samples above. It is important to bear in mind that by default the arguments are copied into internal storage, where they can be accessed by the newly created thread of execution, even if the corresponding parameter in the function is expecting a reference. The reason for this is that the arguments may need to outlive the calling thread; copying the arguments guarantees that. If we really want to pass a reference instead, we can use a std::reference_wrapper created by std::ref:

std::thread t(foo, std::ref(arg1));

By doing this, we are promising that we will take care of guaranteeing that the arguments still exist when the thread operates on them. Note that everything mentioned above also applies to std::async and std::bind.

With a function object

class background_task {
public:
    void operator()() {
        do_something();
        do_something_else();
    }
};

background_task f;
std::thread thBackgroundTask(f);

The function object f above is copied into the storage belonging to the newly created thread of execution and invoked from there. It is therefore essential that the copy behaves equivalently to the original, or the result may not be what's expected.
With a lambda expression

Based on the sample in "With a function object", we have:

std::thread th([]() {
    do_something();
    do_something_else();
});

or

class blub {
private:
    void test() {}
public:
    std::thread spawn() {
        return std::thread([this] { this->test(); });
    }
};

Since this-> can be omitted, it can be shortened to:

std::thread([this] { test(); });

or just

std::thread([=] { test(); });

With a method of a class

class test {
public:
    void hello() { std::cout << "Hello, everyone.\n"; }
};

std::thread t(&test::hello, test());
t.join();

So, to pass a method into a thread, we must specify the class and an object of that class on which the method is called. The above syntax is defined in terms of the INVOKE definition. Define INVOKE(f, t1, t2, ..., tN) as follows:
- (t1.*f)(t2, ..., tN) when f is a pointer to a member function of a class T and t1 is an object of type T or a reference to an object of type T or a reference to an object of a type derived from T;
- ((*t1).*f)(t2, ..., tN) when f is a pointer to a member function of a class T and t1 is not one of the types described in the previous item;
- t1.*f when N == 1 and f is a pointer to member data of a class T and t1 is an object of type T or a reference to an object of type T or a reference to an object of a type derived from T;
- (*t1).*f when N == 1 and f is a pointer to member data of a class T and t1 is not one of the types described in the previous item;
- f(t1, t2, ..., tN) in all other cases.

Once we've started our thread, we need to explicitly decide whether to wait for it to finish (by joining with it) or leave it to run on its own (by detaching it). If we don't decide before the std::thread object is destroyed, then our program is terminated (the std::thread destructor calls std::terminate()).
We only have to make this decision before the std::thread object is destroyed. The thread itself may well have finished long before we join with it or detach it, and if we detach it, then the thread may continue running long after the std::thread object is destroyed.

There is also a difference between using a mutex and the join() method: join() stops the current thread until another one finishes, while a mutex stops the current thread until the mutex owner releases it (or lets it lock right away if the mutex isn't held).

Transferring ownership of a thread

The problem

We want to write a function that creates a thread to run in the background but passes back ownership of the new thread to the calling function rather than waiting for it to complete. Or, the other way around, we want to create a thread and pass ownership in to some function that should wait for it to complete.

The solution

Use std::move() with the std::thread object. If ownership should be transferred into a function, it can just accept an instance of std::thread by value as one of the parameters:

void f(std::thread t);

void g() {
    void some_function();
    f(std::thread(some_function));
    std::thread t(some_function);
    f(std::move(t));
}

One benefit of the move support of std::thread is that we can build a class that owns a thread and takes advantage of the RAII paradigm. This avoids any unpleasant consequences should the RAII class's object outlive the thread it was referencing, and it also means that no one else can join or detach the thread once ownership has been transferred into the object.
class scoped_thread {
private:
    std::thread t;
public:
    explicit scoped_thread(std::thread t_) : t(std::move(t_)) {
        if (!t.joinable()) {
            throw std::logic_error("No thread");
        }
    }
    ~scoped_thread() {
        t.join();
    }
    scoped_thread(scoped_thread const&) = delete;
    scoped_thread& operator=(scoped_thread const&) = delete;
};

struct func;

void f() {
    int some_local_state;
    scoped_thread t(std::thread(func(some_local_state)));
    do_something_in_current_thread();
}

Another benefit of the move support of std::thread is that it allows for containers of std::thread objects, if those containers are move-aware:

void do_work(unsigned id);

void f() {
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < 20; ++i) {
        threads.push_back(std::thread(do_work, i));
    }
    std::for_each(threads.begin(), threads.end(), std::mem_fn(&std::thread::join));
}

What std::thread::join() does

According to en.cppreference.com, join() blocks the current thread until the thread identified by *this finishes its execution. The completion of the thread identified by *this synchronizes with the corresponding successful return from join(). No synchronization is performed on *this itself; concurrently calling join() on the same std::thread object from multiple threads constitutes a data race that results in undefined behavior.

So, if join() already blocks the current thread until the thread identified by *this finishes its execution, do we still need mutex.lock()? We do. We still need mutexes and condition variables: joining a thread makes one thread of execution wait for another thread to finish running, whereas mutexes protect shared resources. In the example below, join() allows main() to wait for all threads to finish before quitting itself.
#include <iostream>
#include <thread>
#include <chrono>
#include <mutex>

using namespace std;

int global_counter = 0;
std::mutex counter_mutex;

void five_thread_fn() {
    for (int i = 0; i < 5; i++) {
        counter_mutex.lock();
        global_counter++;
        counter_mutex.unlock();
        std::cout << "Updated from five_thread" << endl;
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
    // When this thread finishes, we wait for it to join
}

void ten_thread_fn() {
    for (int i = 0; i < 10; i++) {
        counter_mutex.lock();
        global_counter++;
        counter_mutex.unlock();
        std::cout << "Updated from ten_thread" << endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    // When this thread finishes, we wait for it to join
}

int main(int argc, char *argv[]) {
    std::cout << "starting thread ten..." << std::endl;
    std::thread ten_thread(ten_thread_fn);
    std::cout << "Running ten thread" << endl;
    std::thread five_thread(five_thread_fn);
    std::cout << "Running five_thread" << endl;
    ten_thread.join();
    std::cout << "Ten Thread is done." << std::endl;
    five_thread.join();
    std::cout << "Five Thread is done." << std::endl;
}

The output looks like this:

starting thread ten...
Running ten thread
Updated from ten_thread
Running five_thread
Updated from ten_thread
Updated from ten_thread
Updated from ten_thread
Updated from ten_thread
Updated from five_thread
Updated from ten_thread
Updated from ten_thread
Updated from ten_thread
Updated from ten_thread
Updated from ten_thread
Updated from five_thread
Ten Thread is done.
Updated from five_thread
Updated from five_thread
Five Thread is done.

Since std::cout is a shared resource, access to it should also be mutex-protected.

Disadvantages when using the join() method

It encourages continually creating, terminating and destroying threads, hammering performance and increasing the probability of leaks, thread runaway, memory runaway and general loss of control of your app. It can also stall GUI event handlers by enforcing unwanted waits, resulting in unresponsive 'hourglass apps' that your customers will hate.
It can even cause apps to fail to shut down because they are waiting for the termination of an unresponsive, uninterruptible thread, among other problems.

Thanks for reading. Some related keywords worth understanding: thread pools, tasks, app-lifetime threads, and inter-thread communication via producer-consumer queues.
https://ducmanhphan.github.io/2019-03-24-Understanding-about-multithread-in-C++/
Posts by user elvis10ten:

I have tried many PHP functions to uncompress data sent by jazzlib but none is working. Please can you help me? Remember I am using GZIPOutputStream, not ZipOutputStream anymore.

I have changed my code. It is using GZIPOutputStream. Do you know any PHP function that can uncompress it? Thanks wizard_hu.

I tried using GZIPOutputStream in jazzlib but I can't uncompress it in PHP.

I tried using the jazzlib library to create a zip file but I only get PK when I finished. I tried different strings but I got the same result. Here is my code: import net.sf.jazzlib.*; import...

After using the jazzlib library to compress my data, I send it to my website but I do not know how to use PHP to decompress a file compressed in jazzlib. I have tried gzopen, gzuncompress,...

Thank you TK2000. It worked. Thank you very much.

Hi. I tried using jazzlib ported to J2ME in an application but I get VerifyError. My PC just broke so I can't preverify now. Please can anyone help me preverify jazzlib and send the link to download it...

Here is the code: import javax.microedition.midlet.*; import javax.microedition.lcdui.*; import javax.microedition.io.*; import java.io.*; public class Example extends MIDlet implements...

Please, I want to view in J2ME using a socket. I have tried many ports with it but I don't get any content; what should I do? I am the owner of smsme.cu.cc too. Please any code or...

Please, I need the soft key keycodes of all manufacturers of J2ME devices. They say -6 and -7 are for Nokia. I have checked Google but I didn't get much.

Thanks. But I do not understand the code. Please, I need an explanation of how it works.

Please, how can I create a scrollbar in a J2ME canvas?
http://developer.nokia.com/community/discussion/search.php?s=197ff97b7f09a1b6b249a997ac86d68d&searchid=2412136
From: Boris Kolpackov (boris_at_[hidden])
Date: 2007-03-01 12:59:22

Hi Robert,

"Robert Ramey" <ramey_at_[hidden]> writes:

> A while ago I made a suggestion about using the spirit parser with its
> associated xml grammers.
>
> No one has commented on this. I'm curious why this idea doesn't seem to be
> attractive to anyone else. I used it with very good results in the
> serialization library. It created a much more robust and maintainable
> parser than I could have done by hand. What am I missing here?

The question is whether it is a conforming XML parser. That means support for:

- namespaces
- character references
- entity references
- CDATA
- DTD well-formedness checking, entity declaration processing and replacement, substitution of default attribute values, etc.

My uneducated guess is that a "spirit-based XML grammar" is not a conforming XML parser. The next question is how much effort it will take to fix it up and whether it will still be as robust, maintainable, and efficient (I doubt it very much). The reason why you had good results with the serialization library is that you control both production and consumption of the instances, so you can easily restrict yourself to a subset of XML. Once you need to process *any* valid XML, things get a lot more complicated.
https://lists.boost.org/Archives/boost/2007/03/117258.php
A blog, put simply, is nothing more than a list of content. In the past, blogging platforms were viewed as useful for only one purpose: content. How boring is that? Fortunately for us, this couldn't be further from the truth. Many CMSs, namely WordPress, have built-in ways to use the blogging platform to create custom feeds of content. WordPress calls them "custom post types," and they were ground-breaking when they were introduced into the platform. They changed the way that people used blogs. Although not immediately evident, HubSpot's blogging platform has the same capability as WordPress's custom post types. You can use the platform for a whole host of custom uses. This post will be the first in a series of articles that will help you elevate your site and make it easier to maintain using HubSpot's blogging platform. Our first use of the blog system is something that I get asked to create quite often: we will be building a press release blog. Let's get started.

1. Planning

In our case, the press releases are going to need a couple of things:
- The option for a featured image
- A 'published on' date
- Separation and categorization by year
- The ability to link to either an article hosted in your HubSpot platform or an external source, such as a PR agency website or another featured blog article

Once we have our Press Releases blog planned out and have a decent design, we are ready to start setting up the blog and developing our template.

2. Setup

The next step in this process is creating a new blog. Under "Content Settings" in your HubSpot portal, go to the blog link in the sidebar menu. Once there, click the "Add new blog" button. Name your blog and give it a URL. In this example, we've named ours 'Press Releases' and given it a URL of /press. Scroll down the blog settings a little further to "Date Formats" and click "Custom format" next to Publish Date Format.
In the dialog window that appears, type YYYY into the input and click Create Date Format. Once we have the blog set up in our system, we are ready to start developing our Press Release template. Note: we are still missing one portion of our Press Releases setup: selecting the correct template for our press blog. That step will come after we have actually created the new templates.

3. The Single View or Article Template

The single view or article view is the most important part of any blog-related template, because the view template dictates the options that will be available to your posts and determines all of the available features of our press release template. The good news is that it is generally the easiest part to code. Go to the Design Manager and click "New Template." A dialog box will appear. Select "Template Builder" and then "Blog." After clicking 'Create,' select the folder where your new template will live, name the template (in this example, we've gone with 'Bz Press') and click Create. Open the template in the Design Manager and click Edit Post Template. One of our features will be defined here: the ability to link to external URLs. The first item in your template should be this code:

{% text 'extURL' label="External URL", export_to_template_context='True' %}

This HubL module is a text module, in which we'll be able to add an external URL to another website or page instead of the internal URL of the blog post (if necessary). The next piece of code is the post body tag, which looks like this:

{{ content.post_body }}

All together, your code may look something like this:

{% text 'extURL' label="External URL", export_to_template_context='True' %}
<div class="blog-section">
  <div class="blog-post-wrapper cell-wrapper">
    <div class="blog-section">
      <div class="section post-body">
        {{ content.post_body }}
      </div>
    </div>
  </div>
</div>

4. The Listing Page

As I stated before, a blog is essentially just a list or feed of content.
In this case, a press release blog happens to be the same. Open the "Edit Listing Template" section of the Press Release template. This is where our listing code will go. There are a couple of items here that differ from a normal blog. For starters, if you remember, we changed the blog date format to year-only. This helps with grouping the articles by year, but it doesn't do much for displaying the actual publish date. What we have below is the publish date value, which normally outputs an ugly date stamp. However, with the datetimeformat filter on it, we can tell it how to display the date: %b is the month as a three-letter abbreviation, %d is the date in two-digit numerical form, and %Y is the year. This will give you the full date back.

<div>
  <span class="month">{{ content.publish_date|datetimeformat('%b') }}</span>
  <span class="day">{{ content.publish_date|datetimeformat('%d') }}</span>
  <span class="year">{{ content.publish_date|datetimeformat('%Y') }}</span>
</div>

The next step for this template is to test whether the author is using an external URL or not:

{% if content.widgets.extURL.body.value %}
  <div class="press-item__ext-link">External Link <i class="fa fa-external-link"></i></div>
{% endif %}

In this example, we are outputting the words "External Link" to signify that it is an external URL. However, you can also use this test to show internal links or external links. You may also want to test for a featured image the same way you test for an external URL; that way, you won't display the image if there isn't a reason to.

5. Group by Year

Next, we need to group the posts by year. We can do this using the groupby filter:

{% for group in contents|groupby('publish_date_localized') %}
  <div class="press-listing__group">
    <h1 class="press-item__year-group">{{ group.grouper }}</h1>
    {% for content in group.list %}

This little snippet of code shows us how the groupby filter works. You can see that we are first grouping by the publish_date_localized value.
Then, we can access the item we grouped by through group.grouper. Lastly, we can loop through each group and output the content that belongs to it in the format we want it to display. When all is said and done, the listing template code should look like this:

<div class="press-section">
  <div class="press-listing">
    {% for group in contents|groupby('publish_date_localized') %}
    <div class="press-listing__group">
      <h1 class="press-item__year-group">{{ group.grouper }}</h1>
      {% for content in group.list %}
      <div class="press-item row-fluid-wrapper flex-grid">
        <div class="row-fluid">
          <div class="span1 press-item__date">
            <span class="month">{{ content.publish_date|datetimeformat('%b') }}</span>
            <span class="day">{{ content.publish_date|datetimeformat('%d') }}</span>
            <span class="year">{{ content.publish_date|datetimeformat('%Y') }}</span>
          </div>
          <div class="span11 press-item__body">
            <!-- post summary -->
            {% if content.widgets.extURL.body.value %}
            <div class="press-item__ext-link">External Link <i class="fa fa-external-link"></i></div>
            {% endif %}
            {% if content.featured_image %}
            <div class="press-item__header">
              {% if content.widgets.extURL.body.value %}
              <a href="{{ content.widgets.extURL.body.value }}" title="" class="press-item__featured-image" style="background-image: url({{ content.featured_image }});"></a>
              {% else %}
              <a href="{{ content.absolute_url }}" title="" class="press-item__featured-image" style="background-image: url({{ content.featured_image }});"></a>
              {% endif %}
            </div>
            {% endif %}
            <h2 class="press-item__title">
              {% if content.widgets.extURL.body.value %}
              <a href="{{ content.widgets.extURL.body.value }}">{{ content.name }}</a>
              {% else %}
              <a href="{{ content.absolute_url }}">{{ content.name }}</a>
              {% endif %}
            </h2>
            <p class="press-item__summary">
              {{ content.post_list_content|striptags|truncate(300) }}
            </p>
          </div>
        </div>
      </div>
      {% endfor %}
    </div>
    {% endfor %}
  </div>
</div>

Style it a bit and you're all set.
When you are finished adding a test press release, it should work like a dream.

.press-listing__group:not(:last-of-type) {
  margin-bottom: 4em;
}
.press-item__year-group {
  color: black;
}
.press-item {
  margin-bottom: 1.5em;
}
.press-item__date {
  color: blue;
}
.press-item__date .month {
  display: block;
  text-transform: uppercase;
  font-size: 1.1em;
  line-height: 1;
  text-align: center;
}
.press-item__date .year {
  display: block;
  text-transform: uppercase;
  font-size: 1.1em;
  line-height: 1;
  text-align: center;
}
.press-item__date .day {
  display: block;
  text-transform: uppercase;
  font-size: 2.3em;
  line-height: 1em;
  text-align: center;
  font-weight: 700;
}
.press-item__body {
  border-left: 1px solid grey;
  padding-left: 1.5em;
}
.press-item__ext-link {
  color: darkgreen;
  text-transform: uppercase;
  font-size: .7em;
  margin-bottom: .35em;
}
.press-item__featured-image {
  display: block;
  height: 200px;
  background: #eee;
  margin-bottom: 1em;
  background-size: cover;
}
.press-item__title {
  margin-top: 0;
  font-weight: 400;
  line-height: 1.2;
}
.press-item__title a {
  color: blue;
  text-decoration: none;
}
@media screen and (max-width: 768px) {
  .press-item__date .month,
  .press-item__date .year,
  .press-item__date .day {
    text-align: left;
    display: inline-block;
    font-size: 1em;
    font-weight: 300;
    color: lightgrey;
  }
  .press-item__date .month,
  .press-item__date .day {
    font-weight: 400;
  }
  .press-item__body {
    border-top: 1px solid lightgrey;
    border-left: 0 solid lightgrey;
    padding-top: 1.5em;
    padding-left: 0;
  }
}

In this example, ours turned out pretty good, if you ask me.

You're done! This is a great way to add some dynamic content and easy editing to your site. I hope you've learned from this article, and if you have any questions or possibly a better way to do this, please comment below!

Chad Pierce

Chad is the lead designer and developer for Bluleadz Inbound Marketing Agency, father of 3 and husband of one.
https://www.bluleadz.com/blog/how-to-use-hubspots-blog-as-a-news-or-press-page-vlog
> Hello, I'd like a button to pop up when an input field has at least 1 character in it. I'm struggling with the syntax; everything I try gives me a "NullReferenceException: Object reference not set to an instance of an object". I tried using .Length but suspect that it only works for strings, so I tried to use .ToString to convert to something that can use .Length, but I'm not having any luck with that approach either. What steps/conversions do I need to follow/implement? An example with C# syntax would be ideal. Thanks in advance, best regards, Don

Posting your code will help identify the cause of the null reference and help us understand your approach. For example, the .text property of an InputField is already a string. One method would be to utilize the OnValueChange callback event of the InputField component: for every change to the field, check the length of the .text value.

@alucardj Thank you for the feedback! The reason I didn't post any code was because towards the end I was trying all sorts of different things and was deleting them as I went along. In retrospect, I should have at least posted my last attempt; maybe then my request for "example syntax" would have been more justified and as a result I might have been able to avoid that down-vote. Lesson learned. Evidently, as I mention below, the reason I was getting that error was because I tried to access a reference from another script (I thought I could do that, and I'm sure there's probably a way, I just haven't figured it out yet). Anyway, problem solved; thanks again!

As this just came up in another question, I had the link handy (for script-to-script references).

Maybe you could do this in a script placed on the InputField GameObject! It deactivates whatever Button you assigned to it when nothing is entered.
using UnityEngine;
using UnityEngine.UI;

public class InputFieldTest : MonoBehaviour
{
    public Button button;
    private InputField inputField;

    void Start()
    {
        inputField = GetComponent<InputField>();
    }

    void Update()
    {
        if (inputField.text == string.Empty)
        {
            if (button.gameObject.activeInHierarchy)
            {
                button.gameObject.SetActive(false);
            }
        }
        else
        {
            if (!button.gameObject.activeInHierarchy)
            {
                button.gameObject.SetActive(true);
            }
        }
    }
}

Answer by getyour411 · Dec 13, 2016 at 01:38 AM

// Add the UI namespace
using UnityEngine.UI;

Set up references to myInputField and myButton. In the Editor, drag and drop these UI elements from the hierarchy into the script fields that will show up; alternatively use GameObject.Find, a private field marked [SerializeField], etc.

public InputField myInputField;
public Button myButton;

In Update (or similar):

if (myInputField.text != "")
{
    myButton.gameObject.SetActive(true);
}

@getyour411 thank you for the elegant if-statement solution in your comment, it worked! I wish you'd have posted it as an answer instead of a comment so I could mark it "Best Answer"; it wouldn't let me do it. Anyway, I think the reason I was getting the "NullReferenceException: Object reference not set to an instance of an object" error was because I was trying to access a reference from another script (that always gets me). Once I created the reference in the script that was calling the if statement, it worked like a charm! Thanks again!

Glad it worked; I modified my comment to be an Answer and your Answer to be a comment. If you are happy, tick the Accept.
https://answers.unity.com/questions/1284583/how-to-detect-the-length-of-an-input-field-c-noob.html
String data types are quite often used to store hard-coded secrets in code. These secrets can be general-purpose literals used in applications, like connection strings, or business-specific secrets (like coupon codes, license keys, etc.). The fact that a vast majority of developers/applications keep sensitive data in string data types makes them quite interesting to hackers. In this article, we will demonstrate some techniques that hackers use to discover sensitive information stored within strings. There are a couple of things that I would like to make absolutely clear before we delve into any further discussion. First, in this article I will demonstrate how hackers can use tools like JustDecompile, ILDASM and Windbg to their advantage. What's important to remember is that none of these tools were built to facilitate hackers' interests. These tools are built for legitimate purposes and for good reasons. It's just that bad guys use the same tools for their own ends, which can be potentially harmful to others. It is therefore quite important to learn the techniques that hackers use so that we can be better prepared to mitigate the specific threats posed by those techniques. Finally, this article is by no means about defense mechanisms for the threats that I will demonstrate. I will briefly mention some common approaches that could be used to defend against these threats; however, it is not a comprehensive discussion of defense mechanisms and methodologies. With the CLR execution model, you write code in your favorite .NET language like C#, Managed C++, VB.NET, etc. This code is compiled into byte code called Microsoft Intermediate Language, or IL. So your application or .NET assembly actually contains code in the form of Intermediate Language. When you execute your application, at runtime this IL code is compiled into native CPU instructions. The process of compiling IL code into native instructions is called Just-In-Time compilation, aka JITting.
This intermediate language contained in .NET assemblies is highly detailed and preserves information about all data structures: classes, fields, properties, methods, parameters, even the method code. The presence of this highly verbose IL is what makes reverse engineering a .NET assembly quite trivial, enabling attackers to gain valuable information without having access to the actual source code. In order to find hard-coded secrets in an assembly, you can open the assembly in a .NET code browser like Telerik's JustDecompile or ILSpy. These code browsers provide the ability to open a .NET assembly and view the code in the form of C#, VB.NET or Intermediate Language. Let's take a brief look at our sample application. The code itself is very simple and straightforward, but it's good enough to demonstrate the concept. This sample application has a class called `Constants` that contains some strings used to store sensitive business information, as shown below:

```csharp
public class Constants
{
    public static string ConnectionString = "Data Source=HOMEPC; Initial Catalog=MyDb;User Id=SpiderMan;Password=secret;";
    public static string CouponCode50Pct = "AlphaBetaGamma50";
    public static string CouponCode75Pct = "AlphaBetaGamma75";
    public static string UserName = "SuperUser";
    public static string Password = "SuperSecretPassword";

    public Constants()
    {
    }
}
```

Now let's open the compiled executable assembly in Telerik's JustDecompile. Figure 2 shows the view of this assembly, and you can easily see these strings that should not be revealed to anyone. Typically in a real-life application, searching through strings by opening the assembly (or set of assemblies) in a .NET code browser (like Telerik JustDecompile) is a tedious job. Today's hackers are smart, and they have more efficient tools in their arsenal to find vulnerable pieces of code and plan their next attack accordingly.
One of those tools is the Intermediate Language Disassembler (a.k.a. ILDASM). Just like JustDecompile, ILDASM can be used to disassemble a .NET assembly and view its code; however, with ILDASM you can only view this code in the form of Intermediate Language. ILDASM can be operated in two modes: one using its graphical user interface, the other using the command console. ILDASM in console mode is what hackers typically prefer, especially to perform an information disclosure attack. Figure 3 shows how ILDASM can be used in command mode to search for string types present in .NET assembly code. We have used ILDASM with the `text` switch, which displays the decompiled assembly in the console window. This text is then piped into the `findstr` command. The argument to `findstr` is `ldstr`, an intermediate language instruction to load a string into memory/the evaluation stack. As you can see from the output of this command, it lists only the strings from our .NET assembly, including sensitive data like the connection string, user name, password, etc. One common mechanism widely used to protect intellectual property and hard-coded secrets is obfuscating your assembly. Most commercial obfuscation tools not only make .NET code less human-readable but also provide options like encrypting string types. Figure 4 below demonstrates the same ILDASM command when run against an obfuscated assembly with strings encrypted. I would like to point out that assembly obfuscation and string encryption are by no means a perfect solution. They do provide some protection against a casual user; however, a determined user with a reasonable skill set and enough time in hand can crack through many defense mechanisms. What we have seen so far are "static" attacks performed against .NET assemblies. String data types are also vulnerable to secret leaks during runtime.
What this means is that at runtime a hacker can attach a debugger and inspect the data stored within these string data types. Let's take a look at how this type of attack can be performed. I will use Windbg for this demo. Windbg is a native debugger that can also be used to debug managed applications with the help of extension DLLs. Windbg is part of "Debugging Tools for Windows" and can be downloaded from here. Let's take a look at how string secrets can be inspected at runtime. When you launch the demo application, a login screen appears. I have simplified a couple of things in this demo application to make it easy to follow. For instance, in a typical password field you don't see the characters the user is typing (just a * is displayed for each character). Another simplification is displaying a message box when the user hits the OK button. In a real-life application, clicking OK typically takes the user to a new screen. I have used this method to simplify the process of attaching Windbg to a running process. There are also ways of setting breakpoints in Windbg; you can read about one way of doing so on my blog here. The figure below shows the login screen from the demo. When you hit OK, a dialog will appear showing that the login has failed. At this point, launch Windbg and attach it to the running process. For this, you have to choose the "Attach to a Process" menu option under File, as shown below. Next, you will get the following dialog box to select the running process that you want to attach to. At this point, we would like to take a peek at the .NET managed heap and inspect string data type instances. There are various ways of doing it. I will simply use the `!strings` command to get to it.
The output of the `!strings` command can be really long, but the values we entered in the text boxes for username and password should be visible, as shown below. Microsoft has provided the `SecureString` class, which can help provide some protection against these types of attacks. Keep in mind that people have found ways of inspecting `SecureString` instances to determine the actual string value stored within.

In this article, we discussed why the string data type is of particular interest to hackers. We demonstrated some common ways that hackers can, both statically and at runtime, discover sensitive information stored in string data types. Hopefully, you will find this information useful the next time you store some secrets within string data types.
http://www.codeproject.com/Articles/401220/Why-hackers-love-String-data-type?fid=1732636&df=90&mpp=10&sort=Position&spc=None&tid=4425440
LOGIN_FBTAB(3)              BSD Programmer's Manual              LOGIN_FBTAB(3)

NAME
     login_fbtab - implement device security based on /etc/fbtab

SYNOPSIS
     #include <sys/types.h>
     #include <util.h>

     void login_fbtab(const char *tty, uid_t uid, gid_t gid);

DESCRIPTION
     The login_fbtab() function reads the /etc/fbtab file and implements
     device security as described in the fbtab(5) manual page.

FILES
     /etc/fbtab

DIAGNOSTICS
     Problems are reported via the syslogd(8) daemon with the severity of
     LOG_ERR.

SEE ALSO
     fbtab(5)

AUTHOR
     Wietse Venema (wietse@wzv.win.tue.nl)
     Eindhoven University of Technology
     The Netherlands

CAVEATS
     Previous versions of this routine used strtok(3), which can cause
     conflicts with other uses of that routine. This may be an issue when
     porting programs to OpenBSD 3.1 and below.

MirOS BSD #10-current                                                    June.
https://www.mirbsd.org/htman/i386/man3/login_fbtab.htm
I created an HD44780 LCD library for the AVR architecture today. There were a ton already, and mine is based off of one by Peter Fleury. I added a couple of cool little features and decided it needed to be on GitHub to allow people to improve it and access it more easily. Here's a quick code example using the library:

```c
#include <avr/io.h>
#include <avr/pgmspace.h>
#include <util/delay.h>
#include "lcd.h"

// Include the chars we want
#define CHAR_USE_OPEN_RECTANGLE
#define CHAR_USE_HEART
#include "chars.h"

int main(void)
{
    /* initialize display, cursor off */
    lcd_init(LCD_DISP_ON);
    lcd_command(LCD_FUNCTION_4BIT_2LINES);
    lcd_clrscr();

    // Testing if x,y are set wrong
    lcd_gotoxy(3, 1);

    // Load characters
    lcd_custom_char_p(0x00, _char_open_rectangle);
    lcd_custom_char_p(0x01, _char_heart);

    // We better still be at 3, 1
    lcd_putc(0);
    lcd_putc(1);
    lcd_putc(255);

    for(;;);
}
```

See how easy it is to define custom characters? Cool, huh? I think so. The heart and rectangle bit vectors are in chars.h, and are only compiled in when they are defined, keeping code size down. Here's how I have it hooked up on the breadboard (and the output of the program above):

Comment: Nice library. I note that the Arduino LCD library has an option to NOT use the "RW" line. You wire RW to GND. You can get away with this if you have only a single LCD on the data lines. It saves an I/O line (always scarce). Have you considered adding this as an option? Thanks.

Reply: I could probably do that. If you're feeling frisky, you could clone my GitHub repo, make the necessary changes, and send me a pull request, and I'd be glad to roll it in. Or, you could send me a patch file. Otherwise, I'm not likely to get around to it any time soon. Thanks, though!
http://www.ralree.com/2011/02/26/lcdiesel-yet-another-avr-hd44780-lcd-library/
random sentence

Hello all, I have 8 sentences. I would like to copy into my files, after an HTML tag, a random one of these 8 sentences. Is it possible with Notepad++? How? Thanks.

Reply (Terry R): @pouemes If you are thinking of using a regular expression (regex), then no. Regex does NOT have a calculating ability and has no random number generator with which to do an assessment and then pick 1 of 8 sentences. However, Notepad++ also provides the environment to support some programming languages, such as PythonScript. Of course, this then reduces the support field considerably. However, there might be a way that regex could "emulate" some randomness. It would depend on what occurs within the HTML tag. For example, if we selected a particular character (say the 8th inside the tag), knowing it could be anything within the alphabet, it could be possible to select a "random" sentence based on that character's value. So with 8 sentences you would create 8 lists of characters. Whether they be [a-d], [e-h], [i-l] types or [a,f,u], [b,e,z] types where the characters are random, this could help in the selection process. If you need the 8 sentences to be used only once, then the above idea fails.

Reply (cipher-1024): I don't think NPP has a random number generator. It's possible you could figure out something with a regex and use the line number or number of characters to get some faux randomness, but that's beyond my regexpertise. I would use a scripting plugin like Python or Lua to accomplish this. EDIT: Terry snuck a better answer in before me!

Reply (OP): Thanks for your answer, I shall see with Python.

Reply (Scott Sumner): So as a PythonScript example, this will grab and display a random line from the current file when run:

```python
from random import randint
notepad.messageBox(editor.getLine(randint(1, editor.getLineCount()) - 1).rstrip(),
                   'A random line from active file...')
```

Note that I'm kinda assuming that the OP's sentences will be one-per-line in a file...

Reply (OP): Thanks Scott.
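Scott's snippet reads a random line from the active file; the OP's original goal (drop one of N sentences in after an HTML tag) can be sketched in plain Python too. The sentence pool and the tag below are made-up placeholders, not from the thread:

```python
import random

# Hypothetical sentence pool -- stand-ins for the OP's 8 sentences
SENTENCES = [
    "Sentence one.",
    "Sentence two.",
    "Sentence three.",
]

def insert_after_tag(html, tag="</h1>"):
    """Insert a randomly chosen sentence right after the first
    occurrence of the given closing tag."""
    sentence = random.choice(SENTENCES)
    # Only the first occurrence of the tag is touched
    return html.replace(tag, tag + " " + sentence, 1)

print(insert_after_tag("<h1>Title</h1><p>Body</p>"))
```

The same idea would carry over into a PythonScript macro by replacing the plain string handling with the `editor` object's text access.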
https://notepad-plus-plus.org/community/topic/16608/random-sentence/2
## Introduction

This is the first blog post in a series about using the programming language F# for cloud-oriented tasks. I am new to using F# in practice, and this series of blog posts is essentially documentation of my explorations of F# as a language for building various cloud solutions, starting from the beginning of learning F#.

F# is an open-source, cross-platform functional language. It has been around for about 15 years and has been cross-platform since 2010 (it was Windows-only before that). It is a part of .NET. It can also be compiled to JavaScript (through Fable) and WebAssembly (through Bolero). It started out as an implementation of the language OCaml for the .NET platform, to bring a strongly typed functional language to .NET, and has evolved on its own since then.

I am a fan of functional languages and the expressive power they usually bring, and in the other languages I use I find that static type information is a benefit for many solutions. F# being part of .NET, there is obviously some usage in the Azure cloud space, but I mainly work with AWS and secondly GCP, so it is interesting to explore how useful it can be there. The .NET platform is unknown territory for me, so this is also part of the learning effort.

There is some great material for F#, but a lot of it seems to assume familiarity with .NET, C#, maybe Azure, and perhaps also working on Windows. I am writing this from the point of view of a .NET newbie, with no real experience with C#, working on a MacBook Pro. The primary cloud platform target will be AWS and then some GCP; no Azure planned. Hopefully, this can be useful for people without a .NET/C#/Azure background.
Before continuing, here are a few good links for material related to F#, well worth looking at for further exploration:

- F# for fun and profit
- Microsoft F# documentation
- F# Core library documentation (community edition)
- F# source code
- F# software foundation
- FsharpConf
- F# Online Youtube channel

Unfortunately, some of the reference links in the F# documentation at Microsoft may be broken.

So let's get started!

## Installation

The FSharp website has some links to get you started on various platforms (Linux, MacOS, Windows, FreeBSD), which essentially boil down to installing the .NET SDK. This is the minimum to get started with F# locally on your own computer. To get a feel for the language without installing anything (yet), one can go to Try F#, Repl.it and a few other places. For the version of .NET, pick the 5.0 release.

If you have downloaded and installed the .NET SDK, then the command-line tool `dotnet` will be available. Open a command-line window and run:

```shell
dotnet --info
```

This will show some information about the .NET installation. And that's it for the installation of F# itself.

## Running some F# code

F# has a bundled REPL (read-eval-print loop) in `dotnet` called F# Interactive, which can be used to type in F# code and/or run F# scripts. The `dotnet` toolset itself is mainly focused on building and managing projects and solutions, not simple scripts. In fact, when googling different "Hello world" implementations in F#, a lot of them would not work as a simple script, but rather require you to build a console application. This introduces a slightly larger scope of concepts, which is not really optimal for a quick start in my opinion. I also want to use F# for some simple scripts to start with, and thus do not want all the pieces involved in building a complete application.
### The REPL

First, let's start with some simple code using the F# Interactive REPL:

- We start the REPL using the command `dotnet fsi`.
- First just type a simple "Hello cloud!" at the prompt (`>`), by calling the function `printfn` with the hello cloud string, and press Enter.
- We get a new prompt (`-`), which indicates that FSI expects more input. The expression entered is a complete F# expression, but it is necessary to tell FSI that we are done, which is indicated by two semicolons (`;;`).
- The expression is evaluated and "Hello cloud!" is printed. Success!
- After the result, FSI prints type information about the expression: `val it : unit = ()`. For now, skip the details here.

You will see a pattern emerging after entering some expressions in FSI. Worth noting is that the parameter to the `printfn` function is just separated by a space from the function called.

The second variation is to provide an additional parameter to `printfn` to construct the string:

- First create a named value, using the `let` keyword. We use the identifier `message` to represent the string value "cloud".
- F# uses type inference, so in many cases there is no need to declare the type of an identifier; F# knows the type from the context anyway.
- Next we call the function `printfn` again, but in this case with two arguments: a string with a substitution pattern (`%s`) and the `message` identifier. The result of the execution is our hello cloud string again.
- Note that multiple parameters to the `printfn` function are just separated by spaces.

The third variation of our "Hello cloud!" is through defining a function of our own to call. This is what any non-trivial development would include anyway.

- First use the `let` keyword to create a named function called `hello`.
This is just like in the simple value case, only that we also include a parameter to the function, in this case the parameter named `message`.

- Although we can technically define the function on a single line, in this case we continue the definition of the function body on the next line. Note also that we indent the next line; similar to languages like Python, F# uses indentation to indicate code blocks.
- The function `hello` is then called with the parameter "cloud" and the result is printed out. Another success!

### Script file

So we started with three ways to implement "Hello cloud!" using the F# Interactive REPL. The next step is to create a script file to execute it in. To do that, we can create a file called Hello.fsx with the following content:

```fsharp
// This is our first hello cloud script. This is a comment line.
let hello message =
    printfn "Hello %s!" message

hello "cloud"
```

This script can then be run using the `dotnet` command:

```shell
dotnet fsi Hello.fsx
```

We can however do it a bit better and use shebang notation also:

```fsharp
#!/usr/bin/env dotnet fsi
// This is our first hello cloud script. This is a comment line.
let hello message =
    printfn "Hello %s!" message

hello "cloud"
```

Even though `#` is not a comment character in F#, FSI understands the shebang notation and this works just fine. Change the Hello.fsx file to be executable and then just run it:

```shell
>>> chmod u+x Hello.fsx
>>> ./Hello.fsx
Hello cloud!
>>>
```

Some notes:

- If you search for executing F# script files, you may find references to using the command `fsharpi`. This seems to be the old way, before FSI became a proper .NET component. I found `fsharpi` on my computer, but that was from a previous installation of .NET, and `fsharpi` started F# 4.5 and not F# 5.0, which was bundled with my .NET 5.0 installation and which I get when executing `dotnet fsi`.
- There are also references saying that you need to specify the `--exec` option when executing scripts. The documentation says this is to indicate to FSI that it should exit after executing the script.
Presumably, it would have stayed in the interpreter otherwise. However, it exits properly even without `--exec`, so I assume it nowadays knows when it is executing as a script and properly exits afterward anyway.

## Command line arguments

It is all good to be able to execute a simple script, but how can I pass command-line arguments to the script? It turns out that there are different ways, depending on whether you are running the code as a script (through FSI) or as a real application. We focus on the script option here. The F# Interactive docs specify that one can use `fsi.CommandLineArgs` in the script. So one way to process the command-line arguments is:

```fsharp
#!/usr/bin/env dotnet fsi
// Now with command-line arguments!
let hello messages =
    for message in messages do
        printfn "Hello %s!" message

hello fsi.CommandLineArgs
```

We have changed the `hello` function to take a parameter called `messages`, since there can be multiple arguments on the command line. In fact, what `fsi.CommandLineArgs` returns is an array of strings. In order to process each of the messages provided, we iterate over the messages using a `for` loop, which is one way to iterate over a collection of values. Note that when we execute this, the output will be like this:

```shell
❯❯❯ ./Hello.fsx cloud worker
Hello ./Hello.fsx!
Hello cloud!
Hello worker!
```

So the first argument in the array is the command itself. One way to get rid of that is to make a slice of the original array, skipping the first element (with index 0) and creating a new array that starts with the element at index 1 of `fsi.CommandLineArgs`:

```fsharp
#!/usr/bin/env dotnet fsi
// Now with command-line arguments!
let hello messages =
    for message in messages do
        printfn "Hello %s!" message

hello fsi.CommandLineArgs.[1..]
```

Now we just get the arguments to the script itself.
Another approach for iteration, which is more common than an explicit loop in many functional languages, is a pipeline approach: #!/usr/bin/env dotnet fsi // Now with command line arguments! let hello messages = messages |> Array.map (printfn "Hello %s!") hello fsi.CommandLineArgs.[1..] The output when executed will be the same as the previous version. There are a few concepts that are important here. First, the |> (forward pipe operator) is an operator that takes the result of the expression on the left side and provides that as a parameter to the function on the right side. The result of the right-hand side of |> is a function - expressions in F# can produce function results. A simple concrete example in the REPL: > printfn "Hello %s!" "cloud" - ;; Hello cloud! val it : unit = () > "cloud" |> printfn "Hello %s!" - ;; Hello cloud! val it : unit = () > So essentially the last function parameter to the function on the right side of |> is provided with the result of the expression on the left side of |>. Second, the expression Array.map (printfn "Hello %s!") produces a function and this expression in itself has multiple functions (Array.map and printfn). Array.map represents a function map in the module Array. A module is a container for types, functions and other modules. Map functions operate on collections of data (in this case an array) and applies a function on each element of that collection. The result is a new collection of these results. A concrete example in the REPL: > let arr = [| "AWS"; "GCP"; "Azure" |] // Define an array of strings - ;; val arr : string [] = [|"AWS"; "GCP"; "Azure"|] > let hellostr Array.map hellostr arr // Apply hellostr function to all strings in array - ;; val it : string [] = [|"Hello AWS"; "Hello GCP"; "Hello Azure"|] > From this example, we can see that the function parameter in Array.map (printfn "Hello %s!") expression is (printfn "Hello %s!"). This is a function call that produces another function actually. 
In F# a function call that does not have all expected parameters produce another function that takes the missing parameters as arguments. Another example in the REPL: > let hellostr = printfn "Hello %s!" // Define function hellostr based on function call to printfn with missing parameter - ;; val hellostr : (string -> unit) > hellostr "cloud" - ;; Hello cloud! val it : unit = () > This is an example of partial function application. In the expression Array.map (printfn "Hello %s!") we need the parenthesis to make sure that Array.map takes the result of printfn "Hello %s!" as the function to use - if there were no parenthesis, it would just try to use printfn as the function. Playing around with the pipeline approach, that could be used for stripping away the first element of fsi.CommandLineArgs also: #!/usr/bin/env dotnet fsi let hello messages = messages |> Array.map (printfn "Hello %s!") hello (fsi.CommandLineArgs |> Array.skip 1) So far we have touched a bit on some F# functional concepts and done some simple variations of a Hello cloud-script. Next, it is time to actually connect and use some cloud services. Connecting to AWS In order to programmatically connect to AWS, we will need an AWS account and an AWS profile on the locale computer that can access that AWS account. If you do not have an AWS profile configured yet, you can look at this AWS documentation. For this you need the AWS CLI installed. If that is not yet installed, look at this documentation. Now, before we can communicate with AWS services using F#, we also need the AWS SDK for .NET. If we were building a .NET application, this would be fairly easy to get, since the dotnet command-line tool can add packages, such as AWS SDK for .NET, to a project. This is not as obvious though when using scripts, but thanks to version 5.0 of .NET, it is pretty simple. 
Access S3 Let us do a simple script to list the AWS S3 buckets in a configured AWS account - this is at least my typical "hello cloud" type of activity when I want to test some access to an AWS account. So in this case let us do a script HelloS3.fsx to print names and creation date & time of S3 buckets in the account. In order to figure things out with AWS SDK for .NET if is useful to have a look at the reference documentation. The AWS SDK for .NET consists of a number of separate packages, more or less one for each service. In our case, we will need two of them - AWSSDK.Core and AWSSDK.S3. These can be retrieved from the official .NET package management service, NuGet. With .NET 5.0 it is easy to include these in the script: #!/usr/bin/env dotnet fsi // Get the AWS SDK packages/assemblies needed #r "nuget: AWSSDK.Core" #r "nuget: AWSSDK.S3" The #r directive in FSI can be used to reference the assemblies needed. What is new in .NET 5.0 is that if the prefix "nuget:" is added, it will automatically download the package from NuGet, if needed. It seems other package managers can be added as well, but only NuGet is available by default. AWS SDK calls to various AWS services, regardless of language, in many cases require a client handle - one for each AWS service used. So a starting point is to obtain this client handle for S3, in this case. We also set up our main function to execute here: #!/usr/bin/env dotnet fsi // Get the AWS SDK packages needed #r "nuget: AWSSDK.Core" #r "nuget: AWSSDK.S3" open Amazon.S3 let helloS3 () = let client = new AmazonS3Client() "dummy" helloS3() The helloS3 function does not have any parameters, so we need to use empty parenthesis both in the declaration and call to distinguish it from just a simple value. Leave them out and F# will complain about it. The line open Amazon.S3 is similar to an import statement in some other languages - it makes the names from a module/namespace available in the current namespace. 
If open was not used, the namespace/module path would need to be specified completely, e.g. Amazon.S3.AmazonS3Client instead of just AmazonS3Client. In the SDK, AmazonS3Client is a class. F# supports object programming, but the way programs are typically constructed is not the same as more object-oriented languages. It needs to be and is interoperable with other .NET languages. The expression new AmazonS3Client() is to create a new instance of that class. In the helloS3 function there is a string "dummy" added. This is because F# does not like functions to end with a let expression, there must be another expression returning a result. So the dummy string is added here - it will be removed later. This is a functional program, but it does not do anything useful - running it will not generate any output - but should not give any error either - assuming that there is an AWS profile configured properly. If the profile you have created is not named "default", you can run the script by setting the AWS_PROFILE environment variable in the call to the script with the AWS profile name: ❯❯❯ AWS_PROFILE=myprofilename ./HelloS3.fsx Next, we will define a function to get bucket info and return it to us. This will be one string per bucket. We start by adding a dummy function first: #!/usr/bin/env dotnet fsi // Get the AWS SDK packages needed #r "nuget: AWSSDK.Core" #r "nuget: AWSSDK.S3" open Amazon.S3 let getBucketsInfo (s3Client: AmazonS3Client) = [| "Bucket 1 info"; "Bucket 2 info" |] let helloS3 () = let client = new AmazonS3Client() let bucketsInfo = getBucketsInfo client for bucketInfo in bucketsInfo do printfn "%s" bucketInfo helloS3() Note here that the parameter for the function getBucketsInfo has type information added. We will call methods on the S3 client handle, so F# will need to know the type and since it is statically typed, it will need to know this at compile time. So in this case we need to add type information when we implement the function for real. 
The main function helloS3 calls the function to get the result and then prints that out. Running this should print the dummy info: ❯❯❯ AWS_PROFILE=myprofilename ./HelloS3.fsx Bucket 1 info Bucket 2 info ❯❯❯ To actually get the bucket information, the SDK has methods to list buckets. There is some difference when using the SDK for different platforms, .NET, Windows .NET, Unity as well as Xamarin. I was not aware of the differences initially and I picked the wrong method calls initially. For .NET (Core), only the asynchronous AWS service calls are implemented, i.e. the calls that end with Async in the name. So the method ListBuckets is not available even though it is in the documentation, but ListBucketsAsync is available. Both the synchronous and the asynchronous will have a result represented by a ListBucketsResponse. In the asynchronous case though, it will not be available directly. The (successful) response will contain a sequence of buckets, represented by the S3Bucket class. So we can add the needed info to create a string with bucket info from an S3Bucket object through a function: open Amazon.S3.Model let getBucketInfo (bucket: S3Bucket) = sprintf "Name: %s created at %O" bucket.BucketName bucket.CreationDate The S3Bucket class is in a different module, so we add an open Amazon.S3.Model to be able to reference that directly. The function sprintf is used to construct a new string based on the parameters provided with the formatting template string. Now we come to the somewhat tricky part, which I struggled a bit with - doing asynchronous calls with the AWS SDK. I could find a few examples in C#, but none in F#. F# documentation for asynchronous programming helped a bit, but it was not crystal clear for this use case how to do this. 
I got this to work after some trial and error though in a reasonably compact way and the result was this for the full script: #!/usr/bin/env dotnet fsi // Get the AWS SDK packages needed #r "nuget: AWSSDK.Core" #r "nuget: AWSSDK.S3" open Amazon.S3 open Amazon.S3.Model let getBucketInfo (bucket: S3Bucket) = sprintf "Name: %s created at %O" bucket.BucketName bucket.CreationDate let listBuckets (s3Client: AmazonS3Client) = async { let! response = s3Client.ListBucketsAsync() |> Async.AwaitTask return response } let helloS3 () = let client = new AmazonS3Client() let response = (listBuckets client) |> Async.RunSynchronously let bucketsInfo = (List.ofSeq response.Buckets) |> List.map getBucketInfo for bucketInfo in bucketsInfo do printfn "%s" bucketInfo helloS3() The asynchronous call is wrapped in what is called a computation expression of type async. A computation expression is a generalized way to describe certain behaviors or workflows. It is not a topic that needs to be understood in depth at this point. There is a lot of material on the topic here, for those who want to jump into this. This particular case of computation expression is a bit similar to promises/futures in other languages, but computation expressions is a more generic construct and there are other use cases as well. Anyway, the code that should run asynchronously is wrapped in async { }. The keyword let! is used similarly to let, but there is not yet a result when the (asynchronous) function call returns. The return value from ListBucketsAsync is a Task object. This can then be sent to the function Async.AwaitTask which waits for the result produced. This is then returned using the return keyword, which is one way to get data out of a computation expression. I had missed that this was needed explicitly initially, so I tried to just state the response itself on the last line, which did not work - no result provided to the caller. 
After re-reading the documentation I did find the return keyword and it worked out fine. The returned data is of type ListBucketsResponse - which we do not need to specify explicitly. In the ListBucketsResponse object, there is a list of buckets in the Buckets property. However, this is not the same type of list as the default F# immutable list, so trying functions from the List module did not work. It is instead a System.Collections.Generic.List, a generic list type for .NET. I struggled a bit to figure out how to work with this type of list and ended up converting it to an F# list with the List.ofSeq function. Then I could use functions in the List module on the result. The very final bit is that the asynchronous code also needs to be triggered, and one way seems to be with the Async.RunSynchronously function call. I am not sure if this is the most suitable approach, but it works for the happy case. Time will tell if there will be other approaches. From a design perspective, the listBuckets function only performs the call to ListBucketsAsync and other logic is done separately. This is to keep a clear separation between code that has side effects (reading data from AWS) and our computation logic.

Closing remarks for part 1

Source code

The source code in the blog posts in this series is posted to this Github repository:

Finding F# information

I have listed a few resources that I have found earlier in this blog post. I much enjoyed a lot of the material at F# for fun and profit and the F# language site. The Microsoft documentation for F# is reasonably good but has some room for improvement, I think. I still do not know much about .NET, so there is a learning curve there also, and it would have been more helpful with more examples in F# rather than almost exclusively in C#. This also goes for the AWS SDK for .NET.
Most of the Youtube videos and introduction posts about the language praise it and say that it is in many ways better or more enjoyable to use than C#. I do not know about using C#, but I do find the experience with F# quite enjoyable, despite occasional struggles with finding information or examples.

Development environment

I have not touched on the development environment and tools. The main IDEs (integrated development environments) seem to be:

- Visual Studio
- Visual Studio Code (VS Code)
- Jetbrains Rider

I have never used Visual Studio, but it seems to be available for multiple platforms nowadays. VS Code has plugins for F# and that seems to be pretty good as far as I can tell. If you use VS Code for other languages, that may work well for you. Jetbrains Rider is a commercial IDE from Jetbrains and is part of the IntelliJ family of IDEs. Personally, I am a big fan of their products and use them both at work and for private projects (GoLand, PyCharm, IntelliJ). So my choice, for now, is to try it out with Rider on a trial basis. Rider is more oriented towards solutions/projects than the standalone scripts that I work with now, and it does not handle the .NET 5.0 NuGet references in the script right now. Other than that, I am pretty happy with it so far.

TTP - Time to print

F# code is compiled, and running a script triggers the just-in-time (JIT) compilation of the code. This means there is a bit of a cold start time. It remains to be seen how much of an issue this will be when running plain scripts.

Does it spark joy?

There are numerous videos, blog posts, and other material out there that talk about what programming languages to learn, the top N languages according to some criteria, etc. I do not often see references in such comparisons to whether the language or tooling/development workflow sparks joy. In contrast, many introductions to F# talk a bit about the joy of programming with the language.
I think F# looks like a good candidate for a bit of sparkle and I hope this will continue.

Next steps

If you like this, then feel free to continue with part 2!

Discussion (2)

A great series of posts! I'm fairly new to F# but already love the language. This will be very helpful for my understanding. Thank you!

Thank you for your comment, glad you like it!
https://dev.to/eriklz/f-for-the-cloud-worker-part-1-26c6
The QMoveEvent class contains event parameters for move events.

#include <qevent.h>

Inherits QEvent.

Move events are sent to widgets that have been moved to a new position relative to their parent. The event handler QWidget::moveEvent() receives move events. See also QWidget::pos, QWidget::geometry, and Event Classes.

Constructs a move event with the new and old widget positions, pos and oldPos respectively.

Returns the old position of the widget.

Returns the new position of the widget. This excludes the window frame for top level widgets.

This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.2/qmoveevent.html
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project.

On 04/26/2013 08:52 AM, Joel Sherrill wrote:
> How about a patch like this which adds a warning comment to speak up
> and an #if 0/#endif

Idea seems sane to me.

> *
> - * NOTE: This is to be executed at task exit. It does not tear anything
> + * NOTE: This is to be executed at task exit. It does not tear
> anythingkkk

Why the spurious change?

> +#endif

Is it worth making it obvious what this pairs with?

#endif /* 0 */

--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library

Attachment: signature.asc
Description: OpenPGP digital signature
https://sourceware.org/ml/newlib/2013/msg00308.html
Padre::Plugin - Padre Plugin API 2.2

package Padre::Plugin::Foo;

use strict;
use base 'Padre::Plugin';

# The plugin name to show in the Plugin Manager and menus
sub plugin_name {
    'Example Plugin';
}

# Declare the Padre interfaces this plugin uses
sub padre_interfaces {
    'Padre::Plugin'         => 0.29,
    'Padre::Document::Perl' => 0.29,
    'Padre::Wx::Main'       => 0.29,
    'Padre::DB'             => 0.29,
}

# The command structure to show in the Plugins menu
sub menu_plugins_simple {
    my $self = shift;
    return $self->plugin_name => [
        'About'   => sub { $self->show_about },
        'Submenu' => [
            'Do Something' => sub { $self->do_something },
        ],
    ];
}

1;

The plugin_name method will be called by Padre when it needs a name to display in the user interface. The default implementation will generate a name based on the class name of the plugin.

The plugin_directory_share method finds the location of the shared files directory for the plugin, if one exists. It returns a path string if the share directory exists, or undef if not.

The plugin_directory_locale() method will be called by Padre to know where to look for your plugin's l10n catalog. It defaults to $sharedir/locale (with $sharedir as defined by File::ShareDir) and thus should work as-is for your plugin if you're using the install_share command of Module::Install. Your plugin catalogs should be named $plugin-$locale.po (or .mo for the compiled form), where $plugin is the class name of your plugin with any characters that are illegal in file names (on all file systems) flattened to underscores. That is, Padre__Plugin__Vi-de.po for the German locale of Padre::Plugin::Vi.

sub padre_interfaces {
    'Padre::Plugin'         => 0.43,
    'Padre::Document::Perl' => 0.35,
    'Padre::Wx::Main'       => 0.43,
    'Padre::DB'             => 0.25,
}

In Padre, plugins are permitted to make relatively deep calls into Padre's internals. This allows a lot of freedom, but comes at the cost of allowing plugins to damage or crash the editor.
To help compensate for any potential problems, the Plugin Manager expects each Plugin module to define the Padre classes that the Plugin uses, and the version of Padre that the code was originally written against (for each class). This information will be used by the plugin manager to calculate whether or not the Plugin is still compatible with Padre. The list of interfaces should be provided as a list of class/version pairs, as shown in the example. The padre_interfaces method will be called on the class, not on the plugin object. By default, this method returns nothing. In future, plugins that do NOT supply compatibility information may be disabled unless the user has specifically allowed experimental plugins.

The new constructor takes no parameters. When a plugin is loaded, Padre will instantiate one plugin object for each plugin, to provide the plugin with a location to store any private or working data. A default constructor is provided that creates an empty HASH-based object.

sub registered_documents {
    'application/javascript' => 'Padre::Plugin::JavaScript::Document',
    'application/json'       => 'Padre::Plugin::JavaScript::Document',
}

The registered_documents method can be used by a plugin to define document types for which the plugin provides a document class (which is used by Padre to enable functionality beyond the level of a plain text file with simple Scintilla highlighting). This method will be called by the Plugin Manager and the information returned will be used to populate various internal data and do various other tasks at a time of its choosing. Plugin authors are expected to provide this information without having to know how or why Padre will use it. This (theoretically at this point) should allow Padre to keep a document open while a plugin is being enabled or disabled, upgrading or downgrading the document in the process. The method call is made on the Plugin object, and returns a list of MIME-type to class pairs.
By default the method returns a null list, which indicates that the plugin does not provide any document types.

Default method returning an empty array.

TBD. See Padre::Document

TBD. See Padre::Document

If implemented in a plugin, this method will be called when a context menu is about to be displayed, either because the user pressed the right mouse button in the editor window (Wx::MouseEvent) or because the Right-click menu entry was selected in the Window menu (Wx::CommandEvent). The context menu object was created and populated by the Editor and then possibly augmented by the Padre::Document type (see "event_on_right_down" in Padre::Document). Parameters retrieved are the objects for the document, the editor, the context menu (Wx::Menu) and the event. Have a look at the implementation in Padre::Document::Perl for an example.

The plugin_enable object method will be called (at an arbitrary time of Padre's choosing) to allow the plugin object to initialise and start up the Plugin. This may involve loading any configuration the plugin needs. It should return true if the plugin started up ok, or false on failure. The default implementation does nothing, and returns true.

The plugin_disable method is called by Padre for various reasons to request that the plugin do whatever tasks are necessary to shut itself down. This also provides an opportunity to save configuration information, save caches to disk, and so on. Most often, this will be when Padre itself is shutting down. Other uses may be when the user wishes to disable the plugin, when the plugin is being reloaded, or if the plugin is about to be upgraded. If you have any private classes other than the standard Padre::Plugin::Foo, you should unload them as well, as the plugin may be in the process of upgrading and will want those classes freed up for use by the new version. The recommended way of unloading your extra classes is using Class::Unload.
Suppose you have My::Extra::Class and want to unload it; simply do this in plugin_disable:

require Class::Unload;
Class::Unload->unload('My::Extra::Class');

Class::Unload takes care of all the tedious bits for you. Note that you should not unload any external CPAN dependencies, as these may be needed by other plugins or Padre itself. Only classes that are part of your plugin should be unloaded. Returns true on success, or false if the unloading process failed and your plugin has been left in an unknown state.

my $hash = $self->config_read;
if ( $hash ) {
    print "Loaded existing configuration\n";
} else {
    print "No existing configuration";
}

The config_read method provides access to host-specific configuration stored persistently for the plugin, returning it as a HASH reference, or undef if there is no existing saved configuration for the plugin.

$self->config_write( { foo => 'bar' } );

The config_write method is used to write the host-specific configuration information for the plugin into the underlying database storage. At this time, the configuration must be a nested, non-cyclic structure of HASH references, ARRAY references and simple scalars (the use of undef values is permitted) with a HASH reference at the root.

$plugin->plugin_preferences($wx_parent);

The plugin_preferences method allows a plugin to define an entry point for the Plugin Manager dialog to trigger to show a preferences or configuration dialog for the plugin. The method is passed a wx object that should be used as the wx parent.

sub menu_plugins_simple {
    'My Plugin' => [
        Submenu => [
            'Do Something' => sub { $self->do_something },
        ],
        '---' => undef,            # Separator
        About => 'show_about',     # Shorthand for sub { $self->show_about(@_) }
        "Action\tCtrl+Shift+Z" => 'action',  # Also use keyboard shortcuts
                                             # to call sub { $self->action(@_) }
    ];
}

The menu_plugins_simple method defines a simple menu structure for your plugin.
It returns two values: the label for the menu entry to be used in the top level Plugins menu, and a reference to an ARRAY containing an ordered set of key/value pairs that will be turned into menus. If the key is a string containing three hyphens (i.e. '---'), the pair will be rendered as a menu separator. If the key is a string containing a tab ("\t") and a keyboard shortcut combination, the menu action will also be available through a keyboard shortcut. If the value is a Perl identifier, it will be treated as a method name to be called on the plugin object when the menu entry is triggered. If the value is a reference to an ARRAY, the pair will be rendered as a sub-menu containing further menu items.

sub menu_plugins {
    my $self = shift;
    my $main = shift;

    # Create a simple menu with a single About entry
    my $menu = Wx::Menu->new;
    Wx::Event::EVT_MENU(
        $main,
        $menu->Append( -1, 'About' ),
        sub { $self->show_about },
    );

    # Return it and the label for our plugin
    return ( $self->plugin_name => $menu );
}

The menu_plugins method defines a fully-featured mechanism for building your plugin menu. It returns two values: the label for the menu entry to be used in the top level Plugins menu, and the Wx::Menu object itself.

sub editor_enable {
    my $self     = shift;
    my $editor   = shift;
    my $document = shift;

    # Make changes to the editor here...

    return 1;
}

The editor_enable method is called by Padre as an editor is set up, and exists for this plugin and other plugins that need deep integration with the editor widget.

sub editor_disable {
    my $self     = shift;
    my $editor   = shift;
    my $document = shift;

    # Undo your changes to the editor here...

    return 1;
}

The editor_disable method is the twin of the previous editor_enable method. It is called as the file in the editor is being closed, AFTER the user has confirmed the file is to be closed. It gives this plugin and other plugins that need deep integration with the editor widget a chance to undo their changes.
The ide convenience method provides access to the root-level Padre IDE object, preventing the need to go via the global Padre->ide method. The main convenience method provides direct access to the Padre::Wx::Main (main window) object. The current convenience method provides a Padre::Current context object for the current plugin. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. The full text of the license can be found in the LICENSE file included with this module.
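As an aside on the padre_interfaces declaration earlier: the documentation does not spell out the compatibility calculation, but the idea can be sketched in a few lines of Python (not Perl). The rule below (every declared interface must exist, with a current version at least the version the plugin was written against) is an assumption for illustration, not Padre's actual algorithm:

```python
def is_compatible(declared, current):
    # declared: interface name -> version the plugin was written against
    # current:  interface name -> version provided by the running editor
    for name, wanted in declared.items():
        if name not in current or current[name] < wanted:
            return False
    return True

# Hypothetical example mirroring the padre_interfaces hashes above
declared = {"Padre::Plugin": 0.29, "Padre::DB": 0.29}
current = {"Padre::Plugin": 0.43, "Padre::DB": 0.25}
```

Under this rule the example plugin would be rejected, because the editor's Padre::DB interface version (0.25) is below what the plugin declared (0.29).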
http://search.cpan.org/~garu/Padre-0.45/lib/Padre/Plugin.pm
The following thing worked for me: Microsoft.TeamFoundation.TestImpact

For those that have the same issue: it's my blunder (maybe). The @Test annotated method has no access specifier; I just use void method(). I added public and it started working! Thanks @user2507946. Running from the IDE helped solve the issue.

You are creating 2 Chrome instances in your code. First in your @BeforeClass:

@BeforeClass
public static void start() throws Exception {
    System.setProperty("webdriver.chrome.driver", "E:/Selenium/lib/chromedriver.exe");
    driver = new ChromeDriver();

And then again when you create the LoginPage object:

public class LoginPage {
    WebDriver driver = new ChromeDriver();

So depending on your need, you have to remove one of these. I can't really see where the user and pass variables are assigned, but assuming they are web elements you probably get them from driver. So if you don't initiate driver in LoginPage, you get a NullPointerException indeed. If you still need to start the driver in your @BeforeClass and you want to use the same driver instance in your code, you could add a getter for this.

A good way to run Selenium Java code in Eclipse is to run it as JUnit tests.

1. Create a Maven Project in your Eclipse. If you haven't done this before, see: How to install/integrate Maven with Eclipse; How to create a simple Maven project.
2. Add the following dependencies to your pom.xml file:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.7</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>2.25.0</version>
</dependency>
<dependency>
    <groupId>org.seleniumhq.seleni

I haven't seen any commands that make this happen. Official command line documentation: You could author your tests to load from an external file (xml, ini, etc.) or source (db), which would simulate this. If the application were to generate the data, the test methods could load it and use the data during execution. I've been doing something similar where I generate a settings file containing various information for a web application to be tested; it then launches the test, and the tests load from that data file and use the settings as parameters.

Something very similar to this was done for the Functional Programming Principles in Scala course with Martin Odersky at the EPFL. They used two sets of tests: one which was published with the code, which the students could run while coding (using sbt test), and one set which lived on the server, which was run when the code was published. The second (server) set was the one which produced the report and gave the grade. This was fed back using the Coursera site, but you can always send a mail or something. The server side wasn't done synchronously; it was done as part of a batch. The code was also graded on various style points using Scalastyle. Don't base your grades on code which is executed on the student machines. That isn't secure.

Yes, you can do this without using a plugin. Just extend from the trait org.specs2.mutable.BeforeAfter; it comes with before and after methods, so implement them (see the documentation).

I haven't done it myself, but you could try to configure two different executions of the maven-surefire-plugin and configure one of these executions to only run JUnit tests and the other to only run TestNG tests. To differentiate between JUnit and TestNG tests you should either put the tests in different directories or use a distinguishable name scheme, and configure each of the executions with suitable inclusion and exclusion patterns. In theory this should work :)

A subclass of unittest.TestCase is like any other class, hence you can write a unittest.TestCase that checks whether its methods work as they should. In particular you should build sets of pairs of numbers that should pass and fail the test, and then call the assertSFAlmostEqual method with these inputs and see whether the tests pass or fail. The answer you linked does this, even though it's probably a more complicated solution than what's required. For example, I'd simply have written something like:

import unittest

class MyBaseTestCase(unittest.TestCase):
    def assertSpec(self, thing):
        assert thing == 123

class TestMyTest(MyBaseTestCase):
    def test_failures(self):
        self.assertRaises(AssertionError, self.assertSpec, 121)
    def test_successes(self):
        sel

I think that's by design. NUnit is designed to clean up the environment before any test, to generate the same conditions for every test, independent of execution order. If Test A changed the XDocument, Test B would be run with those changes. That might lead to indeterminate test results. If Test B expects changes made by Test A, then your tests are not fully isolated, and that's a bad practice. If you want to change that behaviour, implement a lazy field and load the test data only on first access. You can increase performance if you know that you are not changing data in any of your tests, but pay attention.

private static Lazy<IEnumerable> testData = new Lazy<IEnumerable>(GetExample);

private static IEnumerable GetExample()
{
    var doc = XDocument.Load("Example.xml");

Make it an independent function.

def run_main():
    ....

if __name__ == "__main__":
    run_main()

And you can call run_main() from another file.

First, you have an odd indentation in the line with if __name__ ==... - guess you should check it in your script. Then, make sure what current directory your script runs in; AFAIK it is your $HOME - this is where the file would appear.

Make all of them lists and then iterate over the list, executing each in turn.

for actionVal, actionDesc, actionFunctions in validActions:
    if ctx["newAction"] == actionVal:
        for actionFunction in actionFunctions:
            actionFunction()

Use the subprocess module.

import os
import signal
import subprocess
import sys

params = [...]
for param in params:
    proc = subprocess.Popen(['/path/to/CProg', param.., param..])
    subprocess.call([sys.executable, 'B.py', param.., param...])
    os.kill(proc.pid, signal.SIGINT)
    proc.wait()

operator.methodcaller() will give you a function that you can use for this:

map(operator.methodcaller('method_name'), sequence)

If you are running Jenkins as a Windows service, by default it runs as the user Local System. Did you check the box titled "Allow service to interact with desktop"? If that does not help, you may have to set the service to log on as an actual user, instead of Local System. This is a common problem with running any process with a GUI from Jenkins.

The way to address these deficiencies is: get the full path to the Python interpreter executing setup.py from sys.executable. Classes inheriting from distutils.cmd.Command (such as distutils.command.install.install, which we use here) implement the execute method, which executes a given function in a "safe way", i.e. respecting the dry-run flag. Note however that the --dry-run option is currently broken and does not work as intended anyway. I ended up with the following solution:

import os, sys
from distutils.core import setup
from distutils.command.install import install as _install

def _post_install(dir):
    from subprocess import call
    call([sys.executable, 'scriptname.py'],
         cwd=os.path.join(dir, 'packagename'))

class install(_install):
    def run(self):
        _in

Just give them as separate strings in the array, instead of combining the last two into "val_31 val_32":

String[] command = {
    "script.py", "run",
    "-arg1", "val1",
    "-arg2", "val2",
    "-arg3", "val_31", "val_32",
};

Otherwise it will escape the space in between val_31 and val_32, because you are telling it that they're a single parameter. Incidentally, you can also use the varargs constructor and skip having to create an array, if you want:

ProcessBuilder probuilder = new ProcessBuilder(
    "script.py", "run",
    "-arg1", "val1",
    "-arg2", "val2",
    "-arg3", "val_31", "val_32");

I can't think of a better way than updating a boolean inside the for loop.

any_results = False
for x in g:
    any_results = True
    print x
if not any_results:
    print 'Done'

You can't do that. Instead you can filter which tests are shown to you, if that suits your needs.

I've also just experienced this same issue, and it seems that it comes down to the test framework extracting the controller name from the name of the testing class. The convention is that the test class is named <controller name>ControllerSpec. In the above case, the test class should be named PriceTierControllerSpec so that the test framework will successfully resolve the controller to PriceTier. Naming the class according to these guidelines seems to fix this problem.

Got my answer there: that's because a run configuration is created from a template, and that template has a default setting to pick classes across module dependencies. For me in 132.46 the following helps:

1. Open the Run Configuration dialog, Defaults section.
2. Find the respective template. I tried JUnit.
3. In the "Test kind" combo, select "All in package".
4. Set "In single module".
5. Apply to save the template.

After that, delete the created configurations and repeat "Run All tests". It picks only classes from the current module for me. The possible improvement in IDEA is to modify these defaults specially for Maven-based projects. Rather specific change...
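On the "print each item, or Done if nothing was yielded" answer in this thread: the boolean flag is perfectly fine, but one alternative that avoids it is pulling the first item with a sentinel default. A sketch (returning the items instead of printing, so the behaviour is easy to check):

```python
_SENTINEL = object()

def items_or_done(iterable):
    it = iter(iterable)
    first = next(it, _SENTINEL)   # the sentinel means "nothing was yielded"
    if first is _SENTINEL:
        return ["Done"]
    return [first] + list(it)
```

This works for generators too, where you cannot check emptiness up front with len().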
http://www.w3hello.com/questions/How-to-execute-a-python-test-using-httpretty-sure
This tool works with common C++ compilers (such as GCC and VC 7.1) to leverage C++ constructs for scripts. It is simply beyond my grasp that one needs 'special script' languages, like Perl, in order to do 'small' tasks. More info is found at. Release information is found at CpshVersion1.

Sample Sessions

I.e., one can have this shell file, that goes over all files in a given directory and prints out the big ones:

#!/usr/bin/cpsh
path topPath = argv[1]
remove_copy_if (
    directory_iterator(topPath), directory_iterator(),
    bind(less<size_t>, bind(file_size,_1), 10000000L),
    ostream_iterator<path>(cout, '\n'));

Using ordinary C++ (with the Boost library accessible directly, such as the 'path' type in the filesystem sub namespace).

It also comes with an interactive shell, via the '-i' option, enabling sessions as (the text following '>' is what the user types):

> vector<string> names;
> ? sizeof names
< 20
> ? names.size()
< 0
> const char* texts[] = { 'John', 'Greg', 'Bob' };
> copy (texts, texts+3, insert_iterator(names));
> ? names.size();
< 3
> transform (names.begin(), names.end(), \
>>   bind(substr, _1, 1), \
>>   ostream_iterator<string>(cout, ' '));
{ohn reg ob }

This is perfect for learning the language or verifying one's knowledge about complicated constructs. So, why learn a new scripting language when all the power you need is available in C++, at least as succinctly expressed as with any other language? Oh, did I forget to mention that C++ executables are a bit faster than interpreted Ruby :-)
http://code.google.com/p/cpsh/
So maybe this is a pretty basic question, then again I am new to Python so please bear with me... I have some code, and I have a tkInter checkbox. I figured out how to get the on/off value of the check box but I don't really understand why I need to do it the way I do.

def __init__(self, master):
    self.var = IntVar()
    c = Checkbutton(master, text="TEST_BUTTON", variable=self.var, command=self.testFunc)
    c.pack()

def testFunc(self, event):
    print self.var.get()

My question is: self.var.get() gets me the 0/1 value of var, which is what I want, so I can eventually use the value as a flag to perform some function. It took me a while to figure this out though. Why doesn't print self.var give me the 0/1 value of the check box?
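The short answer to the question above: self.var is not an integer, it is a Tkinter variable object that wraps one, and printing the object shows its Tcl variable name (something like PY_VAR0) rather than its value, which is why .get() is needed. A tiny stand-in class (so no Tk window is required here) makes the distinction visible; the real IntVar behaves analogously:

```python
class FakeIntVar:
    # Minimal stand-in for Tkinter's IntVar: the variable is an object
    # *wrapping* a value, not the value itself.
    _count = 0

    def __init__(self, value=0):
        FakeIntVar._count += 1
        self._name = "PY_VAR%d" % FakeIntVar._count
        self._value = value

    def get(self):
        return self._value

    def set(self, value):
        self._value = value

    def __str__(self):
        # Tkinter variables stringify to their Tcl variable name, which
        # is why "print self.var" does not show 0 or 1.
        return self._name

var = FakeIntVar()
var.set(1)
```

So print var shows the variable's name, while var.get() retrieves the wrapped 0/1 value.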
https://www.daniweb.com/programming/software-development/threads/283620/tkinter-checkbox-question
Increment and decrement operators are used to increment or decrement a value by 1. There are two variants of the increment/decrement operator:

- Prefix (pre-increment and pre-decrement)
- Postfix (post-increment and post-decrement)

Syntax of increment/decrement operator

Important note: ++ and -- operators are used with variables. Using ++ or -- with a constant will result in an error. Expressions like 10++, (a + b)++ etc. are invalid and cause a compilation error.

Let us consider an integer variable int a = 10;. To increment a by 1, you can use either:

a = a + 1 (Simple assignment)
a += 1 (Shorthand assignment)
a++ (Post increment)
++a (Pre increment)

The results of all the above are similar.

Prefix vs Postfix

Both prefix and postfix do the same task of incrementing/decrementing the value by 1. However, there is a slight difference in the order of evaluation. Prefix first increments/decrements the value, then returns the result. Postfix first returns the result, then increments/decrements the value. To understand this, let's consider an example program:

#include <stdio.h>

int main()
{
    int a, b, c;

    a = 10;     // a = 10
    b = ++a;    // a=11, b=11
    c = a++;    // a=12, c=11

    printf("a=%d, b=%d, c=%d", a, b, c);

    return 0;
}

The output of the above program is a=12, b=11, c=11. Let us understand the code.

a = 10 assigns 10 to variable a.

b = ++a uses prefix notation. Hence, it first increments the value of a to 11, then assigns the incremented value of a to b.

c = a++ uses postfix notation. Hence, it first assigns the current value of a, i.e. 11, to c, then increments the value of a to 12.

Important note: Never use the postfix and prefix operators at once, otherwise it will result in an error. Expressions such as ++a++ and ++a-- both result in a compilation error.
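The order-of-evaluation walkthrough above can be mimicked outside C as well. Python has no ++ operator, but a small (hypothetical) wrapper class makes the prefix/postfix difference explicit:

```python
class Counter:
    def __init__(self, value):
        self.value = value

    def pre_inc(self):
        # Like ++a: increment first, then return the new value.
        self.value += 1
        return self.value

    def post_inc(self):
        # Like a++: return the old value, then increment.
        old = self.value
        self.value += 1
        return old

a = Counter(10)
b = a.pre_inc()   # a.value becomes 11, b gets 11
c = a.post_inc()  # c gets 11, then a.value becomes 12
```

The end state matches the C program: a is 12, while b and c are both 11.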
http://codeforwin.org/2017/08/increment-decrement-operator-c.html
Code. Collaborate. Organize. No Limits. Try it Today. Some recent languages like C# and Java allow you to seal your classes easily using a keyword like sealed or final respectively. C++ doesn't have any such keyword for this purpose. However, it's possible to still do it using a trick. When using virtual inheritance, the initialization list of the most-derived-class's constructor directly invokes the virtual base class's constructor. This means that if we can hide access to the virtual base class's constructor, then we can prevent any class from deriving from it. This mimics the effect of being sealed. To provide an easy way to seal classes, we can write a header file Sealed.h like this: class SealedBase { protected: SealedBase() { } }; #define Sealed private virtual SealedBase Now to seal a class, say Penguin, we just need to derive it from Sealed, like this: Penguin Sealed #include "Sealed.h" class Penguin : Sealed { }; That's it. Penguin is now a sealed class. Let's try deriving another class, BigZ (Surf's Up (2007), anyone?) from Penguin: Penguin BigZ class BigZ : Penguin { }; BigZ bigZ; // error C2248 Instantiating an object of BigZ should yield a compiler error. The MSVC++ 2005 compiler gives me the following error message: error C2248: 'SealedBase::SealedBase' : cannot access inaccessible member declared in class 'SealedBase' All seems to be working well. However, one of my fellow programmers, Angelo Rohit, pointed out to me that this method has a serious flaw in it. Angelo says that if BigZ derives from Penguin and Sealed, then it will be possible to create objects of BigZ: BigZ class BigZ : Penguin, Sealed { }; BigZ bigZ; // OK; no compiler error Why does this happen? BigZ derives from Sealed just like Penguin does, which means that it now has access to Sealed's constructor. And since Sealed is inherited virtually by both Penguin and BigZ, there is only one copy of it - which is now also accessible to BigZ. Bummer. 
We need to have a mechanism by which BigZ is forced to call the constructor of a class which it doesn't have access to. Sealed After pondering over this for a while, I realized that if we can somehow generate different base classes every time Sealed is derived from, then it would work. Let's rewrite the Sealed.h header to look like this: template <int T> class SealedBase { protected: SealedBase() { } }; #define Sealed private virtual SealedBase<__COUNTER__> What does this do? SealedBase is now a templated class which takes an integer as an argument. __COUNTER__ is a predefined macro which expands to an integer starting with 0 and incrementing by 1 every time it is used in a compiland. So every time Sealed is derived from, it generates a new SealedBase class using the incremental number which __COUNTER__ expands to. SealedBase __COUNTER__ Now let's go back to our BigZ class which derives from both Penguin and Sealed: class BigZ : Penguin, Sealed { }; BigZ bigZ; // error C2248 This time around though, BigZ can't escape from the compiler. Penguin derives from SealedBase<number1> and BigZ derives from SealedBase<number2>, where number1 and number2 are two non-identical integers. So now BigZ has to invoke the constructor of SealedBase<number1>, which it doesn't have access to. SealedBase<number1> SealedBase<number2> number1 number2 The MSVC++ 2005 compiler gives me the following error message: error C2248: 'SealedBase<T>::SealedBase' : cannot access inaccessible member declared in class 'SealedBase<T>' 1> with 1> [ 1> T=0 1> ] However, you might be thinking that since we're using a special predefined macro __COUNTER__ in our implementation, this code is not portable. Well, it's supported by MSVC++ (which I used to test the above code) and also by GCC (). But what about compilers which don't? 
After a little thought, I came up with the following way. In Sealed.h:

template <class T>
class SealedBase
{
protected:
    SealedBase() { }
};

#define Sealed(_CLASS_NAME_) private virtual SealedBase<_CLASS_NAME_>

And to seal a class:

#include "Sealed.h"

class Penguin : Sealed(Penguin)
{
};

When sealing a class, we need to mention that class's name to the Sealed macro. This enables the Sealed macro to generate a new version of SealedBase. This is less elegant than simply having to derive from Sealed, but is more portable, making it a good alternative for compilers which don't support the __COUNTER__ predefined macro.

People who use MSVC++ or GCC can simply use Solution Attempt #2, as it is cleaner. People on other compilers can use the Portable Solution. If you have any questions, suggestions, improvements, or simply want to say hi, please email me. Thanks for reading!

Francis Xavier

This article, along with any associated source code and files, is licensed under The MIT License

emilio_grv wrote: there could be things that by design I don't want to do I can forget about later. By having a mechanism that forces me into a compile error makes me wonder about what the problem is.

emilio_grv wrote: may be because the internal functionality of the class may change in the future, and I don't want to risk creating supportability problems, or because the "class" is just a language way to expose a functionality that's not internally a class (so whatever derivation mechanism will simply not work as intended by a language's users)

emilio_grv wrote: But if you think of Java, C#, D and other languages where "sealed" is a keyword (so the compiler really knows about it),

emilio_grv wrote: Hope this may give you a wider perspective.

emilio_grv wrote: Prevent mistakes

class event
{
    event(const event&); //no impl.
    event& operator=(const event&); // no impl
    event() {} //private constructable
    ~event() { ::DestroyEvent((HANDLE)this); }
public:
    void signal() { ::SetEvent((HANDLE)this); }
    void reset() { ::ResetEvent((HANDLE)this); }
    static event* create() { (event*)::CreateEvent(....); } //no "new" here ...
    static void destroy(event* p) { p->~event(); } //...and no delete here
};

class BigZ : public Penguin, Sealed(Penguin)
{
};

#define SEALED( ClassName ) \
class ClassName; \
\
namespace Sealed_##ClassName##_ \
{ \
    class Sealed_##ClassName \
    { \
        friend class ClassName; \
    private: \
        Sealed_##ClassName() {} \
        Sealed_##ClassName(const Sealed_##ClassName&) {} \
    }; \
} \
\
class ClassName : public virtual Sealed_##ClassName##_::Sealed_##ClassName

SEALED( Usable )
{
    // ...
public:
    Usable();
    Usable(char*);
    // ...
};

class DD : public Usable
{
};

Usable a;
DD dd; // error: DD::DD() cannot access
http://www.codeproject.com/Articles/42021/Sealing-Classes-in-C?fid=1548729&df=10000&mpp=25&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed&fr=11
I'm trying to get started with plugin development as there's a need that doesn't yet exist. So I've been searching for everything I can find and am running into a number of unexplained things that maybe someone can help with.

1) I see references to examples in the Default package. This is great, however, when I open Default.sublime-package it's been encoded in some format. I've not seen any mention of this and instead all the documentation (unofficial or official I'm not sure) says you can add Default packages to your project and just browse the *.py files. Searching for converting sublime-package to something else doesn't seem to return anything else. How are these files packed or encoded and how can I browse the API and examples?

2) There seems to be some variation in arguments for the run method. The skeleton plugin shows:

def run(self, edit)

However I've seen other examples that show:

def run(self, view, args)

Is run overloaded? Is this a Sublime Text 2 versus 3 thing? Without seeing the examples (point #1) I'm not sure what the current preferred call is for this.

3) Another tutorial site speaks of creating menu bar additions for the commands rather than having to type into the console. I understand the JSON file format for these additions; it's pretty straightforward. However a vital component was left out -- where does this file go, how is it named? I assume it's something that's in the plugin project directory but how to structure this wasn't explained.

Hopefully that's enough to get started for now. Thankful for any comments on this and how to get rolling with this.

Welcome to the wonderful world of Sublime Plugin development!

sublime-package files are just zip files with their extensions changed. If you're using a command line tool you can just provide the sublime-package directly to e.g.
the unzip command to see the list of files and extract them; if you're using a GUI type tool you may need to rename the file to get it to be recognized:

tmartin:dart:~/local/sublime_text_3/Packages> unzip -l Default.sublime-package | head -10
Archive:  Default.sublime-package
  Length      Date    Time    Name
---------  ---------- -----   ----
      141  01-26-2016 11:19   Widget Context.sublime-menu
     1382  06-22-2016 14:24   mark.py
     2402  06-22-2016 14:24   delete_word.py
     4274  06-22-2016 14:24   sort.py
      450  06-22-2016 14:24   duplicate_line.py
      370  01-26-2016 11:19   Line Endings.sublime-menu
       47  01-26-2016 11:19   Preferences (Windows).sublime-settings

Looking at those files you can see some of the default functionality and how it's implemented, which is a great resource for a working example. Note however that not all of the default functionality is contained in the package; some things are directly in the Sublime Text core for performance reasons, so you won't find them in the Default package.

Can you show an example of something using run(self, view, args)?

Python doesn't support overloading in the way you may be familiar with it (i.e. you can't have several run methods with different argument lists). However you are allowed to define the method as taking extra arguments after the ones that Sublime expects, but you have to provide them at the time you call the method.

By way of example:

class ExampleOneCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.insert(edit, 0, "Hello, World!")

class ExampleTwoCommand(sublime_plugin.TextCommand):
    def run(self, edit, text):
        self.view.insert(edit, 0, text)

class ExampleThreeCommand(sublime_plugin.TextCommand):
    def run(self, edit, text="Hello, World"):
        self.view.insert(edit, 0, text)

The first of these makes a command that accepts no additional arguments and inserts hard coded text into the start of the current buffer. If you provide any extra arguments, you'll get an error.
The second makes a command that requires an additional argument named text to give it the text. In this case if you run the command with no arguments you get an error because it's expecting an argument you're not giving it. So you would need to use something like view.run_command("example_two", {"text": "Hello, World!"}) to run the command and tell Sublime what the text argument should be.

The third is a blend of the first two; it takes an argument, but it has a value that it will use by default if you don't provide one.

If you haven't read through it yet, I recommend a perusal of the Unofficial Documentation which covers a lot of this sort of thing.

Generally speaking, the name of the file is the most important. The file has to be named Main.sublime-menu in order to appear in the main menu, Context.sublime-menu to appear in the context menu, and so on. The unofficial documentation has a list of the various names.

As to the location, that depends on whether or not you're creating a package for others to use or just for yourself. If you're just modifying the menu for your own purposes, you would put it into your User package directory; otherwise it goes in the directory for the package that you're creating (e.g. MyPackage).
There are a couple of different Classes you can build Commands from based on the scope you need. You can see the definitions of the different types of Command class in the sublime_plugin.py file in Sublime's application directory.

Basically, you can provide any arguments you want to a command's run() method, and there's only one "required" argument: TextCommands expect an edit object as their first argument, which is used by Sublime to track changes for undo/redo/etc. Sublime handles this for you, but the object is provided to the run method in case you need to call "destructive" methods on a View, like insert or replace.

Looking at the Default package's menu files, the commands they call and the arguments passed through them are pretty instructive. Check out side_bar.py and Side Bar.sublime-menu to see how writing commands that work with project paths is handled, for example.

Many thanks! I have managed to decompose the Defaults package and am poking through looking for examples to cannibalize and learn from. I did get a command to print my very first version number to the console, so that's a baby step. Also diving into snippets as I think a large portion of what I'm looking to do might be encompassed in that feature, if I can sort out a few things first. I'm not entirely sure the snippet will be extensible enough but it might do the trick.
https://forum.sublimetext.com/t/getting-started-questions/27650/5
Socialwg/2017-04-11-minutes Contents Social Web Working Group Teleconference 11 Apr 2017 Attendees - Present - tantek, aaronpk, rhiaro, cwebber, ben_thatmustbeme, sandro, csarven, eprodrom - Regrets Chair - tantek - Scribe - eprodrom, cwebber Contents - Topics - Summary of Action Items - Summary of Resolutions <tantek> <tantek> <cwebber> I can scribe for the non-AP parts <cwebber> oh ok! <tantek> scribenick: eprodrom approval of minutes <tantek> PROPOSED: approve minutes of and <rhiaro> +1 4th, +0 28th (wasn't there) +1 <aaronpk> +1 <ben_thatmustbeme> +1 <sandro> +1 <cwebber> +0 RESOLUTION: approve minutes of and update on PRs tantek: let's start with AS2 <rhiaro> Yeah I did tantek: we have two new issues <cwebber> yes I can scribe <cwebber> scribenick: cwebber <tantek> scribenick: cwebber eprodrom: forgot I had actual speaking today. we have 2 new issues on AS2 ... one is geojson one, interesting but may be an extension / namespace thing rather than change to as2 ... also one on extending what the attribution mechanism is. I don't think either one is core <tantek> issue URLs? <eprodrom> <eprodrom> eprodrom: geospatial stuff, we've all kind of gone with covering the top level of an object and then expect extensions to ... ... first is around extensions around geojson one, second is around attribution/licensing ... for second, Creative Commons already has a vocab, maybe show an example of using those ... my feeling is these are outside of the scope of AS2... if not outside of scope, good fit for extension <eprodrom> cwebber: +1 eprodrom. GeoJSON is a great example for an extension, ccREL has already done a lot of work handling licensing. <eprodrom> cwebber: if we've gone to the trouble of including extensions, these are good examples of using extensions. <scribe> scribenick: cwebber <csarven> Yea, good candidates for extensions. Not core. tantek: ok, is there a place where we can have people look at a list of extensions we can encourage for reuse? 
eprodrom: I don't think we have a single place like that, could be a good wiki page. Now that I've said it, I wonder if that's a link we should include in the document rhiaro: is this the domain of the community group? we said extensions were part of CG's work ... as incubation eprodrom: I think so tantek: I think that's another good answer we can add to the issue ... encourage incubation of extensions to the CG ... sounds like we have some good responses, one of which is "great suggestion, would be great as an extension", second is to create a place perhaps on wiki where we can informally have a list of extensions or things underway ... and lastly to encourage list of folks to join the CG and incubate their extensions there ... outside of the GH issues for this spec rhiaro: I'd say that we don't need to maintain a list of ongoing extensions, because that would go stale, and just say "the CG will provide it" eprodrom: what I wanted to ask is should we include that in the text of the document? maybe in the vocab document, say we have a community group that maintains extensions? sandro: you're certainly not imagining tantek: I remember you filing an issue <eprodrom> "Some popular extensions are included in the Activity Streams 2.0 namespace document, and can be reviewed at." sandro: looking at spec, see geojson used as an example extension <sandro> Some popular extensions are included in the Activity Streams 2.0 namespace document, and can be reviewed at. eprodrom: if it's ok for our document process, I would love to add a sentence underneath there along the lines of "extensions for AS2 are done in CG" with a link <rhiaro> +1 what evan said eprodrom: could be a good way to provide that continuity <rhiaro> hopefully if the CG changes its name it leaves a forwarding address sandro: one problem is if we change name of CG maybe link becomes stale... <sandro> sandro: currently it doesn't link to the CG ... it says the same space may be used by extensions... 
[whole quote happens here] tantek: that seems like a reasonable small edit, to link to the specific CG since we have it ... seems reasonable based on previouss decisions sandro: in spec or in ? tantek: in the spec sandro: it's in the namespace doc right now, not the spec <sandro> Some popular extensions are included in the Activity Streams 2.0 namespace document, and can be reviewed at. sandro: but we can change the namespace doc whenever ... here's the line in the line: ^^^ tantek: ok, seems like a reasonable proposal, we can leave it to the CG ... and if the CG terminates, we can say here's where to go from here ... so you're proposing a small edit to the NS document and small edit to the spec itself in the same place that refers to extensions and the CG? sandro: yes eprodrom: does that set us back another 2 months sandro: nope tantek: is that something we can do and still hit our PR on thurs? sandro: not sure if it will go in there, maybe if rhiaro has staged a version, but probably <rhiaro> yeah I haven't staged anything yet sandro: another possible thing to do would be at the top of the document, with github link and etc, have a link to extensions that pointed to ns document on extensions tantek: I'm going to suggest not doing that, and here's why uhoh I disconnected someone scribe <eprodrom> scribenick: eprodrom tantek: sometimes people propose extensions without reading the spec <cwebber> scribenick: cwebber <tantek> PROPOSED: Resolve AS2 issues 413 414 with sounds like good extensions, but not core spec. Suggest joining SWICG. Edit ns document to link to SWICG. Similar small "Note:" in AS2 in the same place it refers to extensions and the ns doc. <rhiaro> +1 <aaronpk> +1 cwebber: +1 <eprodrom> +1 <sandro> +1 RESOLUTION: Resolve AS2 issues 413 414 with sounds like good extensions, but not core spec. Suggest joining SWICG. Edit ns document to link to SWICG. Similar small "Note:" in AS2 in the same place it refers to extensions and the ns doc. 
<ben_thatmustbeme> +1 tantek: ok so we have only a couple editorial changes to as2, not sure if it will block PR or not but ... what are we waiting for PR on? sandro: amy submitted transition request on friday so we're waiting for ralph, expecting them to make the decision tomorrow? so we may transition on thursday tantek: I think that's all for as2 ... let's move on to LDN <eprodrom> scribenick: eprodrom LDN rhiaro: we are getting reviews, no formal objections, some implementations Micropub tantek: Ralph filed a security issue <aaronpk> tantek: aaronpk provided text <tantek> PROPOSED Resolution of issue 89: <ben_thatmustbeme> +1 Addition of <tantek> 6.1 <rhiaro> +1 +1 <sandro> +1 <tantek> ack <aaronpk> +1 but i think that was implied because i wrote the text ;-) <cwebber> I'm still here eprodrom: is there a document that we could refer to for security issues in sharing URLs? tantek: none comes to mind <cwebber> give me one more minute <cwebber> sorry, brain kicking in <cwebber> it seems fine to me WebSub CR <rhiaro> <Loqi> [Julien Genestoux] WebSub sandro: it's out this morning <ben_thatmustbeme> yay! <aaronpk> 🎉 tantek: congrats to aaronpk and jullien <Loqi> 😄 sandro: can you link the tweet RESOLUTION: Resolution of issue 89: <sandro> <Loqi> [@sandhawke] WebSub, the http Pub/Sub protocol formerly known as PubSubHubbub (PuSH) finally makes it to @W3C Candidate Rec! tantek: aaronpk has to stage new pr draft, ralph has to review the edit for micropub <scribe> scribenick: eprodrom activitypub tantek: how are we doing? cwebber: changes to the spec are imminent <cwebber> cwebber: we had an extra # char at the end of the profile, inconsistent with AS2 ... OK since AS2 is authoritative ... issues filed since last night <cwebber> <cwebber> cwebber: did not document side effects for the ignore verb; would be normative ... document Ignore as a MAY eprodrom: Mute and Block are important functions cwebber: could we include it as a MAY? 
<rhiaro> It might be something that implementations decide to do anyway <rhiaro> hopefully in a consistent way tantek: that would be more conservative, include as an extension cwebber: the commenter understands they're minor and last-minute ... is more difficult <cwebber> cwebber: we have some leftovers from pump.io ... icon is either a link or an URL ... what to do? ... we have suggested change text ... we could drop properties that are in AS2 tantek: it looks like sandro is in favor of dropping dupes ... my understanding is that's not a functional change rhiaro: I think it's fine cwebber: this will shorten the doc anyway ... "name" is called out as nickname or full name <tantek> that seems fine to me cwebber: I don't see any properties that are restricted or more explicit ... should I keep the name one? <cwebber> <rhiaro> It would be kind of conspicuously absent if it wasn't there cwebber: should we keep the one "name" property definition? <rhiaro> +1 simply listing AS2 ones that are expected <rhiaro> expected/recommended cwebber: no other properties have been changed in AP ... with those edits done we should not have any normative changes <tantek> Keeping this text then? "Implementations SHOULD, in addition, provide the following properties:" ? tantek: any other comments <tantek> in here: cwebber: we reviewed with Ralph in email, he said it was OK <tantek> PROPOSED: Resolve AP issues 180 182 per consensus as discussed above. +1 <cwebber> +1 <aaronpk> +1 <ben_thatmustbeme> +1 <rhiaro> +1 RESOLUTION: Resolve AP issues 180 182 per consensus as discussed above. 
<tantek> PROPOSED: Publish updated AP CR draft with editorial fixes <rhiaro> +1 <cwebber> +1 <aaronpk> +1 <ben_thatmustbeme> +1 RESOLUTION: Publish updated AP CR draft with editorial fixes <rhiaro> yeah needs publishing by hand, but no director approval, will take care of it <sandro> +1 RESOLUTION: Publish updated AP CR draft with editorial fixes tantek: any other updates on AP cwebber: we have an implementation report template ... working on the test suite ... at worst 2 weeks ... may end up an interactive text adventure ... testing client-to-server (both sides), server-to-server ... might be really hard <rhiaro> Same for LDN.. we just did a web form that people fill in after running their client, cwebber cwebber: implementations have been moving along ... mastodon has a couple of commits that AP will be coming ... AP might be a good next step because it has a test suite aaronpk: i released the report before the test suite, so any reports had to be reverified ... which was a hassle. So finish the test suite before getting implementation reports. ... take a look at micropub.rocks cwebber: strugee said they are hoping to get AP implementation in pump.io this month tantek: before we close, apologies that we are over ... next telcon date 4/25 <cwebber> strugee, \o/ <tantek> PROPOSED: Next telcon 2017-04-25 tantek: any objections? <cwebber> +1 <ben_thatmustbeme> +1 <aaronpk> +1 <rhiaro> +1 +1 <sandro> +1 RESOLUTION: Next telcon 2017-04-25 tantek: hopefully we'll have more PRs then <cwebber> thanks! tantek++ <Loqi> tantek has 50 karma in this channel (329 overall) cwebber, so, what I was going to ask is, once we get test suite and implementation report template up scribe: we make a page on the wiki linking to issues for each fedsocweb app asking for implementation <cwebber> eprodrom: ah yeah Diaspora, Elgg, GNU Social, Mastodon, pump.io, rstatus, GNU MediaGoblin, Owncloud maybe? <ben_thatmustbeme> sandro, go for it! 
<cwebber> eprodrom: yes sounds good <tantek> trackbot, end meeting Friendica or whatever it's called now Summary of Action Items Summary of Resolutions - approve minutes of and - Resolve AS2 issues 413 414 with sounds like good extensions, but not core spec. Suggest joining SWICG. Edit ns document to link to SWICG. Similar small "Note:" in AS2 in the same place it refers to extensions and the ns doc. - Resolution of issue 89: - Resolve AP issues 180 182 per consensus as discussed above. - Publish updated AP CR draft with editorial fixes - Publish updated AP CR draft with editorial fixes - Next telcon 2017-04-25
https://www.w3.org/wiki/Socialwg/2017-04-11-minutes
Best local taiwan import and export freight forwarder agent - US $15-135 / Cubic Meter - 1 Cubic Meter (Min. Order)
biggest exporter of callus blades, pedicure blades in China - US $100-101.5 / Parcel - 10000 Parcels (Min. Order)
Fuji apple's biggest export businesses, direct export - US $0.4-0.7 / Kilogram - 10000 Kilograms (Min. Order)
Biggest Manufacturer and Exporter of Formic acid At Cheap Price - US $400-600 / Ton - 5 Tons (Min. Order)
North China Biggest Supplier and Exporter of Iron Pyrite - US $300-400 / Ton - 1 Ton (Min. Order)
The biggest sweet/bitter aprcicot kernels for import and export company in china,Asina. - US $1-3500 / Metric Ton - 1 Metric Ton (Min. Order)
2015 Best tooth brush for Export with gum massage - US $0.08-0.15 / Piece - 30000 Pieces (Min. Order)
Pe tarpaulin with Biggest Factory export to Pakistan - US $1600-2200 / Ton - 5 Tons (Min. Order)
Exported to Iran for mechanical guillotine shearing machine with biggest discount - US $4200-150000 / Set - 1 Set (Min. Order)
import from turkey marmald white marble,laizhou biggest nature stone supplier,quality life,quality marble - US $20-40 / Square Meter - 500 Square Meters (Min. Order)
180kg shandong china biggest sewing machine oil manufacturer tianhui machine lubricant oil - US $0.9-1.9 / Liter - 100 Liters (Min. Order)
Unique Promotional Wholesale Gold Metal Cups Sports Trophy - US $4.99-13.99 / Piece - 50 Pieces (Min. Order)
Biggest promotion internal ssd 2.5'' SATA3.0 ssd 500gb computer parts hard drive - US $5-200 / Piece - 10 Pieces (Min. Order)
Alibaba Golden logistics company - US $1-10 / Cubic Meter - 1 Cubic Meter (Min. Order)
OEM hand painted wine glass designs - US $1-3 / Piece - 2400 Pieces (Min. Order)
2016 Biggest China Trade Assurance Manufacturer pre galvanized steel pipe supplier - US $350-700 / Ton - 10 Tons (Min. Order)
Biggest Agent company,Hardware import export agent companies looking for agent in India - US $0.02-0.02 / Acres - 100 Acres (Min. Order)
Raw Copper From China(Biggest Mill) - US $4000.0-4000.0 / Tons - 25 Tons (Min. Order)
20*30cm Biggest non woven tea bag and Tea Pouch Design - US $9.7-9.7 / Bag - 10 Bags (Min. Order)
According to Your Play Area Design Adult Indoor Trampoline Floor,Biggest Trampoline - US $10000-100000 / Set - 1 Set (Min. Order)
biggest wholesaler titanium 540 derma roller for stretch marks/face roller with needles - US $0.1-1 / Piece - 20 Pieces (Min. Order)
2016 new anti-Slip new arrivals yoga ball biggest manufacturer - US $1-5 / Piece - 10 Pieces (Min. Order)
white blanched peanut/2013 new crop shandong blanched peanut/biggest shandong blanched peanut for sale - US $1000-1200 / Metric Ton - 5 Metric Tons (Min. Order)
Pipette Shot Horderves Tutorial (Biggest hit at parties) - US $0.04-0.04 / Pieces - 10000 Pieces (Min. Order)
Chinese Biggest Supplier Octyl Methoxycinnamate - US $1-10 / Kilogram - 20 Kilograms (Min. Order)
wall cube decoration home furniture wood crafts - US $6.42-7.98 / Set - 1 Set (Min. Order)
Biggest pipe mill in tianjin!!!galvanized steel pipe - US $458.0-458.0 / Metric Ton - 1 Metric Ton (Min. Order)
China Manufacturing Biggest Sale Stone Garden Water Features - US $500-1000 / Unit - 1 Unit (Min. Order)
We are the one of biggest non woven bag manufacturer in China.We have Shopping Bag/ PP Non Woven bag - US $0.35-0.35 / Pieces - 2000 Pieces (Min. Order)
germany's biggest-selling conversion plug - US $0.15-1.6 / Piece - 120 Pieces (Min. Order)
the biggest granite producers - US $5-8 / Square Meter - 50 Square Meters (Min. Order)
Best local ningbo import and export freight forwarder agent - US $15-135 / Cubic Meter - 1 Cubic Meter (Min. Order)
Best local import and export shenzhen logistics agent - US $15-135 / Cubic Meter - 1 Cubic Meter (Min. Order)
Best local import and export taiwan logistics agent - US $15-135 / Cubic Meter - 1 Cubic Meter (Min. Order)
http://www.alibaba.com/countrysearch/CN/biggest-export-and-imports.html
Problem analysis of Codeforces Round #401 (Div. 2)

Auto comment: topic has been updated by GlebsHP (previous revision, new revision, compare).

In problem C I used a map to store previous queries to pass the 100th test case. It worked, lol.

And the main program? Are you using the is_sorted() function from the STL? upd: sorry, I'm a bit naive.

No, actually I precomputed the answer as said in the problem explanation. But my algorithm runs in O(k*m), because it checks every column, so it's pretty easy to repeat queries just to slow it down. So I used a map to store previous queries. Here is a link: 24986527.

You are so clever. I saw your solution; the key here is using the "pref" dynamic programming, not the map. No need to store the previous queries: I can pass all tests without the map, twice as fast as yours. 25166547

What I meant was that I didn't need to do quite what the editorial said and it worked. Later I did what the editorial explained. Cheers :) But they are similar! nice....haha... Seems distinct queries are very much needed.

I saw your comment and did the same thing :)

There may be a mistake in prob E's solution. ans[i] += opt[i].height; should be ans[i] += r[i].height; and opt.back() should be opt.top(). Is it right?

Yes, fixed now. Thank you!

For problem E, can't we use a modified longest increasing subsequence in O(n log n)?
I did something similar to LIS, but in the end it's not so far from the solution proposed above:

#include <cstdio>
#include <algorithm>
#include <set>
using namespace std;
int II(){int n;scanf("%d",&n);return n;}
void II(int n){printf("%d\n",n);}
typedef long long ll;
void LL(ll n){printf("%lld\n",n);}
typedef pair<int,ll> il;
struct ring_t
{
    int inner,outer,height;
    bool operator<(ring_t b)
    {
        if(b.outer==outer)return inner>b.inner;
        return outer>b.outer;
    }
};
#define MAXN 100000
ring_t V[MAXN];
ring_t *vend=V;
int main()
{
    int N=II();
    set<il> dp;
    while(N--)*vend++={II(),II(),II()};
    ll H=0;
    sort(V,vend);
    for(ring_t *x=V;x<vend;x++)
    {
        //printf("adding %d,%d,%d",x->inner,x->outer,x->height);
        ll myH=0;
        {
            auto it=dp.lower_bound({x->outer,-1});
            if(it!=dp.begin())
            {
                it--;
                myH=it->second;
            }
        }
        myH+=x->height;
        H=max(H,myH);
        {
            auto r=dp.insert({x->inner,myH});
            if(r.second)
            {
                auto it=r.first;
                auto prv=it;prv++;
                while(prv!=dp.end()&&it->second>=prv->second)
                {
                    dp.erase(prv);
                    prv=it;prv++;
                }
            }
        }
        //for(auto y:dp)printf("[%d|%d] ",y.first,y.second);putchar(10);
    }
    LL(H);
}

Should the dynamic programming recursion be ans(i) = max_{j < i, b_j > a_i} ans(j) + h_i instead of what you wrote?

Sorry, I misunderstood your notation.

I think there is a mistake in 777C. It's supposed to be "such that the table is not decreasing in COLUMN j", right? Thanks for the editorial!

In problem C, it should be a[i, j] <= a[i + 1, j] instead of a[i, j] < a[i + 1, j].

When does CF round #403 begin?

I see that there is a binary search tag for problem C. Did anyone solve this problem using binary search? Please enlighten me on how to use binary search for this problem.

If you just store the indexes which are greater than the next number in an adjacency list for each column, then for each query you could apply binary search within that range; if no such element exists for even one column, the answer is Yes. But there seems to be some kind of optimization that I have missed.
Plus, the approach is way slower than that in the editorial. This might help. Please let me know if I'm wrong.

I am unable to understand the tutorial for problem C. Can anybody please explain the tutorial in an easier way? I am new to DP. Thank you.

Here are two submissions for problem D: 26348627, 26348633. These two submissions contain exactly the same code, yet I didn't use the same compiler. The first one got TLE, and the second one got Accepted. I would be thankful if anybody could explain to me why.

Problem E (777E - Hanoi Factory): Could someone please explain the second approach described in the tutorial? I'm failing to understand how/why it should work with a stack. If someone could provide a link to a complete solution which uses the approach, it could possibly help as well. I was able to understand the approach behind this submission: 33027274. It is also stack-based, although I'm not sure it uses the same idea (because I'm still not sure I understand the tutorial's idea).

Can someone provide me a dp solution for B? I have used hashing instead of dp and my solution works in O(n). 67647151
http://codeforces.com/blog/entry/50670
So far so good, but there is one more thing I want the MutexLock to do. The Mutex object may throw an exception (AbandonedMutexException) when being waited for if the current owning thread terminates without releasing the mutex. I want to hide that fact in my MutexLock so I don't need to handle that exception everywhere in my code:

using System;
using System.Threading;
using Xunit;

public class Given_an_abandoned_lock : IDisposable
{
    private MutexLock _lock = new MutexLock();
    private EventWaitHandle _threadStarted = new EventWaitHandle(false, EventResetMode.ManualReset);
    private EventWaitHandle _threadStop = new EventWaitHandle(false, EventResetMode.ManualReset);
    private Thread _thread;

    public Given_an_abandoned_lock()
    {
        _thread = new Thread(() =>
        {
            _lock.Lock();
            _threadStarted.Set();
            _threadStop.WaitOne();
        });
        _thread.Start();
    }

    public void Dispose()
    {
        if (_thread != null)
        {
            _thread.Abort();
        }
    }

    [Fact(Timeout=1000)]
    void It_should_be_possible_to_take_lock_when_thread_dies()
    {
        _threadStarted.WaitOne();
        _threadStop.Set();
        Assert.DoesNotThrow(() => { _lock.Lock(); });
    }
}

And that test leads to the following implementation:

public class MutexLock : Lock
{
    private readonly Mutex _lock = new Mutex();

    public void Lock()
    {
        try
        {
            _lock.WaitOne();
        }
        catch (AbandonedMutexException) {}
    }

    public void Unlock()
    {
        _lock.ReleaseMutex();
    }
}

Why would you ever want to silently allow another thread to terminate when a lock is held? The underlying data is now in an unknown state; isn't it appropriate to die?

@Matthew: I think you're right that a thread terminating without releasing the lock is one of two bad things. Either the thread terminated early because of some other problem, or the programmer did something wrong.
Programmer errors can be handled by an assert in the code but are not really a big problem for other threads from the Mutex's perspective, since you just forgot to release it. So I think it is OK to "ignore" it in this case. In the case of a crashing thread, things may definitely be in a weird state - not only from a mutex point of view. So, do I think it is better to terminate the application or try to continue? I think the answer is "it depends". In some cases, for example a high-availability service, I would try to survive as long as possible and always be defensive in the code, never assuming anything. For a simple user application - yeah, I would consider terminating the program - if it is an acceptable user experience. Either way, the important thing is to add one (or more) tests to document the desired behavior.
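For what it's worth, the same swallow-on-acquire shape can be sketched in Python. Python's threading.Lock never raises anything like AbandonedMutexException, so the exception type and the injectable inner lock below are made up purely to illustrate the wrapper's contract:

```python
import threading

class AbandonedLockError(Exception):
    """Stand-in for .NET's AbandonedMutexException (made up for this sketch)."""

class MutexLock:
    """Same contract as the C# MutexLock above: acquiring swallows the
    'previous owner died while holding the lock' signal."""
    def __init__(self, inner=None):
        # `inner` is injectable only so the abandoned case can be simulated.
        self._inner = inner if inner is not None else threading.Lock()

    def lock(self):
        try:
            self._inner.acquire()
        except AbandonedLockError:
            pass  # we own the lock anyway; the previous owner just died

    def unlock(self):
        self._inner.release()

class _FakeInner:
    """Fake lock whose acquire succeeds but signals abandonment."""
    def __init__(self):
        self.acquired = False
    def acquire(self):
        self.acquired = True
        raise AbandonedLockError()
    def release(self):
        self.acquired = False

abandoned = MutexLock(inner=_FakeInner())
abandoned.lock()   # swallows the abandonment signal instead of raising
```

As in the C# version, the decision to swallow rather than re-raise is the design choice under discussion in the comments above; the wrapper just makes that choice in exactly one place.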
http://blogs.msdn.com/b/cellfish/archive/2009/12/19/2009-advent-calendar-december-19th.aspx
Bug 332579 comment 22:

> Using build from comment #16, and I saw an issue that might be related to
> this patch. In Gmail, I clicked on the "Download" link in order to download
> the attachment. This does two things: (1) sets the focus on the "Download"
> link, and (2) brings up (at least in my case) a dialog asking whether to
> save the file or open with an app. 'Save' was already selected, so I just
> hit Enter, but then I saw the dialog immediately reappear.
>
> It looks like the Enter key was being counted twice, first to select the
> default button in the dialog, and secondly to reselect the "Download" link.

I'm seeing keypress events for the RETURN key coming back from dispatch with a status of nsEventStatus_eIgnore, even though the events are being handled and the proper status seems like it should be nsEventStatus_eConsumeNoDefault. This can cause a double-processing problem for us on the Mac because we accept key events through two channels: a raw key event handler and a TSM event handler. If, for a single keypress, one handler indicates that the event was not handled, the system will give the event to the other. In the case above, I'm seeing the raw key event handler being called. An NS_KEY_PRESS event is dispatched, but the status is nsEventStatus_eIgnore, so the system is told that the event was not handled. The system pushes the event to the TSM handler, which fires off another NS_KEY_PRESS event. This behavior seems to affect the RETURN key, but not other keys. What's probably happening is that VK_RETURN is special-cased to call some "do default action" function like a mouse click would do. The original event doesn't get preventDefault called on it. This might be the best way to attack the problem, but it might not be possible to fix it everywhere. One idea I've played with is to use the raw key-down event to always generate NS_KEY_DOWN and the TSM event to always generate NS_KEY_PRESS.
To do this, we'd need to always tell the system that the raw key-down event was not handled, otherwise we wouldn't get the TSM event. That's a little suspicious, but it's not the worst part: the worst part is that there's no good way for the key-down handler to tell the TSM handler that the NS_KEY_PRESS event should have preventDefault set if the NS_KEY_DOWN event was cancelled. If it's of any help, here's the bit of code that hard-wires VK_ENTER to the default button on mac os x: 104 #ifndef XP_MACOSX Actually, this line would fix the problem, but only where the problem return keys are directed at XUL nsButtonBoxFrames: 112 *aEventStatus = nsEventStatus_eConsumeNoDefault; Similar problem at bug 337277. Created attachment 221541 [details] [diff] [review] Checkpoint v1 I spent a lot of time overhauling how native key events are turned into Gecko events today, and the results are very clean. I overhauled nsMacTSMMessagePump to be Carbon Event-based instead of Apple Event-based. Then, I moved all key event processing out of the distinct application-scope raw event and IME handlers, both of which are now disabled, and into a new raw event handler that's attached to each window as a target. The raw event handler accepts Unicode data directly. One problem is that leaving composition mode isn't handled as cleanly as it could be. If you type (with a US keyboard) option-U,B, you would expect to see umlaut,B (try it in a current release version), but instead, you just get the umlaut. I'm also interested in testing from 10.3 users. This patch fixes this bug, bug 337277, bug 337338, and probably a host of others that haven't been discovered or reported yet. It renders a bunch of code dead but doesn't remove it - the final patch here (or a followup) will clean up unreachable codepaths. I'm putting a new test build up at , equivalent to Firefox 1.5.0.3 + bug 332579v8 + this patch. 
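(For anyone trying to follow along, the double-dispatch failure mode described in the comments above can be reduced to a toy model. The handler names and dispatch logic below are invented for illustration and are not the actual Carbon/Gecko code:)

```python
# The OS offers a keypress to the raw handler first; if that handler
# reports "not handled" (think nsEventStatus_eIgnore), the same keypress
# is re-offered to the TSM handler, and the page sees the event twice.
fired = []

def raw_handler(key, reports_handled):
    fired.append(("raw", key))    # the event *is* processed here...
    return reports_handled        # ...but may still report "ignore"

def tsm_handler(key):
    fired.append(("tsm", key))

def os_dispatch(key, raw_reports_handled):
    if not raw_handler(key, raw_reports_handled):
        tsm_handler(key)          # fallback: a second delivery

os_dispatch("RETURN", raw_reports_handled=False)  # buggy path: fired twice
os_dispatch("a", raw_reports_handled=True)        # healthy path: fired once
```

In the model, the fix corresponds to making the raw handler report "handled" (or suppressing the fallback) whenever it actually consumed the key.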
I'm especially interested in getting feedback from CJK users and users of other esoteric input methods. (In reply to comment #5) > Created an attachment (id=221541) [edit] > I'm especially interested in > getting feedback from CJK users and users of other esoteric input methods. > Japanese IME a problem is reported with bug337370. This patch might be able to be tested tonight. (In reply to comment #5) > , equivalent to > I tried Japanese IME with this build. But, Japanese was not able to be input. The input switch of English(Ei-suu) and the hiragana(Kana) cannot be done with the keyboard. (When the English input switch key is pushed, space is input. ) Thanks, Hiro. Based on that, it looks like I really need to get the keypresses out of the way of TSM, and make sure the kEventTextInputUpdateActiveInputArea handler gets first crack. Comment on attachment 221541 [details] [diff] [review] Checkpoint v1 Command-shift-G is doing find next instead of find previous. I did some testing to help clarify IME behavior on the most recent patched test build you posted in comment #5. First, the part you already know about: 1. Put cursor in search bar. 2. Switch your input mode to Japanese Hiragana. 3. Type "hashiru." Result: Latin characters "hashiru" Expected: Japanese characters 「はしる」; composition mode should still be active (i.e. characters are underlined, waiting for you to either activate kanji/alternate results matching [spacebar] or confirm displayed characters [return]). Here's the new and intriguing bit: 1. Put cursor in search bar. 2. Switch your input mode to Japanese Hiragana. 3. Type "[option-h]ashiru." Result: Japanese characters 「はしる」; composition mode is still active. Expected: Same, because Japanese IME doesn't use opt-h for any special characters (e.g. opt-y) or input-modification commands (e.g. opt-i, opt-k, opt-j, opt-l, opt-z, opt-x, opt-c, opt-a, and opt-s), which means it is treated the same as an unmodified "h." 
Since unmodified "h" as first character should enter composition mode, so should opt-h. 3a. Test input-modification commands: spacebar, arrow keys, opt-i/k, opt-z/x/c, etc. Result: as expected. 4. Hit return to confirm the current modified input. Result: TWO copies of the input are inserted into the search bar AND a search is immediately triggered with the currently selected search engine. Expected: ONE copy of the input is inserted into the search bar and no search is triggered. Clicking in the blank space to the right of the selected input DOES yield expected behavior, so this doubling behavior is being caused by the return key rather than whatever the generic exit-the-composition-mode signal is. Also, going in a slightly different direction at step 3: 1. Put cursor in search bar. 2. Switch your input mode to Japanese Hiragana. 3a. Type "[option-i]" Result: Nothing. Expected: Same, because opt-i is claimed by the IME as a special command. 3b. Type "[option-y]" Result: ¥ (yen symbol); composition mode is active. Expected: Same, because opt-y is claimed by the IME as a special character. CONCLUSIONS: 1. This patch is allowing the Japanese IME to treat all option-modified letter keys properly. Although using unmodified letters fails to initiate composition mode, the use of unclaimed opt-letters acts as a backdoor into composition mode, since opt-letters not claimed for special commands or characters are treated by the IME as unmodified letters. (So: Unmodified keys are not being allowed to reach the IME, but option-modified keys are, and are being converted to unmodified keys [as necessary] BY the IME.) 2. Once we enter composition mode using the aforementioned backdoor, we can observe the current bad behavior of the return key (doubled input + activating default action for current form field). Er, I hope that at least reveals something interesting. Cheers. 
And my apologies about the mess those Unicode chars made in the comment above; I've never tried typing Japanese into Bugzilla before. >_< (Following up comment #10) I didn't work through all of Nicholas's report, but I've confirmed the first part -- IME doesn't work with Bon Echo Alpha 2 on any of OS X 10.4.6, 10.3.9 or 10.2.8. Even when your input mode is Japanese Hiragana (or some other input mode that uses IME, chosen from the "flags" menu), typing characters results in ordinary input. This happens even when a Java applet hasn't yet been loaded -- i.e. the Java Embedding Plugin isn't involved. I've also (I think) figured out how to fix this problem. Mozilla.org browsers (the "official" versions) use Apple Event handlers to do IME. But getting rid of WaitNextEvent() seems to have stopped those handlers from receiving any Apple events. Mark, you're trying to do the right thing in widget/src/mac/nsMacMessagePump.cpp's DispatchEvent() (the call to AEProcessAppleEvent()). But the handling required for Apple events in a Carbon handler for kEventClassAppleEvent / kEventAppleEvent is very peculiar -- you need to explicitly remove the Carbon event from the queue before calling AEProcessAppleEvent()! For sample code see the following (it's in a section titled "Processing Apple Events With the Carbon Event Model"): I've noticed that, in your patch to this bug (337199), you've switched to using Carbon event handlers to do IME. But if that code is in, it's failing in exactly the same way -- IME is simply not happening. I suggest that you go back to using Apple Event handlers for IME, and try augmenting your code in nsMacMessagePump::DispatchEvent() from the sample code that I've referred to. I've actually got it working with CE handlers throughout in my own tree, but am having problems with ending an inline input session. Do whatever works :-) But getting rid of WaitNextEvent() was already a pretty radical (and dangerous) thing to do on a "production" branch. 
And switching IME from Apple event handlers to Carbon event handlers would be even more radical. At the very least, it may require me to change the Java Embedding Plugin. It contains code to deal with browsers that use Carbon event handlers to do IME ... but that code's never been tested :-( Created attachment 222508 [details] [diff] [review] Patch to change how the JEP sends keystrokes to the browser Much to my surprise, I've found a way to change the Java Embedding Plugin to get rid of the character-doubling problem in Bon Echo Alpha 2. Mark, I'm sure you're aware of this, but here's a bit of background for those who aren't aware: The Java Embedding Plugin uses Apple's Cocoa-Carbon interface to integrate the Cocoa-based JVM with Carbon-based Firefox (and Seamonkey and Mozilla). But there's a design flaw in the Carbon event handlers that implement Apple's Cocoa-Carbon interface -- they swallow all keyboard and mouse events! So if the Java Embedding Plugin didn't work around this design flaw, after loading a Java applet you would no longer be able to use the keyboard or mouse outside of that applet. The JEP's workaround is to install another Carbon handler "after" (or "on top of") Apple's handlers, and "redispatch" to the browser those mouse and keyboard events that the browser needs to see. All recent versions of the JEP do this by calling SendEventToEventTargetWithOptions() with "OptionBits" set to "kEventTargetSendToAllHandlers". This sends the event to _all_ handlers installed (for that event) on a given target, including those that would normally be pre-empted by a "later" handler returning "noErr". The key-doubling that people have reported is probably due to key events being sent both to WNETransitionEventHandler() (in widget/src/mac/nsMacMessagePump.cpp) and to some kind of default keystroke handler (for objects that support text input) installed by the OS. For a long time I thought this was the only possible workaround. 
But now I've found that, if I send key events to the "application" target (instead of to the "user focus" event target as I have been doing), they no longer get swallowed by Apple's Cocoa-Carbon interface handlers! I'm now thinking of including this change in my next JEP release (0.9.5+e, which I _hope_ will come out next week). My "patch" is for those (Mark?) who are willing to recompile the JEP to test my new way to get keyboard events to a Carbon-based browser. It's against JEP 0.9.5+d (the most recent release). Side note: My patch doesn't seem to get rid of the character doubling problem completely. I still see JavaScript alert messages re-appear after having dismissed them by pressing "Return" or "Enter". Something I forgot to mention: My patch makes makes keyboard input stop working in the browser (after a Java applet has been loaded) when used with -- presumably because that revision stops WNETransitionEventHandler() from handling key events. Steven, the checkpoint patch here and the v9 test build should already be immune to double-keys with a current JEP. Getting rid of WNE was radical, but the only way we'll know for sure if it's feasible for fox2 is to try to shake the regressions out now. > the checkpoint patch here and the v9 test build should already be > immune to double-keys with a current JEP They are ... but (of course) they're broken in other ways. How soon might I/we be able to test one of your builds that has IME working (at least partially) via Carbon handlers? (I'm trying to decide when to release JEP 0.9.5+e, and whether or not to include my keystroke-handling patch in it.) I'm putting together a new patch now and am comparing it to the 1.8.0 behavior. I'll make a universal test build available later today. Thanks! Re comment 15, the Carbon event is never on the queue when AEProcessAppleEvent is called. 
In fact, on the 1.8 branch now, that should be dead code, because the loop is always run by RunApplicationEventLoop, which has its own handler for Apple events. I've left it in the nsMacMessagePump::DispatchEvent because it's not dead on the trunk, where the loop is sometimes run by RunApplicationEventLoop and sometimes by manual cranking of ReceiveNextEvent. (When I use RNE, I have it pull the events from the queue.) So at least all of those bases are covered. Anyway, this afternoon, I managed to fix the remaining (known) bugs here. Most of the key badness was fixed by ensuring proper routing of the events. The trouble I was having with leaving IME sessions properly was fixed by going back to the AE model for unicode input, as Steven suggested. One difference is that any keypress, even those in IME sessions, now produces an NS_KEY_DOWN event. If in IME, there's no NS_KEY_PRESS, of course. In the past, there wouldn't have been any NS_KEY_DOWN events when in IME either, but there would have been NS_KEY_UP events. I don't think that the new behavior is wrong. (But I might be wrong.) Created attachment 222537 [details] [diff] [review] v2 for trunk, AE TSM handlers restored Will do a test build before requesting review. There will be a followup cleanup patch to get rid of dead code like HandleKeyEvent and HandleUpdateInputArea. Test build will be 2.0a2 (1.8.1a2) plus patch v2, above. Steven, if you want to work on JEP's handling of IME with Carbon events, we can do that separately once this is substantially wrapped up. A test build fixing this bug and the three dependencies (337277, 337338, and 337370) is available at: This is equivalent to Firefox/BonEcho 2.0a2 (1.8.1a2) plus the v2 patch from above. Unlike previous test builds, it is not based on the 1.5.0.x (1.8.0.x) series. Bugs or behavioral differences should be compared to BonEcho 2.0a2: Note that the last official 2.0 (1.8.1) build without the updates from bug 332579 was from 0508:. 
- variation on the above: sometimes, the RETURN key has no effect. (may be dependent on the timing between the last letter typed and the RETURN key.) This variant occurs infrequently. I've tested these by backing out this patch and the patch from bug 332579, and the behaviors still occur, so they seem to be regressions caused by some other change. I've even experienced the former variant on win32. (Anyone want to QA it or let me know if there are already open bugs?) (In reply to comment #24) > Steven, if you want to work on JEP's handling of IME with Carbon events, we can > do that separately once this is substantially wrapped up. > There is a bug in JEP and IME (bug 315972). Does this problem improve with this change? Supposing that is not right, the test of a Java applet and IME is impossible. (Because, if bug 315972 occurs, all character inputs will become impossible.) Test build of comment#25 is tried excluding the above issue. (In reply to comment #27) I've never been able to reproduce any of the problems reported at bug 315972, possibly because I don't have an Apple Japanese keyboard (aka the Shift-JIS keyboard). (I understand that this keyboard has a couple of extra keys, used for changing input methods.) And it's possible that these problems can be avoided even with the Shift-JIS keyboard if you only use the "flags" menu to change input methods. (I probably won't be able to fix bug 315972 until someone tells me how/where to get an Apple Shift-JIS keyboard.) (Following up comment #25) In brief tests that I've just done (on OS X 10.3.9, using Apple's English keyboard), I had no problem doing Japanese (Hiragana) and Chinese (Pinyin) IME, either in an applet or in the browser. I even found that my send-keystroke patch (attachment 222508 [details] [diff] [review]) no longer stops keyboard input in the browser after an applet has been loaded. (I still haven't decided whether to include this patch in JEP 0.9.5+e, though.) (In reply to comment #26) >.
I can't seem to reproduce that; hitting RETURN opens the search results in the same tab for me. Possibly you have this pref set? user_pref("browser.search.openintab", true); Not sure about the second problem you mention, maybe my typing is too slow. The problems from bug 315972 (JEP and IME) are still there. BTW Steven (comment #28), whatever way I choose to switch between input methods, keyboard or 'flags', triggers bug 315972. The problem noted in bug 337370 is fixed by this patch, so far. I couldn't reproduce the character duplication either (bug 336357). Typing special characters (bug 337338) works on my side. After an hour surfing around, no problems to report. (In reply to comment #14) > Do whatever works :-) > > But getting rid of WaitNextEvent() was already a pretty radical (and > dangerous) thing to do on a "production" branch. And switching IME > from Apple event handlers to Carbon event handlers would be even more > radical. I (Katsuhiro MIHARA) wanted to be more radical at Bug 158859. I wrote Carbon Event handlers (like Apple Event handlers) for TSM, but some problems existed and they were not accepted by reviewers. I think that the reason for this bug in checkpoint v1 is removing the Apple Event handlers for TSM and not writing Carbon Event handlers for TSM. This bug may be fixed by writing Carbon Event handlers. If you want me to merge patches, please wait. Feel free to merge patches. (In reply to comment #30) > I think that the reason for this bug in checkpoint v1 is removing the Apple Event > handlers for TSM and not writing Carbon Event handlers for TSM. > This bug may be fixed by writing Carbon Event handlers. I misunderstood checkpoint v1. I read that Carbon Event handlers exist. Wmmm.... At Bug 158859, I retained the Apple Event handler for kUnicodeNotFromInputMethod (because I don't understand the key event handlers in Gecko).
> In order to avoid interference with input methods, providing a handler for the kEventTextInputUnicodeForKeyEvent event and examining its parameters is the preferred means on Carbon for directly examining keyboard input in general.

I think we need to implement either a Carbon Event handler for kUnicodeNotFromInputMethod or an Apple Event handler for kUnicodeNotFromInputMethod. The patch checkpoint v1 removes the Apple Event handler for kUnicodeNotFromInputMethod, but doesn't install a Carbon Event handler for kEventTextInputUnicodeForKeyEvent.

> - err = AEInstallEventHandler(kTextServiceClass, kUnicodeNotFromInputMethod, mKeyboardUPP, (long)this, false);
> - NS_ASSERTION(err==noErr, "nsMacTSMMessagePump::InstallTSMAEHandlers: AEInstallEventHandler[FromInputMethod] failed");
> +#if 0
> + // All key events are handled by raw key event handlers
> + err =
> + ::InstallApplicationEventHandler(sUnicodeForKeyEventHandlerUPP,
> + GetEventTypeCount(kUnicodeForKeyEventList),
> + kUnicodeForKeyEventList,
> + NS_STATIC_CAST(void*, this),
> + &mUnicodeForKeyEventHandler);
> + NS_ASSERTION(err == noErr, "Could not install UnicodeForKeyEventHandler");
> +#endif

(In reply to comment #32) > Katsuhiro, that's right, the v1 checkpoint used an experimental and poorly documented way of getting unicode key data directly from the raw key event. It didn't work. Even when I gave up that battle and turned on the kEventTextInputUnicodeForKeyEvent CE handler, though, there were problems ending IME sessions. (In reply to comment #27) > Test build of comment#25 is tried excluding the above issue. > IME works. There is no problem. (But JEP is not tried. ...) Comment on attachment 222537 [details] [diff] [review] v2 for trunk, AE TSM handlers restored All of the feedback so far has been positive, so I'm going for review. Thanks to the testers and other developers who took the time to help out. Reviewer's guide: the main functional part of this patch is in nsMacEventHandler.
Comment on attachment 222537 [details] [diff] [review] v2 for trunk, AE TSM handlers restored >Index: mozilla/widget/src/mac/nsMacEventHandler.cpp >=================================================================== ... > #ifndef XP_MACOSX > #include <locale> > #endif Whack that please. > // > // create a TSMDocument for this window. We are allocating a TSM document for > // each Mac window > // > mTSMDocument = nsnull; Whack the tabs. Looks fine to me. Comment on attachment 222537 [details] [diff] [review] v2 for trunk, AE TSM handlers restored If the assertions in HandleNKeyEvent can happen in situations that don't involve changing a fundamental physical constant, it seems like we should be propagating the error somehow, or perhaps indicating that we didn't handle the event? Not sure if that would lead to us looping on a bogus event, though, or if the better response would be to just claim that we'd handled it and have the martian event just disappear. sr=shaver and 181=shaver if we're sure we're doing the right thing in that case. Very excited to see these fixes coming along! This is not a comment to patch v2. Apple Event data types left in nsMacEventHandler. ex. nsMacEventHandler::InitializeKeyEvent() If this method recieves Carbon events, this should use GetEventParameter(EventRef, kEventParamKeyModifiers, ...) to read modifiers. To adopt Carbon events, some methods should be rewritten. I don't estimate roughly how lines should be written. Mark, I don't read code you didn't attach to Bugzilla. Do you overhaul key event handlers of Gecko on Mac? (In reply to comment #39) > Mark, I don't read code you didn't attach to Bugzilla. > Do you overhaul key event handlers of Gecko on Mac? Yes, I did that in v1. 
The only difference between nsMacTSMMessagePump.cpp as modified by patch v1 (attachment 221541 [details] [diff] [review]) and as it properly exists to implement CE TSM handlers is that the proper implementation doesn't include the |#if 0| around the installation of the kEventTextInputUnicodeForKeyEvent handler, as you pointed out in comment 32, and I explained in comment 34. The work involved in converting the AE handlers to CE is already embodied in patch v1, and it seems also in your patches in bug 158859. (I didn't know about the existence of that bug before I did the conversion myself, and I certainly would have looked at your patches had been aware of it.) As you point out, the conversion does involve calls to GetEventParameter. For anyone who cares, I'll post a patch against the future current trunk to convert the AE to CE handlers, once this is checked in. Created attachment 222792 [details] [diff] [review] v3, review comments addressed, for checkin I'll check this in tonight. The 1.8 branch version is identical except for context in the Makefile diff. Comment 37 is addressed. In reply to comment 38: The only caller of nsMacEventHandler::HandleNKeyEvent is nsMacWindow::KeyEventHandler. KeyEventHandler is installed as a window-scope handler for only the two event kinds that HandleNKeyEvent knows how to cope with. The only way HandleNKeyEvent could be called and find different values for eventKind is if something very terrible happens. Created attachment 222816 [details] [diff] [review] v4, this one's going in Renamed HandleNKeyEvent to HandleKeyUpDownEvent to better reflect its function; relaxed one assertion in HandleKeyUpDownEvent to accept events without a charcode. Checked in on trunk and MOZILLA_1_8_BRANCH for 1.8.1a3 (Fox 2a3). Stay tuned for followups. Bug 338759 for the dead-code removal I alluded to above. I noticed (and filed) bug 338760 while testing this patch. It's not strictly a dependency, but it does affect Japanese text. 
I've got a new test build available, equivalent to 1.5.0.4rc3 plus 332579v8 and further key regression fixes from 337199v4 (this patch). This build is available at: I've prepared this build even though the above patches are now all checked in on the 1.8.1 branch because that branch is only alpha-quality at this point, and I'd like testers who have the time to be able to bang on the event improvements in isolation on a more stable codebase. So, testers, if you have the time, have at it! Created attachment 223068 [details] [diff] [review] TSM carbonization As promised in comment 40, this was my first cut at carbonizing the TSM handlers. The problem here is that it doesn't properly handle leaving IME sessions. When you type option-u, b, you expect to see u-umlaut, b, but instead, it just produces u-umlaut and leaves IME. When the focus is not a textarea, the behavior is bad: if you focus the content area and type option-u, command-l, you expect the location bar to be focused, but the command-l keystroke gets swallowed when IME exits. (In reply to comment #46) I have one question and one comment about current trunk codebase. 
760 void nsMacEventHandler::InitializeKeyEvent(nsKeyEvent& aKeyEvent,
761 EventRecord& aOSEvent, nsWindow* aFocusedWidget, PRUint32 aMessage,
762 PRBool aConvertChar)
763 {
780 aKeyEvent.isAlt = ((aOSEvent.modifiers & optionKey) != 0);
786 if (aMessage == NS_KEY_PRESS
787 && !IsSpecialRaptorKey((aOSEvent.message & keyCodeMask) >> 8) )
788 {
789 if (aKeyEvent.isControl)
790 {
791 if (aConvertChar)
792 {
793 aKeyEvent.charCode = (aOSEvent.message & charCodeMask);
794 if (aKeyEvent.charCode <= 26)
795 {
796 if (aKeyEvent.isShift)
797 aKeyEvent.charCode += 'A' - 1;
798 else
799 aKeyEvent.charCode += 'a' - 1;
800 } // if (aKeyEvent.charCode <= 26)
801 }
802 aKeyEvent.keyCode = 0;
803 } // if (aKeyEvent.isControl)
804 else // else for if (aKeyEvent.isControl)
805 {
806 if (!aKeyEvent.isMeta)
807 {
808 aKeyEvent.isControl = aKeyEvent.isAlt = aKeyEvent.isMeta = 0;
809 } // if (!aKeyEvent.isMeta)
810

If ((aMessage == NS_KEY_PRESS) && (!aKeyEvent.isControl) && (!aKeyEvent.isMeta)), this method clears the Option-Key flag (aKeyEvent.isAlt) at line 808. This may cause Option-u to be misinterpreted. Are there reasons? And this method can be simplified to only accept NS_KEY_PRESS and aConvertChar == PR_FALSE now. (In reply to comment #47) > If ((aMessage == NS_KEY_PRESS) && (!aKeyEvent.isControl) && > (!aKeyEvent.isMeta)), this method clears the Option-Key flag (aKeyEvent.isAlt) at > line 808. This may cause Option-u to be misinterpreted. Are there reasons? If the keystroke was supposed to go to an IME session (to begin it or to service one that's already active), the kUpdateActiveInputArea handler would have picked it up and NOT the kUnicodeNotFromInputMethod handler. For an English keyboard layout, we'll never take option-U through HandleUKeyEvent, and we'll never send an NS_KEY_PRESS for it, because it's managed by IME. We'll only hear about it in UnicodeHandleUpdateInputArea.
The alt flag is cleared because if we do reach HandleUKeyEvent for a keypress that was produced with option down and command and control up, it's supposed to generate a character (usually a symbol). Leaving the alt flag on in the NS_KEY_PRESS event will prevent that character from being interpreted as normal entered text, and it will instead be seen as alt-weirdsymbolcharacter. The result would be an inability to type any option-keyed characters not entered in an IME session. That's bad, and it's definitely not the way the Mac is supposed to work. I agree that this is not at all clear in the code. > And this method can be simplified to only accept NS_KEY_PRESS and > aConvertChar == PR_FALSE now. Yeah, and we can also avoid clearing flags already known to be clear (above), comment tricky situations (above), and probably do a general tidying-up in InitializeKeyEvent. If you file a new bug and write a patch, I'll review it. If you file a new bug and assign it to me, time permitting, I'll write the patch. (In reply to comment #48) > If you file a new bug and write a patch, I'll review it. I filed Bug #339221 and wrote a patch (Attachment #223406 [details] [diff]). Please review it.
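(To make the control flow in the discussion above concrete, here is the branch structure of the quoted InitializeKeyEvent excerpt restated in Python. This is an illustration only, with a made-up function name, not the real Gecko code:)

```python
def init_keypress_flags(char_code, is_shift, is_control, is_meta, is_alt):
    """Restates the NS_KEY_PRESS branch of the quoted excerpt."""
    key_code = None
    if is_control:
        # A Ctrl-letter arrives as a control character (1..26); map it back
        # to the letter, and zero the key code.
        if char_code <= 26:
            char_code += (ord('A') - 1) if is_shift else (ord('a') - 1)
        key_code = 0
    elif not is_meta:
        # Option produced a printable symbol: clear the modifier flags so
        # the character is treated as ordinary text, not Alt+<symbol>.
        is_control = is_alt = is_meta = False
    return char_code, key_code, is_control, is_alt, is_meta
```

The second branch is the flag-clearing discussed above: without it, option-keyed symbols on non-IME layouts would be seen as modified keystrokes instead of entered text.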
https://bugzilla.mozilla.org/show_bug.cgi?id=337199
Say Hello to Anvil

Anvil Essentials Part 1 of 4

In this video, we’ll take a tour of Anvil. We’ll start from a blank page, and build up to a multi-user web app complete with database storage and user accounts - all in nothing but Python. By signing up and building along with this video, you can learn all the essentials of Anvil.

Take a tour with me:

Now you’ve watched this tour, it’s time to explore further. Sign up and try it yourself, or watch more tutorials in the Anvil Learning Centre.

Next tutorial

In our next tutorial, we build a To-Do list that allows you to add, edit and delete reminders.

Topics Covered

Constructing a User Interface | 0:36 - 1:38

We construct the UI by dragging-and-dropping components from the Toolbox. We add a TextBox to enter a name, a Button to click, and an empty Label where a greeting will go. The Properties panel allows us to configure these components.

Handling events | 1:38 - 1:50

To make the Button do something, we simply double-click the Button in the Editor. This creates a Python method that runs when the Button is clicked. To configure more event handlers, use the ‘Events’ box at the bottom of the Properties panel on the right:

Controlling components from code | 1:50 - 2:18

Each component is available as a Python object. Their properties can be modified in code. We set the event handler to put a greeting in the message label, using the name entered in the text box:

def button_1_click(self, **event_args):
    """This method is called when the button is clicked"""
    self.message_label.text = 'Hello, ' + self.name_box.text + '!'

Entering a name now displays a greeting:

Publishing your app | 2:18 - 2:50

We’ve just built a simple web app - let’s publish it.

Running code on the server | 2:50 - 3:51

To run code on the server, we add a Server Module. This is a Python module that runs on the server. It’s ready to go right away.
We define a function to print the name that was entered:

def say_hello(name):
    print("Hello, " + name)

And we decorate it with @anvil.server.callable so we can call it from our page:

@anvil.server.callable
def say_hello(name):
    print("Hello, " + name)

Then in the client code, we can call it by running:

anvil.server.call('say_hello', self.name_box.text)

When we run our app again and enter “Meredydd”, we see that the server has printed “Hello, Meredydd”. This Server Module is running standard Python, so it can run any Python packages such as pandas, numpy, or googleads.

Storing Data | 3:51 - 5:00

You can connect to your own database, but you’ll often want something easier. We create a Data Table to record the name of each visitor we’ve seen. We give it a single text column, name. Then we put some code in the Server Module to store the names as they are entered:

app_tables.visitors.add_row(name=name)

User registration | 5:00 - 6:50

We enable the Users Service and discuss the features that it supports - Email + Password, Google, Facebook, plus your company’s Active Directory or certificate system. It automatically creates a Data Table to store the usernames, password hashes, and other data it manages for you.

To show the login/signup dialog, we add this line to our Form:

anvil.users.login_with_form()

We link the Users table to the Visitors table so we can see which user entered which name. Then we try it out - we run the app, sign up and verify our email, and log in. We see that we have a new row, and we’ve linked the entries in the visitors table to our users.

Saving and version control | 6:48 - 7:40

That’s it! Time to save what we’ve made. We hit the Save button to store the state of the app at this point in time. Then we look at the version history for this app. It’s backed by Git, so we have complete version control - and we can clone the app as a Git repo to work on it offline.
Finally, we set a particular version of the app to ‘published’ - this keeps our published app separate from the one we’re working on.

Try it for yourself

Build this app for yourself to master the essentials of Anvil. Sign up and follow along, or watch more tutorials in the Anvil Learning Centre.

Next up

The Anvil Essentials course continues with Building a News Aggregator. By the end of the four Anvil Essentials tutorials, you will be able to build and publish multi-user data management apps with Anvil.
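The @anvil.server.callable / anvil.server.call pair used above is, at heart, a name-to-function registry: decorating a server function registers it under its name, and the client invokes it by that name. This standalone sketch (plain Python, not the real anvil library) illustrates the idea:

```python
# Minimal stand-in for anvil.server's callable registry (illustrative only).
_REGISTRY = {}

def server_callable(fn):
    """Register a function so it can be invoked by name (like @anvil.server.callable)."""
    _REGISTRY[fn.__name__] = fn
    return fn

def server_call(name, *args, **kwargs):
    """Look up a registered function and run it (like anvil.server.call)."""
    return _REGISTRY[name](*args, **kwargs)

@server_callable
def say_hello(name):
    return "Hello, " + name

greeting = server_call("say_hello", "Meredydd")
```

In the real framework the call additionally crosses the network from browser to server, but the dispatch model is the same.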
https://anvil.works/learn/tutorials/hello-world
Alamofire 5 Tutorial for iOS: Getting Started

In this Alamofire tutorial, you’ll build an iOS companion app to perform networking tasks, send request parameters, decode/encode responses and more.

Version - Swift 5, iOS 13, Xcode 11

If you’ve been developing iOS apps for some time, you’ve probably needed to access data over the network. And for that you may have used Foundation’s URLSession. This is fine and all, but sometimes it becomes cumbersome to use. And that’s where this Alamofire tutorial comes in!

Alamofire is a Swift-based, HTTP networking library. It provides an elegant interface on top of Apple’s Foundation networking stack that simplifies common networking tasks. Its features include chainable request/response methods, JSON and Codable decoding, authentication and more.

In this Alamofire tutorial, you’ll perform basic networking tasks including:

- Requesting data from a third-party RESTful API.
- Sending request parameters.
- Converting the response into JSON.
- Converting the response into a Swift data model via the Codable protocol.

Getting Started

To kick things off, use the Download Materials button at the top or bottom of this article to download the begin project.

The app for this tutorial is StarWarsOpedia, which provides quick access to data about Star Wars films as well as the starships used in those films.

Start by opening StarWarsOpedia.xcworkspace inside the begin project. Build and run. You’ll see this:

It’s a blank slate now, but you’ll populate it with data soon!

Using the SW API

SW API is a free and open API that provides Star Wars data. It’s only updated periodically, but it’s a fun way to get to know Alamofire. Access the API at swapi.dev. There are multiple endpoints to access specific data, but you’ll concentrate on https://swapi.dev/api/films and https://swapi.dev/api/starships. For more information, explore the Swapi documentation.

Understanding HTTP, REST and JSON

If you’re new to accessing third-party services over the internet, this quick explanation will help.
HTTP is an application protocol used to transfer data from a server to a client, such as a web browser or an iOS app. HTTP defines several request methods that the client uses to indicate the desired action. For example:

- GET: Retrieves data, such as a web page, but doesn’t alter any data on the server.
- HEAD: Identical to GET, but only sends back the headers and not the actual data.
- POST: Sends data to the server. Use this, for example, when filling a form and clicking submit.
- PUT: Sends data to the specific location provided. Use this, for example, when updating a user’s profile.
- DELETE: Deletes data from the specific location provided.

JSON stands for JavaScript Object Notation. It provides a straightforward, human-readable and portable mechanism for transporting data between systems. JSON has a limited number of data types to choose from: string, boolean, array, object/dictionary, number and null.

Back in the dark days of Swift, pre-Swift 4, you needed to use the JSONSerialization class to convert JSON to data objects and vice-versa. It worked well and you can still use it today, but there’s a better way now: Codable. By conforming your data models to Codable, you get nearly automatic conversion from JSON to your data models and back.

REST, or REpresentational State Transfer, is a set of rules for designing consistent web APIs. REST has several architecture rules that enforce standards like not persisting states across requests, making requests cacheable and providing uniform interfaces. This makes it easy for app developers to integrate the API into their apps without having to track the state of data across requests.

HTTP, JSON and REST comprise a good portion of the web services available to you as a developer. Trying to understand how every piece works can be overwhelming. That’s where Alamofire comes in.

Why Use Alamofire?

You may be wondering why you should use Alamofire.
Apple already provides URLSession and other classes for accessing content via HTTP, so why add another dependency to your code base? The short answer is that while Alamofire is based on URLSession, it obscures many of the difficulties of making networking calls, freeing you to concentrate on your business logic. You can access data on the internet with little effort, and your code will be cleaner and easier to read.

There are several major functions available with Alamofire:

- AF.upload: Upload files with multi-part, stream, file or data methods.
- AF.download: Download files or resume a download already in progress.
- AF.request: Other HTTP requests not associated with file transfers.

These Alamofire methods are global, so you don’t have to instantiate a class to use them. Underlying Alamofire elements include classes and structs like SessionManager, DataRequest and DataResponse. However, you don’t need to fully understand the entire structure of Alamofire to start using it.

Enough theory. It’s time to start writing code!

Requesting Data

Before you can start making your awesome app, you need to do some setup. Start by opening MainTableViewController.swift. Under import UIKit, add the following:

import Alamofire

This allows you to use Alamofire in this view controller.

At the bottom of the file, add:

extension MainTableViewController {
  func fetchFilms() {
    // 1
    let request = AF.request("https://swapi.dev/api/films")
    // 2
    request.responseJSON { (data) in
      print(data)
    }
  }
}

Here’s what’s happening with this code:

- Alamofire uses namespacing, so you need to prefix all calls that you use with AF.
- request(_:method:parameters:encoding:headers:interceptor:) accepts the endpoint for your data. It can accept more parameters, but for now, you’ll just send the URL as a string and use the default parameter values.
- Take the response given from the request as JSON. For now, you simply print the JSON data for debugging purposes.
Finally, at the end of viewDidLoad(), add:

fetchFilms()

This triggers the Alamofire request you just implemented. Build and run. At the top of the console, you’ll see something like this:

success({
  count = 7;
  next = "<null>";
  previous = "<null>";
  results = ({...})
})

In a few very simple lines, you’ve fetched JSON data from a server. Good job!

Using a Codable Data Model

But, how do you work with the JSON data returned? Working with JSON directly can be messy due to its nested structure, so to help with that, you’ll create models to store your data.

In the Project navigator, find the Networking group and create a new Swift file in that group named Film.swift. Then, add the following code to it:

struct Film: Decodable {
  let id: Int
  let title: String
  let openingCrawl: String
  let director: String
  let producer: String
  let releaseDate: String
  let starships: [String]

  enum CodingKeys: String, CodingKey {
    case id = "episode_id"
    case title
    case openingCrawl = "opening_crawl"
    case director
    case producer
    case releaseDate = "release_date"
    case starships
  }
}

With this code, you’ve created the data properties and coding keys you need to pull data from the API’s film endpoint. Note how the struct is Decodable, which makes it possible to turn JSON into the data model.

The project defines a protocol — Displayable — to simplify showing detailed information later in the tutorial. You must make Film conform to it.
Add the following at the end of Film.swift:

extension Film: Displayable {
  var titleLabelText: String { title }
  var subtitleLabelText: String { "Episode \(String(id))" }
  var item1: (label: String, value: String) { ("DIRECTOR", director) }
  var item2: (label: String, value: String) { ("PRODUCER", producer) }
  var item3: (label: String, value: String) { ("RELEASE DATE", releaseDate) }
  var listTitle: String { "STARSHIPS" }
  var listItems: [String] { starships }
}

This extension allows the detailed information display’s view controller to get the correct labels and values for a film from the model itself.

In the Networking group, create a new Swift file named Films.swift. Add the following code to the file:

struct Films: Decodable {
  let count: Int
  let all: [Film]

  enum CodingKeys: String, CodingKey {
    case count
    case all = "results"
  }
}

This struct denotes a collection of films. As you previously saw in the console, the endpoint swapi.dev/api/films returns four main values: count, next, previous and results. For your app, you only need count and results, which is why your struct doesn’t have all properties. The coding keys transform results from the server into all. This is because Films.results doesn’t read as nicely as Films.all.

Again, by conforming the data model to Decodable, Alamofire will be able to convert the JSON data into your data model. For more information about Codable, see our tutorial on Encoding and Decoding in Swift.

Back in MainTableViewController.swift, in fetchFilms(), replace:

request.responseJSON { (data) in
  print(data)
}

With the following:

request.responseDecodable(of: Films.self) { (response) in
  guard let films = response.value else { return }
  print(films.all[0].title)
}

Now, rather than converting the response into JSON, you’ll convert it into your internal data model, Films. For debugging purposes, you print the title of the first film retrieved.

Build and run. In the Xcode console, you’ll see the name of the first film in the array.
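If Codable's key mapping is new to you, this is all that CodingKeys is doing: renaming the API's snake_case keys to your model's property names while decoding. A language-neutral sketch of the same idea in Python (illustrative only, not Alamofire code; the key map and sample payload are invented for the example):

```python
import json

# Map JSON keys (snake_case, as the API returns them) to model field names.
CODING_KEYS = {
    "episode_id": "id",
    "title": "title",
    "opening_crawl": "opening_crawl",
}

def decode(raw, key_map):
    """Decode a JSON object into a dict, renaming keys via key_map and
    dropping any keys the model doesn't declare."""
    obj = json.loads(raw)
    return {key_map[k]: v for k, v in obj.items() if k in key_map}

film = decode('{"episode_id": 4, "title": "A New Hope", "opening_crawl": "..."}',
              CODING_KEYS)
```

Swift's JSONDecoder does this renaming for you, driven by the CodingKeys enum.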
Your next task is to display the full list of movies.

Method Chaining

Alamofire uses method chaining, which works by connecting the response of one method as the input of another. This not only keeps the code compact, but it also makes your code clearer. Give it a try now by replacing all of the code in fetchFilms() with:

AF.request("https://swapi.dev/api/films")
  .validate()
  .responseDecodable(of: Films.self) { (response) in
    guard let films = response.value else { return }
    print(films.all[0].title)
  }

This single statement not only does exactly what took multiple lines to do before, but you also added validation. From top to bottom, you request the endpoint, validate the response by ensuring the response returned an HTTP status code in the range 200–299 and decode the response into your data model. Nice! :]

Setting up Your Table View

Now, at the top of MainTableViewController, add the following:

var items: [Displayable] = []

You’ll use this property to store the array of information you get back from the server. For now, it’s an array of films but there’s more coolness coming soon!

In fetchFilms(), replace:

print(films.all[0].title)

With:

self.items = films.all
self.tableView.reloadData()

This assigns all retrieved films to items and reloads the table view.

To get the table view to show the content, you must make some further changes. Replace the code in tableView(_:numberOfRowsInSection:) with:

return items.count

This ensures that you show as many cells as there are films.

Next, in tableView(_:cellForRowAt:) right below the declaration of cell, add the following lines:

let item = items[indexPath.row]
cell.textLabel?.text = item.titleLabelText
cell.detailTextLabel?.text = item.subtitleLabelText

Here, you set up the cell with the film name and episode ID, using the properties provided via Displayable. Build and run. You’ll see a list of films:

Now you’re getting somewhere!
You’re pulling data from a server, decoding it into an internal data model, assigning that model to a property in the view controller and using that property to populate a table view. But, as wonderful as that is, there’s a small problem: When you tap one of the cells, you go to a detail view controller which isn’t updating properly. You’ll fix that next.

Updating the Detail View Controller

First, you’ll register the selected item. Under var items: [Displayable] = [], add:

var selectedItem: Displayable?

You’ll store the currently-selected film to this property.

Now, replace the code in tableView(_:willSelectRowAt:) with:

selectedItem = items[indexPath.row]
return indexPath

Here, you’re taking the film from the selected row and saving it to selectedItem.

Now, in prepare(for:sender:), replace:

destinationVC.data = nil

With:

destinationVC.data = selectedItem

This sets the user’s selection as the data to display. Build and run. Tap any of the films. You should see a detail view that is mostly complete.

Fetching Multiple Asynchronous Endpoints

Up to this point, you’ve only requested films endpoint data, which returns an array of film data in a single request. If you look at Film, you’ll see starships, which is of type [String]. This property does not contain all of the starship data, but rather an array of endpoints to the starship data. This is a common pattern programmers use to provide access to data without providing more data than necessary.

For example, imagine that you never tap “The Phantom Menace” because, you know, Jar Jar. It’s a waste of resources and bandwidth for the server to send all of the starship data for “The Phantom Menace” because you may not use it. Instead, the server sends you a list of endpoints for each starship so that if you want the starship data, you can fetch it.

Creating a Data Model for Starships

Before fetching any starships, you first need a new data model to handle the starship data. Your next step is to create one.
In the Networking group, add a new Swift file. Name it Starship.swift and add the following code:

struct Starship: Decodable {
  var name: String
  var model: String
  var manufacturer: String
  var cost: String
  var length: String
  var maximumSpeed: String
  var crewTotal: String
  var passengerTotal: String
  var cargoCapacity: String
  var consumables: String
  var hyperdriveRating: String
  var starshipClass: String
  var films: [String]

  enum CodingKeys: String, CodingKey {
    case name
    case model
    case manufacturer
    case cost = "cost_in_credits"
    case length
    case maximumSpeed = "max_atmosphering_speed"
    case crewTotal = "crew"
    case passengerTotal = "passengers"
    case cargoCapacity = "cargo_capacity"
    case consumables
    case hyperdriveRating = "hyperdrive_rating"
    case starshipClass = "starship_class"
    case films
  }
}

As with the other data models, you simply list all the response data you want to use, along with any relevant coding keys.

You also want to be able to display information about individual ships, so Starship must conform to Displayable. Add the following at the end of the file:

extension Starship: Displayable {
  var titleLabelText: String { name }
  var subtitleLabelText: String { model }
  var item1: (label: String, value: String) { ("MANUFACTURER", manufacturer) }
  var item2: (label: String, value: String) { ("CLASS", starshipClass) }
  var item3: (label: String, value: String) { ("HYPERDRIVE RATING", hyperdriveRating) }
  var listTitle: String { "FILMS" }
  var listItems: [String] { films }
}

Just like you did with Film before, this extension allows DetailViewController to get the correct labels and values from the model itself.

Fetching the Starship Data

To fetch the starship data, you’ll need a new networking call.
Open DetailViewController.swift and add the following import statement to the top:

import Alamofire

Then at the bottom of the file, add:

extension DetailViewController {
  // 1
  private func fetch<T: Decodable & Displayable>(_ list: [String], of: T.Type) {
    var items: [T] = []
    // 2
    let fetchGroup = DispatchGroup()

    // 3
    list.forEach { (url) in
      // 4
      fetchGroup.enter()
      // 5
      AF.request(url).validate().responseDecodable(of: T.self) { (response) in
        if let value = response.value {
          items.append(value)
        }
        // 6
        fetchGroup.leave()
      }
    }

    fetchGroup.notify(queue: .main) {
      self.listData = items
      self.listTableView.reloadData()
    }
  }
}

Here is what’s happening in this code:

- You may have noticed that Starship contains a list of films, which you’ll want to display. Since both Film and Starship are Displayable, you can write a generic helper to perform the network request. It needs only to know the type of item it’s fetching so it can properly decode the result.
- You need to make multiple calls, one per list item, and these calls will be asynchronous and may return out of order. To handle them, you use a dispatch group so you’re notified when all the calls have completed.
- Loop through each item in the list.
- Inform the dispatch group that you are entering.
- Make an Alamofire request to the starship endpoint, validate the response, and decode the response into an item of the appropriate type.
- In the request’s completion handler, inform the dispatch group that you’re leaving.
- Once the dispatch group has received a leave() for each enter(), you ensure you’re running on the main queue, save the list to listData and reload the list table view.

Now that you have your helper built, you need to actually fetch the list of starships from a film.
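The enter/leave/notify bookkeeping above is a general fan-out/fan-in pattern: start one task per URL, then continue only once every task has finished. A conceptual sketch using Python's concurrent.futures (not Alamofire code; a real fetch_one would perform an HTTP request):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch_one):
    """Fan out one call per URL and return only when every call has finished,
    mirroring what DispatchGroup's enter/leave/notify bookkeeping achieves."""
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order; the Swift code above instead
        # appends results in completion order, which may differ.
        return list(pool.map(fetch_one, urls))

# Stand-in for a network call, purely for illustration.
items = fetch_all(["a", "b", "c"], lambda url: url.upper())
```

Back in Swift, DispatchGroup plays the role the executor plays here.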
Add the following inside your extension:

func fetchList() {
  // 1
  guard let data = data else { return }

  // 2
  switch data {
  case is Film:
    fetch(data.listItems, of: Starship.self)
  default:
    print("Unknown type: ", String(describing: type(of: data)))
  }
}

Here’s what this does:

- Since data is optional, ensure it’s not nil before doing anything else.
- Use the type of data to decide how to invoke your helper method.
- If the data is a Film, the associated list is of starships.

Now that you’re able to fetch the starships, you need to be able to display them in your app. That’s what you’ll do in your next step.

Updating Your Table View

In tableView(_:cellForRowAt:), add the following before return cell:

cell.textLabel?.text = listData[indexPath.row].titleLabelText

This code sets the cell’s textLabel with the appropriate title from your list data.

Finally, add the following at the end of viewDidLoad():

fetchList()

Build and run, then tap any film. You’ll see a detail view that’s fully populated with film data and starship data. Neat, right?

The app is starting to look pretty solid. However, look at the main view controller and notice that there’s a search bar that isn’t working. You want to be able to search for starships by name or model, and you’ll tackle that next.

Sending Parameters With a Request

For the search to work, you need a list of the starships that match the search criteria. To accomplish this, you need to send the search criteria to the endpoint for getting starships.

Earlier, you used the films’ endpoint, https://swapi.dev/api/films, to get the list of films. You can also get a list of all starships with the https://swapi.dev/api/starships endpoint. Take a look at the endpoint, and you’ll see a response similar to the film’s response:

success({
  count = 37;
  next = "<null>";
  previous = "<null>";
  results = ({...})
})

The only difference is that this time, the results data is a list of all starships. Alamofire’s request can accept more than just the URL string that you’ve sent so far.
It can also accept an array of key/value pairs as parameters. The swapi.dev API allows you to send parameters to the starships endpoint to perform a search. To do this, you use a key of search with the search criteria as the value. But before you dive into that, you need to set up a new model called Starships so that you can decode the response just like you do with the other responses.

Decoding Starships

Create a new Swift file in the Networking group. Name it Starships.swift and enter the following code:

struct Starships: Decodable {
  var count: Int
  var all: [Starship]

  enum CodingKeys: String, CodingKey {
    case count
    case all = "results"
  }
}

Like with Films, you only care about count and results.

Next, open MainTableViewController.swift and, after fetchFilms(), add the following method for searching for starships:

func searchStarships(for name: String) {
  // 1
  let url = "https://swapi.dev/api/starships"
  // 2
  let parameters: [String: String] = ["search": name]
  // 3
  AF.request(url, parameters: parameters)
    .validate()
    .responseDecodable(of: Starships.self) { response in
      // 4
      guard let starships = response.value else { return }
      self.items = starships.all
      self.tableView.reloadData()
    }
}

This method does the following:

- Sets the URL that you’ll use to access the starship data.
- Sets the key-value parameters that you’ll send to the endpoint.
- Here, you’re making a request like before, but this time you’ve added parameters. You’re also performing a validate and decoding the response into Starships.
- Finally, once the request completes, you assign the list of starships as the table view’s data and reload the table view.

Executing this request results in a URL like https://swapi.dev/api/starships?search={name}, where {name} is the search query passed in.

Searching for Ships

Start by adding the following code to searchBarSearchButtonClicked(_:):

guard let shipName = searchBar.text else { return }
searchStarships(for: shipName)

This code gets the text typed into the search bar and calls the new searchStarships(for:) method you just implemented.
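Under the hood, GET parameters like ["search": name] are simply percent-encoded and appended to the URL as a query string. A quick Python sketch of that encoding step (illustrative only; Alamofire performs the equivalent encoding for you in Swift):

```python
from urllib.parse import urlencode

base_url = "https://swapi.dev/api/starships"
parameters = {"search": "wing"}

# Encode the parameter dictionary and append it as the query string,
# producing the same shape of URL the tutorial describes.
full_url = f"{base_url}?{urlencode(parameters)}"
```

urlencode also percent-escapes characters such as spaces, which is why you pass parameters as a dictionary rather than concatenating strings by hand.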
When the user cancels a search, you want to redisplay the list of films. You could fetch it again from the API, but that’s a poor design practice. Instead, you’re going to cache the list of films to make displaying it again quick and efficient.

Add the following property at the top of the class to cache the list of films:

var films: [Film] = []

Next, add the following code after the guard statement in fetchFilms():

self.films = films.all

This saves away the list of films for easy access later.

Now, add the following code to searchBarCancelButtonClicked(_:):

searchBar.text = nil
searchBar.resignFirstResponder()
items = films
tableView.reloadData()

Here, you remove any search text entered, hide the keyboard using resignFirstResponder() and reload the table view, which causes it to show films again.

Build and run. Search for wing. You’ll see all the ships with the word “wing” in their name or model.

That’s great! But, it’s not quite complete. If you tap one of the ships, the list of films that ship appears in is empty. This is easy to fix thanks to all the work you did before. There’s even a huge hint in the debug console!

Display a Ship’s List of Films

Open DetailViewController.swift and find fetchList(). Right now, it only knows how to fetch the list associated with a film. You need to fetch the list for a starship.

Add the following just before the default: label in the switch statement:

case is Starship:
  fetch(data.listItems, of: Film.self)

This tells your generic helper to fetch a list of films for a given starship.

Build and run. Search for a starship. Select it. You’ll see the starship details and the list of films it appeared in. You now have a fully functioning app! Congratulations.

Where to Go From Here?

You can download the completed project using the Download Materials button at the top or bottom of this article.

While building your app, you’ve learned a lot about Alamofire’s basics.
You learned that Alamofire can make networking calls with very little setup and how to make basic calls using the request function by sending just the URL string. Also, you learned to make more complex calls to do things like searching by sending parameters.

You learned how to use request chaining and request validation, how to convert the response into JSON and how to convert the response data into a custom data model.

This article covered the very basics. You can take a deeper dive by looking at the documentation on the Alamofire site. I also highly suggest learning more about Apple’s URLSession, which Alamofire uses under the hood.

I hope you enjoyed this tutorial. Please share any comments or questions about this article in the forum discussion below!
https://www.raywenderlich.com/6587213-alamofire-5-tutorial-for-ios-getting-started
Hsqldb User Guide

List of Tables

1. Alternate formats of this document
4.1. Hsqldb URL Components
4.2. Connection Properties
4.3. Hsqldb Server Properties Files
4.4. Property File Properties
4.5. Server Property File Properties
4.6. WebServer Property File Properties
4.7. Database-specific Property File Properties
8.1. Data Types

List of Examples

1.1. Java code to connect to the local Server above
2.1. Column values which satisfy a 2-column UNIQUE constraint
2.2. Query comparison
2.3. Numbering returned rows of a SELECT in sequential order
3.1. server.properties fragment
3.2. example sqltool.rc stanza
6.1. Exporting certificate from the server's keystore
6.2. Adding a certificate to the client keystore
6.3. Specifying your own trust store to a JDBC client
6.4. Running an Hsqldb server with TLS encryption
6.5. Getting a pem-style private key into a JKS keystore
7.1. Sample sqltool.rc File
7.2. Defining and using an alias (PL variable)
7.3. Piping input into SqlTool
7.4. Valid comment example
7.5. Invalid comment example
7.6. Simple SQL file using PL
7.7. SQL File showing use of most PL features
A.1. Building the standard Hsqldb jar file with Ant
A.2. Example source code before CodeSwitcher is run
A.3. CodeSwitcher command line invocation
A.4. Source code after CodeSwitcher processing
B.1. JDBC Client source code example

Introduction

If you notice any mistakes in this document, please email the author listed at the beginning of the chapter. If you have problems with the procedures themselves, please use the HSQLDB support facilities listed on the project web site.

The project web site hosts the latest production versions of this document in all available formats. If you want a different format of the same version of the document you are reading now, then you should try your current distro. If you want the latest production version, you should try http://hsqldb.sourceforge.net/guide. Sometimes, distributions other than the project web site do not host all available formats. So, if you can't access the format that you want in your current distro, you have no choice but to use the newest production version at http://hsqldb.sourceforge.net/guide.

Chapter 1. Running and Using Hsqldb

Introduction

The HSQLDB jar package is located in the /lib directory and contains several components and programs. Different commands are used to run each program.

• HSQLDB RDBMS

The HSQLDB RDBMS and JDBC Driver provide the core functionality. The rest are general-purpose database tools that can be used with any database engine that has a JDBC driver.

Running Tools

All tools can be run in the standard way for archived Java classes. In the following example, the AWT version of the Database Manager is run; the hsqldb.jar is located in the directory ../lib relative to the current directory.

java -cp ../lib/hsqldb.jar org.hsqldb.util.DatabaseManager

• org.hsqldb.util.DatabaseManager
Note
When the engine closes the database at a shutdown, it creates temporary files with the extension .new which it then renames to those listed above.
Server Modes
A server serves up to 10 databases that are specified at the time of running the server. Server modes can use preset properties or command line arguments as detailed in the Advanced Topics chapter. There are three server modes, based on the protocol used for communications between the client and server.
Hsqldb Server
This mode is run with the org.hsqldb.Server main class. The command line argument -? can be used to get a list of available arguments.
To run a web server, replace the main class for the server in the example command line above with the following:
org.hsqldb.WebServer
The command line argument -? can be used to get a list of available arguments.
Hsqldb Servlet
This uses the same protocol as the Web Server. It is used when a separate servlet engine (or application server) such as Tomcat or Resin provides access to the database. The Servlet Mode cannot be started independently from the servlet engine. The hsqlServlet class, in the HSQLDB jar, should be installed on the application server to provide the connection. The database is specified using an application server property. Refer to the source file hsqlServlet.java to see the details.
Both Web Server and Servlet modes can only be accessed using the JDBC driver at the client end. They do not provide a web front end to the database. The Servlet mode can serve only a single database.
Security Considerations
When HSQLDB is run as a server, the port should be adequately protected with a firewall. The password for the default system user should also be changed from the default empty string. When the server is run with the HTTP protocol, public access should not be allowed to confidential databases as it is not difficult to spoof an existing open session.
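Following the pattern used for running the tools, the server classes can be run directly from the jar. A sketch (the relative path assumes hsqldb.jar is in ../lib, as in the earlier tool examples):

```
java -cp ../lib/hsqldb.jar org.hsqldb.Server -?
java -cp ../lib/hsqldb.jar org.hsqldb.WebServer -?
```

Run without -?, each class starts the corresponding server, reading its settings from server.properties or webserver.properties in the current directory if present.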
The recommended way of using the in-process mode in an application is to use an HSQLDB Server instance for the database while developing the application and then switch to In-Process mode for deployment.
An In-Process Mode database is started from JDBC, with the database file path specified in the connection URL. For example, if the database name is testdb and its files are located in the same directory as where the command to run your application was issued, the URL used for the connection is:
jdbc:hsqldb:file:testdb
The database file path can be specified using forward slashes in Windows hosts as well as Linux hosts. So relative paths or paths that refer to the same directory on the same drive can be identical. For example if your database path in Linux is /opt/db/testdb and you create an identical directory structure on the C: drive of a Windows host, you can use the same URL in both Windows and Linux:
jdbc:hsqldb:file:/opt/db/testdb
When using relative paths, these paths will be taken relative to the directory in which the shell command to start the Java Virtual Machine was executed. Refer to Javadoc for jdbcConnection [../src/org/hsqldb/jdbc/jdbcConnection.html] for more details.
Memory-Only Databases
It is possible to run HSQLDB in a way that the database is not persistent and exists entirely in random access memory. As no information is written to disk, this mode should be used only for internal processing of application data, in applets or certain special applications. This mode is specified by the mem: protocol. You can also run a memory-only server instance by specifying the same URL in the server.properties. This usage is not common and is limited to special applications where the database server is used only for exchanging information between clients, or for non-persistent data.
General
Closing the Database
All databases running in different modes can be closed with the SHUTDOWN command, issued as an SQL query.
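A minimal sketch of opening an in-process connection and shutting the database down cleanly. It assumes hsqldb.jar is on the classpath and uses the 1.7.x driver class name; the database name testdb is taken from the example above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InProcessDemo {
    public static void main(String[] args) throws Exception {
        // register the HSQLDB JDBC driver (1.7.x class name)
        Class.forName("org.hsqldb.jdbcDriver");
        // creates the testdb.* files in the current directory if they do not exist
        Connection c = DriverManager.getConnection(
                "jdbc:hsqldb:file:testdb", "sa", "");
        Statement st = c.createStatement();
        st.execute("SHUTDOWN");   // close the database cleanly
        c.close();
    }
}
```

This sketch is not runnable without the HSQLDB jar; it only illustrates the URL form and the SHUTDOWN statement described above.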
In 1.7.2, in-process databases are no longer closed when the last connection to the database is explicitly closed via JDBC; a SHUTDOWN is required.
Unused disk space builds up as inserts, updates or deletes are performed on the cached tables. Changes to the structure of the database, such as dropping or modifying tables or indexes, also create large amounts of unused space that can be reclaimed with the SHUTDOWN COMPACT command.
The ifexists connection property can be used to open an existing database only and avoid creating a new database. In this case, if the database does not exist, the getConnection() method will throw an exception.
The different types of tables described below offer the most flexibility possible to achieve so far in a small-footprint database engine. The full list of SQL commands is in the SQL Syntax chapter.
TEMP tables are not written to disk and last only for the lifetime of the Connection object. Each TEMP table is visible only from the Connection that was used to create it; other concurrent connections to the database will not have access to the table.
The three types of persistent tables are MEMORY tables, CACHED tables and TEXT tables.
Memory tables are the default type when the CREATE TABLE command is used. Their data is held entirely in memory. CACHED tables, created with the CREATE CACHED TABLE command, hold only part of their data and indexes in memory, allowing large tables that would otherwise occupy hundreds of megabytes of memory. Another advantage of cached tables is that the database engine takes less time to start up when a cached table is used for large amounts of data. The disadvantage of cached tables is a reduction in speed. Do not use cached tables if your data set is relatively small. In an application with some small tables and some large ones, it is better to use the default, MEMORY mode for the small tables.
TEXT tables are new to version 1.7.0 and use a CSV (Comma Separated Value) or other delimited text file as the source of their data. You can specify an existing CSV file, such as a dump from another database.
HSQLDB creates indexes internally to support PRIMARY KEY, UNIQUE and FOREIGN KEY constraints. With a query such as SELECT ... WHERE a > 10 AND b = 0, an index on a column used in the condition allows the query to run quickly.
Indexes have no effect on ORDER BY clauses or some LIKE conditions.
As a rule of thumb, HSQLDB is capable of processing queries and returning over 100,000 rows per second. Any query that takes several seconds should be checked and indexes should be added to the relevant columns of the tables if necessary.
SQL Support
The SQL syntax supported by HSQLDB is essentially that specified by the SQL Standard (92 and 200n). Not all the features of the Standard are supported and there are some proprietary extensions. In 1.7.2 the behaviour of the engine is far more compliant with the Standards than in older versions. The main changes are
• correct treatment of NULL column values in joins, in UNIQUE constraints and in query conditions
The supported commands are listed in the SQL Syntax chapter. For a well written basic guide to SQL, consult an introductory SQL text; most general SQL coverage applies also to HSQLDB. Some keywords, however, are not supported (ALL, ANY, OUTER, OID's, etc.) or are used differently (IDENTITY/SERIAL, LIMIT, TRIGGER, SEQUENCE, etc.).
JDBC Support
In 1.7.2, support for JDBC2 has been significantly extended and some features of JDBC3 are also supported. The relevant classes are thoroughly documented. See the JavaDoc for org.hsqldb.jdbcXXXX [../src/] classes.
Chapter 2. SQL Issues
Purpose
Many questions repeatedly asked in Forums and mailing lists are answered in this guide. If you want to use HSQLDB with your application, you should read this guide. The SQL Syntax chapter of this guide lists all the keywords and syntax that is supported.
When writing or converting existing SQL DDL (Data Definition Language) and DML (Data Manipulation Language) statements for HSQLDB, you should consult the supported syntax and modify the statements accordingly.
Several words are reserved by the standard and cannot be used as table or column names. For example, the word POSITION is reserved as it is a function defined by the Standards with a similar role as String.indexOf() in Java.
HSQLDB does not currently prevent you from using a reserved word if it does not support its use or can distinguish it. For example ANY and BEGIN are reserved words that are not currently supported by HSQLDB and are allowed as names of tables or columns. You should avoid the use of such words as future versions of HSQLDB are likely to support the words and will reject your table definitions or queries. The full list of SQL reserved words is in the source of the org.hsqldb.Token class.
HSQLDB also supports some keywords and expressions that are not part of the SQL standard as enhancements. Expressions such as SELECT TOP 5 FROM .., SELECT LIMIT 0 10 FROM ... or DROP TABLE mytable IF EXISTS are among such constructs.
Unique Constraints
According to the SQL standards, a unique constraint on a single column means no two values are equal unless one of them is NULL. This means you can have one or more rows where the column value is NULL.
A unique constraint on multiple columns (c1, c2, c3, ..) means that no two sets of values for the columns are equal unless at least one of them is NULL. Each single column taken by itself can have repeated values. The following example satisfies a UNIQUE constraint on the two columns:

1, 2
2, 1
2, 2
NULL, 1
NULL, 1
1, NULL
NULL, NULL
NULL, NULL

Since version 1.7.2 the behaviour of UNIQUE constraints and indexes with respect to NULL values has changed to conform to SQL standards. A row, in which the value for any of the UNIQUE constraint columns is NULL, can always be added to the table. So multiple rows can contain the same values for the UNIQUE columns if one of the values is NULL.
Unique Indexes
In 1.7.3, user defined UNIQUE indexes can still be declared but they are deprecated. You should use a UNIQUE constraint instead. CONSTRAINT <name> UNIQUE always creates internally a unique index on the columns, as with previous versions, so it has exactly the same effect as the deprecated UNIQUE index declaration.
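The NULL behaviour of UNIQUE constraints described above can be sketched as follows (the table and constraint names are illustrative):

```sql
CREATE TABLE t (c1 INTEGER, c2 INTEGER, CONSTRAINT uq_t UNIQUE (c1, c2));
INSERT INTO t VALUES (1, 2);     -- ok
INSERT INTO t VALUES (NULL, 1);  -- ok
INSERT INTO t VALUES (NULL, 1);  -- also ok: a NULL value makes the rows distinct
INSERT INTO t VALUES (1, 2);     -- fails: duplicates a fully non-NULL pair
```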
FOREIGN KEYS
From version 1.7.0, HSQLDB features single and multiple column foreign keys. A foreign key can also be specified to reference a target table without naming the target column(s). In this case the primary key column(s) of the target table is used as the referenced column(s). Each pair of referencing and referenced columns in any foreign key should be of identical type. When a foreign key is declared, a unique constraint (or primary key) must exist on the referenced columns in the primary key table. A non-unique index is automatically created on the referencing columns. For example:

CREATE TABLE child(c1 INTEGER, c2 VARCHAR, FOREIGN KEY (c1, c2) REFERENCES parent(p1, p2));

There must be a UNIQUE constraint on columns (p1, p2) in the table named "parent". A non-unique index is automatically created on columns (c1, c2) in the table named "child". Columns p1 and c1 must be of the same type (INTEGER). Columns p2 and c2 must be of the same type (VARCHAR).
Given an index on a column used in the WHERE clause, it is often possible to start directly from the first candidate row and reduce the number of rows that are examined.
Indexes are even more important in joins between multiple tables. SELECT ... FROM t1 JOIN t2 ON t1.c1 = t2.c2 is performed by taking rows of t1 one by one and finding a matching row in t2. If there is no index on t2.c2 then for each row of t1, all the rows of t2 must be checked. Whereas with an index, a matching row can be found in a fraction of the time. If the query also has a condition on t1, e.g., SELECT ... FROM t1 JOIN t2 ON t1.c1 = t2.c2 WHERE t1.c3 = 4 then an index on t1.c3 would eliminate the need for checking all the rows of t1 one by one, and will reduce query time to less than a millisecond per returned row. So if t1 and t2 each contain 10,000 rows, the query without indexes involves checking 100,000,000 row combinations. With an index on t2.c2, this is reduced to 10,000 row checks and index lookups.
With the additional index on t1.c3, only about 4 rows are checked to get the first result row.
Indexes are automatically created for primary key and unique columns. Otherwise you should define an index using the CREATE INDEX command.
Note that in HSQLDB a unique index on multiple columns can be used internally as a non-unique index on the first column in the list. For example: CONSTRAINT name1 UNIQUE (c1, c2, c3); means there is the equivalent of CREATE INDEX name2 ON atable(c1);. So you do not need to specify an extra index if you require one on the first column of the list.
In 1.7.3, a multi-column index will speed up queries that contain joins or values on ALL the columns. You need NOT declare additional individual indexes on those columns unless you use queries that search only on a subset of the columns. For example, rows of a table that has a PRIMARY KEY or UNIQUE constraint on three columns or simply an ordinary index on those columns can be found efficiently when values for all three columns are specified in the WHERE clause. For example, SELECT ... FROM t1 WHERE t1.c1 = 4 AND t1.c2 = 6 AND t1.c3 = 8 will use an index on t1(c1,c2,c3) if it exists.
As a result of the improvements to multiple key indexes, the order of declared columns of the index or constraint has less effect on the speed of searches than before. If the column that contains more diverse values appears first, the searches will be slightly faster.
A multi-column index will not speed up queries on the second or third column only. The first column must be specified in the JOIN .. ON or WHERE conditions.
Query speed depends a lot on the order of the tables in the JOIN .. ON or FROM clauses. For example the second query below should be faster with large tables (provided there is an index on TB.COL3).
The reason is that TB.COL3 can be evaluated very quickly if it applies to the first table (and there is an index on TB.COL3):

(TB is a very large table with only a few rows where TB.COL3 = 4)

SELECT * FROM TA JOIN TB ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;
SELECT * FROM TB JOIN TA ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;

The general rule is to put first the table that has a narrowing condition on one of its columns.
1.7.3 features automatic, on-the-fly indexes for views and subselects that are used in a query. An index is added to a view when it is joined to a table or another view.

SELECT ... FROM TA, TB, TC WHERE TC.COL3 = TA.COL1 AND TC.COL3 = TB.COL2 AND TC.COL4 = 1

The query implies TA.COL1 = TB.COL2 but does not explicitly set this condition. If TA and TB each contain 100 rows, 10000 combinations will be joined with TC to apply the column conditions, even though there may be indexes on the joined columns. With the JOIN keyword, the TA.COL1 = TB.COL2 condition has to be explicit and will narrow down the combination of TA and TB rows before they are joined with TC, resulting in much faster execution with larger tables.
The query can be sped up even more if the order of tables in joins is changed, so that TC.COL4 = 1 is applied first and a smaller set of rows is joined together. With that ordering, the engine automatically applies TC.COL4 = 1 to TC and joins only the set of rows that satisfy this condition with other tables. Indexes on TC.COL4, TB.COL2 and TA.COL1 will be used if present and will speed up the query.

SELECT ... FROM TA WHERE TA.COL1 = (SELECT MAX(TB.COL2) FROM TB WHERE TB.COL3 = 4)
SELECT ... FROM (SELECT MAX(TB.COL2) C1 FROM TB WHERE TB.COL3 = 4) T2 JOIN TA ON TA.COL1 = T2.C1

The second query turns MAX(TB.COL2) into a single row table then joins it with TA. With an index on TA.COL1, this will be very fast. The first query will test each row in TA and evaluate MAX(TB.COL2) again and again.
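The indexing guidelines above can be collected into a short sketch (the table and index names are illustrative):

```sql
CREATE TABLE atable (c1 INTEGER, c2 INTEGER, c3 INTEGER);
CREATE INDEX idx1 ON atable (c1, c2, c3);

-- uses idx1: values for all three columns are specified
SELECT * FROM atable WHERE c1 = 4 AND c2 = 6 AND c3 = 8;

-- uses idx1 as a non-unique index on its first column
SELECT * FROM atable WHERE c1 = 4;

-- cannot use idx1: the first column is not constrained
SELECT * FROM atable WHERE c2 = 6;
```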
Previous versions of HSQLDB featured poor handling of arithmetic operations. For example, it was not possible to insert 10/2.5 into any DOUBLE or DECIMAL column. In 1.7.0, full operations are possible with the following rules:
TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point) are supported integral types and map to byte, short, int, long and BigDecimal in Java. The SQL type dictates the maximum and minimum values that can be held in a field of each type. For example the value range for TINYINT is -128 to +127, although the actual Java type used for handling TINYINT is java.lang.Integer. DECIMAL and NUMERIC are mapped to java.math.BigDecimal and can have very large numbers of digits before or after the decimal point.
Integral Types
TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point) are fully interchangeable internally, and no data narrowing takes place. Depending on the types of the operands, the result of the operations is returned in a JDBC ResultSet in any of the related Java types: Integer, Long or BigDecimal. The ResultSet.getXXXX() methods can be used to retrieve the values so long as the returned value can be represented by the resulting type. This type is deterministically based on the query, not on the actual rows returned. The type does not change when the same query that returned one row, returns many rows as a result of adding more data to the tables.
If the SELECT statement refers to a simple column or function, then the return type is the type corresponding to the column or the return type of the function. For example, a query returning an INTEGER column and a BIGINT column would return a result set where the type of the first column is java.lang.Integer and that of the second column is java.lang.Long. However, a query returning the results of arithmetic expressions on such columns would return java.lang.Long and BigDecimal values, generated as a result of uniform type promotion for all the return values. There is no built-in limit on the size of intermediate integral values in expressions.
As a result, you should check for the type of the ResultSet column and choose an appropriate getXXXX() method to retrieve it. Alternatively, you can use the getObject() method, then cast the result to java.lang.Number and use the intValue() or longValue() methods on the result.
When the result of an expression is stored in a column of a database table, it has to fit in the target column, otherwise an error is returned. For example when 1234567890123456789012 / 12345687901234567890 is evaluated, the result can be stored in any integral type column, even a TINYINT column, as it is a small value.
When a REAL, FLOAT or DOUBLE (all synonymous) is part of an expression, the type of the result is DOUBLE. Otherwise, when no DOUBLE value exists, if a DECIMAL or NUMERIC value is part of an expression, the type of the result is DECIMAL. The result can be retrieved from a ResultSet in the required type so long as it can be represented. This means DECIMAL values can be converted to DOUBLE unless they are beyond the Double.MIN_VALUE - Double.MAX_VALUE range. Similar to integral values, when the result of an expression is stored in a table column, it has to fit in the target column, otherwise an error is returned.
The distinction between DOUBLE and DECIMAL is important when a division takes place. When the terms are DECIMAL, the result is a value with a scale (number of digits to the right of the decimal point) equal to the larger of the scales of the two terms. With a DOUBLE term, the scale will reflect the actual result of the operation. For example, 10.0/8.0 (DECIMAL) equals 1.2 but 10.0E0/8.0E0 (DOUBLE) equals 1.25. Without division operations, DECIMAL values represent exact arithmetic; the resulting scale is the sum of the scales of the two terms when multiplication is performed.
REAL, FLOAT and DOUBLE values are all stored in the database as java.lang.Double objects. Special values such as NaN and +-Infinity are also stored and supported.
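The DECIMAL vs DOUBLE scale rules above can be reproduced with java.math.BigDecimal, the type HSQLDB maps DECIMAL to. This is an illustrative sketch, not HSQLDB source code; the truncating rounding mode is an assumption chosen to match the documented result 10.0/8.0 = 1.2:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalScaleDemo {

    // DECIMAL division rule: result scale = the larger of the two operand scales
    static BigDecimal decimalDivide(BigDecimal a, BigDecimal b) {
        int scale = Math.max(a.scale(), b.scale());
        // RoundingMode.DOWN (truncation) is assumed here to reproduce
        // the documented behaviour; it is not taken from the engine source
        return a.divide(b, scale, RoundingMode.DOWN);
    }

    public static void main(String[] args) {
        // 10.0 / 8.0 as DECIMAL: both operands have scale 1, so the
        // exact quotient 1.25 is reduced to scale 1, giving 1.2
        System.out.println(decimalDivide(
                new BigDecimal("10.0"), new BigDecimal("8.0"))); // 1.2

        // 10.0E0 / 8.0E0 as DOUBLE: ordinary floating-point division
        System.out.println(10.0e0 / 8.0e0); // 1.25

        // DECIMAL multiplication: the scales add up (1 + 2 = 3)
        System.out.println(new BigDecimal("10.0")
                .multiply(new BigDecimal("8.00"))); // 80.000
    }
}
```

The multiplication case shows the exact-arithmetic rule from the text: the result scale is the sum of the operand scales.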
Such special values can be submitted to the database via JDBC PreparedStatement methods and are returned in ResultSet objects.
Since 1.7.3 the BOOLEAN type conforms to the SQL standards and supports the UNDEFINED state in addition to TRUE or FALSE. NULL values are treated as undefined. This improvement affects queries that contain NOT IN. See the test text file, TestSelfNot.txt, for examples of the queries.
For comparison purposes and in indexes, any two Java Objects are considered equal unless one of them is NULL. You cannot search for a specific object or perform a join on a column of type OTHER. Please note that HSQLDB is not an object-relational database. Java Objects can simply be stored internally and no operations should be performed on them other than assignment between columns of type OTHER or tests for NULL.
Tests such as WHERE object1 = object2, or WHERE object1 = ? do not mean what you might expect, as any non-null object would satisfy such a test. But WHERE object1 IS NOT NULL is perfectly acceptable.
The engine does not return errors when normal column values are assigned to Java Object columns (for example assigning an INTEGER or STRING to such a column with an SQL statement such as UPDATE mytable SET objectcol = intcol WHERE ...) but this is highly likely to be disallowed in future. So please use columns of type OTHER only to store your objects and nothing else.
Since 1.7.2, the size and precision qualifiers are still ignored unless you set a database property:

SET PROPERTY "sql.enforce_strict_size" TRUE

Please note that casting a value to a qualified CHARACTER type will not result in truncation or padding as you might expect. So you cannot rely on a test such as CAST (mycol AS VARCHAR(2)) = 'xy' to find the values beginning with 'xy'. Use SUBSTRING(mycol FROM 1 FOR 2) instead.
In 1.7.2 the SQL standard syntax is used by default, which allows the initial value to be specified.
The supported form is (<colname> INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH n, [INCREMENT BY m]) PRIMARY KEY, ...). Support has also been added for BIGINT identity columns. As a result, an IDENTITY column is simply an INTEGER or BIGINT column with its default value generated by a sequence generator.
When you add a new row to such a table using an INSERT INTO <tablename> ...; statement, you can use the NULL value for the IDENTITY column, which results in an auto-generated value for the column. The IDENTITY() function returns the last value inserted into any IDENTITY column by this connection. Use CALL IDENTITY(); as an SQL statement to retrieve this value. If you want to use the value for a field in a child table, you can use INSERT INTO <childtable> VALUES (...,IDENTITY(),...);. Both types of call to IDENTITY() must be made before any additional update or insert statements are issued on the database.
Sequences
The SQL 200n syntax and usage is different from what is supported by many existing database engines. Sequences are created with the CREATE SEQUENCE command and their current value can be modified at any time with ALTER SEQUENCE. The next value for a sequence is retrieved with the NEXT VALUE FOR <name> expression. This expression can be used for inserting and updating table rows. You can also use it in select statements. For example, if you want to number the returned rows of a SELECT in sequential order, you can use:

SELECT NEXT VALUE FOR mysequence, col1, col2 FROM mytable WHERE ...

Please note that the semantics of sequences is not exactly the same as defined by SQL 200n. For example if you use the same sequence twice in the same row insert query, you will get two different values, not the same value as required by the standard.
You can query the SYSTEM_SEQUENCES table for the next value that will be returned from any of
The SEQUENCE_NAME column contains the name and the NEXT_VALUE column contains the next value to be returned. If two transactions modify the same row, no exception is raised when both transactions are committed. This can be avoided by designing your database in such a way that application data consistency does not depend on exclusive modification of data by one transaction. When an ALTER TABLE .. INSERT COLUMN or DROP COLUMN command results in changes to the table structure, the current session is committed. If an uncommitted transaction started by another connections has changed the data in the affected table, it may not be possible to roll it back after the AL- TER TABLE command. This may also apply to ADD INDEX or ADD CONSTRAINT commands. It is recommended to use these ALTER commands only when it is known that other connections are not us- ing transactions. 15 Chapter 3. UNIX Quick Start How to quickly get Hsqldb up and running on UNIX, including Mac OS X Blaine Simpson, HSQLDB Development Group <blaine.simpson@admc.com> $Date: 2004/07/16 13:13:07 $ Purpose This chapter explains how to quickly install, run, and use HSQLDB version 1.7.2 on UNIX. HSQLDB has lots of great optional features. I intend to cover very few of them. I do intend to cover what I think is the most common UNIX setup: To run a multi-user database with permament data per- sistence. (By the latter I mean that data is stored to disk so that the data will persist across database shut- downs and startups). I also cover how to run Hsqldb as a system daemon. Installation Go to and click on the "files" link. Look for "hsqldb_1_7_2" under lower-case "hsqldb". Click on "show only this release" link right after "hsqldb_1_7_2". Click the "hsqldb_1_7_2" link to find out what version of Java this binary HSQLDB distribution was built with. Don't hold me to it, but if you get your distro from us at SourceForge, it will probably be built with Java 1.4. 
Choose a binary package format that will work with your UNIX variant and which sup- ports your Java version. Otherwise choose the hsqldb_1_7_2.zip file. Click the filename to download it. (If you have an older version of Java and there's nothing preventing you from upgrading it, you'll prob- ably be happier in the end if you upgrade Java rather than downgrading HSQLDB). If you want an rpm, then click "hsqldb" in the "free section" of. Hopefully, the JPackage folk will document what JVM versions their rpm will support (currently they document this neither on their site nor within the package itself). Download the package you want, making sure that you get version 1.7.2 of HSQLDB. (I really can't document how to download from a site that is totally beyond my control). Note It could very well happen that some of the file formats which I discuss here are not in fact offered. If so, then we have not gotten around to building them. Installing from a .pkg.Z file This package is only for use by a Solaris super-user. It's a System V package. Download then uncompress the package with uncom- press or gunzip uncompress filename.pkg.Z 16 UNIX Quick Start pkginfo -l -d filename.pkg pkgadd filename.pkg Installing from a .rpm file This is a Linux rpm package. After you download the rpm, you can read about it by running as root. Installing from a .zip file Take a look at the files you installed. (Under hsqldb for zip file installations. Otherwise, use the utilit- ies for your packaging system). The most important file of the hsqldb system is hsqldb.jar, which resides in the directory lib. Important. 17 UNIX Quick Start 1. If you don't already have Ant, download the latest stable binary version from. cd to where you want Ant to live, and extract from the archive with unzip /path/to/file.zip or or Everything will be installed into a new subdirectory named apache-ant- + version. You can rename the directory after the extraction if you wish. 2. 
Set the environmental variable JAVA_HOME to the base directory of your Java JRE or SDK, like The location is entirely dependent upon your variety of UNIX. Sun's rpm distributions of Java nor- mally). 4. cd to HSQLDB_HOME/build. Make sure that the bin directory under your Ant home is in your search path. Run the following command. ant jar See the Building Hsqldb version 1.7.2 appendix if you want to build anything other than hsqldb.jar with all default settings. 1. 18 UNIX Quick Start HSQLDB_OWNER, since that user will own the database instance files and processes. If the account doesn't exist, then create it. On all system-5 UNIXes and most hybrids (including Linux), you can run (as root) something like Since the value of the first database (server.database.0) begins with file:, the database instance will be persisted to a set of files in the specified directory with names beginning with the specified name. You can read about how to specify other database instances of various types, and how to make settings for the listen port and many other things, in the Advanced Topics chapter. 3. Set and export the environmental variable CLASSPATH to the value of HSQLDB_HOME (as de- scribed above) plus "/lib/hsqldb.jar", like This will start the Server process in the background, and will create your new database instance "db0". Continue on when you see the message containing "HSQLDB server... is online". 19 UNIX Quick Start ########################################################################### #. 20 UNIX Quick Start . If you get a prompt, then all is well. If security is of any concern to you at all, then you should change the privileged password in the database. Use the command SET PASSWORD command to change SA's If you changed the SA password, then you need to fix the password in the sqltool.rc file accord- ingly. ac- count has read access to the hsqldb.jar file and to an sqltool.rc file. 
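For reference, an sqltool.rc entry of the kind referred to above has this shape (the urlid and password are illustrative; see the SqlTool chapter for the exact format and file location):

```
urlid db0
url jdbc:hsqldb:hsql://localhost
username sa
password secret
```

If you change the SA password in the database, update the password line here to match.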
See the SqlTool chapter about where to put sqltool.rc, how to execute sql files, and other SqlTool features. 21 UNIX Quick Start Connect to the database as SA (or any other Administrative user) and run CREATE USER to create new accounts for your database instance. HSQLDB accounts are database-instance-specific, not Server- specific. There are two classes of database accounts, Admin accounts and non-Admin accounts. Admins have privileges to do anything, non-Admins may be granted some privileges, but may never create or own database objects. When you first create a hsqldb database, it has only one database user-- SA, an Admin account, with no password set. You should set a password (as described above). You can create as many additional Admin users as you wish. Each Admin user has a unique user name (and optional password), but these accounts are otherwise indistinguishable. These accounts are created by appending the keyword "ADMIN" to the CREATE USER command. If you create a user without the ADMIN tag, it will be a Non-Admin account. These users can not create or own objects, and, by default, they can't use any database objects. The user will then be able to per- form operations which have been granted to the pseudo-user PUBLIC. To give the user additional priv- ileges (even the privilege to read data), an Admin user must grant those rights to the user (or to PUB- LIC).). Shutdown Do a clean database shutdown when you are finished with the database instance. You need to connect up as SA or some other Admin user, of course. With SqlTool, you can run You don't have to worry about stopping the Server because it shuts down automatically when all served database instances are shut down.. After you have the init script set up, root can use it anytime to start or stop HSQLDB. (I.e., not just at system bootup or shutdown). 22 UNIX Quick Start. 1. direct- ory for the last). 2. Look at the init script and see what the value of CFGFILE is for your UNIX platform. 
You need to copy the sample config file HSQLDB_HOME/ src/org/hsqldb/sample/sample-hsqldb.cfg to that location. Edit the config file ac- cording to the instructions in it. 23 UNIX Quick Start 24 UNIX Quick Start 3.. 4. Edit your server.properties file. For every server.database.X that you have defined, set a property of name server.urlid.X to the urlid for an Administrative user for that database instance. server.database.0= server.urlid.0=localhostdb1 Warning. urlid localhostdb1 url jdbc:hsqldb:hsql://localhost username sa password secret Just run /path/to/hsqldb as root to see the arguments you may use. Notice that you can run /path/to/hsqldb status Re-run the script with each of the possible arguments to really test it good. If anything doesn't work right, then see the Troubleshooting the Init Script section. 6. Tell your OS to run the init script upon system startup and shutdown. If you are using a UNIX vari- ant that has /etc/rc.conf or /etc/rc.conf.local (like BSD variants and Gentoo), you 25 UNIX Quick Start Star- t. If your database really is not starting, then verify that you can su to the database owner account and start the database. If these. 26 Chapter 4. Advanced Topics/11/24 23:06:35 $ Purpose Many questions repeatedly asked in Forums and mailing lists are answered in this guide. If you want to use HSQLDB with your application, you should read this guide. This document covers system related issues. For issues related to SQL see the SQL Issues chapter. Connections The normal method of accessing an HSQLDB database is via the JDBC Connection interface. An intro- duction to different methods of providing database services and accessing them can be found in the SQL Issues chapter. Details and examples of how to connect via JDBC are provided in our JavaDoc for jdb- cConnection [../src/org/hsqldb/jdbc/jdbcConnection.html]. 
Version 1.7.2 introduces a uniform method of distinguishing between different types of connection, alongside new capabilities to provide access to multiple databases. The common driver identifier is jdbc:hsqldb: followed by a protocol identifier (mem: file: res: hsql: http: hsqls: https:) then followed by host and port identifiers in the case of servers, then followed by database identifier.

A lowercase, single-word identifier creates the in-memory database when the first connection is made. Subsequent use of the same Connection URL connects to the existing DB. The old form for the URL, jdbc:hsqldb:. creates or connects to the same database as the new form for the URL, jdbc:hsqldb:mem:.

jdbc:hsqldb:file:   (host: not available)   examples: mydb, /opt/db/accounts, C:/data/mydb

The file path specifies the database file. In the above examples the first one refers to a set of mydb.* files in the directory where the java command for running the application was issued. The second and third examples refer to absolute paths on the host machine.

jdbc:hsqldb:res:   (host: not available)   example: /adirectory/dbname

Database files can be loaded from one of the jars specified as part of the Java command the same way as resource files are accessed in Java programs. The /adirectory above stands for a directory in one of the jars.

The host and port specify the IP address or host name of the server and an optional port number. The database to connect to is specified by an alias. This alias is a lowercase string defined in the server.properties file to refer to an actual database on the file system of the server or a transient, in-memory database on the server. The following example lines in server.properties or webserver.properties define the database aliases listed above and accessible to clients to refer to different file and in-memory databases.
database.0=file:/opt/db/accounts
dbname.0=an_alias

database.1=file:/opt/db/mydb
dbname.1=enrollments

database.2=mem:adatabase
dbname.2=quickdb

The old form for the server URL, e.g., jdbc:hsqldb:hsql://localhost connects to the same database as the new form for the URL, jdbc:hsqldb:hsql://localhost/ where the alias is a zero length string. In the example below, the database files lists.* in the /home/dbmaster/ directory are associated with the empty alias:

database.3=/home/dbmaster/lists
dbname.3=

Connection properties

Each new JDBC Connection to a database can specify connection properties. The properties user and password are always required. In 1.7.2 the following optional properties can also be used.

Connection properties are specified either by establishing the connection via the DriverManager.getConnection(String url, Properties info) method call, or the property can be appended to the full Connection URL.

get_column_name (default: true): column name in ResultSet

This property is used for compatibility with other JDBC driver implementations. When true (the default), ResultSet.getColumnName(int c) returns the underlying column name.
When false, the above method returns the same value as ResultSet.getColumnLabel(int column). Example below:

jdbc:hsqldb:hsql://localhost/enrollments;get_column_name=false

When a ResultSet is used inside a user-defined stored procedure, the default, true, is always used for this property.

ifexists (default: false): connect only if database already exists

Has an effect only with mem: and file: database. When true, will not create a new database if one does not already exist for the URL. When false (the default), a new mem: or file: database will be created if it does not exist.

Setting the property to true is useful when troubleshooting as no database is created if the URL is malformed. Example below:

jdbc:hsqldb:file:enrollments;ifexists=true

In addition, when a connection to an in-process database creates a new database, or opens an existing database (i.e. it is the first connection made to the database by the application), all the user-defined database properties can be specified as URL properties. This is used to specify properties to enforce more strict SQL adherence, or to change cache_scale or similar properties before the database files are created.

Properties Files

HSQLDB relies on a set of properties files for different settings. Version 1.7.0 streamlines property naming and introduces a number of new properties (in this document, all references to versions 1.7.0 also apply to versions 1.7.1 and 1.7.2 unless stated otherwise). This process will continue with future versions and the properties will be used in a hierarchical manner.

In all properties files, values are case-sensitive. All values apart from names of files or pages are required in lowercase (e.g. server.silent=FALSE will have no effect, but server.silent=false will work).

The properties files and the settings stored in them are as follows:

Properties files for running the servers are not created automatically. You should create your own files that contain server.property=value pairs for each property. The properties file for each database is generated by the database engine. This file can be edited after closing the database. In 1.7.2, some property values can be changed via SQL commands.

In 1.7.2, each server can serve up to 10 different databases simultaneously. The server.database.0 property;

All the above values can be specified on the command line to start the server by omitting the server. prefix.

Note (Upgrading): If you have existing custom properties files, change the values to the new naming convention. Note the use of digits at the end of server.database.n and server.dbname.n properties.

In version 1.7.2 a new SQL command allows some user-accessible database properties to be modified as follows:

SET PROPERTY "property_name" property_value

Properties that can be modified via SET PROPERTY are indicated in the table below.
Other properties are indicated as PROPERTIES FILE ONLY and can be modified only by editing the .properties file after a shutdown and before a restart. The *.properties file can be edited after closing the database. Only the user-defined values listed below should ever be modified. Changing any other value will result in unexpected malfunction in database operations. Most of these values have been introduced for the new features since 1.7.0 and are listed below with their default values in different contexts:

When true, the database cannot be modified in use. This setting can be changed to yes if the database is to be opened from a CD. Prior to changing this setting, the database should be closed with the SHUTDOWN COMPACT command to ensure consistency and compactness of the data. (PROPERTIES FILE ONLY)

hsqldb.files_readonly (default: false): database files will not be written to

When true, data in MEMORY tables can be modified and new MEMORY tables can be added. However, these changes are not saved when the database is shutdown. CACHED and TEXT tables are always readonly when this setting is true. (PROPERTIES FILE ONLY)

hsqldb.first_identity (default: 0): first identity value for a new table

The first value assigned automatically to the identity column of new tables. (SET PROPERTY)

sql.enforce_size (default: false): trimming and padding string columns

When true, all CHARACTER and VARCHAR values that are in a row affected by an INSERT INTO or UPDATE statement are trimmed to the size specified in the SQL table definition. Also all char strings that are shorter than the specified size are padded with spaces. When false (default), stores the exact string that is inserted. (SET PROPERTY)

sql.enforce_strict_size (default: false): size enforcement and padding string columns

Conforms to SQL standards. When true, all CHARACTER and VARCHAR values that are in a row affected by an INSERT INTO or UPDATE statement are checked against the size specified in the SQL table definition.
An exception is thrown if the value is too long. Also all char strings that are shorter than the specified size are padded with spaces. When false (default), stores the exact string that is inserted. (SET PROPERTY)

sql.compare_in_locale (default: false): locale used for sorting

CHARACTER and VARCHAR columns are by default sorted according to POSIX standards. Setting the value to true will result in sorting in the character set of the current JRE locale. Changing this value for an existing database that contains cached tables will break the indexing and result in inconsistent operation. To avoid this, first change the value in the properties file, then open the database and issue the SHUTDOWN COMPACT command to recreate all the indexes. If set true, this setting affects all the databases in the JVM. (PROPERTIES FILE ONLY)

hsqldb.cache_scale (default: 14): memory cache exponent

Indicates the maximum number of rows of cached tables that are held in memory, calculated as 3*(2**value) (three multiplied by (two to the power value)). The default results in up to 3*16384 rows from all cached tables being held in memory at any time. The value can range between 8-18. If the value is set via SET PROPERTY then it becomes effective after the next database SHUTDOWN or CHECKPOINT. (SET PROPERTY)

hsqldb.cache_size_scale (default: 10): memory cache exponent

Indicates the average size of each row in the memory cache used with cached tables, calculated as 2**value (two to the power value). This result value is multiplied by the maximum number of rows
If the value is set via SET PROPERTY then it becomes effective after the next database SHUTDOWN or CHECKPOINT. (SET PROPERTY) hsqldb.log_size 200 size of log when checkpoint is performed The value is the size in megabytes that the .log file can reach before an automatic checkpoint occurs. A checkpoint and rewrites the .script file and clears the .log file. The value can be changed via the SET LOGSIZE nnn SQL command. runtime.gc_interval 0 forced garbage collection This setting forces garbage collection each time a set number of result set row or cache row objects are created. The default, "0" means no garbage collection is forced by the program. This should not be set when the database engine is acting as a server inside an exclusive JVM. The set- ting can be useful when the database is used in-process with the application with some Java Runtime Environments (JRE's). Some JRE's increase the size of the memory heap before doing any automatic garbage collection. This setting would prevent any unnecessary enlargement of the heap. Typical val- ues for this setting would probably be between 10,000 to 100,000. (PROPERTIES FILE ONLY) hsqldb.nio_data_file true use of nio access methods for the .data file When HSQLDB is compiled and run in Java 1.4, setting this property to false will avoid the use of nio access methods, resulting in reduced speed. If the data file is larger than 512MB when it is first opened, nio access methods are not used. Also, if the file gets larger than the amount of available com- puter memory that needs to be allocated for nio access, non-nio access methods are used. (SET PROPERTY). If used before defining any CACHED table, it applies to the current session, oth- erwise it comes to effect after a SHUTDOWN and restart or CHECKPOINT. textdb.* 0 default properties for new text tables Properties that override the database engine defaults for newly created text tables. 
Settings in the text table SET <tablename> SOURCE <source string> command override both the engine defaults and the database properties defaults. Individual textdb.* properties are listed in the Text Tables chapter. (SET PROPERTY)

When a connection to an in-process database creates a new database, or opens an existing database (i.e. it is the first connection made to the database by the application), all the user-defined database properties listed in this section can be specified as URL properties.

Note (Upgrading): From 1.7.0, the location of the database files can no longer be overridden by paths defined in the properties file. All files belonging to a database should reside in the same directory.

The JDBC interface has been significantly improved in 1.7.2. There is almost full support for JDBC2. Under JDK 1.4, there is also some support for JDBC3 methods. ResultSetMetaData methods are now fully supported, as are ParameterMetaData and Savepoint (JDBC3). When upgrading from previous versions, certain issues may arise as several JDBC methods that previously returned incorrect values or were not supported now return correct values. All changes have been documented in the Javadoc for the jdbcXXX classes.

Connection pooling software can be used to connect to the database but it is not generally necessary. With other database engines, connection pools are used for reasons that may not apply to HSQLDB.

• To allow new queries to be performed while a time-consuming query is being performed in the background. This is not possible with HSQLDB as it blocks while performing the first query and deals with the next query once it has finished it.

• To limit the maximum number of simultaneous connections to the database for performance reasons. With HSQLDB this can be useful only if your application is designed in a way that opens and closes connections for each small task.

• To control transactions in a multi-threaded application. This can be useful with HSQLDB as well.
For example, in a web application, a transaction may involve some processing between the queries or user action across web pages. A separate connection should be used for each session so that the work can be committed when completed or rolled back otherwise.

An application that is not both multi-threaded and transactional, such as an application for recording user login and logout actions, does not need more than one connection. The connection can stay open indefinitely and reopened only when it is dropped due to network problems.

When using an in-process database with versions prior to 1.7.2 the application program had to keep at least one connection to the database open, otherwise the database would have been closed and further attempts to create connections could fail. This is not necessary in 1.7.2, which does not automatically close an in-process database that is opened by establishing a connection. An explicit SHUTDOWN command, with or without an argument, is required to close the database.

When using a server database (and to some extent, an in-process database), care must be taken to avoid creating and dropping JDBC Connections too frequently. Failure to observe this will result in unsuccessful connection attempts when the application is under heavy load.

Memory and Disk Use

Each row object has slots for int or reference variables (10 slots in previous versions). It contains an array of objects for the fields in the row. Each field is an object such as Integer, Long, String, etc. In addition each index on the table adds a node object to the row. Each node object has 6 slots for int or reference variables (12 slots in previous versions). As a result, a table with just one column of type INTEGER will have four objects per row, with a total of 10 slots of 4 bytes each - currently taking up 80 bytes per row. Beyond this, each extra column in the table adds at least a few bytes to the size of each row.
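The arithmetic behind the hsqldb.cache_scale and hsqldb.cache_size_scale properties described earlier in this chapter can be checked with a few lines of code. This is a sketch of the documented formulas only, not of HSQLDB internals:

```java
// Sketch of the documented cache-sizing formulas (not HSQLDB's own code):
//   max cached rows = 3 * 2^cache_scale       (default cache_scale = 14)
//   bytes per row   = 2^cache_size_scale      (default cache_size_scale = 10)
public class CacheMath {
    static long maxCachedRows(int cacheScale) {
        return 3L * (1L << cacheScale);
    }

    static long cacheBytes(int cacheScale, int cacheSizeScale) {
        return maxCachedRows(cacheScale) * (1L << cacheSizeScale);
    }

    public static void main(String[] args) {
        // Defaults: 3 * 2^14 = 49152 rows; 49152 * 1024 = 50331648 bytes,
        // i.e. the "approximately 50MB" quoted in the text.
        System.out.println(maxCachedRows(14) + " rows, "
                + cacheBytes(14, 10) + " bytes");
    }
}
```

Raising either exponent by one doubles the corresponding factor, which is why the permitted ranges (8-18 and 6-20) span such a wide spread of memory budgets.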
The memory used for a result set row has fewer overheads (fewer slots). It may not be possible to perform deletes or updates involving very large numbers of rows of CACHED tables. Such operations should be performed in smaller sets.

When transactions support is enabled with SET AUTOCOMMIT OFF, lists of all insert, delete or update operations are stored in memory so that they can be undone when ROLLBACK is issued. Transactions that span hundreds of modifications to data will take up a lot of memory until the next COMMIT or ROLLBACK clears the list.

1.7.2 uses a fast cache for immutable objects such as Integer or String that are stored in the database. In most circumstances, this reduces the memory footprint still further as fewer copies of the most frequently-used objects are kept in memory.

In 1.7.2, an additional property, hsqldb.cache_size_scale, is introduced. This property, combined with the hsqldb.cache_scale property, puts a limit in bytes on the total size of rows that are cached.

Upgrading Databases

Once a database is upgraded to 1.7.2, it can no longer be used with Hypersonic or HSQLDB 1.6.x, 1.7.0 or 1.7.1. There may be some potential problems in the upgrade which should be resolved by editing the .script file:

• Version 1.7.2 does not accept duplicate names for table columns.

• Version 1.7.2 does not create the same type of index for foreign keys as previous versions.

2. Issue the SCRIPT command, for example SCRIPT 'newversion.script', to create a script file containing a copy of the database.

3. You can now edit the newversion.script file to change any table and index definition so long as it is consistent with the data. Use the guidelines in the next section (Manual Changes to the .script File). Use a programming editor that is capable of handling very large files and does not wrap long lines of text.

4. Use the 1.7.2 version of DatabaseManager to create a new database, in this example 'newversion' in a different directory.

6.
Copy the newversion.script file from step 2 over the file of the same name for the new database created in 4.

8. If there is any inconsistency in the data, the script line number is reported on the console and the opening process is aborted. Edit and correct any problems in the newversion.script before attempting to open again.

Manual Changes to the .script File

Index and row data for CACHED tables is stored in the *.data file. Because of this, in 1.7.2, a new command, SHUTDOWN SCRIPT, has been introduced to save all the CACHED table data in the .script file and delete the .data and *.backup files. After issuing this command, you can make changes to the *.script file as listed below. This procedure can also be used for adding and removing indexes or constraints to CACHED tables when the size of the *.data file is over 1GB and the normal SQL commands do not work due to the *.data file growing beyond 2GB.

The following changes can be applied so long as they do not affect the integrity of existing data.

• CREATE UNIQUE INDEX ... to CREATE INDEX ... and vice versa

A unique index can always be converted into a normal index. A non-unique index can only be converted into a unique index if the table data for the column(s) is unique in each row.

• NOT NULL

A not-null constraint can always be removed. It can only be added if the table data for the column has no null values.

• PRIMARY KEY

A primary key constraint can be removed or added. It cannot be removed if there is a foreign key referencing the column(s).

• COLUMN TYPES

Some changes to column types are possible. For example an INTEGER column can be changed to BIGINT, or DATE, TIME and TIMESTAMP columns can be changed to VARCHAR.

Any other changes to data structures should be made only through the supported ALTER commands.

After completing the changes and saving the modified *.script file, you can open the database as normal.

Backing Up Databases

The data for each database consists of up to 5 files in the same directory.
The endings are *.properties, *.script, *.data, *.backup and *.log (a file with the *.lck ending is used for controlling access to the database). The significant files are *.properties, *.script and *.backup. Normal backup methods, such as archiving the files in a compressed bundle, can be used.

Chapter 5. Text Tables

Text Tables as a Standard Feature of Hsqldb

Bob Preston, HSQLDB Development Group
Fred Toussi, HSQLDB Development Group <ft@cluedup.com>

Copyright 2002-2004. $Date: 2004/08/12 20:43:20 $

Text tables support the normal SQL operations, including SELECT, INSERT, UPDATE and DELETE. Indexes and unique constraints can be set up, and foreign keys are supported.

1. We aimed to finalise the DDL for Text Tables so that future releases of HSQLDB use the same DDL scripts.

2. We aimed to support Text Tables as GLOBAL TEMPORARY or GLOBAL BASE tables in the SQL domain.

The Implementation

Definition of Tables

Text Tables are defined similarly to conventional tables with the added TEXT keyword:

In addition, a SET command specifies the file and the separator character that the Text table uses:

Text Tables cannot be created in memory-only databases (databases that have no script file).

• A Temporary Text table has the scope and the lifetime of the SQL session (a JDBC Connection).

• Reassigning a Text Table definition to a new file has implications in the following areas:

3. Constraints, including foreign keys referencing this table, are kept intact. It is the responsibility of the administrator to ensure their integrity.

From version 1.7.2 the new source file is scanned and indexes are built when it is assigned to the table. At this point any violation of NOT NULL, UNIQUE or PRIMARY KEY constraints are caught and the assignment is aborted. However, foreign key constraints are not checked at the time of assignment or reassignment of the source file.

• Empty fields are treated as NULL. These are fields where there is nothing or just spaces between the separators.

Configuration

The default field separator is a comma (,).
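Putting the two statements described above together, a definition might look like the following sketch. The table, column, and file names are illustrative; the ;fs= option in the source string sets the field separator discussed in this section:

```sql
-- A sketch only: a text table bound to a pipe-separated source file.
CREATE TEXT TABLE mytable (id INTEGER, name VARCHAR(40));
SET TABLE mytable SOURCE "mytable.csv;fs=|";
```

Because the source string carries both the file name and its options, changing the separator later is a matter of issuing another SET TABLE ... SOURCE statement.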
A different field separator can be specified within the SET TABLE SOURCE statement. For example, to change the field separator for the table mytable to a vertical bar, place the following in the SET TABLE SOURCE statement:

The following example shows how to change the default separator to the pipe (|), VARCHAR separator to the period (.) and the LONGVARCHAR separator to the tilde (~). Place the following within the SET TABLE SOURCE statement:

Special indicators for separators:

\semi     semicolon
\quote    quote
\apos     apostrophe
\r        carriage return
\t        tab
\\        backslash

Furthermore, HSQLDB provides csv file support with three additional boolean options: ignore_first, quoted and all_quoted (1.7.2). These options may be specified within the SET TABLE SOURCE statement:

When the default options all_quoted=false and quoted=true are in force, fields that are written to the source file are quoted only if they contain the separator or the quote character.

While reading an existing data source file, the program treats each individual field separately. The source file can be read in descending (reverse) order by placing the keyword "DESC" at the end of the SET TABLE SOURCE statement:

• File locations are restricted to below the directory that contains the database, unless the textdb.allow_full_path property is set true in the database properties file.

• Blank lines are allowed anywhere in the text file, and are ignored.

• The default location for the source file is the directory that contains the database and the file name is based on the table name. The table name is converted into the file name by replacing all the non-alphanumeric characters with the underscore character, conversion into lowercase, and adding the ".csv" suffix.

• From version 1.7.2 it is possible to define a primary key or identity column for text tables.

• When a table source file is used with the ignore_first=true option, the first, ignored line is replaced with a blank line after a SHUTDOWN COMPACT.
• An existing table source file may include CHARACTER fields that do not begin with the quote character but contain instances of the quote character. These fields are read as literal strings.

• Inserts or updates of CHARACTER type field values are not allowed with any string that contains the linefeed or the carriage return character.

The global properties for text tables are:

• textdb.fs
• textdb.lvs
• textdb.quoted
• textdb.all_quoted
• textdb.ignore_first
• textdb.encoding
• textdb.cache_scale
• textdb.allow_full_path

Chapter 6. TLS

TLS Support (a.k.a. SSL)

Blaine Simpson, HSQLDB Development Group <blaine.simpson@admc.com>

$Date: 2004/06/08 13:04:25 $

The instructions in this document are liable to change at any time. In particular, we will be changing the method to supply the server-side certificate password.

Requirements

Hsqldb TLS Support Requirements

• Sun Java 2.x and up. (This is probably possible with IBM's Java, but I don't think anybody has attempted to run HSQLDB with TLS under IBM's Java, and I'm sure that nobody in the HSQLDB Development Group has documented how to set up the environment).

• You need a HSQLDB jar file that was built with JSSE present. If you got your HSQLDB 1.7.2 distribution from us, you are all set, because we build with Java 1.4 (which contains JSSE). If you build your own jar file with Java 1.3, make sure to install JSSE first.

Client-Side

Just use one of the following protocol prefixes.

• jdbc:hsqldb:hsqls://
• jdbc:hsqldb:https://

At this time, the latter will only work for clients running with Java 1.4.

If the server you wish to connect to is using a certificate approved by your default trust keystores, you don't need to do anything special. Otherwise, I'll show an example of how to get what you need from the server-side JKS keystore.

You may already have an X509 cert for your server. If you have a server keystore, then you can generate an X509 cert like this.
The following command will add the cert to an existing keystore, or create a new keystore if client.store doesn't exist.

Set the system property javax.net.ssl.trustStore every time that you run your client program. For example:

This example runs the program SqlTool. SqlTool has built-in TLS support, however, so, for SqlTool you can set truststore on a per-urlid basis in the SqlTool configuration file.

N.b. the hostname in your database URL must match the Common Name of the server's certificate exactly. That means that if a site certificate is admc.com, you can not use jdbc:hsqldb:hsqls://localhost or jdbc:hsqldb:hsqls:// to connect to it.

Server-Side

Get yourself a JKS keystore containing a private key. Then set the system property javax.net.ssl.keyStore to the path to that file, and javax.net.ssl.keyStorePassword to the password of the keystore (and to the private key -- they have to be the same).

(This is a single command that I have broken into 2 lines using my shell's \ line-continuation feature. In this example, I'm using a server.properties file so that I don't need to give arguments to specify database instances or the server endpoint).

Caution: Specifying a password on the command-line is definitely not secure. It's really only appropriate when untrusted users do not have any access to your computer. If there is any user demand, we will have a more secure way to supply the password before long.

JSSE

If you are running Java 1.4.x, then you are all set. Java 1.1.x users, you are on your own (Sun does not provide a JSSE that will work with 1.1.x). Java 1.2.x and 1.3.x users continue...

Pretty painless.

CA-Signed Cert

openssl pkcs8 -topk8 -outform DER -in Xpvk.pem -inform PEM -out Xpvk.pk8 -nocrypt
openssl x509 -in Xcert.pem -out Xcert.der -outform DER
java DERImport new.keystore NEWALIAS Xpvk.pk8 Xcert.der

Important: Make sure to set the password of the key exactly the same as the password for the keystore!

You need the program DERImport.class of course.
Do some internet searches to find DERImport.java or DERImport.class and download it. If DERImport has become difficult to obtain, I can write a program to do the same thing -- just let me know.

Non-CA-Signed Cert

Run man keytool or see the Creating a Keystore section of JSSERefGuide.html [].

Chapter 7. SqlTool

SqlTool Manual

Blaine Simpson, HSQLDB Development Group <blaine.simpson@admc.com>

$Date: 2004/10/20 01:46:51 $

Purpose

This document explains how to use SqlTool.

Some of the examples below use quoting which works exactly as-is for any normal UNIX shell. I have not yet tested these commands on Windows, and I doubt whether the quoting will work just like this (though it is possible). SqlTool is still a very useful tool even if you have no quoting capability at all.

This document is now updated for version 1.37 of SqlTool and 1.86 of SqlFile (the latter is the class which does most of the work for SqlTool). The startup banner will report both versions when you run SqlTool interactively. I expect this version of this document to accurately describe SqlTool for some unknown number of versions into the future.

1. Copy the file sqltool.rc from the directory src/org/hsqldb/sample of your HSQLDB distribution to your home directory and secure access to it if your home directory is accessible to anybody else. Edit the file if you need to change the target Server URL, username, password, character set, JDBC driver, or TLS trust store as documented in the Authentication Setup section.

2. Find out where your hsqldb.jar file resides. It typically resides at HSQLDB_HOME/lib/hsqldb.jar where HSQLDB_HOME is the base directory of your HSQLDB software installation.

3. Run
Assuming that you set up your SqlTool configuration file at the default location and you want to use the HSQLDB JDBC driver, you will want to run something like or where mem is an urlid, and the following arguments are paths to text SQL files. For the filepaths, you can use whatever wildcards your operating system shell supports. The urlid mem in these commands is a key into your SqlTool configuration file, as explained in the loc- al HSQLDB Server. At the end of this section, I explain how you can load some sample data to play with, if you want to. Important SqlTool does not commit DML changes by default. This leaves it to the user's disgression whether to commit or rollback their modifications. Remember to either run the command com- mit; before quitting SqlTool, or use the --autoCommit command-line switch. Note that the --sql switch runs the given commands in addition to standard input and/or specified SQL files. If you want to run only SQL command(s) which you give on the command-line, then use the --sql switch and the --noinput accom- plished by either using the "--driver" switch, or setting "driver" in your config file. The Authentication Setup section. explains the second method. Here's an example of the first method (after you have set the classpath appropriately). Tip 48 SqlTool want some sample database objects and data to play with, execute the sampledata.sql SQL file. sampledata.sql resides in the src/org/hsqldb/sample directory of your HSQLDB dis- tribution. Run it like this from an SqlTool session \i HSQLDB_HOME/src/org/hsqldb/sample/sampledata.sql Authentication Setup Authentication setup is accomplished by creating a text SqlTool configuration file. In this section, when I say configuration or config file, I mean an SqlTool configuration file (aka SqlTool RC file). 
You can put this file anywhere you want to, and specify the location to SqlTool by using the "--rcfile" argument. If there is no reason to not use the default location (and there are situations where you would not want to), then use the default location and you won't have to give "--rcfile" arguments to SqlTool. The default location is sqltool.rc in your home directory. If you have any doubt about where that is, then just run SqlTool with a phony urlid and it will tell you where it expects the configuration file to be.

urlid web
url jdbc:hsqldb:hsql://localhost
username web
password webspassword

These four settings are required for every urlid. (There are optional settings also, which are described a couple paragraphs down). You can have as many blank lines and comments like

# This comment

in the file as you like. The whole point is that the urlid that you give in your SqlTool command must match a urlid in your configuration file.

Important: Use whatever facilities are at your disposal to protect your configuration file. It should be readable, both locally and remotely, only to users who need to use the records in it to run SqlTool.

The optional settings below apply to that urlid.

charset      Sets the encoding character set for input. See the Character Encoding section of the Non-Interactive section. You can, alternatively, set this for one SqlTool invocation by setting the system property sqltool.charset. Defaults to US-ASCII.

driver       Sets the JDBC driver class name. You can, alternatively, set this for one SqlTool invocation by using the SqlTool switch --driver. Defaults to org.hsqldb.jdbcDriver.

truststore   TLS trust keystore store file path as documented in the TLS chapter.
You usually only need to set this if the server is using a non-publicly-certified certificate (like a self-signed, self-ca'd cert).

Property and SqlTool command-line switches override settings made in the configuration file.

Interactive

Do read the The Bare Minimum section before you read this section. You run SqlTool interactively by specifying no SQL filepaths on the SqlTool command line.

Procedure 7.2. What happens when SqlTool is run interactively (using all default settings)

1. SqlTool starts up and connects to the specified database, using your SqlTool configuration file (as explained in the Authentication Setup section).

2. SQL file auto.sql in your home directory is executed (if there is one),

3. SqlTool displays a banner showing the SqlTool and SqlFile version numbers and describes the different command types that you can give, as well as commands to list all of the specific commands available to you.

You exit your session by using the "\q" special command or ending input (like with Ctrl-D or Ctrl-Z).

Important: Every command (regardless of type) and comment must begin at the beginning of a line (or immediately after a comment ends with "*/"). You can't nest commands or comments. You can only start new commands (and comments) after the preceding statement has been terminated. (Remember that if you're running SqlTool interactively, you can terminate an SQL statement without executing it by entering a blank line). (Special Commands, Buffer Commands and PL Commands always consist of just one line. Any of these commands or comments may be preceded by space characters.)

When you are typing into SqlTool, you are always typing part of the current command. The buffer is the last SQL command. If you're typing an SQL command, then the previous SQL command will be in the buffer, not the one you are currently typing. The current command could be any type of command, but only SQL commands get moved to the buffer when they are completed.
When you type command-editing commands, the current command is the editing command (like ":s/tbl/table/"), the result of which is to modify the SQL command in the buffer (which can thereafter be executed). The ":a" command (with no argument) is special in that it takes a copy of the SQL command in the buffer and makes that the current command, leaving you in a state where you are appending to that now current command. The buffer is the zeroth item of the SQL command history.

Command types

SQL Statement: If you terminate an SQL Statement with ";", it is executed against the SQL database and the command will go into the command buffer and SQL command history for editing or viewing later on. In the latter case (you end an SQL Statement with a blank line), the command will go to the buffer and SQL history, but will not be executed (but you can execute it later from the buffer). (Blank lines are only interpreted this way when SqlTool is run interactively. In SQL files, blank lines inside of SQL statements remain part of the SQL statement). As a result of these termination rules, whenever you are entering text that is not a Special Command, Buffer Command, or PL Command, you are always appending lines to an SQL Statement. (In the case of the first line, you will be appending to an empty SQL statement. I.e. you will be starting a new SQL Statement).

Special Command: Run the command "\?" to list the Special Commands. All of the Special Commands begin with "\". I'll describe some of the most useful Special Commands below.

Buffer Command: Run the command ":?" to list the Buffer Commands. All of the Buffer Commands begin with ":". Buffer commands operate upon the command "buffer", so that you can edit and/or (re-)execute previously entered commands.

PL Command: Procedural Language commands. Run the command "* ?" to list the PL Commands. All of the PL Commands begin with "* ".
PL commands are for setting and using scripting variables and conditional and flow control statements like * if and * while. Using variables as command aliases (aka macros) can be a real convenience for nearly all users, so this feature will be discussed briefly in this section. More detailed explanation of PL variables and the other PL features, with examples, are covered in the Procedural Language section.

\q
    quit

\dt [filter_substring]
\dv [filter_substring]
\d* [filter_substring]
\ds [filter_substring]
\da [filter_substring]
    Lists available table-like objects of the given type.

    • t: non-system Tables
    • v: Views
    • s: System tables

    If your database supports schemas, then the schema name will also be listed. If you supply an optional filter substring, then only items which contain the given substring (in the object name or schema name) will be listed. The substring test is case-insensitive.

    Tip: You can use the \d commands as a data dictionary while writing SQL commands, without having to scroll.

\s
    Shows the SQL command history. The SQL command history will show a number (a negative number) for each SQL Statement that has made it into the buffer so far (by either executing or entering a blank line). You can then use the "\-" command (which is described next) to retrieve commands from the SQL history to work with. To list just the very last command, you would use the ":l" buffer command to list the buffer contents, instead of this command.

\-[3]
    Enter "\-" followed by the command number from SQL history, like "\-3". That command will be written to the buffer so that you can execute it or edit it using buffer commands.

You can invoke non-interactive and graphical interactive programs, but not command-line interactive programs. You may want to copy a command to the buffer (which is the 0th element of the history) with a command like "\-4" before you give the \w command.

Buffer Commands

:;
    Executes the SQL statement in the current buffer against the database.

:l
    (This is a lower case L).
    List the current contents of the buffer.

:a
    Enter append mode with the contents of the buffer as the current SQL Statement. Things will be exactly as if you physically re-typed the command that is in the buffer. Whatever line you type next will be appended to the SQL Statement. You can execute the command by terminating a line with ";", or send it back to the buffer by entering a blank line. You can, optionally, put a string after the :a, in which case this text will be appended and you will remain in append mode. (Unless the text ends with ';', in which case the resultant statement will be executed immediately). Note that if you do put text after the "a", exactly what you type immediately after "a" will be appended. If your buffer contains SELECT x FROM mytab and you run :ale, the resultant command will be SELECT x FROM mytable. If your buffer contains SELECT x FROM mytab and you run :a ORDER BY y, the resultant command will be SELECT x FROM mytab ORDER BY y. Notice that in the latter case the append text begins with a space character.

:s/from string/to string/switches
    This is the primary command for SqlTool command editing; it operates upon the current buffer. The "to string" and the "switches" are both optional. All occurrences of "$" in the from string and the to string are treated as line breaks. For example, a from string of "*$FROM mytable" would actually look for occurrences of

        *
        FROM mytable

    :s/this// would remove the first occurrence of "this". (With the "g" substitution mode switch, as explained below, it would remove all occurrences of "this").

    • Use "i" to make the searches for from string case insensitive.

Essential PL Command

* VARNAME = value
    Set the value of a variable. If the variable doesn't exist yet, it will be created. If you set a variable to an SQL statement (without the terminating ";") you can then use it as an alias like *VARNAME, as shown in this example.
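As an illustrative sketch (the variable name and table here are invented for illustration, not taken from the guide's own example), an alias definition and its use might look like:

    * q = SELECT COUNT(*) FROM customer
    *q;

The first line stores the SELECT statement (without the terminating ";") in the variable q; the second line expands *q to that statement and the ";" executes it.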
If you put variable definitions into the SQL file auto.sql in your home directory, those aliases/variables will always be available for interactive use. See the Procedural Language section below for information on using variables in other ways, and information on the other PL commands and features.

SQL History

The SQL history shown by the \s command, and used by other commands, is truncated to 20 entries, since the utility comes from being able to quickly view the history list. You can change the history length by setting the system property sqltool.historyLength to an integer. The SQL history list explicitly does not contain Special, Buffer, or PL commands. It only contains SQL commands, valid or invalid, successful or unsuccessful. The reason for including bad SQL commands is so that you can recall and edit them if you want to. The same applies to the editing buffer (which is element 0 of the history).

Raw mode

If for some reason you want SqlTool to process your commands as if it were reading an SQL file (i.e., no startup banner, no command prompts, no Buffer/history commands, aborting upon failure by default, allow blank lines with SQL statements), then specify a SQL filepath of "-". This is very useful in (at least) two situations. This is how you would pipe input into SqlTool, which is extremely useful in shell scripts where the SQL code is generated dynamically. This is also a good way to test what SqlTool will do when it encounters a specific command in a SQL file. You emulate SQL file execution while giving SqlTool commands interactively.

Non-Interactive

Read the Interactive section if you have not already, because much of what is in this section builds upon that. Even if your plans are to run SqlTool non-interactively, you should really learn to run it interactively because it's such a powerful debugging tool, and you can use it to prototype SQL scripts.
Important: Since SqlTool executes SQL statements only when a statement line is terminated with ";", you can only execute more than one SQL statement this way if your OS shell has some mechanism to pass linebreaks in arguments through to the target program. With any Bourne-compatible shell, you can include linebreaks in the SQL statements. If you don't give the --noinput switch, then after executing the given statements, an interactive session will be started.

The --sql switch is very useful for setting shell variables to the output of SQL Statements, like this.

    # A shell script
    USERCOUNT=`java -jar $HSQLDB_HOME/lib/hsqldb.jar --noinput --sql 'select count(
        # Handle the SqlTool error
    }
    echo "There are $USERCOUNT users registered in the database."
    [ "$USERCOUNT" -gt 3 ] && {
        # If there are more than 3 users registered
        # Some conditional shell scripting

SQL Files

Just give paths to sql text file(s) on the command line after the urlid. You can add the line \* false to the top of the script. If you specify multiple SQL files on the command-line, the default behavior is to exit SqlTool if one of them fails. A sample SQL file, sample.sql, resides in the src/org/hsqldb/sample directory of your HSQLDB distribution. It contains SQL as well as Special Commands, making good use of most of the Special Commands documented below.

    /* $Id: sample.sql,v 1.3 2004/07/12 16:51:01 unsaved Exp $
       Examplifies use of SqlTool.
       PCTASK Table creation
    */

    /* Ignore error for the two drop statements
     * For HSQLDB databases, you can use "IF EXISTS" instead of ignoring errors:
     * DROP TABLE x IF EXISTS;
     * "IF EXISTS" is non-portable, however. */
    \* true
    DROP TABLE pctasklist;
    DROP TABLE pctask;
    \*');
    commit;

You can execute this SQL file with a Memory Only database. (The --sql "create..." arguments create an account which the script uses). These switches provide compatibility at the cost of poor control and error detection.
• --continueOnErr: The output will still contain error messages about everything that SqlTool doesn't like (malformatted commands, SQL command failures, empty SQL commands), but SqlTool will continue to run. Errors will not cause rollbacks (but that won't matter because of the following setting).

• --autoCommit

Comments

Comments begin where a (SQL/Special/Buffer/PL) Command could begin, and they end with the very first "*/" (regardless of quotes, nesting, etc.). You may have as many blank lines as you want inside of a comment. Notice that a command can start immediately after the comment ends. Comments in other styles are passed on to the DB engine, and the DB engine will determine whether to parse them as comments.

\q [abort message]
    Be aware that the \q command will cause SqlTool to completely exit. If a script x.sql has a \q command in it, then it doesn't matter if the script is executed directly or from included files, until such point as the SQL file runs its own error handling commands.

\H
    Toggle HTML output mode. If you redirect output to a file, this can make a long session log much easier to view. This will HTML-ify the entire session. (See the Generating Text or HTML Reports section about how to easily store just the query output to file.)

\a [true|false]
    This turns on and off SQL transaction autocommits. Auto-commit defaults to false, but you can change that behavior by using the --autoCommit command-line switch.

\* [true|false]
    A "true" setting tells SqlTool to continue when errors are encountered. The current transaction will not be rolled back upon SQL errors. A typical usage is to set "\* true" before dropping tables (so that things will continue if the tables aren't there), then set it back to false so that real errors are caught. DROP TABLE tablename IF EXISTS; is a more elegant, but less portable, way to accomplish the same thing.

Tip: It depends on what you want your SQL files to do, of course, but I usually want my SQL files to abort when an error is encountered, without necessarily killing the SqlTool session.
If this is the behavior that you want, then put an explicit \* command at the top of your SQL files.

Important: The default settings are usually best for people who don't want to put in any explicit \* commands to handle errors.

Automation

SqlTool is ideal for mission-critical automation because, unlike other SQL tools, SqlTool returns a dependable exit status and gives you control over error handling and SQL transactions. Autocommit is off by default, so you can build a completely dependable solution by intelligently using \* commands (Continue upon Errors) and commit statements, and by verifying exit statuses. Using the SqlTool Procedural Language, you have ultimate control over program flow, and you can use variables for database input and output as well as for many other purposes. See the Procedural Language section.

    or
    cat filepath1.sql... | java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid > /tmp/log.html 2>&1

Character Encoding

SqlTool defaults to the US-ASCII character set (for reading). You can use another character set by setting the system property sqltool.charset. You can also set this per urlid in the SqlTool configuration file. See the Authentication Setup section about that.

Generating Text or HTML Reports

1..
2..
3. When you want SqlTool to stop writing to the file, run \o (or just quit SqlTool if you have no other work to do).
4. If you turned on HTML mode with \H before, you can run \H again to turn it back off, if you wish.

It is not just the output of "SELECT" statements that will make it into the report file, but

• Output of all "\d" Special Commands. (I.e., "\dt", "\dv", etc., and "\d OBJECTNAME").
• Output of "\p" Special Commands. You will want to use this to add titles, and perhaps spacing, for your report.

Warning: Remember that \o appends to the named file. If you want a new file, then use a new file name or remove the target file ahead of time.

Tip: So that I don't end up with a bunch of junk in my report file, I usually leave \o off while I perfect my SELECT statements; once a statement is right, I re-run it against the report file with the ":;" Buffer Command.
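As a sketch of the report steps above (the file path, title, table, and query are invented for illustration, not taken from the guide), an interactive report session might look like:

    \o /tmp/report.html
    \H
    \p Pending Orders
    SELECT * FROM orders WHERE status = 'PENDING';
    \H
    \o

The \o command begins appending output to the named file, \H wraps it in HTML, \p writes the title line, and the second \o stops the redirection.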
Procedural Language

Aka PL. Most importantly, run SqlTool interactively and give the "* ?" command to see what PL commands are available to you. For all PL commands (other than plain "*") the leading "*" must be followed by whitespace. The only reason for this requirement is to provide for aliases, i.e., the ability to short-cut *VARNAME for *{VARNAME} at the beginning of a command. Therefore, "*X" is an alias whereas "* X" is a PL command. The command "*" is an exception to this rule: it doesn't matter whether you put white space after it.

Aliasing may only be used for SQL statements. You can define variables for everything in a Special or PL Command, except for the very first character ("\" or "*"). Therefore, you can use variables but not aliasing in Special and PL Commands. Here is a hyperbolically impractical example to show the extent to which PL variables can be used in Special commands even though you can not use them as aliases.

Variables

• Use the * list command to list some or all variables and their values.

• You can also set variables using the --setvar command-line switch. I give a very brief but useful example of this below.

• Variables are always expanded in SQL, Special, and PL commands if they are written like *{VARNAME} (assuming that a PL command has been run previously). Your SQL scripts can give good feedback by echoing the value of variables with the "\p" special command.

• Variables written like *VARNAME are expanded if they begin an SQL Statement. (They must also be followed by whitespace or terminate the Statement). I usually refer to this use of PL variables as aliasing. If the value of a variable is an entire SQL command, you generally do not want to include the terminating ";" in the value. There is an example of this above.

• Variables are normally written like *VARNAME in logical expressions to prevent them from being evaluated too early. See below about logical expressions.
• You can't do math with expression variables, but you can get functionality like the traditional for (i = 0; i < x; i++) by appending to a variable and testing the string length.

Here is a short SQL file that gives the specified user write permissions on some application tables. The user name is given by a PL variable which you can set with the --setvar command-line switch, i.e., no need to modify the tested and proven script. There is no need for a commit statement in this SQL file since no DML is done. If the script is accidentally run without setting the USER variable, SqlTool will give a very clear notification of that.

The purpose of unsetting the INIT variable is just to initialize PL so that the *{USER} variables will be expanded. (This would not be necessary if the USER variable, or any other variable, were set, but we don't want to depend upon that).

Logical Expressions

Logical expressions occur only inside of logical expression parentheses in PL statements. For example, if (*var1 > astring) and while (*checkvar). (The parentheses after "foreach" do not enclose a logical expression.) If you wrote a variable there as *{VARNAME}, it would expand and then the expression would most likely no longer be a valid expression as listed in the table below. Quotes and whitespace are fine in *VARNAME variables, but it is the entire value that will be used in evaluations, regardless of whether quotes match up, etc. I.e. quotes and whitespace are not special to the token evaluator.

Logical Operators

TOKEN1 > TOKEN2
    True if the TOKEN1 string is longer than TOKEN2 or is the same length but is greater according to a string sort.

*VARNAMEs in logical expressions, where the VARNAME variable is not set, evaluate to an empty string. Therefore (*UNSETVAR = 0) would be false, even though (*UNSETVAR) by itself is false and (0) by itself is false. When developing scripts, definitely use SqlTool interactively to verify that SqlTool evaluates logical expressions as you expect. Just run * if commands that print something (i.e. \p) if the test expression is true.
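For instance, a quick interactive check of the string-based comparison rule (the variable name and values here are made up for illustration) could look like:

    * x = aaa
    * if (*x > zz)
        \p the value of x is longer than zz, so this line prints
    * end if

Since "aaa" is three characters and "zz" is two, the > test is true by the length rule, and the \p line is executed.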
Flow Control

Flow control works by conditionally executing blocks of Commands according to conditions specified by logical expressions. The conditionally executed blocks are called PL Blocks. These PL Blocks always occur between a PL flow control statement (like * foreach, * while, * if) and a corresponding * end PL Command (like * end foreach).

Caution: Be aware that the PL block reader is ignorant about SQL statements and comments when looking for the * end command that closes the block.

An example PL SQL file resides in the src/org/hsqldb/sample directory with the name pl.sql. Definitely give it a run, like

    /* $Id: pl.sql,v 1.3 2004/06/10 01:44:52.*/
    \* true
    \p
    \p Loading up a table named '*{MYTABLE}'...
    /*.

Using hsqlsqltool.jar

This section is only for those users who want to use SqlTool but without the overhead of hsqldb.jar. If you do not need to directly use JDBC URLs like jdbc:hsqldb:mem: + something, jdbc:hsqldb:file: + something, or jdbc:hsqldb:res: + something, then you can use hsqlsqltool.jar in place of the much larger hsqldb.jar file.

The HSQLDB distribution doesn't "come with" a pre-built hsqlsqltool.jar file. I recommend that you either download the Java 1.4 jar at (by right-clicking and downloading if you're reading this with a web browser), or build the jarsqltool target, as explained in the Building Hsqldb version 1.7.2 appendix.

If you are using the HSQLDB JDBC driver (i.e., you're connecting up to a URL like jdbc:hsqldb:hsql + something or jdbc:hsqldb:http + something), you run SqlTool exactly as with hsqldb.jar except you use the file path to your hsqlsqltool.jar file (for example, hsqlsqltool-1.7.2.jar) instead of the path to hsqldb.jar. If you are using a non-HSQLDB JDBC driver, follow the instructions at the end of the The Bare Minimum section, but use your hsqlsqltool jar file in place of hsqldb.jar.

Chapter 8.
SQL Syntax

The Hypersonic SQL Group
Fred Toussi, HSQLDB Development Group <ft@cluedup.com>
Peter Hudson, HSQLDB Development Group
Joe Maher, HSQLDB Development Group <jrmaher@ameritech.net>
Edited by Blaine Simpson
$Date: 2004/11/24 23:06:37 $

HSQLDB version 1.7.2 supports the SQL statements and syntax described in this chapter. ( and ) are the actual characters '(' and ')' used in statements.

SQL Commands

ALTER INDEX

    ALTER INDEX <indexname> RENAME TO <newname>;

Index names can be changed so long as they do not conflict with other user-defined or system-defined names.

ALTER SEQUENCE

    ALTER SEQUENCE <sequencename> RESTART WITH <value>;

ALTER TABLE

    ALTER TABLE <tablename> ADD [COLUMN] <columnname> Datatype [(columnSize[,precision])] [DEFAULT <defaultValue>] [NOT NULL] [BEFORE <existingcolumn>];

(These features were added by HSQL Development Group since April 2001.)

Adds the column to the end of the column list. Optional attributes, size and default value (with or without NOT NULL) can be specified. The optional BEFORE <existingcolumn> can be used to specify the name of an existing column so that the new column is inserted in a position just before the <existingcolumn>. If NOT NULL is specified and the table is not empty, then a default value must be specified. If an SQL view includes a SELECT * FROM <tablename> in its select statement, the new column is added to the view. This is a non-standard feature which is likely to change in the future.

Drops the column from the table. Will not work if column is part of a primary key, unique or foreign key constraint.

Adds a check constraint to the table. In the current version, a check constraint can reference only the row being inserted or updated.

Adds a unique constraint to the table. This will not work if there is already a unique constraint covering exactly the same <column list>. This will work only if the values of the column list for the existing rows are unique or include a null value.
Adds a foreign key constraint to the table, using the same constraint syntax as when the foreign key is specified in a table definition. This will fail if for each existing row in the referring table, a matching row (with equal values for the column list) is not found in the referenced tables.

Drops a named unique, check or foreign key constraint from the table.

ALTER USER

    ALTER USER <username> SET PASSWORD <password>;

CALL

    CALL Expression;

Any expression can be called like a stored procedure, including, but not only, Java stored procedures or functions. This command returns a ResultSet with one column and one row (the result), just like a SELECT statement with one row and one column.

CHECKPOINT

    CHECKPOINT [DEFRAG];

Closes the database files, rewrites the script file, deletes the log file and opens the database. If DEFRAG is specified, this command also shrinks the .data file to its minimal size.

COMMIT

    COMMIT [WORK];

CONNECT

    CONNECT USER <username> PASSWORD <password>;

Connects to the database as a different user. Use "" for an empty password.

CREATE ALIAS
The next value for a sequence can be included in SELECT, INSERT and UPDATE statements as in the following example: In the proposed SQL 200n and in the current version, there is no way of retreiving the last returned value of a sequence. CREATE TABLE CREATE [MEMORY | CACHED | TEMP1 | TEXT1] TABLE <name> ( <columnDefinition> [, ...] [, <constraintDefinition>...] ); Creates a tables in memory (default) or on disk and only cached in memory. If the database is all- in-memory, both MEMORY and CACHED forms of CREATE TABLE return a MEMORY table. If the database is file based, then MEMORY table contents are persisted to disk. 73 SQL Syntax columnDefinition columnname Datatype [(columnSize[,precision])] [{DEFA GENERATED BY DEFAULT AS IDENTITY (START WITH <n>[, IN [[NOT] NULL] [IDENTITY] [PRIMARY KEY] Default values that are allowed are constant values or certain SQL datetime functions. constraintDefinition [CONSTRAINT <name>] UNIQUE ( <column> [,<column>...] ) | PRIMARY KEY ( <column> [,<column>...] ) | FOREIGN KEY ( <column> [,<column>...] ) REFERENCES <r [ON {DELETE | UPDATE} {CASCADE | SET DEFAULT | SET NU CHECK(<search condition>)1 74 SQL Syntax CREATE TRIGGER1 CREATE TRIGGER <name> {BEFORE | AFTER} {INSERT | UPDATE | DELETE} ON <table> [FOR E In 1.7.2 the implementation has been changed and enhanced. When the 'fire' method is called, it is passed the following arguments: where 'row1' and 'row2' represent the 'before' and 'after' states of the row acted on, with each column be- ing a member of the array. The mapping of members of the row arrays to database types is specified in Data Types. For example, BIGINT is represented by a java.lang.Long Object. Note that the number of elements in the row arrays is larger than the number of columns by one or two elements. Nev- er modify the last elements of the array, which are not part of the actual row. If the trigger method wants to access the database, it must establish its own JDBC connection. 
This can cause data inconsistency and other problems so it is not recommended. The jd- bc:default:connection: URL is not currently supported. Implementation note: If QUEUE 0 is specified, the fire method is execued in the same thread as the database engine. This al- lows trigger action to alter the data that is about to be stored in the database. Data can be checked or modified in BEFORE INSERT / UPDATE + FOR EACH ROW triggers. All table constraints are then enforced by the database engine and if there is a violation, the action is rejected for the SQL command that initiated the INSERT or UPDATE. There is an exception to this rule, that is with UPDATE queries, referential integrity and cascading actions resulting from ON UPDATE CASCASE / SET NULL / SET DEFAULT are all performed prior to the invocation of the trigger method. If an invalid value that breaks referential integrity is inserted in the row by the trigger method, this action is not checked and results in 75 SQL Syntax Alternatively, if the trigger is used for external communications and not for checking or altering the data, a queue size larger than zero can be specified. This is in the interests of not blocking the database's main thread as each trigger will run in a thread that will wait for its firing event to occur. When this hap- pens, the trigger's thread calls TriggerClass.fire.. Note also that the timing of trigger method calls is not guar- anteed, so applications should implement their own synchronization measures if necessary. With a non-zero QUEUE parameter, if the trigger methods modifies the 'row2' values, these changes may or may not affect the database and will almost certainly result in data inconsistency. CREATE USER CREATE USER username PASSWORD password [ADMIN]; Creates a new user or new administrator in this database. Empty password can be made using "". You can change a password afterward using an ALTER USER1 command. CREATE VIEW1 CREATE VIEW <viewname>[(<viewcolumn>,..) AS SELECT ... 
    FROM ... [WHERE Expression];

A view acts as a virtual table: you reference the view name in SQL statements the same way a table is referenced. A view is used to do any or all of these functions:

• Restrict a user to specific rows in a table. For example, allow an employee to see only the rows recording his or her work in a labor-tracking table.

• Restrict a user to specific columns. For example, allow employees who do not work in payroll to see the name, office, work phone, and department columns in an employee table, but do not allow them to see any columns with salary information or personal information.

• Join columns from multiple tables so that they look like a single table.

• Aggregate information instead of supplying details. For example, present the sum of a column, or the maximum or minimum value from a column.

Views are created by defining the SELECT statement that retrieves the data to be presented by the view. The data tables referenced by the SELECT statement are known as the base tables for the view. In this example, mealsjv is a view that selects data from three base tables to present a virtual table of commonly needed data: You can then reference mealsjv in statements in the same way you would reference a table:

A view can reference another view. For example, mealsjv presents information that is useful for long descriptions that contain identifiers, but a short list might be all a web page display needs. A view can be built that selects only specific mealsjv columns:

The SELECT statement in a VIEW definition should return columns with distinct names. If the names of two columns in the SELECT statement are the same, use a column alias to distinguish between them. A list of new column names can always be defined for a view.

DELETE

    DELETE FROM table [WHERE Expression];

DISCONNECT

    DISCONNECT;
After disconnecting, it is not possible to execute oth- er queries (including CONNECT) with this connection. DROP INDEX DROP INDEX index [IF EXISTS]; Removes the specified index from the database. Will not work if the index backs a UNIQUE of FOR- EIGN KEY constraint. DROP SEQUENCE1 DROP SEQUENCE <sequencename>; DROP TABLE DROP TABLE <table> [IF EXISTS]; Removes a table, the data and indexes from the database. When IF EXIST is used, the statement returns without an error even if the table does not exist. Will fail if the table has been reference by a foreign key or a view. DROP TRIGGER DROP TRIGGER <trigger>; DROP USER DROP USER <username>; 78 SQL Syntax DROP VIEW1 DROP VIEW <viewname> [IF EXISTS]; Removes a view from the database. When IF EXIST is used, the statement returns without an error if the view does not exist. GRANT GRANT { SELECT | DELETE | INSERT | UPDATE | ALL } [,...] ON { table | CLASS "package.class" } TO { username | PUBLIC }; Assigns privileges to a user or to all users (PUBLIC) for a table or for a class. To allow a user to call a function from a class, the right ALL must be used. Examples: INSERT INSERT INTO table [( column [,...] )] { VALUES(Expression [,...]) | SelectStatement}; REVOKE REVOKE { SELECT | DELETE | INSERT | UPDATE | ALL } [,...] ON { table | CLASS "package.class" } TO { username | PUBLIC }; Withdraws privileges from a user or for PUBLIC (all users) for a table or class. ROLLBACK 79 SQL Syntax ROLLBACK used on its own, or with WORK, undoes changes made since the last COMMIT or ROLL- BACK. ROLLBACK TO SAVEPOINT <savepoint name> undoes the change since the named savepoint. It has no effect if the savepoint is not found. SAVEPOINT1 SAVEPOINT <savepoint name>; SCRIPT SCRIPT ['file']; Creates an SQL script describing the database. If the file is not specified, a result set containing only the DDL script is returned. 
If the file is specified, then the file is saved with the path relative to the machine where the database engine is located.

SELECT

SELECT [{LIMIT n m | TOP m}] [ALL | DISTINCT]
{ selectExpression | table.* | * } [, ...]
[INTO [CACHED | TEMP | TEXT] newTable]
FROM tableList
[WHERE Expression]
[GROUP BY Expression [, ...]]
[HAVING Expression]
[{ UNION [ALL | DISTINCT] | {MINUS [DISTINCT] | EXCEPT [DISTINCT]} | INTERSECT [DISTINCT] } selectStatement]
[ORDER BY orderExpression [, ...]];

tableList

table [{ INNER | LEFT OUTER } JOIN table ON Expression]

table

selectExpression

{ Expression | COUNT(*) | {COUNT | MIN | MAX | SUM | AVG} (Expression) } [[AS] label]

orderExpression

{ columnNr | columnAlias | selectExpression } [ASC | DESC]

LIMIT n m

Creates the result set for the SELECT statement first and then discards the first n rows and returns the first m rows of the remaining result set. Special cases: LIMIT 0 m is equivalent to TOP m or FIRST m in other RDBMSs; LIMIT n 0 discards the first n rows and returns the rest of the result set.

SET AUTOCOMMIT

SET AUTOCOMMIT { TRUE | FALSE };

Switches on or off the connection's auto-commit mode. If switched on, then all statements will be committed as individual transactions. Otherwise, the statements are grouped into transactions that are terminated by either COMMIT or ROLLBACK. By default, new connections are in auto-commit mode. This command should not be used directly. Use the JDBC equivalent method, Connection.setAutoCommit(boolean autocommit).

SET IGNORECASE

SET IGNORECASE { TRUE | FALSE };

Disables (ignorecase = true) or enables (ignorecase = false) the case sensitivity of text comparison and indexing for new tables. By default, character columns in new databases are case sensitive. The sensitivity must be switched before creating tables. Existing tables and their data are not affected. When switched on, the data type VARCHAR is set to VARCHAR_IGNORECASE in new tables.
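The interplay between SET IGNORECASE and table creation can be sketched as follows (the table and data are illustrative, not from the original text):

```sql
-- Case sensitivity must be switched before the table is created
SET IGNORECASE TRUE;

-- name is created as VARCHAR_IGNORECASE because IGNORECASE is on
CREATE TABLE customer (id INTEGER, name VARCHAR(50));

INSERT INTO customer VALUES (1, 'Smith');

-- Matches 'Smith', 'SMITH' and 'smith' alike
SELECT * FROM customer WHERE name = 'smith';
```

Tables created before the switch keep the sensitivity they were created with.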
Alternatively, you can specify the VARCHAR_IGNORECASE type for the definition of individual columns. So it is possible to have some columns case sensitive and some not, even in the same table.

SET LOGSIZE

SET LOGSIZE size;

Sets the maximum size in MB of the .log file. Default is 200 MB. The database will be closed and opened (just like using CHECKPOINT) if the .log file gets over this limit, and so the .log file will shrink. 0 means no limit.

SET PASSWORD

SET PASSWORD password;

Changes the password of the currently connected user. An empty password can be set using "".

SET PROPERTY

SET PROPERTY <double quoted name> <value>;

Sets a database property. Properties that can be set using this command are either boolean or integral and are listed in the Advanced Topics chapter.

SET REFERENTIAL_INTEGRITY

SET REFERENTIAL_INTEGRITY { TRUE | FALSE };

This command enables / disables the referential integrity checking (foreign keys). Normally it should be switched on (this is the default) but when importing data (and the data is imported in the 'wrong' order) the checking can be switched off.

Warning: Note that when referential integrity is switched back on, no check is made that the changes to the data are consistent with the existing referential integrity constraints.

SET SCRIPTFORMAT

SET SCRIPTFORMAT {TEXT | BINARY | COMPRESSED};

Changes the format of the script file. BINARY and COMPRESSED formats are slightly faster and more compact than the default TEXT. Recommended only for very large script files.

SET TABLE INDEX

This command is only used internally to store the position of index roots in the .data file. It appears only in database script files; it should not be used directly.

SET TABLE SOURCE

SET TABLE <tablename> SOURCE <file and options> [DESC];

This command is used exclusively with TEXT tables to specify which file is used for storage of the data. The optional DESC qualifier results in the text file indexed from the end and opened as readonly.
The <file and options> argument is a double quoted string that consists of:

\semi    semicolon
\quote   quote
\apos    apostrophe
\r       carriage return
\t       tab
\\       backslash

SET WRITE_DELAY

SET WRITE_DELAY { TRUE | FALSE | <seconds> };

In 1.7.2 this controls the frequency of file synch. When WRITE_DELAY is set to FALSE, the synch takes place once every second. WRITE_DELAY TRUE performs the synch once every minute. The default is TRUE (60 seconds). A numeric value can be specified instead.

SHUTDOWN

SHUTDOWN [IMMEDIATELY | COMPACT | SCRIPT];

SHUTDOWN  Performs a checkpoint to create a new .script file that has the minimum size and contains the data for memory tables only. It then backs up the .data file containing the CACHED TABLE data in zipped format to the .backup file and closes the database.

SHUTDOWN IMMEDIATELY  Just closes the database files (like when the Java process for the database is terminated); this command is used in tests of the recovery mechanism. This command should not be used as the routine method of closing the database.

SHUTDOWN SCRIPT  Apart from deleting the existing files, it does not rewrite the .data and text table files. After SHUTDOWN SCRIPT, only the .script and .properties files remain. At the next startup, these files are processed and the .data and .backup files are created. This command in effect performs part of the job of SHUTDOWN COMPACT, leaving the other part to be performed automatically at the next startup. This command produces a full script of the database which can be edited for special purposes prior to the next startup.

UPDATE

UPDATE table SET column = Expression [, ...] [WHERE Expression];

Data Types

Table 8.1. Data Types. The types on the same line are equivalent.
Name                        Range                  Java Type
INTEGER | INT               as Java type           int | java.lang.Integer
DOUBLE [PRECISION] | FLOAT  as Java type           double | java.lang.Double
VARCHAR                     as Integer.MAXVALUE    java.lang.String
VARCHAR_IGNORECASE          as Integer.MAXVALUE    java.lang.String
CHAR | CHARACTER            as Integer.MAXVALUE    java.lang.String
LONGVARCHAR                 as Integer.MAXVALUE    java.lang.String
DATE                        as Java type           java.sql.Date
TIME                        as Java type           java.sql.Time
TIMESTAMP | DATETIME        as Java type           java.sql.Timestamp
DECIMAL                     No limit               java.math.BigDecimal
NUMERIC                     No limit               java.math.BigDecimal
BOOLEAN | BIT               as Java type           boolean | java.lang.Boolean
TINYINT                     as Java type           byte | java.lang.Byte

The uppercase names are the data type names defined by the SQL standard or commonly used by RDBMSs. The data types in quotes are the Java class names - if these type names are used then they must be enclosed in quotes because in Java names are case-sensitive.

Range indicates the maximum size of the object that can be stored. Where Integer.MAXVALUE is stated, this is a theoretical limit and in practice the maximum size of a VARCHAR or BINARY object that can be stored is dictated by the amount of memory available. In practice, objects of up to a megabyte in size have been successfully used in production databases.

The recommended Java mapping for the JDBC datatype FLOAT is as a Java type "double". Because of the potential confusion it is recommended that DOUBLE is used instead of FLOAT.

In table definition statements, HSQLDB accepts size, precision and scale qualifiers only for certain types: CHAR(n), VARCHAR(n), DOUBLE(n), DECIMAL(p,s). The specified precision and scale for DOUBLE and DECIMAL is simply ignored by the engine. Instead, the values for the corresponding Java types are always used, which in the case of DECIMAL is an unlimited precision and scale. When defining CHAR and VARCHAR columns, the SIZE argument is optional and defaults to 0.
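As an illustrative sketch of the qualifiers just listed (the table and column names are not from the original text):

```sql
CREATE TABLE product (
    code   CHAR(10),       -- size is stored but not enforced by default
    name   VARCHAR(30),    -- size is stored but not enforced by default
    price  DECIMAL(10,2),  -- precision and scale are simply ignored
    weight DOUBLE          -- DOUBLE(n) is also accepted; n is ignored
);
```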
If any other size is specified, it is stored in the database definition but is not enforced by default. Once you have created the database (before adding data), you can add a database property value to enforce the sizes:

sql.enforce_size=true

This will enforce the specified size and pad CHAR fields with spaces to fill the size. This complies with SQL standards by throwing an exception if an attempt is made to insert a string longer than the maximum size.

CHAR and VARCHAR and LONGVARCHAR columns are by default compared and sorted according to POSIX standards. To use the current JRE locale for sorting and comparison, add the following database property to the properties file:

sql.compare_in_locale=true

Columns of the type OTHER or OBJECT contain the serialized form of a Java Object in binary format. To insert or update such columns, a binary format string (see below under Expression) should be used. Using PreparedStatements with JDBC automates this transformation.

SQL Comments

Stored Procedures / Functions

"java.lang.Math.sqrt"(2.0)

This means the package must be provided, and the name must be written as one word, and inside " because otherwise it is converted to uppercase (and not found). When an alias is defined, then the function can be called additionally using this alias.

Only static java methods can be used as stored procedures. If, within the same class, there are overloaded methods with the same number of arguments, then the first one encountered by the program will be used. If you want to use Java library methods, it is recommended that you create your own class with static methods that act as wrappers around the Java library methods. This will allow you to control which method signature is used to call each Java library method.
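A sketch of the alias mechanism described above (the alias name is an assumption, not from the original text):

```sql
-- Direct call: the full method name must be quoted to preserve its case
CALL "java.lang.Math.sqrt"(2.0);

-- Define an alias for the static method ...
CREATE ALIAS SQRT FOR "java.lang.Math.sqrt";

-- ... and call the function through the alias
CALL SQRT(2.0);
```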
BITOR(a,b)  returns a | b

RAND()  returns a random number x bigger or equal to 0.0 and smaller than 1.0

LOCATE(search,s,[start])  returns the first index (1=left, 0=not found) where search is found in s, starting at start

OCTET_LENGTH(str)  returns the length of the string in bytes (twice the number of characters)

SUBSTRING(s,start[,len])  returns the substring starting at start (1=left) with length len

DATEDIFF(string, datetime1, datetime2)  returns the count of units of time elapsed from datetime1 to datetime2. The string indicates the unit of time and can have the following values: 'ms'='millisecond', 'ss'='second', 'mi'='minute', 'hh'='hour', 'dd'='day', 'mm'='month', 'yy'='year'. Both the long and short forms of the strings can be used.

NOW()  returns the current date and time as a timestamp - use CURRENT_TIMESTAMP instead

CURRENT_USER  SQL standard function, returns the user name of this connection

IDENTITY()  returns the last identity value that was inserted by this connection

IFNULL(exp,value)  if exp is null, value is returned, else exp - use COALESCE() instead

CASEWHEN(exp,v1,v2)  if exp is true, v1 is returned, else v2 - use CASE WHEN instead

COALESCE(expr1,expr2,...)  if expr1 is not null then it is returned; else, expr2 is evaluated and if not null it is returned, and so on

CASE WHEN expr1 THEN v1 [WHEN expr2 THEN v2] [ELSE v4] END

if the first string is a sub-string of the second one, returns the position of the sub-string, counting from one; otherwise 0

SQL Expression

[NOT] condition [{ OR | AND } condition]

condition

{ value [|| value]
| value { = | < | <= | > | >= | <> | != | IS [NOT] } value
| EXISTS(selectStatement)
| value BETWEEN value AND value
| value [NOT] IN ( {value [, ...] | selectStatement } )
| value [NOT] LIKE value [ESCAPE] value }

value

[+ | -] { term [{ + | - | * | / | || } term]
| ( condition )
| function ( [parameter] [,...]
) | selectStatement giving one value }

term

{ 'string' | number | floatingpoint | [table.]column | TRUE | FALSE | NULL }

sequence

NEXT VALUE FOR <sequence>

HSQLDB does not currently enforce the SQL 200n proposed rules on where sequence-generated values are allowed to be used. In general, these values can be used in insert and update statements but not in CASE statements, order by clauses, search conditions, aggregate functions, or grouped queries.

For example, SELECT ... LIKE '\_%' ESCAPE '\' will find the strings beginning with an underscore.

name

The character set for quoted identifiers (names) in HSQLDB is Unicode. An unquoted identifier (name) starts with a letter and is followed by any number of ASCII letters or digits. When an SQL statement is issued, any lowercase characters in unquoted identifiers are converted to uppercase. Because of this, unquoted names are in fact ALL UPPERCASE when used in SQL statements. An important implication of this is for accessing column names via JDBC DatabaseMetaData: the internal form, which is ALL UPPERCASE, must be used if the column name was not quoted in the CREATE TABLE statement.

Quoted identifiers can be used as names (for tables, columns, constraints or indexes). Quoted identifiers start and end with " (one doublequote). A quoted identifier can contain any Unicode character, including space. In a quoted identifier use "" (two doublequotes) to create a " (one doublequote). With quoted identifiers it is possible to create mixed-case table and column names.

The equivalent quoted identifier can be used for an unquoted identifier by converting the identifier to all uppercase and quoting it. For example, if a table name is defined as Address2 (unquoted), it can be referred to by its quoted form, "ADDRESS2", as well as address2, aDDress2 and ADDRESS2. Quoted identifiers should not be confused with SQL strings.
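For instance, mixed-case names might be created and referenced as in this sketch (the table names follow the Address/Address2 examples mentioned in the text):

```sql
-- Quoted identifiers preserve their exact case
CREATE TABLE "Address" ("Nr" INTEGER, "Name" VARCHAR(100));
SELECT "Nr", "Name" FROM "Address";

-- An unquoted name is stored in uppercase ...
CREATE TABLE Address2 (id INTEGER);

-- ... so ADDRESS2, address2, aDDress2 and "ADDRESS2" all refer to it
SELECT * FROM "ADDRESS2";
```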
Quoting can sometimes be used for identifiers, aliases or functions when there is an ambiguity.

Although HSQLDB 1.7.2 does not force unquoted identifiers to contain only ASCII characters, the use of non-ASCII characters in these identifiers does not comply with SQL standards. Portability between different JRE locales could be an issue when accented characters (or extended Unicode characters) are used in unquoted identifiers. Because native Java methods are used to convert the identifier to uppercase, the result may not be as expected in different locales. It is recommended that accented characters are used only in quoted identifiers.

When using JDBC DatabaseMetaData methods that take table, column, or index identifiers as arguments, treat the names as they are registered in the database. With these methods, unquoted identifiers should be used in all-uppercase to get the correct result. Quoted identifiers should be used in the exact case combination as they were defined - no quote character should be included around the name. JDBC methods that return a result set containing such identifiers return unquoted identifiers as all-uppercase and quoted identifiers in the exact case they are registered in the database (a change from 1.6.1 and previous versions).

values

• A DATE literal starts and ends with ' (singlequote); the format is yyyy-mm-dd (see java.sql.Date).

• A TIME literal starts and ends with ' (singlequote); the format is hh:mm:ss (see java.sql.Time).

• A TIMESTAMP or DATETIME literal starts and ends with ' (singlequote); the format is yyyy-mm-dd hh:mm:ss.SSSSSSSSS (see java.sql.Timestamp).

When specifying default values for date / time columns in CREATE TABLE statements, or in SELECT, INSERT, and UPDATE statements, the special SQL functions NOW, SYSDATE, TODAY, CURRENT_TIMESTAMP, CURRENT_TIME and CURRENT_DATE (case independent) can be used. NOW is used for TIME and TIMESTAMP columns, TODAY is used for DATE columns.
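These default functions might be used as in the following sketch (the table and column names are illustrative, not from the original text):

```sql
CREATE TABLE logentry (
    id      INTEGER IDENTITY,
    made_on DATE      DEFAULT CURRENT_DATE,
    made_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Explicit date and time literals are single quoted
INSERT INTO logentry (made_on, made_at)
    VALUES ('2004-06-06', '2004-06-06 08:45:56');
```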
The date and time variants CURRENT_* are SQL standard versions and should be used in preference to the others.

Binary data starts and ends with ' (singlequote); the format is hexadecimal. '0004ff' for example is 3 bytes: first 0, second 4 and last 255 (0xff).

Any number of commands may be combined. With combined commands, ';' (semicolon) must be used at the end of each command to ensure data integrity, despite the fact that the engine may understand the end of commands and not return an error when a semicolon is not used.

Appendix A. Building Hsqldb

version 1.7.2

Fred Toussi, HSQLDB Development Group <ft@cluedup.com>

$Date: 2004/06/06 08:45:56 $

Purpose

From 1.7.2, the supplied hsqldb.jar file is built with Java 1.4.2. If you want to run the engine under JDK 1.3 or earlier, you should rebuild the jar with Ant.

Obtaining Ant

Ant is a part of the Jakarta/Apache Project.

ant -projecthelp

This displays the available ant targets, which you can supply as command line arguments to ant. These include:

jarmain  to build a smaller jar for hsqldb that does not contain utilities

jarclient  to build an extremely small jar containing only the client-side JDBC driver (does not support direct connection to HSQLDB URLs of the form jdbc:hsqldb:mem:*, jdbc:hsqldb:file:*)

HSQLDB can be built in any combination of five different sizes and three JRE (Java Runtime Environment) versions. The smallest jar size (hsqljdbc.jar) contains only the HSQLDB JDBC Driver client. The next smallest jar size (hsqldbmin.jar) contains only the standalone database (no servers) and JDBC support and is suitable for embedded applications. The default size (hsqldb.jar) also contains server mode support and the utilities. The largest size (hsqldbtest.jar) includes some test classes as well. (You can also build hsqlsqltool.jar. If you use SqlTool, see the SqlTool chapter about that.)

In order to build and run the test classes, you need the JUnit jar in the /lib directory. This is available from.
The preferred method of rebuilding the jar is with Ant. After installing Ant on your system use the following command from the /build directory:

ant explainjars

The command displays a list of different options for building different sizes of the HSQLDB Jar. The default is built using:

Example A.1. Building the standard Hsqldb jar file with Ant

ant jar

The Ant method always builds a jar with the JDK that is used by Ant and specified in the JAVA_HOME environment variable.

Building with JDK 1.x

Before building the hsqldbtest.jar package, you should download the junit jar from http:// and put it in the /lib directory, alongside servlet.jar, which is included in the .zip package.

For DOS/Windows users, a set of MSDOS batch files is provided as an alternative to using Ant. These produce only the default jar size. The path and classpath variables for the JDK should of course be set before running any of the batch files. If you are compiling for JDKs other than 1.4.x, you should use the appropriate switchToJDK11.bat or switchToJDK12.bat to adapt the source files to the target JDK before running the appropriate buildJDK11.bat or buildJDK12.bat.

JDK and JRE versions

From version 1.7.2, use of JDK 1.1.x is not recommended for building the JAR, even for running under JDK 1.1.x -- use JDK 1.3 for running under 1.1.x. Javadoc can be built with Ant and batch files.

Hsqldb CodeSwitcher

...
//#ifdef JAVA2
properties.store(out,"hsqldb database");
//#else
/*
properties.save(out,"hsqldb database");
*/
//#endif
...

The '.' means the program works on the current directory (all subdirectories are processed recursively). -JAVA2 means the code labelled with JAVA2 must be switched off:

...
//#ifdef JAVA2
/*
pProperties.store(out,"hsqldb database");
*/
//#else
pProperties.save(out,"hsqldb database");
//#endif
...

Appendix B.
First JDBC Client Example

There is a copy of Testdb.java in the directory src/org/hsqldb/sample of your HSQLDB distribution.

package org.hsqldb.sample;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Title: Testdb
 * Description: simple hello world db example of a
 * standalone persistent db application
 *
 * every time it runs it adds four more rows to sample_table
 * it does a query and prints the results to standard out
 *
 * Author: Karl Meissner karl@meissnersd.com
 */
public class Testdb {

    Connection conn;    // our connection

        st.close();
    }    // void update()

    public static void dump(ResultSet rs) throws SQLException {

        // the order of the rows in a cursor
        // are implementation dependent unless you use the SQL ORDER statement
        ResultSetMetaData meta   = rs.getMetaData();
        int               colmax = meta.getColumnCount();
        int               i;
        Object            o = null;

        // the result set is a cursor into the data.
        // You can only point to one row at a time.
        // Assume we are pointing to BEFORE the first row:
        // rs.next() points to the next row and returns true,
        // or false if there is no next row, which breaks the loop
        for (; rs.next(); ) {
            for (i = 0; i < colmax; ++i) {
                o = rs.getObject(i + 1);    // in SQL the first column is indexed
                                            // with 1 not 0
                System.out.print(o.toString() + " ");
            }
            System.out.println(" ");
        }
    }    // void dump( ResultSet rs )

    public static void main(String[] args) {

        Testdb db = null;

        try {
            db = new Testdb("db_file");
        } catch (Exception ex1) {
            ex1.printStackTrace();    // could not start db
            return;                   // bye bye
        }

        try {

            // make an empty table
            //
            // by declaring the id column IDENTITY, the db will automatically
            // generate unique values for new rows - useful for row keys
            db.update(
                "CREATE TABLE sample_table ( id INTEGER IDENTITY, str_col VARCHAR(256), num_col INTEGER)");
        } catch (SQLException ex2) {

            // ignore
            // ex2.printStackTrace();  // second time we run program
            // should throw exception since table
            // already there
            //
            // this will have no effect on the db
        }

        try {

            // add some rows - will create duplicates if run more than once
            // the id column is automatically generated
            db.update(
                "INSERT INTO sample_table(str_col,num_col) VALUES('Ford', 100)");
            db.update(
                "INSERT INTO sample_table(str_col,num_col) VALUES('Toyota', 200)");
            db.update(
                "INSERT INTO sample_table(str_col,num_col) VALUES('Honda', 300)");
            db.update(
                "INSERT INTO sample_table(str_col,num_col) VALUES('GM', 400)");

            // do a query
            db.query("SELECT * FROM sample_table WHERE num_col < 250");

            // at end of program
            db.shutdown();
        } catch (SQLException ex3) {
            ex3.printStackTrace();
        }
    }    // main()
}    // class Testdb

Appendix C. Hsqldb Database Files and Recovery

This text is based on HypersonicSQL documentation, updated to reflect the latest version 1.7.2 of HSQLDB.
$Date: 2004/04/20 18:35:56 $

The Standalone and Client/Server modes will in most cases use files to store all data to disk in a persistent and safe way. This document describes the meaning of the files, the states and the procedures followed by the engine to recover the data.

A database named 'test' is used in this description. The database files will be test.script, test.properties, test.data and test.backup.

Database Files

test.properties  Contains the entry 'modified'. If the entry 'modified' is set to 'yes' then the database is either running or was not closed correctly (because the close algorithm sets 'modified' to 'no' at the end).

test.script  This file contains the SQL statements that make up the database up to the last checkpoint - it is in synch with test.backup.

test.data  This file contains the (binary) data records for CACHED tables only.

test.backup  This is a compressed file that contains the complete backup of the old test.data file at the time of last checkpoint.

test.log  This file contains the extra SQL statements that have modified the database since the last checkpoint (something like the 'Redo-log' or 'Transaction-log', but just text).

States

Database is closed correctly

• The test.script contains the information in the database, excluding data for CACHED and TEXT tables.

Database is closed correctly with SHUTDOWN SCRIPT

• The test.data file does not exist; all CACHED table data is in the test.script file.

• The test.script contains the information in the database, including data for CACHED and TEXT tables.

Database is aborted

This may happen by sudden power off, Ctrl+C in Windows, but may be simulated using the command SHUTDOWN IMMEDIATELY.

• The test.script contains a snapshot of the database at the last checkpoint and is OK.

• The test.data file may be corrupt because the cache in memory was not written out completely.

• The test.log file contains all information to re-do all changes since the snapshot.
As a result of abnormal termination, this file may be partially corrupt.

Procedures

The database engine performs the following procedures internally in different circumstances.

Clean Shutdown

1. The test.data file is written completely (all the modified cached table rows are written out) and closed.

3. The file test.script.new is created using the information in the database (and thus shrinks because it contains no UPDATE and DELETE statements, only INSERTs).

Startup

1. Check if the database files are in use (by checking a special test.lck file).

3. If the test.properties did not exist, then this is a new database. Create the empty test.log to append new commands.

Repair

The current test.data file is corrupt, but with the old test.data (from the test.backup file and test.script) and the current test.log, the database is made up-to-date. The database engine takes these steps:

1. Restore the old test.data file from the backup (uncompress the test.backup and overwrite test.data).

3. Execute all commands in the test.log file. If, due to corruption, an exception is thrown, the rest of the lines of command in the test.log file are ignored.

Appendix D. Running Hsqldb with OpenOffice.org

Hermann Kienlein, EDV - Systeme Kienlein <hermann@kienlein.com>

Copyright 2003-2004 Hermann Kienlein

$Date: 2004/04/22 09:25:28 $

Introduction

HSQLDB can now act as a Database with OpenOffice.org. This document is written to help you connect and run HSQLDB out of OpenOffice.org in a simple way, without user-management and only for your single-system. If you have problems, read the other available documents, because I will not write them here again.

If you need a real DB-System with user-management and different rights for different users, read the other documents.

Installing

I assume you have a running OpenOffice.org (OOo) and a JavaRuntimeEnvironment.
So place the hsqldb-1.7.2.*.zip file where you want on your disk and unpack it (I assume you have done this already).

Setting up OpenOffice.org

Start OOo with a text document and go to the Database-Explorer (simply by pressing F4). In the left frame you see a tree-view with all known databases in OOo. A right mouse-click opens a menu where you can manage your databases. So click on New Database and choose a name that you want to have inside OOo. I chose HSQLDB as name.

On Windows

You can specify a directory where HSQLDB should store the info and data. Something like jdbc:hsqldb:file:c:\javasrc\hsqldb-dev\databasename (where jdbc: is written by OOo). The string c:\javasrc\hsqldb-dev\databasename works only on windows, but you can write this down as a linux-path like /javasrc/hsqldb-dev/databasename too. Then HSQLDB takes the c:\ drive as root. This means this works only on c:\ for you. The first part is the directory-path and the databasename is the identifier for the database.

On Linux

Choose a path as said for windows, like /opt/db/data

Now OOo has to find your hsqldb.jar file. So go to options => security and insert the path to the .jar file. If you have problems, search the Online-help for JDBC. You then get help in your own language (this is generally quite better than my English, I think ;-)

If you cannot write to your Tables, OOo thinks that you don't have permission to write to HSQLDB. Then we tell OOo to ignore the DriverPrivileges because on our single-user-system we do not need them. Because OOo is working on this, the next step is only needed for systems without write-permission.

Open tools => macro in OOo to get the Basic-IDE. Here simply copy and paste the code and run the macro. You see an input-box where you only have to insert the name of your DB; in my example I have to insert HSQLDB, because I took this as name in OOo. Note that if you change your OOo-DB name, you have to run this macro again!
Now we only have to stop and restart OOo. Be sure that you exit Quickstarter and all running processes too. On next OOo-Start you should have a running Database in OpenOffice.org.

Appendix E. Hsqldb Test Utility

$Date: 2004/12/24 23:40:59 $

The org.hsqldb.test package contains a number of tests for various functions of the database engine. Among these, the TestSelf class performs the tests that are based on scripts. To run the tests, you should compile the hsqldbtest.jar target with Ant.

For TestSelf, a batch file is provided in the testrun/hsqldb directory, together with a set of TestSelf*.txt files. To start the application in Windows, change to the directory and type:

runtest TestSelf

In Unix / Linux:

./runTest.sh TestSelf

The new version of TestSelf runs multiple SQL test files to test different SQL operations of the database. Prefixes in the test files indicate what the expected result should be.

• Lines starting with spaces are the continuation of the previous line.

• The remaining items in this list exemplify use of the available command line-prefixes.

• /*c<rows>*/ SQL statement returning a column count of <rows>

• /*u<count>*/ SQL statement returning an update count equal to <count>

• /*e*/ SQL statement that should produce an error when executing

• /*r<string1>,<string2>*/ SQL statement returning a single row ResultSet equal to the specified value

• /*r
<string1>,<string2>
<string1>,<string2>
<string1>,<string2>
*/ SQL statement returning a multiple row ResultSet equal to the specified value

Appendix F. Database Manager

Fred Toussi, HSQLDB Development Group <ft@cluedup.com>

$Date: 2004/06/18 14:24:42 $

Brief Introduction

properties to pass. You should also enter the username and password before clicking on the OK button. The AWT select before making the connection. The modified values will be saved.

Appendix G.
Transfer Tool

Fred Toussi, HSQLDB Development Group <ft@cluedup.com>

$Date: 2004/06/18 14:24:42 $

Brief Introduction

Transfer Tool is a GUI program for transferring SQL schema and data from one JDBC source to another. Source and destination can be different database engines or different databases on the same server.

Transfer Tool works in two different modes. Direct transfer maintains a connection to both source and destination and performs the transfer. Dump and Restore mode is invoked once to transfer the data from the source to a text file (Dump), then again to transfer the data from the text file to the destination (Restore). With Dump and Restore, it is possible to make any changes to database object definitions and data prior to restoring it to the target.

Dump and Restore modes can be set via the command line with -d (--dump) or -r (--restore) options. Alternatively the Transfer Tool can be started in any of the three modes from the Database Manager's Tools menu.

The connection dialogue allows you to save the settings for the connection you are about to make. You can then access the connection in future sessions. These settings are shared with those from the Database Manager tool. See the appendix on Database Manager for details of the connection dialogue box.
We're happy to announce that. You can find the book's Contents at a Glance and an excerpt from the Introduction in this previous post. We've already heard from a couple of customers who had trouble locating the companion content for this book. The files are located here; you'll need to click on the Examples link on the left-hand side of the page. (And yes, we're actively working to make the companion files easier to find.)

In this post we offer an excerpt from this book's Chapter 3, "Exchange Environmental Considerations":

Chapter 3 Exchange Environmental Considerations

This chapter describes all the basic components surrounding Exchange Server 2010 that need to be considered to plan a solid Exchange implementation. These components provide the basis to build Exchange on a solid foundation and to identify potential issues. It provides a basis for other chapters in this book by describing some of the technologies that will be discussed later. For example, this chapter includes a discussion on namespace design as well as a review of certificate requirements, which are then taken to the next level in Chapter 4, "Client Access in Exchange 2010." Of particular importance when using this book is the "Planning Naming Conventions" section, which explains the names that are used throughout the entire book.

Evaluating Network Topology

Evaluating the network topology through which Exchange Server 2010 will communicate is crucial during the Delivery Phase, Step 2: Assess, as described in Chapter 2, "Exchange Deployment Projects." Often, making changes in the network infrastructure can take a considerable amount of time because the Exchange team isn't necessarily responsible for making changes to the network, and communication and negotiation are often required before network changes can be made, especially in large organizations that support heterogeneous operating systems.
Identifying any required changes and making sure that the execution of the change can occur without any difficulties early in the design process can save time later when you are implementing Exchange Server 2010. This section provides an overview of the network-related requirements for Exchange 2010.

Reviewing Current and Planned Network Topology

The first step is to collect all information about your internal network, the perimeter network, and its external connections as thoroughly as possible from a variety of sources. These sources include the following:

- Physical network topology Verify that TCP/IP is used everywhere, which Internet Protocol is used (IPv4 and/or IPv6), how IP addresses are allocated for servers, and that IP subnets are used according to location.
- Internal physical network connections or links This includes LAN and WAN links, routers, and so on.
- External physical network connections This includes the Internet, partner companies, and so on.
- Interconnection of physical network connections This includes hub-and-spoke, ring or star, and point-to-point.
- Physical network speed Divide between guaranteed bandwidth, available bandwidth, and latency for each identified network link.
- Network protection that might interfere This includes firewalls that protect physical links or network link encryption devices that reduce the link speed.
- Firewall port availability to both external and internal systems.
- Server name resolution used in locations or between locations (DNS/WINS name resolution).
- Defined namespaces in DNS This is described in the “Planning Namespace” section later in this chapter.
- Perimeter network servers Including any servers that are located in a perimeter network, especially any server that provides SMTP-relay functionality.
Be sure to identify any known changes that will occur to the network configuration during the interim between the planning phase and the deployment phase so that the impact of the change can be assessed just prior to deployment and the proper adjustments made.

NOTE In large organizations, gathering this information might be quite a time-consuming effort—you may have to meet with many disparate network teams to get a thorough understanding of the network-specific details.

If you want to evaluate a global network infrastructure that includes many sites or locations, make sure you understand the company structure, the businesses that Exchange will serve, and how these businesses are supplied with IT currently. Having these discussions will provide you with much insight into the current network topology and help identify any problems and potential issues that you should consider when planning the messaging design.

Domain Name System (DNS)

This section is about the technical foundation of the domain name system (DNS). It does not include any discussion about namespace planning. The aspects of namespace planning and disjoint namespaces or single-label domains are described in the “Planning Namespace” section later in this chapter.

DNS and Active Directory

Microsoft Windows uses the DNS standard as the primary name registration and resolution service for Active Directory. For that reason it is a basic requirement that all clients and servers must be able to reliably resolve DNS queries for a given resource in the appropriate namespace. DNS provides a hierarchically distributed and scalable database where hosts can automatically update their records. These dynamic records can be fully integrated into Active Directory when using Active Directory–integrated DNS zones.

NOTE In Exchange Server 2003 or earlier, the Windows Internet Name Service (WINS) was required to support multi-domain environments. This is no longer required for Exchange Server 2010.
The following list provides best practices for DNS settings when implementing Exchange Server 2010 in your Active Directory:

- Use the DNS Server service that is part of Windows Server. This provides you with features such as Dynamic Update and Active Directory–integrated DNS zones. For example, domain controllers register their network service types in DNS so that other computers in the forest can access them.
- If you cannot use the Windows DNS Server for Active Directory and Exchange, make sure the DNS server supports SRV resource records and allows dynamic updates of Locator DNS resource records (SRV and A records). If your company uses BIND, make sure you use BIND 8.x or later.
- Store all DNS zones as Active Directory–integrated in Active Directory to gain the benefit of having DNS and Active Directory replicated by a single mechanism. This prevents the need to use different tools for troubleshooting.
- Configure Dynamic Updates as Secure, thus only allowing authorized clients to register their host name and IP address.
- Only configure Forward Lookup Zones, which are required by Exchange 2010. You do not need to configure Reverse Lookup Zones because they are not used by Windows 2008 or Exchange 2010.

More information can be found in the whitepaper “DNS Requirements for Installing Active Directory” at.

Notes from the Field: DNS Dynamic Updates

John P Glynn, Principal Consultant, Microsoft Consulting Services, US/Central Region

Active Directory is a key dependency for Exchange; without it Exchange does not and will not function properly. Active Directory is based on the DNS service. Without DNS, many components of Active Directory, Exchange, and client interaction fail to function properly. When a domain controller is installed in a domain, a series of records is created. These DNS records contain service location records for Kerberos, LDAP, GC, site-specific information, and a domain record that is a unique GUID.
Exchange servers utilize these DNS records to locate authentication or other specific services. Exchange will use Active Directory site-specific service location records for services such as: locating the closest Global Catalog servers to utilize for name resolution, locating domain controllers to utilize for Exchange configuration information, and routing messages between remote Exchange servers. Exchange servers as well as workstations that run the Exchange management tools rely heavily on Kerberos for authentication. Therefore, it is equally important that the Exchange server A records are registered within DNS correctly as well.

As a best practice, implement DNS with dynamic updates enabled. I have been in a few environments where transient Exchange and client issues were tracked to missing or invalid SRV records. Some of the specific issues that I have seen include the following:

- Invalid host record for the Exchange server—the connection suffix of the server did not match the DNS record, causing Kerberos authentication failure.
- The domain GUID records for the domain were incorrectly entered under the _msdcs zone, causing improper identification of domain controllers for the Active Directory domain.
- Slowness issues resulting from missing site location records, causing Exchange to possibly grab a Global Catalog located at a distant site—thus communication needs to flow across WAN links. This might be because some or all of the following records are missing or incorrect in DNS: _ldap._tcp.sitename._sites.gc._msdcs.domain.com.

Most modern DNS implementations in use today support dynamic updates. As a best practice it is advisable to allow only secure updates, which prevents rogue systems from injecting invalid entries into your DNS zones. A few environments refused to globally enable dynamic updates on their zones. We were able to convince the team to allow only domain controllers to dynamically update their records.
Exchange server records were created manually. However, A records are familiar to DNS administrators and less likely to be incorrect. As with any manual process, they can be incorrectly created, so always double-check. If this is not possible, try to convince the DNS team to temporarily enable dynamic updates during the DCPROMO process and the subsequent reboot to allow the domain controllers to dynamically create/update all of the necessary records. Obviously this requires more process overhead, but in the long run it will save on issues, outages, and hours of troubleshooting caused by incorrectly configured DNS records.

Several tools are available to validate records and the functionality of DNS, such as DNSLint, DCDiag, and netdiag. Other standard tools include nslookup, ipconfig, and nltest.

DNS Records Used by Exchange 2010

DNS provides a number of critical functions for Exchange 2010. This section provides an overview of the most important records in DNS.

A Records

A records, or Host records, provide a host name to IP address mapping. Host records are required for each domain controller and other hosts that need to be accessible to Exchange servers or client computers. Host records use IPv4 (A records). Here is an example of an A record:

berlin-dc01.litware.com. IN A 10.10.0.10

SRV Records

MX Records

A Mail Exchanger (MX) record is a resource record that allows servers to locate other servers to deliver Internet e-mail using the Simple Mail Transfer Protocol (SMTP). If multiple MX records have the same preference value, DNS will round-robin between the SMTP servers. You also can specify a lower preference value for one of the MX records. All messages are routed through the SMTP server that has the lower-preference-value MX record, unless that server is not available. Here is an example of an MX record:

litware.com MX 10 fresno-ht01.na.litware.com

NOTE You don’t need MX records for Hub Transport servers that are involved in internal mail routing. That is only required for external SMTP routing—for example, to the Internet.
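The preference rules for MX records are easy to model: a sending server sorts the available MX records by preference and tries the numerically lowest first, falling back to the next one only if that server is unavailable. A minimal sketch of that selection logic (the host names are the example values from the record above plus an invented backup host):

```python
def pick_mx(records, unavailable=()):
    """Return MX hosts in the order a sending server should try them.

    records: list of (preference, host) tuples, as returned by a DNS query.
    unavailable: hosts known to be down; they are skipped.
    """
    # Lower preference value = higher priority, so an ascending sort
    # yields the delivery order.
    ordered = sorted(records, key=lambda r: r[0])
    return [host for pref, host in ordered if host not in unavailable]

records = [
    (20, "backup-ht01.na.litware.com"),
    (10, "fresno-ht01.na.litware.com"),
]

# Normally all mail goes through the lower-preference server...
print(pick_mx(records)[0])  # fresno-ht01.na.litware.com
# ...unless it is unavailable, in which case the next record is used.
print(pick_mx(records, unavailable={"fresno-ht01.na.litware.com"})[0])
```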
More information about MX records and how they are used for SMTP message routing can be found in Chapter 5, “Routing and Transport.”

SPF Records

Exchange Server 2010 uses Sender Policy Framework (SPF) records to support Sender ID spam filtering. If you want to use this feature, you need to configure the SPF records in DNS. This is described in more detail in Chapter 7, “Edge Transport and Messaging Security.”

Split DNS

Split DNS, or split-brain DNS, is about setting up separate DNS zones so that DNS requests that come from the Internet will resolve to different IP addresses than requests coming from your internal workstations or servers. In other words, as shown in Figure 3-1, if the Internet client resolves mail.litware.com, it will receive an IP address that is associated with an external firewall solution that is sitting in the perimeter network. The internal client will get an IP address associated with the internal Client Access server array.

The benefit of using split DNS is that it helps control client access. Internal clients use the internal systems instead of the external systems. In other words, internal users’ sessions aren’t handled by the firewall application and you do not expose internal IP addresses or host names to the Internet. You can also limit access to specific hosts that are part of the perimeter network or force users to take a specific communication route. For this reason it is a best practice to implement split DNS in every Exchange organization that has server roles exposed to the Internet.

Fixed IP Address vs. Dynamic IP Address

It’s important to know whether your company has an Internet provider that provides your company with fixed IP addresses or if you’re using dynamic IP addresses to access the Internet.
If your servers that have some relationship to external communication, such as Edge Transport servers, have fixed IP addresses and your DNS entries (MX or A records) are registered accordingly, you’re following the best-practices approach. However, a fixed IP address might be a cost issue, especially in small companies. Thus some companies might want to implement Exchange 2010 based on an Internet provider that only provides a dynamic IP address. If you’re in this situation, you should consider a Dynamic DNS service that lets you register your dynamic IP address to their DNS service. However, make sure the Dynamic DNS service includes the following:

- Your IP addresses should automatically register in DNS when the IP address changes. Your router and/or Dynamic DNS service provider need to support this.
- IP updates should be replicated in DNS in real time to make sure the change is known to the Internet immediately.
- For external SMTP servers to know how to send messages to your domain, the DNS record for your domain should include an MX record.
- The Dynamic DNS service should provide you with an SMTP relay host to send messages to the Internet. If you send messages directly, your server is quite likely to be detected as spam because of your changing IP addresses. Many SMTP servers consider dynamic IP addresses as not trustworthy and thus don’t accept messages from them.

If you consider these points, you’ll have no problem operating Exchange Server 2010 when using a dynamic IP address.

Internet Protocol (IPv4 and IPv6)

Internet Protocol Version 4 (IPv4) is commonly available and the basis for communication between any device on the Internet. The successor of IPv4 is called Internet Protocol Version 6 (IPv6), as defined in RFC 2460 in 1998. IPv6 was developed to correct many of the shortcomings of IPv4, such as the limited pool of available IP addresses and the lack of extensibility.
Because IPv6 addresses are 128 bits long (compared to IPv4 addresses, which are 32 bits long), there are enough IPv6 addresses available for every living insect, animal, and person on earth. Unfortunately, IPv6 is not an extension of IPv4 but a completely new protocol. Therefore, an IPv4 network can’t communicate directly with an IPv6 network and vice versa. Any network device, such as a router, needs to be able to understand IPv6; otherwise, IPv6 causes communication problems.

IPv6 for Windows

The client and server software needs to support IPv6 to use it. The following Microsoft server operating systems support IPv6:

- Windows Server 2003 (IPv4 is installed and enabled; IPv6 is not installed by default.)
- Windows Server 2008 (IPv4 and IPv6 are installed and enabled by default.)
- Windows Server 2008 R2 (IPv4 and IPv6 are installed and enabled by default.)

IMPORTANT Microsoft also recommends that you do not turn off IPv6 in a clustered environment because Windows Server 2008 R2 Clustering uses IPv6 for internal communication.

Not only does the server need to support IPv6, but also the client operating system. The following Microsoft client operating systems support IPv6:

- Windows XP Service Pack 1 (SP1) or later (IPv4 is installed and enabled; IPv6 is not installed by default.)
- Windows Vista (IPv4 and IPv6 are installed and enabled by default.)
- Windows 7 (IPv4 and IPv6 are installed and enabled by default.)

For more information about IPv6, see the IPv6 for Microsoft Windows FAQ at.
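The difference between the two address lengths is easy to quantify: a 32-bit space versus a 128-bit space. The arithmetic below shows why IPv6 exhaustion is not a practical concern:

```python
# Address-space sizes for 32-bit IPv4 vs. 128-bit IPv6 addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")  # about 3.4e38

# Every single IPv4 address could be given its own 96-bit subspace.
ratio_bits = (ipv6_addresses // ipv4_addresses).bit_length() - 1
print(f"IPv6 is 2**{ratio_bits} times larger")
```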
https://blogs.msdn.microsoft.com/microsoft_press/2010/07/13/new-book-microsoft-exchange-server-2010-best-practices/
Hi! I need some help with a VERY simple program I'm making. It's a calculator that adds two numbers and uses arrays. I've just been doing this for a few days, so don't flame on me. This is the code:

#include <iostream>

using namespace std;

int x[3]; // x[0] and x[1] hold the inputs, x[2] holds the sum

int main()
{
    // Prints instructions
    cout << "This is a simple calculator. All it does" << endl
         << "is add two numbers." << endl
         << "Please insert your first number, press Enter, " << endl
         << "then the second and Enter." << endl;

    // Asks for the first number
    cin >> x[0];

    // Displays the plus sign
    cout << " + ";

    // Asks for the second number
    cin >> x[1];

    // The sum has to be computed at run time, inside main
    x[2] = x[0] + x[1];

    // Displays the equal sign and the answer, then prints instructions
    cout << " = " << x[2] << endl
         << "Please press Enter to exit this program.";

    // Waits until user presses Enter
    cin.ignore();
    cin.get();

    // Closes the program
    return 0;
}

What's wrong? Could someone please help me? Happy holidays! -scary_freak_x
http://cboard.cprogramming.com/cplusplus-programming/7406-please-help-w-arrays.html
This article describes an abstract web part class developed for the SharePoint 2007 (WSS 3.0 or MOSS 2007) platform. The web part contains the logic to address some known conflicts between SharePoint and ASP.NET AJAX and is designed as the base class for all AJAX-enabled SharePoint web parts.

With SharePoint (WSS 3.0 or MOSS 2007) SP1, AJAX is officially supported. However, there is still a lot of manual configuration that needs to be performed for AJAX to work in the SharePoint environment. Basically, you can create an ASP.NET web application targeting .NET Framework 3.5 and merge the AJAX-related entries into the web.config of your SharePoint application. However, this is not enough. In the MSDN article titled Walkthrough: Creating a Basic ASP.NET AJAX-enabled Web Part, a technique is introduced to fix the conflict between SharePoint and the ASP.NET AJAX UpdatePanel. This article provides a good starting point for developing an AJAX-enabled SharePoint web part. However, the technique described in the article has its limitations as well. I will go over these limitations below and explain how they can be addressed.

ASP.NET AJAX requires one instance, and only one instance, of the ScriptManager on any page. There are several ways to include the ScriptManager in a SharePoint web part page. One thing you can do is modify the master page. Another common technique is to detect if an instance of the ScriptManager already exists, and create one on demand if it does not. I like the latter approach as it is more flexible than modifying the master page, which affects all pages regardless of whether AJAX is used in the page. After all, there are 3rd-party AJAX libraries that are not currently compatible with ASP.NET AJAX, and you may not have full control over all the contents that appear on a portal.
After reviewing the life cycle of an ASP.NET page one more time, I decided to place the logic that creates an instance of the ScriptManager inside the OnInit event, and that seems to work pretty well. Another issue comes with the "EnsurePanelFix" logic, as it, too, should not be registered more than once. By creating a common base class for AJAX-enabled web parts, and registering the script using the type of the base web part, the problem can be solved. This is especially good as not only does the base web part promote code reuse, it also fixes problems! The full code for the web part is included below:

using System;
using System.Collections.Generic;
using System.Text;
using System.Web;
using System.Web.Services;
using System.Web.UI;
using System.ComponentModel;
using System.Xml.Serialization;
using System.Web.UI.WebControls;

public abstract class AjaxBaseWebPart : System.Web.UI.WebControls.WebParts.WebPart
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        // Register the ScriptManager, creating one only if the page
        // does not already have an instance
        ScriptManager scriptManager = ScriptManager.GetCurrent(this.Page);
        if (scriptManager == null)
        {
            scriptManager = new ScriptManager();
            scriptManager.ID = "ScriptManager1";
            scriptManager.EnablePartialRendering = true;
            Controls.AddAt(0, scriptManager);
        }
    }

    protected override void CreateChildControls()
    {
        // Add fix according to the MSDN walkthrough
        EnsurePanelFix();
    }

    private void EnsurePanelFix()
    {
        if (this.Page.Form != null)
        {
            // Fix-up script from the MSDN walkthrough referenced above; it
            // keeps SharePoint's form onSubmit wrapper from breaking the
            // UpdatePanel's asynchronous postbacks
            string fixupScript = @"
                _spOriginalFormAction = document.forms[0].action;
                _spSuppressFormOnSubmitWrapper = true;";
            ScriptManager.RegisterStartupScript(this, typeof(AjaxBaseWebPart),
                "UpdatePanelFixup", fixupScript, true);
        }
    }
}

Despite the official support of AJAX in SP1 of SharePoint 2007, it still takes a lot of effort to start using AJAX in the SharePoint environment. Maybe the next service patch for Visual Studio 2008 will provide the same support for SharePoint development as the support we get for ASP.NET 3.5 AJAX? Let's keep our fingers crossed.
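The two idempotency tricks in the base class above — create the ScriptManager only if the page doesn't already have one, and key the startup script on the shared base type so it registers once no matter how many derived web parts are on the page — generalize beyond SharePoint. A language-neutral sketch in Python (the Page class and its members are invented stand-ins for the ASP.NET objects, not a real API):

```python
class Page:
    """Toy stand-in for an ASP.NET page: a control list plus a script registry."""
    def __init__(self):
        self.controls = []
        self.scripts = {}  # (owner_type, key) -> script, like RegisterStartupScript

def on_init(page):
    # Create the ScriptManager only if the page doesn't already have one.
    if "ScriptManager" not in page.controls:
        page.controls.insert(0, "ScriptManager")

def ensure_panel_fix(page, owner_type):
    # Keying on the shared base type makes repeated registrations a no-op.
    page.scripts.setdefault((owner_type, "UpdatePanelFixup"), "fixup-script")

page = Page()
for _ in range(3):  # three AJAX web parts on the same page
    on_init(page)
    ensure_panel_fix(page, "AjaxBaseWebPart")

# Despite three web parts, there is one ScriptManager and one fix-up script.
print(page.controls.count("ScriptManager"), len(page.scripts))  # 1 1
```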
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/KB/aspnet/BaseAjaxSharePointWebPart.aspx
Forum suggests Sencha Cmd 5.1.0.13 urls only. But Ext JS 5.1.0.74 requires 5.1.0.21. Support area does not have nightly Sencha Cmd urls.
I use Extjs with Rails. As such, I use RubyMine for all development (Javascript, Ruby, SQL, etc), I also run the application using Webrick which is...
Hi, I was looking at some of the samples distributed with ExtJS 5 and I saw that some of them actually have 'test' folder, with jasmine tests...
Hi, my application uses DirectJngine and DirectJngine generates additional configuration javascripts (ServerConfig.js) which has to be included...
I create ExtJs 5 test app with Sencha Cmd on my Asp.NET MVC application in Scripts/Ext folder: I have successfully run it with index.html file...
Sencha Cmd v5.0.3.324 - error on phonegap init Sencha Cmd version(s) tested: · Sencha Cmd 5.0.3.324 Operating System: · Windows 7 Home...
Hello, I use sencha compile to build my production files as follows: sencha compile -classpath=ext/src,app,app.js union -r -file app.js and...
I am building for phonegap and running into 2 issues that may be related. The first issue is that I cannot get the IOS simulator to run using the...
I am trying to create and use packages and something is not working for me. This is the folders structure .sencha app ... packagesPKG1 ......
I have an override in my overrides folder. In my app.js I have Ext.require(); app.json Using Architect 3.1.0.1943, windows 8 Testing version building pass correctly. But production version cause error: C2009: YUI Parse Error...
Hi, We have an application namespace of "Xyz.abc". This works fine with the build processes and compiler. However it fails with the upgrade...
I'm currently experiencing a problem when trying to run a 'Package' build of my app built in Sencha Architect. I've included various javascript files...
Hello folks, the title says it already. Files imported like this aren't watched at all :( // import the ios theme as a basis @import...
Hello, we've been building our app using sencha cmd 4.0.4.84 for months. Suddenly, our builds no longer work. Did not find a system...
Sencha Cmd version(s) tested: Sencha Cmd 5.0.1.231 Sencha Cmd 5.0.2.270 Sencha Cmd 5.0.3.324 Operating System: Mac OS X 10.10 Framework...
Hi, Same issue persists in 5.1.0.13 with CMD 5.0.3.324 I have upgraded and the same runtime failure occurs. see -...
I feel that it seems opposite description in your document. In...
I've already asked this question on deftjs github but with no luck, let's try here. When I build a sencha touch 2.4.1 app using deftjs 0.9.1 with...
REQUIRED INFORMATION Sencha Cmd version(s) tested: · Sencha Cmd 5.0.3.324 Operating System: · OSX 10.9.5 ( Macbook pro )...
http://www.sencha.com/forum/forumdisplay.php?8-Sencha-Cmd/page9&order=desc
This tutorial will show you how to set up your own CAN Local Area Network using the CAN Bus shield! The CAN bus protocol is far superior to USART, SPI, and I2C because it uses a twisted pair to communicate between devices, just like the Ethernet port on the back of your computer. This means it rejects noise very well and is great for electrically noisy environments like cars, planes, and even homes when data has to be transmitted long distances.

For this tutorial, we will use two Arduino Uno R3's, paired with the CAN bus shields. We will also need a few LEDs and Ethernet cables to connect the boards together. The shields come with the headers needed to connect to the Arduino UNO, but do not come with the RJ-45 connectors you will need for this example. The mating connector should be available shortly on our website. The shield can also be used to connect to an existing CAN network, like the one found in your modern cars, but we will not be going over how to do that in this tutorial.

Features of a CAN Network

Before we begin, here are some of the key features of CAN.

1) Each node is able to send and receive messages, but not simultaneously.
2) A message consists primarily of an ID (identifier, in our code this is My_ID), which represents the priority of the message, and up to eight data bytes.
3) It is transmitted serially onto the bus and is sensed by all nodes.
4) If two nodes start transmitting at the same time, the conflict is resolved by message priority (known as Bus Arbitration). Messages with numerically smaller values of IDs have higher priority and are transmitted first.

Setup:

Alright, off to the fun part! Let's put everything together. We will start by setting two solder jumpers on both boards.

POE:

The first thing we will do is solder the jumper labeled POE on the board, which stands for Power Over Ethernet. This will allow us to power only one board (either one), while the second one will receive power over the Ethernet cable. Without these jumpers enabled, you will need to provide separate power to both Arduino boards.
Using a soldering iron, you have to solder the two jumper pads together. The location of this jumper, as well as the Termination jumper you will need in the next step, is shown here circled in orange.

Termination:

The next step is to solder the termination jumper, which is required at the end points of the network. Since both of our boards are going to be end nodes, they both need to have this jumper soldered to follow the CAN bus specifications. If we had a third board in the middle of the two, we would not solder this termination jumper for that board. Solder this jumper the same way you did the POE jumper.

Connectors:

In this example we will simply solder one RJ-45 connector per board. It does not matter which one of the two spots you use, as they both are identical. Start by soldering the Arduino headers, since soldering the RJ-45 connector first would make it harder to keep the headers straight. Once you have finished soldering the headers, proceed to the RJ-45 connector; the finished board mounted on the Arduino UNO should look something like this.

Programming:

The next step is programming the devices and wiring a couple of LEDs. The first thing you need to do is have the Arduino IDE installed on your computer and then add the CAN Bus library to it. The library has some examples built in, two of which will be used in this tutorial. The library handles all the difficult register reads and writes to the Microchip 2515 IC over SPI. All the user has to do is assemble packets to send over the bus as well as read packets on the bus.

CAN TX:

The CAN TX example is used in conjunction with the CAN RX code to fade an LED connected to Pin 9 of the Arduino UNO on the receiver side. The code is pretty simple; the UNO is calculating a fade value every 10ms to be sent over the CAN bus. Each packet transmitted over the CAN bus contains a Device ID, as well as 8 bytes of data.
We will use the first data byte for the address of the device, and the second byte for the LED brightness control of that device.

Can TX Code:

#include <Canbus.h>

// This is the device's ID. The lower the number, the higher its priority on the
// CAN bus. ID 0x0000 would be the highest priority. (Can't have two with the same ID)
unsigned int MY_ID = 0x0756;

// We won't use this for this example, but if you need to send messages back to
// the Master, we would need this:
// unsigned char MY_PID = 1; // This is the ID we will use to check if the message was for this device.

int brightness = 0;       // how bright the LED is
int fadeAmount = 5;       // how many points to fade the LED by
unsigned char RX_Buff[8];

void setup()
{
  Serial.begin(9600);
  Serial.println("CAN TX"); /* For debug use */
  delay(1000);
  if (Canbus.init(CANSPEED_500)) /* Initialise MCP2515 CAN controller at the specified speed */
    Serial.println("CAN Init ok");
  else
    Serial.println("Can't init CAN");
  delay(1000);
}

void loop()
{
  // This is just a basic fading routine
  brightness = brightness + fadeAmount;
  if (brightness == 0 || brightness == 255)
  {
    fadeAmount = -fadeAmount;
  }

  RX_Buff[0] = 10;         // We want this message to be picked up by the device with a PID of 10
  RX_Buff[1] = brightness; // This is the brightness level we want our LED set to on the other device

  Canbus.message_tx(MY_ID, RX_Buff); // Send the message on the CAN bus to be picked up by the other devices
  delay(10);
}

The MY_ID variable is used to store the ID of this device; it basically sets the priority of this device. Any device on the bus with an address lower than this device would have bus priority. If two devices tried to transmit a packet at the same time, the one with the lower MY_ID would "win", and the other would have to retry. We just chose a random 16-bit (non-negative) number of 0x0756 to be its address. The setup routine is used to initialize the CAN bus shield, by calling Canbus.init(speed).
Once we get the "CAN Init ok" we know that our shield is functioning properly. The second part of this code is simply calculating a fade value as shown in the "Fade" example that comes with the Arduino IDE. The difference is that we are now trying to control an LED connected to a different Arduino UNO. This is where the CAN network comes in handy! Once the fade value is calculated, we assemble a packet by setting the first byte of RX_Buff to a device's PID (10 in this example; we will see why in the RX code), and the second byte to hold the brightness level. Then we use Canbus.message_tx to put the message on the CAN bus. We pass the buffer to the function, along with MY_ID, so the other devices on the bus can see who the message came from.

CAN RX:

Now the RX side of things. The second UNO is simply sitting in an infinite loop waiting to receive a message over the CAN bus. The TX is sending an updated brightness value every 10ms, so there is a new message on the bus 100 times a second!

Can RX Code:

#include <Canbus.h>

// This is the device's ID. The lower the number, the higher its priority on the
// CAN bus. ID 0x0000 would be the highest priority. (Can't have two with the same ID)
unsigned int MY_ID = 0x0757;

// This is the ID we will use to check if the message was for this device. If you
// have more than one UNO with the same PID, they will all accept the message.
unsigned char MY_PID = 10;

int LED = 9;              // Our LED pin
unsigned char RX_Buff[8]; // Buffer to store the incoming data

void setup()
{
  Serial.begin(9600); // For debug use
  Serial.println("CAN RX");
  delay(1000);
  if (Canbus.init(CANSPEED_500)) /* Initialise MCP2515 CAN controller at the specified speed */
    Serial.println("CAN Init ok");
  else
    Serial.println("Can't init CAN");
  delay(1000);
}

void loop()
{
  unsigned int ID = Canbus.message_rx(RX_Buff); // Check to see if we have a message on the bus
  if (RX_Buff[0] == MY_PID) // If we do, check to see if the PID matches this device.
    // We are using location (0) to transmit the PID and (1) for the LED's duty cycle
    analogWrite(LED, RX_Buff[1]); // If it does, check what the LED's duty cycle should be set to, and set it! We are done!
}

On this side, you will notice the MY_ID is different. We need a different MY_ID if we plan on transmitting from this node on the bus. Since this example is only going to receive, it is there just for notation's sake. The next value is MY_PID; this is the ID which the device will respond to. The nice thing about this is that we can have many nodes with the same MY_PID (say we need two devices to do the same thing when a message is received). We have to be careful not to have two devices trying to transmit at the same time, but in this case it does not matter. We have randomly picked 10, but it can be anything between 0 and 255. This is why we load RX_Buff[0] with 10 on the TX side.

The setup loop is simply setting up our CAN bus shield, as well as the Serial debugger so we can see what's going on; nothing special there. Now the loop: this is where we sit waiting for a message. We call Canbus.message_rx and pass an RX_Buff array to the function. It will fill the buffer with the data received on the bus, as well as return the ID of the device the message came from. In this case, all we have is one other device, so we can ignore the ID. Once the buffer is full, we use an if statement to test for the matching MY_PID. If the ID matches, it sets the brightness of the LED connected to Pin 9. That's it!

Connect it all up:

So now that you understand how the code works, let's connect everything up and give it a shot! Go ahead and program one UNO (either one) with the TX code, and the other with the RX code. Once you upload the code for each one, open up the serial communicator and make sure you're getting the "CAN Init ok" message. If you do on both sides, you are ready to disconnect from your computer and give it a shot!
Simply connect both boards with a straight-through Ethernet cable (not like your Ethernet shield, which connects to your home network), and attach the LED's anode to Pin 9 of the receiving UNO and the cathode lead to GND. Lastly, you will need a power source. The easiest way to power one of the two UNOs is with a DC wall adapter. The complete setup should look like this:

And there you have it, one UNO controlling the other UNO's LED over a CAN bus! The UNO on the left is controlling the LED of the UNO on the right. And just to show you how awesome this really is, here is a picture of a third UNO: we simply soldered up both RJ-45 connectors to make a bridge, loaded the CAN RX example on it, and threw it in the loop! Both of the receivers are programmed with the same PID, so they are both responding to the message by setting the LED's intensity to the same value. We could have just as easily given this third UNO a new PID and sent a different message to it.

In this example we are just controlling one LED, but in your car a CAN bus links the engine and transmission together, as well as AC units and gauges; just about everything with a microcontroller in your car is on its CAN network. The possibilities here are endless! I hope you enjoyed this tutorial and have learned a thing or two about the CAN bus. If you have any questions about this tutorial, don't hesitate to post a comment, shoot us an email, or post it in our forum!
http://www.jayconsystems.com/tutorials/canbus
System Alerting and Monitoring Guide

System Alerting and Monitoring (SAM) is a cluster monitoring solution for InterSystems IRIS® data platform version 2020.1 and later. Whether your application runs on a local mirrored pair or in the cloud with multiple application and data servers, you can use SAM to monitor your application. Group your InterSystems IRIS instances into one or more clusters, then observe a high-level summary of your application performance. Use the SAM web portal to view real-time performance metrics for your clusters and instances. Keep an eye out for the alerts SAM sends when a metric crosses a user-defined threshold. SAM packages all your monitoring needs behind an easy-to-use web interface.

The core System Alerting and Monitoring application is built on an InterSystems IRIS instance called the SAM Manager. Using cloud-native software, the SAM Manager collects and stores performance data and alerts from the InterSystems IRIS instances you care about. You can also interface with SAM using its REST API. For details, see the System Alerting and Monitoring API Reference.

Deploying SAM

This section covers the following topics:

SAM Component Breakdown

System Alerting and Monitoring is made up of multiple open-source technologies, augmenting their features with enterprise resiliency. The SAM application consists of the following containers:

Alertmanager v0.20.0
Grafana v6.7.1
Nginx 1.17.9
Prometheus v2.17.1
SAM Manager 1.0.0.115

Each container performs a different role in the SAM application. Prometheus, an efficient cloud-native monitoring tool, collects time series data at a regular interval from all your target InterSystems IRIS instances. The SAM Manager stores these metrics, enabling high availability and scalability features not present in default Prometheus databases. Grafana, a world-class metrics visualization tool, presents these metrics in graphs that make it easy to examine the state of your application.
The Alertmanager aggregates InterSystems IRIS alerts, which are pre-configured on the target instances, and Prometheus alerts, which you configure from the SAM web application.

These containers communicate over the Nginx web server, which is set to port 8080 by default. When trying to access any component of SAM, such as Grafana or the SAM Manager, do so using the Nginx port. Nginx also serves the SAM web application, which provides a graphical interface for configuring SAM and monitoring your instances.

Docker Compose makes it possible to run all these containers simultaneously. When you run SAM, Docker Compose starts each of these containers, which are listed in the SAM docker-compose.yml file. For more information about the benefits of containerized applications, see Why Containers? in Running InterSystems Products in Containers.

First-Time Preparations

The first time you deploy System Alerting and Monitoring, perform the following steps to prepare your machine:

For instructions, see “Install Docker Compose” in the Docker documentation ( The following versions are required:

Docker Engine version 19.03.098 or higher
Docker Compose version 1.25 or higher

InterSystems provides several files that define the container configuration necessary for SAM. These files include the following:

You can obtain these files from either:

the SAM GitHub Repository (
the WRC software distribution website, in the Components section.

If you obtain the distribution files as a tarball, use the following command to uncompress it while preserving permissions:

tar zpxvf sam-<version-number>-unix.tar.gz

Replace <version-number> with the version of the SAM tarball you have.

SAM comes with a free, built-in Community Edition license with the capacity to monitor approximately 40 instances. If using the Community Edition license, you may skip this step. The Community Edition license for SAM limits its container to using eight cores.
To specify a different SAM license, you must edit the docker-compose.yml file obtained in the previous step. To do so:

Open docker-compose.yml in a text editor.
Locate the iris service in the docker-compose.yml file.
Add a new command directly beneath the image line that specifies the desired license key to use. For example, with a key named iris.key located in the SAM /config directory, add:

[...]
image: intersystems/sam:1.0.0.100
command: ["--key", "/config/iris.key"]
init: true
[...]

By default, SAM deploys on port 8080 of the host system. On Linux machines, you can check whether port 8080 is available by using the netcat (nc) command:

$ nc -zv localhost 8080
Connection to localhost 8080 port [tcp/http-alt] succeeded!

If necessary, you can change the host port mapping in the nginx section of the docker-compose.yml file. To do so:

Open docker-compose.yml in a text editor.
Locate the nginx service in the docker-compose.yml file.
In the ports section, enter the desired port on your host machine. For example, if you would like to access SAM on port 9999, edit the section to look like:

[...]
ports:
  - 9999:8080
[...]

For more information, see the “ports” section of the Docker Compose File Reference (

Starting and Stopping SAM

InterSystems provides two scripts that make it easy to start or stop System Alerting and Monitoring.

To start SAM:

Using the cd command in the command line, navigate to the directory containing the SAM docker-compose.yml file, which was acquired during initial setup.
Next, run the start.sh script:

./start.sh

This runs a Docker Compose command to start the SAM application. Optionally, you can use the docker ps command to confirm that all the containers are running. The output should look similar to the following:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2aaa06f06a9c nginx:1.17.9-alpine "nginx -g 'daemon of..."
About an hour ago Up About an hour 80/tcp, 0.0.0.0:8080->8080/tcp sam_nginx_1
0e2b30fcb376 grafana/grafana:6.7.1 "/run.sh" About an hour ago Up About an hour 3000/tcp sam_grafana_1
d2c825f9d220 prom/alertmanager:v0.20.0 "/bin/alertmanager -..." About an hour ago Up About an hour 9093/tcp sam_alertmanager_1
4851893bc369 prom/prometheus:v2.17.1 "/bin/prometheus --w..." About an hour ago Up About an hour 9090/tcp sam_prometheus_1
61120be391df intersystems/sam:1.0.0.83 "/iris-main" About an hour ago Up About an hour (healthy) 2188/tcp, 51773/tcp, 52773/tcp, 53773/tcp, 54773/tcp sam_iris_1

Once SAM is up and running, you can access it from a web browser or using the SAM API.

To stop SAM:

Using the cd command in the command line, navigate to the directory containing the SAM docker-compose.yml file.
Next, run the stop.sh script:

./stop.sh

This runs a Docker Compose command to stop the SAM application. Optionally, you can use the docker ps command to confirm that all the containers have stopped. Use the -a flag to view all containers, even those that are not running:

docker ps -a

Accessing SAM from a Web Browser

When System Alerting and Monitoring is running, you can access it from a web browser at the following address: where <sam-domain-name> is the DNS name or IP address of the system SAM is running on, and <port> is the configured Nginx port (8080 by default). You may want to bookmark this address.

When accessing SAM, you must log in using a valid User Name and Password. Like InterSystems IRIS, SAM includes several predefined accounts with the default password SYS. Choose any of these accounts with login permissions (such as Admin or SuperUser) and log in using the default password SYS. The first time you sign in with one of the predefined accounts, SAM prompts you to enter a new password. To secure the SAM application, be sure to set a new password for all the predefined accounts.
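If the SAM page does not load, you can confirm the Nginx port is reachable from Python as well (equivalent to the nc probe shown earlier). This helper is a convenience sketch, not part of SAM:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False
```

For example, `port_is_open("localhost", 8080)` should return True while the SAM Nginx container is running on the default port.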
For a list of all the predefined accounts, see Predefined User Accounts in the “Users” chapter of the Security Administration Guide.

Advanced Configuration

System Alerting and Monitoring is designed to begin monitoring InterSystems IRIS instances with minimal setup. Each SAM component is configured by the docker-compose.yml file, reducing the necessary setup work. While the default SAM configuration is usually sufficient, it is possible to adjust the docker-compose.yml file settings. This section describes configuration changes you may want to consider.

SAM includes two databases: the Prometheus database (used for short-term metrics storage) and an InterSystems IRIS database (used for longer-term storage). The Prometheus database retains data for two hours in a cache optimized for rapid querying, while the InterSystems IRIS database retains the data for long-term analysis. If you constantly run queries for data older than two hours, increasing the Prometheus retention time may increase performance. Adjust this setting by changing the “--storage.tsdb.retention.time” flag in the docker-compose.yml file. For more information, see “Operational aspects” in the Prometheus documentation (

Setting Up SAM

Once System Alerting and Monitoring is deployed, you can specify the InterSystems IRIS instances you want to monitor, which must be grouped into SAM clusters. You can also perform additional setup actions in order to maximize the utility and performance of SAM. These actions can be performed as necessary to establish your desired System Alerting and Monitoring configuration:

Creating a New Cluster

Within System Alerting and Monitoring, you must group InterSystems IRIS instances into clusters, which are unique sets of instances. Once you have created a cluster, you can view the alerts and statuses of all instances within the cluster.

To create a new cluster:

Navigate to the main System Alerting and Monitoring page.
Clicking System Alerting & Monitoring from anywhere within the SAM application navigates to the main page.

Open the Add New Cluster dialog. To do this, click the + New Cluster button (if this is the first cluster, click Create Your First Cluster instead).

Fill in the following information about your cluster:

Cluster name — The name can be any combination of numbers and letters. Cluster names must be unique.

Note: Cluster names are converted to lowercase before they are saved. This means it is not possible to have two clusters with the same name but different casing.

Description (optional)

Click Add Cluster to create the cluster.

After creating a cluster, SAM immediately displays the Edit Cluster dialog, allowing you to continue to define the cluster.

Adding an Instance to SAM

System Alerting and Monitoring can collect metrics and alerts from any InterSystems IRIS instance version 2020.1 or higher. To add an instance to SAM, first prepare the instance and then add it to a SAM cluster.

There is no specific limit to the number of instances SAM can monitor, but this number is constrained by the available memory in the SAM database. The maximum database size for the SAM Community Edition is 10GB, which is enough to support monitoring of about 40 instances with the default settings. You can monitor the SAM Manager to ensure the SAM database does not run out of space. If you need to monitor more instances from SAM, consider using a different license.

Preparing the Instance for Monitoring

In order for SAM to monitor an instance, the following must be true:

The InterSystems IRIS instance is version 2020.1 or higher
The /api/monitor endpoint allows unauthenticated access
Necessary monitoring tools are enabled for the instance
The instance has a unique IP and Port combination

The /api/monitor endpoint allows unauthenticated access

Each InterSystems IRIS instance version 2020.1 or higher contains the /api/monitor web application.
In order for SAM to collect metrics and alerts from the /api/monitor endpoint, the endpoint must allow for unauthenticated access. To make sure this is the case:

Open the Management Portal of the InterSystems IRIS instance you would like to add to SAM.
Go to the Web Applications page (System Administration > Security > Applications > Web Applications).
Select /api/monitor to open the Edit Web Application page.
In the Security Settings section, select Unauthenticated.

For more information about the /api/monitor web application, see Monitoring InterSystems IRIS Using REST API in the Monitoring Guide.

Necessary monitoring tools are enabled for the instance

InterSystems IRIS instances have built-in monitoring tools, and SAM uses these tools to collect information about the instance. Check that the following tools are enabled on the InterSystems IRIS instance:

System Monitor, which SAM uses when determining the instance state. By default, System Monitor is enabled.
Log Monitor, which enables SAM to see alerts from the instance. By default, Log Monitor is enabled and writes alerts to the alerts log.

Important: SAM is only able to view instance alerts if Log Monitor writes them to the alerts log. If Log Monitor sends alerts by email instead of by writing them to the alerts log, SAM cannot view alerts for the instance.

The instance has a unique IP and Port combination

In order for SAM to monitor an InterSystems IRIS instance, the instance must be uniquely identifiable by an IP (or domain name) and Port. SAM does not support connecting to an InterSystems IRIS instance with a URL prefix; URL prefixes are most common when multiple InterSystems IRIS instances are located on the same system. For more information about configuring multiple InterSystems IRIS instances on the same system and URL prefixes, see the Connecting to Remote Servers topic in the System Administration Guide.
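As background to the /api/monitor requirement above: SAM scrapes that endpoint, which serves metrics in the Prometheus exposition format (metric name, optional labels, value). The following simplified parser is a sketch to show what that format looks like — the sample payload is invented for illustration, though the iris_* metric names follow the naming convention used elsewhere in this guide:

```python
def parse_exposition(text):
    """Parse simple Prometheus exposition lines into {metric_with_labels: value}.

    Simplified sketch: ignores comment/HELP/TYPE lines and assumes no spaces
    inside label values.
    """
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        metric, value = line.rsplit(" ", 1)
        samples[metric] = float(value)
    return samples

# Hypothetical scrape output from /api/monitor (illustrative values only).
payload = """\
# TYPE iris_cpu_usage gauge
iris_cpu_usage 12
iris_license_percent_used 35.5
iris_db_size_mb{id="USER"} 11.25
"""
metrics = parse_exposition(payload)
assert metrics["iris_cpu_usage"] == 12
assert metrics['iris_db_size_mb{id="USER"}'] == 11.25
```

Seeing output in this shape when you request the endpoint without credentials is a quick way to confirm unauthenticated access is working.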
Adding the Instance to a Cluster

To add an InterSystems IRIS instance to a SAM cluster, do the following:

Ensure that the instance has been prepared to be monitored by SAM.
Navigate to the main System Alerting and Monitoring page. Clicking System Alerting & Monitoring from anywhere within the SAM application navigates to the main page.
Select the cluster to which you would like to add the instance. If there are no clusters, you must create one.
Click Edit Cluster to open the Edit Cluster dialog.
Click the +New button at the top of the Instances table.

Note: You may select an existing instance from this table to edit or delete it.

Fill in the following fields:

IP – The fully qualified domain name or IP address of the machine hosting the target InterSystems IRIS instance. InterSystems recommends using domain names whenever possible, as IP addresses may change.

Note: If the instance you are monitoring is located on the same system as SAM, you may enter host.docker.internal in this field.

Port – The web server port of the target InterSystems IRIS instance.
Cluster – The cluster to add the target instance to. When first adding an instance, this defaults to the current cluster.
Instance name and Description – Optional text descriptors to help you identify the instance.

Click Add Instance to begin monitoring the instance with SAM.

Defining Cluster Alert Rules

System Alerting and Monitoring automatically collects InterSystems IRIS alerts from the instances it monitors. If you want to specify additional events that generate alerts, you can do so by defining Prometheus alert rules. An alert displays information about the instance that generated it; the time the alert fired; and the alert name, message, and severity.

A Prometheus alert rule indicates when SAM should fire an alert. Alert rules are defined on a cluster level, but evaluated distinctly for each instance within the cluster.
This means instances within a cluster share the same alert rules, but generate alerts individually.

To create a new alert rule for a cluster, do the following:

Navigate to the main System Alerting and Monitoring page. Clicking System Alerting & Monitoring from anywhere within the SAM application navigates to the main page.
Select the cluster for which you would like to create an alert rule. If there are no clusters, you must create one.
Click Edit Cluster to open the Edit Cluster dialog.
Click the +New button at the top of the Alert Rules table.

Note: You may select an existing alert rule from this table to edit or delete it.

Fill in the following fields:

Alert rule name – Any name for the alert rule. It is often useful to include the metric the rule uses in the name.
Alert severity – Either Critical or Warning. The severity of the alert determines the impact it will have on the instance state; see Understanding Instance State for more details.

Note: You can give multiple alert rules the same name, but different severities. If both rules fire at the same time, SAM suppresses the rule with lower severity. This behavior reduces duplicate alerts firing for the same event.

Alert expression – An expression that defines when the alert fires, written in Prometheus Query Language. The Alert Expression Syntax section below contains an overview of the Prometheus Query Language syntax and several examples.
Alert message – A text description of the alert rule, which SAM displays when the alert fires.

Note: The Alert message supports the $value variable, which contains the evaluated value of an alert expression. The syntax is: {{ $value }}
The $value variable only holds one value; as such, you should not use it for alert rules that evaluate to multiple values (such as a rule that uses the and operator).

Click Add Alert Rule. SAM validates the alert expression and then adds the alert rule to the cluster.
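Conceptually, a threshold rule is evaluated against the latest sample for every instance in the cluster and fires for each instance where the comparison holds. The following sketch illustrates that idea in Python; it is not SAM's actual implementation (SAM delegates evaluation to Prometheus), and the sample data is invented:

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def evaluate_threshold(samples, metric, op, threshold):
    """Return the instances whose latest value for `metric` trips the rule."""
    return [inst for inst, values in samples.items()
            if metric in values and OPS[op](values[metric], threshold)]

# Hypothetical latest samples, keyed by instance "IP:Port".
samples = {
    "10.0.0.24:52773": {"iris_cpu_usage": 95},
    "10.0.0.25:52773": {"iris_cpu_usage": 40},
}

# Analogous to the rule expression: iris_cpu_usage > 90
firing = evaluate_threshold(samples, "iris_cpu_usage", ">", 90)
assert firing == ["10.0.0.24:52773"]
```

This is why a single rule defined on a cluster can produce separate alerts for different instances at the same time.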
Below is an example of an alert rule:

Alert Expression Syntax

To write an alert expression, you must use Prometheus Query Language (PromQL). This section provides an overview of how to write alert expressions and some examples. If you want to learn how to write advanced alert expressions, read about the full capabilities of PromQL on the “Querying Prometheus” page in the Prometheus Documentation (

A simple alert expression compares a metric to a value. For example:

# Greater than 80 percent of InterSystems IRIS licenses are in use:
iris_license_percent_used{cluster="production"}>80

# There are less than 5 active InterSystems IRIS processes:
iris_process_count{cluster="test"}<5

# The disk storing the MYDATA database is over 75% full:
iris_disk_percent_full{cluster="test",id="MYDATA"}>75

# Same as above, but specifying directory instead of database name:
iris_disk_percent_full{cluster="production",dir="/IRIS/mgr/MYDATA"}>75

The basic format for an alert expression based on a single metric is:

metric_name{cluster="cluster_name",label(s)}>value

Alert Examples

Below are several examples of alert expressions that demonstrate some of the capabilities of PromQL.

The simplest alert expressions directly compare a single metric to a value. The following alert expression evaluates iris_cpu_usage, which measures the total percent of CPU in use on the machine running InterSystems IRIS. If the value of iris_cpu_usage exceeds 90 for any InterSystems IRIS instance in the test cluster, the alert fires.

iris_cpu_usage{cluster="test"}>90

PromQL supports the following arithmetic operators, ordered by precedence:

^ (exponentiation)
* (multiplication), / (division), % (modulo)
+ (addition), - (subtraction)

Arithmetic operators are particularly useful when writing an alert expression that contains two or more metrics. The following expression is triggered when the USER database in the test cluster is greater than 90 percent full.
The expression calculates the percentage by dividing the database size (iris_db_size_mb) by the database maximum size (iris_db_max_size_mb).

(iris_db_size_mb{cluster="test",id="USER"}/iris_db_max_size_mb{cluster="test",id="USER"})*100>90

PromQL supports logical operators for writing more complex rules. When using the or operator, the expression evaluates two conditions and fires if either is true. One use for the or operator is to check whether a metric falls outside of a certain range. The following alert expression is triggered when either of the following conditions is true:

There are greater than 20 active ECP connections in the production cluster.
There is less than one active ECP connection in the production cluster.

iris_ecp_conn{cluster="production"}<1 or iris_ecp_conn{cluster="production"}>20

PromQL also supports the and operator. When using the and operator, the expression evaluates two conditions and fires if both are true. The following example shows an alert rule that fires when both conditions are true:

There are unread alerts in the test cluster.
The system health state of an instance in the test cluster is something other than 0.

iris_system_alerts_new{cluster="test"}>=1 and iris_system_monitor_health_state{cluster="test"}!=0

Adjusting Configuration Settings

The main System Alerting and Monitoring page contains a gear icon, located near the top of the screen. Click this icon to access the Configuration Settings dialog. From this dialog, you can set the number of days (between 1 and 30) for SAM to store alert and metric data. The Advanced Configuration section describes how to change other SAM settings.

Tuning the SAM Manager

The SAM Manager is the InterSystems IRIS instance that powers the System Alerting and Monitoring application. You can open the SAM Manager from a web browser using the following address: where <sam-domain-name> is the DNS name or IP address of the system SAM is running on, and <port> is the configured Nginx port (8080 by default).
The SAM Manager should not be used to develop or run any application; it is strictly for use by SAM. This section describes the appropriate uses of and interactions with the SAM Manager. For a general-purpose InterSystems IRIS instance, see the InterSystems IRIS Community Edition.

You can do the following actions with the SAM Manager:

Adjusting Startup Settings

The SAM Manager initially allocates memory on startup as follows:

2,000 MB of 8KB blocks for the database cache
300 MB for the routines cache

This allocation should be sufficient when monitoring a modest number (30 or fewer) of InterSystems IRIS instances. If you are monitoring a large number of instances, or find that the SAM Manager is regularly using the full amount of allocated memory, you can increase these limits. For details on adjusting these settings, see the Allocating Memory to the Database and Routine Caches topic in the System Administration Guide.

Clearing the SAM Database

System Alerting and Monitoring Community Edition has a maximum database size limit of 10 GB. If this limit is met, SAM may exhibit unexpected behavior, and it becomes necessary to clear the database. In the SQL page of the SAM Manager (System Explorer > SQL), enter the following command to delete all SAM metric data:

DELETE FROM %SAM.PrometheusSample

To prevent the SAM database from filling up again, consider using a different license or lowering the number of days that SAM stores metrics (as described in Adjusting Configuration Settings).

Monitoring the SAM Manager

It is possible to use System Alerting and Monitoring to monitor the SAM Manager, as the SAM Manager is itself an InterSystems IRIS instance. This allows you to keep track of whether the SAM database is at risk of filling up, and make sure the configured cache sizes are sufficient for SAM operations.
Monitoring the SAM Manager is similar to monitoring any other instance, as described in the Adding the Instance to a Cluster section, with the following difference:

For the IP and Port fields, specify the fully qualified DNS name and port (8080 by default) where SAM runs. You can see these values in the address bar of your browser when accessing SAM. For example, if the URL for SAM is:

Specify <sam-domain-name> in the IP field, and <port> in the Port field. It does not work to specify localhost in the IP field; you must enter a fully qualified DNS name.

Adding Alert Handlers

You can create alert handlers that specify additional actions for System Alerting and Monitoring to perform when an alert fires, such as sending a text or email. Setting up an alert handler is a two-step process:

Writing the Alert Handler

To create an alert handler, you must create a class using an ObjectScript IDE. Connect this IDE to an InterSystems IRIS instance that is not part of SAM. You cannot use the SAM Manager to create the alert handler, as SAM is not a development platform. Instead, you must connect the IDE to a different InterSystems IRIS instance (such as the InterSystems IRIS Community Edition), and later import the alert handler into the SAM Manager.

After setting up the IDE, create a class with the following characteristics:

The class extends the %SAM.AbstractAlertsHandler class.
The class implements the HandleAlerts() class method. Within this method, specify the desired behavior when an alert fires.

When SAM detects a new alert (or multiple new alerts), SAM calls the HandleAlerts() method of all alert handlers. The HandleAlerts() method receives a %DynamicArray packet of alerts with the following format:

[
  {
    "labels":{
      "alertname":"High CPU Usage",
      "cluster":"1",
      "instance":"10.0.0.24:9092",
      "job":"SAM",
      "severity":"critical"
    },
    "annotations":{
      "description":"CPU usage exceeded the 95% threshold."
}, "ts": "2020-04-17 18:07:42.536" }, { "labels":{ "alertname":"iris_system_alert", "cluster":"1", "instance":"10.0.0.24:9092", "job":"SAM", "severity":"critical" }, "annotations":{ "description":"Previous system shutdown was abnormal, system forced down or crashed" }, "ts":"2020-04-17 18:07:36.926" } ] Alerts generated by InterSystems IRIS are all named iris_system_alert. Below is an example of an alert handler class. This example writes a message to the messages log (or Console Log) whenever an alert fires: /// An example Alert Handler class, which writes messages to the messages log. Class User.AlertHandler Extends %SAM.AbstractAlertsHandler { ClassMethod HandleAlerts(packet As %DynamicArray) As %Status { set iter = packet.%GetIterator() while iter.%GetNext(.idx, .alert) { set msg = alert.annotations.description if alert.labels.severity = "critical" {set severity = 2} else {set severity = 1} do ##class(%SYS.System).WriteToConsoleLog(msg, 1, severity) } q $$$OK } } Importing the Alert Handler into SAM After creating the alert handler, the next step is to import it into SAM. First, export the alert handler in XML format. How to do this depends on the IDE you are using. Next, log in to the SAM Manager from a web browser, using the following address: where <sam-domain-name> is the DNS name or IP address of the system SAM is running on. From the SAM Manager, navigate to the Classes page (System Explorer > Classes). Make sure the SAM namespace is selected, then click Import. This brings up the Import Classes dialog. In the Import Classes dialog: For The import file resides on, select My Local Machine. For Select the path and name of the import file, click the Choose File button and select the alert handler XML file from your file system. At the bottom of the dialog, click Next, then Import. A result dialog should appear to tell you the status of your import. After the import is complete, you have successfully added the alert handler to SAM. 
From now on, any time SAM detects a new alert, it calls the HandleAlerts() method of your class. If you ever need to update an alert handler, simply repeat the steps above with the newer version. This replaces the previous version with the new one.

Monitoring with SAM

Once System Alerting and Monitoring is fully set up, you can use it to see real-time metrics and alerts for your InterSystems IRIS instances. The SAM application consists of multiple pages that display this information at different levels of detail. These pages are:

Monitor Clusters Page – the “home page” of SAM, which displays an overview of all clusters.
Single Cluster Page – a more focused view, which displays only the information for instances in a single cluster.
Single Instance Page – the narrowest and most detailed view, which displays the instance’s details, alerts, and metrics dashboard.

The following sections describe various details of SAM:

Monitor Clusters Page

The Monitor Clusters page displays an overview of all your clusters. You can navigate to the Monitor Clusters page at any time by clicking the System Alerting & Monitoring title at the top of any SAM page. Each SAM cluster appears as a circle depicting the state of all the cluster’s instances. The Monitor Clusters page also includes an Alerts table, showing the recent alerts from all monitored instances, and provides access to the configuration settings. To see detailed information about a specific cluster or instance, simply click on it.

Single Cluster Page

To view details about a cluster, click on the cluster card on the Monitor Clusters page. The Cluster page displays an Alerts table, showing the recent alerts from all instances in that cluster. There is also an Instances table with details about the target instances. The Instances table shows the following details:

IP:Port – The IP address and Port which specify where a target instance is located. You can click this to “zoom in” to the Instance page.
State – The state of the instance, which can be OK, Warning, Critical, or Unresponsive. See the Understanding Instance State section below for a description of how SAM determines instance state.
Name – The name of the instance.
Description – The description of the instance.

Single Instance Page

To see the Instance page, click on an instance’s IP:Port. The Instance page contains the following sections:

A Details table, which contains the instance’s IP:Port, State, Name, Description, and a link to the Management Portal. For details about how SAM calculates State, see the Understanding Instance State section below.
An Alerts table, showing the recent alerts for the current instance.
A Dashboard, which shows an overview of the Grafana Dashboard for the instance.

The page also has an Edit Instance button, which allows you to modify some of the instance details, and a Delete Instance button, which allows you to remove the instance from SAM. If you edit an instance and change its network address, SAM purges all existing alerts tied to that instance. This is because SAM assumes different network addresses refer to different instances.

Grafana Dashboard

The Dashboard displays several graphs of metrics, providing a snapshot of recent activity on the instance. This section describes the information visible in the dashboard by default.

The Dashboard is generated using Grafana, an open-source metrics visualization tool. You can click View in Grafana to edit the dashboard. For more information about customizing the dashboard, check out the Grafana documentation ( If you edit the dashboard to display metrics older than two hours, you may want to increase the Prometheus database retention time.

The default dashboard contains the following information:

Viewing the Alerts Table

Multiple pages in System Alerting and Monitoring include an alerts table. By default, an alerts table displays alerts from the last hour; to view all alerts, select Show All.
Alerts tables contain the following information:

Last Reported – The most recent time the alert was reported.
Cluster – The cluster containing the instance that generated the alert.
IP:Port – The IP address and Port of the instance that generated the alert.
Severity – The severity of the alert: either Critical or Warning.
Source – The source that generated the alert: either IRIS or Prometheus. An IRIS alert is generated by an InterSystems IRIS instance. The instance’s log monitor scans the messages log and posts notifications with severity 2 or higher to the alerts log, where SAM collects them. For more information, see the Monitoring Guide. A Prometheus alert is generated by SAM according to user-defined alert rules. For more information, see the Defining Cluster Alert Rules section above.
Name – The name of the alert.
Message – The message associated with the alert.

Understanding Instance Metrics

All InterSystems IRIS instances collect metrics that describe the status and operation of the instance. System Alerting and Monitoring allows you to monitor those metrics over time, and use them to configure alert rules. For a list of all these metrics, see Metrics Description in the “Monitoring InterSystems IRIS Using REST API” section of the Monitoring Guide that corresponds to your version of InterSystems IRIS. The Create Application Metrics section on the same page describes how to create your own metrics.

Understanding Instance State

Instance state indicates whether an InterSystems IRIS instance has fired any alerts recently. There are four possible values for instance state: OK, Warning, Critical, or Unreachable. A state of OK means there have been no recent alerts. When an instance fires an alert, System Alerting and Monitoring elevates that instance’s state to Warning or Critical. Unreachable means that, for some reason, SAM cannot access the instance. A state of OK does not necessarily mean there are no problems with an instance.
Likewise, you may determine that no action is required for an instance with a Critical state. The instance state reflects the number of recent alerts, but does not provide comprehensive information about the instance.

Instance state is a combination of two factors: the InterSystems IRIS instance’s System Health State (which SAM obtains from the iris_system_state metric), and recent Prometheus alerts generated by the instance. For information about the System Health State, see System Monitor Health State in the “Using System Monitor” chapter of the Monitoring Guide. For more information about Prometheus alerts, see the Manage Cluster Alert Rules section above.

System Alerting and Monitoring determines instance state as follows:

The state is Critical if either of the following is true:
A Prometheus alert with severity Critical fired within the past 30 minutes.
The System Health State is 2 or -1.

Otherwise, the state is Warning if any of the following are true:
A Prometheus alert with severity Critical fired between 30 and 60 minutes ago.
A Prometheus alert with severity Warning fired within the past 30 minutes.
The System Health State is 1.

Finally, the state is OK if:
No Prometheus alerts have fired in the past hour.
The System Health State is 0.

Unreachable means SAM cannot access the instance. See the section below for more information.

Troubleshooting an Unreachable Instance

There are many reasons the state of an instance could become Unreachable. This section provides several potential causes and solutions. If none of these steps resolve the Unreachable status, contact the InterSystems Worldwide Response Center (WRC) for further troubleshooting help.

System Alerting and Monitoring may not be able to reach an instance with an IP address in the 172.17.x.x range (for example, 172.17.123.123). This is because Docker uses this IP range for its own networks. You can resolve this issue by changing the Docker IP address range. To do this, specify a different range (e.g.
10.10.x.x) in the Docker daemon using the default-address-pools option.

The instance you are monitoring with SAM may not be outputting metrics properly. You can check this by using the curl command in the command window, or by viewing the metrics endpoint for the target instance in your web browser at the following URL: If this displays a list of metrics, the instance is outputting metrics properly. Otherwise, the instance may not be properly configured. In that case, ensure that the instance is on InterSystems IRIS version 2020.1 or higher and that the /api/monitor application allows for unauthenticated access, as described in the Adding an Instance to SAM section.

If the SAM database fills up, instances may show up as Unreachable and stop reporting metrics. To check whether this is the case:

Open the SAM Manager from a web browser, using the following address:
Navigate to the Databases page (System Operation > Databases).
Select Free Space View.
Check the % Free column for the SAM database to see whether the value is 0.

If the database is full, you should free some space by deleting data, as described in the Clearing the SAM Database section. Once you have done so, shut down System Alerting and Monitoring using the stop.sh script, and restart it using start.sh. To prevent this from happening again, consider lowering the number of days SAM stores data using the Configuration Settings menu.

SAM 1.1 Release Notes

Overview

Version 1.1 of InterSystems System Alerting and Monitoring (SAM) provides performance improvements for the graphs in the Grafana dashboard and the underlying Prometheus queries, especially when displaying metrics over a longer period of time. Upgrade notes and requirements:

You must deploy SAM 1.1 as an upgrade to an existing SAM 1.0 installation.
The upgrade requires a new docker-compose.yml file.

Note: The upgrade does not update or overwrite the files in the /config directory created by the SAM 1.0 installation.
Performance improvements to SAM use an index that may include up to 50% more data. You must have the space available to accommodate this index.

Performing the Upgrade

To upgrade from SAM 1.0 to SAM 1.1, perform the following steps:

Shut down the existing SAM installation.
Copy version 1.1 of the docker-compose.yml file into the SAM installation directory.
Restart SAM.

Restarting SAM pulls a SAM 1.1 image, and then uses that image to upgrade and run the SAM container. The new SAM image uses InterSystems IRIS 2021.1.2. To accommodate this, the new docker-compose.yml includes a new ‘iris-init’ service, which runs briefly at startup and then exits.

About Rebuilding the Index

The upgrade creates a new index for any existing data at the initial startup of the new version. Depending on the amount of data, this may take several minutes or longer. In the SAM messages.log file, there are entries marking the start and completion of creating the index:

[Utility.Event] SAM Manager starting Index rebuild for PrometheusSample class.
...
[Utility.Event] SAM Manager completed Index rebuild for PrometheusSample class.

Note that older data may not be available in the SAM dashboards until this process has completed.
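As a small aside, those two marker lines can be checked programmatically before trusting the dashboards. The helper below is hypothetical (not part of SAM); it only scans log lines for the exact markers quoted above:

```python
# Hypothetical helper (not part of SAM): report index-rebuild status by
# scanning SAM messages.log lines for the two [Utility.Event] markers.
START_MARK = "SAM Manager starting Index rebuild for PrometheusSample class."
DONE_MARK = "SAM Manager completed Index rebuild for PrometheusSample class."

def rebuild_status(log_lines):
    """Return 'not started', 'in progress', or 'complete'."""
    started = any(START_MARK in line for line in log_lines)
    done = any(DONE_MARK in line for line in log_lines)
    if done:
        return "complete"
    if started:
        return "in progress"
    return "not started"
```

Feeding it the lines of messages.log tells you whether it is safe to expect older data in the dashboards yet.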
https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=ASAM
{-# LANGUAGE CPP #-}
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}

-- See MkId.hs --------------------------------------------------

module GHC.Magic where

-- Here import TYPE explicitly from GHC.Types and not from GHC.Prim. This is
-- because TYPE is not exported by the source Haskell module generated by
-- genprimops which Haddock will typecheck.
-- Likewise, realWorld# is not generated by genprimops so we use CPP and only
-- import/use it when not building haddock docs.
#if !defined(__HADDOCK_VERSION__)
import GHC.Prim (realWorld#)
#endif
import GHC.Prim (State#, RealWorld)
import GHC.Types (RuntimeRep, TYPE)

-- Like 'seq', the argument of 'lazy' can have an unboxed type.
lazy :: a -> a
lazy x = x
-- Implementation note: its strictness and unfolding are over-ridden
-- by the definition in MkId.hs; in both cases to nothing at all.
-- That way, 'lazy' does not get inlined, and the strictness analyser
-- sees it as lazy. Then the worker/wrapper phase inlines it.
-- Result: happiness

-- |
oneShot :: forall (q :: RuntimeRep) (r :: RuntimeRep)
                  (a :: TYPE q) (b :: TYPE r).
           (a -> b) -> a -> b
oneShot f = f
-- Implementation note: This is wired in in MkId.hs, CorePrep

runRW# :: forall (r :: RuntimeRep) (o :: TYPE r).
          (State# RealWorld -> o) -> o
{-# NOINLINE runRW# #-}  -- runRW# is inlined manually in CorePrep
#if !defined(__HADDOCK_VERSION__)
runRW# m = m realWorld#
#else
runRW# = runRW#  -- The realWorld# is too much for haddock
#endif
https://downloads.haskell.org/~ghc/8.10.2/docs/html/libraries/ghc-prim-0.6.1/src/GHC-Magic.html
Red Hat Bugzilla – Bug 203662 Review Request: dx - Open source version of IBM's Visualization Data Explorer Last modified: 2008-02-08 13:23:56 EST

Spec URL: SRPM URL: Description: OpenDX is a uniquely powerful, full-featured software package for the visualization of scientific, engineering and analytical data: Its open system design is built on familiar standard interface environments. And its sophisticated data model provides users with great flexibility in creating visualizations.

For the record, it builds with lesstif (bugzilla #203274), so the transition after openmotif is dropped will be quite painless.

Quick notes...
Release: 1 - needs the dist
License: IBM Public License - is this a valid licence Fedora is happy with?
%prep
%{__libtoolize} --force
%{__aclocal}
%{__autoconf}
%{__autoheader}
%{__automake} -a
Is this lot really needed or can the relibtoolize thing work?
--with-jni-path=%{java_home}/include \
You need something in the BR if java is going to be used
You have %{_libdir}/%{name}/ in files and %{_libdir}/%{name}/samples in samples. As the files section has already taken ownership of %{_libdir}/%{name}, does the second one need to be in there?

dist - ACK, will fix
License is OSI-approved:
%prep - I prefer to spell them out, autoreconf never worked for me
java support is in todo, it doesn't build currently, hence commented out BuildRequires: java-devel
I have %exclude %{_libdir}/dx/samples in main and %{_libdir}/dx/samples in -samples

build fails
memory.c:69:23: error: linux/sys.h: No such file or directory

Builds successfully in fc5 mock. It's a bug in kernel-headers in devel (missing sys.h). Filed bug #204538.

According to David, dx is to blame for using a private kernel header. I'll try patching it out of it then.

And success! It builds in fc6 mock.

And completely fails to build here.
Dies when it gets to compiling the java stuff

ERRORS uipp/java/./dx/net/WELApplication.java
import netscape.javascript.JSObject; (netscape doesn't exist)
import vrml.external.Node; (vrml doesn't exist)
extends DXLinkApplication implements vrml... (vrml can't be resolved to type)
private EventOutSFTime touchTimeEO; (EventOutSFTime can't be resolved to type)
private EventOutSFVec3f vps_tp = null; (as above for EventOutSFVec3f)
private JSObject window; (JSObject can't be resolved to type)
There are plenty of these. Is the spec missing a BR or two?

Hm. Java parts are not supposed to build. I've intentionally commented out java BRs from the spec. Are you building in mock? If not, you probably have java-devel installed and configure picks that up and tries to build the java parts. It builds fine in mock for me.

(In reply to comment #9)
> Hm. Java parts are not supposed to build. I've intentionally commented out java
> BRs from the spec.
This is not good enough for reproducible builds, as demonstrated in comment 8. If you intend to have the java parts not built, be explicit about it, eg. using an argument to ./configure, patch things, or as a last resort if everything else fails, try BuildConflicts.

- removed -samples, will package separately
- disable java parts completely for now
- fixed build on fc6
- moved non-binary stuff to _datadir

Created attachment 135465 [details] rpmlint errors from main dx package

Something is seriously wrong with the package here. It builds fine and the srpm is clean with rpmlint, but the main package gives the errors with this attachment (piles of them!) and the devel package gives
E: dx-devel only-non-binary-in-usr-lib
W: dx-devel no-documentation (not worried by that)
E: dx wrong-script-interpreter /usr/share/dx/help/dxall549 "F-adobe-helvetica-medium-r-normal--18*"
E: dx non-executable-script /usr/share/dx/help/dxall549 0644

These errors are to be ignored, IMHO. Those are help files, not scripts.
E: dx-devel only-non-binary-in-usr-lib This is caused by arch.mak file being in %_libdir, I'll move it to %_includedir/dx Have you updated the spec and srpms? If you have, please can you post the URL for them? If it's only the spec file which has altered, you only need to upload that. - moved arch.mak to _includedir/dx - fixed program startup from the main ui Review: builds fine in mock, rpmlint doesn't like the packages but on this occasion, the errors can be ignored Software installs and works Good consistent use of macros in US-English, UTF-8 License valid upstream and source tarballs have same md5 correct use of dist uses smp_mflags contains docs contains clean no ownership problems devel package doesn't need pkgconfig no dupes in the rpms Needs work %configure \ --disable-static \ --enable-shared \ --with-jni-path=%{java_home}/include \ --without-javadx \ --disable-dependency-tracking \ --enable-smp-linux \ --enable-new-keylayout \ --with-rsh=%{_bindir}/ssh \ Trailing \ on the end should not be there. If you remove this, I'm happy to approve the package as is. Removed: You're really picky. I know, but you love me really :-) APPROVED Likewise, thanks! Imported (wow, that took a while) and built (with a small fix) for devel, FC-5 branch requested. Package Change Request ====================== Package Name: dx New Branches: EL-5 cvs done. Package Change Request ====================== Package Name: dx New Branches: EL-4 cvs done
https://bugzilla.redhat.com/show_bug.cgi?id=203662
On Sun, Jan 08, 2006 at 04:02:12AM +0100, Andrea Arcangeli wrote: > On Wed, Sep 14, 2005 at 05:02:18PM +0200, Sven Luther wrote: > > On Wed, Sep 14, 2005 at 04:46:04PM +0200, Andrea Arcangeli wrote: > > > On Wed, Sep 14, 2005 at 11:18:46AM +0200, Sven Luther wrote: > > > > On Sat, Sep 10, 2005 at 05:19:07PM +0200, Andrea Arcangeli wrote: > > > > > On Sat, Sep 10, 2005 at 07:43:31AM +0200, Sven Luther wrote: > > > > > > Not a good idea. Why clutter the namespace of versions in order to adapt to > > > > > > non-debian needs. ? What is it you intent to do anyway ? > > > > > > > > Thanks to Bastian Blank, code was added to make your life easier, > > > > /proc/version now shows the debian version after the kernel version as : > > > > > > > > (Debian 2.6.13-1) > > > > > > > > For example, not yet in released packages, and will not account for self > > > > compiled kernels from the debian sources, but it should be usefull to you. > > > > > > Yes, that's nice thanks! > > > > > > Could you show me a full /proc/version? > > > > 10:49 < waldi> Linux version 2.6.13-1-powerpc64 (Debian 2.6.13-1) > > (waldi@debian.org) (gcc version 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)) > > #1 SMP Wed Sep 14 09:28:56 UTC 2005 > > I enabled the parsing in KLive protocol 1 of the /proc/version according > to the above format. > > As always in my code I'm paranoid and I added some overkill debugging > runtime check, and (as usual) they triggered. 
> > See the SMS the server sent me:
>
> File "/home/klive/klive/server/regexp.py", line 69, in select_kernel_group_and_branch
> raise 'corrupted proc version', (version.groups(), kernel_release, kernel_proc_version)
> corrupted proc version: (('2.6.14-2-amd64-k8', 'Debian', '2.6.14-7'), '2.6.14-2-amd64-k8', 'Linux version 2.6.14-2-amd64-k8 (Debian 2.6.14-7)
> (maks@sternwelten.at) (gcc version 4.0.3 20051201 (prerelease) (Debian 4.0.2-5)) #2 Fri Dec 30 06:24:03 CET 2005\n')
>
> The debugging check is this:
>
> if version.group(1)[:len(version.group(3))] != version.group(3):
> raise 'corrupted proc version', (version.groups(), kernel_release, kernel_proc_version)
>
> (originally it was:
>
> assert version.group(1)[:len(version.group(3))] == version.group(3)
>
> but I changed it to be more verbose after I noticed it was firing
> exceptions and rejecting packets)
>
> In short the above assert is verifying that "2.6.14-7" is a substring of
> "2.6.14-2-amd64-k8". Clearly something went wrong in the debian kernel
> build process if somebody has a "uname -r" == 2.6.14-2-amd64-k8 but the
> kernel group is 2.6.14-7.

-2 is the module ABI name (the name of the package), while -7 is the debian revision. i have no idea where you get this from, mmm, i think i know, and that the first example we gave you misled you, since it was -1, and that this -1 was both the abi number and the debian revision. I guess you need to fix your script again :) Note: the abi number is the one we guarantee you can run modules against for all kernel versions that satisfy it, but you need to rebuild modules as soon as this abi number changes.

> Note, I can as well delete this check from the server, but then kernels
> with uname -r == 2.6.14-2-amd64-k8 will be classified under the group
> "2.6.14-7" which sounds wrong. Currently the packet is rejected right
> away by twisted when the unhandled exception is fired by the assert.
In neither case is either -2 or -7 related to some kind of mainline kernel patch level.

The uname -r output is composed of:
2.6.14: kernel version
2: kernel module abi number
amd64-k8: the kernel flavour

While 2.6.14-7 is the debian revision of the package, and is composed of:
2.6.14: the kernel package upstream version.
7: the debian revision.

Note that it is not guaranteed that 2.6.14 here will stay as is; if we are forced to rebuild the .orig tarball, because we had to remove some additional material of dubious licensing situation in the kernel source for example, then it will probably be something like 2.6.14a or 2.6.14.dfsg1 or some other non-deterministic change.

So, i would use the uname -r output for your classifying, and use the full debian version (2.6.14-7) for identifying the exact version inside the above main kernel version, and the flavour. I am not sure the abi number is important; but your call.

So to summarize, the important info is:
2.6.14 -> upstream kernel version
2 -> debian abi number
amd64-k8 -> debian flavour
2.6.14-7 -> debian package version

And none of those is really a subset of the other, although the debian package version is clearly the most unique and maps uniquely to both the kernel version and the abi (but not the flavour).

Friendly,

Sven Luther

> > Please let me know if I can help solving this.
> > Thanks!
> ---------------------------------------------------------------------------------------
> Wanadoo informs you that this e-mail has been checked by the mail anti-virus service.
> No virus known to date by our services has been detected.
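For reference, the split Sven describes can be sketched in Python. This is a hypothetical stand-in, not the actual klive regexp.py — it just parses /proc/version into the three fields and keeps uname -r and the Debian package version as separate keys instead of comparing them with a prefix check:

```python
import re

# Hypothetical sketch of the check discussed above -- not the actual
# klive regexp.py. A Debian /proc/version line looks like:
#   Linux version 2.6.14-2-amd64-k8 (Debian 2.6.14-7) (maks@...) ...
VERSION_RE = re.compile(r"^Linux version (\S+) \((\w+) ([^)]+)\)")

def parse_proc_version(text):
    """Return (uname_r, distro, package_version).

    uname_r (e.g. 2.6.14-2-amd64-k8) identifies the kernel version, ABI
    number and flavour; package_version (e.g. 2.6.14-7) is the Debian
    package revision. Neither is a substring of the other, so they must
    be kept as separate fields rather than compared with a prefix check.
    """
    m = VERSION_RE.match(text)
    if m is None:
        raise ValueError("corrupted proc version: %r" % text)
    return m.groups()
```

Grouping by uname_r and recording package_version alongside it avoids the "corrupted proc version" rejection entirely.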
https://lists.debian.org/debian-kernel/2006/01/msg00229.html
Deepsleep stops execution of main

Hello Pycom

I am currently working on reducing the amount of power that my SiPy uses. I am making periodic readings and uploading to a database using MQTT and a wifi connection. Once done uploading i want to have my device in deepsleep for a period of time before redoing the whole process again. The code functions like i want it to with one exception: After completing a deepsleep my SiPy stops executing main.py in >95% of the cases, but every now and then it functions properly, restarting main.py once it has finished a deepsleep. According to the documentation deepsleep is supposed to: "resume execution from the main script just as with a reset" Which i assume means that it will execute the script from the top starting over. Most of the time, my result is that it runs the code once and then does nothing. Below is my code for main.py and boot.py and at the bottom is the output from the Pymakr console.

import machine
from machine import I2C
import time
import pycom
from simple import MQTTClient
from network import WLAN

DISCONNECTED = 0
CONNECTING = 1
CONNECTED = 2

DEVICE_ID = "4D2A4D"
HOST = "xxxxxxxxx.iot.eu-west-1.amazonaws.com"  # not real endpoint!
TOPIC = "myThingName"
WLANNAME = "SDU-GUEST"

state = DISCONNECTED
connection = None
deepSleep = 5000
counter = 0

wlan = WLAN(mode=WLAN.STA)
pycom.heartbeat(False)
i2c = I2C(0)
i2c.init(I2C.MASTER, baudrate=9600, pins=("P9", "P19"))

def run():
    global deepSleep
    global state
    global connection
    global counter
    while True:
        while state != CONNECTED:
            while not wlan.isconnected():
                nets = wlan.scan()
                for net in nets:
                    if net.ssid == 'SDU-GUEST':
                        print('Network found!')
                        wlan.connect(net.ssid, auth=(net.sec, ''), timeout=30000)
                        while not wlan.isconnected():
                            machine.idle()  # save power while waiting
                        print('WLAN connection succeeded!')
                        break
            try:
                state = CONNECTING
                print("CONNECTING TO MQTT")
                connection = MQTTClient(client_id=DEVICE_ID, server=HOST, port=8883,
                                        keepalive=10000, ssl=True,
                                        ssl_params={"certfile": "/flash/cert/cert.pem",
                                                    "keyfile": "/flash/cert/private.key",
                                                    "ca_certs": "/flash/cert/ca.pem"})
                connection.connect()
                print("MQTT CONNECTED")
                state = CONNECTED
            except:
                print('ERROR')
                time.sleep(2)
                continue
        while state == CONNECTED:
            counter += 1
            if counter >= 10:
                deepSleep = 1800000
            print("COUNTER: %i" % (counter))  # Does it save values in ram during deepsleep? I dont think so.
            i2c.writeto_mem(112, 0x00, b'\x51')
            # wait for the sensor to complete its reading
            time.sleep(0.1)
            x = i2c.readfrom_mem(112, 0x02, 2)
            time.sleep(0.01)
            dist = x[1] | x[0] << 8
            print("DISTANCE: %i" % (dist))
            msg = '{"device":"%s", "DeepsleepDuration":"%i" ,"Data":"%s"}' % (DEVICE_ID, deepSleep, dist)
            connection.publish(topic=TOPIC, msg=msg, qos=0)
            print('SENDING: %s' % (msg))
            time.sleep(5.0)
            print("DEEPSLEEPING FOR: %i" % (deepSleep))
            machine.deepsleep(deepSleep)
            print("continue?")
            time.sleep(5)

run()

known_nets = [('AndroidAP', 'zvxs8927'), ('SDU-GUEST', '')]
#)

Here is a dump of what pymakr gets over the UART connection when i run my code.
Running main.py
I (1425042) wifi: sleep disable
CONNECTING TO MQTT
MQTT CONNECTED
COUNTER: 1
DISTANCE: 0
SENDING: {"device":"4D2A4D", "DeepsleepDuration":"5000" ,"Data":"0"}
DEEPSLEEPING FOR: 5000
I (1439918) wifi: state: run -> init (0)
I (1439918) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:6
I (1439919) wifi: pm stop, total sleep time: 0/1424973621
E (1439922) wifi: esp_wifi_connect 806 wifi not:248
load:0x40078000,len:4056
load:0x4009fc00,len:920
entry 0x4009fde4
I (1541) wifi: wifi firmware version: 2a22b2d
I (1557) wifi: pp_task_hdl : 3ffd6bfc, prio:23, stack:8192
I (1557) wifi: Init lldesc rx mblock:10
I (1557) wifi: Init lldesc rx ampdu len mblock:7
I (1559) wifi: Init lldesc rx ampdu entry mblock:4
I (1564) wifi: sleep disable
I (2552) wifi: frc2_timer_task_hdl:3ffdcc4c, prio:22, stack:2048
I (2556) wifi: mode : softAP (24:0a:c4:00:f6:1d)
I (2610) wifi: mode : sta (24:0a:c4:00:f6:1c)
I (2610) wifi: sleep disable
MicroPython v1.8.6-493-g99ac80fe on 2017-03-03; SiPy with ESP32
Type "help()" for more information.
>>>
I (4940) wifi: n:1 0, o:6 0, ap:255 255, sta:1 0, prof:6
I (4941) wifi: state: init -> auth (b0)
I (4942) wifi: state: auth -> assoc (0)
I (4945) wifi: state: assoc -> run (10)
I (4945) wifi: connected with SDU-GUEST, channel 1
I (14945) wifi: pm start, type:0

It looks like boot.py always runs after a deepsleep, as the wifi connection gets reset after deepsleeping. If anyone knows how to make sure that main.py resumes after deepsleeping i would very much appreciate the help.

Thanks for reading :)

@jcaron - I don't really have the facility to capture the actual Wifi messaging, but this may highlight the issue. I'm not using any other radios and primarily using an external antenna, but tried the internal too. The WDT in the boot.py seems to be working well - been running for 24 hours and keeps reporting - previously rarely got more than an hour. I can see the WDT kicking in and doing its job.
I guess this is the sort of thing the WDT was designed for!! Thanks for your help.

@guyadams99 I don't use Wi-Fi much on the LoPy so I can't help much there. I guess it would be interesting to capture the Wi-Fi traffic and see what's happening exactly. Are you using any other radios on the module? Internal or external antenna?

@jcaron - I'm using the RGB LED and a colour sequence through the boot to achieve a similar thing to the 'boot_state' you suggested and I've been able to narrow it down to the "while not wlan.isconnected():" in this snip from boot.py:

if not wlan.isconnected():
    # change the line below to match your network ssid, security and password
    wlan.connect('Hansen2.5G', auth=(WLAN.WPA2, 'xxxx'), timeout=5000)
    connectcount = 0
    while not wlan.isconnected():
        connectcount += 1
        print("idling count:" + str(connectcount))
        machine.idle()  # save power while waiting
The only other thing I can think now is to add a clause for the idling counter here to say 2000 and at that point bail out and try the wlan.connect again (basically wrap the wlan.connect in a loop to keep trying until it works) but again, this is just really putting a sticky plaster over the problem. Any thoughts? many thanks in advance - Guy @guyadams99 Not waking properly usually means you have something in your code that throws and exception and stops execution. You would then be in the REPL when you connect to the board. This is not always easy to diagnose as you won't have USB active during deep sleep, so one solution is simply to store state in a variable so you can check it afterwards. You could even store logs. For instance, you could have boot_state = 0 ... do some stuff boot_state = 1 ... do more stuff boot_state = 2 yet more stuff boot_state=3 Then, when you find the board still running and in the REPL, just print boot_stateto know where it stopped. Once you've pin-pointed which part of the code is the culprit, you can add further logs (stored in a variable rather than just sent to the console) to find the actual issue. Did you get any resolution on this - I am having the same issue - my code works fine with sleep at the end of each 'measure and send' cycle and works for a period of time with deep sleep but then just stops. Rather than not coming out of deep sleep, when it fails it appears to start to boot but then stop, possibly between boot and main @_peter this is not the issue. Deepsleep work like "reset" with one exception - you can use pin hold to "remember pin state" but whole programm start as fresh boot.pythen main.py @YoungChul i have the same problem on LoPy board
https://forum.pycom.io/topic/929/deepsleep-stops-execution-of-main
RedRabbit
Member
Content Count: 122
Community Reputation: 109 Neutral

About RedRabbit
Rank: Member

Perfect squares
RedRabbit posted a topic in Math and Physics

Hey all, I have a program due in one of my classes in 2 days and I have a problem. I understand what a perfect square is and how to check for it mathematically but how can you check to see if the result of a square in C++ is an integer value? Thanks for the help :)

texturing in directx
RedRabbit replied to Slyfox's topic in For Beginners's Forum

Quote: float l_angle = l_time * ( 0 * D3DX_PI) / 5000.0f;

how does multiplying pi by 0 work? If you comment out the translation do you even get a rotating mesh? I haven't tried compiling this but doesn't that reduce to 0/5000? Therefore the "angle" you are trying to create is undefined or always the y-axis, is it not? If so that is useless computation.

EDIT: Quote was screwy.

Where to go next?
RedRabbit posted a topic in Graphics and GPU Programming

I recently bought "Introduction to 3D Game Programming with DirectX 9.0" by Frank D. Luna. While I'm not done with it I'm getting there and was wondering where I should look next in terms of reading material for more advanced/practical applications of what I know. Luna's books are excellent and I'm sad there isn't an "Advanced 3D Programming with DirectX 9.0" by him. Anyway...is it better to go off and just use tutorials after his book or to read another book by someone else? I'm really looking forward to starting a virtual world project where I can implement advanced techniques in a world I create but the terrain generation in Luna's book is...let's say...sub par. It just looks crappy. I'm also interested in learning more about animation and modeling as Luna really only shows how to load a .X file. I'd like to learn how to load and use a .X file that can animate (ie: a walking person). Thanks for the help in advance.

Where to start?
RedRabbit replied to laserdude45's topic in Graphics and GPU Programming

Quote: [can't remember the last time i actually got use out of sin and cos functions in a practical game application, and thats all that trig is really as far as games are concerned.]

Not to troll or anything, but I disagree with you here. Your advice on not needing to know trig for game programming is a little too specific an answer for such a generalized subject. In the 3D game programming world trig could make or break a game. Whether it comes to a simple head/gun "bobbing" motion that follows a sine/cosine line or drafting an algorithm that constantly draws pixels in random positions but the amount is inversely proportional to the distance from the center of the screen, giving a spreading effect for some particle system. Anyway...my point is you can do a lot with a little bit of knowledge of trig (or any other level of math) in GAME programming. It really depends on what type of program you're making and how efficient/qualitative you want your results to be.

nevermind
RedRabbit posted a topic in For Beginners's Forum

nvm

Recruiting for a small new team.
RedRabbit posted a topic in For Beginners's Forum

Hey guys, I'm starting a small new team of developers and artists. I am by no means an expert with C/C++ or a graphics API but I can say I have a fair share of knowledge in both areas. I'm looking for semi-casual artists and coders to begin development on a fully functional FPS. This is designed entirely as a learning exercise and I plan on using DirectX because of its functionality (and my familiarity with it). If anyone is interested drop a reply in my post or e-mail me at redrabbit109@yahoo.com (my gmail account is broken atm but I'll repost because that's my main e-mail). A website is in the works but I've halted my work on it as I would like team input as to how it will be set up.
This team will be completely democratic...those with more experience, however, will naturally weigh in more often than others and will probably have more to say. I plan on using ventrilo (or teamspeak, I have two available servers) as a means of communication between everybody. Again if you're interested please contact me and I anticipate initiating this project asap!

Help w/andy pike tuts
RedRabbit posted a topic in Graphics and GPU Programming

has any1 been able to convert this matrix transformation tutorial () from DX8 to DX9? I realize it's probably not that hard but EnumAdapterModes and GetAdapterModeCount don't make sense to me. I know they have changed from DX8 to DX9 but I've never used DX8 so I don't understand exactly how to use the new funcs. Much thanks.

- ah, k... so turn my CheckDisplayMode func into like GetDisplayMode and have enumadaptermodes = dm.Format;? (btw thanks for help)

- Yes I've read that plenty of times, though I'm not quite sure what to do with the information given. Does it mean that the new parameter eliminates the need to check which display mode is the right one? If so how do I use it correctly? If not...what exactly is its purpose? Sorry, that SDK explanation just doesn't cut it for me :/

C/C++ graphics
RedRabbit replied to brandonman's topic in For Beginners's Forum

Heh, asking what you need for C/C++ graphics is analogous to asking what you'll need if you're going on vacation. This topic is a heated debate between many factions; ranging from the 3D game gurus to the SDL learners, you'll get many different opinions here. You pretty much have a lot of freedom but read up on what you think you can tackle next before proceeding. For 3D graphics your best bet is OpenGL or DirectX just because of the sheer amount of help you'll receive via forum discussion and tutorials. 3D may, though, be a little bumpy, especially if you're still new to C++ itself.
For starters, try some 2D games...you'll most likely get a lot of SDL fans that will reply to this. I believe the site is or something. It's a very comprehensive and EXTREMELY accessible/easy interface for noobs like you and I to learn the basic mechanics of 2D games. Like I mentioned earlier, take your time. No matter which realm you go into: wanting to create a spinning cube or a pong clone, you're going to have to sit down and learn a lot of material. You also will not be learning either of these in a day. Have a positive outlook because game/graphics programming can be really frustrating, but you'll see the rewards are well worth your hard-spent time. Good luck buddy! And when you do go into 3D...go DirectX!!!! ITS THE BEST!!! (Haha, jk...you'll be able to decide when you're ready to.)

Errors with DX9
RedRabbit posted a topic in For Beginners's Forum
Hi all, I'm trying to convert a tutorial from DX8 to DX9. So far I've done it all but 2 function parameters are still putting up a fight. Here's the code snippet that gives me the following errors:

D3DFORMAT CGame::CheckDisplayMode(UINT nWidth, UINT nHeight, UINT nDepth)
{
    UINT x;
    D3DDISPLAYMODE dm;
    for (x = 0; x < m_pD3D->GetAdapterModeCount(D3DADAPTER_DEFAULT); x++)
    {
        m_pD3D->EnumAdapterModes(D3DADAPTER_DEFAULT, x, &dm);
        if (dm.Width == nWidth)
        {
            if (dm.Height == nHeight)
            {
                if ((dm.Format == D3DFMT_R5G6B5) || (dm.Format == D3DFMT_X1R5G5B5) || (dm.Format == D3DFMT_X4R4G4B4))
                {
                    if (nDepth == 16)
                    {
                        return dm.Format;
                    }
                }
                else if ((dm.Format == D3DFMT_R8G8B8) || (dm.Format == D3DFMT_X8R8G8B8))
                {
                    if (nDepth == 32)
                    {
                        return dm.Format;
                    }
                }
            }
        }
    }
    return D3DFMT_UNKNOWN;
}

Quote: 'IDirect3D9::EnumAdapterModes' : function does not take 3 arguments 'IDirect3D9::GetAdapterModeCount' : function does not take 1 arguments
I've read the documentation but I still don't understand what I should put in the extra Format parameter. Does anyone know of a fix? (or do I have to create a whole new way to check display modes with DX9?) Thanks in advance!
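For readers hitting the same errors: in DX9 both calls gained a D3DFORMAT parameter, so you enumerate modes one format at a time rather than inspecting dm.Format afterwards. A sketch of how CheckDisplayMode could be reshaped (illustrative only; it will not compile outside a DirectX 9 SDK project, and the choice of candidate formats is an assumption):

```cpp
// DX9 signatures:
//   UINT    GetAdapterModeCount(UINT Adapter, D3DFORMAT Format);
//   HRESULT EnumAdapterModes(UINT Adapter, D3DFORMAT Format, UINT Mode, D3DDISPLAYMODE* pMode);
D3DFORMAT CGame::CheckDisplayMode(UINT nWidth, UINT nHeight, UINT nDepth)
{
    // candidate formats for the requested colour depth
    D3DFORMAT fmts16[] = { D3DFMT_R5G6B5, D3DFMT_X1R5G5B5, D3DFMT_X4R4G4B4 };
    D3DFORMAT fmts32[] = { D3DFMT_X8R8G8B8, D3DFMT_R8G8B8 };
    D3DFORMAT* fmts  = (nDepth == 16) ? fmts16 : fmts32;
    UINT       nFmts = (nDepth == 16) ? 3 : 2;

    for (UINT f = 0; f < nFmts; f++)
    {
        UINT count = m_pD3D->GetAdapterModeCount(D3DADAPTER_DEFAULT, fmts[f]);
        for (UINT i = 0; i < count; i++)
        {
            D3DDISPLAYMODE dm;
            m_pD3D->EnumAdapterModes(D3DADAPTER_DEFAULT, fmts[f], i, &dm);
            if (dm.Width == nWidth && dm.Height == nHeight)
                return fmts[f];   // this format supports the requested mode
        }
    }
    return D3DFMT_UNKNOWN;
}
```

Because the format is now an input, the old "switch on dm.Format" branches disappear; you simply ask the adapter which of your preferred formats can drive the requested resolution.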
WNDCLASSEX prob...
RedRabbit replied to RedRabbit's topic in For Beginners's Forum
wow am I ever retarded....its been one of those days >.< thanks man

WNDCLASSEX prob...
RedRabbit posted a topic in For Beginners's Forum
WNDCLASSEX wc = (sizeof(WNDCLASSEX), CS_CLASSDC, WinProc, 0L, 0L, GetModuleHandle(NULL), NULL, NULL, NULL, NULL, "Cubes Rotate", NULL);
yields me the error:
Quote: error C2440: 'initializing' : cannot convert from 'int' to 'WNDCLASSEX'
any ideas? this is for a Direct3D compilation....this is my last error but I have no clue where to begin with it...thanks guys
EDIT: To clarify: I know the problem lies in the argument "sizeof(WNDCLASSEX)"... but what should I use instead of this? I've used it before (at least I think I have). thanks again

Noob problem (rendering)
RedRabbit replied to RedRabbit's topic in Graphics and GPU Programming
thank you so much for the help!! one quick question tho...
Quote: a quick look at the SDK docs tells me the parameters are:
Quote: HRESULT SetStreamSource( UINT StreamNumber, IDirect3DVertexBuffer9 * pStreamData, UINT OffsetInBytes, UINT Stride );
so your Stride is NULL, and the OffsetInBytes is the size of a single vertex - oops! Fixing that makes it stop crashing, but I still don't see anything on screen...
what should this be changed to? thanks again for all the excellent advice I'll start reading the SDK more often now!

Noob problem (rendering)
RedRabbit posted a topic in Graphics and GPU Programming
I'm trying to write a simple DirectX program but I don't want to just use the source code from one tutorial, so I'm trying to learn different aspects of my prog from different sources and put them together. So far I'm stuck at drawing to the screen. I believe all the code is here but for some reason it's not rendering. Please some1 help!
#include <d3d9.h>
#include <d3dx9.h>

////////////////////////////////
//Macros
#define D3DFVF_CUSTOMVERTEX (D3DFVF_XYZ | D3DFVF_DIFFUSE)

////////////////////////////////
//Globals
IDirect3D9* pD3D9 = NULL;
IDirect3DDevice9* pD3DDevice9 = NULL;
LPDIRECT3DVERTEXBUFFER9 m_vb;

/////////////////////////////////
//Prototypes
void Render(void);
void DestroyD3D(void);
LRESULT CALLBACK WindowProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam);
HRESULT InitDirect3D(HWND hwnd, int width, int height, bool fullscreen);
HRESULT InitVertexBuffer();

////////////////////////////////
//Structs and Stuff
struct CUSTOMVERTEX
{
    float x, y, z; //pos
    float color;   //color
};

///////////////////////////////
//Win Entry
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd)
{
    WNDCLASSEX windowClass;
    HWND hWND;
    // ...
    windowClass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
    windowClass.lpszMenuName = NULL;
    windowClass.lpszClassName = "window";
    windowClass.hIconSm = LoadIcon(NULL, IDI_WINLOGO);

    // register the windows class
    if (!RegisterClassEx(&windowClass))
        return 0;

    hWND = CreateWindowEx(NULL, "window", "Simple Direct3D Program", WS_POPUP | WS_VISIBLE,
                          0, 0, 400, 400, NULL, NULL, hInstance, NULL);

    // check if window creation failed (hWND would equal NULL)
    if (!hWND)
        return 0;

    if (FAILED(InitDirect3D(hWND, 800, 600, false)))
    {
        DestroyD3D();
        return 0;
    }
    InitVertexBuffer();

    MSG msg;
    while (1)
    {
        PeekMessage(&msg, hWND, NULL, NULL, PM_REMOVE);
        if (msg.message == WM_QUIT)
            break;
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
            Render();
        }
    }

    DestroyD3D();
    UnregisterClass("window", hInstance);
    return (msg.wParam);
}

////////////////////////////////////////
//Render
void Render(void)
{
    pD3DDevice9->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    pD3DDevice9->BeginScene();

    // Rendering the triangle
    pD3DDevice9->SetStreamSource(0, m_vb, sizeof(CUSTOMVERTEX), NULL);
    pD3DDevice9->SetFVF(D3DFVF_CUSTOMVERTEX);
    pD3DDevice9->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);

    pD3DDevice9->EndScene();
    pD3DDevice9->Present(NULL, NULL, NULL, NULL);
}

//////////////////////////////////////
//Destroy
void DestroyD3D(void)
{
    if (m_vb)
    {
        m_vb->Release();
        m_vb = NULL;
    }
    if (pD3DDevice9)
    {
        pD3DDevice9->Release();
        pD3DDevice9 = NULL;
    }
    if (pD3D9)
    {
        pD3D9->Release();
        pD3D9 = NULL;
    }
}

/////////////////////////////
//Window Procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    case WM_CREATE:
        return 0;
        break;
    case WM_KEYDOWN:
        switch (wParam)
        {
        case VK_ESCAPE:
            PostQuitMessage(0);
            return 0;
        }
        break;
    case WM_CLOSE: // windows is closing
        PostQuitMessage(0);
        return 0;
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
        break;
    default:
        break;
    }
    return (DefWindowProc(hwnd, message, wParam, lParam));
}

///////////////////////////////
//D3D initialization
HRESULT InitDirect3D(HWND hwnd, int width, int height, bool fullscreen)
{
    pD3D9 = Direct3DCreate9(D3D_SDK_VERSION);
    if (pD3D9 == NULL)
        return E_FAIL;
    // ...
    //.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
    // ...
    //InitVertexBuffer();
    return S_OK;
}

HRESULT InitVertexBuffer()
{
    VOID* pVerts;
    CUSTOMVERTEX cvVertices[] =
    {
        { 300.0f, 400.0f, 0.5f, D3DCOLOR_XRGB(255,0,0), },
        { 400.0f, 200.0f, 0.5f, D3DCOLOR_XRGB(255,0,0), },
        { 200.0f, 200.0f, 0.5f, D3DCOLOR_XRGB(255,0,0), },
    };
    if (FAILED(pD3DDevice9->CreateVertexBuffer(3*sizeof(CUSTOMVERTEX), 0, D3DFVF_CUSTOMVERTEX,
                                               D3DPOOL_MANAGED, &m_vb, NULL)))
    {
        return E_FAIL;
    }
    if (FAILED(m_vb->Lock(0, sizeof(cvVertices), &pVerts, 0)))
    {
        return E_FAIL;
    }
    memcpy(pVerts, cvVertices, sizeof(cvVertices));
    m_vb->Unlock();
    return S_OK;
}

EDIT: FYI the program opens as I want it to and I get no errors; it simply won't render. Hope this helps.
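For anyone finding this thread later, here is a sketch of the fixes that usually get this exact setup drawing (illustrative; it assumes the code above and a DirectX 9 SDK project, and is not a compilable unit on its own):

```cpp
// 1. SetStreamSource takes the offset-in-bytes third and the stride last,
//    so the call in Render() should be:
pD3DDevice9->SetStreamSource(0, m_vb, 0, sizeof(CUSTOMVERTEX));

// 2. The vertex positions are screen-space coordinates, so declare the
//    vertices as pre-transformed, with the colour as a DWORD:
struct CUSTOMVERTEX
{
    float x, y, z, rhw; // rhw marks the vertex as already transformed
    DWORD color;        // a packed D3DCOLOR, not a float
};
#define D3DFVF_CUSTOMVERTEX (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

// each vertex then needs rhw = 1.0f, e.g.
// { 300.0f, 400.0f, 0.5f, 1.0f, D3DCOLOR_XRGB(255, 0, 0), },

// 3. If you keep untransformed (D3DFVF_XYZ) vertices instead, you need
//    world/view/projection matrices, and lighting must be turned off or
//    the unlit triangle renders black:
pD3DDevice9->SetRenderState(D3DRS_LIGHTING, FALSE);
```

The float-vs-DWORD colour field is the subtle one: with a float in the struct, the stride and the FVF layout no longer agree, so even a correct SetStreamSource call reads garbage.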
https://www.gamedev.net/profile/58610-redrabbit/
A python package for defensive data analysis.

Bulwark's Documentation

Bulwark is a package for convenient property-based testing of pandas dataframes. Documentation:

This project was heavily influenced by the no-longer-supported Engarde library by Tom Augspurger (thanks for the head start, Tom!), which itself was modeled after the R library assertr.

Why?

Data are messy, and pandas is one of the go-to libraries for analyzing tabular data. In the real world, data analysts and scientists often feel like they don't have the time or energy to think of and write tests for their data. Bulwark's goal is to let you check that your data meets your assumptions of what it should look like at any (and every) step in your code, without making you work too hard.

Installation

pip install bulwark

or

conda install -c conda-forge bulwark

Note that the latest version of Bulwark will only be compatible with newer versions of Python, Numpy, and Pandas. This is to encourage upgrades that themselves can help minimize bugs, allow Bulwark to take advantage of the latest language/library features, reduce the technical debt of maintaining Bulwark, and to be consistent with Numpy's community version support recommendation in NEP 29. See the table below for officially supported versions:

Usage

Bulwark comes with checks for many of the common assumptions you might want to validate for the functions that make up your ETL pipeline, and lets you toss those checks as decorators on the functions you're already writing:

import bulwark.decorators as dc

@dc.IsShape((-1, 10))
@dc.IsMonotonic(strict=True)
@dc.HasNoNans()
def compute(df):
    # complex operations to determine result
    ...
    return result_df

Still want to have more robust test files? Bulwark's got you covered there, too, with importable functions.

import bulwark.checks as ck

df.pipe(ck.has_no_nans())

Won't I have to go clean up all those decorators when I'm ready to go to production?
Nope - just toggle the built-in "enabled" flag available for every decorator.

@dc.IsShape((3, 2), enabled=False)
def compute(df):
    # complex operations to determine result
    ...
    return result_df

What if the test I want isn't part of the library?

Use the built-in CustomCheck to use your own custom function!

import bulwark.checks as ck
import bulwark.decorators as dc
import numpy as np
import pandas as pd

def len_longer_than(df, l):
    if len(df) <= l:
        raise AssertionError("df is not as long as expected.")
    return df

@dc.CustomCheck(len_longer_than, 10, enabled=False)  # doesn't fail because the check is disabled

What if I want to run a lot of tests and want to see all the errors at once?

You can use the built-in MultiCheck. It will collect all of the errors and either display a warning message or throw an exception based on the warn flag. You can even use custom functions with MultiCheck:

def len_longer_than(df, l):
    if len(df) <= l:
        raise AssertionError("df is not as long as expected.")
    return df

# `checks` takes a dict of function: dict of params for that function.
# Note that those function params EXCLUDE df.
# Also note that when you use MultiCheck, there's no need to use CustomCheck - just feed in the function.
@dc.MultiCheck(checks={ck.has_no_nans: {"columns": None}, len_longer_than: {"l": 6}}, warn=True)

See examples to see more advanced usage.

Contributing

Bulwark is always looking for new contributors! We work hard to make contributing as easy as possible, and previous open source experience is not required! Please see contributing.md for how to get started. Thank you to all our past contributors, especially these folks:
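The mechanics behind a check decorator like CustomCheck can be sketched in plain Python (a simplified illustration of the pattern, not bulwark's actual implementation; the names here are made up):

```python
import functools

def custom_check(check_func, *args, enabled=True, **kwargs):
    # Decorator factory in the spirit of bulwark's CustomCheck:
    # run check_func on the wrapped function's return value.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*fargs, **fkwargs):
            result = func(*fargs, **fkwargs)
            if enabled:
                check_func(result, *args, **kwargs)  # raises AssertionError on failure
            return result
        return wrapper
    return decorator

def len_longer_than(obj, l):
    if len(obj) <= l:
        raise AssertionError("result is not as long as expected.")
    return obj

@custom_check(len_longer_than, 3)
def make_data():
    return [1, 2, 3, 4]

@custom_check(len_longer_than, 10, enabled=False)
def make_short_data():
    return [1, 2]  # would fail the length check, but the check is disabled

print(make_data())        # [1, 2, 3, 4]
print(make_short_data())  # [1, 2]
```

Because the check runs on the return value, the decorated function's body stays untouched, which is exactly why the enabled flag can turn validation off for production without any code cleanup.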
https://pypi.org/project/bulwark/
How can I import and export a webdriver FireFox profile? What I would like to do is something like:

from selenium import webdriver

#here I want to import the FF profile from a path
if profile:
    driver = webdriver.Firefox(profile)
else:
    #this is the way I get the WebDriver currently
    driver = webdriver.Firefox()

#doing stuff with driver

#Here I want to save the driver's profile
#so I could import it the next time

You have to decide on a location to store the cached profile, then use functions in the os library to check if there is a file in that location, and load it. To cache the profile in the first place, you should be able to get the path to the profile from webdriver.firefox_profile.path, then copy the contents to your cache location.

All that said, I'd really recommend against this. By caching the profile created at test runtime, you are making your test mutate based upon previous behavior, which means it is no longer isolated and reliably repeatable. I'd recommend that you create a profile separately from the test, then use that as the base profile all the time. This makes your tests predictably repeatable. Selenium is even set up to work well with this pattern, as it doesn't actually use the profile you provide it, but instead duplicates it and uses the duplicate to launch the browser.
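The caching idea in the answer can be sketched like this (filesystem side only; the cache location is an invented default, and the Selenium wiring is shown as comments because it needs a real Firefox install):

```python
import os
import shutil

PROFILE_CACHE = os.path.expanduser("~/.cache/ff_profile")  # hypothetical location

def cached_profile_path(cache_dir=PROFILE_CACHE):
    """Return the cache directory if a profile was saved there earlier, else None."""
    return cache_dir if os.path.isdir(cache_dir) else None

def save_profile(live_profile_dir, cache_dir=PROFILE_CACHE):
    """Copy the profile Selenium created at runtime into the cache."""
    if os.path.isdir(cache_dir):
        shutil.rmtree(cache_dir)
    shutil.copytree(live_profile_dir, cache_dir)

# Wiring it to Selenium (not executed here):
#   profile_dir = cached_profile_path()
#   if profile_dir:
#       driver = webdriver.Firefox(webdriver.FirefoxProfile(profile_dir))
#   else:
#       driver = webdriver.Firefox()
#   ... run the session ...
#   save_profile(driver.firefox_profile.path)
```

Note the answer's caveat still applies: a cached profile makes runs depend on earlier runs, so a fixed, hand-built base profile is usually the better choice.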
http://ebanshi.cc/questions/3007783/how-to-import-and-export-firefox-profile-for-selenium-webdriver-in-python
DESCRIPTION:
------------------------------
Imalse (Integrated MALware Simulator and Emulator) is a framework that helps researchers implement prototypes of botnet-based network malware. Researchers just need to implement the malware behaviour once and then it can run in the following modes:

1. emulation mode: In this mode, each copy of imalse will behave exactly like real malware. You can install it on a real machine, or in a virtual machine, and set up a testbed to test the characteristics of the malware. (Don't use it to attack other people's machines ;) ) [Note: you can potentially work with Common Open Research Emulator to emulate a lot of nodes in one machine]

2. netns3 simulation mode: You can specify the topology of the network and the ip addresses of each node in this mode. IMALSE will launch virtual machines (linux namespaces) for each node in the network and construct the network automatically. All virtualized nodes will connect to NS3 through tapbridge and all traffic is consumed there. The simulation will be in real time. It is based on the netns3 project.

3. pure ns3 simulation mode: No virtual machine will be launched for the pure ns3 simulation mode; the whole simulation will be done in ns3. The ns3 default scheduler will be used instead of the real-time scheduler of the netns3 case, which saves much time. One simulation day may only consume several real seconds.

------------------------------
The following use case will help to determine whether you should use Imalse or not.

Suppose Conan is a Ph.D student who has proposed a novel anomaly detection technique for Internet traffic. He wants to demonstrate the usefulness of this approach. To do this, he designs a scenario in which 100 client computers access a server through the internet, 10 of which have already been compromised and are controlled by a botmaster through a botnet. At some point, the botmaster will initiate a ddos attack by asking all compromised computers to send ping requests to the server. The anomaly detection technique requires all the incoming and outgoing traffic of the server for at least two days.
At some point, the botmaster will initiate a ddos attack by asking all compromised computers to send ping requests to the servers. The anomaly detection technique requires all the incoming and outcoming traffic of the server for at least two days. How can he collect the data he want? imalse provides different solutions at different abstract level.3 simulation mode** Conan can finish one simulation with less 100 real seconds, though the time has past for more than two days in the simulator. After extensive testing, Conan has been quite confident about the performance of the anomaly detection techinique now. But he is still a little bit worried about whether the result of ns3 is convincing enough. As a result, he run a complete simulation under **netns3 simulation model** and collect data. Of course, this time it runs more than two days, but he doesn't care that much because he only need to run it for very few times. Conan generates some plots and writes a paper with datanet, each computer run an independent copy of imalse under **emulation client mode**, there is a computer serving as botmster and running a imalse under **emulation server model**\ (the server refers to the C&C server in the botnet). The data of attacked server is recorded and analyzed with Conan's tools. It turns out to be good, and the Company decide to use this method. As a lazy Ph.D student, Conan just need to write one copy of code to describe the secnario during the whole process. With the help of imalse, he can have more time to sleep and enjoy the classical music. :) ----------------------------- Imalse is just a newbie. The features I am considering to add: * Background Traffic Generator Now Imalse only describe the behaviour of abnormal nodes( which is so called "scenario"). Because of the lack of time, I haven't implemented the behaviour for normal nodes. An immediate feature that need to be added is to provide some modes for the normal nodes. 
It may require different implementation for sim node, netns3 node and read node, but they need to provide unified interface. My preliminary idea is to use NS3 on-off application for sim node. * Full support of Common Open Research Emulator. The dependency of Imalse on CORE are two aspects. The CORE GUI is used with support of exporting Imalse Configuration Script. The netns3 mode rely on some components of the CORE. However, the whole procedure is not integrated and there are some features of CORE that has problems. * More Practical Attacking Scenario and More APIs for Node Imalse is useful only when there are more pratical attacking scenario. Also, different scenario may require different APIs for nodes. For example, key logger may need a node API to record key log. Whenever a Node API is added, support for Sim Node, Net ns3 Node and real node need to be implemented.
https://bitbucket.org/hbhzwj/imalse
I use a neopixel 6x6 led matrix (ws2812) + Arduino Uno. I wrote a small piece of code that uses Kinect to track movement in Processing. It displays everything between the depth of 300 & 500 in pixels. I want this to be displayed on a led matrix. (I haven't gotten more out of the matrix than displaying a rainbow.) It will stay connected to my computer throughout, so the video processing wouldn't have to be done on the Arduino itself. I am new to Arduino but I think for this I need to solve both these problems:

- I want to know how I can translate everything between the depth of 300 & 500 to values I can use in Arduino, to make the leds glow when these values are met.
- And how would I make it so the right leds light up? So it basically displays what Processing captures.

Basically the leds should display something like this: but how would I go and do this? Does anybody have any tips/articles/tutorials I could look at? If someone can get me on the right track that would be greatly appreciated! Do I need extra hardware or is this just code?

Here is my Processing code:

import org.openkinect.processing.*;

Kinect kinect;
PImage img;

void setup() {
  size(6, 6);
  kinect = new Kinect(this);
  kinect.initDepth();
  img = createImage(kinect.width, kinect.height, RGB);
}

void draw() {
  background(0);
  img.loadPixels();
  int[] depth = kinect.getRawDepth();
  int skip = 60;
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int offset = x + y * kinect.width;
      int d = depth[offset];
      if (d > 300 && d < 500) {
        img.pixels[offset] = color(255, 255, 255);
      } else {
        img.pixels[offset] = color(0);
      }
    }
  }
  img.updatePixels();
  image(img, 0, 0);
}

Thank you greatly for any tips or leads!
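One way to approach both questions (a sketch only, not tested against real hardware): collapse the depth image into a 6x6 grid of on/off values on the computer, send those 36 bytes to the Arduino over serial, and map each byte to one LED. The reduction step can be written as plain Java, since Processing sketches are Java underneath; the class and threshold here are made up for illustration:

```java
public class DepthToGrid {
    // Collapse a w*h depth image into a gw*gh on/off grid: an LED turns on
    // when enough pixels in its cell fall inside the (lo, hi) depth band.
    static boolean[] toGrid(int[] depth, int w, int h, int gw, int gh, int lo, int hi) {
        boolean[] grid = new boolean[gw * gh];
        int cw = w / gw, ch = h / gh;  // cell size in pixels
        for (int gy = 0; gy < gh; gy++) {
            for (int gx = 0; gx < gw; gx++) {
                int count = 0;
                for (int y = gy * ch; y < (gy + 1) * ch; y++) {
                    for (int x = gx * cw; x < (gx + 1) * cw; x++) {
                        int d = depth[x + y * w];
                        if (d > lo && d < hi) count++;
                    }
                }
                // threshold: at least a quarter of the cell's pixels in range
                grid[gx + gy * gw] = count * 4 >= cw * ch;
            }
        }
        return grid;
    }

    public static void main(String[] args) {
        // 12x12 toy depth image: all far (1000) except a 6x6 patch at 400
        int w = 12, h = 12;
        int[] depth = new int[w * h];
        java.util.Arrays.fill(depth, 1000);
        for (int y = 0; y < 6; y++)
            for (int x = 0; x < 6; x++)
                depth[x + y * w] = 400;
        boolean[] grid = toGrid(depth, w, h, 6, 6, 300, 500);
        System.out.println(grid[0] + " " + grid[35]);  // prints: true false
    }
}
```

On the Processing side you could write the 36 values with the Serial library each frame; on the Arduino, read 36 bytes and call the NeoPixel library's setPixelColor/show for each index (and mind the matrix's zig-zag wiring if it has one).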
https://discourse.processing.org/t/kinect-interacting-with-leds-processing-arduino/10799
You can subscribe to this list here. Showing 6 results of 6@... | On Nov 11, 2003, at 6:46 AM, <pynode@...> wrote: >) The connections are implicit; you've already made the connection when you defined __connection__ and defined your class. > class Test(Page): > def title(self): > return 'Test' > > def awake(self, trans): > Page.awake(self, trans) > # should I do connection here??? So you don't need to do anything here, just use your Person objects. >) -- Ian Bicking | ianb@... | Actually you are correct, I thought about this last night and remembered that the app in question was losing its connection to the database due to the firewall configuration. It was fixed by convincing the sysadmin that his firewall needs to let idle connections from the webserver to the db server stay open. The result was the same though, dbPool tries to use a connection that it thinks is still open, but in reality is closed. -Aaron Thomas E Jenkins wrote: >On Mon, 2003-11-10 at 15:14, Aaron Held wrote: > > >>Sorry, >>I had the same problem with pgSQL a little over a year ago. I'm a fan >>of Postgres and have had million row tables, but you will get the same >>connection issue. >> >> > >I'm guessing it must be harder to expose this error in postgres since I >have converted databases from mysql to postgres and seen the problem >disappear. Not that I do not believe you. > > >. -Aaron Frank Barknecht wrote: >Hallo, >Aaron Held hat gesagt: // Aaron Held wrote: > > > >>Sorry, >>I had the same problem with pgSQL a little over a year ago. I'm a fan >>of Postgres and have had million row tables, but you will get the same >>connection issue. >> >> > >So, may I ask: What did you do then? I mean, lots of us need (and use) >databases in the back, and it seems, that "normally" it just works. > >ciao > > hi,) class Test(Page): def title(self): return 'Test' def awake(self, trans): Page.awake(self, trans) # should I do connection here???) 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ thanks, mike@... |
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200311&viewday=11
Just a quick selection of images randomly-generated by the following 10 lines or so of Python.

import sys
import numpy as np
from PIL import Image

try:
    filename = sys.argv[1]
except IndexError:
    filename = 'img.png'

rshift = 3
width, height = 600, 450
arr = np.ones((height, width, 3)) * 128
for y in range(1, height):
    arr[y,0] = arr[y-1,0] + np.random.randint(-rshift, rshift+1, size=3)
for x in range(1, width):
    for y in range(1, height-1):
        arr[y,x] = ((arr[y-1, x-1] + arr[y,x-1] + arr[y+1,x-1])/3
                    + np.random.randint(-rshift, rshift+1, size=3))
im = Image.fromarray(arr.astype(np.uint8)).convert('RGBA')
im.save(filename)
This resulted in a speed gain of factor 12 (code:). And the mystery curves are inspiring too () Thank you, Peter New Comment
https://scipython.com/blog/computer-generated-contemporary-art/
#include <thread_manager.h>

List of all members.

- Thread reference object to the corresponding java.lang.ThreadWeakRef instance.
- Java thread object to the corresponding java.lang.Thread instance.
- Memory pool where this structure is allocated. This pool should be used by the current thread for memory allocations.
- JNI environment associated with this thread.
- Flag to detect if a class is not found on the bootclasspath, as opposed to linkage errors. Used for implementing the default delegation model.
- Flag to detect if a thread is suspended. Used for serializing Java suspends.
- The upper boundary of the stack to scan when verifying stack enumeration.
- Is this thread a daemon?
- JVMTI support in the thread structure.

Generated on Tue Mar 11 19:26:04 2008 by Doxygen.
(c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable.
http://harmony.apache.org/subcomponents/drlvm/doxygen/vmcore/html/struct_v_m__thread.html
May contain peanuts...

Well, another interesting gathering of some local BizTalk Server enthusiasts has come and gone. A big thanks to Nabeel for preparing and presenting a very interesting session on implementing TDD using BizUnit for your BizTalk Server solutions. Some of the main points from the demo include:

Of course it was also enjoyable engaging in an informal, open-floor discussion where we discussed some of the less obvious benefits of using a TDD approach to your BTS solutions. One of the more noteworthy benefits was fault isolation in your solution. What I mean by this is that if you do find that your end-to-end solution is not performing as expected, you could use the BizUnit test which you have created as a means to help identify it as either a problem in your environment or a problem in some external system you are integrating with. As always, a big thanks to Ryan and Frikkie for making arrangements and to Ryan and Nabeel for getting the site up. See you all at the next one...

Just a follow-up posting (and to show that I'm still around) regarding a brief discussion from last week's Johannesburg BizTalk Server User Group meeting... Some may recall a question I put forward, after Ryan's interesting presentation on the BRE, regarding the immutability of messages in the message box when calling rules from inside an orchestration. The general consensus was that the BRE itself would not guard the message box's message since: What we didn't establish was what would happen in various scenarios where the orchestration (through the BRE) modifies a message variable and any possible effects on the message still in the message box tables. The possible impact of allowing that to happen would be that any port which may be enlisted (but not yet started) or which may be running in a service window may receive the modified message instead of the original.

I've recently done a bit more investigation on some of the BizTalk technical sites and blogs and came across an article entitled "Messages are immutable.... or are they??" This does go some way to show that there are some additional unseen guards provided by orchestrations to protect the immutability of messages. I suspect that this one is provided by the Call Rule shape. Maybe one day I will dig a little deeper into the orchestration's generated C# code and have a look. Nabeel also mentioned that he recalled some instances where immutability on the message box may not be guaranteed. So far I've not investigated this any further, however I will post any information which surfaces regarding this topic. Hope to see you all at the next one...

Some interesting news emerging from the Visual Studio and F# teams is that the language appears to be moving out of the Microsoft Research (MSR) realm and into the Visual Studio Language Suite. I have not yet read when this may be happening (apparently it will be the usual CTPs etc, and likely after VS 2008 has shipped), nor have I found anything on what project templates may be supported.
F# is widely acknowledged as being the driving force behind Generics in .NET, and LINQ and Lambda expressions borrow (or are) functional programming (FP) concepts. Also, a few months back, I watched a video where Anders Hejlsberg discusses FP and the possibility to take advantage of multi-core processors by compiling code to execute in a multi-threaded way. For those interested, I'll try report back as I go through the book and share some experiences going from the OO world to the FP world. You can view a post on Somasegar's WebLog and on Don Syme's WebLog. I've recently come across this little "gotcha" regarding class hierarchies and passing arguments to methods using ref (and out). The restriction I encountered is you can't pass a variable defined as a derived class to a method which takes a ref argument of it's base class. To demonstrate this little restriction, consider the following example. Take the following two classes: public class Widget { private int quantity = 0; public int Quantity { get { return quantity; } set { quantity = value; } } } public class GrandWidget : Widget private decimal unitPrice = 0m; public decimal UnitPrice get { return unitPrice; } set { unitPrice = value; } and the following method: public static void IncreaseQuantity(ref Widget widget) ++widget.Quantity; If I were to try and compile the following: static void Main(string[] args) GrandWidget instance = new GrandWidget(); IncreaseQuantity(ref instance); Console.WriteLine("Quantity: {0}", instance.Quantity); Console.ReadLine(); I would get these error messages: Now this is a perfectly legal (on the surface) use of OO principles, so why can't I do this? The reason is because I am passing the argument by reference (if you don't understand this concept, there are plenty of articles out there to explain). So why is that important? 
Suppose I introduce this class into the picture: public class OtherWidget : Widget And the body of my method changes as follows: widget = new OtherWidget(); Again, this is perfectly legal, except when you consider the effect of the variable mapped to the instance argument. Since I have passed by reference I could potentially be attempting to do incompatible type assignments. So how to get past this? You could either remove the ref keyword from the method (which would prevent any change on the original value) and let the default behaviour compile, or you could rewrite GrandWidget instance = new GrandWidget(); as Widget instance = new GrandWidget(); This will then still allow you to pass the instance variable by reference.. One thing I find many developers over-look when moving to the new versions of Visual Studio is the IDE code editting enhancements. I know of many developers who are using the .NET 2.0 generation enhancements in VS 2005 very competently, yet still do not know about the code snippets, refactoring and "Surround With..." features available in the IDE. As with VS 2005, VS 2008 brings a few more of these features into the code editors. I reommend taking a look at Scott Guthrie post for a quick tour. (personally, I really like the C# using statements organization feature). I recently spend a few (many) hours doing some research into the workings of LINQ providers for an internal session on LINQ. I thought I'd share some resources I came across during the exercise for anyone who may also be interested and / or looking to create their own providers. The ultimate result of this investigation was the reimplementation (translated into C# and made compatible with VS Orcas Beta I) of a LINQ to Windows Desktop Search provider. (As you can probably deduce, there have been some breaking changes on the interface(s) for LINQ providers between Beta I & II). 
Now, to understand the role (and thus the expectation) of the provider, consider how we query data from an in-memory object perspective (loops, predicates, methods, etc.), and then consider how we may query data from a particular data source. These data sources (to name the most common associated with LINQ providers) could be SQL, XML, SharePoint or Active Directory. All these data sources have different data access APIs, such as Structured Query Language, XPath or LDAP statements. What the provider ultimately has to do is translate the LINQ expressions (in the form of lambda expressions, .NET properties and methods, etc.) into an equivalent representation which is understood by the underlying data access API, and provide the logic for the various types of statements which may be issued against the provider. For those who are interested in actually going ahead and creating your own provider, check out the links at the end of the post. I found those to be the most useful for information and examples. From my perspective, I definitely found the following aspects the most interesting:

Useful resources:

Just a quick update for anyone who has been following news around BizTalk Server R2 and the WCF LOB Adapter SDK. I mentioned in my "BizTalk Adapter Wizard (for 2006) posted on CodePlex" post that the SDK was only available for the TAP customers. I am happy to say that it appears it is now available through Microsoft Connect as a public RC. If you are interested, you can find some more information and download details at the BizTalk Server Team Blog.

Custom components (.NET assemblies) can be used quite a bit throughout BizTalk Server. The main scenarios would be:

Assuming you have compiled these components for debugging and have deployed all the BizTalk artefacts, you can then debug these components using the following process.

1. In the Visual Studio IDE, select Debug > Attach to Process...
This will bring up the list of available processes running on the machine (as seen below).

2. Select the correct instance of the BTSNTSvc.exe process. If you prefer, you can select all of the instances to attach to simultaneously. (You may need to check the "Show processes from all users" checkbox.)

3. Click the Attach button.

You can now execute your BizTalk process and still have Visual Studio hit any breakpoints which you have set in your code. Should you not want to attach to all the processes, you can attach to the particular instance you want. The instance will depend on which host your component will be executed from. In order to determine which host you want, shut down the host instance before you open the processes window. Once the process window is open, restart the host instance and refresh the process window. Note which of the entries has a new ID, and that will be the host to attach to.

One of the nice features of the BizTalk Server Business Rules Engine (BRE) is the capability to host the engine within your own applications (i.e. it is not exclusively tied to BizTalk Server). This allows you to use the rules engine and all of its nice policy management features and APIs from any of your custom-written applications. However, there are licensing costs which one must consider when designing and deploying such an application. Should you currently be interested in alternatives, I've just recently come across Smart Rules from Kontac. I think this product, in particular, is worth mentioning mainly due to its similarities with the BizTalk Server BRE.
Just a quick look at the product information and you can find the following in common: In addition to these similarities, the UI looks almost like an exact clone of the BizTalk Server BRE UI. An added bonus is that there is also a Lite Edition (with some limitations) which can be downloaded for free. Unfortunately, having just come across it, I am not able to comment on how it works or give feedback on any experiences I've had with this tool. If I do get a chance to play with it, I'll definitely give some more feedback on how it compares with the BRE and even WF's rules infrastructure. If you are interested, there is also a post on The Code Project containing a demo which uses this product.

It is excellent to see that Boudewijn van der Zwan has posted the source to his BizTalk Adapter Wizard on CodePlex. This is a definite must-have for anyone looking to develop adapters, as it takes care of much of the plumbing involved in using the Adapter Framework. The wizard will take care of the following for you: A nice feature which Boudewijn added to the 2006 release is support for the XML annotations which make the properties schemas more useful. This includes fields such as Friendly Name and Description. I still recommend having some familiarity with the Adapter Framework, as there will still be a few cases when you'll need to take care of the plumbing yourself. These cases may be: As for R2, Microsoft will be shipping the WCF LOB Adapter SDK, which should take much of the complexity away from the current Adapter Framework. The good news is that it appears both the current framework adapters and the new WCF framework based adapters should be able to run smoothly side by side. Unfortunately the WCF Adapter SDK appears to only be available for the TAP customers and is currently scheduled to be made available in Q3. In the meantime, keep visiting Developing Adapters Using WCF. Note: In addition to the adapter wizard, you can also get his BizTalk Time Breakdown tool from his blog.
I've recently completed going through the first issue of BizTalk Hotrod Magazine (available for download here) and have to commend the guys who spent time putting this together. They really have gone out of their way to produce a fun and informative publication which stands out. The first issue deals with the following topics: In addition to the articles, there are also links to: Well done guys! Can't wait to see the next issue.

Just a quick pointer to a new offering from Microsoft which I've had a brief look at: BizTalk Services. This offering is currently being described as "the first Internet Services Bus". From what I gather, it is an extension of the Enterprise Service Bus model, extending it to provide a single (public) repository and a uniform authentication, security and relaying model across the services. To quote one of the team members: "…"

The following services are currently either live or on the way: Developers can also create their own services using WCF and the BizTalk Services SDK and then expose these on the Internet Service Bus. A plus point is that it appears your service does not actually need to be exposed via the BizTalk Server product, just WCF. BizTalk Services will handle the authentication requirements, after which the client and service will communicate directly. You can have a look at the site/CTP here. Also take time to download the SDK and have a look at the samples and API documentation. As I mentioned, this was just a brief excursion into BizTalk Services. I will hopefully get a chance to play with this and do a few more posts or correct any errors I've made in this post.

Useful links:

Virtualization is a great thing in software development. You can create specific environments with specific versions of toolsets and servers without significant hardware costs. However, you can hit an unexpected (and rather amusing) problem, as I did, when you start using device emulators.
I had been using a particular Microsoft Virtual PC 2004 setup for a few months, which I created as a dedicated development environment for a particular group of solutions. Recently there was a requirement to create a line-of-business application developed to run on the .NET Compact Framework and target Pocket PC 2003 devices. The following is a screenshot of the message I got when I started debugging the application on the Pocket PC 2003 Emulator.

I've just found this nice little article in the May 2007 edition of MSDN Magazine. As the title implies, it gives some details on some of the better development practices for developing BizTalk Server solutions. A quick list of the tips and tricks discussed is: When I first looked at these recommendations, some appeared to be taking things a little too far. However, if you read the justification for scalability, deployment and flexibility, then you will get an idea of where these recommendations come from. I know I'll be considering them for future projects.
http://dotnet.org.za/lacya/
Mike asks: How do I specify the HTML for a web browser control from a string? The answer is fairly simple, yet I could not find a quick reference link anywhere from a Google search. So, here is the set of steps to use the web browser control and set the HTML for the control using a string. There is no real magic: you just need two different libraries. The WebBrowser's Document property returns an object representing the DOM for a web page. However, this DOM does not exist until a page is loaded. Rather than load a URL from a file, use about:blank for the URL to load a blank page. When you call the Navigate method of the browser, the status text becomes "Opening page about:blank…" When the document is finished loading, the status text changes to "Done". You can leverage this event to know that the browser is finished loading the blank page, at which time the DOM is accessible. If you are not using Visual Studio .NET, see the Microsoft SDK documentation for instructions on the Windows Forms ActiveX Control Importer. This utility is used to create a Windows Forms control based on the type library information in the ActiveX control. Create a new Windows Forms project. Open the WYSIWYG designer for Form1. Right-click the Toolbox and choose Customize Toolbox…, and click the COM Components tab. Browse the SYSTEM32 directory for shdocvw.dll. Click OK to choose this DLL. Click OK to close the dialog. Your project references now show a reference to AxSHDocVw.dll. Right-click the References tab and choose Add Reference… Click the COM tab. Click the Browse button. Navigate to the SYSTEM32 directory and choose mshtml.tlb. Click OK to close the dialog. Your project references now show a reference to MSHTML. From the Toolbox pane, drag an Explorer component onto Form1. Name this control "browser" in the properties pane.
Switch to the code view for the form and enter the following code for the form:

```
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using mshtml;

namespace WindowsApplication2
{
    public class Form1 : System.Windows.Forms.Form
    {
        private AxSHDocVw.AxWebBrowser browser;
        private System.ComponentModel.Container components = null;

        public Form1()
        {
            InitializeComponent();
            string url = "about:blank";
            object o = System.Reflection.Missing.Value;
            browser.Navigate(url, ref o, ref o, ref o, ref o);
            AxSHDocVw.DWebBrowserEvents2_StatusTextChangeEventHandler handler =
                new AxSHDocVw.DWebBrowserEvents2_StatusTextChangeEventHandler(this.browser_StatusTextChange);
            browser.StatusTextChange += handler;
        }

        private void browser_StatusTextChange(object sender,
            AxSHDocVw.DWebBrowserEvents2_StatusTextChangeEvent e)
        {
            mshtml.HTMLDocument doc = (mshtml.HTMLDocument)this.browser.Document;
            doc.body.innerHTML = "<H1>foo</H1>";
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code
        private void InitializeComponent()
        {
            System.Resources.ResourceManager resources =
                new System.Resources.ResourceManager(typeof(Form1));
            this.browser = new AxSHDocVw.AxWebBrowser();
            ((System.ComponentModel.ISupportInitialize)(this.browser)).BeginInit();
            this.SuspendLayout();
            //
            // browser
            //
            this.browser.Enabled = true;
            this.browser.Location = new System.Drawing.Point(16, 16);
            this.browser.OcxState = ((System.Windows.Forms.AxHost.State)(resources.GetObject("browser.OcxState")));
            this.browser.Size = new System.Drawing.Size(344, 224);
            this.browser.TabIndex = 0;
            //
            // Form1
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(392, 302);
            this.Controls.AddRange(new System.Windows.Forms.Control[] { this.browser });
            this.Name = "Form1";
            this.Text = "Form1";
            ((System.ComponentModel.ISupportInitialize)(this.browser)).EndInit();
            this.ResumeLayout(false);
        }
        #endregion

        [STAThread]
        static void Main()
        {
            Application.Run(new Form1());
        }
    }
}
```

That's all there is to it. Once the DOM is available, you have access to the body of the HTMLDocument. The ActiveX Control Importer converts type definitions in a COM type library for an ActiveX control into a Windows Forms control…

Creating a HTML Viewer Control: Dustin Mihalik's Blog

Another quick method:

```
browser.Navigate("about:<h1>Foo</h1>", ref o, ref o, ref o, ref o);
```

Thank you so much for this code!! It only took 3 hours to find! You are a life saver. I found a lot of snippets of code that sort of did what I wanted, but none of them actually explained why it was written that way. Yours is the first that clearly explained why I had to call "navigate"; before then I was only getting exception errors. You are one righteous dude!

Good stuff mate… I've been looking for something similar for a while, then came to the same solution, although without using the text change event… Good on you…

Dude u rock!! Thanks for the explanation. Finally… I do have another question though. I am using this to make a telnet/mudding application. Once I convert the stream to HTML and display it, how do I get the control to scroll down like a textbox would?

Try something like this. It works for some scrollable controls. It may or may not work for the browser control. Theoretically, you'd just call ScrollToBottom() each time you add text to the control.

```
const int WM_VSCROLL = 0x0115;
readonly IntPtr SB_BOTTOM = new IntPtr(7);

public void ScrollToBottom()
{
    System.Windows.Forms.Message msg =
        Message.Create(this.Handle, WM_VSCROLL, SB_BOTTOM, IntPtr.Zero);
    this.DefWndProc(ref msg);
}
```

Sorry, the example assumes that you've subclassed the browser control. If not, replace "this" with "this.browser" (assuming the naming system above).

I tried doing exactly this.
But when the form is opened, it gave an exception stating "Exception has been thrown by the target of an invocation". And when I click "OK", it displays the stack trace with the bottom-most frame pointing to "System.RuntimeType.CreateInstanceImpl(Boolean publicOnly)". I am using Windows 2000 SP4 with IE 6 SP1. I have Visual Studio 2003 with .NET Framework 1.1. I have no idea what's happening! I would greatly appreciate it if anyone can help. Thanks.

OK. I was trying this on a form which is an MDI child. When I played around with it, I got a clearer exception on the line where the browser was created (within InitializeComponent). It said "could not instantiate ActiveX control because current thread is not in a single-threaded apartment". When I tried the same code on a form (not an MDI child) that has the Main method (like in the above example), it worked! Has anyone faced this issue? Thanks.

I got the answer. I had missed out the "[STAThread]" directive above the Main method declaration. Thanks.

Thanks!

Hello, I am trying to put the web browser control in an application. I want to load an embedded HTML resource into the web browser control that will contain several links. What I want is to know which link has been clicked and then, according to the selected option, display the specific Windows Form, etc. How can I do that? Best, Nauman

It would be a great help if you could e-mail me any information at naumanemails@yahoo.com. Thanks, Nauman

When attempting to add this form as a child form in an MDI container, I don't get an error, but the browser navigation appears to go away and not come back. It runs through the code to set the inner HTML, but the text never appears. The mouse is using the wait cursor and it just sits there. The application is not locked up, but the browser never shows the content. Any ideas? [Will look back here for answers]

Well done! This was very helpful information.
I am trying to get the web control to scroll as Asim was, but the problem I've found is that the handle returned by browser.Handle is not the correct window handle (as shown by Spy++). If I get the correct handle with Spy++ and use another app to send a message to the browser window, it will scroll correctly. Any ideas on how to get the right handle?

Can someone help me out with this? Thanks, mani
https://blogs.msdn.microsoft.com/kaevans/2003/02/25/specify-the-html-for-a-windows-forms-web-browser-control/
02 November 2011 21:57 [Source: ICIS news] HOUSTON (ICIS)--The weakness, which took glacial acrylic acid (GAA) prices to a new range of $1.31–1.36/lb, as assessed by ICIS, stemmed from seasonally weaker demand and longer supply. Though September feedstock chemical-grade propylene (CGP) settled flat, slackening demand has stretched supply to excess levels and pushed October pricing sharply lower as producers seek to trim year-end inventory, sources said. Price reductions heard during October ranged from 6–12 cents/lb for glacial acrylic acid (GAA) to as much as 12 cents down for 2-ethylhexyl acrylate (2-EHA) and one offer of 15 cents/lb down for butyl acrylate (butyl-A). Some considered the butyl-A reduction an outlier, but all buyers agreed that weaker demand is anticipated by the end of the year. Producers are willing to offer further concessions than usual on freely negotiated pricing, a buyer said, "because they don't want to get stuck with inventory, and buyers don't want it either". Most formula buyers will see their largest discounts in November, based on October's sharp 14 cent/lb drop in CGP, sources said, but negotiated October reductions have been significant. Sources said combined reductions in October and November could total 20 cents/lb or more. "Producers know they are going to have to give back major price … due to the propylene price drops," a buyer said. The October CGP contract price is 62.50 cents/lb, and November feedstock CGP contracts are expected to be lower on weak demand and plentiful spot availability. Major producers of acrylates in the ($1 = €0.73) For more on US acrylates,
http://www.icis.com/Articles/2011/11/02/9505008/us-october-acrylates-fall-by-10-centslb-on-soft-market.html
Linked by Thom Holwerda on Sat 18th Nov 2006 23:26 UTC, submitted by Dolphin. Thread beginning with comment 184036.

Member since: 2005-12-04

Bio: I'm a regular OSNews reader (though I seldom make comments), and for the last little while I've been maintaining NTFS up here in Redmond.

The article describes accurately how symlinks work in Vista, although it should be pointed out that this is completely by design. A junction is evaluated on the server. A symlink is evaluated on the client. At least as far as directories are concerned, Vista gives you the choice about which evaluation method you prefer. Unfortunately, there is no such thing as a file junction, so linking to files requires symbolic links. As other comments point out, the reason the user scenario works on Linux is that Samba etc. evaluate the symbolic link on the server, so an NT4 Windows client can "traverse" the link because it never really traverses anything. Samba turns client evaluation into server evaluation, which works pretty well, at least most of the time. Others here have commented that it's unwise to have a link on a remote share that links to a local file. That is correct, and by default Vista has policies that prevent evaluation of those links. However, it is configurable - you can have these links if you really want to, unwise or not. The design of symlinks was client evaluation. Server evaluation would have been a *lot* easier (and faster!) to do. This behavior will not change in a hotfix/QFE/service pack/Fiji/anything else. Also, to be clear, we've been very consistent about NTFS - the on-disk NTFS format has not changed. The on-disk format is 3.1, which is the same as XP. This is important for compatibility. Symlinks are just reparse points. You could, given enough free time and a Win2k IFS kit, build your own Vista symlink evaluator for Win2k.
Similarly, Samba could be made to parse them, although there would be namespace translation issues (what's C:\foo\bar on Linux, anyway?). The real prick with Vista symlinks is (as others have pointed out) the requirement that, by default, only admins can create them. There are several compatibility/security reasons for this, which belong in a separate discussion. Hopefully *that* can be fixed in future - but it won't be until a future Windows release, at the earliest. You can change this behavior by giving other groups the right: MMC, Local Computer Policy, Computer Configuration, Windows Settings, Security Settings, Local Policies, User Rights Assignment, Create symbolic links.
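On the point about creating them: symbolic links can also be created and inspected programmatically, and the same privilege restriction applies. The sketch below uses Java's NIO file API as one concrete illustration (my choice, not something from the comment above); on Vista and later, the createSymbolicLink call fails unless the process holds the "create symbolic links" right just described, while on Linux it normally just works.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDemo {

    // Creates target.txt plus a sibling symlink pointing at it, then checks
    // the link's properties. Returns true when everything behaves as expected.
    public static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("symlink-demo");
            Path target = Files.createFile(dir.resolve("target.txt"));
            Path link = dir.resolve("link.txt");

            // On Windows this throws unless the process has the
            // "create symbolic links" user right.
            Files.createSymbolicLink(link, target.getFileName());

            return Files.isSymbolicLink(link)
                    && Files.readSymbolicLink(link).toString().equals("target.txt");
        } catch (IOException | UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note that the link stores a relative target ("target.txt"), which is resolved against the link's own directory, mirroring how client-side evaluation interprets the stored path.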
http://www.osnews.com/thread?184036
Mar 26, 2012 03:30 PM|prasadvemala|LINK

Recently we converted an ASP.NET MVC2 (VS 2008) project to ASP.NET MVC4 (VS 2010, Razor views). In ASP.NET MVC2, we used LINQ to SQL. In ASP.NET MVC4, we are using Entity Framework. When comparing the loading speed, ASP.NET MVC4 runs slower than ASP.NET MVC2. I have compared a basic page that just pulls the list of contacts; both have the same functionality and flow (Controller -> BLL -> Repository (DAL)). But ASP.NET MVC4 loads slower (about 5 times slower on some pages) than MVC2.

Above is a screenshot of Firebug from the MVC2 application. Above is a screenshot of Firebug from the MVC4 application. Both have the same functionality, but they differ in the time they take to load.

How do I find the loading issue in my ASP.NET MVC4 application? Is there any tool that points out any issues/leakage? Please suggest.

Mar 26, 2012 03:44 PM|BrockAllen|LINK

Mar 26, 2012 05:02 PM|CodeHobo|LINK

Honestly, with a difference of about 400 milliseconds I'd say that it's very likely that the bottleneck is the data access code. According to this chart, LINQ to SQL has a slight performance benefit over Entity Framework. However, you get much more functionality out of EF. I'd run a profiler against the database to see what kind of SQL query is being generated. You could be selecting lots of data, and that would hurt performance. Consider using Dapper for this page; it's significantly faster than EF, but requires you to do a lot more setup work. Also consider introducing a caching layer so that you are not hitting the database regularly.

Mar 26, 2012 06:58 PM|prasadvemala|LINK

I've almost tracked down the issue which is causing the delayed loading. It's due to the interfaces in my controller. When I comment out the interfaces in my controller, the page loads faster. I am using the Unity Application Block for Dependency Injection.
Below is the screenshot (controller with interfaces) from Glimpse:

```
public class UserController : BaseController
{
    #region Declaration

    private IUserService _userService;
    private IPaymentService _paymentService;
    private ICommonService _commonService;
    private IAdminService _adminService;
    private IAUserService _adminUserService;

    #endregion

    #region Constructor

    public UserController()
    {
    }

    public UserController(IUserService userService, IPaymentService paymentService,
        ICommonService commonService, IAdminService adminService, IAUserService adminUserService)
    {
        this._userService = userService;
        this._paymentService = paymentService;
        this._commonService = commonService;
        this._adminService = adminService;
        this._adminUserService = adminUserService;
    }

    #endregion
```

Below is the screenshot (controller without interfaces) from Glimpse:

```
public class UserController : BaseController
{
    #region Constructor

    public UserController()
    {
    }

    #endregion
```

From the above screenshots, there's a clear difference in the speed with and without interfaces. Is there anything to be noted here to fix? I am a little confused about the difference in the loading time of pages with and without interfaces. Both the screenshots I took are without any database calls. Though I am using the same latest Unity Application Block (v2.1.505.0) in both MVC2 and MVC4, it works fine with MVC2, but not with MVC3 or MVC4.

Mar 26, 2012 07:01 PM|BrockAllen|LINK

Not exactly sure what you mean by "Controller with Interfaces" -- are you talking about using DI? Can you show the before and after code snippets?
Mar 26, 2012 07:25 PM|prasadvemala|LINK Hi BrockAllen, I am using the Dependency Injection perfectly in global.asax. The screenshot i took is from the least time the page loaded(After 2nd or 3rd time of reloading the pages) 1). I checked even after commenting the call to database(the screenshot from my previous post). 2). The issue is when using these interfaces with my upgraded MVC4 application Mar 26, 2012 07:31 PM|BrockAllen|LINK So then something in MVC4 changed about how the dependency resolved is used, perhaps? I'd suggest submitting this as feedback to the MVC4 team. 8 replies Last post Mar 26, 2012 07:31 PM by BrockAllen
http://forums.asp.net/p/1785359/4900183.aspx?Re+Performance+issue+after+upgrading+MVC2+from+L2S+to+EF
Java program to capitalize the first letter of each word in a String: In this tutorial, we will learn how to capitalize the first letter of each word in a string in Java. The user will input one string, then we will capitalize the first letter of each word and save the modified string in a different String variable. Finally, we will output that string.

Java program:

```
import java.util.Scanner;

public class Main {

    private static void print(String message) {
        System.out.print(message);
    }

    private static void println(String message) {
        System.out.println(message);
    }

    public static void main(String[] args) throws java.lang.Exception {
        //1
        String currentWord;
        String finalString = "";

        //2
        Scanner scanner = new Scanner(System.in);

        //3
        println("Enter a string : ");
        String line = scanner.nextLine();

        //4
        Scanner scannedLine = new Scanner(line);

        //5
        while (scannedLine.hasNext()) {
            //6
            currentWord = scannedLine.next();
            finalString += Character.toUpperCase(currentWord.charAt(0)) + currentWord.substring(1) + " ";
        }

        //7
        println("Final String : " + finalString);
    }
}
```

Explanation: The commented numbers in the above program denote the step numbers below:

- Create one String variable currentWord to save the current scanned word and a different variable finalString to save the final string.
- Create one Scanner variable to scan the user input string.
- Ask the user to enter the string and store it in the line variable.
- Next, create one more Scanner object, scannedLine. Note that we are passing the line variable while creating this object, so this Scanner will scan the string stored in line.
- Start one while loop and scan the line word by word.
- Store the current word in the string variable currentWord. This while loop reads word by word. We change the first character of each word to upper case and then append the remaining letters of that word. Finally, we add one space after that word. So, for example, the word hello will become Hello.
- After the loop is completed, we have the result string stored in variable finalString. So, print out the final string finalString.

Example Output:

Enter a string : this is a test string
Final String : This Is A Test String

Similar tutorials:

- Java Program to get all the permutation of a string
- Java program to remove all white space from a string
- Java program to convert a string to boolean
- Java program to replace string in a file
- Java program to swap first and last character of a string
- Java program to find the total count of words in a string
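As a point of comparison, the same transformation can be done without the second Scanner by splitting the line on spaces. The variant below is my own sketch rather than part of the tutorial, and it also avoids the trailing space left by the loop above:

```java
public class Capitalize {

    // Splits on single spaces, upper-cases each word's first character,
    // and rebuilds the sentence with a StringBuilder.
    public static String capitalizeWords(String line) {
        StringBuilder sb = new StringBuilder();
        for (String word : line.split(" ")) {
            if (word.isEmpty()) {
                continue;
            }
            sb.append(Character.toUpperCase(word.charAt(0)))
              .append(word.substring(1))
              .append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(capitalizeWords("this is a test string"));
        // This Is A Test String
    }
}
```

Using a StringBuilder instead of repeated string concatenation also avoids creating a new String object on every iteration of the loop.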
https://www.codevscolor.com/java-program-capitalize-first-letter-words-string
```
from django.shortcuts import render
from django.http import HttpResponseRedirect

def contact(request):
    if request.method == 'POST':
        form = ContactForm(request.POST)
        if form.is_valid():
            # process the data in form.cleaned_data
            return HttpResponseRedirect('/thanks/')
    else:
        form = ContactForm()

    return render(request, 'contact.html', {
        'form': form,
    })
```

There are three possible code paths here:

- If the form has not been submitted, an unbound form is instantiated and passed to the template.
- If the form has been submitted but is invalid, the bound form is passed to the template so the errors can be displayed.
- If the form has been submitted and is valid, the data in form.cleaned_data is processed and the user is redirected.

You can still access the unvalidated data directly from request.POST at this point, but the validated data is better. In the above example, cc_myself will be a boolean value. Likewise, fields such as IntegerField and FloatField convert values to a Python int and float respectively. Read-only fields are not available in form.cleaned_data (and setting a value in a custom clean() method won't have any effect). These fields are displayed as text rather than as input elements, and thus are not posted back to the server. Extending the earlier example, the validated form data could then be processed, for instance to send an email.

Tip: For more on sending email from Django, see Sending email.

If your form includes uploaded files, be sure to include enctype="multipart/form-data" in the form element. If you wish to write a generic template that will work whether or not the form has files, you can use the is_multipart() attribute on the form:

<form action="/contact/" method="post" {% if form.is_multipart %}enctype="multipart/form-data"{% endif %}>

Useful attributes on {{ field }} include:

- {{ field.label }} - The label of the field, e.g. Email address.
- {{ field.label_tag }} - The field's label wrapped in the appropriate HTML <label> tag, e.g. <label for="id_email">Email address</label>
- {{ field.id_for_label }} - The ID that will be used for this field (id_email in the example above). You may want to use this in lieu of label_tag if you are constructing the label manually. It's also useful, for example, if you have some inline JavaScript and want to avoid hardcoding the field's ID.
- {{ field.value }} - The value of the field, e.g. someone@example.com
- {{ field.html_name }} - The name of the field that will be used in the input element's name field. This takes the form prefix into account, if it has been set.
- {{ field.help_text }} - Any help text that has been associated with the field.
- {{ field.errors }} - Outputs a <ul class="errorlist"> containing any validation errors corresponding to this field. You can customize the presentation of the errors with a {% for error in field.errors %} loop. In this case, each object in the loop is a simple string containing the error message.
- {{ field.is_hidden }} - This attribute is True if the form field is a hidden field and False otherwise. It's not particularly useful as a template variable, but could be useful in conditional tests such as:

{% if field.is_hidden %}
  {# Do something special #}
{% endif %}

- {{ field.field }} - The Field instance from the form class that this BoundField wraps. You can use it to access Field attributes, e.g. {{ char_field.field.max_length }}.

See also: form and field validation.
https://docs.djangoproject.com/en/1.5/topics/forms/
Hi! Thanks for your explanations! I want to avoid performance problems caused by repeated (very frequent) function calls with very long strings. In some languages (C, PHP, PowerBuilder, ...) I have the opportunity to pass "by value" or "by reference/pointer".

"by value": the CPU must perform a complete copy of the string
"by reference/pointer": only a long value (an address) is passed

... the performance difference is in most cases unimportant, but sometimes ... Unfortunately I made an error when testing the behavior of Python, and so I thought lists were also passed "by value". But this isn't true, and so I can solve my problem by putting the string argument in the first place of a list.

greetings
iolo

"Christos TZOTZIOY Georgiou" <tzot at sil-tec.gr> wrote in the message news:79v2vv0ijgdvqu354uh4mh21v4b3liurj0 at 4ax.com...
> On Tue, 30 Dec 2003 14:21:46 +0100, rumours say that "EsC"
> <christian.eslbauer at liwest.at> might have written:
>
> >Hy!
> >
> >is it possible to pass function-arguments by reference?
> >(for example in PHP you can use the "&" operator ... )
> >
> >thx
> >iolo
> >
>
> There is no such concept as "pass by value" in python. Only references
> are passed around, and therefore there is no special syntax for that.
>
> What you need to understand is that objects can be mutable (changeable)
> or immutable. Search for these terms in the python documentation.
>
> In other languages, variables are a container: they contain a "value".
> In python, "variables" are only "names" referring to "objects" and they
> have no "value". If you assign anything to a name, you just change the
> object it is pointing to.
>
> Presumably you ask this question because you want your function to pass
> back some more data than its return value. Python handles fine multiple
> values, check for "tuple" in the docs.
> > An example: (I am not familiar with php, therefore I will write program > A in pseudocode, but you will get the point I hope) > > function f(&a): > if (a > 10) then a = 10 > return (a*2) > > var = 20 > result = f(var) > > This function makes sure that the "var" variable stays less than or > equal to 10, and then returns the double of the corrected argument. > > In python you would do this: > > def f(a): > if a > 10: > a = 10 > return a, a*2 > > var = 20 > var, result = f(var) > > If not covered, please write back. > > PS reading this could be helpful too: > > > -- > TZOTZIOY, I speak England very best, > Ils sont fous ces Redmontains! --Harddix
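The point made in the thread (only references are passed, so rebinding a parameter is invisible to the caller, while mutating a passed object is visible) can be demonstrated with a short, self-contained example:

```python
def rebind(s):
    # Rebinds the local name only; the caller's name is untouched
    s = s + " changed"

def mutate(lst):
    # Mutates the object itself; the caller sees the change
    lst[0] = lst[0] + " changed"

text = "original"
rebind(text)
print(text)       # original

items = ["original"]
mutate(items)
print(items[0])   # original changed
```

Note also that passing a long string is cheap in Python: only a reference is copied, never the character data, so the original poster's worry about the cost of copying on each call does not apply.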
https://mail.python.org/pipermail/python-list/2003-December/196225.html
The Book of Visual Studio .NET - A Visual Basic .NET Crash Course Implementing Namespaces Namespaces make it easy to organize classes, functions, data types, and structures into a hierarchy. Namespaces allow you to quickly access classes and methods buried in the .NET Framework Class Library or in any other application that provides a namespace. The .NET Framework Class Library provides hundreds of classes and thousands of functions as well as data types and structures. Use the Imports statement to import a namespace for easy access to its classes and methods. Once imported, it is no longer necessary to use a fully qualified path to the desired class or method. For example: Imports System.Text ' Gives access to the StringBuilder class This ends the first of three parts of a sample chapter from The Book of Visual Studio .NET, ISBN 1-886411-69-7, from No Starch Press.
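A short sketch (not from the book) of the difference the Imports statement makes:

```vbnet
' Without an Imports statement, the fully qualified name is required:
Dim builder As New System.Text.StringBuilder()

' With "Imports System.Text" at the top of the file,
' the short class name is enough:
Dim builder2 As New StringBuilder()
builder2.Append("Hello, namespaces")
```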
http://www.developer.com/net/vb/article.php/10926_1570001_5/The-Book-of-Visual-Studio-NET---A-Visual-Basic-NET-Crash-Course.htm
How to Make a Game Like Jetpack Joyride in Unity 2D – Part 2 This is the second part of tutorial about How to create a game like Jetpack Joyride in Unity 2D. If you’ve missed the first part, you can find it here. In the first part of this tutorial series you created a game with a mouse flying up and down in a room. Oh, and don’t forget the flames, they look nice ;] Although the fire is fun to look at, simply adding jetpack flames doesn’t make a good game. In this part of the tutorial series you’re going to move the mouse forward through randomly generated rooms simulating an endless level. In addition, you’ll add a fun animation to make the mouse run when it is grounded. Getting started If you completed the first part of this tutorial series you can continue working with your own project. Alternatively you can download the starter project for this part of tutorial here: RocketMouse_Final_Part1 Find your project or download and unpack the final project for the first part. Then find and open the RocketMouse.unity scene. Making the Mouse Fly Forward It is time to move forward, literally. To make the mouse fly forward you will need to do two things. - Make the mouse actually move. - Make the camera follow the mouse. Adding a bit of code solves both tasks. Setting the Mouse Velocity That’s easy. Open the MouseController script and add following public variable: It will define how fast the mouse moves forward. Note: Once again, by making it a public variable you’re giving yourself an opportunity to adjust the speed from Unity without opening the script in MonoDevelop. After that add the following code at the end of FixedUpdate: This code simply sets the velocity x-component without making any changes to y-component. It is important to update only the x-component, since the y-component is controlled by the jetpack force. Run the scene. The mouse moves forward, but there is a problem. At some point the mouse just leaves the screen. 
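The code listings referenced above did not survive extraction. A reconstruction consistent with the description (the variable name forwardMovementSpeed is an assumption) might look like:

```csharp
// Public, so the forward speed can be tweaked in the Inspector
public float forwardMovementSpeed = 3.0f;

void FixedUpdate()
{
    // ...existing jetpack force code...

    // Set only the x-component of the velocity;
    // the y-component stays under jetpack control
    Vector2 newVelocity = rigidbody2D.velocity;
    newVelocity.x = forwardMovementSpeed;
    rigidbody2D.velocity = newVelocity;
}
```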
To fix this you need to make the camera follow the mouse. Making the Camera Follow the Player Create a new C# Script named CameraFollow. Drag it over the Main Camera in the Hierarchy to add it as a component. Open the CameraFollow script in MonoDevelop and add the following public variable: You will assign it to the mouse GameObject in a moment, so that the camera knows which object to follow. Add the following code to Update: This code simply takes the x-coordinate of the target object and moves the camera to that position. Note: You only change the x-coordinate of the camera, since you don't want it to move up or down following the mouse. Switch back from MonoDevelop to Unity and select the Main Camera in the Hierarchy. There is a new property in the CameraFollow script component called Target Object. You will notice that it is not set to anything. To set the Target Object, click on mouse in the Hierarchy and, without releasing, drag mouse to the Target Object field in the Inspector as shown below: Note: It is important not to release the mouse button, since if you click on mouse and release the mouse button you will select it and the Inspector will show the mouse properties instead of the Main Camera. Alternatively you can lock the Inspector to the Main Camera by clicking the lock button in the Inspector. Run the scene. This time the camera follows the mouse. This is a good news / bad news kinda thing: the good news is the camera follows the mouse! The bad news is that, well, nothing else does! You'll address this in a moment, but first, you need to give the mouse a little space. He's a shy sort of fella. :]
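The CameraFollow listing itself was lost in extraction; based on the description above, a sketch could be:

```csharp
public class CameraFollow : MonoBehaviour
{
    // Assigned in the Inspector; the object the camera follows
    public GameObject targetObject;

    void Update()
    {
        // Follow only the target's x-coordinate
        float targetX = targetObject.transform.position.x;
        transform.position = new Vector3(targetX, transform.position.y, transform.position.z);
    }
}
```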
To do this, select the mouse in the Hierarchy and set its Position to (-3.5, 0, 0) and run the scene. Wait, the mouse is still centered on the screen, but this has nothing to do with the mouse position. This happens because the camera script centers the camera at the target object. This is also why you see the blue background on the left, which you didn't see before. To fix this, open the CameraFollow script and add a private distanceToTarget variable: Then add the following code to Start: This will calculate the initial distance between the camera and the target. Finally, modify the code in Update to take this distance into account: The camera script will now keep the initial distance between the target object and the actual camera. It will also maintain this gap throughout the entire game. Run the scene. The mouse now remains offset to the left. Generating Endless Level Right now playing the game for more than a few seconds doesn't make much sense. The mouse simply flies out of the room into a blue space. You could write a script that adds backgrounds, places the floor and the ceiling and finally adds some decorations. However, it is much easier to save the complete room as a Prefab and then instance the whole room at once. Note: In a game like Jetpack Joyride, you'll often see different areas (aquarium, caves, etc.) that are each their own different Prefab. For the purposes of this game, you'll stick with one. Here is an excerpt from Unity documentation regarding Prefabs. In other words, you add objects to your scene, set their properties, add components like scripts, colliders, rigidbodies and so on. Then you save your object as a Prefab and you can instantiate it as many times as you like with all the properties and components in place.
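The distance-keeping version of CameraFollow described above (the code was lost in extraction; this is a reconstruction) amounts to:

```csharp
private float distanceToTarget;

void Start()
{
    // Remember the initial horizontal gap between camera and target
    distanceToTarget = transform.position.x - targetObject.transform.position.x;
}

void Update()
{
    // Maintain that gap for the rest of the game
    float targetX = targetObject.transform.position.x + distanceToTarget;
    transform.position = new Vector3(targetX, transform.position.y, transform.position.z);
}
```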
To include all these elements as part of the same Prefab, you'll first need to add them to a parent object. To do this, create an empty GameObject by choosing GameObject\Create Empty. Then select the GameObject in the Hierarchy, and make the following changes in the Inspector: - Rename it to room1 - Set its Position to (0, 0, 0) This is what you should see in the Inspector: Note: At this moment it is important to understand that the Empty is placed right in the center of the room and at the (0,0,0) point. This is not a coincidence and is done intentionally. When you add all the room parts into the Empty to group them, their positions will become relative to that Empty. Later, when you want to move the whole room by moving the Empty, it will be much easier to position it knowing that setting the position of the Empty will move the room center to that point. In other words, when you add objects to the Empty its current position becomes the pivot point. So it is much easier if the pivot point is at the center of the group rather than somewhere else. Move all the room parts (bg, bg, bg_window, ceiling, floor, object_bookcase_short1, object_mousehole) into room1, just as you did when you added the jetpack flames particle system to the mouse object. Note: If you decorated your room with more bookcases or mouse holes you should also add them to room1. Create a new folder named Prefabs in the Project browser. Open it and drag room1 from the Hierarchy directly into the Prefabs folder. That's it. Now you can see a Prefab named room1 containing all the room parts. To test it, try and drag the room1 Prefab to the scene. You will see how easy it is to create room duplicates using a Prefab. Note: You can reuse this Prefab not only in this scene but in other scenes too. The Idea Behind the Room Generation The idea behind the generator script is quite simple. The script has an array of rooms it can generate, a list of rooms currently generated, and two additional methods.
One method checks to see if another room needs to be added and the other method actually adds a room. To check if a room needs to be added, the script will enumerate all existing rooms and see if there is a room ahead, farther than the screen width, to guarantee that the player never sees the end of the level. As you can see in case #1 you don't need to add a room yet, since the end of the last room is still far enough from the player. And in case #2 you should already add a room. Note: The center of the mouse object doesn't lie on the left edge of the screen. So although the distance to the end of the level is less than the screen width, the player still won't see the level end in case #2, but he will soon, so it is better to add a room. Of course the image above is only a rough example; the real script will detect that it needs to generate a new room much earlier, right after the end of the room passes the point at which it is closer to the mouse than the screen width. Now that the idea is clear, it is time to add the script. Adding Script to Generate Rooms Create a new C# Script and name it GeneratorScript. Add this script to the mouse GameObject. Now the mouse should have two script components: Open GeneratorScript in MonoDevelop by double clicking it in the Project view or in the Inspector. First, add the System.Collections.Generic namespace, since you're going to use the List<T> class: Then add the following instance variables: The availableRooms will contain an array of Prefabs, which the script can generate. Currently you have only one Prefab (room1). But you can create many different room types and add them all to this array, so that the script could randomly choose which room type to generate. Note: The final project that you can download at the end of Part 3 contains multiple room types as well as other improvements, but right now it is easier to work with only one room Prefab.
The currentRooms list will store instanced rooms, so that the script can check where the last room ends and whether it needs to add more rooms. Once a room is behind the player character, the script will remove it as well. The screenWidthInPoints variable is just required to cache the screen size in points. Now, add the following code in Start: Here you calculate the size of the screen in points. The screen size will be used to determine if you need to generate a new room, as described above. The Method to Add New Room Add the following AddRoom method to GeneratorScript: This method adds a new room using the farthestRoomEndX point, which is the rightmost point of the level so far. Here is a description of every line of this method: - Picks a random index of the room type (Prefab) to generate. - Creates a room object from the array of available rooms using the random index above. - Since the room is just an Empty containing all the room parts, you cannot simply take its size. Instead you get the size of the floor inside the room, which is equal to the room's width. - When you set the room position, you set the position of its center, so you add half the room width to the position where the level ends. This gets the point at which you should add the room, so that it starts straight after the last room. - This sets the position of the room. You need to change only the x-coordinate since all rooms have the same y and z coordinates equal to zero. - Finally you add the room to the list of current rooms. It will be cleared in the next method, which is why you need to maintain this list. - Now take a short break, the next method is going to be a bit bigger. The Method to Check If New Room Is Required Ready? Add the GenerateRoomIfRequired method: It might look scary, but in fact it is quite simple. Especially if you keep in mind the idea previously described. - Creates a new list to store rooms that need to be removed.
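The GeneratorScript listings referenced above were lost in extraction; a reconstruction that matches the description (details such as the Find("floor") lookup are assumptions) could be:

```csharp
public GameObject[] availableRooms;    // room Prefabs the script can pick from
public List<GameObject> currentRooms;  // rooms currently instanced in the scene
private float screenWidthInPoints;

void Start()
{
    // Screen size in points, derived from the orthographic camera
    float height = 2.0f * Camera.main.orthographicSize;
    screenWidthInPoints = height * Camera.main.aspect;
}

void AddRoom(float farthestRoomEndX)
{
    // Pick a random room type and instantiate it
    int randomRoomIndex = Random.Range(0, availableRooms.Length);
    GameObject room = (GameObject)Instantiate(availableRooms[randomRoomIndex]);

    // The room's width equals the width of its floor
    float roomWidth = room.transform.Find("floor").localScale.x;

    // Place the room so it starts exactly where the level currently ends
    float roomCenter = farthestRoomEndX + roomWidth * 0.5f;
    room.transform.position = new Vector3(roomCenter, 0, 0);

    // Track the room so it can be removed later
    currentRooms.Add(room);
}
```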
Separate lists are required since you cannot remove items from the list while you are iterating through it.
- This is a flag that shows if you need to add more rooms. By default it is set to true, but most of the time it will be set to false inside the foreach.
- Saves the player position. Note: I always say position, but most of the time you only work with the x coordinate.
- This is the point after which the room should be removed. If the room position is behind this point (to the left), it needs to be removed. Note: You need to remove rooms, since you cannot endlessly generate rooms without removing them once they are no longer needed. Otherwise you will simply run out of memory.
- If there is no room after the addRoomX point you need to add a room, since the end of the level is closer than the screen width.
- In farthestRoomEndX you store the point where the level currently ends. You will use this variable to add a new room if required, since the new room should start at that point to make the level seamless.
- In the foreach you simply enumerate the current rooms. You use the floor to get the room width and calculate roomStartX (the point where the room starts, its leftmost point) and roomEndX (the point where the room ends, its rightmost point).
- If there is a room that starts after addRoomX then you don't need to add rooms right now. However, there is no break instruction here, since you still need to check if this room needs to be removed.
- If the room ends to the left of the removeRoomX point, then it is already off the screen and needs to be removed.
- Here you simply find the rightmost point of the level. This will be the point where the level currently ends. It is used only if you need to add a room.
- This removes rooms that are marked for removal. The mouse GameObject already flew through them and thus they are far behind, so you need to remove them.
- If at this point addRooms is still true then the level end is near.
addRooms will be true if the loop didn't find a room starting farther than the screen width. This indicates that a new room needs to be added. Phew, that was hard, but you've made it! Add a FixedUpdate to GeneratorScript containing the call to GenerateRoomIfRequired: This ensures that GenerateRoomIfRequired is periodically executed. Note: You don't have to call this method each time, and the method itself can be optimized, but for simplicity's sake you'll leave it like this. Setting the Script Options and Enjoying Now, return to Unity and select the mouse GameObject in the Hierarchy. In the Inspector, find the GeneratorScript component. Drag room1 from the Hierarchy to the Current Rooms list. Then open the Prefabs folder in the Project browser and drag room1 from it to Available Rooms. As a reminder, the Available Rooms property in the GeneratorScript is used as an array of room types that the script can generate. The Current Rooms property holds room instances that are currently added to the scene. This means that Available Rooms or Current Rooms can contain unique room types that are not present in the other list. Here is an animated GIF demonstrating the process. Note that I've created one more room type called room2, just to demonstrate what you would do if you had many room Prefabs. Run the scene. Now the mouse can endlessly fly through the level. Note that rooms are appearing and disappearing in the Hierarchy while you fly. And for even more fun, run the scene and switch to the Scene view without stopping the game. This way you will see how rooms are added and removed in real time. Note: If you run the scene and switch to the Scene view after some time you will only find an empty Scene. This is because the mouse already flew far to the right and all the rooms behind, including the first room, were removed. You can try to find the mouse or just restart the scene and switch to the Scene view straight away.
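The GenerateRoomIfRequired listing was likewise lost in extraction; following the walkthrough above, a reconstruction might be:

```csharp
void GenerateRoomIfRequired()
{
    List<GameObject> roomsToRemove = new List<GameObject>();
    bool addRooms = true;

    float playerX = transform.position.x;
    float removeRoomX = playerX - screenWidthInPoints; // rooms ending before this are behind the player
    float addRoomX = playerX + screenWidthInPoints;    // a room must start after this point
    float farthestRoomEndX = 0;

    foreach (var room in currentRooms)
    {
        float roomWidth = room.transform.Find("floor").localScale.x;
        float roomStartX = room.transform.position.x - roomWidth * 0.5f;
        float roomEndX = roomStartX + roomWidth;

        if (roomStartX > addRoomX)
            addRooms = false;           // a room already starts far enough ahead

        if (roomEndX < removeRoomX)
            roomsToRemove.Add(room);    // room is far behind the player

        farthestRoomEndX = Mathf.Max(farthestRoomEndX, roomEndX);
    }

    foreach (var room in roomsToRemove)
    {
        currentRooms.Remove(room);
        Destroy(room);
    }

    if (addRooms)
        AddRoom(farthestRoomEndX);
}

void FixedUpdate()
{
    GenerateRoomIfRequired();
}
```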
It doesn't want to move a muscle and simply lets the jetpack drag it along the floor. However, jetpack fuel is quite expensive, so it is better for the mouse to run while on the ground :] To make the mouse run, you're going to create an animation and modify the MouseController script to switch between animations while on the ground or in the air. Slicing the mouse_run Animation Spritesheet The frames of the running animation are contained within the mouse_run spritesheet, so first of all you need to slice it correctly. Open the Sprites folder in the Project browser and find mouse_run. Select it and set its Sprite Mode to Multiple in the Inspector. Then click the Sprite Editor button to open the Sprite Editor. In the Sprite Editor click the Slice button in the top left corner to open the slicing options. Set the Type field to Grid. Set the grid size to 162 x 156 and click the Slice button. You will see the grid immediately appear. Don't forget to click the Apply button to save changes. Close the Sprite Editor. Now if you expand mouse_run in the Project browser you will see that it was sliced into four different sprites. Creating Animations Now that you have all the frames, you can create the running and flying mouse animations. Note: The flying animation consists of one sprite: the sprite that you used to create the mouse, so you already had all the frames for this animation. To work with animations you will need to open the Animation window, if you don't have it opened already. Choose Window\Animation and open the Animation view. Place it somewhere so that you can see both the Animation view and the Project view. I prefer placing it on top, next to the Scene and the Game views, but you can place it anywhere you like. Before you create your first animation, create an Animations folder in the Project view and select that new folder. Don't forget that most new files in Unity are created in the folder that is currently selected in the Project browser.
Next, select the mouse GameObject in the Hierarchy, since new animations will be added to the most recently selected object in the Hierarchy. In the Animation window create two new clips, run and fly, by selecting [Create New Clip] in the dropdown menu at the top left corner, to the left of the Samples property. Note the 3 new files created in the Project view: the fly and run animations. You will also notice a mouse animator file. Select the mouse in the Hierarchy. In the Inspector, you will see that an Animator component was automatically added to it. Adding Run Animation Frames First, you're going to add frames to the run animation. Make sure both the Animation view and the Project view are visible. In the Animation view select the run animation. In the Project view, open the Sprites folder and expand the mouse_run spritesheet. Select all the animation frames: mouse_run_0, mouse_run_1, mouse_run_2, mouse_run_3. Drag the frames to the Animation view's timeline as shown below: Here is how the timeline should look after you have added the frames. Adding the Fly Animation Frame Believe it or not, the fly animation will consist of only one frame. Select the fly animation in the Animation view. In the Project view find the mouse_fly sprite and drag it to the timeline, just as you did with the run animation. But this time you only need to add one sprite. Make sure you stop the recording mode in the Animation view after you add animation frames, or you might accidentally animate the mouse position or other properties. Click the red dot in the Animation view's control bar to stop the recording. Why would someone want to create an animation with only one frame? Well, you will see in a moment that it will be much easier to switch between the running and flying mouse states using Animator transitions.
Fortunately both issues are quite easy to fix. The first one occurs because the animation is played too fast. The second one is due to the Animator settings. Note: Did you notice that the mouse is now running by default, although earlier it was just a single flying mouse sprite? This happens because you added the run animation first and the Animator component set it as the default animation. This way it starts playing as soon as you run the scene. To fix the animation speed, select the run animation in the Animation view and set the Samples property to 8 instead of 60. To fix the second issue, select the mouse GameObject in the Hierarchy and search for the Animator component in the Inspector. Disable Apply Root Motion and enable Animate Physics. Here is an excerpt from the Unity documentation about the Apply Root Motion property: In other words, you need to enable it if your animation changes the object's Transform. This is not the case right now, which is why you turned it off. Also, since the game is using physics, it is a good idea to keep animations in sync with physics. This is why you check the Animate Physics checkbox. Run the scene. Now the mouse walks on the floor. However, the mouse continues to walk even while it is in the air. To fix this, you need to create some animation transitions. Switching Between Animations Since there are two animations, you should make the mouse GameObject switch between them. To do this you're going to use the Animator Transitions mechanism. Creating Animation Transitions At this point you're going to need one more Unity window. In the top menu choose Window\Animator to add the Animator view. Currently you have two animations there: run and fly. The run animation is orange, which means that it is the default animation. However, right now there is no transition between the run and fly animations. This means that the mouse is stuck forever in the run animation state. To fix this you need to add two transitions: from run to fly and back from fly to run.
To add a transition from run to fly, right-click the run animation and select Make Transition, then hover over fly animation and left-click on it. To add a transition from fly to run, right-click the fly animation, select Make Transition and this time hover over run animation and left-click. Here is the process of creating both transitions: This has created two unconditional transitions which means that when you run the scene, the mouse will first play its run state, but after playing run animation one time, the mouse will switch to fly state. Once the fly state is completed, it will transition back to the run state and continue ad infinitum. It is hard to notice, because the fly animation takes only a fraction of a second to play, since there is only one frame. However, if you switch to the Animator while the scene is running you will see that there is a constant process of transitioning between the animations as follows: Adding Transition Parameter To break the vicious circle, you need to add a condition that controls when the fly animation should transition to the run animation and vice versa. Open the Animator view and find the Parameters panel in the left bottom corner. Currently it is empty. Click a + button to add a parameter, in the dropdown select Bool. Name the new parameter grounded. Select the transition from run to fly to open transition properties in the Inspector. In Conditions section change the only condition from Exit Time to grounded and set its value to false. Do the same with the transition from fly to run, but this time set the grounded value to true. This way the mouse state will be changed to fly when grounded is false, and to run when grounded is true. While you still have to pass in the parameters, you can test the transitions right now. To do this, run the scene, then make sure the Animator view is visible and check or uncheck grounded parameter while the game is running. 
Adding an Object to Check if the Mouse Is Grounded There are many ways to check if the game object is grounded. I like the following method because it provides a visual representation of the point where the ground is checked, and this is quite useful when you have many different checks (e.g. ground check, ceiling check, and so on). What makes this method visual is an Empty GameObject added as a child of the player character, as shown below. Go ahead and create an Empty GameObject, then drag it over the mouse GameObject in the Hierarchy to add it as a child object. Select this GameObject in the Hierarchy and rename it to groundCheck. Set its Position to (0, -0.7, 0). To make it more visual, click on the icon selection button in the Inspector and set its icon to the green oval. You can really choose any color, but green is truly the best. :] Here is what you should get in the end: The script will use the position of this Empty to check if it is on the ground. Using Layers to Define What Is Ground Before you can check that the mouse is on the ground you need to define what the ground is. If you don't do this, the mouse will walk on top of lasers, coins and other game objects with colliders. You're going to use the LayerMask class in the script, but to use it, you first must set the correct Layer on the floor object. Open the Prefabs folder in the Project view and expand the room1 Prefab. Select the floor that is inside the Prefab. In the Inspector click on the Layer dropdown and choose the Add Layer… option. This will open the Tags & Layers editor in the Inspector. Find the first editable element, which is User Layer 8, and enter Ground in it. All previous layers are reserved by Unity. Next, select the floor within the Prefabs folder once again and set its Layer to Ground. Checking if the Mouse Is Grounded To make the mouse automatically switch states, you will have to update the MouseController script to check if the mouse is currently grounded, then let the Animator know about it.
Open the MouseController script in MonoDevelop and add the following instance variables: The groundCheckTransform variable will store a reference to the groundCheck Empty that you created earlier. The grounded variable denotes whether the mouse is grounded. The groundCheckLayerMask stores a LayerMask that defines what counts as ground. Finally, the animator variable contains a reference to the Animator component. Note: It is better to cache components you get by GetComponent in an instance variable, since GetComponent is slow. To cache the Animator component, add the following line of code to Start: Now add the UpdateGroundedStatus method: This method checks if the mouse is grounded and sets the animator parameter: - To check if the mouse GameObject is grounded, you create a circle of 0.1 radius at the position of the groundCheck object that you added to the scene. If this circle overlaps any object that has a Layer specified in groundCheckLayerMask then the mouse is grounded. - This code actually sets the grounded parameter of the Animator, which then triggers the animation. Finally, add a call to UpdateGroundedStatus at the end of FixedUpdate: This calls the method with each fixed update, ensuring that the grounded status is consistently checked. Setting MouseController Script Parameters for the Ground Check There is only one small step left to make the mouse automatically switch between flying and running. Open Unity and select the mouse GameObject in the Hierarchy. Search for the Mouse Controller script component. You will see two new parameters of the script: Click the Ground Check Layer Mask dropdown and select the Ground layer. Drag groundCheck from the Hierarchy to the Ground Check Transform property. Run the scene. Enabling and Disabling Jetpack Flames Although you cured the mouse of laziness you haven't cured its wastefulness :] The jetpack is still on even when the mouse is on the ground. You don't want the mouse to go bankrupt, do you?
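For reference, the grounded-check additions to MouseController described in this section (a reconstruction of listings lost in extraction) could look like:

```csharp
public Transform groundCheckTransform;
public LayerMask groundCheckLayerMask;
private bool grounded;
private Animator animator;

void Start()
{
    // Cache the Animator once; GetComponent is slow
    animator = GetComponent<Animator>();
}

void UpdateGroundedStatus()
{
    // Grounded if a small circle at groundCheck's position overlaps
    // any collider on the Ground layer
    grounded = Physics2D.OverlapCircle(groundCheckTransform.position, 0.1f, groundCheckLayerMask);

    // Drive the Animator's transitions
    animator.SetBool("grounded", grounded);
}
```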
Especially since only a few tweaks in the code are needed to fix this. Open the MouseController script and add a public jetpack variable to store a reference to the particle system. Then add the following AdjustJetpack method: This method disables the particle system when grounded, and in addition it also decreases the emission rate when the mouse is falling down, since the jetpack might still be active, but not at full strength. Add a call to this method at the end of FixedUpdate: Just a reminder: the jetpackActive variable is true when you hold the left mouse button and false when you don't. Now switch back to Unity and drag jetpackFlames from the Hierarchy to the jetpack property of the MouseController script. Run the scene. Now the jetpack has three different states. It is disabled when the mouse is grounded, at full strength when going up, and at a small emission rate when the mouse is going down. I find it pretty realistic, since it is unlikely that you would just turn off the jetpack at the highest point and fall down. Where to Go From Here I hope you are enjoying the tutorial so far. You can download the final project for this part using this link: RocketMouse_Final_Part2 The next part will be the final part and has all the fun. You will add lasers, coins, sound effects, and much more. If you want to know more about Prefabs, the Unity documentation is a good place to start. If you have any comments, questions or issues (I hope you don't have them :]) please post them below. You can read earlier discussions of this topic in the archives
https://www.raywenderlich.com/69544/make-game-like-jetpack-joyride-unity-2d-part-2
Introduction: MKR1000 IoT Client/server Communications This project describes how to set up two Arduino/Genuino MKR1000 devices as server and client. The client MKR1000 will connect to your local wifi and listen for two inputs physically connected to the client; one from a button and the other from a vibration sensor. On sensing an input the client MKR sends a GET request to the server MKR. On receiving a GET request, the server MKR is set up to turn on/off the built-in LED (triggered by the client button) and to fade up and down an attached LED (triggered by the vibration sensor). Step 1: Creating a Server on the MKR Set up the MKR and breadboard as in the image. The red LED is connected through a 1K Ohm resistor to Pin #5. On the MKR this is a digital pin with pulse width modulation (PWM), which allows us to set a variable brightness for the red LED. The other side of the LED is connected to ground. The other LED used in this project is the onboard one on the MKR. This is marked "L" and is a green LED positioned close to the VCC pin. Now download (or just copy) the code for the MKR server from here: - the Arduino sketch name is "MKRServerLED.ino" Edit this to include your wifi network credentials and upload it to your MKR1000. Once uploaded, open your serial monitor. (See image for descriptions of output.) Initially this will show you little more than the IP address of the server. Take note of this address as you will need to include it in the client code too. At this point, the server is up - we're going to set up the other MKR1000 as a client to this server. However, because it's a server you will be able to connect to it from any device on your network by typing the provided IP address into any browser. Give this a go and note that the provided page has clickable addresses to change the status of the LEDs on your MKR1000 server. Also note that the serial monitor detail updates to acknowledge these GET requests received by the server.
Note: there are libraries you may need to install; I'm pretty certain you'll have to install the WiFi101 library at the very least. Having tinkered for a long time I'm not sure what you will or won't need from a fresh install. Please refer to the wealth of info available about installing libraries or any other issues you may have with connecting/uploading etc.

Step 2: Creating a Client to Send Requests to the Server

Again, set up the breadboard as shown in the image. In this case the button is connected to pin 9 and the vibration sensor is connected to pin 8. Both pins are digital pins, as the states for both of these inputs are binary.

Once complete you can download (or copy and paste) the client code from here - the file name is "MKRClientGET.ino".

At this point I recommend unplugging the server MKR from your PC, as you'll not see any difference in naming when you are selecting the COM port. Edit the code to provide your wifi network credentials and the MKR server IP address. Make sure you look for each instance of "192" and change it to your server IP address. Upload the code to the client MKR and open the serial monitor. See the image of serial monitor output and try hitting the button and triggering the vibration sensor.

Step 3: Test It!

You should be done. At this point you can provide power to each MKR1000 (however you choose to do so). Give them about 10 seconds and try triggering the client inputs to see the outputs on the server MKR.

Step 4: Troubleshooting

Before getting into troubleshooting, check the basics. Are you providing power to both MKRs? Are you sure the server code is on the server MKR and the client code is on the client MKR?

Possible issues and solutions:

1. C:\Users\tony\Documents\Arduino\MKRClientGET\MKRClientGET.ino:11:18: fatal error: 1234.h: No such file or directory #include <1234.h> ^ compilation terminated.
This is an issue with a library you haven't installed. As noted in previous steps, there is a wealth of info about this.

2. Server or client not making a connection to your wifi; likely you haven't provided your wifi credentials.

3. Client serial monitor showing state changes but no reaction on the server; likely caused by not providing the server IP address in your client code.

4. Button not showing a change of state in the serial monitor; check your breadboard contacts.
https://www.instructables.com/MKR1000-IoT-Clientserver-Communications/
Theoretical concepts in Windows Azure

Some useful theoretical concepts in the Microsoft Windows Azure Platform. I have tried to include the much-needed information pertaining to Windows Azure, and to present the points in a simple and easy-to-understand manner.

Windows Azure Platform

1. Windows Azure
a. Compute Roles - Web Role, Worker Role, VM
- Consists of hosting services; virtual machines configured via role instances.
- A role can be a Web Role or a Worker Role.
- Worker Role - Windows Server 2008 R2 (64-bit default) and .NET 3.5/4.0.
- Web Role - all components in a Worker Role + IIS.
- A role consists of:
--> Definition (static: endpoints, VM size (Extra Small, Small, Medium, Large, Extra Large - the resources to be used), settings, local storage, code; changing it means repackage and deploy)
--> Configuration (dynamic in nature, packaged separately)
- Endpoints can be Input or Internal. Input endpoints can be public. Internal endpoints allow communication between the roles.
- Every role inherits from the class RoleEntryPoint.
- Lifecycle of RoleEntryPoint:
--> OnStart() - role is in the busy state; the load balancer cannot pass requests to it.
--> Run() - role is now in the ready state; the load balancer can communicate with it.
--> OnStop() - shuts down the role. Happens while repackaging and re-deploying.
- Role code executes in WaIISHost.exe.
- VM (virtual machine) gives us more control over the virtual environment where our application is deployed. We deploy our application on a Microsoft Virtual Hard Drive (VHD).
b. Storage - Queue, Blob, Table
c. Connect

2. SQL Azure
a. Database
b. SQL Azure Reporting Service
c. SQL Azure Data Sync

3. Azure AppFabric
a. Service Bus
b. Caching
c. Access Control

Fabric Controller
- Configures the VM size, endpoints, etc.
- Communicates with the Fabric Agent.
- The Fabric Agent is present under the host VM.
- The Fabric Agent downloads the VHD for the Web Role and provides it to the Fabric Controller; the VM created from it is known as the guest VM.
- Our code runs on the guest VM. It can communicate with the Fabric Controller via the host VM only. Direct access is not provided.
- One node can have multiple guest VMs. Each guest VM has its own guest agent.

Storage
- We must create a storage account to host our application on the cloud. The limit is 100TB.
- Storage types supported are Blob, Tables (not RDBMS), Queues, and Drive (drives on local storage mapped to a Blob in Azure storage).
- Avoid read/write drives. They may run into problems if there are multiple instances.
- Creating a storage account will give us 3 URIs. For example, if cgStorage is a storage account it will create the following URIs: cgStorage.blob.core.windows.net/container/blob, cgStorage.table.core.windows.net, cgStorage.queue.core.windows.net
- By default, Blob access is secured. Set the permission level as public in the code or provide an access key.
- Cost plays an important role in architecting cloud applications. More interactions with the cloud may cost us more.

Table Storage
- Not RDBMS. It is an entity store.
- Entities must have some properties defined. The properties are: PartitionKey, RowKey, Timestamp.
- Many entities can share a PartitionKey; the RowKey must be unique within a partition.
- Each transaction can have 100 records.

Diagnostics
- A role (Worker Role or Web Role) can pass requests to the Azure diagnostics trace listener.
- This request is passed on to VM storage, which may contain IIS logs and trace information.
- Roles can be configured via the Diagnostics Monitor.
- The Diagnostics Monitor takes the information from the IIS logs and trace information and provides it to Azure Storage, from which we can easily fetch our records.
- The process is: capture diagnostics, configure the Diagnostics Monitor, and put the data in Azure Storage (Blobs or Tables).

SQL Azure
- Supports only SQL authentication and NOT Windows authentication.
- Cannot create jobs to run in the cloud, as SQL Server Agent is not present in the cloud.
- No support for CLR.
- Only core database-engine-level features are present.
- No backup is there in SQL Azure.
- The communication protocol is TDS.
- Protected by a firewall. Outside networks cannot access SQL Azure.
- A client request goes to the load balancer. The load balancer balances the request per the polling mechanism and passes it on to a gateway server. The gateway server then ensures that the request is passed to the appropriate database, and the response is sent back to the client.
CLIENT ---> Load balancer ---> Gateway Server 1 / Server 2 / Server 3 ---> Database
- Distributed transactions and distributed queries are not supported.

Windows Azure AppFabric
- Consists of Service Bus, Access Control, Caching.
- An API is provided by the caching service to read and write; the caching service in turn manages and stores the data in a cluster of servers.
- Windows Server AppFabric consists of "Dublin" (services for IIS, WCF, etc.) and "Velocity" (caching).
- In-proc session state will never work, since instances do not share memory.
- Service Bus provides 3 services: naming, discovery and relaying messages.
- When we register a service under the Service Bus, we register using a namespace.
- The service registers itself with the Service Bus. After the client connects, it makes the actual message call on the address registered with the Service Bus. Looking at the address from the client, the Service Bus sends the request to the service. The service processes the request and passes the response to the Service Bus, which in turn passes it on to the client.
- Hence the Service Bus doesn't do any processing; it is just used for relaying the messages and nothing else.
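The Table Storage model described above - entities addressed by PartitionKey and RowKey, with a system Timestamp - can be illustrated with a toy in-memory store. This is only a sketch of the addressing scheme, not the Azure storage client API:

```python
import time

# Toy model of an Azure-style table: each entity is a property bag addressed
# by (PartitionKey, RowKey). The RowKey must be unique within its partition.
class ToyTable:
    def __init__(self):
        self._rows = {}

    def insert(self, partition_key, row_key, **properties):
        key = (partition_key, row_key)
        if key in self._rows:
            raise KeyError("RowKey must be unique within a partition")
        properties["Timestamp"] = time.time()  # system-maintained property
        self._rows[key] = properties

    def get(self, partition_key, row_key):
        # Point lookup by the full (PartitionKey, RowKey) is the cheap operation.
        return self._rows[(partition_key, row_key)]

    def query_partition(self, partition_key):
        # Many entities may share one PartitionKey.
        return [v for (p, _), v in self._rows.items() if p == partition_key]
```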
https://www.dotnetspider.com/resources/44575-Theoretical-concepts-Windows-Azure.aspx
Building Python Lists

Thursday, August 27, 2009

Building primitive Python lists is a common task in virtually any Python application. The Python list is not unlike a traditional array in many respects, only more powerful. A common scenario is building a Python list from an iterator that yields values. In this case, each iteration adds a new element to the list being built. One method used to build Python lists using an iteration construct is the append method. This approach invokes the list append() method to append a new list element. Another approach to building Python lists is to place the iteration itself within the new list. This approach can be considered inline list building. The benefit to using the inline approach is the performance gain. The benefit to using the append method is readability when logic is required within each iteration that appends elements to the list. Below is an example illustrating both approaches.

#Example; Building Python lists.
import timeit

#Build a list consisting of ten elements using the append method.
def build_append():
    list_obj=[]
    for i in range(0,10):
        list_obj.append(i)

#Build a list consisting of ten elements using the inline method.
def build_inline():
    list_obj=[i for i in range(0,10)]

if __name__=="__main__":
    #Setup timers.
    t_append=timeit.Timer("build_append()",\
        "from __main__ import build_append")
    t_inline=timeit.Timer("build_inline()",\
        "from __main__ import build_inline")

    #Execute timers.
    r_append=t_append.timeit()
    r_inline=t_inline.timeit()

    #Show results.
    print "APPEND:",r_append
    print "INLINE:",r_inline
    print "DIFF %:",(r_inline/r_append)*100

Comment:

Hi, saw your blog on Proggit. Good post, but you didn't mention that the basis of your example, ie, list_obj=[i for i in range(0,10)], is called a "list comprehension". Also, using a simple range will lead to the same result as your example.
>>> [i for i in range(0,10)]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> range(0,10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
Cheers,
Bill
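For readers on modern Python 3, the same comparison can be written with timeit's callable interface (print is a function in Python 3, and timeit.timeit accepts callables directly, avoiding the setup-string import dance). Absolute timings vary by machine, but the inline form is consistently faster:

```python
import timeit

def build_append(n=10):
    out = []
    for i in range(n):
        out.append(i)
    return out

def build_inline(n=10):
    return [i for i in range(n)]

if __name__ == "__main__":
    # Both construct the same list; only the construction style differs.
    assert build_append() == build_inline()
    print("APPEND:", timeit.timeit(build_append, number=100_000))
    print("INLINE:", timeit.timeit(build_inline, number=100_000))
```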
http://www.boduch.ca/2009/08/building-python-lists.html
Intellisense Autocomplete Not working for Visual Studio 2010 VB code

I have the ubiquitous "intellisense not working" issue... here's my particular scenario: I have a solution with multiple projects... in one (Class Library) project, intellisense works great. In another (web service), the intellisense does not work. It used to, but is now failing. I googled this and discovered that it's quite a common issue, with all sorts of potential fixes. None seem to work for me: I've tried <ctrl><alt><space> ....... I've tried devenv /ResetSettings ....... I've tried excluding/including web.config ....... I've tried removing the .suo file ....... I've tried looking for an ncb file. None of these work. What's disturbing is that in the same Visual Studio session, in the same solution, I can flip up to the other project, and in there, Intellisense works great. In fact, in my editor MDI, I can have the code from one project in one tab with Intellisense working, and the code from the other project in another tab, not working. Go figure! This suggests a project-related issue, but I'm at a loss to find it. Aside: the entire codebase compiles cleanly without even any warnings, so that should have nothing to do with it. Any pointers would be appreciated.
15 February 2011 0:45 (Question)

All Replies

IS is context sensitive. The fact that it works in one project and not another only tells us that IS is working and is not completely broken. When IS needs information it requests that information from the language service associated with the currently open document. It is up to the LS to give IS the information to display. The .ncb file is the legacy IS file. For managed projects you won't have one, nor will you have one under VS2010, since IS moved to a database format. The .suo file has nothing to do with IS nor will /ResetSettings help you. The problem resides in your web service project itself. Do you get any IS at all? Meaning if you add a new code file and start typing, do you get any IS information at all? At a minimum it should start throwing keywords at you. If you get basic IS support then the problem resides in the references you're using. If only the system types appear and none of your custom types then it is probably a bad reference. If not even the system types show up then it is something more complicated. Is this a new project or one that was upgraded from VS 2008?
Michael Taylor - 2/16/2011
16 February 2011 15:05 (Moderator)

Great, that worked a treat for me. No idea how it got changed to "Content", but changing back to "Compile" did the trick. Thanks.
24 February 2011 17:15

Found your post after experiencing the same problem, but when I set it to Compile, I get an error about a duplicate reference: ERROR The namespace '<global namespace>' already contains a definition XXXX, where XXXX is the name of the WebService, but for the life of me I cannot find the duplicate (the "find all references" only returns one hit), and I cannot change the name of the webservice without breaking it. So I had to set the code-behind file back to Content so I could at least get it to compile.
Peter (BTMI)
28 January 2012 9:38

This is a compilation error related to your references and not an IDE issue. Please post your question in the C# General forum.
Michael Taylor - 1/28/2012
28 January 2012 14:55 (Moderator)

I had the same problem. I tried different methods suggested in different discussion forums, but none worked fine in my case. But in between I found the intellisense was not working for a single solution, and finally found that a compilation error makes the intellisense not work. So in order to solve your problem you may have to go through these steps:
- Compile the entire solution and correct the compilation errors.
If it is not working, do the following steps too:
- devenv.exe /ResetSettings
- devenv /resetuserdata
- Go to settings and check the following settings are there
2 March 2012 9:58

I have this same problem (Intellisense doesn't work and Go To Definition doesn't work). However, you can't just change the web service build action from Content to Compile ... it will break the project's ability to compile/build correctly. My work around is to simply just edit any line in the Web Service, even just putting in a blank space or carriage return will fix the problem (*.svc.vb). This IS an IDE bug and I suspect it might be specific to TFS.
5 September 2012 23:10 - Edited by Robin Ainscough 5 September 2012 23:14

Robin, your question is off topic for this forum. Your question is related to VB, which is an entirely different language written by an entirely different team. The fact that it shows up in the same IDE isn't relevant. The IDE just hosts the languages and their providers. Please post your question in the VB IDE forums.
6 September 2012 1:20 (Moderator)

I think the relevance is that the exact same IDE bug is present for both VB and C# ... so yes I think this is the correct thread to post on. Agree that the parser will operate differently based on the syntax (aka language), but given that it's broken for both C# and VB it would appear to me a common shared element is responsible for this bug. Separating out this response/post to some other forum will only take away useful information regarding the problem. Unless of course Microsoft engineers don't do any code re-use and sharing across parsers ... but wouldn't that be funny if that were the case ;)
7 September 2012 20:47
https://social.msdn.microsoft.com/Forums/th-TH/bd76accc-f7b1-4a61-8a88-437e07aeb507/intellisense-autocomplete-not-working-for-visual-studio-2010-vb-code?forum=csharpide
Introduction:

Pre-Build and Post-Build events are a powerful yet underused feature of Visual Studio. In this article I will explain how to use the Post-Build event to run your unit tests.

Why Not Use NAnt and Build Automation?

Ideally, NAnt scripts and automated builds are a much better solution than using the Pre/Post-Build options. But if you are working on a small project and don't want to deal with writing NAnt scripts, then you will find this solution to be useful.

Project Structure:

The project structure is fairly simple. We have three projects added to a single solution. Here is the list of the projects:

1) MyClassLibrary: Contains the domain objects
2) PrePostBuildEvents: Console application
3) TestSuite: Contains tests for the domain objects

Project Implementation:

The class library project "MyClassLibrary" contains a single domain object called "Calculator". Here is the implementation of the Calculator object:

public class Calculator
{
    public double Add(double a, double b)
    {
        return (a + b);
    }
}

The TestSuite project contains the following test:

[Test]
public void should_be_able_to_add_two_numbers()
{
    Calculator c = new Calculator();
    Assert.AreEqual(10, c.Add(5, 5));
}

So, everything is pretty simple! Now, we want that when we build our TestSuite it should run the unit tests automatically. In the next section we will see how to attach the Post-Build event to our TestSuite project so that it will automatically run the unit tests when the project build is successful.

Attaching the Post-Build Event:

To attach a Post-Build event on the TestSuite project, simply right click the project and select Properties. Select the Build Events tab from the right and type the following in the Post-Build event:

call $(ProjectDir)batchfile.bat

The event will be fired on a successful build, which will call a batchfile.bat contained inside the project directory of the application.

If you don't want to run the unit tests on every successful build of the TestSuite project, then select "When the build updates the project output" from the drop down list. This will fire the unit tests only when some new activity happens in the project. The new activity can be adding a new file, deleting an old file etc.

Creating a Batch File:

The batch file is very simple. Take a look at the contents of the batch file:

start c:\MbUnit\mbunit.cons.exe C:\Projects\PreBuildPostBuildEvents\TestSuite\bin\Debug\TestSuite.dll /rt:html /rf:C:\Reports\ /rnf:CalculatorTestReport
EXIT

It simply executes the MbUnit console driver and passes the parameters to run the unit tests and create an HTML report.

What About Pre-Build:

The Pre-Build event is fired before starting the build process. Scott Hanselman has a very interesting post about the Pre-Build events. Take a look at the following post: Managing Multiple Configuration Files Environment Using Pre-Build Events

Conclusion:

In this article we learned how to use the Post-Build event to run unit tests. I hope you liked the article, happy coding!
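One caveat with the batch file above: `start` launches the console runner in a new window and returns immediately, so the build will not wait for the tests or fail when they fail. The essence of a gating post-build step is to run the command synchronously and propagate its exit code; here is that idea sketched in Python for illustration (the article's real runner is MbUnit's console driver; a batch equivalent would drop `start` and end with `exit /b %ERRORLEVEL%`):

```python
import subprocess
import sys

# Illustration of a "gate the build on tests" step: run the test command
# synchronously and report its exit code (0 = success, non-zero should fail
# the build). The command below is a stand-in, not mbunit.cons.exe.
def run_gate(cmd):
    result = subprocess.run(cmd)
    return result.returncode

if __name__ == "__main__":
    code = run_gate([sys.executable, "-c", "raise SystemExit(0)"])
    print("exit code:", code)  # a real post-build step would exit with this code
```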
http://www.gridviewguy.com/Articles/398_Using_Post_Build_Event_to_Execute_Unit_Tests.aspx
Transcript 2971 STATEMENT BY THE PRIME MINISTER, MR EG WHITLAM QC MP, AND THE BY THE MINISTER FOR OVERSEAS TRADE AND SECONDARY INDUSTRY, DR JF CAIRNS MP, WEDNESDAY 18 JULY 1973 AT 7PM - TARIFF REDUCTION Whitlam, Gough Period of Service: 05/12/1972 to 11/11/1975 Release Date: 18/07/1973 Release Type: Statement Transcript ID: 2971 Document: Original Transcript (PDF 627.28 KB) _-rATEMENT BY THE PRIME MINISTER, MR. E. G. WHITLAM, CQ. C., M. P., lDBY THE MINISTER FOR OVERSEAS TRADE A19D SECONiDARY INDUSTIRY, * DR. J. F. CAIRNS, WEDNESDAY 18T-H JULY, 1973 AT 7. OOPM. TARIFF REDUCTION 1. The Prime Minister and the Minister for Overseas Trade and for Secondary Industry today announced a decision taken by the Australian Government for a reduction of in all tariffs each tariff will be reduced by 4 of what it is now excluding revenue items and anti-dumping duties. This reduction which will apply forthwith, is designed to restrain price increases by increased competition and by stimulating in the short run a sufficiently large inflow of additional imports to help-~ meet pressing demand. Action to combat price inflation through tariff action was clearly forecast in the Prime Minister's policy speech and has recently been urged upon the Government by Trade Union, primary production 1) commercial and industrial groups. The Government has also decided that in future by-law administration will be more flexible. The reduction is to be combined with an ambitious program of assistance to employees and firms who may be adversely affected by the tariff reduction. 2. The decision followed unanimous advice from a Committee which was appointed on 27 June and reported on July. It was chaired by Mr. G. A. Rattigan, Chairman of the Tariff Board and comprised Professor F. H. Gruen, Consultant to the Prime Minister; Mr. B. Brogan, Consultant to the Minister f or Overseas lrrade arid for Secondary Industry; Dr. S. F. Harris, Deputy Secretary, Departmewnt of Overseas -2- Trade; Mr. J. C. 
Taylor, First Assistant SecretCary, Department of the Prime Minister and Cabinet and Mr. F. A. Bennett, First Assistant Secretary, Department of Secondary Industry. The Committee's report will not be released at present because it also deals with matters affecting our trading relations and makes some comments on budgetary strategy. It will be released with the Budget papers. 3. The justification for the general reduction of tariffs is the excessive rate of inflation which now prevails. Inflation i s harmful to every Australian. Action must be taken to reduce its harmful effects with the least delayl. Inflation can be offset by an increase of supply of goods in . Australia. The most readily available source . of supply of goods is imports from overseas. Whilst it is 6xpected that imports will increase in the next few months, because of the high level of tariffs the increase would be insufficient to help combat inflation. Consequently the Government has decided to reduce tariffs so that imports may increase in the short term to help meet inflationary pressures in Australia. 4. The tariff changes will have a direct impact-on import prices of approximately the same order of magnitude as* a revaluation of slightly less than Increased competition in Australia will have a salutary effect upon those who have taken advantage of shortages by unjustified price increases which have exploited the public. The Joint Committee on 2 Prices will be asked to ensure that consumers get the full benefit of these reductions as it is already doing by its present inquiry on import prices. The earlier reference to the Committee related to the revaluation of the Australian dollar in December, 1972 and other relevant currency changes in 1973. The Government is similarly determined that import prices respond to the tariff reduction. The increased imports may affect production and employment in Australia. 
A Tribunal is being established to immediately hear appeals from any firm or company which may be seriously affected by imports. The Tribunal will be authorised to immediately recommend appropriatL-e assistance either to bring about changes for a firm or company or to restore the tariff level which previously protected it. Any person who may lose his job as a result of these changes will be entitled to receive, as special readjustment assistance, a weekly amount equal to his average wage in the previous six months until he obtains or is found~ suitable alternative employment. Subh persons will be of fered retraining for new and better occupations if they so desire. The Government has provided $ 25m to cover assistance which may be necessary as a result of the tariff changes. The procedure for providing assistance required as a result of these changes are set out in a separate statement which is attaolhed. 6. While these changes can be expected to require some workers to move from one employment to another this must be / 4 -4- seen against the existing high level of unfilled vacancies and rising employment opportunities. The Governmnent is confident that most of the employees who may be affected will gain from the assistance provided and from the new opportunities which will be created. In making -this decision the Government is conscious of the urgent need for some major steps to deal with inflation. It has adopted these changes as part of a complex of measures some of which have already been taken and others will follow. These tariff changes will assist in the fight against inflation in the interest of the nation as a whole and at the same time the Government is confident that losses which may affect individuals will be adequately otfset . by readily available assistance. 7. 
This decision represents a major step towards implementing the Government's objcctive of support for moves to liberalise international trade moves which in the long term can only strengthen the economies of the trading countries of the world. Consistent with these objectives the Government intends to imqplement its international commitments, which are of principle concern to New Zealand and Canada, in the contexi. of the new tariff structure. Furthermore, the scheme to aid developing countries, announced by the Minister for Overseas Trade and for Secondary Industry on 3 July, will be implemented and existing and future preference margins will be lowered from the new general tariff rate. p. C. ' Tl^ l T. ESS . LLHT, D I RV IJIG Yl. SP-CIAL ASSI C iEUIRDl A: A RESULT C THl-I. EAUIiES TAE. TOl IITCJ., 1IOA. E On the recommendation of the Prime Minister and Minister for Overseas Trade and for Secondary Industry, the Government has established a Tribunal to examine and report on requests for relief for the measures taken to increase the demand for imports rwhere a domestic industry is being seriously damaged by the tariff reduction. The tribunal will be guided by two basic principles; its recommendations should be compatible w. ith the long-term objectives of assistance to industries, and it should not provide relief as a matter of course that is, simply because the question of relief hod been referred to it. The Government has also decided that there should be established an effective ranre of adjustment assistance measures which would be available to assist in'those cases where the Tribunal felt that some assistance was required but that restoration of the duties in whole or in part *' as not a suitable means of assistance. If the Tribunal were to recommrend relief too readily by raising, perhaps, to their former level., individual tariffs which have just been. reduced as part of a general tariff cut, the purpose of the initial tariff cut would be largely frustrated. 
The desired increase in imports resulting fromthe tariff reduction would not occur, and the desired increase in aggregate domestic supplies would not be achieved. In other w. ords, the antiinflationary impact of the original measure would be seriously weakened. If. L the Tribulnal 7rere to provide relief ( paroticularly in more highly protected iusre) not by raisingr tariff-: s previously reduced, but by other measures such as; comnensati on ipayments the -nrohlmcrsd-' r---. ribed above could be partly avoided. ]' ut compensation paymiento or other direct subsidies to aparticular industry could ha,, ve uidecira'ole side effects, particularly if those payments were not m-iade conditi onal on some adjustment by the industry concerned to the new situation. For examiple, c ompersation p ayments, if the--y are equivalent to the tariLff te ubsiAdy' they replace, could continue to suqoport the use of res-ources in activities which c Tc. hi. gh c ost; in these circumstances the gain to consumilers of th-. e action proposed w! ould be at the ex-, pense of taxpayerS. In cases: iw. here it is evi-dent that the induS-try could, w--ith ass~ istance; adjust to the changed circum-, nstances of import com'.--. etition ( and of protection), it irould clearly b! e, in the in~ tcres'-ts of -the cotinmunity as a whl) ole t'Uo pre-fer adjus1-tme. qntU assistance tLo rctorinr: the hig~ h rates of duty. In summary, therefore, if. 1-relief from 113 effcts of the veneral tariff redu-ction is provided readily by partially,, or -w. holly restoring7 the tari-Cfs previ. ously reduced, part iculsarly w-here the previ-ous -Gari ff1-,-as hi-, h, the ob; jcctives of the initial tariff reduction vill tend to be frus2tr! Aed; and if the reief^ i, 17 readily provided in ways w-. hich do not induce some adjustmen to the new situ., i. tion hy hiehly protocted d~ omes L.-ic msuatrrthat relief wl c-nd to heD incons: irtent wi-4th the-) le{ Itrmojcti-; res-of protoctiou': policy. Adjustment -Assietance . 
yven's sucib as a currency hino atrf reduction can be to the overall Ibcnñ: flt of thic n) ation as a whole by makinf-more goodo.,: cervices available to the Australian people, by helping to restrain price incre! as.,., es and by improving the allocation of our resources and efficiency of our industries. These benlefits, however, ma~ y involve sonme cost in the form of reduced business opportunities for come firm-s and the need for changes in employmecnt and phaslocation for some individuals, V~ hlstthe benefits are enjoyed -enerally throug-hout the economy, the costs may tend to fall on a small minority in t'he cormmuity. In such circwmtan'cejcs, it is ineuitable toalo the cost to fall on any liarticuLn. r group1 csy-: ccially hen the chanYges r broughIt abkout by a conrscious-Governrncmnt decision. iKore importantly, i. t1i socially undesirabe for the cost to fall on g-roups in ther-cor.-unity iwhich are likely to be les! s. Tpri. -ili71, ed thani' the majority or on to indiiduls hich do not have the o nportvnity o r the capability to adapt read. cily to the cha-n-e in ircitac A. rositive aYMor1oachI, is needed for these hum_. an and economic problemE of structural techanological chiang7e uhich reflects a genuirie unosa wof the non-econcaic as well as the economic cost-, to those affTected and ,., hich in cons ecuenirce can. not be too f: lncly calculated. In th e abs--ence o--r effe ctive action to coun;, ter tete i contLinue -to reprceen-t a real iecnntto desira.) ble a-ridC beneficial chang. r: O \ ith . thes e poirlts in minrd, the !-in: Ltors for SccLodiacru sty, P~ bor an 20ia1 Securitly established an inter-departmental committee t0o 7tndy moasures ncected to facilitate desirable structur-al ch) anges in the Australian economy. 
This inter-departmental committee will be recommending a long-term approach to structural change. This will include: assistance to employees; social security measures; assistance to firms. The short-term tariff changes now taken will precipitate the need for immediate adjustment assistance. Because of the importance which this Government attaches to the availability of adjustment assistance, it has anticipated those elements of the long-term program that could be developed quickly to assist in adjustment to the proposed tariff changes.

Assistance to Employees. Such assistance will be given through the existing services, such as the Commonwealth Employment Service, with appropriate strengthening and augmentation such as the proposals already outlined by the Minister for Labour in his statement on manpower policy. The measures which will be applied include: grants to individuals to meet relocation expenses where employment can be offered in a different location; re-training for those suited by age, health, etc., including guaranteed minimum wages during re-training, payment of training fees and related transport costs, including living-away-from-home allowances where necessary; incentives to firms to train or re-train staff rendered redundant by tariff changes. The Government has decided that an initial allocation of … million could properly be made for these purposes.

Social Security Measures. Additional social security measures may also be needed for several purposes. In order to ensure adequate incomes for those temporarily displaced by the tariff change, the Government has decided it would be desirable for them to be paid their average weekly wage over the preceding six months for a period of up to six months, or until they are offered suitable alternative employment. Where special local unemployment problems are caused by the tariff changes, special local unemployment relief grants will be given in a form similar to the rural and urban unemployment relief schemes now being phased out. To provide adequate incomes for those members of the work force who become redundant in advance of normal retiring age as a result of the tariff change, and are unlikely to find suitable employment because of age, health or physical handicaps, the Government will make provision for early retirement benefits such as full superannuation entitlements, retirement grants and early retirement allowances. To minimise social problems for workers and families affected by the tariff changes, family counselling services will be provided. These measures will be carried out by the Department of Social Security. The Government has decided that an initial allocation of … be made for this purpose.

Assistance to Industries and Firms. Forms of adjustment assistance which should be considered by the Tribunal in relation to industries affected by the above tariff reduction include: rationalisation assistance to encourage, when appropriate, changes in an industry's structure (numbers of enterprises or establishments); changes in patterns of production (e.g. greater product specialization or diversification into new areas of production); investment in new equipment; changes in the occupational structure of an industry; and changes in the industrial structure of an affected region. Such assistance may be provided in the form of grants, loans, loan guarantees or a combination of these. Once its new legislation has been passed, the Australian Industry Development Corporation may be able to assist mergers within industries. Compensation to firms for closure, where the particular establishments cannot remain in production through schemes of rationalisation or other adjustment.
Where production cannot be continued efficiently, the establishments concerned should be closed, or the inefficient processes should cease. The long-term scheme under development for adjustment assistance to industries and firms will be administered by an Adjustment Assistance Board empowered to grant specified forms of assistance from a pool of funds appropriated for the purpose. Applications for assistance arising out of tariff changes are expected to be an important part of the Board's activities. The Government has decided that an effective means of handling this aspect, pending further development of the longer term proposals, is for the Interdepartmental Committee on Structural Adjustment (acting in concert with the Tribunal responsible for recommending when adjustment assistance is appropriate) to administer assistance to firms affected by the proposed tariff changes. It has decided further that an initial allocation of $10 million will be included in the 1973/74 budget for this purpose.

A prima facie case for reference to the Tribunal could be established if an industry claiming to be affected by the tariff change can demonstrate: that the firms comprising the industry were viable prior to the effects, or likely effects, of the tariff change being recommended; that the tariff change is causing or has caused serious injury affecting the viability of the firms or the jobs of their employees in such a way as to cause hardship or 'high' social cost; that the effects of the tariff change cannot be avoided or offset by action which the industry itself might reasonably be expected to take; and that the affected firms within the industry are unable to obtain financing needed to make necessary changes on reasonable terms and conditions from commercial sources without Government backing.

Procedures for Reviewing Requests for Assistance. The following general procedures have been established for reviewing requests for assistance from industries or persons employed in them: Requests for assistance should be submitted by organizations representative of the affected employers and employees. It would be unrealistic to allow for the submission of requests from individuals, or single firms, and at the same time to expect careful but speedy review of those requests. To ensure expedition and consistency of approach and result, the existing inter-departmental machinery for servicing Cabinet's consideration of assistance to industries will be the vehicle for co-ordinating the work of the relevant departments on requests for referral to the Tribunal. Requests for assistance should be accompanied by the data which would be required by the Tribunal to assess damage, and by the Interdepartmental Committee on Structural Adjustment to deal with questions of adjustment assistance. The data required will be specified in a questionnaire. The Tribunal should submit a report on a reference within 60 days of receiving it. This report should contain advice on: whether an industry, or persons employed in it, have been, or are likely to be, seriously damaged by the general reduction of tariffs; and, if so, the form of assistance (that is, by tariff action or adjustment assistance) which should be given to the industry or employees. If, on the evidence available, the Tribunal considers that adjustment assistance should be recommended, the general nature, extent and likely cost (to public revenue) of the adjustment assistance will be identified by the Interdepartmental Committee on Structural Adjustment. If tariff action is recommended, the Tribunal's report will recommend its form and level. The Tribunal will submit its reports to the Prime Minister, and they will be published at the same time as the decisions are announced on their recommendations, including the general nature of the decisions on adjustment assistance, if that form of relief is recommended. It will be noted that this timetable will require the Tribunal to decide on the question of damage within about three weeks of receiving the request for assistance and, in the remaining weeks, to settle the general nature and extent of the assistance to be recommended … the relation of damage and the alternative costs of assistance … to monitor during the initial stages of the sixty-day period. Composition of Tribunal. … to provide an advisory service … the potential economic benefits … the capacity to use the resources of relevant Government instrumentalities to service the Tribunal's advice on these economic matters. It has decided that … has the required background of experience and would provide expert advice on the total review, and will therefore be available to the Tribunal. As mentioned earlier, the Tribunal will report whether … and recommend either a restoration in whole or in part of the original duties, or that relief should take the form of
http://pmtranscripts.pmc.gov.au/release/transcript-2971
Although the previous chapter covered object initialization in great detail, it didn't quite cover all ways to initialize objects in Java, because it didn't cover all ways to create objects in Java. Aside from the new operator, which was the focus of the last chapter, Java offers two other ways to create objects: clone(), which is described in this chapter, and newInstance(), which is described in Part IV. The newInstance() method, a member of class Class, is most often used in the context of class loaders and dynamic program extension. The clone() method, a member of class Object, is Java's mechanism for making copies of objects.

The clone() Method

In Java, the way to make an identical copy of an object is to invoke clone() on that object. When you invoke clone(), it should either:

return an Object reference to a copy of the object upon which it is invoked, or
throw a CloneNotSupportedException

Because clone() is declared in class Object, it is inherited by every Java object. Object's implementation of clone() does one of two things, depending upon whether or not the object implements the Cloneable interface. If the object doesn't implement the Cloneable interface, Object's implementation of clone() throws a CloneNotSupportedException. Otherwise, it creates a new instance of the object, with all the fields initialized to values identical to the object being cloned, and returns a reference to the new object. The Cloneable interface doesn't have any members. It is an empty interface, used only to indicate that cloning is supported by a class. Class Object doesn't implement Cloneable. To enable cloning on a class of objects, the class of the object itself, or one of its superclasses other than Object, must implement the Cloneable interface. In class Object, the clone() method is declared protected. If all you do is implement Cloneable, only subclasses and members of the same package will be able to invoke clone() on the object.
To enable any class in any package to access the clone() method, you'll have to override it and declare it public, as is done below. (When you override a method, you can make it less private, but not more private. Here, the protected clone() method in Object is being overridden as a public method.) // In Source Packet in file clone/ex1/CoffeeCup.java class CoffeeCup implements Cloneable { private int innerCoffee; public Object clone() { try { return super.clone(); } catch (CloneNotSupportedException e) { // This should never happen throw new InternalError(e.toString()); } }; } } You could make a copy of this CoffeeCup class, which implements Cloneable, as follows: // In Source Packet in file clone/ex1/Example1.java class Example1 { public static void main(String[] args) { CoffeeCup original = new CoffeeCup(); original.add(75); // Original now contains 75 ml of coffee CoffeeCup copy = (CoffeeCup) original.clone(); copy.releaseOneSip(25); // Copy now contains 50 ml of coffee // Figure 15-1 shows the heap at this point in the program int origAmount = original.spillEntireContents(); int copyAmount = copy.spillEntireContents(); System.out.println("Original has " + origAmount + " ml of coffee."); System.out.println("Copy has " + copyAmount + " ml of coffee."); } } In this example, a new CoffeeCup object is instantiated and given an initial 75 ml of coffee. The clone() method is then invoked on the CoffeeCup object. Because class CoffeeCup declares a clone() method, that method is executed when clone() is invoked on the CoffeeCup object referred to by the original reference. CoffeeCup's clone() does just one thing: invoke the clone() method in CoffeeCup's superclass, Object. The first thing Object's clone() does is check to see whether the object's class implements the Cloneable interface. This test passes because CoffeeCup, the object's class, does indeed implement Cloneable. 
The clone() method then creates a new instance of CoffeeCup, and initializes its one field, innerCoffee, to 75--the same value it has in the CoffeeCup object being cloned. Object's clone() returns a reference to the new object, which is then returned by CoffeeCup's clone(). The reference returned by clone() refers to a CoffeeCup object, but the reference itself is of type Object. The code above downcasts the returned reference from Object to CoffeeCup before assigning it to local variable copy. At this point, both CoffeeCup objects--original and copy--contain 75 ml of coffee. Finally, 25 ml is removed from the copy, so it ends up with only 50 ml of coffee. A graphical representation of the result inside the Java Virtual Machine of executing the first four statements in main() is shown in Figure 15-1. (As mentioned in the last chapter, the native pointer to class information shown here is just one potential way a Java Virtual Machine could connect instance data to its class information.) Figure 15-1. Cloning a CoffeeCup. CoffeeCup's clone() implementation surrounds the call to Object's clone implementation with a try block so it can catch CloneNotSupportedException. This exception should never actually be thrown by Object's clone(), because in this case, CoffeeCup correctly implements Cloneable. If CoffeeCup's clone() didn't explicitly catch it, however, then clone() would have to declare in a throws clause that it may throw CloneNotSupportedException. This would force any method invoking clone() on a CoffeeCup object to deal with the exception, either by explicitly catching it or declaring it in their own throws clause. Thus, CoffeeCup's clone() catches CloneNotSupportedException to make it simpler for other methods to invoke clone() on a CoffeeCup. If you wish to enable cloning of an object that includes object references as part of its instance data, you may have to do more work in clone() than just calling super.clone().
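To see the burden that a throws clause would place on callers, here is a hypothetical variant (NoisyCup is not from the book) that declares the exception instead of catching it; every caller then needs its own try/catch or throws clause, even though the exception can never actually occur here:

```java
// NoisyCup declares CloneNotSupportedException rather than catching it
// inside clone(). Since NoisyCup implements Cloneable, super.clone()
// never actually throws -- but callers must still handle the exception.
class NoisyCup implements Cloneable {

    private int innerCoffee;

    @Override
    public Object clone() throws CloneNotSupportedException {
        return super.clone(); // cannot throw here, but the declaration forces callers to cope
    }
}
```

A caller is then obliged to write `try { NoisyCup copy = (NoisyCup) cup.clone(); } catch (CloneNotSupportedException e) { /* dead code, but required */ }`, which is exactly the noise the book's CoffeeCup avoids by catching the exception internally.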
Clone should return an independent copy of the object. Object's clone() will copy the value of each instance variable from the original object into the corresponding instance variables of the copy object. If one of those variables is an object reference, the copy object will get a duplicate reference to the same object. As an example, consider this version of CoffeeCup, in which the innerCoffee field has been upgraded from a mere int to a full fledged object reference: // In Source Packet in file clone/ex2/CoffeeCup.java class CoffeeCup implements Cloneable { private Coffee innerCoffee = new Coffee(0); public Object clone() { try { return super.clone(); } catch (CloneNotSupportedException e) { // This should never happen throw new InternalError(e.toString()); } } public void add(int amount) { innerCoffee.add(amount); } public int releaseOneSip(int sipSize) { return innerCoffee.remove(sipSize); } public int spillEntireContents() { return innerCoffee.removeAll(); } } // In Source Packet in file clone/ex2/Coffee.java public class Coffee implements Cloneable { private int volume; // Volume in milliliters Coffee(int volume) { this.volume = volume; } public Object clone() { try { return super.clone(); } catch (CloneNotSupportedException e) { // This should never happen throw new InternalError(e.toString()); } } public void add(int amount) { volume += amount; } public int remove(int amount) { int v = amount; if (volume < amount) { v = volume; } volume -= v; return v; } public int removeAll() { int all = volume; volume = 0; return all; } } Given these declarations of CoffeeCup and Coffee, there would be a surprise waiting for any method that attempts to clone a CoffeeCup object: // In Source Packet in file clone/ex2/Example2.java class Example2 { public static void main(String[] args) { CoffeeCup original = new CoffeeCup(); original.add(75); // Original now contains 75 ml of coffee CoffeeCup copy = (CoffeeCup) original.clone(); copy.releaseOneSip(25); // Copy now contains 
50 ml of coffee. // Unfortunately, so does original. // Figure 15-2 shows the heap at this point in the program int origAmount = original.spillEntireContents(); int copyAmount = copy.spillEntireContents(); System.out.println("Original has " + origAmount + " ml of coffee."); System.out.println("Copy has " + copyAmount + " ml of coffee."); } } Here, when releaseOneSip() is invoked on copy with a parameter of 25 ml, that amount of coffee is correctly removed from the CoffeeCup object referenced by copy. The trouble is that 25 ml of coffee is also removed from the cup referenced by original. The reason is that both the original and copy objects contain a reference to the same Coffee object. A graphical representation of the result of these statements is shown in Figure 15-2. Figure 15-2. Incorrect cloning of a CoffeeCup that contains object references. To rectify this situation, you need to modify CoffeeCup's clone() method: // In Source Packet in file clone/ex3/CoffeeCup.java class CoffeeCup implements Cloneable { private Coffee innerCoffee = new Coffee(0); public Object clone() { CoffeeCup copyCup = null; try { copyCup = (CoffeeCup) super.clone(); } catch (CloneNotSupportedException e) { // this should never happen throw new InternalError(e.toString()); } copyCup.innerCoffee = (Coffee) innerCoffee.clone(); return copyCup; } public void add(int amount) { innerCoffee.add(amount); } public int releaseOneSip(int sipSize) { return innerCoffee.remove(sipSize); } public int spillEntireContents() { return innerCoffee.removeAll(); } } In this version of clone(), Object's clone() is invoked as before. But instead of simply returning the reference to the new CoffeeCup object created by Object's clone(), the new CoffeeCup object is modified before it is returned. First, the Coffee object referenced by innerCoffee is cloned. A reference to the cloned Coffee object is then stored in the innerCoffee variable of the cloned CoffeeCup object . 
At this point, the original object and the clone refer to their own Coffee objects, but those Coffee objects are exact duplicates of each other. If you now performed the same statements on this version of CoffeeCup, you would once again have the expected behavior: // In Source Packet in file clone/ex3/Example3.java class Example3 { public static void main(String[] args) { CoffeeCup original = new CoffeeCup(); original.add(75); // original now contains 75 ml of coffee CoffeeCup copy = (CoffeeCup) original.clone(); copy.releaseOneSip(25); // Copy now contains 50 ml of coffee. // Original still has 75 ml of coffee. // Figure 15-3 shows the heap at this point in the program int origAmount = original.spillEntireContents(); int copyAmount = copy.spillEntireContents(); System.out.println("Original has " + origAmount + " ml of coffee."); System.out.println("Copy has " + copyAmount + " ml of coffee."); } } Because the CoffeeCup objects referenced by original and copy each have their own Coffee objects, when copy's was reduced by 25 ml, original's wasn't affected. A graphical representation of the result of these statements is shown in Figure 15-3. Figure 15-3. Proper cloning of a CoffeeCup that contains object references. These examples demonstrate the customary approach to writing clone(). The first thing to do in any clone() method (besides Object's) is invoke super.clone(). This will cause Object's implementation of clone() to be executed first. This scheme is similar to that of constructors, in which an invocation of the superclass's constructor is always executed first. Object's clone() will create a new instance of the class and copy the values contained in the original's instance data to the new object's instance data. Catching CloneNotSupportedException is also a good idea, to make calling clone() on that class of objects simpler to code. 
When super.clone() returns, a clone() method should make clones of any mutable objects referenced by its instance variables, and assign these clones to the instance variables of the copy. A mutable object is one whose state can change over the course of its lifetime. An object whose state can't change is immutable. An example of an immutable object is String. You must give a value to a String when you create it. Once created, a String's value can't change over the lifetime of the String object. The same is true for the wrapper objects Integer, Float, and so on. You assign them a value when they are created, and there is no way to change it for the remainder of their lifetimes. The real trouble with the clone() method shown above that didn't clone Coffee was that Coffee is mutable. When the state of the Coffee object changed ( volume changed from 75 to 50), both CoffeeCup objects saw their own internal state change. Had CoffeeCup included an instance variable of type String, you wouldn't have had to clone it because Strings are immutable. (In fact, you couldn't have cloned it, because String doesn't implement Cloneable. Since Strings are immutable, it doesn't make sense to clone them.) Java's cloning mechanism enables you to allow cloning, allow cloning conditionally, or forbid cloning altogether. If you wish to completely forbid cloning, you have a few different approaches to choose from. To decide which way to forbid cloning upon a particular class of objects, you must know something about the class's superclasses. If none of the superclasses implement Cloneable or override Object's clone() method, you can prevent cloning of objects of that class quite easily. Simply don't implement the Cloneable interface and don't override the clone() method in that class. The class will inherit Object's clone() implementation, which will throw CloneNotSupportedException anytime clone() is invoked on objects of that class. 
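The point about immutable fields can be seen in a small sketch (LabelledCup is a hypothetical class, not from the book): the shallow copy produced by super.clone() shares the String instance with the original, and that sharing is harmless precisely because a String can never change state.

```java
// LabelledCup holds a String field. Object.clone() copies the field
// reference, so original and clone share one String instance -- which is
// safe, because Strings are immutable.
class LabelledCup implements Cloneable {

    String label;

    LabelledCup(String label) {
        this.label = label;
    }

    @Override
    public Object clone() {
        try {
            // Field-by-field copy: the clone shares the same String instance.
            return super.clone();
        } catch (CloneNotSupportedException e) {
            // This should never happen
            throw new InternalError(e.toString());
        }
    }
}
```

Assigning a different String to the original's field later merely rebinds that field; it does not affect the clone, so no deep copy is ever needed for immutable attributes.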
All the classes shown as examples in this book prior to the CoffeeCup class declared immediately above used this method of preventing cloning. By doing nothing, they disallowed cloning. Thus, forbidding cloning is the default behavior for an object. In cases where a superclass already implements Cloneable, and you don't want the subclass to be cloned, you'll have to override clone() in the subclass and throw a CloneNotSupportedException yourself. In this case, instances of the superclass will be clonable, but instances of the subclass will not. For those of you who know C++, Java's clone() method is what happened to C++'s copy constructor. For those of you who don't know C++, a copy constructor is one which takes a single parameter of the same type as the class. In the body of the copy constructor, you have to copy all values from the object passed as a parameter to the object under construction. Like Java's clone() method, in a C++ copy constructor, you should allocate new memory for objects referenced (or pointed to) from instance (member) variables. For example, a copy constructor for class CoffeeCup would be:

// In Source Packet in file clone/ex4/CoffeeCup.java
// Copy constructors are not the Java way...
class CoffeeCup {

    private int innerCoffee;

    public CoffeeCup(CoffeeCup cup) {
        innerCoffee = cup.innerCoffee;
    }
    //...
}

One of the primary uses of the copy constructor in C++ is to pass objects by value. The copy constructor is used to create a copy of an object that is passed by value to a function. This is not an issue in Java, because all objects in Java programs are passed by reference. If you are a C++ programmer and feel the urge to write a copy constructor in a Java class, STOP! Close your eyes. Take a few deep breaths. Then--when you feel you're ready--open your eyes, implement Cloneable and write clone(). It will be OK.
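The subclass-veto approach mentioned above can be sketched as follows (hypothetical class names, not from the book). Note that this works only when the superclass's clone() keeps CloneNotSupportedException in its throws clause, since an override may not broaden the checked exceptions of the method it overrides:

```java
// A superclass that supports cloning...
class ClonableBase implements Cloneable {
    @Override
    public Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}

// ...and a subclass that vetoes it for its own instances only.
class NonClonableChild extends ClonableBase {
    @Override
    public Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException("NonClonableChild may not be cloned");
    }
}
```

With these declarations, cloning a ClonableBase succeeds, while cloning a NonClonableChild always throws, even though the subclass inherits Cloneable.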
http://www.artima.com/objectsandjava/webuscript/ClonCollInner1.html
15 March 2012 03:52 [Source: ICIS news] SINGAPORE (ICIS)--The company's sales revenue was up by 56.96% year on year to CNY37.56bn in 2011, the company said in a statement to the Shenzhen Stock Exchange. The company's 5m tonne/year refinery, a 450,000 tonne/year naphtha cracker and its derivative units, which have operated well since starting up in January 2010, contributed CNY20.64bn of the revenue last year, according to the statement. Its sales revenue from urea and methanol increased by 19.93% and 75.94% year on year to CNY2.85bn and CNY114m respectively, the statement said. Sales revenue from its polyolefins, aromatics and butadiene (BD) sectors rose by 65.08%, 102.09% and 102.54% year on year to CNY8.0bn, CNY2.9bn and CNY1.8bn last year, the statement said. Liaoning Huajin Tongda Chemicals, which is based
http://www.icis.com/Articles/2012/03/15/9541746/chinas-liaoning-huajin-tongda-chemicals-doubles-profit-in.html
NetBeans Selection Management Tutorial II - Using Nodes

Last reviewed on 2020-12-22

This tutorial is part 2 of a series. The previous tutorial covered the basics of component-wide selection handling in NetBeans - how to provide objects from a TopComponent's Lookup, and how to write other components that are sensitive to the Lookup of whatever component has focus. This tutorial assumes that you have completed the previous tutorial, and have the source code to hand, since we will be modifying it in this part. Later parts of this tutorial series introduce more advanced use of the Nodes API. For troubleshooting purposes, you are welcome to download the completed tutorial source code.

We begin by modifying the MyEditor class (from the previous tutorial). This starts with opening it in the editor. First, bring up the properties dialog for the My Editor project by right-clicking the My Editor project and choosing Properties. On the Libraries tab, click the Add Dependency button, and type "BeanTreeView" in the Filter text box: Select the "Explorer & Property Sheet API", and click OK to add this dependency so you can use classes from it. Open the MyEditor.java file. If not already showing, switch to the form designer. Select all of the components (two text fields, and the button), and delete them. In the following steps, we will use a component from the Explorer & Property Sheet API, instead of the components we have been using so far. Switch to the Source view of MyEditor.java, and rewrite the constructor of the MyEditor class as shown below:

public MyEditor() {
    initComponents();
    Event obj = new Event();
    associateLookup(new AbstractLookup(content));
    setLayout(new BorderLayout());
    add(new BeanTreeView(), BorderLayout.CENTER);
    setDisplayName("MyEditor " + obj.getIndex());
}

BeanTreeView is a component from the Explorer & Property Sheet API - a basic JTree-based view over a Node and its children, with built-in handling of popup menus, searching and more. Press Ctrl+Shift+I to import BeanTreeView, because the import statement needs to be added. Remove the unused updateContent() method. Modify the class signature so that MyEditor also implements ExplorerManager.Provider. With the cursor in the signature line, a lightbulb glyph should appear in the margin. Press Alt+Enter, and accept the hint "Implement all abstract methods". This will add one method, getExplorerManager(). Implement it as follows:

private final ExplorerManager mgr = new ExplorerManager();

@Override
public ExplorerManager getExplorerManager() {
    return mgr;
}

Now, since the goal is one component that can display multiple Events, you need a Node or two to display in your component. Each one will own an instance of Event. So, right now you'll just add the code that will create a root node for your tree view. Add the following line to the constructor:

mgr.setRootContext(new AbstractNode(Children.create(new EventChildFactory(), true)));

This is the code that sets the root node for all of the explorer views that are child components of MyEditor. The Children.create is a static call from the NetBeans APIs that will, thanks to the true parameter, create the child components asynchronously, that is, as needed, instead of all at once. If you tried Fix Imports, you may have seen the error dialog telling you that neither AbstractNode, Children, nor EventChildFactory could be resolved. To resolve AbstractNode and Children, you need to add one dependency, on the Nodes API module. Right click the My Editor project, select Properties, and add that dependency on the Libraries tab. After that, only EventChildFactory could not be resolved. That's okay - you're about to write it, in the next section.

The next step is to create EventChildFactory, so that there are subnodes underneath the initial node. Right click the org.myorg.myeditor package in the My Editor project, and choose New > Java Class from the popup menu. In the New Java Class wizard, name the class "EventChildFactory", and click Finish or press Enter to create the class. Modify the signature of the class so it extends ChildFactory:

public class EventChildFactory extends ChildFactory<Event> {

Press Ctrl+Shift+I to Fix Imports. Position the cursor in the class signature line. When the lightbulb glyph appears in the margin, press Alt-Enter and then Enter again to accept the hint "Implement all abstract methods". This will add a protected createKeys(List<Event> list) method - this is where you will create the keys, on a background thread, that will be used to create the children of your root node. The children will be created the first time the object is asked for its child nodes. So you can delay creation of child Nodes until the user has really expanded the parent node in a view and needs to see them. Implement the method as follows:

@Override
protected boolean createKeys(List<Event> list) {
    Event[] objs = new Event[5];
    for (int i = 0; i < objs.length; i++) {
        objs[i] = new Event();
    }
    list.addAll(Arrays.asList(objs));
    return true;
}

As you may have guessed from the name ChildFactory, what your parent class does is take an array or Collection of key objects, and act as a factory for child nodes for them. For each element in the array or collection you pass to the list parameter above, the createNodeForKey() method shown below will be called once when true is returned. Now you need to implement the code that actually creates Node objects for all of these. Implement createNodeForKey as follows:

@Override
protected Node createNodeForKey(Event key) {
    Node result = new AbstractNode(
        Children.create(new EventChildFactory(), true),
        Lookups.singleton(key));
    result.setDisplayName(key.toString());
    return result;
}

The new Node is created by passing in the definition of its Children, together with the current Event, which is put into the Lookup of the Node. When the user selects the Node, the object in its Lookup will be proxied by the Lookup of the TopComponent, which in turn is proxied by the global Lookup. In this way, you make the current Event object available to any object that is interested in it, whenever the Node is selected.
Some of the options are: OutlineView - a tree-table - a table that has a tree as the leftmost column: IconView - a component that shows Node children in equally spaced icons, similar to Windows Explorer ListView - display nodes in a JList (you can set how deep into the Node hierarchy it should go) ChoiceView - a combo-box view of a Node and its children (typically used in combination with other elements, rather than being the primary view) MenuView - a JButtonthat pops up a menu of a Node and its children. Switch to Source view if not already selected. Replace the resultChanged()listener method with the following code: @Override public void resultChanged(LookupEvent lookupEvent) { Collection<? extends Event> allEvents = result.allInstances(); if (!allEvents.isEmpty()) { StringBuilder text1 = new StringBuilder(); StringBuilder text2 = new StringBuilder(); for (Iterator i = allEvents.iterator(); i.hasNext();) { Event o = (Event)(""); } } As usual, fix imports. Clean and Build, and Run again. So you can see that the Lookup created by ExplorerUtils handle not only proxies the Lookup of whatever Node is selected; it also correctly proxies the Lookup for multiple selected entries.. A Nodeis a presentation object ( Event in your case). In the next tutorial, you will cover how to enhance the Nodes you have already created with actions, properties and more colorful display names.
https://netbeans.apache.org/tutorials/nbm-selection-2.html
This FAQ provides answers to the most frequently asked questions from Oracle Call Interface (OCI) application developers. Specific topics discussed are:: See Part 1, "Basic OCI Concepts" of Oracle Call Interface Programmer's Guide, Volume I for more information on the basics of OCI programming. Question. With the release of Oracle8i are there any new reasons to choose OCI instead of Pro*C to develop applications? Answer. The key distinctions between OCI and Pro*C are: Question. Why is Oracle8i OCI a better API for a scalable, multi-threaded application than Open Database Connectivity (ODBC) 3.0? Answer.: Some of these comments may or may not be applicable to your particular application. For example, if an application invocation is always dedicated for a user, then sessions, transactions, multiplexing, and multi-threading are not issues. This section provides answers to the most frequently asked questions about objects. It also includes an overview of objects, the object cache, and navigational access. The following topics are discussed: Oracle8i has facilities for working with object types and objects. An object type is a user-defined data structure representing an abstraction of a real-world entity. For example, the database might contain a definition of a person object. That object might have attributes--first that are new to Oracle8i. OCI includes a set of datatype mapping and manipulation functions that enable an application to manipulate these datatypes, and thus manipulate the attributes of objects. Throughout the following sections,: See Oracle8 Concepts and Oracle8 Application Developer's Guide - Fundamentals for a more detailed explanation of object types and objects. See "Part II OCI Object Concepts" of Oracle Call Interface Programmer's Guide, Volume II for detailed information on the use of Oracle8i objects with OCI. Question. Does OCI provide an external function to free unreferenced objects in the client-side object cache? Answer.. Question. 
What controls the size of the client-side object cache? Answer. The client-side object cache size is controlled by the following two attributes of the environment handle: These parameters refer to levels of cache memory usage, and they help to determine when the object cache automatically ages out eligible objects to free up memory. If the memory occupied by the objects currently in the object cache reaches or exceeds the high watermark, the object cache automatically begins to free unmarked objects that have a pin count of 0. The object cache continues freeing such objects until memory usage in the object. Oracle8i OCI, mutexes are granted on a per-environment-handle basis. Question. If a client-side application is multi-threaded or multi-process, do all processes share a single client-side object cache or does each have its own object cache? Answer.. Question. Can several threads or processes participate in a single session or transaction? Answer.. This section provides answers to the most frequently asked questions about user session handles. It also includes an overview of user session handles. The following topics are discussed:. Question. Should OCI_PRELIM_AUTH be used with SYSDBA or SYSOPER? Answer.. This section provides answers to the most frequently asked questions about threads. It also includes an overview. The following topics are discussed:. Question. Can I pass a handle between threads? That is, can I pass control from one thread to another so that the second thread can call OCI using the handle that initially has been used by another thread? Answer.. Question. How can thread safety levels be set in OCI? Answer. In order to take advantage of thread safety in Oracle8i OCI, an application must be running on a thread safe platform. 
Then the application must tell the OCI layer that the application is running in multi-threaded mode, by specifying OCI_THREADED for the mode parameter of the opening call to OCIInitialize(), which must be the first OCI function called in the application. Applications running on non-thread-safe platforms should not pass a value of OCI_THREADED to OCIInitialize(). If an application is single-threaded, whether or not the platform is thread safe, the application should pass a value of OCI_DEFAULT to OCIInitialize(). Single-threaded applications that run in OCI_THREADED mode may incur performance hits. If a multi-threaded application is running on a thread-safe platform, the OCI library manages mutexing for the application on a per-environment-handle basis. If the application programmer desires, this application can override this feature and maintain its own mutexing scheme. This is done by specifying a value of OCI_NO_MUTEX to the OCIEnvInit() call. The following two scenarios are possible, depending on how many connections exist per environment handle, and how many threads are spawned per connection: Question. Is it feasible to use OCI release 7.3.3 in non-thread safe mode within a multi-threaded program? Answer. No. If you have multiple threads of execution, then you must always call opinit(OCI_EV_TSF). Question. What level of locking should be maintained between the different threads after I have called opinit(OCI_EV_TSF)? Answer.. Question. Can I link an OCI release 7.3.x multi-threaded program with Oracle8i OCI and obtain correct multi-threading behavior without making any changes? Answer. You do not have to make any changes. Your OCI release 7.3.x multi-threaded program should run when linked with Oracle8i OCI libraries. This section provides answers to the most frequently asked questions about large objects (LOBs). It also includes an overview. 
The following topics are discussed: A LOB (large object) is an Oracle datatype that can hold up to 4 GB first allocates the LOB locator and then uses database. Question. What is the size of the Oracle8i LOB locator? Answer. Oracle8i stores the LOB in the row containing the LOB column. This is because for smaller LOBs there is no extra disk input/output (I/O). Question. How many bytes are used when the LOB column value is NULL? Answer. When a LOB column is NULL, only the NULL indication is stored, which is 1 byte. Question. Are there any other things to note when using a binary FILE (BFILE) other than it is read-only? Answer.. Question. What happens when the operating system file changes? For example, if I delete the file and do not change anything on the BFILE field. Answer.. Question. What happens if the DIRECTORY objects are dropped or replaced when the database is in operation? Answer.. Question. Can I put several LOBs in the same block or do I have to put each LOB in its own block? Answer.. A chunk is different from a data block. The chunk size for LOBs can be specified as part of the LOB storage specification. Question. Is it possible to use OCI to create a new persistent object with a LOB attribute and write to that LOB attribute? Answer. Yes. The application follows these steps: Question. Is there a way to test if the LOB data associated with the OCILobLocator() is NULL before performing an OCILobRead() or OCILobGetLength()? Answer.. Question. What are the recommendations regarding the parameter "Enable/Disable storage in row" when using small BLOBs? For example, if you are using 1K BLOBs and 5K BLOBs. Answer. The default for LOBs is that the system automatically figures out whether or not a particular LOB should be stored inline in the row, or outside the row. If the LOB is greater than approximately 4000 bytes, it is automatically be stored outside the row. 
If you are mostly using large LOBs, you may decide to enforce this separation to avoid extra disk contention on the disk that stores the table data.. Question. How do you decide the optimal chunk size regarding the block size and the file system physical I/O? For example, if you are using 1K BLOBs, 5K BLOBs, and 40K BLOBs. Answer. The largest chunk allowed is 32k. The smallest chunk is one database. It is recommend to choose a chunk size of 8k. Question. When reading in stream mode, does Oracle8i use synchronous or asynchronous reads? That is, does control return to the client after the first chunk read or only at the end of a chunk read? Answer. You can read BLOBs in two modes: This section provides answers to the most frequently asked questions about transactions. It also includes an overview. The following topics are discussed: Oracle8i OCI provides a set of API calls to support operations on both local and global transactions. These calls include object support, so that if an OCI application is running in object mode, the commit and rollback calls synchronizes. Depending on the level of transactional complexity in your application, you may need all or only a few of these calls. The following section discusses this in more detail. OCI supports three levels of transaction complexity. Each level is described in one of the following sections. Many applications work with only simple local transactions. In these applications, an implicit transaction is created when the application makes database changes. The only transaction-specific calls needed by such applications are: authorizations, each one can have an implicit transaction associated with it. Applications requiring serializable or read-only transactions require an additional OCI call beyond those needed by applications operating on simple local transactions. To initiate a serializable or read-only transaction,. 
As they are not further discussed in this guide, see "OCI Programming Advanced Topics" of Oracle Call Interface Programmer's Guide, Volume I for more information. Question. At what time does an implicit transaction start? Answer.. Question. If I do not have access to implicit transaction, how do I find out what features this transaction has? Answer. The transaction has the usual (ACID) properties. Unlike global transactions (that are started explicitly), implicit transactions cannot be migrated between service contexts. Unless you are planning on building a multi-tier application, implicit transactions should be sufficient. ACID is a mnemonic for the properties a transaction should have to satisfy the Object Management Group Transaction Service specifications. A transaction should be Atomic, its result should be Consistent, Isolated (independent of other transactions) and Durable (its effect should be permanent). Question. Do I need to allocate the transaction handle before starting an implicit transaction? Answer. No. You do not need to allocate the transaction handle. But you will not be able to get any attributes of this transaction later in the program. Question. What steps should be performed to start a serializable or read-only transaction? Answer. To initiate a serializable or read-only transactions: Specifying the read-only option in the OCITransStart() call saves the application from performing a server round-trip to execute a SET TRANSACTION READ ONLY statement. Question. I am currently setting the value_sz parameter in OCIDefineByPos() to SB4MAXVAL(2147483647) which works successfully. What is the difference between setting value_sz to a small versus a large value? Are internal buffers created and is it more efficient to use small values? Why is this parameter necessary? Answer.. This section provides answers to the most frequently asked questions about handles and descriptors. It also includes an overview. The following topics are discussed:. 
See "OCI Programming Basics" of Oracle Call Interface Programmer's Guide, Volume I and "Handle and Descriptor Attributes" of Oracle Call Interface Programmer's Guide, Volume II for detailed information on handles and descriptors. Question. What is the significance of the numbers 21, 33, 52, and 100 in the source code example, CDEMO81.c in Oracle Call Interface Programmer's Guide, Volume II? Answer. The source code example CDEMO81.c in Oracle Call Interface Programmer's Guide, Volume II has several calls to OCIEnvInit() and OCIHandleAlloc() that specify values for the xtramem_sz parameter. For example: OCIEnvInit (... OCI_DEFAULT, 21, ...); OCIHandleAlloc (... OCI_HTYPE_ERROR, 33, ...); OCIHandleAlloc (... OCI_HTYPE_SERVER, 52, ...);. #ifdef RCSID static char *RCSid = "$Header: cdemo81.c 14-oct-98.17:04:30 dchatter Exp $ "; #endif /* RCSID */ /* Copyright (c) Oracle Corporation 1996, 1997, 1998. All Rights Reserved. */ /* NAME cdemo81.c - Basic OCI V8 functionality DESCRIPTION * An example program which adds new employee * records to the personnel data base. Checking * is done to insure the integrity of the data base. * The employee numbers are automatically selected using * the current maximum employee number as the start. * * The program queries the user for data as follows: * * Enter employee name: * Enter employee job: * Enter employee salary: * Enter employee dept: * * The program terminates if return key (CR) is entered * when the employee name is requested. * * If the record is successfully inserted, the following * is printed: * * "ename" added to department "dname" as employee # "empno" Demonstrates creating a connection, a session and executing some SQL. Also shows the usage of allocating memory for application use which has the life time of the handle. 
MODIFIED (MM/DD/YY) dchatter 10/14/98 - add the usage of xtrmemsz and usrmempp azhao 06/23/97 - Use OCIBindByPos, OCIBindByName; clean up echen 12/17/96 - OCI beautification dchatter 07/18/96 - delete spurious header files dchatter 07/15/96 - hda is a ub4 array to prevent bus error mgianata 06/17/96 - change ociisc() to OCISessionBegin() aroy 04/26/96 - change OCITransCommitt -> OCITransCommit slari 04/24/96 - use OCITransCommitt aroy 02/21/96 - fix bug in get descriptor handle call lchidamb 02/20/96 - cdemo81.c converted for v8 OCI lchidamb 02/20/96 - Creation */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <oci.h> static text *username = (text *) "SCOTT"; static text *password = (text *) "TIGER"; /* Define SQL statements to be used in program. */ static text *insert = (text *) "INSERT INTO emp(empno, ename, job, sal, deptno)\ VALUES (:empno, :ename, :job, :sal, :deptno)"; static text *seldept = (text *) "SELECT dname FROM dept WHERE deptno = :1"; static text *maxemp = (text *) "SELECT NVL(MAX(empno), 0) FROM emp"; static text *selemp = (text *) "SELECT ename, job FROM emp"; static OCIEnv *envhp; static OCIError *errhp; static void checkerr(/*_ OCIError *errhp, sword status _*/); static void cleanup(/*_ void _*/); static void myfflush(/*_ void _*/); int main(/*_ int argc, char *argv[] _*/); static sword status; int main(argc, argv) int argc; char *argv[]; { sword empno, sal, deptno; sword len, len2, rv, dsize, dsize2; sb4 enamelen = 10; sb4 joblen = 9; sb4 deptlen = 14; sb2 sal_ind, job_ind; sb2 db_type, db2_type; sb1 name_buf[20], name2_buf[20]; text *cp, *ename, *job, *dept; sb2 ind[2]; /* indicator */ ub2 alen[2]; /* actual length */ ub2 rlen[2]; /* return length */ OCIDescribe *dschndl1 = (OCIDescribe *) 0, *dschndl2 = (OCIDescribe *) 0, *dschndl3 = (OCIDescribe *) 0; OCISession *authp = (OCISession *) 0; OCIServer *srvhp; OCISvcCtx *svchp; OCIStmt *inserthp, *stmthp, *stmthp1; OCIDefine *defnp = (OCIDefine *) 0; OCIBind *bnd1p = 
(OCIBind *) 0; /* the first bind handle */ OCIBind *bnd2p = (OCIBind *) 0; /* the second bind handle */ OCIBind *bnd3p = (OCIBind *) 0; /* the third bind handle */ OCIBind *bnd4p = (OCIBind *) 0; /* the fourth bind handle */ OCIBind *bnd5p = (OCIBind *) 0; /* the fifth bind handle */ OCIBind *bnd6p = (OCIBind *) 0; /* the sixth bind handle */ ); checkerr(errhp, OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp, OCI_HTYPE_STMT, (size_t) 0, (dvoid **) 0)); checkerr(errhp, OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp1, OCI_HTYPE_STMT, (size_t) 0, (dvoid **) 0)); /* Retrieve the current maximum employee number. */ checkerr(errhp, OCIStmtPrepare(stmthp, errhp, maxemp, (ub4) strlen((char *) maxemp), (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT)); /* bind the input variable */ checkerr(errhp, OCIDefineByPos(stmthp, &defnp, errhp, 1, (dvoid *) &empno, (sword) sizeof(sword), SQLT_INT, (dvoid *) 0, (ub2 *)0, (ub2 *)0, OCI_DEFAULT)); /* execute and fetch */ if (status = OCIStmtExecute(svchp, stmthp, errhp, (ub4) 1, (ub4) 0, (CONST OCISnapshot *) NULL, (OCISnapshot *) NULL, OCI_DEFAULT)) { if (status == OCI_NO_DATA) empno = 10; else { checkerr(errhp, status); cleanup(); return OCI_ERROR; } } /* * When we bind the insert statement we also need to allocate the storage * of the employee name and the job description. * Since the lifetime of these buffers are the same as the statement, we * will allocate it at the time when the statement handle is allocated; this * will get freed when the statement disappears and there is less * fragmentation. 
* * sizes required are enamelen+2 and joblen+2 to allow for \n and \0 * */ checkerr(errhp, OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &inserthp, OCI_HTYPE_STMT, (size_t) enamelen + 2 + joblen + 2, (dvoid **) &ename)); job = (text *) (ename+enamelen+2); checkerr(errhp, OCIStmtPrepare(stmthp, errhp, insert, (ub4) strlen((char *) insert), (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT)); checkerr(errhp, OCIStmtPrepare(stmthp1, errhp, seldept, (ub4) strlen((char *) seldept), (ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT)); /* Bind the placeholders in the INSERT statement. */ if ((status = OCIBindByName(stmthp, &bnd1p, errhp, (text *) ":ENAME", -1, (dvoid *) ename, enamelen+1, SQLT_STR, (dvoid *) 0, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)) || (status = OCIBindByName(stmthp, &bnd2p, errhp, (text *) ":JOB", -1, (dvoid *) job, joblen+1, SQLT_STR, (dvoid *) &job_ind, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)) || (status = OCIBindByName(stmthp, &bnd3p, errhp, (text *) ":SAL", -1, (dvoid *) &sal, (sword) sizeof(sal), SQLT_INT, (dvoid *) &sal_ind, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)) || (status = OCIBindByName(stmthp, &bnd4p, errhp, (text *) ":DEPTNO", -1, (dvoid *) &deptno, (sword) sizeof(deptno), SQLT_INT, (dvoid *) 0, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)) || (status = OCIBindByName(stmthp, &bnd5p, errhp, (text *) ":EMPNO", -1, (dvoid *) &empno, (sword) sizeof(empno), SQLT_INT, (dvoid *) 0, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT))) { checkerr(errhp, status); cleanup(); return OCI_ERROR; } /* Bind the placeholder in the "seldept" statement. */ if (status = OCIBindByPos(stmthp1, &bnd6p, errhp, 1, (dvoid *) &deptno, (sword) sizeof(deptno),SQLT_INT, (dvoid *) 0, (ub2 *) 0, (ub2 *) 0, (ub4) 0, (ub4 *) 0, OCI_DEFAULT)) { checkerr(errhp, status); cleanup(); return OCI_ERROR; } /* Allocate the dept buffer now that you have length. */ /* the deptlen should eventually get from dschndl3. 
*/ deptlen = 14; dept = (text *) malloc((size_t) deptlen + 1); /* Define the output variable for the select-list. */ if (status = OCIDefineByPos(stmthp1, &defnp, errhp, 1, (dvoid *) dept, deptlen+1, SQLT_STR, (dvoid *) 0, (ub2 *) 0, (ub2 *) 0, OCI_DEFAULT)) { checkerr(errhp, status); cleanup(); return OCI_ERROR; } for (;;) { /* Prompt for employee name. Break on no name. */ printf("\nEnter employee name (or CR to EXIT): "); fgets((char *) ename, (int) enamelen+1, stdin); cp = (text *) strchr((char *) ename, '\n'); if (cp == ename) { printf("Exiting... "); cleanup(); return OCI_SUCCESS; } if (cp) *cp = '\0'; else { printf("Employee name may be truncated.\n"); myfflush(); } /* Prompt for the employee's job and salary. */ printf("Enter employee job: "); job_ind = 0; fgets((char *) job, (int) joblen + 1, stdin); cp = (text *) strchr((char *) job, '\n'); if (cp == job) { job_ind = -1; /* make it NULL in table */ printf("Job is NULL.\n");/* using indicator variable */ } else if (cp == 0) { printf("Job description may be truncated.\n"); myfflush(); } else *cp = '\0'; printf("Enter employee salary: "); scanf("%d", &sal); myfflush(); sal_ind = (sal <= 0) ? -2 : 0; /* set indicator variable */ /* * Prompt for the employee's department number, and verify * that the entered department number is valid * by executing and fetching. */ do { printf("Enter employee dept: "); scanf("%d", &deptno); myfflush(); if ((status = OCIStmtExecute(svchp, stmthp1, errhp, (ub4) 1, (ub4) 0, (CONST OCISnapshot *) NULL, (OCISnapshot *) NULL, OCI_DEFAULT)) && (status != OCI_NO_DATA)) { checkerr(errhp, status); cleanup(); return OCI_ERROR; } if (status == OCI_NO_DATA) printf("The dept you entered doesn't exist.\n"); } while (status == OCI_NO_DATA); /* * Increment empno by 10, and execute the INSERT * statement. If the return code is 1 (duplicate * value in index), then generate the next * employee number. */; } while (status == 1) {; } } /* end for (;;) */ /* Commit the change. 
*/ if (status = OCITransCommit(svchp, errhp, 0)) { checkerr(errhp, status); cleanup(); return OCI_ERROR; } printf("\n\n%s added to the %s department as employee number %d\n", ename, dept, emp; } } /* * Exit program with an exit code. */ void cleanup() { if (envhp) (void) OCIHandleFree((dvoid *) envhp, OCI_HTYPE_ENV); return; } void myfflush() { eb1 buf[50]; fgets((char *) buf, 50, stdin); } /* end of file cdemo81.c */ This section presents one possible scenario for an application that is managing multiple user, multiple server connections, and multi-threading. This example is intended to help you understand some of the issues involved in programming such an application. An application is supporting two users, User1 and User2. The application has completed the following steps: User1 performs the following actions: User2 performs the following actions: The following questions and answers relate to the previous connection examples: Question. How many server handles are required? Answer. Even though DB1 and DB2 reside on the same server computer, two server handles are required. Each server handle represents a database connection, and is identified by its own connect string. Question. How many service context handles are required? Answer. Four service context handles are required. Each user is executing two transactions simultaneously, so each requires its own service context. Therefore 2 users x 2 transactions = 4 service context handles. If each user had executed the statements in the same transaction, each would require only a single service context. Question. How many user session handles are required? Answer. Four user session handles are required. Each user needs a user session handle on each server. If each user executed their statements serially, then two sessions would be sufficient. Note that user session handles used to be called authentication handles. Question. How many transaction handles are required? Answer. 
Four transaction handles are required; one for each concurrent transaction. However, the application could also take advantage of the implicit transaction created when database changes are made, and avoid allocating transaction handles altogether. Question. Could the example use multiple environment handles? Answer. Yes. Since there are two databases involved, the application should use two environment handles so that accesses to each database can be completely concurrent. Question. If a single user in a single environment wants to execute four different statements on 4 transactions concurrently against the same database, how many server handles are required? Answer. Four server handles are required; one for each concurrent transaction. There can be at most a single outstanding call on any one server handle at a time. Question. What is the default value of OCI_ATTR_TOP_RCNT?. Question. What are the main steps when building a Pro*C or Pro*C++ application? Answer. There are three main steps when building a C or C++ application: See "Using the Object Type Translator" of Oracle Call Interface Programmer's Guide, Volume I. Question. Is the use of OCI_UCS2ID supported outside of varying width CLOBs? That is, can it be used as a client character NCHAR character set? If it can be used like this, does this cause an Endian problem during an import? Answer.. 'utext' (ub2) is the client side C data type to hold UCS2 data. OCI_UCS2ID assumes the buffer to be in 'utext' format and therefore chooses the Endian of the underlying platform. This table provides a list of the most frequently used OCI relational functions. See "OCI Relational Functions" in Oracle Call Interface Programmer's Guide, Volume II to view the complete list of OCI functions. 
OCIAttrSet() Sets a particular attribute of a handle or a descriptor OCIAttrGet() Gets a particular attribute of a handle OCIBindByName() Creates an association between a program variable and a placeholder in a SQL statement or PL/SQL block OCIBindByPos() OCIDefineByPos() Associates an item in a select-list with the type and output data buffer OCIDescribeAny() Describes existing schema and sub-schema objects OCIDescriptorAlloc() Allocates storage to hold descriptors or LOB locators OCIDescriptorFree() Deallocates a previously allocated descriptor OCIEnvInit() Allocates and initializes an OCI environment handle OCIErrorGet() Returns an error message in the buffer provided and an ORACLE error OCIHandleAlloc() Returns a pointer to an allocated and initialized handle OCIHandleFree() Explicitly deallocates a handle OCIInitialize() Initializes the OCI process environment OCILobRead() Reads a portion of a LOB/FILE, as specified by the call, into a buffer OCILobWrite() Writes a buffer into a LOB OCILogoff() Terminates a connection and session created with OCILogon() OCILogon() Creates a simple logon session OCIParamGet() Returns a descriptor of a parameter specified by position in the describe handle or statement handle OCIParamSet() Sets a complex object retrieval descriptor into a complex object retrieval handle OCIServerAttach() Creates an access path to a data source for OCI operations OCIServerDetach() Deletes an access to a data source for OCI operations OCISessionBegin() Creates a user session and begins a user session for a given server OCISessionEnd() Terminates a user session context created by OCISessionBegin() OCIStmtExecute() Associates an application request with a server OCIStmtFetch() Fetches rows from a query OCIStmtPrepare() Prepares a SQL or PL/SQL statement for execution OCITransCommit() Commits the transaction associated with a specified service context Question. Can an Oracle7 OCI client connect with an Oracle8i database? Answer. Yes. Question. 
Can an Oracle8i OCI client connect with an Oracle7 database? Answer. Yes. A user can communicate with an Oracle7 database using Oracle8i OCI. You do not have to change any application code to perform this operation as long you do not use SQL types or features that are not supported by Oracle7. Question. What happens if there are multiple client components (.so) within a process linked with different database versions? Answer. Multiple components linked with multiple client side libraries in a single process are not supported.
http://www.oracle.com/technology/tech/oci/htdocs/faq.html
Hi All, I want to find the years in a line. For this I need to find the first year and then collect the two words before and after the year, and so on for all years. How can I do that? For example,

Input:
Hobbs, F. 2005. Examining American Composition: 1990 and 2000. U.S. Census Bureau, Census 2000 Special Reports, CENSR-24. I.S. Government Printing Office, Washington, DC.

Output I require:
Hobbs, F. 2005. Examining American
American Composition: 1990 and 2000.
1990 and 2000. U.S. Census

What have you tried? What is your code? Where do you have problems? Please show a small, self-contained script (maybe 20 lines, no longer than 50 lines) that shows input data, actual output, desired output. Please describe where you encounter problems. Maybe you want to see perlretut or perlre.

Hi Corion, I have already given the required input and output. See the below code for what I am trying to do. I need to identify years in a line, and they may come like 2005, 2005a, a2005, or May 31, 2005. Here first I am trying to find the occurrence. I need the below output: 1. I need to find the words which have a 4-digit number; that's my first requirement. 2. Then I should collect all years and find the two words before/after, due to full date appearances.

$var='Hobbs, F. 2005a. Examining American Household Composition: b1990 and 2000. U.S. Census Bureau, Census 2000 Special Reports, CENSR-24. I.S. Government Printing Office, Washington, DC.';
#$var=~s/(\w?) (\w?) ([0-9]{4}+[a-zA-Z]?) (\w?) (\w?)/&identify_year($1.$2.$3.$4.$5)/ge;
$var=~s/([0-9]{4})/&identify_year($1)/ge;
sub identify_year {
    my ($input)=@_;
    print "$input\n";
    return ($input);
}

Why do you try s/.../identify_year($1)/ge? What is that supposed to do? I thought your objective was to identify a year and the surrounding words?
If you want to know whether there is one or more occurrence of a regular expression, you can use the following idiom:

my $var = "This is the year 2000.";
my @matches = ($var =~ /([0-9]{4})/g);

The regular expression I gave will only find four digits. You will need to modify that regular expression to also recognize two words before that year and two words after that year. You could split your string into words and examine each word. If the word contains four connected digits, print the previous two words, the examined word itself, and the following two words.

Apart from Corion's request, please ask yourself the following questions. Now you may try to find the sequence "word word year word word" in your text. This can be done by a regex (Corion has already pointed to the docs). Regexes also have the ability to loop through a string to find all occurrences - check the docs. Once the basics work, you can fine-tune as you want (words separated by colons, years like 100BC, ...).
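The split-into-words approach suggested above can be illustrated in Python as well (a sketch, not Perl; the sample line and the two-words-before/after window are taken from the question):

```python
import re

# Split the line into words; whenever a word contains a run of four
# digits (this also catches forms like "2005a" or "b1990"), print it
# together with up to two words before and two words after it.
line = ("Hobbs, F. 2005. Examining American Composition: 1990 and 2000. "
        "U.S. Census Bureau, Census 2000 Special Reports, CENSR-24.")

words = line.split()
results = []
for i, w in enumerate(words):
    if re.search(r'\d{4}', w):
        window = words[max(0, i - 2):i] + [w] + words[i + 1:i + 3]
        results.append(' '.join(window))

for r in results:
    print(r)
```

The first three result lines match the desired output in the question; "CENSR-24." is correctly skipped because it has only two connected digits.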
http://www.perlmonks.org/?node_id=850574
#include <BCP_USER.hpp> Inheritance diagram for USER_initialize: The user will have to return an instance of the initializer class when the BCP_user_init() function is invoked. The member methods of that instance will be invoked to create the various objects (well, pointers to them) that are used/controlled by the user during the course of a run. Definition at line 45 of file BCP_USER.hpp. virtual destructor Definition at line 50 of file BCP_USER.hpp. Create a message passing environment. Currently implemented environments are single and PVM, the default is single. To use PVM, the user has to override this method and return a pointer to a new BCP_pvm_environment object. Definition at line 84 of file BCP_USER.hpp. Definition at line 89 of file BCP_USER.hpp. Definition at line 94 of file BCP_USER.hpp. Definition at line 99 of file BCP_USER.hpp.
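The pattern documented here - the framework invokes a user-supplied initializer whose factory methods create the objects BCP uses, with overridable defaults such as the message-passing environment - can be sketched as follows. This is a hypothetical Python model of the callback flow, not the BCP C++ API; the method names besides the documented ones are invented:

```python
# Hypothetical model of the USER_initialize pattern: the framework asks
# a user-supplied initializer object to create each component it needs,
# falling back to defaults when a factory method is not overridden.

class UserInitialize:
    def msg_env_init(self):
        # default environment, as in the documentation above
        return "single-process environment"

    def tm_init(self):                        # invented name for illustration
        return "default tree manager"

class PvmInitialize(UserInitialize):
    # the user overrides one factory to plug in a PVM environment,
    # analogous to returning a new BCP_pvm_environment object
    def msg_env_init(self):
        return "PVM environment"

def bcp_user_init():
    # the user returns their initializer instance here
    return PvmInitialize()

init = bcp_user_init()
print(init.msg_env_init())   # overridden factory
print(init.tm_init())        # inherited default
```

The virtual-method-with-default design lets the framework stay ignorant of which components the user customized.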
http://www.coin-or.org/Doxygen/CoinAll/class_u_s_e_r__initialize.html
You should explore the rest of the Boo Compiler so this is more understandable, especially the Compiler Steps, the Abstract Syntax Tree structures, and the Boo Parser. Say you have a line of code like: m = x.something() What is m? What is x? What does something do? You're in the same boat that the compiler is in when it processes code. To us and the compiler, those are just words or names. They are references to something, but to what we don't know yet. Even without knowing what the names refer to, the parser can tell certain things that it uses to generate the Abstract Syntax Tree. The above code is a statement (not an enum or class or import...). Seeing the "X = Y" form it knows the statement is a binary assignment expression. Seeing the "X.Y" form it knows that "something" should refer to some member of "x". Seeing the parentheses () it knows we are invoking either a method or other callable object named "something", or if "something" is a type like a class, we are invoking its constructor (like "m = SomeNameSpace.SomeClass()"). So if we wanted to generate the equivalent AST by hand, we would construct a BinaryExpression whose left side is a ReferenceExpression for "m" and whose right side is a MethodInvocationExpression wrapping a MemberReferenceExpression for "x.something". So now the compiler has to find out "what" everything in the AST is. By "what" I mean some type of code object that exists in either some external assembly or that is defined somewhere else in your code. Besides a "Name" property, AST nodes like ReferenceExpressions have an "Entity" property that will store information about the kind of type that needs to be created for that node in the EmitAssembly step. What are the different kinds of types a node can possibly be? See EntityType.cs. A name might refer to a system type (which means class, enum, interface, or struct), or a method, field, property, whatever. Now in our example we'll skip ahead to the ProcessMethodBodiesWithDuckTyping step in the compiler pipeline (actually ProcessMethodBodies, its superclass).
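The hand-built AST mentioned above can be modeled like this. This is a simplified Python sketch of the node shapes only; the real classes (BinaryExpression, ReferenceExpression, MemberReferenceExpression, MethodInvocationExpression) live in the Boo.Lang.Compiler.Ast namespace and have richer APIs:

```python
# Simplified model of the AST for "m = x.something()". Each node
# carries an Entity slot that stays empty until name resolution.

class Node:
    def __init__(self, kind, **attrs):
        self.kind = kind
        self.entity = None       # filled in later by the resolution steps
        self.__dict__.update(attrs)

m_ref  = Node("ReferenceExpression", name="m")
x_ref  = Node("ReferenceExpression", name="x")
member = Node("MemberReferenceExpression", target=x_ref, name="something")
call   = Node("MethodInvocationExpression", target=member)
assign = Node("BinaryExpression", operator="Assign", left=m_ref, right=call)

# The parser knows the shape of the statement, but not yet what the
# names refer to -- every entity is still None:
print(assign.left.name, assign.right.target.name, assign.left.entity)
```

The nesting mirrors the parser's deductions: an assignment at the top, a member access under the invocation, plain references at the leaves.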
We can do this because our example code doesn't define any new types on its own (we have no "class" statement, for example). Any new types that have been created or imported were handled in earlier steps like BindTypeDefinitions, BindBaseTypes, and BindTypeMembers.

In ProcessMethodBodies.cs the compiler visits the 2 simple reference expressions "m" and "x" in the OnReferenceExpression method. It retrieves the appropriate entity by calling the Resolve(name) method in NameResolutionService.cs.

The name resolution service asks the type system service: does this name refer to a built-in primitive (like "int" or "date")? If yes, we know the entity type because we have a hashtable mapping primitive names to their corresponding entity types (which correspond to real .NET/Mono types like System.Int32 or System.DateTime). If no, then it starts a hierarchical search through each namespace context in which the referenceexpression is enclosed. Each namespace may have its own hashtable mapping names to entity types.

Let's say the line of code is in the global namespace (actually that code will have been moved inside a "Main" method inside a module, see the IntroduceModuleClasses step). To resolve "m" and "x" it has to start with the global or module-level namespace. Back in the InitializeNameResolutionService step, a global namespace and module namespaces were created. When asked to resolve a name, these namespaces will search the external assembly references for a type matching that name, or the internal modules in your code for any types you have created yourself, like new classes.

The "something" memberreferenceexpression is processed in the ProcessMemberReferenceExpression method of ProcessMethodBodies.cs. It asks the target of the member reference ("x") for its namespace (unless "something" is a type itself, in which case we ask for its constructor).
Expressions or types have their own namespaces (see INamespace.cs and IType and other interfaces contained in IEntity.cs), which may store a list of child entities they contain, and can retrieve an entity type given a name.

Watch it happen

To see the names being bound to their respective types, run the boo compiler (booc.exe) with the "-vvv" option. This very verbose option spits out all the references and their corresponding entities during the compile pipeline.

Type Inference

A little on Type Inference. You can see, though, that if "m" was not declared earlier in the code, then the compiler cannot find out its type until it finds out the type of the "something" member reference. If "m" was declared earlier (like "m as MyClass"), then when the compiler visits that declaration it will bind the type created for MyClass to "m". If the compiler then visits the binary expression and finds that the type "something" returns is not assignable to the type of "m", it will complain.

In our sample line of code, it is mandatory at least that "x" is defined elsewhere (perhaps it is a class type or a namespace), that "x" contains an entity matching the name "something", and that "something" refers to either a method or callable entity (if, for example, "x" is a class), or else "something" must be a type entity like a class itself with an accessible constructor method ("m = SomeNameSpace.SomeClass()").

Creating New Types

When you create a new class with a line like "class MyClass...", for example, the boo compiler creates a new instance of the InternalClass entity and sets the ClassDefinition's Entity property to that object. The ClassDefinition also stores the name you used ("MyClass"). The InternalClass handles name resolution and later is used in the EmitAssembly step to help generate the correct IL assembly code to define the type you created.

Relation to Macro Processing

See Syntactic Macros.
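The declared-type case above can be seen in a short snippet. The class and method names here are invented for illustration:

```boo
class MyClass:
    def Something() as int:
        return 42

x = MyClass()
m as MyClass
m = x.Something()   # compile error: int is not assignable to MyClass
```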
Understanding the type system and other features of the Boo Compiler, it is easier to understand how AST macros work, and their limitations. Currently, macros are processed before the types are processed. A lot of times a macro is just rearranging the AST structure or adding new references to save typing. But say you need to know the type of a parameter passed to your macro. The Entity property will be null at that point. Boo may eventually incorporate "type-safe" macros that are processed later in the compiler pipeline, after the type system has done its thing.

Look at the "with" macro on the Syntactic Macros page and you'll understand why it has to check for an underscore "_" at the beginning of a referenceexpression in order to know whether or not that reference should be turned into a memberreferenceexpression targeting fooInstanceWithReallyLongName. It can't use a leading period like Visual Basic, because that is an illegal name. And we can't simply use no prefix (i.e., "f1" instead of "_f1") like some other languages do, because how would we distinguish which references refer to members of fooInstanceWithReallyLongName and which do not? We can't, since we do not know the type of fooInstanceWithReallyLongName at that point.

A type-safe macro that is processed after the type system would have to be more careful in how it processes the AST, so as not to break the name-type bindings. Remember that the correct type is determined according to the hierarchical structure of a node's enclosing namespaces. If the nodes are rearranged, a name might really refer to a completely different type if the name exists in multiple namespaces. And if you move the AST node for "x.something()" before "x = MyClass()", then x is undefined at first and should have produced a type error.
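For reference, the usage being discussed looks roughly like this, paraphrasing the example from the Syntactic Macros page (the member names are placeholders):

```boo
fooInstanceWithReallyLongName = Foo()
with fooInstanceWithReallyLongName:
    _f1 = 42        # rewritten to fooInstanceWithReallyLongName.f1 = 42
    _DoSomething()  # rewritten to fooInstanceWithReallyLongName.DoSomething()
```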
IN five important areas the 1990s will bring far-reaching changes in the social and economic environment, and in the strategies, structure and management of business.

For a start, the world economy will be quite different from what businessmen, politicians and economists still take for granted. The trend towards reciprocity as a central principle of international economic integration has by now become well-nigh irreversible, whether one likes it or not (and I don't). Economic relations will increasingly be between trading blocks rather than between countries. Indeed an East Asian block loosely organised around Japan and paralleling the EC and North America may emerge during the decade. Relationships will therefore increasingly be conducted through bilateral and trilateral deals in respect both of investment and of trade.

Reciprocity can easily degenerate into protectionism of the worst kind (that's why I dislike it). But it could be fashioned into a powerful tool to expand trade and investment, if—but only if—governments and businessmen act with imagination and courage. In any event, it was probably inevitable. It is the response to the first emergence as a major economic power of a non-western society, Japan. In the past, whenever a new major economic power appeared, new forms of economic integration soon followed (eg, the multinational company, which was invented in the middle of the nineteenth century—in defiance of everything Adam Smith and David Ricardo had taught—when the United States and Germany first emerged as major economic powers. By 1913, multinationals had come to control as much of the world's industrial output, maybe more, as they do now). Reciprocity is the way, for better or worse, to integrate a modern but proudly non-western country such as Japan (and the smaller Asian “tigers” that are now following it) into a West-dominated world economy.
The West will no longer tolerate Japan's adversarial trading methods of recent decades—a wall around the home market to protect social structures and traditions, plus a determined push beyond it for world dominance for selected Japanese industries. Yet the western pattern of an autonomous, value-free economy in which economic rationality is the ultimate criterion, is alien to a Confucian society; is indeed seen by it as cultural imperialism. Reciprocity may make possible close economic relationships between culturally distinct societies.

Into alliance

Second, businesses will integrate themselves into the world economy through alliances: minority participations, joint ventures, research and marketing consortia, partnerships in subsidiaries or in special projects, cross-licensing and so on. The partners will be not only other businesses but also a host of non-businesses such as universities, health-care institutions, local governments. The traditional forms of economic integration—trade and the multinational company—will continue to grow, in all likelihood. But the dynamics are shifting rapidly to partnerships based neither on the commodity nexus of trade nor on the power nexus of ownership by multinationals. There are several reasons for this rapidly accelerating trend:

• Many middle-sized and even small businesses will have to become active in the world economy. To maintain leadership in one developed market, a company increasingly has to have a strong presence in all such markets worldwide. But middle-sized and small companies rarely have the financial or managerial resources to build subsidiaries abroad or to acquire them.

• Financially, only the Japanese can still afford to go multinational. Their capital costs them around 5% or so. In contrast, European or American companies now pay up to 20% for money.
Not many investments, whether in organic growth or in acquisitions, are likely to yield that high a return (except acquisitions by management experts such as Lord Hanson or Warren Buffett, who know how to find a healthy but under-managed business and turn it around). This is especially true of multinational investment, whose risks are increased by currency variations and unfamiliarity with the foreign environment. Financially, it is hard to justify most of the recent acquisitions in America made by European companies. To say that they are “cheap” because of the low dollar is nonsense: the companies acquired, after all, earn in these low dollars. Only a very big and cash-rich company can really still afford today to go the multinational route.

• The major driving forces, however, behind the trend towards alliances are technology and markets. In the past, technologies overlapped little. Electronics people did not need to know much about electrical engineering or about materials. Paper-makers needed to know mainly about paper mechanics and paper chemistry. Telecommunications was self-contained. So was investment banking. Today there is hardly any field in which this is still the case. Not even a big company can any longer get from its own research laboratories all, or even most, of the technology it needs. Conversely, a good lab now produces results in many more areas than can interest even a big and diversified company. So pharmaceutical companies have to ally themselves with geneticists; commercial bankers with underwriters; hardware-makers like IBM with software boutiques. The need for such alliances is the greater the faster a technology grows.

Markets, similarly, are rapidly changing, merging, criss-crossing and overlapping each other. They too are no longer separate and distinct.

Alliances, while needed, are anything but easy. They require extreme—and totally unaccustomed—clarity in respect of objectives, strategies, policies, relationships and people.
They also require advance agreement on when and how the alliance is to be brought to an end. For alliances become the more problematic the more successful they are. The best text on them is not to be found in a management book; it is in Winston Churchill's biography of his ancestor the first duke of Marlborough.

Reshaping companies

Third, businesses will undergo more, and more radical, restructuring in the 1990s than at any time since the modern corporate organisation first evolved in the 1920s. Only five years ago it was treated as sensational news when I pointed out that the information-based organisation needs far fewer levels of management than the traditional command-and-control model. By now a great many—maybe most—large American companies have cut management levels by one-third or more. But the restructuring of corporations—middle-sized ones as well as large ones, and, eventually, even smaller ones—has barely begun.

Businesses tomorrow will follow two new rules. One: to move work to where the people are, rather than people to where the work is. Two: to farm out activities that do not offer opportunities for advancement into fairly senior management and professional positions (eg, clerical work, maintenance, the “back office” in the brokerage house, the drafting room in the large architectural firm, the medical lab in the hospital) to an outside contractor. The corporation, in stockmarket jargon, will be unbundled.

One reason is that this century has acquired the ability to move ideas and information fast and cheaply. At the same time the great nineteenth-century achievement, the ability to move people, has outlived its usefulness; witness the horrors of daily commuting in most big cities and the smog that hovers over the increasingly clogged traffic arteries. Moving work out to where the people are is already in full train. Few large American banks or insurance companies still process their paperwork in the down-town office.
It has been moved out to a satellite in the suburbs (or farther afield—one insurance company ships its claims by air to Ireland every night). Few airlines still locate their reservations computer at the main office or even at the airport. It may take another “energy crunch” for this trend to become a shock wave. But most work that requires neither decision-making nor face-to-face customer contact (and that means all clerical work) will have been moved out by the end of the decade, at least in western countries; Tokyo and Osaka will take a little longer, I suspect.

(What, by the way, does this mean for the large cities, the children of the nineteenth century's transport revolution? Most of them—London, Paris, New York, Tokyo, Frankfurt—successfully made in this century the transition from manufacturing centre to office centre. Can they make the next transition—and what will it be? And is the worldwide urban real-estate boom that began in eighteenth-century London at last nearing its end?)

The trend towards “farming out” is also well under way, even in Japan. Such support work is best turned over to an outside contractor: an organisation with its own career ladders. Otherwise, it will be given neither enough attention nor importance to ensure the hard work that is needed not just on quality and training, but on work-study, work-flow and tools.

Finally, corporate size will by the end of the coming decade have become a strategic decision. Neither “big is better” nor “small is beautiful” makes much sense. Neither elephant nor mouse nor butterfly is, in itself, “better” or “more beautiful”. Size follows function, as a great Scots biologist, D'Arcy Wentworth Thompson, showed in his 1917 classic “On Growth and Form”. A transnational automobile company such as Ford has to be very large. But the automobile industry also has room for a small niche player like Rolls-Royce. Marks & Spencer, for decades the world's most successful retailer, was run as a fair-sized rather than as a large business.
So is Tokyo-based Ito-Yokado, arguably the most successful retailer of the past decade. Successful high-engineering companies are, as a rule, middle-sized. But in other industries the middle size does not work well: successful pharmaceutical companies, for instance, tend to be either quite large or quite small. Whatever advantages bigness by itself used to confer on a business have largely been cancelled by the universal availability of management and information. Whatever advantages smallness by itself conferred have largely been offset by the need to think, if not to act, globally. Management will increasingly have to decide on the right size for a business, the size that fits its technology, its strategy and its markets. This is both a difficult and a risky decision—and the right answer is rarely the size that best fits a management's ego.

The challenge to management

Fourth, the governance of companies themselves is in question. The greatest mistake a trend-spotter can make—and one, alas, almost impossible to prevent or correct—is to be prematurely right. A prime example is my 1976 book “The Unseen Revolution”. In it I argued that the shift of ownership in the large, publicly held corporation to representatives of the employee class—ie, pension funds and unit trusts—constitutes a fundamental change in the locus and character of ownership. It is therefore bound to have profound impact, especially on the governance of companies: above all, to challenge the doctrine, developed since the second world war, of the self-perpetuating professional management in the big company; and to raise new questions regarding the accountability and indeed legitimacy of big-company management.

“The Unseen Revolution” may be the best book I ever wrote. But it was prematurely right, so no one paid attention to it. Five years later the hostile takeovers began.
They work primarily because pension funds are “investors” and not “owners” in their legal obligations, their interests and their mentality. And the hostile takeovers do indeed challenge management's function, its role and its very legitimacy.

The raiders are surely right to assert that a company must be run for performance rather than for the benefit of its management. They are, however, surely wrong in defining “performance” as nothing but immediate, short-term gains for shareholders. This subordinates all other constituencies—above all, managerial and professional employees—to the immediate gratification of people whose only interest in the business is short-term pay-offs. No society will tolerate this for very long. And indeed in the United States a correction is beginning to be worked out by the courts, which increasingly give such employees a “property right” in their jobs. At the same time the large American pension funds (especially the largest, the funds of government employees) are beginning to think through their obligation to a business as a going concern; that is, their obligation as owners.

But the raiders are wrong also because immediate stockholder gains do not, as has now been amply proven, optimise the creation of wealth. That requires a balance between the short term and the long term, which is precisely what management is supposed to provide, and should get paid for. And we know how to establish and maintain this balance*.

The governance of business has so far become an issue mainly in the English-speaking countries. But it will soon become an issue also in Japan and West Germany. So far in those two countries the needed balance between the short term and the long has been enforced by the large banks' control of other companies. But in both countries big companies are slipping the banks' leash.
And in Japan pension funds will soon own as high a proportion of the nation's large companies as American ones do in the United States; and they are just as interested in short-term stockmarket profits. The governance of business, in other words, is likely to become an issue throughout the developed world.

Again, we may be further advanced towards an answer than most of us realise. In a noteworthy recent article in the Harvard Business Review, Professor Michael C. Jensen, of the Harvard Business School, has pointed out that large businesses, especially in the United States, are rapidly “going private”. They are putting themselves under the control of a small number of large holders; and in such a way that their holders' self-interest lies in building long-term value rather than in reaping immediate stockmarket gains. Indeed only in Japan, with its sky-high price/earnings ratios, is a public issue of equity still the best way for a large company to finance itself.

Unbundling too should go a long way towards building flexibility into a company's cost structure, and should thus enable it to maintain both short-term earnings and investments in the future. Again the Japanese show the way. The large Japanese manufacturing companies maintain short-term earnings (and employment security for their workers) and long-term investments in the future, by “out-sourcing”. They buy from outside contractors a far larger proportion of their parts than western manufacturers usually do. Thus they are able to cut their costs fast and sharply, when they need to, by shifting the burden of short-term fluctuations to the outside supplier.

The basic decisions about the function, accountability and legitimacy of management, whether they are to be made by business, by the market, by lawyers and courts, or by legislators—and all four will enter the lists—are still ahead of us. They are needed not because corporate capitalism has failed but because it has succeeded.
But that makes them all the more controversial.

The primacy of politics

Fifth, rapid changes in international politics and policies, rather than domestic economics, are likely to dominate the 1990s. The lodestar by which the free world has navigated since the late 1940s, the containment of Russia and of communism, is becoming obsolescent, because of that policy's very success. And the other basic policy of these decades, restoration of a worldwide, market-based economy, has also been singularly successful. But we have no policies yet for the problems these successes have spawned: the all-but-irreversible break-up of the Soviet empire, and the decline of China to the point where it will feature in world affairs mainly because of its weakness and fragility.

Besides, new challenges have arisen that are quite different: the environment; terrorism; third-world integration into the world economy; control or elimination of nuclear, chemical and biological weapons; and control of the world-wide pollution of the arms race altogether. They all require concerted, common, transnational action, for which there are few precedents (suppressing the slave trade, outlawing piracy, the Red Cross are the successful ones that come to mind).

The past 40 years, despite tensions and crises, were years of political continuity. The next ten will be years of political discontinuity. Save for such aberrations as the Vietnam era in the United States, political life since 1945 has been dominated by domestic economic concerns such as unemployment, inflation or nationalisation/privatisation. These issues will not go away. But increasingly international and transnational political issues will tend to upstage them.
* The easiest way is for a company to have two operating budgets: one, short-term, for on-going operations; a second, extending over 3-5 years, that covers the work (rarely more than 10% or 12% of total expenses) needed to build and maintain the company's wealth-producing capacity—processes, products, services, markets, people. This second, “futures” budget should neither be increased in good years nor cut in poor ones. This is what Japanese companies have been doing ever since I first told them about it 30 years ago.
Object Pooling in the Gamekit

Verified with version: 2017.3 - Difficulty: Beginner

The Game Kit uses an extensible object pooling system for some of its systems. The following explanation is only relevant to those wishing to extend the system for their own use and is not required knowledge for working with the Game Kit.

In order to extend the object pool system you must create two classes - one which inherits from ObjectPool and the other from PoolObject. The class inheriting from ObjectPool is the pool itself, while the class inheriting from PoolObject is a wrapper for each prefab in the pool. The two classes are linked by generic types. The ObjectPool and PoolObject must have the same two generic types: the class that inherits from ObjectPool and the class that inherits from PoolObject, in that order. This is most easily shown with an example:

public class SpaceshipPool : ObjectPool<SpaceshipPool, Spaceship> { … }
public class Spaceship : PoolObject<SpaceshipPool, Spaceship> { … }

This is so that the pool knows the type of objects it contains and the objects know the type of pool to which they belong. These classes can optionally have a third generic type. This is only required if you wish to have a parameter for the PoolObject's WakeUp function, which is called when the PoolObject is taken from the pool. For example, when our spaceships are woken up, they might need to know how much fuel they have and so could have an additional float type as follows:

public class SpaceshipPool : ObjectPool<SpaceshipPool, Spaceship, float> { … }
public class Spaceship : PoolObject<SpaceshipPool, Spaceship, float> { … }

By default a PoolObject has the following fields:

inPool: This bool determines whether or not the PoolObject is currently in the pool or is awake.
instance: This GameObject is the instantiated prefab that this PoolObject wraps.
objectPool: This is the object pool to which this PoolObject belongs. It has the same type as the ObjectPool type of this class.
A PoolObject also has the following virtual functions:

SetReferences: This is called once when the PoolObject is first created. Its purpose is to cache references so that they do not need to be gathered whenever the PoolObject is awoken, although it can be used for any other one-time setup.
WakeUp: This is called whenever the PoolObject is awoken and gathered from the pool. Its purpose is to do any setup required every time the PoolObject is used. If the classes are given a third generic parameter then WakeUp can be called with a parameter of that type.
Sleep: This is called whenever the PoolObject is returned to the pool. Its purpose is to perform any tidy up that is required after the PoolObject has been used.
ReturnToPool: By default this simply returns the PoolObject to the pool, but it can be overridden if additional functionality is required.

An ObjectPool is a MonoBehaviour and can therefore be added to GameObjects. By default it has the following fields:

Prefab: This is a reference to the prefab that is instantiated multiple times to create the pool.
InitialPoolCount: The number of PoolObjects that are created in the Start method.
Pool: A list of the PoolObjects.

An ObjectPool also has the following functions:

Start: This is where the initial pool creation happens. You should note that if you have a Start function in your ObjectPool it effectively hides the base class version.
CreateNewPoolObject: This is called when PoolObjects are created and calls their SetReferences and then Sleep functions. It is not virtual, so it cannot be overridden, but it is protected and can therefore be called in your inheriting class if you wish.
Pop: This is called to get a PoolObject from the pool. By default it searches for the first one that has the inPool flag set to true and returns that. If none are true it creates a new one and returns that. It calls WakeUp on whichever PoolObject is going to be returned. This is virtual and can be overridden.
Push: This is called to put a PoolObject back in the pool. By default it just sets the inPool flag and calls Sleep on the PoolObject, but it is virtual and can be overridden.

For a full example of how to use the object pool system, see the BulletPool documentation and scripts.

BulletPool

The BulletPool MonoBehaviour is a pool of BulletObjects, each of which wraps an instance of a bullet prefab. The BulletPool is used for both Ellen and enemies but is used slightly differently for each. For Ellen there is a BulletPool MonoBehaviour attached to the parent GameObject with the Bullet prefab set as its Prefab field. The other use of BulletPool is in the EnemyBehaviour class. It uses the GetObjectPool static function to use BulletPools without the need to actively create them.

The BulletPool class has the following fields:

Prefab: This is the bullet prefab you wish to use.
Initial Pool Count: This is how many bullets are created in the pool to start with. It should be as many as you expect to be used at once. If more are required they are created at runtime.
Pool: This is the BulletObjects in the pool. This is not shown in the inspector.

The BulletPool class has the following functions:

Pop: Use this to get one of the BulletObjects from the pool.
Push: Use this to put a BulletObject back into the pool.
GetObjectPool: This is a static function that finds an appropriate BulletPool given a specific prefab.

When getting a bullet from the pool it comes in the form of a BulletObject. The BulletObject class has the following fields:

InPool: Whether or not this particular bullet is in the pool or being used.
Instance: The instantiated prefab.
ObjectPool: A reference to the BulletPool this BulletObject belongs to.
Transform: A reference to the Transform component of the instance.
Rigidbody2D: A reference to the Rigidbody2D component of the instance.
SpriteRenderer: A reference to the SpriteRenderer component of the instance.
Bullet: A reference to the Bullet script of the instance.

The BulletObject has the following functions:

WakeUp: This is called by the BulletPool when its Pop function is called.
Sleep: This is called by the BulletPool when its Push function is called.
ReturnToPool: This should be called when you are finished with a particular bullet. It calls the Push function of its BulletPool and so calls its Sleep function.
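Putting the extension pattern together, a minimal custom pool might look like the sketch below. This is illustrative only: it assumes the Game Kit's ObjectPool/PoolObject base classes described above are in scope, and SpaceshipPool, Spaceship and the fuel parameter are the hypothetical examples from earlier in this tutorial, not types shipped with the kit.

```csharp
using UnityEngine;

// A pool of Spaceship objects whose WakeUp takes a float (starting fuel).
public class SpaceshipPool : ObjectPool<SpaceshipPool, Spaceship, float>
{
    // Nothing to override for basic use: Start, Pop and Push are inherited.
}

public class Spaceship : PoolObject<SpaceshipPool, Spaceship, float>
{
    Rigidbody2D m_RigidBody;

    // Called once, when the pool first creates this wrapper.
    public override void SetReferences()
    {
        m_RigidBody = instance.GetComponent<Rigidbody2D>();
    }

    // Called every time the ship is taken from the pool.
    public override void WakeUp(float startingFuel)
    {
        instance.SetActive(true);
        // ... store startingFuel on a hypothetical fuel component here ...
    }

    // Called every time the ship goes back into the pool.
    public override void Sleep()
    {
        instance.SetActive(false);
    }
}
```

With a SpaceshipPool component set up in the inspector, gameplay code would call something like `pool.Pop(100f)` to obtain a fuelled ship and `ship.ReturnToPool()` when done with it.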
NAME

XML::Assert - Asserts XPaths into an XML Document for correct values/matches

SYNOPSIS

    use XML::LibXML;
    use XML::Assert;

    my $xml     = '<foo><bar baz="buzz">text</bar></foo>';
    my $xml_ns1 = '<foo xmlns="urn:message"><bar baz="buzz">text</bar></foo>';

    # get the DOM Document for each string
    my $parser  = XML::LibXML->new();
    my $doc     = $parser->parse_string( $xml )->documentElement();
    my $doc_ns1 = $parser->parse_string( $xml_ns1 )->documentElement();

    # create an XML::Assert object
    my $xml_assert = XML::Assert->new();

    # assert that there is:
    # - only one <bar> element in the document
    # - the value of bar is 'text'
    # - the value of bar matches /^tex/
    # - the value of the baz attribute is buzz
    $xml_assert->assert_xpath_count($doc, '//bar', 1);
    $xml_assert->assert_xpath_value_match($doc, '//bar', 'text');
    $xml_assert->assert_xpath_value_match($doc, '//bar', qr{^tex});
    $xml_assert->assert_xpath_value_match($doc, '//bar[1]/@baz', 'buzz');

    # do the same with namespaces ...
    $xml_assert->xmlns({ 'ns' => 'urn:message' });
    $xml_assert->assert_xpath_count($doc_ns1, '//ns:bar', 1);
    # ...etc...

DESCRIPTION

This module asserts XPaths into an XML document for correct values or matches. To test a single node's value, use assert_xpath_value_match(). This method can test against strings or regexes. You can also test a value against a number of nodes by using the assert_xpath_values_match() method. This can check your value against any number of nodes.

Each of these assert methods throws an exception if they are false. Therefore, there are equivalent methods which do not die, but instead return a truth value. They are does_xpath_count(), does_xpath_value_match() and do_xpath_values_match().

Note: all of the *_match() methods use the smart match operator ~~ against node->text_value() to test for truth.

SUBROUTINES

Please note that all subroutines listed here that start with assert_* throw an error if the assertion is not true. You'd expect this. Also note that there are a corresponding number of other methods for each assert_* method which either return true or false and do not throw an error. Please be sure to use the correct version for what you need.
- assert_xpath_count($doc, $xpath, $count)
  Checks that there are $count nodes in the $doc that are returned by the $xpath. Throws an error if this is untrue.
- is_xpath_count($doc, $xpath, $count)
  Calls the above method but catches any error and instead returns a truth value.
- assert_xpath_value_match($doc, $xpath, $match)
  Checks that $xpath returns only one node and that node's value matches $match.
- does_xpath_value_match($doc, $xpath, $match)
  Calls the above method but catches any error and instead returns a truth value.
- assert_xpath_values_match($doc, $xpath, $match)
  Checks that $xpath returns at least one node and that all nodes returned smart match against $match.
- do_xpath_values_match($doc, $xpath, $match)
  Calls the above method but catches any error and instead returns a truth value.
- assert_attr_value_match($doc, $xpath, $attr, $match)
  Checks that $xpath returns only one node, that the node has an attr called $attr and that the attr's value matches $match.
- does_attr_value_match($doc, $xpath, $attr, $match)
  Calls the above method but catches any error and instead returns a truth value.
- assert_attr_values_match($doc, $xpath, $attr, $match)
  Checks that $xpath returns at least one node, that every node has an attr called $attr and that those attr values smart match against $match.
- do_attr_values_match($doc, $xpath, $attr, $match)
  Calls the above method but catches any error and instead returns a truth value.
- register_ns
  Takes a hash containing ns => value pairs which are namespaces to register when asserting into an XML document. If the correct namespaces are not registered, then it's likely that your XPath expressions won't match any of the desired nodes.

PROPERTIES

- xmlns
  A hashref of prefix => XMLNS, if you have namespaces in the XML document or in the XPaths.

EXPORTS Nothing.

SEE ALSO Test::XML::Assert, XML::Compare, XML::LibXML

AUTHOR Andrew Chilton - Work <andy at catalyst dot net dot nz>, Personal <andychilton at gmail dot com>.
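XML::Assert is a Perl module, but the same style of XPath count, value, and attribute assertions can be sketched in Python's standard library. This is not part of XML::Assert — just a cross-language illustration; the element names and the urn:message namespace mirror the synopsis above, and xml.etree.ElementTree only supports a limited XPath subset:

```python
import re
import xml.etree.ElementTree as ET

xml = '<foo><bar baz="buzz">text</bar></foo>'
xml_ns1 = '<foo xmlns="urn:message"><bar baz="buzz">text</bar></foo>'

doc = ET.fromstring(xml)
doc_ns1 = ET.fromstring(xml_ns1)

# like assert_xpath_count: exactly one <bar> element in the document
bars = doc.findall('.//bar')
assert len(bars) == 1

# like assert_xpath_value_match: the value of bar is 'text' (string and regex forms)
assert bars[0].text == 'text'
assert re.match(r'^tex', bars[0].text)

# the value of the baz attribute is 'buzz' (attribute access, not an XPath @ step)
assert bars[0].get('baz') == 'buzz'

# the same count check with a namespace prefix registered, like xmlns({ ns => ... })
ns = {'ns': 'urn:message'}
assert len(doc_ns1.findall('.//ns:bar', ns)) == 1
```

For full XPath support (including attribute steps like //bar[1]/@baz), the third-party lxml library would be the closer analogue.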
https://metacpan.org/pod/XML::Assert
CC-MAIN-2019-13
refinedweb
579
52.39
For some reason, I cannot use react-bootstrap. So I'd like to call Bootstrap's functions like 'modal', but it doesn't seem to work and I get this error: modal is not a function

$('#my-modal').modal();
ReactDOM.findDOMNode(this.refs.myModal).modal();
$(this.refs.myModal).modal();

<div class="modal fade" id="my-modal" ref="myModal"> ... </div>

<button type="button" class="btn btn-default" data-Open Modal</button>

<div class="modal fade" id="my-modal" ref="myModal"> <form onSubmit={this.testModal}>...</form> </div>

testModal() { // here I tried to call modal('hide') like I did above, but still get the error "modal is not a function" }

I figured out how to solve this problem. First, I needed to make jQuery global (I don't know why $ didn't work; maybe it conflicted):

import jQuery from 'jquery';
window.jQuery = jQuery;

Then, I needed to require Bootstrap in my js file:

require('bootstrap')
// or just
require('bootstrap/js/modal');
require('bootstrap/js/transition');

instead of including this script in index.html:

<script src="/node_modules/bootstrap/dist/js/bootstrap.min.js"></script>

Now, I'm able to call jQuery('#my-modal').modal();

UPDATE: I found out that there may be a problem if you don't add jQuery.noConflict(true) in your constructor.
https://codedump.io/share/NPVvumWdvp23/1/react-call-bootstrap-function
CC-MAIN-2017-39
refinedweb
210
57.27
#include <grass/imagery.h> #include <grass/glocale.h> Go to the source code of this file. Definition at line 4 of file points.c. Referenced by I_get_control_points(), and I_put_control_points(). read group control points Reads the control points from the POINTS file for the group into the cp structure. Returns 1 if successful; 0 otherwise (and prints a diagnostic error). Note. An error message is printed if the POINTS file is invalid, or does not exist. Definition at line 117 of file points.c. References G_mapset(), G_warning(), I_fopen_group_file_old(), NULL, and POINT_FILE.. Definition at line 57 of file points.c. References tools::size. write group control points Writes the control points from the cp structure to the POINTS file for the specified group. Note. Points in cp with a negative status are not written to the POINTS file. Definition at line 153 of file points.c. References G_mapset(), G_warning(), I_fopen_group_file_new(), NULL, and POINT_FILE.
http://grass.osgeo.org/programming6/points_8c.html
crawl-003
refinedweb
151
69.68
jQuery Autocomplete: CSS ui-focus-state class not added to focused item when using key arrows

Normally, when using jQuery Autocomplete, one can browse through the shown list items with the arrow keys. If the item currently chosen with the arrow keys is to be styled with CSS, I do this:

.ui-state-focus { background-color: blue; }

This worked for me every time. Currently, in jQuery UI v1.12.0, the ui-state-focus class is not added to the list element when it's selected with arrow keys. I render the items in a custom way, like this:

return $('<li>')
    .attr('data-id', item.id)
    .attr('tabindex', '-1')
    .append(appendItem) /* Text content of the item */
    .appendTo(ul);

I understand this probably has nothing to do with the jQuery UI version I'm using. But I don't get why ui-state-focus is not added to my items, so I can style the focus state.

UPDATE Also, when I use the autoFocus: true property in the Autocomplete config, it doesn't work at all. It should focus on the first list element shown, but it does nothing. Changing other properties (like delay) works fine.

It actually has to do with the jQuery UI version you are using. They mention the change in the v1.12 upgrade guide.
You need to add a sub DOM element inside the li, for example:

<li><div class="ui-menu-item-wrapper"></div></li>

This wrapper will now get the ui-state-active class (before v1.12 this class was called ui-state-focus), which you can style within your CSS; otherwise this state won't be applied to the selected row when the wrapper is missing, which is also a change from previous jQuery UI autocomplete versions.

Alternatively, you need to customize the js (v1.12) so the ui-state-focus class is added to the focused item when using the arrow keys. A custom way like this:

Step 1:

focused = this.active.children( ".ui-menu-item-wrapper" );
this._addClass( focused, null, "ui-state-active" );

**Replace with**:

focused = this.active.closest( ".ui-menu-item" );
this._addClass( focused, null, "ui-state-focus" );

Step 2:

activeParent = this.active.parent().closest( ".ui-menu-item" ).children( ".ui-menu-item-wrapper" );
this._addClass( activeParent, null, "ui-state-active" );

**Replace with**:

activeParent = this.active.parent().closest( ".ui-menu-item" );
this._addClass( activeParent, null, "ui-state-focus" );

Step 3:

this._removeClass( this.active.children( ".ui-menu-item-wrapper" ), null, "ui-state-active" );

**Replace with**:

this._removeClass( this.active.closest( ".ui-menu-item" ), null, "ui-state-focus" );
- I was pulling my hair out over this until I actually added the ui-menu-item-wrapper class to the DOM element, which seems to be what jQuery searches for, not just the first child element...
http://thetopsites.net/article/53389898.shtml
CC-MAIN-2021-04
refinedweb
842
53
+ - Netflix is "Arrogant" for expecting net neutrality-> Translation: "Silly Netflix, haven't you figured out that both the subscriber AND the content provider should pay for the same bandwidth!" Link to Original Source

+ - Samsung Cites Sci-Fi Classic in Attacking Apple Pa-> Link to Original Source

+ - Internet replaces girl's stolen Tardis bus shelter-> A young girl gets a new cell phone, but her number used to belong to a pro basketball player. The family is featured by the local news, and when they return home, they find their hand-made, 300 lb. Tardis school bus shelter stolen from their yard. Several Redditors stepped up and built her a new one, all with donations from strangers looking to help. The Internet can be a wonderful place sometimes. Link to Original Source

+ - Windows XP - 10 years since RTM-> As the ageing operating system is still used by tens of millions worldwide, holding around 45 percent share according to StatCounter, it finally dipped below the 50 percent mark last month. Link to Original Source

Comment: Re:Changing their principal rationale to political (Score 1) 1040

Comment: Re:But the political one is the correct one. (Score 1) 1040

Comment: Re:what I did (Score 1) 510

Small Basic is the version of BASIC without all the more difficult concepts to master. There is nothing wrong with Python per se - I do not think it is as readable as BASIC, however - consider a simple adder class:

class adder:
    def __init__(self, value=0):
        self.data = value  # initialize data

    def __add__(self, other):
        self.data += other  # add other in-place

Comment: Re:Your rights OFFLINE! (Score 1) 709

Or you could just punch them in the head - either way they'll get the message. It really depends on how much time you want to spend explaining the error of their ways.
If you're in high school I would expect that all the talking in the world isn't going to teach you the lesson you should have already learned as eloquently as a right cross.
http://slashdot.org/~jayp00001/tags/notthebest
CC-MAIN-2014-42
refinedweb
355
59.84
... sent drafts check it once...its urgent Developing Simple Struts Tiles Application Developing Simple Struts Tiles Application  ... will show you how to develop simple Struts Tiles Application. You will learn how to setup the Struts Tiles and create example page with it. What< Struts Books ; Programming Jakarta Struts: Using Tiles... Tag Library. Using Tiles. The JSTL and Struts. Internationalization (I18N... covers everything you need to know about Struts and its supporting technologies best Struts material - Struts best Struts material hi , I just want to learn basic Struts.Please send me the best link to learn struts concepts Hi Manju...:// Thanks tiles - Struts Tiles in Struts Example of Titles in Struts Hi,We will provide you the running example by tomorrow.Thanks Hi,We will provide you the running example by tomorrow.Thanks Tiles Plugin Tiles Plugin I have used tiles plugin in my projects but now I am... code written in tiles definition to execute two times and my project may has...:// Hope that it will be helpful for you Struts Articles , using the Struts Portlet Framework Having a good design... it. Struts is a very popular framework for Java Web applications... can be implemented in many ways using Struts and having many developers working Alternative Struts Alternative Struts is very robust... discussion forum. <stxx/> Struts for transforming XML with XSL (stxx) is an extension of the struts framework to support XML Free Web Hosting - Why Its not good Idea Free Web Hosting - Why Its not good Idea This article shows you why Free... are looking for getting high traffic on your web site. Also its very important... future. Its very important decision in choosing web hosting server for hosting Social Media Marketing: Do you really need it? for nothing is not really a good idea. The first thing that you need to consider...Social Media Marketing: Do you really need it? 
Social Media Marketing might be a very efficient type of online marketing, but this doesn't mean Tutorials is provided with the example code. Many advance topics like Tiles, Struts Validation.... Using the Struts Validator Follow along as Web development expert Brett... and develop them with WebSphere Studio Struts is a very popular framework that adds Top 10 Tips for Good Website Design really make your content more readable than a bunch of loose texts or lines. Using... choice An herbal medicine website really deserves appreciation if its commanding...Designing a good website as to come up with all round appreciation, traffic Book - Popular Struts Books the framework, then spent months really figuring out how to use it to its fullest... Struts and its supporting technologies, including JSPs, servlets, Web applications.... The book begins with a discussion of Struts and its Model-View-Controller we are using Struts framework for mobile applications,but we are not using jsps for views instead of jsps we planning to use xhtmls.In struts..., please reply my posted question its very urgent Thanks in struts? please it,s urgent........... session tracking? you mean session management? we can maintain using class HttpSession. the code follows... that session and delete its value like this... session.removeAttribute servlet not working properly ...pls help me out....its really urgent servlet not working properly ...pls help me out....its really urgent Hi, Below is the front page of my project 1)enty.jsp </form> </body> </html> </form> </body> </html>> < XML Books is a comprehensive introduction to using XML for Web page design. It shows you... information on XML and describe its role in building electronic business... XML Books   Interview Questions - Struts Interview Questions for your View. See more at...? Answer: Struts is very rich framework and it provides very good... descriptor. A servlet instance can determine its name using - Framework /struts/". 
Its a very good site to learn struts. You dont need to be expert... to learn and can u tell me clearly sir/madam? Hi Its good...Struts Good day to you Sir/madam, How can i start Advance Struts Action . The Struts2 framework reduces the complexity using this xml file. This decides...Advance Struts2 Action In struts framework Action is responsible... and action. For the good Action in Struts2 framework writing an action Struts 2.2.1 - Struts 2.2.1 Tutorial Struts 2 hello world application using annotation Running... in Struts 2.2.1 How to implement aspects using interceptors How... Development in Struts 2.2.1 application JUnit Using Spring mock objects Hi.. - Struts .....its very urgent Hi Soniya, I am sending you a link. This link.../struts/ Thanks. struts-tiles.tld: This tag library provides tiles...Hi.. Hi, I am new in struts please help me what data write Developing Struts Application are also using struts-tags named as 'html' tags. (for simplicity, we.... This is not JSTL but Struts Tag Library. We should note the following very carefully...Developing Struts Application   Read XML using Java Read XML using Java Hi All, Good Morning, I have been working... of all i need to read xml using java . i did good research in google and came to know...(); } } } Parse XML using JDOM import java.io.*; import org.jdom. Struts Console visually edit Struts, Tiles and Validator configuration files. The Struts Console... Struts Console The Struts Console is a FREE standalone Java Swing design pattern. Using Struts framework developer can develop, test and deploy... 2 framework. These days developers are using Struts 2 for the development...An introduction to the Struts Framework This article is discussing about Why XML?, Why XML is used for? . The XML is very simple language that allows the developers to store.... are using the XML language We developers can used the XML data files to generate... 
to develop the content management systems Many companies are using XML files XML Parsers , its data can be manipulated using the appropriate parser. We will soon discuss APIs and parsers for accessing XML documents using serially accesss mode... and manipulate XML documents using any programming language (and a parser Program Very Urgent.. - JSP-Servlet Program Very Urgent.. Respected Sir/Madam, I am R.Ragavendran.... reference.... its most urgent.. Thanks/Regards, R.Ragavendran.. Hi Beginners Stuts tutorial. had seen how we can improvise our own MVC implementation without using Struts... , is a skill that is very much in demand now. Struts Framework was developed by Craig... that the Struts naming of its classes leaves much to be desired. (for instance Using tiles-defs.xml in Tiles Application Struts 2 Tutorial Struts 2 Framework with examples. Struts 2 is very elegant and flexible front... Beans, ResourceBundles, XML etc. Struts 2 Training! Get Trained Now!!! Struts 2 Features It is very extensible as each class of the framework Downloading MyFaces example integrated with tomahawk ; Downloading : If you are beginner in the field of JSF framework then really its necessary to follow the examples provided by any source. Here apache itself has provided examples in a zipped format. These examples are good read xml using java read xml using java <p>to read multiple attributes and elements from xml in an order.... ex :component name="csl"\layerinterfacefile="poo.c... element to b printed... here is the xml code......................< Hello - Struts to going with connect database using oracle10g in struts please write the code and send me its very urgent only connect to the database code Hi...:// Thanks Amardeep Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good... safe. 
You can make it thread safe by using only local variables, not instance Introduction to Struts 2 Framework , Java Beans, ResourceBundles, XML etc. Struts 2 Framework is very... of the Struts 2 Framework. The Struts 2 framework is very elegant framework for developing web applications. Applications developed in Struts 2 is very extensible Can you suggest any good book to learn struts Can you suggest any good book to learn struts Can you suggest any good book to learn struts java with xml parsing - Java Beginners java with xml parsing Hi, I need the sample code for parsing complex data xml file with java code. Example product,category,subcategory these type of xml files and parse using java. Please send the code immediately its very JUnit and its benefits then using a good testing framework is recommended. JUnit has established a good... JUnit and its benefits  ... functional testing. This really helps to develop test suites that can be run any time What is Struts - Struts Architecturec . Struts is famous for its robust Architecture and it is being used for developing... work very closely together. Overview of the Struts Framework The Struts framework... What is Struts - Struts Architecture   with Struts Tiles | Using tiles-defs.xml in Tiles Application | Struts... directory Structure | Writing Jsp, Java and Configuration files | Struts 2 xml... | Site Map | Business Software Services India Struts 2.18 Tutorial Section Interceptors in Struts 2 ;?xml version="1.0" encoding="UTF-8" ?> <!-- /* * $Id: struts...Interceptors in Struts 2 Interceptors are conceptually analogous to Servlet Filters and are an important part of Struts 2 as it provides Struts 2 xml xml validate student login using xml for library management system EJB Books in-depth guide to using Enterprise Java Beans, including versions 1.0 and 2.0. Filled with practical advice for good design and performance and plenty... code as well as the XML descriptors needed to deploy each sample. 
(With EJB Getting Attributes And its Value and their value from a XML document using the SAX APIs. Description... to get the attribute and its value from a xml file. Here is the XML File... Getting Attributes And its Value   Basic problem but very urgent - JSP-Servlet Basic problem but very urgent Respected Sir/Madam, I am... kind reference.... me the cause of the problem asap because its most urgent.. Thanks/Regards XML XML create flat file with 20 records. Read the records using xml parser and show required details Best iPhone Accessories and Attachments ' iPhone sites before. So what's new here? Well for starters, here you will find only... sense to protect your iPhone with a protective case without blunting its sleek... the iPhone touch screen experience, using our dirt-laden fingers, leaving behind big java material What is JavaScript and its Features? in hand with XML and PHP. Quiz, polls, etc that is present in the website... appears whenever there is a need of one. JavaScript is very different from... documents are also created using JavaScript. JavaScript is easy to learn Web 3.0 Design years RSS and its related technologies will become the single most important Internet technology because of its specific quality to development of the new web as it?s really very simple. Any person who has a little bit knowledge of coding DOM - XML DOM Hi... I created an xml file through java by using DOM Now..I need to get the prompt for download when i clicked the button... I used downloadaction...for this But it didn't worked.. Its jus viewing the xml file Struts 2.0.4 Released applications.. Tiles Plugin- A new plugin allows your Struts... Struts 2.0.4 Released Struts 2.0.4 is released and added dependency on Struts Annotations 1.0.1. Experimental
http://www.roseindia.net/tutorialhelp/comment/4353
CC-MAIN-2015-06
refinedweb
1,876
68.57
FFLUSH(3) OpenBSD Programmer's Manual FFLUSH(3)

NAME
fflush, fpurge - flush a stream

SYNOPSIS
#include <stdio.h>

int fflush(FILE *stream);
int fpurge(FILE *stream);

DESCRIPTION
The function fflush() forces a write of all buffered data for the given output or update stream via the stream's underlying write function. The open status of the stream is unaffected. If the stream argument is NULL, fflush() flushes all open output streams. The function fpurge() erases any input or output buffered in the given stream. For output streams this discards any unwritten output. For input streams this discards any input read from the underlying object but not yet obtained via getc(3); this includes any text pushed back via ungetc(3).

RETURN VALUES
Upon successful completion 0 is returned. Otherwise, EOF is returned and the global variable errno is set to indicate the error.

ERRORS
[EBADF] stream is not an open stream, or, in the case of fflush(), not a stream open for writing.

The function fflush() may also fail and set errno for any of the errors specified for the routine write(2).

SEE ALSO
write(2), fclose(3), fopen(3), setbuf(3)

STANDARDS
The fflush() function conforms to ANSI X3.159-1989 (``ANSI C'').

OpenBSD 2.6 June 4, 1993 1
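The man page above is about C stdio, but the same buffered-write behaviour is easy to observe from Python, whose file objects expose a stdio-like flush(). This sketch (not part of the man page) shows that a small write sits in the user-space buffer until it is flushed:

```python
import os
import tempfile

# Write to a temp file through a buffered output stream
path = os.path.join(tempfile.mkdtemp(), 'out.txt')
f = open(path, 'w')
f.write('hello')             # data may sit in the user-space buffer

with open(path) as r:        # a second reader does not see it yet
    before = r.read()

f.flush()                    # like fflush(3): force the buffered write

with open(path) as r:
    after = r.read()

print(repr(before), repr(after))
f.close()
```

Here `before` is empty and `after` is 'hello', because the five-byte write is far smaller than the default buffer size.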
http://www.rocketaware.com/man/man3/fflush.3.htm
crawl-002
refinedweb
119
76.52
hi guys, I'm not amazing with javascript and could do with some advice. anyways I've tried putting two different scripts on my page and they clash. I've read up about it and I can understand it, but just can't apply the solution to my scripts. I've put my site online at

there's a slide show which is currently working, and a scrolling box at the bottom which is currently dead. I would love a nudge in the right direction.... thanks for your time. alsweet

One of your scripts uses jQuery.noConflict(), which removes $ from the global namespace. Your news.js script can recreate the $ object by passing jQuery to the function, which will use it as $ inside the function. Here's the construct that you would want to use:

(function($) {
    ...
})(jQuery);

The jQuery.noConflict() page shows similar example code. In their example code, note that the .ready() snippet is nowadays superseded by the jQuery(callback) shorthand.

awesome! thanking you kindly
http://community.sitepoint.com/t/multiple-javascript-clashing/64507
CC-MAIN-2015-35
refinedweb
161
85.79
Whenever I do any machine learning I either manually implement models in MATLAB or use Python libraries like scikit-learn where all of the work is done for me. However, I wanted to learn how to manually implement some of these things in Python, so I figured I'd document this learning process over a series of posts. Let's start with something simple: ordinary least squares multiple regression. The goal of multiple regression is to predict the value of some outcome from a series of input variables. Here, I'll be using the Los Angeles Heart Data.

Setting up the data

Let's import some modules:

import numpy as np
import matplotlib.pyplot as plt

Next, we'll need to load the dataset:

dataset = np.genfromtxt('regression_heart.csv', delimiter=",")

We'll need to create a design matrix (x) containing our predictor variables, and a vector (y) for the outcome we're trying to predict. In this case, I'm just going to predict the first column from all of the others for demonstration purposes. We can use slicing to grab every row and all columns (after the first) to create x.

x = dataset[:, 1:]

Optionally, we can scale (standardize) the data so gradient descent has an easier time converging later:

x = (x - np.mean(x, axis=0)) / np.std(x, axis=0)

We'll need to add a column of 1's so we can estimate a bias/intercept:

x = np.insert(x, 0, 1, axis=1) # Add 1's for bias

We can do the same thing to pull out the first column to create y. We want this as a column vector for later so we need to reshape it:

y = dataset[:, 0]
y = np.reshape(y, (y.shape[0], 1))

Training the model

First we need to initialize some weights. We should also initialize some other variables/parameters that we'll use during training:

alpha = 0.01 # Learning rate
iterations = 1000 # Number of iterations to train over
theta = np.ones((x.shape[1], 1)) # Initial weights set to 1
m = y.shape[0] # Number of training examples.
# Equivalent to x.shape[0]
cost_history = np.zeros(iterations) # Initialize cost history values

At this stage, our weights (theta) will just be set to some initial values (1's in this case) that will be updated during training:

array([[ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.],
       [ 1.]])

The actual training process for multiple regression is pretty straightforward. For a given set of weight values, we need to calculate the associated cost/loss:

- Evaluate the hypothesis by multiplying each variable by their weight
- Calculate the residual and squared error
- Calculate the cost using quadratic loss

h = np.dot(x, theta)
residuals = h - y
squared_error = np.dot(residuals.T, residuals)
cost = 1.0/(2*m) * squared_error # Quadratic loss

Now that we know how 'wrong' our current set of weights are, we need to go back and update those weights to better values. 'Better' in this case just means weight values that will lead to a smaller amount of error. We can use (batch) gradient descent to do this. I won't go into details about gradient descent here. The general idea is that we are trying to minimize the cost, and to do that we can calculate the partial derivative (gradient) with respect to the weights. Once we know the gradient, we can adjust the value of the weights (theta) in the direction of the minimum. Over many iterations, the weights will converge towards values that will give us the smallest cost value.

Note: the speed of this update is controlled by the learning rate alpha. Setting this value too large can cause gradient descent to diverge, which is not what we want.
gradient = 1.0/m * np.dot(residuals.T, x).T # Calculate derivative
theta -= (alpha * gradient) # Update weights
cost_history[i] = cost # Store the cost for this iteration

We simply repeat this entire process over many iterations, and we should end up learning weights that give us the smallest error:

for i in xrange(iterations): # Batch gradient descent
    h = np.dot(x, theta)
    residuals = h - y
    squared_error = np.dot(residuals.T, residuals)
    cost = 1.0/(2*m) * squared_error # Quadratic loss
    gradient = 1.0/m * np.dot(residuals.T, x).T # Calculate derivative
    theta -= (alpha * gradient) # Update weights
    cost_history[i] = cost # Store the cost for this iteration
    if (i+1) % 100 == 0:
        print "Iteration: %d | Cost: %f" % (i+1, cost)

You can see the cost dropping across each iteration:

Iteration: 100 | Cost: 177.951582
Iteration: 200 | Cost: 55.546768
Iteration: 300 | Cost: 38.582054
Iteration: 400 | Cost: 36.015047
Iteration: 500 | Cost: 35.516054
Iteration: 600 | Cost: 35.364456
Iteration: 700 | Cost: 35.296070
Iteration: 800 | Cost: 35.258532
Iteration: 900 | Cost: 35.236041
Iteration: 1000 | Cost: 35.221815

We can visualize learning with a plot. This can be useful for determining whether gradient descent is converging or diverging:

plt.plot(range(1, len(cost_history)+1), cost_history)
plt.grid(True)
plt.xlim(1, len(cost_history))
plt.ylim(0, max(cost_history))
plt.title("Training Curve")
plt.xlabel("Iteration #")
plt.ylabel("Cost")
https://www.simonho.ca/machine-learning/multiple-regression-using-python/
CC-MAIN-2022-27
refinedweb
915
65.62
Overview Python packages are the building blocks of Python applications. They encapsulate some coherent functionality that can be imported and used by many applications and systems. But first, developers need to find your package and be able to install it. Python provides a free public repository for packages, which is the de facto standard for sharing Python packages. You can also use private package repositories for proprietary packages. In this tutorial you'll learn how to share your own packages with the community. If you have proprietary packages you need to share just within your company, you will learn how to do that too. For background, see How to Use Python Packages and How to Write Your Own Python Packages. What Is PyPI? PyPI stands for the Python Package Index. It is a public repository for uploading your packages. Pip is aware of PyPI and can install and/or upgrade packages from PyPI. PyPI used to be called the "Cheese Shop" after Monty Python's famous sketch. If you hear people refer to the "Cheese Shop" in a Python packaging context, don't be alarmed. It's just PyPI. Prepare a Package for Upload Before uploading a package, you need to have a package. I'll use the conman package I introduced in the article How to Write Your Own Python Packages. Since PyPI contains thousands of packages, it is very important to be able to describe your package properly if you want people to find it. PyPI supports an impressive set of metadata tags to let people find the right package for the job. The setup.py file contains a lot of important information used to install your package. But it can also include the metadata used to classify your package on PyPI. Packages are classified using multiple metadata tags. Some of them are textual and some of them have a list of possible values. The full list is available on PyPI's List Classifiers page. Let's add a few classifiers to setup.py. 
There is no need to increment the version number, as it is only metadata and the code remains the same:

from setuptools import setup, find_packages

setup(name='conman',
      version='0.3',
      url='',
      license='MIT',
      author='Gigi Sayfan',
      author_email='the.gigi@gmail.com',
      description='Manage configuration files',
      classifiers=[
          'Development Status :: 3 - Alpha',
          'Intended Audience :: Developers',
          'Topic :: Software Development :: Libraries',
          'License :: OSI Approved :: MIT License',
          'Programming Language :: Python :: 2',
          'Programming Language :: Python :: 2.6',
          'Programming Language :: Python :: 2.7',
      ],
      packages=find_packages(exclude=['tests']),
      long_description=open('README.md').read(),
      zip_safe=False,
      setup_requires=['nose>=1.0'],
      test_suite='nose.collector')

You need to create an account on PyPI to be able to upload packages. Fill in this form and verify your identity by clicking on the URL in the verification email. Now, you need to create a .pypirc file in your home directory that will contain the information needed to upload packages.

[distutils]
index-servers=pypi

[pypi]
repository =
username = the_gigi

You can add your password too, but it's safer if you don't, in case some bad element gets hold of your laptop. This is especially important if you upload popular packages, because if someone can upload or upgrade your packages, all the people that use these packages will be vulnerable.

Testing

If you want to test the package registration and upload process and not worry about publishing something incomplete, you can work with the alternative PyPI testing site. Extend your ~/.pypirc file to include a 'pypitest' section.

[distutils]
index-servers=
    pypi
    pypitest

[pypitest]
repository =
username = the_gigi

[pypi]
repository =
username = the_gigi

Remember that the test site is cleaned up regularly, so don't rely on it. It is intended for testing purposes only.

Register Your Package

If this is the first release of your package, you need to register it with PyPI.
Twine has a register command, but I can't figure out how to use it. Following the documentation produces an error, and checking the unit tests for twine shows there is no test for the register command. Oh, well. You can do it manually too, using this form to upload the PKG-INFO file. If you use Python 2.7.9+ or Python 3.2+, you can also safely register using python setup.py register. Let's register conman on the PyPI test site. Note the -r pypitest, which based on the section in ~/.pypirc will register with the test site.

python setup.py register -r pypitest
running register
running check
Password:
Registering conman to
Server response (200): OK

Twine

You can upload a package using python setup.py upload, but it is not secure, as it used to send your username and password over HTTP until Python 2.7.9 and Python 3.2. Twine always uses HTTPS and has additional benefits, like uploading pre-created distributions, and it supports any packaging format, including wheels. I will use twine for the actual upload. Twine is not part of the standard library, so you need to install it: pip install twine.

Upload Your Package

Finally, it's time to actually upload the package.

> twine upload -r pypitest -p ******* dist/*
Uploading distributions to
Uploading conman-0.3-py2-none-any.whl
Uploading conman-0.3-py2.py3-none-any.whl
Uploading conman-0.3.tar.gz

Twine uploaded all the distribution formats, both the source and the wheels.

Test Your Package

Once your package is on PyPI, you should make sure you can install it and everything works. Here I create a one-time virtual environment, pip install conman from the PyPI testing site, and then import it. You may want to run more thorough tests for your package.

> mkvirtualenv test_conman_pypi
New python executable in test_conman_pypi/bin/python2.7
Also creating executable in test_conman_pypi/bin/python
Installing setuptools, pip...done.
Usage: source deactivate

removes the 'bin' directory of the environment activated with 'source activate' from PATH.
(test_conman_pypi) > pip install -i conman
Downloading/unpacking conman
Downloading conman-0.3-py2-none-any.whl
Storing download in cache at /Users/gigi/.cache/pip/https%3A%2F%2Ftestpypi.python.org%2Fpackages%2Fpy2%2Fc%2Fconman%2Fconman-0.3-py2-none-any.whl
Installing collected packages: conman
Successfully installed conman
Cleaning up...

(test_conman_pypi) > python
Python 2.7.10 (default, Jun 10 2015, 19:43:32)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import conman
>>>

Note that the wheel distribution was installed by default.

Versioning

When you evolve your packages and upload new versions, it is important to follow a sensible versioning scheme. People will get pretty upset if an unintentional upgrade breaks their code. Your versioning scheme must comply with PEP-440 -- Version Identification and Dependency Specification. This specification allows multiple schemes to choose from. I recommend using the popular Semantic Versioning scheme. It is pretty much "<major>.<minor>.<patch>", which corresponds to PEP-440's "<major>.<minor>.<micro>". Just beware of versions containing hyphen or plus signs, which are not compatible with PEP-440.

Private Package Repositories

PyPI is great, but sometimes you don't want to share your packages. Many companies and organizations have engineering teams that use Python and need to share packages between them, but are not allowed to share them publicly on PyPI. This is not a problem. You can share packages on private package repositories under your control. Note that sometimes you may want to have a private package repository under your control just to manage your third-party dependencies. For example, a package author can decide to delete a package from PyPI. If your system relies on being able to install this package from PyPI, you're in trouble.
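As an aside (my own sketch, not from the article), tuple comparison is an easy way to see why "major.minor.patch" versions must be compared numerically rather than as strings:

```python
# Hypothetical helper, not part of the article: compare two
# "major.minor.patch" strings by their numeric components.
def parse_semver(version):
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

def is_newer(candidate, installed):
    # Python compares tuples component-wise and numerically,
    # so "0.10.1" correctly beats "0.3.0".
    return parse_semver(candidate) > parse_semver(installed)

print(is_newer("0.10.1", "0.3.0"))  # True (plain string comparison would say False)
```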
Devpi Devpi (which stands for Development Package Index) is a drop-in replacement for the public PyPI server. It is open source and MIT licensed, so you can run it inside your firewall. Devpi is very powerful and has many features that allow it to function as your ultimate packaging server: - Fast PyPI mirror - Uploading, testing and staging with private indexes - Index inheritance - Web interface and search - Replication - Importing/Exporting - Jenkins integration Devpi has excellent documentation, a plugin system and is in active development with a vibrant community. Conclusion Python provides a complete solution for hosting your packages and making them available to your fellow Pythonistas. There is a streamlined process assisted by tools to package and upload packages and make them easy to find and install. If you need to keep things private, Devpi is here for you as a mature and robust private package repository.
https://code.tutsplus.com/tutorials/how-to-share-your-python-packages--cms-26114?ec_unit=translation-info-language
CC-MAIN-2022-40
refinedweb
1,427
57.87
import "github.com/weaveworks/flux/pkg/git/gittest"

var TestConfig git.Config = git.Config{
    Branch:    "master",
    UserName:  "example",
    UserEmail: "example@example.com",
    NotesRef:  "fluxtest",
}

Checkout makes a standard repo, clones it, and returns the clone with a cleanup function.

func CheckoutWithConfig(t *testing.T, config git.Config, syncTag string) (*git.Checkout, *git.Repo, func())

CheckoutWithConfig makes a standard repo, clones it, and returns the clone, the original repo, and a cleanup function.

Repo creates a new clone-able git repo, pre-populated with some kubernetes files and a few commits. It also returns a cleanup func to clean up after.

Workloads is a shortcut to getting the names of the workloads (NB not all resources, just the workloads) represented in the test files.

Package gittest imports 8 packages. Updated 2020-09-10.
https://godoc.org/github.com/weaveworks/flux/pkg/git/gittest
CC-MAIN-2020-45
refinedweb
138
52.97
WebMethodAttribute.MessageName Property

The name used for the XML Web service method in the data passed to and returned from an XML Web service method.

Assembly: System.Web.Services (in System.Web.Services.dll)

The MessageName property can be used to alias method or property names. The most common use of the MessageName property will be to uniquely identify polymorphic methods. By default, MessageName is set to the name of the XML Web service method. Therefore, if an XML Web service contains two or more XML Web service methods with the same name, you can uniquely identify the individual XML Web service methods by setting the MessageName to a name unique within the XML Web service, without changing the actual method name in code.

When data is passed to an XML Web service it is sent in a request, and when it is returned it is sent in a response. Within the request and response, the name used for the XML Web service method is its MessageName property. The message name associated with an XML Web service method must be unique within the XML Web service.

If a new XML Web service method with the same name but different parameters is added after clients are calling the original method, a different message name should be specified for the new method, but the original message name should be left as is to ensure compatibility with existing clients.

In the example below, MessageName is used to disambiguate the two Add methods.

<%@ WebService Language="C#" Class="Calculator" %>
using System;
using System.Web.Services;

public class Calculator : WebService
{
    // The MessageName property defaults to Add for this XML Web service method.
    [WebMethod]
    public int Add(int i, int j)
    {
        return i + j;
    }

    [WebMethod(MessageName="Add2")]
    public int Add(int i, int j, int k)
    {
        return i + j + k;
    }
}
http://msdn.microsoft.com/en-us/library/system.web.services.webmethodattribute.messagename(v=vs.100).aspx
CC-MAIN-2014-35
refinedweb
305
58.82
Demonstrates how to add a COM string (BSTR) to a database and how to marshal a string from a database to a BSTR.

COM strings are passed as values for the database column StringCol. Inside DatabaseClass, these strings are marshaled to managed strings using the marshaling functionality found in the System::Runtime::InteropServices namespace. Specifically, the Marshal::PtrToStringBSTR method is used to marshal a BSTR to a String, and the Marshal::StringToBSTR method is used to marshal a String to a BSTR.

Output

Compiling the Code

To compile the code from the command line, save the code example in a file named adonet_marshal_string_native.cpp and enter the following statement:

Security

For information on security issues involving ADO.NET, see the related security documentation.

See Also
Reference
Other Resources
Data Access Using ADO.NET in C++
Native and .NET Interoperability
http://www.yaldex.com/c_net_tutorial/html/5daf4d9e-6ae8-4604-908f-855e37c8d636.htm
CC-MAIN-2017-34
refinedweb
127
53.51
in reply to Re: Perl vs. Python for prime numbers
in thread Perl vs. Python for prime numbers

We could save typing and electrons:

use Math::Prime::Util qw/forprimes/;
forprimes { say } 1000;  # optionally takes range a,b

Some Python ways to do this, all of which are *much* faster than the OP code when we want anything more than tiny values like 1000. There are probably even better ways.

Using sympy. Much slower than the Perl module:

from sympy import sieve
for i in sieve.primerange(2,1000):
    print i

using gmpy2 (only ~2x slower than the Perl module):

import gmpy2
n = 2
while n <= 1000:
    print n
    n = gmpy2.next_prime(n)

Or some Python by hand that is very fast:

from math import sqrt, ceil
def rwh_primes(n):
    # http://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python/3035188#3035188
    """ Input n>=6, Returns a list of primes, 2 <= p < n """
    correction = (n%6>1)
    n = {0:n,1:n-1,2:n+4,3:n+3,4:n+2,5:n+1}[n%6]
    sieve = [True] * (n/3)
    sieve[0] = False
    for i in xrange(int(n**0.5)/3+1):
        if sieve[i]:
            k=3*i+1|1
            sieve[((k*k)/3)::2*k]=[False]*((n/6-(k*k)/6-1)/k+1)
            sieve[(k*k+4*k-2*k*(i&1))/3::2*k]=[False]*((n/6-(k*k+4*k-2*k*(i&1))/6-1)/k+1)
    sieve[n/3-correction] = False
    # If you want the count: return 2 + sum(sieve)
    return [2,3] + [3*i+1|1 for i in xrange(1,n/3-correction) if sieve[i]]

for i in rwh_primes(1000):
    print i
http://www.perlmonks.org/index.pl?node_id=1040264
CC-MAIN-2015-48
refinedweb
281
74.02
Writing our first macro

Course: Beginning with Clojure Macros

We jump right in and start coding a simple macro, discovering the sigils we need along the way. Code is available: lispcast/macro-playground. You can check out the code in your local repo with these commands:

git clone https://github.com/lispcast/macro-playground
cd macro-playground

Let's write a macro. But first, on our way to writing that macro, we're going to start with a function. So let's start a function called square that takes an x and it returns x times x. And we can see, we could do square ten, we get 100, and we can even square a random integer, that's seven times seven, 49. Okay, nice. So now, we know that a macro is a function from code to code, so instead of returning the number 100 or the number 49, let's return the code that would return that. So we know that with parentheses, this code is parentheses, right, which means a list, so we do list, and then that first thing there, that asterisk there, multiplication, is a symbol, and then we want to put in the x and the x. So that's our square, and we see we get the list with times ten and ten. So we made a function that returns code, okay. So the next step is actually just to turn this into a macro. There's a macro called defmacro that tells the compiler that this thing runs at compile time. So let's say that, and then when we do square ten this time, we get 100 again, because the macro expansion happens in the compiler, and then it gets passed to the runtime, which evaluates it to 100. And there's a cool thing we could do, you can actually see the macro expansion by passing in the code. So you quote the expression, the expression is square 10, and that turns it into a quoted list with square and 10, and then we see how it's macro expanded. And so we can see that. So this is a macro, it returns a list of star x x, but it's kind of hard to read. What does this code eventually look like?
Well you can imagine it, you can run the macro in your head, and you can say "Oh I see, it's gonna look like the code times ten ten." Because that's a pain to do in your head, there's a convenience syntax for making your code look way more alike what it's gonna look like. And that is the backtick. So this is one of the first magic sigils that are useful in macros. It's useful elsewhere, but it was specifically made for macros. So if we put this sigil here right in front, it's called a backtick or a backquote, this is going to now expand like this. Okay, so there's two things that I notice. The first thing is that wow, everything has name spaces on it, right, that's something that the backquote operator does. And then the other thing is, instead of 10, we now have the x in there. X has been put in both places, and that's not really the code we want because what is x? It doesn't exist. For this code here, x is not defined. If we try to run it, we'll see that. No such var x. Okay, so that's expected. The thing that's happening is we are confusing runtime and compile time. This is the code that will run at runtime, which is times x x, but we really want the x to be evaluated at compile time because I'm passing this 10 in, and we want that 10 to be passed in, not the x, right? So let's use another sigil, it's the tilde, it's also called unquote, that's the operation it does, and what this does is it says, "run this at compile time". So this is saying, this backquote says, "this is for runtime", and then this is saying, "ah except here's a hole where it should be at compile time". Okay and so by alternating a backtick and an unquote, you are able to, the backtick is also called syntax quote, so you do syntax quote and unquote, syntax quote unquote, you're able to say exactly what is happening at runtime and what is happening at compile time. And this takes practice, but it's a simple concept. 
So now when I do a macro expand, boom the ten, which is evaluated at compile time, gets put into this expression which is then gonna be passed onto runtime. And we can see, when I call it with ten we get the right answer. Okay, but this now has a new problem. Let's say I go back to one of the original examples rand-int 10. I'm calling it but these don't really look like squares to me, well that one does, but not all of them do. No that's not a square, see these aren't squares. So what's happening? Well, we're gonna use our macro expand function. This is very common when you're doing macros, gonna see what it looks like, and we see that this code for rand-int got put in twice, and so each time it's generating a different random number and passing it to times. That's not really what we want, so we still haven't gotten to where we want to be with this. So if I did want to do that let's say in code, not in a function but just in code, what would it look like? So really I want to generate one random number, save it to a variable, and then call times on it. So it would look like this. Let x rand-int 10 and then times x x right? And that would give me the number I want. So we get to make our macro look like this. That's pretty simple. So we need to put a let and we need to put some variable name, we can call it x, and then we're gonna backquote x here, and now we want this to run just like this one, so we need to have x and x. Alright, let's see if this works. We're gonna macroexpand it, and it looks like we are looking at that syntax quote, it's expanding this x to have a namespace, and so I don't know if that will work, I bet it won't. It can't let a qualified name. So that's saying that this is illegal in this position in a let, that you can't have a symbol with a namespace in that position. So we need a way to make this not have a namespace, and Clojure of course, has a way. 
When you're in syntax quote, if you put a pound sign after the variable, it will make a new variable for you. And so then you need that same pound sign everywhere, and let's see what this looks like. So this is what's happening, it's generating this new symbol, notice it doesn't have a namespace and it's this crazy name that's unique because it's got this number in there that keeps incrementing. It's also got this auto, and you would probably never write your own variable name like this so it won't ever conflict, and so then this gets passed on with the same name, see it's got the same thing, so closure keeps track of all these x's with the pound sign after, they expand to the same variable. Okay so now let's test our square with a random-int. Oh did I say read-int? Yeah I meant rand-int. Okay, so now we're seeing squares, so it appears to be working and the code looks right. Now what happened here is another question that could come up. Why is this now a let star? I wrote let. Well let is actually a macro and it expands into let star, and what macroexpand does is it keeps expanding this outer form until there's no macros left. If you don't want to expand it all the way, you can just do one expansion, call it macroexpand-1, and now you see that it's just one let, right, and it still will expand the namespace because it's inside that syntax quote, but it's just one level of macro expansion, which sometimes is exactly what you want.You don't want to expand all the internal code, internal macros that come with Clojure. And we're gonna see that in the next lesson.
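Putting the whole lesson together, the macro built up over this walkthrough comes out roughly like this (my reconstruction from the narration, not a verbatim listing from the course):

```clojure
(defmacro square
  [x]
  ;; ` (syntax-quote) builds the runtime template;
  ;; ~ (unquote) punches a compile-time hole for the argument;
  ;; x# is an auto-gensym, so the expansion can't capture a user's x
  ;; and the argument expression is evaluated exactly once.
  `(let [x# ~x]
     (* x# x#)))

;; (macroexpand-1 '(square (rand-int 10)))
;; expands to something like
;; (clojure.core/let [x__N__auto__ (rand-int 10)]
;;   (clojure.core/* x__N__auto__ x__N__auto__))
;; where the exact gensym number varies from run to run.
```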
https://purelyfunctional.tv/lesson/writing-our-first-macro/
CC-MAIN-2020-34
refinedweb
1,508
86.33
0 replies on 1 page.

I have been on a C++ coding bender during the Holidays. While putting together a couple of demo programs for the YARD parser, I found I desperately wanted a library which allowed using C++ programs as unix-style filters, where their standard-in and standard-out can be redirected. Developing this unix-filter library led me to desire a variant style type, but I don't like boost::any or boost::variant (too many dependencies), so I rolled my own. So all of this to say, I just posted my latest creation, a union-list type, at CodeProject.com. Here is a demonstration of how the type is used:

#include "..\utils\union_list.hpp"
#include <iostream>
using namespace std;

typedef ul<int, ul<char const*, ul_end> > IntOrString_T;

int main()
{
    IntOrString_T i(42);
    IntOrString_T s("hello");
    cout << i.TypeIndex() << endl; // outputs 0
    cout << s.TypeIndex() << endl; // outputs 1
    cout << i.Get<0>() << endl;    // outputs 42
    cout << s.Get<1>() << endl;    // outputs hello
    return 0;
}
http://www.artima.com/forums/flat.jsp?forum=106&thread=86889
CC-MAIN-2016-40
refinedweb
170
66.13
In this article, you will learn about calling an ASP.NET Web API service cross-domain, using AJAX.

Introduction

This article explains, in a simple step-by-step way, how to call ASP.NET Web API across domains using AJAX. We can deploy and use our Web API in a cross-domain setup, but while calling across domains we will face some issues. This article explains how to resolve them and how to use Web API in the cross domain. Please read the previous part of this article in the link given below before reading this article.

Calling Web API in Cross Origin

A Web page and an ASP.NET Web API service hosted in different domains are called cross-origin. For example, a Web Application hosted in and an ASP.NET Web API service hosted in. We can see how the URLs for the Web Application and the ASP.NET Web API look in the cross domain or cross-origin setup.

JSONP

JSONP stands for "JSON with padding." A plain AJAX call cannot read data from another domain, for security reasons; JSONP is one way to access data in the cross domain. JSON returns a JSON-formatted object only. JSONP returns a JSON-formatted object which is wrapped inside a function call. The example given below differentiates JSON and JSONP.

JSON format

JSONP format

Steps to create cross-origin or cross domain

Step 1

This step is a continuation of the previous part of the article. Please find the links given below for the first part of the article.

Step 2

Remove the HTML page in our solution files. Now, add a new project in the same solution. Right-click in the Solution Explorer, go to "Add" and click New Project.

Step 3

Now, select Web. Under Web, select ASP.NET Web Application in the Add New Project window and give the project a name. Finally, click OK. Now the "New ASP.NET Project - Customers" window will open. Select Web Forms and click OK. Afterwards, the new project is added to the solution. We can see the two projects in the single solution.
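To make the JSON-versus-JSONP formats described above concrete, here is a small sketch (my own, not code from the article) of the only difference between the two: JSONP wraps the JSON payload in a call to a callback function the browser page has already defined:

```javascript
// Hypothetical helper: given a callback name and a value, produce
// the JSONP response body a server would send back.
// JSON format:  {"CustomerID":1}
// JSONP format: showCustomer({"CustomerID":1});
function toJsonp(callbackName, data) {
    return callbackName + "(" + JSON.stringify(data) + ");";
}

console.log(toJsonp("showCustomer", { CustomerID: 1 }));
// showCustomer({"CustomerID":1});
```

Because the response is a script that calls a function, it can be loaded via a script tag, which is exempt from the same-origin policy; that is the loophole JSONP exploits.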
Step 4

Now, right-click on the Customers project and add a "Customers.html" page. Then copy the code from the "WebAPIsameDomain" project's "Customers.html" page and paste it into the "Customers" project's "Customers.html" page.

Step 5

Now, build our solution and run the ASP.NET Web API service and the Customers.html page. We can see the different port numbers. The screenshot given below shows them.

- ASP.NET Web API URL.
- Web Page URL.

The ASP.NET Web API service returns JSON format data. It is not supported in the cross domain because of the security issue. If we go to the developer tools, we can see the error. The error is "XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access." To fix this security issue, we need to use JSONP in AJAX, so that the ASP.NET Web API service returns JSONP format. First, enable JSONP in our "Customers" project.

Step 6

We need to install the JSONP package in our solution. Go to TOOLS -> NuGet Package Manager -> Manage NuGet Packages for Solution. We can see it looks like the screenshot given below.

Step 7

After opening the Manage NuGet Packages window, type "JSONP" in the search box and search. Now, we can find the "WebApiContrib.Formatting.Jsonp" package; click the Install button. Reading the description column tells you more about the package. After clicking Install, it will open the Select Projects window, which looks as shown below. Now, tick the check box for the project that needs this package and click OK. Then click the "I Accept" button, and after some moments it will be installed. Finally, close the Manage NuGet Packages window.

Step 8

Go to the "WebAPIsameDomain" project, open the "App_Start" folder, and then open the "WebApiConfig.cs" file. Now, write the code mentioned below to enable JSONP format in the Register method. The code given below is used to convert JSON to JSONP.
We are using the "JsonpMediaTypeFormatter" class, which is used to convert JSON to JSONP. Its namespace is "WebApiContrib.Formatting.Jsonp".

Code to convert JSONP

Step 9

Now, go to the HTML page and change the data type from JSON to JSONP, which looks as shown below. Now build and run the project; we get the correct output, which looks like the screenshot below. We are calling the ASP.NET Web API service from to Web Application. Here, we can see the different port numbers in the URL. It is called cross-origin. If it uses a different domain, calling the ASP.NET Web API service from to the Web Application is called cross domain.

Conclusion

This article explained calling ASP.NET Web API in the cross-origin, using AJAX, by following some steps. This helps new learners and developers. This article and the previous parts of the article clearly explain the same-origin and cross-origin setups in ASP.NET Web API.
https://www.c-sharpcorner.com/article/calling-asp-net-web-api-service-cross-domain-using-ajax/
CC-MAIN-2022-05
refinedweb
846
70.09
28 October 2010 12:16 [Source: ICIS news] (Releads and adds detail throughout) LONDON (ICIS)--Bayer MaterialScience's main driver of investment will be the further expansion of production capacities in China after it made a strong contribution to the German specialty chemical maker's total earnings growth, chairman Marijn Dekkers said on Thursday. Bayer reported a 12.4% year-on-year increase in third-quarter net profit to €280m ($384m) as sales for the three months to September jumped 16% to €8.58bn on the back of an improved performance from its MaterialScience business. Sales from MaterialScience surged 30.8% to €2.67bn, while its earnings before interest, taxes, depreciation and amortisation (EBITDA) soared 72% to €409m. "The growth was due to significantly higher demand in our main customer industries. Selling prices also rose distinctly overall," Dekkers said. Dekkers pointed to Asia, and in particular China. "We plan to take advantage of these trends. The main driver of investment at BMS is therefore the ongoing expansion of production capacities in China," he said. Bayer's HealthCare business saw sales increase 8.5% year on year to €4.27bn while EBITDA before special items fell 3.7% to €1.10bn. Dekkers said that the group's pharmaceuticals business had become stagnant due to current difficult market conditions. The group's CropScience segment meanwhile witnessed sales grow 17.6% to €1.3bn compared with the third quarter of 2009, partly due to an improvement in demand. EBITDA before special items of the subgroup grew by 16.7% in the third quarter of 2010 to €126m, after Bayer achieved strong growth rates for its fungicides, insecticides and herbicides. In addition, Bayer further reduced the group's overall net financial debt during the third quarter from €10.7bn to €9.1bn, on the back of cash inflows from business operations and positive currency effects of €400m, Dekkers said.
Group operating profit, however, slipped 14% to €556m as it had set aside €436m in provisions for litigations in the US. In the first nine months of 2010, the company's net profit was up 24% to €1.50bn on the back of a 12% growth in sales to €26.08bn, the chairman said. Its operating profit for the period rose 4.5% to €2.76bn, he added. Looking ahead, Bayer remained optimistic for 2010 and announced it would continue to target sales growth of more than 5% on a currency and portfolio adjusted basis. "It remains our aim to increase EBITDA before special items to more than €7bn. And we still expect core earnings per share to improve by more than 15%," Dekkers said. In addition, Bayer was optimistic about its MaterialScience business for the rest of the year. "For the full year 2010 we expect sales at MaterialScience to be in the region of €10bn, and EBITDA before special items in excess of €1.3bn. This would be about three times the prior-year earnings level," Dekkers said. "Our original goal for MaterialScience was to re-attain the pre-crisis sales level by 2012. We will now achieve this much earlier than planned," he added. Additional reporting by Pearl Bantillo
http://www.icis.com/Articles/2010/10/28/9405232/bayer-will-invest-to-expand-materialscience-in-china-ceo.html
CC-MAIN-2015-14
refinedweb
523
67.55
My forty year career has been a long, strange, and marvelous trip. Along the way, I've found a number of techniques of architecture, design, and implementation in C and C++ that have worked well for me, solving a number of recurring problems. I don't claim that these techniques are the best, or that I am the only, or even the first, developer to have thought them. Some of them I shamelessly borrowed, sometimes from other languages or operating systems, because I try to know a good idea when I see one. This article documents some of those ideas. Standard Headers and Types Just about every big closed-source project I've worked on defined its own integer types. Even the Linux kernel does this. Types like u32 for unsigned 32-bit integers, or s8 for signed 8-bit integers. Historically - and I'm talking ancient history here - that was a good idea, way back when C only had integer types like int and char, and there weren't dozens and dozens of different processors with register widths that might be 8, 16, 32, or 64 bits. But this kind of stuff is now available in standard ANSI C headers, and Diminuto leverages the heck out of them. stdint.h defines types like uint32_t, and int8_t, and useful stuff like uintptr_t, which is guaranteed to be able to hold a pointer value, which on some platforms is 32-bits, and on others 64-bits. stddef.h defines types like size_t and ssize_t that are used by a variety of POSIX and Linux systems calls and functions, and are guaranteed to be able to hold the value returned by the sizeof operator. stdbool.h defines the bool type and constant values for true and false (although, more about that in a moment). In at least one client project, I talked them into redefining their own proprietary types via typedef to use these ANSI types, which simplified porting their code to new platforms. Boolean Values Even though I may use the bool type defined in stdbool.h, I don't actually like the constants true and false. For sure, false is always 0. 
But what value is true? Is it 1? Is it all ones, e.g. 0xff? Is 9 true? How about -1? In C, true is anything that is not 0. So that's how I code a true value: !0. I don't care how the C standard or the C compiler encodes true, because I know for sure that !0 represents it. I normalize any value I want to use as a boolean using a double negation. For example, suppose I have two variables alpha and beta that are booleans. Do I use

(alpha == beta)

to check if they are equal? What if alpha is 2 and beta is 3? Both of those are true, but the comparison will fail. Do I use

(alpha && beta)

instead? No, because I want to know if they are the same, both true or both false, not if they are both true. If C had a logical exclusive OR operator, I'd use that - or, actually, the negation of that - but it doesn't. I could do something like

((alpha && beta) || ((!alpha) && (!beta)))

but that hurts my eyes. Double negation addresses this: !!-1 equals !0 equals whatever the compiler uses for true. I use

((!!alpha) == (!!beta))

unless I am positive that both alpha and beta have already been normalized. I also do this if I am assigning a value to a boolean,

alpha = !!beta;

unless I am very sure that beta has already been normalized.

Inferring Type

If I want to check if a variable being used as a boolean - no matter how it was originally declared - is true, I code the if statement this way.

if (alpha)

But if the variable is a signed integer and I want to know if it is not equal to zero, I code it this way.

if (alpha != 0)

And if it's an unsigned integer, I code it this way.

if (alpha > 0)

When I see a variable being used, I can usually infer its intended type, no matter how it might be declared elsewhere.

Parentheses and Operator Precedence

You may have already noticed that I use a lot of parentheses, even where they are not strictly speaking necessary. Can I remember the rules of operator precedence in C? Maybe. Okay, probably not.
But here's the thing: I work on big development projects, hundreds of thousands or even millions of lines of code, with a dozen or more other developers. And if I do my job right, the stuff I work on is going to have a lifespan long after I leave the project and move on to something else. Just because I can remember the operator precedence of C, the next developer that comes along may not. So I want to make my assumptions explicit and unambiguous when I write expressions.

Also: sure, I may be writing C right now. But this afternoon, I may be elbow deep in C++ code. And tomorrow morning I may be writing some Python or Java. Tomorrow afternoon, sadly, I might be hacking some JavaScript that the UI folks wrote, even though I couldn't write a usable line of JavaScript if my life depended on it. Even if I knew the rules of operator precedence for C, that's not good enough; I'd need to know the rules for every other language I may find myself working in. But I don't. So I use a lot of parentheses.

This is the same reason I explicitly code the access permissions - private, protected, public - when I define a C++ class. Because the default rules in C++ are different for class versus struct (yes, you can define access permissions in a struct in C++), and different yet again for Java classes.

Exploiting the sizeof Operator

I never hard-code a value when it can be derived at compile time. This is especially true of the size of structures or variables, or values which can be derived from those sizes. For example, let's suppose I need two arrays that must have the same number of elements, but not necessarily the same number of bytes.

int32_t alpha[4];
int8_t beta[sizeof(alpha)/sizeof(alpha[0])];

The number of array positions of beta is guaranteed to be the same as the number of array positions in alpha, even though they are different types, and hence different sizes. The expression (sizeof(alpha)/sizeof(alpha[0])) divides the total number of bytes in the entire array by the number of bytes in a single array position. This has been so useful that I defined a countof macro in a header file to return the number of elements in an array, providing it can be determined at compile time. I've similarly defined macros like widthof, offsetof, memberof, and containerof.

Doxygen Comments

Doxygen is a documentation generator for C and C++, inspired by Java's documentation generator Javadoc. You write comments in a specific format, run the doxygen program across your source code base, and it produces an API document in HTML. You can use the HTML pages directly, or convert them using other tools into a PDF file or other formats.

/**
 * Wait until one or more registered file descriptors are ready for reading,
 * writing, or accepting, a timeout occurs, or a signal interrupt occurs. A
 * timeout of zero returns immediately, which is useful for polling. A timeout
 * that is negative causes the multiplexer to block indefinitely until either
 * a file descriptor is ready or one of the registered signals is caught. This
 * API call uses the signal mask in the mux structure that contains registered
 * signals.
 * @param muxp points to an initialized multiplexer structure.
 * @param timeout is a timeout period in ticks, 0 for polling, <0 for blocking.
 * @return the number of ready file descriptors, 0 for a timeout, <0 for error.
 */
static inline int diminuto_mux_wait(diminuto_mux_t * muxp, diminuto_sticks_t timeout) {
    return diminuto_mux_wait_generic(muxp, timeout, &(muxp->mask));
}

Even if I don't use the doxygen program to generate documentation, the format enforces a useful discipline and consistent convention in documenting my code. If I find myself saying "Wow, this API is complicated to explain!", I know I need to revisit the design.
I document the public functions in the .h header file, which is the de facto API definition. If there are private functions in the .c source file, I put Doxygen comments for those functions there.

Namespaces and Header Files

Most of the big product development projects I've worked on have involved writing about 10% new code, and 90% integrating existing closed-source code from prior client projects. When that integration involved using C or C++ code from many (many) different projects, name collisions were frequently a problem, either with symbols in the source code itself (e.g. both libraries have a global function named log), or in the header file names (e.g. both libraries have header files named logging.h).

In my C++ code, such as you'll find in my Grandote project on GitHub (a fork of Desperadito, itself a fork of Desperado, both since deprecated), there is an easy fix for the symbol collisions: C++ provides a mechanism to segregate compile-time symbols into namespaces.

namespace com {
namespace diag {
namespace grandote {

Condition::Condition() {
    ::pthread_cond_init(&condition, 0);
}

Condition::~Condition() {
    ::pthread_cond_broadcast(&condition);
    ::pthread_yield();
    ::pthread_cond_destroy(&condition);
}

int Condition::wait(Mutex & mutex) {
    return ::pthread_cond_wait(&condition, &mutex.mutex); // CANCELLATION POINT
}

int Condition::signal() {
    return ::pthread_cond_broadcast(&condition);
}

}
}
}

It doesn't matter that another package defines a class named Condition, because the fully-qualified name of my class is actually com::diag::grandote::Condition. Any source code that is placed within the namespace com::diag::grandote will automatically default to using my Condition, not the other library's Condition, eliminating the need for me to specify that long name every time. Note too that the namespace incorporates the domain name of my company in reverse order - borrowing an idea from how Java conventionally organizes its source code and byte code files.
This eliminates any collisions that might otherwise occur because another library has the same library, class, or module name as part of its own namespace.

This works great for C++, but the problem remains for C. So there, I resort to just prepending the library name and the module name to the beginning of every function. You already may have noticed that in the diminuto_mux_wait function in the Doxygen example above: the wait operation in the mux module of the diminuto library.

But one problem that neither C++ namespaces nor my C naming convention solves is header file collisions. So in both C and C++ I organize header file directories in a hierarchical fashion, so that #include statements look like this, from my Assay project.

#include <stdio.h>
#include <errno.h>
#include "assay.h"
#include "assay_parser.h"
#include "assay_fixup.h"
#include "assay_scanner.h"
#include "com/diag/assay/assay_scanner_annex.h"
#include "com/diag/assay/assay_parser_annex.h"
#include "com/diag/diminuto/diminuto_string.h"
#include "com/diag/diminuto/diminuto_containerof.h"
#include "com/diag/diminuto/diminuto_log.h"
#include "com/diag/diminuto/diminuto_dump.h"
#include "com/diag/diminuto/diminuto_escape.h"
#include "com/diag/diminuto/diminuto_fd.h"
#include "com/diag/diminuto/diminuto_heap.h"

The first two header files, stdio.h and errno.h, are system header files. The next four header files whose names begin with assay_ are private header files that are not part of the public API and which are in the source directory with the .c files being compiled. The next two header files are in a com/diag/assay subdirectory that is part of the Assay project. The remaining header files are in a com/diag/diminuto subdirectory that is part of the Diminuto project.

Here's another example, from the application gpstool, which makes use of both my Hazer and Diminuto projects.
#include <assert.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include "com/diag/hazer/hazer.h"
#include "com/diag/diminuto/diminuto_serial.h"
#include "com/diag/diminuto/diminuto_ipc4.h"
#include "com/diag/diminuto/diminuto_ipc6.h"
#include "com/diag/diminuto/diminuto_phex.h"

This convention is obviously not so important in my C header files, where the project name is part of the header file name. But it becomes a whole lot more important in C++, where my convention is to name the header file after the name of the class it defines, as this snippet from Grandote illustrates.

#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>
#include "com/diag/grandote/target.h"
#include "com/diag/grandote/string.h"
#include "com/diag/grandote/Platform.h"
#include "com/diag/grandote/Print.h"
#include "com/diag/grandote/DescriptorInput.h"
#include "com/diag/grandote/DescriptorOutput.h"
#include "com/diag/grandote/PathInput.h"
#include "com/diag/grandote/PathOutput.h"
#include "com/diag/grandote/ready.h"
#include "com/diag/grandote/errno.h"
#include "com/diag/grandote/Grandote.h"

Once I came upon this system of organizing header files, not only did it become much easier to integrate multiple projects into a single application, but the source code itself became more readable, because it was completely unambiguous where everything was coming from. This strategy worked so well, I use it not just for C and C++, but to organize my Python, Java, and other code too.

I also organize my GitHub repository names, and, when I use Eclipse, my project names, in a similar fashion: com-diag-assay, com-diag-grandote, com-diag-diminuto, com-diag-hazer, etc.

This is more important than it sounds. Digital Aggregates Corporation is my consulting company that has been around since 1995.
It holds the copyright on my open-source code that may find its way into my clients' products. The copyright of my closed-source code is held by another of my companies, Cranequin LLC. So if I see the header file directory com/cranequin, or a project or repository with the prefix com-cranequin, I am reminded that I am dealing with my own proprietary code under a different license.

Time (updated 2017-07-26)

The way time is handled in POSIX or Linux - whether you are talking about time of day, a duration of time, or a periodic event with an interval time - is a bit of a mess. Let's see what I mean.

gettimeofday(2), which returns the time of day, uses the timeval structure that has a resolution of microseconds. Its cousin time(2) uses an integer that has a resolution of seconds.

clock_gettime(2), which when used with the CLOCK_MONOTONIC_RAW argument returns an elapsed time suitable for measuring duration, uses the timespec structure that has a resolution of nanoseconds.

setitimer(2), which can be used to invoke an interval timer, uses the itimerval structure that has a resolution of microseconds. timer_settime(2), which also invokes an interval timer, uses the itimerspec structure that has a resolution of nanoseconds.

select(2), which can be used to multiplex input/output with a timeout, uses the timeval structure and has a resolution of microseconds. Its cousin pselect uses the timespec structure that has a resolution of nanoseconds.

poll(2), which can also be used to multiplex input/output with a timeout, uses an int argument that has a resolution of milliseconds. Its cousin ppoll uses the timespec structure that has a resolution of nanoseconds.

nanosleep(2), which is used to delay the execution of the caller, uses the timespec structure that has a resolution of nanoseconds. Its cousin sleep(3) uses an unsigned int argument that has a resolution of seconds.
Its other cousin usleep(3) uses a useconds_t argument that is a 32-bit unsigned integer and which has a resolution of microseconds.

Seconds, milliseconds, microseconds, nanoseconds. So many different granularities of time. Several different types. It's too much for me.

Diminuto has two types in which time is stored: diminuto_ticks_t and diminuto_sticks_t. Both are 64-bit integers, the only difference being one is unsigned and the other is signed. (I occasionally regret even that.) All time is maintained in a single unit of measure: nanoseconds. This unit is generically referred to as a Diminuto tick.

diminuto_time_clock returns the time of day (a.k.a. wall clock time) in Diminuto ticks. There is another function in the time module to convert the time of day from ticks since the POSIX epoch into year, month, day, hour, minute, second, and fraction of a second in - you guessed it - Diminuto ticks. The POSIX clock_gettime(2) API is used with the CLOCK_REALTIME argument.

diminuto_time_elapsed returns the elapsed time in Diminuto ticks suitable for measuring duration. It also uses the clock_gettime(2) API but with the CLOCK_MONOTONIC_RAW argument. This makes the returned values unaffected by changes in the system clock wrought by system administrators, by the Network Time Protocol daemon, by Daylight Saving Time, or by the addition of leap seconds. However, this means the value returned by the function is only really useful in comparison or in arithmetic operations with prior or successive values returned by the same function.

diminuto_timer_periodic invokes a periodic interval timer, and diminuto_timer_oneshot invokes an interval timer that fires exactly once, both using Diminuto ticks to specify the interval. Both are implemented using the POSIX timer_create(2) API with the CLOCK_MONOTONIC argument.

diminuto_mux_wait uses pselect(2) for I/O multiplexing, and diminuto_poll_wait does the same but uses ppoll(2), both specifying the timeout in Diminuto ticks.
(I believe pselect is now implemented in the Linux kernel using ppoll, but that wasn't true when I first wrote this code and they had very different performance characteristics.)

diminuto_delay delays the caller for the specified number of Diminuto ticks. It is implemented using the POSIX nanosleep(2) API.

Floating Point (updated 2017-07-26)

If you really need to use floating point, you'll probably know it. But using floating point is problematic for lots of reasons. Diminuto avoids using floating point except in a few applications or unit tests where it is useful for reporting final results.

Even though Diminuto uses a single unit of time, that doesn't mean the underlying POSIX or Linux implementation can support all possible values in that unit. So every module in Diminuto that deals with time has an inline function that the application can call to find out what resolution the underlying implementation supports. And every one of those functions returns that value in a single unit of measure: Hertz. Hertz - cycles per second - is used because it can be expressed as an integer. Its inverse is the smallest time interval supported by the underlying implementation.

diminuto_frequency returns 1,000,000,000 Hz, the base frequency used by the library. The inverse of this is one billionth of a second, or one nanosecond.

diminuto_time_frequency returns 1,000,000,000 Hz, the inverse being the resolution of timespec in seconds.

diminuto_timer_frequency returns 1,000,000,000 Hz, the inverse being the resolution of itimerspec in seconds.

diminuto_delay_frequency returns 1,000,000,000 Hz, the inverse being the resolution of timespec in seconds.

With just some minor integer arithmetic - which can often be optimized out at compile-time - the application can determine the smallest number of ticks the underlying implementation can meaningfully support in each module.
(Note that this is merely the resolution representable in the POSIX or Linux API; the kernel or even the underlying hardware may support an even coarser resolution.)

Remarks

I first saw the countof technique in a VxWorks header file perhaps twenty years ago while doing embedded real-time development in C++ at Bell Labs. Similarly, a lot of these techniques have been picked up - or learned the hard way - while taking this long strange trip that has been my career. I have also benefitted greatly from having had a lot of mentors, people smarter than I am, who are so kind and generous with their time. Many of these techniques gestated in my C++ library Desperado, which I began putting together in 2005.

I started developing and testing the Diminuto C code for a number of reasons.

- I found myself solving the same problems over and over again, sometimes for different clients, sometimes even for different projects for the same client. For one reason or another - sometimes good, sometimes not so good - the proprietary closed-source code I developed couldn't be shared between projects. But open-source code could easily be integrated into the code base.

- I wanted to capture some of the design patterns I had found useful in my closed-source work. Working, unit tested, open-source code, developed in a clean room environment, and owned by my own company, was a useful way to do that.

- I needed a way to get my head around some of the evolving C, C++, POSIX, and Linux APIs. Since I can only really learn by doing, I had to do, which for me meant writing and testing code.

- I wanted to make an API that was more consistent, more easily interoperable, and less prone to error than was offered by raw POSIX, Linux, and GNU.

- In the past I have been a big proponent of using C++ for embedded development. I have written hundreds of thousands of lines of C++ while doing such work over the years.
But more recently, I have seen a gradual decline in the use of C++ by my clients, with a trend more towards segregating development into systems code in C and application code in languages like Python or Java. Although I miss C++, I have to agree with the economics that drive this trend. And I fear that C++ has evolved into a language so complex that it is beyond the ken of developers that my clients can afford.

This is some stuff that, over the span of many years, has worked.

Update (2017-04-14)

I recently forked Desperadito (which itself is a mashed up fork of both Desperado and Hayloft) into Grandote. What's different is Grandote uses the Diminuto C library as its underlying platform abstraction. It requires that you install Diminuto, Lariat (a Google Test helper framework), and Google Test (or Google Mock). That's some effort, but at least for me it wasn't overly burdensome when I built it all this morning on an Ubuntu 16.04 system. The other improvement over Desperadito is that both the old Desperado hand-coded unit tests, and the newer Hayloft Google Test unit tests, work. Grandote would be my C++ framework going forward, if I were to need my own C++ framework. (And Grandote doesn't preclude using STL or Boost as well.)
google.appengine.ext.ndb package

Summary

NDB - A new datastore API for the Google App Engine Python runtime.

Contents

google.appengine.ext.ndb.Return
    alias of StopIteration

class google.appengine.ext.ndb.Expando(*args, **kwds)
    Bases: google.appengine.ext.ndb.model.Model

    Model subclass to support dynamic Property names and types. See the module docstring for details.

read_only=True: Indicates a transaction will not do any writes, which potentially allows for more throughput.

class google.appengine.ext.ndb.TimeProperty(*args, **kwds)
    Bases: google.appengine.ext.ndb.model.DateTimeProperty

    A Property whose value is a time object.

class google.appengine.ext.ndb.Query(*args, **kwds)
    Bases: object

    Query object. Usually constructed by calling Model.query(). See module docstring for examples.

    Note that not all operations on Queries are supported by _MultiQuery instances; the latter are generated as necessary when any of the operators !=, IN or OR is used.

    ancestor
        Accessor for the ancestor (a Key or None).

    app
        Accessor for the app (a string or None).

    count(*args, **kwds)
        Count the number of query results, up to a limit.

        This returns the same result as len(q.fetch(limit)) but more efficiently.

        Note that you must pass a maximum value to limit the amount of work done by the query.

        Parameters:
            limit – How many results to count at most.
            **q_options – All query options keyword arguments are supported.

    default_options
        Accessor for the default_options (a QueryOptions instance or None).

    fetch_page(*args, **kwds)
        Fetch a page of results.

        This is a specialized method for use by paging user interfaces.

        Parameters:
            page_size – The requested page size. At most this many results will be returned.
        In addition, any keyword argument supported by the QueryOptions class is supported. In particular, to fetch the next page, you pass the cursor returned by one call to the next call using start_cursor=<cursor>. A common idiom is to pass the cursor to the client using <cursor>.to_websafe_string() and to reconstruct that cursor on a subsequent request using Cursor.from_websafe_string(<string>).

        Returns:
            A tuple (results, cursor, more) where results is a list of query results, cursor is a cursor pointing just after the last result returned, and more is a bool indicating whether there are (likely) more results after that.

    filters
        Accessor for the filters (a Node or None).

    get(**q_options)
        Get the first query result, if any.

        This is similar to calling q.fetch(1) and returning the first item of the list of results, if any, otherwise None.

        Parameters:
            **q_options – All query options keyword arguments are supported.

        Returns:
            A single result, or None if there are no results.

    group_by
        Accessor for the group by properties (a tuple instance or None).

    is_distinct
        True if results are guaranteed to contain a unique set of property values.

        This happens when every property in the group_by is also in the projection.

    kind
        Accessor for the kind (a string or None).

    map(*args, **kwds)
        Map a callback function or tasklet over the query results.

        Parameters:
            callback – A function or tasklet to be applied to each result; see below.
            merge_future – Optional Future subclass; see below.
            **q_options – All query options keyword arguments are supported.

        Callback signature: The callback is normally called with an entity as argument. However if keys_only=True is given, it is called with a Key. Also, when pass_batch_into_callback is True, it is called with three arguments: the current batch, the index within the batch, and the entity or Key at that index. The callback can return whatever it wants.
        If the callback is None, a trivial callback is assumed that just returns the entity or key passed in (ignoring produce_cursors).

        Optional merge future: The merge_future is an advanced argument that can be used to override how the callback results are combined into the overall map() return value. By default a list of callback return values is produced. By substituting one of a small number of specialized alternatives you can arrange otherwise. See tasklets.MultiFuture for the default implementation and a description of the protocol the merge_future object must implement. Alternatives from the same module include QueueFuture, SerialQueueFuture and ReducingFuture.

        Returns:
            When the query has run to completion and all callbacks have returned, map() returns a list of the results of all callbacks. (But see 'optional merge future' above.)

    namespace
        Accessor for the namespace (a string or None).

    orders
        Accessor for the orders (a datastore_query.Order or None).

    projection
        Accessor for the projected properties (a tuple instance or None).

class google.appengine.ext.ndb.QueryOptions
    Bases: google.appengine.ext.ndb.context.ContextOptions, google.appengine.datastore.datastore_query.QueryOptions

    Support both context options and query options (esp. use_cache).

class google.appengine.ext.ndb.Cursor(*args, **kwds)
    Bases: google.appengine.datastore.datastore_query._BaseComponent

    An immutable class that represents a relative position in a query.

    The position denoted by a Cursor is relative to a result in a query even if the result has been removed from the given query. Usually to position immediately after the last result returned by a batch.

    A cursor should only be used on a query with an identical signature to the one that produced it, or on a query with its sort order reversed.

    advance(offset, query, conn)
        Advances a Cursor by the given offset.

        Parameters:
            offset – The amount to advance the current query.
            query – A Query identical to the one this cursor was created from.
            conn – The datastore_rpc.Connection to use.

        Returns:
            A new cursor that is advanced by offset using the given query.

    static from_bytes(cursor)
        Gets a Cursor given its byte string serialized form.

        The serialized form of a cursor may change in a non-backwards compatible way. In this case cursors must be regenerated from a new Query request.

        Parameters:
            cursor – A serialized cursor as returned by .to_bytes.

        Returns:
            A Cursor.

        Raises:
            datastore_errors.BadValueError if the cursor argument does not represent a serialized cursor.

    static from_websafe_string(cursor)
        Gets a Cursor given its websafe serialized form.

        The serialized form of a cursor may change in a non-backwards compatible way. In this case cursors must be regenerated from a new Query request.

        Parameters:
            cursor – A serialized cursor as returned by .to_websafe_string.

        Returns:
            A Cursor.

        Raises:
            datastore_errors.BadValueError if the cursor argument is not a string type or does not represent a serialized cursor.

class google.appengine.ext.ndb.QueryIterator(*args, **kwds)
    Bases: object

    This iterator works both for synchronous and async callers!

    For synchronous callers, just use:

        for entity in Account.query():
            <use entity>

    Async callers use this idiom:

        it = iter(Account.query())
        while (yield it.has_next_async()):
            entity = it.next()
            <use entity>

    You can also use q.iter([options]) instead of iter(q); this allows passing query options such as keys_only or produce_cursors.

    When keys_only is set, it.next() returns a key instead of an entity.

    When produce_cursors is set, the methods it.cursor_before() and it.cursor_after() return Cursor objects corresponding to the query position just before and after the item returned by it.next(). Before it.next() is called for the first time, both raise an exception. Once the loop is exhausted, both return the cursor after the last item returned.
    Calling it.has_next() does not affect the cursors; you must call it.next() before the cursors move. Note that sometimes requesting a cursor requires a Cloud Datastore roundtrip (but not if you happen to request a cursor corresponding to a batch boundary). If produce_cursors is not set, both methods always raise an exception.

    Note that queries requiring in-memory merging of multiple queries (i.e. queries using the IN, != or OR operators) do not support query options.

    cursor_after()
        Return the cursor after.

    cursor_before()
        Return the cursor before.

    index_list()
        Return the list of indexes used for this query.

        This returns a list of index representations, where an index representation is the same as what is returned by get_indexes().

        Before the first result, the information is unavailable, and then None is returned. This is not the same as an empty list – the empty list means that no index was used to execute the query. (In the dev_appserver, an empty list may also mean that only built-in indexes were used; metadata queries also return an empty list here.)

        Proper use is as follows:

            q = <modelclass>.query(<filters>)
            i = q.iter()
            try:
                i.next()
            except StopIteration:
                pass
            indexes = i.index_list()
            assert isinstance(indexes, list)

        Notes:
        - Forcing produce_cursors=False makes this always return None.
        - This always returns None for a multi-query.

    probably_has_next()
        Return whether a next item is (probably) available.

        This is not quite the same as has_next(), because when produce_cursors is set, some shortcuts are possible. However, in some cases (e.g. when the query has a post_filter) we can get a false positive (returns True but next() will raise StopIteration). There are no false negatives.

class google.appengine.ext.ndb.PostFilterNode
    Bases: google.appengine.ext.ndb.query.Node

    Tree node representing an in-memory filtering operation.

    This is used to represent filters that cannot be executed by the datastore, for example a query for a structured value.
class google.appengine.ext.ndb.Parameter(key)
    Bases: google.appengine.ext.ndb.query.ParameterizedThing

    Represents a bound variable in a GQL query.

    Parameter(1) corresponds to a slot labeled ":1" in a GQL query. Parameter('xyz') corresponds to a slot labeled ":xyz".

    The value must be set (bound) separately by calling .set(value).

    key
        Retrieve the key.
Question: Referring to this question, I have a similar - but not the same - problem. In my case, I'll have a text file, structured like:

var_a: 'home'
var_b: 'car'
var_c: 15.5

And I need Python to read the file and then create a variable named var_a with value 'home', and so on. Example:

#python stuff over here
getVarFromFile(filename) #this is the function that I'm looking for
print var_b
#output: car, as string
print var_c
#output: 15.5, as number

Is this possible? I mean, can it even keep the var type?

Notice that I have full freedom over the text file structure; I can use whatever format I like if the one I proposed isn't the best.

EDIT: ConfigParser could be a solution, but I don't like it so much, because in my script I'd then have to refer to the variables in the file with

config.get("set", "var_name")

But what I'd love is to refer to the variable directly, as if I had declared it in the Python script... Is there a way to import the file as a Python dictionary?

Oh, one last thing: keep in mind that I don't know exactly how many variables I'll have in the text file.

EDIT 2: I'm very interested in stephan's JSON solution, because that way the text file could be read easily from other languages too (PHP, then via AJAX JavaScript, for example), but I fail when applying that solution:

#for the example, i don't load the file but create a var with the supposed file content
file_content = "'var_a': 4, 'var_b': 'a string'"
mydict = dict(file_content)
#Error: ValueError: dictionary update sequence element #0 has length 1; 2 is required

file_content_2 = "{'var_a': 4, 'var_b': 'a string'}"
mydict_2 = dict(json.dump(file_content_2, True))
#Error:
#Traceback (most recent call last):
#File "<pyshell#5>", line 1, in <module>
#mydict_2 = dict(json.dump(file_content_2, True))
#File "C:\Python26\lib\json\__init__.py", line 181, in dump
#fp.write(chunk)
#AttributeError: 'bool' object has no attribute 'write'

What kind of issues can I run into with the JSON format?
And how can I read a JSON array from a text file, and transform it into a Python dict?

P.S: I don't like the solutions using .py files; I'd prefer .txt, .inc, or whatever is not restricted to one language.

Solution:1

Load your file with JSON or PyYAML into a dictionary the_dict (see the doc for JSON or PyYAML for this step; both can store data types) and add the dictionary to your globals dictionary, e.g. using globals().update(the_dict). If you want it in a local dictionary instead (e.g. inside a function), you can do it like this:

for (n, v) in the_dict.items():
    exec('%s=%s' % (n, repr(v)))

as long as it is safe to use exec. If not, you can use the dictionary directly.

Solution:2

"But what I'd love is to refer to the variable directly, as if I had declared it in the Python script..."

Assuming you're happy to change your syntax slightly, just use python and import the "config" module.

# myconfig.py:
var_a = 'home'
var_b = 'car'
var_c = 15.5

Then do

from myconfig import *

And you can reference them by name in your current context.

Solution:3

Use ConfigParser.

Your config:

[myvars]
var_a: 'home'
var_b: 'car'
var_c: 15.5

Your python code:

import ConfigParser

config = ConfigParser.ConfigParser()
config.read("config.ini")

var_a = config.get("myvars", "var_a")
var_b = config.get("myvars", "var_b")
var_c = config.get("myvars", "var_c")

Solution:4

You can treat your text file as a python module and load it dynamically using imp.load_source:

import imp
imp.load_source(name, pathname[, file])

Example:

// mydata.txt
var1 = 'hi'
var2 = 'how are you?'
var3 = { 1:'elem1', 2:'elem2' }
//...

// In your script file
def getVarFromFile(filename):
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

# path to "config" file
getVarFromFile('c:/mydata.txt')
print data.var1
print data.var2
print data.var3
...
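For reference, the JSON route from Solution 1 (and the errors in the question's EDIT 2) can be spelled out end-to-end. Two things went wrong in EDIT 2: JSON requires double quotes, so the single-quoted string is not valid JSON, and parsing is done with json.loads (string to object), not json.dump (object to file). A sketch, with the file contents inlined for brevity:

```python
import json

# In practice this string would come from open('config.json').read();
# note the double quotes, which JSON requires.
file_content = '{"var_a": "home", "var_b": "car", "var_c": 15.5}'

# json.loads parses the text into a dict, preserving types.
the_dict = json.loads(file_content)

# Make every key available as a module-level variable.
globals().update(the_dict)

print(var_a)  # a string
print(var_c)  # a float, type preserved
```

The same file can then be parsed from PHP with json_decode() or from JavaScript with JSON.parse(), which was the attraction of this route in the first place.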
Solution 5: The other solutions posted here didn't work for me, because:

- I just needed parameters from a file for a normal script;
- from config import * didn't work for me, as I need a way to override them by choosing another file;
- just a file with a dict wasn't fine, as I needed comments in it.

So I ended up using ConfigParser and globals().update(). Test file:

    #File parametertest.cfg:
    [Settings]
    #Comments are no problem
    test= True
    bla= False #Here neither
    #that neither

And that's my demo script:

    import ConfigParser

    cfg = ConfigParser.RawConfigParser()
    cfg.read('parametertest.cfg')  # Read file

    #print cfg.getboolean('Settings','bla')  # Manual way to access them

    par = dict(cfg.items("Settings"))
    for p in par:
        par[p] = par[p].split("#", 1)[0].strip()  # To get rid of inline comments

    globals().update(par)  # Make them available globally

    print bla

It's just for a file with one section now, but that will be easy to adapt. Hope it will be helpful for someone :)

Solution 6: Suppose that you have a file called "test.txt" with:

    a=1.251
    b=2.65415
    c=3.54
    d=549.5645
    e=4684.65489

And you want to find a variable (a, b, c, d or e):

    ffile = open('test.txt','r').read()
    variable = raw_input('Which is the variable you are looking for?\n')
    ini = ffile.find(variable) + (len(variable) + 1)
    rest = ffile[ini:]
    search_enter = rest.find('\n')
    number = float(rest[:search_enter])
    print "value:", number

Solution 7: How reliable is your format? If the separator is always exactly ': ', the following works. If not, a comparatively simple regex should do the job. As long as you're working with fairly simple variable types, Python's eval function makes persisting variables to files surprisingly easy. (The below gives you a dictionary, btw, which you mentioned was one of your preferred solutions.)
    def read_config(filename):
        f = open(filename)
        config_dict = {}
        for lines in f:
            items = lines.split(': ', 1)
            config_dict[items[0]] = eval(items[1])
        return config_dict

Solution 8: What you appear to want is the following, but this is NOT RECOMMENDED:

    >>> for line in open('dangerous.txt'):
    ...     exec('%s = %s' % tuple(line.split(':', 1)))
    ...
    >>> var_a
    'home'

This creates somewhat similar behavior to PHP's register_globals and hence has the same security issues. Additionally, the use of exec that I showed allows arbitrary code execution. Only use this if you are absolutely sure that the contents of the text file can be trusted under all circumstances. You should really consider binding the variables not to the local scope, but to an object, and use a library that parses the file contents such that no code is executed. So: go with any of the other solutions provided here. (Please note: I added this answer not as a solution, but as an explicit non-solution.)

Solution 9: hbn's answer won't work out of the box if the file to load is in a subdirectory or is named with dashes. In such a case you may consider this alternative:

    exec(open('myconfig.py').read())

Or the simpler, but deprecated in Python 3:

    execfile('myconfig.py')

I guess Stephan202's warning applies to both options, though, and maybe the loop over lines is safer.
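A safer variant of the eval-based parsers above (my own sketch, not from the original answers): ast.literal_eval accepts only Python literals (strings, numbers, lists, dicts, etc.), so values keep their types without the arbitrary-code-execution risk of eval or exec.

```python
import ast

def read_config_safe(lines):
    """Parse "name: value" lines into a dict without executing code."""
    config = {}
    for line in lines:
        if ':' not in line:
            continue  # skip blank or malformed lines
        name, _, raw = line.partition(':')
        # literal_eval only accepts literals, so a malicious value like
        # __import__('os').system('...') raises ValueError instead of running.
        config[name.strip()] = ast.literal_eval(raw.strip())
    return config

print(read_config_safe(["var_a: 'home'", "var_b: 'car'", "var_c: 15.5"]))
# → {'var_a': 'home', 'var_b': 'car', 'var_c': 15.5}
```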
Thread::Isolate::Pool - A pool of threads to execute multiple tasks.

This module creates a pool of threads that can be used to execute many tasks simultaneously. The interface to the pool is similar to a normal Thread::Isolate object, so we can think of the pool as a thread that can receive multiple calls at the same time.

  use Thread::Isolate::Pool ;

  my $pool = Thread::Isolate::Pool->new() ;

  $pool->use('LWP::Simple') ;  ## Loads LWP::Simple in the main thread of the pool.
  print $pool->main_thread->err ;  ## $@ of the main thread of the pool.

  my $url = '' ;

  my $job1 = $pool->call_detached('get' , $url) ;
  my $job2 = $pool->call_detached('get' , $url) ;
  my $job3 = $pool->call_detached('get' , $url) ;

  ## Print what jobs are running in the pool:
  while( $job1->is_running || $job2->is_running || $job3->is_running ) {
    print "[1]" if $job1->is_running ;
    print "[2]" if $job2->is_running ;
    print "[3]" if $job3->is_running ;
  }

  print "\n<<1>> Size: " . length( $job1->returned ) . "\n" ;
  print "\n<<2>> Size: " . length( $job2->returned ) . "\n" ;
  print "\n<<3>> Size: " . length( $job3->returned ) . "\n" ;

  ## Shutdown all the threads of the pool:
  $pool->shutdown ;

The code above creates a pool of threads and makes 3 simultaneous LWP::Simple::get() calls.

Internally the pool has a main thread that is used to create the execution threads. The main thread should have all the resources/modules loaded before any call()/eval() is made to the pool. When a call()/eval() is made, if the pool doesn't have a free thread (one not executing a job), a new thread is created from the main thread and is used to do the task. Note that no threads are removed after being created, since this won't free memory, so it is better to leave them there until shutdown().

Creates a new pool. If LIMIT is defined, it sets the maximal number of threads inside the pool. This defines the maximal number of simultaneous calls that the pool can handle.

Returns the main thread.
Returns the LIMIT of threads of the pool.

Returns a free thread. If it is not possible to get a free thread or create a new one due to LIMIT, any thread in the pool will be returned. If called in ARRAY context it will return ( FREE_THREAD , ON_LIMIT ), where a true ON_LIMIT indicates that it was not possible to get a free thread or create a new free thread.

Adds a new thread if the pool is not at its LIMIT.

Makes a "use MODULE qw(ARGS)" call in the main thread of the pool.

Gets a free thread and makes a $thi->call() on it.

Gets a free thread and makes a $thi->call_detached() on it.

Gets a free thread and makes a $thi->eval() on it.

Gets a free thread and makes a $thi->eval_detached() on it.

Shuts down all the threads of the pool.

Thread::Isolate, Thread::Isolate::Map.

Graciliano M. P. <gmpassos@cpan.org>

I will appreciate any type of feedback (including your opinions and/or suggestions). ;-P

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
SYNOPSIS
  #include <sys/ddi.h>
  #include <sys/sunddi.h>

  uint8_t ddi_get8(ddi_acc_handle_t handle, uint8_t *dev_addr);
  uint16_t ddi_get16(ddi_acc_handle_t handle, uint16_t *dev_addr);
  uint32_t ddi_get32(ddi_acc_handle_t handle, uint32_t *dev_addr);
  uint64_t ddi_get64(ddi_acc_handle_t handle, uint64_t *dev_addr);

INTERFACE LEVEL
  Solaris DDI specific (Solaris DDI).

PARAMETERS
  handle    The data access handle returned from setup calls, such as ddi_regs_map_setup(9F).
  dev_addr  Base device address.

DESCRIPTION
  The ddi_get8(), ddi_get16(), ddi_get32(), and ddi_get64() functions read 8 bits, 16 bits, 32 bits and 64 bits of data, respectively, from the device address, dev_addr.

  For certain bus types, you can call these DDI functions from a high-interrupt context. These types include ISA and SBus buses. See sysbus(4), isa(4), and sbus(4) for details. For the PCI bus, you can, under certain conditions, call these DDI functions from a high-interrupt context. See pci(4).

RETURN VALUES
  These functions return the value read from the mapped address.

CONTEXT
  These functions can be called from user, kernel, or interrupt context.

SEE ALSO
  ddi_put8(9F), ddi_regs_map_free(9F), ddi_regs_map_setup(9F), ddi_rep_get8(9F), ddi_rep_put8(9F)

NOTES
  The functions described in this manual page previously used symbolic names which specified their data access size; the function names have been changed so they now specify a fixed-width data size. The old names map to the new ones as follows: ddi_getb() is now ddi_get8(), ddi_getw() is now ddi_get16(), ddi_getl() is now ddi_get32(), and ddi_getll() is now ddi_get64().
# Node Modules to Rule Them All

Years ago I wrote a little post on modules, frameworks and micro-frameworks, and oh boy did the landscape change! Today, if you're not using NPM and Node modules when writing any JavaScript code, you're most likely doing it wrong. It doesn't matter if you're writing for the Browser, Node or anything else.

TL;DR:

- Modularise: write small modules that do only one thing, and compose them together to do more complex stuff.
- Use a package manager: use NPM to manage your dependencies and stop worrying about them.
- Use Node modules: it's a simple and expressive module system. And it gives you first-class parametric modules!

Table of Contents

1. Introduction
2. Namespacing and Modules
3. Module solutions for JS
4. Node modules (a superset of CommonJS modules)
5. One NPM to Rule Them All
6. NPM and Node modules outside of Node-land
7. Conclusion
8. References and additional reading

## 1. Introduction

If you have ever read my blog, you'd know I'm a strong advocate of both the Unix philosophy and functional programming. They encourage you to write small, self-contained pieces of functionality and compose them together to build bigger things. Unfortunately, lots of people writing JavaScript are still dwelling in the dark ages when it comes to modularising their applications. You see, there are still plenty of people who think that "The Module Pattern" is a good enough idea; it is not, however, it's just boilerplate that indicates the lack of proper tooling — and if you ever find yourself having boilerplate, you should be worrying about your tools not solving your problem, because that's the first symptom of I Ain't Not Solving Ya Problem, Son.

There have been plenty of module solutions in JavaScript over time; the most expressive and simple of them is still the CommonJS standard, which is used in a slightly different form in Node.
CommonJS gives you a nice syntax, and more importantly, first-class modules. You might ask why first-class modules are important, and I could answer you with "for the same reason first-class functions are", but instead I will just leave you with a most awesome keynote that explains that, and assume you know the answer from now on.

The rest of this article is laid out as follows: in the first section I give a conceptual overview of namespacing and modules, in the second section there's an overview of all module solutions available for JavaScript, and a quick analysis of the pros and cons of each one. In the subsequent sections I present Node modules in more depth, then introduce the concept of parametric modules. Then there's a whole section on package management and NPM. In the last section I introduce Browserify as a tool for using Node modules in non-Node.js environments. Finally, I give a kick-ass conclusion on all of this mess and point you to additional reading material.

## 2. Namespacing and Modules

Both namespaces and modules are important when developing any kind of application, but they also solve entirely different problems. Some people tend to think that namespaces give you modularity: they don't, they only solve name collision problems.

### 2.1. What's a namespace?

A namespace is something that holds a mapping from a particular name to a particular meaning. Some languages have different namespaces for different kinds of things (for example, the namespace used for functions is not the same as the one used for types, or variables, so a variable A is still different from a function A); some other languages (like JavaScript) just roll with a single namespace for everything.

Namespaces exist because we can only give things so many names before we run out of alternatives and start writing SquareRoot2345, as if we were trying to find an available username on Twitter. Not the best thing, you see.

### 2.2. What's a module?
A module provides a set of logically related functionality that fulfills a particular interface. So, for example, one could say that an object X that implements the interface Y is a module. Some languages, like Java or Clojure, don't give you modules, and instead just give you namespaces — Clojure's namespaces are first-class and expressive, though, unlike Java's.

For modularity, we want more. Basically, there are three things we look for in a good module implementation:

- It must allow one to provide a set of functionality that fulfills a given interface. A Queue could be an interface, as much as List or DOM.
- It must be first-class, so you can hold on to and abstract over a module.
- It must allow delayed dependency binding, so you can mix and match different implementations of the same module easily.

To make it a little bit clearer, repeat after me: "Modules are not files. Modules are not files. Modules are not files!". The correlation of files and modules in most programming languages is just incidental; modules are objects that provide a particular interface, a file is just a medium used to describe that and load it.

### 2.3. What's a "module loader"?

Since modules are not files, but we more often than not use files to store them, we need something that'll evaluate a file and give us back a module. These things are often called module loaders.

Not all languages have these, of course, but some do. In Node.js, require is the module loader. In Python, import is the loader. In Io objects are modules, and the loader is the Importer object, and can be implicit when you reference any object (Io will try to load a file with the same name if the message doesn't exist in the Lobby). In Java you don't have loaders because you don't have modules. In Clojure modules are replaced by first-class namespaces, so you don't have a loader either — you have read and eval, however, and can apply your way around everything.
Module loaders are interesting because they allow one to dynamically reference a module that is stored in any other medium. A medium could be a file, it could be the internets, it could be a database, a zip file, an image, or anything else.

## 3. Module solutions for JS

It's actually interesting to see how many things the lack of module support baked right into the language has spawned. This is, in fact, one of the things that keeps amazing me in Node.js: give people a really small core, but a rather expressive one, and they'll come up with their own creative solutions to solve problems X and Y, and these solutions will then compete to see which one solves the problem best. Languages and platforms that are heavily batteries-included usually strive for "There Should Be Only One Way of Doing X", and with that way of thinking those solutions might lose relevance with time, or they might simply not solve the stated problem in the best way they could (for example, Python's module system is shitty as fuck).

But look at JavaScript: it has evolved over time and accumulated a handful of different solutions to this problem, from a handful of well-known players that do the job (AMD, CommonJS, Node modules), to silly ports of non-expressive module systems from other languages (Gjs), to the most-naïve-solution-that-could-possibly-work (the Module pattern).

### 3.1. The no-module way

The worst thing you could ever do: not using modules, nor namespaces. Since JS only gives you a single namespace everywhere, name collisions are just waiting to bite you in the ass. Let's not mention that now you're going to have a hard time explaining to people how to play well with your code and get everything working. So, don't do this; don't just scatter top-level variables and functions across your scripts and hope the names never clash.

### 3.2. The "Hey let's give JS namespaces" crowd

Then, there came the Java crowd. These are particularly annoying, because they're treating a symptom of not having modules nor multiple namespaces with... just giving a half-baked solution. Oh, the naïveté.
Somehow, there are still people out there who believe that namespacing solves the same problems as modules do, but if you have been paying close attention to this article you already know it doesn't. This is, however, not what this crowd wants you to believe; instead they come barging down your house, screaming "THOU MUST NAMESPACE ALL YOUR SCRIPTS", and then some people go and write deeply nested namespace objects (think my.company.component.widget.Thing), and, well, the madness goes on and on.

In JavaScript, first-class namespacing can be emulated through objects, but they're still rather awkward to work with. We can't have a function run in a particular namespace, for example, as you'd be able to do in something like Io or Piccola. And ES5 strict just got rid of with, so you can't unpack a first-class namespace in the current scope — though the feature did cause way too many more problems than it was worth. Tough luck. First-class namespaces are a real nice thing; unfortunately they don't solve modularity problems.

### 3.3. The Module Pattern

Moving on, the module pattern gives you... you guessed it: modules! Albeit a rather crude form of them. In the module pattern you wrap your code in a function to get a new scope where you can hide implementation details, and then you return an object that provides the interface your module exposes. A Queue module written this way is properly isolated and exposes only the interface we need. Nice, but it doesn't tell us which dependencies Queue has, so while the module pattern is a start, it doesn't solve all our problems. We need our modules to specify their own dependencies, so we can have the module system assemble everything together for us.

### 3.4. Asynchronous Module Definition (AMD)

AMD is a step in the right direction when it comes down to modules in JS: it gives you both first-class modules and parametric modules (it should be noted that the CommonJS standard does not support straight-forward parametric modules — you can't export a function).
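Roughly, an AMD module looks like the sketch below. The three-line define shim at the top is a toy stand-in I've added so the snippet is self-contained and runnable; a real page would pull in an actual loader such as Require.js, whose define() has this same shape.

```javascript
// Toy stand-in for an AMD loader (a real app would use Require.js).
var registry = {};
function define(name, deps, factory) {
  registry[name] = factory.apply(null, deps.map(function (d) { return registry[d]; }));
}

// A Queue module: no dependencies, the factory returns the module's value.
define('queue', [], function () {
  return function makeQueue() {
    var items = [];
    return {
      push: function (x) { items.push(x); },
      size: function () { return items.length; }
    };
  };
});

// A consumer: dependencies are listed twice, as strings and as parameters.
define('app', ['queue'], function (makeQueue) {
  var q = makeQueue();
  q.push('a');
  q.push('b');
  return q.size();
});

console.log(registry.app); // 2
```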
However, AMD comes with the cost of way too much boilerplate. Remember that I said boilerplate is harmful and means your tools are not solving your problem? Well, that's exactly what happens with AMD: every module must be wrapped in a define(...) call, and its dependencies are listed twice, once as an array of strings and once as the parameters of the factory function.

Besides this, there is no clear mapping between the identifier used to refer to a module and the actual module being loaded. While this allows us to delay the concrete binding of a module by just requiring a certain interface to be implemented in the loaded modules, it means we need to know everything about every dependency of every module we use — with enough dependencies there's such a high cognitive load to keep everything working that it outweighs the benefits of modules.

The major implementation of AMD in JavaScript is Require.JS, although there are a few others. Require.JS still allows plugins to be defined for loading different kinds of modules, which is a powerful feature, but one that can be easily abused and a source of complexity. In general, my impression from reading Require.JS's documentation is that its loader (the thing that'll map module identifiers to actual Module objects) is far too complected. However, I am not familiar enough with its actual implementation to say anything on this.

Lazy and dynamic loading of modules, which are things AMD enables, are a nice thing to have if your application ever needs to load more than 1MB of JavaScript — there are some kinds of applications that will just load a lot of new code over time, where it would be a nice fit. I'd still use a tool that converts from a simpler format to AMD, however.

## 4. Node modules (a superset of CommonJS modules)

The CommonJS modules defined by the standard just don't cut it. But Node modules are an improved implementation of CommonJS modules that give you first-class and straight-forward parametric modules (as AMD does), with a simpler mapping of module identifiers to implementations. Plus, you get to use NPM for managing your dependencies, but we'll get there in a second.
Unfortunately, the Node implementation is still a tad too complex, because it allows the use of plugins to transform modules before they are loaded, and allows one to omit file extensions, which are the trigger for plugins — these two features combined are not a good thing, as acknowledged by Isaacs.

### 4.1. A quick conceptual overview

Node modules and their loader are conceptually easy, and relatively simple:

- Each file corresponds to exactly one object. Once your module runs, you get a fresh `exports` object already instantiated, but you can replace it by any other value.
- Each module gets three magical variables:
  - The `require` function, bound to the module's location (so that relative modules do The Right Thing™);
  - The `__dirname` variable, which contains the module's location;
  - The `module` variable, conforming to the interface `{ exports: Object }` (and a few other fields), used to store the module's value.
- A call to `require` with a relative path will resolve to a module file (and ultimately an Object) relative to the current module.
- A call to `require` with an absolute path will resolve to a single module file (and ultimately an Object) relative to the root of the file system tree.
- A call to `require` with a module identifier (no leading dot or slash) will resolve to the closest module with that name in a parent or sister `node_modules` folder.

Additionally, a module can be part of a package. Packages encode a collection of modules along with their meta-data (dependencies, author, main module, binaries, etc). We'll talk about packages in depth once we visit NPM later in this article.

### 4.2. First-class modules

As mentioned before, Node modules are first-class. This means they're just plain JavaScript objects that you can store in variables and pass around. This is one of the most important steps towards a good module system. To write a Node module, you just create a new JavaScript file and assign any value you want to `module.exports`:
### 4.3. Module loading

Node modules also give you a way of resolving a module identifier to an actual module object. This is done by the first-class function `require`. This function takes in a String containing a module identifier, resolves the identifier to a JavaScript file, executes the file, then returns the object that it exports.

A module identifier can be either a relative or absolute path, in which case the regular file lookup rules apply: `./foo` resolves to a `foo` file in the requirer's directory, `/foo/bar` resolves to the `/foo/bar` file relative to the root of the file system. Module identifiers can also be the name of a module, for example `jquery` or `foo/bar` — in the latter case `bar` is resolved relative to the root of `foo`. In these cases, the algorithm will try to find the closest module that matches that name living in a `node_modules` folder above the requirer's location.

Node's module loading algorithm, while slightly complex (due to allowing one to omit extensions and allowing people to register transformers based on the file extension), is still pretty straight-forward, and encourages people to have dependencies installed per-module, rather than globally, which avoids lots of versioning hell.

### 4.4. Parametric modules and delayed binding

Last, but not least, Node modules allow straight-forward parametric modules, by making it possible for your module to be a closure. Parametric modules give us the possibility of delaying implementation decisions, so you code your module against a particular interface, and when instantiating your module the user gives you the correct implementation of it. This is good for a handful of things, from shims, to modular and abstract DOM manipulation, to choosing performant implementations of X or Y.

So, in practice, it works like this: you define an interface for writing your code against.
Then you export a function that takes a concrete implementation of that interface, and finally, when someone wants to use your module, they just instantiate the code with the right implementation of that interface (a Stack, say).

### 4.5. A real-world scenario

So, the above was simple, just to convey the basics of the applicability of parametric modules, but let's see a more real-world scenario. At the company I work for we had to store some data in the session on a website, and we had to support old browsers that have no support for SessionStorage, and we had to write a handful of services on top of that. What we did was to write a parametric module that expected a Storage-like interface, and instantiate it with the right implementation depending on the capabilities of the browser. Basically, we had a minimal Storage interface, and derived a concrete implementation for browsers supporting local storage, and one for browsers that do not, talking to a webservice over HTTP instead (which is slower, and we didn't want to push that cost on every user).
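A sketch of the shape described above (the names are illustrative; the actual company code is not shown here): the storage module is a function over a backend implementing get/set, and the choice of backend is made once, at instantiation time.

```javascript
// storage.js: a parametric module. We export a function that takes the
// concrete backend and closes over it, instead of hard-wiring one.
function makeStorage(backend) {
  return {
    get: function (key) { return backend.get(key); },
    set: function (key, value) { backend.set(key, value); }
  };
}

// One possible backend. The real code talked to sessionStorage or to a
// web service, but anything implementing get/set fits the interface.
function makeMemoryBackend() {
  var data = {};
  return {
    get: function (key) { return data[key]; },
    set: function (key, value) { data[key] = value; }
  };
}

// Delayed binding: the decision of *which* backend to use happens here,
// once, instead of inside every module that needs storage.
var storage = makeStorage(makeMemoryBackend());
storage.set('user', 'alice');
console.log(storage.get('user')); // alice
```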
One NPM to Rule Them All Okay, cool, we have a way to load independent components in our applications and even swap in different implementations without breaking a sweat. Now there’s one thing left: solving dependency hell. Once you start having lots of modules — and you should, because modules are awesome, and modules make your applications a fucking ton simpler, — you’re eventually run into things like: “OH MY GOD I have to fetch this module, and then this, and on this one depends on this version of that one which depdends on that other version of OH FUCK THIS SHIT” Meet NPM, the guy who’s going to do that job for you, so you can just keep your mind into coding awesome applications in JavaScript. 5.1. On Package management in general Package management is not a new idea, it goes back all the way down the story of computing. It’s sad that even though we had a lot of time to improve on this area, some package managers still repeat lots of mistakes from the past. Package managers are, basically, tools that handle all the requirements of your application upfront, instead of you having to tell every user and every developer the requirements for running your application, and expecting them to search for those, download, install, configure, etc. A package manager will do all of that for free, so you can just jump into rocking on right away, any time you want. 5.2. How NPM works? NPM stores packages in a registry. Packages are a possible collection of modules along with their meta-data, which means every package knows who created it, which versions of Node it runs in, and which other packages (and their versions) are needed for it to properly work. This meta-data is specified in a package.json file at the root of your module directory. The most basic file that could possible work would be something like this: There’s quite a bit more to these meta-data, though, so be sure to check out NPM’s documentation on package.json to learn everything else. 
Moving on, having NPM installed in your system means you can now declare that your module depends on X, Y and Z to work and just leave the job of fetching and configuring those for NPM, by running npm install at the root of your module directory. Then, once you’ve finished fiddling with your module and are ready to let the world know about it, you just npm publish it, and everyone else can install your module as easily as doing npm install my-thingie. 6. NPM and Node modules outside of Node-land Alas, while NPM is a general tool for managing JavaScript dependencies, it uses Node-style modules (which are a superset of CommonJS modules), which your browser, and most other JavaScript environments don’t quite understand. Amazed by the awesomeness of Node modules, and pulled by the desire of having the same amazing development experience in other platforms, people wrote tools to fix this. I’ll talk about Browserify, by substack, here, but there are other tools that do similar things. 6.1. Limitations Let me start by saying a few words about the limitations of a tool like Browserify for Node modules: they work by performing optimistic static analysis on your modules and then generating one or more bundles that provide the necessary run-time for instantiating code objects. With Browserify, this means that you’ll send down a single JavaScript file to your users, containing everything they need. This also means that we can’t have conditionally bundled modules in Browserify, because that would require flow-control analysis and runtime support for loading modules — Browserify can’t load modules that were not statically “linked” at compile time. 
Such a thing is, however, possible with a tool that would compile Node-style modules to AMD, and then lazily load dependencies — in that case you’d be trading off higher bandwidth usage for higher-latency, and the latter is most likely to be a ton more times horrible in most cases, unless you need to send >1MB of scripts down to your user, or keep sending scripts over the time. It also means that most modules will just work, as long as you don’t use require as a first-class construct — you don’t bind it to another variable and use that variable for loading modules, or pass an expression to require. We can not have first-class module loading with just optimistic static analysis, we’d need runtime support, but the trade-offs don’t justify it most of the time. 6.2. Browserify means Node modules + NPM in the browser The first thing you need to do to get browserify kicking up and running is: Now you can write all your modules as if you were writing for Node, use NPM to manage all your dependencies and then just generate a bundle of your module that you can use in a web browser: 6.3. Stand-alone modules Sometimes you need to share your modules with people who don’t know the awesome world of Node modules yet, shame on them. Browserify allows you to generate stand-alone modules, which will include all dependencies, and can be used with AMD or the No-module approach: 7. Conclusion Modularity is really important for developing large applications that you can actually maintain, and first-class parametric modules give you just the right tool for that. Anything less powerful than that and you’re going to suffer badly to componentise everything sooner or later, sometimes people just put up with that and say “Oh, just use this boilerplate” or “Here, I just gave this pattern a name.” Patterns and boilerplates should not be what we, as developers, strive for. We should be using our creativity to solve our problems, not just work around the symptoms. 
Node modules are not perfect either, but they’re a real good start considering all the other alternatives. Could we do better? Possibly. And I have some ideas of my own that I’d like to try over the next weeks, but that’s the topic of another blog post. For now, I just hope you can go back to your projects and use these techniques to write even bigger and awesomer stuff :D 8. References and additional reading - Living in a Post-Functional World - This amazing talk by Daniel Spiewak at flatMap(Oslo) touches some important aspects of first-class & parametric modules, and is an absolute must watch. - The Node.js documentation on Modules - These describe Node modules in much more detail than I have in this little article. - Browserify - The browserify tool, which transforms Node-style modules so they can run in non-Node environments. - Separating Concerns With First-Class Namespaces - Nierstraz & Archermann present Piccola, a language with first-class namespaces based on π-calculus. Piccola uses forms (you can think of them as objects) as modules, services (you can squint your eyes hard and think of them as functions) as parametric modules and mixin layer composition for composing different modules together. - The Programming Language Jigsaw - Mixins, Modularity and Multiple Inheritance - Gilad Bracha's thesis describe a framework for modularity in programming languages with basis in inheritance for module manipulation. A fairly interesting read. - Traits - Composing Classes from Behavioural Building Blocks - Traits came up as an alternative for multiple inheritance and the diamond problem, being a simpler and more robust choice than plain mixins. Schärli's dissertation describes them in a rather nice way, and the operators and semantics for traits just happen to make a perfect fit for enforcing compositionality contracts for dynamic module. 
- Pandora - Pandora is a rather old module I've written that uses Trait operators to define a series of constraints for first-class modules. It's influenced by Scheme, Clojure, Piccola and Traits, and is here mostly for the conceptual relevance.
- RequireJS API - The documentation on the RequireJS loader API.
- CommonJS Modules/1.1 - The specification for CommonJS modules.
- CommonJS Modules/Asynchronous Modules Definition - The specification for Asynchronous Module Definition in the CommonJS standard.
http://robotlolita.me/2013/06/06/node-modules-to-rule-them-all.html
Hello, I just received my ChipWhisperer-Lite and the target board, a CW305. I am using Windows 10 and I installed ChipWhisperer 5.5 directly (no VM). The USB driver for the CW305 was installed automatically; however, I had to install the USB driver for the ChipWhisperer-Lite myself (through Device Manager).

The FPGA is an XC7A100T, and the M0M1M2 switch is set to “111”. The LED status is as follows: on the CW305, LED8 (green) is on, LED1 (red) is flashing, and the FPGA_DONE-ON=Unprog LED (red) is on. On the CW-Lite, the blue LED blinks continuously and the others are off.

I am having great difficulty trying to program the CW305 target board. The scope = cw.scope() command is not working. Did I miss some installation/operation steps?

I believe this based upon running the following commands:

import chipwhisperer as cw
scope = cw.scope()

which give the following error:

Traceback (most recent call last):
File “c:\users\nalla\chipwhisperer5_64\git\home\portable\chipwhisperer\software\chipwhisperer\hardware\naeusb\naeusb.py”, line 316, in txrx
response = self.open(serial_number=payload)
File “c:\users\nalla)
https://forum.newae.com/t/scope-cw-scope-command-is-not-running-with-cw305/2369
SVG 1.2 and ECMAScript implementation

Hi,

I made a quick implementation of the <svg:handler> and <ev:listener> elements as proposed for SVG Tiny 1.2, to the extent allowed by the not so good Working Draft <>... A simple example would be

<!DOCTYPE svg [
<!ATTLIST handler id ID #IMPLIED>
]>
<svg xmlns = "" xmlns:xlink = '' xmlns:
<defs>
<handler id="test">
evt.target.setAttributeNS(null, 'fill', 'green');
</handler>
</defs>
<rect id="obs1" height="100" width="100" x="0"/>
<a xlink:
<rect id="obs2" height="100" width="100" x="100"/>
</a>
<rect id="obs3" height="100" width="100" x="200"/>
<ev:listener
<ev:listener
<ev:listener
</svg>

This is at which uses to enable use of these elements in non-compliant SVG 1.0/1.1 viewers like Opera 9, Firefox 1.5 and ASV6. In compliant viewers like Batik it won't work (unless you use only the ev:listener element).

There is some code to handle mutations of the elements and the document (including listener registration during progressive download), but it turns out that there are too many problems with existing implementations and specifications, so this is not really feasible; some of the code is therefore missing and/or not used as intended.

The code also has an implementation of the DOM Level 3 Core Node method lookupNamespaceURI, if you are into namespaces, which might be of interest. The code is available under the GPL and might eventually end up in the SVG QA project <>. I am not likely to maintain this code; if you'd like to, please drop me a mail.

regards,
-- Björn Höhrmann · mailto:bjoern@... · Weinh. Str. 22 · Telefon: +49(0)621/4309674 · 68309 Mannheim · PGP Pub. KeyID: 0xA4357E78
https://groups.yahoo.com/neo/groups/svg-developers/conversations/topics/53735
CodePlex Project Hosting for Open Source Software

Hi there,

Has anyone run into an issue like mine? I built a module that has events and actions for Rules, and it works. When I copied the same module to another Orchard project, I could not find any of the events and actions in Rules, but the other functionality of this module still works. Occasionally, rebuilding the whole solution and restarting the site helps, but not always. I know Autofac does the magic when the site is loading. Is there a reliable procedure to make sure these events and actions get picked up? I'm tired of trying with crossed fingers. :) Any possible solution will help; in the meantime, I will investigate the issue as well.

Cheers,

Did you remember to set dependencies in Module.txt? (i.e. dependencies on Rules and Actions)

randompete wrote:
> Did you remember to set dependencies in Module.txt? (i.e. dependencies on Rules and Actions)

Yes, I have set the dependencies to Orchard.Rules, and I think Actions is inside Rules as well.

I finally found a weird workaround: copy the interface file to another file, or change the namespace to something different from your module's. After compiling, the actions magically show up in the action list of the Rules module. Then you can either copy your code back or change the namespace back, and it will continue to work. I don't know whether it was cached somewhere, but I think that if a recompile and a site restart won't refresh the cache, that is a bug. To me, it looks like a bug in Autofac. @sebastienros, can you have a look at this? To replicate it, you can write events and actions in an existing module which was enabled before.
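For reference, the dependency declaration being discussed lives in the module's Module.txt manifest. A minimal sketch of the shape an Orchard 1.x manifest takes — the module name, author, and version here are placeholders, not values from this thread:

```text
Name: MyModule
Author: Me
Version: 1.0.0
OrchardVersion: 1.4
Description: Custom events and actions for Rules.
Features:
    MyModule:
        Description: Custom rules events and actions.
        Dependencies: Orchard.Rules
```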
http://orchard.codeplex.com/discussions/282063
This is part II of a two part series. First off, you can view the blog live here.

The following is an article "with attitude" on putting together a basic blog engine with .NET 2.0.

Why am I writing a blog engine? Because I've never found one that I like. They either:

And because I'm a maverick and I like to do my own thing. I've tried DotText, and it's clunky and obsolete. I've tried SubText, and I can't figure out how to change simple things like the widths of the margins. And the other blog engines? Forget it. I can't even make my way around their websites to figure out what the requirements are or how to install them. And amazingly, there is not one article on CodeProject regarding a blog engine. So here's the first one (assuming one doesn't get posted in the next day or so).

I will probably say this a few times in this article, but I think this is the first time: I'm rather clueless when it comes to web apps. So I'm doing this to learn how not to write a web app, as I'm sure I'll learn a lot more of that than "how to". So this is also a foray into the wonderful world of web applications and the things I discover while writing what should be a simple application. You should be aware, if you use this code (I can't imagine anyone would actually want to use it) that this is most likely the wrong way to do things.

In Part I, I had gotten the basic engine working with some style capabilities (my own, not CSS) and the ability to click on a blog entry and go back to the home page. In this article:

For the normal listing, I want to limit the blog entries displayed to, say, 10 entries. Hmmm. Let's put this number into the database in the BlogInfo table. After adding the "MaxEntries" field to the database, I just add the property to the BlogInfo class:

protected long maxEntries;

public long MaxEntries
{
    get { return maxEntries; }
    set { maxEntries = value; }
}

and my poor man's ORM method will populate it properly.
Now, how do I limit the number of rows retrieved from SQLite? Hmm. There's a "Limit" keyword for the select statement:

public static DataTable LoadBlogEntries(SQLiteConnection conn, string id, long max)
{
    SQLiteCommand cmd = conn.CreateCommand();

    if (id == null)
    {
        cmd.CommandText = "select * from BlogEntry order by Date desc limit " + max.ToString();
    }
    else
    {
        cmd.CommandText = "select * from BlogEntry where ID=@id";
        cmd.Parameters.Add(new SQLiteParameter("id", id));
    }

    SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
    DataTable dt = new DataTable();
    da.Fill(dt);

    return dt;
}

Appears to work.

The archives should display a link for each month/year in which at least one blog entry has been made, and in parenthesis, the number of entries. The most difficult part about this is not the ASP.NET but rather figuring out an elegant query in SQLite. The query that appears to work for what I require is:

SELECT strftime('%Y-%m', date) as Date, count(strftime('%Y-%m', date)) as Num from blogentry group by strftime('%Y-%m', date) order by strftime('%Y-%m', date) desc

Note that I want the listings in reverse chronological order. I created a helper class for the resulting rows:

public class ArchiveEntry
{
    protected string date;
    protected long num;

    public long Num
    {
        get { return num; }
        set { num = value; }
    }

    public string Date
    {
        get { return date; }
        set { date = value; }
    }

    public ArchiveEntry()
    {
    }
}

I'm going to put the archives in the right gutter (yes, this is hard coded for now) and the categories in the left gutter. So, I'll need the right gutter cell to create the table that will list the archive entries, and this is done by just assigning cellArchive to the ID tag. I might as well assign cellCategory to the left cell as well while I'm at it. The problem that I see when testing it out is that the archive list is centered vertically on the page, rather than being aligned on the top like I want it to be. Setting the VerticalAlign property of the cell fixed that problem.
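To see what that strftime/group-by query actually produces, here is a small sqlite3 session against throwaway data — the table is trimmed to the two columns the query touches, and the dates are made up:

```shell
sqlite3 :memory: <<'SQL'
create table blogentry (id integer primary key, date text);
insert into blogentry (date) values ('2008-09-14'), ('2008-10-02'), ('2008-10-28');
select strftime('%Y-%m', date) as Date,
       count(strftime('%Y-%m', date)) as Num
from blogentry
group by strftime('%Y-%m', date)
order by strftime('%Y-%m', date) desc;
SQL
```

This prints `2008-10|2` then `2008-09|1`: one row per month, newest first, with the post count alongside — exactly the shape the ArchiveEntry helper class expects.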
The archives also need to be links that provide the information for querying the blog entries for that month. So now I have this method call for adding the archive table:

protected void LoadArchives()
{
    // Set the style for the entire cell.
    SetStyle(cellArchive, "CellArchive");

    // Get the archive list.
    DataTable dtArchives = Blog.LoadHistory(conn);

    // Create a table for the cell.
    Table tblArchive = new Table();

    // Set the style for the table.
    SetStyle(tblArchive, "ArchiveTable");
    cellArchive.Controls.Add(tblArchive);

    // For each archive listing...
    foreach (DataRow row in dtArchives.Rows)
    {
        // Create the helper class.
        ArchiveEntry archive = new ArchiveEntry();

        // Load it up.
        Blog.LoadProperties(archive, row);

        // Create a row.
        TableRow rowArchive = new TableRow();

        // Set the row style.
        SetStyle(rowArchive, "ArchiveRow");
        tblArchive.Rows.Add(rowArchive);

        // Create a cell.
        TableCell tblCellArchive = new TableCell();

        // Set the cell style.
        SetStyle(tblCellArchive, "ArchiveEntry");
        rowArchive.Cells.Add(tblCellArchive);

        // Get the date.
        DateTime dt = Convert.ToDateTime(archive.Date);

        // Convert it to our display format (hardcoded!).
        string text = dt.ToString("MMMM yyyy") + " (" + archive.Num.ToString() + ")";

        // Set the cell text to a link carrying the year/month query parameters.
        tblCellArchive.Text = "<a href=\"" + url + "?Year=" + dt.Year + "&Month=" + dt.Month + "\">" + text + "</a>";
    }
}

And the result, after adding some style information, looks like:

Not bad! And look, I'm blogging in the future! Now I just need to deal with the parameters when clicking on the link. We try getting the parameters:

...
string year = Request.QueryString["Year"];
string month = Request.QueryString["Month"];
string archiveDate = null;

if ((year != null) && (month != null))
{
    archiveDate = year + "-" + month.PadLeft(2, '0');
}

and in the LoadBlogEntries query:

...
else if (archiveDate != null)
{
    cmd.CommandText = "select * from BlogEntry where strftime('%Y-%m', date)==@archiveDate order by Date desc";
    cmd.Parameters.Add(new SQLiteParameter("archiveDate", archiveDate));
}

Did I say in Part I that a blog entry would have 1 or more categories? Wrong! A blog entry can have zero or one category. I'm being lazy and I'll want to work only with one category right now. Otherwise, I'd need another table to manage the one-to-many relationship between a blog entry and 0 or more categories. The category table already exists, I just need my little helper class next. You'll note I added a Num property, because I want the categories to be like the archives--showing the number of posts in the category.

public class Category
{
    protected long id;
    protected string name;
    protected string description;
    protected long num;
    ... etc ...

The query is:

select a.id as Id, a.name as Name, count(b.id) as Num from blogcategory a, blogentry b where a.id=b.categoryid group by a.name order by a.name

And the result (after creating some sample data) is:

You'll note I didn't bore you with the ASP.NET details, because they're basically identical to the archive listing. In fact, the code ought to be unified!

I've saved what I consider will be the worst part for now. Along the principle of "keep it simple", I'll just create table cells and use my style setter to determine the look. But I'm loathe to hard code menus. I'd rather use the database to generate the menus and have them link to pages. First, the helper class:

public class MenuItem
{
    protected long id;
    protected long menuBarId;
    protected string text;
    protected string page;
    ... etc ...

Then, some default menu bar stuff in the database's MenuBar table:

You'll notice I'm using a special blog entry for the Contact link rather than a separate page.

One thing I notice that's annoying is the menu items are spaced proportionally across the page. So if I have two menu items, the first is on the left and the second is in the middle of the menu bar. I'd rather they be spaced evenly across the page. I tried the ASP.NET Menu class, and WTF!?!? It does the same thing!!! And gosh, I wonder why. When I look at the page source, it's using tables! AH! But the REAL reason is because I'm setting the width to 100%. I bet I was doing that in my original table as well. If I leave the width alone, it looks ok, except that the menu items are too close together. But I think I'll stick with the ASP.NET menu bar because it has a lot of interesting features. And the spacing between the menu items is set by the menu bar's StaticMenuItemStyle's HorizontalPadding property. That took a while to find.

Of course, now any background color for the menubar only sets the area covered by the menu items! So I'm back to the original problem. My only solution to this is to create a Panel object and put the menu bar inside the panel. The panel extends to the limits of the window, and then I can set both the panel and the menubar to the same background color.

Dynamically loading the menubar is easy enough:

protected void LoadMenuBar(int menuBarId)
{
    menuBar.Items.Clear();

    // Set styles for the table elements.
    SetStyle(menuPanel, "MenuPanel");
    SetStyle(menuBar, "MenuBar");

    // Get the menu bar.
    DataTable dtMenus = Blog.LoadMenuBar(conn, menuBarId);

    // Our helper class.
    MenuItem item = new MenuItem();

    // For each menu bar item...
    foreach (DataRow row in dtMenus.Rows)
    {
        // Load the helper class.
        Blog.LoadProperties(item, row);
        menuBar.Items.Add(new System.Web.UI.WebControls.MenuItem(
            item.Text, null, null, StringHelpers.LeftOfRightmostOf(url, '/') + item.Page));
    }
}

And after setting the Orientation property to horizontal, I get pretty much what I want:

Time to deal with the login and basic administration, so I can move away from editing rows in the database by hand. For one thing, I need a username and password in the database!
The default will be "Admin" for the username and password. The login page sets a session IsAdmin key if the login is valid and redirects to the admin page:

protected void btnLogin_Click(object sender, EventArgs e)
{
    // Go to the admin page if the login validates.
    if ((tbUsername.Text == blogInfo.Username) && (tbPassword.Text == blogInfo.Password))
    {
        Session["IsAdmin"] = true;
        Response.Redirect("admin.aspx");
    }
}

Conversely, the logout page nulls the IsAdmin key and redirects to the home page:

protected void Page_Load(object sender, EventArgs e)
{
    Session["IsAdmin"] = null;
    Response.Redirect("default.aspx");
}

Because I'd like the website to have a consistent look and feel, the admin page uses the same header, menu, and footers:

And is set up essentially the same as the home page, except that a menu bar ID of 1 is specified. The test for whether we're in admin mode must be made for every admin page.

protected void Page_Load(object sender, EventArgs e)
{
    if (Session["IsAdmin"] != null)
    {
        url = StringHelpers.LeftOf(Request.Url.ToString(), '?');
        conn = (SQLiteConnection)Application["db"];
        blogInfo = Blog.LoadInfo(conn);
        Page.Title = blogInfo.Title;
        Style.SetStyle(conn, tblHeader, "HeaderTable");
        PageCommon.InitializeHeaderAndFooter(conn, url, blogInfo, cellTitle, cellSubtitle, cellCopyright);
        PageCommon.LoadMenuBar(conn, url, 1, menuBar, menuPanel);
    }
    else
    {
        Response.Redirect("login.aspx");
    }
}

Because of this commonality, I've moved the initialization code into a static class, so the call for all the pages becomes:

PageCommon.LoadCommonElements(this, -1, out blogInfo, out conn, out url, tblHeader, cellTitle, cellSubtitle, cellCopyright, menuBar, menuPanel);

which of course requires that the layout of each page contains the header, menu, and footer "pieces".

The new blog entry consists of a pulldown for the category, title and subtitle, and a multi-line textbox to paste in the HTML for the entry.
As I said before, editing a blog entry online is klunky and I've found it to be buggy as well. I suppose it's nice when you're in an Internet Cafe, but frankly, that describes my blogging scenario 0.1% of the time. I'd rather use a nice HTML editor and simply copy and paste the HTML up to my blog. Why blog engines don't provide this simpler interface is beyond me. Instead, you get this rich text box editor that often takes half a minute to load, and then you have to click on the "HTML" button before posting the HTML. Sigh. Overly complicated, time consuming, and too many clicks.

Because I want HTML tags allowed in the entry, I have to explicitly turn off page validation:

"You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case."

Well, since only the admin will be making these posts, I don't see any problem with turning off validation. I still haven't figured out how to properly turn off validation just for the new blog entry page, so for the moment, this:

<system.web>
    <pages validateRequest="false" />
</system.web>

is sitting in the Web.Config file. Of course, this turns off validation for ALL pages, which isn't what I want either.

Here's the code for entering a blog entry.
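For what it's worth, ASP.NET does support scoping this to a single page: either a ValidateRequest="false" attribute in that page's @ Page directive, or a location element in Web.Config. A sketch of the latter, with the page name guessed from the class name used in this article:

```xml
<configuration>
  <!-- Disable request validation for just the new-entry page;
       every other page keeps the default validation. -->
  <location path="NewEntry.aspx">
    <system.web>
      <pages validateRequest="false" />
    </system.web>
  </location>
</configuration>
```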
Note the use of the BlogEntry helper class:

public partial class NewEntry : System.Web.UI.Page
{
    protected BlogInfo blogInfo;
    protected SQLiteConnection conn;
    protected string url;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (Session["IsAdmin"] != null)
        {
            PageCommon.LoadCommonElements(this, 1, out blogInfo, out conn, out url, tblHeader, cellTitle, cellSubtitle, cellCopyright, menuBar, menuPanel);

            if (!IsPostBack)
            {
                cbCategory.DataSource = Blog.LoadCategories(conn);
                cbCategory.DataTextField = "Name";
                cbCategory.DataValueField = "ID";
                cbCategory.DataBind();
            }
        }
        else
        {
            Response.Redirect("login.aspx");
        }
    }

    protected void btnPost_Click(object sender, EventArgs e)
    {
        BlogEntry blogEntry = new BlogEntry();
        blogEntry.CategoryId = Convert.ToInt64(cbCategory.SelectedValue);
        blogEntry.Date = DateTime.Now;
        blogEntry.Title = tbTitle.Text;
        blogEntry.Subtitle = tbSubtitle.Text;
        blogEntry.BlogText = tbBlogEntry.Text;
        Blog.SavePost(conn, blogEntry);
    }
}

There's not much to writing out the record. I use reflection to create the parameter list, which is over-simplified and probably not re-usable as it stands:

public static void SavePost(SQLiteConnection conn, BlogEntry blogEntry)
{
    SQLiteCommand cmd = conn.CreateCommand();
    cmd.CommandText = "insert into BlogEntry (Title, Subtitle, CategoryID, Date, BlogText) values (@Title, @Subtitle, @CategoryID, @Date, @BlogText)";
    AddParameters(cmd, blogEntry);
    cmd.ExecuteNonQuery();
}

public static void AddParameters(SQLiteCommand cmd, object source)
{
    foreach (PropertyInfo pi in source.GetType().GetProperties())
    {
        cmd.Parameters.Add(new SQLiteParameter(pi.Name, pi.GetValue(source, null)));
    }
}

For editing categories, I'd like to try and use a built-in grid control. I must say, working with the built-in GridView is abysmal. In fact, I've spent three hours now trying to figure out why it doesn't update the underlying table row when I edit a value.
The only thing I can figure out at this point is that I need to manually move the data into the underlying table, which is counter to all the examples I've seen. And then it seems that I have to use templates to get access to the data being edited in the cell. There's an example of dynamic template fields here, but I simply cannot believe it is this difficult to work with a GridView control! Another article here shows how this is done with the designer when you have the data source. Unbelievable. This completely entangles the presentation layer with the data layer, and I might as well learn how to do this dynamically (using G. Mohyuddin's article in the first link), as I absolutely refuse to hardcode my pages with information from the table's schema.

Finally though, I get things working. It may not be the best way nor even the right way, but it works, and it works thanks to the example by G. Mohyuddin. The most interesting part is where the templated fields are created. Because the fields are being dynamically created, a considerable amount of extra work is required.

protected void CreateTemplatedGridView()
{
    // Must do this so that BoundField columns are replaced with the template
    // field columns.
    gvCategories.Columns.Clear();

    foreach (DataColumn dc in dtCategories.Columns)
    {
        DataControlField dcf = null;

        if (dc.ColumnName.ToLower() == "id")
        {
            BoundField bf = new BoundField();
            bf.ReadOnly = true;
            bf.Visible = false;
            dcf = bf;
        }
        else
        {
            TemplateField tf = new TemplateField();
            tf.HeaderTemplate = new DynamicallyTemplatedGridViewHandler(
                ListItemType.Header, dc.ColumnName, dc.DataType.ToString());
            tf.ItemTemplate = new DynamicallyTemplatedGridViewHandler(
                ListItemType.Item, dc.ColumnName, dc.DataType.ToString());
            tf.EditItemTemplate = new DynamicallyTemplatedGridViewHandler(
                ListItemType.EditItem, dc.ColumnName, dc.DataType.ToString());
            dcf = tf;
        }

        gvCategories.Columns.Add(dcf);
    }
}

Refer to the article here for a more complete description of what's going on.
There's certainly no way I would ever have figured this out! The RowUpdating event handler can now access the Text property of the template field, which is a TextBox. It loads the data into my Category helper class and passes that to the "data access layer", haha, to update the record.

void OnRowUpdating(object sender, GridViewUpdateEventArgs e)
{
    GridViewRow row = gvCategories.Rows[e.RowIndex];
    Category cat = new Category();
    DataRow dataRow = dtCategories.Rows[e.RowIndex];

    foreach (DataColumn dc in dtCategories.Columns)
    {
        TextBox tb = row.FindControl(dc.ColumnName) as TextBox;

        // Hidden fields return null.
        if (tb != null)
        {
            string val = tb.Text;
            dataRow[dc] = val;
        }
    }

    Blog.LoadProperties(cat, dataRow);
    Blog.UpdateCategory(conn, cat);
    dtCategories.AcceptChanges();
    gvCategories.EditIndex = -1;
    BindData();
}

The UpdateCategory method uses my little reflection, poor man's ORM utility:

public static void UpdateCategory(SQLiteConnection conn, Category cat)
{
    SQLiteCommand cmd = conn.CreateCommand();
    cmd.CommandText = "update BlogCategory set Name=@Name, Description=@Description where ID=@ID";
    AddParameters(cmd, cat);
    cmd.ExecuteNonQuery();
}

The code is nearly identical for deleting and inserting a category; the SQL is different of course. One caveat here is that I haven't implemented cascading deletes, so deleting a category does not delete blog entries that reference that category.

This is why I hate ASP.NET development. Because the simplest solution to creating the EditStyle page, which, presentation and behavior-wise, is identical to editing categories, is to copy and paste the friggin' code. Yuck.

OK, code, take 2. I thought I might use generics, but because of the GridView events that have to be hooked, generics seemed not really possible. The only issue with a code re-use approach is that the calls to the data access layer to update, insert, and delete are method-name specific.
There are a variety of workarounds to this, and I chose to simply pass a string for the table being operated on (I did this for a certain amount of consistency, but it's still an awful implementation). I did use generics in one place--the new record handler:

protected void btnNewCategory_Click(object sender, EventArgs e)
{
    GridEdit.NewRecord<Category>();
}

I can now create the style editor (a simple grid, but good for now), and the entire Page_Load becomes:

protected void Page_Load(object sender, EventArgs e)
{
    if (Session["IsAdmin"] != null)
    {
        PageCommon.LoadCommonElements(this, 1, out blogInfo, out conn, out url, tblHeader, cellTitle, cellSubtitle, cellCopyright, menuBar, menuPanel);
        Session["Conn"] = conn;

        if (!IsPostBack)
        {
            dtStyles = Blog.LoadStyles(conn);
            Session["Styles"] = dtStyles;
        }
        else
        {
            dtStyles = (DataTable)Session["Styles"];
        }

        GridEdit.PageLoad(gvStyles, dtStyles, "StyleRecord");
    }
    else
    {
        Response.Redirect("login.aspx");
    }
}

which, in my opinion, is much more re-usable than copying and pasting 99.9% identical code from one page to the next, and the next, and so on. Because my poor-man's ORM/reflection methods already work with the business objects as objects rather than hardcoded class types, the data access layer takes very little work to change over to the general purpose GridEdit helper.
For example, the NewRecord (which used to be NewCategory) now looks like:

public static long NewRecord(SQLiteConnection conn, object rec)
{
    SQLiteCommand cmd = conn.CreateCommand();

    switch (rec.GetType().Name)
    {
        case "Category":
            cmd.CommandText = "insert into BlogCategory (Name, Description) values (@Name, @Description);select last_insert_rowid() AS [ID]";
            break;

        case "StyleRecord":
            cmd.CommandText = "insert into Style (CellType, PropertyName, FontPropertyName, Value) values (@CellType, @PropertyName, @FontPropertyName, @Value);select last_insert_rowid() as [ID]";
            break;
    }

    AddParameters(cmd, rec);
    long id = Convert.ToInt64(cmd.ExecuteScalar());

    return id;
}

Now this is at least a start as to how programming ASP.NET should be done when dealing with pages that are nearly identical in functionality and only different in the data that they manipulate!

This is trivial now that I have the general purpose grid editor working. I'll use the same mechanism to edit the blog setup fields, except it won't have a button for adding a new record, since there's only one setup row for the blog. Also, deleting the record is disallowed. And 5 minutes later: All done with that page.

Thinking out loud here... Same with blog entries. It took about 5 minutes to create. Now, I'm going to be really lazy. The blog entry includes a category ID, which at the moment is displayed as an integer. It would be much better, of course, to display a pulldown list. I'll do that later. I can imagine it's going to take me another 3 hours to figure out how to do that with a template field. And I'm sure you're saying "oh, this blows away Marc's re-use method for grids!" Not so fast. I'm planning on using attributes to decorate the property so that I can associate a pulldown with a property.

Another issue is that the blog entry is displayed in its entirety when not edited, in a huge TextBox. This will make the list unwieldy very quickly. And when you go to edit the blog entry, you get a small little textbox.
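The switch on the type name is exactly the kind of duplication the reflection trick could remove: the column list and the parameter list can both be derived from the object's own fields, so no per-type SQL is needed. A sketch of that idea, in Python with the stdlib sqlite3 module since it is quick to run standalone (the class and table are stand-ins for the article's C# ones):

```python
import sqlite3

class Category:
    def __init__(self, name, description):
        self.Name = name
        self.Description = description

def insert_sql(table, obj):
    # Build "insert into T (A, B) values (:A, :B)" from the object's own
    # fields -- the reflective equivalent of the switch statement above.
    cols = list(vars(obj))
    return ("insert into %s (%s) values (%s)"
            % (table, ", ".join(cols), ", ".join(":" + c for c in cols)))

conn = sqlite3.connect(":memory:")
conn.execute("create table BlogCategory (Name text, Description text)")

cat = Category("Rants", "Articles with attitude")
# vars(cat) doubles as the named-parameter dictionary, mirroring AddParameters.
conn.execute(insert_sql("BlogCategory", cat), vars(cat))
print(conn.execute("select Name from BlogCategory").fetchone()[0])  # Rants
```

The same move works in C# by reflecting over GetProperties() to emit the column and @-parameter lists, which would collapse the per-type cases into one method.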
I can live with that, as I'll copy the original entry to an HTML editor and paste in the changes. And when I think about it, instead of displaying a grid, it would simply be easier to edit the blog entry by being in admin mode and selecting the blog. I like that idea much better! Then I don't need to deal with attributes, huge text boxes, pagination, etc. Woohoo! Design a web app on the fly! Don't do this at home, folks!

OK, so to support this feature I added a hidden ID and date field and added the logic for the button visibility based on whether the user is in admin mode and editing the entry. The update and delete methods are quite simple. Notice the redirect to the home page when the entry is deleted and the redirect with the entry ID when the entry is updated.

protected void btnUpdate_Click(object sender, EventArgs e)
{
    BlogEntry blogEntry = new BlogEntry();
    blogEntry.Id = Convert.ToInt64(tbID.Text);
    blogEntry.CategoryId = Convert.ToInt64(cbCategory.SelectedValue);
    blogEntry.Date = Convert.ToDateTime(tbDate.Text);
    blogEntry.Title = tbTitle.Text;
    blogEntry.Subtitle = tbSubtitle.Text;
    blogEntry.BlogText = tbBlogEntry.Text;
    Blog.UpdateRecord(conn, blogEntry);
    Response.Redirect("default.aspx?ID=" + blogEntry.Id.ToString());
}

protected void btnDelete_Click(object sender, EventArgs e)
{
    Blog.DeleteRecord(conn, "BlogEntry", Convert.ToInt64(tbID.Text));
    Response.Redirect("default.aspx");
}

Hmm. I just noticed the page title, as it appears in a Firefox tab, is "untitled page". Yuck! One line in the Page_Load method fixes that!

Page.Title = blogInfo.Title;

OK, this is the next big issue to tackle: getting an RSS link working so my blog works with aggregators. Wikipedia has some information on the RSS format and I guess I'll try the RSS 2.0 format. This link provides a nice summary of required and optional elements. There is nothing thrilling about the code that generates the RSS--it's ugly and brute force.
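For reference, this is the shape the generator has to emit: RSS 2.0 requires title, link, and description on the channel, and each item needs at least a title or a description, with everything else optional. The URLs and values here are placeholders, not the blog's actual output:

```xml
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>Blog title</title>
    <link>http://example.com/</link>
    <description>Blog subtitle</description>
    <item>
      <title>Entry title</title>
      <link>http://example.com/default.aspx?ID=1</link>
      <pubDate>Wed, 01 Oct 2008 14:30:00 GMT</pubDate>
      <description>Entry HTML, escaped</description>
    </item>
  </channel>
</rss>
```

The pubDate value is an RFC 822/1123 date, which is exactly the shape .NET's DateTime.ToString("r") produces.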
I haven't supported all the optional RSS tags. The RSS is updated whenever a blog post is made or a blog post is changed. The nice thing is that the DateTime.ToString() method has a format specifier "r" that formats the string into the correct format for the RSS! I've added the RSS link to the archives column, as this seems to be a reasonable place for it to go.

I tested the RSS with FeedReader and it worked fine. Testing with Google's reader, the link is still stuck on a different header and content. Google must be caching the link and not updating it very frequently. Or I'm not supplying some information it's expecting. Weird.

Learning ASP.NET reminds me somewhat of learning to write games for a Commodore PET. It seems klunky and I'm not sure I'm doing things in the right way. Far too much time is spent fussing with the presentation layer. ASP.NET development is a ripe platform for all kinds of automation to help smooth out the rough edges with presentation, usability, and data management. There seems to be no clean separation of UI layer, business layer, and data layers. Any separation one adds doesn't really feel right because the presentation layer and its complexities are always in your face.

I'm sure I've made a lot of mistakes putting together this web app, but it's taught me a few things and I imagine, if I were to do it again, I would look more closely at third party tools to make the presentation layer easier to work with, some real automation tools for tying the presentation layer and business objects together, and better separation of concerns between the business layer and data layer.

Also something that needs a serious look is the repetition and redundancy that occurs from page to page. In many ways, web pages are like little autonomous islands. Working with them so that there is something common and consistent between them is very difficult.
And because everything is stateless, I find that I end up writing a lot of static classes to promote re-usability, and passing objects in to those classes. So there is little that strikes me as object oriented regarding ASP.NET development.

Certainly, the ASP.NET development environment is very slick. The ability to debug applications, switch between design and HTML code, the automatically generated Javascript, etc., make it very easy to put together reasonable looking, functional websites quickly.

The GridView is a hideous control. Obviously, a lot of thought has gone into making it as customizable as possible. This does not detract from the fact that it is plain ugly. The way the presentation changes when going into edit mode is unacceptable, in my opinion. The visual presentation of the control and its usability are so poor, I can see why people scramble for third party packages that offer some hope of a better presentation than what comes out of the box with ASP.NET.

While elements on web pages are supposed to "flow", I find that in practice the flow of web elements is problematic and fraught with problems. Sometimes web elements start on a "new line" for no explainable reason, and setting an element's visibility has inconsistent behavior--horizontally, it appears that other elements collapse, but vertically the space occupied by an invisible element remains. There seems to be little rhyme or reason (or control) to flow. As a result, basic web layout looks ugly because of alignment problems.

One thing that is driving me nuts is the purple "You've already clicked on this link" colorization. It makes the whole website look like it has the measles after clicking around a bit. However, this is something I'll deal with later.

In this project I had to work with:

Things/issues a web app often deals with that I didn't:

And this is just for ASP.NET. I really admire people who know ASP.NET and PHP, Apache webserver, Linux, and things like Ruby.
How do they have time to actually get anything done?

Copy the files to your webserver. You will also need to install SQLite ADO.NET (see link below). Adjust the web.config file accordingly.

Since this is an ASP.NET 2.0 application, I learned about dealing with running IIS with both ASP.NET 1.1 and 2.0. Read more here about application pools, as you'll need to create a separate application pool for an ASP.NET 2.0 application if you're running both 1.1 and 2.0 together.

The database blog.db and the rss.xml file must be set up for write permissions. I had to change the IIS_WPG account for write permissions for the App_Data folder and the root folder to enable writing of the database and rss.xml file respectively. That's probably totally the wrong thing to do.

If you delete the database, the application will create the database with some initial values. You'll need to create records in the style list to make it pretty, otherwise it looks like this:

(A conversation with Mark Harris)

Mark: so with this, how does one change the layout of the blog?
Marc: you can't. Not in the "requirements" :)
Mark: lol k
Mark: I hope you used MasterPages so people can purdy it up easy
Marc: Pushkin wants to know what "master pages" are. (I figure, if the cat asks, I won't look like an ignoramus)

I guess there might be a Part III one day: "Refactoring The Blog Engine".

So let's see how I compare against my original rant about other people's blog engines:

Not great. But it was a fun adventure!
http://www.codeproject.com/KB/aspnet/mblog2.aspx