| text (stringlengths 454-608k) | url (stringlengths 17-896) | dump (stringclasses 91 values) | source (stringclasses 1 value) | word_count (int64 101-114k) | flesch_reading_ease (float64 50-104) |
|---|---|---|---|---|---|
From Bugzilla Helper:
User-Agent: Mozilla/4.7 [en] (WinNT; U)
Description of problem:
I have a simple program to read the ar.rnat register. In order to test this, I use gdb to set ar.rnat to a value.
Although printing the contents of ar.rnat from gdb indicates that the value was changed, the program output shows that ar.rnat was not changed and still reads as 0.
Version-Release number of selected component (if applicable):
Fails with both gdb-5.1-1 and gdb-5.2-2
How reproducible:
Always
Steps to Reproduce:
1. .file "get_rnat.asm"
.text
.align 16
.global get_rnat#
.proc get_rnat#
// Function Prototype is uint64_t get_rnat(void)
get_rnat:
mov r2=ar.rsc // preserve ar.rsc in r2
mov r14=0x3;; // bits zero and one
andcm r3=r2,r14;; // r3 = r2 & ~r14
mov ar.rsc=r3;; // ar.rsc = enforced lazy mode
mov r8=ar.rnat;; // return value of ar.rnat
mov ar.rsc=r2;; // restore previous mode to ar.rsc
br.ret.sptk.many b0;; // return to caller (result is already in r8)
.endp get_rnat#
You can assemble the above with as -o get_rnat.o get_rnat.asm
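The three-instruction mask sequence above only clears the two low-order mode bits of ar.rsc (mode 0 is enforced lazy mode). The arithmetic, restated as a small Python sketch (illustration only, not IA-64 semantics):

```python
def to_enforced_lazy(rsc):
    """Clear bits 0-1 (the RSE mode field), mirroring the asm:
    r14 = 0x3; andcm computes r3 = r2 & ~r14."""
    return rsc & ~0x3

# Restoring the caller's mode is just keeping the original value
# around, as the asm does by saving ar.rsc in r2 first.
```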
2. Here is a simple program to utilize the above.
/* file: try.c */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
extern uint64_t get_rnat(void);
int main(void)
{
uint64_t rnat = get_rnat();
printf("ar.rnat = %#lx\n", rnat);
exit(0);
}
Note: I use gcc 3.0.4
gcc -o try try.c get_rnat.o -W -Wall -g
3. Use gdb to set rnat
Now, in order to test this, I want to use the debugger to force ar.rnat to have a value, so I do the following.
gdb try
(gdb) br main
(gdb) run
Breakpoint 1, 0x4000000000000692 in main ()
Then the debugger stops at the breakpoint, and I do
(gdb) print $rnat=32
(gdb) c
Continuing.
ar.rnat = 0
Program exited normally.
(gdb) quit
My program reported ar.rnat as 0, even though I used gdb to force the value to 32.
If I put another breakpoint before the exit and do "info register $rnat", then it is still 32, but my program reports 0.
Furthermore, if I force $rnat to a value through some asm code and then call get_rnat(), it works correctly.
Actual Results: My program reported ar.rnat as 0, even though I used gdb to force the value to 32.
Expected Results: The program should have output
ar.rnat = 0x20
Additional info:
Problem exists with PTRACE. Gdb is writing to 0x860 which is the
glibc supplied location to write to the rnat register. It also reads
the location back and this works within the confines of gdb (i.e. read
of the data produces the previously written value). This value is not
being used for the rnat register value when accessed via user
assembler code.
Kernel help is required to debug this problem. Either the published
address of PT_RNAT_ADDR is wrong in /usr/include/asm/ptrace_offsets.h,
the value should not be used for accessing/modifying the rnat
register, or there is a bug translating writes to modify the register.
358 ptrace (PT_WRITE_U, tid, (PTRACE_TYPE_ARG3) addr, buf[i]);
(outer) p/x buf[0]
$2 = 0x20
(outer) next
359 if (errno != 0)
(outer)
363 addr += sizeof (PTRACE_TYPE:
| https://partner-bugzilla.redhat.com/show_bug.cgi?id=78670 | CC-MAIN-2019-43 | refinedweb | 530 | 69.18 |
Hello, Sorry for the response delay.
At some point, I realized I was in the middle of a really big refactoring, and I'm afraid it is very tied to the framework I use. However, I would like to make some observations about my findings.

Apart from the DOM parser, one of the things I pursued was to replace constants with configuration, and to remove code that was automatically executed during file inclusion. E.g., namespaces are registered only if PHPTAL is configured for that, and then it registers only what is set in the config - and by default the config has the built-in namespaces :).

I'd like to elaborate these a bit more as suggestions for PHPTAL, but first I have to dig back through what I've done. Then, I'll try to extract the DOM parser into a pure PHPTAL installation, because mine is *too* changed now :(.

regards, rodrigo moraes

_______________________________________________
PHPTAL mailing list
PHPTAL@lists.motion-twin.com
| https://www.mail-archive.com/phptal@lists.motion-twin.com/msg00031.html | CC-MAIN-2018-26 | refinedweb | 160 | 60.85 |
The update of the LoPy causes bugs in my reading of values from the DHT22
Hi, I have recently updated the LoPy, and when I try to get the DHT22 sensor temperature I have a problem (I never exit the while loop because the program always finds the same value). I tested the program on an old version (1.0.0.b1) and it works :s. Would someone know where the problem might come from?
The old version :
(sysname='LoPy', nodename='LoPy', release='1.0.0.b1', version='v1.8.6-237-g9d21f17 on 2016-12-15', machine='LoPy with ESP32')
My current version :
(sysname='LoPy', nodename='LoPy', release='1.6.10.b1', version='v1.8.6-556-g989d5ac9 on 2017-03-30', machine='LoPy with ESP32', lorawan='1.0.0')
from machine import Pin
from dht import DHT22

# Initialise the sensor
dht_pin = Pin('P22', Pin.OPEN_DRAIN)
dht_pin(1)

# Read the sensor data
temp = -350
while temp < -300:
    temp, hum = DHT22(dht_pin)

# Convert the raw values
temp_str = '{}.{}'.format(temp // 10, temp % 10)
hum_str = '{}.{}'.format(hum // 10, hum % 10)

# Display
print('T' + temp_str + 'H' + hum_str)
from machine import enable_irq, disable_irq
import time

def getval(pin):
    ms = [1] * 300
    pin(0)
    time.sleep_us(20000)
    pin(1)
    irqf = disable_irq()
    for i in range(len(ms)):
        ms[i] =:
    return([0xff, 0xff, 0xff, 0xff])
    for i in range(len(res)):
        for v in bits[i*8:(i+1)*8]:  # process next 8 bit
            res[i] = res[i] << 1     # shift byte one place to left
            if v > 2:
                res[i] = res[i] + 1  # and add 1 if lsb is 1
    if (res[0]+res[1]+res[2]+res[3]) & 0xff != res[4]:  # parity
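The mangled snippet above is the bit-decoding step: 40 sampled pulse widths are packed MSB-first into 5 bytes, and the fifth byte must equal the low 8 bits of the sum of the first four. A self-contained sketch of just that logic (the threshold value, the /10 scaling, and the return shape are assumptions; the negative-temperature sign bit of the DHT22 is ignored here):

```python
def decode(bits, limit=2):
    """Pack 40 sampled pulse widths into 5 bytes, MSB first.
    A sample greater than `limit` counts as a 1 bit (`limit` stands
    in for the _LIMIT threshold discussed in this thread).
    Returns (temperature_C, humidity_pct), or None on checksum failure."""
    res = [0] * 5
    for i in range(5):
        for v in bits[i * 8:(i + 1) * 8]:
            res[i] = (res[i] << 1) | (1 if v > limit else 0)
    # fifth byte must equal the low 8 bits of the sum of the first four
    if (res[0] + res[1] + res[2] + res[3]) & 0xFF != res[4]:
        return None
    humidity = ((res[0] << 8) | res[1]) / 10
    temperature = ((res[2] << 8) | res[3]) / 10
    return temperature, humidity
```

A failed read shows up as a checksum mismatch rather than a bogus reading, which matches the -3276.8/6553.5 garbage values mentioned later in the thread.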
Ok, thank you (I used the right pin and all of your code).
@bessonf OK. Sometimes the read fails, and then you get -32768, 6553.5. That's a problem of the ESP32 timing.
@bessonf Did you a) take my code literally, or did you b) just copy in the functions getval() and decode()?
In case of a): my code uses a different Pin.
In case of b): it should work. I'll check later. You could print the bits array in decode() to see what the distribution of values looks like. There should be two groups of values, and _LIMIT must be chosen as the median between those.
I updated it again and tested your code, and it works. Maybe with the last update there was a bug, or it was incorrectly installed.
@robert-hh
That works, but I only get 0.0 and 0.0 in the response and no error message.
0.0 0.0
And if I put in a while loop to avoid the 0.0 0.0, I get this:
object not in sequence 0 0 -3277.3 6553.5
@bessonf There was another post about that, and during the discussion I modified the code here (my most recent response), which I have tested with 1.6.9.b1 and a DHT22. I could test it this evening with 1.6.10.b1.
I never leave the while loop because the value found is always the same and wrong (-32767). This is not really an error or an exception.
| https://forum.pycom.io/topic/980/the-update-of-the-lopy-causes-bugs-on-my-taking-value-on-the-dht22 | CC-MAIN-2018-09 | refinedweb | 524 | 73.78 |
[Vladimir Marangozov, on Sat, 30 Oct 1999]:
:: Bummer. I'm already way too much off-topic for this forum.

Hmmm. Is a discussion of Python, XML, and an application server platform which is *built* out of Python off-topic for this forum? Not in my view, and I would hope that others here find this a worthwhile line of inquiry.

Three years from now, will you be surprised if the payload of 50% of the packets shipped on the Internet (and across narrowband wireless, for that matter) is XML? I won't, especially counting automated machine-to-machine communications, behind the scenes, for ecommerce, financial transactions, news and who knows what else. Nor will I be surprised to see Python and Zope playing key roles in this new Web of logic. There are ways in which Python, Zope and developments in XML have converged on very similar solutions already (e.g., look at the WebBroker specs for namespaces).

I can understand and sympathize with Chris's view. Digital Creations already has its hands full inventing a new business model on a day-by-day basis; they need to focus on their current strategy. Nevertheless, Zope has "good bones" and perhaps the best shot at being first to the party with a coherent top-to-bottom XML model.

I admit I too was a bit suspicious of hype when I first heard people begin to discuss means of storing data in XML. Nevertheless, I find it intriguing that the discussion list at was started only one week ago, but already has 120 posts and is converging on a plan.

But anyway, that's why Zope, Python and XML are open, Vladimir! So that an aficionado like you, with your own view of things, can lift the hood and start tinkering. :)
| https://mail.python.org/pipermail/xml-sig/1999-October/001539.html | CC-MAIN-2017-04 | refinedweb | 299 | 69.82 |
Precompiling ASP.NET Applications And The Dreaded LoadResource Failure Error
Wednesday, July 02 2008 2 Comments
This is a very strange problem that burned quite a bit of my time last week. Hopefully this will be found by someone else with a similar dilemma.
Basically I’ve got a central web application project that I use to house user controls to be shared by many applications. To accomplish this, I’ve followed this post by K. Scott Allen (of OdeToCode fame).
Since precompiling an ASP.NET project produces (at least) one funkily-named dll, I decided to ILMerge it with the code-behind assembly. So the process looks like this:
After-build:
- Precompile the website, updatable=false, into a temporary directory
- Merge App_Web*.dll & my project assembly together into one
- Reference this in another project & use the controls
If you look at this dll in reflector, you get something like this:
As a side-note, If you haven’t ever tried doing this before then you should know that you have to reference the compiled user control classes from the ASP namespace in your pages… so in this example <cc1:keywordgrid_ascx /> rather than <cc1:KeywordGrid />.
This first control worked just fine. It was basically a custom GridView control. Then I added that next control, SelectKeywords. Adding it caused the assembly to crash upon loading, i.e. even when we weren't using SelectKeywords it was crashing pages that used KeywordGrid. The error:
This happened in the constructor of any control in the assembly. Digging a little deeper into the error, I found that this happens when your user control has literal content in it that gets embedded as a Win32 resource file during precompilation.
In reflector, this is what it looked like:
When building the control tree, the compiler was embedding literal content as a resource. To try and fix the issue, I started removing literal content. Bit-by-bit I removed some HTML until the error went away. This is what reflector looked like when it was working:
But where is this resource file? Clearly the assembly is referencing some resource file somewhere, right? I peeked into Temporary ASP.NET Files to find the answer.
If you open this up in Notepad, you’ll find your missing literals!
For some reason, this file wasn’t making it into the final assembly.
So if you have more than 256 characters of contiguous literal content, the compiler will put this into a resource file (likely for perf reasons). Sadly there is no way to turn this off.
I’m not sure if ILMerging the assembly was the reason why this was getting lost, but luckily David Ebbo from Microsoft was able to provide me with a couple of options.
- You can break up the literal content by injecting empty code blocks like <% %>, but who would really want to do that?
- You can fake out the compiler & supply a code provider that claims to not support Win32 Resource files. This is what I chose.
I created a class (of course in another assembly, otherwise you have a chicken-and-egg problem):
public class CustomCodeDomProviderWithoutWin : Microsoft.CSharp.CSharpCodeProvider
{
    public override bool Supports(GeneratorSupport generatorSupport)
    {
        // Claim no support for Win32 resources, so the precompiler
        // keeps literal content inline instead of embedding it.
        if ((generatorSupport & GeneratorSupport.Win32Resources) == generatorSupport)
            return false;
        return base.Supports(generatorSupport);
    }
}
To get the precompiler to use this new class, add this to your web.config:
<system.codedom>
<compilers>
<compiler language="c#;cs;csharp" extension=".cs" type="MyAssembly.CustomCodeDomProvide
</compilers>
</system.codedom>
Now my user controls can have as much literal content as they need and resources will not get added. I’d like to thank David Ebbo for his assistance in helping me figure this out. Thanks David!
Scott Allen
7.02.2008
11:58 AM
Wow, so sorry you are experiencing these weird problems!
| http://flux88.com/blog/precompiling-asp-net-applications-and-the-dreaded-loadresource-failure-error/ | crawl-002 | refinedweb | 624 | 56.66 |
> socket_demon13.zip > SOCDEMON.C
/*
 * Socket Demon.
 * - A little program that waits on an internet port and
 *   handles interactive commands.
 * Most of the routines are taken out of "Unix Network Programming"
 * by W. Richard Stevens, Prentice Hall, 1990.
 *
 * History:
 * 1994:
 * Jan. 14 - Initial program completed.
 * Jan. 17 - Added in nil command (if you just press enter)
 *         - Added in "?" to be the same as help
 *         - Added in the "bye", "ls", "pwd", and "cd" commands.
 *         - Added in lotsa comments
 *         - Added the 'ver' command.
 *         - Added in support for the -port, -log and -command
 *           command line options.
 *         - Added in the 'die' command.
 *         - Added logging of connects, disconnects, die's, and
 *           bad passwords.
 * Apr. 07 - Changed the password to be encrypted so it cannot be
 *           discovered with a strings(1) command.
 *         - Added the 'id' command.
 *         - Call setuid if uid != euid (so that if program is setuid root
 *           we can take advantage of the priviledge).
 *         - Took out the annoying uid listing at login
 * Apr. 11 - Added a debug mode, and moved most of the verbose FTP style
 *           comments to that mode, leaving normal mode more unix-ish.
 *         - Added in a uname command
 *         - Changed the output format of several commands to look more
 *           unix-ish
 * Apr. 13 - Split the program up in to a bunch of different files and
 *           made a make file for it. Speeds up development and
 *           makes it easier to change..
 * Apr. 14 - Added new commands, cp, mv, rm
 *         - Redirected stdin, stdout and stderr to the client socket.
 *         - Added a shell!
 * Apr. 15 - Added new commands, who, w, ps, chmod, chgrp, chown, cat,
 *           create
 */
#include "socdemon.h"
#include "socketio.h"
#include "soclog.h"

/*
 * print_usage()
 * - prints the program name, version, and date, and also
 *   the program parameters. See socdemon.h for all their
 *   definitions.
 */
void print_usage()
{
    int i;
    fprintf(stdout,"%s ver %s, %s.\n",SOCDEMON_NAME,SOCDEMON_VER,SOCDEMON_DATE);
    fprintf(stdout,"usage: %s [option] [option]...\n",program_name);
    fprintf(stdout,"where [option] is one of:\n");
    for (i=0;i
0) {
        tell_user(client_descriptor, "Password? ");
        pword_len = soc_read(client_descriptor, pword_str, MAX_INPUT_LEN);
        if (pword_str[strlen(pword_str)-1] == '\n') {
            pword_str[strlen(pword_str)-2] = 0;
        }
        if (check_pw(pword_str, SOCDEMON_PASSWORD)) {
#ifdef LOG_TO_DEATH
            strcpy(log_string,pword_str);
            log_stuff(bad_password);
#endif
            tell_user(client_descriptor, "Access Denied.\n");
            return false;
        }
    }
#endif
    return true;
}

/*
 * main()
 * - This is the main procedure..
 * - Most of the socket setup stuff is taken from Stevens
 *   (the socket(), bind(), listen() and accept() stuff),
 *   the rest is mine.
 */
main(argc, argv)
int argc;
char *argv[];
{
    int client_address_len;
    t_parms parms;
    int i;

    silent_mode = false;
    debug_mode = false;
    port_to_use = SOCDEMON_PORT;

    /* get the name of the executable for use in the print_usage() */
    /* call so that the "usage: XXX [options]" will be correct */
    strcpy(program_name,argv[0]);
#ifdef LOG_TO_DEATH
    log_to_file = false;
    command_log = false;
    strcpy(log_file,program_name);
    strcat(log_file,".log");
    strcpy(cmd_log_file,program_name);
    strcat(cmd_log_file,".log");
#endif
    if (argc > 1) {
        /* Collect and process the program arguments */
        for (i=1;i > 24);
    client_dotted_q.netid = (client_address.sin_addr.s_addr >> 16) & 0x00ff;
    client_dotted_q.subnetid = (client_address.sin_addr.s_addr >> 8) & 0x0000ff;
    client_dotted_q.hostid = client_address.sin_addr.s_addr & 0x000000ff;
    sprintf(dotted_4_str, "%d.%d.%d.%d", client_dotted_q.class,
            client_dotted_q.netid, client_dotted_q.subnetid,
            client_dotted_q.hostid);
#endif
    if (fork() != 0) {
        /* let the parent process handle the user coming in, and */
        /* make the child go on to wait for more requests - this */
        /* will avoid defunct processes and have an added bonus side- */
        /* effect of having the main daemon have it's process ID change */
        /* periodically.
         */
#ifdef LOG_TO_DEATH
        log_stuff(accepting_connection);
#endif
        /* direct stdin, stdout and stderr to the socket */
        dup2(client_descriptor,0);
        dup2(client_descriptor,1);
        dup2(client_descriptor,2);
        if (soc_demon_welcome()) {
            soc_demon_main_proc();
        }
        shutdown(client_descriptor, 2);
        close(client_descriptor);
#ifdef LOG_TO_DEATH
        log_stuff(connection_closed);
#endif
        exit(0);
    }
    close(client_descriptor);
  }
}
| http://read.pudn.com/downloads/sourcecode/internet/743/SOCDEMON.C__.htm | crawl-002 | refinedweb | 589 | 51.04 |
Jonathan Coveney commented on PIG-2687:
---------------------------------------
The downside to the approach of renaming within the block is as follows...
{code}
a = load 'thing' as (x:int);
b = foreach (group a all) {
a = distinct a;
generate a;
}
{code}
In this case, you know that the distinct is on the globally scoped a, so the output of the
distinct should be sequential_prefix_a
However, if you did:
{code}
a = load 'thing' as (x:int);
b = foreach a generate x, x as y;
c = foreach (group a all) {
a = distinct a;
b = limit a 100;
generate b;
}
{code}
In this case, the distinct is on the global a, but the limit is not... so you're going to
have to check which variables are defined anyway in order to know whether you need to do the
replacement or not... you only do it if there is a clash. Hmm. Will think about the cleanest
implementation. Ideally I want to avoid a bunch of lookups, but it may be unavoidable, and
this still may be the cleanest way...
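The clash check described in the comment can be sketched outside Pig; the prefix format and function names here are made up for illustration, not Pig's actual internal naming:

```python
def rename_on_clash(global_aliases, nested_aliases, prefix="scope1::"):
    """Rename a nested alias only when it shadows a globally scoped
    alias; otherwise keep it as-is (sketch of the per-alias lookup
    described above)."""
    resolved = {}
    for alias in nested_aliases:
        if alias in global_aliases:
            resolved[alias] = prefix + alias   # clash: qualify the name
        else:
            resolved[alias] = alias            # no clash: leave untouched
    return resolved
```

In the second example above, the nested `a` clashes with the global `a` and gets renamed, while the nested `b = limit a 100` only needs renaming because `b` is also defined globally, which is exactly why a per-alias lookup seems unavoidable.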
> Add relation/operator scoping to Pig
> ------------------------------------
>
> Key: PIG-2687
> URL:
> Project: Pig
> Issue Type: Improvement
> Reporter: Jonathan Coveney
> Priority: Minor
> Fix For: 0.11
>
>
> The idea is to add a real notion of scope that can be used to manage namespace. This
would mean the addition of blocks to pig, probably with some sort of syntax like this...
> {code}
> a = load thing as (x:int, y:int);
> b = foreach a generate x, y, x*y as z;
> {
> a = group b by z;
> b = foreach a generate COUNT(b);
> global b;
> }
> {code}
> which would replace the alias b with the nested b value in the scope. This could also
be used in nested foreach blocks, and macros could just become blocks as well.
> I am 95% sure about how to implement this... I have a failed patch attempt, and need
to study a bit more about how Pig uses its logical operators.
> Any thoughts?
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see:
| http://mail-archives.apache.org/mod_mbox/pig-dev/201205.mbox/%3C858369532.41737.1336522071024.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2018-17 | refinedweb | 351 | 65.46 |
Hi,
I am writing a DLL and want to return a result back to the calling code.
I have tried putting:
public string oSQL2XStreamBridge( string Name)
public class string oMyDll
both of which it does not like...
The code below errors because there is a return statement, and the compiler says you can't have a return statement when the method is set to void. But it will not let me set it to string (as above).
public class oMyDll
{
    public oSQL2XStreamBridge( string Name)
    {
        string ResultMess = "";
        // work code goes here
        return "Test";
    }
}
How do I get the result back to the calling code?
Thanks
| https://www.daniweb.com/programming/software-development/threads/294475/writting-a-dll-and-want-to-return-a-result-back-to-the-calling-code | CC-MAIN-2018-43 | refinedweb | 102 | 79.64 |
Closed Bug 1264085 Opened 4 years ago Closed 4 years ago
shell/warning.js is going to permafail when Gecko 48 merges to Beta
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
mozilla48
People
(Reporter: RyanVM, Assigned: arai)
References
Details
Attachments
(1 file)
[Tracking Requested - why for this release]: SM permafail when Gecko 48 merges to Beta. Any idea what might have changed in RELEASE_BUILD land recently here, Arai? This is burning all SM jobs. ## shell/warning.js: rc = 3, run time = 0.062321 1170716: Add js shell functions to get last warning shell/warning.js:16:1 Error: Assertion failed: got false, expected true Stack: @shell/warning.js:16:1 TEST-UNEXPECTED-FAIL | shell/warning.js | (args: "")
Flags: needinfo?(arai.unmht)
This is a regression from bug 1049041. The warning in String#contains is non-release only. We should use another warning that is enabled on all branches.
Flags: needinfo?(jorendorff)
the code is changed again in bug 1103588, to expression closure. > -let line0 = new Error().lineNumber; > -assertEq("foo".contains("bar"), false); > +eval(`(function() "This is an expression closure.")`); That is also non-release only warning... > template <typename ParseHandler> > bool > Parser<ParseHandler>::warnOnceAboutExprClosure() > { > #ifndef RELEASE_BUILD > JSContext* cx = context->maybeJSContext(); > if (!cx) > return true; > > if (!cx->compartment()->warnedAboutExprClosure) { > if (!report(ParseWarning, false, null(), JSMSG_DEPRECATED_EXPR_CLOSURE)) > return false; > cx->compartment()->warnedAboutExprClosure = true; > } > #endif > return true; > }
Following warning is enabled on all branches, right? Will it be removed or disabled near future? > MSG_DEF(JSMSG_DEPRECATED_BLOCK_SCOPE_FUN_REDECL, 1, JSEXN_NONE, "redeclaration of block-scoped function `{0}' is deprecated")
Assignee: nobody → arai.unmht
Attachment #8740831 - Flags: review?(shu)
(In reply to Tooru Fujisawa [:arai] from comment #3) > Following warning is enabled on all branches, right? > Will it be removed or disabled near future? Looks like it. In any case fixing the build before merge is the main thing...
Flags: needinfo?(jorendorff)
(In reply to Jason Orendorff [:jorendorff] from comment #4) > (In reply to Tooru Fujisawa [:arai] from comment #3) > > Following warning is enabled on all branches, right? > > Will it be removed or disabled near future? > > Looks like it. I meant to say, yes, this warning appears to be enabled on all branches. It won't be removed in the near future. Deprecation is always a long play...
Comment on attachment 8740831 [details] [diff] [review] Use not-non-release-only warning in js/src/tests/shell/warning.js. Review of attachment 8740831 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/tests/shell/warning.js @@ +8,5 @@ > // Warning with JSEXN_NONE. > > enableLastWarning(); > > +eval(`{ function f() {} function f() {} }`); So the goal here is to do *something* that generates a warning? Because this warning around functions-in-block will go away too eventually. Is there a more permanent warning we can use?
Attachment #8740831 - Flags: review?(shu) → review+
Thank you for reviewing :) The testcase is to test getLastWarning shell builtin on JSEXN_NONE and one more non-JSEXN_NONE type. I wanted to test both types, because they use different path for warning.name property: > if (report->exnType == JSEXN_NONE) > nameStr = JS_NewStringCopyZ(cx, "None"); > else > nameStr = GetErrorTypeName(cx->runtime(), report->exnType); So, yes, I want to do something that generates warning messages with JSEXN_NONE there. Then, currently there are following warning message that uses JSEXN_NONE but not related to deprecation. > MSG_DEF(JSMSG_ALREADY_HAS_PRAGMA, 2, JSEXN_NONE, "{0} is being assigned a {1}, but already has one") > MSG_DEF(JSMSG_STMT_AFTER_RETURN, 0, JSEXN_NONE, "unreachable code after return statement") > MSG_DEF(JSMSG_USE_ASM_TYPE_OK, 1, JSEXN_NONE, "Successfully compiled asm.js code ({0})") I have no idea how to hit JSMSG_ALREADY_HAS_PRAGMA in shell. JSMSG_STMT_AFTER_RETURN was the reason why getLastWarning was added (bug 1170716), so I couldn't use it (because getLastWarning was added before JSMSG_STMT_AFTER_RETURN). Maybe I could change the testcase to use JSMSG_STMT_AFTER_RETURN instead, but it might be better to use other warning message (also just for coverage). JSMSG_USE_ASM_TYPE_OK could also be used, but it has no information about line/column... Bug 1264085 - Use not-non-release-only warning in js/src/tests/shell/warning.js. r=shu
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla48
tracking-firefox47: --- → +
| https://bugzilla.mozilla.org/show_bug.cgi?id=1264085 | CC-MAIN-2020-34 | refinedweb | 662 | 51.24 |
Hi,
I have been wrestling with this program all day and am looking for some help.
I have never separated a program up before into separate files (main.cpp, base.h and base.cpp).
What I'm trying to do is call a function from main() and have that function return a pointer to a new object.
Once the pointer has been returned, I'm trying to push it into the std::list.
The problem I've been having is this compile error: illegal call of non-static member function
I have written a simple version of my program which reproduces the same problem:
main.cpp :
Code:
#include <iostream>
#include <list>
#include <string>
#include "base.h"
using namespace std;

int main()
{
    typedef list<Base *> BaseList;
    do {
        char choice;
        cout << "a) Create then add a base class pointer into the list" << endl;
        cin >> choice;
        cin.ignore();
        switch(choice)
        {
            case 'a':
            {
                //Create an object and return a pointer to that object
                //(I will be making many of these objects)
                Base::ReturnPtrToANewBaseClassObject();
                //push the returned pointer onto the std::list
                BaseList.push_back( ptr );
                cout << "Object now pushed onto list" << endl;
                break;
            }
            default:
            {
                cout << "You made an invalid choice" << endl;
                break;
            }
        }
    } while(true);
    system("pause");
}

base.h :
Code:
#ifndef BASE_H
#define BASE_H
#include <iostream>
#include <string>
using namespace std;

class Base
{
public:
    //constructor
    Base(string mystring);
    //Member Functions:
    Base * ReturnPtrToANewBaseClassObject(); //Create an object and return a pointer to it
private:
protected:
    //Data Members common to all objects (abstract base class):
    string _mystring;
};
#endif

base.cpp :
Code:
#pragma once
#include <iostream>
#include <string>
#include "base.h"

//Base class constructor
Base::Base(string mystring)
{
    _mystring = mystring;
}

//Base class function
Base * ReturnPtrToANewBaseClassObject()
{
    //Build an object, and then return a pointer to that object
    //Within main() the pointer will be pushed into the std::list
    string mystring;
    cout << "Value? " << endl;
    getline (cin, mystring);
    //Get a pointer to the new object
    Base *ptr = new Base(mystring);
    //return the pointer to main()
    return (ptr);
}

I have a very similar program working when I use a single main.cpp file, but I want to stop using single files, and this problem has really been holding me back.
Here is what Visual Studio reports when I try to build:
Code:
------ Build started: Project: myproject, Configuration: Debug Win32 ------
Compiling...
main.cpp
c:\main.cpp(22) : error C2352: 'Base::ReturnPtrToANewBaseClassObject' : illegal call of non-static member function
c:\base.h(15) : see declaration of 'Base::ReturnPtrToANewBaseClassObject'
c:\main.cpp(25) : error C2143: syntax error : missing ';' before '.'
c:\main.cpp(25) : error C2143: syntax error : missing ';' before '.'
Build log was saved at ":\Debug\BuildLog.htm"
myproject - 3 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
Hope you guys and gals can help me with this.
Thanks very much!!!
| https://cboard.cprogramming.com/cplusplus-programming/124419-returning-pointers-functions-problem.html | CC-MAIN-2017-51 | refinedweb | 522 | 60.45 |
Reliance Retail owned Reliance Digital stores have come up with three new LYF-branded VoLTE handsets: Flame 2, Water 7, and Wind 4, ahead of Jio's launch in the coming weeks.
Out of the three smartphones, the Flame 2 is the cheapest, offered at a price of Rs. 4,999. It is mainly targeted at the lower-end segment of buyers. On the other hand, the LYF Wind 4 comes with a massive 4,000mAh battery, while the LYF Water 7 is packed with features such as an optical fingerprint sensor that the company touts as making fingerprint detection and verification simpler. Interestingly, the LYF Water 7 is the brand's first smartphone equipped with a fingerprint sensor.
With these phones, Reliance Retail aims to provide a vast range of 4G phones from lower price points to the higher range. It looks like the company's strategy is to encourage mass adoption of high-speed internet, high-definition voice, video calls and value-added services.
Reliance said in a statement, "LYF aims to be a key player in the Indian smartphone market by offering cutting-edge technology and a superior user experience - the true 4G experience comprising HD Voice and Video calling."
Reliance Retail said that the launch of the new devices is part of a Holi celebration offer by LYF. These new models were initially scheduled to be launched in the latter half of March; however, due to the great response from markets, the company decided to go for an early launch. The entire line-up will be available across retail stores over the next few days.
As a part of Holi offer, the company has slashed the pricing of its existing phones. The LYF smartphones that are getting a price slash are Earth 1, Water 1, Water 2, Flame 1 and Wind 6.
Furthermore, the pricing of the Earth 1 has been slashed from Rs 23,990 to Rs 19,399. The device is up for sale with updated pricing at Reliance Digital stores. The phone is equipped with a Snapdragon 615 processor and features a 5.5-inch full HD AMOLED display and 32GB of internal storage.
The import price of the Flame 2 is 2723, so they will probably launch it under 4k. They have already imported more than 20k units of LYF mobiles.
Reliance LYF Earth 1 OEM: manufactured by CK Telecom Corporation, China
Reliance LYF Water 1 OEM: manufactured by CK Telecom Corporation, China
Reliance LYF Water 2 OEM: manufactured by CK Telecom Corporation, China
| https://telecomtalk.info/reliance-retail-owned-lyf-flame-2-water-7-wind-4-smartphones-now-up-for-sale-other-lyf-smartphones-get-a-price-cut/150257/ | CC-MAIN-2020-40 | refinedweb | 410 | 58.82 |
Type: Posts; User: toraj58
hannes, in post #3 you gave me the clue, because when I saw that the solution is not platform-independent (as I expected) I tried to change my approach. So thank you very much for helping me; as always my...
Yes, I have also decided to create a service for this issue.
Thanks.
If I don't want to hide the process, I prefer to have a watcher process for the main process: when the main process gets killed, the watcher process will restart it, and vice versa.
Thanks, both of you...
How do I hide a process from the Windows Task Manager?
I want it to work in XP, Vista and 7.
consider this link:
try it yourself step by step!
Touraj:
Version 2010 (last release)
It's somewhat possible to convert a Win app to a web app with almost all the features by using the ASP.NET AJAX Toolkit for .NET 2.0 or 3.5.
You need to know how to work with the AJAX Toolkit. Basically it is...
private void button1_Click(object sender, EventArgs e)
{
button1.Text = "line1\nline2";
}
thanks for the link.
but i need a solution that i can implement it with both java code and c++.
in my java code i use aspect and i can not invoke c++ dll when i use aspect. so i should write code...
how to get mac address of the system.
my OS is linux fedora.
i have done it already in java but i don't know how to get the MAC address with C++.
How i can get CPU ID.
the OS i use is Fedora 10.
for me, the fact that c++ allows default arguments on function parameters is also a good feature.
for one corner i calculated the coordinates and made the upper left corner round-shaped; you can calculate the coordinates for the other three corners yourself and then make them look rounded just by adding a...
i revised the code a little for you:
using System;
using System.Collections.Generic;
using System.Text;
namespace Recursive
{
you only need some math calculation to get free space in GB:
private void button1_Click(object sender, EventArgs e)
{
ManagementObject disk = new...
also, sometimes asp.net server controls create client-side code; for example, validation controls create both server-side and client-side validation code for security.
in this case the first thing i do is copy and paste the error number and message into google; in most cases i am redirected to a microsoft page that describes the error and ways to fix it, and maybe a workaround of...
session state is server-side and will expire when the user closes the browser.
a better choice is a cookie, but cookies can be disabled by users; however, these days most users have them enabled.
don't look for it, there is no such event. for problems like this you need to use windows APIs or some workaround and creative ways. i have posted a solution for a problem like this for two listbox...
it depends on the kind of synchronization you have used. for example, the performance of ReaderWriterLockSlim is better than ReaderWriterLock because of a bug in ReaderWriterLock. so tweaking your multi...
your application's main thread must run in a single-threaded apartment to interact with the os, so you need to use the [STAThread] attribute on your main method.
yes it is not necessary to use GridView for this problem.
i wrote this code for you:
private void button1_Click(object sender, EventArgs e)
{
string number =...
you should add the flash activex control to your project, then use its properties and methods.
also there are commercial plugins for flash in the web that can do a lot of good job for you.
...
thanks hans
i want to load modules (.JAR files) in tomcat that exist in a folder other than webapps in tomcat.
for example my module resides in a folder such as "/opt/mymodules".
how i should do it and what...
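One common approach (sketched here with an example path; check the class-loader docs for your Tomcat version) is to add the external folder to Tomcat's shared class loader in conf/catalina.properties:

```properties
# conf/catalina.properties (excerpt); /opt/mymodules is just an example path.
# Adds the folder's classes and all of its JARs to Tomcat's shared class
# loader, making them visible to every deployed web application.
shared.loader="/opt/mymodules","/opt/mymodules/*.jar"
```

After editing the file, restart Tomcat so the loader picks up the new entries.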
#include <stdio.h>

int main(void)
{
    int my_array[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int test, count;

    test = sizeof my_array / sizeof my_array[0];
    printf("Size of my array is: %d\n", test);

    for (count = test; count >= 0; count--) {
        printf(" %d ", my_array[count]);
    }
}
2) Also printing in descending order works with '9' but not with 'test'.
3) What would be the simplest/easiest way to order (not just print) the array in descending order.
At the moment the only way i can think of is to create another array of size 'test' and read/write each value of my_array in reverse order.
Thanks in advance.
Edit: I figured out the size of array part, still stuck on sorting.
This post has been edited by bestbat: 21 February 2007 - 08:51 PM
Sorting
You can sort List collections using the java.util.Collections.sort() method. You can sort these two types of List:
- ArrayList
- LinkedList
Sorting Objects by their Natural Order
To sort a
List you do this:
List list = new ArrayList();
// add elements to the list
Collections.sort(list);
When sorting a list like this the elements are ordered according to their "natural order".
For objects to have a natural order they must implement the interface
java.lang.Comparable.
In other words, the objects must be comparable to determine their order.
Here is how the
Comparable interface looks:
public interface Comparable<T> { int compareTo(T o); }
The
compareTo() method should compare this object to another object, returning an
int value. Here are the rules for
that
int value:
- Return a negative value if this object is smaller than the other object
- Return 0 (zero) if this object is equal to the other object.
- Return a positive value if this object is larger than the other object.
There are a few more specific rules to obey in the implementation, but the above is the primary requirements. Check out the JavaDoc for the details.
Let's say you are sorting a
List of
String elements. To sort them, each
string is compared to the others according to some sorting algorithm (not interesting here). Each
string compares itself to another string by alphabetic comparison. So, if a string is less than
another string by alphabetic comparison it will return a negative number from the
compareTo()
method.
When you implement the
compareTo() method in your own classes you will have to decide
how these objects should be compared to each other. For instance,
Employee objects
can be compared by their first name, last name, salary, start year or whatever else you think makes sense.
Sorting Objects Using a Comparator
Sometimes you may want to sort a list according to another order than their natural order.
Perhaps the objects you are sorting do not even have a natural order. In that case you
can use a
Comparator instead. Here is how you sort a list using a
Comparator:
List list = new ArrayList();
// add elements to the list
Comparator comparator = new SomeComparator();
Collections.sort(list, comparator);
Notice how the
Collections.sort() method now takes a
java.util.Comparator as parameter
in addition to the
List. This
Comparator compares the elements in the list
two by two. Here is how the
Comparator interface looks:
public interface Comparator<T> { int compare(T object1, T object2); }
The
compare() method compares two objects to each other and should:
- Return a negative value if object1 is smaller than object2
- Return 0 (zero) if object1 is equal to object2.
- Return a positive value if object1 is larger than object2.
There are a few more requirements to the implementation of the
compare() method, but these
are the primary requirements. Check out the JavaDoc for more specific details.
Here is an example
Comparator that compares two fictive Employee objects:
public class MyComparator implements Comparator<Employee> { public int compare(Employee emp1, Employee emp2){ if(emp1.getSalary() < emp2.getSalary()) return -1; if(emp1.getSalary() == emp2.getSalary()) return 0; return 1; } }
A shorter way to write the comparison would be like this:
public class MyComparator implements Comparator<Employee> { public int compare(Employee emp1, Employee emp2){ return emp1.getSalary() - emp2.getSalary(); } }
By subtracting one salary from the other, the resulting value is automatically either negative, 0 or positive. Smart, right? Just be aware that the subtraction can overflow if the two values are far apart; Integer.compare(emp1.getSalary(), emp2.getSalary()) avoids that problem.
If you want to compare objects by more than one factor, start by comparing by the first factor (e.g first name). Then, if the first factors are equal, compare by the second factor (e.g. last name, or salary) etc.
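As a sketch of that multi-factor pattern, Java 8's comparator combinators chain the factors directly (the Employee class below is a made-up stand-in, not a real API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class SortExample {
    // Made-up Employee class, used only to illustrate multi-factor comparison.
    static class Employee {
        final String firstName;
        final String lastName;
        Employee(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    public static void main(String[] args) {
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee("John", "Smith"));
        employees.add(new Employee("Anna", "Jones"));
        employees.add(new Employee("Anna", "Baker"));

        // Compare by first name first; on ties, fall back to last name.
        Comparator<Employee> byName = Comparator
                .comparing((Employee e) -> e.firstName)
                .thenComparing(e -> e.lastName);

        Collections.sort(employees, byName);

        // Prints: Anna Baker, Anna Jones, John Smith (one per line)
        for (Employee e : employees) {
            System.out.println(e.firstName + " " + e.lastName);
        }
    }
}
```

The two "Anna" entries tie on the first factor and are ordered by the second, which is exactly the fallback behavior described above.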
#include <stdbool.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <rte_memory.h>
#include <rte_mempool.h>
#include <linux/vhost.h>
#include <linux/virtio_ring.h>
#include <linux/virtio_net.h>
Interface to vhost-user
Definition in file rte_vhost.h.
Protocol features.
Definition at line 60 of file rte_vhost.h.
Indicate whether protocol features negotiation is supported.
Definition at line 113 of file rte_vhost.h.
Function prototype for the vhost backend to handle specific vhost user messages.
Definition at line 252 of file rte_vhost.h.
Possible results of the vhost user message handling callbacks
Definition at line 227 of file rte_vhost.h.
Convert guest physical address to host virtual address
This function is deprecated because it is unsafe. The new rte_vhost_va_from_guest_pa() should be used instead, to ensure guest physical ranges are fully and contiguously mapped into the process virtual address space.
Definition at line 312 of file rte_vhost.h.
Convert guest physical address to host virtual address safely
This variant of rte_vhost_gpa_to_vva() ensures that the entire requested length is mapped and contiguous in the process address space.
Definition at line 347 of file rte_vhost.h.
Log the memory write start with given address.
This function only needs to be invoked when live migration starts, so most of the time it is not called at all. To keep the performance impact minimal, it is suggested to guard the call with a check:
if (unlikely(RTE_VHOST_NEED_LOG(features))) rte_vhost_log_write(vid, addr, len);
Log the used ring update start at given offset.
As with rte_vhost_log_write, it is suggested to guard the call with a check:
if (unlikely(RTE_VHOST_NEED_LOG(features))) rte_vhost_log_used_vring(vid, vring_idx, offset, len);
Register vhost driver. path could be different for multiple instance support.
Set the vdpa device id, enforce single connection per socket
Unset the vdpa device id
Get the device id
Set the feature bits the vhost-user driver supports.
Enable vhost-user driver features.
Note that
Disable vhost-user driver features.
The two notes at rte_vhost_driver_enable_features() also apply here.
Get the feature bits before feature negotiation.
Set the protocol feature bits before feature negotiation.
Get the protocol feature bits before feature negotiation.
Get the queue number bits before feature negotiation.
Get the feature bits after negotiation
Start the vhost-user driver.
This function triggers the vhost-user negotiation.
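Taken together, the registration and start functions above are typically used in sequence. A rough pseudocode sketch of a minimal vhost-user backend setup (the socket path and feature mask are placeholders, and error handling is omitted):

```
path = "/tmp/vhost-user.sock"                  # placeholder socket path
rte_vhost_driver_register(path, 0)             # create the vhost-user socket
rte_vhost_driver_set_features(path, features)  # advertise supported feature bits
rte_vhost_driver_callback_register(path, &ops) # new_device / destroy_device callbacks
rte_vhost_driver_start(path)                   # begin vhost-user negotiation
```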
Get the MTU value of the device if set in QEMU.
Get the numa node from which the virtio net device's memory is allocated.
Note this function is deprecated, as it returns a queue pair number, which is vhost specific. Instead, rte_vhost_get_vring_num should be used.
Get the number of vrings the device supports.
Get the virtio net device's ifname, which is the vhost-user socket file path.
Get how many avail entries are left in the queue
This function adds buffers to the virtio device's RX virtqueue. Buffers can be received from the physical port or from another virtual device. A packet count is returned to indicate the number of packets that were successfully added to the RX queue.
This function gets guest buffers from the virtio device's TX virtqueue, constructs host mbufs, copies guest buffer content into them, and stores them in pkts to be processed.
Get guest mem table: a list of memory regions.
An rte_vhost_memory object will be allocated internally to hold the guest memory regions. The application should free it in the destroy_device() callback.
Get guest vring info, including the vring address, vring size, etc.
Get guest inflight vring info, including inflight ring and resubmit list.
Set split inflight descriptor.
This function saves descriptors that have been consumed from the available ring.
Set packed inflight descriptor and get corresponding inflight entry
This function saves descriptors that have been consumed.
Save the head of the list for the last batch of used descriptors.
Update the inflight free_head, used_idx and used_wrap_counter.
This function updates the status first, before marking the descriptors as used.
Clear the split inflight status.
Clear the packed inflight status.
Notify the guest that used descriptors have been added to the vring. This function acts as a memory barrier.
Get vhost RX queue avail count.
Get log base and log size of the vhost device
Get last_avail/used_idx of the vhost virtqueue
Get last_avail/last_used of the vhost virtqueue
This function is designed for reconnection and is specific to the packed ring, as the two parameters can be obtained from the inflight queue region.
Set last_avail/used_idx of the vhost virtqueue
Register external message handling callbacks
Get vdpa device id for vhost device.
Notify the guest that should get virtio configuration space from backend.
Table of Contents
- What are React Hooks?
- Quotes API
- How to Fetch Data from an API with React Hooks
- Conclusion
What are React Hooks?
If you are brand new to React and APIs consider checking out:
Either of the articles can provide a decent introduction to React and the relationship with APIs. Otherwise, you have probably at least heard of React hooks.
Earlier on in React, components were either class components or functional components. Functional components were JavaScript functions that accepted props (data) to display and returned JSX.
Class components typically required more code but could store state variables and could use lifecycle methods. This made class components the go-to option for making API calls, processing user input, etc. Functional components became useful helpers for class components.
The Trouble with Functional Components
The relationship between class and functional components made sense until the moment functional components needed to do things that only class components could do (i.e hold state, respond to lifecycle changes).
Consequently, functional components would be converted to class components or class components were created as wrappers for functional components. This was inconvenient and all changed with React version 16.8.
The Introduction of React Hooks
React hooks were introduced in version 16.8 and are widely accepted. With hooks, class components didn't need to be converted to functional components, and hooks broke nothing in existing class or functional components: hooks are "opt-in". The only requirement was that other React packages, like React DOM, needed to be compatible with version 16.8 of React.
Let’s give a small example of what adding hooks to a functional component looks like.
The below code represents what a simple functional component may look like.
import React from 'react' const Example = (props) => ( <h1>{props.greeting}</h1> ) export default Example
At some point in the progression of the application, let’s say the developers wanted to translate the greeting to a different language. This could require an API call to the translation service and a state variable to hold the new greeting. Those are two things the old functional components could not do.
In the old React, this component would probably be converted to a class component, or a parent class component would be built. Unfortunately, that parent class component may already have a lot of code and a large state. Code is easier to maintain when the functional component itself can become more versatile; with hooks, the component can gain the required functionality directly.
import React from 'react' const Example = (props) => { let [translation, setTranslation] = React.useState('') // state hook React.useEffect(() => { // side effect hook // call API with props.greeting parameter setTranslation(response.data.translation) }, [setTranslation]) return ( <h1>{translation}</h1> // use state in component ) } export default Example
useState is introduced at the top of the function to store the translated text.
useEffect is called when the component mounts and sets the translation variable when the API is done.
useState and useEffect
If you are new to hooks, seeing functions like
useEffect or
useState can seem foreign. However, these will become common to you as they are some of the most commonly used hooks in React.
useState
I have already mentioned that the
useState hook provides a state variable to the functional component. The value inside the function call, an empty string, is the initial value of the variable.
[translation, setTranslation] uses JavaScript's array destructuring to extract the variable
translation and the function to set the variable,
setTranslation.
Notice we are using both the variable for its value, and the function to set the value, in the example above.
useEffect
This hook requires more know-how because it expects an argument of dependencies to be provided with the implementation.
useEffect takes in two arguments.
- Callback function
- array of dependencies
In the example above, the callback function argument is represented by an unimplemented API call. Furthermore, the only dependency that the hook has is the
setTranslation function.
It’s important to provide the dependencies to communicate to the function when, or if, it needs to change. Dependencies can be tricky, and using a linter when running a development server can help you tremendously in the process.
useEffect vs. componentDidMount
componentDidMount became a popular lifecycle function in class components because it was the ideal way to fetch data for the component. This is something that both
useEffect and
componentDidMount have in common.
However,
componentDidMount is only supposed to run at the beginning of the lifecycle and then become dormant.
useEffect runs when the component mounts but also can run at any time when dependencies change. Therefore, if no dependencies are passed to the second argument array, or if the dependencies always change, the function continues to run (sometimes causing an infinite loop).
To have the
useEffect only run once when the component mounts remember to pass an empty array as the second argument (or pass an array with the necessary dependencies).
Learning more about hooks
Hooks can take a while to pick up on, and there are quite a lot of them. Furthermore, developers can create custom hooks, which is a new fun thing to explore in React. However, I would recommend getting the built-in hooks down first.
If you still want to learn more about hooks, I would start with the introduction to hooks or the frequently asked questions about hooks on reactjs.org, and then coming back to build the example application!
Quotes API
Free multilingual API for random famous quotes in many languages.
The Quotes API is a useful API for getting started with using HTTP requests, testing out RapidAPI, and combining those technologies with ReactJS.
The API only has one endpoint, Random Quote, that allows it to be understood quickly.
This, of course, is not the only quote API on the marketplace. Regardless, some of the advantages are:
- Free and unlimited
- Support for many languages
- Return the best quotes curated by users
If you want to see more quote API options check out this link.
How to Fetch Data from an API with React Hooks
Prerequisites
- You’ll need to have Node >= 8.10 and npm >= 5.6 on your machine.
- Familiarity with React components, CSS, and HTML.
- Internet connection.
- A text editor (I am using Visual Studio Code).
- Understanding of how to open a terminal, or command-line on your machine
1. Set Up the Application
The application is bootstrapped with create-react-app. Therefore, we need to open up a new terminal and execute a command that builds a starter React app.
Open up a new terminal and run
npx create-react-app rapidapi-using-react-hooks. The command takes a couple of minutes to execute. Then, change directories into the project and open up your text editor in the project’s root folder.
Using the terminal, this is done with the command
cd rapidapi-using-react-hooks.
In the project’s root, download the HTTP request library Axios.
npm install --save axios
Next, start the app by running
npm start.
In the browser, visit. You should see the following page.
Well done!
2. Sign Up For a Free Account on RapidAPI
To use the Quotes API in the next section with React hooks, you will need an account. Visit RapidAPI to get signed up!
3. Subscribe to the Quotes API
4. Fetch Data with
useEffect
First, remove all the JSX code in the
<header> inside of the file
src/App.js and replace it with the following HTML.
.... <h1> Fetching Data with React Hooks </h1> ....
The application running at should reflect these changes.
Next, it’s time to add our first hook in
App.js.
Add a state variable using the React hook
useState at the top of the function.
.... function App() { let [responseData, setResponseData] = React.useState(''); // new return ( <div className="App"> ....
useEffect
Let’s explore the
useEffect hooks before setting it up to make an API call.
The next step is to add
useEffect to the file and give the variable
responseData a value when the component loads.
.... function App() { let [responseData, setResponseData] = React.useState(''); React.useEffect(() => { setResponseData('hello') console.log(responseData) }, [setResponseData, responseData]) return ( <div className="App"> ....
Inspect the developer console and notice that the variable’s value is set every time we refresh the page and logged to the console.
This is the process that we need to use for the API call.
- State variable is ready to store the response data
- API call happens when the component is mounted
- Response data is saved to the state variable
Add API Call
Earlier, we installed Axios for making HTTP requests. Now, let’s add the API call to
useEffect. Also, we are going to add some JSX code to the component to display the raw response data.
However, we are going to need the RapidAPI key available for the API call. WARNING: The following process does not secure your API key for production and is only used for the sake of this tutorial.
You can read about React app environment variables by following this link.
Create a file
.env in the project root. Inside the file add the code;
REACT_APP_API_KEY=yourapikey
You can get your API key by visiting the API dashboard page.
Restart the application in the terminal so the environment variable can be loaded in.
Next, replace the code inside of
React.useEffect() with,
.... axios({ "method": "GET", "url": "", "headers": { "content-type": "application/octet-stream", "x-rapidapi-host": "quotes15.p.rapidapi.com", "x-rapidapi-key": process.env.REACT_APP_API_KEY }, "params": { "language_code": "en" } }) .then((response) => { setResponseData(response.data) }) .catch((error) => { console.log(error) }) ....
Import Axios at the top of the page with the line
import axios from 'axios'.
The new function above does not use
responseData, therefore, it can be removed as a dependency for the
useEffect function.
Finally, below the
<header> JSX tag, add this code so we can see the response data that is being fetched,
.... <pre> <code> {responseData && JSON.stringify(responseData, null, 4)} </code> </pre> ....
You should now see new response data displayed every time the page loads!
5. Display API Data with React Hooks
The application is not very appealing, despite successfully fetching the response data.
Fortunately, we can see the response data and can use dot-notation to extract the data that we need.
Add the following JSX code below the
<header> tag (you can comment out, or delete, the
<pre> block that we just added).
.... <main> {responseData && <blockquote> "{responseData && responseData.content}" <small>{responseData && responseData.originator && responseData.originator.name}</small> </blockquote> } </main> ....
Now, we just get the
content and
originator data, but what if we want a different quote?
It would be nice to have the API call in a function that we can call, not only in
useEffect, but also from the click of a button.
- Move the API call into a function named
fetchData.
- Call the function in
useEffect
- Update
useEffect‘s dependencies
- Add a button to the JSX code that can call the function
fetchData
Unfortunately, this setup causes the
useEffect hook to continuously be called because of the side effects of the API call in the new
fetchData function.
To stop the infinite loops we utilize the
useCallback hook. This hook returns a memoized callback that only changes if the dependencies of the function change. This hook can be a lot to digest when starting out, so I am not going to explain it in depth. Let’s focus on the two most commons hooks that we have already used.
The final version of
App.js, using three different React hooks to orchestrate the API call, is;
import React from 'react'; import axios from 'axios'; import './App.css'; function App() { let [responseData, setResponseData] = React.useState(''); const fetchData = React.useCallback(() => { axios({ "method": "GET", "url": "", "headers": { "content-type": "application/octet-stream", "x-rapidapi-host": "quotes15.p.rapidapi.com", "x-rapidapi-key": process.env.REACT_APP_API_KEY }, "params": { "language_code": "en" } }) .then((response) => { setResponseData(response.data) }) .catch((error) => { console.log(error) }) }, []) React.useEffect(() => { fetchData() }, [fetchData]) return ( <div className="App"> <header className="App-header"> <h1> Fetching Data with React Hooks </h1> <button type='button' onClick={fetchData}>Click for Data</button> </header> <main> {responseData && <blockquote> "{responseData && responseData.content}" <small>{responseData && responseData.originator && responseData.originator.name}</small> </blockquote> } </main> {/* <pre> <code> {responseData && JSON.stringify(responseData, null, 4)} </code> </pre> */} </div> ); } export default App;
Add CSS
You have probably noticed that the app is still missing some style and is difficult to interact with.
Replace all the code in
App.css with this code,
.App { background-color: #282c34; text-align: left; width: auto; min-height: 100vh; } button { margin: auto; padding: .25rem .75rem; font-size: 1.2rem; cursor: pointer; background: rgb(255, 119, 0); color: white; border: 2px solid rgb(255, 119, 0); border-radius: 5px; -webkit-border-radius: 5px; -moz-border-radius: 5px; -ms-border-radius: 5px; -o-border-radius: 5px; } h1 { margin: 1.2rem auto; font-family: sans-serif; font-size: 3.5rem; } blockquote { border-top: 2px solid rgb(255, 119, 0); border-bottom: 2px solid rgb(255, 119, 0); font-size: 2rem; margin: 5% 8%; font-weight: 200; padding: 2%; } small { display: block; font-size: 0.7em; text-align: right; font-style: italic; margin: 10px; padding: 4px; } .App-header, .App { display: flex; flex-direction: column; align-items: center; justify-content: center; color: white; } .App-link { color: #61dafb; }
With that, the app comes to life!
How to Build an App with Node.js & TypeScript
How to use an API with React Redux
----- Transcript of session follows -----
Executing: /usr/lib/mail/surrcmd/smtpqer -n -B -C -u npvsrv1.NapervilleIL.NCR.COM!ncrhub4!ncrgw1!vger.rutgers.edu!linux-kernel frsos.napervilleil.attgis.com jejb@frsos.napervilleil.attgis.com
smtpqer: Binary contents cannot be sent via SMTP
server "/usr/lib/mail/surrcmd/smtpqer" failed - unknown mailer error 1
----- Unsent message follows -----
>From ncrgw1!vger.rutgers.edu!linux-kernel Wed Jul 26 21:10 EDT 1995 remote from ncrhub4
Received: by ncrhub4.ATTGIS.COM; 26 Jul 95 21:10:01 EDT
Received: by ncrgw1.ATTGIS.COM; 26 Jul 95 21:09:50 EDT
Received: (from majordomo@localhost) by vger.rutgers.edu (8.6.12/8.6.12) id XAA29710 for linux-kernel-digest-outgoing; Sun, 23 Jul 1995 23:53:25 -0400
Date: Sun, 23 Jul 1995 23:53:25 -0400
Message-Id: <199507240353.XAA29710@vger.rutgers.edu>
From: owner-linux-kernel-digest@vger.rutgers.edu
To: linux-kernel-digest@vger.rutgers.edu
Subject: linux-kernel-digest V1 #128
Reply-To: linux-kernel@vger.rutgers.edu
Errors-To: owner-linux-kernel-digest@vger.rutgers.edu
Precedence: bulk
linux-kernel-digest Sunday, 23 July 1995 Volume 01 : Number 128
----------------------------------------------------------------------
From: "Louis-D. Dubeau" <ldd@step.polymtl.ca>
Date: Sat, 22 Jul 1995 12:30:05 -0400
Subject: Re: linux-kernel-digest V1 #123
>>>>> "RJrLm" == Rob Janssen reading Linux mailinglist <linux@pe1chl.ampr.org> writes:
R
------------------------------
From: "Harik A'ttar" <harik@chaos.rutgers.edu>
Date: Fri, 21 Jul 1995 21:31:55 -0400 (EDT)
Subject: Re: 3rd/4th.. IDE ports
On Thu, 20 Jul 1995, mark (m.s.) lord wrote:
> That's how it is being implemented.. command line parms for all
> IDE ports, probing is done only for the primary/secondary at the
> "standard" ports.
Hmm... Where is a list of commandline params? I recently
had to specify the parameters for someone's HD, finally
found the information in the LILO configs. Then, they had
an ethernet card on a strange address (SMC Ultra on 360, it tests
340, 380... not good) And even though I went in all the core
setup files, I couldn't figure out what the heck to send it.
(and when I did, it was ignored.. surprise?)
Changing parameters like addresses w/o a recompile is a _GOOD_ thing,
but it is the devil itself to find an actual list of what commands
are supported.
I remember this being discussed a month or so ago, but that degraded
into another mount root ro or rw debate. ack!
#include <stupid/signature>
chaos@dynamic.ip.don't.reply Guess what? I really _DO_ speak for my
Dan Merillat / Harik A'ttar system. And if you share my opinions,
in00621@pegasus.cc.ucf.edu you should seek professional help.
------------------------------
From: Michael Nelson <mikenel@netcom.com>
Date: Sat, 22 Jul 1995 13:02:45 -0700 (PDT)
Subject: init/main.c
Is there any reason that the 200 lines worth of function definitions and
structure definitions for drivers in init/main.c couldn't be moved
into another source file -- such as init/config.c?
Moreover, what would be really slick (for those who are working on a new
configuration system) is to automatically generate a config.c which would
invoke the init procedures for the appropriate drivers.
I don't have Linux installed (other than the sources) right now -- but if
it is felt to be beneficial -- I might take a stab at it...
- -- Mike
- --
Michael Nelson (mikenel@netcom.com) | Real programmers don't comment their
Rockville, Maryland | code. It was hard to write, it should
BSD/OS, WinNT, OLE2 Development | be hard to understand.
------------------------------
From: Michael Nelson <mikenel@netcom.com>
Date: Sat, 22 Jul 1995 12:55:17 -0700 (PDT)
> lately myself to collect them, but I'd be willing to write a page or
> two. The current configuration doesn't bother me too much. I hate it
This would be great -- especially for fledgling kernel developers... I am
currently in the process of figuring out how the kernel all fits together
- -- and spend most of my time tracking down procedures/macros/etc... and
then deciphering what they mean.
Unfortunately, some of the procedure names are ambiguous, which makes it
hard (at first) to figure out what they do. Sure, they mean something to
current developers -- but they don't mean much to those who haven't really
worked with the kernel.
- -- Mike
- --
Michael Nelson (mikenel@netcom.com) | Real programmers don't comment their
Rockville, Maryland | code. It was hard to write, it should
BSD/OS, WinNT, OLE2 Development | be hard to understand.
------------------------------
From: scott snyder <SNYDER@D0SB10.FNAL.GOV>
Date: Sat, 22 Jul 1995 22:15:10 -0500 (CDT)
Subject: Re: Sanyo CDR C3 G
>I have a Sanyo 3 Disk changer in my Gateway P5-120 machine, 2940
>Controller, ATI Mach64/2Meg VRAM and have been working to get access to
>the drive.
>...
> Is there any way to access the other two
>platters with the current drivers, or will there have to be a specialized
>driver developed for this CDROM Drive ?
With the present driver, i don't think there's any way to make it use
the other platters. I've noticed some cd changer commands in v2 of the
atapi spec, but i haven't looked at it in detail. Since i don't have
access to one of these drives myself, i'm not likely to implement this
any time soon; however, i'd be happy to receive changes to support it
from someone who does have such a device.
sss
------------------------------
From: Michael Nelson <mikenel@netcom.com>
Date: Sat, 22 Jul 1995 16:03:22 -0700 (PDT)
Subject: Kernel questions...
I have a few kernel-related questions...
1) Are there any particular caveats when using kmalloc (i.e. allocs must
be in 4k pages, kmallocs must be < n bytes, etc...)? How does it
differ from vmalloc (pageable memory?)
2) Does task[0] exist as a user or kernel mode process?
3) With regards to executables: Can someone explain brk and bss? I
vaguely understand that the former has something to do with limits on
data segments, but that's it.
4) How do user mode programs allocate memory? I don't have libc handy,
but I haven't noticed that there is any sort of "malloc" system call...
Thanks.
- -- Mike
- --
Michael Nelson (mikenel@netcom.com) | Real programmers don't comment their
Rockville, Maryland | code. It was hard to write, it should
BSD/OS, WinNT, OLE2 Development | be hard to understand.
------------------------------
From: cjs <noone@nowhere.com>
Date: Sat, 22 Jul 1995 12:53:34 -0500 (CDT)
Subject: Re: Benchmarks - 1.3.11
> Hmm, I wouldn't call a 59.77% change 'minute.' In general, I have discounted
> anything that is less than 1% change as being noise. Probably impacted by
If you look again, you'll find that most of the numbers are +/- 1.5%
or less. I don't think it requires a great leap of faith to grasp what
I was trying to say. The 59.77% you cite is the exception, not the
rule.
The big problem is that a lot of the time there are no kernel changes
that would affect what's being timed -- so we have to figure that those
+/- 1.5% are just error margins of the tests and are in no way
significant.
Christopher
------------------------------
From: linux@pe1chl.ampr.org (Rob Janssen reading Linux mailinglist)
Date: Sat, 22 Jul 1995 23:29:25 +0200 (MET DST)
Subject: Re: linux-kernel-digest V1 #124
According to owner-linux-kernel-digest@vger.rutgers.edu:
> From: friebe@xvnews.unconfigured.domain (Bernhard Friebe Student (SV S.N J.L))
> Date: 19 Jul 1995 15:34:58 GMT
> Subject: 16-bit read/write operations on ISA bus
>
> Hi folks,
>
> I'm new to this newsgroup (and Linux), and not sure if this question really belongs here, but anyway here it comes:
>
> I am trying to communicate with a custom pc board with help of the linux provided
> device /dev/port and the open/write/read operations. However these are 8-bit (char) only (that's what I think) and I need to perform 16-bit operations.
>
> I'm aware that there are existing operations like outb, inb, ... for direct ISA-bus
> communication, but unfortunately I have no information which file I have to include
> in my C source or in which files these operations are defined (if I include io.h,
> where a definition of inb, outb, ... is given, the internal functions __outb, __inb, ... are unknown on compilation).
>
> I'm aware that this is a really imprecise question, but I would be very happy if
> anyone could give me some help.
One thing that is unclear to many users of inb/outb/etc is that you need
to compile with -O2 (optimization) to force GCC to expand the inline
functions __outb etc. If you don't, they will be undefined.
(I think there is also a specific flag to enable only this expansion
without doing full optimization; see the GCC docs.)
Rob
- --
+------------------------------------+--------------------------------------+
| Rob Janssen rob@knoware.nl | AMPRnet: rob@pe1chl.ampr.org |
+------------------------------------+--------------------------------------+
------------------------------
From: dholland@husc.harvard.edu
Date: Sat, 22 Jul 1995 18:00:31 -0400 (EDT)
Subject: Re: kernel config
> :> And what format shall this take? It needs to be parsable from
> :> sh using only expr, sed, and awk.
>
> Only if not a single kernel hacker understands lex/yacc well enough to
> write a parser to do the necessary Makefile/header configuration.
> Seriously, why would it need to be parsable from sh at all? Can
> include a flex/bison grammar (pre flexed/bisoned even, though I doubt
> this would at all be necessary, I would guess that flex and bison are
> much more ubiquitous on machines with development tools installed than
> perl would be.) with the rest of the kernel sources, that would get
> built with make config. The sound driver already has a configure
> program written in C.
I'd be happy to write it, and make the syntax as complicated as anyone
wants, but I refuse to do it using lex and yacc.
Since the only free parser generators around are basically yacc, and I
doubt a non-free solution would be welcomed in the kernel, I'll have
to stay out of it except for suggestions and kibitzing.
That said, how about something along the lines of
config file ::= section...
section
::= identifier, doc-string, flag..., '{', config option... '}'
flag
::= "require", identifier
::= "provide", identifier
::= "cflags", quoted string
::= "module"
config option
::= typename, identifier, doc-string, flag..., ';'
typename
::= "bool"
::= "int"
::= "int", constant, "..", constant
doc-string ::= quoted string
where identifier, constant, and quoted string are what you'd expect.
If anyone tries to build this, note that identifiers can't begin with
a digit or you'll get a conflict.
This lets you, for instance, specify that SCSI tape support requires
SCSI support, in the following way:
scsi-adapters "Support for SCSI host adapters (disk/device
controllers). This is required if you want support for other SCSI
devices (hard disks, tapes, cdroms, scanners)."
{
aha1542 "Support for Adaptec 1542 SCSI cards."
provide scsi cflags "-DCONFIG_AHA_1542";
aic7xxx "Support for Adaptec 274x/284x/294x SCSI cards."
provide scsi cflags "-DCONFIG_AIC7XXX";
ABCDE "Support for the A.B.C.D.E. Super SCSI 1998 Edition."
provide scsi cflags "-DCONFIG_ABCDE";
}
scsi-devices "Support for SCSI devices (hard drives, tapes, cdroms,
scanners, etc.) Support for SCSI host adapters is required."
{
tapes "Support SCSI tape drives."
require scsi provide tape cflags "-DCONFIG_SCSI_TAPE";
disks "Support SCSI hard drives."
require scsi provide disk cflags "-DCONFIG_SCSI_DISK";
cdrom "Support SCSI cdroms."
require scsi provide cdrom cflags "-DCONFIG_SCSI_CDROM";
}
etc.
The idea is that you can select the adapters section all you like, but
unless you actually choose a controller, it won't let you turn on any
of the scsi devices.
Hmm. Thinking about it there ought to be an option "suggest". For
instance, since iso9660 support isn't useful without a cdrom, you'd
have "suggest cdrom", and you'd have the cdroms "suggest iso9660".
Presumably the config utility would do something intelligent with
"suggest" operations that were references to things the user hadn't
gotten to yet.
> :> Maybe. I'm not convinced the configuration is that badly
> :> broken. Proper documentation of the sources is more important,
> :> in my opinion. ;-)
>
> Proper documentation would be nice. Man pages for kernel internal
> functions would be really nice. Probably wouldn't take too long to
> have most of the major ones if someone were to volunteer to collect
> the kernel-man pages and ask people to write just one.
There's already somebody who maintains the man-pages package; I'm sure
if people sent him section 9 man pages he'd be willing to include
them. (I don't remember who it is though.)
> I'm too busy lately myself to collect them, but I'd be willing to
> write a page or two.
Same here.
> The current configuration doesn't bother me too much. I hate it
> when I have to start all over or go manually edit the files when I
> flub a single one up though. So I think some improvement could be
> achieved in that area, just don't think it is the most important
> thing happening with linux right now.
Right.
> NSA: plutonium Mossad NSA Panama Croatian Qaddafi domestic disruption
> munitions Noriega jihad Honduras NORAD genetic DES [Hello to all my
> fans in domestic surveillance]
That's not good enough - I'm sure they have better text retrieval
systems than that. :-p
- --
- David A. Holland | Peer pressure (n.): The force from the
dholland@husc.harvard.edu | eyeballs of everybody watching you.
------------------------------
From: miquels@drinkel.ow.org (Miquel van Smoorenburg)
Date: Sat, 22 Jul 1995 14:04:10 +0200 (MET DST)
Subject: Re: Patches for serial console available
In article <199507201321.PAA21549@lrcsun1.epfl.ch>,
Werner Almesberger <almesber@lrc.epfl.ch> wrote:
>Miquel van Smoorenburg wrote:
>> I've hacked up serial console support for Linux 1.3.10.
>> The diffs are pretty big (17k)
>
>Argl. I knew there was a reason why I didn't want to merge my serial
>consoles with the tty driver ;-)
>
>For your amusement, I've attached my 1638 bytes patch (should work with
>most recent kernels).
[patches deleted]
Okay, your patches are smaller. But mine do some things yours don't:
- - supports /dev/console
- - initializes UART speed. I know LILO already lets the BIOS do this at
boot time, but what if you don't boot through LILO ? (bootfloppy)
- - Turns off 16550 FIFO mode before it prints and turns it on
afterwards if needed.
- - Has a nice serial kernel monitor built in :)
Just depends on what you want, really.
Mike.
- --
+ Miquel van Smoorenburg + Cistron Internet Services + Living is a |
| miquels@cistron.nl | Independent Dutch ISP | horizontal |
+ miquels@drinkel.ow.org + + fall +
------------------------------
From: Michael Nelson <mikenel@netcom.com>
Date: Sat, 22 Jul 1995 20:58:01 -0700 (PDT)
Subject: i386 switch_to() question...
(I will be looking for 386 books tomorrow<g>)
[snip]
#define switch_to(tsk) do { \
__asm__("cli\n\t" \
"xchgl %%ecx,_current\n\t" \
"ljmp %0\n\t" \
"sti\n\t" \
"cmpl %%ecx,_last_task_used_math\n\t" \
"jne 1f\n\t" \
"clts\n" \
"1:" \
: /* no output */ \
:"m" (*(((char *)&tsk->tss.tr)-4)), \
"c" (tsk) \
:"cx"); \
[etc...]
I am thoroughly confused as to why the kernel is "ljmp"ing to (tss.tr)-4
(if I am interpreting this correctly).
1) Where is it going?
2) What is "tr"? My first guess was "task register" -- but I am just
confused now.
I have been trying to compare Linux's switch to the 4.4BSD-Lite (BSDI)
switch to try to understand what is going on -- but they are radically
different (BSD's is a lot more involved).
- -- Mike
- --
Michael Nelson (mikenel@netcom.com) | Real programmers don't comment their
Rockville, Maryland | code. It was hard to write, it should
BSD/OS, WinNT, OLE2 Development | be hard to understand.
------------------------------
From: "Harik A'ttar" <harik@chaos.rutgers.edu>
Date: Fri, 21 Jul 1995 21:58:46 -0400 (EDT)
Subject: Re: Benchmarks - 1.3.11
On Thu, 20 Jul 1995, cjs wrote:
> > On Thu, 20 Jul 1995, cjs wrote:
> >
> > > > *******
> > > > Results
> > > > *******
> > > >
> > > > Pipe-based Context Switching Test || 3007.4 -> 4804.9 +59.77%
> > > > Pipe Throughput Test || 18574.4 -> 18774.0 +1.07%
> > > > Execl Throughput Test || 60.9 -> 61.4 +0.82%
> > > [stuff deleted]
> > >
>
> Sounds to me like you are making a good effort to have similar
> testing conditions every time. =)
For a non-official benchmark, that's pretty damn good. Even
cleaner than my compile tests -- I may shut down unneeded processes, but
I don't clean reboot for a compile (even though I _DO_ time it. bah)
>
> >.
Taking the above oft-quoted numbers, pipe-based context switching
went up by ~60%. Changes in other numbers _WILL_ occur, simply because
the kernel has changed. Things are different. The .0x% changes in the
timings would be best described as the secondary effects of the major
patches. Even one extra clause in the task switcher.... (shiver)
how many times would that be invoked? How about tossing an extra
line of code into the tty driver? Or the memory management?
Would that not affect the execution of floating point math? It would,
if only by 1% or so. The 20 (or 10) run testing eliminates most of the
noise (50-100 would be better, but, as he said, it already runs 4 hours)
On the same idea, is it possible that you could nuke the results
of tests that are < 2% (a good number), simply because most changes
of .02% don't affect anyone? Full figures on request or something,
but noise-level changes are just eating bandwidth.
BTW: Earlier tests had a lot of HUGE negative %s, like -50 or -200%
any comments? Changes that big on the - side should have provoked some
reaction :) I'll go hunting through old mail sometime today or
tomorrow to see if I can find what specific things changed.
Anyway, Thanks for the numbers (Hey, I don't have to get locked
out of MY system for 4-5 hours! Thanks!)
chaos@dynamic.ip.don't.reply Guess what? I really _DO_ speak for my
Dan Merillat / Harik A'ttar system. And if you share my opinions,
in00621@pegasus.cc.ucf.edu you should seek professional help.
------------------------------
From: "W. Bryan Thrasher" <thrasher@mindspring.com>
Date: Sun, 23 Jul 1995 10:03:49 -29900
I'm your volunteer. I am relatively new to Linux development, but think
I could handle assembling the man pages. So, to anyone interested in
writing one, please let me know. I will assume that in the case of
developing apps where different people are working on different
functions, there is some existing method for posting items that are
needed and letting people know what's not. Since I haven't been involved
in such a development, I'm not aware of that method. If someone could
fill me in, I'd appreciate it. From my own thoughts, I could post needs,
haves, and completed files on my web page:
()
Thanks,
Bryan Thrasher
------------------------------
From: Jaakko Hyvatti <Jaakko.Hyvatti@>
Date: Sun, 23 Jul 1995 17:09:38 +0200 (EET DST)
Subject: Re: Benchmarks - 1.3.11
Anyone doing this kind of performance analysis should read Jain,
Raj: The art of computer systems performance analysis: techniques for
experimental design, measurement, simulation, and modelling (1991),
chapter 13: Comparing systems using sample data. Or something similar.
`The basic idea is that a definitive statement cannot be made about
the characteristics of all systems, but a probabilistic statement
about the range in which the characteristics of most systems would lie
can be made. The concept of confidence intervals introduced in this
chapter is one of the fundamental concepts that every performance
analyst needs to understand well. In the remainder of this book, most
conclusions drawn from samples are stated in terms of confidence
intervals.' - Raj Jain.
Earlier in the book he mentions some pitfalls of performance analysis,
one of which is not doing enough analysis after collecting data.
I will now take some time to explain what should be done with the
analysis of these kernel benchmarks. Someone with greater
understanding of performance analysis and statistics: please correct
me if I got it wrong.
In 13.4.2 comparing unpaired observations is explained. This is what
we are doing, as we have two sets of nA=nB=20 (or 10) observations
xA[0..19] and xB[0..19] for two different kernel releases A and B.
The steps of the so called t-test are:
1. compute the sample means meanA = sum(xA)/nA and meanB..
2. compute the sample standard deviations:
sA = sqrt((sum(xA**2)-nA*(meanA**2))/(nA-1)) and sB similarly.
3. compute the mean difference meanA-meanB
4. compute the standard deviation of the mean difference:
s = sqrt(sA**2/nA + sB**2/nB)
5. compute the effective number of degrees of freedom: (this gets tricky)
v = (sA**2/nA + sB**2/nB)**2/(sA**4/nA**2/(nA+1) + sB**4/nB**2/(nB+1)) - 2
6. compute the confidence interval for the mean difference.
That is, look at the table for a value for t=t[1-alpha/2,v],
where 1-alpha/2 is 0.95 for 90% confidence level etc.
Then the conf.interval is: (meanA-meanB) +- t*s.
7. If the confidence interval includes zero, the difference is not
significant at 100*(1-alpha) % confidence level. If the zero is
not included, the system with better mean value is better.
The alpha parameter above says how much uncertainty we can tolerate
when we want to see if the difference is significant. 0.1 or 0.05 are
commonly used for 90% and 95% confidence levels, but this does not say
what we should use. I do not know.
Now you only need some t[]-tables. You should find one in any book
of statistics.
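The seven steps above translate almost directly into code. A minimal
Python sketch (the function name and the sample values below are mine,
for illustration; the formulas are exactly as quoted from Jain):

```python
import math

def t_test_unpaired(xa, xb):
    """Steps 1-5 of Jain's t-test for unpaired observations."""
    na, nb = len(xa), len(xb)
    # 1. sample means
    mean_a, mean_b = sum(xa) / na, sum(xb) / nb
    # 2. sample standard deviations
    sa = math.sqrt((sum(x * x for x in xa) - na * mean_a ** 2) / (na - 1))
    sb = math.sqrt((sum(x * x for x in xb) - nb * mean_b ** 2) / (nb - 1))
    # 3. mean difference
    diff = mean_a - mean_b
    # 4. standard deviation of the mean difference
    s = math.sqrt(sa ** 2 / na + sb ** 2 / nb)
    # 5. effective number of degrees of freedom
    v = (sa ** 2 / na + sb ** 2 / nb) ** 2 / (
        sa ** 4 / na ** 2 / (na + 1) + sb ** 4 / nb ** 2 / (nb + 1)
    ) - 2
    return diff, s, v

# Step 6: the confidence interval is diff +/- t[1-alpha/2, v] * s, with t
# looked up in a table; step 7: if the interval includes zero, the
# difference is not significant at the chosen confidence level.
```

As a sanity check, running it on two identical samples gives a mean
difference of exactly zero, so the confidence interval always straddles
zero and no significance can be claimed -- which is what you'd hope.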
Be careful out there, statistics is dangerous.
- --
#Jaakko Hyvätti Jaakko.Hyvatti@ +358 40 5011222
echo 'movl $36,%eax;int $128;movl $0,%ebx;movl $1,%eax;int $128'|as -o/bin/sync
------------------------------
From: Robin Cutshaw <robin@intercore.com>
Date: Sun, 23 Jul 1995 11:09:49 -0400 (EDT)
Subject: Re: Crashing Problems
>
> I am running kernel 1.3.11, and am having system lockup problems. It
I've noticed that after upgrading from 1.2.8 to 1.2.11 that the system
will periodically lock up after running a few hours.
robin
------------------------------
From: Eric Bosch <ecbosch@tyrell.net>
Date: Sun, 23 Jul 1995 11:22:26 -0500 (CDT)
Subject: 1.3.11 Kernel Crashes
As I reported in a previous message, I had been having hard system
crashes while attempting to backup my system. I made some changes in my
AHA2940 controller config, i.e. I disabled disconnection for all devices,
and did successfully get through the backup. However I then ran X and ran
Seyon for a dialout, and upon exit from X, the system froze again. I
feel this may be pointing to 1) a disconnect/reconnect code problem in the
AIC7xxx code, and possibly 2) a memory leak (just a gut feeling) somewhere
in the kernel.
------------------------------
From: dholland@husc.harvard.edu
Date: Sun, 23 Jul 1995 12:47:53 -0400 (EDT)
Subject: Re: Benchmarks - 1.3.11
> >.
Any change can make a small difference in just about anything. Suppose
somebody added three lines of code to the SCSI driver, and this (by
being slightly larger) caused the task-switch code to lie across a
page boundary. Presto! A few more cycles every task-switch, probably,
and perhaps a 1% drop in switching performance.
Don't underestimate the effects of things like that.
- --
- David A. Holland | Peer pressure (n.): The force from the
dholland@husc.harvard.edu | eyeballs of everybody watching you.
------------------------------
From: Linus Torvalds <Linus.Torvalds@cs.Helsinki.FI>
Date: Sun, 23 Jul 1995 19:59:45 +0300
Subject: Re: goto
Tommy Thorn: "goto" (Jul 21, 16:22):
> Linus Torvalds wrote/ecrit/skrev:
> | + *
> | + * The goto is "interesting".
> | *
> ......
> | + cli();
> | + switch (current->state) {
> | + case TASK_INTERRUPTIBLE:
> | + if (current->signal & ~current->blocked)
> | + goto makerunnable;
> | + timeout = current->timeout;
> | + if (timeout && (timeout <= jiffies)) {
> | + current->timeout = 0;
> | + timeout = 0;
> | + makerunnable:
> | + current->state = TASK_RUNNING;
> | + break;
> | + }
> | + default:
> | + del_from_runqueue(current);
> | + case TASK_RUNNING:
> | + }
>
> I suppose this is the Linus Torvalds version of Fermat's Last Theorem :-)
> (Leaving people wondering "why" for hundreds of years...)
Nothing
------------------------------
From: "Gregory L. Galloway" <gregg@localhost.gtri.gatech.edu>
Date: Sun, 23 Jul 1995 13:07:47 -0400
Subject: Null derefence in 1.3.9
I switched from 1.2.8 to 1.3.9 to get support for my GCD-R540 ATAPI CDROM.
The next day my machine locked up twice with the following message:
Jul 21 19:52:40 mu-shu kernel: Unable to handle kernel NULL pointer dereference
at virtual address c0000000
Jul 21 19:52:40 mu-shu kernel: current->tss.cr3 = 0040c000, nr3 = 0040c000
Jul 21 19:52:40 mu-shu kernel: *pde = 00102067
Jul 21 19:52:40 mu-shu kernel: *pte = 00000027
Jul 21 19:52:40 mu-shu kernel: Oops: 0000
Jul 21 19:52:40 mu-shu kernel: EIP: 0010:00112e23
Jul 21 19:52:40 mu-shu kernel: EFLAGS: 00013202
Jul 21 19:52:40 mu-shu kernel: eax: 00000000 ebx: 00000004 ecx: 001cda2c e
dx: 0000b000
Jul 21 19:52:40 mu-shu kernel: esi: 003bf2e8 edi: 00286000 ebp: 0026efa0 e
sp: 0026ef90
Jul 21 19:52:40 mu-shu kernel: ds: 0018 es: 0018 fs: 002b gs: 002b ss: 0
018
Jul 21 19:52:40 mu-shu kernel: Process X (pid: 2908, process nr: 14, stackpage=0
026e000)
Jul 21 19:52:40 mu-shu kernel: Stack: 0026efbc 003bf2e8 003bf2e4 0025cb10 bffff8
2c 0010d3ce 00000000 0026efbc
Jul 21 19:52:40 mu-shu kernel: 0010c116 00000000 0026efbc 00383900 000000
04 00222474 003bf2e8 003bf2e4
Jul 21 19:52:40 mu-shu kernel: bffff82c 00143570 ffff002b 0010002b 001000
2b 0000002b fffffffe 0013db47
Jul 21 19:52:40 mu-shu kernel: Call Trace: 0010d3ce 0010c116 00143570 0013db47
Jul 21 19:52:40 mu-shu kernel: Code: 00 00 19 c0 83 c1 08 01 db 75 d6 fa ff 05 a
4 13 1b 00 a1 a4
Jul 21 19:52:40 mu-shu kernel: Aiee, killing interrupt handler
Here is the EIP info from vmlinux:
001128b4 t _timer_bh
00112954 T _tqueue_bh
001129a4 T _immediate_bh
001129f4 t _do_timer
00112e94 T _sys_alarm
00112ee4 T _sys_getpid
00112f04 T _sys_getppid
00112f24 T _sys_getuid
00112f44 T _sys_geteuid
More information can be provided if needed,
Greg
- ----
Gregory L. Galloway E-mail: greg.galloway@gtri.gatech.edu
Research Scientist I Mail: Georgia Institute of Technology
GTRI / EOEML / Baker 247
Voice: +1 404 853-3076 Atlanta, Georgia 30332-0834
------------------------------
From: Bernie Doehner <bad@ee.WPI.EDU>
Date: Sun, 23 Jul 1995 12:25:15 -0500 (EDT)
Subject: Console/getty handling bug
Since I upgraded from kernel 1.2.3 to 1.2.11, I noticed a bug
with the way getty's (I use agetty) are handled. I run two Linux systems:
One is a 386 with 5MB of RAM and NO monitor (my ax25 router and nfs print/
fileserver) The other is a 486 notebook with 8MB of RAM (my "user" machine).
Since the 386 is tight on memory and has no monitor, I disabled all getty's
from running on it (not even one on the serial port).
With kernel version 1.2.11 and beyond (I even tried 1.3.11), whenever NO
gettys are running on the system and one logs in via the ethernet network,
"init" ends up running all the time and grabs
roughly 80-90% of system cpu time, even while the system is idle.
To make sure this wasn't related to something on my
386, I also disabled all gettys on my 486 and logged in via the network
and noticed the same "init" behavior. This strange init behavior
immediately goes away when I edit /etc/inittab, and kill -1 1 to start at
least one getty.
I definitely did NOT see this happen in kernel 1.2.3 and before, and I don't
even know where to try to find the source. I can tell you what is not
the problem. Here is a summary of some of the software differences between the
two machines:
386: 486:
- -3c503 network card -maxtech pcmcia (ne2000 compat) network card
- -Kernel 1.2.11 with Alan Cox's -straight kernel 1.2.11 without ANY patches
ax25028 patches
- -No modules of any kind -pcmcia card services and modules
From information in /usr/adm/messages it seems that whenever a command is
executed, init starts forking uncontrollably.
I even disabled all of my nfs and other network stuff to see whether any of
the network stuff was causing this, and it had NO effect on init's behavior.
Here are the /usr/adm/messages and top outputs for the case when the 386
has been idle for a while and has NO gettys of any kind running:
Jul 23 10:17:23 zeus syslogd: restart
Jul 23 10:17:24 zeus kernel: Kernel logging (proc) started.
Jul 23 10:17:24 zeus kernel: Console: mono EGA+ 80x25, 1 virtual console (max 63)
Jul 23 10:17:24 zeus kernel: Calibrating delay loop.. ok - 6.50 BogoMips
Jul 23 10:17:24 zeus kernel: Serial driver version 4.11 with no serial options enabled
Jul 23 10:17:24 zeus kernel: tty00 at 0x03f8 (irq = 4) is a 16450
Jul 23 10:17:24 zeus kernel: tty01 at 0x02f8 (irq = 3) is a 16450
Jul 23 10:17:24 zeus kernel: lp1 at 0x0378, using polling driver
Jul 23 10:17:24 zeus kernel: hda: WDC AC1210F, 202MB w/64KB Cache, CHS=989/12/35, MaxMult=16
Jul 23 10:17:24 zeus kernel: ide0: primary interface on irq 14
Jul 23 10:17:24 zeus kernel: Floppy drive(s): fd0 is 1.44M, fd1 is 1.2M
Jul 23 10:17:24 zeus kernel: FDC 0 is a 8272A
Jul 23 10:17:24 zeus kernel: Memory: 4240k/5376k available (572k kernel code, 384k reserved, 180k data)
Jul 23 10:17:24 zeus kernel: Swansea University Computer Society NET3.019
Jul 23 10:17:24 zeus kernel: GW4PTS AX.25 for Linux. Version 0.25 ALPHA for Linux NET3.016 (Linux 1.1.51)
Jul 23 10:17:24 zeus kernel: Portions (c) Copyright 1984 University Of British Columbia
Jul 23 10:17:24 zeus kernel: Portions (c) Copyright 1990 The Regents of the University Of California
Jul 23 10:17:24 zeus kernel: Swansea University Computer Society TCP/IP for NET3.019
Jul 23 10:17:24 zeus kernel: IP Protocols: ICMP, UDP, TCP
Jul 23 10:17:24 zeus kernel: PPP: version 0.2.7 (4 channels) NEW_TTY_DRIVERS OPTIMIZE_FLAGS
Jul 23 10:17:24 zeus kernel: TCP compression code copyright 1989 Regents of the University of California
Jul 23 10:17:24 zeus kernel: PPP line discipline registered.
Jul 23 10:17:24 zeus kernel: SLIP: version 0.8.3-NET3.019-NEWTTY (4 channels) (6 bit encapsulation enabled)
Jul 23 10:17:24 zeus kernel: AX25: KISS encapsulation enabled
Jul 23 10:17:24 zeus kernel: 3c503.c:v1.10 9/23/93 Donald Becker (becker@cesdis.gsfc.nasa.gov)
Jul 23 10:17:24 zeus kernel: eth0: 3c503 at 0x300, 02 60 8c 3e 5c d3
Jul 23 10:17:24 zeus kernel: eth0: 3C503 with shared memory at 0xdc000-0xddfff,
Jul 23 10:17:24 zeus kernel: Checking 386/387 coupling... Ok, fpu using old IRQ13 error reporting
Jul 23 10:17:24 zeus kernel: Checking 'hlt' instruction... Ok.
Jul 23 10:17:24 zeus kernel: Linux version 1.2.11 (root@flint) (gcc version 2.6.3) #1 Fri Jul 21 14:43:17 EDT 1995
Jul 23 10:17:24 zeus kernel: Partition check:
Jul 23 10:17:24 zeus kernel: hda: hda1 hda2
Jul 23 10:17:24 zeus kernel: VFS: Mounted root (ext2 filesystem) readonly.
Jul 23 10:17:24 zeus kernel: Adding Swap: 8604k swap-space
Jul 23 10:17:28 zeus sendmail[57]: starting daemon (8.6.12): SMTP+queueing@00:10:00
Jul 23 10:17:29 zeus init[1]: No more processes left in this runlevel
Jul 23 10:17:45 zeus last message repeated 482 times
Jul 23 10:17:45 zeus in.telnetd[62]: connect from flint.nu1s.ampr.org
Jul 23 10:17:45 zeus init[1]: No more processes left in this runlevel
Jul 23 10:17:49 zeus last message repeated 84 times
Jul 23 10:17:49 zeus login: ROOT LOGIN ON ttyp0 FROM flint.nu1s.ampr.org
Jul 23 10:17:49 zeus init[1]: No more processes left in this runlevel
Jul 23 10:18:20 zeus last message repeated 733 times
Jul 23 10:19:21 zeus last message repeated 1581 times
Jul 23 10:20:22 zeus last message repeated 1581 times
Jul 23 10:21:22 zeus last message repeated 1581 times
Jul 23 10:22:23 zeus last message repeated 1568 times
Jul 23 10:23:23 zeus last message repeated 1556 times
Jul 23 10:24:23 zeus last message repeated 1556 times
Jul 23 10:25:23 zeus last message repeated 1556 times
Jul 23 10:26:23 zeus last message repeated 1556 times
Jul 23 10:27:23 zeus last message repeated 1556 times
Jul 23 10:28:23 zeus last message repeated 1554 times
Jul 23 10:29:23 zeus last message repeated 1543 times
Jul 23 10:30:23 zeus last message repeated 1483 times
Jul 23 10:31:23 zeus last message repeated 1505 times
Jul 23 10:32:23 zeus last message repeated 1503 times
Jul 23 10:33:23 zeus last message repeated 1506 times
Jul 23 10:34:23 zeus last message repeated 1506 times
Jul 23 10:35:24 zeus last message repeated 1506 times
Jul 23 10:36:24 zeus last message repeated 1504 times
Jul 23 10:37:24 zeus last message repeated 1504 times
Jul 23 10:38:24 zeus last message repeated 1502 times
Jul 23 10:39:24 zeus last message repeated 1504 times
Jul 23 10:40:24 zeus last message repeated 1504 times
Jul 23 10:41:24 zeus last message repeated 1504 times
Jul 23 10:42:24 zeus last message repeated 1504 times
Jul 23 10:43:24 zeus last message repeated 1504 times
Jul 23 10:44:24 zeus last message repeated 1504 times
Jul 23 10:45:24 zeus last message repeated 1504 times
Jul 23 10:46:24 zeus last message repeated 1504 times
10:56am up 39 min, 1 user, load average: 1.16, 1.11, 0.97
15 processes: 13 sleeping, 2 running, 0 zombie, 0 stopped
CPU states: 16.2% user, 0.0% nice, 94.5% system, 0.0% idle
Mem: 4236K av, 4108K used, 128K free, 2408K shrd, 1580K buff
Swap: 8604K av, 0K used, 8604K free
PID USER PRI NI SIZE RES SHRD STAT %CPU %MEM TIME COMMAND
1 root 30 0 55 236 304 S 83.0 5.5 33:53 init auto
77 root 21 0 156 408 428 R 17.1 9.6 0:00 top
38 root 6 0 65 272 332 R 10.5 6.4 4:45 /usr/sbin/syslogd
62 root 1 0 90 312 356 S 0.0 7.3 0:00 in.telnetd
63 root 1 0 348 520 484 S 0.0 12.2 0:01 -tcsh
57 root 1 0 255 440 456 S 0.0 10.3 0:00 sendmail: accepting
6 root 1 0 4 8 0 S 0.0 0.1 0:00 /sbin/bdflushd
7 root 1 0 4 8 0 S 0.0 0.1 0:00 /sbin/updated
40 root 1 0 40 236 312 S 0.0 5.5 0:00 /usr/sbin/klogd
42 bin 1 0 84 248 312 S 0.0 5.8 0:00 /usr/sbin/rpc.portmap
44 root 1 0 76 268 320 S 0.0 6.3 0:00 /usr/sbin/inetd
46 root 1 0 64 232 296 S 0.0 5.4 0:00 /usr/sbin/lpd
49 root 1 0 100 340 400 S 0.0 8.0 0:00 /usr/sbin/rpc.mountd
51 root 1 0 116 344 404 S 0.0 8.1 0:00 /usr/sbin/rpc.nfsd
53 root 1 0 96 264 356 S 0.0 6.2 0:00 /etc/rpc.pcnfsd /var
I'd be interested to see if anyone else can duplicate the above behavior.
Please also let me know if you need additional information.
Regards,
Bernie Doehner
bad@ee.wpi.edu
------------------------------
End of linux-kernel-digest V1 #128
**********************************
SSH client for network devices built on ssh2-python
Project description
ssh2net
Library focusing on connecting to and communicating with network devices via SSH. Built on ssh2-python which provides bindings to libssh2.
ssh2net is focused on being lightweight and pluggable so that it should be flexible enough to be adapted to handle connecting to, sending commands, and reading output from most network devices.
Platforms
In theory ssh2net should be able to connect to lots of different network devices. At the moment the following devices are included in the "functional" tests and should be pretty reliable:
- Cisco IOS-XE (tested on: 16.04.01)
- Cisco NX-OS (tested on: 9.2.4)
- Juniper JunOS (tested on: 17.3R2.10)
- Cisco IOS-XR (tested on: 6.5.3)
I would like to add functional tests for:
- Arista EOS (currently blocked - at least for password based auth via keyboard_interactive)
Any additional platforms would likely not be included in the "core" platform (and therefore functional testing). Additional platforms could be considered, however a pre-requisite for additional platforms would be the capability to create vrnetlab containers for that platform.
As for platforms to run ssh2net on -- it has been and will be tested on MacOS and Ubuntu regularly and should work on any POSIX system. It has never been tested on Windows, but I don't see any reason it should not work; however, I have no plans on supporting Windows as I don't have the access or desire to do so.
Platform Drivers
SSH2Net supports "core" and "community" platform drivers. This is similar to a "device_type" in Netmiko, for example. The intent of a "driver" is to handle device specific operations, to include privilege escalation and deescalation. The "core" drivers will support Cisco IOS-XE, NX-OS and IOS-XR, Juniper JunOS, and hopefully/eventually Arista EOS. Community drivers can be merged in for other platforms but will not be tested or supported officially.
Example IOS-XE driver setup:
    from ssh2net import SSH2Net
    from ssh2net.core.cisco_iosxe.driver import IOSXEDriver

    my_device = {"setup_host": "1.2.3.4", "auth_user": "person", "auth_password": "password"}
    with SSH2Net(**my_device) as conn:
        driver = IOSXEDriver(conn)
Once the driver is set up, "netmiko-like" operations are supported:
    version = driver.send_command("show version")
    print(version[0])
    results = driver.send_config_set(["interface loopback123", "description ssh2net was here"])
    print(results)
The major caveat here is that SSH2Net returns a LIST of results (hence the "version[0]" above) as all operations support passing lists of commands.
Platform Regex
The comms_prompt_regex is perhaps the most important argument to getting SSH2Net working.
The "base" pattern is:
"^[a-z0-9.\-@()/:]{1,20}[#>$]$"
This pattern works for (tested on show commands only, but should work on config commands for at least IOS-XE, and NX-OS) IOS-XE, NX-OS, JunOS, and IOS-XR.
If you do not wish to match cisco "config" level prompts you can use:
"^[a-z0-9.-@]{1,20}[#>$]$"
If you use a platform driver, the base prompt is set in the driver so you don't really need to worry about this!
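To see what the base pattern accepts and rejects, here is a short
self-contained check (the prompt strings are made up for illustration;
note the pattern as quoted is lowercase-only, so case-insensitive
matching would have to be enabled separately):

```python
import re

# The base comms_prompt_regex pattern quoted above.
PROMPT_PATTERN = re.compile(r"^[a-z0-9.\-@()/:]{1,20}[#>$]$")

# Typical privileged, config-mode, and JunOS-style prompts all match:
for prompt in ("router1#", "switch(config)#", "user@host>"):
    assert PROMPT_PATTERN.match(prompt)

# More than 20 characters before the terminator, or uppercase letters,
# fall outside the repetition bound / character class and do not match:
assert PROMPT_PATTERN.match("a-very-long-hostname-here#") is None
assert PROMPT_PATTERN.match("Router1#") is None
```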
Installation
You should be able to pip install it "normally":
pip install ssh2net
To install from this repositories master branch:
pip install git+
To install from source:
git clone <repository URL>
cd ssh2net
python setup.py install
Examples
- Basic "native" SSH2Net operations
- Basic "driver" SSH2Net operations
- Basic "ConnectHandler" (i.e. Netmiko) SSH2Net operations
- Setting session and channel logging
- Using SSH Key for authentication
- Question: Why build this? Netmiko exists, Paramiko exists, Ansible exists, etc...?
- Answer: To learn and build hopefully a really cool thing!
- Question: Is this better than Netmiko/Paramiko/Ansible?
- Answer: Nope! It is different though! The main focus is just to be stupid fast. It is very much that. It should be super reliable too as the timeouts are very easy/obvious to control, but that said it for sure has not been tested thoroughly with latent devices.
- Question: Do I have to use the platform drivers?
- Answer: No -- use the Netmiko-like driver experience -OR- write your own driver, as this has been built with the thought of being easily extended.
- Other questions? Ask away!
Linting and Testing
Linting
This project uses black for auto-formatting. In addition to black, tox will execute pylama and pydocstyle for linting purposes. I have begun playing with adding type hinting and testing this with mypy, however I've not added this to tox at this point. I've also added docstring linting with darglint, which has been quite handy!
All commits to this repository will trigger a GitHub action which runs tox, but of course it's nicer to just run that before making a commit to ensure that it will pass all tests!
Testing
I broke testing into two main categories -- unit and functional. Unit is what you would expect -- unit testing the code. Functional testing connects to virtual devices in order to more accurately test the code.
Unit Tests
Unit tests can be executed via pytest or using the following make command:
make test_unit
This will also print out a coverage report as well as create an HTML coverage report. The long term goal would be >=75% coverage with unit tests, and more if possible of course! Right now that number is more like >=50%.
So far, basic functional tests exist for Cisco IOS-XE and Cisco NX-OS; these use the CSR1000v and Nexus 9000v virtual platforms respectively. After creating the image(s) that you wish to test, rename the image to the following format:
ssh2net[PLATFORM]
The docker-compose file here will be looking for the container images matching this pattern, so this is an important bit! The container image names should be:
ssh2netiosxe
ssh2netnxos
ssh2netiosxr
ssh2netjunos
You can tag the image names on creation (following the vrnetlab readme docs), or create a new tag once the image is built:
docker tag [TAG OF IMAGE CREATED] ssh2netnxos
Password: VR-netlab9
Once the container(s) are ready, you can use the make commands to execute tests as needed:
- test_functional will execute all currently implemented functional tests
- test_all will execute all currently implemented functional tests as well as the unit tests
- unit_junos will execute all unit tests and the junos functional tests
Note - the functional tests test the "native" SSH2Net functionality, but as of now do not test the "driver" functionality (i.e. it does not test anything in ssh2net/core/).
https://pypi.org/project/ssh2net/2019.10.7/
I've found that I'm slowly digging my own grave, because the deeper my folder structure gets, the further back I must go each time I need to import a module from my root component:
import {ComponentsModule, SharedModule} from '../../../../../../shared';
import {ComponentsModule, SharedModule} from 'src/app/shared';
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"*": [
"*"
]
},
"declaration": false,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"lib": ["es6", "dom"],
"mapRoot": "./",
"module": "es6",
"moduleResolution": "node",
"outDir": "../dist/out-tsc",
"sourceMap": true,
"target": "es5",
"typeRoots": [
"../node_modules/@types"
]
}
}
ERROR in Entry module not found: Error: Recursion in resolving
ERROR in multi main Module not found: Error: Recursion in resolving
It seems this could be done with the paths and baseUrl compiler options from tsconfig.json.
Here's an example from docs:
{ "compilerOptions": { "baseUrl": ".", "paths": { "*": [ "*", "generated/*" ] } } }
This tells the compiler for any module import that matches the pattern "*" (i.e. all values), to look in two locations:
- "*": meaning the same name unchanged, so map => \
- "generated*" meaning the module name with an appended prefix “generated”, so map => \generated\
Should be available in TypeScript 1.9+
I just tried it: adding "baseUrl": "." allows me to import {...} from 'app/shared' anywhere in my app (my tsconfig.json is in the src\ folder). Compiles without any errors and works with Angular CLI.
https://codedump.io/share/tsjnTE9Ibf4D/1/angular-2-can-you-import-something-from-the-project-root-instead-of-having-to-go-back-using-
Is there a way to check if a string is a date, regardless of OS regional config, without using ParseDate?
Dates can have too many forms to do this easily. There is a huge difference between SQL dates (2016-12-30) and dates from a mail (1 Jul 2005 13:55:47 -0000). How do you expect your check to work?
Right. If you are looking for a particular form of date, you could use a regular expression, but otherwise, if it doesn’t conform to the regional settings, you would have to fake it and could never be 100% certain, or even close if other languages are involved.
the user input in my app would be as follows
dd/mm/yyyy
yyyy/mm/dd
dd-mm-yyyy
yyyy-mm-dd
I want to check if the entered data is a valid date.
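For comparison with other languages, those four layouts can be validated with a round of strptime attempts. Here is a minimal Python sketch (not Xojo; the function name is invented for illustration). Note strptime also rejects impossible dates such as 31/02/2016:

```python
from datetime import datetime

# The four layouts listed above: day-first or year-first, slash or dash
FORMATS = ("%d/%m/%Y", "%Y/%m/%d", "%d-%m-%Y", "%Y-%m-%d")

def is_valid_date(text):
    """Return True if text parses as a real date in any accepted layout."""
    for fmt in FORMATS:
        try:
            datetime.strptime(text, fmt)
            return True
        except ValueError:
            continue  # wrong layout or impossible date; try the next format
    return False

print(is_valid_date("30/12/2016"))  # True
print(is_valid_date("31/02/2016"))  # False: February has no 31st
```

Like the discussion below points out, this only checks structure and calendar validity against the formats you decide to accept; it cannot resolve the day-versus-month ambiguity of inputs like 10/05/2016.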
I feel we already went through that.
Let us say you encounter 10/05/2016
How do you know that 10/5/2016 is October 5, or May 10 ?
Of course, if day > 12 then it is easy to spot day versus month. Otherwise, you will never know.
Given that, you can very well verify that the structure complies with certain criteria:
- Only one pair of digits can be > 12
- Years can be within a valid period
Passing that, you can indeed infer that a structure xx/xx/xxxx or xx-xx-xxxx or xxxx-xx-xx or xxxx/xx/xx is a date.
If you are certain there will not be US style dates, then you can verify that the second pair of digits does not exceed 12, that no pair equals zero…
It is not terribly difficult to do.
OR… is it an invalid date input? Hmmmmmm?
All that can say is that the format is “valid” … but valid for the US? Or valid for certain non-US locales?
You cannot say it contains a “correct” date.
[quote=306260:@Dave S]OR… is it an invalid date input? Hmmmmmm?
All that can say is that the format is “valid” … but valid for the US? or valid for certain non-US locales?
you cannot say it contains a “correct” date[/quote]
[quote=306255:@Michel Bujardet]Pass that, you can indeed infer a structure xx/xx/xxxx or xx-xx-xxxx or xxxx-xx-xx or xx/xx/xxxxx is a date.
[/quote]
Never said “correct”.
I know in advance whether a number is a day, a month or a year, because I use three text fields to enter the data. Based on that, I want to know if the resulting string is a valid date. i.e. user input of 31/02/2016 or 2016/31/02 or 31-02-2016 or 2016-31-02 would not be correct, because February does not have 31 days. I want this type of validation regardless of OS regional config, without using ParseDate.
What is your aversion to PARSEDATE?
If the user inputs a valid string "30/12/2016" and the system is configured as yyyy/mm/dd, even though it is a valid entry, ParseDate would throw an error… I don't want to be system dependent.
System locale is the only way you can be sure that 03/12/2016 is March 12th and not December 3rd.
If ParseDate errors, tell the user they put the date in wrong.
I also remember that this topic came up for iOS, where ParseDate simply does not exist.
What you do is parse the input, and use the data to create a new date object.
Dim d As New Xojo.Core.Date(2015, 8, 1, Xojo.Core.TimeZone.Current)
If you create a xojo.core.date with for instance February 31, 2015, the resulting date will be March 3, 2015. So when you check the entered date versus the created date, they won’t be the same.
In theory at least there should not be a reason. If the user enters a date on their machine, it is reasonable to assume you should be able to parse it and a failure would suggest an invalid date.
Having said that, I can’t use it!!! I live in Switzerland, where things are different - we have four national languages and the politics of choosing a company language can be difficult. I once worked in a company where if you sent an email in German to a French-speaking area they just behaved as if it never happened!!!
To avoid such issues many companies solve the problem by configuring all machines to use a neutral language - US English. They then add local keyboard drivers and, depending on the company, they may or may not configure date, time, currency etc… But there is still a rub - French and Italian Swiss formats are not exactly the same as French and Italian formats… So are you dealing with an Italian PC configured to the Swiss or the Italian standard?
The result is that ParseDate is about as useful as a chocolate teapot to me. We normally provide user parameters that allow them to overwrite our default identification of the machines and of course internally everything is done in UTC.
in this link you can see the IsDate function in VB
—code from that link—
Dim firstDate, secondDate As Date
Dim timeOnly, dateAndTime, noDate As String
Dim dateCheck As Boolean
firstDate = CDate("February 12, 1969")
secondDate = #2/12/1969#
timeOnly = "3:45 PM"
dateAndTime = "March 15, 1981 10:22 AM"
noDate = "Hello"
dateCheck = IsDate(firstDate)
dateCheck = IsDate(secondDate)
dateCheck = IsDate(timeOnly)
dateCheck = IsDate(dateAndTime)
dateCheck = IsDate(noDate)
In the previous example, IsDate returns True on the first four calls and False on the last one
—end—
I think we need that powerful and simple function in Xojo.
Maybe we could all create an open-source function called IsDate that works the same as in VB? (Although perhaps it is a very crazy idea.)
Nicols, you may not notice, but we are trying to help here, and have understood a while ago what you are after.
I hate to break it to you, but my previous post was precisely an attempt to show you how to do the same thing.
Either go back to VB, or try to learn how to do things in Xojo, like building your own IsDate. It would be nice if, instead of constantly crying after what VB does great, you simply reacted to others trying to help.
@Michel Bujardet, I really appreciate your help, but I think you misunderstood what I am trying to achieve.
I want to have a peaceful conversation about Xojo. I love Xojo, and I want to contribute to the Xojo community. I saw that questions about dates are very common, so creating an IsDate function for the community would be much appreciated by a lot of programmers, including me.
As for what you say about me crying about what VB does great, I think that's not the case. Although a tool like Xojo can be great, it can have its weaknesses, and accepting constructive criticism is not bad; on the contrary, it forces us all to not accept everything as it comes. Rather, it pushes us to constantly improve. I avoid being conformist and also avoid being a blind fan of any technology, since almost everything is perfectible and objectionable, especially in the field of programming. I'm not crying, just comparing tools without blind fanaticism.
Use a different way to get an always valid date from the user. Say display a calendar or three PopupMenus or
Doing that will allow you to be sure the user given date is 100% correct.
If you get 3 text fields with, presumably, day, month & year, then create a new date object with those values.
Then read out the values that it ends up with and see if they match.
If the person enters mm/dd/yyyy (or however you have the text fields set up) like 31/02/2016 and you put that into a new Date you won't get back 31/02/2016.
Function IsValidDate(year as integer, month as integer, day as integer) as boolean
  dim tmp as New Date(year, month, day)
  return (tmp.Year = year) and (tmp.Month = month) and (tmp.Day = day)
end function
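The round-trip idea above ports to other languages as well. In Python, for instance, the date constructor raises on impossible values instead of rolling them over, so the three-field check collapses to a try/except (a comparison sketch only; the function name is invented):

```python
from datetime import date

def is_valid_ymd(year, month, day):
    # datetime.date refuses to construct impossible dates outright,
    # so there is no rolled-over value to compare against
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

print(is_valid_ymd(2016, 2, 29))  # True: 2016 is a leap year
print(is_valid_ymd(2016, 2, 31))  # False: February has no 31st
```

Either way, the principle is the same: let the date type itself be the authority on calendar validity rather than re-implementing leap-year rules by hand.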
If what you want is to get that feature, then simply file a feature request, and hope for Xojo to create it one of these days.
What you don’t get is that I was, plain honest, trying to guide you on the path to create a validation method. Sorry, I was mistaken.
Edit : I see that Norman gave you the fish to eat.
https://forum.xojo.com/t/date-question/35205
Definition at line 68 of file TS3WebFile.h.
#include <TS3WebFile.h>
Construct a TS3WebFile object.
The path argument is a URL of one of the following forms:
For files hosted by Google Storage, use the following forms:
The 'as3' scheme is accepted for backwards compatibility but its usage is deprecated.
The recommended way to create an instance of this class is through TFile::Open, for instance:
The specified scheme (i.e. s3, s3http, s3https, ...) determines the underlying transport protocol to use for downloading the file contents, namely HTTP or HTTPS. The 's3', 's3https', 'gs' and 'gshttps' schemes imply using HTTPS as the transport protocol. The 's3http', 'as3' and 'gshttp' schemes imply using HTTP as the transport protocol.
The 'options' argument can contain 'NOPROXY' if you want to bypass the HTTP proxy when retrieving this file's contents. As for any TWebFile-derived object, the URL of the web proxy can be specified by setting an environmental variable. If this variable is set, we ask that proxy to route our HTTP(S) requests to the file server.
In addition, you can also use the 'options' argument to provide the access key and secret key to be used for authentication purposes for this file by using a string of the form "AUTH=myAccessKey:mySecretKey". This may be useful to open several files hosted by different providers in the same program/macro, where the environmental variables solution is not convenient (see below). If you need to specify several options, separate them by ' ' (blank), for instance: "NOPROXY AUTH=F38XYZABCDeFgH4D0E1F:V+frt4re7J1euSNFnmaf8wwmI4AAAE7kzxZ/TTM+"
Examples:
If there is no authentication information in the 'options' argument (i.e. not AUTH="....") the values of the environmental variables S3_ACCESS_KEY and S3_SECRET_KEY (if set) are expected to contain the access key id and the secret access key, respectively. You have been provided with these credentials by your S3 service provider.
If neither the AUTH information is provided in the 'options' argument nor the environmental variables are set, we try to open the file without providing any authentication information to the server. This is useful when the file has an access control setting that allows any unidentified user to read the file.
Definition at line 150 of file TS3WebFile.cxx.
Definition at line 93 of file TS3WebFile.h.
Definition at line 105 of file TS3WebFile.h.
Definition at line 105 of file TS3WebFile.h.
Definition at line 96 of file TS3WebFile.h.
Definition at line 98 of file TS3WebFile.h.
Sets the access and secret keys from the environmental variables, if they are both set.
Sets the security session token if it is given.
Definition at line 364 285 of file TS3WebFile.cxx.
Definition at line 99 of file TS3WebFile.h.
Definition at line 97 of file TS3WebFile.h.
Definition at line 100 of file TS3WebFile.h.
Reimplemented from TFile.
Definition at line 105 of file TS3WebFile.h.
Extracts the S3 authentication key pair (access key and secret key) from the options.
The authentication credentials can be specified in the options provided to the constructor of this class as a string containing: "AUTH=<access key>:<secret key>" and can include other options, for instance "NOPROXY" for not using the HTTP proxy for accessing this file's contents. For instance: "NOPROXY AUTH=F38XYZABCDeFgHiJkLm:V+frt4re7J1euSNFnmaf8wwmI401234E7kzxZ/TTM+" A security token may be given by the TOKEN option, in order to allow the use of a temporary key pair.
Definition at line 253 of file TS3WebFile.cxx.
This method is called by the super-class TWebFile when an HTTP header for this file is retrieved.
We scan the 'Server' header to detect the type of S3 server this file is hosted on and to determine if it is known to support multi-range HTTP GET requests. Some S3 servers (for instance Amazon's) do not support that feature, and when they receive a multi-range request they send back the whole file contents. For this class, if the server does not support multi-range requests we issue multiple single-range requests instead.
Reimplemented from TWebFile.
Definition at line 344 of file TS3WebFile.cxx.
Definition at line 309 of file TS3WebFile.cxx.
Definition at line 83 299 of file TS3WebFile.cxx.
Definition at line 84 of file TS3WebFile.h.
Definition at line 105 of file TS3WebFile.h.
Definition at line 87 of file TS3WebFile.h.
Definition at line 88 of file TS3WebFile.h.
https://root.cern/doc/master/classTS3WebFile.html
This tutorial explains the thread life cycle in Java with examples. This is the second article in the Java Concurrency Series, with the first article, on Java MultiThreading Basics & How-to create/run Threads, covering the basics of multithreading in Java. In this tutorial we will start by looking at the Java thread lifecycle diagram. We will then look at the individual thread states in detail to understand the state information they encapsulate and how transitions happen between these states. Lastly, we will take a look at a code example showing how a thread moves through its states, understand the logic of the program via a sequence diagram, and then understand how the code works.
Java Thread Life Cycle
Let us start by getting a high level understanding of the 6 thread states in Java with the diagram shown next –
Above diagram shows how a typical thread moves through the different stages of its life cycle. Let us look at these states one-by-one and understand their place in the life of a Java thread –
- New – A newly created thread object instance on which the start() method has not yet been invoked is in the new state. To learn how to instantiate threads in the proper way, check out the tutorial explaining how to create a Thread in Java.
- Runnable – A thread in new state enters the runnable state when the Thread.start() method is invoked on it. There are 2 important points to note regarding the runnable state –
- Firstly, although the thread enters the runnable state, it may not actually be executing yet; it may be waiting for the thread scheduler to allocate processor time to it.
- Secondly, a thread in runnable state may run for some time and then get blocked for a monitor lock, or enter the waiting/timed_waiting states as it waits for the opportunity/time to enter the runnable state again.
- Blocked – A running thread may enter the blocked state as it waits for a monitor lock to be freed. It may also be blocked as it waits to reenter a monitor lock after being asked to wait using the Thread.wait() method.
- Waiting – A thread enters the waiting state when it is made to wait for a go-ahead signal to proceed. The go-ahead in this case is given by another thread and can be given in the following 3 scenarios –
- Thread waiting due to the Thread.wait() method being called on it: The other thread can use Thread.notify() or Thread.notifyAll() to give the go-ahead to the waiting thread.
- Thread waiting as it itself has asked for joining another thread using Thread.join(): The waiting thread gets a go-ahead when the thread it is waiting for ends.
- Thread waiting due to the LockSupport.park() method being invoked on it: The waiting thread resumes when LockSupport.unpark() is called with the parked thread object as the parameter.
- Timed_Waiting – A thread which is waiting as it has been specifically 'instructed' to wait for a specified waiting time is in a timed_waiting state. A thread can be made to wait for a pre-determined amount of time in the following ways –
- Thread made to wait using the Thread.sleep() method.
- Threads being asked to wait for a permit for a specified amount of time using the LockSupport.parkNanos() and LockSupport.parkUntil() methods.
- Threads being made to wait for a fixed amount of time using Thread.wait(long millis) or Thread.join(long millis, int nanos).
- Terminated – A thread enters its 'final resting' state or terminated state when it has finished executing the logic specified in its run() method.
In-built Java Enum Constants for thread states
To provide a standard naming and reference for individual thread states, Java language designers have defined an enum named java.lang.Thread.State which has the following constants defined (each one named after the thread state it refers to) –
Thread.State.BLOCKED
Thread.State.NEW
Thread.State.RUNNABLE
Thread.State.TERMINATED
Thread.State.TIMED_WAITING
Thread.State.WAITING
Java example showing thread states in action
The Java example below shows how a typical thread moves through the various life cycle states. A sequence diagram showing interaction between threads and detailed explanation of the code follows.
//ThreadStates.java
package com.javabrahman.corejava.threads;

public class ThreadStates {
    public static void main(String args[]) {
        //Creating an instance of BasicThread
        Thread threadInstance = new Thread(new BasicThread());
        threadInstance.start();
        System.out.println("BasicThread State: " + threadInstance.getState());
        try {
            boolean keepRunning = true;
            int count = 1;
            while (keepRunning) {
                Thread.sleep(2000);
                System.out.println(count * 2 + " Seconds elapsed - BasicThread State: " + threadInstance.getState());
                count++;
                if (count == 4) { //6 seconds elapsed
                    synchronized (threadInstance) {
                        threadInstance.notify();
                    }
                }
                if (Thread.State.TERMINATED == threadInstance.getState()) {
                    keepRunning = false;
                }
            }
        } catch (InterruptedException iException) {
            iException.printStackTrace();
        }
    }
}

//BasicThread.java
package com.javabrahman.corejava.threads;

public class BasicThread implements Runnable {
    @Override
    public void run() {
        Thread thread = Thread.currentThread();
        try {
            //Making the thread sleep for 5 seconds
            System.out.println("Basic thread to sleep for 5 seconds");
            thread.sleep(5000);
            synchronized (thread) {
                thread.wait();
            }
        } catch (InterruptedException iException) {
            iException.printStackTrace();
        }
    }
}
Basic thread to sleep for 5 seconds
2 Seconds elapsed – BasicThread State: TIMED_WAITING
4 Seconds elapsed – BasicThread State: TIMED_WAITING
6 Seconds elapsed – BasicThread State: WAITING
8 Seconds elapsed – BasicThread State: TERMINATED
To better understand how BasicThread.java is moving through different thread states, let us first take a look at a sequence diagram showing the interactions between the above two programs –
- ThreadStates.java is the class with the main() method in the above example. It instantiates a BasicThread.java instance, named threadInstance, and then calls start() on threadInstance to start the execution of the parallel thread. The state of BasicThread is printed at this point as RUNNABLE.
- ThreadStates then gets into an infinite while loop with a boolean flag variable named keepRunning. At the beginning of the loop the thread goes to sleep for 2000 milliseconds or 2 seconds. At the same time BasicThread too goes to sleep for 5000 milliseconds.
- In every iteration of the while loop, at a gap of every 2 seconds, BasicThread's state is printed. During the 1st iteration (2 seconds) and 2nd iteration (4 seconds) elapsed times, BasicThread is printed as being in state TIMED_WAITING as it is sleeping for 5 seconds.
- When 5 seconds elapse BasicThread wakes up, and it immediately asks itself to wait using the Thread.wait() method. The state of BasicThread is printed at this point as WAITING.
- Then, in the 3rd iteration, i.e. when 6 seconds elapse, ThreadStates invokes notify() on BasicThread, which then starts executing.
- In the 4th iteration (8 seconds) ThreadStates checks BasicThread's state. The state is now TERMINATED. The flag keepRunning is now set to false. ThreadStates exits the while loop and ends its processing.
Summary
In the above tutorial, 2nd in the Java Concurrency Series, we understood the lifecycle of a thread, looked at the different thread states, and saw how a thread moves through these states during its life. In the next tutorial in this series we will take a look at thread characteristics and how they can be used to control the execution of threads as required.
https://www.javabrahman.com/corejava/understanding-thread-life-cycle-thread-states-in-java-tutorial-with-examples/
Display collection_check_box for Has_and_belongs_to_many association
Hi, I want to display a list of check boxes, below is my code,
<%= f.collection_check_boxes :job_ids, @current_user.jobs.all, :id, :job_name do |b| %>
  <div class="collection-check-box">
    <%= b.check_box %>
    <%= b.label %>
  </div>
<% end %>
It works well and displays a list of job names which can be selected by the user. However, I want to display more information about the job: not just the job_name, but the job_type as well. How would I achieve that? I want it to be displayed in the check box like this -> System Engineer, Full Time
Hi Lee,
In the first example, you'll see they made a function to return whatever they'd like to display:
def name_with_initial
  "#{ first_name.first }. #{ last_name }"
end
So in your case, you could do something like:
def name_with_type
  "#{ job_name } - #{ job_type }"
end
Then update your view like so:
<%= f.collection_check_boxes :job_ids, @current_user.jobs.all, :id, :name_with_type do |b| %>
  <div class="collection-check-box">
    <%= b.check_box %>
    <%= b.label %>
  </div>
<% end %>
I haven't tested this, so you may have to play with it some.
https://gorails.com/forum/display-collection_check_box-for-has_and_belongs_to_many-association
Chatlog 2012-02-08
From RDF Working Group Wiki
See panel, original RRSAgent log or preview nicely formatted version.
Please justify/explain non-obvious edits to this page, in your "edit summary" text.
15:56:36 <RRSAgent> RRSAgent has joined #rdf-wg 15:56:36 <RRSAgent> logging to 15:56:38 <trackbot> RRSAgent, make logs world 15:56:40 <trackbot> Zakim, this will be 73394 15:56:40 <Zakim> ok, trackbot; I see SW_RDFWG()11:00AM scheduled to start in 4 minutes 15:56:41 <trackbot> Meeting: RDF Working Group Teleconference 15:56:41 <trackbot> Date: 08 February 2012 15:57:08 <AndyS> zakim, this is 73394 15:57:08 <Zakim> ok, AndyS; that matches SW_RDFWG()11:00AM 15:58:55 <ivan> zakim, dial ivan-voip 15:58:55 <Zakim> ok, ivan; the call is being made 15:58:56 <Zakim> +Ivan 15:58:57 <swh> swh has joined #rdf-wg 15:59:36 <Zakim> +Guus 15:59:48 <gavinc> gavinc has joined #rdf-wg 15:59:49 <yvesr> Zakim, who is on the phone? 15:59:49 <Zakim> On the phone I see ??P5, Ivan, Guus 15:59:54 <yvesr> Zakim, ??P5 is me 15:59:54 <Zakim> +yvesr; got it 15:59:56 <Zakim> +Peter_Patel-Schneider 16:00:10 <Arnaud> Arnaud has joined #rdf-wg 16:00:12 <pfps> pfps has joined #rdf-wg 16:00:34 <Zakim> +OpenLink_Software 16:00:38 <Zakim> + +33.9.54.07.aaaa 16:00:40 <MacTed> Zakim, OpenLink_Software is temporarily me 16:00:40 <Zakim> +MacTed; got it 16:00:41 <MacTed> Zakim, mute me 16:00:41 <Zakim> MacTed should now be muted 16:00:43 <Zakim> +??P13 16:00:53 <AZ> zakim, +33.9.54.07.aaaa is me 16:00:53 <Zakim> +AZ; got it 16:00:54 <NickH> Zakim, ??P13 is me 16:00:54 <Zakim> +NickH; got it 16:00:59 <Zakim> + +1.707.861.aabb 16:00:59 <AZ> zakim, mute me 16:01:01 <Zakim> AZ should now be muted 16:01:01 <Zakim> + +1.408.996.aacc 16:01:02 <NickH> Zakim, mute me 16:01:03 <Zakim> NickH should now be muted 16:01:10 <gavinc> Zakim, aabb is me 16:01:10 <Zakim> +gavinc; got it 16:01:13 <AlexHall> AlexHall has joined #rdf-wg 16:01:20 <Zakim> + +1.443.212.aadd 16:01:31 <AlexHall> zakim, aadd is me 16:01:31 <Zakim> +AlexHall; got it 16:01:33 <Zakim> +Sandro 16:01:45 <sandro> sandro has changed the topic to: 8 Feb -- 16:01:47 <Guus> zakim, who is here? 
16:01:48 <Zakim> On the phone I see yvesr, Ivan, Guus, Peter_Patel-Schneider, MacTed (muted), AZ (muted), NickH (muted), gavinc, +1.408.996.aacc, AlexHall, Sandro 16:01:55 <Zakim> On IRC I see AlexHall, pfps, Arnaud, gavinc, swh, RRSAgent, AZ, ScottB, Zakim, LeeF, AndyS, Guus, MacTed, mischat, ivan, danbri, SteveH, yvesr, davidwood, mdmdm, manu, trackbot, 16:02:02 <Zakim> ... manu1, NickH, sandro, ericP 16:02:04 <Zakim> +??P17 16:02:09 <swh> Zakim, ??P17 is me 16:02:10 <pchampin> pchampin has joined #rdf-wg 16:02:20 <Zakim> +swh; got it 16:02:24 <Zakim> +Tony 16:02:42 <ScottB> Zakim, Tony is me 16:02:42 <Zakim> +ScottB; got it 16:03:46 <Zakim> +??P24 16:03:51 <AndyS> zakim, ??P24 is me 16:03:51 <Zakim> +AndyS; got it 16:04:02 <Zakim> +davidwood 16:04:15 <Zakim> + +44.117.230.aaee 16:04:23 <danbri> zakim, +44.117.230.aaee is danbri 16:04:25 <Zakim> +danbri; got it 16:05:27 <danbri> (we've got a breather...) 16:05:43 <swh> scribe: swh 16:05:48 <swh> scribenick: swh 16:06:29 <pfps> minutes look fine 16:06:35 <swh> PROPOSED: accept minuites of last week 16:06:53 <zwu2> zwu2 has joined #rdf-wg 16:06:55 <swh> RESOLVED 16:06:57 <zwu2> zakim, code? 16:06:57 <Zakim> the conference code is 73394 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), zwu2 16:07:07 <swh> Pending review items 16:07:14 <swh> close ACTION-136 16:07:14 <trackbot> Sorry... adding notes to ACTION-166 failed, please let sysreq know about it 16:07:29 <Zakim> + +1.650.265.aaff 16:07:35 <FabGandon> FabGandon has joined #rdf-wg 16:07:38 <zwu2> zakim, +1.650.265.aaff is me 16:07:38 <Zakim> +zwu2; got it 16:09:10 <Zakim> +[Sophia] 16:10:53 <Zakim> +EricP 16:11:47 <Zakim> -AndyS 16:12:15 <Zakim> +[IPcaller] 16:12:16 <danbri> (was I audible?) 
16:12:21 <AndyS> zakim, IPCaller is me
16:12:22 <Zakim> +AndyS; got it
16:13:15 <Zakim> + +1.707.318.aagg
16:13:25 <cgreer> cgreer has joined #rdf-wg
16:13:55 <swh> One more review since last week
16:14:49 <swh> AlexHall: there's a lot of cleanup in XSD around defn's of lexical and value spaces, and mapping. RDF doesn't say anything about them. Just refers to them. No action needed.
16:14:59 <Zakim> +??P37
16:15:10 <swh> … one change that needs discussion is distinction between identity and equality
16:15:11 <sandro> zakim, who is talking?
16:15:14 <Guus> zakim, who is talking?
16:15:22 <Zakim> sandro, listening for 10 seconds I heard sound from the following: Guus (14%), AlexHall (19%), AndyS (13%)
16:15:33 <Zakim> Guus, listening for 10 seconds I heard sound from the following: Guus (9%), swh (5%), AlexHall (94%)
16:16:00 <swh> … has implications around entailment - for eg. +0 and -0 are distinct under XSD 1.1, but were equiv under 1.0
16:16:20 <swh> … NaN has implications for SPARQL, but not RDF
16:16:30 <Zakim> -AndyS
16:16:55 <swh> … we might need to write some text in the semantics document to make this clear
16:17:00 <Zakim> +??P24
16:17:04 <AndyS> zakim, P24 is me
16:17:04 <Zakim> sorry, AndyS, I do not recognize a party named 'P24'
16:17:11 <AndyS> zakim, ??P24 is me
16:17:11 <Zakim> +AndyS; got it
16:18:04 <swh> … should probably include duration as well as the datetime etc. datatypes in the types that are good for use with RDF
16:18:17 <swh> Guus: we should raise that as an issue
16:18:29 <swh> AlexHall: there's already an older issue
16:18:44 <AlexHall>
16:19:46 <swh> Guus: should add that review to ISSUE-66
16:20:36 <swh> Turtle
16:20:56 <swh> Guus: things came up this week, from Ivan w.r.t. multiline comments
16:21:20 <swh> gavinc: read through it, not had time to write an email
16:21:35 <swh> … comment that we should do it like python is a problem because python doesn't
16:21:45 <swh> … have multiline comments
16:21:59 <Souri> Souri has joined #rdf-wg
16:22:00 <sandro> ? CSS has multiline comments.
16:22:05 <swh> … CSS only has single line comments, I don't see it as major problem
16:22:07 <sandro> ???
16:22:16 <Zakim> +Souri
16:22:17 <Zakim> -zwu2
16:22:31 <swh> Danny Ayres had a comment
16:22:47 <danbri> re CSS .... /* I thought it \n did */
16:22:47 <swh> gavinc: yes, CSS does have multiline comments
16:23:13 <swh> … we could do what python does, but seems like a large change
16:23:30 <swh> Guus: ask gavinc or ericP to respond to Danny
16:23:36 <swh> ericP: do we need a descision?
16:23:45 <sandro> "similar to those in the C programming language"
16:23:52 <swh> … we want to keep alignment with SPARQL etc.
16:23:54 <ivan> q+
16:24:22 <gavinc> sandro, Yeah, I have have no idea where my brain was
16:24:32 <swh> ericP: I feel like I have lack of authority
16:24:48 <swh> Guus: might be best to raise an issue
16:24:50 <sandro> Yeah, I think this needs a WG resolution.
16:25:27 <AndyS> ack ivan
16:25:45 <swh> ivan: was a discussion danny raised on swig, not formally raised on this group, Ivan just drew groups attention to it
16:25:58 <Zakim> + +1.603.438.aahh
16:26:21 <zwu2> zakim, +1.603.438.aahh is me
16:26:21 <Zakim> +zwu2; got it
16:26:22 <swh> Guus: I think it would be in the spirit to regard this as a comment
16:26:42 <swh> … ericP and gavinc, please take an action
16:27:01 <swh> ACTION: ericP to repsond to multiline comments comment of Danny Ayres
16:27:01 <trackbot> Created ACTION-142 - Repsond to multiline comments comment of Danny Ayres [on Eric Prud'hommeaux - due 2012-02-15].
16:27:22 <swh> Guus: 2nd issue, raised by Alex, on local name escapes
16:27:48 <swh> AlexHall: one issue is a typo… [noise]
16:28:18 <swh> … double \ was shown as introducing a char escape squence
16:28:30 <swh> … the other I was confused by appearance of % escape encoding in local part
16:28:44 <swh> … want clarification that they're not treated as escapes
16:29:02 <swh> … Andy confirmed for SPARQL, but not Turtle editors
16:29:37 <AlexHall> rdf:foo%20bar
16:29:38 <swh> ericP: you concern is whether I can say %68 and have it be equivalent to the unencoded version
16:29:51 <gavinc> on comments, Python still doesn't have multi line comments, nor Perl, and Ruby has really really funky multi line comment
16:29:59 <AndyS> f-o-o-%-2-0-b-a-r
16:30:16 <swh> AlexHall: if I include a % in the localname, is it equiv. to the unescaped version
16:30:23 <swh> ericP: [writing example]
16:30:31 <ericP> my:foob%61ar == my:foobar ?
16:31:16 <swh> ericP: I believe that the % has to stay in there - can't take %s out and have equivalence
16:31:19 <sandro> I think you mean: my:foob%61r -- my:foobar
16:31:52 <swh> AlexHall: I'm worried that people might thing it gets deescaped
16:32:23 <swh> ericP: ok, we need a bit of text saying you're not intended to unescape during processing
16:32:46 <swh> gavinc: possibly just reference the RFC doc
16:33:08 <Guus> q?
16:33:10 <swh> §5.3.1- simple string comparison
16:35:09 <ericP> ACTION: ericP to propose text to say that %nn is *NOT* unescaped while parsing Turtle
16:35:09 <trackbot> Created ACTION-143 - Propose text to say that %nn is *NOT* unescaped while parsing Turtle [on Eric Prud'hommeaux - due 2012-02-15].
16:35:58 <swh> gavinc: do want to look at publishing a new draft of Turtle soon
16:36:03 <Zakim> - +1.707.318.aagg
16:36:09 <ivan> +1 to gavinc
16:36:12 <swh> … we have Turtle in HTML, grammar has changed,
16:36:49 <swh> ericP: were there any changes we made without consensus
16:37:01 <swh> Guus: may need formal review if WG wants
16:37:48 <swh> Topic: named graphs
16:37:59 <swh> Guus: want to talk a bit about exchanging data
16:38:14 <swh> … message from AndyS
16:38:32 <AndyS> I don't think we have consensus
16:38:37 <swh> Guus: AndyS, do you think we reached consensus
16:39:09 <swh> AndyS: I brought it up because it was a priority before
16:39:29 <swh> … I think Pat and I are agreed about whether it would be best if we published the smenatics of the 4th column
16:39:42 <swh> … I think we'd like to see it published, but need back compat
16:40:26 <swh> AndyS: if you look at dbpedia they have a 4th col, but I don't know what it means, they're not complying to a published method, but it is a usecase
16:40:53 <swh> Guus: so, you can't use the 4th column as an IRI to fetch the triples
16:40:58 <swh> AndyS: no
16:41:04 <swh> … it's got a hash on the end
16:41:11 <swh> … it's quite profile, and it exists
16:41:11 <MacTed> Zakim, unmute me
16:41:11 <Zakim> MacTed should no longer be muted
16:41:21 <Guus> q?
16:42:11 <swh> Guus: what is the relationship between the 4th col and the triple?
16:42:19 <swh> MacTed: I don't know, trying to get answer
16:43:05 <swh> ACTION: MacTed to investigate what the relationship is, and document it
16:43:05 <trackbot> Created ACTION-144 - Investigate what the relationship is, and document it [on Ted Thibodeau - due 2012-02-15].
16:43:17 <swh> AndyS: the consequence is that quads are not just an internal issues
16:43:24 <swh> ivan: I don't understant
16:43:34 <swh> AndyS: I phrased the usecase as being about TriG
16:43:53 <swh> … it's from the extraction project, not the running service
16:46:08 <swh> ACTION-144: relationship between 4th col and triple
16:46:08 <trackbot> ACTION-144 Investigate what the relationship is, and document it notes added
16:46:26 <swh> Guus: usecase discussion between sandro and AndyS
16:46:43 <davidwood> q+ to ask about the details of the use case
16:46:53 <swh> … to illustrate how you go about identifying time-varying gboxes
16:47:01 <davidwood> q-
16:47:07 <swh> … AndyS, sandro, do you think your desgins are the same
16:47:23 <swh> sandro: we're talking about the same pattern
16:47:38 <swh> Guus: that makes it worth exploring in more detail
16:47:50 <swh> … in general I'l like to explore more solution designs and apply to usecase
16:48:07 <Guus>
16:48:40 <swh> Guus: we started exploring sulution designs
16:48:50 <swh> … try to give a natural language explanation of what it means
16:48:58 <swh> … so non RDF geeks can understand
16:49:04 <swh> … started writing down examples
16:49:17 <swh> … my question to WG is, is this kind of approach useful?
16:49:17 <Zakim> -AZ
16:49:33 <AndyS> Useful to explore this pattern
16:49:58 <swh> … in the 3rd usecase sandro has a statement about static graph container, those appear in AndyS's solution too
16:50:07 <swh> … do we want to define that, and if so, in what namespace
16:50:12 <ivan> q+
16:50:17 <swh> no moee namespaces please
16:50:23 <sandro> +1
16:50:26 <swh> Guus: is it useful to look at these designs?
16:50:36 <AndyS> q?
16:50:37 <sandro> +1 on looking at solutions in more detail
16:50:41 <swh> +1
16:50:59 <pchampin> +1
16:51:01 <Zakim> +AZ
16:51:02 <swh> ivan: we certainly have to move on and look at possibilities
16:51:04 <AZ> zakim, mute me
16:51:04 <Zakim> AZ should now be muted
16:51:31 <swh> … what I don't fully understand is that the usecase means the semantics of GET, but does it mean that you would have other semantics?
16:51:49 <swh> Guus: how do we do the page structuring? I thought one page per solution
16:52:07 <swh> … tried to come up with nat lang description of TriG RESR
16:52:14 <swh> s/RESR/REST/
16:52:30 <MacTed> AndyS - do you have a sample fourth-column value from those DBpedia dumps?
16:52:33 <swh> sandro: at a tutorial level the text description is ok
16:52:45 <swh> ivan: I would prefer to see all on one page
16:52:50 <swh> Guus: no problem
16:52:54 <MacTed> (even better, a full row or two from any of those N-quad files)
16:52:55 <sandro>
16:53:02 <AndyS> MacTed - the load page has samples IIRC
16:53:12 <swh> sandro: I was doing the same thing
16:53:28 <AndyS> ... not sure if they are representative enough.
16:54:00 <swh> sandro: my impression right now is that most of the group is not following this closely enough to make descisions about it
16:54:26 <swh> Guus: I prefer to see people owning solutions
16:54:47 <swh> sandro: don't want to get too emotionally involved
16:55:07 <ericP> scribenick: ericP
16:55:08 <Zakim> -swh
16:55:28 <ericP> Guus: happy to write more examples for this solution
16:56:13 <ericP> … i'll try to stay impartial in my defense of this solution
16:56:36 <sandro>
16:56:38 <ericP> … and there's the equality use case?
16:57:15 <ericP> sandro: in TF-Graphs-Designs, the first is "trig state" (earlier called "trig rest") (subject of guus's page)
16:57:26 <ericP> … second is "trig equality"
16:57:47 <ericP> Guus: where the lable is a placeholder for the set of triples, instead of pointing to it
16:59:05 <ericP> sandro: 3rd is the n3 style of explicitly naming the relation (scroll down to "graph object")
17:00:48 <ericP> ... "Graph Objects" is triples, where nodes can be graphs
17:01:09 <zwu2> q+
17:01:19 <ivan> q-
17:01:22 <ericP> Guus: could be seen at quints, e.g.:
17:01:23 <danbri> (regarding 'bigger than RDF', I always think of RDF as ... and quads as adding an extra dimension...)
17:01:44 <ericP> … eg:s1 eg:p1 eg:o1. eg:g1 rdf:graphState
17:01:52 <yvesr> it is possible to store such things in a quad store (that's how my SWI-Prolog N3 implementation works)
17:02:09 <ericP> sandro: steve argued that there's a problematic computational overhead to this approach
17:03:06 <ericP> … distinction between Graph Objects and Graph Datatypes is that former has graphs in the object position and latter has turtle literals
17:04:41 <sandro> the type of the thing identified by the label.
17:04:41 <pchampin> re 5., I would be more comfortable with something more keywordish, like "@graphLabelRelation rdf:graphState"
17:04:47 <ericP> AndyS: [re: Relation Flag"] i was imagining the the modifier applied to each triple, e.g. eg:s eg:p eg:o. eg:g
17:06:21 <ericP> Guus: in AndyS's Oct mail, he used types like StaticGraphContainer
17:06:38 <ericP> … these are close to Graph Objects
17:07:11 <ericP> sandro: with trig state, there are precise semantics which i think are distinguishable [from andy's mail]
17:07:50 <AndyS> q?
17:07:53 <Guus> q?
17:07:55 <ericP> … calling Andy's proposal "typed @@1"
17:08:28 <ericP> zwu: with graphs as objects, how could you enforce equality?
17:08:57 <ericP> ericP: i.e. have the same extension?
17:09:12 <ericP> zwu: right, owl:sameAs has powerful semantics
17:09:40 <ericP> sandro: hasn't come up in the use cases, but you could say
17:10:09 <AndyS> """The built-in OWL property owl:sameAs links an individual to an individual."""
17:10:12 <ericP> … ^h^h^h utter some inconsistency which was computationally hard to catch
17:10:28 <Souri> +1 to calling it something else
17:10:53 <sandro> ivan: later, we can say sameas is sameas sameas
17:10:57 <pchampin> sameGraph ?
17:10:58 <ericP> … so if we use owl:sameAs, are we bringing in baggage? happy to use something else
17:11:06 <ericP> … sameGraphAs
17:11:23 <MacTed> -1000 sameGraphAs
17:11:30 <ericP> zwu: triple order and bnodes make it hard
17:11:32 <MacTed> sameGbox maybe
17:11:32 <AndyS> It is expensive to test for equality. See JJC paper.
17:12:00 <MacTed> (implying also sameGsnap, sameGtext)
17:12:06 <pchampin> @MacTed no, as I get it, the idea is to identify the g-snap here, to the g-box
17:12:44 <gavinc> Please see
17:12:49 <sandro> MacTed, how about rdf:isGSnap ?
17:13:01 <ericP> sandro: i think we can factor out that computation because it's not necessary for the use cases
17:14:29 <ericP> [general exceptance of the term "GSnap"]
17:14:55 <ericP> Guus: would like to execute the provenance scenario with these different designs
17:15:12 <ericP> Arnaud: have we defined GSnap et al?
17:15:28 <sandro> edited to use rdf:isGSnap
17:15:32 <ericP> sandro: plan is that these terms won't make it into the final specs
17:15:59 <ericP> ACTION: Guus to merge his page with Sandro's
17:15:59 <trackbot> Created ACTION-145 - Merge his page with Sandro's [on Guus Schreiber - due 2012-02-15].
17:17:03 <ericP> ACTION: Guus to write down the provenance scenario example, as well as those in AndyS's and Steve's email, and the one from DBPedia
17:17:03 <trackbot> Created ACTION-146 - Write down the provenance scenario example, as well as those in AndyS's and Steve's email, and the one from DBPedia [on Guus Schreiber - due 2012-02-15].
17:17:19 <Zakim> -Peter_Patel-Schneider
17:17:55 <ericP> Guus: let's start with the three in sandro's message, plus the one in Andy's message and the keeping-inference-separate examples
17:18:19 <AZ> bye
17:18:22 <Zakim> -AZ
17:18:23 <zwu2> bye
17:18:27 <danbri> bye!
17:18:27 <Zakim> -zwu2
17:18:29 <Zakim> -Ivan
17:18:37 <MacTed> s/general exceptance/general acceptance/
17:18:38 <Zakim> -Souri
17:18:41 <Zakim> -yvesr
17:18:43 <Zakim> -NickH
17:18:47 <Zakim> -AlexHall
17:18:48 <Zakim> -gavinc
17:18:52 <Zakim> -ScottB
17:19:03 <danbri> ?
However, the purpose of this post is not to dive into the intricacies of their inner workings, but to compare the feature sets these frameworks offer and highlight the unique aspects of each.
Overview of Popular Static Page Frameworks
In this post, we will take a closer look at the following static page frameworks: Jekyll, Middleman, Hugo, and Hexo. These are by no means the only generators out there, but they are the most commonly used ones, backed by large communities and lots of useful resources.
Let’s take a closer look at each of them and compare their basic features:
- Jekyll
- written in Ruby,
- supports the Liquid template engine out of the box;
- Middleman
- written in Ruby,
- supports ERB and Haml template engines out of the box;
- Hugo
- written in Go,
- supports Go template engine out of the box;
- Hexo
- written in JavaScript,
- supports EJS and Pug out of the box.
Note: It is worth pointing out that each of these static page generators can be customized and extended using plugins and extensions, allowing you to cover most or all of your needs.
Setting up Static Site Generators
The documentation for each of these frameworks is comprehensive and nothing short of excellent; see the official Jekyll, Middleman, Hugo, and Hexo documentation sites.
If you simply follow the installation guides, you should have the development environment ready within a matter of minutes. Once installed, you can start a new project by running commands from the terminal.
For example, this is how you start a new project in different frameworks:
Jekyll
jekyll new my_website
Middleman
middleman init my_website
Hugo
hugo new site my_website
Hexo
hexo init my_website
Configuration
Configuration is usually stored in a single file. Each static website generator has its specifics, but many settings are the same across all four.
You could specify where source files are stored or where to output the built sources. It is always useful to skip data that will not be used in the build process by setting an exclude or
skip_render option. You could also use the config file to store global settings like the project title or the author.
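As an illustration, a minimal Jekyll `_config.yml` combining these common settings might look like the sketch below. The file contents and values are hypothetical, not taken from any particular project:

```yaml
# Hypothetical _config.yml for a Jekyll site -- values are only
# illustrative of the settings discussed above.
title: My Website          # global setting: project title
author: Jane Doe           # global setting: author
source: .                  # where source files are stored
destination: ./_site       # where to output built sources
exclude:                   # data skipped during the build
  - README.md
  - vendor/
```

The equivalent settings exist in the other generators under slightly different names (for example, Hexo uses `skip_render` for the same purpose).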
Migrating to a Static Generator
If you already have a Wordpress project ready to go, you can migrate it to a static page generator with relative ease.
For Jekyll, you could use the Jekyll Exporter plugin. For Middleman, you could use a command line tool called wp2middleman. You can use the WordPress to Hugo Exporter for Hugo migration, and for Hexo, you could read the guide on migrating from WordPress to Hexo that I wrote last year.
The principle is nearly identical and quite straightforward — first export all content to a suitable format, and then include it in the right folder.
Content
Static page generators use Markdown for the main content. Markdown is powerful, and you can learn it quickly. Writing content in Markdown feels natural because of its simple syntax, and the resulting document looks clean and organized.
You should place articles in the folder specified in the global configuration file. Article names should follow the convention specified by the generator.
In Jekyll, you should place an article in the
_posts directory. Article name should have the following format: YEAR-MONTH-DAY-title.MARKUP. Other generators have similar rules, and they provide a command for creating a new article.
Here are the commands for creating a new article in Middleman, Hugo, and Hexo:
Middleman
middleman article my_article
Hugo
hugo new posts/my_article.md
Hexo
hexo new post my_article
In Markdown, you are limited to a particular set of syntax. Luckily for us, all generators can process raw HTML as well. For example, if you want to add an anchor with a specific class, you could add it as you would in a regular HTML file:
This is a text with <a class="my-class" href="#">a link</a>.
Front Matter
Front matter is a block of data on top of the Markdown file. You could set custom variables to store the data you need to create better content. Instead of writing HTML in Markdown, which could lead to a cluttered and ugly document structure, you could define a variable in the front matter.
For example, this is how you could add tags to your article.
tags:
  - web
  - dev
  - featured
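For context, front matter sits between `---` delimiters at the top of the Markdown file in Jekyll and Hexo (Hugo also accepts TOML between `+++`). A small, purely illustrative example:

```yaml
---
title: My Article      # built-in variable used by the generator
layout: post
tags:                  # custom list variable read by the template
  - web
  - dev
  - featured
---
```

Everything below the closing `---` is the ordinary Markdown body of the article.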
Templates in Static Page Generators
Static page generators use a templating language to process templates. To insert data into a template, you need to use tags. For example, to display the page title in Jekyll, you could write:
{{ page.title }}
Let’s try to display a list of tags from the front matter in our post in Jekyll. You need to check if a variable is available. Then, you need to loop through tags and display them in an unordered list.
{%- if page.tags -%}
<ul>
  {%- for tag in page.tags -%}
  <li>{{ tag }}</li>
  {%- endfor -%}
</ul>
{%- endif -%}
Middleman:
<% if current_page.data.tags %>
<ul>
  <% for tag in current_page.data.tags %>
  <li><%= tag %></li>
  <% end %>
</ul>
<% end %>
Hugo:
{{ if .Params.Tags }}
<ul>
  {{ range .Params.Tags }}
  <li>{{ . }}</li>
  {{ end }}
</ul>
{{ end }}
Hexo:
<% if (post.tags) { %>
<ul>
  <% post.tags.forEach(function(tag) { %>
  <li><%= tag.name %></li>
  <% }); %>
</ul>
<% } %>
Note: It is a good practice to check if a variable exists to prevent a build process from failing. It could save you hours of debugging and testing.
Using Variables
A static page generator provides global variables available for handling data in templates. Different variable types hold different information. For example, the global variable site in Hexo holds information about the posts, pages, categories, and tags of a site.
Knowing the available variables and how to use them can make a developer's life easier. Hugo uses Go's template libraries for templating. Working with variables in Hugo can be a problem if you are not familiar with the context, or "the dot" as they call it.
Middleman doesn’t have global variables. However, you could turn on the middleman-blog extension that would allow you to get access to some variables, like a list of articles. If you want to add global variables, you could do that by extracting data to data files.
Data Files
When you want to store data that are not available in Markdown files, you should use data files. For example, if you need to save the list of your social links that you want to display in the footer of your site. All static page generators support YAML and JSON files. Additionally, Jekyll supports CSV files, and Hugo supports TOML files.
Let’s store those social links in our data file. Since all generators support YAML format, let’s save the data in the social.yml file:
- name: Twitter
  href:
- name: LinkedIn
  href:
- name: GitHub
  href:
Jekyll stores data files in the _data directory by default. Middleman and Hugo use the data directory, and Hexo uses the source/_data directory.
To output the data, you could use the following code:
Jekyll
{%- if site.data.social -%}
<ul>
  {% for social in site.data.social %}
  <li><a href="{{ social.href }}">{{ social.name }}</a></li>
  {%- endfor -%}
</ul>
{%- endif -%}
Middleman
<% if data.social %>
<ul>
  <% data.social.each do |s| %>
  <li><a href="<%= s.href %>"><%= s.name %></a></li>
  <% end %>
</ul>
<% end %>
Hugo
{{ if $.Site.Data.social }}
<ul>
  {{ range $.Site.Data.social }}
  <li><a href="{{ .href }}">{{ .name }}</a></li>
  {{ end }}
</ul>
{{ end }}
Hexo
<% if (site.data.social) { %>
<ul>
  <% site.data.social.forEach(function(social){ %>
  <li><a href="<%= social.href %>"><%= social.name %></a></li>
  <% }); %>
</ul>
<% } %>
Helpers
Templates often support data filtering. For example, if you want to make the title uppercase, you could do it like so:
{{ page.title | upcase }}
Middleman has similar syntax:
<%= current_page.data.title.upcase %>
Hugo uses the following command:
{{ .Title | upper }}
Hexo has different syntax, but the result is the same.
<%= page.title.toUpperCase() %>
How Static Page Generators Handle Assets
Asset management is handled differently across static page generators. Jekyll compiles asset files wherever they are placed. Middleman handles only assets stored in the source folder. The default location for assets in Hugo is the assets directory. Hexo suggests placing assets in the global source directory.
SASS
Jekyll supports Sass out of the box, but you should follow some rules. Middleman also supports Sass out of the box. Hugo compiles Sass through Hugo Pipes. Hexo does it via a plugin.
ES6
If you want to use modern ES6 JavaScript features, you should install a plugin. There might be more than one version of a similar plugin, so check the code, the open issues, or the latest commit to find the best one.
Images
Image optimization is not supported by default either. As with ES6 plugins, there is more than one plugin for optimizing images, so do your homework and try to find the best one for the job. Alternatively, you could use a third-party solution. On my blog, which is built with Hexo, I am using Cloudinary's free plan. I developed a cloudinary tag, and I serve responsive and optimized images via Cloudinary transformations.
Plugins, Extensions
Static page generators have extensive plugin libraries that allow you to customize your website. Each plugin serves a different purpose. You can find a wide range of plugins, from LiveReload for a better development environment to generators for a sitemap or RSS feed.
You could write a new plugin or extension. Before you do, check if a similar plugin exists. See Jekyll plugin list, Middleman extensions, and Hexo plugins. Hugo doesn’t have plugins or extensions. However, it does support custom shortcodes.
Shortcodes in Markdown
Shortcodes are code snippets that you could place in Markdown documents. Those snippets output HTML code. Hugo and Hexo support shortcodes. There are built-in shortcodes, like figure in Hugo (shown here with an illustrative src value):
{{< figure src="image.jpg" >}}
Hexo youtube shortcode:
{% youtube video_id %}
If you cannot find a proper shortcode, you could create a new one. For example, Hexo doesn't support CanIUse embeds, so I developed a new tag that supports CanIUse embedding. Don't forget to publish your plugin on npm or the official generator site. The community will be grateful if you do.
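To make the idea concrete, here is a minimal sketch of what such a custom Hexo tag could look like. The file name (`scripts/youtube.js`), the markup produced, and the `renderEmbed` helper are my own assumptions for illustration, not code from this post; only the `hexo.extend.tag.register` call is Hexo's actual plugin API:

```javascript
// Hypothetical scripts/youtube.js for a Hexo site.
// renderEmbed is a pure helper so it can be exercised outside Hexo.
function renderEmbed(videoId) {
  return '<div class="video-embed">' +
    '<iframe src="https://www.youtube.com/embed/' + videoId + '" ' +
    'frameborder="0" allowfullscreen></iframe></div>';
}

// Inside Hexo, the global `hexo` object exists, and registering the
// tag makes {% youtube video_id %} expand to the embed markup above.
if (typeof hexo !== 'undefined') {
  hexo.extend.tag.register('youtube', function (args) {
    return renderEmbed(args[0]);
  });
}

module.exports = { renderEmbed };
```

Keeping the HTML-building logic in a plain function like this makes the tag easy to test without spinning up a full Hexo site.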
CMS
Static page generators can be overkill for a non-technical person. Learning how to use commands or Markdown is not easy for everybody. In that case, a user could benefit from a Content Management System for JAMstack sites. In this list, you could find a system that best suits your needs. Know that it takes some time to configure the CMS, but in the long term you and other users will benefit from publishing content more efficiently.
Bonus: JAMstack Templates
If you don’t want to spend too much time configuring your project, you could benefit from JAMstack templates. Some of these templates come preconfigured with a CMS, which can save you a lot of time.
You could also learn a lot by examining the code. Try to install a template, compare it to others and choose the best one for you.
Wrapping Up
Static page generators are a fast and reliable way to build a website. You can even build non-trivial and highly customized websites with a generator nowadays.
For example, Smashing Magazine moved to JAMstack last year, and they managed to speed up their site significantly. There are other successful examples of static websites and they all share the same principle — to produce static resources and deliver them over Content Delivery Networks for faster loading and a superior user experience.
There is much more you could do with your static website: from using Wordpress REST API as a backend to using Lambda functions. There are excellent solutions even for simple websites, like using HTTPS out of the box or handling form submissions.
I hope this overview of static page frameworks helped you realize their potential and consider using them next time you think of a new project.
Understanding the basics
A static web page is composed of fixed content, coded in HTML. It delivers the exact same HTML to every user. This absence of automated generation makes static pages extremely fast.
A dynamic web page relies on servers to dynamically build each page when a user requests to access it. This allows the page to display different content every time it is viewed.
Unlike a static web page, a dynamic page relies on server-side scripting. This allows for a greater degree of flexibility and easier content management. Static pages tend to be faster, more reliable, and require far fewer resources.
A static site generator (sometimes abbreviated SSG) creates a static HTML page using source files, thus allowing for a hybrid approach. In theory, this means you can reap some benefits of static pages, without giving up the practicality of a CMS.
Jumbled Word: Given a string of characters, the task is to find all the meaningful words that can be created by rearranging its letters. Solving a jumbled word means finding all the meaningful words that can be made with the initial string.
Objective Of The Article
In this article we first describe how a jumbled word can be solved and then present a very simple computer program. After this we present an advanced program that makes the solution very fast with the help of a specially designed tree.
Article Revision: 3
Notice 25.9.2009: This article is undergoing an update and will be made online soon.
Here is another related post Jumbled word solver with C++ and Perl implementation with hash and list: Jumble word solver again.
*Please Note:* This article is very old and lengthy. I am planning either a revision or a rewrite of the trie tree part to reflect the recent modifications. At the time I wrote this I did not know that the data structure which I worked out was a trie :) therefore the long explanation.
Article Index
- Jumbled Word: Definition
- Objective of the article
- Introduction
- Sequential File Searching method
- The Tree Searching Method
Introduction
Rearranging the characters of a string is actually generating new permutations of the characters present in the string, but how can we possibly know that a newly formed permutation (string) is meaningful? To do this we need to check if the newly formed string is within a list of valid words, so we need a word list file with a lot of meaningful words stored in it.
Primarily one might think of generating all the possible permutations of the given string and checking each of them against the word list. If we find a match then we have found one such meaningful word, and we continue the search with new permutations. Though this technique seems very simple, it is actually very bad and time consuming. To understand why, suppose the user enters a word of length n and the word list has x words. The string has n! permutations, and each permutation is compared with the x words in the word list file, making ( x * n! ) comparisons to find the matches. This function grows factorially, so it will take a huge amount of time, and the time increases factorially with the length of the input string. Just think of a word list file with 20,000 words and an input string of length 5: we would need ( 20,000 * 5! ) = 2,400,000 comparisons, plus the computation needed to generate the permutations. We will not use this technique; instead, we inspect the properties of different permutations of strings and devise a better one.
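The cost estimate above can be checked with a few lines of C. This is a throwaway sketch of the arithmetic, not part of the article's program; the function names are my own:

```c
/* Cost of the naive approach: every permutation of the n-character   */
/* input (n! of them) is compared against every one of the x words in */
/* the word list, giving x * n! comparisons in total.                 */
unsigned long factorial(unsigned int n)
{
    unsigned long f = 1;
    while (n > 1)
        f *= n--;
    return f;
}

unsigned long naive_comparisons(unsigned long x, unsigned int n)
{
    return x * factorial(n);
}
```

Calling naive_comparisons(20000, 5) reproduces the 2,400,000 figure from the text, and increasing n by one multiplies the cost by n + 1, which is exactly the factorial blow-up we want to avoid.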
Sequential File Searching
If the characters of a string string1 are rearranged to form another string string2 then the properties below hold.
- The length of both the strings are same
- Each character in one string is present in the other string
If the characters of string1 can be rearranged to form string2 then we say that string2 is a ‘solution‘ of string1 (or vice-versa)
The first property is clear: if the lengths of the two strings differ, the characters of the two strings cannot be rearranged to form each other. If the two strings have the same length but do not have the same characters, then again they cannot be rearranged to form each other. This suggests the following: (1) compare the length of string1 with string2 (read from the word list); if it matches then (2) check if they have the same characters. If this is also true then string2, from the word list, can be formed by rearranging the characters of string1. We need both conditions to match; if either condition fails we quit comparing with that word and move to the next word in the word list file. The technique is outlined below.
Sequential File Search Algorithm Outline
READ the input string into 'string1'
WHILE the word list file has not finished, DO
    READ the next word from the word list file into 'string2'
    IF length of 'string1' equal to length of 'string2', THEN
        IF each character in 'string1' is present in 'string2', THEN
            'string2' IS a solution of 'string1'
        ELSE, 'string2' is NOT a solution, continue searching ('else' of inner 'if')
    ELSE, 'string2' is NOT a solution, continue searching ('else' of outer 'if')
END of while loop
We present the above idea a bit formally: If there are two strings string1 = w1w2w3w4….wn and string2 = x1x2x3x4….xm, then the characters of string1 can be permuted to form string2 or vice-versa if and only if m = n and each wi has one and only one match xj in the second string.
What we mean by saying that each character wi of a string should have one and only one match xj in the other string is illustrated with an example:
The strings “hello” and “eloho” apparently have the same characters, but, as we said, each character should have only one match, and these two strings do not satisfy that. “hello” contains ‘l’ in both the 2nd and 3rd positions, and the two occurrences are counted separately because they are in different positions. When “hello” is matched against “eloho”, the first ‘l’ of “hello” is matched, but the second one is not, because the only ‘l’ in “eloho” is already matched.
To make the above process work we now devise an algorithm to determine if string1 can be permuted to form string2, that is, if each character in string1 has one and only one match in string2. We will present two techniques to do this.
Mark Matching: Checking Character Occurrences, The First Method
To match characters in two strings and check if both have the same characters we do the following: sequentially take each character from string1 and check if it is present in string2. If yes, mark the matched position with some marker character, say a ‘*’, stop searching with the current character, and continue with the next character of string1. If a character of string1 is not found in string2, then we stop the whole process and come to the conclusion that string2 is not a solution of string1. If all characters in string1 are found in string2, then we conclude that string2 is a solution of string1. Let us see this with an example.
The general outline/algorithm of the above mark matching process is described below. The end of a string is determined by a NUL character; string[i] means the ith character of the string:
Mark Matching Algorithm: To check if two strings have the same characters or not
INITIALIZE 'i' to 0
WHILE 'string1[i]' is not NUL, DO
    INITIALIZE 'j' to 0
    WHILE 'string2[j]' is not NUL, DO
        IF 'string1[i]' is equal to 'string2[j]', THEN
            MARK 'string2[j]' as "matched", and BREAK from the inner loop
        ELSE
            INCREMENT 'j' : 'j' = 'j + 1'
    END of inner while loop
    IF no character of 'string2' in the inner loop has been marked, THEN
        BREAK from outer loop
    ELSE
        INCREMENT 'i' : 'i' = 'i + 1'
END of outer loop
IF all characters in 'string1' have been matched, THEN
    'string2' IS a solution of 'string1'
ELSE
    'string2' is NOT a solution of 'string1'
The C function for the above method is below. The whole program will be presented later.
check_for_match() function with mark matching:
/* This check function is designed with the first matching technique *
 * where each character matched from string1 in string2 is marked    */
bool check_for_match(char *string1, char *string2)
{
    char copy_of_string1[MAX_LENGTH], copy_of_string2[MAX_LENGTH];
    int i, j;

    /* If the string lengths differ, return FALSE */
    if( strlen(string1) != strlen(string2) ) {
        return FALSE;
    }

    /* Make a copy of string1 and string2 and operate on the copies, *
     * because they will get modified in the matching process        */
    strcpy( copy_of_string1, string1 );
    strcpy( copy_of_string2, string2 );

    /* Make the upper case characters lower case */
    tolower_word( copy_of_string1 );
    tolower_word( copy_of_string2 );

    for( i = 0 ; copy_of_string1[i] != NUL ; i++ )
    {
        for( j = 0 ; copy_of_string2[j] != NUL ; j++ )
        {
            /* If string1[i] and string2[j] are equal, mark string2[j] */
            if( copy_of_string1[i] == copy_of_string2[j] )
            {
                copy_of_string2[j] = MARKER;
                break;
            }
        }
        /* If string2[j] is NUL, no match was found in the inner loop, *
         * so string2 is not a solution of string1: return FALSE       */
        if( copy_of_string2[j] == NUL ) {
            return FALSE;
        }
    }
    /* If the outer loop terminates, string1[i] is NUL, which means all *
     * the characters of string1 were matched in string2, that is,      *
     * string2 is a solution of string1: return TRUE                    */
    return TRUE;
}
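For readers who want to try the function before the full program appears, here is a self-contained sketch of the same mark-matching idea. The MAX_LENGTH, NUL, MARKER and tolower_word definitions here are my stand-ins for the helpers the article defines elsewhere, and int is used in place of the article's bool/TRUE/FALSE:

```c
#include <string.h>
#include <ctype.h>

#define MAX_LENGTH 256
#define NUL        '\0'
#define MARKER     '*'

/* Stand-in for the article's helper: lower-case a string in place */
static void tolower_word(char *s)
{
    for (; *s != NUL; s++)
        *s = (char)tolower((unsigned char)*s);
}

/* Mark matching: return 1 if string2 is a rearrangement of string1 */
int check_for_match(const char *string1, const char *string2)
{
    char copy1[MAX_LENGTH], copy2[MAX_LENGTH];
    int i, j;

    if (strlen(string1) != strlen(string2))
        return 0;

    strcpy(copy1, string1);
    strcpy(copy2, string2);
    tolower_word(copy1);
    tolower_word(copy2);

    for (i = 0; copy1[i] != NUL; i++) {
        for (j = 0; copy2[j] != NUL; j++) {
            if (copy1[i] == copy2[j]) {
                copy2[j] = MARKER;   /* cross out the matched char */
                break;
            }
        }
        if (copy2[j] == NUL)         /* no unmarked match found    */
            return 0;
    }
    return 1;
}
```

With this sketch, check_for_match("loleh", "hello") succeeds while check_for_match("loleh", "holme") fails, matching the worked examples below.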
Note: All the strings are first converted to lower case before doing anything
Examples
Let string1 = “loleh” and the string read from the word list is string2 = “holme“. Now we perform the below operations.
- Take from string1 the 0th character: string1[0] = 'l', and check if it is present in string2 = “holme”. It is found in the 2nd position, which is marked, making “ho*me”.
- Take from string1 the 1st character: string1[1] = 'o', and check if it is present in string2 = “ho*me”. It is found in the 1st position, which is marked, making “h**me”.
- Take from string1 the 2nd character: string1[2] = 'l', and check if it is present in string2 = “h**me”. It is NOT found, so we come to the conclusion that “loleh” cannot be rearranged to form the word “holme”, hence “holme” is not a solution.
Note that the first occurrence of the character ‘l’ in the second string was crossed out in the first match, so when ‘l’ was searched for the second time it was not found.
Let the input string be “loleh” and the second string for matching be “hello“.
- Take from string1 the 0th character: string1[0] = 'l', and check if it is present in string2 = “hello”. It is found in the 2nd position, which is marked, making “he*lo”.
- Take from string1 the 1st character: string1[1] = 'o', and check if it is present in string2 = “he*lo”. It is found in the 4th position, which is marked, making “he*l*”.
- Take from string1 the 2nd character: string1[2] = 'l', and check if it is present in string2 = “he*l*”. It is found in the 3rd position, which is marked, making “he***”.
- Take from string1 the 3rd character: string1[3] = 'e', and check if it is present in string2 = “he***”. It is found in the 1st position, which is marked, making “h****”.
- Take from string1 the 4th character: string1[4] = 'h', and check if it is present in string2 = “h****”. It is found in the 0th position, which is marked, making “*****”.
- string1 has ended, which means all the characters of string1 were found in string2, so “hello” is a solution to “loleh”.
Note that the first time the character ‘l’ is found we cross out only the very first match, and the second occurrence of ‘l’ in string2 is crossed out by the second ‘l’ of string1. Precisely, the job is to find at least and at most one match for each character.
Because the matched positions of the second string are marked with a marker, we call this the marked match method.
In this process string2 is destroyed, so it needs to be copied elsewhere before being used.
Matching by Sorting: Checking Character Occurences, The Second Method
We now talk about a second process with which we can check if the characters of two strings can be permuted to get each other.
Let string2 be a ‘solution‘ of string1. Then the sorted character sequences of string1 and string2 will be the same. This means that if every character of string1 occurs in string2 and both strings have the same length, then sorting their characters will put them into the same relative positions in both strings. Note that the sorted character pattern of any string is just one permutation of it. All we need to do is sort string1 and string2 and check whether the sorted strings are exactly the same. We discuss this with a few examples.
The string “sopt” has 6 meaningful permutations among its 24 permutations:
- opts
- post
- pots
- spot
- stop
- tops
Notice that if we sort the character sequence of any of the 24 permutations we get the same string, “opst“. That means all permutations of a string have the same sorted pattern. So, if we sort any of the above 6 words we will get “opst”.
It is easy to see that all the permutations of a string share a single, unique sorted pattern. We name the input string string1 and the second string (read from the word list) string2. The process of checking whether two strings can be rearranged to form each other with this technique follows:
Matching by Sorting Algorithm: To check if two strings have the same characters or not:
```
SORT 'string1'
SORT 'string2'
IF 'string1' is equal to 'string2', THEN
    'string2' IS a solution of 'string1'
ELSE
    'string2' is NOT a solution of 'string1'
```
The sorting algorithm itself is not described here; any general method can be used, but a decent one should be chosen to keep the search time low. Quicksort is a good choice for performance.
We also present the function in C. The qsort() function from "stdlib.h" is used for sorting and strcmp() from "string.h" for comparing the strings; the sort function itself is shown later.
check_for_match() function with matching by sorting
```c
/* This check_for_match function is designed with the Matching by Sorting method */
bool check_for_match(char *string1, char *string2)
{
    char copy_of_string1[MAX_LENGTH], copy_of_string2[MAX_LENGTH];

    /* If string lengths differ return FALSE */
    if( strlen( string1 ) != strlen( string2 ) )
        return FALSE;

    /* Make a copy of string1 and string2 and operate on the copies, *
     * because they get modified in the matching process             */
    strcpy( copy_of_string1 , string1 );
    strcpy( copy_of_string2 , string2 );

    /* Make the upper case characters lower */
    tolower_word( copy_of_string1 );
    tolower_word( copy_of_string2 );

    /* Sort strings */
    sort_char( copy_of_string1 );
    sort_char( copy_of_string2 );

    /* Compare the sorted strings; if same then a match is found */
    if( strcmp( copy_of_string1 , copy_of_string2 ) == 0 )
        return TRUE;

    /* Else no match */
    return FALSE;
}
```
Note: All the strings are first converted to lower case before doing anything
Which One To Use
The first method is better than the second. In the second method both strings must be sorted before they are compared, so every word read from the dictionary is sorted first. In the first method no sorting is needed; instead we search for the presence of each character, and the moment one is not found we can break out of the process at once, saving the remaining comparisons, which cannot be avoided with the sorting method. The sorting idea is still important: we will use it in the next method, where a tree is used to search the words.
The Whole Picture
We have now devised a mechanism to check whether all the characters in one string are present in another. Previously we discussed that the lengths of the strings should be the same, and we have presented the character-occurrence checking functions. Now we will write the main function and the remaining functions and build the whole program.
The main function reads each word from the word list and drives the matching process, iterating over each word. To match string1 and string2 it calls the check_for_match() function.
The main function
```c
/* This function takes input from the user, then reads each word from the  *
 * word list and checks if the scanned word from the file is a solution of *
 * the input string. The matching is done by check_for_match()             */

/* Default word list name */
char word_list[FILE_NAME_LENGTH] = "words";

int main(int argc, char *argv[])
{
    FILE *fp;
    char jumble_word[MAX_LENGTH], scanned_word[MAX_LENGTH];
    int i = 0;

    /* Copy the passed wordlist name */
    if( argc > 1 )
    {
        strcpy( word_list, argv[1] );
    }

    fp = fopen( word_list, "rb" );
    if( fp == NULL )
    {
        perror("Cannot open file ");
        printf("\"%s\"", word_list);
        printf("\nSyntax: %s <wordlist_file>\n", argv[0]);
        exit(1);
    }

    printf("\nEnter \'q\' to quit ");
    while( TRUE )
    {
        i = 0;
        printf("\n\nJumble Word: ");
        scanf(" %[^\n]", jumble_word);
        if( jumble_word[0] == 'q' && jumble_word[1] == NUL )
            break;

        rewind( fp );
        /* Loop on fscanf's return value instead of feof() so the *
         * last word is not processed twice                       */
        while( fscanf( fp, " %[^\n\r]", scanned_word ) == 1 )
        {
            if( check_for_match( jumble_word, scanned_word ) == TRUE )
            {
                i++;
                printf("\n[%d] %s", i, scanned_word);
            }
        }
        if( i == 0 )
        {
            printf("<no matches>");
        }
    }
    return 0;
}
```
Note: Either version of check_for_match() can be used with the main function to build the program, and no code modification is needed in main for that.
sort_char() function, to sort the characters of a string:
```c
/* The qsort() function of the standard library is used to sort the characters */
void sort_char(char *word)
{
    qsort( word , strlen( word ) , sizeof( char ) , compare_char );
}

/* The compare function for qsort */
int compare_char(const void *a, const void *b)
{
    return *( (const char *) a ) - *( (const char *) b );
}
```
tolower_word() function, to make all the characters in a string lowercase:
```c
/* A function to convert all the upper case letters to lower case */
void tolower_word(char *word)
{
    short int i;

    for( i = 0 ; word[i] != NUL ; i++ )
    {
        word[i] = tolower( word[i] );
    }
}
```
Preprocessor directives
```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <ctype.h>   /* for tolower() */

#define a_minus_A 32          /* pre-calculated value of 'a' - 'A' */
#define FILE_NAME_LENGTH 256
#define MAX_LENGTH 150
#define MARKER '*'
#define TRUE 1
#define FALSE 0
#define bool short int

#ifndef NUL
#define NUL '\0'
#endif
```
Now it is time to run the code and see what it has to say. Below is the download link for the code, which contains a big word list and the source code for both versions of the check_for_match() function.
Source Code of Jumble Word Solver: Sequential File Searching method
Now it's time to get the C source code of the methods described above.
Click here to download the C codes and the word list
The compressed archive contains
- wordlist file: “words.txt”
- source file1: “jumble_mark_match_method.c”
- source file2: “jumble_sort_match_method.c”
- read me file: “readme.txt”
Optimizations
The sequential file search makes one full pass through the whole word list file, comparing against each word. Some slight modifications to the word list file itself, with matching code changes, can make the search faster and give better performance. We do not present source code here, but discuss the optimization strategies below.
- The modification: Sort the word list file in ascending order of string length. The benefit: Strings shorter than the input are skipped and strings of equal length are matched; as soon as the first string longer than the input string is encountered, comparison can stop, with no need to scan any further words. Thus the strings of greater length are never processed, saving computation.
- The modification: Sort the word list file with string length as the primary key and the sorted character pattern as the secondary key. The benefit: As above, strings longer than the input string are avoided. Strings of the same length are compared; once the first match is found, the following words are matches until the first non-match, after which no further word can match, so scanning stops at once. This works because words with the same character permutation are placed adjacent to each other.
- The modification: Sort each word in the list, store the sorted string patterns in a separate index file, and beside each sorted pattern store the file byte offsets of the meaningful words in the word list. The benefit: The input word is sorted and searched for in the index file; when found, the stored byte offsets are used to read the meaningful words from the word list and display them.
Of these optimizations, the first two need the word list to be specially prepared, and the third needs an extra index file to store the unique sorted patterns and the file offsets of the meaningful words for each pattern. The last idea is the one we will use in the tree method, which indexes the whole word list in memory.
Complexity of sequential searching method
Each query sweeps the whole file once. If the word list file has x words, then each query sweeps through x words. The length of each string is computed, which needs a total of (x * n) operations, where n is the average string length. Only the strings with the same length are compared. Let there be m strings with the same length as the input string. With the first method each character of string1 is compared against string2; the worst case for one string needs n^2 operations, making a total of (x * n) + (m * n^2) operations. But note that the worst case occurs only when a match is found, so it occurs only a few times, making the complexity depend mainly on m, the number of matches, and x.
The second method sorts the strings in each iteration. If quicksort is used, the expected cost of one sort is O(n log n). Only the m same-length strings are sorted, giving roughly m * O(n log n) + (x * n) operations. Here too the complexity depends mainly on m, then on the length of each of the m strings, and on x.
The Tree Searching method
Now we introduce a tree that we will use as an index table to find matches and then print the results. Each word is sorted and inserted into the tree in such a way that no linear search is needed and the list of meaningful words can be accessed almost instantly. The maximum number of operations needed by this process depends only on the string length, so it is much better than the previous one.
Let us see how the tree is structured. Look again at the example string “sopt”, which has 6 meaningful words among its 24 permutations:
- opts
- post
- pots
- spot
- stop
- tops
Think of a table where all meaningful words with the same sorted permutation reside at the same location. For example, all 6 meaningful permutations of “sopt” above fall in the same location of the table. Whatever permutation of the 4 letters ‘s’, ’o’, ’p’ and ’t’ the user inputs, we sort it and get the same sorted permutation “opst”. The table is searched with this sorted permutation, and the matched location contains all the meaningful words.
We will build a special kind of search tree to load the words from a word list. Now we describe how the tree is designed, with an example to be searched. First we describe the structure of a node and how it is made and accessed.
Node structure
```c
typedef struct histogram_structure {
    long int *file_pos;    /* [0] = counter, then the file byte offsets      */
    struct histogram_structure *next[MAX_NODES]; /* MAX_NODES = 29: 'a'..'z' *
                                                  * plus ' , - and `         */
} histogram_tree;
```
Each node has an array of pointers, and each element can point to another node. Each element of the array represents one character, and each character is represented by one node. The character a node represents is determined by the index of the next pointer that points to it.
For example:
- 0th element of next will stand for ‘a’
- 1st element of next will stand for ‘b’
- 2nd element of next will stand for ‘c’
- .
- .
- 25th element of next will stand for ‘z’
If the ith element of the next pointer array contains NULL, then that location does not point to the ith character of the alphabet. If the element is not NULL, it points to the node for the ith character, from which we can proceed to the next node. A string is stored after sorting, with one character per node, each node pointing to the next and maintaining the sorted character sequence through the next pointers. The index of the next pointer by which a node is pointed at determines which character it represents.

The meaningful words themselves are stored in the wordlist file. Each node has another array, file_pos, which holds the byte offsets of the meaningful words in the wordlist file. When access to a word is needed, it is read from the word list using the stored byte offset. The first element of the file_pos array does not store a byte offset; instead it is a counter of how many byte offset values are stored in the array, initialized to 0. If the array contains 5 file byte offsets, the counter (location 0 of file_pos) contains 5 and the five offsets are stored in locations 1 to 5. Note that file_pos is declared as a pointer to long int. When allocating a node we use malloc() to allocate only the first location and initialize the counter. While indexing the words in the tree, whenever a new file offset needs to be inserted into the file_pos array, we use realloc() to resize the array and insert at the end. This saves a lot of memory.
The structure is named ‘histogram_structure’ because it creates a histogram like structure of the words of the word list.
The Indexing And Searching Mechanism (direct index)
Each character is represented by one node; the character it represents is determined by the index of the next pointer that points to it. If a node ‘node2‘ is pointed to by the 10th location of the next array of another node ‘node1‘, then ‘node2‘ represents ‘k‘, the 10th letter (counting from 0). The sorted permutation “aehrs” is indexed as below.
- The 0th location (index of ‘a’) of next array of the head node will point to another node, say ‘node1‘.
- The 4th location (index of ‘e’) of next array of the ‘node1‘ node will point to another node, say ‘node2‘.
- The 7th location (index of ‘h’) of next array of the ‘node2‘ node will point to another node, say ‘node3‘.
- The 17th location (index of ‘r’) of next array of the ‘node3‘ node will point to another node, say ‘node4‘.
- The 18th location (index of ‘s’) of next array of the ‘node4‘ node will point to another node, say ‘node5‘.
- And the ‘node5‘ will contain the byte offset of the permutation “aehrs” where it is stored.
To search with the (character sorted) string “aehrs” and check whether it is present in the tree (that is, whether it creates a valid path), start from the head node.
- If the pointer in the 0th location of the next array (index for ‘a’) is not null then move to next[0] (node1)
- If the pointer in the 4th location of the next array in ‘node1’ (index for ‘e’) is not null then move to next[4] (node2)
- If the pointer in the 7th location of the next array in ‘node2’ (index for ‘h’) is not null then move to next[7] (node3)
- If the pointer in the 17th location of the next array in ‘node3’ (index for ‘r’) is not null then move to next[17] (node4)
- If the pointer in the 18th location of the next array in ‘node4’ (index for ‘s’) is not null then move to next[18] (node5)
- The string has ended. We see if the file_pos array stores any file offset by checking if the first position (the counter) is greater than 0, if yes the search is successful and we have found the string. Now we will read the word list file with the file offsets and show the meaningful words.
If the applied string creates a ‘valid path’, that is, it traverses down the tree, and the file_pos counter at the last node is greater than zero, then the string has file_pos[0] meaningful words. When traversing with a certain pattern, encountering a NULL link means the pattern is not valid and has no meaningful permutations. If a string traverses successfully but file_pos[0] is zero then, although the path exists, there are no meaningful words for that pattern. This happens when the applied string is a prefix of a longer ‘valid path’ down the tree.
To have meaningful permutations, that is to have a valid path in the tree the conditions below must be satisfied:
- The sorted character pattern of a string must traverse down the tree
- After traversing down, the value of file_pos[0] of the last node visited must be greater than 0
Now we come up with a general algorithm
Algorithm to INDEX the tree with string2 read from word list:
```
ALLOCATE the 'head' node
WHILE the word list is not finished, DO
    READ the file offset of the next word into 'current_offset'
    READ next word from word list into 'string2'
    SORT 'string2'
    INITIALIZE 'i' to 0
    INITIALIZE 'current_node' with 'head'
    WHILE 'string2[i]' not equal to NUL
        CONVERT the character 'string2[i]' to a numerical index and store it in 'index'
        IF 'current_node -> next[index]' is equal to NULL, THEN
            ALLOCATE new node 'new'
            LINK 'new' to the index location of 'current_node' : 'current_node -> next[index]' = 'new'
            MOVE to newly allocated node: 'current_node' = 'current_node -> next[index]'
        ELSE IF 'current_node -> next[index]' is NOT equal to NULL, THEN
            MOVE to the node pointed by index: 'current_node' = 'current_node -> next[index]'
        INCREMENT 'i' : 'i' = 'i' + 1
    END of inner while loop
    // Path created, now populate the offset
    INCREMENT 'current_node -> file_pos[0]' by 1
    INSERT the current word offset value in the next 'file_pos' location:
        'current_node -> file_pos[ file_pos[0] ]' = 'current_offset'
END of outer while loop
```
Algorithm to SEARCH the tree with string1 to find a solution:
```
READ the jumble word to solve from the user into 'string1'
SORT 'string1'
INITIALIZE 'i' to 0
INITIALIZE 'current_node' with 'head'
WHILE 'string1[i]' not equal to NUL
    CONVERT the character 'string1[i]' to a numerical index and store it in 'index'
    IF 'current_node -> next[index]' is equal to NULL, THEN
        'string1' is not indexed in the tree: no solution for 'string1', search ends, BREAK from loop
    ELSE IF 'current_node -> next[index]' is NOT equal to NULL, THEN
        MOVE to the node pointed by index: 'current_node' = 'current_node -> next[index]'
    INCREMENT 'i' : 'i' = 'i' + 1
END of while loop
IF ('string1[i]' is equal to NUL, ie: 'string1' is found)
   AND ('current_node -> file_pos[0] > 0', ie: 'current_node' has file offsets stored), THEN
    READ the file offset values from the 'file_pos' array and use them to fetch the words from the word list
ELSE
    There is NO solution to 'string1', ie: no meaningful words can be constructed by permuting the
    character sequence of 'string1'
```
This process is named ‘direct index’ because the index value of the next pointer array is calculated directly from the value of the character.
Now we present the equivalent C code of the above method. Only the main portion (the main loop) of the loading and searching routines is shown below. This code is used in the program source code presented at the end of the section.
The C Language code segment to INDEX the word list is as below
```c
while( ! feof( fp ) )
{
    current_node = head;
    current_file_position = ftell( fp );
    fscanf( fp, " %[^\n\r]", word );

    /* If the string word contains invalid characters, skip the word */
    if( scan_for_valid_word( word ) == FAIL )
    {
        continue;
    }
    word_count++;

    tolower_word( word );
    sort_char( word );

    /* While end of the word not reached, advance in the histogram tree */
    for( i = 0 ; word[i] != NUL ; i++ )
    {
        index = char_to_index( word[i] );
        if( node_is_empty( current_node -> next[index] ) == TRUE )
        {
            temp = allocate_new_node();
            current_node -> next[index] = temp;
        }
        current_node = current_node -> next[index];
    }
    /* After a word has ended, store the word location offset */
    set_current_word_location( current_node, current_file_position );
}
fclose( fp );
return head;
}   /* end of the enclosing load function */
```
The C Language code segment to SEARCH the word list is as below
```c
/* Searches the tree pointed to by head for the unsorted pattern word. *
 * The function sorts the word characters and then searches the tree   */
current_node = head;

/* Backup original word */
strcpy( pattern , word );

/* Sort pattern and make lowercase to make it ready to be searched */
tolower_word( pattern );
sort_char( pattern );

/* While pattern has not ended, advance in the histogram tree through the links.  *
 * Whenever the converted index points to NULL, ie no path, stop searching: no    *
 * match. If the pattern traverses the tree successfully and the pattern has      *
 * ended, check if the last node visited is a word-end node, detected by whether  *
 * the file_pos array stores at least one word location offset. If yes, search    *
 * successful, return the matched node; else failed, return NULL                  */
for( i = 0 ; pattern[i] != NUL ; i++ )
{
    index = char_to_index( pattern[i] );
    if( node_is_empty( current_node -> next[index] ) == TRUE )
    {
        return NULL;
    }
    current_node = current_node -> next[index];
}

if( current_node -> file_pos[0] > 0 )
    return current_node; /* this returned node is passed to another function which prints the words */
else
    return NULL;
}   /* end of the enclosing search function */
```
These codes are used to construct the whole program. The whole program can be found in the downloads section.
More Examples
Now let us inspect some more cases:
Let us again consider the six meaningful words of “opst” from the first example. Each word is scanned from the wordlist and then indexed in the tree, making a unique “path” shared by all strings that are permutations of each other. When the six words are read and sorted they generate the same sorted permutation, “opst”, as described before. Each such sorted permutation corresponding to a meaningful word creates a ‘valid path’ which we load into the tree. Now we describe how the six meaningful words are indexed/loaded in the tree after being read.
Note: counting starts from zero
Start with the head node of the tree.
- The first character is ‘o‘, the 14th letter of the alphabet, allocate a new node ‘node1‘ and link it to the 14th location of the next array of the head node, and move to ‘node1‘.
- Next we read ‘p‘, the 15th letter of the alphabet, allocate a new node ‘node2‘ and link it to the 15th location of the next array of the current node ‘node1‘, and move to ‘node2‘.
- Next we read ‘s‘, the 18th letter of the alphabet, allocate a new node ‘node3‘ and link it to the 18th location of the next array of the current node ‘node2‘, and move to ‘node3‘.
- Next we read ‘t‘, the 19th letter of the alphabet, allocate a new node ‘node4‘ and link it to the 19th location of the next array of the current node ‘node3‘, and move to ‘node4‘.
- The string has ended. Now we read the offset value of the word from the wordlist file and then store it to the file_pos array in the current node ‘node4‘ and update the counter by incrementing it to 1
Thus all the words are scanned, sorted and indexed. If a node already exists there is no need to allocate it; we simply move to it. For example, after the ‘valid path’ “opst” is created above, when the next string with the same sorted permutation is inserted it follows the same path, so no new nodes are created; it only traverses down the path, and at the end stores the file offset value for the new word and updates the counter. So all meaningful strings with the same sorted character permutation travel the same path but have their file offsets stored in different locations of the file_pos array.
The above case is when two words have the same sorted permutation. Now let us see the case where the paths of two sorted permutations match partially.
Let the program read a word “pros” which has the sorted permutation “oprs”.
- First we read ‘o‘, the 14th letter of the alphabet. The 14th location of the next array of the head node is already allocated, move to next[14] of current node (to node1)
- Next we read ‘p‘, the 15th letter of the alphabet. The 15th location of the next array of the current node (node1) is already allocated, move to next[15] of current node (to node2)
- Next we read ‘r‘, the 17th letter of the alphabet. The 17th location of the next array of the current node (node2) representing ‘r’ is empty, so we allocate it and then link that allocated node to the 17th location of the next array of the current node, and then move to it (current node’s next[17]). (At this point the path splits)
- Next we read ‘s‘, the 18th letter of the alphabet. The 18th location of the next array of the current node representing ‘s’ is empty, so we allocate it and then link that allocated node to the 18th location of the next array of the current node, and then move to it.
- The word has ended so we read the file offset value of the current word from the wordlist file and enter it into the file_pos array of the current node and update the file_pos counter accordingly
Note: the sorted permutations from the previous examples, “opst” and “oprs”, share a common starting path, which is why we find the first nodes already made and can proceed; but because the permutations differ at a point, the paths pointed to by the two strings split.
Similarly, think of the words “fadge” and “deaf”. The first has the sorted pattern “adefg” and the second “adef”. After the first is entered into the tree and we attempt to enter the second, we see that the paths the words traverse are the same, but the lengths differ, which is what makes the permutations different. So when we reach the end of the string “adef”, we simply store the file offset in the file location array and update the file_pos counter.
When we search the tree with “opst” indexed in it for the sorted pattern “opxt”, the operations below are performed. Note how the search breaks out instantly when the first match is not found.
- First we read ‘o‘, the 14th letter of the alphabet. The 14th location of the next array of the head node, this is not empty, move to next[14] (to node1)
- Next we read ‘p‘, the 15th letter of the alphabet. The 15th location of the next array of the current node (node1), this is not empty, move to next[15] (to node2)
- Next we read ‘x‘, the 23rd letter of the alphabet. The 23rd location of the next array of the current node IS empty, so we break from the search: no ‘valid path’ exists, no solutions.
Take the example of “rampage”. Let “rampage” be indexed first, which creates the path “aaegmpr”. If we search the tree with the sorted permutation “aaegm”, it is clear that “aaegm” will successfully traverse the tree, because it shares a common path with “aaegmpr”. When the traversal with “aaegm” finishes, it must also satisfy another condition to make the path a ‘valid’ one: the file_pos[0] of the current node must be greater than 0 (a meaningful word exists). Only if both conditions are satisfied do we have a successful match.
The Indexing Function
This is a very simple function: it returns the position of a character in the alphabet, so ‘a’ is indexed as 0, ‘b’ as 1, and so on. Accessing the next pointer array with the letter ‘j’ means accessing it with the index value 9. This is done simply by subtracting the ASCII value of ‘a’ (97) from the character. Upper case characters are first translated to lowercase and then the index is calculated. There is no point in giving upper case letters separate nodes in the tree: case carries no significance in jumble words, and indexing upper case separately would consume a huge amount of extra pointer memory. The function also returns index values for three special characters: hyphen ‘-‘, backtick ‘`’ and apostrophe ‘\’‘. The hyphen is the most common of these in words, as in “man-to-man”.
The index function will be modified when we introduce an optimized version of the tree.
The indexing function code segment for the direct index method is as below
```c
/* Convert a character to an index to be used in the histogram_tree -> next *
 * array to move to the next node in the path. Each index is accessed       *
 * directly. The character is converted to lower case first                 */
short int char_to_index(char c)
{
    short int index;

    /* Convert the character to lowercase */
    c = tolower(c);

    if( c >= 'a' && c <= 'z' )
    {
        index = ( short int ) ( c - 'a' );
    }
    else if( c == '\'' )
    {
        index = 26;
    }
    else if( c == '-' )
    {
        index = 27;
    }
    else if( c == '`' )
    {
        index = 28;
    }
    else
    {
        /* Not a valid character */
        return EMPTY;
    }
    return index;
}
```
Case Sensitivity
The program deals only with lower case characters. Distinguishing between upper and lower case is not significant for solving jumble words. Whenever a string is read from the wordlist or from the user, it should first be converted to lower case before applying the sorting and the tree indexing or searching functions.
The Memory Optimized Tree (searched index)
The version of the tree just presented, using direct indexing, takes a huge amount of memory: for our test wordlist it takes 86 megabytes of RAM. This is because a huge share of the next pointer array locations go unused. We will change the structure, and the routines along with it, to make the memory allocation optimal. The next array is made like the file_pos array: instead of statically allocating all 29 pointers, we allocate only the first location at the start, and each time a new node is to be inserted, realloc() is used to resize the next pointer array and insert the new pointer at the newly created location. After this modification it cannot be determined directly in which location a character is indexed, because a letter no longer maps to a unique fixed position. To solve this, a companion character array next_index is introduced, organized like the file_pos array. The first position of next_index contains the number of non-NULL links, that is, the number of characters the node points to. A character is searched for in the next_index array; if it is found at position i, then the link for that character is stored at index (i - 1) of the next pointer array.
The process of storing:
When indexing with a certain character, it is searched for in next_index. If it is not found, the array is reallocated one storage location larger, the counter at next_index[0] is incremented, and the new character is stored at the new (last) position. Otherwise, if the character is present in next_index at the ith location, the next link is accessed with index (i - 1) and we proceed to the next node.
The process of accessing:
First the next_index array is searched for the character to be indexed. Say it is found at the ith position of the array; then the index for the character in question is (i - 1) and next[i - 1] represents the link. If the character is not found in the next_index list, then the character has a NULL link, i.e., it does not exist. In the previous version these NULL links needed to be stored, which resulted in a waste of memory. The (i - 1) adjustment is needed because the first location of the next_index array stores a counter, while the first location of the next array does not, so the index must be adjusted. Tip: the first location of the next array can be left unused to avoid the adjustment.
The modified structure is as below
typedef struct histogram_structure {
  int *file_pos;
  char *next_index;
  struct histogram_structure **next;
} histogram_tree;
To implement this change, the main modification is needed in the indexing function, which will now search for the requested character in the next_index list and return its index, with which the next pointer array is accessed. The modified version of the indexing function to work with the optimized search tree, the searched index method, is presented below.
/* Convert a character to an index into the histogram_tree -> next array,
 * used to move to the next node in the path. The index is found by
 * searching the next_index list. */
short int char_to_index (char c, histogram_tree *current_node)
{
  short int index, n;

  /* The first location of next_index stores the count of stored indexes. */
  /* Locate the position of character c in next_index. */
  for (index = 1, n = current_node->next_index[0]; index <= n; index++)
  {
    if (current_node->next_index[index] == c)
    {
      /* If c is found at position 'index' then the link for character c
       * is in location (index - 1) of the next histogram_tree pointer array */
      return (index - 1);
    }
  }

  return EMPTY;
}
After implementing this modification, loading the same test wordlist takes 48 megabytes of memory, reducing it to about 55% of the original. Because the index of a character in the next array is determined by searching for it in a companion array, this method is named the 'searched index'.
Source Code of Jumble Word Solver: Tree Searching Method
Below are links to the C source code of both the direct index and searched index methods described above.
Click here to download the C codes of the direct and searched index tree search method and the word list
The compressed archive contains
- wordlist file: words.txt
- direct_index tree source files in the direct_index directory, and a single file version of the same.
- searched_index tree source files in the searched_index directory, and a single file version of the same.
- compile.sh : a shell script file to compile the sources with gcc
- read me file : readme.txt
Complexity of the tree method
Clearly there is no longer any need to sweep through the whole file to search for each word. The word list is indexed into memory at the beginning. Then, to find whether the input jumbled word is in the tree, it is sorted and the tree is traversed down n levels, where n is the length of the string. If the string is valid and can be rearranged to form a meaningful word, the search will be successful. To detect whether the search succeeds for an n-length string, only n operations are needed; if the search is unsuccessful, the loop breaks before the string completes, making fewer than n comparisons. This makes the search O(n), where n is the length of the search string (the jumbled word). Because the sort routine needs to be invoked once before searching, the sorting algorithm determines the overall complexity: if quicksort is used, the complexity is O(n log n). The average length of words in the dictionary is limited and small, and for an invalid long input the search breaks before the whole string is traversed. This makes the search technique very fast, needing only a few operations. After a match is found, the words can be read directly from the word list file using the stored offset values, without traversing the file linearly.
The loading routine needs some time to index the word list, because it sorts each string in the word list and then indexes it into the tree. Most of the time is taken by sorting the strings, which again grows in O(n log n) time. If there are x words in the word list, then the loading cost is x * O(n log n), where n is the average length of the words in the word list.
Pingback: jumble solver « Trendstrends's Blog
hey
r u using visual basic 6.0 for the codings??
absolutely not. The code is completely written in C language.
You are a Born Genius, Phoxis! Congratulations and all the best.
The design for the weblog is a tad off in Epiphany. Nevertheless I like your web site. I may need to install a
hey, its a really mind blowing code…i really liked it.
I also have a C code in which you enter jumbled words and that program gives you meaningful english words.
Hope you would like my code. Its very short and purely written in C.
for more details please visit
This is a very old one from my blog. Actually this code has undergone many changes and I haven’t updated here. For example now instead of storing the file offsets in the dictionary file, the latest code loads the entire dictionary into memory, which doesn’t take much memory. Also using this module I have made single player scrabble word solver, the one comes in the news paper.
I will definitely visit your blog and check it out. Thanks for visiting.
Really you are a genius.
you manage the code very efficiently, its really great.
thanks to you also for uploading such interesting code and helping us to have a new thought for the same problem that we had already encountered.
I hope I get some time to update the post. Thanks for visiting again.
phoxis
can i challenge u can u convert it into java
Yes, definitely, but at this time I am involved with other stuffs and cannot do it. Actually I thought once to make it with Java. This post is a bit out dated. Here is the latest code: You may want to give it a look.
Thanks for stopping by!
thx for a quick reply i am using java eclipse and some of the codes is unknown.. but when i run it on MS visual c++ its working fine
Ya, it’s in C, so MSVC++ should run it just fine. Java has a lot of classes with which you can do the same stuff without manually coding the trie. For a trie based implementation, I don’t know if Java already has Trie based class.
Thank you very much sir You help me a lot.
Glad to know it helped!
is there a class on java that will separate string to single chars
Check the methods charAt and getBytes of the String class.
docs.oracle.com/javase/6/docs/api/java/lang/String.html
Pingback: Jumble word solver again | Phoxis
How to Add Cluster Support to Node.js
April 1st, 2021
What You Will Learn in This Tutorial
How to use the Node.js cluster module to take advantage of a multi-core processor in your production environment.
Table of Contents
By nature, JavaScript is a single-threaded language. This means that when you tell JavaScript to complete a set of instructions (e.g., create a DOM element, handle a button click, or in Node.js to read a file from the file system), it handles each of those instructions one at a time, in a linear fashion.
It does this regardless of the computer it's running on. If your computer has an 8-core processor and 64GB of RAM, any JavaScript code you run on that computer will run in a single thread or core.

The same rules apply in a Node.js application. Because Node.js is based on the V8 JavaScript engine, the same rules that apply to JavaScript apply to Node.js.
When you're building a web application, this can cause headaches. As your application grows in popularity (or complexity) and needs to handle more requests and additional work, if you're only relying on a single thread to handle that work, you're going to run into bottlenecks—dropped requests, unresponsive servers, or interruptions to work that was already running on the server.
Fortunately, Node.js has a workaround for this: the cluster module.
The cluster module helps us to take advantage of the full processing power of a computer (server) by spreading out the workload of our Node.js application. For example, if we have an 8-core processor, instead of our work being isolated to just one core, we can spread it out to all eight cores.
Using cluster, our first core becomes the "master" and all of the additional cores become "workers." When a request comes into our application, the master process performs a round-robin style check asking "which worker can handle this request right now?" The first worker that meets the requirements gets the request. Rinse and repeat.
Setting up an example server
To get started and give us some context, we're going to set up a simple Node.js application using Express as an HTTP server. We want to create a new folder on our computer and then run:
npm init --force && npm i express
This will initialize our project using NPM—the Node.js Package Manager—and then install the express NPM package.
Be mindful of Node.js and NPM version here
For this tutorial, we're using Node.js v15.13.0 with NPM v7.7.6. Check out this tutorial on using NVM to install and manage different versions of Node.js.
After this is complete, we'll want to create an index.js file in our new project folder:
/index.js
import express from "express";

const app = express();

app.use("/", (req, res) => {
  res.send(
    `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
  );
});

app.listen(3000, () => {
  console.log("Application running on port 3000.");
});
Here, we import express from 'express' to pull express into our code. Next, we create an instance of express by calling that import as a function and assigning it to the variable app.
Next, we define a simple route at the root / of our application with app.use() and return some text to ensure things are working (this is just for show and won't have any real effect on our cluster implementation).
Finally, we call app.listen(), passing 3000 as the port (we'll be able to access the running application on that port in our browser after we start the app). Though the message itself isn't terribly important, as a second argument to app.listen() we pass a callback function to log out a message when our application starts up. This will come in handy when we need to verify if our cluster support is working properly.
To make sure this all works, in your terminal, cd into the project folder and then run node index.js. If you see the following, you're all set:
$ node index.js
Application running on port 3000.
Adding Cluster support to Node.js
Now that we have our example application ready, we can start to implement cluster. The good news is that the cluster package is included in the Node.js core, so we don't need to install anything else.
To keep things clean, we're going to create a separate file for our Cluster-related code and use a callback pattern to tie it back to the rest of our code.
/cluster.js
import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};
Starting at the top, we import two dependencies (both of which are included with Node.js and do not need to be installed separately): cluster and os. The former gives us access to the code we'll need to manage our worker cluster and the latter helps us to detect the number of CPU cores available on the computer where our code is running.
Just below our imports, we export the function we'll call from our main index.js file later. This function is responsible for setting up our cluster support. As an argument, make note of our expectation of a callback function being passed. This will come in handy later.
Inside of our function, we use the aforementioned os package to communicate with the computer where our code is running. Here, we call os.cpus().length, expecting os.cpus() to return an array, and then measure the length of that array (representing the number of CPU cores on the computer).
With that number, we can set up our cluster. Most modern computers have at least two to four cores, but keep in mind that the number of workers created on your computer will differ from what's shown below. Read: don't panic if your number is different.
/cluster.js
[...]

if (cluster.isMaster) {
  for (let i = 0; i < cpus; i++) {
    const worker = cluster.fork();

    worker.on("message", (message) => {
      console.log(`[${worker.process.pid} to MASTER]`, message);
    });
  }

  cluster.on("exit", (worker) => {
    console.warn(`[${worker.process.pid}]`, {
      message: "Process terminated. Restarting.",
    });

    cluster.fork();
  });
}

[...]
The first thing we need to do is to check if the running process is the master instance of our application, or, not one of the workers that we'll create next. If it is the master instance, we run a for loop for the length of the cpus array we determined in the previous step. Here, we say "for as long as the value of i (our current loop iteration) is less than the number of CPUs we have available, run the following code."
The following code is how we create our workers. For each iteration of our for loop, we create a worker instance with cluster.fork(). This forks the running master process, returning a new child or worker instance.
Next, to help us relay messages between the workers we create and our master instance, we add an event listener for the message event to the worker we created, giving it a callback function.
That callback function says "if one of the workers sends a message, relay it to the master." So, here, when a worker sends a message, this callback function handles that message in the master process (in this case, we log out the message along with the pid of the worker that sent it).
This can be confusing. Remember, a worker is a running instance of our application. So, for example, if some event happens inside of a worker (we run some background task and it fails), we need a way to know about it.
In the next section, we'll take a look at how to send messages from within a worker that will pop out at this callback function.
One more detail before we move on, though. We've added one additional event handler here, but this time, we're saying "if the cluster (meaning any of the running worker processes) receives an exit event, handle it with this callback." The "handling" part here is similar to what we did before, but with a slight twist: first, we log out a message along with the worker's pid to let us know the worker died. Next, to ensure our cluster recovers (meaning we maintain the max number of running processes available to us based on our CPU), we restart the process with cluster.fork(). To be clear: we'll only call cluster.fork() like this if a process dies.
/cluster.js
import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      // Listen for messages FROM the worker process.
      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};
One more detail. Finishing up with our cluster code, at the bottom of our exported function we add an else statement to say "if this code is not being run in the master process, call the passed callback if there is one."
We need to do this because we only want our worker generation to take place inside of the master process, not any of the worker processes (otherwise we'd have an infinite loop of process creation that our computer wouldn't be thrilled about).
Putting the Node.js Cluster to use in our application
Okay, now for the easy part. With our cluster code all set up in the other file, let's jump back to our index.js file and get everything set up:
/index.js
import express from "express";
import favicon from "serve-favicon";
import cluster from "./cluster.js";

cluster(() => {
  const app = express();

  app.use(favicon("public/favicon.ico"));

  app.use("/", (req, res) => {
    if (process.send) {
      process.send({ pid: process.pid, message: "Hello!" });
    }

    res.send(
      `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
    );
  });

  app.listen(3000, () => {
    console.log(`[${process.pid}] Application running on port 3000.`);
  });
});
We've added quite a bit here, so let's go step by step.
First, we've imported our cluster.js file up top as cluster. Next, we call that function, passing a callback function to it (this will be the value of the callback argument in the function exported by cluster.js).
Inside of that function, we've placed all of the code we wrote in index.js earlier, with a few modifications.
Immediately after we create our app instance with express(), you'll notice that we're calling app.use(), passing it another call to favicon("public/favicon.ico"). favicon() is a function from the serve-favicon dependency added to the imports at the top of the file.
This is to reduce confusion. By default, when we visit our application in a browser, the browser will make two requests: one for the page and one for the app's favicon.ico file. Jumping ahead, when we call process.send() inside of the callback for our route, we want to make sure that we don't get the request for the favicon.ico file in addition to our route.
Where this gets confusing is when we output messages from our worker. Because our route receives two requests, we'll end up getting two messages (which can look like things are broken).
To handle this, we import favicon from serve-favicon and then add a call to app.use(favicon("public/favicon.ico"));. After this is added, you should also add a public folder to the root of the project and place an empty favicon.ico file inside of that folder.
Now, when requests come into the app, we'll only get a single message, as the favicon.ico request will be handled via the favicon() middleware.
Continuing on, you'll notice that we've added something above our res.send() call for our root / route:
if (process.send) {
  process.send({ pid: process.pid, message: "Hello!" });
}
This is important. When we're working with a Cluster configuration in Node.js, we need to be aware of IPC or interprocess communication. This is a term used to describe the communication—or rather, the ability to communicate—between the master instance of our app and the workers.
Here, process.send() is a way to send a message from a worker instance back to the master instance. Why is that important? Well, because worker processes are forks of the main process, we want to treat them like they're children of the master process. If something happens inside of a worker relative to the health or status of the cluster, it's helpful to have a way to notify the master process.
Where this may get confusing is that there's no clear tell that this code is related to a worker.
What you have to remember is that a worker is just the name used to describe an additional instance of our application, or here, in simpler terms, our Express server.
When we say process here, we're referring to the current Node.js process running this code. That could be our master instance or it could be a worker instance.
What separates the two is the if (process.send) {} statement. We do this because our master instance will not have a .send() method available, only our worker instances. When we call this method, the value we pass to process.send() (here we're passing an object with a pid and message, but you can pass anything you'd like) pops out in the worker.on("message") event handler that we set up in cluster.js:
/cluster.js
worker.on("message", (message) => {
  console.log(`[${worker.process.pid} to MASTER]`, message);
});
Now this should be making a little more sense (specifically the "to MASTER" part). You don't have to keep this in your own code, but it helps to explain how the processes are communicating.
Running our Clustered server
Last step. To test things out, let's run our server. If everything is set up correctly, from the project folder in your terminal, run node index.js (again, be mindful of the Node.js version you're running):
$ node index.js
[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.
If everything is working, you should see something similar. The numbers on the left represent the process IDs for each instance generated, relative to the number of cores in your CPU. Here, my computer has a six-core processor, so I get six processes. If you had an eight-core processor, you'd expect to see eight processes.
Finally, now that our server is running, if we open up the app in our browser and then check back in our terminal, we should see something like:
[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.
[25423 to MASTER] { pid: 25423, message: 'Hello!' }
The very last log statement is the message received in our worker.on("message") event handler, sent by our call to process.send() in the callback for our root / route handler (which runs when we visit our app in the browser).
That's it!
Wrapping up
Above, we learned how to set up a simple Express server and convert it from a single-running Node.js process to a clustered, multi-process setup. With this, now we can scale our applications using less hardware by taking advantage of the full processing power of our server.
Get the latest free JavaScript and Node.js tutorials, course announcements, and updates from CheatCode in your inbox.
No spam. Just new tutorials, course announcements, and updates from CheatCode.
PPM was written by Activestate for their distribution of Perl. Are you sure that there are no licensing restrictions concerning its redistribution with other (non-Activestate) distributions?
As for a Fortran compiler... I can't imagine more than a few percent of Win32 Perl users requiring one. So for a broad majority of users, that will be unnecessary bloat, in a package that is already quite large.
Adam has put a fair amount of work into the build chain for Strawberry Perl, which opens up the possibility for you to create and distribute a Perl-Fortran distribution. You could call it Punched Card Perl (the initials can be read on two levels :)
• another intruder with the mooring in the heart of the Perl
Regarding Vista, for Strawberry Perl 5.8.8, Cosimo Streppone found that adding "c:\strawberry-perl\mingw\libexec\gcc\mingw32\3.4.5" to the PATH environment variable fixed compilation errors. I'm curious if a similar fix with the libexec paths for Strawberry Perl 5.10.0 would help (possibly for g77, too.)
Unfortunately, I don't have Vista handy to test.
C:\C>type try.c
#include <stdio.h>
int main(void) {
printf("Hello World\n");
return 0;
}
C:\C>gcc -o try.exe try.c
gcc: installation problem, cannot exec `cc1': No such file or directory
C:\C>set PATH=C:\strawberry\c\libexec\gcc\mingw32\3.4.5;%PATH%
C:\C>gcc -o try.exe try.c
try.c:1:19: no include path in which to search for stdio.h
C:\C>
And any compilation that includes perl.h fails, since the standard C headers that perl.h includes aren't found.
There have been problems getting the installer to set the LIB and INSTALL env variables properly (due to the new UAC in Vista). For example, on my WinXP, I have this:
INCLUDE=C:\strawberry\c\include;C:\strawberry\perl\lib\CORE;C:\Program Files\GnuWin32\include
LIB=C:\strawberry\c\lib;C:\strawberry\perl\bin;C:\Program Files\GnuWin32\lib
(The GnuWin32 were my own addition -- the others are provided by Strawberry on installation.)
If you don't have those set, try setting them and compiling again.
doesn't require anything special
Except XML::Parser, which requires the Expat library, which does require at least a little something special to get it working.
a) Why would Strawberry Perl *not* include the PPM utility ?
Because Strawberry perl is not ActivePerl
That said, I would hope a next release to have cpan to be a copy of cpan.bat, because when I have the Strawberry path in front of my $PATH in a Cygwin environment, cpan will start the Cygwin version, and bring me into trouble.
b) Why would Strawberry Perl *not* include the g77 compiler ?
I've never had the need for any fortran linking with perl, and neither did the maintainers of Strawberry perl I guess.
Anyway, is Strawberry Perl compatible with ActivePerl? Is it possible to use compiled modules from ActivePerl in Strawberry Perl? If so, then the addition of PPM would be useful, as in this case there would exist usable PPM repositories.
And for g77, maybe this could be added in the planned Chocolate Perl distribution.
cpan.bat has been flagged as an issue before. See #21864: Explicit path in cpan.bat.
While I'm not sure exactly how we can save you from having similarly named tools in different parts of your PATH, please add your suggestion to the RT queue.
Added a few remarks to the RT queue
(Strawberry Perl)++ ! Getting further and further. I also suggest to (sym)link/copy dmake to nmake and make and bundle Tk, as it cannot be installed with cpan, as make requires the extra argument "MSWin32" to succeed.
Today I noticed the absence of the DB_File CORE module, that I think should be available, as building it with cpan.bat is impossible.
The development environment is not available in /C/strawberry/c, and many of my programs/scripts depend on having DB_File available.
Does it hit the same problems as the devel env for expat?
BerkeleyDB - was SleepyCat - is now owned by Oracle. If you search on "SleepyCat", it's the first hits with Google. Download here, where I see no need to register. Found in a minute, posting this reply took much.
#include <rtt/scripting/ParsedStateMachine.hpp>
Definition at line 58 of file ParsedStateMachine.hpp.
Start this StateMachine.
The Initial state will be entered.
Add a State.
If already present, changes nothing.
Retrieve the current program in execution.
Returns null if the StateMachine is not active or no programs are being run.
Retrieve the current state of the state machine.
Returns null if the StateMachine is not active.
Referenced by RTT::StateMachine::getCurrentStateName(), RTT::StateMachine::inState(), and RTT::StateMachine::inStrictState().
Stop this StateMachine.
The current state is left unconditionally.
Execute any pending State (exit, entry, handle) programs.
You must call executePending() before calling requestState() or requestNextState(). You should only call requestState() or requestNextState() if executePending returns true.
Due to the pending requests, the currentState() may have changed.
Returns the current program line in execution,.
Lookup a State by name.
Returns null if not found.
Referenced by RTT::StateMachine::requestState().
Inspect if the StateMachine is interruptible by events.
Only the run program may be interrupted, or if no program is currently executed.
Inspect if the StateMachine is performing a state transition.
A transition is in progress if entry, transition or exit programs are executed.
Referenced by RTT::StateMachine::inStrictState(), and RTT::StateMachine::isStrictlyActive().
Search from the current state a candidate next state.
If none is found, the current state is returned.
Request going to the Final State.
This will always proceed.
Search from the current state a candidate next state.
If none is found, the current state is taken. Next, handle the resulting state.
Go stepwise through evaluations to find out next state.
Request a state transition to a new state.
If the transition is not set by transitionSet(), acquiring the state will fail.
Referenced by RTT::StateMachine::requestState().
This was added for extra (non-user visible) initialisation when the StateMachine is activated.
Definition at line 549 of file StateMachine.hpp.
Set the name of this machine.
If recursive == true, this also sets subMachines' names, to the given name + "." + the name they have been instantiated by in this machine.
Express a possible transition from one state to another under a certain condition.
Express a possible transition from one state to another under a certain condition.
A map keeping track of all events of a state.
Not all states need to be present as a key in this map.
Definition at line 636 of file StateMachine.hpp.
A map keeping track of all preconditions of a state.
Not all states need to be present as a key in this map.
Definition at line 630 of file StateMachine.hpp.
A map keeping track of all States and conditional transitions between two states.
Every state of this StateMachine must be present as a key in this map.
Definition at line 624 of file StateMachine.hpp.
#include <genesis/tree/formats/newick/element.hpp>
Store the information for one element of a Newick tree.
Most of the class' members are public, as it is intended to serve as an intermediate data exchange format, so different callers might need to modify its content. However, this means paying attention when working with the data, as it can be changed from anywhere.
See NewickBroker class for a description of this intermediate format.
Definition at line 60 of file element.hpp.
Constructor, initializes the item values.
Definition at line 72 of file element.hpp.
Return whether this is an inner node, i.e., not a leaf node.
Definition at line 161 of file element.hpp.
Return whether this is a leaf node.
Definition at line 150 of file element.hpp.
Return whether this is the root node of the tree.
Definition at line 142 of file element.hpp.
Returns the rank (number of immediate children) of this node.
NewickBroker::assign_ranks() has to be called before using this function. Otherwise, this function will throw an std::logic_error.
Definition at line 131 of file element.hpp.
Arbitrary strings that can be attached to a node, e.g. in Newick format via "[]".
Definition at line 114 of file element.hpp.
Depth of the node in the tree, i.e. its distance from the root.
Definition at line 119 of file element.hpp.
Name of the node.
In case it is a leaf, this is usually the name of the taxon represented by the node. Internal nodes are named "Internal Node" in case no name is specified in the Newick format; the same applies to the (possibly virtual) root, which is named "Root Node" by default.
Definition at line 96 of file element.hpp.
Definition at line 62 of file element.hpp.
Arbitrary strings that can be attached to a node, e.g. in Newick format via "{}".
Definition at line 109 of file element.hpp.
Numerical values associated with the node, i.e. branch lengths.
In cases where the values need to be interpreted as edge values, this is the edge leading to this node's parent.
Definition at line 104 of file element.hpp.
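The fields and queries documented above can be summarized in a small Python mock-up (the actual class is C++ in genesis; the member names follow the documentation, but the Python code itself is illustrative only):

```python
class NewickBrokerElement:
    """Toy mirror of the documented element: name, depth, branch values,
    "{}" tags, "[]" comments, and a rank that must be assigned first."""

    def __init__(self, name="", depth=-1):
        self.name = name        # taxon name, "Internal Node", or "Root Node"
        self.depth = depth      # distance from the root
        self.values = []        # numerical values, i.e. branch lengths
        self.tags = []          # arbitrary strings attached via "{}"
        self.comments = []      # arbitrary strings attached via "[]"
        self.rank = None        # number of immediate children, once assigned

    def rank_or_raise(self):
        # Mirrors the std::logic_error thrown before assign_ranks() is called.
        if self.rank is None:
            raise RuntimeError("call assign_ranks() before querying the rank")
        return self.rank

    def is_leaf(self):
        return self.rank_or_raise() == 0

    def is_inner(self):
        return not self.is_leaf()

    def is_root(self):
        return self.depth == 0
```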
http://doc.genesis-lib.org/structgenesis_1_1tree_1_1_newick_broker_element.html
blkg->key = cfqd is an rcu protected pointer and hence we used to do
call_rcu(cfqd->rcu_head) to free up cfqd after one rcu grace period.
The problem here is that even though cfqd is around, there are no
guarantees that the associated request queue (td->queue) or q->queue_lock
is still around. A driver might have called blk_cleanup_queue() and
released the lock.

It might happen that after freeing up the lock we call
blkg->key->queue->queue_lock and crash. This is possible in the following
path.

blkiocg_destroy()
  blkio_unlink_group_fn()
    cfq_unlink_blkio_group()

Hence, wait for an rcu period if there are groups which have not
been unlinked from blkcg->blkg_list. That way, any groups which are
taking the cfq_unlink_blkio_group() path can safely take the queue
lock.

This is how we have taken care of the race in the throttling logic also.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |   48 ++++++++++++++++++++++++++++++++++++------------
 1 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a905b55..e2e6719 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -300,7 +300,9 @@ struct cfq_data {
 	/* List of cfq groups being managed on this device*/
 	struct hlist_head cfqg_list;
-	struct rcu_head rcu;
+
+	/* Number of groups which are on blkcg->blkg_list */
+	unsigned int nr_blkcg_linked_grps;
 };
 
 static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd);
@@ -1063,6 +1065,7 @@ static struct cfq_group *cfq_find_alloc_cfqg(struct cfq_data *cfqd,
 	cfq_blkiocg_add_blkio_group(blkcg, &cfqg->blkg, (void *)cfqd, 0);
 
+	cfqd->nr_blkcg_linked_grps++;
 	cfqg->weight = blkcg_get_weight(blkcg, cfqg->blkg.dev);
 
 	/* Add group on cfqd list */
@@ -3815,15 +3818,11 @@ static void cfq_put_async_queues(struct cfq_data *cfqd)
 		cfq_put_queue(cfqd->async_idle_cfqq);
 }
 
-static void cfq_cfqd_free(struct rcu_head *head)
-{
-	kfree(container_of(head, struct cfq_data, rcu));
-}
-
 static void cfq_exit_queue(struct elevator_queue *e)
 {
 	struct cfq_data *cfqd = e->elevator_data;
 	struct request_queue *q = cfqd->queue;
+	bool wait = false;
 
 	cfq_shutdown_timer_wq(cfqd);
 
@@ -3842,7 +3841,13 @@ static void cfq_exit_queue(struct elevator_queue *e)
 	cfq_put_async_queues(cfqd);
 	cfq_release_cfq_groups(cfqd);
-	cfq_blkiocg_del_blkio_group(&cfqd->root_group.blkg);
+
+	/*
+	 * If there are groups which we could not unlink from blkcg list,
+	 * wait for a rcu period for them to be freed.
+	 */
+	if (cfqd->nr_blkcg_linked_grps)
+		wait = true;
 
 	spin_unlock_irq(q->queue_lock);
 
@@ -3852,8 +3857,20 @@ static void cfq_exit_queue(struct elevator_queue *e)
 	ida_remove(&cic_index_ida, cfqd->cic_index);
 	spin_unlock(&cic_index_lock);
 
-	/* Wait for cfqg->blkg->key accessors to exit their grace periods. */
-	call_rcu(&cfqd->rcu, cfq_cfqd_free);
+	/*
+	 * Wait for cfqg->blkg->key accessors to exit their grace periods.
+	 * Do this wait only if there are other unlinked groups out
+	 * there. This can happen if cgroup deletion path claimed the
+	 * responsibility of cleaning up a group before queue cleanup code
+	 * get to the group.
+	 *
+	 * Do not call synchronize_rcu() unconditionally as there are drivers
+	 * which create/delete request queue hundreds of times during scan/boot
+	 * and synchronize_rcu() can take significant time and slow down boot.
+	 */
+	if (wait)
+		synchronize_rcu();
+
+	kfree(cfqd);
 }
 
 static int cfq_alloc_cic_index(void)
@@ -3909,14 +3926,21 @@ static void *cfq_init_queue(struct request_queue *q)
 
 #ifdef CONFIG_CFQ_GROUP_IOSCHED
 	/*
-	 * Take a reference to root group which we never drop. This is just
-	 * to make sure that cfq_put_cfqg() does not try to kfree root group
+	 * Set root group reference to 2. One reference will be dropped when
+	 * all groups on cfqd->cfqg_list are being deleted during queue exit.
+	 * Other reference will remain there as we don't want to delete this
+	 * group as it is statically allocated and gets destroyed when
+	 * throtl_data goes away.
 	 */
-	cfqg->ref = 1;
+	cfqg->ref = 2;
 	rcu_read_lock();
 	cfq_blkiocg_add_blkio_group(&blkio_root_cgroup, &cfqg->blkg,
 					(void *)cfqd, 0);
 	rcu_read_unlock();
+	cfqd->nr_blkcg_linked_grps++;
+
+	/* Add group on cfqd->cfqg_list */
+	hlist_add_head(&cfqg->cfqd_node, &cfqd->cfqg_list);
 #endif
 	/*
 	 * Not strictly needed (since RB_ROOT just clears the node and we
-- 
1.7.1
http://lkml.org/lkml/2011/5/18/366
Make IDA Pro Great Again
In this post I will explain a neat way to use our own Python, Qt 5.6 and PyQt 5.6 versions with IDA Pro, then how to run IDA Pro in a chrooted 32-bit Arch Linux so as not to pollute our host with pesky lib32-XX packages, and finally I will document the installation and configuration of some useful IDA Pro plugins.
Introduction
Why not use already-available techniques such as Arybo, using rpyc to tunnel RPC calls to a 64-bit Python, or idalink, or installing a pip package and using it from IDA on a 64-bit machine? The reason is simple: I don't want to pollute my host with a ton of annoying pip packages, lib32-xx packages and so on. I also want to use some cool IDA Pro plugins like keypatch, ipyida and idaemu.
Note: This entire procedure was done on a 64-bit Arch Linux OS. It could easily be applied to Ubuntu or other distributions.
Let’s get our hands dirty !!
Install and configure a chroot and schroot environment
Arch Linux 32 bit chroot
Many of these tricks were taken from the Arch Linux Wiki.
Install a few utilities and schroot
sudo pacman -S arch-install-scripts schroot
mkdir /opt/arch32
Configure custom pacman.conf
If you are using multilib you need to remove it from your custom pacman.conf.
Configure a custom pacman.conf without multilib support: Copy the default one and remove:
[multilib]
Include = /etc/pacman.d/mirrorlist
Create the chrooted installation of Arch 32bit
Note: There are some packages that are not needed for a normal installation of IDA Pro without plugins. I found it faster to install them during the pacstrap rather than from the chroot.
linux32 pacstrap -C path/to/pacman.conf -di /opt/arch32 base base-devel zlib libxext libxrender libsm libice glibc glib2 fontconfig freetype2 python2-keystone python2 python2-jupyter_client python2-ipykernel libxkbcommon-x11 libxkbcommon cmocka gtk2 p7zip wget python2-pip git
Configure our newly created chroot for users/network
sudo su
cd /etc
for i in passwd* shadow* group* sudoers resolv.conf localtime locale.gen vimrc inputrc profile.d/locale.sh; do
    cp -p /etc/"$i" /opt/arch32/etc/
done
Configure schroot
Add the chrooted env in /etc/schroot/schroot.conf
[Arch32]
type=directory
profile=arch32
description=Arch32
directory=/opt/arch32
users=youruser
groups=users
root-groups=root
personality=linux32
aliases=32,default
Configure schroot to run properly with ipython and jupyter
Edit /etc/schroot/arch32/mount and add:
/run/user/1000 /run/user/1000 none rw,bind 0 0
Configuration of schroot is done. We need to change some stuff in order to make IDA Pro work with our own Python.
Configure X and IDA Pro in the chroot
X Stuff
Give access to your X for the chroot
xhost +local:
Find the display ID of your current X session (keep it for later)
echo $DISPLAY
Configure IDA pro to use our own version of python
Enter the chroot
sudo linux32 arch-chroot /opt/arch32
Change terminal in order to have proper auto-completion
export TERM=xterm
OPTIONAL - Add some fonts to the chroot so the IDA Pro GUI displays properly:
If you are like me and you like to change your font in your X, don’t forget to install it in the chroot in order to prevent having weird characters in IDA Pro.
pacman -S adobe-source-code-pro-fonts xorg-fonts-type1 ttf-dejavu artwiz-fonts font-bh-ttf \ font-bitstream-speedo gsfonts sdl_ttf ttf-bitstream-vera \ ttf-cheapskate ttf-liberation \ ttf-freefont ttf-arphic-uming ttf-baekmuk
Configure your chroot to use the Xserver of your host
export DISPLAY=:0
Where 0 is the ID of your display as retrieved earlier.
Download, Patch, Compile and Install Qt
Download Qt 5.6 sources:
wget
Extract, configure and build it:
7z x qt-everywhere-opensource-src-5.6.0.7z
Download and Apply Hex-Rays patch:
cd qt-everywhere-opensource-src-5.6.0
wget
7z x qt-5_6_0_full.zip
patch -p1 < qt-5_6_0_full.patch
Symlink python2 to python to solve Qt make install error:
ln -s /usr/bin/python2 /usr/bin/python
Configure, compile and install:
./configure -nomake tests -qtnamespace QT -confirm-license -accessibility -opensource -force-debug-info -developer-build -fontconfig -qt-freetype -qt-libpng -glib -qt-xcb -dbus -qt-sql-sqlite -gtkstyle
make -j9
make install
Download, Compile and Install QtSvg 5.6
Starting with Qt 5.1, QtSvg has been moved to a standalone package.
Download the package:
wget
7z x qtsvg-opensource-src-5.6.0.7z
cd qtsvg-opensource-src-5.6.0
Qmake, make and install Qtsvg 5.6:
../qt-everywhere-opensource-src-5/qtbase/bin/qmake
make -j5
make install
Download, Compile and Install SIP
Download SIP 4.18 sources:
wget
tar xvf sip-4.18.tar.gz
cd sip-4.18
Configure, Compile and install Sip 4.18:
python2 configure.py
make -j9
make install
Download, Patch, Compile and Install PyQt 5.6
Download PyQt 5.6:
wget
tar xvf PyQt5_gpl-5.6.tar.gz
cd PyQt5_gpl-5.6
Download my PyQt patch (with the help of Hex-Rays):
wget
patch -p1 < pyqt.patch
configure, Compile and Install PyQt 5.6:
python2 configure.py \
    --sip /root/build/qt/sip-4.18/sipgen/sip \
    --sip-incdir /root/build/qt/sip-4.18/siplib \
    --confirm-license \
    --enable QtCore \
    --enable QtGui \
    --enable QtWidgets \
    --enable QtSvg \
    --no-designer-plugin \
    --no-qml-plugin \
    --no-tools \
    --verbose \
    --qmake /root/build/qt/qt56/qt-everywhere-opensource-src-5.6.0/qtbase/bin/qmake
make -j9
make install
Install IDA Pro
Run the IDA Pro installer. When installing IDA, don't forget to choose "no" when the installer asks whether to install the bundled version of Python.
Configure IDA pro to use our own python and Qt suite
cd /opt/ida-6.95
rm -r libQt5*
cp /root/build/qt/qt56/qt-everywhere-opensource-src-5.6.0/qtbase/lib/libQt5{CLucene,Core,DBus,Gui,Help,Network,PrintSupport,Sql,Widgets,XcbQpa}.so.5 .
cd python
rm -r PyQt5
rm -r sip-files
cd lib
rm python27.zip
mv python2.7 python_old
ln -s /usr/lib/python2.7 .
cp -r python_old/lib-dynload/ida_* /usr/lib/python2.7/lib-dynload
rm -r python_old
Exit from the chroot, very important
exit
WARNING - Don’t run schroot when chrooting with linux32, always leave the chroot with exit before schrooting.
WARNING - Rename your ~/.idapro and remove all plugins before launching IDA Pro for the first time.
Use schroot to launch our fully chrooted IDA (with access to the host home directory of the user in the schroot.conf)
schroot -p /opt/ida-6.95/idaq
BONUS - Install and configure useful plugins for IDA pro
I’m a big fan of IPython for auto-completion and rapid scripting of python snippets. I often patch binaries, but the patching function in IDA Pro is incomplete. I like to use unicorn-engine to emulate weird code as well.
Install, configure and patch ipyida
When I was using IDA Pro under Windows, one of my favorite plugins was ida_ipython. Unfortunately this plugin is Windows-only. Marc-Etienne from ESET developed a similar plugin, this time available on Windows, Linux and Mac OS X.
Install qtconsole
In the chroot again:
sudo linux32 arch-chroot arch_32_chroot
pip2 install qtconsole
Installation of ipyida with jupyter_support
Not in the chroot:
cd ~/.idapro
cd plugins
git clone
cd ipyida
git checkout jupyter_support
cd ..
mv ipyida ipyida_temp
Install it
mv ipyida_temp/ipyida ~/.idapro/plugins
mv ipyida_temp/ipyida_plugin_stub.py ~/.idapro/plugins/ipyida
rm -r ipyida_temp
Install keypatch in order to patch binaries, using assembly language
wget
Note - Keypatch needs keystone (installed during the pacstrap).
Install Unicorn-Engine with the idaemu script to emulate things
Back in our chroot to install and configure Unicorn-Engine
WARNING - Don’t chroot with linux32 if IDA Pro is running in the schroot, quit IDA Pro first.
sudo linux32 arch-chroot arch_32_chroot
cd /root
mkdir build
Little trick to be able to run makepkg as root (normally not allowed):
Makepkg cannot be run as root for security reasons. Here is a little trick to be able to package as root.
chgrp nobody /root/build
chmod g+ws /root/build
setfacl -m u::rwx,g::rwx /root/build
setfacl -d --set u::rwx,g::rwx,o::- /root/build
Create unicorn-engine package
git clone
cd unicorn-git
sudo -u nobody makepkg
Install it
pacman -U unicorn-xxx.pkg.tar.xz
pacman -U python2-unicorn-xxx.pkg.tar.xz
Fix weird issue on unicorn egg file
chown -R youruser:youruser /usr/lib/python2.7
PS: I know it’s hacky, but it’s a chroot so who cares…
Quit the chroot
exit
Install idaemu plugins to use unicorn in IDA Pro
wget
BONUS2 - Adding our coloring theme to the chrooted IDA Pro
I like to use the consonance color theme for my IDA Pro. If you already applied a theme on your host, just copy the /opt/ida-xx/idacolor.cf to the /opt/ida-xx on your chroot.
Wrap up and short demo
This took me a lot of research; for example, compiling Qt/PyQt was a lot of pain. In the end I just removed those dependencies from the ipyida plugin.
Here is a little video of the ipyida plugin:
All the resources for this post are available here.
I would like to thank sh4ka and kamino for tolerating my raging about this f** Qt/PyQt nightmWare while helping me with some Python stuff and providing useful links, and Arnaud Diederen from the IDA Pro team for helping me patch and compile Qt, PyQt and SIP.
If you have questions I’m available on IRC @freenode and on twitter @dummys1337.
https://duksctf.github.io/2017/03/15/Make-IDA-Pro-Great-Again.html
Struts Flow Diagram Step By Step

In this section we will read about the flow of a Struts application. Below is the diagram of the Struts 2 architecture. In a Struts application, the work flow starts with a request coming from a resource such as a JSP page.
http://roseindia.net/discussion/49000-Struts-Flow-Diagram-Step-By-Step.html
Math can be intimidating.
Depending on the teacher and how it is taught, it can be an infuriating combination of inscrutable and boring.
But, there’s a beauty to math—a symmetry to the intelligence and logic behind numbers. I love math, and I want other people to love it too.
One neat way to make math more approachable and show its beauty visually is to combine it with something called “generative art.” Generative art is where you create a few, usually simple rules which are often math- or geometry-based, and then you tell a computer to process these rules.
Since computers are great for processing instructions quickly, and on a much greater scale than a human could, the designs that are created are often more complex and interesting than you might expect from such simple rules.
This example shows a bunch of floating particles that move with a mesmerizingly natural motion. They float around, link up with other particles, and change directions all on their own. It’s a variation on a “flocking algorithm,” and the amazing thing is that most of this natural motion comes from simply having each particle follow the rules of “don’t run into anybody,” “stay with the flock,” and “go in roughly the same direction as those near you.”
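To make the "go in roughly the same direction as those near you" rule concrete, here is one minimal way the alignment step might be written (the function name and the 0.1 steering factor are invented for this sketch):

```python
def align(velocity, neighbour_velocities, factor=0.1):
    """Nudge a particle's 2-D velocity toward the flock's average velocity."""
    if not neighbour_velocities:
        return velocity  # no neighbours: keep flying as before
    avg_x = sum(v[0] for v in neighbour_velocities) / len(neighbour_velocities)
    avg_y = sum(v[1] for v in neighbour_velocities) / len(neighbour_velocities)
    # Move only a fraction of the way, so the motion stays smooth.
    return (velocity[0] + factor * (avg_x - velocity[0]),
            velocity[1] + factor * (avg_y - velocity[1]))
```

Applied every frame to every particle, together with the "don't run into anybody" and "stay with the flock" rules, this tiny nudge is what produces the natural-looking drift.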
Python is a great option for creating these generative art projects; it is used by data scientists, mathematicians, and engineers (among many others) as an open source option for processing numerical calculations and generating visualizations.
It is also extremely easy to read and write clear code, which makes it an ideal language for outlining the simple rules needed to create this generative art.
One of the simplest mathematical constructs you can create with these rules is an integer sequence, which is an ordered list of integers (whole numbers, including: positive, negative, and zero). Usually, the relationship between these integers is spelled out in some sort of logical way with a set of rules in order to help someone figure out what the next number in the pattern is.
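For instance, the familiar rule "each term is the sum of the two before it" (the Fibonacci sequence) takes only a few lines to spell out:

```python
def fibonacci(count):
    """Return the first `count` Fibonacci terms, starting from 0."""
    terms = []
    a, b = 0, 1
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```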
In this article, we are going to take the mathematics of integer sequences, supercharge them with the power of programming in Python, and use them to create some interesting and beautiful generative art.
It is a fun exercise, and the calculations and code samples are simple enough that they should be engaging for programmers, mathematicians, and artists alike.
This is not a beginner’s Python article, though, so I will assume you have some familiarity with Python’s syntax—or, at least, a willingness to pick it up as you go along—as well as how to run Python programs. If you’re not sure how to run them, and you’d like to write code in an application with a big giant “Play” button, the Mu text editor is great for beginners.
The particular sequence I want to talk about this time is the Recamán sequence. The rules are deceptively simple, but when the numbers are given visual or auditory shape, the results can be interesting and even a little spooky.

The Recamán Sequence
Here are the rules:

- Start at zero; your first step size is 1.
- At each step, try to step backward (current position minus step size). A backward step only counts if the result is positive and is a number you haven't landed on before.
- If you can't step backward, step forward (current position plus step size) instead.
- After each step, increase the step size by one.
Let’s do the first few as examples.
We start at zero.
The next step size will be 1. Stepping backward would put us at -1, which is not allowed, so we’ll step forward.
The next step size is 2. Stepping backward would put us at -1. That’s still not allowed, so forward we must go.
The next step size is 3. Stepping backward puts us at 0. Since we’ve already been to 0 (our first starting point), this is not a valid move. I promise the sequence gets interesting soon, but for now, we step forward.
The next step size is 4. Stepping backward lands us at 2. Since 2 is positive, and we haven’t seen it yet, we can take our first legitimate backward step!
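Before handing these rules to a turtle, we can sanity-check them numerically. This little helper (my own naming, not part of the article's final script) reproduces the terms we just worked out by hand:

```python
def recaman(count):
    """Return the first `count` terms of the Recamán sequence."""
    terms = [0]
    seen = {0}
    current = 0
    for step_size in range(1, count):
        backwards = current - step_size
        # Step backward only if that lands on a positive, unvisited number.
        if backwards > 0 and backwards not in seen:
            current = backwards
        else:
            current += step_size
        seen.add(current)
        terms.append(current)
    return terms

print(recaman(10))  # [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```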
Hopefully you’re beginning to see how the rules work. Now, this is kind of interesting to think about, but I’m not sure I would call a list of five numbers beautiful. That’s where we’ll lean a little bit harder on art and code. Luckily Python provides both of these to us in a fun and adorable module in its Standard Library: Turtle.

Introducing Turtle Graphics
Turtle Graphics was originally a key feature of the Logo programming language.
It’s a relatively simple Domain-Specific Language (DSL) where there is an avatar—traditionally shaped like a little turtle—on the screen, and you give it instructions on where to go: forward, left, or right. As it moves, it draws a line wherever it goes with its tail, although you can tell it to pick its tail up or put it down as necessary. It can even jump positions and change colors!
Python also includes a “turtle” library packaged along with it in its standard library. Let’s take a look at a sample program to see how it works. Create a file called “example1.py” with the following contents.
import turtle
window = turtle.Screen()
joe = turtle.Turtle()
joe.forward(50)
joe.left(90)
joe.forward(100)
turtle.done()
Here are the important bits:
If you run your code, you should see something like the following:
Side note: If you want the real, old-fashioned Logo experience, you can add joe.shape("turtle") to your code, right under the line where you define "joe." Isn’t he cute?
Okay, now that you’ve seen what “turtle” is all about, let’s get back to the sequence we were working on.

Coding the Sequence
Like anything good, we’re definitely not going to get a perfect result on the first go. We’ll need to do some iteration. I’ll take you through three passes at this art project, and each one will get a little more complicated and a little more visually interesting. After that, I have some potential ideas for further iteration that you can try if this gets your creative spark going. Let’s get to it!
The code that we write will be very similar to the English we would use to describe the steps for the sequence. Remember the two rules: Go backward when possible, otherwise go forward, and increase the step size by one after each step.
Create a new file named “recaman1.py.” We’ll start with those basic rules and then figure out how to make it actually work. I’m naming our new turtle Euler, after some guy.

import turtle

window = turtle.Screen()

euler = turtle.Turtle()

current = 0
seen = set()

for step_size in range(1, 100):
    backwards = current - step_size

    # Go backwards if it's positive and we haven't been there before
    if backwards > 0 and backwards not in seen:
        euler.backward(step_size)
        current = backwards
        seen.add(current)
    # Otherwise, go forwards
    else:
        euler.forward(step_size)
        current += step_size
        seen.add(current)

turtle.done()
However, when we run it, it doesn’t look very good. In fact, it looks like maybe somebody gave Euler too much coffee.
So, that was pretty reasonable. The code seems to read just like we might explain it to someone, which is good.
I’m afraid that the linear motion is just a little boring, though. This is when we want to put our artists’ caps on (do artists wear caps?) and figure out a little more creative way to get Euler from point A to point B.
Luckily turtles don’t have to move in straight lines! They can also move in arcs and circles. Let’s have him bounce from spot to spot on the number line! To make a turtle draw a circle or partial arc, we’ll use the “circle” command, which causes the turtle to follow a circle where the imaginary center is “radius” units to the turtle’s left.
That means we’ll have to orient our turtle before drawing, depending on whether he’s going forward or backward, using the “setheading” command.
Remember that you can find all the turtle commands in the official documentation, just in case you’re curious.

import turtle

window = turtle.Screen()

euler = turtle.Turtle()

current = 0
seen = set()

for step_size in range(1, 100):
    backwards = current - step_size

    # Go backwards if we can
    if backwards > 0 and backwards not in seen:
        euler.setheading(90)  # 90 degrees is straight up
        euler.circle(step_size/2, 180)  # 180 degrees means "draw a semicircle"
        current = backwards
        seen.add(current)
    # Otherwise, go forwards
    else:
        euler.setheading(270)  # 270 degrees is straight down
        euler.circle(step_size/2, 180)
        current += step_size
        seen.add(current)

turtle.done()
That’s neat, but for the first little while, it seems like he just wiggles around in one place and the lines are very close together. Also, he’s not going to be using the whole left half of the screen!
Let’s do one more iteration together, where we make it even nicer to look at.
The goals for this iteration are to make the picture bigger, and to give him more room to work.
import turtle
window = turtle.Screen()

# Move the little buddy over to the left side to give him more room to work
window.setup(width=800, height=600, startx=10, starty=0.5)
euler = turtle.Turtle() # A good mathy name for our turtle
euler.shape("turtle")
scale = 5 # This isn't a turtle module setting. This is just for us.
euler.penup()
euler.setpos(-390, 0)
euler.pendown()

current = 0
seen = set()

for step_size in range(1, 100):
    backwards = current - step_size

    # Go backwards if we can
    if backwards > 0 and backwards not in seen:
        euler.setheading(90)  # 90 degrees is straight up
        euler.circle(scale * step_size/2, 180)  # 180 degrees means "draw a semicircle"
        current = backwards
        seen.add(current)
    # Otherwise, go forwards
    else:
        euler.setheading(270)  # 270 degrees is straight down
        euler.circle(scale * step_size/2, 180)
        current += step_size
        seen.add(current)
turtle.done()
As you can see, we’ve added a scaling factor which you can tune to whatever you think works best. I arrived at this value by trying a couple and picking my favorite. We also shifted him over so he starts at the left side of the screen. Since he can never go negative, we know he will only go right from wherever he starts.
By now, I think you get the gist, and you’re hopefully starting to see the magic of integer sequences: out of a few simple rules (and with the help of a tireless reptile assistant), you can make some truly interesting and captivating results.
You’ve got all the tools you need to do something even cooler. Here are some ideas to get your creative juices flowing:
If you found the Recamán sequence particularly fascinating, you’re not alone. There are a ton of different incarnations out there people have created, combining the lines with color, sound, and more. Here are a few of my favorite.
This slightly spooky version with sound:
This Numberphile video about the sequence is really interesting:
The Coding Train also did a couple videos on coding this up if you want a cool video walkthrough with an amazing teacher and don’t mind writing JavaScript:
In fact, P5.js (and its Java-based predecessor, Processing) are both great alternatives for making art and animations with code; they come with dedicated editors to help you do that, and they allow support for sound and other add-ins!

Get Your Turtles in Gear
Hopefully this tutorial was enough to spark your interest in using code to generate art, and hopefully (if you were before), you are no longer too intimidated to look to mathematics as a source for your artistic ideas.
Geometry, never-ending constants, golden ratios, Fibonacci spirals, fractals, and number theory are all goldmines of awesome visual projects just waiting to be programmatically generated, and now you have all the tools you need to get your own set of turtles and start generating!
https://morioh.com/p/c8bd8f13a1b5
Dealing with WSDL file in different location than webservice
Discussion in 'ASP .Net Web Services' started by JerryK, Mar 2,21
http://www.thecodingforums.com/threads/dealing-with-wsdl-file-in-deifferent-location-than-webservice.784621/
Next Steps
The First Steps with Celery guide is intentionally minimal. In this guide I’ll demonstrate what Celery offers in more detail, including how to add Celery support for your application and library.
This document doesn’t cover all of Celery’s features and best practices, so it’s recommended that you also read the User Guide.
Using Celery in your Application
Our Project
Project layout:
proj/__init__.py
    /celery.py
    /tasks.py
proj/celery.py
from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='rpc://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
In this module you created our
Celery instance (sometimes
referred to as the app). To use Celery within your project
you simply import this instance.
The broker argument specifies the URL of the broker to use.
See Choosing a Broker for more information.
The backend argument specifies the result backend to use. It’s used to keep track of task state and results. While results are disabled by default, I use the RPC result backend here because I demonstrate later how retrieving results works; you may want to use a different backend for your application. They all have different strengths and weaknesses. If you don’t need results, it’s better to disable them. Results can also be disabled for individual tasks by setting the @task(ignore_result=True) option.
See Keeping Results for more information.
The include argument is a list of modules to import when the worker starts. You need to add our tasks module here so that the worker is able to find our tasks.
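The reason the worker must import your modules is that task registration happens at import time: the @app.task decorator records the function in the app's task registry. A toy model of that mechanism (not Celery's real implementation) makes the idea concrete:

```python
# Toy stand-in for a Celery app: the decorator simply records the
# function under its name, which is why the defining module must be
# imported before the worker can find the task.
class ToyApp:
    def __init__(self):
        self.tasks = {}

    def task(self, func):
        self.tasks[func.__name__] = func
        return func

app = ToyApp()

@app.task
def add(x, y):
    return x + y
```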
Starting the worker
The celery program can be used to start the worker (you need to run the worker in the directory above proj):
$ celery -A proj worker -l info
When the worker starts you should see a banner and some messages:
 -------------- celery@halcyon.local v4.0 (latentcall)
---- **** -----
--- * ***  * -- [Configuration]
-- * - **** ---
- ** ---------- . broker:      amqp://guest@localhost:5672//
- ** ---------- . app:         __main__:0x1012d8590
- ** ---------- . concurrency: 8 (processes)
- ** ---------- . events:      OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery:      exchange:celery(direct) binding:celery
--- ***** -----

[2012-06-08 16:23:51,078: WARNING/MainProcess] celery@halcyon.local has started.
– The broker is the URL you specified in the broker argument in our celery module; you can also specify a different broker on the command line by using the -b option.
– Concurrency is the number of prefork worker processes used to process your tasks concurrently; you can specify a custom number using the celery worker -c option. In addition to the default prefork pool, Celery also supports using Eventlet, Gevent, and running in a single thread (see Concurrency).
– Events is an option that when enabled causes Celery to send
monitoring messages (events) for actions occurring in the worker.
These can be used by monitor programs like
celery events,
and Flower - the real-time Celery monitor, that you can read about in
the Monitoring and Management guide.
– Queues is the list of queues that the worker will consume tasks from. The worker can be told to consume from several queues at once, and this is used to route messages to specific workers as a means for Quality of Service, separation of concerns, and prioritization, all described in the Routing Guide.
You can get a complete list of command-line arguments
by passing in the
--help flag:
$ celery worker --help
These options are described in more detail in the Workers Guide.
Stopping the worker¶
To stop the worker simply hit Control-c. A list of signals supported by the worker is detailed in the Workers Guide.
In the background¶
In production you'll want to run the worker in the background; this is described in detail in the daemonization tutorial.
The daemonization scripts use the celery multi command to start one or more workers in the background:
$ celery multi start w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Starting nodes...
    > w1.halcyon.local: OK
You can restart it too:
$ celery multi restart w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
    > w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64052
or stop it:
$ celery multi stop w1 -A proj -l info
The stop command is asynchronous so it won't wait for the worker to shut down. You'll probably want to use the stopwait command instead; this ensures all currently executing tasks are completed before exiting:
$ celery multi stopwait w1 -A proj -l info
Note
celery multi doesn't store information about workers, so you need to use the same command-line arguments when restarting. Also, the same pidfile and logfile arguments must be used when stopping.
By default it'll create pid and log files in the current directory. To protect against multiple workers launching on top of each other, you're encouraged to put these in a dedicated directory:
$ mkdir -p /var/run/celery
$ mkdir -p /var/log/celery
$ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid \
                                        --logfile=/var/log/celery/%n%I.log
With the multi command you can start multiple workers, and there’s a powerful command-line syntax to specify arguments for different workers too, for example:
$ celery multi start 10 -A proj -l info -Q:1-3 images,video -Q:4,5 data \
    -Q default -L:4,5 debug
For more examples see the
multi module in the API
reference.
About the
--app argument¶
The --app argument specifies the Celery app instance to use; it must be in the form of module.path:attribute.
But it also supports a shortcut form. If only a package name is specified, it'll try to search for the app instance, in the following order:
With
--app=proj:
- an attribute named proj.app, or
- an attribute named proj.celery, or
- any attribute in the module proj where the value is a Celery application, or
If none of these are found it’ll try a submodule named
proj.celery:
- an attribute named proj.celery.app, or
- an attribute named proj.celery.celery, or
- any attribute in the module proj.celery where the value is a Celery application.
This scheme mimics the practices used in the documentation – that is,
proj:app for a single contained module, and
proj.celery:app
for larger projects.
Calling Tasks¶
You can call a task using the
delay() method:
>>> add.delay(2, 2)
This method is actually a star-argument shortcut to another method called
apply_async():
>>> add.apply_async((2, 2))
The latter enables you to specify execution options like the time to run (countdown), the queue it should be sent to, and so on:
>>> add.apply_async((2, 2), queue='lopri', countdown=10)
In the above example the task will be sent to a queue named
lopri and the
task will execute, at the earliest, 10 seconds after the message was sent.
Applying the task directly will execute the task in the current process, so that no message is sent:
>>> add(2, 2)
4
These three methods - delay(), apply_async(), and applying (__call__) - represent the Celery calling API, which is also used for signatures.
A more detailed overview of the Calling API can be found in the Calling User Guide.
Every task invocation will be given a unique identifier (a UUID); this is the task id.
The delay and apply_async methods return an AsyncResult instance, which can be used to keep track of the task's execution state.
But for this you need to enable a result backend so that
the state can be stored somewhere.
Results are disabled by default because there's no result backend that suits every application; to choose one you need to consider the drawbacks of each individual backend. For many tasks keeping the return value isn't even very useful, so it's a sensible default to have. Also note that result backends aren't used for monitoring tasks and workers; for that Celery uses dedicated event messages (see Monitoring and Management Guide).
If you have a result backend configured you can retrieve the return value of a task:
>>> res = add.delay(2, 2)
>>> res.get(timeout=1)
4
You can find the task’s id by looking at the
id attribute:
>>> res.id
d6b3aea2-fb9b-4ebc-8da4-848818db9114
You can also inspect the exception and traceback if the task raised an
exception, in fact
result.get() will propagate any errors by default:
>>> res = add.delay(2)
>>> res.get(timeout=1)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/devel/celery/celery/result.py", line 113, in get
    interval=interval)
  File "/opt/devel/celery/celery/backends/rpc.py", line 138, in wait_for
    raise meta['result']
TypeError: add() takes exactly 2 arguments (1 given)
If you don’t wish for the errors to propagate then you can disable that
by passing the
propagate argument:
>>> res.get(propagate=False)
TypeError('add() takes exactly 2 arguments (1 given)',)
In this case it'll return the exception instance raised instead, so to check whether the task succeeded or failed you'll have to use the corresponding methods on the result instance:
>>> res.failed()
True

>>> res.successful()
False
So how does it know if the task has failed or not? It can find out by looking at the task's state:
>>> res.state
'FAILURE'
A task can only be in a single state, but it can progress through several states. The stages of a typical task can be:
PENDING -> STARTED -> SUCCESS
The started state is a special state that’s only recorded if the
task_track_started setting is enabled, or if the
@task(track_started=True) option is set for the task.
The pending state is actually not a recorded state, but rather the default state for any task id that's unknown, as you can see in this example:
>>> from proj.celery import app

>>> res = app.AsyncResult('this-id-does-not-exist')
>>> res.state
'PENDING'
If the task is retried the stages can become even more complex. To demonstrate, for a task that’s retried two times the stages would be:
PENDING -> STARTED -> RETRY -> STARTED -> RETRY -> STARTED -> SUCCESS
To read more about task states you should see the States section in the tasks user guide.
Calling tasks is described in detail in the Calling Guide.
Canvas: Designing Work-flows¶
You just learned how to call a task using the task's delay method, and this is often all you need. But sometimes you may want to pass the signature of a task invocation to another process or as an argument to another function; for this Celery uses something called signatures.
A signature wraps the arguments and execution options of a single task invocation in a way such that it can be passed to functions or even serialized and sent across the wire.
You can create a signature for the
add task using the arguments
(2, 2),
and a countdown of 10 seconds like this:
>>> add.signature((2, 2), countdown=10)
tasks.add(2, 2)
There’s also a shortcut using star arguments:
>>> add.s(2, 2)
tasks.add(2, 2)
And there’s that calling API again…¶
Signature instances also support the calling API, meaning they have the delay and apply_async methods.
But there’s a difference in that the signature may already have
an argument signature specified. The
add task takes two arguments,
so a signature specifying two arguments would make a complete signature:
>>> s1 = add.s(2, 2)
>>> res = s1.delay()
>>> res.get()
4
But, you can also make incomplete signatures to create what we call partials:
# incomplete partial: add(?, 2)
>>> s2 = add.s(2)
s2 is now a partial signature that needs another argument to be complete,
and this can be resolved when calling the signature:
# resolves the partial: add(8, 2)
>>> res = s2.delay(8)
>>> res.get()
10
Here you added the argument 8 that was prepended to the existing argument 2
forming a complete signature of
add(8, 2).
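This partial-signature behavior is loosely analogous to functools.partial from the standard library, with one difference worth noting: partial appends later arguments, while a Celery signature prepends them (which is why s2.delay(8) runs add(8, 2)). A quick sketch of the analogy, using plain functions with no Celery involved:

```python
from functools import partial


def add(x, y):
    return x + y


# Like s2 = add.s(2): a callable still waiting for one more argument.
s2 = partial(add, 2)

# partial appends: this runs add(2, 8). Celery's s2.delay(8) would
# instead prepend, running add(8, 2) -- same result here only because
# addition is commutative.
print(s2(8))  # → 10
```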
Keyword arguments can also be added later; these are then merged with any existing keyword arguments, with new arguments taking precedence:
>>> s3 = add.s(2, 2, debug=True)
>>> s3.delay(debug=False)   # debug is now False.
As stated, signatures support the calling API, meaning that:
sig.apply_async(args=(), kwargs={}, **options)
Calls the signature with optional partial arguments and partial keyword arguments. Also supports partial execution options.
sig.delay(*args, **kwargs)
Star argument version of apply_async. Any arguments will be prepended to the arguments in the signature, and keyword arguments are merged with any existing keys.
So this all seems very useful, but what can you actually do with these? To get to that I must introduce the canvas primitives…
The Primitives¶
These primitives are signature objects themselves, so they can be combined in any number of ways to compose complex work-flows.
Note
These examples retrieve results, so to try them out you need
to configure a result backend. The example project
above already does that (see the backend argument to
Celery).
Let’s look at some examples:
Groups¶
A
group calls a list of tasks in parallel,
and it returns a special result instance that lets you inspect the results
as a group, and retrieve the return values in order.
>>> from celery import group
>>> from proj.tasks import add

>>> group(add.s(i, i) for i in xrange(10))().get()
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
- Partial group
>>> g = group(add.s(i) for i in xrange(10))
>>> g(10).get()
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
Chains¶
Tasks can be linked together so that after one task returns the other is called:
>>> from celery import chain
>>> from proj.tasks import add, mul

# (4 + 4) * 8
>>> chain(add.s(4, 4) | mul.s(8))().get()
64
or a partial chain:
>>> # (? + 4) * 8
>>> g = chain(add.s(4) | mul.s(8))
>>> g(4).get()
64
Chains can also be written like this:
>>> (add.s(4, 4) | mul.s(8))().get()
64
Chords¶
A chord is a group with a callback:
>>> from celery import chord
>>> from proj.tasks import add, xsum

>>> chord((add.s(i, i) for i in xrange(10)), xsum.s())().get()
90
A group chained to another task will be automatically converted to a chord:
>>> (group(add.s(i, i) for i in xrange(10)) | xsum.s())().get()
90
Since these primitives are all of the signature type they can be combined almost however you want, for example:
>>> upload_document.s(file) | group(apply_filter.s() for filter in filters)
Be sure to read more about work-flows in the Canvas user guide.
Routing¶
Celery supports all of the routing facilities provided by AMQP, but it also supports simple routing where messages are sent to named queues.
The
task_routes setting enables you to route tasks by name
and keep everything centralized in one location:
app.conf.update(
    task_routes = {
        'proj.tasks.add': {'queue': 'hipri'},
    },
)
You can also specify the queue at runtime
with the
queue argument to
apply_async:
>>> from proj.tasks import add
>>> add.apply_async((2, 2), queue='hipri')
You can then make a worker consume from this queue by
specifying the
celery worker -Q option:
$ celery -A proj worker -Q hipri
You may specify multiple queues by using a comma-separated list. For example, you can make the worker consume from both the default queue and the hipri queue, where the default queue is named celery for historical reasons:

$ celery -A proj worker -Q hipri,celery
To learn more about routing, including taking use of the full power of AMQP routing, see the Routing Guide.
Remote Control¶
If you’re using RabbitMQ (AMQP), Redis, or Qpid as the broker then you can control and inspect the worker at runtime.
For example you can see what tasks the worker is currently working on:
$ celery -A proj inspect active
This is implemented by using broadcast messaging, so all remote control commands are received by every worker in the cluster.
You can also specify one or more workers to act on the request
using the
--destination option.
This is a comma separated list of worker host names:
$ celery -A proj inspect active --destination=[email protected]
If a destination isn’t provided then every worker will act and reply to the request.
The celery inspect command contains commands that don't change anything in the worker; they only reply with information and statistics about what's going on inside the worker. For a list of inspect commands you can execute:
$ celery -A proj inspect --help
Then there's the celery control command, which contains commands that actually change things in the worker at runtime:
$ celery -A proj control --help
For example you can force workers to enable event messages (used for monitoring tasks and workers):
$ celery -A proj control enable_events
When events are enabled you can then start the event dumper to see what the workers are doing:
$ celery -A proj events --dump
or you can start the curses interface:
$ celery -A proj events
When you're finished monitoring you can disable events again:
$ celery -A proj control disable_events
The celery status command also uses remote control commands and shows a list of online workers in the cluster:
$ celery -A proj status
You can read more about the celery command and monitoring in the Monitoring Guide.
Timezone¶
All times and dates, internally and in messages, use the UTC timezone.
When the worker receives a message, for example with a countdown set it
converts that UTC time to local time. If you wish to use
a different timezone than the system timezone then you must
configure that using the
timezone setting:
app.conf.timezone = 'Europe/London'
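The UTC-to-local conversion the worker performs can be illustrated with the standard library (an analogy for clarity, not Celery's internal code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A hypothetical ETA stored in UTC, as Celery keeps it in messages.
utc_eta = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)

# Converting to the configured timezone; London is on BST (UTC+1) in June.
local_eta = utc_eta.astimezone(ZoneInfo("Europe/London"))
print(local_eta.hour)  # → 13
```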
Optimization¶
The default configuration isn't optimized for throughput; it tries to walk the middle way between many short tasks and few long tasks, a compromise between throughput and fair scheduling.
If you have strict fair scheduling requirements, or want to optimize for throughput then you should read the Optimizing Guide.
If you’re using RabbitMQ then you can install the librabbitmq module: this is an AMQP client implemented in C:
$ pip install librabbitmq
What to do now?¶
Now that you have read this document you should continue to the User Guide.
There’s also an API reference if you’re so inclined.
https://docs.celeryproject.org/en/latest/getting-started/next-steps.html
Last week I tried to do something which I've been planning for quite some time: porting a Python program to Haskell. In case you didn't know, Haskell is a purely functional programming language that's recently become a hot favourite. It has a lot of cutting-edge ideas from the academic world, especially laziness and strong typing. It has an interesting way to solve the 'multi-CPU problem'.
Mars Rover is a famous programming problem used by Thoughtworks in their recruitments. I first solved the problem in Python and later attempted to solve the same in Haskell. I cannot say that I ported it from Python because the approach I’ve used is completely different.
The 'M', 'L' and 'R' Python solution
The Python solution is actually smaller than the problem itself. The readability isn't that great, but it is quite extensible. In fact, adding a new instruction like B(ackward) would need just one additional line. You can also extend the four cardinal directions to eight with minimal changes to the code.
dirs = "NESW" # Notations for directions
shifts=[(0,1),(1,0),(0,-1),(-1,0)] # delta vector for each direction
# One letter function names corresponding to each robot instruction
r = lambda x, y, a: (x, y, (a + 1) % 4)
l = lambda x, y, a: (x, y, (a - 1 + 4) % 4)
m = lambda x, y, a: (x + shifts[a][0], y + shifts[a][1], a)

raw_input() # Ignore the grid size
while 1:
    # parse initial position triplet
    x, y, dir = raw_input().split()
    pos = (int(x), int(y), dirs.find(dir))
    # parse instructions
    instrns = raw_input().lower()
    # Invoke the corresponding functions passing prev position
    for i in instrns:
        pos = eval('%s%s' % (i, str(pos)))
    print pos[0], pos[1], dirs[pos[2]]
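For readers on Python 3, the same dispatch can be sketched without eval() by using a dict of functions (a re-working for illustration, not the original solution):

```python
dirs = "NESW"
shifts = [(0, 1), (1, 0), (0, -1), (-1, 0)]

# One function per robot instruction, looked up by letter instead of eval()
ops = {
    'r': lambda x, y, a: (x, y, (a + 1) % 4),
    'l': lambda x, y, a: (x, y, (a - 1) % 4),
    'm': lambda x, y, a: (x + shifts[a][0], y + shifts[a][1], a),
}

def run(x, y, d, instrns):
    pos = (x, y, dirs.find(d))
    for i in instrns.lower():
        pos = ops[i](*pos)
    return pos[0], pos[1], dirs[pos[2]]

print(run(1, 2, 'N', 'LMLMLMLMM'))  # → (1, 3, 'N')
```

This keeps the one-line-per-instruction extensibility while avoiding the security and readability issues of eval().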
The Haskell solution
I am a beginner in Haskell, so apologies for any bad coding practices. You might notice that rather than using reflection as in the Python code, I have used type inference to invoke the correct function for each instruction. Yet again, this scales well when adding new instructions.
import Data.List

dirs = "NESW"

shifts 0 = (0, 1)
shifts 1 = (1, 0)
shifts 2 = (0, -1)
shifts 3 = (-1, 0)

instrn (x, y, a) 'R' = (x, y, mod (a + 1) 4)
instrn (x, y, a) 'L' = (x, y, mod (a - 1 + 4) 4)
instrn (x, y, a) 'M' = (x + fst (shifts a), y + snd (shifts a), a)

showpos (x, y, a) = show x ++ " " ++ show y ++ " " ++ [dirs !! a]

finddir dirchar = case elemIndex dirchar dirs of
    Nothing -> error "invalid direction"
    Just position -> position

readpos line = (x, y, a)
    where a = finddir $ head $ drop 1 line3
          [(y, line3)] = reads line2 :: [(Integer, String)]
          [(x, line2)] = reads line :: [(Integer, String)]

robo = do
    posn <- getLine
    instrns <- getLine
    putStrLn (showpos (foldl instrn (readpos posn) instrns))
    robo

main = do
    skip <- getLine -- Skip reading the grid size
    robo
Key learnings
Since some of you might be interested in Haskell, I have tried to summarize my experience in Haskell programming
- There are no loop constructs. So everything must be done using recursion!
- Haskell I/O is very hard. This is because of my little knowledge of Monads. In fact, I solved the logic pretty quickly, but it took me a while to figure out the input parsing.
- Type inference catches a lot of errors. This is quite handy, but error messages are sometimes confusing.
- I could have used Abstract Data Types for directions but it would have made the code lengthier
In short, programming in Haskell is a mind-bending exercise. Highly recommended!
https://arunrocks.com/mars-rover-in-python-and-haskell/
|
Introduction
You made an app with Django. Great! You are excited to show it to everyone on the planet. For that, you need to put it somewhere online so that others can access it.
Okay, you did some research and selected Heroku. Great Choice!👍
Now you started to deploy your app, did your research online, and found many different resources to follow which suggest millions of ways to do it.
So you are confused, frustrated and stuck. You somehow managed to host it, but then, to your surprise, you found that your CSS files didn't show up.🤦‍♂️
OK, Now let's solve your problems. This article covers almost everything that you will need.
Roadmap
- Install Required Tools
- Create Required Files
- Create a Heroku app
- Edit settings.py
- Make changes for static files
Install the required tools
For deploying to Heroku you need to have Heroku CLI (Command Line Interface) installed.
You can do this by going here:
CLI is required because it will enable us to use features like login, run migrations etc.
Create files required by Heroku
After installing the CLI let's create all the files that Heroku needs.
Files are :
requirements.txt
requirements.txt is the simplest to make. Just run the command
pip freeze > requirements.txt
This command will make a .txt file which will contain all the packages that are required by your current Django Application.
Note: If you add any package later, run this command again; this way the file will be updated with the new packages.
What is the use of requirements.txt?
As you can see, it contains all the dependencies that your app requires. So when you put your app on Heroku, it tells Heroku which packages to install.
Procfile
After this, make a new file named Procfile and do not put any extension on it. It is a file required by Heroku.
According to Heroku :
Heroku apps include a Procfile that specifies the commands that are executed by the app on startup. You can use a Procfile to declare a variety of process types, including:
- Your app’s web server
- Multiple types of worker processes
- A singleton process, such as a clock
- Tasks to run before a new release is deployed
For our app we can write the following command
web: gunicorn name_of_your_app.wsgi --log-file -
If you are confused about your app name, then just go to wsgi.py file in your project and you will find your app name there.
For this, you should have gunicorn installed and added to your requirements.txt file.
Installing is super simple. You must have guessed it!
pip install gunicorn
runtime.txt
After this, make a new text file called runtime.txt and inside it write the Python version you are using, in the following format:
python-3.8.1
That's all the files we require. Now we have to start editing our settings.py file.
Create a Heroku app
This is a simple step and can be done in 2 ways, either by command line or through the Heroku Website.
Let's use the Heroku Website for now.
- After making Heroku Account you will see an option to create a new app
- It will ask you for a name; the name should be unique. After some trial and error, you will be redirected to your app dashboard.
- There are many options to play with here but let's go to the settings tab and there click on Reveal Config Vars
- In the KEY field write SECRET_KEY and in VALUE paste the secret key from the settings file. You can change it, because only this key will be used.
- That's all for now.
- We will revisit it soon.
Edit settings.py
There are quite a few changes that should be made in this file.
Let's start.
First change
DEBUG = False
In the allowed hosts enter the domain of your Heroku app
eg.
ALLOWED_HOSTS = ["your_app_name.herokuapp.com", "127.0.0.1"]
Replace the SECRET_KEY variable with the following (assuming that you have set up the secret key in Heroku in the previous step):
SECRET_KEY = os.environ.get('SECRET_KEY')
What this does is get the SECRET_KEY from the environment. In our case, we set the secret key in Heroku and it is provided here through environment variables.
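For local development you may also want a fallback value so the app still starts when the variable isn't set. This is a common pattern, shown here as a sketch; the fallback string is made up and must never be used in production:

```python
import os

# Use the environment's SECRET_KEY on Heroku; fall back to a throwaway
# value only when developing locally.
SECRET_KEY = os.environ.get('SECRET_KEY', 'local-dev-only-insecure-key')
```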
Setup static files
In settings file, you will find
STATIC_URL = '/static/'
Replace this with the following code
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
Basically this will create a folder named staticfiles which will hold all the collected static files, such as CSS files.
If your app contains images that you have stored on it, or the user has the ability to store them, then add the following lines:
MEDIA_URL = "/media/" MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
This is pretty much the same as the above
There is one more thing you need to do.
If you have media files then to allow Django to server them you have to add a line to your urls.py file of the project (top-level urls file)
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... the rest of your URLconf goes here ...
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
I highly recommend you have a look at this documentation.
The last thing you need to serve your static files in production is WhiteNoise (see the WhiteNoise documentation).
Install white noise
pip install whitenoise
Add it in MIDDLEWARE in settings.py file
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ...
]
After this, don't forget to run the command which creates the requirements.txt file. Remember?
have a look at the documentation
So finally we have completed the 2 most important steps for deploying
Adding Code to Github
Make a new Github Repo and add all of your code in it.
Use this post as a reference.
After that go to Heroku and under the Deploy tab, you will see an option to connect Github.
Connect your repo and you can hit the deploy button to deploy your app.
Using Heroku Postgres
What is the Need? I am using SQLite already!
The problem is that the Heroku filesystem is ephemeral: files written at runtime are discarded when the dyno restarts. In addition, under normal operations, dynos restart every day in a process known as "Cycling".
Basically, all the data you store on the filesystem will get deleted every 24 hours.
To solve this, Heroku suggests using either AWS or Postgres. Heroku has made it very simple to use Postgres.
Let's do this
Go to your app dashboard and in the Resources section search for Postgres. Select it and it will be attached to your app.
Now go to the settings tab and reveal the config vars
You will see a DATABASE_URL key there. It means that Heroku has added the database and now we have to tell our app to use this database.
For this, we will need another package called dj_database_url. Install it through pip and import it at the top of the settings.py file.
Now paste the following code below DATABASES in settings file
db_from_env = dj_database_url.config(conn_max_age=600)
DATABASES['default'].update(db_from_env)
That's it, now your database is set up.
Currently, your database is empty and you might want to fill it.
- Open terminal
- type → heroku login
- After logging in, run the following commands:
heroku run python manage.py makemigrations
heroku run python manage.py migrate
heroku run python manage.py createsuperuser
Now your app is ready to be deployed
either use
git push heroku master
(after committing the changes) or push it through Github.
OK, so it's finally done. It is a bit long, but quite easy to do, because you have to make only some minor changes, nothing too big.
If there is any kind of improvement then please tell me in the comments.
Visit My Blog
Visit my website and follow me on Blogging and social platforms
https://practicaldev-herokuapp-com.global.ssl.fastly.net/theshubhagrwl/deploying-django-app-to-heroku-full-guide-4ce0
|
Issue with writing to a log file
When I try to write to a log file using loggers, I am having an issue where a new set of files is created. For example, if my initial file name is test.log, when my service restarts it creates a new log file as test.log.1 and all the data is written to the
http://roseindia.net/tutorialhelp/allcomments/1114
Hello peeps, you guys are gonna have to bear with me as I am new to programming.
My algorithms professor gave us an assignment: develop a program in C which will take any integer and multiply it from 0 to 10. The problem is I only get the result in the command-line prompt after I hit "ESC" and press "ENTER". I have no clue why, since I haven't messed at all with the debugger. Can anybody shed some light?
this is my code
Code:
#include <stdio.h>

int main()
{
    int x, multiply, result;

    printf("Multiply from 0 to 10:");
    scanf("%d\n", &x);

    for ( multiply = 0; multiply <= 10; multiply++ ) {
        result = x * multiply;
        printf("%d x %d = %d\n", x, multiply, result );
    }

    return 0;
}
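A likely culprit (an educated guess from the code shown, not anything to do with the debugger): the "\n" in the scanf format string directs scanf to skip whitespace until it sees the next non-whitespace character, so the program keeps waiting for more input after the number is typed. Dropping it gives the expected behavior; a sketch:

```c
#include <stdio.h>

int main(void)
{
    int x, multiply, result;

    printf("Multiply from 0 to 10: ");

    /* "%d" alone: without the trailing "\n", scanf returns as soon as
       the number has been read instead of waiting for more input */
    if (scanf("%d", &x) != 1) {
        fprintf(stderr, "invalid input\n");
        return 1;
    }

    for (multiply = 0; multiply <= 10; multiply++) {
        result = x * multiply;
        printf("%d x %d = %d\n", x, multiply, result);
    }
    return 0;
}
```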
http://cboard.cprogramming.com/c-programming/150396-unknown-cause-small-program-printable-thread.html
|
Provided by: libncarg-dev_6.3.0-6build1_amd64
NAME
EZISOS - Draws an isosurface.
SYNOPSIS
CALL EZISOS (F,MU,MV,MW,EYE,SLAB,FISO)
C-BINDING SYNOPSIS
#include <ncarg/ncargC.h>

void c_ezisos (float *f, int mu, int mv, int mw, float eye[3],
               float *slab, float fiso)
DESCRIPTION
F (an input array of type REAL, dimensioned MU x MV x MW) is a three-dimensional array of data defining the function f(u,v,w). The entire array (elements F(IU,IV,IW), for IU = 1 to MU, IV = 1 to MV, and IW = 1 to MW) is to be used. MU (an input expression of type INTEGER) is the first dimension of the array F. MV (an input expression of type INTEGER) is the second dimension of the array F. MW (an input expression of type INTEGER) is the third dimension of the array F. EYE (an input array of type REAL, dimensioned 3) is the position of the eye from which the isosurface is viewed. SLAB (a scratch array of type REAL, dimensioned at least n x n, where "n" is defined to be MAX[MU,MV,MW]+2) is a workspace for ISOSRF. FISO (an input expression of type REAL) is the value of fiso in the equation f(u,v,w)=fiso, which defines the isosurface to be drawn.
EZISOS is called to draw an isosurface if all of the input array is to be used (rather than a subset of it), if ISOSRF's argument IFLAG is to be chosen internally, and if a frame advance is to be done after the isosurface is drawn. If any of these conditions is not met, use ISOSRF instead.
EXAMPLES
Use the ncargex command to see the following relevant examples: tisosr, fisissrf.
ACCESS
To use EZISOS or c_ezisos, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
SEE ALSO
Online: isosurface, isosurface_params, isgeti, isgetr, isosrf, isseti, issetr, pwrzi, ncarg_cbind Hardcopy: NCAR Graphics Fundamentals, UNIX Version
Copyright (C) 1987-2009 University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement.
http://manpages.ubuntu.com/manpages/xenial/man3/ezisos.3NCARG.html
|
Drag&Drop an image in a gui field
I have User Area with some images.
Now I want to drag&drop one of those images in the UA to a Gui field, so I can get the filename of the chosen image.
Something like this.
Questions:
- what kind of field should the Gui field be.
A link field (top right field) or a simple text field (2e top right field)?
- how to initiate a drag&drop in the UA?
- how to know the image is dropped on the gui field?
- should the field, where the image should be dropped (the gui field), also be an UA?
Hope you can help me with a example.
-Pim
Hi @pim you have to use GeUserArea.HandleMouseDrag.
- what kind of field should the Gui field be?
- Any that accepts the DRAGTYPE you passed, so if you pass DRAGTYPE_FILENAME_IMAGE, all gadgets that accept a DRAGTYPE_FILENAME_IMAGE will accept the drag operation.
- how to initiate a drag&drop in the UA?
- Use HandleMouseDrag in your InputEvent method.
- how to know the image is dropped on the GUI field?
- By calling HandleMouseDrag, your code is paused until the drag operation has ended. You have no way to know if the operation changed anything in the scene (though you could listen for EVMSG_CHANGE); the return value of HandleMouseDrag only indicates whether a drag/drop operation happened.
- should the field, where the image should be dropped (the gui field), also be an UA?
- No, as long as the gadget (iCustomGui / GeUserArea) accepts the drag type.
Here is an example for a file:

import c4d

class DraggingArea(c4d.gui.GeUserArea):
    def InputEvent(self, msg):
        filePath = "MySuperPath"
        self.HandleMouseDrag(msg, c4d.DRAGTYPE_FILENAME_IMAGE, filePath, 0)
        print "dropped"  # HandleMouseDrag returns once the drag has ended
        return True

# In the GeDialog:
#     self.area = DraggingArea()
In any case, I will extend the python documentation to have all information about each type but here a summary:
DRAGTYPE_FILES
DRAGTYPE_FILENAME_SCENE
DRAGTYPE_FILENAME_OTHER
DRAGTYPE_FILENAME_IMAGE
You should pass a string.
DRAGTYPE_ATOMARRAY
A list of c4d.C4DAtom object
DRAGTYPE_DESCID
A dict {"did": the c4d.DescID, "arr": a list of c4d.C4DAtom object}
DRAGTYPE_RGB
A c4d.Vector
DRAGTYPE_RGB_ARRAY
A list of c4d.Vector
DRAGTYPE_RGBA_ARRAY
A maxon.BaseArray(maxon.ColorA)
Other types are not supported.
Cheers,
Maxime..
Correct. You can then process this as a standard Cinema 4D drag operation and react to BFM_DRAGRECEIVE in your GeUserArea Message method.
You can retrieve it from BFM_INPUT_X and BFM_INPUT_Y
def InputEvent(self, msg):
    mouseX = msg[c4d.BFM_INPUT_X]
    mouseY = msg[c4d.BFM_INPUT_Y]
As you will do in a normal drag process so in your Message method override.
InputEvent is only called with BFM_INPUT, so if you want to do something before handling the drag, then do it previously and avoid HandleMouseDrag.
Hope it helps,
Cheers,
Maxime.
- danielsian last edited by
@pim I'm trying to figure out how you did this. Could you please share a sample code of this example?
I'm trying to get the full path of an image by dragging it into an edit text field...
Cheers
https://plugincafe.maxon.net/topic/12159/drag-drop-an-image-in-a-gui-field
#include <hallo.h>
* Branden Robinson [Thu, Feb 27 2003, 04:25:34AM]:
> > can you tell me what "Our Users" in #4 of the social contract means?
> > Since Debian is not a market-share-seeking organization, we don't care
> > about people who don't use Debian, so it seems a tautology.
> I think it means that we need to listen to and be accountable to our
> users. This means helping our users to get the most of our system,
> following up on bug reports, and improving the system to benefit them.
Funny to hear it from someone still refusing to change few things to
improve useability on _small_ costs of mental consistency (remember
x-session-manager story).
Gruss/Regards,
Eduard.
--
The early bird gets the worm. If you want something else for breakfast, get up later.
https://lists.debian.org/debian-vote/2003/02/msg00113.html
> I have tried option "-fwrapv" in my AdaCore GNAT Programming Studio,
> which is supposedly using public domain C and C++ compiler from MinGW
> (release 3.4.2 mingw-special), and then re-built the entire project.
>
> However, the following expression is still being capped, not wrapped:
>
> oibrg = (T_180B15_ANGLE)(olv_realbrg * C_RADIAN_TO_B15);
>
> with olv_realbrg = 5.23598775598 rad (or 300deg), oibrg = -32768,
> instead of expected -10923.
>
> Where:
> #define C_RADIAN_TO_B15 10430.37835047F
> typedef T_SIGNED_16 T_180B15_ANGLE;
>
> Is there another compiler option to try, or another option I should
> be enabling in AdaCore GNAT Programming Studio?
>
> --------------------------------------------------------
> Bruno Gélinas
> Ingénieur logiciel
"Gelinas, Bruno" wrote:
Please don't send your message as quoted text. It appears as if you
have nothing to say.
> > I have tried option "-fwrapv" in my AdaCore GNAT Programming Studio, which is supposedly using public domain C and C++ compiler from MinGW (release 3.4.2 mingw-special), and then re-built the entire project.
First of all, gcc is not public domain. It is licensed under the GPL.
There is a huge difference. Parts of MinGW are public domain, but not
gcc.
> > However, the following expression is still being capped, not wrapped:
> >
> > oibrg = (T_180B15_ANGLE)(olv_realbrg * C_RADIAN_TO_B15);
> >
> > with olv_realbrg = 5.23598775598 rad (or 300deg), oibrg = -32768, instead of expected -10923.
> >
> > Where:
> > #define C_RADIAN_TO_B15 10430.37835047F
> > typedef T_SIGNED_16 T_180B15_ANGLE;
> >
> >
> > Is there another compiler option to try , or another option I should be enabling in AdaCore GNAT Programming Studio ?
I think your understanding of -fwrapv is wrong. This is what the manual
says:
.
Note that this does not influence what actually happens in the case of
signed overflow, only what the compiler's optimizers can assume. The C
standard says that signed overflow is undefined behavior. But the x86
architecture (among others) implement wrapping for signed overflow just
the same as unsigned overflow, which means from a practical standpoint
you can assume that e.g. INT_MAX + 1 = INT_MIN.
However, from a compiler optimizer standpoint, you can't assume that if
you're obeying the standard, and there are a certain class of
optimizations that you can make if you are willing to assume that
overflow of a signed int can never happen, since that would be undefined
and no well-formed program invokes undefined behavior. This strict
interpretation of the C standard [that signed overflow will never
happen] goes against traditional C coding practice since the dark ages,
and is somewhat counterintuitive for some people. And in fact in gcc
4.2 a new VRP pass was added that used this allowance to do some
optimizations that broke real world C code that required signed overflow
to happen. There was a lot of head scratching and debating, and the
compromise that came as a result of it was the -Wstrict-overflow and
-fstrict-overflow pair of options. This is one of those things, like
strict aliasing, that the compiler is allowed by the standard to do, but
that many humans don't know about or even disagree about. -fwrapv is a
way to tell the compiler "never assume -- even though the standard
allows you to -- that signed overflow will never happen in my program,
even if that means you can't do some optimizations that you'd otherwise
be able to do."
But what this flag does not do is tell the compiler whether signed
overflow saturates or wraps; that is a function of the hardware and gcc
does not have a flag to control it.
Now as to your actual problem, I think it's just a matter of missing a
cast. But we can't really say anything without a testcase that
compiles. I tried:
#include <stdio.h>
#include <stdint.h>
int main()
{
int16_t foo = (int)(5.23598775598F * 10430.37835047F);
printf ("%hd", foo);
}
...but this prints -10923. If you remove the (int), then you get 32767,
which is probably a result of trying to cast a float to a short without
the intermediate int representation. I think that just means you need
the cast, there's no flag that will influence this.
Brian
https://sourceforge.net/p/mingw/mailman/mingw-users/thread/4660BC0B.29DD1981@dessent.net/
country_code_picker 2.0.1
country_code_picker: ^2.0.1
A flutter package for showing a country code selector. In addition it gives the possibility to select a list of favorites countries, as well as to search using a simple searchbox
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub add country_code_picker
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: country_code_picker: ^2.0.1
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:country_code_picker/country_code_picker.dart';
|
https://pub.dev/packages/country_code_picker/install
|
CC-MAIN-2021-17
|
refinedweb
| 118
| 55.95
|
Hi, I have a small MFC app with two buttons, and a dummy thread function.
I also have a small DLL; the issue is that when I press button 2 I get a memory leak (I have tried CMemoryState and it does not detect any memory leak, but I can see it in Process Explorer).
In my simple DLL code, the "stdafx.h" file contains the following line:
#include <afxwin.h>
but if I comment out this line, the memory leak is gone.
I tried commenting out all of my DLL code (except that include) and the problem still persists.
Any clues about this behavior?
(I tried a Win32 application instead of MFC and I have the same problem.)
Code:
void CPruebaThreadDlg::OnBnClickedButton1()
{
    LPVOID lpParam = NULL;
    AfxBeginThread(dummy, lpParam, THREAD_PRIORITY_NORMAL, 0, 0, NULL);
}

void CPruebaThreadDlg::OnBnClickedButton2()
{
    int i;
    HMODULE hmod = LoadLibrary("SimpleDll.dll");
    while (true)
    {
        for (i = 0; i <= 150; i++)
            OnBnClickedButton1();
        Sleep(700);
    }
}

UINT dummy(LPVOID)
{
    return 0;
}
http://cboard.cprogramming.com/windows-programming/77046-threads-dll.html
Everything posted by Mario Marengo
Maya to Mantra
Mario Marengo posted a topic in Other 3d Packages
Hello OdForce! Circumstances have recently forced me to explore the possibility of rendering directly to Mantra from Maya -- that is: generating an IFD directly from Maya. This is in contrast to the more typical exporting of scene elements to Houdini (via some intermediate format like Alembic/Fbx, say) and then rendering from Houdini, which I realize is still a valid avenue open to us. Instead, I'm looking into the possibility of a method that will allow Maya users to simply "press render" and have Mantra transparently service said renders behind the curtains. My uncertainty with the whole thing lies primarily on the Maya side, because while I'm quite comfortable with Mantra and IFD's, I'm very much *not* a Maya power user.
I realize this is not at all a trivial task (and perhaps not even worth the effort in the end), and am also conversant with some of the individual components that may be put to use in a hypothetical solution:
- Maya & Houdini C++/Python toolkits
- IFD's reference SOHO implementation
- Houdini Engine for Maya
- etc...
But I'm curious, so I thought I'd tap into the vast Houdini brain-store here to see if anyone has had experience with this or can point me to existing partial/complete solutions (I'm aware of at least one partial attempt), or simply has travelled down the road enough to say "don't even think about it, you fool!" TIA!
- Stu already posted the ultimate glass shader, but I thought it might be fun to try it the old-fashioned way, just for giggles. I'll spread this over several posts so I can add bits and pieces as I find the time to work on them. I have a bunch of code lying around that deals with glass, but since I'm sharing, I thought I'd take the opportunity to rethink the whole thing from scratch, and post as I build it. That way we can also discuss each step separately and critique the choices made along the way.
Requirements: A complete glass shader should support the following optical properties: reflection, refraction, transmission, absorption, and dispersion (did I leave anything "visible" out?). All these effects are wavelength-dependent, so there's a big choice to be made along the way regarding the potential need for a spectral color model. This hypothetical "complete" model would be relatively expensive to compute (in *any* renderer) and clearly "overkill" in many situations (e.g: glass windows), so we'll need the ability to turn features on/off as needed (or end up with several specialized shaders if we can't keep the full version both flexible *and* efficient).
Space: It is customary to do all reflectance calculations in tangent space, where the x and y axes are two orthonormal vectors lying on the plane tangent to the shade point "P", and z is the unit normal to that plane. This simplifies some of the math and can therefore lead to more efficient code. However, since both VEX and RSL provide globals and runtime functions in some space other than tangent ("camera" space for those two), working in tangent space will inevitably mean some transformations. Whether working in tangent space is still advantageous after that, remains to be seen. As a result, we'll need to look at the costs involved in writing our functions in either space, and base our final choice on what we find.
Naming Conventions: I'll adopt the following naming conventions for the main variables:
- vector n - unit normal to the surface
- vector wi - unit incidence vector, points *toward* the source
- vector wo - unit exitance vector, points *toward* the observer
- vector wt - unit transmission direction
- [float|spectrum] eta_i - index of refraction for the incident medium
- [float|spectrum] eta_t - index of refraction for the transmissive medium (glass)
- [float|spectrum] kr - fraction of incident light/wavelength that gets reflected
- [float|spectrum] kt - fraction of incident light/wavelength that gets transmitted
All angles, unless otherwise stated, are in *radians*! All vector parameters, unless otherwise stated, are expected to be normalized!
Fresnel: This is the workhorse for glass, it alone is responsible for 90% of the visual cues that say "glass". The Fresnel functions determine the fraction of light that reflects off a surface (and also the fraction that gets transmitted, after refraction, *into* the surface). Glass is a "dielectric" material (does not conduct electricity), so we'll use that form of the function. We'll also ignore light polarization (we're doing glass, not gems... a full model for gem stones would need to take polarization into account). But wait! both RSL and VEX already *have* this kind of fresnel function, so why re-invent the wheel?!? Implementations are all slightly different among shading languages. Having our own will hopefully provide a homogeneous (actually, we're shooting for "identical") look and API across renderers -- if we find the renderer's native fresnel is identical to ours we could always choose to switch to the native version (which is usually faster). The following is, to the best of my knowledge, an accurate Fresnel implementation in VEX for dielectrics (unpolarized). In case we find it useful at some point, I give it for both "current" space (camera space for VEX and RSL), and tangent space.
Here's the fragment for world space:

// Full Fresnel for dielectrics (unpolarized)
//-------------------------------------------------------------------------------
// world space
void wsFresnelDiel(vector wo,n; float eta_i,eta_t;
                   export vector wr,wt; export float kr,kt;
                   export int entering)
{
   if(eta_i==eta_t) {
      kr = 0.0; wr = 0.0;
      kt = 1.0; wt = -wo;
      entering = -1;
   } else {
      float ei,et;
      // determine which eta is incident and which transmitted
      float cosi = wsCosTheta(wo,n);
      if(cosi>0.0) { entering=1; ei=eta_i; et=eta_t; }
      else         { entering=0; ei=eta_t; et=eta_i; }
      // compute sine of the transmitted angle
      float sini2 = sin2FromCos(cosi);
      float eta = ei / et;
      float sint2 = eta * eta * sini2;
      // handle total internal reflection
      if(sint2 > 1.0) {
         kr = 1.0; wr = 2.0*cosi*n - wo;
         kt = 0.0; wt = -wo; // TODO: this should be zero, but...
      } else {
         float cost = cosFromSin2(sint2);
         float acosi = abs(cosi);
         // reflection
         float etci=et*acosi, etct=et*cost, eici=ei*acosi, eict=ei*cost;
         vector para = (etci - eict) / (etci + eict);
         vector perp = (eici - etct) / (eici + etct);
         wr = 2.0*cosi*n - wo;
         kr = (para*para + perp*perp) / 2.0;
         // transmission
         if(entering!=0) cost = -cost;
         kt = ((ei*ei)/(et*et)) * (1.0 - kr);
         wt = (eta*cosi + cost)*n - eta*wo;
      }
   }
}

The support functions like cosFromSin() and so on, are there just for convenience and to help with readability. These are included in the header.
After some testing, it looks like VEX's current version of fresnel() (when used as it is in the v_glass shader) and the custom one given above, are identical. Here's an initial test (no illumination, no shadows... this is just a test of the function). Yes, you'd expect the ground to be inverted in a solid glass sphere. The one on the right is a thin shell. I ran a lot of tests beyond that image, and I'm fairly confident it's working correctly. The first thing that jumps out from that image though, is the crappy antialiasing of the ground's procedural texture function for the secondary rays.
In micro-polygon mode, you'd need to raise the shading quality to 4 or more to start getting a decent result... at a huge speed hit. It looks like there's no area estimation for secondary rays in micropolygon mode. In ray-tracing mode however, things are good -- whether it uses ray differentials or some other method, the shader gets called with a valid estimate and can AA itself properly. The micropolygon problem needs to get looked into though. You would expect the built-in version to run faster than the custom one, and it does (~20% faster)... as long as you keep the reflection bounces low (1 or 2). As the number of bounces increases, our custom version starts out-performing the built-in one. Yes, this is weird and I very much suspect a bug. By the time you get to around 10 bounces, the custom code runs around 7 times faster (!!!) -- something's busted in there. OK. That's it for now. It's getting late here, so I'll post a test hipfile and the code sometime soon (Monday-ish). Next up: Absorption. (glass cubes with a sphere of "air" inside and increasing absorption -- no illumination, no shadows, no caustics, etc)
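For readers who want to sanity-check the dielectric Fresnel math above outside of Mantra, here is a scalar Python port (an illustrative re-derivation, not code from the post; note that kt here is the simple power fraction 1 - kr, whereas the VEX version applies the (ei/et)^2 radiance scaling):

```python
import math

# Scalar port (for sanity checking only) of the unpolarized dielectric
# Fresnel logic above. cosi is the cosine of the incidence angle measured
# against the normal; eta_i/eta_t are the indices of refraction.
def fresnel_diel(cosi, eta_i, eta_t):
    ei, et = (eta_i, eta_t) if cosi > 0.0 else (eta_t, eta_i)
    sini2 = max(0.0, 1.0 - cosi * cosi)
    eta = ei / et
    sint2 = eta * eta * sini2
    if sint2 > 1.0:                  # total internal reflection
        return 1.0, 0.0
    cost = math.sqrt(max(0.0, 1.0 - sint2))
    acosi = abs(cosi)
    para = (et * acosi - ei * cost) / (et * acosi + ei * cost)
    perp = (ei * acosi - et * cost) / (ei * acosi + et * cost)
    kr = 0.5 * (para * para + perp * perp)
    kt = 1.0 - kr                    # transmitted *power* fraction
    return kr, kt

# At normal incidence on glass (eta = 1.5), kr = ((1-1.5)/(1+1.5))^2 = 0.04
kr, kt = fresnel_diel(1.0, 1.0, 1.5)
print(round(kr, 4), round(kt, 4))    # 0.04 0.96
```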
The SSS Diaries
Mario Marengo posted a topic in Lighting & Rendering
Hi all, I have a few days of "play time" ahead of me, so I thought I'd revisit the various SSS models, and see if I can come up with something a little more user friendly. And since I'm sharing the code, I thought I'd take a cue from Marc's "Cornell Box Diaries" and share the process as well... selfishly hoping to enlist some of the great minds in this forum along the way
My initial approach to this SSS thing was a somewhat faithful implementation of the di-pole approximation given in Jensen's papers. However, that model is very hard to parameterize in a way that makes it intuitive to use; the result is that, as it stands, it can be very frustrating. Regardless, I'll continue to think about ways to re-parameterize that model; but I must confess it's evaded every one of my attempts so far -- maybe I can convince someone here (TheDunadan? ) to look at the math with me.
As a user, I'd love to have a model that I can control with just two parameters:
1. Surface Color (or "diffuse reflectance"). We need to texture our surfaces (procedurally or otherwise), so we must have this one. In Jensen's model, this gets turned into the "reduced scattering albedo", which in turn gets used to calculate the actual scattering and absorption properties of the material; all of which relate to each other in very non-linear ways, making it hard to control. So the goal here is to come up with a "what you set is what you get" model (or as close to that as possible).
2. Scattering Distance. This should behave exactly as one would expect; i.e: "I want light to travel 'this far' (in object-space units) inside the medium before it gets completely extinguished". No more and no less. Well... the main problem with an exponential extinction (Jensen) is that, while physically correct, it never quite reaches zero, so again, it is hard to control.
At this point in time, I don't see how any model that satisfies this "two parameter" constraint can ever also be physically correct -- meaning whole swathes of Jensen's model will need to go out the window. And first in the list of things to dissappear will likely be the di-pole construction... next in line is the exponential falloff... and the list grows... OK. Looking over a whole bunch of papers, I think I've decided that Pixar's approach from the Siggraph 2003 renderman notes (chapter 5, "Human Skin for Finding Nemo") is the closest thing to what I'm looking for, so I'll start with that. I'll post my progress (and the code, natch), in this thread so people can take it for a spin and see what they think. Cheers!
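The control problem described above (an exponential never quite reaching zero) is easy to illustrate numerically. Both falloff formulas below are illustrative stand-ins, not the shader's actual model:

```python
import math

# Illustration of the parameterization problem: a physically based
# exponential extinction never actually reaches zero, whereas an
# artist-friendly falloff can be built to hit exactly zero at a chosen
# scattering distance 'sd'. Both formulas are illustrative choices.
def extinction_exp(d, sd):
    """Exponential falloff with 'sd' as the mean free path."""
    return math.exp(-d / sd)

def extinction_smooth(d, sd):
    """A falloff that is exactly 0 for d >= sd (smoothstep-style)."""
    t = min(max(d / sd, 0.0), 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

sd = 1.0
print(extinction_exp(5.0 * sd, sd))    # still > 0 five units past the target
print(extinction_smooth(5.0 * sd, sd)) # exactly 0 at/after the target distance
```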
-
#define ns_fperlin4 \
    nsdata ( "perlin" , 0.0168713 , 0.998413 , 0.507642 , 1 ) // +/- 0.0073
#define ns_vperlin4 \
    nsdata ( "perlin" , 0.00576016 , 1.025 , 0.518260 , 1 ) // +/- 0.0037
- The short answer: "Use PBR"
MIS is used by the default PBR path tracer. The path tracer is written in VEX and, if you're interested, you can look at its source code in $HH/vex/include/pbrpathtrace.h. This means you could, in theory, customize pretty much all of PBR except for the BSDFs (bsdf's are not written in VEX). The PhysicallyBasedSpecular VOP, and all other "Physically Based xxxx" VOPs resolve to a BSDF -- notice that its output (F) is not a color (vector type) but a BSDF type (which is an opaque type that represents a linear combination of scattering distributions, or "lobes"). All these nodes that only output an 'F' (a bsdf) are meant to be used with the PBR engines. You can look at their code by RMB on the VOP and selecting "Type Properties...", then click on the "Code" tab of the Type Properties dialog to see the source code for that VOP. You'll notice that none of these "Physically Based" VOPs use illuminance() or phongBRDF() or any of those functions. PBR samples (or transports) light differently than MP or RT -- for example, you'll see things like "sample_light()" instead of "illuminance()", and "sample_bsdf()" instead of "phongBRDF()"... similar ideas but different approach (in PBR, a BRDF is a probability distribution instead of a weighting function, and things like MIS are used to balance the various importance measures assigned to each sampling strategy).
- float phongBRDF() is the standard Phong lobe as a weighting function (in [0,1]) -- note that it returns a float.
- vector phong() computes illumination using the Phong lobe as a weight (i.e: using phongBRDF() as the weighting function). That is: it returns the color (notice it returns a vector, not a float) of the incident illumination, as weighted by phongBRDF() and so is equivalent to using phongBRDF() inside your own illuminance loop.
- bsdf phong() is, again, the Phong lobe but this time expressed as a probability distribution. It is normalized in the sense that it integrates to 1 over its domain of incident directions (a hemisphere in this case), meaning that, unlike phongBRDF(), its range is not necessarily in [0,1]. Also note that its return data type is "bsdf", the contents of which are inaccessible to the user (you can only combine bsdf's with other types in certain ways but not manipulate their values directly). Long story short: these "bsdf" animals are meant to be used with the PBR engines -- they can be sampled and evaluated to resolve into a color, yes, but the scaffolding required to make that happen correctly (or in a useful way) is, well, a path tracer, not an illuminance loop.
- None of these functions "invoke" anything -- they just compute and return values. But, yes, some shading globals (like F) are only used by certain engines (F -- and the code path that defines it -- is only executed when rendering with PBR, for example). So, any assignment to the global F when rendering using, say, the MP engine, would be ignored, and conversely, any assignment to Cf will be ignored by the PBR engines. But these functions themselves do not "invoke" anything.
HTH.
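The normalization claim for bsdf phong() (that it integrates to 1 over the hemisphere) can be checked numerically; the pdf below is the classic normalized Phong lobe (n+1)/(2*pi)*cos^n(theta), used here as an illustrative stand-in for Mantra's internal bsdf:

```python
import math

# Numeric check (illustrative, not from the post) that a Phong lobe
# expressed as a probability distribution integrates to 1 over the
# hemisphere. pdf(theta) = (n+1)/(2*pi) * cos(theta)^n, and the solid
# angle measure is d_omega = sin(theta) d_theta d_phi.
def phong_pdf_integral(n, steps=100000):
    total = 0.0
    dtheta = (math.pi / 2) / steps
    for i in range(steps):
        theta = (i + 0.5) * dtheta   # midpoint rule over [0, pi/2]
        pdf = (n + 1) / (2 * math.pi) * math.cos(theta) ** n
        total += pdf * math.sin(theta) * 2 * math.pi * dtheta
    return total

print(round(phong_pdf_integral(10), 4))  # ~1.0 for any exponent n
```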
- The mirror reflection brdf is a bit of a strange animal in that its density distribution integrates to 0, which is why it's modeled as a delta distribution (which is more like a limit than a function). In any case, if you were writing it as a VEX function that computes the fraction of energy leaving the surface in the direction 'wo', after arriving from direction 'wi' at a location on the surface with normal 'wn' (all vectors unit length and pointing away from the surface position 'P' -- and note that here we're using vectors instead of spherical angles), then it might look something like this:

float brdf_mirror(vector wi,wn,wo)
{
   return (wo==reflect(-wi,wn));
}

vector illum_mirror(vector p,wn,wo)
{
   vector out = 0;
   illuminance(p,wn,M_PI_2)
   {
      shadow(Cl);
      vector wi = normalize(L);
      out += Cl*brdf_mirror(wi,wn,wo);
   }
   return out;
}

This would be a direct interpretation of the delta function you posted above -- a function that returns zero everywhere except for the unique case where wo is in the exact mirror direction (about wn) of the incident vector wi (where it returns 1) -- a situation which, if drawing from a random set of directions wi, would occur with probability 0. That's what I meant when I said that it's not a very useful model in the context of an illuminance loop, where the wi's are chosen for you by Mantra -- that is: inside an illuminance loop, *Mantra* decides where the samples on an area light will go, not you, and the chances that it will pick a sample (with direction 'wi') that just happens to exactly line up with the mirror direction of the viewing vector ('wo' above) are zero. And, as expected, it looks like this:
The only way to work with a delta distribution is to sample it explicitly -- you manually take a sample in the single direction where you know the function will be meaningful.
This can be done either using ray tracing (see the functions reflectlight(), trace(), and gather()), or using a reflection map (see the function environment()) -- but *not* inside an illuminance loop. This is not "cheating", it just follows from the kind of statistical animal we're talking about. Even the PBR path tracer handles delta BxDF's this way -- when a BSDF contains a delta lobe, it will, when sampled, return a single direction with probability 1, and be excluded from multiple importance sampling. Here's a version using trace(). The only catch is that, when using ray tracing (as opposed to a reflection map), you'll need to turn the light geometry into an actual object so that it can be reflected:

vector illum_trace(vector p,dir; float maxcontrib)
{
   // Using reflectlight():
   //return reflectlight(p,dir,-1,maxcontrib);

   // Or... using trace() instead of reflectlight():
   vector hitCf = 0;
   trace(p,dir,Time,"raystyle","reflect","samplefilter","opacity","Cf",hitCf);
   return hitCf;
}

And it looks like this (using the RT engine):
Here's your hipfile augmented with those two approaches (the otl is embedded in the file). square reflection_mgm.hipnc
Oh, one more thing: A Phong lobe is not the same as a Delta lobe -- if you want Phong then just use the phongBRDF() function (and note it's "phong", not "phone"). Cheers.
- 1. The product of those 2 delta functions is zero everywhere except when the viewing direction is the exact mirror (about the normal) of the incident direction (or, stated in polar coords, when theta_r==theta_i and phi_r is exactly +/-PI radians, or 180 degrees, away from phi_i), at which point the argument to both delta functions is 0 and therefore the functions themselves evaluate to 1 (as does their product). The scaling of 1/cos(theta_i) is there to cancel out a cos(theta_i) factor that would normally appear outside the brdf to convert incident power to irradiance. All of it essentially boiling down to a radiant value of "I" along the exact mirror direction from incidence and 0 everywhere else -- an effect we all know as a "mirror reflection". What do you mean by a "square specular"?
2. The kind of analysis you mention in #1 is better suited to a statistical context where the BRDF's can be explicitly sampled (like in PBR). It's not really suitable for "illuminance loops" (you mention Cl) where you have no control over the directions in which to sample incident illumination. In that context, the probability that any one of the samples that the loop iterates over is in the exact mathematical mirror direction to the viewing direction is pretty much zero -- so yeah, not the right context to be thinking in terms of delta distributions. In the traditional old-style shading approach, a perfect mirror reflection would have necessitated a "reflection map", which you can indeed sample in a specific direction. In that method, the illuminance loop is only used to do approximations to broad glossy or diffuse reflections of light sources. HTH.
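The delta-function conditions in point 1 (theta_r == theta_i, and phi_r exactly PI away from phi_i) can be verified numerically for the vector mirror formula wr = 2*(wi.n)*n - wi. A toy check with the normal fixed at +z (illustrative, not from the post):

```python
import math

# Numeric check that the mirror direction wr = 2*(wi.n)*n - wi satisfies the
# delta-function conditions described above, with the normal n = (0, 0, 1).
def mirror(wi):
    # For n = +z, 2*(wi.n)*n - wi reduces to negating the tangential part.
    return (-wi[0], -wi[1], wi[2])

def spherical(w):
    theta = math.acos(w[2])       # angle from the normal
    phi = math.atan2(w[1], w[0])  # azimuth
    return theta, phi

# An arbitrary unit incidence vector with theta_i = 0.7, phi_i = 1.2
wi = (math.sin(0.7) * math.cos(1.2), math.sin(0.7) * math.sin(1.2), math.cos(0.7))
wr = mirror(wi)
ti, pi_ = spherical(wi)
tr, pr = spherical(wr)
print(abs(tr - ti) < 1e-9, abs(abs(pr - pi_) - math.pi) < 1e-9)  # True True
```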
Axyz Animation Now Hiring
Mario Marengo posted a topic in Studios
Axyz Animation, in beautiful Toronto, Canada, is now looking for a Houdini person experienced with lighting and shading. The position does not require technical knowledge of VEX and writing shaders per se (though some knowledge of VOPs is a plus), but rather an intimate knowledge of Mantra, shading concepts, and preparing scenes for efficient rendering, as well as an excellent eye for lighting, texturing, tone mapping, and generally integrating CG with live elements. This is a full-time position, starting now. Candidates should have 2 years experience or more. Please contact: John Stollar, General Manager, js@axyzfx.com Thank you.
How to use VEX variables in shader
Mario Marengo replied to titor's topic in Shaders
At the end of your 2 VOPSops I see 3 point attributes: "Cd" (vector), "Alpha" (float), and "topp" (float). Over in the shader, AFAICT, you're only picking up one of these: "Alpha", and piping it directly to the Of and Af outputs. Finally, over in the "mantra1" ROP, you're adding 2 AOVs (or "deep rasters"): "Alpha" (float) and "MapDisintegration" (vector). So...
1. I don't see any attribute or shader parameter called "DestrMatte" anywhere.
2. Even though the shader is picking up "Alpha" (and using it), it is not exporting it, and so Mantra can't itself pick it up and pipe it to an AOV. To export it so that Mantra can use it, set "Export" to "Always" in the Parameter VOP (see attached).
3. If you intended either of the other two attributes ("Cd" and "topp") to stand for "MapDisintegration" or "DestrMatte" or whatever, then you have to pick them up in your shader and export them as well. I've done this to both in the attached. Once exported by the shader, the rop can pick them up (and rename them to whatever you like).
Anyway, that's how the mechanism works, but having said that, keep in mind that Af, Of, Cf, N, P, and Pz, are all automatically available for AOV output (look at the pull down menu for the "VEX Variable" parameter of each AOV and you'll see them). This means that, in your case, since all you're doing with the attribute "Alpha" is assigning it directly to Af/Of, you don't strictly need to manually export them as you could still get at it via the automatic "Af" AOV -- but only because you're currently doing nothing with it inside the shader and so Alpha==Of==Af (which is not usually the case with most attributes, so it's still good to learn how the export business works).
HTH
Head_Creepv3_1_mgm.hip
volume density falloff
Mario Marengo replied to goshone's topic in Effects

Have a look at the "Contour" controls in the field modifiers of the pyro shader: Pyro Docs: Contour
Fast Gi Anyone?
Mario Marengo posted a topic in Lighting & Rendering

Hi all, I recently came across a cool paper by Michael Bunnell from NVIDIA (one of the chapters from the GPU Gems 2 book) that deals with speeding up GI calculations using surface elements. Complementary to that paper, and along similar lines, but broader, is this one by Arikan et al., which will appear in the SIGGRAPH 2005 proceedings. A lot of yummy stuff in there! Here are some early results from an HDK implementation I'm working on for the ambient occlusion portion of the NVIDIA paper (sorry, won't be able to share code on this one, but I thought you'd be interested in the viability of the method nonetheless):

Test Geometry: Car Wheel, 45,384 points, 44,457 polys, rendered as a subdiv surface.

Reference: Here's the reference image, with 200 samples per shade point. Calculation time: 2 hrs. 48 mins. 12.26 secs. (The little black artifacts are probably due to the wrong tracing bias, but I didn't feel like waiting another 3 hours for a re-render.)

Solid Angle: And this is a test using two passes of the "solid angle" element-to-point idea described in the paper. Calculation time: 6.20609 secs. (!!!) A little softer than the reference, but... not bad! The implementation is in its infancy right now, and I know there's room for more optimization, so it should go faster.

Point Cloud Method: More comparison fun... Here is the "occlusion using point clouds" method with the point cloud matching the geometry points, and 200 samples for each computed point (note that about half of all points were probably actually visited since, at render time, we only touch the ones visible to camera plus a small neighborhood). Calculation time: 38 mins. 19.482 secs. The artifacts are due to a too-small search radius, but if I were to increase it, then everything would go too soft, and since it's a global setting I left it at a small enough size to catch the high-frequency stuff.
In any case, I was more interested in the timing than the look of it in this case. Methinks the days of sloooow GI are numbered.... yay! Cheers!
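For readers unfamiliar with the reference method the timings above are compared against: per-point sampled occlusion is conceptually just ray casting over the hemisphere at each shade point. Here is a toy Python sketch of the N-samples-per-point idea (my own illustration with made-up helper names, not the HDK code), using spheres as the only occluder type:

```python
import math
import random

def ray_hits_sphere(o, d, center, radius):
    """Standard ray/sphere intersection test; True if the ray from o
    along unit direction d hits the sphere at a positive distance."""
    oc = tuple(o[i] - center[i] for i in range(3))
    b = 2.0 * sum(d[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-6

def ambient_occlusion(p, occluders, n_samples=200):
    """Monte-Carlo AO at point p: fraction of random hemisphere
    directions (around +Z here, for simplicity) blocked by any
    occluder. occluders is a list of (center, radius) spheres."""
    hits = 0
    for _ in range(n_samples):
        # z uniform in [0, 1] gives directions uniform in solid angle
        # over the upper hemisphere
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if any(ray_hits_sphere(p, d, c, rad) for c, rad in occluders):
            hits += 1
    return hits / n_samples  # 0.0 = fully open, 1.0 = fully occluded
```

This is exactly the work the element-based methods amortize: instead of shooting hundreds of rays per shade point, they accumulate approximate occlusion from coarse surface elements, which is why the timings differ by orders of magnitude.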
Fast Gi Anyone?
Mario Marengo replied to Mario Marengo's topic in Lighting & Rendering

You could always bake onto a point cloud (from a Scatter SOP) and then do a filtered lookup (of the pre-baked point cloud) at render time. You're not limited to working strictly with the points in the geometry you're piping to the IFD. So, for example, you'd only need to subdivide for the purpose of scattering the points, but you can then render the un-subdivided surface thereafter.

But no, this is not something that can be implemented at the shader level, because at that point you don't have access to the entire scene geometry (though some limited functionality with file-bound geometry exists) -- and even if you did, you wouldn't want to be calculating this thing for every shade point (it would be many orders of magnitude slower than tracing). The whole idea here is that you compute it once on a much coarser scale than what normally results from a renderer's dicing step (or the even larger sampling density while raytracing), and then interpolate -- the assumption being that AO is sufficiently low-frequency an effect to survive that treatment... though that's not always a safe assumption, as the later updates to the original paper confirm.

Also keep in mind that this method, even when working properly, has its own set of limitations: displacements and motion-blurred or transparent shadows come to mind. And... well, AO is just *one* aspect of reflectance -- a noisy AO may be noisy on its own, but not after you add 15 other aspects of light transport. So, yes, tracing may be slower, but you're likely not just tracing AO for a real material in a real scene... just be careful how you judge those speeds.
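The "bake, then do a filtered lookup" step described above can be sketched in a few lines. This is a toy illustration (not Houdini's actual pcopen/pcfilter machinery): given pre-baked (position, value) pairs, a render-time query just blends nearby samples with a smooth falloff kernel:

```python
import math

def filtered_lookup(cloud, p, radius):
    """Weighted average of pre-baked values near p.
    cloud: list of ((x, y, z), baked_value) pairs.
    Uses a simple linear falloff kernel inside `radius`."""
    wsum = 0.0
    vsum = 0.0
    for q, value in cloud:
        d = math.dist(p, q)
        if d < radius:
            w = 1.0 - d / radius  # weight 1 at the sample, 0 at the edge
            wsum += w
            vsum += w * value
    # outside the cloud's reach, return 0 (fully unoccluded here)
    return vsum / wsum if wsum > 0.0 else 0.0
```

The radius trade-off Mario mentions is visible directly in this sketch: a small radius preserves high-frequency detail but leaves gaps between samples; a large one smooths everything out.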
Source: https://forums.odforce.net/profile/148-mario-marengo/content/
direct.directnotify.RotatingLog

from direct.directnotify.RotatingLog import RotatingLog

class RotatingLog(path='./log_file', hourInterval=24, megabyteLimit=1024)

An open() replacement that will automatically open and write to a new file if the prior file is too large or after a time interval.
__init__(self, path='./log_file', hourInterval=24, megabyteLimit=1024)
- Parameters
path – a full or partial path with file name.
hourInterval – the number of hours at which to rotate the file.
megabyteLimit – the number of megabytes of file size the log may grow to, after which the log is rotated. Note: the log file may get a bit larger than the limit due to writing out whole lines (the last line may exceed megabyteLimit, making it more of a "megabyte guideline").
shouldRotate(self)

Returns a bool indicating whether a new log file should be created and written to (while at the same time stopping output to the old log file and closing it).

write(self, data)

Write the data to either the current log or a new one, depending on the return of shouldRotate() and whether the new file can be opened.
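The rotation logic described above is simple to sketch in plain Python. The following is a minimal stand-in (hypothetical class and names, not the Panda3D implementation) that opens a numbered new file whenever the current one is too old or too large; like RotatingLog, it treats the size limit as a guideline, since whole writes always land in one file:

```python
import time

class TinyRotatingLog:
    """Minimal sketch of the RotatingLog idea: rotate to a new
    numbered file when the current one exceeds a time interval
    or a byte limit."""

    def __init__(self, path='./log_file', hour_interval=24, megabyte_limit=1024):
        self.path = path
        self.seconds_limit = hour_interval * 3600
        self.byte_limit = megabyte_limit * 1024 * 1024
        self.file = None
        self.opened_at = 0.0
        self.counter = 0

    def should_rotate(self):
        """True if there is no open file yet, or the open file is
        too old or too large."""
        if self.file is None:
            return True
        too_old = time.time() - self.opened_at > self.seconds_limit
        too_big = self.file.tell() > self.byte_limit
        return too_old or too_big

    def write(self, data):
        """Write to the current log, rotating to a new file first
        if needed."""
        if self.should_rotate():
            if self.file:
                self.file.close()
            self.counter += 1
            self.file = open('%s.%d' % (self.path, self.counter), 'w')
            self.opened_at = time.time()
        self.file.write(data)
        self.file.flush()
```

Because the size check happens before the write, a single large write can still push a file past the limit, which matches the "megabyteGuidline" caveat in the docstring above.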
Source: https://docs.panda3d.org/1.10/python/reference/direct.directnotify.RotatingLog
|
#include <sys/stream.h> #include <sys/ddi.h> clock_t quntimeout(queue_t *q, timeout_id_t id);
Solaris DDI specific (Solaris DDI).
Pointer to a STREAMS queue structure.
Opaque timeout ID from a previous qtimeout(9F) call.
The quntimeout() function cancels a pending qtimeout(9F) request. The quntimeout() function is tailored to be used with the enhanced STREAMS framework interface, which is based on the concept of perimeters. (See mt-streams(9F).) quntimeout() returns when the timeout has been cancelled or finished executing. The timeout will be cancelled even if it is blocked at the perimeters associated with the queue. quntimeout() should be executed for all outstanding timeouts before a driver or module close returns. All outstanding timeouts and bufcalls must be cancelled before a driver close routine can block and before the close routine calls qprocsoff(9F).
The quntimeout() function returns -1 if the id is not found. Otherwise, quntimeout() returns a 0 or positive value.
The quntimeout() function can be called from user, interrupt, or kernel context.
mt-streams(9F), qbufcall(9F), qtimeout(9F), qunbufcall(9F)
Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide
Source: http://docs.oracle.com/cd/E36784_01/html/E36886/quntimeout-9f.html
|
Re: temporaries and const&
From:
"Alf P. Steinbach" <alfps@start.no>
Newsgroups:
comp.lang.c++
Date:
Mon, 30 Apr 2007 20:43:24 +0200
Message-ID:
<59mrmgF2ks3d3U1@mid.individual.net>
* dragoncoder:
On Apr 30, 2:21 pm, "Alf P. Steinbach" <a...@start.no> wrote:
* dragoncoder:
Thanks for the response. In the same context, does this code invoke
undefined behaviour ?
#include <iostream>
template <class T1, class T2>
const T1& max ( const T1& a, const T2& b )
{
return ( a > b ) ? a : b;
}
int main() {
int i = 20; double d = 40;
std::cout << max ( i, d ) << std::endl;
return 0;
}
Yep. It would be less clear-cut if both arguments were "int const&".
I'd have to read the standard's fine print about the ?:-operator to
figure that out, but I think that when the types are identical reference
types it can produce a reference result, thus no UB in that case.
I am a bit confused now. Are you saying it is a case of UB because the
temporary is being accessed after the function call ?
Yes.
The temporary no longer exists at the point where it's used.
Or, in practice it may still exist, but in practice it may also have
been overwritten.
That being the
case a simple function like below will also invoke UB ? Am I right ?
Yes.
const int& bar ( ) { return 10; }
std::cout << bar() << std::endl;
Please enlighten me.
<url:>.
Hm, I'd better buy that book, and quite a few others!
Can't go on recommending books I've never even read (I only have two C++
books, namely TCPPPL in 1st and 2nd edition, the 3rd edition on
permanent load to someone I don't know, and Modern C++ Design, yet I go
on recommending Accelerated C++, C++ Primer, You Can Do It!, etc.).
Thanks again.
You're welcome.
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Source: https://preciseinfo.org/Convert/Articles_CPP/PPP_Experts/C++-VC-ATL-STL-PPP-Experts-070430214324.html
|
How can I convert a sentence like:

James Smith was born on November 17, 1948

into a tuple like:

("James Smith", DOB, "November 17, 1948")

Here is my attempt so far:
import nltk
from nltk import word_tokenize, pos_tag

new = "James Smith was born on November 17, 1948"
sentences = word_tokenize(new)
sentences = pos_tag(sentences)
# {<NNP>+} chunks one or more consecutive proper nouns; the original
# pattern {<NNP*><NNP*>} required exactly two tags and likely wasn't
# what was intended
grammar = "Chunk: {<NNP>+}"
cp = nltk.RegexpParser(grammar)
result = cp.parse(sentences)
print(result)
You could always use a regular expression.
The regex
(\S+)\s(\S+)\s\bwas born on\b\s(\S+)\s(\S+),\s(\S+) will match and return data from specifically the string format above.
Here's it in action:
Regex in python:
import re

regex = r"(\S+)\s(\S+)\s\bwas born on\b\s(\S+)\s(\S+),\s(\S+)"
test_str = "James Smith was born on November 17, 1948"
matches = re.search(regex, test_str)

# group 0 is the entire match; the captured groups start at 1
print(matches.group(1)) # James
print(matches.group(2)) # Smith
print(matches.group(3)) # November
print(matches.group(4)) # 17
print(matches.group(5)) # 1948
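If the numbered groups are hard to read, the same pattern works with named groups (same regex as above, just labeled), which also makes assembling the target tuple straightforward:

```python
import re

# Same extraction as above, but with named groups
pattern = re.compile(
    r"(?P<first>\S+)\s(?P<last>\S+)\s\bwas born on\b\s"
    r"(?P<month>\S+)\s(?P<day>\S+),\s(?P<year>\S+)"
)

m = pattern.search("James Smith was born on November 17, 1948")

# Assemble the ("name", "DOB", "date") tuple from the named groups
record = (f"{m.group('first')} {m.group('last')}",
          "DOB",
          f"{m.group('month')} {m.group('day')}, {m.group('year')}")
print(record)  # ('James Smith', 'DOB', 'November 17, 1948')
```

Named groups keep the extraction readable if the pattern later grows more fields.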
Source: https://codedump.io/share/zrVP6uwptYQt/1/extracting-specific-information-from-data
|
As the ObjectSpaces are running late, the development of O/R mapping tools and persistence frameworks seems to be the popular thing to do among .NET developers. Somebody counted over 30 projects and the number is increasing daily. (Ok. I admit it. I wrote one too. But I have an excuse :). I am playing with EOS (an aspect-oriented extension for C#) at the moment, and I thought I could show a quick implementation of a persistence aspect.
Let's take a look at the following example:
using System;
using ObjectStorage;
using PersistentObjects;

namespace PersistentObjects
{
    class Cat
    {
        string _name;
        int _lives;
        public string Name { get { return _name; } set { _name = value; } }
        public int Lives { get { return _lives; } set { _lives = value; } }
    }

    class Dog
    {
        string _name;
        public string Name { get { return _name; } set { _name = value; } }
    }
}

namespace Main
{
    class MainClass
    {
        [STAThread]
        static public void Main (string[] args)
        {
            Cat cat = new Cat ();
            cat.Name = "Garfield";
            cat.Lives = 9;

            Dog dog = new Dog ();
            dog.Name = "Oddie";

            ObjectSpace.Dump ();
            ObjectSpace.Store ();
        }
    }
}
I would like the classes Cat and Dog from the namespace PersistentObjects to register themselves "magically" in the ObjectSpace so that they can be stored as soon as they change. I also want to assign a unique id to every instance, in order to be able to find them later. This is not easily accomplished in standard CSharp, especially if you don't want to touch the original code. Some persistence tools modify the IL-Code (code enhancement) in order to intercept the field gets and sets. In Java AOP tools like AspectJ we would define a field pointcut to intercept the field calls, and the unique ID would be introduced. This can be done in EOS as well:
using System;
using System.Collections;
using System.IO;
using System.Reflection;

namespace ObjectStorage
{
    public aspect ObjectSpace
    {
        static BindingFlags InstanceFields = BindingFlags.Default |
                                             BindingFlags.Instance |
                                             BindingFlags.Public |
                                             BindingFlags.NonPublic;

        static Hashtable changedObjects = new Hashtable ();

        // Every object in an object space should have a unique id
        introduce in PersistentObjects.any
        {
            private Guid _guid = System.Guid.NewGuid();
            public Guid ID { get { return _guid; } set { _guid = value; } }
        }

        // keep track of changed objects from the PersistentObjects namespace
        after(): fset (any PersistentObjects.any.any)
        {
            changedObjects[thisJoinPoint.getTarget()] = "changed";
        }

        // dump changed objects
        public static void Dump ()
        {
            foreach (DictionaryEntry entry in changedObjects)
            {
                Console.WriteLine ("{0}, {1}", entry.Key.ToString (), entry.Value.ToString ());
            }
        }

        // store all changed objects to persistence.
        public static void Store ()
        {
            #region this code writes the changed objects to an xml file
            StreamWriter writer = File.CreateText ("persistence.xml");
            using (writer)
            {
                writer.WriteLine ("<Persistence>");
                foreach (DictionaryEntry entry in changedObjects)
                {
                    writer.WriteLine ("<" + entry.Key.GetType () + ">");
                    foreach (FieldInfo fieldInfo in entry.Key.GetType ().GetFields (InstanceFields))
                    {
                        writer.WriteLine ("<Field Name=\"" + fieldInfo.Name + "\" type =\"" + fieldInfo.FieldType + "\">");
                        writer.WriteLine (fieldInfo.GetValue (entry.Key));
                        writer.WriteLine ("</Field>");
                    }
                    writer.WriteLine ("</" + entry.Key.GetType () + ">");
                }
                writer.WriteLine ("</Persistence>");
            }
            #endregion
        }
    }
}
If you run this code after compiling it with eos compiler (note that we are not in csharp-pure world anymore!) you will get a file persistence.xml with stored cats and dogs, each having its own id:
<Persistence>
<PersistentObjects.Dog>
<Field Name="_name_Eos_Original" type ="System.String">
Oddie
</Field>
<Field Name="_guid_Eos_Original" type ="System.Guid">
99571520-e2b3-48a4-8ccc-90cd55d212fd
</Field>
</PersistentObjects.Dog>
<PersistentObjects.Cat>
<Field Name="_name_Eos_Original" type ="System.String">
Garfield
</Field>
<Field Name="_lives_Eos_Original" type ="System.Int32">
9
</Field>
<Field Name="_guid_Eos_Original" type ="System.Guid">
801f3149-b825-4ff1-88bd-cb80a107ee21
</Field>
</PersistentObjects.Cat>
</Persistence>
I hope that this (rather simple) example gives you a feeling for what it is like to work with an AOP tool that has a rich pointcut model built into the programming language. It would be nice to see EOS in a more complete and open-source version soon (there is almost no documentation at all right now), or even better, to get something like AspectJ or EOS in the .NET framework itself.
Source: http://geekswithblogs.net/imilovanovic/articles/11584.aspx
|
.. _quick-install-admin-guide:
Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is the Administrator's installation guide.
It describes how to install the whole Synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Wheezy. After successful installation, you will
have the following services running:
* Identity Management (Astakos)
* Object Storage Service (Pithos)
* Compute Service (Cyclades)
* Image Service (part of Cyclades)
* Network Service (part of Cyclades)
and a single unified Web UI to manage them all.
If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.
Installation of Synnefo / Introduction
======================================
We will install the services with the above list's order. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades).

For this guide, we will use two physical nodes, named node1.example.com and
node2.example.com, whose public IPs are "203.0.113.1" and "203.0.113.2"
respectively. It is important that the two machines are under the same domain name.
In case you choose to follow a private installation you will need to
set up a private dns server, using dnsmasq for example. See node1 below for
more information on how to do so.
General Prerequisites
=====================
These are the general synnefo prerequisites, that you need on node1 and node2
and are related to all the services (Astakos, Pithos, Cyclades).
To be able to download all synnefo components you need to add the following
lines in your ``/etc/apt/sources.list`` file:
| ``deb wheezy/``
| ``deb-src wheezy/``
and import the repo's GPG key:
| ``curl | apt-key add -``
Update your list of packages and continue with the installation:
.. code-block:: console
# apt-get update

Also, the directory ``/srv/pithos`` must be visible by both nodes over NFS
(be sure to set the ``no_root_squash`` flag when exporting it); the NFS setup
is described later in this guide.
Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).
Node1
-----
General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* apache (http server)
* public certificate
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)
* ntp (NTP daemon)
* gevent
* dnsmasq (DNS server)
You can install apache2, postgresql, ntp and rabbitmq by running:
.. code-block:: console
# apt-get install apache2 postgresql ntp rabbitmq-server
To install gunicorn and gevent, run:
# apt-get install gunicorn python-gevent
On node1, we will create our databases, so you will also need the
python-psycopg2 package:
.. code-block:: console
# apt-get install python-psycopg2

Next, edit ``/etc/postgresql/9.1/main/postgresql.conf`` and change
``listen_addresses`` to ``'*'`` :
.. code-block:: console
listen_addresses = '*'
Furthermore, edit ``/etc/postgresql/9.1/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :
.. code-block:: console
host all all 203.0.113.1/32 md5
host all all 203.0.113.2/32 md5
Make sure to substitute "203.0.113.1" and "203.0.113.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:
.. code-block:: console
# /etc/init.d/postgresql restart
Certificate Creation
~~~~~~~~~~~~~~~~~~~~~
Node1 will host Cyclades. Cyclades should communicate with the other Synnefo
Services and users over a secure channel. In order for the connection to be
trusted, the keys provided to Apache below should be signed with a certificate.
This certificate should be added to all nodes. In case you don't have signed keys you can create a self-signed certificate
and sign your keys with this. To do so on node1 run:
# apt-get install openvpn
# mkdir /etc/openvpn/easy-rsa
# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
# cd /etc/openvpn/easy-rsa/2.0
# vim vars
In vars you can set your own parameters such as KEY_COUNTRY
.. code-block:: console
# . ./vars
# ./clean-all
Now you can create the certificate
.. code-block:: console
# ./build-ca
The previous will create a ``ca.crt`` file in the directory ``/etc/openvpn/easy-rsa/2.0/keys``.
Copy this file under ``/usr/local/share/ca-certificates/`` directory and run :
.. code-block:: console
# update-ca-certificates
to update the records. You will have to do the following on node2 as well.
Now you can create the keys and sign them with the certificate
.. code-block:: console
# ./build-key-server node1.example.com
This will create a ``01.pem`` and a ``node1.example.com.key`` files in the
``/etc/openvpn/easy-rsa/2.0/keys`` directory. Copy these in ``/etc/ssl/certs/``
and ``/etc/ssl/private/`` respectively and use them in the apache2
configuration file below instead of the defaults.
Apache2 setup
~~~~~~~~~~~~~
Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:
<VirtualHost *:80>
ServerName node1.example.com
RewriteEngine On
RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
RewriteRule ^(.*)$ - [F,L]
RewriteRule (.*)
</VirtualHost>
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerName node1.example.com
Alias /static "/usr/share/synnefo/static"
# SetEnv no-gzip
# SetEnv dont-vary
AllowEncodedSlashes On
RequestHeader set X-Forwarded-Protocol "
>
</IfModule>
Now, enable the two sites (``a2ensite synnefo`` and ``a2ensite synnefo-ssl``).
.. warning:: Do NOT start/restart the server yet. If the server is running::
# /etc/init.d/apache2 stop
.. _rabbitmq-setup:
Message Queue setup
~~~~~~~~~~~~~~~~~~~
The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:
.. code-block:: console
# rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
# rabbitmqctl set_permissions synnefo ".*" ".*" ".*"
We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned in the General Prerequisites section, there should be a directory
called ``/srv/pithos`` visible by both nodes. We create and setup the ``data``
directory inside it:
.. code-block:: console
# mkdir /srv/pithos
# cd /srv/pithos
# mkdir data
# chown www-data:www-data data
# chmod g+ws data
This directory must be shared via NFS.
In order to do this, run:
.. code-block:: console
# apt-get install rpcbind nfs-kernel-server
Now edit ``/etc/exports`` and add the following line:
.. code-block:: console
/srv/pithos/ 203.0.113.2(rw,no_root_squash,sync,subtree_check)
Once done, run:
.. code-block:: console
# /etc/init.d/nfs-kernel-server restart
DNS server setup
~~~~~~~~~~~~~~~~
If your machines are not under the same domain name you have to set up a dns server.
In order to set up a dns server using dnsmasq do the following:
.. code-block:: console
# apt-get install dnsmasq
Then edit your ``/etc/hosts/`` file as follows:
203.0.113.1 node1.example.com
203.0.113.2 node2.example.com
dnsmasq will serve any IPs/domains found in ``/etc/resolv.conf``.
There is a "bug" in libevent 2.0.5 where, if you have multiple nameservers
in your ``/etc/resolv.conf``, libevent
will round-robin against them. To avoid this, you must use a single nameserver
for all your needs. Edit your ``/etc/resolv.conf`` to include your dns server:
nameserver 203.0.113.1
Because of the aforementioned bug, you can't specify more than one DNS servers
in your ``/etc/resolv.conf``. In order for dnsmasq to serve domains not in
``/etc/hosts``, edit ``/etc/dnsmasq.conf`` and change the line starting with
``#resolv-file=`` to:
.. code-block:: console
resolv-file=/etc/external-dns
Now create the file ``/etc/external-dns`` and specify any extra DNS servers you
want dnsmasq to query for domains, e.g., 8.8.8.8:
.. code-block:: console
nameserver 8.8.8.8
In the ``/etc/dnsmasq.conf`` file, you can also specify the ``listen-address``
and the ``interface`` you would like dnsmasq to listen to.
Finally, restart dnsmasq:
.. code-block:: console
# /etc/init.d/dnsmasq restart
You are now ready with all general prerequisites concerning node1. Let's go to
node2.
Node2
-----
General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* ntp (NTP daemon)
* gevent
* certificates
* dnsmasq (DNS server)
You can install the above by running:
.. code-block:: console
# apt-get install apache2 postgresql ntp
Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:
.. code-block:: console
# apt-get install python-psycopg2
Database setup
~~~~~~~~~~~~~~

No database setup is needed on node2; it will connect to the databases
running on node1.
Apache2 setup
~~~~~~~~~~~~~
Create the file ``synnefo`` under ``/etc/apache2/sites-available/`` containing
the following:

<VirtualHost *:80>
ServerName node2.example.com
</VirtualHost>
Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:
.. code-block:: console
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerName node2.example.com
SetEnv no-gzip
SetEnv dont-vary
AllowEncodedSlashes On
RequestHeader set X-Forwarded-Protocol "
<Proxy * >
Order allow,deny
Allow from all
</Proxy>
</VirtualHost>
</IfModule>

As in node1, enable the two sites, but do NOT start/restart the server yet.
Acquire certificate
~~~~~~~~~~~~~~~~~~~
Copy the certificate you created before on node1 (`ca.crt`) under the directory
``/usr/local/share/ca-certificate`` and run:
# update-ca-certificates
to update the records.
DNS Setup
~~~~~~~~~
Add the following line in ``/etc/resolv.conf`` file
.. code-block:: console
nameserver 203.0.113.1
to inform the node about the new DNS server.
As mentioned before, this should be the only ``nameserver`` entry in
``/etc/resolv.conf``.

You can now install Astakos and Pithos on node1 (with the apt repository added
and updated, as described previously), by running:
# apt-get install snf-astakos-app snf-pithos-backend
.. _conf-astakos:
Configuration of Astakos
========================
Gunicorn setup
--------------
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:
.. code-block:: console
# mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo

Then, edit ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console
DATABASES = {
'default': {
# 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
'ENGINE': 'django.db.backends.postgresql_psycopg2',
# ATTENTION: This *must* be the absolute path if using sqlite3.
# See:
'NAME': 'snf_apps',
'USER': 'synnefo', # Not used with sqlite3.
'PASSWORD': 'example_passw0rd', # Not used with sqlite3.
# Set to empty string for localhost. Not used with sqlite3.
'HOST': '203.0.113.1', # the database runs on node1
}
}

Then, in the Astakos settings, set:

.. code-block:: console

ASTAKOS_COOKIE_DOMAIN = '.example.com'
ASTAKOS_BASE_URL = '
The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASE_URL`` is the Astakos top-level URL. Appending an
extra path (``/astakos`` here) is recommended in order to distinguish
components, if more than one are installed on the same machine.
.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
If you would like to enable it, you have to edit the following options:
.. code-block:: console
ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
ASTAKOS_RECAPTCHA_USE_SSL = True
ASTAKOS_RECAPTCHA_ENABLED = True
For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
go to and create your own pair.
Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :
.. code-block:: console
CLOUDBAR_LOCATION = '
CLOUDBAR_SERVICES_URL = '
CLOUDBAR_MENU_URL = '
Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1 which is where we have installed Astakos.
If you are an advanced user and want to use the Shibboleth Authentication
method, read the relative :ref:`section <shibboleth-auth>`.
Email delivery configuration
----------------------------

Many of the ``Astakos`` operations require the server to notify service users
and administrators via email. E.g. right after the signup process, the service
sends an email to the registered email address containing a verification URL.
After the user verifies the email address, Astakos once again needs to
notify administrators with a notice that a new account has just been verified.
More specifically Astakos sends emails in the following cases
- An email containing a verification link after each signup process.
- An email to the people listed in ``ADMINS`` setting after each email
verification if ``ASTAKOS_MODERATION`` setting is ``True``. The email
notifies administrators that an additional action is required in order to
activate the user.
- A welcome email to the user email and an admin notification to ``ADMINS``
right after each account activation.
- Feedback messages submitted from the Astakos contact view and the Astakos
feedback API endpoint are sent to the contacts listed in the ``HELPDESK`` setting.
- Project application request notifications to people included in ``HELPDESK``
and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
accepted/declined etc.) to project members or project owners.
Astakos uses the Django internal email delivering mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below. Alter the following example to meet your
smtp server characteristics. Notice that the smtp server is needed for a proper
installation.
Edit ``/etc/synnefo/00-snf-common-admins.conf``:
.. code-block:: python
# this gets appended in all email subjects
# Address to use for outgoing emails
DEFAULT_FROM_EMAIL = "server@example.com"
# Email where users can contact for support. This is used in html/email
# templates.
CONTACT_EMAIL = "server@example.com"
# The email address that error messages come from
SERVER_EMAIL = "server-errors@example.com"
Notice that since email settings might be required by applications other than
Astakos, they are defined in a different configuration file than the one
previously used to set Astakos specific settings.
Refer to
`Django documentation <
for additional information on available email settings.
As referred to in the previous section, the recipients list differs based on
the operation that triggers the email notification. Specifically, for the
various kinds of recipients (administrators, managers, helpdesk etc.) synnefo
provides the following settings, located in ``00-snf-common-admins.conf``:
.. code-block:: python
ADMINS = (('Admin name', 'admin@example.com'),
('Admin2 name', 'admin2@example.com'))
MANAGERS = (('Manager name', 'manager@example.com'),)
HELPDESK = (('Helpdesk user name', 'helpdesk@example.com'),)
Alternatively, it may be convenient to send e-mails to a file, instead of an actual smtp server, using the file backend. Do so by creating a configuration file ``/etc/synnefo/99-local.conf`` including the following:

.. code-block:: python

EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
EMAIL_FILE_PATH = '/tmp/app-messages' # change this to a suitable location
Enable Pooling
--------------
This section can be bypassed, but we strongly recommend you apply the following,
since they result in a significant performance boost.
Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.
To use, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:
.. code-block:: console
from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
monkey_patch_psycopg2()
Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:
from synnefo.lib.db.psyco_gevent import make_psycopg_green
make_psycopg_green()
Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.
All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:
.. code-block:: console
# Monkey-patch psycopg2
from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
monkey_patch_psycopg2()
# If running with greenlets
from synnefo.lib.db.psyco_gevent import make_psycopg_green
make_psycopg_green()
'OPTIONS': {'synnefo_poolsize': 8},
# ATTENTION: This *must* be the absolute path if using sqlite3.
# See:
}
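Conceptually, what the pooled driver buys you can be sketched with a generic, stdlib-only object pool. This is a hypothetical illustration (Synnefo's actual driver does this transparently inside psycopg2): connections are handed back and reused instead of being re-established per request, and the pool never grows past its configured size:

```python
import queue

class ConnectionPool:
    """Generic object pool illustrating what a pooling DB driver does:
    hand out a free pooled connection when one exists, create a new one
    while under the size limit, and otherwise block until one is
    released."""

    def __init__(self, factory, size=8):
        self._factory = factory            # callable that opens a connection
        self._pool = queue.Queue(maxsize=size)
        self._created = 0
        self._size = size

    def acquire(self):
        try:
            return self._pool.get_nowait() # reuse a pooled connection
        except queue.Empty:
            if self._created < self._size:
                self._created += 1
                return self._factory()     # grow up to the pool size
            return self._pool.get()        # block until one is released

    def release(self, conn):
        self._pool.put(conn)               # return the connection for reuse
```

The ``synnefo_poolsize`` option above plays the role of ``size`` here; the performance gain comes from ``acquire()`` usually hitting the reuse branch instead of opening a fresh TCP connection to PostgreSQL.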
Database Initialization
-----------------------
After configuration is done, we initialize the database by running the migrations:

.. code-block:: console

# snf-manage migrate quotaholder_app
Then, we load the pre-defined user groups:

.. code-block:: console

# snf-manage loaddata groups
.. _services-reg:
Services Registration
---------------------
When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (Astakos,
Cyclades and Pithos) along with the services they provide. Note that you
have to register at least Astakos in order to have a usable authentication
system. For each component, you will be asked to provide two URLs: its base
URL and its UI URL.
The former is the location where the component resides; it should equal
the ``<component_name>_BASE_URL`` as specified in the respective component
settings. For example, the base URL for Astakos would be
``
The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/`` where ``base_url`` the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --url new_ui_url``.)
The command will also register automatically the resource definitions
offered by the services.
.. code-block:: console

# snf-component-register
.. note::
This command is equivalent to running the following series of commands;
it registers the three components in Astakos and then in each host it
exports the respective service definitions, copies the exported json file
to the Astakos host, where it finally imports it:
.. code-block:: console
astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
astakos-host$ snf-manage service-export-astakos > astakos.json
astakos-host$ snf-manage service-import --json astakos.json
cyclades-host$ snf-manage service-export-cyclades > cyclades.json
# copy the file to astakos-host
astakos-host$ snf-manage service-import --json cyclades.json
pithos-host$ snf-manage service-export-pithos > pithos.json
# copy the file to astakos-host
astakos-host$ snf-manage service-import --json pithos.json
Notice that in this installation astakos and cyclades run on node1 and pithos runs on node2.
Setting Default Base Quota for Resources
----------------------------------------
We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or
memory size limits, consider adding an appropriate size suffix to the
numeric value, e.g. 10240 MB, 10 GB, etc.
# snf-manage resource-modify --default-quota-interactive
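As a sanity check on the size suffixes mentioned above, the sketch below shows that ``10240 MB`` and ``10 GB`` denote the same limit under binary units; the helper is illustrative and not part of ``snf-manage``:

```python
# Illustrative size-suffix arithmetic: with binary units (1 GB = 1024 MB),
# 10240 MB and 10 GB are the same quantity. Not part of snf-manage.
UNITS = {'MB': 2**20, 'GB': 2**30, 'TB': 2**40}

def parse_size(text: str) -> int:
    value, unit = text.split()
    return int(value) * UNITS[unit]

print(parse_size('10240 MB') == parse_size('10 GB'))
# -> True
```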
.. _pithos_view_registration:
Register pithos view as an OAuth 2.0 client
-------------------------------------------
Starting from synnefo version 0.15, the pithos view, in order to get access to
the data of a protected pithos resource, has to be granted authorization for
the specific resource by astakos.
During the authorization grant procedure, it has to authenticate itself with
astakos, since the latter must be able to identify the client making the
request. To register the pithos view as an OAuth 2.0 client of astakos, we
have to run the following command::

    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url
Servers Initialization
----------------------
Finally, we initialize the servers on node1:
.. code-block:: console
root@node1:~ # /etc/init.d/gunicorn restart
root@node1:~ # /etc/init.d/apache2 restart

Let's assume that you created the user with username ``user@example.com``.
Now we need to activate that user. Return to a command prompt at node1 and run:
.. code-block:: console
root@node1:~ # snf-manage user-list
This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``, and the flags "active" and
"verified" set to False. Now run:
root@node1:~ # snf-manage user-modify 1 --verify --accept
This verifies the user email and activates the user.

Now install the pithos packages on node2 (assuming you have made the additions
needed in your ``/etc/apt/sources.list`` file, as described previously), by
running:
.. code-block:: console
# apt-get install snf-pithos-app snf-pithos-backend
Now, install the pithos web interface:
.. code-block:: console
#.
.. _conf-pithos:
Configuration of Pithos
=======================
Gunicorn setup
--------------
Copy the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
(as happened for node1):
.. code-block:: console
# cp /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo

Then edit its options as you did on node1.
Coding Standard
When possible, follow the internal Microsoft Coding Standard* as a guideline for coding. "Guideline" means that it is a way to set a routine and standardize things when it makes sense to do so. It's not a law or something that is ironclad. Not following it will not get patches rejected, though they may be modified when applied. Common sense and sound logic trump all.
The reasoning why the guidelines should be followed is that anyone can open any file and know where the code is located; it's consistent, easy to read and follow.
* minus putting the using statements inside the namespace.
See also: IRC log
<scribe> Chair: Steven/Roland
s/USA>/USA,/
Hi there Oedipus
<scribe> Scribe: Steven
Rich; There is the issue of whether we are OK with the role attribute being chameleon
\s/:/:/
hi Mark
Roland: Why wouldn't we?
Steven: Well, it's just that modularization says you can't
Roland: So then we have to change that rule in modularization
<markbirbeck> Hi everyone.
<oedipus> :(
Steven: I don't think I feel strongly either way. Class is similar in some senses. It is in other specs, and it 'means' the same, and it is spelled the same
Roland: I was actually suggesting that we create role outside of modularization, and then create a modularization module that imports that
So Oedipus, was that frowny to mean that you don't support chameleon role?
<oedipus> i'm intrigued by roland's suggestion
Well, people want to have it in their spec without having to namespace it
<g role="thingy"> rather than <g xh:
<oedipus> the frown was because modularization said we couldn't, but what would changing that rule in modularization entail -- ramifications?
Roland: So people have the choice of which of those two forms they care to use
Well, Roland is suggesting just changing it for role
<oedipus> ok
Rich: So what can we do?
<markbirbeck> When @role appears without a namespace in another language, it is because that language has added it to its own language. Just like @class in SVG is *not* @class in HTML, but they have given it the same semantics to make it easier for people to use.
Roland: We just put it back through last call with the change made
Steven: Short three weeks.
Roland: The question is, one spec or two?
<markbirbeck> So all we have to do is what we discussed the other day--remove the restriction that "you can't use our stuff unless you namespace prefix it".
<oedipus> gut feeling is one
<markbirbeck> It's your language...you do what you want with it, is the point.
Steven: I would prefer just one, with an English sentence "Rule 3.1.5 of modularization does not apply to this attribute" or somesuch
Roland: Good
<Rich> yes
<oedipus> +1 from GJR
<markbirbeck> Let's solve this for all of our modules, though.
Steven: This is our newest
member, Christina
... she is going to help with our specs and tutorials
Christina: I am based in New York, and work for STC
<markbirbeck> What was the question "one spec or two" about?
Whether to have a non-modularization role, and a modularization role
Rule 5 in that
Steven: So it is the last MUST
that changes
... Not sure how Shane feels about this change
... So change the MUST to a SHOULD?
[General nodding]
Roland: We did this with xml:id
Steven: But exactly the other way
round
... So the reason we went for xml:id is that it was generally processable WITHOUT knowing anything about the enclosing language
... what we are doing is now the opposite: you have to know the language to know whether @role is our role or not
... so it is worse really
... it is less generally applicable
Roland: But they have the choice to do the right thing or not
<oedipus> so would MUST to SHOULD be a "best practices" convention -- you don't have to, but you really probably should, unless you're lazy, ignorant or just don't give a damn?
I think so oedipus, yes
Roland: There is enough rope for people to hang themselves with
<oedipus> hmmm... well my categorization covers some cases we already know about...
Steven: So M12N is on the brink of transition, so do we make that change before the transition?
Roland: Yes. It is a relaxation, so it won't break anyone's software
<oedipus> +1 -- i think that may be a MUST not a SHOULD
I don't quite understand what you are saying Oedipus
Here is the old text:
and here is the proposed new text:
"The schema that defines the document type may define additional elements and attributes. However, these MUST be in their own XML namespace [XMLNAMES]. If additional elements are defined by a module, the attributes defined in included XHTML modules are available for use on those elements, but SHOULD be referenced using their namespace-qualified identifier (e.g., xhtml:class). The semantics of the attributes remain the same as when used on an XHTML-namespace element."
<oedipus> i think we need to add the new text before M12N transition
(one word has changed MUST => SHOULD)
OK, right
Steven: So it looks like we have agreement here, modulo Shane
Let me ping him now
<markbirbeck> We should change M12N, for the following reasons:
<scribe> ACTION: Shane to change 3.1.5 in Modularization 1.1 [recorded in]
Steven: Does this mean role has to go through last call again? (I think so)
<markbirbeck> Above, this was said: "... so it is worse really ... it is less generally applicable", referring to @role v. @xh:role.
<markbirbeck> I think the big problem with the way XML in general has been used, and M12N in particular is that we've assumed that all languages are the same. But it's obviously not true. Some document architecture that transfers bank details around and blends items form many different sources as transactions pass through the system is very different to mark-up used to create web-pages, and which could even be hand authored. So the fact that in HTML or SVG you have
<markbirbeck> The 'bank transaction' super-language can still use @xh:role.
ends at "or SVG you have "
Yes Mark, that is true, but you can't write a generic 'role' processor; it has to know which languages import @role where
whereas before, you didn't need that information
You could just depend on seeing the xhtml namespace
Steven: I see that we do need to
change the role module:
...
"If the host language is not in the XHTML namespace, the document MUST contain an xmlns declaration for the XHTML Role Attribute Module namespace"
Steven: So we have to change that too
<scribe> ACTION: Shane to change to match [recorded in]
<scribe> ACTION: Steven to put role through last call again [recorded in]
Steven: I was pleased to hear that
the WAI ARIA stuff works in Dojo already
... so I don't see the issue really
... it works fine
<oedipus> the dojo examples work much better than the current mozilla samples (been doing QA on ARIA test materials for PF)
<markbirbeck> Steven, my point is that we don't have generic role processors on the client anyway. The only generic processing we have is server-side XSLT translations. So there's little point in gearing everything around something that hasn't happened.
<markbirbeck> So I'm *very* glad to see this change. :)
Well, role is so new. But we are putting hurdles in the way of role processors by making role harder to identify
Now we *can't* have generic role processors
but it used to be possible
<markbirbeck> But it wasn't.
so the situation has got worse not better
<markbirbeck> There was no architecture in place for that generic processor to hook into.
It was possible. You only need to know if there was a role attribute in the xh namespace
<markbirbeck> Exactly.
now you have to know the list of languages that have a role, and which elements use it
<markbirbeck> Like I said "there was no architecture...". :)
<markbirbeck> In today's technologies, how do you find something in the XHTML namespace?
<markbirbeck> It's circular.
<markbirbeck> You end up trying to invent a 'hook', like 'aria-' or 'xh-' to indicate it.
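The generic-processing argument made above can be illustrated with a short sketch (editorial addition, not part of the minutes): a processor can match a namespace-qualified ``xh:role`` in any host language, but an unqualified chameleon ``role`` is indistinguishable from a host-language attribute without per-language knowledge. The element and attribute values below are assumptions for the example.

```python
# Sketch of the generic-processor argument: a namespace-qualified xh:role
# is recognizable in any host language, while a bare "role" may or may not
# "mean" the XHTML role. Element names and values are illustrative.
import xml.etree.ElementTree as ET

XHTML_NS = 'http://www.w3.org/1999/xhtml'
svg = ET.fromstring(
    '<svg xmlns:xh="http://www.w3.org/1999/xhtml">'
    '<g xh:role="navigation"/><g role="thingy"/></svg>')

# A generic processor can match the qualified attribute unambiguously...
qualified = [e.get('{%s}role' % XHTML_NS) for e in svg
             if e.get('{%s}role' % XHTML_NS) is not None]
# ...while the unqualified one needs host-language knowledge to interpret.
unqualified = [e.get('role') for e in svg if e.get('role') is not None]

print(qualified, unqualified)
# -> ['navigation'] ['thingy']
```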
<alessio> hallo
<markbirbeck> In short, as long as the biggest consumer of XHTML is actually an HTML browser, you have a problem.
Masataka Yakura joins the meeting
<markbirbeck> The fact that we can't have a generic role processor is so insignificant a problem when compared to the enormous problem created by that disjuncture...sorry to say. :(
Masataka: I represent
Mitsue-Links Co., Ltd.
... we build sites using XHTML and CSS
... I am joining both this and the HTML5 working groups
Steven: At the TP on Wednesday, we tried to identify XHTML2 as an authoring language, you write your site once, and direct it to different devices, with accessibility, personalization, device independence all for free, or low cost
Ho Shane
Hi
Shane, you might like to look at the minutes, and scream and yell, or as you see fit
<alessio> hi steven :)
Roland: So now we have the discussion about role, the same question applies to access
Steven: Should it be chameleon?
Roland: Yes
<ShaneM>
Steven: I happen to know that the
Internationalization people are talking with SVG about keys
today
... (key="s")
... (as a side issue)
So Shane, can you live with the decision to make @role chameleon?
<ShaneM> few things this group could possibly do would ever effect my ability to live.
scribe: and change the MUST into a SHOULD
lol
especially if you don't come to meetings ;-)
<Roland_> but could they affect your will to live :-)
<ShaneM> note that making this change to M12N will require our returning to last call
<oedipus> the thinking was 3 weeks LC, right?
Are you sure?
because it won't invalidate any software or documents
<ShaneM> am I sure that changing the requirements of M12N so that any attribute in the xhtml namespace can be magically imported into any other namespace with the same semantics requires a new last call? oh yeah.
<ShaneM> it changes the requirements for behavior on xhtml family user agents
<ShaneM> in that they MUST recognize attributes that look like XHTML as being XHTML all the time. Somehow.
But M12N only defines syntax, not behaviour
Roland: If Shane is right, and it means a new last call, then I think we should only do it for role, and say that rule 3.1.5 doesn't apply only in that spec
<ShaneM> no it defines the behavior of xhtml family user agents - see section 3.5
<ShaneM> oh wait... interesting.
Steven: I don't see it
<ShaneM> question: if I extend XHTML 1.1 with my own elements and I want to use role on those elements, current M12N says I MUST reference role as <my:element xh:
Steven: why this change would affect any XHTML conformant software or documents
<Roland_> orr we develop two specifications, 1) Role attribute that defines an unnamespaced attribute, 2) An XHTML Role Attribute Module that incorporate the "Role attribute spec" into a very similarly named XHTML Module. Just lots of work which we can hopefully avoid.
yes, and we are now saying that <my:element is fine
or at least, OK, though the other form is preferable
<ShaneM> okay, then that is a conformance change on conforming user agents
It is like everyone has @class, which is actually the same one
spelled the same, has the same effect
but in different namespaces
<ShaneM> because of section 3.1 clause 5.
yes
<ShaneM> a current conforming user agent will not expect that "role" == "xh:role" semantically
we want to change the MUST there into a SHOULD
Well, it can't identify it syntactically; it has to use other knowledge to identify that they are the same
<ShaneM> I understand. I dont care - I am just pointing out that this change is a conformance change for user agents because they were not previously required to interpret them as the same.
that is why xh:role is preferable (there is no doubt it is the same)
In reality though, anyone can create a language with the element <g role="foo">
and just *say* it is the same as the XHTML role
"because we say so"
so all we are doing is accepting that
<Rich> anyone can say so
<ShaneM> yes. and that's why chameleon namespaces are BAD. 'cause they can also say "it is NOT the same" and an ARIA aware browser would have no way of knowing the difference
Yes, you are right there
<ShaneM> but whatever. you all do what you think is best. I will happily implement it.
I hear the magic words
RESOLUTION: allow role to be chameleon
<ShaneM> "The bar is open" ?
lol
Steven: do we agree that M12N
does not need to go through last call again?
... we are required to say what has changed since the last transition
... so the change will be pointed out
<markbirbeck> I agree.
to what?
I see
that we don't need to go to last call
good
<ShaneM> ok
Roland: Does the same apply to the access element?
<markbirbeck> it doesn't affect XHTML-based user agents, since they can already use @role, and it doesn't affect non-XHTML based user agents, since they will be using @xh:role. But going forward, it allows SVG to use @role, and say that it's the same as @xh:role.
<markbirbeck> (My comment is in relation to LC, not access.)
yes
Roland: Let us leave as is now
and see who screams
... publish as-is
<markbirbeck> Leave what, as is?
Access
don't change the M12N rules for access (yet)
<ShaneM> err....
<markbirbeck> It should be usable in other languages without a prefix, if that's what people want.
<ShaneM> if we are changing M12N, then we perforce are changing all modules. role, access, whatever.
<markbirbeck> +1 to Shane.
we have only changed the rules for attributes today
<markbirbeck> We're fundamentally changing our philosophy, after all.
<Rich> we could
<Roland_> is there an equivalent rule for elements?
<ShaneM> The logical ramification of this change is that in ANY XHTML HOST LANGUAGE any non-xhtml elements can use attributes from XHTML Attribute Collections in an unqualified form.
I don't think we have the same rule for elements
Maybe someone can point me to it
<ShaneM> actually XML has that rule. we dont need one
<ShaneM> elements are in a namespace. Whatever the default namespace is dictates the form of elements referenced in a document.
But do we have a rule that says that elements defined here MUST be in the XHTML namespace?
I don't see that rule anywhere, but I may have missed it
(I see it for attributes)
If not, then we don't need to discuss it
<Roland_> OK, so what is it that we actually want to allow? Once we agree on that we can decide which rules need to be added/changed/deleted.
Roland, if you use a module, that you can import it into a different namespace
ie that the definition of the namespace is not hardwired in the module
<ShaneM> For elements too?
Roland: FIne with me
Shane, yes, that is the discussion
<ShaneM> It was absolutely the intent that XHTML M12N module elements are in the xhtml namespace.
Yes Shane, but does it *say* that?
We may have intended it, but not required it
Steven: Looking at the time, and that we have a session with the HTML WG at 11, maybe we should break for coffee now
<ShaneM> Well - in fact we may explicitly permit the opposite. See section 3.3. clause 5
Question - we want to talk about future meetings; who will be around later to join that?
<ShaneM> I suspect this was to permit the SVG use of "a" and "p" but I dont remember
We are breaking for coffee, and html wg session
watch this space
<ShaneM> The implication of this text, to me, is that XHTML Family Modules define their elements (and attributes) in a namespace, but that other modules are permitted to import them into another namespace as long as the content model remains the same or expands.
<ShaneM> ok
<alessio> steven, do you intend also future FTF's?
<alessio> because I would like to propose a possible FTF in Venice, Italy
<alessio> maybe with PF and GRDDL WGs
<markbirbeck> You are probably all out having fun, but I'd like to make two comments on the 'M12N modules in other languages' discussion, ready for your return. :)
<markbirbeck> The first is that, when I did my work on the M12N schemas, I was using XForms, MathML and SVG as my 'case studies' in order to get things 'right'. Some of the changes that myself and Shane then made to M12N were based on this experience of trying to mix languages, and I don't think there were any severe limitations that would prevent us from taking a much more 'mix and match' approach.
<markbirbeck> Second, over in RDFa-land we've been having discussions with the Open Document people, and they are keen to incorporate RDFa into ODF. They are going to import the attributes into their own namespace, as we've just been discussing, but we should do everything we can to ensure that they can use 'our' modules.
<ShaneM> Note that the schema implementation of XHTML M12N uses "late binding" to define the namespace, just to accomodate this.
<markbirbeck> (I.e., as opposed to having to 'copy' the modules, in order to get round limitations imposed by the modularisation framework.)
<markbirbeck> Yes.
<markbirbeck> It's a shame that it's 'the modularisation of XHTML', because what we really have is a 'modularisation framework', and then a bunch of modules defined in the XHTML namespace. But other people could create a module in a different namespace, and make it 'modularisable'.
<oedipus> GJR +1 to f2f in venice with PF and GRDDL -- perhaps add devs of EARL, ERT () -- been good interaction between EARL and GRDDL, but don't know if each is aware of the other's progress
<ShaneM> and many people have, Mark
<alessio> also RDFa, gregory...
<oedipus> yes, good point, alessio
<oedipus> seriously, your remarks about a modularization framework are quite rich food for thought...
<markbirbeck> Shane...who has?
<ShaneM> mathml? svg? jabber?
<ShaneM> xforms
<ShaneM> our framework was ALWAYS intended for use by other groups. we were chartered to create a pluggable architecture long before there was a CDF group
we are back
CSB=Christina Bottomly (new WG member)
<CSB> hi!
<markbirbeck> Shane, but the pluggable architecture is for XHTML-based languages.
<markbirbeck> My point is that the architecture should be for *any* language that one might want to create.
<oedipus> +1 to architecture applying to ANY language
<markbirbeck> (We may be agreed that this is a desirable goal. :) I'm just saying that at the moment that isn't what XHTML M12N does...it's currently 'the modularisation of XHTML'.)
In fact, we did produce a general modularization architecture, but we were only *chartered* to do it for XHTML, hence the name
but indeed, the intention was to use it for any markup at all
It is only our *modules* that carry the XHTML Namespace
So you can use M12N for *any* namespace
<markbirbeck> Mmm...I think you might be reading history backwards. But it doesn't matter. :)
as has been pointed out by and to the TAG
It was a CYA that we named it XHTML Modularization
we long discussed whether we should call it XML Modularization
and decided against it for political reasons
+1 to Venice Alessio
all for it
no better place on earth for a FtF
<oedipus> so, we strip out the ML-specifics and deliver a note/WD of XMod - The Extensible Modularization Module?
Well Oedipus, if you look, that is already there
Originally there were two specs - the modularization architecture and the modules
we decided eventually to merge them
but the split is still in the document
Ah, sorry :-S
Roland: Let us not get bogged
down on the mechanisms
... let us focus on the deliverables
... I think there should be a more generic structure
<alessio> thx steven
Roland: like head and body
everywhere
... and a foot
... so a doc can have all three, a table can
... so not head body and thead tbody, but just use head and body
Steven: So our elements should depend on context
Roland: Yes
Rich: That is good for accessibility
Roland: I don't know how we do that in Schema, but it doesn't matter
Steven: That has never been a constraint
Roland: Anyway, I think we need a class of structure element more than block and inline
<oedipus> strong +1 to roland's class of structure
Steven: In the past things like
meta everywhere has been blocked for us by IE that puts meta in
the head regardless of where it is in the source
... but that is Not Our Problem (NOP) now
Roland: This structure idea makes
embedding much easier
... because then you can compose documents much more easily
Steven: Yes! For instance our spec authoring system has separate files for chapters, but if each chapter is an html doc then you can't easily compose them
<alessio> +1 to roland for me too
Steven: this would make composability a doddle!
[Roland steps to board]
Steven: And if each chapter is only an html fragment, you can't load them into html editors
Roland writes:
<section>
<head>
<h> Heading for section</h>
</head>
<body>
...
</body>
<foot>
</foot>
</section>
Steven: What is the semantics of foot?
Roland: Think of tfooter, and then generalise
Steven: If we focus on
composability, I think that will help us focus on what we are
trying to achieve, and help answer questions as they
arise
... I wonder if this will affect how RDFa defaults @about
<alessio> <Steven> +1 to Venice Alessio
<alessio> <Steven> all for it
<alessio> <Steven> no better place on earth for a FtF
<alessio> pardon, irc window is joking
Steven: Since currently it defaults to the document, but this might require it to default to the unit of composability (need to think about this)
<alessio> roland, it seems like a generic <tfoot>, semantically could it be a sort of closing <div>?
Alessio, yes, I think so, but with possible extra semantics
Rich: You see a lot of divs used this way now, they clearly need structure
Steven: Yes, though they use divs
because they are completely presentation free. If there were a
way to turn off all existing presentation on an element, I'm
sure people would use more meaningful structure
... maybe this is a comment we should send to CSS
... please make it easy to unset all properties on an element, so I have complete control of the presentation
<alessio> I understand, it should have a definite role for content, more than a possible <div role="foot">
Yes Alessio
<Roland_> e.g.
<Roland_> <section>
<Roland_> <head> </head>
<Roland_> <body>
<Roland_> </body>
<Roland_> <foot> </foot>
<Roland_> </section>
<Roland_> <section>
<Roland_> <head> </head>
<Roland_> <body>
<Roland_> <section>
<Roland_> <head> </head>
<Roland_> <body>
<Roland_> </body>
<Roland_> <foot> </foot>
<Roland_> </section>
<Roland_> </body>
<Roland_> <foot> </foot>
<Roland_> </section>
<Tina> People also tend to use DIVs because they don't understand the concept of semantic-bearing elements. This is a sad fact even in 2007. That's why it is, among other things, important NOT to use DIVs as example elements if semantics is involved.
<Roland_> <table>
<Roland_> <head> </head>
<Roland_> <body>
<Roland_> </body>
<Roland_> <foot> </foot>
<Roland_> </table>
<oedipus> many many uses for foot - pagination information for online text books, for example; things that users will either want to know are there, inspect/read, or want to skip, need as much context as possible to make decision, so like specificity
Hi Tina!
Agree Tina
<alessio> developers often use divs to design boxes and so on
<CSB> yes, <div> is the hold all for web developers
<oedipus> the catch-all container
Rich: Do we really need the body tag?
Roland: It makes it explicit
<alessio> so these elements have de facto replaced old <td> in tableless layout
<alessio> without a real semantic use
<CSB> "<td>" that's kinda how I understand it
<Tina> Hello again, Steven. Sorry about late, it's what you might call a very traditional Swedish november weather out here, and it is playing merry hell with my schedules.
rrsagent, make minutes
<Tina> I would very much like to see us, at least, avoid constructs where additional semantics - ie. semantics beyond "nil" - is attached to DIV elements, in particular as examples.
Steven: On the other hand I wouldn't want this discussion to delay putting XHTML2 out as a WD now
Roland: Well, we should look to see if there is anything we can take out, like h1-6
Steven: Oh yes
<oedipus> agreed, tina, i'd rather have <PROLOGUE> </PROLOGUE> than <DIV id="prologue"> or <TOC> </TOC> rather than <DIV id="toc" class="fancy-toc">
Steven: And what about hr? :-)
<oedipus> ;)
oedipus, but don't forget RDFa and role
<oedipus> true
<oedipus> but it is strange to assign a role to something that isn't intended to convey that role, and have it function as that role not because the element contains the construct, but because a role="" has been applied...
<Tina> oedipus: I think we ALL do!
RESOLUTION: We will reopen the hr discussion at a future date
thus overriding the previous resolution on this issue
Tina, I agree 100% about not using div for anything but *very* vanilla purposes in our examples
Roland: Oh! 12:15, lunch calls
We are breaking now
back at 1.30
(1hr 15 mins from now)
Alessio, Mark, how late can you stay?
Just so we can make sure you are around for the meeting discussions
future FtF, call times etc
<alessio> I can rejoin later, because here are 6.30pm ca.
OK
<alessio> have a good lunch :)
<markbirbeck> Some comments on the above:
<markbirbeck> At xx:44, it is FF that moves <meta>, not IE. :)
<markbirbeck> At xx:48, on composability: agree with goal (of course I would...I've been blogging about it for a long time). But a few things to say. First, ensure consistency; at xx:48 use <title> instead of <h>, for example. Second, I'd actually come at it the other way up, and allow the <html> element to be used in other places. This means that HTML could be embedded in Atom, SVG and MathML, by using <html> but it also means we could create modular documents real
<markbirbeck> On yy:02...changes to CSS. :) You have to be kidding, right?
<markbirbeck> At yy:04, I wouldn't get too worked up about semantic and non-semantic divs; that's the kind of discussion that will keep our box of tricks irrelevant for another 5 years. We have a 'philosophy' that what we lacked in the past was not a bunch of elements, but a good set of extension points. Now we have @role and RDFa as great ways to extend documents; why now go back to adding elements? Sure, create modules that have extra elements in like <video> and <
Agree about html element Mark
Agree about adding elements, and that's what I said a little further down
Christina: We have talked about a tutorial, and a white paper
Roland: We have talked about our
change of emphasis
... and about developing material to explain that story to some audience or another
... we should think about who we want to address
Christina: Is it time to start now, or do we need to wait?
Roland: Not today, but soon, after we have worked out what and to whom
<CSB> so who would be the target readers?
Roland: Over time we need a
number of documents, for content providers, for tool producers,
for pipeline makers
... to explain the possibilities for adaptation
CSB: We want one to compare XHTML and HTML4/5 I suppose
Roland: Showing the differentiation
CSB: Showing what is involved, and how the transformations work
... So this is something for the next 6 months?
Roland: Definitely
<oedipus> work with EOWG (WAI Education & Outreach) to promote adaptation/cognizance among assistive technology vendors that this is where the future of the web lies, not in a buttoned-down non-extensible ML
Steven: And I think a fresh pair of eyes on the XHTML2 spec would be advantageous
Roland: The point of the abstract markup is to achieve amazing effects without changing the base; think CSS Zen Garden as an example
CSB: So the main audience is enterprise?
Steven: But bear in mind that there are a lot of eyes on us
Roland: Sure
CSB: So what features are these people interested in?
Roland: I think we need to work that out.
CSB: Do we have anyone in this space?
Steven: Well, Roland has worked in this area
<oedipus> entities such as those listed in yesterday's minutes?
STEVEN STeven StEvEn
CSB: So we need a paper that
addresses what is XHTML2 and what are the differences
... Who wants to nominate themself as subject expert?
Steven raises his hand
Roland: This would be an excuse to start a wiki
<ShaneM> we have had a wiki action for like a year. we dont need an excuse.
Steven asks systeam about status of mediawiki
hi Shane
CSB: Can I get a first round of features that would be compared?
Roland: I would like to see the
distinction between the view source principle and ease of
authoring
... But you could still provide a link to do a view source even when transformed
Steven: I would like that
Roland: Even if you have transformed, you can still let people see the source
Steven: Yes, it doesn't have to be physically on the client machine unless asked for
<oedipus> but it does need to be there when asked for -- the source is the course of last resort according to the User Agent Accessibility Guidelines -- it may be needed to trigger events in an AT, such as change of voice, pitch, etc.
<oedipus> UAWG has an outstanding issue on this -- what does "make document source available to user" mean in mashups?
Oedipus, I think that DanC has the mantra of View source for learnability rather than last-resort accessibility
Steven expounds principles behind the design
* Device independence
* Accessibility
* International
* User experience/usability
* Structure
* Composability
* Semantics
* Separation of concerns
* Extensible
* Declarative
Roland: The most important person is the user
Wiki requested
Roland: So let's update the roadmap, and make sure it matches the charter
<alessio> steven I agree totally with principles
<alessio> ...what about "modularity"?
Steven: We should pin transitions to FtF meetings
Roland: Suppose we were to have three instead of four FtFs, when would that be then?
Steven: End Jan, June, and then Mid Oct for the TP next year
Roland: So LC for Access after
the June meeting, and deal with LC comments in Oct
... XFrames
Steven: I think modulo some editorial fixes it can go (need to check the DB though)
Roland: Implementations?
Steven: XSmiles has an implementation of one draft, the guy who was doing XForms in Flash, and Daniel Austin did it in Javascript
ChrisL: ... Does it have a separate media type?
Steven: Yes
Chris: So it would be easy to do with a Firefox extension
Steven: Good point
... Why not make last call in Feb, after the next FtF?
Roland: XML Events
Rich: Is XForms looking at this
Steven: The ideas do come from XForms
ChrisL: SVG Tiny 1.2 uses XML
Events 1.0, and previously had an extended version
... let me get those extensions to you, to get them into the spec
<scribe> ACTION: Steven to get XML Events extensions for SVG Tiny 1.2 from ChrisL [recorded in]
Shane?
ShaneM_?
Roland: What is the outlook for
LC on Events2?
... we have a window of Feb or June
Mark?
ChrisL: Is there a dependency on DOM3 Events?
Steven: Yes
Chris: There is news -- the key
stuff is being stripped
... and the rest is going forward
Steven: Didn't that happen with DOM2 Events as well?
(rhetorical question)
Roland: We can return to
that
... XHTML2
Steven: Shane is jumping up and down to get this out
<ShaneM_> I am here
Roland: Let's say December for a new WD, and then look to a serious rev for the next FtF
Shane, we were missing the latest ED of Events2 on the drafts page
(we were sure there was a later version than the public draft)
<ShaneM_> checking
<ShaneM_> no there has not been
I could have sworn that you had said there was a rev ready to go public
<ShaneM_> its that one
in TR space?
<ShaneM_> there have been no changes since then.
ack
<ShaneM_> nor were there any comments since then as far as I remember
So ready for last call then?
<ShaneM_> looking
<ShaneM_> btw I think it is insane to wait a year for a last call on access. There's nothing there.
We can do it earlier, we're setting milestones here
<ShaneM_> okay. there is an xml events 2 comment from John Boyer in the issue system
<ShaneM_> from 10 August
Rich and ChrisL are about to send comments too
<ShaneM_> two actually. we need to address those
<oedipus> i for one would like to hear access go to last call in a shorter time-frame
ok, great
<ShaneM_> I have moved the event issues into the event bucket. let's see what ChrisL and Rich have to say
<ShaneM_> anything else you need from me? I have some errands to run with my kids (no school here today for some odd reason)
So move forward to February then OK?
<ShaneM_> xml events 2? works for me.
no that was access
now done
Roland: So, if we get the new comments for Events2, do you think we can get to LC in Feb?
Steven: Dunno, Shane?
<ShaneM_> if we get the comments soon, sure.
5 mins break to get coffee, we will drink it here as we talk, brb
<Roland__> work on the assumption that we get all comments by end of November 2007
thanks Shane!
Steven: For the RDFa schedule see
Alessio: I think we can put role to good use on the object element
<object role="audio" type="application/x-shockwave-flash">
Alessio: Because role is contextual
Steven: This is because with several media formats you don't know if it is video or audio, etc.?
Alessio: Yes, exactly
... I don't like the use of all these new elements.
Steven: I'd like to think longer, but it looks convincing
Alessio: You can do the same with images
<alessio> object -> role -> type
Alessio: I was looking for a solution for identifying media
Steven: I'd like Mark, as implementer, to comment on this as well (he's not here at the moment)
<scribe> ACTION: Alessio to send a message about identifying media using role [recorded in]
<oedipus> +1 to alessio's role for object idea
sort of wrapping up Shane
Talking about future meetings
Roland: So I propose we meet
three times a year
... and break the link with the Forms WG, in order to make the locations easier for this group
Alessio?
Roland: So maybe Venice in February
Steven: I had pencilled in June for Amsterdam
<Tina> Late Feb, in such a case, if possible :)
If you promise to come :-)
Roland: Then we have three European meetings in a row
<Tina> Ah, promise ... that'll be difficult. Depends on my better half :)
<alessio> here I am
Roland: So where could we go for June? (In the USA?)
Steven: New York?
CSB: A bit hot
Steven: Google?
<oedipus> ah, but then i can accomodate a few people at my place...
Where oedipus?
<oedipus> NYC
<alessio> mmmm... in venice usually february is still "burning" for carnival :)
Steven: There is a certain value to clustering where people are based
<oedipus> i live 15 miles from central park in new jersey (a.k.a. joisey)
Carnival is early this year right?
<alessio> yes
So when is Venice restful again?
<alessio> surely it will be full of people
<alessio> what about venice in june?
A bit hot in June surely?
<oedipus> warmer than amsterdam, though :-)
<alessio> roberto (scano) has told me that may/june should be good
<alessio> anyway, from second week of february it could be more "human" to organize
<alessio> if you plan to make june's FTF in US
<Tina> Basically I am in London for business likely first and second week of feb. 3rd and 4th I should be avail. for Venice, if all goes well and my clients do not panic.
<Tina> June, now, that's abit too far to plan! :)
<alessio> carnival is from january 25th to february 5th
<Roland__> how about week beginning 11 Feb?
Roland: How about this: Venice Feb, New York/Boston in June, and France in October
<Tina> Although the later in Feb the better.
<Roland__> then how about Feb 18-20?
Sounds like a plan Alessio
<alessio> better
* 18-20 Feb Venice
* 16-18 June NY or Boston
<alessio> good, so we can propose it to PF and RDFa too if you agree...
* October France
<Nick>
Roland: So we need a volunteer for hosting in June
Steven: Hmm, Shane maybe ;-)
Bye Tina
Thanks for being here
<alessio> good night tina :)
ADJOURNED
This is scribe.perl Revision: 1.128 of Date: 2007/02/23 21:38:13
[scribe.perl edit log omitted]
Found Scribe: Steven
Inferring ScribeNick: Steven
Present: Steven, Roland, Rich, Nick_vd_Bleeken, Gregory, Tina, MarkB, Christina Bottomly, Masataka, Shane, Alessio, ChrisLilley
Got date from IRC log name: 9 Nov 2007
People with action items: alessio, shane, steven
[End of scribe.perl diagnostic output]
[odoo 8]: How set value of one2many field by overriding create method with api coding standard?
Hello All,
Hello Prince,
Try to use this code.
@api.model
def create(self, vals):
    vals.update({'one2many_field_name': [(0, 0, {'field_of_co_model': 'val_of_co_model'})]})
    return super(class_name, self).create(vals)
Hope this helps.
@Nikunj, thanks for your answer, but I had already achieved that. Now the problem is a one2many field that itself contains another one2many field -- how can I set that? This is where I got stuck, and I want the same thing for the write method.
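For the nested case, the command format simply nests: the inner one2many is just another list of `(0, 0, vals)` tuples inside the line's own vals dict. A minimal sketch in plain Python (field names such as `parent_line_ids` and `child_line_ids` are placeholders, not real Odoo fields; in a real `create()` override you would merge this into `vals` before calling `super().create(vals)`):

```python
def build_nested_vals():
    """Build a create() vals dict with a one2many inside a one2many.

    Field names here are hypothetical placeholders -- substitute the
    actual fields of your models.
    """
    return {
        'name': 'Parent record',
        # (0, 0, {...}) means: create a new line with these values
        'parent_line_ids': [
            (0, 0, {
                'line_name': 'Line 1',
                # the line's own one2many nests the same command tuples
                'child_line_ids': [
                    (0, 0, {'child_name': 'Sub-line 1a'}),
                    (0, 0, {'child_name': 'Sub-line 1b'}),
                ],
            }),
        ],
    }

vals = build_nested_vals()
```

The same nested structure works in `write()` as well, since the ORM interprets the command tuples identically there.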
Could you share the code that you are using, with comments, so everyone can better understand what you are doing and trying to do?
@Axel: I've given the example in new question, because i was not able to edit my question. here is the link.
I have a Panorama configuration that I imported into the MT3, which contains a single Device Group with all device group post-rules. Is it possible to selectively convert/move some of those rules to Shared post-rules? I see the ability to convert objects to Shared objects, but I don't see how this can be done with policies/rules.
Thanks
Hi,
Not yet, but this is on our TODO list :-) Thanks for bringing this up.
Hi,
if you are 100% sure that objects used by your rules are shared (or use MT3 to move them to shared) then you can use third party scripts like mine (yes it's an advertisement :smileywink:) :
action 'actions=copy:shared' and you might want to use 'filters' to select the rules you are interested in or just copy them all and then delete the ones you don't need.
You can also edit XML config file and move what you need, it's a trick admins use and that takes a few minutes.
Thank you both!
One tool that's really come in handy for me is using the API and the "load config partial" command. I was trying to use MT3 to convert policies in a PAN-OS config with multiple VSYS' to Panorama. I wanted to convert all services and addresses to shared, and could not get the shared objects to merge to my Panorama base configuration. Using load config partial saved the day, and I just completed moving the whole configuration using that tool.
I tried to use "load config partial" from the CLI for this but had no luck. the command executed successfully but the rules didn't end up in the new candidate config. Maybe my xpath values were wrong? If you have a minute, could you please post an example command with the xpaths included that would move rules from an imported config file to that shared rule base of the current candidate configuration? It would be much appreciated.
Or, and this would be even more useful for me at least, if someone could provide an explanation of how to derive the xpath for a given piece of configuration - particularly in a config that was just imported but not loaded. I had trouble figuring out where exactly my rules were, from an xpath perspective, in the config file that I imported.
Thanks all
Here is an example command I used during a migration from a local configuration to a shared group in Panorama.
Snapshot the local configuration and export the file Snapshot-import-2014-12-11
import the file into Panorama
GROUP-NAME is your device group name for the shared rules
load config partial from Snapshot-import-2014-12-11 from-xpath devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/rulebase/security to-xpath /config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='GROUP-NAME']/pre-rulebase/security mode merge
Remember that the security policies are dependent on the existence of the address objects & groups, service objects and groups, custom applications and profiles that are used in the rules which all must exist in Panorama as well.
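To derive the xpath for a piece of an imported config yourself, one low-tech approach is to parse the exported XML and walk down to the node of interest, collecting each tag (plus any name= predicate on entry nodes) as a path segment. A sketch against a toy fragment shaped like the command above (the layout is only assumed from that example; a real PAN-OS export is far larger):

```python
import xml.etree.ElementTree as ET

# Toy fragment mimicking the vsys rulebase path used in the
# 'load config partial' example above.
CONFIG = """
<config>
  <devices>
    <entry name="localhost.localdomain">
      <vsys>
        <entry name="vsys1">
          <rulebase>
            <security>
              <rules><entry name="allow-web"/></rules>
            </security>
          </rulebase>
        </entry>
      </vsys>
    </entry>
  </devices>
</config>
"""

def xpath_to_security_rules(root):
    """Walk the tree and rebuild the xpath segments leading to the
    security rulebase, adding name= predicates on <entry> nodes."""
    segments = []
    node = root
    for step in ("devices", "entry", "vsys", "entry", "rulebase", "security"):
        node = node.find(step)
        if node is None:
            raise ValueError("config does not match expected layout")
        seg = step
        if step == "entry" and "name" in node.attrib:
            seg += "[@name='%s']" % node.attrib["name"]
        segments.append(seg)
    return "/".join(segments)

root = ET.fromstring(CONFIG)
print(xpath_to_security_rules(root))
```

Printing the joined segments gives exactly the kind of from-xpath string the CLI command expects, which makes typos in hand-written xpaths easier to catch.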
In the case of rules, I suppose it cannot work because rules cannot have the same name, while that is possible with objects. It's not an issue if you are moving rules: first you delete them, then you do 'load config partial'.
if you use my script it should take care of that (report an issue if it doesn't!)
Python bindings for Clojure
JNA libpython bindings to the tech ecosystem.
We aim to integrate Python into Clojure at a deep level. This means that we want to be able to load/use python modules almost as if they were Clojure namespaces. We also want to be able to use Clojure to extend Python objects. I gave a talk at Clojure Conj 2019 that outlines more of what is going on.
This code is a concrete example that generates an embedding for faces:
(ns facial-rec.face-feature
  (:require [libpython-clj.require :refer [require-python]]
            [libpython-clj.python :refer [py. py.. py.-] :as py]
            [tech.v2.datatype :as dtype]))

(require-python 'mxnet '(mxnet ndarray module io model))
(require-python 'cv2)
(require-python '[numpy :as np])

(defn load-model
  [& {:keys [model-path checkpoint]
      :or {model-path "models/recognition/model"
           checkpoint 0}}]
  (let [[sym arg-params aux-params] (mxnet.model/load_checkpoint model-path checkpoint)
        all-layers (py. sym get_internals)
        target-layer (py/get-item all-layers "fc1_output")
        model (mxnet.module/Module :symbol target-layer
                                   :context (mxnet/cpu)
                                   :label_names nil)]
    (py. model bind :data_shapes [["data" [1 3 112 112]]])
    (py. model set_params arg-params aux-params)
    model))

(defonce model (load-model))

(defn face->feature [img-path]
  (py/with-gil-stack-rc-context
    (if-let [new-img (cv2/imread img-path)]
      (let [new-img (cv2/cvtColor new-img cv2/COLOR_BGR2RGB)
            new-img (np/transpose new-img [2 0 1])
            input-blob (np/expand_dims new-img :axis 0)
            data (mxnet.ndarray/array input-blob)
            batch (mxnet.io/DataBatch :data [data])]
        (py. model forward batch :is_train false)
        (-> (py. model get_outputs)
            first
            (py. asnumpy)
            (#(dtype/make-container :java-array :float32 %))))
      (throw (Exception. (format "Failed to load img: %s" img-path))))))
(ns my-py-clj.config
  (:require [libpython-clj.python :as py]))

;; When you use conda, it should look like this.
(py/initialize! :python-executable "/opt/anaconda3/envs/my_env/bin/python3.7"
                :library-path "/opt/anaconda3/envs/my_env/lib/libpython3.7m.dylib")

{...
 ;; This namespace is going to run when the REPL is up.
 :repl-options {:init-ns my-py-clj.config}
 ...}

user> (require '[libpython-clj.require :refer [require-python]])
...logging info....
nil
user> (require-python '[numpy :as np])
nil
user> (def test-ary (np/array [[1 2][3 4]]))
#'user/test-ary
user> test-ary
[[1 2]
 [3 4]]
We have a document on all the features but beginning usage is pretty simple. Import your modules, use the things from Clojure. We have put effort into making sure things like sequences and ranges transfer between the two languages.
One very complementary aspect of Python with respect to Clojure is its integration with cutting-edge native libraries. Our support isn't perfect, so some understanding of the mechanism is important to diagnose errors and issues.
Currently, we launch the python3 executable and print out various bits of configuration as JSON. We parse the JSON and use the output to attempt to find the libpython3.Xm.so shared library, so for example if we are loading Python 3.6 we look for libpython3.6m.so on Linux or libpython3.6m.dylib on the Mac.
This pathway has allowed us to support Conda, albeit with some work. For examples using Conda, check out the facial-rec repository above or look into how we build our test Docker containers.
New to Clojure or the JVM? Try remixing the nextjournal entry and playing around there. For more resources on learning and getting more comfortable with Clojure, we have an introductory document.
To install jar to local .m2 :
$ lein install
$ lein deploy clojars
This command will sign jar before deploy, using your gpg key. (see dev/src/build.clj for signing options)
This program and the accompanying materials are made available under the terms of the Eclipse Public License 2.0 which is available at.
I am looking to create base64 inline encoded data of images for display in a table using canvases. Python generates and creates the web page dynamically. As it stands python uses the Image module to create thumbnails. After all of the thumbnails are created Python then generates base64 data of each thumbnail and puts the b64 data into hidden spans on the user's webpage. A user then clicks check marks by each thumbnail relative to their interest. They then create a pdf file containing their selected images by clicking a generate pdf button. The JavaScript using jsPDF generates the hidden span b64 data to create the image files in the pdf file and then ultimately the pdf file.
I am looking to hopefully shave down Python script execution time and minimize some disk I/O operations by generating the base64 thumbnail data in memory while the script executes.
Here is an example of what I would like to accomplish.
import os, sys
import base64
import Image
size = 128, 128
im = Image.open("/original/image/1.jpeg")
im.thumbnail(size)
thumb = base64.b64encode(im)
TypeError: must be string or buffer, not instance
You first need to save the image again in JPEG format; using the
im.tostring() method would otherwise return raw image data that no browser would recognize:
from cStringIO import StringIO

output = StringIO()
im.save(output, format='JPEG')
im_data = output.getvalue()
This you can then encode to base64:
data_url = 'data:image/jpeg;base64,' + base64.b64encode(im_data)
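On Python 3 the same flow uses io.BytesIO instead of cStringIO, and b64encode returns bytes that must be decoded before string concatenation. A minimal sketch of just the data-URL step (the placeholder bytes stand in for the real JPEG data that im.save would produce):

```python
import base64
import io

def to_data_url(im_data, mime="image/jpeg"):
    """Build an inline data URL from already-encoded image bytes."""
    b64 = base64.b64encode(im_data).decode("ascii")
    return "data:%s;base64,%s" % (mime, b64)

# In the real pipeline im_data comes from Pillow, e.g.:
#   buf = io.BytesIO(); im.save(buf, format='JPEG'); im_data = buf.getvalue()
# Placeholder bytes keep this sketch self-contained:
im_data = b"\xff\xd8\xff\xe0 not a real JPEG"
print(to_data_url(im_data)[:23])  # prints "data:image/jpeg;base64,"
```

The resulting string drops straight into an img src attribute or, as in the original setup, into a hidden span for jsPDF to consume.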
Here is one I made with this method:
The site won't let me embed the result as an actual image here, but you can see it in action by pasting the data URL into an <img src="..."> snippet.
Hey guys,
I was wondering if anyone can help me in terms of using atoi to get my my results to add the 1's of my binary output and then join them together... eg. 110101 and 100101 is 4 and 3 = 43.
I was thinking of integer dividing the number by 10, but i am unsure of how to do that.
Here is my code:
#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;

void decToBin(int number, int base);

int main()
{
    int answer;
    int count = 0;
    cout << "Enter a string: ";
    string name;
    getline(cin, name);

    /*
    for (unsigned int i = 0; i < name.size(); i++)
    {
        cout << name[i] << " binary (";
        decToBin(name[i], 2);
        cout << ")" << endl;
    }
    */

    for (unsigned int i = 0; i < name.size(); i++)
    {
        decToBin(name[i], 2) = answer;
        answer = answer / 10;
        cout << answer << endl;
    }
    system("PAUSE");
    return 0;
}

void decToBin(int number, int base)
{
    if (number > 0)
    {
        decToBin(number / base, base);
        cout << number % base;
    }
}
The commented part gets the numbers into binary, the other loop under it i was trying to do so decToBin became an int and could then div 10.
As you can tell i am not sure of the direction i should go, any help would be appreciated, thanks!
-- | A module containing the central type of the library, 'EventM', and various
-- related helper functions.
module Numeric.Probability.Game.Event
  (EventM, makeEvent, makeEventProb, outcomes, enact, coinToss, subst, compareEvent) where

import Control.Applicative (Applicative(..), (<$>), liftA2)
import Control.Arrow (second)
import Control.Monad (ap, replicateM)
import Data.Map (Map, fromListWith)
import Data.Ratio (denominator)
import Numeric.Probability.Distribution
  (T(..), certainly, decons, fromFreqs, norm, selectP, uniform)
import System.Random (randomRIO)

newtype EventM a = EventM (T Rational a) deriving (Monad, Functor)

instance Ord a => Eq (EventM a) where
  -- Eq not defined properly for T prob a, work-around for now:
  --(===) :: Ord a => EventM a -> EventM a -> Bool
  (==) (EventM a) (EventM b) = decons (norm a) == decons (norm b)

normEventM :: Ord a => EventM a -> EventM a
normEventM (EventM dc) = EventM (norm dc)

instance Num (EventM Int) where
  (+) x y = normEventM $ liftA2 (+) x y
  (-) x y = normEventM $ liftA2 (-) x y
  negate = fmap negate
  abs = normEventM . fmap abs
  signum = normEventM . fmap signum
  (*) = lotsOf
    where
      lotsOf :: EventM Int -> EventM Int -> EventM Int
      lotsOf x y = do n <- x
                      sum <$> replicateM n y
      -- a `lotsOf` b = [(bx,ap*bp) | (ax,ap)<-a,(bx,bp)<- sum (replicate ax b)]
      -- b `lotsOf` c = [(cx,bp*cp) | (bx,bp)<-b,(cx,cp)<- sum (replicate bx c)]
      -- a `lotsOf` (b `lotsOf` c)
      --   = [(bcx,ap*bcp) | (ax,ap)<-a,(bcx,bcp)<- sum (replicate ax
      --       [(cx,bp*cp) | (bx,bp)<-b,(cx,cp)<- sum (replicate bx c)])]
  fromInteger = EventM . certainly . fromInteger

instance (Show a, Ord a) => Show (EventM a) where
  show (EventM dc) = showBars (decons (norm dc))

showBars :: Show a => [(a, Rational)] -> String
showBars xs = unlines (map (makeBar . scale) xs)
  where
    den = fromIntegral $ foldr lcm 1 (map (denominator . snd) xs)
    scale (x, r) = (x, floor (r * den)) -- should be integral anyway
    width = maximum (map (length . show . fst) xs)
    makeBar (x, n) = pad (show x) ++ ": " ++ replicate n '#'
      where pad s = s ++ replicate (width - length s) ' '

instance Applicative EventM where
  pure = return
  (<*>) = ap

outcomes :: Ord a => EventM a -> [(a, Rational)]
outcomes (EventM dc) = decons (norm dc)

makeEvent :: [a] -> EventM a
makeEvent = EventM . uniform

makeEventProb :: (Ord a, Real prob) => [(a, prob)] -> EventM a
makeEventProb = EventM . norm . fromFreqs . map (second toRational)

-- | An event with a 50% chance of giving True, and a 50% chance of giving False.
coinToss :: EventM Bool
coinToss = makeEvent [True, False]

-- | Actually enacts the event and produces a single result according to the probabilities
-- in the @EventM a@ parameter.
enact :: EventM a -> IO a
enact (EventM dc) = selectP (toDouble dc) <$> randomRIO (0, 1)
  where
    toDouble :: T Rational a -> T Double a
    toDouble = Cons . map (second fromRational) . decons

subst :: Eq a => a -> a -> EventM a -> EventM a
subst x y = fmap (\n -> if x == n then y else n)

-- | Compares the outcomes of the two events, and works out the probability associated
-- with the first outcome being greater than, equal to or less than the second
-- outcome. The probabilities for each are returned in an associative map.
--
-- Added in version 1.1.
compareEvent :: Ord a => EventM a -> EventM a -> Map Ordering Rational
compareEvent ex ey = fromListWith (+)
  [(compare x y, px * py) | (x, px) <- outcomes ex, (y, py) <- outcomes ey]
Vim’s event model
Vim’s editing functions behave as if they are event-driven. For performance reasons, the actual implementation is more complex than that, with much of the event handling optimized away or handled several layers below the event loop itself, but you can still think of the editor as a simple while loop responding to a series of editing events.
Whenever you start a Vim session, open a file, edit a buffer, change your editing mode, switch windows, or interact with the surrounding filesystem, you are effectively queuing an event that Vim immediately receives and handles.
For example, if you start Vim, edit a file named demo.txt, swap into Insert mode, type in some text, save the file, and then exit, your Vim session receives a series of events like what is shown in Listing 1.
Listing 1. Event sequence in a simple Vim editing session
> vim
    BufWinEnter (create a default window)
    BufEnter (create a default buffer)
    VimEnter (start the Vim session)

:edit demo.txt
    BufNew (create a new buffer to contain demo.txt)
    BufAdd (add that new buffer to the session's buffer list)
    BufLeave (exit the default buffer)
    BufWinLeave (exit the default window)
    BufUnload (remove the default buffer from the buffer list)
    BufDelete (deallocate the default buffer)
    BufReadCmd (read the contents of demo.txt into the new buffer)
    BufEnter (activate the new buffer)
    BufWinEnter (activate the new buffer's window)

i
    InsertEnter (swap into Insert mode)

Hello
    CursorMovedI (insert a character)
    CursorMovedI (insert a character)
    CursorMovedI (insert a character)
    CursorMovedI (insert a character)
    CursorMovedI (insert a character)

<ESC>
    InsertLeave (swap back to Normal mode)

:wq
    BufWriteCmd (save the buffer contents back to disk)
    BufWinLeave (exit the buffer's window)
    BufUnload (remove the buffer from the buffer list)
    VimLeavePre (get ready to quit Vim)
    VimLeave (quit Vim)
More interestingly, Vim provides "hooks" that allow you to intercept any of these editing events. So you can cause a particular Vimscript command or function to be executed every time a specific event occurs: every time Vim starts, every time a file is loaded, every time you leave Insert mode … or even every time you move the cursor. This makes it possible to add automatic behaviors almost anywhere throughout the editor.
Vim provides notifications for 78 distinct editing events, which fall into eight broad categories: session start-up and clean-up events, file-reading events, file-writing events, buffer-change events, option-setting events, window-related events, user-interaction events, and asynchronous notifications.
To see the complete list of these events, type
:help autocmd-events on the Vim command line. For detailed
descriptions of each event, see
:help autocmd-events-abc.
This article explains how events work in Vim and then introduces a series of scripts for automating editing events and behaviours.
Event handling with autocommands
The mechanism Vim provides for intercepting events is known as the
autocommand. Each autocommand specifies the type of event to be
intercepted, the name of the edited file in which such events are to be
intercepted, and the command-line mode action to be taken when the event
is detected. The keyword for all this is
autocmd (which is
often abbreviated to just
au). The usual syntax is:
autocmd EventName filename_pattern :command
The event name is one of the 78 valid Vim event names (as listed under
:help autocmd-events). The filename pattern syntax is similar
-- but not identical -- to a normal shell pattern (see
:help autocmd-patterns for details). The command is any valid
Vim command, including calls to Vimscript functions. The colon at the
start of the command is optional, but it’s a good idea to include it;
doing so makes the command easier to locate in the (usually complex)
argument list of an
autocmd.
For example, you could surrender all remaining dignity and specify an event
handler for the
FocusGained event by adding the following to
your
.vimrc file:
autocmd FocusGained *.txt :echo 'Welcome back, ' . $USER . '! You look great!'
FocusGained events are queued whenever a Vim window becomes
the window system’s input focus, so now whenever you swap back to your Vim
session, if you’re editing any file whose name matches the filename
pattern
*.txt, then Vim will automatically execute the
specified
echo command.
You can set up as many handlers for the same event as you wish, and all of
them will be executed in the sequence in which they were originally
specified. For example, a far more useful automation for
FocusGained events might be to have Vim briefly emphasize the
cursor line whenever you swap back to your editing session, as shown in
Listing 2.
Listing 2. A useful automation for FocusGained events
autocmd FocusGained *.txt :set cursorline
autocmd FocusGained *.txt :redraw
autocmd FocusGained *.txt :sleep 1
autocmd FocusGained *.txt :set nocursorline
These four autocommands cause Vim to automatically highlight the line
containing the cursor (
set cursorline), reveal that
highlighting (
redraw), wait one second
(
sleep 1), and then switch the highlighting back off
(
set nocursorline).
You can use any series of commands in this way; you can even break up a
single control structure across multiple autocommands. For example, you
could set up a global variable (
g:autosave_on_focus_change)
to control an "autosave" mechanism that automatically writes any modified
.txt file whenever the user swaps away from Vim to some other
window (causing a
FocusLost event to be queued):
Listing 3. Autocommand to autosave when leaving an editor window
autocmd FocusLost *.txt :   if &modified && g:autosave_on_focus_change
autocmd FocusLost *.txt :       write
autocmd FocusLost *.txt :       echo "Autosaved file while you were absent"
autocmd FocusLost *.txt :   endif
Multi-line autocommands like this require that you repeat the essential
event-selector specification (i.e.,
FocusLost *.txt) multiple
times. Hence they are generally unpleasant to maintain, and more
error-prone. It’s much cleaner and safer to factor out any control
structure, or other command sequences, into a separate function and then
have a single autocommand call that function. For example:
Listing 4. A cleaner way to handle multi-line autocommands
function! Highlight_cursor ()
    set cursorline
    redraw
    sleep 1
    set nocursorline
endfunction

function! Autosave ()
    if &modified && g:autosave_on_focus_change
        write
        echo "Autosaved file while you were absent"
    endif
endfunction

autocmd FocusGained *.txt :call Highlight_cursor()
autocmd FocusLost   *.txt :call Autosave()
Universal and single-file autocommands
So far, all the examples shown have restricted event handling to files that
matched the pattern
*.txt. Obviously, that implies that you
can use any file-globbing pattern to specify the files to which a
particular autocommand applies. For example, you could make the previous
cursor-highlighting
FocusGained autocommand apply to any file
simply by using the universal file-matching pattern
* as the
filename filter:
" Cursor-highlight any file when context-switching ... autocmd FocusGained * :call Highlight_cursor()
Alternatively, you can restrict commands to a single file:
" Only cursor-highlight for my .vimrc ... autocmd FocusGained ~/.vimrc :call Highlight_cursor()
Note that this also implies that you can specify different behaviors for the same event, depending on which file is being edited. For example, when the user turns their attention elsewhere, you might choose to have text files autosaved, or have Perl or Python scripts check-pointed, while a documentation file might be instructed to reformat the current paragraph, as shown in Listing 5.
Listing 5. What to do when the user’s attention is elsewhere
autocmd FocusLost *.txt   :call Autosave()
autocmd FocusLost *.p[ly] :call Checkpoint_sourcecode()
autocmd FocusLost *.doc   :call Reformat_current_para()
Autocommand groups
Autocommands have an associated namespace mechanism that allows them to be gathered into autocommand groups, whence they can be manipulated collectively.
To specify an autocommand group, you can use the
augroup
command. The general syntax for the command is:
augroup GROUPNAME
    " autocommand specifications here ...
augroup END
The group's name can be any series of non-whitespace characters, except
"end" or "END", which are reserved for specifying the end of a group.
Within an autocommand group, you can place any number of autocommands. Typically, you would group commands by the event they all respond to, as shown in Listing 6.
Listing 6. Defining a group for autocommands responding to FocusLost events
augroup Unfocussed
    autocmd FocusLost *.txt  :call Autosave()
    autocmd FocusLost *.p[ly] :call Checkpoint_sourcecode()
    autocmd FocusLost *.doc  :call Reformat_current_para()
augroup END
Or you might group a series of autocommands relating to a single filetype, such as:
Listing 7. Defining a group of autocommands for handling text files
augroup TextEvents
    autocmd FocusGained *.txt :call Highlight_cursor()
    autocmd FocusLost   *.txt :call Autosave()
augroup END
Deactivating autocommands
You can remove specific event handlers using the
autocmd!
command (that is, with an exclamation mark). The general syntax for this
command is:
autocmd! [group] [EventName [filename_pattern]]
To remove a single event handler, specify all three arguments. For example,
to remove the handler for
FocusLost events on
.txt files from the
Unfocussed group, use:
autocmd! Unfocussed FocusLost *.txt
Instead of a specific event name, you can use an asterisk to indicate that
every kind of event should be removed for the particular group and
filename pattern. If you wanted to remove all events for
.txt
files within the
Unfocussed group, you would use:
autocmd! Unfocussed * *.txt
If you leave off the filename pattern, then every handler for the specified
event type is removed. You could remove all the
FocusLost
handlers in the
Unfocussed group like so:
autocmd! Unfocussed FocusLost
If you also leave out the event name, then every event handler in the
specified group is removed. So, to turn off all event handling specified
in the
Unfocussed group:
autocmd! Unfocussed
Finally, if you omit the group name, the autocommand removal applies to the
currently active group. The typical use of this option is to "clear the
decks" within a group before setting up a series of autocommands. For
example, the
Unfocussed group is better specified like so:
Listing 8. Making sure a group is empty before adding new autocommands
augroup Unfocussed
    autocmd!
    autocmd FocusLost *.txt  :call Autosave()
    autocmd FocusLost *.p[ly] :call Checkpoint_sourcecode()
    autocmd FocusLost *.doc  :call Reformat_current_para()
augroup END
Adding an
autocmd! to the start of every group is important
because autocommands do not statically declare event handlers; they
dynamically create them. If you execute the same
autocmd
twice, you get two event handlers, both of which will be separately
invoked by the same combination of event and filename from that point
onward. By starting each autocommand group with an
autocmd!,
you wipe out any existing handlers within the group so that subsequent
autocmd statements replace any existing handlers, rather than
augmenting them. This, in turn, means that your script can be executed as
many times as necessary (or your
.vimrc can be
source’d repeatedly) without multiplying event-handling
entities unnecessarily.
Some practical examples
The appropriate use of autocommands can make your editing life vastly easier. Let’s look at a few ways you can use autocommands to streamline your editing process and remove existing frustrations.
Managing simultaneous edits
One of the most useful features of Vim is that it automatically detects when you attempt to edit a file that is currently being edited by some other instance of Vim. That often happens in multi-window environments, where you’re already editing a file in another terminal; or in multi-user setups, where someone else is already working on a shared file. When Vim detects a second attempt to edit a particular file, you get the following request:
Swap file ".filename.swp" already exists! [O]pen Read-Only, (E)dit anyway, (R)ecover, (Q)uit, (A)bort: _
Depending on the environment in which you’re working, your fingers probably
automatically hit one of those options every time, without much conscious
thought on your part. For example, if you rarely work on shared files, you
probably just hit
q to terminate the session, and then go
hunting for the terminal window where you’re already editing the file. On
the other hand, if you typically edit shared resources, perhaps your
fingers are trained to immediately hit <ENTER>, in order to
select the default option and open the file read-only.
With autocommands, however, you can completely eliminate the need to see,
recognize, and respond to that message, simply by automating the response
to the
SwapExists event that triggers it. For example, if you
never want to edit files that are already being edited elsewhere, you
could add the following to your
.vimrc:
Listing 9. Automatically quitting on simultaneous edits
augroup NoSimultaneousEdits
    autocmd!
    autocmd SwapExists * :let v:swapchoice = 'q'
augroup END
This sets up an autocommand group, and removes any previous handlers (via
the
autocmd! command). It then installs a handler for the
SwapExists event on any file (using the universal file
pattern:
*). That handler simply assigns the response
'q' to the special
v:swapchoice variable. Vim
consults this variable prior to displaying the
"
swapfile exists"message. If the variable has been set, it
uses the value as the automatic response and doesn’t bother showing the
message. So now you’ll never see the
swapfile message; your
Vim session will just automatically quit if you try to edit a file that’s
being edited elsewhere.
Alternately, if you’d prefer always to open already edited files in
read-only mode, you can simply change the
NoSimultaneousEdits
group to:
Listing 10. Automating read-only access to existing files
augroup NoSimultaneousEdits
    autocmd!
    autocmd SwapExists * :let v:swapchoice = 'o'
augroup END
More interestingly, you could arrange to select between these two (or any
other) alternatives, based on the location of the file being considered.
For example, you might prefer to auto-quit files in your own
subdirectories, but open shared files under
/dev/shared/ as
read-only. You could do that with the following:
Listing 11. Automating a context-sensitive response
augroup NoSimultaneousEdits
    autocmd!
    autocmd SwapExists ~/*           :let v:swapchoice = 'q'
    autocmd SwapExists /dev/shared/* :let v:swapchoice = 'o'
augroup END
That is: if the full filename begins with the home directory, followed by
anything at all (
~/*), then preselect the "quit" behaviour;
but if the full filename starts with the shared directory
(
/dev/shared/*), then preselect the "read-only" behaviour
instead.
Autoformatting code consistently
Vim has good support for automatic edit-time code layout (see
:help indent.txt and
:help filter). For example,
you can turn on the
'autoindent' and
'smartindent' options and have Vim re-indent your code blocks
automatically as you type. Or you can hook your own language-specific code
reformatter to the standard
= command by setting the
'equalprg' option.
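For example, assuming the perltidy utility is installed, hooking an external formatter to the = command might look like this (a sketch only; the choice of formatter and options is up to you):

```vim
" Filter the lines targeted by the = motion through perltidy
:set equalprg=perltidy\ -q
```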
Unfortunately, Vim doesn’t have an option or a command to deal with one of the commonest code-formatting situations: being forced to read someone else’s abysmally malformatted code. Specifically, there’s no built-in option to tell Vim to automatically sanitize the formatting of any code file you open.
That's okay because it’s trivially easy to set up an autocommand to do that instead.
For example, you could add the following autocommand group to your
.vimrc, so that C, Python, Perl, and XML files are
automatically run through the appropriate code formatter whenever you open
a file of the corresponding type, as shown in Listing 12.
Listing 12. Beautiful code, on autocommand
augroup CodeFormatters
    autocmd!
    autocmd BufReadPost,FileReadPost *.py    :silent %!PythonTidy.py
    autocmd BufReadPost,FileReadPost *.p[lm] :silent %!perltidy -q
    autocmd BufReadPost,FileReadPost *.xml   :silent %!xmlpp -t -c -n
    autocmd BufReadPost,FileReadPost *.[ch]  :silent %!indent
augroup END
All of the autocommands in the group are identical in structure, differing only in the filename extensions to which they apply and the corresponding pretty-printer they invoke.
Note that the autocommands do not name a single event to be handled.
Instead, each one specifies a list of events. Any
autocmd can
be specified with a comma-separated list of event types, in which case the
handler will be invoked for any of the events listed.
In this case, the events listed for each handler are
BufReadPost (which is queued whenever an existing file is
loaded into a new buffer) and
FileReadPost (which is queued
immediately after any
:read command is executed). These two
events are often specified together because between them they cover the
most common ways of loading the contents of an existing file into a
buffer.
After the event list, each autocommand specifies the file suffix(es) to
which it applies: Python’s
.py, Perl’s
.pl and
.pm, XML’s
.xml, or the
.c and
.h files of C. Note that, as with events, these filename
patterns could also have been specified as a comma-separated list, rather
than a single pattern. For example, the Perl handler could have been
written:
autocmd BufReadPost,FileReadPost *.pl,*.pm :silent %!perltidy -q
or the C handler could be extended to handle common C++ variants
(
.C,
.cc,
.cxx, etc.) as well, like
so:
autocmd BufReadPost,FileReadPost *.[chCH],*.cc,*.hh,*.[ch]xx :silent %!indent
As usual, the final component of each autocommand is the command to be
executed. In each case, it is a global filter command
(
%!filter_program), which takes the entire contents of
the file (
%) and pipes it out (
!) to the
specified external program (one of:
PythonTidy.py,
perltidy,
xmlpp, or
indent). The
output of each program is then pasted back into the buffer, replacing the
original contents.
Normally, when filter commands like these are used, Vim automatically displays a notification after the command completes, like so:
42 lines filtered
Press ENTER or type command to continue_
To prevent this annoyance, each of the autocommands prefixes its action
with a
:silent, which neutralizes any ordinary information
messages, but still allows error messages to be displayed.
Opportunistic code autoformatting
Vim has excellent support for automatically formatting C code as you type it, but it offers less support for other languages. That’s not entirely Vim’s fault; some languages -- yes, Perl, I’m looking at you -- can be extremely hard to format correctly on the fly.
If Vim doesn't give you adequate support for autoformatting source code in your preferred language, you can easily have your editor invoke an external utility to do that for you.
The simplest approach is to make use of the
InsertLeave event.
This event is queued whenever you exit from
Insert mode (most
commonly, immediately after you hit
<ESC>). You can
easily set up a handler that reformats your code every time you finish
adding to it, like so:
Listing 13. Invoking PerlTidy after every edit
function! TidyAndResetCursor ()
    let cursor_pos = getpos('.')
    %!perltidy -q
    call setpos('.', cursor_pos)
endfunction

augroup PerlTidy
    autocmd!
    autocmd InsertLeave *.p[lm] :call TidyAndResetCursor()
augroup END
The
TidyAndResetCursor() function first makes a record of the
current cursor position, by storing the cursor information returned by the
built-in
getpos() in the variable
cursor_pos. It
then runs the external
perltidy utility over the entire file
(
%!perltidy -q), and finally restores the cursor to its
original position, by passing the saved cursor information to the built-in
setpos() function.
Inside the
PerlTidy group, you then just set up a single
autocommand that calls
TidyAndResetCursor() every time the
user leaves
Insert mode within any Perl file.
This same code pattern could be adapted to perform any appropriate action
each time you insert text. For example, if you were working on a very
unreliable system and wished to maximize your ability to recover files
(see
:help usr_11.txt) if something went wrong, you could
arrange for Vim to update its swap-file every time you left
Insert mode, like so:
augroup UpdateSwap
    autocmd!
    autocmd InsertLeave * :preserve
augroup END
Timestamping files
Another useful set of events are
BufWritePre,
FileWritePre, and
FileAppendPre. These
"
Pre" events are queued just before your Vim session writes a
buffer back to disk (as a result of a command such as
:write,
:update, or
:saveas). A
BufWritePre
event occurs just before the entire buffer is written, a
FileWritePre occurs just before part of a buffer is written
(that is, when you specify a range of lines to be written:
:1,10write). A
FileAppendPre occurs just before
a
:write command is used to append rather than replace (for
example:
:write >> logfile.log).
For all three types of events, Vim sets the special line-number aliases
'[ and
'] to the range of lines being written.
These aliases can then be used in the range specifier of any subsequent
command, to ensure that autocommand actions are applied only to the
relevant lines.
Typically, you would set up a single handler that covered all three types of pre-writing event. For example, you could have Vim automatically update an internal timestamp whenever a file was written (or appended) to disk, as shown in Listing 14.
Listing 14. Automatically updating an internal timestamp whenever a file is saved
function! UpdateTimestamp ()
    '[,']s/^This file last updated: \zs.*/\= strftime("%c") /
endfunction

augroup TimeStamping
    autocmd!
    autocmd BufWritePre,FileWritePre,FileAppendPre * :call UpdateTimestamp()
augroup END
The
UpdateTimestamp() function performs a substitution
(
s/.../.../) on every line being written, by specifically
limiting the range of the substitution to between
'[ and
'] like so:
'[,']s/.../.../. The substitution
looks for lines starting with "This file last updated:",
followed by anything (
.*). The
\zs before the
.* causes the substitution to pretend that the match only
started after the colon, so only the actual timestamp is replaced.
To update the timestamp, the substitution uses the special
\= escape sequence in the replacement text. This escape
sequence tells Vim to treat the replacement text as a Vimscript
expression, evaluating it to get the actual replacement string. In this
case, that expression is a call to the built-in
strftime()
function, which returns a standard timestamp string of the form:
"
Fri Oct 23 14:51:01 2009". This string is then written back
into the timestamp line by the substitution command.
All that remains is to set up an event handler (
autocmd) for
all three event types
(
BufWritePre,
FileWritePre,
FileAppendPre)
in any file (
*) and have it invoke the appropriate
timestamping function (
:call UpdateTimestamp()). Now, any
time a file is written, any timestamp in the lines being saved will be
updated to the current time.
Note that Vim provides two other sets of events that you can use to modify
the behavior of write operations. To automate some action that should
happen after a write, you can use
BufWritePost,
FileWritePost, and
FileAppendPost. To completely
replace the standard write behavior with your own script, you can use
BufWriteCmd,
FileWriteCmd, and
FileAppendCmd (but consult
:help Cmd-event first
for some important caveats).
Table-driven timestamps
Of course, you could also create much more elaborate mechanisms to handle files with different timestamping conventions. For example, you might prefer to specify the various timestamp signatures and their replacements in a Vim dictionary (see the previous article in this series) and then loop through each pair to determine how the timestamp should be updated. This approach is shown in Listing 15.
Listing 15. Table-driven automatic timestamps
let s:timestamps = {
    \ 'This file last updated: \zs.*'             : 'strftime("%c")',
    \ 'Last modification: \zs.*'                  : 'strftime("%Y%m%d.%H%M%S")',
    \ 'Copyright (c) .\{-}, \d\d\d\d-\zs\d\d\d\d' : 'strftime("%Y")',
    \}

function! UpdateTimestamp ()
    for [signature, replacement] in items(s:timestamps)
        silent! execute "'[,']s/" . signature . '/\= ' . replacement . '/'
    endfor
endfunction
Here, the for loop iterates through each timestamp’s signature/replacement
pair in the
s:timestamps dictionary, like so:
for [signature, replacement] in items(s:timestamps)
It then generates a string containing the corresponding substitution command. The following substitution command is identical in structure to the one in the previous example, but is here constructed by interpolating the signature/replacement pair into a string:
"'[,']s/" . signature . '/\= ' . replacement . '/'
Finally, it executes the generated command silently:
silent! execute "'[,']s/" . signature . '/\= ' . replacement . '/'
The use of
silent! is important because it ensures that any
substitutions that don’t match will not result in the annoying
Pattern not found error message.
Note that the last entry in
s:timestamps is a particularly
useful example: it automatically updates the year-range of any embedded
copyright notices whenever the file is saved.
Filename-driven timestamps
Instead of listing all possible timestamp formats in a single table, you
might prefer to parameterize the
UpdateTimestamp() function
and then create a series of distinct
autocmds for different
filetypes, as shown in Listing 16.
Listing 16. Context-sensitive timestamping for different filetypes
function! UpdateTimestamp (signature, replacement)
    silent! execute "'[,']s/" . a:signature . '/\= ' . a:replacement . '/'
endfunction

augroup Timestamping
    autocmd!
    autocmd BufWritePre,FileWritePre,FileAppendPre *.txt
        \ :call UpdateTimestamp('This file last updated: \zs.*', 'strftime("%c")')
    autocmd BufWritePre,FileWritePre,FileAppendPre *.html
        \ :call UpdateTimestamp('Last modification: \zs.*', 'strftime("%Y%m%d.%H%M%S")')
augroup END
In this version, the signature and replacement components are passed
explicitly to
UpdateTimestamp(), which then generates a
string containing the single corresponding substitution command and
executes it. Within the
Timestamping group, you then set up
individual autocommands for each required file type, passing the
appropriate timestamp signature and replacement text for each.
Conjuring directories
Autocommands can be useful even before you begin editing. For example, when you start editing a new file, you will occasionally see a message like this one:
"dir/subdir/filename" [New DIRECTORY]
This means that the file you specified (in this case
filename)
does not exist and that the directory it’s supposed to be in (in this case
dir/subdir) doesn’t exist either.
Vim will happily allow you to ignore this warning (many users don’t even recognize that it is a warning) and continue to edit the file. But when you try to save it you’ll be confronted with the following unhelpful error message:
"dir/subdir/filename" E212: Can't open file for writing.
Now, in order to save your work, you have to explicitly create the missing directory before writing the file into it. You can do that from within Vim like so:
:write "dir/subdir/filename" E212: Can't open file for writing. :call mkdir(expand("%:h"),"p") :write
Here, the call to the built-in
expand() function is applied to
"%:h", where the
% means the current filepath
(in this case
dir/subdir/filename), and the
:h
takes just the "head" of that path, removing the filename to leave the
path of the intended directory (
dir/subdir). The call to
Vim’s built-in
mkdir() then takes this directory path and
creates all the interim directories along it (as requested by the second
argument,
"p").
Realistically, though, most Vim users would be more likely to simply escape to the shell to build the necessary directories. For example:
:write "dir/subdir/filename" E212: Can't open file for writing. :! mkdir -p dir/subdir/ :write
Either way, it’s a hassle. If you’re eventually going to have to create the
missing directory anyway, why not have Vim notice up-front that it doesn’t
exist, and simply create it for you before you even start? That way,
you’ll never encounter the obscure [
New DIRECTORY] hint; nor
will your workflow be later interrupted by an equally mysterious
E212 error.
To have Vim take care of prebuilding non-existent directories, you could
hook a handler into the
BufNewFile event, which is queued
whenever you start to edit a file that does not yet exist. Listing 17
shows the code you would add to your
.vimrc file to make this
work.
Listing 17. Unconditionally autocreating non-existent directories
augroup AutoMkdir
    autocmd!
    autocmd BufNewFile * :call EnsureDirExists()
augroup END

function! EnsureDirExists ()
    let required_dir = expand("%:h")
    if !isdirectory(required_dir)
        call mkdir(required_dir, 'p')
    endif
endfunction
The
AutoMkdir group sets up a single autocommand for
BufNewFile events on any kind of file, calling the
EnsureDirExists() function whenever a new file is edited.
EnsureDirExists() first determines the directory being
requested by expanding the "head"of the current filepath:
expand("%:h"). It then uses the built-in
isdirectory() function to check whether the requested
directory exists. If not, it attempts to create the directory using Vim’s
built-in
mkdir().
Note that, if the
mkdir() call can’t create the requested
directory for any reason, it will produce a slightly more precise and
informative error message:
E739: Cannot create directory: dir/subdir
Conjuring directories more carefully
The only problem with this solution is that, occasionally, autocreating non-existent subdirectories is exactly the wrong thing to do. For example, suppose you requested the following:
> vim /share/sites/corporate/root/.htaccess
You had intended to create a new access control file in the already
existing subdirectory
/share/websites/corporate/root/. Except,
of course, because you got the path wrong, what you actually did was
create a new access control file in the formerly non-existent subdirectory
/share/sites/corporate/root/. And because that happened
automatically, with no warnings of any kind, you might not even realize
the mistake. At least, not until the misapplied access control
precipitates some online disaster.
To guard against errors like this, you might prefer that Vim be a little
less helpful in autocreating missing directories. Listing 18 shows a more
elaborate version of
EnsureDirExists(), which still detects
missing directories but now asks the user what to do about them. Note that
the autocommand set-up is exactly the same as in Listing 17; only the
EnsureDirExists() function has changed.
Listing 18. Conditionally autocreating non-existent directories
augroup AutoMkdir
    autocmd!
    autocmd BufNewFile * :call EnsureDirExists()
augroup END

function! EnsureDirExists ()
    let required_dir = expand("%:h")
    if !isdirectory(required_dir)
        call AskQuit("Directory '" . required_dir . "' doesn't exist.", "&Create it?")
        try
            call mkdir( required_dir, 'p' )
        catch
            call AskQuit("Can't create '" . required_dir . "'", "&Continue anyway?")
        endtry
    endif
endfunction

function! AskQuit (msg, proposed_action)
    if confirm(a:msg, "&Quit?\n" . a:proposed_action) == 1
        exit
    endif
endfunction
In this version of the function,
EnsureDirExists() locates the
required directory and detects whether it exists, exactly as before.
However, if the directory is missing,
EnsureDirExists() now
calls a helper function:
AskQuit(). This function uses the
built-in
confirm() function to inquire whether you want to
exit the session or autocreate the directory.
"Quit?" is
presented as the first option, which also makes it the default if you just
hit
<ENTER>.
If you do select the
"Quit?"option, the helper function
immediately terminates the Vim session. Otherwise, the helper function
simply returns. In that case,
EnsureDirExists() continues to
execute, and attempts to call
mkdir().
Note, however, that the call to
mkdir() is now inside a
try...endtry construct. This is -- as you might expect -- an
exception handler, which will now catch the
E739 error that
is thrown if
mkdir() is unable to create the requested
directory.
When that error is thrown, the
catch block will intercept it
and will call
AskQuit() again, informing you that the
directory could not be created, and asking whether you still want to
continue. For more details on Vim’s extensive exception handling
mechanisms see:
:help exception-handling.
The overall effect of this second version of
EnsureDirExists()
is to highlight the non-existent directory but require you to explicitly
request that it be created (by typing a single
'c' when
prompted to). If the directory cannot be created, you are again warned and
given the option of continuing with the session anyway (again, by typing a
single
'c' when asked). This also makes it trivially easy to
escape from a mistaken edit (simply by hitting
<ENTER>
to select the default
"Quit?" option at either prompt).
Of course, you might prefer that continuing was the default, in which case,
you would just change the first line of
AskQuit() to:
if confirm(a:msg, a:proposed_action . "\n&Quit?") == 2
In this case the proposed action would be the first alternative, and hence
the default behaviour. Note that
"Quit?" is now the second
alternative, so the response now has to be compared against the value 2.
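A minimal sketch of why the comparison value changes: confirm() returns the 1-based number of the button the user chose (or 0 if the dialog is aborted), so reordering the buttons reorders the return values:

```vim
let choice = confirm("Directory doesn't exist.", "&Create it?\n&Quit?")
" choice == 1 if 'Create it?' was selected, 2 if 'Quit?', 0 if aborted
```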
Looking ahead
Autocommands can save you a great deal of effort and error by automating repetitive actions that you would otherwise have to perform yourself. A productive way to get started is to take a mental step back as you edit and watch for repetitive patterns of usage that might be suitably automated using Vim’s event-handling mechanisms. Scripting those patterns into autocommands may require some extra work up front, but the automated actions will repay your investment every day. By automating everyday actions you’ll save time and effort, avoid errors, smooth your workflow, eliminate trivial stressors, and thereby improve your productivity.
Though your autocommands will probably start out as simple single-line automations, you may soon find yourself redesigning and elaborating them, as you think of better ways to have Vim do more of your grunt work. In this fashion, your event handlers can grow progressively smarter, safer, and more perfectly adapted to the way you want to work.
As Vim scripts like these become more complex, however, you also will need
better tools to manage them. Adding 10 or 20 lines to your
.vimrc every time you devise a clever new keymapping or
autocommand will eventually produce a configuration file that is several
thousand lines long ... and utterly unmaintainable.
So, in the next article in this series we’ll explore Vim’s simple plug-in
architecture, which allows you to factor out parts of your
.vimrc and isolate them in separate modules. We’ll look at
how that plug-in system works by developing a standalone module that
ameliorates some of the horrors of working with XML.
Resources
Learn
- Start learning about Vimscript, the embedded language for extending the Vim editor, with the first article in this series: "Scripting the Vim editor, Part 1: Variables, values, and expressions" (developerWorks, May 2009).
- Start at the Vim distributions downloads page to upgrade to the latest version of Vim for your platform.
Hi everyone, I have been a long-time forum viewer but first-time poster. I am really struggling with this project I am working on. I created the three functions, but they are not being called when the input matches the else-ifs. I really don't understand this problem. I have read so much about functions that I think I know less. Please, any help would be appreciated. My errors are probably apparent to most of you right away.
I need these functions to be called upon when the user inputs the corresponding number.
Thank you all!
#include <iostream>
#include <cmath>
#include <string>
using namespace std;

void option1() // would this execute if 1 was entered? (see below "cin >> option")
{
    years = retire - age;
    cout << "You have " << years << " years until retirement." << endl;
}
//----------------------------------------------------------------------------------
void option2()
{
    cout << "How much do you make per week in dollars? ";
    cin >> salary;
    earnings = 52 * (salary * (retire - age));
    cout << "You will earn $" << earnings << " between now and retirement. " << endl;
}
//-----------------------------------------------------------------------------------
void option3()
{
    cout << "How much do you make per week in dollars? ";
    cin >> salary;
    cout << "What percentage will you save? ";
    cin >> rate;
    earnings = 52 * salary;
    total = (rate / 100) * earnings;
    years = retire - age;
    cout << "There are " << years << " years till retirement." << endl;
    savings = savingsNow;
    pmt = (savings + total) / 52; // weekly payment made to retirement account
    cout << "PMT is " << pmt << endl;
    wr = .06 / 52; // wr is weekly intrest rate
    cout << "wr is " << wr << endl;
    n = 52 * years; // weeks until retirement
    cout << "n is " << n << endl;
    totalRetirement = (pmt) * (pow(1 + wr, n) - 1) / (wr);
    cout << "You will have $" << totalRetirement << " saved when you retire. " << endl;
}
//-------------------------------------------------------------------------------------------------------------
int main ()
{
    string name;
    double total, rate, salary, savings, intrest, savingsNow = 0, x, pmt, wr, n, totalRetirement;
    int age, retire, option, years, earnings;

    cout << "What is your name? ";
    getline (cin, name);
    cout << "How old are you? ";
    cin >> age;

    if ((age >= 60) && (age < 65)) retire = 65;
    if ((age >= 50) && (age < 60)) retire = 67;
    if ((age >= 40) && (age < 50)) retire = 70;
    if (age < 40) retire = 72;

    do {
        cout << "Please select an option: " << endl;
        cout << "\t(1) Number of years to retirement " << endl;
        cout << "\t(2) Amount earned between now and retirement " << endl;
        cout << "\t(3) Amount saved at retirement " << endl;
        cout << "\t(4) Exit (do nothing) " << endl;
        cin >> option;

        if (option == 1) {
            option1(); // Shouldn't this call the declared function above and send it back down here?!?!?
        }
        else if (option == 2) {
            option2();
        }
        else if (option == 3) {
            option3();
        }
        else {
            cout << "Please choose a valid option." << endl;
        }
    } while (option != 4);

    cout << "Thanks for using our program!" << endl;
    cout << endl;
}
Again, thanks for looking, guys. ANY help would be appreciated.
Introducing Kubernetes API Version v1beta3
We've been hard at work on cleaning up the API over the past several months (see for details). The result is v1beta3, which is considered to be the release candidate for the v1 API.
We would like you to move to this new API version as soon as possible. v1beta1 and v1beta2 are deprecated, and will be removed by the end of June, shortly after we introduce the v1 API.
As of the latest release, v0.15.0, v1beta3 is the primary, default API. We have changed the default kubectl and client API versions as well as the default storage version (which means objects persisted in etcd will be converted from v1beta1 to v1beta3 as they are rewritten).
You can take a look at v1beta3 examples such as:
To aid the transition, we've also created a conversion tool and put together a list of important API changes.
- The resource id is now called name.
- name, labels, annotations, and other metadata are now nested in a map called metadata.
- desiredState is now called spec, and currentState is now called status.
- /minions has been moved to /nodes, and the resource has kind Node.
- The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: /api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}
- The names of all resource collections are now lower cased - instead of replicationControllers, use replicationcontrollers.
- To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the ?watch=true URL parameter along with the desired resourceVersion parameter to watch from.
- The container entrypoint has been renamed to command, and command has been renamed to args.
- Container, volume, and node resources are expressed as nested maps (e.g., resources{cpu:1}) rather than as individual fields, and resource values support scaling suffixes rather than fixed scales (e.g., milli-cores).
- Restart policy is represented simply as a string (e.g., "Always") rather than as a nested map (always{}).
- The volume source is inlined into volume rather than nested.
- Host volumes have been changed from hostDir to hostPath to better reflect that they can be files or directories.
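To make the renames above concrete, here is a small sketch (the object and its field values are hypothetical) that reshapes a v1beta1-style object into its v1beta3 form following the list above:

```python
# v1beta1-style object (hypothetical values)
old = {
    "id": "my-pod",
    "labels": {"app": "web"},
    "desiredState": {"containers": [{"entrypoint": ["/bin/server"]}]},
}

# The same object reshaped along the v1beta3 renames described above:
new = {
    "metadata": {
        "name": old["id"],            # id -> name
        "labels": old["labels"],      # labels nested under metadata
    },
    "spec": {                         # desiredState -> spec
        "containers": [
            {"command": c["entrypoint"]}  # entrypoint -> command
            for c in old["desiredState"]["containers"]
        ],
    },
}

print(new["metadata"]["name"])  # my-pod
```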
And the most recently generated Swagger specification of the API is here:
More details about our approach to API versioning and the transition can be found here:
Another change we discovered is that with the change to the default API version in kubectl, commands that use "-o template" will break unless you specify "--api-version=v1beta1" or update to v1beta3 syntax. An example of such a change can be seen here:
If you use "-o template", I recommend always explicitly specifying the API version rather than relying upon the default. We may add this setting to kubeconfig in the future.
Let us know if you have any questions. As always, we're available on IRC (#google-containers) and github issues.
https://kubernetes.io/blog/2015/04/introducing-kubernetes-v1beta3/
[SOLVED] Booting from a USB stick: VFS: Cannot open root device
I'm working from a previous configuration that was not decided by me. Basically, the image is created from scratch using PTXdist (see ptxdist.org) and written to an ext3 partition (previously to a flash drive, now to a USB drive). We use EXTLINUX as a bootloader. The whole process takes place on a Debian system.
I had to "hack" init/do_mounts.c so that the USB device (USB stick or USB card reader) could be detected. There are patches to do that for old versions of the kernel, just search "usb-storage-root.patch" on Google.
Here's what my do_mounts.c file looks like:
Code:
get_fs_names(fs_names);
retry:
	for (p = fs_names; *p; p += strlen(p)+1) {
		int err = do_mount_root(name, p, flags, root_mount_data);
		switch (err) {
			case 0:
				goto out;
			case -EACCES:
				flags |= MS_RDONLY;
				goto retry;
			case -EINVAL:
				continue;
		}
		/*
		 * Allow the user to distinguish between failed sys_open
		 * and bad superblock on root device.
		 * and give them a list of the available devices
		 */
#ifdef CONFIG_BLOCK
		__bdevname(ROOT_DEV, b);
#endif
		printk("VFS: Cannot open root device \"%s\" or %s, retrying in 1s.\n",
				root_device_name, b);
		printk("Please append a correct \"root=\" boot option; here are the available partitions:\n");
		printk_all_partitions();
		/* wait 1 second and try again */
		current->state = TASK_INTERRUPTIBLE;
		schedule_timeout(HZ);
		goto retry;
	}
My extlinux.conf file looks like this:
Code:
DEFAULT linux

LABEL linux
	SAY Now booting the kernel from EXTLINUX...
	KERNEL /boot/bzImage
	APPEND rw root=/dev/sdb1
Code:
VFS: Cannot open root device "sdb1" or unknown-block(0,0), retrying in 1s.
Please append a correct "root=" boot option: here are the available partitions:
...
0820      249088 sdb driver: sd
0821      249072 sdb1
ext3 is enabled. I've tried with ext2. Same result.
I've also tried with grub. Same result.
What should I do?
Olivier
I'd add that I've tried with two different USB devices: a USB stick (4 GB) and a multi-card reader (with a 250 MB SD card). Both lead to the same result ("VFS: Cannot open root device").
Solution: initramfs
The solution is *not* to boot from the USB device directly, but to use an initramfs image. More info here:
sourcemage.org/HowTo/Initramfs
jootamam.net/howto-initramfs-image.htm
In the init script, you can easily wait for USB devices before calling switch_root. Example:
Code:
try_count=1
while [ $try_count -le 20 ]
do
	if [[ -e "${root}" ]] ; then
		break
	fi
	sleep 1
	mdev -s
	let try_count=$try_count+1
	echo -n "."
done
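The loop above can be sanity-checked outside an initramfs. Here is a self-contained variant, adapted for plain sh (POSIX `[` and `$(( ))` instead of `[[` and `let`, and no `mdev` since that is a BusyBox tool), where a plain file at a hypothetical path stands in for the root device node and "appears" after a delay:

```shell
#!/bin/sh
# A plain file (hypothetical path) stands in for the root device node;
# the background subshell makes it "appear" after about 2 seconds.
root="/tmp/fake_root_dev.$$"
( sleep 2; : > "$root" ) &

try_count=1
while [ $try_count -le 20 ]
do
    if [ -e "$root" ] ; then
        break
    fi
    sleep 1
    try_count=$((try_count+1))
    echo -n "."
done

if [ -e "$root" ]; then
    echo "root device found after $try_count tries"
else
    echo "timed out waiting for root device"
fi
rm -f "$root"
```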
http://www.linuxforums.org/forum/other-linux-distributions/177802-solved-booting-usb-stick-vfs-cannot-open-root-device.html
Thanks to inline the compiler can replace the function call by the function body. There are two reasons to use inline functions: performance and safety.
My primary goal was to write this post about performance. Fortunately, a further great benefit of inline came to my mind. inline makes macros as function replacement superfluous.
Macros are only the poor man's means of replacing text. They have no understanding of C++ syntax. Therefore, a lot can go wrong.
 1 // macro.cpp
 2
 3 #include <iostream>
 4
 5 #define absMacro(i) ( (i) >= 0 ? (i) : -(i) )
 6
 7 inline int absFunction(int i){
 8   return i >= 0 ? i : -i;
 9 }
10
11 int func(){
12   std::cout << "func called" << std::endl;
13   return 0;
14 }
15
16 int main(){
17
18   std::cout << std::endl;
19
20   auto i(0);
21   auto res = absMacro(++i);
22   std::cout << "res: " << res << std::endl;
23
24   absMacro(func());
25
26   std::cout << std::endl;
27
28   i = 0;
29   res = absFunction(++i);
30   std::cout << "res: " << res << std::endl;
31
32   absFunction(func());
33
34   std::cout << std::endl;
35 }
Both the macro in line 5 and the inline function in lines 7 - 9 return the absolute value of their argument. I invoke each with the argument ++i, where i is 0. The result should be 1. Should be, because the macro increments the expression i twice. Consequently, the result is 2 instead of 1. The function func shows this explicitly: when I use func() as an argument, it is invoked twice in the case of the macro but only once in the case of the inline function.
What happens if I use inline?
First of all, not everything behaves as it seems. If I declare a function as inline, the compiler interprets it only as a recommendation and is free to ignore it. It also works the other way around: modern compilers like Microsoft Visual C++, GCC, or Clang can inline a function even without the keyword if it makes sense from a performance perspective.
Now I have to write in the subjunctive: let us assume the compiler accepts my recommendation and inlines the exchange function.
inline void exchange(int& x, int& y){
int temp= x;
x= y;
y= temp;
}
What's happening at the function invocation?
...
auto a(2011);
auto b(2014);
exchange(a,b);
...
The compiler substitutes the function call by the function body.
...
auto a(2011);
auto b(2014);
int temp= a;
a= b;
b= temp;
...
The small example shows the advantages and disadvantages of inlining a function.
Admittedly, I mentioned only one disadvantage, but it should not be judged lightly. The usage of the keyword inline is a balance between performance and the size of the executable. That is the simple rule; the details are a lot more complicated. The executable may become faster or slower, bigger or smaller through the use of inline. inline can cause or prevent a crash of your program, and it can increase or decrease the number of cache misses. Whoever wants to be thoroughly confused should read the FAQ about inline functions at isocpp.org.
Until this point, I only wrote about functions. Of course, you can also declare methods as inline.
A method can become implicitly and explicitly inline. Implicitly, if you define the method in the class body. Explicitly, if you define the methods outside the class body and use the keyword inline.
 1 // inline.cpp
 2
 3 class MyClass{
 4  public:
 5   void implicitInline(){};
 6   void explicitInline();
 7   inline void notInline();
 8 };
 9
10 inline void MyClass::explicitInline(){}
11
12 void MyClass::notInline(){}
13
14 int main(){
15
16   MyClass cl;
17   cl.implicitInline();
18   cl.explicitInline();
19   cl.notInline();
20 }
The method implicitInline (line 5) is inline because I defined it in the class body. The method explicitInline (line 6) is inline because I used the keyword inline at the point of the method definition. I want to stress one point: if I use the keyword inline only at the point of the method declaration, I will not get an inline function. This error happened to me with the method notInline (line 7).
Good advice is expensive. Should you use the keyword inline always or never? Of course, the answer is not so simple. You should use inline if you have a function that is time critical and you invoke it not too often. In this case, the performance advantages will dominate the size disadvantages.
But we have to keep the big picture in mind. The Working Group WG21 wrote the paper ISO/IEC TR 18015 about C++ performance in 2006. Chapter 5.3.4 of the paper explicitly examines the keyword inline on five popular C++ compilers, comparing functions, inline functions, and macros. The conclusion of the paper is that inline function calls are about 2 to 17 times faster than ordinary function calls, and that inline function calls and macros are in the same performance range.
If this rule of thumb is too simple for you, you should measure the performance of your program. This is, in particular, true for embedded systems that have stronger resource concerns.
After getting much attention on Reddit for missing the main point about inline functions, I will add a few words about the ODR.
ODR stands for the One Definition Rule. In the case of an inline function, it requires that the function be defined in every translation unit that uses it, and that all definitions be identical.
In modern compilers, the keyword inline is not about inlining functions anymore; modern compilers almost completely ignore it. The more or less only remaining use case for inline is to mark functions for ODR correctness. In my opinion, the name inline is nowadays quite misleading.
Sorry, the confusion does not end here. I want to explicitly stress one point: inline functions by default have external linkage in C++. This is different from C, where inline functions by default have internal linkage. You can read the details in the article Linkage of inline functions.
This was a post about classical C++. In the next post, I will write about C++11. C++11 has the keyword constexpr. You can use constexpr for values, functions, and user-defined data types. Constant expressions declared with constexpr can be evaluated at compile time. They offer a lot of benefits. Which ones? You will see in the next post.
Great article!
Small note:
>>> "Inline functions with external linkage"
As far as I know functions declared with inline have internal linkage. Generally, everything you wrote is fine, just this phrase is misleading IMO.
I added a few sentences in the paragraph C versus C++.
https://www.modernescpp.com/index.php/inline
In order to learn sockets in Python I am trying to write a simple group chat server and client. I need a little nudge, please. My question contains some GUI but only as decoration; the root question is asyncore / asynchat. I have read that Twisted makes this all simple, but I wanted to get my hands dirty here for educational purposes.

I've managed to build a small echo server with asyncore and asynchat that can handle any number of connections and repeat to all clients what was sent in by individual clients. The server seems to work well. The GUI client, wxPython, creates a socket on open. The user types some text and clicks [SEND]. The send event puts the text out to the socket, then reads the response and adds it to a list control. Everything works well.

I'm now ready for the next step: to decouple the send data from the receive data, so a user who does not send text still gets what others may have written. I need to somehow poll the thread so "add to list control" can be triggered on receiving the data. I've tried several approaches and can't seem to get it right.

First I tried setting up a separate threading.Thread that created the socket. This way I could put a poll on the socket. I even created (borrowed) a custom event so when data was found, it could notify the GUI and call the "add to list control". But I must have been doing it wrong because it kept locking up in the loop.

I knew from the server side that asynchat has built-in helpers with collect_incoming_data() and found_terminator(). (My terminator is appended when data is sent to the server.) I could then use asyncore.poll() in my threading.Thread to raise the custom event that sends data to the GUI when something comes over the socket. I think I have the concepts right, but the execution breaks down here. I tried having the threading.Thread create an async_chat, but I keep getting errors that I assume the asyncore.dispatcher takes care of. So maybe I should use both core and chat on the client as well as the server.

The only examples of dispatcher I can find open up an async_chat on "listen". The client needs to initiate the communication when it starts. The server is listening and will respond. I won't know on which port the response will come back.

Here are some bits showing what I'm trying to do. You may find some of it familiar. I hope I didn't clip anything that may have been needed. If anybody has time to point out where I'm going wrong it would be very helpful. Thank you if you were patient enough to get this far...

import threading
import time
import socket
import asyncore
import asynchat
from wxPython.wx import *

# server is listening on ...
REMOTE_HOST = '172.0.0.1'
REMOTE_PORT = 50001

class MyTest(wxFrame):
    def __init__(self, parent, ID, title):
        # Initialize wxFrame
        wxFrame.__init__(self, .....
        # start the thread
        self.network = NetworkThread(self)
        self.network.start()
        EVT_NETWORK(self, self.OnNetwork)
        # build rest of GUI... works fine

# the network thread communicates back to the main
# GUI thread via this synthetic event
class NetworkEvent(wxPyEvent):
    def __init__(self, msg=""):
        wxPyEvent.__init__(self)
        self.SetEventType(wxEVT_NETWORK)
        self.msg = msg

wxEVT_NETWORK = 2000

def EVT_NETWORK(win, func):
    win.Connect(-1, -1, wxEVT_NETWORK, func)

class NetworkThread(threading.Thread):
    def __init__(self, win):
        threading.Thread.__init__(self)
        self.win = win
        self.keep_going = true
        self.running = false
        self.MySock = NetworkServer(REMOTE_HOST, REMOTE_PORT, self.received_a_line)
        self.event_loop = EventLoop()

    def push(self, msg):
        self.MySock.push(msg)

    def is_running(self):
        return self.running

    def stop(self):
        self.keep_going = 0

    def check_status(self, el, time):
        if not self.keep_going:
            asyncore.close_all()
        else:
            self.event_loop.schedule(1, self.check_status)

    def received_a_line(self, m):
        self.send_event(m)

    def run(self):
        self.running = true
        self.event_loop.schedule(1, self.check_status)
        # loop here checking every 0.5 seconds for shutdowns etc..
        self.event_loop.go(0.5)
        # server has shutdown
        self.send_event("Closed down network")
        time.sleep(1)
        self.running = false

    # send a synthetic event back to our GUI thread
    def send_event(self, m):
        evt = NetworkEvent(m)
        wxPostEvent(self.win, evt)
        del evt

class NetworkServer(asyncore.dispatcher):
    def __init__(self, host, port, handler=None):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

    def handle_connect(self):
        # I thought, maybe, the server would trigger this... nope
        ChatSocket(host, port, self.handler)

    def handle_read(self):
        data = self.recv(8192)
        self.send_event(self.data)
        print data

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

    # send a synthetic event back to our GUI thread
    def send_event(self, m):
        evt = NetworkEvent(m)
        wxPostEvent(self.win, evt)
        del evt

class ChatSocket(asynchat.async_chat):
    def __init__(self, host, port, handler=None):
        asynchat.async_chat.__init__(self, port)

    def collect_incoming_data(self, data):
        self.data.append(data)

    def found_terminator(self):
        if self.handler:
            self.send_event(self.data)
        else:
            print 'warning: unhandled message: ', self.data
        self.data = ''

class EventLoop:
    socket_map = asyncore.socket_map

    def __init__(self):
        self.events = {}

    def go(self, timeout=5.0):
        events = self.events
        while self.socket_map:
            print 'inner-loop'
            now = int(time.time())
            for k, v in events.items():
                if now >= k:
                    v(self, now)
                    del events[k]
            asyncore.poll(timeout)

    def schedule(self, delta, callback):
        now = int(time.time())
        self.events[now + delta] = callback

    def unschedule(self, callback, all=1):
        "unschedule a callback"
        for k, v in self.events:
            if v is callback:
                del self.events[k]
            if not all:
                break

# -----------------------------------------
# Run App
# -------------------------------------------
class TestApp(wxApp):
    def OnInit(self):
        frame = MyTest(None, -1, "Test APP")
        frame.Show(true)
        self.SetTopWindow(frame)
        return true

app = TestApp(0)
app.MainLoop()
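The root problem here (decoupling receive from send) can be shown without asyncore at all. Below is a small self-contained sketch, not the poster's code, using the modern selectors module: a polling loop dispatches every complete line received on a socket to a callback, which is the same role asyncore.poll() plus found_terminator() plays above. A socketpair stands in for the chat server.

```python
import selectors
import socket

class LineClient:
    """Poll a socket and fire a callback for every complete line received."""
    TERMINATOR = b"\n"

    def __init__(self, sock, on_line):
        self.sock = sock
        self.sock.setblocking(False)
        self.on_line = on_line
        self.buffer = b""
        self.sel = selectors.DefaultSelector()
        self.sel.register(self.sock, selectors.EVENT_READ)

    def send_line(self, text):
        # Sending is independent of receiving; a GUI [SEND] handler
        # would call this and return immediately.
        self.sock.sendall(text.encode() + self.TERMINATOR)

    def poll(self, timeout=0.1):
        # Equivalent in spirit to asyncore.poll(): check the socket and
        # dispatch each complete line, e.g. to a wx event poster.
        for key, _ in self.sel.select(timeout):
            self.buffer += self.sock.recv(8192)
            while self.TERMINATOR in self.buffer:
                line, self.buffer = self.buffer.split(self.TERMINATOR, 1)
                self.on_line(line.decode())

# Loopback demo: a socket pair stands in for the chat server.
a, b = socket.socketpair()
received = []
client = LineClient(a, received.append)
b.sendall(b"hello from another user\n")
client.poll()
print(received)  # ['hello from another user']
```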
https://mail.python.org/pipermail/python-list/2003-July/196882.html
TERMIOS(4) BSD Programmer's Manual TERMIOS(4)
termios - general terminal line discipline
#include <termios.h>

Sessions and controlling terminals: a login shell and all processes spawned from it belong to the same session. Every process of a session that has a controlling terminal has the same controlling terminal, and a terminal may be the controlling terminal for at most one session. The controlling terminal for a session is allocated by the session leader by issuing the TIOCSCTTY ioctl; a controlling terminal is never acquired by merely opening a terminal device file. A session leader can give up the controlling terminal by closing all of its file descriptors associated with it, although other processes may continue to have it open. When a controlling process terminates, the controlling terminal is disassociated from the session. If a job is started in the foreground, the user may type a key (usually '^Z') which generates the terminal stop signal (SIGTSTP) and stops the job.

Input processing: in canonical mode, terminal input is processed in units of lines; a line is delimited by a newline. In non-canonical mode, the VMIN and VTIME values determine when a read request is satisfied. If VMIN is greater than {MAX_INPUT}, the response to the request is undefined. The possible combinations of VMIN and VTIME and their interactions are described below.

Case A: VMIN > 0, VTIME > 0. The inter-byte timer is started when the first byte is received. If VMIN bytes are received before the inter-byte timer expires (remember that the timer is reset upon receipt of each byte), the read is satisfied. If the timer expires before VMIN bytes are received, the characters received so far are returned. If data is in the buffer at the time of the read(), the result is as if data had been received immediately after the read().

Case B: VMIN > 0, VTIME = 0. The read is satisfied only when VMIN bytes have been received or a signal is received.

Case C: VMIN = 0, VTIME > 0. The timer is activated when the read is issued; the read is satisfied by the receipt of a single byte or by expiration of the timer.

Special characters: a special character on input is recognized if the ISIG flag (see the Local Modes section) is enabled; the INTR character, for example, generates a SIGINT signal. If ICANON is set, the ERASE character is discarded when processed. For the WERASE character, preceding whitespace is erased, and then the maximal sequence of alphabetics/underscores or non-alphabetics/underscores; as a special case in this second algorithm, the first previous non-whitespace character is skipped in determining whether the preceding word is a sequence of alphabetics/underscores. This sounds confusing but turns out to be quite practical.

If a modem disconnect is detected by the terminal interface for a controlling terminal, the session is notified appropriately.

Input modes:

IGNBRK  /* ignore BREAK condition */
BRKINT  /* map BREAK to SIGINTR */
IGNPAR  /* ignore (discard) parity errors */
PARMRK  /* mark parity and framing errors */
INPCK   /* enable checking of parity errors */
ISTRIP  /* strip 8th bit off chars */
INLCR   /* map NL into CR */
IGNCR   /* ignore CR */
ICRNL   /* map CR to NL (ala CRMOD) */
IXON    /* enable output flow control */
IXOFF   /* enable input flow control */
IXANY   /* any char will restart after stop */
IMAXBEL /* ring bell on input queue full */
IUCLC   /* translate upper case to lower case */

In the context of asynchronous serial data transmission, a break condition is a sustained sequence of zero-valued bits; in other contexts, the definition of a break condition is implementation defined. If IGNBRK is set, a break condition detected on input is ignored, that is, not put on the input queue and therefore not read by any process. If IGNBRK is not set and BRKINT is set, the break condition flushes the input queue. If neither is set, a break is read as a '\0' character, or, if PARMRK is set, as '\377', '\0', '\0'.

If CREAD is set, the receiver is enabled; otherwise, no character is received. Not all hardware supports this bit. In fact, this flag is pretty silly, and if it were not part of the termios specification it would be omitted. If PARENB is set, parity generation and detection are enabled. The CCTS_OFLOW (CRTSCTS) flag is currently unused. For a terminal on another host, the baud rate may or may not be set on the connection between that terminal and the machine it is directly connected to.

Local modes: if there is no character to erase, an implementation may echo an indication that this was the case or do nothing. If ECHOK and ICANON are set, the KILL character causes the current line to be discarded. If NOFLSH is set, the normal flush of the input and output queues associated with the INTR, QUIT, and SUSP characters is not done.

MirOS BSD #10-current, April 19, 1994.
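The non-canonical VMIN/VTIME behavior described above can be exercised from a script. Here is a small sketch (Python used for brevity; it requires a Unix pseudo-terminal) that puts a pty into non-canonical mode with VMIN = 1 and VTIME = 0, so a single byte satisfies a read:

```python
import os
import pty
import termios

# Allocate a pseudo-terminal pair; the slave side behaves like a tty.
master, slave = pty.openpty()

attrs = termios.tcgetattr(slave)
attrs[3] &= ~(termios.ICANON | termios.ECHO)  # lflag: leave canonical mode
attrs[6][termios.VMIN] = 1                    # cc: satisfy read after 1 byte
attrs[6][termios.VTIME] = 0                   # cc: no inter-byte timer
termios.tcsetattr(slave, termios.TCSANOW, attrs)

# A single byte written on the master side now satisfies a read
# immediately, without waiting for a newline (Case B above).
os.write(master, b"x")
data = os.read(slave, 1)
print(data)  # b'x'

os.close(master)
os.close(slave)
```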
https://www.mirbsd.org/htman/sparc/man4/termios.htm
15 November 2010 09:50 [Source: ICIS news]
SINGAPORE (ICIS)--Taiwan’s China American Petrochemical (Capco) shut its 700,000 tonne/year No 6 purified terephthalic acid (PTA) plant in Kaohsiung in the morning due to a mechanical outage, a source close to the company said on Monday.
The shutdown was estimated to last five to seven days, said the source.
The company could not be reached for any comments.
“The outage will keep spot supplies tight …”
He noted that the 444,000 tonne/year plant of another Taiwan PTA major, Tuntex Petrochemical, had remained shut for a three-week turnaround since 1 November.
Spot PTA prices for Taiwan cargoes were around $1,160-1,180/tonne (€850-865/tonne) CFR (cost & freight) China Main Port (CMP) on Monday, $10/tonne lower from last Friday’s level, ceasing the rapid downtrend seen last week, according to ICIS data.
http://www.icis.com/Articles/2010/11/15/9410237/taiwans-capco-shuts-no-6-pta-plant-on-mechanical-outage.html
There are some frustrations with using PostageStamp nodes, though.
First, when you Ctrl/Cmd-click a node to select all upstream nodes and move them, nodes are selected through the hidden inputs of PostageStamps, so you can accidentally move nodes that might be in a totally different place in your script.
Second, when you want to cut or copy and paste a section of your node graph, all PostageStamps with hidden inputs will be disconnected. This tool posted on Nukepedia got me thinking about how to fix this particular frustration, and I finally got around to writing something that works reliably.
This tool monkeypatches the cut/copy/paste behavior in Nuke to handle PostageStamps as a special exception. It stores what node each one is connected to in a knob on the node itself, so that it can reconnect to the right place when you paste it.
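The monkeypatching idea itself is simple. Here is a hypothetical sketch (names are illustrative, not the actual postageStampTools code) of wrapping a built-in action so extra bookkeeping runs before the original behavior, the same way the tool wraps Nuke's cut/copy/paste:

```python
# Stand-in for nuke's built-in copy action (illustrative only).
def builtin_copy():
    return "copied"

def with_bookkeeping(original, before):
    """Return a wrapper that runs `before()` and then the original action."""
    def wrapped(*args, **kwargs):
        before()
        return original(*args, **kwargs)
    return wrapped

log = []
# "Monkeypatch": rebind the name to the wrapper, as the tool does for
# the cut/copy/paste menu entries.
builtin_copy = with_bookkeeping(
    builtin_copy, lambda: log.append("stored stamp connections"))

print(builtin_copy())  # copied
print(log)             # ['stored stamp connections']
```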
It also adds a shortcut to create a new PostageStamp node, and adds some helpful things like a button to connect it to the selected node (useful if you need to connect it to a node that’s really far away, and don’t want to drag a pipe for hours, or select one and then select the other and hit Y).
And here’s how to add it to your menu.py
import postageStampTools
nuke.toolbar('Nodes').addCommand('Other/PostageStamp', 'postageStampTools.create()', 'alt+shift+p')
nuke.menu("Nuke").addCommand('Edit/Cut', lambda: postageStampTools.cut(), 'ctrl+x')
nuke.menu("Nuke").addCommand('Edit/Copy', lambda: postageStampTools.copy(), 'ctrl+c')
nuke.menu("Nuke").addCommand('Edit/Paste', lambda: postageStampTools.paste(), 'ctrl+v')
http://jedypod.com/posts
Hi, I've been trying to loop through a list of search terms I'm interested in. When I run the first part of my code, I only get publications matching the last search term in my list. Instead I want it to iterate through the list and append to the record_list object for each new search term, not just overwrite it with the results from the last search. Thanks!!
from Bio import Entrez
from Bio import Medline
from tqdm import tqdm
import pandas as pd
pd.set_option('display.max_colwidth', -1)
import numpy as np

# Change this email to your email address
Entrez.email = "Put your email address here"

disease_list = ['ebola', 'aml', 'primary glomerular disease associated with significant proteinuria']

# search and return total number of publications
def search(x):
    Entrez.email = Entrez.email
    results = {}
    for x in disease_list:
        keyword = x
        handle = Entrez.esearch(db='pubmed', retmax=1000, retmode='text', term=keyword)
        results = Entrez.read(handle)
        print('Total number of publications that contain the term {}: {}'.format(keyword, results['Count']))
        for keyword, results['Count'] in results:
            results[x].append(results['Count'])
    return results

if __name__ == '__main__':
    results = search(disease_list)
If you're only getting the last entry from a loop, it's because somewhere your loop is overwriting the entry instead of appending new ones.
I've tried to fix your formatting for the code, but at the moment your indentation and loop structures are not clear, so it's not obvious which bits you mean to have in which loop, and thus where your problem comes from. Please make sure the code appears correct.
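In other words, the fix is to create the container once, outside the loop, and add one entry per term. A minimal sketch of that pattern (with a stand-in for the Entrez call, so the counts here are hypothetical):

```python
def count_publications(terms, fetch_count):
    """Collect one count per search term instead of rebinding the dict."""
    results = {}
    for term in terms:
        # In the real code this would be Entrez.esearch + Entrez.read
        results[term] = fetch_count(term)
    return results

# Hypothetical counts standing in for PubMed responses:
fake_counts = {"ebola": 12000, "aml": 54000}
counts = count_publications(["ebola", "aml"], fake_counts.get)
print(counts)  # {'ebola': 12000, 'aml': 54000}
```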
Sorry about that; let's try again. I turned it into a function so it's easier to read. This is the part I'm getting stuck on:
https://www.biostars.org/p/413019/#413180
Deep linking and URL scheme in iOS
Opening an app from a URL is such a powerful iOS feature. It drives users to your app and can create shortcuts to specific features. This week, we'll dive into deep linking on iOS and how to create a URL scheme for your app.
When we talk about deep linking for a mobile app, it means creating a specific URL to open a mobile application. It comes in two formats:
- a custom URL scheme that your app registers: scheme://videos
- a universal link that opens your app from a registered domain: mydomain.com/videos
Today, we’ll focus on the former one.
I’ll mostly focus on the code for an UIKit implementation but I’ll also briefly cover SwiftUI one if that’s what you’re looking for too.
Setting up URL Scheme
Setting up a custom URL scheme is the same regardless of whether you are using SwiftUI or UIKit. In Xcode, under your project configuration, select your target and navigate to the Info tab. You'll see a URL Types section at the bottom.
Clicking +, I can create a new type. For the identifier, I often reuse the app bundle. For the URL Schemes, I would suggest using the app name (or a shortened version) to keep it as short as possible, and it shouldn't include any custom characters. For this example, I'll use deeplink.
That’s it. The app is ready to recognize the new URL, now we need to handle it when we receive one.
SwiftUI deep linking.
If you don't have any AppDelegate and SceneDelegate files, which is usually the case for a SwiftUI implementation, we don't have much work to do.
In the App implementation, we can capture the opened URL with the onOpenURL(perform:) modifier.
import SwiftUI

@main
struct DeeplinkSampleApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onOpenURL { url in
                    print(url.absoluteString)
                }
        }
    }
}
To test it, I can install the app on a simulator and launch the given url from the Terminal app
xcrun simctl openurl booted "deeplink://test"
Pretty cool! Let’s look how UIKit implementation is different.
UIKit deep link
On paper, UIKit or SwiftUI shouldn't make a difference in the way we handle deep linking. However, it mostly comes down to having an AppDelegate or SceneDelegate, which are more common in UIKit apps.
For older apps that only have an AppDelegate, the app captures the deep link opening in the following method.
extension AppDelegate {
    func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey: Any]) -> Bool {
        print(url.absoluteString)
        return true
    }
}
The function returns a Boolean indicating whether the app can handle the given URL.
For newer apps that include a SceneDelegate, the callback lives there instead. It's important to note that the AppDelegate method won't get called, even if you implement it.
extension SceneDelegate {
    func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
        guard let firstUrl = URLContexts.first?.url else {
            return
        }
        print(firstUrl.absoluteString)
    }
}
In this implementation, notice that we no longer need to return any result. However, the parameter passed is now a Set<UIOpenURLContext> rather than a single URL, so one or more URLs can be opened at once. I don't have a use case involving more than one URL, so I'll just take the first one for the moment.
The same way as earlier, we can install the app on our simulator and check that everything is set up correctly: we should see our deep link URL printed.
xcrun simctl openurl booted "deeplink://test"
Once it's set up, the idea is to create routes to identify and open the right screen. Let's dive in.
Deeplink handler implementations
The idea is pretty simple: for a given link, we need to identify what user journey or screen we should open. Since there can be many features across the app, and because we want to avoid a massive switch statement to handle them all, we'll be smarter and divide and conquer.
For this example, let's imagine we have a video editing app. There are three main tabs: one to edit a new video, one to list the edited videos, and an account page with app and user information.
We can think of three main paths:
- deeplink://videos/new - starts a new video editing journey
- deeplink://videos - lands on the video listing tab
- deeplink://account - lands on the account screen
First, I’ll create a protocol of deeplink handler to define the minimum requirements of any new handlers.
protocol DeeplinkHandlerProtocol {
    func canOpenURL(_ url: URL) -> Bool
    func openURL(_ url: URL)
}
I will also define a DeeplinkCoordinator that holds the handlers and finds the right one to use. It also returns a Boolean, like the AppDelegate callback does, so we can use it in both implementations.
protocol DeeplinkCoordinatorProtocol {
    @discardableResult
    func handleURL(_ url: URL) -> Bool
}

final class DeeplinkCoordinator {
    let handlers: [DeeplinkHandlerProtocol]

    init(handlers: [DeeplinkHandlerProtocol]) {
        self.handlers = handlers
    }
}

extension DeeplinkCoordinator: DeeplinkCoordinatorProtocol {
    @discardableResult
    func handleURL(_ url: URL) -> Bool {
        guard let handler = handlers.first(where: { $0.canOpenURL(url) }) else {
            return false
        }
        handler.openURL(url)
        return true
    }
}
Now we can define separate handlers, one for each different path. Let’s start first with the Account journey, the simplest one.
final class AccountDeeplinkHandler: DeeplinkHandlerProtocol {
    private weak var rootViewController: UIViewController?

    init(rootViewController: UIViewController?) {
        self.rootViewController = rootViewController
    }

    func canOpenURL(_ url: URL) -> Bool {
        return url.absoluteString == "deeplink://account"
    }

    func openURL(_ url: URL) {
        guard canOpenURL(url) else { return }

        // mock the navigation
        let viewController = UIViewController()
        viewController.title = "Account"
        viewController.view.backgroundColor = .yellow
        rootViewController?.present(viewController, animated: true)
    }
}
To keep it simple, I only test for the matching url and navigate to the right screen. I also set a background color to see where I land. In your case, you can just present the right UIViewController rather than an empty one.
I will do the same for the different video journeys.
final class VideoDeeplinkHandler: DeeplinkHandlerProtocol {
    private weak var rootViewController: UIViewController?

    init(rootViewController: UIViewController?) {
        self.rootViewController = rootViewController
    }

    func canOpenURL(_ url: URL) -> Bool {
        return url.absoluteString.hasPrefix("deeplink://videos")
    }

    func openURL(_ url: URL) {
        guard canOpenURL(url) else { return }

        // mock the navigation
        let viewController = UIViewController()
        switch url.path {
        case "/new":
            viewController.title = "Video Editing"
            viewController.view.backgroundColor = .orange
        default:
            viewController.title = "Video Listing"
            viewController.view.backgroundColor = .cyan
        }
        rootViewController?.present(viewController, animated: true)
    }
}
Now we can inject them into the
DeeplinkCoordinator and let it handle the right route.
We’ll have two variations, the first one for
AppDelegate.
class AppDelegate: UIResponder, UIApplicationDelegate {

    // ...

    func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey: Any]) -> Bool {
        return deeplinkCoordinator.handleURL(url)
    }
}
And the second one for the
SceneDelegate
class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    // ...

    func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
        guard let firstUrl = URLContexts.first?.url else {
            return
        }
        deeplinkCoordinator.handleURL(firstUrl)
    }
}
We can test it again the same way we did so far, and we should land on the right screen (expecting an orange background).
xcrun simctl openurl booted "deeplink://videos/new"
To summarize: once the URL scheme was set up, we defined a funnel to capture all the deep links used to open the app and leveraged protocol-oriented programming to create multiple handler implementations, one for each specific path.
This implementation is extensible to new paths and can easily be unit tested to make sure each part behaves as expected.
That being said, there could be a few improvements, like verifying the full path rather than a relative one, for safer behavior. The navigation only uses present, but that is to keep the focus on the handler rather than on the transition itself.
On a security note, if you also pass parameters within your deeplink, make sure to verify the type and values expected. It could expose different injection vulnerabilities if we’re not careful.
From there, you should have a good understanding of how to use and handle deeplink to open your app and jump to a specific screen. This code is available on Github.
Happy coding 🎉
Source: https://benoitpasquier.com/deep-linking-url-scheme-ios/
size_t fwrite( void *buf, size_t size, size_t n, FILE *stream )

void *buf;      /* address of data buffer */
size_t size;    /* size of each buffer element */
size_t n;       /* number of buffer elements */
FILE *stream;   /* output stream */
Synopsis
#include "stdio.h"
The fwrite function retrieves n objects, each of size size bytes, from the buffer pointed to by buf, and writes them into the file associated with stream.
Parameters
buf is the address of the buffer where the data is stored, and should be at least size * n bytes in length. size and n are unsigned integers (size_t). stream is a stream pointer, open for output.
Return Value
fwrite returns the actual number of objects written, which will be less than or equal to n; a short count indicates a write error.
See Also
fputs , fread
Help URL: http://silverscreen.com/idh_silverc_compiler_reference_fwrite.htm
97 Reader Comments
it's not like i have a lot to be throwing around for this stuff but if I ever do I'll return to Kiva.
I had the same question. The Wikipedia page quotes 31.25% as the average rate charged as of Nov 2009. Average repayment rate is above 98% (less than 2% of lendees default, and many of these are covered by the field agents).
That interest rate is way, way too high for what is supposed to be happening here. Kiva pulls on the same heart strings that Salvation Army bell ringers and Toys for Tots pull on, then charges 32% interest. That is no way to "help" 3rd-world entrepreneurs with no other recourse. I would like to make a little profit as a lender, to justify the risk, but I would be A-OK with 10% for this kind of approach.
Presumably there are other risks that are being accounted for; if nobody has shown up to do loans more cheaply, that suggests there's no way to make money doing it.
Well you could read the FAQ:
The purpose of Kiva is to serve as a clearinghouse for funding microlenders. The funds you allocate to loans aren't investments, rather they are recycled again and again which helps keep the costs of the loans down as the microlenders can spend less to raise capital.
Again, if you did a little homework, you might realize that one of the reasons that microfinance isn't done by large institutions is the high cost of servicing microfinance loans.
In order for MFI's to be self sustaining, they have to make back the transaction and other costs of the loan, which add up to a much larger proportion of the loan costs than in larger loan amounts -- that means higher interest rates. Which are still going to be MUCH lower than the only other source of lending available to many of the poorest families -- criminals and loansharks whose rates can start at 100% or more.
I think the interest is high because the sums we are talking about are so low. If they collected low interests, they couldn't cover their costs, since the loans we are talking about are something like 30 bucks a piece. If the interest was lower, they would only get few cents, and that's not enough to cover their expenses.
So you would be happy to get 10% profit for your charity? How very magnanimous of you...
That may seem even more odious to you, but the smaller the loan, the more the transaction costs dominate. Those costs need to be covered somehow. It could be covered by donations, but I think that would actually hurt the viability of whole enterprise. Organizations like Kiva would be constrained not just by the availability of capital, but also the generosity of donors (which can swing widely) for funding operations. The bigger issue though is that all the businesses built on availability of microfinance would be built around an artificially low cost of capital, which is neither scalable, nor sustainable.
The point of Kiva and the like is to help build economies so more people have the opportunity to support themselves and improve their standard of living, not to keep people dependent on charity.
as far as there not being a profitable way to do this fairly, i think you put too much faith in the market (ESPECIALLY such a young one). my understanding is there simply isn't enough competition in the field to drive down the rates right now.
EDIT: these are not $30 loans people they are multi-hundred dollar loans. still very low value but a significant difference nonetheless.
Not so. Kiva does not pay interest on loans and the rate the borrower is charged varies by lending partner. Kiva posts information about all field partners and their lending practices, including average interest rates and return on assets. You are free to chose loans with the lowest possible interest rates for borrowers.
So don't get all high and mighty with your holiday spirit by sarcastically ridiculing someone for a perfectly reasonable question. Save your holier than thou attitude for your local community where you can comment to your neighbors to your heart's content about how magnanimous you are compared to them. Or better yet, don't say anything at all.
Happy Holidays.
I'd like it better if they pitched it as straight-up charitable donations, where you had the possibility of your contribution getting recycled. As it is, it just... I dunno, it seems like it's trying to pretend to be something other than what it is.
They really aren't trying to be something other than what they are. Kiva does a very good job of ensuring transparency. They audit their field partners and try to ensure that all loans are paid back. But its a very tough job, especially in countries where political unrest and fraud are commonplace. The low default rates are a testament to Kiva's work in building relationships with reputable agencies and firms, but they can't eliminate all risk. Browse their website to get a sense of what's involved, particularly with regards to their Field Partner's track records and affiliations. I think you'll find they are conscientious and scrupulous in their dealings with partners and lenders alike.
However, to me the comparison between what the MFI charges and what the average in the area is, is far more important. If the average is 130%, then a 30% interest rate is a bargain even if it seems outrageous to us. Let's also remember that inflation isn't everywhere in the low single digits - so the inflation-adjusted rate is likely to be significantly lower. Finally, I'd like to think the borrowers know what they're doing. If the business loan isn't going to pay off for them, I doubt they'd get it. We're not talking about loans for consumption here, after all.
Go right ahead and call these "investments" if you like. But like it was already said, they do not recommend that you consider these to be "investments". If I want to invest, I'll buy in to my stock-funds. Kiva on the other hand is NOT something that I would consider an "investment". Not in a million years. Investment is something that I would expect to earn money from. Kiva is nothing like that.
Sure, go right ahead and consider these to be traditional "investments" with expectations of return of that investment. You would be wrong, but hey, such is life.
So you walked in with dollar-signs in your eyes, and when you realized that this isn't really your typical investment where you have the option of raking in big bucks, you were disappointed. Well, too bad.
Sure, I can see someone who has never heard of Kiva to not know what it is. But I think that after reading the article and spending few minutes in Google, it would be pretty damn clear. But no, some people immediately assumed that it's just a way for them to earn money, and were disappointed.
Like I said, we are basically talking about charity here, and some people treat it as a vessel to earn even more money.
I'm not claiming to be a perfect person. And no, I haven't given money to Kiva. But then again, I don't look at it as a tool to earn even more money, unlike some other people seem to think.
I think there is a clear difference between microfinancing and "traditional" charity. The problem with charity is that either it doesn't fix the problem (like, just handing out food instead of trying to fix the cause of the lack of food) or that they make people reliant on the charity. With the latter I mean that if they simply give out money with no expectation of paying the money back, it could cause the recipient to simply waste the money (observe the lavish lifestyles of various African leaders, while their countries receive millions in aid). If they are expected to pay the money back, they will make better use of that money.
You can find faults with every charity or way to help, but at some point it's time to stop being Eeyore with a keyboard and do something. Kiva has some arguable points, but you're doing way more good than harm by participating.
See, Janne, there you go again attributing ill motives to those merely asking a question. Did anyone walk in with “dollar signs in their eyes?” No, they asked a question based on the definition of the word “loan,” which is the lending of money for interest. Did anyone say they were disappointed? No. Did anyone say they were not going to participate because there was no return? No. You ascribe greed where none exists. In other words, you are calling anyone who asked the question greedy, which is arrogant and rude.
Actually, no. Neither the article, nor the links to Kiva's site that were included in the article indicated one way or another that Kiva donors would or would not receive a return on their money. And again, nobody was disappointed when they learned there was no return (at least I wasn't). What IS disappointing is how you immediately ASSUME people were disappointed and started pointing fingers without any reason at all. That's my problem with your attitude.
There, fixed that for you. Now instead of wagging your finger at those who don’t measure up to your standards, I have a better use for that finger: place it over the text you are reading so that you actually read the words instead of reading your own moral judgment into them.
Janne, you said "we are basically talking about charity here".
This confuses me. This is charity -> "Here's $30, use it to help improve your life, don't worry about not paying me back".
This is a loan -> "Here's $30, use it to help improve your life. I expect payment with 32% interest in (x) months/years". I know that lenders don't get any interest, but you bet your buttons the people making the money to and collecting payment from the "benefactors" do.
Maybe it's just me, but I consider charity and loans to be very different concepts. Perhaps you truly contribute to Kiva with charitable motives. If that's the case, I commend you. According to the wikipedia article, "As the entrepreneurs repay their loans, the Field Partners remit funds back to Kiva. As the loan is repaid, the Kiva lenders can withdraw their principal or re-loan it to another entrepreneur." This is not charity. This is a loan.
Maybe Kiva is great, but articles titled "Join Kiva and let's all help the world today" and calling this charity seem to be misleading. Sure, loans can help people. Heck, student loans are helping me get through grad school. But my bank gave me that loan because they will eventually profit from it, not because they were feeling charitable when I needed it. So maybe Kiva is great, but it is NOT a charity.
Happy Holidays.
Funny, you lend a US family at 31%, you are a damn payday loan shark and the scum of the Earth.
You lend a Dominican family at 31%, you are a Nobel Peace prize winner.
*Not help in the traditional sense, but an example of good intentions having unforeseen consequences.
I'm still a little wary of Kiva, mostly because of the gap between the presentation and what seems to me to be the reality (not because of the reality itself, but because of the gap), but it's certainly a mistake to look at an international effort too simplistically and without realizing the difference between your local reality and others' local reality.
I found this article to be a poignant example. Chicago's problems are outside the scope of what Kiva is trying to do. As with all charitable endeavours, choice rules the day. Some people have a thing for animals, some for children, some for local efforts, and some for Kiva-type things. It's the aggregate effort that matters. As such, criticizing one's choice amounts to criticizing one's taste in music or food.
I'd see the money as gone, as I'd reloan then when returned, but the risk seems weird. Why can't they charge a small interest to pay thoose loaners, if the 2% not getting their money back?
---
Myself my charity is mostly buying Fair Trade products, a mentality that is very similar to this.
With any luck Israel will be in fucking the place up and Talal will have a booming business.
joined and donated. thanks for the info - it's a great idea. Too many people need to immigrate to prove they are hard workers. This is a brilliant way to share some wealth. Thanks, Ken.
We can do without the politics, thank you.
As for these organizations, they are not charities. Charities don't expect to be paid back. These loans are paid back for the most part.
I would like to qualify one thing though. We have had a group of people on Appleinsider doing this through Kiva for years. It's not new.
But there is something that people don't understand about where their loans are going. It looks as though you are loaning money to an individual, but you are not. All the money that comes in, even though it's specified to go to an individual of your choice, actually goes into a pool of money. That money is lent out by the organization.
The individuals whose names and stories that appear in the lists have already been lent money. You aren't actually giving it to them, though it seems that way.
It's deceptive, but as someone here already said about these organizations, it's set up to tug at your heartstrings. People want a face and a story, or they won't lend.
It's the same thing when you give money to organizations that help children around the world. You aren't actually giving to the child you think you are, you're just adding money to the pool. The letters aren't specifically correct because you haven't given that child money directly.
There is a significant difference: you assume the risk of a specific loan. You can also be assured that the whole money goes to the borrower and nothing is used for administrative expenses. Early on, it used to be that you loaned to an entrepreneur and had to wait 2 weeks or more for him (or her) to get the money. Then the MFI and Kiva had to send money back and forth on a monthly basis. This is very costly, was plagued by delays (lack of a working banking system) and essentially does nothing for the lender or the borrower. You also had to wait for the duration of the whole loan to get repaid. So now they pre-loan the money based on projections of past lending and repayments trickle in on a monthly basis. Last I checked, no loan had ever gone unfunded... and if, I'm sure the MFI could find the $200 somewhere else. Kiva just lowers the average cost of capital, it's not the sole source of funding for any MFI.
Do you think a microcredit loan given in Chicago would have the same potential to help as, say, a microcredit loan given to a part of the world that has less surrounding infrastructure?
This is a genuine question. I, personally, have lost heart with local charitable work, as whatever I have done just doesn't seem to make a difference. Part of it is the ridiculous overhead that drags things down; the amount of paperwork required to document things stateside can be crippling. Part of it is that very few of the people I ever worked with were interested in anything other than a handout. Very depressing.
So if I can give a few bucks to someone who might actually use the money as a lever to a better life instead of a quick fix for the day, I will. I'd be quite happy to do so locally, but I have yet to find a way to do so.
I thought it was a bit funny at least. Sigh. I did put my 25USD to a South-Lebanese construction company. I find it hard to distinguish between who "deserves it the most".
If what you say is correct that I am just putting money into a pool (for Lebanon or the KIVA), then I guess it doesn't matter.
That's not completely true.
In Denmark there's the "save a child" organisation, where you donate a monthly amount to become "father" for a specific child, not one you choose, but it's one child.
This child will by the Danish organisation be asked to send you letters about their life, ie. doing well in school, getting food etc.
It has become public however that each child has several "fathers", since the amount is not enough from a single "father", and that the organisation has been trying to give the impression, implicetly, that you were the only one.
They explained that this was to, as you call it, pull your heart strings.
However, you're in contact with one specific child who writes you personally.
---
In writing this, I am wrong I realise. Of course they also just run a pool of money and then assign one child to send letters to you.
---
I don't have a problem with it, though KIVA seems a bit dodgy as they're out right lying, in that case.
The "save the child" from Denmark doesn't lie, they just assign a child to send you letters and take your money. They child is real and likely happy to get education, school, shelter and food.
There's a lot of interesting information there about the nitty gritty details of how Kiva actually does its thing; lots of praise as well as some airing of (moderately) dirty laundry.
Source: http://arstechnica.com/staff/2009/12/join-kiva-and-lets-all-help-the-world-today/
As far as I can see there are five types of Custom Page:
- Custom Pages Part 1 - Standard Custom Page based on an existing PageTemplate and customised in the DynamicData\CustomPages folder.
- Custom Pages Part 2 - A completely Custom Page again in the DynamicData\CustomPages folder.
- Custom Pages Part 3 - Standard ASP.Net Page with Dynamic Data features added to take advantage of the FieldTemplates.
- Custom Pages Part 4 - A DetailsView and a GridView using Validation Groups
- Custom Pages Part 5 - I18N? Internationalisation Custom Page
Creating a Standard Custom Page
To customise a standard PageTemplate all you do is create a folder with the name of the entity collection (e.g. Order entity would have the folder Orders).
Figure 1 – Copy PageTemplate to CustomPages folder
All you do then is copy the PageTemplate you want to modify to the CustomPages sub-folder.
Figure 2 - Copying Details.aspx to the CustomPages <TableName>folder
Customising the Standard Custom Page
For our first step we will wire up the LinqDataSource to the Orders table/entity set; this will cause the columns of the DetailsView to be updated, see below.
<Fields>
    <asp:BoundField ... />
    <asp:BoundField ... />
    <asp:BoundField ... />
    ...
    <asp:BoundField ... />
    <asp:BoundField ... />
</Fields>
Listing 1 – Columns collection from the Details.aspx
Now we edit this to take advantage of Dynamic Data:
<Fields>
    <asp:DynamicField ... />
    <asp:DynamicField ... />
    <asp:DynamicField ... />
    ...
    <asp:DynamicField ... />
    <asp:DynamicField ... />
</Fields>
Listing 2 – Updated columns collection from the Details.aspx
Here's what I did: I searched and replaced BoundField with DynamicField and removed all other unneeded properties, since these should come from the column's metadata (I used a regular expression to remove the extra attributes; the expression was HeaderText=\".*\" SortExpression=\".*\", replaced with an empty string). Then I rearranged the order and replaced the columns I didn't want displayed with the columns I did (i.e. ShipVia with Shipper, CustomerID with Customer, etc.). You can find the names of the EntitySet and EntityRef properties in the designer.cs/designer.vb file under the dbml file of your LINQ to SQL classes; just drop the leading underscore (you will see the actual property for each later on, but this trick always gets me what I want).
private EntitySet<Order_Detail> _Order_Details; private EntityRef<Customer> _Customer; private EntityRef<Employee> _Employee; private EntityRef<Shipper> _Shipper;
Listing 3 – from Northwind.designer.cs file
If we run the Dynamic Data website now we get what looks similar to the default Details.aspx page, but missing the Edit and Delete links.
Now what we need is to add the Edit and Delete functionality back.
<asp:TemplateField>
    <ItemTemplate>
        ...
        <asp:LinkButton ... OnClientClick='return confirm("Are you sure you want to delete this item?");' Text="Delete" />
    </ItemTemplate>
</asp:TemplateField>
Listing 4 – copied from the original List.aspx
Here I’ve copied and pasted the TemplateField from the original List.aspx page to get the same functionality for Edit, Details and Delete.
Adding Master-Details page functionality
Now we add a GridView and a LinqDataSource to the page for the Order_Details (all I did was copy the ones from the List.aspx PageTemplate and then edit them).
<asp:GridView ...>
    <Columns>
        ...
        <asp:HyperLink ... NavigateUrl='<%# table.GetActionPath(PageAction.Details, GetDataItem()) %>' Text="Details" />
        ...
    </Columns>
</asp:GridView>
<asp:LinqDataSource ... />
Listing 5 – Extra GridView and LinqDataSource which I copied from the List.aspx PageTemplate
Linking the DetailsView and the new GridView together is done when customising the LinqDataSource: switch to design view of the Details.aspx page, find the new LinqDataSource and click on its tasks button:
Figure 3 – Configuring the LinqDataSource
Follow the wizard through and select Order_Details as the source table and then click on the Where button. Then configure the where expression to limit the list to the record in the DetailsView.
Figure 4 - Configure the where expression
Say yes to updating the GridView's columns when asked. Then edit the GridView's columns collection the same way we did the DetailsView's fields collection: edit the columns so that any EntitySets/EntityRefs are shown instead of their foreign keys.
Now if we run the app and choose Orders and then Details we will see something like this:
Figure 5 – Output from our custom Details.aspx page
We need to add a new class-level variable:
protected MetaTable table1;
Then the Page_Init must be updated:
protected void Page_Init(object sender, EventArgs e)
{
    DynamicDataManager1.RegisterControl(DetailsView1, true);
    DynamicDataManager1.RegisterControl(GridView1, false);
}
Listing 6 – Updated Page_Init
Also in the Page_Load event handler add the following line right after one for that default table so that its metadata for the second table can be referenced:
table1 = GridDataSource.GetTable();
and then in the NavigateUrl declaration change table to table1
We need to put the edit capability back into the GridView; to do this, all we have to do is copy the columns from the List.aspx PageTemplate and add them to the beginning of the Columns collection:
<asp:TemplateField>
    <ItemTemplate>
        <asp:HyperLink ... />
        ...
    </ItemTemplate>
</asp:TemplateField>
Listing 6 – Adding the Edit, Delete and Details functionality
As you can see from running the app again the Edit, Delete and Details functionality is back.
Figure 6 – With Edit, Delete and Details functionality
And we finish off by adding paging to the DetailsView
<asp:DetailsView ...>
    ...
    <PagerSettings Mode="NextPreviousFirstLast" />
Listing 7 – Adding paging to the DetailsView
I think that will do for this part of the series, we can add other features to the page in the next edition.
28 comments:
Thanks :-)
Thanks :-)
Stephen,
i did it the same way and i get a error message.
error CS0103: The name 'table1' does not exist in the current context.
its comming up when dynamic data tries to open the list.aspx page template.
Whats wrong?
It sounds like an error with your project: have you created a LINQ to SQL DD website and then added an ADO.NET model, or the other way around?
Steve :D
p.s. try posting your problem here and give the whole error page as it helps a lot with diagnosing issues.
Hi Steve,
I'm having a problem when I copy a (ListDetails) template into a custom page folder (as per your instructions). The errors say there's ambiguity because it's defined more than once... I tried changing the partial class name, which works, but because it is autogenerated it always reverts back.
Any help would be appreciated.
Cheers.
Mel
Hi Mel, can send me the the error as in copy an paste the error into an e-mail and I'll have a look.
Steve
steve@notaclue.net
Steve, what was the answer to Mels problem above, I get the same thing. Could it be because I am referencing a DataContext in an extrernal dll/namespace?
Thanks
Tom
See this thread on the forum
Steve :D
I've followed this example and I've tried to add another datagrid to show the client data, I've added ClientID to DetailsView DataKeyNames, and I've added the where clause to the LinqDataSource of this new gridview:
asp:ControlParameter ControlID="DetailsView1" Name="CustomerID" PropertyName="SelectedValue" Type="String"
I've also added the ClientID column to detailsview, but it doesn't work.
It seems as if one could only bind the master details gridview to the master detailsview by using it's primary key.
Any ideas on how to solve this?
Hi Niner, if you have multiple child grids to show have a look at this article here •An Advanced FieldTemplate with a GridView this basically uses a field template that is a gridview and allows you to show children relationships in a gridview. ans Also see A variation of Part 1 with the Details and SubGrid in Tabs
Hope this is some use.
Steve :D
Would it be possible to have 'insert' functionality in the gridview?
Hi Niner, have a look at Dynamic/Templated Grid with Insert (Using ListView)
Steve :D
How can I make a custompage which will only show a few fields from five different tables?
I would create a view (if you don't need it to be updateable)
You can make a view updateable just give it a primary key and setup some SPROCS to do the Insert, Update and Delete.
Steve :D
Steve,
As has already been stated, you are a genius. Dynamic Data websites have a lot of potential for me but I am struggling with some of the fundamentals of custom pages. Specifically, right now, when I see that the List.aspx generated for a table provides a "View tables" hyperlink for the foreign key attributes, and then automatically provides page parameters in the hyperlink (?ID=[foreign key value]) that post the foreign key back to the table that uses it as a primary key, I am having a hard time understanding
1. why/how these additional entries are generated in the gridview
2. why the foreign key is automatically used when passed to the related table and how the related table control knows how to use that foreign key to filter the displayed results.
Hi billgrowjr,
1. they are generated by the foreign key relationship columns that LINQ to SQL and Entity Framework create; the actual foreign key columns are always hidden,
2. the foreign key values are automatically picked up by the relevant filter and the filter is applied.
What version of DD are you using?
Steve :)
Hey there, I am just following along on your example there and I have a few questions:
I have a table (PurchaseOrders) and it has a one to many relationship with another table (LineItems)
I am using the scaffolding TRUE option, so visual studio generates all of the pages for me.
I am trying to get the PurchaseOrder Insert page to allow me to create a new PurchseOrder as well as add multiple LineItems to that PurchaseOrder and then insert into the database (a SQL server 2008 backend) at once.
Can I customise one of the dynamic pages to do this?
Yes you could, and I have a field template that does just that, see it here:
Steve
Great. Thanks, I will read over the article now, cheers.
Will there be many differences between the dynamic-data-field template version you have listed, and the visual studio 2010 dynamic linq to sql asp.net 4.0 version I am using?
Yes there are a number of differences between DD1 and DD4 but none are killers just look at the Page_Init in both and you will quickly see them.
Steve
For all of those who copy a page from the PageTemplates folder to the CustomPages folder and get the "table already exists some place else" error: if you make the CustomPage folder plural and then attach that name to the end of the namespace, that should make everything work as the documentation describes. It worked for me after much frustration and back and forth over the months.
Thanks
A good point but that is specific to Web Application Project not Website project.
Steve
Hi Steve thanks for the post!
One question. How do I create a custom page for derived types?
I am able to create a custom page for my base type, but none of the pages for my derived types work. They all go to the standard page templates.
I have made sure that the folder names match the table names for the derived types to no avail.
Thanks
Hi Steve, thanks for the post!
I am able to create custom pages for a base type, but not for any of its derived types. I have made sure that the custom page folders match the names of the derived types database tables to no avail.
Is this a limitation of Dynamic Data?
Thank you.
Not sure what you mean by derived type, if you want you can just e-mail me directly, my address is in the top right of this site.
Steve
Hi Steve, Good Day!
I have been trying to follow your post above, but due to the difference between DetailsView and FormView in DD4 I have not been able to progress. Could you please suggest a way of doing the parent-child page functionality (as in your post above) in DD4? Better still if it uses tabs for multiple child tables :). THANKS
Hi Faizan, it would be real hard work getting that to work in DD now :) I have a new project coming out soon that has ALL my bits included in a new shiny Project Template you will be able to get it all then.
Steve
Create a function generateNumbers(num) that takes in a positive number as argument and returns a list of number from 0 to that number inclusive. Note: The function range(5) will return a list of number [0, 1, 2, 3, 4].
Examples
>>> generateNumber(1)
[0, 1]
>>> generateNumber(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> generateNumber(0)
[0]
i dont know how to do this. i tried;
def num(num):
    for x in range(num):
        lis = []
        lis.append(x)
        y = x + 1
        lis.append(y)
        return lis
but it returns the following:

>>> num(10)
[0, 1]
previous attempts returned some weird results...
anyway, can you please advise me on this, or give me hints so i can advance my knowledge? would be much appreciated.
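The attempt above returns [0, 1] because `lis` is re-created on every pass and the `return` inside the loop exits on the first iteration. Since `range(num)` stops one short of `num`, one sketch of a fix (using a hypothetical `generate_numbers` name) is simply to extend the stop value by one:

```python
def generate_numbers(num):
    """Return the list of integers from 0 to num inclusive."""
    # range() stops one before its argument, so pass num + 1
    return list(range(num + 1))

print(generate_numbers(1))   # [0, 1]
print(generate_numbers(10))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(generate_numbers(0))   # [0]
```

Alternatively, keep your loop but create `lis` once before the loop and move the `return` after it.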
Nyc radial heatmap¶
Most examples work equivalently across multiple plotting backends; this example is also available for:
import numpy as np
import pandas as pd
import holoviews as hv

hv.extension("matplotlib")
Declaring data¶
NYC Taxi Data¶
Let's dive into a concrete example, namely the New York - Taxi Data (For-Hire Vehicle (“FHV”) records). The following data contains hourly pickup counts for the entire year of 2016.
Considerations: Thinking about taxi pickup counts, we might expect higher taxi usage during business hours. In addition, public holidays should be clearly distinguishable from regular business days. Furthermore, we might expect high taxi pickup counts during Friday and Saturday nights.
Design: In order to model the above ideas, we decide to assign days with an hourly split to the radial segments and week of year to the annulars. This will allow us to detect daily/hourly periodicity and weekly trends. To get more familiar with the mapping of segments and annulars, take a look at the following radial heatmap:
# load example data
df_nyc = pd.read_csv("../../../assets/nyc_taxi.csv.gz", parse_dates=["Pickup_date"])

# create relevant time columns
df_nyc["Day & Hour"] = df_nyc["Pickup_date"].dt.strftime("%A %H:00")
df_nyc["Week of Year"] = df_nyc["Pickup_date"].dt.strftime("Week %W")
df_nyc["Date"] = df_nyc["Pickup_date"].dt.strftime("%Y-%m-%d")

heatmap = hv.HeatMap(df_nyc, ["Day & Hour", "Week of Year"], ["Pickup_Count", "Date"])
At first glance: First, let's take a closer look at the mentioned segments and annulars. Segments correspond to hours of a given day whereas annulars represent entire weeks. If you use the hover tool, you will quickly get an idea of how segments and annulars are organized. Color decodes the pickup values with blue being low and red being high.
Plot improvements: The above plot clearly shows systematic patterns; however, the default plot options are somewhat disadvantageous. Therefore, before we dive into the results, let's increase the readability of the given plot:
- Remove annular ticks: The information about week of year is not very important. Therefore, we hide it via
yticks=None.
- Custom segment ticks: Right now, segment labels are given via day and hour. We don't need hourly information and we want every day to be labeled. We can use a tuple here which will be passed to
xticks=("Friday", ..., "Thursday")
- Add segment markers: Moreover, we want to aid the viewer in distingushing each day more clearly. Hence, we can provide marker lines via
xmarks=7.
- Rotate heatmap: The week starts with Monday and ends with Sunday. Accordingly, we want to rotate the plot to have Sunday and Monday be at the top. This can be done via
start_angle=np.pi*19/14. The default order is defined by the global sort order which is present in the data. The default starting angle is at 12 o'clock.
Let's see the result of these modifications:
heatmap.opts(
    radial=True, fig_size=300, yticks=None, xmarks=7, ymarks=3,
    start_angle=np.pi*19/14,
    xticks=("Friday", "Saturday", "Sunday", "Monday",
            "Tuesday", "Wednesday", "Thursday"))
After tweaking the plot defaults, we're comfortable with the given visualization and can focus on the story the plot tells us.
There are many interesting findings in this visualization:
- Taxi pickup counts are high between 7-9am and 5-10pm during weekdays, which matches business hours as expected. In contrast, during weekends, there is not much going on until 11am.
- Friday and Saturday nights clearly stand out with the highest pickup densities, as expected.
- Public holidays can be easily identified. For example, taxi pickup counts are comparatively low around Christmas and Thanksgiving.
- Weather phenomena also influence taxi service. There is a very dark blue stripe at the beginning of the year starting on Saturday the 23rd and lasting until Sunday the 24th. Interestingly, this was one of the biggest blizzards in the history of NYC.
Engage your audience or class in real time. Involve them to contribute to your presentations with their smartphones and show the results live.
Add a slide "Poll" to your presentation.
It is super
Meh
I couldn't care less
Tell me why
Go to deckdeckgo.com/poll and use the code 0
Awaiting first votes
Ain't nothin' but a heartache
This template does not currently save the results of the voting. Each time you refresh or launch your presentation, the poll starts again.
If you need this capability, let us know with a new feature request in our GitHub issue tracker.
This template could be added to your presentation using the following methods.
If you are using our Starter Kit, no need to worry about this, this template is included, therefore you could skip the "Installation" chapter.
It's recommended to use unpkg-poll
The Stencil documentation provides examples of framework integration for Angular, React, Vue and Ember.
That being said, commonly, you might either
import or
load it:
import '@deckdeckgo/slide-poll';
import { defineCustomElements as deckDeckGoSlideElement } from '@deckdeckgo/slide-poll/dist/loader';
deckDeckGoSlideElement();
The "Poll" slide's Web Component could be integrated using the tag
<deckgo-slide-poll/>.
<deckgo-slide-poll>
  <h1 slot="question">Do you like my presentation so far?</h1>
  <p slot="answer-1">It is super</p>
  <p slot="answer-2">Meh</p>
  <p slot="answer-3">I couldn't care less</p>
  <p slot="answer-4">Tell me why</p>
  <p slot="answer-5">Ain't nothin' but a heartache</p>
  <p slot="how-to">Go to <a href="">deckdeckgo.com/poll</a> and use the code {0}</p>
  <p slot="awaiting-votes">Awaiting first votes</p>
</deckgo-slide-poll>
The slots
question and at least one
answer should be provided. Answer slots have to be provided as
answer-x where
x is a number bigger than 0.
The slots
how-to and
awaiting-votes are optional; still, it's probably best for your audience to provide them.
Note also that if you provide a string
0 in the content of your slot
how-to, the information will be automatically converted to the real key of your poll (the key your audience could use to reach it and vote).
This component offers the following options which could be set using attributes:
The following theming options will affect this component if set on its host or parent.
Moreover, this component is using the Web Components "QR Code" and "Bar chart". Their respective CSS4 variables could be applied too.
The slide "Poll" offers some extra methods.
If an update is not already in progress, updates the answers and chart of the poll.
const slide = deck.getElementsByTagName('deckgo-slide-poll')[0];
await slide.update();
Test if the poll has been at least answered once by a member of your audience.
const slide = deck.getElementsByTagName('deckgo-slide-poll')[0];
const answered = await slide.isAnswered(); // resolves a boolean
- Issued:
- 2021-10-12
- Updated:
- 2021-10-12
RHBA-2021:3682 - Bug Fix Advisory
Synopsis
OpenShift Container Platform 4.8.14 bug fix update
Type/Severity
Bug Fix Advisory
Topic
Red Hat OpenShift Container Platform release 4.8.14 is now available.
(For x86_64 architecture)
$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.8.14-x86_64
The image digest is sha256:bf48faa639523b73131ec7c91637d5c94d33a4afe09ac8bdad672862f5e86ccb
(For s390x architecture)
$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.8.14-s390x
The image digest is sha256:76a5261502d6a420df99a55843a157c354fd12b816d8ee858594c4e5f44d6816
(For ppc64le architecture)
$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.8.14-ppc64le
The image digest is sha256:1e18b0b39591d89f33f868fb1179a65699246625110e1415a419a38579c12a18
- BZ - 1965376 - Wrong interpolation in environment values in the build pod
- BZ - 1967369 - pkgman-to-bundle will exit with flag "--build-cmd"
- BZ - 1974421 - [Assisted-4.8] [Staging] Disable / Ignore network latency and packet loss validations, in case Automatic role selected
- BZ - 1977473 - Assisted service on IPv6 cluster installed with proxy: the assisted-service container doesn't start " network is unreachable"
- BZ - 1982001 - [4.8] Bootimage bump tracker
- BZ - 1982002 - [4.8.z] On a Azure IPI installation MCO fails to create new nodes
- BZ - 1986708 - Single stack external gateway makes the pod not starting with dual stack clusters
- BZ - 1994624 - [4.8.z backport] On an IPv6 single stack cluster traffic between master nodes is sent via default gw instead of local subnet
- BZ - 1996739 - High ICNI2 application pod creation times
- BZ - 1998103 - The removed ingresscontrollers should not be counted in ingress_controller_conditions metrics
- BZ - 1999092 - [4.8] Should not allow upgrades to 4.9 without admin acknowledgement that apis are being removed
- BZ - 1999531 - Keepalived fails with Liveness probe failed: command timed out
- BZ - 1999717 - Object Service tab should not be part of OCP Console for ODF Managed Services
- BZ - 2000696 - [4.8] RHCOS live ISO can fail to boot in UEFI mode; drops to grub shell
- BZ - 2000958 - The kubeletconfig controller has wrong assumption regarding the number of kubelet configs
- BZ - 2001244 - Enforce OpenShift's defined kubelet version skew policies
- BZ - 2002357 - Missing ZTP ArgoCD Container Image
- BZ - 2003199 - CRI-O leaks some children PIDs
- BZ - 2004122 - OVS logging in must gather is missing previous logging levels
- BZ - 2004336 - [4.8z] OVN CNI should ensure host veths are removed
- BZ - 2004677 - [4.8] Boot option recovery menu prevents image boot
- BZ - 2004716 - [4.8.z] Inexplicably slow kubelet on bootstrap makes installation fail
- BZ - 2005357 - [4.8z] Pod creation failed due to mismatched pod IP address in CNI and OVN
- BZ - 2005464 - [4.8z] ovn-kube may never attempt to retry a pod creation
- BZ - 2005480 - [4.8z] [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
- BZ - 2006963 - [4.8] OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
- BZ - 2007090 - [4.8] Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
- BZ - 2007666 - crio's selinux module has performance improvements when compiled with golang 1.16
- BZ - 2008414 - authentication errors with "square/go-jose: error in cryptographic primitive" are observed in the CI
- BZ - 2008589 - Slow OVN Recovery on SNO.
import "github.com/jhoonb/archivex"
ArchiveWriteFunc is the closure used by an archive's AddAll method to actually put a file into an archive Note that for directory entries, this func will be called with a nil 'file' param
type Archivex interface { Create(name string) error CreateWriter(name string, w io.Writer) error Add(name string, file io.Reader, info os.FileInfo) error AddAll(dir string, includeCurrentFolder bool) error Close() error }
interface
type TarFile struct { Writer *tar.Writer Name string GzWriter *gzip.Writer Compressed bool // contains filtered or unexported fields }
TarFile implement *tar.Writer
Add add byte in archive tar
AddAll adds all files from dir in archive. Tar does not support directories
Close the file Tar
Create new Tar file
Create a new Tar and write it to a given writer
ZipFile implement *zip.Writer
Add file reader in archive zip
AddAll adds all files from dir in archive, recursively. Directories receive a zero-size entry in the archive, with a trailing slash in the header name, and no compression
Close close the zip file
Create new file zip
Create a new ZIP and write it to a given writer
Package archivex imports 12 packages (graph) and is imported by 29 packages. Updated 2018-07-19. Refresh now. Tools for package owners.
A Quick Guide to Iterating a Map in Groovy
Last modified: October 26, 2019
1. Introduction
In this short tutorial, we'll look at ways to iterate over a map in Groovy using standard language features like each, eachWithIndex, and a for-in loop.
2. The each Method
Let's imagine we have the following map:
def map = [
    'FF0000' : 'Red',
    '00FF00' : 'Lime',
    '0000FF' : 'Blue',
    'FFFF00' : 'Yellow'
]
We can iterate over the map by providing the each method with a simple closure:
map.each { println "Hex Code: $it.key = Color Name: $it.value" }
We can also improve the readability a bit by giving a name to the entry variable:
map.each { entry -> println "Hex Code: $entry.key = Color Name: $entry.value" }
Or, if we'd rather address the key and value separately, we can list them separately in our closure:
map.each { key, val -> println "Hex Code: $key = Color Name $val" }
In Groovy, maps created with the literal notation are ordered. We can expect our output to be in the same order as we defined in our original map.
3. The eachWithIndex Method
Sometimes we want to know the index while we're iterating.
For example, let's say we want to indent every other row in our map. To do that in Groovy, we'll use the eachWithIndex method with entry and index variables:
map.eachWithIndex { entry, index ->
    def indent = ((index == 0 || index % 2 == 0) ? "   " : "")
    println "$indent$index Hex Code: $entry.key = Color Name: $entry.value"
}
As with the each method, we can choose to use the key and value variables in our closure instead of the entry:
map.eachWithIndex { key, val, index ->
    def indent = ((index == 0 || index % 2 == 0) ? "   " : "")
    println "$indent$index Hex Code: $key = Color Name: $val"
}
4. Using a For-in Loop
On the other hand, if our use case lends itself better to imperative programming, we can also use a for-in statement to iterate over our map:
for (entry in map) { println "Hex Code: $entry.key = Color Name: $entry.value" }
5. Conclusion
In this short tutorial, we learned how to iterate a map using Groovy's each and eachWithIndex methods and a for-in loop.
The example code is available over on GitHub.
Hello Ewan, I was testing out the current xen-unstable on x86 and found that with automake 1.9 I needed the following patch to have the generated Makefile work.
Otherwise I was getting variations on

BR_URL = ifdef BR_SNAPSHOT else BR_URL ?= endif

Which meant that BR_URL was always getting ""

Signed-off-by: Tony Breeds <[EMAIL PROTECTED]>
---
 Makefile.am | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -r abee5c6b930d tools/xm-test/ramdisk/Makefile.am
--- a/tools/xm-test/ramdisk/Makefile.am Wed Oct 25 15:29:36 2006 +0100
+++ b/tools/xm-test/ramdisk/Makefile.am Thu Oct 26 14:27:54 2006 +1000
@@ -17,9 +17,9 @@ BR_ARCH ?= $(shell uname -m | sed -e s/i
 BR_ARCH ?= $(shell uname -m | sed -e s/i.86/i386/ -e 's/ppc\(64\)*/powerpc/')
 @[EMAIL PROTECTED] BR_SNAPSHOT
-BR_URL = [EMAIL PROTECTED]@
+BR_URL = @[EMAIL PROTECTED]
-BR_URL ?= [EMAIL PROTECTED]@
+BR_URL = @[EMAIL PROTECTED]
 BR_TAR = $(notdir $(BR_URL))

Yours Tony

linux.conf.au || Jan 15-20 2007
The Australian Linux Technical Conference!

_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
re — Regular expression operations¶

Both patterns and strings to be searched can be Unicode strings (str) as well as 8-bit strings (bytes). However, Unicode strings and 8-bit strings cannot be mixed: that is, you cannot match a Unicode string with a byte pattern or vice-versa; similarly, when asking for a substitution, the replacement string must be of the same type as both the pattern and the search string.
Regular expressions use the backslash character (
'\') to indicate special forms or to allow special characters to be used without invoking their special meaning. This collides with Python's usage of the same character for the same purpose in string literals. Also, please note that any invalid escape sequences in Python's usage of the backslash in string literals now generate a
DeprecationWarning
and in the future this will become a
SyntaxError. This behaviour
will happen even if it is a valid escape sequence for a regular expression.
The solution is to use Python's raw string notation for regular expression
patterns; backslashes are not handled in any special way in a string literal
prefixed with
'r'. So
r"\n" is a two-character string containing
'\' and
'n', while
"\n" is a one-character string containing a
newline. Usually patterns will be expressed in Python code using this raw
string notation.
It is important to note that most regular expression operations are available as module-level functions and methods on compiled regular expressions. The functions are shortcuts that don't require you to compile a regex object first, but miss some fine-tuning parameters.
Repetition operators or quantifiers (
*,
+,
?,
{m,n}, etc) cannot be
directly nested. This avoids ambiguity with the non-greedy modifier suffix
?, and with other modifiers in other implementations. To apply a second
repetition to an inner repetition, parentheses may be used. For example, the
expression
(?:a{6})* matches any multiple of six
'a' characters.
Greedy qualifiers match as much text as possible: if the RE
<.*> is matched against
'<a> b <c>', it will match the entire string, and not just
'<a>'. Adding
? after the qualifier makes it perform the match in a non-greedy or minimal fashion, so
<.*?> will match only
'<a>'.
*+,
++,
?+

Like the
*,
+, and
? quantifiers, those where
'+' is appended also match as many times as possible. However, unlike the true greedy quantifiers, these do not allow back-tracking when the expression following it fails to match. These are known as possessive quantifiers. For example,
a*a will match
'aaaa' because the
a* will match all 4
'a's, but, when the final
'a' is encountered, the expression is backtracked so that in the end the
a* ends up matching 3
'a's total, and the fourth
'a' is matched by the final
'a'. However, when
a*+a is used to match
'aaaa', the
a*+ will match all 4
'a', but when the final
'a' fails to find any more characters to match, the expression cannot be backtracked and will thus fail to match.
x*+,
x++ and
x?+ are equivalent to
(?>x*),
(?>x+) and
(?>x?) correspondingly.
New in version 3.11.
{m,n}

Causes the resulting RE to match from m to n repetitions of the preceding RE, attempting to match as many repetitions as possible. For example,
a{3,5} will match from 3 to 5
'a' characters. Omitting m specifies a lower bound of zero, and omitting n specifies an infinite upper bound. As an example,
a{4,}b will match
'aaaab' or a thousand
'a' characters followed by a
'b', but not
'aaab'. The comma may not be omitted or the modifier would be confused with the previously described form.

{m,n}?

Causes the resulting RE to match from m to n repetitions of the preceding RE, attempting to match as few repetitions as possible. This is the non-greedy version of the previous quantifier. For example, on the 6-character string
'aaaaaa',
a{3,5} will match 5
'a' characters, while
a{3,5}? will only match 3 characters.

{m,n}+

Causes the resulting RE to match from m to n repetitions of the preceding RE, attempting to match as many repetitions as possible without establishing any backtracking points. This is the possessive version of the quantifier above. For example, on the 6-character string
'aaaaaa',
a{3,5}+aa attempts to match 5
'a' characters, then, requiring 2 more
'a's, will need more characters than available and thus fail, while
a{3,5}aa will match with
a{3,5} capturing 5, then 4
'a's by backtracking and then the final 2
'a's are matched by the final
aa in the pattern.
x{m,n}+ is equivalent to
(?>x{m,n}).
New in version 3.11.
\

Either escapes special characters (permitting you to match characters like
'*',
'?', and so forth), or signals a special sequence; special sequences are discussed below.

[]

Used to indicate a set of characters. Ranges of characters can be indicated by giving two characters and separating them by a
'-', for example
[a-z] will match any lowercase ASCII letter,
[0-5][0-9] will match all the two-digits numbers from
00 to
59, and
[0-9A-Fa-f] will match any hexadecimal digit. If
'-' is escaped (e.g.
[a\-z]) or if it's placed as the first or last character (e.g.
[-a] or
[a-]), it will match a literal
'-'.
Special characters lose their special meaning inside sets. Characters that are not within a range can be matched by complementing the set: if the first character of the set is
'^', all the characters that are not in the set will be matched. For example,
[^5] will match any character except
'5'.
To match a literal
']' inside a set, precede it with a backslash, or place it as the first character.
Support of nested sets and set operations as in Unicode Technical Standard #18 might be added in the future. This would change the syntax, so to facilitate this change a
FutureWarning will be raised in ambiguous cases for the time being. That includes sets starting with a literal
'[' or containing literal character sequences
'--',
'&&',
'~~', and
'||'.
Changed in version 3.7:
FutureWarning is raised if a character set contains constructs that will change semantically in the future.
A|B, where A and B can be arbitrary REs, creates a regular expression that will match either A or B. An arbitrary number of REs can be separated by the
'|' in this way.

(?...)

This is an extension notation (a
'?' following a
'(' is not meaningful otherwise). The first character after the
'?' determines what the meaning and further syntax of the construct is. Extensions usually do not create a new group;
(?P<name>...) is the only exception to this rule. Following are the currently supported extensions.
(?aiLmsux)

(One or more letters from the set
'a',
'i',
'L',
'm',
's',
'u',
'x'.) The group matches the empty string; the letters set the corresponding flags:
re.A (ASCII-only matching),
re.I (ignore case),
re.L (locale dependent),
re.M (multi-line),
re.S (dot matches all),
re.U (Unicode matching), and
re.X (verbose), for the entire regular expression. (The flags are described in Module Contents.) This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the
re.compile() function. Flags should be used first in the expression string.
Changed in version 3.11: This construction can only be used at the start of the expression.
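As a small sketch (not from the original docs), the inline flag and the flag argument are interchangeable:

```python
import re

# (?i) at the start of the pattern sets IGNORECASE for the whole expression ...
print(bool(re.match(r'(?i)hello', 'HELLO world')))             # True
# ... exactly as if the flag had been passed to the function instead
print(bool(re.match(r'hello', 'HELLO world', re.IGNORECASE)))  # True
```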
(?:...)

A non-capturing version of regular parentheses. Matches whatever regular expression is inside the parentheses, but the substring matched by the group cannot be retrieved after performing a match or referenced later in the pattern.
(?aiLmsux-imsx:...)
(Zero or more letters from the set
'a',
'i',
'L',
'm',
's',
'u',
'x', optionally followed by
'i',
'm',
's',
'x'.) The letters set or remove the corresponding flags:
re.A(ASCII-only matching),
re.I(ignore case),
re.L(locale dependent),
re.M(multi-line),
re.S(dot matches all),
re.U(Unicode matching), and
re.X(verbose), for the part of the expression. (The flags are described in Module Contents.)
The letters
'a',
'L'and
'u' are mutually exclusive when used as inline flags, so they can't be combined or follow
'-'. When one of them appears in an inline group, it overrides the matching mode in the enclosing group. In Unicode patterns
(?a:...)switches to ASCII-only matching, and
(?u:...)switches to Unicode matching (default). In byte pattern
(?L:...)switches to locale depending matching, and
(?a:...)switches to ASCII-only matching (default). This override is only in effect for the narrow inline group, and the original matching mode is restored outside of the group.
New in version 3.6.
Changed in version 3.7: The letters
'a',
'L'and
'u'also can be used in a group.
(?>...)

Attempts to match
... as if it was a separate regular expression, and if successful, continues to match the rest of the pattern following it. If the subsequent pattern fails to match, the stack can only be unwound to a point before the
(?>...) because once exited, the expression, known as an atomic group, has discarded all stack points within itself.
New in version 3.11.

(?P<name>...)

Similar to regular parentheses, but the substring matched by the group is accessible via the symbolic group name name. Group names must be valid Python identifiers, and each group name must be defined only once within a regular expression.
Deprecated since version 3.11: Group names containing non-ASCII characters in bytes patterns.
(?P=name)
A backreference to a named group; it matches whatever text was matched by the earlier group named name.
A comment; the contents of the parentheses are simply ignored.
(?=...)

Matches if
... matches next, but doesn't consume any of the string. This is called a lookahead assertion. For example,
Isaac (?=Asimov) will match
'Isaac ' only if it's followed by
'Asimov'.

(?!...)

Matches if
... doesn't match next. This is a negative lookahead assertion. For example,
Isaac (?!Asimov) will match
'Isaac ' only if it's not followed by
'Asimov'.

(?<=...)

Matches if the current position in the string is preceded by a match for
... that ends at the current position. This is called a positive lookbehind assertion. The contained pattern must only match strings of some fixed length. For example:

>>> import re
>>> m = re.search(r'(?<=-)\w+', 'spam-egg')
>>> m.group(0)
'egg'

Changed in version 3.5: Added support for group references of fixed length.

(?<!...)

Matches if the current position in the string is not preceded by a match for
.... This is a negative lookbehind assertion. As with positive lookbehind assertions, the contained pattern must only match strings of some fixed length.

(?(id/name)yes-pattern|no-pattern)

Will try to match with yes-pattern if the group with given id or name exists, and with no-pattern if it doesn't. For example,
(<)?(\w+@\w+(?:\.\w+)+)(?(1)>|$) will match
'<user@host.com>' as well as
'user@host.com', but not
'<user@host.com' nor
'user@host.com>'.
Deprecated since version 3.11: Group id containing anything except ASCII digits.
The special sequences consist of
'\' and a character from the list below. If the ordinary character is not an ASCII digit or an ASCII letter, then the resulting RE will match the second character. For example,
\$ matches the character
'$'.
\A
Matches only at the start of the string.
\b
Matches the empty string, but only at the beginning or end of a word. A word is defined as a sequence of word characters. Note that formally,
\bis defined as the boundary between a
\wand a
\Wcharacter (or vice versa), or between
\wand the beginning/end of the string. This means that
r'\bfoo\b'matches
'foo',
'foo.',
'(foo)',
'bar foo baz'but not
'foobar'or
'foo3'.
By default Unicode alphanumerics are the ones used in Unicode patterns, but this can be changed by using the
ASCII flag. Word boundaries are determined by the current locale if the
LOCALE flag is used.

\B

Matches the empty string, but only when it is not at the beginning or end of a word.
\B is just the opposite of
\b, so word characters in Unicode patterns are Unicode alphanumerics or the underscore, although this can be changed by using the
ASCII flag. Word boundaries are determined by the current locale if the
LOCALE flag is used.
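The word-boundary behaviour of \b can be checked with a short sketch (assuming default Unicode matching):

```python
import re

text = 'bar foo baz foobar foo3 (foo)'

# \bfoo\b matches 'foo' only as a whole word: the occurrences inside
# 'foobar' and 'foo3' are skipped, while the parenthesised one is found
print(re.findall(r'\bfoo\b', text))  # ['foo', 'foo']
```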
\d
- For Unicode (str) patterns:
Matches any Unicode decimal digit (that is, any character in Unicode character category [Nd]). This includes
[0-9], and also many other digit characters. If the
ASCIIflag is used only
[0-9]is matched.
- For 8-bit (bytes) patterns:
Matches any decimal digit; this is equivalent to
[0-9].
\D
Matches any character which is not a decimal digit. This is the opposite of
\d. If the
ASCIIflag is used this becomes the equivalent of
[^0-9].
\s

- For Unicode (str) patterns:
Matches Unicode whitespace characters (which includes
[ \t\n\r\f\v], and also many other characters). If the
ASCII flag is used, only
[ \t\n\r\f\v] is matched.
- For 8-bit (bytes) patterns:
Matches characters considered whitespace in the ASCII character set; this is equivalent to
[ \t\n\r\f\v].
\S
Matches any character which is not a whitespace character. This is the opposite of
\s. If the
ASCIIflag is used this becomes the equivalent of
[^ \t\n\r\f\v].
\w
- For Unicode (str) patterns:
Matches Unicode word characters; this includes most characters that can be part of a word in any language, as well as numbers and the underscore. If the
ASCIIflag is used, only
[a-zA-Z0-9_]is matched.
- For 8-bit (bytes) patterns:
Matches characters considered alphanumeric in the ASCII character set; this is equivalent to
[a-zA-Z0-9_]. If the
LOCALEflag is used, matches characters considered alphanumeric in the current locale and the underscore.
\W
Matches any character which is not a word character. This is the opposite of
\w. If the
ASCIIflag is used this becomes the equivalent of
[^a-zA-Z0-9_]. If the
LOCALEflag is used, matches characters which are neither alphanumeric in the current locale nor the underscore.
\Z
Matches only at the end of the string.
Most of the standard escapes supported by Python string literals are also accepted by the regular expression parser:
\a \b \f \n \N \r \t \u \U \v \x \\
(Note that
\b is used to represent word boundaries, and means “backspace”
only inside character classes.)
'\u',
'\U', and
'\N' escape sequences are only recognized in Unicode
patterns. In bytes patterns they are errors. Unknown escapes of ASCII
letters are reserved for future use and treated as errors.
Changed in version 3.6: Unknown escapes consisting of
'\'and an ASCII letter now are errors.
Changed in version 3.8: The
'\N{name}' escape sequence has been added. As in string literals,
it expands to the named Unicode character (e.g.
'\N{EM DASH}').
Module Contents¶
The module defines several functions, constants, and an exception. Some of the functions are simplified versions of the full featured methods for compiled regular expressions. Most non-trivial applications always use the compiled form.
Changed in version 3.6: Flag constants are now instances of
RegexFlag, which is a subclass of
enum.IntFlag.
- re.compile(pattern, flags=0)¶
Compile a regular expression pattern into a regular expression object, which can be used for matching using its
match(),
search()and other methods, described below.
The expression’s behaviour can be modified by specifying a flags value. Values can be any of the following variables, combined using bitwise OR (the
| operator).

The sequence

prog = re.compile(pattern)
result = prog.match(string)

is equivalent to

result = re.match(pattern, string)

but using
re.compile() and saving the resulting regular expression object for reuse is more efficient when the expression will be used several times in a single program.

Note: The compiled versions of the most recent patterns passed to
re.compile() and the module-level matching functions are cached, so programs that use only a few regular expressions at a time needn't worry about compiling regular expressions.
- class re.RegexFlag¶
An
enum.IntFlagclass containing the regex options listed below.
New in version 3.11: - added to
__all__
- re.A¶
- re.ASCII¶
Make
\w,
\W,
\b,
\B,
\d,
\D,
\sand
\S perform ASCII-only matching instead of full Unicode matching. This is only meaningful for Unicode patterns, and is ignored for byte patterns. Corresponds to the inline flag
(?a).

- re.I¶
- re.IGNORECASE¶
Perform case-insensitive matching; expressions like
[A-Z] will also match lowercase letters. Full Unicode matching (such as
Ü matching
ü) also works unless the
re.ASCII flag is used to disable non-ASCII matches. The current locale does not change the effect of this flag unless the
re.LOCALE flag is also used. Corresponds to the inline flag
(?i).
Note that when the Unicode patterns
[a-z]or
[A-Z]are used in combination with the
IGNORECASEflag, they will match the 52 ASCII letters and 4 additional non-ASCII letters: ‘İ’ (U+0130, Latin capital letter I with dot above), ‘ı’ (U+0131, Latin small letter dotless i), ‘ſ’ (U+017F, Latin small letter long s) and ‘K’ (U+212A, Kelvin sign). If the
ASCIIflag is used, only letters ‘a’ to ‘z’ and ‘A’ to ‘Z’ are matched.
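The Kelvin-sign corner case mentioned above can be sketched as:

```python
import re

# under full Unicode case-insensitive matching, [a-z] also matches
# 'K' (U+212A, Kelvin sign), whose lowercase form is 'k'
print(bool(re.fullmatch(r'[a-z]', '\u212a', re.IGNORECASE)))             # True
# adding re.ASCII restricts the character class to the ASCII letters a-z
print(bool(re.fullmatch(r'[a-z]', '\u212a', re.IGNORECASE | re.ASCII)))  # False
```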
- re.L¶
- re.LOCALE¶
Make
\w,
\W,
\b,
\Band case-insensitive matching dependent on the current locale. This flag can be used only with bytes patterns. The use of this flag is discouraged as the locale mechanism is very unreliable, it only handles one “culture” at a time, and it only works with 8-bit locales. Unicode matching is already enabled by default in Python 3 for Unicode (str) patterns, and it is able to handle different locales/languages. Corresponds to the inline flag
(?L).
Changed in version 3.6:
re.LOCALEcan be used only with bytes patterns and is not compatible with
re.ASCII.
- re.M¶
- re.MULTILINE¶
When specified, the pattern character
'^' matches at the beginning of the string and at the beginning of each line (immediately following each newline); and the pattern character
'$' matches at the end of the string and at the end of each line (immediately preceding each newline). By default,
'^' matches only at the beginning of the string, and
'$' only at the end of the string and immediately before the newline (if any) at the end of the string. Corresponds to the inline flag
(?m).
- re.NOFLAG¶
Indicates no flag being applied, the value is
0. This flag may be used as a default value for a function keyword argument or as a base value that will be conditionally ORed with other flags. Example of use as a default value:
def myfunc(text, flag=re.NOFLAG):
    return re.match(text, flag)
New in version 3.11.
- re.S¶
- re.DOTALL¶
Make the
'.' special character match any character at all, including a newline; without this flag,
'.' will match anything except a newline. Corresponds to the inline flag
(?s).
- re.X¶
- re.VERBOSE¶
This flag allows you to write regular expressions that look nicer and are more readable by allowing you to visually separate logical sections of the pattern and add comments. Whitespace within the pattern is ignored, except when in a character class, or when preceded by an unescaped backslash, or within tokens like
*?,
(?: or
(?P<...>. When a line contains a
# that is not in a character class and is not preceded by an unescaped backslash, all characters from the leftmost such
# through the end of the line are ignored.
This means that the two following regular expression objects that match a decimal number are functionally equal:
a = re.compile(r"""\d +  # the integral part
                   \.    # the decimal point
                   \d *  # some fractional digits""", re.X)
b = re.compile(r"\d+\.\d*")
Corresponds to the inline flag
(?x).
- re.search(pattern, string, flags=0)¶
Scan through string looking for the first location where the regular expression pattern produces a match, and return a corresponding match object. Return
None if no position in the string matches the pattern; note that this is different from finding a zero-length match at some point in the string.

- re.match(pattern, string, flags=0)¶
If zero or more characters at the beginning of string match the regular expression pattern, return a corresponding match object. Return
None if the string does not match the pattern. Note that even in MULTILINE mode,
re.match() will only match at the beginning of the string and not at the beginning of each line; to locate a match anywhere in string, use
search() instead.

- re.fullmatch(pattern, string, flags=0)¶
If the whole string matches the regular expression pattern, return a corresponding match object. Return
Noneif the string does not match the pattern; note that this is different from a zero-length match.
New in version 3.4.
- re.split(pattern, string, maxsplit=0, flags=0)¶
Split string by the occurrences of pattern. If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits occur, and the remainder of the string is returned as the final element of the list.

>>> re.split(r'\W+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split(r'(\W+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']

If there are capturing groups in the separator and it matches at the start of the string, the result will start with an empty string. The same holds for the end of the string:

>>> re.split(r'(\W+)', '...words, words...')
['', '...', 'words', ', ', 'words', '...', '']
That way, separator components are always found at the same relative indices within the result list.
Empty matches for the pattern split the string only when not adjacent to a previous empty match.
>>> re.split(r'\b', 'Words, words, words.')
['', 'Words', ', ', 'words', ', ', 'words', '.']
>>> re.split(r'\W*', '...words...')
['', '', 'w', 'o', 'r', 'd', 's', '', '']
>>> re.split(r'(\W*)', '...words...')
['', '...', '', '', 'w', '', 'o', '', 'r', '', 'd', '', 's', '...', '', '', '']
Changed in version 3.1: Added the optional flags argument.
Changed in version 3.7: Added support of splitting on a pattern that could match an empty string.
- re.findall(pattern, string, flags=0)¶
Return all non-overlapping matches of pattern in string, as a list of strings or tuples. The string is scanned left-to-right, and matches are returned in the order found. Empty matches are included in the result.
The result depends on the number of capturing groups in the pattern. If there are no groups, return a list of strings matching the whole pattern. If there is exactly one group, return a list of strings matching that group. If multiple groups are present, return a list of tuples of strings matching the groups. Non-capturing groups do not affect the form of the result.
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest') ['foot', 'fell', 'fastest'] >>> re.findall(r'(\w+)=(\d+)', 'set width=20 and height=10') [('width', '20'), ('height', '10')]
Changed in version 3.7: Non-empty matches can now start just after a previous empty match.
- re.finditer(pattern, string, flags=0)¶
Return an iterator yielding match objects over all non-overlapping matches for the RE pattern in string. The string is scanned left-to-right, and matches are returned in the order found. Empty matches are included in the result.
Changed in version 3.7: Non-empty matches can now start just after a previous empty match.
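A quick illustration of the iterator behavior described above (this example is mine, not part of the quoted documentation):

```python
import re

# finditer yields match objects lazily, in the order the matches occur in
# the string; span() gives each match's (start, end) indices.
spans = [m.span() for m in re.finditer(r'\d+', 'a1bc22d333')]
print(spans)  # [(1, 2), (4, 6), (7, 10)]
```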
- re.sub(pattern, repl, string, count=0, flags=0)¶
Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern isn't found, string is returned unchanged. repl can be a string or a function. Unknown escapes of ASCII letters are reserved for future use and treated as errors. Other unknown escapes such as \& are left alone. pattern may be a string or a pattern object.
The optional argument count is the maximum number of pattern occurrences to be replaced; count must be a non-negative integer. If omitted or zero, all occurrences will be replaced. Empty matches for the pattern are replaced only when not adjacent to a previous empty match, so
sub('x*', '-', 'abxd') returns '-a-b--d-'.
Changed in version 3.1: Added the optional flags argument.
Changed in version 3.5: Unmatched groups are replaced with an empty string.
Changed in version 3.6: Unknown escapes in pattern consisting of '\' and an ASCII letter now are errors.
Changed in version 3.7: Unknown escapes in repl consisting of '\' and an ASCII letter now are errors.
Changed in version 3.7: Empty matches for the pattern are replaced when adjacent to a previous non-empty match.
Deprecated since version 3.11: Group id containing anything except ASCII digits. Group names containing non-ASCII characters in bytes replacement strings.
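As noted above, repl may also be a function; it is called for every non-overlapping match and must return the replacement string. This sketch mirrors the function-replacement example from the official documentation for sub():

```python
import re

def dashrepl(matchobj):
    # A single '-' becomes a space; a double '--' becomes a single dash.
    if matchobj.group(0) == '-':
        return ' '
    else:
        return '-'

print(re.sub('-{1,2}', dashrepl, 'pro----gram-files'))  # pro--gram files
```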
- re.escape(pattern)¶
Escape special characters in pattern. This is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it. For example:
>>> print(re.escape('https://www.python.org'))
https://www\.python\.org
>>> legal_chars = string.ascii_lowercase + string.digits + "!#$%&'*+-.^_`|~:" >>> print('[%s]+' % re.escape(legal_chars)) [abcdefghijklmnopqrstuvwxyz0123456789!\#\$%\&'\*\+\-\.\^_`\|\~:]+ >>> operators = ['+', '-', '*', '/', '**'] >>> print('|'.join(map(re.escape, sorted(operators, reverse=True)))) /|\-|\+|\*\*|\*
This function must not be used for the replacement string in sub() and subn(), only backslashes should be escaped. For example:
>>> digits_re = r'\d+'
>>> sample = '/usr/sbin/sendmail - 0 errors, 12 warnings'
>>> print(re.sub(digits_re, digits_re.replace('\\', r'\\'), sample))
/usr/sbin/sendmail - \d+ errors, \d+ warnings
Changed in version 3.3: The
'_'character is no longer escaped.
Changed in version 3.7: Only characters that can have special meaning in a regular expression are escaped. As a result, '!', '"', '%', "'", ',', '/', ':', ';', '<', '=', '>', '@', and "`" are no longer escaped.
Regular Expression Objects¶
Compiled regular expression objects support the following methods and attributes:
- Pattern.search(string[, pos[, endpos]])¶
Scan through string looking for the first location where this regular expression produces a match, and return a corresponding match object. Return None if no position in the string matches the pattern.
>>> pattern = re.compile("d")
>>> pattern.search("dog")     # Match at index 0
<re.Match object; span=(0, 1), match='d'>
>>> pattern.search("dog", 1)  # No match; search doesn't include the "d"
- Pattern.match(string[, pos[, endpos]])¶
If zero or more characters at the beginning of string match this regular expression, return a corresponding match object. Return None if the string does not match.
>>> pattern = re.compile("o")
>>> pattern.match("dog")      # No match as "o" is not at the start of "dog".
>>> pattern.match("dog", 1)   # Match as "o" is the 2nd character of "dog".
<re.Match object; span=(1, 2), match='o'>
If you want to locate a match anywhere in string, use
search()instead (see also search() vs. match()).
- Pattern.fullmatch(string[, pos[, endpos]])¶
If the whole string matches this regular expression, return a corresponding match object. Return None if the string does not match.
>>> pattern = re.compile("o[gh]")
>>> pattern.fullmatch("dog")      # No match as "o" is not at the start of "dog".
>>> pattern.fullmatch("ogre")     # No match as not the full string matches.
>>> pattern.fullmatch("doggie", 1, 3)   # Matches within given limits.
<re.Match object; span=(1, 3), match='og'>
New in version 3.4.
- Pattern.findall(string[, pos[, endpos]])¶
Similar to the
findall()function, using the compiled pattern, but also accepts optional pos and endpos parameters that limit the search region like for
search().
- Pattern.finditer(string[, pos[, endpos]])¶
Similar to the
finditer()function, using the compiled pattern, but also accepts optional pos and endpos parameters that limit the search region like for
search().
- Pattern.flags¶
The regex matching flags. This is a combination of the flags given to
compile(), any (?...) inline flags in the pattern, and implicit flags such as
UNICODE if the pattern is a Unicode string.
- Pattern.groupindex¶
A dictionary mapping any symbolic group names defined by
(?P<id>)to group numbers. The dictionary is empty if no symbolic groups were used in the pattern.
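A short illustration of groupindex (mine, not from the quoted documentation): each named group maps to its group number.

```python
import re

# Two named groups: 'first' is group 1, 'last' is group 2.
pattern = re.compile(r'(?P<first>\w+) (?P<last>\w+)')
print(dict(pattern.groupindex))  # {'first': 1, 'last': 2}
```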
Changed in version 3.7: Added support of
copy.copy() and
copy.deepcopy(). Compiled
regular expression objects are considered atomic.
Match Objects¶
Match objects always have a boolean value of True. Since match() and search() return None when there is no match, you can test whether there was a match with a simple if statement.
- Match.expand(template)¶
Return the string obtained by doing backslash substitution on the template string template, as done by the sub() method.
Changed in version 3.5: Unmatched groups are replaced with an empty string.
- Match.group([group1, ...])¶
Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument. Without arguments, group1 defaults to zero (the whole match is returned).
- Match.__getitem__(g)¶
This is identical to
m.group(g). This allows easier access to an individual group from a match:
>>> m = re.match(r"(\w+) (\w+)", "Isaac Newton, physicist") >>> m[0] # The entire match 'Isaac Newton' >>> m[1] # The first parenthesized subgroup. 'Isaac' >>> m[2] # The second parenthesized subgroup. 'Newton'
New in version 3.6.
- Match.groups(default=None)¶
Return a tuple containing all the subgroups of the match, from 1 up to however many groups are in the pattern. The default argument is used for groups that did not participate in the match; it defaults to None.
- Match.groupdict(default=None)¶
Return a dictionary containing all the named subgroups of the match, keyed by the subgroup name. The default argument is used for groups that did not participate in the match; it defaults to None.
- Match.start([group])¶
- Match.end([group])¶
Return the indices of the start and end of the substring matched by group; group defaults to zero (meaning the whole matched substring). Return -1 if group exists but did not contribute to the match.
- Match.span([group])¶
For a match m, return the 2-tuple (m.start(group), m.end(group)). Note that if group did not contribute to the match, this is (-1, -1). group defaults to zero, the entire match.
- Match.lastgroup¶
The name of the last matched capturing group, or
Noneif the group didn’t have a name, or if no group was matched at all.
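A short illustration of lastgroup (mine, not from the quoted documentation):

```python
import re

# The last group that participated in the match is named 'month'.
m = re.search(r'(?P<year>\d{4})-(?P<month>\d{2})', '2022-05')
print(m.lastgroup)   # month

# With unnamed groups, lastgroup is None even though group 2 matched.
m2 = re.search(r'(\d{4})-(\d{2})', '2022-05')
print(m2.lastgroup)  # None
```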
- Match.re¶
The regular expression object whose
match()or
search()method produced this match instance.
Changed in version 3.7: Added support of
copy.copy() and
copy.deepcopy(). Match objects
are considered atomic.
Regular Expression Examples¶
>>> pair = re.compile(r".*(.).*\1")
>>> pair.match("717ak")
<re.Match object; span=(0, 3), match='717'>
Regular expressions beginning with '^' can be used with search() to restrict the match to the beginning of the string:
>>> re.match("c", "abcdef")    # No match
>>> re.search("^c", "abcdef")  # No match
>>> re.search("^a", "abcdef")  # Match
<re.Match object; span=(0, 1), match='a'>
The :? pattern matches the colon after the last name, so that it does not occur in the result list. With a maxsplit of 4, we could separate the house number from the street name.
Finding all Adverbs¶
findall() matches all occurrences of a pattern, not just the first
one as
search() does. For example, if a writer wanted to
find all of the adverbs in some text, they might use
findall() in
the following manner:
>>> text = "He was carefully disguised but captured quickly by police."
>>> re.findall(r"\w+ly\b", text)
['carefully', 'quickly']
Finding all Adverbs and their Positions¶
If one wants more information about all matches of a pattern than the matched
text,
finditer() is useful as it provides match objects instead of strings. Continuing with the previous example, if
a writer wanted to find all of the adverbs and their positions in
some text, they would use
finditer() in the following manner:
>>> text = "He was carefully disguised but captured quickly by police."
>>> for m in re.finditer(r"\w+ly\b", text):
...     print('%02d-%02d: %s' % (m.start(), m.end(), m.group(0)))
07-16: carefully
40-47: quickly
Raw String Notation¶
Raw string notation (r"text") keeps regular expressions sane. Without it, every backslash ('\') in a regular expression would have to be prefixed with another one to escape it. The following lines of code are functionally identical:
>>> re.match(r"\W(.)\1\W", " ff ")
<re.Match object; span=(0, 4), match=' ff '>
>>> re.match("\\W(.)\\1\\W", " ff ")
<re.Match object; span=(0, 4), match=' ff '>
When one wants to match a literal backslash, it must be escaped in the regular
expression. With raw string notation, this means
r"\\". Without raw string
notation, one must use
"\\\\", making the following lines of code functionally identical:
>>> re.match(r"\\", r"\\")
<re.Match object; span=(0, 1), match='\\'>
>>> re.match("\\\\", r"\\")
<re.Match object; span=(0, 1), match='\\'>
Writing a Tokenizer¶
A tokenizer or scanner analyzes a string to categorize groups of characters. This is a useful first step in writing a compiler or interpreter. The text categories are specified with regular expressions. The technique is to combine those into a single master regular expression and to loop over successive matches:
from typing import NamedTuple
import re

class Token(NamedTuple):
    type: str
    value: str
    line: int
    column: int

def tokenize(code):
    keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
    token_specification = [
        ('NUMBER',   r'\d+(\.\d*)?'),  # Integer or decimal number
        ('ASSIGN',   r':='),           # Assignment operator
        ('END',      r';'),            # Statement terminator
        ('ID',       r'[A-Za-z]+'),    # Identifiers
        ('OP',       r'[+\-*/]'),      # Arithmetic operators
        ('NEWLINE',  r'\n'),           # Line endings
        ('SKIP',     r'[ \t]+'),       # Skip over spaces and tabs
        ('MISMATCH', r'.'),            # Any other character
    ]
    tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
    line_num = 1
    line_start = 0
    for mo in re.finditer(tok_regex, code):
        kind = mo.lastgroup
        value = mo.group()
        column = mo.start() - line_start
        if kind == 'NUMBER':
            value = float(value) if '.' in value else int(value)
        elif kind == 'ID' and value in keywords:
            kind = value
        elif kind == 'NEWLINE':
            line_start = mo.end()
            line_num += 1
            continue
        elif kind == 'SKIP':
            continue
        elif kind == 'MISMATCH':
            raise RuntimeError(f'{value!r} unexpected on line {line_num}')
        yield Token(kind, value, line_num, column)

statements = '''
    IF quantity THEN
        total := total + price * quantity;
        tax := price * 0.05;
    ENDIF;
'''

for token in tokenize(statements):
    print(token)
The tokenizer produces the following output:
Token(type='IF', value='IF', line=2, column=4) Token(type='ID', value='quantity', line=2, column=7) Token(type='THEN', value='THEN', line=2, column=16) Token(type='ID', value='total', line=3, column=8) Token(type='ASSIGN', value=':=', line=3, column=14) Token(type='ID', value='total', line=3, column=17) Token(type='OP', value='+', line=3, column=23) Token(type='ID', value='price', line=3, column=25) Token(type='OP', value='*', line=3, column=31) Token(type='ID', value='quantity', line=3, column=33) Token(type='END', value=';', line=3, column=41) Token(type='ID', value='tax', line=4, column=8) Token(type='ASSIGN', value=':=', line=4, column=12) Token(type='ID', value='price', line=4, column=15) Token(type='OP', value='*', line=4, column=21) Token(type='NUMBER', value=0.05, line=4, column=23) Token(type='END', value=';', line=4, column=27) Token(type='ENDIF', value='ENDIF', line=5, column=4) Token(type='END', value=';', line=5, column=9)
https://docs.python.org/3.11/library/re.html
I have been working with GatsbyJS for the past week. I am trying to set up my new blog. My old blog used to be on WordPress. But after working with Hugo for my recipe blog, I fell in love with static site generators. I learnt HTML/CSS using static HTML sites. My first website contained folders of HTML pages. I loved that approach. However, I outgrew it when I started journaling. Copying the HTML template every time and then replacing the latest writing was a pain. When I wanted a design refresh (which was often), it was painful. And that is how I discovered WordPress.
My initial foray with Static Site Generators (SSG) was Hugo. It had a steep learning curve for me, especially since I didn’t know template syntax in Golang. But I credit Hugo for spiking my interest in Golang, and I studied the language due to it (though you do not need to know Go to learn using Hugo).
However, the language I am most comfortable in is JavaScript. I was late to the React game (Angular is my first love). I started using it two months back. Once I got comfortable enough to code in React, I wanted to create my blog in Gatsby - static site, JavaScript, and React. I love it.
Problem Statement
Add a “no-js” class to the document head in a Gatsby site.
Why do it?
I have this terrible habit of checking websites by disabling JavaScript. Not because I want to find issues with the site but mostly to find the fresh ways people make their site. Some sites create wonderful fallbacks for their site, and it’s great to see the way they enhance the experience with JavaScript. And others make fantastic and stunning websites without JavaScript. With my Gatsby site, I wanted to see how my site will behave by disabling JavaScript.
Note - Do not disable JavaScript from your Developer Tools. That will not be an actual test of the site behavior as the service workers are still loaded. To altogether disable JavaScript, do it from the browser settings.
So I disable JavaScript and fire up my Gatsby site
gatsby build; gatsby serve. Everything looks good. It does display that “This app works best with JavaScript” that Gatsby automatically adds. But other than that, all is well. However, some JavaScript-only functionality will not work. e.g., I had a theme switcher. It works with CSS Variables, but the actual switching and remembering of the old settings are JavaScript. So the switcher is there, but the user clicks on it, and nothing happens. Wouldn’t it be nice if all JavaScript functionality doesn’t even show up? Many sites handle this by adding a
no-js class to the
<head> element.
Issues galore
The
no-js class is added to the HTML by default. JavaScript is used to replace it. The JS code is the following three liner.
window.addEventListener('DOMContentLoaded', function() {
  document.documentElement.className =
    document.documentElement.className.replace("no-js", "js");
});
- I am using the react-helmet library to manage the header. Using the
htmlAttributesproperty of
<Helmet>I am already managing the
classproperty. So I added the
no-jsclass there. And I put the above JS code in a separate JS file, and I call it in
gatsby-browser.jsfile. Does it work? No. Even though my JS file is sourced, react-helmet is dynamic. It fires and keeps the header updated even when I am replacing it from my static HTML. And worse, the
no-jsclass remains every time so all my JavaScript-only behavior is gone even when it is enabled. So I cannot club
no-jsclass to react-helmet as I am already managing it there.
- I put the
no-jsas a body class. But, you guessed it, I am managing the body classes too using react-helmet. So that’s a no go.
- As mentioned here [Customize Gatsby HTML] I copy over the html.js. I inlined the script in the
<head>tag and manually add the class to the
<head>. Doesn’t work as react-helmet removes the class. 😢
The Solution
So finally I figured it out. Since I want to continue using react-helmet to manage my
head classes, I needed some other property to hold this value. So I add a
data-body-class="no-js" to the
<html> element in the
html.js file. And then I inline the script, but instead of replacing class, I do a
setAttribute. Here is how the template looks (only showing the HTML template) from
html.js. The relevant parts are line number 3 and line numbers 21-28.
export default function HTML(props) {
  return (
    <html data-body-class="no-js" {...props.htmlAttributes}>
      <head>
        <meta httpEquiv="x-ua-compatible" content="ie=edge" />
        <meta
          name="viewport"
          content="width=device-width, initial-scale=1, shrink-to-fit=no"
        />
        {props.headComponents}
      </head>
      <body {...props.bodyAttributes}>
        {props.preBodyComponents}
        <div
          key={`body`}
          id="___gatsby"
          dangerouslySetInnerHTML={{ __html: props.body }}
        />
        {props.postBodyComponents}
        <script
          dangerouslySetInnerHTML={{
            __html: `
              window.addEventListener('DOMContentLoaded', function() {
                document.querySelector('html').setAttribute('data-body-class', 'js')
              });
            `,
          }}
        />
      </body>
    </html>
  )
}
Now in CSS, I can have this.
html[data-body-class='no-js'] .only-js { display: none; }
Whichever elements are JavaScript-only, I add the class
only-js to them, and their display property is set to
none when JavaScript is disabled.
Conclusion
HTML is awesome. JavaScript is awesome. Gatsby is awesome. I am in love.
https://dev.to/shubho/add-a-no-js-class-in-gatsby-2914
Django urlpatterns - it's more than just urls
In this project we are processing text messages. So a piece of text comes into a method and needs to be routed to a particular view/method/class/thing. There's lots of ways of doing it; a big if statement might be one. RapidSMS includes a damn cunning way of doing it by using decorators.
Here's an example:
@keyword(r'n(?:ote)? \+(\d+) (.+)') @authenticated def note_case (self, message, ref_id, note): [...]
I thought that was really neat, you can see the decorator here. Nice stuff!
However since I'm a GOD (Grumpy Old Developer) after using it for about 10 minutes I see there's a basic problem I'm not happy with: I find it hard to track what message goes where. There could be 100's of lines of code between each decorator, its hard to see them all at once.
Hmm. So how about using Django's urlpatterns? I actually like the url regex in Django because it's easy to see in one editor window what is going where.
In my case I made a file:
msgs.py that simply maps the regex's to the view:
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^country (?P<country>\S+)', "apps.testy.views.country",),
    (r'^list (?P<list>\S+)', "apps.testy.views.list",),
)
Then in my app I can use the URLResolver:
from django.core.urlresolvers import RegexURLResolver, Resolver404

resolver = RegexURLResolver(r'', "apps.testy.msgs")
In my class I'm able to use the resolver in a method, on any text:
try:
    callback, callback_args, callback_kwargs = resolver.resolve(text)
except Resolver404:
    raise ValueError, "There was no view found for: %s" % text

response = callback(self, message, *callback_args, **callback_kwargs)
And now I can pass text through to different views based on those regex's. The only problem with it is that (understandably) it's using HTTP semantics. That
Resolver404 inherits from a HTTPError, which isn't ideal for a non HTTP source. Also it's kind of annoying having to call it urlpatterns in
msgs.py since they aren't really about URL's.
So then that's where I'd say something like: I've ripped urlpatterns out of Django, made it generic, placed it on pypi, then put a Django wrapper around it for URL'ishness and put it back into the Django project. Except I haven't. But if you were doing a project that included doing logic on a bit of text.... perhaps you could do that.
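If you just want the core idea without Django at all, here's a minimal framework-free sketch of the same regex-to-handler dispatch (the names and handlers are mine, not from the post):

```python
import re

# Map compiled regexes to handler callables; named groups become kwargs.
routes = [
    (re.compile(r'^country (?P<name>\S+)'), lambda name: 'country: ' + name),
    (re.compile(r'^list (?P<name>\S+)'),    lambda name: 'list: ' + name),
]

def dispatch(text):
    # Route the text to the first pattern that matches, like resolver.resolve.
    for pattern, handler in routes:
        match = pattern.match(text)
        if match:
            return handler(**match.groupdict())
    raise ValueError("There was no view found for: %s" % text)

print(dispatch('country India'))  # country: India
```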
http://agmweb.ca/2009-04-19-django-urlpatterns-its-more-than-just-urls/
2015-11-19 20:49 GMT+02:00 Roberto Ierusalimschy <roberto@inf.puc-rio.br>:
>> Lua 5.3.2 (rc1) is now available for testing at
>
> What is new:
>
> - table.sort "randomize" the pivot

This is done as follows:

static int choosePivot (int lo, int up) {
  unsigned int t = (unsigned int)(unsigned long)time(NULL);  /* time */
  unsigned int c = (unsigned int)(unsigned long)clock();  /* clock */
  unsigned int r4 = (unsigned int)(up - lo) / 4u;  /* range/4 */
  unsigned int p = (c + t) % (r4 * 2) + (lo + r4);
  lua_assert(lo + r4 <= p && p <= up - r4);
  return (int)p;
}

Since `time` returns a number of seconds, the value of `t` is quite likely to be constant for any given invocation of `sort`. Not very random.
http://lua-users.org/lists/lua-l/2015-11/msg00220.html
I am trying to build a multipage dashboard where each page uses functions written in a separate .py files.
Title_page.py can read function (called ‘add_two()’) from
func_1.py without any problem. However,
Page2.py can’t seem to read function (called ‘multiply_two()’) from
func_2.py under
p2_functions folder, which is placed inside the
pages folder. It throws an error saying:
ModuleNotFoundError: No module named ‘p2_functions’
I think it’s because Streamlit is ignoring anything other than .py files inside the
pages folder, but I don’t know how else I could import modules and use functions from other .py files or from other folders.
Here’s what I have in the Page2.py:
import streamlit as st
from p2_functions.func_2 import multiply_two
st.markdown("# Page 2 ")
st.write(multiply_two(10 * 20))
Here’s the tree of the structure:
├── Title_page.py ├── func_1.py └── pages ├── Page2.py └── p2_functions └── func_2.py
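One common workaround (my sketch, not a reply from the thread): put the directory that contains the running page script on sys.path before importing, so a sibling package such as p2_functions/ resolves regardless of Streamlit's working directory.

```python
import sys
from pathlib import Path

def add_script_dir_to_path(script_path):
    """Insert the script's directory at the front of sys.path (idempotent)."""
    script_dir = str(Path(script_path).resolve().parent)
    if script_dir not in sys.path:
        sys.path.insert(0, script_dir)
    return script_dir
```

In Page2.py this would run before the import: `add_script_dir_to_path(__file__)`, then `from p2_functions.func_2 import multiply_two`. Adding an empty `__init__.py` to `p2_functions` can also help, depending on the Python version.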
https://discuss.streamlit.io/t/multipage-how-to-import-module-from-different-directory/27646
Hi guys, trying to get a day number from 1 - 366 for a specific date.
I've been working on this for a while. I can't seem to figure out. I'm not asking for an answer, just a push in the right direction. So far here is what I have for code:
#include <iostream>
using namespace std;

void main()
{
    int a, y, m, year, month, yearBefore, day;
    int day1;
    int julianDay = 0;

    cin >> month >> day >> year;

    for (yearBefore = year; yearBefore >= (year - 1); yearBefore--)
    {
        a = (14 - month) / 12;
        y = yearBefore + 4800 - a;
        m = month + 12 * a - 3;
        julianDay = day + (153 * m + 2) / 5 + y * 365 + y / 4 - y / 100 + y / 400 - 32045;

        if (day1 = 0)
        {
            day1 += julianDay;
        }
        if (day1 > julianDay)
        {
            day1 -= julianDay;
        }

        day = 31;
        month = 12;
    }
    cout << day1;
}
I keep trying to find a way to get it to subtract the second julian day from the first. Please help!
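As a push in the right direction, here's a sketch of the direct approach (in Python for brevity; the logic ports straight to C++): instead of taking the difference of two Julian day numbers, sum the lengths of the months before the given one, add the day, and add one for leap years past February.

```python
# Month lengths for a non-leap year.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap(year):
    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def day_of_year(month, day, year):
    n = day + sum(DAYS_IN_MONTH[:month - 1])
    if month > 2 and is_leap(year):
        n += 1  # February had 29 days this year
    return n

print(day_of_year(12, 31, 2000))  # 366
```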
https://www.daniweb.com/programming/software-development/threads/188961/am-i-on-the-right-track-c-code
Console 1.4.1 is now available in the File Release section. I merged the consolelib.tcl file into console.tcl and added extra language support for the menus. No real changes in functionality, so if you already have 1.4.0, and have no need for French, Spanish, etc. menus, there shouldn't be any urgency to upgrade.
TiK-0.90-Pre98-2 has been available for download since May 1st, just in case you didn't notice. Also made some minor tweaks to the web site, most of which shouldn't be noticeable. Hopefully more updates to come in the near future.
Check the usual download area to test it out.
Change log:
- package namespaces should go away now when they're unloaded.
- capture files are configured for utf-8
- removed "send file" option since it's broken and there's no fix forecasted
- ver_check shouldn't error out now if you don't have your HTTPProxy setup properly
The version check was re-written to be a component instead of integrated so that it could take advantage of HTTPProxy. Now proxy users shouldn't experience a hang or crash while starting TiK.
We're getting closer and closer to the next official release. In the meantime you can check out the latest CVS code (which is highly stable) from the TiK section. Changes in this latest release include:
- Entry for changing the buddylist bground color, (the default is now a very bright white)
- Version notification
- and a few others I can't remember. Just check it out.
Mengmeng fixed a problem with TiK reporting idle times when told not to. You can pick it up from the TiK section.
I'm revamping some of the internals of the TiK section right now, so if you have any problems, wait a minute or two and try again. The Old-CVS section should work now. SourceForge doesn't display directories so I had to play with the index.php file to get something usable.
. . . and has been for a couple days. (I knew I forgot to do something.) This was a minor fix with the popup text and a few other places that were affected by a trailing tab at the end of buddy names. Nothing critical, and strictly low priority.
Fixes for this (pre-)release include a user-definable delay for reconnecting when a user has persistent connect checked and the (hopefully) last update to keep TiK from freaking out if the config directory is removed while TiK is running. The delay helps against screenname lockout in the event you open two TiK clients and signon with the same screenname.
You can download this release from the TiK section.
Updates for this release:
- Now supports unix platforms.
Download it from the TiK section.
Download it from the TiK area.
Fixes for this release:
- fixed boxinfo's http proxy problem (for good, I hope)
- fixed left over http tokens (they caused a slow? memory leak) in boxinfo and quickchat
See below for changes made for this release.
Updates in this version include:
- less memory usage
- fixed a directory bug with the initial sound loading
Updates in this pre-release include:
- packages/components no longer write to a config directory that doesn't exist
As usual, you can download this pre-release from the TiK area.
Fixed a bug with boxinfo that was causing memory errors in the interpreter. Download it from the TiK area.
Download it from the TiK area.
Changes include:
- Moved socket wrappers to sflap so that they don't interfere with the http package anymore
- Added JDLSpeedy's boxinfo patch.
Download it from the TIK area.
TiK 0.90-Pre70 is available for download in the TiK section.
(more. . . )
Bugs that will be fixed before the official 0.90 release:
- Tcl Special Characters in play havoc with away messages and away nicks.
- Error when trying to write to non-existent files
Bugs fixed in this Pre-Release:
- Preferences Dialog inserts a lot of BR tags
- Quotes in away messages work (but not in nicks)
- Permit/Deny flash on edit (partially fixed)
Finally got tikae 0.88.2 up on the Sourceforge servers. Things are insane because of the holidays, and with a hard drive upgrade for my main development system (from the old 5 gig to the new 30 gig). Hopefully everything will calm down after the new year.
Incase you haven't heard yet, Sourceforge is going down this Friday, December 15, at 9pm - 3am PST. So, there's going to be an interruption of service here during that time. I should also be gone from Friday through Monday on a trip to Virginia, weather permitting. I'll check in when I can, and get to any problems first thing Tuesday, again, weather permitting. Also, keep an eye out at the TiK site this weekend and next week for a flurry of development. I know I'm a broken record, but the next release is coming very, VERY, SOON.
For those not running TiK under unix, the CVS tree contains a fix that allows you to copy text from IM, CHAT, getaway, info, etc. windows. Also, word is that TiK 0.90 should be released VERY soon (or I'm gonna tease Daspek until it does). Un-official pre-release containing this fix should be up on the TiK area here shortly. (Read more.)
The copy fix allows users under Windows, or any other non-unix platform that can't copy, to copy by selecting the text they want copied with their mouse. The sheer act of selection causes the copy. I set it up this way because it was the simplest. If anyone has problems, make them known and I'll see about another method.... read more
Added an old news page to keep track of the old pre-export news items, as well as the last 20 exported ones. I decided to rebuild my "exported data" parser, only to decide to keep it mostly the way it was. I also spent way too much time on command-line arguments, especially for something that's probably only useful to me.
Starting Sunday, weather permitting, I'll be on the road for a nice tour through Virginia visiting some old friends. I'll be gone for about a week so if you can't find me, now you know why.
I setup a tcl script to fix the broken link in the exported news data, so now there shouldn't be any broken links on the main project page. I started to remove the horizontal lines from out of the link area, but I couldn't get it to look nice, so the lines are staying for now. When I play with regsub some more, I'll see about putting up something that looks nicer.
Under the project links is now the project statistics provided by the Sourceforge export feature. It's a nice feature, but I'm not thrilled with the way the horizontal lines look. I added the lines to the project links to help balance, but I think they will probably go. I'm definitely going to look into pre-parsing the exported data from Sourceforge.
I've changed the index for this site to use php and Sourceforge's new News Export feature. I've also put the code for this site's php files into cvs. I've noticed a minor linking bug with the way Sourceforge exports the news and am looking into a php workaround. In the meantime, you'll notice the bug when you click on the "/projects/keylargo" broken link. If you want to see the old news items, go to
Despite the lack of CVS stats, this project is active and to prove it there are two TiK plugins available for download and the latest copy of tikae. Over at TiK we're gearing up for the next release which I'm hoping will be later tonight or tomorrow. Yep, that project is active, too.
http://sourceforge.net/p/keylargo/news/?page=1
Guidance for Building a Control Plane for Envoy, Part 5: Deployment Tradeoffs
Guidance for Building a Control Plane for Envoy, Part 5: Deployment Tradeoffs
In this post, we will explore the tradeoffs in deploying the various control plane components.
This is part 5 of a series that explores building a control plane for Envoy Proxy. Follow along @christianposta and @soloio_inc for more!
In this blog series, we'll take a look at the following areas:
- Adopting a mechanism to dynamically update Envoy’s routing, service discovery, and other configuration
- Identifying what components make up your control plane, including backing stores, service discovery APIs, security components, et al.
- Establishing any domain-specific configuration objects and APIs that best fit your usecases and organization
- Thinking of how best to make your control plane pluggable where you need it
- Options for deploying your various control-plane components (this entry)
- Thinking through a testing harness for your control plane
In the previous entry, we explored why a pluggable control plane is crucial for keeping up with the fast-moving Envoy API as well as integrating with different workflows an organization may wish to adopt. In this post, we'll look at the tradeoffs in deploying the various control plane components.
Deploying Control Plane Components
Once you've built and designed your control plane and its various supporting components, you'll want to decide exactly how its components get deployed. You'll want to weight various security, scalability, and usability concerns when settling into what's best for your implementation. The options vary from co-deploying control plane components with the data plane to completely separating the control plane from the data plane. There is also a middle ground here as well: deploy some components co-located with the control plane and keep some centralized. Let's take a look.
In the Istio service-mesh project, the control plane components are deployed and run separately from the data plane. This is very common in a service-mesh implementation. That is, the data plane runs with the applications and handles all of the application traffic and communicates with the control plane via xDS APIs over gRPC streaming. The control-plane components generally run in their own namespace and are ideally locked down from unexpected usage.
The Gloo project, as an API Gateway, follows a similar deployment model. The control-plane components are decoupled from the data plane and the Envoy data plane uses xDS gRPC streaming to collect configuration about listeners, routes, endpoints, and clusters, etc. You could deploy the components co-located with the dataplane proxies with Gloo, but that's discouraged. We'll take a look at some of the tradeoffs in a bit.
Lastly, we take a look at co-deploying control plane components with the data plane. In the Contour project, by default, control plane components are deployed with the data plane though there is an option to split up the deployment. Contour actually leverages CRDs or Ingress resources for its configuration, so all of the config-file handling and watching happens in Kubernetes. The xDS service, however, is co-deployed with the dataplane (again, that's by default - you can split them).
When eBay built their control plane for their deployment of Envoy, they also co-deployed parts of their control plane (the discovery pieces) with their data plane. They basically wrote a controller to watch CRDs, Ingress, and Service resources and then generate config maps. These config maps would then be consumed by the
discovery container running with the Pod and hot restarted when they changed, which updated Envoy.
In the eBay case, we see a "hybrid" approach that was highly influenced by the specifics of the rest of their architecture. When evaluating a control plane for Envoy, or considering building one yourself, how should you deploy the control-plane components?
Should I Keep Control Planes Separated From the Data Plane?
There are pros and cons to the various approaches. The Gloo team believes keeping the control plane separate is the right choice for most use cases and you should avoid fully co-deploying your control plane with your data plane.
If Envoy is the heart and soul of your L7 networking, the control plane is the brains. Deploying the control plane separately from the data plane is important for these reasons:
- Security - If somehow a node in your data plane gets compromised, you definitely do NOT want to exacerbate your situation by giving up control to the rest of your applications and network by allowing your control plane to become compromised. Additionally, a control plane could be dealing with distribution of keys, certificates, or other secrets that should be kept separate from the data plane.
- Scaling - You probably will end up scaling your data plane and control plane differently. For example, if your control plane is polling Kubernetes for services/endpoints, etc., you definitely don't want to co-locate those components with your data plane — you'll choke any chance of scalability.
- Grouping - You may have different roles and responsibilities of the data plane; for example, you may have data plane Envoys at the edge which would need a different security and networking posture than a pool of shared proxies for your microservices vs. any sidecar proxies you may deploy. Having the control plane co-located with the data plane makes it more difficult to keep data and configuration separate.
- Resource usage - You may wish to assign or throttle certain resource usage depending on your components. For example, your data plane may be more compute intensive vs. the control plane (which may be more memory intensive) and you'd use different resource limits to fulfill those roles. Keeping them separate allows you more fine-grained resource pool options than just lumping them together. Additionally, if the control plane and data plane are collocated and competing for the same resources, you can get odd tail latencies which are hard to diagnose.
- Deployment/lifecycle - You may wish to patch, upgrade, or otherwise service your control plane independently of your data plane.
- Storage - If your control plane requires any kind of storage, you can configure this separately and without the data plane involved if you separate out your components.
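The scaling and resource-usage points above can be sketched as two separate Kubernetes Deployments. This is only an illustrative fragment, not something prescribed by the article: the names, replica counts, and resource numbers are hypothetical, and selectors/labels are omitted for brevity.

```yaml
# Hypothetical sketch: control plane and data plane as separate Deployments,
# so each can be scaled, resourced, and upgraded independently.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy-control-plane          # assumed name
spec:
  replicas: 2                        # scaled with config churn, not traffic
  template:
    spec:
      containers:
        - name: xds-server
          resources:
            limits:
              memory: "1Gi"          # control planes tend to be memory-heavy
              cpu: "500m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy-data-plane             # assumed name
spec:
  replicas: 10                       # scaled with request volume
  template:
    spec:
      containers:
        - name: envoy
          resources:
            limits:
              memory: "256Mi"
              cpu: "2"               # proxies tend to be compute-heavy
```

Because the two Deployments are independent objects, each gets its own resource limits, replica count, and rollout lifecycle, which is exactly the decoupling argued for above.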
For these reasons, we recommend keeping the control plane at arm's length and decoupled from the data plane.
Takeaway
Building a control plane for Envoy is not easy and once you get to understand what's needed from a control plane for your workflow, you'll need to understand how best to deploy it. The Gloo team recommends building a pluggable control plane and keeping it separate from the data plane for the reasons outlined above. Gloo's architecture is built like this and enables the Gloo team to quickly add any new features and scale to support any platforms, configurations, filters, and more as they come up. Again, follow along @christianposta and @soloio_inc for more Envoy, API Gateway, and service mesh goodness.
Published at DZone with permission of Christian Posta , DZone MVB. See the original article here.
https://dzone.com/articles/guidance-for-building-a-control-plane-for-envoy-pa-3?fromrel=true
There isn't much to say... I just want to know whether it's possible to property-draw a class that has a generic type. If the answer is no, just say no (I don't really need an alternative solution, so I don't need to add more details; I have a different solution, but I don't like it). If it is possible, how am I supposed to do it?
edit(Moved from answer):
I will just specify what I want, because my other answer won't work:
I have a class called "Attack", which contains some properties an attack has (such as a name or a cost), and a delegate, which is the method being called when attacking. The delegate is one I wrote called "VoidFunc" (when I made it I didn't know about the Action delegate; what I made is basically a Func delegate without a return type). The Attack class has a few overloads; each one adds one generic, and that generic is the generic type in the delegate. Here is an example for one with one generic type:
[System.Serializable]
public class Attack<T>
{
public string name;
public int cost;
public VoidFunc<T> attackMethod;
}
Now I need to make a custom property drawer for this. I have a non-generic overload, which worked perfectly fine, but it is not what I need. My initial attempt was this:
[CustomPropertyDrawer(typeof(Attack<T>))]
public class AttackPropertyDrawer<T> : PropertyDrawer
But apparently you can't use generic types in attributes (I would actually like to know the logic behind it)... So instead of making the drawer class generic, I tried to just use the object keyword as the generic type, meaning I would use "Attack<object>". But it didn't work... So I tried something else, to test if it's even possible to draw generic classes at all: I made a concrete Attack and tried to property-draw it. It still didn't work, so the problem must be Unity's support for it. I came to the forum and got @jdean300's answer, which also didn't work. I now have that different solution of using "object" in the Attack class, but I can't overload a variable. So do you have any other solution? The ONLY solution I have left is to use the "object" keyword at the delegate declaration, but I'd prefer almost ANY other solution to that.
Answer by Bunny83
·
Jun 29, 2016 at 12:11 PM
Unity's serialization system doesn't support serializing generic types, with the exception of the generic List<T> type. The inspector can only edit things that are serializable by Unity.
edit: Since you have added more details, I will give a more detailed answer.
Again, Unity doesn't support serializing generic types. Furthermore delegates can't be serialized by Unity in general. Unity now supports a special delegate wrapper type called "UnityEvent". This class is defined in the "UnityEngine.Events" namespace.
Unity also implemented a generic version of that class in order to specify event parameters. However, as I said, Unity doesn't support serializing generic types; that's why you have to create a concrete type out of the generic type in order for it to be serializable.
[System.Serializable]
public class EventWithStringParam : UnityEvent<string> { }
You can use generic base classes but the final classes and types that are used have to be concrete, non-generic types if you want them to be serializable and to be viewable / editable in the inspector.
Make sure you read the UnityEvent documentation carefully, especially the last paragraph about the generic type.
So how come there are some plugins that make dictionaries appear in the inspector? They probably used a PropertyDrawer.
A PropertyDrawer only alters the way serialized properties are displayed. You can implement a custom editor / inspector for the containing class. A custom editor could display pretty much anything, but editing is pretty pointless since the generic classes aren't serialized.
Your question is worded in a very abstract way so it's not clear what you actually want to achieve.
I actually didn't want the methods to appear, though it would be nice... But if I do it, I would like to only have one slot for one method, not a full UnityEvent. But that wasn't even the problem, what I wanted to do is to make them show up (and in a custom way) even without the method.
@Bunny83 So you have no idea? :/
Once and for all: Unity does not support serializing generic types at all, with the one and only special exception: List<T>.
So if you use a generic parameter in your type definition, no matter whether the type parameter is actually used or not, it can't be serialized by Unity and thus can't be edited in the inspector. You have to get rid of your generic parameter if you want it to be visible in the inspector. That's it. There's no way around that.
ps. You again posted a comment as an answer. I again converted the answer into a comment, but that was the last time. I again advise you to read the user guide and the FAQs
For some reason I didn't get an Email saying I got an answer, so that's why it's so late...
Ok, I read the user guide. The reply I make now might not be good, and maybe it should have been an answer, but the user guide didn't contain anything against it, so let's just hope it will be fine...
I made something like this
public class AttackWithPlayer : Attack<Player> { }
And then when I made the property drawer, I did it like that:
[CustomPropertyDrawer(typeof(Attack))]
[CustomPropertyDrawer(typeof(AttackWithPlayer))]
public class AttackPropertyDrawer : PropertyDrawer
And the variable I made was like that:
public AttackWithPlayer testAttack;
You might say I give too many details in code instead of explaining in sentences, but this is the best way I can show it. I have no idea why, but it doesn't work... Maybe it doesn't edit that class even though it's not generic, just because its base class is generic. How does that make sense? And is it supposed to be like that?
Answer by ModLunar
·
Jan 17, 2020 at 04:42 PM
Unity 2020. Still in alpha, but I'm never going back. It was just like this in 2018.3 with Nested Prefabs. WOO!
If you have this for example, as a data class:
using System;
using UnityEngine;
[Serializable]
public class Blah<T> {
[SerializeField] private T value;
}
Then you can use the following code snippet for your PropertyDrawer.
using UnityEditor;
[CustomPropertyDrawer(typeof(Blah<>))]
public class BlahDrawer : PropertyDrawer {
//... (your typical PropertyDrawer stuff here).
}
Just note your PropertyDrawer should NOT be generic itself, but you can use typeof(Blah<>) without any type in between the angle brackets, to specify any type can go there. This is confirmed to work in Unity 2020.1.0a18 as of today, and thank the heavens.
Answer by jdean300
·
Jun 29, 2016 at 11:54 AM
It's possible with a bit of hacking. The generic class that you want to create a property drawer for needs to inherit from a non-generic base class, and then you can create a property drawer for the non-generic class.
public class NonGenericBase {}
public class GenericType<T> : NonGenericBase {}
[CustomPropertyDrawer(typeof(NonGenericBase))]
public class NonGenericBaseDrawer : PropertyDrawer {}
That only works with variables of type NonGenericBase; when I make a variable of type GenericType, it doesn't draw it. Do you have any other solution?
Yes, it works for classes, but you should indicate that the drawer also works for inherited children: [CustomPropertyDrawer(typeof(NonGenericBase), true)]
Unfortunately, it doesn't work for structs, which can't be inherited... I am looking for a custom drawer adapted to System.Nullable
This method doesn't seem to be working in Unity 2019, unfortunately. Though apparently you can do this without workarounds in Unity 2020, but that's not quite fully released yet at this exact moment.
https://answers.unity.com/questions/1209710/property-draw-a-class-that-has-a-generic-type.html
synchronized 3.0.0
synchronized: ^3.0.0
synchronized #
Basic lock mechanism to prevent concurrent access to asynchronous code
Goal #
You were missing hard-to-debug deadlocks? Here it is!
The goal is to propose a solution similar to critical sections and offer a simple
synchronized API à la Java style.
It provides a basic Lock/Mutex solution to allow features like transactions.
The name is biased as we are single-threaded in Dart. However, since we write asynchronous code (await) like we would write synchronous code, it makes the overall API feel the same.
The goal is to ensure for a single process (single isolate) that some asynchronous operations can run without conflict. It won't solve cross-process (or cross-isolate) synchronization.
For single process (single isolate) accessing some resources (database..), it can help to
- Provide transactions on database systems that don't have a transaction mechanism (MongoDB, file system)
- In HTML applications, make sure some asynchronous UI operations are not conflicting (login, transition)
Feature #
- By default a lock is not reentrant
- Timeout support
- Support for reentrant lock (using Zone)
- Consistent behavior (i.e. if it is unlocked, calling synchronized grabs the lock)
- Values and Errors are properly reported to the caller
- Work on Browser, DartVM and Flutter
- No dependencies (other than the sdk itself)
It differs from the pool package used with a resource count of 1 by supporting a reentrant option.
Usage #
A simple usage example:
import 'package:synchronized/synchronized.dart';

main() async {
  // Use this object to prevent concurrent access to data
  var lock = new Lock();
  ...
  await lock.synchronized(() async {
    // Only this block can run (once) until done
    ...
  });
}
If you need a re-entrant lock you can use
var lock = new Lock(reentrant: true);
// ...
await lock.synchronized(() async {
  // do some stuff
  // ...
  await lock.synchronized(() async {
    // other stuff
  });
});
A basic lock is not reentrant by default and does not use Zone. It behaves like an async executor with a pool capacity of 1
var lock = Lock();
// ...
lock.synchronized(() async {
  // do some stuff
  // ...
});
The return value is preserved
int value = await lock.synchronized(() { return 1; });
Using the package:synchronized/extension.dart import, you can turn any object into a lock; synchronized() can then be called on any object:

import 'package:synchronized/extension.dart';

class MyClass {
  /// Perform a long action that won't be called more than once at a time.
  Future performAction() {
    // Lock at the instance level
    return synchronized(() async {
      // ...uninterrupted action
    });
  }
}
How it works #
The next task is executed once the previous one is done.
Re-entrant locks use Zone to know in which context a block is running in order to be reentrant. They maintain a list of inner tasks to be awaited.
Example #
Consider the following dummy code
Future writeSlow(int value) async {
  await Future.delayed(new Duration(milliseconds: 1));
  stdout.write(value);
}

Future write(List<int> values) async {
  for (int value in values) {
    await writeSlow(value);
  }
}

Future write1234() async {
  await write([1, 2, 3, 4]);
}
Doing
write1234(); write1234();
would print
11223344
while doing
lock.synchronized(write1234); lock.synchronized(write1234);
would print
12341234
The Lock instance #
Have in mind that the Lock instance must be shared between calls in order to effectively prevent concurrent execution. For instance, in the example below the lock instance is the same between all myMethod() calls.

class MyClass {
  final _lock = new Lock();

  Future<void> myMethod() async {
    await _lock.synchronized(() async {
      step1();
      step2();
      step3();
    });
  }
}
Typically you would create a global or static instance Lock to prevent concurrent access to a global resource or a class instance Lock to prevent concurrent modifications of class instance data and resources.
Features and bugs #
Please feel free to:
- file feature requests and bugs at the issue tracker
- or contact me
- How to guide
https://pub.flutter-io.cn/packages/synchronized
Hacking Blind
Hacking Blind Bittau et al. IEEE Symposium on Security and Privacy, 2014
(With thanks to Chris Swan for pointing this paper out to me a few months ago…)
The ingenuity of attackers continues to amaze. Today’s paper presents an interesting trade-off: security or availability, pick one! (*) The work you put in to make sure that your processes are monitored and restarted on failure is enough for an attacker to exploit them given the existence of a stack buffer overflow vulnerability. “Unfortunately, these are still present today in popular software…” Using a systematic approach, the one bit of information that is leaked when a payload is sent to a server (does it crash or not) is enough to build up a full-blown attack. Restarting processes after a crash (either a server process directly restarting its worker processes, or use of a daemon such as systemd) provides the attacker with the ability to repeatedly probe the system and build up knowledge. The attack requires no prior knowledge of the source code or binary.
Starting from nothing, an automated version of the BROP attack (Blind Return-Oriented Programming) in a tool called Braille can go from a crash to a remote shell in anything from a few minutes to up to 20 minutes. Braille is 2000 lines of Ruby code. For the particular system under attack, the user needs to supply a ‘try exploit’ function which is passed the data the harness wishes to overflow the stack buffer with, and must return either ‘CRASH’, ‘NO_CRASH’, or ‘INF’ (if the socket stays open for longer than a timeout but does not otherwise behave normally). Here’s an example for nginx:
def try_exp(data)
  s = TCPSocket.new($victim, 80)
  req =  "GET / HTTP/1.1\r\n"
  req << "Host: site.com\r\n"
  req << "Transfer-Encoding: Chunked\r\n"
  req << "Connection: Keep-Alive\r\n"
  req << "\r\n"
  req << "#{0xdeadbeefdead.to_s(16)}\r\n"
  req << "#{data}"
  s.write(req)
  if s.read() == nil
    return RC_CRASH
  else
    return RC_NO_CRASH
  end
end
(nginx was compromised in under a minute, after making 2401 requests).
Hacking without binary knowledge is useful even in the not-completely-blind case (e.g., open-source) because it makes it possible to write generic, robust exploits that work against all distributions and are agnostic to a specific version of the binary. Today, attackers need to gather exact information (e.g., binaries) for all possible combinations of distribution versions and vulnerable software versions, and build an exploit for each. One might assume attackers would only bother with the most popular combinations. An implication of our work is that more obscure distributions offer little protection (through obscurity) against buffer overflows.
(*) Ok, it’s not actually quite that dramatic. The availability (automated restart) of a single server process is what’s needed. If you load balance across several servers the attack isn’t quite so easy as it assumes the same machine and (top-level) process can be hit after each attempt. If you are load balancing, and PIE is used (not widely deployed as of 2014) or canaries cannot be circumvented by other means then BROP will not succeed.
Defences that BROP must overcome
In the ‘good old days’ exploiting stack buffer overflows was relatively straightforward. The malicious code could be included as part of the overflow payload, and the return address set to the location on the stack where the instructions had been placed. This simple approach stopped working with the introduction of non-executable memory (NX).
NX can be overcome using a technique called return oriented programming (ROP) that we’ve looked at previously on The Morning Paper. The basic idea is very simple, if very clever. What if the instructions you need for your attack are already present in the binary? By finding appropriate small sequences of instructions (called ‘gadgets’) that end with a return, it is possible to chain these together to achieve the desired goal. Consider the following very common code that restores all saved registers:
pop rbx pop rbp pop r12 pop r13 pop r14 pop r15 ret
If you start parsing this code at the beginning (offset 0x0) it does exactly what is intended. But if you jump into it at an unintended offset (not aligned with the original instruction boundaries), you find sequences of bytes that do other useful things. At offset 0x7 you get:
pop rsi pop r15 ret
And at offset 0x9 you get:
pop rdi ret
This save register code fragment is so useful the authors call it the BROP gadget. If you can find it, the two useful gadgets it contains give you enough to control the first two arguments of any call.
So ROP can overcome NX pages, but there still remain the challenges of address space layout randomization (ASLR) and canaries. ASLR randomizes the location of code and data memory segments in the process address space. This makes it impossible to predict the address locations of code in advance. Fortunately, the BROP attack doesn’t need to do this. Stack canaries are a secret value that is placed just before each saved frame pointer and return address in the stack. When a function returns, the canary value is checked – if it has changed it indicates a stack buffer overflow and the program exits to prevent any exploit. Canaries aren’t perfect as they can sometimes be bypassed, but they are still effective in many cases. Blind ROP is able to determine the canary value so that it can be included in the overflow data at the expected location, avoiding triggering detection.
One bit at a time, crafting an attack
The high-level attack plan is as follows:
- Using a process the authors call ‘stack reading,’ determine the canary value and a return address that can be used to defeat ASLR
- Find enough gadgets to be able to invoke the write system call and control its arguments (Blind ROP)
- Use write to dump enough of the binary over a socket such that the attacker can find enough gadgets to build a shellcode (known technique) and launch the final exploit.
Stack reading
Finding the canary value proceeds one byte at a time. The overflow is used to overwrite a single byte of the canary. If the process crashes, the guess was clearly wrong. If the server does not crash, we’ve found one byte of the canary value. Repeat this procedure with the next byte, until all 8 bytes (for 64-bit) are leaked. Then you can keep going to discover the saved instruction pointer on the stack (or any alternate value that also enables the program to keep executing without crashing).
On 64-bit Linux, whereas a brute-force attack requires 2^27 attempts on average, stack reading can defeat ASLR in 640 attempts on average.
Gadget finding
The next stage is to find enough gadgets to be able to invoke write. A convenient starting point is to find the BROP gadget we saw earlier (this isn’t necessary for the attack to succeed, it just makes things a bit easier – see the full paper for what to do if the BROP gadget is not found). Gadgets are found by overwriting the saved return address and inspecting the program behaviour (the entire .text segment can be scanned to compile a list of gadgets). The first thing to find is a stop gadget, which is anything that causes the program to block instead of crashing (which we can detect remotely). In fact, any signal that we can detect remotely will do, it doesn’t have to be a blocking (e.g. a sleep) gadget.
For example a server may handle requests in a while-true loop, so returning to that loop may “resume” program execution and another request can be handled. This can be used to signal whether a program crashed or is still alive (i.e., the stop gadget ran). The attacker in this case would populate the stack with the addresses of enough ret instructions to “eat up” enough stack until the next word on the stack is a return address of a previous stack frame that acts as a stop gadget (e.g., returns to the main program loop).
Given a stop gadget we can use gadget chaining to search for another gadget by first trying a candidate return address, and chaining the stop gadget after it. If the stop gadget runs, we found a valid gadget of some kind (i.e., it does something and then returns, without crashing). Suppose the attacker now has three addresses: probe, the address of the gadget being scanned; stop, the address of a stop gadget; and trap the address of any non-executable memory that will cause a crash. By chaining these in different combinations, you can find out information about what the gadget does. For example:
- probe, stop, traps (trap, trap, ...) finds gadgets that do not pop the stack, like ret or xor rax, rax; ret
- probe, trap, stop, traps finds gadgets that pop exactly one stack word, like pop rax; ret or pop rdi; ret
- probe, stop, stop, stop, stop, stop, stop, traps finds gadgets that pop up to six words, for example the BROP gadget.
The BROP gadget has a very unique signature. It pops six items from the stack and landing in other parts of it pops fewer items from the stack so one can verify a candidate by laying out traps and stop gadgets in different combinations and checking behavior. A misaligned parse in the middle yields a pop rsp which will cause a crash and can be used to verify the gadget and further eliminate false positives. The gadget is 11 bytes long so one can skip up to 7 bytes when scanning the .text segment to find it more efficiently, landing somewhere in the middle of it.
For a set of binaries analysed by the authors, it should take on average between 154 and 972 attempts to find a BROP gadget.
Finding ‘write’ and a way to control rdx
Now that we have a BROP gadget, the next thing is to find write's entry in the Procedure Linkage Table (PLT) and a way to control rdx for the length of the write. pop rdx; ret gadgets are rare, but fortunately calls to strcmp are not, and strcmp sets rdx to the length of the string being compared.
The PLT is relatively easy to find since it has a unique structure with entries 16 bytes apart, and a ‘slow path’ address at an offset of 6 bytes. If a couple of addresses 16 bytes apart do not cause a crash, and the same addresses plus six do not cause a crash, there’s a high probability you’ve found the PLT. The next step is to work out what function calls the various entries correspond to. By exercising the functions with different arguments and seeing what happens it is possible to figure this out. (The first two arguments can be controlled thanks to the BROP gadget). For example, the ‘signature’ of strcmp is:
- strcmp(bad address, bad address): crash
- strcmp(bad, readable): crash
- strcmp(readable, bad): crash
- strcmp(readable, readable): no crash
Once strcmp is found, the attacker can set rdx to a non-zero value by just supplying a pointer to either a PLT entry (non-zero code sequence) or the start of the ELF header (0x400000) which has seven non-zero bytes.
Given the ability to control the first two arguments via the BROP gadget, and rdx indirectly via strcmp, it's easy to find write: just scan each PLT entry, force a write to the socket, and check whether the write occurred. To find the file descriptor for the open socket, searching the first few file descriptors generally works well. Finding write usually only takes a few additional requests once the BROP gadget has been found.
Once the attacker can write to the socket, it’s relatively straightforward to write the entire .text segment from memory to the attacker’s socket (some methods are described in the paper). With this information the attacker can find more gadgets by local analysis of the binary, and complete the exploit.
Defence against the dark arts
- Using a load balancer with PIE enabled (as described previously)
- Randomizing the canary on a per-user or per-request basis
- Slowing down attacks by delaying restarts after crashes (which of course may not be what you want in the presence of an innocent crash)
- Using techniques such as Control Flow Integrity that defend against ROP attacks.
- Using compiler options to insert runtime bounds checks on buffers (may add up to 2x performance overhead). “One bright spot to make these solutions practical is that Intel has announced a set of instruction extensions to reduce the costs of bounds checking variables.”
And where can we download this tool?
Thanks for that! I tried it against my local Apache server, but attack immediately aborts after receiving “HTTP/1.1 411 Length Required”.
So perhaps nginx specific?
https://blog.acolyer.org/2016/06/22/hacking-blind/
I have a set of ranges that might look something like this:

[(0, 100), (150, 220), (500, 1000)]

If I add a non-overlapping range such as (250, 400), the list should become:

[(0, 100), (150, 220), (250, 400), (500, 1000)]

But adding something like (399, 450) should fail, because it overlaps (250, 400). Is there a standard Python data structure (something in the spirit of bisect) that keeps things in sorted order and would make this easy?
It looks like you want something like bisect's insort_right/insort_left. The bisect module works with lists and tuples.
import bisect

l = [(0, 100), (150, 300), (500, 1000)]
bisect.insort_right(l, (250, 400))
print l  # [(0, 100), (150, 300), (250, 400), (500, 1000)]
bisect.insort_right(l, (399, 450))
print l  # [(0, 100), (150, 300), (250, 400), (399, 450), (500, 1000)]
You can write your own overlaps function, which you can use to check before using insort.
I assume you made a mistake with your numbers, as (250, 400) overlaps (150, 300).
overlaps() can be written like so:
def overlaps(inlist, inrange):
    # two ranges overlap when each one starts before the other ends
    for lo, hi in inlist:
        if lo < inrange[1] and inrange[0] < hi:
            return True
    return False
https://codedump.io/share/6q2V1aRw1MFe/1/is-there-a-standard-python-data-structure-that-keeps-thing-in-sorted-order
#include <signal.h>

void (*sigset (sig, func))()
int sig;
void (*func)();

int sighold (sig)
int sig;

int sigrelse (sig)
int sig;

int sigignore (sig)
int sig;

int sigpause (sig)
int sig;
sighold - holds a signal until released or discarded
sigrelse - release a held signal
sigignore - sets the action for signal to SIG_IGN
sigpause - suspends the calling process until it receives a signal
These functions provide signal management for application processes.
The sigset system call specifies the system signal action to be taken upon receipt of signal sig. This action is either calling a process signal-catching handler func or performing a system-defined action.
sig can be assigned any one of the following values except SIGKILL and SIGSTOP. Machine- or implementation-dependent signals are not included (see Notes below). Each value of sig is a macro, defined in <signal.h>, that expands to an integer constant expression.
Otherwise, func must be a pointer to a function, the signal-catching handler, that is to be called when signal sig occurs. In this case, sigset specifies that the process calls this function upon receipt of signal sig. Any pending signal of this type is released. This handler address is retained across calls to the other signal management functions listed here.
When a signal occurs, the signal number sig is passed as the only argument to the signal-catching handler. Before calling the signal-catching handler, the system signal action is set to SIG_HOLD. During normal return from the signal-catching handler, the system signal action is restored to func and any held signal of this type released. If a non-local goto (longjmp) is taken, then the signal remains held, and sigrelse must be called to restore the system signal action and release any held signal of this type.
If a signal-catching function is invoked, how it effects the interrupted function is described by each individual function. See the X/Open Portability Guide, Issue 3, 1989 sigaction(S) section ``Signal Effects on Other Functions'', for further information.
sighold and sigrelse are used to establish critical regions of code. sighold is analogous to raising the priority level and deferring or holding a signal until the priority is lowered by sigrelse. sigrelse restores the system signal action to that specified previously by sigset.
sigignore sets the action for signal sig to SIG_IGN (see above).
sigpause suspends the calling process until it receives a signal, the same as pause(S). However, if the signal sig had been received and held, it is released and the system signal action taken. This system call is useful for testing variables that are changed on the occurrence of a signal. The correct usage is to use sighold to block the signal first, then test the variables. If they have not changed, then call sigpause to wait for the signal. sigset fails if one or more of the following is true:
For the other functions, upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error.
For the SIGCONT signal, the default action is to continue the process if it is stopped; otherwise it is ignored.
func equal to function address. The same is true if the signal is SIGCLD, with one exception: while the process is executing the signal-catching function, any received SIGCLD signals are ignored. (This is the default action.)
In addition, SIGCLD affects the wait, waitpid, and exit system calls as follows:
- If func for SIGCLD is set to SIG_IGN and a wait is executed, the wait blocks until all of the calling process's child processes terminate; it then returns a value of -1 with errno set to ECHILD
- If func for SIGCLD is set to SIG_IGN and a waitpid is executed, the waitpid blocks until all of the calling process's child processes terminate; it then returns a value of -1 with errno set to ECHILD. However, if the WNOHANG option is specified, waitpid does not block.
- If the func value of SIGCLD is set to SIG_IGN, the exiting process does not create a zombie process.
When processing a pipeline, the shell makes the last process in the pipeline the parent of the preceding processes. A process that may be piped into in this manner (and thus become the parent of other processes) should take care not to set SIGCLD to be caught.
Other implementations of UNIX System V may have other implementation-defined signals. Also, additional implementation-defined arguments may be passed to the signal-catching handler for hardware-generated signals. For certain hardware-generated signals, it may not be possible to resume execution at the point of interruption.
The signal type SIGSEGV is reserved for the condition that occurs on an invalid access to a data object.
The other signal management functions, signal(S) and pause(S), should not be used in conjunction with these routines for a particular signal type.
Intel386 Binary Compatibility Specification, Edition 2 (iBCSe2) .
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=sigrelse&mansection=S&lang=en
I currently have a working Active Record association but I was wondering if there was a more efficient way of doing something like this. Basically I have a model called Task. A task has one creator and can be assigned to many people. The user model is a Devise Model called User. This is my current setup but I don't like the query I need to use to fetch all Tasks for a user whether they created them or were assigned to them. Here are my models. My current setup is also terrible with pagination. Any suggestions?
class Task < ActiveRecord::Base
has_and_belongs_to_many :users
belongs_to :creator, foreign_key: 'creator_id', class_name: 'User'
end
class User < ActiveRecord::Base
has_and_belongs_to_many :assigned_tasks, class_name: 'Task'
has_many :created_tasks, foreign_key: 'creator_id', class_name: 'Task'
def tasks
(assigned_tasks.includes(project: [:client]) + created_tasks.includes(project: [:client])).uniq
end
end
def tasks
Task.joins('LEFT JOIN tasks_users ON tasks_users.task_id = tasks.id').where('tasks_users.user_id = :user_id OR tasks.creator_id = :user_id', { user_id: id }).includes(project: [:client])
end
A quick way to do this is:
class User
  def associated_tasks
    Task.joins(:user)
        .joins("LEFT JOIN user_tasks ON user_tasks.task_id = tasks.id")
        .where("users.id = :user_id OR user_tasks.user_id = :user_id", { user_id: id })
        .includes(project: [:client])
  end
end
Note that we're LEFT JOINing the join table between tasks and users. I called it user_tasks; you should be able to substitute your own table name.
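For reference, the single-query approach in this answer boils down to SQL along these lines (table and column names taken from the snippets above; DISTINCT guards against duplicate rows from the join):

```sql
SELECT DISTINCT tasks.*
FROM tasks
LEFT JOIN tasks_users ON tasks_users.task_id = tasks.id
WHERE tasks_users.user_id = :user_id
   OR tasks.creator_id = :user_id;
```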
There are other ways of doing this; I'll update the answer to include them when I have some time.
https://codedump.io/share/DeVdlctI3rYL/1/active-record-relation-for-assigned-and-created
Thinking of how CDI can be integrated with JBoss ESB. These notes are super rough right now, but I figure it's best to talk out loud early. I hope to share some pseudo/mock-up code within the next two weeks to make this a bit more concrete.
I see four use cases (listed in order from obvious to crazy).
Consuming a service hosted in the ESB from within a Managed Bean
On the surface, this is pretty much meat and potatoes. You have a managed bean and you want to inject a reference to a service hosted in the ESB. So you might expect to see something like this:
public class MyBean {
    @Inject
    @ServiceName("CoolService")
    Service esbService;
    ...
}
Pretty straightforward, but what exactly do you expect esbService to do? There are two possibilities:
- Service is simply a client proxy to the ESB. Calling a method on this class results in a message being sent over the ESB to the provider of the service.
- Service is a representation of a response from a service. What this means to me is that the @Inject processor actually invokes the target service during injection and maps the result to the instance being injected. What's in the request message? Good question, I have no idea. Maybe there's another annotation that provides a reference to a bean or a literal value that is used as the request message.
The other thing to keep in mind here is the context management in CDI. So if the Service reference is scoped to anything other than request, we are assuming that the target service is stateful. This is not guaranteed in SOA by any stretch of the imagination, but could be relevant if a correlation or conversation identifier was associated with each call to the service. Need to think about this more.
Providing a service on the ESB from a Managed Bean
Three ways I can see this happening:
- The managed bean injects some type of registration interface from the ESB and uses it to programmatically register itself as a service.
- The managed bean implements a producer method for an interface that the ESB expects for service registrations. The CDI Extension implementation for the ESB would have to hunt for these.
- The managed bean implements a specific interface or is annotated in some way so that the ESB CDI Extension picks it up.
Consuming and providing using Events
This is a pretty natural mapping for the ESB, which is inherently asynchronous and message-based. Not too difficult to imagine how this will play out. A bean would provide a service by observing events qualified with the service name that it's providing. A bean would consume a service by firing events qualified with the name of the service to be consumed.
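As an illustrative, non-runnable sketch of that mapping (the @ServiceName qualifier and the payload type are hypothetical; a real integration would need a CDI container and an ESB-side extension):

```java
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Hypothetical qualifier carrying the ESB service name, e.g.:
// @Qualifier @Retention(RUNTIME) public @interface ServiceName { String value(); }

class CoolServiceConsumer {
    @Inject
    @ServiceName("CoolService")
    Event<ServiceRequest> requests;

    void invoke(ServiceRequest req) {
        // consuming a service = firing an event qualified with its name
        requests.fire(req);
    }
}

class CoolServiceProvider {
    // providing a service = observing events qualified with its name
    void process(@Observes @ServiceName("CoolService") ServiceRequest req) {
        // handle the request; a response could be fired as another qualified event
    }
}
```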
Using Weld as a Composed Service Model
I need to provide a lot more context for this to make any sense at all, but I'll try with a quick and dirty explanation now. The current version of the ESB provides a service definition model based on the concept of an action processing pipeline. This is basically just the traditional pipes and filters pattern, but it all happens within the context of a service. A service is composed of the serial execution of one or more ordered actions. The ESB creates instances of the action classes and handles the dispatch.
If you do the right amount of drugs, you can twist the ESB notion a bit to think of CDI as the compositional model. CDI can inject the beans and service references into a managed bean that represents your service. There is some notion of ordering here based on dependency resolution, but it's certainly not at the same level as the ESB action chain. Maybe there's a clever way to define this with CDI. Or maybe not. Maybe it's just not part of the compositional model of CDI and we use other routing/orchestration technologies when that is necessary.
A few additional notes:
- I assume that we will need to provide an implementation of CDI Extension to do most/all of the above
https://developer.jboss.org/wiki/ESBAndCDIIntegration
Introduction
The linked list is one of the most important concepts and data structures to learn while preparing for interviews. Having a good grasp of Linked Lists can be a huge plus point in a coding interview.
Problem Statement
In this question, we are given a circular linked list. We have to traverse the list and print its elements.
Problem Statement Understanding
Let's try to understand the problem with the help of an example.
Suppose the given circular linked list is:
Now, we have to traverse this list. So, the output after traversing the list and printing every element will be: 1 7 18 15
Input: circular list 1 → 7 → 18 → 15 (the last node points back to 1)
Output: 1 7 18 15
Explanation: We will traverse the given list and while traversing we will print its elements which is the output.
Helpful Observations
This question is not a very complex one.
Let us first think about how we will traverse the circular linked list.
- As we already know, there is no NULL node in a circular linked list and hence, no endpoint. The last node of the list points back to the first node of the list.
- So, how can we do a successful traversal of the circular linked list?
We can counter this by considering any node as the starting point. This is possible because, after a complete traversal, we will reach the starting node again. So, we can use any node as the starting point.
Let us have a glance at the approach.
Approach
Which node should we choose as our starting node?
- The head node will make our life easier as we already have a pointer to the head of the list.
Create a node temp and make it point to the head. Now, keep incrementing temp while temp is not equal to the head of the list. In every iteration, print the temp’s data.
As explained already, we are using the head as our starting node, and terminating the loop when we are reaching the head again.
Algorithm
- If the head is NULL, simply return because the list is empty.
- Create a node temp and make it point to the head of the list.
- Now, with the help of a do-while loop, keep printing temp->data and incrementing temp while temp is not equal to the head of the list.
- As we have chosen the head as our starting point, we are terminating the loop when we are reaching the head again.
Dry Run
Code Implementation
#include <iostream>
using namespace std;

class Node {
public:
    int data;
    Node *next;
};

// Insert a new node at the beginning of the circular list
void push(Node **head_ref, int data) {
    Node *ptr1 = new Node();
    Node *temp = *head_ref;
    ptr1->data = data;
    ptr1->next = *head_ref;
    if (*head_ref != NULL) {
        while (temp->next != *head_ref)
            temp = temp->next;
        temp->next = ptr1;
    } else
        ptr1->next = ptr1;  // a single node points to itself
    *head_ref = ptr1;
}

void printList(Node *head) {
    Node *temp = head;
    if (head != NULL) {
        do {
            cout << temp->data << " ";
            temp = temp->next;
        } while (temp != head);
    }
}

int main() {
    Node *head = NULL;
    push(&head, 15);
    push(&head, 18);
    push(&head, 7);
    push(&head, 1);
    cout << "Contents of Circular Linked List\n";
    printList(head);
    return 0;
}
public class PrepBytes {
    static class Node {
        int data;
        Node next;
    };

    // Insert a new node at the beginning of the circular list
    static Node push(Node head_ref, int data) {
        Node ptr1 = new Node();
        Node temp = head_ref;
        ptr1.data = data;
        ptr1.next = head_ref;
        if (head_ref != null) {
            while (temp.next != head_ref)
                temp = temp.next;
            temp.next = ptr1;
        } else
            ptr1.next = ptr1; // a single node points to itself
        head_ref = ptr1;
        return head_ref;
    }

    static void printList(Node head) {
        Node temp = head;
        if (head != null) {
            do {
                System.out.print(temp.data + " ");
                temp = temp.next;
            } while (temp != head);
        }
    }

    public static void main(String args[]) {
        Node head = null;
        head = push(head, 15);
        head = push(head, 18);
        head = push(head, 7);
        head = push(head, 1);
        System.out.println("Contents of Circular Linked List:");
        printList(head);
    }
}
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

void push(struct Node **head_ref, int data) {
    struct Node *ptr1 = (struct Node *) malloc(sizeof(struct Node));
    struct Node *temp = *head_ref;
    ptr1->data = data;
    ptr1->next = *head_ref;
    if (*head_ref != NULL) {
        while (temp->next != *head_ref)
            temp = temp->next;
        temp->next = ptr1;
    } else
        ptr1->next = ptr1; /* a single node points to itself */
    *head_ref = ptr1;
}

void printList(struct Node *head) {
    struct Node *temp = head;
    if (head != NULL) {
        do {
            printf("%d ", temp->data);
            temp = temp->next;
        } while (temp != head);
    }
}

int main(void) {
    struct Node *head = NULL;
    push(&head, 15);
    push(&head, 18);
    push(&head, 7);
    push(&head, 1);
    printf("Contents of Circular Linked List\n");
    printList(head);
    return 0;
}
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class CircularLinkedList:
    def __init__(self):
        self.head = None

    def push(self, data):
        ptr1 = Node(data)
        temp = self.head
        ptr1.next = self.head
        if self.head is not None:
            while temp.next != self.head:
                temp = temp.next
            temp.next = ptr1
        else:
            ptr1.next = ptr1
        self.head = ptr1

    def printList(self):
        temp = self.head
        if self.head is not None:
            while True:
                print(temp.data, end=" ")
                temp = temp.next
                if temp == self.head:
                    break

cllist = CircularLinkedList()
cllist.push(15)
cllist.push(18)
cllist.push(7)
cllist.push(1)
print("Contents of circular Linked List")
cllist.printList()
Output
Contents of Circular Linked List:
1 7 18 15
Time Complexity: O(n), since we are traversing the linked list once.
So, in this article, we have tried to explain the most efficient approach to traverse a circular linked list. If you want to solve more questions on Linked List, which are curated by our expert mentors at PrepBytes, you can follow this link Linked List.
https://www.prepbytes.com/blog/linked-list/circular-linked-list-traversal/
AWS Media Blog
Deep dive into CORS configs on Amazon S3
As part of the technical marketing team at AWS Elemental, my role includes building tools that help customers streamline their video workflows. I’m currently developing a web application for processing videos through AWS machine learning services using a serverless framework called the Media Insights Engine.
Recently, I’ve been having difficulties with Cross-Origin Resource Sharing (CORS) errors in my web component for uploading files to Amazon S3. This was one of the hardest software defects I’ve had to solve in a long time so I thought it would be a good idea to share what I learned along the way.
Drag-and-Drop Uploads with DropzoneJS
I chose Vue.js to implement the front-end and DropzoneJS to provide drag-and-drop file upload functionality, as shown below. My Vue.js component for Dropzone was derived from vue-dropzone.
Uploading to Amazon S3 with Presigned URLs
Here’s what’s supposed to happen in my application when a user uploads a file:
1. The web browser sends two requests to an API Gateway endpoint that acts as the point of entry to a Lambda function. This function returns a presigned URL which can be used in a subsequent POST to upload a file to Amazon S3.
2. The first of the two requests is an HTTP OPTIONS method to my /upload endpoint. This is called a CORS preflight request and is used by the browser to verify that the server (an API Gateway endpoint in my case) understands the CORS protocol. The server should respond with an empty 200 OK status code.
3. The browser then submits another preflight CORS request to verify that the S3 endpoint understands the CORS protocol. Again, the S3 endpoint should respond with an empty 200 OK.
4. The second request is an HTTP POST to /upload . The prescribed AWS Lambda function then responds with the presigned URL.
5. Finally, the browser uses the presigned URL response from step #4 to POST to the S3 endpoint with the file data.
Configuring CORS on an S3 Bucket
Before you can use presigned URLs to upload to S3, you need to define a CORS policy on the S3 bucket so that web clients loaded in one domain (e.g. localhost or cloudfront) can interact with resources in the S3 domain. Setting a CORS policy on an S3 bucket is not complicated; however, if you do get it wrong, you can often solve it with the suggestions mentioned in this CORS troubleshooting guide. This is the CORS policy I used on my S3 bucket:
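The policy itself was shown as an image in the original post and did not survive extraction. A typical bucket CORS configuration for this kind of browser upload looks something like the following sketch (in practice, restrict AllowedOrigins to your web app's domain):

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]
```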
What can go wrong?
There are a lot of different ways I found to break things (this happens to be my specialty). Sometimes, I would neglect to configure a CORS policy on my S3 bucket. This would cause S3 to block my CORS preflight request with an HTTP 403 error:
Occasionally, I would get the same error when I put an incorrect CIDR block on the Amazon API Gateway endpoint for the Lambda function I used to get presigned URLs.
Even with a correct CORS policy on my S3 bucket and access policies in API Gateway, I continued to encounter HTTP 307 Temporary Redirect errors on the CORS preflight request sent to the S3 endpoint in any region other than Virginia (us-east-1). As noted in step 2 above, a CORS preflight request is an HTTP OPTIONS request that checks to see if the server understands the CORS protocol (reference). Here's what it looks like when a server redirects a CORS preflight request to a different endpoint:
Now, look closely at the preflight redirect. Where is it directing the browser? How is the redirected URL different from the original request?
The redirected URL is region-specific. This was an important clue.
Browsers won’t redirect preflight requests for several reasons. After doing some research in AWS documentation about S3 usage here, here, here, and here, I realized that my DropzoneJS component needed to use a region-specific S3 endpoint for CORS preflight requests. The default S3 endpoint is only valid for buckets created in Virginia!
Creating presigned URLs the right way!
The solution to my problems started coming together when I realized my DropzoneJS implementation used a statically defined URL that worked in Virginia (us-east-1) but not for any other region. I also noticed that the get_presigned_url() boto3 function in my Lambda function returned different results depending on the region it was deployed to. I was able to isolate this region dependency once I learned that you can create a region-dependent S3 client by using botocore.client.Config from Python, like this:
s3_client = boto3.client('s3', region_name='us-west-2')
This was a surprise to me because, according to the boto3 docs, there is no option to specify a region for your S3 client. Having learned about the botocore approach, I will now always initialize S3 clients with a region name, the latest signature_version, and virtual host style addressing, like this:
s3_client = boto3.client('s3', region_name='us-west-2', config = Config(signature_version = 's3v4', s3={'addressing_style': 'virtual'}))
My uploads started working reliably in every region after changing the S3 client to use a region-specific configuration and changing DropzoneJS to use the URL provided in the response from get_presigned_url().
Sample code
You can use the following code to see what region-specific presigned URLs look like from a Python environment on your laptop:
import requests
import boto3
from botocore.config import Config

s3_client = boto3.client('s3',
                         region_name='us-west-2',
                         config=Config(signature_version='s3v4',
                                       s3={'addressing_style': 'virtual'}))

response = s3_client.generate_presigned_post('mie01-dataplanebucket-1vbh3c018ikls', 'cat.jpg')

with open('/Users/myuser/Desktop/cat.jpg', 'rb') as f:
    files = {'file': ('cat.jpg', f)}
    requests.post(response['url'], data=response['fields'], files=files)
Here’s what my
/upload
Lambda function looks like now:
@app.route('/upload', cors=True, methods=['POST'], content_types=['application/json'])
def upload():
    region = os.environ['AWS_REGION']
    s3 = boto3.client('s3', region_name=region,
                      config=Config(signature_version='s3v4',
                                    s3={'addressing_style': 'virtual'}))
    # limit uploads to 5GB
    max_upload_size = 5368709120
    try:
        response = s3.generate_presigned_post(
            Bucket=(app.current_request.json_body['S3Bucket']),
            Key=(app.current_request.json_body['S3Key']),
            Conditions=[["content-length-range", 0, max_upload_size]],
            ExpiresIn=3600
        )
    except ClientError as e:
        logging.info(e)
        raise ChaliceViewError(
            "Unable to generate pre-signed S3 URL for uploading media: {error}".format(error=e))
    except Exception as e:
        logging.info(e)
        raise ChaliceViewError(
            "Unable to generate pre-signed S3 URL for uploading media: {error}".format(error=e))
    else:
        print("presigned url generated: ", response)
        return response
Takeaways
Here are the key points to remember about uploading to S3 using presigned URLs:
- Always use region-specific S3 endpoints when trying to upload to S3. (reference)
- Always use botocore Config options to initialize Python S3 clients with a region, signature version s3v4, and virtual-host-style addressing. (reference)
- Don’t assume that you have a CORS issue when browsers report CORS errors because they may not be aware of lower-level issues, such as DNS resolution of S3 endpoints or API access controls.
https://aws.amazon.com/blogs/media/deep-dive-into-cors-configs-on-aws-s3-how-to/
%Status usage in ObjectScript
Hi developers!
Want to discuss with you the case of %Status.
If you are familiar with ObjectScript you know what it is. I'd love to hear the history of how it appeared, but it turned out that almost every system/library classmethod returns %Status, and there is a whole set of tools to deal with it.
What it does is give you the responsibility to check the %Status value of every system method you call.
E.g. if you save the data of a persistent class, you should never call it like this:
do obj.%Save()
you need to call:
set sc=obj.%Save() if $$$ISERR(sc) do // something or quit.
Or if you use try/catch approach in your code you use the following macro:
$$$THROWONERROR(sc,obj.%Save())
Which turns your code into something like:
set sc=$$$OK
Try {
    $$$THROWONERROR(sc,##class(x.y).a())
    $$$THROWONERROR(sc,##class(x.y).b())
    $$$THROWONERROR(sc,##class(x.y).c())
    $$$THROWONERROR(sc,##class(x.y).z())
    do obj.NormalMethod(parameter)
} catch e {
    // error handling
}
Which makes the code look like a sequence of $$$THROWONERROR calls. Which I don't find very readable.
So, if you introduce a new %Status method, you make ALL the users of this method use either $$$THROWONERROR or $$$ISERR
And my questions are:
What is the value of %Status vs try/catch?
do you use %Status in newly introduced methods of YOUR solutions?
Why?
Hi, Evgeny
I would add a question to your poll: do you allow returning a value other than %Status from your functions? If so, in what cases? Without that, I think, the survey is not so clear-cut.
I think if the method should return something else, it should return not %Status but the value.
I use %Status exclusively; I really, really don't like try/catch. The most important reason for me is that I want to add information about what went wrong to the status. In e.g. obj.%Save(), the returned status tells me that saving an object went wrong, and hopefully why. I want to add to this which object could not be saved, and possibly some other state that may be relevant for debugging the problem. I find this creates code that is easy to read and debug.
By the way, "If 'sc" is, to me, a lot easier on the eyes than "If $$$ISERR(sc)"...
Thanks for the answer, Gertjan!
May I ask you, why don't you like try/catch? Is it slow?
Imho it has all the error and stack information, what's wrong with try/catch?
You only need Try-Catch when you expect that something might go wrong and can't handle it upfront.
For regular error-handling, something that you anticipate never happens, the $ZTrap error handler is much more elegant. It doesn't introduce extra stack levels and indentation.
%Status is an elegant way of letting the caller know something went wrong and leaving it to its discretion to do with it whatever is appropriate; the callee should have done everything to handle the 'error' properly: logging, recovery, whatever. It's a matter of separation of concerns.
Thanks for the insights, @Herman Slagman!
$ZT + Label is better than try/catch
%Status is better than try/catch too.
Why is %Status better than the throw("something went wrong") approach?
But how do you track errors of your solutions that happen on the customer side? There is an elegant way to manage this with try/catch:
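The example itself did not survive extraction. A minimal sketch of the idea, catching the exception and logging it with the Log() method that ObjectScript exception objects provide, could look like this:

```objectscript
Try {
    // ... business logic that may throw ...
}
Catch e {
    do e.Log()   // record the exception and its stack in the Application Error Log
}
```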
How could you manage this with the %Status and $ZT approach?
I don't know if try/catch is slow, and I don't care. I don't use it because it is too wordy to handle errors exactly where they occur. It encourages code like in your example, where the code in methods is wrapped entirely in a try/catch block. You say the error object has all the information, but I disagree. It has some low-level information, but often lacks the context I need to determine what the problem is.
My preferred way of handling %Status errors is to add a %Status in front of it with more details of what happened when the error occurred, and return this to the caller. Somewhere up the call chain something will then handle the problem, e.g. add something to the Ensemble event log. This is such a standard way of working for me that I created a macro specifically for prefixing the new status information.
For errors that "raise" I also prefer $ZTrap+$ZError; I don't see the added value of try/catch here either.
AFAIK try/catch indeed slows down execution a bit (@Dan.Pasco is it true?)
And try/catch shouldn't be present in every method: it could be somewhere on top and in the places where you need to catch errors.
I like your answer but for better understanding it deserves a sample code to see how do you manage error scenarios.
We really lack good templates for beginners on how to handle errors properly in a serious project.
I would not say that try-catch is slow. If you reach the end of a try block, it simply branches over the catch block. This is a very cheap operation. In the nominal case (i.e., no error or exception), the cost of a try block is probably less than setting a %Status variable, and it would take several try blocks to match the cost of setting $zt once.
The catch block does almost all of the work. It instantiates the exception object, unwinds the stack, etc. I don't know offhand how catch compares to $zt, but the performance of the exceptional cases is usually not as important.
As I recall, some of the conventions of the object library date back to Caché 2.0 in the late nineties, whereas try-catch wasn't introduced until Caché 4.0.
One arguable advantage of %Status-based interfaces is that they work well with the noisy %UnitTest framework. If you $$$AssertStatusOK on a series of method calls, the test log reads like a transcript of your activity. You can often understand the failure without even needing to read the test program.
Also, try-catch is kind of annoying to use without finally, which was never implemented.
Thanks, Jon!
Why try-catch doesn't work well without finally? I think finally is only for the cases when you need to free resources taken, isn't it?
RAII isn't a well established pattern in ObjectScript, so you frequently end up duplicating code in the main block and the catch block that could otherwise live in a finally block. Besides "resources", clean-up code may restore state of any kind--globals, files, the current namespace, etc.
Evgeny,
Is it a question: try/catch (or another way of error handling) or %Status? If we try to follow some solid design principles, each method should perform only one function, have only one way to exit from, and should inform the caller whether it succeded. So, we should use both. Extending your sample:
One could write it in another fashion, e.g.
Which version is better? If we don't need individual error handling of ##class(x.y).a(), ##class(x.y).b(), etc calls, I'd prefer solid1. Otherwise, solid2 seems to be more flexible: it's easy to add some extra processing of each call return. To achieve the same flexibility in solid1, we are to cover each call in an individual try-catch pair, making the coding style too heavy.
Thanks for the reply, Alexey.
My topic is an attempt to find the value in %Status: why do people use it even for their own methods?
IMHO it steals the option of returning values from methods, and the code looks dirty with a "mandatory" $$$TOE or $$$ISERR for every call of a method that returns %Status.
Whenever I return "useful" value from a method I regret it afterward. E.g.
What if something went wrong, e.g. actual parameters (a, b, c) are invalid? I need some sign of it, so I'm forced to add some kind of status code. Let it be of %Status type. So, the caller's code is getting more complex:
In contrast, when the status code is returned, it is not getting significantly complex:
Besides, if each method returns status (even when it is always $$$OK), the whole code is looking more consistent and does not need modification if some of its methods get new behavior and able to return some error status as well.
I always use %Status and try to wrap calls in this macro:
For example:
Acting this way, you are just quitting your method on error. What if your method needs finalization?
Both approaches discussed above (try/catch and $ztrap/$etrap) allow it easily.
Usually I do this:
Doing finalization like this, you are rejecting your previous point about using $$$qoe(...).
https://community.intersystems.com/post/status-usage-objectscript