I am working with COVID-19 case data and created a dashboard. I have a Jupyter Notebook inside ArcGIS Pro to process the health dept.'s CSV file every day. That said, I am a complete novice with Python and fumbled my way through but got something working. I now have a request to show the daily change in cases from the previous day. The source data table just lists the cumulative cases each day over time and lumps the dates together:

ID  Date    FIPS  Cases
1   5/7/20  001   25
2   5/7/20  002   13
3   5/6/20  001   23
4   5/6/20  002   9
5   5/5/20  001   21
6   5/5/20  002   8
7   5/4/20  001   21
8   5/4/20  002   6

I would like to add a field that contains the change in value from the previous day:

ID  Date    FIPS  Cases  Difference
1   5/7/20  001   25     2
2   5/6/20  001   23     2
3   5/5/20  001   21     0
4   5/4/20  001   21     0  (because this is the starting value)
5   5/7/20  002   13     4
6   5/6/20  002   9      1
7   5/5/20  002   8      2
8   5/4/20  002   6      0  (start)

The goal is a time series chart showing the sum of the daily changes for all FIPS by date (but I might need to show them by FIPS as well). I know others are doing it, but maybe their source data is supplied that way. This seems like it should be fairly simple, but I don't know where to start. Right now I am downloading the CSV, truncating the table in my GDB, then appending the CSV data to the table to refresh it every day. I think I need a bit of code to run daily to recalculate the difference after I grab the new day's values. Appreciate any direction. Thanks!

Sara, if you have the Data Interoperability extension, then change detection between next/previous values in a series is available in the AttributeManager. I don't want to send you down this path if it's all new to you, though, as you're working on response data. If you need to pursue this, let me know.

Thanks for the quick reply, but I'm afraid I do not have that extension. Sounds ideal though.

You could do it with dictionaries. Use the FIPS as the key. Each value could be a list of tuples with 2 values (date, case_num). After you build the dictionaries, sort each list by the date part of the tuple. Then take the top two list elements and subtract the case numbers. Output data done! Maybe something like this... (not a Jupyter notebook)

code:

```python
import csv

with open('case.csv', 'r') as f:
    reader = csv.reader(f)
    data = list(reader)

data.pop(0)                   # drop the header row
data = [x[1:] for x in data]  # drop the ID column -> [Date, FIPS, Cases]

# Build {FIPS: [(date, cases), ...]}
data_dict = {x[1]: [] for x in data}
for row in data:
    data_dict[row[1]].append((row[0], row[2]))

for key, value in data_dict.items():
    value.sort(reverse=True)  # newest first
    for i in range(len(value) - 1):
        diff = int(value[i][1]) - int(value[i + 1][1])
        print('{},{},{},{}'.format(key, value[i][0], value[i][1], diff))
    # the oldest row has no previous day, so its difference is 0
    print('{},{},{},{}'.format(key, value[-1][0], value[-1][1], 0))
```

csv:

```
ID,Date,FIPS,Cases
7,5/4/20,001,21
8,5/4/20,002,6
1,5/7/20,001,25
2,5/7/20,002,13
3,5/6/20,001,23
4,5/6/20,002,9
5,5/5/20,001,21
6,5/5/20,002,8
```

output:

```
002,5/7/20,13,4
002,5/6/20,9,1
002,5/5/20,8,2
002,5/4/20,6,0
001,5/7/20,25,2
001,5/6/20,23,2
001,5/5/20,21,0
001,5/4/20,21,0
```

Thanks for providing this sample - I'll see if I can make it work. I had some real hope today that Esri's new Coronavirus Recovery Dashboard was going to take care of this for me, but alas, they require that the data already has the daily increase to calculate the trends.

You're welcome... I updated the code sample so the output will do the calc for all dates, not just the top two.
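Since the notebook already runs daily, the same calculation can be done with only the standard library and real date parsing (sorting the date *strings* happens to work for this sample, but would mis-order dates across months). A minimal sketch, assuming the column names match the sample CSV above — your GDB field names may differ:

```python
import csv
import io
from collections import defaultdict
from datetime import datetime

# Inline copy of the sample data; in practice read the downloaded CSV file.
CSV_DATA = """ID,Date,FIPS,Cases
7,5/4/20,001,21
8,5/4/20,002,6
1,5/7/20,001,25
2,5/7/20,002,13
3,5/6/20,001,23
4,5/6/20,002,9
5,5/5/20,001,21
6,5/5/20,002,8
"""

def daily_changes(csv_text):
    """Return {FIPS: [(date_str, cases, difference), ...]}, newest first."""
    by_fips = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Parse dates so sorting is chronological, not alphabetical.
        parsed = datetime.strptime(row["Date"], "%m/%d/%y")
        by_fips[row["FIPS"]].append((parsed, row["Date"], int(row["Cases"])))

    result = {}
    for fips, series in by_fips.items():
        series.sort(reverse=True)  # newest first
        rows = []
        for i, (_, date_str, cases) in enumerate(series):
            # The oldest row has no previous day, so its difference is 0.
            prev = series[i + 1][2] if i + 1 < len(series) else cases
            rows.append((date_str, cases, cases - prev))
        result[fips] = rows
    return result
```

From the returned dictionary you can then sum the differences per date across all FIPS codes to feed the time series chart.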
https://community.esri.com/t5/python-questions/calculate-daily-change-between-unique-values/td-p/88676
I’ve recently been using IA Writer as my markdown editor. I love the fact that I can use any of my iDevices and that it’s all synced in the iCloud. But how do I access the iCloud data so that I can include it in my Octopress git repository?

On Mountain Lion (and Lion, I believe) all the iCloud data is hidden away in your home directory. Each application is given its own area, just like iOS apps. Check out ~/Library/Mobile Documents; you’ll see folders for each iCloud application that you have launched. Your files are stored in here.

Once you have found your editor’s data folder, we can create a link from there into your git repository. Unfortunately, symbolic links are not good enough for either iCloud or git, so we’ll need to use hard links. Mountain Lion does not ship with a tool for this, but let’s not worry; it’s very easy to make our own.

Be careful: deleting from a hard link deletes from the source!

So, at the end of this post is the complete source code to create hard links. Copy the code into a new file named hlink.c and compile it:

$ gcc hlink.c -o hlink

Now we can create a hard link to link our blog posts into our iCloud documents:

$ hlink ~/<octopress_dir>/source/_posts ~/Library/Mobile\ Documents/<app_dir>/Documents/pages

I did a quick comparison between the iCloud integration and DropBox. Unfortunately, DropBox seemed to be more manual regarding the syncing. Also, if you lose your network connection (which happens to me often whilst on a train), the DropBox integration moves the document to local storage and you have to manually copy it back into DropBox - not a nice feature. iCloud seems to handle this with ease and I don’t have to install anything to get it to work.
```c
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "Use: hlink <src_dir> <target_dir>\n");
        return 1;
    }

    int ret = link(argv[1], argv[2]);
    if (ret != 0)
        perror("link");
    return ret;
}
```

© Tony Lawrence 2017 - Waffly Bollocks
http://tonylawrence.com/post/blog/publishing-from-the-icloud/
JS++ | Virtual Methods

As we mentioned in the previous section, if we want runtime polymorphism, using casts can lead to unclean code. By way of example, let's change our main.jspp code so that all our animals are inside an array. From there, we will loop over the array to render the animal. Open main.jspp and change the code to:

```
import Animals;

Animal[] animals = [
    new Cat("Kitty"),
    new Cat("Kat"),
    new Dog("Fido"),
    new Panda(),
    new Rhino()
];

foreach(Animal animal in animals) {
    if (animal instanceof Cat) {
        ((Cat) animal).render();
    }
    else if (animal instanceof Dog) {
        ((Dog) animal).render();
    }
    else {
        animal.render();
    }
}
```

Now our code is even less elegant than our original code that just instantiated the animals, specified the most specific type, and called render(). However, this code can be massively simplified until it becomes elegant. In fact, we can reduce the 'foreach' loop down to one statement. The answer: virtual methods.

Virtual methods enable "late binding." In other words, the specific method to call is resolved at runtime instead of compile time. We don't need all the 'instanceof' checks, all the casts, and all the 'if' statements as we saw in the code above. We can achieve something much more elegant.

First, open Animal.jspp and change the 'render' method to include the 'virtual' modifier:

```
external $;

module Animals
{
    class Animal
    {
        protected var $element;

        protected Animal(string iconClassName) {
            string elementHTML = makeElementHTML(iconClassName);
            $element = $(elementHTML);
        }

        public virtual void render() {
            $("#content").append($element);
        }

        private string makeElementHTML(string iconClassName) {
            string result = '<div class="animal">';
            result += '<i class="icofont ' + iconClassName + '"></i>';
            result += "</div>";
            return result;
        }
    }
}
```

Save Animal.jspp. That's the only change we need to make. However, just making our method virtual isn't enough. In Cat.jspp and Dog.jspp, we are using the 'overwrite' modifier on their 'render' methods.
The 'overwrite' modifier specifies compile-time resolution. We want runtime resolution. All we have to do is change Cat.jspp and Dog.jspp to use the 'override' modifier instead of the 'overwrite' modifier. For the sake of brevity, I will only show the change to Cat.jspp, but you need to make the change to Dog.jspp as well:

```
external $;

module Animals
{
    class Cat : Animal
    {
        string _name;

        Cat(string name) {
            super("icofont-animal-cat");
            _name = name;
        }

        override void render() {
            $element.attr("title", _name);
            super.render();
        }
    }
}
```

That's it. All we had to do was change modifiers. Now we can finally edit main.jspp so there is only one statement inside the loop:

```
import Animals;

Animal[] animals = [
    new Cat("Kitty"),
    new Cat("Kat"),
    new Dog("Fido"),
    new Panda(),
    new Rhino()
];

foreach(Animal animal in animals) {
    animal.render();
}
```

Compile your code and open index.html. Everything should work. Now we've been able to massively simplify our code and still get the expected behavior. Specifically, we reduced the body of our 'foreach' loop from:

```
foreach(Animal animal in animals) {
    if (animal instanceof Cat) {
        ((Cat) animal).render();
    }
    else if (animal instanceof Dog) {
        ((Dog) animal).render();
    }
    else {
        animal.render();
    }
}
```

to this:

```
foreach(Animal animal in animals) {
    animal.render();
}
```

The reason we've been able to simplify our code so dramatically is that marking a method as 'virtual' signifies potential runtime polymorphism. Together with the 'override' modifier, the compiler knows we want late binding on the 'render' method, so the "late" binding happens exactly when it's needed: the 'render' method is resolved at runtime if and only if it needs to be resolved (inside the 'foreach' loop).
https://www.geeksforgeeks.org/js-virtual-methods/
Python

To find the Chi-Square critical value in Python, you can use the scipy.stats.chi2.ppf() function, which uses the following syntax:

scipy.stats.chi2.ppf(q, df)

where:

- q: The cumulative probability to use (for a significance level α, pass 1 - α)
- df: The degrees of freedom

This function returns the critical value from the Chi-Square distribution based on the significance level and degrees of freedom provided.

For example, suppose we would like to find the Chi-Square critical value for a significance level of 0.05 and degrees of freedom = 11:

import scipy.stats

#find Chi-Square critical value
scipy.stats.chi2.ppf(1-.05, df=11)

19.67514

Now consider the Chi-Square critical value with the exact same degrees of freedom, but with a significance level of 0.01:

scipy.stats.chi2.ppf(1-.01, df=11)

24.72497

And with a significance level of 0.005:

scipy.stats.chi2.ppf(1-.005, df=11)

26.75685

Notice that the critical value grows as the significance level shrinks. Refer to the SciPy documentation for the exact details of the chi2.ppf() function.
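As a quick sanity check on the examples above: ppf is the inverse of the CDF, so feeding a critical value back through chi2.cdf should recover the cumulative probability 1 - α. A small sketch (requires SciPy):

```python
from scipy.stats import chi2

# Critical value for significance level 0.05 with 11 degrees of freedom.
critical = chi2.ppf(1 - 0.05, df=11)
print(round(critical, 3))  # 19.675

# ppf and cdf are inverses: the CDF at the critical value returns 1 - alpha.
round_trip = chi2.cdf(critical, df=11)
print(round(round_trip, 6))  # 0.95
```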
https://www.statology.org/chi-square-critical-value-python/
At 10:38 AM 5/23/01 +0200, Siberski, Wolf wrote:

I partially agree ... but that's mainly because import statements in Java make classes usable by their short names. So where it is possible to have Java classes named

org.apache.foo.Main
org.apache.bar.Main

you can choose to import only one and use the short form (Main) in .java files. The same thing occurs in XML files ... the URI associated with each namespace is the fully qualified name, while the prefix (i.e. ant:) is equivalent to the short form of Java names.

Perhaps a compromise would be that the prefix defaults to the library-specified name unless overridden by the user (this matches behaviour in certain scripting languages as well). So in most cases javac will be

<jdk:javac .../>

while you could import it into the default namespace and use

<javac .../>

or into an alternate namespace if some future jdk tasks act differently.

Thoughts?

Cheers,

Pete

*-----------------------------------------------------*
| "Faced with the choice between changing one's mind, |
| and proving that there is no need to do so - almost |
| everyone gets busy on the proof."                   |
| - John Kenneth Galbraith                            |
*-----------------------------------------------------*
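A rough sketch of what that compromise might look like in a build file — note the namespace URI here is purely hypothetical, invented to illustrate the prefixing, and is not a real Ant namespace:

```xml
<!-- Library-defined prefix used explicitly for each task... -->
<project xmlns:jdk="http://example.org/ant/jdk-tasks">
  <target name="compile">
    <jdk:javac srcdir="src" destdir="build"/>
  </target>
</project>

<!-- ...or the same tasks imported into the default namespace,
     so the short form works, as with a Java import. -->
<project xmlns="http://example.org/ant/jdk-tasks">
  <target name="compile">
    <javac srcdir="src" destdir="build"/>
  </target>
</project>
```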
http://mail-archives.apache.org/mod_mbox/ant-dev/200105.mbox/%3C3.0.6.32.20010523190103.00a92100@mail.alphalink.com.au%3E
Config::IniFiles - A module for reading .ini-style configuration files.

```perl
use Config::IniFiles;
my $cfg = new Config::IniFiles( -file => "/path/configfile.ini" );
print "We have parm " . $cfg->val( 'Section', 'Parameter' ) . "."
    if $cfg->val( 'Section', 'Parameter' );
```

Config::IniFiles provides a way to have readable configuration files outside your Perl script. Configurations can be imported (inherited, stacked, ...), sections can be grouped, and settings can be accessed from a tied hash.

INI files consist of a number of sections, each preceded with the section name in square brackets. The first non-blank character of the line indicating a section must be a left bracket and the last non-blank character a right bracket. Parameter names are localized to the namespace of the section, but must be unique within a section.

[section]
Parameter=Value

Both the hash mark (#) and the semicolon (;) are comment characters by default (this can be changed by configuration). Lines that begin with either of these characters will be ignored. Any amount of whitespace may precede the comment character.

Multiline or multi-valued parameters are supported. As a configuration option (default is off), continuation lines can be allowed:

[Section]
Parameter=this parameter \
spreads across \
a few lines

Get a new Config::IniFiles object with the new method:

$cfg = Config::IniFiles->new( -file => "/path/configfile.ini" );
$cfg = new Config::IniFiles( -file => "/path/configfile.ini" );

Returns a new configuration object (or "undef" if the configuration file has an error). One Config::IniFiles object is required per configuration file. Several named parameters are available; one specifies a section to be used for default values. For example, if you look up the "permissions" parameter in the "users" section, but there is none, Config::IniFiles will look to your default section for a "permissions" value before returning undef.

The default comment character is #. You may change this by specifying an arbitrary character, except alphanumeric characters, square brackets, and the "equal" sign.

Returns the value of the specified parameter ($parameter) in section $section; returns undef if there is no such section or no such parameter in the given section.

Assigns a new value, $value (or set of values), to the parameter $parameter in section $section in the configuration file.

Deletes the specified parameter from the configuration file.

Forces the configuration file to be re-read. Returns undef if the file cannot be opened, no filename was defined (with the -file option) when the object was constructed, or an error occurred while reading. If an error occurs while parsing the INI file, the @Config::IniFiles::errors array will contain messages that might help you figure out where the problem is in the file.

Returns an array containing the section names in the configuration file. If the nocase option was turned on when the config object was created, the section names will be returned in lowercase.

Returns 1 if the specified section exists in the INI file, 0 otherwise (undefined if section_name is not defined).

Completely removes the entire section from the configuration.

Returns an array containing the parameters contained in the specified section.

Makes sure that the specified section is a member of the appropriate group. Only intended for use in newval.

Makes sure that the specified section is no longer a member of the appropriate group. Only intended for use in DeleteSection.

Returns an array containing the members of the specified $group. Each element of the array is a section name. For example, given the sections

[Group Element 1]
...
[Group Element 2]
...

GroupMembers would return ("Group Element 1", "Group Element 2").

Writes out a new copy of the configuration file. A temporary file (ending in .new) is written out and then renamed to the specified filename. Also see BUGS below.

Same as WriteConfig, but specifies that the original configuration file should be rewritten.

Returns a list of lines, being the comment attached to section $section. In scalar context, returns a string containing the lines of the comment separated by newlines. The lines are presented as-is, with whatever comment character was originally used on that line.

Removes the comment for the specified section.

Sets the comment attached to a particular parameter. Any line of @comment that does not have a comment character will be prepended with one. See "SetSectionComment($section, @comment)" above.

Gets the comment attached to a parameter.

Deletes the comment attached to a parameter.

Accessor method for the EOT text (in fact, style) of the specified parameter. If any text is used as an EOT mark, this will be returned. If the parameter was not recorded using HERE style multiple lines, GetParameterEOT returns undef.

Removes the EOT marker for the given section and parameter. When writing a configuration file, if no EOT marker is defined then "EOT" is used.

Deletes the entire configuration file in memory.

Sets the value of $parameter in $section to $value.

When tied to a hash, you can use the Perl delete function to completely remove a parameter from a section. The tied interface also allows you to delete an entire section from the ini file using the Perl delete function. If you really want to delete all the items in the ini file, this will do it. Of course, the changes won't be written to the actual file unless you call RewriteConfig on the object tied to the hash.

Accessing an unknown parameter in the specified section will return a value from the default section, if one is set.

When tied to a hash, you use the Perl keys and each functions to iteratively list the sections in the ini file. You can also use the Perl exists function to see if a section is defined in the file.
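The tied-hash interface described above can be sketched as follows; the file path and the section/parameter names are hypothetical, chosen to echo the "users"/"permissions" example earlier:

```perl
use strict;
use warnings;
use Config::IniFiles;

# Tie a hash to an existing INI file (path is hypothetical).
my %ini;
tie %ini, 'Config::IniFiles', ( -file => '/path/configfile.ini' );

# Read and write parameters with ordinary hash syntax.
print $ini{users}{permissions}, "\n";
$ini{users}{permissions} = 'read-only';

# delete removes a single parameter, or a whole section.
delete $ini{users}{permissions};
delete $ini{users};

# Changes live in memory until the underlying object rewrites the file.
tied(%ini)->RewriteConfig;
```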
Contains a list of errors encountered while parsing the configuration file. If the new method returns undef, check the value of this to find out what's wrong. This value is reset each time a config file is read.

The original code was written by Scott Hutton. It was then handled for a time by Rich Bowen (thanks!), and is now managed by Jeremy Wadsack. Development discussion occurs on the mailing list config-inifiles-dev@lists.sourceforge.net, which you can subscribe to by going to the project web site (link above).

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~wadg/Config-IniFiles-2.29/IniFiles.pm
Start MATLAB engine session for single, nonshared use

```c
#include "engine.h"
Engine *engOpenSingleUse(const char *startcmd, void *dcom, int *retstatus);
```

- startcmd: String to start the MATLAB® process. On Microsoft® Windows® systems, the startcmd string must be NULL.
- dcom: Reserved for future use; must be NULL.
- retstatus: Return status; possible cause of failure.

Returns a pointer to an engine handle, or NULL if the open fails. Not supported on UNIX® systems.

This routine allows you to start multiple MATLAB processes using MATLAB as a computational engine. engOpenSingleUse starts a MATLAB process, establishes a connection, and returns a unique engine identifier, or NULL if the open fails. Each call to engOpenSingleUse starts a new MATLAB process.

engOpenSingleUse opens a COM channel to MATLAB. This starts the MATLAB software you registered during installation. If you did not register during installation, enter the following command at the MATLAB prompt:

!matlab -regserver

engOpenSingleUse allows single-use instances of an engine server. engOpenSingleUse differs from engOpen, which allows multiple applications to use the same engine server. See MATLAB COM Integration for additional details.
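A minimal usage sketch of the call described above — this only compiles and runs against a local MATLAB installation with the engine headers and libraries, and it uses the companion engine calls engEvalString and engClose:

```c
#include <stdio.h>
#include "engine.h"

int main(void)
{
    int retstatus = 0;

    /* startcmd must be NULL on Windows; dcom is reserved and must be NULL. */
    Engine *ep = engOpenSingleUse(NULL, NULL, &retstatus);
    if (ep == NULL) {
        fprintf(stderr, "engOpenSingleUse failed (retstatus = %d)\n", retstatus);
        return 1;
    }

    /* Each engOpenSingleUse call starts its own MATLAB process,
       so this instance is not shared with any other application. */
    engEvalString(ep, "disp(magic(4))");
    engClose(ep);
    return 0;
}
```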
http://www.mathworks.com/help/matlab/apiref/engopensingleuse.html?requestedDomain=www.mathworks.com&nocookie=true
Agenda

See also: IRC log

<scribe> Agenda:
<scribe> ScribeNick: Dave
<Kangchan> I'm in IRC.

Jose asks about the FT presentation and whether it is the same as the position paper in the workshop?
Keith: a bit of both
Dave to add link to Keith's slides from the minutes
Jose: we made a resolution that DIAL was a superset of XHTML2
Dave: yes, to ensure it's a proper superset rather than an extended subset

Any further questions on the minutes? None
Any objections to publishing the f2f minutes? No objections.
Resolution: we will publish UWA F2F minutes from Dublin

Stephane is connecting to the issue tracker and has done a few of them.
On the wiki action item, we are waiting for feedback from the Systems team. Stephane proposes we wait until July 10th.

<scribe> ACTION: [PENDING] Boyera to report back on possibility of using Semantic Media Wiki [recorded in]
<steph>
<Kangchan> If it is really impossible to install the Semantic Media Wiki on the W3C site, I can install it in the W3C Korea Office.
<steph> thks kangchan for the offer

Stephane: drop asking Keio about hosting the face to face as ETRI can host it after all.
<steph> we will see on july 10
<steph>
<steph>
<steph>

Stephane proposes we defer discussion of the DISelect/XAF transition until next week when Rhys and Rotan will be present.
Stephane asks Keith about the action from May on creating a DCCI namespace.
Stephane proposes
<steph>
Stephane will work on this today.
<Keith> Thanks!

Dave to follow up with Stephane on the action numbering after some problems gluing together different parts of the minutes.
<steph>
<steph>
Stephane will update the action tracker to note that Rhys completed action 19 this morning.
Keith: we are working through the unit tests and will update the test page.
Stephane: let's cover CC/PP 2 in next week's agenda, as we have had a number of comments.
end of meeting
http://www.w3.org/2007/06/14-uwawg-minutes.html
Darryl Gove's | Interposing on malloc

Ended up wanting to look at malloc calls: how much was requested, where the memory was located, and where in the program the request was made. This was on Solaris 9, so no dtrace, so the obvious thing to do was to write an interpose library and use that. The code is pretty simple:

```c
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>
#include <ucontext.h>

void *malloc(size_t size)
{
    static void *(*func)(size_t) = 0;
    void *ret;
    /* Look up the next malloc in link order (the real allocator). */
    if (!func) {
        func = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    }
    ret = func(size);
    printf("size = %zu address = %p\n", size, ret);
    printstack(0);  /* Solaris: write the current call stack to the given fd */
    return ret;
}
```

The code uses a call to printstack to print out the stack at the point of the call. The code is compiled and run with:

$ cc -O -G -Kpic -o libmallinter.so mallinter.c
$ LD_PRELOAD=./libmallinter.so ls
size = 17 address=25118
size = 17 address=25138

Posted at 01:07PM Feb 08, 2008 by Darryl Gove in Sun | Comments[1]

Sample chapter from Solaris Application Programming available. Posted at 10:07AM Jan 28

Register windows and context switches: Interesting paper on register windows and context switching. Posted at 11:58AM Aug 31, 2007 by Darryl Gove in Sun |

Outline of book for Solaris developers: It's probably useful to outline the contents of the book I'm working on. The book is meant as a resource for people coding for or on the Solaris platform, for either SPARC or x86/x64 processors. It falls into four main sections:

- Hardware. Solaris is supported on both x86/x64 and SPARC. Both processor families have different features and different assembly languages. But there's also a lot of commonality in processors (e.g. caches, TLBs etc.). The first section of the book outlines common features of processors, and also the differences between the two families. It also covers particular implementations of the families (e.g. UltraSPARC T1 etc.). All this material is useful context and definitions for the material that follows later.
- Software.
The software is Solaris and the tools that ship with it, the Sun Studio compilers, the performance profiling tools, and the debugging tools. In fact, there are tools for most questions that a developer could think of asking, the trick is to know that they exist and have some examples that demonstrate the use of the tools. - Source code. Inevitably much of what the developer deals with is source code, and this section demonstrates how to use the available tools to identify, tune, and improve source code. The section has coverage of the topic of using performance counters to determine what's causing performance bottlenecks, and also of deriving metrics using performance counters. The section also covers using compiler options and source code modifications to improve performance. - Multi-core. Almost all systems that are available today have more than one core. The challenge going forwards is to utilise these resources effectively and efficiently. This section focuses on the various approaches that can be used to leverage these resources, and the tools that can be used to diagnose and improve the code. Posted at 09:05AM Aug 02, 2007 by Darryl Gove in Sun | Snippet from book: cost of calling libraries. Posted at 09:58AM Jul 31, 2007 by Darryl Gove in Sun | Solaris observability tools A comprehensive list of the observability tools shipped with Solaris. Unfortunately the links on the page go to the source of the tool rather than the man page. Posted at 04:02PM Apr 11, 2007 by Darryl Gove in Sun |
http://blogs.sun.com/d/tags/solaris
TweetPony 1.2.8
===============

A Twitter library for Python

…it's called TweetPony because I developed it with ponies in mind.

License
-------

This program is licensed under the AGPLv3. See the `LICENSE` file for more information.

Installation
------------

You can easily install TweetPony using the Python Package Index. Just type:

    sudo pip install tweetpony

Usage basics
------------

You can see the internal names of all the API endpoints in the file `endpoints.py`. For example, to update your status, you would do:

```python
status = api.update_status(status = "Hello world!")
```

All the parameter names are the same as in the API documentation. Values will be automatically converted to their correct representation. For example, the boolean `True` will become the string `true`.

TweetPony has an internal model system which lets you perform actions related to the model quite easily! Suppose you have a `Status` model:

```python
status = api.get_status(id = 12345)
```

Now if you want to favorite this status, you would probably do this:

```python
api.favorite(id = status.id)
```

But TweetPony makes this easier! You can just do:

```python
status.favorite()
```

and the pony will favorite the tweet! Of course, this will only work if you obtained the `Status` instance through an API call, which should be the case 99% of the time. It won't work if you create the `Status` instance directly from a dictionary. But why would you do that?

You can also manually connect an `API` instance to a model instance by using the model's `connect_api` method. For example, if you have two `API` instances (e.g. for two different users) and want to fetch a tweet with the first user's account and retweet it with the second user's account, you do:

```python
status = api1.get_status(id = 12345)
status.connect_api(api2)
status.retweet()
```

Look into `models.py` to see which methods exist for which models.

Image uploading
---------------

For all API endpoints that take an image as a parameter, just pass the image file object to upload as the appropriate parameter and the pony will do the rest for you.

Error handling
--------------

On error, TweetPony will raise either an `APIError`, `NotImplementedError` or `ParameterError` exception. An `APIError` instance has the following attributes:

- `code`: The error code returned by the API *or* the HTTP status code in case of HTTP errors
- `description`: The error description returned by the API *or* the HTTP status text in case of HTTP errors

`NotImplementedError` and `ParameterError` instances have only one attribute, the error description.

Models
------

Almost every API call (except for the ones that return only a list or something equally simple) will return a parsed model instance representing the response data. There are `User`, `Status`, `Message`, `List`, `APIError` and many more models. You can access the response data as instance attributes like `status.text` or using a dictionary lookup like `status['text']`.

Authentication
--------------

You can either pass your access token and access token secret when initializing the API instance or go through the normal authentication flow. The authentication flow works like this:

```python
api = tweetpony.API(consumer_key = "abc", consumer_secret = "def")
auth_url = api.get_auth_url()
print "Open this link to obtain your authentication code: %s" % auth_url
code = raw_input("Please enter your authentication code: ")
api.authenticate(code)
```

After you've done this, the access token and access token secret can be obtained from the `API` instance as `api.access_token` and `api.access_token_secret`.

By default, TweetPony loads the authenticating user's profile as soon as all four authentication tokens are present. This is also a way of checking whether these tokens are correct. If you do not want the user to be loaded, pass `load_user = False` to the `API` constructor. This is useful if:

* you want to save API calls
* you can be sure that the access tokens are correct
* you don't need the user profile (if you do, you can still load it using the `verify` function of the `API` instance)

Usage example
-------------

This is a simple example script. More can be found in the `examples` directory.

```python
import tweetpony

api = tweetpony.API(consumer_key = "abc", consumer_secret = "def", access_token = "ghi", access_token_secret = "jkl")
user = api.user
print "Hello, @%s!" % user.screen_name
text = raw_input("What would you like to tweet? ")

try:
    api.update_status(status = text)
except tweetpony.APIError as err:
    print "Oops, something went wrong! Twitter returned error #%i and said: %s" % (err.code, err.description)
else:
    print "Yay! Your tweet has been sent!"
```
https://pypi.python.org/pypi/TweetPony/1.2.8
46 Reader Comments

Well, the whole globals thing (I don't think anyone still remembered that) just goes to show how not security-minded the language developers are. It's a problem at the core of the PHP community. Yes, they are learning to get better, but the track record is far from stunning.

If you want to learn to do websites in a compiled language that is quite awesome, I'd suggest C++, because for local apps you can use Qt and for web apps you can use Wt. They are even similar in mentality. It is often difficult to think about a website in an OOP way, but Wt pulls it off. Wt's feature set out of the box is impressive. The only downside is it takes a bit of coding to get a site started, but they are working on a project template generator that will create an empty site skeleton that you just fill out. The really nice thing about Wt is that it abstracts all that HTML/JS from you. If you want an input text box, rather than have your code include <input type=text...> in it, you just say WLineEdit(). The underlying library will then express it as needed. Then for handling interactions on the client, it generates JavaScript for you... and the best part is it is all native code on the server.

Yeah, but that's the point: those are 3rd-party ORM frameworks (which I absolutely agree is a much more sensible approach than using a low-level JDBC driver and co anyhow) which don't really have much to do with the language. PHP does have libraries for prepared statements (also for escaping HTML against XSS attacks), and a quick Google search brings up an ORM for it too.

There is something to be said for the mentality and the culture. And that is what I am criticizing. The culture of 'leave ?> off'.

Since I really don't program in PHP, I can't say anything about the culture, but good frameworks can make a world of a difference, since those can guide users in the right direction to let them avoid the pitfalls.
But that doesn't mean that the language is bad or anything, only that - maybe - a larger fraction of beginners are using the language. And if you look around, PHP is still almost everywhere on cheap hosts, etc., so that's no big surprise (on the other hand, try to find something for Django). Also, Java has by all means a gigantic ecosystem with several different frameworks, had one of the first major known ORMs, etc., etc., but I'd take any bet that you'll still find lots of software that is vulnerable to SQL injections. One can only hope that the ongoing publicity about SQL injections will get users to educate themselves at least a bit. Next up on the list then: XSS attacks. Both of these errors are so easily fixed in any language with the right frameworks (and I'm sure they exist!), it's just a question of getting people to actually use them.

Because PHP is a horrible language, created by people who are not aware of what they are doing. Making \ the namespace separator? Please. Its array of standard functions, and their lack of standardised syntax and whatnot, is a clear sign of this.

The whole <?php ... ?> syntax was created initially because the intent was that PHP should be included amongst HTML. But eventually PHP grew into a huge system, which usually meant people stopped including PHP in raw HTML and rather built strings to output it. But some editors would leave newlines at the end of files (because this is a standard mechanism in some languages), which resulted in problems when including, for instance, a configuration file, which suddenly added unneeded whitespace to the website. The solution was realising that they had made a mistake with their strict <?php ... ?> requirement, and deciding to allow people not to use ?> at the end. Using the argument that PHP itself suggests you do <?php ... ?> is not really a good argument for it, because most of the arguments that PHP makes for itself are dumb.
Leaving the ?> at the end of files introduces so many more problems than removing it does (which, as far as I know, introduces no problems).

1) Dirt Jumper gets an update after public popularization of its vulnerabilities (rendering this info useless).
2) Dirt Jumper's market price declines as a stronger, more secure competitor's price rises.

Neither of these outcomes seems to benefit the public good.
http://arstechnica.com/security/2012/08/ddos-take-down-manual/?comments=1&start=40
Opensource Project That Works On Both SQL Server And MS Access DB? (Sep 26, 2010)

I searched on Google and I can't find any, do you know some?

To start, I am a new web developer. I have been writing C# for about a year, but just recently started learning web development. I wrote an application that connects to a MySql database, and when I run it on my localhost it works great! I created an Ubuntu server on Amazon EC2 and it is hosting my MySql server. It is up and running just fine. I created a Windows 2008 server instance on Amazon EC2 to host my ASP.NET project. I published my project to my Windows server, and when I went to my public IP address my site was there! It was so easy... However, when I went to login, I received this error: Could not load file or assembly 'MySql.Data, Version=6.3.5.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. The system cannot find the file specified. I made sure the correct connector was installed on my server; I have the most recent MySql connector installed. I do not know how the publish actually works. I saw some forum posts that said maybe I was missing DLLs from my bin directory, but I cannot even find the bin directory for my project on the server. I also saw some posts about changing the web.config file, but that was gibberish to me at this level.

I have a project that was created in Visual Studio 2008 and deployed to a 64-bit Windows 2003 server. This application references a 32-bit Interop.ActiveDs.dll. The applications were originally compiled for 'Any CPU'; however, explicitly compiling as 'x86' doesn't solve the problem. The project targets the 3.5 framework. The server is running IIS 6.0 in 64-bit mode. When we deploy the version compiled in Visual Studio 2008, the app runs perfectly fine; all pages show up.
In retrospect, this is actually surprising. We migrated the application to Visual Studio 2010 (we did not change the targeted framework) and redeployed. Now we get a BadImageFormatException loading Interop.ActiveDs.dll, which actually makes more sense than the 2008 version running. To solve the problem, we set Enable32bitAppOnWin64 to true and ran aspnet_regiis.exe -i from the 32-bit folder of the 2.0 framework (as per various instructions on the web). In IIS, under web service extensions, there were two versions of ASP.NET 2.0, one for 32-bit and one for 64-bit. We prohibited the 64-bit version, restarted IIS, and launched the website. What we expected: the app to run as 32-bit, load the interop, and display. What we got: "Service Unavailable". All other web pages that were previously working displayed the same message, as did the Visual Studio 2008 version. The support page here describes the problem exactly, but tells us to do exactly what we did to resolve the problem (enable 32-bit mode). We've rolled back to 64-bit mode in IIS and deployed the Visual Studio 2008 version for now, but we really need to figure out how to make this app run and load the interop (there are also 32-bit Oracle DLLs that are referenced). Two questions: Why does the Visual Studio 2008 version work at all? How do we get the Visual Studio 2010 version to work?

Is there a free and opensource DAL and BOL generator for C#?

I have a project named DBService layer, which is in the path d:\webservice\DBService. Here I have a webservice which connects to the DB and returns an object of a class. Once I added a reference here I get a URL. Now I have another project named UILayer, which is in the path E:\School\UILayer. I added a service reference here with that URL, but I get a message telling me the service is unavailable. Why is that happening? If both my webservice layer and UI layer are in the same project, then I am able to use the webservice in the UI layer
and get the required output. So I wanted to know, is there any way we can access the webservice from one project in another project?

I need to implement Routing or Url-Rewriting in my application. So, is there any utility like the logging utility Elmah? I don't want to write much code; I want to configure it and start playing.

On PHP platforms like WordPress, Joomla, and osCommerce there are plenty of beautiful templates, some even free. I cannot find any ASP.NET CMS (the ones in the Microsoft Gallery are boring) that has such a variety of templates. Am I missing something in my search, or do you know some?

App works well on the local intranet, but having problems on external access?

Locally I have the cascading dropdown which loads countries working. But as soon as I place the code on the hosting server, Firebug shows me a 500 Internal Server Error - [URL]. The cascading dropdown just shows "[Method error 500]". REMEMBER: IT WORKS ON MY LOCAL SERVER!!! Local configuration: Windows 7, IIS7.5, ASP.NET4. Server configuration: Windows Server 2008, IIS7.5, ASP.NET4. So it almost MUST be something on my hosting server! :s I don't know what to configure though... [code]....

Does anyone know why HttpContext.Current.Server.MapPath within a server control works when I run the server control but gives an error whilst in Design mode?

I want to access a remote server (i.e., I want to access the online server, not the local server) in my application. I changed settings in SQL Surface Area Configuration --> Remote Connection (checked the Local and Remote Connections option). Then while running the application it shows the following error (I turned the firewall off).

Here's the code so far: [Code].... [Code].... The mov is a QuickTime file; my server has the mimetype video/quicktime ..........
But as I read, this code forces the save-as download box, which is exactly what I want :) Now, here's the catch: the file I am fetching is NOT on the physical path... it is on a completely different server:

    Protected Sub LinkButton1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles LinkButton1.Click
        Response.Clear()
        Response.ContentType = "x-msdownload"
        Response.AppendHeader("Content-Disposition", ("attachment; filename=mydownload.mov"))
        Response.TransmitFile("")
        Response.End()
    End Sub

Obviously this doesn't work, since TransmitFile requires that the file be on your physical path, so how do I do this? Someone said you must use the stream method. Do you have any sample code I could try? I've tried the HTTPStreamReader object but it's giving me issues, so I would love to find out if there is any way this might work. Now here's some more important information: these are HUGE video files... we are creating a downloads page... written in ASP.NET -- so you create an account using the .NET membership class, then you select the file you want, go through a form where you enter your billing info, and after you pay a certain fee (this is already implemented), you go to your "downloads" area in your account... there you have access to the files... The reason I'm doing this is because I want to hide the download link, which will be something like ... (something really crazy)... We don't want people seeing this on the status bar (therefore hiding the download link is ESSENTIAL)... The files are a good 500MB each approximately, so I would love to hear all of your suggestions as to making the stream reader work for me, and how long it would take for the stream reader to READ the file.

I've never had this problem before; I'm at a total loss. I have a SQL Server 2008 database with ASP.NET Forms Authentication, profiles and roles created, and it is functional on the development workstation. I can login using the created users without problem.
I back up the database on the development computer and restore it on the production server. I xcopy the DLLs and ASP.NET files to the server. I make the necessary changes in the web.config, changing the SQL connection strings to point to the production server database, and upload it. I've made sure to generate a machine key, and it is the same in both the development web.config and the production web.config. And yet, when I try to login on the production server, the same user that I'm able to login successfully with on the development computer fails on the production server. There is other content in the database, the schema generated by FluentNHibernate. This content is able to be queried successfully on both development and production servers.

I have some code that allows users to upload file attachments into a varbinary(max) column in SQL Server from their web browser. It has been working perfectly fine for almost two years, but all of a sudden it stopped working. And it stopped working on only the production database server -- it still works fine on the development server. I can only conclude that the code is fine and there is something up with the instance of SQL Server itself. But I have no idea how to isolate the problem. I insert a record into the ATTACHMENT table, only inserting non-binary data like the title and the content type, and then chunk-upload the uploaded file using the following code: [code].....

I created one textbox custom control and opened a new ASP.NET web project. Then I registered my custom control in my web project. After that I can't access my custom control textbox, and it is not showing in my web project (not a web application). It works on my local machine, but when I upload all the files to my remote server in the same way, it does not work. Let me know what mistake I have made, in what way I can access my textbox control in my web project, and how, through the web project, to get the custom control's values.
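Coming back to the earlier question about serving a 500MB file that is not on the local physical path: the usual answer is to copy the remote stream to the response in fixed-size chunks rather than using TransmitFile, so memory use stays flat no matter how large the file is. The principle is language-agnostic; here is a minimal Python sketch (the in-memory streams stand in for the remote file and the HTTP response, which is an assumption of the example):

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KB per read keeps memory flat even for huge files

def stream_file(source, destination, chunk_size=CHUNK_SIZE):
    """Copy a readable stream to a writable one in fixed-size chunks."""
    total = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        destination.write(chunk)
        total += len(chunk)
    return total

# The remote file would normally be opened over HTTP; an in-memory
# stream stands in for it here so the sketch is self-contained.
src = io.BytesIO(b"x" * 200_000)
dst = io.BytesIO()
copied = stream_file(src, dst)
```

The same loop shape works in any stack: read a bounded chunk, write it to the client, repeat until the source is exhausted.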
I've run through many websites and tried so many things that I've finally decided to try creating a question of my own. This is the error that I get when I try to access my WCF project from my Silverlight 4 app. Does anyone have any idea how to fix this error? PS: if anyone needs any more information, let me know. An error occurred while trying to make a request to [URL]. See the inner exception for more details.

I have created an ASP.NET project and added it to VSS, but when I tried to access it from another PC it did not allow me to run it and gave an error like: Failed to start monitoring changes to '\\192.168.1.139\Website\Library\Library Managment\App_Code' because the network BIOS command limit has been reached. For more information on this error, please refer to Microsoft knowledge base article 810886. Hosting on a UNC share is not supported for the Windows XP Platform.

I have 2 projects in my solution: 1. MVC Web application 2. Class library. The MVC Web application references the class library. The class library contains a class that extends the default ASP.NET Controller. I'm putting a variable in session in the application's Global.asax:

    protected void Session_Start(object sender, EventArgs args)
    {
        HttpContext.Current.Session["DomainName"] = Request.Url.Host;
    }

In the class library I'm trying to get the value from the HttpContext.Session, but HttpContext.Session keeps coming up null. [code]... HttpContext.Current.Session doesn't seem to be an option in controllers.

I am using VB.Net/ASP.Net 2008 and I have a solution with 3 projects: 1. DAL 2. BLL 3. MainWebProject. Because it's a Web Application Project (not a web project) I had to set it up so I could access the Profile using Imports System.Web.Profile in the MainWebProj, but I can only access the profile in that project. I need to access the Profile in the DAL also, and I figure since I can import the Profile in my MainWebProj there must be a way to also import that into the sister project, DAL.
My issue: I can import a class in Project A but not Project B. Is there import syntax so I can also import the namespace in Project B?

I have created a solution using Visual Studio 2008 and added a new project, namely "test". In this project I added a folder namely "code", and in this folder I added a class file "class1.vb". But I am facing a problem when I go to create an object of that class1.vb in the default.aspx.vb page; I want to access my class1.vb methods.

I have a search box on a web application that is to use a query I have built in Access to search through projects to find any that are related to the text entered in the search box. The query in Access is just made up of parameters that use wildcards to search through all the fields. In Access the query works fine and returns the correct data, but when I try to link this all up to a ListView in Visual Studio, the ListView just displays the message "No data was returned". Below is my code-behind; hopefully you can see why it is not working, but it looks like it should work to me. [Code]....

I have been working on a stored procedure to calculate Likert scales for course evaluations. I have the stored procedure done, but I ran into an interesting and frustrating situation. I used a case statement along with a select query to count the number of responses of a given value. Likert scales are usually 5-point scales, 5 being the highest and 1 being the lowest. The value that gave me trouble was null values. In my evaluation page the insert query puts a null value in the field instead of leaving the response blank. These are the two queries I used; both are syntactically correct, but one works and the other doesn't. #1 Null query that works [Code].... Can anyone explain the difference and why one works but the other doesn't? Can it be as simple as switching the WHEN and the column name, and if it is, would it be advisable to switch the other ones around?

I have two web servers plus my desktop.
I use Visual Studio 2008 to generate asp.net pages. I put a statement like: response.write("Testing if function: " & if(0>1,"true","false") & "<br />") in a code-behind file (test.aspx.vb). When I type it, Intellisense clearly recognizes the IF function and gives me guidance on the two overloaded versions. When I Build the page, it gives no errors, but when I View in Browser, I get an error: "BC30201: Expression expected." with the compiler details pointing to the IF. When I run the page on my test server, I get the same result, but when I run it on my production server, it works just fine.
http://asp.net.bigresource.com/Opensource-project-that-works-on-both-SQL-Server-and-MS-Access-DB--VvAHD3nGJ.html
Welcome to this review of the fourth module from the Pluralsight course Implementing an API in ASP.NET Web API by Shawn Wildermuth. Shawn is a 14-time Microsoft MVP (ASP.NET/IIS) and is involved with Microsoft as an ASP.NET Insider, ClientDev Insider and Windows Phone Insider. He is the author of eight books on software development, and has given talks at a variety of international conferences including TechEd, Oredev, SDC, VSLive, DevIntersection, MIX, DevTeach, DevConnections and DevReach.

The full course also contains modules on:
– Implementing an API in ASP.NET Web API
– API Basics
– Versioning
– REST Constraints
– Web API Version 2

Securing APIs

APIs and Security

The amount of effort we should give to securing our data is related to how sensitive the data is. For example, medical records and credit card numbers are highly sensitive, and you could be liable for large fines if there is a data breach. So we should start with the question "What needs to be secured?" This was previously discussed in Shawn's course "Web API Design" and is covered again here. The following items must be secured:
– Private or personalized data
– Sensitive data across the wire
– Credentials of any kind
– Servers against denial of service attacks

Shawn says not all of these things can be handled at the API level. We can do Threat Modeling to understand the threats to our business. Some of the threats discussed in this lesson are:
– Users
– Eavesdroppers (packet sniffers etc.)
– Hackers
– Personnel

Once we understand the role of the API in protecting our data, we can do some things in Web API to support that.

Requiring SSL

We want to have a way in our Web API to force the website to use HTTPS. In this lesson, Shawn assumes that we have already figured out how to get certificates onto our IIS server. The focus here is to make sure that Web API itself uses HTTPS even if the server isn't configured to force it on the entire website.
There could be cases where we only want to apply HTTPS to the Web API. We do this with Filters, and in this lesson Shawn shows us how to create a RequireHttpsAttribute class. You may have noticed that MVC has a RequireHttpsAttribute class; however, this will not work with Web API. We need to derive from one of the filters in System.Web.Http.Filters that themselves derive from IFilter. In this case we derive from AuthorizationFilterAttribute and override the OnAuthorization method.

This code checks if the RequestUri scheme is HTTPS, and creates an "Https is required" HTML message if not. If it isn't a GET, we respond with Not Found via actionContext.Response = req.CreateResponse(HttpStatusCode.NotFound), and we set the Content of the response to a new StringContent with our HTML message, UTF8 encoding and the "text/html" media type. This sets all of the appropriate response headers. If the request is a GET, the response is similar but it will be Found instead of NotFound. This code block also creates an HTTPS URI with UriBuilder and port 443, and assigns it to Response.Headers.Location.

With this filter written, we can apply it to our controller by using it as an attribute, either on an action method or on the whole controller:

    [RequireHttps]
    public class FoodsController : BaseApiController

We can also apply it to every controller by updating our WebApiConfig.cs:

    config.Filters.Add(new RequireHttpsAttribute());

This adds our filter to the standard set of filters applied to every Web API request.

Fiddler

Also in this lesson, Shawn makes an HTTP GET request to /api/nutrition/foods using Fiddler. We see two responses: a 302 Found response, and a 502 response. The 302 response contains a Location header pointing at the HTTPS version of the URI and the message "Https is required". The reason that we get a second response is that Fiddler immediately attempts to make a new request to the Found location. We get a 502 error because we don't have HTTPS configured on our test server.
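The URI rewrite that the filter performs with UriBuilder — swap the scheme to https and use port 443 — can be sketched outside of .NET as well. Here it is in Python's urllib.parse, purely to illustrate the transformation (the function name is mine, not from the course):

```python
from urllib.parse import urlsplit, urlunsplit

def to_https(url, port=443):
    """Rebuild a URL with the https scheme, mirroring the UriBuilder step."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    # Port 443 is the https default, so it can be omitted from the netloc.
    netloc = host if port == 443 else f"{host}:{port}"
    return urlunsplit(("https", netloc, parts.path, parts.query, parts.fragment))

print(to_https("http://localhost:8901/api/nutrition/foods"))
# https://localhost/api/nutrition/foods
```

This is the URI a browser (or Fiddler) then follows when it receives the 302 Found response.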
On Firefox

Finally Shawn makes the same request using Firefox, and we see the URI changes to the HTTPS address automatically. In this situation Shawn likes to use #if !DEBUG so that he can locally test the application without SSL, but all production builds will require SSL.

Cross Origin Security and JSONP

If we want to be able to support JavaScript requests that come from other domains, it won't work by default, but we have a couple of options.

1. Support JSONP as a format. This wraps the response in a piece of JavaScript, which gives permission to execute the JavaScript when it is called back. Because JavaScript files can be called from multiple domains, browsers often allow JSONP requests.

2. Enable Cross Origin Resource Sharing (i.e. CORS). This is supported in Web API version 2. At the time of recording of this module, Web API 2 was up and coming; however, a final module was later added covering Web API 2.

In Web API v1 there's no native support for JSONP. However, the WebApiContrib project has implemented JSONP as a formatter. We can download this using NuGet; the package is WebApiContrib.Formatting.Jsonp. Shawn demonstrates how to use this, creating an instance of JsonpMediaTypeFormatter with a jsonFormatter argument. The reason that Shawn adds JsonpMediaTypeFormatter as the first formatter is that otherwise the standard JSON formatter will look at Application/JavaScript and return plain JSON.

With the code written, Shawn runs up Fiddler Composer and shows the effect of using different Accept request headers. A JSONP request wraps the HTTP response in a small piece of JavaScript that calls a function that we have implemented. We must tell the API that there's a query string parameter specifying the name of the callback. In Fiddler Composer, Shawn adds ?callback=foo to the URI and executes the request. When jQuery AJAX or Angular or another framework/library is used, it sees the returned data type and knows how to parse it. As part of the parsing it executes the method call.
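The wrapping that a JSONP formatter performs is simple enough to sketch in a few lines. This Python function (a stand-in for illustration, not the WebApiContrib implementation) shows what the response body looks like when the request carries ?callback=foo:

```python
import json

def jsonp(payload, callback):
    """Wrap a JSON payload in a JavaScript function call, as a JSONP
    formatter does when the request names a callback function."""
    return f"{callback}({json.dumps(payload)});"

body = jsonp({"id": 1, "name": "Apple"}, "foo")
print(body)  # foo({"id": 1, "name": "Apple"});
```

The browser loads this as a script and, because the page has already defined `foo`, the data is delivered by calling it.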
In other words, JSONP pretends that we're downloading a piece of JavaScript to be executed, rather than pure data.

Supporting CORS

At the time of recording of this lesson, Web API 2 wasn't yet released, but a preview version was available. Shawn gives a quick overview of its features here. For more detail see the Web API Version 2 module.

Authentication vs. Authorization

Authentication is using credentials to determine identity. Authorization is verifying that an identity has rights to a specific resource. So what might we want to authenticate?
– Allowing developers to use the API with App Authentication (typically an AppKey and a Secret)
– Authenticating Users to grant them access to the API (typically Basic Auth, OAuth and/or Integrated Auth)

Piggybacking on ASP.NET Authentication

This is the simplest approach. Our demo application allows users to Register or Login, and this uses the ASP.NET Authentication. A simple case is allowing access to our API if the user is authenticated. Up to now, we can access our API without any authentication. We can add the built-in Authorize attribute to our Web API controller. With this added, when we request information from our API we see the message: "Authorization has been denied for this request." But if we login to the website, we can now update the browser URI to our API and we are allowed access to that as well.

In our JavaScript we can simply redirect to the Login page if we get a 401 Unauthorized response. But this isn't always appropriate. Users won't always be using a website as well as our API. Forms authentication is a Microsoft technology, and it might not work cleanly with non-Microsoft clients.

Shawn demonstrates a CountingKsIdentityService which uses Thread.CurrentPrincipal.Identity.Name. When he logs in as swildermuth instead of shawnwildermuth and then requests the API, we just see an empty array. This is because the database doesn't know that swildermuth is a user.
The name swildermuth is fetched from memory, not the database, but the DiariesController method tries to get the user name from the database.

Implementing Basic Authentication

Basic Authentication allows us to take the credentials of the user and pass them in through a header. These credentials are included in every call. We implement this by adding a new class in Filters called CountingKsAuthorizeAttribute. This class derives from AuthorizationFilterAttribute, and we override the OnAuthorization method. This method checks whether the Authorization header exists in the request, that the Scheme is Basic, and that there is a raw credential. If so, it gets the credentials as a byte array from the raw credential using iso-8859-1 encoding, and splits it into the username and password. There is a Microsoft type WebMatrix.WebData.WebSecurity which has a Login method that we can use. If this succeeds, we set the identity to the new user using System.Security.Principal.GenericPrincipal. We also add a HandleUnauthorized method in here which sets the Response to Unauthorized and adds a WWW-Authenticate header. Shawn also discusses the InitializeSimpleMembership attribute, which is part of the ASP.NET MVC 4 template.

Testing Basic Authentication

Shawn tests this using Fiddler by making a request with the following request headers:

    Host: localhost:8901
    Authorization: Basic c2hhd253aWxkZXJtdXRoOnBsdXJhbHNpZ2h0

Our credentials are Base64 encoded.

Token Authentication

Shawn runs through the process of authenticating with a token:
- Developer requests an API Key
- Supplies API Key and Shared Secret to API
- Requests Token
- Validates and Returns Token
- Uses API with Access Token until Timeout

We create a simple and naive implementation of this, starting with a TokenController deriving from BaseApiController. This has a Post method which takes a TokenRequestModel with an ApiKey and Signature.
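Stepping back to the Basic scheme for a moment: the decoding step the filter performs on that Authorization header can be sketched in Python — split off the Basic scheme, Base64-decode the raw credential with iso-8859-1, and split on the colon (the helper name is mine, not from the course):

```python
import base64

def parse_basic_auth(header):
    """Split a 'Basic <base64>' Authorization header into (user, password)."""
    scheme, _, raw = header.partition(" ")
    if scheme != "Basic" or not raw:
        return None
    decoded = base64.b64decode(raw).decode("iso-8859-1")
    user, _, password = decoded.partition(":")
    return user, password

print(parse_basic_auth("Basic c2hhd253aWxkZXJtdXRoOnBsdXJhbHNpZ2h0"))
# ('shawnwildermuth', 'pluralsight')
```

This also makes plain why Basic Auth must be paired with HTTPS: Base64 is an encoding, not encryption, and anyone who sees the header can recover the password.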
At the end of this lesson we head over to Fiddler again and make a POST request with the apiKey and Signature in the request body. We get back a JSON object with an expiration date and token value.

Implementing Token Authentication

Shawn adds per-user logic into the CountingKsAuthorizeAttribute. We use an attribute called [Inject] which allows property injection; this is part of the Ninject IoC container. We also create a new class called NinjectWebApiFilterProvider which derives from IFilterProvider and implements GetFilters. Shawn says it may be added either to Web API itself or to the WebApiContrib project at a future date.

Walkthrough of OAuth Implementation

OAuth is similar to token-based authentication, but there are some extra steps involved:
- Developer requests an API Key
- Supplies API Key and Shared Secret to API
- Requests Token
- Validates and Returns Token
- Redirects to API's Auth URI
- API displays the Authorization UI "xxx's app wants you to give permission. Approve?"
- User Confirms Authorization
- Redirects Back to Developer
- Request Access Token via OAuth & Request Token
- Returns Access Token (with Timeout)
- Uses API with Access Token until Timeout

Shawn says he could do a whole course and spend six hours building this demo. Instead he shows us the DotNetOpenAuth project, which includes implementations of OAuth 1 and 2. Shawn talks about the OAuth2ProtectedWebApi project within the DotNetOpenAuth solution and says this is the best place to start. These controllers use asynchronous communication with the Task type, and Shawn says this is a good idea.
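The module leaves the signature scheme for the token request unspecified. A common choice — and this is an assumption of the sketch, not something the course prescribes — is an HMAC over the request details using the shared secret, so the secret itself never travels over the wire:

```python
import hashlib
import hmac

def sign(api_key, shared_secret, message):
    """Compute a request signature with HMAC-SHA256 (an assumed algorithm;
    the course does not pin one down). Only the signature is sent."""
    mac = hmac.new(shared_secret.encode(), digestmod=hashlib.sha256)
    mac.update(api_key.encode())
    mac.update(message.encode())
    return mac.hexdigest()

def verify(api_key, shared_secret, message, signature):
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(api_key, shared_secret, message), signature)

sig = sign("my-api-key", "s3cret", "POST /api/token")
assert verify("my-api-key", "s3cret", "POST /api/token", sig)
assert not verify("my-api-key", "wrong", "POST /api/token", sig)
```

The server, which also knows the shared secret for that API key, recomputes the HMAC and compares; a mismatch means either the key or the message was tampered with.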
https://zombiecodekill.com/2016/08/02/securing-apis/
Processes are executed by a Runner, or by the DaemonRunner in the case of the daemon running the process. A good example of a process is the WorkChain class, which is in fact a sub class of the Process class. In the workflows and workchains section you can see how the WorkChain defines how it needs to be run. In addition to those run instructions, the WorkChain is associated with a Calculation class. This Calculation class is a sub class of Node and serves as the record of the process' execution in the database, and by extension the provenance graph.

It is very important to understand this division of labor. A Process describes how something should be run, and the Calculation node serves as a mere record in the database of what actually happened during execution. A good thing to remember is that while it is running, we are dealing with the Process, and when it is finished we interact with the Calculation node. The WorkChain is not the only process in AiiDA, and each process uses a different node class as its database record. The following table describes which processes exist in AiiDA and what node type they use as a database record.

Note

The concept of the Process is a later addition to AiiDA, and in the beginning this division of 'how to run something' and 'serving as a record of what happened' did not exist. For example, historically speaking, the JobCalculation class fulfilled both of those tasks. To not break the functionality of the historic JobCalculation and InlineCalculation, their implementation was kept, and Process wrappers were developed in the form of the JobProcess and the FunctionProcess, respectively. When a function decorated with the make_inline decorator is run, it is automatically wrapped into a FunctionProcess to make sure that it is a process and gets all the necessary methods for the Runner to run it. Similarly, the process() class method was implemented for the JobCalculation class, in order to automatically create a process wrapper for the calculation.
The good thing about this unification is that everything that is run in AiiDA has the same attributes concerning its running state. The most important attribute is the process state. In the next section, we will explain what the process state is, what values it can take and what they mean.

The process state

Each Process has a process state. This property tells you about the current status of the process. It is stored in the instance of the Process itself, and the workflow engine, the plumpy library, operates only on that value. However, the Process instance 'dies' as soon as it is terminated, so therefore we also write the process state to the calculation node that the process uses as its database record, under the process_state attribute. The process can be in one of six states:

- Created
- Running
- Waiting
- Killed
- Excepted
- Finished

The first three states are 'active' states, whereas the final three are terminal states. A process in the Killed state was terminated by the user, and one in the Excepted state encountered an exception during execution. A process that is in the Finished state is not necessarily considered to be successful; it just executed without any problems. To distinguish between a successful and a failed execution, we have introduced the 'exit status'. This is another attribute that is stored in the node of the process; it is an integer that can be set by the process. A zero means that the result of the process was successful, and a non-zero value indicates a failure. All the calculation nodes used by the various processes are a sub class of AbstractCalculation, which defines handy properties to query the process state and exit status. When you load a calculation node from the database, you can use these property methods to inquire about its state and exit status.

The process builder

The process builder is essentially a tool that helps you build the object that you want to run. To get a builder for a Calculation or a Workflow, all you need is the Calculation or WorkChain class itself, which can be loaded through the CalculationFactory and WorkflowFactory, respectively.
Let's take the TemplatereplacerCalculation as an example:

    TemplatereplacerCalculation = CalculationFactory('simpleplugins.templatereplacer')
    builder = TemplatereplacerCalculation.get_builder()

The string simpleplugins.templatereplacer is the entry point of the TemplatereplacerCalculation, and passing it to the CalculationFactory will return the corresponding class. Calling the get_builder method on that class will return an instance of the ProcessBuilder that is tailored for the TemplatereplacerCalculation. The builder will help you in defining the inputs that the TemplatereplacerCalculation requires and has a few handy tools to simplify this process.

Defining inputs

Each input is exposed as a property of the builder, which you can inspect in an interactive shell:

    builder.parameters?
    Type:        property
    String form: <property object at 0x7f04c8ce1c00>
    Docstring:
        "non_db": "False"
        "help": "Parameters used to replace placeholders in the template",
        "name": "parameters",
        "valid_type": "<class 'aiida.orm.data.parameter.ParameterData'>"

The builder also accepts the description and label inputs:

    builder.label = 'This is my calculation label'
    builder.description = 'An example calculation to demonstrate the process builder'

If you evaluate the builder instance, simply by typing the variable name and hitting enter, the current values of the builder's inputs will be displayed:

    builder
    {
        'description': 'An example calculation to demonstrate the process builder',
        'label': 'This is my calculation label',
        'options': {},
    }

In this example, you can see the value that we just set for the description and the label. In addition, it will also show any namespaces, as the inputs of processes support nested namespaces, such as the options namespace in this example. This namespace contains all the additional options for a JobCalculation that are not stored as input nodes, but rather have to do with how the calculation should be run. Examples are the job resources that it should use or any other settings related to the scheduler.
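The builder behavior described above — named inputs with attached help text, validation of unknown inputs, and a readable snapshot of what has been set — can be sketched in a few lines of Python. All names here are hypothetical; this is an illustration of the pattern, not AiiDA's implementation.

```python
class ProcessBuilder:
    """Collects inputs for a process before launch (illustrative sketch)."""

    def __init__(self, valid_inputs):
        # valid_inputs maps input name -> help string
        self._valid = dict(valid_inputs)
        self._values = {}

    def __setattr__(self, name, value):
        # Internal attributes pass through; everything else must be a
        # declared input, otherwise we refuse it outright.
        if name.startswith('_'):
            super().__setattr__(name, value)
        elif name in self._valid:
            self._values[name] = value
        else:
            raise AttributeError('unknown input: %s' % name)

    def help(self, name):
        # Analogous to inspecting builder.<input>? in an interactive shell.
        return self._valid[name]

    def __repr__(self):
        # Evaluating the builder shows the inputs set so far.
        return repr(self._values)
```

Rejecting unknown attributes at assignment time is what makes this pattern pleasant interactively: typos fail immediately instead of surfacing as a missing input at launch.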
Note that these options are also all autocompleted, so you can use that to discover all the options that are available, including their descriptions. All that remains is to fill in all the required inputs and we are ready to launch the Calculation or WorkChain.

Launching the process

When all the inputs have been defined for the builder, it can be used to actually launch the Process. The ProcessBuilder can be launched by passing it to the free functions run and submit from the aiida.work.launch module, just as you would a normal process. For more details please refer to the process builder section in the section of the documentation on running workflows.

Submit test

The ProcessBuilder of a JobCalculation has one additional feature: the method submit_test(). When this method is called, provided that the inputs are valid, a directory will be created locally with all the input files and scripts that would be created if the builder were to be submitted for real. This gives you a chance to inspect the generated files before actually sending them to the remote computer. This action will also not create an actual calculation node in the database, nor do the input nodes have to be stored, allowing you to check that everything is correct without polluting the database. By default the method will create a folder submit_test in the current working directory and, within it, a directory with an automatically generated unique name each time the method is called. The method takes two optional arguments, folder and subfolder_name, to change the base folder and the name of the test directory, respectively.
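The dry-run behavior just described is easy to picture with a small sketch (Python, illustrative only — not AiiDA's code): write the would-be input files into a submit_test/<unique name> directory without touching any database.

```python
import os
import tempfile

def submit_test(input_files, base='submit_test', subfolder_name=None):
    """Write input files to a local test directory instead of submitting.

    input_files: mapping of filename -> file contents.
    Returns the directory that was created.
    """
    os.makedirs(base, exist_ok=True)
    if subfolder_name is None:
        # A unique subdirectory per call, as described above.
        target = tempfile.mkdtemp(dir=base)
    else:
        target = os.path.join(base, subfolder_name)
        os.makedirs(target)
    for name, contents in input_files.items():
        with open(os.path.join(target, name), 'w') as fh:
            fh.write(contents)
    return target
```

Because nothing here touches a database, the generated scripts can be inspected, thrown away, and regenerated freely — exactly the property that makes a dry-run mode useful.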
http://aiida-core.readthedocs.io/en/latest/concepts/processes.html
A few weeks ago I worked on a project in which Java was involved. It was a nice experience to integrate VC++, Java and assembly together to do something. So I decided to rewrite my C# article, in which I call Assembly language from C# with the help of VC++, and now do the same thing with the help of VC++, Java and Assembly by using JNI (Java Native Interface). But the basic question is the same: why should we call Assembly Language from Java? There might be several reasons for this.

So you will use JNI (Java Native Interface) to execute native code from Java. But be careful: when you plan to call native functions from your Java program, your program will not be portable. To call native code from Java you have to follow these steps.

Now there is a question: how can we use our existing DLLs in a Java program, given that those DLLs were not written to use a Java-created header file? The solution to this problem is to create a wrapper DLL that just calls the functions of your DLL. Let's discuss these steps one by one.

To declare any function native in Java, you use the native keyword with that function and don't declare the body of that function.

    class prog1
    {
        public static native void test();

        public static void main(String args[])
        {
            System.out.println("Hello World!");
        }
    }

It is not necessary to declare a native function static; you can declare it as a non-static function too. The only difference between a static and a non-static native function shows up when we call the function, so we will see the difference when we call it.

Now compile this program at the command line:

    javac prog1.java

The output of this program is a class file. And this is a working program; you can run it if you want by typing

    java prog1

Now the second step is to generate a header file for C/C++. There is a utility, javah, which comes with Java and creates the header file for C/C++ from the Java class file. Type this at the command prompt:

    javah prog1

This will create one header file with the same name as the class file, i.e.
prog1.h. This header file is very simple and declares the prototypes of all the functions that are declared native in the Java program. The header file looks like this:

    /* DO NOT EDIT THIS FILE - it is machine generated */
    #include <jni.h>
    /* Header for class prog1 */

    #ifndef _Included_prog1
    #define _Included_prog1
    #ifdef __cplusplus
    extern "C" {
    #endif
    /*
     * Class:     prog1
     * Method:    test
     * Signature: ()V
     */
    JNIEXPORT void JNICALL Java_prog1_test
      (JNIEnv *, jclass);

    #ifdef __cplusplus
    }
    #endif
    #endif

You should include at least one header file in your JNI-based DLL, i.e. jni.h. This header file is automatically included when you create the header file with the javah utility. JNIEXPORT is defined as __declspec(dllexport) and JNICALL is the standard calling convention, defined in jni_md.h:

    #define JNIEXPORT __declspec(dllexport)
    #define JNICALL __stdcall

So the function signature looks something like this:

    __declspec(dllexport) void __stdcall Java_prog1_test(JNIEnv*, jclass)

The function name begins with Java_, followed by the package name, then the class name, and at last the name of the function that was declared native in the Java file. Here we haven't defined any package, so there is no package name in the exported function.

The first parameter of any JNI-based function is a pointer to the JNIEnv structure. This structure is used to access Java-environment-related functions from within the C++ program; for example, strings are stored differently in Java and C++, so you have to convert them to the appropriate type before use, by calling the functions defined by JNI.

Now make a DLL project in VC++, include this header file in the project, and implement those native functions. Remember, you can use any name for this cpp file; it is not necessary to use the same name as the class file or header file.
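Incidentally, the Java_<package>_<class>_<method> naming scheme described above is completely mechanical. Here is a sketch of the simple case — written in Python purely for illustration, since the rule itself is language-neutral. (The full JNI specification also escapes underscores and non-ASCII characters in names, which this sketch deliberately ignores.)

```python
def jni_symbol(method, cls, package=None):
    """Build the exported C symbol name for a Java native method.

    Simplified: package separators '.' become '_' in the exported name;
    underscore escaping from the full JNI spec is not handled here.
    """
    parts = ['Java']
    if package:
        parts.extend(package.split('.'))
    parts.append(cls)
    parts.append(method)
    return '_'.join(parts)
```

For the prog1 class above, jni_symbol('test', 'prog1') yields Java_prog1_test — exactly the name javah generated in the header file.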
    #include <windows.h>
    #include <stdio.h>
    #include "prog1.h"

    BOOL WINAPI DllMain(HANDLE hHandle, DWORD dwReason, LPVOID lpReserved)
    {
        return TRUE;
    }

    JNIEXPORT void JNICALL Java_prog1_test(JNIEnv *, jclass)
    {
        printf("Hello world from VC++ DLL\n");
    }

You will get the required DLL file after compiling this project. Now the next step is to use this DLL and its function in the Java program. The API to load a DLL is System.loadLibrary. Now the program looks something like this:

    class prog1
    {
        static
        {
            System.loadLibrary("test.dll");
        }

        public static native void test();

        public static void main(String args[])
        {
            System.out.println("Hello World!");
            test();
        }
    }

When you run this program, it crashes after throwing an UnsatisfiedLinkError exception. So let's catch this exception and change our program a little bit:

    class prog1
    {
        static
        {
            try
            {
                System.loadLibrary("test.dll");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public static native void test();

        public static void main(String args[])
        {
            System.out.println("Hello World!");
            test();
        }
    }

The output of this program is

    java.lang.UnsatisfiedLinkError: no test.dll in java.library.path
    Hello World!
    Exception in thread "main" java.lang.UnsatisfiedLinkError: test
            at prog1.test(Native Method)
            at prog1.main(prog1.java:19)

This output shows that the program can't find the DLL which we just made. Be sure to copy the DLL into the path or the current folder. But you will get the same error even if you copy the DLL into the same folder. The reason is that you must not write the extension of the DLL in the loadLibrary function:

    class prog1
    {
        static
        {
            try
            {
                System.loadLibrary("test");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public static native void test();

        public static void main(String args[])
        {
            System.out.println("Hello World!");
            test();
        }
    }

The output of this program is

    Hello World!
    Hello world from VC++ DLL

Now let's come to non-static native functions. In fact, you can declare a native function as non-static too.
But in this case you have to create an instance of the prog1 class and call the native function on it:

    class prog1
    {
        static
        {
            try
            {
                System.loadLibrary("test");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public native void test();

        public static void main(String args[])
        {
            System.out.println("Hello World!");
            new prog1().test();
        }
    }

The only differences between programs 4 and 5 are the declaration and the calling of the native function. The output of the above program is the same as the previous one. But I will use static native functions in the rest of the article.

OK, let's try passing parameters to and returning values from a native function. Let's make a program that calls one native function which sums two numbers and returns the result:

    class prog1
    {
        static
        {
            try
            {
                System.loadLibrary("test");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public static native int Sum(int a, int b);

        public static void main(String args[])
        {
            System.out.println(Sum(5, 10));
        }
    }

And here is the CPP file to create the DLL:

    #include <windows.h>
    #include "prog1.h"

    BOOL WINAPI DllMain(HANDLE hHandle, DWORD dwReason, LPVOID lpReserved)
    {
        return TRUE;
    }

    JNIEXPORT jint JNICALL Java_prog1_Sum(JNIEnv *, jclass, jint a, jint b)
    {
        return a + b;
    }

The output of this program is:

    15

Handling integers is easy. Now let's do some experiments with character strings. Let's make a native function which takes a string as a parameter and returns it capitalized.
    class prog1
    {
        static
        {
            try
            {
                System.loadLibrary("test");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public static native String saySomething(String strString);

        public static void main(String args[])
        {
            System.out.println(saySomething("Hello world"));
            System.out.println(saySomething("Bye world"));
        }
    }

And here is the CPP program to implement this:

    #include <windows.h>
    #include <string.h>
    #include "prog1.h"

    BOOL WINAPI DllMain(HANDLE hHandle, DWORD dwReason, LPVOID lpReserved)
    {
        return TRUE;
    }

    JNIEXPORT jstring JNICALL Java_prog1_saySomething(JNIEnv * env, jclass, jstring strString)
    {
        char *lpBuff = (char*)env->GetStringUTFChars(strString, 0);
        _strupr(lpBuff);
        jstring jstr = env->NewStringUTF(lpBuff);
        env->ReleaseStringUTFChars(strString, lpBuff);
        return jstr;
    }

The output of the program is

    HELLO WORLD
    BYE WORLD

The important things in this program are the GetStringUTFChars, ReleaseStringUTFChars and NewStringUTF functions. GetStringUTFChars converts the character representation from Java's Unicode representation to a C-language null-terminated string. You have to call ReleaseStringUTFChars to free the memory allocated by the virtual machine; if you forget to call it, you will create a memory leak. NewStringUTF is used to create the new string which is returned by the function.

Now we have enough knowledge to port the C# program to Java. Let's write the Java program and create the header file.
    class sysInfo
    {
        static
        {
            try
            {
                System.loadLibrary("SysInfo");
            }
            catch(UnsatisfiedLinkError ule)
            {
                System.out.println(ule);
            }
        }

        public static native int getCPUSpeed();
        public static native String getCPUType();
        public static native int getCPUFamily();
        public static native int getCPUModal();
        public static native int getCPUStepping();

        public static void main(String args[])
        {
            System.out.println("Information about System");
            System.out.println("========================");
            System.out.println("Get CPU Speed: " + getCPUSpeed());
            System.out.println("Get CPU Type: " + getCPUType());
            System.out.println("Get CPU Family: " + getCPUFamily());
            System.out.println("Get CPU Modal: " + getCPUModal());
            System.out.println("Get CPU Stepping: " + getCPUStepping());
        }
    }

And here is the CPP program which uses the header file created by this program:

    #include <windows.h>
    #include "sysinfo.h"

    BOOL WINAPI DllMain(HANDLE hHandle, DWORD dwReason, LPVOID lpReserved)
    {
        return TRUE;
    }

    JNIEXPORT jint JNICALL Java_sysInfo_getCPUSpeed(JNIEnv *, jclass)
    {
        LARGE_INTEGER ulFreq, ulTicks, ulValue, ulStartCounter, ulEAX_EDX, ulResult;

        // it is the number of ticks per second
        QueryPerformanceFrequency(&ulFreq);

        // current value of the performance counter
        QueryPerformanceCounter(&ulTicks);

        // calculate one second interval
        ulValue.QuadPart = ulTicks.QuadPart + ulFreq.QuadPart;

        // read the time stamp counter
        // this asm instruction loads the high-order 32 bits of the register
        // into EDX and the low-order 32 bits into EAX
        _asm
        {
            rdtsc
            mov ulEAX_EDX.LowPart, EAX
            mov ulEAX_EDX.HighPart, EDX
        }

        // start no of ticks
        ulStartCounter.QuadPart = ulEAX_EDX.QuadPart;

        // loop for 1 second
        do
        {
            QueryPerformanceCounter(&ulTicks);
        } while (ulTicks.QuadPart <= ulValue.QuadPart);

        // get the actual no of ticks
        _asm
        {
            rdtsc
            mov ulEAX_EDX.LowPart, EAX
            mov ulEAX_EDX.HighPart, EDX
        }

        // ticks elapsed during one second, converted to MHz
        ulResult.QuadPart = (ulEAX_EDX.QuadPart - ulStartCounter.QuadPart) / 1000000;
        return (jint)ulResult.QuadPart;
    }

    JNIEXPORT jstring JNICALL Java_sysInfo_getCPUType(JNIEnv * env, jclass)
    {
        static char pszCPUType[13];
        memset(pszCPUType, 0, 13);
        _asm
        {
            mov eax, 0
            cpuid

            // getting information from EBX
            mov pszCPUType[0], bl
            mov pszCPUType[1], bh
            ror ebx, 16
            mov pszCPUType[2], bl
            mov pszCPUType[3], bh

            // getting information from EDX
            mov pszCPUType[4], dl
            mov pszCPUType[5], dh
            ror edx, 16
            mov pszCPUType[6], dl
            mov pszCPUType[7], dh

            // getting information from ECX
            mov pszCPUType[8], cl
            mov pszCPUType[9], ch
            ror ecx, 16
            mov pszCPUType[10], cl
            mov pszCPUType[11], ch
        }
        pszCPUType[12] = '\0';
        return env->NewStringUTF(pszCPUType);
    }

    JNIEXPORT jint JNICALL Java_sysInfo_getCPUFamily(JNIEnv *, jclass)
    {
        int retVal;
        _asm
        {
            mov eax, 1
            cpuid
            mov retVal, eax
        }
        return (retVal >> 8);
    }

    JNIEXPORT jint JNICALL Java_sysInfo_getCPUModal(JNIEnv *, jclass)
    {
        int retVal;
        _asm
        {
            mov eax, 1
            cpuid
            mov retVal, eax
        }
        return ((retVal >> 4) & 0x0000000f);
    }

    JNIEXPORT jint JNICALL Java_sysInfo_getCPUStepping(JNIEnv *, jclass)
    {
        int retVal;
        _asm
        {
            mov eax, 1
            cpuid
            mov retVal, eax
        }
        return (retVal & 0x0000000f);
    }

The output of this program on my computer is given below; you may get different results depending on the computer you are using. One more thing: I use the CPUID instruction to identify the vendor name of the microprocessor, and this instruction is available only on Pentium and later microprocessors, so if you are using a microprocessor older than the Pentium, you may get unpredictable results.

    Information about System
    ========================
    Get CPU Speed: 1003
    Get CPU Type: AuthenticAMD
    Get CPU Family: 6
    Get CPU Modal: 4
    Get CPU Stepping: 2

Thanks to Tasnim Ahmed, Java team leader at SoftPakSys, for answering my questions related to Java; to Khuram Rehmani, who gave me some tips on the usage of JNI; and to Muhammad Kashif Shafiq for reviewing this before publication.
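As an aside on the bit arithmetic in getCPUFamily, getCPUModal and getCPUStepping: the family, model and stepping values are simply bit fields of the EAX value returned by CPUID leaf 1. The same extraction, redone in Python for clarity. (Note that getCPUFamily above shifts but does not mask, so on later CPUs — which pack extended-family bits above bit 11 — it would return more than just the 4-bit family field.)

```python
def decode_cpu_signature(eax):
    """Extract (family, model, stepping) from a CPUID leaf-1 EAX value.

    Basic 4-bit fields only; extended family/model fields on newer CPUs
    are ignored in this sketch.
    """
    stepping = eax & 0x0000000f         # bits 0-3
    model = (eax >> 4) & 0x0000000f     # bits 4-7
    family = (eax >> 8) & 0x0000000f    # bits 8-11
    return family, model, stepping
```

The article's sample output (family 6, model 4, stepping 2) corresponds to a raw signature of 0x642.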
http://www.codeproject.com/KB/cpp/integratingcppjava.aspx
As with many Perl systems, AxKit often provides multiple ways of doing things. Developers from other programming cultures may find these choices and freedom a bit bewildering at first, but this (hopefully) soon gives way to the realization that the options provide power and freedom. When a tool set limits your choices too much, you end up doing things like driving screws with a nailgun. Of course, too many choices isn't necessarily a good thing, but it's better than too few.

Last time, we saw how to build a weather reporting application by implementing a simple taglib module, My::WeatherTaglib, in Perl and deploying it in a pipeline with other XML filters. The pipeline approach allows one kind of flexibility: the freedom to decompose an application in the most appropriate manner for the requirements at hand and for the supporting organization. Another kind of flexibility is the freedom to implement filters using different technologies. For instance, it is sometimes wise to build taglibs in different ways. In this article, we'll see how to build the same taglib using two other approaches. The first rebuild uses the technique implemented by the Cocoon project, LogicSheets. The second uses Jörg Walter's relatively new SimpleTaglib in place of the TaglibHelper used for My::WeatherTaglib in the previous article. SimpleTaglib is a somewhat more powerful and, oddly, more complex module than TaglibHelper (though the author intends to make it a bit simpler to use in the near future).

CHANGES

AxKit v1.6 is now out with some nice bug fixes and performance improvements, mostly by Matt Sergeant and Jörg Walter, along with several new advanced features from Kip Hampton which we'll be covering in future articles. Matt has also updated his AxKit-compatible AxPoint PowerPoint-like HTML/PDF/etc. presentation system. If you're going to attend any of the big Perl conferences this season, then you're likely to see presentations built with AxPoint.
It's a nice system that's also covered in an XML.com article by Kip Hampton.

AxTraceIntermediate

The one spiffy new feature I used -- rather more often than I'd like to admit -- in writing this article is the debugging directive AxTraceIntermediate, added by Jörg Walter. This directive defines a directory in which AxKit will place a copy of each of the intermediate documents passed between filters in the pipeline. So a setting like:

    AxTraceIntermediate /home/barries/AxKit/www/axtrace

will place one file in the axtrace directory for each intermediate document. The full set of directives in httpd.conf used for this article is shown later. Here is the axtrace directory after requesting the URIs / (from the first article), /02/weather1.xsp (from the second article), /03/weather1.xsp and /03/weather2.xsp (both from this article):

    |index.xsp.XSP        # Perl source code for /index.xsp
    |index.xsp.0          # Output of XSP filter
    |02|weather1.xsp.XSP  # Perl source code for /02/weather1.xsp
    |02|weather1.xsp.0    # Output of XSP
    |02|weather1.xsp.1    # Output of weather.xsl
    |02|weather1.xsp.2    # Output of as_html.xsl
    |03|weather1.xsp.XSP  # Perl source code for /03/weather1.xsp
    |03|weather1.xsp.0    # Output of XSP
    |03|weather1.xsp.1    # Output of weather.xsl
    |03|weather1.xsp.2    # Output of as_html.xsl
    |03|weather2.xsp.XSP  # Perl source code for /03/weather2.xsp
    |03|weather2.xsp.0    # Output of my_weather_taglib.xsl
    |03|weather2.xsp.1    # Output of XSP
    |03|weather2.xsp.2    # Output of weather.xsl
    |03|weather2.xsp.3    # Output of as_html.xsl

Each filename is the path portion of the URI with the /s replaced with |s and a step number (or .XSP) appended. The numbered files are the intermediate documents and the .XSP files are the Perl source code for any XSP filters that happened to be compiled for this request. Compare the |03|weather2.xsp.* files to the pipeline diagram for the /03/weather2.xsp request.
Watch those "|" characters: they force you to quote the filenames in most shells (and thus foil any use of wildcards):

    $ xmllint --format "www/axtrace/|03|weather2.xsp.3"
    <?xml version="1.0" standalone="yes"?>
    <html>
      <head>
        <meta content="text/html; charset=UTF-8" http-
        <title>My Weather Report</title>
      </head>
      <body>
        <h1><a name="title"/>My Weather Report</h1>
        <p>Hi! It's 12:43:52</p>
        <p>The weather in Pittsburgh is Sunny
    ....

NOTE: The .XSP files are only generated if the XSP sheet is recompiled, so you may need to touch the source document or restart the server to generate a new one. Another gotcha is that if an error occurs halfway down the processing pipeline, you can end up with stale files. In this case, the lower-numbered files (those generated by successful filters) will be from this request, but the higher-numbered files will be stale, left over from previous requests. A slightly different issue can occur when using dynamic pipeline configurations (which we'll cover in the future): you can end up with a shorter pipeline that only overwrites the lower-numbered files and leaves stale higher-numbered files around. These are pretty minor gotchas when compared to the usefulness of this feature; you just need to be aware of them to avoid confusion.

When debugging for this article, I used a Perl script that does something like:

    rm -f www/axtrace/*
    rm www/logs/*
    www/bin/apachectl stop
    sleep 1
    www/bin/apachectl start
    GET

to start each test run with a clean fileset.

Under the XSP Hood

Before we move on to the examples, let's take a quick peek at how XSP pages are handled by AxKit. This will help us understand the tradeoffs inherent in the different approaches. AxKit implements XSP filters by compiling the source XSP page into a handler() function that is called to generate the output page.
This is compiled into Perl bytecode, which is then run to generate the XSP output document. This means that the XSP page is not executed directly, but by running relatively efficient compiled Perl code. The bytecode is kept in memory, so the overhead of parsing and code generation is not incurred for each request.

There are three types of Perl code used in building the output document: code to build the bits of static content, code that was present verbatim in the source document -- enclosed in tags like <xsp:logic> and <xsp:expr> -- and code that implements tags handled by registered taglib modules like My::WeatherTaglib from the last article. Taglib modules hook into the XSP compiler by registering themselves as handlers for a namespace and then coughing up snippets of code to be compiled into the handler() routine. The snippets of code can call back into the taglib module or out to other modules as needed.

Modules like TaglibHelper, which we used to build My::WeatherTaglib, and SimpleTaglib, which we use later in this article for My::SimpleWeatherTaglib, automate the drudgery of building a taglib module so you don't need to parse XML or even (usually) generate XML.

You can view the source code that AxKit generates by cranking the AxDebugLevel up to 10 (which places the code in Apache's ErrorLog) or using the AxTraceIntermediate directive mentioned above. Then you must persuade AxKit to recompile the XSP page by restarting the server and requesting a page. If either of the necessary directives is already present in a running server, then simply touching the file to update its modification time will suffice. This can be useful for getting a really good feel for what's going on under the hood. I encourage new taglib authors to do this to see how the code for your taglib is actually executed. You'll end up needing to do it to debug anyway (trust me :).
LogicSheets: Upstream Taglibs

AxKit uses a pipeline processing model, and XSP includes tags like <xsp:logic> and <xsp:expr> that allow you to embed Perl code in an XSP page. This allows taglibs to be implemented as XML filters that are placed upstream of the XSP processor. These usually use XSLT to convert taglib invocations to inline code using XSP tags. In fact, this is how XSP was originally designed to operate, and Cocoon uses this approach exclusively to this day (but with inline Java instead of Perl). I did not show this approach in the first article because it is considerably more awkward and less flexible than the taglib module approach offered by AxKit. The Cocoon project calls XSLT sheets that implement taglibs LogicSheets, a convention I follow in this article (I refer to the all-Perl taglib implementation as "taglib modules").

weather2.xsp

Before we look at the logicsheet version of the weather report taglib, here is the XSP page from the last article updated to use it. The

    <?xml-stylesheet href="my_weather_taglib.xsl" type="text/xsl"?>

processing instruction causes my_weather_taglib.xsl (which we'll cover next) to be applied to the weather2.xsp page before the XSP processor sees it. The other three PIs are identical to the previous version: the XSP processor is invoked, followed by the same presentation and HTMLification XSLT stylesheets that we used last time.

The only other change from the previous version is that this one uses the correct URI for XSP tags. I accidentally used a deprecated URI for XSP tags in the previous article and ended up tripping over it when I used the up-to-date URI in the LogicSheet for this one. Such is the life of a pointy-brackets geek. The ability to switch implementations without altering (much) code is one of XSP's advantages over things like inline Perl code: the implementation is nicely decoupled from the API (the tags).
The only reason we had to alter weather1.xsp at all is because we're switching from a more advanced approach (a taglib module, My::WeatherTaglib) that is configured in the httpd.conf file to LogicSheets, which need per-document configuration when using <xml-stylesheet> stylesheet specifications. AxKit has more flexible httpd.conf, plugin and Perl based stylesheet specification mechanisms, which we will cover in a future article; I'm using the processing instructions here because they are simple and obvious. The pipeline built by the processing instructions looks like this (the diagram does not show the final compression stage).

my_weather_taglib.xsl

Now that we've seen the source document and the overall pipeline, here is My::WeatherTaglib recast as a LogicSheet, my_weather_taglib.xsl (attributes elided with "..." where the extraction lost them):

    <xsl:stylesheet ...>
      <xsl:output ... />

      <xsl:template ...>
        <xsl:copy>
          <xsp:structure>
            use Geo::Weather;
          </xsp:structure>
          <xsl:apply-templates ... />
        </xsl:copy>
      </xsl:template>

      <xsl:template ...>
        <xsp:logic>
          my $zip = <xsl:apply-templates ... />
          ...

      <!-- Copy the rest of the doc almost verbatim -->
        <xsl:copy>
          <xsl:apply-templates ... />
        </xsl:copy>
      </xsl:template>
    </xsl:stylesheet>

The first <xsl:template> inserts an <xsp:structure> at the top of the page with some Perl code to use Geo::Weather; so the Perl code in the later <xsp:logic> element can refer to it. You could also preload Geo::Weather in httpd.conf to share it amongst httpd processes and simplify this stylesheet, but that would introduce a bit of a maintenance hassle: keeping the server config and the LogicSheet in synchronization.

The second <xsl:template> replaces all occurrences of <weather:report> (assuming the weather: prefix happens to map to the taglib URI; see James Clark's introduction to namespaces for more details). In place of the <weather:report> tag(s) will be some Perl code surrounded by <xsp:logic> and <xsp:expr> tags. The <xsp:logic> tag is used around Perl code that is just logic: any value the code returns is ignored.
The <xsp:expr> tags surround Perl code that returns a value to be emitted as text in the result document. The get_weather() call returns a hash describing the most recent weather observations somewhere close to a given zip code:

    {
        'city'  => 'Pittsburgh',
        'state' => 'PA',
        'cond'  => 'Sunny',
        'temp'  => '77',
        ...
    };

All those <xsp:expr> tags extract the values from the hash one by one and build an XML data structure. The resulting XSP document looks like (namespace URIs elided with "..."):

    <?xml version="1.0"?>
    <xsp:page xmlns:xsp="..."
              xmlns:util="..."
              xmlns:param="...">
      <xsp:structure>
        use Geo::Weather;
      </xsp:structure>
      <data>
        <title><a name="title"/>My Weather Report</title>
        <time>
          <util:time ... />
        </time>
        <weather>
          <xsp:logic>
            my $zip = <param:... />
            ...
          </xsp:logic>
        </weather>
      </data>
    </xsp:page>

and the output document of that XSP page looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <data>
      <title><a name="title"/>My Weather Report</title>
      <time>17:06:15</time>
      <weather>
        <state>PA</state>
        <heat>77</heat>
        <page>/search/search?what=WeatherLocalUndeclared&amp;where=15206</page>
        <wind>From the Northwest at 9 gusting to 16</wind>
        <city>Pittsburgh</city>
        <cond>Sunny</cond>
        <temp>77</temp>
        <uv>4</uv>
        <visb>Unlimited miles</visb>
        <url>?what=WeatherLocalUndeclared&amp;where=15206</url>
        <dewp>59</dewp>
        <zip>15206</zip>
        <baro>29.97 inches and steady</baro>
        <pic></pic>
        <humi>54%</humi>
      </weather>
    </data>

LogicSheet Advantages

- One taglib can generate XML that calls another taglib. Taglib modules may call each other at the Perl level, but taglib modules are XSP compiler plugins and do not cascade: the XSP compiler lives in a pipeline environment but does not use a pipeline internally.
- No need to add an AxAddXSPTaglib directive and restart the Web server each time you write a taglib.
Restarting a Web server just because a taglib has changed can be awkward in some environments, but this seems to be rare; restarting an Apache server is usually quick enough in a development environment and had better not be necessary too often in a production environment.

In the Cocoon community, LogicSheets can be registered and shared somewhat like the Perl community uses CPAN to share modules. This is an additional benefit when Cocooning, but does not carry much weight in the Perl world, which already has CPAN (there are many taglib modules on CPAN). There is no Java equivalent to CPAN in wide use, so Cocoon logic sheets need their own mechanism.

LogicSheet Disadvantages

There are two fundamental drawbacks with LogicSheets, each with several symptoms. Many of the symptoms are minor, but they add up:

- Requires inline code, usually in an XSLT stylesheet.
  - Putting Perl code in XML is awkward: you can't easily syntax check the code (I happen to like to run perl -cw ThisFile.pm a lot while writing Perl code) or take advantage of language-oriented editor features such as autoindenting, tags and syntax highlighting.
  - The taglib author needs to work in four languages/APIs: XSLT (typically), XSP, Perl, and the taglib under development. XSLT and Perl are far from trivial, and though XSP is pretty simple, it's easy to trip yourself up when context switching between them.
  - LogicSheets are far less flexible than taglib modules. For instance, compare the rigidity of my_weather_taglib.xsl's output structure with that of My::WeatherTaglib or My::SimpleWeatherTaglib. The LogicSheet approach requires hardcoding the result values, while the two taglib modules simply convert whatever is in the weather report data structures to XML.
  - XSLT requires a fair amount of extra boilerplate to copy non-taglib bits of XSP pages through. This can usually be set up as boilerplate, but boilerplate in a program is just another thing to get in the way and require maintenance.
  - LogicSheets are inherently single-purpose. Taglib modules, on the other hand, can be used as regular Perl modules. An authentication module can be used both as a taglib and as a regular module, for instance.
  - LogicSheets need a working Web server for even the most basic functional testing, since they need to be run in an XSP environment and AxKit does not yet support XSP outside a Web server. Writing taglib modules allows simple test suites to be written to vet the taglib's code without needing a working Web server.
  - Writing LogicSheets works best in an XML editor; otherwise you'll need to escape all your < characters, at least, and reading/writing XML-escaped Perl and Java code can be irksome.
  - Embracing and extending a LogicSheet is difficult to do: the source XSP page needs to be aware of the fact that the taglib it's using builds on the base taglib, and it must declare both of their namespaces. With taglib modules, Perl's standard function import mechanism can be used to relieve XSP authors of this duty.
- Requires an additional stylesheet to process, usually XSLT. This means:
  - A more complex processing chain, which leads to XSP page complexity (and thus more likelihood of bugs), because each page must declare both the namespace for the taglib tags and a processing instruction to run the taglib. As an example of a gotcha in this area, I used an outdated version of the XSP namespace URI in weather2.xsp and the current URI in my_weather_taglib.xsl. This caused me a bit of confusion, but the AxTraceIntermediate directive helped shed some light on it.
  - More disk files to check for changes each time an XSP page is served. Since each LogicSheet affects the output, each LogicSheet must be stat()ed to see if it has changed since the last time the XSP page was compiled.

As you can probably tell, I feel that LogicSheets are a far more awkward and less flexible approach than writing taglibs as Perl modules using one of the helper libraries.
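The stat() cost mentioned in that last point comes from dependency tracking: before reusing a compiled page, the modification time of every contributing file — the XSP page plus each LogicSheet — must be checked. A sketch of that check, in Python for illustration (AxKit does the equivalent internally in Perl):

```python
import os

def needs_recompile(compiled_at, dependencies):
    """True if any dependency changed after the page was compiled.

    compiled_at:  timestamp of the cached compilation.
    dependencies: paths of the XSP page plus every LogicSheet it uses.
    """
    return any(os.stat(path).st_mtime > compiled_at for path in dependencies)
```

Each extra LogicSheet adds one more path to the dependency list, and therefore one more stat() per request — cheap individually, but it adds up on busy servers.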
Still, using upstream LogicSheets is a valid and perhaps occasionally useful technique for writing AxKit taglibs.

What are upstream filters good for?

So what is XSLT upstream of an XSP processor good for? You can do many things with it other than implementing LogicSheets. One use is to implement branding: altering things like logos, site name, and perhaps colors, or other customization, like administrators' mail addresses on a login page that is shared by several sub-sites. A key advantage of doing transformations upstream of the XSP processor is that the XSP processor caches the results of upstream transformations. XSP converts whatever document it receives into Perl bytecode in memory and then just runs that bytecode if none of the upstream documents have changed. Another use is to convert source documents that declare what should be on a page into XSP documents that implement the machinery of a page. For instance, a survey site might have the source documents declare what questions to ask:

<survey>
  <question>
    <text>Have you ever eaten a Balut</text>
    <response>Yes</response>
    <response>No</response>
    <response>Eeeewww</response>
  </question>
  <question>
    <text>Ok, then, well how about a nice haggis</text>
    <response>Yes</response>
    <response>No</response>
    <response>Now that's more like it!</response>
  </question>
  ...
</survey>

XSLT can be used to transform the survey definition into an XSP page that uses the PerForm taglib to automate form filling, etc. This approach allows pages to be defined in terms of what they are instead of how they should work. You can also use XSLT upstream of the XSP processor to do other things, like translate from a limited or simpler domain-specific tagset to a more complex or general-purpose taglib written as a taglib module. This can allow you to define taglibs that are easier to use in terms of more powerful (but scary!) taglibs that are loaded into the XSP processor.
My::SimpleWeatherTaglib

A new-ish taglib helper module has been bundled in recent AxKit releases: Jörg Walter's SimpleTaglib (the full module name is Apache::AxKit::Language::XSP::SimpleTaglib). This module performs roughly the same function as Steve Willer's TaglibHelper, but supports namespaces and uses a feature new to Perl, subroutine attributes, to specify the parameters and result formatting instead of a string. Here is My::SimpleWeatherTaglib:

package My::SimpleWeatherTaglib;

use Apache::AxKit::Language::XSP::SimpleTaglib;

$NS = "";

package My::SimpleWeatherTaglib::Handlers;

use strict;

require Geo::Weather;

## Return the whole report for fixup later in the processing pipeline
sub report : child(zip) struct({}) {
    return 'Geo::Weather->new->get_weather( $attr_zip );'
}

1;

The $NS variable defines the namespace for this taglib. This module uses the same namespace as my_weather_taglib.xsl and My::WeatherTaglib, because all three implement the same taglib (this repetitiveness is to demonstrate the differences between the approaches). See the Mixing and Matching Taglibs section to see how My::WeatherTaglib and My::SimpleWeatherTaglib can both be used in the same server instance. My::SimpleWeatherTaglib then shifts gears into a new package, My::SimpleWeatherTaglib::Handlers, to define the subroutines for the taglib tags. Using a virgin package like this provides a clean place in which to declare the tag handlers. SimpleTaglib looks for the handlers in the Foo::Handlers package if it's use()d in the Foo package (don't use require for this!). My::SimpleWeatherTaglib requires Geo::Weather and declares a single tag, which handles the <weather:report> tag in weather1.xsp (which we'll show in a moment). The require Geo::Weather; instead of use Geo::Weather; is to avoid importing subroutines into our otherwise empty ...::Handlers namespace, where they might look like handlers. There's something new afoot in the declaration for sub report: subroutine attributes.
Subroutine attributes are a new feature of Perl (as of perl5.6) that allow us to hang additional little bits of information on a subroutine declaration that describe it a bit more. See perldoc perlsub for the details of this syntax. Some attributes are predefined by Perl, but modules may define others for their own purposes. In this case, the SimpleTaglib module defines a handful of attributes, some of which describe what parameters the taglib tag can take and others which describe how to convert the result value from the taglib implementation into XML output.

The child(zip) subroutine attribute tells the SimpleTaglib module that this handler expects a single child element named zip in the taglib's namespace. In weather1.xsp, this ends up looking like:

<weather:report>
  <!-- Get the ?zip=12345 from the URI and pass it to the weather:report tag as a parameter -->
  <weather:zip><param:zip/></weather:zip>
</weather:report>

The text from the <weather:zip> element (which will be filled in from the URI query string using the param: taglib) will be made available in a variable named $attr_zip at request time. The fact that the text from an element shows up in a variable beginning with $attr_ is confusing, but it does actually work that way.

The struct({}) attribute specifies that the result of this tag will be returned as a Perl data structure that will be converted into XML. Geo::Weather->new->get_weather( $zip ) returns a HASH reference that looks like:

{
  'city'  => 'Pittsburgh',
  'state' => 'PA',
  'cond'  => 'Sunny',
  'temp'  => '77',
  ...
};

The struct attribute tells SimpleTaglib to turn this into XML like:

<city>Pittsburgh</city>
<state>PA</state>
<cond>Sunny</cond>
<temp>77</temp>
....

The {} in the struct({}) attribute specifies that the result nodes should not be in a namespace (and thus have no namespace prefix), just like the static portions of our weather1.xsp document.
This is one of the advantages that SimpleTaglib has over other methods: it's easier to emit nodes in different namespaces. To emit nodes in a specific namespace, put the namespace URI for that namespace inside the curlies. This {} notation is referred to as James Clark (or jclark) notation.

Now, the tricky bit. Harkening back to our discussion of how XSP is implemented, remember that the XSP processor compiles the XSP document into Perl code that is executed to build the output document. As XSP compiles the page, it keeps a lookout for tags in namespaces handled by taglib modules that have been configured in with AxAddXSPTaglib. When XSP sees one of these tags, it calls into the taglib module--My::SimpleWeatherTaglib here--for that namespace and requests a chunk of Perl source code to compile in place of the tag. Taglibs implemented with the SimpleTaglib module covered here declare handlers for each taglib tag (sub report, for instance). That handler subroutine is called at parse time, not at request time. Its job is to return the chunk of code that will be compiled and then run later, at request time, to generate the output. So the report handler returns a string containing a snippet of Perl code that calls into Geo::Weather. This Perl code will be compiled once, then run for each request. This is a key difference between the TaglibHelper module that My::WeatherTaglib used in the previous article and the SimpleTaglib module used here. SimpleTaglib calls My::SimpleWeatherTaglib's subroutine at compile time, whereas TaglibHelper quietly, automatically arranges to call My::WeatherTaglib's report() subroutine at request time. This difference makes SimpleTaglib not so simple unless you are used to writing code that generates code that will be compiled and run later. On the other hand, "Programs that write programs are the happiest programs in the world" (Andrew Hume, according to a few places on the net).
This is true here because we are able to return whatever code is appropriate for the task at hand. In this case, the code is so simple that we can return it directly. If the work to be done were more complicated, then we could also return a call to a subroutine of our own devising. So, while a good deal less simple than the approach taken by TaglibHelper, this approach does offer a bit more flexibility. SimpleTaglib's author does promise that a new version of SimpleTaglib will offer the "call this subroutine at request time" API which I (and I suspect most others) would prefer most of the time. I will warn you that the documentation for SimpleTaglib does not stand on its own, so you need to have the source code for an example module or two to put it all together. Beyond the overly simple example presented here, the documentation refers you to a couple of others. Mind you, I'm casting stones while in my glass house here, because nobody has ever accused me of fully documenting my own modules. For reference, the weather1.xsp page from the previous article is reused verbatim for this example. The processing pipeline and intermediate files are also identical to those from the previous article, so we won't repeat them here.

Mixing and Matching Taglibs using httpd.conf

As detailed in the first article in this series, AxKit integrates tightly with Apache and Apache's configuration engine. Apache allows different files and directories to have different configurations applied, including what taglibs are used. In the real world, for instance, it is sometimes necessary to have part of a site use a new version of a taglib that might break an old portion. In the server I used to build the examples for this article, for instance, the 02/ directory still uses My::WeatherTaglib from the last article, while the 03/ directory uses my_weather_taglib.xsl for one of this article's examples and My::SimpleWeatherTaglib for the other.
This is done by combining Apache's <Directory> sections with the AxAddXSPTaglib directive:

<Directory "/home/me/htdocs">
    AxAddStyleMap application/x-xsp \
        Apache::AxKit::Language::XSP
    AxAddStyleMap text/xsl \
        Apache::AxKit::Language::LibXSLT
</Directory>

<Directory "/home/me/htdocs/02">
    AxAddXSPTaglib My::WeatherTaglib
</Directory>

<Directory "/home/me/htdocs/03">
    AxAddXSPTaglib My::SimpleWeatherTaglib
</Directory>

See How Directory, Location and Files sections work from the Apache httpd documentation (v1.3 or 2.0) for the details of how to use <Directory> and other httpd.conf sections to do this sort of thing.

Help and thanks

Jörg Walter as well as Matt Sergeant were of great help in writing this article, especially since I don't do LogicSheets. Jörg also fixed a bug in absolutely no time and wrote the SimpleTaglib module and the AxTraceIntermediate feature. In case of trouble, have a look at some of the helpful resources we listed in the first article.
http://www.perl.com/pub/2002/07/axkit.html
@asymmetrik/ngx-leaflet

Leaflet packages for Angular.io (v2+). Provides flexible and extensible components for integrating Leaflet v0.7.x and v1.x into Angular.io projects. Supports Angular v4, Ahead-of-Time compilation (AOT), and use in Angular-CLI based projects.

Install

Install the package and its peer dependencies via npm (or yarn):

npm install leaflet
npm install @asymmetrik/ngx-leaflet

If you intend to use this library in a TypeScript project (utilizing the typings), you'll need to install the Leaflet typings:

npm install --save-dev @types/leaflet

If you want to run the demo, clone the repository, run npm install and gulp dev, and then browse to the local demo.

Usage

To use this library, there are a handful of setup steps to go through that vary based on your app environment (e.g., Webpack, ngCli, SystemJS, etc.). Generally, the steps are:

- Install Leaflet, this library, and potentially the Leaflet typings (see above).
- Import the Leaflet stylesheet.
- Import the Leaflet module.

How you include the stylesheet will depend on your specific setup. Here are a few examples:

Direct Import from HTML

If you are just building a webpage and not using a bundler for your css, you'll want to directly import the css file in your HTML page.

<head>
...
<link rel="stylesheet" type="text/css" href="./node_modules/leaflet/dist/leaflet.css">
...
</head>

Configuring Webpack Style Loaders

If you are using Webpack, you will need to import the css file and have a style-loader configured. You can use the demo included in this application as a reference. Generally, in vendor.ts:

import 'leaflet/dist/leaflet.css';

And then in your webpack config file:

{
  ...
  "module": {
    loaders: [
      ...
      { test: /\.css$/, loaders: [ 'style-loader', 'css-loader' ] },
      ...
    ]
  },
  ...
}

Adding Styles in Angular CLI

If you are using Angular CLI, you will need to add the Leaflet CSS file to the styles array contained in .angular-cli.json:

{
  ...
  "apps": [
    {
      ...
"styles": [ "styles.css", "../node_modules/leaflet/dist/leaflet.css" ], ... } ] ... } (and potentially the module that's using it). For example, in your app.module.ts, add: import { LeafletModule } from '@asymmetrik/ngx-leaflet'; ... imports: [ ... LeafletModule.forRoot() ] ... Potentially, you'll also need to import it into the module of the component that is going to actually use the ngx-leaflet directives. See Angular.io docs of modules for more details (). In this case, in my-module.module.ts, add: import { LeafletModule } from '@asymmetrik/ngx-leaflet'; ... imports: [ ... Leaflet Once the dependencies are installed and you have imported the LeafletModule, you're ready to add a map to your page. To get a basic map to work, you have to: - Apply the leafletattribute directive (see the example below) to an existing DOM element. - Style the map DOM element with a height. Otherwise, it'll render with a 0 pixel height. - Provide an initial zoom/center and set of layers either via leafletOptionsor by binding to leafletZoom, leafletCenter, and leafletLayers. Template: <div style="height: 300px;" leaflet [leafletOptions]="options"> </div> Example leafletOptions object: options = { layers: [ tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { maxZoom: 18, attribution: '...' }) ], zoom: 5, center: latLng(46.879966, -121.726909) }; Changes to leafletOptions are ignored after they are initially set. This is because these options are passed into the map constructor, so they couldn't be updated easily regardless. So, make sure the object exists before the map is created. You'll want to create the object in ngOnInit or hide the map DOM element with ngIf until you can create the options object. Add a Layers Control The leafletLayersControl input bindings give you the ability to add the layers control to the map. The layers control lets the user toggle layers and overlays on and off. 
Template:

<div style="height: 300px;"
     leaflet
     [leafletOptions]="options"
     [leafletLayersControl]="layersControl">
</div>

Example layersControl object:

layersControl = {
  baseLayers: {
    'Open Street Map': tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { maxZoom: 18, attribution: '...' }),
    'Open Cycle Map': tileLayer('http://{s}.tile.opencyclemap.org/{z}/{x}/{y}.png', { maxZoom: 18, attribution: '...' })
  },
  overlays: {
    'Big Circle': circle([ 46.95, -122 ], { radius: 5000 }),
    'Big Square': polygon([[ 46.8, -121.55 ], [ 46.9, -121.55 ], [ 46.9, -121.7 ], [ 46.8, -121.7 ]])
  }
}

You can add any kind of Leaflet layer you want to the overlays map. This includes markers, shapes, geojson, custom layers from other libraries, etc.

Add Custom Layers (base layers, markers, shapes, etc.)

You can add layers (baselayers, markers, or custom layers) to the map without showing them in the layer control using the leafletLayers directive.

Template:

<div style="height: 300px;"
     leaflet
     [leafletOptions]="options"
     [leafletLayers]="layers">
</div>

Layers array:

layers = [
  circle([ 46.95, -122 ], { radius: 5000 }),
  polygon([[ 46.8, -121.85 ], [ 46.92, -121.92 ], [ 46.87, -121.8 ]]),
  marker([ 46.879966, -121.726909 ])
];

Dynamically Change Map Layers

Layer inputs (arrays and maps) are mutable. Previous versions of this plugin treated layers arrays and layer control objects as immutable data structures. We've changed that behavior. Now, mutable changes to the leafletLayers, leafletBaseLayers, and leafletLayersControl inputs are detected. The plugin is now using internal ngx iterable and key/value differs to detect and track changes to mutable data structures. This approach requires a deep compare of the contents of the data structure (which can be slow when the contents are really big). For immutable data structures, all that is needed is a top-level instance equality check (which is way faster). This change is backwards compatible and was motivated by feedback and confusion.
While there is a performance impact for some use cases, this approach is more intuitive. There are at least two good approaches to improving performance when there are a lot of layers bound to the map. First, you can use the OnPush change detection strategy. There's an example of this in the demo. Second, you can wrap a large number of layers into a Leaflet layer group, which will reduce the number of layers the plugin actually has to track during diffs.

API

This section includes more detailed documentation of the functionality of the directives included in this library.

Advanced Map Configuration

There are several input bindings available for configuring the map.

<div leaflet ...>
</div>

leafletOptions

Input binding for the initial leaflet map options (see Leaflet's docs). These options can only be set initially because they are used to create the map. Later changes are ignored.

leafletPanOptions

Input binding for pan options (see Leaflet's docs). These options are stored and used whenever pan operations are invoked.

leafletZoomOptions

Input binding for zoom options (see Leaflet's docs). These options are stored and used whenever zoom operations are invoked.

leafletZoomPanOptions

Input binding for zoom/pan options (see Leaflet's docs). These options are stored and used whenever zoom/pan operations are invoked.

leafletFitBoundsOptions

Input binding for FitBounds options (see Leaflet's docs). These options are stored and used whenever FitBounds operations are invoked.

Dynamically changing zoom level, center, and fitBounds

<div leaflet ...>
</div>

leafletZoom

Input bind a zoom level to the map.

zoom: number

On changes, the component applies the new zoom level to the map. There is no output binding or events emitted for map zoom level changes made using the map controls.

leafletCenter

Input bind a center position to the map.

center: LatLng

On changes, the component re-centers the map on the center point.
There is no output binding or events emitted for map pan changes made using map controls.

Note: center/zoom operations may cancel each other. If both changes are picked up at the same time, they will be applied as a map.setView() operation so both are processed.

leafletFitBounds

Input bind a fitBounds operation to the map.

fitBounds: LatLngBounds

On changes, the component calls map.fitBounds using the bound parameter.

Simple Layer Management: Setting Baselayers

There is a convenience input binding for setting the baselayers on the map called leafletBaseLayers. You can also provide leafletLayersControlOptions if you want to show the control on the map that allows you to switch between baselayers. If you plan to show more than just baselayers, you should use the more advanced layers controls described in Advanced Layer Management below. For an example of the basic map setup, you should check out the Simple Base Layers demo.

<div leaflet ...>
</div>

leafletBaseLayers

Input bind a Control.LayersObject to be synced to the map.

baseLayers: {
  'layer1': Layer,
  'layer2': Layer
}

On changes, the component syncs the baselayers on the map with the layers in this object. Syncing is performed by tracking the current baselayer and, on changes, searching the map to see if any of the current baselayers is added to the map. If it finds a baselayer that is still added to the map, it will assume that is still the baselayer and leave it. If none of the baselayers can be found on the map, it will add the first layer it finds in the Control.LayersObject and use that as the new baselayer. Layers are compared using instance equality. If you use this directive, you can still manually use the leafletLayers directive, but you will not be able to use the leafletLayersControl directive. This directive internally uses the layers control, so if you add both, they'll interfere with each other.
Because it uses control.layers under the hood, you can still provide options for the layers control.

leafletLayersControlOptions

Input binding for Control.Layers options (see Leaflet's docs). These options are passed into the layers control constructor on creation.

Advanced Layer Management: Layers, and Layers Control

The leafletLayers and leafletLayersControl input bindings give you direct access to manipulate layers and the layers control. When the array bound to leafletLayers is changed, the directive will synchronize the layers on the map to the layers in the array. This includes tile layers and any added shapes. The leafletLayersControl input binding allows you to provide a set of base layers and overlay layers that can be managed within leaflet using the layers control. When the user manipulates the control via Leaflet, Leaflet will automatically manage the layers, but the input bound layer array isn't going to get updated to reflect those changes. So, basically, you use leafletLayers to assert what should be added to/removed from the map. Use leafletLayersControl to tell Leaflet what layers the user can optionally turn on and off. For an example of using the layers controls, you should check out the Layers and Layer Controls demo.

<div leaflet ...>
</div>

leafletLayers

Input bind an array of all layers to be synced (and made visible) in the map.

layers: Layer[]

On changes, the component syncs the layers on the map with the layers in this array. Syncing is performed by selectively adding or removing layers. Layers are compared using instance equality. As a result of how the map is synced, the order of layers is not guaranteed to be consistent as changes are made.

leafletLayersControl

Input bind a Control.Layers specification. The object contains properties for each of the two constructor arguments for the Control.Layers constructor.
layersControl: {
  baseLayers: {
    'layerName': Layer
  },
  overlays: {
    'overlayName': Layer
  }
}

leafletLayersControlOptions

Input binding for Control.Layers options (see Leaflet's docs). These options are passed into the constructor on creation.

Advanced Layer Management: Layers and *ngFor / *ngIf

The leafletLayer input binding gives you the ability to add a single layer to the map. While this may seem limiting, you can nest elements inside the map element, each with a leafletLayer input. The result of this is that each layer will be added to the map. If you add a structural directive - *ngFor or *ngIf - you can get some added flexibility when controlling layers.

<div leaflet ...>
  <div *ngFor="let layer of layers" [leafletLayer]="layer"></div>
</div>

In this example, each layer in the layers array will create a new child div element. Each element will have a leafletLayer input binding, which will result in the layer being added to the map. For more details, you should check out the Layers and ngFor demo.

Getting a Reference to the Map

Occasionally, you may need to directly access the Leaflet map instance. For example, to call invalidateSize() when the map div changes size or is shown/hidden. There are a couple of different ways to achieve this depending on what you're trying to do. The easiest and most flexible way is to use the output binding leafletMapReady. This output is invoked after the map is created, the argument of the event being the Map instance. The second is to get a reference to the leaflet directive itself - and there are a couple of ways to do this. With a reference to the directive, you can invoke the getMap() function to get a reference to the Map instance.

leafletMapReady

This output is emitted once, when the map is initially created inside of the Leaflet directive. The event will only fire when the map exists and is ready for manipulation.
<div leaflet
     [leafletOptions]="options"
     (leafletMapReady)="onMapReady($event)">
</div>

onMapReady(map: Map) {
  // Do stuff with map
}

This method of getting the map makes the most sense if you are using the Leaflet directive inside your own component and just need to add some limited functionality or register some event handlers.

Inject LeafletDirective into your Component

In Angular.io, directives are injectable the same way that Services are. This means that you can create your own component or directive and inject the LeafletDirective into it. This will only work if your custom component/directive exists on the same DOM element and is ordered after the injected LeafletDirective, or if it is on a child DOM element.

<!-- On the same DOM element -->
<div leaflet myCustomDirective>
</div>

<!-- On a child DOM element -->
<div leaflet>
  <div myCustomDirective></div>
</div>

@Directive({
  selector: '[myCustomDirective]'
})
export class MyCustomDirective {
  leafletDirective: LeafletDirective;

  constructor(leafletDirective: LeafletDirective) {
    this.leafletDirective = leafletDirective;
  }

  someFunction() {
    if (null != this.leafletDirective.getMap()) {
      // Do stuff with the map
    }
  }
}

The benefit of this approach is it's a bit cleaner if you're interested in adding some reusable capability to the existing leaflet map directive. This is how the @asymmetrik/ngx-leaflet-draw and @asymmetrik/ngx-leaflet-d3 packages work, so you can use them as references.

A Note About Markers

If you use this component in an Angular.io project and your project uses a bundler like Webpack, you might run into issues using Markers on maps. The issue is related to how Leaflet manipulates the image URLs used to render markers when you are using the default marker images. The url manipulation is done at runtime and it alters the URLs in a way that breaks their format (this happens regardless of if you're using a file-loader or a url-loader).
The demo contained in this project demonstrates how to get around this problem (at least in a Webpack environment). But, here is a rough overview of the steps taken to get them working.

Webpack Marker Workaround

Import the marker images in your vendor file to get Webpack to process the images in the asset pipeline:

import 'leaflet/dist/images/marker-shadow.png';
import 'leaflet/dist/images/marker-icon.png';

Either host the images statically or use the file-loader Webpack plugin to generate the images. Determine the correct URL for the marker and marker-shadow images. If you're using a file hasher, you should be able to check Webpack's output for the generated images. If you are serving them directly without chunk hashing, just figure out how to resolve the images on your server. Configure Leaflet to use the correct URLs as custom marker images:

let layer = marker([ 46.879966, -121.726909 ], {
  icon: icon({
    iconSize: [ 25, 41 ],
    iconAnchor: [ 13, 41 ],
    iconUrl: '2273e3d8ad9264b7daa5bdbf8e6b47f8.png',
    shadowUrl: '44a526eed258222515aa21eaffd14a96.png'
  })
});

Angular CLI Marker Workaround

If you build your project using the Angular CLI, you can make the default leaflet marker assets available by doing the following:

Edit .angular-cli.json (formerly angular-cli.json) and configure the CLI to include leaflet assets as below. Detailed instructions can be found in the asset-configuration documentation.

{
  "project": { ... },
  "apps": [
    {
      ...
"assets": [ "assets", "favicon.ico", { "glob": "**/*", "input": "../node_modules/leaflet/dist/images", "output": "./assets/" } ] } ] } Configure Leaflet to use the correct URLs as customer marker images let layer = marker([ 46.879966, -121.726909 ], { icon: icon({ iconSize: [ 25, 41 ], iconAnchor: [ 13, 41 ], iconUrl: 'assets/marker-icon.png', shadowUrl: 'assets/marker-shadow.png' }) }); Extensions There are several libraries that extend the core functionality of ngx-leaflet: Getting Help Here's a list of articles, tutorials, guides, and help resources: - ngx-leaflet on Stack Overflow - High-level intro to @asymmetrik/ngx-leaflet - Using @asymmetrik/ngx-leaflet in Angular CLI projects - Integrating 3rd Party Leaflet Libraries with @asymmetrik/ngx-leaflet and @angular/cli Changelog 2.6.0 Wrapping several map operations in NgZone.runOutsideAngular in order to prevent excessive dirty checking. If you encounter an unexpected issue due to this change, please file an issue. 2.5.0 Added the [leafletLayer] directive for adding/removing individual layers. 2.3.0 Renamed the package to ngx-leaflet Contribute PRs accepted. If you are part of Asymmetrik, please make contributions on feature branches off of the develop branch. If you are outside of Asymmetrik, please fork our repo to make contributions. License See LICENSE in repository for details. Credits Leaflet Is an awesome mapping package. Github Help us keep the lights on Dependencies Used By Total: 0
https://swiftpack.co/package/Asymmetrik/ngx-leaflet
In this article, we'll discuss the 50 most frequently asked C programming interview questions during technical interview rounds by various companies. The following C interview questions cover levels from easy to advanced. Even if you are a beginner in C, these questions will help you better understand the language and improve.

50 C Programming Interview Questions with Answers

There are two ways to do this. One of them is to use the increment operator ++ and the decrement operator --. For example, "x++" means increasing x by 1. Similarly, "x--" means reducing x by 1. Another way to write incremental statements is to use the conventional plus sign or minus sign. For "x++", another way to write it is "x = x + 1".

2) What is the difference between Call by Value and Call by Reference?

With call by value, you send the variable's value as a parameter to the function, while call by reference sends the variable's address. In addition, with call by value, the parameter's value in the caller is not affected by any operation inside the function, while with call by reference, operations inside the function can affect the caller's value.

3) Some coders debug their programs by placing comment symbols on some codes instead of deleting it. How does this aid in debugging?

Putting comment symbols /* */ around code, also called "commenting out", is a way to isolate code that you think may cause errors in the program without removing it. If the code is really correct, you just remove the comment symbols and continue. It also saves you the time and effort of having to re-enter the code if it was deleted first.

4) What is the difference between the expression "++a" and "a++"?

In the first expression, variable a is incremented first, and the resulting value is the one that will be used. This is also called prefix increment.
In the second expression, the current value of variable a would be the one used in the operation, before the value itself is increased. This is also called postfix increment.

5) What is a stack?

The stack is one form of data structure. Data is stored in stacks using the FILO (First In Last Out) method. At any given time, only the top of the stack is accessible, which means that in order to retrieve data stored deeper in the stack, you must first extract the items above it. Storing data on the stack is also referred to as a PUSH, while data retrieval is called a POP.

6) What is a sequential access file?

When you write programs that store and retrieve data from a file, you can designate the file in various forms. In a sequential access file, data is saved in sequential order: one piece of data is written to the file after another. To access specific data in a sequential access file, data must be read one item at a time until the correct one is reached.

7) What is variable initialization and why is it important?

This refers to the process in which a variable is assigned an initial value before it is used in the program. Without initialization, the variable would have an unknown value, which can lead to unpredictable results when it is used in calculations or other operations.

8) What are the features of the C language?

- ...memory and improves the efficiency of our program.
- Extensible: C is an extensible language, as it can adopt new features in the future.

9) What is the difference between malloc() and calloc()?

10) What is the difference between the local variable and global variable in C?

Following are the differences between a local variable and global variable:

) Write a program to swap two numbers without using the third variable?

#include<stdio.h>
#include<conio.h>
main()
{
int a=10, b=20; //declaration of variables.
clrscr(); //It clears the screen.
 printf("Before swap a=%d b=%d",a,b);
 a=a+b; //a=30 (10+20)
 b=a-b; //b=10 (30-20)
 a=a-b; //a=20 (30-10)
 printf("\nAfter swap a=%d b=%d",a,b);
 getch();
}

30) What would happen to X in this expression: X += 15; (assuming the value of X is 5)?

X += 15 is a short way of writing X = X + 15, so if the initial value of X is 5, then X becomes 5 + 15 = 20.

Write a program to print the Fibonacci series without using recursion.

#include<stdio.h>
#include<conio.h>
void main()
{
 int n1=0,n2=1,n3,i,number;
 clrscr();
 printf("Enter the number of elements: ");
 scanf("%d",&number);
 printf("%d %d ",n1,n2); //printing the first two terms.
 for(i=2;i<number;i++)
 {
  n3=n1+n2;
  printf("%d ",n3);
  n1=n2;
  n2=n3;
 }
 getch();
}

38) What does the format %10.2 mean when included in a printf statement?

This format is used for two things: to set the number of spaces allocated for the output number and to set the number of decimal places. The number before the decimal point is the allocated width; in this case, 10 spaces are allocated for the output number. If the output occupies fewer than 10 spaces, additional space characters are inserted before the actual number. The number after the decimal point determines the number of decimal places, in this case 2.

What is a pointer to pointer in C?

With the pointer-to-pointer concept, one pointer refers to the address of another pointer, forming a chain of pointers. A pointer contains the address of a variable; a pointer to a pointer contains the address of the first pointer. We can understand this concept with an example:

#include <stdio.h>
int main()
{
 int a=10;
 int *ptr,**pptr; // *ptr is a pointer and **pptr is a double pointer.
 ptr=&a;
 pptr=&ptr;
 printf("value of a is:%d",a);
 printf("\n");
 printf("value of *ptr is : %d",*ptr);
 printf("\n");
 printf("value of **pptr is : %d",**pptr);
 return 0;
}

Write a program to check whether a number is prime in C.

#include<stdio.h>
#include<conio.h>
void main()
{
 int n,i,m=0,flag=0; //declaration of variables.
 clrscr(); //It clears the screen.
 printf("Enter the number to check prime:");
 scanf("%d",&n);
 m=n/2;
 for(i=2;i<=m;i++)
 {
  if(n%i==0)
  {
   printf("Number is not prime");
   flag=1;
   break; //break keyword used to terminate the loop.
  }
 }
 if(flag==0)
  printf("Number is prime");
 getch(); //It reads a character from the keyboard.
}

Write a program to reverse a given number in C.

#include<stdio.h>
#include<conio.h>
main()
{
 int n, reverse=0, rem; //declaration of variables.
 clrscr(); //It clears the screen.
 printf("Enter a number: ");
 scanf("%d", &n);
 while(n!=0)
 {
  rem=n%10;
  reverse=reverse*10+rem;
  n/=10;
 }
 printf("Reversed Number: %d",reverse);
 getch(); //It reads a character from the keyboard.
}

50) Why is C considered a middle-level language?

This is because the C language is rich in functions that let it behave like a high-level language, while at the same time it can interact with hardware using low-level techniques. The well-structured approach to programming, together with the English-like keywords used in functions, makes it work like a high-level language. On the other hand, C can directly access memory structures in a way similar to assembly-language routines.
ABORT(3P)                POSIX Programmer's Manual                ABORT(3P)

PROLOG
       This manual page is part of the POSIX Programmer's Manual. The Linux
       implementation of this interface may differ (consult the corresponding
       Linux manual page for details of Linux behavior), or the interface may
       not be implemented on Linux.

NAME
       abort — generate an abnormal process abort

SYNOPSIS
       #include <stdlib.h>

       void abort(void);

DESCRIPTION
       The functionality described on this reference page is aligned with the
       ISO C standard. Any conflict between the requirements described here
       and the ISO C standard is unintentional. This volume of POSIX.1‐2008
       defers to the ISO C standard.

       The status made available to wait(), waitid(), or waitpid() by abort()
       shall be that of a process terminated by the SIGABRT signal. The
       abort() function shall override blocking or ignoring the SIGABRT
       signal.

RETURN VALUE
       The abort() function shall not return.

ERRORS
       No errors are defined.

       The following sections are informative.

EXAMPLES
       None.

APPLICATION USAGE
       Catching the signal is intended to provide the application developer
       with a portable means to abort processing, free from possible
       interference from any implementation-supplied functions.

RATIONALE
       The ISO/IEC 9899:1999 standard requires the abort() function to be
       async-signal-safe. Since POSIX.1‐2008 [...] of a message catalog
       descriptor using a standard I/O stream FILE object as would be
       expected by fclose().

FUTURE DIRECTIONS
       None.

SEE ALSO
       exit(3p), kill(3p), raise(3p), signal(3p), wait(3p), waitid(3p)

                                                                  ABORT(3P)
Greetings Monks!

I've got a weird problem. I have a script that is losing two args from @ARGV before the script finishes compiling. My best guess is that one of my own modules is shifting @ARGV in a BEGIN block, but I cannot find it. By putting lots of BEGIN { print join(', ',@ARGV)."\n" } blocks in the script and in the modules it loads, I've narrowed it down somewhat. The heavy lifting for this script is in a module I'm writing. Before I use my module, @ARGV is fine. After I use it, @ARGV is two items shorter. I put more of my @ARGV printers in my module, and @ARGV is fine at the top of the file all the way to the bottom. So, it looks like something is happening after the compiler finishes reading my module and before it gets back to reading the script that uses the module. Does anyone know what that might be?

Because I'm pretty sure that made no sense, I'll try to illustrate with an example:

Module.pm

package Module;
BEGIN { print 'beginning my module'.join(', ',@ARGV)."\n" }
# do lots of stuff
BEGIN { print 'ending my module'.join(', ',@ARGV)."\n" }
1;

script.pl

#! /usr/bin/perl -w -T
BEGIN: { print 'before using my module'.join(', ',@ARGV)."\n" }
use lib qw(.);
use Module;
BEGIN { print 'after using my module'.join(', ',@ARGV)."\n" }

Run it...

# ./script.pl three two one
before using my module three, two, one
beginning my module three, two, one
ending my module three, two, one
after using my module one

Note this is not what you see if you run my example code. This is what you get if you run my actual code. My actual code is too huge and embarrassing to post.

Woah! I just ran my example code and the results look like:

$ ./script.pl
beginning my module
ending my module
after using my module
before using my module

Which means my understanding of how this stuff is ordered is truly whacked... A BEGIN block is supposed to execute as soon as the compiler finishes reading it, right? How does the order from my example script come about then?
Hopefully this post includes enough rambling to alert the many of you who are smarter than me to a plausible error. I'm going to stop typing now. Thanks!

--Pileofrogs

P.S. I am not crazy

package Module;
BEGIN { print 'beginning my module '.join(', ',@ARGV)."\n" }
# do lots of stuff
BEGIN { print 'ending my module '.join(', ',@ARGV)."\n" }
sub import {
    print 'starting module import '.join(', ',@ARGV)."\n";
    # stuff
    pop @ARGV;
    print 'ending module import '.join(', ',@ARGV)."\n";
}
1;

$ ./script.pl a b c
before using my module a, b, c
beginning my module a, b, c
ending my module a, b, c
starting module import a, b, c
ending module import a, b
after using my module a, b

Dave

BEGIN {} is a BEGIN block. BEGIN: {} is a block with an unfortunately chosen label.

You can avoid editing all those modules by adding a hook into @INC as documented in require:

#!/usr/bin/perl -w -T
# install a 'hook' into @INC / see: perldoc -f require
BEGIN {
    unshift @INC, '.';    # replaces 'use lib(.);'
    unshift @INC, sub {
        printf "***** TRACE: use %-20s with ARGV=%s\n",
            $_[1], join(', ', @ARGV);
        return;
    };
}
BEGIN { print 'before using my module: '.join(', ',@ARGV)."\n" }
#use lib qw(.); # removed: would shift hook to 2nd position, disabling tracing
use Module;  # modified: sub import { shift @ARGV; }
use Module2; # just a copy of the unmodified Module.pm
BEGIN { print 'after using my module: '.join(', ',@ARGV)."\n" }
__END__

> ./script.pl a b c
before using my module: a, b, c
***** TRACE: use Module.pm with ARGV=a, b, c
beginning my module1: a, b, c
ending my module1: a, b, c
***** TRACE: use Module2.pm with ARGV=b, c
beginning my module2: b, c
ending my module2: b, c
after using my module: b, c

If you have several modules, you could at least narrow the problem down to two candidates. Then, you can add debugging code to those modules. Hopefully, no other module fiddles with @INC at the same time... Good luck!
Update: OK, here are sample modules Module.pm and Module2.pm:

Update2: In case the approach described above does not work:

BEGIN {
    *CORE::GLOBAL::require = sub {
        printf "===== TRACE: req %-20s with ARGV=%s\n",
            $_[0], join(', ', @ARGV);
        CORE::require( $_[0] );
    };
}

Update3: Trivial, but might also help to narrow down candidate modules:

find my/modules -name \*.pm -exec egrep -l @ARGV {} \;

I've considered this before, but never bothered to implement it: tie your array. Here's the idea: you have an array that something strange is happening to and you can't find where/when/why. You want to inspect it without altering your code with a bunch of print statements. Sure, there's the Perl debugger, and that's probably where you ought to turn first (and the main reason I haven't gotten around to giving this a try before). But today I'm going to try something different: tie an array to a class that provides debug info whenever the array is modified in some way.

Often tied entities are discouraged as they create action at a distance; it's very difficult to look at the primary source code and see why some behavior is happening -- too easy to forget about the tied behavior. But in this case, that's exactly what we're after: behavior that is mostly invisible to the primary script, but helpful to us in some way.

Start by using Tie::Array, and subclassing Tie::StdArray since it provides us with all the methods that emulate a plain old array. Then just override the ones we care about. It turns out we care about a lot of the methods (any that alter the array). But the source for Tie::Array (under the Tie::StdArray section) is a good starting point for maintaining default array behavior while providing hooks for additional functionality. Note that with the "use parent" pragma, we have to say '-norequire', since Tie::StdArray is part of Tie::Array, and has already been loaded. We could just manipulate @ISA directly, but that's not very Modern Perlish.
I used Carp because its cluck() function is perfectly verbose. And I created an object method called $self->debug() that is a setter and getter for debug status. With debug set to '1' (or any true value) we get a noisy tied array. Set to zero, the array becomes silent again.

Here's one fun aspect of a tied entity: we get the side-effect behavior (the primary objective of the tie is to create side-effects in our variables), and we also get the ability to control our variable's behavior via an object oriented interface. If our tied variable is @array, and our tie returns an object to $o, then $o->debug(1) will turn on debugging verbosity for @array.

What's it all for? Tie @ARGV in the beginning of your script (be sure to save its contents and restore them after the tie). Then watch what other parts of your script do to @ARGV. Put the tie in a BEGIN{} block, and put that block before any 'use Module;' statements that may involve tinkering with @ARGV.

Here's some somewhat messy code that demonstrates what I'm talking about: if you read the code you will also see that I'm creating a sort of inside-out object in tandem with the primary blessed array-ref. This is because Tie::Array and Tie::StdArray tie our array to an array-ref, which makes it simple to implement a tied array, but it doesn't help much if we need some namespace for additional storage per tied object. So by creating a hash called %STASH as a package global within the tie class module, I create room for a namespace. I can create keys in %STASH with names that are just the stringified version of the blessed object reference. And when the object is destroyed, I make sure to delete the key. There are probably better approaches nowadays, but it's been awhile since I played with such things, and that's how I remember doing it in the past.

Your output should be:

Now that you've seen this atrocity, be glad that Perl has a debugger.
;) Dave

If you favour the perl debugger, you can debug perl compile time statements by manually setting a breakpoint in a BEGIN block. If you add:

BEGIN { $DB::single = 1; }

to the top of your perl script, and then run it with perl -d, the debugger will stop there before your use statements and the module start-up code they run are evaluated.

Awesome! I tried the debugger with no luck. I was looking for this, but I didn't understand it from the perldebug doc.
Benchmark - Main - Test Cases - Test Case Details - Tool Support/Results - Quick Start - Tool Scanning Tips - RoadMap - Acknowledgements

Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015). Version 1.2 and forward of the Benchmark is a fully executable web application, which means it is scannable by any kind of vulnerability detection tool. The 1.2 release has been limited to slightly fewer than 3,000 test cases, to make it easier for DAST tools to scan it (so it doesn't take so long, and they don't run out of memory or blow up the size of their database). The 1.2 release covers the same vulnerability areas that 1.1 covers.

The test case areas and quantities for the Benchmark releases are: Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory. Every test case carries metadata like:

<test-metadata>
<category>ldapi</category>
<test-number>00001</test-number>
<vulnerability>true</vulnerability>
<cwe>90</cwe>
</test-metadata>

BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named "foo", and uses the value of this cookie when performing an LDAP query.
Here's the code for BenchmarkTest00001.java: package org.owasp.benchmark.testcode;("/BenchmarkTest00001") public class BenchmarkTest00001 extends HttpServlet { private static final long serialVersionUID = 1L; @Override public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doPost(request, response); } @Override public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // some code javax.servlet.http.Cookie[] cookies = request.getCookies(); String param = null; boolean foundit = false; if (cookies != null) { for (javax.servlet.http.Cookie cookie : cookies) { if (cookie.getName().equals("foo")) { param = cookie.getValue(); foundit = true; } } if (!foundit) { // no cookie found in collection param = ""; } } else { // no cookies param = ""; } try { javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext(); Object[] filterArgs = {"a","b"}; dc.search("name", param, filterArgs, new javax.naming.directory.SearchControls()); } catch (javax.naming.NamingException e) { throw new ServletException(e); } } } for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube and OWASP ZAP are available here against version 1.2 of the Benchmark:. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making finding vulnerabilities in Benchmark. We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we included a 'Commercial Average' page, which includes a summary of results for 6 commercial SAST tools along with anonymous versions of each SAST tool's scorecard. 
The Benchmark can generate results for the following tools: Free Static Application Security Testing (SAST) Tools: - PMD (which really has no security rules) - .xml results file - FindBugs - .xml results file (Note: FindBugs hasn't been updated since 2015. Use SpotBugs instead (see below)) - SonarQube - .xml results file - SpotBugs - .xml results file. This is the successor to FindBugs. - SpotBugs with the FindSecurityBugs plugin - .xml results file Note: We looked into supporting Checkstyle but it has no security rules, just like PMD. The fb-contrib FindBugs plugin doesn't have any security rules either. We did test Error Prone, and found that it does report some use of insecure ciphers (CWE-327), but that's it. Commercial SAST Tools: - CAST Application Intelligence Platform (AIP) - .xml results file - Checkmarx CxSAST - .xml results file - (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions) - .json results file (You can scan Benchmark w/Coverity for free. See:) - Thunderscan SAST - .xml results file - Veracode SAST - .xml results file - XANITIZER - xml results file (Their white paper on how to setup Xanitizer to scan Benchmark.) (Free trial available) We are looking for results for other commercial static analysis tools like: Grammatech CodeSonar, RogueWave's Klocwork, etc. If you have a license for any static analysis tool not already listed above and can run it on the Benchmark and send us the results file that would be very helpful. The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported. 
Free Dynamic Application Security Testing (DAST) Tools: Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know. - Arachni - .xml results file - To generate .xml, run: ./bin/arachni_reporter "Your_AFR_Results_Filename.afr" --reporter=xml:outfile=Benchmark1.2-Arachni.xml - OWASP ZAP - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you: - Tools > Options > Alerts - And set the Max alert instances to like 500. - Then: Report > Generate XML Report... Commercial DAST Tools: - Acunetix Web Vulnerability Scanner (WVS) - .xml results file (Generated using command line interface (see Chapter 10.) /ExportXML switch) - Burp Pro - .xml results file - IBM AppScan - .xml results file - Micro Focus (Formally HPE) WebInspect - .xml results file - Netsparker - .xml results file - Qualys Web App Scanner - .xml results file - Rapid7 AppSpider - .xml results file If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool. Commercial Interactive Application Security Testing (IAST) Tools: - Contrast Assess - .zip results file (You can scan Benchmark w/Contrast for free. See:) - Hdiv Detection (IAST) - .hlg results file - Seeker IAST - .csv results file Commercial Hybrid Analysis Application Security Testing Tools: - Fusion Lite Insight - .xml results file WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. 
It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.

What is in the Benchmark?

The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future. An expectedresults.csv is published with each version of the Benchmark (e.g., expectedresults-1.1.csv) and it specifically lists the expected results for each test case. Here's what the first two rows of this file look like for version 1.1 of the Benchmark:

# test name, category, real vulnerability, CWE, Benchmark version: 1.1, 2015-05-22
BenchmarkTest00001, crypto, TRUE, 327

This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), that this is a real vulnerability (as opposed to a false positive), and that the issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the tens of thousands of test cases in the Benchmark. Each time a new version of the Benchmark is published, a new corresponding results file is generated, and each test case can be completely different from one version to the next.

The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:

- Open source vulnerability detection tools to be run against the Benchmark
- A scorecard generator, which computes a scorecard for each of the tools you have results files for.

What Can You Do With the Benchmark?
- Compile all the software in the Benchmark project (e.g., mvn compile)
- Run a static vulnerability analysis tool (SAST) against the Benchmark test case code
- Scan a running version of the Benchmark with a dynamic application security testing tool (DAST) - Instructions on how to run it are provided below
- Generate scorecards for each of the tools you have results files for - See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for

Getting Started

Before downloading or using the Benchmark make sure you have the following installed and configured properly:

GIT: or
Maven: (Version: 3.2.3 or newer works.)
Java: (Java 7 or 8) (64-bit)

Getting, Building, and Running the Benchmark

To download and build everything:

$ git clone
$ cd benchmark
$ mvn compile (This compiles it)
$ runBenchmark.sh/.bat - This compiles and runs it.

Then navigate to: to go to its home page. It uses a self signed SSL certificate, so you'll get a security warning when you hit the home page.

Note: We have set the Benchmark app to use up to 6 Gig of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool probably also requires 3+ Gig of RAM. As such, we recommend having a 16 Gig machine if you are going to try to run a full DAST scan. And at least 4 or ideally 8 Gig if you are going to play around with the running Benchmark app.

Using a VM instead

We have several preconstructed VMs or instructions on how to build one that you can use instead:

- Docker: A Dockerfile is checked into the project here. This Docker file should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, cd to /VMs then run:

./buildDockerImage.sh --> This builds the Docker Benchmark VM (This will take a WHILE)
docker images --> You should see the new benchmark:latest image in the list provided
# The Benchmark Docker Image only has to be created once.
To run the Benchmark in your Docker VM, just run:

./runDockerImage.sh --> This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.

If successful, you should see this at the end:

[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]
[INFO] Press Ctrl-C to stop the container...

Then simply navigate to: from the machine you are running Docker on. Or if you want to access it from a different machine:

docker-machine ls (in a different terminal) --> To get the IP the Docker VM is exporting (e.g., tcp://192.168.99.100:2376)

Navigate to: in your browser (using the above IP as an example)

- Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:

sudo yum install git
sudo yum install maven
sudo yum install mvn
sudo wget -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
git clone
cd benchmark
chmod 755 *.sh
./runBenchmark.sh -- to run it locally on the VM.
./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).

Running Free Static Analysis Tools Against the Benchmark

There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.

Generating Test Results for PMD:
$ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)

Generating Test Results for FindBugs:
$ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)

Generating Test Results for FindBugs with the FindSecBugs plugin:
$ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)

In each case, the script will generate a results file and put it in the /results directory.
For example: Benchmark_1.2-findbugs-v3.0.1-1026.xml This results file name is carefully constructed to mean the following: It's a results file against the OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and it took 1026 seconds to run the analysis. NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this and the Benchmark Scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the Scorecard generator might do this automatically anyway. Generating Scorecards The scorecard generation application BenchmarkScore is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark and compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the: Tools Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for that tool and add it to the supported tools list. The following command will compute a Benchmark scorecard for all the results files in the /results directory. The generated scorecard is put into the /scorecard directory. createScorecard.sh (Linux) or createScorecard.bat (Windows) An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like. We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files. A tool will not score well against the wrong expected results. 
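The naming convention can be unpacked mechanically. Here is a sketch in Java (for illustration only — the real BenchmarkScore application may extract this metadata differently):

```java
// Sketch: unpack the Benchmark results-file naming convention
//   <benchmark>_<version>-<tool>-v<toolversion>-<seconds>.xml
// e.g. "Benchmark_1.2-findbugs-v3.0.1-1026.xml"
// (illustration only -- not the actual BenchmarkScore parsing code).
public class ResultsFileName {

    /** Returns { benchmarkVersion, tool, toolVersion, seconds }. */
    public static String[] parse(String name) {
        String base = name.replaceFirst("\\.xml$", "");
        String[] parts = base.split("-");
        String benchmarkVersion = parts[0].split("_")[1]; // "1.2"
        String tool = parts[1];                           // "findbugs"
        String toolVersion = parts[2].substring(1);       // strip leading 'v' -> "3.0.1"
        String seconds = parts[3];                        // "1026"
        return new String[] { benchmarkVersion, tool, toolVersion, seconds };
    }

    public static void main(String[] args) {
        String[] p = parse("Benchmark_1.2-findbugs-v3.0.1-1026.xml");
        System.out.println(String.join(" ", p)); // 1.2 findbugs 3.0.1 1026
    }
}
```

Note this simple split would break if a tool version itself contained a hyphen; the convention above assumes it does not.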
Customizing Your Scorecard Generation The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like: mvn validate -Pbenchmarkscore -Dexec.args="expectedresults-1.2.csv results" This Maven command simply says to run the BenchmarkScore application, passing in two parameters. The 1st is the Benchmark expected results file to compare the tool results against. And the 2nd is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, like 1.1 results for example, then you would do something like this instead: mvn validate -Pbenchmarkscore -Dexec.args="expectedresults-1.1.csv 1.1_results" To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this: mvn validate -Pbenchmarkscore -Dexec.args="1.1_results/expectedresults-1.1.csv 1.1_results" In all cases, the generated scorecard is put in the /scorecard folder. WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results. It is for just this reason that the Benchmark project isn't releasing such results itself. People frequently have difficulty scanning the Benchmark with various tools due to many reasons, including size of the Benchmark app and its codebase, and complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. 
If you've learned any tricks on how to get better or easier results for a particular tool against the Benchmark, let us know or update this page directly.

Generic Tips

Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass more memory to it like this: -Xmx4G (This gives the Java application 4 Gig of memory)

SAST Tools

Checkmarx

The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out-of-the-box. Please notice that the OWASP Benchmark "hides" some vulnerabilities in dead code areas, for example:

if (0>1) { //vulnerable code }

By default, CxSAST will find these vulnerabilities, since Checkmarx believes that including dead code in the scan results is a SAST best practice. Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities, and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities as False Positives, as a result lowering Checkmarx's overall score. Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:

- Open the CxAudit client for editing Java queries.
- Override the "Find_Dead_Code" query.
- Add the commented text of the original query to the new override query.
- Save the queries.

FindSecBugs

FindSecBugs uses FindBugs to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.
Micro Focus (Formerly HP) Fortify

If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:

set AWB_VM_OPTS="-Xmx2G -XX:MaxPermSize=256m"
export AWB_VM_OPTS="-Xmx2G -XX:MaxPermSize=256m"
auditworkbench -64

We found it was easier to use the Maven support in Fortify to scan the Benchmark and to do it in 2 phases: translate, and then scan. We did something like this:

Translate Phase:

export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin
export SCA_VM_OPTS="-Xmx2G -version 1.7"
mvn sca:clean
mvn sca:translate

Scan Phase:

export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin
export SCA_VM_OPTS="-Xmx10G -version 1.7"
mvn sca:scan

PMD

We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)

OWASP ZAP

ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:

- Tools --> Options --> JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).

To run ZAP against Benchmark:

- Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --> Options --> Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK
- Click on the Show All Tabs button (if the spider tab isn't visible)
- Go to the Spider tab (the black spider) and click on the New Scan button
- Enter: into the 'Starting Point' box and hit 'Start Scan'
- Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.
- When the Spider completes, click on the 'benchmark' folder in the Site Map, right click, and select: 'Attack --> Active Scan'.
- It will take several hours, like 3+, to complete (it's actually likely to simply freeze before completing the scan - see the NOTE below).

For a faster active scan you can:
- Disable the ZAP DB log (in ZAP 2.5.0+):
  - Disable it via Options / Database / Recover Log, or
  - Set it on the command line using "-config database.recoverylog=false"
- Disable unnecessary plugins / technologies when you launch the Active Scan:
  - On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection.
  - Go to the Technology tab, disable everything, and only enable: MySQL, YOUR_OS, Tomcat.
  - Note: This 2nd performance improvement step is a bit like cheating, as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies are helpful in finding more issues. So a fair performance comparison of ZAP to other tools would leave all this on.

To generate the ZAP XML results file so you can generate its scorecard:
- Tools > Options > Alerts - and set the Max alert instances to like 500.
- Then: Report > Generate XML Report...

NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!

Things we tried that didn't improve the score:
- AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases, so the AJAX Spider does not appear to be needed against Benchmark v1.2.
- XSS (Persistent) - There are 3 of these plugins that run by default. There aren't any stored XSS in Benchmark, so you can disable these plugins for a faster scan.
- DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues.
There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.

IAST Tools

Interactive Application Security Testing (IAST) tools work differently than scanners. IAST tools monitor an application as it runs to identify vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.

Contrast Assess

To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 GB of memory.

- Ensure your environment has Java, Maven, and git installed, then build the Benchmark project:
$ git clone
$ cd Benchmark
$ mvn compile
- Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.
$ cp ~/Downloads/contrast.jar tools/Contrast
- In Terminal 1, launch the Benchmark application and wait until it starts:
$ cd tools/Contrast
$ ./runBenchmark_wContrast.sh (.bat on Windows)
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building OWASP Benchmark Project 1.2
[INFO] ------------------------------------------------------------------------
[INFO] ...
[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]
[INFO] Press Ctrl-C to stop the container...
- In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.
$ ./runCrawler.sh (.bat on Windows)
- A Contrast report has been generated in /Benchmark/tools/Contrast/working/contrast.log. This report will be automatically copied (and renamed with a version number) to the /Benchmark/results directory.
$ more tools/Contrast/working/contrast.log
2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine
2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012
2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2
2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.
2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved
2016-04-22 12:29:29,717 [main b] INFO - ...
- Press Ctrl-C to stop the Benchmark in Terminal 1. (Note: on Windows, select "N" when asked "Terminate batch job (Y/N)".)
[INFO] [talledLocalContainer] Tomcat 8.x is stopped
Copying Contrast report to results directory
- In Terminal 2, generate scorecards in /Benchmark/scorecard:
$ ./createScorecards.sh (.bat on Windows)
Analyzing results from Benchmark_1.2-Contrast.log
Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv
Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html
- Open the Benchmark Scorecard in your browser:
/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html

Hdiv Detection

Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark. You'll see that their instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.
https://www.owasp.org/index.php?title=Benchmark&amp;oldid=239078
MVC3 (Model View Controller) - A New MVC Framework with the Razor Engine by Microsoft

MVC (Model View Controller)

MVC (Model View Controller) is a triad architecture in which each of the components - Model, View and Controller - is independent of the others. Because these components are loosely coupled, the MVC architecture is more expandable and scalable, so it is typically used when the project needs to scale in the future.

MVC3 application: This article walks through the creation of a sample application.

1. New Project -> ASP.Net MVC3 Web Application.
2. Provide the name "MVC3RazorDemo" and click OK. A project structure will be created for MVC3 as shown below. This structure contains the folders:
Content - Contains the styles for the pages. We can add our own styles and formatting.
Controllers - For creating the actions and events related to the View (UI).
Models - The mediator which is used to carry the data from the Model to the View.
Views - For the UI (User Interface). By default it creates a few views which are used for layout purposes, for showing errors, and for the start page. We can also observe the files in Views with the extension cshtml (C# HTML). These are used for the View in an MVC3 Razor application.
3. Now the first thing is that we need to create the Controller and Model classes.
4. To create the Controller class, right click on the Controllers folder -> Add -> Controller.
5. The Add Controller window will be displayed. Provide the name of the controller, "MyController", and click Add. Here we can see that the Template dropdown contains 3 different values. The Empty Controller will add the default Index method, while the "Controller with read/write actions and Views, using Entity Framework" option will create controller methods which support Entity Framework mapping. The Controller with empty read/write actions will write the default methods for read and write actions. For now, we will choose the default one - Empty Controller.
6. Now, to display the data, we need the Model class which will be used to hold the data. So create a Model class.
7. Go to the Models folder. Right click -> Add New -> Class.
8. Provide the name "CustomerModel" and click OK.
9. In this class, we will define some public properties which will hold the data during execution.
10. We also need a class which will have the list of customers to display, so we will create a new class called "Customers" inside this Model class which will use the public properties and get the customers' data. We are not using a database connection here, but you can write the database connection, command, etc. in place of the list to get the required customers' data.
11. Create the new class "Customers" inside the Model class and initialize the list of data inside the constructor of the class.
12. So far, all we have done is write the Customers class in the Model, which holds the list of customers.
13. Now, to display this list of customers, we need to call this method in the Controller class by creating an object. For this, we first need to provide the Model class reference: using MVC3RazorDemo.Models; If you are not able to get this reference, build the project and then try to add it again.
14. Now create the object of the Model class "Customers" and get the customer list in the Controller class.
15. Now our Controller and Model are done. We need to create the View to display this data. To create the View, go to the Index method of the Controller class, right click -> Add View.
16. Here we can see that we can add the View for the Controller class method. There are also various other options - for example, if the View already exists, we can navigate to it.
17. By clicking Add View, a new window opens to choose the properties:
View name: Name of the view.
View Engine: It contains 2 types - Razor (CSHTML) and ASPX.
Create a strongly-typed view: This checkbox is used to create a strongly-typed view. It means that the view will be bound to the data source directly.
Model class: This dropdown list shows the existing classes to select for the View.
Scaffold template: List of templates for the view.
18. When clicking the Add button, it will generate the code for the view automatically. In the above example, we selected List, so the list of customers will be displayed in the View.
19. The generated code will look like below:
20. So this is the way to create the views for the application. Like this, we can create the MVC architecture with the Razor engine (MVC 3). Basically, the work of the Razor engine is to create the Views automatically without writing any code.

Processing of MVC 3 Applications:

1. The processing of an MVC application starts from the Controller class. The Controller class takes care of handling each action and event which is raised from the View or front end.
2. The client or end user requests a particular page by performing some action in the application. The request starts with the Controller class. The Controller class contains the event and action methods which get triggered, and then the processing starts.
3. For example, suppose the end user wants to open the Customer Details page (CustomerDetail.aspx) and the URL for the page is:
4. This URL shows the below parts -
WebSite Name:-
Controller Name:- CustomerController
Method name:- GetCustomerDetails
Parameter:- 101
It means there must be a controller class with the name CustomerController.cs which has a method called GetCustomerDetails, and this method takes the parameter 101.
5. So this whole URL will be parsed to get the required information related to the controller class, its method and the parameter (if any).
6. After parsing this URL, the runtime engine searches for these components and does the processing accordingly.
7. In the GetCustomerDetails method, there will be a call to the Model class to process the request, get the response back to the controller, and then open the View with the returned data.

Comment: Nice explanation. Can you explain how to do validations in MVC?
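The Model and Controller classes described in steps 7-14 appear in the article only as screenshots. A sketch of what they would contain is shown below - the class names follow the article (CustomerModel, Customers, MyController), but the property names and sample data are my own illustration:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;
using MVC3RazorDemo.Models;

namespace MVC3RazorDemo.Models
{
    // Step 9: public properties which hold the data during execution.
    public class CustomerModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Steps 10-11: the Customers class initializes its list in the
    // constructor; a database query could replace this hard-coded list.
    public class Customers
    {
        public List<CustomerModel> CustomerList { get; private set; }

        public Customers()
        {
            CustomerList = new List<CustomerModel>
            {
                new CustomerModel { Id = 101, Name = "Customer A" },
                new CustomerModel { Id = 102, Name = "Customer B" }
            };
        }
    }
}

namespace MVC3RazorDemo.Controllers
{
    // Steps 13-15: the controller builds the model and hands the list
    // to the Index view, which the List scaffold template then renders.
    public class MyController : Controller
    {
        public ActionResult Index()
        {
            var customers = new Customers();
            return View(customers.CustomerList);
        }
    }
}
```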
https://www.dotnetspider.com/resources/44324-mvc3-model-view-controller-a-new-mvc-framework-with-razor-engine-by-microsoft.aspx
Optimization should be considered even before starting the game - in fact, even before choosing your assets, most importantly your 3D meshes. Here are some important tips which I collated after reading books and web articles and going through videos.

Tip #1: Minimize Vertex Count

The Unity renderer has lots of techniques and optimizations built in to draw meshes quickly and efficiently across different devices, but you can really help the renderer do its work even more efficiently if you reduce the vertices in your meshes as far as possible. Note that the Stats panel displays the number of triangles and vertices currently being rendered - not the total number of vertices and triangles actually in the scene, which will often be more, because the camera usually does not see the entire scene at once.

Tip #2: Minimize Materials

Unity allows you to create as many materials as you need for your meshes and objects, and you can even assign multiple materials to the same mesh. Using materials comes at a performance cost, however: it adds draw calls, and draw calls are relatively expensive on any platform. Try to avoid using multiple textures for a single mesh; instead, share the same image for all parts. By using atlas textures for 2D games, objects can share the same texture and material. Sharing materials in this way between objects allows the Unity renderer to internally batch those objects into the same draw call, which typically performs better than drawing those objects in separate calls. Refer to Texture2D.PackTextures.

Tip #3: Use Per-Platform Texture Settings

If you select a texture in the Unity Project panel and examine its settings in the Object Inspector, you'll see its options regarding size and compression are customizable on a per-platform basis. This means you can adjust the size and compression of textures differently for each target platform: Windows, Mac, Android, iOS, and so on.
The trick is to try and build your game so that optimization is possible without being too noticeable, using tricks such as reducing the amount of alpha transparency in textures so that you can use a non-alpha compression format, or using texture atlases to put everything onto a single texture rather than several separate ones.

Scripting Optimization

Tip #4: Cache Components and Objects

One of the most common C# scripted statements when working with the Unity API is GetComponent. Rather than repeating such lookups every frame, cache the result in a member field once and reuse it.

OnBecameVisible and OnBecameInvisible

These callbacks are tied into the rendering system. As soon as any camera can see the object, OnBecameVisible will be called; when no camera sees it anymore, OnBecameInvisible will be called. This is useful in some cases, but often for AI it is not useful, because enemies would become disabled as soon as you turn the camera away from them.

using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    void OnBecameVisible() {
        enabled = true;
    }
    void OnBecameInvisible() {
        enabled = false;
    }
}

Use triggers

A simple sphere trigger can work wonders, though. You get OnTriggerEnter/Exit calls when entering or exiting the sphere of influence you want:

using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    void OnTriggerEnter(Collider c) {
        if (c.CompareTag("Player"))
            enabled = true;
    }
    void OnTriggerExit(Collider c) {
        if (c.CompareTag("Player"))
            enabled = false;
    }
}

Use Coroutines

The problem with Update calls is that they happen every frame. Quite possibly, checking the distance to the player could be performed only every 5 seconds. This would save a lot of processing power.

Tip #6: Avoid OnGUI and the GUI Class

The OnGUI function and the GUI class can be two of the biggest drains on performance, especially for mobile games. OnGUI is expensive primarily because it's called multiple times per frame. This means you almost never use it to perform game logic or core functionality - the purpose of OnGUI is exclusively for drawing GUI elements.
The GUI class can work well for smaller and simpler interfaces, but should really be avoided for complex interfaces with many interactive elements, such as inventories, statistics panels, mini-maps, option screens, and type-in dialogs. I've almost never seen high performance come from the default GUI class when implementing more feature-filled GUIs. To implement these, Unity offers very little native support. There are two main and popular solutions: either a custom solution is developed (using atlas textures, for example), or a third-party add-on such as NGUI or EZGUI must be purchased.

Tip #7: Use Object Pooling

Imagine this: the player character is firing a fast-reload weapon (such as a chain gun) and each shot spawns a new bullet into the level. The fire rate for this weapon means that two bullets are generated per second. Assuming the bullet object is created in the project as a prefab, how should the bullet generation be scripted? One way to handle this is to call the Instantiate function for each bullet whenever the gun is being fired. This method might be called "Dynamic Object Creation." However, there's another method: at level start-up you could generate a large batch of bullets (perhaps 30) and store them off-screen. Then, whenever the player fires a weapon, you would continually show, hide, and move these pre-generated bullets as required to simulate bullets being spawned, as opposed to really spawning them. This latter method is known as Object Pooling. In general, Object Pooling is to be preferred over Dynamic Object Creation because it avoids the performance issues that sometimes come from dynamic memory allocation, especially on mobile operating systems. The upshot of this is: avoid instantiating and destroying objects dynamically. Instead, generate objects at level start-up, and then show and hide them as required to simulate object destruction and creation.

Profiler

What Is the Profiler?
The profiler is an advanced tool for finding out all kinds of important information about your system resources during playback. It can be used to profile CPU, rendering, memory, audio, and physics, so that you can find ways to optimize toward a better-performing game.

Conclusion

Performance optimization is the most important aspect of your game, and you should consider it even before starting your game design. These are a few tips to consider before choosing your meshes, textures and other assets for your game. The same is applicable to 2D game development, where you use sprites with textures. These tips are also not limited to the Unity3D game engine alone and can be considered for any game engine or framework.
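To make Tip #7 concrete, here is a minimal pooling sketch. It is my own illustration, not code from the article - the class and method names are invented - but it follows the batch-at-start-up, show/hide approach described above:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BulletPool : MonoBehaviour
{
    public GameObject bulletPrefab;
    public int poolSize = 30;   // pre-generate a batch at level start-up

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Start()
    {
        // Instantiate the whole batch once, hidden, instead of per shot.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject bullet = Instantiate(bulletPrefab);
            bullet.SetActive(false);
            pool.Enqueue(bullet);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        if (pool.Count == 0) return null;   // pool exhausted
        GameObject bullet = pool.Dequeue();
        bullet.transform.position = position;
        bullet.SetActive(true);             // "show" instead of Instantiate
        return bullet;
    }

    public void Despawn(GameObject bullet)
    {
        bullet.SetActive(false);            // "hide" instead of Destroy
        pool.Enqueue(bullet);
    }
}
```

The weapon script would call Spawn when firing and Despawn when a bullet hits something or times out, so no allocations happen during gameplay.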
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/unity-performance-optimization-tips-r3772/
Containers are supported in Pro and Enterprise plans. Pro subscriptions include 10 containers for each paid host. Enterprise subscriptions include 20 for each paid host. This container count is averaged across your entire infrastructure. Additional containers are billed at $0.002 per container per hour. In addition, you can purchase prepaid containers at $1 per container per month. Contact Sales or your Customer Success Manager to discuss containers for your account.

Kubernetes creates pause containers to acquire the respective pod's IP address and set up the network namespace for all other containers that join that pod. Datadog excludes all pause containers from your quota and does not charge for them (requires Agent 5.8+).

Fargate is charged based on the concurrent number of tasks.

For technical questions, contact Datadog support. For billing questions, contact your Customer Success Manager.
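As a worked illustration of how the averaged allotment and the hourly overage rate combine - the function name and the 730-hour month are my own assumptions; the 10-per-host Pro allotment and the $0.002/hour rate come from the text above:

```python
def monthly_container_overage(paid_hosts, avg_containers, hours_in_month=730,
                              included_per_host=10, hourly_rate=0.002):
    """Estimated monthly container overage charge.

    included_per_host=10 matches the Pro allotment described above
    (use 20 for Enterprise). The allotment is averaged across all hosts,
    so only the excess over the pooled total is billable.
    """
    included = paid_hosts * included_per_host
    extra = max(0, avg_containers - included)
    return extra * hourly_rate * hours_in_month

# 5 paid Pro hosts include 50 containers; averaging 60 containers leaves
# 10 extra, billed at $0.002/hour -> about $14.60 over a 730-hour month.
overage = monthly_container_overage(5, 60)
```

At that point, prepaid containers at $1 per container per month would be the cheaper option for sustained overage, since $0.002 x 730 hours is about $1.46 per container.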
https://docs.datadoghq.com/account_management/billing/containers/
I'm a total newbie and just found myself in need of a useful tool called "Clarifai Photo Sorter". I've installed Python 2.7 on Windows 7 64-bit, downloaded the Clarifai API Python Client, unzipped the package, and tried to run setup.py as the installation guide in README.md suggests. A black window appears and closes by itself. I tried to run setup.py in IDLE and got the following error:

Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    from setuptools import setup, find_packages
ImportError: No module named setuptools

EDIT: Downloaded get-pip.py from here, but then got another error while trying to run setup.py: error: no commands supplied

I'm stuck here, but I have other questions regarding the Clarifai Photo Sorter install. Next, the Clarifai API Python Client suggests running the following: pip install clarifai==2.0.20 But I'm not sure where I should type this and the other pieces of code on that page - in the Python command line?

Hey Adrian - you'll want to run the pip command on the command line. Were you able to install it?

Thanks. I've fixed that issue.
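Putting the thread's resolution together: these commands go in a Windows Command Prompt (cmd.exe), not the Python shell or IDLE. The sequence below is a sketch of the steps discussed above (get-pip.py installs pip along with setuptools, which resolves the first ImportError; the version pin comes from the client's own docs):

```shell
# Run from cmd.exe, in the folder where get-pip.py was downloaded
python get-pip.py
pip install clarifai==2.0.20
```

Running setup.py directly is not needed once pip installs the package.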
http://community.clarifai.com/t/need-help-installing-clarifai-photo-sorter/476
The last post detailed how the TrackloggingService worked, but started the service only when the main activity was launched. Now comes the time to hook it into the Android boot sequence. Here is how: after boot completes, the Android system broadcasts an intent with the action android.intent.action.BOOT_COMPLETED. All we need now is an IntentReceiver - now called a BroadcastReceiver - to listen and act on it. This is how the class looks:

import android.content.BroadcastReceiver;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

public class LocationLoggerServiceManager extends BroadcastReceiver {

    public static final String TAG = "LocationLoggerServiceManager";

    @Override
    public void onReceive(Context context, Intent intent) {
        // just make sure we are getting the right intent (better safe than sorry)
        if ("android.intent.action.BOOT_COMPLETED".equals(intent.getAction())) {
            ComponentName comp = new ComponentName(context.getPackageName(),
                    LocationLoggerService.class.getName());
            ComponentName service = context.startService(new Intent().setComponent(comp));
            if (null == service) {
                // something really wrong here
                Log.e(TAG, "Could not start service " + comp.toString());
            }
        } else {
            Log.e(TAG, "Received unexpected intent " + intent.toString());
        }
    }
}

The key is of course the onReceive() method. I have decided to check that it is actually the Intent I am expecting, but otherwise it is straightforward: start the service and return. The receiver needs to be declared in the manifest, e.g. with the following entry:

<receiver android:name=".LocationLoggerServiceManager"
          android:enabled="true"
          android:exported="false">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
    </intent-filter>
</receiver>

Furthermore, permission to listen for this specific event needs to be declared in the security settings:

<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />

That's it. Now the service will be started as soon as Android has finished booting. Now we will need something to make this user-configurable...

9 comments:

Thanks! This was useful.

This was very helpful. It got me unstuck. Thank you.

Hi, I am getting the following error when I added your code: "The application has stopped unexpectedly, please try again."

Without more information I do not think anyone will be able to help you. It looks like the code worked for others. It might be best to have a look in the debugger at what is happening in your code. I assume it is some form of ActivityNotFound exception...

Thank you! And I have another question: how can I start the LocationLoggerServiceManager - or does it need to be registered?

I'm new to Android and Java. I want to know how to start the LocationLoggerServiceManager. Thanks in advance.

This seems to be working in an emulator, but on an actual device it does not. I tested the code on both 2.1 and 2.2 ZTE Blade Android handsets. One device requests a PIN code and the other does not have a SIM card in it, so it cannot be that the broadcast would be delayed, causing a timeout or something. Hopefully someone will get the code fixed?

Very good post, it was really informative. Thanks a lot for posting.
CC-MAIN-2017-26
refinedweb
477
51.34
Type: Posts; User: jfaust

zerver, Anything's possible--it's only software after all. It's just a matter of cost. I've learned that venturing off the beaten MFC path is fraught with peril. We're looking to other...

Alin, I did try handling WM_GETMINMAXINFO. It did correctly restrict the size of the window, but it still behaved maximized. It lost the title bar. Also when it was the active window, the other...

zerver, Thanks for the answer. I tried it, and this solution only blocks the action of clicking on the maximize button. So, double clicking on the title bar still maximizes. There's likely...

I have an MDI application with windows that can maximize, minimize, etc.--all the default functionality you'd expect. I have one MDI child window that behaves as a modeless dialog that I want to...

Post your code, please. Jeff

Yes. You can use it, change it, redistribute it. It can be used in open source and commercial projects. It's a completely open license that is much more relaxed than GPL, for instance. It does...

I would suggest using both boost::filesystem and boost::regex. They are both portable and powerful. Jeff

Take your example of adding a derived class 'truck'. In the casting example, it would compile and run without any changes, perhaps leading to undesired behavior. In the Visitor example, you will...

Unit tests should not care about inheritance or polymorphism, but about testing interfaces, pre- and post-conditions, functionality, etc. "Unit testing" is overused and can mean different things. ...

vector will always be contiguous, even after reallocating. The usual reason for choosing a deque over a vector is that it may not be possible for the OS to find a large block of contiguous memory,...

Interesting. You must be calling this a lot. I'm assuming that you are profiling on a release build, not a debug build, and running outside the debugger. Just out of curiosity, what are the...

I missed the dll requirement. This will only work as statically linked. One option, if you know all the types that will be used, is to move the getInstance to an implementation file and...

This works for me:

#include <iostream>
using namespace std;

template <class T>
class Singleton

And if you're not using VC++ 6.0 (which has a bug), you can also make the destructor private. Jeff

Definitely a Visitor should have a pure virtual interface. The resulting compile error generated when a new type is added is what separates the Visitor Pattern from type checking. It forces you to...

The overhead isn't bad: it's a double-dispatch technique, resulting in two virtual function calls. This is usually not an issue. One last recommendation: boost::variant is a simple object...

Sounds good to me! I think you're on the right track. If you get tripped up with the code, let us know. Jeff

But that's a different problem, with varying solutions that don't sacrifice the fundamental design. The file/folder relationship is simple and obvious, and should be modeled in a simple and obvious...

timestamp? Jeff

Reading this thread, I was also going to suggest the Visitor pattern. Be aware that having isFile()/isFolder() methods is just as bad as using dynamic_cast<> from a design standpoint. They both...

'for' begins another scope, allowing you to hide outer-scope variables. 'float i = 0.0f, j = 0;' defines two floats, i and j. Solution: set j = 0 before the loop:

int j = 0;
for(float i =...

And also the reason I used the word "clarify" in my post :D Jeff

To confuse even more (and hopefully clarify by doing so):

A a0;
A a1(a0); // copy constructor
A a2 = a1; // copy constructor
a2 = a1;   // assignment operator

Jeff

Although there is some amount of error inherent in floating point calculations, you can keep that error constant by avoiding operations that accumulate the error. For instance, to iterate over a...

You can use boost::rational for exact computation. However, your application most likely does not need it. A certain amount of error is acceptable in most applications, including virtually all...
http://forums.codeguru.com/search.php?s=3cfdd3120124ce16a514e5e81cef8f23&searchid=7940535
Here, AvatarView has no state, all properties are marked with let, which means it's completely immutable. Instead of responding to state changes, it will be rebuilt when the state changes. This type of immutable view has different names in different reactive frameworks: functional, stateless... dumb. (Yes, dumb.) Using stateless views has a lot of advantages. For starters, they're simple. You don't have to track or debug state changes. You can easily preview and test the view without having to modify its state. Stateless views are decoupled from your frameworks and libraries, whether it's networking with Alamofire or a database framework like Core Data. They're easy to reuse and even copy and paste to different parts of the app or entirely different projects. As you can see, statelessness is a good thing. You can't always use this pattern, though. To pass state into the initializer, you need to be the view creating the stateless view. If you think of the view hierarchy as a tree, passing state in the initializer is only possible to the view's direct children. Think hard about whether a view needs to track its internal state variables or if you can get away with passing the state in the initializer. Try to use stateless views as much as you can. In families, just like in SwiftUI, parents are usually the ones telling kids the state of the world. Sometimes, however, your dad might not know how to install Viber to call your uncle. In that case, you're the one passing information to your parents. This happens in SwiftUI too. Passing data in the initializer is great if you want to pass data to the view's children. If you want the reverse, however, you need a binding. Bindings enable two-way communication between your views. They're usually used with interactive views like text fields, toggles, checkboxes and similar. They allow two views to share a single piece of state. 
For instance, when you created ErrorTextField, you used a binding to a String instead of a state variable. {% c-block language="swift" %} struct ErrorTextField: View { ... let text: Binding<String> ... } {% c-block-end %} Later, you initialized this view in the login screen by passing it the binding: {% c-block language="swift" %} struct LoginView: View { @State private var email = "" var body: some View { ErrorTextField( ... text: $email, ...) } } {% c-block-end %} In the above code, the $ operator converted a state to a binding. A binding is a kind of pointer to a piece of state. LoginView owns the state, but by giving ErrorTextField a binding to that state, it lets the text field change the value of the state. This means that both LoginView and ErrorTextField can change the value of the email, and both will always update to match the latest value. Passing bindings, just like passing the state values directly, is used when you want to share state between a parent view and its direct children. As opposed to passing state values, though, a binding adds statefulness to the component, making it more complex to read, write, test and preview. Keep that in mind when using this approach. The above two ways of sharing data have both been between two directly linked views. Sometimes, you want to share data between two unrelated screens. For instance, the currently logged in user is shared between the feed, the settings screen and the profile screen. Or, there might be a registration flow with multiple screens that stores the data of each screen at a shared location. One way to do this is to use bindings or to pass the user directly to child views. The disadvantage of that approach is that it's far too cumbersome. You'd have to pass the user to each child separately. For any change that one of the child views makes, you'd have to roll back up to the main view to update the user, and then propagate back down the hierarchy so all views update.
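The two approaches covered so far — passing a value in the initializer and passing a binding — can be sketched language-agnostically. The following is a hypothetical Python analogue, not SwiftUI API; all class and method names are illustrative:

```python
from dataclasses import dataclass

# 1. Passing state in the initializer: the child is immutable and is simply
#    rebuilt by its parent whenever the parent's state changes.
@dataclass(frozen=True)
class AvatarView:
    is_online: bool

    def indicator_color(self) -> str:
        # Mirrors: Circle().foregroundColor(isOnline ? .green : .gray)
        return "green" if self.is_online else "gray"

# 2. Passing a binding: a read/write handle onto state that is owned
#    elsewhere, so parent and child share one value.
class Binding:
    def __init__(self, get, set):
        self.get, self.set = get, set

class LoginView:
    def __init__(self):
        self.email = ""  # LoginView owns the state

    def email_binding(self) -> Binding:
        def setter(value):
            self.email = value
        return Binding(lambda: self.email, setter)

class ErrorTextField:
    def __init__(self, text: Binding):
        self.text = text  # no state of its own, just the shared handle

    def user_types(self, s: str):
        self.text.set(self.text.get() + s)

login = LoginView()
field = ErrorTextField(login.email_binding())
field.user_types("me@example.com")
print(login.email)  # both sides see the same value
```

Note that both sketches only connect a parent and its direct children — which is exactly what becomes cumbersome once many unrelated screens need the same data.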
Instead, it's much easier to have a single place where the user is stored. Whenever any view changes the user, all other views update instantly. You can do this type of global state sharing using a combination of Combine and SwiftUI features: Observable objects and environment objects. Even though this is a SwiftUI course, we'll take a brief detour and talk a little about Combine. Combine is a Swift framework that enables you to track changes to some value over time. Instead of using a plain String, Combine gives you a Publisher that emits a series of strings over time. If you're familiar with RxSwift or ReactiveCocoa, Combine does the same thing, but it's built into Swift. SwiftUI uses Combine to track changes to state variables. While you might see the email as a String, SwiftUI sees it as a series of different string values over time. Using Combine, it can listen for changes to the value and update the view accordingly. You can do the same thing SwiftUI does by using Combine's ObservableObject to track changes to any value, whether it's in a View or an entirely different object. ObservableObject object tracks changes to its properties and publishes a Combine event whenever a change occurs. Let's create a new observable object to track the currently logged in user. Create a new plain Swift file and name it AppStore.swift. No, you're not building the iOS App Store, instead, you're making a shared storage location for different views in your app. Add the following class to the file: {% c-block language="swift" %} import SwiftUI import Combine class AppStore: ObservableObject { struct AppState { var currentUser: Contact? } @Published private(set) var state = AppState(currentUser: nil) func setCurrentUser(_ user: Contact?) { state.currentUser = user } } {% c-block-end %} First, you import both Combine and SwiftUI. Then, you declare a class that conforms to the ObservableObject protocol. 
Inside the class, you write a nested struct that will hold the current user and create a property of that type. The ObservableObject protocol has one requirement: objectWillChange. This is a Combine Publisher, an object that fires a series of events whenever the observed object changes. Thankfully, you don't have to implement this yourself. By marking state with @Published, you tell Combine to automatically generate an implementation of objectWillChange that tracks changes to state and publishes an event whenever you change the state. Thanks, Combine! You also mark the property with private(set). If you haven't seen this before, this is an access control modifier in Swift that makes the property private for modification, but public for access. This means that everyone can read state, but only AppStore can change the state. This is just an insurance policy: When dealing with shared state, it's always a good idea to limit how other objects can change that state. Finally, you add a method that lets users of this class change the state by setting a new user. Whenever this method is called, currentUser is changed, and AppStore publishes an event telling everyone that's listening that the state has changed. For instance, a profile screen could listen to these changes and update the user it's showing when a new one is set. Since you can have multiple listeners, different unrelated views can be updated all at once whenever AppState changes. Writing the observable object is only one part of the equation, though. You still need a way to create the object and share it with different views in your app. You'll do this using SwiftUI's Environment. Each view and all of its children exist in an environment. The environment is a shared pool of floating objects and values that the view or any of its children can grab and use at any time. You'll add an instance of AppStore to the environment of your root view. This will enable any view in your app to grab the store and its data. 
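The mechanics of AppStore — state that is private for writing but public for reading, plus a notification on every change — can be sketched outside SwiftUI. This is a hypothetical Python analogue; the explicit listener list is illustrative plumbing that Combine generates for you via @Published:

```python
class AppStore:
    """Sketch of an observable store: every registered listener is
    notified whenever the state changes."""

    def __init__(self):
        self._state = {"current_user": None}  # private for modification...
        self._listeners = []

    @property
    def state(self):
        # ...but public for reading, like Swift's private(set).
        return dict(self._state)

    def subscribe(self, listener):
        self._listeners.append(listener)

    def set_current_user(self, user):
        # The only sanctioned way to change the state.
        self._state["current_user"] = user
        for listener in self._listeners:  # the objectWillChange role
            listener(self.state)

store = AppStore()
seen = []
store.subscribe(lambda s: seen.append(s["current_user"]))  # e.g. profile screen
store.subscribe(lambda s: seen.append(s["current_user"]))  # e.g. contacts screen
store.set_current_user("me")  # both listeners fire from one change
```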
Remember, the SceneDelegate is where you create your root view, so open SceneDelegate.swift and add a new property: {% c-line %}let store = AppStore(){% c-line-end %} This is the store you'll add to the environment. Since you need to add it to your root component, call the environmentObject method on the contentView: {% c-block %} let contentView: some View = NavigationView { WelcomeView() }.environmentObject(store) {% c-block-end %} This will add the store to the environment. Now that the store is in there, it's time to fetch it from different views in your app. Start by opening LoginView.swift. Add the following property to the struct: {% c-line %}@EnvironmentObject private var store: AppStore{% c-line-end %} By declaring the property an @EnvironmentObject, you tell SwiftUI to automatically look for an object of that type in the environment. No need to manually instantiate or look for it! At the bottom of login, add a line to save a new user in the store: {% c-line %}store.setCurrentUser(Contact(name: "Me", avatar: nil, id: "me", isOnline: true)) {% c-line-end %} When a user logs in, they will get saved in the app store. This same app store is shared with other views, which will update automatically. One of those is ContactsView: Open ContactsView.swift. Instead of hard-coding a user, you'll change the view to grab the user in the app store. First, fetch the store the same way you did for LoginView: {% c-line %}@EnvironmentObject private var store: AppStore{% c-line-end %} Then, inside body, replace the ForEach block with a ZStack of the contact row and a navigation link to the chat screen: {% c-block language="swift" %} currentUser: currentUser, receiver: self.items[i].contact )) { EmptyView() } } } .background(Color.white) .shadow( color: i == self.items.count - 1 ? Color.shadow : Color.clear, radius: 10, x: 0, y: 2) .listRowInsets(EdgeInsets()) // ... to here ... 
} }.navigationBarTitle("Contacts", displayMode: .inline) {% c-block-end %} You might be confused by the use of map. store.state.currentUser is optional, so you need to unwrap it before you can use it. However, writing if let inside a body will result in a compiler error because Swift will get confused about what kind of view you're trying to make. To get around this, you can use map on the optional. It will work the same way as if let: The function passed to map will only get called if the optional is not nil. If the current user is nil, the function won't get called, and no navigation link will get created. Build and run the project now. You now have the app store, an observable object, inside the environment. When you log in, the view will update the store in the environment with a new user. Since the store is an observable object, all other views will get notified of the change. This includes ContactView, which will use the currently logged in user to create a chat screen. Sharing data like this is perfect for global state of your app, like the currently logged in user, whether or not the user is logged in, does the user have a premium or regular account and other app-level data. Stateless views, bindings, environment objects... All of these approaches have advantages and disadvantages and should be used at different times. To give you a better understanding of when to use which, I prepared a little cheat sheet. Don't worry, I won't take your exam away for using it! Run your app and navigate to the chat screen. When you start typing your message, you'll see the keyboard appear. Only, it appears on top of the text field, so you can't even see what you're typing! That's no good. Let's dive further into Combine with another use case for observable objects: Observing the keyboard. You'll fix this issue by creating an observable object that will listen to NotificationCenter notifications for changes to the keyboard's height. 
As the keyboard rises or falls, you'll publish events to everyone that's listening. You'll listen to this event in the chat screen. When the keyboard pops up, you'll increase the padding of the text field so that it's always above the keyboard. Create a new plain Swift file and call it KeyboardObserver.swift. Add a new ObservableObject class to the file: {% c-block language="swift" %} import Combine import SwiftUI class KeyboardObserver: ObservableObject { @Published private(set) var keyboardHeight: CGFloat = 0 } {% c-block-end %} Just like earlier, you use @Published to automatically publish events whenever that value changes. You'll update the height based on two notifications: keyboardWillShowNotification and keyboardWillHideNotification. Notification Center has Combine extensions that use Publishers instead of manually subscribing to the notification. Add the following property to the class: {% c-block language="swift" %} let keyboardWillShow = NotificationCenter.default .publisher(for: UIResponder.keyboardWillShowNotification) .compactMap { ($0.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? CGRect)?.height } {% c-block-end %} You call publisher(for:) on the default Notification Center to get a publisher that emits that notification's events. Remember, Combine lets you deal with streams of values over time. These streams, just like arrays, are collections. This means that all of the collection methods you're used to, like map, compactMap and others, are already there. In the above example, you use compactMap to get a value from the userInfo of the notification that corresponds to the keyboard's height. This makes keyboardWillShow a stream of CGFloat values over time. Next, you'll add another property that tracks when the keyboard hides. 
Add it below the one you just created: {% c-block language="swift" %} let keyboardWillHide = NotificationCenter.default .publisher(for: UIResponder.keyboardWillHideNotification) .map { _ -> CGFloat in 0 } {% c-block-end %} Whenever you receive a notification that the keyboard has hidden, you'll convert that notification to a CGFloat of 0, since a hidden keyboard has no height. Essentially, keyboardWillHide is a stream of zeros. Now that you are publishing events for those two notifications, it's time to merge them together to update the keyboard height property. Add the following initializer to the class: {% c-block language="swift" %} init() { Publishers.Merge(keyboardWillShow, keyboardWillHide) .subscribe(on: RunLoop.main) .assign(to: \.keyboardHeight, on: self) } {% c-block-end %} keyboardWillShow and keyboardWillHide are both Publishers. Publishers emit values. To listen and respond to those values, you have to subscribe to the publishers. Subscribing allows you to call a function or closure (using the sink method), or update a value whenever a Publisher emits a new event (using assign(to:on:)). In the above case, you first merge the two publishers into one. That way, you'll get the keyboard's height when it rises and a zero when it falls. You'll subscribe to the new publisher on the main thread because it's responsible for updating the UI. By calling assign, you tell Combine to set keyboardHeight to whichever value the Publisher emits. At this point, you'll get a warning that the result of assign is unused. This is because assign returns a cancellable that you're not using. A cancellable doesn't do much, it's more of a reference to the subscription. If you used Notification Center, you might remember that you need to eventually remove observers to avoid memory leaks. The same is true for Combine: If you don't remove subscriptions from KeyboardObserver, it will never get deallocated.
If you store the cancellable in a property of the class, the cancellable will get deallocated when the class does. Upon deallocation, the cancellable automatically destroys the subscription, removing all reference cycles that were created and letting the object die peacefully, as opposed to going on forever as a zombie. Add a new property to the class: {% c-line %}private var cancellables: Set<AnyCancellable> = []{% c-line-end %} This is a Set which will house all of your cancellables for this class. Next, add the following line to the bottom of init: {% c-line %}.store(in: &cancellables){% c-line-end %} This method will store the cancellable in the set. No more warnings, no more memory leaks! Now that you have the keyboard observer, it's time to use it in ChatView.swift. Open the file and add a new property to the struct: {% c-line %}@ObservedObject private var keyboardObserver = KeyboardObserver(){% c-line-end %} By using @ObservedObject you tell SwiftUI to update the UI whenever the object changes. Next, use the observer to grab the keyboard's height in body: {% c-block language="swift" %} ZStack { Color.background.edgesIgnoringSafeArea(.top) VStack { List { ... } ChatTextField(sendAction: onSendTapped) // These two lines are new: .padding(.bottom, keyboardObserver.keyboardHeight) .animation(.easeInOut(duration: 0.3)) } }.navigationBarTitle(Text(receiver.name), displayMode: .inline) {% c-block-end %} When the keyboard pops up, you'll raise the text field by the height of the keyboard. Since everything is in a VStack, the List will get shorter automatically. You'll also animate this change by calling animation after padding. SwiftUI will calculate the to and from values to animate without you having to do anything. Pretty neat, right? Run the project and navigate to the chat screen. As you start typing, you'll see the keyboard pop up and the text field rises to match it. Now you can see what you're typing. You are now almost done with your chat app!
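The merge-and-assign pipeline at the heart of KeyboardObserver can be sketched with plain callbacks. This is a hypothetical Python stand-in for illustration only — a real Combine Publisher does far more (backpressure, cancellation, threading):

```python
class Publisher:
    """Minimal stand-in for a publisher: subscribers are plain callables."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def emit(self, value):
        for fn in self._subscribers:
            fn(value)

def merge(*publishers):
    # Like Publishers.Merge: one output stream fed by several inputs.
    merged = Publisher()
    for p in publishers:
        p.subscribe(merged.emit)
    return merged

class KeyboardObserver:
    def __init__(self, will_show, will_hide):
        self.keyboard_height = 0.0
        # The assign(to: \.keyboardHeight, on: self) role: every merged
        # event overwrites the stored height.
        merge(will_show, will_hide).subscribe(
            lambda h: setattr(self, "keyboard_height", h))

show, hide = Publisher(), Publisher()
observer = KeyboardObserver(show, hide)
show.emit(301.0)  # keyboard rises -> height updates
hide.emit(0.0)    # keyboard falls -> back to zero
```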
In this section of the SwiftUI course, you went one step further from static, hard-coded values. You used initializers, bindings and observable objects to pass data between your views and breathe a bit of life into your app. You learned that passing data in the initializer is the simplest way to give a bit of state to a direct child of a view. You also learned that you can use bindings for two-way communication between a child and its parent. Using environment objects, you added global state to your app that any view can access. Finally, using Combine you converted a Notification Center notification into an observable object that updates the UI. All in a day's work, right?
https://www.cometchat.com/tutorials/swiftui-architecture-observable-objects-the-environment-and-combine-6-7
Problem: In a Scala Lift application, there are two ways to maintain and access session information: you can use either SessionVars or the net.liftweb.http.S object. Each of them works well as long as it is used within the scope of a session. The actual problem arises when I use an Actor to get concurrency in a Scala Lift application. An Actor runs outside of the context of the session in which you set the SessionVar or S object. I used Scala Actors, Akka Actors and Lift Actors, but none of them is able to hold the session state. In the context of a session it is quite possible to get the locale using S.get("locale") throughout the application. But when I use an Actor, it is not possible for the Actor to maintain the session, so it is not possible to get the locale from the Session or the S object. When you hit the Actor with the message "Hello", MyActor should print "Hello in it_IT". But unfortunately you are not so lucky: it prints "Hello in" instead of "Hello in it_IT". This problem arises because the Actor breaks the context of the session, so MyActor is not able to retrieve the session information. Solution: Fortunately, Lift has initIfUninitted to the rescue. It initializes the current request session if it's not already initialized. Generally this is handled by Lift during request processing, but this method is available in case you want to use S outside the scope of a request (standard HTTP, Comet and Actors). Now everything will work fine. By using S.initIfUninitted you can initialize the session object explicitly in the Actor. 2 thoughts on "Scala Lift: Access Session Information in Actors" Hi 🙂 I'm having some trouble getting your example to work. Can you please assist? My code below: object SetLocale{ S.set("locale","it") } object CallTheActor { def render = SHtml.onSubmit(x => { MyActor !
(x, S.session) //This is the problem, method "def session" in S is of type Box[LiftSession], please see the API SetValById("clearcontents", "") }) } object MyActor extends LiftActor { def messageHandler = { case (msg, s:LiftSession) => S.initIfUninitted(s) { //Doesn't match because "S.session" is Box[LiftSession] println(msg +"In "+S.get("locale").openOr("")) } /* //If you add the case below you get a compiler error saying: //found : net.liftweb.common.Box[net.liftweb.http.LiftSession] //required: net.liftweb.http.LiftSession //Check out S in the API: you will see there is no method in S that simply returns LiftSession, only the method "def session" returning Box[LiftSession] //Method initIfUninitted in S requires type LiftSession and not Box[LiftSession] //So how do you give a LiftSession to S.initIfUninitted(??) ? case (msg, s:Box[LiftSession]) => S.initIfUninitted(s) { println(msg +"In "+S.get("locale").openOr("")) } } */ } Hi Tylor, This is a good catch. Thanks for finding the typo. >>MyActor ! (x, S.session) //This is the problem, method "def session" in S is of type Box[LiftSession], please see the API But the actual call should be like this: MyActor ! (x, S.session.open_!) // Now this statement will hit the Actor with the LiftSession instead of Box[LiftSession] I hope this will now work as expected. Thank You!
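The underlying issue is general: per-request (thread-local) state does not follow a message onto a worker or actor thread, so the worker must be handed the session explicitly and re-initialize its own context from it — the role S.initIfUninitted plays in Lift. A minimal Python sketch of both the failure and the fix, with hypothetical names:

```python
import threading

# Request-scoped state, analogous to Lift's session-backed S object.
request_context = threading.local()

def handle_request(results):
    request_context.locale = "it_IT"  # set inside the "request" thread
    session = {"locale": request_context.locale}  # explicit session snapshot

    def actor_without_session():
        # A worker thread gets its own empty thread-local storage,
        # so the request's locale is simply not there.
        results.append(getattr(request_context, "locale", None))

    def actor_with_session(s):
        # Re-initialize the worker's context from the passed-in session,
        # as S.initIfUninitted does with an explicitly passed LiftSession.
        request_context.locale = s["locale"]
        results.append(request_context.locale)

    t1 = threading.Thread(target=actor_without_session)
    t2 = threading.Thread(target=actor_with_session, args=(session,))
    for t in (t1, t2):
        t.start()
        t.join()

out = []
handle_request(out)
print(out)  # [None, 'it_IT']
```

The first worker sees nothing; the second, given the session explicitly, recovers the locale — mirroring why MyActor must receive the LiftSession in the message.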
https://blog.knoldus.com/scala-lift-access-session-information-in-actors/
Contents Introduction You can use ESL in the following cases: - Custom controls - You can display a button called Become a member to only those contacts who are not members of your loyalty scheme. Although this could be done via simple conditioning or section targeting as well, for those who know HTML, it makes email creation a lot easier. - Counters - You can place a counter into your email without having to put a URL into your HTML code. - Conversions - You can handle transformations from one contact field to another, for example, to display field values provided in kilometers in miles instead. - Loops - You can create a loop, and do not have to refer to each and every array element separately. As ESL is a scripting language, it can automatically go through the array and use all elements one by one. All in all, customization of your contacts becomes easy and fully automatic with the Emarsys Scripting Language. Editor-specific details The Emarsys Scripting Language is supported only by custom HTML email campaigns and by email campaigns created in the new Visual Content Editor. Old template-based campaigns do not support it. Note that operators are case sensitive and they should always be in lower case: "and", "or". Language Reference Terminology {{ this is an expression }} {% this is a statement %} |this is a filter Filters Contents You can define a split character with the help of the split filter. It divides the string and creates an array from it, which can be used by foreach. {% foreach item in 'one,two,three'|split(',') %} {{ item }} <br /> {% endforeach %} In this example, the split character is , (comma). item loops through all elements from the array and lists them one by one. If you include the HTML <br> tag, every element will be displayed in a new row: one two three You can also include a limit argument. There is a difference between working with positive and negative numbers.
An example for a positive number is 2, where the last 2 elements will become one string and the remaining elements will stay separate. {% foreach item in "one,two,three,four,five"|split(',',2) %} {{ item }} <br /> {% endforeach %} Result one two three four,five On the other hand, if the limit is -1, the very last element will not be returned; all the other elements will be displayed. {% foreach item in "one,two,three,four,five"|split(',',-1) %} {{ item }} <br /> {% endforeach %} Result one two three four If the limit is zero, one string will be created from all elements in the array. {% foreach item in "one,two,three,four,five"|split(',',0) %} {{ item }} <br /> {% endforeach %} Result one,two,three,four,five If you don't define a split character but include an empty string instead, the array will be split at every character. {% foreach item in "123"|split('') %} {{ item }} <br /> {% endforeach %} Result 1 2 3 If you add a limit to the empty string delimiter, the string will be split into chunks of that length. {% foreach item in "112233"|split('',2) %} {{ item }} <br /> {% endforeach %} Result 11 22 33 You can cut and display a part of an array or string with the slice filter. If it is an array, foreach is used with it. Its first parameter defines from which element it starts (note that the very first element is at index 0), and the second parameter defines the number of elements to cut (note that it includes the starting element as well). In the next example, the starting element of an array is 1, so it will start from the second element, and the number of elements is 2, so it will cut 2 elements starting with the second element. {% foreach item in [1, 2, 3, 4, 5]|slice(1, 2) %} {{ item }} <br /> {% endforeach %} Result 2 3 An example for slicing a string is the following: {{ '12345'|slice(2, 2) }} Result 34 If the starting number is negative, counting starts from the end of the array or string (e.g.
-4 will denote the second element, which is 2 in the above example). If no second parameter is provided, it means including all upcoming elements. Please note that zero cannot be a second parameter. {{ '12345'|slice(-4) }} Result 2345 The values of string template placeholders can be defined with the format filter. These placeholders are %s (string) and %d (number). In the following example, the value 10 is set for the message number. The format parameter is added with this variable as a parameter. The result is having this number in the string. {{ "You have %d messages."|format("10") }} Result You have 10 messages. You can encode a URL or part of a URL which contains invalid URL characters with the url_encode filter. This means that the invalid URL characters will be replaced by their percent-encoded equivalents. An example for a URL fragment containing a * (asterisk): {{ "seg*ment"|url_encode }} Result seg%2Ament Another example is having one or more spaces in the URL: {{ "one space"|url_encode }} Result one%20space In the next example, a key-value pair is converted into valid URL parameters: {{ {'key': 'value', 'foo': 'bar'}|url_encode }} Result key=value&foo=bar Characters that have special meaning in HTML can be escaped with the escape filter. An example is when the First name field of the contacts is checked: if any value contains an HTML character (e.g. <), it will be treated as a simple string. {{ "< '"|escape }} Please note that escape can be abbreviated to e. {{ "< '"|e }} The capitalize filter capitalizes a string. {{ 'hello sunshine!'|capitalize }} Result Hello sunshine! The length filter displays the number of array elements or the number of characters in a string. {{ [1, 2, 3, 4]|length }} Result 4 {{ '1234'|length }} Result 4 The length filter can also be used in a comparison within if tags. In the following example, the comparison with 10 defines that if the number of users is more than 10, the provided message is shown. {% if users|length > 10 %} The number of users is more than 10.
{% endif %} The merge filter creates a union from two arrays or objects. {% foreach item in [1, 2]|merge(['apple', 'orange']) %} {{item}} <br /> {% endforeach %} Result 1 2 apple orange Please note here that if a value is provided for the same key again in the merge parameter, the original value is overwritten by this new one. {% foreach item in { 'apple': 'fruit', 'potato': 'unknown' }|merge({ 'potato': 'vegetable'}) %} {{item}} <br /> {% endforeach %} Result fruit vegetable The number_format filter formats decimal numbers. The first parameter of number_format determines how many numbers the decimal part contains. In the following example, it is 2, so 2 of the 3 decimals will be displayed. The second parameter defines the decimal separator, which is , (comma) in this case. The third parameter stands for the thousands separator, which is . (period) here. {{ 9800.333|number_format(2, ',', '.') }} Result 9.800,33 Please note that if no parameter is given for the number_format filter, the decimals are not displayed at all and the thousands separator is , (comma). {{ 2005.35|number_format }} Result 2,005 The replace filter replaces string parts. {{ "I like this and that."|replace({'this': 'chocolate', 'that': 'candy'}) }} Result I like chocolate and candy. Please note that if the very same string part is provided as a value first, then a key in the parameter, it will become overwritten. In this example, you can see that "cats" changes to "dogs" first, then both "dogs" change to "birds". {{ "I like cats and dogs."|replace({'cats': 'dogs', 'dogs': 'birds'}) }} Result I like birds and birds. The rounding conditions can be defined with the help of the round filter. Its first parameter defines the number of decimals, which is 1 in our example. As for the second parameter, two options exist. The first is floor, which rounds the numbers down regardless of the common rounding rules. {{ 42.58|round(1, 'floor') }} Result 42.5 The second is ceil, which rounds the numbers up in any case.
{{ 42.54|round(1, 'ceil') }} Result 42.6 Please note that the default is having no decimals and using common rounding (rounding the value up from decimal 5 and rounding it down under it). {{ 42.52|round }} Result 43 Whitespace can be cut using the trim filter. {{ ' I like Emarsys Scripting Language. '|trim }} Result I like Emarsys Scripting Language. Any other character can be defined to be cut if you include it as a parameter. {{ ' I like Emarsys Scripting Language.'|trim('.') }} Result I like Emarsys Scripting Language Please note that date formatting follows the ICU convention and is case sensitive. For more information see the ICU Project details. The localized_date filter is a date formatting function. It displays the date of the specified country and it can also translate this date (it can handle the PHP language translations). Please note that it displays only the date, even if the time is also provided. It can be overwritten, so the predefined date format becomes displayed (e.g. [{locale: en, format: yyyy mm}] means that the format YYYY MM will become visible for English people). The localized_date filter can have 4 parameters. These are displayed in the order defined by their language settings, and contain the following: - full: year, month, date, day of the week - long: year, month, date - medium: year, abbreviated version of month, date - short: yyyy/MM/dd, the separator can be different. The input string can have the following formats: - 'yyyy-MM-dd' - 'now' - '+[number] hours' - '+[number] days' - '-[number] hours' - '-[number] days' In this example, a full version is provided on the basis of the stored country+language value in contact field 12 for the specified date (e.g. it will become January 1 Friday, 1999). {{ '1999-01-01'|localized_date(contact.12, 'full') }} As another example, you can have an "x days from now" date, too. {{ '+7 days'|localized_date('de','dd.MM.yyyy') }} The localized_time filter is a time formatting function.
It displays the time of the specified country and it can also translate this time (it can handle the PHP language translations). Please note that it displays only the time, even if the date is also provided. Here you can find what is displayed exactly if you provide parameters. - full: HH:MM:SS (AM/PM) - long: HH:MM:SS (AM/PM) - medium: HH:MM:SS (AM/PM) - short: HH:MM (AM/PM) {{ '1999-01-01 13:13:13'|localized_time(contact.12, 'medium') }} Please note here as well that the input string has a compulsory form (HH:MM:SS for time). In this example, a time is generated from a datetime with the localized_time filter on the basis of the stored country+language value in contact field 12 (13:13:13). The localized_datetime filter is a date and time formatting function. It displays the date and time of the specified country and it can also translate this (it can handle the PHP language translations). Regarding its parameters, it puts the values of the above two filters together (e.g. if its parameter is full, the date and time will be displayed as full as well). {{ '1999-01-01 13:13:13'|localized_datetime(contact.12, 'long') }} A possible result of the above example is 1999 01 January, 13:13:13. If the evaluation does not have a result and the required filter is added, the email campaign will not be sent. It will be displayed on the Analysis page in Emarsys though. The default invalid values for this filter are "" (empty string) and the value null. {{ contact.2|required(['-']) }} The parameter of this filter (the string - in this case) overwrites the default invalid empty string, so this string will become the new invalid one. You can still include the empty string later as well, as it is possible to define more than one invalid value. For example, ([x], [y], [ ]) means that there are 3 invalid elements, the last one being the empty value. Personalization: Contact data is pulled from the Emarsys database.
Here you can find the list of the possible email personalization placeholders: General Information, Personal Information, Company Information, Other Information. Link Tracking: You can use ESL for link tracking purposes, too, by embedding the link in an ESL expression. {% if event.payload.example == 'EXAMPLE' %} {% endif %} Please make sure that in the text version of the email, the link and the ESL expression are separated by a space or a line break. Otherwise, link tracking will not work and the campaign may become invalid. Useful Tools: Twig testing is possible with TwigFiddle, where the result of any Twig snippet can be checked. Note here that we have some unique tags, parameters and filters, for which you cannot use this testing method easily. If you wish to test code with the foreach tag, we suggest that you replace it with a simple for tag, so that you can check your code. Other elements which cannot be tested here are the limit parameter and the .rds-related codes. Among the filters, the four unique ones are: - localized_date - localized_time - localized_datetime - required For editing your script, Atom and Notepad++ can be useful with a Twig-specific add-on. Samples: External Data for Transactional Emails If you want to create transactional emails in Visual Content Editor, you need to have the following: - a JSON to trigger the API - an HTML snippet to personalize with the data JSON { "key_id": "3", "external_id": "test@example.com", "data": { "orderId": 1234, "orderItems": [ { "productCode": 12, "imageUrl": "" }, { "productCode": 13, "imageUrl": "" } ] } } With the above JSON example, order 1234, its related products 12 and 13 and their images will be available for the email campaign. The data displayed in the email itself will be gathered from here. You can include any number of items. You have the possibility to test your JSON on the API Demo page. You need to include the data part of your JSON in the text field of data and define the remaining parameters.
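Besides TwigFiddle, you can sanity-check what several of the simpler filters documented above should produce using Python's standard library. These are rough equivalents written to match the documented examples — not the ESL implementation itself — and edge cases (such as which characters count as URL-safe, or positive split limits) may differ:

```python
from urllib.parse import quote, urlencode
import html

def esl_slice(value, start, length=None):
    # slice(start, length): a negative start counts from the end;
    # omitting length keeps everything to the end.
    return value[start:] if length is None else value[start:][:length]

def number_format(value, decimals=0, dec_sep=".", thousands_sep=","):
    # Defaults follow the documented behaviour: no decimals shown,
    # "," as the thousands separator.
    s = f"{value:,.{decimals}f}"
    # Swap separators via a placeholder so they don't collide.
    return s.replace(",", "\0").replace(".", dec_sep).replace("\0", thousands_sep)

print(esl_slice("12345", 2, 2))                   # 34
print(esl_slice("12345", -4))                     # 2345
print(number_format(9800.333, 2, ",", "."))       # 9.800,33
print(number_format(2005.35))                     # 2,005
print(quote("seg*ment", safe=""))                 # seg%2Ament  (url_encode)
print(quote("one space", safe=""))                # one%20space
print(urlencode({"key": "value", "foo": "bar"}))  # key=value&foo=bar
print(html.escape("< '"))                         # escape filter equivalent
print("hello sunshine!".capitalize())             # Hello sunshine!
```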
You can check how the Triggering an External Event API endpoint works here. HTML Snippet.
https://help.emarsys.com/hc/en-us/articles/115004090589
Lazy loading TreeView items

The usual process when using the TreeView is to bind to a collection of items or to manually add each level at the same time. However, in some situations, you want to delay the loading of a node's child items until they are actually needed. This is especially useful if you have a very deep tree, with lots of levels and child nodes, and a great example of this is the folder structure of your Windows computer. Each drive on your Windows computer has a range of child folders, and each of those child folders has child folders beneath them, and so on. Looping through each drive and each drive's child folders could become extremely time consuming, and your TreeView would soon consist of a lot of nodes, with a high percentage of them never being needed. This is the perfect task for a lazy-loaded TreeView, where child folders are only loaded on demand.

To achieve this, we simply add a dummy folder to each drive or child folder, and then, when the user expands it, we remove the dummy folder and replace it with the actual values. This is how our application looks when it starts - by that time, we have only obtained a list of available drives on the computer. You can now start expanding the nodes, and the application will automatically load the subfolders. If a folder is empty, it will be shown as empty once you try to expand it, as can be seen in the next screenshot. So how is it accomplished? Let's have a look at the code:

<Window x:Class="WpfTutorialSamples.TreeView_control.LazyLoadingSample"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <TreeView Name="trvStructure" TreeViewItem.Expanded="TreeViewItem_Expanded" />
    </Grid>
</Window>

using System;
using System.IO;
using System.Windows;
using System.Windows.Controls;

namespace WpfTutorialSamples.TreeView_control
{
    public partial class LazyLoadingSample : Window
    {
        public LazyLoadingSample()
        {
            InitializeComponent();
            DriveInfo[] drives = DriveInfo.GetDrives();
            foreach(DriveInfo driveInfo in drives)
                trvStructure.Items.Add(CreateTreeItem(driveInfo));
        }

        public void TreeViewItem_Expanded(object sender, RoutedEventArgs e)
        {
            TreeViewItem item = e.Source as TreeViewItem;
            if((item.Items.Count == 1) && (item.Items[0] is string))
            {
                item.Items.Clear();

                DirectoryInfo expandedDir = null;
                if(item.Tag is DriveInfo)
                    expandedDir = (item.Tag as DriveInfo).RootDirectory;
                if(item.Tag is DirectoryInfo)
                    expandedDir = (item.Tag as DirectoryInfo);
                try
                {
                    foreach(DirectoryInfo subDir in expandedDir.GetDirectories())
                        item.Items.Add(CreateTreeItem(subDir));
                }
                catch { }
            }
        }

        private TreeViewItem CreateTreeItem(object o)
        {
            TreeViewItem item = new TreeViewItem();
            item.Header = o.ToString();
            item.Tag = o;
            item.Items.Add("Loading...");
            return item;
        }
    }
}

The XAML is very simple and only one interesting detail is present: the way we subscribe to the Expanded event of the TreeViewItems. Notice that this is indeed the TreeViewItem and not the TreeView itself, but because the event bubbles up, we are able to just capture it in one place for the entire TreeView, instead of having to subscribe to it for each item we add to the tree. This event gets called each time an item is expanded, which we need to be aware of to load its child items on demand. In the code-behind, we start by adding each drive found on the computer to the TreeView control. We assign the DriveInfo instance to the Tag property, so that we can later retrieve it. Notice that we use a custom method to create the TreeViewItem, called CreateTreeItem(), since we can use the exact same method when we want to dynamically add a child folder later on.
Notice in this method how we add a child item to the Items collection, in the form of a string with the text "Loading...". Next up is the TreeViewItem_Expanded event. As already mentioned, this event is raised each time a TreeView item is expanded, so the first thing we do is to check whether this item has already been loaded, by checking if the child items currently consist of only one item, which is a string - if so, we have found the "Loading..." child item, which means that we should now load the actual contents and replace the placeholder item with them. We now use the item's Tag property to get a reference to the DriveInfo or DirectoryInfo instance that the current item represents, and then we get a list of child directories, which we add to the clicked item, once again using the CreateTreeItem() method. Notice that the loop where we add each child folder is in a try..catch block - this is important, because some paths might not be accessible, usually for security reasons. You could catch the exception and use it to reflect this in the interface in one way or another.

Summary

By subscribing to the Expanded event, we can easily create a lazy-loaded TreeView, which can be a much better solution than a statically created one in several situations.
https://www.wpf-tutorial.com/pl/87/kontrolka-treeview/lazy-loading-treeview-items/
When used in a class declaration, the "final" keyword means the class can't be subclassed. You should make a "final" class only if you need an absolute guarantee that none of the methods in that class will ever be overridden. If you're deeply dependent on the implementations of certain methods, then using "final" gives you the security that nobody can change the implementation out from under you. You'll notice many classes in the Java core libraries are "final". For example, the "String" class cannot be subclassed. Imagine the havoc if you couldn't guarantee how a "String" object would work on any given system your application is running on! (A final class cannot be subclassed, but you can still make an instance of it.) I will show you with an example how the havoc can be caused.

public class Test1
{
    public void runTest()
    {
        Test2 test2 = new Test2();
        StringType st = new StringType();
        AnotherType at = new AnotherType();
        test2.testMethod(st);
        test2.testMethod(at); // This is legal, but it ends up returning an unintended result.
    }
}

class Test2
{
    protected void testMethod(StringType st)
    {
        System.out.println(st.return2());
    }
}

class StringType
{
    public int return2()
    {
        return 2;
    }
}

class AnotherType extends StringType
{
    @Override
    public int return2() // Someone intentionally overrode the method and changed its logic
    {
        return 50;
    }
}
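The antidote to the havoc above is the final keyword itself, applied either to the single method or to the whole class. Below is a minimal sketch; the class names SafeStringType and LockedStringType are made up for illustration:

```java
// Sketch (hypothetical names): two ways to lock down return2().

class SafeStringType {
    // A final method can be inherited but never overridden, so every
    // caller is guaranteed to get this exact implementation.
    public final int return2() {
        return 2;
    }
}

// A final class cannot be subclassed at all -- the same guarantee
// that java.lang.String gives.
final class LockedStringType {
    public int return2() {
        return 2;
    }
}

// Either attempt below would now be rejected at compile time:
// class Sneaky extends SafeStringType { public int return2() { return 50; } }
// class Sneakier extends LockedStringType { }
```

With either form, the overriding trick from the example no longer compiles, so testMethod() could never be handed an object whose return2() secretly returns 50.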
http://code.ssingh.in/2010/01/
#include repost

Since my original post DID NOT get solved I am taking the liberty to repost with additional info. I am trying not to reinvent the wheel by putting two WORKING applications under one roof. My task is to add the existing files of the btscanner application into a tab dialog. I am using plain "add existing files" and they are being added into the correct folders, under correct additional subfolders showing the path. They are added into the tab dialog project. I have added this manually. After all this - the compiler cannot find #include - starting with "main.cpp". Missing device.h in main error.

Questions
- What do the additional folders with path accomplish?
- How do the x.pro HEADERS get used or not get used, and by whom - makefile or compiler?
- It looks as if a plain "include device.h" in a source file is not sufficient to make / compile - WHERE DO I ADD the necessary path?

Addendum: After adding a relative path to #include it passed device but not the service header.

Cheers

Hi, shouldn't it be #include ..../service.h (not sevice.h)?

#include " file " ; searches local directory ONLY and stops
#include <file> ; searches "above " local directory

The "../../.." syntax seems to go "up the tree", assuming the referenced project is in "current access". (I need to work on that theory.) Anything else, such as ellipsis or ".....", is unknown syntax to me. Can you provide a reference?

Sorry, I mean, the line no. 45 that currently has the red error is:
#include <../../bluetooth/btscanner/sevice.h>
try changing it to:
#include <../../bluetooth/btscanner/service.h>

@AnneRanch I think @hskoglund means you've made a typo...

Here is what works - partial path (?) and then full path - the entire project compiles and runs. Anybody interested to find out WHY it works? Perhaps a detailed analysis of the compiler output - now available AFTER the error is gone (!) - would be interesting to somebody. (I could post it.) My "solution" is the path (GUI) in the "project tree" and (../../..)
in the x.pro file means NOTHING to the compiler.

@AnneRanch said in #include repost:
I have added this manually
Just to add: This is pure evil :) Don't use = to change settings... Use += to add modules to your basic config, or use -= to remove modules.

Minor, insignificant detail not helping to resolve THIS #include issue. BTW - I just cut and pasted it from "an official" btscanner example.

This post is deleted!

It's a general thing. As I've said in one of your other posts, examples are minimalistic standalone projects that are (in most cases) not meant to get improved or extended even further. So, IMHO, it's not a good idea to import a whole example into your own projects and take over the example's .pro file... I don't want to question your whole idea, but I would say that there are easier and faster ways to make your own BT Scanner "test" / "example" project. Or is there anything that forces you to import the full example?

- J.Hilk Moderators last edited by J.Hilk

@AnneRanch to answer your original question, the .pro file offers you the possibility to expand the include path, via INCLUDEPATH += .... In your case

INCLUDEPATH += $$PWD/../../bluetooth/btscanner

should do the trick. But use it with caution; I find that using INCLUDEPATH convolutes the code more than it makes it easier to read. But that may be just me.

#include " file " ; searches local directory ONLY and stops
#include <file> ; searches "above " local directory

where did you get that from? The actual definition:

- #include <filename>: Searches for the file in an implementation-defined manner. The intent of this syntax is to search for the files under control of the implementation. Typical implementations search only standard include directories. The standard C++ library and the standard C library are implicitly included in these standard include directories. The standard include directories usually can be controlled by the user through compiler options.
- #include "filename": Searches for the file in an implementation-defined manner. The intent of this syntax is to search for the files that are not controlled by the implementation. Typical implementations first search the directory where the current file resides and, only if the file is not found, search the standard include directories as with (1).

The only thing that "searches upward" that I know of is qmake, in search of a .qmake.conf file. But there may be more 🤷♂️

@Pl45m4 Agree with your approach; however, the initial question was about why "#include" does not work as expected AND why the project tree and project file entries make no difference - OR, more precisely, do not affect the compilation. My usage of btscanner is purely selfish – it works in Qt - as opposed to many other "sample codes", and that is OK with me. What is NOT OK is a tool like Qt Creator messing with C language syntax by adding layers of poorly explained "STUFF", such as inventing the syntax "/../../xxx" where #include <FILE>; should do. As far as "samples" being second-grade code – something about advertising Qt comes to mind, and I shall leave that as is.

@J-Hilk I will repeat what I have said already and add - the syntax for #include has not changed since it was introduced. Qt Creator adds stuff which is not only odd but is not used during compile. Yes, I did not cut and paste "the real McCoy" definition as you did. Was MY definition incorrect? I am not sure if adding a PATH (to the .pro file) is necessary - it is already in the project tree, and if it is important it should be added to the .pro file by Qt Creator. Appreciate all the comments and suggestions; it is very helpful to get my project going. Thanks

@AnneRanch said in #include repost:
What is NOT OK is a tool like Qt Creator messing with C language syntax by adding layers of poorly explained "STUFF", such as inventing the syntax "/../../xxx" where #include <FILE>; should do.

Qt Creator is an IDE, including a C/C++ editor and a debugger.
It does not, and cannot, alter the syntax of any language. If it did, programs would not compile. The compiler/linker is not a Qt component. Whatever #includes you have shown in your code will conform to, or similar, as per @J-Hilk's post. Assuming you are using gcc, its documentation should provide any details on handling/how to pass directories on the compile line, etc. #include <> tends to look in some system directories which #include "" does not. Handling of a relative path with .. is probably compiler-implementation-specific.

@JonB This comment misinterprets this entire thread. Nobody is challenging Qt as an IDE "interface" to make and the compiler / linker. What I questioned is what appear to be superficial "includes" with no visible effect on the processes. If a common item like #include has to be done manually, we have a very poorly functioning "IDE" - nothing to do with the compiler. Since you mentioned the compiler - where can I read about Qt Creator compile options? I have a 4 processor system and would like to add the "-j" option to speed things up. But I'll post this separately - different subject.

@AnneRanch said in #include repost:
@JonB This comment misinterprets this entire thread.
No, it doesn't. You think that Qt Creator is doing something funny about #includes, and keep saying so. It is not.

@AnneRanch said in #include repost:
I have a 4 processor system and would like to add the "-j" option to speed things up.
(4 processor cores, I assume)

This post is deleted!

@Pl45m4 Pardon my ignorance, but -j is a compiler option. When I added it to "make" it did not show in the compiler output. Which brings another question - who is on first - "make" or "qmake" - or both? And since I am not allowed to do multiple posts - why is there "build" and "rebuild"? Back in the beginning of programming, when a file was "dirty" it would get rebuilt AUTOMATICALLY when "build" was requested anyway.

"Build" only builds (compile + link) files that have changed ("dirty" files).
"Rebuild" will build all files, regardless of whether they have changed or not. And this could take several minutes or even more in huge projects.

The path in your error msg says "Qt_Repository Copy". Is that the right one? Did you move or rename any files? Try to rename any button (by double-clicking on e.g. "Scan") in your current ui file (just the button text, not the actual widget name) and run your program. If the name is still the old one, your program is probably using a different ui file.

- JKSH Moderators last edited by

@AnneRanch said in #include repost:
who is on first - "make" or "qmake" or both ?

qmake...
- ...parses your *.pro file and generates your Makefile
- ...parses your *.ui file and generates *.cpp and *.h files
- (and more)

make parses your Makefile and runs your build tools
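The qmake/make split above also answers the earlier -j question: -j is an option of make, not of the compiler, so it belongs on the make step (in Qt Creator, it can be passed through the project's make build-step arguments). A small sketch follows; the project file name is hypothetical, and the two Qt commands are shown as comments because they only run on a machine with Qt installed:

```shell
# Step 1 (qmake): read the .pro file and write a Makefile
#   qmake myproject.pro
# Step 2 (make): compile + link, running up to N jobs in parallel
#   make -j4
# A reasonable N is the number of CPU cores on the machine:
cores=$(nproc)                 # core count reported by the OS
echo "suggested build command: make -j$cores"
```

On a 4-core system this prints `make -j4` as the suggested command, which is exactly the speed-up asked about in the thread.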
https://forum.qt.io/topic/117793/include-repost
Can Gizmos folders be located in subfolders?

I have an asset 'Foo' derived from ScriptableObject. I can associate a custom editor icon by putting 'Foo Icon.png' into Assets/Gizmos/. Unity doesn't seem to recognize the icon if I put it in, say, Assets/SomeFolder/Gizmos/. Is there a way to do this? (BTW, the ScriptableObject subclass is defined in a DLL.)

Answer by TonyLi · Mar 02, 2014 at 08:16 PM
So the answer appears to be "No," but you can add an editor script that runs on launch and moves the icons to Assets/Gizmos/.

Answer by Benzor · May 09, 2017 at 04:07 PM
Short answer: NO
Long answer: Go and vote on this feature request so that one day the answer will be YES.
https://answers.unity.com/questions/600772/gizmos-folder-in-a-subfolder.html
Overlapping Delegate in listview

I have a list of items; on clicking each item, a rectangle with its associated properties opens. When I click the second item, the rectangle of the second item opens up, overlapping the first. I don't want this to happen: the second rectangle should open only when the first is closed, OR, on clicking the second item, it should automatically close the first rectangle. The list is an XML list, and the delegate has the properties common to all its list elements. I tried to change the opacity - didn't work; visibility - didn't work; MouseArea enabled: false - didn't work; tried to access the particular index by xmlmodel.get(index) - didn't work. Anyone who has an idea on how to deal with it, kindly drop in your suggestion. Thanks for your time.

You can, for example, have a state for each element (i.e. in the delegate). Then you give the list item its state as a function of index == currentIndex of the ListView.

I have not got your idea, could you explain it in detail or cite an example? I'm a beginner so I'm not too sure of what you are saying.

you could: give an id to your ListView like @id : idLV@ and replace your onClicked: if(shortview.state..... by @state : index==idLV.isCurrentItem ? 'Details' : 'shortview' @ In the initial code the state of the first clicked element is not updated when you click on another one.

Edit obviously :)
@ onClicked: state = idLV.isCurrentItem ? 'Details' : 'shortview'@

The code you gave as an example is too long for people to read and reply to. If I have understood correctly the problem indicated in the first comment, then here is a simple solution. However, I admit this could be done in multiple ways.
@
import QtQuick 2.0

Rectangle{
    id:mainRect
    width:400; height:width

    function showActionList(mousX, mousY){
        if(actionList.opacity>0){
            console.log("visible");
            mainRect.hideActionList();
        }
        actionList.x = mousX;
        actionList.y = mousY;
        actionList.opacity = 1;
    }

    function hideActionList(){
        actionList.opacity = 0;
    }

    Timer {
        interval: 50; running: false; repeat: false
        onTriggered: mainRect.hideActionList();
    }

    ListModel{
        id: mainlistModel
        ListElement{ title:"One"; pubDate:"Aug"; param:" 2010" }
        ListElement{ title:"Two"; pubDate:"Sep"; param:" 2011" }
        ListElement{ title:"Three"; pubDate:"Oct"; param:" 2012" }
    }

    ListView{
        id:mainListView
        anchors.fill: parent
        model: mainlistModel
        delegate: Rectangle{
            width: parent.width; height: 40;
            Text { id:txt; anchors.centerIn: parent; text:title+ ": " + pubDate+param }
            Rectangle{ anchors{ top:txt.bottom; horizontalCenter: parent.horizontalCenter } width: parent.width - 2; height: 1; color: "black" }
            MouseArea{
                anchors.fill:parent
                onClicked: mainRect.showActionList(mouseX,mouseY);
            }
        }
    }

    ListModel {
        id: listModel
        ListElement{ action:"View" }
        ListElement{ action:"Edit" }
        ListElement{ action:"Delete" }
    }

    Component {
        id: actionDelegate
        Rectangle{
            width: 40; height: 30; radius: 3
            border{ width: 2; color: "black" }
            Text{ id:dText; anchors.centerIn: parent; text:action }
        }
    }

    ListView{
        id: actionList
        model:listModel
        width: 180; height: 300
        delegate: actionDelegate
        opacity: 0
    }
}
@

@
MouseArea {
    id:marea
    anchors.fill:parent
    onClicked: state.index===idLV.isCurrentItem ? 'Details' : 'shortview'
}
@

I gave a name to my listview. Now nothing is happening on the onClicked event.

try
@
MouseArea {
    id:marea
    anchors.fill:parent
    onClicked: state = index==idLV.isCurrentItem ? 'Details' : 'shortview'
}
@

Edit or obviously :)
@ onClicked: state = index==idLV.currentIndex ? 'Details' : 'shortview'@

can I upload the project file, will that be easier??

The code you gave is a little too long.
Try with a simple example, the shortest (without sorting and with less text, for example, etc...), then you can fix the bug.

OK will try that. Here is the file, if you have time kindly go through this. Thanks..

I will try if you give some clean code.

Cleaned up my code mate. Thanks.

I will also try to do it with simple code. Here it is mate:

@
import QtQuick 1.1

Rectangle{
    width:400
    height:width
    property real ropacity:0

    XmlListModel {
        id: xmlModel
        source:"example_grouping.xml"
        query: "/rss/channel/item"
        XmlRole { name: "id"; query: "id/string()" }
        XmlRole { name: "title"; query: "title/string()" }
        XmlRole { name: "pubDate"; query: "pubDate/string()" }
        XmlRole { name: "param"; query: "param/string()" }
        //onStatusChanged: if (status === XmlListModel.Ready) { console.log("XML elements read: ", count); fillListModel(); sortModel(); }
    }

    Component {
        id: sectionHeading
        Rectangle {
            id:rect1
            width: 50
            height:18
            color: "lightgrey"
            Text { text: section; font.bold: true }
        }
    }

    Component {
        id: mainDelegate
        Item {
            id: shortview
            property real detailsOpacity : 0
            width: 100
            height: 29
            state : index==idLV.currentIndex ? 'details' : ''
            states: [
                State {
                    name: ""
                    PropertyChanges { target: shortview; height:29; }
                    PropertyChanges { target: tert; color:"black" }
                },
                State {
                    name: "details"
                    PropertyChanges { target: shortview; height:50; }
                    PropertyChanges { target: tert; color:"pink" }
                }
            ]
            Text {
                id:tert
                text:param
                color : "black"
            }
            MouseArea {
                id:marea
                anchors.fill:parent
                onClicked: idLV.currentIndex = index
            }
        }
    }

    ListView{
        id:idLV
        model:xmlModel
        width: 180; height: 300
        delegate: mainDelegate
        section.property: "param"
        section.criteria: ViewSection.FullString
        section.delegate: sectionHeading
    }
}
@

Thanks mate :).. Appreciated!

The first item is always in 'Details' mode. I have a close button; on clicking it, it goes back to the old state, but when I click that I am unable to access the list. Check this if you have some time. It's the code which I am working on.
"check this": Thanks. In your code there is still shortview.state='shortview', but this state is not defined. You have to study the documentation a little.

I read the documentation, it did not work, and hence I reverted to the existing code. I will try another method.

in that case the first item is always in the detailed state

-Please edit your Question and add [Solved] in front of it, if you got the answer.-
https://forum.qt.io/topic/20153/overlapping-delegate-in-listview
The QBluetoothSdpRecord class represents a bluetooth SDP record. More... #include <QBluetoothSdpRecord> The QBluetoothSdpRecord class represents a bluetooth SDP record. Each Bluetooth record is composed of zero or more attributes. Each attribute contains exactly one value. To group several values, sequences or alternatives are used. Each attribute has a unique 16 bit identifier associated with it. The mapping between SDP basic types and the types used by the QBluetoothSdpRecord implementation are given below: The attributes are stored as QVariants. See also QVariant, QBluetoothSdpSequence, and QBluetoothSdpAlternative. Construct a new empty SDP Service record. Construct a SDP service record, copying contents from other. Deconstruct a SDP Service record. Tries to add an attribute attr with id id to the service. Returns false if the attribute already exists. See also attributeIds(), removeAttribute(), attribute(), and clearAttributes(). Returns the attribute with id id from the service. If the attribute is not found, a null QSDPAttribute is returned. For extra error information, you can pass in the ok flag, which specifies whether an error occurred, or an actual NULL attribute was returned. See also attributeIds(), addAttribute(), removeAttribute(), and clearAttributes(). Returns a list of all attribute identifiers this service contains. See also addAttribute(), removeAttribute(), attribute(), and clearAttributes(). Returns a list of unique identifiers of all browse groups this service is a part of. See also setBrowseGroups(). Clears all attributes. See also attributeIds(), addAttribute(), removeAttribute(), and attribute(). Returns the Doc URL attribute. See also setDocUrl(). Returns the Exec URL attribute. See also setExecUrl(). Returns a SDP service record generated from the contents of data. Returns a null service record if the contents of data cannot be parsed. Returns a SDP service record generated from the contents of device. 
Returns a null service record if the contents of device cannot be parsed. Returns the group id attribute. See also setGroup(). Returns the Icon URL attribute. See also setIconUrl(). Returns the ServiceID attribute. Each service on the SDP Server is uniquely identified using this uuid. See also setId(). This method can be used to find out whether a SDP record is an implementation of a particular service class, given by profile parameter. This method returns true if the service class matches, false otherwise. This is an overloaded member function, provided for convenience. This method can be used to find out whether a SDP record is an implementation of a particular service class, given by serviceUuid parameter. This method returns true if the service class matches, false otherwise. Returns true if this SDP service record has no attributes. Returns the provider name attribute. See also setProviderName(). Returns a server specific record handle. See also setRecordHandle(). Removes the attribute with the specified id id from the service record. Returns true on success. If the attribute is not found, nothing is done and false is returned. See also attributeIds(), addAttribute(), attribute(), and clearAttributes(). For a family of services that work over the RFCOMM protocol, this method returns the RFCOMM channel the service is running on. The service parameter specifies the service record to search. Returns the channel number on success, -1 if no channel number was found. Returns the service description attribute. See also setServiceDescription(). Returns the service name attribute. See also setServiceName(). Sets a list of unique identifiers of all browse groups this service is a part of to groups. See also browseGroups(). Sets the Doc URL attribute to docUrl. See also docUrl(). Sets the Exec URL attribute to execUrl. See also execUrl(). Sets the GroupID attribute to group. All services which belong to a Group Service Class will require this attribute. 
All other services can be a part of one or more groups. This is set through the browse group list attribute. See also group(). Sets the Icon URL attribute to iconUrl. See also iconUrl(). Sets the ServiceID attribute to id. The id argument should be unique identifier of the service. See also id(). Sets the provider name attribute to providerName See also providerName(). Sets a server specific record handle to handle. See also recordHandle(). Sets the service description attribute to serviceDesc. See also serviceDescription(). Sets the service name attribute to serviceName. See also serviceName(). Assign the contents of other to the current SDP service record. Returns whether other is equal to this SDP service record.
https://doc.qt.io/archives/qtopia4.3/qbluetoothsdprecord.html
I'm having a terrible time trying to convert a variable. When I convert the value of argv[1] from char* to int using atoi, it just doesn't seem to come out right. Any time I use a two-digit number as my argument, it comes out as 15, regardless of what number I typed. With three-digit numbers, it's 16. What exactly am I doing wrong here? It works fine when I use the prompt rather than specify an argument.

Code:
#include <iostream>
#include <cstdlib>

using namespace std;

int main ( int argc, char *argv[] )
{
    int a;
    int b;
    int c;
    int count;
    int endset;
    unsigned keepLooping;

    if (argc != 1)
    {
        endset = atoi(argv[1]);
        goto noprompt;
    }

    cout << "\nFibonacci's Number Sequence\n";
    cout << "Input how many numbers to go through: ";
    cin >> endset;

noprompt:
    count = 0;
    endset -= 1;
    if (endset == 0)
    {
        cout << "\n1\n";
        return 0;
    }
    if (endset < 0)
    {
        cerr << "\nInvalid number specified\n";
        return 1;
    }
    a = 0;
    b = 1;
    keepLooping = 1;
    while (keepLooping)
    {
        count += 1;
        c = a + b;
        if (count == 2)
        {
            c = 1;
            a = 1;
            b = 1;
        }
        cout << "\n" << c;
        a = b;
        b = c;
        if (count > endset)
        {
            keepLooping = 0;
            cout << "\n";
            return 0;
        }
    }
}
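The thread does not record a resolution, but atoi itself cannot report failure, which makes problems like this hard to diagnose. One way to narrow it down is to parse argv[1] with strtol instead, which signals bad input explicitly. A minimal sketch follows; the helper name parse_count and the upper bound are made up for illustration:

```cpp
#include <cassert>
#include <cerrno>
#include <cstdlib>

// Hypothetical helper: parse a decimal count from a C string, returning -1
// for anything that is not a clean non-negative integer. Unlike atoi,
// strtol reports errors (via errno and the end pointer) instead of
// silently returning 0 or an overflowed value.
int parse_count(const char *text)
{
    char *end = nullptr;
    errno = 0;
    long value = std::strtol(text, &end, 10);  // base 10, explicit end pointer
    if (errno != 0 || end == text || *end != '\0')
        return -1;                             // overflow, empty, or trailing junk
    if (value < 0 || value > 1000000)
        return -1;                             // outside the range we accept
    return static_cast<int>(value);
}
```

If parse_count("12") returns 12 here but the program still misbehaves, the problem lies in what actually arrives in argv[1] (for example, an IDE run configuration passing different arguments than expected), not in the conversion itself.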
http://cboard.cprogramming.com/cplusplus-programming/105145-having-trouble-converting-variable.html
This chapter is all about getting to know Xamarin and what to expect from it. It is the only chapter that is pure theory; all the others cover hands-on projects. You are not expected to write any code at this point, but instead, simply read through this chapter to develop a high-level understanding of what Xamarin is and how Xamarin.Forms relates to Xamarin and how to set up a development machine. We will start by defining what a native app is and what .NET as a technology brings to the table. After that, we will look at how Xamarin.Forms fits into the bigger picture and learn when it is appropriate to use the traditional Xamarin and Xamarin.Forms apps. We often use the term traditional Xamarin to describe apps that don't use Xamarin.Forms, even though Xamarin.Forms apps are bootstrapped through a traditional Xamarin app. In this chapter, we will cover the following topics: - Native applications - Xamarin and Mono - Xamarin.Forms - Setting up a development machine Let's get started! Native applications The term native application means different things to different people. For some people, it is an app that is developed using the tools specified by the creator of the platform, such as an app developed for iOS with Objective-C or Swift, an Android app developed with Java or Kotlin, or a Windows app developed with .NET. Others use the term native application to refer to apps that are compiled into machine code that is native. In this book, we will define a native application as one that has a native UI, performance, and API access. The following list explains these three concepts in greater detail: - Native UI: Apps built with Xamarin use the standard controls for each platform. This means, for example, that an iOS app built with Xamarin will look and behave as an iOS user would expect and an Android app built with Xamarin will look and behave as an Android user would expect. 
- Native performance: Apps built with Xamarin are compiled for native performance and can use platform-specific hardware acceleration.
- Native API access: Native API access means that apps built with Xamarin can use everything that the target platforms and devices offer to developers.

Xamarin and Mono

Xamarin is a developer platform that is used to develop native apps with .NET. The C# APIs we use when we develop apps with Xamarin are more or less identical to the platform APIs, but they are .NETified. For example, APIs are often customized to follow .NET naming conventions, and the Android set and get methods are often replaced by properties. The reason for this is that the APIs should be easier to use for .NET developers.

Mono () is an open source implementation of the Microsoft .NET framework, which is based on the European Computer Manufacturers Association (ECMA) standards for C# and the Common Language Runtime (CLR). Mono was created to bring the .NET framework to platforms other than Windows. It is part of the .NET Foundation (), an independent organization that supports open development and collaboration involving the .NET ecosystem.

With a combination of the Xamarin platforms and Mono, we can use both the platform-specific APIs and the platform-independent parts of .NET, including namespaces such as System, System.Linq, System.IO, System.Net, and System.Threading.Tasks. There are several reasons for using Xamarin for mobile app development, which we will cover in the following sections.

Code sharing

If we use one common programming language for multiple mobile platforms (and even server platforms), then we can share a lot of code between our target platforms, as illustrated in the following diagram. All code that isn't related to the target platform can be shared with other .NET platforms.
Code that is typically shared in this way includes business logic, network calls, and data models.

There is also a large community based around the .NET platforms, as well as a wide range of third-party libraries and components that can be downloaded from NuGet and used across the .NET platforms.

Code sharing across platforms leads to shorter development times. It also produces apps of a higher quality because, for example, we only need to write the code for business logic once. There is a lower risk of bugs, and we are also able to guarantee that a calculation returns the same result, regardless of what platform our users use.

Using existing knowledge

For .NET developers who want to start building native mobile apps, it is easier to just learn the APIs for the new platforms than it is to learn the programming languages and APIs for both old and new platforms. Similarly, organizations that want to build native mobile apps can use their existing developers and their knowledge of .NET to develop apps. Because there are more .NET developers than Objective-C and Swift developers, it is also easier to find new developers for mobile app development projects.

Xamarin platforms

The different Xamarin platforms available are Xamarin.iOS, Xamarin.Android, and Xamarin.Mac. In this section, we will take a look at each of them.

Xamarin.iOS

Xamarin.iOS is used to build apps for iOS with .NET and contains the bindings to the iOS APIs mentioned previously. Xamarin.iOS uses Ahead-Of-Time (AOT) compilation to compile the C# code into Advanced RISC Machine (ARM) assembly language. The Mono runtime runs alongside the Objective-C runtime. Code that uses .NET namespaces, such as System.Linq or System.Net, is executed by the Mono runtime, while code that uses iOS-specific namespaces is executed by the Objective-C runtime. Both the Mono runtime and the Objective-C runtime run on top of the Unix-like X is Not Unix (XNU) kernel, which was developed by Apple.
The following diagram shows an overview of the iOS architecture:

Xamarin.Android

Xamarin.Android is used to build apps for Android with .NET and contains bindings to the Android APIs. The Mono runtime and the Android Runtime (ART) run side by side on top of a Linux kernel. Xamarin.Android apps can be either Just-In-Time (JIT)-compiled or AOT-compiled, but to AOT-compile them, we need to use Visual Studio Enterprise.

Communication between the Mono runtime and ART occurs via a Java Native Interface (JNI) bridge. There are two types of JNI bridges—the Managed Callable Wrapper (MCW) and the Android Callable Wrapper (ACW). An MCW is used when code needs to run in ART, and an ACW is used when ART needs to run code in the Mono runtime, as shown:

Xamarin.Mac

Xamarin.Mac is used to build apps for macOS with .NET and contains the bindings to the macOS APIs. Xamarin.Mac has the same architecture as Xamarin.iOS—the only difference is that Xamarin.Mac apps are JIT-compiled, unlike Xamarin.iOS apps, which are AOT-compiled. This is shown in the following diagram:

Xamarin.Forms

Xamarin.Forms is a UI framework that is built on top of Xamarin (for iOS and Android) and the Universal Windows Platform (UWP). Xamarin.Forms allows developers to create a UI for iOS, Android, and UWP with one shared code base, as illustrated in the following diagram. If we build an app with Xamarin.Forms, we can use XAML, C#, or a combination of both to create the UI:

The architecture of Xamarin.Forms

Xamarin.Forms is more or less just an abstraction layer on top of each platform. Xamarin.Forms has a shared layer that is used by all platforms, as well as a platform-specific layer. The platform-specific layer contains renderers. A renderer is a class that maps a Xamarin.Forms control to a platform-specific native control. Each Xamarin.Forms control has a platform-specific renderer.
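As a sketch of what a custom renderer subclass looks like on iOS: ExportRenderer, EntryRenderer, and OnElementChanged are the real Xamarin.Forms types and hooks, but the MyEntryRenderer class, its namespace, and the border tweak are hypothetical and only meant to illustrate the shape of the mechanism:

```csharp
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

// Tell Xamarin.Forms to use our renderer for every Entry on iOS.
[assembly: ExportRenderer(typeof(Entry), typeof(MyApp.iOS.MyEntryRenderer))]

namespace MyApp.iOS
{
    // Hypothetical renderer: customizes the native UITextField that the
    // built-in EntryRenderer creates for a Xamarin.Forms Entry.
    public class MyEntryRenderer : EntryRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
        {
            base.OnElementChanged(e);
            if (Control != null)
            {
                // Control is the native UIKit.UITextField.
                Control.BorderStyle = UIKit.UITextBorderStyle.Line;
            }
        }
    }
}
```

The same pattern applies on Android, where the base class would be the Android EntryRenderer and Control would be the native EditText.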
The following diagram illustrates how an Entry control in Xamarin.Forms is rendered to a UITextField control from the UIKit namespace when the shared Xamarin.Forms code is used in an iOS app. The same code in Android renders an EditText control from the Android.Widget namespace:

Defining a UI using XAML

The most common way to declare our UI in Xamarin.Forms is by defining it in a XAML document. It is also possible to create the GUI in C#, since XAML is really only a markup language for instantiating objects. We could, in theory, use XAML to create any type of object, as long as it has a parameterless constructor. A XAML document is an Extensible Markup Language (XML) document with a specific schema.

Defining a Label control

As a simple example, let's look at the following snippet of a XAML document:

<Label Text="Hello World!" />

When the XAML parser encounters this snippet, it creates an instance of a Label object and then sets the properties of the object that correspond to the attributes in the XAML. This means that if we set a Text property in XAML, it sets the Text property on the instance of the Label object that is created. The XAML in the preceding example has the same effect as the following:

var obj = new Label() { Text = "Hello World!" };

XAML exists to make it easier to view the object hierarchy that we need to create in order to make a GUI. An object model for a GUI is also hierarchical by design, so XAML supports adding child objects. We can simply add them as child nodes, as follows:

<StackLayout>
    <Label Text="Hello World" />
    <Entry Text="Ducks are us" />
</StackLayout>

StackLayout is a container control that organizes its children vertically or horizontally. Vertical organization is the default and is used unless we specify otherwise. There are also a number of other containers, such as Grid and FlexLayout. These will be used in many of the projects in the following chapters.
Creating a page in XAML

A single control is of no use unless it has a container that hosts it. Let's see what an entire page would look like. A fully valid ContentPage object defined in XAML is an XML document. This means that we must start with an XML declaration. After that, we must have one—and only one—root node, as shown:

<?xml version="1.0" encoding="UTF-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MainPage">
    <StackLayout>
        <Label Text="Hello world!" />
    </StackLayout>
</ContentPage>

In the preceding example, we defined a ContentPage object that translates into a single view on each platform. In order to make it valid XAML, we need to specify a default namespace (xmlns="http://xamarin.com/schemas/2014/forms") and then add the x namespace (xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"). The default namespace lets us create objects without prefixing them, such as the StackLayout object. The x namespace lets us access properties such as x:Class, which tells the XAML parser which class to instantiate to control the page when the ContentPage object is created.

A ContentPage object can have only one child. In this case, it's a StackLayout control. Unless we specify otherwise, the default layout orientation is vertical. A StackLayout object can, therefore, have multiple children. Later on, we will touch on more advanced layout controls, such as the Grid and FlexLayout controls. In this specific example, we will create a Label control as the first child of StackLayout.

Creating a page in C#

For clarity, the following code shows you how the previous example would look in C#:

public class MainPage : ContentPage
{
}

MainPage is a class that inherits from Xamarin.Forms.ContentPage. This class is automatically generated for us if we create a XAML page, but if we just use code, we need to define it ourselves.
Let's create the same control hierarchy as the XAML page we defined earlier using the following code:

var page = new MainPage();
var stacklayout = new StackLayout();
stacklayout.Children.Add(
    new Label()
    {
        Text = "Welcome to Xamarin.Forms"
    });
page.Content = stacklayout;

The first statement creates a page object. We could, in theory, create a new ContentPage page directly, but this would prohibit us from writing any code behind it. For this reason, it's good practice to subclass each page that we plan to create. The block following this first statement creates the StackLayout control, which contains the Label control that is added to the Children collection. Finally, we assign the StackLayout to the Content property of the page.

XAML or C#?

Generally, using XAML provides a much better overview, since a page is a hierarchical structure of objects and XAML is a very nice way of defining that structure. In code, the structure is flipped around, as we need to define the innermost object first, which makes it harder to read the structure of our page. This was demonstrated in the Creating a page in XAML section of this chapter. Having said that, it is generally a matter of preference as to how we decide to define the GUI. This book will use XAML rather than C# in the projects to come.

Xamarin.Forms versus traditional Xamarin

While this book is about Xamarin.Forms, we will also highlight the differences between using traditional Xamarin and Xamarin.Forms. Traditional Xamarin is used when developing apps that use the iOS and Android Software Development Kits (SDKs) without any means of abstraction. For example, we can create an iOS app that defines its UI in a storyboard or directly in code. This code would not be reusable for other platforms, such as Android. Apps built using this approach can still share non-platform-specific code by simply referencing a .NET Standard library.
This relationship is shown in the following diagram:

Xamarin.Forms, on the other hand, is an abstraction of the GUI, which allows us to define UIs in a platform-agnostic way. It still builds on top of Xamarin.iOS, Xamarin.Android, and all the other supported platforms. The Xamarin.Forms app can be created as a .NET Standard library or as a shared code project, where the source files are linked as copies and built within the same project as the platform we are currently building for. This relationship is shown in the following diagram:

Having said that, Xamarin.Forms cannot exist without traditional Xamarin, since it is bootstrapped through an app for each platform. This gives us the ability to extend Xamarin.Forms on each platform using custom renderers and platform-specific code that can be exposed to our shared code base through interfaces. We'll look at these concepts in more detail later on in this chapter.

When to use Xamarin.Forms

We can use Xamarin.Forms in most cases and for most types of apps. If we need to use controls that are not available in Xamarin.Forms, we can always use the platform-specific APIs. There are, however, cases where Xamarin.Forms is not useful. The most common situation where we might want to avoid using Xamarin.Forms is if we are building an app that should look very different across our target platforms.

Setting up a development machine

Developing an app for multiple platforms imposes higher demands on our development machine. One reason for this is that we often want to run one or more simulators or emulators on our development machine. Different platforms also have different requirements with regard to what is needed to begin development. Regardless of whether we use macOS or Windows, Visual Studio will be our integrated development environment (IDE). There are several versions of Visual Studio, including the free Community edition. Visit the Visual Studio website to compare the available versions.
The following list is a summary of what we need to begin development for each platform:

- iOS: To develop an app for iOS, we need a Macintosh (Mac) device. This could either be the machine that we are developing on or a machine on our network. The reason we need to connect to a Mac is that we need Xcode to compile and debug an app. Xcode also provides an iOS simulator. It is possible to do some iOS development on Windows without a connected Mac; you can read more about this in the Xamarin Hot Restart section of this chapter.
- Android: Android apps can be developed on either macOS or Windows. Everything we need, including SDKs and emulators, is installed with Visual Studio.
- UWP: UWP apps can only be developed in Visual Studio on a Windows machine.

Setting up a Mac

There are two main tools that are required to develop apps for iOS and Android with Xamarin on a Mac: Visual Studio for Mac (if we are only developing Android apps, this is the only tool we need) and Xcode. In the following sections, we will take a look at how to set up a Mac for app development.

Installing Xcode

Before we install Visual Studio, we need to download and install Xcode. Xcode is the official development IDE from Apple and contains all the tools available for iOS development, including SDKs for iOS, macOS, tvOS, and watchOS. We can download Xcode from the Apple developer portal or from the Mac App Store. I recommend that you download it from the App Store because this guarantees you have the latest stable version. The only reason to download Xcode from the developer portal is if you want to use a prerelease version of Xcode to develop for a prerelease version of iOS.

After the first installation, and after each update of Xcode, it is important that you open it. Xcode often needs to install additional components after an installation or an update. We also need to open Xcode to accept the license agreement with Apple.
Installing Visual Studio

To install Visual Studio, we first need to download the installer from the Visual Studio website. When we start the Visual Studio installer via the file we downloaded, it will check what we already have installed on our machine. When the check is finished, we can select which platforms and tools we would like to install. Once we have selected the platforms that we want to install, Visual Studio downloads and installs everything that we need to get started with app development using Xamarin, as shown:

Configuring the Android emulator

Visual Studio uses the Android emulators provided by Google. If we want our emulator to be fast, then we need to ensure that it is hardware-accelerated. To hardware-accelerate the Android emulator, we need to install the Intel Hardware Accelerated Execution Manager (HAXM), which can be downloaded from Intel's website.

The next step is to create the Android emulator. First, we need to ensure that the Android emulator and the Android OS images are installed. To do this, take the following steps:

- Go to the Tools tab to install the Android emulator:
- We also need to install one or more images to use with the emulator. We can install multiple images if, for example, we want to run our app on different versions of Android. We can select emulators with Google Play (as in the following screenshot) so that we can use Google Play services in our app, even when we are running it in an emulator. This is required if, for example, we want to use Google Maps in our app:
- Then, to create and configure an emulator, go to Device Manager in the Android section of the Tools tab in Visual Studio. From Android Device Manager, we can start an emulator if we already have one created, or we can create new emulators, as shown:
- It is also possible to edit the properties of the device so that we have an emulator that matches our specific needs.
Because we will not run the emulator on a device with an ARM processor, we have to select either an x86 processor or an x64 processor, as in the following screenshot. If we try to use an ARM processor, the emulator will be very slow:

Setting up a Windows machine

We can use either a virtual or physical Windows machine for development with Xamarin. We can, for example, run a virtual Windows machine on our Mac. The only tool we need for app development on our Windows machine is Visual Studio.

Installing Xamarin for Visual Studio

If we already have Visual Studio installed, we first need to open Visual Studio Installer; otherwise, we need to download the installation files from the Visual Studio website. Before the installation starts, we need to select which workloads we want to install.

If we want to develop apps for Windows, we need to select the Universal Windows Platform development workload, as shown:

For Xamarin development, we need to install Mobile development with .NET. If you want to use Hyper-V for hardware acceleration, you can deselect the checkbox for Intel HAXM in the detailed description of the Mobile development with .NET workload on the left-hand side, as in the following screenshot. When you deselect Intel HAXM, the Android emulator is also deselected, but you can reinstall it later:

When we first start Visual Studio, we will be asked whether we want to sign in. It is not necessary for us to sign in unless we want to use Visual Studio Professional or Enterprise, in which case we will need to sign in so that our license can be verified.

Pairing Visual Studio with a Mac

If we want to run, debug, and compile our iOS app, then we need to connect Visual Studio to a Mac. We can set up our Mac manually, as described earlier in this chapter, or we can use Automatic Mac Provisioning. This installs Mono and Xamarin.iOS on the Mac that we are connecting to. It will not install the Visual Studio IDE, but this isn't necessary if we just want to use the Mac as a build machine.
We do, however, need to install Xcode manually. To be able to connect to a Mac—either manually or using Automatic Mac Provisioning—we need to be able to access the Mac via our network, and we need to enable Remote Login on the Mac. To do this, go to Settings | Sharing and select the checkbox for Remote Login. To the left of the window, we can select which users are allowed to connect with Remote Login, as shown: To connect to the Mac from Visual Studio, use the Pair to Mac button in the toolbar (as in the following screenshot); or, in the top menu, go to Tools | iOS | Pair to Mac: A dialog box will appear showing all the Macs that can be found on the network. If your Mac doesn't appear in the list of available Macs, we can use the Add Mac... button at the bottom-left corner of the window to enter an IP address, as shown: If everything that we need is installed on the Mac, then Visual Studio will connect and we can start building and debugging our iOS app. If Mono is missing on the Mac, a warning will appear. This warning will also give us the option to install it, as shown: Configuring an Android emulator and hardware acceleration If we want a fast Android emulator that works smoothly, we need to enable hardware acceleration. This can be done using either Intel HAXM or Hyper-V. The disadvantage of Intel HAXM is that it can't be used on machines with an Advanced Micro Devices (AMD) processor; we have to use a machine with an Intel processor. We can't use Intel HAXM in parallel with Hyper-V. Because of this, Hyper-V is the preferred way to hardware-accelerate an Android emulator on a Windows machine. To use Hyper-V with our Android emulator, we need to have the April 2018 update (or later) for Windows and Visual Studio version 15.8 (or later) installed. To enable Hyper-V, we need to take the following steps: - Open the Start menu and type in Turn Windows features on or off. 
Click the option that appears to open it, as shown:

- To enable Hyper-V, select the Hyper-V checkbox. Also, expand the Hyper-V option and check the Hyper-V Platform checkbox. We also need to select the Windows Hypervisor Platform checkbox, as shown:
- Restart the machine when Windows prompts you to.

Because we didn't install an Android emulator during the installation of Visual Studio, we need to install it now. Go to the Tools menu in Visual Studio, then click on Android and then Android SDK Manager. Under Tools in Android SDK Manager, we can install the emulator by selecting Android Emulator, as in the following screenshot. Also, we should ensure that the latest version of Android SDK Build Tools is installed:

We also recommend installing the Native Development Kit (NDK). The NDK makes it possible to import libraries that are written in C or C++. The NDK is also required if we want to AOT-compile an app.

The Android SDK allows multiple emulator images to be installed simultaneously. We can install multiple images if, for example, we want to run our app on different versions of Android. Select emulators with Google Play (as in the following screenshot) so that we can use Google Play services in our app, even when we are running it in an emulator. This is required if, for example, we want to use Google Maps in our app:

The next step is to create a virtual device that uses the emulator image. To create and configure an emulator, go to Android Device Manager, which we can open from the Tools tab in Visual Studio. From the device manager, we can either start an emulator—if we already have one created—or we can create new emulators, as shown:

It is also possible to edit the properties of the device so that we have an emulator that matches our specific needs. We have to select either an x86 processor (as in the following screenshot) or an x64 processor, since we are not running the emulator on a device with an ARM processor.
If we try to use an ARM processor, the emulator will be very slow:

Configuring UWP developer mode

If we want to develop UWP apps, we need to activate developer mode on our development machine. To do this, go to Settings | Update & Security | For developers. Then, click on Developer mode, as in the following screenshot. This makes it possible for us to sideload and debug apps via Visual Studio:

If we select Sideload apps instead of Developer mode, we will only be able to install apps without going through Microsoft Store. If we have a machine to test our apps on, rather than debug them, we can just select Sideload apps.

Xamarin productivity tooling

Xamarin Hot Restart and Xamarin XAML Hot Reload are two tools that increase productivity for Xamarin developers.

Xamarin Hot Restart

Hot Restart is a Visual Studio feature, currently in preview, that makes developers more productive. It also gives us a way of running and debugging iOS apps on an iPhone without having to use a Mac connected to Visual Studio. Microsoft describes Hot Restart as follows:

"Xamarin Hot Restart enables you to quickly test changes to your app during development, including multi-file code edits, resources, and references. It pushes the new changes to the existing app bundle on the debug target which results in a much faster build and deploy cycle."

To use Hot Restart, you need the following:

- Visual Studio 2019 version 16.5 or later
- iTunes (64-bit)
- An Apple Developer account and paid Apple Developer Program enrollment

Hot Restart can currently only be used with Xamarin.Forms apps. To activate Hot Restart, go to Tools | Options | Environment | Preview Features | Enable Xamarin Hot Restart. You can read more about the current state of Hot Restart in the official documentation.

Xamarin XAML Hot Reload

Xamarin XAML Hot Reload allows us to make changes to our XAML without having to redeploy our app. When we have made changes to the XAML, we just save the file and it updates the page on the simulator/emulator or on a device.
XAML Hot Reload is currently only supported on iOS and Android. To enable XAML Hot Reload in Visual Studio on Windows, go to Tools | Options | Xamarin | Hot Reload. To enable XAML Hot Reload in Visual Studio for Mac, go to Visual Studio | Preferences | Tools for Xamarin | XAML Hot Reload. To use XAML Hot Reload, we have to use Xamarin.Forms 4.1+ with Visual Studio 2019 16.4+ (or Visual Studio for Mac 8.4+).

Summary

You should now feel a bit more comfortable with what Xamarin is and how Xamarin.Forms relates to Xamarin itself.

In this chapter, we established a definition of what a native app is and saw how it has a native UI, native performance, and native API access. We talked about how Xamarin is based on Mono, which is an open source implementation of the .NET framework, and discussed how, at its core, Xamarin is a set of bindings to platform-specific APIs. We then looked at how Xamarin.iOS and Xamarin.Android work under the hood.

After that, we began to touch on the core topic of this book, which is Xamarin.Forms. We started off with an overview of how platform-agnostic controls are rendered to platform-specific controls and how to use XAML to define a hierarchy of controls to assemble a page. We then spent some time looking at the difference between a Xamarin.Forms app and a traditional Xamarin app.

A traditional Xamarin app uses platform-specific APIs directly, without any abstraction other than what .NET adds as a platform. Xamarin.Forms is an API that is built on top of the traditional Xamarin APIs and allows us to define platform-agnostic GUIs in XAML or in code that is rendered to platform-specific controls. There's more to Xamarin.Forms than this, but this is what it does at its core.

In the last part of this chapter, we discussed how to set up a development machine on Windows or macOS.

Now, it's time to put our newly acquired knowledge to use! We will start off by creating a to-do app from the ground up in the next chapter.
We will look at concepts such as Model–View–ViewModel (MVVM) for a clean separation between business logic and the UI, and SQLite.NET to persist data to a local database on our device. We will do this for three platforms at the same time—so, read on!
https://www.packtpub.com/product/xamarin-forms-projects-second-edition/9781839210051
<?php
$link = mysql_connect('localhost', 'username', 'password');
if (!$link) {
    die('Could not connect: ' . mysql_error());
}
echo 'Connected successfully';
if (!mysql_select_db('database')) die("Can't select database");
// choose id 31 from table users
echo $id;      // 31
echo $name;    // id31's name
echo $surname; // id31's surname

Is there a function in C that lets me look at the next char in an array? Also, where could I find this information on my own? I tried Google and looking for existing threads on this site. I am trying to pull numbers from a line and store those numbers. So I want to do something like: if c is a number and the "next character" after c is not a number, then value = value*10 + c - '0', sto…

I need to use a thread pool in Python, and I want to be able to know when at least 1 thread out of the "maximum threads allowed" has finished, so I can start it again if I still need to do something. I have been using something like this:

def doSomethingWith(dataforthread):
    dostuff()
    i = i-1  # thread has finished

i = 0
poolSize = 5
threads = …

I'm looking for a non-standard feature of a shopping cart: my partners should be able to perform some actions on my website and gain money. At a certain point, a partner may wish to cash out/withdraw the money she has gained to her PayPal account. Are you aware of such a reverse shopping cart? Open source solutions are preferred. Thanks! P.S. I'm aware…

I have a screen that pops up the Facebook login thing with the following code:

//facebook stuff
if (Session.getActiveSession() == null || Session.getActiveSession().isClosed()) {
    Session.openActiveSession(this, true, null);
}

If the user hits the "back" button and dismisses the login thing, it automatically sends them to…

Question: Is it possible to create an AdornerDecorator that takes only the Adorners I want to its AdornerLayer?

public class SimpleCircleBehavior : Behavior<TextBox>
{
    private SimpleCircleAdorner sca;

    protected override void OnAttached()
    {
        base.OnAttached();
        AssociatedObject.Loa…
http://bighow.org/tags/Lets/1
Somewhat on-topic, one thing that bugs me (and I don't know if it's strictly a Windows thing or if Unix does this too): why on Earth can't you change the date/times of directories? I know it's largely useless, but I seem to recall encountering that limitation even in DOS. =)

But you can: SetFileTime().

But you can: SetFileTime(), with the CreateFile() call including GENERIC_WRITE and FILE_FLAG_BACKUP_SEMANTICS.

But you can: SetFileTime(), with the CreateFile() call including GENERIC_WRITE and FILE_FLAG_BACKUP_SEMANTICS. PS: Something wrong with the blogging software? This is my 4th attempt to submit.

Thanks "A", now to go see if that works under '9x and NT. =)

Don't think that will work in 9x.

Going slightly OT: what interfaces do you need to add your own sorting to the Explorer file/directory column – like Asc/Desc by name and then something else chosen by me? Does this really require a namespace extension, etc.? No simpler sort of hook?

"Don't think that will work in 9x" How many hours of your programming career is it worth to try and get advanced features out of a 10-year-old platform? Don't fool yourself by looking at the Windows installed base; look at your potential buyers. If you're doing it all for free because your time is valued at $0/hr, then by all means have fun solving these 9x brain teasers. I am happy to let software work as well as 9x/Me will let it, but I'm not going to do a bunch of extra work to recreate 10 years of Windows API progress. Can we all make a blood pact to ignore 9x now? I have just stuck my finger with a pin and am pressing it to the screen. Please do the same and press Submit.

Thanks Raymond! This answers a question I put into your suggestion box, although I didn't know enough at the time to explain it properly. One of our partitions was copied over from an old Novell server, and the directories were all intermingled with files, and were 'alphabetically challenged'.
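To make the commenter's recipe concrete, here is a sketch of setting a directory's last-write time (Win32-only; CreateFile, SetFileTime, and FILE_FLAG_BACKUP_SEMANTICS are the real APIs, while the helper's name and simplified error handling are my own — treat this as illustrative, not production code):

```c
#include <windows.h>

/* Sketch: open a directory handle and stamp its last-write time.
   FILE_FLAG_BACKUP_SEMANTICS is what allows CreateFile to open a
   directory rather than a regular file. */
BOOL TouchDirectory(LPCTSTR path, const FILETIME *when)
{
    HANDLE h = CreateFile(path,
                          GENERIC_WRITE,               /* needed by SetFileTime */
                          FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL,
                          OPEN_EXISTING,
                          FILE_FLAG_BACKUP_SEMANTICS,  /* required for directories */
                          NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    /* Passing NULL for creation/access time leaves those fields unchanged. */
    BOOL ok = SetFileTime(h, NULL, NULL, when);
    CloseHandle(h);
    return ok;
}
```

As the thread notes, this works on the NT line; on 9x, CreateFile cannot open directory handles, which is why the trick fails there.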
Turns out, over half of these folders didn't have modified dates. A quick command line later (for /d %x in (*) do touch "%x\fixdate.fix" && del "%x\fixdate.fix") and now everything is displaying in the correct order :)

Explorer often sorts "shortcut to folders" among files, instead of among folders. Is it related to this behavior?

Dave: It's less an issue of *wanting* to be compatible with '9x and more a matter of wanting to know what kind of minimum system requirements my application would require by utilizing feature X. I agree, I'm happy everything is running off of NT now and I hope '9x dies a quick death.

Kind of on topic, but one interesting thing I've noticed in Windows XP Explorer is that it will load a folder with one sorting system, and then shortly after (or after a refresh) it will modify it slightly. For an example, put the following folders somewhere and watch:

1.0.1
1.0.12
1.0.2

It should modify it to be:

1.0.1
1.0.2
1.0.12

I do like it this way, but it is very annoying that it only changes after a few seconds, or after a refresh (and not on load-up). Much better if it was consistent.

No; shortcuts to folders *are* files.

"put the following folders somewhere and watch" I assume that newly created or renamed files and folders are put/left in place unsorted because that makes them easier to find. Imagine if you created a new folder and then renamed it: the second you pressed Enter it would "disappear" to a new location in the list because of the sort order. That would be confusing and a bit scary ("OMG, where did my folder go? I just renamed it and it's GONE!"). You could do some sort of hokey animation showing the file moving from its old position to the new one, I guess. (That was NOT a feature suggestion!)

Actually, that's exactly what happens if you have Show in Groups turned on and auto-arrange by name.
If you create a new folder, then it starts off under "N" for "New Folder"; then, when you type in the name you want and press Enter, it gets filed under the new first letter.

Not entirely related (the more appropriate topic has been and gone, and it involves "directories" and "sort order", so I'll risk it), but I came across some very strange "short filename" versions of directories today, and I can't even begin to work out how the short names have been generated:

JOC971~1  Job16_001
JOC981~1  Job16_002
JOC991~1  Job16_003
JOC9A1~1  Job16_004
JOC9B1~1  Job16_005
JO846D~1  Job16_006&007
JOC9E1~1  Job16_008
JOB16_~2  Job16_009
JOB16_~3  Job16_010
JOB16_~4  Job16_011
JOD989~1  Job16_012
JOB16_~1  Job16_013

IIRC, a checksum of the long filename is used for part of the short filename if there are too many would-be-overlapping-as-short-filenames long filenames in a directory to just do the sequential ~1, ~2, ~3, etc.

"I assume that newly created or renamed files and folders […]" It affects the folders no matter how long they have been there (but maybe just when in "Arrange in groups" mode).

In other words, the ones writing (or even designing) Explorer screwed up so badly you felt the need to publicly humiliate them (hoping they'd be fired – I sure do, for the ways they destroyed the once perfectly working Explorer of Win95/NT3.51+NewShell/NT4 with the VB-ish crashing Web-ish thing (junk) we are forced to use nowadays)? Should you ever need support for having them seen in tar and feathers on web TV, I can likely get ~10k-40k votes from paying customers.

In response to that old ordering thingy: the transitivity-of-equality condition means that you can't use floating point with an epsilon for sorting – or (for instance) for creating a map/dictionary. Which… sucks sometimes.

Vorn

Anon Coward: Thank you for putting words in my mouth.

What other criteria can be used to tell a simple pidl from a valid file or dir?
The problem is that 0 is a valid value for some of these things (e.g., no last modified time), so just by looking at a pidl you can’t really tell if it’s a simple pidl or a normal pidl with a lot of 0’s.

Shortcuts to directories aren’t always files. For example, create a directory shortcut on your Start Menu in either Windows 2000 or Windows XP. I created a shortcut to a directory called "bin" on my Start Menu:

[ben@frazzle C:\Documents and Settings\All Users\Start Menu]$ dir
Directory of C:\Documents and Settings\All Users\Start Menu
2005-01-31 16:45 <DIR> .
2005-01-31 16:45 <DIR> ..
2005-01-31 16:45 <DIR> bin
2005-01-31 01:14 <DIR> Programs
[ben@frazzle C:\Documents and Settings\All Users\Start Menu]$ dir bin
Directory of C:\Documents and Settings\All Users\Start Menu\bin
2005-01-31 16:45 <DIR> .
2005-01-31 16:45 <DIR> ..
2005-01-31 16:45 471 target.lnk

In this case, it creates a directory with a specially-named shortcut file and a Desktop.ini (which is hidden, of course) inside it. Strangely, this only seems to happen if you make a directory shortcut on the Start menu. Elsewhere, they are created as they were in previous versions of Windows. If you move or copy the created shortcut from the Start Menu to another directory it goes on working. It seems that the shortcut directory must also be read-only for it to work. Wacky stuff. Fun can be had by putting one of these desktop.ini files in a normal directory along with a target.lnk shortcut and setting the directory read-only. Suddenly the directory becomes inaccessible from Explorer! Oh, the fun things you can do with Windows Explorer.
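The version-style ordering the commenters noticed (1.0.1, 1.0.2, 1.0.12) is a "natural sort": digit runs are compared numerically rather than character by character. Explorer gets this from the shell's StrCmpLogicalW comparison; a minimal, illustrative Python sketch of the same idea (not Explorer's actual implementation):

```python
import re

def natural_key(name):
    # Split into digit and non-digit runs; compare digit runs as integers.
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', name)]

folders = ["1.0.1", "1.0.12", "1.0.2"]
print(sorted(folders))                   # plain lexicographic: ['1.0.1', '1.0.12', '1.0.2']
print(sorted(folders, key=natural_key))  # natural order: ['1.0.1', '1.0.2', '1.0.12']
```

The lexicographic result is exactly the "wrong" initial ordering described in the comment; the keyed sort gives the order Explorer settles on.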
https://blogs.msdn.microsoft.com/oldnewthing/20050125-00/?p=36603
import "go.chromium.org/luci/common/data/rand/cryptorand"

Package cryptorand implements a mockable source of cryptographically strong randomness. In a real-world scenario it is the same source as provided by crypto/rand. In tests it is replaced with a reproducible, not really random, stream of bytes.

Get returns an io.Reader that emits a random stream of bytes. Usually this returns crypto/rand.Reader, but unit tests may replace it with a mock by using the MockForTest function.

MockForTest installs a deterministic source of 'randomness' in the context. Must not be used outside of tests.

Read is a helper that reads bytes from the random source using io.ReadFull. On return, n == len(b) if and only if err == nil.

Package cryptorand imports 5 packages and is imported by 15 packages. Updated 2019-10-14.
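The pattern the package describes (a swappable randomness source that is the OS CSPRNG in production and deterministic in tests) can be sketched in Python. This is an illustrative analogue, not part of the LUCI package; the function names mirror the Go API:

```python
import os
import random

# Module-level source; the default is the OS CSPRNG, as with crypto/rand.Reader.
_source = os.urandom

def read(n):
    """Read n random bytes from the currently installed source."""
    return _source(n)

def mock_for_test(seed):
    """Install a deterministic 'randomness' source. Must not be used outside tests."""
    rng = random.Random(seed)
    global _source
    _source = lambda n: bytes(rng.getrandbits(8) for _ in range(n))

# In a test: identical seeds yield identical, reproducible byte streams.
mock_for_test(42)
first = read(8)
mock_for_test(42)
assert read(8) == first
```

Production code only ever calls read(); the test fixture decides whether the bytes are really random.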
https://godoc.org/go.chromium.org/luci/common/data/rand/cryptorand
Mean of a frame always changes?

Hi everyone, I read frames from a webcam and print the mean of the 3 channels of each frame.

import cv2
import numpy as np

cv2.destroyAllWindows()
capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
while True:
    ret, frame = capture.read()
    cv2.imshow('video', frame)
    a = np.mean(frame, axis=(0, 1))
    print(a)
    if cv2.waitKey(30) == 27:
        break
capture.release()
cv2.destroyAllWindows()

Everything is constant (lightness, camera etc.) but the values are always changing. What is the reason for that?

What do you mean (changing)? 100%? 10%? 0.01%? What is the light source?

It is around 0.1%. There is no light source.

"There is no light source" means it's a black image, so... 0.1% is possible. Instead of cv2.imshow('video', frame) try: normally you will see image noise.

I mean, there is just daylight, not any external source. I tried it; it is better, but I think it is impossible to reduce that to zero, isn't it?

Cool your camera to -200°C: no noise! Or, if your image is not moving, you can average successive images. 0.1% is not "change", it is "variation of input" within "tolerances".
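The last suggestion (averaging successive frames when the scene is static) can be sketched with NumPy alone; here the camera is simulated with a constant scene plus Gaussian sensor noise, so the effect is measurable without hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

def grab_frame():
    # Stand-in for capture.read(): a constant gray scene plus sensor noise.
    return 128.0 + rng.normal(0.0, 2.0, size=(480, 640, 3))

# Per-frame channel means wobble with the noise...
single_means = [np.mean(grab_frame(), axis=(0, 1)) for _ in range(10)]
spread_single = np.ptp([m[0] for m in single_means])

# ...but averaging N successive frames shrinks the wobble by roughly sqrt(N).
def averaged_mean(n_frames=16):
    acc = np.zeros(3)
    for _ in range(n_frames):
        acc += np.mean(grab_frame(), axis=(0, 1))
    return acc / n_frames

avg_means = [averaged_mean() for _ in range(10)]
spread_avg = np.ptp([m[0] for m in avg_means])
print(spread_single, spread_avg)  # the averaged spread is noticeably smaller
```

This is why the forum answer calls the 0.1% "variation of input within tolerances": the noise never reaches zero, but averaging suppresses it.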
https://answers.opencv.org/question/225655/mean-of-a-frame-always-change/
Introduction: Raspberry Pi - HIH6130 I2C Humidity & Temperature Sensor Python Tutorial

HIH6130 is a humidity and temperature sensor with digital output. These sensors provide an accuracy level of ±4% RH, with industry-leading long-term stability, true temperature-compensated digital I2C, industry-leading reliability, energy efficiency, and an ultra-small package size and options. Here is its demonstration with Raspberry Pi using Python code.

Step 1: What You Need..!!
1. Raspberry Pi
2. HIH6130 LINK :
3. I²C Cable
4. I²C Shield for Raspberry Pi LINK :
5. Ethernet Cable

Step 2: Connections:
Take an I2C shield for Raspberry Pi and gently push it over the GPIO pins of the Raspberry Pi. Then connect one end of the I2C cable to the HIH6130 sensor and the other end to the I2C shield. Also connect the Ethernet cable to the Pi, or you can use a WiFi module. Connections are shown in the picture above.

Step 3: Code:
The Python code for HIH6130 can be downloaded from our github repository- ControlEverythingCommunity Here is the link for the same :... The datasheet of HIH6130.

# HIH6130
# This code is designed to work with the HIH6130_I2CS I2C Mini Module available from ControlEverything.com.
#...
import smbus
import time

# Get I2C bus
bus = smbus.SMBus(1)

# HIH6130 address, 0x27(39)
# Read data back from 0x00(00), 4 bytes
# humidity MSB, humidity LSB, temp MSB, temp LSB
data = bus.read_i2c_block_data

Step 4: Applications:
HIH6130 can be used to provide precise relative humidity and temperature measurement in air conditioners, enthalpy sensing, thermostats, humidifiers/de-humidifiers, and humidistats to maintain occupant comfort. It can also be employed in air compressors, weather stations and telecom cabinets.
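The tutorial's read call is truncated, so here is a hedged sketch of the read-and-convert step. The 14-bit scaling formulas (full scale 16382 counts; temperature range -40 to 125 °C) are as I recall them from Honeywell's HumidIcon documentation, so verify against the HIH6130 datasheet; the conversion is kept as a pure function so it can be checked without the sensor attached:

```python
def convert(data):
    """Convert the 4 raw bytes read from the HIH6130 into (%RH, degrees C)."""
    # Top two bits of the first byte are status flags; lower 14 bits are humidity.
    raw_h = ((data[0] & 0x3F) << 8) | data[1]
    # Temperature is left-justified: 14 data bits, bottom two bits unused.
    raw_t = ((data[2] << 8) | data[3]) >> 2
    humidity = raw_h / 16382.0 * 100.0
    temp_c = raw_t / 16382.0 * 165.0 - 40.0
    return humidity, temp_c

# On the Pi the bytes would come from:
#   data = bus.read_i2c_block_data(0x27, 0x00, 4)
# Here, a sample frame for illustration:
humidity, temp_c = convert([0x1F, 0xFF, 0x66, 0x64])
print("Relative humidity: %.2f %%RH" % humidity)
print("Temperature: %.2f C" % temp_c)
```

The sample bytes decode to roughly 50 %RH and 26 °C, which is a plausible room reading.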
http://www.instructables.com/id/Raspberry-Pi-HIH6130-I2C-Humidity-Temperature-Sens/
Recently my colleague Mehani and I got a chance to work on one of our data migration scenarios. As part of this module, we are supposed to move data present in Oracle on AWS RDS to Snowflake. In RDS, we have an ORACLE database set up with a couple of tables in it. In order to move the table data from RDS to Snowflake we used the Data Migration Service (DMS) and implemented the following data pipeline.

High-level steps to perform this migration:
- For this demo we have launched the RDS service for ORACLE in AWS and created some tables.
- Populate the table using the sqlworkbench editor.
- Create an S3 bucket to hold the tables' data.
- Launch the Data Migration Service (DMS) in AWS.
- Lambda function to trigger the Glue job.
- Glue job to load data into Snowflake.

Data Migration steps
- Launch the RDS service for an ORACLE database in the AWS organization.
- Download and install the workbench where we can connect to Oracle and execute some queries.
- Create some tables inside the database and insert data into the tables:

  insert into CUSTOMER
  select level, 'Raj', LEVEL, 'Active', 10000
  from dual
  connect by level <= 5000000;

- Create an S3 bucket to hold the RDS table data.
- Question: how should we migrate this data from RDS to S3? Here we leverage the AWS DMS (Data Migration Service) to export the records into the bucket.
- Steps required to successfully run DMS:
  - Create an IAM role having full access to the bucket.
  - Create a Replication Instance.
  - Create Source Endpoints.
  - On similar lines, create Target Endpoints.
  - Create a Database Migration task.
  - Select the replication instance, source endpoint, target endpoint and migration type as "migrate existing data".
Set target table mode as "drop tables on target" and leave other options as default.
- Start the data migration task; once the replication is started it will replicate all data to the S3 bucket created.
- Check out the S3 bucket, where we get the output organized folder-wise by database, with tables in CSV format.
- Develop a Lambda function which will trigger once a file is uploaded to the bucket and call the Glue job:

  import json
  import boto3

  def lambda_handler(event, context):
      # Note: despite the name, this is a Glue client, not an S3 client.
      s3_client = boto3.client("glue")
      s3_client.start_job_run(JobName="AWStoSF")
      # TODO implement
      return {
          'statusCode': 200,
          'body': json.dumps('Hello from Lambda!')
      }

- Develop a Glue job which will be invoked by the Lambda.
- Glue Job: AWStoSF
- In order to connect Glue with Snowflake, follow the below post: -
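DMS typically lands each table's full-load output as headerless CSV files under a <schema>/<table>/ prefix in the bucket, so the downstream Glue (or plain-Python) job mostly has to parse those objects. A minimal sketch of that parsing step; the column names are assumptions matching the CUSTOMER insert above, not taken from the post:

```python
import csv
import io

# DMS full-load CSVs carry no header row by default, so names must be supplied.
COLUMNS = ["ID", "NAME", "LEVEL_NO", "STATUS", "SALARY"]  # assumed column layout

def parse_dms_csv(text):
    """Parse one DMS-produced CSV object into a list of row dicts."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(COLUMNS, row)) for row in reader]

# In the real job, `text` would come from s3.get_object(...)["Body"].read().decode()
sample = "1,Raj,1,Active,10000\n2,Raj,2,Active,10000\n"
rows = parse_dms_csv(sample)
print(rows[0]["NAME"], len(rows))
```

Keeping the parsing pure like this makes the Glue job testable without touching S3.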
https://cloudyard.in/2022/02/data-migration-oracle-aws-rds-to-snowflake-via-dms/
MySample.csv contains the below details:

NAME Id No Dept
Tommy 1 12 CS
Jimmy 2 35 EC
Bonny 3 21 IT
Franky 4 61 EE

And my Python file contains the below code:

import csv
myifile = open('mysample.csv', "rb")
read = csv.reader(myifile)
for row in read:
    print(row)

But when I try to run the above code in Python, I get the below exception:

File "csvformat.py", line 4, in
for row in read :
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)

How should I fix it?

You need to open your file in text mode. More specifically, refer to the below line of code:

myifile = open('mysample.csv', "rt", encoding=<theencodingofthefile>)

Good guesses for the encoding are "ascii" and "utf8". You can also try to leave the encoding off, and it will just use your system default encoding, which tends to be UTF-8, but may be something else.

OR

In Python 3, csv.reader expects that the passed iterable returns strings and not bytes. Below is one more solution to your problem, which uses the codecs module:

import csv
import codecs

myifile = open('mysample.csv', "rb")
read = csv.reader(codecs.iterdecode(myifile, 'utf-8'))
for row in read:
    print(row)
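One refinement on the text-mode fix: the csv module's documentation recommends opening CSV files with newline='' so that embedded newlines inside quoted fields are handled correctly. A self-contained sketch of the whole round trip (the file path is generated just for the demo):

```python
import csv
import os
import tempfile

# Write a small CSV, then read it back in text mode with newline=''.
path = os.path.join(tempfile.mkdtemp(), "mysample.csv")
with open(path, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerows([["NAME", "Id", "No", "Dept"], ["Tommy", "1", "12", "CS"]])

with open(path, "r", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))

print(rows)  # [['NAME', 'Id', 'No', 'Dept'], ['Tommy', '1', '12', 'CS']]
```

Every field comes back as a string, which is exactly what csv.reader requires in Python 3.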
https://kodlogs.com/34726/iterator-should-return-strings-not-bytes-did-you-open-the-file-in-text-mode
From: Vladimir Prus (ghost_at_[hidden])
Date: 2005-01-17 11:08:14

I'm trying to clean up the doc areas which Dave marked as problematic, and am stuck because I can't reasonably explain the current behaviour of project-ids and use-project. I think it's a sign that we really need to fix them up. However, I'm unsure how to fix them without breaking every existing project, so I'd appreciate some help:

Use case 1 ("external project"). I want to use C++ Boost. I'd expect to write

use-project /home/ghost/boost/cvs ;

and then have convenient ids for all boost libraries.

Use case 2 ("internal project"). I want to use a project id to refer to some directory inside my project, to protect against directory reorganisation.

Use case 3 ("aliases"). I want to use two different project ids for the same project. Or, I want to use two different project ids for two different projects, even if they define the same project id.

Problem 1. The 'use-project' rule requires you to always specify a project-id, even if the referenced project already defines a project-id. This is a syntax inconvenience and can be easily fixed.

Problem 2. The 'use-project' rule requires that the specified project-id be equal to the project-id specified by the referenced project. So, it's not possible to give an alias to a project id. This is easy to fix, too.

Problem 3. The project ids are all "global" -- they share the same namespace. It means that something like

use-project boost-cvs : /home/ghost/boost/cvs ;
use-project boost : /home/ghost/boost/release ;

is not possible, since Jamfiles in both locations will define the same global project id "boost". If we want to solve the third problem, it means that we need "local" project ids. So, "/boost//" can mean different things in different Jamfiles. But then we have:

Question 1. Should there be any meaning of "relative" project ids? Now, all ids start with "/". If ids can be local, it's reasonable to drop "/" from them.
But that would require updating all Jamfiles, which is not desirable.

Problem 4. Subprojects. After

use-project /home/ghost/boost/cvs ;

I want to use all Boost libraries. I also want to be able to use them after

use-project boost-cvs : /home/ghost/boost/cvs ;

There are two ways to make it work: automatically translate ids of subprojects, or add a number of "alias" targets to Jamroot. In the first case, one will be able to use "/boost-cvs/filesystem//boost_filesystem", and in the second case: "/boost-cvs//filesystem". The first case requires that Jamroot somehow provide the list of all the subprojects (we don't have a documented way to do that), and the second case requires manual changes for each new library. But maybe it can be automated.

Question 2. Which variant is better?

There are some further questions, but so far I can't formulate them without going into implementation details. Feedback is very welcome.
https://lists.boost.org/boost-build/2005/01/8824.php
Investors considering a purchase of Seritage Growth Properties (Symbol: SRG) shares, but tentative about paying the going market price of $41.48/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the January 2019 put at the $35 strike, which has a bid at the time of this writing of $1.40. Collecting that bid as the premium represents a 4% return against the $35 commitment, or a 6.3% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Selling a put does not give an investor access to SRG's upside potential the way owning shares would. If Seritage Growth Properties sees its shares decline 15.6% and the contract is exercised (resulting in a cost basis of $33.60 per share before broker commissions, subtracting the $1.40 from $35), the only upside to the put seller is from collecting that premium for the 6.3% annualized rate of return. Interestingly, that annualized 6.3% figure actually exceeds the 2.4% annualized dividend paid by Seritage Growth Properties by 3.9%, based on the current share price of $41.48. And yet, if an investor was to buy the stock at the going market price in order to collect the dividend, there is greater downside because the stock would have to lose 15.62% to reach the $35 strike price. Always important when discussing dividends is the fact that, in general, dividend amounts are not always predictable and tend to follow the ups and downs of profitability at each company. In the case of Seritage Growth Properties, looking at the dividend history chart for SRG below can help in judging whether the most recent dividend is likely to continue, and in turn whether it is a reasonable expectation to expect a 2.4% annualized dividend yield.
Below is a chart showing the trailing twelve month trading history for Seritage Growth Properties, and highlighting in green where the $35 strike is located relative to that history: The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the January 2019 put at the $35 strike for the 6.3% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Seritage Growth Properties (considering the last 252 trading day closing values as well as today's price of $41.48) to be 23%. For other put options contract ideas at the various different available expirations, visit the SRG Stock Options page of.
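The quoted figures can be checked directly. The 232 days to expiration is my own estimate from the article's publication date (2018-06-01) to the January 2019 expiration, not a number given in the text:

```python
strike = 35.00
premium = 1.40
share_price = 41.48

simple_return = premium / strike                      # return against the $35 commitment
days_to_expiry = 232                                  # assumed: 2018-06-01 to mid-January 2019
annualized = simple_return * 365 / days_to_expiry     # the "YieldBoost" annualization

cost_basis = strike - premium                         # effective purchase price if assigned
decline_to_strike = (share_price - strike) / share_price

print(f"{simple_return:.1%} {annualized:.1%} {cost_basis:.2f} {decline_to_strike:.2%}")
```

This reproduces the article's 4% simple return, roughly 6.3% annualized, $33.60 cost basis, and 15.62% decline to the strike.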
https://www.nasdaq.com/articles/commit-buy-seritage-growth-properties-35-earn-63-annualized-using-options-2018-06-01
Using Time Zones with Rails

On a recent Rails application I built, I needed to determine which time zone a user was in to correctly display and use time-related information. I'll walk through two techniques I've found to get the job done.

The theory around working with time zones is pretty straightforward. Save all time-related data in Coordinated Universal Time (UTC), and display all time-related data in the time zone of any given user. If you're unfamiliar with ActiveSupport's TimeWithZone, quickly check out this blog post from David Eisinger and familiarize yourself with the in_time_zone method for display purposes. The question then becomes, "How do we know what time zone a user is in?"

Method 1 - Quick, simple, 90% right

In search of the answer, I first discovered some simple JavaScript code:

var currentTime = new Date();
var utcOffset = currentTime.getTimezoneOffset()
document.cookie = 'time_zone_offset='+utcOffset+';';

new Date() constructs a date object with the current time based on your system clock. getTimezoneOffset() returns the number of minutes by which you differ from UTC. Throw that in a cookie and you're done with the JavaScript. On the Rails side of things, you can grab that offset from the cookies and run the following code to determine which time zone has that offset.

offset = cookies["time_zone_offset"].to_i
time_zone = ActiveSupport::TimeZone[-offset.minutes]
current_user.update_attribute(:time_zone => time_zone)

However, this doesn't exactly work. There are multiple time zones that share a common offset. If you use this technique on the east coast of America, it's likely you'll get (GMT-05:00) Bogota, the capital of Colombia, or (GMT-04:00) Atlantic Time (Canada), depending on whether you've recently sprung forward or fallen back. This is due to the fact that these time zones come before (GMT-04/05:00) Eastern Time (US & Canada) alphabetically.
While this approach will still give you the accurate time for the present day, Daylight Saving standards of other regions could affect your code in the future. If your app does not care about the future, this solution is fine. My app does care about the future, though, and suddenly all my tests started failing when Daylight Saving Time came around.

Method 2 - Quick, a little less simple, 100% right

Enter JSTZ! This stands for jsTimezoneDetect and is a sweet bit of code which parses out the actual time zone from your computer's system time. Source code can be found here. With the set of functions jstz offers, the following lines of code will correctly determine the user's system time zone and store it in a cookie called jstz_time_zone.

var timeZone = jstz.determine();
document.cookie = 'jstz_time_zone='+timeZone.name()+';';

From there, you can retrieve the cookie in any controller and update the time_zone attribute on the user.

current_user.update_attribute(:time_zone => cookies["jstz_time_zone"])

Excellent! Now we are correctly retrieving a user's time zone from their computer and saving it on the user.

Confirming

At this point you have retrieved the user's system time zone, however it's possible that this is not the correct time zone. While it's a good guess to have when asking the user for input on the matter, it's necessary that you ask the user to confirm or select a time zone to guarantee correctness.

<% unless cookies[:selected_a_time_zone] %>
  <div class="time-zone-confirmation">
    <%= "Are you currently in the following time zone: #{current_user.time_zone}?" %>
    <%= button_tag "Yes", :class => 'hide-time-zone-confirmation' %>
    <%= button_tag "No", :class => 'show-time-zone-selection' %>
    <div class="time-zone-selection" style="display:none;">
      <%= simple_form_for @user, :url => "/time_zone", :method => :put do |f| %>
        <%= f.input :time_zone, :collection => ActiveSupport::TimeZone.us_zones.map(&:name) %>
        <%= f.submit "Submit" %>
      <% end %>
    </div>
  </div>
<% end %>

The following jQuery code accompanies the form to make for a nice user interaction.

$('button.hide-time-zone-confirmation').on('click', function() {
  document.cookie = 'selected_a_time_zone=true;';
  $('.time-zone-confirmation').hide();
});
$('button.show-time-zone-selection').on('click', function() {
  $('.time-zone-selection').show();
});

A route is added in routes.rb to direct the post to the correct controller.

put "/time_zone" => "time_zones#update"

And lastly, here is the controller code to save the decision.

def update
  current_user.update_attribute(:time_zone => params["user"]["time_zone"])
  cookies[:selected_a_time_zone] = true
  redirect_to :back
end

To sum up, we first make our best guess as to what time zone the user is in. We then ask the user to either confirm our guess, or select a time zone from a given list. In both cases, a cookie named selected_a_time_zone is set to true, which prevents the confirmation form from being displayed in the future. Updating a time_zone attribute on the current_user can then be used to correctly display all time-related info.
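The offset ambiguity behind Method 1 is easy to demonstrate outside Rails. In Python's zoneinfo (used here as an illustrative stand-in for ActiveSupport::TimeZone), several named zones report the same UTC offset at a given instant, so an offset alone cannot identify the zone:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A January instant: US Eastern is on standard time (UTC-5), like Bogota year-round.
when = datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc)

eastern = when.astimezone(ZoneInfo("America/New_York")).utcoffset()
bogota = when.astimezone(ZoneInfo("America/Bogota")).utcoffset()

print(eastern, bogota)    # identical offsets for two different zones
print(eastern == bogota)  # the offset can't tell them apart
```

This is exactly why the article's Method 2 captures a zone name rather than a minute offset.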
https://www.viget.com/articles/using-time-zones-with-rails
The problem was I did not consider the namespace in the interceptor, config file and login action.

<action name="authenticate" class...>
  <result type="chain">
    <param name="actionName">${#session.action}</param>
    <param name="namespace">${#session.space}</param>
  </result>
</action>

Well, this is actually the easy part but the original question remains: how do I remember the original request parameters? When the flow is forwarded to Login.jsp the original request is lost. I can save the parameters map in the session, but when the time comes for the originally requested action (dynamic result) I don't know how to pass the original request parameters. I guess the right place to do it is the custom interceptor, but I don't know how to pass parameters to the request. Is it possible to do?

On Friday, July 12, 2013 17:39:59 you wrote:
> If I use "redirections" I will lose the original request (parameters, uploaded binary data ...). But I am unable to make it work using forwards (chaining actions).
>
> I give up. I can't do this with S2. I guess this use case requires some external approach: servlet filter (as Dave pointed out), container managed security, Spring Security...
>
> Thank you all for your support.
>
> On Friday, July 12, 2013 16:09:54 Rahul Tokase wrote:
> > Hi
> > Here is the way you can achieve this.
> > You need to design the login action to have the URL parameter 'redirectto',
> > which will hold the redirect action. Upon login interception you will first
> > check that the login is done and then check for this parameter; if there is any
> > value then simply forward to that action, else if login is required
> > redirect to the login page.
> >
> > If the 'redirectto' URL parameter is blank and login succeeds then forward
> > to the home page.
> >
> >
> >
> > On Wed, Jul 10, 2013 at 5:57 PM,?
> >
> > ---------------------------------------------------------------------
http://mail-archives.us.apache.org/mod_mbox/struts-user/201307.mbox/%3C6487151.WjcHPuEeAN@caridad%3E
Food Products Buyers in China

China Food Products Buyers Directory provides a list of China food products importers, buyers and purchasers who want to import food products into China.

Qingdao Huanghouai Trade
We are a distribution company since 2004 and mainly have food products. We only have imported products and we do have over 400 products to supply all territories of China.

KO18 Global Limited
KO18 Global Limited is an importer of food products, mainly fresh fruits and fish.

Hainan Jiuding Agricultural Development Co.,Ltd
We are a professional import company located in Hainan island. Willing to buy healthy food and baby products all over the world.

Xinhui Foood
We import food from abroad, mainly from Asian countries. Our main product is eggs. Recently we are planning to import white shell eggs from Thailand, Vietnam etc. We are looking for white shell eggs!

Qingdao Good Prosper Imp. & Exp. Co., Ltd.
Qingdao Good Prosper Imp. & Exp. Co., Ltd. is one of the leading suppliers of all kinds of agricultural products and animal feed. Our products are exported to many countries. We also imported some of our products from f

Honghua Trading Company Limited
Importer & Exporter of consumer products ranging from Food & Beverage, Beauty Salon Supplies and Mobile Phone Accessories

Matin Bell Ltd.
Food importer and distributor. Similar products are developing too.

Yichun Dahaigui Life Science Co.ltd
We are specialized in manufacturing industrial products and food additives, for example vitamin E and phytosterol.

Wangdeng Commercial&Trading Co.,Ltd
We are an import/export company of food from mainland China. Our main products are organic food and beverages, fresh seafood, and textiles.

Shanghai Jin Han International Trade Co.Ltd
We are a global trading company located in Shanghai, China. We deal with imported food, baby products, etc. We hope to find vendors for long-term cooperation.
Aimtop Sourcing Service
We are a sourcing company in China and we import food and dairy products from overseas. We are looking forward to cooperation with customers from all over the world.

Linyi Huixiangyuan Trading Co., LTd
We are a company engaged in the import of agricultural products and food additives, including sesame, black pepper and other products.

Bio-win Inc.
Bio-Win is a company specializing in the business of food raw materials and additives. It is headquartered in Montreal, Canada, and we do business across Canada, America, China, Southeast Asia and Africa. Our products in

PinoyRiverSand Co., Ltd
Import company, main product is river sand sold in China.

JINING MINGZHEN INDUSTRY AND TRADE CO., LTD.
R&D, production, sales and technical services of electromechanical equipment; electric vehicles and their accessories (excluding low-speed electric vehicles), automobiles and their accessories, engineering equipment, dai

Xiamen Chungtsang Imp & Exp Co.,Ltd
We're a Xiamen based company dealing with food distribution. Our company has been committed to the local food market for 8 years.

AKLAN INTERNATIONAL LIMITED
aklan international limited general trading bahrain china e
https://china.tradeford.com/buyers/food-products
On Sun, 2018-04-01 at 13:18 -0400, Dennis Clarke wrote:
> '/usr/local/build/make-4.2.1_linux_4.15.12-genunix_i686.003/glob'
> gcc -DHAVE_CONFIG_H -I. -I.. -g -O2 -MT glob.o -MD -MP -MF
> .deps/glob.Tpo -c -o glob.o glob.c
> glob.c: In function 'glob':
> glob.c:581:23: warning: implicit declaration of function '__alloca'; did
> you mean 'alloca'? [-Wimplicit-function-declaration]
> newp = (char *) __alloca (dirlen + 1);
> ^~~~~~~~
> alloca
> glob.c:814:11: warning: implicit declaration of function '__stat'; did
> you mean '__xstat'? [-Wimplicit-function-declaration]
> : __stat (dirname, &st)) == 0
> ^~~~~~
> __xstat

The contents of the glob/ directory are actually taken directly from glibc (although at this point an extremely old version), not developed by GNU make. Examining the source code, I'd say that your system is defining __GNU_LIBRARY__ but is not creating all the expected aspects of having __GNU_LIBRARY__ defined; for example:

#ifndef __GNU_LIBRARY__
# define __stat stat
...
#endif

The __stat macro is only defined if __GNU_LIBRARY__ is not defined... because it's expected that if __GNU_LIBRARY__ is defined then we're compiling glob to be part of glibc.

I would use "make glob/glob.i" then examine the resulting "glob/glob.i" file to find out who is setting __GNU_LIBRARY__, and why.
https://lists.gnu.org/archive/html/bug-make/2018-04/msg00002.html
April 9, 2020 By Nolan Phillips & Kendall Strautman

Plugins are a powerful concept. In general, plugins are used to extend the core functionality of a base system. While many plugin systems are static, TinaCMS is powered by a dynamic plugin system. In this approach, plugins are added and removed programmatically. This dynamism allows developers to add and remove CMS features based on the context.

If you’ve worked with Tina, you may have already used a few plugins without realizing it. The most common plugin used in tinacms is the FormPlugin, which adds forms for editing content to the sidebar. Another plugin worth noting is the ContentCreatorPlugin. This plugin provides the foundation for creating new data source files. One of the more recent additions has been the ScreenPlugin, which is the topic of this blog. Screens allow you to render modal UI and handle various content editing needs. For example, one might use a Screen Plugin to register a form to edit 'global site data'.

The ScreenPlugin has three main pieces: a name, an icon, and a React Component. For example, with a GlobalFormPlugin (a type of screen plugin), the name and the icon are used to list the screen plugin in the global menu. When the user clicks on the menu item, it opens a screen in which the React Component is rendered. Think of a screen as an empty canvas: it provides the space to create an editing interface beyond the sidebar. There are two potential layouts for a screen plugin: fullscreen and popup. You can choose to utilize either depending on the purpose of the screen.

To really get a feel for the way screen plugins work, let’s dive into setting one up. Here's an example layout:

import { Quokka } from './cute-marsupials'

export default function Island({ smiles }) {
  return <Quokka>{smiles}</Quokka>
}

Here we have an Island component that renders a smiling quokka. Quokkas are ridiculously cute marsupials found on islands off of Australia. Yes, they're real.
Lately, a trend has developed to take jovial selfies with these critters. Let's make a quokka selfie screen plugin for the tourists. We'll set this up in three steps:
1. Import the useScreenPlugin hook from tinacms
2. Define the SelfiePlugin
3. Use the plugin

// 1. Import `useScreenPlugin`
import { useScreenPlugin } from 'tinacms'
import { Quokka } from './cute-marsupials'

// 2. Define the screen plugin
const SelfiePlugin = {
  name: 'Quokka Selfie',
  Icon: () => <span>🐨</span>,
  layout: 'popup',
  Component() {
    return <img src="/img/quokka-selfie.jpg" />
  },
}

export default function Island({ smiles }) {
  // 3. Use the plugin
  useScreenPlugin(SelfiePlugin)
  return <Quokka>{smiles}</Quokka>
}

For the icon, we inserted a little koala emoji (distant cousin of the quokka), but you could replace that with an svg or png. Let's see the selfie screen plugin in action. Using it will add a new item to the global menu. Clicking on the menu item will open a popup modal where our Component will render. Tada! It works. We made a plugin to brighten everyone's day ☺️.

Screen plugins are just React Components, so the screen plugin world is your oyster, so to speak. You could make a magic 8 ball 🎱 screen plugin to help the content team decide where to order lunch. It's all deadly. If nothing else, I hope this blog introduced you to quite possibly the happiest creature alive. We could all use a little quokka in our lives right now. Be well 🖖!
https://tinacms.org/blog/screen-plugins
- Getting started
- Getting and staying motivated
- Getting the books you need
- Learning the spoken language
- Learning the written language (when the time comes)
- Learning Japanese on a tight budget
- Finding Japanese language resources on the vast Internet
- Getting free Japanese word processor software
- Getting a really good computer dictionary - for free
- Using many different resources, to make learning Japanese fun
- Finding things you can do - easily - to speed up the process..

Why not just study on your own . but rather focus on what you do know. and unlearn things that were in error. you will end up with an A. just as the falsehood was beat into your head over a period of years. or talking for that matter. Don't worry about what you don't know. you fail to learn anything. Try to rekindle the love of learning that children have. You can learn new things. If you've already taken my word for it and believe it. Some of you still have that love of learning. In learning Japanese (and this goes for almost any goal you set). don't worry. I will point out the best ways to improve yourself in each area. Japanese may sound very foreign to you. it's very fun to look back at how far you've come in a week. if you study outside of class. as the saying goes. Unlearning in particular tends to require a lot of repetition. and that's great. something you want to do. listening to Japanese music helps you in the following areas:
- Getting rid of the "foreign" feel
- Learning new words (and remembering them well)
- Learning pronunciation

As this column progresses. I'll help you out as much as any teacher or professor. I speak from experience on this.
Japanese is Possible! Lesson 2

Some helpful tips

Your brain is a very powerful tool, even more than you realize. In learning Japanese (and this goes for almost any goal you set), the sooner you can unlearn that "it's hard", the sooner you'll be able to make rapid progress. Your confidence may falter at times, but if you keep at it, you'll prevail. "Rome wasn't built in a day", as the saying goes.

I strongly suggest that you "make friends" with the Japanese language. Don't treat it as a monster you wish to tame, but rather approach it like a friend and an ally - a fun hobby that you only approach when you want to. If you treat Japanese like a chore, it will be much harder to learn. At first it may sound completely foreign, but as you begin to listen to it and learn more about it, you will become more comfortable with it. I speak from experience on this.

Remember that even in a classroom environment, it's still up to you to learn. If you don't apply yourself in a class, you fail to learn anything; conversely, if you study outside of class, you will end up with an A. I'll help you out as much as any teacher or professor, but I can't force you to learn. Also, the people in this site's forum are always eager to help people with any questions they have. There are many resources available, each giving you practice in one or two particular areas.

First steps in learning Japanese

Getting the Right Mindset

Unless you watch a lot of subtitled anime, you probably aren't very familiar with Japanese, and it probably sounds foreign to you. When you hear it, your instinct is to ignore it as a "foreign" language. You must try to think of Japanese dialog as simply "words I don't know yet". If you're not used to learning things on your own, don't worry - it's very fun to look back at how far you've come in a week, month, or year!

Realize How Much Time You Have to Learn

You may think you have very little time, but you might be surprised how much time you can scrape together. Important parts of learning Japanese, such as word lists, can be done anytime, anywhere. Not much time is required - just enough to look at a word list a few times. You only need 10 seconds in a row to look at a list and study some words! Other things can be done at the same time as other activities: you can listen to Japanese music or anime while surfing the Web. (That part won't take too much discipline!) The idea is to do a little bit every day. If you set aside at least 5 or 10 minutes in the morning, you should be able to make impressive progress.

Early morning is the best time to study. Have you ever awakened to a horrible song on your alarm clock radio, and then tried to get it out of your head? It's almost impossible. That is because your brain, like wet cement, is VERY receptive to new information at that time. This works to your advantage when you are trying to memorize something. Studies have found that school kids do better in their 1st hour classes for the same reason.

Here's an interesting analogy I found on the Internet on managing your time (by James R. Beach):

A professor walks into the room carrying an empty 10-gallon water jug and dragging an obviously heavy bag. Without a word, he begins placing white rocks, just big enough to fit through the mouth of the jug, into the jug until they reach the very top. "Is it full?" he asks. The class nods. He then stuffs tiny pebbles into the jug, occasionally shaking it, and the pebbles find their way through the cracks between the rocks. "Full now?" he asks. The class nods in unison. He then shovels sand into the jug, and the tiny grains sift through the rocks and pebbles. "OK, now is it full?" The whole class nods. "Maybe not," he says, and he slowly pours water into the jug until a water glass is finally empty. "The lesson here," he says, "is that there is always more room in our lives than we think there is."

Here's the time-saving payoff: The ROCKS are the important things we have to accomplish regularly to be successful. They go into our "time jug" first, because they are most important. The PEBBLES represent those things we may not like to do, but we must do. They go in next. The SAND represents things that we should do, and may even like to do, but they're not as important. The WATER represents the few remaining things that make a difference. If you reverse the order - putting in the water, then the sand, then the pebbles - there will not be enough room for the rocks. So prioritize your activities and make sure the rocks go on your schedule first. Whether Japanese is a rock or the water, there's always space for even a little bit of it in your day, as long as you're ready to do it when the opportunity comes. Remember, "Slow and steady wins the race" - we've all heard the fable of "The Tortoise and the Hare".

Using anime

Whether you are an anime fan or not, anime can be a serious help for your study of Japanese, especially if you don't have any other resource for listening to Japanese dialogue. Anime DVDs are great because they almost always have the Japanese speech available. You can pause the tape or DVD and look up a word you don't know in a Japanese dictionary, then write the word on a list so you can learn it! If it was used in an anime, it's probably a good word to learn. On the other hand, writing down random words from a Japanese dictionary is a horribly inefficient way of building a vocabulary. (Don't laugh - many people have tried it!) Pausing and looking things up isn't a fast method, but it's a very useful one, so give it a try. If you don't have a DVD player, you might be able to rent subtitled anime on tape, and you can definitely buy it. You can get anime at Best Buy, Media Play, and other similar stores.

Next week: Japanese pronunciation. Other areas will be discussed as well, so don't miss it!

Copyright © 2001 Maktos.com. All Rights Reserved.
If you don't know where to start with anime buying, here are some tips. Rent some anime DVDs - chances are slim that you won't find anything that you enjoy at least a little bit. Also check small hobby shops. If you happen to live in the vicinity of a Yaohan (a Japanese mall-like place), you should definitely stop by and see what they have there! And if you don't have a store that sells anime in your vicinity, you can always turn to the Internet - there is still more available if you look for it. We don't have any links right now, but we may soon. I will focus on vocabulary building again in lesson four.

Japanese is Possible! Lesson 3

Information you will need:

- Pronunciation
- Using the Internet
- Plain vs. Polite
- Japanese Music

Vowel Sounds

The vowel sounds in Japanese are as follows:

A as in "father"
E as in "seven eleven"
I as in "Easter treat"
O as in "open, Pope"
U as in "fruity moogle"

You'll notice that the vowels are pronounced similarly to Spanish, Italian, and Latin (and several other European languages). Pronunciation of these vowels is very consistent, and each vowel sound is pronounced distinctly. For example, the word kaeru is pronounced "KAH eh roo"; coming from English, you might want to pronounce it "KAY roo" or "KAY ruh", but people will have no idea what you're saying. There are no silent vowels, although sometimes the Japanese choose not to voice a vowel. The vowels 'i' and 'u' are weak vowels, which means that many times they are not pronounced. The most important example is desu (the u is silent - pronounced DESS). However, don't just go around dropping u's and i's.

Consonant sounds are generally pronounced the same way as in English, but there are a few differences:

R - Pronounced like a combination of 'L' and 'D'. It's pretty close to how the R is pronounced in Spanish (it isn't "trilled", however). In Spanish, an R sounds a lot like a 'D'. Consider this: say "lu". Notice how you drag the tip of your tongue along the roof of your mouth. To say a Japanese R, just briefly touch the tip to that spot at the moment you say the consonant, and use a little more "punch" in your voice.

F - You can pronounce it like an F, but often it sounds more like an 'H'.

There is no accent in Japanese, meaning there is no emphasis on a particular part of a word. In English, we put stress on a certain part of a word to make it sound right, and this is marked by an apostrophe-like symbol in the dictionary. English and Spanish have accents; Japanese does not. Instead, Japanese has pitch inflections, and this is its substitute for accents: rather than putting stress on words, speakers raise the pitch of their voices. In Chinese, there are patterns that move between five different pitches to distinguish a word's meaning; in Japanese, there are only two pitches. There are patterns for when to raise the pitch of your voice, but the only real way to grasp them is by listening to Japanese speech and repeating it.

For practice in this area:

- Listen to Japanese music
- Watch subtitled (or Japanese language) anime

Listening to Japanese music is enjoyable, and it helps you out tremendously in many areas. Some songs I would recommend to anyone are the Xenogears Creid songs. Inspired by the famous Xenogears game for Playstation, these songs are eclectic and beautiful! The lyrics are easy to understand in many of the songs, and some of the songs don't even have lyrics - but they're still wonderful! If you don't know what's good, try downloading MP3's of different songs. You can download MP3s from many websites, and purchase import CDs from many other websites. When you find out what artists you like, support them by purchasing their CDs. For links to great Japanese and anime related MP3 sites, scroll to the end of this column.

Grammar Terms - part 1

You'll need to know a few basics about grammar to be able to make sentences. So, to be fair to those of you that slept through English class, I'll go over the basics. ^_^

Subject - The person or thing that performs the action of the sentence's verb.
Example: The man jumped through the frog.
"Man" would be the subject of the sentence, since he is the one who jumped.

Adjective - A word used to describe a person, place, or thing.
Example: The man jumped through the holographic frog.
"Holographic" is an adjective, since it DESCRIBES the frog. Since frog is a noun, any word describing the frog would be an adjective.

Adverb - A word used to modify a verb.
Example: The man quickly jumped through the frog.
"Quickly" is the adverb, since it describes how he jumped. "Jump" is the verb, so any word describing how he jumped would be an adverb.

Direct Object - The entity on which the verb is performed.
Example: The woman ate the apple.
Figuring out the direct object is straightforward - simply ask the question, "She ate WHAT?" The question would be answered, "the apple", so "apple" would be the direct object.

For practice in this area:

- Purchase a good Japanese grammar book
- Find websites that cover grammar
- Review an English grammar textbook

Plain vs. Polite form

Unlike English, Japanese has distinct levels of formality in speech and writing. In English, one speaks differently among friends than to one's boss, but that difference is mainly reflected in tone of voice, contractions, and use of slang words. In Japanese, however, there are actually different words and verb endings for this purpose. If you've watched anime, you may have noticed that royalty (princesses, kings) speak differently than most other characters. There are hundreds of examples, including "Ayeka" from Tenchi Muyo.

In most Japanese language courses, the polite form is taught first. The instructors reason that you can use the polite form anywhere (including with friends), and for other good reasons; the plain form is only acceptable with friends and close family members. However, since this website is somewhat focused around anime, we will begin by teaching the plain form. Here are a number of reasons, four main ones to be precise:

1. The plain form is by far more common in songs, anime, manga, books, and on television, and that is where most people will use their Japanese skills unless they go to Japan. Even if you make Japanese friends in America, they will speak to you with the plain form, and they will definitely not feel insulted if you do the same. I personally have had conversations with Japanese teenagers, and they have told me that I sound funny because I speak so politely (I learned Japanese starting with the polite form, and I consider myself a polite person anyway, so that's why I use it).
2. If you focus on the polite form, you won't hear it used much in the material you enjoy. Besides, to remember a word 4 months later you have to use it (or hear it used). It's been my experience that I learn "popular anime" words about 10 times faster than other, more obscure words. It's also a big source of motivation to hear an anime character say a phrase you're trying to learn. It makes it more "real" - it makes you realize you can actually understand some anime if you learn this word or phrase.

3. Times have changed since feudal Japan, where speaking rudely to a samurai would cost you your life. Japanese people cut Americans some slack when it comes to speaking Japanese. When a Japanese-speaking American is encountered, the last thing on their mind is "What form is he using?" They are often glad to hear that you are learning Japanese, and you will often be complimented if your vocabulary exceeds 10 words.

4. In the present tense, the plain form of verbs is the form that is printed in dictionaries. This means that you can just pluck a word out of the dictionary and throw it into a sentence with no conjugation.

5. Once you've learned a lot of Japanese and are making progress, it's no problem to learn the polite form later.

The Internet

The independent Japanese student's best resource! There are many webpages on the Internet devoted to the Japanese language; many people are offering their assistance, and you can find help on just about every topic. Some examples of things I have seen on webpages:

- Popular words
- Verb endings
- Pronunciation
- Java-based games and software
- Info on Kansai Ben dialect (and others)
- Making your Windows PC "Japanese friendly"
- Software to help you learn Katakana/Hiragana
- Software to help you learn Kanji
- Web-based Japanese-English dictionaries

If you want to find resources such as these, go to any search engine. Type a few words like "Japanese study learn" followed by specific things you're looking for. For example, if you're looking for software to drill you on the hiragana alphabet, try: Japanese study learn hiragana program. Also remember to try more than one search engine - they all use different databases. Try one of the "comprehensive" search engines that search all of them, such as The Mother of All Search Engines. Search engines are a good start, but there is no substitute for links pages. Once you find a good site with a links page, follow the links and see what's out there. You'll find some pages with even better links pages - follow those as well. The Internet is the #1 source of information for someone learning Japanese on their own, and the URLs posted so far will prove VERY helpful to any student.

In the meantime, here's something you can start playing with - a free Japanese word processor. It's called "NJStar Japanese Word Processor 4.2", and you can download it from NJStar's Website. It's very small (about 4 MB) and is very advanced yet easy to use. Once you download it, the next step is to download the latest version of the EDICT Japanese-English dictionary (optional). You can find it at various FTP sites, but it's rather hard to find; you can download the latest dictionary files right here. To install it, unzip it to "C:\Program Files\NJStar Japanese WP" or wherever you installed NJStar. You also need to go into that directory and click on two programs, "E2jdic" and "J2edic". They both take about a minute to run, and you should be all set. You will now have MANY more words in your E-J and J-E dictionaries than you had two minutes earlier. Another good word processor (the one I use) is JWPCE, available at this link: JWPCE.

Internet Links!

I have one link right now; I'll add more later.

To the JIP forum participants - "Thank you!"

I would like to thank those of you that have contributed to the "Japanese is POSSIBLE" forum. Teaching a broad subject such as Japanese is a major undertaking, and any help is greatly appreciated. To those who post in the forums, regardless of their experience: please understand that the posts ARE appreciated. Also, the posts help me to design the course.

I would also like to say something to the many students that visit the JIP forum in search of information. Since there are JIP readers far beyond the beginner stage, there needs to be information for them as well. Since there is no central organization or lesson plan among the various posters, there are bound to be some things posted that are "too advanced" for some students. Although much of the information is helpful, some of it may confuse and overwhelm beginners. When you see a post that you don't understand or feel you're not ready for, don't worry. There are many parts to learning a language, and this column can only address them one at a time. The subjects will all be covered in detail in future "Japanese is POSSIBLE" columns, and they will be explained so that everyone can understand them.

Next week - Your first step into real Japanese grammar. We'll have sentence structure, particles, and a whole lot more. You don't want to miss it.

Copyright © 2001 Maktos.com. All Rights Reserved.
Japanese is Possible! Lesson 4

Time to start learning:

- Introduction to Japanese Grammar
- Learning new vocabulary!

Introduction to Japanese Grammar

Japanese sentences are very different from familiar languages like English and Spanish (Spanish is very similar to English grammar-wise). If you've seen Star Wars (and who hasn't), think back to the way Yoda spoke: "Your father is." "In you must go." "An abode of evil it is." Japanese would sound a lot like that if you translated it literally. Don't worry, though - Japanese grammar isn't as difficult as most people think. In fact, in many ways it is more logical than most other languages, and it has few exceptions.

Japanese uses short words called "particles" to mark a word's purpose in a sentence. Japanese speakers also tend to have a bit of an aversion to redundancy, and if they see the opportunity to leave something out of a sentence (say, a subject or a direct object), they will most likely take it. As a result, Japanese is a bit vague, or can seem that way. In order to avoid redundancy as much as possible, Japanese speakers will usually avoid pronouns like the plague, and most often refer to other people by name, even when talking directly to that person. Incidentally, Japanese pronouns actually have roots in meanings that are unrelated to "I", "you" or "he": two of their words for 'I' (there are many) literally mean "personal" and "slave."

I will start by giving a literal translation of the example sentences. A fully literal translation may look a little strange, so these will for the most part not be 100% literal; in any case, a completely natural translation will always be provided, and I will incorporate already taught concepts into translations.

Japanese Sentence Structure

Here's a typical Japanese sentence:

Kore wa mizu desu.
[This (topic marker) water is.]
This is water.

In Japanese, particles come after many of the words in a sentence; after a word, you have a particle telling what that word "was" to the sentence. More on particles later - I know I'm getting a little ahead of myself, but I'll take you by the hand and guide you as you look at this example:

Person A: Mise e iku ka.
Person B: Hai.

mise - store
e - particle meaning "toward" or "to"
iku - to go
ka - particle signifying that the sentence is a question
hai - yes

Person A: Are you going to the store?
Person B: Yes, I am going there.

Notice that there is no mention of 'you', 'I' or 'there', because they aren't really needed. "Mise e iku" literally translates, "to the store [I] go."

Particles

Note: these are by far not the only meanings for these particles; they are only the most common usages. Other uses for these particles will be discussed at a later time.

- wa marks the topic of a sentence. It most closely resembles the phrase "as for". Very often this topic is the subject of the sentence, but not always. Example: Nakamura san wa sensei desu. (Mr. Nakamura is a teacher.)
- ga marks the subject of a sentence and puts emphasis on it. It is very confusing at first to distinguish between the uses of wa and ga, since both can label a subject; they are very different sometimes, and I will strengthen this distinction as we go along.
- no signifies that the item before it possesses the item after it. This meaning can be broadened to the sense of attaching attributes to nouns.
- o marks the direct object of a sentence. It tells what or who receives the action of the verb.
- e shows the direction or destination of a motion.
- ka shows that a sentence is a question. In English, questions can often be very different from their corresponding statements: "Does he go to the store?" has a rather different word order from "He goes to the store." In Japanese, this is usually not the case, and a statement can be changed to a question simply by tacking a ka onto the end.

Let's look at another example:

Matt wa sensei desu.
[Matt (as for) teacher is.]
Matt is a teacher. / As for Matt, he is a teacher.

After the word "Matt", the "wa" tells us that Matt is the topic of the sentence - the sentence will be about Matt. Let's modify what kind of teacher he is:

Matt wa anata no sensei desu.
[Matt (as for) you('s) teacher is.]
Matt is your teacher.

We can turn it into a question by adding ka:

Matt wa anata no sensei desu ka.
[Matt is your teacher?]
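Because particles mark each word's role explicitly, the word order of these simple sentences is very regular: topic + wa, direct object + o, verb last, ka at the very end. As a toy illustration of that pattern (my own sketch, not part of the original lesson), here is a little romaji sentence assembler:

```python
# Toy assembler for the "topic wa / object o / verb / ka" pattern above.
# Illustration only: real Japanese needs conjugation and context.

def sentence(topic=None, obj=None, verb="desu", question=False):
    """Build '[topic] wa [obj] o <verb>' with an optional trailing ka."""
    parts = []
    if topic:
        parts += [topic, "wa"]   # wa marks the topic
    if obj:
        parts += [obj, "o"]      # o marks the direct object
    parts.append(verb)           # the predicate verb always comes last
    if question:
        parts.append("ka")       # ka turns the statement into a question
    return " ".join(parts) + "."

print(sentence(topic="kore", verb="mizu desu"))
# kore wa mizu desu.
print(sentence(topic="Tanaka san", obj="mizu", verb="nomu"))
# Tanaka san wa mizu o nomu.
print(sentence(obj="kore", verb="kau", question=True))
# kore o kau ka.
```

This only strings romaji together, so treat it purely as a mnemonic for word order, not as a grammar checker.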
Is Matt your teacher?

Desu is the most often used word for "to be":

English - Japanese
------------------
is/am/are - desu
was/were - deshita
will be - desu

Japanese verbs are not conjugated for first, second or third person subjects, or for plural subjects - there is only one "person" as far as Japanese verbs are concerned. Also, Japanese verbs do not distinguish between present and future, and as a result the present tense is often called the "non-past" form. Luckily for us learners wading in context, they do have a past tense:

Present tense - Desu
Past tense - Deshita (pronounced DESH ta)

Note: Desu is not actually a plain verb, but I will teach its forms rather than its plain form, because I think that even in plain speech one should use desu. This will be the one exception to my policy of using plain verbs. You will still encounter the plain version da in various reading and visual material, but I do not recommend using it. Desu doesn't conjugate like most other verbs, but luckily, there are very few irregular verbs in Japanese - somewhere on the order of 3 to 20, depending on how you look at it. The other hundreds of verbs follow a strict, logical pattern that is easy to follow!

Learning new vocabulary!

You will begin learning many Japanese words. Just like in English, some words are more common than others; we will start with the more common words and progress toward the less frequently used ones.

Interrogatives:
doko - where
nani - what
dare - who
ikutsu - how many

Nouns:
ai - love
heya - room
hon - book
hito - person
inu - dog
kami - god
neko - cat

Verbs:
aruku - to walk
hanasu - to speak
hashiru - to run
korosu - to kill
miru - to see, to watch
taberu - to eat

Pronouns:
watashi - I
anata - you
kare - he
kanojo - she
watashitachi - we
anatatachi/anatagata - you (plural)
karetachi/karera - them (when referring to a group that includes males)
kanojotachi/kanojora - them (for an all-female group)

Note: As I stated earlier, Japanese pronouns are far less common in polite speech than their English counterparts. Also note that these pronouns can be made plural by adding -tachi, but where other plural forms exist (above), those are more common.

Adjectives:
akai - red
aoi - blue
kuroi - black
shiroi - white
osoi - slow
hayai - fast

Effective methods to learn new words

Making Lists
You take a piece of paper and write from 5-25 words on a page, with the Japanese word on one side and the English meaning on the other. Look at your list as often as possible. Picture the words in your head. The more you look at it, the sooner you will learn the words.

Flashcards
Take some index cards (cut in half if you like) and write the Japanese word on one side, and its English meaning on the other. Look at the flashcards when you get time. You can practice in two different ways:
- Look at the Japanese word and try to guess the English meaning
- Look at the English meaning and try to guess the Japanese word
A combination of the two would be best.

Lists and flashcards are an effective method of learning words for most people. Some people learn visually; others have to hear something in order to learn it. If you're a visual learner, you have to see something to learn it. Learning things by sound isn't your strong suit, so there's no use wasting time with oral vocabulary drills - LOOK at your flashcards and take advantage of your visual strengths. If you have to hear something before you learn it, read the flashcards aloud (or have someone else read them, if you have a study partner). You can adapt a technique (like flashcards, for example) to fit your personal learning style.

Tips to get the most out of your study time! (These apply to flashcards as well as word lists; for the sake of simplicity, I am going to use word lists as an example.)

Don't let lists go stale
Make sure you have a new list every few days. When you have the same list for more than a week, you start getting sick of it. You won't want to look at it, and so it does you no good. Try to make a new one every day or every other day. I take a brand new list and by the next day I already know most of the words.

Don't make the lists too big
Everyone is different, but I'm sure many people get overwhelmed if they perceive too much work ahead of them. If you have a list with 25 words, you might not look at it if you only have a minute, thinking "I need at least 10 minutes to study this properly". That's a waste of the minute you had to study. It would be better to break that list down into 5 mini-lists with 5 words each - on an index card, perhaps. After a lot of experimentation, I discovered that a list of around 15-20 words works best.

Don't write down too many meanings at once
Many words have more than one English meaning - pick ONE! Try to pick one or two English meanings per word. The more meanings you have written down on your list, the more memorizing work you have, and you don't need that right now.

Example:
BAD: sugoi - awful, incredible, cool, amazing, unbelievable
GOOD: sugoi - amazing, cool

If there are other meanings associated with the word, add the word to your list again later (with one of the other meanings). If there are many synonyms, get rid of all but one.

Keep old lists for review
They say you have to forget something 7 times before it enters your long-term memory. That seems to be true in my experience. I keep my old lists in a binder, and most words make several "word list" appearances before I know them like the back of my hand. Just memorize a word until you can get it right on a "quiz"; learning it for keeps comes later, when you review your old lists.

Reviewing is important
Don't be concerned if you can't remember half of the words after a week or two. You may be thinking, "but I knew them a week ago!" That is because they were only in your short-term memory. Keep learning the word, and your brain says, "OK, OK - I'll set aside some storage space in long-term memory if the word is so important to you!" When you learn the words a second time, it will be easier, and the third time will be even easier yet. Eventually, the words become cemented in your brain, and will remain that way for months.

You won't find yourself using the words right away
The lists are used to make you familiar with a given word. The word becomes an acquaintance. You won't become friends with the word (where you use it all the time and remember it perfectly) until you use it in sentences and/or hear it used in songs, anime, and video games. That's when you start to make the words permanent residents of your brain.

Slow and steady is the best way to go
If you have a choice of studying 10 minutes a day or 2 hours on the weekend, choose the 10 minutes a day. Your brain is always working (even when you're sleeping), so it's best to make use of your brain's power. I heard that your brain files things away while you sleep, so it's a good idea to look at your word list right before bed. I make sure I look over the list for about 3 or 4 minutes before bed, and I've noticed results.

Study in the morning
Your brain is very receptive to information first thing in the morning. Studies have proven that kids do better in their 1st hour classes. If you wake up and look over your list, you've just set yourself off on the right foot, and the rest of the day you can't be scared of Japanese, because your list (what Japanese is to you) is already familiar to you.

Study often
You don't have to spend more than a few minutes, but look at your list around 10 times a day. If you're still in high school, there are long periods of "downtime", so you should have no trouble finding a moment to glance at a list - either between classes, or even during class if the teacher gives you some free time or there's a lull in the action. In college, you should have plenty of time. When I'm going on a trip or I have to wait in line somewhere, I grab around 10 lists and look them over. You should be able to make a big dent in a 20 word list in a 24-hour period.

Put old lists somewhere AWAY FROM your current list!
You don't want to feel like you have to study all 10 or 20 pieces of paper! That will scare you away from your current list (which you DO need to use).

NEVER try to learn two words at the same time that sound or look alike!
That is, if they look or sound alike to YOU. It's way too challenging to learn 2 similar words at the same time; you will only be confused about the two words. You are better off picking one of the words for now, and totally forgetting about the other, at least for a couple of weeks. Then you should go back and put both of them on the same list at a later time, to make sure that you know the difference.

Below each word, write the sentence you heard it in
This isn't required, but it will help you get a feel for what Japanese sentences sound like, and it allows you to "learn the word" fewer times. You need to remember where you heard the word as you studied it. The best thing is to have a context sentence that will connect the word to a spot in your brain. That makes it more real to you, and you will also learn the word MUCH more easily.

Get your words from the right sources
Good sources for words include: anime, songs, video games, video game manuals, manga, and internet sites and lesson books that use the words in example sentences (provided you read the sentences). It's not all that wise to just grab a word out of the dictionary, because you'll never be able to connect it to anything. There are plenty of words to learn!

Study with siblings or friends if at all possible
When you can make sentences and practice with others, it makes the language more real to you. When I first started, I practiced a lot with my younger sister and brother, and that really helped me learn the words I was using at the time. So if you have the chance, take it. A few months of this and you won't be afraid of Japanese at all.

Next Week
- More popular words
- Japanese Grammar
- Study tips

Copyright © 2001 Maktos.com. All Rights Reserved.
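The list-and-quiz routine described above is simple enough to turn into a tiny program. The sketch below is my own illustration (not part of the original column): it grades one pass through a word list in either direction and returns only the words you missed, so your next, smaller review list contains just those. The sample vocabulary comes from the lesson's own lists.

```python
# Minimal word-list drill in the spirit of the tips above: keep lists
# short, quiz in both directions, and re-study only what you missed.

WORDS = {
    "inu": "dog", "neko": "cat", "hon": "book",
    "akai": "red", "shiroi": "white", "hayai": "fast",
}

def grade(words, answers, reverse=False):
    """Return the pairs that were answered wrong and need review.

    words:   {japanese: english} - the current list
    answers: {prompt: guess} from one pass through the list
    reverse: quiz English -> Japanese instead of Japanese -> English
    """
    misses = {}
    for ja, en in words.items():
        prompt, correct = (en, ja) if reverse else (ja, en)
        guess = answers.get(prompt, "").strip().lower()
        if guess != correct:
            misses[ja] = en   # keep it on the next, smaller list
    return misses

# One Japanese -> English pass with two mistakes:
answers = {"inu": "dog", "neko": "cat", "hon": "dog",
           "akai": "red", "shiroi": "", "hayai": "fast"}
print(grade(WORDS, answers))   # {'hon': 'book', 'shiroi': 'white'}
```

Wrapping `grade` in a loop that re-drills only the returned misses mirrors the "keep old lists for review" advice: each round, the list shrinks to the words that haven't stuck yet.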
any vocabulary that you don't recognize and that has not been explained in a previous lesson will be at the bottom of the lesson): Tanaka san wa mizu o nomu. Let's look at an example (from now on. with the exception of particles that may follow it.] It's raining. You can see that I put "you" in a number of the translations.] Tanaka drinks water. [Rain sj precipitate.Japanese is Possible! Lesson 5 Here come the verbs q q q q Verb use Past tense Useful Words Study Tips Using verbs In the present tense. they require no conjugation. Now that you know how to place a verb into a sentence. Kore o kau ka. Note: I have used the arbitrary abbreviations tpc to show that wa marks a topic and oj to show that o marks an object. will always be at the end of a sentence. there are three types of Japanese verbs: ichidan verbs. Why? Because in plain speech. let's take it up a notch with: Conjugation Conjugation wise. the listener will assume that you are referring to him or her. change it to -ita tataku --> tataita -gu changes to -ida isogu --> isoida and finally. Godan verbs are also known as "u" verbs or consonant verbs. Verbs with -su change it to -shita hanasu --> hanashita desu --> deshita . The title godan. I would say that there are more ichidan verbs ending in -eru than godan verbs. meaning "five step" will be explained later. it's not an ichidan verb. Let's start with ichidan verbs first. ichidans are relatively easy to conjugate and all you have to do to change a plain ichidan verb into the past tense. remove the final syllable (the ending) and replace it with -tta: matsu --> matta hashiru --> hashitta toru --> totta kau --> katta Verbs with -mu. -nu or bu conjugate by removing the ending and adding -nda: yomu --> yonda (there aren't very many -mu verbs) shinu --> shinda (this is the only -nu verb) yobu --> yonda (this is the same as for yomu. Ichidan means "one step" and verbs are put into this category because they are conjugated rather easily. 
Ichidan verbs All ichidan end with either -eru or -iru. all you have to do is take the -ru off the end and replace it with -ta. so I have them herded into some groups that conjugate similarly. and more godan verbs ending in -iru than ichidan. Other teaching methods refer to them as ru verbs or vowel verbs. and irregular verbs. -tsu or -ru .verbs. This means that you have to learn whether any particular verb with -iru or -eru at the end is ichidan or godan. taberu --> tabeta oshieru--> oshieta iru --> ita Godan verbs Godan verbs are not so easy. -NU. gotta look at context for these) tobu --> tonda For verbs with -ku. So if you see a verb with any ending other than this. -TSU. -RU -MU. But there are a lot of both in both groups. -U. since they are the simplest. -BU -KU -GU -SU You can see that godan verbs may also end in -ru. To conjugate verbs that end in -u preceded by a vowel. Like I said. ] [She] pushed the car. but never when referring to yourself. [Japanese oj spoke.] [I] spoke Japanese.bad .T. You should use it any time you refer to another person. Uta o utatta.when Michio .that over there ame . Nihongo o hanashita." but it's a little different. Michio san ga tabeta." or "Mrs.] Michio ate. uta . [Song oj sang.song Adjectives: atsui .this sore .a surname Pronouns: kore .this word is added to the end of a person's name to show simple respect for that person.good samui . It can be used with first names as well as surnames.] [I] sang a song. boiled rice kuruma .hot ii . Useful words to add to your list Miscellaneous words: san .a meal.sky terebi . itsu . [Michio sj ate. Simply put it at the end of the sentence.a female given name Tanaka . and should not be forgotten./Michio has eaten. Kuruma o oshita. 
Many people equate it with "Mr.There are four verbs that conjugate irregularly in the past tense: suru --> shita iku --> itta (this is its only irregular conjugation) kuru --> kita da --> datta You can use the past tense just as you would use the present tense.water nihongo . [Car oj pushed.the Japanese language okane .car mizu .rain gohan .cold warui .that are .money sora .V. Verbs: erabu - to choose furu - to precipitate, to fall (for rain, snow, etc.) hashiru - to run (godan) iku - to go iru - to exist (for animate objects: people, large animals, etc.) (ichi) isogu - to hurry kau - to buy kiku - to listen kuru - to come matsu - to wait motsu - to have nomu - to drink oshieru - to teach (ichidan) osu - to push shinu - to die suru - to do tataku - to hit tobu - to jump, to fly toru - to take utau - to sing yobu - to call yomu - to read Study Tips To get ready for your study of Japanese, I suggest getting a good book. You can find some on my book recommendation page here: Book Recommendations. A review of study tips: q q q q q q q q q q Study with siblings/friends Talk to Japanese people in various chat rooms, including Don't worry about what you don't know Practice often Review lists/flashcards often Study often (but not as often as you review lists) Use words in sentences Listen in Anime/songs/video games for words you just learned Learn the lyrics to songs you enjoy Pull out cool phrases from Anime, and look them up in a dictionary Next Time q q q Some more use with verbs How to use a few particles And, like always, more useful words Japanese is Possible! Lesson 6 Particles Galore q q q q Particles "Koto" New Words ko-so-a-do words Particles Japanese uses several particles to give most of the words in a sentence a purpose. Usually, many of the words in a sentence will be followed by a particle. Most are one syllable, a few are two syllables, and a precious few are more than that. 
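As an aside before diving into particles: the past-tense rules from Lesson 5 are mechanical enough to express as a tiny program. The sketch below is my own illustration, not part of the original lessons. It works on romanized verbs and assumes you already know whether a verb is ichidan or godan, since -iru/-eru endings are ambiguous between the two classes.

```python
# Sketch of Lesson 5's past-tense rules (romanized input; illustration only).

IRREGULAR_PAST = {"suru": "shita", "iku": "itta", "kuru": "kita", "da": "datta"}

# Godan endings mapped to their past-tense replacements.
GODAN_PAST = {
    "u": "tta", "tsu": "tta", "ru": "tta",   # -u / -tsu / -ru  -> -tta
    "mu": "nda", "nu": "nda", "bu": "nda",   # -mu / -nu / -bu  -> -nda
    "ku": "ita",                             # -ku              -> -ita
    "gu": "ida",                             # -gu              -> -ida
    "su": "shita",                           # -su              -> -shita
}

def past_tense(verb: str, kind: str) -> str:
    """Return the plain past tense; kind is 'ichidan' or 'godan'."""
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    if kind == "ichidan":
        return verb[:-2] + "ta"              # drop -ru, add -ta
    # Longer endings must be checked before shorter ones (-tsu before -su,
    # -ru before the bare -u), or hashiru would wrongly match -u.
    for ending in ("tsu", "mu", "nu", "bu", "ku", "gu", "su", "ru", "u"):
        if verb.endswith(ending):
            return verb[: -len(ending)] + GODAN_PAST[ending]
    raise ValueError("not a recognizable verb: " + verb)

print(past_tense("taberu", "ichidan"))  # tabeta
print(past_tense("matsu", "godan"))     # matta
print(past_tense("iku", "godan"))       # itta
```

Running it against the lesson's own examples (hashiru, yomu, isogu, hanasu, and so on) reproduces the tables above, which is a nice way to convince yourself the five godan groups really are the whole story.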
The particle WA
This particle tells that the preceding word is the topic of the sentence. A good way to translate it is "as for."

kono mise no ryouri wa oishii desu.
[This store's food as for delicious is.]
The food here is delicious.

kore wa boku no mono desu.
[this (as for) I ('s) thing is.]
This is mine.

Hiroshi wa tsuyoi desu.
[Hiroshi (as for) strong is.]
Hiroshi is strong.

Difference between WA and GA
People often confuse wa and ga, since usually ga marks the subject of a sentence, but wa often does as well. One way to think about ga is that it emphasizes the subject, as in "this and not something else did..." Wa often emphasizes the action or verb of the sentence. Take the following examples:

Dare ga mise e itta ka. (Emphasis on dare)
Jon san ga itta. (Emphasis on Jon)
[Who sj store to went? John sj went.]
Who went to the store? John did.

Jon san wa doko e iku ka. (Emphasis on where Jon went)
Jon san wa mise e iku.
[John as for where to go? John as for store to go.]
Where will John go? John will go to the store.

When a question word is the subject of a sentence, it must be followed with ga, not wa. Also, the subject of the response to the question must have ga after it as well:

kore ga hayai desu.
[this sj fast is.]
(You wanted to know about this? It's fast.) This is fast. (as opposed to that or the other thing)

kore wa hayai desu.
[this (as for) fast is.]
This is fast. (this may or may not indicate emphasis)

The particle NO
No is often best translated as 'S (the 's in Bob's). It is known as the possessive particle. You could also think of it as the preposition "of" in English, de in Spanish or di in Italian.

kare wa boku no tomodachi desu.
[he as for I 's friend is. / He is friend of I.]
He is my friend.

are wa haha no kami desu.
[that over there as for my mother 's hair is. / That over there is hair of my mother.]
That is my mother's hair.

Also, you can express an attribute this way by using no after a common noun (such as boy, tree, book, etc.):

kono shounen no mise wa chikai desu.
[this boy 's store as for near is.]
This boy's store is nearby.

sono konpyuutaa no mise wa tooi desu.
[that computer attribute store as for far is.]
That computer store is far away.

The examples above illustrate an important point: you can see that a boy probably does not own the store. Often, it caters to young boys; boy is just a characteristic of the store. If you're interested in all the nuances, try searching Deja.com for newsgroup posts from about 1 or 2 years ago. There was a big discussion on sci.lang.japan about this very topic, and I'm sure there are several websites that have in-depth info. I would also strongly recommend Making Sense of Japanese by Jay Rubin, listed on the book recommendation page.

The particle DE
De often follows the location where an action takes place. It could translate to at, in, on, and probably another bunch of words in English, but the important thing to remember is that it's an action that happens, not a state, as you'll see below.

Nihon de sake o nonda.
[Japan loc sake oj drank.]
I drank sake in Japan.

Densha de bangohan o tabeta.
[Train loc supper oj ate.]
I ate supper on the train.

but:

Nihon de sumu. <----Bad!
[Japan loc reside.]
I live in Japan.

Living isn't an active sort of action; it's just a state of being that goes on for some span of time. Be wary of this.

A useful word - "koto"
Koto literally means "thing" in an abstract sense, as in "what kind of things do you do at those meetings?" It's not the kind of thing you can touch, hold or spit on. In many cases, you can change a verb into a noun phrase by adding koto to it. We'll just look at one use of it in this lesson.

Take a typical short sentence:

Sushi o taberu.
Eat sushi.

If we add "koto", we get:

Sushi o taberu koto
The act of eating sushi

This allows us to use the phrase as the subject of a larger sentence. If we place a koto noun phrase before ga dekiru, we get a nifty way of saying "can do":

Watashi wa sushi o taberu koto ga dekiru.
[As for me, sushi eating is doable.]
I can eat sushi.

Sushi o taberu koto ga dekiru ka.
Can you/he/... eat sushi?

Now, if you're really observant, you might be saying, "hey! that first sentence has two subjects," or "why would 'sushi eating' be performing the action in the second sentence?" It looks ok, right? But it isn't. Here's your answer: dekiru literally is closer to "is doable" than to "can do." So the wa in the first sentence doesn't mark the subject at all; it just shows which topic we're discussing ("I" in this case).

Useful words to add to your list!

Adjectives:
akai - red
chikai - near
mijikai - short (hair, etc.)
nagai - long
oishii - tasty, delicious
takai - expensive
tooi - far
tsumetai - cold
yasui - inexpensive

Nouns:
bangohan - supper, dinner
boushi - hat
densha - train
kami - hair (the same as the word for god)
konpyuutaa - computer
koto - thing (abstract)
kuchi - mouth
kumo - cloud
mise - store
mono - thing (concrete)
okaasan - mother
onna - woman
otoko - man
otousan - father
shoujo - young girl
shounen - young boy
tabemono - food
tomodachi - friend
ude - arm

Verbs:
dekiru - to be doable
sumu - to reside (don't try using this word yet)

Adverbs:
itsumo - always
yukkuri - slowly, leisurely

Interrogatives:
itsu - when

ko-so-a-do words
In Japanese, things are often quite organized. Japanese has a few sets of words with the same or similar endings, and these four syllables switched in at the beginning: ko, so, a, and do. One example of this is the ko-so-a-do concept:

kore - this
sore - that
are - that over there
dore - which?

Note this pattern: you can see from that set that ko- denotes something within the grasp of the speaker, so- is for things a small distance from the speaker, a- is for something far from the speaker, and do- makes interrogative words.

Just as a reminder, remember that for now you should pronounce all the vowels in Japanese. So you would pronounce 'kore' as KO RAY, but of course with a Japanese R, not an English one. There is no such thing as a silent 'E' in Japanese.

Back to the topic. These four words are known as demonstrative pronouns, and must precede either a particle or desu:

kore wa watashi no mono desu.
[this as for I 's thing is.]
This is mine.

are wa kuruma desu.
[that as for car is.]
That is a car.

dore ga yasui ka. (notice that you must use ga here)
Which is inexpensive?

Another useful set of ko-so-a-do words begins with the same syllables and ends with no. Nouns can follow this set, which makes them rather useful:

kono - this
sono - that
ano - that over there
dono - which

Ano tabemono ga takai.
That food is expensive.

Dono hon ga ii ka.
Which book is good?

As a side note: technically, dono and dore are for asking questions for which there are at least three possible responses, or in other words, at least three items. There is a special way of asking when there are only two choices, and though you'll probably get the point across using dono and dore, try to avoid them in this case.

And now for some advice: get a feel for Japanese sentences! I would recommend writing several of these sentences on index cards (or on regular paper), carrying them around with you, and looking at them often. This will help you learn the particles, and how to make sentences. Since the grammar is so "different," you need to expose yourself to it a lot, so you can become more comfortable with it. After you use Japanese sentences for a while, you will start to get an idea of how they should sound. Just reading Japanese sentences helps give you a feel for how they work, and helps you to make your own sentences.

What is the best way to practice in this area?
- Grammar books - If you want to know more about sentence structure and how to create your own sentences, I highly suggest checking out one of the textbooks on the book recommendation page. The books usually have several example sentences, as well as a fair bit of vocabulary for you to learn.
- Anime - You will encounter countless real-world sentences which help you immensely. Many people find anime to be very interesting, so you will probably have an easier time remembering the sentences. The only obstacle here is trying to pull words and sentences out of the sometimes speedy speech. The same goes for manga, if you can get your hands on some that's transliterated into Roman letters (as opposed to Japanese writing). If not, don't worry; I will address Japanese writing in an upcoming lesson.

Next Time
- A step back for a moment

You won't want to miss it.

Japanese is Possible! Lesson 7

Wait a minute
- Basic expressions
- Nationality
- Two more particles: yo and ne
- Kara
- Some more useful words
- How are you doing so far?

Basic expressions
All of this time, you've been going along learning Japanese vocabulary and grammar, but there's a chance that you don't even know simple Japanese greetings yet (through no fault of your own). Learn those expressions well as soon as you can.
So here they are; learn them well:

ohayou (gozaimasu) - good morning
konnichi wa - hello
konban wa - good evening (said when meeting someone)
oyasumi (nasai) - good night (said when departing)
hajimemashite - I'm pleased to meet you, how do you do?
sayounara - good bye
dewa mata - see you later
jaa mata - later (more informal than dewa mata)
(doumo) arigatou (gozaimasu) - thank you (add the words in () to increase politeness)
dou itashimashite - you're welcome, don't mention it
(o)genki (desu ka)? - how are you? (lit. are you well?)

Not the shortest list in the world, but an important one.

Nationality
Another important topic you should know before we go any further is the way to express nationality. In English, we use suffixes like -ish, -ese, -an, and sometimes it's completely irregular (Holland <-> Dutch?). In Japanese, you simply add the suffix -jin to the name of a country:

Amerika + jin = Amerikajin (American person)
Nihon + jin = Nihonjin (Japanese person)

These words are always nouns (in English they're sometimes used as adjectives), and they only apply to people (not cars, T.V.s, etc.).

Supeinjin desu ka. (Are you Spanish?)
Iie, Itariajin desu. (No, I'm Italian.)

More on Particles

The particle NE
The particle ne is a sentence particle; that means that it's used at the end of a sentence in the manner that ka is. It means "eh?" or "right?" and is used as a way of looking for agreement, sometimes rhetorically. Examples:

Ano tatemono wa takai desu ne.
[That building as for tall is huh?]
That building is tall, isn't it?

Nihongo no hon o yonda ne.
[Japanese language attribute book oj read right?]
You read the Japanese book, didn't you?

The particle YO
The particle yo is also a sentence particle. It is similar to the English expression "you know," and it is used to assert (usually strongly) some information that the speaker believes the listener does not already know, perhaps to explain something that the listener is questioning. Example:

Biifu o tsukuru ka.
Kyou sakana o katta yo.
[Beef oj make? Today fish oj bought you know.]
You're going to make beef? I bought fish today, you know.

Kara
Kara is a very important particle that literally means "from," but in Japanese it can idiomatically mean "because." To use it this way, just put it after a verb or adjective expressing the reason, and express the consequence afterward. Kara is called a clause particle because it follows a chunk of words that would otherwise be a complete sentence. Note that the subject of the clause must be followed with ga, not wa. Examples:

Kono heya ga hiroi kara, ii desu ne.
[This room sj wide from, good is right?]
Since this room is large, it's nice, isn't it?

Koko e hashitta kara, tsukareta.
[Here to ran from, tired.]
I'm tired because I ran here.

More Useful Words
These words should be added to everyone's list if you don't know them. They are the most popular words in Anime and video games - they are well worth learning!

Pronouns:
kisama - you (what you would say to a baby -- or an enemy)
temee - you (one step above kisama -- still extremely rude!)
omae - you (disrespectful/casual)

Nouns:
bakemono - monster
obake - ghost
sakana - fish
chi - blood
ningen - human

Adjectives:
amai - sweet, naive
atarashii - new
furui - old
hidoi - terrible, awful
hontou - true
muzukashii - hard
yasashii - easy

Verbs:
hashiru - to run (godan verb)
mitsukeru - to find (use it with the particle o)
nomu - to drink
sagasu - to search (for)
tsukareru - to become tired
tsukiau - to hang around, to date
tsuzuku - to continue

Miscellaneous:
kyou - today
~ nante - such a thing as ~ (don't use wa or ga after nante)

How are you doing so far?
At the very beginning of this column (Part 1), I told you that learning Japanese isn't hard. I will clarify what I said, so you won't get the wrong idea. I believe that some people out there consider Japanese to be "hard" the way that calculus is hard. Calculus: some people just can't seem to grasp its concepts, while some don't find it all that difficult. That can be hard. With that definition of hard in mind, I can truly say that Japanese isn't hard. Its words and rules are not "hard" per se, but it is different from other languages you may have encountered. You have to slowly beat it into your head over a period of years.

I would compare Japanese to a musical instrument. At first, you can't do much, but you know that you can eventually be a virtuoso. Practice makes perfect, just like when you learn an instrument. It's the same way with Japanese (or any language). As long as you stick to it, you'll slowly and surely get better. Your Japanese skills can increase every day, and become more powerful by the week and by the month.

Whenever you're learning something, the most important thing is your mindset. If you believe you can do it, you can. If you believe you can't, you probably won't. I personally know many people who believe that! You will be surprised what you can do if you only BELIEVE that you can.

The learning curve is pretty steep at first: you learn tons of stuff every day and every week. However, there are also periods where you feel like you're not learning anything. If you don't see instant results, that's perfectly natural and is not bad news at all. Don't worry, you'll get through those dry periods if you stick with it. As long as you keep trying, you will become very proficient in several months' time.

When you're learning words, just learn 5 or 10 words at a time, and don't worry about the thousands of words you don't know. Instead, concentrate on the 10's or 100's that you do know. Think of each word you learn as ONE LESS WORD you'll need to look up when you're reading something. My experience tells me that is the best way.

Tune in next time, when you'll see:
- More particles
- Common phrases
- More popular words (surprised?)
- Intro to Japanese writing

Japanese is Possible! Lesson 8

A man said to the universe, "I exist!"
- Review of JIP Objectives
- Existing in Japanese
- A Note on GA
- Example Sentences
- More Popular Words
- Important Points to Remember

Review of JIP Objectives
I just want to take a few minutes to review the objectives of "Japanese is Possible!" Unlike your average college course, this column will not focus on "formal" Japanese and learning the Chinese characters (kanji) before teaching anything else. On the contrary, those things will be saved for last, since they are the least useful. Learning things with no immediate relevance harms your motivation. Once you're watching Anime without subtitles, then you can learn those nice extras!

How to exist
In English and other languages, one uses a form of the verb "to be" to indicate his or her present location. ("I am at the store.") Anyone who has studied Spanish knows that the verb used to indicate location (estar, literally "to stay") is not the same as the verb used to indicate a personal characteristic (ser). Japanese is like this. In Japanese, when something is in a particular place, it exists there. You use the verbs iru and aru (both meaning "to exist") to express this. Use iru to show the location of animate objects (people, animals), and aru for inanimate objects (books, tables, sewing machines). (To tell you the truth, you can use desu to indicate location in Japanese and not be wrong, but the method that I am about to teach you is, I believe, a bit more grammatically correct.)

The simple sentence pattern is like this:

Something wa/ga somewhere ni aru/iru.

Notice that you need to use the particle ni (at/in/on) after the location and before the verb.

Takashi san wa mise ni iru.
[Takashi topic store at/in exists.]
Takashi is at the store.

Pasokon ga tsukue ni aru.
[Computer sj desk on exists.]
A computer is on the desk.

A Note on GA
There is a lot of similarity between WA and GA, in that they both have to do with the subject of the sentence. However, here is a way to keep them straight: they would answer different questions. If someone asked, "Who is in here?" someone might respond "RANMA is here":

ranma ga koko ni iru.
[Ranma sj here at exists.]
Ranma is here. ("Ranma, as opposed to something else")

On the other hand, if someone asked, "Where is Ranma?" you would respond "Ranma is here":

ranma wa koko ni iru.
[Ranma topic here at exists.]
Ranma is here. (this may or may not indicate emphasis on here)

Similar? Yes.

Example Sentences
As you learn the various parts of Japanese grammar, you need to reinforce the new things you learn by using them in sentences. You should read many Japanese sentences that use the words and grammar you learned. That way, you get a feel for what Japanese sentences look like, and exactly how the different grammar "items" come together. And you can expand from there.

kono heya wa hiroi desu ne.
[This room topic wide is right?]
This room is spacious, isn't it?

minna no chikara ga hitsuyou desu.
[everyone's power sj necessary is.]
We need everyone's power.

omae o korosu.
[you oj kill]
I will kill you.

More Popular Words

Nouns: asa - morning; chikara - power; kage - shadow; ki - energy, spirit; kokoro - heart; kotae - answer; makoto - truth; minna - everyone; pasokon - computer; tsukue - desk
Adjectives: chiisai - small; hitsuyou - necessary; muri - hopeless, impossible; ookii - big; saigo - last, the end
Verbs: kikoeru - to be heard; korosu - to kill; mukau - to face, to head for; noru - to ride; tamesu - to test; tasukeru - to rescue; tekagen suru - to hold back; tomaru - to stop; tsukeru - to attach
Extra words: arigatou - thank you; jibun - yourself/oneself; jibun no - one's own; kanarazu - without a doubt; kesshite - never; omae - you (disrespectful/casual); ~ nante - such a thing as ~ (don't use wa or ga after nante)

Common Phrases

omae no saigo da!
[you ('s) end is!]
It's the end of you!

kono mama
as it is now

sou desu yo
That's the way it is!

There are many words and phrases involving the word KI, each of which has a meaning of its own. Some examples include:

ki ga suru - to have a feeling (that)
ki o tsukeru - to be careful ("attach some thought/energy to it")
tenki - weather (literally, "heaven's spirit/mood")

Writing in Japanese
It's come to that point in time: it's time for you to start learning the eerie and mysterious Japanese writing systems. The truth is, there's really nothing mysterious (or eerie) about them, and I will help you to understand them.

First, some background. There are three writing systems in the Japanese language, and all three are used in nearly every Japanese publication in the world since the beginning of the century. Two of the systems are called kana. They are the Japanese equivalent of our alphabet, since each character has a sound associated with it, but no meaning. The third system, kanji, is a collection of "picture characters," each of which has a meaning of its own.

We will start by learning one of the kana systems, known as "hiragana." Hiragana is called a syllabary, because it is a system that consists of syllables. There are forty-six hiragana characters currently in use; forty-five of them are syllables ending in vowels, and the last one is the syllable 'n.' I will first teach you the syllables that are lone vowels. Today, we will just go over the first vowel 'a.' To write it, first draw the horizontal stroke across the top. Then, draw the vertical stroke through that, starting at the higher end and finishing in the bottom right corner of the character. Finally, draw the third curved stroke. That's all there is to it.

Next week, I will try to display the characters on your screen using Japanese encoding, with .gif images for the rest of the lone vowels. If a dialog pops up on your screen on the next lesson asking whether you want to install Japanese language support, tell it that you do want to.

How to become proficient in Japanese
As you learn more Japanese grammar, you'll be able to understand an increasing amount of the dialogue in a typical Anime episode. You learn the words and phrases from a website or book, but you learn how they're used by watching Anime, listening to songs, playing Japanese video games, and reading manga. I don't believe you can leave out either part. You learn the words and phrases from a book or website, because then you get a feel for what the different words and phrases mean; through Anime and songs, you also get a feel for what a typical Japanese sentence looks and sounds like. Only through sheer repetition can an American get a Japanese native's ear for Japanese! It's a step you can't leave out. Unless you're under the age of 5, you can't learn Japanese just by watching Anime, but I don't think a teenager or adult can learn Japanese well without immersing him/herself to a certain degree. It's nice to have subtitles for a while, so I recommend watching subtitled Anime for quite a while before you go do "raw Japanese."

Important Points to Remember
The keys to learning Japanese are:
- Believe you can do it - make friends with the language
- Slow and steady - learn at least 1 word every day
- Listen to it and use it as often as possible
- Study ONLY when you are in the mood and have time
- Look at word lists and review EVEN when you're busy (at work, etc)
- Stop worrying about what the words sound like to an American

Next Time
- Adjectives as modifiers
- Example Sentences
- Review
- More Popular Words
- Two more vowels

That's all for now. See you soon!
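One small aside of my own before Lesson 9 (this sketch is not part of the original lessons): Lesson 8's existence pattern is regular enough to generate by rule. The helper below builds a "Something wa somewhere ni iru/aru." sentence, choosing iru for animate subjects and aru for inanimate ones. The function name is mine, and for simplicity it always uses wa, even though the lesson's "Pasokon ga tsukue ni aru" shows that ga is sometimes the better choice.

```python
# Sketch of Lesson 8's pattern: <thing> wa <place> ni iru/aru.
# iru = animate subjects (people, animals); aru = inanimate ones.

def existence_sentence(thing: str, place: str, animate: bool) -> str:
    """Build a romanized location sentence using the animacy rule."""
    verb = "iru" if animate else "aru"
    return f"{thing} wa {place} ni {verb}."

print(existence_sentence("Takashi san", "mise", animate=True))
# Takashi san wa mise ni iru.
print(existence_sentence("pasokon", "tsukue", animate=False))
# pasokon wa tsukue ni aru.
```

The point of the exercise is just to internalize that animacy, not politeness or word order, is what picks the verb.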
The only catch is they have to believe they can do it.Japanese is Possible! Lesson 9 A look back ● ● ● ● ● ● ● Review Japanese is Logical Adjectives Da Example Sentences Popular Words Hiragana: i and u Review It's been 8 weeks since the first JIP column. You start out able to do almost nothing.it's not hard. You may catch something that you overlooked the first time through. gaggle. I adjectives always end in the vowels -ai.peace of mind FUAN .un. It doesn't take much effort to learn "tasumaki" when you know that "tatsu" is dragon and "maki" is wind up. never -ei or a consonant followed by -i. inu = dog) tatsumaki . comfortable SHIN .puppy (ko = child or small. and you will soon see why. not AN .anxiety. maki = wind [as in "roll up. Up to now.heart. worry The kanji are like building blocks. -ii. I haven't said much about them and it's already lesson 9. such as kirai. There are a great many words that are made from 2 "kanji".rest. If you've been following the lessons well.To give you an idea of what awaits you in Japanese: tekubi . There aren't very many. From now on. Over time. and what their meanings are. flock. so they are everything else. -ui or -oi. feelings ANSHIN .wrist (te = hand. So let's start from the beginning. if there is a na-adjective that looks like an i-adjective. etc) In Japanese there is just one . or Chinese characters. I don't recommend learning them right away. which are used to build different words. chances are that you . but after a while you may want to start learning them. can look just like i-adjectives. There is no real rule about what a naadjective looks like. There are two types of adjectives. murder."mure"." not the weather type of wind]) You know how English has about 100 different words for a group of animals? (herd. ostentation. ease. It does help to learn some of the more popular kanji. Sometimes naadjectives. you get used to how the different kanji are read. 
Regarding adjectives It's about time you started learning the details of adjectives. kubi = neck) koinu . and learners of Japanese commonly call them naadjectives and i-adjectives. For example: FU . I will explicitly mark it in the word list at the end of a lesson.tornado (tatsu = dragon. That's important." but there is another word with the same meaning: da. you must insert the word na between the adjective and the noun. and once you begin to learn more complex sentences.black cat atsui ocha . When an i-adjective is the predicate of a sentence (such as.") To modify a noun with an i-adjective. As a predicate verb. (da level politeness) That cat is black. I am a student. I'm your father. There is one important difference.important thing kirei na onna no ko . cold water is the best.pretty girl Da the other copula Up to now. kuroi neko . there is no da after it. (i. you will need to use it no matter what (just not as the predicate verb). To modify a noun with a na-adjective.already know pretty well how to use them as predicate adjectives in the form: Something wa/ga adjective desu. Compare: Ano neko wa kuroi desu. . ore wa omae no otousan da. you can also use adjectives to directly modify nouns.hot tea This enables to to use a verb other than "to be" to say what the subject did or does rather than what it is. da works just like desu: Boku ga gakusei da. Da is a little less polite than desu. "The black cat did something. noun is adjective). just place the adjective before the noun. but you will frequently hear it in anime and read it in manga. (desu level politeness) Ano neko wa kuroi. you have used the word desu as the verb meaning "to be. Example Sentences tsumetai mizu wa ichiban desu. Unsurprisingly. Do this if the rest of your sentences have da. Sore wa taisetsu da. taisetsu na mono ." instead of "The cat is black.e. small . Most of these appeared in previous lessons. ano furui neko wa akai desu. 
Popular Words
New words for this week:

[Noun] honoo - fire
[Noun] koori - ice
[Noun] mizu - water
[Noun] ocha - tea
[Noun] onna no ko - girl
[Noun] ushi - cow
[Adj] ii - good
[Adj] kirai - unlikable (na-adjective)
[Adj] kirei - pretty (na-adjective)
[Adj] taisetsu - precious, important (na-adjective)
[Verb] tobu - to fly
[Verb] da - is (a less polite version of desu)
[Verb] datta - was (the past tense of da)

kono jigoku kara no pasokon wa atsui desu yo!
[This hell from (modifier) computer (subject) hot is!]
This computer from hell is hot!

yukina no koori wa samui.
Yukina's ice is cold.

rekka no honoo wa atsui.
Rekka's fire is hot.

ano hidoi bakemono wa chi o nonda.
That awful monster drank blood.

kanarazu omae o korosu.
Without a doubt, I'm going to kill you.

lina no okane wa taisetsu da.
Lina's money is precious.

Here are 129 more VERY popular words. Each word you learn from this list will be extremely useful. Most of these appeared in previous lessons.

[Adj] akai - red
[Adj] amai - sweet, naive
[Adj] aoi - blue
[Adj] are - that over there
[Adj] atarashii - new
[Adj] atsui - hot
[Adj] chiisai - small
[Adj] chikai - near
[Adj] furui - old
[Adj] hayai - fast
[Adj] hidoi - terrible, awful
[Adj] hitsuyou - necessary
[Adj] hontou - true
[Adj] ichiban - number 1
[Adj] kore - this
[Adj] kuroi - black
[Adj] midori - green
[Adj] mijikai - short (hair, etc)
[Adj] muri - hopeless, impossible
[Adj] muzukashii - difficult
[Adj] nagai - long
[Adj] ooki - big
[Adj] osoi - slow
[Adj] saigo - last, the end
[Adj] samui - cold
[Adj] shiroi - white
[Adj] sore - that
[Adj] tooi - far
[Adj] tsumetai - cold
[Adj] warui - bad
[Adv] itsumo - always
[Adv] yukkuri - slowly, leisurely
[Int] dare - who
[Int] doko - where
[Int] ikutsu - how many
[Int] itsu - when
[Int] nani - what
[Misc] arigatou - thank you
[Misc] kanarazu - without a doubt
[Misc] kesshite - never
[Misc] kudasai - please
[Misc] kure - please (informal)
[Misc] nante - a thing such as
[Noun] ai - love
[Noun] ame - rain
[Noun] asa - morning
[Noun] atama - head
[Noun] bakemono - monster
[Noun] boushi - hat
[Noun] chi - blood
[Noun] chikara - power
[Noun] densha - train
[Noun] heya - room
[Noun] hito - person
[Noun] hon - book
[Noun] inu - dog
[Noun] jibun - yourself
[Noun] jigoku - hell
[Noun] kage - shadow
[Noun] kami - god
[Noun] kami - hair
[Noun] ki - energy, spirit
[Noun] kokoro - heart
[Noun] kotae - answer
[Noun] kuchi - mouth
[Noun] kumo - cloud
[Noun] kuruma - car
[Noun] makoto - truth
[Noun] minna - everyone
[Noun] mise - store
[Noun] mono - thing
[Noun] neko - cat
[Noun] ningen - human
[Noun] obake - ghost
[Noun] okaasan - mother
[Noun] okane - money
[Noun] onna - woman
[Noun] otoko - man
[Noun] otousan - father
[Noun] pasokon - computer
[Noun] shoujo - girl
[Noun] shounen - boy
[Noun] sora - sky
[Noun] tabemono - food
[Noun] tatakai - (a) fight
[Noun] terebi - T.V.
[Noun] tomodachi - friend
[Noun] ude - arm
[Noun] uta - song
[Pron] aitsu - that guy; he, she (informal)
[Pron] koitsu - same as aitsu ("this guy")
[Pron] anata - you (always appropriate)
[Pron] atashi - I (said by females)
[Pron] boku - I (said by males)
[Pron] kanojo - she
[Pron] kare - he
[Pron] kisama - you (what you would say to an enemy)
[Pron] omae - you (said to inferiors, or what you would say to a baby)
[Pron] ore - I (speaker thinks he is superior)
[Pron] temee - you (one step above kisama - still extremely rude!)
[Pron] kore - this
[Pron] sore - that
[Pron] are - that over there
[Verb] aruku - to walk
[Verb] bakuhatsu suru - to explode
[Verb] erabu - to choose
[Verb] hanasu - to speak
[Verb] hashiru - to run
[Verb] iku - to go
[Verb] iru - to be somewhere (for a person)
[Verb] kaesu - to return something, give back
[Verb] kasegu - to earn (money)
[Verb] kikoeru - to be heard
[Verb] kiku - to listen
[Verb] korosu - to kill
[Verb] kuru - to come
[Verb] miru - to see, watch
[Verb] mitsukeru - to find
[Verb] motsu - to have
[Verb] mukau - to face, to head for
[Verb] naru - to become
[Verb] nomu - to drink
[Verb] noru - to ride
[Verb] omou - to think
[Verb] osu - to push
[Verb] ou - to follow
[Verb] sagasu - to search (for)
[Verb] taberu - to eat
[Verb] tamesu - to test
[Verb] tasukeru - to rescue
[Verb] tataku - to hit
[Verb] tekagen suru - to hold back
[Verb] tomaru - to stop
[Verb] toru - to take
[Verb] tsukeru - to attach
[Verb] tsukiau - to hang around, to date
[Verb] tsuzukeru - to continue
[Verb] unten suru - to drive
[Verb] utau - to sing

Hiragana
If you see garbled letters in the brackets [] below, you need to set up your computer to read Japanese text. The Japanese encoding page should be able to solve your problem. If it doesn't, somebody in the JIP forum will surely help you out. In this lesson, you will learn the hiragana characters for the vowels i and u.
In English the order of the vowels is "a, e, i, o, u," their order in the alphabet. In Japanese, it's "a, i, u, e, o," and this is a little more significant than it is in English, as you will see much later.

Right now, here's what i and u look like: [い] and [う]

Writing them is pretty simple. For the most part, each stroke is drawn from top to bottom, left to right, and in a character, the top-leftmost strokes are drawn first, proceeding down to the bottom-right ones. When a stroke slants down and to the left or curls around, you usually start drawing from the higher tip and finish at the lower tip of the stroke. For i, just draw the left stroke from top to bottom, starting to the left and finishing below, then the right stroke from top to bottom. For u, first draw the top stroke from left to right, then the bottom stroke.

With these new characters, you can already write a few simple words:

[いい] (ii - good)
[あう] (au - to meet)
[あい] (ai - love)

Be here for the next lesson where you'll learn:
● More Grammar
● Common Phrases
● Two more hiragana
● More Popular Words

Copyright © 2001 Maktos.com. All Rights Reserved.
Japanese is Possible! Lesson 10
Part 10
● The Role JIP Plays
● Be Creative
● Four Important Points
● The -te form
● Kana: e and o

The Role JIP Plays
Because everyone is different, each person requires a different method to learn the same exact thing. That's why there are so many different Japanese books out there right now. Some books are "survival guides" teaching you only a few common phrases, such as how to order food and ask where the restroom is. Many books stress the polite form, and never touch on the plain, everyday form of the language. Other books insist you learn the written language (Hiragana/Katakana) right away.

JIP, on the other hand, wishes to guide you to a practical understanding of the language. After studying for several months, you should be able to understand many phrases commonly spoken in Anime and video games. You should also be able to make Japanese sentences of your own, and converse with Japanese people and other learners by e-mail, letters, and eventually, through speech.

If you want to learn more than I'm teaching here, you're welcome to go off on your own and find books and other materials to study. A good place to start for this is the book recommendation page. You can use JIP for additional study material, and also as a source of advice. If you just discovered the column recently, it doesn't matter where you are progress-wise - you can read the past lessons. They will be here as long as any of the lessons. JIP will gain new lessons at a moderate speed.

Be Creative!
What's your subject of expertise? I'm sure you have some interest or hobby where you really know your stuff. How did you become that knowledgable? Did you get a lot of hands-on experience? Did you learn about it because you were involved in your hobby almost every day? Since everyone IS different, I can think up some good ways to help ME learn, but only YOU know what kind of techniques worked for you in the past. I have to rely a bit on YOU to come up with the best way to make YOU fluent in Japanese. Try to brainstorm how you can apply these ideas to your particular learning style. Some people have to make it "fun" somehow or they lose interest. Others have to have a friend or sibling to work with. Be creative. I'd say the possibilities are limited only by your imagination, which is endless.

A good way to go over what you've learned is to try to construct your own Japanese sentences. I used to have a teacher who would make us create a sentence completely from scratch for every grammatical construction we learned. Do this yourself. Make sentences. They can be something that you think would be useful to know off the top of your head, or they can be something silly. The sillier it is, the more likely you are to remember it. If you want to be sure that you're doing it right, post your sentences up on the JIP forum. There are plenty of people who would be glad to help you out.

Four Important Points
I will give several tips here that apply to most people.

Tip 1 - Set Goals
Everyone has to have goals, otherwise we are just drifting through life waiting to die! It's no different in Japanese. Unless you have well crafted short- and long-range goals, you won't get very far in anything.

A good long-range goal would be "To be able to speak and understand Japanese". It is your ultimate objective. You aren't supposed to worry about it on a day-to-day basis - it is there for motivation. You can take as long as 5 to 7 years to reach your long-range goal. Whenever you're having any kind of trouble, just keep your goal in mind. You can work through obstacles when you have a goal. That's because you see obstacles for what they are - something to overcome! If you don't have a goal, you meet your first obstacle and take it as an excuse to quit. Everyone runs into obstacles. The road to success is littered with them.

Sometimes the long-term goal seems unreachable. That's where short-term goals come in. These are the small "milestones" on the way to your long-term goal. Some good short-term goals include:
- Learning the hiragana alphabet
- Memorizing my latest word list
- Learning 10 new kanji

You will set a large number of short-term goals. They help encourage you: you feel like you're making forward progress, and you look behind you and see a series of goals you have reached. It gives you the feeling of momentum, and it encourages you toward your ultimate goal. Remember, momentum is important indeed when you run into an obstacle!

When you go on a vacation, you can't just get in the car and say "Let's go to Vegas" and start driving. You have a destination in mind, but you need to get out some maps and plan out how far you will travel each day. You decide which highways you will take, and how often you'll have to stop for fuel and food. Even if you think about your destination constantly, without a plan you won't make it there.

Tip 2 - Use It (or Lose It)
I've talked with many people about the topic of learning a second language. Many people reached a decent level of proficiency at Spanish or French in high school, only to become seriously "rusty" years later. People always seem to give the same reason - "I never used it after high school". They all have taken a language in high school, but few become proficient enough to speak it. My own Spanish teacher once recounted an anecdote about a boy to whom he had taught English. Years later, he saw the boy in a store and said "¿Jorge, como está?" But the boy could no longer speak Spanish, his own native language, because he hadn't used it. You have to use it OFTEN. You need to practice, and you must use your skills often if you want to make them a part of you.

Now, if you're in the United States, it is challenging to find places where other languages are spoken. Don't misunderstand me. I understand that many other languages are spoken. However, there are really only 2 major languages widely spoken - English and Spanish. The other languages are tossed into the "other" category. How do you practice Japanese in a country with very few native speakers? It turns out there are several places you can hear Japanese in action.

Where to listen to Japanese
● J-pop and Anime music
● Anime
● Movies/"Doramas" (dramas)
● CD dramas
● Video games

All of the above sources give you an idea of how Japanese is ACTUALLY spoken - and you'll notice they use the "plain" form 8 times out of 10.

Tip 3 - Practice Speaking It
Try to practice throughout the day, even if only for a couple minutes. How can I practice speaking Japanese? I recommend convincing one (or more) siblings and/or friends to join you on your Japanese adventure. It will be of great benefit to both of you. See "JIP Part 1" for a list of reasons why learning Japanese is a good idea. Print it out and give it to them! If you have a younger sibling (under the age of 7) you'll have an easy time convincing them.

If all else fails, try chatting with Japanese people on the Internet. There are several places you can go:
● Wbs.net
● MSN chat rooms support Japanese text, and there are hundreds of Japanese rooms
● You might want to check out the Japanese newsgroups. There are hundreds of them, which all begin with "japan" or "fj".

I'm sure you'll find several Japanese people to chat with. Many will want to practice their English with you! Often you will talk in Japanese to them, and they'll talk English to you. Other times, you both talk Japanese. It depends on the person! Japanese people are very forgiving when it comes to Americans speaking their language. Don't be afraid to mess up, or anything like that. Japanese people are not that rude. In America, we make fun of people that can't speak English perfectly. On the contrary, Japanese people consider English to be "exotic" and "cool".

I like to speak Japanese all the time, if only to myself. When I'm looking for my shoes, it's too boring to say "Where are my shoes?". I'll say it in Japanese instead: "kutsu wa doko ni aru?" It makes Japanese seem more like a familiar language. Even though I know the words in that sentence like the back of my hand, it still helps to be using "Japanese" that often. I've worked hard to learn Japanese for about 4 years, and I've almost reached my long-term goal of understanding Japanese, even though there are still words and sentences I don't understand.

Without going too deep into the topic of how kids learn (a very fascinating topic that I could talk about all day), I will just say one thing. You want to TRY to be as childlike as possible when learning a language. That's why kids learn so quickly - they just dive in and don't worry about how hard it's supposed to be. They don't worry about what it sounds like to a native speaker. They don't set limits on how much they can learn each day. Yes, your brain isn't as "absorbent" as it was at the age of 5, but look at the bright side! You would have a much harder time if you waited until age 60 to start! Try to
rekindle the love of learning that all kids have.

Tip 4 - It's All in your Mind
The concepts of "hard" and "easy" are all in your mind. Just look at the home PC. Modern computers are pretty easy to learn how to use nowadays. Little kids have no reason to fear a PC. No one told them computers were "hard" before they first used one. However, many older people believe they're "hard". I know people as young as 45 that are afraid of PCs! The interesting thing is, it's the same PC that 8 year olds use with ease. It has nothing to do with age though - it has to do with frame of mind. The older folks you see on PCs have overcome that mental block. If they could somehow convince themselves that computers are no big deal, they could learn them with no problem.

Some of you may have heard of a famous music teacher from Japan by the name of Suzuki. He has groups of 5 and 6 year olds playing Mozart and other "difficult" works on the violin, doing some amazing things. Here is how he does it: While giving the mother violin lessons, he places a small violin in the child's playpen. The child watches his mom play her violin, and as soon as he's able, he tries to play his as well. The lessons only last about an hour, but over a period of years the child develops quite a skill in playing the violin. He develops an ear for music, and before long he's able to play music without using a sheet (playing by ear). The idea is to teach a child something before they can learn the conventional "wisdom" that certain things are hard to do.

I started learning Japanese when I was 15. I was decent in Spanish back in high school, but I don't come from a bilingual family or anything. When I first started, I didn't know how to teach myself a language. I was totally on my own. I made a lot of mistakes. Sometimes I picked random words out of a dictionary to learn. I probably wasted a lot of time. However, I've concluded it's possible to become proficient at Japanese, even if you don't start when you're 3. It will take a few years, but it CAN happen if you want it to. A day shouldn't pass where you don't study Japanese for at least 5 minutes. Some days you need to spend more than that. You have to use it OFTEN. Japanese is something you have to slowly beat into your head.

The -te form
Right now, I am going to teach you a very simple verb form called the -te form. It's sort of like the gerund in English (the -ing form), but it's very often quite different. The easiest way to form the -te form is to remove the final a from the past tense of a verb and replace it with an e. Eventually, you should get so used to forming the -te form that you can forget about the past tense as an intermediary.
Let's look at some examples of forming the -te form:

Dictionary form -> Past -> -te form
kau -> katta -> katte (to buy)
kaku -> kaita -> kaite (to write)
isogu -> isoida -> isoide (to hurry)
kasu -> kashita -> kashite (to lend)
utsu -> utta -> utte (to strike)
shinu -> shinda -> shinde (to die)
asobu -> asonda -> asonde (to play)
yomu -> yonda -> yonde (to read)
kiru -> kitta -> kitte (to cut)
taberu -> tabeta -> tabete (to eat)

Irregular:
iku -> itta -> itte (to go)
kuru -> kita -> kite (to come)

Notice that these irregular verbs are irregular with respect to the dictionary form, but changing them from the past to the -te form is completely regular. Don't bother trying to use this verb form yet; we'll start to introduce ways to use it in the next lesson.

Kana
Today, we'll finish up with the lone vowels and learn the last two, e and o. Here's e and o: [え] and [お]

For e, first draw the stroke at the top, going from left to right, then draw the next stroke, which looks almost like a seven. Finally, the curved third stroke goes from the right of the second stroke to the bottom right. In the diagram you can see a small diagonal line connecting the second and third strokes. You can write this and use it to combine the second and third strokes, or leave it out and draw them separately, depending on your preference.

For o, start with the short horizontal stroke from left to right. Follow with the long vertical stroke that crosses through the first, curving around and finishing at the bottom. Finish with the short curved stroke from left to right. I'm sure you won't get it right the first time.

Remember, this is the last time that I will be providing .gif images of the kana, so get the Japanese viewing on your browsers straightened out. That's all folks. Be here next time for:
● Particles
● Commonly Heard Phrases
● More Useful Words
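The rule from Lesson 10 above ("remove the final a from the past tense and replace it with an e") covers every verb in the table, including the two irregular ones. A quick sketch in Python, working on romaji strings (the helper name is mine, not from the lesson):

```python
def te_form(past_tense):
    """Past tense -> -te form: swap the final 'a' for an 'e'.
    Examples: katta -> katte, yonda -> yonde, itta -> itte."""
    if not past_tense.endswith("a"):
        raise ValueError("expected a plain past-tense form ending in -a")
    return past_tense[:-1] + "e"

# Every row of the table above, irregulars included, passes through the same rule.
for past in ["katta", "kaita", "isoida", "shinda", "tabeta", "itta", "kita"]:
    print(past, "->", te_form(past))
```

As the lesson says, iku and kuru are only irregular in forming the past; once you have itta and kita, the step to the -te form is completely regular.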
Japanese is Possible! Week 11
Part 11
● Particle List
● The Conditional "-ba"
● A Note on Adjectives
● Useful Words
● Example Sentences
● Commonly Heard Phrases

Particle List
Here is a more or less complete list of particles used in Japanese. I put some "Conjunctions" in this list as well, because they are almost as popular as actual "Particles". Needless to say, it's ok if you don't know how to use some of these. As you listen to Japanese, and read hundreds of sentences, you'll get a feel for how they are used.

● wa - Limits the sentence - means "Restricting ourselves to..."
● ga - Subject marker - means "this is the subject"
● o - The preceding word answers WHAT or WHO (direct object)
● e - Towards
● ni - Towards, and location (indirect objects)
● no - Possessive - works like 's
● mo - "also"
● to - "and" - used for listing several items
● ya - Like the particle "to", but suggests an incomplete list
● toka - "for example", "things like"
● nado - "for example", "things like"
● kara - from / because
● made - until
● node - because
● demo - but
● kedo - but
● shi - "and what's more", "not only... but also"
● yori - means "instead of something/someone else"
● de - Used to tell "by who", and location
● na - Use with a certain category of adjectives
● ne - Used at the end of sentences - a spoken question mark, kind of like "huh?" or "right?"
● yo - Used at the end of sentences - a spoken exclamation point
● tte - "...and that's what he said"
● da - "is" (short for DESU, so it isn't a particle at all)
The Conditional "-eba"
This is a very powerful ending. -eba is used to say "if" something were to happen. You would use it in sentences like:

If you eat that, you will die!
If you look for him, you'll find him!
If you become human, it's good.

How to use the -eba ending
First, remove the last "u" from the verb:

taberu -> taber
nusumu -> nusum
korosu -> koros

Next, add -eba and you're done! Now let's translate those 3 example sentences (about 15 lines up):

sore o tabereba, shinu yo.
If you eat that, you will die!

aitsu o sagaseba, mitsukeru yo.
If you look for him, you'll find him!

ningen ni nareba, ii no desu.
If you become human, it's good.

Actually, all of the grammar (verb endings, etc) you learn will be well worth the effort! The grammar is used more frequently than any word, so it's very important that you learn it! It only makes sense.

A Note on Adjectives
Just for quick review, an adjective modifies a noun (person, place, or thing). In Japanese there are 2 types of adjectives - "Normal" and "Quasi". Technically, there's no way to "know" what group an adjective is from just by looking at it. However, it isn't really hard to distinguish between the two after you become more accustomed to Japanese sentences.

Normal Adjectives
In Japanese, most adjectives end in -i. This makes it easy to recognize a word as an adjective. You can do some really cool things with these "normal" adjectives. For instance, you can say something WAS white, WASN'T white, IS BECOMING white, etc.

Some Normal Adjectives:
kawaii - cute
kuroi - black

Adjectives give a sentence more flavor. Which sentence paints a more vivid picture?

- The car exploded.
- The crimson red car exploded into a searing ball of flame.
To give you an example, let's take the -ku ending. (Note: I'll teach this ending some time in the next month. I just want to use it here to illustrate a point.) Without getting into too much detail, let's just say you add it to "normal" adjectives, after dropping the final 'i'. You hear people say "shiroku", "kuroku", etc, all the time. However, when you hear "kireku" it doesn't sound right, because you never heard it before. You basically get a sense for whether it "sounds" right or not. After studying Japanese for a while, you'll find it's pretty easy to tell which "type" an adjective is.

Some more Normal Adjectives:
nagai - long
osoi - slow
samui - cold
shiroi - white
yasashii - gentle

yasashii hito - gentle person
osoi kame - slow turtle
kuroi kame - black turtle
kawaii onna no ko - cute girl

Quasi Adjectives
This type of adjective MAY or MAY NOT end in -i. Some books name this group "Quasi Adjectives". You can't do the "cool things" that you can do with normal adjectives. To use these adjectives, you have to use the particle NA: you just plop down the adjective, then na, then plop down a noun after it.

Some "Quasi" Adjectives:
benri - convenient
genki - energetic, healthy
kirai - disliked, hated
suki - liked

suki na hito - someone (you) like
kirai na yatsu - a person (you) dislike
benri na megami - a convenient goddess
genki na ko - a healthy child

Useful Words
Nouns
kachi - (a) victory
onna no ko - girl
otoko no ko - boy
ko - child
tsumori - intention
kou - like this

Adjectives
kawaii - cute
yasashii - gentle
benri (na) - convenient
genki (na) - energetic, healthy
kirai (na) - disliked, hated

Verbs
michi ni mayou - to get lost (lit. "lose the road")
owaru - to end

Misc
sae - only
kurai - about, around
hodo - as much as
nante - a thing such as
shikashi - however

Example Sentences
kore kurai ii desu yo
[this (thereabouts) good is !]
This much is good.

kore sae areba, ore no kachi desu.
[this only (if exists), I 's win is.]
If I only have this, it will be my win. Or, to rephrase it, "With this, I'll be able to win."

nani o suru tsumori ka?
[what (who or what) to do plan?]
What do you plan to do?

hito o korosu nante, hidoi desu yo!
[person (who) killing (thing such as) horrible is !]
Killing a person - how horrible!

omae wa genki na no da.
[you (subject) energetic is.]
You are very energetic.

omae wa shiroi no da.
[you (subject) white is.]
You are white.

Commonly Heard Phrases
kou shite - do it like this
sore yori - "moving from that...", "let's forget about that..."
kore de owari da - "With this, it's the end"
todome da! - "The final blow!"
machigai nai - "make no mistake", "definitely"

And that's all for this week! Good luck with your studying - hang in there! If you have any questions, please post them in the "Japanese Is POSSIBLE!" Forum. See you next week!
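Lesson 11's two-step -eba recipe (drop the final -u, add -eba) is mechanical enough to sketch in Python. It reproduces all three of the lesson's examples; note that this romaji shortcut would need kana-aware handling for verbs ending in -tsu, so treat it as an illustration only (the function name is mine):

```python
def conditional_eba(verb):
    """Dictionary form -> "-eba" conditional: drop the final -u, add -eba."""
    if not verb.endswith("u"):
        raise ValueError("expected a dictionary form ending in -u")
    return verb[:-1] + "eba"

print(conditional_eba("taberu"))  # tabereba - if (you) eat
print(conditional_eba("sagasu"))  # sagaseba - if (you) look for
print(conditional_eba("naru"))    # nareba   - if (you) become
```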
Japanese is Possible! Week 12
Part 12
● Types of Verbs
● Verb Endings
● Useful Words
● Example Sentences
● Anime Videos and Music

Types of Verbs
There are three groups of verbs in Japanese - Irregular, Ichidan, and Yodan. Ichidan and Yodan verbs (all but TWO verbs) use the same verb endings, and you conjugate them the same way. However, there is a subtle difference between the two groups.

Irregular Verbs
I bet I scared you by telling you there was a whole GROUP of irregular verbs! Actually, there are only two:

suru - to do (used with 100's of verbs)
kuru - to come

Ichidan Verbs
Verbs in this group end with -eru or -iru. Verbs from this group include:

deru - to go out
taberu - to eat
miru - to look, watch
ochiru - to fall

Yodan Verbs
This group contains every verb except for those that belong to the Irregular and Ichidan groups. Here you find your familiar -U, -TSU, -RU, -KU, -GU, -SU, -BU, and -MU verbs, for example:

au - to meet
tatsu - to stand
suwaru - to sit
hairu - to enter

Be forewarned: there are some Yodan verbs that look a lot like Ichidan verbs. I will now list them:

chiru - to fall, scatter
hairu - to enter
kiru - to cut
hashiru - to run
iru
- to need
kaeru - to return
kagiru - to limit
keru - to kick
mairu - to come/go
nigiru - to grasp
shiru - to know

Verb Endings
The basic verb endings (-ITA, -TTA, -SHITA, etc) have already been discussed. I thought I'd share with you the way I memorized them. You simply memorize these 4 sentences - they are quite mesmerizing after a while! It's a lot like the "ABC" song, by the way, because people often use that to remember the alphabet. Then when you conjugate a verb, you just recite these sentences to yourself:

U TSU RU tta
MU NU BU nda
KU ita GU ida
SU SHITA

(That is: verbs ending in -u, -tsu or -ru take -tta; -mu, -nu or -bu take -nda; -ku takes -ita and -gu takes -ida; -su takes -shita.)

By the way, have you noticed what all the verb endings have in common? "A" seems to mean past tense, and "E" is the here and now. As you listen to Japanese more and become more familiar with it, you'll develop an ear for what "sounds" right.

I would suggest copying this list down and putting it in a safe place. That way, when there's a verb that you aren't sure about, you can refer to this list! Some of these verbs don't belong in the "must learn" category. ^_^

Useful Words
Nouns
mokuteki - intention, purpose
shitsumon - question
michi - road, path
kokoro - heart, spirit
isu - chair
soto - outside
uchuu - outer space
hate - end
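Returning to the verb groups and the ending table above: together they make a small conjugation sketch. The irregular pair and the Yodan look-alike list come straight from this lesson; everything else is classified by its -eru/-iru ending. This is a romaji illustration only - for instance, "iru" (to need) can't be told apart from Ichidan "iru" (to be) by spelling alone, and the irregular pasts (itta, shita, kita) are handled as special cases:

```python
IRREGULAR = {"suru", "kuru"}
# Yodan verbs that merely look like Ichidan verbs (listed in this lesson).
YODAN_LOOKALIKES = {"chiru", "hairu", "kiru", "hashiru",
                    "kaeru", "kagiru", "keru", "mairu", "nigiru", "shiru"}

def verb_group(verb):
    """Classify a dictionary-form verb as Irregular, Ichidan, or Yodan."""
    if verb in IRREGULAR:
        return "Irregular"
    if verb.endswith(("eru", "iru")) and verb not in YODAN_LOOKALIKES:
        return "Ichidan"
    return "Yodan"

# "U TSU RU tta / MU NU BU nda / KU ita GU ida / SU shita" as a lookup table.
# Longer suffixes come first so "tsu" wins over "su" and "ru" over "u".
YODAN_PAST = [("tsu", "tta"), ("su", "shita"), ("ku", "ita"), ("gu", "ida"),
              ("mu", "nda"), ("nu", "nda"), ("bu", "nda"),
              ("ru", "tta"), ("u", "tta")]

def past_tense(verb):
    """Dictionary form -> past tense."""
    if verb == "iku":   # irregular past: itta, not iita
        return "itta"
    if verb == "suru":
        return "shita"
    if verb == "kuru":
        return "kita"
    if verb_group(verb) == "Ichidan":
        return verb[:-2] + "ta"        # taberu -> tabeta
    for suffix, ending in YODAN_PAST:
        if verb.endswith(suffix):
            return verb[: -len(suffix)] + ending
    raise ValueError("not a dictionary form")

print(past_tense("kau"))     # katta
print(past_tense("kaku"))    # kaita
print(past_tense("shinu"))   # shinda
print(past_tense("taberu"))  # tabeta
print(past_tense("kiru"))    # kitta (a Yodan look-alike: to cut)
```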
Verbs
au - to meet
tatsu - to stand
suwaru - to sit
hairu - to enter
niau - to suit, look good on
deru - to go out
taberu - to eat
miru - to look, watch
ochiru - to fall
hajimaru - to begin

Misc
zutto - the whole time

Example Sentences
oneesan wa boku wo mite iru.
[big sister (subject) I (who) looking]
My big sister is watching me.
Note: You can conjugate IRU like any other verb. matte iru - is waiting. Isn't that going to come in handy!

kono isu ni suwatte ita.
[this chair (location) was sitting]
(He) was sitting in this chair.
Note: -TA is the normal past ending for RU verbs. Conjugating IRU in this way changes the meaning from "is waiting" to "was waiting".

zutto matte ita no desu yo!
I was waiting the whole time!

ano mise ni haireba shinu zo.
[that store (into) if you enter, die (rough ending)]
If you enter that store, you'll die!

koko ni tatte kudasai.
[here (location) stand please]
Please stand here.

soto e dereba ii desu.
[outside (toward) if you go out, good is]
It's ok to go outside.

boku to au made, deru na.
[I (with) meet until, go out (don't)]
Don't leave until I can meet you.
Note: deru na - don't go out. The "na" ending can be added to any verb to mean don't ____. It's very abrupt, and only a male would ever use it.

Anime Videos and Music
The first episode of Hime-chan no Ribbon has been converted to RealPlayer format for you to download! Now you can hear Japanese in action, even if you don't own any Anime.

Part 1 (6.9 MB)
Part 2 (6.7 MB)
Part 3 (6.8 MB)

If you want tons of other Anime videos like this, you can stop by: Anime in RealPlayer format
If you want MP3's, here is an important link: ESP Japanese MP3s

Japanese is Possible! Week 13
Part 13
● Negative Form -NAI
● Double Negatives
● Ja Nai
● Contractions
● Example Sentences
● Very Popular Words

Negative Form -NAI
The form of the verb listed in the "Useful words" section is referred to as the "dictionary form" of the verb. In the last lesson, you learned that there are 2 main categories of verbs - Ichidan and Yodan. You can add any one of a large number of useful endings to the verb "stem". This is called conjugating a verb. The dictionary form is indeed used in sentences, but more often than not you have to drop a couple letters at the end of the verb.

-nai is one of those endings. When you want to say:

I don't kill people.
He doesn't eat.
She can't find the treasure.

You would use the negative verb ending, -nai. Before you can add the -nai, you have to get the right verb "stem" to add it to.

Ichidan Verbs
Verbs in the Ichidan category (which end with -eru or -iru) are conjugated this way:

Take the "dictionary" form of the verb: taberu
Take off the -ru: tabe
Now, add -nai to complete the conjugation:
Yodan Verbs Remember that most verbs are in this category.tabenai You can use it in a sentence now: aitsu wa nanimo tabenai desu. then he must have GONE to the store. The four words above (nda. Why say the no at all? It softens the sentence a bit. Sometimes you can translate it 'it is that'. . NDA.However. However. We have several in English: Can't. That's why girls often end sentences with no.NO DESU You'll often hear someone use one of the above contractions. ndesu. You are silly. you can ignore the no and just treat it as a regular da.words with a letter intentionally left out so you can more quickly say the word. it would be used in sentences such as: Shouldn't you go with? Wouldn't it be a good idea to forget about him? Don't I look just like her? Doesn't it look good on me? niau ja nai? to suit isn't it? Doesn't it suit (look good on) me? omae wa tsuyoi ja nai ze you (subject) strong isn't (male ending) You are NOT strong. and so on. In this respect. but in casual speech you hear people doing it all the time. Actaully. in Japanese and Spanish. really. no desu) have about the same meaning. the double negatives reinforce each other. JA NAI . Of course. Both nda and ndesu are contractions . it makes it more gentle. doing so makes the word less formal. "How can a sentence be soft?". What does no da mean? It's about the same thing as da or desu. Don't. there are cases where you can cut out a vowel to make a word easier to say. you use ja nai. kisama wa ore no kashira ja nai yo! You (subject) I ('s) leader isn't ! You are not my leader! Contractions Just like in English. Japanese has something in common with English.NO DA NDESU . NDESU NDA . omae wa baka no da. no da. kore wa neko no da This is a cat. Chichiri from Fushigi Yuugi uses no da at the end of all of his sentences (they did that to make Chichiri even more unique).The opposite of DESU When you want to say something ISN'T. you ask? Well. gay person hikari . "Why don't you kill him?" hayaku shitara ii jan. 
JAN

Pronounced JAHN, it is short for ja nai, which means 'is not'. It's used all the time in Japanese, and often it's used with the conditional, -eba.

aitsu o koroseba ii jan.
him (who) (if you were to kill) good isn't?
Wouldn't it be good if you killed him? Or, "Why don't you kill him?"

hayaku shitara ii jan.
quickly if you did it good isn't?
If you did it quickly, wouldn't it be good?

Example Sentences

ittai doko e iku tsumori?
in the world where (toward) to go intention?
Where in the world do you plan on going?

doushitemo hitsuyou na no desu.
no matter what necessary is.
It is necessary no matter what.

shinji wa nanika wo tabete ita nda.
Shinji (subject) something (what) was eating.
Shinji was eating something.

dareka wo nagete ita ndesu.
someone (answers who) was throwing.
(He) was throwing someone.

Very Popular Words

nanika - something
nanimo - nothing
dareka - someone
daremo - no one
ittai - (what, where) in the world
doushite mo - absolutely, no matter what
sokkuri - exactly like
nageru - to throw
tobu - to fly
tonde iku - to go flying
jishin - self-confidence
yappari - sure enough, after all
hen - strange
okama - gay person
hikari - light
kurayami - darkness

Japanese is Possible! Week 14

Part 14
- How to Keep it Interesting
- Negative Adjectives
- Example Sentences
- Very Popular Words

How to Keep it Interesting

When you're studying or practicing something, it's always a good idea to inject some variety. If you do the same exact thing every day, you'll no doubt become bored in a short time. When learning Japanese, there are many things you can do to increase your ability:

- Learn vocabulary
- Learn grammar
- Practice making sentences
- Listen to Japanese music
- Read Japanese manga

You can also watch subtitled Anime, and try to pull out words and phrases that you understand. If you're into Video Games, try looking up phrases you hear in the import games you own. Many fighting games have a "victory phrase" which is usually a commonly heard phrase. It all helps, and adds up very slowly (but surely) to mastery of Japanese.

When possible, try to spend time finding good Japanese MP3s (or CDs) on the net and listen to them. For instance, if you only listened to 1 song for a week, you'd quickly grow tired of it. If you had 3 songs, it would take considerably longer. The more songs you have, the longer each song "lasts". Eventually, you reach a point where they never really get old, because you have so many to listen to.

Remember, everything gets old. The trick is to make sure you don't try to do the same thing several days in a row. If you cycle through several different activities, however, none of them will get old. My advice: Some days, focus on learning vocabulary. On other days, worry about grammar. Sometimes when you're not in the mood to study, just watch an Anime. Japanese is something you have to slowly beat into your head. It will take a few years, but it CAN happen if you want it to.

Negative Adjectives
A couple lessons ago, we discussed the two different types of Adjectives. To refresh your memory, they were called "normal" and "Quasi". It was mentioned that the "normal" adjectives could have various endings tacked on to them. One of those powerful endings is -ku nai. As you might have guessed, all you have to do is remove the -i from the adjective and add -ku nai. Examples:

yoi - good
yoku nai - not good
akai - red
akaku nai - not red

See what the -ku nai ending does for an adjective? It gives the adjective exactly the opposite meaning it normally has. This is a very important ending to know. Think about how many times you need to say that something ISN'T necessary, ISN'T hot, etc. Refer to the section below, "Example Sentences", for use of this ending.

Example Sentences

kanojo wa amari tsuyoku nai.
she (subject) too much strong not.
She isn't too strong.

bakemono wa zenzen kowaku nai yo!
monster (subject) completely scary not !
The monster isn't scary at all!

kore wa zenzen chigau!
this (subject) completely different!
This is completely different!

boku wa zenzen tsuyoku nai kedo.
I (subject) completely strong not but.
But I'm not strong at all.

ore wa juubun ja nai wake?
I (subject) enough not (it is that)?
Is it that I'm not enough?

taisetsu na mono o nakushita wake?
precious thing (what) lost it is that?
Did you lose something precious?

Very Popular Words

Misc
juubun - enough
wake - it is that
sorekara - and then
amari - too much
zenzen - completely

Nouns
netsu - fever
bouken - adventure
kenshi - swordsman
madoushi - wizard

Adjectives
taisetsu (na) - precious
kuwashii - knowledgable
bukimi (na) - eerie
kowai - scary

Verbs
nakusu - to lose
kirameku - to sparkle, glisten

Copyright © 2001 Maktos.com. All Rights Reserved.

Japanese is Possible! Week 15

Part 15
- The Let's Form
- What are you Let's Do ing?
- Example Sentences
- Very Popular Words

The Let's Form

I would rather give a form an easy-to-remember name instead of a textbook one, so I'm calling this form the Let's form, because it would be used in sentences like:

Let's go.
Let's go back now.
Let's do it!
Let's eat.

The ending for this form is -ou. (Sometimes you will see it spelled -oo, but it's the same thing when you write it in the Japanese alphabet.) We will now demonstrate how to add the Let's ending to all three types of verbs.

Irregular Verbs

We'll start with the two irregular verbs:

kuru - koyou (let's come)
suru - shiyou (let's do)

These are irregular, and have no pattern. Luckily, there are only 2 of them in Japanese! You'll just have to memorize these two. Since they're both used a lot, it shouldn't take you long.

Ichidan Verbs

Then we'll move to the second most popular category, the Ichidan verbs. These are the verbs that end in -eru or -iru. However, remember that there are some Yodan verbs that try to masquerade as Ichidan verbs. Since they end in -eru or -iru, you can be fooled. A list of these verbs was given in Lesson 12.

Example: miru (to see)
First, remove the -ru from the dictionary form of the verb: mi
Now, add -you to finish the job. (That was quick!) miyou (let's see)

Yodan Verbs

The rest of the verbs fit in this category.

Example: asobu (to play)
Remove the -u from the dictionary form of the verb: asob
Now, add -ou and you're done! asobou (let's play)

How to use the "Let's" Form

Here are some example sentences that use the new ending:

kore o tabeyou!
this (what) let's eat!
Let's eat this!

mou yameyou!
already let's quit!
Let's stop it already!

uchi ni kaette, benkyou shiyou yo!
home (location) return (and), study let's do !
Let's go home and study!

mamono o korosou!
monster (who) let's kill!
Let's kill the monster!

umi ni oyogou!
ocean (location) let's swim!
Let's swim in the ocean!

There isn't a lot to it. However, you can do more interesting things with this form. For instance, if you combine it with the particle to, you can make some very neat sentences.

What are you Let's Do ing?
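Before looking at how the Let's form combines with other grammar, the conjugation recipes themselves can be collected into one sketch. This is illustrative Python, not part of the original lesson; it assumes romanized input and a known verb group, and hard-codes the two irregular verbs since they have no pattern.

```python
def lets_form(verb: str, group: str = "yodan") -> str:
    """'Let's' (-ou) form of a romanized verb, following the lesson's rules."""
    # The two irregular verbs have no pattern and are simply memorized.
    if verb == "kuru":
        return "koyou"   # let's come
    if verb == "suru":
        return "shiyou"  # let's do
    if group == "ichidan":
        # Ichidan: remove the -ru, add -you (miru -> mi -> miyou).
        return verb[:-2] + "you"
    # Yodan: remove the -u, add -ou (asobu -> asob -> asobou).
    return verb[:-1] + "ou"

print(lets_form("miru", "ichidan"))   # miyou
print(lets_form("asobu"))             # asobou
print(lets_form("taberu", "ichidan")) # tabeyou
```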
This is actually one of my favorite bits of grammar. Everyone's different, so you may find a DIFFERENT bit of grammar more fun. Shown below is my favorite kind of sentence in Japanese:

ranma wa akane to kekkon shiyou to shite ita.
ranma (subject) akane (with) let's marry (and) was doing.
Ranma had a mind to marry Akane.

Another popular sentence is:

nani o shiyou to shite iru?
what (answers 'what') let's do and you're doing?
What are you "let's do" ing?

Although we don't say "what are you let's eating", that IS an example of a normal Japanese sentence. When you're making your own sentences, remember that you can't just take an English sentence and replace the English words with Japanese words. ^_^

dare o korosou to shite iru?
who (answers 'who') let's kill and you're doing?
Who are you trying to kill?

I like to translate it, "who are you let's killing?" I get a kick out of translating it that way. I don't know why! It is a valid translation, although it isn't proper English.

ryu wa ken o nageyoo to shite ita.
ryu (subject) ken (who) let's throw and he was doing.
Ryu wanted to throw Ken.

ryo-ohki wa ninjin o tabeyou to shite iru.
ryo-ohki (subject) carrots (what) let's eat and is doing.
Ryo-ohki wants to eat the carrots.

Example Sentences

mou osoi kara, kaerou yo!
already late because, let's return !
It's late already, so let's return!

chotto nihongo ga dekiru yo!
little bit japanese (subject) can do !
I can speak a little Japanese.

Very Popular Words

Misc
ippai - full of
mou - already
takusan - many
massugu - straight (ahead)

Nouns
ninjin - carrot
soba - nearby area
kazoku - family
hana - nose
kuchi - mouth
naka - center, inside

Adjectives
nigai - bitter
migi (no) - right
hidari (no) - left

Verbs
nageru - to throw
kekkon suru - to marry

Japanese is Possible! Week 16

Part 16
- Sentences are Important
- Example Sentences
- Words to Learn

Sentences are Important

A lot of you have been waiting for this one! You've learned quite a few words, and quite a few articles of grammar. Now you're wondering how to use them in your own sentences. When learning Japanese (or any language), it is very important to read 100's of example sentences. This is so that you can get a feel for exactly how the words and bits of grammar are put together. You then get a feel for how to make your own sentences. Believe it or not, it IS possible for an English speaker to get a "feel" for whether or not a Japanese sentence sounds right. Just like English speakers have an "ear" for good English, you will also acquire an "ear" for Japanese. It will take a while, but it will happen for sure if you keep moving toward that goal. When you begin learning all the grammar, there are always a ton of questions in the back (or front) of your mind: "Would I use this word for this sentence?" "What if I want to say..." "...does that sound right?" You won't be able to answer those questions yourself until you become familiar with Japanese.
A lot of this type of knowledge can be acquired by reading tons of actual Japanese sentences. Seeing the various words and grammar pieces used together gives you an idea of how they are used. Many sentences have a lot in common, which you come to realize.

Examples

X wa Y desu. - X is Y.
X wa Y ni iru. - X is in the Y.
X no Y ni wa Z ga aru. - There is a Z in X's Y.

If you replace the X's and Y's with actual nouns (or even names), the sentences will become MUCH more interesting, and easy to remember:

ore wa baka desu. - I'm an idiot.
ore wa honou ni iru. - I'm in the fire.
boku no boushi ni wa mikan ga aru. - Inside my hat, there is an orange.

Do you notice a similarity between these sentences?

boku no tomodachi wa hashitte iru.
omae no niisan wa nagutte iru.
kanojo no kareshi wa unten shite iru.

They all have a 'NO' at the beginning, and they all have a verb in its -ing form. This is what I mean by sentences being similar. Even though the sentences have totally different topics, the structure of the sentence is identical. Some of you may remember "diagramming" sentences in English class. That's basically what I'm referring to here. Incidentally, the 3 sentences above translate to:

My friend is running.
Your big brother is hitting.
Her boyfriend is driving.

Example Sentences

kore wa ichiban daiji na mono da.
this (subject) number one important thing is.
This is the most important thing.

If 2 fighters met in a ring, and one of them pulled out a rubber duck and squeezed it... the other fighter might say:

nan no mane da?
what 's imitation is?
What lunacy is this? or What are you trying to do?

ittai nani o shiteiru ndesu ka?
in the world what (answers what) are doing ?
What in the world are you doing?

koko made ka?
here up to ?
Is this the end?

kanojo wa boku no airashii ryoko chan desu.
she (subject) I ('s) lovely ryoko (term of affection) is.
She's my lovely little Ryoko.

Darou is the let's form of DESU. It is used where something isn't set in stone. If someone is guessing something to be true, or something is probably true, use this.

mamono o korosu to katsu darou.
monster (who, what) kill and win probably.
If the monster is killed, you'd probably win.

boku no kachi da!
I ('s) win is!
It's my win!

juu o tsukaeba ii nda.
gun (what) if use good is.
You can use the gun if you want.

omae no saigo da!
you ('s) end is!
It's the end of you!

In the next sentence, remember that "shika" is pronounced "SHKA".

tako shika tabenai yo!
octopus besides don't eat !
You can only eat octopus!

sore wa atarimae da.
that (subject) natural is
That's only to be expected.

Words to Learn

juu - gun
atarimae - natural, to be expected
shika - besides
ittai - "What in the world"
airashii - lovely
mane - imitation, farce
katsu - to win
saigo - end, last

Japanese is Possible! Week 17

Part 17
- Example Sentences
- New Words
- Words by Category

Example Sentences

ano imomushi wa chou ni naritai.
that caterpillar (subject) butterfly (into) wants to become
That caterpillar wants to become a butterfly.

ano shiroku nai tori wa akai ni natta.
that white not bird (subject) red (into) became.
That non-white bird became red.

tori wa doukutsu ni haitta.
bird (subject) cave (into) entered.
The bird entered the cave.

sassato henshin shite!
hurry up and transform
Hurry up and transform!
ushi wa ika o tabeta.
cow (subject) squid (who or what) ate.
The cow ate the squid.

john wa ushi o tabeyou to shite ita.
john (subject) cow (who or what) let's eat and was doing.
John wanted to eat the cow.

mamono wa kaeru ni henshin shite yokatta.
monster (subject) frog (into) transformed thank goodness.
Thank goodness the monster transformed into a frog.

okaasan wa niwatori o tabetai yo!
mother (subject) chicken (who or what) wants to eat !
Mom wants to eat the (live) chicken!

doushite ikite iru niwatori o tabetai no ka?
how come living chicken (what) want to eat ?
How come you want to eat a living chicken?

aya chan wa uma ni notta.
aya chan (subject) horse (onto) got on.
Aya chan got on a horse.

taisetsu na nezumi o koroshita to wa zannen desu ne.
precious rat (who or what) killed and (subject) shame is right?
Isn't it a shame that I killed my precious rat?

tsuki ni wa usagi ga aru tte shite iru?
moon on (subject) rabbit (subject) exists (end quote) are knowing?
Did you know that there is a rabbit on the moon?

okaasan wa "hayaku dete!" tte
mother (subject) "quickly leave!" said.
Mother said "hurry up and leave!"

aitsu wa "ryoko ga suki" tte.
that guy (subject) "ryoko (subject) like" said.
That guy said "I like Ryoko".

mushi ni henshin shite ringo o tabeta.
insect (into) transformed (and) apple (what) ate.
He turned into an insect, and ate the apple.

totsuzen ringo o tabete kieta.
all the sudden apple (who or what) eat (and) disappeared.
All the sudden, he ate the apple and disappeared.

ushi o notte machi ni itta.
cow (what) got on, town (to) went.
He got on the cow, and went to the town.

New Words

ikiru - to live
yokatta - thank goodness
henshin suru - to transform
zannen - shame
tte - "...so he said", used to quote a person
totsuzen - all the sudden

Words by Category

I strongly recommend learning all of the words that you have seen in the "Popular Words" section of JIP. These words are used in just about every Anime and video game. If there are some that you haven't learned, I suggest that you visit the archives and add the "Popular Words" to your word list. However, below you will find "categories" of words that are not HALF as important. You can learn some of them if you want, but it's not critical that you do. I DON'T suggest learning all of these words instead of more useful words. This is merely provided as a convenience, not a suggestion as to what words you should learn next. It is sometimes convenient to have word lists by category. If you were going to the Zoo, you might want to print out the list below, so you can talk about what you find there. I suggest that you learn the words that you might need.

Animals

doubutsu - animal
tori - bird
niwatori - chicken
uma - horse
buta - pig
ushi - cow
hitsuji - sheep
inu - dog
neko - cat
nezumi - mouse, rat
hari nezumi - hedgehog (needle rat)
shishi - lion
tora - tiger
hebi - snake
kaeru - frog
ahiru - duck
kuma - bear
risu - squirrel
usagi - rabbit
tokage - lizard
zou - elephant
taka - hawk
washi - eagle
shika - deer
yagi - goat
dachou - ostrich
rakuda - camel
karasu - crow, raven
itachi - weasel
mujina - badger
kitsune - fox
saru - monkey
gachou - goose
hakuchou - swan
wani - alligator

Aquatic Animals

kame - turtle
iruka - dolphin
kingyou - goldfish
sakana - fish
same - shark
azarashi - seal
tako - octopus
kujira - whale
kurage - jellyfish
unagi - eel
hamaguri - clam
ika - squid
kani - crab
seiuchi - walrus

Insects

mushi - insect
hachi - bee
ka - mosquito
mimizu - earthworm
imomushi - caterpillar
chou - butterfly
kumo - spider
gokiburi - cockroach
hae - fly
ari - ant
inago - grasshopper
sasori - scorpion
namekuji - slug
hiru - leech

Note: A group of ANY animal is a mure (pronounced moo RAY). In English, we have a school of fish, a murder of crows, a herd of cattle, a flock of geese, etc. In Japanese, it's much easier!

Copyright © 2001 Maktos.com. All Rights Reserved.

Japanese is Possible! Week 18

Part 18
- Adjectives - Past Tense
- Example Sentences
- Words by Category

Adjectives - Past Tense

In Japanese, you simply add an ending to an adjective to make it apply to the past.

-katta : The Past Tense Ending

Several weeks ago, we learned how to take (normal) adjectives and add the -ku ending:

shiroi - shiroku
osoi - osoku

Remember, this only applies to normal adjectives (that end in -i) and not Quasi adjectives. For a review on adjectives, please take a look at Week 11. In English, on the other hand, we say something was white.

Example: neko wa shirokatta.
cat (subject) was white
The cat was white.

If you wanted to say "The cat was white", you could say: neko wa shiroi datta, and you could get your point across. However, there is a better way to do it. It also happens to be the form that Japanese speakers would use.

How to use the -katta ending:
1) Take the adjective: osoi
2) Remove the final -i: oso
3) Add -katta, and you're done! osokatta

There's another place you can use this ending! Any time you have -nai, you can change it to -nakatta to make it past tense.

shiroku nai - shiroku nakatta
isn't white - wasn't white

hayaku nai - hayaku nakatta
isn't fast - wasn't fast

dekinai - dekinakatta
can't do - couldn't do

Example Sentences

kimi wa osokatta zo!
you (subject) were slow !
You're late!

keiko no kao wa shirokatta.
Keiko ('s) face (subject) was white.
Keiko's face was white.

hayakatta yo!
(it) was fast !
That was quick!

neko wa kurokatta kara, nanimo dekinakatta.
cat (subject) was black because, nothing couldn't do.
Because it was a black cat, I couldn't do anything.

ano tokage no hara wa akakatta.
that lizard ('s) stomach (subject) was red.
That lizard had a red stomach.

Words by Category

As before, these category lists are optional vocabulary - learn the ones you might need.

Parts of the Body

Above the Neck
hana - nose
me - eye
medama - eyeball
kuchi - mouth
mimi - ear
kuchibiru - lips
kubi - neck
atama - head
ke - hair
hidai - forehead
hou - cheek
kao - face
ha - tooth

In the Middle
hiji - elbow
ude - arm
te - hand
tekubi - wrist (kubi = neck)
yubi - finger
oyayubi - thumb (oya = parent)
naizou - internal organs
shinzou - heart
mune - chest
hara - stomach
kinniku - muscle
hone - bone

Below the Waist
ashi - leg, foot
momo - thigh
sune - shin
hiza - knee
ashikubi - ankle
kotsuban - pelvis
ketsu - butt
ashi no yubi - toe
daichou - large intestine

Colors

kuroi - black
kasshoku - brown
akai - red
enshoku - orange
kiiro - yellow
midori - green
aoi - blue
murasaki - purple
haiiro - grey
shiroi - white
momoiro - pink
makka - deep red

Japanese is Possible! Week 19

Lyrics - Epitaph

This week we'll take a field trip to see Japanese in action. I will walk you through an actual Japanese song, giving you a rough translation of what each sentence means. I won't translate it into perfect English, since that wouldn't help you understand the Japanese. What song will we be using? We'll start with a song from the popular Anime series "Weiss Kreuz". The song is called 'Epitaph', and there are 3 different versions of the song. For each song, a different character from Weiss Kreuz sings the lyrics. They aren't simple remixes like you might think! They all have the exact same music. To download full episodes of Weiss Kreuz, visit Maktos.com. They are found in the Media section. Click on each picture to download that character's version of Epitaph (Full Song, MP3, CD-Quality): Aya version, Youji version, Omi version.

aozameta yoake no naka de
furisosogu kanashii yuushi
hito wa minna
akumu ni yotte
gensou mo yogen mo nai
shizuka ni nemutta
anata no kizuato
inori no kotoba de iyasou
sekihi ni kizanda
kotoba wo nando mo
kurikaesu uta ga
hoshi ni nari
asa hi ni kiete yuku
unmei no tobira wo mamori
kanashimi no tane o katte mo
konran ni jidai wa michite
dare hitori hokorenai yo
futatabi deatte waraiaeru hi wo
torimodoseru sa to tsubuyaku
keredomo naze darou
namida ga afurete
tomerarenai mama
hoshi ni nari
sorezore wakare yuku

Epitaph Words

Most of these words are used in Anime, so you should learn most of them. I'll admit that not ALL of them are popular, but it should be easy to learn them if you learn the song.

aozameru - to become pale
yoake - dawn
furisosogu - to downpour
kanashii - sad
yuushi - grieving
akumu - nightmare
you - to get drunk
gensou - illusion
yogen - prediction
shizuka - quiet
nemuru - to sleep
kizuato - scar
inori - prayer
iyasu - to heal
hoshi - star
kotoba - word
kizamu - to carve into
sekihi - epitaph
nando mo - many times, often
kurikaesu - to repeat
uta - song
naru - to become
asa - morning
hi - day, light
kieru - to disappear
yuku - to go
unmei - destiny
tobira - door
mamoru - to protect
kanashimi - sadness
tane - seed
karu - to mow
konran - confusion
jidai - era
michiru - to be filled
hokoru - to be proud of
deau - to meet
futatabi - again
warau - to smile, laugh
aeru - to meet
torimodosu - to take back
tsubuyaku - to mutter, murmur
keredomo - however
naze - why
namida - tears
afureru - to flow
tomeru - to stop
mama - as (as it is)
sorezore - each one
wakareru - to part

Lyrics Walkthrough

aozameta yoake no naka de
paled dawn 's inside (restricting ourselves to)

furisosogu kanashii yuushi
downpours sad grief

hito wa minna
people (subject) all

akumu ni yotte
intoxicated by nightmares

gensou mo yogen mo nai
illusions (also) predictions (also) not there

shizuka ni nemutta
quietly slept

anata no kizuato
you 's scar

inori no kotoba de iyasou
prayer (of) word (by means of) let's heal

sekihi ni kizanda
epitaph (into) inscribed

kotoba wo nando mo
words (who or what) often

kurikaesu uta ga
repeats song (subject)

hoshi ni nari
star (into) becoming

asa hi ni kiete yuku
morning light (in, into) disappear-goes

unmei no tobira wo mamori
destiny 's door (who, what) protecting

kanashimi no tane o katte mo
sadness's seed (what) even if you mow

konran ni jidai wa michite
confusion (with) this era (subject) is filled

dare hitori hokorenai yo
a single person is not proud of (it)!

futatabi deatte
again meets

waraiaeru hi wo
smile-meet day (answers who, what)

torimodoseru sa to tsubuyaku
force to take back (emphasis) (and) murmurs

keredomo naze darou
however why I wonder

namida ga afurete
tears (subject) flow

tomerarenai mama
won't be stopped as it is

hoshi ni nari
star (into) becoming

sorezore wakare yuku
each one part-goes

What you see above is the "first stage" of translation. You won't understand how I arrived at some of the lines in the translation, because there are some endings that you don't know yet. That is OK. Don't worry; they will all be covered in the coming weeks. You have to learn to understand Japanese at this level. Remember that most native Japanese speakers understand Japanese perfectly, but couldn't translate Japanese into another language. They don't use our grammar, so why translate every sentence into proper English? If you get the meaning of what they're saying, that's all that matters. If you only see the final translation, you won't learn as much about what the song really means.

Feel free to clean up the English lyrics however you wish. You can get as creative (and/or inaccurate) as you want. You'll see what it's like to be a translator. Translators have to decide what words to use. They have to maintain a balance between making it sound good in English, and staying faithful to the original meaning. Some translations really take liberties (and end up with a different meaning than the original). An example of this would be translations by Viz Video. (Especially the Ranma songs.) Understanding Japanese and being able to translate Japanese into English are two completely different skills. If you doubt it, here are some examples of how the following line could be translated:

aozameta yoake no naka de
paled dawn 's inside (restricting ourselves to)

Inside the pale dawn.
Within the pale dawn.
Within the faded dawn.
Inside, dawn has faded.
Only in the pale dawn.
Japanese is Possible! Week 20

Part 20
- Romanization
- The Particle E
- The Particle SHI
- -ku naru (to become something)
- Example Sentences
- Useful Words

Romanization

Romanization is turning Japanese characters into our alphabet. Our alphabet is called the Roman alphabet; in Japan, they call our alphabet Roumaji. Without romanization, you'd have to use the Japanese alphabet(s) to write everything in Japanese. For various reasons, there are at least 2 styles of Romanizing Japanese. Here are some examples of how the same couple of letters could be written in several different ways:

ji = zi
e = he
o = wo
shi = si
ou = oo
ou = ô (o with a line over it)

So Jishin (self confidence) could technically be written: zisin. Personally, I don't like that romanization, because it doesn't look like it sounds. Some books use that romanization exclusively, and I find it rather annoying. My reasoning is as follows: if you're romanizing something, it should look pretty normal to an average reader. I think a romanization should be pretty easy to read, so an average American can read it.

Often times, a line will be drawn over a vowel to show that there are actually 2 of that vowel. I prefer to actually write 2 of the vowel:

yasashî - yasashii

The above 2 words are the same. The first one looks like something out of a pronunciation dictionary, but the second one suggests how to pronounce it. Notice that we don't use the "bar over a letter" symbol at all in English! That's also why I prefer ou to oo. If you have a double 'o' in English, you want it to rhyme with crude. However, it's supposed to be pronounced oooh. I think spelling it ou helps you to pronounce it the right way.

The Particle E

This particle basically means "to" or "toward".

mise e itta.
store (to) went.
(He, she) went to the store.

sairaag e iku ka?
sairaag (toward) go ?
Are you going to Sairaag?

Notice that it is usually used with some form of iku (to go), or a similar verb.

The Particle SHI

This particle is used to connect several mini-sentences. It's not all that different from the English word AND. The best way to explain it is to demonstrate its use!

koko wa kowai shi,
here (subject) scary (and)
bakemono ga iru shi, ikou ze!
monsters (in particular) are here (and) let's go!
It's scary here, and there are monsters, so let's go!

kimi wa kawaii shi
you (subject) cute (and),
ai shite iru shi
[I'm] loving you (and)
kekkon sureba ii to omou yo!
if [I] were to marry [you] good (and) think !
You're cute, and I'm in love with you, so I'm thinking marriage would be a good idea!

-ku naru (to become something)

Remember the -ku form? You can use the -ku form of a normal adjective with the verb naru (to become), and make some very nice sentences. This form is used a lot. If you watch just about any Anime, you'll hear it at least once.

shiroi - white
shiroku natta - became white

osoi - slow
osoku natta - became slow

hayai - fast
hayaku natte iru - is becoming fast

nai - not there (opposite of aru, to exist)
naku natta - became nothing

Example Sentences

Akane wa Dracula o taoshita kedo, inochi ga naku natta n'da.
Akane (subject) Dracula (who or what) killed but, life (subject) not there became (is).
Akane defeated Dracula, but at the price of her own life.

ranma o taoshita ato de, genma to tatakatte ita.
ranma (who) defeated after, genma (with) was fighting.
After he defeated Ranma, he was fighting with Genma.

tabako mou naku natta nda!
cigarettes already nothing became !
We're already out of cigarettes!

kono rousoku o naku naru made, koko de matte kudasai!
this candle (what) nothing becomes until, here (location) wait please!
Please wait here until this candle completely burns up.

ame ga futte iru shi,
rain (subject) is falling (and)
okaasan ga okotte iru shi,
mother (subject) is being angry (and)
nanimo dekinai n dakara.
nothing can't do (*filler*) therefore.
It's raining, and mom's mad, so I can't do anything.
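The -ku machinery keeps coming back across the lessons: -ku nai (Week 14's negation), -katta and -nakatta (Week 18's past tense), and -ku naru (becoming). A short illustrative Python sketch, not from the original lessons, assuming romanized "normal" (-i) adjectives only:

```python
def ku_form(adj: str) -> str:
    """Drop the final -i and add -ku: shiroi -> shiroku."""
    return adj[:-1] + "ku"

def negative(adj: str) -> str:
    """Week 14's -ku nai: akai -> akaku nai (not red)."""
    return ku_form(adj) + " nai"

def past(adj: str) -> str:
    """Week 18's -katta: osoi -> osokatta (was slow)."""
    return adj[:-1] + "katta"

def negative_past(adj: str) -> str:
    """Week 18's -nakatta: shiroi -> shiroku nakatta (wasn't white)."""
    return negative(adj)[:-3] + "nakatta"

def became(adj: str) -> str:
    """This lesson's -ku naru, in the past: shiroi -> shiroku natta."""
    return ku_form(adj) + " natta"

print(negative("akai"))         # akaku nai
print(past("osoi"))             # osokatta
print(negative_past("shiroi"))  # shiroku nakatta
print(became("shiroi"))         # shiroku natta
```

None of this applies to Quasi (-na) adjectives, which the lessons handle separately.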
Useful Words

inochi - life
nakusu - to lose
rousoku - candle
tabako - cigarettes
ame - rain

Japanese is Possible! Week 21

Part 21
- Bakari
- Ato
- Example Sentences
- Useful Words

Bakari

Bakari, like a lot of other Japanese grammar, is best described by using it in a sentence. If you wanted to say "I just finished drinking", you would use bakari. You simply add bakari to the end of any simple sentence that says "what you did". The verb will be in the -ta form (past tense) in every case. Some sentences that would be eligible for bakari:

ima biiru o nonda.
Just now I drank beer.

gakkou kara kaetta
(you) returned from school

Examples:

ima biiru o nonda bakari desu.
now beer (what) drank (just did) is.
Now I drank beer.

gakkou kara kaetta bakari ja nai no?
school from returned (just did) didn't?
Didn't you just return from school?

ranma o koroshita bakari da to omou
ranma (who) killed (just did) and thinks
I think (he) just killed Ranma.

Ato

If you ever want to say after, you will probably need to use this word. Ato is very similar to the English word after. Ato, like bakari, is used with the -ta form. You can add it to just about any "past tense" sentence.

Example 1:

Akunin o koroshita
Scoundrel (who) killed.
Killed the scoundrel

Next, we'll enhance the sentence:

Akunin o koroshita ato, hayaku nigeta.
Scoundrel (who) killed after, quickly escaped.
After (he) killed the scoundrel, he quickly escaped.

Example 2:

hikaru o taoshita
Hikaru (who) defeated.
Hikaru was defeated.

hikaru o taoshita ato, waratte ita.
Hikaru (who) defeated after, was laughing
He was laughing after (he) defeated Hikaru.

Example Sentences

uwasa touri, aitsu wa hontou ni tsuyoi n da!
rumor (as), he (subject) really strong is!
He is as strong as the rumor said he was!

jitsu wa, mukashi na hanashi desu.
truth (subject), long ago story is.
Actually, it's an old story.

hottoite kudasai!
leave alone please!
Please leave me alone!

mane sureba dame da yo!
imitate if you do (no good) is !
You shouldn't imitate (something)!

demo, kanojo no koto mane shitai desu!
but, her 's (situation) imitate I want to do !
But I want to imitate her!

ano yama ni sunde iru yatsu wa osoroshii desu!
that mountain (in) living guy (subject) dreadful is!
The guy that lives in that mountain is dreadful!

mamono wa osoroshiku nai kara, zenzen kowaku nai yo!
monster (subject) not dreadful because, (completely) scary not !
That monster isn't so dreadful, so I'm not scared at all!

naze konna tokoro ni kakurete iru?
why this kind of place (in) are hiding?
Why are you hiding in a place like this?

boku no ie wa dondon ookiku natte iru!
I ('s) house (subject) steadily big is becoming!
My house is steadily becoming bigger!

Useful Words

Verbs
sumu - to live
kakureru - to conceal oneself
hottoku - to leave someone alone
dekakeru - to go out
mane suru - to imitate
kangaeru - to think

Adjectives
mukashi (na) - long ago
osoroshii - dreadful

Nouns
hanashi - talk, story
uwasa - rumor
omoide - memory
henji - answer

Misc
touri - as
dondon - steadily

Japanese is Possible! Week 22

Part 22
- -Chatta
- Kagiri
- Example Sentences
- Useful Words

-Chatta

This is probably one of the most popular endings you'll encounter. It means "irrevocably", "can't be undone", "went and did without thinking". It's a relatively cute ending. Most of the time a person using this ending would have to be pretty cute, usually a girl. Guys can use this ending, but it doesn't happen as often.
You probably wouldn't see Kenshin using it! You have to do something to the verb before you can just add this ending. you change the -SU to SHI first. .ochikonjatta okaasan no ringo datta no ni. mo tabechatta! mother ('s) apple was even though. you use -jatta instead of -chatta. already ate it (and it can't be un-ate) Even though it was mother's apple.iyashichatta osu (to push) . usually a girl. Most of the time a person using this ending would have to be pretty cute.kachatta With -SU verbs. shinu (to die) .Japanese is Possible! Week 1 Part 22 q q q q -Chatta Kagiri Example Sentences Useful Words -Chatta This is probably one of the most popular endings you'll encounter. hikaru wa mokona wo dakishimeyou to shite iru. watch) dakishimeru . he died! dekiru kagiri. we'll use it in a sentence so you can see exactly what function it performs! boku ga shitte iru kagiri.to try (also to see. Kaoru had a mind to marry Kenshin.to defeat kanashimu . he (subject) died ! As far as I know.no matter what .to do miru . kaoru (subject) kenshin (who) let's marry and was doing. aya wa imouto no aya chan wo mamoritai n da! aya (subject) little sister of aya chan (who) want to protect is! Aya wants to protect his little sister Aya! ken tachi wa taketori wo doushitemo koroshitai n desu! ken (lots of) (subject) taketori (who) no matter what want to kill is! Ken (and the others) want to kill Taketori no matter what else! Useful Words Verbs yaru .to embrace yattsukeru . do let's try! As far as we're able. let's try to do it! Example Sentences miru koto mo dekinai yo! to see (situation) also can't ! I can't see (anything) either! ranma wo taoshita koto aru no ka? ranma (who) defeated situation exists? Have you ever beaten Ranma before? kaoru wa kenshin wo kekkon shiyou to shite ita.to be sad Adjectives kimyou (na) . aitsu wa shinda yo! I (subject) am knowing as far as. As always.important Misc doushitemo . Hikaru wants to give Mokona a hug. or "as far as". yatte miyou! 
to be able (as far as).strange taisetsu (na) .Kagiri This word basically means "to the extent that". hikaru (subject) mokona (who) let's hug and is doing. the first sentence would look like this: ano bakemono ga boku wo taberu nante iya da! that monster (subject) me (who) eat (a thing such as) disliked is! I would like for that monster to eat me! ."? Of course I doubt you ever wanted to say a strange sentence like that! Besides being strange. Passive . What if there was a monster approaching in the distance.The snake was killed by Jack. you are making the sentence sound more "formal" or "intellectual".Jack killed the snake. Your brain converts it to an active sentence first! So basically. Passive .It's the second one. Grammar Review: Passive vs.The apple was hit by the man. the sentence is also in the passive form. I don't want to be eaten by the monster! You might be able to guess which one is passive -. Experts say that the brain has to go through an extra step to process a passive sentence. Active . You don't need to use them. you want to avoid those kind of sentences when writing. and you didn't want to be eaten by the aforementioned monster? You could express your desire to continue living in two different ways: 1.Japanese is Possible! Week 23 Part 23 q q q -Rareru : the Passive Form Example Sentences Useful Words -Rareru : the Passive Form Did you ever want to say something like "The monster was eaten by Ranma. As you can see by the examples. Active What does passive mean when you're talking about sentences? I'll give you a couple of very clear examples! Examples: Active . In Japanese. I don't want the monster to eat me! 2. really.The man hit the apple. matt sareru . Example Sentences kare wa erabareta senshi desu. it changes the meaning to "doesn't want to be eaten". He is the chosen warrior. you'll need to know how to conjugate a suru verb. I can promise you that any Anime you watch will use this form several times! How exactly do you use -Rareru? 
I will give an example for each of the different verb endings.erabareru (to be chosen) mitsukeru . An easy way to remember it: You can't have ubaareru! To make it easy to pronounce. they throw in a w. to be "matt"-ed.mirareru (to be seen) korosu (to kill) . The second sentence (in the passive tense) would look like this: ano bakemono ni taberaretaku nai yo! that monster (by) don't want to be eaten ! I don't want to be eaten by that monster! Remember that tabetaku nai means "doesn't want to eat".(to find) . This form is very important to learn (which of them aren't important?). doushite minna ni mushi sarete iru no ka? .I will tell you this-.ubawareru (to be stolen) miru (to look) . Pay special attention to the verb hakken suru.(to be discovered) kakomu .to have "matt" done to you.mitsukerareru (to be found) Basically.a Japanese person would never say it this way. you remove the -u from the dictionary form of the verb.kakomareru (to be surrounded) erabu . If you ever want to make up your own verbs. If you throw the rare in there.) ubau (to steal) . he (subject) chosen warrior is. which is one of the rarest endings for verbs. Just remember that you add a w in the case of -u verbs.korosareru hakken suru (to discover) . sonna ni matt saretaku nai! that kind of matt don't want done to me I don't want to be "matt"-ed that much! This "matt" character is no doubt famous for doing something or other! It would be like saying "He's pulling a Matt" in English. and add - areru. There are tons of suru verbs (where the second part of the verb is suru).hakken sareru .to "matt" someone. They would use sentence #2. What am I talking about? I'll give you an example: matt suru .(to surround) . (Except for -TSU.(to choose) . Fei no imouto ga mitsukerarenakatta.why everyone by ignore (is being done to me) ? Why am I being ignored by everyone? kuruma ga ubawareta node.to protect hakken suru . However. up till now 's island(s) (subject) all were cursed. however. Useful Words Nouns shima . 
absolutely you (answers 'who') kill! For the sake of my little brother you killed.high school kyoushi . drive can't do ! Because my car was stolen. kanarazu omae o korosu! was killed little brother for the sake of. Fei's little sister (subject) wasn't found. The cursed island was called Lodoss. I will destroy you! dareka ni miraretara. I can't drive! shikashi. ima made no shima wa zenbu norowareta n da. dou shiyou? hazukashii wa! someone (by) if was seen. korosareta otouto no tame ni. rear ue (no) .behind.from the beginning ima made . all the islands have been cursed.to steal ai suru . was cursed island Lodoss called is.up till now . Fei's little sister could not be found.monk sanpatsu .haircut koukou .front ura (no) .island bouzu . unten dekinai yo! car (subject) was stolen because.to surround ubau .teacher Verbs norou .above shita (no) . Up till now.to curse mamoru .on purpose saisho kara .to love kanchigai suru .to discover kakomu .below Misc mizukara . what let's do? embarassed (female ending)! What would I do if someone saw me? How embarassing! norowareta shima LODOSS to iu n da.to misunderstand Adjectives omote (no). we learned the -EBA conditional form. this form is called conditional because it says "if a certain condition occurs".Japanese is Possible! Week 24 Part 24 q q q -TTARA : another Conditional form Example Sentences Useful Words -Ttara : another Conditional form A while ago. add -eba and you're done! Now let's translate those 3 example sentences! (about 15 lines up) sore o tabereba. aitsu o sagaseba. ii no desu. You would use it in sentences like: If you eat that.you can call the form whatever you want..koros Next. shinu yo. How to use the -eba ending First. Here is a short review of the -EBA form: -eba is used to say "if" something were to happen. remove the last "u" from the verb. you'll find him! If you become human. mitsukeru yo. It could be called the "if" ending. but it can be used with some kinds of adjectives as well.taber nusumu . 
.nusum korosu . it's good. ningen ni nareba. -TTARA: another Conditional ending Like the -EBA form.. This ending is often used with verbs. Examples taberu . you will die! If you look for him. Don't ever let fancy grammar terms scare you off. you can only use "normal" adjectives that end in -i. To review different kinds of adjectives.Remove the last -i. Here are some examples of how -TTARA is used in a sentence.telling you how you can use it in your own sentences! It's not all that difficult. adding -ku nai reverses the meaning. shinitai = "(I) want to die" shinitaku nai = "(I) don't want to die" How to use -TTARA Here comes the fun part -. and add tabetaka tabetaku naka shiroka osoka Step 3 . kaereba ii ja nai ka? want to die if not.What kind of adjectives? The ones that end in -i that you can change to a -ku. "to die". which you can say about most Japanese grammar! Step 1 . see Lesson 11 and Lesson 14. why don't you go back? I'll explain that last sentence a bit. If you recall from our previous study of adjectives. shinitaku nai is simply the -ku nai form. (iu = to say) fast if you say good probably is.Add -TTARA tabetakattara tabetaku nakattara shirokattara osokattara And you're done! Here are the meanings of the phrases that you just built: tabetakattara = "if you want to eat" tabetaku nakattara = "if you don't want to eat" shirokattara = "if it is white" osokattara = "if it is slow" Here are a few exceptions. hayaku ittara ii n deshou. For adjectives. Examples of -TTARA capable verbs and adjectives: tabetai tabetaku nai shiroi osoi Step 2 . if you return good isn't it? If you don't want to die. shinitai is the "want to" form of shinu. shinitaku nakattara. (I think) You should say it quickly.Find a Verb or Adjective Any verb in the -tai or -taku nai form will do. that are used ALL the time! -KA . monster (who. without fail buy (male ending)! If I have the money. awful is (and) don't think? Don't you think it would be awful if Ash became a Jigglypuff? 
Useful Words Nouns heiwa . Persia (subject) mad probably! If Schuldich teases Omi. the worst is! If I became a woman. kanarazu kau zo! money (subject) if there is. omoshiroi da to omou yo! pikachu (subject) human into if became. I'll be happy! Schuldich wa omi wo karakattara. I also death point is. this week's Example Sentences will feature -TTARA. mou sukoshi benkyou shitara.X suru = X shitara (if X is done) X ga kuru = X ga kitara (if X comes) X ga aru = X ga attara (if X is there) X desu = X dattara (if it is X) X ni naru = X ni nattara (if it becomes X) Example Sentences Since you need to see -TTARA in action. happy is! If Pikachu wins. everyone (subject) be eaten ! If he shows up. then I will die too. minna ga taberareru n desu! he (subject) if comes. Persia ga okoru n deshou! Schuldich (subject) Omi (answers who or what) if teases. pikachu ga kattara. you 's end if it is. machi wa heiwa ni naru n deshou. the city would probably become peaceful. city (subject) peace (into) become probably.the worst. we'll all be eaten! omae no saigo dattara. mamono wo koroshitara. anta no kachi deshou. you 's win probably is If you studied a little more. the pits . If it is the end of you. ureshii desu! pikachu (subject) if wins. I'll definitely buy it! boku wa onna ni nattara. you'd probably win. okane ga attara. interesting is (and) think ! I think it would be interesting if Pikachu became a human. saitei desu! I (subject) woman (into) if became. ore mo shinu tokoro da. it would be the pits! pikachu wa ningen ni nattara. aitsu ga kitara. what) if you were to kill. more little if you were to study. If you were to kill the monster.peace saitei . taihen da to omowanai? Ash (subject) Jigglypuff into if became. Persia will probably be mad! Ash wa jigglypuff ni nattara. Verbs karakau . .to win Copyright © 2001 Maktos. All Rights Reserved.to tease kau .com. tabezu korosu . nanimo tabezu. That's why we're only touching on it right now. It doesn't change the meaning. 
mise e itchatta.Japanese is Possible! Week 25 Part 25 q q q q -ZU : without doing (intro) Example Sentences Useful Words Japanese Resource Links -ZU: without doing (intro) If you wanted to say "He didn't eat. Here are some examples: taberu . In the first sentence. he went off to the store. but it's nice to know it. (He) went out right away. You need one more grammar item to be able to say it properly. but it's nice to know why it's there. it's telling in what way he "went out right away". if you know enough verbs. Without eating anything. you'll at least know it's a verb form. .sorasazu This form isn't among the most popular. store (toward) up and went. You'll notice that sometimes the particle ni is added after the verb in -zu form. that grammar item is -ZU. Usually that isn't too difficult. it doesn't follow the normal pattern) suru (to do) becomes sezu (without doing) benkyou sezu ni. As you can tell by the heading. Very frequently. sugu dekaketa. nothing without eating. It roughly means "without doing". (Being irregular. You then remove the -zu and see if you can recognize the verb stem. you will conjugate suru (to do) into this form. but went right to the store. without studying. If you hear someone say a word that ends in -zu. right away went out. As suru is one of two irregular verbs." you wouldn't be able to right now.korosazu sorasu . study without doing. That's why the particle ni is used. you will need to be told what suru turns into. uchi ni kaetta. all the pokemon fainted. all pokemon (subject) went and fell over. satoshi kun (subject) pikachu (not someone else) much likes. katapii wo mitsukeru mae ni. Without killing Pikachu. Satoshi kun (Ash in the USA) likes Pikachu very much. horrible annoyance is.pain in the butt. home (to) returned. koratta (who) chose time since. he returned home.experience points saisho . poppo wo koroshita ato de.continuance toki . occasion shoubu . katapii (answers who) find before.to continue morau . 
mise ni ikeba ii to omou. it's been nothing but trouble. shiro ni kaetta.attack Verbs tsuzuku .beginning kougeki . and returned to his castle. avert (your eyes) . brock wo taosezu. pikachu (answers who) without killing. poppo (answers who) killed after. satoshi-kun wa pikachu ga dai suki. I think. Sephiroth robbed the Sando of his life.to receive sorasu .challenge keikenchi . castle (to) returned. burden tsuzuki . waratta. Received 51 experience points.Example Sentences pikachu wo korosezu. zen pokemon ga taorechatta. Ever since he chose the Koratta. Useful Words Nouns meiwaku . Unable to defeat Brock. (he) smiled.time. koratta wo eranda toki kara. I think you should go to the store before you find the Katapii. store (to) if you go is good. sefirosu (subject) sando 's life (what) took. shoubu no tsuzuki desu ka? challenge 's continuation is it? A continuation of our challenge? 51 keikenchi wo moratta 51 experience points (what) received. taihen na meiwaku desu.to turn away. brock (answers who) without defeating. sefirosu wa sando no inochi wo ubatte. He smiled after he killed the Poppo. because there's a LOT more great information on the way! Besides.A guy's personal page. that not all online Japanese courses help the Otaku to learn Japanese. 2 new Kanji are added! Copyright © 2001 Maktos.Several books of the New Testament. All Rights Reserved. to get more enjoyment and satisfaction out of their hobby. which is why these links are being provided! Nafai's Japanese Language Course .com. Japanese Tutor. and other software) Learning Japanese . Japanese Online's Links page . The only one I would mess with is Kansai-ben.I wouldn't worry about dialects YET. JIP's mission is to help Anime and Video Game fans to learn the language. Edict Japanese Dictionary .The title says it all! HUGE list of Japan links . Japanese Dialects . Japanese Online . dedicated to teaching Japanese language and culture. Any information helps though.Another JIP-like online Learn Japanese column. 
Kanji a Day - Each business day, 2 new Kanji are added!
Resources for Teachers/Students - Another site that might be useful to a Japanese student.
Bible in Japanese - Several books of the New Testament, in Japanese! Great practice!
JDrill Homepage - A Java-based program to help you learn Kanji
OSU Japanese Dept - A college's online Japanese tutorial!
Learn Japanese Free! - The title says it all!
HUGE list of Japan links - This site has hundreds of links.
Japanese Online - A very professional site, dedicated to teaching Japanese language and culture.
Japanese Online's Links page - All kinds of Anime and Japanese language resources.
Edict Japanese Dictionary - Edict is an electronic Japanese dictionary (used in NJWP, Japanese Tutor, and other software)
Learning Japanese - Another site helping people to learn Japanese.
Many more great links will be given out in the future ^_^ Remember too, that not all online Japanese courses help the Otaku to learn Japanese. Don't forget about Maktos' "Japanese is Possible!" column, because there's a LOT more great information on the way!
Copyright © 2001 Maktos.com. All Rights Reserved.
https://www.scribd.com/doc/140124356/Japanese-Is-Possible-Lesson-1-to-25-pdf
CC-MAIN-2016-36
refinedweb
27,945
80.07
Application Lifecycle Management (ALM) on the Power Platform Hey Guys, this is all very exciting, but where is the promised 'extends' keyword with ability to adapt an existing class or interface to another interface locally (within a certain code)? I summarized and reviewed what are the new features in C# 9. If you guys want to read: Love the pattern matching bits, the not pattern has been long overdue. Unfortunately, that's where it ends for me. Don't like the top level statements...makes it seem like I'm programming in a scripting language and makes me want to run as fast as I can in the opposite direction. Now, records... It will be another feature that is pretty useless and I'll explain why. When I'm writing an immutable type, my constructor validates the invarients based on business requirement. Due to this, records won't ever benefit me and, I'd imagine, people writing immutable classes won't be able to take advantage of it either. Sure the language feature that checks for nulls helps but that doesn't do anything for all the other data types. Will there also be data structs then? Great job, congrats to the C# team! Discriminated unions? How to try C# 9.0 after installing latest .NET 5 preview, what is minimum VS version required? Any configuration in project? Mmmm... data keyword will definitely conflict with variable name I am using in most MVC, API methods right now. Very minior indeed but what about prototype keyword instead? I agree to DevSec's comment that removing the boilerplate code on Program / Main does not seem to be of much benefit. In fact at the first glance, it even makes it more hard to recognize now. So, if possible, focus on other more important areas to improve upon rather than this Trivial thing. I agree as well. Removing the boilerplate doesn't make much sense. It's better off leaving it. I have 3MB internet. Even with lower quality settings these videos are buffering. 
I can watch Youtube videos and videos on other sites without buffering. Namespaces are taking up precious screen real estate. Why can't I set it to 0 tabs in Visual Studio? Why can't I have default namespaces as in VB.NET? Good naming style is to have telling names for methods and variables. C# is not a help in this effort! 'init', while lovely, seems targeted at a very narrow case; why not allow inline initialization for read-only properties without any new access modifier? So the () constructor will take type inference from the left side, and the var keyword will take type inference from the right. This makes var p = (1,2) make syntactic sense (although inference will fail). I'm not a fan of this sugar, and feel it, combined with the var keyword, is going to make code very difficult to read. One or the other, fine, but not both. Great job with these features! Would love to see some changes to the constructor field initialization ceremony in the future (i.e. when using dependency injection). The whole idea of the boilerplate removal on Program / Main is not meant for most cases; it's best used for scripts or simple tryouts. Since you are making a "target-typed new", why don't you also make a "target-typed generic", the thing Java calls the diamond operator: List<int> list = new List<>(); ?! Hello! This is Ayesha. I have a question: I want to get a certificate in .NET from here. What are the steps? Thanks,
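For anyone who wants to try the features being debated in the comments above, here is a small self-contained sketch (the type and property names are invented for illustration) showing C# 9 records, init-only properties, target-typed new, and the not pattern:

```csharp
using System;

// A positional record: value-based equality and with-expressions come for free.
public record Person(string Name, int Age);

public class Point
{
    // init-only: settable during object initialization, immutable afterwards.
    public int X { get; init; }
    public int Y { get; init; }
}

public static class Program
{
    public static void Main()
    {
        // Target-typed new: the type is inferred from the left-hand side.
        Point p = new() { X = 1, Y = 2 };

        Person alice = new("Alice", 30);
        Person older = alice with { Age = 31 };   // non-destructive mutation

        // The 'not' pattern and a relational pattern, both new in C# 9.
        object o = older;
        if (o is not null && o is Person { Age: > 30 })
        {
            Console.WriteLine($"{older.Name} is over 30");
        }
    }
}
```

A record can still declare an explicit constructor and validate its invariants there, which partially addresses the validation concern raised in the comments.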
https://channel9.msdn.com/Events/Build/2020/BOD108/?WT.mc_id=learntv-video-learn
CC-MAIN-2020-50
refinedweb
599
75.71
The highly anticipated Apple Watch opens up a rare opportunity to be one of the first developers on a new platform that is almost guaranteed to be a hit with consumers. Apple has recently released the WatchKit Beta, allowing developers to start making apps before the official hardware is released. One of the key features of WatchKit at the moment is its dependence on the host iOS app to do most of the data processing work. In this talk, Natasha of NatashaTheRobot.com shared advanced techniques for sharing data between your iOS app and your WatchKit Extension and how to best architect your project for this purpose. (The demo code from this talk can be found in full on GitHub.) Overview (0:00) When you’re developing for the Apple Watch, you want to keep mind the user on the go. They need information and they need it right now. How can you best architect your app to provide that information very easily without having the user do too much work? As you think about architecting your app, keep in mind how to make it most convenient for your user to live with the watch as more like a helping device, rather than something to keep staring at all the time. The WatchKit SDK is still very basic, and the hardest part is getting information from your iPhone app to the watch and sharing data in between the two. Most of the code logic happens on the phone, and things on your watch will be very static. The basic architecture you’ll be working with is having most of the code in the WatchKit extension and having to share data between your devices. NSUserDefaults (3:12) NSUserDefaults is a quick way to share information. It’s good for just storing various small pieces of data, like usernames and profile information, which you can quickly access and populate. If you want to use UserDefaults, use it for something very static that the user doesn’t have to think about changing. You’ll need to set up App Groups so that the devices can share information through a shared container. 
Make sure to do it both on the extension and your iOS target. You're basically creating a little App Group identifier common to the devices so they can both access it. If you need to delete it, you can go through a similar process on developer.apple.com. You use the defaults with the App Group name you've created, and you basically set the object for a specific key. On the iPhone, the user enters the text, clicks save, and the text is stored in the UserDefaults shared between the apps. On the watch app, you get the defaults from the App Group and then you can update your watch display.

// on the iPhone app
let defaults = NSUserDefaults(suiteName: "group.com.natashatherobot.userdefaults")
let key = "userInput"

override func viewDidLoad() {
    super.viewDidLoad()
    textLabel.text = defaults?.stringForKey(key) ?? "Type Something..."
}

@IBAction func onSaveTap(sender: AnyObject) {
    let sharedText = textField.text
    textLabel.text = sharedText
    defaults?.setObject(sharedText, forKey: key)
    defaults?.synchronize()
}

// WatchKit
class InterfaceController: WKInterfaceController {
    @IBOutlet weak var textLabel: WKInterfaceLabel!
    let defaults = NSUserDefaults(suiteName: "group.com.natashatherobot.userdefaults")
    var userInput: String? {
        defaults?.synchronize()
        return defaults?.stringForKey("userInput")
    }
}

NSFileCoordinator (8:09)
For larger pieces of data, NSFileCoordinator is a way to manage files in a shared space that both the app and the watch extension can access. This is good if you have a finite list, and also works for images. The following example is a very simple to-do list app, where you add a task in your phone and that data is transferred to the WatchKit extension and shows up on the watch. Your view controller needs to conform to the NSFilePresenter protocol, which is pretty simple. You have just two required methods, with other things that you'll be able to do if you want.
The FilePresenter protocol has a presented item URL, which is the part where you’re using your App Group identifier. With the URL, you’re creating a file in that directory where the file will be stored. You have an operation queue so you can have different multithreading if you wanted. There’s also a delegate method, presentedItemDidChange, in FilePresenters that tells you if an item changed, so you can update your app without having the user press an update button anywhere. However, there’s a bug with NSFileCoordinator and NSFilePresenter that prevents it from being used in extensions. For more information, see this blogpost on Natasha’s site. With the FileCoordinator, you write to a file in the to-do items array. You can archive and un-archive items to the array to write to and read from the file, and then you can populate a table with the items from the file. One thing to note is that if you have a delete function, where both your watch extension and iPhone app can modify the file, you might have issues with threading. 
// iPhone app private func saveTodoItem(todoItem :String) { // write item into the todo items array if let presentedItemURL = presentedItemURL { fileCoordinator.coordinateWritingItemAtURL(presentedItemURL, options: nil, error: nil) { [unowned self] (newURL) -> Void in self.todoItems.insert(todoItem, atIndex: 0) let dataToSave = NSKeyedArchiver.archivedDataWithRootObject(self.todoItems) let success = dataToSave.writeToURL(newURL, atomically: true) } } } // in the Watch // MARK: Populate Table From File Coordinator private func fetchTodoItems() { let fileCoordinator = NSFileCoordinator() if let presentedItemURL = presentedItemURL { fileCoordinator.coordinateReadingItemAtURL(presentedItemURL, options: nil, error: nil) { [unowned self] (newURL) -> Void in if let savedData = NSData(contentsOfURL: newURL) { self.todoItems = NSKeyedUnarchiver.unarchiveObjectWithData(savedData) as [String] self.populateTableWithTodoItems(self.todoItems) } } } } Frameworks (14:01) “If the code appears more than once, it probably belongs in a framework.” -WWDC 2014, Building Modern Frameworks Frameworks are great for business logic, Core Data, and reuseable UI components. As said at WWDC, you can put repeated code into a framework. In the FileCoordinator example, we had code that appeared twice when getting the file, reading it, and writing to it. That could be extracted into a framework. Creating a framework is really easy: you create a new target, select Cocoa Touch framework, and you name it. It will automatically link it for you in your iOS app, so don’t forget to link the framework in your WatchKit extension. One key thing, especially for Swift, is to think of a framework as an API. It needs to be public, since it’s an interface that both the iOS app and the WatchKit extension are using. So, if you’re creating a class, make sure to include the “public” keyword. Then in both the phone and watch apps, you can import the framework to have access to whatever is public in there. 
import WatchKit
import MySharedDataLayer

class InterfaceController: WKInterfaceController {
    @IBOutlet weak var favoriteThingsLabel: WKInterfaceLabel!

    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)
        let myFavoriteThings = MySharedData().myFavoriteThings
        let favoriteThingsString = ", ".join(myFavoriteThings)
        favoriteThingsLabel.setText("My favorite things are \(favoriteThingsString)")
    }
}

Keychain Sharing (19:12)
Keychain sharing is really for more secure data. When you need a lot more security than UserDefaults provides, you can start using keychain sharing to keep your information secure and shareable across different extensions. One big issue with WatchKit currently is that there's no authentication mechanism. Apple has sample code for KeychainItemWrapper, but the API is old enough that it doesn't have ARC support. I'd recommend using this version, which has been ARCified and has a cleaner interface. The key here is to instantiate a KeychainItemWrapper with an access group. This is a similar concept to App Groups, where you have that shared space between the devices. You'll need the keychain in both the iOS and the WatchKit extension to access the user's data. With the key-value store type of system, you set a username and password and create the same type of keychain item between the devices using the same identifier. In this example, just to show how it works, when the user puts in their username and password, the WatchKit extension will show that information.
// iPhone app @IBAction func onSaveTap(sender: AnyObject) { let username = usernameTextField.text let password = passwordTextField.text let keychainItem = KeychainItemWrapper(identifier: "SharingViaKeychain", accessGroup: "W6GNU64U6Q.com.natashatherobot.SharingViaKeychain") // WatchKit extension let keychainItem = KeychainItemWrapper(identifier: "SharingViaKeychain", accessGroup: "W6GNU64U6Q.com.natashatherobot.SharingViaKeychain") let passwordData = keychainItem.objectForKey(kSecValueData) as NSData let password = NSString(data: passwordData, encoding: NSUTF8StringEncoding) let username = keychainItem.objectForKey(kSecAttrAccount) as? String Q&A (24:52) Q: If we have to make a call through the WatchKit extension in the iPhone app, do we still need to use shared credentials within that? Natasha: It depends on how you want to structure it, but yes, you still need to have an app group to share keychain data with the actual iPhone app. If you want the extension to have access to the API toke stored in the keychain, it would have to be in the same app group as the iOS app for the keychain to work. Q: Is any of the communication between the iPhone app and the Watch app going to be encrypted? Natasha: I don’t know. Information is sent over Bluetooth, so I don’t know if Bluetooth LE is secure automatically. Ideally, you wouldn’t send the information to the watch. The extension itself wouldn’t be doing the API call. Q: If Apple later provides a better API within the watch itself, is there any expectation that what we develop in the extension will work? Natasha: I don’t know what Apple will do. There’s supposed to be a native SDK, but it should be very lightweight because the watch is tiny. Q: Is there a way in which, if I tap something on the watch, it can invoke something and open up an app on the iPhone? Natasha: Currently you cannot physically open the app, but you can send a quick burst of data. 
In your app delegate, there's a new call "handleWatchKitExtension" where you can quickly send a little dictionary or string. It's a very quick connection, and in the simulator it's misleading because it works every time, but when it's via Bluetooth, it might not always work. Q: What are some of the first challenges that you encountered when you first started using WatchKit and what advice do you have for those of us who haven't delved into it yet? Natasha: The first thing I had a problem with was getting the simulator to run, because in the video they very briefly mention having to go to the hardware section and choosing to display the watch. Other things like getting data were kind of the bigger challenge. I think the UI is pretty straightforward, but very limited. For example, you can't get the string from a label, you can only set the text on it. Q: Do you have any experience with local notifications on WatchKit? Natasha: I played with the remote and local notifications, but I haven't heard that anyone has ever got the local ones to work. Q: Do you know of any other shared containers that are evented? Natasha: You can actually have different queues in the File Coordinator. There's a lower level API for file sharing, which I haven't really used, but you can even send bytes or data. Q: Currently animation is basically flip card, so there's a limit of 20 megabytes per application for images. When do you think we'll get real animation support? Can you develop Swift-based applications that will run on iPad, iPhone, and the watch? Natasha: I can't answer for Apple, but according to their WWDC video, you can even draw on the watch. I guess they'll have to deliver if they showed it in their video. You can also make a universal app with the WatchKit extension, and all my code is in Swift so you can do it in Swift or Objective-C. Q: What do we know about WatchKit and CoreData (or Realm)?
Natasha: It's just an extension, so I would put it in a framework and then you can just use it like you normally would. I don't know if there's anything special that you'll need to do.

Tim: There are process and multiprocess issues that can be a little tricky, but for the most part you can make it work. Realm is currently working on that.

About the content

This content has been published here with the express permission of the author.
https://academy.realm.io/posts/architecting-app-apple-watch-natashatherobot/
03 September 2010 18:14 [Source: ICIS news]

LONDON (ICIS)--The EU sanctions do not specifically apply to methanol, making the purchase of Iranian material completely legal for European companies. Nevertheless, fewer and fewer banks are willing to offer credit lines for the purchase of Iranian products, making it difficult for companies to proceed with payments.

"It's not really the consumers who don't want to buy it, it's the banks that are cautious. They're starting to take the American view more."

For the time being, however, the effect on the European market has been marginal at best, with most sources agreeing the situation has only partially contributed to rising European spot prices, if at all. Yet many sources said that it would soon become all but impossible to buy Iranian methanol in Europe.

If that were to occur, the market would be faced with a significant shortfall in supply, with Iranian nameplate capacity totalling more than 5m tonnes/year. Furthermore, this would come at a time when at least two plants are shut down for maintenance, namely the 1.15m tonne/year Atlantic Methanol Production Co (AMPCO) plant.

"This Iranian situation could become a big problem in [the fourth quarter]," said a trader, whose warnings were echoed by several others.

Some players were more optimistic, however, and said the situation would be rectified before prices rose to prohibitively high levels. "They will somehow continue to sell into Europe."

"Sometimes you see Indian material, which is just ridiculous – there is no methanol production in India."

Others said the sanctions could be eased if global industries begin to suffer, noting that more than just the methanol market is affected.

($1 = €0.78)

For more on methanol
http://www.icis.com/Articles/2010/09/03/9390772/iran-sanctions-to-hamper-europe-methanol-supply---industry.html
Applicative functors are more general than monads and hence have a broader area of application in computation. Haskell does not define monads as a derivative of applicative functors, possibly for historical reasons. Scalaz does, and does it correctly and conveniently for you, reducing the number of abstractions that you need to have.

In Haskell we have sequence and sequenceA implemented for monads and applicatives separately ..

-- in Control.Monad
sequence :: Monad m => [m a] -> m [a]

-- in Data.Traversable
class (Functor t, Foldable t) => Traversable t where
  sequenceA :: Applicative f => t (f a) -> f (t a)

In scalaz we have the following ..

// defined as pimped types on type constructors
def sequence[N[_], B](implicit a: A <:< N[B], t: Traverse[M], n: Applicative[N]): N[M[B]] =
  traverse((z: A) => (z: N[B]))

sequence is defined on applicatives, and works for monadic structures as well ..

Monads are more restrictive than applicatives. But there are quite a few use cases where you need to have a monad and NOT an applicative. One such use case is when you would like to change the sequence of an effectful computation depending on a previous outcome.

A use case for monads

Have a look at the function ifM (monadic if) defined in scalaz ..

// executes the true branch of ifM
scala> true.some ifM(none[Int], 4.some)
res8: Option[Int] = None

Here's how ifM is defined ..

def ifM[B](t: => M[B], f: => M[B])(implicit a: Monad[M], b: A <:< Boolean): M[B] =
  >>= ((x: A) => if (x) t else f)

It uses the monadic bind, which can influence a subsequent computation depending on the outcome of a previous computation.

(>>=) :: m a -> (a -> m b) -> m b

Now consider the same computation implemented using an applicative ..

scala> val ifA = (b: Boolean) => (t: Int) => (f: Int) => (if (b) t else f)
ifA: (Boolean) => (Int) => (Int) => Int = <function1>

// good!
scala> none <*> (some(12) <*> (some(false) map ifA))
res41: Option[Int] = None

// bad!
scala> none <*> (some(12) <*> (some(true) map ifA))
res42: Option[Int] = None

<*> just sequences the effects through all the computation structures - hence we get the last effect as the return value, which is the one for the else branch. Have a look at the last snippet, where even though the condition is true, we get the else branch returned.

(<*>) :: f (a -> b) -> f a -> f b

So <*> cannot change the structure of the computation, which remains fixed - it just sequences the effects through the chain. Monads are the correct way to model your effectful computation when you would like to control the structure of the computation depending on the context.

A use case for applicatives

scalaz implements Validation as an applicative functor. This is because here we need to accumulate all the effects that the validation can produce. As I noted in my last post on composing abstractions in scalaz, the following snippet will accumulate all validation errors before bailing out .. quite unlike a monadic API ..

def makeTrade(account: Account, instrument: Instrument, refNo: String,
              market: Market, unitPrice: BigDecimal, quantity: BigDecimal) =
  (validUnitPrice(unitPrice).liftFailNel |@| validQuantity(quantity).liftFailNel) { (u, q) =>
    Trade(account, instrument, refNo, market, u, q)
  }

Let's look into it in a bit more detail ..

sealed trait Validation[+E, +A] {
  //..
}

final case class Success[E, A](a: A) extends Validation[E, A]
final case class Failure[E, A](e: E) extends Validation[E, A]

Note that in case of success only the actual computation value gets propagated, as in the following ..

trait Validations {
  //..
  def success[E, A](a: A): Validation[E, A] = Success(a)
  def failure[E, A](e: E): Validation[E, A] = Failure(e)
  //..
}

With a monadic bind, the computation would be aborted on the first error, since we don't have a computation value to pass to the second argument of >>=. Applicatives allow us to sequence through the computation chain nicely and accumulate all the effects.
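The short-circuit-versus-accumulate distinction is not specific to Scala or scalaz; here is a minimal Python sketch of the same idea (all names here are illustrative, not part of any library), contrasting a monadic bind that aborts on the first failure with an applicative-style combine that accumulates errors:

```python
# Results are ('ok', value) or ('err', [messages]) pairs.

def bind(result, f):
    """Monadic bind: short-circuits on the first failure."""
    tag, payload = result
    return f(payload) if tag == "ok" else result

def apply2(f, ra, rb):
    """Applicative-style combine: run both validations, accumulate errors."""
    errors = (ra[1] if ra[0] == "err" else []) + (rb[1] if rb[0] == "err" else [])
    if errors:
        return ("err", errors)
    return ("ok", f(ra[1], rb[1]))

def valid_price(p):
    return ("ok", p) if p > 0 else ("err", ["price must be positive"])

def valid_qty(q):
    return ("ok", q) if q > 0 else ("err", ["quantity must be positive"])

# Monadic: only the first failure is reported.
monadic = bind(valid_price(-1),
               lambda p: bind(valid_qty(-2), lambda q: ("ok", (p, q))))

# Applicative: both failures are reported.
applicative = apply2(lambda p, q: (p, q), valid_price(-1), valid_qty(-2))

# When both inputs are valid, both styles succeed.
both_ok = apply2(lambda p, q: (p, q), valid_price(10), valid_qty(2))
```

The monadic version never even runs the quantity check once the price check fails; the applicative version runs every validation and collects every message, which is exactly the behaviour Validation's applicative instance gives you.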
One other interesting abstraction for manipulating computation structures is the Arrow, which makes an interesting comparison with applicative functors. But that's for some other day, some other post ..
http://debasishg.blogspot.com/2010/12/monads-applicative-functors-and.html
The QAbstractSocket class provides the base functionality common to all socket types. More...

#include <QAbstractSocket>

Inherits QIODevice. Inherited by QTcpSocket and QUdpSocket.

Note: All the functions in this class are reentrant.

The socket starts out in QAbstractSocket::UnconnectedState. After calling connectToHost(), the socket first enters QAbstractSocket::HostLookupState. If the host is found, QAbstractSocket enters QAbstractSocket::ConnectingState and emits the hostFound() signal. When the connection has been established, it enters QAbstractSocket::ConnectedState and emits connected(). If an error occurs at any stage, error() is emitted. Whenever the state changes, stateChanged() is emitted. For convenience, isValid() returns true if the socket is ready for reading and writing.

Read or write data by calling read() or write(), or use the convenience functions readLine() and readAll(). QAbstractSocket also inherits getChar(), putChar(), and ungetChar() from QIODevice, which work on single bytes. For every chunk of data that has been written to the socket, the bytesWritten() signal is emitted.

To close the socket, call close(). QAbstractSocket enters QAbstractSocket::ClosingState, then emits closing(). After all pending data has been written to the socket, QAbstractSocket actually closes the socket, enters QAbstractSocket::ClosedState, and emits closed(). If you want to abort a connection immediately, discarding all pending data, call abort() instead.

See also QHttp and QTcpServer.

This enum describes the network layer protocol values used in Qt. See also QSocketLayer::protocol().

This enum describes the socket errors that can occur. See also QAbstractSocket::error().

This enum describes the different states in which a socket can be. See also QAbstractSocket::state().

This enum describes the transport layer protocol. See also QAbstractSocket::socketType().

Creates a new abstract socket of type socketType. The parent argument is passed to QObject's constructor. See also socketType(), QTcpSocket, and QUdpSocket.
Destroys the socket.

Aborts the current connection and resets the socket. Unlike close(), this function immediately closes the socket, clearing any pending data in the write buffer. See also close().

Returns the number of incoming bytes that are waiting to be read. Reimplemented from QIODevice. See also bytesToWrite() and read().

Returns the number of bytes that are waiting to be written. The bytes are written when control goes back to the event loop or when flush() is called. Reimplemented from QIODevice. See also bytesAvailable() and flush().

Returns true if a line of data can be read from the socket; otherwise returns false. Reimplemented from QIODevice. See also readLine().

Attempts to close the socket. If there is pending data waiting to be written, QAbstractSocket will enter ClosingState and wait until all data has been written. Eventually, it will enter UnconnectedState and emit the disconnected() signal. Reimplemented from QIODevice. See also abort().

Attempts to make a connection to the given host on the given port. QAbstractSocket will do a lookup only if required. port is in native byte order. See also state(), peerName(), peerAddress(), peerPort(), and waitForConnected().

This is an overloaded member function, provided for convenience. It behaves essentially like the above function. Attempts to make a connection to address on port port.

This signal is emitted after connectToHost() has been called and a connection has been successfully established. See also connectToHost() and connectionClosed().

Disconnects the socket's connection with the host. See also connectToHost().

This signal is emitted when the socket has been disconnected. See also connectToHost() and close().

Returns the type of error that last occurred. See also state() and errorString().

This is an overloaded member function, provided for convenience. It behaves essentially like the above function.

This signal is emitted after an error occurred. The socketError parameter describes the type of error that occurred.
See also error() and errorString().

This signal is emitted after connectToHost() has been called and the host lookup has succeeded. See also connected().

Returns true if the socket is valid and ready for use; otherwise returns false. See also state().

Returns the host address of the local socket if available; otherwise returns QHostAddress::Null. This is normally the main IP address of the host, but can be QHostAddress::LocalHost (127.0.0.1) for connections to the local host. See also localPort() and peerAddress().

Returns the host port number (in native byte order) of the local socket if available; otherwise returns 0. See also localAddress() and peerPort().

Returns the address of the connected peer if the socket is in ConnectedState; otherwise returns QHostAddress::Null. See also peerName(), peerPort(), and localAddress().

Returns the name of the peer as specified by connectToHost(), or an empty QString if connectToHost() has not been called. See also peerAddress() and peerPort().

Returns the port of the connected peer if the socket is in ConnectedState; otherwise returns 0. See also peerAddress() and localPort().

See also readBufferSize() and read().

See also socketDescriptor().

Sets the type of error that last occurred to socketError. See also setSocketState() and setErrorString().

Sets the state of the socket to state. See also state().

Returns the native socket descriptor of QAbstractSocket if this is available; otherwise returns -1. The socket descriptor is not available when QAbstractSocket is in UnconnectedState. See also setSocketDescriptor().

Returns the socket type (TCP, UDP, or other). See also QTcpSocket and QUdpSocket.

Returns the state of the socket. See also error().

This signal is emitted whenever QAbstractSocket's state changes. The socketState parameter is the new state. See also state().
See also connectToHost() and connected().

if (socket->waitForDisconnected(1000))
    qDebug("Disconnected!");

If msecs is -1, this function will not time out. See also disconnect() and close().
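The lifecycle this class documents (connect, read/write, close) is the ordinary stream-socket lifecycle. As a rough analogue only, here is a minimal Python sketch (standard-library sockets, not Qt) of the same sequence against a loopback echo server; the helper names are invented for the example:

```python
import socket
import threading

def echo_once(server):
    """Accept one connection, echo one message back, then close."""
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(server.getsockname())  # ~ connectToHost()
client.sendall(b"hello")                                 # ~ write()
reply = client.recv(1024)                                # ~ read()
client.close()                                           # ~ close()
t.join()
server.close()
```

The mapping in the comments is loose: QAbstractSocket adds the asynchronous state machine (HostLookupState, ConnectingState, ...) and signals on top of this blocking sequence.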
http://doc.trolltech.com/4.0/qabstractsocket.html
TinyDisk, A File System on Someone Else's Web App

Posted by ScuttleMonkey from the all-your-byte-are-belong-to-us dept.

Psy writes "I attended Phreaknic this weekend, where Acidus released TinyDisk, a shared file system that runs on top of TinyURL or his own implementation, NanoURL. TinyDisk compresses a file, encrypts it, and dices it into clusters. Each cluster is submitted to TinyURL as if it were a URL. These clusters can be read back out of the database, making TinyDisk a global file system anyone can use. There are safeguards in the default config to prevent people from dumping gigs of MP3s into TinyURL. While file-systems-on-web-applications are nothing new (GMail file system, anyone?), this hack shows how easy it is to accidentally design a web application insecurely despite the default PHP protections. See his presentation for more info."

Nifty hack (Score:5, Interesting)

Nifty little program all the same, and a nice hack. Having it running on his NanoURL implementation locally could allow for a cool little service, though there are better ways to offer web-based file systems in the real world. He does state in the FAQ that it's not intended to pollute TinyURL in any way. Perhaps it will give TinyURL a nudge to tighten up their security, though.

It's simple. (Score:3, Insightful)

Re:It's simple. (Score:5, Insightful)

You wouldn't even need to do this with every URL added to the system. Spot-checking every 1 in 10 URLs or so will go a long way to preventing any sort of abuse. =Smidge=

Re:It's simple. (Score:3, Informative)

Re:It's simple. (Score:3, Interesting)

Re:It's simple. (Score:4, Interesting)

(1) Create one tinyurl which contains encrypted data. (2) Create another tinyurl which contains the decryption key. Never access them from the same IP nor around the same time, and nobody will ever know what you're hiding.

Re:It's simple. (Score:2)

Re:It's simple.
(Score:3, Funny)

Oh, great; now we're all gonna have to remember "" instead of "".

Re:It's simple. (Score:2)

Re:It's simple. (Score:2)

Actually, that would only make things worse, as the filesystem code would simply have to resubmit 10% of queries, doing nothing but drive the load up further. Assuming, of course, that the trick suggested by the AC where all the data is stored in the query string is not used.

Re:It's simple. (Score:2)

Re:It's simple. (Score:2)

Re:It's simple. (Score:2)

In other terms: if you spot-check 2500 URLs per month, you will catch at least one invalid URL each month, 99.98% of the time.

This isn't true. The base rate of 'false URLs' matters. E.g. if all 11 million URLs are valid, checking 2500 URLs finds you an invalid URL 0% of the time. If all are invalid, checking 1 finds you an invalid URL 100% of the time. If half are invalid, checking 1 finds you an invalid URL 50% of the time, checking 2 finds you at least one invalid URL 75% of the time, etc.

Re:It's simple. (Score:4, Informative)

Further, even the best visual captchas are easily overridden if the attacker is motivated enough; a common means to perform this action is to get other humans to voluntarily solve the captchas as they are encountered by offering, e.g., free porn. Basically, captchas aren't really the solution to preventing bots (there are no good solutions for this), they only deter casual botters.

Re:Nifty hack, or antisocial behavior? (Score:5, Insightful)

At its core, TinyURL is just a write-once database. You add data and get back a key/pointer to said data. As with typical databases, the size of the pointer is logarithmic in the size of the input (* number of keys stored, not bytes; however, the number of bytes/key is bounded under some constant, so it's effectively the number of bytes). This gives us a logarithmic compression scheme, where our compression ratio (N-logN)/N approaches 100% as N gets large. This kind of "infinite compression" is what makes the method attractive: you put in, say, a kilobyte of data and get out a (currently) 5-byte key. All you have to do is keep an index of the keys. TinyDisk doesn't seem to do this, but you could then turn around and store the index as a key. Take 1000/5 = 200 keys and get back one key. Lather. Rinse. Repeat. In the end, you have a single key that points to the backup of your mp3 collection, all in one TinyURL! Not too shabby.

After all, it's free storage, right? Wrong. Someone ends up paying for the infinite compression. In this case, it's TinyURL. If this kid had stopped to think for a few minutes before publishing his hack, he would have realized that he's actually doing a malicious, antisocial thing. I suspect there will be a dozen copycats in the wild before the end of the day. Farewell TinyURL, we knew ye well.

Re:Nifty hack, or antisocial behavior? (Score:2)

I don't really see this as abuse as much as the fundamental flaw in providing free services. If the cost to support the service is higher than the cost to the recipient, it's just a matter of time before someone finds a way to cut their co

Re:Nifty hack, or antisocial behavior? (Score:3, Insightful)

Sure, but I think it's a pretty dumb idea because of the large overhead (in time and data) of actually retrieving that data: HTTP request and response, encoding, etc. And the fact that tinyurl will (rightly) kick your ass off the service once he's on to you.

Re:Nifty hack, or antisocial behavior? (Score:2)

"After all, it's free storage, right? Wrong." He wasn't suggesting it was a good idea to do it - he was giving a sample mindset of someone who would use TinyDisk to do stupid/malicious things.

Re:Nifty hack (Score:2)

damn that's a lame filter. -nB

Problems ahead?
;) (Score:2)

From what I understand (Score:4, Informative)

1. Open a meta file.
2. Retrieve and concatenate all the clusters from TinyURL in the order specified in the meta file.
3. Base64-decode the file.
4. Decrypt the file with the algorithm and key in the meta file.
5. Decompress the file with the algorithm in the meta file.
6. Verify that the file size given in the meta file is correct for the decoded/decrypted/decompressed file.
7. Verify that the checksum computed with the algorithm and value in the meta file matches for the decoded/decrypted/decompressed file.
8. Set the filename of the decoded/decrypted/decompressed file to the filename specified in the meta file.

Hope that helps somebody.

Solution for which problem? (Score:4, Interesting)

I adore the ingenuity (correct spelling?) of the hack but... I can't really find a problem this hack is a solution for. As a way to distribute files, it's probably too slow. The pros I see here: the file is not stored as one single file but as a distributed file (a set of Base64-encoded clusters), making removal of the file hard. On the other hand, if one single segment drops out, the file will be destroyed (except if some redundancy exists, of which I did not find evidence). If you want to send attachments in an e-mail, this is a very complicated way to do it. Every receiver must have the decoder program to re-assemble the file. Moreover, if TinyURL builds in a check to see whether the submitted URL exists (not just some 404 page), the whole concept would probably break. Anyways, very clever hack!

Re:Solution for which problem? (Score:3, Interesting)

Also, it could be used for distributing small text files containing reports from warzones and other heavily censored countries. EFF should have a blast on this one.

Re:Solution for which problem? (Score:2)

Yes yes, and thermonuclear weapons were only invented to deter the Soviets. Sorry, but moral questions DO come into this.

Insecure? Really? (Score:5, Insightful)

Insecure? Rancid tabloid hyperbole, more like.

Re:Insecure? Really? (Score:2)

Well, your reaction is not very restrained either. Hang on. Is it hyperbole day on Slashdot and no one told me? obBart: "This is the greatest injustice in the history of mankind!"

Re:Insecure? Really? (Score:5, Funny)

Hyperbole day? That's the most ridiculous thing I've ever heard in my entire life!

Re:Insecure? Really? (Score:2)

Of course, on Slashdot, every day is hyperbole day!

Re:Insecure? Really? (Score:2)

Re:Insecure? Really? (Score:4, Insightful)

It would also prevent tinyurl being useful for private URLs (e.g. those behind firewalls which only allow connections from known IP addresses). You can also currently use tinyurl with protocols that the tinyurl server knows nothing about, e.g. ed2k: or magnet:. The better solution is just to disallow any single IP from creating more than, say, 10 URLs in an hour. This would make such a filesystem implementation useless without overly restricting legitimate users.

Re:Insecure? Really? (Score:2)

What about set-ups where a large number of users (say > 1000) are masqueraded behind one IP address?

Re:Insecure? Really? (Score:2)

Valid URLs can be used (Score:2)

d+seven+years+ago [google.com] rought+forth+on+this+continent [google.com]

Oh my god! (Score:2)

NanoURL review (Score:5, Funny)

Re:NanoURL review (Score:2)

I'm sure this won't be abused (Score:3, Interesting)

Pretty soon you'll see someone trying to use this as their backup system for 30GB of pr0n. Will large files kill TinyURL? What kind of latency is this going to introduce? If nothing else, this might constitute a DoS attack on TinyURL.com (which would be illegal). It's still interesting work.

Re:I'm sure this won't be abused (Score:3, Insightful)

Even more interesting would be something which encrypts your files and spreads them around in various free storage media (slashdot trolls?) in such a way that they cannot be easily correlated with each other.
Cramming all this stuff into tinyurl is bound to be noticed, but if it is a couple of dozen bytes here and there, it might be possible to store lots of stuff with a reasonable degree of safety.

Re:I'm sure this won't be abused (Score:2)

Default PHP protections? (Score:4, Funny)

Re:Default PHP protections? (Score:3, Informative)

Really? So in what way is 'echo "hello world";' insecure? The only PHP scripts that are insecure are the ones where programmers made stupid decisions or weren't thinking the design through, just like in any other language. 99% of these PHP problems are using external data without checking it. 99% of those cases are where the programm

Re:Default PHP protections? (Score:2)

And the main reason for that is the brain-dead magic quotes feature - i.e. not only is the grandparent wrong about no default security, but it's actually the default security that causes the problems he's complaining about.

not just like any other language out there (Score:2)

This would have been a straightforward feature to copy/adapt into PHP if anyone were interested in making it a decent server-side web language. Don't say "just like in any other language" when you're unaware of lang

Re:Default PHP protections? (Score:2)

Re:Default PHP protections? (Score:2)

I haven't seen a whole lot of PHP 5 either, but from what I have seen they mostly concentrated on fixing a lot of the OO problems. Which is good, but I was hoping they would address some of the more serious (IMHO) problems with the underlying language (adopting a standard naming scheme for functions, maybe creating some namespaces,

Re:Default PHP protections? (Score:2)

Re:Default PHP protections? (Score:2)

Perl allows you to use 'taint

Is it (Score:2, Funny)

No. (Score:2)

TinyDisk's inspiration (Score:3, Interesting)

You could do this with blogs or any CMS (Score:5, Insightful)

But overall 'WHY?' must be the question. Al Quaeda or The Real IRA? They still have their old working communication channels. Also, who needs space like this? Space of this amount could be made redundant and available by using GoogleMail, Yahoo and Hotmail in synchrony. If none of those are available, presumably you'd have it on a USB key as well.

Re:You could do this with blogs or any CMS (Score:3, Informative)

I've used something similar myself, and there are a few obscure reasons for hiding data in somebody else's web application. For instance, Opera's UserJS (the inspiration for Greasemonkey) doesn't have a restriction-free XMLHttpRequest object, so the only information you can retrieve with it is from the original host. Stuffing data onto that host is sometimes the only way of making some features work.

Re:You could do this with blogs or any CMS (Score:2)

Re:You could do this with blogs or any CMS (Score:2)

Re:You could do this with blogs or any CMS (Score:2)

Re:You could do this with blogs or any CMS (Score:2)

I wouldn't trust tinyurl.com not to keep logs with enough info to identify me if somebody was that desperate to find me. Far better to go through a service that is, at least, supposed to be anonymous.

Furthur Compression (Score:5, Interesting)

Re:Furthur Compression (Score:2, Insightful)

Re:Furthur Compression (Score:2)

See here [wikipedia.org] if that was indeed simply a typo!

Re:Furthur Compression (Score:2)

Just had to try tinyurl, I think it was designed for uses just like this

Holy.. cool technology overload.. (Score:2)

This sounds like a very cool conference. Are they going to distribute a conference program in PDF format, or is Phreaknic too underground for that, and requires you to get it off torrent?

What does PHP have to do with it? (Score:5, Informative)

article defends PHP; no bashing (Score:2)

The underlying message is that web application development is inherently difficult to secure, despite PHP's valiant attempts to protect programmers from themselves. This is the opposite of PHP bashing. It's PHP apologetics. I disagree with the article's premise. It seems to me the same sort of mindset that attributes to "pilot error" aviation incidents that would better be attributed to poorly designed instrumentation.

Re:article defends PHP; no bashing (Score:2)

Security is layered. PHP components can be secure, but in providing a general-purpose language, anything can be built from those components, including insecure web applications. To create a programming language or environment from which you cannot build an insecure application would require seriously compromising the flexibility which gives the l

commentary defends PHP; no bashing (Score:2)

In my analogy the programmer is the pilot and the programming language is the instrumentation.

The end of TinyURL. (Score:2, Insightful)

But this is a misuse of a really useful service. When TinyURL's administrator has to either go out and buy his second 2-terabyte disk array in a week or shut down, which do you think he will pick?

Re:The end of TinyURL. (Score:2)

Re:The end of TinyURL. (Score:2)

Seriously - this app encodes your data as URLs. Imagine splitting a DVD image into URL-sized chunks and then submitting them one by one. Does that sound like a workable storage system to you? As a fully-distributed system for illegally distributed or illegal materials? Absolutely. The reason an abuse shell script wouldn't be as bad is because of motive. This is a way to abuse the system which is useful.

Video/Overview of Acidus's presentation (Score:4, Informative)

Here [wilpig.org] is a video of Acidus's presentation. If you haven't seen him present before (at HOPE, O'Reilly's E-Tech, Toorcon, Phreaknic, Interz0ne, etc.) he puts on a good show. The presentation was called: Layer 7 Fun: Extending web applications in interesting ways. He discusses how traditional web applications work vs. "new" web apps that use AJAX. He talks about writing extensions to web apps using a supplied API (à la Housingmaps.com or chicagocrime.org).
Finally, he talks about writing an extension to a web app where you don't have access to an API. TinyDisk was a case study for writing these so-called "non-sanctioned" extensions. He has a funny little slide he goes back to about how to properly implement a web app (which TinyURL fails to do). Things like "don't allow users to upload arbitrary amounts of data directly into your database." Funny stuff. His upcoming talk at Shmoocon [shmoocon.org] seems pretty cool too.

Book names - Recommended Reading (Score:4, Informative)

There are definitive works in certain fields that online guides and HOWTOs cannot even approach in terms of detail or quality. It's a class of books that are so familiar people refer to them by nicknames instead of by full title. Well, maybe so, but I did not know them all, and in the interest of helping people along the path, here they are:

K&R - The C Programming Language, by Brian W. Kernighan and Dennis M. Ritchie
The Dinosaur Book - Operating System Concepts, by Abraham Silberschatz
Knuth's never-ending story - The Art of Computer Programming, by Donald Knuth
The White Book - Introduction to Algorithms, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Cliff Stein
P&H - Computer Organization and Design: The Hardware/Software Interface, by David Patterson and John Hennessy
The Illustrateds - the TCP/IP Illustrated series, by W. Richard Stevens
The Rainbow series - the U.S. DOD Computer Security series

Re:Book names - Recommended Reading (Score:2)

ISO 9945-1 Portable Operating System Interface (POSIX) -- Part 1: Base Definitions, "The Ugly Red Book That Won't Fit On The Shelf". This one was not mentioned in the TinyURL recommended reading, but there are many of these entries in the Jargon File [catb.org].

Security isn't the issue; resource exploitation is (Score:4, Insightful)

Re:Security isn't the issue; resource exploitation (Score:2)

Re:Security isn't the issue; resource exploitation (Score:2)

Why not go further?
(Score:4, Interesting) Take the list of cluster URLs [msblabs.org]. Concatenate them into a single URL. Submit it again. Thus compressing literally ANY file to five characters. At least, as long as the possibility space of five-character URLs isn't exhausted. It's very much first come, first served. Re:Why not go further? (Score:3, Informative) We are simply addressing it. By your definition, a filesystem path (e.g. Re:Why not go further? (Score:2, Funny) Okay, back to working on my perpetual motion machine! Re:Why not go further? (Score:2) I can't be arsed to look it up but off the top of my head I think it's =256 characters. Re:Why not go further? (Score:2) Re:Why not go further? (Score handl Great Idea! (Score:4, Insightful) I guess once this goes down, I'll have to go back to posting UUencoded files in peoples blogs. Juggling With Packets (Score:4, Insightful) Now make it RAID 10 (Score:2) What does PHP have to do with it (Score:3, Interesting) The basic functionality of TinyURL, NanoURL or any other service is to accept a string (presumably a URI) and return a shorter string that will serve as a pointer to it. If you want your application to accomplish that it doesn't matter what it was written in, people can store things other than URLs in your database. The protections against this sort of use/abuse suggested in the article are also language independent. Greatest FAQ answer ever. (Score:5, Funny) From the TinyDisk FAQ: Q: This damn thing doesn't work on large files! #@%& You! A: Did you not read the manual? Man I wish I could punch you in the face over TCP/IP! Change the config file's MaxSize line. By default the limit is 2 megs. google fuck (Score:3, Interesting) At the time, no one else had written about such things. I just never got around to automating the process, so it never really materialized. Maybe some brave and time-rich soul would like to give it a go? 
took a bit of work on OS X (Score:2) Re:took a bit of work on OS X (Score:2) Re:took a bit of work on OS X (Score:2) Re:TinyDisk? (Score:2) Re:TinyDisk? (Score:2, Funny) Re:TinyDisk? (Score:2, Funny) Re:TinyDisk? (Score:2) Mine is not so *Big*Disk but it is certainly HardDisk permanent media! Re:Google and banks (MODS ON CRACK!) (Score:3, Interesting) And WTF is this modded 'offtopic'? Re:This is simply vandalism (Score:2) Re:It's not a file system (Score:3, Informative) A filesystem stores and retrieves files. Here are some examples of filesystems that undoubtedly violate POSIX: FAT as shipped in DOS 1.0: had no subdirectories, had no notion of users, had no permissions, limited filenames to 8.3. CD-R: doesn't allow data to be modified. Re:It's not a file system (Score:2) Whoa. In nearly the words of Babbage, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a statement." Only POSIX filesystems are filesystems? Trivial Solution to TinyURL URL validation (Score:2) Two flaws. (1) It is possible to create a loop of redirects. Of course, the solution on TinyURL's end would be to follow an arbitrary number of redirects, and declare anything that redirects
http://tech.slashdot.org/story/05/10/25/0350222/tinydisk-a-file-system-on-someone-elses-web-app
--- Sylvain Wallez <sylvain@apache.org> wrote: > Marc Portier wrote: > > I like "choice" which is a noun better than "select" which is a verb. I assume you mean better as in more declarative than imperative? Sounds good. > Also, it seems to me that "class", "struct" and "new" are variations > around the concept of widget groups. We could then have: > - "group-template" for "class" > - "group-instance" for "new" > - "group" for "struct" > > How does it sound? Sounds like a correct analysis. The group-* names may cause some misunderstanding however, because of the different semantics of "struct" versus "new". "struct" exists to wrap a set of widgets in a namespace, while "new" does not provide a wrapping namespace. The original idea was to allow several uses of "new" to include several classes into a "union", with each class providing multiple additional choices (cases) for the union. Should we change "new" to have it provide a wrapping namespace? Then we would have to support union cases with names like "some-namespace.some-widgetname". How would this interact with your union proposal below? I like the proposed names if we can solve this cleanly, and deal with the first two being so long... > Note also that we can make a direct parallel between "wd:group" (former > "struct") and the instance-only "wi:group" widgets I introduced in > woody-page-styling.xsl. I am interested. Would you explain what you are thinking in more detail? > Finally, something that bothers me in "choice" (currently "union") is > that the various cases only appear in the template, but not in the > definition. It would be better IMO to define the choices in the form > definition to ensure there's no accidental mixing in the template. There currently is no danger of mixing. Each direct child widget of a union is a "case" for that union.
If you need a case to include more than one widget, then you are forced to wrap the set of widgets with a container widget (such as "struct"), and then the *container* acts as the union case. > Or maybe was it done on purpose to allow a single widget to be present > in several cases? As explained above this was not the idea, but it is an interesting thought. > If true, then we can have a single namespace for child > widgets as of now, and have additional "case" statements in the > definition identifying which widgets can appear in which case. > > We could then have: > <wd:choice > <wd:label>Travel schedule</wd:label> > <wd:widgets> > <wd:field...</wd:field> > <wd:field...</wd:field> > </wd:widgets> > <wd:when > <wd:use-widget > </wd:when> > <wd:when > <wd:use-widget > <wd:use-widget > </wd:when> > </wd:choice> > > We may also have a "permissive mode" when there's no <wd:when> statement > that would allow any widget for any case. > > Ah, and another question: why is the case-selection widget a sibling of > the "choice" widget? Shouldn't it be a child widget? This would allow an > stronger semantic grouping. But I'm not sure if this is good... Originally, the "union" widget held the case value itself, but was changed to reference a separate widget to determine the case value. The idea was to allow multiple unions (and other dependent widgets, when dependencies get implemented) to feed from the same case value. I have considered allowing the union case to be specified as expression that could reference multiple widgets while calculating its value. I opted instead to focus on adding support for dependent widgets (widgets whose values are automatically calculated based on other widgets values) and then letting the union reference one of these dependent widgets to get its case value. --Tim Larson __________________________________ Do you Yahoo!? Find out what made the Top Yahoo! Searches of 2003
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200401.mbox/%3C20040105142033.59851.qmail@web41903.mail.yahoo.com%3E
Charting with ZedGraph: A Basic Introduction

Introduction

ZedGraph is a powerful and free charting solution for .NET. It can produce a great range of attractive graphs and charts, which can easily be integrated into Resolver through 'ImageWorksheets'. ImageWorksheets display an image instead of a grid. At some point (soon) Resolver will support images in the grid, but personally I like ImageWorksheets. This article shows you how to use some of the basic features of ZedGraph with Resolver, for producing a few different types of graphs. The disadvantage of this approach is that it uses the ZedGraph objects directly - you have to take care of all the details yourself. As I experiment with ZedGraph, I will probably be able to produce a much 'higher level' interface that is easy to use. The advantage of this approach is that you have complete control over the graphs you create and aren't restricted in any way. For the moment, this article is most useful to those familiar with tinkering in IronPython - or willing to give it a try.

Getting Started

In order to use ZedGraph you will need zedgraph.dll. It can be downloaded from the sourceforge project. You should place the dll in the bin folder in your Resolver installation [1]. To get the best from ZedGraph you will also want the documentation, which is a separate download. ZedGraph provides a Windows Forms control for use in desktop applications and a Web control for ASP applications. We will actually be using it to directly produce images, as described in this article. ZedGraph can produce quite a range of different graphs, and we will only be demonstrating a couple of them here. To see more of the possibilities, go to the Sample Graphs page, which has screenshots and example code.

Simple Example - Plotting a Sine Curve

This first example plots a sine wave curve as a line graph with diamond symbols. This example is adapted from the Codeproject Tutorial.
The graph is drawn into a GraphPane, which must first be initialised [2]:

title = "Graph Title"
xAxisTitle = "X Title"
yAxisTitle = "Y Title"
pane = GraphPane(RectangleF(0, 0, 480, 320), title, xAxisTitle, yAxisTitle)

The data to be graphed is placed in a PointPairList: a collection class containing the set of points to be displayed on the curve. In this example we plot a simple curve by passing the numbers 0 to 63 (divided by 10) [3] to the System.Math.Sin function. For each value, the (x, y) co-ordinates are added to our PointPairList:

ppl = PointPairList()
# A sine-wave with 64 data points
for x in range(64):
    x = float(x)
    y = Math.Sin(x / 10.0)
    ppl.Add(x, y)

The final steps are to create the graph, extract the image and place it in an ImageWorksheet:

pane.AddCurve("Sine Wave", ppl, Color.Blue, SymbolType.Diamond)

# A hack because the axis change needs a real
# image if we aren't using a control
bm = Bitmap(1, 1)
g = Graphics.FromImage(bm)
pane.AxisChange(g)

# Create the ImageWorksheet
imageSheetName = "Sine Wave"
workbook.AddImageWorksheet(imageSheetName, pane.GetImage())

- The curve is created by calling pane.AddCurve.
- SymbolType.Diamond is used to specify the markers that appear on the curve.
- pane.AxisChange is called because we are auto-scaling the axes and they must be updated [4].

Multi-Colored Bar Chart

The last chart used a formula to create the curve. In this example we use data from a CellRange and create a bar chart. The bar chart is multicolored, using a gradient fill to make it attractive. This example is adapted from the ZedGraph Multi-Colored Bar Demo. In the Pre-constants user code, a CellRange is created and formatted.
The CellRange here is populated with bar numbers from 1 to 18, each with a randomly assigned value of 0 to a thousand (this time using the random function from the Python standard library random module):

from random import random
for y in range(2, chartData.MaxRow+1):
    chartData['Bar', y] = y - 1
    chartData['Value', y] = random() * 1000

As the CellRange has a header row (the top one), it can be indexed with the column names rather than index numbers, which makes the code a bit nicer to read. (The call to range starts at 2 to skip the header row.) The GraphPane is created in the same way as last time, but this time we have three co-ordinates to include in each member of the PointPairList (rather than the two we had last time):

colorStep = 4.0 / (chartData.MaxRow-1)
for i in range(2, chartData.MaxRow+1):
    x = int(chartData['Bar', i])
    y = chartData['Value', i]
    z = x * colorStep
    ppl.Add(x, y, z)

To specify the 'point' in the gradient of each bar we give it a 'Z co-ordinate', which is a number from 0.0 to 4.0. We divide 4.0 by the number of rows of data (which is the number of rows in the CellRange - 1, because the first row is the header row). The Z co-ordinate is then the bar number multiplied by the step value. Next we define the color array we will use for the gradient fill and a pale blue color that is used for a background gradient fill on the chart:

colorList = [Color.Red, Color.Yellow, Color.Green, Color.Blue, Color.Purple]
colors = Array[Color](colorList)

Instead of calling 'AddCurve' to create the graph, we call AddBar:

curve = pane.AddBar("Multi-Colored Bars", ppl, Color.Blue)
curve.Bar.Fill = Fill(colors)
curve.Bar.Fill.Type = FillType.GradientByZ
curve.Bar.Fill.RangeMin = 0
curve.Bar.Fill.RangeMax = 4

- Fill creates the gradient fill.
- FillType.GradientByZ specifies that it is a gradient on the Z co-ordinate.
- Setting RangeMin and RangeMax specifies the gradient range (which is why we made the Z co-ordinate values between 0 and 4 earlier).
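Outside Resolver, the Z-co-ordinate arithmetic is easy to sanity-check with plain Python. This sketch assumes 18 data rows, as in the example CellRange (so chartData.MaxRow would be 19, and MaxRow - 1 gives the 18 used below):

```python
# 18 data rows, as in the example CellRange; with one header row the
# sheet's MaxRow would be 19, so colorStep = 4.0 / (MaxRow - 1).
n_bars = 18
color_step = 4.0 / n_bars

# Z co-ordinate for bars 1..18, mirroring z = x * colorStep
z_values = [bar * color_step for bar in range(1, n_bars + 1)]

print(min(z_values))  # one gradient step above zero
print(max(z_values))  # the top of the RangeMin..RangeMax span (4.0, up to rounding)
```

The point is just that every bar's Z lands inside the RangeMin..RangeMax span of 0 to 4, so FillType.GradientByZ can map each bar onto the colour gradient.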
We also create two more fills, to give the chart background a nice gradient from white to pale blue:

pane.Chart.Fill = Fill(Color.White, paleBlue, 45)
pane.Fill = Fill(Color.White, paleBlue, 45)

Now that the hard work is done, the rest of the code is identical to the last example.

Pie Chart

This example is adapted from the ZedGraph Pie Chart Demo. It starts by creating a CellRange very similar to the last one. Instead of numbers in the first column it has countries. In this example we create the GraphPane slightly differently. Instead of passing in a rectangle and titles, we construct the GraphPane without arguments and configure it as a separate step:

title = "Resolver Sales by Region\n$ Millions"
pane = GraphPane()
pane.Title.Text = title
pane.Title.FontSpec.IsItalic = True
pane.Title.FontSpec.Size = 24.0
pane.Title.FontSpec.Family = "Times New Roman"
pane.Rect = RectangleF(0, 0, 640, 480)

We also configure where the 'Legend' (the colour key) appears:

pane.Legend.Position = LegendPos.Float
pane.Legend.Location = Location(0.95, 0.15, CoordType.PaneFraction, AlignH.Right, AlignV.Top)
pane.Legend.FontSpec.Size = 10
pane.Legend.IsHStack = False

The important step though is creating the 'pie slices':

total = 0
for i in range(chartData.MaxRow-1):
    offset = 0
    label = chartData['Location', i + 2].Value
    sales = chartData['Sales $M', i + 2].Value
    if sales < 150:
        # offset some of the slices
        offset = 0.2
    total += sales
    # Create the pie slices
    segment = pane.AddPieSlice(sales, GetRandomColor(), Color.White, 45, offset, label)
    segment.LabelDetail.FontSpec.FontColor = GetRandomColor()

The value and the region name are pulled out of the CellRange. The slice is actually created with a call to pane.AddPieSlice. The slice and label colour are set randomly and values below 150 are offset a bit. The total value is accumulated to use in the label.
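The loop's bookkeeping (summing the grand total and deciding which slices get offset) can be sketched without ZedGraph at all. The region names and figures below are made-up stand-ins for the article's 'Location' / 'Sales $M' CellRange:

```python
# Hypothetical stand-ins for the 'Location' / 'Sales $M' columns.
sales_by_region = [('Europe', 320.0), ('Asia', 480.0),
                   ('Africa', 95.0), ('Oceania', 140.0)]

total = 0.0
offsets = {}
for label, sales in sales_by_region:
    # Mirror the article's `if sales < 150: offset = 0.2` test:
    # small slices are pulled slightly out of the pie.
    offsets[label] = 0.2 if sales < 150 else 0.0
    total += sales

print(total)              # 1035.0 -- this feeds the "Total World Sales" label
print(offsets['Africa'])  # 0.2 -- a small slice, so it is offset
```

With the real chart, each (sales, offset, label) triple is handed to pane.AddPieSlice instead of being collected in a dictionary.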
A text label, highlighting the total value and with a gradient fill, is constructed:

text = TextObj("Total World Sales\n$%sM" % total, 0.18, 0.40, CoordType.PaneFraction)
text.Location.AlignH = AlignH.Center
text.Location.AlignV = AlignV.Bottom
text.FontSpec.Border.IsVisible = False
text.FontSpec.Fill = Fill(Color.White, Color.FromArgb(255, 100, 100), 45.0)
text.FontSpec.StringAlignment = StringAlignment.Center
pane.GraphObjList.Add(text)

The text label is a TextObj. Another TextObj (slightly offset) is created to provide a drop shadow for the label:

text2 = TextObj(text)
text2.FontSpec.Fill = Fill(Color.Black)
text2.Location.X += 0.008
text2.Location.Y += 0.01
pane.GraphObjList.Add(text2)

Saving the Chart Image

This spreadsheet has an added bonus: a button to save the chart image. It uses the Windows Forms SaveFileDialog.

def SaveChart():
    try:
        import clr
        clr.AddReference('System.Windows.Forms')
        from System.Windows.Forms import DialogResult, SaveFileDialog
        dialog = SaveFileDialog()
        dialog.Title = 'Save Chart as Jpg Image'
        dialog.Filter = 'Jpg Image (*.jpg)|*.JPG;|All files (*.*)|*.*'
        if dialog.ShowDialog() == DialogResult.OK:
            image.Save(dialog.FileName, ImageFormat.Jpeg)
    except Exception, e:
        print '%s: %s' % (e.__class__.__name__, e)

b.Click += SaveChart
sheet.E5 = b

This isn't a hack: we don't do anything 'clever' like triggering a recalc, so it is a perfectly valid use of the button. The ImageWorksheet is always constructed with an Image object, so this approach can be used with any ZedGraph chart.

Last edited Fri Feb 15 13:45:04 2008.
http://www.resolverhacks.net/zedgraph_basic.html
when will empty tags pass schema validation? Discussion in 'XML' started by wolf_y, May
http://www.thecodingforums.com/threads/when-will-empty-tags-pass-schema-validation.237279/
a side-by-side reference sheet sheet one: grammar and invocation | variables and expressions | arithmetic and logic | strings | regexes | dates and time | tuples | arrays | arithmetic sequences | 2d arrays | 3d arrays | dictionaries | functions | execution control | file handles | directories | processes and environment | libraries and namespaces | reflection sheet two: tables | import and export | relational algebra | aggregation | vectors | matrices | sparse matrices | optimization | polynomials | descriptive statistics | distributions | linear regression | statistical tests | time series | fast fourier transform | clustering | bar charts | scatter plots | line charts | surface charts | chart options GUI REPL. The shell zsh has a built-in command r which re-runs the last command. Shell built-ins take precedence over external commands, but one can invoke the R REPL with: $ command r. octave: Octave, but not MATLAB, also supports shell-style comments which start with #. Variables and Expressions assignment r: Traditionally <- was used in R for assignment. Using an = for assignment was introduced in version 1.4.0 sometime before 2002. -> can also be used for assignment: 3 -> x compound assignment The compound assignment operators. octave: Octave, but not MATLAB, has compound assignment operators for arithmetic and bit operations: += -= *= /= **= ^= &= |= Octave, but not MATLAB, also has the C-style increment and decrement operators ++ and --, which can be used in prefix and postfix position. increment and decrement operator The operator for incrementing the value in a variable; the operator for decrementing the value in a variable. null matlab: NaN can be used for missing numerical values.
Using a comparison operator on it always returns false, including NaN == NaN. Using a logical operator on NaN raises an error. octave: Octave, but not MATLAB, provides NA, which is a synonym of NaN. octave: Octave, but not MATLAB, has isna and isnull, which are synonyms of isnan and isempty. octave: Octave, but not MATLAB, also uses the exclamation point '!' for negation. relational operators The relational operators. octave: Octave, but not MATLAB, also uses != as a synonym for ~=. octave: Octave, but not MATLAB, supports ** as a synonym for ^. octave: Double quote strings are Octave specific. A newline can be inserted into a double quote string using the backslash escape \n. A double quote string can be continued on the next line by ending the line with a backslash. No newline is inserted into the string. literal escapes Escape sequences for including special characters in string literals. matlab: C-style backslash escapes are not recognized by string literals, but they are recognized by the IO system; the string 'foo\n' contains 5 characters, but emits 4 characters when written to standard output. octave: Octave supports indexing string literals directly: 'hello'(1:4). matlab: Cell arrays, which are essentially tuples, are used to store variable-length strings. A two dimensional array of characters can be used to store strings of the same length, one per row. Regular arrays cannot otherwise be used to store strings. pad How to pad the edge of a string with spaces so that it is a prescribed length. number to string How to convert a number to a string. string to number How to convert a string to a number. translate case How to put a string into all caps. How to put a string into all lower case letters. sprintf How to create a string using a printf style format. length How to get the number of characters in a string. character access How to get the character in a string at a given index. octave: Octave supports indexing string literals directly: 'hello'(1).
chr and ord How to convert an ASCII code to a character; how to convert a character to its ASCII code. Regular Expressions character class abbreviations The supported character class abbreviations. A character class is a set of one or more characters. In regular expressions, an arbitrary character class can be specified by listing the characters inside square brackets. If the first character is a circumflex ^, the character class is all characters not in the list. A hyphen - can be used to list a range of characters. matlab: The C-style backslash escapes, which can be regarded as character classes which match a single character, are a feature of the regular expression engine and not string literals like in other languages. anchors The supported anchors. The \< and \> anchors match the start and end of a word respectively. match test How to test whether a string matches a regular expression. case insensitive match test How to perform a case insensitive match test. substitution How to replace all substrings which match a pattern with a specified string; how to replace the first substring which matches a pattern with a specified string. backreference in match and substitution How to use backreferences in a regex; how to use backreferences in the replacement string of a substitution. Date and Time current date/time How to get the current date and time. r: Sys.time() returns a value of type POSIXct. date/time type The data type used to hold a combined date and time value. matlab: The Gregorian calendar was introduced in 1582. The Proleptic Gregorian Calendar is sometimes used for earlier dates, but in the Proleptic Gregorian Calendar the year 1 CE is preceded by the year 1 BCE. The MATLAB epoch thus starts at the beginning of the year 1 BCE, but uses a zero to refer to this year. date/time difference type The data type used to hold the difference between two date/time types.
get date parts How to get the year, the month as an integer from 1 through 12, and the day of the month from a date/time value. octave: In Octave, but not MATLAB, one can use index notation on the return value of a function: t = now; datevec(t)(1). parse datetime How to parse a date/time value from a string in the manner of strptime from the C standard library. format datetime How to write a date/time value to a string in the manner of strftime from the C standard library. Tuples type The name of the data type which implements tuples. literal How to create a tuple, which we define as a fixed-length, inhomogeneous list. lookup element How to access an element of a tuple. update element How to change one of a tuple's elements. computed difference An arithmetic sequence where the difference is computed using the start and end values and the number of elements. iterate How to iterate over an arithmetic sequence. permute axes flip—2d flip—3d circular shift—2d rotate—2d apply function element-wise apply function to linear subarrays Dictionaries literal The syntax for a dictionary literal. size How to get the number of keys in a dictionary. lookup How to use a key to look up a value in a dictionary. update How to add a key-value pair or change the value for an existing key. missing key behavior What happens when looking up a key that isn't in the dictionary. delete How to delete a key-value pair from a dictionary. iterate How to iterate over the key-value pairs. keys and values as arrays How to get an array containing the keys; how to get an array containing the values. merge How to merge two dictionaries. Functions define function How to define a function. invoke function How to invoke a function. nested function missing argument behavior What happens when a function is invoked with too few arguments. extra argument behavior What happens when a function is invoked with too many arguments. default argument How to assign a default argument to a parameter.
variadic function How to define a function which accepts a variable number of arguments. return value How the return value of a function is determined. multiple return values How to return multiple values from a function. anonymous function literal The syntax for an anonymous function. invoke anonymous function closure function as value How to store a function in a variable. File Handles standard file handles Standard input, standard output, and standard error. read line from stdin write line to stdout How to write a line to stdout. matlab: The backslash escape sequence \n is stored as two characters in the string and interpreted as a newline by the IO system. R: An Introduction to R; Advanced R Programming; The Comprehensive R Archive Network. The primitive
http://hyperpolyglot.org/numerical-analysis
I want semantic to always know as much as it can about my project so that it can help me the best. Maybe I'm not using semantic to its fullest extent, but I've tried to add the headers for Qt to it so that it can autocomplete and annotate errors in line. Those are probably two of the biggest, but the very most important part is knowing type & function information on the fly. Perhaps filling out my function arguments for me would be useful too, but I haven't seen that yet. Also, I'm having trouble getting it to automatically jump to functions and always getting it right... One of the problems is that semantic can't parse Qt headers when all the Qt headers contain is an include that's relative, like the case of #include <QApplication> What happens is that when I right click on this and then visit the include, I find that it's an include file that has only one line in it--- #include "qapplication.h" But as far as emacs visiting this particular include, there's no way to right click this particular header and visit it. In fact, it's not shaded with the semantic face as either unfound (red), or green as in incompletely parsed. This is because it doesn't have a .h ending---I need all my files in my include directories to be appropriately interpreted (*automatically*). How can I set all files in my include directories to be interpreted in a specific mode? Also, despite my having added all the Qt include dirs to my semantic system includes path, sometimes when I load a particular Qt file it can't load other Qt headers. Take for instance this snapshot (), taken in context with my .emacs containing the following: (note: QtDir is a correctly set variable pointing to the parent of all of the Qt include folders) Why is it not correctly interpreting in this case?
http://sourceforge.net/p/cedet/mailman/attachment/1376601570.36838.YahooMailNeo%40web140006.mail.bf1.yahoo.com/1/
Asked by: "Building Xamarin.Mac Legacy projects requires Business edition or higher." Question - User11167 posted Greetings! This is popping up with tonight's alpha channel updates. Anyone have a clear explanation? The release notes did not indicate anything, unless I missed it. As an Indie subscriber with a valid subscription, am I to: - Regenerate existing projects somehow in order to not be whatever "legacy" means. - ...other? (Anyone know what legacy vs. not means in this context?) Thanks a bunch!Thursday, August 14, 2014 2:41 AM All replies - User11167 posted Found it. Hints are in the release notes --- but for the last version (1.10.0). Note the Projects -> Migrate to... selection to do the project conversion. Then you'll likely have to do at least some minor work to put Humpty back together again. Thanks, all.Thursday, August 14, 2014 3:03 AM - User35201 posted So the migration will convert you to the new unified API. This may not be what you want. It gets you a slimmer profile, 64-bit support, and makes it easier to share code between iOS and Mac; however, we do not guarantee API/ABI stability right now. If you have a valid subscription to Xamarin.Mac that is below Business, please contact support and explain the situation. You need to be special-cased to fix your license. If they are unaware of the solution, have them contact me, Chris Hamons, and I'll help get it fixed.Thursday, August 14, 2014 4:50 PM - User11167 posted Thanks for info on the distinctions, Chris. I'm fine with exploring the unified API so long as it actually runs / doesn't block basic local development as the Xamarin team dots the i's and crosses the t's with it on its way to stable. Note that to me it seems a bit iffy, though, that a change in licensing that takes away something advertised (and therefore purchased for a year assuming it would be available) would require any "special case" licensing fixes.
It's a bit bait-and-switch, really, and it is probably a good idea that you ask the licensing team to have something implicit in place to grandfather anyone with licenses that predate the change. I personally am not going to get bent out of shape about it (unless the new stuff doesn't adequately support development---already blocked locally by #22116 without mutating project structure to dodge it). But just sayin' --- IANAL, but this smells like a little bit of a legal gray area here. :) (FWIW, when do you expect (roughly) to be able to guarantee API/ABI stability? Thinking mainly about when I can count on this being in stable, good to go for shipping product.) Again, many thanks for the reply!Thursday, August 14, 2014 5:28 PM - User12211 posted @chamons, Ok. Let me get this straight. The new Xamarin.Mac warns you that the new Unified API is unstable and lets you choose not to migrate if you don't want to deal with the breaking changes. But then it won't let you compile your non-migrated Classic API project without a Business or higher license. But you can use the new API with an Indie license. Access to the new features with any license, but access to existing features only with a Business or higher license. I'm sorry, but that makes no sense at all.Thursday, August 14, 2014 6:06 PM - User13 posted Hello Plynkus, Yes, the intention was to grandfather everyone that had that license; we dropped the ball on that one. We are working on it.
As for API/ABI stability: we are basically doing very few cosmetic changes; the only thing that we are missing now is that we are going to turn the return values that return classes that implement a protocol into an interface, so changes like this: NSTableViewSource DataSource { get; set; } will become: INSTableViewSource DataSource { get; set; } And of course, NSTableViewSource will implement INSTableViewSource, so in general this should not break existing code in the vast majority of cases, but will break this particular scenario: NSTableViewSource mySource = foo.DataSource; Because the return is only guaranteed to be an INSTableViewSource, not the concrete implementation of NSTableViewSource. This came to be as we introduced Protocol support last year, but since we did not want to break the API, we were not able to fully realize the Protocol vision. With this, you would be able to implement protocols in many cases without having to subclass one of our concrete implementations. A common idiom in Objective-C can now be used in C#: public MyTableView () { DataSource = this; } /* here you implement the INSTableViewSource methods */ Other than that, we do not really anticipate any major changes. MiguelThursday, August 14, 2014 6:48 PM - User11167 posted Thanks for the words, Miguel. The API changes all sound reasonable---I do not expect problems (aside from a few bumps and bruises on the build front). I'll keep pushing on the new stuff for now (and will file bug reports as needed), but I am pleased to know that rolling back will be an option if I end up truly---if only temporarily---blocked. Cheers.Thursday, August 14, 2014 7:10 PM - User69303 posted Ahhh ... so this explains why the update I just did broke / restricted building for my application (I have an Indie sub purchased on the 15th of August).
I do NOT want to be forced to build with an unstable API, and cannot find a support contact email or details either. So ... I upgraded my project to the new Unified API and now get this error: "Info.plist: error : The name 'Info.plist' is reserved and cannot be used." I guess I'm going to have to put in a support request via your contact form and hope to get a reply ASAP. Meanwhile all development on a time-critical application will grind to a halt. Good job, Xamarin.Thursday, September 4, 2014 5:56 AM - User69303 posted Like I should have done to begin with, I've reverted to the previous version for now.Thursday, September 4, 2014 9:54 AM - User26078 posted @MichaelBoth? You probably have a library project which you have migrated. An Info.plist file was added automatically to that project. If that is the case, just delete it to solve the problem.Thursday, September 4, 2014 12:15 PM - User35230 posted This is also broken for me after running some tests with the update. When I try to build even a brand new clean project (no migration), I get what looks to be an invalid compile error: "Building from the command-line requires a Business license." when I am in fact compiling with the Unified API. Anyone else get that error? Or am I alone there.Thursday, September 4, 2014 6:30 PM - User71371 posted I'm getting this error also. I currently have an evaluation license if that makes a difference.Thursday, September 4, 2014 7:40 PM - User69303 posted Thanks, Radu, but it's not a library project that I upgraded. And the main problem is that I don't want to be 'forced' to use anything unstable. I did get a reply from support quite speedily - thanks!
- and they said that they've identified an issue with the 1.10 version of Xamarin Mac, and the recommendation for now is to downgrade to the 1.8 version, which is what I've done.Thursday, September 4, 2014 8:20 PM - User69303 posted Link to the recommended version:, September 4, 2014 8:25 PM - User13824 posted Note that you'll also need to downgrade Xamarin Studio to version 5.2.1.1 for compatibility with Xamarin.Mac 1.8. Here's the download link for Xamarin Studio 5.2.1.1: If you'd like to migrate to the Unified API, you can instead use Xamarin.Mac 1.10, and then install Xamarin Studio 5.3.0.427 (or earlier). Here's a link to Xamarin Studio 5.3.0.427: Unfortunately, as noted by @AndrewWitte, there's a bug in the current stable version of Xamarin Studio 5.3.0.440 that will prevent building Unified API projects on an Indie license. That bug has been fixed, and a new repaired version of Xamarin Studio should be available soon. But for now, using Xamarin Studio version 5.3.0.427 or earlier is the best bet.Friday, September 5, 2014 12:33 AM - User46031 posted yikes. Same issue for me this morning. Upgraded to Xamarin.Mac 1.10.0.10, chose not to migrate to unified API but unable to compile existing project (I have indie license), get the following: Building Xamarin.Mac Legacy projects requires Business edition or higher. So I migrated, refactored to compile, then got Building from the command-line requires a Business license. (MM9008) (HiiLite.Mac) downgraded to Xamarin.mac 1.8.1.6 as per above link, reverted code, now get Error: The Unknown Edition of Xamarin.Mac does not support building outside of Xamarin Studio. Please go to to upgrade to the Business Edition. (HiiLite.Mac) any suggestions please? This is crippling me with a tight timeline.
thanks.Friday, September 5, 2014 12:37 AM - User13824 posted @VinnieVivace?, you'll need to downgrade Xamarin Studio to version 5.2.1.1 for compatibility with Xamarin.Mac 1.8.1.6.Friday, September 5, 2014 12:41 AM - User46031 posted thanks @BrendanZagaeski? I have gone with workaround 2, now running Xamarin Studio 5.3 (build 427) and Xamarin.Mac 1.10.0.10 and have migrated to unified api. I now get an "application has not been built" message, and the following warnings: MMPTASK: Warning MM2006: Native library 'libfam.so.0.dylib' was referenced but could not be found. MMPTASK: Warning MM2006: Native library 'libgamin-1.so.0.dylib' was referenced but could not be found. MMPTASK: Warning MM2006: Native library 'libsqlite3.dylib' was referenced but could not be found.Friday, September 5, 2014 1:17 AM - User46031 posted No response on this one? Had to give up and downgrade. Have to admit to feeling a bit disappointed.Sunday, September 7, 2014 10:09 PM - User13824 posted Hi, apologies for leaving the follow-up question up in the air. I was hoping I'd have a chance to investigate it a bit myself to look for a solution before writing back, but I've gotten tied up with other issues. This bug appears to be the same issue ("Native library ... was referenced but could not be found"):. So that will be the best place to watch for updates about how to resolve the error. I'll take a look at that bug when I get a chance, and make sure the developers have a way to reproduce it.Monday, September 8, 2014 2:12 PM - User59797 posted @MigueldeIcaza, @chamons You mention that you intend to grandfather in those who had the Indie license; are you still executing on that and would that happen automatically? I mean, with Indie license we don't have any support address offered to us that could be used for asking to be special-cased? As it is, I did try to convert our legacy API project to unified API and now have the stupid "Building from the command-line requires a Business license."
error instead. And downgrading XS causes the "Native library..." issue outlined above. So regardless whatever action I happen to take, I'm sitting dead in the water with my recently bought XS subscription...Tuesday, September 9, 2014 5:32 AM - User13824 posted @ksaunamaki?, you can write to Xamarin support via contact@xamarin.com. Just to get moving again as quickly as possible, you could downgrade to Xamarin.Mac 1.8 + Xamarin Studio 5.2.1.1 (see earlier in the thread). If you like, you could also continue to experiment with the Unified API using an Indie license by using Xamarin.Mac 1.10 with Xamarin Studio 5.3.0.427 (confusingly, as mentioned earlier in the thread, this is and old beta version of Xamarin Studio, not the current stable version). Unfortunately, based on Vinnie's results, it sounds like there are still some packaging problems related to native libraries when using the Unified API with an Indie license, so downgrading to Xamarin.Mac 1.8 + Xamarin Studio 5.2.1.1 is probably easier in the end.Tuesday, September 9, 2014 5:55 AM - User41437 posted @chamons - I have an indie license. I wrote to support (hello at xamarin....) as you suggested (to ask to be special-cased to fix my license) last Friday, and explained the situation, and mentioned this thread and you and what you wrote .... However I didn't receive any reply from them. I did the reverting-to-an-older-Xamarin-version solution for now, however I would love to not be stuck on this old version forever.Tuesday, September 9, 2014 6:33 AM - User35230 posted FYI, after updating to Xamarin Studio 5.3(build 441) I still have the same error. Not sure if the bug is in the IDE or the SDK, so just am putting it out there.Tuesday, September 9, 2014 11:00 PM - User35201 posted @MichaelBevin - Did you hear from support yet? 
(I was on vacation last week and am digging through my e-mails / forum posts)Wednesday, September 10, 2014 1:31 PM - User41437 posted @chamons I didn't hear from them still.Wednesday, September 10, 2014 2:35 PM - User35201 posted @MichaelBevin? I'm going to figure out what is going on and get you a reply. Apologies for the delay.Wednesday, September 10, 2014 2:38 PM - User57282 posted Chris Hamons say to contact support and explain the situation, but your page say that only Business can contact support, Indie can only write in forums. So as requested i write here my situation. I buy in may 2014 a 250$ Xamarin.Mac license. I develop my application successfully. Today i need to build a new version, urgently, for a bugfix. but i unconsciously do a Xamarin Upgrade and after i see the message "Building Xamarin.Mac Legacy projects requires Business edition or higher.". If i convert to Unified API, i have thousand of compilation error, about NS* classes and also "Type type or namespace name 'MonoMac' could not be found" and other kind of problems. There is a guide about the migration? Why you ask this additional effort without explain it during the Xamarin upgrade? Also my previous application was very stable, and you force me to upgrade to an instabile 'Preview Only' version? Now i need to decide to try a downgrade or try to correct thousand errors, and i need to do urgently because my customers need a new version of my application quickly. Very, very, very disappointed. Any suggestion? Thanks.Wednesday, September 10, 2014 7:52 PM - User13824 posted @PaoloBrini?, as discussed earlier in the thread, you can write to Xamarin support via contact@xamarin.com, or downgrade to Xamarin.Mac 1.8 + Xamarin Studio 5.2.1.1. Hopefully we can get you back up and running quickly!Wednesday, September 10, 2014 8:06 PM - User41437 posted @chamons I heard back from support and got the business license. 
Thanks for the help.Thursday, September 11, 2014 3:23 PM - User35201 posted @MichaelBevin? Glad it got sorted out for you. Sorry for the trouble.Thursday, September 11, 2014 3:25 PM - User46031 posted just giving this a bump as I emailed support on the 5th of September, and have not yet got a response (just sent followup). as for: This bug appears to be the same issue ("Native library ... was referenced but could not be found"):. So that will be the best place to watch for updates about how to resolve the error. no resolutions offered yet, so still stuck on an old version of Xamarin Studios.Monday, September 15, 2014 10:48 PM - User69303 posted There appears to be new versions of Xamarin Studio and Xamarin for Mac out on the stable channel, however after last time I'm not game enough to upgrade until I see the results of someone else doing it first. :)Tuesday, September 16, 2014 8:01 PM - User35201 posted @VinnieVivace - I poked support, have you heard back yet? @MichaelBoth - The newer version that were just hotfixed out fix an issue with Indie licenses and Unified builds. I am working on your guys fix, but i'll be a bit. That version acts just as the version for in your guys case.Tuesday, September 16, 2014 8:15 PM - User46031 posted @chamons? yeah, i have. Been told I will not get the grandfathered license, despite others (@MichaelBevin for one) being issued one. Really REALLY unhappy with the current situation.Tuesday, September 16, 2014 9:34 PM - User9 posted @VinnieVivace? Sorry about the mix up on this, I'll have a team member reach out to you and get grandfathered, set up and running with the classic API until we get this resolved fully on our side.Tuesday, September 16, 2014 10:55 PM - User46031 posted Hi @chrisntr. Thanks. Not what support is telling me though. When can I expect this to be actioned?Wednesday, September 17, 2014 6:47 AM - User9 posted @VinnieVivace? 
You should have got a response on this earlier this morning, sorry for the delay on the response.Wednesday, September 17, 2014 7:07 PM - User46031 posted thank you @chrisntr. been a saga, appreciate your help.Wednesday, September 17, 2014 9:42 PM - User13824 posted A new version Xamarin.Mac (1.10.0.18) has just been released to the Beta channel. This version should allow Indie users to build both Classic API and Unified API Xamarin.Mac apps with their current licenses. If any Indie users still have trouble with the "requires Business edition or higher" error after updating to Xamarin.Mac 1.10.0.18, be sure to let us know (either here on the forums or via an email to contact@xamarin.com). Thanks!Wednesday, October 1, 2014 8:33 PM
https://social.msdn.microsoft.com/Forums/en-US/b53333e8-f8aa-4572-91ec-ce0c2d060849/building-xamarinmac-legacy-projects-requires-business-edition-or-higher?forum=xamarinios
Hello minh,

Wednesday, April 5, 2006, 10:41:02 PM, you wrote:

> but in 1/, i have to choose between different kind of array
> representation (and i dont know which one is better) and it seems to
> me that the resulting code (compiled) would have to be the same.

no, the code will be slightly different. IOUArray will allocate space in the GHC's heap, while malloc - in the C heap (ghc's heap is an additional storey on the C heap). btw, `getElems` is a VERY INEFFICIENT way - it will convert the entire array to a list before returning

> for example, the couples (hGet*,peek/readArray) could be written in one line;
> also, one line for the reading/reconstructing more-than-one-Word8 value.
> is it already possible ?
> would it be interesting to add such capabilities to haskell ? (i think so)
> i can try to add it but i need some pointers about how to do it.

i don't see much problems here, just add peek16LE and other procedures like it and you can use trivial code:

  idLength <- peek8 a 1
  x <- peek16LE a 8

  peek8 a i = do (x::Word8) <- peekByteOff a i
                 return (fromIntegral x)

  peek16LE a i = do (x::Word8) <- peekByteOff a i
                    (y::Word8) <- peekByteOff a (i+1)
                    return (fromIntegral x + fromIntegral y * 256)

there are a couple of binary I/O libs (including my own one :) ), i just don't think you need such power here. of course, if you want to read data sequentially, a binary i/o lib will be preferable. with my lib you can write smth like this:

  -- Create new MemBuf filled with data from file
  h <- readFromFile "test"
  -- Read header fields sequentially
  idLength <- getWord8 h
  x <- getWord16le h
  ....

i attached here a part of my library docs where this is described in much more detail :) the lib itself is at

--
Best regards,
Bulat                mailto:Bulat.Ziganshin at gmail.com

-------------- next part --------------
In AltBinary library there

* Serialization API

This part is really small! :) There are just two operations:

  get h
  put_ h a

where `h` is a BinaryStream.
These operations read and write binary representation of any value belonging to the class Binary.
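The two-byte arithmetic in peek16LE above (fromIntegral x + fromIntegral y * 256) is just the little-endian 16-bit layout, which is easy to sanity-check outside Haskell. A quick Python sketch of the same byte layout (an illustration, not part of the original thread):

```python
import struct

# peek16LE reads two bytes at offsets i and i+1 and combines them as
# lo + hi * 256, i.e. a little-endian 16-bit value -- the same thing
# struct's "<H" format does.
data = bytes([0x34, 0x12])
manual = data[0] + data[1] * 256
(unpacked,) = struct.unpack("<H", data)
print(manual, unpacked)  # 4660 4660  (both 0x1234)
```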
http://www.haskell.org/pipermail/haskell/2006-April/017817.html
25 November 2009 03:26 [Source: ICIS news] SINGAPORE (ICIS news)--Asian styrene monomer (SM) prices have reversed a recent uptrend, falling by $10-15/tonne (€7-11/tonne) this week as crude values declined to the mid $70s/bbl, traders said on Wednesday. Spot values were heading towards $1,100/tonne CFR (cost and freight). With the downstream styrenic resin sector in a lull period - typical of every year end - demand for the monomer has been tepid. Market players attributed the rise in SM values over the past weeks to firm raw material costs. “The SM price uptrend over the past few weeks was largely due to cost push,” said a Korean trader. Heightened energy futures at around $80/bbl previously and soaring naphtha costs above $700/tonne CFR Japan were the key boosters for SM values. Feedstock benzene prices also posted large gains to $920/tonne FOB (free on board). “With crude now weak and benzene numbers slipping below $900/tonne today [Wednesday], SM prices are under downward pressure,” a broker said. Separately, around 70,000 tonnes of deep-sea SM cargoes were fixed from the US. “The arrival of US cargoes during the slow January month has weighed down sentiment among Asian players,” said a trader.
http://www.icis.com/Articles/2009/11/25/9266900/weaker+crude+reverses+sm+uptrend+price+down+10-15t+this.html
Lopy4 I2C trouble [OSError: I2C bus error]

Hi, I have a small trouble with I2C and my LoPy4. My firmware version is MicroPython v1.8.6-849-83e2f7f on 2018-03-19; LoPy4 with ESP32. I have several slaves on one I2C bus and only one on the second I2C bus. When I work with one instance of I2C with several slaves, all works fine, but if I initialize a second instance, the I2C of the first instance fails to work and returns OSError: I2C bus error. Sorry for my English, it's not my native language; some code will help to understand. Any idea to deal with this bug? Have I misunderstood how I2C works on the LoPy?

from machine import I2C
from machine import Timer
import utime
import PCA9534 as Slio
import zonemgt as Heating
import LM73 as Thermal
import SSD1306 as Screen
import MCP7940 as Rtc

# I2C0 for SSD1306, LM73, and MCP7940
i2c = I2C(0)
i2c.init(I2C.MASTER, baudrate=400000)
Screen.initialize(Screen.kDisplayI2C128x64, i2c)  # <- works fine
Rtc.init(i2c)  # <- works fine, rtc is read and set to correct time
Temperature = Thermal.read_temp(i2c)  # <- works, temperature is read from sensor
print("Rtc init done")
print("Thermal=" + str(Temperature))
Rtc.init(i2c)  # <- still working fine
i2c_slio = I2C(1)
i2c_slio.init(I2C.MASTER, pins=('P23','P22'), baudrate=400000)
Rtc.init(i2c)  # <- any access to an I2C0 slave fails with OSError; here with Rtc, but LM73 or SSD1306 cannot be read/written either; all I2C0 functions fail

- Xykon administrators last edited by
@eric73 I think the reason it gets stuck is because when you run i2c_slio = I2C(1) you assign I2C bus 1 to the default gpio pins. Now I2C bus 0 no longer has any pins assigned to it and that's why the scan is never ending. Either way I'll get a notification if there are any updates here so let me know when you have a chance to test this.

@xykon Thanks for your feedback; unfortunately I can't test this code before 15th May. I will post you a result at that date (I have a lot of holiday to finish before end of May and I don't come back to work before 15th May).
I think you don't need any hardware to test this behaviour; just having the program terminate is a good thing, as actually it's stuck in an infinite loop in the last slave = i2c.scan().

- Xykon administrators last edited by
@eric73 Can you try the code like this please?

from machine import I2C

i2c = I2C(0, mode=I2C.MASTER, baudrate=400000)
slave = i2c.scan()  # <- show slave list
print(str(slave))
slave = i2c.scan()  # <- show slave list
print(str(slave))
i2c_slio = I2C(1, mode=I2C.MASTER, pins=('P23','P22'), baudrate=400000)
slave = i2c_slio.scan()  # <- show slave list
print(str(slave))
slave = i2c.scan()  # <- show slave list
print(str(slave))

Please let me know if this works better, otherwise I'll set up some hardware to test this further...

@robert-hh Thanks for your answer. I will specify my hardware setting: I have designed a custom board hosting my LoPy4. So on P23 there's no sd-card; I will only use it as an I2C signal. In fact, when I used it as I2C on P23, it worked fine for I2C communication (I can scan and read/write for my slave). The "bug" is that after I call the init function for I2C(1), I still cannot use I2C(0) communication on the default pins (P9-P10) (scan will run in an endless loop and read or write functions cause an OSError). It seems that I can't use both I2C instances simultaneously?

A simpler code that everybody can run will show similar behaviour:

from machine import I2C

i2c = I2C(0)
i2c.init(I2C.MASTER, baudrate=400000)
slave = i2c.scan()  # <- show slave list
print(str(slave))
slave = i2c.scan()  # <- show slave list
print(str(slave))
i2c_slio = I2C(1)
i2c_slio.init(I2C.MASTER, pins=('P23','P22'), baudrate=400000)
slave = i2c_slio.scan()  # <- show slave list
print(str(slave))
slave = i2c.scan()  # <- board stuck in infinite loop, REPL is no longer responsive to commands
print(str(slave))
https://forum.pycom.io/topic/3178/lopy4-i2c-trouble-oserror-i2c-bus-error
Cache

Using a cache is a great way of making your application run quicker. The Symfony cache component ships with many adapters to different storages. Every adapter is developed for high performance. The following example shows a typical usage of the cache:

use Symfony\Contracts\Cache\ItemInterface;

// The callable will only be executed on a cache miss.
$value = $pool->get('my_cache_key', function (ItemInterface $item) {
    $item->expiresAfter(3600);

    // ... do some HTTP request or heavy computations
    $computedValue = 'foobar';

    return $computedValue;
});

echo $value; // 'foobar'

// ... and to remove the cache key
$pool->delete('my_cache_key');

Symfony supports Cache Contracts, PSR-6/16 and Doctrine Cache interfaces. You can read more about these at the component documentation.

New in version 4.2: The cache contracts were introduced in Symfony 4.2.

The following adapters come pre-configured:

- cache.adapter.apcu
- cache.adapter.array
- cache.adapter.doctrine
- cache.adapter.filesystem
- cache.adapter.memcached
- cache.adapter.pdo
- cache.adapter.psr6
- cache.adapter.redis

Some of these adapters can be configured via shortcuts. Using these shortcuts will create pools with service IDs that follow the pattern cache.[type]. (YAML/XML/PHP configuration examples not shown here.)

Creating Custom (Namespaced) Pools

You can also create more customized pools. (YAML/XML/PHP configuration examples not shown here.) Each pool manages a set of independent cache keys: keys from one pool never collide with keys from another pool. A custom pool can be injected by type-hinting Symfony\Contracts\Cache\CacheInterface or Psr\Cache\CacheItemPoolInterface:

use Symfony\Contracts\Cache\CacheInterface;

// from a controller method
public function listProducts(CacheInterface $customThingCache)
{
    // ...
}

// in a service
public function __construct(CacheInterface $customThingCache)
{
    // ...
}

Tip: If you need the namespace to be interoperable with a third-party app, you can take control over auto-generation by setting the namespace attribute of the cache.pool service tag.
For example, you can override the service definition of the adapter. (YAML/XML/PHP configuration examples not shown here.)

Custom Provider Options

Some providers have specific options that can be configured. The RedisAdapter allows you to create providers with custom connection options. Pools can also be chained together, ordered from the fastest to the slowest. If an error happens when storing an item in a pool, Symfony stores it in the other pools and no exception is thrown. Later, when the item is retrieved, Symfony stores the item automatically in all the missing pools.

New in version 4.4: Support for configuring a chain using framework.cache.pools was introduced in Symfony 4.4. (YAML/XML/PHP configuration examples not shown here.)

Clearing the Cache

To clear the cache you can use the bin/console cache:pool:clear [pool] command. That will remove all the entries from your storage and you will have to recalculate all the values. You can also group your pools into "cache clearers". There are 3 cache clearers by default:

- cache.global_clearer
- cache.system_clearer
- cache.app_clearer

The global clearer clears all the cache items in every pool. The system cache clearer is used in the bin/console cache:clear command. The app clearer is the default clearer.

To see all available cache pools:

    $ php bin/console cache:pool:list

New in version 4.3: The cache:pool:list command was introduced in Symfony 4.3.

Clear one pool:

    $ php bin/console cache:pool:clear my_cache_pool

Clear all custom pools:

    $ php bin/console cache:pool:clear cache.app_clearer

Clear all caches everywhere:

    $ php bin/console cache:pool:clear cache.global_clearer

This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license.
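The tabbed YAML/XML/PHP configuration examples did not survive extraction; as a rough sketch of what a custom pool definition looks like in YAML, based on the framework.cache options this page describes (the pool name is illustrative):

```yaml
# config/packages/cache.yaml -- the pool name "custom_thing.cache" is illustrative
framework:
    cache:
        pools:
            custom_thing.cache:
                adapter: cache.app
                default_lifetime: 3600
```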
https://symfony.com/doc/4.4/cache.html
Difference between revisions of "Zero residues" Revision as of 13:10, 12 May 2009

from pymol import cmd, stored

def zero_residues(sel1, offset=0):
    """
    PURPOSE: renumbers the residues so that the first one is zero, or offset.
    USAGE: zero_residues protName      # first residue is 0
    USAGE: zero_residues protName, 5   # first residue is 5
    EXAMPLE: zero_residues *
    """
    offset = int(offset)

    # variable to store the offset
    stored.first = None
    # get the names of the proteins in the selection
    names = cmd.get_names(selection=sel1)

    # for each name shown
    for p in names:
        # get this offset
        cmd.iterate("first %s and polymer and n. CA" % p, "stored.first=resi")
        # don't waste time if we don't have to
        if int(stored.first) == offset:
            continue
        # reassign the residue numbers
        cmd.alter("%s" % p, "resi=str(int(resi)-%s)" % str(int(stored.first) - offset))
        # update pymol
        cmd.sort()

# let pymol know about the function
cmd.extend("zero_residues", zero_residues)
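The renumbering arithmetic inside the alter() call can be checked standalone, without PyMOL installed (a sketch for illustration, not part of the wiki page):

```python
def renumber(residue_ids, offset=0):
    """Shift residue numbers so the first one equals `offset`,
    mirroring resi = str(int(resi) - (first - offset)) above."""
    first = residue_ids[0]
    return [r - (first - offset) for r in residue_ids]

print(renumber([17, 18, 19]))     # [0, 1, 2]
print(renumber([17, 18, 19], 5))  # [5, 6, 7]
```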
https://pymolwiki.org/index.php?title=Zero_residues&diff=prev&oldid=6454
Re: [caplet] ADsafe, Take 4

catch is problematic. Below is my writeup of scoping re catch. If my recollections of the behavior of old versions of Firefox/Mozilla are correct, then catch can be used to inject into the global namespace, to, for example, replace encodeURIComponent/encodeURI with a function that, when called by the embedding page, would substitute malicious cgi parameters into a URL, possibly tricking the embedding page into issuing a completely different request than the one it intended.

On 04/10/2007, Douglas Crockford <douglas@...> wrote:
I have put more limitations on what is tolerated in HTML. I suspect there are more gremlins out there. I am worried about catch(name) clauses. The way that name is scoped is unexpected. I think there may be more problems there. Big thanks to everyone who has been looking at this.

--- In caplet@yahoogroups.com, "Mike Samuel" <mikesamuel@...> wrote:
> Does any browser include object references or functions in its
> exception objects?
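For reference, the scoping rule the thread worries about: a catch parameter is supposed to be visible only inside its catch block, and the attack described relies on older engines getting this wrong. A quick check of the spec-compliant behavior in modern JavaScript (an illustration, not from the original message):

```javascript
function probe() {
  try {
    throw new Error("boom");
  } catch (err) {
    // `err` exists only inside this catch block
  }
  // Outside the block, `err` is not bound; typeof on an
  // undeclared name yields "undefined" instead of throwing.
  return typeof err;
}

console.log(probe()); // "undefined"
```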
https://groups.yahoo.com/neo/groups/caplet/conversations/topics/76?source=1&var=1&l=1
Type: Posts; User: Clipster You only need quotes for strings with spaces in them. Like setting a font, some fonts (lucida grande) have spaces so you would need quotes: element { font-family: 'lucida grande'; } ... You don't understand my question. I'm asking if anyone else has had this problem, thus possibly knowing a way to fix it. (without returning the monitor) Why would I have to spend extra money? I... Are you just spamming? I would like to get a better response. I just got an Acer monitor. It seems to have too bugged pixels. One dead and one "hot", this one I'm not to sure about. Where the dead is always gray/black and the other is like a contrast color... It works fine: #include <iostream> #include <windows.h> using std::cout; void sort(int array[], int numts) Ahh. My mistake.. Thanks! Thanks, But I'm not starting a project with it, I just wanted to see if I could make it. xD Brushing up on my math lol. I solved the problem now. :) Turns out the Math.sin and Math.cos was... I'm trying to draw a line from one point to another. This seems to work ok on 45 degree angles other than the distance is still wrong. But if the angle is not 45 degrees it goes wack. <html>... I have had a time trying to use an unsigned driver. Is there no way around this?? Not including F8?? This digitally signed stuff is complete bull crap. Whats up with it?? Another one of microsoft... Well never mind guys. Ive decided to go multi threaded. 1) Threads are easy to create. 2) Doing multiple connections with select get real complicated with just simple server features. 3)... Every website I go to gives FireFox's console errors. So I doubt anything is wrong with your code. Plus it all looks valid. It compiles fine, though I'm not sure if I'm doing everything right. All I have is the server right now. It's a blocking socket, though it uses select, so it doesn't really block. #include... @Russco: Well theres no point in anyone using mine because it sends it to my server. 
And who ever that knows how to make it work for them, I would think, knows how to make a keylogger anyway. ... Here's my key loggers source: "attachment" It was when I was real new to c++ though so the way I included files was kinda weird. BTW: Your responsible with this not me, and if you compile this... I found out how to use the select() function, or at least I think I have. It allows the loop to keep going and then receive something when there is something to receive. Here's the Server: //... ***** retrieve flags c eval flags = fcntl(sock: F_GETFL) c if flags < 0 C* handle error "unable to retrieve descriptor... You sure about MSG_DONTWAIT? I get undeclared identifier. Thanks. Yeah like jarro said. What do I do to fix this problem? Any game books you suggest? Also I probably need to know a little more math like trig. I'm only 16 so Id say I'm pushing a little, going into trig. :D Oh and btw, I think I know what 1000.0f /... I'm just making a simple socket program that the client can send strings to the server. Server: #include <iostream> #include <winsock2.h> int main() { What does / by float do? And also, Do I put this in the main while loop before the render function, or do I put it in the frame rending function? Thanks. How Would I go about timing how fast my game runs no matter the frame rate speed? Limiting Frame speed does work, but I don't think frame speed is reliable on such an important aspect of game... No luck.... This is what I'm getting when I don't link to libmysql.lib and link to mysqlclient.lib: 1>Generating Code... 1>Linking... 1>LIBCMTD.lib(dbgheap.obj) : error LNK2005:... All right, Ill try messing with it and see what I get. Ill reply back on where I'm at then. Thanks. I said if don't link to libmysql.lib I get unresolved externals link errors....
http://forums.codeguru.com/search.php?s=4d3e13bb914ea5597d8fd1e173af2edd&searchid=8138177
Porting to the Universal Windows Platform (C++) In this topic, you can find information on how to port existing C++ code to the Windows 10 app platform, the Universal Windows Platform. What is meant by the term universal is that your code can run on any of the devices that run Windows 10, including desktop, phone, tablets, and future devices that run Windows 10. You create a single project and a single XAML-base user interface that works well on any device that runs Windows 10. You can use dynamic layout features in XAML to allow the app's UI to adapt to different display sizes. The Windows Dev Center documentation contains a guide for porting Windows 8.1 apps to the Universal Windows Platform. See Move from Windows Runtime 8 to UWP. Although the guide focuses mostly on C# code, most of the guidance is applicable to C++. The following procedures contain more detailed information. This topic contains the following procedures for porting code to the UWP. If you have a classic desktop Win32 DLL and you want to call it from a UWP application, you can do that as well. Using such procedures, you can create a UWP user interface layer for an existing classic Windows desktop C++ application, or your cross-platform standard C++ code. See How to: Use Existing C++ Code in a Universal Windows Platform App. Porting a Windows 8.1 Store App to the UWP If you have a Windows 8.1 Store App, you can use this procedure to get it working on the UWP and any device that runs Windows 10. It's a good idea to first build the project with Visual Studio 2017 as a Windows 8.1 project, to first eliminate any issues that arise from changes in the compiler and libraries. Once you've done that, there are two ways to convert this to a Windows 10 UWP project. The easiest way (as explained in the following procedure) is to create a Universal Windows project, and copy your existing code into it. 
If you were using a Universal project for Windows 8.1 desktop and Windows 8.1 Phone, your project will start with two different layouts in XAML but end with a single dynamic layout that adjusts to the display size. To port a Windows 8.1 Store App to the UWP If you have not already done so, open your Windows 8.1 App project in Visual Studio 2017, and follow the instructions to upgrade the project file. You need to have installed the Windows 8.1 Tools in Visual Studio setup. If you don't have those tools installed, start Visual Studio setup from the Programs and Features window, choose Visual Studio 2017, and in the setup window, choose Modify. Locate Windows 8.1 Tools, make sure it is selected, and choose OK. Open the Project Properties window, and under C++ > General, set the Platform Toolset to v141, the toolset for Visual Studio 2017. Build the project as a Windows 8.1 project, and address any build errors. Any errors at this stage are probably due to breaking changes in the build tools and libraries. See Visual C++ change history 2003 - 2015 for a detailed explanation of the changes that might affect your code. Once your project builds cleanly, you are ready to port to Universal Windows (Windows 10). Create a new Universal Windows App project using the Blank template. You might want to give it the same name as your existing project, although to do that the projects must be in different directories. Close the solution, and then using Windows Explorer or the command line, copy the code files (with extensions .cpp, .h, and .xaml) from your Windows 8.1 project into the same folder as the project file (.vcxproj) for the project you created in step 1. Do not copy the Package.appxmanifest file, and if you have separate code for Windows 8.1 desktop and phone, choose one of them to port first (you'll have to do some work later to adapt to the other). Be sure to copy and subfolders and their contents. If prompted, choose to replace any files with duplicate names. 
Reopen the solution, and choose Add > Existing Item from the shortcut menu for the project node. Select all the files you copied, except any that are already part of the project. Check any subfolders and make sure to add the files in them as well. If you are not using the same project name as your old project, open the Package.appxmanifest file and update the Entry Point to reflect the namespace name for the Appclass. The Entry Point field in the Package.appxmanifest file contains a scoped name for the Appclass, which includes the namespace that contains the Appclass. When you create a Universal Windows project, the namespace is set to the name of the project. If this is different from what's in the files you copied in from your old project, you must update one or the other to make them match. Build the project, and address any build errors due to breaking changes between the different versions of the Windows SDK. Run the project on the Local Desktop. Verify that there are no deployment errors, and that the layout of the app looks reasonable and that it functions correctly on the desktop. If you had separate code files and .xaml for another device, such as Windows Phone 8.1, examine this code and identify where it differs from the standard device. If the difference is only in the layout, you might be able to use a Visual State Manager in the xaml to customize the display depending on the size of the screen. For other differences, you can use conditions sections in your code using the following #if statements. . Run and debug the app on an emulator or physical device, for each type of device that your app supports. To run an emulator, you need to run Visual Studio on a physical computer, not a virtual machine. Porting a Windows 8.1 Runtime Component to the UWP If you have a DLL or a Windows Runtime Component that already works with Windows 8.1 Store apps, you can use this procedure to get the component or DLL working with the UWP and Windows 10. 
The basic procedure is to create a new project and copy your code into it.

To port a Windows 8.1 Runtime Component to the UWP

In the New Project dialog in Visual Studio 2017, locate the Windows Universal node. If you don't see this node, install the Windows 10 SDK first. Choose the Windows Runtime Component template, give a name for your component, and choose the OK button. The component name will be used as the namespace name, so you might want to use the same name as your old project's namespace. This requires that you create the project in a different folder from the old one. If you choose a different name, you can update the namespace name in the generated code files. Close the project. Copy all the code files (.cpp, .h, .xaml, etc.) from your Windows 8.1 component into your newly created project. Do not copy the Package.appxmanifest file. Build, and resolve any errors due to breaking changes between different versions of the Windows SDK.

Troubleshooting

You might encounter various errors during the process of porting code to the UWP. Here are some of the problems you might see.

Project Configuration Issues

You might receive the error: could not find assembly 'platform.winmd': please specify the assembly search path using /AI or by setting the LIBPATH environment variable. If this happens, the project is not building as a Windows Universal project. Check the project file and make sure it has the XML elements that identify a project as a Windows Universal Project (the version number of the target platform might be different). If you created a new UWP project using Visual Studio, you should not see this error.

See also

Visual C++ Porting Guide
Develop apps for the Universal Windows Platform (UWP)
https://docs.microsoft.com/en-us/cpp/porting/porting-to-the-universal-windows-platform-cpp?view=vs-2019
Warning: this page refers to an old version of SFML. Communicating with sockets Sockets A socket is the interface between your application and the outside world: through a socket, you can send and receive data. Therefore, any network program will most likely have to deal with sockets; they are the central element of network communication. There are several kinds of sockets, each providing specific features. SFML implements the most common ones: TCP sockets and UDP sockets. TCP vs UDP It is important to know what TCP and UDP sockets can do, and what they can't do, so that you can choose the best socket type according to the requirements of your application. The main difference is that TCP sockets are connection-based. You can't send or receive anything until you are connected to another TCP socket on the remote machine. Once connected, a TCP socket can only send and receive to/from the remote machine. This means that you'll need one TCP socket for each client in your application. UDP is not connection-based; you can send and receive to/from anyone at any time with the same socket. The second difference is that TCP is reliable, unlike UDP. It ensures that what you send is always received, without corruption and in the same order. UDP performs fewer checks, and doesn't provide any reliability: what you send might be received multiple times (duplication), or in a different order, or be lost and never reach the remote computer. However, UDP does guarantee that data which is received is always valid (not corrupted). UDP may seem scary, but keep in mind that almost all the time, data arrives correctly and in the right order. The third difference is a direct consequence of the second one: UDP is faster and more lightweight than TCP, because it has fewer requirements and thus less overhead. The last difference is about the way data is transported.
TCP is a stream protocol: there's no message boundary, if you send "Hello" and then "SFML", the remote machine might receive "HelloSFML", "Hel" + "loSFML", or even "He" + "loS" + "FML". UDP is a datagram protocol. Datagrams are packets that can't be mixed with each other. If you receive a datagram with UDP, it is guaranteed to be exactly the same as it was sent. Oh, and one last thing: since UDP is not connection-based, it allows broadcasting messages to multiple recipients, or even to an entire network. The one-to-one communication of TCP sockets doesn't allow that. Connecting a TCP socket As you can guess, this part is specific to TCP sockets. There are two sides to a connection: the one that waits for the incoming connection (let's call it the server), and the one that triggers it (let's call it the client). On client side, things are simple: the user just needs to have a sf::TcpSocket and call its connect function to start the connection attempt. #include <SFML/Network.hpp> sf::TcpSocket socket; sf::Socket::Status status = socket.connect("192.168.0.5", 53000); if (status != sf::Socket::Done) { // error... } The first argument is the address of the host to connect to. It is an sf::IpAddress, which can represent any valid address: a URL, an IP address, or a network host name. See its documentation for more details. The second argument is the port to connect to on the remote machine. The connection will succeed only if the server is accepting connections on that port. There's an optional third argument, a time out value. If set, and the connection attempt doesn't succeed before the time out is over, the function returns an error. If not specified, the default operating system time out is used. Once connected, you can retrieve the address and port of the remote computer if needed, with the getRemoteAddress() and getRemotePort() functions. All functions of socket classes are blocking by default. 
This means that your program (more specifically the thread that contains the function call) will be stuck until the operation is complete. This is important because some functions may take very long: For example, trying to connect to an unreachable host will only return after a few seconds, receiving will wait until there's data available, etc. You can change this behavior and make all functions non-blocking by using the setBlocking function of the socket. See the next chapters for more details. On the server side, a few more things have to be done. Multiple sockets are required: One that listens for incoming connections, and one for each connected client. To listen for connections, you must use the special sf::TcpListener class. Its only role is to wait for incoming connection attempts on a given port, it can't send or receive data. sf::TcpListener listener; // bind the listener to a port if (listener.listen(53000) != sf::Socket::Done) { // error... } // accept a new connection sf::TcpSocket client; if (listener.accept(client) != sf::Socket::Done) { // error... } // use "client" to communicate with the connected client, // and continue to accept new connections with the listener The accept function blocks until a connection attempt arrives (unless the socket is configured as non-blocking). When it happens, it initializes the given socket and returns. The socket can now be used to communicate with the new client, and the listener can go back to waiting for another connection attempt. After a successful call to connect (on client side) and accept (on server side), the communication is established and both sockets are ready to exchange data. Binding a UDP socket UDP sockets need not be connected, however you need to bind them to a specific port if you want to be able to receive data on that port. A UDP socket cannot receive on multiple ports simultaneously. sf::UdpSocket socket; // bind the socket to a port if (socket.bind(54000) != sf::Socket::Done) { // error... 
} After binding the socket to a port, it's ready to receive data on that port. If you want the operating system to bind the socket to a free port automatically, you can pass sf::Socket::AnyPort, and then retrieve the chosen port with socket.getLocalPort(). UDP sockets that send data don't need to do anything before sending. Sending and receiving data Sending and receiving data is done in the same way for both types of sockets. The only difference is that UDP has two extra arguments: the address and port of the sender/recipient. There are two different functions for each operation: the low-level one, that sends/receives a raw array of bytes, and the higher-level one, which uses the sf::Packet class. See the tutorial on packets for more details about this class. In this tutorial, we'll only explain the low-level functions. To send data, you must call the send function with a pointer to the data that you want to send, and the number of bytes to send. char data[100] = ...; // TCP socket: if (socket.send(data, 100) != sf::Socket::Done) { // error... } // UDP socket: sf::IpAddress recipient = "192.168.0.5"; unsigned short port = 54000; if (socket.send(data, 100, recipient, port) != sf::Socket::Done) { // error... } The send functions take a void* pointer, so you can pass the address of anything. However, it is generally a bad idea to send something other than an array of bytes because native types with a size larger than 1 byte are not guaranteed to be the same on every machine: Types such as int or long may have a different size, and/or a different endianness. Therefore, such types cannot be exchanged reliably across different systems. This problem is explained (and solved) in the tutorial on packets. With UDP you can broadcast a message to an entire sub-network in a single call: to do so you can use the special address sf::IpAddress::Broadcast. 
There's another thing to keep in mind with UDP: Since data is sent in datagrams and the size of these datagrams has a limit, you are not allowed to exceed it. Every call to send must send less than sf::UdpSocket::MaxDatagramSize bytes -- which is a little less than 2^16 (65536) bytes. To receive data, you must call the receive function: char data[100]; std::size_t received; // TCP socket: if (socket.receive(data, 100, received) != sf::Socket::Done) { // error... } std::cout << "Received " << received << " bytes" << std::endl; // UDP socket: sf::IpAddress sender; unsigned short port; if (socket.receive(data, 100, received, sender, port) != sf::Socket::Done) { // error... } std::cout << "Received " << received << " bytes from " << sender << " on port " << port << std::endl; It is important to keep in mind that if the socket is in blocking mode, receive will wait until something is received, blocking the thread that called it (and thus possibly the whole program). The first two arguments specify the buffer to which the received bytes are to be copied, along with its maximum size. The third argument is a variable that will contain the actual number of bytes received after the function returns. With UDP sockets, the last two arguments will contain the address and port of the sender after the function returns. They can be used later if you want to send a response. These functions are low-level, and you should use them only if you have a very good reason to do so. A more robust and flexible approach involves using packets. Blocking on a group of sockets Blocking on a single socket can quickly become annoying, because you will most likely have to handle more than one client. You most likely don't want socket A to block your program while socket B has received something that could be processed. What you would like is to block on multiple sockets at once, i.e. waiting until any of them has received something.
This is possible with socket selectors, represented by the sf::SocketSelector class. A selector can monitor all types of sockets: sf::TcpSocket, sf::UdpSocket, and sf::TcpListener. To add a socket to a selector, use its add function: sf::TcpSocket socket; sf::SocketSelector selector; selector.add(socket); A selector is not a socket container. It only references (points to) the sockets that you add, it doesn't store them. There is no way to retrieve or count the sockets that you put inside. Instead, it is up to you to have your own separate socket storage (like a std::vector or a std::list). Once you have filled the selector with all the sockets that you want to monitor, you must call its wait function to wait until any one of them has received something (or has triggered an error). You can also pass an optional time out value, so that the function will fail if nothing has been received after a certain period of time -- this avoids staying stuck forever if nothing happens. if (selector.wait(sf::seconds(10))) { // received something } else { // timeout reached, nothing was received... } If the wait function returns true, it means that one or more socket(s) have received something, and you can safely call receive on the socket(s) with pending data without having them block. If the socket is a sf::TcpListener, it means that an incoming connection is ready to be accepted and that you can call its accept function without having it block. Since the selector is not a socket container, it cannot return the sockets that are ready to receive. Instead, you must test each candidate socket with the isReady function: if (selector.wait(sf::seconds(10))) { // for each socket... 
<-- pseudo-code because I don't know how you store your sockets :) { if (selector.isReady(socket)) { // this socket is ready, you can receive (or accept if it's a listener) socket.receive(...); } } } You can have a look at the API documentation of the sf::SocketSelector class for a working example of how to use a selector to handle connections and messages from multiple clients. As a bonus, the time out capability of Selector::wait allows you to implement a receive-with-timeout function, which is not directly available in the socket classes, very easily: sf::Socket::Status receiveWithTimeout(sf::TcpSocket& socket, sf::Packet& packet, sf::Time timeout) { sf::SocketSelector selector; selector.add(socket); if (selector.wait(timeout)) return socket.receive(packet); else return sf::Socket::NotReady; } Non-blocking sockets All sockets are blocking by default, but you can change this behaviour at any time with the setBlocking function. sf::TcpSocket tcpSocket; tcpSocket.setBlocking(false); sf::TcpListener listenerSocket; listenerSocket.setBlocking(false); sf::UdpSocket udpSocket; udpSocket.setBlocking(false); Once a socket is set as non-blocking, all of its functions always return immediately. For example, receive will return with status sf::Socket::NotReady if there's no data available. Or, accept will return immediately, with the same status, if there's no pending connection. Non-blocking sockets are the easiest solution if you already have a main loop that runs at a constant rate. You can simply check if something happened on your sockets in every iteration, without having to block program execution.
https://en.sfml-dev.org/tutorials/2.1/network-socket.php
Raspberry Pico: Unit Test Framework for Your Projects

The Pico captured me; I wanted more than just running demos. So, I decided to start library development for a shift register and a temperature sensor. When developing a library, I want to have tests for several reasons. First, I like to use TDD and start with writing a test that will cover a new feature before its implementation. Second, once you have a substantial test suite, it helps you to keep the library in a working shape when you refactor its code base. In this article, I will show how to install and use the unit testing framework cmocka. We will see the basic boilerplate code and an example for testing a Raspberry Pico program. This article originally appeared at my blog.

Installation

Grab the CMocka source from the official cmocka mirror. Then, extract the tar, compile and install. The steps in a nutshell:

wget
tar xvf cmocka-1.1.5.tar.xz
cd cmocka-1.1.5
mkdir build
cd build
cmake ..
make

The make step should show this output:

Scanning dependencies of target cmocka
[ 4%] Building C object src/CMakeFiles/cmocka.dir/cmocka.c.o
[ 9%] Linking C shared library libcmocka.so
[ 9%] Built target cmocka
Scanning dependencies of target assert_macro_test
[ 13%] Building C object example/CMakeFiles/assert_macro_test.dir/assert_macro.c.o
...
[ 95%] Building C object example/mock/uptime/CMakeFiles/uptime.dir/uptime.c.o
[100%] Linking C executable uptime
[100%] Built target uptime

If all goes well, you can install the compiled libraries in your system.

sudo make install

[ 9%] Built target cmocka
...
[100%] Built target uptime
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/lib/pkgconfig/cmocka.pc
-- Installing: /usr/local/lib/cmake/cmocka/cmocka-config.cmake
-- Installing: /usr/local/lib/cmake/cmocka/cmocka-config-version.cmake
-- Installing: /usr/local/include/cmocka.h
-- Installing: /usr/local/include/cmocka_pbc.h
-- Installing: /usr/local/lib/libcmocka.so.0.7.0
-- Installing: /usr/local/lib/libcmocka.so.0
-- Installing: /usr/local/lib/libcmocka.so

The files will be installed at /usr/local/lib.

Unit Test Example

Let’s write a very basic unit test example.

/*
 * ---------------------------------------
 * | devcon@admantium.com                 |
 * | SPDX-License-Identifier: BSD-3-Clause |
 * ---------------------------------------
 */
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

static void test_integers(void** state) {
  assert_int_equal(1,1);
}

int main(int argc, char* argv[]) {
  const struct CMUnitTest tests[] = {
    cmocka_unit_test(test_integers),
  };
  return cmocka_run_group_tests(tests, NULL, NULL);
}

The important things here:
- Always include all four headers: <stdarg.h>, <stddef.h>, <setjmp.h>, <cmocka.h>
- Define test cases as functions that receive an argument void** state
- The test functions include different types of assert statements; shown here is assert_int_equal (see the official documentation for the full list of asserts)
- In the main function, add all defined test functions to the struct CMUnitTest tests[]

Running Tests

To invoke that test on the CLI, you will need to add the CMocka installation path to the LD_LIBRARY_PATH environment variable.

export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}

Then, run your compiler and link to the CMocka library. I’m using clang in the following example.

clang -std=c18 -l cmocka simple.test.c -o tests.bin

Finally, you can run the test, and see formatted output that shows which tests were successful.

$> ./tests.bin

[==========] Running 1 test(s).
[ RUN ] test_integers
[ OK ] test_integers
[==========] 1 test(s) run.
[ PASSED ] 1 test(s).

Testing a Pico Program

Now that we have set up the testing framework, let's use it to write tests for our Pico programs. At the time of writing this article, I was developing a library for working with shift registers. The library exposes a struct object that defines the pin layout, and several functions for setting bits or a bitmask to the shift register. I will not cover the entire library, but just highlight two test cases that show the essential how-to. Go to Github to see the entire rp2040-shift-register-74HC595 library.

ShiftRegister Struct: Definition and Testing

The shift register is controlled by three input pins:
- Serial (SER): Set a single bit, low or high
- Serial Clock (SRCLK): Send a clock signal that will write the active SER bit to the shift register
- Register Clock (RCLK): Send a clock signal to copy the contents of the shift register into the storage register

These pins are defined in the following struct object.

typedef struct ShiftRegister {
  u_int8_t SERIAL_PIN;
  u_int8_t SHIFT_REGISTER_CLOCK_PIN;
  u_int8_t STORAGE_REGISTER_CLOCK_PIN;
} ShiftRegister;

The first test is about initializing a shift register and seeing that its pins are correctly defined inside the struct. We will use the familiar assert_int_equal test.

void test_shift_register_config(void **state) {
  ShiftRegister reg = {14, 11, 12};
  assert_int_equal(reg.SERIAL_PIN, 14);
  assert_int_equal(reg.SHIFT_REGISTER_CLOCK_PIN, 11);
  assert_int_equal(reg.STORAGE_REGISTER_CLOCK_PIN, 12);
}

Running the tests gives this output:

Running Tests
[==========] Running 1 test(s).
[ RUN ] test_shift_register_config
[ OK ] test_shift_register_config
[==========] 1 test(s) run.
[ PASSED ] 1 test(s).

Writing a single bit

The most basic function is to write a single bit into the shift register. To keep track of this, the register object holds two state variables: serial_pin_state and shift_register_state.
If a new bit is written with the write_bit function, the state will be updated accordingly. To implement this, we first add the state variables to the ShiftRegister.

typedef u_int8_t bitmask;

typedef struct ShiftRegister {
  bool serial_pin_state;
  u_int8_t shift_register_state;
} ShiftRegister;

Then, we implement the write_bit function. This function sets the serial_pin_state to the given bit. If this bit is a 1, shift_register_state will shift right and add a 1 at the top; if the bit is a 0, it will just shift right.

bool write_bit(ShiftRegister *reg, bool b) {
  reg->serial_pin_state = b;
  reg->shift_register_state = (reg->shift_register_state >> 1) | (b ? 0b10000000 : 0);
  return b;
}

For testing, we will write two bits: 1 followed by 0. After each step, we test that serial_pin_state is set correctly. Finally, we test that the resulting bitmask is correct. To receive the bitmask representation of the shift register, the method print_shift_register is called, and it's compared to a string object. The test method uses assert_memory_equal, a convenient test method to test that any types are equal.

void test_write_bit(void **state) {
  ShiftRegister reg = {14, 11, 12};
  write_bit(&reg, 1);
  assert_int_equal(reg.serial_pin_state, 1);
  write_bit(&reg, 0);
  assert_int_equal(reg.serial_pin_state, 0);
  printf("Shift Register: %s\n", print_shift_register(&reg));
  assert_memory_equal(print_shift_register(&reg), "01000000", 8);
}

All tests pass:

Running Tests
[==========] Running 2 test(s).
[ RUN ] test_shift_register_config
[ OK ] test_shift_register_config
[ RUN ] test_write_bit
Shift Register: 01000000
[ OK ] test_write_bit
[==========] 2 test(s) run.
[ PASSED ] 2 test(s).
The second example showed how to use CMocka for testing Pico code, but with a grain of salt: at the time of writing, I had no experience of how to test that the hardware signals were transmitted from the Pico. In a future article about library design, I will cover this issue and detail how to test both the library function “as-is” and the hardware side. In my opinion, unit testing helps you to write better code: by writing a test before the implementation, you structure the feature upfront, and when you have a substantial test suite, you can better maintain and refactor your code.
https://admantium.medium.com/raspberry-pico-unit-test-framework-for-your-projects-f92623524446
- 2.1. Introduction
- 2.2. A Simple C Program: Printing a Line of Text
- 2.3. Another Simple C Program: Adding Two Integers
- 2.4. Arithmetic in C
- 2.5. Decision Making: Equality and Relational Operators
- 2.6. Secure C Programming

2.2. A Simple C Program: Printing a Line of Text

We begin by considering a simple C program. Our first example prints a line of text. The program and its screen output are shown in Fig. 2.1.

Fig. 2.1. A first program in C.

1  // Fig. 2.1: fig02_01.c
2  // A first program in C.
3  #include <stdio.h>
4
5  // function main begins program execution
6  int main( void )
7  {
8     printf( "Welcome to C!\n" );
9  } // end function main

This program illustrates several important C features. Lines 1 and 2

// Fig. 2.1: fig02_01.c
// A first program in C.

begin with //, indicating that these two lines are comments. Comments do not cause the computer to perform any action when the program is run. Comments are ignored by the C compiler and do not cause any machine-language object code to be generated. The preceding comment simply describes the figure number, file name and purpose of the program. You can also use /*...*/ multiline comments in which everything from /* on the first line to */ at the end of the last line is a comment. We prefer // comments because they're shorter and they eliminate common programming errors that occur with /*...*/ comments, especially when the closing */ is omitted.

#include Preprocessor Directive

Line 3

#include <stdio.h>

is a directive to the C preprocessor. Lines beginning with # are processed by the preprocessor before compilation. Line 3 tells the preprocessor to include the contents of the standard input/output header (<stdio.h>) in the program. This header contains information used by the compiler when compiling calls to standard input/output library functions such as printf (line 8). We explain the contents of headers in more detail in Chapter 5.

Blank Lines and White Space

Line 4 is simply a blank line.
You use blank lines, space characters and tab characters (i.e., “tabs”) to make programs easier to read. Together, these characters are known as white space. White-space characters are normally ignored by the compiler. The main Function Line 6 int main( void ) is a part of every C program. The parentheses after main indicate that main is a function. C programs contain one or more functions, one of which must be main. Every program in C begins executing at the function main. Functions can return information. The keyword int to the left of main indicates that main “returns” an integer (whole-number) value. We’ll explain what this means when we demonstrate how to create your own functions in Chapter 5. For now, simply include the keyword int to the left of main in each of your programs. Functions also can receive information when they’re called upon to execute. The void in parentheses here means that main does not receive any information. In Chapter 14, we’ll show an example of main receiving information. A left brace, {, begins the body of every function (line 7). A corresponding right brace ends each function (line 9). This pair of braces and the portion of the program between the braces is called a block. The block is an important program unit in C. An Output Statement Line 8 printf( "Welcome to C!\n" ); instructs the computer to perform an action, namely to print on the screen the string of characters marked by the quotation marks. A string is sometimes called a character string, a message or a literal. The entire line, including the printf function (the “f” stands for “formatted”), its argument within the parentheses and the semicolon (;), is called a statement. Every statement must end with a semicolon (also known as the statement terminator). When the preceding printf statement is executed, it prints the message Welcome to C! on the screen. The characters normally print exactly as they appear between the double quotes in the printf statement. 
Escape Sequences Notice that the characters \n were not printed on the screen. The backslash (\) is called an escape character. It indicates that printf is supposed to do something out of the ordinary. When encountering a backslash in a string, the compiler looks ahead at the next character and combines it with the backslash to form an escape sequence. The escape sequence \n means newline. When a newline appears in the string output by a printf, the newline causes the cursor to position to the beginning of the next line on the screen. Some common escape sequences are listed in Fig. 2.2. Fig. 2.2. Some common escape sequences. Because the backslash has special meaning in a string, i.e., the compiler recognizes it as an escape character, we use a double backslash (\\) to place a single backslash in a string. Printing a double quote also presents a problem because double quotes mark the boundaries of a string—such quotes are not printed. By using the escape sequence \" in a string to be output by printf, we indicate that printf should display a double quote. The right brace, }, (line 9) indicates that the end of main has been reached. We said that printf causes the computer to perform an action. As any program executes, it performs a variety of actions and makes decisions. Section 2.5 discusses decision making. Chapter 3 discusses this action/decision model of programming in depth. The Linker and Executables Standard library functions like printf and scanf are not part of the C programming language. For example, the compiler cannot find a spelling error in printf or scanf. When the compiler compiles a printf statement, it merely provides space in the object program for a “call” to the library function. But the compiler does not know where the library functions are—the linker does. When the linker runs, it locates the library functions and inserts the proper calls to these library functions in the object program. 
Now the object program is complete and ready to be executed. For this reason, the linked program is called an executable. If the function name is misspelled, the linker will spot the error, because it will not be able to match the name in the C program with the name of any known function in the libraries.

Using Multiple printfs

The printf function can print Welcome to C! several different ways. For example, the program of Fig. 2.3 produces the same output as the program of Fig. 2.1. This works because each printf resumes printing where the previous printf stopped printing. The first printf (line 8) prints Welcome followed by a space, and the second printf (line 9) begins printing on the same line immediately following the space.

Fig. 2.3. Printing on one line with two printf statements.

1  // Fig. 2.3: fig02_03.c
2  // Printing on one line with two printf statements.
3  #include <stdio.h>
4
5  // function main begins program execution
6  int main( void )
7  {
8     printf( "Welcome " );
9     printf( "to C!\n" );
10 } // end function main

One printf can print several lines by using additional newline characters as in Fig. 2.4. Each time the \n (newline) escape sequence is encountered, output continues at the beginning of the next line.

Fig. 2.4. Printing multiple lines with a single printf.

1  // Fig. 2.4: fig02_04.c
2  // Printing multiple lines with a single printf.
3  #include <stdio.h>
4
5  // function main begins program execution
6  int main( void )
7  {
8     printf( "Welcome\nto\nC!\n" );
9  } // end function main
https://www.informit.com/articles/article.aspx?p=2062174&seqNum=2
I wanted to add 9 seconds to each of the subtitle in my file to have it in sync with video. I wrote an awk org for it, and it did what i was trying to do. the video is in sync with subtitles now. now when i play the video with this new subtitles file, movie player quits when it reaches a particular subtitle. every time it quits when it reaches the same place. this does not happen with the original subtitle file. original file 43 00:09:20,820 --> 00:09:23,618 ...in which I have grown from childhood to a man! 44 00:09:24,757 --> 00:09:26,554 I had sworn by those alps and peaks... edited file 43 00:09:29,820 --> 00:09:32,618 ...in which I have grown from childhood to a man! 44 00:09:33,757 --> 00:09:35,554 I had sworn by those alps and peaks... movie player after showing the underlined text. Any idea why this happens?? i am running mint 10- julia, Totem Movie Player 2.32.0 Why do not use Subtitle Workshop, you can do with this software many things?. Regards: Romeo Ninov minor spelling correction--it should have been 'I wrote an awk prg. for it' instead of 'I wrote an awk org for it' Lets see... You've not included the script, nor the input or output. Please include more information. i wanted to do it using a script.i know that softwares are available for doing the same...but just wanted to give it a shot using awk.... but i still don't understand why am i face this problem. ok here is the script #!/usr/bin/awk -f BEGIN { FIELDWIDTHS="3 2 1 2 12 2 1 2 4" OFS="" } /-->/ { if ($4>21) { $4=$4-21; if ($4<10) { $4=0$4 } } else { $4=60-(21-$4); $2=$2-1; if ($2<10) { $2=0$2; } }; if ($8>21) { $8=$8-21; if ($8<10) { $8=0$8 } } else { $8=60-(21-$8); $6=$6-1; if ($6<10) { $6=0$6; } }; print $1,$2,$3,$4,$5,$6,$7,$8,$9} $0 !~ /-->/ {print $0} i have pasted sections of my input and output in the first post. please refer to that. Ed, agreed. Worrying if it is sensitive just to number changes etc. though. 
I'm thinking it might be relying on some hidden syntax for framing, like needing a TAB in a specific place. I checked that the times were monotonically increasing, and that it hadn't been asked to wait for an hour or something like that.

Well, that obviously is not the script that matches your description: "I wanted to add 9 seconds to each of the subtitle in my file", which is what your example data shows. So tell me about "$4=$4-21;", which seems to be taking 21 seconds OFF all the times. My guess is that you are now adding 9 seconds by changing all the -21 to +9. However, you have maybe forgotten to fix the minutes in stuff like "$2=$2-1;". The first time you flick over a minute boundary, you decrement the minutes and end up with a backward time jump, and the media player chokes on it.

Your awk has a few problems too. Prefixing like "if ($4<10) { $4=0$4 }" does not make $4 a 2-digit number where I live - it's still a number variable. For safety, use printf with explicit %.2d formatting for fields. Single statements within an outer block do not need the { ... } round them, and the { ... } blocks themselves do not take an extra semicolon.

Personally, I would make an awk function that took a time string and an adjustment (in seconds) and returned a new time string, and forget having all those fixed-width fields. Your code would look like:

```awk
function Adj (ts, q, Local, h, m, s) {
    h = 0 + substr (ts, 1, 2);
    m = 0 + substr (ts, 4, 2);
    s = 0 + substr (ts, 7, 2);
    s += q;
    while (s >= 60) { s -= 60; m++; }
    while (s < 0)  { s += 60; m--; }
    while (m >= 60 { m -= 60; h++; }
    while (m < 0)  { m += 60; h--; }
    return (sprintf ("%.2d:%.2d:%.2d%s", h, m, s, substr (ts, 9)));
}
! /-->/ { print; next; }
{ print Adj( $1, 9), $2, Adj( $3, 9); }
```

At least you only have to write and fix the time maths once. [ENDS]

OK. Actually I was adjusting the time to find out how much exactly I should add or deduct. That's how 21 came into this code. I tried your code and it works perfectly well.
I just had to add one bracket to fix a syntax issue. Thank you so much!!!

Not sure why it posted to email, but not above. Oh well. Just a quick comment to roopesh: I am also using Mint 10 Julia and prefer it over Ubuntu.

Sorry about the bug. Closing bracket missed on the function call.
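The per-field time arithmetic Ed sketches in awk can also be expressed in Python, which some readers may find easier to adapt. This is an illustrative sketch, not part of the original thread; the helper names `shift_srt_time` and `shift_line` are my own:

```python
import re

def shift_srt_time(ts: str, seconds: int) -> str:
    """Shift one SRT timestamp like '00:09:20,820' by a whole number of seconds."""
    h, m, s, ms = map(int, re.match(r"(\d+):(\d+):(\d+),(\d+)", ts).groups())
    total = h * 3600 + m * 60 + s + seconds  # normalize once, instead of
    h, rem = divmod(total, 3600)             # per-field carry/borrow logic
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def shift_line(line: str, seconds: int) -> str:
    """Shift both timestamps on an SRT timing line; leave other lines alone."""
    if "-->" not in line:
        return line
    start, end = [p.strip() for p in line.split("-->")]
    return f"{shift_srt_time(start, seconds)} --> {shift_srt_time(end, seconds)}"

print(shift_line("00:09:20,820 --> 00:09:23,618", 9))
# prints 00:09:29,820 --> 00:09:32,618
```

Because the shift is done on a single total-seconds value, the minute and hour carries come out of `divmod` for free, which avoids exactly the backward-time-jump bug discussed in the thread.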
https://it.toolbox.com/question/editing-subtitles-file-using-awk-082811?comments=all
Object-oriented design has its roots in the earliest days of computing. Programming methods emerge as the times require; programming is basically providing a set of instructions to the computer. At the beginning of the computing era, programming was usually limited to machine language programming. Machine language refers to instruction sets specific to a particular machine or processor, expressed as sequences of 0s and 1s (0100110...). Writing programs or developing software in machine language is very difficult, and developing software using raw bit sequences is practically impossible in today's scenarios. This is the main reason programmers moved to the next generation of programming languages and developed assembly language, which is close enough to English to be easy to understand. Assembly languages are used on microprocessors, and with the invention of the microprocessor, assembly language flourished and dominated the whole industry, but it was not enough. Programmers came up with something new again, namely structured and procedural programming.

Structured programming

The basic principle of structured programming is to divide the program into functions and modules. The use of modules and functions makes the program easier to understand and read; it helps to write more concise code and maintain control over functions and modules. This approach focuses on functionality rather than data, and on the development of large software applications, such as using C for modern operating system development. Programming languages: PASCAL (introduced by Niklaus Wirth) and C (introduced by Dennis Ritchie) follow this approach.

Procedural programming

This approach is also known as the top-down approach. In this method, programs are divided into functions that perform specific tasks. This method is mainly used for medium-sized applications. Data is global, and all functions can access global data.
The basic disadvantage of the procedural method is that data is not secure, because it is global and can be accessed by any function. Program control flow is realized by function calls and goto statements. Programming languages: FORTRAN (developed by IBM) and COBOL (developed by Dr. Grace Murray Hopper) follow this approach.

These programming structures were developed in the late 1970s and 1980s. Although these languages met the standard of well-structured programs and software, some problems remained. Their structure was not as good as the requirements of the time demanded; they seemed too general and not well suited to real-time applications. OOP was developed as a solution to this kind of problem.

The method of OOP

The concept of OOP was designed to overcome the shortcomings of the programming methods above, which are not very close to real-world applications. This new method brought a revolution in the field of programming methodology. Object-oriented programming (OOP) allows you to write programs with the help of classes and real-time objects. We can say that this method is very close to the real world and its applications, because the state and behavior of these classes and objects are almost the same as those of real-world objects. Let's go deeper into the general concepts of OOP, as follows:

What are classes and objects?

This is the basic concept of OOP; an extension of the structure concept used in the C language. A class is an abstract, user-defined data type. It consists of several variables and functions, and its main purpose is to store data and information. The members of a class define the behavior of the class. A class is the blueprint of an object; we can say that an object is the implementation of a class. The class is not visible to the world, but the object is.
```cpp
class car {
    int car_id;
    char colour[4];
    float engine_no;
    double distance;
    void distance_travelled();
    float petrol_used();
    char music_player();
    void display();
};
```

Here, the car class has the properties car_id, colour, engine_no and distance. It is similar to a real-world car with the same specifications, and its members can be declared public (visible to everyone outside the class), protected, or private (invisible to anyone outside). In addition, there are some methods, such as distance_travelled(), petrol_used(), music_player() and display(). In the code given below, car is a class and c1 is an object of car.

```cpp
#include <iostream>
using namespace std;

class car {
public:
    int car_id;
    double distance;
    void distance_travelled();
    void display(int a, int b)
    {
        cout << "car id is=\t" << a << "\ndistance travelled =\t" << b + 5;
    }
};

int main()
{
    car c1; // Declare c1 of type car
    c1.car_id = 321;
    c1.distance = 12;
    c1.display(321, 12);
    return 0;
}
```

Data abstraction

Abstraction refers to representing important and essential features without background details or explanation of those features. Data abstraction simplifies database design.

Physical level: it describes how records are stored, which is usually hidden from users. It can be described by the phrase "memory block".

Logical level: it describes the data stored in the database and the relationships between the data. Programmers usually work at this level, because they know the functions needed to maintain relationships between data.

View level: the application hides details of data types and information for security purposes. This level is usually implemented with the help of a GUI and displays detailed information for the user.

Encapsulation

Encapsulation is one of the basic concepts in object-oriented programming (OOP). It describes the idea of wrapping data and the methods that process the data into a single unit, such as a class in Java. This concept is often used to hide the internal state representation of an object from the outside.
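As a language-neutral illustration of the same idea, here is a sketch of encapsulation in Python rather than the article's C++/Java. The `BankAccount` class is my own invention, not taken from the article:

```python
class BankAccount:
    """Encapsulation: the balance is internal state, changed only via methods."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self._balance = balance  # leading underscore: internal, not for direct use

    def deposit(self, amount: float) -> None:
        # The class validates every change to its own state.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self) -> float:
        """Read-only view of the internal state."""
        return self._balance


acct = BankAccount("alice")
acct.deposit(100.0)
print(acct.balance)  # prints 100.0
```

Outside code never touches `_balance` directly, so the class can enforce its invariants (here, that deposits are positive) in one place.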
Inheritance

Inheritance is the ability of a class to inherit the functions or properties of another, parent class. When we write a class, we can inherit properties from other classes, so when we create a class we don't need to write all the properties and functions over and over again, because they can be inherited from another class that owns them. Inheritance enables users to reuse code and reduce redundancy where possible.

```java
import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        System.out.println("GfG!");

        Dog dog = new Dog();
        dog.name = "Bull dog";
        dog.color = "Brown";
        dog.bark();
        dog.run();

        Cat cat = new Cat();
        cat.name = "Rag doll";
        cat.pattern = "White and slight brownish";
        cat.meow();
        cat.run();

        Animal animal = new Animal();
        animal.name = "My favourite pets";
        animal.run();
    }
}

class Animal {
    String name;
    public void run()
    {
        System.out.println("Animal is running!");
    }
}

class Dog extends Animal { // the class Dog is the child and Animal is the parent
    String color;
    public void bark()
    {
        System.out.println(name + " Wooh ! Wooh !" + "I am of colour " + color);
    }
}

class Cat extends Animal {
    String pattern;
    public void meow()
    {
        System.out.println(name + " Meow ! Meow !" + "I am of colour " + pattern);
    }
}
```

Polymorphism

Polymorphism refers to the ability to process data in many forms. It allows the same task to be performed in various ways. It covers method overloading and method overriding, that is, writing a method once and performing many tasks under the same method name.
```cpp
#include <iostream>
using namespace std;

void output(float);
void output(int);
void output(int, float);

int main()
{
    cout << "\nGfG!\n";
    int a = 23;
    float b = 2.3;
    output(a);
    output(b);
    output(a, b);
    return 0;
}

void output(int var) // same function name but different task
{
    cout << "Integer number:\t" << var << endl;
}

void output(float var) // same function name but different task
{
    cout << "Float number:\t" << var << endl;
}

void output(int var1, float var2) // same function name but different task
{
    cout << "Integer number:\t" << var1;
    cout << " and float number:" << var2;
}
```

Some important points about OOP: OOP regards data as a key element, and the focus is on data, not procedures. It breaks the problem down into simpler modules. Data is not allowed to flow freely through the whole system; control flow is kept local. Data is protected from external functions.

Advantages of OOP

It is a good simulation of the real world. With OOP, programs are easy to understand and maintain. OOP provides code reusability: classes that have been created can be reused without having to be written again. OOP promotes rapid development, since classes can be developed in parallel. With OOP, programs are easier to test, manage and debug.

Shortcomings of OOP

When you use OOP, you sometimes over-generalize classes, and class relations can become superficial. OOP design is difficult and needs proper knowledge; OOP programs also need to be planned and designed properly. To program with OOP, programmers need appropriate skills, such as designing, programming, and thinking in terms of objects and classes.
https://www.fatalerrors.org/a/c-c-programming-notes-object-oriented-programming-oop-do-you-really-know.html
matplotlib.pyplot.show(*args, **kw)

When running in IPython with its pylab mode, show displays all figures and returns to the IPython prompt. In non-interactive mode it displays all figures and blocks until the figures have been closed. In interactive mode it has no effect unless figures were created prior to a change from non-interactive to interactive mode.

Hey @anonymus, to change the default rc settings in a Python script, you need to know that all the rc values are stored in a dictionary-like variable known as matplotlib.rcParams. This is global to the package and can be modified like this:

```python
mpl.rcParams['lines.linewidth'] = length
mpl.rcParams['lines.color'] = 'color'
plt.plot(data)
```

Hope this helps.
https://www.edureka.co/community/34032/record-limits-displayed-plot-using-matplotlib-show-module?show=34169
Description: This example shows how threads can help us make a user interface more responsive when we have background jobs. It has pairs of buttons (cmdUnresponsiveStart, cmdToggleForUnResponsiveStart) and (cmdresponsive, cmdToggleForResponsive). There is a boolean runflag which determines whether the counter needs to be incremented; the counter value is set in txtcounter. When we click on the toggle buttons, the counter is stopped, because go checks this flag.

What is needed to compile? The .NET SDK.

How to compile?

```
csc /r:System.dll /r:System.winforms.dll /r:Microsoft.win32.interop.dll /r:System.Drawing.dll responsiveui.cs
```

Code: namespace
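The run-flag pattern described above is not specific to C#. Here is a minimal, hypothetical Python sketch of the same idea; the names `CounterWorker`, `run_flag` and `counter` are mine, not taken from the elided C# source:

```python
import threading
import time

class CounterWorker:
    """Background worker that increments a counter while a run flag is set,
    leaving the 'UI' (main) thread free to respond in the meantime."""

    def __init__(self):
        self.counter = 0
        self.run_flag = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def _loop(self):
        while True:
            if self.run_flag.is_set():  # a toggle button would clear/set this
                self.counter += 1
            time.sleep(0.001)           # yield so the main thread stays responsive

    def start(self):
        self.run_flag.set()
        self._thread.start()

    def toggle(self):
        # Equivalent of clicking the toggle button: pause or resume counting.
        if self.run_flag.is_set():
            self.run_flag.clear()
        else:
            self.run_flag.set()


w = CounterWorker()
w.start()
time.sleep(0.05)  # main thread is free to do other work while counting happens
w.toggle()        # pause the counter
```

Because the loop checks the flag instead of being killed, pausing and resuming is cheap and the counter's state survives toggles, which is the same behavior the C# buttons provide.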
http://www.c-sharpcorner.com/UploadFile/hsankar/UIresponsiveusingThreading11162005054732AM/UIresponsiveusingThreading.aspx
Important ReactJS Interview Questions and Answers

Recently, ReactJS has become very popular because of extra features like simplicity and flexibility compared with other SPA frameworks. Many traditional web developers are moving towards React due to its minimal learning curve. Here is a curated list of ReactJS interview questions and answers.

1. What is React?

React is an open-source front-end JavaScript library for building user interfaces, especially single-page applications, created and maintained by Facebook. It is component-based and uses a virtual DOM.

2. What is the render method for?

Every React class component must have a render() method. It returns the React elements that describe what should appear on screen, and it should be pure: given the same props and state, it returns the same output.

3. What are the lifecycle methods of React?

The main lifecycle methods in React 16.3+ are getDerivedStateFromProps, render, componentDidMount, shouldComponentUpdate, getSnapshotBeforeUpdate, componentDidUpdate and componentWillUnmount.

4. What is Virtual DOM?

React builds up its own "virtual DOM", which is a lightweight representation of the DOM optimized for React's diffing algorithms and reconciliation process. Virtual DOM changes eventually propagate to the actual DOM at the end of the reconciliation process.

5. What is state in React?

The state of a component is an object that holds some information that may change over the lifetime of the component. We should always try to make our state as simple as possible and minimize the number of stateful components. Example:

```jsx
class Demo extends React.Component {
  constructor(props) {
    super(props)
    this.state = { message: 'Hello world' }
  }
  render() {
    return (
      <div>
        <h1>{this.state.message}</h1>
      </div>
    )
  }
}
```

6. What are the major features of React?

- Synthetic Events – In React, all browsers' native events are wrapped by instances of SyntheticEvent. It provides a cross-browser interface to a native event, which means you do not need to worry about incompatible event names and fields.
- JSX – JSX is an XML-like syntax extension to ECMAScript (the acronym stands for JavaScript XML). Basically it just provides syntactic sugar for the React.createElement() function, giving us the expressiveness of JavaScript along with HTML-like template syntax.
- React Native – React Native is a mobile application framework created by Facebook. It is used to develop applications for Android, iOS and UWP by enabling developers to use React along with native platform capabilities.
- Component-Based – In React everything is a component; the web page is divided into small components to create a view (or UI). Every part of the application is a component.

7. What is JSX?

JSX is an XML-like syntax extension to ECMAScript (a superset of JavaScript). Basically it just provides syntactic sugar for the React.createElement() function, giving us the expressiveness of JavaScript along with HTML-like template syntax. In the example below, the text inside the <h1> tag is returned as a JavaScript function to the render function.

```jsx
class AppDemo extends React.Component {
  render() {
    return (
      <div>
        <h1>{'Hello world!'}</h1>
      </div>
    )
  }
}
```

8. How to create components in React?

- Using a class – You can create a JavaScript class that extends (a.k.a. inherits from) React's Component class and write the component's template in the render() method.

```jsx
class MyComponent extends React.Component {
  render() {
    return (
      <div>
        <h1>Hello World!</h1>
      </div>
    )
  }
}
```

- Using a function – This is the simplest way to create a component. These are pure JavaScript functions that accept a props object as the first parameter and return React elements.

```jsx
function DemoComponent({ message }) {
  return <h1>{`Hello, ${message}`}</h1>
}
```

9. When to use a Class Component over a Function Component?

It depends. If the component needs state or lifecycle methods, use a class component; otherwise use a function component.

13. What are synthetic events in React?

SyntheticEvent is a cross-browser wrapper around the browser's native event. Its API is the same as the browser's native event, including stopPropagation() and preventDefault(), except the events work identically across all browsers.

14. How to write comments in React?

Comments in React/JSX are similar to JavaScript multiline comments but are wrapped in curly braces.

```jsx
<div>
  {/* This is a comment */}
  {`Welcome ${user}, Good Morning`}
</div>
```

15. What is reconciliation?
When a component's props or state change, React decides whether an actual DOM update is necessary by comparing the newly returned element with the previously rendered one. When they are not equal, React will update the DOM. This process is called reconciliation.

16. Why is it necessary to capitalize component names?

It is necessary because components are not DOM elements; they are constructors. Also, in JSX, lowercase tag names refer to HTML elements, not components.

17. What are fragments?

It is a common pattern in React for a component to return multiple elements. Fragments let you group a list of children without adding extra nodes to the DOM.

```jsx
render() {
  return (
    <React.Fragment>
      <ChildComponentA />
      <ChildComponentB />
      <ChildComponentC />
    </React.Fragment>
  )
}
```

19. What are stateless components?

Stateless components compute their output purely from the props they receive and hold no internal state of their own.

21. What's the difference between a 'smart' component and a 'dumb' component?

Smart components manage their own state or, in a Redux environment, are connected to the Redux store, whereas dumb components are driven completely by the props passed in from their parent and maintain no state of their own.
https://questionsforinterview.in/reactjs-interview-questions/
VYSOKÉ UČENÍ TECHNICKÉ V BRNĚ
BRNO UNIVERSITY OF TECHNOLOGY

FAKULTA INFORMAČNÍCH TECHNOLOGIÍ
ÚSTAV INFORMAČNÍCH SYSTÉMŮ
FACULTY OF INFORMATION TECHNOLOGY
DEPARTMENT OF INFORMATION SYSTEMS

NÁSTROJ PRO DOTAZOVÁNÍ SSSD DATABÁZE
TOOL FOR QUERYING SSSD DATABASE

BAKALÁŘSKÁ PRÁCE / BACHELOR'S THESIS

AUTOR PRÁCE / AUTHOR: DAVID BAMBUŠEK
VEDOUCÍ PRÁCE / SUPERVISOR: Doc. Dr. Ing. DUŠAN KOLÁŘ

BRNO 2013

Abstrakt

Tato práce je zaměřena na databáze a konkrétně pak na databázi SSSD. SSSD je služba, která poskytuje jedno kompletní rozhraní pro přístup k různým vzdáleným poskytovatelům a ověřovatelům identit spolu s možností informace z nich získané ukládat v lokální cache k offline použití. Práce se zabývá jak databázemi obecně, tak pak hlavně LDAP a LDB, které jsou použity v SSSD. Dále popisuje architekturu a samotnou funkci SSSD. Hlavním cílem pak bylo vytvořit aplikaci, která bude administrátorům poskytovat možnost prohlížet všechna data uložená v databázi SSSD.

Abstract

This thesis is focused on databases, particularly on the SSSD database. SSSD is a set of daemons providing an option to access various identity and authentication resources through one simple application, which also offers offline caching. The thesis describes general information about databases, but mainly focuses on LDAP and LDB, which are used in SSSD. In addition, it also describes the function and architecture of SSSD. The main goal of this thesis was to create a tool that is able to query all the data stored in the SSSD database.
Klíčová slova

Databáze, LDAP, LDB, SSSD, Red Hat, dotazovací nástroj

Keywords

Databases, LDAP, LDB, SSSD, Red Hat, querying tool

Citace

David Bambušek: Tool for querying SSSD database, bakalářská práce, Brno, FIT VUT v Brně, 2013

Declaration

I declare that I have written this bachelor's thesis independently under the supervision of Doc. Dr. Ing. Dušan Kolář.

David Bambušek
May 14, 2013

Acknowledgements

I would like to thank the supervisor of my bachelor's thesis, Doc. Dr. Ing. Dušan Kolář, who provided valuable pedagogical and factual advice for this work. Equally great thanks belong to my technical consultant, Ing. Jan Zelený, for all the necessary advice and information that helped me complete this work successfully. I would also like to thank Red Hat, which allowed me to work on this thesis under its auspices, as well as all its employees, especially the SSSD team led by Ing. Jakub Hrozek, who reviewed my code and provided further advice and recommendations.

© David Bambušek, 2013. This thesis was created as a school work at Brno University of Technology, Faculty of Information Technology. The thesis is protected by copyright law and its use without the author's permission is illegal, except for the cases defined by law.

Contents

1 Introduction
2 Introduction to Databases
   2.1 Relational Database
      2.1.1 Overview and Terminology
      2.1.2 Relational Model
      2.1.3 Structure
      2.1.4 Constraints
   2.2 Tree Databases
      2.2.1 XML Databases
   2.3 Other Databases
      2.3.1 NoSQL
      2.3.2 Object Oriented Databases
3 Directory Services
   3.1 LDAP
      3.1.1 Name Model
      3.1.2 Informational Model
      3.1.3 Functional Model
      3.1.4 Security Model
      3.1.5 LDIF
      3.1.6 LDAP Usage
      3.1.7 LDAP Implementations
   3.2 LDB
      3.2.1 TDB & DBM
4 SSSD
   4.1 User Login in Linux
      4.1.1 Identification
      4.1.2 Authentication
      4.1.3 Problems Using NSS and PAM
      4.1.4 SELinux
   4.2 Basic Function
   4.3 Architecture
      4.3.1 Processes
      4.3.2 Communication
5 SSSD Database
   5.1 Users
   5.2 Groups
   5.3 Netgroups
   5.4 Services
   5.5 Autofs Maps
   5.6 Sudo Rules
   5.7 SSH Hosts
6 Querying Tool
   6.1 Program Specification
   6.2 Basic Information and User Interface
   6.3 Application Architecture
   6.4 Tool Design
   6.5 Implementation
      6.5.1 Initialization
      6.5.2 Query on Basic Objects
      6.5.3 Domain and Subdomain Information
      6.5.4 Printing Results
      6.5.5 Errors
   6.6 Tests
7 Conclusion
A DVD contents
B Manual
   B.1 Unpacking
   B.2 Dependencies
   B.3 Compiling
   B.4 Exemplary data
   B.5 Running the tool
C Man page
D Errors
E Application Constants, Flags and Arguments
F Test results

Chapter 1

Introduction

Today, thanks to the Internet, society has become one globally connected network of people, where the most important thing that makes people powerful is knowledge, in other words information. Unfortunately, people are not able to store all the information they know in their own brains, and therefore we are forced to use computers to help us with this task. For the purpose of storing the huge quantities of information that we need, we created computer databases. Databases can store various types of data, reflecting real models from our world, or can store purely abstract data. To identify ourselves on the Internet or any other network, we use virtual profiles that store data about ourselves and about our membership in certain groups or companies. This thesis describes a tool that is able to query such a database of users and the additional information provided for managing virtual identity; in particular, an SSSD database. SSSD is a set of daemons providing an option to access various identity and authentication resources through one simple application, which also offers offline caching.

In chapter two, we will look generally at various existing types of databases and give a more detailed description of the most used one, the relational database. Chapter three is focused on directory services: LDAP and LDB, which is not a directory service by itself but an abstraction layer over lower key-value types of databases, offering an LDAP-like API. LDB is the database used in SSSD. In chapter four we will take a closer look at SSSD itself and introduce its basic purpose, function and architecture, and in chapter five a complete description of the SSSD database will follow. The sixth chapter describes the tool for querying SSSD databases; there we will find the complete architecture of this application, its UI and examples of usage.
Conclusions and further development will be stated in the last, seventh chapter.

Chapter 2

Introduction to Databases

This chapter widely derives from sources [1] and [6]. The term database can be understood as a set or collection of data that is somehow organized. It is typical that a database reflects a real existing concept, system, structure or body of information, under which we can imagine, for example, cities and their populations, company employees, or items in a store. We can work with this database and store, change, delete or query the information stored in it. To work with a database, we use a database management system (DBMS), which is software allowing us to run all the mentioned operations and to administer the database. The best known and most widely used DBMSs are surely MySQL, SQLite, Microsoft Access and PostgreSQL. When speaking about a database, we usually understand this term as both the data and its structure, as well as the database management system, but formally a "database" is just data wired to its data structures.

We can divide the functions of a DBMS into four groups:

1. Data control - maintaining integrity, taking care of security, monitoring, dealing with permissions to work with a database (creating and managing users)
2. Data definition - creating/modifying/removing data structures to/from a database
3. Data maintenance - inserting/updating/deleting data from a database
4. Data retrieval - data mining done by users to work with received information or to process it for other purposes, by querying and analyzing

Each database, with its DBMS, works and looks according to its database model. We will get back to each type in the next subsections; for now, to mention them, they are historically divided into three major groups: navigational, relational and post-relational databases. During the first era, in the 1960s, there were two main representatives of the navigational model: the hierarchical model developed at IBM and the CODASYL (network) model. Navigation in CODASYL was based on linked data creating a huge network.
It was possible to search for entries using their primary key (CALC key), by using relationships from one entry to another, or by going through all entries sequentially. IBM's DBMS IMS was quite similar to CODASYL, but instead of a network model it used a stricter hierarchical model. In the 1970s the world first encountered relational DBMSs, introduced by Edgar Codd of IBM; a more detailed description will be offered in the next section. The main difference is that the database is formed from tables, each used for a different entity. The main idea in terms of how to search for entries is that we should search data by content and not by following links, as it was with navigational databases.
A relational database is created upon the basis of the relational model, and the software operating such a database is called a relational database management system (RDBMS). It is nowadays the most used type of database; it can satisfy the needs of solutions to most common problems and can be applied to most models that can be found in our world. The most simplified view of a relational database is that it consists of tables, which are composed of rows, where each field represents one of the attributes. Relational database theory is a mathematical theory and has its own terminology; these terms are slightly different from the terms we use when talking about SQL, which is the main language operating on relational databases. Table 2.1 shows the differences.

mathematical theory     SQL
relation, base relvar   table
tuple                   row
attribute               column name
attribute value         column data
derived relvar          query result

Table 2.1: mathematical vs. SQL terminology

2.1.2 Relational Model

The basic idea of the relational model is that all data is mathematically represented as n-ary relations, which are subsets of a Cartesian product of n domains. The view of data in this mathematical model is done through two-valued predicate logic, where each proposition can be either true or false (later there were attempts to change this to three- or four-valued predicate logic by adding an unknown value and then valid and invalid unknown). Data is handled according to the rules of relational algebra or relational calculus. Each relational database has constraints, which lead its designer to create a consistent representation of some information model. The process of creating a consistent database is called normalization, which leads to the selection of the most suitable logically equivalent alternative of the database. The basic building block of a relational database is the domain, usually referred to as a type. A tuple is an ordered set of attributes. An attribute consists of an attribute name and a data type name.
Every relation is composed of a heading and a body. The heading is a collection of attributes and the body is the rest, meaning a set of n-tuples. A relation, which is visualized as a table, consists of a set of n-tuples; a tuple is then similar to a row. A relvar is a variable of a specific relation type; it is used to differentiate between a variable containing a relation and the relation itself. This model was introduced to the world by E. F. Codd, who worked in IBM's San Jose Research Laboratory in the 1970s. Later, other people such as Chris Date and Hugh Darwen with their teams were the ones who developed and maintained the relational model. Codd made his "12 rules" (in fact 13, because they are numbered from zero to 12) to define what a database management system must fulfill in order to be accepted as relational. In fact, though, none of the RDBMSs used nowadays comply with all 13 rules; the only example that does is Dataphor.

2.1.3 Structure

When speaking about the structure of a relational database, we use the term table to visualize the basic building block of the database. It is a mathematically incorrect term; in theory we use the term relation. Now we will go through other basic terms of relational databases, which will be shown on an example. We want to make a database of city citizens, where we want to know their name, surname, address and date of birth; further, we assume that each person has a unique birth number. Each of these characteristics is called an attribute. Each attribute can take one of different values (for the address it would be one of the city's streets). The complete set of values that an attribute can take is called a domain. The information that characterizes one of the citizens puts in relation different values of attributes from various domains. Such a group of attributes belonging together is called an n-tuple. Here is a complete mathematical definition of what has just been explained.
Given a collection of sets D1, D2, ..., Dn (not necessarily distinct), R is a relation on these n sets if it is a set of ordered n-tuples <d1, d2, ..., dn> such that d1 belongs to D1, d2 belongs to D2, ..., dn belongs to Dn. The sets D1, D2, ..., Dn are the domains of R; the value n is the degree of R. [1] In written text we write simply R(A1, A2, ..., An) if we want to describe a relation R with attributes A1...An.

Figure 2.1: Relational database

2.1.4 Constraints

Each database usually contains more than one table, and the information in these tables can relate to each other. For example, we already have the table of our citizens and now we also have a table with city houses. Each house has a different size, a different status and a different owner. And here we get to a relation between our two tables. A house owner is always a citizen. So there is a relationship "owns" between those tables. In a relational database this relation is purely logical, not physical (pointers) as it was before in pre-relational models, and it is created upon the equality of values in certain rows of both tables; in the example it will be owner and name. To make it possible to create such relations, we need to make sure that each row has a unique way to identify itself, and then there must exist a link to this identifier in the second table. In relational databases this is achieved by using keys: for identification it is a primary key and for links it is a foreign key. Data in every database is always somehow constrained. There are two types of constraints, general and specific. Specific constraints depend on the usage of the database, and data is constrained in terms of data type or minimum/maximum possible value. On the other hand, general constraints are the same for every database and they affect key attributes.

Candidate and Primary Key

A primary key is used to identify each row of a table, therefore it must be unique. We usually use logins, birth numbers or just integers as IDs for primary keys.
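The citizens-and-houses example can be sketched with SQLite through Python's built-in sqlite3 module. This is a minimal illustration only; the table and column names are made up for the sketch and are not taken from the text.

```python
import sqlite3

# In-memory database; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("""
    CREATE TABLE citizen (
        birth_number TEXT PRIMARY KEY,   -- unique identifier (the chosen candidate key)
        name         TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE house (
        id    INTEGER PRIMARY KEY,
        owner TEXT REFERENCES citizen(birth_number)  -- foreign key; may be NULL
    )""")

conn.execute("INSERT INTO citizen VALUES ('800101/1234', 'David Bambusek')")
conn.execute("INSERT INTO house (owner) VALUES ('800101/1234')")
conn.execute("INSERT INTO house (owner) VALUES (NULL)")  # a house with no owner

# A foreign key pointing at a non-existent citizen is rejected by the DBMS:
try:
    conn.execute("INSERT INTO house (owner) VALUES ('999999/0000')")
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
```

Note that the NULL owner is accepted (a foreign key may be absent), while a value that matches no candidate key in the referenced table is refused, which is exactly the consistency guarantee described above.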
Sometimes there can be more candidates for the primary key, so we have to choose one of these candidate keys to be primary. Attribute CK of relation R is called a candidate key if these rules apply:

• The values of attribute CK in relation R are unique - there are no two n-tuples in the relation with the same value of this attribute.
• Attribute CK is specific and minimal - it is not composed of other attributes and cannot be divided into simpler ones.

These rules must be valid at any given time. The primary key is then one of the candidate keys; the rest of them are called alternative keys or secondary keys. We usually choose the simplest candidate key to become primary. Primary keys are used in other tables as links to the table containing the primary key as an attribute; therefore, not only does it have to be unique, but it also cannot be unset or unknown. In terms of SQL we call such a value NULL, so a primary key can never be NULL.

Foreign Key

Attribute FK of relation R is called a foreign key if these rules apply:

• Each value of FK is fully inserted or fully not inserted.
• There is a relation R2 with candidate key CK such that any value of FK equals the CK value of some n-tuple from that relation.

Again we can draw some conclusions. We see that foreign keys do not have to have a value; that might be the case where, for example, some house is not owned by anybody, so the attribute owner will be empty. It is important that every foreign key really leads to some candidate key in the other table, because that is the thing keeping the database consistent, and it is the responsibility of the administrator/programmer to ensure it.

2.2 Tree Databases

These databases, also known as hierarchical, were mainly used at the beginning of the computer era, before the times when the relational model became the most common standard. As the name suggests, data in this model is stored in tree structures.
From a different point of view, a tree allows us to store data in a child-parent structure, which is also referred to as a 1-n structure, because each entry can have only one parent, whereas a parent can have multiple children or none. In this type of database, entries are connected by these parent-child relations, which are in fact simple pointers. To look at a hierarchical database in the same way as at a relational one: a table would be an entity type, a row a record and, at last, a column would be an attribute. These databases are not much used nowadays, and if they are, it is just for special models like geographic information systems or file system data. The most used implementations of hierarchical databases at present are the Windows Registry developed by Microsoft and IBM's IMS.

2.2.1 XML Databases

According to [12], an XML database is another type of database model, this one specific in that it stores data in the XML format. This data can be queried as in other models, but it can also be exported and serialized into other formats. We divide XML databases into three major groups:

• Native XML Database (NXD) - the basic unit of this database is an XML document; it does not require any underlying physical model, as it can stand on a relational, object-oriented or any other kind of database, or even just on indexed or compressed files; it defines an XML document logical model, which at least must have elements, attributes, document order and PCDATA (Parsed Character Data).
• XML Enabled Database (XEDB) - this kind of database also has an XML mapping layer that manages the retrieval and storage of XML data. Mapped data is stored in a specific format, so the original XML meta-data can be lost. To manipulate this data we can use either special XML technologies such as DOM (Document Object Model) and XPath, or SQL (Structured Query Language).
• Hybrid XML Database (HXD) - this kind of database can be treated as both an NXD and an XEDB; it is up to the application which way it prefers. An implementation of an HXD is, for example, Ozone.
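As a brief illustration of how applications process documents of the kind an XML database stores, Python's standard xml.etree.ElementTree module can parse a small document; the element and attribute names here are made up for the example.

```python
import xml.etree.ElementTree as ET

# A small, hypothetical XML document of the kind an XML database stores.
doc = """
<person age="21" position="student">
    <name>bambusekd</name>
</person>
"""

root = ET.fromstring(doc)        # parse the text into an element tree
name = root.find("name").text    # content of a child element
age = root.get("age")            # value of an attribute on the root element
```

Both the element content and the attribute value are available after parsing, which is the document logical model (elements, attributes, order, character data) mentioned for native XML databases.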
XML databases find their place with informational portals, product catalogues, business-to-business document exchange and many others. When applied to these solutions, they offer far better performance than relational databases and they are more convenient and easier to use, manage and expand.

XML

XML stands for Extensible Markup Language, which is nowadays a format that many applications use for encoding their documents. It was created to be both human-readable and machine-readable. The fundamental unit of an XML document is an element, which is created using tags, text constructions that begin with "<" and end with ">". For example:

<person>bambusekd</person>

would be an element representing a person bambusekd. Each element can have attributes; those can be either written inside the beginning tag or expressed as separate child elements:

<person age="21" position="student">bambusekd</person>

or

<person>
  <name>bambusekd</name>
  <age>21</age>
  <position>student</position>
</person>

Both examples provide the same information. XML was designed to be simple and to have a wide scale of usability on the Internet. It is a data format based on Unicode, so it can serve in any of the world's languages. Despite the fact that XML was designed for documents, it also found a place in many web services as a representation of arbitrary data structures. XML is used in all major office tool applications such as Microsoft Office, OpenOffice, LibreOffice etc. XML has also been father to many other formats that use XML syntax, such as XHTML, RSS and others.

2.3 Other Databases

2.3.1 NoSQL

The term NoSQL was first used in 1998 by Carlo Strozzi, who described his lightweight relational database as NoSQL [10], because it did not follow the design, structure and rules of a typical SQL database. From then on, you could not hear much about NoSQL, until a boom of social networks and a need for storing huge amounts of simple data came in 2009.
In this year a developer from Last.fm (a free Internet radio catalogue) made a statement that NoSQL databases are those that do not care about atomicity and consistency the way traditional relational databases do. The main goal of a NoSQL database is to provide a lighter database that is faster and has higher availability; this is achieved by a model with looser consistency, which allows faster horizontal scaling. These databases consist of key-value entries, where relationships between them are not that necessary. NoSQL is focused mainly on adding data to the database and retrieving it, not more. This, together with the omission of relations, allows this kind of database to be used only for certain types of models. Such models can be millions of posts on a social network like Facebook or Twitter, where there is no need for relations between data; we just need to store it and retrieve it. The term NoSQL does not mean that these databases are not SQL databases, because in fact some of them allow SQL-like queries; it rather means "not only SQL". Work on a language that would be specifically made to query NoSQL databases began in 2011. This language is called UnQL (Unstructured Query Language) and it can query collections of documents, which in the relational model would be tables and rows. Unfortunately, UnQL is not as capable as SQL in matters of data definition, so there is no parallel in it to SQL's CREATE TABLE or other such statements.

Document Store Databases

The basic building block of these databases, as the name already says, is a document. As in the relational model there are tables, here there are documents. Each implementation treats the term document differently, but all of them encode data using one of the formats such as JSON, XML, YAML or PDF. In comparison to the relational model, records here do not have to follow a strict schema of data structure; that means that one type of entry can have different sets of attributes on different occasions.
This allows adding any new information at any time without any problems with a predefined data structure. To address a document we use a unique identifier that can be either a name or a URI. Usually these values are hashed into indexes, so data retrieval of any document is very fast. The basic query needed with this database is to retrieve a document based on its identifier; however, some implementations also allow us to retrieve documents based on their content, but this depends on each implementation.

JSON - JavaScript Object Notation is a standard made to represent data structures and associative arrays. Although it is derived from JavaScript, which is a scripting language used mainly in the web environment, it is language-independent. The JSON standard can be found in RFC 4627; it was first introduced in 2001 by Douglas Crockford and it is nowadays a very popular serialization format used in server-client communication, next to XML and others. It contains just a few basic data types: number, string, boolean, array, object and null. The format was created in human-readable form. Compared to XML it has lower demands on data processing. An example of a person's data encoded in JSON looks like this:

{
  "firstName":"David",
  "lastName":"Bambusek",
  "job": {
    "occupation":"student",
    "year":"3"
  },
  "age":"21"
}

YAML - is a recursive acronym for "YAML Ain't Markup Language", previously "Yet Another Markup Language". It was brought to the light of the computer world in 2001 by Clark Evans and it is another data serialization format, based on a combination of programming languages such as Python, Perl or C, on XML and on the format of electronic mail. It is again a representative of human-readable formats, as were JSON and XML. YAML's main goal is to offer a way to map high-level language data types, such as lists, scalars or associative arrays, so that they can be easily modified or viewed, and so the output can be used for configuration files or document headers.
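The JSON record shown earlier in this section can be produced and read back with Python's standard json module; a minimal sketch of the serialize/parse round trip:

```python
import json

# The same person record as in the JSON example above.
person = {
    "firstName": "David",
    "lastName": "Bambusek",
    "job": {"occupation": "student", "year": "3"},
    "age": "21",
}

text = json.dumps(text := None) if False else json.dumps(person)  # serialize to a string
restored = json.loads(text)                                       # parse it back
```

The round trip preserves the whole nested structure, which is what makes JSON convenient for server-client communication.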
Thanks to its syntax with white-space delimiters, it is very easy to work with YAML using grep or one of the scripting languages such as Perl. The same example used in the JSON paragraph would look this way in YAML:

first_name: David
surname: Bambusek
job:
  occupation: student
  year: 3
age: 21

Graph Databases

Graph databases have a completely different structure than relational databases, as can be deduced from article [9]. They consist of nodes/vertices, edges/relationships and properties/attributes. A graph database is index-free; this is due to the fact that every entry has a direct pointer to its neighboring elements, so no lookups are needed. So, as relational databases are based on the mathematical theory of relations, graph databases are built upon graph theory. Graph theory is useful in many ways: graph algorithms are used to find shortest paths and to measure statistics like PageRank or closeness, and it also offers a basis for high-performance databases. In a graph, a node represents an entry, meaning a person, a company etc. A property is information that describes, details or specifies a node. If there is a node David Bambusek, then one of its properties could be "student". Then we have the last item of the graph - the edge, which connects nodes with other nodes or nodes with properties, so in fact it represents the relationship between the two items it connects. These edges are the most important thing in the graph model and carry the biggest amount of information.

Figure 2.2: Graph database

The way to get some worthwhile information is to examine the connections and properties of the nodes we are interested in, or of those we are led to on the way from the starting nodes. In comparison to relational databases, graph databases are faster with smaller amounts of data, as they do not need to perform join operations. It is better to use them in cases where it is known that the database will change, grow and in some way evolve, meaning we will add new kinds of nodes or add new information to existing ones.
On the other hand, when we compare the performance of a relational and a graph database on the same set of large data, the relational one wins.

2.3.2 Object Oriented Databases

Object oriented databases, as described in [14], differ from all other types of databases in their approach to storing data. In other databases, data is stored using basic data types such as integers and strings. In this database model, data is stored in objects. The same thinking is used in OOP languages such as C++ or Java. Objects were made so we could better reflect real-world objects. Each object is composed of two basic parts. The first is a set of attributes; an attribute is an object's characteristic and can be either a simple data type such as an integer, or again an object. The second part is a collection of methods; these define the object's behavior. Altogether, an object contains data as well as executable code. The creation of a new object is called instantiation. In this model there exist so-called classes, which are in fact templates for objects: they define their attributes and methods, but they do not contain data by themselves. To identify objects, an OID is used in this model, which is a unique ID of each object. There are a few fundamentals of the object oriented approach that are valid also in databases. The first is that objects communicate among themselves using messages; depending on the received message, an object does something that is defined by its behavior. The next is encapsulation, which means that the object's inner implementation is not visible to other objects; they can only see the interface. Therefore the object itself, or rather the object's data, can be changed only by its own methods. This ensures there is no possible way to make an incorrect change to the data stored in an object. Inheritance is also very important; in fact it is one of the most useful features of the OOP approach. It is a way to make relationships between objects and to add some more specifications to new objects based on already created ones.
We should use object databases in cases when we have very complex data with a lot of many-to-many relationships. There is no sense in using them for small databases with simple relationships. The advantages of object databases over relational ones are that, thanks to inheritance, we do not need that much code; the data model reflects the real world by using objects; navigation is much easier; there is reduced paging; and finally they work very well with distributed architectures. On the other hand, the relational model with tables is simpler, it is more efficient with simple data, and there is generally bigger support, more software and more standards for relational databases. It is fair to say that the object oriented approach is still a bit wilder than the more stable and standardized relational one.

Chapter 3

Directory Services

A complete description of directory services is given in articles written by the ITU-T (International Telecommunication Union - telecommunications sector) [7] [8]. Directory services allow us to access data of the directory type. By the term directory we mean a specialized database of telephone numbers, names, email addresses and so on. These databases have their ancestors in the printed directories of telephone numbers used by millions of people around the globe. When computers came on the scene during the 1980s, new network services using the Internet to create a global telephone number directory also showed up. These directories were based on the ITU-T X.500 standards, and later, in the 1990s, a new standard was introduced: the IETF Lightweight Directory Access Protocol (LDAP), which completely replaced X.500. The LDAP architecture, which is based upon the X.500 architecture, was created in 1997 and has had many updates. Today the most current recommendations for LDAP are RFC 4510 and RFC 4511. The main goal for LDAP was to create a distributed directory service that would treat all users equally and that could be easily extended - basically the same reason X.500 was created for, but due to the difficulty of implementation X.500 was never as successful as LDAP.
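The hierarchical idea behind such directories can be sketched as nested dictionaries, where a name is resolved by walking the path of names down from the root. This is an illustrative toy model only (names and the mail value are made up), not how any real directory server stores data.

```python
# A toy directory information tree: nested dictionaries keyed by names.
dit = {
    "com": {
        "example": {
            "David Bambusek": {"mail": "david@example.com"},
        },
    },
}

def lookup(tree, dn):
    """Resolve a name given as a list of parts, most significant part first."""
    node = tree
    for name in dn:
        node = node[name]
    return node

entry = lookup(dit, ["com", "example", "David Bambusek"])
```

Reading the path top-down mirrors how a distinguished name is built from the names of an entry's superiors, from the most general entry down to the entry itself.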
Architecture

X.500 defines a directory as "a set of open systems, cooperating in the intention of preserving a logical database of information containing a set of objects from the real world" [7]. The architecture of the database is hierarchical and is called the Directory Information Base (DIB). It consists of entries, where each of them has a set of information about itself, called attributes. Each attribute has a data type and a value. Each entry corresponds to some object class and, according to that class, has different attributes; in other words, the object class defines the structure of each entry. We can imagine the DIB as a tree that is organized from top to bottom, where the topmost entry represents the most generalized entry, for example a country; going down the tree, we meet regions, then companies and so on. Each entry needs to be identified somehow; for this we use the distinguished name (DN). It always consists of the DN of its superior entry and the entry's own name. The main difference between a directory and a relational database is in what we expect from them. Directories usually hold stable data that is supposed to be read many times rather than modified. We usually work with just one entry, so there is no need for operations like joins or complicated search functions over more than one entry. A directory can contain duplicated data if it helps its performance. Most applications that use directories expect a quick response from them. Data stored in the DIT can be physically stored on one server or can be distributed over more servers. It is wise to store it on more servers in case we want to avoid overloading one server with too many requests; we can also make a replica of some part of the DIT, so requests for that part can be divided among more servers.

X.500 Directory

This standard is the ancestor of and template for all directory services created later.
It was created in 1988 and it is environment- and architecture-independent. Its concept is quite similar to DNS, and it consists of entries representing all countries of the world. X.500 uses many protocols:

• Directory Access Protocol (DAP) - to access the database of the directory service
• Directory System Protocol (DSP) - to exchange information
• Directory Information Shadowing Protocol (DISP) - for data sharing between servers

But X.500 was too difficult to implement, and therefore a lightweight version called LDAP was created. It is much faster, more effective and uses just one protocol, also called LDAP.

3.1 LDAP

The Lightweight Directory Access Protocol (LDAP), described in [13], represents not only an Internet protocol for directory access, but an entire directory service. LDAP is based on the ITU-T X.500 standard and is described by four so-called models. Each of them describes a different view of the directory service:

• Name model - describes the directory structure made of entries that are identified by distinguished names (DN).
• Informational model - describes the information which all together forms the so-called directory schema: the data types, the operations that can be done upon them and the information on how to store them. It also describes entries, attributes and their possible values.
• Functional model - describes the operations done by the LDAP protocol, where the most used operation is search; this model specifies where to search, using what keys and so on.
• Security model - describes how data is secured in the database, that is, whether there is some authentication, encryption and access rights.

3.1.1 Name Model

The name model describes the organization of data and the relationships between items: how entries are stored in a tree structure called the Directory Information Tree (DIT). In this tree, each node is an entry that can have an ancestor/parent and descendants/children, to which it is connected by an edge.
The structure of the tree corresponds to a company structure, a geographical structure or any other hierarchical structure that usually reflects some real existing model. At the very top there is a special root entry. To identify an entry we use the distinguished name (DN), which is made from the DNs of all the entry's parents. There is also a special type of entry called an alias, which refers to some other entry.

3.1.2 Informational Model

The informational model [15] describes the structure of the entries that form the DIT (Directory Information Tree). Each entry contains information about itself; this information is stored using attributes, and an entry can have several attributes. Each attribute has a data type and a value, which can be either a compound value or just a simple value. An entry's main identification is its DN (Distinguished Name), which is created by combining the node's name with the names of the nodes situated on the virtual route in the tree structure from the actual node to the tree's root node. Let's make an example of a Red Hat employee David Bambusek; the entry will have an attribute cn, which stands for common name, and mail, which stands for e-mail address, so in the directory there would be the entry:

dn: cn=David Bambusek, dc=redhat, dc=com
cn: David Bambusek
mail: xbambu02 [at] stud.fit.vutbr.cz

Each entry belongs to a certain object class; these classes can be edited and created by the database administrator. "An object class is a set of objects that share the same characteristic." [8] The basic properties of a class are that it defines which attributes an entry derived from that class must and may have, defines which objects and entries belong to it, takes care of placement in the DIT and checks the operations being done on entries. As is common in the OO (object oriented) approach, classes can inherit some features of their parents. In LDAP we have three main types of classes:

• Abstract class - no entry can be based on this class; they only serve as templates for other classes.
• Structural class - each entry must be made based on at least one of these classes; every structural class is based (directly or indirectly) on the highest abstract class, top.
• Auxiliary class - used to extend the attributes of entries.

A formal definition of a class is given in ABNF (Augmented Backus-Naur Form); the only mandatory parameter, as we can see, is the OID, which is the object identifier. The object class of our Red Hat employee from the example would look like this:

(2.5.6.6 NAME 'employee' SUP top STRUCTURAL
  MUST ( cn $ mail )
  MAY ( department $ telephone ) )

That means the employee's full name and mail must always be given, and we can optionally add the department where she/he works and her/his telephone. We have two different types of attributes. They can be either user attributes - the already presented cn, mail and others - which can be changed or modified; then there are operational attributes, which are generated automatically, are permanent and are used for administration, for example information on who created an entry and when. Each attribute is identified by a unique Object Identifier (OID). The basic types of classes and attributes can be found in RFC 4519.

3.1.3 Functional Model

The functional model describes the operations that can be done upon a directory: adding, modifying and deleting entries and querying them. Not only does it describe these operations, it also defines the manner and scope of a query/search operation. If we have a big directory, it can be very useful to limit our query to just some part of the directory. LDAP defines three types of search:

• base - the search is done just in the given base object
• one-level - the search is done just in the direct children of the base object
• subtree - the search is done in the whole subtree of the base object, with that object included

For a search operation we can use different filters and comparison rules. Not all data types support all kinds of comparison.
For example, operations like greater than/less than can be used only on attributes that can be ordered alphabetically, numerically, etc. The basic filtering rules are listed in Table 3.1.

Rule          Format
equality      (attr=value)
substring     (attr=[leading]*[any]*[trailing])
approximate   (attr~=value)
greater than  (attr>=value)
less than     (attr<=value)
presence      (attr=*)
AND           (&(rule1)(rule2))
OR            (|(rule1)(rule2))
NOT           (!(rule1))

Table 3.1: filtering rules for LDAP

There are a few basic operations in LDAP that should be mentioned.

• Bind - used to establish a connection between server and client, to agree on the type of authentication and to log in to the directory
• Unbind - used to end the connection
• Search - the basic search operation; you must define the base object of the search, the scope, the filtering rule, the list of attributes you are interested in and the maximal number of results you want to get
• Compare - used to compare attribute values of given entries
• Modify - used to change a given entry
• Add - used to create a new entry
• Delete - used to delete an entry
• Abandon - cancels a previous operation

3.1.4 Security Model

The security model provides protection of the data in a directory against unauthorized access. All LDAP servers have SASL authentication implemented. We can divide LDAP servers into three categories:

• public servers with read-only data and anonymous access
• servers supporting password authentication, for which an MD5 SASL implementation is needed
• servers supporting encryption and authentication; they must implement TLS operations and authentication with public keys

3.1.5 LDIF

The LDAP Data Interchange Format (LDIF) is a standard, described in RFC 2849, that defines a plain text format for representing data stored in LDAP directories and LDAP operations. From the LDIF point of view, an LDAP directory is a set of entries, where each of them has its own record. Its birthplace was the University of Michigan, where it was created by Tim Howes, Mark C. Smith and Gordon Good.
LDIF has been extended and updated a few times, creating the current standard specified in RFC 2849. We have already mentioned an example of an LDAP entry, so here we have a modify request on an entry, adding an e-mail address to an existing entry:

dn: cn=David Bambusek, dc=redhat, dc=com
changetype: modify
add: mail
mail: xbambu02 [at] stud.fit.vutbr.cz

3.1.6 LDAP Usage

Directories are the computer version of the old telephone directories, so they most usually consist of information about people/organizations/services, and the main usage is to get some contact information. This is used, for example, with e-mail clients. When writing an email, you insert the name of the recipient, the mail client asks an LDAP server for the recipient's email address, and the server answers with the exact address or, in case there are more people with the same name, with all their addresses. Quite a similar situation occurs when using VoIP communication: each VoIP telephone has an LDAP client that can ask an LDAP server for the telephone number of a given person. The next example of LDAP usage is the verification of users trying to access some service, most usually a web service. The web client asks the user for a login and password and sends them to the server, which tries to bind to an LDAP server (containing the approved user logins and passwords) using this data; in case the bind is successful, the user is authorized. This approach is also used in Unix, where instead of authentication using /etc/passwd an LDAP server is used.

3.1.7 LDAP Implementations

There are many implementations of LDAP servers; some of them are open source, some of them are commercial. Here is a list of the best known of them:

• 389 Directory Server (Fedora Directory Server) - developed by Red Hat. The name comes from the port number for LDAP, which is 389. This implementation is built into Fedora and is supported by many other distributions like Debian or Solaris.
• Active Directory - Microsoft's implementation of a directory service. It was created in 1999 as part of Windows NT Server and it uses not only LDAP, but also Kerberos and DNS.
• Apache Directory - an implementation of a directory service written entirely in Java; it is an open source project created by the Apache Software Foundation.
• FreeIPA - in fact a combination of already existing projects that provides managed Identity, Policy and Audit (IPA); it is focused on Unix computer networks. It uses Kerberos 5 for authentication, Apache and Python for the Web UI and 389 Directory Server for LDAP. There is a possibility to cooperate with Microsoft's Active Directory using Samba.

3.2 LDB
LDB is an LDAP-like embedded database used in the Samba project; information about it can be gathered from [11]. Although it provides an LDAP-like API, it is not LDAP standard compliant; its highest priority is to be compliant with Active Directory. LDB can be seen as a solution providing something between a key-value pair database and an LDAP database. LDB is basically a module above TDB that shapes key-value data into an LDAP-like structure.
LDB is transactional, which means it checks whether any error occurred while changing the database data before committing it, and if so, all the changes are rolled back, so the database is in the same state as before the intended change. It is also modular, which allows any functionality to be added or removed according to our needs on database performance. Available backends use TDB, LDAP or SQLITE3.
LDB has many advantages of LDAP, like custom indexes; it offers powerful search options, it is hierarchical and its structures can be easily modified or extended. On the other hand it also keeps some advantages of TDB, such as fast search, all the data being stored in one file and easy backup.
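The idea of layering LDAP-like entries on top of a flat key-value store, as LDB does above TDB, can be sketched in a few lines. The Python model below is a deliberately naive illustration (class and method names are invented for this example, and it searches by a linear scan rather than by indexes): entries are keyed by their DN string in a plain dictionary, and a hierarchical search is emulated by matching DN suffixes.

```python
class TinyLDB:
    """Toy model of LDAP-like entries layered over a flat key-value store."""

    def __init__(self):
        self.kv = {}                      # DN string -> attribute dict

    def add(self, dn, attrs):
        self.kv[dn] = dict(attrs)

    def search(self, base, attr, value):
        """Return DNs under `base` whose `attr` equals `value` (linear scan)."""
        return [dn for dn, attrs in self.kv.items()
                if dn.endswith(base) and attrs.get(attr) == value]

db = TinyLDB()
db.add("cn=David Bambusek,dc=redhat,dc=com", {"uidNumber": "15487"})
db.add("cn=Jane Doe,dc=redhat,dc=com", {"uidNumber": "15488"})
print(db.search("dc=redhat,dc=com", "uidNumber", "15487"))
# ['cn=David Bambusek,dc=redhat,dc=com']
```

The real LDB avoids the linear scan by maintaining the indexes described below, but the data model - arbitrary attribute-value pairs under a hierarchical DN, stored as key-value records - is the same in spirit.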
LDB enables fast searches thanks to a function that takes care of building indexes for it; when a new index is added, the whole database is scanned so the indexes can be automatically rebuilt. Also there is no need for a schema, since any object can store arbitrary attribute-value pairs.
LDB has many powerful tools. Among them, ldbsearch and ldbmodify are worth mentioning.
ldbsearch - its syntax is very similar to ldapsearch in LDAP; by using the -H option you define the backend which should be used (tdb, sql, ldap, . . .) and then, as in ldapsearch, comes the definition of the search scope, the base DN and an LDAP-like search expression.
ldbmodify - is a tool using the known LDIF format. It allows you to explore and change a snapshot of the directory in a text editor, you can use filters to show just the objects you want to see, it can also be used to backup and restore the database and it works against an LDAP server too.

3.2.1 TDB & DBM
TDB is a successor of the DBM database, made by the Samba team and very nicely described in [11]. Its main difference from DBM is that it allows multiple writers to use the database simultaneously and uses internal locking to avoid one entry being rewritten by one user while another one is working with it.
DBM is a very simple database allowing to store any data in a simple key-value structure. It was designed by Ken Thompson from AT&T in 1979. Each key is hashed to allow fast data retrieval. DBM uses fixed-sized buckets for primary keys that split as the database grows. The hash is usually directly connected to the physical disk, so the retrieval can be very fast, because there is no need for any connecting or difficult querying.

Chapter 4
SSSD

4.1 User Login in Linux
In order to work with a Linux system, one must first log in. Presentation [5] covers this topic in a very detailed way. No matter if there is some GUI available or you log in using the command line, you always have to enter your name and password.
The term password covers not only an ordinary text password, but it can also be a fingerprint or any similar device. Logging in has two phases: first the system needs to get information about the user, such as what her/his home folder is, and second, it is necessary to authenticate the user.

4.1.1 Identification
Information about users or, for example, hosts can usually be found in various files like /etc/passwd or /etc/hosts. The problem is that they are not in the same place, therefore some API is needed to work with all of them. In Linux, there is NSS (Name Service Switch) that serves this purpose. It is part of the standard C library, so it can be run on any system. It is a modular feature, where each module works with one source managing one type of object. For example, one module will work with users from LDAP and a second with passwords from /etc/passwd. NSS is configured using its config file in /etc/nsswitch.conf.

4.1.2 Authentication
For authentication there is another API called PAM (Pluggable Authentication Modules). It provides four main features - account, auth, session and password, where auth is the most important for us, because it tells us whether a user can authenticate. PAM also has many modules, some of them even based on others, and complete programming is quite complicated. The most used modules are those providing connection to /etc/shadow, LDAP or Kerberos; some additional modules offer advanced functionality such as password quality checks.

4.1.3 Problems Using NSS and PAM
Using NSS and PAM for logging into a system is possible and this solution works quite well, but there are a few problems connected with it. For example, a situation of identity overlap can occur: when we have two domains with the same user, we have to somehow decide who is who and how he will be identified. The second big problem is how to query a remote server if a computer is currently offline.
There are some options to solve this - a replica of the LDAP tree can be made locally, information can be saved into /etc/passwd, or the whole directory can be stored in a cache, but all of these solutions are unhandy. Also, with the usage of NSS and PAM comes a lot of configuration work for administrators, who are usually not very happy when given such a big amount of work.

4.1.4 SELinux
Security-Enhanced Linux (SELinux) is an implementation of a MAC (Mandatory Access Control) mechanism in the Linux kernel that checks for allowed operations after the standard DAC (Discretionary Access Control) checks are done [2]. This feature is added to many Linux distributions by default (RHEL, Fedora). It was originally created at the University of Utah by the Flux team with support of the US Department of Defense and it was called FLASK (Flux Advanced Security Kernel). It was later enhanced by the NSA (National Security Agency) and released as open source software.
Normally each user has the right to operate with his own files as he wants to. No relevance is given to the user's role, the function of a program or the sensitivity of the data, therefore it is very hard to create a system-wide security policy. SELinux adds labels to all objects, e.g. files, processes and users; these labels define the SELinux context. This context provides the user identification, his role, type and level. This information is used to decide whether a user will be granted access or an operation will be permitted. The SELinux default policy is "what is not permitted, is forbidden", known as the least privilege principle, which serves as protection against any system malfunctions and therefore raises the security level.

4.2 Basic Function
SSSD is a set of daemons providing an option to access various identity and authentication resources through one simple application that also offers offline caching [4]. SSSD was created to solve all the problems mentioned above. For further information, study of [3] is highly recommended.
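The least privilege principle of SELinux described in section 4.1.4 reduces to a simple rule: an operation is permitted only if the policy explicitly allows it, and everything else is denied. The Python sketch below illustrates just this default-deny idea; the type names and the shape of the policy set are invented for the example and greatly simplify real SELinux policy.

```python
# Hypothetical policy: the set of explicitly allowed (process type, target type, operation) triples.
ALLOW = {
    ("user_t", "user_home_t", "read"),
    ("user_t", "user_home_t", "write"),
}

def permitted(process_type, target_type, operation):
    """Least privilege check: anything not explicitly allowed is forbidden."""
    return (process_type, target_type, operation) in ALLOW

print(permitted("user_t", "user_home_t", "read"))   # True
print(permitted("user_t", "shadow_t", "read"))      # False (default deny)
```

Note that in the real system this check runs only after the ordinary DAC permissions have already allowed the operation, exactly as described above.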
The main idea of SSSD is to provide enhancements to Fedora or any other Linux distribution that supports SSSD. The first thing that SSSD offers is offline caching of network credentials. This is a big ease if you use centrally managed laptops, because all the services such as LDAP, NIS or FreeIPA will in fact also work offline. So what SSSD in fact does is provide local services with access to the SSSD cache, which stores information about identities from various providers such as LDAP, Active Directory or an Identity Management domain. SSSD used to be the client-side part of FreeIPA, but has lately become a separate project.
The next feature of SSSD is the fact that it reduces the overhead of opening new sockets for each query on LDAP by using just one persistent connection to one or more LDAP/NIS servers, each acting as a separate namespace. The only service that communicates with LDAP is the SSSD Data Provider, and that reduces the load on the LDAP server to one connection per client. SSSD can be connected to various domains - sources of identities - where each of them can be connected with more servers, so when one of them is down, the next server on the list will be used.
An additional innovation that SSSD brings is a service called InfoPipe, which works on the D-Bus system. This service contains extended data such as the preferred language or your profile image, which until now was mainly the concern of various configuration files in the user's home directory, which is not always available, because the home directory may not have been mounted yet.
In summary, the benefits of SSSD are that laptop users can use their network logons even when they are offline, and with SSSD you only need to manage one account. Just one service is needed to work with multiple identity and authentication providers.
Developers will have access to InfoPipe, which brings a new approach to extended user information; other services such as FreeIPA, LDAP or NIS can take advantage of offline features thanks to the caching; and last, SSSD will provide FreeIPA client-side software for entering FreeIPA domains. The whole configuration of SSSD is a matter of a few lines compared to NSS and PAM together, so it is very easy to use for administrators.

4.3 Architecture
Figure 4.1: SSSD architecture [5]

4.3.1 Processes
SSSD is a set of four main processes, each of which has its own special function:
1. the monitor - the process which checks whether the other processes are running; it spawns them on start and then re-spawns them if one of the periodic checks shows that a service is not working.
2. a data provider - this process is responsible for communicating with various backends and populates the cache with data obtained from them. For each remote server, there is one data provider process.
3. responders - processes that communicate with system libraries such as NSS or PAM and try to give them the data they are asking for from the cache. In case the data in the cache is expired or is not there at all, they signal the data provider to obtain it. When the data is obtained, it is stored in the cache and the responder is informed about it, so it again goes into the cache and gets the needed data.
4. helper processes - there are some operations that could be blocking, therefore SSSD performs them in special sub-processes that are forked from the data provider.

4.3.2 Communication
D-Bus
For communication between processes, the D-Bus protocol is used in SSSD. Communication is done in the form of sending messages. D-Bus can be divided into four primary components:
1. The D-Bus Server - is used for establishing connections. It can be identified by its address, which consists of a transport name, a colon and an optional list of keys and values separated by commas.
2. The D-Bus Connection - these connections are peer-to-peer connections: one end listens for method calls while the other initiates them, and vice versa.
3. The D-Bus Message - there are two types of messages:
• one-way messages - these are D-Bus Signals and D-Bus Errors; they usually carry a simple message from one end of the connection to the other. These are most often signals to stop/start a service or to notify that some error has occurred.
• answer messages - these are D-Bus Methods; their function is to run a method on a remote process as if it were run locally. After calling the method, there can, but does not necessarily have to, be an answer that goes back to the end that called the method.
4. The D-Bus System Bus - is not used in SSSD, but it is part of the D-Bus protocol. It was designed by the Freedesktop project and was created to handle communication between multiple system daemons.

S-Bus
To ensure that SSSD has good performance, it works in a completely non-blocking way with the help of the tevent event loop library developed as part of the Samba project. To provide a certain level of abstraction and to make it possible to integrate D-Bus with tevent, SSSD uses S-Bus - a wrapper created around the D-Bus library. Two processes in SSSD work as S-Bus servers, which is an abstraction of a D-Bus server; they are identified by a UNIX socket and they are:
• The monitor - it can call methods such as "ping", to check processes as described above, or "rotateLogs", to rotate logs by force, and others.
Socket - /var/lib/sss/pipes/private/sbus-monitor
• The Data Provider - it calls different methods depending on the data type that is requested; for example the NSS responder can call the method "getAccountInfo".
Socket - /var/lib/sss/pipes/private/sbus-dp_<domain name>

Chapter 5
SSSD Database

The SSSD database stores a few different kinds of objects. There are 7 of them in total and in this chapter we will briefly introduce each of them.
As was previously said, each object is an instance of a class that defines the object's attributes. Attributes of objects like users and groups are very similar to those typically used in LDAP, so we can find a lot of similarity here. A complete list of attributes for these objects of LDAP character can be found in ldap_opts.h (sssd-1.9.91/src/providers/ldap/ldap_opts.h).

5.1 Users
User objects store basic information about users, so in fact a user object is a virtual identity of a real person. Every user can be a member of multiple groups or netgroups.

objectClass=user
attribute        description
uid              user name
userPassword     user password
uidNumber        unique identification number
gidNumber        group identification number
homeDirectory    home directory
loginShell       user shell
gecos            full name

Table 5.1: User attributes

5.2 Groups
Users can be members of groups; groups can also belong to groups, so multiple nesting is possible. There is a tool sss_groupshow that can be used to display group members.

objectClass=group
attribute        description
cn               group name
userPassword     group password
gidNumber        group identification number

Table 5.2: Group attributes

5.3 Netgroups
Netgroups are slightly different from groups: they are network-wide groups that define a set of users that have access to specific machines, a set of machines with specific file system access and a set of users with administrator privileges in specific domains. A netgroup is specified by a name and its members are in the format of triples, where one field is for the machine, the second for the user and the third for the domain name.

objectClass=netgroup
attribute        description
cn               netgroup name
nsUniqueId       netgroup unique UID

Table 5.3: Netgroup attributes

5.4 Services
There is not much to say about the definition of a service, as the name speaks for itself; each has a name and runs on a different port.
objectClass=service
attribute          description
cn                 service name
ipServicePort      service port
ipServiceProtocol  service protocol

Table 5.4: Service attributes

5.5 Autofs Maps
An autofs map [3] is a special feature used by the automounter to automatically mount filesystems in response to access operations by user programs. When the automounter is notified about attempts to access files or directories under selectively monitored subdirectory trees, it dynamically and transparently accesses local or remote devices.

objectClass=automountMap
attribute             description
cn                    autofs entry key
automountInformation  autofs entry value

Table 5.5: Autofs map attributes

5.6 Sudo Rules
Sudo rules [3] define users who have been granted some kind of access, commands that are in the scope of a rule and hosts that the rule applies to. They can also contain some additional information, but it is mainly "who can do what and where".

objectClass=sudoRule
attribute        description
cn               sudo rule name
sudoCommand      command
sudoHost         host
sudoUser         user
sudoOption       option
sudoRunAsUser    run as certain user
sudoRunAsGroup   run as certain group

Table 5.6: Sudo rule attributes

5.7 SSH Hosts
SSH [3] is a cryptographic network protocol for secure data communication that allows us to connect to remote machines. SSH hosts are then those machines we connect to. To connect to a host we need a key, which is exactly what we store in the database.

objectClass=sshHost
attribute            description
cn                   ssh host name
sshPublicKey         public key
sshKnownHostExpire   time until the entry expires

Table 5.7: SSH host attributes

Chapter 6
Querying Tool

6.1 Program Specification
The querying tool is a console application called sss_query; its purpose is to query all kinds of data stored in the SSSD database and offer its users the results of these queries. The application will be part of a tool package for administering the SSSD database.
The user can choose how wide a query will be: whether to search just in one particular domain or in all of those which are available. It is possible to search for one exact entry based on its unique identification, which varies depending on the type of entry we are searching for, or to get information about all entries of the same type, e.g. users, groups etc., currently present in the database. For each entry type there is a default set of attributes that will be printed out, but if the user wants to see just some of the attributes, he is free to notify the tool by adding them as an optional flag and the application will provide just these to him. The implementation language of the application is C.

6.2 Basic Information and User Interface
The application is able to answer queries on these types of objects:
• users
• groups
• netgroups
• services
• autofs maps
• ssh hosts
• sudo rules
• domains
• subdomains
For each item there is a predefined set of attributes that will be printed out, in case one does not insert her/his own attributes as an optional parameter. Table E.1 shows the default attributes for each entry type. The application has two mandatory flags (E.2) and one optional flag (E.3).
The application is a standard console application with no graphical user interface; all the communication between the user and the tool is done by passing parameters and flags to the application at the time of executing it. The result, or multiple results if there are more, is displayed on standard output in plain text format; if there is no result or some problem occurred during the run of the application, an error message is displayed. After completion of a query, successful or not, the application is terminated, therefore if more queries are requested, we need to run the application once again for each query.
To use the tool, run the application as follows:
$# ./sss_query <object_type> <identification> <attributes_to_output>
A complete list of flags used to operate the tool is listed in appendix E.
An example of communication between a user and the application looks like this:
Query on an existing object
$# ./sss_query -u -N bambusekd -S uidNumber,name
$ === Showing requested user ===
UID Number(uidNumber): 00015487
Name(name): David Bambusek
Query on a non-existing object
$# ./sss_query -u -I name=filutam
$ Object not found!
Application error
$# ./sss_query -s -I gid=002587
$ You can use GID just for identifying groups!

6.3 Application Architecture
Until now, there has been no easy way to find out any information about objects stored in the SSSD database. That caused many problems when administrators using SSSD communicated with SSSD developers while trying to solve problems that occurred during its usage, because the administrators were unable to specify what objects were in their databases or to gather information about the database itself, its domains and subdomains. Another problem is the fact that SSSD is a very large project with a lot of source code and many modules, each providing different functionality and working with different objects. There is no centralized file that would describe all the objects SSSD works with, which certainly does not ease the work of developers or users. Therefore a tool was needed which would offer all the functionality that has been missing, together with a complete enumeration of objects and their attributes.
The main goal was to use as much functionality already implemented in SSSD as possible, to make the tool easy to use and to make it fast; at the same time to be compatible with existing parts of SSSD and to be made general enough to be potentially extendable in the future as SSSD evolves and changes.
The main challenge of this project was to get familiar with the large number of techniques and technologies used in SSSD, to fully understand their function and to learn how to use them in order to gather all the knowledge needed to design a tool which would be functional and compatible with the rest of SSSD.
Since there is no documentation available, the only way to study the SSSD architecture and function was to explore the vast source code and many dependent libraries to deduce how things work. The biggest task was to find already implemented functions from other modules, where they are used for other purposes, that could be re-used for the tool. As many people work on SSSD and each person creates a different part, where functions querying database objects are used just as auxiliary functions, there was no common knowledge of them, so it was necessary to go carefully through all the source code to find them. The second stage of this task was to create those functions that were missing.
The implementation follows the strict coding style rules applied in the FreeIPA project and projects associated with it, as SSSD is. This is to make sure the code will be readable and understandable, because SSSD is an open-source project and anyone can contribute to it.
The main goal of the design phase was to keep the source code as minimal and general as possible, to ensure that any future extension or modification of the code will be easy - just a matter of a simple addition of a new module for a new object, without any interference with the application's core. The same attitude is used for eventual extension of the amount of information which is handed down to the user, where the only thing needed is to add the attribute's identification to the inner complete enumeration of attributes and the application will automatically handle the rest of the work.
The implementation uses three main libraries: for memory allocation, argument parsing and hash table administration. The tool also uses various functions from other SSSD modules to set up its basic functions, such as the connection to the configuration database and the database with the data itself.

6.4 Tool Design
The complete concept of the application is quite straightforward.
The user enters a specification of what kind of object the tool is going to search for, together with a particular identification of this object, and the tool will print out information about the object, in case it exists in the database, or an error message if it does not.
The biggest question concerning the design was which type of identification would be chosen for objects as a key to the search function: whether the tool would offer just the usage of unique identifiers - which are, for the current list of objects, name, unique identification number (UID), group identification number (GID) and port - so the result would always be just one particular object, or whether the tool should be able to search according to any attribute an object can possibly have, in which case there could be a situation where more than one object matches the input parameters. After discussion with other developers and after consideration of the number of attributes present within objects, it was decided that the tool in this version will support just unique identifiers, with the option to get information about all the objects of one kind stored in the database. This is because the main purpose of the application is to find out detailed information about objects, not to find which objects have a specific value of an attribute; and for the latter, there is the option to print out all of the objects and filter the results easily with any command-line tool like grep.
The second issue that had to be taken under deep consideration was which attributes, and what number of them, would be available to be printed out. As there are many attributes that each object based on the standard LDAP classes can have, and as some of them are very peripheral for everyday usage, a big filtration was made and just the "must have" attributes stayed - in other words, those attributes that other applications in the Linux environment usually print out about these objects.
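Restricting each object type to its unique identifiers implies a validation step: some object/identifier combinations are simply forbidden (recall the "You can use GID just for identifying groups!" error in section 6.2). The actual tool does this in C; the Python sketch below only illustrates the idea, with a hypothetical table of allowed identifier kinds per object type.

```python
# Allowed identifier kinds per object type (illustrative, not the tool's real table).
IDENTIFIERS = {
    "user":      {"name", "uid"},
    "group":     {"name", "gid"},
    "netgroup":  {"name"},
    "autofs":    {"name"},
    "sudo rule": {"name"},
    "service":   {"name", "port"},
    "ssh host":  {"name"},
    "domain":    {"name"},
}

def check_identifier(obj_type, ident_kind):
    """Reject forbidden combinations, e.g. identifying a service by GID."""
    if ident_kind not in IDENTIFIERS[obj_type]:
        raise ValueError(f"cannot identify a {obj_type} by {ident_kind}")
    return True

print(check_identifier("user", "uid"))   # True
```

Keeping the allowed combinations in one table like this makes adding a new object type a one-line change, which matches the design goal of easy extensibility stated above.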
As was described in the previous section, the tool is very flexible and modular, therefore adding any additional attributes will be no problem. The last addition made to the application's design was an option that allows the user to hide some of the default parameters which would normally be printed out, making the result list much more lucid, especially in cases when all the objects are going to be printed out.

6.5 Implementation

6.5.1 Initialization
The program starts with typical argument processing with the help of the popt.h library, which offers a very user friendly set of functions and preset macros to easily process arguments given by the user through the command line. The next phase is to validate the given arguments, because logically there are many forbidden combinations, since each object can be identified just by a few specified identifiers (Table 6.1). More complex arguments, such as the attributes the user wants to see on output, as well as the type of identifier which will be used, are processed by a specially created function; it simply parses all the arguments, determines from the input what kind of identification will be used, and saves this information into the main system structure query_tool_ctx, which is used not only to store this information, but mainly core data such as a link to the confdb database, information about domains and the list of system databases, all mentioned later.

Object      Identifiers
user        name, UID
group       name, GID
netgroup    name
autofs map  name
sudo rule   name
service     name, port
ssh host    name
domain      name

Table 6.1: Object identifiers

For memory allocation we do not use the standard C malloc() and its relative functions, but the library talloc.h, which is a hierarchical, reference counted memory pool system with destructors. It brings ease to the otherwise complicated way of allocating and freeing memory in C, which in a bigger project grows into a very messy thing and can be a source of many memory leaks.
Thanks to the hierarchical system, we only need to take care of deallocating the root object, which is connected to its children by links created upon their allocation, and the library takes care of deallocating the whole connected tree of allocated memory.
Next we proceed to the connection to the SSSD database, from here on referred to as "sysdb", by using the function init_domains() borrowed from another tool, sss_cache. This function first establishes a connection to confdb, which includes all the necessary information and settings to enable a connection to sysdb, and as a next step finally connects to sysdb. All this functionality is done by functions already implemented in SSSD (sssd-1.9.91/src/db/sysdb.h).
Attributes saved in the SSSD database are identified by names (strings) written in a rather computer-like style; for example the UID number inside SSSD is identified as the attribute uidNumber. These names would not be very nice for users to read on output, which is the reason why all attributes that sss_query offers to be displayed are mapped to a human readable form. For this we use a hash table, whose key-value entries have the format:
<string, string>(SYSDB <attribute>, human readable SYSDB <attribute>)
Thanks to the use of a hash table, we can later benefit from a very fast and simple way to get the right translation of a SYSDB attribute to human readable form. To use and work with hash tables, we use the library called ding-libs (libdhash).
Now we are at a stage where we already know what type of object we will be looking for and we have checked whether the user also used a correct identifier, so the only thing that is left is the search for the object itself.

6.5.2 Query on Basic Objects
In this stage we get to the querying itself. There are many functions already implemented in SSSD that offer a way to get information about objects stored in the database. If we want to make a query, we need to call some of the ldb functions on the lowest level, since ldb is the framework that allows us to work with the database.
The building stone of every function that gets data from sysdb is ldb_search, which is very similar to ldap_search: we need to pass the base DN, the search scope and filters (3.1) as arguments to it and it will provide us with the results of the search as its return value in a special structure, ldb_result. This structure contains a pointer to an array of the results from the search and is easily accessible for further work. There are many functions that are wrappers around this basic and very general function for most of the objects in sysdb; each of these functions has its own specialties, as each object has a different unique identifier and needs to be treated differently in terms of the types of attributes it has or where in sysdb the object is located. These functions are widely used in sss_query, and in cases where such a function for a certain object is not available, sss_query uses the basic ldb_search to get results. The only difference between functions working with one certain object and those working with a whole class of objects is that no functions have been implemented in SSSD to get information about a group of objects, therefore ldb_search is always used in this case. We handle the results in basically the same way; we just have to iterate through the complete list of results.

6.5.3 Domain and Subdomain Information
A different approach is used with domains. Since we have had all the data about the domain or domains stored in a special structure since initialization, we do not need to make any additional queries on sysdb. It is quite easy with subdomains as well: each structure with domain information also includes a pointer to an array of subdomains, which are in every view similar to domains, just their location in the tree hierarchy is one level lower. It is worth mentioning that subdomains can again have subdomains, so there can be a whole nested structure of subdomains.
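Because subdomains can nest arbitrarily, listing them requires a recursive walk over the domain structures. The following Python sketch models this (the real tool does it in C over SSSD's domain structures; the dictionary shape and domain names here are invented for illustration):

```python
def walk_domains(domain, level=0):
    """Yield (nesting level, name) for a domain and all its nested subdomains."""
    yield level, domain["name"]
    for sub in domain.get("subdomains", []):
        yield from walk_domains(sub, level + 1)

tree = {"name": "example.com",
        "subdomains": [{"name": "eng.example.com",
                        "subdomains": [{"name": "qa.eng.example.com"}]},
                       {"name": "sales.example.com"}]}

for lvl, name in walk_domains(tree):
    print("  " * lvl + name)     # indentation reflects the tree hierarchy
```

The depth-first order produced here mirrors the tree hierarchy described above: each subdomain appears directly under its parent, one level lower.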
6.5.4 Printing Results
Now that we already have our query results, we just need to display them to the user. Because sss_query works as a console application and therefore lacks a GUI, results are printed in text form on standard output. If the user did not specify the attributes he would like to see on output, a default set of attributes is printed out. In case of a query on all objects of a certain type, results are printed out one by one, visibly divided and provided with a label saying which domain they come from and what their number in the list of results is.
The print function is quite straightforward: we have an array of attributes that will be displayed on the output. We iterate through them and for each we first find the human readable name, which goes on the output together with its sysdb internal name and, of course, its value. This value is gathered from the result message that ldb_search gave us, using another ldb function, ldb_msg_find_attr_as_string, and then the complete information about the attribute is printed out.
Even though the output is done just in text form, sss_query tries to format and display results or any messages in a way that looks nice and is easy to take in. So a little bit of "ascii graphic" (a graphical object made just from ascii characters) is used to separate results and to create a header for each one, so the user is provided with all the necessary information in an understandable and easy to read form.

6.5.5 Errors
Generally there are two types of errors that can occur during a run of the application. Firstly, there are system errors, caused by a wrong configuration of SSSD or because there was some problem inside the application; and secondly, errors caused by wrong usage of the tool, which means that the user inserted wrong or non-existing arguments or used an invalid combination of arguments. All the errors which can occur are stated in appendix D.
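The per-attribute printing scheme of section 6.5.4 - look up the human readable label in the hash table, then print it together with the internal sysdb name and the value - can be summarized in a few lines of Python. The mapping below is a small illustrative excerpt, not the tool's complete table:

```python
# Hypothetical excerpt of the sysdb-name -> human-readable-label hash table.
ATTR_LABELS = {
    "uidNumber":     "UID Number",
    "gidNumber":     "GID Number",
    "homeDirectory": "Home Directory",
    "loginShell":    "User Shell",
    "gecos":         "Full Name",
}

def format_attr(name, value):
    """Render one attribute in the output style Label(internal): value,
    falling back to the internal name when no label is known."""
    return f"{ATTR_LABELS.get(name, name)}({name}): {value}"

print(format_attr("uidNumber", "00015487"))
# UID Number(uidNumber): 00015487
```

The output line matches the format shown in the example session in section 6.2, e.g. "UID Number(uidNumber): 00015487".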
6.6 Tests

To test the tool, a few tests were created. They first exercised, in sequence, all of the program's argument combinations. Since there are a great many possible combinations, not all of them valid, and each invalid combination falls within a different error group, it was necessary to test all of the combinations to make sure that the tool never starts its function with invalid arguments. Another test filled the database with objects of each type to see whether all the functions collected from around SSSD really work together and bring the desired result. The functions that take care of connecting to the configuration database and to the object database itself are already covered by SSSD's unit tests. The tool also uses the LDB API, which has been implemented, properly tested and used in the Samba project, so there is no need to test it again; all the information and results of LDB testing can be found on the project's documentation page. An extract from the tests that were run on the tool can be found in appendix F; it contains the results of most of the actions a user can perform with the tool.

Chapter 7 Conclusion

The reader of this thesis was given an overview of databases and their different models, with focus on those that are used the most these days; we also did not forget to mention the types that are not that widely used but are slowly finding their place in the modern, constantly developing world of computers. We looked in more detail at LDAP and LDB and then moved to a description of SSSD: its architecture, its functions and its database. The main output of this thesis is a tool for querying the SSSD database, which is able to query all the different types of data stored there and to offer SSSD administrators important information about them.
This tool was developed with emphasis on very easy usage and therefore offers a very simple UI that nevertheless allows complex control of the application. I strongly believe that this tool will ease the work of many people dealing with SSSD and provide all the functionality that was expected. The application already offers a lot of functionality, but there is room to merge it with other existing tools for SSSD administration, as well as with tools that have not yet been developed and might be needed. Thanks to its design, it will also be very easy to extend if needed. Its current state is therefore not final: the application will change in the future, as will SSSD with the newer versions that will be introduced.
Appendix A DVD Contents

• Directory documentation: contains the LaTeX source code of the bachelor thesis and the complete work in PDF format.
• Directory sss_query: contains the source code of SSSD 1.9.91, a patch that adds sss_query to it, the sss_query manual page in XML format, a bash script that installs SSSD and puts it into a ready-to-use state, and a few other files with exemplary data for the database.

Appendix B Manual

This manual describes how to compile and run the sss_query tool. The tool was developed and tested on Fedora 18. To install and configure SSSD version 1.9.91, patch it to contain sss_query and insert some test data into the database, there is a prepared bash script, install.sh, that will take care of the whole job. If this script fails for any reason, please follow these steps:

B.1 Unpacking

The project is packed in .tar.gz format; to extract it, use tar. This extracts the whole of SSSD into a directory called sssd-1.9.91. The patch containing sss_query has already been applied to this version, so there is no need to patch.

$# tar -xzf sssd-1.9.91.tar.gz

B.2 Dependencies

SSSD uses many libraries that might not be installed on Fedora 18 by default. All the necessary libraries are listed in BUILD.txt. To install all of them, use:

$# yum install openldap-devel gettext libtool pcre-devel c-ares-devel \
   dbus-devel libxslt docbook-style-xsl krb5-devel nspr-devel \
   libxml2 pam-devel nss-devel libtevent python-devel \
   libtevent-devel libtdb libtdb-devel libtalloc libtalloc-devel \
   libldb libldb-devel popt-devel c-ares-devel check-devel \
   doxygen libselinux-devel libsemanage-devel bind-utils libnl3-devel \
   gettext-devel glib2-devel
$# yum install libcollection-devel libdhash-devel libpath_utils-devel \
   libref_array-devel libini_config-devel

B.3 Compiling

To build the whole project, use these simple commands.
Autotools and Makefile will take care of everything by themselves:

$# autoreconf -i -f && ./configure && make

B.4 Exemplary Data

To insert some exemplary data into the database for testing purposes, copy all the contents of the directory exemplary_data into /var/lib/sss/db. This can easily be achieved by running, inside the exemplary_data directory:

$# cp * /var/lib/sss/db

B.5 Running the Tool

Now the tool is ready to be run. The database will include a few users and groups in the domain example.com. To run the tool, go to the SSSD root directory and run:

$# ./sss_query <object_type> <identification> <attributes_to_output>

Appendix C Man Page

SSS_QUERY(8)                SSSD Manual pages                SSS_QUERY(8)

NAME
       sss_query - perform query on SSSD database

SYNOPSIS
       sss_query {options} {identifier} [set of attributes]

DESCRIPTION
       sss_query allows the user to query a certain object, or all
       objects of the same class, from the SSSD database and displays
       the results on the output.

OPTIONS
       -u, --user        Query on user(s).
       -g, --group       Query on group(s).
       -n, --netgroup    Query on netgroup(s).
       -s, --service     Query on service(s).
       -m, --autofsmap   Query on autofs map(s).
       -b, --sshhost     Query on ssh host(s).
       -r, --sudorule    Query on sudo rule(s).
       -d, --domain      Query on domain(s).
       -o, --subdomain   Query on subdomains.
       -?, --help        Display help message and exit.

IDENTIFICATOR
       -N, --name name   Object name. The tool also supports FQ names.
       -U, --uid UID     Object UID.
       -G, --gid GID     Object GID.
       -P, --port port   Service port.
       -I, --ident identificator=value
                         Searches based on an identificator (one of the
                         previous options) and a value.
       -A, --all         Search for all objects of the same class.

ATTRIBUTE_LIST
       -S, --show attribute_name,...
                         Defines which attributes will be printed.

SEE ALSO
       pam_sss(8)

AUTHORS
       The SSSD upstream

SSSD 04/29/2013                                              SSS_QUERY(8)

Appendix D Errors

System Errors

Super user privilege
Message: "You must be root in order to run this application!"
All the tools in SSSD can be run only by a user with super-user rights.
Confdb connection
Message: "Could not initialize connection to the confdb"
There was a problem loading confdb. By default, SSSD expects confdb to be in /var/lib/sss/db; if you want to change the path, use the --with-db-path argument during the SSSD build.

Domain connection
Message: "Could not initialize domains"
An error with domains occurred; check your domain configuration.

Sysdb connection
Message: "Could not initialize connection to the sysdb"
Again, you need to check whether the sysdb database is correctly configured and set up.

Locale set error
Message: "Error setting the locale"
Locales contain information on how to interpret and perform certain input/output and transformation operations, taking location- and language-specific settings into consideration; check your environment's locale settings.

Hash table error
Message: "Cannot add entry to hash table"
A problem occurred while adding a hash to the hash table.

Argument Errors

One or all objects
Message: "Please chose to search either for one object or for all, not both"
You can select either one object, by specifying it using one of the <object type> options from E.2, or use -A to display all objects. Using both parameters causes an error.

Too many identifications
Message: "Please use one type of identification"
You have to use one type of identification from <identification> in E.2; it is not possible to use more than one.

Object not selected
Message: "Please chose one type of object"
You have to choose one type of object from <object type> in E.2 to be searched.

Identification unspecified
Message: "Please enter identification of object you want to find"
You forgot to insert the name/UID/GID/port of the object.

Direct identification mixed with indirect
Message: "Please chose just -N/-U/... (--name/--uid/...) or -I (--ident) option"
You can use either direct object identification or indirect identification via -I, not both at the same time.
Wrong object identification
Message: "You can use UID/GID/port number just for identifying users/group/services!"
All objects can be identified by their name, but only certain objects can be identified using a UID, GID or port number.

Invalid format of indirect identification
Message: "Please insert valid pair of attributes: name=value"
You did not follow the required format of indirect identification.

Too many output attributes
Message: "Too many attributes to show"
An object has at most 5 attributes, so asking for more than 5 makes the program end with an error.

Invalid indirect identification
Message: "Invalid identification type"
Only the options name, uidNumber, gidNumber or portService are available for indirect identification, and you used none of them.

Appendix E Application Constants, Flags and Arguments

Table E.1: Default entry attributes

Entry type    SSSD internal macros                        Attributes
users         SYSDB_NAME, SYSDB_UIDNUM, SYSDB_GIDNUM,     name, uidNumber, gidNumber,
              SYSDB_HOMEDIR, SYSDB_SHELL                  homeDirectory, loginShell
groups        SYSDB_NAME, SYSDB_GIDNUM                    name, gidNumber
netgroups     SYSDB_NAME, SYSDB_UUID                      name, nsUniqueId
services      SYSDB_NAME, SYSDB_USN,                      name, entryUSN,
              SYSDB_SVC_PORT, SYSDB_SVC_PROTO             ipServicePort, ipServiceProtocol
autofs maps   SYSDB_AUTOFS_ENTRY_KEY,                     name, automountInformation
              SYSDB_AUTOFS_ENTRY_VALUE
ssh hosts     SYSDB_SSH_HOST_OC,                          sshHost, sshKnownHostsExpire,
              SYSDB_SSH_KNOWN_HOSTS_EXPIRE,               sshPublicKey
              SYSDB_SSH_PUBKEY
sudo rules    SYSDB_SUDO_CACHE_AT_CN,                     cn, sudoUser, sudoHost,
              SYSDB_SUDO_CACHE_AT_USER,                   sudoCommand, sudoOption
              SYSDB_SUDO_CACHE_AT_HOST,
              SYSDB_SUDO_CACHE_AT_COMMAND,
              SYSDB_SUDO_CACHE_AT_OPTION
domains       name, provider, SYSDB_VERSION               name, provider, version
subdomains    SYSDB_NAME, SYSDB_SUBDOMAIN_REALM,          name, realName, flatName,
              SYSDB_SUBDOMAIN_FLAT, SYSDB_SUBDOMAIN_ID    domainID

Table E.2: Mandatory flags

short opt.  long opt.     value format              description                        example
-u          --user        no value                  type of object to be searched      -u
-g          --group       no value                  type of object to be searched
-n          --netgroup    no value                  type of object to be searched
-s          --service     no value                  type of object to be searched
-m          --autofsmap   no value                  type of object to be searched
-t          --sshhost     no value                  type of object to be searched
-r          --sudorule    no value                  type of object to be searched
-d          --domain      no value                  type of object to be searched
-a          --subdomain   no value                  type of object to be searched
-N          --name        [name]                    entry identification               -N bambusekd
-U          --uid         [uid]                     entry identification               --uid=32568
-G          --gid         [gid]                     entry identification
-P          --port        [port]                    entry identification
-I          --ident       [identificator]=[value]   entry identification
-A          --all         no value                  query all objects of chosen type

Table E.3: Optional flags

short opt.  long opt.  value format        description                                       example
-S          --show     [attribute_name]+   If set, just the specified attributes will be     -S name,mail
                                           displayed; otherwise the attributes from the
                                           default set (E.1) will be displayed.

Appendix F Test results

Find user - all attributes
$# ./sss_query -u -N bambusekd
$# ===

- using FQ name
$# ./sss_query -u -N bambusekd@example.com
$# ===

- just UID and name
$# ./sss_query -u -N bambusekd -S uidNumber,name
$# === Showing requested object ===
UID number(uidNumber): 1063200001
Name(name): bambusekd

Find all users
$# ./sss_query -u -A
$# === Showing all users in domain example.com ===
=== entry: 0 === domain: example.com ===
Name(name): bambusekd
UID number(uidNumber): 1063200001
GID number(gidNumber): 1063200001
Home directory(homeDirectory): /home/bambusekd
Shell(loginShell): /bin/sh
=== entry: 1 === domain: example.com ===
Name(name): krausj
UID number(uidNumber): 1063200004
GID number(gidNumber): 1063200004
Home directory(homeDirectory): /home/krausj
Shell(loginShell): /bin/sh

Find all users - just UID and shell
$# ./sss_query -u -A -S uidNumber,loginShell
$# === Showing all users in domain example.com ===
=== entry: 0 === domain: example.com ===
UID number(uidNumber): 1063200001
Shell(loginShell): /bin/sh
=== entry: 1 === domain: example.com ===
UID number(uidNumber): 1063200004
Shell(loginShell): /bin/sh

Show domain info
$# ./sss_query -d -N example.com
$# Domain name: example.com
Domain provider: ipa
Domain version: 0.14

Error - no object selected
$# ./sss_query
$# Please chose one type of object

Error - too many arguments
$# ./sss_query -u -g
$# Please chose one type of object

Find non existing object
$# ./sss_query -u -N smithj
$# === Object not found! ===

Find object in non existing domain
$# ./sss_query -u -N bambusekd@fit.cz
$# === There is no domain fit.cz ===
Could you pleeeeeeeease add the functionality to call the user's MetaMask extension and retrieve the wallet address? My website right now relies on the user to input their address, but the problem is that the user could input an address that is not theirs. Couldn't you grab this with a JavaScript function?

Am I correct in my understanding that the MetaMask extension you are talking about refers to a crypto wallet?

@robert Yes, I am referring to the user's crypto wallet address. Yes, you might be able to; I am just not sure how to go about prompting web3 to make the MetaMask wallet extension pop up so I can connect to the site. I believe it's something along the lines of:

const Web3 = require("web3");
const ethEnabled = async () => {
  if (window.ethereum) {
    await window.ethereum.send('eth_requestAccounts');
    window.web3 = new Web3(window.ethereum);
    return true;
  }
  return false;
}

More documentation:

I tried to insert the JS, then call it in the module, but I can't seem to get it working.

Python code in the main module:

import anvil.js
from anvil.js.window import getaddress

class Form1(Form1Template):
    @staticmethod
    def getaddy():
        status = getaddress()
        print(status)
        return status

JS code in the native library:

<script>
function getaddress(){
  const Web3 = require("web3");
  const ethEnabled = async () => {
    if (window.ethereum) {
      await window.ethereum.send('eth_requestAccounts');
      window.web3 = new Web3(window.ethereum);
      var accounts = await web3.eth.getAccounts();
      return accounts
      return true;
    }
    return false;
  }
}
</script>

Try calling your getaddy function on form_show. That is when JS will actually be called.

We're unlikely to add a Python API for MetaMask anytime soon. The best approach is to use anvil.js as you've started doing. More context would be useful in helping out here, e.g. where did you get your code snippet from? What error are you getting when you execute the snippet?
As a quick tip - anytime you see the word require in a JavaScript code snippet, it won't work in the browser. When looking for a JS library, a good place to go is the GitHub/documentation page. There is a CDN link on the GitHub page for web3, which means we can add a script in Native Libraries:

<script src=""></script>

The docs say that this adds Web3 to window, so you can then remove the line const Web3 = require("web3"); from your script.

Javascript is messy sometimes.

Only on days with a 'y' in their name.

Great stuff, thanks for sharing the content.
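Putting the replies above together, here is a minimal sketch of what the native-library function could look like. The name getAddress and its provider parameter are hypothetical; in the browser you would pass window.ethereum (which MetaMask injects), and eth_requestAccounts is MetaMask's standard request for opening the connect prompt:

```javascript
// Hypothetical helper. `provider` is the wallet provider object;
// in a browser with MetaMask installed that is window.ethereum.
async function getAddress(provider) {
  if (!provider) {
    return null; // no wallet extension available
  }
  // eth_requestAccounts opens the wallet's connect prompt and
  // resolves to the list of account addresses the user approved.
  const accounts = await provider.request({ method: "eth_requestAccounts" });
  return accounts[0]; // the currently selected wallet address
}
```

Because the provider is a parameter instead of being read from window, the same function can be exercised with a stub object outside the browser; in the Anvil form you would still call it from Python via anvil.js, as in the original snippet.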
Keyword Arguments of Ruby makes code more clear
chenge Dec 26 '18

In this video the speaker demoed the pros of KWArgs. In the pic, the upper code is obviously clearer.

So when should you use it? I think when it's hard to identify the params, you should use KWArgs. The usage is simple:

def foo(bar:, baz:)
end

foo(bar: 1, baz: 2)

Do you think so?

Didn't watch the video, but what I find interesting is that keyword arguments did not exist until Ruby 2.0, which means before Ruby 2.0 there was a totally different way to hack it, and you'll sometimes see this in the wild :) (I've encountered this in Rails' DateTime library) robots.thoughtbot.com/ruby-2-keywo...

Yes, it simplifies code. Very useful.

Seems right as shown, but perhaps only for .new? For any other method: if the parameters are related, could they form a class and be passed as an object? If not, does the method lack a unified purpose?

Right, better to have no more than 3 params. If more than 3, maybe you should use a hash or an object.

How does it handle extra arguments? Does it ignore them or throw an exception?

ArgumentError
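To make that last answer concrete: required keyword arguments raise ArgumentError both when a keyword is missing and when an unknown one is passed. A quick sketch, using the foo from the post:

```ruby
def foo(bar:, baz:)
  [bar, baz]
end

p foo(bar: 1, baz: 2)          # prints [1, 2]

begin
  foo(bar: 1)                  # baz not given -> ArgumentError
rescue ArgumentError => e
  puts "raised: #{e.class}"
end

begin
  foo(bar: 1, baz: 2, qux: 3)  # extra keyword -> ArgumentError
rescue ArgumentError => e
  puts "raised: #{e.class}"
end
```

So extra keywords are not silently ignored; the call fails unless the method also declares a `**rest` parameter to absorb them.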
Intel, NVIDIA Take Shots At CPU vs. GPU Performance

MojoKid writes "In the past, NVIDIA has made many claims of how porting various types of applications to run on GPUs instead of CPUs can tremendously improve performance — by anywhere from 10x to 500x. Intel has remained relatively quiet on the issue until recently. The two."

first post! (Score:4, Funny)

Re:first post! (Score:5, Informative)
Awesome. And now maybe you've learned a lesson. While the external processor was faster, sending your data over the bus to the external processor has an inherent delay in it. That's why your first post came in fourth.

It depends? (Score:5, Insightful)
Isn't it like saying "Ferrari makes the fastest tractors!" (yeah, I know!), which may be true, as long as they can actually carry out the things you want to do. I don't know about the limits of OpenCL/GPU code (or the architecture compared to regular CPUs/AMD64 functions, registers, cache, pipelines, what not), but I'm sure there's plenty and that someone will tell us.

Re:It depends? (Score:5, Informative)

Re: (Score:1)
Yeah, I wondered which one it was, but I was somewhat too lazy I guess. Maybe the history was that the Lamborghini guy decided he could too... I only googled "ferrari tractor" to see if they had any, or whether it was Lamborghini, got a few tractor images, so I went with that. So Lamborghini went super-cars and Ferrari tractors ("if they can beat us at cars we for sure will show them with tractors!"? :D) Sorry for messing up :) [ferrari-tractors.com]

Re: (Score:1)
There are also Ferrari tractors [ferraritractors.co.nz], unrelated to the sports car manufacturer though.

Re: (Score:2)
Shhhhh.... You're giving rednecks around the world hope that someday John Deere will make a sports car...

Re: (Score:1)

Re:It depends? (Score:5, Informative)
Basically, GPUs are stream processors. They are fast at tasks that meet the following criteria:

1) Your problem has to be more or less infinitely parallel.
A modern GPU will have anywhere in the range of 128-512 parallel execution units, and of course you can have multiple GPUs. So it needs to be something that can be broken down into a lot of pieces.

3) Your problem must fit within the RAM of the GPU. This varies; 512MB-1GB is common for consumer GPUs, and 4GB is fairly easy to get for things like Teslas that are built for GPGPU. GPUs have extremely fast RAM connected to them, much faster than even system RAM; 100GB/sec+ is not uncommon. While a 16x PCIe bus is fast, it isn't that fast. So to get good performance, the problem needs to fit on the GPU. You can move data to and from main memory (or disk) occasionally, but most of the crunching must happen on the card.

4) Your problem needs to have not a whole lot of branching, and when it does branch, multiple paths need to branch the same way. GPUs handle branching, but not all that well; the performance penalty is pretty high. Also, generally speaking, a whole group of shaders has to branch the same way, so you need the sort of thing where, when the "else" is hit, it is hit for the entire group.

So, the more similar your problem is to that, the better GPUs work on it. 3D graphics would be an excellent example of something that meets that precisely, which is no surprise as that's what they are made for. The more you deviate from that, the less suited GPUs are. You can easily find tasks they are exceedingly slow at compared to CPUs.

Basically, modern CPUs tend to be quite good at everything. They have strong performance across the board, so no matter what the task, they can do it well. The downside is they are unspecialized; they excel at nothing. The other end of the spectrum is an ASIC, a circuit designed for one and only one thing. That kind of thing can be extremely efficient. Something like a gigabit switch ASIC is a great example: you can have a tiny chip that draws a couple of watts and yet can switch 50+ gbit/sec of traffic.
However, that ASIC can only do its one task; there is no programmability. GPUs are something of a hybrid. They are fully programmable, but they are specialized for a given field. As such, at the tasks they are good at, they are extremely fast. At the tasks they are not, they are extremely slow.

Re: (Score:2)

Re:It depends? (Score:5, Insightful)
It is not a secret (it's a stated fact on both Intel's and AMD's roadmaps) that GPU-like programmable FP will be integrated into the FP units of the general processor. The likely result will be the same general-purpose CPU you love, but with dozens of additional FP units that excel at mathematics like the parent described, except more flexible. When the Fusion-esque products ramp and GPGPU functionality is integrated into the CPU, Nvidia is out of business. Oh, I don't expect these fusion products to have great GPUs, but once you destroy the low-end and mid-range graphics marketplace there is very little money left to fund R&D (3dfx was the first one into the high-end 3D market and they barely broke even on their first sales; the only reason they survived was because they were heavy in arcade sector sales). If Nvidia hasn't been allowed to purchase Via's x86 license by that point, they are quite frankly out of business. Not immediately, of course; they will spend a few years evaporating all assets while they try to compete in only the high-end marketplace, but in the end they won't survive.

Things go in cycles, and the independent graphics chip cycle is going to end very shortly; maybe in a decade it will come back, but I'm skeptical. CPUs have exceeded the speed needed for 80% of most tasks out there. When I first started my career, computer runs of my design work took about 5-30 minutes to run at bare minimum quality. These days I can exceed that bare minimum by 20 times and the run takes seconds. It's to the point where I can model with far more precision than the end product needs with almost no time penalty.
In fact, additional CPU speed at this point is almost meaningless, and my business isn't alone in this. In fact, most of the software in my business is single threaded (and the apps run that fast with single threads). Once the software is multi-threaded there is really no additional CPU power needed, and it may come to the point where my business just stops upgrading hardware beyond what's needed to replace failures, and my business isn't alone. I just don't see a future for independent graphics chip/card producers.

Re: (Score:1)
They are called, specifically, FPU's, not FP's. As for the CPU guys putting the GPU guys out of business... we know how successful Intel has been trying to do just that with their GPU offerings... you expect that to change in the next, say, 10 years? Not likely given their past track record of failure.

Re: (Score:2)
If your theory was true, why hasn't it already happened? Both AMD and Nvidia have been putting pretty nice GPUs on motherboards for quite a while, yet we still have discrete cards, why? There is a good reason why: for the most basic office tasks, even two or three year old gaming, the onboard chips work fine. I myself played Bioshock I and Swat 4 on my onboard with no trouble. But for anything where you care even a little bit about REAL performance the onboards, and I don't care if we are talking onboard or on die, simply won't have a chance.
Do you have any idea what you're saying here? But for anything where you care even a little bit about REAL performance the onboards, and I don't care if we are talking onboard or on die, simply won't have a chance. You just can't put hundreds of Mb or even Gbs of RAM onto the die. They're not on the die in a motherboard-integrated solution, either. They use system memory. Further, CPUs can already access system memory. You clearly have no idea what you are talking about. So while you think discrete GPUs are gonna die, I'd say the opposite is true. I think the onboards will be used in machines where price trumps everything, such as bottom of the line netbooks and Walmart/Best Buy "specials", whereas for everything else since HD and games like Warcrack will continue to be popular and thus selling points discretes will bring in the "wow" factor and help OEMs to differentiate their products. You don't get it. ALL GPUs are going to go away, beca Re: (Score:2) I understand perfectly, it is you that probably needs a WHOOSH here. I know all about having everything onboard, as I'm old enough to remember when there was NOTHING but onboard. The problem with your theory is unless you make the CPU as easy to toss as the old Slot 1s it simply isn't gonna work for anything but the most basic tasks. And system memory will ALWAYS suck...full stop. That is why my CPU has 6Mb of cache onboard, do you think they would waste that much die space on cache if memory access weren't Re: (Score:2) I understand perfectly, it is you that probably needs a WHOOSH here. Well, give it a shot. I know all about having everything onboard, as I'm old enough to remember when there was NOTHING but onboard. Well, I had an Altos CP/M machine that had everything onboard, once. But my next one, a Kaypro 4, carried its modem on a daughterboard. So really, we're talking about times so old as to not be worth mentioning. 
The problem with your theory is unless you make the CPU as easy to toss as the old Slot 1s it simply isn't gonna work for anything but the most basic tasks. Unless the CPU is easily removable and discardable, it simply isn't going to be able to compute on the proper level? We're talking about power, not packages. Speak English, and make it relevant. And system memory will ALWAYS suck...full stop. That's funny, my crystal ball remains cloudy on the subject. Even my magic 8-Ball is no Re: (Score:2) Since you seem to be having trouble understanding, I'll break it down, kay? lets take a machine just three years old, standard "Best Buy Special". Now if your theory were true that machine would either A-have to be thrown in the trash, very wasteful, or B-be able to have the CPU trivially thrown away, because with the GPU integrated it will NOT be able to do the tasks required today. Is that so hard to understand? HD is going nowhere but UP, higher resolutions, bigger screens, etc. This is trivially handled Re: (Score:2) Finally since you want me to quote, I will. You said, and I quote "You don't get it. ALL GPUs are going to go away, because CPUs are getting better at GPU tasks faster than GPUs are getting better at CPU tasks. " Where is your proof? Pinetrail? A FIVE YEAR OLD GPU jammed into the CPU? You're so stupid, I can barely stand it. you want a GPU put into a CPU package to be proof that GPUs are going away? I said they're going away, not going into the package with the CPU. This is the general theme of your "conversation", attacking straw men. Welcome to my foes list, idiot. I can't waste more time on someone who doesn't understand when they are spewing logical fallacies (or who does it on purpose; I have not totally ruled out the possibility that you are a troll. But I suspect you simply have v Re: (Score:2) Ahhh...so there it is. 
You can't even come up with a SINGLE SOURCE to back up your claims, so all you can do is insult and add me to some imaginary list I couldn't give less of a shit about. Did I call you names? Nope that would be you acting all childish because you have NO PROOF AT ALL and are just pulling a theory out of your ass. If "CPUs do it better" as you say, why are Intel and AMD trying to integrate GPUs? wouldn't that be a waste? Of course it would be, but the answer is simple: Your theory is simp Re: (Score:2) CPUs have a long way to go to reach the level of memory bandwidth and latency available to a top grade graphic card - even triple channel DDR3 kits offer only some 50GB/s, while a AMD 4870 has some 115 GB/s. Latency on the GPU is similar with a top end desktop memory (server memory tends to have lower latency and reduced speed though, even if servers might use more memory channels than the three used in the top end desktops). Overall I think you are right (Score:2) But I think the timescale will be a very long one. I mean ideally, we want only the CPU in a computer. The whole idea of a computer is that it does everything, rather than having dedicated devices. Ideally that means that it does everything purely in software, that the CPU is all it needs. For everything else, we seem to have reached that point but graphics are still too intense. Have to have a dedicated DSP for them. However, we'll keep wanting that until the CPU can do photorealistic graphics in realtime. T Re: (Score:2) If Nvidia hasn't been allowed to purchase Via's x86 license by that point they are quite frankly out of business. Apparently, licensing terms for access to the x86 designs forbid it passing to non-US hands. I think VIA is working a loophole here, because Centaur is doing the CPU designs, VIA does the branding, chipsets, and sometimes the motherboards. 
However, it might work if nVidia aquired VIA and Centaur, and merged VIA's chipset departments into nvidia's (and segmented them or something) because otherwise they're only looking for Centaur and it ain't going to be pretty. I'm just disappointed in nVidia myself for let Re: (Score:2) New AVX SIMD is coming out soon. The first set of 256bit registers are suppose to be 2xs as fast as SSE and later 512bit and 1024bit AVX are suppose to be another ~2-4xs faster than the 256bit. I guess one of the benefits of AVX is the new register sizes are suppose to give transparent speed increases. So a program made for 256bit AVX will automatically see faster calculations when the new 512bit AVX registers come out. Sounds good to me. They're suppose to be 3 operandi instructions. Re: (Score:2) Afraid not (well, there are ways if you are willing to litter your code with C++ templates). Yes the instructions will process 8 floats, however you're only going to see some nice linear speed up if you are already using SOA data structures. For a lot of the 'traditional' SSE c Re: (Score:2) Re:It depends? (Score:5, Informative) "So to get good performance, the problem needs to fit on the GPU. You can move data to and from the main memory (or disk) occasionally, but most of the crunching must happen on card." From what I have seen when people use GPUs for HPC, this, more often than anything else, is the limiting factor. The actual calculations are plenty fast, but the need to format your data for the GPU, send it, then do the same in reverse for the result really limits the practical gain you get. I'm not saying it's useless or anything - far from it - but this issue is as important asthe actual processing you want to do for determining what kind of gain you'll see from such an approach. That's the big draw of the Teslas (Score:3, Informative) I mean when you get down to it, the seem really overpriced. No video output, their processor isn't anything faster, what's the big deal? 
Big deal is that 4x the RAM can really speed shit up. Unfortunately there are very hard limits to how much RAM they can put on a card. This is both because of the memory controllers, and because of electrical considerations. So you aren't going to see a 128GB GPU or the like any time soon. Most of our researchers that do that kind of thing use only Teslas because of the need Re: (Score:2) The problem is when you have a larger system, with hundreds of cores, and an iterative simulation. You run the system for a cycle, propagate data, then run for another cycle and so on. In that case you can't isolate a long-running process on the card, and you end up having to squeeze data through that bus for each cycle anyway. It is likely still well worth using GPUs but you do need to take a good look at whether adding GPUs are more or less effective than using your funds to simply add more cores instead. Re: (Score:1, Interesting) That is an excellent post, with the exception of this little bit GPUs have extremely fast RAM connected to them, much faster than even system RAM I'd like to see a citation for that little bit of trivia... the specific type & speed of RAM on a board with a GPU varies based on model and manufacturer. Cheaper boards use slower RAM, the more expensive ones use higher end stuff. I haven't seen ANY GPUs that came with on-board RAM that is any different than what you can mount as normal system RAM, however. Not trolling, I wanted to point out a serious flaw in what is an otherwise great post Re:It depends? (Score:4, Informative) GPUs have extremely fast RAM connected to them, much faster than even system RAM I'd like to see a citation for that little bit of trivia Ok, so my Geforce GTX480 has GDDR5 ( [nvidia.com] ) which is based on DDR3 ( [wikipedia.org] ) My memory bandwidth on the GTX480 is 177 GB/sec. The fastest DDR3 module is PC3-17000 ( [wikipedia.org] ) which gives approx 17000 MB/s which is approx 17GB/sec.
So my graphics ram is basically 10x faster than system ram as it should be. Re: (Score:2) My memory bandwidth on the GTX480 is 177 GB/sec. The fastest DDR3 module is PC3-17000 ( [wikipedia.org] ) which gives approx 17000 MB/s which is approx 17GB/sec. And the high end CPUs have as far as I know triple channel memory now so a total of 51 GB/s. Not sure how valid that comparison is but graphics card tend to get their fill rate from having a much wider memory bus - the GTX480 has a 384 bit wide bus - rather than that much faster memory so it's probably not too far off. If CPUs move towards doing GPU-like work which can be loaded in wider chunks they'll probably move towards a wider bus too. Re: (Score:2) Width is part of it but it's also clock rate. The fastest overclocked DDR3 will go to 2.5GHz. The stock Geforce 480 is 3.7Ghz. At those rates the bus length gets to be an issue. The memory on a graphics card can be kept very close to the chip. On a PC the memory due to practical reason has to be set farther away resulting in necessarily slower clocks and data rates. The 51 GB/sec you mention is definitely overclocked. I've not seen stock memory that fast. Even so its still less than a third the rate of Re: (Score:2) No, I mean the fastest consumer card available right now. Re: (Score:1) You mean the HD5970? Re: (Score:2) I've had really bad experiences with ATI cards, especially the poor OpenGL support which I use predominantly. I'm sticking with nVidia - OpenGL is always faster on nVidia than ATI. Re: (Score:3, Interesting) I haven't seen ANY GPU's that came with on-board RAM that is any different than what you can mount as normal system RAM, however. You haven't been looking very hard. Most GPUs have GDDR3 or GDDR5 running at very high frequencies. :) My system for example: Main memory: DDR2 400Mhz, 64-bit bus. 6,400 MB/sec max. GPU memory: GDDR3 1050Mhz, 448-bit bus. 117,600 MB/sec max. Maybe double the DDR2 figure since it's in dual-channel mode. 
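The bandwidth figures traded in these comments all come from one formula: peak bandwidth = (bus width in bytes) × (effective transfer rate). A quick sketch checking the quoted numbers — note the 3696 MT/s effective rate for the GTX480's GDDR5 and 2133 MT/s for PC3-17000 are the commonly listed specs, assumed here rather than stated in the posts:

```javascript
// Peak memory bandwidth in MB/s = (bus width in bytes) * (effective rate in MT/s)
function bandwidthMBs(busBits, megatransfersPerSec) {
  return (busBits / 8) * megatransfersPerSec;
}

// GTX480: 384-bit bus at an effective 3696 MT/s
console.log(bandwidthMBs(384, 3696)); // 177408 MB/s -- the ~177 GB/s quoted above

// One PC3-17000 (DDR3-2133) module: 64-bit channel at 2133 MT/s
console.log(bandwidthMBs(64, 2133)); // 17064 MB/s -- the ~17 GB/s quoted above

// Three such channels is how a triple-channel CPU reaches the ~51 GB/s figure
console.log(bandwidthMBs(64, 2133) * 3); // 51192 MB/s
```

The GPU's edge is thus mostly bus width (384 or 448 bits vs. 64 per CPU channel), with the higher effective clock of GDDR5 on top.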
I'm not sure, but it hardly makes much of a difference in contrast. That isn't even exceptional by the way. I have a fairly mainstream GPU, the GTX 260 c216. High-end cards like the HD5870 and GT Re: (Score:3, Interesting) That's a very good breakdown of what you need to benefit from GPU based computing but, really, only #1 has any relevance vs. an x86 chip. #2) Yes, an x86 chip will have a high clock speed but, unless you can use SSE instructions, x86 is crazy slow. Also, most (if not all) architectures will give you half the flops for using the double precision vector instructions vs. the single precision ones. #3) This is a problem with CPUs as well except, as you point out, the memory is much slower. Performance is often Re: (Score:2) The GPUs are definitely worse than CPUs in branching. If your code splits into 8 different code paths at one point due to branching, your performance can be as bad as 1/8 the maximum, since rather than do anything remotely like actual branching, some GPUs just interleave the code of the different branches, with each instruction tagged as to which branch the code belongs to. So if the unit is processing an instruction for a branch it is not on, it just sits there doing nothing for one instruction cycle Re: (Score:2) The other big factor (the biggest in most of the GPU code I've written) is your pattern of memory access. Most GPUs have no cache so access to memory has very high latency even though the bandwidth is excellent. The card will hide this latency to some extent through clever scheduling; and if all your threads are accessing adjacent memory, it will coalesce that into one big read/write. But GPUs do best on problems where the ratio of arithmetic to memory access is high, and your data can hang around in regist Re: (Score:2). That requirement is not necessarily true. Or at least not in the traditional sense of 'floating point.' GPUs make awesome pattern-matchers for data that isn't necessarily floating point.
Elcomsoft (of adobe DRM international arrest fame) has a GPU accelerated password cracker [elcomsoft.com] that is essentially a massively parallel dictionary attack. A number of anti-virus vendors have GPU accelerated scanners - like Kaspersky. [theinquirer.net] And some people have been working with GPUs for network monitoring via packet analysis [ieee.org] too. Re: (Score:2) Some of the examples used in the cudaSDK are phoney. The sobel one can be made to run faster on the cpu - provided you use the intel compilers and performance primitives and can parallelise. It doesn't surprise me. There is an example of Sobel for the FPGA's that touts much faster execution times, but then when you examine the code, the fpga version has algorithmic optimisations that were 'left out' for the cpu version. Again, it can be made to run faster on the cpu. I'm not saying that GPUs are crap. For Re: (Score:2) You lazy fuckers (Score:5, Interesting) I don't expect slashdot "editors" to actually edit, but could you at least link to the most applicable past story on the subject [slashdot.org]? It's almost like you people don't care if slashdot appears at all competent. Snicker. Re: (Score:2) s/editors/kdawson/g Re: (Score:1) what does this mean? totally lost, i am. Re: (Score:1, Offtopic) So, once upon a time, there was this text editor called vi. To make it do shit you type in cryptic commands. The one for search-and-replace is s, followed by a slash, followed by the thing you want to search for, followed by another slash, followed by the thing you want to replace it with. Because of more arcana, this will only happen once per line unless you put a g after it. So s/cat/dog/g means "replace all occurrences of cat with dog". Incidentally, you also have to tell vi in what range it should do this Re: (Score:2) I think that predates vi. Good ol' "ed", the line editor, has s/foo/bar/g command. Re: (Score:2) AMD (Score:5, Funny) Re: (Score:1) Except that..
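For anyone who didn't live through ed/vi: the s/cat/dog/g substitution explained in that comment maps directly onto a global regex replace in most modern languages. A JavaScript equivalent:

```javascript
// s/cat/dog/g -- replace every occurrence of "cat" with "dog".
// Without the trailing g ("global") flag, only the first match is replaced.
var once = "cat sees cat".replace(/cat/, "dog");
var all = "cat sees cat".replace(/cat/g, "dog");
console.log(once); // "dog sees cat"
console.log(all);  // "dog sees dog"
```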
(Score:2) Magny-Cours is currently showing significant performance advantage over Intel's offerings while at the same time AMD's Evergreen *mostly* shows performance advantages over nVidia's Fermi despite making it to market ahead of Fermi. AMD is currently providing the best tech on the market This will likely change, but at the moment, things look good for them. Re: (Score:2) I just got back from a lattice QCD conference, and there were lots of talks on GPGPU. Everybody's falling over each other trying to implement their code on GPU's because of the performance gains. *Every* talk mentioned Nvidia cards -- Geforce GTX nnn's, Tesla boards, Fermi boards. Nobody talked about AMD at all. Maybe AMD does have an advantage, but nobody's using it. Re: (Score:1) Interestingly, most scientific papers talking about large speed gains (factor 2..10) by going from CPU to GPU computation compare a hand-optimized GPU implementation to a plain single-threaded non-SSE CPU implementation. From my experience, using SSE intrinsics gives a speed-up of 2..8 versus good generic code, and multi-threading gives more improvement until one hits the RAM bandwidth wall. Re: (Score:2) Re: (Score:2) *Every* talk mentioned Nvidia cards -- Geforce GTX nnn's, Tesla boards, Fermi boards. Nobody talked about AMD at all. Maybe AMD does have an advantage, but nobody's using it. That's because nVIDIA has excellent support, both on Windows and Linux, and documentation for their CUDA GPGPU system. They even have an emulator so people without an nVIDIA GPU can develop for one. (Although it's now deprecated.) On the other hand, AMD has CAL, Stream, and OpenCL; and I can't even figure out which one I'm supposed to use to support all GPGPU-capable AMD cards. OpenCL has some documentation; I can't find anything good on CAL, and I can't find any way to develop for the platform on Linux w Re:AMD (Score:5, Insightful) AMD is the most advantaged on this front... 
Intel and nVidia are stuck in the mode of realistically needing one another and simultaneously downplaying the other's contribution. AMD can use what's best for the task at hand/accurately portray the relative importance of their CPUs/GPUs without undermining their marketing message. CPUs and GPUs have different goals (Score:5, Interesting). Re: (Score:2) All 3 of them? Straw man? (Score:2) The author doesn't understand what the straw man argument is. He thinks it is bringing up anything that isn't specifically mentioned in the original argument. Nvidia stating that optimizing multi-core CPUs is difficult and that the Nvidia architecture has hundreds of applications seeing a huge gain in performance now is a valid point even if the Intel side never mentioned the difficulty of implementation. Intel says "Buy Nvidia" (Score:5, Insightful) What the hell kind of sales pitch is "We're only a little more than twice as slow!" It's gonna work, too. Humanity sucks at math. Re: (Score:2) The two times speed gain point is where it becomes pointless to exploit specialized hardware. Frequently, the software development program manager has two choices: a) Ship a product now, or b) Spend 1 to 2 more years developing the product, then ship it. The issue is that hardware doubles in speed every 1 to 2 years. If the cost of exploiting current specialized hardware is an additional 1 to 2 years software development, t Re: (Score:1) Re: (Score:2) I did an experiment on a Core 2 Duo a couple years ago and found it to be only 5% as fast at doing a huge matrix multiply compared to a (then) top-of-the-line Nvidia. So, they're catching up pretty well. That's worth noting for people who've been following this closely for a while. Re: (Score:2) It's a very good sales pitch, actually. Unlike AMD, NVidia isn't an alternative to Intel CPUs. Instead it's a complementary technology, which adds additional cost.
So, I could buy a $500 CPU and a $500 GPU, or I could buy TWO $500 CPUs, and get most of the performance, without having to completely redesign all software to run on a GPU. And Intel has at least one good point, in that NVidia's claims are based on pretty naive m Yes, great sales pitch (Score:2) and the SIMD instructions that have been added to Intel/AMD CPUs in recent years really are the same thing you get with GPU programming, just on a bit smaller scale. It's an order of magnitude different (and I know from experience coding CPU and GPU): i7 960 - 4 cores, 4-way SIMD; GT285 (not 280) - 30 cores, 32-way SIMD. SP GFLOPS: i7 960 - 102; GT285 - 1080. No matter what, AMD really wins in this one. AMD has the potent Re: (Score:2) I think that CPUs are faster with conditional branching and other general purpose computing tasks, so I would sacrifice 2x for that. Optimizations Matter (Score:2) From the article, you can narrow the gap: "with careful multithreading, reorganization of memory access patterns, and SIMD optimizations" Sometimes though, I don't want to spend all week making optimizations. I just want my code to run and run fast. Sure, if you optimize the heck out of a section of code, you can always eke out a bit more performance, but if the unoptimized code can run just as fast (on a GPU), why would I bother? Re:Optimizations Matter (Score:4, Informative) It's certainly true that most programmers reach for the latter style, but mainly because they aren't planning on using any SIMD. Re: (Score:3) The difference is the 'naive' code you write to do things in the simplest manner *can* run on a CPU. For the GPU languages, you *must* make those optimizations. This is not to undercut the value of GPU (as Intel concedes, the gap is large), but it does serve to counteract the dramatic numbers touted by nVidia. nVidia compared expert tuned and optimized performance metrics on their product and compared against stock, generic benchmarks on intel products.
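The SP GFLOPS figure quoted for the i7 960 follows from standard peak-throughput arithmetic: cores × SIMD width × floating-point ops per lane per cycle × clock. A sketch — the 3.2 GHz clock and the 2 ops/cycle (one SSE multiply plus one add per cycle) are assumptions not stated in the post:

```javascript
// Peak single-precision GFLOPS = cores * SIMD lanes * FLOPs per lane per cycle * GHz
function peakGflops(cores, simdWidth, flopsPerLaneCycle, ghz) {
  return cores * simdWidth * flopsPerLaneCycle * ghz;
}

// i7 960: 4 cores, 4-wide SSE, mul+add issued per cycle, 3.2 GHz
console.log(peakGflops(4, 4, 2, 3.2)); // 102.4 -- the ~102 SP GFLOPS quoted above
```

The same formula applied to a GPU's far larger core × lane count is where the order-of-magnitude gap in the comment comes from.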
Still trying to keep Larrabee going? (Score:5, Insightful) On top of being highly capable at massively parallel floating point math (the bread and butter of top500 and most all real world HPC applications), GPU chips benefit from economies of scale by having a much larger market to sell chips to. If Intel has an HPC-only processor, I don't see it really surviving. There have been numerous HPC only accelerators that provided huge boosts over cpus that flopped. GPUs growing into that capability is the first large scale phenomenon in hpc with legs. Re: (Score:1) Who cares anymore? (Score:2) Does anyone under the age of 25 really care anymore about processor speed and video card "features"? I only ask because 15 years ago I cared greatly about this stuff. However, I'm not sure if that is a product of my immaturity at that time, or the blossoming industry in general. Nowadays it's all pretty much the same to me. Convenience (as in, there it is sitting on the shelf for a decent price) is more important these days. Re:Who cares anymore? (Score:4, Insightful) Two things: you've been conditioned to accept gaming graphics of yesteryear, and your need for more complex game play now trumps pure visuals. You can drop in a $100 video card, set the quality to give you excellent frame rates, and it looks fucking awesome because you remember playing Doom. Also, once you get to a certain point, the eye candy takes a backseat to game play and story - the basic cards hit that point pretty easily now. Back when we used to game, you needed just about every cycle you could get to make basic gameplay what would now be considered "primitive". Middling level detail is great, in my opinion. Going up levels to the maximum detail really adds very little. I won't argue that it's cool to see that last bit of realism, but it's not worth doubling the cost of a computer to get it. Re:Who cares anymore? 
(Score:4, Informative) An $80 video card enables high/ultra settings at 60+ FPS on nearly all games for the "low resolution" group, but not the "high resolution" group. Re: (Score:1) Re: (Score:2) So you ARE under the age of 25! Joking aside, I find the $50-$75 3d cards to work just fine for new 3d games. This has been an adequate price-to-performance point for me since about 2003. Take me out back and shoot me (Score:2) Oh for cryin' out loud (Score:5, Insightful) Big Deal, A Barrel... (Score:4, Insightful) Yeah, speciality silicon for a small subset of problems will stomp all over a general purpose CPU. No big news there. Why is Intel even bothering to whine about this stuff? They sound like a bunch of babies trying to argue that the sky isn't blue. This makes Intel look truly sad. It's completely unnecessary. Re: (Score:3, Insightful) The reason that Intel is whining is in the context of large number crunching systems or high end workstations. Rather than sell Ks of chips for the former, Nvidia (and to a lesser extent AMD) gets to sell hundreds of GPU chips. And for the workstations, Intel sells only one chip instead of 2 to 4. Larrabee is back? (Score:1) Larrabee Marketing == Direct-to-DVD? (Score:2) Intel decided to bail on marketing an in-house high performance GPU. But, they'd still like a return on their Larrabee investment. I don't doubt they would have been pushing the HPC mode anyway, but now, that's all they've got. Unfortunately for Intel, they've got to sell Larrabee performance based on in-house testing, while there are now a number of CUDA-based applications, and HPC-related conferences and papers are now replete with performance data. To Intel's and AMD/ATI's advantage, NVIDIA has signed on Sorry Intel Nvidia Wins (Score:2) Using Badaboom, a CUDA app, you can rip down DVD copies to your iPod in minutes, not hours. Unfortunately Badaboom are idiots and are taking their sweet time porting to the 465/470/480 cards.
I'd love to see a processor fast enough to beat a GPU at tasks such as these, and cd to mp3 conversions on CUDA, it's like moving from a hard drive to a fast SSD. The Solution? (Score:1) Hopefully, OpenCL will have the same catalyzing effect on HPC that OpenGL had on computer graphics, but time will tell. Word of warning to Intel: Almost nobody w Re: (Score:2) Re: (Score:2) The troll did have one point, the subject, where is AMD/ATI in this article ? Didn't they also have a product in that segment ? Re:AMD (Score:5, Informative) AMD just isn't doing well in the high end consumer-grade space, but then again the chips that Intel is ruling with in that segment are priced well above consumer budgets. GPUs not top notch across the board... (Score:2) Evergreen had a *huge* lead over pre-Fermi nVidia chips, and still leads in 32-bit precision (and by extension most of what the mass market cares about), but 64-bit precision lags Fermi. Of course, Evergreen beat Fermi to market by a large margin. Re: (Score:2) It alternates between the two. First Nvidia is in the lead, ATI (AMD) takes a step forward and now ATI is in the lead. Then Nvidia takes a step forward and Nvidia regains top spot. So on and so forth, this is the best kind of competition you can hope for in Re: (Score:3, Insightful) I don't think AMD really cares about competing with top-end Intel processors. It takes a lot of R&D investment with very little return (it's a tiny market segment) In the low/mid range AMD rules the roost in terms of value for money. Re: (Score:2) Re: (Score:2) Re: (Score:1) look at the X6 BE chip, 6 cores, better performance than anything intel has at the same price. It doesn't compete with Intel's 12 "core" in 12 thread applications, but apart from video encodes (even that's iffy) you'll be hard pressed to find a 12 thread app that doesn't end up IO bound, as a home user. Apart from video encodes you'll be hard pressed to find a 12 thread app, as a home user.
(as in actually thrashing 12 threads at once) Re: (Score:2) AMD really cares about competing with top-end Intel processors - as when the Athlon ruled the roost, AMD sold its chips at a premium. Now (since the Core2Duo launched), with Intel in top spot, AMD is selling its processors cheaper, so it's losing possible profit.
http://hardware.slashdot.org/story/10/06/26/2322220/Intel-NVIDIA-Take-Shots-At-CPU-vs-GPU-Performance/interesting-comments
SparkFun Blocks for Intel® Edison - 9 Degrees of Freedom Block.

Suggested Reading

If you are unfamiliar with Blocks, take a look at the General Guide to SparkFun Blocks for Intel Edison. Other tutorials that may help you on your Edison adventure include:

Board Overview

The 9DOF Block has a lot of jumpers on it, but you can use it without understanding or changing any of them. Here's a description of each one:

A (INT2) - Accelerometer/magnetometer interrupt 2. This pin can be configured to change on a number of different conditions. See datasheet pp. 58 and 65-67 for more details on configuring the device. Closing this jumper with a solder blob connects the INT2 pin on the LSM9DS0 to GPIO 49 on the Edison.

B (INT1) - Accelerometer/magnetometer interrupt 1. This pin can be configured to change on a number of different conditions. See datasheet pp. 58 and 63-65 for more details on configuring the device. Closing this jumper with a solder blob connects the INT1 pin on the LSM9DS0 to GPIO 48 on the Edison.

C (DRDYG) - Data Ready, gyroscope. Closing this jumper connects the pin to GPIO 47. See datasheet page 43 for information on configuring this pin.

D (INTG) - Gyroscope interrupt. This pin can be configured to change on a number of different conditions. Closing this jumper will connect the pin to GPIO 46. See datasheet pages 43 and 47-50 for information on configuring this pin.

E (DEN) - Data enable, gyroscope. Enable or !pause data collection. This pin can safely be ignored. Closing this jumper allows processor control of data collection via GPIO 165.

F (CLOCK/DATA) - I/O interface selection jumpers. The default setting is I2C1, but cutting the small trace visible between the two upper pads of each jumper and closing the bottom two pads with a solder blob allows the user to route control to SPIDEV2. SPI is currently an unsupported feature and will likely be removed from a future revision.

G (CSG) - SPI chip select, gyroscope.
Closing this jumper connects the signal to GPIO 111 on the Edison, which is FS0 on SPIDEV2. The CS pin can be either handled manually or by the driver. SPI is currently an unsupported feature and will likely be removed from a future revision.

H (CSXM) - SPI chip select, accelerometer/magnetometer. Closing this jumper connects the signal to GPIO 110 on the Edison, which is FS1 on SPIDEV2. The CS pin can be either handled manually or by the driver. SPI is currently an unsupported feature and will likely be removed from a future revision.

I (SDOG) - SPI serial data out (MISO), gyroscope. SPI is currently an unsupported feature and will likely be removed from a future revision.

J (SDOXM) - Serial data out (MISO), accelerometer/magnetometer. SPI is currently an unsupported feature and will likely be removed from a future revision.

K (I2C PUs) - Pull-up resistor removal for I2C SDA and SCL lines. Most likely, you won't want to remove these resistors from the system; however, if you have a lot of devices on the I2C bus, you may need to remove some of the pull-ups from the lines to reduce the pull-up strength. (No solder indicates that pull-ups are disabled. Connect all three pads with a solder blob to enable pull-ups.)

L (CS PUs) - Pull-up resistor removal for SPI chip select lines. Normally pull-up resistors should be left in place. SPI is currently an unsupported feature and will likely be removed from a future revision.

M (SDOG PU) - Closed by default, this pin sets the I2C address used by the gyroscope. When closed, the gyroscope's address is 0x6b. When open, jumper SDOG PD (labeled 'O' above) must be closed.

N (SDOXM PU) - Closed by default, this pin sets the I2C address used by the magnetometer/accelerometer. When closed, their collective address is 0x1d. When open, jumper SDOXM PD (labeled 'P' above) must be closed.

O (SDOG PD) - Open by default, this pin sets the I2C address used by the gyroscope. When closed, the gyroscope's address is 0x6a.
P (SDOXM PD) - Open by default, this pin sets the I2C address used by the magnetometer/accelerometer. When closed, their collective address is 0x1e.

Connecting the 9 DOF Block

To use the 9 DOF Block, read over the General Guide to SparkFun Blocks for Intel Edison to get up to speed.

Getting Started

Follow the instructions in the programming tutorial to create a new project named "SparkFun_9DOF_Edison_Block_Example". Once you've created the project, open the project files on disk (hint: you can find the path to the project by choosing "Properties" from the project menu), and copy the three source files found in the Edison 9DOF Block CPP library GitHub repository into the "src" directory.

Code

Everything you need to know is in the comments.

language:c
#include "mraa.hpp"
#include <iostream>
#include <unistd.h>
#include "SFE_LSM9DS0.h"

using namespace std;

int main()
{
  LSM9DS0 *imu;
  imu = new LSM9DS0(0x6B, 0x1D);

  // The begin() function sets up some basic parameters and turns the device
  // on; you may not need to do more than call it. It also returns the "whoami"
  // registers from the chip. If all is good, the return value here should be
  // 0x49d4. Here are the initial settings from this function:
  //   Gyro scale: 245 deg/sec max
  //   Xl scale: 4g max
  //   Mag scale: 2 Gauss max
  //   Gyro sample rate: 95Hz
  //   Xl sample rate: 100Hz
  //   Mag sample rate: 100Hz
  // These can be changed either by calling appropriate functions or by
  // passing parameters to the begin() function. There are named constants in
  // the .h file for all scales and data rates; I won't reproduce them here.
  // Here's the list of functions to set the rates/scale:
  //   setMagScale(mag_scale mScl)      setMagODR(mag_odr mRate)
  //   setGyroScale(gyro_scale gScl)    setGyroODR(gyro_odr gRate)
  //   setAccelScale(accel_scale aScl)  setAccelODR(accel_odr aRate)
  // If you want to make these changes at the point of calling begin, here's
  // the prototype for that function showing the order to pass things:
  //   begin(gyro_scale gScl, accel_scale aScl, mag_scale mScl,
  //         gyro_odr gODR, accel_odr aODR, mag_odr mODR)
  uint16_t imuResult = imu->begin();
  cout<<hex<<"Chip ID: 0x"<<imuResult<<dec<<" (should be 0x49d4)"<<endl;

  bool newAccelData = false;
  bool newMagData = false;
  bool newGyroData = false;
  bool overflow = false;

  // Loop and report data
  while (1)
  {
    // First, let's make sure we're collecting up-to-date information. The
    // sensors are sampling at 100Hz (for the accelerometer, magnetometer, and
    // temp) and 95Hz (for the gyro), and we could easily do a bunch of
    // crap within that ~10ms sampling period.
    while (!(newGyroData && newAccelData && newMagData))
    {
      if (newAccelData != true)
      {
        newAccelData = imu->newXData();
      }
      if (newGyroData != true)
      {
        newGyroData = imu->newGData();
      }
      if (newMagData != true)
      {
        newMagData = imu->newMData(); // Temp data is collected at the same
                                      // rate as magnetometer data.
      }
    }
    newAccelData = false;
    newMagData = false;
    newGyroData = false;

    // Of course, we may care if an overflow occurred; we can check that
    // easily enough from an internal register on the part. There are
    // functions to check for overflow per device.
    overflow = imu->xDataOverflow() | imu->gDataOverflow() | imu->mDataOverflow();
    if (overflow)
    {
      cout<<"WARNING: DATA OVERFLOW!!!"<<endl;
    }

    // Calling these functions causes the data to be read from the IMU into
    // 10 16-bit signed integer public variables, as seen below. There is no
    // automated check on whether the data is new; you need to do that
    // manually as above. Also, there's no check on overflow, so you may miss
    // a sample and not know it.
    imu->readAccel();
    imu->readMag();
    imu->readGyro();
    imu->readTemp();

    // Print the unscaled 16-bit signed values.
    cout<<"-------------------------------------"<<endl;
    cout<<"Gyro x: "<<imu->gx<<endl;
    cout<<"Gyro y: "<<imu->gy<<endl;
    cout<<"Gyro z: "<<imu->gz<<endl;
    cout<<"Accel x: "<<imu->ax<<endl;
    cout<<"Accel y: "<<imu->ay<<endl;
    cout<<"Accel z: "<<imu->az<<endl;
    cout<<"Mag x: "<<imu->mx<<endl;
    cout<<"Mag y: "<<imu->my<<endl;
    cout<<"Mag z: "<<imu->mz<<endl;
    cout<<"Temp: "<<imu->temperature<<endl;
    cout<<"-------------------------------------"<<endl;

    // Print the "real" values in more human comprehensible units.
    cout<<"-------------------------------------"<<endl;
    cout<<"Gyro x: "<<imu->calcGyro(imu->gx)<<" deg/s"<<endl;
    cout<<"Gyro y: "<<imu->calcGyro(imu->gy)<<" deg/s"<<endl;
    cout<<"Gyro z: "<<imu->calcGyro(imu->gz)<<" deg/s"<<endl;
    cout<<"Accel x: "<<imu->calcAccel(imu->ax)<<" g"<<endl;
    cout<<"Accel y: "<<imu->calcAccel(imu->ay)<<" g"<<endl;
    cout<<"Accel z: "<<imu->calcAccel(imu->az)<<" g"<<endl;
    cout<<"Mag x: "<<imu->calcMag(imu->mx)<<" Gauss"<<endl;
    cout<<"Mag y: "<<imu->calcMag(imu->my)<<" Gauss"<<endl;
    cout<<"Mag z: "<<imu->calcMag(imu->mz)<<" Gauss"<<endl;

    // Temp conversion is left as an example to the reader, as it requires a
    // good deal of device- and system-specific calibration. The on-board
    // temp sensor is probably best not used if local temp data is required!
    cout<<"-------------------------------------"<<endl;

    sleep(1);
  }

  return MRAA_SUCCESS;
}

Resources and Going Further

Now that you have had a brief overview of the 9 DOF Block, take a look at some of these other tutorials. These tutorials cover programming, Block stacking, and interfacing with the Intel Edison ecosystem.

Edison General Topics:
- General Guide to SparkFun Blocks for Intel Edison
- Edison Getting Started Guide

Block Specific Topics:
https://learn.sparkfun.com/tutorials/sparkfun-blocks-for-intel-edison---9-degrees-of-freedom-block-/all
For many PL/SQL developers, this might be common sense, but for one of our customers, this was an unknown PL/SQL feature: Backtraces. When your application raises an error somewhere deep down in the call stack, you don't get immediate information about the exact source of the error. For large PL/SQL applications, this can be a pain. One workaround is to ...Read More »

Metrics: Good VS Evil
Pop ...Read More »

Java Numeric Formatting
I can think of numerous times when I have seen others write unnecessary Java code and I have written unnecessary Java code because of lack of awareness of a JDK class that already provides the desired functionality. One example of this is the writing of time-related constants using hard-coded values such as 60, 24, 1440, and 86400 when TimeUnit provides ...Read More »

Programming Language Job Trends Part 1 – August 2014
It is time for the August edition of the programming language job trends! The response to the language list changes was definitely positive, so things will be stable for this edition. In Part 1, we look at Java, C++, C#, Objective C, and Visual Basic. I did look at the trends for Swift, but the demand is not high enough yet. Part 2 (PHP, Python, JavaScript, .. »

Integrating jOOQ with PostgreSQL: Partitioning
Introduction ...Read More »

Feature ...Read More »

Monitoring ...Read More »

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class Fibonacci {
    public List<Integer> getFiboSeries(int numberOfElements) ...Read More »

How to successfully attack a software dinosaur? ...Read More »
https://www.javacodegeeks.com/page/333/
Say you're building an SVG icon system. You're building an SVG sprite full of symbols by hand, or using a build tool like IcoMoon or grunt-svgstore to create it for you. What do you do with that sprite.svg?

One option is to include it right at the top of the document and then <use> away.

...
</head>
<body>

  <!-- include it here -->
  <?php include_once("svg/sprite.svg"); ?>

  ...

  <!-- use it here -->
  <a href="/" class="logo">
    <svg class="logo">
      <use xlink:
    </svg>
  </a>

That works, but it doesn't make very good use of caching. If the site does HTML caching, every single page will have this big chunk of identical SVG in it, which is a bit bloaty. Not to mention the HTML parser will need to read through all of that in the document each page load before moving on to the content. Probably better to take advantage of browser caching, like we do all other assets. We can do that by making our <use> link to an external source. But some browsers have trouble with that. Namely any version of IE and some older WebKit stuff. SVG for Everybody, that we recommended here, mostly works well. But, there are some browsers it doesn't catch with the UserAgent sniff it does. An alternative approach is just to ajax for the sprite (all the time) and inject it onto the page. That means you can browser cache that SVG, and it works everywhere that inline SVG works. Assuming you have a complete SVG document you're ajaxing for (sprite.svg), it's actually slightly tricky. You have to ensure that the SVG is in the proper namespace before you append it to the page (thanks Amelia). Luckily we can exploit the HTML parser that normally is in charge of this. We do this by appending the SVG into a <div> then appending that to the page.
var ajax = new XMLHttpRequest();
ajax.open("GET", "svg/sprite.svg", true);
ajax.send();
ajax.onload = function(e) {
  var div = document.createElement("div");
  div.innerHTML = ajax.responseText;
  document.body.insertBefore(div, document.body.childNodes[0]);
}

If you're using jQuery, the callback data it gives you is already formatted into an SVG document, so you need to force it back into a string before appending it to the div and ultimately the page.

$.get("svg/sprite.svg", function(data) {
  var div = document.createElement("div");
  div.innerHTML = new XMLSerializer().serializeToString(data.documentElement);
  document.body.insertBefore(div, document.body.childNodes[0]);
});

Remember that you <use> with just the identifier when you go this way, not the external source.

<svg class="icon" viewBox="0 0 100 100">
  <use xlink:
</svg>

Seems to work well in IE and Android. Also remember that SVG for Everybody helps you with transforming the markup into <img src="...png"> in non-supporting browsers, so if that's important to you, you'd be on your own there.

Doesn’t work at all in Opera.

Really? It seems like it does.

Wrote up something like this a month or so ago. I rely on jQuery, but you could obviously do it with vanilla JS.

Pretty cool! That looks like it does an ajax request for every single SVG. The idea of an SVG sprite is that it's a single request. Just something to consider.

Yep! I wrote this mainly for use with large SVGs. Or rather, for loading CSS-able SVGs the way normal images are loaded. Hence loading them synchronously.

Just a tad confused. So you're saying that by using Ajax the browser is able to cache sprite.svg because the browser treats it as a file, yes? Also, you're of course relying on JS to work. If it fails or is disabled, obviously this wouldn't work, but I guess that's fine if you're only including enhancements like icons.

That's all correct. For instance, here's ensuring the SVG is served correctly and cached from .htaccess:

I like it!
I’ve been using something similar to load svg combined by gulp-svgstore. The only difference is that I insert the content before the script tag.

Thanks for the shout-out about the innerHTML trick, but you don't need it here. XMLHttpRequest provides you with a complete XML document; you can just grab the SVG node and append it directly. (Because a valid stand-alone SVG file is XML, this will have better browser support than for HTML documents, which were a later extension.) The only trick is that for Firefox and IE you need to explicitly tell them that the response will be a document, not plain text/binary data. I would think the image/svg+xml MIME type would be enough, but apparently not. The bare-bones JavaScript code would then be:

I add a couple extra lines in this pen, to set a class on the SVG before I inject it into the document and to add a try/catch block just in case there are file access problems. It's working if you see a shiny gold rectangle, it's not if you see solid red:

Thanks, Amelia. Could you help me sort out the best way to fetch and load multiple svg sprite files with AJAX?

I'm surprised to see this works using an asynchronous request. What happens if <use> is parsed before the SVG is added to the page?

Love it!

Love it! Can you make it with just HTML5?

Well, it's tricky but useful! I just put <object> into the HTML:

And then replace it with the SVG content once it's loaded:

You get caching & clean markup this way IMO. I wrote a helper to do this replacement for me like this:

That looks neat, but does that mean that every single SVG is separate? The larger idea here is single-request.

I used this technique to load a few separate files, but it shouldn't matter really. You can load a sprite and then use it like you want.

HTML:

JS:

Just did a quick test – seems to work fine in all browsers (FF, Chrome, Safari, Opera, IE 9 – 11).
I just stumbled over a nasty quirk with this technique: wherever they're placed, Chrome (40) sees used symbols (e.g., <use xlink:) as descendants of the original <svg> tag, including that tag's position in the DOM, which eliminates the possibility of re-styling the same SVG symbol with different fills depending on the context the use tag appears in. E.g., this won't work in Chrome:

SVG:

HTML:

CSS:

It works in Firefox, and I haven't tried it in other browsers, but Chrome doesn't consider the symbol to actually exist within the <use> tag, so the selector .pink-logo .logo-part is effectively unmatched.

Minor follow-up: it looks like Chrome's behavior is in accord with the W3C spec.

Since this applies all the way down to the SVG's internals, the only workaround I can think of (for situations where you need to select & style specific elements within an SVG) is to use JS to pick symbols out of the DOM and append them directly to a new SVG element, avoiding <use> altogether. For most situations, you're almost certainly better off just creating duplicate symbols for each of your variations. (Cue sad trombone.)
https://css-tricks.com/ajaxing-svg-sprite/
Description

This is a hole of PIG-1824. If the path of the python module is in the classpath, the job dies with the message "could not instantiate 'org.apache.pig.scripting.jython.JythonFunction'". Here is my observation: if the path of the python module is in the classpath, the fileEntry we get in JythonScriptEngine:236 is __pyclasspath__/script$py.class instead of the script itself. Thus we cannot locate the script, and we skip the script in job.xml. For example:

register 'scriptB.py' using org.apache.pig.scripting.jython.JythonScriptEngine as pig
A = LOAD 'table_testPythonNestedImport' as (a0:long, a1:long);
B = foreach A generate pig.square(a0);
dump B;

scriptB.py:

#!/usr/bin/python
import scriptA

@outputSchema("x:{t:(num:double)}")
def sqrt(number):
    return (number ** .5)

@outputSchema("x:{t:(num:long)}")
def square(number):
    return long(scriptA.square(number))

scriptA.py:

#!/usr/bin/python
def square(number):
    return (number * number)

When we register scriptB.py, we use the jython library to figure out the dependent modules scriptB relies on, in this case scriptA. However, if the current directory is in the classpath, instead of scriptA.py we get __pyclasspath__/scriptA.class. Then we try to put __pyclasspath__/script$py.class into job.jar, and Pig complains that __pyclasspath__/script$py.class does not exist. This is exactly what TestScriptUDF.testPythonNestedImport is doing. In hadoop 20.x, the test still succeeds because MiniCluster will take the local classpath, so it can still find scriptA.py even if it is not in job.jar. However, the script will fail in a real cluster and in MiniMRYarnCluster of hadoop 23.

Activity

Seeing some errors with import unicodedata. Will update the patch after fixing that case too.

Changing status to Patch Available again. The issue was something that cannot be fixed in code and can be worked around. For anyone interested in the issue and the solution.
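In CPython, the analogous dependency discovery can be done with the standard library's modulefinder. The sketch below is only an illustration of the idea (it is not what JythonScriptEngine does internally, and the throwaway script name is made up for the demo):

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny throwaway script that imports a stdlib module, then ask
# ModuleFinder which modules it pulls in -- the same kind of question Pig
# asks when it registers scriptB.py and discovers scriptA.
with tempfile.TemporaryDirectory() as workdir:
    script = os.path.join(workdir, "demo.py")
    with open(script, "w") as f:
        f.write("import json\n")

    finder = ModuleFinder()
    finder.run_script(script)
    found = set(finder.modules)  # module name -> Module object mapping
```

Run against the scriptB.py from this report, the same approach would list scriptA among the discovered modules, which is exactly the list Pig needs to ship in job.jar.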
The issue had to do with unicodedata.py loading the UnicodeData.txt and EastAsianWidth.txt files inside its code. There is no way to determine them like imports and ship them with the jar. Also note that this happens when the Lib directory is in the classpath, and not with the standalone jython jar file.

loader = pkgutil.get_loader('unicodedata')
init_unicodedata(StringIO.StringIO(loader.get_data(os.path.join(my_path,'UnicodeData.txt'))))
init_east_asian_width(StringIO.StringIO(loader.get_data(os.path.join(my_path,'EastAsianWidth.txt'))))

The workaround for that is to ship those two files with hadoop's tmpfiles or mapred.cache.files option and set -Dmapred.child.env="JYTHONPATH=."

pig -Dmapred.child.env="JYTHONPATH=." -Dtmpfiles="" norm_test.pig

On a different note, I found that progress is not reported in the case of jython functions. Is this a known issue? Could not find any jiras.

Hi Rohini, after applying the patch to trunk, I see the following error in TestScriptUDF.testPythonNestedImportClassPath:

Testcase: testPythonNestedImportClassPath took 0.182 sec
    Caused an ERROR
Python Error. Traceback (most recent call last):
  File "/home/cheolsoo/workspace/pig-svn/scriptB.py", line 2, in <module>
    import scriptA
  File "__pyclasspath__/scriptA.py", line 3, in <module>
NameError: name 'outputSchema' is not defined

Does this test pass for you?

One cause for this error could be that your python cache dir is not writable, and so the pig jar was not processed. Try running with -Dpython.cachedir=/<dir with write perms> if that is the case. Or are you running from eclipse?

Hi Rohini, I tried what you suggested, but I still get the same error.

ant clean test -Dtestcase=TestScriptUDF -Dpython.cachedir=/home/cheolsoo

I see that the test fails on Mac, CentOS 6, and Ubuntu 12. It's not clear what the root cause is. I am attaching my test log.

Suspecting that the following code execution is failing for you, based on the stack trace.
But the attached log does not have any error, and the comment also says it will fail silently.

// attempt addition of schema decorator handler, fail silently
interpreter.exec("def outputSchema(schema_def):\n"
        + "    def decorator(func):\n"
        + "        func.outputSchema = schema_def\n"
        + "        return func\n"
        + "    return decorator\n\n");

The test ran fine for me on Mac and RHEL 5. I will see if I can try to reproduce it. Can you add org.python.core.Options.verbose = Py.DEBUG; in the static block of JythonScriptEngine and see if that gives you any additional error messages?

Hi Rohini, I found that the order in which the test cases run matters. I am attaching two log files: good.log and bad.log. If I force, using OrderedJUnit4Runner, that testPythonNestedImportClassPath runs before testPythonBuiltinModuleImport1, they all pass. But if testPythonBuiltinModuleImport1 runs before testPythonNestedImportClassPath, testPythonNestedImportClassPath fails:

Testcase: testPythonNestedImportClassPath took 38.565 sec
Testcase: testPythonBuiltinModuleImport1 took 35.904 sec

Testcase: testPythonBuiltinModuleImport1 took 38.756 sec
Testcase: testPythonNestedImportClassPath took 0.124 sec
    Caused an ERROR
Python Error. Traceback (most recent call last):
  File "/Users/cheolsoo/workspace/pig/scriptB.py", line 2, in <module>
    import scriptA
  File "__pyclasspath__/scriptA.py", line 3, in <module>
NameError: name 'outputSchema' is not defined

I also turned on DEBUG as per your request, so you can see extra debug messages in the log files.

Thanks Cheolsoo. I think this has something to do with PythonInterpreter being static in JythonScriptEngine. And you must be running with jdk7, so the test order was different. I was running with jdk6, and that's why I did not see it. Will investigate and fix it.

Used different names for different modules. Tests pass when run with jdk7 now.

+1. Thanks for the fix. The test passes for me too. I also ran the e2e tests and found no failure.
Minor comment: when you commit the patch, can you remove a tab char in the following line?

+ <!-- Remove jython jar from mrapp-generated-classpath -->

Thanks for the review, Cheolsoo. Removed the tab before committing. Committed to trunk.

Fixed the issue and added unit tests for import os and re. Note: if jython-standalone.jar is in the pig classpath, I found that in a real cluster I had to add -Dmapred.child.env="JYTHONPATH=job.jar/Lib" to pick up the builtin modules, as the jar gets extracted on the datanode and Lib is not in the classpath. This might apply when using it with oozie too. Could not simulate the error in the unit test environment even after removing the jython jar from mr-apps-classpath. If the extracted Lib directory is in the classpath instead of the standalone jar while launching pig, the env setting is not required.
https://issues.apache.org/jira/browse/PIG-2433
I take it this is related to your other question. Does the other question supersede this one (i.e., is this one now meaningless)?

Word has no option for using regular expressions, only its Find.Execute facility. That does have "wildcard" capability, but I don't think that will do what you describe. The only way you can use regular expressions with content in a Word document would be to work with Office Open XML, not with the Word APIs. Office Open XML lets you open the document outside of Word; using standard Packaging and XML namespaces, you can work with the document content (including using regular expressions).

Cindy Meister, VSTO/Word MVP, my blog
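Since a .docx file is just an OPC zip package, the Open XML route can be sketched in a few lines of Python. This is only to illustrate the idea: the helper name find_in_docx and the sample content are made up, and a real document.xml carries namespaces and run markup that a regex would have to account for.

```python
import io
import re
import zipfile

def find_in_docx(docx_bytes, pattern):
    # A .docx is a zip package; the body text lives in word/document.xml.
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as pkg:
        xml = pkg.read("word/document.xml").decode("utf-8")
    return re.findall(pattern, xml)

# Build a minimal stand-in package for the demo (a real .docx has many more parts).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("word/document.xml",
                 "<w:t>Invoice 2021-007 and invoice 2021-012 are overdue.</w:t>")

matches = find_in_docx(buf.getvalue(), r"\d{4}-\d{3}")
```

A replace would work the same way: rewrite the XML with re.sub and write it back into a copy of the package.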
https://social.msdn.microsoft.com/Forums/en-US/74f5de70-496d-4659-be90-3079579fa859/regular-expression-with-replace-condition?forum=worddev
Opened 2 years ago. Last modified 6 months ago.

Since Phobos introduced this very handy alias (and also wstring, dstring), Tango should adopt it too. Just a reminder for the tango team. :P

This really needs to be added. Right now people do it in their own code, but unless the alias is marked private (easy to forget), it will eventually conflict and you have to fully qualify the name. Having to type 'my.longnamedmodule.string' makes it useless. Also, by making it private you essentially have to do it in every single module. Please add these to object.d ASAP.

Neither Sean nor I are comfortable about this macro. It adds a keyword to the language, though not a real keyword, which seems kinda bogus when Walter is trying to keep the keyword list short. We need to think about it some more, so all I can offer for now are my apologies that it may take some time to resolve.

If not for all cases, I think this should at least be defined in a version(PhobosCompatibility) block. Otherwise Tangobos does not really make it possible to compile most Phobos code without modifying the source code. In fact, if Tangobos is to work properly, then the entire contents of Tangobos' std.compat should be included in tango in a version(PhobosCompatibility) block in Tango's object.di or equivalent. For reference, those contents are:

extern (C) int printf(char *, ...);
alias char[] string;
alias wchar[] wstring;
alias dchar[] dstring;
alias Exception Error;
alias bool bit;

Actually, I could live without the bool-bit thing, since that's deprecated. But using 'string' instead of 'char[]' is actually pushed by Walter as being the preferred style for writing code these days. So you can expect that a lot of Phobos code will be using it. Or, rather than putting those aliases directly in object.di, maybe this would be better:

version(PhobosCompatibility)
{
    public import std.compat;
}

Gah, printf!

Another reason for string and friends being problematic: they were added to D 2.0 as the invariant string definitions, and then to D 1.0 for compatibility. But the semantics of a D 1.0 string are very, very different from a D 2.0 string, not really making code portable between the two versions at all. See also #902.

I think code will have to be ported from D1 to D2 anyway at some point in the future. In many cases the D2 meaning is favourable. It is likely that the compiler will give proper errors if the semantics won't fit. Maybe the D2-Tango people have experience with this they want to share? :P

It depends somewhat on what becomes implicitly convertible to what, but at least I have doubts about very much invariant use in Tango.

You have to make a decision on this, people. If it's not going to happen, you need to close this wontfix ASAP so people know that everyone is responsible for working around this incompatibility themselves. If it is going to happen, you need to do it ASAP so people can start using it and stop creating their own workarounds.

Won't be resolved in 99.5.

Replying to lindquist: This really needs to be added. Right now people do it in their own code, but unless the alias is marked private (easy to forget), it will eventually conflict and you have to fully qualify the name.
Another reason for string and friends being problematic; They were added to D 2.0 as the invariant string definitions, and then in D 1.0 for compatibility. But the semantics for a D 1.0 string is very very different from a D 2.0 string, not really making code portable between the two versions at all. See also #902. I think code have to be ported from D1 to D2 anyway at some point in the future. In many cases the D2 meaning is favourable. It is likely that the compiler will give proper errors if the semantics won't fit. Maybe the D2-Tango ppl have experience with this they want to share? :P It depends somewhat on what becomes implicitly convertible to what, but at least I have doubts about very much invariant use in Tango. You have to make a decision on this, people. If it's not going to happen you need to close this wontfix ASAP so people know that everyone is to be responsible for working around this incompatibility for themselves. If it is going to happen you need to do it ASAP so people can start using it and stop creating their own workarounds. Won't be resolved in 99.5 Replying to lindquist: This really needs to be added. Right now people do it in their own code, but unless the alias is marked private (easy to forget), it will eventually conflict and you have to fully qualify the name. This really needs to be added. Right now people do it in their own code, but unless the alias is marked private (easy to forget), it will eventually conflict and you have to fully qualify the name. This might be a compiler bug, but last time i tried (dmd 1.028 and 1.027) private aliases also conflicted. file main.d import test; import test2; void main(string[]) {} file test.d module test; private alias char[] string; file test2.d module test2; private alias char[] string; dmd main.d test.d test2.d output: main.d(4): Error: test.string at test.d(3) conflicts with test2.string at test2.d(3) When libraries want to be D2 compatible they are going to use the string alias. 
If someone should use more than one library which defines an alias for string, they will run into the problem with the string alias. This problem won't exist if Tango includes a string alias, because then there will be only one library with the alias.

#558 marked as a duplicate of this.

Replying to kris: Was it fixed? I don't see any change in the svn online. Is it supposed to be a wontfix instead of fixed?

It's in Sean's recent stuff.
http://www.dsource.org/projects/tango/ticket/548
So if you follow me on Medium, you know I am doing a project where I post to GitHub more often in 2020. To keep track of all of my commits, I thought it would be cool to make a Twitter bot that logs every time I make a commit to my GitHub and then tweets out my progress at the end of the day. It's a simple bot, but I really liked the process of making it and wanted to share how I did that with everyone. In this post, I cover what my process looks like when I approach a task like this; I also cover the tools that I used and why I used them. This is a great opportunity to work with Python, the Twitter & GitHub APIs, as well as Heroku. We will cover how to handle the cloud aspect with Heroku in the second part of this blog series. I am going to take you step by step through how I started this project, with images of where to go to get your API keys and some code written in Python. This is designed to be a very easy and fun project, and I think it is a great one for anyone starting out in programming who wants to expand their skills.

This is always one of the hardest parts of any project: starting the damn thing. I always think it is helpful to think about the end goal and even write it down, then back into the steps that get you there. So for this project, what is our end goal? We want to make a Twitter bot that tweets out whether we have made a commit to our GitHub page. Perfect, now that we have that goal in mind, we can start putting the pieces together! We will need to use Twitter's API to post the tweet, GitHub's API to access our commit history, and then we will need to write the code in Python to drive this process. So far so good. I always find it useful to open up a file and start adding some markdown or comments to give the project some structure. The first file we will make is Twitter_bot.py. There are four main parts to this file: the imports, keys for the APIs, authentication, and code.
Twitter API

Let's go ahead and get our API keys through Twitter's developer platform here before moving on. Fill in the information Twitter requires you to provide to get your keys (general things like what the bot is going to do, whether you plan on gathering data, etc.), then you should come to a page like this after you confirm your email and Twitter account. In the top right of the screen, you should see your profile name. Click the drop-down and navigate to Get Started. From here you can click the blue button on the right that says Create an App to get your keys!! After this process, you will finally have your keys ready for use in your code.

WARNING!! Do not release your keys to anyone who is not supposed to have them, and definitely don't accidentally push them up to GitHub.

GitHub API

Using the GitHub API is much easier than Twitter's. We will only need to have the username and password for our GitHub account on hand, so make sure you sign up for a GitHub account before proceeding.

Imports

With a few quick Google searches, we can find the appropriate API documentation for this project (Twitter: tweepy; GitHub: PyGithub). You can install both of these via the pip install command. We will also need to get the time these events happened at, so we will use datetime (which ships with Python, so there is nothing extra to install). The final part is to import environ. We will go over this import when it comes to the keys. The first part of your file should look something like this:

# Imports
from os import environ
from github import Github
from datetime import datetime
from tweepy import OAuthHandler
import tweepy
import time
To run this locally, feel free to ignore the environ['UNIQUE_KEY_NAME'] for the time being and just add your key as a string. This is fine for testing whether the code works; just make sure you take the keys out before you make your bot public. The reason we use environment variables here is that we will be pushing this up to GitHub and we want to keep these keys private. Setting them as environment variables allows us to access the keys without putting them in the code; this is why we imported environ from os back in the import section. So I strongly advise you to make a separate keys.py file if you are going to make this bot public-facing at any point. The keys.py file should be set up like this:

CONSUMER_KEY = 'Your consumer key from twitter here'
CONSUMER_SECRET = 'your consumer secret key from twitter here'
ACCESS_KEY = 'your access key from twitter here'
ACCESS_SECRET = 'your access secret key from twitter here'
username = 'your git hub user name here'
password = 'your git hub password here'

We are also going to set this bot to tweet every day, so the interval is going to be set to 60 * 60 * 24; we will use this to make our bot sleep for that amount of time, which is equal to one day. We will also declare a variable d to be today's date, using a datetime object. Your Twitter_bot.py file should now contain this section:

# adding our keys
CONSUMER_KEY = environ['CONSUMER_KEY']
CONSUMER_SECRET = environ['CONSUMER_SECRET']
ACCESS_KEY = environ['ACCESS_KEY']
ACCESS_SECRET = environ['ACCESS_SECRET']
username = environ['username']
password = environ['password']

INTERVAL = 60 * 60 * 24
d = datetime.today()
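Before running the bot, it can be handy to verify that all of the expected environment variables are actually set. A small sketch (the helper missing_keys is my own, not part of the original post; it assumes the same variable names used above):

```python
from os import environ

# The six variables the bot expects, exactly as named above.
REQUIRED_KEYS = ["CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_KEY",
                 "ACCESS_SECRET", "username", "password"]

def missing_keys(env=environ):
    """Return the names of any required variables absent from the environment."""
    return [name for name in REQUIRED_KEYS if name not in env]

# Example: warn up front instead of crashing later with a KeyError.
if missing_keys():
    print("Missing environment variables:", ", ".join(missing_keys()))
```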
You can add the following code to Twitter_bot.py:

# twitter and git hub api authentication
auth = OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

# git hub username and password
g = Github(username, password)

Code / Control Flow

In thinking through this program, I just want to check whether I have tweeted, then do some operation, but for it to run indefinitely. So to do this we can set the while loop equal to True. Inside the while loop, we will pull down our list of repos and extract the datetime object associated with each one of them and put those into a list. We can then check whether the current date is in our list of datetime objects and, if that is true, post a tweet, and otherwise tweet out that we didn't post anything. I would also like to keep track of how many days in a row my streak is, so I will add a count variable to the program; because the loop runs once per day, the count will update every time it goes around. Finally, outside of our for loop, we can add time.sleep(INTERVAL) to make the function wait for 24 hours until it runs again. So in code it looks like this:

# while loop that runs indefinitely and checks my dates
count = 0  # declared outside the loop so the streak survives between days
while True:
    d = datetime.today()  # refresh today's date each pass through the loop
    days_updated = []

    # for loop to get all the repo times and append them to a list
    for repo in g.get_user().get_repos():
        days_updated.append(repo.updated_at.date())

    # if/else block to check if today's date is in the list of repo dates
    if d.date() in days_updated:
        api.update_status(f'yes he did, he updated: {repo.name}')
        print('I tweeted that he pushed')
        count += 1
    else:
        api.update_status(f"No he didn't, he broke a {count} day long streak")
        print('I tweeted that he did not push')
        count = 0  # streak broken, start counting again

    # this sleep operation takes the INTERVAL variable and sleeps for 24hrs
    time.sleep(INTERVAL)

AND JUST LIKE THAT YOU HAVE MADE A BOT!!!
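One nice side effect of separating the streak arithmetic from the Twitter and GitHub calls is that it becomes trivially testable. A sketch of that idea (update_streak is a hypothetical helper name, not code from the original post):

```python
def update_streak(streak, pushed_today):
    """Return the new streak length: one more on a push day, zero on a miss."""
    return streak + 1 if pushed_today else 0

# A week of activity: three push days, one miss, then two more push days.
streak = 0
for pushed in [True, True, True, False, True, True]:
    streak = update_streak(streak, pushed)
# The miss reset the streak, and the two following push days rebuilt it.
```

The daily loop would then just call this with the result of the "is today in days_updated" check, and the function itself can be verified without ever hitting an API.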
If you run this program in your terminal with your keys input as strings, you will be able to see the program post to Twitter, then wait for 24 hours and go through the whole process again. So now you can tell all your friends, "Look at me, I can post to Twitter without actually going on Twitter. How cool is that!!" You can navigate over to Twitter and see that your bot has posted. This is a great first step, but we have a few more steps to make this a proper bot.

Dealing With Errors

Within some timeframes, Twitter won't allow you to post the same tweet twice, so if a tweet is duplicated it will throw an error. We can get around this by using a try and except block around our if/else statements. I have made mine a little fancy in the final codebook, but if you just want to circumvent the error messages you can do this:

try:
    if d.date() in days_updated:
        api.update_status(f'yes he did, he updated: {repo.name} he is on a {count} day streak')
        print('I tweeted that he pushed')
    else:
        api.update_status(f'{didnt_push[randrange(3)]} he has broke a {count} day streak')
        print(f'I tweeted that he did not push')
    time.sleep(INTERVAL)
except:
    print('error duplicated tweet')
    pass

(Here didnt_push is a list of message strings defined in the final codebook, and randrange comes from the random module.)

Next Steps

Now some of you might have noticed a bit of a problem with this... You might be saying, "Great, but I don't want to keep my computer running this program all the time," "What if I turn off my computer?" or "What if my wifi connection is interrupted and it can't post to Twitter, then what!!" That is exactly why we are going to push this up to the cloud and let it run on a web server rather than on our local machine!

Pushing This Up to the Cloud

Check out part two, where I walk you through how to get this pushed up to Heroku, a cloud platform service supporting several programming languages including Python, node.js, and PHP.
http://technewsdestination.com/2020/01/16/making-a-twitter-bot-in-python/
Testing Models

In my last few articles, I've been dipping into the waters of "machine learning"—a powerful idea that has been moving steadily into the mainstream of computing, and that has the potential to change lives in numerous ways. The goal of machine learning is to produce a "model"—a piece of software that can make predictions with new data based on what it has learned from old data.

One common type of problem that machine learning can help solve is classification. Given some new data, how can you categorize it? For example, if you're a credit-card company, and you have data about a new purchase, does the purchase appear to be legitimate or fraudulent? The degree to which you can categorize a purchase accurately depends on the quality of your model. And, the quality of your model will generally depend on not only the algorithm you choose, but also the quantity and quality of data you use to "train" that model.

Implied in the above statement is that given the same input data, different algorithms can produce different results. For this reason, it's not enough to choose a machine-learning algorithm. You also must test the resulting model and compare its quality against other models as well. So in this article, I explore the notion of testing models. I show how Python's scikit-learn package, which you can use to build and train models, also provides the ability to test them. I also describe how scikit-learn provides tools to compare model effectiveness.

Testing Models

What does it even mean to "test" a model? After all, if you have built a model based on available data, doesn't it make sense that the model will work with future data? Perhaps, but you need to check, just to be sure. Perhaps the algorithm isn't quite appropriate for the type of data you're examining, or perhaps there wasn't enough data to train the model well. Or, perhaps the data was flawed and, thus, didn't train the model effectively.
But, one of the biggest problems with modeling is that of "overfitting". Overfitting means that the model does a great job of describing the training data, but that it is tied to the training data so closely and specifically, it cannot be generalized further. For example, let's assume that a credit-card company wants to model fraud. You know that in a large number of cases, people use credit cards to buy expensive electronics. An overfit model wouldn't just give extra weight to someone buying expensive electronics in its determination of fraud; it might look at the exact price, location and type of electronics being bought. In other words, the model will precisely describe what has happened in the past, limiting its ability to generalize and predict the future. Imagine if you could read letters only in a font you had previously learned, and you can further understand the limitations of overfitting.

How do you avoid overfit models? You check them with a variety of input data. If the model performs well with a number of different inputs, it should work well with a number of outputs. In my last article, I continued to look at data from a semi-humorous study in which evaluations were made of burritos at a variety of restaurants in Southern California. Examining this data allowed one to identify which elements of a burrito were important (or not) in the overall burrito's quality assessment.
Here, in summary, are the steps I took inside a Jupyter notebook window in order to create and assess the data:

%pylab inline
import pandas as pd                   # load pandas with an alias
from pandas import Series, DataFrame  # load useful Pandas classes
df = pd.read_csv('burrito.csv')       # read into a data frame

burrito_data = df[range(11,24)]
burrito_data.drop(['Circum', 'Volume', 'Length'], axis=1, inplace=True)
burrito_data.dropna(inplace=True, axis=0)

y = burrito_data['overall']
X = burrito_data.drop(['overall'], axis=1)

from sklearn.neighbors import KNeighborsRegressor  # import classifier
KNR = KNeighborsRegressor()                        # create a model
KNR.fit(X, y)                                      # train the model

So, is the model good or not? You can know only if you try to make some predictions for which you know the answers, and see whether the model predicts things correctly. Where can you find data about which you already know the answers? In the input data, of course! You can ask the model (KNR) to make predictions about X and compare those with y. If the model were performing categorization, you even could examine it by hand to get a basic assessment. But using regression, or even a large-scale categorization model, you're going to need a more serious set of metrics.

Fortunately, scikit-learn comes with a number of metrics you can use. If you say:

from sklearn import metrics

then you have access to methods that can be used to compare your predicted values (that is, from the original "y" vector) to the values that were computed by the model. You can apply several scores to the model; one of them would be the "explained variance score". You can get that as follows:

y_test = KNR.predict(X)

from sklearn import metrics
metrics.explained_variance_score(y_test, y)

Notice what's happening here. You're reusing the input matrix X, asking the model to predict its outputs. But, you already know those outputs; those are in y. So now you see how closely the model comes to predicting outputs that already were fed into it.
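Evaluating a model on the very data it was trained on can be deceptively optimistic. In the extreme case of a 1-nearest-neighbor model, each training point's nearest neighbor is itself, so training-set "predictions" are perfect by construction. A minimal pure-Python sketch makes that concrete (my own illustration with made-up numbers, not scikit-learn's implementation):

```python
def knn1_predict(train_X, train_y, x):
    """Predict with the single nearest training point (1-NN regression)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: sq_dist(train_X[i], x))
    return train_y[best]

X_train = [(1.0, 2.0), (2.0, 0.5), (3.0, 3.0)]
y_train = [10.0, 20.0, 30.0]

# Each training point is its own nearest neighbor (distance zero), so the
# "predictions" on the training set reproduce y_train exactly -- zero error,
# which says nothing about how the model handles unseen points.
train_preds = [knn1_predict(X_train, y_train, x) for x in X_train]
```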
On my system, I get 0.64538408898281541. Ideally, with a perfect model, you would get a 1, which means that the model is okay, but not amazing. However, at least you now have a way of evaluating the model and comparing it against other models that might be better or worse. You even can run KNR for different numbers of neighbors and see how well (or poorly) each model does:

for k in range(1,10):
    print(k)
    KNR = KNeighborsRegressor(n_neighbors=k)
    KNR.fit(X, y)
    y_test = KNR.predict(X)
    print "\t", metrics.mean_squared_error(y_test, y)
    print "\t", metrics.explained_variance_score(y_test, y)

The good news is that you have now looked at how the KNR model changes when configured with different values of n_neighbors. Moreover, you see that when n_neighbors = 1, you get no error and 100% explained variance. The model is a success!

Split Testing

But wait. The above test is a bit silly. If you test the model using data that was part of the training, you would be surprised if the model didn't get it at least partly right. The real test of a model is how well it works when it encounters new data. It's a bit of a dilemma. You want to test the model with real-world data, but if you do that, you don't necessarily know what answer should appear. And, that means you can't really test it after all.

The modeling world has a simple solution to this problem. Use only a subset of the training data to train the model, and use the rest for testing it. scikit-learn supports this "train-test-split" approach. You invoke the train_test_split function on your original X and y values, getting two X values (for training and testing) and two y values (for training and testing) back.
As you might expect, you then can train the model with the X_train and y_train values and test it with X_test and y_test:

from sklearn.cross_validation import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
KNR = KNeighborsRegressor(n_neighbors=1)
KNR.fit(X_train, y_train)
y_pred = KNR.predict(X_test)
print "\t", metrics.mean_squared_error(y_test, y_pred)
print "\t", metrics.explained_variance_score(y_test, y_pred)

Suddenly, this amazing model no longer seems so amazing. By checking it against values it hadn't seen before, it gives a mean squared error of 0.4 and an explained variance of 0.2. This doesn't mean the model is terrible, but it does mean you might want to check it a bit further. Perhaps you should (again) check additional values of n_neighbors. Or perhaps you should try something other than KNeighborsRegressor. Again, though, the key takeaway is that you are now using a real, reasonable way to evaluate the model, rather than just eyeballing the numbers and assuming (hoping) that all is well.

Multiple Splits

The particular split-test that you do might somehow tickle the model in such a way that it gives particularly good (or bad) results. What you really need to do is try different splits, so you can be sure the model performs well no matter which rows end up in the training set. Then you can average the results over a bunch of different splits. In the world of scikit-learn, this is done using KFold. You indicate how many rows are in your data set (n) and the number of "folds" (that is, split tests) you'll want to run:

from sklearn.cross_validation import KFold, cross_val_score

kfold = KFold(n=len(X), n_folds=10)

With the kfold object in place, you then can pass it to the cross_val_score method in the cross_validation module.
You pass it the model (KNR, in this case), X, y and the kfold object you created:

cv_results = cross_val_score(KNR, X, y, cv=kfold)

The cv_results object you get back describes the cross validation and typically is analyzed by looking at its mean (that is, the average score across those runs) and its standard deviation (that is, how much variance there was across runs):

print cv_results.mean()
print cv_results.std()

In this particular case, the results aren't that promising:

0.310254620082
0.278746712239

In other words, although n_neighbors=1 seemed to be so terrific when first analyzed, using all of the training data for testing, that no longer appears to be the case. Even if you stick with KNR as your regressor, you still can incorporate KFold, checking to see whether a different value of n_neighbors might be better than the value of 1 used here:

from sklearn.cross_validation import KFold, cross_val_score

for k in range(1, 10):
    print(k)
    KNR = KNeighborsRegressor(n_neighbors=k)
    kfold = KFold(n=len(X), n_folds=10)
    cv_results = cross_val_score(KNR, X, y, cv=kfold)
    print "\t", cv_results.mean()
    print "\t", cv_results.std()

Sure enough, when k=9, you get results that are significantly better than when k=1:

0.594573190846
0.161443573949

That said, I do believe it's likely you can create a better model. Perhaps a different regression estimator would improve things. Perhaps using categorization rather than regression, in which you round the values in y to the nearest integer and treat the scores as five distinct categories, would work. Perhaps, as mentioned before, I should have paid more attention to which columns were most (and least) important and done better feature selection. Regardless, with a proper test system in place, you're now able to start tackling these questions intelligently, with a way to evaluate your progress.

Summary

It's not enough to create a machine-learning model; testing it is also important.
As you saw here, scikit-learn makes it relatively easy to create, split-test and then evaluate one model or even a whole bunch of them. Supervised learning isn't the only type of machine learning out there. In many cases, you can ask the computer to divide your data into multiple groups based on heuristics it develops, rather than categories that you have trained. In my next article, I plan to look at how (and when) to build "unsupervised learning" models.
https://www.linuxjournal.com/content/testing-models
Issue Links

- relates to MAPREDUCE-2826: Change the job state observer classes to interfaces - Resolved
- relates to HADOOP-1230: Replace parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes - Closed

Activity

A patch that moves the TaskScheduler class to a new org.apache.hadoop.mapred.scheduler package. I've also made JobInProgressListener public, as well as a number of related classes: JobInProgress, Task, TaskTrackerStatus. For these last three I've changed the visibility of any public methods that are only used within the package to package-private, so we start out with as little exposed as possible. I've also made TaskTrackerManager public and turned it into an abstract class. The attached script should be run before applying the patch.

There's public and there's public. I'm not sure we're yet ready for this to appear in the user javadocs, but making it easier for developers to experiment is fine. The published javadoc is the API that we're promising to keep back-compatible, and we should expand it only cautiously. As we restructure the code, we should create a new classification we didn't have before: public but not in javadoc. This includes most of the hdfs and mapred daemon implementations. And the scheduler API belongs there for now.

+1 on separating developer and user APIs. There's some discussion on this very issue in the context of interfaces/abstract classes (yes, that again) here and here. Also, in "Item 18: Prefer interfaces to abstract classes" of Joshua Bloch's Effective Java, he extols the virtues of interfaces over abstract classes. However, at the end he says:

"Once an interface is released and widely implemented, it is almost impossible to change. You really must get it right the first time. If an interface contains a minor flaw, it will irritate you and its users forever. If an interface is severely deficient, it can doom an API.
The best thing to do when releasing a new interface is to have as many programmers as possible implement the interface in as many ways as possible before the interface is frozen. This will allow you to discover flaws while you can still correct them."

So perhaps the developer API (which is "less public") is the place where it is OK to introduce interfaces - if we feel there is a good case for them - so we can get feedback on them by having different groups produce different implementations before they are frozen by appearing in user javadocs. (According to Bloch, three implementations are enough.)

With inner classes, a lot of this is moot. What is important for an abstract class is that:

- it doesn't do anything in its constructor that calls non-final methods
- it doesn't offer protected fields (better: private fields and protected methods), as you can't change much from then on
- you don't rely on package-scoped fields/methods/classes

The biggest problem I have with either API choice is exception handling - you are always stuck with whatever exceptions the API designer thought were likely. Currently Hadoop has a lot of package-private code, and that is bad because it forces people subclassing to use that package name. This is bad for the Hadoop team, as their packages appear in the stack traces - they get to field the support calls. One thing we could consider is a Java 5 tag @Unstable or @Internal, which would be a bit like @deprecated, and warn people that the things could change without warning - and hence that there are no stability guarantees at all.

> he extols the virtues of interfaces over abstract classes

What are they? Can you summarize? Are they significant?

> So perhaps the developer API (which is "less public") is the place where it is OK to introduce interfaces [...]

Perhaps, if there are significant advantages to interfaces.
The ability to implement multiple interfaces in a single class can save a few lines of code, but doesn't really seem significant to me when compared to the huge cost of freezing the API. I like Steve's guidelines (above) for abstract classes. Perhaps we should document these (and more) in the wiki or someplace, and try to validate our abstract APIs against them. There are added potential pitfalls when using abstract classes instead of interfaces, and we should work hard to avoid them.

You don't need to freeze interfaces. You only need to freeze interfaces that you promise to keep stable. There's one other risk with abstract classes - that the base class adds a new public or protected method that accidentally clashes with one a subclass has implemented, in which case the subclass's version gets used. There are some checks in C# to catch this, but not Java - not even with the @Override attribute, because that attribute is optional. The best way to avoid problems is to have people building and testing their code against your SVN_HEAD version, as that finds problems the minute they are checked in, rather than when a release occurs. This is something we can do in open-source development, across teams and organisations.

> You don't need to freeze interfaces. You only need to freeze interfaces that you promise to keep stable

Right. For example, we use interfaces for RPC protocols, but those are not for public consumption and are only used internally. We do not intend to support clients or servers that are not in Hadoop's source tree. So do we intend to permit implementations of the scheduler that are not in Hadoop's source tree? If so, then we should not use interfaces. If not, then interfaces are fine, since we can update all implementations if/when we alter the interface.

> Can you summarize?

According to Joshua Bloch, the virtues are all to do with the fact that interfaces don't enforce a hierarchy on your design in the way that abstract classes do.
This chimes with another principle in the book: "Favor composition over inheritance". So, for example, you can retrofit existing classes to implement interfaces (you often can't do this with abstract classes - it "causes great collateral damage to the type hierarchy"); also, you can define mixins and non-hierarchical type frameworks with interfaces. However, he does say that "It is far easier to evolve an abstract class than an interface" (which is where this whole discussion started).

> Perhaps we should document these (and more) in the wiki or someplace, and try to validate our abstract APIs against them. There are added potential pitfalls when using abstract classes instead of interfaces, and we should work hard to avoid them.

+1

> So do we intend to permit implementations of the scheduler that are not in Hadoop's source tree?

I think it is OK to say that we should only support those in the tree for the time being. (That doesn't stop folks implementing their own schedulers, of course; it just means they may have to change them from release to release if the interfaces change.) However, I do think we should make it possible for scheduler implementations to live in their own packages. (E.g., I think we should do this before committing HADOOP-3746.) To do that, I suggest we do the following:

- Repackage the current patch so the scheduler code goes in org.apache.hadoop.mapreduce.server.scheduler (with abstract classes changed to interfaces, as appropriate)
- Remove javadoc for the server packages (or generate it separately, labelled as developer javadoc - see HADOOP-3742)
- Restructure the MapReduce code into org.apache.hadoop.mapreduce for the user API (HADOOP-1230) and the server API (HADOOP-2884 - is anyone working on this for reorganising MapReduce?)

In open source there is no way to stop anyone getting at your internals. What you can do is flag which interfaces or classes have no stability guarantees.
People are free to work with them, subclass the classes, and implement the interfaces, but it is not the Hadoop team's concern if a new release stops their thing from recompiling. Interfaces can be easier to mock and easier to patch in to other object hierarchies - and there is no requirement to never change an interface. It's something that Sun and Joshua do for the Sun libraries, but even they have internal stuff they don't like you touching and which moves about from time to time. This is why the idea of having an @Internal tag appeals to me - something you can put on an interface to say "this is internal, track SVN HEAD if you don't want to be surprised".

This is what an @Internal attribute would look like:

@Retention(RetentionPolicy.SOURCE)
public @interface Internal {
    String value() default "";
}

It could then be used to annotate classes, methods, interfaces, etc.:

@Internal
public interface HealthChecks

Even member variables you want better access to:

public class NameNode {
    @Internal("namesystem should be accessed through getNamesystem()")
    FSNamesystem namesystem;
    ....
}

The attribute would not do anything, except to say "be careful when you use this, we may change it".

The line we've drawn to date has been that, if it's in the released javadoc, back-compatibility requirements apply. (We promise to deprecate features for one major release before removing them. This is in fact the defining characteristic of a major release.) Currently, the plan is not to release javadoc for the mapreduce and hdfs daemon code packages. The scheduler is part of the daemon code, so it's already out of scope for back-compatibility. So, for now, this is not an issue. It will only become an issue if we intend to start including javadoc for the scheduler in releases. Longer-term, we might wish to have a finer-grained line. I still believe it ought to be easy to determine from the javadoc whether something is a supported API.
So, if we were to, e.g., start releasing scheduler javadoc, we'd need to make sure that unstable methods were clearly marked. In Lucene, we adopted the convention of adding an "Expert:" to the beginning of javadoc comments for APIs that most users should ignore. Similarly, we might add something like "Unstable:" to APIs that we expect will change. Would an @Internal tag be able to affect the javadoc?

I don't know about the impact of Java annotations on javadocs. You can put the attribute into the Java .class files themselves, then run code over it, but javadoc itself doesn't seem to. There are custom javadoc tags, but they are separate things.

If all that is needed is a javadoc tag, then a custom tag like @hadoop.internal could be added.

We don't need this change for 0.19, since the new schedulers have gone into contrib.

Fixed forever ago. See discussions on interfaces vs. abstract classes in HADOOP-3801 and HADOOP-1230.
https://issues.apache.org/jira/browse/MAPREDUCE-342?attachmentOrder=asc
- The comment is incorrect; it is listed as: '-- Save this in a source file, e.g., Interact.hs'. This should be InteractWith.hs, since the actual file is InteractWith.hs.
- True is misspelled as "TRue": "1.3.3. Boolean Logic, Operators, and Value Comparisons. The values of Boolean logic in Haskell are TRue and False. The capitalization of these names is important." Ironic that "capitalization of these names is important" follows a typo.
- There is a typo in the phrase: "Since hashWord32 is the faster of the two hashing functions, we call it if our data is a multiple of 4 bytes in size; otherwise, we call hashLittle2." 'hashWord32' should really be 'hashWord2'.
- "Out timed action ensures that a value is evaluated ...." Should be "Our timed action ...".
- "That could got also be written as ...." I think the "got" should be removed.
- The paragraph starts "Fromour early examples". There needs to be a space between "From" and "our".
- "Let's say that we would like guarantee to ourselves that a piece of code ..." Probably should be "we would like a guarantee" or "we would like to guarantee".
- The sentence "Let's try our the methods of the Monoid typeclass" is wrong. I believe the author meant "Let's try out the methods".
- Instead of "It introduces an extensible extension system.", it should read "It introduces an extensible exception system.".
- "value" should be "values".
- The class name BasicEq is separated as "Basi cEq".
- "It marked up in bold yellow". Need "is" after "It"; and also, there is no yellow in the printed book, so putting this after "see Figure 11-2" is a little confusing!
- "We can disregard for now" -- need "this" or "it" after "disregard".
- "the runParser accessor" should be "the runParse accessor".
- "Strig" instead of "String" in the following sentence: "...followed by any capitalization of the strig .png."
- In "we use the(**)" there is a space missing; it should read "we use the (**)".
- "It allows us to define your own numeric types" -- I don't want _you_ defining _my_ types, thanks very much! (So "us" -> "you"?). Note from the Author or Editor: "us" should be "you".
- "Then apply the odd function" should read "Then apply the isOdd function". isOdd should remain in code font.
- Instead of "hit C to halt", it should read "hit Ctrl-C to halt".
- The text reads, "3) If this succeeds, we can install the package....". But the command to install the package isn't shown. For steps 1) and 2) a command is given. Did the command for 3) get cut off the bottom of the page? Note from the Author or Editor: After "we can install the package", add "by running runghc Setup install".
- The sentence reads: "As such a programmer, we're...." That's a type mismatch: "a programmer" <--> "we're". The sentence should read: "As such programmers, we're ..."
- There's a duplicate "with" in the phrase: "At best this will lead to C compiler warnings, and more likely, it will end with with a runtime crash." Note from the Author or Editor: Change "it will end with with a runtime crash" to "it will end with a runtime crash".
- The paragraph starts with "When we introduced the type Bookstore...". This should read "When we introduced the type BookInfo...". Later in the sentence, "...the type constructor Bookstore..." should read "...the type constructor BookInfo...". Bookstore is the name of the Haskell source code file, not of the type being referred to.
- "Iin fact" instead of "In fact".
- All occurrences of "truncate" in the table are marked with an asterisk to indicate that there's a corresponding footnote, but the footnote is labeled with "a" instead. The title of the table does refer to the "a" footnote. The HTML version of the book (correctly?) uses the asterisk to label the footnote and does away with the "a" reference in the table title.
- "... the definition of foldl' just shows illustrates how ..."
should be either "just shows" or "illustrates" but not both. Note from the Author or Editor: Change "fold' just shows illustrates how" to "fold' illustrates how".
- The book reads: "We'll use a Haskell see HTTP library". The word "see" should not be present, and was somehow introduced due to a link to the library being involved.
- Instead of Interact.hs, this should be InteractWith.hs, or the previous instances should also be Interact.hs.
- "import Parse -- from chapter 11" should be "import Parse -- from chapter 10".
- In "...the left of the == is equivalent to...", it should be ===, not ==. Note from the Author or Editor: NOTE: PAGE NUMBER IS 356, NOT 396.
- Should read "Book" instead of "BookStore". Note from the Author or Editor: Change "fields of a BookStore" to "fields of a BookInfo".
- The book reads: "to concepts that might be more familiar easier to understand". Missing: an "and/or" (better to use just "or") between "familiar" and "easier". Note from the Author or Editor: Add "or" between "familiar" and "easier".
- The two last lines of this code block should be missing. They serve no purpose. We already know that "it" stays the same if an error occurs. Note from the Author or Editor: Remove last two lines of second code block.
- There should be a space between "In Haskell," and "function".
- In the ePub version, the first code sample is formatted without line breaks. Is:
  ghci> :set +tghci> 'c' 'c' it :: Char ghci> "foo" "foo" it :: [Char]
  Should be:
  ghci> :set +t
  ghci> 'c'
  'c'
  it :: Char
  ghci> "foo"
  "foo"
  it :: [Char]
- The online version of the book has comments by readers stating that the operator ($) has been used, but not explained. I look in the index for "$" and don't find it, and suggest that newbies might not know to look for "($)", which gives you a reference to page 248 -- a reference many pages after where $ is used.
Note from the Author or Editor: Please add to page 144 after its use of $, as well as to page 165 after its use of $, the following note box, which should be reproduced identically on both pages: "The $ operator is a bit of syntactic sugar that is equivalent to putting everything after it inside a pair of parenthesis." At least one of these instances should be added to the index.
- Wrong word: "whether" should be "when" in "followed by an expression to evaluate whether that pattern matches." Note from the Author or Editor: Change "evaluate whether" to "evaluate if".
- The text states "... the result of drop 1 . words ...". It should read "... the result of tail . words ...".
- 3rd paragraph: "... a new value of type BookStore" should read "BookInfo" instead of "BookStore". 5th paragraph: two more occurrences of "BookStore" instead of "BookInfo" (one was already submitted by Terry Michaels).
- When you use regex-base-0.93.1 (Debian sid package: libghc6-regex-posix-dev (0.93.1-1)), '"I, B. Ionsonii, uurit a lift'd batch" =~ "(uu|ii)" :: [String]' causes an error because of an incompatibility of versions, and '"I, B. Ionsonii, uurit a lift'd batch" =~ "(uu|ii)" :: [[String]]' returns '[["ii","ii"],["uu","uu"]]' when you use regex-base-0.93.1. Note from the Author or Editor: Can we add a footnote or note box after the example immediately before "Watch out for String results"? It should read: "Some versions of regular expression support in Haskell do not support [String] and [[String]] return values from pattern matching."
- At the end of the sentence "Many systems-oriented...": a parenthesis is closed, but never opened. Note from the Author or Editor: Delete parenthesis after "fnmatch".
- Your reference to the 'fourth comma-separated column' might be OK in some programming environments, but it is really the FIFTH comma-separated column, although it is accessed by (!!4). Note from the Author or Editor: Change "fourth comma-separated" to "fifth comma-separated".
- Incorrect comment: "import Data.Char (digitToInt) -- we'll need ord shortly" should read: "import Data.Char (digitToInt) -- we'll need digitToInt shortly".
- Drop the plural "s" from "begins" in "Count the number of words in a string that begins with a capital letter[.]"
- "-- file: ch04/SplitLines.hs" should be "-- file: ch04/FixLines.hs". Note from the Author or Editor: "let's call the new file FixLines.hs." should be: "let's call the new file SplitLines.hs.", since the file in the examples is called splitlines.
- Exercise 2 is only part of Exercise 1. Exercise 6 is only part of Exercise 5. Note from the Author or Editor: Make what is labeled as exercise 2 be additional paragraphs for exercise 1. Ditto for exercise 6 and 5. Renumber exercises as appropriate afterwards.
- What is numbered as Exercise 9 is only part of Exercise 8.
- The line as printed reads "asInt_either :: String -> Ei". It looks like the line is truncated, and should read "asInt_either :: String -> Either ErrorMessage Int", judging by the type def above and the output shown below.
- Starting the paragraph, there is the reference: 'In the latter case, the list...'. But the comment about the endpoint being missed does not apply to the latter, but to the second case: 'In the second case, the list...'.
- There is a capital letter 'T' instead of the word String in the following sentence: "In this example, the Int represents a book's identifier (e.g. in a stock database), T represents its title, and [String] represents the names of its authors."
- The text states: "When we introduced the type BookStore"; "BookStore" should be "BookInfo". Should read: "When we introduced the type BookInfo".
- There's an extra 'S' following the quotes in the following: 'The empty string is written ""S, and is a synonym for []'.
- There's a space missing between the word "prelude" and "is" in this sentence: "The Prelude moduleis sometimes..."
- In "[-2^29..2^29-1]", the exponents are not displayed as a superscript.
While the text as written is valid Haskell code, it also is not written in the font used for code. One or the other should be done. Note from the Author or Editor: Should be in code font.
- The name "Lauren?iu Nicola" is not formatted correctly in the book. Where I've written a '?', there is the sort of empty box one typically sees when a font is missing a particular Unicode character. Checking the online book, the correct version of the name appears to be "Laurențiu". One fix would be to transliterate it to "Laurentiu".
- Exercise 7 should consist of two paragraphs. However, the second paragraph is incorrectly labeled as exercise 8.
https://www.oreilly.com/catalog/errata.csp?isbn=9780596514983&order=date
Forum: Object Relational Mapping - Replacing resultset values

Eric Bresie (Greenhorn, Posts: 24), posted 3 years ago:

I have an event entity with a composite event id. I have a basic named query like the following, which works as expected:

@NamedQuery(name = "findEvents",
    query = "select evt from Event evt where (:pos is null or evt.eventId.pos = :pos) and (:sn is null or evt.eventId.sn = :sn) and ((:date is null) or (evt.date >= :date))")

public class Event {
    @Id
    private EventId eventId;
    Double rating;
    Date date;
    ...
}

@Embeddable
public class EventId {
    private String uid;
    private String pos;
    private String sn;
}

The (:param is null or evt.param = :param) pattern in the where clause "short circuits", which allows me to use one query for multiple cases (with different parameters set to null).
I have facade code like:

@SuppressWarnings("unchecked")
public List<Event> getEventResultList(String pos, String sn, Date fromDate, Date toDate, String variant) {
    List<Event> results = null;
    Query q = null;
    try {
        q = em.createNamedQuery("findEvents");
        q.setParameter("pos", pos);
        q.setParameter("sn", sn);
        q.setParameter("date", fromDate);
        results = (List<Event>) q.getResultList();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return results;
}

Which would return results something like:

// uid, pos, sn, date, rating
1, 1, 1, "01-03-2010", 1
2, 1, 1, "01-03-2010", 2
3, 1, 1, "01-04-2010", 3
4, 1, 1, "01-05-2010", 4
5, 1, 1, "01-06-2010", 3

Now I would like a similar query that returns a List<Event> with aggregate sum values instead of the individual rating, to get something like:

// pos, sn, rating
1, 1, 3
1, 2, 10

I think I may need to use something like:

select evt.sn, evt.pos, sum(evt.rating) as rating
from Event evt
where evt.sn in (1, 2)
  and date >= DATE '01-01-2010' and date <= DATE '02-01-2010'
group by sn, pos;

or

select NEW Event(evt.eventId, evt.uid, evt.sn, evt.pos, sum(evt.rating) as rating)
from Event evt
where evt.sn in (1, 2)
  and date >= DATE '01-01-2010' and date <= DATE '02-01-2010'
group by sn, pos;

Some of my confusion on this may be due to an entity design issue, but I'm not sure. Is this a good way to do this, or is there a better way? Should I have a new class (i.e. for use in a List<ResultClass>) for the results? Or use an Object[] type for the results? Or use an Object[] and then populate an individual Event?

Hope this makes sense.

Eric
https://coderanch.com/t/606663/ORM/databases/Replacing-resultset-values
My graphics module

First of all, this module is just starting out and will be better in the future. You MUST name this module gec. After you have copied the code in, and named it gec, you can do import gec in any script. My code:

import gec ; reload(gec)
import console

gac = raw_input(' ')

def graphics():
    gbox = '''
    |-------------------|
    |                   |
    |                   |
    |                   |
    |___________________|
    '''
    if gac == 'graphics.gbox()':
        console.clear()
        print(gbox)
    else:
        print('The Text You Entered Is Not Compatible With This Module')

graphics()

My current library of commands is graphics.gbox(), which will print a box.

If you want to publish your graphics library to the world and you want others to help you to improve it, you could create an account on GitHub and then create a GitHub repo. You might want to come up with a name that is more unique than graphics.

Alrighty
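One way to grow the command library without adding an if/else branch per drawing is a dictionary dispatch table. A rough sketch of the idea; the names here (COMMANDS, draw) are illustrative and not part of the gec module:

```python
# Map each command string to the text it should draw.
GBOX = '''
|-------------------|
|                   |
|                   |
|                   |
|___________________|
'''

COMMANDS = {
    'graphics.gbox()': GBOX,
}

def draw(command):
    # Look the command up; unknown commands get the original error message.
    if command in COMMANDS:
        return COMMANDS[command]
    return 'The Text You Entered Is Not Compatible With This Module'

print(draw('graphics.gbox()'))
```

New drawings then become one-line additions to COMMANDS instead of new branches in the function.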
https://forum.omz-software.com/topic/1498/my-graphics-module
Suggestions for family history site with searchable image collection?

I want to transfer an archive of family photographs (going back several generations) to the web. It's for a gift - and I have till Christmas to get it done :-) The key feature would be to enter a name, and have the site return (in a gallery) all pictures showing that person. I'm open to any suggestion as to how I can do this.

- Would it be possible using tags in Flickr, Picasa or one of the other image sites?
- Could I build a site in Wordpress, Joomla or some other CMS?
- Are there genealogy sites that provide this sort of thing pre-built?
- Are there image gallery apps (Bananr, Jalbum...) capable of this sort of thing?
- Could I re-purpose other software, like say a product catalog, to manage this?

Other desirable but not essential features:

- A date tag that could be used to sort the images
- Space to enter a brief bio of each person
- Links to show relationships (parents, siblings, marriages)
- Ability to generate a time line

At the moment I'm just brainstorming - so any suggestions, pointers or examples would be much appreciated.

I don't know whether you want to use Flickr or Picasa, but I do think it's a good idea to use a site or product dedicated to photos rather than a generic blogging engine or CMS. It's much more work and more awkward to have to think in terms of blog posts or articles when building a gallery.

Thanks, I'm leaning towards an account at one of the specialised genealogy sites - even if we don't use all their features. Now I just have to find one....
http://windowssecrets.com/forums/showthread.php/137633-Suggestions-for-family-history-site-with-searchable-image-collection?p=800071
How to: Specify a Help File for Your Component

In most situations, you should let the developers who are using your component enable the run-time Help. In some cases, however, it will make sense to allow your component to display HTML Help when called. HTML Help can be provided for components through the System.Windows.Forms.Help object. This object is a static class that encapsulates the HTML Help 1.x engine. This class cannot be instantiated, and its methods must be called directly.

To display Help, invoke the Help.ShowHelp method. This overloaded method requires at least two arguments: the control that acts as the parent control of the Help dialog box, and the URL of the Help file. The Help file can be a compiled HTML Help 1.x file (.chm file) or an HTML file in the HTML Help format.

If you are going to incorporate support for a Help file directly in your component, you have two options for when and how to show it:

- The preferred option is to implement a Help method that can be called by the client application. The client application can pass parameters to the Help method to ensure that the correct topics are displayed, and the developer coding with your component has the option of bypassing Help altogether.
- The other option is to invoke the ShowHelp method in response to conditions as they occur in code. This approach provides you the most control over what Help is displayed when, but it severely limits future developers in the use of your component.

To specify and display a Help file for your component:

1. Create and compile your .chm Help file.
2. If you do not already have a reference to the System.Windows.Forms namespace in your component, add one.
3. Create a public method to show Help. This method should provide an easy way for developers to specify what Help they need to display.
public void MyHelp(System.Windows.Forms.Control parent, myHelpEnum topic) { // The file to display is chosen by the value of the topic. switch (topic) { case myHelpEnum.enumWidgets: System.Windows.Forms.Help.ShowHelp(parent, " C:\\help\\widgets.chm "); break; case myHelpEnum.enumMechanism: // Insert code to implement additional functionality. break; } }
https://msdn.microsoft.com/en-us/library/343eh7eh.aspx
I am working with a JSON file and am using Python. I am trying to print an object that is nested in an array. I would like to print select objects (e.g. "name", "thomas_id") from the following array (is it considered a 'list' of 'objects' in an array? would the array be called the "cosponsors" array?):

"cosponsors": [
  {
    "district": null,
    "name": "Akaka, Daniel K.",
    "sponsored_at": "2011-01-25",
    "state": "HI",
    "thomas_id": "00007",
    "title": "Sen",
    "withdrawn_at": null
  },
  ...
  {
    "district": null,
    "name": "Lautenberg, Frank R.",
    "sponsored_at": "2011-01-25",
    "state": "NJ",
    "thomas_id": "01381",
    "title": "Sen",
    "withdrawn_at": null
  }
]

This is what I tried:

print(data['cosponsors']['0']['thomas_id'])

and here is the rest of my script:

import json

data = json.load(open('s2_data.json', 'r'))
print(data["official_title"], data["number"], data["introduced_at"],
      data["bill_id"], data['subjects_top_term'], data['subjects'],
      data['summary']['text'], data['sponsor']['thomas_id'],
      data['sponsor']['state'], data['sponsor']['name'],
      data['sponsor']['type'])

You are using a string to index the list; '0' is a string, not an integer. Remove the quotes:

print(data['cosponsors'][0]['thomas_id'])

When in doubt, check the partial result; see what print(type(data['cosponsors'])) produces. If that produces <type 'list'>, you know you need to use indexing with integers; if you get <type 'dict'>, use keys (a list of which can be obtained by calling print(list(...)) on the dictionary), etc.

Usually, lists contain a variable number of objects; there could be just one, zero, or a whole load more. You can loop over those objects:

for cosponsor in data['cosponsors']:
    print(cosponsor['thomas_id'])

The loop sets cosponsor to each of the objects in the data['cosponsors'] list, one by one.
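The same loop pattern extends to pulling several fields per cosponsor. A minimal sketch, assuming the structure shown in the question; the inline JSON below is a trimmed stand-in for the real s2_data.json file:

```python
import json

# Inline sample mirroring the question's structure; in practice you would
# load the real file with: data = json.load(open('s2_data.json'))
data = json.loads("""
{
  "cosponsors": [
    {"name": "Akaka, Daniel K.", "state": "HI", "thomas_id": "00007"},
    {"name": "Lautenberg, Frank R.", "state": "NJ", "thomas_id": "01381"}
  ]
}
""")

# Collect a (thomas_id, name) pair from every object in the list
pairs = [(c['thomas_id'], c['name']) for c in data['cosponsors']]
print(pairs)
```

Because data['cosponsors'] is a plain list of dicts after json.load, any list tool works on it: len(), slicing, comprehensions, and so on.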
https://codedump.io/share/hxhL6EjiIb94/1/print-json-object-nested-in-an-array-using-python