Here's a C program to check whether a given character is a vowel or not, with an example and explanation. The program uses an if-else condition to determine whether the given character is a vowel.
#include <stdio.h>
#include <conio.h>

void main()
{
    char c;

    clrscr();
    printf("Enter the character : ");
    scanf("%c", &c);
    if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u' ||
        c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U')
        printf("\n%c is a vowel", c);
    else
        printf("\n%c is not a vowel", c);
    getch();
}
Output of the above program
Enter the character : a
a is a vowel
Enter the character : z
z is not a vowel
Explanation of the above program
This program first asks the user to enter a character and stores its value in the character variable c. Using an if-else condition, the program then checks whether the entered character is among {a, e, i, o, u, A, E, I, O, U}. If it is, the program prints that the entered character is a vowel; otherwise, it prints that the entered character is not a vowel.
Warning, this article is technical… If you don’t care about python development, skip it 🙂
You want to create PDF reports, send letters, or perform any other action needing PDF documents?
If you are a Python expert, you'll surely have heard about ReportLab. Yeah, I know: to use it you have to write code, and whenever your template changes you have to redo everything.
A solution to that is to use RML, an XML language created by the company behind ReportLab. RML lets you use stylesheets and describe the PDF in an XML document (just like HTML).
There is a problem with RML: the reference handler (ReportLab RML) isn't free (neither as in beer nor as in speech)… But the fabulous folks at Z3C wrote an open-source handler for RML, z3c.rml!
Just one problem: when you create big documents, the process crawls, eats all your RAM and dies badly. The solution to that? Use PyPDF to join PDF files into one (one template per document part).
Now, here comes the real deal: we will template that XML, just like we would for HTML, using an XML templating language: Genshi.
To ease that task, I’ve made a little python lib that handles the templating, the merging of various documents into one and much more.
For the most impatient, head directly to the examples! 🙂
How to use it?
Example case: a data table
First, let’s initialize a factory (a class that allows to create various documents and join them in one pdf):
from pyjon.reports import ReportFactory

factory = ReportFactory()
Now let’s create a test.xml file containing our template:
<?xml version="1.0" encoding="iso-8859-1" standalone="no" ?>
<!DOCTYPE document SYSTEM "rml_1_0.dtd">
<document xmlns:py="http://genshi.edgewall.org/">
  <template pageSize="(595, 842)" leftMargin="72" showBoundary="0">
    <pageTemplate id="main">
      <frame id="first" x1="1in" y1="1in" width="6.27in" height="9.69in"/>
    </pageTemplate>
  </template>
  <stylesheet>
    <blockTableStyle id="mynicetable" spaceBefore="12">
      <lineStyle kind="OUTLINE" colorName="black" thickness="0.5"/>
      <blockFont name="Times-Bold" size="6" leading="7" start="0,0" stop="-1,0"/>
      <blockBottomPadding length="1"/>
      <blockBackground colorName="0xD0D0D0" start="0,0" stop="-1,0"/>
      <lineStyle kind="LINEBELOW" colorName="black" start="0,0" stop="-1,0" thickness="0.5"/>
      <!--body section-->
      <blockFont name="Times-Roman" size="6" leading="7" start="0,1" stop="-1,-1"/>
      <blockTopPadding length="1" start="0,1" stop="-1,-1"/>
      <blockBackground colorsByRow="0xD0FFD0;None" start="0,1" stop="-1,-1"/>
      <blockAlignment value="right" start="1,1" stop="-1,-1"/>
      <!-- closing the table when restarting it on next page -->
      <lineStyle kind="LINEBELOW" colorName="black" start="0,splitlast" stop="-1,splitlast" thickness="0.5"/>
    </blockTableStyle>
  </stylesheet>
  <story>
    <h1>$title</h1>
    <blockTable repeatRows="1" style="mynicetable">
      <tr><td py:for="i in range(10)">Row ${i}</td></tr>
      <tr py:for="line in data"><td py:for="col in line" py:content="col"/></tr>
    </blockTable>
    <para py:content="dummy"/>
  </story>
</document>
Let’s have a look at this template:
- The template node lets you choose the document size
- The stylesheet node lets you define the styling of the document
- The story node contains the document content (just like body in html)
- Like in html, h1 defines a level 1 title (we print the title var inside)
- A para is like an html p (a paragraph)
- The blockTable is the table tag containing rows and columns (tr and td). The py:for attributes define the iterations over list vars
- The first one (py:for="i in range(10)") defines the columns (Row 0 to Row 9)
- The second (py:for="line in data") shows the whole content of the data var (item per item)
- The last one (py:for="col in line") shows the content of each line (with a py:content printing it inside the tag)
Now let’s generate a document with this template:
template = 'test.xml'
testdata = [range(10)] * 100

factory.render_template(
    template_file=template,
    title=u'THE TITLE',
    data=testdata,
    dummy='foo')
We passed various variables and a data list (testdata) containing the numbers 0 to 9, 100 times: [[0, …, 9], …, [0, …, 9]].
Please note that all keyword arguments passed to the function (besides template_file or template_string) are passed to the Genshi template, and accessible as root variables there.
Need to add another section to the document with the same template? No problem!
factory.render_template(
    template_file=template,
    title=u'THE TITLE 2!',
    data=testdata,
    dummy='bar'
)
Finally, we create a single PDF file with all the content, and finalise the factory (cleaning it up):
factory.render_document('test.pdf')
factory.cleanup()
And voilà! It’s as simple as that.
How to install it?
To install pyjon.reports it's very simple: just type

easy_install pyjon.reports

in your console (in a virtualenv or globally), and you'll be able to generate PDF documents in your applications in less than 5 minutes.
If you want to see the source code or contribute, see the bitbucket page:
For more info:
by haridas
04 Aug 2010 at 11:04
Hi,
I have little knowledge of Python, and am trying to construct a simple template HTML file (a dynamic HTML file to send with mail, as a report from one tool). For this special purpose I don't need an entire framework just to render a single template… other documents regarding Python templates seem framework-centric. Here I want only template-specific tutorials…
Your blog gave me some good points towards this, and the point of creating pdf reports …nice …:)
Thanks,
Haridas N.
by Kalpa Welivitigoda
23 Aug 2012 at 18:25
Hi,
The content was really helpful. Thanks. I want to set up a custom page size for my output pdf. I tried the following,
…
..
..
But when I view the pdf, it is of the size A4.
Any clue to make my page size A3?
Create a directory to write 'C' programs in and edit a file: hello.c with the customary "hello world" program:
#include <stdio.h>

int main(void)
{
    printf("hello Linux 'C' world!\n");
    return 0;
}
To compile it as hello, use the shell command:
gcc hello.c -o hello
To run it, use the command:
./hello
gcc stands for GNU 'C' compiler. It can also compile 'C++', Fortran, Ada and some Java programs. It can compile these under very many combinations of CPU hardware and operating system to which gcc has been ported.
gcc has too many options for these all to be specified using single letters, so gcc -dr is not the same as gcc -d -r, as would work with other Unix tools. Not all compilation stages have to be done at once, so you can create assembly language files with the -S option, or unlinked object files with the -c option. Files for intermediate stages are deleted by default. If the -o option to give the output file isn't specified, an executable will be written in the file a.out, object files will have the same name as the source with a .o suffix, and assembler files the same name with a .s suffix.
The -O option controls optimisation; a number or letter after the 'O' determines the kind of optimisation, e.g. s for size, or a numeral giving a tradeoff between compilation and execution speed.
The -llibrary option (library is the name of the library) links against a particular library, in addition to the system and compiler libraries.
The -Idirectory option searches the directory for header files.
Before you can use the debugger you will first need a program that compiles without errors. The gdb debugger is used to help investigate runtime errors. It is enabled by compiling the program to be debugged using the gcc -g option.
The actions you are likely to want a debugger to do are to be able to run parts of the program until you get to specified break points, where you examine the state of data within the program, and to step into or over functions which are to be executed after your break points.
gdb program
runs program (which has to be specially compiled) inside the gdb environment.
Here are some of the most frequently needed commands inside gdb, taken from the gdb(1) man page:

break [file:]function
       Set a breakpoint at function (in file).
run [arglist]
       Start your program (with arglist, if specified).
bt     Backtrace: display the program stack.
print expr
       Display the value of an expression.
c      Continue running your program (after stopping, e.g. at a breakpoint).
next   Execute next program line (after stopping); step over any
       function calls in the line.
step   Execute next program line (after stopping); step into any
       function calls in the line.
help [name]
       Show information about GDB command name, or general information
       about using GDB.
quit   Exit from GDB.
#include <stdio.h>
#include <string.h>

int main(void)
{
    char word[] = "hello";
    int i;

    i = strlen(word);
    printf("length of string: hello is %d\n", i);
}
The above program (strlen.c) was compiled using command:
gcc -g strlen.c -o strlen
The following debug session was recorded:
[rich@copsewood c]$ gdb strlen
(version and license details cut)
(gdb) break main
Breakpoint 1 at 0x804837c: file strlen.c, line 5.
(gdb) run
Starting program: /home/rich/c/strlen

Breakpoint 1, main () at strlen.c:5
5           char word[]="hello";
(gdb) step
7           i=strlen(word);
(gdb) print i
$1 = 134513257
(gdb) step
8           printf("length of string: hello is %d\n",i);
(gdb) print i
$2 = 5
(gdb) step
length of string: hello is 5
9       }
(gdb) step
0x4003a7f7 in __libc_start_main () from /lib/i686/libc.so.6
(gdb) step
Single stepping until exit from function __libc_start_main,
which has no line number information.

Program exited with code 035.
(gdb) quit
To reduce typing, b, r, s, p and q are aliases of break, run, step, print and quit.
make is a significant project management tool. You are likely to need it to build non-trivial programs supplied in the form of source code. You will also have to use it to modify projects which already have make programs (called makefiles), and you are likely to benefit from using it for your own projects, since the effort and time spent repetitively building these programs by compiling, linking and installing components can be reduced by automating and modularising the build process.
make is a command which you run in source directories containing a make program called Makefile or makefile. For a large project with a source file tree using more than one directory, the Makefile in the parent of these is likely to recursively run the individual makefiles in subdirectories containing source files. For some very large projects, creating a Makefile suitable for your system is itself automated, by convention through the use of a shell script called configure, also likely to be in the source parent directory.
make commands either specify a target or use a default target. A make target is either a file to be built (e.g. a binary executable) or a directive to get make to do something, e.g. make clean (clean being the target) is used by convention to remove files from the source directory tree, e.g. object files, which are generated by compiling source code and which are in the way or no longer needed. Another conventional target is make install which causes make to copy the compiled program, associated runtime library modules and documentation into the directories from which these files will be used.
(From "Programming with GNU Software", O'Reilly)
# a very simple makefile
# give name of target to compile
simulate:
# one or more tab-started shell commands to create simulate
	gcc -o simulate -O simulate.c inputs.c outputs.c
To run this makefile, your shell command would be:
make simulate
This is adequate for a very small program, but it would compile all 3 source files every time one of them changes. Makefiles contain information about file dependencies, so that make does the minimum required work, compiling only source files which have been updated and not bothering with object files which are up to date, i.e. where the dependent file (e.g. an object file) has a later modification date/time than the file it depends upon (e.g. the source file). If you only want to compile source which has changed, you could expand this makefile as follows:
# name of target can be followed by dependencies
simulate: simulate.o inputs.o outputs.o
# tab-started command to create target
	gcc -o simulate simulate.o inputs.o outputs.o

simulate.o: simulate.c
	gcc -c -O simulate.c

inputs.o: inputs.c
	gcc -c -O inputs.c

outputs.o: outputs.c
	gcc -c -O outputs.c
Makefiles can use the shell to execute shell commands, but each command must be on its own tab-started line, and each such line runs in its own shell:
clean:
	rm -f Makefile.bak a.out core errs lint.errs thisprog thatprog *.o
If you want a set of commands to run inside the same shell, e.g. to set and use environment variables or change working directories prior to using these for other shell commands, place semicolons between these commands. This can be on the same line or you can use backslashes as the last character of lines to be continued, with no whitespace (tabs or spaces) after the backslashes e.g.
	cd ../module1; \
	gcc module1.c
To avoid repetitive and error-prone typing (even the very simple example above had some), makefiles often include macros, which are short words expanded into substitution text, e.g. (taken from the "Internetworking with TCP/IP, Volume III" Makefile, David L Stevens, Internetworking Research Group at Purdue):
CFLAGS = -W -pedantic -ansi
SERVS = TCPdaytimed TCPechod TCPmechod UDPtimed daytimed sesamed
SXOBJ = passiveTCP.o passiveUDP.o passivesock.o errexit.o
Derived macros and make targets are then specified more easily using these macros eg:
${SERVS}: ${SXOBJ}
	${CC} -o $@ ${CFLAGS} $@.o ${SXOBJ}

servers: ${SERVS}

TCPechod: TCPechod.o
TCPmechod: TCPmechod.o
UDPtimed: UDPtimed.o
daytimed: daytimed.o
sesamed: sesamed.o
This states that the server programs depend upon creating the listed object files. The macro $@ expands to the name of the target being built: one of TCPdaytimed, TCPechod, TCPmechod, UDPtimed, daytimed or sesamed. The macro ${CC} isn't defined within this Makefile, as make uses an inbuilt value for this macro, normally the cc command which is the default Unix 'C' compile command; on Linux cc is an alias of the gcc command.
make is typically used after downloading a program in the form of source code archived in tar or compressed tar format (compressed tar archives have suffixes .tgz or .tar.gz). These archives can be unpacked by downloading or copying the archive file into a directory where you wish to build the program and using the command:
tar -xzvf program.tgz
Where x means extract, z means uncompress, v means display all the paths as the archive is extracted and f refers to the archive file (program.tgz) to be extracted. On some older systems you may need to uncompress the tar file separately from using tar to extract it, e.g. using gunzip program.tgz and then tar -xvf program.tar. Having done this it is a good idea to see if there are any readable text files in the source archive describing how to proceed with compilation and installation. Typical file names to look out for are README, README.1ST, INSTALL, CONFIGURATION, etc.
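To see the round trip end to end, here is a self-contained shell session that creates a small archive and then extracts it with the flags just described (the directory and file names are invented for the demonstration):

```shell
mkdir -p program-1.0
echo 'hello' > program-1.0/README
tar -czf program.tgz program-1.0    # c: create, z: compress, f: archive file
rm -r program-1.0                   # throw away the original copy
tar -xzvf program.tgz               # extract it again
cat program-1.0/README
```

The final cat prints hello, confirming the file survived the round trip.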
The environment takes the form of a set of variables, each having a name and a value. Both names and values are stored as strings. These variables allow external control over how programs behave; for example the PATH environment variable gives a list of directories which the shell will search in order to find an executable program for an external command. The environment is inherited from the parent process' environment, but environment changes made by children, e.g. to the current working directory, do not affect parents. If it makes sense for an environment variable to specify more than one value, e.g. directories to be searched, these will be separated using a suitable delimiter (e.g. Unix directory paths are delimited using colons ':' while Windows directory paths are delimited using semicolons ';').
In the Bash shell a variable can be assigned and exported to the environment as follows:
name_of_variable=value
export name_of_variable
it can then be accessed using $name_of_variable e.g:
echo $PATH

In 'C' programs, environment variables can be read and written using the getenv(3) and setenv(3) library calls.
/* getenv.c : wrapper program around getenv()
 * Richard Kay 11 Jan 02 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>  /* for strlen() */

int main(void)
{
    char name[BUFSIZ], *vp;

    printf("enter name of environment variable:\n");
    fgets(name, BUFSIZ, stdin);
    name[strlen(name)-1] = '\0';  /* get rid of newline */
    vp = getenv(name);
    if (vp == NULL) {
        fprintf(stderr, "I don't know that environment variable\n");
        exit(1);
    }
    printf("Value of %s is %s\n", name, vp);
    return 0;
}
./getenv
enter name of environment variable:
PATH
Value of PATH is /home/rich/bin:/usr/local/bin:/usr/local/jkd1.4/bin:
/home/rich/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games
Here is the man page.
SETENV(3)              Linux Programmer's Manual              SETENV(3)

NAME
       setenv - change or add an environment variable

SYNOPSIS
       #include <stdlib.h>

       int setenv(const char *name, const char *value, int overwrite);

       void unsetenv(const char *name);

DESCRIPTION
       The setenv() function adds the variable name to the environment
       with the value value, if name does not already exist. If name
       does exist in the environment, then its value is changed to
       value if overwrite is non-zero; if overwrite is zero, then the
       value of name is not changed.

       The unsetenv() function deletes the variable name from the
       environment.

CONFORMING TO
       BSD 4.3

SEE ALSO
       clearenv(3), getenv(3), putenv(3), environ(5)

BSD                           1993-04-04                      SETENV(3)
Command-line arguments are made available by specifying argc and argv parameters to the main function. These are the number of arguments and an array of string pointers respectively. This array is indexed from 0, and the argument at index 0 is the pathname by which the program was executed.
The following program performs a similar function to echo(1) except that echo doesn't output the pathname by which it is called before the other command-line arguments:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i;

    for (i = 0; i < argc; i++)
        printf("%s ", argv[i]);
    printf("\n");
    return 0;
}
The while loop keeps running although I type quit. What is wrong with this code?
import java.util.*;

public class OddEvenGame {
    public static void main(String[] args) {
        Scanner console = new Scanner(System.in);
        Random r = new Random();
        int chips = 30;
        int betting = 0;
        String call = "";
        String answer = "";
        while (chips != 0 || !answer.equals("quit")) {
            int dice = r.nextInt(6) + 1;
            System.out.print("Odd or even? (type \"quit\" to exit) ");
            call = console.next();
            System.out.print("How many chips are you betting? ");
            betting = console.nextInt();
            if (dice % 2 == 0 && call.equals("even")) {
                chips = betting * 2 + chips;
                System.out.println("You have earned " + betting * 2 + " chips!");
            } else if (dice % 2 == 1 && call.equals("odd")) {
                chips = betting * 2 + chips;
                System.out.println("You have earned " + betting * 2 + " chips!");
            } else {
                chips = chips - betting;
                System.out.println("You lost " + betting + " chips!");
            }
            System.out.println("You have " + chips + " chip or chips now.");
            System.out.println();
        }
    }
}
Opened 11 years ago
Closed 8 years ago
#9625 closed (duplicate)
ForeignKey data type for certain derived model fields not calculated correctly
Description
I have a case where I want to make a custom "BigAutoField" model field. So, given newforms,
I do something like this in my own code:
class BigAutoField(fields.AutoField):
    empty_strings_allowed = False

    def get_internal_type(self):
        return "BigAutoField"

    def db_type(self):
        assert settings.DATABASE_ENGINE == 'mysql'
        return 'bigint UNSIGNED AUTO_INCREMENT'
I then change the appropriate parts of my model to use this BigAutoField in place
of Django's AutoField. Such as:
class Foo(models.Model):
    id = custom_model_fields.BigAutoField('ID', primary_key=True)
Then, I have Django generate the schema and populate the DB. Looking at the data type of this
'Foo' table, I see that ID is UNSIGNED BIGINT. Great. However, when I look at the data type
of any foreign keys that reference it, I see that they are still INTEGER.
So I went and looked at the SVN code. In django/db/models/fields/related.py around line 701,
I saw:
def db_type(self):
    # The database column type of a ForeignKey is the column type
    # of the field to which it points. An exception is if the ForeignKey
    # points to an AutoField/PositiveIntegerField/PositiveSmallIntegerField,
    # in which case the column type is simply that of an IntegerField.
    # If the database needs similar types for key fields however, the only
    # thing we can do is making AutoField an IntegerField.
    rel_field = self.rel.get_related_field()
    if (isinstance(rel_field, AutoField) or
            (not connection.features.related_fields_match_type and
             isinstance(rel_field, (PositiveIntegerField,
                                    PositiveSmallIntegerField)))):
        return IntegerField().db_type()
    return rel_field.db_type()
So it looks like my model is falling into the trap of isinstance(AutoField) being True,
since it is derived from AutoField. Now the quick and dirty hack around this is to copy
AutoField from django/db/models/fields/__init__.py and make that the basis of my BigAutoField.
I can do that for now, but in the long run I'd like to avoid putting internal Django code
into my project.
So, what do you guys recommend happen here? I see this code in Django
I referenced being a problem for anyone that tries to make their own model fields that
derive from AutoField, PositiveIntegerField and PositiveSmallIntegerField.
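The trap described here is plain Python subclassing behaviour, independent of Django itself. A minimal sketch (the class names mirror the ticket, not Django's real fields) shows why any AutoField subclass falls into the IntegerField branch:

```python
class AutoField:
    def db_type(self):
        return "integer AUTO_INCREMENT"

class BigAutoField(AutoField):
    def db_type(self):
        return "bigint UNSIGNED AUTO_INCREMENT"

def foreign_key_db_type(rel_field):
    # mirrors the logic in related.py: an isinstance() check matches
    # subclasses too, so the FK column falls back to plain "integer"
    if isinstance(rel_field, AutoField):
        return "integer"
    return rel_field.db_type()

print(foreign_key_db_type(BigAutoField()))  # prints: integer
```

An exact check such as type(rel_field) is AutoField would not catch subclasses, which is one way of seeing what the ticket is asking for.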
The other problem with the above example is that if you don't derive from AutoField (which I didn't) then the ForeignKey will get rel_field.db_type() as its own, which in this case will be 'bigint UNSIGNED AUTO_INCREMENT'. This of course is invalid, given that the ForeignKey can't auto increment. My solution has been to write the CREATE TABLE SQL myself.
This of course is the easy MySQL example, it gets even more fun when we start to deal with the other 3 databases that Django supports. | https://code.djangoproject.com/ticket/9625 | CC-MAIN-2019-26 | refinedweb | 462 | 55.64 |
Every app has some state to manage, and it is easy to end up rolling your own ad-hoc system for it. Thankfully, most frameworks provide some opinionated solutions for managing state in an app. For Vue, developers make use of the library Vuex, which provides common patterns that make managing state predictable and consistent across the entire app. Let's look at how we can manage a simple TODO app using Vuex and, as an added benefit, we'll make it type safe using TypeScript.
The Shell
Before we dive into Vuex, let’s look at the shell of our app. What we have is basically a single route app that should load a list of “todos”. Users should be able to tap the “+” button in the header to open a modal and add a new item, or click the item itself to edit an existing todo. We can mark an item as completed in the modal or by swiping the item to to reveal some additional buttons. As far as complexity goes, this is your basic CRUD app. While basic, and a bit contrived, these kinds of apps perform the same type of actions we do in most situations. So this is a good chance to discover some best practices.
Now this setup isn't overly complex, but it is already showing signs of overly coupled logic. With everything being declared in the component, if we need to change our architecture at all, or the format of our data, we basically have to change it in multiple places.
Making things predictable
To bring some structure to our app, let’s add Vuex.
vue add vuex@next
This will install the deps we need and perform any changes to our file system. With this, we get a new
src/store/index.ts file for us to work in. Now Vuex is based on a few concepts; A store, mutations, and actions.
Store
In Vuex, a Store is a global state that is available throughout our app. This is just a fancy way of saying we have an object that we can mutate and reflect these changes in our UI.
In our app, we can create our store to hold our various “todos”
import { InjectionKey } from 'vue';
import { createStore, useStore as baseUseStore, Store } from 'vuex';

// interfaces for our State and todos
export interface Todo {
  id: number;
  title: string;
  note?: string;
}
export interface State {
  todos: Todo[];
}

export const key: InjectionKey<Store<State>> = Symbol();

const state: State = {
  todos: [
    { title: 'Learn Vue', note: '', id: 0 },
    { title: 'Learn TypeScript', note: '', id: 1 },
    { title: 'Learn Vuex', note: '', id: 2 },
  ],
};

export const store = createStore<State>({ state });

// our own `useStore` composition function for types
export function useStore() {
  return baseUseStore(key);
}
With our todos setup and primed with some initial data, we can now think about how we modify that state, which is done through mutations.
Mutations
Mutations, as the name implies, are a way to mutate our state. This is very different compared to something like Redux which uses immutable state, but achieves the same effect. With Mutations, we essentially have a handler that gets called and is passed the current state, along with any payload.
For our use case, we're going to make sure we can type our mutations and provide some auto completion in our editors. We'll start off with a const enum that declares all our mutation names for us:
export const enum MUTATIONS {
  ADD_TODO = 'ADD_TODO',
  DEL_TODO = 'DEL_TODO',
}
Next, we’ll actually define our mutations:
import { createStore, useStore as baseUseStore, Store, MutationTree } from 'vuex';

// ...

const mutations: MutationTree<State> = {
  [MUTATIONS.ADD_TODO](state, newTodo: Todo) {
    state.todos.push({ ...newTodo });
  },
  [MUTATIONS.DEL_TODO](state, todo: Todo) {
    state.todos.splice(state.todos.indexOf(todo), 1);
  },
};
We have two mutations available; one to add a todo to our store and another to remove a todo. You may notice that we have a type on the
todo, but not on state, why is that? Well thanks to the
MutationTree type, the type information from
State that is passed in will flow throughout our mutations. Now the only thing we need to type is the payload, which can change depending on what mutation we call.
The last thing to note about mutations is that they only care about changing state. So to change state at all with Vuex, you must use Mutations.
Actions
Actions are like mutations, but can perform asynchronous work and call mutations. This is a useful way to separate tasks in your app that depend on external resources from those that can be performed with the data at hand. Like mutations, we'll split our actions up by a type and then the actual implementation:

export const enum ACTIONS {
  LOAD_TODOS = 'LOAD_TODOS',
}

const actions: ActionTree<State, State> = {
  [ACTIONS.LOAD_TODOS]({ commit }) {
    fetch('https://example.com/todos')
      .then(res => res.json())
      .then((todos: Todo[]) => {
        todos.forEach(todo => commit(MUTATIONS.ADD_TODO, todo));
      });
  },
};
Actions receive the context, or actual store object, as the first argument, followed by any payload passed when the action was dispatched. Within the action we can make a request to some API, resolve the response, and commit a mutation. It doesn't need to be a single mutation either: we could trigger one, two, or more mutations, or conditionally trigger a mutation based on the result of a request (a side effect).
Putting it all together
With these pieces together, our overall store should look something like this.
import { InjectionKey } from 'vue';
import {
  createStore,
  useStore as baseUseStore,
  Store,
  MutationTree,
  ActionTree,
} from 'vuex';

// interfaces for our State and todos
export type Todo = { id: number; title: string; note?: string };
export type State = { todos: Todo[] };

export const key: InjectionKey<Store<State>> = Symbol();

const state: State = {
  todos: [
    { title: 'Learn Vue', note: '', id: 0 },
    { title: 'Learn TypeScript', note: '', id: 1 },
    { title: 'Learn Vuex', note: '', id: 2 },
  ],
};

/*
 * Mutations
 * How we mutate our state.
 * Mutations must be synchronous
 */
export const enum MUTATIONS {
  ADD_TODO = 'ADD_TODO',
  DEL_TODO = 'DEL_TODO',
  EDIT_TODO = 'EDIT_TODO',
}

const mutations: MutationTree<State> = {
  [MUTATIONS.ADD_TODO](state, newTodo: Todo) {
    state.todos.push({ ...newTodo });
  },
  [MUTATIONS.DEL_TODO](state, todo: Todo) {
    state.todos.splice(state.todos.indexOf(todo), 1);
  },
  [MUTATIONS.EDIT_TODO](state, todo: Todo) {
    const ogIndex = state.todos.findIndex(t => t.id === todo.id);
    state.todos[ogIndex] = todo;
  },
};

/*
 * Actions
 * Perform async tasks, then mutate state
 */
export const enum ACTIONS {
  LOAD_TODOS = 'LOAD_TODOS',
}

const actions: ActionTree<State, State> = {
  [ACTIONS.LOAD_TODOS]({ commit }) {
    fetch('https://example.com/todos')
      .then(res => res.json())
      .then((todos: Todo[]) => {
        todos.forEach(todo => commit(MUTATIONS.ADD_TODO, todo));
      });
  },
};

export const store = createStore<State>({ state, mutations, actions });

// our own useStore function for types
export function useStore() {
  return baseUseStore(key);
}
Parting Thoughts
As I stated early, State Management in an app is a bit of a personal preference. What I’ve shown here is simply one way that I would go about it. I’d encourage you all to find your own approaches, but keep a structure like this in your app for consistency. Cheers!
Mike, thanks for this. I’ve been using NgRx in an Ionic/Angular app for the last couple of years. To be honest my thoughts yo-yo on whether I am doing things properly all the time. Sometimes I’m glad I used NgRx and sometimes I realised I OVER-USED it - I think our app is probably a classic case of where it should be used in parts of the app but maybe not all over the app.
Anyway recently I dived into Vue. One of the things I found with Vuex is that I was far more likely to re-use an Action whereas with NgRx I had tried to keep to Mike Ryan’s Good Action Hygene principles and have each Action unique. Vuex does not seen to encourage this so much (but maybe it’s because I am only just getting started and have not got into it enough yet)
i think that this “issue/concern” is one of the reasons that tech gets overly complex… i understand what he is trying to accomplish, but all this does is get people caught up who/what is right who/what is wrong instead of adding value to the deliverable… but like him, this is just my opinion and I don’t think the state manager police are going to chase you down for not following the pattern or for reusing an action
Yes, I agree. However, I must admit that when I’ve followed those principals with NgRx it has provided benefits (despite the extra boiler plate). Knowing exactly the source of any action has certainly helped debug issues. And in cases where I have not followed it and have duplicated actions, it has taken a bit more time to sort out.
Both points are valid here. There’s a balance of being dry/clean without over complicating things. All just personal preference and not a “one is correct and one is wrong”
Join the discussion on the Ionic Forum | https://ionicframework.com/blog/managing-state-in-vue-with-vuex/ | CC-MAIN-2021-17 | refinedweb | 1,392 | 59.64 |
ram alternative to nvram?
When I google the topic of using ram instead of flash most results suggest setting up a ram drive & mounting it. But I'm just looking for a few bytes of ram I can use as an alternative to flash. So something like f=open('/some/spare/ram', 'w'); f.write(str(cycle)); f.close() where /some/spare/ram is a location in ram rather than in flash. Does anybody know of some unused GPY ram I could try this on?
@robert-hh said in ram alternative to nvram?:
Having the RAM disk mirrored to RTC memory is hardly usable, since the size is limited to 2048 byte.
Thanks for your investigations revolving this and for letting us know about the outcome!
@andreas I tried to have that RAMdisk mirrored to RTC. That is hardly usable, since the size is limited to 2048 byte.
@robert-hh said in ram alternative to nvram?:
RAMBlockDevis using storage from the regular RAM [but] one could rewrite it to use the RTC RAM for storage.
Thanks for clarifying that. This sounds like a viable option to me.
However, using external FRAM would be the more straight option in general, so thanks again for outlining that in your post above.
@andreas RAMblkdev is using storage from the regular RAM. Being dead simple, one could rewrite it to use the RTC RAM for storage. Only it had to re-write the full 8k all the time, and keep a temp copy in RAM.
Edit:
It seems more efficient to use a dictionary and pickle to store that RTC RAM.
@robert-hh said in ram alternative to nvram?:
In principle the RTC memory is kept during deepsleep
That's probably true, but just for the sake of completeness and to avoid further confusion: The
RAMBlockDevis going straight to normal RAM by using the
bytearraywithin
self.dataas storage, right?
Mapping that guy to
rtc.memory()might be considered an alternative option of making the content persist across deep sleep cycles.
@andreas In principle the RTC memory is kept during deepsleep, but I recall faintly that is is wiped on boot. That could be dropped. Adafruit offer two FRAM modules, and @pythoncoder (Peter Hinch) has made a block device driver for that at:
Peter Hinch' repositories are always worth a look, especially for his good tutorial on asyncio,
Thanks a bunch, @robert-hh! We just cross-posted this to [1], giving you appropriate credits for bringing that to Pycom MicroPython.
We probably have the same intentions as @kjm, using some memory for buffering measurement data when there is no network connectivity.
However, we all have to be aware that memory content will be lost when going to deep sleep. While that might be obvious for us, I am mentioning this detail for all others coming here for whom that might not be exactly clear.
[1]
@kjm I searched a little bit in my memory & net and found a ramdisk implementation, which I adapted to Pycom#s firmware. Code below:
part 1: Ramdisk Block device. Author Damien George, the master brain of MicroPython:
class RAMBlockDev: def __init__(self, block_size, num_blocks): self.block_size = block_size self.data = bytearray(block_size * num_blocks) def readblocks(self, block_num, buf): for i in range(len(buf)): buf[i] = self.data[block_num * self.block_size + i] def writeblocks(self, block_num, buf): for i in range(len(buf)): self.data[block_num * self.block_size + i] = buf[i] def ioctl(self, op, arg): if op == 4: # get number of blocks return len(self.data) // self.block_size if op == 5: # get block size return self.block_size
Part 2: Small script which creates the ramdisk of size 32k.
import uos from ramblkdev import * # 512 is the sector size, fixed for FAT, 64 is the number of sectors. # 64 -> 32k bdev=RAMBlockDev(512, 64) fs=uos.mkfat(bdev) fs.fsformat() uos.mount(fs, "/ramdisk")
@kjm There is this pycom.nvsxxx() set of functions, which can be used for that - still flash. There is some RAM (8k) assigned to the RTC, with a very raw interface.
from machine import RTC rtc=RTC() rtc.memory("Store some string or bytearray") # stor something there stored_value = rtc.memory() # and get it back
You can only store strings or byte-array. Complex data has to be serialized. e.g. with json dumps() and loads().
Edit:
For serialization you may also use pickle, which is available at. That lib is a vein of gold for many micropython tasks, maintained by Paul Solokovsky (@pfalcon), who is a major contributor to MicroPython, especially the ESP8266 port. | https://forum.pycom.io/topic/5361/ram-alternative-to-nvram | CC-MAIN-2021-10 | refinedweb | 757 | 66.54 |
React, in my opinion, has become quite a useful tool over the years. I admin I haven’t given the other major frameworks a try, but from the look of the resulting code, I only would give Svelte a real chance in the nearer future (in fact, you’d really have to pay me real big money to convince me about Angular).
Now with many of the more useful JS libraries, React is in a state where not only has it survived quite a time (reaching v18 only a few weeks ago), but also breeding a community that harbors a lot of valuable knowledge, enabling one to efecavoid the most common pitfalls at the beginning of your journey. There are lots of resources you can easily find online, from few-hour-courses to several posts in other blogs about the most common traps.
However, in our daily life it appears that there still are some very good points to make about how not to go about React’s unopinionatedness. So these are some of our own findings that I’ve not yet seen overly emphasized, and maybe they are here for your advantage.
1. HAVE YOUR STATES ATOMIC
It might happen that one migrates an older React component where functional programming wasn’t the norm yet, or out of whatever habit, that you declares something like a greedy React state as
const [state, setState] = useState({this: ..., that: ... , ..., ...});
Now your state profits much from immutability (think of this as “your machine then knows that it’s content is clear and unique, given any time”) and therefore you do not need to care about the same-or-not-sameness of state.that when evaluating state.this. Therefore, it is usually advised to split that up into several independent states as
const [this, setThis] = useState(...); const [that, setThat] = useState(...); ...
That is more readable and everything. However, the most useful rule to build your states is not even to split everything up as small-as-possible, but rather, to have your states atomic. By that, we mean, “not needlessly large, but containing all what might change at the same time”.
One common example is basic data fetching. If you don’t choose to grab for react-query, which I personally like. But if you do e.g. a simple GET request, you usually do not only have “data” (some response), but also at least a “pending” (has the request finished yet?) and an “error” (is this response even usable?) field. These all change at the same time. Thus, they belong to the same entity. That state, designed atomically
const [query, setQuery] = useState({ pending: false, data: null, error: null, });
side note: you might choose not to use the null object as an initial value here because of the known problem of ambivalence with this object. For this illustration, it will suffice.
So, this query state now is atomic. Not to split further without serious consequences, as you will. If you had another, unrelated query, you would not just put it right into the same state entity; but if you had another property of that query (like e.g. a separate field for the status code, …), it would belong.
This helps in having more predictable useEffect, useMemo etc. dependency arrays. You can have an Effect depending on [query] as a whole and this makes complete semantic sense. It would be very hard to predict it’s behaviour, if you mashed multiple queries or whatever-state-you-can-think-of in there.
2.HAVE YOUR EFFECTS ATOMIC & TEAR THEM DOWN
Similarly, it is not super obvious (to the newcomer’s eye at least), that you can have multiple useEffects(). You can adhere to the Single Responsibility principle right there — the only good Effects are the ones that you can grasp in a twinkling of an eye. Use one each for every single thing you want to achieve, don’t lump multiple different things together in a somewhat-“constructor”-type of thinking. This keeps the dependency arrays small and controllable, and there are fewer cases of peculiar “But this CANNOT EVEN happen!!”.
Moreover, Effects have a function designed to clean them up, or the teardown function. If your Effect starts any larger operation and then for some reason your component get’s re-rendered before your operation is finished, you are likely to get hit by that effect in a state where you forgot about it already. You can follow this example
// example: listening to the scroll event useEffect(() => { const handler = (event) => { /* ... */ }; document.addEventListener('scroll', handler); return () => document.removeEventListener('scroll', handler); }, []); // or you might do something later in life useEffect(() => { const timeout = setTimeout(() => { /* ... */ }, 5000); return () => clearTimeout(timeout); }, []);
Some asynchronous operations might not have a simple teardown operation, but you can at least tell your Promises to disregard the effect. This is at least responsible for the very ugly
Warning: Can’t perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application.
If you are responsible, you clean your Browser Console of all of these warnings. It appears if you call a setState-or-similar function at a point where the teardown actually should have happened. This pattern solves that case:
// this example uses a fetch Promise, // but it also works for stale setTimeout handlers etc. useEffect(() => { let mounted = true; fetch('/whatever').then(() => { if (mounted) { setState(true); } }; return () => { mounted = false }; }, []); // if you do not check for the value of mounted, // the "memory leak" error can appear, if the // fetch returns when the component updated meanwhile.
Side note: I also can not recall a single case in which the common React linter rule “exhaustive–deps” was worth ignoring. I had several occasions in which I believed to outsmart the stupid machine, only to end up in much larger problems down the road. Sure, things like Redux’ dispatch() might be cumbersome to include always, but I found that if I just make sure that exhaustive-deps never fires, I am more happy in the long run.
3.USEEFFECT() in too DEEP Functions
Especially in the context of data fetching, it might appear luring to put your useEffect() calls as deep (in the direction of the smallest components) as you can. Even more so, if you don’t have a rigid way of state management.
Now, I feel the point that this appears as “more modular” and flexible, but for me, has happend to situations where way too many requests were sent to our backends. You trade the modularity for the unpredictability of some Effects, so the best way I came to think of it was: Treat useEffect() like a bug.
I’m not saying that using it is wrong. But if you are wary of it’s appearance, this can help. Sometimes, it is just possible to do everything an Effect does – just completely outside React. Maybe, the Effect code can instead live in your index.js (as vanilla JS or otherwise) and just injected into your Root component, e.g. as props or via other libraries. E.g. with a Redux middleware, some effects can run with a higher degree of control about your state.
Remember: Modularity is not bad per se. It’s good. Don’t elevate the most particular effects to the top level of your application, but figure out where they can live well enough so you exactly know when they need to fire.
So far, there hasn’t been a case where I wished that I stuffed my useEffects further down to the virtual DOM leaves, but several, in which elevating them helped me a lot.
4. USE CUSTOM HOOKS with minimal interface
I consider it helpful, even for React beginners, to always be on the lookout of what could be its own React hook. A React Hook is any function that has a name beginning with “use” and for the most time, these consist of some combination of internal useState, useEffect, useContext and useRef definitions.
But their merit is in that they allow for much cleaner, dumber looking Components themselves – consider: dumb components are the best!
If they are only needed once, you can have them co-located next to where they are needed, but even just the act of giving them an own name makes for much more understandable code.
I use custom hooks for a lot of things, e.g.
- having a State that is persisted in the localStorage / sessionStorage
- having a State that updates in a debounced / throttled / delayed manner
- standardizing very basic data fetching
- accessing the window width at any time (nice for Responsive layout)
- creating a React ref for an element with an “clicked outside” handler
- standardized response of messages from connected websockets
I will now spare you the code, but if you have questions about any of these, just drop a comment.
One important point, though: Always have your interface minimal. E.g. if your custom hook has an internal setState(), think hard about whether you pass that function to the outside via the hook return value. Even if you are the only developer on a project, treat yourself as two different instances, one “framework designer” and one “framework consumer”, and as the designer, think hard about what havoc the consumer could do if you allow him too much.
5. Do not duplicate STATE informAtion (especially with react-router)
This applies to every state information, but it’s important to recognize that your URL route is just that: a kind of global state. One that your user can edit directly at any time, leaving the synchronization up to you.
So do not go about it by reading the URL parameters into some state that has it’s own setState! If you define a certain role of a state parameter in your URL, then it is your obligation to have a uni-directional data flow:
- From the route, that value flows into your application in a clearly-defined manner,
- where you act upon it as you wish, until you need to change it
- Then you change the route. Then go back to 1
Of course, one might imagine that in some cases you can not guarantee that. Then maybe do your own synchronization logic, but I would highly advise you to stash that away into e.g. a custom hook, or middleware if you use Redux, so that you can test it thoroughly and it won’t break too soon.
Further note: There are situations where it is quite sensible to have two very similar states, if they have a different responsibility. These are not a bug.
E.g. if you GET a value from a server, then edit it in a controlled <input/> field, and PUT it to the server again, you do not wish to do so on every key press. Then these are not meant to be the same:
- the value as you currently know it from the server
- the value as it exists inside the <input/>
These are semantically different. They can and should be a different state entity. But if you have something that is utterly dependant on one other state, then chances are you do not really need another entity.
All in all,
that turned out longer than I envisioned it to be become. But I hope it is of any help to any React coders who managed the absolute basics and now are prone to the next-level pitfalls.
The good news is that after a certain bunch of hardships, there is rarely the case of even more surprises. So, manage your state and effects responsibly, especially the asynchronous ones, and the rest are practices that apply for any software development.
Or am I misled? | https://schneide.blog/category/software-development/programming-language/javascript-programming-language/ | CC-MAIN-2022-27 | refinedweb | 1,955 | 58.92 |
In this small article I'll show how to setup an Android environment and I will make simple "Hello World!" (or "Hello AndroPhone!" ;) )
Before we start, we need:
- JDK -
- Android SDK -
- Eclipse IDE -
So, lets get started.
I'll skip the explanations for installation of the JDK (Don't forget to set the Environment Variable JAVA_HOME to C:\Program Files (x86)\Java\jdk1.6.0_21 or smth similar to this).
Installing the Android SDK
We have to unzip the Android SDK and to execute the SDK setup.exe.
A terminal window will open for a few seconds, and then Android SDK and AVD Manager will open.
If you get this error message: "Failed to fetch URL, reason: HTTPS SSL error. You might want to force download through HTTP in the settings." Choose Settings on the left. Check Force https://... sources to be fetched using http://
After we chose the version of the SDK and the API, we can continue with installation... It should finish in a while
Installing and configuring ADT Plug-in
In running Eclipse we have to select Help>Software Updates....
In the Available Software dialog, I click Add....
After that I enter the name for the remote site (for example, "Android Plugin") in the "Name" field, and in the "Location" field:
In the Available Software view, I've selected the checkbox next to Developer Tools.
The final installation step here is to accept the license agreements and to restart the Eclipse.
After restart we have to configure the path to the Android SDK:
Note: If you have trouble acquiring the plugin, you can try using "http" in the URL, instead of "https" (https is preferred for security reasons).
Creating AVD
The last thing we need before start programming Android apps is to create new Vertual Android Device (AVD) from SDK setup.sdk.
Now everything is ready...
Creating and running an Android Project
We can create new Android Project, and output something on the screen.
Here,
- Project Name - the Eclipse Project name (the name of the directory that will contain the project files).
- Application Name - human-readable title for the application (the name that will appear on the Android device_.
- Package Name - package namespace, it must be unique.
- Create Activity -.
When the project is created, the plugin had created for us some class stub and ui things.
To check if we have normal working environment, we'll replace the code in HelloAndroPhone.java with:
package com.stanley;
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;
public class HelloAndroPhone extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
TextView tv = new TextView(this);
tv.setText("Hello, AndroPhone!");
setContentView(tv);
}
}
To run our project, we have to right-click the
HelloAndroidproject folder and select Run As > Android Application.
If all goes well,the emulator will open and we will see our first Android application!
That's it. We have done our first Android application. | https://encryptedshadow.blogspot.com/2010_09_05_archive.html | CC-MAIN-2021-43 | refinedweb | 497 | 66.44 |
The situation that i'm faced with is this: We plan on using a number of server applications hosted on Amazon EC2 machines, mainly Microsoft Team Foundation Server. These services rely heavily on Active Directory. Since our servers are in the Amazon cloud it should go without saying (but I will) that all our users are remote.
It seems that we can't setup VPN on our EC2 instance -- so the users will have to join the domain, directly over the internet then they'll be able to authenticate and once authenticated, use that token for accessing resources such as TFS.
on the DC instance, I can shut down all ports, except those needed for joining/authenicating to the domain. I can also filter the IP on that machine to just those address that we are expecting our users to be at (it's a small group)
On the web based application servers, I imagine all we need to open is port 80 (or 8080 in the case of TFS)
One of the problems that I'm faced with is what domain name to use for this Active directory. Should I go with "ourDomainName.com" or "OurDomainName.local" If I choose the latter, does that not mean that I'll have to get all our users to change their DNS address to point to our server, so it can resolve the domain name (I guess I could also distribute a host file)
Perhaps there is another alternative that I'm completely missing.
You've got two orthagonal concerns.
re: naming - I'd never name an Active Directory domain name after your second-level domain (i.e. "OurDomainName.com"). This has been the subject of religious argument here, which you can ready about at:
I wouldn't use ".local" (even though Microsoft does-- ".local" has "baggage" associated with it because of the ZeroConf protocol).
Personally, I use the convention "ad.domain.com". Assuming you delegate the DNS for the "ad" subdomain to a DNS server running on your DCs you can coexist your AD namespace with your public-facing DNS namespace without issue.
re: security - You might want to consider using an IPSEC policy on your DCs, if not all your domain member computers, to authenticate and encrypt communications between your client computers and your DCs. Getting joined to the domain, initially, will be somewhat difficult, but certainly not impossible. If your clients are Windows 7-based, you could probably leverage the new Offline Domain Join functionaltiy to make this even easier.
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
727 times
active | http://serverfault.com/questions/141246/what-are-problems-and-pitfalls-with-a-public-facing-active-directory | CC-MAIN-2015-35 | refinedweb | 443 | 59.53 |
The Kullback-Leibler divergence between two probability distributions is a measure of how different the two distributions are. It is sometimes called a distance, but it’s not a distance in the usual sense because it’s not symmetric. At first this asymmetry may seem like a bug, but it’s a feature. We’ll explain why it’s useful to measure the difference between two probability distributions in an asymmetric way.
The Kullback-Leibler divergence between two random variables X and Y is defined as
This is pronounced/interpreted several ways:
- The divergence from Y to X
- The relative entropy of X with respect to Y
- How well Y approximates X
- The information gain going from the prior Y to the posterior X
- The average surprise in seeing Y when you expected X
A theorem of Gibbs proves that K-L divergence is non-negative. It’s clearly zero if X and Y have the same distribution.
The K-L divergence of two random variables is an expected value, and so it matters which distribution you’re taking the expectation with respect to. That’s why it’s asymmetric.
As an example, consider the probability densities below, one exponential and one gamma with a shape parameter of 2.
The two densities differ mostly on the left end. The exponential distribution believes this region is likely while the gamma does not. This means that an expectation with respect to the exponential distribution will weigh things in this region more heavily. In an information-theoretic sense, an exponential is a better approximation to a gamma than the other way around.
Here’s some Python code to compute the divergences.
from scipy.integrate import quad from scipy.stats import expon, gamma from scipy import inf def KL(X, Y): f = lambda x: -X.pdf(x)*(Y.logpdf(x) - X.logpdf(x)) return quad(f, 0, inf) e = expon g = gamma(a = 2) print( KL(e, g) ) print( KL(g, e) )
This returns
(0.5772156649008394, 1.3799968612282498e-08) (0.4227843350984687, 2.7366807708872898e-09)
The first element of each pair is the integral and the second is the error estimate. So apparently both integrals have been computed accurately, and the first is clearly larger. This backs up our expectation that it’s more surprising to see a gamma when expecting an exponential than vice versa.
Although K-L divergence is asymmetric in general, it can be symmetric. For example, suppose X and Y are normal random variables with the same variance but different means. Then it would be equally surprising to see either one when expecting the other. You can verify this in the code above by changing the
KL function to integrate over the whole real line
def KL(X, Y): f = lambda x: -X.pdf(x)*(Y.logpdf(x) - X.logpdf(x)) return quad(f, -inf, inf)
and trying an example.
n1 = norm(1, 1) n2 = norm(2, 1) print( KL(n1, n2) ) print( KL(n2, n1) )
This returns
(0.4999999999999981, 1.2012834963423225e-08) (0.5000000000000001, 8.106890774205374e-09)
and so both integrals are equal to within the error in the numerical integration.
5 thoughts on “Why is Kullback-Leibler divergence not a distance?”
> The surprise in seeing Y when you expected X
In expectation, though. It is an interpretation of a distribution, not of single samples.
Justin: Good point. I edited the post to insert “average.” I started to say “expected”, but then the same word would have two meanings in the same sentence. 🙂
yeah i see that. the idea of expected surprise always got me chuckling.
Hi John,
Could you please clarify how we can conclude that the exponential approximates the gamma better? I didn’t follow the argument with the plot.
Thanks,
Also worth mentioning that KL divergence does not satisfy the triangle inequality. Easy to compute by hand with exponential RVs with different parameters. | https://www.johndcook.com/blog/2017/11/08/why-is-kullback-leibler-divergence-not-a-distance/ | CC-MAIN-2018-34 | refinedweb | 646 | 56.25 |
Last edit
Changed: 1c1
< == Assorted ==
to
> = Assorted =
Removed: 3,40d2
< == Forgotten factory ==
< A colleague showed me the following piece of non-working code:
< Handler h = new Handler();
< if (h == null) {
< // code that handles an error, abbreviated for clarity
< }
< Apparently, the previous version used a factory to produce the Handler! However, now that it uses a constructor, the if statement will never be true -- which was probably not what the programmer intended.
< = Inheritance =
< Inheritance is pretty worthless if the rest of the design isn't object-oriented. This may sound pretty obvious, but I've came aboard lots of projects where people used an OO language, but programming was pretty much procedural. I can quote a project manager saying, "I don't care about classes, as long as the code is modular." I tried to tell him that the ''unit of modularization'' would obviously be a ''class''. But being an old-school C hacker, he wouldn't listen.
< You don't use it often while modelling. Why not? Because you get to code stuff while on an existing project. So, there's a whole bunch of persistent objects and you are asked to bolt on some new functionality. The functionality is hard to model in object oriented terms, it's basically a series of steps that have to be taken to transform outside messages into data. (Whether that outside message is user input or stuff coming in on a socket, doesn't really matter now).
< So you try to find an object in there. But there isn't! Yeah, a couple of methods which need a common variable or two. But not one of those really cool, really nice objects that they've got modelled in your UML book.
< This brings some complications. If it's just a bunch of steps, then you can probably only think of one method that's called, say, process(). This method will be put in a class and it'll use a bunch of private methods, because otherwise process() will grow too big.
< Later, you have another type of the same message. It's the same, only has an extra attribute because the version of the protocol was incremented. So you think, great, inheritance. I'll just inherit the object I created and then change this one method. *bzzzzzzt* wake up. You just created a class with almost all methods private. You can't inherit those. So you say, no problem, I'll make them protected. *bzzzzzzt* wrong again. Read up on your inheritance. They'll never be called, inheritance doesn't work like that. Example:
< public class JournalMessageProcessor {
<
< public JournalMessageProcessor() {
< }
< public void processMessage(JournalMessage message) {
< doComplicatedStuffStepOne();
< doComplicatedStuffStepTwo();
< doComplicatedStuffStepThree();
< doComplicatedStuffStepFour();
< }
< private method doComplicatedStuffStepOne() {
< /* do something */
< }
< private method doComplicatedStuffStepTwo() {
< /* do something */
< }
< private method doComplicatedStuffStepThree() {
< /* do something */
< }
< private method doComplicatedStuffStepFour() {
< /* do something */
< }
< }
< So, you are told that a new addition to the protocol was devised and now you have a version of the JournalMessage
< Shit I'm getting the mixed up. This needs some work.
Some Java Snippets that I want to keep. | https://www.vankuik.nl/?action=browse;diff=2;id=Java | CC-MAIN-2021-04 | refinedweb | 506 | 56.66 |
FPC New Features Trunk
Language
Delphi-like namespaces units
- Overview: Support has been added for unit names with dots.
- Notes: Delphi-compatible.
- More information: Unit names with dots create namespace symbols, which always take precedence over plain unit names during an identifier search.
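As a minimal sketch (the unit name MyCompany.Utils and the function are made up for illustration), a dotted unit name is declared like any other unit:

  unit MyCompany.Utils;  // "MyCompany" becomes a namespace symbol

  interface

  function Greeting: AnsiString;

  implementation

  function Greeting: AnsiString;
  begin
    Greeting := 'hello';
  end;

  end.

A consuming program then refers to it by its full dotted name:

  uses
    MyCompany.Utils;  // the namespace symbol takes precedence during lookup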
Dynamic array constructors
- Overview: Support has been added for constructing dynamic arrays with class-like constructors.
- Notes: Delphi-compatible.
- More information: Only constructor name 'CREATE' is valid for dynamic arrays.
- Examples: SomeArrayVar := TSomeDynArrayType.Create(value1, value2)
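Spelled out as a complete sketch (the type and variable names are illustrative), a dynamic array can be created and filled in a single statement:

  type
    TIntArray = array of LongInt;

  var
    a: TIntArray;
  begin
    a := TIntArray.Create(1, 2, 3);  // equivalent to SetLength plus element assignments
    writeln(Length(a));              // 3
  end.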
New compiler intrinsic Default
- Overview: A new compiler intrinsic Default has been added which allows you to get a correctly initialized value of the type given as a parameter. It can also be used with generic type parameters to get a default value of the type.
- Notes: Delphi-compatible.
- More information: In simple terms, the value returned by Default is initialized with zeros. The Default intrinsic is not allowed on file types, nor on records/objects/arrays containing such types (Delphi ignores file types in sub-elements).
- Examples:
type
  TRecord = record
    i: LongInt;
    s: AnsiString;
  end;

var
  i: LongInt;
  o: TObject;
  r: TRecord;
begin
  i := Default(LongInt); // 0
  o := Default(TObject); // Nil
  r := Default(TRecord); // ( i: 0; s: '')
end.
type
  generic TTest<T> = class
    procedure Test;
  end;

procedure TTest.Test;
var
  myt: T;
begin
  myt := Default(T); // will have the correct Default if class is specialized
end;
Support for type helpers
- Overview: Support has been added for type helpers which allow you to add methods and properties to primitive types. They require modeswitch TypeHelpers to be set.
- Notes: In mode Delphi it's implemented in a Delphi-compatible way using record helper for declaration, while the modes ObjFPC and MacPas use type helper. The modeswitch TypeHelpers is enabled by default only in mode Delphi and DelphiUnicode.
- More information:
- This announcement e-mail contains a detailed description of the feature
- The tests are named tthlp*.pp and are available in
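As a rough illustration of the syntax in ObjFPC mode (helper and method names made up):

```pascal
{$mode objfpc}
{$modeswitch typehelpers}

type
  TIntHelper = type helper for LongInt
    function ToText: String;
  end;

function TIntHelper.ToText: String;
begin
  Str(Self, Result);  // Self is the LongInt value the helper is invoked on
end;

var
  i: LongInt = 42;
begin
  WriteLn(i.ToText);  // a method call on a primitive type
end.
```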
Support for codepage-aware strings
- Overview: Ansistrings have been made codepage-aware. This means that every ansistring now has an extra piece of meta-information that indicates the codepage in which the characters of that string are encoded.
- Notes: Delphi-compatible (2009 and later).
- More Information:
- FPC Unicode Support
- Embarcadero white paper, specifically the sections The Many String Types and Converting Strings
Support for interfacing with C blocks functionality
- Overview: Support has been added for interfacing with Apple's blocks C-extension.
- Notes:
- As C blocks are very similar to anonymous methods in Delphi, we use a similar syntax to declare block types (with an added cdecl specifier, since blocks follow C calling conventions). This functionality works on Mac OS X 10.7 and later, and on iOS 4.0 and later; it does not work on earlier Mac OS X versions.
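The declaration side looks roughly like this (a sketch; the procedure signature is invented):

```pascal
{$modeswitch cblocks}

type
  // cdecl is required because blocks follow C calling conventions
  tblock = reference to procedure(value: LongInt); cdecl;
```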
Code generator
Class field reordering
- Overview: The compiler can now reorder instance fields in classes in order to minimize the amount of memory wasted due to alignment gaps.
- Notes: Since the internal memory layout of a class is opaque (except by querying the RTTI, which is updated when fields are moved around), this change should not affect any code. It may cause problems when using so-called "class crackers" in order to work around the language's type checking though.
- More information: This optimization is currently only enabled by default at the new optimization level -O4, which enables optimizations that may have (unforeseen) side effects. The reason for this is fairly widespread use of some existing code that relies on class crackers. In the future, this optimization may be moved to level -O2. You can also enable the optimization individually using the -Ooorderfields command line option, or by adding {$optimization orderfields} to your source file. It is possible to prevent the fields of a particular class from being reordered by adding {$push} {$optimization noorderfields} before the class' declaration and {$pop} after it.
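A sketch of the opt-out described above, for a class whose declared layout must be preserved:

```pascal
{$push}
{$optimization noorderfields}
type
  // fields stay in declaration order, e.g. for code that "cracks" the class
  TFixedLayout = class
    FSmall: Byte;
    FBig: Int64;
  end;
{$pop}
```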
Removing the calculation of dead values
- Overview: The compiler can now in some cases (which may be extended in the future) remove the calculation of dead values, i.e. values that are computed but not used afterwards.
- Notes: While the compiler will never remove such calculations if they have explicit side effects (e.g. they change the value of a global variable), this optimization can nevertheless result in changes in program behaviour. Examples include removed invalid pointer dereferences and removed calculations that would overflow or cause a range check error.
- More information: This optimization is only enabled by default at the new optimization level -O4, which enables optimizations that may have (unforeseen) side effects. You can also enable the optimization individually using the -Oodeadvalues command line option, or by adding {$optimization deadvalues} to your source file.
Shortcuts to speed up floating point calculations
- Overview: The compiler can now in some cases (which may be extended in the future) take shortcuts to optimize the evaluation of floating point expressions, at the expense of potentially reducing the precision of the results.
- Notes: Examples of possible optimizations include turning divisions by a value into multiplications with the reciprocal value (not yet implemented), and reordering the terms in a floating point expression.
- More information: This optimization is only enabled by default at the new optimization level -O4, which enables optimizations that may have (unforeseen) side effects. You can also enable the optimization individually using the -Oofastmath command line option, or by adding {$optimization fastmath} to your source file.
Constant propagation
- Overview: The compiler can now, to a limited extent, propagate constant values across multiple statements in function and procedure bodies.
- Notes: Constant propagation can cause range errors that would normally manifest themselves at runtime to be detected at compile time already.
- More information: This optimization is enabled by default at optimization level -O3 and higher. You can also enable the optimization individually using the -Ooconstprop command line option, or by adding {$optimization constprop} to your source file.
Dead store elimination
- Overview: The compiler can now, to a limited extent, remove stores to local variables and parameters if these values are not used before they are overwritten.
- Notes: The use of this optimization requires that data flow analysis (-Oodfa) is enabled. It can help in particular with cleaning up instructions that have become useless due to constant propagation.
- More information: This optimization is currently not enabled by default at any optimization level because -Oodfa is still a work in progress. You can enable the optimization individually using the -Oodeadstore command line option, or by adding {$optimization deadstore} to your source file.
Node dfa for liveness analysis
- Overview: The compiler can now perform static data flow analysis (dfa) to determine data liveness.
- Notes: This analysis is only enabled when -O3 is used.
- More information: Warnings about uninitialized variables are more exact when using dfa compared with the previous approach. However, the current dfa approach is static and non-global, so one might get false positives:
var
  b : boolean;
  i : longint;
begin
  if b then
    i:=1;
  writeln;
  if b then
    writeln(i);
end.
In this case, the compiler warns about i being uninitialized. While some cases like the one above could be detected and the warning prevented, this does not apply if b is a function. To work around this, add an assignment to i at the entry of the subroutine body.
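Applied to the example above, the workaround is simply:

```pascal
var
  b : boolean;
  i : longint;
begin
  i := 0;  // explicit initialization at the entry silences the warning
  if b then
    i := 1;
  writeln;
  if b then
    writeln(i);
end.
```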
The same applies to functions/procedures:
var
  i : longint;

procedure p;
begin
  i:=1;
end;

begin
  p;
  writeln(i);
end.
The current dfa approach works only intra-procedurally instead of globally, so the above case cannot yet be handled correctly. The compiler does not see that i is initialized. To work around this, add an assignment to i at the entry of the outer subroutine body.
Units and packages
TDBF support for Visual FoxPro files
- Overview: TDBf now has explicit support for Visual FoxPro (tablelevel 30) files, including the VarBinary and VarChar datatypes.
- Notes: TDBF version increased to 7.0.0.
- More information: The code does not support .dbc containers, only freestanding dbfs. It does not support (and quite likely will never support) .cdx index files.
Additionally, TDBf is now included in the database test suite and has received several fixes (including better Visual FoxPro codepage support).
Bufdataset supports ftAutoInc fields
- Overview: Bufdataset now has support for automatically increasing ftAutoinc field values.
TDBF, bufdataset (and descendants such as TSQLQuery) allow escaped delimiters in string expression filters
- Overview: Filters that contain string expressions should be quoted (using either ' or "). However, a filter containing the same quote character inside the string could not be parsed, as there was no support for escaping quotes in the string.
Support has been added for escaping quotes to allow this.
- Notes: Double up the delimiter within the string to escape the delimiter. Example:
Filter:='(NAME=''O''''Malley''''s "Magic" Hammer'')';
// which gives
// (NAME='O''Malley''s "Magic" Hammer')
// which will match record
// O'Malley's "Magic" Hammer
- More information: N/A
TODBCConnection (odbcconn) Support for 64 bit ODBC
- Overview: 64 bit ODBC support has been added.
- Notes: if you use unixODBC version 2.3 or higher on Linux/Unix, the unit has to be (re)compiled with -dODBCVER352 to enable 64 bit support
- More information: Only tested on Windows and Linux.
TZipper support for zip64 format
- Overview: TZipper now supports the zip64 extensions to the zip file format: > 65535 files and > 4GB file size (bug #23533). Related fixes also allow path/filenames > 255 characters.
- Notes: the zip64 format will automatically be used if the number or size of files involved exceed the limits of the old zip format. Note that there still are 2GB limits on streams as used in extraction/compression. Zip64 is unrelated to the processor type/bitness (such as x86, x64, ...).
- More information: More information on zip64:
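A minimal usage sketch (file names invented; the switch to zip64 itself happens automatically when the limits are exceeded):

```pascal
uses
  zipper;

var
  z: TZipper;
begin
  z := TZipper.Create;
  try
    z.FileName := 'backup.zip';
    z.Entries.AddFileEntry('huge-file.bin');  // entries > 4GB trigger zip64
    z.ZipAllFiles;
  finally
    z.Free;
  end;
end.
```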
Codepage-aware file system routines
- Overview: Most file-related routines from the system and sysutils units have been made codepage-aware: they now accept ansistrings encoded in arbitrary codepages as well as unicodestrings, and will convert these to the appropriate codepage before calling OS APIs.
- Notes: N/A
- More information: Detailed list of all related changes to the RTL.
SQL parser/generator improvements
- Overview: The SQL parser/generator in packages/fcl-db/src/sql has been improved:
- Notes: N/A
- Support for FULL [OUTER] JOIN; optional OUTER in LEFT OUTER and RIGHT OUTER JOIN
- support table.column notation for fields like SELECT A.B FROM MYTABLE or SELECT B FROM MYTABLE ORDER BY C.D
- Small improvements (e.g. in array datatype access) that allow the parser to parse the Firebird employee sample database DDL. Note: there is no support for isql SET TERM statements, so isql DDL dumps containing stored procedures/triggers with semicolons can still not be processed properly
- More information: N/A
Tools and miscellaneous
New Pas2jni utility
- Overview: The new pas2jni utility generates a JNI (Java Native Interface) bridge for Pascal code. This enables Pascal code (including classes and other advanced features) to be easily used from Java programs.
- Notes: The following Pascal features are supported by pas2jni: function/procedure, var/out parameters, class, record, property, constant, enum, TGuid type, pointer type, string types, all numeric types
- More information: pas2jni
(Mac) OS X/iOS
New iosxlocale unit
- Overview: The new unit called iosxlocale initialises the DefaultFormatSettings and other related locale information in the sysutils unit based on the settings in the (Mac) OS X System Preferences or the iOS Settings app.
- Notes: The clocale unit, which also works on (Mac) OS X and iOS, instead gets its information from the Unix-layer. This information depends on the contents of the LC_ALL, LC_NUMERIC etc environment variables (see man locale for more information). Adding both clocale and iosxlocale to the uses clause will cause the second in line to overwrite the settings set by the first one.
- More information: Adding this unit to the uses clause is enough to use its functionality.
New iosxwstr unit
- Overview: The new unit called iosxwstr can be used to install a widestring manager on (Mac) OS X and iOS.
- Notes: The cwstring unit fulfils the same role, based on functionality from the Unix layer. Adding both cwstring and iosxwstr to the uses clause will cause the second in line to overwrite the settings set by the first one.
- More information: Adding this unit to the uses clause is enough to use its functionality.
- svn: r29828
New compiler targets
Support for the Java Virtual Machine and Dalvik targets
- Overview: Support has been added for generating Java byte code as supported by the Java Virtual Machine and by the Dalvik (Android) virtual machine.
- Notes: Not all language features are supported for these targets.
- More information: The FPC JVM target
Support for the AIX target
- Overview: Support has been added for the AIX operating system. Both PowerPC 32bit and 64bit are supported, except that at this time the resource compiler does not yet work for ppc64.
- Notes: AIX 5.3 and later are supported.
- More information: The FPC AIX port
Support for the AArch64 target
- Overview: Support has been added for the AArch64 architecture. For now, only the Darwin (iOS) operating system is supported.
- Notes: Apple's A7 cpu (and potentially other AArch64 cpus too) does not support raising a signal when a floating point exception occurs.
- More information:
- svn: r29986 | http://wiki.freepascal.org/FPC_New_Features_Trunk | CC-MAIN-2015-11 | refinedweb | 2,154 | 51.99 |
1. Introduction
For newcomers to Spark, the running mechanism is often unclear, so conversations about it can go nowhere; for example, deployment mode and running mode are easily confused. Even experienced Spark developers who understand the running mechanism may not understand every Spark term well. Understanding Spark terminology is therefore a necessary step for Spark developers to communicate with each other. This article starts with Spark's running mechanism and then uses a WordCount case to explain the various terms in Spark.
2. Operation mechanism of spark
First of all, here is a picture from the official website illustrating the general execution framework of a Spark application on a distributed cluster. It is mainly composed of the SparkContext (driver), the cluster manager (resource manager), and the executors (the per-node execution processes). The cluster manager is responsible for unified resource management of the whole cluster. The executor is the main process of application execution, containing multiple task threads and memory space.
The main operation process of spark is as follows:
After the application is submitted with spark-submit, the SparkContext, i.e. Spark's running environment, is initialized in the location determined by the deployment mode, and the DAGScheduler and TaskScheduler are created. Based on the application code, the driver splits the whole program into multiple jobs at each action operator. For each job a DAG is built; the DAGScheduler divides the DAG into multiple stages, and each stage is internally divided into multiple tasks. The DAGScheduler passes each task set to the TaskScheduler, which is responsible for scheduling the tasks on the cluster. The relationship between stages and tasks, and how they are divided, is covered in detail later.
The driver applies for resources from the resource manager according to the resource requirements in the sparkcontext, including the number of executors and memory resources.
After receiving the request, the resource manager creates the executor processes on worker nodes that meet the conditions.
After the executor is created, it will reverse register with the driver, so that the driver can assign tasks to him to execute.
After the program finishes executing, the driver releases the requested resources back to the resource manager.
3. Understand the terms in spark
In terms of operation mechanism, let’s continue to explain the following terms,
3.1 Driver program
The driver is the Spark application we write, which creates a SparkContext or SparkSession. The driver communicates with the cluster manager and assigns tasks to the executors for execution.
3.2 Cluster Manager
It is responsible for the resource scheduling of the whole program
YARN
Spark Standalone
Mesos
3.3 Executors
An executor is an independent JVM process running on each worker node, mainly used to execute tasks. Within one executor, multiple tasks can be executed in parallel at the same time.
3.4 Job
A job is a complete processing flow in the user program; it is a logical term.
3.5 Stage
A job can contain multiple stages, which run serially. A stage boundary is triggered by shuffle operations and by actions such as reduce and save.
3.6 Task
A stage can contain multiple tasks, for example sc.textFile("/xxxx").map().filter(), where map and filter are each a task. The output of each task is the input of the next task.
3.7 Partition
A partition is a part of the data source in Spark. A complete data source is divided by Spark into multiple partitions so that the data can be sent to multiple executors to execute tasks in parallel.
3.8 RDD
RDD stands for resilient distributed dataset. In Spark, a data source can be regarded as one big RDD, and an RDD is composed of multiple partitions. The data loaded by Spark is stored in RDDs which are, of course, actually cut into multiple partitions.
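For example, the number of partitions an RDD was cut into can be inspected directly (a sketch using the standard RDD API; the file path is the one from the WordCount example later in this article):

```scala
val rdd = sc.textFile("data/spark/wc.txt", 4) // ask for at least 4 partitions
println(rdd.getNumPartitions)                 // how many partitions Spark created
println(rdd.partitions.length)                // same information
```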
So the question is, how does a spark job execute?
(1) Our spark program, also known as the driver, will submit a job to the cluster manager
(2) The cluster manager checks the local rows of data and finds the most suitable node to schedule the task
(3) Jobs will be split into different stages, and each stage will be split into multiple tasks
(4) The driver sends the task to the executor to execute the task
(5) The driver will track the execution of each task and update it to the master node, which we can check on the spark master UI
(6) When the job is completed, the data of all nodes will be aggregated to the master node again, including the average time consumption, maximum time consumption, median and other indicators.
3.9 deployment mode and operation mode
The deployment mode refers to the cluster manager, generally standalone or YARN, while the running mode refers to where the driver runs (on the cluster or on the machine that submits the task), corresponding to cluster and client modes respectively. The difference lies in where the running results and logs appear, stability, and so on.
4. Understand the terms from wordcount cases
Understand related concepts again
Job: job is triggered by action, so a job contains one action and N transform operations;
Stage: stage is a set of tasks that are divided due to shuffle operations. Stage is divided according to its wide and narrow dependencies;
Task: the smallest execution unit, because each task is only responsible for the data of a partition
Therefore, there are generally as many tasks as there are partitions. This kind of task actually performs the same action on different partitions;
Here is a wordcount program
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("yarn").setAppName("WordCount")
    val sc = new SparkContext(conf)
    val lines1: RDD[String] = sc.textFile("data/spark/wc.txt")
    val lines2: RDD[String] = sc.textFile("data/spark/wc2.txt")
    val j1 = lines1.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_)
    val j2 = lines2.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_)
    j1.join(j2).collect()
    sc.stop()
  }
}
YARN mode is widely used in production environments, so consider the YARN deployment mode. There is only one action operation in the code, collect, so there is only one job. Because of shuffles, the job is divided into three stages: the flatMap, map and reduceByKey computation on lines1 forms stage0, the same computation on lines2 forms stage1, and stage2 joins the two results and collects them. Stage2 depends on stage1 and stage0, but stage0 and stage1 run in parallel. In an actual production environment, if you want to look at the stage dependency graph, the Spark UI shows the dependency relationships clearly.
Wu Xie, little third master, is a rookie in the field of big data and artificial intelligence.
Please pay more attention
| https://developpaper.com/understand-stage-executor-driver-in-spark/ | CC-MAIN-2021-10 | refinedweb | 1,180 | 51.18 |
I'm not sure what this means...
TypeError: 'NoneType' object is not iterable
here's the code
day, season, help4, myGame = karidInn(day, season, help4, myGame)
that's where I call the def function
def karidInn(day, season, help4, localGame):
    if chooseLocation == "13":
        print chooseLocation
    while help4 == 2:
        print 'Welcome back to the best Inn in Karid'
        time.sleep(1)
        print 'we just finished renovaction'
        time.sleep(1)
        print 'and have four more types of ROOMS!!!!'
        time.sleep(1)
        print '1. basic room'
        time.sleep(1)
        print '2. deluxe room'
        time.sleep(1)
        print '3. luxure room'
        time.sleep(1)
        print '4. defitity room of luxury'
        time.sleep(1)
        print '5. cheap cheap room'
        time.sleep(1)
        print '6. exit'
        'which would you like'
        chooseRoom = raw_input()
        if chooseRoom == "1 des":
            print 'a basic room for travellers'
            time.sleep(1)
            print 'recovers 25% of total health and mana'
            time.sleep(1)
            print 'costs 100 gold pieces'
        elif chooseRoom == "2 des":
            print 'a deluxe room for merchants and sales men'
            time.sleep(1)
            print 'recovers 50% of total health and mana and 25% of energy'
            time.sleep(1)
            print 'costs 200 gold pieces'
        elif chooseRoom == "3 des":
            print 'a luxurious room fit with silver furniture'
            time.sleep(1)
            print 'recovers 75% of total health and mana and 25% of energy'
            time.sleep(1)
            print 'costs 300 gold pieces'
        elif chooseRoom == "4 des":
            print 'a room fit for a king'
            time.sleep(1)
            print 'recovers all health and mana and 25% of energy'
            time.sleep(1)
            print '300 gold pieces'
        elif chooseRoom == "5 des":
            print 'a room for a pauper'
            time.sleep(1)
            print 'costs 0 gold pieces DX'
        elif chooseRoom == "6":
            break
        elif chooseRoom == "1":
            if gold > 100:
                gold -= 100
                hpRecover = localGame.maxYourHp * 0.25
                localGame.yourHp = localGame.yourHp + hpRecover
                getMaxHp(localGame.yourHp)
                manaRecover = localGame.maxYourMana * 0.25
                localGame.yourMana = localGame.yourMana + manaRecover
                localGame.yourMana = getMaxMana(localGame.yourMana, localGame.maxYourMana)
                season, day = gettingTime(day, season)
                date, day, season = getDate(day, season)
                print date
                return day, season, help4, localGame
                break
            if gold < 100:
                print 'please have enough gold before purchase'
            else:
                print 'would you like to sleep and rest for the night?'
                print '1. yes'
                print '2. no'
                rest = raw_input()
                if rest == "1":
                    season, day = gettingTime(day, season)
                    date, day, season = getDate(day, season)
                    print 'the date is now ' + date
                    help4 += 1
                    return day, season, help4, localGame
here's the actual def code
PLEASE HELP | https://www.daniweb.com/programming/software-development/threads/268729/typeerror-nonetype-object-is-not-iterable | CC-MAIN-2018-05 | refinedweb | 402 | 68.77 |
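In case it helps, here's a tiny stand-alone version that raises the same error; the stub function is made up, but it returns the same way karidInn does (only some branches reach a return statement):

```python
def karid_inn_stub(rest):
    """Mimics karidInn: only some branches reach a return statement."""
    if rest == "1":
        return 1, 2
    # every other path falls off the end, implicitly returning None


a, b = karid_inn_stub("1")        # fine: a tuple comes back
try:
    a, b = karid_inn_stub("2")    # unpacking None raises TypeError
except TypeError as e:
    print("TypeError:", e)
```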
Components and supplies
Apps and online services
About this project
I want to share with you a little story about a challenging experiment, one that shows how open source technologies do, indeed, deliver. My time frame was 6 weeks. In six weeks, I was supposed to engineer a cubic structure allowing free movement of its 550 pieces, where the movement and interaction scenarios were supposed to reveal the concept of how you see yourself within the city. The piece moves its triangles (8 rows of 17 triangles in each cube face) in a sync motion, and through those motions, I needed to design scenarios that reveal citizens' interaction with the city and oneself.
Each facade side was 3 x 3 meters, and the project was displayed in a public place. Creating an interaction that depends on moving when you move, which was the initial scenario designed for the cube, was not going to be meaningful in the crowd. The other option was to provide an individual experience, so it is you, amongst the crowd, against this reflective creature, which at times you can't control, at other times makes you the center of the movement, and there are moments where you look at others thinking that you see the same image, while you are getting exactly opposite views.
Designed with this vision, the 4 sides of the cube had different movement scenarios that revealed different meanings.
Installation was challenging in every single phase. For example, servo motors had to be oriented at 90 degrees before the brackets were installed, in order to align the visual and relative horizontal positioning of the motors. First, motors were aligned and marked with a line extending from the shaft to the motor body, then the brackets were installed. While that worked, in some cases the team installing the piece would move the shaft orientation while attaching the bracket to the motor, and it was so tricky to troubleshoot that I decided to have the motors powered and receiving a 90-degree signal while the brackets were installed.
Servos are an interesting kind of motor: with less than 1 amp of power consumption you can end up with versatile angular movement. In favour of time and budget, and given the lack of trusted fabricators, the decision was made to use servos with the standard tilting brackets, the type used for robotic arms. In stores, I could only find a dozen of those brackets in the whole country, so even testing a row of these working together wasn't possible. The most available power supplies in town were 24V, while servos need 5.5V-6V, and stepping the voltage down isn't a good option in terms of performance; mind you, the price and reliability of batteries made them a non-option too. Winston, a China-based company, custom manufactured 6V-20A power supplies on my request (those were great IP7 supplies). Each facade used 5 supplies, with a total of 100 DC amps.
A tree connection was designed to provide power from both sides to compensate for the mild voltage drop. Each motor was connected to a small pref-board fabricated small connector board (picture below) providing a line of 17 connectors per each row in each facade (you do the math!) 1.6mm wires were used for power, while signal wires were connected to an CAT8 data cable.
DC power connectors, each costs around 15 EUR in Jordan, with a tree connection, this meant 2 plugs per each row; due to budget restrictions, the tree connection was designed, with an end terminal of an MK plug for AC installations. Looks weird, but it worked well!
Controlling 134 servos at once offered different options: a) use the same signal for multiple servos, resulting in identical servo movement across multiple Arduinos; b) use multiple servos as master and slave (my least favourite option); or c) use a driver, for complete control over each individual motion. This was the most logical and technically sane option, and the one I eventually used. The PCA9685, from Adafruit, uses the I2C protocol and controls the PWM signal to each motor individually. The shield hosts 16 motors and you can chain as many shields as you want, which provides a versatile way to move a large number of motors synchronously, in a very smooth way. Each line of 17 motors was connected to a shield, then all shields were connected to a single Arduino, and literally that's it. The one important tweak that killed the jitter was making sure the frequency set in the code matches the motor's frequency (the sample code uses 60Hz, which corresponds to most analogue servos), so mind that in your code. Another trick, which I unfortunately didn't have time to try but would have loved to explore, is to use Timer1 instead of Timer0; I believe this would have allowed better speed control, but, well, maybe I can test it for the next project, because you know... you build 544 servos every day :)
I respect the boldness of the architects in how they wanted to move further with an idea that is beyond their comfort zone. I immensely respect open source technologies, without which none of us, would have witnessed the piece in action. It was such an experience.
Team: Most Arduino code has been collaboratively produced with Loay Ghannam (a big shout out to his fast work!). Thanks to the wonderful team of volunteers and friends who have actually soldered, installed and built this piece: Zaid Marji (without him, this piece wouldn't have been built I guess), Laila Atalla (my legal consultant friend, who also is a great solderer), Zaid Saleh (who got soldering iron in his eyes and continued to work!), May Abrash and Abdalla Hamad. A great thanks to everyone.
Well, I had this account since 5 years, as an anonymous Maya, but actually am Moushira :)
Design Technologist
Code
Arduino Code to control 8 Adafruit servo drivers to adjust them to 90 degreesArduino
#include "Arduino.h"
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

//numbering works in binary. Check here if you get confused
Adafruit_PWMServoDriver pwm0 = Adafruit_PWMServoDriver(0x40); //in this one you don't solder the board
Adafruit_PWMServoDriver pwm1 = Adafruit_PWMServoDriver(0x41); //in this one you solder one register. Check adafruit tutorial for details
Adafruit_PWMServoDriver pwm2 = Adafruit_PWMServoDriver(0x42);
Adafruit_PWMServoDriver pwm7 = Adafruit_PWMServoDriver(0x43);
Adafruit_PWMServoDriver pwm6 = Adafruit_PWMServoDriver(0x44);
Adafruit_PWMServoDriver pwm5 = Adafruit_PWMServoDriver(0x45);
Adafruit_PWMServoDriver pwm4 = Adafruit_PWMServoDriver(0x46);
Adafruit_PWMServoDriver pwm3 = Adafruit_PWMServoDriver(0x47);

int C = 500; //delay variable
int x = 250; //through testing we found pwm 250 to look visually as 90 degrees

void setup() {
  pwm0.begin();
  pwm0.setPWMFreq(50); //add frequency based on your motor datasheet, otherwise, do NOT complain of the jitter :)
  pwm1.begin();
  pwm1.setPWMFreq(50);
  pwm2.begin();
  pwm2.setPWMFreq(50);
  pwm3.begin();
  pwm3.setPWMFreq(50);
  pwm4.begin();
  pwm4.setPWMFreq(50);
  pwm5.begin();
  pwm5.setPWMFreq(50);
  pwm6.begin();
  pwm6.setPWMFreq(50);
  pwm7.begin();
  pwm7.setPWMFreq(50);
}

void loop() {
  for (int i = 0; i <= 15; i++) { //according to the driver library i is the number of motors
    pwm7.setPWM(i, 0, x);
    pwm6.setPWM(i, 0, x);
    pwm5.setPWM(i, 0, x);
    pwm4.setPWM(i, 0, x);
    pwm3.setPWM(i, 0, x);
    pwm2.setPWM(i, 0, x);
    pwm1.setPWM(i, 0, x);
    pwm0.setPWM(i, 0, x);
    delay(50);
  }
  delay(C);
}
Schematics
Author
Published onNovember 16, 2017
Members who respect this project
you might like | https://create.arduino.cc/projecthub/Maya/in-servo-we-trust-6725f1 | CC-MAIN-2019-13 | refinedweb | 1,247 | 57.3 |
Learning Objectives
Now that you’ve seen how to publish platform events, how do you subscribe to them to be notified of the latest news or of the shipment of a package? On the Salesforce Platform, Apex triggers, processes, flows. The
empApi Lightning component and Visualforce apps receive event notifications through CometD. In an external app, you subscribe to events using CometD as well.
You’ve probably used Apex triggers before, to perform actions based on database events. With platform events, the process is similar. You simply write an after insert Apex trigger on the event object to subscribe to incoming events. Triggers provide an autosubscription mechanism in Apex. No need to explicitly create and listen to a channel. Triggers receive event notifications from various sources—whether they’re published through Apex or APIs.
Platform events support only after insert triggers. The after insert trigger event corresponds to the time after a platform event is published. After an event message is published, the after insert trigger is fired.
To create a platform event trigger, use the Developer Console.
- Click the Setup icon, select Developer Console, and click File | New | Apex Trigger.
- Provide a name and choose your event for the sObject, and click Submit.
The Developer Console automatically adds the after insert event in the trigger template. Also, you can conveniently create a trigger from the event's definition page in Setup, in the Triggers related list, but you have to specify the after insert keyword.
The following example shows a trigger for the Cloud News event. It iterates through each event and checks whether the news is urgent through the Urgent__c field. If the news is urgent, the trigger creates a case to dispatch a news reporter and adds the event location to the case subject.
// Trigger for listening to Cloud_News events.
trigger CloudNewsTrigger on Cloud_News__e (after insert) {
    // List to hold all cases to be created.
    List<Case> cases = new List<Case>();
    // Get queue Id for case owner
    Group queue = [SELECT Id FROM Group WHERE Name='Regional Dispatch' AND Type='Queue'];
    // Iterate through each notification.
    for (Cloud_News__e event : Trigger.New) {
        if (event.Urgent__c == true) {
            // Create Case to dispatch new team.
            Case cs = new Case();
            cs.Priority = 'High';
            cs.Subject = 'News team dispatch to ' + event.Location__c;
            cs.OwnerId = queue.Id;
            cases.add(cs);
        }
    }
    // Insert all cases corresponding to events received.
    insert cases;
}
Set Up Debug Logging
Unlike triggers on standard or custom objects, triggers on platform events don’t execute in the same Apex transaction as the one that published the event. The trigger runs in its own process under the Automated Process entity, which is a system user. As a result, debug logs corresponding to the trigger execution are created by the Automated Process entity and aren’t available in the Developer Console. To collect platform event trigger logs, add a trace flag entry for the Automated Process entity in Setup.
- From Setup, enter Debug Logs in the Quick Find box, then click Debug Logs.
- Click New.
- For Traced Entity Type, select Automated Process.
- Select the start date and expiration date for the logs you want to collect.
- For Debug Level, enter * and click Search.
- Select a predefined debug level, such as SFDC_DevConsole or click New to create your own debug level.
- Click Save.
Things to Note About Platform Event Triggers

Order of Event Processing
A trigger processes platform event notifications sequentially in the order they’re received. The order of events is based on the event replay ID. An Apex trigger can receive a batch of events at once. The order of events is preserved within each batch. The events in a batch can originate from one or more publishers.
Asynchronous Trigger Execution
A platform event trigger runs in its own process asynchronously and isn’t part of the transaction that published the event. As a result, there might be a delay between when an event is published and when the trigger processes the event. Don't expect the result of the trigger’s execution to be available immediately after event publishing.
Automated Process System User
Because platform event triggers don’t run under the user who executes them (the running user) but under the Automated Process system user, we set the owner ID field explicitly in our CloudNewsTrigger example. We used the ID of a sample user queue called Regional Dispatch for the trigger example. If you create a Salesforce record with an OwnerId field in the trigger, such as a case or opportunity, explicitly set the owner ID. For cases and leads, you can, alternatively, use assignment rules to set the owner.
Also, system fields of records created or updated in the event trigger, such as CreatedById and LastModifiedById, reference the Automated Process entity. Similarly, the Apex UserInfo.getUserId() statement returns the Automated Process entity.
Like standard or custom object triggers, platform event triggers are subject to Apex governor limits.
Apex Trigger Limitations
Platform event triggers share many of the same limitations of custom and standard object triggers. For example, you can’t make Apex callouts synchronously from triggers.
Trigger Batch Size
The batch size in a platform event trigger is 2,000 event messages, which is larger than the Salesforce object trigger batch size of 200. The batch size corresponds to the size of the Trigger.New list. You can modify the batch size of a platform event trigger. For more information, see Configure the User and Batch Size for Your Platform Event Trigger in the Platform Events Developer Guide.
Subscriptions Related List on the Event Definition Page
You can view the state of all event triggers on the Platform Event Definition Detail page in Setup. Under Subscriptions, each active trigger is listed along with execution information and the state. Information includes the replay ID of the last published and last processed events. The state indicates whether the trigger is running or is disconnected from the subscription because of unrecoverable errors or insufficient permissions. The Error state is reached only when a trigger has been retried the maximum number of times. The following screenshot shows the Subscriptions related list on the Cloud News event detail page.
Manage an Event’s Apex Trigger Subscribers
Resume a suspended subscription where it left off, starting from the earliest event message that is available in the event bus. If you want to bypass event messages that are causing errors or are no longer needed, you can resume the subscription from the tip, starting from new event messages.
To manage a trigger subscription, in the Subscriptions related list, click Manage next to the Apex trigger.
- To suspend a running subscription, click Suspend.
- To resume a suspended subscription, starting from the earliest event message that is available in the event bus, click Resume.
- To resume a suspended subscription, starting from new event messages, click Resume from Tip.
Test Platform Event Triggers
Ensure that your platform event trigger is working properly by adding an Apex test. Before you can package or deploy any Apex code (including triggers) to production, your Apex code must have tests. To publish platform events in an Apex test, enclose the publish statements within Test.startTest and Test.stopTest statements.
// Create test events
Test.startTest();
// Publish events
Test.stopTest();
// Perform validation here
In a test context, the publish method call queues up the publish operation. The Test.stopTest() statement causes the event publishing to be carried out. After Test.stopTest(), perform your validations.
Here is an example of a test class for our Cloud_News event and its associated trigger. Publishing the event causes the associated trigger to fire. After Test.stopTest(), the test verifies that the publishing was successful by inspecting the value returned by isSuccess() in Database.SaveResult. Also, the test queries the case that the trigger created. If the case record is found, the trigger executed successfully, and the test passes.
@isTest
public class PlatformEventTest {
    @isTest static void test1() {
        // Create test event instance
        Cloud_News__e newsEvent = new Cloud_News__e(
            Location__c='Mountain City',
            Urgent__c=true,
            News_Content__c='Test message.');

        Test.startTest();
        // Call method to publish events
        Database.SaveResult sr = EventBus.publish(newsEvent);
        Test.stopTest();

        // Perform validation here
        // Verify that the publish was successful
        System.assertEquals(true, sr.isSuccess());

        // Check that the case that the trigger created is present.
        List<Case> cases = [SELECT Id FROM Case];

        // Validate that this case was found.
        // There is only one test case in test context.
        System.assertEquals(1, cases.size());
    }
}
Lightning apps can use the empApi Lightning Web or Aura component to subscribe to events in the app.
Subscribe in a Lightning Web Component
To use the empApi methods in your Lightning web component, import the methods from the lightning/empApi module as follows.
import { subscribe, unsubscribe, onError, setDebugFlag, isEmpEnabled } from 'lightning/empApi';
Then call the imported methods in your JavaScript code.
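For instance, a subscription set up when the component connects might look like the following sketch. The component and callback names here are invented for illustration; the channel is the Cloud News event from earlier, and a replay ID of -1 requests only new event messages (-2 would replay all stored ones).

```javascript
import { LightningElement } from 'lwc';
import { subscribe, onError } from 'lightning/empApi';

export default class CloudNewsSubscriber extends LightningElement {
    channelName = '/event/Cloud_News__e';
    subscription = {};

    connectedCallback() {
        // Subscribe to the platform event channel, starting from new events only
        subscribe(this.channelName, -1, (message) => {
            // message.data.payload holds the event fields, e.g. Location__c
            console.log('Received event: ', JSON.stringify(message));
        }).then((response) => {
            // Keep the subscription object so it can be passed to unsubscribe later
            this.subscription = response;
        });

        // Register a handler for streaming transport errors
        onError((error) => {
            console.error('empApi error: ', JSON.stringify(error));
        });
    }
}
```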
For an example of how to use the lightning/empApi module and a complete reference, see the lightning-emp-api documentation in the Lightning Component Library.
Subscribe in an Aura Component
To use the empApi methods in your Aura component, add the lightning:empApi component inside your custom component and assign an aura:id attribute to it.
<lightning:empApi aura:
Then in the client-side controller, add functions to call the component methods.
For an example of how to use the lightning:empApi component and a complete reference, see the lightning:empApi documentation in the Lightning Component Library.
To start a flow when a platform event message is received, create a platform event–triggered flow. From the Start element, choose a platform event whose event messages trigger the flow to run.
As you build the flow, you can use the field values from the platform event message by referencing the $Record global variable.
Alternatively, you can subscribe to a platform event in flows by using a Pause element. Instead of starting a flow when a platform event message is received, that event message causes a paused flow interview to resume. For example, here’s a Pause element that pauses the flow until Salesforce receives a Cloud News event message. The flow resumes only if the event’s location matches {!contact.MailingCity}. The {!contact} record variable stores values for a contact record.
External apps subscribe to platform events with CometD and perform long polling. The empApi Lightning component and Visualforce pages, which run on the platform, can use CometD as well and are considered CometD clients. CometD is a scalable HTTP-based event routing bus that uses an AJAX push technology pattern known as Comet. It implements the Bayeux protocol. Long polling, also called Comet programming, allows emulation of an information push from a server to a client. Similar to a normal poll, the client connects and requests information from the server. However, instead of sending an empty response if information isn't available, the server holds the request and waits until information is available (an event occurs).
Salesforce provides a Java library, EMP Connector, which implements all the details of connecting to CometD and listening on a channel. You can use EMP Connector to subscribe easily to platform events. EMP Connector hides the complexity of subscribing to events. For more information about EMP Connector, check out the Java client example in the Streaming API Developer Guide.
The process of subscribing to platform event notifications through CometD is similar to subscribing to PushTopic events or generic events. The only difference is the channel name. The platform event channel name is case-sensitive and is in the following format.
/event/<EventName>__e
For example, if you have a platform event named Cloud News, provide this channel name when subscribing.
/event/Cloud_News__e
Specify the API version at the end of the CometD URL, as follows.
// Connect to the CometD endpoint
cometd.configure({
    url: 'https://<Salesforce_URL>/cometd/48.0/',
    requestHeaders: { Authorization: 'OAuth <Session_ID>' }
});
Platform Event Message in JSON Format
The message of a delivered platform event looks similar to the following example for a Cloud News event.
{ "data": { "schema": "_2DBiqh-utQNAjUH78FdbQ", "payload": { "CreatedDate": "2017-04-27T16:50:40Z", "CreatedById": "005D0000001cSZs", "Location__c": "San Francisco", "Urgent__c": true, "News_Content__c": "Large highway is closed due to asteroid collision." }, "event": { "replayId": 2 } }, "channel": "/event/Cloud_News__e" }
The schema field in the event message contains the ID of the platform event schema (in this example, "schema": "_2DBiqh-utQNAjUH78FdbQ"). The schema is versioned—when the schema changes, the schema ID changes as well.
To determine if the schema of an event has changed, retrieve the schema through REST API. Use the schema ID by performing a GET request to this REST API resource: /vXX.X/event/eventSchema/Schema_ID. Alternatively, you can retrieve the event schema by supplying the event name to this endpoint: /vXX.X/sobjects/Platform_Event_Name__e/eventSchema. For more information, see the REST API Developer Guide.
Now that you’ve seen how to use Platform Events on the Salesforce platform and in external apps, the possibilities are endless! Use Platform Events for any number of applications and integrations, such as processing business transactions or engaging in proactive customer service. With Platform Events, you adopt an event-based programming model and enjoy the benefits of event-based software architecture.
Resources
- Trailhead: Build an Instant Notification App
- Streaming API Developer Guide
- Platform Events Developer Guide: Example: Subscribe to and Replay Events Using a Java Client (EMP Connector)
- CometD Documentation
- Platform Events Developer Guide: Platform Event Allocations
- Platform Events Developer Guide: Retry Event Triggers with EventBus.RetryableException
- Lightning Platform Cookbook: Running Case Assignment Rules from Apex
- REST API Developer Guide: Platform Event Schema by Schema ID
- REST API Developer Guide: Platform Event Schema by Event Name | https://trailhead.salesforce.com/en/content/learn/modules/platform_events_basics/platform_events_subscribe | CC-MAIN-2021-43 | refinedweb | 2,246 | 56.05 |
C# LINQ Background Topics
Open Source Your Knowledge, Become a Contributor
Technology knowledge has to be shared and made accessible for free. Join the movement.
Extension Methods
Why learn about extension methods?
All LINQ methods are extension methods, defined in the System.Linq namespace.
What are extension methods?
Extension methods in C# enable the addition of new methods to a pre-existing type, without modifying the original source code for that type. They are similar in purpose (though different in implementation) to mixins in other languages. Extension methods can be very useful for adding functionality to classes or interfaces found in a third-party library, or even to classes in the .NET Framework libraries.
Example extension method
This is what an extension method declaration looks like:
namespace IntExtensions
{
    public static class CoolExtensionsForInt
    {
        public static string Growl(this int num)
        {
            return $"G{new string('r', num)}";
        }
    }
}
The name of the class isn't important, nor is the name of the method. The important elements are:
- The class and method must both be static
- The first parameter to the method must be of the type that is being extended (int in this example)
- The first parameter to the method must be prefaced with the keyword this
Calling an extension method
The above extension method can be called as if it were a member of the int type. For example:
using IntExtensions;

...

// Prints "Grrrrrrr" to the console
Console.WriteLine(7.Growl());
Notice that the extension method is defined in the IntExtensions namespace, and so that namespace must be included with a using directive before the extension method can be invoked.
Exercise
In this exercise, you must add an extension method, SayHello(), to the built-in string type. The SayHello() method should return the string: "Hello, <subject>!"
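One possible solution sketch, assuming the string the method is called on is the "subject" (the namespace and class names below are arbitrary):

```csharp
namespace StringExtensions
{
    public static class GreetingExtensions
    {
        // Extension method on the built-in string type
        public static string SayHello(this string subject)
        {
            return $"Hello, {subject}!";
        }
    }
}

// Usage (with `using StringExtensions;` in scope):
// "World".SayHello() returns "Hello, World!"
```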
Hello list,
Some time ago Stephane Raynaud answered my question on how to produce a stickplot using quiver:
Since then, I have been forwarding that to several people interested in producing such a plot.
Maybe it is a good idea to add an example at the Gallery with the quiver as "stickplot"? Or it is too obvious?
Anyway, here is my suggestion:
# -*- coding: utf-8 -*-
""" Stephane Raynaud """
import matplotlib.pyplot as plt
import numpy as np
import datetime as dtime
from matplotlib.dates import date2num
""" fake dates starting now """
x = np.arange(100, 110, 0.1)
start = dtime.datetime.now()
dates = [start + dtime.timedelta(days=n) for n in range(len(x))]
""" dummy u, v """
u = np.sin(x)
v = np.cos(x)
fig, ax = plt.subplots(1, 1, figsize=(16,6))
qiv = ax.quiver(date2num(dates), [[0]*len(x)], u, v, headlength=0, headwidth=0, headaxislength=0 )
key = ax.quiverkey(qiv, 0.25, 0.75, 0.5, "0.5 N m$^{-2}$", labelpos='N', coordinates='axes' )
plt.setp( ax.get_yticklabels(), visible=False)
plt.gca().xaxis_date()
plt.show() | https://discourse.matplotlib.org/t/stickplot-quiver-at-the-gallery/14114 | CC-MAIN-2019-51 | refinedweb | 178 | 62.95 |
Compared to writing your own SQL to access data, you can become miraculously more productive by using Entity Framework (EF). Unfortunately, several traps that are easy to fall into have given it a reputation for performing poorly; but it doesn’t have to be this way! The performance of Entity Framework may once have been inherently poor but isn’t any more if you know where the landmines are. In this article we’ll look at where these ‘traps’ are hiding, examining how they can be spotted and what you can do about them.
We’ll use examples from a simplified school management system. There’s a database with two tables for Schools and their Pupils, and a WinForms app using an EF code-first model of this database to fetch data in a variety of inefficient ways.
To play along at home, you can grab the code for most of these examples from – setup instructions are included in the readme.
Database access
By far the biggest performance issues you’re likely to encounter are of course around accessing the database. These few are the most common.
Being too greedy with Rows
Sample application: button 1
At its heart, Entity Framework is a way of exposing .NET objects without actually knowing their values, but then fetching / updating those values from the database behind the scenes when you need them. It’s important to be aware of when EF is going to hit the database – a process called materialization.
Let’s say we have a context db with an entity db.Schools. We might choose to write something like:
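The original code sample isn't reproduced here, but the anti-pattern being described looks something like this sketch (note that line 2 is where .ToList() appears):

```csharp
var schools = db.Schools;                                   // no database access yet
var allSchools = schools.ToList();                          // materializes EVERY row in Schools
var newYorkSchools = allSchools
    .Where(s => s.City == "New York")                       // filtering happens in .NET, not SQL
    .ToList();
```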
On line 2 when we do .ToList(), Entity Framework will go out to the database to materialize the entities, so that the application has access to the actual values of those objects, rather than just having an understanding of how to look them up from the database. It’s going to retrieve every row in that Schools table, then filter the list in .NET. We can see this query in ANTS Performance Profiler:
It would be far more efficient to let SQL Server (which is designed for exactly this kind of operation and may even be able to use indexes if available) do the filtering instead, and transfer a lot less data.
We can do that either with …
… or even …
The ‘N+1 Select’ problem: Minimising the trips to the database
Sample application: button 2
This is another common trap caused by misunderstanding when objects will be materialized.
In our database, every Pupil belongs to a School, referencing the Schools table using a foreign key on the SchoolId column. Equivalently, in our EF model, the Schools object has a virtual property Pupils.
We want to print a list of how many pupils attend each school:
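A sketch of the code being described (the Name property on School is assumed for illustration):

```csharp
var schools = db.Schools.Where(s => s.City == "New York").ToList();   // 1 query
foreach (var school in schools)
{
    // Accessing school.Pupils lazy-loads that school's pupils: 1 extra query per school
    Console.WriteLine("{0}: {1} pupils", school.Name, school.Pupils.Count);
}
```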
If we look in ANTS at what happens when this code runs, we see a query run once to get a list of schools in New York, but another query is also run 500 times to fetch Pupil information.
This happens because by default, EF uses a loading strategy called Lazy Loading, where it doesn’t fetch any data associated with the virtual Pupils property on the School object when the first query is run. If you subsequently try to access data from one of the related Pupil objects, only then will it be retrieved from the database. Most of the time that’s a good idea because otherwise any time you accessed a School object, EF would bring back all related Pupil data regardless of whether it were needed. But in the example above, Entity Framework makes an initial request to retrieve the list of Schools, and then has to make a separate query for each of the 500 Schools returned to fetch the pupil data.
This leads to the name “N+1 select problem”, because N plus 1 queries are executed, where N is the number of objects returned by the original query. If you know that you’re definitely going to want the Pupil data, you’d be better doing things differently – especially if you want it for a large number of School objects. This is particularly important if there is high latency between your application and the database server.
There are a couple of different approaches available. The first is to use the Eager Loading data access strategy, which fetches the related data in a single query when you use an Include() statement. Since the Pupils data would be in memory, there would be no need for Entity Framework to hit the database again. To do this your first line would read:
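That is, something along these lines (Include with a lambda requires `using System.Data.Entity;`):

```csharp
var schools = db.Schools
    .Include(s => s.Pupils)               // eager-load pupils in the same query
    .Where(s => s.City == "New York")
    .ToList();
```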
This is an improvement because we don’t run the 500 extra queries, but the downside is we’re now bringing back all pupil data for those schools just to see how many there are. Sometimes you don’t need to bring back this additional data. For example, if we just wanted to get a list of Schools in New York with more than 100 pupils (but not caring about exactly how many pupils there are), a common mistake would see us write:
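The "common mistake" version might look like this sketch, where the pupil count is evaluated in memory and triggers a lazy load per school:

```csharp
var bigSchools = db.Schools
    .Where(s => s.City == "New York")
    .ToList()                              // materialize schools first
    .Where(s => s.Pupils.Count > 100)      // lazy-loads pupils: one query per school
    .ToList();
```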
We could use the same technique as above by adding an Include() statement, as follows:
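Roughly:

```csharp
var bigSchools = db.Schools
    .Include(s => s.Pupils)                // one query, but brings back all pupil rows
    .Where(s => s.City == "New York")
    .ToList()
    .Where(s => s.Pupils.Count > 100)
    .ToList();
```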
But since we don’t actually need to know how many pupils there are, it would be far more efficient to just do:
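That is, keep the count inside the LINQ-to-Entities query so it becomes part of the generated SQL and no pupil rows are transferred:

```csharp
var bigSchools = db.Schools
    .Where(s => s.City == "New York" && s.Pupils.Count > 100)   // count done by SQL Server
    .ToList();
```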
We can further improve performance by specifically selecting only the columns we need, which I will now describe.
Being too greedy with Columns
Sample application: button 3
Let’s say we want to print the name of every pupil at a certain SchoolId. We can do:
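A sketch of the code in question (schoolId is a variable holding the school's key):

```csharp
var pupils = db.Pupils
    .Where(p => p.SchoolId == schoolId)
    .ToList();                             // selects every column, including Picture

foreach (var pupil in pupils)
{
    Console.WriteLine("{0} {1}", pupil.FirstName, pupil.LastName);
}
```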
By taking a look in ANTS at the query which has been run, we can see that a lot more data than the first and last names (FirstName and LastName) has been retrieved.
The problem here is that, at the point when the query is run, EF has no idea what properties you might want to read, so its only option is to retrieve all of an entity’s properties, i.e. every column in the table. That causes two problems:
- We’re transferring more data than necessary. This impacts everything from SQL Server I/O and network performance, through to memory usage in our client application. In the example here it’s particularly frustrating because although we just want the small FirstName and LastName strings, the Pupils table includes a large Picture column which is retrieved unnecessarily and never used.
- By selecting every column (effectively running a “Select * From…” query), we make it almost imÂpossÂible to index the database usefully. A good indexing strategy involves considering what columns you frequently match against and what columns are returned when searching against them, along with judgements about disk space requirements and the additional performance penalty indexes incur on writing. If you always select all columns, this becomes very difficult.
Fortunately we can tell Entity Framework to select only certain specific columns.
We can either select a dynamic object:
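For example, projecting into an anonymous type so only the two name columns appear in the generated SELECT:

```csharp
var names = db.Pupils
    .Where(p => p.SchoolId == schoolId)
    .Select(p => new { p.FirstName, p.LastName })   // only these columns are queried
    .ToList();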
Or we could choose to define a separate class, sometimes called a DTO (Data Transfer Object), to select into:
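Along these lines (the DTO name is arbitrary):

```csharp
public class PupilNameDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// ...

var names = db.Pupils
    .Where(p => p.SchoolId == schoolId)
    .Select(p => new PupilNameDto { FirstName = p.FirstName, LastName = p.LastName })
    .ToList();
```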
Another option, if you know there is data that you will never need to access from your application, is to simply remove that property from your model – EF will then happily just select the columns it knows about. You need to be careful, since if you remove a non-NULLable SQL Server column without a default value from your EF model, any attempt to modify data will result in a SqlException.
It’s also important to consider that selecting specific objects comes at the expense of code readability, so whether or not you decide to use it is a trade-off between readability and possible performance issues relating to the two problems discussed above.
If the unnecessary columns that you’re transferring are only small string columns (like perhaps a telephone number), or very low data volumes, and if returning those columns doesn’t negate use of indexes, then the performance impact will be minor, and it’s worth taking the hit in return for better readability and reduced effort. However, in this example, we’re retrieving a large number of images that we don’t need, and so the optimization is probably worthwhile.
Mismatched data types
Sample application: button 4
Data types matter, and if not enough attention is paid to them, even disarmingly simple database queries can perform surprisingly poorly. Let’s take a look at an example that will demonstrate why. We want to search for Pupils with zip code 90210. Easy:
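The innocent-looking query would be something like:

```csharp
var pupils = db.Pupils
    .Where(p => p.PostalZipCode == "90210")   // sent as an NVARCHAR parameter by default
    .ToList();
```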
Unfortunately it takes a very long time for the results to come back from the database. There are several million rows in the Pupils table, but there’s an index covering the PostalZipCode column which we’re searching against, so it should be quick to find the appropriate rows. Indeed the results are returned instantly if we directly query the database from SQL Server Management Studio using
SELECT FirstName, LastName FROM Pupils p WHERE p.PostalZipCode = ‘90210’
Let’s look at what the application’s doing.
The query generated takes a long time to run, but looking at it, it seems perfectly reasonable. To understand why a query is slow, you need to look at its execution plan to see how SQL Server decided to execute the query. We can do that inside ANTS by hitting the Plan button.
In the query plan, we start by looking for the expensive operations – in this case an Index Scan. What does that mean? A scan operation occurs when SQL Server has to read every page in the index, applying the search condition and outputting only those rows that match the search criteria (in this case, PostalZipCode = ‘90210’). In terms of performance, an Index Scan and a Table Scan are equivalent, and both cause significant IO because SQL Server reads the whole table or index. This is in contrast to an Index Seek operation, where an index is used to navigate directly to those pages that contain precisely the rows in which we are interested.
The query plan is showing us that we’re using an ‘Index Scan’ operation instead of an ‘Index Seek’, which is slow for the amount and characteristics of the data we have (there are around 30 million rows, and the PostalZipCode column is quite selective). So why is SQL Server choosing to use an Index Scan? The clue lies in the red warning in the bottom left:
So [Extent1].[PostalZipCode] was implicitly converted to NVARCHAR(20). If we look back at the complete query which was run we can see why. Entity Framework has declared the variable as NVARCHAR, which seems sensible as strings in .NET are Unicode, and NVARCHAR is the SQL Server type which can represent Unicode strings.
But looking at the Pupils table we can see that the PostalZipCode column is VARCHAR(20). Why is this a problem? Unfortunately, VARCHAR has a lower Data Type Precedence than NVARCHAR. That means that converting the wide NVARCHAR data type to the narrower VARCHAR can’t be done implicitly because it could result in data loss (as NVARCHAR can represent characters which VARCHAR can’t). So to compare the @p__linq_0 NVARCHAR parameter to the VARCHAR column in the table, SQL Server must convert every row in the index from VARCHAR to NVARCHAR. Thus it is having to scan the entire index.
Once you’ve tracked this down, it’s easy to fix. You just need to edit the model to explicitly tell Entity Framework to use VARCHAR, using column annotation.
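In an EF6 code-first model, the annotation on the property looks like this (it lives in System.ComponentModel.DataAnnotations.Schema):

```csharp
[Column(TypeName = "varchar")]    // send parameters as VARCHAR, matching the column
public string PostalZipCode { get; set; }
```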
After making this trivial change, the parameter will be sent to SQL Server as VARCHAR, so the data type will match the column in the Pupils table, and an Index Seek operator can be used.
Generally, these data type mismatches don’t happen if EF creates the database for you and is the only tool to modify its schema. Nevertheless, as soon as someone manually edits either the database or the EF model, the problem can arise. Also, if you build a database externally from EF (such as in SSMS), and then generate an EF model of that database using the Reverse Engineer Code First capability in EF power tools, then it doesn’t apply the column annotation.
These days almost all languages use Unicode to represent string objects. To lessen the likelihood of this kind of issue (not to mention other bugs!) I’d always advocate just using NVARCHAR / NCHAR in the database. You pay a small extra cost in disk space, but that will probably pay for itself with the very first avoided bug. Naturally the internet has plenty of healthy debate on this topic.
Missing indexes
Sample application: button 5
We might want to find all Pupils who live in New York. Easy:
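That is:

```csharp
var pupils = db.Pupils
    .Where(p => p.City == "New York")
    .ToList();
```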
Actually not so easy. We can see that the generated query has taken a while to run.
Luckily this is a fairly easy issue to track down. Because there’s a long-running query, we’ll want to take a look at the execution plan to understand why that query ran slowly. We can see that the most expensive operation is the Table Scan. This means that SQL Server is having to look at every row in the table, and it’s typical to see that take a long time.
The good news is this can be easily improved. If you’re relying on EF migrations to manage your database schema, you can add a multi-column [Index] attribute which includes the City, FirstName, and LastName properties of the Pupil class. This tells EF that an extra index is needed and when you run an EF migration it will add it to your database.
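A sketch of such a multi-column [Index] attribute in EF6 (the index name is arbitrary; the second argument is the column's position in the index):

```csharp
[Index("IX_Pupil_City_Names", 1)]
public string City { get; set; }

[Index("IX_Pupil_City_Names", 2)]
public string FirstName { get; set; }

[Index("IX_Pupil_City_Names", 3)]
public string LastName { get; set; }
```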
Alternatively if you are handling your own database migrations, as described in this article for example, then you can add a covering index that includes the City column (ANTS will give you the script to create this index if you click on the warning). You’ll need to give it a few minutes to build, but after that if you rerun the query, it should be nice and fast.
No change exists in isolation though, and maintaining that index has a cost associated with it. Every time that the Pupils table is updated, SQL Server will have to do some extra work to keep the index up to date (not to mention the additional disk space requirements). If you have a table which is primarily used for inserts (an auditing log for example) and which only has occasional ad-hoc queries run against it, it may be preferable to have no indexes in order to gain improved write performance.
This is arguably not an Entity Framework issue, but a general reminder to consider indexing as part of application design (see this article for a general introduction to SQL indexes). It’s one of those trade-offs that you have to make carefully, considering the performance implications for other code or applications sharing the database, and ideally testing to make sure there isn’t unreasonable degradation.
Overly-generic queries
Sample application: button 6
Often we want to do a search that is based on several criteria. For example, we might have a set of four search boxes for a user to complete, where empty boxes are ignored, so write something like:
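For example, with four nullable search parameters, a single catch-all statement like this:

```csharp
var results = db.Pupils
    .Where(p =>
        (firstName == null || p.FirstName == firstName) &&
        (lastName == null || p.LastName == lastName) &&
        (city == null || p.City == city) &&
        (zipCode == null || p.PostalZipCode == zipCode))
    .ToList();
```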
It’s tempting to hope that the LastName, City, and PostalZipCode clauses, which all evaluate to true because in this case they are null, will be optimized away in .NET, leaving a query along the lines of …
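That hoped-for simplified query would look something like this (illustrative only, not what EF actually generates):

```sql
SELECT [FirstName], [LastName]
FROM [dbo].[Pupils]
WHERE [FirstName] = @firstName
```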
We’ll be disappointed – this isn’t how EF query generation works. If we inspect the actual query executed, it looks like this:
For any LINQ statement, a single SQL query is generated, and everything is handled by SQL Server. This query itself looks pretty messy, but since that’s hidden from you, why should it matter? After all, the query runs quickly.
When SQL Server runs a query, it uses the values of the provided parameters along with stored statistics about your data to help estimate an efficient execution plan. These statistics include information about the uniqueness and distribution of the data. Because generating the plan has a cost, SQL Server also caches this execution plan so it doesn’t have to be created again – if you run an identical query in the future (even with different parameter values), the plan will be reused.
The problem caused by caching the plan for these sorts of generic statements is that Entity Framework will then run an identical query, but with different parameter values. If the query is too generic, a plan which was a good fit for one set of parameter values (when searching against FirstName) may be a poor choice for a different type of search. For example if all pupils live in either New York or Boston, the city column will have very low selectivity and a plan originally generated for pupils with a far more selective LastName may be a poor choice.
This problem is called ‘Bad Parameter Sniffing’, and there are far more thorough explanations available elsewhere. It’s worth noting that although these kinds of overly-generic queries make it more likely to hit this kind of issue, it can also occur if a simple query is first run with unrepresentative parameters. For example, imagine that 99% of pupils live in New York, and 1% live in Boston. We might write a simple statement like this:
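For instance:

```csharp
var pupils = db.Pupils
    .Where(p => p.City == city)   // plan is compiled for whichever city runs first
    .ToList();
```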
If the first time we run this query, we’re looking for pupils in Boston, then a plan will be generated which may be horribly inefficient for the remaining 99% of pupils (i.e. the remaining 99% of times the query runs).
There are different approaches you can take to resolve this. The first is to make the LINQ statements themselves less generic, perhaps by using logic like this:
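That is, build the query up conditionally so each combination of supplied criteria produces its own SQL statement (and therefore its own cached plan):

```csharp
IQueryable<Pupil> query = db.Pupils;

if (!string.IsNullOrEmpty(firstName)) query = query.Where(p => p.FirstName == firstName);
if (!string.IsNullOrEmpty(lastName))  query = query.Where(p => p.LastName == lastName);
if (!string.IsNullOrEmpty(city))      query = query.Where(p => p.City == city);
if (!string.IsNullOrEmpty(zipCode))   query = query.Where(p => p.PostalZipCode == zipCode);

// One SQL query containing only the clauses that actually apply
var results = query.ToList();
```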
An alternative is to make SQL Server recompile the plans each time. This will add a few milliseconds more CPU on each execution, which would likely only be a problem if the query is one that runs very frequently, or the server is CPU-limited already.
Unfortunately there’s no easy way to do this in EF, but one option is to write a custom database command interceptor to modify the EF-generated SQL before it’s run, to add a “option(recompile)” hint. You can write a class a little like this:
And use it like this:
Note that this interception is enabled globally, not for the specific instance of the context, so you probably want to disable it again so that other queries aren’t affected.
If you really need to remove a bad existing plan from cache, you can get the plan_handle for the plan by querying the sys.dm_exec_cached_plans Dynamic Management Object (covered shortly) and then manually remove just that particular plan from the cache, using:
DBCC FREEPROCCACHE (<insert plan_handle here>).
Bloating the plan cache
Sample application: button 7
In spite of the previous example, the reuse of execution plans is almost always a good thing because it avoids the need to regenerate a plan each time a query is run. In order for a plan to be reused, the statement text must be identical, which as we just saw, is the case for parameterized queries. So far we’ve seen that Entity Framework usually generates parameterized queries when we include values through variables, but there is a case when this doesn’t happen – when we use .Skip() or .Take().
When implementing a paging mechanism we might choose to write the following:
Looking at the executed query we see that the ResultsPerPage (100) and Page (417*100) integers are part of the query text, not parameters. Next time we run this query for, say, page 567, a very slightly different query will be run with a different number, but it will be different enough that SQL Server won’t reuse the execution plan.
We can look at everything in the plan cache by running the following query (it may help to first empty the cache by running DBCC FREEPROCCACHE). Inspecting the cache after each execution, we’ll see a new entry each time.
This is bad for several reasons. Firstly it causes an immediate performance hit because Entity Framework has to generate a new query each time, and SQL Server has to generate a new execution plan. Secondly, it significantly increases the memory used both by Entity Framework, which caches all the extra queries, and in SQL Server, which caches the plans even though they are unlikely to be reused. Even worse, if the plan cache becomes large enough SQL Server will remove some plans, and it’s possible that as well as removing these unneeded ones it will also remove unrelated plans, such as the plan for a business-critical reporting query, causing a problem elsewhere.
There are two things you can do about this. Firstly, it’s useful to enable a SQL Server setting called ‘optimize for ad-hoc workloads ‘ . This makes SQL Server less aggressive at caching plans, and is generally a good thing to enable, but it doesn’t address the underlying issue.
Secondly, the problem occurs in the first place because (due to an implementation detail) when passing an int to the Skip() and Take() methods, Entity Framework can’t see whether they were passed absolute values like Take(100), or a variable like Take(resultsPerPage), so it doesn’t know whether the value should be parameterized. But there’s an easy solution. EF 6 includes versions of Skip() and Take() which take a lambda instead of an int, enabling it to see that variables have been used, and parameterize the query. So we can write the following (you need to ensure you reference System.Data.Entity):
Upon rerunning this, we see the results are parameterized, resolving the issue.
Inserting data
When modifying data in SQL Server, Entity Framework will run separate INSERT statements for every row being added. The performance consequences of this are not good if you need to insert a lot of data! You can use a NuGet package, EF.BulkInsert, which batches up Insert statements instead, in much the way that the SqlBulkCopy class does. This approach is also supported out of the box in Entity Framework 7 (released Q1 2016).
If there’s a lot of latency between the application and the database, this problem will be more pronounced.
Extra work in the client
Sometimes the way we access data causes the client application to do extra work without the database itself being affected.
Detecting Changes
Sample application: button 8
We might want to add new pupils to our database, which we can do with code like this:
Unfortunately this takes a long time, and in ANTS’ timeline we can see high CPU usage during this period.
It would have been tempting to assume that the 2,000 insert SQL statements are the problem, but this isn’t the case. In the line-level timing information, we can see that almost all of the time (over 34 seconds in total) was spent in adding Pupils to our context, but that the process of actually writing changes out to the database took a little over 1 second (of which only 379ms was spent in running the queries).
All the time is spent in System code, and if we change the filtering to see where, it’s mostly spent in children of a method called DetectChanges() which is part of the Data.Entity.Core namespace. This method runs 2,000 times, the same number of times as the records we’re trying to add to the database.
So the time is all being spent tracking changes. Entity Framework will do this by default any time that you add or modify entities, so as you modify more entities, things get slower. In fact the change detection algorithm’s performance degrades exponentially with the number of tracked objects, and hence adding 4,000 new records would be significantly more than twice as slow as adding the 2,000 records as above.
The first answer is to use EF 6’s .AddRange() command, which is much faster because it is optimized for bulk insert. Here is the code:
In more complex cases, such as bulk import of multiple classes, you might consider disabling change tracking, which you can do by writing:
It’s essential that you re-enable change-tracking afterwards, or you’ll start seeing unexpected behavior, so it usually makes sense to do this in a finally block in case there’s an exception adding the entities. Rerunning with that change in place, we can see that saving changes to the database still takes a little over a second, but the time spent adding the entities to the context has been reduced from 34 seconds down to 85 ms – a 400x speed boost!
Change tracking
When you retrieve entities from the database, it’s possible that you will modify those objects and expect to be able to write them back to the database. Because Entity Framework doesn’t know your intentions, it has to assume that you will make modifications, so must set these objects up to track any changes you make. That adds extra overhead, and also significantly increases memory requirements. This is particularly problematic when retrieving larger data sets.
If you know you only want to read data from a database (for example in an MVC Controller which is just fetching data to pass to a View) you can explicitly tell Entity Framework not to do this tracking:
Startup Performance
The importance of startup time varies by application. For a web application which is expected to run for long periods, fast startup is typically not very important, especially if it’s part of a load-balanced environment. On the other hand if a user had to wait two minutes to load a desktop application it wouldn’t look great.
There’s another consideration too: as a developer a slow-starting application becomes tedious, waiting a long time after every debugging iteration. Fortunately there are some things we can do to get EF starting up quickly.
Precompiled views
Ordinarily, when EF is first used, it must generate views which are used to work out what queries to run. This work is only done once per app domain, but it can certainly be time consuming. Fortunately there’s no reason this has to be done at runtime – instead you can use precompiled views to save this work. The easiest way to do this is with the Entity Framework Power Tools VS extension. When you have this installed, right click on your context file, then from the Entity Framework menu, choose Generate Views. A new file will be added to your project.
Of course there’s a catch: this precompiled view is specific to your context, and if you change anything you’ll need to regenerate the precompiled view – if you don’t, you’ll just get an exception when you try to use EF and nothing will work. But this one is well worth doing, particularly for more complex models.
Note: for an in-depth article on precompiling views, including a way to precompile in code, then see this article. There is also a useful NuGet package called EFInteractiveViews that you might like to look at.
Giant contexts
Even if you precompile views, Entity Framework still has to do work when a context is first initialized, and that work is proportional to the number of entities in your model. For just a handful of tables it’s not a lot to worry about.
However, a common way of working with EF is to automatically generate a context from a pre-existing database, and to simply import all objects. At the time this feels prudent as it maximizes your ability to work with the database. Since even fairly modest databases can contain hundreds of objects, the performance implications quickly get out of control, and startup times can range in the minutes. It’s worth considering whether your context actually needs to know about the entire schema, and if not, to remove those objects.
NGen everything
Most assemblies in the .NET Framework come NGen‘d for you automatically – meaning that native code has been pre-JITted. As of Entity Framework 6, the EF assembly isn’t part of this, so it has to be JITted on startup. On slower machines this can take several seconds and will probably take at least a couple of seconds even on a decent machine.
It’s an easy step to NGen Entity Framework. Just run commands like the following:
Note that you have to separately NGen the 32 and 64 bit versions, and that as well as NGenning EntityFramework.dll it’s also worth NGenning EntityFramework.SqlServer.dll.
Note: For an in-depth view of using NGen with EF see this article.
Unnecessary queries
We start to get into small gains at this point, but on startup EF can run several queries against the database. By way of example, it starts with a query to find the SQL Server edition which might take a few tens of milliseconds.
Assuming we already know what SQL Server edition we’re running against, it’s not too hard to override this. We just need to create a class which inherits from IManifestTokenResolver with a method ResolveManifestToken(), which returns our known SQL Server edition. We then create a class which inherits from DbConfiguration, and in its constructor, set the ManifestTokenResolver to our custom class.
There are various other queries which will also be run if you’re using Migrations, many of which can be eliminated. I won’t go into the details as these typically aren’t very important but you might check them out if every millisecond counts in your application.
Note: I recommend looking at the article ‘Reducing Code First Database Chatter‘ written by the EF Program Manager, Rowan Miller.
Other tips
Disposing
It’s essential to dispose contexts once you’re done with them. It’s best to do this by only creating contexts in a “using” block, but if there are good reasons, you can manually call Dispose() on the context instead when you’re done with it.
If contexts aren’t disposed they cause performance-damaging work for the Garbage Collector, and can also hog database connections which can eventually lead to problems opening new connections to SQL Server.
Multiple result sets
Entity Framework supports Multiple Result Sets, which allows it to make and receive multiple requests to SQL Server over a single connection, reducing the number of roundtrips. This is particularly useful if there’s high latency between your application server and the database. Just make sure your connection string contains:
Caching
This one isn’t to be taken lightly, because in all but the most trivial cases, getting caching right can be hugely difficult without introducing edge case bugs.
That said, the performance gains can be tremendous, so when you’re in a tight spot it’s always worth considering whether you actually need to hit the database.
Consider Using Async
Support for C#5 Async in Entity Framework 6 is great – all relevant methods which will result in hitting the database have Async equivalents, like ToListAsync, CountAsync, FirstAsync, SaveChangesAsync, etc.
For an application only dealing with one request at a time, using Async isn’t going to affect performance much (and could even make it slightly worse!), but for (eg) web applications trying to support lots of concurrent load, Async can dramatically improve scalability by ensuring that resources are returned to the ThreadPool while queries are running.
For desktop apps, it’s also a really intuitive way to make sure that database access is done off the UI thread.
Note: For an in-depth view of async/await, including example of using Entity Framework async commands, see this article.
Upgrade
It’s easy to get behind on versions of libraries, but the Entity Framework team has focused hard on improving performance, so you could see big benefits simply by upgrading. Make sure you’re not missing out!
Test with Realistic Data
In most data access scenarios performance will degrade with the volume of data, and in some cases time taken can even rise exponentially with data volumes. Therefore when performance testing, it’s important to use data which is representative of a live environment. There’s nothing worse than things working fine in a simple dev environment, and breaking in production.
To generate test data for these scenarios I used Redgate’s SQL Data Generator because it makes it fast and easy, but you might be able to just use a backup of production or create data using another technique. The important thing is ensuring you aren’t testing with 20 rows and expecting it to scale happily to a million on deployment day!
Occasionally, EF isn’t the answer
Entity Framework is optimized for accessing relatively small amounts of entity-key-like data, but isn’t usually a great choice for complex reporting or BI applications. There may also be times when you need to fall back to stored procedures (which can be run by Entity Framework), or even different technologies entirely.
Summary
Teams often run into performance difficulties with Entity Framework, particularly when starting out. Some will even consider abandoning the technology entirely, often after relatively little time trying to get it to work well.
It’s worth persisting! Start with a performance profiler that lets you understand both what your application code is doing, and what database queries it runs. Next, combine what it shows you with an understanding of the problems we’ve discussed to identify areas for improvement. Remember that many of these suggestions have their own downsides, so be confident you’re actually suffering from the issue before implementing the change – and of course, you may have no issues at all!
When you’re ready, you can download a free trial of ANTS Performance Profiler.
Load comments | https://www.red-gate.com/simple-talk/dotnet/.net-tools/entity-framework-performance-and-what-you-can-do-about-it/ | CC-MAIN-2020-24 | refinedweb | 5,691 | 56.29 |
After reading the last post on URL rewriting, I started
thinking.....[look out!]
What if there was a JSP tag library that mirrors the subset of HTML tags
which support the href attribute and performs URL rewriting on the href?
Each JSP tag would create it's corresponding HTML tag by calling
response.encodeUrl() on the href and pass through the remaining attributes.
In other words.
<%@ taglib uri= ""
prefix="rewrite" %>
<rewrite:aMy Link</rewrite:a>
would return
<a href='>My Link<a>
additonal tags would be created for <form>, etc.
This way, you could add URL rewriting to HTML pages [excluding
Javascript] by simply adding a namespace to exsting HTML tags and
changing the extension to .jsp.
Does something like this exist already?
> | http://mail-archives.apache.org/mod_mbox/tomcat-users/200201.mbox/%3C3C55B286.5080206@pixelfreak.net%3E | CC-MAIN-2014-23 | refinedweb | 123 | 73.88 |
At the George James Software booth at Global Summit last year we took the wraps off the work we've been doing to make our popular editing and debugging tool Serenji available on the Visual Studio Code platform.
Rather than requiring you to pull code from your namespaces into local files, then push the changes back to the namespace to run it, you work directly in the namespace. In other words, the editing experience is like Studio rather than like Atelier.
As well as editing code you can also debug it directly from VSCode.
We're now looking for people to test a pre-release. If you already use VSCode (or are willing to start doing so) and you would like access to the pre-release Serenji extension, please email me
privately at the address on my DC profile at johnm@georgejames.com. Tell me what InterSystems platform(s) and version(s) you're working with, and what platform(s) you run VSCode on. I'd also like to know approximately how many years you have worked with ObjectScript, and whether your ObjectScript codebase consists mainly of classes or MACs or INTs. Plus, please indicate if you're already familiar with VSCode or not.
Thanks,
John Murray
Senior Product Engineer
George James Software
Hi John,
Would this include a certain level of independence between editing and Caché, Ensemble, IRIS version?
Especially a kind of "forward" compatibility as long as you don't go for new features?
To use our extension you have to install some code on the target server. This code works on the latest platforms (e.g. the IRIS 2018.2 Field Test container) and on versions back to well before InterSystems' Minimum Supported Version. Indeed, one of my targets is still on 2008.1.
So a single VSCode instance can connect to many different server versions.
Does that answer your question Robert?
I was thinking about this issue from @Wolf Koelling
IRIS Quarterly Container only releases and Studio
"install some code on the target server" sounds to have the potential to work around his show stopper.
Thanks! That solves my questions.
I just see close to me a situation that the upgrade to a higher version ( 16.2 to 18.1)
may trigger quite an effort on updating all developer's Studio. With all that "can never happen"
Hi John,
Do you plan on integrating the vscode extension with git somehow?
Of course, when we are editing source code on the file system (like with Atelier), it's fairly straightforward to put the code in a git repo (however, sync against Caché can be a mess).
If you are editing code directly against a Caché DB, this would probably require manually exporting the changes to the filesystem, or perhaps making the vscode manage the git objects directly against the repo.
Thanks!
We're actively researching options in this area.
Hi John,
Will one of you be in Antwerp next month to present/demonstrate this ?
Yes Herman, we hope to be there. But if you'd like to get your hands on it before then please contact me as indicated above.
It's now confirmed that I shall be attending the InterSystems Benelux Symposium.
Hello John, I can't see your email in your DC profile... I don't know if it's my fault or it's not public visible
Hi David!
BTW, do we need private messaging on DC?
Sorry about that David. I've edited the post to include my email address.
To leave a comment or answer to post please log in
Please log in | https://community.intersystems.com/post/seeking-field-testers-upcoming-vscode-extension-george-james-software | CC-MAIN-2020-29 | refinedweb | 601 | 72.66 |
In this video you will learn how to code with repeaters and making a dropdown a dynamic filter for your repeater. This is really powerful and useful in all projects. Happy coding.
Nice work Andreas. Thanks so much for sharing this.
You are all welcome to send wishes for videos to hello@wixshow.com
Where can I find the source code for this?
Hey
Code will be sent to all that has a Pro Subscription or you can buy the code for 5$ in our shop. I hope you understand that we have to find all ways of getting supported, we work full time trying to teach Wix Code and need some kind of income :( Hope that you want to support us too.
I have a premium plan , unlimited with unused $300 ad vouchers, Site booster app, form builder app
Refer attached and kindly advise?
As a customer I am disappointed that you are asking to pay for sample code. Sample code should be a part of your help / video article. This is a unprofessional business business practice and you should forward this feedback to the decision makers in your company.
Hey, I am not a part of Wix team, I run wixshow.com as a online site for teaching Wix Code. So don't missunderstand me or the offers. I am just trying to help people and make a living doing so. But please do note that me or my Wix Show project is not connected to Wix in any way so don't get confused in anyway please.
Send me an email to hello@wicxshow.com and I will attach the code for you for free because of the unlucky missunderstanding of my business.
Apologies for the confusion.
-Yashika
Hello Andreas
You said code will be sent to all that has a 'Pro Subscription'!
How do premium members request for the codes, please elaborate.
Thanks in advance.
Anupam Dubey
hey Anupam!
When you have a subscription on wixshow.com in the PRO Video Channel I will get notified and then you will get a coupon for the codes available in our shop. Simple but works. Happy New Year!
Great explanation video. I'll checkout the others too.
Hi,
Thanks a lot for the video.
I was desperately looking for an easy way to create a filter / research bar just like you did.
If you can provide me a code that will do that, I'd be willing to compensate your effort.
To sum up my needs :
Forms : each user will be able to submit their own content to the database, which will be displayed on the "all" items page
users will have the ability to post their own content (UGS website) through a form
they will be able to choose which category the item belongs to via a dropdown menu in the form
List pages: each item will be displayed on a "all items" page
item list pages will have a filter menu (dropdown, search bar) so users can select which category to display or not
I'd like users to be able to enable/disable various options such as "hide expired items" or a price range slider
All these mentioned above are very similar to a basic e-shop scheme.
If you think you'll be able to fulfill those criteria, don't hesitate to send me an email : tristan.breon@gmail.com
I might even have other features I'd like to see on my website.
Thanks !
Hi Andreas Kviby,
Good day!
I register my self once in your free video tutorial on Wixshow, it was awesome, I learn new things, Thank you! But I like the most is that when you enroll in one of the course the website will provide an ID Code
I also need that kind of function, currently, I'm working on a website that has the same function but struggling how to get that. Could you please lend some of your knowledge how to do it.
I use this function when I need codes, just set the length of the string you need and set the allowed characters in the char_list. happy coding.
So if I'm not a pro member I can still give you $5 for any code I want? Please say yes.... That'd be super cool....
yes
Andreas Kirby, do you have any code for auto fill for user submit forms? I have users setting up profile. Then then later can input an article to a database. Want the user info (name, bio, contact info, personal photo) to auto fill into the form so they do not have to complete each time they are submitting a new article.
1. Query the data collection for the info you need
2. Take the results.items[0].fieldkey you need and insert it into the text input fields you want to auto fill like below.
$w("#inputfield-id").value = results.items[0].fieldkey;
That will take the value from your data collection and pre fill the input text field with that value.
Andreas Kirby,
I know something I see many would like to see is search bar work for repeaters. It a bit of a nightmare for us ameteurs but a necessity.
Here is code I used for this page....but issue I have is that the category drop down then shows duplication of category for every database entry and that is worthless in trying to provide a filter (who wants to have to go thru all those categories to get to filter needed!!!) Not sure how to limit that. Let me know if you can provide a fix for that and a cost.
Also tried setting up two type search boxes for repeater but getting nowhere fast.
Code used:
import wixData from "wix-data";
$w.onReady(() => {
wixData.query ("Category")
.find()
.then(res => {
let options = [{'value': "", "label": "All Categories"}];
options.push(...res.items.map(category => {
return {"value": category.title, "label": category.title};
}));
$w("#iCategory").options = options;
});
});
let lastFilterDescription;
let lastFilterCategory;
let debounceTimer;
export function iTitle_keyPress(event, $w) {
if (debounceTimer) {
clearTimeout(debounceTimer);
debounceTimer = undefined;
}
debounceTimer = setTimeout(() => {
filter($w("#iTitle").value, lastFilterCategory);
}, 200);
}
function filter(description, category) {
if (lastFilterDescription !==description || lastFilterCategory !== category) {
let newFilter = wixData.filter();
if (description)
newFilter = newFilter.contains('description', description);
if (category)
newFilter = newFilter.eq('category', category);
$w("#dataset1").setFilter(newFilter);
lastFilterDescription = description;
lastFilterCategory = category;
}
}
export function iCategory_change(event, $w) {
filter(lastFilterDescription, $w("#iCategory").value);
}
Hi Andreas Kviby
Can you tell me what I'm doing wrong with the code below. For some reason it's not working. Would really appreciate your help, thanks.
CODE USED
Hey Jaiah,
On this line of code:
Your .eq() filter function searches for records that have the string "typefilter" in the propertytype. I suspect that what you really want is the value of the #inputfilter input. You have misplaced the value property and incorrectly made it a function
You probably meant this:
Also, you have another query of the same collection inside of the .then() of the first query. Furthermore, the filter statement that you are using is invalid:
It is missing the second parameter - the filter value.
You should refer to the WixDataQuery API to see how to properly build a query. You can use some of the examples provided and then modify them for your own use.
Provide more details what you are trying to accomplish and what components you have on the page. | https://www.wix.com/corvid/forum/community-discussion/coding-the-repeaters-in-wix-code | CC-MAIN-2020-10 | refinedweb | 1,234 | 65.42 |
I have a database, which I query, and I’m unsure of where to perform the sorting of the results, so far I’ve have the following options.
- At the MySQL query.
- At list level(Using a LinkedList)
- Sorting an unsorted list using comparators before showing the results (basically in the jsp)
The List is composed by ObjectDTO so where would it be more efficient. Any ideas?
How do I do stable sort?
How do I stably sort an array? The value I want to sort by can have a lot of duplicates, and I’m not sure which sort algorithm ruby uses. I’m thinking insertion sort would have worked best for me. Ex
How do i sort objects?
I’ve created a class and created an array of objects under that class and filled it all up with data. Now i want to sort the entire array by a specific member of that class, how do I do this using the
In wordpress I need to do a sort then a where clause… but I can’t get it working
I have a list of post that need to be in chronological order. The fields are all custom and I am using the Advance custom fields plugin but that shouldn’t matter. What I am trying to do is sort items
How Do I Sort IList?
There’s no Sort() function for IList. Can someoene help me with this? I want to sort my own IList. Suppose this is my IList: public class MyObject() { public int number { get; set; } public string mar
How do I add Sort by options?
I have a ‘Sort By’ dropdown on an events page where users can view a number of events and I’d like to allow users to Sort the events by Name (Alphebetical), Date (Created_At), and perhaps (Number of p
Database: how do I sort GUID?
My primary key uses guid. How do I sort GUID? What about I create a datetime column and record a datetime stamp, I could then sort by datetime? is this the best way to do it? or are there better ways?
How do I sort the following list
I have a list called clusters and in that list there is another list called tags which has a sequenceno. How do I sort clusters by using the max of seuqenceno from tags of each cluster using lambda ex
In ruby/rails, how do I sort on a date value where the date can sometimes be null?
I would like to sort my games by game_date, but sometimes the game_date may be null, and I will get an exception: undefined method `to_datetime’ for nil:NilClass @games = @teams.reduce([]) { |memo, te
Where can I get C/C++ sample code for merge sort a link list?
Where can I get a sample code for merge sort a link list?
How do I sort a QList of QDateTime*?
How do I sort a QList of QDateTime* objects by the value of the QDateTime object?
Answers
Database. Using indexes and other information about the data, db’s are very good at this.
You should do the sorting in the database if at all possible.
- The database can use indexes. If there is a suitable index available then the results can be read from disk already in sorted order, resulting in a performance increase – no extra O(n log(n)) sorting step is required.
- If you only need the first x results you also minimize data transfer (both reduced network transfer, and also reduced disk access if there is a suitable index).
Best is at the mySQL query.
a) It is easy to do
b) If you use an index the sort happens when the index is created or when new rows are inserted automatically (sometimes an index needs a reorganization but this is a db admins daily business. This applies if the table is very huge.).
e) If the index includes the columns used in the where clause the access in general is faster
d) You do not need to read the whole table each time to do the sort for yourself
e) Even if you have no index I believe the DB can do the sorting best
Hope it helps | http://w3cgeek.com/where-do-i-sort.html | CC-MAIN-2019-04 | refinedweb | 711 | 78.28 |
LoRaWAN configuration
According to the documentation on the init() method of the LoRa class accepts a frequency parameter to select the desired ISM band. However the note on the same page says:
"In LoRa.LORAWAN mode, only adr, public and tx_retries are used. All the otherparams will be ignored as theiy are handled by the LoRaWAN stack directly. On the other hand, these same 3 params are ignored in LoRa.LORA mode as they are only relevant for the LoRaWAN stack."
If this is true, how does one configure the transmitter for different regions (e.g. US vs EU)? Can someone please post a working example that shows how to configure the transmitter for the 915 band? BTW, raw LoRa mode seems to work fine.
Note: I updated the code below because the duty_cycle parameter isn't supported.
Here is the function, cleaned up :)
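For reference, a cleaned-up US915 channel-setup function typically looks like the sketch below. The name `select_subband` and the channel-plan constants (64 narrow 125 kHz uplink channels starting at 902.3 MHz in 200 kHz steps, eight channels per sub-band) are my assumptions taken from the US915 regional parameters, not necessarily the exact code that was posted:

```python
def select_subband(lora, subband):
    """Keep only the eight 125 kHz uplink channels of one US915 sub-band (1-8)."""
    if not 1 <= subband <= 8:
        raise ValueError('subband must be 1..8')
    # the LoRaWAN stack pre-populates all 72 US915 channels; drop them first
    for ch in range(0, 72):
        lora.remove_channel(ch)
    start = (subband - 1) * 8
    for ch in range(start, start + 8):
        # 125 kHz uplink channel n sits at 902.3 MHz + n * 200 kHz
        lora.add_channel(ch, frequency=902300000 + ch * 200000,
                         dr_min=0, dr_max=3)
```

Calling `select_subband(lora, 2)` before joining keeps the device on 903.9-905.3 MHz, which matches the channels most US TTN gateways listen on.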
Hi @pablomargareto, let me try to answer your questions:
- Yes, add_channel and remove_channel both work when joined with LoRaWAN.
- Yes, use this function:)
- Something like:
for zzz in range(0,10000): s.send("a msg") time.sleep(30)
Not sure this is what you're asking for (maybe you're looking for a timer/callback function set, etc).
You can change the data rate (useful if you would like to use longer messages) by doing:
s.setsockopt(socket.SOL_LORA,socket.SO_DR, 3)
where s is a lora socket.
Also, ignore the documentation about channels 1-3 being special, this is an issue for Europe, not the US. I should note that most US gateways are on subband 2 as best I can tell.
I have found the LoPy to be very reliable generally when using these functions. I only use OTAA with TTN (I don't use ABP).
-Chris
@dchappel No luck... I kind of give up using this hardware. It is really an awesome solution, but the lack of support and documentation is frustrating. Maybe in the future they have something more mature on the firmware/software side.
Good luck and keep us posted! :)
@betabrain That is not enough. In the US you have 72 channels and a LoRa gateway listen to only 8 (some of them up to 64). You need to tell LoPy which sub-band to use.
I am also having issue (with very similar code) with only getting one message through for every 5-10 sends.
Pablo - were you able to resolve your issue?
Thanks...
Hello,
First of all, thanks for this great product (lopy) and your hard work. Cheers up! I am convinced that you will achieve something really stable soon.
I still have the same question than @tbradshaw , since I am working with one LoPy here in the US and I am not sure how the LoRaWan is setting up everything.
I updated to the last firmware today and I was doing some testing with my lopy. This is my code:
from network import LoRa import time import binascii import socket import pycom import struct lora = LoRa(mode=LoRa.LORAWAN) #lora = LoRa(mode=LoRa.LORAWAN) #Setting up channels for index in range(3, 72): lora.remove_channel(index) for index in range(0, 8): freq = 903900000 + index * 200000 print('Frequency: ') print(freq) lora.add_channel(index, frequency=freq, dr_min=0, dr_max=3, duty_cycle=0) #corresponding real values dev_addr = struct.unpack(">l", binascii.unhexlify('00000000'))[0] nwk_swkey = binascii.unhexlify('00000000000000000000000000000000') app_swkey = binascii.unhexlify('00000000000000000000000000000000') lora.join(activation=LoRa.ABP, auth=(dev_addr, nwk_swkey, app_swkey)) #wait until the module has joined the network while not lora.has_joined(): time.sleep(1.5) print('Not joined yet...') print('Network joined!') pycom.rgbled(0x00ff00) s = socket.socket(socket.AF_LORA, socket.SOCK_RAW) s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0) s.setblocking(True) while True: print('Sending Packet...') pycom.rgbled(0x00ff00) s.send('0') print('Done sending') time.sleep(200)
My problem is that I only get one out of 7 or 8 messages, always on the same frequency: 904.6 MHz. We do not have any limit by regulation in the US concerning the duty cycle.
So, my questions are:
- Do the
add_channeland
remove_channelfunctions work when sending with LoraWan?
- There are 72 channels in the US and the commercial gateways use to work with only 8 (one sub-band). Is there any way to tell lopy which channels to use?
- I would be interested in being able to send a message every 30 seconds. Is there any way to be possible with the current firmware?
Thanks and regards,
Pablo
@tbradshaw The region (and therefore ISM band) is locked in the firmware. To change it, do a firmware update. The default firmware updater will ask you which country you intend to use your LoPy in. | https://forum.pycom.io/topic/283/lorawan-configuration | CC-MAIN-2020-40 | refinedweb | 776 | 67.86 |
>>>>> "MJM" == MJM <linux-support@earthlink.net> writes: MJM> They do. My app would be broken from the start if I could not rely MJM> on this capability. This style of type conversion is covered in MJM> elementary C++ books by Bjarne. It's not unusual. You must be MJM> aware of what you are doing when you do a type conversion. MJM> Portability is a concern. I am limiting my app to Intel 32 bit MJM> Linux. Screw everything else. That you do not treasure portability across CPUs and compilers do not mean others don't. MJM> By whom? Your example is nowhere to be found in my C++ books by MJM> Bjarne. So you are saying that Bjarne promotes bad style in his MJM> books? Why not tell him: MJM> You must be reading pre-standard C++ books. Bjarne's "The C++ Programming Language, 3rd Edition" clearly stated that the () "C-style" casting syntax should be avoided: This C-style cast is far more dangerous than the named conversion operators because the notation is harder to spot in a large program and the kind of conversion intended by the programmer is not explicit. [Section 6.2.7.] In his "The Design and Evolution of C++", Bjarne explained that he even wanted to strike the C-style cast out of the C++ standard except that all C programs would become not compilable. MJM> Besides, reinterpret_cast is probably a template function doing MJM> this: MJM> return ((T) x); // type conversion using cast Definitely not. The objection of (T)x is not just its syntax, but also its unclear behaviour. 
Consider the following: #include <iostream> using namespace std; class B { public: virtual ~B() {} }; class B1 { public: virtual ~B1() {} }; class B2 { public: virtual ~B2() {} }; class D: public B1, public B2 {}; int main() { D d; B *pb; B2 *pb1; pb1 = (B2 *)&d; cout << pb1 << endl; pb1 = reinterpret_cast<B2 *>(&d); cout << pb1 << endl; pb1 = static_cast<B2 *>(&d); cout << pb1 << endl; pb = (B *)&d; cout << pb << endl; } When executed from my computer (Debian sid, gcc 3.3), it gives: 0xbffffb04 # C-style cast to B2* 0xbffffb00 # reinterpret_cast to B2* 0xbffffb04 # static_cast to B2* 0xbffffb00 # C-style cast to B* In other words, the behaviour of a C-style cast depends on whether the casting of the pointer is an "up-cast"/"down-cast" or not; if it is (in this case up from D to B2) then the cast will be a static cast, doing pointer adjustment (so that the sub-object of type B2 is found within the object of type D); and if it is not then the cast will be a reinterpret cast, doing no pointer adjustment. It is generally agreed that such "compiler intelligence" is in general no good, since the programmer probably only expect one of the two possibility will happen, and the result is sometimes what the programmer expects and sometimes not. If the programmer expects a static cast he should write static_cast<B2 *>(&d) so that the compiler will emit an error if the classes are actually of two different hierarchy. If the programmer expects a dynamic cast he should write dynamic_cast<B *>(&d) so that you understand you are doing something that the compiler has no control of, and dereferencing the result will probably be a programming error. Regards, Isaac. | https://lists.debian.org/debian-user/2003/08/msg00911.html | CC-MAIN-2015-40 | refinedweb | 552 | 56.69 |
# Espressif IoT Development Framework: 71 Shots in the Foot

One of our readers drew our attention to the Espressif IoT Development Framework. He had found an error in the project code and asked whether the PVS-Studio static analyzer could detect it. The analyzer can't catch this specific error yet, but it managed to spot many others. Based on this story and the errors found, we decided to write a classic article about checking an open-source project. Enjoy exploring what IoT devices can do to shoot you in the foot.
Software and hardware systems
-----------------------------
The father of the C++ language, Bjarne Stroustrup, once [said](https://www.quotes.net/quote/9012):
> "C" makes it very easy to shoot yourself in the foot. In "C++" it is harder to do this, but when you do it, it tears off the whole leg.
In our case, the statement begins to take on a slightly different meaning. Having started with a simple scenario of a programmer making a mistake that leads to incorrect program operation, we now face cases where such a misstep can cause real physical harm.
Projects such as the Espressif IoT Development Framework serve to implement software and hardware systems that interact with humans and control objects in the real world. All this imposes additional requirements on the quality and reliability of the program code. This is where standards such as [MISRA](https://www.viva64.com/en/misra/) and [AUTOSAR](https://www.viva64.com/en/autosar/) originate. Anyway, that's another story we won't get into here.
Back to the [Espressif IoT Development Framework](https://www.espressif.com/en/products/sdks/esp-idf) (source code on GitHub: [esp-idf](https://github.com/espressif/esp-idf)). Check out its brief description:
> ESP-IDF is Espressif's official IoT Development Framework for the ESP32 and ESP32-S series of SoCs. It provides a self-sufficient SDK for any generic application development on these platforms, using programming languages such as C and C++. ESP-IDF currently powers millions of devices in the field and enables building a variety of network-connected products, ranging from simple light bulbs and toys to big appliances and industrial devices.
I think readers will be interested to see whether the developers of this project pay enough attention to its quality and reliability. Unfortunately, there is no such certainty; after reading the article and the descriptions of the defects found, you may well share my concerns. So grab some tea or coffee: a long read of text and code is waiting for you.
Back story
----------
I would also like to tell you how we came up with the idea of this article. [Yuri Popov](https://habr.com/en/users/djphoenix/) (Hardcore IoT fullstack dev & CTO) follows our publications with great interest. One day he wrote to me: he had just found, by hand, an error in the Espressif IoT Development Framework, and asked whether PVS-Studio could detect that defect. The error comes down to a typo in the code, and PVS-Studio has always been good at detecting such errors.
The incorrect code was in the [mdns.c](https://github.com/espressif/esp-idf/blob/v4.0.2/components/mdns/mdns.c) file:
```
mdns_txt_linked_item_t * txt = service->txt;
while (txt) {
data_len += 2 + strlen(service->txt->key) + strlen(service->txt->value);
txt = txt->next;
}
```
The loop traverses a list whose nodes refer to strings, and the lengths of these strings are supposed to be summed up. Everything would be correct, except that on every iteration the code measures the strings of the first node (*service->txt*) instead of the current node (*txt*).
Correct code:
```
data_len += 2 + strlen(txt->key) + strlen(txt->value);
```
To our mutual disappointment of our reader Yura and me, PVS-Studio failed to notice the error. The tool just doesn't know about this error pattern. Actually, our team did not know about this pattern. PVS-Studio, like any other analyzer, can only notice what it has been programmed for :).
Well, it's a pity, but not a big deal. This is one of the sources where we can get ideas for the PVS-Studio development. Users and clients send various error patterns that they have found in the code of their projects. PVS-Studio is not aware of such errors yet. So, we are gradually creating new diagnostic rules. This will also happen with the pattern above. This example is already in the TODO list. We'll implement a new diagnostic rule for detecting similar cases in one of the upcoming analyzer versions.
As a result of all this, Yura himself wrote a small note about this error, how he was looking for it and also about PVS-Studio: "[Bug in ESP-IDF: MDNS, Wireshark and what does unicorns have to do with it](https://habr.com/en/post/530466/)" [RU]. Plus, he notified the authors of the project about the found error: [Spurious MDNS collision detection (IDFGH-4263)](https://github.com/espressif/esp-idf/issues/6114).
This was not the end of story. Yura suggested that our team checked the project and wrote a note about the results. We did not refuse, as we often make [such publications](https://www.viva64.com/en/inspections/) to promote the methodology of static code analysis and PVS-Studio tool as well :).
Honestly, our check was rather incomplete. Unfortunately, there is no "build all" target, or perhaps we just didn't figure out how to do it. We started with getting\_started\hello\_world. It seems to use part of the framework, but not all of it. So you can find other bugs by getting more framework files compiled. In other words, the fact that only 71 errors are described in this article is our fault :).
I wasn't trying to find as many bugs as possible. So, when I skimmed through the incomplete report, I immediately realized that there was already more than enough material for the article. Therefore, I got too lazy to delve further into the project.
Fortunately, Yuri Popov, who started the ball rolling, is much more enthusiastic than I am. He told me he was able to achieve a more complete compilation of the framework and checked many more files. His article will most likely follow this one where he will consider an additional portion of errors.
Examples of where false/pointless positives come from
-----------------------------------------------------
I'd like to warn all enthusiasts who'd like to check the Espressif IoT Development Framework, that you will need to pre-configure the analyzer. Without it, you will drown in a great number of false/useless positives. But the analyzer is not to blame.
Conditional compilation directives (#ifdef) and macros are very actively used in the project code. This coding style confuses the analyzer and generates many useless warnings of the same type. To make it clearer how and why this happens, let's look at a couple of examples.
PVS-Studio warning: V547 Expression 'ret != 0' is always true. esp\_hidd.c 45
```
esp_err_t esp_hidd_dev_init(....)
{
esp_err_t ret = ESP_OK;
....
switch (transport) {
#if CONFIG_GATTS_ENABLE
case ESP_HID_TRANSPORT_BLE:
ret = esp_ble_hidd_dev_init(dev, config, callback);
break;
#endif /* CONFIG_GATTS_ENABLE */
default:
ret = ESP_FAIL;
break;
}
if (ret != ESP_OK) {
free(dev);
return ret;
}
....
}
```
The developer selected a compilation mode in which the macro *CONFIG\_GATTS\_ENABLE* is not defined. Therefore, for the analyzer, this code looks like this:
```
esp_err_t ret = ESP_OK;
....
switch (transport) {
default:
ret = ESP_FAIL;
break;
}
if (ret != ESP_OK) {
```
The analyzer seems to be right that the condition is always true. On the other hand, there is no benefit from this warning, since, as we understand, the code is completely correct and makes sense. Such situations are extremely common, which makes it difficult to view the report. This is such an unpleasant cost of active usage of conditional compilation :).
Let's have a look at another example. The code actively uses its own kind of assert macros. Unfortunately, they also confuse the analyzer.

PVS-Studio warning: V547 Expression 'sntp\_pcb != NULL' is always true. sntp.c 664
```
#define LWIP_PLATFORM_ASSERT(x) do \
{printf("Assertion \"%s\" failed at line %d in %s\n", \
x, __LINE__, __FILE__); fflush(NULL); abort();} while(0)
#ifndef LWIP_NOASSERT
#define LWIP_ASSERT(message, assertion) do { if (!(assertion)) { \
LWIP_PLATFORM_ASSERT(message); }} while(0)
#else /* LWIP_NOASSERT */
#define LWIP_ASSERT(message, assertion)
#endif /* LWIP_NOASSERT */
sntp_pcb = udp_new_ip_type(IPADDR_TYPE_ANY);
LWIP_ASSERT("Failed to allocate udp pcb for sntp client", sntp_pcb != NULL);
if (sntp_pcb != NULL) {
```
The *LWIP\_ASSERT* macro expands into the code that will stop program execution if the *sntp\_pcb* pointer is null (see the *abort* function call). The analyzer is well aware of this. That's why PVS-Studio warns the user that the *sntp\_pcb != NULL* check is pointless.
On the one hand, the analyzer is right. But everything will change if the macro expands into "nothing" in a different compilation mode. In this case, the check will make sense. Yes, in the second scenario, the analyzer will not complain, but this does not change the main point. In the first case, we have an extra warning.
Still, this is not that scary. Most of the useless messages can be eliminated with careful analyzer configuration. In a number of other places, the situation can be improved by changing the style of writing code and macros. But this goes beyond the scope of this article. Additionally, one can use the mechanism for suppressing warnings in specific places, in macros, and so on. There is also a mass markup mechanism. You can read more about all this in the article "[How to introduce a static code analyzer in a legacy project and not to discourage the team](https://www.viva64.com/en/b/0743/)".
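For example, if one decides that the warning on the *sntp_pcb* check is noise in a given configuration, PVS-Studio's comment-based suppression can silence it either on a single line or in every expansion of a macro. The markup below is a sketch based on the documented //-V comment syntax:

```c
/* Suppress warning V547 on this particular line only: */
if (sntp_pcb != NULL) { /* ... */ } //-V547

/* Suppress V547 on every line where the LWIP_ASSERT macro occurs;
   this one-off comment is usually placed in a shared header: */
//-V:LWIP_ASSERT:547
```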
Security
--------
Let's start with the warnings, which, in my opinion, relate to security issues. Developers of operating systems, frameworks, and other similar projects should pay special attention to finding code weaknesses that can potentially lead to vulnerabilities.
For the convenience of classifying code weaknesses, [CWE](https://cwe.mitre.org/) (Common Weakness Enumeration) comes in handy. In PVS-Studio you can enable CWE ID display for warnings. For the warnings from this part of the article, I will additionally provide the corresponding CWE ID.
For more information, the search for potential vulnerabilities is covered in the article "[PVS-Studio Static Analyzer as a Tool for Protection against Zero-Day Vulnerabilities](https://www.viva64.com/en/b/0689/)".
### Error N1; Order of arguments
PVS-Studio warning: V764 Possible incorrect order of arguments passed to 'crypto\_generichash\_blake2b\_\_init\_salt\_personal' function: 'salt' and 'personal'. blake2b-ref.c 457
```
int blake2b_init_salt_personal(blake2b_state *S, const uint8_t outlen,
const void *personal, const void *salt);
int
blake2b_salt_personal(uint8_t *out, const void *in, const void *key,
const uint8_t outlen, const uint64_t inlen,
uint8_t keylen, const void *salt, const void *personal)
{
....
if (blake2b_init_salt_personal(S, outlen, salt, personal) < 0)
abort();
....
}
```
When the *blake2b\_init\_salt\_personal* function is called, the *personal* and *salt* arguments are passed in swapped order. This is hardly intentional; most likely, the mistake occurred through inattention. I'm not familiar with the project code or cryptography, but my gut tells me that such a mix-up can have bad consequences.
According to the CWE, this error is classified as [CWE-683](https://cwe.mitre.org/data/definitions/683.html): Function Call With Incorrect Order of Arguments.
### Error N2; Potential loss of significant bits
PVS-Studio warning: V642 Saving the 'memcmp' function result inside the 'unsigned char' type variable is inappropriate. The significant bits could be lost breaking the program's logic. mbc\_tcp\_master.c 387
```
static esp_err_t mbc_tcp_master_set_request(
char* name, mb_param_mode_t mode, mb_param_request_t* request,
mb_parameter_descriptor_t* reg_data)
{
....
// Compare the name of parameter with parameter key from table
uint8_t comp_result = memcmp((const char*)name,
(const char*)reg_ptr->param_key,
(size_t)param_key_len);
if (comp_result == 0) {
....
}
```
Storing the result of the *memcmp* function in a single-byte variable is a very bad practice. This is a flaw that could very well turn into a real vulnerability like this: [CVE-2012-2122](https://seclists.org/oss-sec/2012/q2/493). For more information about why you can't write like this, see the [V642](https://www.viva64.com/en/w/v642/) diagnostic documentation.
In short, some implementations of the *memcmp* function may return values other than 1, 0, and -1 when the memory blocks differ. The function may, for example, return 1024, and that number, written to a variable of type *uint8\_t*, turns into 0.
According to the CWE, this error is classified as [CWE-197](https://cwe.mitre.org/data/definitions/197.html): Numeric Truncation Error.
### Error N3-N20; Private data remains in memory
PVS-Studio warning: V597 The compiler could delete the 'memset' function call, which is used to flush 'prk' buffer. The memset\_s() function should be used to erase the private data. dpp.c 854
```
#ifndef os_memset
#define os_memset(s, c, n) memset(s, c, n)
#endif
static int dpp_derive_k1(const u8 *Mx, size_t Mx_len, u8 *k1,
unsigned int hash_len)
{
u8 salt[DPP_MAX_HASH_LEN], prk[DPP_MAX_HASH_LEN];
const char *info = "first intermediate key";
int res;
/* k1 = HKDF(<>, "first intermediate key", M.x) */
/* HKDF-Extract(<>, M.x) */
os_memset(salt, 0, hash_len);
if (dpp_hmac(hash_len, salt, hash_len, Mx, Mx_len, prk) < 0)
return -1;
wpa_hexdump_key(MSG_DEBUG, "DPP: PRK = HKDF-Extract(<>, IKM=M.x)",
prk, hash_len);
/* HKDF-Expand(PRK, info, L) */
res = dpp_hkdf_expand(hash_len, prk, hash_len, info, k1, hash_len);
os_memset(prk, 0, hash_len); // <=
if (res < 0)
return -1;
wpa_hexdump_key(MSG_DEBUG, "DPP: k1 = HKDF-Expand(PRK, info, L)",
k1, hash_len);
return 0;
}
```
A very common mistake. The compiler has the right to remove the *memset* function call for optimization purposes, since after filling the buffer with zeros, it is no longer used. As a result, private data is not actually erased, but will continue to hang around somewhere in memory. For more information, see the article "[Safe Clearing of Private Data](https://www.viva64.com/en/b/0388/)".
According to the CWE, this error is classified as [CWE-14](https://cwe.mitre.org/data/definitions/14.html): Compiler Removal of Code to Clear Buffers.
Other errors of this type:
* V597 The compiler could delete the 'memset' function call, which is used to flush 'prk' buffer. The memset\_s() function should be used to erase the private data. dpp.c 883
* V597 The compiler could delete the 'memset' function call, which is used to flush 'prk' buffer. The memset\_s() function should be used to erase the private data. dpp.c 942
* V597 The compiler could delete the 'memset' function call, which is used to flush 'psk' buffer. The memset\_s() function should be used to erase the private data. dpp.c 3939
* V597 The compiler could delete the 'memset' function call, which is used to flush 'prk' buffer. The memset\_s() function should be used to erase the private data. dpp.c 5729
* V597 The compiler could delete the 'memset' function call, which is used to flush 'Nx' buffer. The memset\_s() function should be used to erase the private data. dpp.c 5934
* V597 The compiler could delete the 'memset' function call, which is used to flush 'val' buffer. The memset\_s() function should be used to erase the private data. sae.c 155
* V597 The compiler could delete the 'memset' function call, which is used to flush 'keyseed' buffer. The memset\_s() function should be used to erase the private data. sae.c 834
* V597 The compiler could delete the 'memset' function call, which is used to flush 'keys' buffer. The memset\_s() function should be used to erase the private data. sae.c 838
* V597 The compiler could delete the 'memset' function call, which is used to flush 'pkey' buffer. The memset\_s() function should be used to erase the private data. des-internal.c 422
* V597 The compiler could delete the 'memset' function call, which is used to flush 'ek' buffer. The memset\_s() function should be used to erase the private data. des-internal.c 423
* V597 The compiler could delete the 'memset' function call, which is used to flush 'finalcount' buffer. The memset\_s() function should be used to erase the private data. sha1-internal.c 358
* V597 The compiler could delete the 'memset' function call, which is used to flush 'A\_MD5' buffer. The memset\_s() function should be used to erase the private data. sha1-tlsprf.c 95
* V597 The compiler could delete the 'memset' function call, which is used to flush 'P\_MD5' buffer. The memset\_s() function should be used to erase the private data. sha1-tlsprf.c 96
* V597 The compiler could delete the 'memset' function call, which is used to flush 'A\_SHA1' buffer. The memset\_s() function should be used to erase the private data. sha1-tlsprf.c 97
* V597 The compiler could delete the 'memset' function call, which is used to flush 'P\_SHA1' buffer. The memset\_s() function should be used to erase the private data. sha1-tlsprf.c 98
* V597 The compiler could delete the 'memset' function call, which is used to flush 'T' buffer. The memset\_s() function should be used to erase the private data. sha256-kdf.c 85
* V597 The compiler could delete the 'memset' function call, which is used to flush 'hash' buffer. The memset\_s() function should be used to erase the private data. sha256-prf.c 105
### Error N21; Private data buffer is not deleted
PVS-Studio warning: V575 The null pointer is passed into 'free' function. Inspect the first argument. sae.c 1185
```
static int sae_parse_password_identifier(struct sae_data *sae,
const u8 *pos, const u8 *end)
{
wpa_hexdump(MSG_DEBUG, "SAE: Possible elements at the end of the frame",
pos, end - pos);
if (!sae_is_password_id_elem(pos, end)) {
if (sae->tmp->pw_id) {
wpa_printf(MSG_DEBUG,
"SAE: No Password Identifier included, but expected one (%s)",
sae->tmp->pw_id);
return WLAN_STATUS_UNKNOWN_PASSWORD_IDENTIFIER;
}
os_free(sae->tmp->pw_id);
sae->tmp->pw_id = NULL;
return WLAN_STATUS_SUCCESS; /* No Password Identifier */
}
....
}
```
If something is wrong with the password and the *pw\_id* pointer is not null, a debug warning is printed and the function returns early. Curiously, the code then attempts to free the buffer through a pointer that is known to be null at that point. Moreover, *NULL* is then written into that already-null pointer. None of this makes sense. Most likely, the memory-release lines are misplaced, and I think the code should look like this:
```
if (!sae_is_password_id_elem(pos, end)) {
if (sae->tmp->pw_id) {
wpa_printf(MSG_DEBUG,
"SAE: No Password Identifier included, but expected one (%s)",
sae->tmp->pw_id);
os_free(sae->tmp->pw_id);
sae->tmp->pw_id = NULL;
return WLAN_STATUS_UNKNOWN_PASSWORD_IDENTIFIER;
}
return WLAN_STATUS_SUCCESS; /* No Password Identifier */
}
```
First, it will probably fix a memory leak. Second, private data will no longer linger in memory needlessly.
According to the CWE, this error is formally classified as [CWE-628: Function Call with Incorrectly Specified Arguments](https://cwe.mitre.org/data/definitions/628.html). This is how PVS-Studio classifies it. Judging by its essence and consequences, though, this is another weakness in the code.
### Error N22, N23; An uninitialized buffer is used as a key
PVS-Studio warning: V614 Uninitialized buffer 'hex' used. Consider checking the second actual argument of the 'memcpy' function. wps\_registrar.c 1657
```
int wps_build_cred(struct wps_data *wps, struct wpabuf *msg)
{
....
} else if (wps->use_psk_key && wps->wps->psk_set) {
char hex[65];
wpa_printf(MSG_DEBUG, "WPS: Use PSK format for Network Key");
os_memcpy(wps->cred.key, hex, 32 * 2);
wps->cred.key_len = 32 * 2;
} else if (wps->wps->network_key) {
....
}
```
An uninitialized *hex* buffer is used to initialize a key. It is not clear why it is done this way. This may be an attempt to fill the key with a random value, but that is still a very bad option.
In any case, this code needs to be carefully checked.
According to the CWE, this error is classified as [CWE-457](https://cwe.mitre.org/data/definitions/457.html): Use of Uninitialized Variable.
Similar error: V614 Uninitialized buffer 'hex' used. Consider checking the second actual argument of the 'memcpy' function. wps\_registrar.c 1678
Typos and Copy-Paste
--------------------
### Error N24; Classic Copy-Paste
PVS-Studio warning: V523 The 'then' statement is equivalent to the 'else' statement. timer.c 292
```
esp_err_t timer_isr_register(....)
{
....
if ((intr_alloc_flags & ESP_INTR_FLAG_EDGE) == 0) {
intr_source = ETS_TG1_T0_LEVEL_INTR_SOURCE + timer_num;
} else {
intr_source = ETS_TG1_T0_LEVEL_INTR_SOURCE + timer_num;
}
....
}
```
I suspect the author copied the line but forgot to change something in it. As a result, regardless of the condition, the same value is written in the *intr\_source* variable.
Note. Well, chances are, this was intended. For example, the two values may genuinely have to match for now (a kind of "todo" code). However, in that case there should be an explanatory comment.
### Error N25; Parenthesis is misplaced
PVS-Studio warning: V593 Consider reviewing the expression of the 'A = B != C' kind. The expression is calculated as following: 'A = (B != C)'. esp\_tls\_mbedtls.c 446
```
esp_err_t set_client_config(....)
{
....
if ((ret = mbedtls_ssl_conf_alpn_protocols(&tls->conf, cfg->alpn_protos) != 0))
{
ESP_LOGE(TAG, "mbedtls_ssl_conf_alpn_protocols returned -0x%x", -ret);
ESP_INT_EVENT_TRACKER_CAPTURE(tls->error_handle, ERR_TYPE_MBEDTLS, -ret);
return ESP_ERR_MBEDTLS_SSL_CONF_ALPN_PROTOCOLS_FAILED;
}
....
}
```
The [priority](https://www.viva64.com/en/t/0064/) of the comparison operator is higher than the priority of the assignment operator. Therefore, the condition is calculated as follows:
```
TEMP = mbedtls_ssl_conf_alpn_protocols(....) != 0;
if ((ret = TEMP))
PRINT(...., -ret);
```
Basically, an erroneous situation is caught and handled in the code, but not as intended. It was supposed to print the error status that is stored in the *ret* variable. But the *ret* value will always be 0 or 1. So if something goes wrong, only one value (-1) will always be printed.
The error occurred due to the misplaced parenthesis. Correct code:
```
if ((ret = mbedtls_ssl_conf_alpn_protocols(&tls->conf, cfg->alpn_protos)) != 0)
```
Now everything will be calculated as needed:
```
ret = mbedtls_ssl_conf_alpn_protocols(....);
if (ret != 0)
PRINT(...., -ret);
```
Now let's see another very similar case.
### Error N26; MP\_MEM turns into MP\_YES
V593 Consider reviewing the expression of the 'A = B != C' kind. The expression is calculated as following: 'A = (B != C)'. libtommath.h 1660
Let's start with some constants. We will use them below.
```
#define MP_OKAY 0 /* ok result */
#define MP_MEM -2 /* out of mem */
#define MP_VAL -3 /* invalid input */
#define MP_YES 1 /* yes response */
```
Next, I should mention the *mp\_init\_multi* function, which can return the *MP\_OKAY* and *MP\_MEM* values:
```
static int mp_init_multi(mp_int *mp, ...);
```
Here is the code with the error:
```
static int
mp_div(mp_int * a, mp_int * b, mp_int * c, mp_int * d)
{
....
/* init our temps */
if ((res = mp_init_multi(&ta, &tb, &tq, &q, NULL) != MP_OKAY)) {
return res;
}
....
}
```
Let's consider the check more carefully:
```
if ((res = mp_init_multi(....) != MP_OKAY))
```
Again, the parenthesis is placed incorrectly. Therefore, here's what we get at the beginning:
```
TEMP = (mp_init_multi(....) != MP_OKAY);
```
The *TEMP* value can only be 0 or 1. These numbers correspond to the constants *MP\_OKAY* and *MP\_YES*.
Further we see the assignment and the check at the same time:
```
if ((res = TEMP))
return res;
```
You see the catch? The error status *MP\_MEM* (-2) suddenly turned into the status *MP\_YES* (1). The consequences are unpredictable, but nothing good will come of them.
### Error N27; Forgot to dereference a pointer
PVS-Studio warning: V595 The 'outbuf' pointer was utilized before it was verified against nullptr. Check lines: 374, 381. protocomm.c 374
```
static int protocomm_version_handler(uint32_t session_id,
const uint8_t *inbuf, ssize_t inlen,
uint8_t **outbuf, ssize_t *outlen,
void *priv_data)
{
protocomm_t *pc = (protocomm_t *) priv_data;
if (!pc->ver) {
*outlen = 0;
*outbuf = NULL; // <=
return ESP_OK;
}
/* Output is a non null terminated string with length specified */
*outlen = strlen(pc->ver);
*outbuf = malloc(*outlen); // <=
if (outbuf == NULL) { // <=
ESP_LOGE(TAG, "Failed to allocate memory for version response");
return ESP_ERR_NO_MEM;
}
memcpy(*outbuf, pc->ver, *outlen);
return ESP_OK;
}
```
At first glance, the warning might seem obscure. Let's figure it out.
If the pointer *pc->ver* is null, the function terminates its work ahead of time and writes a value to the address stored in the *outbuf* pointer:
```
*outbuf = NULL;
```
This address is accessed further as well:
```
*outbuf = malloc(*outlen);
```
The analyzer does not like the reason why this pointer is checked:
```
if (outbuf == NULL)
```
The approach is definitely incorrect — the pointer is checked after it is dereferenced. Actually, it is not the pointer that is to be checked but what is written in it. The author just made a typo and missed the dereferencing operator (\*).
Correct code:
```
*outbuf = malloc(*outlen);
if (*outbuf == NULL) {
ESP_LOGE(TAG, "Failed to allocate memory for version response");
return ESP_ERR_NO_MEM;
}
```
### Error N28; Reassignment
PVS-Studio warning: V519 The 'usRegCount' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 186, 187. mbfuncholding.c 187
```
eMBException
eMBFuncReadHoldingRegister( UCHAR * pucFrame, USHORT * usLen )
{
....
USHORT usRegCount;
....
usRegCount = ( USHORT )( pucFrame[MB_PDU_FUNC_READ_REGCNT_OFF] << 8 );
usRegCount = ( USHORT )( pucFrame[MB_PDU_FUNC_READ_REGCNT_OFF + 1] );
....
}
```
Copy-paste definitely had a hand in this code: the line was copied, but only partially changed. A correct version of the same pattern can be found nearby:
```
usRegCount = ( USHORT )( pucFrame[MB_PDU_FUNC_WRITE_MUL_REGCNT_OFF] << 8 );
usRegCount |= ( USHORT )( pucFrame[MB_PDU_FUNC_WRITE_MUL_REGCNT_OFF + 1] );
```
In the erroneous fragment, the first line should most likely keep the = operator, while the second should use |= instead.
Logical errors
--------------
### Error N29-N31; Incorrect handling of return codes (Rare)
PVS-Studio warning: V547 Expression is always false. linenoise.c 256
```
static int getColumns(void) {
....
/* Restore position. */
if (cols > start) {
char seq[32];
snprintf(seq,32,"\x1b[%dD",cols-start);
if (fwrite(seq, 1, strlen(seq), stdout) == -1) {
/* Can't recover... */
}
flushWrite();
}
....
}
```
This is a harmless variant of incorrect handling of the status returned by the function. The error is benign in the sense that no handling is required: one failed to write a line, no big deal. Even though the code fragment is harmless, this style of writing programs is clearly not a role model.
The point of the error itself is that the *fwrite* function never returns the status -1. It practically can't, since *fwrite* returns a value of the unsigned *size\_t* integer type:
```
size_t fwrite( const void *restrict buffer, size_t size, size_t count,
FILE *restrict stream );
```
And here's what this [function](https://en.cppreference.com/w/c/io/fwrite) returns:
> The number of objects written successfully, which may be less than count if an error occurs.
>
>
>
> If size or count is zero, fwrite returns zero and performs no other action.
So, the status check is incorrect.
Similar places of harmless incorrect status checks:
* V547 Expression is always false. linenoise.c 481
* V547 Expression is always false. linenoise.c 569
### Error N32, N33; Incorrect handling of return codes (Medium)
PVS-Studio warning: V547 Expression is always false. linenoise.c 596
```
int linenoiseEditInsert(struct linenoiseState *l, char c) {
....
if (fwrite(&c,1,1,stdout) == -1) return -1;
....
}
```
This error is more serious, although it is similar to the previous one. If the character can't be written to the file, the *linenoiseEditInsert* function must stop working and return the status -1. But this will never happen, as *fwrite* does not return the value -1. So, this is a logical error in handling the situation where writing to the file fails.
Here is a similar error: V547 Expression is always false. linenoise.c 742
### Error N34; Incorrect handling of return codes (Well Done)
PVS-Studio warning: V547 Expression is always false. linenoise.c 828
```
static int linenoiseEdit(char *buf, size_t buflen, const char *prompt)
....
while(1) {
....
if (fread(seq+2, 1, 1, stdin) == -1) break;
....
}
....
}
```
As in the case of *fwrite*, the error is that the *[fread](https://en.cppreference.com/w/c/io/fread)* function does not return the value -1 as the status.
```
size_t fread( void *restrict buffer, size_t size, size_t count,
FILE *restrict stream );
```
> **Return value**
>
>
>
> Number of objects read successfully, which may be less than count if an error or end-of-file condition occurs.
>
>
>
> If size or count is zero, fread returns zero and performs no other action.
>
>
>
> fread does not distinguish between end-of-file and error, and callers must use feof and ferror to determine which occurred.
This code is even more dangerous. A read error is not caught, and the program continues to work with whatever data happens to be in the buffer at that moment. That is, the program always believes that it has successfully read another byte from the file, although this may not be the case.
### Error N35; || operator instead of &&
PVS-Studio warning: V547 Expression is always true. essl\_sdio.c 209
```
esp_err_t essl_sdio_init(void *arg, uint32_t wait_ms)
{
....
// Set block sizes for functions 1 to given value (default value = 512).
if (ctx->block_size > 0 || ctx->block_size <= 2048) {
bs = ctx->block_size;
} else {
bs = 512;
}
....
}
```
One can attribute this bug to typos. In my opinion, by its nature it's closer to logical errors. I think the reader understands that the classification of errors is often quite arbitrary.
So, what we have here is an always-true condition: any value is either greater than 0 or less than or equal to 2048, so at least one operand of || always holds. Because of this, the size of a block will not be limited to 512.
Here is the correct version of code:
```
if (ctx->block_size > 0 && ctx->block_size <= 2048) {
bs = ctx->block_size;
} else {
bs = 512;
}
```
### Error N35-N38; Variable does not change
PVS-Studio warning: V547 Expression 'depth <= 0' is always false. panic\_handler.c 169
```
static void print_backtrace(const void *f, int core)
{
XtExcFrame *frame = (XtExcFrame *) f;
int depth = 100; // <=
//Initialize stk_frame with first frame of stack
esp_backtrace_frame_t stk_frame =
{.pc = frame->pc, .sp = frame->a1, .next_pc = frame->a0};
panic_print_str("\r\nBacktrace:");
print_backtrace_entry(esp_cpu_process_stack_pc(stk_frame.pc),
stk_frame.sp);
//Check if first frame is valid
bool corrupted =
!(esp_stack_ptr_is_sane(stk_frame.sp) &&
(esp_ptr_executable((void *)esp_cpu_process_stack_pc(stk_frame.pc)) ||
/* Ignore the first corrupted PC in case of InstrFetchProhibited */
frame->exccause == EXCCAUSE_INSTR_PROHIBITED));
//Account for stack frame that's already printed
uint32_t i = ((depth <= 0) ? INT32_MAX : depth) - 1; // <=
....
}
```
The *depth* variable is assigned a value of 100, and until this variable is checked, its value does not change anywhere. It is very suspicious. Did someone forget to do something with it?
Similar cases:
* V547 Expression 'xAlreadyYielded == ((BaseType\_t) 0)' is always true. event\_groups.c 260
* V547 Expression 'xAlreadyYielded == ((BaseType\_t) 0)' is always true. tasks.c 1475
* V547 Expression 'xAlreadyYielded == ((BaseType\_t) 0)' is always true. tasks.c 1520
### Error N39; Uninitialized buffer
PVS-Studio warning: V614 Potentially uninitialized buffer 'k' used. Consider checking the second actual argument of the 'sae\_derive\_keys' function. sae.c 854
```
int sae_process_commit(struct sae_data *sae)
{
u8 k[SAE_MAX_PRIME_LEN];
if (sae->tmp == NULL ||
(sae->tmp->ec && sae_derive_k_ecc(sae, k) < 0) ||
(sae->tmp->dh && sae_derive_k_ffc(sae, k) < 0) ||
sae_derive_keys(sae, k) < 0)
return ESP_FAIL;
return ESP_OK;
}
```
Logical error. Let's say the *ec* and *dh* pointers are null. In this case, the *k* array is not initialized, but the *sae\_derive\_keys* function will still start processing it.
### Error N40; Always false condition
PVS-Studio warning: V547 Expression 'bit\_len == 32' is always false. spi\_flash\_ll.h 371
```
static inline void spi_flash_ll_set_usr_address(spi_dev_t *dev, uint32_t addr,
int bit_len)
{
// The blank region should be all ones
if (bit_len >= 32) {
dev->addr = addr;
dev->slv_wr_status = UINT32_MAX;
} else {
uint32_t padding_ones = (bit_len == 32? 0 : UINT32_MAX >> bit_len);
dev->addr = (addr << (32 - bit_len)) | padding_ones;
}
}
```
As you can easily see, the condition *bit\_len == 32* always evaluates to false: the *else* branch is only reached when *bit\_len* is less than 32. Perhaps the first check should have used the greater-than operator (>) rather than greater-than-or-equal (>=).
### Error N41; Reassignment
PVS-Studio warning: V519 The '\* pad\_num' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 46, 48. touch\_sensor\_hal.c 48
```
void touch_hal_get_wakeup_status(touch_pad_t *pad_num)
{
uint32_t touch_mask = 0;
touch_ll_read_trigger_status_mask(&touch_mask);
if (touch_mask == 0) {
*pad_num = -1;
}
*pad_num = (touch_pad_t)(__builtin_ffs(touch_mask) - 1);
}
```
The code is clearly wrong and there may be a missing *else* statement. I'm not sure, but maybe the code should look like this:
```
void touch_hal_get_wakeup_status(touch_pad_t *pad_num)
{
uint32_t touch_mask = 0;
touch_ll_read_trigger_status_mask(&touch_mask);
if (touch_mask == 0) {
*pad_num = -1;
} else {
*pad_num = (touch_pad_t)(__builtin_ffs(touch_mask) - 1);
}
}
```
Array index out of bounds
-------------------------
### Error N42; Incorrect boundary check
PVS-Studio warning: V557 Array overrun is possible. The value of 'frame->exccause' index could reach 16. gdbstub\_xtensa.c 132
```
int esp_gdbstub_get_signal(const esp_gdbstub_frame_t *frame)
{
const char exccause_to_signal[] =
{4, 31, 11, 11, 2, 6, 8, 0, 6, 7, 0, 0, 7, 7, 7, 7};
if (frame->exccause > sizeof(exccause_to_signal)) {
return 11;
}
return (int) exccause_to_signal[frame->exccause];
}
```
An index might overrun the array boundary by one element. For a correct check, one should use the greater-than-or-equal operator instead of the greater-than operator:
```
if (frame->exccause >= sizeof(exccause_to_signal)) {
```
### Error N43; Long error example :)
In the function below array overrun might happen in two places, so there are two relevant analyzer warnings at once:
* V557 Array overrun is possible. The value of 'other\_if' index could reach 3. mdns.c 2206
* V557 Array overrun is possible. The '\_mdns\_announce\_pcb' function processes value '[0..3]'. Inspect the first argument. Check lines: 1674, 2213. mdns.c 1674
Get ready, it will be a difficult case. First, let's take a look at the following named constants:
```
typedef enum mdns_if_internal {
MDNS_IF_STA = 0,
MDNS_IF_AP = 1,
MDNS_IF_ETH = 2,
MDNS_IF_MAX
} mdns_if_t;
```
Note that the value of the *MDNS\_IF\_MAX* constant is 3.
Now let's take a look at the definition of the *mdns\_server\_s* structure. Here it is important that the array *interfaces* consists of 3 elements.
```
typedef struct mdns_server_s {
struct {
mdns_pcb_t pcbs[MDNS_IP_PROTOCOL_MAX];
} interfaces[MDNS_IF_MAX];
const char * hostname;
const char * instance;
mdns_srv_item_t * services;
SemaphoreHandle_t lock;
QueueHandle_t action_queue;
mdns_tx_packet_t * tx_queue_head;
mdns_search_once_t * search_once;
esp_timer_handle_t timer_handle;
} mdns_server_t;
mdns_server_t * _mdns_server = NULL;
```
But there's more. We'll need to look inside the *\_mdns\_get\_other\_if* function. Note that it can return the *MDNS\_IF\_MAX* constant. That is, it can return the value 3.
```
static mdns_if_t _mdns_get_other_if (mdns_if_t tcpip_if)
{
if (tcpip_if == MDNS_IF_STA) {
return MDNS_IF_ETH;
} else if (tcpip_if == MDNS_IF_ETH) {
return MDNS_IF_STA;
}
return MDNS_IF_MAX;
}
```
And now, finally, we got to the errors!
```
static void _mdns_dup_interface(mdns_if_t tcpip_if)
{
    uint8_t i;
    mdns_if_t other_if = _mdns_get_other_if (tcpip_if);
    for (i=0; i<MDNS_IP_PROTOCOL_MAX; i++) {
        if (_mdns_server->interfaces[other_if].pcbs[i].pcb) {      // <=
            //stop this interface and mark as dup
            if (_mdns_server->interfaces[tcpip_if].pcbs[i].pcb) {
                _mdns_clear_pcb_tx_queue_head(tcpip_if, i);
                _mdns_pcb_deinit(tcpip_if, i);
            }
            _mdns_server->interfaces[tcpip_if].pcbs[i].state = PCB_DUP;
            _mdns_announce_pcb(other_if, i, NULL, 0, true);        // <=
        }
    }
}
```
So, we know that the *\_mdns\_get\_other\_if* function can return 3. The variable *other\_if* can be equal to 3. And here is the first potential array boundary violation:
```
if (_mdns_server->interfaces[other_if].pcbs[i].pcb)
```
The second place where the *other\_if* variable is used dangerously is when calling the *\_mdns\_announce\_pcb* function:
```
_mdns_announce_pcb(other_if, i, NULL, 0, true);
```
Let's look inside this function:
```
static void _mdns_announce_pcb(mdns_if_t tcpip_if,
mdns_ip_protocol_t ip_protocol,
mdns_srv_item_t ** services,
size_t len, bool include_ip)
{
mdns_pcb_t * _pcb = &_mdns_server->interfaces[tcpip_if].pcbs[ip_protocol];
....
}
```
Again, index 3 can be used to access an array consisting of 3 elements, whereas the maximum available index is two.
Null pointers
-------------
### Error N44-N47; Incorrect order of checking pointers
PVS-Studio warning: V595 The 'hapd->wpa\_auth' pointer was utilized before it was verified against nullptr. Check lines: 106, 113. esp\_hostap.c 106
```
bool hostap_deinit(void *data)
{
struct hostapd_data *hapd = (struct hostapd_data *)data;
if (hapd == NULL) {
return true;
}
if (hapd->wpa_auth->wpa_ie != NULL) {
os_free(hapd->wpa_auth->wpa_ie);
}
if (hapd->wpa_auth->group != NULL) {
os_free(hapd->wpa_auth->group);
}
if (hapd->wpa_auth != NULL) {
os_free(hapd->wpa_auth);
}
....
}
```
Incorrect order of checking pointers:
```
if (hapd->wpa_auth->group != NULL)
....
if (hapd->wpa_auth != NULL)
```
If the pointer *hapd->wpa\_auth* is null, then everything will end up badly. The sequence of actions should be reversed and made nested:
```
if (hapd->wpa_auth != NULL)
{
....
if (hapd->wpa_auth->group != NULL)
....
}
```
Similar errors:
* V595 The 'hapd->conf' pointer was utilized before it was verified against nullptr. Check lines: 118, 125. esp\_hostap.c 118
* V595 The 'sm' pointer was utilized before it was verified against nullptr. Check lines: 1637, 1647. esp\_wps.c 1637
* V595 The 'sm' pointer was utilized before it was verified against nullptr. Check lines: 1693, 1703. esp\_wps.c 1693
### Error N48-N64; No pointer checks after memory allocation
As we can see from the project, the authors usually check whether memory allocation succeeded. That is, there is a lot of code with such checks:
```
dhcp_data = (struct dhcp *)malloc(sizeof(struct dhcp));
if (dhcp_data == NULL) {
return ESP_ERR_NO_MEM;
}
```
But in some places checks are omitted.
PVS-Studio warning: V522 There might be dereferencing of a potential null pointer 'exp'. Check lines: 3470, 3469. argtable3.c 3470
```
TRex *trex_compile(const TRexChar *pattern,const TRexChar **error,int flags)
{
TRex *exp = (TRex *)malloc(sizeof(TRex));
exp->_eol = exp->_bol = NULL;
exp->_p = pattern;
....
}
```
This type of error is more complex and dangerous than it may seem at first glance. This topic is discussed in more detail in the article "[Why it is important to check what the malloc function returned](https://www.viva64.com/en/b/0558/)".
Other places with no checks:
* V522 There might be dereferencing of a potential null pointer 's\_ledc\_fade\_rec[speed\_mode][channel]'. Check lines: 668, 667. ledc.c 668
* V522 There might be dereferencing of a potential null pointer 'environ'. Check lines: 108, 107. syscall\_table.c 108
* V522 There might be dereferencing of a potential null pointer 'it'. Check lines: 150, 149. partition.c 150
* V522 There might be dereferencing of a potential null pointer 'eth'. Check lines: 167, 159. wpa\_auth.c 167
* V522 There might be dereferencing of a potential null pointer 'pt'. Check lines: 222, 219. crypto\_mbedtls-ec.c 222
* V522 There might be dereferencing of a potential null pointer 'attr'. Check lines: 88, 73. wps.c 88
* V575 The potential null pointer is passed into 'memcpy' function. Inspect the first argument. Check lines: 725, 724. coap\_mbedtls.c 725
* V575 The potential null pointer is passed into 'memset' function. Inspect the first argument. Check lines: 3504, 3503. argtable3.c 3504
* V575 The potential null pointer is passed into 'memcpy' function. Inspect the first argument. Check lines: 496, 495. mqtt\_client.c 496
* V575 The potential null pointer is passed into 'strcpy' function. Inspect the first argument. Check lines: 451, 450. transport\_ws.c 451
* V769 The 'buffer' pointer in the 'buffer + n' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 186, 181. cbortojson.c 186
* V769 The 'buffer' pointer in the 'buffer + len' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 212, 207. cbortojson.c 212
* V769 The 'out' pointer in the 'out ++' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 233, 207. cbortojson.c 233
* V769 The 'parser->m\_bufferPtr' pointer in the expression equals nullptr. The resulting value of arithmetic operations on this pointer is senseless and it should not be used. xmlparse.c 2090
* V769 The 'signature' pointer in the 'signature + curve->prime\_len' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 4112, 4110. dpp.c 4112
* V769 The 'key' pointer in the 'key + 16' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 634, 628. eap\_mschapv2.c 634
### Error N65, N66; No pointer checks after memory allocation (indicative case)
The following code contains exactly the same error as we discussed above, but it is more revealing and vivid. Note that the *realloc* function is used to allocate memory.
PVS-Studio warning: V701 realloc() possible leak: when realloc() fails in allocating memory, original pointer 'exp->\_nodes' is lost. Consider assigning realloc() to a temporary pointer. argtable3.c 3008
```
static int trex_newnode(TRex *exp, TRexNodeType type)
{
TRexNode n;
int newid;
n.type = type;
n.next = n.right = n.left = -1;
if(type == OP_EXPR)
n.right = exp->_nsubexpr++;
if(exp->_nallocated < (exp->_nsize + 1)) {
exp->_nallocated *= 2;
exp->_nodes = (TRexNode *)realloc(exp->_nodes,
exp->_nallocated * sizeof(TRexNode));
}
exp->_nodes[exp->_nsize++] = n; // NOLINT(clang-analyzer-unix.Malloc)
newid = exp->_nsize - 1;
return (int)newid;
}
```
First, if the *realloc* function returns *NULL*, the previous value of the *exp->\_nodes* pointer will be lost. A memory leak will happen.
Secondly, if the *realloc* function returns *NULL*, the value will not actually be written at the null address itself, since the index shifts the write elsewhere. I mean this line:
```
exp->_nodes[exp->_nsize++] = n;
```
*exp->\_nsize++* can have any value. If something is written in a random memory area that is available for writing, the program will continue its execution as if nothing had happened. In doing so, data structures will be destroyed, which will lead to unpredictable consequences.
Another such error: V701 realloc() possible leak: when realloc() fails in allocating memory, original pointer 'm\_context->pki\_sni\_entry\_list' is lost. Consider assigning realloc() to a temporary pointer. coap\_mbedtls.c 737
Miscellaneous errors
--------------------
### Error N67; Extra or incorrect code
PVS-Studio warning: V547 Expression 'ret != 0' is always false. sdio\_slave.c 394
```
esp_err_t sdio_slave_start(void)
{
....
critical_exit_recv();
ret = ESP_OK;
if (ret != ESP_OK) return ret;
sdio_slave_hal_set_ioready(context.hal, true);
return ESP_OK;
}
```
This is strange code that can be shortened to:
```
esp_err_t sdio_slave_start(void)
{
....
critical_exit_recv();
sdio_slave_hal_set_ioready(context.hal, true);
return ESP_OK;
}
```
I can't say for sure if there is an error or not. Perhaps what we see here is not something that was intended. Or perhaps this code appeared in the process of unsuccessful refactoring and is actually correct. In this case, it is really enough to simplify it a little, so that it looks more decent and understandable. One thing is for sure — this code deserves attention and review by the author.
### Error N68; Extra or invalid code
PVS-Studio warning: V547 Expression 'err != 0' is always false. sdio\_slave\_hal.c 96
```
static esp_err_t sdio_ringbuf_send(....)
{
uint8_t* get_ptr = ....;
esp_err_t err = ESP_OK;
if (copy_callback) {
(*copy_callback)(get_ptr, arg);
}
if (err != ESP_OK) return err;
buf->write_ptr = get_ptr;
return ESP_OK;
}
```
This case is very similar to the previous one. The *err* variable is redundant, or someone forgot to change it.
### Error N69; A potentially uninitialized buffer
PVS-Studio warning: V614 Potentially uninitialized buffer 'seq' used. Consider checking the first actual argument of the 'strlen' function. linenoise.c 435
```
void refreshShowHints(struct abuf *ab, struct linenoiseState *l, int plen) {
char seq[64];
if (hintsCallback && plen+l->len < l->cols) {
int color = -1, bold = 0;
char *hint = hintsCallback(l->buf,&color,&bold);
if (hint) {
int hintlen = strlen(hint);
int hintmaxlen = l->cols-(plen+l->len);
if (hintlen > hintmaxlen) hintlen = hintmaxlen;
if (bold == 1 && color == -1) color = 37;
if (color != -1 || bold != 0)
snprintf(seq,64,"\033[%d;%d;49m",bold,color);
abAppend(ab,seq,strlen(seq)); // <=
abAppend(ab,hint,hintlen);
if (color != -1 || bold != 0)
abAppend(ab,"\033[0m",4);
/* Call the function to free the hint returned. */
if (freeHintsCallback) freeHintsCallback(hint);
}
}
}
```
The *seq* buffer may or may not be filled! It is filled only when the condition is met:
```
if (color != -1 || bold != 0)
snprintf(seq,64,"\033[%d;%d;49m",bold,color);
```
It is logical to assume that the condition may not be met, in which case the buffer remains uninitialized and must not be appended to the *ab* string.
To remedy the situation, one should change the code as follows:
```
if (color != -1 || bold != 0)
{
snprintf(seq,64,"\033[%d;%d;49m",bold,color);
abAppend(ab,seq,strlen(seq));
}
```
### Error N70; Strange mask
PVS-Studio warning: V547 Expression is always false. tasks.c 896
```
#ifndef portPRIVILEGE_BIT
#define portPRIVILEGE_BIT ( ( UBaseType_t ) 0x00 )
#endif
static void prvInitialiseNewTask(...., UBaseType_t uxPriority, ....)
{
StackType_t *pxTopOfStack;
UBaseType_t x;
#if (portNUM_PROCESSORS < 2)
xCoreID = 0;
#endif
#if( portUSING_MPU_WRAPPERS == 1 )
/* Should the task be created in privileged mode? */
BaseType_t xRunPrivileged;
if( ( uxPriority & portPRIVILEGE_BIT ) != 0U )
{
xRunPrivileged = pdTRUE;
}
else
{
xRunPrivileged = pdFALSE;
}
....
}
```
The *portPRIVILEGE\_BIT* constant has the value 0. So, it's weird to use it as a mask:
```
if( ( uxPriority & portPRIVILEGE_BIT ) != 0U )
```
### Error N71; Memory leak
PVS-Studio warning: V773 The function was exited without releasing the 'sm' pointer. A memory leak is possible. esp\_wpa2.c 753
```
static int eap_peer_sm_init(void)
{
int ret = 0;
struct eap_sm *sm;
....
sm = (struct eap_sm *)os_zalloc(sizeof(*sm));
if (sm == NULL) {
return ESP_ERR_NO_MEM;
}
s_wpa2_data_lock = xSemaphoreCreateRecursiveMutex();
if (!s_wpa2_data_lock) {
wpa_printf(MSG_ERROR, "......."); // NOLINT(clang-analyzer-unix.Malloc)
return ESP_ERR_NO_MEM; // <=
}
....
}
```
If the *xSemaphoreCreateRecursiveMutex* function fails to create a mutex, then the *eap\_peer\_sm\_init* function will terminate and a memory leak will occur. As I understand it, one should add a call to the *os\_free* function to free the allocated memory:
```
s_wpa2_data_lock = xSemaphoreCreateRecursiveMutex();
if (!s_wpa2_data_lock) {
wpa_printf(MSG_ERROR, ".......");
os_free(sm);
return ESP_ERR_NO_MEM;
}
```
Interestingly, the Clang compiler also warns us about this error. However, the code author for some reason ignored it and even explicitly suppressed the corresponding warning:
```
// NOLINT(clang-analyzer-unix.Malloc)
```
The presence of this suppressing comment is unclear to me. There is definitely a bug. Perhaps the code author simply did not understand what the compiler complained about and decided that it was a false positive.
Conclusion
----------
Thanks for your attention. As you can see, there are a lot of errors. And this was only a cursory review of an incomplete report. I hope that Yuri Popov will take the baton and describe even more mistakes in his subsequent article :).
Use the PVS-Studio static analyzer regularly. This will let you:
1. find many errors at an early stage, which will significantly reduce the cost of detecting and correcting them;
2. detect and correct stupid typos and other mistakes using static analysis. You will free up time that can be spent on a higher-level review of the code and algorithms;
3. better control the quality of the code of beginners and teach them to write clean and reliable code faster.
In addition, when it comes to software for embedded devices, it is very important to eliminate as many errors as possible before the devices are released into service. Therefore, any additional error found using the code analyzer is a great finding. Each undetected error in the hardware and software device potentially carries reputational risks as well as costs for updating the firmware.
You're welcome to [download and try](https://www.viva64.com/en/pvs-studio-download/?promo=pvs_ak) a trial PVS-Studio analyzer version. I also remind you that if you are developing an open source project or using the analyzer for academic purposes, we offer several free licenses [options](https://www.viva64.com/en/b/0614/) for such cases. Don't wait for an insidious bug to eat your leg, start using PVS-Studio right now. | https://habr.com/ru/post/538286/ | null | null | 7,755 | 57.77 |
Within this series of posts introduced cleaning up the Hello World web part that you get when you get the SPFx Client Side web part using Yeoman.
In this post I’m going to create my base render web part and I’ll display some data fro SharePoint lists in my web part.
In SharePoint I have created a list (Bids) with the following fields:
- Title (Single line of text)
- PersonResponsible (Person/Group field)
- Urgency (Choice field with values of Low, Medium, High)
- ForecastClose ( Date Field)
I created a few more fields but within my posts I’m going to focus on these fields as they cover most of the common type of fields in SharePoint.
To support the data from SharePoint lists in my TypeScript code I’m creating an interface matching the lists fields, by creating a new file in my projects calls IBidTrackingBid.ts with the following content:
export interface Bid { Id: Number; Title: string; PersonResponsible?: string; PersonResponsibleId?: number; ForecastClose?: Date; Urgency?: string; }
To make my life a little bit easier I’m also adding an interface for a collections of bids.
export interface Bids { value: Bid[]; }
A couple of things to notice here.
- All the properties that end with a “?” postfix are optional fields.
- For my people field I’ve got a field matching my SharePoint name and I’m also using the Id property.
I will be using the REST API to get to my data and for people fields and choice fields I’m getting the item IDs returned.
Now within my web part class I’m adding two methods
private _getMockListDataBids(): Promise { ... } private _getListDataBids(): Promise { ... }
These 2 methods will get data from my mock data and from SharePoint. Note that both methods return my data in the same format. Both return a Promise of the type Bids.
Reading Mock data
To feed the above method with Mock data I’ve created a new file in my project MockBidTracking.ts. this mock data is used to test my web part while I’m not running the web part within a SharePoint environment
In the MockBidTracking I start with importing my bid interface.
import { Bid } from './IBidTrackingBid';
Now I’m ready to create a new class.
export default class MockHttpBid { }
Within the above class I’m creating a private variable that holds my mock data
Then to get to my data from my web part I create a method to return my mockdata:
public static get(): Promise<Bid[]> { return new Promise<Bid[]>((resolve) => { resolve(MockHttpBid._items); }); }
now withing my web part class I can add a method that reads the data from the mock store:
private _getMockListDataBids(): Promise { return MockHttpBid.get() .then((data: Bid[]) => { var listData: Bids = { value: data }; return listData; }) as Promise; }
Reading SharePoint Data
To read data from SharePoint I only need to call a REST API at : /_api/web/lists/GetByTitle(‘Bids’)/items and then return the JSON that is returned.
private _getListDataBids(): Promise { return this.context.spHttpClient.get(this.context.pageContext.web.absoluteUrl + `/_api/web/lists/GetByTitle('Bids')/items`, SPHttpClient.configurations.v1) .then((response: SPHttpClientResponse) => { return response.json(); }); }
<h3>Rendering my Data</h3>
Now I'm adding another method to my Web Part class. This method will render my data independently from where the data comes from:
private _renderBidsAsync(): void { // Local environment if (Environment.type === EnvironmentType.Local) { this._getMockListDataBids().then((bids: Bids) => { this._renderBids(bids); }); } else if (Environment.type == EnvironmentType.SharePoint || Environment.type == EnvironmentType.ClassicSharePoint) { this._getListDataBids().then((bids: Bids) => { this._renderBids(bids); }); } }
So if I use mock data or if I use SharePoint data in both cases this._renderBids(bids) will be called. Now we only need to implement
_renderBids and the data will be rendered to my page.
private _renderBids(bid:Bids): void { let html:string = ''; if (this._allBids) { this._allBids.value.forEach((item: Bid) => { html += ' ... ' ; }); } const bidsContainer: Element = this.domElement.querySelector('#spBidsContainer'); bidsContainer.innerHTML = html; }
In my next post will look at creating forms within my client side web part.
I hear you say: how do you now render the the data?
The html variable that is used above could be typically set to something like this:
~~~~
html += <code>
- ${item.Title}
${initials}
${item.PipelineStage}
</code>
<div class="${styles.rowContainer}">
~~~~As there is a bit more to it than just normal html, I will address the details in a separate post.
your effort is good, but reading your blog through code is damn difficult, you should change your blog post with syntax highlighting in the code like Markdown syntax or something that helps readers.
LikeLiked by 1 person
Thanks for the feedback Luis, I’ve improved the formatting of this article a bit. I will look into using the Markdown options on WordPress. | https://veenstra.me.uk/2017/06/05/office-365-displaying-sharepoint-data-in-the-spfx-web-part/ | CC-MAIN-2019-09 | refinedweb | 784 | 56.05 |
package main
import (
"fmt"
)
func main() {
fmt.Println(say(9))
}
func say(num int)(total string){
return fmt.Sprintf("There are %s reasons to code!", num)
}
my output is
There are %!s(int=9) reasons to code!
What should I be doing to interpolate a number inside a string?
If you want to always use the "default" representation of no matter what type, use
%v as in
fmt.Sprintf("There are %v reasons to code!", num)
Try
%d instead of
%s. The d stands for decimal.
The appropriate documentation is here:
The output is saying exactly what is happening and what you need to know!
As you are trying to use a %s verb which is meant to strings the output says that:
!s(int=0) which means:
The value is not a string, but an integer.
Then, if you want to know what to use instead take a look at the fmt package page at the "integers" table:
%b base 2 %c the character represented by the corresponding Unicode code point %d base 10 %o base 8 %q a single-quoted character literal safely escaped with Go syntax. %x base 16, with lower-case letters for a-f %X base 16, with upper-case letters for A-F %U Unicode format: U+1234; same as "U+%04X"
So you can use any of this verbs to have the output correctly represented.
Or as previous answers says, you can also use the %v verb which means:
"the value in its default format". | http://m.dlxedu.com/m/askdetail/3/66c92a1273bf40756eae7e265cfe7faa.html | CC-MAIN-2018-22 | refinedweb | 252 | 73.68 |
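Putting the answers together, here is a corrected version of the original program. This is a sketch using the %d verb; %v would work equally well:

```go
package main

import "fmt"

// say formats the count with %d, the verb for base-10 integers.
func say(num int) string {
	return fmt.Sprintf("There are %d reasons to code!", num)
}

func main() {
	fmt.Println(say(9))
}
```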
On Tue, Jul 12, 2005 at 12:40:41AM -0400, Karim Yaghmour wrote:
> Greg KH wrote:
> > The path/filename dictates how it is used, so putting relayfs type files
> > in debugfs is just fine. debugfs allows any types of files to be there.
> ...
> > New trees in / are not LSB compliant, hence the reason for writing
> > securityfs to get rid of /selinux and other LSM filesystems that were
> > starting to sprout up.
> ...
> > But that's exactly what debugfs is for, to allow data to be dumped out
> > of the kernel for different usages.
> ...
> > Ok, have a better name for it? It's simple and easy to understand.
>
> It also carries with it the stigma of "kernel debugging", which I just
> don't see production system maintainers liking very much.

But they like the name "dtrace" instead? (sorry, couldn't resist...)

Come on, they will never see the name "debugfs", right? Your tools will
then have a common place to look for your ltt and other files, as you
_know_ where it will be mounted in the fs namespace.

And you _are_ doing kernel debugging and tracing with ltt, what's wrong
with admitting that?

> So tell you what, how about if we merged what's in debugfs into relayfs
> instead? We'll still end up with one filesystem, but we'll have a more
> inocuous name. After all, if debugfs is indeed for dumping data from the
> kernel to user-space for different usages, then relaying is what it's
> actually doing, right?

Sorry, but debugfs was there first, and people are already using it in
the kernel tree :)

Anyway, good luck trying to get the distros to accept
yet-another-fs-to-mount-somewhere, I know it was hard to get support for
sysfs as it was...

greg k-h
When carrying out an algorithmic trading strategy it is tempting to consider the annualised return as the most useful performance metric. However, there are many flaws with using this measure in isolation. The calculation of returns for certain strategies is not completely straightforward. This is especially true for strategies that aren't directional such as market-neutral variants or strategies which make use of leverage. These factors make it hard to compare two strategies based solely upon their returns.
In addition, if we are presented with two strategies possessing identical returns how do we know which one contains more risk? Further, what do we even mean by "more risk"? In finance, we are often concerned with volatility of returns and periods of drawdown. Thus if one of these strategies has a significantly higher volatility of returns we would likely find it less attractive, despite the fact that its historical returns might be similar if not identical.
These problems of strategy comparison and risk assessment motivate the use of the Sharpe Ratio.
Definition of the Sharpe Ratio
William Forsyth Sharpe is a Nobel-prize winning economist, who helped create the Capital Asset Pricing Model (CAPM) and developed the Sharpe Ratio in 1966 (later updated in 1994).
The Sharpe Ratio $S$ is defined by the following relation:\begin{eqnarray} S = \frac{\mathbb{E}(R_a - R_b)}{\sqrt{\text{Var} (R_a - R_b)}} \end{eqnarray}
Where $R_a$ is the period return of the asset or strategy and $R_b$ is the period return of a suitable benchmark.
The ratio compares the mean average of the excess returns of the asset or strategy with the standard deviation of those returns. Thus a lower volatility of returns will lead to a greater Sharpe ratio, assuming identical returns.
The "Sharpe Ratio" often quoted by those carrying out trading strategies is the annualised Sharpe, the calculation of which depends upon the trading period of which the returns are measured. Assuming there are $N$ trading periods in a year, the annualised Sharpe is calculated as follows:\begin{eqnarray*} S_A = \sqrt{N} \frac{\mathbb{E}(R_a - R_b)}{\sqrt{\text{Var} (R_a - R_b)}} \end{eqnarray*}
Note that the Sharpe ratio itself MUST be calculated based on the Sharpe of that particular time period type. For a strategy based on trading period of days, $N = 252$ (as there are 252 trading days in a year, not 365), and $R_a$, $R_b$ must be the daily returns. Similarly for hours $N = 252 \times 6.5 = 1638$, not $N = 252 \times 24 = 6048$, since there are only 6.5 hours in a trading day.
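To make the period dependence concrete, here is a small illustrative sketch (my own, not from the original article) of the annualised Sharpe calculation, where N is the number of trading periods per year and the excess returns are invented toy values:

```python
import numpy as np

def annualised_sharpe(excess_returns, N=252):
    """Annualised Sharpe ratio of a stream of per-period excess returns."""
    r = np.asarray(excess_returns, dtype=float)
    return np.sqrt(N) * r.mean() / r.std()

# The same per-period return stream annualises differently depending on
# the trading period it represents:
r = [0.001, -0.002, 0.003, 0.0005, -0.001]         # toy excess returns
daily_sharpe = annualised_sharpe(r, N=252)          # interpreted as daily data
hourly_sharpe = annualised_sharpe(r, N=252 * 6.5)   # interpreted as hourly data
```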
Benchmark Inclusion
The formula for the Sharpe ratio above alludes to the use of a benchmark. A benchmark is used as a "yardstick" or a "hurdle" that a particular strategy must overcome for it to worth considering. For instance, a simple long-only strategy using US large-cap equities should hope to beat the S&P500 index on average, or match it for less volatility.
The choice of benchmark can sometimes be unclear. For instance, should a sector Exchange Traded Fund (ETF) be utilised as a performance benchmark for individual equities, or the S&P500 itself? Why not the Russell 3000? Equally, should a hedge fund strategy be benchmarking itself against a market index or an index of other hedge funds? There is also the complication of the "risk free rate". Should domestic government bonds be used? A basket of international bonds? Short-term or long-term bills? A mixture? Clearly there are plenty of ways to choose a benchmark! The Sharpe ratio generally utilises the risk-free rate and often, for US equities strategies, this is based on 10-year government Treasury bills.
In one particular instance, for market-neutral strategies, there is a particular complication regarding whether to make use of the risk-free rate or zero as the benchmark. The market index itself should not be utilised as the strategy is, by design, market-neutral. The correct choice for a market-neutral portfolio is not to subtract the risk-free rate because it is self-financing. Since you gain a credit interest, $R_f$, from holding a margin, the actual calculation for returns is: $(R_a + R_f) - R_f = R_a$. Hence there is no actual subtraction of the risk-free rate for dollar neutral strategies.
Limitations
Despite the prevalence of the Sharpe ratio within quantitative finance, it does suffer from some limitations.
Firstly, the Sharpe ratio is backward looking. It only accounts for the historical returns distribution and volatility, not those occurring in the future. When making judgements based on the Sharpe ratio there is an implicit assumption that the past will be similar to the future. This is evidently not always the case, particularly under market regime changes.
The Sharpe ratio calculation assumes that the returns being used are normally distributed (i.e. Gaussian). Unfortunately, markets often suffer from kurtosis above that of a normal distribution. Essentially the distribution of returns has "fatter tails" and thus extreme events are more likely to occur than a Gaussian distribution would lead us to believe. Hence, the Sharpe ratio is poor at characterising tail risk.
This can be clearly seen in strategies which are highly prone to such risks. For instance, the sale of call options (aka "pennies under a steam roller"). A steady stream of option premia are generated by the sale of call options over time, leading to a low volatility of returns, with a strong excess above a benchmark. In this instance the strategy would possess a high Sharpe ratio (based on historical data). However, it does not take into account that such options may be called, leading to significant and sudden drawdowns (or even wipeout) in the equity curve. Hence, as with any measure of algorithmic trading strategy performance, the Sharpe ratio cannot be used in isolation.
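The "pennies under a steam roller" effect can be illustrated with a toy simulation (my own invented numbers, not real option data): steady small gains produce a spectacular Sharpe ratio right up until a single large loss is included.

```python
import numpy as np

def annualised_sharpe(excess_returns, N=252):
    r = np.asarray(excess_returns, dtype=float)
    return np.sqrt(N) * r.mean() / r.std()

# 251 days of small, steady premium income (with a tiny wiggle so std > 0)...
steady = 0.0005 + 0.0001 * np.sin(np.arange(251))

# ...followed by one day where the short options are exercised
with_crash = np.append(steady, -0.20)

before = annualised_sharpe(steady)       # looks superb
after = annualised_sharpe(with_crash)    # the tail risk was there all along
```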
Although this point might seem obvious to some, transaction costs MUST be included in the calculation of Sharpe ratio in order for it to be realistic. There are countless examples of trading strategies that have high Sharpes (and thus a likelihood of great profitability) only to be reduced to low Sharpe, low profitability strategies once realistic costs have been factored in. This means making use of the net returns when calculating in excess of the benchmark. Hence, transaction costs must be factored in upstream of the Sharpe ratio calculation.
Practical Usage and Examples
One obvious question that has remained unanswered thus far in this article is "What is a good Sharpe Ratio for a strategy?". Pragmatically, you should ignore any strategy that possesses an annualised Sharpe ratio $S < 1$ after transaction costs. Quantitative hedge funds tend to ignore any strategies that possess Sharpe ratios $S < 2$. One prominent quantitative hedge fund that I am familiar with wouldn't even consider strategies that had Sharpe ratios $S < 3$ while in research. As a retail algorithmic trader, if you can achieve a Sharpe ratio $S>2$ then you are doing very well.
The Sharpe ratio will often increase with trading frequency. Some high frequency strategies will have high single (and sometimes low double) digit Sharpe ratios, as they can be profitable almost every day and certainly every month. These strategies rarely suffer from catastrophic risk and thus minimise their volatility of returns, which leads to such high Sharpe ratios.
Examples of Sharpe Ratios
This has been quite a theoretical article up to this point. Now we will turn our attention to some actual examples. We will start simply, by considering a long-only buy-and-hold of an individual equity, then consider a market-neutral strategy. Both of these examples have been carried out in the Python pandas data analysis library.
The first task is to actually obtain the data and put it into a pandas DataFrame object. In the article on securities master implementation in Python and MySQL I created a system for achieving this. Alternatively, we can make use of this simpler code to grab Yahoo Finance data directly and put it straight into a pandas DataFrame. At the bottom of this script I have created a function to calculate the annualised Sharpe ratio based on a time-period returns stream:
import datetime
import numpy as np
import pandas as pd
import urllib2

def get_historic_data(ticker,
                      start_date=(2000,1,1),
                      end_date=datetime.date.today().timetuple()[0:3]):
    """
    Obtains data from Yahoo Finance and adds it to a pandas DataFrame object.

    ticker: Yahoo Finance ticker symbol, e.g. "GOOG" for Google, Inc.
    start_date: Start date in (YYYY, M, D) format
    end_date: End date in (YYYY, M, D) format
    """
    # Construct the Yahoo URL with the correct integer query parameters
    # for start and end dates. Note that some parameters are zero-based!
    yahoo_url = "" % \
        (ticker, start_date[1] - 1, start_date[2], start_date[0],
         end_date[1] - 1, end_date[2], end_date[0])

    # Try connecting to Yahoo Finance and obtaining the data
    # On failure, print an error message
    try:
        yf_data = urllib2.urlopen(yahoo_url).readlines()
    except Exception, e:
        print "Could not download Yahoo data: %s" % e

    # Create the (temporary) Python data structures to store
    # the historical data
    date_list = []
    hist_data = [[] for i in range(6)]

    # Format and copy the raw text data into datetime objects
    # and floating point values (still in native Python lists)
    for day in yf_data[1:]:  # Avoid the header line in the CSV
        headers = day.rstrip().split(',')
        date_list.append(datetime.datetime.strptime(headers[0], '%Y-%m-%d'))
        for i, header in enumerate(headers[1:]):
            hist_data[i].append(float(header))

    # Create a Python dictionary of the lists and then use that to
    # form a sorted Pandas DataFrame of the historical data
    hist_data = dict(zip(['open', 'high', 'low', 'close', 'volume', 'adj_close'],
                         hist_data))
    pdf = pd.DataFrame(hist_data, index=pd.Index(date_list)).sort()
    return pdf

def annualised_sharpe(returns, N=252):
    """
    Calculate the annualised Sharpe ratio of a returns stream
    based on a number of trading periods, N. N defaults to 252,
    which then assumes a stream of daily returns.

    The function assumes that the returns are the excess of
    those compared to a benchmark.
    """
    return np.sqrt(N) * returns.mean() / returns.std()
Now that we have the ability to obtain data from Yahoo Finance and straightforwardly calculate the annualised Sharpe ratio, we can test out a buy and hold strategy for two equities. We will use Google (GOOG) and Goldman Sachs (GS) from Jan 1st 2000 to May 29th 2013 (when I wrote this article!).
We can create an additional helper function that allows us to quickly see buy-and-hold Sharpe across multiple equities for the same (hardcoded) period:
def equity_sharpe(ticker):
    """
    Calculates the annualised Sharpe ratio based on the daily
    returns of an equity ticker symbol listed in Yahoo Finance.

    The dates have been hardcoded here for the QuantStart article
    on Sharpe ratios.
    """
    # Obtain the equities daily historic data for the desired time period
    # and add to a pandas DataFrame
    pdf = get_historic_data(ticker, start_date=(2000,1,1), end_date=(2013,5,29))

    # Use the percentage change method to easily calculate daily returns
    pdf['daily_ret'] = pdf['adj_close'].pct_change()

    # Assume an average annual risk-free rate over the period of 5%
    pdf['excess_daily_ret'] = pdf['daily_ret'] - 0.05/252

    # Return the annualised Sharpe ratio based on the excess daily returns
    return annualised_sharpe(pdf['excess_daily_ret'])
For Google, the Sharpe ratio for buying and holding is 0.7501. For Goldman Sachs it is 0.2178:
>>> equity_sharpe('GOOG')
0.75013831274645904
>>> equity_sharpe('GS')
0.21777027767830823
Now we can try the same calculation for a market-neutral strategy. The goal of this strategy is to fully isolate a particular equity's performance from the market in general. The simplest way to achieve this is to go short an equal amount (in dollars) of an Exchange Traded Fund (ETF) that is designed to track such a market. The most obvious choice for the US large-cap equities market is the S&P500 index, which is tracked by the SPDR ETF, with the ticker of SPY.
To calculate the annualised Sharpe ratio of such a strategy we will obtain the historical prices for SPY and calculate the percentage returns in a similar manner to the previous stocks, with the exception that we will not use the risk-free benchmark. We will calculate the net daily returns, which requires taking the difference between the long and the short returns and then dividing by 2, as we now have twice as much trading capital. Here is the Python/pandas code to carry this out:
def market_neutral_sharpe(ticker, benchmark):
    """
    Calculates the annualised Sharpe ratio of a market
    neutral long/short strategy involving the long of 'ticker'
    with a corresponding short of the 'benchmark'.
    """
    # Get historic data for both a symbol/ticker and a benchmark ticker
    # The dates have been hardcoded, but you can modify them as you see fit!
    tick = get_historic_data(ticker, start_date=(2000,1,1), end_date=(2013,5,29))
    bench = get_historic_data(benchmark, start_date=(2000,1,1), end_date=(2013,5,29))

    # Calculate the percentage returns on each of the time series
    tick['daily_ret'] = tick['adj_close'].pct_change()
    bench['daily_ret'] = bench['adj_close'].pct_change()

    # Create a new DataFrame to store the strategy information
    # The net returns are (long - short)/2, since there is twice
    # the trading capital for this strategy
    strat = pd.DataFrame(index=tick.index)
    strat['net_ret'] = (tick['daily_ret'] - bench['daily_ret'])/2.0

    # Return the annualised Sharpe ratio for this strategy
    return annualised_sharpe(strat['net_ret'])
For Google, the Sharpe ratio for the long/short market-neutral strategy is 0.7597. For Goldman Sachs it is 0.2999:
>>> market_neutral_sharpe('GOOG', 'SPY')
0.75966612163452329
>>> market_neutral_sharpe('GS', 'SPY')
0.29991401047248328
Despite the Sharpe ratio being used almost everywhere in algorithmic trading, we need to consider other metrics of performance and risk. In later articles we will discuss drawdowns and how they affect the decision to run a strategy or not.
The opinions expressed herein are my own personal opinions and do not represent my employer's view in anyway.
Recently I have needed to extract a kind of "call tree" from some functions of my code.

More exactly, it was not quite a call tree: I needed to be able to list the methods called by a given entry point, and then see the size of each tree.
NDepend can very easily give this information. Let's see how!
Let's write some code that we'll use to validate our results. We'll start with a very basic set of classes that call each other:
namespace ClassLibrary1
{
    public class SecondClass
    {
        public void DoInSecondClass()
        {
            this.PrivateMethod();
        }

        public void DoInSecondClass_NoPrivate() { }

        private void PrivateMethod() { }
    }

    public class TopClass
    {
        public void Do()
        {
            this.FirstMethod();
            this.SecondMethod();
        }

        private void FirstMethod()
        {
            SecondClass obj = new SecondClass();
            obj.DoInSecondClass_NoPrivate();
        }

        private void SecondMethod()
        {
            SecondClass obj = new SecondClass();
            obj.DoInSecondClass();
        }
    }
}
Our project is very basic but should be enough to validate our results. So now, we will create our NDepend project to start collecting (and validating) our metrics.
And here you are, ready to start the analysis! Just click on Run Analysis and wait for NDepend to display the report.
We now want to find all the methods called by our entry point.
At the end of the analysis, NDepend pops up the report in your favorite browser. However, we'll close it for now and we will search our entry point in the Class Browser. To do that, either use the Show Menu button in the menu, or the hotkey ctrl + alt + C.
Just select the method you want, right-click on it, and select Who I use indirectly and then SELECT METHODS WHERE ...
What happens here? NDepend will generate a CQL query for us. A CQL query is written using the CQL language, a SQL-like language that allows us to run queries against our code to extract information, statistics, and more. Here we are using the following query:
SELECT METHODS WHERE IsUsedBy "ClassLibrary1.TopClass.Do()" ORDER BY DepthOfIsUsedBy
When we look at the result, it looks pretty good, except that we do not have the number of lines of code. That's easy to fix. We can just update the CQL query like this:
SELECT METHODS WHERE IsUsedBy "ClassLibrary1.TopClass.Do()" ORDER BY DepthOfIsUsedBy, NbLinesOfCode
and here is the result we get:

Of course, the last column is visible only because we have altered the query to add another ORDER BY clause.
It would be very nice to be able to extract a visual call tree with all the methods we have here in the CQL result. And of course, this can be done easily. Just go to the CQL Query Result page and right-click on the line marked 8 methods matched. Then you can choose Export 8 methods matched to Graph.
What we have here is pretty cool for finding information quickly, and in our case the dependencies between methods. Anyway, there is one feature that would be nice to add to the graph. We may want to start viewing a graph not from a CQL query result, but from a list of assemblies. And thus it could be interesting to be able to:
Anyway, even if some functionality could still be added to the tool, NDepend is a very interesting tool, easy to learn and very powerful - especially due to the CQL language.
Give it a try, and comment about your use!
A few months ago, Patrick Smacchia offered me a professional licence of NDepend so I could try it on my current projects.

I already knew NDepend as a reference tool for analysing code, extracting metrics, and more, but I had never had the opportunity to really try it. And I was so far from the truth!

Unfortunately, the last few months have been very busy (day-to-day job, articles, conferences, ...) and I have had to postpone my trials many times.

But better late than never! It's now at the top of my priority list, and all the trials I do with the tool show me its extraordinary possibilities. I will explain in a next post what my first use of NDepend was.
Ready for a try? Just download a trial version of the tool and have fun!
As I explained some time ago, I was writing an article about Continuous Integration in the Microsoft .NET world for the (French-speaking) website.

The article has been phased as follows:

To read this article, just go to or directly to my webpage on this site.

Note that these articles are available only in French for now, but do not hesitate to leave comments here if you think the content is interesting and worth translating.
import "github.com/elves/elvish/pkg/cli/histutil"
Package histutil provides utilities for working with command history.
db.go doc.go fuser.go simple_walker.go store.go walker.go
type DB interface {
	NextCmdSeq() (int, error)
	AddCmd(cmd string) (int, error)
	CmdsWithSeq(from, upto int) ([]store.Cmd, error)
	PrevCmd(upto int, prefix string) (store.Cmd, error)
}
DB is the interface of the storage database.
Fuser provides a view of command history that is fused from the shared storage-backed command history and per-session history.
NewFuser returns a new Fuser from a database.
AddCmd adds a command to both the database and the per-session history.
AllCmds returns all visible commands, consisting of commands that were already in the database at startup, plus the per-session history.
FastForward fast-forwards the view of command history, so that commands added by other sessions since the start of the current session are available.
LastCmd returns the last command within the fused view.
SessionCmds returns the per-session history.
Walker returns a walker for the fused command history.
type Store interface {
	// AddCmd adds a new command history entry and returns its sequence number.
	// Depending on the implementation, the Store might respect cmd.Seq and
	// return it as is, or allocate another sequence number.
	AddCmd(cmd store.Cmd) (int, error)
	// AllCmds returns all commands kept in the store.
	AllCmds() ([]store.Cmd, error)
	// LastCmd returns the last command in the store.
	LastCmd() (store.Cmd, error)
}
Store is an abstract interface for history store.
NewDBStore returns a Store backed by a database.
NewDBStoreFrozen returns a Store backed by a database, with the view of all commands frozen at creation.
NewMemoryStore returns a Store that stores command history in memory.
TestDB is an implementation of the DB interface that can be used for testing.
type Walker interface {
	Prefix() string
	CurrentSeq() int
	CurrentCmd() string
	Prev() error
	Next() error
}
Walker is used for walking through history entries with a given (possibly empty) prefix, skipping duplicates entries.
NewSimpleWalker returns a Walker, given the slice of all commands and the prefix.
Package histutil imports 4 packages and is imported by 4 packages. Updated 2019-12-30.
This post applies to Python 2.5 and 2.6 - if you see any difference for Python 3, please let me know.
Destructors are a very important concept in C++, where they're an essential ingredient of RAII - virtually the only real safe way to write code that involves allocation and deallocation of resources in an exception-throwing program.
In Python, destructors are needed much less, because Python has a garbage collector that handles memory management. However, while memory is the most common resource allocated, it is not the only one. There are also sockets and database connections to be closed, files, buffers and caches flushed and a few more resources that need to be released when an object is done with them.
So Python has the destructor concept - the __del__ method. For some reason, many in the Python community believe that __del__ is evil and shouldn't be used. However, a simple grep of the standard library shows dozens of uses of __del__ in classes we all use and love, so where's the catch? In this article I'll try to make it clear (first and foremost for myself), when __del__ should be used, and how.
Simple code samples
First a basic example:
class FooType(object):
    def __init__(self, id):
        self.id = id
        print self.id, 'born'

    def __del__(self):
        print self.id, 'died'

ft = FooType(1)
This prints:
1 born
1 died
Now, recall that due to the usage of a reference-counting garbage collector, Python won't clean up an object when it goes out of scope. It will clean it up when the last reference to it has gone out of scope. Here's a demonstration:
class FooType(object):
    def __init__(self, id):
        self.id = id
        print self.id, 'born'

    def __del__(self):
        print self.id, 'died'

def make_foo():
    print 'Making...'
    ft = FooType(1)
    print 'Returning...'
    return ft

print 'Calling...'
ft = make_foo()
print 'End...'
This prints:
Calling...
Making...
1 born
Returning...
End...
1 died
The destructor was called after the program ended, not when ft went out of scope inside make_foo.
Alternatives to the destructor
Before I proceed, a proper disclosure: Python provides a better method for managing resources than destructors - contexts. I won't turn this into a tutorial of contexts, but you should really get yourself familiar with the with statement and objects that can be used inside. For example, the best way to handle writing to a file is:
with open('out.txt', 'w') as of:
    of.write('222')
This makes sure the file is properly closed when the block inside with exits, even if exceptions are thrown. Note that this demonstrates a standard context manager. Another is threading.Lock, which returns a context manager very suitable to be used in a with statement. You should read PEP 343 for more details.
While recommended, with isn't always applicable. For example, assume you have an object that encapsulates some sort of a database that has to be committed and closed when the object ends its existence. Now suppose the object should be a member variable of some large and complex class (say, a GUI dialog, or a MVC model class). The parent interacts with the DB object from time to time in different methods, so using with isn't practical. What's needed is a functioning destructor.
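As an illustration of this use case, here is a minimal sketch (my own toy code, not a real database library) of a resource-owning object that commits and closes itself in its destructor; the `closed` list merely stands in for the real commit-and-release work:

```python
closed = []  # records which "databases" were released (stand-in for real work)

class Database(object):
    """Encapsulates a connection and releases it in the destructor."""
    def __init__(self, path):
        self.path = path
        self._open = True

    def close(self):
        # Idempotent: commit and release the underlying handle only once.
        if self._open:
            self._open = False
            closed.append(self.path)

    def __del__(self):
        self.close()
```

A parent object (a GUI dialog, a model class) can now simply hold a Database member and use it from any method; when the parent dies and the last reference disappears, the destructor releases the resource.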
Where destructors go astray
To solve the use case I presented in the last paragraph, you can employ the __del__ destructor. However, it's important to know that this doesn't always work well. The nemesis of a reference-counting garbage collector is circular references. Here's an example:
class FooType(object):
    def __init__(self, id, parent):
        self.id = id
        self.parent = parent
        print 'Foo', self.id, 'born'

    def __del__(self):
        print 'Foo', self.id, 'died'

class BarType(object):
    def __init__(self, id):
        self.id = id
        self.foo = FooType(id, self)
        print 'Bar', self.id, 'born'

    def __del__(self):
        print 'Bar', self.id, 'died'

b = BarType(12)
Output:
Foo 12 born
Bar 12 born
Ouch... what has happened? Where are the destructors? Here's what the Python documentation has to say on the matter:
Circular references which are garbage are detected when the option cycle detector is enabled (it’s on by default), but can only be cleaned up if there are no Python-level __del__() methods involved.
Python doesn't know the order in which it's safe to destroy objects that hold circular references to each other, so as a design decision, it just doesn't call the destructors for such methods!
So, now what?
Should we avoid using destructors because of this deficiency? I'm very surprised to see that many Pythonistas think so, and recommend using explicit close methods. But I disagree - explicit close methods are less safe, since they are easy to forget to call. Moreover, when exceptions can happen (and in Python they happen all the time), managing explicit closing becomes very difficult and burdensome.
I actually think that destructors can and should be used safely in Python. With a couple of precautions, it's definitely possible.
First and foremost, note that justified cyclic references are a rare occurrence. I say justified on purpose - a lot of uses in which cyclic references arise are an example of bad design and leaky abstractions.
As a general rule of thumb, resources should be held by the lowest-level objects possible. Don't hold a DB resource directly in your GUI dialog. Use an object to encapsulate the DB connection and close it safely in the destructor. The DB object has no reason whatsoever to hold references to other objects in your code. If it does - it violates several good-design practices.
Sometimes Dependency Injection can help prevent cyclic references in complex code, but even in those rare few cases when you find yourself needing a true cyclic reference, there's a solution. Python provides the weakref module for this purpose. The documentation quickly reveals that this is exactly what we need here.
Here's the previous example rewritten with weakref:
import weakref

class FooType(object):
    def __init__(self, id, parent):
        self.id = id
        self.parent = weakref.ref(parent)
        print 'Foo', self.id, 'born'

    def __del__(self):
        print 'Foo', self.id, 'died'

class BarType(object):
    def __init__(self, id):
        self.id = id
        self.foo = FooType(id, self)
        print 'Bar', self.id, 'born'

    def __del__(self):
        print 'Bar', self.id, 'died'

b = BarType(12)
Now we get the result we want:
Foo 12 born
Bar 12 born
Bar 12 died
Foo 12 died
The tiny change in this example is that I use weakref.ref to assign the parent reference in the constructor of FooType. This is a weak reference, so it doesn't really create a cycle. Since the GC sees no cycle, it destroys both objects.
Conclusion
Python has perfectly usable object destruction via the __del__ method. It works fine for the vast majority of use-cases, but chokes on cyclic references. Cyclic references, however, are often a sign of bad design, and few of them are justified. For the teeny tiny amount of uses cases where justified cyclic references have to be used, the cycles can be easily broken with weak references, which Python provides in the weakref module.
References
Some links that were useful in the preparation of this article:
- Python destructor and garbage collection notes
- RAII
- The Python documentation
- Two related Stack Overflow discussions
Another Slight Mistake...
On Wednesday night I implemented some additional author functionality for LifeFlow (the django blogging software that this blog is implemented in), which allows each entry to be associated with 0+ authors.
This update went fine, including migrating to the new database schema. But I ran into some trouble in a seemingly innocuous task. In the Django shell I ran:
from lifeflow.models import Author, Entry

a = Author.objects.get(pk=1)
for entry in Entry.objects.all():
    entry.authors.add(a)
    entry.save()
The problem is with the way that I am currently dealing with manually setting many2many fields in the Entry model (by creating the object, and then editing it in a post save hook, and then saving it again, with a bit of code to prevent endless loops).
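The pattern being described (create the object, edit its many-to-many fields in a post-save hook, then save again, with a guard against endless loops) can be sketched like this; it is toy code of mine, not LifeFlow's actual implementation:

```python
class Entry(object):
    """Toy model showing a guarded re-save from a post-save hook."""
    def __init__(self):
        self.saves = 0
        self._resaving = False

    def save(self):
        self.saves += 1
        self.post_save_hook()

    def post_save_hook(self):
        if self._resaving:
            return              # the guard bit prevents an endless save loop
        self._resaving = True
        try:
            # ...edit the many-to-many fields here, then persist the edit...
            self.save()
        finally:
            self._resaving = False
```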
For whatever reason the rendering worked exactly right, except that the code snippets all got displayed as a string of data spew. Fortunately I was able to just manually save the entries one by one and everything worked properly again.
Not sure exactly what went wrong, but I need to do a bit of work improving that area anyway, so it should come out in time.
Sorry.
Mostly, add ``literal`` markers to a lot of things like C types, add code blocks, and fix the way a few things render. Signed-off-by: John Snow <jsnow@redhat.com> --- docs/devel/qapi-code-gen.rst | 172 ++++++++++++++++++----------------- 1 file changed, 90 insertions(+), 82 deletions(-) diff --git a/docs/devel/qapi-code-gen.rst b/docs/devel/qapi-code-gen.rst index b79ecddb599..4a28118d951 100644 --- a/docs/devel/qapi-code-gen.rst +++ b/docs/devel/qapi-code-gen.rst @@ -40,7 +40,7 @@ by any commands or events, for the side effect of generated C code used internally. There are several kinds of types: simple types (a number of built-in -types, such as 'int' and 'str'; as well as enumerations), arrays, +types, such as ``int`` and ``str``; as well as enumerations), arrays, complex types (structs and two flavors of unions), and alternate types (a choice between other types). @@ -51,37 +51,37 @@ Schema syntax Syntax is loosely based on `JSON <>`_. Differences: -* Comments: start with a hash character (#) that is not part of a +* Comments: start with a hash character (``#``) that is not part of a string, and extend to the end of the line. -* Strings are enclosed in 'single quotes', not "double quotes". +* Strings are enclosed in ``'single quotes'``, not ``"double quotes"``. * Strings are restricted to printable ASCII, and escape sequences to - just '\\'. + just ``\\``. -* Numbers and null are not supported. +* Numbers and ``null`` are not supported. A second layer of syntax defines the sequences of JSON texts that are a correctly structured QAPI schema. We provide a grammar for this syntax in an EBNF-like notation: -* optional. 
-* The symbol STRING is a terminal, and matches any JSON string -* The symbol BOOL is a terminal, and matches JSON false or true -* ALL-CAPS words other than STRING are non-terminals +* The symbol ``STRING`` is a terminal, and matches any JSON string +* The symbol ``BOOL`` is a terminal, and matches JSON ``false`` or ``true`` +* ALL-CAPS words other than ``STRING`` are non-terminals The order of members within JSON objects does not matter unless explicitly noted. @@ -109,27 +109,30 @@ These are discussed in detail below. Built-in Types -------------- -The following types are predefined, and map to C as follows:: +The following types are predefined, and map to C as follows: - + ============= ============== ============================================ Include directives @@ -174,14 +177,14 @@ Pragma 'doc-required' takes a boolean value. If true, documentation is required. Default is false. Pragma 'command-name-exceptions' takes a list of commands whose names -may contain '_' instead of '-'. Default is none. +may contain ``"_"`` instead of ``"-"``. Default is none. Pragma 'command-returns-exceptions' takes a list of commands that may violate the rules on permitted return types. Default is none. Pragma 'member-name-exceptions' takes a list of types whose member -names may contain uppercase letters, and '_' instead of '-'. Default -is none. +names may contain uppercase letters, and ``"_"`` instead of ``"-"``. +Default is none. Enumeration types @@ -200,7 +203,7 @@ Syntax:: Member 'enum' names the enum type. Each member of the 'data' array defines a value of the enumeration -type. The form STRING is shorthand for { 'name': STRING }. The +type. The form STRING is shorthand for :code:`{ 'name': STRING }`. The 'name' values must be be distinct. Example:: @@ -243,7 +246,7 @@ Syntax:: A string denotes the type named by the string. A one-element array containing a string denotes an array of the type -named by the string. Example: ['int'] denotes an array of 'int'. 
+named by the string. Example: ``['int']`` denotes an array of ``int``. Struct types @@ -266,11 +269,11 @@ Member 'struct' names the struct type. Each MEMBER of the 'data' object defines a member of the struct type. -The MEMBER's STRING name consists of an optional '*' prefix and the -struct member name. If '*' is present, the member is optional. +The MEMBER's STRING name consists of an optional ``*`` prefix and the +struct member name. If ``*`` is present, the member is optional. The MEMBER's value defines its properties, in particular its type. -The form TYPE-REF is shorthand for { 'type': TYPE-REF }. +The form TYPE-REF is shorthand for :code:`{ 'type': TYPE-REF }`. Example:: @@ -334,7 +337,7 @@ union must have at least one branch. The BRANCH's STRING name is the branch name. The BRANCH's value defines the branch's properties, in particular its -type. The form TYPE-REF is shorthand for { 'type': TYPE-REF }. +type. The form TYPE-REF is shorthand for :code:`{ 'type': TYPE-REF }`. A simple union type defines a mapping from automatic discriminator values to data types like in this example:: @@ -381,7 +384,7 @@ struct. The following example enhances the above simple union example by adding an optional common member 'read-only', renaming the discriminator to something more applicable than the simple union's -default of 'type', and reducing the number of {} required on the wire:: +default of 'type', and reducing the number of ``{}`` required on the wire:: { 'enum': 'BlockdevDriver', 'data': [ 'file', 'qcow2' ] } { 'union': 'BlockdevOptions', @@ -450,7 +453,7 @@ alternate. An alternate must have at least one branch. The ALTERNATIVE's STRING name is the branch name. The ALTERNATIVE's value defines the branch's properties, in particular -its type. The form STRING is shorthand for { 'type': STRING }. +its type. The form STRING is shorthand for :code:`{ 'type': STRING }`. 
Example:: @@ -515,7 +518,7 @@ If 'data' is a MEMBERS object, then MEMBERS defines arguments just like a struct type's 'data' defines struct type members. If 'data' is a STRING, then STRING names a complex type whose members -are the arguments. A union type requires 'boxed': true. +are the arguments. A union type requires ``'boxed': true``. Member 'returns' defines the command's return type. It defaults to an empty struct type. It must normally be a complex type or an array of @@ -555,7 +558,7 @@ section "Code generated for commands" for examples. The function returns the return type. When member 'boxed' is absent, it takes the command arguments as arguments one by one, in QAPI schema order. Else it takes them wrapped in the C struct generated for the -complex argument type. It takes an additional Error ** argument in +complex argument type. It takes an additional ``Error **`` argument in either case. The generator also emits a marshalling function that extracts @@ -638,11 +641,11 @@ blocking the guest and other background operations. Coroutine safety can be hard to prove, similar to thread safety. Common pitfalls are: -- The global mutex isn't held across qemu_coroutine_yield(), so +- The global mutex isn't held across ``qemu_coroutine_yield()``, so operations that used to assume that they execute atomically may have to be more careful to protect against changes in the global state. -- Nested event loops (AIO_WAIT_WHILE() etc.) are problematic in +- Nested event loops (``AIO_WAIT_WHILE()`` etc.) are problematic in coroutine context and can easily lead to deadlocks. They should be replaced by yielding and reentering the coroutine when the condition becomes false. @@ -650,9 +653,9 @@ pitfalls are: Since the command handler may assume coroutine context, any callers other than the QMP dispatcher must also call it in coroutine context. In particular, HMP commands calling such a QMP command handler must be -marked .coroutine = true in hmp-commands.hx. 
+marked ``.coroutine = true`` in hmp-commands.hx. -It is an error to specify both 'coroutine': true and 'allow-oob': true +It is an error to specify both ``'coroutine': true`` and ``'allow-oob': true`` for a command. We don't currently have a use case for both together and without a use case, it's not entirely clear what the semantics should be. @@ -689,7 +692,7 @@ If 'data' is a MEMBERS object, then MEMBERS defines event-specific data just like a struct type's 'data' defines struct type members. If 'data' is a STRING, then STRING names a complex type whose members -are the event-specific data. A union type requires 'boxed': true. +are the event-specific data. A union type requires ``'boxed': true``. An example event is:: @@ -763,16 +766,16 @@ digits, hyphen, and underscore. There are two exceptions: enum values may start with a digit, and names that are downstream extensions (see section Downstream extensions) start with underscore. -Names beginning with 'q\_' are reserved for the generator, which uses +Names beginning with ``q_`` are reserved for the generator, which uses them for munging QMP names that resemble C keywords or other -problematic strings. For example, a member named "default" in qapi -becomes "q_default" in the generated C code. +problematic strings. For example, a member named ``default`` in qapi +becomes ``q_default`` in the generated C code. Types, commands, and events share a common namespace. Therefore, generally speaking, type definitions should always use CamelCase for user-defined type names, while built-in types are lowercase. -Type names ending with 'Kind' or 'List' are reserved for the +Type names ending with ``Kind`` or ``List`` are reserved for the generator, which uses them for implicit union enums and array types, respectively. @@ -783,15 +786,15 @@ consistency is preferred over blindly avoiding underscore. Event names should be ALL_CAPS with words separated by underscore. 
-Member name 'u' and names starting with 'has-' or 'has\_' are reserved +Member name ``u`` and names starting with ``has-`` or ``has_`` are reserved for the generator, which uses them for unions and for tracking optional members. Any name (command, event, type, member, or enum value) beginning with -"x-" is marked experimental, and may be withdrawn or changed +``x-`` is marked experimental, and may be withdrawn or changed incompatibly in a future release. -Pragmas 'command-name-exceptions' and 'member-name-exceptions' let you +Pragmas ``command-name-exceptions`` and ``member-name-exceptions`` let you violate naming rules. Use for new code is strongly discouraged. @@ -805,7 +808,7 @@ who controls the valid, reverse fully qualified domain name RFQDN. RFQDN may only contain ASCII letters, digits, hyphen and period. Example: Red Hat, Inc. controls redhat.com, and may therefore add a -downstream command __com.redhat_drive-mirror. +downstream command ``__com.redhat_drive-mirror``. Configuring the schema @@ -879,7 +882,7 @@ this particular build. Documentation comments ---------------------- -A multi-line comment that starts and ends with a '##' line is a +A multi-line comment that starts and ends with a ``##`` line is a documentation comment. If the documentation comment starts like :: @@ -887,7 +890,7 @@ If the documentation comment starts like :: ## # @SYMBOL: -it documents the definition if SYMBOL, else it's free-form +it documents the definition of SYMBOL, else it's free-form documentation. See below for more on definition documentation. @@ -900,7 +903,7 @@ Headings and subheadings ~~~~~~~~~~~~~~~~~~~~~~~~ A free-form documentation comment containing a line which starts with -some '=' symbols and then a space defines a section heading:: +some ``=`` symbols and then a space defines a section heading:: ## # = This is a top level heading @@ -924,22 +927,22 @@ Documentation markup ~~~~~~~~~~~~~~~~~~~~ Documentation comments can use most rST markup. 
In particular, -a '::' literal block can be used for examples:: +a ``::`` literal block can be used for examples:: # :: # # Text of the example, may span # multiple lines -'*' starts an itemized list:: +``*`` starts an itemized list:: # * First item, may span # multiple lines # * Second item -You can also use '-' instead of '*'. +You can also use ``-`` instead of ``*``. -A decimal number followed by '.' starts a numbered list:: +A decimal number followed by ``.`` starts a numbered list:: # 1. First item, may span # multiple lines @@ -952,11 +955,11 @@ If a list item's text spans multiple lines, then the second and subsequent lines must be correctly indented to line up with the first character of the first line. -The usual '**strong**', '*emphasised*' and '``literal``' markup should -be used. If you need a single literal '*' you will need to +The usual ****strong****, *\*emphasized\** and ````literal```` markup +should be used. If you need a single literal ``*``, you will need to backslash-escape it. As an extension beyond the usual rST syntax, you -can also use '@foo' to reference a name in the schema; this is -rendered the same way as '``foo``'. +can also use ``@foo`` to reference a name in the schema; this is rendered +the same way as ````foo````. Example:: @@ -991,9 +994,9 @@ alternates), or value (for enums), and finally optional tagged sections. Descriptions of arguments can span multiple lines. The description -text can start on the line following the '@argname:', in which case it +text can start on the line following the '\@argname:', in which case it must not be indented at all. It can also start on the same line as -the '@argname:'. In this case if it spans multiple lines then second +the '\@argname:'. 
In this case if it spans multiple lines then second and subsequent lines must be indented to line up with the first character of the first line of the description:: @@ -1006,8 +1009,13 @@ character of the first line of the description:: The number of spaces between the ':' and the text is not significant. -FIXME: the parser accepts these things in almost any order. -FIXME: union branches should be described, too. +.. admonition:: FIXME + + The parser accepts these things in almost any order. + +.. admonition:: FIXME + + union branches should be described, too. Extensions added after the definition was first released carry a '(since x.y.z)' comment. -- 2.31.1 | https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg05388.html | CC-MAIN-2021-39 | refinedweb | 2,190 | 65.12 |
1. Comment on the output of this C code?
#include <stdio.h>
int main()
{
int a[5] = {1, 2, 3, 4, 5};
int i;
for (i = 0; i < 5; i++)
if ((char)a[i] == '5')
printf("%d\n", a[i]);
else
printf("FAIL\n");
}
a) The compiler will flag an error
b) Program will compile and print the output 5
c) Program will compile and print the ASCII value of 5
d) Program will compile and print FAIL for 5 times
View Answer
Explanation:The ASCII value of 5 is 53, the char type-casted integral value 5 is 5 only.
Output:
$ cc pgm1.c
$ a.out
FAILED
FAILED
FAILED
FAILED
FAILED
2. The format identifier ‘%i’ is also used for _____ data type?
a) char
b) int
c) float
d) double
View Answer
Explanation:Both %d and %i can be used as a format identifier for int data type.
3. Which data type is most suitable for storing a number 65000 in a 32-bit system?
a) signed short
b) unsigned short
c) long
d) int
View Answer
Explanation:65000 comes in the range of short (16-bit) which occupies the least memory. Signed short ranges from -32768 to 32767 and hence we should use unsigned short.
4. Which of the following is a User-defined data type?
a) typedef int Boolean;
b) typedef enum {Mon, Tue, Wed, Thu, Fri} Workdays;
c) struct {char name[10], int age};
d) all of the mentioned
View Answer
Explanation:typedef and struct are used to define user-defined data types.
5. What is the size of an int data type?
a) 4 Bytes
b) 8 Bytes
c) Depends on the system/compiler
d) Cannot be determined
View Answer
Explanation:The size of the data types depend on the system.
6. What is the output of this C code?
#include <stdio.h>
int main()
{
signed char chr;
chr = 128;
printf("%d\n", chr);
return 0;
}
a) 128
b) -128
c) Depends on the compiler
d) None of the mentioned
View Answer
Explanation:signed char will be a negative number.
Output:
$ cc pgm2.c
$ a.out
-128
7. Comment on the output of this C code?
#include <stdio.h>
int main()
{
char c;
int i = 0;
FILE *file;
file = fopen("test.txt", "w+");
fprintf(file, "%c", 'a');
fprintf(file, "%c", -1);
fprintf(file, "%c", 'b');
fclose(file);
file = fopen("test.txt", "r");
while ((c = fgetc(file)) != -1)
printf("%c", c);
return 0;
}
a) a
b) Infinite loop
c) Depends on what fgetc returns
d) Depends on the compiler
View Answer
Explanation:None.
Output:
$ cc pgm3.c
$ a.out
a
8. What is short int in C programming?
a) Basic datatype of C
b) Qualifier
c) short is the qualifier and int is the basic datatype
d) All of the mentioned
View Answer
Explanation:None.
Sanfoundry Global Education & Learning Series – C Programming Language.
Here’s the list of Best Reference Books in C Programming Language.
To practice all features of C programming language, here is complete set of 1000+ Multiple Choice Questions and Answers on C.
LinkedIn | Facebook | Twitter | Google+ | http://www.sanfoundry.com/c-programming-questions-answers-data-types-sizes-1/ | CC-MAIN-2017-39 | refinedweb | 515 | 72.46 |
Hi all, what I want to achieve is to know when a sprite is visible in the game scene using a sprite mask.
OnBecomeVisible() or renderer.isVisible does not seem to work. They are called right away or always set to true even though I am not showing them using a sprite mask.
OnBecomeVisible()
renderer.isVisible
What I want to get is to show specific enemies with my cursor, kind of like the Lens of Thruth in the Legend of Zelda, and that I have achieved. What I wanted to do now is do some other stuff(play sounds, particle effects, etc) at the moment the sprite becomes visible using my "Lens of Thruth"
I don't find necessary to add any image or code since this question is pretty much straight forward, but if necessary I do it.
Any help with this will be much appreciated.
As an extra for this I have checked that the OnBecameVisible() and renderer.isVisible as the unity API says, they will be activate or set to true as long as they are in the Game View or the Scene view.
OnBecameVisible()
Once you have the sprites outside both of thos cameras the renderer.isVislble is set to false. So know what I still haven't figure out is to know via script the moment a Sprite is visible using my LensOfThruth (Sprite Mask). Still figuring out how to do that.
renderer.isVislble
Answer by exploringunity
·
May 22 at 07:40 PM
Hey @CesarCanto,
Have you considered using colliders/triggers?
Below is an example test script that demonstrates the idea of using colliders to detect when the sprite mask has revealed a sprite. The first screenshot shows the scene setup and sprite mask ("magic spotlight") settings, and the second is the inspector for the sprite that is being revealed.
using UnityEngine;
public class MagicSpotlight : MonoBehaviour
{
Camera cam;
Transform camTransform;
Transform spotlightTransform;
void Start()
{
cam = Camera.main;
camTransform = cam.transform;
spotlightTransform = transform;
}
void Update()
{
var mousePos = Input.mousePosition;
mousePos.z = -camTransform.position.z;
var worldPos = cam.ScreenToWorldPoint(mousePos);
spotlightTransform.position = worldPos;
}
void OnTriggerEnter2D(Collider2D other)
{
Debug.Log($"Magic Spotlight revealed {other.gameObject.name}");
}
}
Hope this helps!
Thanks man! this was what I was looking for. using a Trigger as big as my sprite mask I could detect the enemy inside it and activate my events.
Also as a side note for this, this approach is for things that you can throw or manipulate in the game, but if you plan to use the mouse to drag a "Lens of Thruth" around the screen, you can also use a Trigger and call OnMouseEnter() inside the script attached to the object you want to interact with the sprite mask. Is basically the same approach and you don't have to use a RigidBody, but as it says it only works with the mouse.
OnMouseEnter()
Answer by frilanski
·
May 22 at 05:14 PM
How are you toggling whether it is visable? if you just enable and disable the sprite renderer then you should be able to just use the below:
if(renderer == true)
provided you're referencing the renderer correctly.
Hi frilanski, I don't want to enable/disable my Sprite Renderer I have been having some problems with that approach since the gameobject I want to "Appear/Disappear" is playing some animations, is better to use a mask so the Sprite Renderer can remain active and avoid problems with animations, at least in my case that was
189 People are following this question.
Sprites not rendering
3
Answers
Render a sprite only when it's on the parent but not other sprites.
0
Answers
Unity2D strange green line under sprites
0
Answers
Layer Sprites Based on Y axis
1
Answer
Changing the color of all children in an empty gameobject
1
Answer | https://answers.unity.com/questions/1733326/how-to-know-if-sprite-mask-is-showing-a-sprite-or.html?childToView=1733669 | CC-MAIN-2020-34 | refinedweb | 636 | 60.75 |
Jeff Turner wrote:
> Nicola Ken Barozzi wrote:
>
>> Jeff Turner wrote:
>>
>>> On Mon, Sep 08, 2003 at 06:27:05AM -0000, nicolaken@apache.org wrote:
>>>
>>>> nicolaken 2003/09/07 23:27:05
>>>>
>>>> Modified: src/resources/conf forrest.xmap
>>>> Log:
>>>> Add .html matcher as new way of defining ihtml pages.
>>>
>>> What happens if in 0.6, we decide to have a fully unified filesystem
>>> layout, where extension is the only way of differentiating raw and
>>> parsed
>>> content?
>>>
>>> That is more or less what I proposed in this thread:
>>>
>>>
>>>
>>> And you agreed that it was a superset of the previously proposed
>>> solution, that of having a raw/ directory:
>>>
>>>
>>
>> Wait a sec, AFAIU it's not the extension that defines binary or not,
>> but an attribute in site.xml.
>>
>> You wrote:
>> > Well we can make binary=true an inheritable attribute then:
>> >
>> > <apidocs binary="true">
>> > ...
>> > </apidocs>
>
> I was thinking that although it's *possible* to have foo.html and
> bar.html treated differently, it would be pretty confusing for users.
> The extension provides a nice simple filetype marker. As an analogy,
> modern filesystems don't *rely* on extensions for identifying file type,
> but people use them anyway, as a visual type indicator.
Well then that's not what I had understood. I reread the thread and I
still don't see this.
You wrote:
"
Perhaps we could rather use marker attributes in site.xml to indicate raw
content:
<site>
...
<salesreport href="sales.pdf" binary="true"/>
...
</site>
And then just have:
src/documentation/content/index.xml
src/documentation/content/sales.pdf
"
I don't see how this is a proposal about matching extensions to
processing rules.
>>> I don't want to limit our options in 0.6 for the minimal advantage of
>>> making *.ihtml easier to edit. So claim *.html if you want, but be
>>> aware
>>> that it may be redefined in 0.6.
>>
>> I thought that we had agreed on this, Jeff, and I thought that this
>> commit was in line with what we had decided.
>>
>> IE:
>> - Add .html matcher as new way of defining ihtml pages
>> - Make namespaced content pass the pipeline (so we can add xhtml things
>> to xdoc pages for special cases, as like ehtml)
>> - deprecate ihtml and ehtml
>
> I'm not sure how processing *.html as ihtml counts as a first step down
> this road of supporting mixed-namespace documents. For a start, '.html'
> is the wrong extension, as it's XML, not HTML.
Nope. It's html.
> Wouldn't it be better to
> graft HTML support onto doc-v12 instead of docv12 onto HTML?
I think you don't get it. Html is just another *source* format that gets
transformed in xdoc. It will *not* output extra facilities that doc-v12
doesn't have.
html and cwiki have the same purpose, to be an alternative source format
for document-dtd. I had posted a big drawing about it.
>> I'd add now:
>> - add a class attribute to all tags
>> - add a user.css stylesheet so that users can easily change styles or
>> attach new things to class attribute meanings
>
> Great, but AFACT that's completely independent of ihtml, right?
Yes, but related to the need of users to inject extra stuff in the docs.
>> Do you have other ideas now?
>>
>> I'm ok with rediscussing if you have other idea, just wanted to note
>> that I did this commit because I thought it was in line with the
>> decisions taken, not because I want to force things my way.
>
> Cool, just misunderstandings. It's been 7 months since 0.4, so I just
> want to get 0.5 out before we break some kind of ASF record for
> slackness ;)
I don't want to delay the release in discussions. If you don't want to
see this done now, feel free to revert, release and then rediscuss.
But I really thought we had finally gotten to a decision on this :-/
--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
--------------------------------------------------------------------- | http://mail-archives.apache.org/mod_mbox/forrest-dev/200309.mbox/%3Cbjk3br$fo8$1@sea.gmane.org%3E | CC-MAIN-2014-42 | refinedweb | 666 | 75 |
t_snd - send data or expedited data over a connection
#include <xti.h> int t_snd( int fd, void *buf, unsigned int nbytes, int flags)
This function is used to send either normal or expedited data. The argument fd identifies the local transport endpoint over which data should be sent, buf points to the user data, nbytes specifies the number of bytes of user data to be sent, and flags specifies any optional flags described below:
-() calls. Each t_snd() with the T_MORE flag set indicates that another t_snd() will follow with more data for the current TSDU (or ETSDU). The end of the TSDU (or ETSDU) is identified by a t_snd().
- T_PUSH.
- Note:
- The communications provider is free to collect data in a send buffer until it accumulates a sufficient amount for transmission.
By default, t_snd() operates in synchronous mode and may wait if flow control restrictions prevent the data from being accepted by the local transport provider at the time the call is made. However, if O_NONBLOCK is set (via t_open() or fcntl()), t_snd() will execute in asynchronous mode, and will fail immediately if there are flow control restrictions. The process can arrange to be informed when the flow control restrictions are cleared via either t_look() or the EM interface.
On successful completion, t_snd() returns the number of bytes (octets) accepted by the communications provider. Normally this will equal the number of octets specified in nbytes. However, if O_NONBLOCK is set or the function is interrupted by a signal, it is possible that only part of the data has actually been accepted by the communications provider. In this case, t_snd() returns a value that is less than the value of nbytes. If t_snd() is interrupted by a signal before it could transfer data to the communications provider, it returns -1 with t_errno set to [TSYSERR] and errno set to [EINTR].
If nbytes is zero and sending of zero bytes is not supported by the underlying communications service, t_snd() (see
..
-() returns the number of bytes accepted by the transport provider. Otherwise, -1 is returned on failure and t_errno is set to indicate the error.
- Note:
-_rcv().
It is important to remember that the transport provider treats all users of a transport endpoint as a single user. Therefore if several processes issue concurrent() fails with [TBADDATA]. | http://pubs.opengroup.org/onlinepubs/007908799/xns/t_snd.html | crawl-003 | refinedweb | 383 | 51.07 |
Now
you can find a complete range of Civic side repeaters and indicators.
Including import racing parts and rear lighting and indicator sets. Many Civic import racing parts
style rear lights also available.
Also View
Civic
import racing parts - GTI Black/Blue color
Civic import racing parts - 'Turbo' Black/Red
Civic import racing parts - 'Turbo' Black/Blue
Civic import racing parts - 'Turbo' Black/Yellow
Civic import racing parts - 'MotorSport' Black/Yellow.
The carpets have a jute style insulation attached in the same areas that your original has. We recommend that you retain your old insulation if possible. The insulation on the new carpet is usually thinner than the original and using both the old and the new will give you that much more sound and heat deadening. If your old insulation is ruined then you may want to consider putting down an additional layer, as well as sound deadener.
See the new 2004 Cadillac CTS, get online prices and information. Now you can get updated 2004 Acura RSX prices and specs.
Thanks
for visiting and don't forget to keep an eye on our special offers page.
Offers change periodically so check back!
We use free legal business forms online, see our source.
Information provided by imports parts direct. | http://www.importspartsdirect.com/Civic-import-racing-parts.html | crawl-002 | refinedweb | 209 | 65.22 |
Build a quick Summarizer with Python and NLTK
David Israwi
Aug 17 '17
Updated on Dec 04, 2017
If you're interested in Data Analytics, you will find learning about Natural Language Processing very useful. A good project to start learning about NLP is to write a summarizer - an algorithm that reduces a body of text while keeping its original meaning, or at least giving great insight into the original text.
There are many libraries for NLP. For this project, we will be using NLTK - the Natural Language Toolkit.
Let's start by writing down the steps necessary to build our project.
4 steps to build a Summarizer
- Remove stop words (defined below) for the analysis
- Create frequency table of words - how many times each word appears in the text
- Assign score to each sentence depending on the words it contains and the frequency table
- Build summary by adding every sentence above a certain score threshold
That's it! And the Python implementation is also short and straightforward.
What are stop words?
Any word that does not add value to the meaning of a sentence. For example, let's say we have the sentence:
A group of people run every day from a bank in Alafaya to the nearest Chipotle
By removing the sentence's stop words, we can narrow the number of words and preserve the meaning:
Group of people run every day from bank Alafaya to nearest Chipotle
We usually remove stop words from the analyzed text, as knowing their frequency doesn't give any insight into the body of text. In this example, we removed the instances of the words a, in, and the.
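The idea can be sketched in a few lines of dependency-free Python. The three-word stop list below is just illustrative; NLTK's English list is much longer:

```python
# Toy stop-word set matching the example above; use NLTK's full list in practice.
STOP_WORDS = {"a", "in", "the"}

def remove_stop_words(sentence):
    # Keep only the words whose lowercase form is not a stop word.
    return " ".join(w for w in sentence.split() if w.lower() not in STOP_WORDS)

print(remove_stop_words(
    "A group of people run every day from a bank in Alafaya to the nearest Chipotle"))
# -> group of people run every day from bank Alafaya to nearest Chipotle
```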
Now, let's start!
There are two NLTK modules that will be necessary for building an efficient summarizer.
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
Note: There are more libraries that can make our summarizer better; one example is discussed at the end of this article.
Corpus
Corpus means a collection of text. It could be data sets of poems by a certain poet, bodies of work by a certain author, etc. In this case, we are going to use a data set of pre-determined stop words.
Tokenizers
Basically, it divides a text into a series of tokens. There are three main tokenizers - word, sentence, and regex tokenizer. For this specific project, we will only use the word and sentence tokenizer.
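To get a feel for what the tokenizers do, here are rough regex stand-ins; NLTK's real tokenizers additionally handle abbreviations, contractions, and other edge cases:

```python
import re

text = "NLTK rocks. Tokenizers split text!"

# Rough stand-ins for sent_tokenize and word_tokenize:
sentences = re.split(r"(?<=[.!?])\s+", text)  # split after ., ! or ?
words = re.findall(r"\w+", text)              # grab runs of word characters

print(sentences)  # -> ['NLTK rocks.', 'Tokenizers split text!']
print(words)      # -> ['NLTK', 'rocks', 'Tokenizers', 'split', 'text']
```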
Removing stop words and making frequency table
First, we create two arrays - one for stop words, and one for every word in the body of text.
Let's use
text as the original body of text.
stopWords = set(stopwords.words("english"))
words = word_tokenize(text)
Second, we create a dictionary for the word frequency table. For this, we should only use the words that are not part of the stopWords array.
freqTable = dict()
for word in words:
    word = word.lower()
    if word in stopWords:
        continue
    if word in freqTable:
        freqTable[word] += 1
    else:
        freqTable[word] = 1
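As a side note, the same kind of table can be built more compactly with collections.Counter from the standard library. This sketch is self-contained, using a toy stop-word set in place of NLTK's:

```python
from collections import Counter

STOP_WORDS = {"a", "in", "the"}  # toy set; use stopwords.words("english") in practice

words = "the cat sat in the hat the cat ran".split()
freqTable = Counter(w.lower() for w in words if w.lower() not in STOP_WORDS)

print(freqTable["cat"])  # -> 2
print(freqTable["the"])  # -> 0 (stop words never enter the table)
```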
Now, we can use the freqTable dictionary over every sentence to know which sentences give the most relevant insight into the overall purpose of the text.
Assigning a score to every sentence
We already have a sentence tokenizer, so we just need to run the
sent_tokenize() method to create the array of sentences. Secondly, we will need a dictionary to keep the score of each sentence, this way we can later go through the dictionary to generate the summary.
sentences = sent_tokenize(text)
sentenceValue = dict()
Now it's time to go through every sentence and give it a score depending on the words it has. There are many algorithms to do this - basically, any consistent way to score a sentence by its words will work. I went for a basic algorithm: adding the frequency of every non-stop word in a sentence.
for sentence in sentences:
    for wordValue in freqTable.items():
        if wordValue[0] in sentence.lower():
            if sentence[:12] in sentenceValue:
                sentenceValue[sentence[:12]] += wordValue[1]
            else:
                sentenceValue[sentence[:12]] = wordValue[1]
Note: Index 0 of wordValue will return the word itself. Index 1 the number of instances.
If
sentence[:12] caught your eye, nice catch. This is just a simple way to hash each sentence into the dictionary.
Notice that a potential issue with our score algorithm is that long sentences will have an advantage over short sentences. To solve this, divide every sentence score by the number of words in the sentence.
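That normalization could look like the following sketch. The score_sentence name and the simple whitespace tokenization are my own stand-ins, not part of the code above:

```python
def score_sentence(sentence, freq_table):
    # Length-normalized score: total word frequency divided by word count,
    # so a long rambling sentence doesn't beat a short relevant one.
    words = sentence.lower().split()
    if not words:
        return 0
    return sum(freq_table.get(w, 0) for w in words) / len(words)

freq_table = {"nlp": 3, "summarizer": 2}
print(score_sentence("NLP summarizer", freq_table))              # -> 2.5
print(score_sentence("nlp and more and more words", freq_table))  # lower score
```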
So, what value can we use to compare our scores to?
A simple approach to this question is to find the average score of a sentence. From there, finding a threshold will be easy peasy lemon squeezy.
sumValues = 0
for sentence in sentenceValue:
    sumValues += sentenceValue[sentence]

# Average value of a sentence from the original text
average = int(sumValues / len(sentenceValue))
So, what's a good threshold? The wrong value could give a summary that is too small/big.
The average itself can be a good threshold. For my project, I decided to go for a shorter summary, so the threshold I use for it is one-and-a-half times the average.
Now, let's apply our threshold and store our sentences in order into our summary.
summary = ''
for sentence in sentences:
    if sentence[:12] in sentenceValue and sentenceValue[sentence[:12]] > (1.5 * average):
        summary += " " + sentence
You made it!! You can now
print(summary) and you'll see how good our summary is.
Optional enhancement: Make smarter word frequency tables
Sometimes, we want two very similar words to add importance to the same word, e.g., mother, mom, and mommy. For this, we use a Stemmer - an algorithm to bring words to their root form.
To implement a Stemmer, we can use the NLTK stemmers' library. You'll notice there are many stemmers, each one is a different algorithm to find the root word, and one algorithm may be better than another for specific scenarios.
    from nltk.stem import PorterStemmer
    ps = PorterStemmer()
Then, pass every word through the stemmer before adding it to our
freqTable. It is important to stem every word when going through each sentence before adding the score of the words in it.
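A minimal sketch of that flow is below. The helper names are assumptions, and a toy `stem` function stands in for NLTK's PorterStemmer so the snippet runs without downloads; in the real program you would call `ps.stem(word)` instead:

```python
# Sketch only: `stem` is a toy stand-in for NLTK's PorterStemmer, and
# build_freq_table is an assumed helper -- not the tutorial's exact code.
def stem(word):
    # Naive plural stripping, just to illustrate the pipeline.
    return word[:-1] if word.endswith('s') else word

def build_freq_table(text, stop_words):
    """Count stemmed, non-stop words."""
    freq = {}
    for word in text.lower().split():
        root = stem(word)
        if root in stop_words:
            continue
        freq[root] = freq.get(root, 0) + 1
    return freq
```

Because counting happens on the stemmed form, "moms" and "mom" contribute to a single entry.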
And we're done!
Congratulations! Let me know if you have any other questions or enhancements to this summarizer.
Thanks for reading my first article! Good vibes
Let's make my website even better
I just finished redesigning my website. It isn't live just yet. I wanted to see what I can do to make it even better.
Excellent post, you are absolutely amazing ❤️
I got one question though: when adding up the sentenceValues, why would you like the key in the sentenceValue dictionary to only be the first 12 characters of the sentence? I mean, it might cause some trouble if the sentence is shorter than 12 characters or if two different sentences start with the exact same 12 characters.
I assume you did it as a way to reduce overhead, but to be honest, performance-wise I don't think the difference would be that significant. I would much rather prefer the whole sentence as the key than keep

[:12]

as a sacrifice for a tiny performance increase.
I would love to hear your opinion on this matter.
If anyone got any errors running the code, copy paste my version.
That said, it does not work properly; it has some flaws. I tried to summarize this article as a test. Here is the result (the threshold is 1.5 * average):
"For example, the Center for a New American Dream envisions "... a focus on more of what really matters, such as creating a meaningful life, contributing to community and society, valuing nature, and spending time with family and friends."
Thank you very much, Sebastian!
I agree with you -- having the whole sentence as the dictionary key will bring better reliability to the program compared to the first 12 characters of the sentence. My decision was mainly regarding the overhead, but as you said, it is almost negligible. One bug that I would look for is the use of special characters in the text, mainly the presence of quotes and braces, but this is an easily fixable issue (I believe using the three quotes as you are currently doing will avoid this issue)
I summarized the same article and got the following summary:
Feel free to use my version for comparison!
How short your summary was may be a result of the way you are using the Stemmer; I would suggest testing the same article without it to verify this. Besides that, your code is looking on point -- clean and concise. If you are looking for ways to improve your results, I would suggest you explore the following ideas:
Thanks for the suggestion!
Cool website you got yourself there!
I got a question I forgot to ask. Why do you turn the 'stopwords' list into a
set()? First I thought it was because you probably intended to remove duplicate items from the list, but then it struck me: why would there be duplicate items in a corpus list containing stop words? When I compared the length of the list before and after turning it into a set, there was no difference:

    len(stopwords.words("english")) == len(set(stopwords.words("english")))

Outputs: True

Tracing the variable throughout the script, I must admit I cannot figure out why you turned it into a set. I assume it is a mistake?
Or do you have any specific reason for it?
Hmm, I believe the first time I used the list of stop words from NLTK there were some duplicates; if not, I am curious too, lol. It may be time to change it to a list.
Thanks for the note!
If you ever try your implementation using TFIDF, let me know how it goes.
sentenceValue[sentence[:12]] += wordValue[1]
string index out of range
Hey! You may have a sentence that is lower than 12 characters. In this case, you can set the index of the word value to sentence[:10], or a lower number depending on your shortest sentence.
Lowering the number of characters used to hash the sentence value can bring some issues -- two sentences with the same 7,8,9 starting characters will then store/retrieve their value from the same index on the dictionary, that's why it's important to keep the sentence length for hashing as high as you can
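One collision-free alternative (a sketch with assumed names, not the article's code) is to key the score dictionary by the sentence's position instead of a text prefix, which sidesteps both short-sentence slicing errors and prefix collisions:

```python
# Hypothetical variant: use the sentence's index as the dictionary key,
# avoiding both short-sentence IndexErrors and 12-character collisions.
def score_sentences(sentences, freq_table):
    scores = {}
    for i, sentence in enumerate(sentences):
        lowered = sentence.lower()
        for word, count in freq_table.items():
            if word in lowered:
                scores[i] = scores.get(i, 0) + count
    return scores
```

The summary step then checks `scores[i]` while iterating with `enumerate(sentences)`, so no hashing scheme is needed at all.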
any number i use gives the same error
sentenceValue[sentence[:2]] += wordValue[1]
IndexError: string index out of range
Interesting. For debugging it, I would print all your sentences and find if there's an empty one (or a very short one), I think that may be the issue.
Let me know if that works or if you found the issue
I am also facing the string index out of range problem. What is the issue?
You may have a string that is less than your string length sentence[:2]. I would recommend printing the strings and see if this is the case
I have solved it. It was actually a punctuation problem in my case. I just handled the dot (.) character while giving words as values.
Could you please help me? I'm facing the same problem here and I can't handle it. Thank you.
Is this your error?
IndexError: string index out of range
If so, potential solutions could be:
If that doesn't solve it, let me know!
It is still giving me an error when the text is longer than 12 characters and the sentence is (when printed through the loop) "You notice a wall of text in twitch chat and your hand instinctively goes to the mouse.", which is the first line in the paragraph. I found that even when you take out the range the same error occurs.
The bug may be in how you are storing your sentences, make sure you print out the sentences as you store them instead of when you retrieve them, hopefully, that'll help you find the issue. If not, let me know if I can help!
Thanks @david Israwi for this simple and interesting text summarizer program.
I looked through and analyzed your code.
The most common error I found is
index out of range, and most people seem to hit the same error a lot.
The one thing I am confused about in this part of the code:
why and how is the 1.5 * average threshold used?
Also, what about a large one-line text? It does not get summarized.
For example:
I am using Python 3, and I resolved the
index out of range error as:
Thanks a lot! This post is really helpful! If you have other resources, including on making chatbots, that would be really helpful to me.

I am a little bit interested in how to implement the text summarizer using a machine learning model. I am looking for this too...
You can directly send information at
sushant1234gautam@gmail

It isn't working right for me, and I think it comes down to wordValue[0] not working for me the way you said. Do you know why that could be?
Like if I do:
    for wordValue in freqTable:
        print(wordValue[0])
I only get the first letters:
q
b
f
j
m
.
s
b
s
l
It seems like your bug comes from separating the paragraphs into letters instead of words.
The program should do the following commands in the respective order:
I wouldn't be able to know in which step the bug is, but it seems as if you are finding the frequency of each letter instead of each word, make sure you are keeping track of your arrays by printing them through your code, seems like you're almost there
I'm getting a "syntax error" on any text that I try to pass through the program, how would I go about running the text i want to summarize through this program?
I would try to have the text converted to UTF-8 before sending it through; maybe there are special characters or accents throwing it off
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Microcontroller Programming » foo.c, foo.h, foo.hex, and foo.o relationships
i think i've got the following two ideas more or less correct:
1) foo.c is source C code written by a programmer
2) foo.o is made by compiling foo.c and is a translation of the data and instructions in the source code to machine code instructions the microprocessor understands
so where do foo.hex and foo.h files fit in?
Keith -
The "foo.o" is a binary file (i.e. non-text) that is an intermediate file created by the compiler program "avr-gcc". There is another file "foo.hex" that gets created by a linker from one or more "*.o" files. This is the file that gets transferred to your microcontroller. The "foo.h" file is an optional file that contains function prototypes, define statements and other things related to your program. A good example of all of this is using the "delay.h" functions from the NK directory inside your code. If you have "delay.h", the compiler will know how to create "your_program.o" from "your_program.c" if it uses the delay functions. The linker will then create "your_program.hex" as long as it knows where to find "delay.o". What is realy interesting is "delay.c" is not needed here at all. So if the NK guys wanted to be proprietary, they could just give you "delay.o" and "delay.h" and you'd have no way of seeing how they implement each function call. You would know how to call the functions by looking at the "delay.h" text file, but not the nitty gritty details inside the functions.
thanks pcbolt... i did have it wrong...
please check this to see if i've got it straight:
1) write source code and save in foo.c file
2) ensure that the Makefile is up to date with the correct program name (foo), mmcu (atmega168 or atmega328p), serial port name (cu.usbserial for my computer) and any other changes
3) run make from terminal with working directory = the directory containing foo.c and Makefile
3.1) compiler reads foo.c then generates object code and stores it in foo.o
3.2) linker finds constants, macros, functions and other stuff(?) called by foo.o, combines the object code for the called items with foo.o then generates hex code and stores it in foo.hex
3.3) foo.hex is downloaded through the serial port (cu.usbserial for me) to the mcu in program mode
assuming that's correct i now have another question:
if i add io_328p.h to the libnerdkits folder/directory do i need to modify the Makefile to something like:
io_328p.o: io_328p.h
avr-gcc ${GCCFLAGS} -o io_328p.o -c io_328p.h
and then run make from the terminal (in the libnerdkits directory) to generate the io_328p.o file or will that happen automatically the first time i run make for a 328p project?
hope i'm not boring you to death... thanks keith
i forgot one modification to the Makefile in the libnerdkits directory... make line 3:
all: delay.o lcd.o uart.o io_328p.o
Keith, the .h files are "included" in/from your .c source file!!
#include <stdio.h>
Ralph
thanks Ralph... now that you point it out it's obvious...
the reason why i asked is because i (stupidly) did exactly what i described above and i'm thinking that, because it's not a normal project, when the make command downloaded it to my mcu it may have overwritten something in the mcu's reserved memory that's involved with sending control signals or data to the lcd...
the above doesn't seem likely but i'm frustrated about not being able to send output to any of my lcds and get an expected display... see the support forum>>putting atmega168 and atmega328p projects in different folders/directories (you've already given me valuable help there)
if i have done something stupid do you think there is a way to reload the mcu's reserved memory (is bootloader involved with that?) or should i just buy new 168 and 328p chips? they're not expensive...
thanks to Rick_S it turns out that all my problems are due to a code optimization problem... in the thread Support>>Ubuntu LCD problems the LCD displayed the same problem as mine and some simple changes to the makefiles in the project and libnerdkits directories, trashing all .o and .hex files and recompiling fixed the lcd display problem...
thanks to all for helping... Merry Christmas... k
i'm thinking that,
Careful with that "thinking" act it can really get you into serious/vexing/unfathomable problems.
Best to do what you know (working code) and build off that, one line (compiled and running) at a time.
thanks Ralph... it looks like i was way up in non-existant branches of the wrong tree... but now that i have initialload compiling and running again it's time to go check out my other projects... Merry Christmas... k
i was way up in non-existant branches of the wrong tree...
i was way up in non-existant branches of the wrong tree...
Been there, done that
you're very kind... it turns out that Rick_S turned me on to a thread called Ubuntu LCD problems that suggests turning off optimization by changing the -Os option and deleting the -j .text option under avr-objcopy in the project's makefile... this works :)
but i'd still like to optimize my code, but even the option -O1 causes compiler errors, and i still don't know what the -j option does... Rick and Noter have given me some links in my error 1 thread which i'm going to start studying after posting this...
again thanks and wishing you and yours a very Merry Christmas... k
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2572/ | CC-MAIN-2021-39 | refinedweb | 1,007 | 76.62 |
LinuxQuestions.org > Slackware

Update shadow for -current please?
chris.willing
06-03-2014 06:38 AM
Update shadow for -current please?
The shadow in -current is 4.1.5.1. The most recent upstream is 4.2.1. Why upgrade? Version 4.2 introduced support for subuid & subgid, which are important (mandatory) for running LXC (Linux Containers) as a normal user.
Is this important? Well, on my own boxes I can always sudo but in the lab that I run I don't want all users to have sudo privileges. Like, say, VirtualBox, LXC should be able to be run as a normal user. An upgraded shadow package would enable this.
chris
WhiteWolf1776
06-03-2014 09:10 AM
Odd... I've run VirtualBox as a normal user for years before switching to KVM... maybe your users just need to be in proper groups?
chris.willing
06-03-2014 09:21 AM
What I mean is that, just as you say, VirtualBox can be run as normal user. On the other hand LXC (lxc-start etc.) needs to be run as root or via sudo. A shadow package with subuid & subgid support would enable LXC to be run by normal user, just like VirtualBox.
chris
Drakeo
06-03-2014 11:02 AM
do you have a Slackware question. ?
moisespedro
06-03-2014 11:08 AM
Couldn't you upgrade it yourself?
Or does a shadow upgrade break a lot of packages?
mancha
06-03-2014 01:55 PM
Quote:
Originally Posted by
Drakeo
(Post 5181591)
do you have a Slackware question. ?
Isn't his question about Slackware?
===
The OP makes a good case why the new Shadow version should be considered for the next Slackware release. Being able to use
user namespaces with LXC containers is a very important feature. Without that, LXC containment is rather unsafe: uid 0 inside
the container is uid 0 outside, meaning an escape from isolation can have catastrophic consequences. It doesn't end there; to
improve the security of your LXC container, you also need to be concerned with issues like resource sharing, etc.
Also, if the new Shadow is going to end up in the next Slackware, inclusion in Slackware-current is better sooner than later to
increase the probability bugs/issues/etc. are found and reported before the stable release.
Chris:
Pat visits LQ but I am not sure how regularly. You might want to also send a similar request directly to him via email. In addition
to the Shadow bump you would need to request that Pat: a) upgrade to LXC 1.0+ (as of 20140602, 1.0.3 is the latest), and
b) add user namespace support to the kernel (
CONFIG_USER_NS
). When doing that I recommend adding memory resource
controllers (
CONFIG_MEMCG
&
CONFIG_MEMCG_KMEM
).
--mancha
ponce
06-03-2014 02:58 PM
and, if you like, you can also add to the recipe the two new deps libnih/cgmanager, and also patch slackpkg accordingly (just two small patches I've been testing for some years, one to respect the $ROOT environment variable like installpkg does and another that lets you specify a custom CONF directory) to use with it a template I've created and have a debootstrap-like tool to create containers. :)
thanks Chris for your work with lxc ;)
dederon
06-04-2014 03:52 AM
Quote:
Originally Posted by
chris.willing
(Post 5181443)
Version 4.2 introduced support for subuid & subgid which are important (mandatory) for running LXC (Linux Containers) as normal user.
That's interesting. LXC 1.0 needed PAM to run as a normal user. Did that change in the mean time?
chris.willing
06-29-2014 06:26 AM
I guess the question about PAM arose because its listed as a prerequisite at Stephane Graber's
page. I posted a question there about it but received no reply. Anyway, after lots of testing I can say that the answer is no, PAM is not needed.
For those interested, I set up a VM with slack64-current, modified the config & rebuilt kernel and installed the latest shadow (that includes subuid & subgid support) and lxc-1.0.4, then followed the steps outlined on Stephane's web page. At first I had very limited success until I realized that lxc wasn't able to manipulate /sys/fs/cgroup entries on my behalf. It turned out that I needed the cgmanager daemon/application as well to provide that cgroup access in a neat way. After that I was able download and run Stephane's premade containers as a normal user (its quite strange watching latest Ubuntu run inside Slackware without vbox or kvm). I also made a new Slackware template that I can create and run a container from, although this was initially a bit trickier to do. As explained on Stephane's page, there is a problem with ordinary users running a creation template since it will need to do things like run mknod - thats a no go. Thats the reason Stephane is providing a bunch of premade containers. I therefore first created a "normal" container using sudo, then ran a small application called "uidmapshift" to convert the new container's uids & gids into the range allocated to the ordinary user. Then after moving the container into the ordinary user's designated space ($HOME/.local/share/lxc/), I was able to run the new container as a regular user. Success!
I'll make a web page sometime documenting it all. In the meantime, it works enough that I now feel I can approach Pat about updating the shadow package and adding CONFIG_USER_NS to the kernel config (CONFIG_MEMCG & CONFIG_MEMCG_KMEM are already enabled in the -current kernel). I'm not sure how he'll feel about adding cgmanager and dependent libnih packages to Slackware proper but I already have SlackBuilds which could be submitted to Slackbuilds.org.
chris
dederon
06-29-2014 06:51 AM
thanks a lot, please keep us updated about your documentation efforts. using a normal user as container root is something i really would like to try. usage of CONFIG_USER_NS was discouraged when i tinkered with lxc, maybe that changed. i had to recompile my kernel just because of this, which is annoying.
chris.willing
06-29-2014 09:36 PM
Yes, I see that Arch doesn't enable CONFIG_USER_NS yet after concerns about elevating a normal user to root privileges. They were going to reconsider "later" but it's still unset in the current version I just checked. I also checked the latest Debian (7.5), Ubuntu (14.04) and Fedora (20) and CONFIG_USER_NS is enabled in all of those. Not to say we should be strictly following the pack; just that CONFIG_USER_NS=yes has been out there in the wild for a while now and I haven't seen any reports of problems attributable to it.
The big thing about CONFIG_USER_NS is "user namespace" - the granting of any (including root) privilege is confined to a restricted environment (the user's namespace), not system-wide. Any such privilege has to be specifically granted - it's not there by default (just able to be given). Of course we should be cautious about the possibility of escaping the restricted environment but, as above, it's been around for a while now and so far looking pretty safe.
chris
mancha
06-30-2014 02:06 AM
Quote:
Originally Posted by
chris.willing
(Post 5196042)
Not to say we should be strictly following the pack; just that CONFIG_USER_NS=yes has been out there in the wild for a while now and I
haven't seen any reports of problems attributable to it.
Hi Chris.
Many thanks for your ongoing testing of the new Shadow, LXC, etc. Regarding your above comment, a flaw was recently found in user
namespaces that can be exploited under certain conditions to escalate privileges (
CVE-2014-4014
). This obviously is relevant within the
context of secure containment.
Fixes were introduced in: 3.10.44, 3.12.23, 3.14.8, and 3.16rc1.
--mancha
chris.willing
06-30-2014 02:51 AM
Thanks for finding that mancha - it's good to have all such problems (and fixes) out in the open so that any new features can be introduced with confidence. Hopefully the next -current updates will have kernel >= 3.14.8 then; it's not much of a bump.
chris
mancha
06-30-2014 03:15 AM
Agreed. I wanted to let you know because it seems you're preparing a set of requests for Pat to consider. This way you can let him
know about the issue and which 3.14.x introduced a fix.
Also, I wanted to let other slackers know in case they decide to use user namespaces with their LXC containers on their own kernels
(say 14.1 users sticking to 3.10.x).
--mancha
ml4711
06-30-2014 03:25 AM
About CONFIG_USER_NS being enabled by default:
given this recommendation in the kernel config,
it may be an issue on a system with several concurrent users
Modeling using Linear Programming
Optimization using linear models.
By Vamshi Jandhyala in mathematics optimization
September 20, 2021
Resource Allocation Problem
Consider a factory producing a number $n$ of different goods (say goods $1, . . . , n$). These goods use $m$ different raw resources (say resources $1, . . . , m$). Suppose that the decision maker observes that the amounts of raw resources are $b_1, . . . , b_m$. Each unit of resource $i$ costs $c_i$. Producing each unit of good $j$ requires $r_{1j}$ units of resource $1$, $r_{2j}$ units of resource $2, . . . ,$ and $r_{mj}$ units of resource $m$. Finally, each unit of good $j$ can be sold for $s_j$ dollars. Consequently, the profit for each unit of good $j$ produced is
$$ p_j = s_{j} - \sum_{i=1}^m c_i r_{ij} \text{ for $j = 1, . . . , n$}. $$
As the operator of this factory, the decision maker, one must decide how many units of each good to produce in order to maximize total profit.
Modeling using Linear Programming
To use linear programming, we first describe our problem with a number of linear functions. We also assume that we can produce fractional units of each product. The inputs are the values $\textbf{r}, \textbf{b}, \textbf{s}$ and $\textbf{c}$, the decisions or outputs are the $n$ quantities $x_1, . . . , x_n$ of each good to produce.
The first linear function describes the objective of the decision maker:
$$ \sum_{j=1}^n p_j x_j, $$
which is the total profit.
Next, we have $m$ linear functions describing the amounts of each of the resources required:
$$ \sum_{j=1}^n r_{ij} x_j \text{ , for $i = 1, . . . , m$}. $$
The standard form of presenting the problem to solve is
$$
\begin{aligned}
\underset{x_1,…,x_n}{\max} &\sum_{j=1}^n p_j x_j \\
\text{subject to} & \sum_{j=1}^n r_{ij} x_j \leq b_i \text{ , for $i = 1, . . . , m$} \\
&x_j \geq 0 \text{, for $j = 1, . . . , n$} \end{aligned} $$
Example
A factory makes products $A$ and $B$, which make profits of $£8$ and $£6$ per unit, respectively. $2$ units of cement and $7$ units of sand are utilized for producing a unit of type $A$, whereas $2$ units of cement and $5$ units of sand are required to produce a unit of type $B$. The total units of cement used must be less than or equal to $11$, and the total units of sand used must be less than or equal to $34$. Compute the number of units of each product that must be produced in order to maximize the profit.
Model
Let $x_A$ and $x_B$ be the number of units of product $A$ and $B$ that are produced.
We need to maximize
$$ 8x_A + 6x_B $$
subject to the constraints
$$
\begin{aligned}
&2x_A + 2x_B \leq 11 \\
&7x_A + 5x_B \leq 34 \\
&x_A, x_B \geq 0 \end{aligned} $$
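Before reaching for a solver, the optimum of this tiny model can be checked by hand: with two variables, an optimal solution lies at a vertex of the feasible region, so enumerating the intersections of the constraint boundaries is enough. The sketch below is illustrative and not part of the original post:

```python
# Vertex-enumeration check for the 2-variable model above (illustrative).
# Constraints: 2xA + 2xB <= 11, 7xA + 5xB <= 34, xA, xB >= 0.
def feasible(xa, xb, eps=1e-9):
    return (xa >= -eps and xb >= -eps
            and 2*xa + 2*xb <= 11 + eps
            and 7*xa + 5*xb <= 34 + eps)

# Candidate vertices: the origin, the axis intercepts, and the
# intersection of the two constraint lines.
candidates = [(0, 0), (0, 5.5), (34/7, 0)]

# Intersection of 2xA + 2xB = 11 and 7xA + 5xB = 34 (Cramer's rule):
det = 2*5 - 2*7                 # = -4
xa = (11*5 - 2*34) / det        # = 3.25
xb = (2*34 - 11*7) / det        # = 2.25
candidates.append((xa, xb))

# Maximize the profit 8*xA + 6*xB over the feasible vertices.
best = max((8*a + 6*b, a, b) for a, b in candidates if feasible(a, b))
```

Evaluating the profit at each feasible vertex gives the maximum at $(3.25, 2.25)$ with profit $39.5$, which is the same answer the solver returns.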
Solution using Gurobipy
Using the code below we see that $3.25$ units of product $A$ and $2.25$ units of product $B$ need to be produced to obtain a maximum profit of $£39.5$.
    import gurobipy as gp
    from gurobipy import GRB

    # Resources
    R = ['Sand', 'Cement']

    # Products
    P = ['A', 'B']

    # resource required by product
    product_resource = {
        ('A', 'Cement'): 2,
        ('B', 'Cement'): 2,
        ('A', 'Sand'): 7,
        ('B', 'Sand'): 5
    }

    # total resources available
    resources_avbl = {
        'Cement': 11,
        'Sand': 34
    }

    # profit by product
    profit = {
        'A': 8,
        'B': 6
    }

    # creating the model
    m = gp.Model('RAP')

    # adding the decision variables for each product
    x = m.addVars(P, name="produce")

    # declaring resource constraints
    m.addConstrs((sum(x[p]*product_resource[(p, r)] for p in P) <= resources_avbl[r]
                  for r in R), name='resource')

    # setting the objective
    m.setObjective(x.prod(profit), GRB.MAXIMIZE)

    # optimizing the model
    m.optimize()

    # printing the values of each decision variable
    for v in m.getVars():
        if v.x > 1e-6:
            print(v.varName, v.x)

    # printing the value of the objective function
    print(f'Total Profit: {m.objVal}')
Here is the output from Gurobi
    Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
    Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
    Optimize a model with 2 rows, 2 columns and 4 nonzeros
    Model fingerprint: 0xe9bff615
    Coefficient statistics:
      Matrix range     [2e+00, 7e+00]
      Objective range  [6e+00, 8e+00]
      Bounds range     [0e+00, 0e+00]
      RHS range        [1e+01, 3e+01]
    Presolve time: 0.01s
    Presolved: 2 rows, 2 columns, 4 nonzeros

    Iteration    Objective       Primal Inf.    Dual Inf.      Time
           0    1.4000000e+31   3.500000e+30   1.400000e+01      0s
           2    3.9500000e+01   0.000000e+00   0.000000e+00      0s

    Solved in 2 iterations and 0.01 seconds
    Optimal objective  3.950000000e+01
    produce[A] 3.25
    produce[B] 2.25
    Total Profit: 39.5
Details
- Type:
Bug
- Status: Closed
- Priority:
Critical
- Resolution: Fixed
- Affects Version/s: 2.2.0
- Fix Version/s: 2.6.1, 2.8.0, 2.7.2, 3.0.0-alpha1
-
- Labels:
- Hadoop Flags:Reviewed
Description.
Issue Links
- duplicates
HDFS-10246 Standby NameNode dfshealth.jsp Response very slow
- Resolved
Activity
Yes, you are right. I know the purpose of the retry cache for restart and failover.
What about recovery? How about disabling it during recovery so the namenode can return to work quickly?
Attached is a small patch for 2.2.0 just to disable the retry cache during the recovery process.
I think there should be a separate option to disable populating the cache on startup. If someone has a NN that can restart reasonably fast before clients give up, but it crashes due to corrupt edits and they restart in recovery, they probably would like clients to recover.
Suresh Srinivas, thoughts?
I would rather pursue an optimization solution instead of introducing more configuration flags that trigger subtle changes in behavior.
Unfortunately, I haven't been able to reproduce this locally. I used CreateEditsLog to generate edits. I needed to change it to generate RPC client IDs and call IDs. (Patch attached in case it's useful to anyone else.) This is admittedly an artificial workload, but it does log OP_ADD operations, so I thought it would be sufficient for a repro. I saw no noticeable difference in startup time whether dfs.namenode.enable.retrycache was true or false.
Profiling showed less than 1% execution time in RetryCache methods. I didn't see any huge outliers, but a few minor hotspots were FSEditLogOp.AddCloseOp#readFields, UTF8#readChars and INodeDirectory#getChild calling into ReadOnlyList.Util#binarySearch. The latter is probably an artifact of CreateEditsLog sticking all of the files under the same directory, thus creating a lot of work for the binary search over the children.
Has anyone else seen a consistent repro?
Yeah, we also had this issue. It appears somehow an entry with the same client id and caller id has existed in retryCache; which ended up calling expensive PriorityQueue#remove function. Below is the call stack captured when standby was replaying the edit logs.
"Edit log tailer" prio=10 tid=0x00007f096d491000 nid=0x533c runnable [0x00007ef05ee7a000] java.lang.Thread.State: RUNNABLE at java.util.PriorityQueue.removeAt(PriorityQueue.java:605) at java.util.PriorityQueue.remove(PriorityQueue.java:364) at org.apache.hadoop.util.LightWeightCache.put(LightWeightCache.java:218) at org.apache.hadoop.ipc.RetryCache.addCacheEntry(RetryCache.java:296) - locked <0x00007ef2fe306978> (a org.apache.hadoop.ipc.RetryCache) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntry(FSNamesystem.java:801) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:507) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:224) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:804) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:785) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:230) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:324) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299) at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415) at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
If PriorityQueue.remove() takes so much time, can we utilize PriorityQueue.removeAll(Collection) so that multiple CacheEntry objects are removed in one round?
Kihwal Lee and Ming Ma, thank you for the additional details. It looks like in your case, you noticed the slowdown in the standby NN tailing the edits. I had focused on profiling NN process startup as described in the original problem report. I'll take a look at the standby too.
If PriorityQueue.remove() took much time, can we utilize PriorityQueue.removeAll(Collection) so that multiple CacheEntry's are removed in one round ?
Unfortunately, I don't think our usage pattern is amenable to that change. We apply transactions one by one. Switching to removeAll implies a pretty big code restructuring to batch up retry cache entries before the calls into the retry cache. Encountering a huge number of collisions is unexpected, so I'd prefer to investigate that.
Specifically, the retry cache was added in 2.1.0-beta, so the theory in my last comment would only be valid if you're running RPC clients older than that.
The same call stack is found in the original problem. Sorry for not attaching it at the first moment:
"main" prio=10 tid=0x00007f03f800b000 nid=0x47ec runnable [0x00007f03ff10a000] java.lang.Thread.State: RUNNABLE at java.util.PriorityQueue.remove(PriorityQueue.java:305) at org.apache.hadoop.util.LightWeightCache.put(LightWeightCache.java:217) at org.apache.hadoop.ipc.RetryCache.addCacheEntry(RetryCache.java:270) - locked <0x00007ef83c305940> (a org.apache.hadoop.ipc.RetryCache) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntry(FSNamesystem.java:717) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:406) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:199) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:112) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:733) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:264) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568) at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1177) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1249) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320) Locked ownable synchronizers: - <0x00007ef83d350788> (a java.util.concurrent.locks.ReentrantReadWriteLock$FairSync) - <0x00007ef83d41f620> (a java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
Chris Nauroth, we upgraded from hadoop 2.0.5 to hadoop 2.4, so yes, from a version without this feature to the version with the feature. Last time during investigation via code review, it appears toAddRetryCache should be set to false if standby replays old edit logs generated by hadoop version without retrycache.
if (toAddRetryCache) { fsNamesys.addCacheEntry(deleteOp.rpcClientId, deleteOp.rpcCallId); }
I tried testing an old client without call ID support connecting to a NameNode with retry cache support. That just fails fast though due to violating the protobuf spec for the message. (I should have expected that.) That rules out that theory.
The edit log replaying code looks to be correct for the case of old edits with a layout version before the RPC IDs were logged. Even if that was a problem, it would just be a one-time slowdown during an upgrade, not an ongoing problem.
My next theory is that perhaps we have another case of thread-local storage biting us.
HDFS-7385 reported edit logs containing incorrect ACLs, and the root cause was that we had failed to reinitialize the thread-local op instance completely each time we used it. Perhaps we have a similar situation here, where an op instance is getting logged, but some code path failed to update the op with a unique client ID + call ID. If that's the case, then HDFS-7398 might help. That patch guarantees the full state of the op gets reset on each use, including the RPC IDs. I still don't have a repro of my own though, so I can't confirm or deny the theory.
If anyone has an edit log exhibiting the problem that they'd be willing to share, that would be a big help. I'd be interested in seeing if there is any pattern to the kinds of ops that are hitting duplicate RPC IDs or the actual duplicate RPC ID values. That could help narrow the problem down to a specific code path.
We have seen a related case. In a relatively small cluster, a user created a rogue job that caused a lot of transactions on namenode. The edit log was rolling on its own by the ANN before reaching the regular rolling period. Then the SBN was losing datanodes because it took incredibly long to replay the large edit segment. We normally see replay speed of about 30-80k txn/sec (this is still considerably slower compared to 0.23 or 2.x before introduction of RetryCache), but in this case it was down to 2k txns/sec, causing the one huge segment replaying to take several hours.
In this case, the slowdown was caused by the fact that the cache was too small. Since the cache size is 0.03% of the heap by default, the hash table (GSet) had long chains in each slot during replaying the edit segment. Increasing the cache size would have made it better. Since the transaction rate is not always a function of the size of namespace, the default cache size may not work in many cases.
Also, if the edit rolling period is greater than the cache expiration time (e.g. 10min), it may make sense to purge the entire cache in more efficient way before replaying the new segment. We could record the time when finished with a segment replay and check the elapsed time in the next segment replay.
Small retryCache size can also impact the correctness, given the successful call result might have been removed from the cache by the time the client sends new retry of the same call to the new active NN.
Regarding the earlier call stack showing any entry with the same client id and caller id has existed in retryCache during edit log replay, it could be due to the following scenario.
1. A delete op is processed by nn1 successfully, thus logged to the edit log.
2. Client doesn't get the response, and nn1 fails over to nn2.
3. Client will retry the same call on nn2. Even though nn2 is still tailing edit log and not active yet, a new cache entry will be added, because the following code can be called even if nn2 isn't active yet.
public boolean delete(String src, boolean recursive) throws IOException { checkNNStartup(); if (stateChangeLog.isDebugEnabled()) { stateChangeLog.debug("*DIR* Namenode.delete: src=" + src + ", recursive=" + recursive); } CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache); if (cacheEntry != null && cacheEntry.isSuccess()) { return true; // Return previous response } ...
4. By the time nn2 gets the call from the edit log and it to the cache, the cache entry is already there.
One way to fix this is to modify waitForCompletion not to create cache entry when NN is in standby. Instead, when NN is in standby, it just needs to check if there is a successful cache entry. If there is, return the result. If not, throws StandbyException.
Here is the draft patch that prevents client from polluting the retry cache when standby is being transitioned to active. It doesn't cover other possible optimization ideas discussed above. Appreciate any input on this.
#3 in the scenario description above should be "Before nn2 starts the transition to active" instead of "Even though nn2 is still tailing edit log and not active yet", because after nn2 starts tailing edit log, it will lock retryCache until it becomes active and thus prevent the client calls from adding new entry to the retry cache during the transition..
The priority queue can be improved using a balanced tree as stated in the java comment in LightWeightCache. We should do it if it could fix the problem.
//LightWeightCache.java /* * The memory footprint for java.util.PriorityQueue is low but the * remove(Object) method runs in linear time. We may improve it by using a * balanced tree. However, we do not yet have a low memory footprint balanced * tree implementation. */ private final PriorityQueue<Entry> queue;
BTW, the priority queue is used to evict entries according the expiration time. All the entries (with any key, i.e. any caller ID) are stored in it.
Thanks for working on this, Ming Ma! Your analysis makes sense to me and I can also reproduce the same scenario. Your patch also looks good to me.
One extra issue, which exists before your patch, is that the retry cache may not be hit during the NameNode failover. For example, suppose the following event sequence:
- a delete request is served by the ANN but the client fails to receive the response
- NN failover happens
- the first retry request gets StandbyException since the old ANN becomes standby
- the second retry request is sent to the other NN, which at this time has not started the transition yet
- no cached entry has been found in the retry cache
- before running FSNamesystem#delete, the transition starts, which blocks the FSNamesystem#delete call
- the FSNamesystem#delete is called and fails
If the above scenario stands, maybe we can use this chance to fix it? In your patch, if the NN is in standby state, and there is no retry cache entry, should we directly throw StandbyException instead of checking it again in FSNamesystem?
Thanks, Jing Zhao. In the scenario you described, IIUC, in order for #6 the call to block on FSNamesystem#delete, it will first need to pass checkOperation(OperationCategory.WRITE). But given the new ANN hasn't transitioned to active yet, the call should have received StandbyException already. Regarding throwing StandbyException earlier, we can add it to NameNodeRpcServer; but it seems unnecessary. Suggestions?
Thanks for the response, Ming! Yes I agree that in most of the cases the call should be blocked after checking the OperationCategory, and the standbyexception will be thrown. But looks like we still cannot 100% rule out the scenario that this check happens after the transition? This scenario should be extremely rare though.
Spent some further time digging into the issue. Besides the scenario that Ming described, the retry cache collision could happen while recording the UpdateBlocksOp transaction. UpdateBlocksOp is recorded for multiple APIs: fsync, abandonBlock, updatePipeline, and commitBlockSynchronization. And before 2.3, UpdateBlocksOp is recorded for addBlock. Among these APIs, only updatePipeline needs to record the callId and clientId into the editlog. However, all other calls failed to reset the callId and clientId to the dummy one thus recorded the same callId and clientId into the journal. Considering addBlock is called heavily this can cause large amounts of collision.
HDFS-7398 should have fixed this already.
Thanks Ming! The new patch looks good to me. One minor is that we do not need to throw StandbyException for saveNamespace since it can also be processed by standby NN. For saveNamespace, since we do not have editlog for it, I guess we do not need to apply this fix to it?
Maybe another simpler way to fix the issue is to move the checkOperation(OperationCategory.WRITE) check to the very beginning (i.e., before the retry cache look up). In this way, we miss the chance to get the response directly from standby NN's retry cache and the client has to failover one more time. But looks like this chance is very small. This can only happen when the request has been handled by the active NN, then the client misses the response, then NN failover happens and the client is redirected to the other NN, which has loaded the edits but has not transitioned to active state yet.
Thanks Jing Zhao. Good point about saveNamespace.
Regarding moving checkOperation(OperationCategory.WRITE) from FSNamesystem to NameNodeRpcServer, I considered that before. There are two minor issues.
- Duration when both NNs are in standby should be short. But not sure if there is any failure scenario like ZK issue that can cause long duration. In addition, given the old ANN still keep its retry cache after it becomes standby, the application might get the cached result from the old ANN if we allow cache check when NN is in standby.
- If we want to move the check, we might also want to move other things like checking if the system supports symlink; such that UnsupportedOperationException can be thrown before StandbyException. This order might not be important as UnsupportedOperationException will be eventually thrown to the application from the active NN.
Otherwise, completely agree checking standby before retry cache check is simpler. If these issues aren't important, I can update the patch accordingly.
Thanks for sharing the thoughts, Ming! Totally agree with your analysis. But for now I still feel to move the standby check before the retry cache look up may be a cleaner way to go: in this way we do not need to expose the mapping between operations and the StandbyException out in the NameNodeRpcServer code. The two standby NameNode scenario can finally still be handled by client side retry/failover in most cases.
I've committed this to trunk and branch-2. Thanks Ming Ma for the fix and Carrey Zhan for the report! And thanks to all for the discussion!
FAILURE: Integrated in Hadoop-trunk-Commit #7926
FAILURE: Integrated in Hadoop-Yarn-trunk #943
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 #2141 -Java8 #202 /main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #211
SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2159 /test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
Sangjin Lee backported this to 2.6.1 after fixing non-trivial merge conflicts.
I just pushed the commit to 2.6.1 after running compilation and TestRetryCacheWithHA which changed in the patch.
Just pulled this into branch-2.7 (release 2.7.2) as it already exists in 2.6.1.
branch-2 patch had merge conflicts. Ran compilation and TestRetryCacheWithHA before the push.
We've also noticed a huge performance degradation (10X+) in 2.x edit processing. The retry cache is large part of it.
The retry cache isn't "useless" during startup since the concept is a returning client can later receive the response to an operation that completed, but the client didn't receive the answer. Transient network issue, restart, failover, etc. While processing edits, whether startup or standby, the NN needs to maintain the cache. The retry cache is useful but it needs to be optimized. | https://issues.apache.org/jira/browse/HDFS-7609 | CC-MAIN-2017-47 | refinedweb | 3,095 | 50.84 |
Autodesk University is done and finished, but no rest for the wicked ... I am now on the Western European stage of the Devdays conference tour, with a first stop in London. On the trip here from Las Vegas, I started to write about polygon area calculation and determining the outer boundary loop for floor slabs and walls for the next post or two, and ran into another little issue that I thought might be worth discussing first. Like many other aspects of life, it has to do with the principle of KISS ... keep it simple, stupid. This is mostly very sound advice for every human being, and especially applicable to programming. The temptation to introduce complexity in software development is huge and mostly detrimental. The best solution is mostly the simplest. One important starting point is keeping source code minimal and easy to read, and one aspect related to readability in .NET is namespace handling.
In .NET programming, every class has a name, which is defined within a namespace. The fully qualified class name is the class name itself together with the namespace prefix, using the dot '.' as a separator. For instance, the entire Revit API is encapsulated in the Autodesk.Revit namespace. That namespace defines some classes and interfaces directly, such as the CommandData class and the IExternalCommand interface. It also defines additional nested namespaces, for instance the Geometry one, and so on. One class inside the Autodesk.Revit.Geometry namespace is the class XYZ, whose fully qualified class name is Autodesk.Revit.Geometry.XYZ.
You may have noticed that I avoid explicitly using fully qualified class names in the code.
Instead, I make use of
using statements in the module header, enabling me to make local use of the unqualified class names from those namespaces.
In some cases, we need to make use of a class that has an ambiguous name, i.e. two classes with the same name occur in different namespaces, and we would like to make use of both namespaces at the same time.
In this case, we can disambiguate the two classes by defining different aliases for them.
For instance, we have done this for the Element classes residing in the Autodesk.Revit and Autodesk.Revit.Geometry namespaces in the CmdWallProfile.cs module:
using RvtElement = Autodesk.Revit.Element; using GeoElement = Autodesk.Revit.Geometry.Element;
We could avoid the need for this disambiguation by avoiding making simultaneous global use of both these namespaces. That is the approach we used so far in the Util.cs module, which currently has the following namespace header:
using System; using System.Collections.Generic; using System.Diagnostics; using Autodesk.Revit; using Autodesk.Revit.Elements; using Curve = Autodesk.Revit.Geometry.Curve; using CylindricalFace = Autodesk.Revit.Geometry.CylindricalFace; using Edge = Autodesk.Revit.Geometry.Edge; using PlanarFace = Autodesk.Revit.Geometry.PlanarFace; using Transform = Autodesk.Revit.Geometry.Transform; using XYZ = Autodesk.Revit.Geometry.XYZ; using XYZArray = Autodesk.Revit.Geometry.XYZArray;
Instead of including the entire Geometry namespace and disambiguating the duplicate Element class, we defined individual aliases for each single geometry class that we make use of. To begin with, there were just one or two of these required, but the list started growing and has now reached a size at which I prefer to eliminate it, include the entire Geometry namespace instead, which introduces the Element class ambiguity, and add the same disambiguation aliases for that instead, so I am replacing the lines above by
using System; using System.Collections.Generic; using System.Diagnostics; using Autodesk.Revit; using Autodesk.Revit.Elements; using Autodesk.Revit.Geometry; using RvtElement = Autodesk.Revit.Element;
Quite a bit shorter than the list above. This initially produces an error message
'Element' is an ambiguous reference between 'Autodesk.Revit.Element' and 'Autodesk.Revit.Geometry.Element'
This expected error is obviously fixed by replacing the occurrences of Element by RvtElement.
So much for that. As said, I am working on an area calculation algorithm for the floor slab and wall boundary loops. Initially, we thought that the outer loop was always listed first in the Revit Face class EdgeLoops property, followed by the inner loops representing holes, i.e. openings such as shafts, doors or windows. We now have a case where this is not true, so we will calculate each boundary loop polygon's area in order to determine which is the largest one. Stay tuned and walk in beauty. | http://thebuildingcoder.typepad.com/blog/2008/12/using-namespaces.html | CC-MAIN-2015-18 | refinedweb | 730 | 51.34 |
This is probably one for the sexy DBAs available:
Wouldso would I effieciently model a relational database whereby I've got a area within an "Event" table which defines a "SportType"?
This "SportsType" area holds a hyperlink to various sports tables E.g. "FootballEvent", "RubgyEvent", "CricketEvent" and "F1 Event".
All these Sports tables have different fields specific to that particular sport.
Let me have the ability to genericly add sports types later on as needed, yet hold sport specific event data (fields) included in my Event Entity.
Can you really make use of an ORM for example NHibernate / Entity framework / DataObjects.Internet which may reflect this type of relationship?
I've tossed together a fast C# example to convey my intent in a greater level:
public class Event<T> where T : new() { public T Fields { get; set; } public Event() { EventType = new T(); } } public class FootballEvent { public Team CompetitorA { get; set; } public Team CompetitorB { get; set; } } public class TennisEvent { public Player CompetitorA { get; set; } public Player CompetitorB { get; set; } } public class F1RacingEvent { public List<Player> Drivers { get; set; } public List<Team> Teams { get; set; } } public class Team { public IEnumerable<Player> Squad { get; set; } } public class Player { public string Name { get; set; } public DateTime DOB { get; set;} }
DataObjects.Net supports automatic mappings for open generics. Some particulars about this are referred to here.
This can be done by getting all of the Event types inherit from an abstract Event base class. This seem sensible in my experience because all of the occasions share some common qualities: date, venue, etc. Use a table per concrete class or table per subclass technique to keep objects inside a relational database. Here are a few links to articles explaining inheritance mapping with NHibernate: | http://codeblow.com/questions/modeling-a-normal-relationship-expressed-in-c-inside-a-database/ | CC-MAIN-2018-17 | refinedweb | 287 | 51.28 |
Add two big numbers - Java Beginners
Add two big numbers - Java Beginners Hi,
I am a beginner in Java and have learned the basic concepts of Java. Now I am trying to find example code for adding big numbers in Java.
I need a basic Java beginners example. It should be easy
Difficult to understand - Hibernate
Difficult to understand why we have to override the equals() and hashCode() methods in Hibernate if we use multiple key columns. Please give a neat description. I have referred to many sites; they say that if any two objects pointing to the same row to do
two dimensional - Java Beginners
two dimensional write a program to create a 3*3 array and print the sum of all the numbers stored in it. Hi Friend,
Try the following code:
import java.io.*;
import java.util.*;
public class matrix
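The code in this entry is cut off after the class declaration; a complete version of the kind of program the answer describes might look like this (the class and variable names are my own):

```java
public class MatrixSum {
    public static void main(String[] args) {
        // A 3*3 array of numbers
        int[][] matrix = {
            {1, 2, 3},
            {4, 5, 6},
            {7, 8, 9}
        };
        int sum = 0;
        // Visit every cell and accumulate the total
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[i].length; j++) {
                sum += matrix[i][j];
            }
        }
        System.out.println("Sum of all numbers: " + sum); // prints 45
    }
}
```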
StringReverse Example - Java Beginners
StringReverse Example I have been asked to add three additional functions from the string library to the following code. I'm really having a difficult time doing so.
public class StringReverseExample
{
public static void
Difference in two dates - Java Beginners
Hello there once again... Dear Sir, the thing is that I need to find the difference between two dates in Java...
Difficult Interview Questions Page -4
Difficult Interview Questions Page -4
...: This is the most difficult and deadly question can be asked to
you, many time it hurts you....
There was a merge between the two corporate industries
thus became the cause
adding two numbers -
Writing Great Articles is Difficult
Writing Great Articles is Difficult
... and website.
Importance of Keywords and Quality:
Two areas should...;
Two problems that are associated with article marketing are shortage of time
compare two strings in java
compare two strings in java  How to compare two strings in java?
public class CompareStrings {
public static void main(String[] args) {
String str1 = "Hello";
String str2 = "Hello";
if (str1.equals(str2)) {
System.out.println("The two strings are the same.");
}
}
}
Output:
The two strings are the same.
Description:- Here is an example of comparing two strings with the equals() method.
Difficult Interview Questions
Difficult Interview Questions
... difficult questions".
There are one hundred questions covered to make you sure... opportunity will be
enhanced.
Difficult
Interview Questions -
Page 1
Difficult Interview Questions Page -11
Difficult Interview Questions Page -11
... is a two-way process. The goal should be to find a good fit for you and for the employer. That is a win-win situation.
No two interviews are alike so
Add two number in java
Add two number in java
In this section you will learn how to add two numbers in Java. In Java,
adding two numbers, we first take the input provided... by the println()
method. Now here is the code for the addition of two numbers in Java.
import
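The code in this entry is truncated at the import; a minimal sketch of the idea — reading two numbers and printing their sum — could be (Scanner-based input is my assumption):

```java
import java.util.Scanner;

public class AddTwoNumbers {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter first number: ");
        int a = in.nextInt();
        System.out.print("Enter second number: ");
        int b = in.nextInt();
        // Sum of the two numbers, displayed by println()
        System.out.println("Sum: " + (a + b));
    }
}
```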
Comparing two dates in java
Comparing two dates in java
In this example you will learn how to compare two dates in java.
java.util.Date provides a method to compare two dates... date. The
example below compares the two dates.
import
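The snippet above breaks off at the import; a small sketch of comparing two java.util.Date objects with before() and compareTo() (the date values here are made up):

```java
import java.util.Date;

public class CompareDates {
    public static void main(String[] args) {
        Date first = new Date(1000);  // earlier instant (milliseconds since epoch)
        Date second = new Date(2000); // later instant
        if (first.before(second)) {
            System.out.println("first comes before second");
        }
        // compareTo returns a negative value, zero, or a positive value
        System.out.println(first.compareTo(second)); // prints -1
    }
}
```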
Java coding for beginners
source codes for outputting word in
Java.
Java coding for beginners example 1....
See the source codes below:
Java coding for beginners example 2
import...This article is for beginners who want to learn Java
Tutorials of this section
java - Java Beginners
.
http...java hi sir ,
my questions :
1) explain with example polymorphism , abstraction, and inheritance.
2)explain with example the difference between
Hi .Difference between two Dates - Java Beginners
Hi .Difference between two Dates Hi Friend....
Thanks for ur Very good response..
Can u plz guide me the following Program....
difference between two dates..
I need to display the number of days by Each Month
java - Java Beginners
java ...can you give me a sample program of insertion sorting...
with a comment,,on what is algorithm..
Hi Friend,
Please visit the following link:
java - Java Beginners
link:... in JAVA explain all with example and how does that example work.
thanks
.../example/java/util/SearchProgram.shtml
array manipulation - Java Beginners
example at:
Difficult Interview Questions Page -3
Difficult Interview Questions Page -3
...: By this question interviewer guesses what motivates you most.
Give an example....
Question 26: What is the most difficult situation you have faced?
Answer
Two-dimensional arrays
Two-Dimensional Arrays
Two-dimensional arrays are defined as "an array of
arrays". Since an array type is a first-class Java type, we can have an array of int
Algorithm_2 - Java Beginners
Sort,please visit the following link:
Thanks... is S) into two disjoint groups L and R.
L = { x Σ S ? {v} | x
linked list example problem - Java Beginners
linked list example problem Q: Create your own linked list (do not use any of the classes provided in the Collections API). Implement the following two operations
If you are using jdk1.4 or before
array - Java Beginners
array WAP to perform a merge sort operation. Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
Comparing two Dates in Java
Comparing two Dates in Java
In this example we are going to compare two date
objects... second of creation of firstDate. Now in our example
program we can compare these two list
Algorithm_3 - Java Beginners
the following links:... is traversed from 0 to the length-1 index of the array and compared first two values
Two dimensional array in java
Two dimensional array in java.
In this section you will learn about two-dimensional arrays in Java with an
example. As we know, an array is a collection...
element.
To process a two-dimensional array, a nested for loop is used.
Example
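As the entry says, a nested for loop processes a two-dimensional array; a minimal illustration (sizes and values are arbitrary):

```java
public class TwoDimensionalDemo {
    public static void main(String[] args) {
        int[][] grid = new int[2][3];
        // Fill each element with row*10 + column
        for (int row = 0; row < grid.length; row++) {
            for (int col = 0; col < grid[row].length; col++) {
                grid[row][col] = row * 10 + col;
            }
        }
        // Print the array row by row
        for (int[] row : grid) {
            for (int value : row) {
                System.out.print(value + " ");
            }
            System.out.println();
        }
    }
}
```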
Concatenate Two Strings in java
Concatenate Two Strings in java
We are going to describe how to concatenate two strings. We
use the concat() method, which joins two strings
end-to-end.
Description of below example:-
In this below example we have
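A short sketch of concat(), which appends the second string onto the end of the first (the example strings are my own):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        String first = "Rose";
        String second = "india";
        // concat() returns a new string; the originals are unchanged
        String joined = first.concat(second);
        System.out.println(joined); // prints Roseindia
    }
}
```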
Comparing two Dates in Java with the use of after method
Comparing two Dates in Java with the use of after method
In this example we are going to compare two date
objects in Java programming language. For comparing
array example - Java Beginners
i cannot solve this example
Comparing Two Numbers
Comparing Two Numbers
This is a very simple example of Java that teaches you the method of
comparing two numbers and finding out the greater one. First of all, name a
class
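The comparison this entry describes can be sketched like this (the numbers are arbitrary):

```java
public class CompareNumbers {
    public static void main(String[] args) {
        int a = 24;
        int b = 42;
        // Find out which of the two values is greater
        if (a > b) {
            System.out.println(a + " is greater");
        } else if (b > a) {
            System.out.println(b + " is greater");
        } else {
            System.out.println("The numbers are equal");
        }
    }
}
```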
Swapping of two numbers in java
Swapping of two numbers in java
In this example we are going to describe swapping of two numbers in java without using the third number in
java. We... values from the command prompt. The swapping of two
numbers is based on simple
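Swapping without a third variable, as this entry describes, is commonly done with addition and subtraction; a sketch:

```java
public class SwapWithoutTemp {
    public static void main(String[] args) {
        int a = 5, b = 9;
        // Swap using arithmetic instead of a temporary variable
        a = a + b; // a = 14
        b = a - b; // b = 5
        a = a - b; // a = 9
        System.out.println("a = " + a + ", b = " + b); // a = 9, b = 5
    }
}
```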
java program example - Java Beginners
java program example can we create java program without static and main?can u plzz explain with an example
to calculate the difference between two dates in java - Java Beginners
to calculate the difference between two dates in java to write a function which calculates the difference between 2 different dates
1.The function...) {
// Creates two calendars instances
Calendar calendar1 = Calendar.getInstance
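The snippet breaks off at the first Calendar instance; one common way to finish the idea — taking the difference of the millisecond times and converting to days — might look like this (the dates are examples):

```java
import java.util.Calendar;

public class DateDifference {
    public static void main(String[] args) {
        // Creates two calendar instances set to specific dates
        Calendar calendar1 = Calendar.getInstance();
        calendar1.clear(); // reset time-of-day fields to midnight
        calendar1.set(2008, Calendar.JANUARY, 1);

        Calendar calendar2 = Calendar.getInstance();
        calendar2.clear();
        calendar2.set(2008, Calendar.JANUARY, 31);

        // Difference in milliseconds, converted to whole days
        long diffMillis = calendar2.getTimeInMillis() - calendar1.getTimeInMillis();
        long diffDays = diffMillis / (24L * 60 * 60 * 1000);
        System.out.println("Difference: " + diffDays + " days"); // 30 days
    }
}
```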
merge sorting in arrays - Java Beginners
,
Please visit the following link:
Thanks
Dividing of two Matrix in Java
Dividing of two Matrix in Java
...;Here you will learn how to use two
matrix array for developing Java
program.
The java two dimensional array program is
operate the two matrix. Now we
java beginners - Java Beginners
(a to z) or numeric digit (0-9) by previous two places.
For example :
C to A, M...java beginners is there any other way to do this ?
i want to do by using charAt() function and by decreasing the ascii code by 2 .
Write
Concatenate two pdf files
Concatenate two pdf files
In this program we are going to concatenate two pdf files
into a pdf file through java program. The all data of files
programmes - Java Beginners
the following links to have view of example of Matrix operations :
Java examples for beginners
of
examples.
Java examples for beginners: Comparing Two Numbers
The tutorial provides... In this tutorial, you will be briefed about the world of Java examples, which
help to understand the functioning of different Java classes and ways. It also
java sorting codes - Java Beginners
java sorting codes I want Java sorting codes. Please be kind enough and send me the codes immediately. /// Hi Friend,
Please visit the following link:
Here
Two Dimensional Array Program
Two Dimensional Array Program Using
Nested for loop
This is a simple Java array program . In this session we will teach how
to use of a two dimensional array
Multiplication of two Matrix
Multiplication of two Matrix
This is a simple java program
that teaches you for multiplying two matrix to each other. Here providing you Java source code
Two compilation errors.Can anyone help soon. - Java Beginners
Two compilation errors.Can anyone help soon. a program called Date.java to perform error-checking on the initial values for instance fields month, day and year. Also, provide a method nextDay() to increment the day by one
Multiplication of two Matrix
.
The Java two dimensional array program is operate
to the two matrix number...
Multiplication of Two Matrix
This is a simple Java multidimensional array program
Core Java - Java Beginners
Microsystems. We generally introduce Java in two ways, core Java and advanced Java. But you need not be confused between the two, as both are Java. ;) When we... methods, class, object and function etc. Advanced Java is pretty difficult
Comparing two Dates in Java with the use of before method
Comparing two Dates in Java with the use of before method
In the previous example... two dates. In
this example we are going to compare two date objects
Programming in Java for beginners
Programming in Java can be difficult sometimes especially for beginners....
This Java guide not only helps the beginners in Java to learn the language... it was difficult
to write platform independent programs in C++, it is fairly easy in Java
Beginners in Java
tutorials for beginners in Java with example?
Thanks.
Hi, want to be command over Java, you should go on the link and follow the various beginners...Beginners in Java Hi, I am beginners in Java, can someone help me
java - Java Beginners
Visit to :
Thanks... in an array...
I have to determine if each cell in a two dimensional array is alive or dead and if it will be alive or dead in the next generation.
For example
combine two pdf files
combine two pdf files
In this program we are going to tell you how you can
read a pdf file...;
The output of the program is given below:Download
this example.
Add two big numbers
C:\vinod\Math_package>java AddTwoBigNumbers
Sum of two...
Add two big numbers
In this section, you will learn how to add two big
numbers
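For numbers too large to fit in a long, java.math.BigInteger is the usual tool; a sketch of what an AddTwoBigNumbers program might contain (the digits are arbitrary):

```java
import java.math.BigInteger;

public class AddTwoBigNumbers {
    public static void main(String[] args) {
        // Values far beyond the range of long
        BigInteger first = new BigInteger("123456789012345678901234567890");
        BigInteger second = new BigInteger("987654321098765432109876543210");
        BigInteger sum = first.add(second);
        System.out.println("Sum of two big numbers: " + sum);
    }
}
```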
Hibernate Two Condition Criteria Example
Hibernate Two Condition Criteria Example
In this Example, We will discuss... Example code files.
TwoConditionCriteria .java
package roseindia;
import....
In this example we create a criteria instance and implement the
factory methods
Multiplication of Two Number in Class
Multiplication of Two Number in Class
... multiply two
number. A class consists of a collection of types of encapsulation... that can
used to create object of the class. The class define two public
Difficult Interview Questions Page -1
Difficult Interview Questions Page -1
... your best points on it. More difficult questions
are yet to be asked. So... to identify the two or three main issues
of that company and say how you'll deal
Java Example Update Method - Java Beginners
Java Example Update Method I wants simple java example for overriding update method in applet .
please give me that example
Two Element Dividing Number
Dividing Two Numbers in Java
This is very simple java program. In this section we
will learn how to divide any two number. In java program use the class package
Core Java tutorial for beginners
Core Java tutorials for beginners makes its simpler for novices..., the Java
guide section is divided into two sections: Core Java and Advanced Java...
fileoutputstream example
Concatenate Two Strings
Merging Two Cells
are merging the two cells into a single cell. In this
example, Region(1,(short)1,1...
Merging two cells
In this program we are going to merge two cells of an
excel sheet
java"oop" - Java Beginners
or even number between two given numbersJava Example of Even...:// OOPs Concept What is OOPs programming and what it has to do
Two Dimensional Array Program
Two Dimensional Array Program
This is very simple program of Java. In this lesson we
will learn how to display arrange form of two dimensional array program
java program - Java Beginners
://
Thanks...java program Pl. let me know about the keyword 'this' with at least 2 or 3 example programs.
pl. let me know the program to multiply 2 matrix
programs - Java Beginners
information. Array Programs How to create an array program in Java? Hi public class OneDArray { public static void main (String[]args){ int
core java - Java Beginners
-in-java/
it's about calculating two numbers
Java Program Code for calculating two numbers java can we write a program for adding two numbers without
java beginners
java beginners Q1: Write a method named showChar. The method should accept two arguments: a reference to a String object and an integer. The integer... at that character position.
Here is an example of a call the method:
showChar
java - Java Beginners
://
Hope...java Hi , roseindia
I got small doubt in java, now my problem is i want difference between two dates, now i am storing these two dates
Java I/O - Java Beginners
Program is not that difficult, go through the given link for Java Example Code run...Creating Directory Java I/O Hi, I wanted to know how to create
Sum of two Matrix
Sum of two Matrix
In this section, we are going to calculate the sum of
two matrix... to this.
In this program we are going to calculate the sum of
two matrix. To make this program
Dividing Element of Two Dimensional Array
Java
program.
The java two dimensional array program is
operate the two...
Dividing Element of Two Dimensional Array... divide of two
matrix. First all to we have to define class "
Swapping of two numbers
Swapping of two numbers
This Java programming tutorial will teach you the
methods for writing program to calculate swap of two numbers. Swapping
is used where you want
intersection of two java arrays
intersection of two java arrays I am trying to figure out how to compare two arrays and get the intersection and put this into a third array of the correct size. I know how to copy the two arrays into the third array but am
java - Java Beginners
:
Thanks
java - Java Beginners
://
Here you
java - Java Beginners
java how do i declare and state an example of the following...,
Example:
public class Test{ //declaring class
Test(String st... and calculating sum of two numbers");
System.out.println("Sum is : "+c
Java Swings - Java Beginners
code: Swings hi ,
I am doing project using netbeans. I have... items from executing a method. For this i found two option in the combobox model
Addition of two Number in Class
the sum of any two
number. Java is a object oriented programming language... Addition of two Numbers in Class
...; we are going to use addition
of two number. First of all, we have
Java - Java Beginners
Java prime number program How to show the prime number in Java? Can you please explain it through an example? Hi friend,public class Ex01 { public static void main(String[] args){ int power_of_two = 2; for(int n=2
Rounding off in Java - Round two decimal places
Rounding off in Java - Round two decimal places
... to round the
figure up to two decimal places in java.
Round: This is the process of rounding with
dollar amounts and provides the result in more than two
Add Two Numbers in Java
Add Two Numbers in Java
... these
arguments and print the addition of those numbers. In this example, args..., in this example). Integer.parseInt helps you to
convert the String type value
Good tutorials for beginners in Java
in details about good tutorials for beginners in Java with example?
Thanks.
...Good tutorials for beginners in Java Hi, I am beginners in Java... the various beginners tutorials related to Java
stack and queue - Java Beginners
stack and queue write two different program in java
1.) stack
2...://
Hope
php two dimensional array
php two dimensional array php two dimensional array example
java swings - Java Beginners
java swings Hi,
I need the listbox example.
I have two listboxes...);
String[] listval={"one","two","three","four"};
list1.setListData(listval...);
frame.setVisible(true);
}
}
Please correct the example and send
java - Java Beginners
to binary for example A=65(ASCII) so its has to convert like 01100101)and place... operation on two matrixs
Beginners Java Tutorial
for beginners:
Comparing Two Numbers
This is a very simple example... example
In this section, you will learn how to compare two
strings in java... Beginners Java Tutorial
java program - Java Beginners
information: program Use antlr to write a program Grep for searching the input for a word:
java Grep word [filename]
The program reports the line number
java swings - Java Beginners
java swings hi,
I already send the question 4 times.
I have two...() {
JFrame f= new JFrame();
f.setTitle("Listbox Example");
f.setSize(600,250);
f.setLayout(new GridLayout(1,3));
String[] data1 = {"one", "two | http://www.roseindia.net/tutorialhelp/comment/99258 | CC-MAIN-2014-41 | refinedweb | 2,903 | 55.74 |
Behind a Proxy¶
Warning
The current page still doesn't have a translation for this language.
But you can help translating it: Contributing.
In some situations, you might need to use a proxy server like Traefik or Nginx with a configuration that adds an extra path prefix that is not seen by your application.
In these cases you can use
root_path to configure your application.
The
root_path is a mechanism provided by the ASGI specification (that FastAPI is built on, through Starlette).
The
root_path is used to handle these specific cases.
And it's also used internally when mounting sub-applications.
Proxy with a stripped path prefix¶
Having a proxy with a stripped path prefix, in this case, means that you could declare a path at
/app in your code, but then, you add a layer on top (the proxy) that would put your FastAPI application under a path like
/api/v1.
In this case, the original path
/app would actually be served at
/api/v1/app.
Even though all your code is written assuming there's just
/app.
And the proxy would be "stripping" the path prefix on the fly before transmitting the request to Uvicorn, keep your application convinced that it is serving at
/app, so that you don't have to update all your code to include the prefix
/api/v1.
Up to here, everything would work as normally.
But then, when you open the integrated docs UI (the frontend), it would expect to get the OpenAPI schema at
/openapi.json, instead of
/api/v1/openapi.json.
So, the frontend (that runs in the browser) would try to reach
/openapi.json and wouldn't be able to get the OpenAPI schema.
Because we have a proxy with a path prefix of
/api/v1 for our app, the frontend needs to fetch the OpenAPI schema at
/api/v1/openapi.json.
Tip
The IP
0.0.0.0 is commonly used to mean that the program listens on all the IPs available in that machine/server.
The docs UI would also need the OpenAPI schema to declare that this API
server is located at
/api/v1 (behind the proxy). For example:
{ "openapi": "3.0.2", // More stuff here "servers": [ { "url": "/api/v1" } ], "paths": { // More stuff here } }
In this example, the "Proxy" could be something like Traefik. And the server would be something like Uvicorn, running your FastAPI application.
Providing the
root_path¶
To achieve this, you can use the command line option
--root-path like:
$ uvicorn main:app --root-path /api/v1 <span style="color: green;">INFO</span>: Uvicorn running on (Press CTRL+C to quit)
If you use Hypercorn, it also has the option
--root-path.
Technical Details
The ASGI specification defines a
root_path for this use case.
And the
--root-path command line option provides that
root_path.
Checking the current
root_path¶
You can get the current
root_path used by your application for each request, it is part of the
scope dictionary (that's part of the ASGI spec).
Here we are including it in the message just for demonstration purposes.
from fastapi import FastAPI, Request app = FastAPI() @app.get("/app") def read_main(request: Request): return {"message": "Hello World", "root_path": request.scope.get("root_path")}
Then, if you start Uvicorn with:
$ uvicorn main:app --root-path /api/v1 <span style="color: green;">INFO</span>: Uvicorn running on (Press CTRL+C to quit)
The response would be something like:
{ "message": "Hello World", "root_path": "/api/v1" }
Setting the
root_path in the FastAPI app¶
Alternatively, if you don't have a way to provide a command line option like
--root-path or equivalent, you can set the
root_path parameter when creating your FastAPI app:
from fastapi import FastAPI, Request app = FastAPI(root_path="/api/v1") @app.get("/app") def read_main(request: Request): return {"message": "Hello World", "root_path": request.scope.get("root_path")}
Passing the
root_path to
FastAPI would be the equivalent of passing the
--root-path command line option to Uvicorn or Hypercorn.
About
root_path¶
Have in mind that the server (Uvicorn) won't use that
root_path for anything else than passing it to the app.
But if you go with your browser to you will see the normal response:
{ "message": "Hello World", "root_path": "/api/v1" }
So, it won't expect to be accessed at.
Uvicorn will expect the proxy to access Uvicorn at, and then it would be the proxy's responsibility to add the extra
/api/v1 prefix on top.
About proxies with a stripped path prefix¶
Have in mind that a proxy with stripped path prefix is only one of the ways to configure it.
Probably in many cases the default will be that the proxy doesn't have a stripped path prefix.
In a case like that (without a stripped path prefix), the proxy would listen on something like, and then if the browser goes to and your server (e.g. Uvicorn) listens on the proxy (without a stripped path prefix) would access Uvicorn at the same path:.
Testing locally with Traefik¶
You can easily run the experiment locally with a stripped path prefix using Traefik.
Download Traefik, it's a single binary, you can extract the compressed file and run it directly from the terminal.
Then create a file
traefik.toml with:
[entryPoints] [entryPoints.http] address = ":9999" [providers] [providers.file] filename = "routes.toml"
This tells Traefik to listen on port 9999 and to use another file
routes.toml.
Tip
We are using port 9999 instead of the standard HTTP port 80 so that you don't have to run it with admin (
sudo) privileges.
Now create that other file
routes.toml:
[http] [http.middlewares] [http.middlewares.api-stripprefix.stripPrefix] prefixes = ["/api/v1"] [http.routers] [http.routers.app-http] entryPoints = ["http"] service = "app" rule = "PathPrefix(`/api/v1`)" middlewares = ["api-stripprefix"] [http.services] [http.services.app] [http.services.app.loadBalancer] [[http.services.app.loadBalancer.servers]] url = ""
This file configures Traefik to use the path prefix
/api/v1.
And then it will redirect its requests to your Uvicorn running on.
Now start Traefik:
$ ./traefik --configFile=traefik.toml INFO[0000] Configuration loaded from file: /home/user/awesomeapi/traefik.toml
And now start your app with Uvicorn, using the
--root-path option:
$ uvicorn main:app --root-path /api/v1 <span style="color: green;">INFO</span>: Uvicorn running on (Press CTRL+C to quit)
Check the responses¶
Now, if you go to the URL with the port for Uvicorn:, you will see the normal response:
{ "message": "Hello World", "root_path": "/api/v1" }
Tip
Notice that even though you are accessing it at it shows the
root_path of
/api/v1, taken from the option
--root-path.
And now open the URL with the port for Traefik, including the path prefix:.
We get the same response:
{ "message": "Hello World", "root_path": "/api/v1" }
but this time at the URL with the prefix path provided by the proxy:
/api/v1.
Of course, the idea here is that everyone would access the app through the proxy, so the version with the path prefix
/app/v1 is the "correct" one.
And the version without the path prefix (), provided by Uvicorn directly, would be exclusively for the proxy (Traefik) to access it.
That demonstrates how the Proxy (Traefik) uses the path prefix and how the server (Uvicorn) uses the
root_path from the option
--root-path.
Check the docs UI¶
But here's the fun part. ✨
The "official" way to access the app would be through the proxy with the path prefix that we defined. So, as we would expect, if you try the docs UI served by Uvicorn directly, without the path prefix in the URL, it won't work, because it expects to be accessed through the proxy.
You can check it at:
But if we access the docs UI at the "official" URL using the proxy with port
9999, at
/api/v1/docs, it works correctly! 🎉
You can check it at:
Right as we wanted it. ✔️
This is because FastAPI uses this
root_path to create the default
server in OpenAPI with the URL provided by
root_path.
Additional servers¶
Warning
This is a more advanced use case. Feel free to skip it.
By default, FastAPI will create a
server in the OpenAPI schema with the URL for the
root_path.
But you can also provide other alternative
servers, for example if you want the same docs UI to interact with a staging and production environments.
If you pass a custom list of
servers and there's a
root_path (because your API lives behind a proxy), FastAPI will insert a "server" with this
root_path at the beginning of the list.
For example:
from fastapi import FastAPI, Request app = FastAPI( servers=[ {"url": "", "description": "Staging environment"}, {"url": "", "description": "Production environment"}, ], root_path="/api/v1", ) @app.get("/app") def read_main(request: Request): return {"message": "Hello World", "root_path": request.scope.get("root_path")}
Will generate an OpenAPI schema like:
{ "openapi": "3.0.2", // More stuff here "servers": [ { "url": "/api/v1" }, { "url": "", "description": "Staging environment" }, { "url": "", "description": "Production environment" } ], "paths": { // More stuff here } }
Tip
Notice the auto-generated server with a
url value of
/api/v1, taken from the
root_path.
In the docs UI at it would look like:
Tip
The docs UI will interact with the server that you select.
Disable automatic server from
root_path¶
If you don't want FastAPI to include an automatic server using the
root_path, you can use the parameter
root_path_in_servers=False:
from fastapi import FastAPI, Request app = FastAPI( servers=[ {"url": "", "description": "Staging environment"}, {"url": "", "description": "Production environment"}, ], root_path="/api/v1", root_path_in_servers=False, ) @app.get("/app") def read_main(request: Request): return {"message": "Hello World", "root_path": request.scope.get("root_path")}
and then it won't include it in the OpenAPI schema.
Mounting a sub-application¶
If you need to mount a sub-application (as described in Sub Applications - Mounts) while also using a proxy with
root_path, you can do it normally, as you would expect.
FastAPI will internally use the
root_path smartly, so it will just work. ✨ | https://fastapi.tiangolo.com/tr/advanced/behind-a-proxy/ | CC-MAIN-2021-17 | refinedweb | 1,649 | 61.67 |
import mdb data in mysql adding fields in mdb and mysql to be compatibles, especially for categories of mysql db
Orçamento $100-300 USD
I need to import data from .mdb ms access files on a mysql DB, the biggest problem is that the mysqlDB have 30 categories and about 150 subcategories, and I need to have a nice interface on access to import manually the categories on DB. Manually inserted categories and subcategories, must be compatible with the later import on mysql DB.
here the xample files in excel (about mysql tables) and an .MDB sample (about my actual situation).
adding fields on mysql when necessary.
I don't want to pay first of the work, sorry
here attached the [url removed, login to view] files where add the categories compatibles with the xls-mysql file
5 freelancers estão ofertando em média $114 para este trabalho
Hi I have done this before but I am not clear with specification pls explain in [url removed, login to view] u want I can send u the code which i used to export mbd data in to MS SQL server. Pls explain in detail
We can work with you to complete this integration. We support our work even after the completion of the project. | https://www.br.freelancer.com/projects/data-processing-data-entry/import-mdb-data-mysql-adding/ | CC-MAIN-2017-47 | refinedweb | 212 | 64.85 |
Front-End Web & Mobile building microsites under different subdomains of your primary domain, a monorepo strategy gives each team the flexibility to pick their frontend tech stack (e.g. angular vs react), while allowing shared functionality to be stored in common libraries in the same repository.
With today’s launch, Amplify Console makes it easy to deploy monorepo apps with three important features:
- Automatic detection of build settings when connecting a sub-folder in your mono-repository. This makes connecting and deploying a project in your monorepo frictionless.
- Selective build triggers: New builds in the Amplify Console are only triggered when there are code changes within a specific app project.
- Ability to define the build settings for multiple apps in a single build specification file (amplify.yml).
In this blog post we are going to walkthrough deploying a React and Gatsby app that live in the same repository.
Step 1: Set up monorepo project
To setup your project, we will first create a React and Gatsby app and then commit the two apps to the same Git repository. The Gatsby app will be our marketing site while we will use the React app to build an application with a cloud backend. The cloud backend will contain a database with a GraphQL API to access the data.
Create a React and Gatsby app in the same folder to get started.
# create a root folder mkdir monorepo-app && cd monorepo-app # create a react app npx create-react-app reactapp #create a gatsby app gatsby new marketing-site
Now create a new Git repository in a Git provider of your choice (GitHub, AWS CodeCommit, GitLab, or BitBucket) and push your newly created project to a Git repo
git init git remote add origin git@github.com:user/reponame.git git add . git commit -am ' git push -u origin
Step 2: Deploy React app with cloud backend
To set up a cloud backend, run the amplify-app script. This script automatically bootstraps your project with an Amplify backend. This post will focus on setting up the monorepo, but if you would like to read more on how to build a cloud backend with Amplify, check out this blog.
cd reactapp npx amplify-app@latest
Once the basic setup completes open the GraphQL schema located in
amplify/backend/api/amplifyDatasource/schema.graphql. This schema defines your data model. Let’s say we’re building a reviews app that allows users to create a post with a rating.
enum PostStatus { ACTIVE INACTIVE } type Post @model { id: ID! title: String! rating: Int! status: PostStatus! }
Deploy this backend to the cloud by running the following command.
npm run amplify-push
Update your React app’s
App.js with the following code
# fetch the frontend code for the app curl -o src/App.js # start the app locally npm run start
You should see the following screen. Click on the ‘NEW’ button to add a random post.
To verify this post got synced to the cloud, run
amplify console from your terminal window, choose the API tab, and then choose View under Data sources. This will open an Amazon DynamoDB table where you should be able to see the post(s) that you created locally.
Commit this code to your git repository by running
git add . && git commit -m 'added amplify backend'.
Now it’s time to connect our frontend app. Head back to the Amplify Console and navigate to the app home, by clicking the app name (reactapp) from the navigation breadcrumb. You should see a screen that asks you to Connect a frontend web app. Pick your Git provider along with repository and branch. Check the option that asks if you’re connecting a monorepo and enter
reactapp into the textbox that appears. Choose Next.
Amplify Console automatically detects that you are connecting a sub-folder with a React app using an Amplify backend. Amplify allows you to set up continuous deployment workflows of the frontend and backend together. Select the existing backend you deployed named
amplify and then create a service role to allow Amplify Console to deploy changes to your backend (if they exist).
That’s it! Choose Next, and Save and deploy. Amplify Console will pull source code from your Git repo, build and deploy your frontend to a global CDN accessible at.
Click on the screenshot to open your app URL. You should see the same data you had created locally. Amplify Datastore offers realtime synchronization across devices – open the deployed app and the localhost app side-by-side and create new fields. You should see the data appearing in both browsers instantly.
Your React app is set up with continuous deployment and hosting! Make a small code change to your
reactapp and commit code to your repository to see Amplify Console automatically trigger a new build.
..... return ( <div className="App"> <header className="App-header"> <img src={logo} <div> My monorepo React app</div> <div> .....
Step 3: Deploy the Gatsby marketing site
Now that you’ve built and deployed the React app, let’s deploy the Gatsby app we created in Step 1. From the Amplify Console breadcrumbs, navigate to the All apps page and choose Connect app. Pick the monorepo checkbox again, but this time enter
marketing-site in the textbox.
Amplify Console will automatically detect your build settings. Go ahead and choose Next and Save and deploy. In a few minutes you will now have your Gatsby app deployed.
To recap, you now have two Amplify apps that are connected to your React and Gatsby app.
Step 4: Trigger a commit to the Gatsby site
Now that both your apps are deployed to the Amplify Console, let’s trigger an update via a code change in the Gatsby app. From your local terminal, navigate to
monorepo-app/marketing-site/src/pages/index.js and update the following code:
import React from "react" export default function Home() { return <div>My marketing site</div> }
Commit this code to your repository by running
git add . && git commit -m 'modified marketing site' && git push. Now visit the Amplify Console to see that a new build has been triggered on both apps. Both apps will build for the first time, but for every subsequent code change, only the app in which you made changes will build. For example, if you only make a code change in the Gatsby app, the Gatsby app will update as expected while React app build will automatically cancel.
Step 5 (Bonus points): Combine your build settings amplify.yml file
We currently have separate build settings stored in each app. When managing a monorepo it is often convenient to store all build settings centrally in the repository. In the root of your repository, create a
amplify.yml file with two
appRoot trees as described here.
Summary
In this post, we have connected two monorepo apps to the Amplify Console without requiring any extra configuration steps. Visit the Amplify Console to get started. | https://aws.amazon.com/blogs/mobile/set-up-continuous-deployment-and-hosting-for-a-monorepo-with-aws-amplify-console/ | CC-MAIN-2021-39 | refinedweb | 1,161 | 63.19 |
In our digital asset which we export to UE4, I wanted to add a button that runs some python code.
After discovering that it doesn't work, I tried to move a box using Python for testing, but couldn't get it to work.
I tried the following:
The button has
as its callback script.
hou.phm().MovePython()
This function is defined in a PythonModule in the Scripts tab of the Asset properties as
and refers to a python node with the following code:
def MovePython(): print "Move called" exec(hou.node('/obj/geo1/python1').parm('python').eval()) MoveBox()
def MoveBox(): objPath = "/obj/geo1/box1/tx" parm = hou.parm(objPath) origVal = parm.eval() parm.set(origVal + 0.1)
I also tried a more direct approach by defining the following function in the same PythonModule:
which we call in a similar way using a button.
def MoveGraph(): print "MoveGraph called" objPath = "/obj/geo1/box1/tx" parm = hou.parm(objPath) origVal = parm.eval() parm.set(origVal + 0.1)
When pressing either of the buttons in the Houdini Editor, the box moves as expected, however, when doing the same in the Unreal Editor, the box does not move.
Is there a solution to this problem, or is this a limitation of the way Houdini interacts with Unreal Engine?
Thanks in advance! | https://www.sidefx.com/forum/topic/69656/ | CC-MAIN-2020-24 | refinedweb | 218 | 55.54 |
How to never miss out on your favourite band’s sale using Python
A fun example of how to utilise python and your programming knowledge to automate your life
Why bother?
Being a software developer these days is pretty useful, everyone knows it. Besides giving you great career opportunities and the ability to work in almost any field possible, there are so many things in your life you could automate, or at least utilise in your favour.
I’m still at the beginning of my software engineering journey, and I still have a lot to learn. So, I’ve decided to start working on some small, fun projects, that aren’t necessarily related to my 9–5 job as a backend developer, to keep learning and have fun!
This is the first one and if you liked it lookout for more to come!
So, I used to be a dancer. Yes. I moved from being a full-time dancer to being a backend developer. More on that probably in another post. The thing is, I never stopped loving dance, yoga, or working out, and I still do this regularly. My favourite yoga/workout clothing brand is Lululemon. Whoever tried their stuff would know what I’m talking about! The problem is, it’s pretty pricy! So I’ve decided to utilise my Python skills and run a web scraper that will notify me whenever my favourite leggings are on sale. Sounds cool? Let’s go!
This automation idea can be executed to track anything you want! This use case is just a fun example.
You can also find the code on Github here:
The plan
- Project setup
- Get the link of the product we want to monitor
- Scrape the webpage and find the product info in it
- Set condition for the alert
- Implement email notification and send test
- Create a dev email account and set privacy or use Google’s API to send email notifications
- Create a scheduler using crontab to run the script every desirable amount of time
1. Project setup
This project will contain only two files:
main.py and
scheduler.py. Let’s create the skeleton of our project before starting to code:
In main.py:
class Crawler:
def scan_price(self) -> list:
pass def alert_prices(self, prices) -> None:
passclass Emailer:
def _create_email_msg(self):
pass def send_email(self):
passif __name__ == '__main__':
crawler = Crawler()
prices = crawler.scan_price()
crawler.alert_price(prices)
And let’s keep
scheduler.py empty for now, we will fill it out in the next post.
Another thing we need to do is run our project in a new virtual env. I like using
pyenv and
virtualenv, but it doesn't really matter. Here are the steps:
In the terminal (on mac) run this to install
pyenv and
pyenv-virtualenv:
$ brew install pyenv pyenv-virtualenv
Then install the right python version on
pyenv:
$ pyenv install python 3.9.0
Now in the directory of the project create the
venv, call it
lulu (or anything else) and attach it to this project’s directory:
$ pyenv virtualenv 3.9.0 lulu
$ pyenv local lulu
If local doesn’t work, try:
pyenv activate lulu
2. Get the link
The link I’m going to use is as follows, attached to a new variable called URL:
URL = ''
3. Scaping the Link
We will use HTMLSession from requests_html to make the web-page HTML data into accessible python code.
First, we’ll go to the link in the browser and click right-click and inspect.
There, in the console's Elements tab, we can inspect the structure of the code and find the elements we are interested in. In this particular case, I’m interested in the markdown prices that appear in .markdown-prices class. I start by choosing the page content that is under the .product-detail-content class.
The code:
def scan_price(self) -> list:
session = HTMLSession()
page = session.get(URL)
content = page.html.find('.product-detail-content', first=True)
return content.find('.markdown-prices')
4. Set the condition for the alert
Let’s say you’d want to be alerted whenever the price is below 70 euros. Then we will set this as the condition to our alert, in a new function called:
alert_prices:
def alert_prices(self, prices) -> None:
for price in prices:
if int(price.text[0:2]) < 70:
print('send email notification')
return
Here we loop over the prices (could be more than one in the discount section) and if we found one that stands in our condition (below 70) we will alert once. For the time being, we don’t actually send the alert, but only log as if we would send it.
5. Implement email notification and send test
After figuring out how to get the information from the webpage and how to set the condition upon sending the notification, now we will go about implementing the notification itself.
To send the emails we will use Google’s SMTP server (
smtp.gmail.com). For that, we would need to create another dev account and reduce its security level. The other option is to use Google’s email API (or other available options) but for simplicity's sake, I will go with the first option.
Go ahead and create a new google account here. Then, go to your new Google account settings and click on security. reduce security level in the Less secure app access panel, click Turn on access. It’s definitely not a best practice, but good enough for us to move forward easily in the stage of this small script. Then, we will put the address and credentials in the
from field of our email sender.
Let’s create a class called Emailer with two methods:
_create_email_msg and
send_email. We would also need to have the following properties in our class to be able to send the email:
class Emailer:
subject: str
from_address: str = 'my_dev_account@gmail.com'
from_pass: str = "my_dev_account_password"
to_address: str = 'my_usual_account@gmail.com'
SMTP_SERVER: str = "smtp.gmail.com"
PORT = 465
def __init__(self, subject):
self.subject = subject def _create_email_msg(self):
pass
def send_email(self):
pass
We will use two libraries to create and send the emails:
Now let’s fill in the details about the msg in the
_create_email_msg method using
def _create_email_msg(self):
msg = EmailMessage()
msg['Subject'] = self.subject
msg['From'] = self.from_address
msg['To'] = self.to_address
content = f'Don\'t miss it here: {URL}'
msg.set_content(content)
return msg
Then, we could use the function we just created to construct an email message. Then, let’s create a context manager with our SMTP server, log in to our newly created dev account, and send the message.
def send_email(self): msg = self._create_email_msg()
# take password from user input
password = input("Type your password and press enter: ")
# Create a secure SSL context
context = ssl.create_default_context()
with smtplib.SMTP_SSL(self.SMTP_SERVER, self.PORT, context=context) as server:
server.login("gb.dev1000@gmail.com", password)
server.send_message(msg)
server.quit()
We could also use the debugging server to test the email by logging it in the terminal like this:
# Send the message via our own SMTP debugging server.
server = smtplib.SMTP('localhost:1025')
server.send_message(msg)
server.quit()
In the terminal tun this:
python -m smtpd -n -c DebuggingServer localhost:1025
Now, let’s use the Emailer class we have created and send the email using in the
alert_price function.
def alert_price(self, prices) -> None:
for price in prices:
if int(price.text[0:2]) < 70:
print('send email notification')
print('email sent successfully!')
return
That’s basically it! Now you could set up a cron job to run this as a scheduled job periodically. On this and more in the next post! Feel free to add fancy stuff like adding the image to the email or make a more complex condition to alert by.
Thanks for reading and until next time! | https://gilatblumberger.medium.com/how-to-never-miss-out-on-your-favourite-bands-sale-using-python-40c7a2219ac3?source=post_internal_links---------0---------------------------- | CC-MAIN-2021-21 | refinedweb | 1,308 | 64 |
Hi,
Could you please let me know why I am not getting the expected output from a 4-digit 7-segment display driven by a PIC16F877A with XC8? The circuit diagram and code are below.
On the 7-segment display I only get 0 and 8. All the control and data pins on the MCU side are blinking, but the control pins on the 7-segment side are not blinking.
#include <xc.h>
//***Define the signal pins of all four displays***//
#define s1 RC0
#define s2 RC1
#define s3 RC2
#define s4 RC3
//***End of definition**////
void main()
{
unsigned int a,b,c,d,e,f,g,h; //just variables
int i = 0; //the 4-digit value that is to be displayed
int flag =0; //for creating delay
unsigned int seg[]={0XC0, //Hex value to display the number 0
0XF9, //Hex value to display the number 1
0XA4, //Hex value to display the number 2
0XB0, //Hex value to display the number 3
0X99, //Hex value to display the number 4
0X92, //Hex value to display the number 5
0X82, //Hex value to display the number 6
0XF8, //Hex value to display the number 7
0X80, //Hex value to display the number 8
0X90 //Hex value to display the number 9
}; //End of Array for displaying numbers from 0 to 9// FOR CA
//*****I/O Configuration****//
TRISC=0X00;
TRISD=0x00;
PORTC=0XFF; // FOR CA
//***End of I/O configuration**///
#define _XTAL_FREQ 20000000
while(1)
{
//***Splitting "i" into four digits***//
a=i%10;//4th digit is saved here
b=i/10;
c=b%10;//3rd digit is saved here
d=b/10;
e=d%10; //2nd digit is saved here
f=d/10;
g=f%10; //1st digit is saved here
h=f/10;
//***End of splitting***//
PORTD=seg[g];s1=0; //Turn ON display 1 and print 4th digit
__delay_ms(10);s1=1; //Turn OFF display 1 after 5ms delay
PORTD=seg[e];s2=0; //Turn ON display 2 and print 3rd digit
__delay_ms(10);s2=1; //Turn OFF display 2 after 5ms delay
PORTD=seg[c];s3=0; //Turn ON display 3 and print 2nd digit
__delay_ms(10);s3=1; //Turn OFF display 3 after 5ms delay
PORTD=seg[a];s4=0; //Turn ON display 4 and print 1st digit
__delay_ms(10);s4=1; //Turn OFF display 4 after 5ms delay
if(flag>=10) //wait till flag reaches 100
{
i++;flag=0; //only if flag is hundred "i" will be incremented
}
flag++; //increment flag for each flash
}
}
Tnx
Replies: 910
If the data pins on your MCU is bliking with red and blue colour it means that the code is outputting something. So in that case your see the 7-Seg module display something.
If nothing turns up may be you swapped between Common Cathode and Comman Anode type. Proteus has both type of display, make sure you are using the relevent one
You voted ‘up’
Replies: 17
yes, code is outputting only 0 and 8.Perhaps the selection of base resitor is the issue. Could u plaese inform how to calculate base resistor depends on my case.
tnx Raj.
You voted ‘up’
Replies: 910
You need not worry about the tranistsors and resistors during simulation. The simulation should work even if the I/O pins from PIC is directly connected to the 7-Segment display module.
I would highly recommend you to read this tutorial, here you can find how transistor and resistor is used in 7-segmentdiplay module. It also shows a simulation form proteus.
The base resistor of a transistor can be calculated based in the collector current, switching voltage and gain value of the transistor. As we know transistor is a current controlled device hence the resistor is used only to limit the base current. For your application an intense calculation of base resistor is not required and a value of 1K should work
You voted ‘up’ | https://circuitdigest.com/forums/embedded/ca-seven-segments | CC-MAIN-2019-35 | refinedweb | 646 | 54.19 |
Using the Arduino to send audio via pulse width modulation
I’m still interested in doing light based communication, but I haven’t made a lot of progress. I did build an LTSpice model of the circuit I used yesterday, but other than verifying that it probably would work as built (which it did) I didn’t feel like I had enough brain cells working to optimize the circuit. So, instead, I decided to try to see if I could use an Arduino to send reasonably high quality audio over light using pulse width modulation.
It doesn’t really seem all that hard in principle: the Arduino libraries include an analogWrite() command which can be used to generate a PWM signal on an output pin. But the problem is the frequency of operation is quite low: around 500Hz or so. Since I was interested in sending voice bandwidth signals (say sampled at 8000Hz or so) the PWM “carrier” frequency simply wasn’t high enough.
So, I did a bit of digging. It turns out that you can configure the timers on the ATMEGA328 on board the Arduino pretty easily, and if you dig through the datasheet, scratch your head a bit, and then type carefully, you can come up with the right incantation. Which I did: in fact, it worked the very first time I downloaded it to the board.
I recorded a second or so of audio using Audacity, dumped it as an 8 bit raw audio file, and then converted it to bytes. I then created a very simple program which simply copies each byte to the PWM overflow register, and then delays for 125 microseconds (1/8000 of a second). Other than that, just some simple bit twiddling to change the PWM prescaler to operate at the full 16Mhz clockrate, and… voila.
Witness the video:
Here’s the core of the code (the actual audio data has been stripped for brevity):
#include <avr/pgmspace.h> prog_uchar bwdat[] PROGMEM = { 0x80, 0x80, 0x80, 0x7f, 0x80, 0x80, 0x80, 0x81, 0x80, 0x80, 0x80, 0x80, // ... lots of lines deleted for brevity... 0x80, 0x80, 0x80, 0x80, 0x80, 0x81, 0x80, 0x7f, 0x7f, 0x80, 0x80, 0x80, 0x80, 0x81, 0x81, 0x80, 0x80, 0x81 } ; void setup() { pinMode(11, OUTPUT); TCCR2A = _BV(COM2A1) | _BV(WGM21) | _BV(WGM20); TCCR2B = _BV(CS20) ; OCR2A = 180; } void loop() { int i ; char ch ; for (i=0; i<sizeof(bwdat); i++) { ch = pgm_read_byte_near(bwdat+i) ; OCR2A = ch ; delayMicroseconds(125) ; } }
Comment from Panzi
Time 4/10/2012 at 10:52 am
AWESOME!! You are a Genius!!
Comment from Anthony Webb
Time 7/20/2012 at 7:43 am
Hello Mark, very nice work there! I’ve got a question which you may have an answer to given some of the work you have done with audio. I’d like to have my arduino record FM radio using a chip like ()
If I am reading your post correctly it might not work to do an analogRead() on the audio output pins and write the data to a file on an SD card right? Any ideas how this could be made to work given your experience with arduino/audio?
Comment from Anthony Webb
Time 7/20/2012 at 7:52 am
One other thing, I will note that I also have a beaglebone I have been messing around with. If the arduino simply isnt fast enough to do this perhaps the bone would be? Seems to me like there are simple recording devices out there (doesnt hallmark put them in their greeting cards now days?) that are capable of recording audio, so I dont know why anarduino couldnt?
Comment from Webb Anthony
Time 2/9/2013 at 1:15 pm
Anthony Webb’s forehead looks cooL! Where did you buy that forehead?
Comment from Dylan
Time 5/29/2016 at 12:51 am
Hi Mark,
Excellent post. Five years old and still very useful! I just had a question.
What did you use to convert the audacity raw file from hex to 0xff format to copy and paste into the program? Thank you.
I’m using a mac.
Regards,
Dylan
Comment from Ben
Time 8/6/2011 at 4:53 pm
I’m rather interested in how you made the arduino send the audio via pulse width modulation. Could I get a copy of the whole script? | http://brainwagon.org/2011/07/17/using-the-arduino-to-send-audio-via-pulse-width-modulation/ | CC-MAIN-2017-30 | refinedweb | 720 | 68.2 |
I am currently testing out my NXT-Segway with an identical build based on the HTWay (). I'm still quite new to this and I couldn't find that many sample codes for the new leJos 0.9.0.
So this is the code I've got so far:
- Code: Select all
import lejos.nxt.*;
import lejos.robotics.*;
import lejos.nxt.Motor.*;
import lejos.nxt.Button.*;
import lejos.nxt.SensorPort.*;
import lejos.nxt.addon.GyroSensor;
import lejos.robotics.navigation.Segway;
class easyway
{
public static void main(String[] args) throws Exception
{
NXTMotor left = new NXTMotor(MotorPort.C);
NXTMotor right = new NXTMotor(MotorPort.A);
GyroSensor gyro = new GyroSensor (SensorPort.S2);
Segway a = new Segway(left, right, gyro, 4.9);
}
}
After compiling and transferring the code to the NXT it starts by prompting me to lay Segway flat for gyro calibration, then begins self-balancing thread. It just moves forward or backward, depending on how it is balanced, no matter what and ends it with a faceplant/falling on the back. I've tried balancing it against the wall but it moves forward/backward in the same way. The wheel diameter is written down in cm, but I've tried it with inches but to no avail.
It feels like it won't update itself as fast as it should? I don't know for sure, I'm still quite new to this. Any help? I can attach a video if necessary. | http://www.lejos.org/forum/viewtopic.php?f=7&t=2963 | CC-MAIN-2014-15 | refinedweb | 239 | 66.23 |
----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: -----------------------------------------------------------
Advertising
LGTM, thanks for working on this Isabel ! Waiting for some unit-tests to bless/approve the implementation code before giving this a Ship It. src/slave/validation.hpp (line 18) <> Missing header include guards ? src/slave/validation.hpp (line 28) <> newline before. I am expecting other people to add slave validation code in this file too in the future. So separating `namespace executor/call` by a newline would be a good idea. src/slave/validation.hpp (line 32) <> What do you think about going ahead and implementing some unit tests ? You can create another patch if you would like for the tests. But in general, it's good practice to have tests in most cases even for this trivial validation code :) I guess we would need the following tests in `src/tests/executor_http_api_tests.cpp` ? - Missing Executor/Framework Id. - Invalid call message that is not initialized. - Invalid call message that does not have Subscribe/Update/Message but has the corresponding type set. src/slave/validation.cpp (line 25) <> newline before. Same reason as before. - Anand Mazumdar On Sept. 21, 2015, 11:23 p.m., Isabel Jimenez wrote: > > ----------------------------------------------------------- > This is an automatically generated e-mail. To reply, visit: > > ----------------------------------------------------------- > > (Updated Sept. 21, 2015, 11:23 e224060 > src/slave/http.cpp 12a4d39 > src/slave/validation.hpp PRE-CREATION > src/slave/validation.cpp PRE-CREATION > > Diff: > > > Testing > ------- > > make check > > > Thanks, > > Isabel Jimenez > > | https://www.mail-archive.com/reviews@mesos.apache.org/msg10192.html | CC-MAIN-2016-44 | refinedweb | 238 | 54.18 |
In today’s Programming Praxis exercise, our goal is to determine the smallest amount of sequential numbers (starting from 1) needed to sum up to a given value, using the fact that each term may be either positive or negative. Let’s get started, shall we?
import Data.List import Text.Printf
We use the same algorithm as the provided solution and the Stackoverflow topic where this exercise originated: find the smallest sum larger than our target number that has the same parity modulo 2 and flip the sign of terms totalling half the difference.
jack :: Int -> [Int] jack n = snd $ head [ mapAccumR (\r x -> if x <= r then (r-x,-x) else (r,x)) (div (t-n) 2) [1..i] | (i,t) <- scanl (\(_,s) x -> (x, s+x)) (0,0) [1..] , t >= abs n, mod (t+n) 2 == 0]
A test to see if everything is working properly:
main :: IO () main = mapM_ putStrLn [printf "%3d %2d %s" n (length j) (show j) | n <- [-24..24], let j = jack n]
Advertisements
Tags: bonsai, code, Haskell, jack, jacks, jumping, kata, praxis, programming | https://bonsaicode.wordpress.com/2013/03/22/programming-praxis-jumping-jack/ | CC-MAIN-2017-22 | refinedweb | 182 | 67.18 |
I'm using my Pi as a media center, and I wanted to install some basic hardware controls on the casing.
Now, from various tutorials I've gotten as far as this:
- Code: Select all
#!/usr/bin/env python
from time import sleep
import os
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN)
while True:
if ( GPIO.input(23) == False ):
[do something]
sleep(1);
Thing is, I would like the 'do something' bit to be something like 'pretend the up arrow key is pressed', but I can't find how to say that in Python. This leads me to believe it's either really easy, or really hard.
Can it be done? Can you tell me how? Also, if it can be done, I'd like to run the script all the time, from startup. And I also use the GPIO to control a character LCD through lcdproc, can I use these things together, or do they conflict? | http://www.raspberrypi.org/forums/viewtopic.php?t=29349&p=275077 | CC-MAIN-2014-15 | refinedweb | 164 | 83.25 |
Hi,
Sorry for the late update and thanks for the detail update.
It seems that the model can run correctly with TensorRT separately.
Would you mind to share a reproducible source with us so we can check it for you?
Thanks.
Hi,
Sorry for the late update and thanks for the detail update.
It seems that the model can run correctly with TensorRT separately.
Would you mind to share a reproducible source with us so we can check it for you?
Thanks.
Hello, thank for reply.
No, i have found MAIN error, right now i need to find better way to save images from NvBuf into OpenCV Mat, or just save images into jpeg/png/bmp files. Yes i have checked deepstream-test5-sample, but adaptating it’s code gives me nothing, but i will try it again.
All this because NvBuf and CPU share same RAM and sometimes there is bug that can overwrite into wrong sectors.
I have pasted my code for saving images, main part of which i got from this forum :-)
Please help with another bug, with NvTracker initial ID value , i have already made new thread.
Thank you,
Hi,
So is the original cudnn error solved?
Just want to confirm it.
Thanks.
Hello.
Original issue, which is ONNX on DP5, partially resolved. Because with batch-size=1 DP5 can make and use engine by itself. But with batch size bigger than 1 - it crashes. And i can make engine with batch-size>1 with TRT, but DP5, when loading it, issue error described in Migrated from DeepStream 4 to Deepstream 5 and got errors
So i can use ONLY batch size of 1 with ONNX models.
Because of all this, problem is partially solved.
Hi,
Have you re-generate the engine file?
It is possible that Deepstream use the exist engine file which created with batchsize=1 and causes this error.
Thanks.
Hello.
Yes, it is re-generated. Problem is in batch-size > 1, batch-size=1 is working OK. And i do re-generate it in DP5.
Best regards.
Hello.
I am sure that there will be solution to DP5 + ONNX with batch size > 1.
You can close this thread.
Best regards.
Hi,
Not sure which solution do you find.
Here is our suggestion for your reference.
Assertion Error in buildMemGraph: 0 (mg.nodes[mg.regionIndices[outputRegion]].size == mg.nodes[mg.regionIndices[inputRegion]].size)
Based on above log, the error occurs from an onnx model doesn’t generate with the correct batchsize.
Since you try to use batchsize=2, the model need to be generated with batchsize 2 or dynamic batchsize.
We can reproduce this error with our /usr/src/tensorrt/data/resnet50/ResNet50.onnx model.
To solve this issue, we re-generate the onnx file for batchsize==2.
This can be achieved via our ONNX GraphSurgeon API:
1. Install
$ $ cd TensorRT/tools/onnx-graphsurgeon/ $ make install
2. Generate your own convert.py.
Here is our sample for resnet50.
In general, we change the input batch, output batch and the reshape operation right before the output layer.
import onnx_graphsurgeon as gs import onnx batch = 2 graph = gs.import_onnx(onnx.load("ResNet50.onnx")) for inp in graph.inputs: inp.shape[0] = batch for out in graph.outputs: out.shape[0] = batch # update reshape from [1, 2048] to [2, 2048] reshape = [node for node in graph.nodes if node.op == "Reshape"] reshape[0].inputs[1].values[0] = batch onnx.save(gs.export_onnx(graph), "ResNet50_dynamic.onnx")
python3 convert.py
3.
The you can replace the onnx model with the dynamic one.
We have confirmed that Deepstream can run the ResNet50_dynamic.onnx without issue in our environment.
Thanks.
Thank you, will try this ASAP.
Hello.
I am stuck at $ make install from 1.
With python cannot find setuptools with setuptools installed.
Please help, four days i am trying :-(
Hi,
Sorry for the late update.
Please try the following to see if helps:
$ sudo apt-get update $ sudo apt-get install python3-pip $ sudo pip3 install -U pip testresources setuptools
Thanks.
Hello, AastaLLL.
Thanks for update, but i did those steps and still got that python cannot find setuptools.
What else i can do?
Hi,
Are you using python3?
Thanks.
Yes, Python3.
Hi,
Sorry for the late update.
Would you mind to try following command to see if helps?
$ sudo apt-get install python3-setuptools
Thanks.
Hello.
There is another error:
rm -rf dist/ build/ onnx_graphsurgeon.egg-info/
python3 setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] …]
or: setup.py --help [cmd1 cmd2 …]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command ‘bdist_wheel’
Makefile:24: recipe for target ‘build’ failed
make: *** [build] Error 1
Hi,
You don’t need to run
python3 setup.py bdist_wheel.
It can be installed by the following command directly:
$ $ cd TensorRT/tools/onnx-graphsurgeon/ $ make install
Thanks.
Hello. I do just sudo make install inside of TensoRT/tools/onnx-graphsurgeon/
.
But i got that error message.
I did removed all data from TensorRT and again git cloned it, same error.
Hi,
Thanks for your feedback.
Some dependencies need to be installed first.
We find a clean environment and the onnx-graphsurgeon can be installed with the following command:
$ sudo apt-get install python3-pip libprotobuf-dev protobuf-compiler $ git clone $ cd TensorRT/tools/onnx-graphsurgeon/ $ make install
Thanks. | https://forums.developer.nvidia.com/t/migrated-from-deepstream-4-to-deepstream-5-and-got-errors/146140/26 | CC-MAIN-2022-27 | refinedweb | 891 | 68.87 |
NAME
FP::Show - give (nice) code representation for debugging purposes
SYNOPSIS
use FP::Show; # exports 'show' use FP::List; is show(list(3, 4)->map(sub{$_[0]*10})), "list(30, 40)";
DESCRIPTION
The 'show' function takes a value and returns a string of Perl code which when evaluated should produce an equivalent clone of that value (assuming that the Perl functions used in the string are imported into the namespace where the code is evaluated).
It is somewhat like Data::Dumper, but enables classes to determine the formatting of their instances by implementing the FP::Abstract::Show protocol (for details, see there). This allows for concise, more highlevel output than just showing the bare internals. It's, for example, normally not useful when inspecting data for debugging to know that an instance of FP::List consists of a chain of FP::List::Pair objects which in turn are made of blessed arrays or what not; just showing a call to the same convenience constructor function that can be used normally to create such a value is a better choice (see the example in the SYNOPSIS, and for more examples the `intro` document of the Functional Perl distribution or website).
`show` always works, regardless of whether a value implements the protocol--it falls back to Data::Dumper.
ALTERNATIVES
Data::Dumper *does* have a similar feature, $Data::Dumper::Freezer, but it needs the object to be mutated, which is not what one will want.
Why not use string overloading instead? Because '""' overloading is returning 'plain' strings, not perl code (or so it seems, is there any spec that defines exactly what it means?) Code couldn't know whether to quote the result:
sub foo2 { my ($l)=@_; # this is quoting safe: die "not what we wanted: ".show($l) # this would not be: #die "not what we wanted: $l" } eval { foo2 list 100-1, "bottles"; }; like $@, qr/^\Qnot what we wanted: list(99, 'bottles')/; eval { foo2 "list(99, 'bottles')"; }; like $@, qr/^\Qnot what we wanted: 'list(99, \'bottles\')'/; # so how would you tell which value foo2 really got in each case, # just from looking at the message? # also: eval { foo2 +{a=> 1, b=>10}; }; like $@, qr/^\Qnot what we wanted: +{a => 1, b => 10}/; # would die with something like: # not what we wanted: HASH(0xEADBEEF) # which isn't very informative
Embedding pointer values in the output also means that it can't be used for automatic testing. (Even with a future implementation of cut-offs, values returned by `show` will be good enough when what one needs to do is compare against a short representation. Also, likely we would implement the cut-off value as an optional parameter.)
BUGS
Show can't currently handle circular data structures (it will run out of stack space.) Not hard to fix (turtle and hare algo), just need to do it.
SEE ALSO
FP::Abstract::Show for the protocol definition. Note that FP::Show also works on values which don't implement the protocol (fall back to Data::Dumper). for the mentioned intro.
NOTE
This is alpha software! Read the status section in the package README or on the website. | https://metacpan.org/pod/release/PFLANZE/FunctionalPerl-0.72.22/lib/FP/Show.pm | CC-MAIN-2020-10 | refinedweb | 524 | 55.58 |
TCS Selenium Interview Questions With Java: The essential part of preparing for an interview is practice. Knowing what job interview questions you might be asked is necessary – that way, you can craft your answers well in advance and feel confident in your responses when the pressure is on.
Wouldn’t it be great if you knew which interview questions might be asked for Test Engineer and QA (Manual & Automation) positions? Unfortunately, we can’t read minds, but we’ll give you the next best thing: a list of previously asked TCS Selenium interview questions and answers.
We have tried to share some manual testing interview questions, Selenium interview questions, and general testing interview questions here. But we recommend spending some quality time getting comfortable with what might be asked when you go to TCS.
In this post, we will share all the TCS Selenium automation testing interview questions. We hope they give you a good idea of the types of questions asked previously at various locations.
If you want to share your TCS interview experience and questions with us, you can send them to us at softwaretestingo.com@gmail.com.
TCS Virtual Round Interview Questions
For these Tata Consultancy Services (TCS) interview questions, we thank Sarada Ponnada for coming forward and sharing them with us. We hope this motivates others to share their interview experiences and questions too.
Still, we need your love ❤️ and support 🤝 to build a better platform for our fellow testing community: a single place where a QA can find real-time testing interview questions.
Interview Date: 14/04/2022
Position: Automation Test Engineer
- What are sanity, smoke, regression testing, and what is the difference between them?
- What is a path in python?
- Explain agile methodologies and story point allocation?
- What is the scope in python?
- Explain different data types in python?
- How can we automate all test cases, and when can we?
- What is the difference between test planning and test strategy?
- What are bug tracking tools, and where are logs stored?
TCS Technical Round Interview Questions
Interview Date: 30.01.2022
Round: 1
Source: Whatsapp Group
- Explain your Project and overall experience
- Explain Automation Testing Life cycle
- Explain Regression Testing
- Explain Defect leakage and defect release
- What does "fault" mean in testing?
- What are the annotations in TestNG
- What is TestNG? Explain its advantages and its use in automation.
- How do you switch to and handle frames in Selenium?
- How do you handle window popups? What is the command for it?
- How do you refresh a page in Selenium? What is the command?
- What are Annotations in Jenkins
- What is POM
- What is the use of maven
- How do we use cucumber in selenium
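Three of the Selenium questions above (frames, window popups, page refresh) are easiest to answer with a short code sketch. This is a minimal example, assuming Selenium 4 with ChromeDriver; the URL, frame name, and link text are placeholders:

```java
import java.util.Set;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SwitchingDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");       // placeholder URL

        // Switching to a frame: by name/id, by index, or by WebElement.
        driver.switchTo().frame("frame1");       // hypothetical frame name
        driver.switchTo().defaultContent();      // back to the main document

        // Window popups: remember the parent handle, open the child,
        // then switch to whichever handle is not the parent.
        String parent = driver.getWindowHandle();
        driver.findElement(By.linkText("Open window")).click(); // hypothetical link
        Set<String> handles = driver.getWindowHandles();
        for (String h : handles) {
            if (!h.equals(parent)) {
                driver.switchTo().window(h);
            }
        }
        driver.close();                          // close the child window
        driver.switchTo().window(parent);        // return to the parent

        // Refreshing the page:
        driver.navigate().refresh();

        driver.quit();
    }
}
```

The same pattern works for alerts via `driver.switchTo().alert()`.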
TCS Company Pune Interview Questions
Company Location: Pune, India
Attended on: 13.11.2021
- Tell me about yourself?
- Which Framework you have used for your project?
- Explain Page Factory?
- Explain Abstract & Interface?
- What is the difference between Implicit wait, Explicit wait, and Fluent wait?
- What is the code for screenshots using selenium?
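For the screenshot question, the usual answer is to cast the driver to `TakesScreenshot` and copy the resulting file. A sketch, assuming Selenium 4 (the URL and output path are placeholders):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ScreenshotDemo {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");   // placeholder URL

        // Cast the driver to TakesScreenshot and save the capture.
        File src = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        Files.copy(src.toPath(), Paths.get("screenshot.png"),
                   StandardCopyOption.REPLACE_EXISTING);

        driver.quit();
    }
}
```

Using `java.nio.file.Files` avoids the commons-io dependency that many tutorials use for `FileUtils.copyFile`.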
TCS Company Chennai Interview Questions
Company Location: Chennai, India
Attended on: 13.11.2021
- Tell me about yourself
- Explain Framework
- What is Inheritance?
- What is API Testing and explain in detail?
- What is Jasmine Framework?
- What is Python?
TCS Canada Interview Questions
Company Location: Toronto, Canada
Updated on: 29.10.2021
- Explain your framework
- Explain how you have implemented oops concepts in your framework
- What is a latent defect
- How to find out broken links in a web page
- Name the 8 locators
- Which locator is most used and why
- Difference between implicit and explicit wait
- Method to scroll down to the bottom of a web page
- If an element is not visible on the screen how to click on it
- Explain different response codes of API
- How to handle a window pop up
- How to drag and drop files on a web page
- How to retrieve an image from an nth window
- Name methods of Action class
- Difference between POM and Page Factory
- How to select the value from the drop-down
- Explain list and hashmap
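For the broken-links question above, a common approach is to collect every `<a>` element's `href` and probe it over HTTP; any response code of 400 or above is treated as broken. A sketch, assuming Selenium 4 (placeholder URL; some servers reject HEAD requests, in which case a GET works too):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class BrokenLinks {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");   // placeholder URL

        List<WebElement> links = driver.findElements(By.tagName("a"));
        for (WebElement link : links) {
            String href = link.getAttribute("href");
            if (href == null || href.isEmpty()) continue;

            // Issue a lightweight HEAD request; >= 400 means broken.
            HttpURLConnection conn =
                (HttpURLConnection) new URL(href).openConnection();
            conn.setRequestMethod("HEAD");
            conn.connect();
            if (conn.getResponseCode() >= 400) {
                System.out.println("Broken: " + href
                        + " (" + conn.getResponseCode() + ")");
            }
            conn.disconnect();
        }
        driver.quit();
    }
}
```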
TCS Company Bangalore Interview Questions
Company Location: Bangalore, India
Attended on: 29.09.2021
Technical Round -1
- What are the primary key and unique keys?
- How many joins are there? Any differences between them?
- What is the difference between join and union?
- Select top 3 max salary employees by depts.
- Difference between WHERE and HAVING, and their use?
- Different types of Date functions?
- What are the grouping functions?
- Which function is used to get the current date?
- What is a subquery?
- What is indexing?
Technical Round -2
- What is a class?
- What is the difference between heap and stack?
- What is the difference between an instance variable and a local variable?
- What is Constructor? Types of Constructors?
- What is the difference between Break and Continue?
- What command is used in java to exit the system from the current execution?
- Addition features in Java 8?
- What is the difference between for and for each loop in java and its use it?
- Can we have multiple public class within a class?
- What is inheritance? Types of inheritance? Do multiple inheritances allow in java? If not, why?
- What is polymorphism? How can we achieve it?
- What is the difference between method overloading and method overriding?
- Can we achieve method overloading when two methods only differ in return type?
- Method overloading and overriding examples in the Selenium project?
- What is encapsulation?
- What is IS-A and HAS-A relation in java? With examples?
- What are the final and super keywords? What is the difference between them?
- Explain runtime polymorphism and compile-time with examples?
- Can the final/Static method be overloaded?
- Can final/Static methods be overridden?
- Can we overload the main method?
- Can we execute a class without a main method?
- What is a Package?
- What is an Abstract Class? Write an example code?
- What is an Interface? What is the difference between the Abstract class?
- Can we use private and protected access modifiers inside an Interface?
- Can multiple inheritance be achieved through an Interface?
- Examples of Abstract and Interface used in the selenium project?
- What is an Exception, and what is its base class?
- What is Final, Finally, Finalize?
- What is done in the finally block?
- What is garbage collection java? How is it done?
- What is the difference between Throws and Throw?
- Give some examples from Java and Selenium?
- What is Java Reflection, Singleton?
- What is threading? How is multithreading achieved? How do you initiate a thread in Java? What do you mean by thread-safe?
- What is the difference between collection and collections?
- The collection is what type?
- What is the difference between Array and ArrayList?
- What is the difference between Set and HashSet?
- What is the difference between HashMap and hashtable?
- What is the difference between ArrayList and LinkList?
- How do you use Map collection for your project?
- Can we have a duplicate key value in HashMap?
- How to fetch values from a hashmap?
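The last two HashMap questions (duplicate keys, fetching values) can be shown in a few lines. A repeated `put` with the same key overwrites the old value, and values can be fetched with `get(key)` or by iterating over `entrySet()` (class and method names below are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    // Builds a map and shows that a repeated key overwrites the old value.
    public static Map<String, Integer> build() {
        Map<String, Integer> marks = new HashMap<>();
        marks.put("alice", 70);
        marks.put("bob", 80);
        marks.put("alice", 90);   // duplicate key: replaces 70, size stays 2
        return marks;
    }

    public static void main(String[] args) {
        Map<String, Integer> marks = build();
        // Fetching values: iterate over entrySet(), or use get(key) directly.
        for (Map.Entry<String, Integer> e : marks.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
        System.out.println(marks.get("bob"));
    }
}
```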
Next is the HR discussion.
TCS Chennai Interview Questions
Company Name: TCS
Company Location: Chennai, India
Attended on: 30.07.2021
Thanks to Gokulsarathy P for sharing these interview questions with us.
- What is the difference between SDLC & STLC?
- How to get a count of all the links on a webpage and click on them?
- How do you validate a URL from the parent window? Clicking on it should navigate to the child window. Also, how do you validate the extensions after '/'?
- How do you handle Rejected defects with the dev team?
- What are all the limitations of Selenium?
- How did you capture screenshots in Selenium?
- What are listeners?
- Explain HashMap.
- Types of waits in selenium
- How to run multiple times the same test case using TestNG?
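The "count all links and click them" question comes up repeatedly in these interviews. The trap is that clicking a link navigates away and makes the previously found elements stale, so the links must be re-found on every iteration. A sketch, assuming Selenium 4 (placeholder URL):

```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LinkCount {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");   // placeholder URL

        List<WebElement> links = driver.findElements(By.tagName("a"));
        System.out.println("Link count: " + links.size());

        // Clicking navigates away and stales the old references,
        // so re-find the links by index on every iteration.
        for (int i = 0; i < links.size(); i++) {
            List<WebElement> fresh = driver.findElements(By.tagName("a"));
            if (i >= fresh.size()) break;
            fresh.get(i).click();
            driver.navigate().back();
        }
        driver.quit();
    }
}
```

For the TestNG repetition question, `@Test(invocationCount = 3)` runs the same test method three times.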
TCS Bangalore Interview Questions
Company Name: TCS
Company Location: Bangalore, India
Updated on: 11.07.2021
- What are checked and unchecked exceptions
- What are different types of string methods
- What are collections? Which collections have you used?
- Interface and abstract class
- Overloading and overriding
- Few Java programs for Logic
1. How to reverse a string word by word, e.g. "I am from Mumbai" should output "Mumbai from am I".
2. Given 1, 2, 3, 1, 2, how to print the numbers appearing twice?
- Cucumber: what are hooks in cucumber?
- How to write cucumber feature
- Explain your framework
- How do you pass data?
- Agile ceremonies related questions
- Selenium web driver link text-related questions.
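The two Java logic programs asked above can be sketched like this (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JavaPrograms {
    // "I am from Mumbai" -> "Mumbai from am I"
    public static String reverseWords(String s) {
        String[] words = s.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int i = words.length - 1; i >= 0; i--) {
            sb.append(words[i]);
            if (i > 0) sb.append(' ');
        }
        return sb.toString();
    }

    // 1,2,3,1,2 -> the numbers appearing more than once, in first-seen order
    public static List<Integer> duplicates(int[] a) {
        Map<Integer, Integer> count = new LinkedHashMap<>();
        for (int x : a) count.merge(x, 1, Integer::sum);
        List<Integer> out = new ArrayList<>();
        for (Map.Entry<Integer, Integer> e : count.entrySet()) {
            if (e.getValue() > 1) out.add(e.getKey());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(reverseWords("I am from Mumbai")); // Mumbai from am I
        System.out.println(duplicates(new int[]{1, 2, 3, 1, 2})); // [1, 2]
    }
}
```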
TCS Selenium Interview Questions [ Bangalore, India ]
Company Name: TCS
Company Location: Bangalore, India
Updated on: 13.05.2021
- Can we overload the main method? If yes, how?
- How to get a count of all the links on a webpage and click on them?
- How do you identify whether an element is a link using XPath? (//a)
- How do you handle windows?
- How to handle tables using XPath?
- How to handle dynamic elements on a webpage? E.g., an employee list keeps growing and you want to retrieve the last employee's data.
- What changes need to be made to the .java class for parallel execution of test cases using TestNG?
- Defect life cycle.
- Diff between SDLC and STLC.
- Components in defect report.
- How do you handle QA conflict?
- When to start Automation?
- How do you choose test cases for Automation?
- How do you take screenshots in selenium for failed test cases? Name the class and method.
- Explain to me the logic of finding the prime number.
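Two of these questions pair well: yes, `main` can be overloaded (the JVM only ever calls the `String[]` signature; any other `main` is just an ordinary method you invoke yourself), and the overload below doubles as the classic prime-check logic (class name is mine):

```java
public class MainOverload {
    // Standard entry point: the JVM only ever calls this signature.
    public static void main(String[] args) {
        System.out.println(main(7));   // explicitly invoke the overload
    }

    // Overloaded main: legal Java, but never called by the JVM itself.
    // Doubles as the prime-check question: trial division up to sqrt(n).
    public static boolean main(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```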
TCS Selenium Interview Questions [ Toronto, Canada ]
Company Name: TCS
Company Location: Toronto, Canada
Updated on: 16.03.2021
- Describe how to handle the below items using Selenium:
  - iframes
  - windows
  - tables
  - alerts
- What is a javascript executor?
- What are static variables?
- What is the difference between overloading and overriding
- How can you do parallel test execution using selenium?
- What are actions
- Types of waits in selenium
- How to take a screenshot of failed test scripts in TestNG
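One way to answer the last question (screenshots of failed tests in TestNG) is an `ITestListener` that captures the browser on failure. This is a sketch assuming TestNG 7+ (where the other listener methods have default implementations) and Selenium 4; `DriverHolder` is a hypothetical static accessor for the test class's WebDriver:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class FailureScreenshotListener implements ITestListener {
    @Override
    public void onTestFailure(ITestResult result) {
        WebDriver driver = DriverHolder.get();   // hypothetical accessor
        if (driver == null) return;
        try {
            // Capture the browser and name the file after the failed test.
            File src = ((TakesScreenshot) driver)
                    .getScreenshotAs(OutputType.FILE);
            Files.copy(src.toPath(), Paths.get(result.getName() + ".png"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Wire it in with `@Listeners(FailureScreenshotListener.class)` on the test class, or via a `<listeners>` entry in testng.xml.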
TCS Selenium Interview Questions
Company Name: TCS
Company Location: Chennai, India
Updated on: 15.03.2021
- Using Selenium, can we automate JavaScript?
- Explain the statement WebDriver driver = new FirefoxDriver();
- Oops concept basics
- How to select a drop-down box without using “select”?
- Architecture of framework
- Maven questions
- How do you set the Maven path in Jenkins?
- Is TestNG a framework or a test package? (I said it's a unit-testing framework used to generate reports.) She then asked why we use TestNG.
- Given a scenario about booking 10 tickets on Air India, write code for it?
- What is meant by class and object? Give one example, and explain the components of the framework briefly?
- Which framework are you using? (I said KDF, but she asked about POM.)
- Why do we use Git?
- Why should we use Jenkins?
- Asked about .war files for Tomcat?
- Explain briefly, step by step, how you run a scenario: where you write the code and where you generate the reports. She asked me to write it down on paper.
TCS Selenium Interview Questions With Java
- Int a=22555; write a program to count number digits in a given number. Don’t convert to a string.
- In the web table, every cell has “*,” but one cell has Letter “A” Now, write XPath for “*,” which is beside the letter “A” cell.
- Prepare a Test case/Scenario for covering the Calculator?
- How do you test a web page-Application?
- Write the steps for how you would automate if a new page is added to your website
- How do you add objects to a repository?
- What is a basic error you would get when an XML is not correct?
- Tell me some examples of security testing
- Write the scenarios in testing a coffee vending machine
- Difference between QA and testing
- What kind of QA process do you follow?
- What are the inputs required to raise a defect?
- Checklist for testing a website
- Who is going to assign the severity and priority?
TCS Selenium Interview Questions
- Write the syntax of implicitly wait and Explicitlywait
- How to do parallel testing and cross-browser testing
- Write the code for taking screenshots
- How to read data from an excel sheet
- Explain your Framework
- What is Autoit
- How to handle Authentication popups
- What is webdriver
- Explain Jenkins
- What are the dependencies you added to your framework?
- Explain about collections
- SQL commands
- What is the difference between inheritance and abstraction?
- Where did you use abstraction in your current project? Give me one example?
- Explain cucumber and cucumber options
TCS Selenium Interview Questions
- What is the java tree?
- Which framework you worked
- How many percentages of automation are covered in your project
- What are annotations
- How do you test API? experience in post method
- What is an agile process?
- What is scrum
- Which area do you need to improve on selenium?
- How will you find out broken links on the webpage?
We Hope these TCS Selenium Interview Questions Will Help you for your upcoming interview. If you want to share any feedback regarding the TCS Selenium Interview Questions, you can comment below.
Could you please share all question and answer for selenium testing?
For All Selenium Automation Testing questions you can follow this link
Regarding the scenario which needs to book 10 tickets in Airline, is that require java code or selenium code at the time interview
It Maybe Java Selenium Code
great thank you ,it is very much detailed and helpful for quick revision
interview for automation test engineer: on 14/04/2022
questions:
sanity, smoke and regression testing and difference between them
path in python
agile methodologies and story point allocation
scope in python
data types in python
how can we automate among all testcases and when can we
test planning and test strategy
bug tracking tools and logs storage
Thanks, Sarada Ponnada, For sharing these interview questions, and also we have updated the same in the post. | https://www.softwaretestingo.com/tcs-selenium-interview-questions/ | CC-MAIN-2022-27 | refinedweb | 2,289 | 67.25 |
Edited by zeeshanmughal: n/a
Edited by zeeshanmughal: n/a
Which part is confusing?
a) Getting the data from the website
b) Writing data in an excel file?
Edited by thines01: n/a
Which part is confusing?
a) Getting the data from the website
b) Writing data in an excel file?
both of them. i don't know how to get data from website with one click. some logical explanation and little bit hinting code could be appreciate.
Thanks in advance.
What you're trying to do requires a tremendous amount of explanation -- especially if you're coding it.
You will need to be familiar with adding COM references to your code and then using them.
Here is a code snippet that goes out to Bing.com and queries on the word "WebClient" and returns all links found in the results and stores them in a spreadsheet. I wasn't too discriminatory with the links, so some of them are not relevant to the search.
The code uses COM automation to write the data into a spreadsheet -- which means in order for you to be proficient at this technique, you will need to know how Excel references its own internal strucure (WorkBook, WorkSheet, Range, Cell, Value2, etc).
For the web piece: The code opens a web page as a stream and reads in the HTML with all whitespace removed (makes it easier to find things). Then it splits the input by double-quotes and searches for http links.
With that said, I expect you to take what you don't understand and web search the individual piece rather than trying to take the code as a whole and find it on the web.
The result goes in the users temp directory.
I made this a console app, so I could deliver a working solution.
Other than adding the COM reference (and having Excel installed), no external pieces are necessary. Dot Net 3.5 or better required.
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Net; using System.Text.RegularExpressions; using Microsoft.Office.Interop.Excel; namespace DW_414673_CS_CON { class Program { private static List<string> GetResultsFromBing(ref string strError) { List<string> lst_strRetVal = new List<string>(); WebClient wc = new WebClient(); try { string strData = ""; List<string> lst_strData = new List<string>(); StreamReader fileWebIn = new StreamReader(wc.OpenRead("")); if (!fileWebIn.EndOfStream) { // Remove all whitespace strData = Regex.Replace(fileWebIn.ReadToEnd(), @"\s", ""); // Split and stack the entries lst_strData = strData.Substring(strData.IndexOf("All Results") + 11) .Split("\"".ToCharArray(), StringSplitOptions.RemoveEmptyEntries).ToList(); } fileWebIn.Close(); Regex rxUrl = new Regex("(?<url>http://.*)"); // Put all matches in the output list lst_strData.Where(s => rxUrl.IsMatch(s)).ToList().ForEach(s => lst_strRetVal.Add(rxUrl.Match(s).Groups["url"].Value)); } catch (Exception exc) { strError = exc.Message; } return lst_strRetVal; } private static bool WriteResultsToXls(string strOutXlsFileName, List<string> lst_strUrls, ref string strError) { bool blnRetVal = true; Application excel = new Application(); try { Workbook wb = excel.Workbooks.Add(); wb.Worksheets.Add(); //////////////////////////////////////////////////////////////////// // Write each element of the list to column 1 of the spreadsheet. 
long lngRow = 0; lst_strUrls.ForEach(s => ((Worksheet)wb.Worksheets[1]).Cells[++lngRow, 1] = s); wb.SaveAs(strOutXlsFileName, XlFileFormat.xlExcel8, Type.Missing, Type.Missing, Type.Missing, Type.Missing, XlSaveAsAccessMode.xlNoChange, XlSaveConflictResolution.xlLocalSessionChanges, Type.Missing , Type.Missing, Type.Missing, Type.Missing); wb.Close(XlSaveAction.xlSaveChanges); } catch (Exception exc) { blnRetVal = false; strError = exc.Message; } finally { excel.Quit(); } return blnRetVal; } static void Main(string[] args) { string strError=""; List<string> lst_strUrls = GetResultsFromBing(ref strError); if (lst_strUrls.Count.Equals(0)) { Console.WriteLine(strError); return; } //lst_strUrls.ForEach(s => Console.WriteLine(s)); string strOutFile = Path.Combine(Path.GetTempPath(), "TestWebToSheet.xls"); if (!WriteResultsToXls(strOutFile, lst_strUrls, ref strError)) { Console.WriteLine("Could not write to Excel: " + strError); return; } } } }
Edited by thines01: n/a
I realize that first sentence didn't come out the way I wanted it to. :O
It should have read "...especially if you're coding it"
...meaning rather than using somebody's pre-packaged web extractor. :)
Edited by thines01: n/a
Thanks for quick response. that was very helpful example. i will upload my code here while i work on my project.
thanks again.
you insert the data you search for using the string quirey in URL, but if this not an option, how to write the data to the text box and press the button to search. ... | https://www.daniweb.com/programming/software-development/threads/414673/how-to-import-external-data-from-web-into-database-via-c | CC-MAIN-2017-13 | refinedweb | 707 | 51.65 |
Opened 4 years ago
Closed 4 years ago
Last modified 3 years ago
#19002 closed Uncategorized (needsinfo)
Hardcoded url in admin
Description
in django.contrib.admin.sites there are a lot or reverse url called with hard coded "admin:"
for example reverse('admin:%s_%s_changelist'
i've got multiple admin sites with different names, and without this patch the link do not works at all...
they must be called with admin_site.name attribute instead... i attached a diff file
Attachments (1)
Change History (5)
Changed 4 years ago by
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
comment:3 Changed 4 years ago by
What do you mean with "the link do not works at all"? Where do these URLs point to? What do you mean with "I've got multiple admin sites with different names"? Are they AdminSite instances?
Referencing the admin docs () and the namespaced URLs docs ( and) all the admin app instances share the same application namespace (
'admin'). That's th reason we are using the
'admin: prefix in the URL names.
What allows to differenciate/choose among the URL namespaces of these app instances when reversing named URLs with reverse() (or equivalent url template tag occurrence) is the value of the current_app argument. In the case of the admin app it's the
AdminSite.name
value which is the one you can control and which is the one we are passing as current_app in all calls to reverse().
comment:4 Changed 3 years ago by
Actually I believe this is a bad pattern.
When you use url('admin', include(admin.site.urls)), since admin.site is a AdminSite instance and the urls property has the following code:
@property
def urls(self):
return self.get_urls(), self.app_name, self.name
This code makes the namespace dynamic by using self.app_name, but further on the code it uses the admin namespace harcoded, makes no sense to me.
It would be better to use something like reverse(self.app_name + ':%s_%s_changelist')
I've come to this when I was trying to extend the admin behavior. I've created an instance of AdminSite with a different namespace and is impossible to use it like this.
Is there any other reason for this to be hardcoded? AdminSite becomes irrelevant outside of the admin context the way it is right now and the same happens on ModelAdmin.
sorry i've made a diff from an older version of django... just watch the reverse url part of the diff | https://code.djangoproject.com/ticket/19002?cversion=1&cnum_hist=3 | CC-MAIN-2016-50 | refinedweb | 421 | 64.81 |
TriggerExplodeObject(int)
Blow up the nearest object with a matching tag with the specified spell.
void TriggerExplodeObject( int nSpell = SPELL_FIREBALL );
Parameters
nSpell
The SPELL_* used to destroy the target. (Default: SPELL_FIREBALL)
Description
Blow up the nearest object with a matching tag with the specified spell.
This should be called by the trigger object! It ASSUMES that GetEnteringObject will work for OBJECT_SELF here.
This destroys the trigger after it is successfully invoked by the PC.
Remarks
This function does nothing if the entering object is not a PC based on the results of a call to GetIsPC.
The tag of the object to be destroyed is determined by: GetNearestObjectByTag(GetTag(OBJECT_SELF)). Meaning it looks for the nearest object to OBJECT_SELF that has the same tag as OBJECT_SELF.
GetNearestObjectByTag returns an object to be destroyed, note that no validation is done that it found a valid object. This object, valid or not, is then sent off to ExplodeObject for exploding by nSpell.
After a 5 second delay. OBJECT_SELF is destroyed as well. But quietly without any fanfair. This destruction makes the trigger one time use instead of repeatable use.
Requirements
#include "x0_i0_corpses"
Version
???
See Also
author: Mistress | http://palmergames.com/Lexicon/Lexicon_1_69/function.TriggerExplodeObject.html | CC-MAIN-2015-27 | refinedweb | 195 | 60.01 |
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002.
If you wish to participate in discussions, please visit the new
Artima Forums.
answer
Posted by quinith on May 02, 2001 at 4:46 PM
> > > > In the following code, a private member function (getNumInstances) is being called by the class's > > client. I thought private member functions are > > only accessible to class's member functions and > > not the external world. Please explain.
> > > > public class CountInstances {> > private static int numInstances = 0 ;> > > > private int getNumInstances() {> > return numInstances;> > }> > > > private static void addInstance() {> > numInstances++;> > }> > > > CountInstances() {> > CountInstances.addInstance();> > }> > > > public static void main(String args[]) {
> > CountInstances ref = new CountInstances();> > System.out.println("Starting with " + ref.getNumInstances() + " Instances");> > > > }> > }
> >
> > private methods can be accessed only by the other methods of the> same class. The object "ref" is a instance of CountInstances class> so it has the access.
>
Ref may be, as it is in this case, an instance of the CountInstances class. However, this is not why it has access to the private method. It has access to the private method because it is being called within the class CountInstances (yes, that's right the method main is actually part of the CountInstances class.) Should a similiar instance (ref2) be created in a completely different class then then a call such as ref2.getNumInstances() would not be allowed. | http://www.artima.com/legacy/answers/May2000/messages/200.html | CC-MAIN-2017-43 | refinedweb | 227 | 55.84 |
During one of my daily visits to CodeProject, I ran across an excellent article by Aprenot: A Generic - Reusable Diff Algorithm in C#. Aprenot’s method of locating the Longest Common Sequence of two sequential sets of objects works extremely well on small sets. As the sets get larger, the algorithm begins experiencing the constraints of reality.
At the heart of the algorithm is a table that stores the comparison results of every item in the first set to every item in the second set. Although this method always produces a perfect difference solution, it can require an enormous amount of CPU and memory. The author solved these issues by only comparing small portions of the two data sets at a time. However, this solution makes the assumption that the changes between the data sets are very close together, and reports inefficient results when the data sets are large with dispersed changes.
This article will present an algorithm based on the one presented by aprenot. The goals of the algorithm are as follows:
Given a data set of 100,000 items, with the need to compare it to a data set of similar size, you can quickly see the problem with using a table to hold the results. If each element in the result table was 1 bit wide, you would need over a Gigabyte of memory. To fill the table, you would need to execute 10 billion comparisons.
There are always two sets of items to compare. To help differentiate between the two sets, they will be called Source and Destination. The question we are usually asking is: What do we need to do to the Source list to make it look like the Destination list?
In order to maintain the generic aspects of the original algorithm, a generic structure is needed. I chose to make use of C#’s interface.
interface
public interface IDiffList
{
int Count();
IComparable GetByIndex(int index);
}
Both the Destination list and Source list must inherit IDiffList. It is assumed that the list is indexed from 0 to Count()-1. Just like the original article, the IComparable interface is used to compare the items between the lists.
IDiffList
Count()
IComparable
Included in the source code are two structures that make use of this interface; DiffList_TextFile and DiffList_BinaryFile. They both know how to load their respective files into memory and return their individual items as IComparable structures. The source code examples for these objects should be more than adequate for expanding the system to other object types. For example, the system can be easily expanded to compare rows between DataSets or directory structures between drives.
DiffList_TextFile
DiffList_BinaryFile
DataSet
The problem presented earlier is very similar to the differences in sorting algorithms. A shell sort is very quick to code but highly inefficient when the data set gets large. A quick sort is much more efficient on a large dataset. It breaks up the data into very smaller chunks using recursion.
The approach this algorithm takes is similar to that of a quick sort. It breaks up the data into very smaller chunks and processes those smaller chunks through recursion. The steps the algorithm takes are as follows:
These steps recursively repeat until there is no more data to process (or no more matches are found). At first glance, you should be able to easily understand the recursion logic. What needs further explanation is Step 1. How do we find the LMS without comparing everything to everything? Since this process is called recursively, won’t we end up re-comparing some items?
First, we need to define where to look for the LMS. The system needs to maintain some boundaries within the Source and Destination lists. This is done using simple integer indexes called destStart, destEnd, sourceStart and sourceEnd. At first, these will encompass the entire bounds of each list. Recursion will shrink their ranges with each call.
destStart
destEnd
sourceStart
sourceEnd
To find the LMS, we use brute force looping with some intelligent short circuits. I chose to loop through the Destination items and compare them to the available Source items. The pseudo code looks something like this:
For each Destination Item in Destination Range
Jump out if we can not mathematically find a longer match
Find the LMS for this destination item in the Source Range
If there is a match sequence
If it’s the longest one so far – store the result.
Jump over the Destination Items that are included in this sequence.
End if
Next For
If we have a good best match sequence
Store the match in a final match result list
If there is space left above the match in both
the Destination and Source Ranges
Recursively call function with the upper ranges
If there is space left above the match in both
The Destination and Source Ranges
Recursively call function with upper ranges
Else
There are no matches in this range so just drop out
End if
The Jumps are what gives this algorithm a lot of its speed. The first mathematical jump looks at the result of some simple math:
maxPossibleDestLength = (destEnd – destIndex) + 1;
if (maxPossibleDestLength <= curBestLength) break;
This formula calculates the theoretical best possible match the current Destination item can produce. If it is less than (or equal too) the current best match length then there is no reason to continue in the loop through the rest of the Destination range.
The second jump is more of a leap of faith. We ignore overlapping matching sequences by jumping over the destination indexes that are internal to a current match sequence, and therefore, cut way down on the number of comparisons. This is a really big speed enhancement. If the lists we are testing contain a lot of repetitive data, we may not come up with the perfect solution, but we will find a valid solution fast. I found in testing that I had to manually create a set of files to demonstrate the imperfect solution. In practice, it should be very rare. You can comment out this jump in the source code and run your own tests. I have to warn you that it will greatly increase the calculation time on large data sets.
There is a very good chance during recursion that the same Destination Item will need to be tested again. The only difference in the test will be the width of the Source Range. Since the goal of the algorithm is to lower the number of comparisons, we need to store the previous match results. DiffState is the structure that stores the result. It contains an index of the first matching item in the source range and a length so that we know how far the match goes for. DiffStateList stores the DiffStates by destination index. The loop simple requests the DiffState for a particular destination index, and DiffStateList either returns a pre-calculated one or a new uncalculated one. There is a simple test performed to see if the DiffState needs to be recalculated given the current Destination and source ranges. DiffState will also store a status of 'No Match' when appropriate.
DiffState
DiffStateList
If a DiffState needs to be recalculated, an algorithm similar to the one above is called.
For each Source Item in Source Range
Jump out if we can not mathematically find a longer match
Find the match length for the Destination Item
on the particular Source Index
If there is a match
If it’s the longest one so far – store the result
Jump over the Source Items that are included in the match
End if
End for
Store the longest sequence or mark as 'No Match'
The Jumps are again giving us more speed by avoiding unnecessary item comparisons. I found that the second jump cuts the run time speed by 2/3rds on large data sets. Finding the match length for a particular destination item at a particular source index is just the result of comparing the item lists in sequence at those points and returning the number of sequential matches.
After the algorithm has run its course, you will be left with an ArrayList valid match objects. I use an object called DiffResultSpan to store these matches. A quick sort of these objects will put them in the necessary sequential order. DiffResultSpan can also store the necessary delete, addition and replace states within the comparison.
ArrayList
DiffResultSpan
To build the final result ArrayList of ordered DiffResultSpans, we just loop through the matches filling in the blank (unmatched) indexes in between. We return the result as an ordered list of DiffResultSpans that each contains a DiffResultSpanStatus.
DiffResultSpanStatus
public enum DiffResultSpanStatus
{
NoChange, //matched
Replace,
DeleteSource,
AddDestination
}
public class DiffResultSpan : IComparable
{
private int _destIndex;
private int _sourceIndex;
private int _length;
private DiffResultSpanStatus _status;
public int DestIndex {get{return _destIndex;}}
public int SourceIndex {get{return _sourceIndex;}}
public int Length {get{return _length;}}
public DiffResultSpanStatus Status {get{return _status;}}
//.... other code removed for brevity
}
You can now process the ArrayList as is necessary for your application.
When using the algorithm described above, and when the data sets are completely different, it will compare every item in both sets before it finds out that there are no matches. This can be a very time consuming process on large datasets. On the other hand, if the data sets are equivalent, it will find this out in one iteration of the main loop.
Although it should always find a valid difference, there is a chance that the algorithm will not find the best answer. I have included a set of text files that demonstrate this weakness (source.txt and dest.txt). There is a sequence of 5 matches that is missed by the system because it is overlapped by a previous smaller sequence.
To help address the 2nd weakness above, three levels of optimization were added to the Diff Engine. Tests identified that large, highly redundant data produced extremely poor difference results. The following enum was added:
enum
public enum DiffEngineLevel
{
FastImperfect, //original level
Medium,
SlowPerfect
}
This can be passed to an additional ProcessDiff() method. The engine is still fully backward compatible and will default to the original Fast method. Only tests can identify if the Medium or SlowPerfect levels are necessary for your applications. The speed differences between the settings are quite large.
ProcessDiff()
Medium
SlowPerfect
The differences in these levels effect when or if we jump over sections of the data when we find existing match runs. If you are interested, you will find the changes in the ProcessRange() method. They start on line 107 of Engine.cs.
ProcessRange()
The project DiffCalc is the simple front-end used to test the algorithm. It is capable of doing a text or binary diff between two files.
The DLL project DifferenceEngine is where all the work is done.
The first line in Structures.cs is commented out. If you uncomment this line, you will lower the memory needs of the algorithm by using a HashTable instead of an allocated array to store the intermediate results. It does slow down the speed by a percent or two. I believe the decrease is due to the reflection that becomes necessary. I am hoping that the future .NET Generics will solve this issue.
HashTable
If you take a step back from the algorithm, you will see that it is essentially building the table described in aprenot's original article. It simply attempts to intelligently jump over large portions of the table. In fact, the more the two lists are the same, the quicker the algorithm will run. It also stores what it does calculate in a more efficient structure instead of rows & columns.
Hopefully, others will find good uses for this code. I am sure there are some more optimizations that will become apparent over time. Drop me a note when you find them.
This article, along with any associated source code and files, is licensed under A Public Domain dedication
{NoChange (Dest: 0,Source: 0) 1}
{Replace (Dest: 1,Source: 1)
{AddDestination (Dest: 2,Source: -1) 1}
ProcessRange(int destStart, int destEnd, int sourceStart, int sourceEnd)
int upperDestStart = curBestIndex + curBestLength;
int upperSourceStart = sourceIndex + curBestLength;
if (destEnd >= upperDestStart)
{
//we still have more upper dest data
if (sourceEnd >= upperSourceStart)
{
//set still have more upper source data
// Recursive call to process upper indexes
ProcessRange(upperDestStart,destEnd,upperSourceStart,sourceEnd);
}
}
{NoChange (Dest: 0,Source: 0) 1}
{AddDestination (Dest: 1,Source: -1) 1}
{NoChange (Dest: 2,Source: 1) 1}
{AddDestination (Dest: 0,Source: -1) 1}
{NoChange (Dest: 1,Source: 0) 3}
{DeleteSource (Dest: -1,Source: 3) 1}
private void ProcessRange(int destStart, int destEnd, int sourceStart, int sourceEnd)
{
int curBestIndex = -1;
int curBestLength = -1;
int maxPossibleDestLength = 0;
DiffState curItem = null;
DiffState bestItem = null;
for (int destIndex = destStart; destIndex <= destEnd; destIndex++)
{
maxPossibleDestLength = (destEnd - destIndex) + 1;
if (maxPossibleDestLength <= curBestLength)
{
//we won't find a longer one even if we looked
break;
}
curItem = _stateList.GetByIndex(destIndex);
if (!curItem.HasValidLength(sourceStart, sourceEnd, maxPossibleDestLength))
{
//recalc new best length since it isn't valid or has never been done.
GetLongestSourceMatch(curItem, destIndex, destEnd, sourceStart, sourceEnd);
}
if (curItem.Status == DiffStatus.Matched)
{
switch (_level)
{
case DiffEngineLevel.FastImperfect:
if (curItem.Length > curBestLength)
{
//this is longest match so far
curBestIndex = destIndex;
curBestLength = curItem.Length;
bestItem = curItem;
}
//Jump over the match
destIndex += curItem.Length - 1;
break;
case DiffEngineLevel.Medium:
if (curItem.Length > curBestLength)
{
//this is longest match so far
curBestIndex = destIndex;
curBestLength = curItem.Length;
bestItem = curItem;
//Jump over the match
destIndex += curItem.Length - 1;
}
break;
default:
if (curItem.Length > curBestLength)
{
//this is longest match so far
curBestIndex = destIndex;
curBestLength = curItem.Length;
bestItem = curItem;
}
break;
}
}
}
if (curBestIndex < 0)
{
//we are done - there are no matches in this span
}
else
{
int sourceIndex = bestItem.StartIndex;
_matchList.Add(DiffResultSpan.CreateNoChange(curBestIndex, sourceIndex, curBestLength));
if (destStart < curBestIndex)
{
//Still have more lower destination data
if (sourceStart < sourceIndex)
{
//Still have more lower source data
// Recursive call to process lower indexes
ProcessRange(destStart, curBestIndex - 1, sourceStart, sourceIndex - 1);
}
}
int upperDestStart = curBestIndex + curBestLength;
int upperSourceStart = sourceIndex + curBestLength;
if (destEnd > upperDestStart)
{
//we still have more upper dest data
// Original: if (sourceEnd > upperSourceStart)
// Original replaced because it misses comparison if lines are added just before the last line of the file.
if (sourceEnd > upperSourceStart - 1)
{
//set still have more upper source data
// Recursive call to process upper indexes
ProcessRange(upperDestStart, destEnd, upperSourceStart, sourceEnd);
}
}
}
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/6943/A-Generic-Reusable-Diff-Algorithm-in-C-II?fid=42386&df=10000&mpp=50&noise=5&prof=True&sort=Position&view=None&spc=Relaxed | CC-MAIN-2014-35 | refinedweb | 2,407 | 60.75 |
Hello guys,
I am new to Docutils so I don't really know what I am doing yet.
However, my aim is to read the bibliographic fields out of a
reStructuredText document into a Python dictionary. I achieved it by
just poking at it (see code below), but this seems a bit meandrous; what
is the correct way to do this, please? Cheers,
Best Wishes,
Zeth
"""Trying to use docutils."""
import docutils.core
def get_docinfo(document):
"""Get the bibliographic fields out of the reStructuredText document."""
docinfo = {}
for i in [x for x in docutils.core.publish_doctree(document).children \
if x.tagname == 'docinfo'][0].children:
docinfo[i.tagname] = str(i.children[0])
return docinfo
def main():
"""Little example."""
webpage = '';
import urllib2
document = ''.join(urllib2.urlopen(webpage).readlines())
return get_docinfo(document)
# start the ball rolling
if __name__ == "__main__":
print main()
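For reference, a slightly tidier version of the same idea: iterate the doctree's children directly and use each field node's `astext()` method instead of stringifying its first child. This is a sketch against a standard Docutils install; the dictionary keys are whatever bibliographic field names Docutils recognizes (`author`, `date`, `version`, and so on):

```python
import docutils.core
import docutils.nodes


def get_docinfo(document):
    """Collect the bibliographic fields of a reST source string into a dict."""
    doctree = docutils.core.publish_doctree(document)
    docinfo = {}
    for node in doctree.children:
        if isinstance(node, docutils.nodes.docinfo):
            # Each child of <docinfo> is one bibliographic field node,
            # e.g. <author>, <date>, <version>.
            for field in node:
                docinfo[field.tagname] = field.astext()
    return docinfo
```

The `isinstance` check against `docutils.nodes.docinfo` avoids the list-comprehension indexing, and `astext()` returns the field's text content without the surrounding node markup.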
[quoting with permission]
On Thu, Jun 5, 2008 at 2:32 PM, Jon Rosen <joncrosen@...> wrote:
> Similarly, the "image" directive seems to require my inserting the whole URL
> into the table cell. This too seems to necessitate creating extra-wide
> table columns to accommodate it. I want a table with a column or cell that
> tightly fits the image. How would I do that? (I.e., Is there a way to
> minimize the margin around images in tables.)
>
> Thanks again,
> Jon
There are lots of ways to build tables in Docutils/reST. Tables are
tricky, and Docutils doesn't give you as much control as, say, raw
HTML. If you need that level of control, reST may not be the right
thing for you.
There are two direct syntaxes for tables: grid tables & simple tables
(see).
Grid tables allow for funky layouts, but are hard to "draw", and if
you have long URLs etc. you do have to allow space for them. There is
a trick you can use with the image directive though. You can break up
a URL onto multiple lines:
.. image:: http://
   example.org/
   very/very/very/
   long/path/to/
   image.png
The directive will join them back together (stripping spaces). See for
details.
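A quick way to see this joining behaviour is to parse a split URL and inspect the resulting image node's `uri` attribute. This is a sketch assuming a standard Docutils install; the lone directive ends up as the document's first child:

```python
from docutils.core import publish_doctree

# The URL is deliberately split across lines; the directive rejoins
# its argument lines with all whitespace stripped.
source = """\
.. image:: http://
   example.org/
   long/path/to/
   image.png
"""

doctree = publish_doctree(source)
image = doctree[0]   # the lone <image> node under <document>
print(image['uri'])  # -> http://example.org/long/path/to/image.png
```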
For simple tables, the trick above works, except in the first column
(where entries may not have multiple lines; see).
Another trick possible in simple tables: in the last (rightmost)
column, the cell content can extend beyond the right edge:
====  ======
one   .. image::
====  ======
The rendered table's column widths will keep the relative widths of
the top borders.
If that won't do it for you, there are ways to define tables
indirectly, with CSV data or lists. The "list-table" directive is
simplest; I recommend you give it a try. (See
&).
A different approach that might appeal to you is to define your images
as substitutions, and use substitution references in your tables:
.. |biohazard| image:: biohazard.png

===========  =========================
Symbol       Definition
===========  =========================
|biohazard|  Nasty stuff; don't touch.
===========  =========================
See
Hope this helps.
-- David
On Thu, Jun 5, 2008 at 10:50 AM, Darren Dale <darren.dale@...> wrote:
> I think this list is what you are looking for:
> sphinx-dev <sphinx-dev@...>
Thanks, I've moved the conversation over there.
JDH
Hi John,
On Thursday 05 June 2008 11:27:52 am John Hunter wrote:
> If this is not the correct list to post sphinx questions on, please
> let me know ..
I think this list is what you are looking for:
sphinx-dev <sphinx-dev@...>
Darren
If this is not the correct list to post sphinx questions on, please
let me know ..
We are using sphinx 0.3 from svn r63955 to build the matplotlib
documentation. We have some api documentation files which use sphinx
autodoc and so are quite short (i.e. see artist_api.rst below). When we
use sphinx to build our documentation, we occasionally get errors or
warnings because the rest in one of the included automod module
docstrings is malformed. Eg::
reading... api/artist_api api/index api/pyplot_api
devel/add_new_projection devel/coding_guide devel/documenting_mpl
devel/index devel/transformations faq/howto_faq faq/index
faq/installing_faq faq/troubleshooting_faq index users/artists
users/customizing users/event_handling users/index users/intro
users/mathtext users/navigation_toolbar users/pyplot_tutorial
WARNING: /home/titan/johnh/python/svn/matplotlib.trunk/matplotlib/doc/api/artist_api.rst:110:
(ERROR/3) Malformed table.
No bottom table border found.
================= ==============================================
WARNING: /home/titan/johnh/python/svn/matplotlib.trunk/matplotlib/doc/api/artist_api.rst:111:
(WARNING/2) Block quote ends without a blank line; unexpected
unindent.
WARNING: /home/titan/johnh/python/svn/matplotlib.trunk/matplotlib/doc/api/artist_api.rst:247:
(ERROR/3) Malformed table.
No bottom table border found.
================= ==============================================
WARNING: /home/titan/johnh/python/svn/matplotlib.trunk/matplotlib/doc/api/artist_api.rst:248:
(WARNING/2) Block quote ends without a blank line; unexpected
unindent.
The problem is, artist_api.rst has no lines 111, 247 or 248. I assume
these numbers are some rest file sphinx is creating, perhaps located
in build/doctrees/api/artist_api.doctree (?). But these warnings are
not too helpful to us in finding the actual part of our documentation
that is causing the trouble.
If we could get more verbose output, either giving us the actual line
number of the python module generating the problem, or more verbose
context from the module docs in which the strings are occurring it
would help us debug these problems more efficiently. Is such a thing
currently possible, or could it be added?
Thanks,
John Hunter
The artist_api.rst file:
******************
matplotlib artists
******************
:mod:`matplotlib.artist`
=============================
.. automodule:: matplotlib.artist
   :undoc-members:
:mod:`matplotlib.lines`
=============================
.. automodule:: matplotlib.lines
   :undoc-members:
:mod:`matplotlib.patches`
=============================
.. automodule:: matplotlib.patches
   :undoc-members:
:mod:`matplotlib.text`
=============================
.. automodule:: matplotlib.text
   :undoc-members:
On 06/26/2012 02:10 PM, Eric Dumazet wrote:
> On Tue, 2012-06-26 at 14:00 +0800, Jason Wang wrote:
>> Yes, looks like it's hard to use NETIF_F_LLTX without breaking the u64
>> statistics, may worth to use tx lock and alloc_netdev_mq().
>
> Yes, this probably needs percpu storage (if you really want to use
> include/linux/u64_stats_sync.h).
>
> But percpu storage seems a bit overkill with a raising number of cpus
> on typical machines.
>
> For loopback device, its fine because we only have one lo device per
> network namespace, and some workloads really hit hard this device.
>
> But for tuntap, I am not sure ?

The problem is that we want to collect per-queue statistics. So if we
convert tuntap to use alloc_netdev_mq(), the tx statistics would be
updated under tx lock which looks safe.
E-S4L

> On Aug 1, 2014, at 9:12 AM, "Wolfgang Laun wolfgang.laun@xxxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> On 01/08/2014, L2L 2L emanuelallen@xxxxxxxxxxx
> <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>> I have ask on many different forums; visionzone, sitepoint, oreilly, even
>> ask an author of a book, but receive no reply. I have even try
>> stackexchange. This doesn't make sense....
>>
>> ----question 2-------
>>
>> Is this legal:
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>> <xs:schema
>> <xs:annotation>
>> <xs:documentation> I'm warping a name simpleType name nameType in an element
>> that have the attribute type with the value of nameType. Is this legal; to
>> warp a name type as so: </xs:documentation>
>> </xs:annotation>
>> <xs:element
>> <xs:simpleType
>> <xs:restriction
>> <xs:length
>> </xs:restriction>
>> </xs:simplyType>
>> </xs:element>
>> </xs:schema>
>
> (Indentation is the politeness of list users.)
>
> No it is NOT - Element 'name' has both a 'type' attribute and an
> 'anonymous type' child. Only one of these is allowed for an element.
>
> Use this:
>
> <xs:element
> <xs:simpleType
> <xs:restriction
> <xs:length
> </xs:restriction>
> </xs:simpleType>
>
>> -----question 3--------
>>
>> What are extension and restriction type?
>
> Perhaps risk a short look on the XML Schema spec:
>
> [Definition:] A complex type definition which allows element or
> attribute content in addition to that allowed by another specified
> type definition is said to be an extension.
>
> [Definition:] A type defined with the same constraints as its 'base
> type definition', or with more, is said to be a restriction. The
> added constraints might include narrowed ranges or reduced
> alternatives. Given two types A and B, if the definition of A is a
> 'restriction' of the definition of B, then members of type A are
> always locally valid against type B as well.
>
>> ----question 4---------
>>
>> First question:
>
> (Actually the 4th.)
>
>> Can I place an extension on a restriction type?
>>
>> And if so, how? Via what method? Directly? Via reference? Both?
>>
>> And if so for both, what other ways?
>
> The excellent Primer on XML Schema
> contains numerous examples for both.
>
>> Second question:
>
> (Actually it's the 5th.)
>
>> targetNameSpace is to set a name space for the xml element in my schema? yes
>> or no then an explanation comment please.
>
> This isn't well formulated. If you mean "for the XML elements defined
> by <xs:element>" then the answer is: "yes, but not only for these,
> also for <xs:complexType>, <xs:simpleType> and other Schema
> components defined and named (!) in that XML schema.
>
>> a
>>.
>
> No, you need not use targetNameSpace. You may run into difficulties if
> you try to use different XML schema definitions without target namespaces
> but with conflicting definitions.
>
> -W
>
>> informational feedback please anyone!
>>
>> E-S4L

Where's the link button W? WHERE'S LIKE BUTTON W!!!!?????
…an interesting experiment to undertake.
# Single-file components
Web developers who know the Progressive Enhancement term are also aware of the “separation of layers” mantra. In the case of components, nothing changes. In fact, there are even more layers, as now every component has at least 3 layers: content/template, presentation, and behavior. If you use the most conservative approach, every component will be divided into at least 3 files, e.g. a `Button` component could look like this:
```
Button/
|-- Button.html
|-- Button.css
|-- Button.js
```
In such an approach the separation of layers is equal to the separation of technologies (content/template: HTML, presentation: CSS, behavior: JavaScript). If you do not use any build tool this means that the browser will have to fetch all 3 files. Therefore, an idea appeared to preserve the separation of layers but without the separation of technologies. And thus single-file components were born.
Generally, I am quite skeptical about the “separation of technologies”. It comes from the fact that it is often used as an argument for abandoning the separation of layers — and these two things are actually totally separated.
The `Button` component as a single file would look like this:
```html
<template>
	<!-- Button.html contents go here. -->
</template>

<style>
	/* Button.css contents go here. */
</style>

<script>
	// Button.js contents go here.
</script>
```
It is clearly visible that a single-file component is just Good Old HTML™ with internal styles and scripts + the `<template>` tag. Thanks to the approach that uses the simplest methods, you get a web component that has a strong separation of layers (content/template: `<template>`, presentation: `<style>`, behavior: `<script>`) without the need to create a separate file for every layer.
Yet the most important question remains: How do I use it?
# Fundamental concepts
Start by creating a `loadComponent()` global function that will be used to load the component.
```javascript
window.loadComponent = ( function() {
	function loadComponent( URL ) {}

	return loadComponent;
}() );
```
I used the module pattern here. It allows you to define all necessary helper functions but exposes only the `loadComponent()` function to the outer scope. For now, this function does nothing.
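As a side note, the pattern itself is easy to see in isolation (the `greet` example below is mine, not part of the loader): helpers defined inside the IIFE stay private, and only the returned function is exposed.

```javascript
// Minimal illustration of the module pattern used by the loader.
// The capitalize() helper stays private; only greet() is exposed.
const greet = ( function() {
	function capitalize( text ) {
		return text[ 0 ].toUpperCase() + text.slice( 1 );
	}

	function greet( name ) {
		return `Hello, ${ capitalize( name ) }!`;
	}

	return greet;
}() );

console.log( greet( 'world' ) ); // → "Hello, World!"
console.log( typeof capitalize ); // → "undefined" – the helper did not leak
```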
And this is a good thing as you do not have anything to be loaded yet. For the purpose of this article you may want to create a `<hello-world>` component that will display:
```html
<template>
	<div class="hello">
		<p>Hello, world! My name is <slot></slot>.</p>
	</div>
</template>

<style>
	div {
		background: red;
		border-radius: 30px;
		padding: 20px;
		font-size: 20px;
		text-align: center;
		width: 300px;
		margin: 0 auto;
	}
</style>

<script></script>
```
For now, you have not added any behavior for it. You only defined its template and styles. Using the `div` selector without any restrictions and the appearance of the `<slot>` element suggest that the component will be using Shadow DOM. And it is true: all styles and the template by default will be hidden in shadows.
The use of the component on the website should be as simple as possible:
```html
<hello-world>Comandeer</hello-world>

<script src="loader.js"></script>
<script>
	loadComponent( 'HelloWorld.wc' );
</script>
```
You work with the component just like with any normal HTML element!
# Basic loader
If you want to load the data from an external file, you need to use immortal Ajax. But since it is already year 2020, you can use Ajax in the form of Fetch API:
```javascript
function loadComponent( URL ) {
	return fetch( URL );
}
```
Amazing! However, at the moment you only fetch the file, doing nothing with it. The best option to get its content is to convert the response to text:
```javascript
function loadComponent( URL ) {
	return fetch( URL ).then( ( response ) => {
		return response.text();
	} );
}
```
As `loadComponent()` now returns the result of the `fetch()` function, it returns a `Promise`. You can use this knowledge to check if the content of the component was really loaded and whether it was converted to text:
```javascript
loadComponent( 'HelloWorld.wc' ).then( ( component ) => {
	console.log( component );
} );
```
It works!
# Parsing the response
However, the text itself does not do much. Parse it to convert the component into some DOM:
```javascript
return fetch( URL ).then( ( response ) => {
	return response.text();
} ).then( ( html ) => {
	const parser = new DOMParser(); // 1

	return parser.parseFromString( html, 'text/html' ); // 2
} );
```
First, you create an instance of the parser (1), then you parse the text content of the component (2). It is worth noting that you use the HTML mode (`'text/html'`). If you wanted the code to comply better with the JSX standard or original Vue.js components, you would use the XML mode (`'text/xml'`). However, in such a case you would need to change the structure of the component itself (e.g. add the main element which will hold every other one).
If you now check what `loadComponent()` returns, you will see that it is a complete DOM tree.
And by saying “complete” I mean really complete. You have got a complete HTML document with the `<head>` and `<body>` elements.
As you can see, the contents of the component landed inside the `<head>`. This is caused by the way in which the HTML parser works. The algorithm of building the DOM tree is described in detail in the HTML specification. Knowing this, refactor the code and extract a dedicated `fetchAndParse()` function:
```javascript
window.loadComponent = ( function() {
	function fetchAndParse( URL ) {
		return fetch( URL ).then( ( response ) => {
			return response.text();
		} ).then( ( html ) => {
			const parser = new DOMParser();

			return parser.parseFromString( html, 'text/html' );
		} );
	}

	function loadComponent( URL ) {
		return fetchAndParse( URL );
	}

	return loadComponent;
}() );
```
Fetch API is not the only way to get a DOM tree of an external document.
`XMLHttpRequest` has a dedicated `document` mode that allows you to omit the entire parsing step. However, there is one drawback: `XMLHttpRequest` does not have a `Promise`-based API, which you would need to add by yourself.
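Such a wrapper could look roughly like this (my sketch, not code from the article; the `XHRClass` parameter is only there so the function can be exercised with a stub outside the browser):

```javascript
// A hypothetical Promise wrapper around XMLHttpRequest's 'document' mode.
// Passing the XHR constructor in makes the wrapper easy to stub in tests.
function fetchDocument( URL, XHRClass = XMLHttpRequest ) {
	return new Promise( ( resolve, reject ) => {
		const xhr = new XHRClass();

		xhr.open( 'GET', URL );
		xhr.responseType = 'document'; // the browser parses the response for us
		xhr.addEventListener( 'load', () => resolve( xhr.response ) );
		xhr.addEventListener( 'error', reject );
		xhr.send();
	} );
}
```

In the loader, `fetchDocument( URL )` would replace the whole fetch-and-`DOMParser` pair, because the browser hands back an already parsed document.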
# Registering the component
Since you have all the needed parts available, create the `registerComponent()` function which will be used to register the new Custom Element:
```javascript
window.loadComponent = ( function() {
	function fetchAndParse( URL ) {
		[…]
	}

	function registerComponent() {
	}

	function loadComponent( URL ) {
		return fetchAndParse( URL ).then( registerComponent );
	}

	return loadComponent;
}() );
```
Just as a reminder: a Custom Element must be a class inheriting from `HTMLElement`. Additionally, every component will use Shadow DOM that will store styles and template content. This means that every component will use the same class. Create it now:
```javascript
function registerComponent( { template, style, script } ) {
	class UnityComponent extends HTMLElement {
		connectedCallback() {
			this._upcast();
		}

		_upcast() {
			const shadow = this.attachShadow( { mode: 'open' } );

			shadow.appendChild( style.cloneNode( true ) );
			shadow.appendChild( document.importNode( template.content, true ) );
		}
	}
}
```
You should also register the new element in the current page's custom elements registry:
```javascript
function registerComponent( { template, style, script } ) {
	class UnityComponent extends HTMLElement {
		[...]
	}

	return customElements.define( 'hello-world', UnityComponent );
}
```
If you try to use the component now, it should work:
# Fetching the script’s content
The simple part is done. Now it is time to handle the component's script. Extend HelloWorld.wc:
```html
<template>
	[…]
</template>

<style>
	[…]
</style>

<script>
export default { // 1
	name: 'hello-world', // 2
	onClick() { // 3
		alert( `Don't touch me!` );
	}
}
</script>
```
import statement assumes that it gets a module identifier. Most often it is a URL to the file containing the code. In, in this case, it looks like an overkill.
# Data URI and Object URI
Data URI is an older and more primitive approach. It is based on converting the file content into a URL by trimming unnecessary whitespace and then, optionally, encoding everything using Base64. Assuming that you have such a simple JavaScript file:
```javascript
export default true;
```
It would look like this as Data URI:
```
data:application/javascript;base64,ZXhwb3J0IGRlZmF1bHQgdHJ1ZTs=
```
You can use this URL just like a reference to a normal file:
```javascript
import test from 'data:application/javascript;base64,ZXhwb3J0IGRlZmF1bHQgdHJ1ZTs=';

console.log( test );
```
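The conversion is mechanical enough to wrap in a helper. A sketch (mine, not part of the article's loader; the `Buffer` branch is a Node.js fallback for environments without `btoa`):

```javascript
// Builds a Base64-encoded Data URI for a JavaScript module source.
function toDataURI( source ) {
	const base64 = typeof btoa === 'function'
		? btoa( source ) // browsers
		: Buffer.from( source, 'binary' ).toString( 'base64' ); // Node.js fallback

	return `data:application/javascript;base64,${ base64 }`;
}

console.log( toDataURI( 'export default true;' ) );
// → data:application/javascript;base64,ZXhwb3J0IGRlZmF1bHQgdHJ1ZTs=
```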
However, the biggest drawback of Data URI becomes visible quite fast: as the JavaScript file is getting bigger, the URL becomes longer. It is also quite hard to put binary data into Data URI in a sensible way.
This is why Object URI was created. It is a descendant of several standards, including the File API and the HTML5 drag&drop mechanism. You can also create such files by hand, using the `File` and `Blob` classes. In this case use the `Blob` class: you will put the contents of the module into it and then convert it into an Object URI:
```javascript
const myJSFile = new Blob( [ 'export default true;' ], { type: 'application/javascript' } );
const myJSURL = URL.createObjectURL( myJSFile );

console.log( myJSURL ); // blob:
```
# Dynamic import
There is one more issue, though: the `import` statement does not accept a variable as a module identifier. This means that even though you have a method to convert the module into a “file”, you are still not able to import it. So, defeat after all?
Not exactly. This issue was noticed long ago and the dynamic import proposal was created. It is a part of the ES2020 standard and it is already implemented in Chrome, Firefox, Safari, and Node.js 13.x. Using a variable as a module identifier alongside a dynamic import is no longer an issue:
```javascript
const myJSFile = new Blob( [ 'export default true;' ], { type: 'application/javascript' } );
const myJSURL = URL.createObjectURL( myJSFile );

import( myJSURL ).then( ( module ) => {
	console.log( module.default ); // true
} );
```
As you can see, `import()` is used like a function and it returns a `Promise`, which gets an object representing the module. It contains all declared exports, with the default export under the `default` key.
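You can observe that shape without any component machinery by importing a module passed inline as a `data:` URL (a standalone snippet of mine; `data:` imports work in modern browsers and in Node.js):

```javascript
// Dynamic import of an inline module; the resolved value is the module
// namespace object, with the default export under the `default` key.
const source = 'export default 42; export const who = "module";';
const moduleURL = 'data:text/javascript,' + encodeURIComponent( source );

import( moduleURL ).then( ( module ) => {
	console.log( module.default ); // → 42
	console.log( module.who ); // → "module"
} );
```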
# Implementation
You already know what you have to do, so you just need to do it. Add the next helper function, `getSettings()`. You will fire it before `registerComponent()` and get all necessary information from the script:
```javascript
function getSettings( { template, style, script } ) {
	return { template, style, script };
}

[...]

function loadComponent( URL ) {
	return fetchAndParse( URL ).then( getSettings ).then( registerComponent );
}
```
For now, this function just returns all passed arguments. Add the entire logic that was described above. First, convert the script into an Object URI:
```javascript
const jsFile = new Blob( [ script.textContent ], { type: 'application/javascript' } );
const jsURL = URL.createObjectURL( jsFile );
```
Next, load it via `import` and return the template, styles and component's name received from `<script>`:
```javascript
return import( jsURL ).then( ( module ) => {
	return {
		name: module.default.name,
		template,
		style
	}
} );
```
Thanks to this, `registerComponent()` still gets 3 parameters, but instead of `script` it now gets `name`. Correct the code:
```javascript
function registerComponent( { template, style, name } ) {
	class UnityComponent extends HTMLElement {
		[...]
	}

	return customElements.define( name, UnityComponent );
}
```
Voilà!
# Layer of behavior
There is one part of the component left: behavior, so handling events. At the moment you only get the component's name in the `getSettings()` function, but you should also get event listeners. You can use the `Object.entries()` method for that. Return to `getSettings()` and add appropriate code:
```javascript
function getSettings( { template, style, script } ) {
	[...]

	function getListeners( settings ) { // 1
		const listeners = {};

		Object.entries( settings ).forEach( ( [ setting, value ] ) => { // 3
			if ( setting.startsWith( 'on' ) ) { // 4
				listeners[ setting[ 2 ].toLowerCase() + setting.substr( 3 ) ] = value; // 5
			}
		} );

		return listeners;
	}

	return import( jsURL ).then( ( module ) => {
		const listeners = getListeners( module.default ); // 2

		return {
			name: module.default.name,
			listeners, // 6
			template,
			style
		}
	} );
}
```

If such an imperative approach is not to your taste, you can rewrite `getListeners()` in a more declarative way, using `reduce()`:
```javascript
function getListeners( settings ) {
	return Object.entries( settings ).reduce( ( listeners, [ setting, value ] ) => {
		if ( setting.startsWith( 'on' ) ) {
			listeners[ setting[ 2 ].toLowerCase() + setting.substr( 3 ) ] = value;
		}

		return listeners;
	}, {} );
}
```
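To see what the mapping actually produces, run `getListeners()` on a sample settings object (the sample values below are mine):

```javascript
// The same mapping as above, applied to a standalone sample object:
// every `onXyz` key becomes an `xyz` event name, other keys are skipped.
function getListeners( settings ) {
	return Object.entries( settings ).reduce( ( listeners, [ setting, value ] ) => {
		if ( setting.startsWith( 'on' ) ) {
			listeners[ setting[ 2 ].toLowerCase() + setting.substr( 3 ) ] = value;
		}

		return listeners;
	}, {} );
}

const sample = {
	name: 'hello-world',
	onClick() {},
	onInput() {}
};

console.log( Object.keys( getListeners( sample ) ) ); // → [ 'click', 'input' ]
```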
Now you can bind the listeners inside the component’s class:
```javascript
function registerComponent( { template, style, name, listeners } ) { // 1
	class UnityComponent extends HTMLElement {
		connectedCallback() {
			this._upcast();
			this._attachListeners(); // 2
		}

		[...]

		_attachListeners() {
			Object.entries( listeners ).forEach( ( [ event, listener ] ) => { // 3
				this.addEventListener( event, listener, false ); // 4
			} );
		}
	}

	return customElements.define( name, UnityComponent );
}
```

And that is it: every instance of the component now gets the listeners declared in its script attached automatically!
# Browser compatibility and the rest of the summary
As you can see, a lot of work went into creating even a basic form of support for single-file web components. Many parts of the described system are created using dirty hacks (Object URIs for loading ES modules — FTW!) and the technique itself seems to make little sense without native support from the browsers. However, the entire magic from the article works correctly in all major browsers: Chrome, Firefox, and Safari!
Still, creating something like this was great fun. It was something different that touched many areas of browser development and modern web standards.
Of course, the whole thing is available online. | https://ckeditor.com/blog/implementing-single-file-web-components/ | CC-MAIN-2020-50 | refinedweb | 1,954 | 57.87 |
The New .NET Multi-platform App UI.
I think instead of rushing to make things more complex, Microsoft should focus on solving Xamarin.Forms platform problems. There are issues not resolved for 3 years now, it's not funny. Needing custom renderers as the only solution (hack) to even some basic problems/requirements is not that productive at all. At this time there are 2.8k issues on GitHub.
absolutely
Where are you seeing additional complexity?
We are moving forward with the same set of controls and layouts, not adding new features so we can focus as well on the fundamentals, all while reducing the complexity of tight coupling of renderer to framework and the heavy reliance on custom renderers and effects. Perhaps review the architecture of the mappers/handlers. There are several presentations showcasing this on YouTube.
Also, don’t underestimate the value of unification with .NET 6 in terms of a consistent BCL, SDK style projects, and more.
For those of us who are not Forms developers and do everything in Xamarin Native, can we just happily continue to work like that with using .Net 6 onwards and have our Core, iOS and Android projects using Xamarin Native directly and ignore MAUI?
From what I understand Xamarin Native is being merged into .NET itself, and then MAUI builds on top of .NET in the same way Xamarin.Forms builds on top of Xamarin Native. So your migration paths are:
netstandard -> net6.0
Xamarin.iOS -> net6.0-ios
Xamarin.Android -> net6.0-android
Xamarin.Forms -> MAUI
I guess my concern is I feel there’s been a lack of clarity from the Xamarin/Microsoft team itself over its commitment to Xamarin Native on its own. It’s always about Forms/MAUI.
My worry is that sometime in the future there will be restrictions on how you use netX.0-ios and netX.0-android that require you to buy in to MAUI framework.
I’m yet to see an article, announcement or comment from Microsoft that basically says, ‘MAUI will always be optional’
Clearly MAUI will be optional because it depends on Xamarin.Android/Mac/iOS and WPF/UWP/etc. to deploy to platforms, just as Xamarin.Forms does.
Xamarin.Native isn’t being merged into the .Net runtime (which wouldn’t make any sense) but MonoVM has been merged in, and you can see in the Xamarin.Android and Xamarin.MacIOS repos a lot of work to update to dotnet6 which will bring huge improvements.
I have seen that the GitHub issue related to the wish to align the XAML dialects was closed with the reason that a survey has shown that the majority of developers wish to not align it. I have never seen any survey.
Too bad this was the only chance to align the dialects, once and for all for easier transition from/to WinUI/WPF. For devs coming from Xamarin it’s a breaking change anyway, namespaces need to be adjusted at least.
But with this decision now settled, the difference in the dialects will stay forever… unfortunately this is really disappointing.
Agreed on this one, I never saw such a survey either and it’s very disappointing.
I don’t mean to insult the efforts of the hard-working people at Microsoft, but this continued scatter brained, halfhearted approach to solving the problems with XAML’s messy eco-system are not very reassuring. I think a lot of people would appreciate it if you could do something far more ambitious instead of just making a new Xamarin. For example, a lot of the Windows specific developers seem to still be yearning for a true successor to WPF that can handle the cross-platform dream (I am not one of those, but I would like to see WPF get an improved successor or face lift). Yes, there is Avalonia, but that isn’t an official MS product, so I do not see the buy in level being that strong 🙁
It’s not closed; there is such an issue in MAUI discussions. When it comes to XAML they should just leave it to the community. The core of MAUI/Xamarin is .NET types and XAML is just one projection of that. But if the types don’t match (and why should they), then you’d be very unlikely to want the XAML to match.
The customer research informing our plans for XAML features and control naming is more than a single survey. It’s 7 years of platform feedback, 1:1 developer and customer interviews, market research, and (yes) multiple surveys.
The basic idea is absolutely correct; what I just think is a shame is the timing. For 7 years, the Xamarin team has barely lifted a finger to really expand the platform. Of course there were always little things, but without them the community would escalate. I think it’s a shame that React Native and Flutter were necessary before you realized that you were on a sinking ship.
The whole thing was actually promised with .NET 5. A uniform .NET. The whole thing was postponed a year, which can be done, but then please use the time for more meaningful things.
New control elements should not come with the version. One of the biggest problems with XF is its limitation. Furthermore, the complexity and working method should be questioned.
A piece of advice from me that comes from deep in the heart. Put your project aside. All of you developers. Download Flutter and learn. Then you will know exactly what is still missing and that this gap cannot be closed with MAUI. Much more is needed. Xamarin is losing more and more parts of the community to other platforms, and rightly so.
Agree, sad that it’s true
Or react native! Proven.
By deciding to keep a fork in XAML you just made Uno more attractive to those of us on Windows wanting to go cross-platform with .NET.
I really don’t get modern Microsoft, where the cross-platform offerings for C++ and .NET offered by third parties are more welcoming to us.
It is hard to keep believing when we keep being told to just rewrite everything.
If a Windows-centric XAML dialect is a key factor for you, then it makes sense that Uno Platform would be attractive.
What are you anticipating rewriting? Xamarin and Xamarin.Forms -> .NET MAUI does not require a rewrite.
To be a true multiplatform framework, Linux support is necessary and, above all, support for generating a web application. Regarding this last point, I am referring to generating an application that works within a browser using WebAssembly, without using HTML or CSS, only using XAML and C#. Companies that have been working with desktop technologies for years (WinForms, WPF, UWP, etc.) do not want to have to use/learn any of the web-related technologies (HTML, CSS, JavaScript, etc.) because this means more investment and learning time for us and very inefficient development team management, because I cannot reuse the knowledge of backend programmers on the frontend and vice versa. The option to use Mobile Blazor Bindings is not the most optimal because HTML and CSS have to be used in the view, and also Razor. Ideally the view would use XAML, C# and the MVVM pattern. In short, .NET MAUI should do the same thing that the UNO Platform does.
Linux and Web are on our radar even though you don’t see them here in our .NET 6 investment. We are actively gathering specific customer plans around those platforms, so please email me additional details so we keep them in mind. david.ortinau@microsoft.com
I’ve written you some details about our company software and expectations regarding MAUI.
Email was sent more than 2 days ago – still no response…
Curious how far out on the radar is Linux GUI support? I haven’t seen anything in the planning. Being able to write cross-platform desktop applications is huge and would allow C# to start displacing Java on that platform.
Still nothing about web. Without it this is nothing but a curiosity.
Please shoot me some details about your needs for web in this context so we can keep your scenarios in mind as we plan beyond .NET 6. david.ortinau@microsoft.com
Comet MVU +1
This is a brilliant strategy and I’m looking forward to testing previews. Will the best migration for an XF app be: 1. migrate to dotnet6 in XF, then later migrate to MAUI, or 2. migrate both at once?
I suggest you prep your Xamarin.Forms app to be on the latest 5.x, make sure you’re not using deprecated APIs, and where possible replace custom renderers you don’t need. Then upgrade all to .NET MAUI and 6.
There will be no supported Xamarin.Forms on .NET 6. That’ll still be Framework for a year.
I am glad to see the progression in multiplatform .NET app development with MAUI and .NET 6.
The only weak point that stands out to me is the rapid application development (RAD) capabilities. How is it that in 2021 Microsoft cannot even come close to the productivity with a visual designer for XAML (or MAUI) that was a part of Microsoft development 30 years ago. How could Alan Cooper and his small team provide a drag-n-drop designer and all the brains at MS today cannot?
The concept is still the same. Widgets drag and dropped on widgets hierarchically with properties and events. Whether WinForms, WPF, XAML, or whatever, the principles are still the same. The widgets are no more complex today than then in the context of what is designed on a surface. In fact, one visual designer can be used and coded for the abstract UI elements, with the setting of which UI type determining the specifics.
I’ve been around long enough, and worked on enough complex projects to know that with the right small team, a real visual designer could be ready in a year – disposing of the half-hearted and brittle attempts at partial functionality like Hot Reload.
The UI development time for an app using RAD is much faster than hand-coding the UI. Less time and fewer bugs. That advantage sells. It once allowed Visual Basic to dominate the Windows programming world. Bringing back a real RAD visual designer for all UI design in Visual Studio would catapult the C# and .NET development environment even farther ahead – not just on Windows, but on al supported OSs.
Read how Alan Cooper did it here, if you are interested.
A drag and drop designer has a lot of drawbacks. Imagine how much cleaning up would need to be done after someone creates a UI this way. And how much maintenance needed, including versioning dependencies, and how this would slow down the progress of the platform.
Hot reload would still be needed because 1. you would still spend a lot of time hand-editing markup, even if it’s only to clean up the results of the GUI, and 2. you still need hot reload for people who don’t use a GUI and people who don’t use markup languages to define UI.
Charles is right. Even if some designers can make the work easier, programming by hand is always better for high-performance products. HotReload is a must-have here. Currently you have to wait at least 30 seconds for every small change, which can even take 1-2 minutes, depending on the performance of your hardware.
You always have to distinguish whether easier operation also makes sense. Always consider the usability-performance curve. In any case, there are a lot of points that still need to be improved, as development with XF is currently much more difficult and laborious than with other platforms. That should actually be the first point to work out.
Jeff you nailed it. The lack of a good/simple visual designer contributed to the slow uptake of UWP. I simply don’t understand how 30+ years after Visual Basic Forms Microsoft can’t come up with a compelling and modern UI designer that abstracts and hides the complexities of XAML. One UI designer to rule them all.
Choose a template (i.e. mobile, desktop, hybrid) and off you go. Drag and drop from a UI control palette, set properties and respond to events. At the end of the day the UI boils down to a grid system of pixels, where the UI design environment should give you the tools/controls/widgets to paint your Picasso.
The notion of themes should also be looked at where a global theme could be applied to the application or overridden at the window and control level respectively.
Completely agree Jeff. A visual designer should be top priority for any UI technology. For those that prefer the tedium of doing it by hand, they’d still have the option to do that, so it’s a win-win for everyone.
It’s like the situation with the designer being dropped in EF Core. I can scarcely believe anyone writes their contexts by hand. Thank goodness for Erik Jensen’s EF Core Power Tools! I just don’t understand how EF Core doesn’t have such basic necessities built in though.
Microsoft has some of the best developer talent on the planet, yet creating visual designers seems to be some sort of dark art lost in the mists of time.
I challenge the team to come up with the next generation experience in visual design that brings back the amazing productivity gains we’ve enjoyed with other technologies over the years! | https://devblogs.microsoft.com/xamarin/the-new-net-multi-platform-app-ui-maui/?WT.mc_id=mobile-34797-bramin | CC-MAIN-2022-21 | refinedweb | 2,296 | 65.42 |
> SMS PLAN.rar > GW.h
// The following ifdef block is the standard way of creating macros which make exporting // from a DLL simpler. All files within this DLL are compiled with the GW_EXPORTS // symbol defined on the command line. this symbol should not be defined on any project // that uses this DLL. This way any other project whose source files include this file see // GW_API functions as being imported from a DLL, wheras this DLL sees symbols // defined with this macro as being exported. #ifdef GW_EXPORTS #define GW_API __declspec(dllexport) #else #define GW_API __declspec(dllimport) #endif #include
#ifdef __cplusplus extern "C" { #endif GW_API DWORD BeginGW(LPVOID); GW_API void StopGW(); #ifdef __cplusplus } #endif | http://read.pudn.com/downloads16/sourcecode/windows/comm/60671/cmppnew/GW.h__.htm | crawl-002 | refinedweb | 112 | 58.21 |
need some help with the string807606 May 17, 2007 4:31 AM
hi guys if you have some time i need your advise
i have a string
i have a string
and i need to get strings:and i need to get strings:
"Today+is+raining"
"Today+is+raining"
"Today+is"
the codes i have are:the codes i have are:
"Today"
String todayisraining = "Today+is+raining"; out.println(todayisraining); // "Today+is+raining" String split1[] = todayisraining.split("\\+"); String raining = ""; for(int i=0; i<split1.length;i++) { raining = split1; }
String todayisplus = todayisraining.substring(0,todayisraining.length() - raining.length());
String todayis = todayisplus.substring(0,todayisplus.length()-1);
out.println(todayis); // "Today+is"
String split2[] = todayis.split("\\+");
String is = "";
for(int i=0; i<split2.length;i++) { is = split2[i]; }
String todayplus = todayis.substring(0,todayis.length() - is.length());
String today = todayplus.substring(0,todayplus.length()-1);
out.println(today); // "Today"
bad solution because posted by hand :(also bad solution, because string consist from separate objectsalso bad solution, because string consist from separate objects
String todayisraining = "Today+is+raining"; String split[] = todayisraining.split("\\+"); for(int n=0; n<split.length;n++){ for(int m=0; m<=split.length-n-1;m++) { out.println( split[m]);// "Today" "is" "raining" , "is" "raining" , "raining" } }
should i use recursive loop or something? thank you
This content has been marked as final. Show 5 replies
1. Re: need some help with the string807606 May 17, 2007 4:35 AM (in response to 807606)Hi,
Try using substring and indexOf on the plus signs, that should do it.
Good luck,
Jezzica85
2. Re: need some help with the string807606 May 17, 2007 6:25 AM (in response to 807606)thank you Jezzica85 for quick responce it's working but one more and i hope last question, now i have a simple code:
in the outut i get
import java.io.*; public class strings { public static void main(String args[]) throws IOException { String string1 = "Today+is+raining+outside"; int i = string1.lastIndexOf("+"); String string2 = string1.substring(0,i); int j = string2.lastIndexOf("+",i); String string3 = string2.substring(0,j); System.out.println(string1 + string2 + string3); } }
"Today+is+raining+outside"
"Today+is+raining"
and last should be
"Today+is"
maybe you know how to execute this program automatically using loop or something to get all those strings, thanks a lot
"Today"
3. Re: need some help with the string807606 May 17, 2007 8:36 AM (in response to 807606)this will do it make sure u know what is happening so that you can understand it, u should debug the code to see what is happening
basically instead of using the intial string1 each time to get the substring i stored each new substring in result therefore all i needed to do was check for the last index of "+" each time
String string1 = "Today+is+raining+outside"; String result=null; int start=0; int finish=string1.length(); while(finish>=0){ result= string1.substring(start,finish); finish=result.length()-(result.length()-result.lastIndexOf("+")); System.out.println(result); }
4. Re: need some help with the string807606 May 17, 2007 8:46 AM (in response to 807606)now i have code like this, but it doesn't work:
import java.io.*; import java.util.*; import java.text.*; public class strings { strings(){String n;} public static void main(String args[]) throws IOException { strings st = new strings(); String string3 = st.method(); System.out.println(string3); String sp[]= string3.split("\\+"); String check = ""; for(int i=0;i<sp.length;i++){check=sp;}
String n = string3.substring(0,string3.length() - check.length());
if(n.endsWith("+")){
new naujas(n); //cannot find simbol constructor
}
}
public String method(){
String string1 = "Today+is+raining+outside";
int i = string1.lastIndexOf("+");
String string2 = string1.substring(0,i);
return string2;
}
}
i need the same thing - to get output:
"Today+is+raining+outside"
"Today+is+raining"
"Today+is"
Thank you for your help
"Today"
5. Re: need some help with the string807606 May 17, 2007 8:50 AM (in response to 807606)i didnt notice your post bouncer i'll check your code, thank you very much for your time | https://community.oracle.com/message/8954914 | CC-MAIN-2015-40 | refinedweb | 685 | 55.95 |
write an implementation file (.cpp or .cxx or something else) your compiler generates a translation unit. This is the object file from your implementation file plus all the headers you *#include*d in it.
Internal linkage refers to everything only in scope of a translation unit. External linkage refers to things that exist beyond a particular translation unit. In other words, accessable through the whole program, which is the combination of all translation units (or object files).
Example: test.cpp
#include <iostream>
using namespace std;
extern const int max;
extern int n;
static float z = 0.0;
void f(int i)
{
static int nCall = 0;
int a;
//...
nCall++;
n++;
//...
a = max * z;
//...
cout << "f() called " << nCall << " times." << endl;
}
max is declared to have external linkage. A matching definition for max(with external linkage) must appear in some file. (As in test.cpp)
n is declared to have external linkage.
z is defined as a global variable with internal linkage.
The definition of nCall specifies nCall to be a variable that retains its value across calls to function f(). Unlike local variables with the default auto storage class, nCall will be initialized only once at the start of the program and not once for each invocation of f(). The storage class specifier static affects the lifetime of the local variable and not its
Forgot Your Password?
2018 © Queryhome | https://www.queryhome.com/tech/38292/what-is-internal-linking-and-external-linking-in-c | CC-MAIN-2019-04 | refinedweb | 227 | 67.45 |
Comment on Tutorial - indexOf( ) and lastIndexOf( ) in Java By Hong
Comment Added by : arya
Comment Added at : 2012-11-07 08:37:59
Comment on Tutorial : indexOf( ) and lastIndexOf( ) in Java By Hong
import java.io.*;
class lastindexof
{
public static void main(String argv[]) throws IOException
{
String mystr;
BufferedReader bf=new BufferedReader(new InputStreamReader(System.in));
System.out.print("enter the string");
mystr=bf.readLine();
System.out.println(" character at position is=" +mystr.lastIndexOf());
}
}
why this program always showing me cannot find symbol code works for me. But i have a doubt how to
View Tutorial By: Daya at 2015-05-04 12:04:19
2. Where to put the smsc number. I am using vodafone
View Tutorial By: gopi at 2009-06-11 22:07:49
3. hiiiiiiii
View Tutorial By: venkateshwarreddy suravaram at 2010-04-30 23:57:01
4. Actually in gallery example the images can be move
View Tutorial By: santu at 2013-02-16 12:40:29
5. This article is help full for beginners.now i will
View Tutorial By: shobha B at 2012-08-31 06:23:31
6. pl send me how to download java 2 micro edition do
View Tutorial By: Narayana at 2013-02-21 10:08:27
7. Hi mounika, Did you download and install the javac
View Tutorial By: Ramlak at 2008-04-27 05:49:42
8. Good one
View Tutorial By: Ravi at 2007-11-19 03:30:40
9. one of the best example for trim()....
View Tutorial By: sharad at 2013-01-29 13:43:58
10. This matter is really too good.
View Tutorial By: Ashwin perti at 2009-07-08 04:43:55 | https://java-samples.com/showcomment.php?commentid=38594 | CC-MAIN-2019-18 | refinedweb | 281 | 59.6 |
#include <VolVolume.hpp>
List of all members.
It's used to store indices, it has similar functions as VOL_dvector. 237 of file VolVolume.hpp.
Construct a vector of size s.
The content of the vector is undefined.
Definition at line 245 of file VolVolume.hpp.
References sz, v, and VOL_TEST_SIZE.
Default constructor creates a vector of size 0.
Definition at line 250 of file VolVolume.hpp.
Copy constructor makes a replica of x.
Definition at line 252 of file VolVolume.hpp.
The destructor deletes the data array.
Definition at line 260 of file VolVolume.hpp.
Return the size of the vector.
Definition at line 265 of file VolVolume.hpp.
Return a reference to the
i-th entry.
Definition at line 267 of file VolVolume.hpp.
References sz, v, and VOL_TEST_INDEX.
Return the
i-th entry.
Definition at line 273 of file VolVolume.hpp.
References sz, v, and VOL_TEST_INDEX.
Delete the content of the vector and replace it with a vector of length 0.
Definition at line 280 of file VolVolume.hpp.
delete the current vector and allocate space for a vector of size
s.
Definition at line 288 of file VolVolume.hpp.
References sz, v, and VOL_TEST_SIZE.
swaps the vector with
w.
Definition at line 295 of file VolVolume.hpp.
Copy
w into the vector.
Replace every entry in the vector with
w.
The array holding the vector.
Definition at line 240 of file VolVolume.hpp.
Referenced by allocate(), clear(), operator[](), swap(), VOL_ivector(), and ~VOL_ivector().
The size of the vector.
Definition at line 242 of file VolVolume.hpp.
Referenced by allocate(), clear(), operator[](), size(), swap(), and VOL_ivector(). | http://www.coin-or.org/Doxygen/CoinAll/class_v_o_l__ivector.html | crawl-003 | refinedweb | 267 | 64.47 |
Richard Querin wrote: > import itertools, operator > for k, g in itertools.groupby(sorted(data), key=operator.itemgetter(0, > 1, 2, 3)): > print k, sum(item[4] for item in g) > > > > I'm trying to understand what's going on in the for statement but I'm > having troubles. The interpreter is telling me that itemgetter expects 1 > argument and is getting 4. You must be using an older version of Python, the ability to pass multiple arguments to itemgetter was added in 2.5. Meanwhile it's easy enough to define your own: def make_key(item): return (item[:4]) and then specify key=make_key. BTW when you want help with an error, please copy and paste the entire error message and traceback into your email. > I understand that groupby takes 2 parameters the first being the sorted > list. The second is a key and this is where I'm confused. The itemgetter > function is going to return a tuple of functions (f[0],f[1],f[2],f[3]). No, it returns one function that will return a tuple of values. > Should I only be calling itemgetter with whatever element (0 to 3) that > I want to group the items by? If you do that it will only group by the single item you specify. groupby() doesn't sort so you should also sort by the same key. But I don't think that is what you want. Kent | https://mail.python.org/pipermail/tutor/2007-November/058812.html | CC-MAIN-2016-36 | refinedweb | 239 | 72.97 |
XAML Processing Differences Between Silverlight Versions and WPF
Silverlight includes a XAML parser that is part of the Silverlight core install. Silverlight uses different XAML parsers depending on whether your application targets Silverlight 3 or more recent Silverlight versions. In addition to Silverlight version differences, the XAML parsing behavior in Silverlight sometimes differs from the parsing behavior in Windows Presentation Foundation (WPF). WPF has its own XAML parser. This topic describes the XAML processing differences between Silverlight and WPF, which is useful if you are migrating XAML written for WPF to Silverlight.
Silverlight 4 introduced a XAML parser that is closer to the WPF XAML parser implementation. However, applications exist that might still target Silverlight 3. To support these applications, Silverlight includes a legacy XAML parsing codepath in the Silverlight client runtime. Applications that are compiled for and target Silverlight 3 use the Silverlight 3-specific XAML parser. Applications that are compiled for and target Silverlight 4 or later use the main XAML parser.
If you are maintaining a Silverlight application that targets Silverlight 3, the XAML differences between Silverlight versions is documented in another topic. For more information, see XAML Processing Differences Between Silverlight Versions.
Except where noted in the following sections, the Silverlight 4 and later XAML parser and the XAML language support follow the XAML language specification [MS-XAML] and has the same behavior as the WPF parser.
Namespaces
The Silverlight XAML markup compiler has restrictions on which namespaces can be used as the default XAML namespace, and on requirement for a default namespace. The default XAML namespace must be one of the following:
(The latter namespace is supported for legacy reasons; new applications should use).
However, these restrictions are only for XAML markup compile from the Page build action. For loose XAML (such as a ResourceDictionary with the Content build action, or input for the XamlReader.Load method) the XAML parser in the core libraries supports assignment of any namespace to be the default, or no default.
Silverlight imposes the following restrictions on mapping assembly and namespace for XAML namespaces:
The referenced assembly must be the name of an assembly in the XAP file, or of a Silverlight core assembly such as mscorlib. (The assembly cannot be an assembly that is not deployed with the XAP package.) This is part of how Silverlight manages security and permissions for client code.
The assembly name in the mapping cannot include ".dll" at the end of the name.
Constructs
The only supported XAML namespace () constructs in Silverlight are: x:Null, x:Name, x:Key, and x:Class. Notable omissions here that exist in WPF or [MS-XAML] are x:Array, x:Code, x:Type, and code access modifiers, such as x:Subclass. x:Uid is permitted but ignored by the parser itself, and has no API connection point in Silverlight.
mc:Ignorable (from the XML namespace) is permitted as an attribute on the root element, but ignored.
Silverlight does not guarantee to preserve CDATA. (The main scenario for CDATA in WPF is x:Code, which Silverlight does not support.)
XAML Language Intrinsics
The built-in Silverlight markup extensions are as follows: x:Null, StaticResource, Binding, RelativeSource, and TemplateBinding. Except for Binding and RelativeSource, the markup extensions do not have corresponding classes to support parallel markup extension/run-time scenarios.
Silverlight does not support adding "Extension" to the end of a markup extension name as an alternate usage. For example, {x:Null} works, but {x:NullExtension} does not. This is not necessarily a XAML behavior; it is a Silverlight implementation detail.
Object elements inside a ResourceDictionary in Silverlight may have an x:Name instead of or in addition to an x:Key. If x:Key is not specified, the x:Name is used as the key. (Exception: both Name / x:Name and x:Key are optional on Style elements if the TargetType attribute is specified. For more information, see Resource Dictionaries.
Other Behavior
FrameworkTemplate and derived classes support XAML content even though they have no ContentPropertyAttribute, or seemingly any settable property in the object model. Template creation in general is an internal parser behavior in Silverlight. In WPF, there is a public VisualTree content property, which does not exist in Silverlight.
The Color structure does not support setting its Byte properties (A, R, G, B) as XAML attributes on a <Color /> object element. This is because Byte does not have native type-conversion support in Silverlight XAML.
Measurement properties of type Double can process the string "Auto" to evaluate to Double.NaN. This is a native parsing behavior that does not involve a type converter. However, this behavior is not extensible to custom properties of type Double, it only applies to Silverlight defined APIs.
The XAML processing differences between Silverlight 3 and Silverlight 4 are described in a separate topic. For more information, see XAML Processing Differences Between Silverlight Versions. Generally, the differences between Silverlight 3 and Silverlight 4 are representative of Silverlight 3 using behavior that may be unexpected based on either [MS-XAML] or the WPF XAML behavior. | http://msdn.microsoft.com/en-us/library/cc917841(VS.95).aspx | CC-MAIN-2014-23 | refinedweb | 840 | 54.73 |
From: phil scott
Newsgroups: misc.taxes alt.politics.economics sci.econ uk.legal uk.finance
Subject: Re: Dollar crash: Calculated Chaos....Calculated Collapse
Date: Thu, 15 Nov 2007 10:23:21 -0800 (PST)
<38684e75-643a-41aa-95c0-72e05454500d@s19g2000prg.googlegroups.com>
<87zlxf785j.fsf@newton.gmurray.org.uk>
posting-account=ejca-QkAAAAerCqK91N59ZdJnDzb0AkZ
CLR 1.0.3705; .NET CLR 1.1.4322; Media Center PC 4.0; .NET CLR
2.0.50727),gzip(gfe),gzip(gfe)
Bytes: 7606
On Nov 15, 9:16 am, The Trucker wrote:
> On Thu, 15 Nov 2007 17:03:52 +0000, Graham Murray wrote:
> > The Trucker writes:
>
> >> Ask yourself what happens when the bank creates money for a lot of new
> >> homes and then the borrowers can't pay. Just like a car, the homes
> >> must be repossessed and sold to those who will pay.
>
> > So why have we been seeing reports of homes (sometimes almost whole
> > neighbourhoods) in the USA becoming either derelict or having to be
> > demolished following the lender repossessing them and then not being
> > able to find a buyer? Surely it would have been in everyone's interest
> > (both socially and financially) in such situations for the lender to
> > have not repossessed but continued to allow the borrower to pay the
> > 'pre-increase' monthly repayments (or whatever the borrower could
> > afford)? If they had done that then they would have had some (though not
> > as much as they planned for) return on the loan rather than in effect
> > writing it off, and the borrower would continued to have a roof over his
> > head.
>
> Neither I nor you should be concerned about the "return" to the lenders.
> That is the least of the problem. The only way that the demise of
> neighborhoods occurs is because of factors OUTSIDE the neighborhoods and
> OUTSIDE the control of the local buyers and local lenders. It happens
> because there are not sufficient opportunities for income in the area.
> And there is NOTHING that can be done at the level of the local
> home loan lenders to fix that problem. When there are no jobs then no one
> can live there. Such things do not happen because the people are
> deadbeats nor does it happen because of greedy bankers. The ONLY solution
> is for the people to relocate to some area where income can be found. That
> is called reality.
>
> --
> - Hide quoted text -
>
> - Show quoted text -
relocating to 'where the jobs are'...involves moving to india in many
cases, then working for peanuts. the idea of moving to where the
jobs are works in a growing economy though...and so you are very very
right in making the statement under those conditions....however, the
economy in the USA is shrinking at warp speed, headed to the bottom
with no safety net in siight.
now of course i realize that govt says the oposite... thats spin and
disinformation from govt trying to prevent a panic. the hard
numbers involve huge profits by US corporations (please note Im not
denying that), and that is as real as it gets...however those profits
increasingly are generated off shore, with offshore talent, and
offshored infrastructure, and offshored investment....and this is
**despite some foreign investment in the US....
Most of that exchange is US jobs, industry, industrial infrastructure
etc going off shore..mostly to china.
...that is entirely fatal to the US economy and its people, regardless
that the international corporations do well in that case...primarily
by use of slave labor....that does not work in the US because of our
bloated govt and high costs of living that spin from that.
accordingly, any view that is missing a few of these aspects will be
skewed.. you will say then to someone in calif 'move to
arkansas'...or visa versa...then the fact is jobs that pay a middle
class wage are getting scarce at light speed. .. and this view
comes from ol philsie here... forced out of the engineering business
and into a wide range of freelance services, trades, and consulting
and I am doing reasonably well.
however i know what it took...and it took a lot, and it was nasty..and
I am way more than slightly talented, and without a family to
feed...so I made it to viable levels. Most are not in that
position.... these, the great american middle class, the core
productive capability of the nation. now about half decimated... that
takes down the entire country in the end.
thats not speculation....it is a cycle repeated in history, with
defined mile markers, sort of like being 22 miles outside of st
louis...the road sign doesnt change...and the indication is firm..one
is not guessing.
Now regarding 'cycles'...recession and depression cycles...not big
deal some think..and to an extent that is correct,... we recover, a
few go hungry for a few years then we recover. much of it in ones
own lifetime.
so we make our judgements on that short time frame cycle... a fatal
mistake when it comes to the longer cycles...here i am referring to
the 260 year life cycle of nations... it takes about 5 generations for
a nation to go from tough as nails to soft and corrupt and rotten...in
the final stage these elect or are dominated by people who destroy all
they touch.
one of a few dozen markers at the final stage. others are having to
import labor since the birth rate has gone south, hyper inflation,
total govt bloat, ruthless levels of taxation, wars to gain land or
assets of other nations...and perverse behaviors all around.
these are the historically well documented markers... then time frame
in this last stage is about 20 years... from the nations most obvious
peak of power and afluence, to its total collapse on those trend
lines, with those markers...
Such rot does not cure itself...mother nature does that in all life
forms by means of the birth, life, and death cycle..... not a
recession you see. .. we will not be seeing one of those either, we
are way past that now.
Phil Scott | http://www.info-mortgage-loans.com/usenet/posts/68298-84567.uk.finance.shtml | crawl-002 | refinedweb | 1,020 | 75.61 |
I've taught JavaScript for a long time to a lot of people. Consistently the most commonly under-learned aspect of the language is the module system. There's a good reason for that. Modules in JavaScript have a strange and erratic history. In this post, we'll walk through that history and you'll learn modules of the past to better understand how JavaScript modules work today.
Before we learn how to create modules in JavaScript, we first need to understand what they are and why they exist. Look around you right now. Any marginally complex item that you can see is probably built using individual pieces that when put together, form the item.
Let's take a watch for example.
A simple wristwatch is made up of hundreds of internal pieces. Each has a specific purpose and clear boundaries for how it interacts with the other pieces. Put together, all of these pieces form the whole of the watch. Now I'm no watch engineer, but I think the benefits of this approach are pretty transparent.
Reusability
Take a look at the diagram above one more time. Notice how many of the same pieces are being used throughout the watch. Through highly intelligent design decisions centered on modularity, they're able to re-use the same components throughout different aspects of the watch design. This ability to re-use pieces simplifies the manufacturing process and, I'm assuming, increases profit.
Composability
The diagram is a beautiful illustration of composability. By establishing clear boundaries for each individual component, they're able to compose each piece together to create a fully functioning watch out of tiny, focused pieces.
Leverage
Think about the manufacturing process. This company isn't making watches, it's making individual components that together form a watch. They could create those pieces in house, they could outsource them and leverage other manufacturing plants, it doesn't matter. The most important thing is that each piece comes together in the end to form a watch - where those pieces were created is irrelevant.
Isolation
Understanding the whole system is difficult. Because the watch is composed of small, focused pieces, each of those pieces can be thought about, built and or repaired in isolation. This isolation allows multiple people to work individually on the watch while not bottle-necking each other. Also if one of the pieces breaks, instead of replacing the whole watch, you just have to replace the individual piece that broke.
Organization
Organization is a byproduct of each individual piece having a clear boundary for how it interacts with other pieces. With this modularity, organization naturally occurs without much thought.
We've seen the obvious benefits of modularity when it comes to everyday items like a watch, but what about software? Turns out, it's the same idea with the same benefits. Just how the watch was designed, we should design our software separated into different pieces where each piece has a specific purpose and clear boundaries for how it interacts with other pieces. In software, these pieces are called modules. At this point, a module might not sound too different from something like a function or a React component. So what exactly would a module encompass?
Each module has three parts - dependencies (also called imports), code, and exports.
Dependencies (Imports)
When one module needs another module, it can `import` that module as a dependency. For example, whenever you want to create a React component, you need to `import` the `react` module. If you want to use a library like `lodash`, you'd need to `import` the `lodash` module.
Code
After you've established what dependencies your module needs, the next part is the actual code of the module.
Exports
Exports are the "interface" to the module. Whatever you export from a module will be available to whoever imports that module.
Enough with the high-level stuff, let's dive into some real examples.
First, let's look at React Router. Conveniently, they have a modules folder. This folder is filled with... modules, naturally. So in React Router, what makes a "module"? Turns out, for the most part, they map their React components directly to modules. That makes sense and, in general, is how you separate components in a React project. This works because if you re-read the watch analogy above but swap out "module" with "component", the metaphors still make sense.
Let's look at the code from the `MemoryRouter` module. Don't worry about the actual code for now, but focus more on the structure of the module.
```jsx
// imports
import React from "react";
import { createMemoryHistory } from "history";
import Router from "./Router";

// code
class MemoryRouter extends React.Component {
  history = createMemoryHistory(this.props);

  render() {
    return (
      <Router history={this.history} children={this.props.children} />
    );
  }
}

// exports
export default MemoryRouter;
```
You'll notice at the top of the module they define their imports, or what other modules they need to make the `MemoryRouter` module work properly. Next, they have their code. In this case, they create a new React component called `MemoryRouter`. Then at the very bottom, they define their export, `MemoryRouter`. This means that whenever someone imports the `MemoryRouter` module, they'll get the `MemoryRouter` component.
Now that we understand what a module is, let's look back at the benefits of the watch design and see how, by following a similar modular architecture, those same benefits can apply to software design.
Reusability
Modules maximize reusability since a module can be imported and used in any other module that needs it. Beyond this, if a module would be beneficial in another program, you can create a package out of it. A package can contain one or more modules and can be uploaded to NPM to be downloaded by anyone.
`react`, `lodash`, and `jquery` are all examples of NPM packages since they can be installed from the NPM registry.
Composability
Because modules explicitly define their imports and exports, they can be easily composed. More than that, a sign of good software is that it can be easily deleted. Modules increase the "delete-ability" of your code.
Leverage
The NPM registry hosts the world's largest collection of free, reusable modules (over 700,000 to be exact). Odds are if you need a specific package, NPM has it.
Isolation
The text we used to describe the isolation of the watch fits perfectly here as well. "Understanding the whole system is difficult. Because (your software) is composed of small, focused (modules), each of those (modules) can be thought about, built and or repaired in isolation. This isolation allows multiple people to work individually on the (app) while not bottle-necking each other. Also if one of the (modules) breaks, instead of replacing the whole (app), you just have to replace the individual (module) that broke."
Organization
Perhaps the biggest benefit in regards to modular software is organization. Modules provide a natural separation point. Along with that, as we'll see soon, modules prevent you from polluting the global namespace and allow you to avoid naming collisions.
At this point, you know the benefits and understand the structure of modules. Now it's time to actually start building them. Our approach to this will be pretty methodical. The reason is that, as mentioned earlier, modules in JavaScript have a strange history. Even though there are "newer" ways to create modules in JavaScript, some of the older flavors still exist and you'll see them from time to time. If we jumped straight to modules in 2018, I'd be doing you a disservice. With that said, we're going to take it back to late 2010. AngularJS was just released and jQuery is all the rage. Companies are finally using JavaScript to build complex web applications and with that complexity comes a need to manage it - via modules.
Your first intuition for creating modules may be to separate code by files.
// users.js
var users = ["Tyler", "Sarah", "Dan"]

function getUsers() {
  return users
}
// dom.js
function addUserToDOM(name) {
  const node = document.createElement("li")
  const text = document.createTextNode(name)
  node.appendChild(text)

  document.getElementById("users")
    .appendChild(node)
}

document.getElementById("submit")
  .addEventListener("click", function() {
    var input = document.getElementById("input")
    addUserToDOM(input.value)

    input.value = ""
  })

var users = window.getUsers()
for (var i = 0; i < users.length; i++) {
  addUserToDOM(users[i])
}
<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
  <title>Users</title>
</head>
<body>
  <h1>Users</h1>
  <ul id="users"></ul>
  <input id="input" type="text" placeholder="New User"></input>
  <button id="submit">Submit</button>

  <script src="users.js"></script>
  <script src="dom.js"></script>
</body>
</html>
The full code can be found here.
OK. We've successfully separated our app into its own files. Does that mean we've successfully implemented modules? No. Absolutely not. Literally, all we've done is separate where the code lives. The only way to create a new scope in JavaScript is with a function. All the variables we declared that aren't in a function are just living on the global object. You can see this by logging the
window object in the console. You'll notice we can access, and worse, change
users, getUsers, and addUserToDOM. That's essentially our entire app. We've done nothing to separate our code into modules, all we've done is separate it by physical location. If you're new to JavaScript, this may be a surprise to you but it was probably your first intuition for how to implement modules in JavaScript.
So if file separation doesn't give us modules, what does? Remember the advantages to modules - reusability, composability, leverage, isolation, organization. Is there a native feature of JavaScript we could use to create our own "modules" that would give us the same benefits? What about a regular old function? When you think of the benefits of a function, they align nicely to the benefits of modules. So how would this work? What if instead of having our entire app live in the global namespace, we instead expose a single object, we'll call it
APP. We can then put all the methods our app needs to run under the
APP, which will prevent us from polluting the global namespace. We could then wrap everything else in a function to keep it enclosed from the rest of the app.
// App.js
var APP = {}
// users.js
function usersWrapper () {
  var users = ["Tyler", "Sarah", "Dan"]

  function getUsers() {
    return users
  }

  APP.getUsers = getUsers
}
usersWrapper()
// dom.js
function domWrapper() {
  function addUserToDOM(name) {
    const node = document.createElement("li")
    const text = document.createTextNode(name)
    node.appendChild(text)

    document.getElementById("users")
      .appendChild(node)
  }

  document.getElementById("submit")
    .addEventListener("click", function() {
      var input = document.getElementById("input")
      addUserToDOM(input.value)

      input.value = ""
    })

  var users = APP.getUsers()
  for (var i = 0; i < users.length; i++) {
    addUserToDOM(users[i])
  }
}
domWrapper()
<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
  <title>Users</title>
</head>
<body>
  <h1>Users</h1>
  <ul id="users"></ul>
  <input id="input" type="text" placeholder="New User"></input>
  <button id="submit">Submit</button>

  <script src="app.js"></script>
  <script src="users.js"></script>
  <script src="dom.js"></script>
</body>
</html>
The full code can be found here.
Now if you look at the
window object, instead of it having all the important pieces of our app, it just has
APP and our wrapper functions,
usersWrapper and
domWrapper. More important, none of our important code (like
users) can be modified since they're no longer on the global namespace.
Let's see if we can take this a step further. Is there a way to get rid of our wrapper functions? Notice that we're defining and then immediately invoking them. The only reason we gave them a name was so we could immediately invoke them. Is there a way to immediately invoke an anonymous function so we wouldn't have to give them a name? Turns out there is and it even has a fancy name -
Immediately Invoked Function Expression or
IIFE for short.
IIFE
Here's what it looks like.
(function () {
  console.log('Pronounced IF-EE')
})()
Notice it's just an anonymous function expression that we've wrapped in parens ().
(function () {
  console.log('Pronounced IF-EE')
})
Then, just like any other function, in order to invoke it, we add another pair of parens to the end of it.
(function () {
  console.log('Pronounced IF-EE')
})()
Now let's use our knowledge of IIFEs to get rid of our ugly wrapper functions and clean up the global namespace even more.
// users.js
(function () {
  var users = ["Tyler", "Sarah", "Dan"]

  function getUsers() {
    return users
  }

  APP.getUsers = getUsers
})()
// dom.js
(function () {
  function addUserToDOM(name) {
    const node = document.createElement("li")
    const text = document.createTextNode(name)
    node.appendChild(text)

    document.getElementById("users")
      .appendChild(node)
  }

  document.getElementById("submit")
    .addEventListener("click", function() {
      var input = document.getElementById("input")
      addUserToDOM(input.value)

      input.value = ""
    })

  var users = APP.getUsers()
  for (var i = 0; i < users.length; i++) {
    addUserToDOM(users[i])
  }
})()
The full code can be found here.
chef's kiss. Now if you look at the
window object, you'll notice the only thing we've added to it is
APP, which we use as a namespace for all the methods our app needs to properly run.
Let's call this pattern the IIFE Module Pattern.
What are the benefits to the IIFE Module Pattern? First and foremost, we avoid dumping everything onto the global namespace. This will help with variable collisions and keeps our code more private. Does it have any downsides? It sure does. We still have 1 item on the global namespace,
APP. If by chance another library uses that same namespace, we're in trouble. Second, you'll notice the order of the
<script> tags in our
index.html file matter. If you don't have the scripts in the exact order they are now, the app will break.
Even though our solution isn't perfect, we're making progress. Now that we understand the pros and cons to the IIFE module pattern, if we were to make our own standard for creating and managing modules, what features would it have?
Earlier our first instinct for the separation of modules was to have a new module for each file. Even though that doesn't work out of the box with JavaScript, I think that's an obvious separation point for our modules. Each file is its own module. Then from there, the only other feature we'd need is to have each file define explicit imports (or dependencies) and explicit exports which will be available to any other file that imports the module.
Our Module Standard
1) File based
2) Explicit imports
3) Explicit exports
Now that we know the features our module standard will need, let's dive into the API. The only real API we need to define is what imports and exports look like. Let's start with exports. To keep things simple, any information regarding the module can go on the
module object. Then, anything we want to export from a module we can stick on
module.exports. Something like this
var users = ["Tyler", "Sarah", "Dan"]

function getUsers() {
  return users
}

module.exports.getUsers = getUsers
This means another way we can write it is like this
var users = ["Tyler", "Sarah", "Dan"]

function getUsers() {
  return users
}

module.exports = {
  getUsers: getUsers
}
Regardless of how many methods we had, we could just add them to the
exports object.
// users.js
var users = ["Tyler", "Sarah", "Dan"]

module.exports = {
  getUsers: function () {
    return users
  },
  sortUsers: function () {
    return users.sort()
  },
  firstUser: function () {
    return users[0]
  }
}
Now that we've figured out what exporting from a module looks like, we need to figure out what the API for importing modules looks like. To keep this one simple as well, let's pretend we had a function called
require. It'll take a string path as its first argument and will return whatever is being exported from that path. Going along with our
users.js file above, to import that module would look something like this
var users = require('./users')

users.getUsers() // ["Tyler", "Sarah", "Dan"]
users.sortUsers() // ["Dan", "Sarah", "Tyler"]
users.firstUser() // "Tyler"
Pretty slick. With our hypothetical
module.exports and
require syntax, we've kept all of the benefits of modules while getting rid of the two downsides to our IIFE Modules pattern.
As you probably guessed by now, this isn't a made up standard. It's real and it's called CommonJS.
The CommonJS group defined a module format to solve JavaScript scope issues by making sure each module is executed in its own namespace. This is achieved by forcing modules to explicitly export those variables it wants to expose to the "universe", and also by defining those other modules required to properly work.
- Webpack docs
If you've used Node before, CommonJS should look familiar. The reason for that is because Node uses (for the most part) the CommonJS specification in order to implement modules. So with Node, you get modules out of the box using the CommonJS
require and
module.exports syntax you saw earlier. However, unlike Node, browsers don't support CommonJS. In fact, not only do browsers not support CommonJS, but out of the box, CommonJS isn't a great solution for browsers since it loads modules synchronously. In the land of the browser, the asynchronous loader is king.
So in summary, there are two problems with CommonJS. First, the browser doesn't understand it. Second, it loads modules synchronously which in the browser would be a terrible user experience. If we can fix those two problems, we're in good shape. So what's the point of spending all this time talking about CommonJS if it's not even good for browsers? Well, there is a solution and it's called a module bundler.
Module Bundlers
What a JavaScript module bundler does is it examines your codebase, looks at all the imports and exports, then intelligently bundles all of your modules together into a single file that the browser can understand. Then instead of including all the scripts in your index.html file and worrying about what order they go in, you include the single
bundle.js file the bundler creates for you.
app.js   ---> |         |
users.js ---> | Bundler | -> bundle.js
dom.js   ---> |         |
So how does a bundler actually work? That's a really big question and one I don't fully understand myself, but here's the output after running our simple code through Webpack, a popular module bundler.
The full code with CommonJS and Webpack can be found here. You'll need to download the code, run "npm install", then run "webpack".
(function(modules) { // webpackBootstrap
  // The module cache
  var installedModules = {};

  // The require function
  function __webpack_require__(moduleId) {
    // Check if module is in cache
    if(installedModules[moduleId]) {
      return installedModules[moduleId].exports;
    }
    // Create a new module (and put it into the cache)
    var module = installedModules[moduleId] = {
      i: moduleId,
      l: false,
      exports: {}
    };

    // Execute the module function
    modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);

    // Flag the module as loaded
    module.l = true;

    // Return the exports of the module
    return module.exports;
  }

  // define getter function for harmony exports
  __webpack_require__.d = function(exports, name, getter) {
    if(!__webpack_require__.o(exports, name)) {
      Object.defineProperty(exports, name, { enumerable: true, get: getter });
    }
  };

  // define __esModule on exports
  __webpack_require__.r = function(exports) {
    if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {
      Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
    }
    Object.defineProperty(exports, '__esModule', { value: true });
  };

  // create a fake namespace object
  // mode & 1: value is a module id, require it
  // mode & 2: merge all properties of value into the ns
  // mode & 4: return value when already ns object
  // mode & 8|1: behave like require
  __webpack_require__.t = function(value, mode) {
    if(mode & 1) value = __webpack_require__(value);
    if(mode & 8) return value;
    if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
    var ns = Object.create(null);
    __webpack_require__.r(ns);
    Object.defineProperty(ns, 'default', { enumerable: true, value: value });
    if(mode & 2 && typeof value != 'string')
      for(var key in value)
        __webpack_require__.d(ns, key, function(key) {
          return value[key];
        }.bind(null, key));
    return ns;
  };

  // Object.prototype.hasOwnProperty.call
  __webpack_require__.o = function(object, property) {
    return Object.prototype.hasOwnProperty.call(object, property);
  };

  // Load entry module and return exports
  return __webpack_require__(__webpack_require__.s = "./dom.js");
})
/************************************************************************/
({

/***/ "./dom.js":
/*!****************!*\
  !*** ./dom.js ***!
  \****************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {

eval(`var getUsers = __webpack_require__(/*! ./users */ \"./users.js\").getUsers\n\nfunction addUserToDOM(name) {\nconst node = document.createElement(\"li\")\nconst text = document.createTextNode(name)\nnode.appendChild(text)\n\ndocument.getElementById(\"users\")\n.appendChild(node)\n}\n\ndocument.getElementById(\"submit\")\n.addEventListener(\"click\", function() {\nvar input = document.getElementById(\"input\")\naddUserToDOM(input.value)\n\ninput.value = \"\"\n})\n\nvar users = getUsers()\nfor (var i = 0; i < users.length; i++) {\naddUserToDOM(users[i])\n}\n\n\n//# sourceURL=webpack:///./dom.js?`);

/***/ }),

/***/ "./users.js":
/*!******************!*\
  !*** ./users.js ***!
  \******************/
/*! no static exports found */
/***/ (function(module, exports) {

eval(`var users = [\"Tyler\", \"Sarah\", \"Dan\"]\n\nfunction getUsers() {\nreturn users\n}\n\nmodule.exports = {\ngetUsers: getUsers\n}\n\n//# sourceURL=webpack:///./users.js?`);

/***/ })

});
You'll notice that there's a lot of magic going on there (you can read the comments if you want to know exactly what's happening), but one thing that's interesting is they wrap all the code inside of a big IIFE. So they've figured out a way to get all of the benefits of a nice module system without the downsides, simply by utilizing our old IIFE Module Pattern.
What really future proofs JavaScript is that it's a living language. TC-39, the standards committee around JavaScript, meets a few times a year to discuss potential improvements to the language. At this point, it should be pretty clear that modules are a critical feature for writing scalable, maintainable JavaScript. In ~2013 (and probably long before) it was dead obvious that JavaScript needed a standardized, built in solution for handling modules. This kicked off the process for implementing modules natively into JavaScript.
Knowing what you know now, if you were tasked with creating a module system for JavaScript, what would it look like? CommonJS got it mostly right. Like CommonJS, each file could be a new module with a clear way to define imports and exports - obviously, that's the whole point. A problem we ran into with CommonJS is it loads modules synchronously. That's great for the server but not for the browser. One change we could make would be to support asynchronous loading. Another change we could make is rather than a
require function call, since we're talking about adding to the language itself, we could define new keywords. Let's go with
import and
export.
Without going too far down the "hypothetical, made up standard" road again, the TC-39 committee came up with these exact same design decisions when they created "ES Modules", now the standardized way to create modules in JavaScript. Let's take a look at the syntax.
ES Modules
As mentioned above, to specify what should be exported from a module you use the
export keyword.
// utils.js

// Not exported
function once(fn, context) {
  var result

  return function() {
    if (fn) {
      result = fn.apply(context || this, arguments)
      fn = null
    }

    return result
  }
}

// Exported
export function first (arr) {
  return arr[0]
}

// Exported
export function last (arr) {
  return arr[arr.length - 1]
}
Now to import
first and
last, you have a few different options. One is to import everything that is being exported from
utils.js.
import * as utils from './utils'

utils.first([1,2,3]) // 1
utils.last([1,2,3]) // 3
But what if we didn't want to import everything the module is exporting? In this example, what if we wanted to import
first but not
last? This is where you can use what's called
named imports (it looks like destructuring but it's not).
import { first } from './utils'

first([1,2,3]) // 1
What's cool about ES Modules is not only can you specify multiple exports, but you can also specify a
default export.
// leftpad.js
export default function leftpad (str, len, ch) {
  var pad = '';

  while (true) {
    if (len & 1) pad += ch;
    len >>= 1;
    if (len) ch += ch;
    else break;
  }

  return pad + str;
}
When you use a
default export, that changes how you import that module. Instead of using the
import * as
syntax or using named imports, you just use
import leftpad from './leftpad'
Now, what if you had a module that was exporting a
default export but also other regular exports as well? Well, you'd do it how you'd expect.
// utils.js
function once(fn, context) {
  var result

  return function() {
    if (fn) {
      result = fn.apply(context || this, arguments)
      fn = null
    }

    return result
  }
}

// regular export
export function first (arr) {
  return arr[0]
}

// regular export
export function last (arr) {
  return arr[arr.length - 1]
}

// default export
export default function leftpad (str, len, ch) {
  var pad = '';

  while (true) {
    if (len & 1) pad += ch;
    len >>= 1;
    if (len) ch += ch;
    else break;
  }

  return pad + str;
}
Now, what would the import syntax look like? In this case, again, it should be what you expect.
import leftpad, { first, last } from './utils'
Pretty slick, yeah?
leftpad is the
default export and
first and
last are just the regular exports.
What's interesting about ES Modules is, because they're now native to JavaScript, modern browsers support them without using a bundler. Let's look back at our simple Users example from the beginning of this tutorial and see what it would look like with ES Modules.
The full code can be found here.
// users.js
var users = ["Tyler", "Sarah", "Dan"]

export default function getUsers() {
  return users
}
// dom.js
import getUsers from './users.js'

function addUserToDOM(name) {
  const node = document.createElement("li")
  const text = document.createTextNode(name)
  node.appendChild(text)

  document.getElementById("users")
    .appendChild(node)
}

document.getElementById("submit")
  .addEventListener("click", function() {
    var input = document.getElementById("input")
    addUserToDOM(input.value)

    input.value = ""
  })

var users = getUsers()
for (var i = 0; i < users.length; i++) {
  addUserToDOM(users[i])
}
Now here's the cool part. With our IIFE pattern, we still needed to include a script tag for every JS file (and in order, no less). With CommonJS we needed to use a bundler like Webpack and then include a script tag for the
bundle.js file. With ES Modules, in modern browsers, all we need to do is include our main file (in this case
dom.js) and add a
type='module' attribute to the script tag.
<!DOCTYPE html>
<html>
<head>
  <title>Users</title>
</head>
<body>
  <h1>Users</h1>
  <ul id="users"></ul>
  <input id="input" type="text" placeholder="New User"></input>
  <button id="submit">Submit</button>

  <script type="module" src="dom.js"></script>
</body>
</html>
Tree Shaking
There's one more difference between CommonJS modules and ES Modules that we didn't cover above.
With CommonJS, you can
require a module anywhere, even conditionally.
if (pastTheFold === true) {
  require('./parallax')
}
Because ES Modules are static, import statements must always be at the top level of a module. You can't conditionally import them.
if (pastTheFold === true) {
  import './parallax' // "'import' and 'export' may only appear at the top level"
}
The reason this design decision was made was because by forcing modules to be static, the loader can statically analyze the module tree, figure out which code is actually being used, and drop the unused code from your bundle. That was a lot of big words. Said differently, because ES Modules force you to declare your import statements at the top of your module, the bundler can quickly understand your dependency tree. When it understands your dependency tree, it can see what code isn't being used and drop it from the bundle. This is called Tree Shaking or Dead Code Elimination.
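As a toy illustration of why static imports enable this analysis: because imports are fixed text at the top of a file, even a naive script can see which named exports are actually used. (Real bundlers build a proper module graph and handle far more syntax; the function below, unusedExports, is purely a sketch of the idea.)

```javascript
// A toy illustration of the static analysis behind tree shaking.
// Since ES module imports are static, a simple scan of the source
// text is enough to see which named exports are ever imported.
function unusedExports(moduleSource, importerSource) {
  // Collect every "export function <name>" in the module.
  var exported = []
  var re = /export function (\w+)/g
  var match
  while ((match = re.exec(moduleSource)) !== null) exported.push(match[1])

  // Collect the names inside "import { ... } from" in the importer.
  var importMatch = importerSource.match(/import \{([^}]+)\} from/)
  var imported = importMatch
    ? importMatch[1].split(',').map(function (s) { return s.trim() })
    : []

  // Anything exported but never imported can be dropped from the bundle.
  return exported.filter(function (name) { return imported.indexOf(name) === -1 })
}

var utils = "export function first (arr) {} export function last (arr) {}"
var app = "import { first } from './utils'"

console.log(unusedExports(utils, app)) // ["last"]
```

With CommonJS, a require call can be buried anywhere (even inside an if), so this kind of cheap, reliable analysis isn't possible in general.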
There is a stage 4 proposal for dynamic imports which will allow you to conditionally load modules via import().
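A sketch of what that looks like in practice: import() returns a Promise for the module's namespace object, so a module can be fetched lazily. The helper name loadWhenNeeded and the './parallax.js' path are hypothetical, used only for illustration:

```javascript
// Sketch of the dynamic import() proposal: import() returns a Promise
// that resolves to the module's namespace object, so a module can be
// loaded only when it's actually needed.
function loadWhenNeeded(shouldLoad, specifier) {
  if (shouldLoad) {
    return import(specifier) // asynchronous, non-blocking
  }
  return Promise.resolve(null)
}

// Hypothetical usage:
// loadWhenNeeded(pastTheFold, './parallax.js').then(function (mod) {
//   if (mod) mod.init()
// })
```

Because the load is asynchronous, this keeps the browser-friendly loading behavior while restoring the conditional flexibility CommonJS had.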
I hope diving into the history of JavaScript modules has helped you gain not only a better appreciation for ES Modules, but also a better understanding of their design decisions. For a deeper dive into ES Modules specifically, visit ES Modules in. | https://ui.dev/javascript-modules-iifes-commonjs-esmodules | CC-MAIN-2022-21 | refinedweb | 4,370 | 57.47 |
Fatal Exception’s Neil McAllister sees recent experiments enabling a resurgence for JavaScript on the server, one likely to dent Java’s role in the data center. ‘Today,,” McAllister writes. And though such experiments have a ways to go, the benefits of JavaScript as a server-side language are clear and striking.
Watch Out Java, Here Comes JavaScript
52 Comments
2010-09-09 8:00 pm
tessmonsta
Where exactly did the “Java” come from in JavaScript? Isn’t it technically “ECMAScript”? Sure, there are syntactic similarities, but the same could be said for C and PHP…
Not raging on technicalities here, I’m simply curious about the historical context.
2010-09-09 8:41 pm
Mooch.
2010-09-10 5:09 pm
sorpig.
2010-09-10 5:28 pm
tessmonsta
Not so boring to me, I guess. Maybe I’m just a boring person. ^_~
2010-09-10 2:00 pm
trenchsol
I don’t see js threatening Java here.
It’s mostly competing with Django, RoR, PHP and various newfangled fad things (scala, clojure…).
Web server programming is a sort of wild west where everything goes and codebases are pretty small. Trend hoppers have jumped away from Java ages ago, and the ones that remain in Java are not likely to be interested in js – they have big teams (with continuous turnover) and deem static typing and “rigor” of the solution more important than agility. A huge js program sounds like a nightmare; even gmail is implemented in Java and translated to js with GWT.
2010-09-10 1:36 am
google_ninja
The main problem with javascript is that at least 90% of people who think they know javascript really know very little about it. They churn out vast amounts of totally crap code, then complain that its crap. Even in otherwise really great shops, for some reason, javascript is done really poorly.
2010-09-10 6:25 am
Bill Shooter of Bul
I’d agree to that. For years I thought it was terrible, and complained to no end about it. Then I actually looked into the language itself. Its pretty interesting when done correctly. In my defense, the true nature of the language was really well hidden. I couldn’t find a decent book on the language itself that didn’t spend 90% of its time talking about the DOM.
2010-09-10 7:29 am
2010-09-10 12:26 pm
google_ninja
Honestly, I think that is a cop out. I work on a rails project that has about 180 models or so, and a really high level of complexity. What you need is a good team, and good practices. There are loads of the same sorts of IDEs that java has (rubymine for example, by jetbrains, really isn’t missing much) that will handle refactoring in the same way.
That being said, I am a vim guy, so refactoring for me usually means a combination of vim, grep, and sed
I have been there for about 6 months, and really don’t see any sorts of bugs that wouldn’t have been there if we were all on java. The benefits on the other hand are enormous when it comes to productivity.
I think it all comes down to the developers. If you have people on your team that you wouldn’t trust with a dynamic language, then its probably a bad idea. Good developers and a commitment to tdd, and it really isn’t that scary.
2010-09-10 1:28 pm
vivainio
There are loads of the same sorts of IDEs that java has (rubymine for example, by jetbrains, really isn’t missing much) that will handle refactoring in the same way.
I’m sure you know as well as everybody else that it’s impossible to have the same kind of refactoring in a dynamic languages, due to the fact that less information is encoded directly in the source code.
Refactoring is not just an ide feature – it’s as much about making a change, compiling and seeing what you have to change all around the place.
Not all projects require this kind of rigor – actually, I think most server side web applications are quite simple (mostly encoding state in database), and hence suitable for quick hacks like RoR.
2010-09-10 2:35 pm
google_ninja
So whats the difference between that and making a change, and run your tests to see what you need to change? Other then that tests will catch way more issues then compiling will?
2010-09-10 10:58 pm
google_ninja
Um, that is exactly what automated tests are meant to do: break when something changes and causes regressions. Out of curiosity, what do you think tests are for?
2010-09-12 3:47 am
Moochman
…whoosh…
I am saying that there are fewer regressions when carrying out an automatic refactoring of statically typed code, because the IDE knows exactly how to modify every reference and method call for every reference to an object of a given type. With dynamically typed languages on the other hand, the tools cannot carry out automatic refactoring, at least not for dynamically typed object references.
Now you could say, “well, just use best practice and always type your variables” but that’s essentially admitting that dynamic languages’ primary feature is more or less useless.
Edited 2010-09-12 03:49 UTC
2010-09-12 6:47 pm
google_ninja
Ok, so what I was saying was that automated refactorings and compile time type checking will easily catch a certain class of regressions, but unit tests will catch those too. There is another class of problems that are around the change in logic, behavior, or usage that drove the refactoring in the first place which compilers and IDEs can’t catch. The same unit tests catch these.
I’m not saying that stuff has no value, I am saying that the value is very small, and gets smaller the better your test suite gets.
2010-09-12 9:32 pm
Moochman
I understand that a good test suite makes a big difference, and is the only solution for catching the class of problems you refer to. However, I still think you’re missing the point that it’s not only about *catching* problems–it’s also about easing maintenance of the application. Ease of automatic refactoring is nothing to scoff at, and it’s just one of many features that IMHO make statically typed languages better-suited to big, long-lived code bases. Most of the features have a lot to do with increasing the power of the IDE, though, so maybe you won’t understand, since you proclaim to still use vim… Anyway, in addition to support for automatic refactoring static typing gives you:
-Auto-completion support in the IDE for member variables and methods
-Built-in documentation support for said auto-completion (the documentation for using a method pops up as you type)
-“Go to declaration” links for method calls and object instantiation, taking you right to the file for the object in question–extremely helpful when browsing someone else’s code base
-Enforcement of best practices (using typed variables whenever possible)
So far you’ve listed all the ways in which statically typed languages “don’t solve all that much”. How about giving some arguments for what exactly dynamically typed lanugages *do* solve?
2010-09-10 10:55 pm
google_ninja
It doesn’t need to be perfect to catch the class of errors that compiling catches.
My stand on the whole thing is you should be doing TDD no matter what you are coding in, but ESPECIALLY dynamic languages. If you practice TDD, even if you don’t go for a really solid test suite, you will catch everything a compiler will catch, plus a lot more. If you don’t practice TDD you are right, refactoring gets really scary. But without TDD you also have no real confidence in your code (or shouldn’t anyways), and relying on the compiler gives you a false sense of security. Again, your test suite does not have to be perfect, but it does need to be broad.
2010-09-12 6:48 am
vivainio
If you practice TDD, even if you don’t go for a really solid test suite, you will catch everything a compiler will catch, plus a lot more.
I know the party line perfectly well, I’m a long time member of Python community.
I just don’t really buy it, nor do I buy into the TDD thing. TDD is applicable to a limited set of problems, which are not usually the same things that are problematic in the real world.
Also, see
2010-09-12 6:32 pm
google_ninja
That link is basically the party line of the other side of the argument
I don’t know how much value there is left in this conversation, but I will adress that link..
I find “balance = get_balances” exactly as clear as “List<decimal> balance = GetBalances();”. The names of things make it obvious what is going on. The only time the second option will be more clear is in the case of “thing = do_something” vs “int thing = DoSomething();”.
There is additional onus on the developer to write clear code, and bad developers have the ability to screw you _way_ worse. If I had a team of crappy developers, there is no way that I would ever use a dynamic language.
If you are in a good team of experienced folks who know how to write clear code, you are really barely losing anything. What you gain however is enormous. Java/C# style type systems are just about as inflexible as you can get (as opposed to say, the scala or haskell approach to typing, which makes a much better argument for static typing). When you have an inflexible language, you need to write huge amounts of code to get around that. That is code that takes time to write, time to test, time to compile, gives more surface area for bugs, and more surface area to search through when finding bugs. I have loads of experience in massive, enterprise apps in C# or Java, and can honestly say that if you are doing it right, at least half the code has more to do with introducing flexibility then actually solving the problem you are trying to solve.
Alan Kay actually goes further, he says that dynamic dispatch style message passing is more important in OO then freakin Objects or Classes, and he invented the damn thing!
The main argument for static typing is performance, plain and simple. You are spending additional time writing for the compiler, letting it do things the most efficient way possible. The more typing you do for the benefit of the compiler, the more efficient things get. Going in the other direction, the less typing for the compiler you do, the easier things get to understand, design, and debug. Looking at the spectrum, I am sure you wouldn’t have a problem with me saying that C# is more appropriate then assembler for large projects, due to how much easier it is to write, design, and maintain. All the same reasons that I would say that also apply when I say python or ruby is more appropriate for complex systems then something like c# or java.
Now, I know for me, it took almost 3 years between when I heard that argument (specifically, this blog post), and actually fully believing it. I know loads of people who I respect who disagree with me, and I don’t really have a problem with it. What I do know is after 7 years doing enterprise systems in java and c#, the rails codebase that I work on now is is like a dream to work on and maintain. Seeing how fast our competitors can move, I finally understand why Paul Graham never really bothered paying attention to the competition if they were using java or C++ when he did viaweb (in lisp), and pretty much kicked off the .com boom. He knew there was no way they would be able to catch up (let alone keep up).
2010-09-10 5:14 pm
sorpigal
Yes, Javascript suffered for years as an ignored “Toy” language because serious programmers didn’t write HTML pages and the people who did couldn’t write for crap, which made the corpus of Javascript code crap, which made serious people take it less seriously, and so on.
As the web became more mainstream and serious programmers started taking over from half-assed webmasters and started to actually need to use JS, things improved. And then there was jQuery, which proved once and for all that scripting the DOM doesn’t have to suck. The rest is history (and also the present).
2010-09-13 4:48 am
bnolsen
The big thing going for it:
javascript has won on the browser side. It *is* the most prevalent technology aside from html itself.
Theoretically it’s personally more appealing to me to limit my number of active languages. Javascript taking over this part means I can start to ignore java even more than I already have.
The reality is that the huge java infrastructure that’s already out there for web services isn’t going to go away any time soon.
Java will always be powerful in the enterprise where middle-ware, high availability and solid data persistence are an absolute must.
I do see JavaScript taking over on the webapp side though, it’s so easy to write and rapidly test that it’s a great choice. I just wish the JavaScript source model was as organized as Java. I always feel so messy doing a project in JavaScript.
2010-09-10 1:38 am
google_ninja
There is nothing keeping you from namespacing your functions in javascript, or using more then one file.
2010-09-10 2:30 am
modmans2ndcoming
For Java to “always” be that, it needs to trim down its libraries, align its framework better, make better development tools, get better application servers, and stop being such a resource hog.
2010-09-10 6:27 am
Bill Shooter of Bul
Not as good as they could be. They’re written in Java. Bad choice for Gui.
2010-09-10 2:46 pm
Bill Shooter of Bul
Too late to edit a late night comment, but I do like many things about eclipse. It just has a lot of Java-related GUI things that make it suboptimal. Rewritten in Qt C++ it would rock. Kdevelop is pretty good, but lacks some of the language support and features of Eclipse.
2010-09-10 6:17 pmmodmans2ndcoming
the debugger in eclipse still sucks.
2010-09-12 12:49 amlacroix1547
As an Intellij user, all I have to say is, I know.
Javascript is an excellent language for the web, but it just seems so low level to me compared to something like Java. You could probably save some LOC using Javascript, but Scala or Groovy on the JVM would be tons better semantically (and LOC wise) than javascript in my opinion. Python and even PHP would be great as well.
My other concern is that Javascript was never meant to be a server side tech and so how secure is Javascript? I feel like people used PHP in a very unsecure way for a very long time before people realized how to write security conscience code.
2010-09-09 11:56 pmVentajou.
2010-09-10 1:07 amRawMustard
It’s all backwards. Instead of bringing JS to the server, they should bring Python to the browser, now that would be cool
2010-09-10 2:48 pmBill Shooter of Bul
I thought python in the browser was possible, but I can’t find any reference to it on the web. There was a perlscript that could be run in IE, with a plug in. That was pretty cool, if a bit useless.
Edited 2010-09-10 14:49 UTC
2010-09-10 5:50 pmRawMustard
The was pyxpcom for mozilla, but it’s not maintained and you had to compile the browser and python bit yourself to make it work. Python is a much nicer and cleaner scripting language than javascript. Also much easier for people to learn. It would I think be a great addition to a browser.
2010-09-10 3:44 amlemur2.…
Like this perhaps?
2010-09-10 5:20 pmsorpigal
Javascript isn’t the problem. Shitty JS programmers writing shitty JS is part of the problem, and part of it is the DOM.
If the ECMAScript people would sit down and finalize packaging, namespaces and a few other little things that people have been clamoring for for years now then JS would be ready to go. As it stands today it requires more effort to do things properly… but that’s like saying C is harder than C++. You can, incidentally, blame Microsoft for dragging its feet here (they don’t want to fix javascript because improving it would undermine their dotnet-in-your-browser initiative).
Ten years ago I might have agreed that Javascript sucks and we only use it because there isn’t another choice. In fact, I believe I said something like that in just as many words… ten years ago, when I was a fool and didn’t really know anything about Javascript. All JS is missing is a little toolchain love and a little more language tweaking, both of which are seriously possible if you don’t have to worry about cross-browsers (which you don’t, server-side).
Guess what. GMail is written in javascript — no, not just the front-end you see, the back end too. Nearly a half million lines of it.
For some reason, most programmers don’t seem to trust dynamic languages for “serious” or “big” work, as if strong typing is a magic bullet. However, static typing, when you think about it, doesn’t solve *any* of the *hard problems* in software development, and, in fact, sometimes even stands in the way.
Javascript is certainly no panacea (more so on the client side than it is on the server) — in fact, its probably the most poorly put together language there is that has achieved any sort of popularity. But there are some nice things at its core — prototypical inheritence, first-class functions, closures, regex — and if you take the time to boil away the ickier parts of the language then you’ll find a fairly graceful language hiding beneath.
The language has some evolving yet to do to make it better, but its quite usable now if you follow the proper idioms and avoid certain language features.
Yet another Java is dead, or language/platform/runtime X will replace Java, article.
I’m no Java apologist – it has lots of warts, particularly the JEE space. But Java remains the number one language (Tiobe index), has more job postings than any other language, and continues to actually gain in popularity. This, despite all the pundits, as well as self anointed “kool-kids” in the blogosphere, predicting its demise.
Javascript a replacement?? Hmmmm
2010-09-10 11:05 pmgoogle_ninja
Java the language is rather dated, everyone knows that (i’ve heard sun people say as much in podcasts) except for java fanbois and clueless writers. The thing is that the big players that use it don’t want it to change, they pretty much just want the same thing, faster, and more secure. Java is pretty much the new COBOL, and will remain so for the foreseeable future.
node.js is _the_ new hotness, I haven’t seen anything hyped so much in certain circles since rails. That has sort of made javascript the new hotness.
As a writer, why would you compare the stable workhorse with the new hotness? It’s like comparing html to C, they are completely different things used for different purposes to do different things.
Java was the new hotness about a decade ago, but it hasn’t been for a really really long time now. Acting like javascript is knocking it off of some sort of horse is sort of inane.
RingoJS is an implementation of JS with Common bindings on the server. It’s written in Java and is actually extremely fast. It also allows for any of the umpteen billion Java libs, servers, parsers or languages (including Java it’s self via JSR 199.) to run inside the VM. I’m no evangelist, but just a heads up. Java may be headed west, but it’s VM will live on for decades.
The language itself, no one cares about. I don’t a language is ever chosen for a specific project, because of the language itself (syntax & stuff like that).
It’s chosen usually because people have experience with a specific language, and because there are useful libraries which the language provides (and those libraries have a friendly license).
Java is a million years ahead of Javascript regarding server side programming. It is probably a million years ahead of everything else as well (C++ or .Net probably are closer). But Javascript? No way
Java’s threat is not JavaScript, it’s’s Law could not possibly be more poignant. In the race to get everything running in a browser, the server too will pick up the ports too. Expect a flood of JS libraries for everything from encryption to video editing. | https://www.osnews.com/story/23787/watch-out-java-here-comes-javascript/ | CC-MAIN-2020-05 | refinedweb | 3,576 | 68.6 |
RECOMMENDED: If you have Windows errors then we strongly recommend that you download and run this (Windows) Repair Tool.
After it installs our window should look like this. Now I can run a simple test script loading the plyr package. Error in ScaleR. Check the output for more information. Error in eval(expr, envir, enclos) : Error in ScaleR. Check the output for.
Sep 14, 2011. Note that libs is the folder in the *installed* package that the shared object. Error in library.dynam(lib, package, package.lib) : >> shared library.
Error Loading Operating System Raid ,
Error for making packages under windows XP-Error in library.dynam(lib, package, package.lib). Hi, I have installed the necessary tools for making a R package under.
Oct 19, 2012. You want the name of the package as the argument, as that is the name of the shared. The quotes are optional (as they are for library() etc).
If we imported a package and are not using the package identifier in the program, Go compiler will show an error. In such a situation. Create a source file “languages.go” for the package “lib” at the location github.com/shijuvar/go-samples.
Unable to load ggplot2 in R even after downloading. Error in library.dynam(lib, package, package.lib) :.
OK * checking whether the package can be loaded. ERROR Error in library.dynam(lib, package, package.lib) : shared library ‘testS4’ not found Error: package/namespace load failed for ‘testS4’ Execution halted Can someone.
library(swirl) Error in library.dynam(lib, package, package.lib) : DLL 'RCurl' not found: maybe not installed for this architecture?
Aug 6, 2015. Following @Nick Kennedey's comment, I tried installing stringi like this: > install. packages("stringi",dep=TRUE). Which resulted in this error I.
Hi, My problem is simple: since having updated the lattice package, I cannot load lattice. Error in library.dynam(lib, package, package.lib) :
Error Loading Bioshock ATTENTION seems Nexus server has struck by heavy error storm on hair tag download. when you face with an error occurred, please use Google Drive mirrors. Jul 7, 2017. Former BioShock and XCOM devs announce Arabian Nights roguelite City of Brass. Error loading player: Could not load player configuration. Rick Astley – Never Gonna Give
Feb 6, 2017. Type 'q()' to quit R. > install.packages("UsingR") Installing package into. inst ** preparing package for lazy loading Error in library.dynam(lib,
Hello RStudio My team members and I have encountered the following issues: We are validating a Shiny app. At first we were having.
Cannot install ggplot2: "Error in library.dynam(lib, package. – Error in library.dynam(lib, package, package.lib) : shared object 'stringi.so' not found Not sure if the Ubuntu upgrade I did relates to this. but I thought.
RECOMMENDED: Click here to fix Windows errors and improve system performance | http://geotecx.com/error-in-library-dynamlib-package-package-lib/ | CC-MAIN-2018-26 | refinedweb | 467 | 63.76 |
What is an elegant way to deal with exceptions in map functions in Spark?
For example with:
exampleRDD= ["1","4","7","2","err",3] exampleRDD=exampleRDD.map(lambda x: int(x))
This will not work because it will fail on the "err" item.
How can I filter out faulty rows and execute map on the rest, without anticipating the kind of error that I will encounter in every row?
One could do something like defining a function:
def stringtoint(x): try: a=int(x) except: a=-99 return a
And then filter/map. But this doesn't seem as graceful as could be. | http://www.howtobuildsoftware.com/index.php/how-do/BQa/exception-handling-apache-spark-pyspark-handling-bad-items-in-map-function-in-spark | CC-MAIN-2018-47 | refinedweb | 103 | 65.01 |
#include <sys/types.h> #include <sys/statfs.h>
int statfs (path, buf, len, fstyp) char *path; struct statfs *buf; int len, fstyp;
int fstatfs (fildes, buf, len, fstyp) int fildes; struct statfs *buf; int len, fstyp;
fstatfs- get file system information
The statfs system call returns a ``generic superblock'' describing a file system. It can be used to acquire information about mounted as well as unmounted file systems, and usage is slightly different in the two cases. In all cases, buf is a pointer to a structure (described below) which is filled by the system call, and len is the number of bytes of information which the system should return in the structure. len must be no greater than sizeof (struct statfs) and ordinarily it contains exactly that value; if it holds a smaller value, the system fills the structure with that number of bytes. (This allows future versions of the system to grow the structure without invalidating older binary programs.)
If the file system of interest is currently mounted, path should name a file which resides on that file system. In this case the file system type is known to the operating system and the fstyp argument must be zero. For an unmounted file system path must name the block special file containing it and fstyp must contain the (non-zero) file system type. In both cases read, write, or execute permission of the named file is not required, but all directories listed in the path name leading to the file must be searchable.
The statfs structure pointed to by buf includes the following members:
short f_fstyp; /* File system type */ long f_bsize; /* Block size */ long f_frsize; /* Fragment size */ long f_blocks; /* Total number of 512-byte blocks */ long f_bfree; /* Count of free blocks */ long f_files; /* Total number of file nodes */ long f_ffree; /* Count of free file nodes */ char f_fname[6]; /* Volume name */ char f_fpack[6]; /* Pack name */The fstatfs system call is similar, except that the file named by path in statfs is instead identified by an open file descriptor fildes obtained from a successful open(S), creat(S), dup(S), fcntl(S), or pipe(S) system call.
The statfs system call obsoletes ustat(S) and should be used in preference to it in new programs. | http://osr507doc.xinuos.com/cgi-bin/man/man?statfs+S | CC-MAIN-2021-43 | refinedweb | 376 | 57.74 |
Set real data elements in
mxDOUBLE_CLASS array
mxSetPr is not recommended for C applications. Use
mxSetDoubles instead. For more information, see Typed Data Access.
#include "matrix.h" void mxSetPr(mxArray *pm, double *pr);
mxSetPr that the
data is real.
All
mxCreate* functions allocate heap space to hold data.
Therefore, you do not ordinarily use this function to initialize the real elements of an
array. Rather, call this function to replace the existing values with new values.
This function does not free memory allocated for existing data. To free existing
memory, call
mxFree on the pointer returned by
mxGetPr.
pm— MATLAB array
mxArray*
Pointer to an
mxDOUBLE_CLASS array.
pr— Data array
double*
Pointer to the first element.
This function is in the separate
complex API. To build
myMexFile.c using this function,
type:
mex -R2017b myMexFile.c
This function is also in the interleaved complex API. However, the function errors for
complex input argument
pm. MathWorks recommends that you upgrade your
MEX file to use the Typed Data Access functions instead.
To build
myMexFile.c using the interleaved complex API,
type:
mex -R2018a myMexFile.c | https://uk.mathworks.com/help/matlab/apiref/mxsetpr.html | CC-MAIN-2019-09 | refinedweb | 184 | 53.47 |
26 September 2013 17:50 [Source: ICIS news]
LONDON (ICIS)--Total’s SATORP joint-venture refining and petrochemicals complex in ?xml:namespace>
Total said that Saudi Aramco, which is Total’s partner in SATORP, loaded the first shipment of heavy fuel oil at the Jubail oil terminal on 23 September.
The next shipment will be a cargo of diesel, to be lifted by Total in late September, it said.
The companies began commissioning SATORP several weeks ago, and the complex has been producing commercial-grade fuel oil and diesel since 13 September.
The SATORP facility’s size and complexity means that commissioning will be a months-long process, during which units will be started up in stages, Total said.
“As more conversion units come on stream, the products will undergo more complex processes and the amount of crude oil refined will ramp up,” it added.
All units are scheduled to be up and running by end-2013. Construction work on complex began in early April 2010.
The production complex is designed to refine 400,000 bbl/day, or around 20m tonnes/year, of oil to produce automotive fuels. The complex includes integrated petrochemical units to produce benzene, paraxylene and propylene. It uses Arab Heavy crude oil | http://www.icis.com/Articles/2013/09/26/9710037/Saudi-SATORP-refining-and-petchem-complex-ships-first-product.html | CC-MAIN-2014-42 | refinedweb | 206 | 62.17 |
I'm trying to figure out why my code is giving me an error. Any help is appreciated. Apologies if I am posting this in the wrong place. Very new to python so my code may seem novice to many of you.
Thanks.
- Code: Select all
def only_evens(lst):
""" (list of list of int) -> list of list of int
Return a list of the lists in lst that contain only even integers.
>>> only_evens([[1, 2, 4], [4, 0, 6], [22, 4, 3], [2]])
[[4, 0, 6], [2]]
"""
even_lists = []
sublist = 0
for sublist in lst:
value = 0
even_sublist = True
while value in range(len(lst[sublist])):
if lst[sublist][value] % 2 == 0:
value = value + 1
else:
even_sublist = False
if even_sublist == True:
even_lists.append(lst[sublist])
return even_lists | http://www.python-forum.org/viewtopic.php?f=6&t=7690&p=10055 | CC-MAIN-2016-40 | refinedweb | 126 | 71.75 |
Design them as maximaly decoupled from DataObject. Some users may simply need to
define a MultiDataObject and still reuse the infrastructure.
May be XMLDataObject should not be even public!
I made an attempt to collect various issues in Ant (&
apisupport/layers) which were marked unfixable pending an XML API.
First rather simple draft is in CVS. It does not contain the most
required model issues :-(.
Still, no idea how to expose/develop/sustain it without managerial
support.
I have found out that I should be interested. I'll post on dev@openide.
We are after feature freeze of 3.4. That should mean to have all
features reviewed, implemented, covered by tests and documented. I do
not think that this happened for the XML API and I am not sure how
likely that will happen.
Please consider rollback of the API so it will not appear in release34
branch.
Let these are considered while planning next release.
Target milestone was changed from not determined to TBD
Unscheduling from my todo list. New owner is gladly welcome. Meanwhile assigning
to abstract issues@xml.netbeans.org owner.
I'd like to ask what are people's current requirements for a potential
XML tools API.
- What things should it accomplish?
- Who would be the clients of this API?
- What would be the benefits?
- What needs to be changed compared to the current state?
Thanks.
Requirements from Ant include:
1. Ability to tweak the editor. This is really more of an editor
issue, but mentioned here for completeness: want to have a delegation
chain text/x-ant+xml -> text/xml -> text/plain, so Ant module can set
editor MIME type to text/x-ant+xml, then get regular XML abilities
(toolbar, context menu) plus its own (target toolbar, Ant context menu).
2. Ability to structurally manipulate an XML file using DOM or perhaps
some other tree API (but DOM preferred for its ubiquity - could add
minor helper APIs for details DOM does not cover). Must preserve exact
(char-by-char) formatting of document regions not logically affected
by a change, and avoid unnecessary textual changes in regions that are
affected by the change.
Should have option to automatically insert appropriate indentation
whitespace in newly created elements, or clean up whitespace
surrounding newly deleted elements.
Should also have a way of determining the document position (or just
line number) corresponding to a given element (or perhaps other node),
or the element corresponding to a given document position.
At this time I do not see any need for an API for tree visualization
(e.g. Node or Node.Property implementations) from the Ant module; just
a raw structure editing API would be enough, as the Ant module has its
own specialized display anyway which does not follow the XML structure.
3. Code completion APIs must be cleaned up (e.g. to not require you to
impl DOM interfaces as a client, since these change with every DOM
release), better documented (currently usable only by trial and
error), and stabilized (published w/ some assurances of compatibility,
using regular spec version dep).
3a. Support XML namespaces in code completion, for NS-aware grammars.
3b. Permit CC grammar to specify popup documentation (HTML format) for
a given completion item.
And of course I would not want to have to subclass some XMLDataObject
to get these capabilities. All such SPIs should be available
independently as needed by clients.
I guess this issue is obsolete.
*** This issue has been marked as a duplicate of 76045 ***
The issue is obsolete | https://netbeans.org/bugzilla/show_bug.cgi?id=20532 | CC-MAIN-2016-07 | refinedweb | 588 | 57.16 |
Java Cloning: Copy Constructors vs. Cloning
Java Cloning: Copy Constructors vs. Cloning
Let's run through the pros and cons of Object.clone() and see how it stacks up against copy constructors when it comes to copying objects.
Join the DZone community and get the full member experience.Join For Free
In my previous article, Shallow and Deep Java Cloning, I discussed Java cloning in detail and answered questions about how we can use cloning to copy objects in Java, the two different types of cloning (Shallow and Deep), and how we can implement both of them. If you haven’t read it, please go ahead.
In order to implement cloning, we need to configure our classes and to follow the following steps:
- Implement the Cloneable interface in our class or its superclass or interface.
- Define a clone() method that should handle CloneNotSupportedException (either throw or log).
- And, in most cases from our clone() method, we call the clone() method of the superclass.
And super.clone() will call its super.clone() and the chain will continue until the call reaches the clone() method of the Object class, which will create a field by field mem copy of our object and return it back.
Like everything, Cloning also comes with its advantages and disadvantages. However, Java cloning is more famous for its design issues but still, it is the most common and popular cloning strategy present today.
Advantages of Object.clone()
Object.clone(), as mentioned, has many design issues, but it is still the most popular and easiest way of copying objects. Some advantages of using clone() are:
- Cloning requires much fewer lines of code — just an abstract class with a 4- or 5-line long clone() method, but we will need to override it if we need deep cloning.
- It is the easiest way of copying objects, especially if we are applying it to an already developed or an old project. We just need to define a parent class, implement Cloneable in it, provide the definition of the clone() method, and we are ready. Every child of our parent will get the cloning feature.
- We should use clone to copy arrays because that’s generally the fastest way to do it.
- As of release 1.5, calling clone on an array returns an array whose compile-time
type is the same as that of the array being cloned, which clearly means calling a clone on arrays does not require type-casting.
Disadvantages of Object.clone()
Below are some cons that cause many developers not to use Object.clone():
- Using the Object.clone() method requires us to add lots of syntax to our code, like implementing a Cloneable interface, defining the clone() method and handling CloneNotSupportedException, and finally, calling Object.clone() and casting it on our object.
- The Cloneable interface lacks the clone() method. Actually, Cloneable is a marker interface and doesn’t have any methods in it, and we still need to implement it just to tell the JVM that we can perform clone() on our object.
- Object.clone() is protected, so we have to provide our own clone() and indirectly call Object.clone() from it.
- We don’t have any control over object construction because Object.clone() doesn’t invoke any constructor.
- If we are writing a clone method in a child class, e.g. Person, then all of its superclasses should define the clone() method in them or inherit it from another parent class. Otherwise, the super.clone() chain will fail.
- Object.clone() supports only shallow copying, so the reference fields of our newly cloned object will still hold objects whose fields of our original object was holding. In order to overcome this, we need to implement clone() in every class whose reference our class is holding and then call their clone separately in our clone() method like in the example below.
- We can not manipulate final fields in Object.clone() because final fields can only be changed through constructors. In our case, if we want every Person object to be unique by id, we will get the duplicate object if we use Object.clone() because Object.clone() will not call the constructor, and final id field can’t be modified from Person.clone().
class City implements Cloneable { private final int id; private String name; public City clone() throws CloneNotSupportedException { return (City) super.clone(); } } class Person implements Cloneable { public Person clone() throws CloneNotSupportedException { Person clonedObj = (Person) super.clone(); clonedObj.name = new String(this.name); clonedObj.city = this.city.clone(); return clonedObj; } }
Because of the above design issues with Object.clone(), developers always prefer other ways to copy objects like using:
- BeanUtils.cloneBean(object): creates a shallow clone similar to Object.clone().
- SerializationUtils.clone(object): creates a deep clone. (i.e. the whole properties graph is cloned, not only the first level), but all classes must implement Serializable.
- Java Deep Cloning Library: offers deep cloning without the need to implement Serializable.
All these options require the use of some external library, plus these libraries will also be using Serialization or Copy Constructors or Reflection internally to copy our object. So if you don’t want to go with the above options or want to write your own code to copy the object, then you can use:
- Serialization
- Copy constructors
Serialization
I discussed in 5 Different ways to create objects in Java with Example how we can create a new object using serialization. Similarly, we can also use serialization to copy an object by first Serializing it and again deserializing it like"))) { out.writeObject(original); return (Person) in.readObject(); } catch (Exception e) { throw new RuntimeException(e); } }
But serialization is not solving any problems because we will still not be able to modify the final fields, we still don’t have any control on object construction, and we still need to implement Serializable, which is similar to Cloneable. Plus, the serialization process is slower than Object.clone().
Copy Constructors
This method of copying objects is the most popular among the developer community. It overcomes every design issue of Object.clone() and provides better control over object construction.
public Person(Person original) { this.id = original.id + 1; this.name = new String(original.name); this.city = new City(original.city); }
Advantages of Copy Constructors Over Object.clone()
Copy constructors are better than Object.clone() because they:
- Don’t force us to implement any interface or throw an exception, but we can surely do it if it is required.
- Don’t require any casts.
- Don’t require us to depend on an unknown object creation mechanism.
- Don’t require parent classes to follow any contract or implement anything.
- Allow us to modify final fields.
- Allow us to have complete control over object creation, meaning we can write our initialization logic in it.
By using the copy constructors strategy, we can also create conversion constructors, which can allow us to convert one object to another object — e.g. The ArrayList(Collection<? extends E> c) constructor generates an ArrayList from any Collection object and copies all items from the Collection object to a newly created ArrayList object.
Published at DZone with permission of Naresh Joshi , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/java-cloning-copy-constructor-vs-cloning?fromrel=true | CC-MAIN-2019-26 | refinedweb | 1,216 | 57.06 |
]
PUTC(3) OpenBSD Programmer's Manual PUTC(3)
NAME
fputc, putc, putchar, putw - output a character or word to a stream
SYNOPSIS
#include <stdio.h>
int
fputc(int c, FILE *stream);
int
putc(int c, FILE *stream);
int
putchar(int c);
int
putw(int w, FILE *stream);
DESCRIPTION), fopen(3), getc(3), stdio(3)
STANDARDS
The functions fputc(), putc(), and putchar(), conform to ANSI X3.159-1989
(``ANSI C''). A function putw() function appeared in Version 6 AT&T UNIX.
BUGS
The size and byte order of an int varies from one machine to another, and
putw() is not recommended for portable applications.
OpenBSD 2.6 June 4, 1993 1 | http://www.rocketaware.com/man/man3/putc.3.htm | crawl-001 | refinedweb | 110 | 72.36 |
The Ext namespace (global object) encapsulates all classes, singletons, and utility methods provided by Sencha's libraries.
Most user interface Components are at a lower level of nesting in the namespace, but many common utility functions are provided as direct properties of the Ext namespace.
Also many frequently used methods from other classes are provided as shortcuts within the Ext namespace. For example Ext.getCmp aliases Ext.ComponentManager.get.
Many applications are initiated with Ext.application which is called once the DOM is ready. This ensures all scripts have been loaded, preventing dependency issues. For example:
Ext.application({ name: 'MyApp', launch: function () { Ext.Msg.alert(this.name, 'Ready to go!'); } });
Sencha Cmd is a free tool
for helping you generate and build Ext JS (and Sencha Touch) applications. See
Ext.app.Application for more information about creating an app.
A lower-level technique that does not use the
Ext.app.Application architecture is
Ext.onReady.
You can also discuss concepts and issues with others on the Sencha Forums.
This object is used to enable or disable debugging for classes or namespaces. The default instance looks like this:
Ext.debugConfig = { hooks: { '*': true } };
Typically applications will set this in their
"app.json" like so:
{ "debug": { "hooks": { // Default for all namespaces: '*': true, // Except for Ext namespace which is disabled 'Ext': false, // Except for Ext.layout namespace which is enabled 'Ext.layout': true } } }
Alternatively, because this property is consumed very early in the load process of
the framework, this can be set in a
script tag that is defined prior to loading
the framework itself.
For example, to enable debugging for the
Ext.layout namespace only:
var Ext = Ext || {};
Ext.debugConfig = {
    hooks: {
        '*': false,
        'Ext.layout': true
    }
};
For any class declared, the longest matching namespace specified determines if its
debugHooks will be enabled. The default setting is specified by the '*' property.
NOTE: This option only applies to debug builds. All debugging is disabled in production builds.
Defaults to:
Ext.debugConfig || manifest.debug || { hooks: { '*': true } }
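The longest-match rule described above can be sketched in plain JavaScript. This is an illustration of the lookup behavior only, not Ext's internal code; the hook keys and class names are examples:

```javascript
// Sketch of the longest-matching-namespace lookup described above.
// Not Ext's internal implementation; keys and class names are illustrative.
function hooksEnabled(className, hooks) {
    var parts = className.split('.');
    // Try progressively shorter prefixes so the longest match wins.
    for (var i = parts.length; i > 0; i--) {
        var prefix = parts.slice(0, i).join('.');
        if (hooks.hasOwnProperty(prefix)) {
            return hooks[prefix];
        }
    }
    return hooks['*']; // the default for all namespaces
}

var hooks = { '*': true, 'Ext': false, 'Ext.layout': true };

console.log(hooksEnabled('Ext.layout.container.Border', hooks)); // true
console.log(hooksEnabled('Ext.data.Store', hooks));              // false
console.log(hooksEnabled('MyApp.view.Main', hooks));             // true
```

Here 'Ext.layout' beats the shorter 'Ext' entry for anything under that namespace, while classes matching no configured namespace fall back to '*'.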
This object is initialized prior to loading the framework and contains settings and other information describing the application.
For applications built using Sencha Cmd, this is produced from the
"app.json"
file with information extracted from all of the required packages'
"package.json"
files. This can be set to a string when your application is using the
(microloader)[#/guide/microloader]. In this case, the string of "foo" will be
requested as
"foo.json" and the object in that JSON file will parsed and set
as this object.
Defaults to:
manifest
Available since: 5.0.0
A map of event names containing the lower-cased versions of any mixed-case event names.
Defaults to:
{}
The base prefix to use for all
Ext components. To configure this property, you should use the
Ext.buildSettings object before the framework is loaded:
Ext.buildSettings = { baseCSSPrefix : 'abc-' };
or you can change it before any components are rendered:
Ext.baseCSSPrefix = Ext.buildSettings.baseCSSPrefix = 'abc-';
This will change what CSS classes components will use and you should
then recompile the SASS changing the
$prefix SASS variable to match.
Defaults to:
'x-'
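As a quick illustration of why the JavaScript prefix and the SASS $prefix variable must agree, component markup class names are built from the shared prefix. This is a sketch only, not Ext internals, and the suffixes are hypothetical:

```javascript
// Illustrative sketch: class names concatenate the shared prefix, so
// changing it in JS without recompiling the SASS breaks the styling link.
var baseCSSPrefix = 'abc-';

function cls(suffix) {
    return baseCSSPrefix + suffix;
}

console.log(cls('panel'));        // "abc-panel"
console.log(cls('panel-header')); // "abc-panel-header"
```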
URL to a 1x1 transparent gif image used by Ext to create inline icons with CSS background images.
Defaults to:
'property-cache" class="classmembers member member-private isNotStatic is-not-inherited " data-
cache : Object
Stores Fly instances keyed by their assigned or generated name.
Flyinstances keyed by their assigned or generated name.
Defaults to: flyweights
Available since: 5.0.0
The current version of Chrome (0 if the browser is not Chrome).
Defaults to:
{}
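Version properties like this one hold the browser's major version, or 0 when the browser does not match. A hedged sketch of how a major version can be pulled from a user-agent string (illustrative only — this is not Ext's actual detection code, and the user-agent string below is abbreviated):

```javascript
// Sketch: extract a major version for a given browser token from a
// user-agent string, falling back to 0 when the token is absent.
function majorVersion(ua, token) {
    var match = ua.match(new RegExp(token + '\\/(\\d+)'));
    return match ? parseInt(match[1], 10) : 0;
}

var ua = 'Mozilla/5.0 (X11; Linux x86_64) Chrome/96.0.4664.110 Safari/537.36';

console.log(majorVersion(ua, 'Chrome'));  // 96
console.log(majorVersion(ua, 'Firefox')); // 0
```

Real detection is more involved (for example, Chrome user agents also contain a Safari token), which is why these values are computed by the framework rather than by application code.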
Defaults to:
{}
A reusable empty function.
Defaults to:
emptyFn
A zero length string which will pass a truth test. Useful for passing to methods which use a truth test to reject falsy values where a string value must be cleared.
Defaults to:
new String()
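The reason for new String() rather than a string literal: a primitive empty string is falsy, but a String object (like all objects) is truthy, so it passes a truth test while still coercing to empty text:

```javascript
// An empty String *object* is truthy, unlike the falsy primitive '' —
// which is exactly the property the description above relies on.
var emptyString = new String();

console.log(Boolean(''));          // false
console.log(Boolean(emptyString)); // true
console.log(String(emptyString));  // "" (still coerces to empty text)
console.log(emptyString.length);   // 0
```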
This property is provided for backward compatibility with previous versions of Ext JS. Accessibility is always enabled in Ext JS 6.0+.
This property is deprecated. To disable WAI-ARIA compatibility warnings,
override
Ext.ariaWarn function in your application startup code:
Ext.application({ launch: function() { Ext.ariaWarn = Ext.emptyFn; } });
For stricter compatibility with WAI-ARIA requirements, replace
Ext.ariaWarn
with a function that will raise an error instead:
Ext.application({ launch: function() { Ext.ariaWarn = function(target, msg) { Ext.raise({ msg: msg, component: target }); }; } });
Defaults to:
true
Available since: 6.0.0
True if the Ext.fx.Anim Class is available.
Defaults to:
true
true to automatically uncache orphaned Ext.Elements periodically. If set to
false, the application will be required to clean up orphaned Ext.Elements and
its listeners so as not to cause memory leaks.
Defaults to:
false
true to automatically uncache orphaned Ext.Elements periodically. If set to
false, the application will be required to clean up orphaned Ext.Elements and
its listeners so as not to cause memory leaks.
Defaults to:
true
True to automatically purge event listeners during garbageCollection.
Defaults to:
true
An array containing extra enumerables for old browsers
Defaults to:
enumerables
The current version of Firefox (0 if the browser is not Firefox).
This indicates the start timestamp of the current cycle. It is only reliable during dom-event-initiated cycles and Ext.draw.Animator initiated cycles.
Defaults to:
Ext.now()
Defaults to:
{}
A reusable identity function that simply returns its first argument.
Defaults to:
identityFn
Defaults to:
'ext-'
Defaults to:
0
The current version of IE (0 if the browser is not IE). This does not account for the documentMode of the current page, which is factored into isIE8, and isIE9. Thus this is not always true:
Ext.isIE8 == (Ext.ieVersion == 8)
True if the detected browser is Chrome.
true when the document body is ready for use.
True if the detected browser is Edge.
True when
isDomReady is true and the Framework is ready for use.
True if the detected browser is Safari.
True if the page is running over SSL
Defaults to:
/^https/i.test(window.location.protocol)
true if browser is using strict mode.
Defaults to:
document.compatMode === "CSS1Compat"
True if the detected browser uses WebKit.
True if the detected platform is Windows.
Defaults to:
null
Defaults to:
{ //<feature logger> log: function(message, priority) { if (message && global.console) { if (!priority || !(priority in global.console)) { priority = 'log'; } message = '[' + priority.toUpperCase() + '] ' + message; global.console[priority](message); } }, verbose: function(message) { this.log(message, 'verbose'); }, info: function(message) { this.log(message, 'info'); }, warn: function(message) { this.log(message, 'warn'); }, error: function(message) { throw new Error(message); }, deprecate: function(message) { this.log(message, 'warn'); } } || { //</feature> verbose: emptyFn, log: emptyFn, info: emptyFn, warn: emptyFn, error: function(message) { throw new Error(message); }, deprecate: emptyFn }
Defaults to:
Ext.Object.mergeIf
The name of the property in the global namespace (The
window in browser environments) which refers to the current instance of Ext.
This is usually
"Ext", but if a sandboxed build of ExtJS is being used, this will be an alternative name.
If code is being generated for use by
eval or to create a
new Function, and the global instance
of Ext must be referenced, this is the name that should be built into the code.
Defaults to:
'Ext'
The current version of Opera (0 if the browser is not Opera).
This object contains properties that describe the current device or platform. These
values can be used in
platformConfig as well as
responsiveConfig statements.
This object can be modified to include tags that are useful for the application. To add custom properties, it is advisable to use a sub-object. For example:
Ext.platformTags.app = { mobile: true };
phone : Boolean
tablet : Boolean
desktop : Boolean
touch : Boolean
Indicates touch inputs are available.
safari : Boolean
chrome : Boolean
windows : Boolean
firefox : Boolean
ios : Boolean
True for iPad, iPhone and iPod.
android : Boolean
blackberry : Boolean
tizen : Boolean
A reusable empty function for use as
privates members.
Ext.define('MyClass', { nothing: Ext.emptyFn, privates: { privateNothing: Ext.privateFn } });
Defaults to:
privateFn
The top level inheritedState to which all other inheritedStates are chained. If
there is a
Viewport instance, this object becomes the Viewport's inheritedState.
See also Ext.Component#getInherited.
Defaults to:
{}
Available since: 5.0.0
The current version of Safari (0 if the browser is not Safari).
Set this to true before onReady to prevent any styling from being added to
the body element. By default a few styles such as font-family, and color
are added to the body element via a "x-body" class. When this is set to
true the "x-body" class is not added to the body element, but is added
to the elements of root-level containers instead.
Defaults to:
(function() { var cache = {}; return function(origin, delimiter) { if (!origin) { return []; } else if (!delimiter) { return [ origin ]; } var replaceRe = cache[delimiter] || (cache[delimiter] = new RegExp('\\\\' + delimiter, 'g')), result = [], parts, part; parts = origin.split(delimiter); while ((part = parts.shift()) !== undefined) { // If any of the parts ends with the delimiter that means // the delimiter was escaped and the split was invalid. Roll back. while (part.charAt(part.length - 1) === '\\' && parts.length > 0) { part = part + delimiter + parts.shift(); } // Now that we have split the parts, unescape the delimiter char part = part.replace(replaceRe, delimiter); result.push(part); } return result; }; })()
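The default shown above is an escape-aware splitter: a delimiter preceded by a backslash is treated as a literal character rather than a separator. A standalone copy of the same logic, for illustration:

```javascript
// Escape-aware split: mirrors the default implementation quoted above.
// A backslash-escaped delimiter is kept as a literal character.
var splitAndUnescape = (function () {
    var cache = {};
    return function (origin, delimiter) {
        if (!origin) {
            return [];
        } else if (!delimiter) {
            return [origin];
        }
        var replaceRe = cache[delimiter] ||
                (cache[delimiter] = new RegExp('\\\\' + delimiter, 'g')),
            result = [],
            parts, part;
        parts = origin.split(delimiter);
        while ((part = parts.shift()) !== undefined) {
            // A part ending with '\' means the split happened inside an
            // escaped delimiter. Roll the pieces back together.
            while (part.charAt(part.length - 1) === '\\' && parts.length > 0) {
                part = part + delimiter + parts.shift();
            }
            // Now unescape the delimiter character.
            part = part.replace(replaceRe, delimiter);
            result.push(part);
        }
        return result;
    };
})();

console.log(splitAndUnescape('a,b\\,c', ',')); // [ 'a', 'b,c' ]
```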
URL to a blank file used by Ext when in secure mode for iframe src and onReady src
to prevent the IE insecure content warning (
'about:blank', except for IE
in secure mode, which is
'javascript:""').
Defaults to:
Ext.isSecure && Ext.isIE ? 'javascript:\'\'' : 'about:blank'
Indicates whether to use native browser parsing for JSON methods. This option is ignored if the browser does not support native JSON methods.
Note: Native JSON methods will not work with objects that have functions. Also, property names must be quoted, otherwise the data will not parse.
Defaults to:
false
Set to
true to use a shim on all floating Components
and Ext.LoadMask
Defaults to:
false
Regular expression used for validating identifiers.
Defaults to:
/^[a-z_][a-z0-9\-_]*$/i
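For example, the pattern accepts names starting with a letter or underscore and rejects names starting with a digit:

```javascript
// The documented identifier pattern: a leading letter or underscore,
// then any mix of letters, digits, hyphens and underscores.
var validIdRe = /^[a-z_][a-z0-9\-_]*$/i;

validIdRe.test('my-panel_1'); // true
validIdRe.test('_private');   // true
validIdRe.test('1st-item');   // false (starts with a digit)
```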
Object containing version information for all packages utilized by your application.
For a public getter, please see
Ext.getVersion().
Defaults to:
{}
The current version of WebKit (0 if the browser does not use WebKit).
Defaults to:
{}
Applies event listeners to elements by selectors when the document is ready.
The event name is specified with an
@ suffix.
Ext.addBehaviors({ // add a listener for click on all anchors in element with id foo '#foo a@click': function(e, t){ // do something }, // add the same listener to multiple selectors (separated by comma BEFORE the @) '#foo a, #bar span.some-class@mouseover': function(){ // do something } });
obj : Object
The list of behaviors to apply
This function registers top-level (root) namespaces. This is needed for "sandbox" builds.
Ext.addRootNamespaces({ MyApp: MyApp, Common: Common });
In the above example,
MyApp and
Common are top-level namespaces that happen
to also be included in the sandbox closure. Something like this:
(function(Ext) { Ext.sandboxName = 'Ext6'; Ext.isSandboxed = true; Ext.buildSettings = { baseCSSPrefix: "x6-", scopeResetCSS: true }; var MyApp = MyApp || {}; Ext.addRootNamespaces({ MyApp: MyApp }); ... normal app.js goes here ... })(this.Ext6 || (this.Ext6 = {}));
The sandbox wrapper around the normally built
app.js content has to take care
of introducing top-level namespaces as well as call this method.
Available since: 6.0.0
namespaces : Object
Same as Ext.ComponentQuery#query.
Loads Ext.app.Application class and starts it up with given configuration after the page is ready.
See
Ext.app.Application for details.
config : Object/String
Application config object or name of a class derived from Ext.app.Application.
Copies all the properties of
config to the specified
object. There are two levels
of defaulting supported:
Ext.apply(obj, { a: 1 }, { a: 2 }); //obj.a === 1 Ext.apply(obj, { }, { a: 2 }); //obj.a === 2
Note that if recursive merging and cloning without referencing the original objects or arrays is needed, use Ext.Object#merge instead.
object : Object
The receiver of the properties.
config : Object
The primary source of the properties.
defaults : Object (optional)
An object that will also be applied for default values.
returns
object.
Copies all the properties of config to object if they don't already exist.
object : Object
The receiver of the properties
config : Object
The source of the properties
returns obj
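The two-level defaulting of Ext.apply and the fill-only behavior of Ext.applyIf can be sketched in plain JavaScript (a simplified illustration, not the framework implementation):

```javascript
// apply: copy all of config onto object; defaults are applied first,
// so config wins when both supply the same key.
function apply(object, config, defaults) {
    if (defaults) {
        apply(object, defaults);
    }
    if (object && config && typeof config === 'object') {
        for (var key in config) {
            object[key] = config[key];
        }
    }
    return object;
}

// applyIf: only copy keys the receiver does not already define.
function applyIf(object, config) {
    for (var key in config) {
        if (object[key] === undefined) {
            object[key] = config[key];
        }
    }
    return object;
}

var obj = apply({}, { a: 1 }, { a: 2, b: 3 }); // { a: 1, b: 3 }
applyIf(obj, { a: 99, c: 4 });                 // a kept, c added
```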
Schedules the specified callback function to be executed on the next turn of the
event loop. Where available, this method uses the browser's
setImmediate API. If
not available, this method substitutes
setTimeout(0). Though not a perfect
replacement for
setImmediate it is sufficient for many use cases.
For more details see MDN.
fn : Function
Callback function.
scope : Object (optional)
The scope for the callback (
this pointer).
parameters : Mixed[] (optional)
Additional parameters to pass to
fn.
A cancelation id for
Ext#asapCancel.
Cancels a previously scheduled call to
Ext#asap.
var asapId = Ext.asap(me.method, me); ... if (nevermind) { Ext.asapCancel(asapId); }
Utility wrapper that suspends layouts of all components for the duration of a given function.
fn : Function
The function to execute.
scope : Object (optional)
The scope (
this reference) in which the specified function
is executed.
Create a new function from the provided
fn, change
this to the provided scope,
optionally overrides arguments for the call. Defaults to the arguments passed by
the caller.
Ext.bind is alias for Ext.Function.bind
NOTE: This method is deprecated. Use the standard
bind method of JavaScript
Function instead:
function foo () { ... } var fn = foo.bind(this);
This method is unavailable natively on IE8 and IE/Quirks but Ext JS provides a
"polyfill" to emulate the important features of the standard
bind method. In
particular, the polyfill only provides binding of "this" and optional arguments.
fn : Function
The function to delegate.
scope : Object (optional)
The scope (
this reference) in which the function is executed.
If omitted, defaults to the default global environment object (usually the browser window).
args : Array (optional)
Overrides arguments for the call. (Defaults to the arguments passed by the caller)
appendArgs : Boolean/Number (optional)
If true, args are appended to the call args instead of overriding them; if a number, the args are inserted at the specified position.
The new function.
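The appendArgs behavior that distinguishes Ext.bind from the standard bind can be sketched like this (a simplified illustration, not the framework code):

```javascript
// appendArgs === true: bound args go after the call-time args.
// appendArgs is a number: bound args are spliced in at that index.
// otherwise: bound args replace the call-time args entirely.
function bind(fn, scope, args, appendArgs) {
    if (!args) {
        return fn.bind(scope); // plain scope binding
    }
    return function () {
        var callArgs = Array.prototype.slice.call(arguments);
        if (appendArgs === true) {
            callArgs = callArgs.concat(args);
        } else if (typeof appendArgs === 'number') {
            callArgs.splice.apply(callArgs, [appendArgs, 0].concat(args));
        } else {
            callArgs = args;
        }
        return fn.apply(scope, callArgs);
    };
}

var joined = bind(function (a, b) { return [a, b].join('-'); },
                  null, ['x'], true)('call'); // 'call-x'
```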
Execute a callback function in a particular scope. If
callback argument is a
function reference, that is called. If it is a string, the string is assumed to
be the name of a method on the given
scope. If no function is passed the call
is ignored.
For example, these calls are equivalent:
var myFunc = this.myFunc; Ext.callback('myFunc', this, [arg1, arg2]); Ext.callback(myFunc, this, [arg1, arg2]); Ext.isFunction(myFunc) && this.myFunc(arg1, arg2);
callback : Function/String
The callback function to execute or the name of
the callback method on the provided
scope.
scope : Object (optional)
args : Array (optional)
The arguments to pass to the function.
delay : Number (optional)
Pass a number to delay the call by a number of milliseconds.
caller : Object (optional)
The object calling the callback. This is used to resolve
named methods when no explicit
scope is provided.
defaultScope : Object (optional)
The default scope to return if none is found.
Defaults to: caller
name : Object
This method checks the registered package versions against the provided version
specs. A
spec is either a string or an object indicating a boolean operator.
This method accepts either form or an array of these as the first argument. The
second argument applies only when the first is an array and indicates whether
all
specs must match or just one.
The string form of a
spec is used to indicate a version or range of versions
for a particular package. This form of
spec consists of three (3) parts:
The package name followed by "@" (optional; when omitted, the spec applies to the framework itself).
The minimum version.
The maximum version.
At least one version number must be provided. If both minimum and maximum are provided, these must be separated by a "-".
Some examples of package version specifications:
4.2.2 (exactly version 4.2.2 of the framework) 4.2.2+ (version 4.2.2 or higher of the framework) 4.2.2- (version 4.2.2 or lower of the framework) 4.2.1 - 4.2.3 (versions from 4.2.1 up to 4.2.3 of the framework) - 4.2.2 (any version up to version 4.2.2 of the framework) foo@1.0 (exactly version 1.0 of package "foo") foo@1.0-1.3 (versions 1.0 up to 1.3 of package "foo")
NOTE: This syntax is the same as that used in Sencha Cmd's package requirements declarations.
Instead of a string, an object can be used to describe a boolean operation to
perform on one or more
specs. The operator is either
and or
or
and can contain an optional
not.
For example:
{ not: true, // negates boolean result and: [ '4.2.2', 'foo@1.0.1 - 2.0.1' ] }
Each element of the array can in turn be a string or object spec. In other words, the value is passed to this method (recursively) as the first argument so these two calls are equivalent:
Ext.checkVersion({ not: true, // negates boolean result and: [ '4.2.2', 'foo@1.0.1 - 2.0.1' ] }); !Ext.checkVersion([ '4.2.2', 'foo@1.0.1 - 2.0.1' ], true);
// A specific framework version Ext.checkVersion('4.2.2'); // A range of framework versions: Ext.checkVersion('4.2.1-4.2.3'); // A specific version of a package: Ext.checkVersion('foo@1.0.1'); // A single spec that requires both a framework version and package // version range to match: Ext.checkVersion({ and: [ '4.2.2', 'foo@1.0.1-1.0.2' ] }); // These checks can be nested: Ext.checkVersion({ and: [ '4.2.2', // exactly version 4.2.2 of the framework *AND* { // either (or both) of these package specs: or: [ 'foo@1.0.1-1.0.2', 'bar@3.0+' ] } ] });
Version comparisons are assumed to be "prefix" based. That is to say,
"foo@1.2"
matches any version of "foo" that has a major version 1 and a minor version of 2.
This also applies to ranges. For example
"foo@1.2-2.2" matches all versions
of "foo" from 1.2 up to 2.2 regardless of the specific patch and build.
This method's primary use is in support of conditional overrides on an
Ext.define declaration.
specs : String/Array/Object
A version specification string, an object
containing
or or
and with a value that is equivalent to
specs or an array
of either of these.
matchAll : Boolean (optional)
Pass
true to require all specs to match.
Defaults to: false
True if
specs matches the registered package versions.
Old alias to Ext.Array#clean. Filters through an array and removes empty items as defined in Ext.isEmpty.
See Ext.Array#filter
array : Array
results
Deprecated since version 4.0.0
Clone simple variables including array, {}-like objects, DOM nodes and Date without keeping the old reference. A reference for the object itself is returned if it's not a direct descendant of Object. For model cloning, see Model.copy.
item : Object
The variable to clone
cloneDom : Boolean (optional)
true to clone DOM nodes.
Defaults to: true
clone
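A simplified sketch of the cloning rules described above (recursive for arrays and {}-like objects, value-copy for Date, a plain reference for everything else; DOM node handling omitted):

```javascript
function clone(item) {
    if (item === null || item === undefined) {
        return item;
    }
    if (item instanceof Date) {
        return new Date(item.getTime()); // copy the timestamp
    }
    if (Array.isArray(item)) {
        return item.map(clone); // recurse into array elements
    }
    if (Object.prototype.toString.call(item) === '[object Object]') {
        var out = {}, key;
        for (key in item) {
            out[key] = clone(item[key]); // recurse into plain objects
        }
        return out;
    }
    // Functions and other instances: return the reference itself.
    return item;
}

var src = { list: [1, 2], when: new Date(0) };
var copy = clone(src);
copy.list.push(3); // src.list is untouched
```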
Coerces the first value if possible so that it is comparable to the second value.
Coercion only works between the basic atomic data types String, Boolean, Number, Date, null and undefined.
Numbers and numeric strings are coerced to Dates using the value as the millisecond era value.
Strings are coerced to Dates by parsing using the defaultFormat.
For example
Ext.coerce('false', true);
returns the boolean value
false because the second parameter is of type
Boolean.
from : Mixed
The value to coerce
to : Mixed
The value it must be compared against
The coerced value.
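A rough sketch of the coercion rule (convert from to the type of to for the atomic types; the Date parsing rules described above are omitted):

```javascript
function coerce(from, to) {
    var toType = typeof to;
    if (typeof from === toType) {
        return from; // already comparable
    }
    if (toType === 'number') {
        return Number(from);
    }
    if (toType === 'boolean') {
        // String forms 'true'/'false' map to the matching boolean.
        return from === 'true' ? true
             : from === 'false' ? false
             : Boolean(from);
    }
    if (toType === 'string') {
        return String(from);
    }
    return from;
}

coerce('false', true); // false, as in the example above
coerce('42', 0);       // 42 (a Number)
```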
This method converts an object containing config objects keyed by
itemId into
an array of config objects.
Available since: 6.5.0
items : Object
An object containing config objects keyed by
itemId.
defaultProperty : String (optional)
The property to set for string items.
Defaults to: "xtype"
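The transformation can be sketched as follows. Assumed behavior (not confirmed by the text above): each config receives its key as itemId, and bare string items are expanded using defaultProperty:

```javascript
function convertKeyedItems(items, defaultProperty) {
    defaultProperty = defaultProperty || 'xtype';
    var result = [], itemId, item, expanded;
    for (itemId in items) {
        item = items[itemId];
        if (typeof item === 'string') {
            // A bare string becomes { [defaultProperty]: item }.
            expanded = {};
            expanded[defaultProperty] = item;
            item = expanded;
        } else {
            item = Object.assign({}, item); // don't mutate the input
        }
        item.itemId = itemId;
        result.push(item);
    }
    return result;
}

convertKeyedItems({ save: { text: 'Save' }, sep: 'tbseparator' });
// → [{ text: 'Save', itemId: 'save' },
//    { xtype: 'tbseparator', itemId: 'sep' }]
```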
Copies a set of named properties from the source object to the destination object.
Example:
var foo = { a: 1, b: 2, c: 3 }; var bar = Ext.copy({}, foo, 'a,c'); // bar = { a: 1, c: 3 };
Copies a set of named properties from the source object to the destination object if the destination object does not already have them.
Example:
var foo = { a: 1, b: 2, c: 3 }; var bar = Ext.copyIf({ a:42 }, foo, 'a,c'); // bar = { a: 42, c: 3 };
destination : Object
The destination object.
source : Object
The source object.
names : String/String[]
Either an Array of property names, or a single string with a list of property names separated by ",", ";" or spaces.
The
dest object.
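The name-list handling can be sketched like this (a simplified stand-in for Ext.copy, accepting an array or a ","/";"/space-delimited string of names):

```javascript
function copyProps(dest, source, names) {
    if (typeof names === 'string') {
        names = names.split(/[,;\s]+/); // ',', ';' or whitespace
    }
    names.forEach(function (name) {
        if (name in source) {
            dest[name] = source[name];
        }
    });
    return dest;
}

var foo = { a: 1, b: 2, c: 3 };
copyProps({}, foo, 'a,c'); // { a: 1, c: 3 }
```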
Copies a set of named properties from the source object to the destination object.
Example:
var foo = { a: 1, b: 2, c: 3 }; var bar = Ext.copyTo({}, foo, 'a,c'); // bar = { a: 1, c: 3 };
Deprecated since version 6.0.1
Copies a set of named properties from the source object to the destination object if the destination object does not already have them.
Example:
var foo = { a: 1, b: 2, c: 3 }; var bar = Ext.copyToIf({ a:42 }, foo, 'a,c'); // bar = { a: 42, c: 3 };
destination : Object
The destination object.
source : Object
The source object.
names : String/String[]
Either an Array of property names, or a single string with a list of property names separated by ",", ";" or spaces.
The
dest object.
Deprecated since version 6.0.1
Instantiate a class by either full name, alias or alternate name.
If Ext.Loader is enabled and the class has not been defined yet, it will attempt to load the class via synchronous loading.
For example, all these three lines return the same result:
// xtype var window = Ext.create({ xtype: 'window', width: 600, height: 800, ... }); // alias var window = Ext.create('widget.window', { width: 600, height: 800, ... }); // alternate name var window = Ext.create('Ext.Window', { width: 600, height: 800, ... }); // full class name var window = Ext.create('Ext.window.Window', { width: 600, height: 800, ... }); // single object with xclass property: var window = Ext.create({ xclass: 'Ext.window.Window', // any valid value for 'name' (above) width: 600, height: 800, ... });
name : String (optional)
The class name or alias. Can be specified as
xclass
property if only one object parameter is specified.
args : Object... (optional)
Additional arguments after the name will be passed to the class' constructor.
instance
Old name for Ext#widget.
Deprecated since version 5.0
Shorthand for Ext.JSON#decode. Decodes (parses) a JSON string to an object. If the JSON is invalid, this function throws a SyntaxError unless the safe option is set.
json : String
The JSON string.
safe : Boolean (optional)
true to return null, otherwise throw an exception
if the JSON is invalid.
Defaults to: false
The resulting object.
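The safe flag semantics can be sketched with the native parser (a simplified stand-in; Ext.JSON also supports non-native parsing):

```javascript
function decode(json, safe) {
    try {
        return JSON.parse(json);
    } catch (e) {
        if (safe) {
            return null; // swallow the SyntaxError when safe is true
        }
        throw e;
    }
}

decode('{"a": 1}').a;     // 1
decode('not json', true); // null
```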
Calls this function after the number of milliseconds specified, optionally in a specific scope. Example usage:
var sayHi = function(name){ alert('Hi, ' + name); } // executes immediately: sayHi('Fred'); // executes after 2 seconds: Ext.Function.defer(sayHi, 2000, this, ['Fred']); // this syntax is sometimes useful for deferring // execution of an anonymous function: Ext.Function.defer(function(){ alert('Anonymous'); }, 100);
Ext.defer is alias for Ext.Function.defer
fn : Function
The function to defer.
millis : Number
The number of milliseconds for the
setTimeout call
(if less than or equal to 0 the function is executed immediately).
Returns a timeout id that can be used with
clearTimeout.
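The non-positive-delay short-circuit can be sketched like this (simplified; the real method also supports additional argument handling):

```javascript
function defer(fn, millis, scope, args) {
    var bound = function () {
        return fn.apply(scope, args || []);
    };
    if (millis > 0) {
        return setTimeout(bound, millis); // id usable with clearTimeout
    }
    bound(); // delay <= 0: run immediately
    return 0;
}

var ran = false;
defer(function () { ran = true; }, 0); // ran is now true
```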
Defines a class or override. A basic class is defined like this:
Ext.define('My.awesome.Class', { someProperty: 'something', someMethod: function(s) { alert(s + this.someProperty); } ... }); var obj = new My.awesome.Class(); obj.someMethod('Say '); // alerts 'Say something'
To create an anonymous class, pass
null for the
className:
Ext.define(null, { constructor: function () { // ... } });
In some cases, it is helpful to create a nested scope to contain some private properties. The best way to do this is to pass a function instead of an object as the second parameter. This function will be called to produce the class body:
Ext.define('MyApp.foo.Bar', function () { var id = 0; return { nextId: function () { return ++id; } }; });
Note that when using override, the above syntax will not override successfully, because the passed function would need to be executed first to determine whether or not the result is an override or defining a new object. As such, an alternative syntax that immediately invokes the function can be used:
Ext.define('MyApp.override.BaseOverride', function () { var counter = 0; return { override: 'Ext.Component', logId: function () { console.log(++counter, this.id); } }; }());
When using this form of
Ext.define, the function is passed a reference to its
class. This can be used as an efficient way to access any static properties you
may have:
Ext.define('MyApp.foo.Bar', function (Bar) { return { statics: { staticMethod: function () { // ... } }, method: function () { return Bar.staticMethod(); } }; });
To define an override, include the
override property. The content of an
override is aggregated with the specified class in order to extend or modify
that class. This can be as simple as setting default property values or it can
extend and/or replace methods. This can also extend the statics of the class.
One use for an override is to break a large class into manageable pieces.
// File: /src/app/Panel.js Ext.define('My.app.Panel', { extend: 'Ext.panel.Panel', requires: [ 'My.app.PanelPart2', 'My.app.PanelPart3' ], constructor: function (config) { this.callParent(arguments); // calls Ext.panel.Panel's constructor //... }, statics: { method: function () { return 'abc'; } } }); // File: /src/app/PanelPart2.js Ext.define('My.app.PanelPart2', { override: 'My.app.Panel', constructor: function (config) { this.callParent(arguments); // calls My.app.Panel's constructor //... } });
Another use of overrides is to provide optional parts of classes that can be independently required. In this case, the class may even be unaware of the override altogether.
Ext.define('My.ux.CoolTip', { override: 'Ext.tip.ToolTip', constructor: function (config) { this.callParent(arguments); // calls Ext.tip.ToolTip's constructor //... } });
The above override can now be required as normal.
Ext.define('My.app.App', { requires: [ 'My.ux.CoolTip' ] });
Overrides can also contain statics, inheritableStatics, or privates:
Ext.define('My.app.BarMod', { override: 'Ext.foo.Bar', statics: { method: function (x) { return this.callParent([x * 2]); // call Ext.foo.Bar.method } } });
Starting in version 4.2.2, overrides can declare their
compatibility based
on the framework version or on versions of other packages. For details on the
syntax and options for these checks, see
Ext.checkVersion.
The simplest use case is to test framework version for compatibility:
Ext.define('App.overrides.grid.Panel', { override: 'Ext.grid.Panel', compatibility: '4.2.2', // only if framework version is 4.2.2 //... });
An array is treated as an OR, so if any specs match, the override is compatible.
Ext.define('App.overrides.some.Thing', { override: 'Foo.some.Thing', compatibility: [ '4.2.2', 'foo@1.0.1-1.0.2' ], //... });
To require that all specifications match, an object can be provided:
Ext.define('App.overrides.some.Thing', { override: 'Foo.some.Thing', compatibility: { and: [ '4.2.2', 'foo@1.0.1-1.0.2' ] }, //... });
Because the object form is just a recursive check, these can be nested:
Ext.define('App.overrides.some.Thing', { override: 'Foo.some.Thing', compatibility: { and: [ '4.2.2', // exactly version 4.2.2 of the framework *AND* { // either (or both) of these package specs: or: [ 'foo@1.0.1-1.0.2', 'bar@3.0+' ] } ] }, //... });
IMPORTANT: An override is only included in a build if the class it overrides is
required. Otherwise, the override, like the target class, is not included. In
Sencha Cmd v4, the
compatibility declaration can likewise be used to remove
incompatible overrides from a build.
className : String
The class name to create in string dot-namespaced format, for example: 'My.very.awesome.Class', 'FeedViewer.plugin.CoolPager' It is highly recommended to follow this simple convention:
nullto create an anonymous class.
data : Object
The key - value pairs of properties to apply to this class. Property names can be of any valid strings, except those in the reserved listed below:
self
createdFn : Function (optional)
Callback to execute after the class is created, the execution scope of which
(
this) will be the newly created class itself.
Create a closure for deprecated code.
// This means Ext.oldMethod is only supported in 4.0.0beta and older. // If Ext.getVersion('extjs') returns a version that is later than '4.0.0beta', for example '4.0.0RC', // the closure will not be invoked Ext.deprecate('extjs', '4.0.0beta', function() { Ext.oldMethod = Ext.newMethod; ... });
packageName : String
The package name
since : String
The last version before it's deprecated
closure : Function
The callback function to be executed when the specified version is less than the current version
scope : Object
The execution scope (
this) of the closure
Create a function that will throw an error if called (in debug mode) with a message that indicates the method has been removed.
suggestion : String
Optional text to include in the message (a workaround perhaps).
The generated function.
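A sketch of such a generator (a hypothetical stand-alone helper illustrating the described behavior):

```javascript
function makeRemovedMethod(suggestion) {
    return function () {
        var msg = 'This method has been removed.';
        if (suggestion) {
            msg += ' ' + suggestion; // optional workaround text
        }
        throw new Error(msg);
    };
}

var oldApi = makeRemovedMethod('Use newApi() instead.');
// Calling oldApi() throws:
//   Error: This method has been removed. Use newApi() instead.
```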
Destroys all of the given objects. If arrays are passed, the elements of these are destroyed recursively.
What it means to "destroy" an object depends on the type of object.
Array: Each element of the array is destroyed recursively.
Object: Any object with a
destroy method will have that method called.
args : Mixed...
Any number of objects or arrays.
Destroys the specified named members of the given object using
Ext.destroy. These
properties will be set to
null.
object : Object
The object whose properties you wish to destroy.
args : String...
One or more names of the properties to destroy and remove from the object.
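The two behaviors can be sketched together in plain JavaScript (a simplified illustration of the semantics described above):

```javascript
// destroy: arrays recurse; objects with destroy() get it called.
function destroy() {
    var i, arg;
    for (i = 0; i < arguments.length; i++) {
        arg = arguments[i];
        if (Array.isArray(arg)) {
            destroy.apply(null, arg);
        } else if (arg && typeof arg.destroy === 'function') {
            arg.destroy();
        }
    }
}

// destroyMembers: destroy the named properties, then null them out.
function destroyMembers(object) {
    var i, name;
    for (i = 1; i < arguments.length; i++) {
        name = arguments[i];
        destroy(object[name]);
        object[name] = null;
    }
}

var calls = [];
var owner = {
    tip: { destroy: function () { calls.push('tip'); } },
    btn: { destroy: function () { calls.push('btn'); } }
};
destroyMembers(owner, 'tip', 'btn');
// calls is ['tip', 'btn']; owner.tip and owner.btn are now null
```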
Iterates an array or an iterable value and invoke the given callback function for each item.
var countries = ['Vietnam', 'Singapore', 'United States', 'Russia']; Ext.Array.each(countries, function(name, index, countriesItSelf) { console.log(name); }); var sum = function() { var sum = 0; Ext.Array.each(arguments, function(value) { sum += value; }); return sum; }; sum(1, 2, 3); // returns 6
The iteration can be stopped by returning
false from the callback function.
Returning
undefined (i.e.
return;) will only exit the callback function and
proceed with the next iteration of the loop.
Ext.Array.each(countries, function(name, index, countriesItSelf) { if (name === 'Singapore') { return false; // break here } });
Ext.each is alias for Ext.Array.each
iterable : Array/NodeList/Object
The value to be iterated. If this argument is not iterable, the callback function is called once.
See description for the
fn parameter.
Shorthand for Ext.JSON#encode. Encodes an Object, Array or other value.
If the environment's native JSON encoding is not being used (Ext#USE_NATIVE_JSON is not set,
or the environment does not support it), then ExtJS's encoding will be used. This allows the developer
to add a
toJSON method to their classes which need serializing to return a valid JSON representation
of the object.
o : Object
The variable to encode.
The JSON string.
Explicitly exclude files from being loaded. Useful when used in conjunction with a
broad include expression. Can be chained with more
require and
exclude methods,
for example:
Ext.exclude('Ext.data.*').require('*'); Ext.exclude('widget.button*').require('widget.*');
excludes : String/String[]
Contains
exclude,
require and
syncRequire methods for chaining.
This method is deprecated. Use Ext.define instead.
superclass : Function
overrides : Object
The subclass constructor from the overrides parameter, or a generated one if not provided.
Deprecated since version 4.0.0
A global factory method to instantiate a class from a config object. For example, these two calls are equivalent:
Ext.factory({ text: 'My Button' }, 'Ext.Button'); Ext.create('Ext.Button', { text: 'My Button' });
If an existing instance is also specified, it will be updated with the supplied config object. This is useful if you need to either create or update an object, depending on if an instance already exists. For example:
var button; button = Ext.factory({ text: 'New Button' }, 'Ext.Button', button); // Button created button = Ext.factory({ text: 'Updated Button' }, 'Ext.Button', button); // Button updated
config : Object
The config object to instantiate or update an instance with.
classReference : String (optional)
The class to instantiate from (if there is a default).
instance : Object (optional)
The instance to update.
aliasNamespace : Object (optional)
Deprecated since version 6.5.0
Shorthand for Ext.GlobalEvents#fireEvent.
Fires the specified event with the passed parameters (minus the event name, plus the
options object passed
to addListener).
An event may be set to bubble up an Observable parent hierarchy (See Ext.Component#getBubbleTarget) by calling enableBubble.
Available since: 6.2.0
eventName : String
The name of the event to fire.
args : Object...
Variable number of parameters are passed to handlers.
Returns false if any of the handlers return false; otherwise it returns true.
Returns the first match to the given component query. See Ext.ComponentQuery#query.
selector : String
The selector string to filter returned Component.
root : Ext.container.Container (optional)
The Container within which to perform the query. If omitted, all Components within the document are included in the search.
This parameter may also be an array of Components to filter according to the selector.
The first matched Component or
null.
Old alias to Ext.Array#flatten. Recursively flattens into a 1-d Array. Injects Arrays inline.
array : Array
The array to flatten
The 1-d array.
Deprecated since version 4.0.0
Gets the globally shared flyweight Element, with the passed node as the active element. Do not store a reference to this element - the dom node can be overwritten by other code. Ext#fly is alias for Ext.dom.Element#fly.
Use this to make one-time references to DOM elements which are not going to be accessed again either by application code, or by Ext's classes. If accessing an element which will be processed regularly, then Ext.get will be more appropriate to take advantage of the caching provided by the Ext.dom.Element class.
If this method is called with an id or element that has already been cached by a previous call to Ext.get() it will return the cached Element instead of the flyweight instance.
named : String (optional)
Allows for creation of named reusable flyweights to prevent conflicts (e.g. internally Ext uses "_global").
The shared Element object (or
null if no matching
element was found).
element : Object
Returns the current document body as an Ext.dom.Element.
The document body.
Get the class of the provided object; returns null if it's not an instance of any class created with Ext.define. This is usually invoked by the shorthand Ext#getClass.
var component = new Ext.Component(); Ext.getClass(component); // returns Ext.Component
object : Object
class
Get the name of the class by its reference or its instance. This is usually invoked by the shorthand Ext#getClassName.
Ext.ClassManager.getName(Ext.Action); // returns "Ext.Action"
object : Ext.Class/Object
className
This is shorthand reference to Ext.ComponentManager#get. Looks up an existing Ext.Component by id
The Component,
undefined if not found, or
null if a
Class was found.
Get the compatibility level (a version number) for the given package name. If
none has been registered with
Ext.setCompatVersion then
Ext.getVersion is
used to get the current version.
Available since: 5.0.0
packageName : String
The package name, e.g. 'core', 'touch', 'ext'.
Returns an HTML div element into which removed components are placed so that their DOM elements are not garbage collected as detached Dom trees.
Returns the current HTML document object as an Ext.dom.Element. Typically used for attaching event listeners to the document. Note: since the document object is not an HTMLElement many of the Ext.dom.Element methods are not applicable and may throw errors if called on the returned Element instance.
The document.
Return the dom node for the passed String (id), dom node, or Ext.Element. Here are some examples:
// gets dom node based on id var elDom = Ext.getDom('elId'); // gets dom node based on the dom node var elDom1 = Ext.getDom(elDom); // If we don't know if we are working with an // Ext.Element or a dom node use Ext.getDom function(el){ var dom = Ext.getDom(el); // do something with the dom node }
Note: the dom node to be found actually needs to exist (be rendered, etc) when this method is called to be successful.
el : String/HTMLElement/Ext.dom.Element
id : Object
target : Object
id : Object
Returns the current document head as an Ext.dom.Element.
The document head.
className : String
Namespace prefix if it's known, otherwise undefined
Returns the size of the browser scrollbars. This can differ depending on operating system settings, such as the theme or font size.
force : Boolean (optional)
true to force a recalculation of the value.
An object containing scrollbar sizes.
Shortcut to Ext.data.StoreManager#lookup. Gets a registered Store by id
name : Object
Generate a unique reference of Ext in the global scope, useful for sandboxing
Get the version number of the supplied package name; will return the version of the framework.
packageName : String (optional)
The package name, e.g., 'core', 'touch', 'ext'.
The version.
Retrieves the viewport height of the window.
Available since: 6.5.0
viewportHeight
Retrieves the viewport width of the window.
Available since: 6.5.0
viewportWidth
Returns the current window object as an Ext.dom.Element. Typically used for attaching event listeners to the window. Note: since the window object is not an HTMLElement many of the Ext.dom.Element methods are not applicable and may throw errors if called on the returned Element instance.
The window.
Convert certain characters (&, <, >, ', and ") from their HTML character equivalents.
value : String
The string to decode.
The decoded text.
Convert certain characters (&, <, >, ', and ") to their HTML character equivalents for literal display in web pages.
value : String
The string to encode.
The encoded text.
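As a rough plain-JavaScript sketch of the substitution this entry describes (not Ext's actual implementation) — note that & must be replaced first so already-encoded entities aren't double-escaped:

```javascript
// Hedged sketch of the documented behavior, not Ext's source.
function htmlEncode(value) {
  return String(value)
    .replace(/&/g, '&amp;')   // must run first
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/'/g, '&#39;')
    .replace(/"/g, '&quot;');
}
```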
Generates unique ids. If the object/element is passed and it already has an id, it is unchanged.
o : Object (optional)
The object to generate an id for.
Generates unique ids. If the element already has an id, it is unchanged
obj : Object/HTMLElement/Ext.dom.Element (optional)
The element to generate an id for
prefix : String (optional)
Id prefix (defaults "ext-gen")
The generated Id.
Calls this function repeatedly at a given interval, optionally in a specific scope.
Ext.defer is alias for Ext.Function.defer
fn : Function
The function to defer.
millis : Number
The number of milliseconds for the setInterval call.
The interval id that can be used with clearInterval.
Returns true if the passed value is a JavaScript Array, false otherwise.
target : Object
The target to test.
Returns true if the passed value is a boolean.
value : Object
The value to test.
Returns true if the passed value is a JavaScript Date object, false otherwise.
object : Object
The object to test.
This method returns true if debug is enabled for the specified class. This is done by checking the Ext.debugConfig.hooks config for the closest match to the given className.
className : String
The name of the class.
true if debug is enabled for the specified class.
Returns true if the passed value is defined.
value : Object
The value to test.
Returns true if the passed value is an HTMLElement.
value : Object
The value to test.
Returns true if the passed value is empty, false otherwise. The value is deemed to be empty if it is either:
null
undefined
a zero-length array
a zero-length string (unless the allowEmptyString parameter is set to true)
value : Object
The value to test.
allowEmptyString : Boolean (optional)
true to allow empty strings.
Defaults to: false
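The emptiness rules above can be sketched in a few lines of plain JavaScript (a hedged sketch of the documented behavior, not Ext's source):

```javascript
// Hedged sketch of the documented emptiness rules, not Ext's source.
function isEmpty(value, allowEmptyString) {
  return value == null ||                                  // null or undefined
         (!allowEmptyString && value === '') ||            // zero-length string
         (value instanceof Array && value.length === 0);   // zero-length array
}
```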
Returns true if the passed value is a JavaScript Function, false otherwise.
value : Object
The value to test.
Returns true if the passed value is iterable, that is, if elements of it are addressable using array notation with numeric indices, false otherwise.
Arrays and function arguments objects are iterable. Also HTML collections such as NodeList and HTMLCollection are iterable.
value : Object
The value to test
Returns 'true' if the passed value is a String that matches the MS Date JSON encoding format.
value : String
The string to test.
Returns true if the passed value is a number. Returns false for non-finite numbers.
value : Object
The value to test.
Validates that a value is numeric.
value : Object
Examples: 1, '1', '2.34'
True if numeric, false otherwise
Returns true if the passed value is a JavaScript Object, false otherwise.
value : Object
The value to test.
Returns true if the passed value is a JavaScript 'primitive', a string, number or boolean.
value : Object
The value to test.
value : Object
Returns true if the passed value is a string.
value : Object
The value to test.
Returns true if the passed value is a TextNode.
value : Object
The value to test.
Iterates either an array or an object. This method delegates to Ext.Array.each if the given value is iterable, and Ext.Object.each otherwise.
object : Object/Array
The object or array to be iterated.
fn : Function
The function to be called for each iteration. See and Ext.Array.each and Ext.Object.each for detailed lists of arguments passed to this function depending on the given object type that is being iterated.
scope : Object (optional)
The scope (this reference) in which the specified function is executed. Defaults to the object being iterated itself.
Logs a message. If a console is present it will be used. On Opera, the method "opera.postError" is called. In other cases, the message is logged to an array "Ext.log.out". An attached debugger can watch this array and view the log. The log buffer is limited to a maximum of "Ext.log.max" entries (defaults to 250).
If additional parameters are passed, they are joined and appended to the message. A technique for tracing entry and exit of a function is this:
function foo () { Ext.log({ indent: 1 }, '>> foo'); // log statements in here or methods called from here will be indented // by one step Ext.log({ outdent: 1 }, '<< foo'); }
This method does nothing in a release build.
options : String/Object (optional)
The message to log or an options object with any of the following properties:
msg: The message to log (required).
level: One of: "error", "warn", "info" or "log" (the default is "log").
dump: An object to dump to the log as part of the message.
stack: True to include a stack trace in the log.
indent: Cause subsequent log statements to be indented one step.
outdent: Cause this and following statements to be one step less indented.
message : String... (optional)
The message to log (required unless specified in options object).
Converts an id ('foo') into an id selector ('#foo'). This method is used internally by the framework whenever an id needs to be converted into a selector and is provided as a hook for those that need to escape IDs selectors since, as of Ext 5.0, the framework no longer escapes IDs by default.
id : String
Old alias to Ext.Array#max. Returns the maximum value in the Array.
array : Array/NodeList
The Array from which to select the maximum value.
comparisonFn : Function (optional)
a function to perform the comparison which determines maximization. If omitted the ">" operator will be used. Note: gt = 1; eq = 0; lt = -1
max : Mixed
Current maximum value.
item : Mixed
The value to compare with the current maximum.
maxValue The maximum value.
Deprecated since version 4.0.0
Old alias to Ext.Array#mean. Calculates the mean of all items in the array.
array : Array
The Array to calculate the mean value of.
The mean.
Deprecated since version 4.0.0
A convenient alias method for Ext.Object#merge. Merges any number of objects recursively without referencing them or their children.
var extjs = { companyName: 'Ext JS', products: ['Ext JS', 'Ext GWT', 'Ext Designer'], isSuperCool: true, office: { size: 2000, location: 'Palo Alto', isFun: true } }; var newStuff = { companyName: 'Sencha Inc.', products: ['Ext JS', 'Ext GWT', 'Ext Designer', 'Sencha Touch', 'Sencha Animator'], office: { size: 40000, location: 'Redwood City' } }; var sencha = Ext.Object.merge(extjs, newStuff); // extjs and sencha then equals to { companyName: 'Sencha Inc.', products: ['Ext JS', 'Ext GWT', 'Ext Designer', 'Sencha Touch', 'Sencha Animator'], isSuperCool: true, office: { size: 40000, location: 'Redwood City', isFun: true } }
destination : Object
The object into which all subsequent objects are merged.
object : Object...
Any number of objects to merge into the destination.
merged The destination object with all passed objects merged in.
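The recursive-merge behavior shown in the example can be sketched in plain JavaScript (a hedged sketch, not Ext's source): plain objects are merged recursively, while all other values — including arrays — simply overwrite the destination key.

```javascript
// Hedged deep-merge sketch of the documented behavior, not Ext's source.
function merge(destination) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      var value = source[key];
      if (value && value.constructor === Object) {
        // recurse into plain objects; reuse the existing sub-object if any
        var base = destination[key] && destination[key].constructor === Object
          ? destination[key] : {};
        destination[key] = merge(base, value);
      } else {
        // arrays, primitives, dates, etc. overwrite
        destination[key] = value;
      }
    }
  }
  return destination;
}
```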
Old alias to Ext.Array#min. Returns the minimum value in the Array.
array : Array/NodeList
The Array from which to select the minimum value.
comparisonFn : Function (optional)
a function to perform the comparison which determines minimization. If omitted the "<" operator will be used. Note: gt = 1; eq = 0; lt = -1
min : Mixed
Current minimum value.
item : Mixed
The value to compare with the current minimum.
minValue The minimum value.
Deprecated since version 4.0.0.
Returns the current timestamp.
Milliseconds since UNIX epoch.
Convenient alias for Ext.namespace.
Deprecated since version 4.0.0
Shorthand for Ext.GlobalEvents#addListener.
capture : Boolean (optional)
When set to true, the listener is fired in the capture phase of the event propagation sequence, instead of the default bubble phase. The capture option is only available on Ext.dom.Element instances (or when attaching a listener to a Ext.dom.Element via a Component using the element option).
priority : Number (optional)
Relative priority of this callback. A larger number will result in the callback being sorted before the others. Priorities 1000 or greater and -1000 or lesser are reserved for internal framework use only.
Defaults to: 0
Overrides members of the specified target with the given values.
If the target is a class declared using Ext.define, the override method of that class is called (see Ext.Base#override) given the overrides.
If the target is a function, it is assumed to be a constructor and the contents of overrides are applied to its prototype using Ext.apply.
If the target is an instance of a class declared using Ext.define, the overrides are applied to only that instance. In this case, methods are specially processed to allow them to use Ext.Base#callParent.
var panel = new Ext.Panel({ ... }); Ext.override(panel, { initComponent: function () { // extra processing... this.callParent(); } });
If the target is none of these, the overrides are applied to the target using Ext.apply.
Please refer to Ext.define and Ext.Base#override for further details.
target : Object
The target to override.
overrides : Object
The properties to add or replace on target.
Create a new function from the provided fn, the arguments of which are pre-set to args.
New arguments passed to the newly created callback when it's invoked are appended after the pre-set ones.
This is especially useful when creating callbacks.
For example:
var originalFunction = function(){ alert(Ext.Array.from(arguments).join(' ')); }; var callback = Ext.Function.pass(originalFunction, ['Hello', 'World']); callback(); // alerts 'Hello World' callback('by Me'); // alerts 'Hello World by Me'
Ext.pass is alias for Ext.Function.pass
fn : Function
The original function.
args : Array
The arguments to pass to new callback.
scope : Object (optional)
The scope (this reference) in which the function is executed.
The new callback function.
Old alias to Ext.Array.pluck. Plucks the value of a property from each item in the Array. Example:
Ext.Array.pluck(Ext.query("p"), "className"); // [el1.className, el2.className, ..., elN.className]
array : Array/NodeList
The Array of items to pluck the value from.
propertyName : String
The property name to pluck from each element.
The value from each item in the Array.
Deprecated since version 4.0.0
Shorthand for Ext.dom.Element.query
Selects child nodes based on the passed CSS selector. Delegates to document.querySelectorAll. More information can be found at
All selectors, attribute filters and pseudos below can be combined infinitely in any order. For example div.foo:nth-child(odd)[@foo=bar].bar:first would be a perfectly valid selector.
The use of @ and quotes are optional. For example, div[@foo='bar'] is also a valid attribute selector.
selector : String
The CSS selector.
asDom : Boolean (optional)
false to return an array of Ext.dom.Element
Defaults to: true
An Array of elements ( HTMLElement or Ext.dom.Element if asDom is false) that match the selector. If there are no matches, an empty Array is returned.
Raise an error that can include additional data and supports automatic console logging if available. You can pass a string error message or an object with the msg attribute which will be used as the error message. The object can contain any other name-value attributes (or objects) to be logged along with the error.
Note that after displaying the error message a JavaScript error will ultimately be thrown so that execution will halt.
Example usage:
Ext.raise('A simple string error message'); // or... Ext.define('Ext.Foo', { doSomething: function(option){ if (someCondition === false) { Ext.raise({ msg: 'You cannot do that!', option: option, // whatever was passed into the method code: 100 // other arbitrary info }); } } });
err : String/Object
The error message string, or an object containing the attribute "msg" that will be used as the error message. Any other data included in the object will also be logged to the browser console, if available.
Creates a new store for the given id and config, then registers it with the Ext.data.StoreManager. Sample usage:
Ext.regStore('AllUsers', { model: 'User' }); // the store can now easily be used throughout the application new Ext.List({ store: 'AllUsers', ... other config });
id : String/Object
The id to set on the new store, or the config object that contains the storeId property.
Removes an HTMLElement from the document. If the HTMLElement was previously cached by a call to Ext.get(), removeNode will call the destroy method of the Ext.dom.Element instance, which removes all DOM event listeners, and deletes the cache reference.
node : HTMLElement
The node to remove.
Resolves a resource URL that may contain a resource pool identifier token at the front. The tokens are formatted as HTML tags "<poolName@packageName>" followed by a normal relative path. This token is only processed if present at the first character of the given string.
These tokens are parsed and the pieces are then passed to the Ext#getResourcePath method.
For example:
[{ xtype: 'image', src: '<shared>images/foo.png' },{ xtype: 'image', src: '<@package>images/foo.png' },{ xtype: 'image', src: '<shared@package>images/foo.png' }]
In the above example, "shared" is the name of a Sencha Cmd resource pool and "package" is the name of a Sencha Cmd package.
Available since: 6.0.1
url : String
The URL that may contain a resource pool token at the front.
Resumes layout activity in the whole framework.
Ext#suspendLayouts is alias of Ext.Component#suspendLayouts.
flush : Boolean
true to perform all the pending layouts. This can also be achieved by calling flushLayouts directly.
Defaults to: false
A reusable function which returns the value of getId() called upon a single passed parameter.
Useful when creating a Ext.util.MixedCollection of objects keyed by an identifier returned from a getId method.
o : Object
A reusable function which returns true.
Shorthand for Ext.dom.Element.select
Selects descendant elements of this element based on the passed CSS selector to enable Ext.dom.Element methods to be applied to many related elements in one statement through the returned Ext.dom.CompositeElementLite object.
selector : String/HTMLElement[]
The CSS selector or an array of elements
composite : Boolean
Return a CompositeElement as opposed to a CompositeElementLite. Defaults to false.
Set the compatibility level (a version number) for the given package name.
Available since: 5.0.0
packageName : String
The package name, e.g. 'core', 'touch', 'ext'.
version : String/Ext.Version
The version, e.g. '4.2'.
target : Object
id : Object
value : Object
Sets the default font-family to use for components that support a
glyph config.
fontFamily : String
The name of the font-family
Set version number for the given package name.
packageName : String
The package name, e.g. 'core', 'touch', 'ext'.
version : String/Ext.Version
The version, e.g. '1.2.3alpha', '2.4.0-dev'.
scroller : Ext.scroll.Scroller
Old alias to Ext.Array#sum. Calculates the sum of all items in the given array.
array : Array
The Array to calculate the sum value of.
The sum.
Deprecated since version 4.0.0
Stops layouts from happening in the whole framework.
It's useful to suspend the layout activity while updating multiple components and containers:
Ext.suspendLayouts(); // batch of updates... Ext.resumeLayouts(true);
Ext#suspendLayouts is alias of Ext.Component#suspendLayouts.
See also Ext#batchLayouts for more abstract way of doing this.
Synchronously.
Returns the current high-resolution timestamp.
Available since: 6.0.1
Milliseconds elapsed since arbitrary epoch.
Converts any iterable (numeric indices and a length property) into a true array.
function test() { var args = Ext.Array.toArray(arguments), fromSecondToLastArgs = Ext.Array.toArray(arguments, 1); alert(args.join(' ')); alert(fromSecondToLastArgs.join(' ')); } test('just', 'testing', 'here'); // alerts 'just testing here'; // alerts 'testing here'; Ext.Array.toArray(document.getElementsByTagName('div')); // will convert the NodeList into an array Ext.Array.toArray('splitted'); // returns ['s', 'p', 'l', 'i', 't', 't', 'e', 'd'] Ext.Array.toArray('splitted', 0, 3); // returns ['s', 'p', 'l']
Ext.toArray is alias for Ext.Array.toArray
iterable : Object
the iterable object to be turned into a true Array.
start : Number (optional)
a zero-based index that specifies the start of extraction.
Defaults to: 0
end : Number (optional)
a 1-based index that specifies the end of extraction.
Defaults to: -1
Returns the type of the given variable in string format. List of possible values are:
undefined: If the given value is undefined
null: If the given value is null
string: If the given value is a string
number: If the given value is a number
boolean: If the given value is a boolean value
date: If the given value is a Date object
function: If the given value is a function reference
object: If the given value is an object
array: If the given value is an array
regexp: If the given value is a regular expression
element: If the given value is a DOM Element
textnode: If the given value is a DOM text node and contains something other than whitespace
whitespace: If the given value is a DOM text node and contains only whitespace
value : Object
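The dispatch above can be sketched in plain JavaScript. This is a simplified, hedged sketch — not Ext's source — and the DOM-related results ('element', 'textnode', 'whitespace') are omitted since they need a browser environment:

```javascript
// Simplified sketch of the documented type dispatch (DOM cases omitted).
function typeOf(value) {
  if (value === undefined) return 'undefined';
  if (value === null) return 'null';
  var t = typeof value;
  if (t === 'string' || t === 'number' || t === 'boolean' || t === 'function') {
    return t;
  }
  if (value instanceof Date) return 'date';
  if (Array.isArray(value)) return 'array';
  if (value instanceof RegExp) return 'regexp';
  return 'object';
}
```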
Shorthand for Ext.GlobalEvents#removeListener. Removes an event handler.
eventName : String
The type of event the handler was associated with.
fn : Function
The handler to remove. This must be a reference to the function passed into the addListener call.
scope : Object (optional)
The scope originally specified for the handler. It must be the same as the scope argument specified in the original call to Ext.util.Observable#addListener or the listener will not be removed.
Convenience Syntax
You can use the addListener
destroyable: true config option in place of calling un(). For example:
var listeners = cmp.on({ scope: cmp, afterrender: cmp.onAfterrender, beforehide: cmp.onBeforeHide, destroyable: true }); // Remove listeners listeners.destroy(); // or cmp.un({ scope: cmp, afterrender: cmp.onAfterrender, beforehide: cmp.onBeforeHide });
Exception - DOM event handlers using the element config option
You must go directly through the element to detach an event handler attached using the addListener element option.
panel.on({ element: 'body', click: 'onBodyClick' }); panel.body.un({ click: 'onBodyClick' });
Old alias to Ext.Array#unique. Returns a new array with unique items.
array : Array
results
Deprecated since version 4.0.0
Appends content to the query string of a URL, handling logic for whether to place a question mark or ampersand.
url : String
The URL to append to.
string : String
The content to append to the URL.
The resulting URL
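The question-mark/ampersand logic this entry describes is small enough to sketch directly (a hedged sketch of the documented behavior, not Ext's source):

```javascript
// Hedged sketch: pick '?' or '&' depending on whether the URL already
// has a query string.
function urlAppend(url, queryString) {
  if (!queryString) return url;
  return url + (url.indexOf('?') === -1 ? '?' : '&') + queryString;
}
```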
Alias for Ext.Object#fromQueryString. Converts a query string back into an object.
Non-recursive:
Ext.Object.fromQueryString("foo=1&bar=2"); // returns {foo: '1', bar: '2'} Ext.Object.fromQueryString("foo=&bar=2"); // returns {foo: '', bar: '2'} Ext.Object.fromQueryString("some%20price=%24300"); // returns {'some price': '$300'} Ext.Object.fromQueryString("colors=red&colors=green&colors=blue"); // returns {colors: ['red', 'green', 'blue']}
Recursive:
Ext.Object.fromQueryString( "username=Jacky&"+ "dateOfBirth[day]=1&dateOfBirth[month]=2&dateOfBirth[year]=1911&"+ "hobbies[0]=coding&hobbies[1]=eating&hobbies[2]=sleeping&"+ "hobbies[3][0]=nested&hobbies[3][1]=stuff", true); // returns { username: 'Jacky', dateOfBirth: { day: '1', month: '2', year: '1911' }, hobbies: ['coding', 'eating', 'sleeping', ['nested', 'stuff']] }
queryString : String
The query string to decode
recursive : Boolean (optional)
Whether or not to recursively decode the string. This format is supported by PHP / Ruby on Rails servers and similar.
Defaults to: false
Deprecated since version 4.0.0
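The non-recursive cases shown above can be sketched in plain JavaScript (a hedged sketch, not Ext's source; the recursive bracket syntax is omitted). Repeated parameter names collect into an array, matching the colors example:

```javascript
// Hedged non-recursive sketch of the documented decoding.
function fromQueryString(queryString) {
  var result = {};
  if (!queryString) return result;
  queryString.split('&').forEach(function (pair) {
    var idx = pair.indexOf('='),
        name = decodeURIComponent(idx === -1 ? pair : pair.slice(0, idx)),
        value = idx === -1 ? '' : decodeURIComponent(pair.slice(idx + 1));
    if (name in result) {
      // repeated names collect into an array
      result[name] = [].concat(result[name], value);
    } else {
      result[name] = value;
    }
  });
  return result;
}
```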
Takes an object and converts it to an encoded query string.
Non-recursive:
Ext.Object.toQueryString({foo: 1, bar: 2}); // returns "foo=1&bar=2" Ext.Object.toQueryString({foo: null, bar: 2}); // returns "foo=&bar=2" Ext.Object.toQueryString({'some price': '$300'}); // returns "some%20price=%24300" Ext.Object.toQueryString({date: new Date(2011, 0, 1)}); // returns "date=%222011-01-01T00%3A00%3A00%22" Ext.Object.toQueryString({colors: ['red', 'green', 'blue']}); // returns "colors=red&colors=green&colors=blue"
Recursive:
Ext.Object.toQueryString({ username: 'Jacky', dateOfBirth: { day: 1, month: 2, year: 1911 }, hobbies: ['coding', 'eating', 'sleeping', ['nested', 'stuff']] }, true); // returns the following string (broken down and url-decoded for ease of reading purpose): // username=Jacky // &dateOfBirth[day]=1&dateOfBirth[month]=2&dateOfBirth[year]=1911 // &hobbies[0]=coding&hobbies[1]=eating&hobbies[2]=sleeping&hobbies[3][0]=nested&hobbies[3][1]=stuff
object : Object
The object to encode
recursive : Boolean (optional)
Whether or not to interpret the object in recursive format. (PHP / Ruby on Rails servers and similar).
Defaults to: false
queryString
Deprecated since version 4.0.0
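The non-recursive cases above can likewise be sketched in a few lines (a hedged sketch, not Ext's source; the Date-to-JSON case and the recursive bracket syntax are omitted for brevity):

```javascript
// Hedged non-recursive sketch of the documented encoding.
function toQueryString(object) {
  var parts = [];
  Object.keys(object).forEach(function (key) {
    var value = object[key];
    if (Array.isArray(value)) {
      // arrays repeat the parameter name once per item
      value.forEach(function (item) {
        parts.push(encodeURIComponent(key) + '=' + encodeURIComponent(item));
      });
    } else {
      parts.push(encodeURIComponent(key) + '=' +
                 (value == null ? '' : encodeURIComponent(value)));
    }
  });
  return parts.join('&');
}
```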
Returns the given value itself if it's not empty, as described in Ext#isEmpty; returns the default value (second argument) otherwise.
value : Object
The value to test.
defaultValue : Object
The value to return if the original value is empty.
allowBlank : Boolean (optional)
true to allow zero length strings to qualify as non-empty.
Defaults to: false
value, if non-empty, else defaultValue.
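Combined with the emptiness rules from Ext#isEmpty, this is a one-liner; a hedged sketch (not Ext's source, with a local isEmpty included so it is self-contained):

```javascript
// Hedged sketch of the documented emptiness rules.
function isEmpty(value, allowEmptyString) {
  return value == null ||
         (!allowEmptyString && value === '') ||
         (value instanceof Array && value.length === 0);
}

// valueFrom: fall back to defaultValue only when value is "empty".
function valueFrom(value, defaultValue, allowBlank) {
  return isEmpty(value, allowBlank) ? defaultValue : value;
}
```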
Comparison function for sorting an array of objects in ascending order of weight.
Available since: 6.5.0
lhs : Object
rhs : Object
Convenient shorthand to create a widget by its xtype or a config object.
var button = Ext.widget('button'); // Equivalent to Ext.create('widget.button'); var panel = Ext.widget('panel', { // Equivalent to Ext.create('widget.panel') title: 'Panel' }); var grid = Ext.widget({ xtype: 'grid', ... });
If a Ext.Component instance is passed, it is simply returned.
name : String (optional)
The xtype of the widget to create.
config : Object (optional)
The configuration object for the widget constructor.
The widget instance | http://docs.sencha.com/extjs/6.5.0/classic/Ext.html | CC-MAIN-2017-30 | refinedweb | 9,648 | 52.76 |
A simple back-end rest framework in python using aiohttp lib
Project description
Welcome to apys! A simple backend restful framework!
LANGUAGE
INSTALLATION
- Install python 3
- Install PIP - Python libraries manager
- Install this framework using PIP
- pip install apys
INITIALIZING PROJECT
$ apys --init
USING
DIRECTORIES
/config - json configuration files
/endpoints - backend endpoints
/filters - script files to execute before the endpoint
/utils - script files to execute when server starts
CONFIG
Here are the configuration files used in the app. They will be send to the endpoint via param api.config
There are 3 special file names:
* prod.json - The production configuration file
* dev.json - The development configuration file
* local.json - The local configuration file (ignored in git)
You can also force it to use a configuration with the --config or -c option:
$ apys -s --config=my_config
Note: If no config file is chosen, [...]

{ "server": { "port": "[...] //default=8080", "cors": "string or false //default=false" }, "utils": ["string //default=[]. list of utils in order to load"], "(...)": "(...) //you can add any other key and access it via `api.config['my_key']`" }
You can also use environment variables like $PORT (for PORT env var), and set a default value if no env var is found like $PORT|8080 or $PORT|8080|int (if type is needed)
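The `$VAR|default|type` substitution can be sketched in a few lines of Python. This is an illustrative sketch of the rule described above, not apys's actual implementation:

```python
import os

def resolve(value):
    """Hedged sketch of the `$VAR|default|type` substitution described
    above; illustrative only, not apys's actual implementation."""
    if not (isinstance(value, str) and value.startswith('$')):
        return value
    parts = value[1:].split('|')
    name = parts[0]
    default = parts[1] if len(parts) > 1 else None
    raw = os.environ.get(name, default)
    if raw is not None and len(parts) > 2 and parts[2] == 'int':
        return int(raw)
    return raw
```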
ENDPOINTS
This will be your main dev dir
All files added here will be an endpoint automatically
i.e.: the file endpoints/hello/world.py will generate an endpoint /hello/world
The file’s code will be the following:
filters = [ 'filter1', ['filter2', 'filter3'] ] def method(req, api): pass # process
Where method is the http request type:
* post
* get
* put
* delete
* head
* options
* default - executed when a request is made for any of the above, but it is not implemented
process is what you want the endpoint to do (your code)
filter1, filter2 and filter3 are the filters scripts (without .py) executed before the endpoint is called
If you put your filter inside an array the error they return will be returned only if ALL of them return some error
req is aiohttp’s request, documentation
req's property body only works for JSON as of now
api is the object that contains all api functionalities:
* config - Configuration dictionary used in the actual scope
* debug - function to log messages
* error - function to log errors
Also api.web contains aiohttp.web
FILTERS
Code that will be called before every request.
method(req, api) - method being the type of http request
The function that will be executed before every request to the function with the same name on the endpoint. Any result should be stored on the variable `req`, because it is the only local variable on the request.
always(req, api)
The function that will be executed before any request. Note: this function will be executed before the other filters.
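For concreteness, here is a hypothetical filter file sketching both hooks. The module name, helper names, and error convention are illustrative (apys does not define them), and `req` is treated as a plain dict for the example rather than aiohttp's request object:

```python
# Hypothetical filter module (e.g. filters/auth.py). Names are
# illustrative; apys itself does not define them.

def always(req, api):
    # Runs before every request, before any other filter.
    # Anything later steps need is stashed on `req`.
    req['trace'] = ['always']

def get(req, api):
    # Runs only before GET endpoints that list this filter.
    # Returning a value is taken here to signal an error response.
    if 'token' not in req:
        return {'error': 401, 'message': 'missing token'}
    req['trace'].append('get')
```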
UTILS
Python files special functionality.
It needs to be inside a dir and has some special files
init.py
This file contains a function that will be called before initializing the api.
def init(api): pass
The function that will be executed on server startup Only one time.
Useful for setting some api constants
cli.py
This file contains a function that will add a commandline argument.
The util flags will be --[util_name] and -[util_name_first_char]
For example, if the util name is test, the flags will be --test and -t
class CLI: def __init__(self, result): # See `parser.add_argument` doc for information on these self.action = 'store_true' self.default = False self.help = 'It makes everything shine' # store the result of user input self.result = result def start(self, api, endpoints): pass
EXAMPLE
Look at the demos/ for examples:
- hello_world: a simple hello world app, to learn the basics
- calculator: a simpler app that resembles more a normal product
- log_to_file: an example of logging in files
- user_role: an advanced example on filters
- unit_testing: an advanced example on adding cli arguments
STARTING THE SERVER
There are 2 ways to start the server
- Execute apys -s from terminal on your root project folder (Recommended)
- Call the method start() from module apys. | https://pypi.org/project/apys/3.1.2/ | CC-MAIN-2021-39 | refinedweb | 671 | 59.84 |
Supporting Optional Variables with Optional<T>
A real favorite template of mine is one that encapsulates optional variables. Every variable stores values, but variables don't store whether the current value is valid. Optional variables store this information to indicate if the variable is valid or initialized. Think about it: how many times have you had to use a special return value to signify some kind of error case?
Take a look at this code, and you’ll see what I’m talking about:
bool DumbCalculate1(int &spline) { //imagine some code here.... // //The return value is the error, and the value of spline is //invalid return false; } #define ERROR_IN_DUMBCALCULATE (-8675309) int DumbCalculate2() { //imagine some code here.... // //The return value is a "special" value, we hope could never be //actually calculated return ERROR_IN_DUMBCALCULATE; } int _tmain(void) { //////////////////////////////////////////////////////////////// // //Dumb way #1 - use a return error code, and a reference to get //to your data. // int dumbAnswer1; if (DumbCalculate1(dumbAnswer1)) { //do my business... } //////////////////////////////////////////////////////////////// //Dumb way #2 - use a "special" return value to signify an error int dumbAnswer2 = DumbCalculate2(); if (dumbAnswer2 != ERROR_IN_DUMBCALCULATE) { //do my business... } }
There are two evil practices in this code. The first practice, “Dumb Way #1” requires that you use a separate return value for success or failure. This causes problems because you can’t use the return value DumbCalculate1() function as the parameter to another function because the return value is an error code:
AnotherFunction(DumbCalculate1()); //whoops.Can't do this!
The second practice I’ve seen that drives me up the wall is using a “special” return value to signify an error. This is illustrated in the DumbCalculate2() call. In many cases, the value chosen for the error case is a legal value, although it may be one that will “almost never” happen. If those chances are one in a million and your game sells a million copies, how many times per day do you think someone is going to get on the phone and call your friendly customer service people? Too many.
Here’s the code for optional<T>, a template class that solves this problem.
#pragma once //////////////////////////////////////////////////////////////////// //optional.h // //An isolation point for optionality, provides a way to define //objects having to provide a special "null" state. // //In short: // //struct optional<T> //{ //bool m_bValid; // //T m_data; //}; // // #include <new> #include <assert.h> class optional_empty {}; template <unsigned long size> class optional_base { public: //Default -invalid. optional_base():m_bValid(false){} optional_base & operator =(optional_base const & t) { m_bValid = t.m_bValid; return *this; } //Copy constructor optional_base(optional_base const & other) :m_bValid(other.m_bValid) { } //utility functions bool const valid()const { return m_bValid; } bool const invalid()const { return !m_bValid; } protected: bool m_bValid; char m_data [size ]; //storage space for T }; template <class T> class optional :public optional_base<sizeof(T)> { public: //Default -invalid. optional(){} optional(T const & t) {construct(t); m_bValid = (true);} optional(optional_empty const &) { } optional & operator = (T const & t) { if (m_bValid) { *GetT()=t; } else { construct(t); m_bValid = true; //order important for exception safety. } return *this; } //Copy constructor optional(optional const & other) { if (other.m_bValid) { construct(*other); m_bValid = true; //order important for exception safety. } } optional & operator =(optional const & other) { assert(!(this ==& other)); //don't copy over self! if (m_bValid) { //first,have to destroy our original. m_bValid = false; //for exception safety if destroy() throws. //(big trouble if destroy() throws, though) destroy(); } if (other.m_bValid) { construct(*other); m_bValid = true; //order vital. } return *this; } bool const operator == (optional const & other)const { if ((!valid()) && (!other.valid())) { return true; } if (valid() ^ other.valid()) {return false;} return ((**this) == (*other)); } bool const operator < (optional const & other) const { //equally invalid - not smaller. if ((!valid()) && (!other.valid())) {return false;} //I'm not valid, other must be, smaller. if (!valid()) {return true;} //I'm valid, other is not valid, I'm larger if (!other.valid()) {return false;} return ((**this) < (*other)); } ~optional(){if (m_bValid)destroy();} //Accessors. T const & operator * () const {assert(m_bValid);return *GetT();} T & operator * () {assert(m_bValid);return *GetT();} T const * const operator -> ()const {assert(m_bValid);return GetT();} T * const operator -> () {assert(m_bValid);return GetT();} //This clears the value of this optional variable and makes it //invalid once again. void clear() { if (m_bValid) { m_bValid =false; destroy(); } } //utility functions bool const valid()const {return m_bValid;} bool const invalid()const {return !m_bValid;} private: T const *const GetT()const {return reinterpret_cast<T const *const>(m_data);} T *const GetT() {return reinterpret_cast<T * const>(m_data);} void construct(T const & t) { new (GetT())T(t); } void destroy(){ GetT()->~T(); } };
As you can see, it’s not as simple as storing a Boolean value along with your data. The extra work in this class handles comparing optional objects with each other and getting to the data the object represents.
Here’s an example of how to use optional<T>:
////////////////////////////////////////////////////////////////////
// Optional.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "optional.h"

optional<int> Calculate()
{
    optional<int> spline;
    spline = 10;                  // you assign values to optionals like this...
    spline = optional_empty();    // or you could give them the empty value
    spline.clear();               // or you could clear them to make them invalid
    return spline;
}

int main(void)
{
    /////////////////////////////////////////////////////////////////
    // Using optional<T>
    //
    optional<int> answer = Calculate();
    if (answer.valid())
    {
        // do my business...
    }
    return 0;
}
If you are familiar with Boost C++ you’ll know that it also has an optional template, but to be honest it does something I don’t like very much, namely overloading the ‘!’ operator to indicate the validity of the object. Imagine this code in Boost:
optional<bool> bIsFullScreen;
// imagine code here ...
if (!!bIsFullScreen)
{
}
Yes, that’s no typo! The “!!” operator works just fine with Boost’s optional template. While coding like this is a matter of taste, I personally think this is unsightly and certainly confusing.
Pseudo-Random Traversal of a Set
Have you ever wondered how the “random” button on your CD player worked? It will play every song on your CD at random without playing the same song twice. That’s a really useful solution for making sure players in your games see the widest variety of features like objects, effects, or characters before they have the chance of seeing the same ones over again.
The following code uses a mathematical feature of prime numbers and quadratic equations. The algorithm requires a prime number larger than the ordinal value of the set you wish to traverse. If your set had ten members, your prime number would be eleven. Of course, the algorithm doesn’t generate prime numbers; instead it just keeps a select set of prime numbers around in a lookup table. If you need bigger primes, there’s a convenient web site for you to check out.
Here’s how it works: A skip value is calculated by choosing three random values greater than zero. These values become the coefficients of the quadratic, and the domain value (x) is set to the ordinal value of the set:
Skip = RandomA * (members * members) + RandomB * members + RandomC
Armed with this skip value, you can use this piece of code to traverse the entire set exactly once, in a pseudo random order:
nextMember += skip; nextMember %= prime;
The value of skip is so much larger than the number of members of your set that the chosen value seems to skip around at random. Of course, this code is inside a while loop to catch the case where the value chosen is larger than your set but still smaller than the prime number. Here’s the source code:
/******************************************************************
 PrimeSearch.h

 This class enables you to visit each and every member of an array
 exactly once in an apparently random order.

 NOTE: If you want the search to start over at the beginning again -
 you must call the Restart() method, OR call GetNext(true).
*******************************************************************/

class PrimeSearch
{
    static int prime_array[];

    int skip;
    int currentPosition;
    int maxElements;
    int *currentPrime;
    int searches;

    CRandom r;

public:
    PrimeSearch(int elements);

    int GetNext(bool restart = false);
    bool Done() { return (searches == *currentPrime); }
    void Restart() { currentPosition = 0; searches = 0; }
};

/******************************************************************
 PrimeSearch.cpp
*******************************************************************/

int PrimeSearch::prime_array[] =
{
    // choose the prime numbers to closely match the expected members
    // of the sets.

    // begin to skip even more primes
    5003, 5101, 5209, 5303, 5407, 5501, 5623, 5701, 5801, 5903,
    6007, 6101, 6211, 6301, 6421, 6521, 6607, 6701, 6803, 6907,
    7001, 7103, 7207, 7307, 7411, 7507, 7603, 7703, 7817, 7901,
    8009, 8101, 8209, 8311, 8419, 8501, 8609, 8707, 8803, 8923,
    9001, 9103, 9203, 9311, 9403, 9511, 9601, 9719, 9803, 9901,

    // and even more
    10007, 10501, 11003, 11503, 12007, 12503, 13001, 13513, 14009, 14503,
    15013, 15511, 16033, 16519, 17011, 17509, 18013, 18503, 19001, 19501,
    20011, 20507, 21001, 21503, 22003, 22501, 23003, 23509, 24001, 24509

    // if you need more primes - go get them yourself!!!!
    // Create a bigger array of prime numbers by using this web site:
    //
};

PrimeSearch::PrimeSearch(int elements)
{
    assert(elements > 0 && "You can't do this if you have 0 elements to search through, buddy-boy");

    maxElements = elements;

    int a = (rand() % 13) + 1;
    int b = (rand() % 7) + 1;
    int c = (rand() % 5) + 1;

    skip = (a * maxElements * maxElements) + (b * maxElements) + c;
    skip &= ~0xc0000000;    // this keeps skip from becoming too large....

    Restart();

    currentPrime = prime_array;
    int s = sizeof(prime_array) / sizeof(prime_array[0]);

    // if this assert gets hit you didn't have enough prime numbers
    // in your set.
    // Go back to the web site.
    assert(prime_array[s - 1] > maxElements);

    while (*currentPrime < maxElements)
    {
        currentPrime++;
    }

    int test = skip % *currentPrime;
    if (!test)
        skip++;
}

int PrimeSearch::GetNext(bool restart)
{
    if (restart)
        Restart();

    if (Done())
        return -1;

    bool done = false;
    int nextMember = currentPosition;

    while (!done)
    {
        nextMember = nextMember + skip;
        nextMember %= *currentPrime;
        searches++;

        if (nextMember < maxElements)
        {
            currentPosition = nextMember;
            done = true;
        }
    }

    return currentPosition;
}
I’ll show you a trivial example to make a point.
void FadeToBlack(Screen *screen)
{
    int w = screen->GetWidth();
    int h = screen->GetHeight();
    int pixels = w * h;

    PrimeSearch search(pixels);

    int p;
    while ((p = search.GetNext()) != -1)
    {
        int x = p % w;
        int y = p / w;
        screen->SetPixel(x, y, BLACK);

        // of course, you wouldn't blit every pixel change.
        screen->Blit();
    }
}
The example sets random pixels to black until the entire screen is erased. I should warn you now that this code is completely stupid, for two reasons. First, you wouldn’t set one pixel at a time. Second, you would likely use a pixel shader to do this. I told you the example was trivial: use PrimeSearch for other cool things like spawning creatures, weapons, and other random stuff.
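If you want to convince yourself that the skip-and-mod trick really does visit every member exactly once, the idea is easy to test outside of C++. Here is a minimal Python sketch (not from the book) of the same algorithm; the coefficient ranges mirror PrimeSearch, and the prime simply has to be larger than the set size:

```python
import random

def prime_skip_order(n, prime):
    """Visit 0..n-1 exactly once in a pseudo-random order.

    'prime' must be a prime larger than n. A skip value built from
    random quadratic coefficients walks the residues mod 'prime';
    residues >= n are discarded, exactly as PrimeSearch does.
    """
    a = random.randint(1, 13)
    b = random.randint(1, 7)
    c = random.randint(1, 5)
    skip = a * n * n + b * n + c
    if skip % prime == 0:      # skip must not be a multiple of the prime
        skip += 1

    order, pos = [], 0
    while len(order) < n:
        pos = (pos + skip) % prime
        if pos < n:            # residue outside the set: just keep skipping
            order.append(pos)
    return order

order = prime_skip_order(10, 11)
print(sorted(order) == list(range(10)))   # True: every member visited once
```

Because the skip is coprime to the prime modulus, repeatedly adding it cycles through every residue exactly once before repeating; discarding the residues beyond the set size is what makes the occasional extra loop iteration necessary.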
Developing the Style That’s Right for You
Throughout this chapter I’ve tried to point out a number of coding techniques and pitfalls that I’ve learned over the years. I’ve tried to focus on the ones that seem to cause the most problems and offer the best results. Of course, keep in mind that there is no single best approach or magic solution for coding a game.
I wish I had more pages because there are tons of programming gems and even game programming gems out there. Most of it you’ll beg or borrow from your colleagues. Some of it you’ll create yourself after you solve a challenging problem.
However you find them, don’t forget to share.
About the Author
Mike McShaffry, a.k.a. “Mr. Mike,” started programming games as soon as he could tap a keyboard. He signed up at the University of Houston, where he graduated five and one-half years later. Then, he entered the boot camp of the computer game industry: Origin Systems. He worked for Warren Spector and Richard Garriott, a.k.a. “Lord British,” on many of their most popular games. In 1997, Mike formed his first company, Tornado Alley. He later took a steady gig at Glass Eye Entertainment, working for his friend Monty Kerr, where he produced Microsoft Casino.
About the Book
Game Coding Complete, Second Edition
By Mike McShaffry
Published: January 14, 2005, Paperback: 850 pages
Published by Paraglyph Press
ISBN: 1932111913
Retail price: $44.99
This material is from Chapter 3 of the book.
Paraglyph Press, copyright 2005, Game Coding Complete, 2nd Edition.
Reprinted with permission. | https://www.developer.com/guides/mikes-grab-bag-of-useful-stuff/ | CC-MAIN-2022-40 | refinedweb | 1,973 | 54.93 |
GPS Module NEO-6M Receiver w/ Integrated Ceramic Antenna
- Was RM105.00
RM60.00
- Product Code: GPS Module NEO-6M
- Availability: In Stock
The u-blox GPS module has serial TTL output; it has four pins: TX, RX, VCC and GND.
Features
Specifications
#include <SoftwareSerial.h>

SoftwareSerial gps(4, 3);
char data = ' ';

void setup() {
  Serial.begin(115200);  // 115200 baud speed
  gps.begin(9600);       // 9600 baud speed
}

void loop() {
  if (gps.available()) {
    data = gps.read();
    Serial.print(data);
  }
}
Result example in serial monitor:
$GPRMC,044235.000,A,4322.0289,N,00824.5210,W,0.39,65.46,020615,,,A*44
If we analyze this example sentence based on the NMEA protocol, we can determine the following variables:
- 044235.000 represents GMT (4:42:35)
- "A" is the indication that the position data is fixed and is correct. "V" would be invalid
- 4322.0289 represents the latitude (43º 22.0289')
- N represents the North
- 00824.5210 represents the longitude (8º 24.5210')
- W represents the West
- 0.39 represents the speed in knots
- 65.46 represents the orientation in degrees
- 020615 represents the date (June 2, 2015)
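To make the field positions concrete, here is a small illustrative Python parser for the $GPRMC sentence above (this is just a sketch for this article's example frame, not the TinyGPS library; a real parser should also verify the checksum after the '*'):

```python
def parse_gprmc(sentence):
    """Pick out the fields discussed above from a $GPRMC sentence.

    Returns None for non-GPRMC sentences or when the fix status
    field is 'V' (invalid) instead of 'A' (valid).
    """
    body = sentence.split('*')[0].lstrip('$')   # drop '$' prefix and '*XX' checksum
    f = body.split(',')
    if f[0] != 'GPRMC' or f[2] != 'A':          # 'A' = valid fix, 'V' = invalid
        return None
    return {
        'time_utc':    f[1],          # hhmmss.sss (GMT)
        'latitude':    (f[3], f[4]),  # ddmm.mmmm and N/S
        'longitude':   (f[5], f[6]),  # dddmm.mmmm and E/W
        'speed_knots': float(f[7]),
        'course_deg':  float(f[8]),
        'date':        f[9],          # ddmmyy
    }

fix = parse_gprmc("$GPRMC,044235.000,A,4322.0289,N,00824.5210,W,0.39,65.46,020615,,,A*44")
print(fix['latitude'])   # ('4322.0289', 'N')
print(fix['date'])       # '020615'
```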
As we saw, the data frame sent by our GPS module can get several variables, being important for projects positioning latitude and longitude. To do this, we will make use of the TinyGPS library that can be downloaded from here:
Remember that once the library is downloaded, we have to install it by copying it into the "libraries" folder of the Arduino IDE installation and then restarting the program so it is loaded correctly. The TinyGPS library makes it easy to extract both latitude and longitude, as well as the other variables described above, without having to resort to complex parsing algorithms. To try it, run the simple example that the library provides: go to File / Examples / TinyGPS / simple_test in the Arduino IDE.
On 12/02/2017 at 06:02, xxxxxxxx wrote:
hi there,
i am failing to get a working shaderlink / texbox in my GeDialog!
working with resource files allowed me to add a texbox with:
SHADER MY_SHADER { FIT_H; SCALE_H; }
but sadly the gui then doesn't react to any interaction
i tried the following in GeDialog's CreateLayout(self):
self.MyShader = self.FindCustomGui(MY_SHADER,c4d.CUSTOMGUI_TEXBOX)
--> just returns True, so probably doesn't work
self.MyShader = self.FindCustomGui(MY_SHADER,c4d.CUSTOMGUI_LINKBOX)
--> returns a CustomGui but then:
color = c4d.BaseShader(c4d.Xcolor)
self.SetLink(MyShader, color)
--> crashes c4d
so the question is: is it possible to have a working texbox in a GeDialog?
and if yes, how would one get there?
best, theo
On 13/02/2017 at 02:03, xxxxxxxx wrote:
Hi,
I'm afraid I haven't good news. It's not possible to use a SHADER gadget inside a dialog with the Python API.
TexBoxGui that provides access to SHADER resource is missing in the Python API.
This explains why self.FindCustomGui() returns True because the actual TexBoxGui object can't be returned.
Calling self.FindCustomGui() with CUSTOMGUI_LINKBOX (good try but not recommended at all ) returns a "false" LinkBoxGui and crashes as the actual dialog gadget isn't a LinkBox.
On 13/02/2017 at 02:10, xxxxxxxx wrote:
hi and thanks for your answer...
there is a post about it here, too... but it is from 2010
but can i ask to understand...
would it work to add a texbox/shader in a GeDialog in c++?
so it is just a problem of the python SDK?
and if fundamentally not possible in GeDialog with python... are there other ways?
can you construct the missing parts yourself?
It must be possible somehow to open a window, display a texbox, and get some input, not?
my goal is simply to have my own window where i can choose a texture or shader
and then set/insert it in a material shader slot.
On 16/02/2017 at 07:59, xxxxxxxx wrote:
nothing?
python + texbox = absolutely impossible?
shouldnt that be set on the todo list for the sdk? .)
On 16/02/2017 at 10:00, xxxxxxxx wrote:
What you want to do does not require having a shader link in the dialog.
All you need is a way to target the specific image or shader type. And then a way to target the specific material you want to add it to.
Also maybe even what channels(color, bump, etc...) do want to add the image to?
For example. One way of doing this is to:
-Use a textbox for the user to enter the target material's name
-Use a Filename gadget to get the image's path on your HD that you want to add to the material.
In your code. You then get the string value from the Filename gadget. And the string value from the textbox. Then use that to insert the image into the target material like this:
mat = doc.GetFirstMaterial()
if mat is None: return False

file = the string value of your Filename gadget
if len(file) == 0: return False

#Get the specific material based on what the user entered in the textbox
#Then add the image file to the color channel's shader link
while mat:
    if mat.GetName() == your textbox value:
        colorChannel = mat[c4d.MATERIAL_COLOR_SHADER]
        shdr = c4d.BaseList2D(c4d.Xbitmap)
        shdr[c4d.BITMAPSHADER_FILENAME] = file
        mat.InsertShader(shdr)
        mat[c4d.MATERIAL_COLOR_SHADER] = shdr
    mat = mat.GetNext()
Or.
What about maybe using a combobox gadget to get all of the existing materials rather than a textbox?
Then the user just selects the one they want to add the image to, rather than typing the name.
There's so many ways to do this kind of thing. There's no way to really answer your question.
Don't let the fact that we can't physically have a material link in the dialog get in your way.
Try to think about how you can work around it.
-ScottA
On 16/02/2017 at 11:11, xxxxxxxx wrote:
no no...
i wanted to code a helper plugin to setup materials
for that i must be able to select textures and all shaders, noise, layer, etc etc
so i would need a texbox
On 16/02/2017 at 12:05, xxxxxxxx wrote:
My point is that there are many ways to do what you want without using the shaderlink custom gui.
For example. You can iterate through a material/materials and get&set the shader links manually.
import c4d

shdrlist = []

def ShaderSearch(shader):
    next = shader.GetNext()
    down = shader.GetDown()
    if down: ShaderSearch(down)
    if next: ShaderSearch(next)
    shdrlist.append(shader.GetName())

def main():
    mat = doc.GetActiveMaterial()
    if mat is None: return
    shdr = mat.GetFirstShader()
    if shdr is None: return
    ShaderSearch(shdr)
    print shdrlist
    c4d.EventAdd()

if __name__=='__main__':
    main()
Where you get the images or shaders that you want to add. And how you want the user to do that task is relative. All depending on your specific desired workflow.
The shaderlink custom gui might be the most obvious thing to use. But it's not your only option.
For visualizations. I've written dialog plugins that show the active material using a bitmap button to display it. That can be updated by clicking on them. A UserArea could also be used for this. This gets around the limitation of not being able to use a material gadget in a GeDialog.
The sdk has so many options in it that you should be able to do almost anything you want to do. But you might have to do them by hand, instead of using a pre made gadget.
On 17/02/2017 at 01:26, xxxxxxxx wrote:
Yes, the TextBox GUI works with the C++ API.
And I've already added TexBoxGui to the Python API todo list
On 17/02/2017 at 03:05, xxxxxxxx wrote:
@ ScottA:
i would need to be able to choose from all available shaders in c4d, not from the shaders of a selected material.
the goal is to have the same choice for a shader like when you would be in the material editor itself.
@ Yannick:
good news !
but that probably wont go that fast, will it? .)
On 17/02/2017 at 05:17, xxxxxxxx wrote:
Originally posted by xxxxxxxx
good news !
but that probably wont go that fast, will it? .)
Missing classes and functions are regularly added to the Python API.
On 20/05/2017 at 01:37, xxxxxxxx wrote:
Originally posted by xxxxxxxx
Originally posted by xxxxxxxx
good news !
but that probably wont go that fast, will it? .)
Missing classes and functions are regularly added to the Python API.
was there a progress regarding texbox/shader?
does it work to add a texbox/shader in a GeDialog in python by now?
my project is kinda stuck till then...
and i dont even know where i could see a change,
i guess such details are not in the release notes
On 21/05/2017 at 03:11, xxxxxxxx wrote:
Such details are in the Python documentation "What's New"
page. Additions to the Python API usually only come with a
new C4D release, so you'll have to wait until R19.
On 06/09/2017 at 00:33, xxxxxxxx wrote:
... mhh... i read the news here ...
but i think the Shaderlink / Texbox / SHADER gadget inside a dialog wasn't added to the Python API
is that correct and is it still on the todo list?
On 06/09/2017 at 05:22, xxxxxxxx wrote:
unfortunately the TexBox CustomGUI didn't make it into R19. Sorry. To our excuse, we never said it would.
But at least I can confirm, it is still on our ToDo list.
On 06/09/2017 at 23:09, xxxxxxxx wrote:
No excuse needed, it wasn't promised at all
But is there a chance that this gets added in between major releases
(in other words anytime soon-ish) ?
or is it more likely to take a year or two?
(i guess it is not tagged with high priority) | https://plugincafe.maxon.net/topic/9959/13411_shaderlink--texbox-in-gedialog- | CC-MAIN-2021-31 | refinedweb | 1,407 | 75 |
NAME
PHYSFS_Io - An abstract i/o interface.
SYNOPSIS
#include <physfs.h>
Data Fields
PHYSFS_uint32 version
    Binary compatibility information.

void * opaque
    Instance data for this struct.

PHYSFS_sint64 (* read)(struct PHYSFS_Io *io, void *buf, PHYSFS_uint64 len)
    Read more data.

PHYSFS_sint64 (* write)(struct PHYSFS_Io *io, const void *buffer, PHYSFS_uint64 len)
    Write more data.

int (* seek)(struct PHYSFS_Io *io, PHYSFS_uint64 offset)
    Move i/o position to a given byte offset from start.

PHYSFS_sint64 (* tell)(struct PHYSFS_Io *io)
    Report current i/o position.

PHYSFS_sint64 (* length)(struct PHYSFS_Io *io)
    Determine size of the i/o instance's dataset.

struct PHYSFS_Io * (* duplicate)(struct PHYSFS_Io *io)
    Duplicate this i/o instance.

int (* flush)(struct PHYSFS_Io *io)
    Flush resources to media, or wherever.

void (* destroy)(struct PHYSFS_Io *io)
    Cleanup and deallocate i/o instance.
Detailed Description

An abstract i/o interface.
Warning
Historically, PhysicsFS provided access to the physical filesystem and archives within that filesystem. However, sometimes you need more power than this. Perhaps you need to provide an archive that is entirely contained in RAM, or you need to bridge some other file i/o API to PhysicsFS, or you need to translate the bits (perhaps you have a a standard .zip file that's encrypted, and you need to decrypt on the fly for the unsuspecting zip archiver).
A PHYSFS_Io is the interface that Archivers use to get archive data. Historically, this has mapped to file i/o to the physical filesystem, but as of PhysicsFS 2.1, applications can provide their own i/o implementations at runtime.
This interface isn't necessarily a good universal fit for i/o. There are a few requirements of note:
- They only do blocking i/o (at least, for now).
- They need to be able to duplicate. If you have a file handle from fopen(), you need to be able to create a unique clone of it (so we have two handles to the same file that can both seek/read/etc without stepping on each other).
- They need to know the size of their entire data set.
- They need to be able to seek and rewind on demand.
...in short, you're probably not going to write an HTTP implementation.
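To see what the contract in the list above amounts to, here is a toy Python analogue of a memory-backed stream (purely illustrative; the real interface is the C struct documented below). Note especially that duplicate() must return a handle with its own independent position:

```python
class MemoryIo:
    """Toy Python analogue of the PHYSFS_Io contract (not the C API):
    blocking reads, an absolute seek, a known total length, and a
    duplicate() that yields a fully independent handle."""

    def __init__(self, data, pos=0):
        self._data = data
        self._pos = pos

    def read(self, n):
        # returns the bytes actually read; may be shorter than n at EOF
        chunk = self._data[self._pos:self._pos + n]
        self._pos += len(chunk)
        return chunk

    def seek(self, offset):
        # absolute offset; seeking past the end is an error condition
        if offset > len(self._data):
            raise ValueError("seek past end of dataset")
        self._pos = offset
        return True

    def tell(self):
        return self._pos

    def length(self):
        return len(self._data)

    def duplicate(self):
        # same dataset, but an independent i/o position
        return MemoryIo(self._data, self._pos)

a = MemoryIo(b"abcdef")
b = a.duplicate()
a.read(3)
print(a.tell(), b.tell())   # 3 0 -- the clone's position is unaffected
```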
Thread safety: PHYSFS_Io implementations are not guaranteed to be thread safe in themselves. Under the hood where PhysicsFS uses them, the library provides its own locks. If you plan to use them directly from separate threads, you should either use mutexes to protect them, or don't use the same PHYSFS_Io from two threads at the same time.
See also
Field Documentation
void (* PHYSFS_Io::destroy)(struct PHYSFS_Io *io)

Cleanup and deallocate i/o instance. Free associated resources, including (opaque) if applicable.
This function must always succeed: as such, it returns void. The system may call your flush() method before this. You may report failure there if necessary. This method may still be called if flush() fails, in which case you'll have to abandon unflushed data and other failing conditions and clean up.
Once this method is called for a given instance, the system will assume it is unsafe to touch that instance again and will discard any references to it.
Parameters
struct PHYSFS_Io * (* PHYSFS_Io::duplicate)(struct PHYSFS_Io *io)

Duplicate this i/o instance. This needs to result in a full copy of this PHYSFS_Io, that can live completely independently. The copy needs to be able to perform all its operations without altering the original, including either object being destroyed separately (so, for example: they can't share a file handle; they each need their own).
If you can't duplicate a handle, it's legal to return NULL, but you almost certainly need this functionality if you want to use this to PHYSFS_Io to back an archive.
Parameters
Returns
int (* PHYSFS_Io::flush)(struct PHYSFS_Io *io)

Flush resources to media, or wherever. This is the chance to report failure for writes that had claimed success earlier, but still had a chance to actually fail. This method can be NULL if flushing isn't necessary.
This function may be called before destroy(), as it can report failure and destroy() can not. It may be called at other times, too.
Parameters
Returns
PHYSFS_sint64 (* PHYSFS_Io::length)(struct PHYSFS_Io *io)

Determine size of the i/o instance's dataset. Return number of bytes available in the file, or -1 if you aren't able to determine. A failure will almost certainly be fatal to further use of this stream, so you may not leave this unimplemented.
Parameters
Returns
void * PHYSFS_Io::opaque

Instance data for this struct. Each instance has a pointer associated with it that can be used to store anything it likes. This pointer is per-instance of the stream, so presumably it will change when calling duplicate(). This can be deallocated during the destroy() method.
PHYSFS_sint64 (* PHYSFS_Io::read)(struct PHYSFS_Io *io, void *buf, PHYSFS_uint64 len)

Read more data. Read (len) bytes from the interface, at the current i/o position, and store them in (buffer). The current i/o position should move ahead by the number of bytes successfully read.
You don't have to implement this; set it to NULL if not implemented. This will only be used if the file is opened for reading. If set to NULL, a default implementation that immediately reports failure will be used.
Parameters
buf The buffer to store data into. It must be at least (len) bytes long and can't be NULL.
len The number of bytes to read from the interface.
Returns
int (* PHYSFS_Io::seek)(struct PHYSFS_Io *io, PHYSFS_uint64 offset)

Move i/o position to a given byte offset from start. This method moves the i/o position, so the next read/write will be of the byte at (offset) offset. Seeks past the end of file should be treated as an error condition.
Parameters
offset The new byte offset for the i/o position.
Returns
PHYSFS_sint64 (* PHYSFS_Io::tell)(struct PHYSFS_Io *io)

Report current i/o position. Return bytes offset, or -1 if you aren't able to determine. A failure will almost certainly be fatal to further use of this stream, so you may not leave this unimplemented.
Parameters
Returns
PHYSFS_uint32 PHYSFS_Io::version

Binary compatibility information. This must be set to zero at this time. Future versions of this struct will increment this field, so we know what a given implementation supports. We'll presumably keep supporting older versions as we offer new features, though.
PHYSFS_sint64 (* PHYSFS_Io::write)(struct PHYSFS_Io *io, const void *buffer, PHYSFS_uint64 len)

Write more data. Write (len) bytes from (buffer) to the interface at the current i/o position. The current i/o position should move ahead by the number of bytes successfully written.
You don't have to implement this; set it to NULL if not implemented. This will only be used if the file is opened for writing. If set to NULL, a default implementation that immediately reports failure will be used.
You are allowed to buffer; a write can succeed here and then later fail when flushing. Note that PHYSFS_setBuffer() may be operating a level above your i/o, so you should usually not implement your own buffering routines.
Parameters
buffer The buffer to read data from. It must be at least (len) bytes long and can't be NULL.
len The number of bytes to read from (buffer).
Returns | https://man.archlinux.org/man/tell.3.en | CC-MAIN-2022-21 | refinedweb | 1,213 | 57.57 |
I see how the getGraphics method of JComponent classes can be useful. For e.g., that of a JPanel is particularly useful, but what exactly is the use of this method in the JFrame class?