Hi all, I'm getting the following error when trying to run my script with PyPy. I can run the script fine with Python 2.7.9.

File "app_main.py", line 75, in run_toplevel
File "main_v1_9.py", line 22, in <module>
    from toolbox import cleanATOM
File "D:\rosetta\pyrosettanew\toolbox\__init__.py", line 35, in <module>
    import rosetta
File "D:\rosetta\pyrosettanew\rosetta\__init__.py", line 21, in <module>
    import utility
File "D:\rosetta\pyrosettanew\rosetta\utility\__init__.py", line 1, in <module>
    from __utility_all_at_once_ import *
ImportError: No module named __utility_all_at_once_

The file "__utility_all_at_once_.pyd" is there in that folder, but it doesn't seem to work, and yet everything works fine with no changes under regular Python. I understand there are different namespaces with PyPy, yet the paths seem to be OK... Any ideas appreciated. Thx

PyPy uses a different interpreter than the standard "CPython" system. This means that modules which are not pure Python, and which were compiled for the standard Python system, don't (quite) work with the PyPy interpreter. As the PyPy documentation notes: "C extensions need to be recompiled for PyPy in order to work." However, there are ways around it. The page lists several, including the Reflex option, which from my preliminary reading might work; I don't know if anyone has tried it with Rosetta yet, though. Note that you would use that with the regular Rosetta source download, rather than the pre-wrapped PyRosetta download, as the cppyy/Reflex system does the wrapping for you. If you try it, please let us know how it works.

Thx for the prompt reply. A bit more involved than I hoped for at the moment! I am re-writing the various functions for searching, fragment insertion, and the small mover (all now done), with rotamer packing and loops still to do, all in PyRosetta/Python. As performance is always an issue, I hoped the PyPy route would be a simple and obvious one. The old rule that nothing is simple applies again here, by the looks of it.
I'll revisit it another time when I get the rest of the stuff working. Thx again for the info. Mark

Did this resolve your issue @MarkW? Please let me know!
Ashish Gupta, Web Dev

Well, kind of! It gave an explanation for what was going on, but the solution looked too time-consuming for what I wanted to do. Instead, I used ctypes to speed up sections that I identified through profiling as slow (mainly random number generation) and got some speed-up there. I have also been using multiple cores simultaneously to crunch through the calculations quicker. So as of now I haven't gone any further with PyPy.
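As a rough illustration of the ctypes approach mentioned above (this is not MarkW's actual code; it assumes a POSIX-like system where libc's rand()/srand() are available), calling into C's random number generator looks like this:

```python
import ctypes
import ctypes.util

# Locate and load the C library. On POSIX, CDLL(None) falls back to
# the symbols already loaded into the process, which include libc.
libc_path = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_path)

libc.rand.restype = ctypes.c_int
libc.srand(42)  # seed C's PRNG

# Each call crosses into C, avoiding pure-Python PRNG overhead.
values = [libc.rand() for _ in range(5)]
print(values)
```

For tight loops the real win comes from batching many calls on the C side rather than calling rand() once per Python iteration, since each ctypes call still carries per-call overhead.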
https://rosettacommons.org/node/4023
Artifact 8efd7d5aca057a793d221973da1f22a22e69f4db: File src/sqlite.h.in, part of check-in [160593dc] at 2007-08-16 19:40:17 on branch trunk. Change the design of the mutex interface to allow for both "fast" and "recursive" mutexes. (CVS 4238) (user: drh, size: 142891)

[Garbled excerpt of the sqlite.h.in header comments. Recognizable sections include: the SQLITE_VERSION / SQLITE_VERSION_NUMBER guard macros; the sqlite3 handle and sqlite3_int64 typedefs; the sqlite3_file and sqlite3_vfs virtual file system interfaces (xLock, xUnlock, xCheckReservedLock, xSectorSize, xDeviceCharacteristics, xDlOpen, xFullPathname); extended result codes (sqlite3_extended_result_codes); sqlite3_last_insert_rowid and the count of rows modified; busy handlers; formatted string printing; tracing and profiling (sqlite3_trace, sqlite3_profile); sqlite3_open_v2 and its SQLITE_OPEN_* flags; error codes and messages (sqlite3_errcode, sqlite3_errmsg, sqlite3_errmsg16); preparing, binding, stepping, and finalizing SQL statements; column metadata and declared types; collation creation (sqlite3_create_collation_v2); sqlite3_temp_directory; the update hook and shared pager cache; the soft heap limit (sqlite3_soft_heap_limit); extension loading (sqlite3_enable_load_extension); the virtual table module interface (sqlite3_create_module, sqlite3_declare_vtab, sqlite3_vtab); incremental BLOB I/O (sqlite3_blob_open and friends); VFS registration (sqlite3_find_vfs, sqlite3_register_vfs, sqlite3_unregister_vfs); and the mutex API introduced by this check-in.]
https://sqlite.org/src/artifact/8efd7d5aca057a79
Many times a customer says "we need a Windows application which works like a browser". This article describes how to make a WinForms application which mimics a browser. Because "Tiny WinForms Application Framework" is so long a name, I have called it JUJU. When you start to build a new Windows application whose layout looks like a browser, include these files in your project and start to create the new Forms. You can jump from any form to another, and the framework remembers how to go backwards or forwards. There is one main class that handles the browser-like behaviour. It is called AppManager (Application Manager). Every time you create a new Form, and you want to put it on the same stack as the previous form, you must create the new form via AppManager, for example:

AppManager.Show(new Form1(), "")              // new stack, no arguments
AppManager.Show(new Form1(), this, arguments) // same stack, with arguments

Every Form has its own FormManager class, where you can hide all public events which are common to all Forms. FormManager takes care of the usual Form events and communicates with AppManager. The Main function is the place where you initialize AppManager. AppManager glues all windows together with a doubly linked list; see the code.

public class Start
{
    public Start() {}

    [STAThread]
    static void Main()
    {
        AppManager.Show(new Home(), "");
        Application.Run();
    }
}

Extend your Form with your own common interface, IForm:

public class Form1 : System.Windows.Forms.Form, IForm {..}

Add a FormManager class to every Form which must have browser-like behaviour. The FormManager class takes care of the usual tasks which the Form must do: it initializes toolbars, event handlers, etc. After that you can concentrate on the real work, designing WinForms. Every common event is routed via this class.

public Form1()
{
    InitializeComponent();
    FormManager frmManager = new FormManager(this);
}

Remember also to implement all IForm functions. When you want to create a new window, just call AppManager.Show(...).
Every new form inherits properties from the previous form (location, layout). The old Form also sends arguments to the new one, so that it knows how to initialize its data.

AppManager.Show(new Form1(), this, argsOUT);

If the new Form is in the middle of the stack when you call AppManager.Show(..), it removes all the Forms to the right, and after that it adds the new Form to the end of the stack. The previous form is hidden and the new form is shown.

public static void Show(Form newForm, Form oldForm, string args)
{
    Node current = Activate(oldForm);
    if (current.Right != null)
        Remove(current.Right.Value, true);
    Node node = new Node(newForm, current, null);
    current.Right = node;
    nodes.Add(node);
    InitForm(newForm, oldForm, args);
    ShowHide(newForm, oldForm);
    current = null;
}

If you have any ideas how to develop this further, don't hesitate to contact me at dathava@jippii.fi. I would be pleased if someone could give me ideas on how to take it further. Maybe there are better ways to handle events, so if you invent any new features, let me know. Juju means a trick in Finnish. Common keyboard and menu handlers.
http://www.codeproject.com/KB/cs/tinywinformsappframework.aspx
>> > However, [...] I know it's difficult, and it's not even clear what it means in general.
> [...]

Doesn't sound like an attractive solution, since you have to choose one of the alternatives, and then the user will be surprised when the other alternatives don't work right. The "right way" would be to extend syntax-tables so they can jump from #else to #endif (or to #if when going backwards).

> (iii) Each time point leaves such a region, splat the properties again.
> What do you think?

I also think down this way lies madness. It's just piling up hacks over hacks, and while it may improve some behaviors it will screw up others.

        Stefan
http://lists.gnu.org/archive/html/emacs-devel/2009-11/msg00171.html
This... is a knife... hahaha

I'm sorry, you're too late. This competition has ended. You can submit your entry next year, though. The judge has already ruled C0D312 to be victorious. Ruling still stands.

I still find the status bar not a good option; it doesn't show long enough. But I do like the idea of logging it to the console. What I would probably do is put it in an output panel that pops up at the bottom. Then you could do multi-select (which would be nice), the info would be separate from all the other console noise, it would sit at the bottom and be fairly low-profile, and you could even have it auto-popup (conditionally, with an additional toggle scope panel command if you find it too intrusive). I think the victor has "won" the right to throw this on Package Control. If no one really has any interest in doing it, I will, but I will first leave it up to C0D312.

This is a related plugin I wrote. It displays the scope in the status bar as you move the cursor around. Just thought I'd share it:

import sublime, sublime_plugin

class PrintScopeNameCommand(sublime_plugin.EventListener):
    def on_selection_modified(self, view):
        sublime.status_message(view.scope_name(view.sel()[0].a))

It's a multiple choice question. Tick a) console and d) status bar, etc. I'm not saying the status bar option has to be banished, just that I never use it because it doesn't suit my needs. It can be left as an option by the author.

I couldn't help myself: you can configure the command how you like via arguments. It accepts multi-select, and you can select whether you want it to log to the status bar (only shows the first), the console, the clipboard (scopes only, for each selection), and/or an auto-popup panel. You can also have it give you the extent of the scope (pts and/or row/col).
import sublime
import sublime_plugin

class GetSelectionScopeCommand(sublime_plugin.TextCommand):
    def get_scope(self, pt):
        if self.rowcol or self.points:
            pts = self.view.extract_scope(pt)
            if self.points:
                self.scope_bfr.append("%-25s (%d, %d)\n" % ("Scope Extent pts:", pts.begin(), pts.end()))
            if self.rowcol:
                row1, col1 = self.view.rowcol(pts.begin())
                row2, col2 = self.view.rowcol(pts.end())
                self.scope_bfr.append(
                    "%-25s (line: %d char: %d, line: %d char: %d)\n" % (
                        "Scope Extent row/col:", row1 + 1, col1 + 1, row2 + 1, col2 + 1
                    )
                )
        scope = self.view.scope_name(pt)
        if self.clipboard:
            self.clips.append(scope)
        if self.first and self.show_statusbar:
            self.status = scope
            self.first = False
        self.scope_bfr.append("%-25s %s\n\n" % ("Scope:", self.view.scope_name(pt)))

    def run(self, edit, show_statusbar=False, show_panel=False, clipboard=False,
            rowcol=False, points=False, multiselect=False, console_log=False):
        self.window = self.view.window()
        view = self.window.get_output_panel('scope_viewer')
        self.scope_bfr = []
        self.clips = []
        self.status = ""
        self.show_statusbar = show_statusbar
        self.show_panel = show_panel
        self.clipboard = clipboard
        self.rowcol = rowcol
        self.points = points
        self.first = True

        # Get scope info for each selection wanted
        if len(self.view.sel()):
            if multiselect:
                for sel in self.view.sel():
                    self.get_scope(sel.b)
            else:
                self.get_scope(self.view.sel()[0].b)

        # Copy scopes to clipboard
        if clipboard:
            sublime.set_clipboard('\n'.join(self.clips))

        # Display in status bar
        if show_statusbar:
            sublime.status_message(self.status)

        # Show panel
        if show_panel:
            self.window.run_command("show_panel", {"panel": "output.scope_viewer"})
            view.insert(edit, 0, ''.join(self.scope_bfr))

        if console_log:
            print ''.join(self.scope_bfr)

Here is an example command enabling all options (but obviously you would probably want to limit it to what you actually use):

// Get Selection Scope
{
    "caption": "Get Selection Scope: Show Scope Under Cursor",
    "command": "get_selection_scope",
    "args": {
        "show_panel": true,
        "clipboard": true,
        "points": true,
        "rowcol": true,
        "multiselect": true,
        "console_log": true,
        "show_statusbar": true
    }
},

@facelessuser panel ftw

What he said. I had to throw out a couple of annoying carriage returns from the output, but other than that, this is a great plugin. (The removal of the carriage returns messes up the output on multi-select, but I don't think I'll mind.) I also took a look at nick's plugin, which I think I'll use if I can toggle it on and off. I can write the toggling bit, but I failed to include it under facelessuser's code. Help? Also, what is "Extent pts"? Can you elaborate?

Yeah, toggling such a feature is the only way I would use @nick's code. I wouldn't want all of that work going on with every cursor move unless I specifically wanted to see the scope. I could add this, no problem. "Extent pts" is how far the current scope extends. If you were in a string, it would give you the start of the string and the end of the string. It can be useful if you want to see what is included in that particular scope.

Added an instant scoping toggle. The repo is now at github.com/facelessuser/ScopeHunter. I will request that it be added to Package Control. If you have issues or suggestions, please create an issue on GitHub.

Yes, sorry. I removed the two \ns from the following line:

self.scope_bfr.append("%-25s %s\n\n" % ("Scope:", self.view.scope_name(pt)))

They were spacing out the output in the console, which I didn't like. Removing both \ns makes the output run on when selecting multiple selections. (With "points" and "rowcol" set to false.) Thanks! I think the "points" and "rowcol" features are too arithmetical for me. @nick's plugin's behavior seems preferable for identifying extent. In an ideal universe, highlighting the scope would probably be the best way.
(I tried to avoid mentioning this on the off-chance that you would find it interesting.) But even if it can be done, from what I've seen in other plugins (e.g. github.com/a-sk/livecss), trying to hijack the color scheme makes a mess of things. I could cut it down to one, but that is as far as I would go. I can put it in a setting. @nick's doesn't show extent; it just shows the scopes. These are two different things, useful for two different purposes. But that is why I made them settings and not on by default; not everyone will want them.

I interpreted his statement to mean that the plugin would highlight all instances of the scope (under the cursor, or on keypress, or what have you), similar to how Find/Replace highlights all matches.

From the ST2 API: find_by_selector(selector) returns [Regions]: "Finds all regions in the file matching the given selector, returning them as a list."

I would prefer one over two. But don't worry too much about my fussy aesthetics. I meant that the way I've used @nick's plugin is to enable it and then move the cursor around to locate the extent of the scope.

I just created a GitHub account for this. One of these days I may even work out what all the pulling and forking is about. Edit: I should've added a "Thanks!" somewhere or other. Edit #2: Actually, I hadn't even thought of that! I very much agree with this sentiment. Is it related to grokking or borking?

Whoa! I had no idea my bringing this thread back to life would result in a handy plugin. I agree the short time the scope is shown by default is not enough. This will be a huge help, thanks. (Fwiw, I had to restart ST2 after installing to get this to work, in case anyone else has the same issue.)

I think it is because I open the settings file into a variable and then reference the variable from then on; when the plugin is being copied over, ST2 loads the Python script before the settings file has been copied. When you restart, all dependencies are available; therefore, no problems.
https://forum.sublimetext.com/t/show-scope-command-ala-textmate/2514/30
Looks like the venerable MD5 cryptographic hash has developed a crack: a real MD5 collision. A team has published two different input streams which hash to the same MD5 value. Of course, because of the pigeonhole principle, everyone knew this had to happen. But no one had ever found a pair before. Now that they have, researchers will be working on the question of whether it is feasible to compute, for any given input stream, a different stream with the same hash. If that happens, then MD5 is useless cryptographically, and a lot of infrastructure will have to be thrown out, but not before a bunch of bad stuff (like theft and fraud) happens. Mark Pilgrim provides this Python program to demonstrate:

# see
a = "\xd1\x31\xdd\x02\xc5\xe6\xee\xc4\x69\x3d\x9a\x06\x98\xaf\xf9\x5c" \
    "\x2f\xca\xb5\x8771\x41\x5a" \
    "\x08\x51\x25\xe8\xf7\xcd\xc9\x9f\xd9\x1d\xbd\xf2b4a8\x0d\x1e" \
    "\xc6\x98\x21\xbc\xb6\xa8\x83\x93\x96\xf9\x65\x2b\x6f\xf7\x2a\x70"
b = "\xd1\x31\xdd\x02\xc5\xe6\xee\xc4\x69\x3d\x9a\x06\x98\xaf\xf9\x5c" \
    "\x2f\xca\xb5\x07f1\x41\x5a" \
    "\x08\x51\x25\xe8\xf7\xcd\xc9\x9f\xd9\x1d\xbd\x723428\x0d\x1e" \
    "\xc6\x98\x21\xbc\xb6\xa8\x83\x93\x96\xf9\x65\xab\x6f\xf7\x2a\x70"

print a == b
from md5 import md5
print md5(a).hexdigest() == md5(b).hexdigest()

Running it prints:

False
True

Add a comment:
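For a sense of what the check above does, here is a minimal Python 3 sketch using hashlib. The byte strings below are ordinary placeholders I made up, not the published collision pair, so their digests differ:

```python
import hashlib

# Two distinct inputs -- placeholders, NOT the actual collision bytes.
a = b"input stream one"
b = b"input stream two"

digest_a = hashlib.md5(a).hexdigest()
digest_b = hashlib.md5(b).hexdigest()

print(a == b)                # False: the inputs differ
print(digest_a == digest_b)  # False here; True only for a genuine collision pair
print(len(digest_a))         # 32: MD5 produces a 128-bit (32 hex digit) digest
```

A collision is precisely the case where the first comparison is False but the second is True.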
https://nedbatchelder.com/blog/200408/md5_collisions.html
Values stored for 'L' and 'I' items will be represented as Python long integers when retrieved, because Python's plain integer type cannot represent the full range of C's unsigned (long) integers. The buffer interface supported by the array type is described in the Python/C API Reference Manual.

pop([i]) — the index i defaults to -1, so that by default the last item is removed and returned.

The string representation of an array can be evaluated back into an equivalent array (for example via reverse quotes, ``), so long as the array() function has been imported using from array import array. Examples:

    array('l')
    array('c', 'hello world')
    array('u', u'hello \u2641')
    array('l', [1, 2, 3, 4, 5])
    array('d', [1.0, 2.0, 3.14])
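A quick demonstration of the behavior described above (Python 3 syntax here; the 'c' typecode and u'' literals in the text are from the Python 2 era):

```python
# Demonstrates the array type: pop() defaulting to the last item, and
# the repr round-tripping once array() has been imported by name.
from array import array

a = array('l', [1, 2, 3, 4, 5])
a.append(6)
print(a.pop())       # 6 -- i defaults to -1, so the last item is removed
print(a.tolist())    # [1, 2, 3, 4, 5]

b = eval(repr(a))    # the repr evaluates back to an equal array
print(b == a)        # True
```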
http://www.wingware.com/psupport/python-manual/2.3/lib/module-array.html
Over on Eric Edelstein's, on talk of building another blog aggregator (to compete with Afrigator and Amatomu - listed in alphabetical order so as not to denote preference), I boasted at how easy it is to build the aggregator portion of it. (Well, I'm fairly certain I did. It seems the comments have all disappeared. I'm not the only one pretty sure there were comments there...)

I did it quite significantly differently before, but I'm now building another version of the aggregator, hopefully in a way that gets rid of most of the tedium of aggregation, but allowing the results to be stored in whatever way the developer-user wants, and to allow capturing more about the feeds and posts than a generic aggregator will. While I wanted to abstract away the particular model, I started with models for Feed and Post as a beginning, using Elixir, drawing inspiration on fields and definitions from the FeedJack aggregator for Django.

    from elixir import *

    class Feed(Entity):
        has_field('feed_id', Integer, primary_key=True)
        has_field('feed_url', String(255), unique=True)
        has_field('title', Unicode(200))
        has_field('description', Unicode())
        has_field('link', String(255))
        has_field('etag', String(50))
        has_field('last_modified', DateTime)
        has_field('last_checked', DateTime)
        has_many('posts', of_kind='Post')
        using_options(tablename='a2d_feed')

    class Post(Entity):
        has_field('post_id', Integer, primary_key=True)
        has_field('link', String(255))
        has_field('title', Unicode(255))
        has_field('content', Unicode())
        has_field('date_created', DateTime)
        has_field('date_modified', DateTime)
        has_field('guid', String(200), unique=True)
        has_field('author', Unicode(100))
        has_field('author_email', String(255))
        has_field('comments_url', String(255))
        belongs_to('feed', of_kind="Feed", colname='feed_id', required=True)
        using_options(tablename='a2d_post')

These are pretty much the minimum required fields for an aggregator to be both meaningful and efficient.
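The unique guid column above is what makes re-polling safe: inserting a post that has already been seen becomes a no-op. A minimal pure-Python sketch of that deduplication logic (the dict is a hypothetical stand-in for the database table with its unique constraint, not the Elixir API):

```python
# Sketch of dedupe-by-guid when storing polled posts. The dict stands in
# for a database table that has a unique constraint on guid.
def store_posts(store, posts):
    """Insert each post keyed by guid; return how many were actually new."""
    added = 0
    for post in posts:
        if post['guid'] not in store:
            store[post['guid']] = post
            added += 1
    return added

store = {}
first = [{'guid': 'g1', 'title': 'Hello'}, {'guid': 'g2', 'title': 'World'}]
print(store_posts(store, first))                                      # 2
# Re-polling repeats old entries, but only genuinely new posts land:
print(store_posts(store, first + [{'guid': 'g3', 'title': 'New'}]))   # 1
```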
The system can populate all the feed values the first time (and update them every time) it downloads the feed from the feed URL. The ETag and last_modified should be used to avoid refetching a feed. The last_checked can be used in future as part of a strategy to try to predict the most opportune schedule to pick up changes to the feed - or it can just be used for diagnostics.

Posts have their content in terms of title, description, and link, and metadata like date created, modified, and who wrote them. They have a guid which can be used to avoid adding the same post twice through multiple sources - or just used to help make sure that the same post isn't added twice from the same source if the link or title or whatever changes.

The next step is to use the Universal Feed Parser to download the feed and parse it into a reasonably portable way of accessing information from the various syndication formats, and then to store that information in the database. After that, there's looking at what other information the Universal Feed Parser makes available beyond the basics of the syndication formats.

it's a short sad story. i definitely didn't delete your comment on purpose!!!!

The construction of something like an aggregator fascinates me. I'd love to see what comes up next.

Great to see others in SA using Python. I have some throwaway code too that crawls the blogosphere and stores feeds in a sqlite database. There is also a basic web front end. It was used for the now defunct muti-blogs project but if people are interested in it I am willing to share the code. I certainly do NOT plan on creating YASABA! (yet another SA blog aggregator). Although from time to time people may see the crawler in their referrer logs as I experiment with it. Regards

The difficult part in creating an aggregator site is NOT in collecting the data. That's easy. All you need to do is occasionally check the xml feed for posts, and store the data in a database with the relevant descriptor fields.
You have shown how simple that is in this post. The difficult part, or rather I would say the genius, lies in how one interacts with the user and how one presents that data. I have endless irritations with Afrigator's interface. And although I quite like Amatomu, it still needs some improvements. But admittedly it's in Alpha, and I can see Amatomu becoming nice. Afrigator looks like it has underlying design problems. Nevertheless, aggregation itself is just a service that runs through the xml feeds it has registered and updates a database. But how one presents that data. Wow! That is everything. Aggregation. Easy. Presentation. Priceless.

Yeah - that's exactly what I was saying on Eric's post. The picking up of feeds is the trivial part. The hard part is the added value - making it more likely for people to encounter content that they will find of interest to them. There's still quite a lot you can do without getting into that. A "planet" is a very useful community tool - witness the GeekDinner planet [planet.geekdinner.org.za], which really makes it easy to stay in touch with other people who attend Geek Dinners. So, allowing users to create their own mini-blogosphere is quite a valuable exercise - making it easy to create a "Web 2.0 security" planet with a few clicks is very cool. That and a mailing list or two and a calendar and so forth, and you're really going someplace.
http://nxsy.org/writing-an-aggregator-in-a-few-easy-steps
setlinebuf() — Assign line buffering to a stream

Synopsis:
#include <unix.h>
int setlinebuf( FILE *iop );

Arguments:
- iop — The stream that you want line buffering applied to.

Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The setbuffer() and setlinebuf() functions assign buffering to a stream. The types of buffering available are:
- Unbuffered — Information appears on the destination file or terminal as soon as it is written.
- Block-buffered — Many characters are saved and written as a block.
- Line-buffered — Characters are saved until either a newline is encountered or input is read from stdin.

Returns:
No useful value.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/setlinebuf.html
AS 2.0 and class question — Newsgroup_User, Jan 29, 2007 6:12 PM

I put the following, which is for my buttons, into an .as document:

1. Re: AS 2.0 and class question — butcho, Jan 29, 2007 8:47 PM (in response to Newsgroup_User)

Hi, my first question is how did you link your nav button to the ".as" file? It seems like you linked it through the library, and if so, your code is not a declaration of a class. Every class needs to be declared as a class, or it will only be some ActionScript code in a ".as" file. You should try something like this (if you linked the class through the library):

2. Re: AS 2.0 and class question — butcho, Jan 29, 2007 9:05 PM (in response to Newsgroup_User)

Me again. You may also want to not link your nav button to the class. You can just instantiate the class and pass the nav button as a parameter (composition). So on your timeline you instantiate it:

    var nav:Navigation = new Navigation(_root.nav.contact_btn);
    // note that this way your Navigation class needs to be in the same folder as your fla file
    // if you want to have your class in a sub-folder use this (assuming that the sub-folder is called "classes")
    import classes.Navigation;
    var nav:Navigation = new Navigation(_root.nav.contact_btn);

So now the class will go like this:

3. Re: AS 2.0 and class question — Peter Lorent, Jan 30, 2007 2:13 AM (in response to Newsgroup_User)

Show the complete code, Brian.

4. Re: AS 2.0 and class question — Newsgroup_User, Jan 30, 2007 10:03 AM (in response to Newsgroup_User)

OK, I think this might be getting more complicated than I wanted to make it. I just wanted to have the code for my button in an external .as document. Does that mean I have to make it a class? Right now I have two external .as documents. My main fla has the following AS in it:

    stop();
    #include "capitola.as"
    #include "nav.as"

capitola.as looks like this and is just the opening sequence so far. So this is not a class, just an external .as document.
    import com.mosesSupposes.fuse.*;

    ZigoEngine.register(Fuse, PennerEasing, FuseFMP);

    // Initial Opening Sequence of the site
    //*************************//
    function initialOpenSequence():Void {
        trace("openSequence has started");
        this.attachMovie("site_frame", "site_frame", this.getNextHighestDepth(), {_x:0, _y:0, _alpha:0});
        this.attachMovie("logo", "logo", this.getNextHighestDepth(), {_x:100, _y:26, _alpha:0});
        this.attachMovie("mainImg", "mainImg", this.getNextHighestDepth(), {_x:80, _y:90, _alpha:0});
        this.attachMovie("line", "line01", this.getNextHighestDepth(), {_x:80, _y:85, _alpha:0});
        this.attachMovie("line", "line02", this.getNextHighestDepth(), {_x:80, _y:280, _alpha:0});
        this.attachMovie("nav", "nav", this.getNextHighestDepth(), {_x:80, _y:290, _alpha:0});
        this.attachMovie("indexText", "indexText", this.getNextHighestDepth(), {_x:110, _y:375, _alpha:0});

        var fOpenSite:Fuse = new Fuse();
        fOpenSite.label = "openSequence"; // label is a convenience for traceItems() calls.
        fOpenSite.autoClear = true;       // set instance to self-destroy after completion
        fOpenSite.push([
            {target:site_frame, _alpha:100, seconds:3, ease:"easeOutQuad"},
            {delay:1, target:logo, _alpha:100, seconds:1, ease:"easeOutQuad"}
        ]);
        fOpenSite.push({target:mainImg, _alpha:100, seconds:2, ease:"easeOutExpo"});
        fOpenSite.push({target:line01, _alpha:100, seconds:2, ease:"easeOutExpo"});
        fOpenSite.push({target:line02, _alpha:100, seconds:2, ease:"easeOutExpo"});
        fOpenSite.push({target:nav, _alpha:100, seconds:2, ease:"easeOutExpo"});
        fOpenSite.push({target:indexText, _alpha:100, seconds:2, ease:"easeOutExpo"});
        fOpenSite.start();
    }
    initialOpenSequence();

    function afterFuse():Void {
        trace("fuse complete");
    }

Then I have the code for the navigation in nav.as, and that is it so far:

    this.nav.contact_btn.onRelease = function():Void {
        trace("test");
    };
    this.nav.contact_btn.onRollOver = function():Void {
        trace("test");
    };
    this.nav.contact_btn.onRollOut = function():Void {
        trace("test");
    };

That is all I have.
I was just wondering why I am getting that error message from the nav.as document? Thanks for any help! Brian

5. Re: AS 2.0 and class question — Peter Lorent, Jan 30, 2007 10:57 AM (in response to Newsgroup_User)

>> Does that mean I have to make it a class?
No, normally you don't. As strange as it might seem, the error occurs in capitola.as, and specifically the lines:

    15. this.attachMovie("nav", "nav", this.getNextHighestDepth(), {_x:80, _y:290, _alpha:0});
    27. fOpenSite.push({target:nav, _alpha:100, seconds:2, ease:"easeOutExpo"});

If you comment those lines, the script validates. You cannot use the word 'nav' for the linkage identifier of your movieclip and also name your .as file nav.as. Rename nav.as to navigation.as and delete nav.as. Then you should be in the clear.

6. Re: AS 2.0 and class question — Newsgroup_User, Jan 30, 2007 1:04 PM (in response to Newsgroup_User)

LuigiL, thank you so much! I was actually really wondering about it, because I commented out the include for nav.as and was still getting the error. Thanks, Brian

7. Re: AS 2.0 and class question — Peter Lorent, Jan 31, 2007 12:39 AM (in response to Newsgroup_User)

You're welcome.
https://forums.adobe.com/thread/176526
Conservapedia talk:What is going on at CP?/Archive111

Tony Sidaway's view of CP's breakdown...

... can be found here. It links the downtime of CP to the publication of the hit-list. On the talk page of said hit-list, Tony Sidaway made the case that the list didn't originate with a vandal. For your pleasure, I've preserved the talk site:

Alright, first off, Begich is from Alaska. I've fixed that. But this article needs to die very quickly. It reads like a hit list, in all honesty. Everything we say is true, I believe, but this could look very bad...
--DReynolds 22:42, 21 January 2009 (EST) - It's a very relevant article, especially when Senate appointments by governors have been such a big issue (although not in this context) this past election. On the contrary, we should also have a corresponding list here or in its own article for Republican senators in states with Democratic governors.--Fairfieldrr 09:40, 24 January 2009 (EST) - I must have had decaf this morning. Also, article edited to sound less damning. --Fairfieldrr 11:29, 24 January 2009 (EST) Agreed Terrible idea. Why is Johnny Isakson, who is a Republican, listed? This "Article" Was The Work Of An Internet Parodist/Vandal Conservapedia in no way sanctioned it, and cannot, because of the wiki format, completely stop the work of political terrorists, who are intent upon and dedicated to mocking our conservative, Christian-friendly encyclopedia. Thank you. --₮KAdmin/Talk Here 16:36, 24 January 2009 (EST) - The article was created by User:QWest. I have examined his contributions and, on the face of it, he seems to be for real. - Qwest's first contribution, on Talk:Main_Page, was shortly after the selection of Palin as Vice Presidential candidate. He said he had problems with that. "Women are supposed to be subordinate to men, how can I take one seriously as a Vice President?" This was taken by someone as parody, but he asserted that it was his honest opinion, and told another user: "I'm sorry you're in denial about the fact that women are rarely capable leaders." - On 7 September, he came up with the Obama "My Muslim faith" slip, to which Conservapedia still refers in one or two of its articles. - He later edit warred to put "Barack Obama" into a list of prominent Muslims in the "Islam" article. - He contributed "Abstinence only sex education doesn't work" to "Liberal Myths About Education" (it's still there). - He started the "Obama and Socialism" section to the Barack Obama article. 
- He contributed "The flagellum" (about the bacterial flagellum), Consciousness, symbiosis and the bat to "Counterexamples to evolution".
- I don't believe these were acts of parody. I think he meant them sincerely. --TonySidaway 17:02, 24 January 2009 (EST) l'arronsicut fur in nocte 07:15, 27 January 2009 (EST)
- Obviously Tony Sidaway is not fully acquainted with what goes on at CP. QWest just happens to be the name of an ISP and a history of "good edits" is just what you need to plant wikiweeds. TK is being disingenuous and attempting some damage limitation. P/Vs post lots of similar stuff and CP is quite happy to use it if they feel it supports their PoV (c.f. Bugler). Генгисevolving 07:36, 27 January 2009 (EST)
- Hmmmm. Fascinating theory, but not a terribly convincing one in my eyes. To think that they forcibly nuked a full week of edits (and their server) just to make it look like one article never existed is overkill even for CP. They could have just nuked the article (followed by a recreation with TK's damage control note), claiming that it was the work of a parodist. If it was true, it would be the most notable Pyrrhic victory by CP to date. --Sid 07:56, 27 January 2009 (EST)
- Yes. But unfortunately whenever CP tries to use a 'surgical strike' to destroy a single page they end up destroying the entire wiki, surrounding areas, UN headquarters, John Simpson, etc. Say surgical strike in D Rumsfeld accent and it will all become clear. StarFish 08:02, 27 January 2009 (EST)
- The hit-list wasn't the most absurd or despicable thing CP produced - but it's one of the few items which made the news, at least a few of them ;-)
- I suppose that Ma Phillys has some eager eagles to watch google news - and they will find this:
- So, nothing about the world's greatest class on American history or the youngest world in the history of the universe, but a hit-list!
Maybe, this triggered some action: I've this picture in mind of a couple of young interns forced to wade through the articles on CP, torn between laughter and dismay... l'arronsicut fur in nocte 08:20, 27 January 2009 (EST) - I did assume at first that the outage was just a scheduled dbms upgrade that failed. Similar things used to happen in Wikipedia in the early days when they didn't have a full time professional systems team. Even comparatively recently, in 2006, I've found the Japanese Wikipedia in a complete muddle due to a botched upgrade. But the coincidence of the Conservapedia outage, and the date of the backup from which Conservapedia is now working, have slowly persuaded me that it probably has something to do with the "hit list" article. - Conservapedia can afford to be pretty relaxed about this. An outage of a few days is okay as long as they can continue to serve up the pages (so that their google ranking doesn't slip as the spiders get 404s or other errors). Unlike Wikipedia they have a closed membership with a very small inner core of editors who will wait as long as it takes. They have nowhere else to go, as far as I'm aware. The only real down side is that Schlafly will have to find another way of contacting his American History students. - Thanks for preserving my talk page comments. As I remarked later on my blog, the fact that QWest wasn't immediately blocked, and TK made no overt effort to contact him, seems suspicious if TK really does believe the article was created as parody. I looked at the first revision and it's very similar to the version shown in a screenshot on Wonkette's blog posting. But yes, he could be a deep troll. So could they all,for that matter (except Ed Poor whom I know from Wikipedia, and who is all too real). Poor Andy. --Tony Sidaway 10:28, 27 January 2009 (EST) - Isn't TK the one who likes off-wiki communication and insists on email/IM for some things? 
I'm just not sure we can assume that nothing has been discussed simply because it's not on CP. Worm (t | c) 10:48, 27 January 2009 (EST) - Hey Tony. I'm not sure if you'll read this or if it even matters if you do but TK himself is, as you say, "a deep troll" and something of a parodist (or at least his participation in CP is not wholly sincere). He has had one visible and public crash and burn at CP and his reinstatement in their ranks is something of a mystery to us. We here at RW have come to expect every word he types to be indistinguishable from bovine excrement. He has in the past used multiple channels of communication to sow confusion and dissent between and among "legitimate" CP editors, parodists, and legitimate RW editors alike. He is not of super villain caliber but he definitely is not playing the same game rest of us are playing. Have fun! Exasperate me!Sheesh!Not the most impressive contributor here 11:57, 27 January 2009 (EST) I have to side with those who think the timing is a coincidence. Andy tried to use computers and the results were disastrous. I think to read too much into the "hit list" appearance is to embrace the post hoc fallacy. This wouldn't be the biggest embarrassment CP has ever encountered, and it's not like the article actively encouraged assassinating senators. It would have been very easy to delete it, rather than crashing the entire site. As for whether or not the original author was a parodist, well, there's no way of knowing, and it doesn't matter. The parodists and the earnest editors are indistinguishable, and even if it was the work of a parodist, the fact that his edits have long been endorsed and approved of shows it wasn't simply a case of unobserved vandalism. In fact, vandalism at CP is almost never unobserved, as the ratio of unblocked, active editors to sysops and semi-sysops allows almost every non-sysop edit to be checked. 
Vandalism generally only prospers if it cannot be recognized as vandalism, as would be the case here. The fact that they base what is or is not vandalism on the editor rather than the content shows just how fucked up the place is. DickTurpis 13:18, 27 January 2009 (EST) - Surely the simplest explanation is: they broke something over there (exactly how doesn't matter), it's taking time to fix, and so Andy threw a recentish backup of the database onto another server, thus enabling homeskollars the world over to continue to access untainted-by-liberals-information in the meantime. Why invent conspiracy stories when there's a simple alternative? alt 13:27, 27 January 2009 (EST) - That does satisfy both wp:Occam's razor and Hanlon's_razor - the incompetence of some install messed it up. Or as the Bits have said - "Cock-up before conspiracy." Still, the list hitting the news and the revert to backup... and the name change gets me. If it was just the server down, that would be one thing. But it was server down and name change as if he wanted the name conservapedia itself to be removed from one's memory and instead start "fresh" with conservativeencyclopedia that didn't have the hit list taint. I'll go with incompetence, but there's still that notion at the back of my mind. I still smirk at the idea of Kenny looking at google and watching his page rank fall. So, here's the question - how long before it comes back up, will it be from backup or where it was before, what name, isp will it resurface on? This should give a better idea if it was intentional super deep burn or incontinent admins. --Shagie 14:21, 27 January 2009 (EST) - I've updated my blog entry. I think this was wild speculation on my part and I've withdrawn it. --Tony Sidaway 19:21, 27 January 2009 (EST) "[I]t is far easier for a conservative to detect conservative parody than for a liberal to do so, and vice-versa." - TK, in the comments on Tony's blog. 
I'm sure most of us here would disagree with him on this, given how most of us here speculated that Bugler was a parodist, even while he was getting plaudits from most established CP users, including Andy himself. Dreaded Walrus 19:40, 27 January 2009 (EST)

OpenDNS

I'm getting the following OpenDNS error now. Hmm, isn't loading right now. The computers that run are having some trouble. Usually this is just a temporary problem, so you might want to try again in a few minutes. I'm not sure if this means they've cocked up their DNS records, in which case things are going from bad to worse, or if it's just that they're in the process of moving back to the main domain. Technical problems are fun - when I don't have to fix them. StarFish 10:32, 27 January 2009 (EST)
- I'm still getting etc. Love² 10:44, 27 January 2009 (EST)
- It's still loading fine for me, with a temporary redirect to . --Tony Sidaway 10:45, 27 January 2009 (EST)
- I guess it will depend on who your ISP is. But for sure they have f'ed up their DNS. It just gets better and better. This is what I'm seeing for any CP page now. I guess if I went to directly that would work, but all google searches etc give the following for me. StarFish 10:54, 27 January 2009 (EST)
- Google led to. Must be your ISP. :) Love² 10:59, 27 January 2009 (EST)
- Wouldn't you think that they'd put a notice on the front page about their problems, for all those users who aren't sysops who presumably know what the hell is going on via email or IM? It's kind of bizarre that they'd let the site be dead for two days and still not say a word about it. --JeevesMkII 11:04, 27 January 2009 (EST)
- (EC) DNS takes time to replicate and different ISPs replicate the data at different speeds / times. The source server currently contains no valid DNS data (anyone can check this using a dns lookup). So blank DNS data will get to you eventually (12 hours or so max) at which point the forward will fail and you will see the OpenDNS thing.
That's my understanding anyway. Anyone else want to back me up (or shoot me down)? I don't want to say too much here because I don't want to help them with their technical problems! StarFish 11:04, 27 January 2009 (EST) - Nah, it's just those morons are moving their DNS arrangements to their own server (with no backup secondary or tertiary ns, facepalm.) The new whois for conservapedia says ns1.conservapedia.com and ns2.conservapedia.com (The same host, also the same host as the HTTP is on.) --JeevesMkII 11:28, 27 January 2009 (EST) - Does anyone know the IP for CP? I have the FF extension which shows it in the status bar but forgot to make a note. (Dang!) Генгисevolving 11:30, 27 January 2009 (EST) - Open terminal, type ping conservapedia.com. --JeevesMkII 11:32, 27 January 2009 (EST) @Jeeves: There's no way they'd put up a message like that. It would be tantamount to admitting they made a mistake. As has been previously noted, when the whole thing is over, it will never be mentioned again, and asking about it will probably lead to banhammering. Z3roh3ros 11:38, 27 January 2009 (EST) So it seems apparent that RW is pretty boring without CP around...[edit] ...admit it. Without them, we're nothing. Karajou, you complete me. Ed, I'm not sure I can go on without you. Andy, you had me at "Godspeed." Please come back soon. TheoryOfPractice 12:49, 27 January 2009 (EST) - Just wait for CUR to come back online -- Nx talk 12:56, 27 January 2009 (EST) - And if CP stays down, where are we going to find a site just as nutty and so thoroughly hypocritical to tear a few arseholes into? ENorman 13:02,) - AH GOD YES! --GTac 13:16, 27 January 2009 (EST) - I think it has to be Freep, or Human Events, or American Thinker, or Comfort Food (Ray Comfort's blog)... there's a lot of bad politics & woo out there.-Diadochus 13:18, 27 January 2009 (EST) - Frankly, if CP is gone then everybody wins. 
I'll sacrifice the entertainment that CP provides for the knowledge that Andy's poisonous ideas no longer have a broad audience. Stile4aly 13:19, 27 January 2009 (EST) - They have a broad audience? You mean, like the 12 editors who haven't been banned?EternalCritic 13:30, 27 January 2009 (EST) - No, no. They have an audience of broads. - Poor Excuse 00:17, 28 January 2009 (EST) - Any audience outside Andy's own head is too broad. Poe's Law cuts both ways. Although some parodists are mistaken for true believers, some true believers are mistaken for parodists as well. Too many people actually believe some of the stuff Andy talks about, most tragically his students. I truly hope that whatever Andy has done to bugger up CP is permanent and irreversible and that he won't have the inclination to restart this clusterfuck of a project. Let him retire to his sinecure with JPANDS muttering about how Obama is a Muslim and Liberals are deceitful. Stile4aly 15:54, 27 January 2009 (EST) - WorldNetDaily [1] has good giggle-value as well; the entire site exhibits a ♥ K e n D o l l ♥-style obsession with homosexuality. ListenerXTalkerX 13:32,) - Ahhh, it's good for a quick fix. EddyP 13:47, 27 January 2009 (EST) - Come back, CP! Without you, I'll need to go back to working or (more likely) find some other website to waste time on. Please spare me this fate. Z3rotalk 14:23, 27 January 2009 (EST) - Conservapedia isn't the beginning and end of the world! There's plenty of other insane things out there. Let's just live a little and enjoy the time off! ArmondikoVpostate 14:36, 27 January 2009 (EST) - At least there's still Faux News. ~Ttony21(talk, contribs) 14:45, 27 January 2009 (EST) (undent) The quote generator snippets above just gave me a lovely vision of a flash-based widget called "The Magic Wingnut", which starts in an upright position like a pair of Mickey Mouse ears, then is turned upside down to reveal a random CP-ism. 
Wish I had the programming chops to do it, but I don't, so I'm releasing the suggestion into the wild. --SpinyNorman 16:29, 27 January 2009 (EST) Question[edit] When typing conservapedia.com, is anyone else getting redirect to conservativenecyclopedia.com? Did Andy fuck up and not renew his lease on the domain name and lose it or something? DickTurpis 12:58, 27 January 2009 (EST) - Ummm, have you been out of town or something? Discussed in detail above. Short answer: it's borken..TheoryOfPractice 13:04, 27 January 2009 (EST) - (EC) according to whois: Registered through: EasyDomain.com Domain Name: CONSERVAPEDIA.COM Created on: 28-Aug-06 Expires on: 28-Aug-15 Last Updated on: 27-Jan-09 - so no, it's still his -- Nx talk 13:08, 27 January 2009 (EST) - I've been checking out easydomain and their hosting servers in the hope of finding some promiscuous sites being hosted on the same server (remember, some of wikipedia's content uses servers which are also used by a company that makes NUDITY websites ZOMG EVUL!!!!), but alas I can't find much. Someone want to sign up through easydomain and make a skin site just to piss off Andy? --GTac 13:13, 27 January 2009 (EST) - I know it's been borken, but until now I was still at east getting directed to conservapedia when searching for it, albeit it to an error page or jumbled nonsense. Getting directed to a different site entirely (one evidently unconnected to CP at all) is a new development on my end. Under what circumstances does that happen? Certainly I've tried to access sites that were down for whatever reason, but I recall only ever getting the site's error message or the generic firefox message. I don't think I've ever tried going to Amazon.com and instead being sent to "Amazonianbooks.com" or something. 
DickTurpis 13:24, 27 January 2009 (EST)
- As far as I understand, conservapedia.com started redirecting to the other url sometime yesterday--first via a page that told you about the redirect, and then directly...but it seems as if, for some reason i can't begin to comprehend, different people have been having different results/experiences...TheoryOfPractice 13:32, 27 January 2009 (EST)

Is anyone connected enough....

...to e-mail Ed or someone else and find out what's going on? TheoryOfPractice 16:16, 27 January 2009 (EST)
- Oh, we all know what's going on. The Assfly is attempting to fix his wiki, and epically failing at it. About now he's probably realised that PERHAPS, just perhaps, it isn't a terribly brilliant idea for the fucking nameservers for a domain to be under the same fucking domain name as the one to be resolved. Morons. Seriously, the brilliance of making people resolve ns1.conservapedia.com to resolve conservapedia.com. It boggles the mind. --JeevesMkII 16:23, 27 January 2009 (EST)
- You could pester Ed Poor at Wikipedia. His page says he doesn't discuss non-WP stuff but he might make an exception in this case. Don't forget to stroke his ego while you're there and he may be amenable. Yours trulyDear Sir 16:26, 27 January 2009 (EST)

It's all pretty simple, really

1. It starts with a crash of some sort at the hosting site (could be hardware, could be massive bandwidth spike?)
2. Andy decides he might as well move the site to a new server, perhaps at his house, since he was thinking of doing it anyway
3. He posts the last backup he had at another domain name that he also owns, at the old host site (or rather, the host does it)
4. All the missing server stuff is due to him trying to get the DNS info set up to point to the new server
5. Because he doesn't really know what he's doing, what took Trent hours is taking him days
6. CP will be back
7. the week of 1/19 - 1/26 is forever lost due to the backup not being recent
8.
One caveat on #6 is: if for some reason whoever pays his salary(s) got really miffed over the Wonkette piece and they are leaning on him hard, CP might never be the same. But really, I think we'll just see it back up later this week, missing a week of edits & c. He may even be waiting to propagate the DNS info until after he gets the whole thing running right again (remember, that took us several days of running "live with bugs") The only thing I can't "explain" is why the temporary holder site (CE) is down as well - unless perhaps he pooched all the DNS info for domains he owns by accident. The timing is just a wonderful coincidence, with the backup being the day before the inauguration, and before the "hit list" article was created. ħuman 16:19, 27 January 2009 (EST) - please explain DNS to me like I'm a six-year-old....TheoryOfPractice 16:23, 27 January 2009 (EST) - It's the magic stuff that lets websites play. And there's a Santa Claus. Yours trulyDear Sir 16:31, 27 January 2009 (EST) - Computers navigate around to each other on the internet using a numerical address, that is how they know who everyone is. This numerical address is the IP. To access a website you have to know its IP address. But as you know we goto websites based on words like "conservapedia.com." This is accomplished by a domain name server, the DNS, which stores the relationship between name and IP address. So when I type "conservapedia.com" the browser checks the DNS and asks "what IP address should I goto for the name conservapedia.com". As Jeeves pointed out there are a host of problems associated with making your DNS server the same domain as your website. tmtoulouse 16:47, 27 January 2009 (EST) Maybe he pulled the CE site when he realised using a week-old version looked suspicious & was causing further speculation about the (probably non-existent) cover-up. 
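The name-to-IP lookup tmtoulouse describes, together with the TTL caching behaviour Shagie walks through further down the page, can be condensed into a toy resolver cache. This is an illustrative sketch only (the class and the record values are hypothetical, echoing the thread's 1.2.3.4 / 5.6.7.8 example), not how any real resolver such as BIND is implemented:

```python
import time

# Toy caching resolver: a cached answer is served until its TTL expires,
# so two resolvers that queried at different times can legitimately return
# different IPs for the same name.
class CachingResolver:
    def __init__(self, lookup):
        self._lookup = lookup   # function: name -> (ip, ttl_seconds)
        self._cache = {}        # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(name)
        if hit and now < hit[1]:     # still within TTL: serve cached answer
            return hit[0]
        ip, ttl = self._lookup(name)  # cache miss or expired: ask upstream
        self._cache[name] = (ip, now + ttl)
        return ip

# Simulate Andy repointing the record while one resolver still has it cached.
records = {"conservapedia.com": ("1.2.3.4", 3600)}   # 1 hour TTL
resolver = CachingResolver(lambda name: records[name])

first = resolver.resolve("conservapedia.com", now=0)     # caches 1.2.3.4
records["conservapedia.com"] = ("5.6.7.8", 3600)         # record changed
stale = resolver.resolve("conservapedia.com", now=300)   # 5 min later: still cached
fresh = resolver.resolve("conservapedia.com", now=3601)  # TTL expired: new IP
```

A freshly-asking resolver would see 5.6.7.8 immediately, while one holding a cached answer keeps returning 1.2.3.4 until its TTL runs out - which is why DNS changes "take a while to propagate".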
Wēāŝēīōīď Methinks it is a Weasel 16:28, 27 January 2009 (EST)
- (EC) I would guess that he simply pulled the plug on it because the restored site is back up but has not rippled through the DNS yet.
- @ToP - DNS or domain name servers are the lookup tables which link "friendly" domain names to BoNs and the subdirectories where a site is hosted. Many ISPs have their own local versions but it takes some time for changes to be propagated to all the other servers. If you know the IP you can go to the site directly without waiting for a DNS update. Генгисevolving 16:50, 27 January 2009 (EST)
- Thanks, Trent. Thanks Ghengis...Fuck you, Insert name here.TheoryOfPractice 16:51, 27 January 2009 (EST)
- "please explain DNS to me like I'm a six-year-old" Isn't that what Insert Name Here did? You should be more specific about what you ask for. - User 17:00, 27 January 2009 (EST)
- You, my numerical friend, have obviously not dealt with many six-year-olds. Kids aren't idiots, and they know bullshit when they hear it. TheoryOfPractice 17:03, 27 January 2009 (EST)

How things work - DNS

You've got a name of something. Let's say it is "". Your computer first asks its store of names "do I know what the ip address of this is?" On a unix based system, this local store is in /etc/hosts. On a PC, it is c:\windows\system32\drivers\etc\hosts or something similar. Failing to find the ip address there, the computer then checks its local cache of dns lookups it has done recently. At this point, the computer still doesn't know the ip address, so it goes out to the list of name servers it knows - on unix this is found in /etc/resolv.conf. The computer then asks the name servers listed if they know the ip address for the name. The first thing the name server does is check its local cache of names it has found recently. Failing that, it goes and asks the name servers it knows of. The name servers are in a hierarchy, and queries eventually propagate up to the root level name servers.
When a dns server finds out the answer for a name lookup, part of the answer includes a serial number and a TTL (time to live). If the serial number is less than the serial number you last found, the data is older and is ignored. If a request comes in while still within the TTL of when the server last got data for the name, that cached information is returned. What often happens is a name is looked up by my name server now and I get an answer of, let's say, a 1 hour TTL and 1.2.3.4 for the ip address. Then Andy goes and changes it to point to 5.6.7.8 5 minutes later. My name server still has 55 minutes before it goes and looks for new data. But if you haven't gotten data for it and you look 5 minutes after Andy changes something, you get 5.6.7.8 while I am still looking at 1.2.3.4. In another 55 minutes my dns server will say "oh, this data is old" and go look for new data and pick up the change to 5.6.7.8. --Shagie 17:57, 27 January 2009 (EST)

wouldn't it be deliciously ironic....

.... if the FBI was investigating Conservapedia for the hit list, and that's why they've gone off-line? MDB 17:34, 27 January 2009 (EST)
- Delicious, sure, but totally unlikely. This has administration fuckup written all over it. --JeevesMkII 17:36, 27 January 2009 (EST)
- Yeah, but a bear can dream, can't he? MDB 17:39, 27 January 2009 (EST)
- Actually the most delicious part of this is what it is doing to ♥ K e n D o l l ♥. Do you realise we're only two places from beating his evolution article on yahoo search? Couple more days of this and a visit or two from that Yahoo bot and we'll be able to really rub his nose in one of his "competitive" searches. --JeevesMkII 17:40, 27 January 2009 (EST)
- Poor Ken, back to link farming! tmtoulouse 18:05, 27 January 2009 (EST)
- My own fantasy is that a bunch of people in black uniforms broke into Andy's mom's basement:
- "Andrew L. Schlafly, we're from the secret service. Unplug that server.
We're confiscating it and taking it downtown. And the gerbils in their cage, too. You're under arrest for making threats against members of congress. This is no joke." - Possible, though not likely. - Gauss 17:55, 27 January 2009 (EST) - I don't seriously think that the "hit list" has anything to do with all this. Andy probably just borked things while trying to move/upgrade the server and that's it. - Something I'm more afraid of, is that Andy's interest in it is probably fading while it's offline. Editing CP seems to have drug-like effects on him, and perhaps now that he's been without his kicks for a few days, he's becoming clean? If he just takes a few steps back, he'll realize that he's wasting his life on the highway to failure. - In other words; Andy, if you are reading this - please, hang in there! Don't give up, don't let Conservapedia die! If you need technical support, please contact me. I'm an experienced web master and I'm sure I can help you get your server up and running in no time. Just STAY AWAY from the light! Etc 18:46, 27 January 2009 (EST) While I agree with Jeeves, it is entirely possible that "someone" at least had to carefully look into that hit list to make sure it was just innocent stupidity. ħuman 19:00, 27 January 2009 (EST) Wikiforkids[edit] Just wanted to remind you about this, a Wikipedia for kids, I thought some of our newfound free time could be spent doing something productive. We could also invite the sane editors at CP to take away whatever labor force they still have. There I go thinking about CP's downfall again. NightFlare 18:12, 27 January 2009 (EST) Anyone want to guess...[edit] When in five or six days time, they get some semblance of a wiki back on t'internet, what domain name they'll end up using? I'm hazarding a guess that conservativeencyclopedia.com is just the kind of dumb arse name that really appeals to the Assfly. The doubled vowel that makes it a real pain to type is just up his street. 
I bet he thinks it makes them seem professional. Also, anyone want a sweepstake on when they get it all fixed? I'm betting on Saturday, around 2PM UTC. --JeevesMkII 18:14, 27 January 2009 (EST) - That double vowel is pretty stupid. --"ConservapediaUndergroundInductorfeline fanatic 18:16, 27 January 2009 (EST) - Yesterday, the first time I saw conservativeencyclopedia.com, out of the corner of my eye I thought it read "conservativeteenencyclopedia" with all those e's and n's. And then I thought of Ed Poor and shuddered....TheoryOfPractice 18:17, 27 January 2009 (EST) - How do we know that the Assfly isn't really a bored teenager? --"ConservapediaUndergroundInductorfeline fanatic 18:18, 27 January 2009 (EST) - Last whois I ran shows a DNS other than conservapedia.com so that is something. Though nslookup is still returning servfail and NXDOMAIN so things are right fucked up. I am still thinking this will be resolved before the weekend, and we will be back on conservapedia.com.....too much name recognition to abandon it. tmtoulouse 18:19, 27 January 2009 (EST) - DNS is resolving but we are getting the default apache page on the site it looks like. It is progress though. tmtoulouse 18:34, 27 January 2009 (EST) - Didn't he register a handful of names similar to CP all at the same time? I think he just pulled one out of dormancy to "float" the backed up copy on. ħuman 19:02, 27 January 2009 (EST) - He has a whole bunch of names registered, though you have to pay to find them out on that site. Trial and error shows he has andyschlafly.com and aschlafly.com, and presumably other ones related to/misspellings of the two we've seen so far. alt 19:11, 27 January 2009 (EST) - But does he have assfly.com?Hactar 11:47, 28 January 2009 (EST) - If memory serves, there's one beginning with "R" as well. 
(Toast) and marmalade 11:52, 28 January 2009 (EST)

it's back...

Kinda...and Andy's already improvulating his history course...TheoryOfPractice 18:48, 27 January 2009 (EST)
- Weird. "We're upgrading to a new server" during which time they lost a week's worth of edits and were down for the better part of three days, and they don't even mention it. --JeevesMkII 18:50, 27 January 2009 (EST)
- Should anyone tell them their logo isn't showing up? ħuman 19:05, 27 January 2009 (EST)
- Let him figure it out himself. We might get 'trusworthy' again. --"ConservapediaUndergroundInductorfeline fanatic 19:10, 27 January 2009 (EST)

Andy gets the WIGO train stoked up right from the start. Andy you are so kind to us: The dates given below are imprecise, and the homework encourages you to improve on them:
- Creation of Adam and Eve 3700-4004 B.C.
- Flood perhaps 3300 B.C.
- Tower of Babel perhaps 2500 B.C.
His second edit was to add the "perhaps"es... ħuman 19:13, 27 January 2009 (EST)
- LOL: CPWebmaster proudly announces the result of a week's hard work!! Etc 19:17, 27 January 2009 (EST)

How unfortunate. I created an account there a couple of days ago with good intentions and all my hard work (and my username) has vanished. Ah well, perhaps this is a sign I should stay away from teh Conservative Encyclopedia? MIP has actually signed in - 20:39, 27 January 2009 (EST)

Broken their links

For me, at least, all CP addresses now require a "/wiki" in the URL - example,. Well, good news everyone! That means ALL INCOMING LINKS TO THE SITE ARE NOW BROKEN. So long, Ken's swap-linking deals!-Diadochus 20:26, 27 January 2009 (EST)
- This is a bad sign -- Nx talk 20:29, 27 January 2009 (EST)
- I get that too. Unless they fix it soon, their search engine rankings (which for some subjects such as atheism are quite respectable) will suffer. --Tony Sidaway 20:55, 27 January 2009 (EST)
Frak!
They fixed it.-Diadochus 00:35, 28 January 2009 (EST) Yeach![edit] I'd been told they'd gone offline but not the whole gory story. How humiliating for Andy - to be banninated from his own site. (Toast) and marmalade 10:32, 28 January 2009 (EST) Document the 'crash' as an article[edit] Shouldn't we have something like Conservapedia:Crash documenting all the lulz the Assfly's cock-up generated. Can someone who followed what was going on more closely save the story for posterity. Auld Nick 13:57, 28 January 2009 (EST) - On second thoughts Conservapedia:The week that never was. Auld Nick 05:43, 30 January 2009 (EST) Finally something to WIGO[edit] Usually, revealing obvious and childish wandalism such as this and this isn't my cup of tea, but the fact that it starts pouring in less than half an hour after the site starts working again, cracked me up. Good work, everyone! Etc 19:38, 27 January 2009 (EST) - Oh wait, I just noticed a big difference; CP doesn't have the /wiki/ prefix on links anymore, that is, is not. I guess we'll have to update all non-permanent links to CP, and interwiki-links (like CP:Goat) no longer works as expected. Anyone with magical powers who knows how to change that sort of stuff? Etc 19:42, 27 January 2009 (EST) - I guess sooner or later they'll get their mod-rewrite-fu on and it'll start working again. I'd hold fire on changing things until whatever they're doing has stabilised. --JeevesMkII 19:48, 27 January 2009 (EST) - Was about to say the same. Right now, it's all very raw there. No Conserv skin, no logo, etc. --Sid 19:49, 27 January 2009 (EST) - I don't know much about MediaWiki, but is there any particular reason to have the /wiki/ prefix on urls, except, obviously, if you have more stuff on the server than just the wiki? Etc 19:55, 27 January 2009 (EST) - The /wiki/ thing is back. 
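On Etc's question above about why wikis bother with the /wiki/ prefix: MediaWiki's usual "short URL" scheme maps the pretty path onto the real install directory with an Apache mod_rewrite rule. A generic sketch following the common MediaWiki-manual pattern (the /w install path and settings here are illustrative assumptions, not CP's actual configuration):

```apache
# Map pretty URLs like /wiki/Page_name onto the real MediaWiki
# install living under /w (the wiki itself never moves).
RewriteEngine On
RewriteRule ^/?wiki(/.*)?$ %{DOCUMENT_ROOT}/w/index.php [L]

# The matching LocalSettings.php entries would be:
#   $wgScriptPath  = "/w";
#   $wgArticlePath = "/wiki/$1";
```

Keeping the install under a separate path like /w also makes it easy to exclude raw index.php URLs (edit pages, old revisions) via robots.txt, as wwwwolf notes in the thread.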
- User 20:47, 27 January 2009 (EST) - They have always in the past used conservapedia.com/$1 as far as advantages/disadvantages go, if you are not working with multiple things like we are with our forums it is just preference and ease of configuration/options for configurations. I am sympathetic on this point. URL redirects are an absolute bitch, it was one of the last things I managed to get working right after the server move. tmtoulouse 20:55, 27 January 2009 (EST) - Basically, the /wiki/ prefix is just a convenience. MediaWiki basically works just fine when you have access to the index.php file (In a default, out-of-box MediaWiki install the links usually look like /installdirectory/index.php/Page_name), but if you want nice urls with no "index.php" in them, you can use Apache's mod_rewrite, and then the prefix doesn't matter at all - it can be anything or nothing. (It's also convenient for some segregation; I think Wikipedia puts the MediaWiki install location (/w/index.php) in robots.txt, so that the spiders do not bother indexing the entire histories of the articles, edit pages, and all that junk... --wwwwolf (barks/growls) 03:05, 28 January 2009 (EST) - I for one, do not like the childish wandalism --GTac 21:05, 27 January 2009 (EST) Is Croc really trying to reinstate a whole weeks worth of front page stories? --Arcan ¡ollǝɥ 21:15, 27 January 2009 (EST) Obama still not president...[edit] ... on CP. But I'm sure he'll get there. --Horace 21:01, 27 January 2009 (EST) Obama isn't Hussein yet, or at least not with the article title. --Kangaxx 22:44, 27 January 2009 (EST) - They're working on it. They've added the President template, but the article is otherwise still as it was before inauguration: "Obama will likely be the first Muslim President" and so on. --Tony Sidaway 00:57, 28 January 2009 (EST) - The article is now a FANTASTIC mishmash of gibberish. 
Part of the article states he's President, part of it says he's the President-Elect, part of it says he's the one-time Senator of Illinois. Then there's the speculation that he'll use the Koran at his inauguration. Any unfortunate child trying to learn from this encyclopedia would have their head spinning. Oh Dear Andy, yet more Epic Phailure. DogP 15:10, 28 January 2009 (EST) - The timing was sweet justice. We have had to make many edits related to the change of administration in the US ourselves, and we aren't even trying to build a 'pedia. They lost every post-inaugural edit they made, and will have to go through everything that changed all over again. ħuman 15:56, 28 January 2009 (EST) - You couldn't write a worse article if you tried. Yours trulyDear Sir 16:06, 28 January 2009 (EST) Making up for lost time?[edit] Christ, it just got up and there's a shitload of stuff in the CP twisted and biased worldview news section. Andy trying to make up for lost time? ENorman 22:34, 27 January 2009 (EST) - I suspect DeanS was keeping up with his news items "off line" during the break. ħuman 23:10, 27 January 2009 (EST) Strange[edit] Note the IP (209.85.100.44 - it's CP's) Love² 23:13, 27 January 2009 (EST) - These posts are several years old. Do IPs get recycled? TheoryOfPractice 23:19, 27 January 2009 (EST) - Not for webservers. - User 00:25, 28 January 2009 (EST) - Bitches be crazy! It's like whoever uses that IP, they're infected with Godspeed. NorsemanCyser Melomel 00:33, 28 January 2009 (EST) (medical hiatus :P) - Surely not every website has its own unique IP? Don't hosting companies have several/many websites in subdirectories for a single IP and the DNS directs to the correct one? Генгисevolving 02:26, 28 January 2009 (EST) Do the bot![edit] the broken links to CP? On wikipedia there are accounts called "bots" that can make edits automatically using a programming language. 
Could that be useful?--216.118.68.193 23:28, 27 January 2009 (EST) - ACD bot could be reprogrammed I guess. Ask Trent. - User 00:23, 28 January 2009 (EST) Maybe we could get Redirect fixer to do it. :) --Marty 02:01, 28 January 2009 (EST) - I think it was noted above that they fixed the /wiki thing, so our links should be good again. Yeah, they did. ħuman 15:58, 28 January 2009 (EST) Blocking[edit] Has the "missing time" resulted in the unblocking of blocks as well? TK's just made a load of /16 blocks in under 20 minutes - looks as if he's exercising his checkuser muscles. Love² 00:22, 28 January 2009 (EST) - I think there is definitely a bug with their IP blocks; for the first time in nearly two years the half a dozen range blocks covering Fairbanks no longer affect me :) Icewedge 00:29, 28 January 2009 (EST) - Bet that'll be taken care of in 5 ... 4 ... 3 .... Love² 00:41, 28 January 2009 (EST) - Bear in mind that as well as losing a week's worth of content, the wiki has also lost a week's worth of user and IP blocks. I suspect that many of these blocks are just repetitions of actions that were carried out in the lost week. --Tony Sidaway 00:49, 28 January 2009 (EST) - No my uni was blocked 2 months ago and I can get back in. - User 00:51, 28 January 2009 (EST) - Did HDCase do something in the missing period? His contribs show no action since last year. Love² 01:23, 28 January 2009 (EST) - Nope. He didn't do anything. Couldn't, he was still banned. TK's just an ass. Barikada 01:38, 28 January 2009 (EST) - Do I recall somewhere on RW listing TK's blocks (can't find anything myself)? He could be using that to reinstate them. The power of love² 04:31, 28 January 2009 (EST) - Yes, look up Conservapedia:Range_blocks and Conservapedia:IP_blocks, but they were both a little out of date. TK has been very busy re-instating the blocks from before. Has he got an offline spreadsheet or something, or does he just reference us? 
A pity, though that he has no control over the User Rights log... Oh Andy, where art thou? Bondurant 06:02, 28 January 2009 (EST) For once, I'm glad that I didn't update Conservapedia:IP_blocks. I assume that TK keeps a log for his blocks, his private hit-list. He reinstated the blocks he had made, but Karajou's blocks of this period seem to be lost. l'arronsicut fur in nocte 06:42, 28 January 2009 (EST) The FLQ's name is pretty now![edit] [2] Pffthahahahaha. Barikada 00:53, 28 January 2009 (EST) - We point it out, TK fixes it. So I won't point out the many articles with the same problem that remain. --Marty 02:19, 28 January 2009 (EST) The difficult we do at once...[edit] From the reverted wandalism linkie above, Andy's World History Lecture One: - One cannot fully understand 9/11, violence in the Middle East, or hostility between India and Pakistan without learning World history. We will. <sings> To dream... the impossible dream... --Marty 02:00, 28 January 2009 (EST) Gentlemen[edit] We have our first phone call on the new lines. Who still uses dial up? - User 02:20, 28 January 2009 (EST) - F! It's CUR! Love² 02:22, 28 January 2009 (EST) - Maybe Ken's a therian too given how much he love cheetahs. Hang on, love cheetahs, can't spell but criticizes others spelling, goes on about how high his IQ is, constantly trying to get our attention; CUR is Ken. 02:35, 28 January 2009 (EST) - I wouldn't imagine that dial-up users would see any difference in their download speed, they are already rate-limited by their modem. Генгисevolving 02:34, 28 January 2009 (EST) - And why their speed should affect the number of views is quite baffling ... unless they've been snowed under previously, ... or something ... or not ... Love² 02:38, 28 January 2009 (EST) - Well his concise articles are fucking long, maybe people get bored waiting for them to load. - User 02:42, 28 January 2009 (EST) - Also why does it take for ever to load anything on their faster server? 
- User 02:43, 28 January 2009 (EST) - 'cos it's down right now. --JeevesMkII 02:45, 28 January 2009 (EST) Ken burnt it; anyone have a screencap? Word cubic Hoover! 16:16, 28 January 2009 (EST) trustworthy - a short rant[edit] Let's see whether I get it right: an enterprise which labels itself trustworthy seems to have lost the combined efforts of some score of volunteers for a whole week, showing thereby that it can't be entrusted with safekeeping the work of its contributors... And the management doesn't give the slightest hint - whether this work is lost altogether - whether it will be retrieved - why it was lost in the first place - whether such data-losses will occur in the future Instead of an explanation, we get this little gem:) Well, we know that it is incompetence pure and simple, but for an honest editor, it has to feel like malice. l'arronsicut fur in nocte 07:13, 28 January 2009 (EST) - AndyJM's post is spot on. I wonder what Andy's reply will be. Or will they just sweep the whole incident under the carpet? -- Nx talk 07:40, 28 January 2009 (EST) - Asshole -- Nx talk 07:45, 28 January 2009 (EST) - Clearly the hamster isn't up to speed yet - database has just been locked down. --PsyGremlinWhut? 07:49, 28 January 2009 (EST) - And the official ungrammatical reply: disk crash -- Nx talk 08:01, 28 January 2009 (EST) - Was there ever an apology more sincere than this one by Andy:We'll do what we can, and apologize for any inconvenience and data loss. Thanks for understanding.--Andy Schlafly 09:09, 28 January 2009 (EST) . Granted, it is hidden at a talk page, and he was beaten to it by PJR ... l'arronsicut fur in nocte 09:54, 28 January 2009 (EST) Dear all, Any chance of sticking to BLACK print on a WHITE background? Old farts like me find the pale green on white very difficult to read! (I'm 59 you know but I bet you'll never be able to guess how old I am.) Mick McT 10:33, 28 January 2009 (EST) - Tha'art but a babby. 
(Toast) and marmalade 11:56, 28 January 2009 (EST)
- May I extol the many and manifold virtues of turning off custom fonts and colours in your browser's preferences? I only wish more people would do it, so that web page authors would get the message that it isn't cool to dick around with these things. Personally, I don't care that you want to have bright cyan text in some irritating custom font, I just want to read the text. --JeevesMkII 10:41, 28 January 2009 (EST)
- The irritating font is an idea I stole from PZ Myers. On his great blog Pharyngula he uses it to highlight the quotes of lunatics. I think it's a good way to mark longer quotations from conservapedia, as done above. But I'll stick to black and white in the future. l'arronsicut fur in nocte 11:47, 28 January 2009 (EST)
- (I like the serif font for quotes, but have to agree on the gray (green?????) colour - it's not the best.) (Toast) and marmalade 11:58, 28 January 2009 (EST)
- Instead of quoting, just link. This is t'internet, after all. --JeevesMkII 12:14, 28 January 2009 (EST)
- And we are talking about conservapedia, which proved to be one of the bigger memory holes... l'arronsicut fur in nocte 12:26, 28 January 2009 (EST)

Looks like Andy had better get this bit of liberal content edited out of his Bible. --Edgerunner76Your views are intriguing to me and I wish to subscribe to your newsletter 12:51, 28 January 2009 (EST)
- Evidently, the translation project is more urgent than previously expected! 14:20, 28 January 2009 (EST)
- I like the idea, but we should use a lighter color.... A dark red or blue would be nice, and much easier to read. SirChuckBA product of Affirmative Action 15:32, 28 January 2009 (EST)

Bernard Goldberg

Is right wing darling Bernard Goldberg taking lessons from Conservapedia? His quotes of a Charlie Rose interview with Tom Brokaw could teach the Assfly lessons on how to quote mine. MDB 13:33, 28 January 2009 (EST)
- He's a real scumbag.
I noticed that he got the somber rather than comical Worst Person in the World send-off last night from Keith Olbermann. --Edgerunner76Your views are intriguing to me and I wish to subscribe to your newsletter 13:39, 28 January 2009 (EST) - That was such a massive distortion I was wondering if Goldberg could be used for libel. MDB 14:26, 28 January 2009 (EST) - Yeah... I can't stand the bastard... I'll try and find the video when Al Franken pwned him... It reminds me a lot of this POS. SirChuckBA product of Affirmative Action 15:38, 28 January 2009 (EST) More broken things[edit] Anyone have any idea what this means? alt 13:51, 28 January 2009 (EST) - Well, the error means a SQL join fucked up because the things being compared aren't directly comparable because they're in different character sets. How in the name of goat they managed to get their database in to this state, well, your guess is as good as mine. --JeevesMkII 13:55, 28 January 2009 (EST) - Hmm, SQL is something I'd always meant to learn a bit of but have never gotten around to bothering. They've certainly screwed up something; this is what led me to the first link I posted, and then there's the WIGO on JM's name. I think the real question is how they manage to run a wiki/webserver at all! (for suitable value of "run") alt 13:59, 28 January 2009 (EST) - None of the math tags are working, either. The Sierpinski thing is due to symbols in wiki links, which aren't working. They've got problems. 207.67.17.45 14:06, 28 January 2009 (EST) Ed Poor[edit] Discussion moved to Conservapedia Talk:Sysops/Ed Poor#Citizendium Iduan gets mean[edit] [3] I especially like the last bit where he all but admits he can break the rules, as long as it's against liberals. --Arcan ¡ollǝɥ 15:58, 28 January 2009 (EST) - It's the way into Andy's heart - in my opinion Andy thinks of most of the editors as parasites, he fails to see that the relationship is far more symbiotic. 
And a thought for Iduan: it's the hallmark of a gentleman to be courteous especially if the situation doesn't require it! --l'arronsicut fur in nocte 16:08, 28 January 2009 (EST) - I wouldn't really mind yet another jerk on CP, but it pains me to see Iduan suddenly slapping other editors around with that "Andy owns the site, and that makes everything Andy does 100% awesome! And if you don't agree with Andy, then you don't belong here!" attitude. Seriously, dude, would it have killed you to make a sock instead of ruining your old reputation? --Sid 17:23, 28 January 2009 (EST) - Isn't Iduan a parodist? Seems odd, after Bugler- wait, he never learns, does he? --"ConservapediaUndergroundInductorfeline fanatic 20:03, 28 January 2009 (EST) RobertA[edit] Are we organized enough to sanction anything? And if so how do we decide? --BoredCPer 19:13, 28 January 2009 (EST) - Well, the Newcomers' Guide states that "we do not condone vandalism of other wikis", although users here may be tempted decide to act on their own initiative. Just TK bumping the paranoia up a notch. KlapauciusEsteemed Constructor 19:23, 28 January 2009 (EST) - Well, looking at the logs, TK and Jallen deleted an article called Conservapedia:Barack Obama comparison around the same time RobertA was banned, so I guess he created it. And since that article comes from RW, that automatically means that we sanctioned that vandalism. I mean, like duh. Just like it's Wikipedia-sanctioned vandalism when someone randomly copypastes WP articles to-... no, wait... --Sid 19:56, 28 January 2009 (EST) Anonymous?[edit] They are legion--the question is, is this legit? And does anyone at CP even know what Anonymous is? TheoryOfPractice 21:19, 28 January 2009 (EST) - Eh-what? Please explain- still blocked from CP. --"ConservapediaUndergroundInductorfeline fanatic 21:20, 28 January 2009 (EST) - The big question is not if it's legit. The big question is will CP contact the FBI? 
Hactar 21:22, 28 January 2009 (EST) - Don't think Anon would bother with them - it's a hoax. (Toast) and marmalade 21:25, 28 January 2009 (EST) - On the plus side (for them at least), now they have someone to blame for their recent downtime, and the myriad of issues still bugging the site. They can claim that instead of the problems being a result of their incompetence, it must have been Anonymous testing to see how vulnerable CP is. -RedbackG'day 21:30, 28 January 2009 (EST) But Anonymous has threatened conservapedia, or at least Frullic claims. Is this WIGO worthy? If so, someone please give me a hand. And I think an FBI joke is due right about here. Hactar 21:20, 28 January 2009 (EST) - Who in the name of the great and almighty rusty-spotted cat- errr. . . GOAT! is Anonymous? --"ConservapediaUndergroundInductorfeline fanatic 21:23, 28 January 2009 (EST) - @ CUR--look up "Anonymous" on WP. (Anonymous (group))They are legion--and the kind of thing you might actually find interesting. TheoryOfPractice 21:24, 28 January 2009 (EST) - Or even here. (Toast) and marmalade 21:28, 28 January 2009 (EST) - I tried to delete this section because of the one above, as I think having two sections on the same topic is confusing at best- can someone with better wiki skills than I merge them or something? Hactar 21:30, 28 January 2009 (EST) thus? (Toast) and marmalade - (CUR, if you're blocked from reading CP: it's not them; it's something else that's doing it. (Toast) and marmalade 21:32, 28 January 2009 (EST) - And CUR if your parents monitor anything on your internet usage: STAY AWAY FROM 4Chan! (Toast) and marmalade 21:34, 28 January 2009 (EST) - I've done a few Anonymous things before, and generally we leave wikis alone because they are so easy to revert the vandalism of. Unless somebody bruteforced Andy's password or something epic like that, it'd be fruitless. 
Not only that, but most Anon raids are either against groups that have directly insulted them (Subeta "stealing the longcat and selling it as an item on their premium shop), or if a group really, really pisses them off (Scientology) ENorman 21:41, 28 January 2009 (EST) - With Anon, it's impossible to say that they aren't behind something like this, because it could easily be a few of them doing this. Also: "and make Oprah Winfrey say idiotic things on her show so do not underestimate them." THE FIENDS!! Barikada 22:31, 28 January 2009 (EST) - Just because the warning was reverted, I hope that helps instigate an attack. Though instead of blaming ebaums world as per regulation, they'll probably latch it to us instead. But yeah, anybody can go under the Anonymous banner, so you never know. ENorman 23:41, 28 January 2009 (EST) - While I know Anonymous and the chans loathe Conservapedia (I still see threads now and then talking about how messed up CP is and some small-scale wandalism from them), the vibe I also get from them is that it's so lightweight and inherently lulzy on its own is that it's not worth attacking en masse. Beyond that, I'm not sure what they can do that the usual gang of idiots do to themselves over there or what good parodists have already achieved. If they actually are planning something, it should be interesting to see what they cook up, though. A chanology-like attack against Conservapedia could likely tip Andy beyond the breaking point. Photovoltaic Array 06:00, 29 January 2009 (EST) Did Andy wipe the history again? I'm getting the error-this-revision-does-not-exist message, so I have no idea what you all are talking about. I would appreciate it if someone pointed me to a screenshot of whatever it was. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:38, 29 January 2009 (EST) - Presume it's been oversighted or rev hidden away. Twas just a post saying that they were gonna be attacked by anon like $cientology & Oprah. 
(Toast) and marmalade 00:58, 29 January 2009 (EST) - Thank you for explaining, Toast. Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:02, 29 January 2009 (EST) - Extracts from History: - (cur) (last) 23:13, 28 January 2009 Aschlafly (Talk | contribs) (49,579 bytes) (→Request: removed garbage) - (cur) (last) 21:18, 28 January 2009 Jpatt (Talk | contribs) (50,351 bytes) (Undo revision 615791 by Frullic (Talk)) (Toast) and marmalade 01:36, 29 January 2009 (EST) - I've added an image backup from my cache. --Sid 06:56, 29 January 2009 (EST) - What form would an Anonymous DoS attack actually take? Wēāŝēīōīď Methinks it is a Weasel 08:06, 29 January 2009 (EST) - I assume (from a basis of ignorance) that they'd just overwhelm the server with requests, both individually & using bots. Anyone tell more, please? (Toast) and marmalade 09:00, 29 January 2009 (EST) - Pretty much. Simple brute force attack, unless someone with a clue takes interest. And sending exploding vans ofc. Chris Fanshaw 09:06, 29 January 2009 (EST) - Considering that Anonymous is just a catch-all phrase for any group of pissed off people on the internet, it's hardly even meaningful to speculate if "anonymous" is doing it! But surely even they (and by they I mean, whoever) have better things to do with their time than this. ArmondikoVpostate 09:26, 29 January 2009 (EST) - Not really. Chris Fanshaw 09:34, 29 January 2009 (EST) Andy really doesn't like Comedy Central, does he?[edit] though in his defense, aside from Stewart and Colbert Comedy Central rarely makes me laugh— Unsigned, by: ENorman / talk / contribs - What an ass. But we knew that... ħuman 23:56, 28 January 2009 (EST) - He's been even more of a vindictive little (rhymes with "runt") lately. I think the election broke whatever was left of his brain.-Diadochus 00:20, 29 January 2009 (EST) - Ready for the homework marking? And the boy/girl exams?
(Toast) and marmalade 00:24, 29 January 2009 (EST) - Let us never forget Lewis Black roasting the site back in 2007. Photovoltaic Array 05:47, 29 January 2009 (EST) - "For those who want to forever remain ignorant, return to watching Comedy Central." (emphasis mine) <-- Is it just me, or does it sound as if he's saying that his World History course is my last chance of enlightenment? --Sid 06:49, 29 January 2009 (EST) - I like how he implies that getting a 100% on his course is something special, while all it takes for the students is to just make liberal-bashing comments or flatter Andy in every question. Honestly, how does Andy rationalize to himself that giving a 10/10 to an obviously incorrect answer is a good thing which doesn't tarnish his "educational integrity"? --GTac 10:35, 29 January 2009 (EST) - Clearly comedians are part of the liberal cult to destroy conservatism. How many conservative comedians are even out there (who are actually trying to be funny instead of just doing it by talking)? Dennis Miller...Toby Keith (not exactly a comedian but he was great on the Colbert Christmas episode)? Trey Parker is a Libertarian interestingly enough... ~Ttony21(talk, contribs) 13:57, 29 January 2009 (EST) - He also doesn't realize that at least one CC show (Colbert) seems to get some of their material from CP. At least twice, I've seen Colbert use a line that matched almost verbatim a post on CP (usually from the news section, and not parodists) from the current or previous day. In the Burris thing, CP had two separate news items, one saying he wouldn't be appointed because Democrats are racist, and another saying that if he was, it would only be because the Democrats are racist. Colbert used that for his opening material that night. Kalliumtalk 14:18, 29 January 2009 (EST) - It may seem that way, but I doubt it, really. The CP borken news is mostly a ticker taken from the same handful of usual suspect sources, and Colbert's writers probably mine them daily themselves.
Now, even if I'm right, it is funny that items that CP sees as important news are seen by the Colbert team as comedy gold... ħuman 20:30, 30 January 2009 (EST) - Andy's edit to the article itself is pure gold. I mean, I know we see past the whole "CP is an encyclopedia" thing, but Andy obviously genuinely believes it. Which makes one wonder what type of encyclopedia he thinks would contain such blatantly slanted wording? (Apart from Wikipedia, obv.) - Also, what are the bets that if that edit was made by a new user, he would be reverted and blocked as a parodist? It just looks too parody-like to be legit Andy now. Dreaded Walrus 15:21, 29 January 2009 (EST) Quick! Someone tell Andy! Another study saying that sex is bad for you![edit] See?!?! Sexual activity in young men may lead to prostate cancer!!! Setting aside the fact that a few other studies roundly refute the conclusion, this could be Andy's next best hope to stifle sexuality!-Diadochus 00:15, 29 January 2009 (EST) - But according to this study, you can also get prostate cancer from marital sex, and that sort of puts a damper on the whole "be fruitful and multiply" thing... ListenerXTalkerX 00:48, 29 January 2009 (EST) - Yeah, but Paul said all sex was bad. Marital sex is just the only kind that doesn't lead to damnation. But that doesn't make it good. -Hactar, signed out for zoning out for too long. - I would like the idea that all sex is bad to be practiced as well as preached by these people. It would solve all the world's problems in about 30-40 years time. ArmondikoVpostate 09:19, 29 January 2009 (EST) - Why are you assuming that Andy would consider that it also applies to marital sex? As long as it can be used to attack the things he dislikes, he can ignore how it applies to his own principles, it's the way he has always done things.
Like the ABORTION CAUSES BREAST CANCER BECAUSE BREASTFEEDING REDUCES THE CHANCE OF CANCER thing, it also means that ABSTINENCE CAUSES BREASTCANCER, but Andy will never be fooled by these liberal tactics of logic and reasoning. --GTac 10:18, 29 January 2009 (EST) Soon to be on ♥ K e n D o l l ♥'s bookshelf?[edit] New study argues that Darwin was attracted to the idea of common descent out of a sense of rage at the enslavement of black people. TheoryOfPractice 08:09, 29 January 2009 (EST) - Are you calling parrot Darwin a liar when he says he ♥s evolutionary racism? Say it ain't so. --JeevesMkII 08:11, 29 January 2009 (EST) - But the BBC is liberal in CP's eyes, and therefore anything they say is invalid, as well as deceitful propaganda unless it happens to coincide with what they think. ENorman 09:03, 29 January 2009 (EST) - Correction: - But xxx is liberal in CP's eyes, and therefore anything they say is invalid, as well as deceitful propaganda unless it happens to coincide with what they think. (replace "xxx" with name of your choice) (Toast) and marmalade 09:20, 29 January 2009 (EST) Someone's got out of the wrong side of bed[edit] This is a remarkably rude way to address a loyal acolyte.--Kriss AkabusiAAAAWOOOOGAAAR!!1 09:08, 29 January 2009 (EST) - WTF! what's he done to deserve that? Doesn't seem to have done a whole lot, good or bad. It looks as if andy's been disappointed in love or summat & is lashing out wildly. (Toast) and marmalade 09:16, 29 January 2009 (EST) - Did Andy take into account all the substantial edits of the week Jan 19 - Jan 26? For someone who expects courtesy, no reverence, of all others, he's remarkably rude. l'arronsicut fur in nocte 09:19, 29 January 2009 (EST) - Maybe Andy spotted the "PJR should run conservapedia" userbox on PhilipV's user page. JoeDuffy 09:31, 29 January 2009 (EST) - Also I note he has visited several European countries and may have been infected with Liberalitus. 
Seriously, I think Andy is getting more than a little paranoid these days. StarFish 09:34, 29 January 2009 (EST) - His courtesy is as great as his management skills. Heck, he lost the data of a week, and still his only explanation for this is hidden somewhere in talk, talk, talk? Andy has shown that he only cares for the well-being of his few home-schoolers. I presume that he had informed these personally about the status of the website. What would be the right thing to do? Act offensively! Explain the error, apologize for the loss, call for a boost of edits to make up for it, and promise to back up more often. And do so on the main page! At the moment, the common editors lose out: those who registered in the lost week have lost their accounts. Should they re-register? The log-in screen advises against such an action. - As Andy is unable to act humble, even if it is called for, he can't do the right thing: Most probably, PhilipV has lost his last edits, but does he get an apology? Nope. - l'arronsicut fur in nocte 09:35, 29 January 2009 (EST) - Considering how obnoxious Andy can be it amazes me that people spend time making serious contributions to the site. His attitude tends to be "You owe me for giving you this opportunity to contribute". JoeDuffy 09:48, 29 January 2009 (EST) - The ironic part is, that's exactly why a lot of the folks here went to CP in the first place, even though they weren't conservatives. It's fun to contribute to an encyclopedia that's in such early stages. There's all sorts of info you can add, and ways to contribute meaningfully. There were several who said all they wanted to do was contribute, and were happy to find a place in that stage. --Kels 10:20, 29 January 2009 (EST) - But Andy really doesn't appreciate that, even less than other sysops. Unless you're making anti-liberal ideological edits & comments, he barely cares.
Wēāŝēīōīď Methinks it is a Weasel 11:25, 29 January 2009 (EST) At least he's got something to cheer himself up with, if that is indeed the case. Perhaps the reality of losing a week's page views has also contributed to his bad mood, however? KlapauciusEsteemed Constructor 09:50, 29 January 2009 (EST) - PhilipV did make a comment on the MediaWiki_talk:Username blacklist about 25 minutes prior to Andy's callout. Also his talk page has a couple of other users complaining about the lack of substance in his edits. Генгисevolving 10:00, 29 January 2009 (EST) Everybody probably blames him for the loss of a week's work. People think he should be a bit apologetic. Most reasonable people would be. Andy just tells everyone off and resorts to argumentum ad banhammer. Proxima Centauri 10:50, 29 January 2009 (EST) - Since anyone Andy gives a shit about was with him whining in DC during the period of lost edits (even though it's not true, they weren't away a week) it's no big deal to him. Now, I gotta say, I think the blacklisted name page is my favorite part of CP. Just imagining Andy being forced to type in all those words he finds so foul. I notice "gerbil" and any variations of the word have been added. Hmmmmm. Something hitting a bit close to home, Andy? DickTurpis 11:51, 29 January 2009 (EST) - I keep reading that rude comment as "I've reviewed your edits and they lack sufficient goat"... Which at least would be funny. What an ass. ħuman 13:57, 29 January 2009 (EST) - @ DT, yeah... Andy is the genius who coined our "Hot. Science. RationalWiki!" slogan. I also found out (ugh) what "lemon party" and "tub girl" are from his typing them on his blog... ħuman 14:03, 29 January 2009 (EST) Joaquín gets locked out[edit] I assume that this is Joaquín Martinez knocking on Andy's door? Gotta love HWessel's follow-up comment. JoeDuffy 09:58, 29 January 2009 (EST) - That would presumably be because the special character í no longer works? He would need to log in as JoaquÃn MartÃnez.
KlapauciusEsteemed Constructor 10:02, 29 January 2009 (EST) - And we all know who HWessel is. Генгисevolving 10:03, 29 January 2009 (EST) - Arggh, I hadn't realized that JMR10 is Joaquin Martinez... - HWessel? As in Horst Wessel? A little too obvious... l'arronsicut fur in nocte 10:06, 29 January 2009 (EST) - Interesting. I was only thinking of Chekov from Star Trek. Wēāŝēīōīď Methinks it is a Weasel 10:10, 29 January 2009 (EST) - Hope Joaquin reads this page, because he's unlikely to get help over there. Hello Joaquín, if you're reading this. I like your very creative use of photos and drawings. Keep it up, man.--Kriss AkabusiAAAAWOOOOGAAAR!!1 10:08, 29 January 2009 (EST) - Judging from this, TK doesn't know what happened - and can't or won't help Joaquín.... l'arronsicut fur in nocte 10:32, 29 January 2009 (EST) - Didn't we have something similar happen way back (2007?) (Toast) and marmalade 10:42, 29 January 2009 (EST) - Wow. He should be a system administrator, what with that ability to diagnose a problem. Commodore Guff (blocked for five years) 11:31, 29 January 2009 (EST) You bastards! [4]. You could have given me a chance... I doubt someone blind to the bloody obvious would have spotted it. HWessel 11:38, 29 January 2009 (EST) - Dude, you think a name like "HWessel" didn't set off alarm bells? Guys like TK and Dean and Karajou may be idiots, but they know their Nazis. TheoryOfPractice 11:42, 29 January 2009 (EST) - Try again as HorstW. We'll keep quiet this time.--Kriss AkabusiAAAAWOOOOGAAAR!!1 11:52, 29 January 2009 (EST) - If anyone has block rights over there you really have to give JMR10 an infinite block for not using first name and last initial. DickTurpis 11:56, 29 January 2009 (EST) - Yeah, I should have guessed he'd be listening to it on winamp. Looking at my contribs, some have been reverted as vandalism, whilst others have been kept (and improved). I can't for the life of me figure out how they chose...
HWessel 12:01, 29 January 2009 (EST) - (EC) HW, even without the name, you waaay overcooked the parody. Every edit was to put in an atheist or liberal wiki-link (including in your old Europe holiday statement) everywhere, putting in B. Hussein Obama as his "full" name on another page and then brown-nosing on Aschlafly's talk page within minutes of creating your account. Since Bugler, any aspiring parodist needs to slow cook their OTT edits. Bondurant 12:02, 29 January 2009 (EST) - "any aspiring parodist needs to slow cook their OTT edits" -- That's what I'm doing... MDB 12:06, 29 January 2009 (EST) - Yet compared to the edits of, say, Ed, TK or the Great Fly himself, they were hardly exaggerated, were they? HWessel 12:33, 29 January 2009 (EST) - Enter Poe's Law. I can't tell you how many things I've seen that I thought were blatant parody and turned out to be an Aschlafly "insight." For example, when I saw Henry Kissinger listed as a liberal intellectual I assumed it must be parody, but it was put in by Andy himself. CorryI've made a huge mistake. 12:52, 29 January 2009 (EST) - Hadn't looked at that (lib intellectuals) before; start to finish a humungous laugh. I particularly like the "privileged youth" comments - anything strike you? Andy? (Toast) and marmalade 13:00, 29 January 2009 (EST) - The liberal intellectual page is mostly parody. I added almost half the names myself. I was very annoyed when Hawking and MLK were removed. DickTurpis 13:06, 29 January 2009 (EST) - Did anyone ever attempt to add Mr. Schlafly to this list? ListenerXTalkerX 13:10, 29 January 2009 (EST) - Andy did make 10 or so edits though. (Toast) and marmalade 13:12, 29 January 2009 (EST) - (unindent) Oppenheimer was a commie. Or at least he had friends who might have attended a party where a communist was in the next room. And he opposed Teller's building of the Hydrogen Bomb. England had Turing, we had Oppenheimer. At least Oppenheimer didn't kill himself.
Hactar 15:30, 29 January 2009 (EST) Revert in 5 ... 4 ... 3 ...[edit] Nothing more to say (Toast) and marmalade 11:55, 29 January 2009 (EST) - What did I say? (Toast) and marmalade 12:04, 29 January 2009 (EST) - I love it. I don't know why they don't just shorten the article to Human babies are delivered to their parents by storks. --Edgerunner76Your views are intriguing to me and I wish to subscribe to your newsletter 12:08, 29 January 2009 (EST) I see someone (it wasn't me) ran with my idea about the storks. Human, you're right. That whole ramble about souls and abortion is just a gem. --Edgerunner76Your views are intriguing to me and I wish to subscribe to your newsletter 09:47, 30 January 2009 (EST) - Heh, yeah, that is funny too... (comment truncated when I realized that saying what I was going to say would ruin the fun). ħuman 20:55, 30 January 2009 (EST) fuzzy math[edit] Any idea where Andy got the figures for this edit? Isn't he off by a factor of 100 or so? DickTurpis 12:01, 29 January 2009 (EST) - I also heard that the Democrats voted 30 to 1 to rape your babies--GTac 12:13, 29 January 2009 (EST) - 30 to 1? Who was the lone sellout? DickTurpis 12:16, 29 January 2009 (EST) - Lieberman, of course. Everybody knows he's a DINO. MDB 12:27, 29 January 2009 (EST) - Someone's been reading WIGO (Toast) and marmalade 13:27, 29 January 2009 (EST) ($200,000 each ↓ $16,000 per family - not a great difference.) (Toast) and marmalade - Or maybe he's reading his own wiki? --Kels 13:47, 29 January 2009 (EST) Classic Andy. Can't admit a mistake. He's not correcting a blatant error, he's "improving" the figures. Well, changing them from egregiously wrong to basically correct is an improvement, I'll grant him that. DickTurpis 14:59, 29 January 2009 (EST) Revert & Ban in 5 ... 4 .... 2 ....[edit] Jonsociologist (Strange name) takes offence at Kajagoogoo for reverting his edit!
(Toast) and marmalade 13:21, 29 January 2009 (EST) - Partly right, anyhow, and the comment to Jon reads: "For starters, the entry you posted in Eyespot began this way: "Richard Dawkins, an evil evolutionist and an Atheist opponent of religion". And yes, I reverted it again. I have no problem if you want to write your entries in a professional manner. If you want to be childish in your edits, you'll be removed from the site.", so can a block be far behind? (Toast) and marmalade 14:12, 29 January 2009 (EST) - Funny how eyespot neglects to mention the "feature" many fish have. ħuman 16:48, 29 January 2009 (EST) - Took some time. (Toast) and marmalade 02:44, 30 January 2009 (EST) Republican Govs/Dem Senators[edit] The guy who reinstated it had klaimed to be a member of the New York Chapter of the KKK on his user page. (he was blocked) (Toast) and marmalade 13:43, 29 January 2009 (EST) - Hm, but the description didn't really look convincing. Probably a vandal. I doubt that the KKK is now in league with Conservapedia seeing as they're not really racist (Christian conservatives who think Obama is a Muslim maybe but not racist).~Ttony21(talk, contribs) 14:33, 29 January 2009 (EST) - Oh, Andy is a bit racist, but he's not white supremacist, which is a different matter. So yeah, CP is not pro-KKK by any means. Though they still have a bit in common in terms of religious beliefs. DickTurpis 14:37, 29 January 2009 (EST) - Fair enough. I also have a feeling that page is going to be re-added a few more times before this whole thing dies down. ~Ttony21(talk, contribs) 14:43, 29 January 2009 (EST) Ok, it's time to come clean. I was QWest. I chose the name based on my ISP. I felt the list was pretty subtle parody and I'm glad it lasted as long as it did as it clearly exposed CP's willingness to accept implied threats against liberals. I readded it once after the site came back (with plenty of snark and MOAR HITLER added just to make the parody obvious).
When I get my home connectivity to RW fixed, I'll upload a screenshot of the new version I uploaded. Honestly, I had no idea it would generate any sort of attention beyond RW, and I was in fact a bit nervous when Wonkette linked it. For any FBI or Secret Service who may be reading this, it was parody for the purpose of exposing CP's own tolerance for hatred of liberals. I absolutely do not endorse violence of any sort. Stile4aly 15:02, 29 January 2009 (EST) - Good effort! An original parody. I wish I was as imaginative. Oh well. I'll just have to satisfy myself by adding MOAR HITLER. StarFish 15:34, 29 January 2009 (EST) - Stile4aly, you're a very naughty boy! Генгисevolving 15:49, 29 January 2009 (EST) - You know, this is the one piece of parody that I feel badly about. Although I think it was an accurate reflection of CP's tolerance for anti-liberal hatred, it was in poor taste and for that I do feel remorse. Stile4aly 16:19, 29 January 2009 (EST) - Don't feel too bad. Wonkette's "hit list" interpretation was only one way to read it. When I read it, it made sense in a mildly subversive political sense - a list, say, of senators to focus on over ethics, election fraud, finance fraud, etc. It's a bit like the lists we see at election time of "vulnerable seats". IOW, if some of those Dems could have been unseated by "other than electoral" means - not violence, just things like investigations into the senators' dealings - they might be replaced with GOP people. Of course, now it will forever be a "hit list" in people's minds. ħuman 16:37, 29 January 2009 (EST) - Feel bad? No need at all! Just look at how much good came of this. We're happy because it got noticed and serves to highlight the wackiness of Conservapedia to the general public. Wonkette and their readers are happy because they get something to laugh about. Andy is (presumably) happy because he gets to fume over the Eeevil Liberals at Wonkette.
And even Ken must be happy because the link from Wonkette is good for his precious search rankings. In other words, more or less everyone is happy, so at least utilitarianistically speaking, this must be a good thing. --AKjeldsenCum dissensie 17:48, 29 January 2009 (EST) - AKjeldsen arguing from a pro-utilitarian perspective of ethics? I wouldn't have ever thought...216.221.87.112 17:55, 29 January 2009 (EST) - Sometimes, you just gotta go with what works. </sophist> --AKjeldsenCum dissensie 08:34, 30 January 2009 (EST) Proverbs[edit] Doesn't this neatly sum up everything that's wrong with CP? — Unsigned, by: Ajkgordon / talk / contribs - Pretty much, yeah... ħuman 16:46, 29 January 2009 (EST) - This is a great reply by CPalmer regarding Andy's biblical "interpretation." I think that it is inevitable that people who use the bible as a tool of persecution must at some point convince themselves that at least some part of it doesn't apply to them. Here is one of Andy's rationalizations. Very nice. CorryI've made a huge mistake. 03:05, 30 January 2009 (EST) Oh, how I larfed[edit] Wish I could share this but a recent edit by a CP sysop reintroduced some lulz to an edited article. Генгисevolving 17:00, 29 January 2009 (EST) Is CP Borken Again?[edit] Sigh. --SpinyNorman 17:21, 29 January 2009 (EST) - It's still working for me, although for some reason the Alger Hiss article isn't. And as I was typing that, the site briefly went down but came up again almost immediately. KlapauciusEsteemed Constructor 17:31, 29 January 2009 (EST) - I'm getting "The server is temporarily unable to service your request due to maintenance downtime or capacity problems." alt 17:37, 29 January 2009 (EST) - I just got a "Connection to 65.60.9.250 Failed" page and then the above message when trying to access different pages. The server saga continues! Or did Anonymous set to work very quickly? 
KlapauciusEsteemed Constructor 17:46, 29 January 2009 (EST) - Are we *sure* Andy's not willing to have a "server crash" in order to erase all signs of the hit list? --Phentari 18:01, 29 January 2009 (EST) - The "hit list" page was protected, and if they wanted to get rid of something they could just oversight it away. It's entirely plausible that Klapaucius made the right call - CP was recently warned to expect a DoS attack, after all. alt 18:07, 29 January 2009 (EST) - I get a "503 Service Temporarily Unavailable" ("The server is temporarily unable to service your request due to maintenance downtime or capacity problems.") error at first, but now it loads again... though without the Conserv skin. And when I refreshed then, I got Firefox's built-in "Page Load Error: Failed to Connect" message. Trying again, I'm back at the 503 message. Yeah, I also fear that the DoS attack warning might have been real. *sigh* --Sid 18:12, 29 January 2009 (EST) - Maybe that Anonymous threat was real. 75.158.1.251 18:21, 29 January 2009 (EST) It seems to be back up (for me at least). There is a slight gap in the Recent Changes, too: - (Block log); 18:41 . . TK (Talk | contribs) blocked 24.47.176.0/21 (Talk) with an expiry time of 1 year (account creation disabled) (Vandalism: Users - LiveAndLetLive, TonyA, KRover. IP#24.47.182.184 - Reported to Optium Online, West Babylon, NY) - (diff) (hist) . . Talk:Barack Hussein Obama; 17:10 . . (+388) . . MauriceB (Talk | contribs) (→Is he or isn't he President?: ) I'll silently assume that they sorted out whatever has happened and simply go to bed now. If CP implodes again, I guess I'll read about it tomorrow :P --Sid 19:16, 29 January 2009 (EST) Barney Fife[edit] And he still hasn't caught it! -- YossieSpring in Fialta 23:17, 29 January 2009 (EST) - I'm almost positive it was an intentional jab. If he does the same thing in an article rather than talk we'll have a case. 
DickTurpis 01:34, 30 January 2009 (EST) - Confirmed DickTurpis 01:50, 30 January 2009 (EST) - Wow, what a lame insult. That doesn't even make sense. -- YossieSpring in Fialta 04:58, 30 January 2009 (EST) Aunt Bee for Secretary of Pies! Otis for ATF Director! Now we're talkin'! Jimaginator 09:04, 30 January 2009 (EST) Yes, there's no question it was intentional. I posted it only to show how childish and amateurish the self-proclaimed "senior sysop" of the self-proclaimed "trustworthy encyclopedia" was. Hey Terry! If you write stuff like that, how do you expect people to take you seriously? I never expected him to fix it, because, as we know, talk pages on a wiki are an immutable historical transcript. They are never changed, other than for removal of libel. But, at 0145, 30 January, he edited it. Nice to know you still read us, Terry! Gauss 15:04, 30 January 2009 (EST) This is, at least, the second time there has been a WIGO about TK using Barney Fife as an insult for Barney Frank. He did it in a talk page in the midst of a conversation. It's a throwaway joke that, while not very funny, isn't especially noteworthy either. I think there are dozens of better examples of his immaturity. Patrickr 15:29, 30 January 2009 (EST) Speaking from the dead: CP to implement flaggedrevs[edit] I'm technically R E T I R E D, but... So I occasionally talk with Geo.Plrd on gchat. He's a nice dude, truth be told. Anyways, he asked me the other day if flaggedrevs would "basically kill vandalism." I told him yes, maybe: but it would, ultimately, prove that Conservapedia can't handle the outside world. Nevertheless, I personally think that means they're going full-throttle with it. Just sayin'-caius (tailor) 01:13, 30 January 2009 (EST) Points to consider with flagged revisions: In order for it to work they will have to hide the revisions from everyone but sysops. This will create some interesting problems.
First, their "guard dog" will be worthless since it relies on recent changes to detect editing rate. The flagged revisions will not appear in recent changes so guard dog will not activate. Next, no other users but sysops will see the edits; that means that it will be completely up to the sysops to deal with it, and they will have to be actively looking in the namespace that deals with flagged revisions. Here comes the interesting point: a hyperactive vandal or a bot could fill up that flagged-revisions queue with thousands of edits before anyone is aware of it. Extensions like "nuke" that remove all edits from a user will not function with flagged revisions. That means sysops will have to manually remove thousands of "bot" revisions while sorting through any "real" revisions. I imagine that after 2-3 events such as this occurring we might see them rethink the approach. Essentially what it would do is shut down half the tools they currently use for monitoring, blocking and cleaning up and give vandals the opportunity to waste substantial sysop time. These are just a few ideas off the top of my head that would cause problems for them. There are several others that I won't bring up to avoid giving people "ideas." tmtoulouse 01:49, 30 January 2009 (EST) - With flagged revisions, do regular edits not appear in recent changes, or does the article just not immediately update itself when viewed through normal means? I thought it was the latter, but I don't know a whole lot about it. DickTurpis 01:52, 30 January 2009 (EST) - The extension is configurable to work in multiple ways, I would imagine CP would make it so flagged revisions could only be seen by sysops. If they did not then it would defeat the whole purpose since seeing them in recent changes won't alter the perception of vandalism at all. tmtoulouse 01:57, 30 January 2009 (EST) - Hell, they troll every edit by "non trusted" users there (as much, or even more, than I do here).
So it won't be more work for them, really. It will make capturebot's job harder - potentially even impossible. Of course, they will bork it on the first iteration, and second, and third. But if it makes Andrew happy (not having to argue with "liberals"), it will probably fly. Eh, we could move faster and more fluently than them, but really: who cares about wandalism on CP? The big funny is the asininity of what the sysops and trusted editors post. If they actually succeed in eliminating "wandals", Poe can take a holiday, and we can relax and just track the genuine idiocy there. ħuman 02:20, 30 January 2009 (EST) - We rarely really care about vandalism, and capturebot is used mostly to capture insanity from the sysops anyway. As I detailed above, though, flagged revisions will make their job harder by hiding a vandal behind the scenes. If a sysop is not actively watching the review queue, a bot could fill it up very quickly. Then what? Several thousand revisions in waiting is a lot for half a dozen people to sort through. tmtoulouse 02:22, 30 January 2009 (EST) - I can't help but think that this would be the end of discussion on non-user talk pages. CorryI've made a huge mistake. 03:12, 30 January 2009 (EST) It also means that anything that is visible on CP would have the stamp of approval of Andy. They couldn't claim the contents of an article were vandalism if everything has to be approved by one of his sysops. Every article becomes "this is what conservapedia believes and has approved of" --Shagie 03:13, 30 January 2009 (EST) - I just say: "Cupertino Effect". No tool can ever "kill" vandalism, just like no tool can ever "kill" typos. No tool will ever be a brain replacement. - Sure, it will help filter out those "GET STUFFED! GET STUFFED! GET STUFFED! ANDY AND GERBILS!" wandals, but beyond that, it simply means that sysops will have to do a lot of work, even without anybody DoS-ing the system with a bot.
If they feel like running a simulation, ask the sysops to go through every single edit of a normal day and to either mark it as patrolled or to revert it. Good luck. The core sysops can't even be arsed to take care of move/delete requests, so what makes anybody think that they will sacrifice a few hours extra every day to double-check every edit? Sure, it will work for a day or so as everybody tries out the new toy, but in a week? A month? No. - All this will do (like Shagie said) is to give parodists and subtle vandals the Sysop Seal Of Approval (while potentially valid edits will get backlogged like whoa). There will be no more "Oh, this list was created by Internet Terrorists, and we never knew about it!" excuses because some sysop will have to explicitly say "I approve of this article." - So... I totally approve of this. Really, nothing will prove CP's helplessness more than installing this extension. --Sid 03:54, 30 January 2009 (EST) - I agree, it'll stop obvious vandals but it won't stop the real problems such as the senior editors' paranoia or the fact that Andy will accept anything so long as there's enough brown-nosing involved. Parodists wanting to make CP even more insane and showing up Schlafly and the other sysops as the bigoted twats that they are will still get their way. You can't write an extension or a bot that will solve that issue, only a brain transplant on behalf of Schlafly will help there. ArmondikoVpostate 10:01, 30 January 2009 (EST) Blago[edit] Watching CP try to react to Blago is like watching a two-headed dog chasing his tail. On one hand, he's a Democrat, so he's obviously evil personified, but on the other hand it was Democrats who removed him from office, so obviously it was a fascist power-grab on their part. Everybody's evil, except of course the Republicans who voted to impeach and convict him, they're okay. I also like the way in TK's world you cannot impeach someone until they have been convicted of a crime.
Somehow it didn't work that way for Clinton. Perhaps we should wait a year for a criminal conviction before removing someone from office. That'll work well. DickTurpis 01:46, 30 January 2009 (EST) - Or you could do what the Republicans did, wait for a governor to be convicted of a crime and then not impeach him. Bob Taft is part of the reason Ohio is controlled by Democrats today. Hactar 13:42, 30 January 2009 (EST) - In the US, we have a similar system for handling both those things that can send you to prison and those things that can remove you from office—an "indictment"/"impeachment", and a "trial". (Many Americans are confused about the fact that impeachment is just the equivalent of indictment, that is, it is a formal accusation; that's all. Clinton was impeached and then acquitted at trial.) These two-step processes are essentially independent for criminal cases and removal-from-office cases. The impeachment doesn't bring criminal charges; it brings removal-from-office charges. The notion that you can't be impeached/removed unless a criminal indictment or conviction has occurred is a figment of Blago's and TK's imaginations. (And some of Nixon's supporters before that.) The question of whether conviction from impeachment (leading to removal from office) and conviction at criminal trial (leading to prison) constitutes double jeopardy in our criminal system is less than clear. Blago may well claim at his criminal trial that he can't be criminally punished for the crimes for which he was removed from office. He is not likely to prevail. Gauss 15:16, 30 January 2009 (EST) Schlafly math redux[edit] Regarding the main page right edits and crazy numbers... didn't we just cover this, like earlier today? Please try to avoid repetitive WIGOs, people. Or did Andy repeat his stupidity? ħuman 02:34, 30 January 2009 (EST) Andy and British spelling[edit] I wonder if Andy has ever exchanged correspondence with conservative icon Margaret "Iron Balls" Thatcher? 
She is titled The Right Honourable The Baroness Margaret Thatcher, so if she uses her title on her stationery, would he assume she's a dirty stinkin' libb-burr-ull? (Actually when she was an MP, she voted to decriminalize male homosexuality and legalize abortion, so he probably loathes her on those alone.) MDB 08:19, 30 January 2009 (EST) - Even good ol' Maggie isn't Conservative enough for The Schlaf. ArmondikoVpostate 08:24, 30 January 2009 (EST) - (EC) Thatcher was discussed sometime recently (I don't remember in what context) & Schlafly said something along the lines of her being relatively conservative (for a dang limey!), but still less than completely conservative for supporting gun control, abortion, homosexual agenda, etc. Wēāŝēīōīď Methinks it is a Weasel 08:26, 30 January 2009 (EST) - Yeh I remember that too. I think it was in the gun control discussion. There we go. --GTac 09:10, 30 January 2009 (EST) - Also she was a scientist before going into politics, and was one of the first world leaders to speak out about global warming (because she understood the research). Dang, them libruls are evrywhere! Yours trulyDear Sir 09:22, 30 January 2009 (EST) - I see the confusion. Andy doesn't acknowledge the existence of anything outside of America. As a result, Conservatives are all about the gun control and homosexuality, issues that just aren't real issues in the UK. ArmondikoVpostate 09:54, 30 January 2009 (EST) - It also may have to do with the British Conservative Party's faction of "liberal conservatism" which involves a conservative free market, as well as a socially liberal view on areas like gay marriage and abortion. So it seems to be more of a confusion over the archaic left-right, liberal vs conservative axis that Americans are so fixated on. ~Ttony21(talk, contribs) 11:29, 30 January 2009 (EST) The Trustworthy Encyclopedia...[edit] ... isn't very trustworthy, at least in terms of server up-time at the moment. It's down yet again.
How have they managed to bork it so thoroughly? It must take someone really "special" to run one of the least reliable websites ont' interweb. Bondurant 11:17, 30 January 2009 (EST) - I'd still love to know (IT ignoramus that I am) just how they managed to convert every URL that has an é,ô,í character in it into complete gibberish. --PsyGremlinWhut? 11:26, 30 January 2009 (EST) - hhhhmmmm.... it would depend on which database they use, but I know some databases I've worked with require the administrator to say "use character set X". If they futzed up during the upgrade, they could have switched from a character set that allows things like accented characters to one that does not, and the new database is displaying them as best it can. Note that doesn't necessarily indicate complete stupidity on the part of whoever did the upgrade (though they probably should have been paying better attention) -- it could also be bad programming on the part of the database designer, things like it defaults to a character set rather than using whatever the previous one was. MDB 11:34, 30 January 2009 (EST) - It's not really an encyclopedia, according to TK it's: "A resource that can be used by people all over the world to better understand American, conservative, Christian and family-friendly values". Well, I guess that's kind of true.. --GTac 14:07, 30 January 2009 (EST) - Except that he says "Conservapedia is an American, conservative and Christian-friendly encyclopedia" in the previous sentence... Sorry, GTac, even though we all agree that CP isn't really an encyclopedia. Bondurant 14:55, 30 January 2009 (EST) Better than football[edit] Isn't it fun watching parodists wreak havoc, only for them to be welcomed by TK? I've got my eye on one right now, I don't know who it is but I wish him Godspeed... HWessel 11:50, 30 January 2009 (EST) - By the way, I love the way the fly rapes references to make it look like there's any basis to what he's saying.
His favourite technique seems to be the "Schlafly Double", e.g. "Barack Obama, president of the USA, is a muslim [ref linking to a page saying that B.O. is president]". HWessel 11:55, 30 January 2009 (EST) - Almost as good is when he links to CP as a source, especially when the source is an "essay" or "Mystery". DickTurpis 12:11, 30 January 2009 (EST) - Oooooh, I'll have to look out for one of those. How ironic it is that Andy has given birth to a pastime so very similar to paleontology... HWessel 12:20, 30 January 2009 (EST) - Could you point me to one? I haven't seen one of those before. EternalCritic 14:01, 30 January 2009 (EST) - Point you to one of the cases where Andy links to his own Mystery articles as a reference? Ha! Why limit yourself to one? --Sid 15:21, 30 January 2009 (EST) - And even these aren't as good as the famous reference "This actually happened" (note 1 in this page). KlapauciusEsteemed Constructor 15:26, 30 January 2009 (EST) - *laughs* I totally forgot about that one! But that reminds me of a certain dinner conversation! --Sid 15:30, 30 January 2009 (EST) - Interestingly, I just read the talk page of the "Was St. John a Child" mystery, and in the first paragraph here Andy makes an argument that could be very effectively used against YEC (failing trying to use reason). CorryI've made a huge mistake. 16:07, 30 January 2009 (EST) Economic Stimulus Bill[edit] Part Conservapedia, part right-wing in general, but I think this article, and the associated "non-essential" list, goes a long way towards showing the right wing's problems. Look at some of the items they consider non-essential: 32 billion for improving the energy grid, 30 billion on infrastructure spending, 54 billion for job training, and so on. They don't get it; tax cuts are a short-term answer at best. In order to stimulate the economy, you must create jobs; building a road is a much better way to put a guy to work than giving a billionaire more money, then hoping he hires someone.
It's so frustrating when congressional republicans just don't get it, then hold up a bill that will help our country. If anything, the bill should have more spending, and less tax cuts. Z3rotalk 15:32, 30 January 2009 (EST)
https://rationalwiki.org/wiki/Conservapedia_talk:What_is_going_on_at_CP%3F/Archive111
Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1.

    first_unique('leetcode')     # 0
    first_unique('loveleetcode') # 2

    def first_unique(s):
        if s == '':
            return -1
        for item in s:
            if s.count(item) == 1:
                return s.index(item)
        return -1

Your version isn't bad for a few cases with "nice" strings... but using count is quite expensive for long "bad" strings, so I'd suggest you cache items, for instance:

    def f1(s):
        if s == '':
            return -1
        for item in s:
            if s.count(item) == 1:
                return s.index(item)
        return -1

    def f2(s):
        cache = set()
        if s == '':
            return -1
        for item in s:
            if item not in cache:
                if s.count(item) == 1:
                    return s.index(item)
                else:
                    cache.add(item)
        return -1

    import timeit
    import random
    import string

    random.seed(1)
    K, N = 500, 100000
    data = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(K))

    print(timeit.timeit('f1(data)', setup='from __main__ import f1, data', number=N))
    print(timeit.timeit('f2(data)', setup='from __main__ import f2, data', number=N))

The results on my laptop are:

    32.05926330029437
    4.267771588590406

The version using a cache gives you an 8x speed-up factor vs. yours, which is calling the count function all the time. So, my general advice would be... cache as much as possible, wherever possible.

EDIT: I've added Patrick Haugh's version to the benchmark and it gave 10.92784585620725
EDIT2: I've added Mehmet Furkan Demirel's version to the benchmark and it gave 10.325440507549331
EDIT3: I've added wim's version to the benchmark and it gave 12.47985351744839

CONCLUSION: I'd use the version I proposed initially, using a simple cache, without relying on Python's Counter module; it's not necessary (in terms of performance).
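For reference, the Counter-based approach dismissed in the conclusion might look like this (a sketch; not necessarily the exact code that was benchmarked above):

```python
from collections import Counter

def first_unique_counter(s):
    # One pass to count occurrences, then one pass to find the
    # first character that occurs exactly once.
    counts = Counter(s)
    for i, ch in enumerate(s):
        if counts[ch] == 1:
            return i
    return -1

print(first_unique_counter('leetcode'))      # 0
print(first_unique_counter('loveleetcode'))  # 2
print(first_unique_counter(''))              # -1
```

It avoids the quadratic worst case of repeated str.count calls, at the cost of always scanning the whole string once up front.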
https://codedump.io/share/cqGfcw4lZLPl/1/first-unique-character-in-a-string
Ionic 4/Angular WooCommerce Tutorial

Throughout this tutorial we are going to build a full Ionic 4 eCommerce app with a WooCommerce backend, designed for people who need an Android/iOS mobile app for their WooCommerce-based store. We are going to use Ionic 4 for the front end and WordPress + WooCommerce for the back end.

Tutorial requirements

This tutorial requires you to have:
- A local WordPress installation with WooCommerce installed and configured, or a hosted WooCommerce store which you can test with.
- Node.js and the Ionic CLI installed on your development machine.
- Some working experience with Ionic 4.

We are not going to cover how to install WordPress and how to add the WooCommerce plugin since you can find many tutorials on the web already showing that.

Setting up the WooCommerce API

WooCommerce is a free WordPress plugin which allows you to create an ecommerce website based on WordPress. WooCommerce provides an API which we can consume via any client-side code, in this case our Ionic 4 app, but before we can do that, let's set up WooCommerce to allow authenticated clients to consume the API. Basically what we need to do is:
- Enable WordPress permalinks
- Enable the WooCommerce API
- Generate API keys

First you need to turn on the WordPress permalinks via Admin > Settings > Permalinks. Next you enable the WooCommerce API by going to the WooCommerce > Settings > API tab, then checking the Enable REST API checkbox. And finally generate the API keys which control access to your WooCommerce website from any client: go to WooCommerce > Settings > API > Keys/Apps and click on the Add key link/button. Then fill in the details for generating keys, which are:
- A description for the API keys
- The WordPress user for which you want to generate keys
- The permissions (Read, Write or Read/Write access)

Next click on generate keys.
You should get two keys, Consumer Key and Consumer Secret, which you need to copy and save somewhere because we are going to need them in our Ionic 4 app.

Generating an Ionic 4 Project

Let's start our journey by generating a new Ionic 4 project based on the sidemenu template, so open up your terminal under a Linux/Mac system or the command prompt under Windows and type:

ionic start ionic4-woocommerce sidemenu --type=angular
cd ionic4-woocommerce
ionic serve

We are going to use woocommerce-api, which is the official Node.js module for the WooCommerce API, but since this module uses specific Node.js modules which are not available in the Cordova webview, we can't use it directly with Ionic 4.

Transforming a Node.js Module to a Browser Library

The solution is to use Browserify to bundle the module with all its dependencies inside one JavaScript file, so we will not need any external Node.js dependencies which are not available inside the Cordova webview (a headless browser) used by Ionic 4.

Browserify is a Node.js module which can be installed from npm with:

npm install -g browserify

Next create a Node.js project with:

mkdir woocommerce-api
cd woocommerce-api
npm init

Just answer all the questions and hit Enter for npm to generate a package.json. Next we need to install the woocommerce-api module from npm with:

npm install woocommerce-api --save

The next thing is to create a main.js file inside our Node.js project:

touch main.js

Then copy this code:

var WooCommerceAPI = window.WooCommerceAPI || {};

// Augment the object with modules that need to be exported.
// You only need to require the top-level modules; browserify
// will look for any dependencies and bundle them inside one file.
WooCommerceAPI.WooCommerceAPI = require('woocommerce-api');

// Add to the global namespace
window.WooCommerceAPI = WooCommerceAPI;

So what does this code do?
First we created a JavaScript object, then we required woocommerce-api and stuck it on our object, then exported the object to the global namespace (window) so it becomes available globally after we convert our main.js to a browser library with Browserify. What we need now is to invoke Browserify from the CLI to transform main.js, alongside the required module(s), into a browser library. Go ahead and open your terminal, navigate inside the Node.js project and enter:

browserify main.js -o woocommerce-api.js

When the command returns, you should find a woocommerce-api.js file inside your current folder. We can now include the woocommerce-api library inside our Ionic 4 project via a script tag, just like any browser-based library. Take the woocommerce-api.js file, copy it inside the assets folder of your Ionic 4 project, then include it in the Ionic 4 index.html with:

<script src="assets/woocommerce-api.js"></script>

If you find any errors with woocommerce-api.js, make sure to include an ES6 shim, since the module uses ES6 features which may not be available in the Cordova webview. I used this shim from here:

<script src="assets/es6-shim.min.js"></script>

Ionic 4 already includes an ES6 shim, but it uses a Webpack-based workflow with Gulp which generates a final bundle, build/main.js, that contains all project source files plus any shims for ES6 support. So you need to either add your woocommerce-api.js library to the Ionic 4 workflow so it can be bundled with main.js, or just include woocommerce-api.js and es6-shim.js before main.js. You can also try Crosswalk, which replaces the old system browser used by Cordova with a recent browser version that supports ES6 features, so you don't need to include any shims. I didn't try that, but you can test it if you want. Now we are ready to use the WooCommerce API inside our Ionic 4 project. But we still have a few problems: Ionic 4 is based on TypeScript, so how do we include the JavaScript woocommerce-api library inside a TypeScript project?
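To make the goal concrete, here is a hedged sketch of how the globally-loaded library might eventually be consumed from TypeScript. The factory name is hypothetical, the constructor options follow the woocommerce-api module's README, and the URL, keys, and API version are placeholders:

```typescript
// Assumes assets/woocommerce-api.js (and the ES6 shim) have been loaded
// via <script> tags, exposing a global WooCommerceAPI object.
declare var WooCommerceAPI: any;

// Hypothetical factory; all argument values are placeholders.
function createWooClient(url: string, consumerKey: string, consumerSecret: string) {
  return new WooCommerceAPI.WooCommerceAPI({
    url: url,                  // your WordPress site, e.g. 'http://localhost/wordpress'
    consumerKey: consumerKey,  // the key generated in the WooCommerce admin
    consumerSecret: consumerSecret,
    wpAPI: true,               // use the WordPress REST API integration
    version: 'wc/v1'           // WooCommerce API version
  });
}

// Usage sketch (not executed here; requires a reachable store):
// createWooClient('http://localhost/wordpress', 'ck_xxx', 'cs_xxx')
//   .getAsync('products')
//   .then((result: any) => console.log(JSON.parse(result.toJSON().body)));
```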
Since we don't have typings for woocommerce-api.js, we just need to add this line:

declare var WooCommerceAPI: any;

in any TypeScript source file before we can use the library.

Ionic 4, the Same Origin Policy, and CORS

If you are testing your Ionic 4 app locally using the ionic serve command (local server), or even if you are using a device with live sync (--livereload), you are going to find problems related to the same origin policy when you connect to a WordPress server. The same origin policy implemented in browsers states that only clients from the same domain can connect to a server. So how can you solve this problem when developing your Ionic 4 app that connects to a WooCommerce server? You have many options, either on the Ionic 4 side or on the WordPress server:
- Testing only on a real mobile device without live sync enabled (without --livereload).
- Using an Ionic 4 proxy.
- Changing the CORS headers on the server to allow all domains (or selected domains) to connect.

Testing only on the device without live sync can be time-consuming and not effective during the development phase, so we are not going to follow this option. So you can either use an Ionic 4 proxy, which allows you to bypass the CORS issue, or, if you have control of your WordPress server, change the CORS headers on the server. Since I'm using a local WordPress server, I'm just going to change the CORS headers to allow all domains to connect.

Setting CORS with Apache

I'm using the Apache 2 server on an Ubuntu 16 machine. If this is your case too, you can follow these steps to change CORS to allow connections from all domains. You need to do two things: first, enable the Apache mod_headers module.
Open your command line and type:

sudo a2enmod headers

Then restart the Apache service with:

sudo service apache2 restart

Open .htaccess and add:

<IfModule mod_headers.c>
Header add Access-Control-Allow-Origin: "*"
Header add Access-Control-Allow-Methods: "GET,POST,OPTIONS,DELETE,PUT"
Header add Access-Control-Allow-Headers: "Content-Type"
</IfModule>

Then save your .htaccess file.

Conclusion

That is the end of this first part of our tutorial. In the next part we are going to build the actual Ionic 4 app that connects to a WordPress + WooCommerce website, fetches things such as categories, products and orders, and displays them to your mobile app users.
https://www.techiediaries.com/woocommerce-ionic-2/
dirname()
Find the parent directory part of a file pathname

Synopsis:

#include <libgen.h>
char *dirname( char *path );

Since: BlackBerry 10.0.0

Arguments:

path - The string to parse.

Library: libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The dirname() function might modify the string pointed to by path, and can return a pointer to static storage.

Returns:

A pointer to a string that's the parent directory of path. If path is a NULL pointer or points to an empty string, a pointer to the string "." is returned.

Examples:

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/d/dirname.html
MySQL Connector/Python Release Notes

This class provides all supported MySQL field or data types. They can be useful when dealing with raw data or defining your own converters. The field type is stored with every cursor in the description for each column. The following example shows how to print the name of the data type for each column in a result set.

from __future__ import print_function

import mysql.connector
from mysql.connector import FieldType

cnx = mysql.connector.connect(user='scott', database='test')
cursor = cnx.cursor()
cursor.execute(
    "SELECT DATE(NOW()) AS `c1`, TIME(NOW()) AS `c2`, "
    "NOW() AS `c3`, 'a string' AS `c4`, 42 AS `c5`")
rows = cursor.fetchall()

for desc in cursor.description:
    colname = desc[0]
    coltype = desc[1]
    print("Column {} has type {}".format(
        colname, FieldType.get_info(coltype)))

cursor.close()
cnx.close()

The FieldType class cannot be instantiated. The example prints:

Column c1 has type DATE
Column c2 has type TIME
Column c3 has type DATETIME
Column c4 has type VAR_STRING
Column c5 has type LONGLONG

So this "type" information, while helpful, is rather coarse. It doesn't, for example, distinguish between TEXT and VARBINARY(99). (It calls them both VAR_STRING.) Here are the type codes with their get_info() names:

DECIMAL = 0x00
TINY = 0x01
SHORT = 0x02
LONG = 0x03
FLOAT = 0x04
DOUBLE = 0x05
NULL = 0x06
TIMESTAMP = 0x07
LONGLONG = 0x08
INT24 = 0x09
DATE = 0x0a
TIME = 0x0b
DATETIME = 0x0c
YEAR = 0x0d
NEWDATE = 0x0e
VARCHAR = 0x0f
BIT = 0x10
NEWDECIMAL = 0xf6
ENUM = 0xf7
SET = 0xf8
TINY_BLOB = 0xf9
MEDIUM_BLOB = 0xfa
LONG_BLOB = 0xfb
BLOB = 0xfc
VAR_STRING = 0xfd
STRING = 0xfe
GEOMETRY = 0xff
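To illustrate the mapping without a live database connection, here is a hypothetical helper built from the table above. It mirrors what FieldType.get_info returns, but the dict and function names are mine, not part of the connector:

```python
# Hypothetical lookup table copied from the FieldType codes listed above.
FIELD_TYPE_NAMES = {
    0x00: "DECIMAL",    0x01: "TINY",        0x02: "SHORT",     0x03: "LONG",
    0x04: "FLOAT",      0x05: "DOUBLE",      0x06: "NULL",      0x07: "TIMESTAMP",
    0x08: "LONGLONG",   0x09: "INT24",       0x0a: "DATE",      0x0b: "TIME",
    0x0c: "DATETIME",   0x0d: "YEAR",        0x0e: "NEWDATE",   0x0f: "VARCHAR",
    0x10: "BIT",        0xf6: "NEWDECIMAL",  0xf7: "ENUM",      0xf8: "SET",
    0xf9: "TINY_BLOB",  0xfa: "MEDIUM_BLOB", 0xfb: "LONG_BLOB", 0xfc: "BLOB",
    0xfd: "VAR_STRING", 0xfe: "STRING",      0xff: "GEOMETRY",
}

def type_name(code):
    """Return the MySQL field type name for a raw type code, or None."""
    return FIELD_TYPE_NAMES.get(code)

print(type_name(0xfd))  # VAR_STRING (what TEXT and VARBINARY both report)
print(type_name(0x08))  # LONGLONG
```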
http://dev.mysql.com/doc/connector-python/en/connector-python-api-fieldtype.html
A documentation generator for Dart. This tool creates static HTML files generated from Dart source code. Run pub global activate dartdoc to install dartdoc. Run dartdoc from the root directory of a package. By default, the documentation is generated to the doc/api/ directory. Ready for testing, but has known issues.

docgen / dartdocgen / dartdoc-viewer? This tool intends to replace our existing set of API documentation tools. We'll take the best ideas and implementations from our existing doc tools and fold them into dartdoc. You can see the latest API of dartdoc - generated by dartdoc - here. If you want to generate documentation for the SDK, run dartdoc with the following command line arguments: --sdk-docs --dart-sdk /pathTo/dart-sdk (optional). Please file reports on the GitHub Issue Tracker. You can view our license here.

You can install the package from the command line:

$ pub global activate dartdoc

The package has the following executables:

$ dartdoc

Add this to your package's pubspec.yaml file:

dependencies:
  dartdoc: ^0.5.0

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:dartdoc/dartdoc.dart';
https://pub.dartlang.org/packages/dartdoc/versions/0.5.0
Building a Mule Application with Maven outside Studio This guide walks you through creating a Mule Application project from scratch using a Maven archetype. It covers generating a project template, revising the configuration to create a simple "Hello World" application, manually maintaining its POM file, then building and deploying the project. Assumptions The document assumes you have downloaded Apache Maven and are familiar with using it to build and maintain projects. Read more about Using Maven with Mule. Configuring Maven Add the two blocks of code to the file settings.xml (usually in your Maven conf or $HOME/.m2 directory) This will have Maven allow you to execute Mule plugins: settings.xml This will tell your Maven instance where to find the Mule repositories: settings.xml Creating a Project Template Open a command shell and navigate to the directory where you want to create your project. Execute the mule-project-archetype as shown. If this is your first time running this command, Maven downloads the archetype for you. At minimum, you pass in two system parameters: artifactId: The short name for the project (such as myApp). This should be a single word in lower camel case with no spaces, periods, hyphens, etc. muleVersion: The version of the Mule project archetype you want to use. This will also be the default Mule version used for the generated artifact. The plugin asks a series of questions. The table below shows a set of possible responses. You can also choose to run this archetype in "silent mode" by including -DinteractiveMode=false, then passing in any of the following parameters. After you have answered all the questions, the archetype creates a directory using the project name you specified. 
This directory includes: a POM file for building with Maven a Mule configuration file ( src\main\app\mule-config.xml) that includes the namespaces for the transports and modules you specified and has placeholder elements for creating your first flow a package.htmlfile under src\main\javausing the package path you specified, i.e. src\main\java\com\mycompany\project\package.html some template files under src\testto help you get started creating a unit test for the project A new MULE-README.txtfile in the root of your project, explaining what files were created Creating a Mule Application from the Template Open src/main/app/mule-config.xml. Delete the flow that exists by default and add this one: Save this mule-config.xml. You now have a working project that can be packaged and deployed. Managing Dependencies If you add logic into your project that relies on a dependency that isn’t already included in the POM file generated by the archetype, you have to modify your POM file manually to add this dependency, as described here. This example walks you through revising your project test case in a way that requires an additional Maven dependency: the REST-assured DSL . Open the example test case file, src/test/java/com/mycompany/project/helloWorld/HelloWorldTestCase.java and replace the existing code with the following: Notes: * Line 4: Imports REST-assured * Lines 21-22: use REST-assured Add the following dependencies to your pom.xml file, located in the application’s root folder. While in the directory where the pom.xml file resides, package your Mule project from the command line by executing the following syntax: mvn package The maven-mule-plugin packages the project in a . zipfile. Copy the .zipfile to the appsfolder of your standalone Mule ESB instance to run the application. The console should print BUILD SUCCESS. See Also Read more about Using Maven with Mule. Learn how to import an existing Maven project into Anypoint Studio. 
Access additional Maven reference and configuration information.
https://docs.mulesoft.com/mule-user-guide/v/3.5/building-a-mule-application-with-maven-outside-studio
12 things we hate about PHP

Inconsistent naming conventions, incompatible versions, weirdness at almost every level – here's the hell we deal with between angle-bracket question marks.

Do we really hate PHP? Nah. If we did, we wouldn't be running Drupal and WordPress and other frameworks at the astronomical clip we do. Nope, true PHP hate would mean switching to Java. But familiarity breeds contempt, and let's face it, what the PHP naysayers say about one of our favorite server-side scripting tools has some merit. What follows are 12 gripes we have built up in our years of working with the language. The most important challenge in creating PHP is remembering when you're typing HTML and when you're typing PHP code. Mixing them together is a big selling point for PHP but taking advantage of it can be a pain. You look at a file, and it looks like code. But wait, where's the perfunctory tag that shifts us from writing HTML to creating server instructions? It's like you've taken two files operating on two different layers and smooshed them together. You've got to mind those tags because the whole purpose is to merge code with markup. Except when it makes everything more confused. Mixing server instructions with browser markup is a mistake. Over in Java land, the teams talk about strict adherence to the MVC paradigm. The model for the data goes in one file, the layout instructions that control the view go in a second set of files, and the logic that makes up the controller goes in a third. Keeping them separate keeps everything a bit more organized. But in PHP land, the bedrock design decision is that you're supposed to mix HTML markup with controller logic for the server. You can always separate them -- and many do -- but once you start doing that, you have to start asking yourself, "Why, again, are we using PHP?" Does anyone know when we use underscores? The method base64_encode uses an underscore, but urlencode doesn't.
The name php_uname has an underscore, but phpversion doesn't. Why? Did anyone think about this? Did anyone edit the API? Meanwhile, function strcmp is case-sensitive, but strcasecmp is not. strpos is case-sensitive, but stripos is not. Does the letter i signify case-insensitivity or does the word case? Who can remember? You better. How many sort functions does the world really need? Java has one basic sort function and a simple interface for all objects. If you want another algorithm, you can add it, but most make do with the standard function. In PHP world, there's a long list of functions for sorting things: usort, sort, uksort, array_sort. (Note that only some use an underscore.) So start studying and get that cheat sheet ready. PHP may be open source, but the good features like caching are commercial. This is just an expression of reality. Zend needs to make some money, and it supports the entire PHP world by selling the best versions to people who need it. Complaining about this is like complaining about gravity. There are just things about earth that suck. Just don't fool yourself into thinking that it's all one big open source commune. Do you want to create your own functions? First decide whether you're going to use PHP 5.3 or later because that's when namespaces arrived. If you don't, make sure they don't collide with a library because in the old days, everything was global. And if you're going to be running PHP 5.3 and you want to use the namespaces, get used to the backslashes, one of the uglier punctuation marks ever invented. Way broken. PHP programmers are fond of noting that this expression is true: (string)"false" == (int)0 Notice this isn't one of those classic examples that some PHP fanboi will argue is really a side effect of a feature. After all, JavaScript has plenty of similar examples caused by its overeager type conversion. Nope. The line says that the thing on the left is a string and the thing on the right is an integer. 
But somehow they're all equal. It's enough to make you believe that everyone in the world could just get along in harmony if only the PHP designers were put in charge. There are too ways to do too many things. End of line comments can be specified with either a number sign or a double slash. Both float and double mean the same thing. Simplicity is often tossed aside because people have added their own little features along the way. It's like design by committee, but the committee never met to iron out the differences. The dollar sign prefix is confusing. Perhaps it makes sense to force all variables to begin with a dollar sign because it makes inserting them into templates a bit simpler, but shouldn't the constants also start with a dollar sign? There are numerical issues with big integers on 32-bit machines. They wrap around, but the 64-bit machines don't, meaning that code will run differently on different machines. You can test it on your desktop, and it will run just fine. But when you move to the server, it could do something entirely different. Then when you try to re-create the error on your desktop, you're out of luck. The only good news is that 32-bit machines may finally disappear. It's not really fair to blame PHP for one of the biggest class of security holes. People slip weird SQL strings past other languages too. It's just that PHP makes it easy to suck data from a form and send it off to MySQL. Too easy. The average newbie could make the same mistake with any language, but PHP makes it far too simple. There are big differences between versions. Compatibility isn't so simple. Many languages like Java or JavaScript have sacrificed fast evolution for backward compatibility. It's not uncommon for older code to run without a problem on newer machines. But not PHP. There are big differences between the various versions, and you better hope your server has the right version installed because you'll probably only notice it when weird errors start appearing. 
The first thing you look for when your server goes south is whether someone upgraded PHP.
http://www.infoworld.com/article/2606833/php/152614-12-things-we-hate-about-PHP.html
Run Your First CrateDB Cluster on Kubernetes, Part One

This miniseries introduces you to Kubernetes and walks you through the process of setting up your first CrateDB cluster on your local machine.

An Introduction to Kubernetes Components

Internally, Kubernetes is a system of processes distributed across every node that makes up the Kubernetes cluster. This cluster must have at least one master node and may have any number of non-master (i.e., worker) nodes. Containers (that run the actual software you want to deploy) are then distributed intelligently across the Kubernetes nodes. A Kubernetes node has different Kubernetes components running on it, depending on the function of the node.

A master node runs three unique components:

- The API server provides the REST endpoint for the Kubernetes API and provides the shared state which all other Kubernetes components use. It also validates and configures the Kubernetes objects.
- The controller manager is a daemon that monitors the shared state of the Kubernetes cluster (provided by the API server) and attempts to make changes to the Kubernetes cluster necessary to move it towards the desired state.
- The scheduler, among other tasks, assigns containers to run on nodes within the Kubernetes cluster according to resource usage and prioritization.

A non-master node runs two unique components:

- The kubelet (a.k.a. the node agent) is the primary component of a non-master node. This component receives Docker image specifications, runs the specified images, and tries to keep them healthy.
- The network proxy is responsible for implementing services (that provide, e.g., load balancing) defined via the API server.

This way, Kubernetes can be thought of not as a single thing, but as multiple instances of these five components coordinating across multiple machines.

Concepts

Now that we've established what Kubernetes is, we can look at the concepts that are used to define the state of a Kubernetes cluster.
Arguably the most important of these concepts is a pod.

Pods

For Kubernetes, a pod is the foundational building block of any system. A pod represents a single unit of computing. This unit of computing may be one container or several tightly-coupled containers. If you are deploying a web application, for example, a pod would run a single instance of your application. You can scale pods horizontally. If you want to scale out, you can add replica pods. If you want to scale in, you can remove replica pods. Most simple web application instances only require one container. However, more complex applications might require more than one. For example, one container might serve client requests, and one container might perform asynchronous background jobs. Containers in a pod share a common network interface, and each container has access to all storage volumes assigned to the pod. (Containers outside the pod do not have access to assigned storage volumes.) The official CrateDB Docker image works well as a single container pod. These CrateDB pods can then be combined to produce a CrateDB cluster of any size. In summary, pods are ephemeral, disposable environments that run one or more related containers. They are the foundational Kubernetes building block, and Kubernetes primarily concerns itself with pod scheduling, node assignment, and monitoring.

Controllers

You do not usually create pods by hand. Instead, you use a controller to create them. A controller is an entity within Kubernetes that manages a set of pods for you according to your specification. Kubernetes has several controllers for different purposes. For example, a DaemonSet is a controller that ensures that all nodes in the Kubernetes cluster run a single copy of the defined pod. DaemonSets can be useful for running something like the log collection daemon fluentd. Another commonly used controller is the ReplicaSet, which ensures that a specified number of pod replicas are running at any given time.
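Before moving on, a concrete illustration may help: a minimal hand-written single-container pod manifest looks roughly like the sketch below. This is illustrative only — the names and image tag are assumptions, and the tutorial itself creates its pods through a controller (such as the ReplicaSet just mentioned) rather than by hand.

```yaml
# Illustrative single-container pod (not part of the tutorial's files).
apiVersion: v1
kind: Pod
metadata:
  name: crate-example
  labels:
    app: crate
spec:
  containers:
  - name: crate
    image: crate:3.0.5      # official CrateDB Docker image
    ports:
    - containerPort: 4200   # HTTP
    - containerPort: 5432   # PostgreSQL wire protocol
```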
You could run a CrateDB cluster with a ReplicaSet, but you would quickly run into the problem of state. Ideally, containers should be stateless. That is, they should not have a fixed identity within your system. It shouldn't matter if a container is destroyed, recreated, moved to another node, or whatever—as long as the desired number of containers are running. Stateless containers are suitable for web applications that persist state to an external database. However, databases themselves require persistent storage. You don't want your data to vanish because a container was rescheduled! Besides, a CrateDB cluster requires stable network identities so that CrateDB can manage shard allocation. Fortunately, Kubernetes provides the StatefulSet controller, which gives each pod a fixed identity and storage that persists across restarts and rescheduling. That is, the controller creates all of the pods within a StatefulSet from the same template, but they are not interchangeable. That's just the ticket for our purposes.

Services

So far we've got pods, which house one or more related containers. Also, we have controllers, which create and destroy pods according to our specifications. We can even have pods with persistent storage. However, because the pods can be stopped, started, and rescheduled to any Kubernetes node, assigned IP addresses may change over time. Changing IP addresses are the sort of thing we don't want client applications to have to handle. To solve this problem, we use services. A service provides a static interface to access one or more pods. The three most commonly used services are:

ClusterIP

The default type of service. Exposes pods on an internal network interface that is only reachable from within the Kubernetes cluster itself.

NodePort

Pods are exposed on a defined port number. These ports are accessible using the IP address of any Kubernetes node, and they are reachable from outside the Kubernetes cluster.
LoadBalancer

Incoming network traffic is load balanced across the set of backend pods. A load balancer is made available on a defined port number that is accessible using the IP address of any Kubernetes node or a single virtual cluster IP address.

For our purposes, we want to create a LoadBalancer service that balances incoming queries across the whole CrateDB cluster.

Dive In

Enough of the theory! Let's dive in and put this knowledge into practice. By the time you've finished this section, you should have a simple three-node CrateDB cluster running on Kubernetes.

Prerequisites

For this tutorial you need to install:

- A hypervisor. Minikube (listed below) requires a hypervisor to work. Check the Minikube quickstart for a list of supported drivers. VirtualBox is a popular cross-platform option.
- kubectl, the standard Kubernetes command-line tool. You can use this to create and inspect objects in your Kubernetes cluster. Follow the installation documentation to get this installed.
- Minikube. There are multiple ways to provision Kubernetes. I chose Minikube because it is well suited for local development. Install the latest release of Minikube.

This tutorial uses these three pieces of software to create and interact with a Kubernetes cluster running inside a virtual machine. Once you have all three installed, you can continue.

Start Kubernetes

You don't need to worry about downloading disk images and setting up virtual machines manually. As long as you have a compatible hypervisor like VirtualBox installed on your system, Minikube detects it and sets the whole thing up for you automatically.

$ minikube start --memory 4096
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [======================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Here, we passed in --memory 4096 because Minikube assigns 1GB of memory to the virtual machine by default. However, we want more than that for our cluster. Feel free to change this if you need to.

Minikube automatically configures kubectl to use the newly created Kubernetes cluster. To verify this, and to ensure the cluster is ready to use, run this command:

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    4m        v1.10.0

Great! The cluster is up and running.

Namespaces

Kubernetes objects exist inside namespaces. Namespaces provide a shared environment for objects, and these environments are isolated from other namespaces.

The Standard Namespaces

Kubernetes configures a new cluster with some initial namespaces. You can view the initial namespaces like so:

$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    12m
kube-public   Active    12m
kube-system   Active    12m

Let's look at these namespaces in detail:

- default: The default namespace, used when no other namespaces are specified.
- kube-public: This namespace is for resources that are publicly available.
- kube-system: The namespace used by the Kubernetes internals.

If you inspect the list of pods running in the kube-system namespace, you can see the Kubernetes components we covered earlier:

$ kubectl get pods --namespace kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-minikube                           1/1       Running   0          30m
kube-addon-manager-minikube             1/1       Running   0          31m
kube-apiserver-minikube                 1/1       Running   0          30m
kube-controller-manager-minikube        1/1       Running   0          31m
kube-dns-86f4d74b45-hl6v9               3/3       Running   0          31m
kube-proxy-j7c59                        1/1       Running   0          31m
kube-scheduler-minikube                 1/1       Running   0          30m
kubernetes-dashboard-5498ccf677-wbbsm   1/1       Running   0          31m
storage-provisioner                     1/1       Running   0          31m

Create a CrateDB Namespace

Before we continue, let's create a namespace for our CrateDB cluster resources. Technically, we don't require namespaces for what we're doing. However, it's worthwhile using namespaces to familiarize yourself with them.
Create a namespace like this:

$ kubectl create namespace crate
namespace/crate created

The newly created namespace should show up, like this:

$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    32m
kube-public   Active    31m
kube-system   Active    32m
crate         Active    59s

From now on, we pass --namespace crate to kubectl when creating or modifying resources used by our CrateDB cluster.

Create the CrateDB Services

Internal Service

For CrateDB to work, each CrateDB node should be able to discover and communicate with the other nodes in the cluster. Create a new file named crate-internal-service.yaml. Paste in the following configuration:

Let's break that down:

- We define a Kubernetes service called crate-internal-service that maps onto all pods with the app:crate label. In a subsequent step, we configure all of our CrateDB pods to use this label.
- We configure a static IP address for the service.
- We expose the service on port 4300, which is the standard port CrateDB uses for inter-node communication.
- Kubernetes creates SRV records for each matching pod. We can use those records in a subsequent step to configure CrateDB unicast host discovery.

Save the file and then create the service, like so:

$ kubectl create -f crate-internal-service.yaml --namespace crate
service/crate-internal created

External Service

We need to expose our pods externally so that clients may query CrateDB. Create a new file named crate-external-service.yaml. Paste in the following configuration:

Let's break that down:

- We define a Kubernetes service called crate-external-service that maps onto all pods with the app:crate label, as before.
- Kubernetes will create an external load balancer.
- We expose the service on port 4200 for HTTP clients and port 5432 for PostgreSQL wire protocol clients.

Usually, a LoadBalancer service is only available using a hosted solution. In these situations, Kubernetes provisions an external load balancer using the load balancer service provided by the hosted solution.
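The two YAML blocks referred to above did not survive in this copy of the tutorial. Reconstructed from the breakdowns, they plausibly looked something like the following — treat every field as an approximation rather than the original files. In particular, the article mentions a static IP for the internal service, but a headless service (clusterIP: None) is the variant that yields the per-pod SRV records the breakdown describes:

```yaml
# crate-internal-service.yaml -- reconstruction, not the original file.
kind: Service
apiVersion: v1
metadata:
  name: crate-internal-service
  labels:
    app: crate
spec:
  # Headless service: Kubernetes publishes per-pod SRV records,
  # which CrateDB can use for unicast host discovery.
  clusterIP: None
  ports:
  - port: 4300
    name: crate-internal
  selector:
    app: crate
---
# crate-external-service.yaml -- reconstruction, not the original file.
kind: Service
apiVersion: v1
metadata:
  name: crate-external-service
  labels:
    app: crate
spec:
  type: LoadBalancer
  ports:
  - port: 4200
    name: crate-web
  - port: 5432
    name: postgres
  selector:
    app: crate
```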
Fortunately, however, Minikube mocks out a load balancer for us. So we can happily use LoadBalancer services locally.

Save the file and then create the service, like so:

$ kubectl create -f crate-external-service.yaml --namespace crate
service/crate-external created

Create the CrateDB Controller

So far, we've created two services:

- An internal service that CrateDB uses for node discovery and inter-node communication.
- An external service that load balances CrateDB client requests.

However, these are just the interfaces to our CrateDB cluster. We still need to define a controller that knows how to assemble and manage our desired CrateDB cluster. This controller is the heart of our setup. Create a new file named crate-controller.yaml. Paste in the following configuration:

      volumes:
        # Use a RAM drive for storage which is fine for testing, but must
        # not be used for production setups!
        - name: data
          emptyDir:
            medium: "Memory"

Phew! That's a lot. Let's break it down:

- We define a Kubernetes controller that creates pods named crate-0, crate-1, crate-2, and so on. This controller creates a StatefulSet called crate-set. This set must have three CrateDB pods with a fixed identity and persistent storage.
- Each pod has the app:crate label, flagging them for use with the services we defined previously.
- We use an InitContainer to manually configure the correct memory map limit so that the CrateDB bootstrap checks pass.
- Each pod gets 512MB of memory, so that the three pods in our cluster use 1.5GB of the 4GB we allocated to the virtual machine. Plenty of room to grow.
- We define the CrateDB container to run in each pod. We're using the 3.0.5 version of the CrateDB Docker image.
- The crate-internal-service provides SRV records. We use those records to configure CrateDB unicast host discovery via command-line options.
- Each pod exposes port 4300 for intra-node communication, port 4200 for HTTP clients, and port 5432 for PostgreSQL wire protocol clients.
- We pass through some configuration via environment variables. CrateDB detects CRATE_HEAP_SIZE, and the command-line options make use of the rest. The CrateDB heap size is configured as 256MB, which is 50% of the available memory.
- We're using a RAM drive for storage as a temporary storage solution to get us set up quickly.

Save the file and then create the controller, like so:

$ kubectl create -f crate-controller.yaml --namespace crate
statefulset.apps/crate-controller created

The StatefulSet controller brings up the CrateDB pods one by one. You can monitor the progress with this command:

$ kubectl get pods --namespace crate
NAME      READY     STATUS            RESTARTS   AGE
crate-0   0/1       PodInitializing   0          36s

Keep running this command until you see something like this:

$ kubectl get pods --namespace crate
NAME      READY     STATUS    RESTARTS   AGE
crate-0   1/1       Running   0          2m
crate-1   1/1       Running   0          1m
crate-2   1/1       Running   0          1m

Once you see this, all three pods are running, and your CrateDB cluster is ready!

Access the CrateDB Cluster

Before you can access CrateDB, you need to inspect the running services:

$ kubectl get service --namespace crate
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
crate-external-service   LoadBalancer   10.96.227.26     <pending>     4200:31159/TCP,5432:31316/TCP   44m
crate-internal-service   ClusterIP      10.101.192.101   <none>        4300/TCP                        44m

We're only interested in the crate-external-service service. The PORT(S) column tells us that Kubernetes port 31159 is mapped to CrateDB port 4200 (HTTP) and Kubernetes port 31316 is mapped to CrateDB port 5432 (PostgreSQL wire protocol). Because of a Minikube quirk, the external IP is always pending.
To get around this, we can ask Minikube to list the services instead:

$ minikube service list --namespace crate
|------------|------------------------|--------------------------------|
| NAMESPACE  | NAME                   | URL                            |
|------------|------------------------|--------------------------------|
| my-cratedb | crate-external-service |                                |
|            |                        |                                |
| my-cratedb | crate-internal-service | No node port                   |
|------------|------------------------|--------------------------------|

As shown, the crate-external-service has two ports exposed on 192.168.99.100. Because of a Minikube issue, for now, the http:// prefix is added regardless of the actual network protocol in use, which happens to be right for the CrateDB HTTP port, but is not correct for the PostgreSQL wire protocol port.

The port we're interested in for this tutorial is the HTTP port, which is 31159. You can verify that this port is open and functioning as expected by issuing a simple HTTP request via the command line:

$ curl 192.168.99.100:31159
{
  "ok" : true,
  "status" : 200,
  "name" : "Regenstein",
  "cluster_name" : "my-crate",
  "version" : {
    "number" : "3.0.5",
    "build_hash" : "89703701b45084f7e17705b45d1ce76d95fbc7e5",
    "build_timestamp" : "2018-07-31T06:18:44Z",
    "build_snapshot" : false,
    "es_version" : "6.1.4",
    "lucene_version" : "7.1.0"
  }
}

Great! This is the HTTP API response we expect. Copy and paste the same network address (192.168.99.100:31159 in the example above, but your address is probably different) into your browser. You should see the CrateDB Admin UI.

Select the Cluster screen from the left-hand navigation menu, and you should see something that looks like this:

Here you can see that our CrateDB cluster has three nodes, as expected. If this is your first time using CrateDB, you should check out the Getting Started guide for help importing test data and running your first query. When you're done, you can stop Minikube, like so:

$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
Wrap Up

In this post, we went over the basics of Kubernetes and created a simple three-node CrateDB cluster with Kubernetes on a single virtual machine. This setup is a good starting point for a local testing environment. However, we can improve it. An excellent place to start is with storage. CrateDB nodes are writing their data to a RAM drive, which is no good if we want our data to survive a power cycle. Part two of this miniseries shows you how to configure non-volatile storage and how to scale your cluster.
https://crate.io/a/run-your-first-cratedb-cluster-on-kubernetes-part-one/
Monad

Monad extends the Applicative type class with a new function flatten. Flatten takes a value in a nested context (e.g. F[F[A]] where F is the context) and "joins" the contexts together so that we have a single context (i.e. F[A]). The name flatten should remind you of the functions of the same name on many classes in the standard library.

Option(Option(1)).flatten
// res0: Option[Int] = Some(1)
Option(None).flatten
// res1: Option[Nothing] = None
List(List(1),List(2,3)).flatten
// res2: List[Int] = List(1, 2, 3)

Monad instances

If Applicative is already present and flatten is well-behaved, extending the Applicative to a Monad is trivial. To provide evidence that a type belongs in the Monad type class, cats' implementation requires us to provide an implementation of pure (which can be reused from Applicative) and flatMap.

We can use flatten to define flatMap: flatMap is just map followed by flatten. Conversely, flatten is just flatMap using the identity function x => x (i.e. flatMap(_)(x => x)).

import cats._

implicit def optionMonad(implicit app: Applicative[Option]) =
  new Monad[Option] {
    // Define flatMap using Option's flatten method
    override def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] =
      app.map(fa)(f).flatten
    // Reuse this definition from Applicative.
    override def pure[A](a: A): Option[A] = app.pure(a)

    @annotation.tailrec
    def tailRecM[A, B](init: A)(fn: A => Option[Either[A, B]]): Option[B] =
      fn(init) match {
        case None => None
        case Some(Right(b)) => Some(b)
        case Some(Left(a)) => tailRecM(a)(fn)
      }
  }

flatMap

flatMap is often considered to be the core function of Monad, and cats follows this tradition by providing implementations of flatten and map derived from flatMap and pure. Part of the reason for this is that the name flatMap has special significance in Scala, as for-comprehensions rely on this method to chain together operations in a monadic context.
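The two equivalences just described can be checked directly with the standard library (a quick sketch; the values are arbitrary):

```scala
val fa: Option[Int] = Some(2)
val f: Int => Option[Int] = x => Some(x * 10)

// flatMap is map followed by flatten:
val viaFlatMap    = fa.flatMap(f)        // Some(20)
val viaMapFlatten = fa.map(f).flatten    // Some(20)
assert(viaFlatMap == viaMapFlatten)

// flatten is flatMap with the identity function:
val nested: Option[Option[Int]] = Some(Some(3))
assert(nested.flatten == nested.flatMap(x => x))
```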
import scala.reflect.runtime.universe
// import scala.reflect.runtime.universe

universe.reify(
  for {
    x <- Some(1)
    y <- Some(2)
  } yield x + y
).tree
// res4: reflect.runtime.universe.Tree = Some.apply(1).flatMap(((x) => Some.apply(2).map(((y) => x.$plus(y)))))

tailRecM

In addition to requiring flatMap and pure, Cats has chosen to require tailRecM, which encodes stack-safe monadic recursion, as described in Stack Safety for Free by Phil Freeman. Because monadic recursion is so common in functional programming but is not stack safe on the JVM, Cats has chosen to require this method of all monad implementations as opposed to just a subset. All monadic recursion in Cats is done via tailRecM. An example Monad implementation for Option appears in the optionMonad instance above; note the tail-recursive and therefore stack-safe implementation of tailRecM. More discussion about tailRecM can be found in the FAQ.

ifM

Monad provides the ability to choose later operations in a sequence based on the results of earlier ones. This is embodied in ifM, which lifts an if statement into the monadic context.

import cats.implicits._
// import cats.implicits._

Monad[List].ifM(List(true, false, true))(ifTrue = List(1, 2), ifFalse = List(3, 4))
// res6: List[Int] = List(1, 2, 3, 4, 1, 2)

Composition

Unlike Functors and Applicatives, not all Monads compose. This means that even if M[_] and N[_] are both Monads, M[N[_]] is not guaranteed to be a Monad. However, many common cases do. One way of expressing this is to provide instructions on how to compose any outer monad (F in the following example) with a specific inner monad (Option in the following example).

Note: the example below assumes usage of the kind-projector compiler plugin and will not compile if it is not being used in a project.
case class OptionT[F[_], A](value: F[Option[A]])

implicit def optionTMonad[F[_]](implicit F: Monad[F]) = {
  new Monad[OptionT[F, ?]] {
    def pure[A](a: A): OptionT[F, A] = OptionT(F.pure(Some(a)))

    def flatMap[A, B](fa: OptionT[F, A])(f: A => OptionT[F, B]): OptionT[F, B] =
      OptionT {
        F.flatMap(fa.value) {
          case None => F.pure(None)
          case Some(a) => f(a).value
        }
      }

    def tailRecM[A, B](a: A)(f: A => OptionT[F, Either[A, B]]): OptionT[F, B] =
      OptionT {
        F.tailRecM(a)(a0 => F.map(f(a0).value) {
          case None => Either.right[A, Option[B]](None)
          case Some(b0) => b0.map(Some(_))
        })
      }
  }
}

This sort of construction is called a monad transformer. Cats has an OptionT monad transformer, which adds a lot of useful functions to the simple implementation above.
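As a usage sketch of the transformer just defined (hypothetical code, not from the cats documentation; it assumes the OptionT and optionTMonad definitions above are in scope along with cats and the kind-projector plugin):

```scala
// Compose List (outer monad) with Option (inner monad).
val xs: OptionT[List, Int] = OptionT(List(Some(1), None, Some(3)))

// flatMap through the composed monad; None rows are passed through untouched.
val ys = optionTMonad[List].flatMap(xs)(x => OptionT(List(Option(x * 10))))
// ys.value == List(Some(10), None, Some(30))
```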
https://typelevel.org/cats/typeclasses/monad.html
The authors are senior software engineers for Visual Numerics. They can be contacted at.

One project we were recently involved in was the port of a large 32-bit application, which supported 11 platforms, to a 64-bit environment. The number of lines of code in this application exceeded 300,000. Considering that the 32-bit application had parts developed several years ago, there was every likelihood that the code had been modified by a variety of developers. For this and other reasons, we suspected that, among other problems, type mismatches that cause problems for a 64-bit port were likely introduced as modules were added or removed over time.

We ported the 32-bit application to 64-bit to take advantage of the benefits of 64-bit technology: large file support, large memory support, and 64-bit computation, among other features. Our overall approach was an iterative one that alternated between zooming in on detailed issues, such as byte order and refining compiler flags, and stepping back to look at global issues, such as ANSI compliance and future portability of the source-code base.

Our first step was to research 64-bit resources to learn about each of the 11 operating systems' compiler switches, memory models, and coding considerations. To define our starting point, we turned on the compiler warnings for one platform, ran a first build, and examined the build log's messages. With these initial builds and later use of tools such as Parasoft's Insure++, lint, and native debuggers, we developed a road map of the issues we would encounter. From there, we proceeded to perform a complete inventory of the source code and examine every build configuration. After initial code modifications, debug sessions, and passes through build messages, we had enough information to sort out and prioritize realistic milestones and the specific tasks required to get there.
We reached a significant milestone when we had a running application with enough basic functionality that it could be debugged by running it through our automated test suite, which consists of backward compatibility tests in addition to new tests built to exercise 64-bit features. If you have several 64-bit platforms as part of your conversion project, you might be tempted to work on one platform at a time. Once the application is running properly on the first platform, you might move on to the next platform, and so on. However, we found significant advantages to working on all platforms at the same time because: - Each of the compilers provided different information in its warnings, and looking at the errors from several compilers can help to pinpoint problem areas. - Errors behave differently on different platforms. The same problem might cause a crash on one platform and appear to run successfully on another. A final consideration in approaching this project was to plan ahead for time required for the final release testing phase. Because our newly modified code base is shared across multiple 32-bit and 64-bit platforms, each 32-bit platform would need to be retested as thoroughly as our newly ported platforms, thereby doubling testing time and resources. Cross-Platform Issues There are a number of issues, ranging from compiler warnings to reading/writing binary data, that you can face when porting 32-bit applications that run on multiple 64-bit operating systems. Luckily, compilers can assist in determining 64-bit porting issues. Set the warning flags of the compilers to the strictest level on all platforms, paying close attention to warnings that indicate data truncation or assignment of 64-bit data to 32-bit data. However, one problem with compiler warnings is that turning on stricter warning levels can lead to an overwhelming number of warnings, many of which were automatically resolved by the compiler. 
The problem is that major warnings are buried within the mass of minor warnings, with no easy way to distinguish between the two. To resolve this issue, we enabled the warnings on multiple platforms and performed concurrent builds. This helped because different compilers give different warnings with different levels of detail. We then filtered the warnings using information from multiple compilers and were able to determine which warnings needed to be fixed.

Some application requirements call for binary data or files to work with both 64-bit and 32-bit applications. In these situations, you have to examine your binary format for issues resulting from larger longs and pointers. This may require modifications to your read/write functions to convert sizes and handle any Little- or Big-endian issues for multiple platforms. To get the correct machine endianness, the larger data sizes in 64-bit applications require extended byte swapping. For example, a 32-bit long:

Big Endian = (B0, B1, B2, B3)

can be converted to:

Little Endian = (B3, B2, B1, B0)

while a 64-bit long:

Big Endian = (B0, B1, B2, B3, B4, B5, B6, B7)

is converted to:

Little Endian = (B7, B6, B5, B4, B3, B2, B1, B0)

Most compilers will find mismatched types and correct them during the build. This is true for simple assignments as well as most parameters passed to other functions. The real problems lie in the integer-long-pointer mismatches that are invisible to the compiler at compile time, or when an assumption the compiler makes at compile time is what produces a mismatch. The former concerns pointer arguments and function pointers, while the latter primarily concerns function prototypes.

Passing integer and long pointers as arguments to functions can cause problems if the pointers are then dereferenced as a different, incompatible type. These situations are not an issue in 32-bit code because integers and longs are interchangeable.
However, in 64-bit code, these situations result in runtime errors because of the inherent flexibility of pointers. Most compilers assume that what you are doing is what you intended to do, and quietly allow it unless you can enable additional warning messages. It is only during runtime that the problems surface. Listing One, for example, compiles without warnings on both Solaris and AIX (Forte7, VAC 6) in both 32-bit and 64-bit modes. However, the 64-bit version prints the incorrect value when run. While these problems may be easy to find in a short example, it may be more difficult in much larger code bases. This sort of problem might be hidden in real-world code and most compilers will not find it. Listing One works properly when built as a 64-bit executable on a Little-endian machine because the value of arg is entirely contained within the long's four least-significant bytes. However, even on Little-endian x86 machines, the 64-bit version produces an error during runtime when the value of arg exceeds its four least-significant bytes. With function pointers, the compiler has no information about which function will be called, so it cannot correct or warn you about type mismatches that might exist. The argument and return types of all functions called via a particular function pointer should agree. If that is not possible, you may have to provide separate cases at the point at which the function is called to make the proper typecasts of the arguments and return values. The second issue concerns implicit function declarations. If you do not provide a prototype for each function that your code calls, the compiler makes assumptions about them. Variations of the compiler warning "Implicit function declaration: assuming extern returning int" are usually inconsequential in 32-bit builds. However, in 64-bit builds, the assumption of an integer return value can cause real problems when the function returns either a long or a pointer (malloc, for example). 
To eliminate the need for the compiler to make assumptions, make sure that all required system header files are included and provide prototypes for your own external functions. Hidden Issues There are, of course, issues that may not be readily apparent at the beginning of the project. For instance, in 64-bit applications, longs and pointers are larger, which also increases the size of a structure containing these data types. The layout of your structure elements determines how much space is required by the structure. For example, a structure that contains an integer followed by a long in a 32-bit application is 8 bytes, but a 64-bit application adds 4 bytes of padding to the first element of the structure to align the second element on its natural boundary; see Figure 1. To minimize this padding, reorder the data structure elements from largest to smallest. However, if data structure elements are accessed as byte streams, you need to change your code logic to adjust for the new order of elements in the data structure. For cases where reordering the data structures is not practical and the data structure's elements are accessed as a byte stream, you need to account for padding. Our solution for these cases was to implement a helper function that eliminates the padding from the data structure before writing to the byte stream. A side benefit to this solution was that no changes were required on the reader side; see Listing Two. Arrays 64-bit long type arrays and arrays within structures will not only hold larger values than their 32-bit equivalents, but they may also hold more elements. Consider that 4-byte variables previously used to define array boundaries and allocate array sizes may also need to be converted to longs. (For help in determining whether existing long arrays should be reverted to integer type for better performance in your 64-bit application, see.) 
Coding Practices and Porting Considerations

In addition to following the standard 64-bit coding practices recommended in your operating system's compiler documentation and noted in the resources listed in the Resources section, here are a few considerations and coding tips that will help when planning a 64-bit migration project:

- Convert your source-code base to ANSI C/C++, if possible and realistic. This simplifies your 64-bit port and any future ports.
- Does your target operating system support both 32- and 64-bit applications? Find this out ahead of time, as it will impact project decisions. On Solaris, for example, you can use the system command isainfo to check whether both 32-bit and 64-bit applications are supported.
- If your source code is not already managed under a version-control system such as CVS, it will be helpful to implement one before porting your code. Because of the large number of global changes required for the port, we needed to revert to previous code much more often than usual, which made having a version-control system extremely beneficial.
- Does your application use and load 32-bit third-party libraries? If so, it is better to decide during the planning phase whether these libraries should be upgraded to 64 bit. If long data and pointers are not transferred between your main application and a third-party library, then the library may not need a 64-bit migration at all, as long as the operating system is capable of running both 32-bit and 64-bit applications. If the operating system does not have this dual capability, plan on taking the steps required to migrate the third-party library to 64 bit.
- If your application dynamically loads libraries at runtime and still uses the old load() calls, switch to dlopen() to correct data-transfer problems between the main application and the library module. This is especially true for older AIX applications coded before dlopen() was available.
To enable runtime linking on AIX, use the -brtl linker option together with the -L option to locate libraries. For compatibility, both your main application and all libraries loaded with dlopen() will need to be compiled with runtime linking enabled.
- Consider backwards compatibility. When porting to 64-bit platforms, backwards-compatibility issues become even more critical. Consider enhancing your current test suite to include both older 32-bit tests and new 64-bit tests.

On Solaris, sample isainfo output on a system that supports both application types looks like this:

    % isainfo -v
    64-bit sparcv9 applications
    32-bit sparc applications

Tools

Performing a source-code inventory of a large code base shared across several platforms for a 32-bit to 64-bit migration, and assessing the scope of each change, however trivial, can prove to be a daunting task. The potential to overlook conversion problems and introduce new errors is high. However, by using a small arsenal of 64-bit tools and techniques, many of these potential problems can be caught during the precompilation stage, at compile time, and at runtime. Some of the tools available are:

- Precompilation stage. A pass using lint, available with the compiler using the -errchk=longptr64 flag, is effective in catching type-conversion mismatches, implicit function declarations, and parameter mismatches. Example 1 shows typical lint warnings that are red flags for 64-bit code. Other lint-type applications are also available, such as FlexeLint.
- Compile-time techniques. Adjust your compiler warning levels so warnings are not suppressed, at least during the initial stages of the project. For multiplatform environments, take advantage of the fact that different operating systems compiling the same source code will complain about different issues. Clearing these warnings should benefit all platforms.
- Compile-time/runtime tools. Advanced tools, such as Insure++ or Purify for 64-bit code on at least one base platform, are a huge benefit in any development environment for both runtime and compile-time issues.
- Runtime tools.
Try dbx, provided with each UNIX compiler, and ddd (the data display debugger), a graphical interface for dbx and gdb on UNIX.

Conclusion

Taking the time to do up-front planning and investigation is worth the effort. Don't get discouraged when nothing in your application is working correctly. Methodical and careful passes through the code will uncover the problem areas. With available memory and dataset sizes growing tremendously each year, the benefits of a 64-bit application are worth the pain of conversion.

DDJ

Listing One

#include <stdlib.h>
#include <stdio.h>

int Func1(char *);

int main()
{
    long arg, ret;
    arg = 247;
    ret = Func1((char *)&arg);
    printf("%ld\n", ret);
    return(0);
}

int Func1(char * input)
{
    int *tmp;
    tmp = (int *)input;
    return(*tmp);
}

Listing Two

typedef struct demo {
    int i;
    long j;
} DEMO;

DEMO test;

/* pout_raw outputs raw bytes to a file */
/* output each element of a structure to avoid padding */
pout_raw((int) file_unit, (char *) &test.i, sizeof(test.i));
pout_raw((int) file_unit, (char *) &test.j, sizeof(test.j));

/* the following line of code includes padding */
pout_raw((int) file_unit, (char *) &test, sizeof(test));
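Listing Two's field-by-field idea can also be illustrated with Python's struct module. The layout here is a hypothetical match for DEMO: a 32-bit int followed by a 64-bit integer, using native "@" alignment.

```python
import struct

# Size of the whole struct with native alignment: the 64-bit field
# typically forces 4 bytes of padding after the 32-bit int.
whole = struct.calcsize("@iq")

# Writing each field separately, as Listing Two does, emits only
# the payload bytes and skips the padding.
field_by_field = struct.calcsize("@i") + struct.calcsize("@q")

print(whole, field_by_field)  # typically 16 vs 12 on 64-bit platforms
```

As in the article's helper-function approach, the reader needs no changes: it consumes a dense byte stream either way.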
http://www.drdobbs.com/parallel/multiplatform-porting-to-64-bits/184406427
We will eventually refer to this as a base class or parent class, but for the time being, we will simply use it like any other class to show that it is indeed just a normal class. Note that we will explain the added keyword protected shortly. Figure 14.1 is a graphical representation of the Cvehicle class. Ignore lines 5, 6, and 11; they will be explained in detail later. This program cannot be compiled into an executable because it is only a header file without main(), but check that it contains no errors such as typos. Next, examine the file named vehicle.cpp, and you will find that it is the implementation of the vehicle class. The initialize() method assigns the values passed as parameters to the wheels and weight variables.

1. // program vehicle.cpp, implementation part,
2. // compile without error, generating object file, do not run
3. #include "vehicle.h"
4.
5. // class implementation part
6. // initialize to data input by user
7. void Cvehicle::initialize(int input_wheels, float input_weight)
8. {
9.     wheels = input_wheels;
10.    weight = input_weight;
11. }
12.
13. // get the number of wheels of this vehicle
14. int Cvehicle::get_wheels()
15. {
16.    return wheels;
17. }
18.
19. // return the weight of this vehicle
20. float Cvehicle::get_weight()
21. {
22.    return weight;
23. }
24.
25. // return the load on each wheel
26. float Cvehicle::wheel_load()
27. {
28.    return (weight/wheels);
29. }

Program 14.2: vehicle.cpp, implementation program of the Cvehicle class

We have methods to return the number of wheels and the weight, and finally one that calculates and returns the load on each wheel. At this point, we are more interested in learning how to implement the interface to the classes. We will use this class as a base class later. Compile this program without error, generating an object file, in preparation for the next program example; you cannot execute it because there is no main() entry point.
The file named transprt.cpp uses the Cvehicle class in a similar manner to what we illustrated in the previous module. This should be an indication to you that the Cvehicle class is just a normal class as defined in C++.

1. // program transprt.cpp, the main program,
2. // compile and run
3.
4. #include <iostream>
5. using namespace std;
6. #include "vehicle.h"
7. // a user-defined header file, put in the same folder as this program
8.
9. int main()
10. {
11.    Cvehicle car, motorcycle, truck, sedan_car;
12.    // 4 objects instantiated
13.
14.    // data initialization
15.    car.initialize(4,3000.0);
16.    truck.initialize(20,30000.0);
17.    motorcycle.initialize(2,900.0);
18.    sedan_car.initialize(4,3000.0);
19.
20.    // display the data
21.    cout<<"The car has "<<car.get_wheels()<<" tires.\n";
22.    cout<<"Truck has load "<<truck.wheel_load()<<" kg per tire.\n";
23.    cout<<"Motorcycle weight is "<<motorcycle.get_weight()<<" kg.\n";
24.    cout<<"Weight of sedan car is "<<sedan_car.get_weight()<<" kg, and has
25.    "<<sedan_car.get_wheels()<<" tires.\n";
26.
27.    // system("pause");
28. }

Program 14.3: transprt.cpp, the main program

Examine the following file named car.h for our first example of using a derived class. The Cvehicle class is inherited due to the ":public Cvehicle" code added to line 12 as shown below:

class Ccar : public Cvehicle

1. // another class declaration car.h
2. // save and include this file in your project
3. // do not compile or run.
4.
5. #ifndef CAR_H // to avoid car.h re-inclusion/multi-inclusion
6. #define CAR_H
7.
8. #include "vehicle.h"
9.
10. // a derived class declaration part
11. // Ccar class derived from Cvehicle class
12. class Ccar : public Cvehicle
13. {
14.    int passenger_load;
15. public:
16.    // this method will be used instead of the same
17.    // method in Cvehicle class - overriding
18.    void initialize(int input_wheels, float input_weight, int people = 4);
19.    int passengers(void);
20. };
21.
22.
#endif

Program 14.4: car.h program, declaration of the derived class Ccar

This derived class named Ccar is composed of all the information included in the base class Cvehicle, plus all of its own additional information. Even though we did nothing to the class named Cvehicle, we made it available in this car.h program. In fact, it can be used as a normal class and as a base class in the same program. A class that inherits another class is called a derived class or a child class. Likewise, the inherited class is called a base class, though the terms parent class and superclass are also used. A base class is a rather general class that can cover a wide range of object attributes, behaviors, or properties, whereas a derived class is more specific but at the same time more directly useful. As a simple example, consider your family. You and your siblings inherited many characteristics of your mother and/or father, such as eye and hair color. Your father and/or mother are the base classes, whereas you and your siblings are the derived classes. In this case, the Cvehicle base class can be used to declare objects that represent planes, trucks, cars, bicycles, or any other vehicles you can think of. The class named Ccar, however, can only be used to declare an object of type Ccar, because we have limited the kinds of data that can be used intelligently with it. The Ccar class is therefore more restrictive and specific than the Cvehicle class. If we wished to get even more specific, we could define a derived class using Ccar as the base class, name it Csports_car, and include information such as red_line_limit for the tachometer or turbo_type_engine as new member variables. The Ccar class would then be used as a derived class and a base class at the same time, so it should be clear that these names refer to how a class is used.
A derived class is defined by including the header file of the base class, as is done in line 8 as shown below:

#include "vehicle.h"

Then the name of the base class is given after the name of the derived class, separated by a colon ( : ), as illustrated in line 12 as shown below:

class Ccar : public Cvehicle

Ignore the keyword public immediately following the colon in this line. It defines public inheritance, which we will discuss later. All objects declared as being of class Ccar are therefore composed of the two variables inherited from the Cvehicle class, plus the single member variable declared in the Ccar class named passenger_load. The full list of member variables is shown below:

▪ int wheels;
▪ float weight;
▪ int passenger_load;

An object of this class also has three of the four methods of Cvehicle, as listed below:

▪ get_wheels( ) { }
▪ get_weight( ) { }
▪ wheel_load( ) { }

And the two new methods:

▪ initialize( ) { }
▪ passengers( ) { }

The method named initialize() that is part of the Cvehicle class is not available here because it is hidden by the local version of initialize() that is part of the Ccar class. The local method is used when the names match, allowing you to customize your new class. Figure 14.2 is a graphical representation of an object of this class. Note once again that the implementation of the base class only needs to be supplied in its compiled form of the vehicle.cpp file; the source code of the implementation can be hidden. The header file for the base class, vehicle.h, must be available as a text file, since the class definitions are required in order to use the class. The following figure is an illustration of our simple class hierarchy. Here, a car inherits some of the common vehicle properties.

tenouk: Fundamentals of C++ Object Oriented Tutorial
http://www.tenouk.com/Module14.html
Pandas Groupby Tutorial

If you are reading this post, I hope you already know what groupby is in SQL and how it is used to aggregate rows that share the same value in one or more columns. In this blog I am going to take a dataset and show how we can perform groupby on this data and explore it further.

Load Data

We are going to use the seaborn exercise data for this tutorial. The data represents the type of diet and its corresponding pulse rate measured over time in minutes. You can load this data with a simple seaborn command, and after some cleanup the data is ready to be used:

import seaborn as sns
import pandas as pd

exercise = sns.load_dataset('exercise')
exercise.drop('Unnamed: 0', inplace=True, axis='columns')
exercise['time'] = exercise['time'].str.replace(' min', '')
exercise['time'] = pd.to_numeric(exercise['time'])
exercise.rename(columns={'time': 'time_mins'}, inplace=True)
exercise.head()

Pandas Groupby Count

As a first step, everyone would be interested in grouping the data on single or multiple columns and counting the number of rows within each group. You can get the count using the size() or count() function; if you use count(), it returns a dataframe. Here we are interested in grouping on id and kind (rest, walking, running), the activity during which the pulse rate is measured. You can see that for id 1 and kind rest the data has 3 rows, and for walking and running there are no rows available.

grouped = exercise.groupby(['id','kind'], axis=0)
grouped.count()

Basic Aggregation

Now let's look at the simple aggregation functions that can be applied to the columns of this data. The first thing you would be interested to know is the mean or average pulse rate across each diet under each id. Here we first group by id and diet and then use the mean function to get a multi-index dataframe of the groups with the mean values for the columns pulse and time_mins:

exercise.groupby(['id','diet']).mean()
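The same grouping steps can be sketched on a small, self-contained frame (hypothetical data standing in for the exercise dataset, so the example runs without seaborn):

```python
import pandas as pd

# A tiny stand-in for the exercise data: two ids, two diets.
df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 2],
    "diet":      ["low fat"] * 3 + ["no fat"] * 3,
    "pulse":     [85, 85, 88, 90, 92, 98],
    "time_mins": [1, 15, 30, 1, 15, 30],
})

# size() counts rows per group and returns a Series;
# count() counts non-null values per column and returns a DataFrame.
print(df.groupby(["id", "diet"]).size())
print(df.groupby(["id", "diet"]).count())

# Mean pulse and time per (id, diet) group, as described above.
print(df.groupby(["id", "diet"]).mean())
```

The size/count distinction matters when columns contain missing values: size() includes them, count() does not.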
We can easily see from this data that the low-fat diet gives a lower pulse rate than the no-fat diet. Wow, so this data clears up the misconception that eating fat-rich food is bad for your health.

exercise.groupby(['diet']).mean()

At this point you would probably also be interested to see what the average pulse is for each kind, so let's find out. It looks like rest has the lowest mean pulse rate and running the highest, which was expected.

exercise.groupby(['kind']).mean()

There are other aggregating functions like sum, min, max, std, var, etc. We will look into some of these functions later in the post; you can check the rest in the pandas documentation on aggregating functions.

The result of the aggregation has the group names as the new index along the grouped axis. In the case of multiple keys, the result is a multi-index by default, though this can be changed with the as_index option. You can set the as_index parameter to False:

exercise.groupby(['id','diet'], as_index=False).agg(sum).head()

or you can also do a reset_index:

exercise.groupby(['id','diet']).sum().head().reset_index()

Describe

If you want to generate descriptive statistics that summarize the count, mean, standard deviation, percentiles, and extreme values of each group's distribution, simply use the describe function on the groupby object:

grouped = exercise.groupby(['id','diet'])
grouped.describe().head()

Pandas Groupby Multiple Functions

With a grouped series, or a single column of the group, you can also pass a list of aggregate functions (or a dict of functions) to aggregate with, and the result is a hierarchical-index dataframe:

exercise.groupby(['id','diet'])['pulse'].agg(['max','mean','min']).head()

Similarly, on a groupby object you can pass a list of functions and it gives the aggregated results for all the columns in the group:

exercise.groupby(['id','diet']).agg(['max','mean','min']).head()

Lambda Function for Aggregation

You can also use a lambda function for aggregation with
the groupby object. Here I use a lambda on the groupby that gives me the difference between the max and min values in each group, for both the pulse and time_mins columns. The output is a multi-index dataframe, and the column is renamed to diff:

grouped = exercise.groupby(['id','diet']).agg([lambda x: x.max() - x.min()]).rename(columns={'<lambda>': 'diff'})
grouped.head()

Pandas Groupby Aggregate Multiple Columns Using Named Aggregation

As per the pandas documentation, to support column-specific aggregation with control over the output column names, pandas accepts the special syntax in GroupBy.agg() known as "named aggregation", where:

- The keywords are the output column names
- The values are tuples whose first element is the column to select and whose second element is the aggregation to apply to that column. Pandas provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias.

So we can specify, for each column, the aggregation function we want to apply, and give it a custom name:

import numpy as np
exercise.groupby(['id','diet']).agg(min_pulse=pd.NamedAgg(column='pulse', aggfunc='min'),
                                    max_time=pd.NamedAgg(column='time_mins', aggfunc='max'),
                                    average_pulse=pd.NamedAgg(column='pulse', aggfunc=np.mean)).head(10)

Column Indexing

The groupby object can be indexed by a column, and the result is a series groupby object. Let's take the series groupby object for time_mins and calculate its mean, which gives the average time for each kind:

exercise.groupby('kind')['time_mins'].mean()
exercise.groupby('kind')['pulse'].mean()

Pandas Groupby get_group

get_group is another useful method for selecting a single group from the groupby object. Here we ask the groupby object for the kind walking, and it gives a dataframe with all rows of the walking group.
Basically, it gets you all the rows of the group you are looking for:

grouped = exercise.groupby('kind')
grouped.get_group('walking').head()

For an object grouped on multiple columns:

grouped = exercise.groupby(['kind','diet'])
grouped.get_group(('walking','no fat')).head()

Iterating a Groupby

If you want to iterate through each group for some manual operation, you can use something like the following; each iteration yields the group name along with its rows as a dataframe (or a series, for a series groupby):

for name, group in grouped:
    print(name)
    print(group)

Pandas SQL Groupby Having

You can query the multi-index dataframe using the query function, or use filter. Read this blog on how to use filters on a groupby object.

grouped = exercise.groupby(['id','diet']).agg('count').head()
# Same as SQL HAVING
grouped.query('pulse > 2')

Groupby Cumulative Sum

If you want a cumulative sum of the pulse and time_mins columns for each group, i.e. to add up those column values across groups:

exercise.groupby(['id','diet']).agg(sum).groupby('diet').cumsum()

Filtering Multi-index Columns

There is a small workaround for filtering the multi-index grouped dataframe. Suppose you want all the rows where the pulse max/min difference is greater than 10 and the time_mins max value is greater than or equal to 30:

grouped[(grouped[('pulse','diff')]>10) & (grouped[('time_mins','max')]>=30)]

Transform and Filter

Using transform you can create a new column with the aggregated data and get your original dataframe back, whereas filter can be used like HAVING in SQL. I have a detailed blog that talks about how to use transform and filter with groupby; please check this link.

Groupby Apply Function

We can also use apply and pass a function to each group in the groupby object. Say you want to halve the pulse rate in each group: we can group by id first and then use apply with our customized function, so that it returns a dataframe with all the rows of each group and their halved pulse rates.
def divide_by_half(x):
    # x is a DataFrame of group values
    x['pulse'] = x['pulse']/2
    return x

exercise.groupby('id').apply(divide_by_half)

Pandas Groupby Aggregate to List

Many times, instead of applying an aggregation function, we want the values of each group to be collected into a list. So if you want a list of all the time_mins values in each group of id and diet, here is how you can do it:

exercise.groupby(['id','diet'])['time_mins'].apply(list)

Conditional Groupby Count

This is an interesting one. Suppose you want to group the data on id and diet and count all the pulse values that are equal to 85:

exercise.groupby(['id','diet'])['pulse'].apply(lambda x: x[x == 85].count())

This post was a very detailed introduction to pandas groupby and the features and functions that can be used along with it. As a next step you can run these snippets, play around with other aggregation functions, dig into the details of the code, and get many more interesting results. It's not possible to cover all the scenarios and use cases around groupby in one blog post; I will try to cover other features and use cases in upcoming blogs. Let me know if you found this blog useful, or leave any suggestions in the comments section below.
https://kanoki.org/2019/09/04/pandas-groupby-tutorial/
Details

Description

The RDD API is very flexible, and as a result its execution is harder to optimize in some cases. The DataFrame API, on the other hand, is much easier to optimize, but lacks some of the nice perks of the RDD API (e.g. UDFs are harder to use, and there are no strong types in Scala/Java). The goal of Spark Datasets is to provide an API that allows users to easily express transformations on domain objects, while also providing the performance and robustness advantages of the Spark SQL execution engine.

Requirements

- Fast - In most cases, the performance of Datasets should be equal to or better than working with RDDs. Encoders should be as fast as or faster than Kryo and Java serialization, and unnecessary conversion should be avoided.
- Typesafe - Similar to RDDs, objects and functions that operate on those objects should provide compile-time safety where possible. When converting from data where the schema is not known at compile time (for example, data read from an external source such as JSON), the conversion function should fail fast if there is a schema mismatch.
- Support for a variety of object models - Default encoders should be provided for a variety of object models: primitive types, case classes, tuples, POJOs, JavaBeans, etc. Ideally, objects that follow standard conventions, such as Avro SpecificRecords, should also work out of the box.
- Java compatible - Datasets should provide a single API that works in both Scala and Java. Where possible, shared types like Array will be used in the API. Where not possible, overloaded functions should be provided for both languages. Scala concepts, such as ClassTags, should not be required in the user-facing API.
- Interoperates with DataFrames - Users should be able to seamlessly transition between Datasets and DataFrames, without specifying conversion boilerplate. When names used in the input schema line up with fields in the given class, no extra mapping should be necessary.
Libraries like MLlib should not need to provide different interfaces for accepting DataFrames and Datasets as input.

For a detailed outline of the complete proposed API: marmbrus/dataset-api
For an initial discussion of the design considerations in this API: design doc

The initial version of the Dataset API has been merged in Spark 1.6. However, it will take a few more releases to flesh everything out.

Issue Links
- is related to SPARK-1021 sortByKey() launches a cluster job when it shouldn't - Resolved
SPARK-2991 RDD transforms for scan and scanLeft - Resolved
SPARK-2315 drop, dropRight and dropWhile which take RDD input and return RDD - Closed
SPARK-2992 The transforms formerly known as non-lazy - Resolved
SPARK-8360 Structured Streaming (aka Streaming DataFrames) - Resolved
- relates to SPARK-12171 Support DataSet API in SparkR - Closed

Activity

This needs to be designed first. I'm not sure if static code analysis is a great idea, since such analyses often fail. I'm open to ideas though.

Another idea is to do something similar to the F# TypeProvider approach. I haven't looked into this extensively just yet, but as far as I understand it uses compile-time macros to generate classes based on data sources. In that sense, it is slightly similar to protobuf, where you generate a Java class based on a schema definition. This makes the DataFrame type-safe at the very start of the pipeline. With a bit of an IDE plugin, you would even be able to have autocomplete and type checking as you write code, which would be very nice. I'm not sure whether it would be scalable to propagate this type information downstream (into aggregated or transformed DataFrames), though. As I understand it, macros and type providers in Scala provide similar capabilities.

To ask the obvious question: what are the reasons that the RDD API couldn't be adapted to these purposes? If I understand correctly, a summary of the differences is that Datasets:
1.
Support encoders for conversion to schema'd / efficiently serializable data
2. Have a GroupedDataset concept
3. Execute on Catalyst instead of directly on top of the DAGScheduler

How difficult would it be to add encoders on top of RDDs, as well as a GroupedRDD? Is there anything in the RDD API contract that says RDDs can't be executed on top of Catalyst? Surely this creates some dependency hell, given that SQL depends on core, but surely that's better than exposing an entirely new API that looks almost like the original one.

I had a similar question about how much more this is than the current RDD API. For example, is the idea that, with the help of caller-provided annotations and/or some code analysis, perhaps you could deduce more about operations and optimize them more? A lot of the API already covers the basics, like assuming reduce functions are associative, etc. I get transformations on domain objects in the style of Spark SQL, but I can already "groupBy(customer.name)" on a normal RDD. I can also go fairly easily from DataFrames to RDDs and back. So I assume it's mainly about static analysis of user functions? Or about getting to/from a Row faster?

Sandy Ryza I thought a lot about doing this on top of the existing RDD API for a while, and that was my preference. However, we would need to break the RDD API, which breaks all existing applications. The big ones are:
1. encoders (which breaks almost every function that has a type parameter that's not T)
2. "partitions" (partitioning is a physical concept, and shouldn't be required as part of the API semantics)
3. groupBy

Other compatibility-breaking things include getting rid of class tags from the public API (a common complaint from Java users) and not using a separate class for Java users (JavaRDD).

If I understand correctly, it seems like there are ways to work around each of these issues that, necessarily, make the API dirtier, but avoid the need for a whole new public API.
- groupBy: deprecate the old groupBy and add a groupWith or groupby method that returns a GroupedRDD.
- partitions: have -1 be a special value that means "determined by the planner".
- encoders: what are the main obstacles to addressing this with an EncodedRDD that extends RDD?

Regarding the issues Michael brought up: I'd love to get rid of class tags from the public API as well as take out JavaRDD, but these seem more like "nice to have" than core to the proposal. Am I misunderstanding? All of these of course add ugliness, but I think it's really easy to underestimate the cost of introducing a new API. Applications everywhere become legacy and need to be rewritten to take advantage of new features. Code examples and training materials everywhere become invalidated. Can we point to systems that have successfully made a transition like this at this point in their maturity?

Sandy Ryza Your concern is absolutely valid, but I don't think your EncodedRDD proposal works. For one, the map function (and every other function that returns a type different from the RDD's own T) will break. For two, the whole concept of PairRDDFunctions should go away with this new API. As I said, it's actually my preference to just use the RDD API. But if you take a look at what's needed here, it'd break too many functions. So we have the following choices:
1. Don't create a new API, and break the RDD API. People then can't update to newer versions of Spark unless they rewrite their apps. We did this with the SchemaRDD -> DataFrame change, which went well - but SchemaRDD wasn't really an advertised API back then.
2. Create a new API, and keep the RDD API intact. People can update to new versions of Spark, but they can't take full advantage of all the Tungsten/DataFrame work immediately unless they rewrite their apps. Maybe we can implement the RDD API later in some cases using the new API, so legacy apps can still take advantage whenever possible (e.g. inferring the encoder based on class tags when possible).
Also, the RDD API as I see it today is actually a pretty good way for developers to provide data (i.e. it is used for data sources). If we break it, we'd still need to come up with a new data-input API.

BTW, another possible approach that we haven't discussed is that we could start with an experimental new API, and in Spark 2.0 rename it to RDD. I'm less in favor of this because it still means applications can't update to Spark 2.0 without rewriting.

I think improving Java compatibility and getting rid of the ClassTags is more than a nice-to-have. Having a separate class hierarchy for Java/Scala makes it very hard for people to build higher-level libraries that work with both Scala and Java. As a result, I think Java adoption suffers. ClassTags are burdensome for both Scala and Java users. In order to make encoders work the way we want, nearly every function that takes a ClassTag today will need to be changed to take an encoder. As Reynold Xin points out, I think that kind of compatibility breaking is actually more damaging for a project of Spark's maturity than providing a higher-level parallel API to RDDs. That said, I think source compatibility for common code between RDDs -> Datasets would be great, to make sure users can make the transition with as little pain as possible.

Thanks for the explanation Reynold Xin and Michael Armbrust. I understand the problem and don't have any great ideas for an alternative workable solution. Maybe you have all thought through this as well, but I had some more thoughts on the proposed API. Fundamentally, it seems weird to me that the user is responsible for having a matching Encoder around every time they want to map to a class of a particular type. In 99% of cases, the Encoder used to encode any given type will be the same, and it seems more intuitive to me to specify this up front.
To be more concrete, suppose I want to use case classes in my app and have a function that can auto-generate an Encoder from a class object (though this might be a little time-consuming because it needs to use reflection). With the current proposal, any time I want to map my Dataset to a Dataset of some case class, I need to either have a line of code that generates an Encoder for that case class, or have an Encoder already lying around. If I perform this operation within a method, I need to pass the Encoder down to the method and include it in the signature. Ideally I would be able to register an EncoderSystem up front that caches Encoders and generates new Encoders whenever it sees a new class used. This still of course requires the user to pass in type information when they call map, but it's easier for them to get this information than an actual encoder. If there's not some principled way to get this working implicitly with ClassTags, the user could just pass in classOf[MyCaseClass] as the second argument to map.

Sandy Ryza did you look at the test cases in Scala and Java linked from the attached design doc? In Scala, users should never have to think about Encoders as long as their data can be represented as primitives, case classes, tuples, or collections. Implicits (provided by sqlContext.implicits._) automatically pass the required information to the function. In Java, the compiler is not helping us out as much, so the user must do as you suggest. The prototype shows ProductEncoder.tuple(Long.class, Long.class), but we will have a similar interface that works with class objects for POJOs / JavaBeans. The problem with doing this using a registry (like Kryo in RDDs today) is that you don't find out the object type until you have an example object from realizing the computation. That is often too late to do the kinds of optimizations that we are trying to enable. Instead we'd like to statically realize the schema at Dataset construction time.
Encoders are just an encapsulation of the required information and provide an interface if we ever want to allow someone to specify a custom encoder. Regarding the performance concerns with reflection, the implementation that is already present in Spark master (SPARK-10993 and SPARK-11090) is based on Catalyst expressions. Reflection is done once on the driver, and the existing code generation caching framework is taking care of caching generated encoder bytecode on the executors.

"The problem with doing this using a registry (like Kryo in RDDs today) is that then you aren't finding out the object type until you have an example object from realizing the computation." My suggestion was that the user would still need to pass the class object, so this shouldn't be a problem, unless I'm misunderstanding.

Thanks for the pointer to the test suite. So am I to understand correctly that with Scala implicits magic I can do the following without any additional boilerplate?

import <some basic sql stuff>
case class MyClass1(<some fields>)
case class MyClass2(<some fields>)
val ds : Dataset[MyClass1] = ...
val ds2: Dataset[MyClass2] = ds.map(funcThatConvertsFromMyClass1ToMyClass2)

and in Java, imagining those case classes above were POJOs, we'd be able to support the following?

Dataset<MyClass2> ds2 = ds1.map(funcThatConvertsFromMyClass1ToMyClass2, MyClass2.class);

If that's the case, then that resolves my concerns above. Lastly, though, IIUC, it seems like for all the common cases, we could register an object with the SparkContext that converts from ClassTag to Encoder, and the RDD API would work. Where does that break down?

Yeah, that Scala code should work. Regarding the Java version, the only difference is the API I have in mind would be Encoder.for(MyClass2.class). Passing in an encoder instead of a raw Class[_] gives us some extra indirection in case we want to support custom encoders some day.
I'll add that we can also play reflection tricks in cases where things are not erased for Java, and this is the part of the proposal that is the least thought out at the moment. Any help making this part as powerful/robust as possible would be greatly appreciated. I think it is possible that in the long term we will do as you propose and remake the RDD API as a compatibility layer with the option to infer the encoder based on the class tag. The problem with this being the primary implementation is erasure.

scala> import scala.reflect._
scala> classTag[(Int, Int)].erasure.getTypeParameters
res0: Array[java.lang.reflect.TypeVariable[Class[_$1]]] forSome { type _$1 } = Array(T1, T2)

We've lost the type of _1 and _2, and so we are going to have to fall back on runtime reflection again, per tuple. Whereas the encoders that are checked into master could extract a primitive int without any additional boxing and encode it directly into Tungsten buffers. So ClassTags would work for case classes and Avro specific records, but wouldn't work for tuples (or anywhere else types get erased).

Blrgh. I wonder if the former is enough? Tuples are pretty useful though.

Yeah, I think tuples are a pretty important use case. Perhaps more importantly though, I think having a concept of encoders instead of relying on JVM types future-proofs the API by giving us more control. If you look closely at the test case examples, there are some pretty crazy macro examples (i.e., R(a = 1, b = 2L)) where we actually create something like named tuples that codegen at compile time the logic required to directly encode the user's results into Tungsten format without needing to allocate an intermediate object. Beyond tuples, you'll also want encoders for other generic classes, such as Seq[T]. They're the cleanest mechanism to get the most type info.
Also, from a software engineering point of view it's nice to avoid a central object where you register stuff, to allow composition between libraries (basically, see the problems that the Kryo registry creates today).

Arriving a little late to this discussion. Quick question for Reynold/Michael: will Python (and R) get this API in time for 1.6, or is that planned for a later release? Once the Scala API is ready, I'm guessing that the Python version will mostly be a lightweight wrapper around that API.

Nicholas Chammas it's not clear that it makes sense to add a similar API for Python and R. The main point of the Dataset API, as I understand it, is to extend DataFrames to take advantage of Java / Scala's static typing systems. This means recovering compile-time type safety, integration with existing Java / Scala object frameworks, and Scala syntactic sugar like pattern matching. Python and R are dynamically typed, so they can't take advantage of these.

Sandy Ryza - Hmm, so are you saying that, generally speaking, Datasets will provide no performance advantages over DataFrames, and that they will just help in terms of catching type errors early?

"Python and R are dynamically typed so can't take advantage of these." I can't speak for R, but Python has supported type hints since 3.0. More recently, Python 3.5 introduced a typing module to standardize how type hints are specified, which facilitates the use of static type checkers like mypy. PySpark could definitely offer a statically type-checked API, but practically speaking it would have to be limited to Python 3+. I suppose people don't generally expect static type checking when they use Python, so perhaps it makes sense not to support Datasets in PySpark.

I think we could check types also in Python. As I understand it, Dataset should have a performance advantage over RDD. Am I right?

Agree. The major performance gain of Dataset should come from the Catalyst optimizer.
If you are referring to my comment, note that I am asking about Dataset vs. DataFrame, not Dataset vs. RDD.

Nicholas Chammas Dataset actually will be slightly slower than DataFrame due to the conversion necessary from/to user-defined types. We do codegen all the conversions, but they are still conversions.

Will you publish the exact performance penalty? Is it obvious? For example, larger than 1% in general workloads? I know it depends on the workloads.

We haven't measured it yet, and as you said it is highly workload dependent.

Thank you! I think users might need to understand the potential performance difference when deciding to use DataFrame or Dataset. I will suggest that my performance team measure it. Let me know if you have any opinion. Thanks again!

Reynold Xin What about the Python API? What's the target version? Quote: "As we look forward to Spark 2.0, we plan some exciting improvements to Datasets, specifically: Python Support."

After thinking about that more, I don't think it will happen any time soon. We simply don't see a strong benefit with Python in having a type-safe way to work with data. After all, Python itself has no compile-time type safety. And many of the runtime benefits of Dataset are already available in Python via Row as a dictionary interface.

OK. So what about this patch? Should we break backward compatibility in the Python DataFrame API?

Unfortunately I think that's still necessary. The problem is that "map" should return the same type (i.e. DataFrame), not a different type. We can certainly use a monkey patch to add a compatibility package though. That might be worth doing.

"Python itself has no compile time type safety." Practically speaking, this is no longer true. You can get a decent measure of "compile"-time type safety using recent additions to Python (both the language itself and the ecosystem).
Specifically, optional static type checking has been a focus in Python since 3.5+, and according to Python's BDFL both Google and Dropbox are updating large parts of their codebases to use Python's new typing features. Static type checkers for Python like mypy are already in use and are backed by several core Python developers, including Guido van Rossum (Python's creator/BDFL). So I don't think Datasets are a critical feature for PySpark just yet, and it will take some time for the general Python community to learn and take advantage of Python's new optional static typing features and tools, but I would keep this on the radar.

This sounds interesting. In order to get this working, we need more information on the (black-box) operators used. So some analysis capability, or some predefined building blocks (SQL-lite if you will), are probably needed. Apache Flink uses static code analysis and annotations to achieve a similar goal. Any other ideas?
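To make the Python side of this concrete, here is a minimal sketch of the kind of optional static typing discussed above. It is not from the thread, and the function and variable names are invented for illustration; the annotations are enforced by an external checker such as mypy, not by the Python runtime itself:

```python
from typing import List


def total_age(ages: List[int]) -> int:
    """Sum a list of ages; mypy checks the annotation, CPython ignores it."""
    return sum(ages)


print(total_age([31, 25, 40]))  # 96

# A checker such as mypy would flag the call below before the program runs,
# whereas plain CPython would only fail (or silently misbehave) at runtime:
# total_age(["thirty-one", "25"])   # incompatible list item types
```

This is exactly the trade-off raised in the discussion: the type information exists and is checkable, but only via an opt-in tool, not as a language-level guarantee.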
https://issues.apache.org/jira/browse/SPARK-9999
TRJOL

PROBLEM LINK:
Setter: Nitish Kumar Dubey
Tester: Aditya Kumar Shaw
Editorialist: Bratati Chakraborty

DIFFICULTY:
Easy

PREREQUISITES:
Basic Maths

PROBLEM:
We need to increase the fragrance (F) of the drug to its maximum value such that H is always less than Y. Two chemicals, 1 and 2, can be added to the drug to increase its fragrance. On adding chemical 1, H gets multiplied by A and F gets increased by 1; on adding chemical 2, H gets incremented by B and F gets increased by 1. The initial hardness is X and the initial fragrance is 0. We have to find the maximum possible value of the fragrance such that H never reaches Y.

QUICK EXPLANATION:
At each step we choose the operation (adding either chemical 1 or 2) that results in the minimum new value of X. Since H needs to stay less than Y, this ensures the hardness increases as slowly as possible, so we get the maximum number of operations and hence the maximum F.

EXPLANATION:
If we add chemical 1, H gets multiplied by A, and if we add chemical 2, H gets incremented by B. In both cases F gets incremented by 1. So every time we add either of the two chemicals, F gets incremented; hence the final value of F equals the number of operations performed.

Every operation results in either X = X * A or X = X + B. Since H has to stay less than Y, to maximize the number of operations (and hence F) we want H to grow as slowly as possible, so at each step we choose the operation whose result is the minimum of the two (X * A and X + B). If X * A < X + B, we choose chemical 1 at this step and re-check the comparison at each following step. Alternatively, if X * A > X + B, we choose chemical 2, and it stays the better choice for the rest of the procedure. We can execute X = X * A and F++ in a loop with the condition that X * A is the minimum and the new X is less than Y:

while(a*x<y && a*x<=x+b)
{
    x*=a;
    f++;
}

Note that if X + B is the minimum at some step, it will remain the minimum for all following steps.
In this case, multiplication won't be selected at all, so the number of remaining operations (= maximum fragrance) will be (Y-X-1)/B (because H can't be greater than or equal to Y, and that is our condition for operating on H and F). The final answer will be the number of X = X * A operations performed (which is just the number of times the while loop runs) plus the number of X = X + B operations performed.

TIME COMPLEXITY:
O(log n)

SOLUTION:

Setter's Solution (C++14):

#include <bits/stdc++.h>

/*ABBREVIATION*/
#define ll long long int
#define mod 1000000007
#define vi vector<ll>
#define vic vector<char>
#define mp make_pair
#define pb push_back
#define ff first
#define ss second
#define sz(x) (x.size())
#define CASE(t) cout<<"Case #"<<(t)<<": "
#define mii map<ll,ll>
#define mci map<char,ll>
#define inf 1e18

/*loops*/
#define f(i,a,b) for(ll i=a;i<b;i++)
#define rf(i,a,b) for(ll i=a;i>=b;i--)
#define rep(i,n) f(i,0,n)
#define rrep(i,n) rf(i,n-1,0)
#define w(t) ll t; cin>>t; while(t--)

/*max-min*/
#define max3(a, b, c) max(max(a, b), c)
#define min3(a, b, c) min(min(a, b), c)

// print
#define no cout<<"NO"<<endl
#define yes cout<<"YES"<<endl
#define print(a) cout<<a<<endl
#define line cout<<endl

// sorting
#define all(V) (V).begin(),(V).end()
#define srt(V) sort(all(V))
#define sortGreat(V) sort(all(V),greater<ll>())

// input format
#define in ll n;cin>>n;
#define input rep(i,n){cin>>a[i];}

using namespace std;

/*
void nitishcs414()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
#ifndef ONLINE_JUDGE
    freopen("input3.txt", "r", stdin);
    freopen("output3.txt", "w", stdout);
#endif
}
*/

int main()
{
    //ios_base::sync_with_stdio(false);
    //cin.tie(NULL);
    //nitishcs414();
    w(t)
    {
        ll x,y,a,b;
        cin>>x>>y>>a>>b;
        ll ans=0;
        while(a*x<y && a*x<=x+b)
        {
            x*=a;
            ans++;
        }
        cout<<ans+(y-x-1)/b<<endl;
    }
    return 0;
}

Tester's Solution (Java):

import java.util.*;

class Trjol
{
    public static void main(String args[])
    {
        Scanner sc = new Scanner(System.in);
        int T = sc.nextInt();
        while(T-->0)
        {
            int x,y,a,b;
            x = sc.nextInt();
            y = sc.nextInt();
            a = sc.nextInt();
            b = sc.nextInt();
            int h = x;
            int f = 0;
            while(h*a < y && h*(a-1) < b)
            {
                f += 1;
                h *= a;
            }
            f += (y-1-h)/b;
            //int alt = (y-1-x)/b;
            //System.out.println(f + " " + alt);
            //f = (alt>f) ? alt : f;
            System.out.println(f);
        }
    }
}

Editorialist's Solution (C++):

#include <bits/stdc++.h>
#define ll long long
using namespace std;

int main(){
    int t;
    cin>>t;
    while(t--){
        ll x,y,a,b;
        cin>>x>>y>>a>>b;
        ll f=0;
        while(x*a<y && x*a<=x+b){
            x*=a;
            f++;
        }
        cout<<f+(y-x-1)/b<<endl;
    }
}
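The same greedy logic ports directly to other languages. Here is a hypothetical Python version (not part of the original editorial), with the input loop replaced by a plain function so the core logic is easy to test:

```python
def max_fragrance(x, y, a, b):
    """Greedily multiply by a while that is both safe (result < y) and
    no worse than adding b; then finish with additions of b."""
    f = 0
    while a * x < y and a * x <= x + b:
        x *= a
        f += 1
    # All remaining operations add b; hardness may reach at most y - 1.
    return f + (y - x - 1) // b


print(max_fragrance(1, 10, 2, 1))  # 8
```

For example, with X=1, Y=10, A=2, B=1 the loop multiplies once (1 -> 2), then seven additions take the hardness to 9 < 10, for 8 operations in total.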
https://discuss.codechef.com/t/trjol-editorial/82615
I am using Windows 10, PyScripter 2.6, Python 3.4.4. I'm new to programming. I am learning Python and creating programs with the turtle module. The turtle module creates a window and has turtles draw what you tell them to draw on the canvas of the window. When there is an error in my turtle program, the turtle window will go into a "not responding" state. It crashes. I have to close it, it sends an error report to Microsoft, etc. My question is: is there some code or some way to prevent the turtle window from crashing and going into a "not responding" state? I've tried debug and syntax check, but they do not prevent the problem. Part of me tells me this is just the way it is. If you write bad code, your programs will crash, but it just seems like, in the development environment, there would be a way to "deal" with these things.

Thanks
Tim

Code and Error Messages

CODE

def main():
    pass

if __name__ == '__main__':
    main()

import turtle

wn = turtle.Screen()
wn.bgcolor("lightgreen")
wn.title("Tess & Alex")

tess = turtle.Turtle()
tess.color("hotpink")
tess.pensize(5)

alex = turtle.Turtle

tess.forward(80)
tess.left(120)
tess.forward(80)
tess.left(120)
tess.forward(80)
tess.left(120)
tess.right(180)
tess.forward(80)

for x in [0,1,2,3]:
    alex.forward(50)
    alex.left(90)

wn.mainloop()

ERROR Message

File Name: C:\py\3\program1.py
Line: 41
Traceback: TypeError: forward() missing 1 required positional argument: 'distance'

The bug in your code is this line:

alex = turtle.Turtle

which should be:

alex = turtle.Turtle()

But let's address your larger question. I assume that simple bugs in your program are causing long, drawn-out crashes, making it hard to do quick turn-around debug and test cycles. (If not, just live with the current situation.)
Although I can't reproduce the behaviour on my system, there's something we can try (for development only, as folks will frown upon it if it survives into your finished code, since the exception is too broad):

import sys
from turtle import Turtle, Screen

try:
    screen = Screen()
    screen.bgcolor("lightgreen")
    screen.title("Tess & Alex")

    tess = Turtle()
    tess.color("hotpink")
    tess.pensize(5)

    for _ in range(3):
        tess.forward(80)
        tess.left(120)

    tess.right(180)
    tess.forward(80)

    alex = Turtle

    for _ in range(4):
        alex.forward(50)
        alex.left(90)

except Exception as e:
    exc_type, exc_obj, exc_tb = sys.exc_info()
    print(exc_type, "Line:", exc_tb.tb_lineno, "\n", e)
    exit()

screen.mainloop()

Note that I've intentionally kept your missing ()'s error. When it hits the error this time, within the try and except, it should hopefully return immediately with:

<class 'TypeError'> Line: 23
forward() missing 1 required positional argument: 'distance'

rather than going into a "not responding" state on you. Give it a try, let us know if it helps.
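The root cause is worth isolating: turtle.Turtle without parentheses is the class object itself, so alex.forward(50) passes 50 where self is expected and distance goes missing. A tiny stand-in class (used here only for illustration, so no graphics window is needed) reproduces the exact TypeError from the question:

```python
class FakeTurtle:
    """Minimal stand-in for turtle.Turtle; no window required."""

    def __init__(self):
        self.position = 0

    def forward(self, distance):
        self.position += distance


alex = FakeTurtle        # the class itself, not an instance
try:
    alex.forward(50)     # 50 is bound to `self`; `distance` is missing
except TypeError as e:
    print(e)             # forward() missing 1 required positional argument: 'distance'

alex = FakeTurtle()      # correct: call the class to create an instance
alex.forward(50)
print(alex.position)     # 50
```

Once you recognise this message, the fix is always the same: add the parentheses so you get an instance rather than the class.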
http://m.dlxedu.com/m/askdetail/3/b9b49205252cf2ede2bf5ae2f06ee6d8.html
Summary

Adds new fields to a table, feature class, or raster.

Usage

- For shapefiles and dBase tables, if the field type defines a character, blanks are inserted for each record. If the field type defines a numeric item, zeros are inserted for each record.
- The Add Fields tool has the following default field properties:
  - Added fields' Allow NULL property will be true.
  - Added fields' Editable property will be true.
  - Added fields' Required property will be false.
  - Precision and scale are set by the field type and data source defaults.
- The field length is only applicable to fields of type text.
- A shapefile does not support aliases for fields, so you cannot add a field alias to a shapefile.

Syntax

AddFields(in_table, field_description)

Derived Output

Code sample

The following Python window script demonstrates how to use the AddFields tool in immediate mode.

import arcpy
arcpy.env.workspace = "C:/data/district.gdb"
arcpy.management.AddFields(
    'school',
    [['school_name', 'TEXT', 'Name', 255, 'Hello world', ''],
     ['street_number', 'LONG', 'Street Number', None, 35, 'StreetNumDomain'],
     ['year_start', 'DATE', 'Year Start', None, '2017-08-09 16:05:07', '']])

Environments

Licensing information

- Basic: Yes
- Standard: Yes
- Advanced: Yes
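Because each entry in field_description is a positional list, it can be handy to build entries with a small helper so the slot order never gets mixed up. The helper below is not part of arcpy; it is a plain-Python convenience sketch, and the slot order ([name, type, alias, length, default, domain]) follows the code sample above:

```python
def make_field(name, field_type, alias=None, length=None,
               default=None, domain=''):
    """Build one positional field-description entry for AddFields."""
    return [name, field_type,
            alias if alias is not None else name,
            length, default, domain]


fields = [
    make_field('school_name', 'TEXT', alias='Name', length=255,
               default='Hello world'),
    make_field('street_number', 'LONG', alias='Street Number', default=35,
               domain='StreetNumDomain'),
]
print(fields[0])  # ['school_name', 'TEXT', 'Name', 255, 'Hello world', '']
```

The resulting list can then be passed as the second argument to arcpy.management.AddFields, exactly as in the sample script.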
https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/add-fields.htm
SyncCtl(), SyncCtl_r()

Perform an operation on a synchronization object

Don't use the SyncCtl() or SyncCtl_r() kernel call directly; instead, call one of the following:

Synopsis:

#include <sys/neutrino.h>

int SyncCtl( int cmd,
             sync_t * sync,
             void * data );

int SyncCtl_r( int cmd,
               sync_t * sync,
               void * data );

Since: BlackBerry 10.0.0

- _NTO_SCTL_MUTEX_WAKEUP — wake up threads that are blocked on a mutex. The data argument points to a structure that specifies the process and thread IDs.

You can use these calls to:
- attach an event to a mutex so you'll be notified when the mutex changes to the DEAD state
- wake up threads that are blocked on a mutex

These functions are similar, except for the way they indicate errors. See the Returns section for details.

In order to change the priority ceiling to a value above the maximum permitted for unprivileged processes, your process must have the PROCMGR_AID_PRIORITY ability enabled. For more information, see procmgr_ability().

Returns:

The only difference between these functions is the way they indicate errors.

Errors:

- EAGAIN - All kernel synchronization event objects are in use.
- EFAULT - A fault occurred when the kernel tried to access sync or data.
- EINVAL - The synchronization object pointed to by sync doesn't exist, or the ceiling priority value pointed to by data is out of range.
- ENOSYS - The SyncCtl() and SyncCtl_r() functions aren't currently supported.
- EPERM - The calling process doesn't have the required permission; see procmgr_ability().

Classification:

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/syncctl.html
Warning: WiX 3.5 is still under development, so you may encounter things that do not work with your specific build of WiX. This article is based on build 1602.

When you are in a tightly controlled corporate environment it is not always possible to deploy your web applications to Windows Server 2008. I recently found myself in the situation where I had to deploy a .NET 4 MVC 2 application to Windows Server 2003. This article deals with how to handle this using the WiX toolset, because there are some extra things you have to deal with.

Before we begin, though, I am using two WiX extensions, so these have to be set up first. Start by adding references to WixIIsExtension.dll and WixUtilExtension.dll to the WiX project; these can be found in the bin directory where you installed WiX. You must also set up the namespaces by starting your WiX with the following root node:

<Wix xmlns="" xmlns:iis="" xmlns:

You need to consider that your shiny new application runs on .NET Framework 4.0, while the server may already be running other ASP.NET applications based on earlier versions of the framework; to reduce risk you may want to keep them running on that earlier version rather than upgrade everything to 4.0. I am, however, assuming that the server already has version 4.0 of the runtime installed; the WiX code I provide just includes a check. If you want to install the framework at the same time then you need to consider a bootstrap setup program to do this for you. Here is how I check for .NET 4 as a pre-requisite:

<Property Id='FXINSTALLED' Value='0'>
  <RegistrySearch Id='FxInstalledRegistry'
                  Key='SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
                  Name='TargetVersion' Root='HKLM' Type='raw'/>
</Property>

<Condition Message='Requires the Microsoft .NET Framework 4'>
  Installed OR FXINSTALLED>="4.0.0"
</Condition>

The following describes the particular considerations that apply to deploying an MVC 2 application to IIS 6 when using .NET 4.
The first thing is to make sure that the application has all its dependencies available. Clearly you need to include any of your own dependent assemblies. However, MVC 2 applications also have a dependency on System.Web.Mvc, which uses the 2.0 runtime; this is not included in the 4.0 framework. So unless you already have this assembly in the GAC on the server for some other MVC 2 application, you will need to include this assembly in the MSI, placing it in the application's bin directory. Alternatively, if you want to install the runtime and share it among applications, you can get it from here:.

Another consequence of running on .NET 4 is that the new application will need to run in a different application pool to any .NET 2/3/3.5 web applications running on the same web server. For that you may need a new application pool. This application pool needs to run under a suitable identity; I normally use an unprivileged domain user account for this purpose. The WiX to do this is:

<util:User
<util:GroupRef
</util:User>
<iis:WebAppPool
<util:Group

Note also that I used properties to pass in the name of the service account and its password.

However, just creating a new application pool is not enough; the web application defined for the web site also needs to be set to run under ASP.NET 4.0. This is the trickiest bit, because to do this you need to run aspnet_regiis.exe. The first step is to get the path to the correct version as follows:

<Property Id='FXDIR' Value='0'>
  <RegistrySearch Id='FxInstallPathRegistry'
                  Key='SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
                  Name='InstallPath' Root='HKLM' Type='raw'/>
</Property>

You may need to be careful here: if the destination platform is 64-bit then you need to add Win64='yes' to the <RegistrySearch> element or set the <Package> element's Platform attribute to 'x64', otherwise the wrong version of aspnet_regiis.exe will be executed later.
To invoke aspnet_regiis.exe we need to know the site ID, which is a simple integer identifying the particular web site. For example, to set the 4.0 framework for the second web site on a particular machine you need to invoke it as follows:

aspnet_regiis -s W3SVC/2/ROOT

The problem is that you don't necessarily know the ID of your newly created web site. I use the following VBScript in a custom action to find the site ID from the friendly name you see in IIS Manager. Note that using VBScript custom actions is not always recommended (see "VBScript (and Jscript) MSI CustomActions suck"), but if you are in a controlled environment and you are not writing an MSI that will be deployed on more than a handful of computers then I think it is acceptable to use short VBScript custom actions.

Dim websvc, sitetoset
Dim siteName, fxdir
Dim WshShell

Set WshShell = CreateObject("Wscript.Shell")

ArgString = Session.Property("CustomActionData")
Args = Split(ArgString, ",")
siteName = Args(0)
fxdir = Args(1)

set sitetoset = nothing
set websvc = GetObject("IIS://localhost/W3svc")
for each site in websvc
    if site.class = "IIsWebServer" then
        if site.ServerComment = siteName then
            set sitetoset = site
        end if
    end if
next

if (sitetoset is nothing) then
    WScript.Quit (GENERAL_FAILURE)
end if

cmd = fxdir + "aspnet_regiis -s W3SVC/" + sitetoset.Name + "/ROOT"
Set oExec = WshShell.Exec(cmd)

Do While oExec.Status = 0
    ' Busy wait because Sleep fails when running during install
Loop

This script takes a single parameter, passed from the MSI, which is a comma-separated list of two values: the friendly name of the site, and the directory where aspnet_regiis.exe is located. The reason I pass the path to the directory is that there are 32- and 64-bit versions of aspnet_regiis.exe and the script does not know which one to run.
I put the script in a file called RegAspNet40.vbs and include it in the MSI as follows:

<Binary Id="RegAspNet40Script" SourceFile="RegAspNet40.vbs"/>

Then I invoke the script as follows:

<CustomAction Id="SetRegAspNet40" Property="RegAspNet40" Value="[SITENAME],[FXDIR]"/>
<CustomAction Id="RegAspNet40" Execute="deferred" BinaryKey="RegAspNet40Script" VBScriptCall=""/>

<InstallExecuteSequence>
  <Custom Action="SetRegAspNet40" After="CostFinalize"/>
  <Custom Action="RegAspNet40" After="PublishProduct">&MvcAppFeature=3</Custom>
</InstallExecuteSequence>

The SITENAME property is the friendly name for the web site, and FXDIR is the property we set earlier to tell us the location of the .NET Framework. "MvcAppFeature" is the name I chose to give to the feature which installs the MVC 2 web application.

All of this is still not enough, because an MVC 2 application will not work on IIS 6 without some other measures being taken. Typically an MVC 2 application benefits from using the Integrated Pipeline mode available in IIS 7 on Windows Server 2008, but this is not available in IIS 6 and you have to use Classic Mode instead. To do this you need to register an extension and modify your global.asax too. The global.asax modifications are described in the "ASP.NET MVC with Different Versions of IIS" tutorial. In addition, if you want the home page to work without having to use a url like then you need to take the default.aspx page from an original MVC application and include the aspx extension too. This is also shown below.

The extensions are registered in the <WebApplicationExtension> element in the fragment below. Note the use of the FXDIR property so that the appropriate version of the runtime is located.

<iis:WebSite
<iis:WebDirProperties
<iis:WebAddress
<iis:WebApplication
<iis:WebApplicationExtension
<iis:WebApplicationExtension
</iis:WebApplication>
</iis:WebSite>
<iis:WebServiceExtension

Note that the Description attribute is critical; do not omit it.
Hopefully with this information you can now easily deploy MVC 2 applications on .NET 4 running on Windows Server 2003.

Written by Rob Jarratt

Thanks, this is great stuff.
https://blogs.msdn.microsoft.com/mcsuksoldev/2010/05/26/deploying-a-net-4-mvc-2-application-to-windows-server-2003-using-wix-3-5/
I've been enjoying reading A Programmer's Introduction to Mathematics by Jeremy Kun recently. After the introduction, the first main topic it covers is a neat trick for sharing secrets (encrypting messages) so that they can be decoded using polynomial functions. Being a firm believer in learning by doing, I immediately got stuck in and started exploring. It has been some time since I worked much with polynomials, so to get a feel for what I was doing, I wrote a Python program to help me visualise polynomial functions with given coefficients. That is what I want to share in this blog post.

A polynomial function is a function such as a quadratic, a cubic, a quartic, and so on, involving only non-negative integer powers of its input variable. The general form of a polynomial is

f(x) = a_n*x^n + a_(n-1)*x^(n-1) + ... + a_1*x + a_0

where the a's are real numbers (called the coefficients of the polynomial). For example:

y = 4x^3 - 3x^2 + 2

is a polynomial of degree 3, as 3 is the highest power of x in the formula. This is called a cubic polynomial. Notice that we don't need every power of x up to 3: we only need to know the highest power of x to find out the degree.

Python Code Listing for Plotting Polynomials

This program uses matplotlib for plotting and numpy for easy array manipulation, so you will have to install these packages if you haven't already. The code is commented to help you understand how it works.

import numpy as np
import matplotlib.pyplot as plt

plt.style.use("fivethirtyeight")


def polynomial_coefficients(xs, coeffs):
    """
    Returns a list of function outputs (`ys`) for a polynomial with the
    given coefficients and a list of input values (`xs`).

    The coefficients must go in order from a0 to an, and all must be
    included, even if the value is 0.
    """
    order = len(coeffs)
    print(f'# This is a polynomial of order {order - 1}.')

    ys = np.zeros(len(xs))  # Initialise an array of zeros of the required length.
    for i in range(order):
        ys += coeffs[i] * xs ** i
    return ys


xs = np.linspace(0, 9, 10)  # Change this range according to your needs. Start, stop, number of steps.
coeffs = [0, 0, 1]  # x^2

# xs = np.linspace(-5, 5, 100)  # Change this range according to your needs. Start, stop, number of steps.
# coeffs = [2, 0, -3, 4]  # 4*x^3 - 3*x^2 + 2

plt.gcf().canvas.set_window_title('Fun with Polynomials')  # Set window title
plt.plot(xs, polynomial_coefficients(xs, coeffs))
plt.axhline(y=0, color='r')  # Show x axis
plt.axvline(x=0, color='r')  # Show y axis
plt.title("y = 4*x^3 - 3*x^2 + 2")  # Set plot title
plt.show()

There are a couple of example polynomials provided in the code. Once you have checked out a simple quadratic (y = x^2) using the two lines below:

xs = np.linspace(0, 9, 10)
coeffs = [0, 0, 1]  # x^2

you can uncomment these two lines, then once you have the hang of how the program works, you can try setting up and plotting your own polynomials:

# xs = np.linspace(-5, 5, 100)  # Change this range according to your needs. Start, stop, number of steps.
# coeffs = [2, 0, -3, 4]  # 4*x^3 - 3*x^2 + 2

A Programmer's Introduction to Mathematics: Second Edition by Jeremy Kun

I thoroughly recommend this book for anyone keen to explore the relationship between Mathematics and Programming at a roughly undergraduate level. It is friendly and thorough, and there is a full repository of Python code examples available on GitHub. As an Amazon Associate I earn from qualifying purchases.

This post has explored how to plot polynomials with Python and matplotlib. I hope you found it interesting and helpful. Happy computing!
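As a side note, the term-by-term loop above is fine for plotting, but polynomial evaluation is often written with Horner's rule, which needs only one multiplication and one addition per coefficient. A small pure-Python sketch, independent of the matplotlib program above:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients [a0, a1, ..., an] at x
    using Horner's rule: ((an*x + a(n-1))*x + ...)*x + a0."""
    result = 0
    for c in reversed(coeffs):  # start from the highest-order coefficient
        result = result * x + c
    return result


# 4*x^3 - 3*x^2 + 2 at x = 2  ->  4*8 - 3*4 + 2 = 22
print(horner([2, 0, -3, 4], 2))  # 22
```

The coefficient order matches the `coeffs` lists used in the plotting program, so the two functions are drop-in replacements for each other on a single point.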
https://compucademy.net/plotting-polynomials-with-python/
Before starting, create a new Visual C++ application project of type Win32 Smart Device Project.

4. Specify a name and a location for the application and then click OK. Uncheck the "Create directory for solution" box.

5. In the Welcome to Win32 Smart Device Project Wizard, select Next.

6. In the Platforms window, select Toradex_CE600 for Windows CE 6 and Toradex_CE700 for Windows Embedded Compact 7. The screenshot references old SDK names; those have been replaced in the new SDKs. If you have projects that still use the old SDK, you can find information about how to migrate them here. Click Next.

7. In Project Settings, under Application type, select Console application. From Additional options, select Empty project. Click Finish.

8. In the Solution Explorer, right-click on Source Files. Select Add. From the list, select New Item...

9. In the Add New Item window, from Categories, select Code. From the Visual Studio installed templates, select C++ File (.cpp). Assign the name vcppdemo.c.

10. Edit the code as follows:

#include <windows.h>
#include <stdio.h>   /* needed for printf() and getchar() */

int wmain(void)
{
    printf("Welcome to Toradex");  /* print the string */
    getchar();                     /* wait for a character before exit */
    return 0;
}

11. Build the VC++ project.

12. To deploy the project, in the Build menu, click Deploy Solution.

13. After deploying, on the Colibri module, open My Device. Select Program Files. The vcppdemo folder contains a .exe file to run the application.

14. Double-click the .exe to run the application.

You can download the project source code from here.
https://developer.toradex.cn/knowledge-base/create-a-new-vcpp-project
AndroidX maps the original support library API packages into the androidx namespace. Only the package and Maven artifact names changed; class, method, and field names did not change. To learn how to use Android Studio to help you migrate existing code, see Migrate an existing project using Android Studio below.

Migrate an existing project using Android Studio

With Android Studio 3.2 and higher, you can quickly migrate an existing project to use AndroidX by selecting Refactor > Migrate to AndroidX from the menu bar.

If you have any Maven dependencies that have not been migrated to the AndroidX namespace, the Android Studio build system also migrates those dependencies for you when you set the following two flags to true in your gradle.properties file:

android.useAndroidX=true
android.enableJetifier=true

To migrate an existing project that does not use any third-party libraries with dependencies that need converting, you can set the android.useAndroidX flag to true and the android.enableJetifier flag to false.

Artifact mappings

The following table lists the current mappings from old artifacts to new ones. You can also download these mappings in CSV format.

Class mappings

The following table lists the current mappings from the old namespace to the new androidx packages. You can also download these mappings in CSV format.
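To make the artifact mapping concrete, a typical dependency change in a module's build.gradle looks like the following. This uses the well-known appcompat artifact as an illustration — it is not an excerpt from the mapping table, and your version numbers will differ:

```groovy
dependencies {
    // Before migration (support library namespace):
    // implementation 'com.android.support:appcompat-v7:28.0.0'

    // After migration (AndroidX namespace):
    implementation 'androidx.appcompat:appcompat:1.0.0'
}
```

With android.enableJetifier=true, binary dependencies that still reference com.android.support are rewritten to androidx at build time, so this manual change is only needed for dependencies you declare yourself.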
https://developer.android.com/jetpack/androidx/migrate?authuser=0
Continuous Wavelet Transform (CWT)¶

This section describes functions used to perform single continuous wavelet transforms.

Single level - cwt¶

pywt.cwt(data, scales, wavelet)¶

One dimensional Continuous Wavelet Transform.

Notes

Size of coefficients arrays depends on the length of the input array and the length of given scales.

Examples

>>> import pywt
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> x = np.arange(512)
>>> y = np.sin(2*np.pi*x/32)
>>> coef, freqs = pywt.cwt(y, np.arange(1, 129), 'gaus1')
>>> plt.matshow(coef)
>>> plt.show()
----------
>>> import pywt
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> t = np.linspace(-1, 1, 200, endpoint=False)
>>> sig = np.cos(2 * np.pi * 7 * t) + np.real(np.exp(-7*(t-0.4)**2)*np.exp(1j*2*np.pi*2*(t-0.4)))
>>> widths = np.arange(1, 31)
>>> cwtmatr, freqs = pywt.cwt(sig, widths, 'mexh')
>>> plt.imshow(cwtmatr, extent=[-1, 1, 1, 31], cmap='PRGn', aspect='auto',
...            vmax=abs(cwtmatr).max(), vmin=-abs(cwtmatr).max())
>>> plt.show()

Continuous Wavelet Families¶

A variety of continuous wavelets have been implemented. A list of the available wavelet names compatible with cwt can be obtained by:

wavlist = pywt.wavelist(kind='continuous')

Mexican Hat Wavelet¶

The Mexican hat wavelet "mexh" is given by:

where the constant out front is a normalization factor so that the wavelet has unit energy.

Complex Morlet Wavelets¶

The complex Morlet wavelet ("cmorB-C" with floating point values B, C) is given by:

where \(B\) is the bandwidth and \(C\) is the center frequency.

Gaussian Derivative Wavelets¶

The Gaussian wavelets ("gausP" where P is an integer between 1 and 8) correspond to the Pth order derivatives of the function:

where \(C\) is an order-dependent normalization constant.
Complex Gaussian Derivative Wavelets¶

The complex Gaussian wavelets ("cgauP" where P is an integer between 1 and 8) correspond to the Pth order derivatives of the function:

where \(C\) is an order-dependent normalization constant.

Shannon Wavelets¶

The Shannon wavelets ("shanB-C" with floating point values B and C) correspond to the following wavelets:

where \(B\) is the bandwidth and \(C\) is the center frequency.

Frequency B-Spline Wavelets¶

The frequency B-spline wavelets ("fbspM-B-C" with integer M and floating point B, C) correspond to the following wavelets:

where \(M\) is the spline order, \(B\) is the bandwidth and \(C\) is the center frequency.

Choosing the scales for cwt¶

For each of the wavelets described below, the implementation in PyWavelets evaluates the wavelet function for \(t\) over the range [wavelet.lower_bound, wavelet.upper_bound] (with default range \([-8, 8]\)). scale = 1 corresponds to the case where the extent of the wavelet is (wavelet.upper_bound - wavelet.lower_bound + 1) samples of the digital signal being analyzed. Larger scales correspond to stretching of the wavelet. For example, at scale=10 the wavelet is stretched by a factor of 10, making it sensitive to lower frequencies in the signal.

To relate a given scale to a specific signal frequency, the sampling period of the signal must be known. pywt.scale2frequency() can be used to convert a list of scales to their corresponding frequencies. The proper choice of scales depends on the chosen wavelet, so pywt.scale2frequency() should be used to get an idea of an appropriate range for the signal of interest.

For the cmor, fbsp and shan wavelets, the user can specify a normalized center frequency. A value of 1.0 corresponds to 1/dt where dt is the sampling period. In other words, when analyzing a signal sampled at 100 Hz, a center frequency of 1.0 corresponds to ~100 Hz at scale = 1.
This is above the Nyquist rate of 50 Hz, so for this particular wavelet, one would analyze a signal using scales >= 2.

>>> import numpy as np
>>> import pywt
>>> dt = 0.01  # 100 Hz sampling
>>> frequencies = pywt.scale2frequency('cmor1.5-1.0', [1, 2, 3, 4]) / dt
>>> frequencies
array([ 100.        ,   50.        ,   33.33333333,   25.        ])

The CWT in PyWavelets is applied to discrete data by convolution with samples of the integral of the wavelet. If scale is too low, this will result in a discrete filter that is inadequately sampled, leading to aliasing as shown in the example below. Here the wavelet is 'cmor1.5-1.0'. The left column of the figure shows the discrete filters used in the convolution at various scales. The right column shows the corresponding Fourier power spectra of each filter. For scales 1 and 2 it can be seen that aliasing due to violation of the Nyquist limit occurs.

import numpy as np
import pywt
import matplotlib.pyplot as plt

wav = pywt.ContinuousWavelet('cmor1.5-1.0')

# print the range over which the wavelet will be evaluated
print("Continuous wavelet will be evaluated over the range [{}, {}]".format(
    wav.lower_bound, wav.upper_bound))

width = wav.upper_bound - wav.lower_bound

scales = [1, 2, 3, 4, 10, 15]
max_len = int(np.max(scales)*width + 1)
t = np.arange(max_len)
fig, axes = plt.subplots(len(scales), 2, figsize=(12, 6))
for n, scale in enumerate(scales):

    # The following code is adapted from the internals of cwt
    int_psi, x = pywt.integrate_wavelet(wav, precision=10)
    step = x[1] - x[0]
    j = np.floor(
        np.arange(scale * width + 1) / (scale * step))
    if np.max(j) >= np.size(int_psi):
        j = np.delete(j, np.where((j >= np.size(int_psi)))[0])
    j = j.astype(int)  # note: np.int is deprecated in recent NumPy releases

    # normalize int_psi for easier plotting
    int_psi /= np.abs(int_psi).max()

    # discrete samples of the integrated wavelet
    filt = int_psi[j][::-1]

    # The CWT consists of convolution of filt with the signal at this scale
    # Here we plot this discrete convolution kernel at each scale.
    nt = len(filt)
    t = np.linspace(-nt//2, nt//2, nt)
    axes[n, 0].plot(t, filt.real, t, filt.imag)
    axes[n, 0].set_xlim([-max_len//2, max_len//2])
    axes[n, 0].set_ylim([-1, 1])
    axes[n, 0].text(50, 0.35, 'scale = {}'.format(scale))

    f = np.linspace(-np.pi, np.pi, max_len)
    filt_fft = np.fft.fftshift(np.fft.fft(filt, n=max_len))
    filt_fft /= np.abs(filt_fft).max()
    axes[n, 1].plot(f, np.abs(filt_fft)**2)
    axes[n, 1].set_xlim([-np.pi, np.pi])
    axes[n, 1].set_ylim([0, 1])
    axes[n, 1].set_xticks([-np.pi, 0, np.pi])
    axes[n, 1].set_xticklabels([r'$-\pi$', '0', r'$\pi$'])
    axes[n, 1].grid(True, axis='x')
    axes[n, 1].text(np.pi/2, 0.5, 'scale = {}'.format(scale))

axes[n, 0].set_xlabel('time (samples)')
axes[n, 1].set_xlabel('frequency (radians)')
axes[0, 0].legend(['real', 'imaginary'], loc='upper left')
axes[0, 1].legend(['Power'], loc='upper left')
axes[0, 0].set_title('filter')
axes[0, 1].set_title(r'|FFT(filter)|$^2$')
https://pywavelets.readthedocs.io/en/latest/ref/cwt.html
Make:it Robotics Starter Kit – Wireless Connectivity

In this blog post we are going to take the information that we learned in the earlier blog post entitled “Make:it Robotics Starter Kit – Software Part 2″ and capture real time sensor data and send this data wirelessly to our computer. In order to complete this section we need to purchase a few items:

RF Radio Transmitter and Receiver (5 volt, not 3.3 volt)
Male/Female jumper wires

In addition we will need our FTDI USB cable, or an FTDI USB dongle. I did a bit of searching on the web and found an RF Data Module Tx/Rx kit. This kit has a 315 MHz transmitter and receiver. I paid $9.00 plus shipping for this kit.

This male FTDI USB adapter would be more compact and have a nicer form factor for your computer. But since I already had the FTDI USB cable, I will use this for the tutorial. Later, if you want to make a permanent configuration, you can always order the male version of the FTDI adapter. You can purchase any frequency transmitter/receiver kit as long as the frequencies are the same between transmitter and receiver. This blog post will explain in detail how to connect the RF radios to the robot and your computer. We will make a few small modifications to our original lineFollow.ino program, but the changes to get the robot communicating using the RF radios are pretty easy.

Step 1: Transmitter/Receiver

First we must identify the components of the radios. Your kit might be different than mine. My kit did not come with any documentation, so I had to do a bit of searching on the web to find out pin configuration, transmitter/receiver identification, etc. The nice thing is there is lots of info out on the web about configuring RF radios. First let us determine which board is our transmitter and which board is our receiver. Look very closely at the pin labels on both boards that you have. This is an image of the boards that I received.
Your boards might look different. Size and number of pins will not determine which board is a transmitter or receiver. On the above image, look at the pin labels on the smaller board. The pins are labeled VCC, GND, DATA and ANT.

VCC is the voltage applied to the board.
GND is the ground (earth) connector.
DATA is the pin that transmits the data to the outside world.
ANT is the antenna.

Chances are that the board that has the antenna pin is your transmitter. Let's look at the other, larger board:

GND, OUT, OUT, VCC

This board is the receiver. Important: your boards may be configured differently. On your receiver you may only have one OUT, or data, pin. Check the instructions that came with your RF kit or perform the same identification process that we just did in this post.

Step 2: Wire the Driver Board - Transmitter

Looking at our driver board, we need to find the following pins. We will now wire our transmitter to the robot. Get three female to male jumpers: Red, Black and Green if you have them; if not, choose three different color jumpers and write the colors down.

Take the (Red wire) female socket end of the jumper, plug it into the pin labeled VCC on the transmitter, take the male end of the same jumper and plug it into the 5V socket on your driver board.

Take the (Black wire) female socket end of the jumper and plug it into the pin labeled GND on the transmitter, take the male end of the same jumper and plug it into either GND socket on the driver board.

Take the (Green wire) female socket end of the jumper and plug it into the pin labeled DATA on the transmitter, take the male end of the same jumper and plug it into pin 5 (lower right header bank) of the driver board.

Your transmitter has been wired to the robot.

Step 3: Wire the Receiver to the FTDI Board

For the time being I just laid the transmitter on the top of the driver board for testing. Once we want to run the robot, we will tuck the transmitter in away from the wheels.
Take the (Red wire) female socket end of the jumper and connect it to the VCC pin on the receiver, take the male end of the same jumper and plug it into the socket of the FTDI connector that has the Red wire.

Take the (Black wire) female socket end of the jumper and connect it to the GND pin on the receiver, take the male end of the same jumper and plug it into the socket of the FTDI connector that has the Black wire.

Take the (Yellow wire) female socket end of the jumper and connect it to the OUT pin on the receiver, take the male end of the same jumper and plug it into the socket of the FTDI connector that has the Yellow wire.

Your receiver has been wired to your USB FTDI.

Step 4: Modify the LineFollowing.ino Program

We now need to modify our lineFollow.ino program, as the RF radio kit has slightly different requirements than our hard coded serial port does. Here are the changes we need to make:

#include "SoftwareSerial.h"
#include "MakeItRobotics.h"

#define rxPin 4
#define txPin 5

MakeItRobotics line_following;
SoftwareSerial mySerial = SoftwareSerial(rxPin, txPin);

int counter;

In our definition section we need to add a counter variable that will be used to test the wireless serial port. This counter will have a data type of int.

void setup()
{
  Serial.begin(10420);       // tell the Arduino to communicate with Make: it PCB
  delay(500);                // delay 500 ms
  line_following.line_following_setup();  // initialize the status of line following robot
  line_following.all_stop(); // all motors stop
  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);
  // set the data rate for the SoftwareSerial port
  // mySerial.begin(9600);
  mySerial.begin(1200);
  counter = 0;
}

In our setup() function we need to make the following changes: we need to comment out the mySerial.begin(9600); command and add the following command:

mySerial.begin(1200);

1200 is the baud rate (speed of data transfer) at which the RF radios can communicate with each other. This value may be different for your radios. Check your documentation.
In my situation the radio came with no documentation, so I first tried 2400 baud, which did not work; I then lowered the baud rate to 1200 baud, which did work. We also initialize the counter variable to a value of 0.

In the loop() function we need to add the following:

// mySerial.println(sensor_in, HEX);
mySerial.println(counter, DEC);
counter++;

First we comment out the mySerial.println(sensor_in, HEX); line and add the following line:

mySerial.println(counter, DEC);

In order to test that we configured our RF radios properly, we just want to send some numbers to our workstation. Later, when we have made sure that our RF connections are working, we will change the mySerial.println() back to what it was before. These are all of the changes we need to make.

Just attach the USB cable to the Arduino USB connector on your robot. We will first compile and upload the program to our Arduino. Remember, we need to check to make sure that we are using the correct serial port number; check the earlier blog entry “Make:it Robotics Starter Kit – Software Part 2″ for details on how to accomplish this task. Remember, you do not need to turn on the battery box switches in order to have the robot communicate across the serial port.

Once your lineFollow.ino program has uploaded to the Arduino, plug your FTDI USB cable into your computer. Load the Arduino IDE, select the correct serial port and open the Serial Monitor program. Select the correct baud rate; in my case I selected 1200 baud. If you performed all of the tasks correctly then you should see the following in your serial monitor program. Your program should look a bit different, as this screen shot was taken from the ATMEL Studio IDE.

If your serial monitor is receiving what appears to be a lot of junk characters or a bunch of question marks, then either the baud rate of your serial monitor does not match the baud rate you set in your lineFollow.ino program, or you need to change the baud rate that the program is using, as the RF radios may not support this baud rate.

It is also easy to corrupt the serial port, so in many situations it is best just to unplug all of the USB ports and try again, maybe even rebooting your computer if you cannot get it to work.

In the next blog tutorial we will set up the lineFollow.ino program to send sensor data to our computer while the robot is running around the black circle, sending real time data to our computer wirelessly.
http://www.instructables.com/id/Makeit-Robotics-Starter-Kit-Wireless-Connectivity/
In the Forward Error Handling – Part 1: Outline of this weblog series I explained the concepts behind forward error handling. Now I will discuss how to set up the Error and Conflict Handler Framework and look at how the framework is used with SAP Business Suite. The reason is very simple: if you want to understand a framework in detail, you should analyze how it is implemented within the SAP standard. If you want to know how the end user uses the PPO tool, you should have a look at a video you'll find here: for Ehp 5, or in PI/XI: Forward Error Handling (FEH) for asynchronous proxy calls with the use of Error and Conflict Handler (ECH) for Ehp 4.

Prerequisites

Asynchronous communication uses an XI infrastructure within AS ABAP which has to be set up first; this is described in the SAP Library. After setting up the local integration engine you have to go to transaction SPRO and activate the error and conflict handler.

Then run report SXMS_FEH_TEST to test whether FEH runs correctly. If everything is OK, a message is displayed in the message monitor (transaction SXMB_MONI).

To use this Forward Error Handling you need authorizations for transactions like SXMB_MONI_SEL and for other components like Post Processing Office (transactions /SAPPO/PPO2 and /SAPPO/PPO3).

A look at Enterprise Services of SAP Business Suite

Asynchronous Enterprise Services of SAP Business Suite use the Forward Error Handling Framework. This works as follows:

- The error process can be set up & customized. SAP Mentor Michal Krawczyk wrote a great blog about Forward Error Handling and showed how to link it to the Post Processing Office: PI/XI: Forward Error Handling (FEH) for asynchronous proxy calls with the use of Error and Conflict Handler (ECH).
- Then we need to insert some code snippets into the proxy implementation. And this is how it goes. Let's have a look at Service Interface CreditLimitChangeRequestERPChangeRequest_In:

You can see that there is only one operation and exception.
When the service is called it delegates to a method EXECUTE_ASYNCHRONOUS of a generated proxy class. So let's have a look at the generated server proxy, especially at the method EXECUTE_ASYNCHRONOUS. The implementation is very small: SET UPDATE TASK LOCAL is common for Enterprise Services that change data. Then the call is delegated to an implementation class:

METHOD ii_ukm_clcr_chgrq~execute_asynchronous.
  DATA lo_serv_impl TYPE REF TO cl_ukm_clcr_impl_chgrq.
  SET UPDATE TASK LOCAL.
  TRY.
      CALL METHOD cl_fscm_extended_xml_handling=>deactivate_exml_handling.
    CATCH cx_ai_system_fault .
  ENDTRY.
* get proxy implementation object reference
  lo_serv_impl ?= cl_ukm_clcr_impl_chgrq=>get_instance( ).
* execute service processing
  CALL METHOD lo_serv_impl->execute
    EXPORTING
      input = input.
ENDMETHOD.

Now let's look at the implementation class that is called from the generated service class. The main method is EXECUTE. Within that method, the Forward Error Handling Framework is initialized first:

DATA: lo_feh_registration TYPE REF TO cl_feh_registration.
lo_feh_registration = cl_feh_registration=>s_initialize( ).

Then the following steps are performed:

- The input parameters from the web service payload are mapped into an internal format.
- Then the business logic is called with those input parameters.

In both cases errors can occur and are given to the method lo_feh_registration->collect. Then an exception is raised. Within the ECH framework the error will be persisted as orders within the PPO framework. There, actions like discard, finish, retry and so on can be executed manually or automatically. These actions will be delegated to the actions defined in interface IF_ECH_ACTION of the action class CL_UKM_CLCR_IMPL_CHGRQ which is shown above. We can customize this behavior in many ways, which I describe now. An error of a specific Enterprise Service will be assigned to a component (often corresponding to an application) and a process within that application.
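Before going into the customizing tables, here is a schematic recap of the collect-and-raise pattern just described. This is illustrative only — the actual parameters of collect and the exception types are service-specific and are simplified here, so treat it as pseudocode in ABAP syntax:

```abap
DATA lo_feh_registration TYPE REF TO cl_feh_registration.
lo_feh_registration = cl_feh_registration=>s_initialize( ).

TRY.
    " 1. map the web service payload into the internal format
    " 2. call the business logic with the mapped input
  CATCH cx_root.  " simplified: catch the service-specific error class instead
    " hand the error data over to ECH; it will be persisted as a PPO order
    lo_feh_registration->collect( ... ).
    " afterwards an exception is raised so the message goes into error state
ENDTRY.
```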
We can define how to persist the error situation and the action class. The latter is important because its methods will be called by PPO. And this information is stored in table ECHS_PROCESSES.

So the implementation class of a web service has two purposes: it implements the service logic, which includes passing the error data to the ECH framework, and ECH can perform error processes because the implementation class is, from the point of view of ECH, an action class containing a special interface whose methods will be called for retry, discard and so on. This logic is stored in table ECHS_DEFLTRESOL. For each error scenario we can define error categories (format error, processing error, authorization error, lock request and many more) and, for each combination, further parameters.

In our case the error is persistent and so submitted to PPO. Retry Group S40 means that it is rescheduled every 40 minutes automatically but can be retried manually (retry mode 3). Finish resp. fail mode 2 means that those actions can only be performed manually. In Ehp 4 those tables are system tables, but in later releases they will have the proper category "E" as well as customer namespaces, which is necessary when you want to define your own error processes for custom-developed enterprise services. This is customer customizing that can be defined in SPRO above using "Define Resolution Strategy" by creating a condition and target.

Now you can define possible values.

Summary & Outline

So far we have looked into SAP Business Suite.
If you want to use this framework in custom error processes you should:

- analyze classes that implement interface IF_ECH_ACTION,
- look at the customizing tables in ABAP package FS_ECH_BASIS, and
- read the weblogs of SAP Mentor Michal Krawczyk, who explains PI/XI: Forward Error Handling (FEH) for asynchronous proxy calls with the use of Error and Conflict Handler (ECH) and covers further details in PI/XI: Forward Error Handling (FEH) for asynchronous proxy calls with the use of Error and Conflict Handler (ECH) part 2.

The ECH framework has been improved in Ehp 5, and there will be even more improvements in Ehp 6. In the next installments of this weblog series I will cover the following topics:

- Post Processing Office as a generic error handling framework. For this we need some additional customizing.
- HDS frameworks for error classification, which were shipped in software component SAP_BS_FND in NW 7.01.
- Further development of ECH in Ehp 5 and 6 (as soon as the latter is generally available).

Hello Tobias, thanks for the valuable blog. I am trying to implement FEH for error handling in our project. I have activated FEH from the Error and Conflict Handler and saved it. After that I ran the report SXMS_FEH_TEST, but for me it shows an error message saying "no Binding found", unlike the successful message in your blog. Can you please let me know what extra I need to do?
https://blogs.sap.com/2011/02/28/forward-error-handling-a-short-look-at-sap-business-suite-ehp-4/
Mysensor usb gateway serial problem

Hello, could somebody help me and say why it doesn't work?

// Enable debug prints to serial monitor
#define MY_DEBUG

// Enable and select radio type attached
//#define MY_RADIO_RF24

#define CHILD_ID 3
#define BUTTON_PIN 3
#define RELAY_PIN

void before()
{
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Then set relay pins in output mode
    pinMode(pin, OUTPUT);
    // Set relay to last known state (using eeprom storage)
    digitalWrite(pin, loadState(sensor)?RELAY_ON:RELAY_OFF);
  }
}

int gate = 23;
int relay1 = 22;

void setup()
{
  Serial.begin (115200);
  pinMode(BUTTON_PIN,INPUT);
  digitalWrite(BUTTON_PIN,HIGH);
  pinMode(relay1, OUTPUT);
  digitalWrite (relay1, LOW);
  pinMode(gate, OUTPUT);
  digitalWrite (gate, LOW);
  pinMode(24, INPUT_PULLUP);
}

void presentation()
{
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Relay", "1.0");
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Register all sensors to gw (they will be created as child devices)
    present(sensor, S_BINARY);
  }
}

void loop()
{
  byte value = analogRead(0);
  if (value == LOW){
    delay(10);
    digitalWrite(gate, LOW);
    digitalWrite(relay1, LOW);
    delay(2000);
    digitalWrite(gate, HIGH);
  }
  if (value == HIGH){
    delay(10);
    digitalWrite(gate, LOW);
    digitalWrite(relay1, HIGH);
    delay(2000);
    digitalWrite(gate, HIGH);
  }
}
()); } }

Welcome to the MySensors community @dany17

Would you mind sharing what you mean by "it" and "doesn't work"? If you haven't already, see for the most common problems and how to troubleshoot them efficiently.

- scalz Hardware Contributor last edited by scalz

@dany17 Hi. My crystal ball told me you need to uncomment this line in your sketch if you're planning to use nrf24:

//#define MY_RADIO_RF24

to

#define MY_RADIO_RF24

If it doesn't help, like mfalkvidd said, please show us your debug logs.
@scalz said in Mysensor usb gateway serial problem:

If it doesn't help, like mfalkvidd said please show us your debug logs.

OK, I do //#define MY_RADIO_RF24 because for now I wanted to make a base station via USB. I will tell you my idea and you can tell me if it is possible here, because I am slowly losing faith XD

I want to use a Mega2560 to control several devices. One of them is a roller blind that I converted to electric motor control (currently I have two: the first is a classic 2-wire motor that I have connected to 3 contactors which control the direction and time of operation; the second is a stepper motor). I now want to use MySensors to control the blind in Domoticz with one switch, or a slider if that is possible. The next thing would be an INA219, but maybe later; for the time being let's stick with this roller blind. Is it possible to do? I would like to use this program in MySensors with a switch from Domoticz:

#include <Stepper.h>

#define STEPS 32

Stepper stepper (STEPS, 8, 10, 9, 11);

int val = 0;

void setup ()
{
  Serial.begin (9600);
  stepper.setSpeed (800);
  pinMode(4, INPUT_PULLUP);
}

void loop ()
{
  if (digitalRead (4) == LOW) {
    val = 20480;
    stepper.step (val);
    Serial.println (val);
  }
  if (digitalRead (4) == HIGH) {
    val = -20480;
    stepper.step (val);
    Serial.println (val);
  }
}

@kimot Yes, I know this is an error, but I wanted to read the value from pin 4. This example (RelayActuator) worked for me. I wanted to use this ordinary switch to control the rest of the program. That's all I need at the moment.
https://forum.mysensors.org/topic/10635/mysensor-usb-gateway-serial-problem/3?lang=en-US
NAME

shmop - shared memory operations

SYNOPSIS

#include <sys/types.h>
#include <sys/shm.h>

void *shmat(int shmid, const void *shmaddr, int shmflg);

int shmdt(const void *shmaddr);

DESCRIPTION

The function shmat attaches the shared memory segment identified by shmid to the address space of the calling process. The attaching address is specified by shmaddr with one of the following criteria:

If shmaddr is NULL, the system chooses a suitable (unused) address at which to attach the segment.

If shmaddr isn't NULL and SHM_RND is asserted in shmflg, the attach occurs at the address equal to shmaddr rounded down to the nearest multiple of SHMLBA.

Otherwise shmaddr must be a page-aligned address at which the attach occurs.

If SHM_RDONLY is asserted in shmflg, the segment is attached for reading (and the process must have read permission for the segment); otherwise it is attached for reading and writing. The brk value of the calling process is not altered by the attach. The segment will automatically be detached at process exit. The same segment may be attached as a read and as a read-write one, and more than once, in the process's address space.

On a successful shmat call the system updates the members of the shmid_ds structure associated with the shared memory segment (among others, shm_atime is set to the current time, shm_lpid to the PID of the calling process, and shm_nattch is incremented by one).

The function shmdt detaches the shared memory segment located at the address specified by shmaddr from the address space of the calling process. The to-be-detached segment must be currently attached with shmaddr equal to the value returned by the attaching shmat call. On a successful shmdt call the segment is unmapped.

SYSTEM CALLS

- fork() - After a fork() the child inherits the attached shared memory segments.
- exec() - After an exec() all attached shared memory segments are detached from the process.
- exit() - Upon exit() all attached shared memory segments are detached from the process.

RETURN VALUE

On failure both functions return -1 with errno indicating the error. On success shmat returns the address of the attached shared memory segment, and shmdt returns 0.
ERRORS

When shmat fails, errno is set to one of the following: EACCES, EIDRM, EINVAL or ENOMEM. The function shmdt can fail only if there is no shared memory segment attached at shmaddr; in such a case, at return, errno will be set to EINVAL.

SEE ALSO

ipc(5), shmctl(2), shmget(2)
http://linux.about.com/library/cmd/blcmdl2_shmat.htm
6.4: Practice 5 Integrated Development Environment

- Page ID
- 10271

Learning Objectives

Exercises

Question (T/F)

1. IDE means Integer Division Expression.
2. Most modern compilers are really an IDE type of software, not just a compiler.
3. cin and cout are used for the standard input and output in C++.
4. Programming errors are extremely easy to understand and fix.
5. All C++ programs will have at least one include type of compiler directive.

Answers

1. False
2. True
3. True
4. False
5. True

Lab Assignment

Creating a Folder or Sub-Folder for Chapter 05

Download from Connexions: Solution_Lab_02_Pseudocode.txt
Download from Connexions: Solution_Lab_02_Test_Data.txt

Detailed Lab Instructions

Read and follow the directions below carefully, and perform the steps in the order listed.

- Copy into your sub-folder Chapter_05 one of the source code listings that we have used. We suggest the Lab 01 source code. Rename the copy: Lab_05.cpp
- Modify the code to follow the Solution_Lab_02_Pseudocode.txt file.
- Build (compile and run) your program. You have successfully written this program if, when it runs and you use the test data [use the test data as supplied as the solution for Lab 02], it gives the predicted results.
- After you have successfully written this program, if you are taking this course for college credit, follow the instructions from your professor/instructor for submitting it for grading.

Problems

Problem 05a - Instructions

List and describe what might cause the four (4) types of errors encountered in a program using an Integrated Development Environment software product.

Problem 05b - Instructions

Identify four (4) problems with this code listing (HINT: The four (4) types of errors encountered in a program using an Integrated Development Environment software product).
C++ Source Code Listing

//******************************************************
// Filename: Compiler_Test.cpp
// Purpose: Average the ages of two people
// Ken Busbee;
// Date: Jan 5, 2009
// Comment: Main idea is to be able to debug
//          and run a program on your compiler.
//******************************************************

// Headers and Other Technical Items

#include <iostrern>

using namespace std;

// Function Prototypes

void pause(void);

// Variables

int age1;
int age2;
double answer;

//******************************************************
// main
//******************************************************

int main(void)
{
// Input
cout << "\nEnter the age of the first person --->: ";
cin >> age1;
cout << "\nEnter the age of the second person --->: ";
cin >> age2;

// Process
answer = (age1 + age2)/3.0;

// Output
cout << "\nThe average of their ages is --->: ";
cout << answer;

pause();
return 0;
}

//******************************************************
// End of Program
//******************************************************
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Programming_Fundamentals_-_A_Modular_Structured_Approach_using_C___(Busbee)/6%3A_Integrated_Development_Environment/6.4%3A_Practice_5_Integrated_Development_Environment
I'm trying to make my objects Parcelable. However, I have custom objects and those objects have ArrayList

You can find some examples of this in several places online (the code below is taken from one of them).

You can create a POJO class for this, but you need to add some extra code to make it Parcelable. Have a look at the implementation.

public class Student implements Parcelable {
    private String id;
    private String name;
    private String grade;

    // Constructor
    public Student(String id, String name, String grade) {
        this.id = id;
        this.name = name;
        this.grade = grade;
    }

    // Getter and setter methods
    .........
    .........

    // Parcelling part
    public Student(Parcel in) {
        String[] data = new String[3];
        in.readStringArray(data);
        this.id = data[0];
        this.name = data[1];
        this.grade = data[2];
    }

    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeStringArray(new String[] {this.id, this.name, this.grade});
    }

    public static final Parcelable.Creator<Student> CREATOR = new Parcelable.Creator<Student>() {
        public Student createFromParcel(Parcel in) {
            return new Student(in);
        }

        public Student[] newArray(int size) {
            return new Student[size];
        }
    };
}

Once you have created this class, you can easily pass objects of this class through the Intent like this, and recover the object in the target activity.

intent.putExtra("student", new Student("1", "Mike", "6"));

Here, "student" is the key you will need to unparcel the data from the bundle.

Bundle data = getIntent().getExtras();
Student student = (Student) data.getParcelable("student");

This example shows only String types. But you can parcel any kind of data you want. Try it out.

EDIT: Another example, suggested by Rukmal Dias.
https://codedump.io/share/CZzLgnigqroJ/1/how-can-i-make-my-custom-objects-parcelable
Hello Extension

/*
 * hello.c -- A minimal Tcl C extension.
 */
#include <tcl.h>

static int
Hello_Cmd(ClientData cdata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    Tcl_SetObjResult(interp, Tcl_NewStringObj("Hello, World!", -1));
    return TCL_OK;
}

/*
 * Hello_Init -- Called when Tcl loads your extension.
 */
int DLLEXPORT
Hello_Init(Tcl_Interp *interp)
{
    if (Tcl_InitStubs(interp, TCL_VERSION, 0) == NULL) {
        return TCL_ERROR;
    }
    /* changed this to check for an error - GPS */
    if (Tcl_PkgProvide(interp, "Hello", "1.0") == TCL_ERROR) {
        return TCL_ERROR;
    }
    Tcl_CreateObjCommand(interp, "hello", Hello_Cmd, NULL, NULL);
    return TCL_OK;
}

Explanatory Notes

DKF: In this file, the entry point (which Tcl discovers through dynamic library magic) is called Hello_Init, and that is responsible for connecting the extension up to the Tcl interpreter. It does this by first calling Tcl_InitStubs (which allows it to call other parts of the Tcl C API), then it creates the guts of the extension (here using Tcl_CreateObjCommand to register Hello_Cmd with the name "hello"). Following that, it then formally provides the Hello package (allowing for software versioning control) and returns TCL_OK to indicate that the setup succeeded.

Inside Hello_Cmd, we ignore the arguments (fine for simple commands), and instead just set the result to a new string containing "Hello, World!". The -1 just means "take all characters" in a C strlen() sense; positive numbers are used when you know the length of the string, but it's usually easier with literals to use the auto-length feature. Finally, the command implementation returns TCL_OK to say "this was a successful run of the command".

Advanced note on the arguments to Hello_Cmd (taken from the chat and using dgp's words ...)

This only applies if you need to deal with arguments to your command, so this simple Hello world command does not need the following knowledge.

When Hello_Cmd is called, the objv array holds objc pointers to Tcl_Obj structs.
At that point you could pass any of those pointers to Tcl_GetString() and get back a pointer to the first element of an array of bytes that are a string in Tcl's internal encoding, terminated by a NULL byte. If you pass one of those pointers to some other routine, it could happen that the Tcl_Obj might get free'd by that routine. So to prevent that you should Tcl_IncrRefCount(objv[i]) on any argument you need to be sure lives on after being passed to something, and balance that with a Tcl_DecrRefCount(objv[i]) when you don't need the insurance any more (Tcl_GetString will not free the Tcl_Obj). Just because the objv[i] isn't freed, you can't conclude that the pointer you get back from Tcl_GetString() might not be freed. Their lifetimes are not the same. So, for any pointer you get back from Tcl_GetString(), you ought to make a copy of that string before you do anything else, assuming you want to keep it around for a while (see also Tcl_Obj refCount HOWTO).

Building the Extension

Building C code is a difficult thing to describe, because so many compilers do things differently. In the notes below, developers have begun to add examples of the command line arguments one would use for various compilers.

One of the things that you _might_ be able to use to help you is a file called tclConfig.sh. This file is one that is installed into Tcl's primary library directory, and contains a series of shell variable assignments that correspond to flags used to compile the original tclsh interpreter. The closer you match those flags, the better your chances are that your extension is going to work.

[Please replace this comment with details as to how to make use of the compile variables!]

To compile the extension above, you should copy the above code into a file called hello.c. To compile this code will require that you provide to the compiler the location of the tcl.h header, at the very least. Then you need to check the location of your version of Tcl.
We need to know the directory that contains tcl.h and the libtclstub file (tclstubNN.lib on Windows). The simplest method is to launch tclsh and examine the value of tcl_library. Below, substitute TCLINC for the path that contains tcl.h and TCLLIB for the path that contains the library.

Unix:

  gcc -shared -o libhello.so -DUSE_TCL_STUBS -I$TCLINC hello.c -L$TCLLIB -ltclstub8.4

Windows (using MSVC):

  cl -nologo -W3 -O2 -MD -DUSE_TCL_STUBS -I$TCLINC -c hello.c
  link -nologo -release -dll -out:hello.dll hello.obj -libpath:$TCLLIB tclstub84.lib

Windows (using mingw gcc):

  gcc -shared -o hello.dll -DUSE_TCL_STUBS -I$TCLINC hello.c -L$TCLLIB -ltclstub84

Mac OS X:

  gcc -dynamiclib -DUSE_TCL_STUBS hello.c -L/Library/Frameworks/Tcl.framework -ltclstub8.4 -o libhello.dylib

More on Building an extension for Tcl under Mac OS X and Building Tcl DLL's for Windows and Building the Hello C Extension using Visual Studio 2010 express.

To load, remember to do:

  tclsh8.4
  % load ./libhello[info sharedlibextension]
  % hello
  Hello, World!

so that Tcl finds it ([load] doesn't look in the current dir unless told to).

After running a compile step similar to the above, you should end up with a shared library that you can [load] into tclsh, and then call the "hello" command.

Lots of details left out - consult the man pages, books and more extensive docs elsewhere. [Could we at least fill in a skeleton of what kind of details have been left out, so that one knows what to look for elsewhere?]

Oh, and adapt the compiler line to your OS/compiler combination.

Using namespaces

TR - Many extensions nowadays create their commands in a namespace.
To do the above example using namespaces, you only need two more lines in the code:

Hello_Init(Tcl_Interp *interp)
{
    Tcl_Namespace *nsPtr; /* pointer to hold our own new namespace */

    if (Tcl_InitStubs(interp, TCL_VERSION, 0) == NULL) {
        return TCL_ERROR;
    }
    /* create the namespace named 'hello' */
    nsPtr = Tcl_CreateNamespace(interp, "hello", NULL, NULL);
    if (nsPtr == NULL) {
        return TCL_ERROR;
    }
    /* just prepend the namespace to the name of the command.
       Tcl will now create the 'hello' command in the 'hello' namespace
       so it can be called as 'hello::hello' */
    Tcl_CreateObjCommand(interp, "hello::hello", Hello_Cmd, NULL, NULL);
    Tcl_PkgProvide(interp, "Hello", "1.0");
    return TCL_OK;
}

Note that compiling the example with the Tcl_CreateNamespace() function will give you a warning using Tcl 8.4 (at least until 8.4.12) and no warning using Tcl 8.5. This is because Tcl_CreateNamespace is an internal function in 8.4 and public in 8.5. So compiling with Tcl 8.4 you should add the line '#include <tclInt.h>' to the C source if you want to avoid this warning. But the code works regardless of this directive.

CLN - In my extension, I put the namespace in a #define so I have:

#define NS "hello"
...
nsPtr = Tcl_CreateNamespace(interp, NS, NULL, NULL);
...
Tcl_CreateObjCommand(interp, NS "::hello", Hello_Cmd, NULL, NULL);

(Note that there's no operator between the NS and the "::hello" for Tcl_CreateObjCommand(); C very nicely concatenates adjacent literal strings.)

EG - There is no need to use Tcl_CreateNamespace() before using Tcl_CreateObjCommand(). The latter creates namespaces as needed, which is counterintuitive given the (opposite) behaviour of proc.

Creating a Package

NEM - As this page has become somewhat more comprehensive than I originally planned (and a good thing too), it seems appropriate to add the next step: installing your C code as a package. Once you have your dynamic library, this is really quite simple.
Firstly, you need to create a pkgIndex.tcl file (note the exact spelling), with instructions telling Tcl how it can load your package. The basic template looks something like this:

# pkgIndex.tcl -- tells Tcl how to load my package.
package ifneeded "Hello" 1.0 \
    [list load [file join $dir libhello[info sharedlibextension]]]

All this says is that when the "Hello" package, version 1.0, is required, then it can be found by loading the library libhello.so (or libhello.dll etc.) from the directory where this pkgIndex.tcl file was found. Note that the version number in the pkgIndex.tcl file should exactly match that found in the Tcl_PkgProvide call in your C code.

You can then install this file along with your dynamic library in a directory where Tcl can find it -- that is, any of the directories specified in the auto_path variable, and any sub-directories. On UNIX systems this typically includes /usr/local/lib, on Windows it will likely include C:/Tcl/lib or C:/Program Files/Tcl/lib (perhaps localised), and on Mac OS X it likely includes /Library/Tcl. So, assuming we are on a UNIX machine and want to install into /usr/local/lib, we would create a directory such as:

$ mkdir /usr/local/lib/hello1.0
$ cp pkgIndex.tcl libhello.so /usr/local/lib/hello1.0/
$ tclsh8.4
% package require Hello 1.0
1.0
% hello
Hello, World!

Use "hello::hello" if using the namespace version. There is also a sample with Windows build instructions at Building Tcl DLL's for Windows.

RS 2007-10-15: Another Windows example (namespacing avoided), using tcltcc:

~ $ tclsh
% package require tcc
0.2
% tcc::dll hello
hello
% hello cproc hello {} char* {return "hello, world!";}
% hello write -file hello.dll

If this isn't simple, what is? :^) Just to show that it works:

% hello
wrong # args: should be "hello cmd ..."

Oops.. that was still the compiler named hello

% load hello.dll
% hello
hello, world!

Q.E.D.
See sampleextension for an example of a Tcl extension that also demonstrates the TEA Tcl Extension Architecture...

mghello: Can someone PLEASE provide an example of compiling & linking this example for AIX? I don't have gcc on my AIX box at work so I need an example using IBM's cc/xlc compiler. Actually, compiling seems to be quite straightforward. It's the linking/shared library build step that's giving me trouble. AIX's multitude of compiler and linker options are a source of limitless confusion. I appreciate any help.

SL: I tried:

  xlc -c hello.cxx -o hello.o -I$TCLINCL -DUSE_TCL_STUBS
  xlc -qmkshrobj hello.o -L$TCLLIB -ltclstub8.4 -o hello.so

but get an error because of missing exports. Change

  int DLLEXPORT

to

  EXTERN int DLLEXPORT

and it works. From what I know, the DLLEXPORT is not necessary but the EXTERN should be present for all exported functions (at least [1] page 740 advises to do so).
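Tcl's [load] step above has a loose analogue in Python's ctypes, which also opens a shared library at run time and resolves symbols from it. This sketch is not Tcl-specific; it just illustrates the same dynamic-loading idea by calling sqrt from the platform's C math library (library name resolution is platform dependent, so find_library does the lookup):

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Load the C math library, e.g. libm.so.6 on Linux or libm.dylib on macOS.
libm = CDLL(find_library("m"))

# Declare the exported symbol's signature before calling it, just as a
# Tcl extension declares its command's C signature.
libm.sqrt.restype = c_double
libm.sqrt.argtypes = [c_double]

print(libm.sqrt(9.0))  # 3.0
```

Unlike Tcl's package machinery, nothing here checks versions; ctypes simply hands you whatever symbol the library exports.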
http://wiki.tcl.tk/11153
This also affects our software. I agree with Dan (danmbox): I don't understand; so many people depend on it and yet an out-of-the-box solution doesn't work. I don't want to break the distutils package of our users because we use mingw.

Within one Python script, I managed to fix it using this before the setup call:

if isWindows():
    """
    Fix bug in cygwinccompiler: removed -mno-cygwin.
    This is fixed in cygwinccompiler_new.
    We hacked distutils.ccompiler: its new_compiler function uses
    sys.modules to fetch the compiler. By modifying sys.modules, we can
    choose our own compiler version. (This is a bug that's been out
    there for quite some time.)
    """
    import cygwinccompiler_new
    import distutils.cygwinccompiler
    import sys
    sys.modules["distutils.cygwinccompiler"] = cygwinccompiler_new

If I then later run setup(...), it will use my new cygwinccompiler_new, which has the '-mno-cygwin' line removed.

However, when you want to install new packages using pip from the command line, I cannot find a suitable fix (except if I would replace distutils.cygwinccompiler before pip'ing, then put it back). AFAIK, distutils cannot be virtualenv'ed, right? So we cannot even fix the issue in a virtual environment.

If it is not possible to find out what version of gcc first implemented it, can't you simply use a pragmatic solution and run "gcc -mno-cygwin": if it gives an error, then remove the option. That would need the least testing and would fix the issue.
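The pragmatic probe suggested above can be sketched in a few lines. This is only an illustration, not distutils code; the helper name compiler_accepts_flag is invented, and it checks an option by actually invoking the compiler on a trivial file, which sidesteps guessing at gcc version numbers entirely:

```python
import os
import subprocess
import tempfile

def compiler_accepts_flag(compiler, flag):
    """Return True if `compiler` can compile a trivial file with `flag`.

    Hypothetical helper: a non-zero exit status (or a missing compiler)
    means the flag should be dropped from the flag lists.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "probe.c")
        obj = os.path.join(tmp, "probe.o")
        with open(src, "w") as f:
            f.write("int probe_symbol;\n")  # trivial translation unit
        try:
            result = subprocess.run(
                [compiler, flag, "-c", src, "-o", obj],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        except OSError:
            return False  # compiler not found at all
        return result.returncode == 0

# A patched cygwinccompiler could then do, roughly:
# if not compiler_accepts_flag("gcc", "-mno-cygwin"):
#     ... strip "-mno-cygwin" from the compiler/linker flag lists ...
```

The cost is one extra compiler invocation at setup time, which is negligible next to a real build.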
https://bugs.python.org/msg183032
curl_version_info - returns run-time libcurl version info

#include <curl/curl.h>

curl_version_info_data *curl_version_info(CURLversion type);

Returns a pointer to a filled in struct with information about various run-time features in libcurl. The type argument should be set to CURLVERSION_NOW.

CURL_VERSION_IPV6
    supports IPv6

CURL_VERSION_KERBEROS4
    supports kerberos4 (when using FTP)

CURL_VERSION_SSL
    supports SSL (HTTPS/FTPS) (Added in 7.10)

CURL_VERSION_LIBZ
    supports HTTP deflate using libz (Added in 7.10)

CURL_VERSION_NTLM
    supports HTTP NTLM (added in 7.10.6)

CURL_VERSION_GSSNEGOTIATE
    supports HTTP GSS-Negotiate (added in 7.10.6)

CURL_VERSION_DEBUG
    libcurl was built with debug capabilities (added in 7.10.6)

CURL_VERSION_CURLDEBUG
    libcurl was built with memory tracking debug capabilities. This is mainly of interest for libcurl hackers. (added in 7.19.6)

CURL_VERSION_ASYNCHDNS
    libcurl was built with support for asynchronous name lookups, which allows more exact timeouts (even on Windows) and less blocking when using the multi interface. (added in 7.10.7)

CURL_VERSION_SPNEGO
    libcurl was built with support for SPNEGO authentication (Simple and Protected GSS-API Negotiation Mechanism, defined in RFC 2478.) (added in 7.10.8)

CURL_VERSION_LARGEFILE
    libcurl was built with support for large files. (Added in 7.11.1)

CURL_VERSION_IDN
    libcurl was built with support for IDNA, domain names with international letters. (Added in 7.12.0)

CURL_VERSION_SSPI
    libcurl was built with support for SSPI. This is only available on Windows and makes libcurl use Windows-provided functions for NTLM authentication. It also allows libcurl to use the current user and the current user's password without the app having to pass them on. (Added in 7.13.2)

CURL_VERSION_CONV
    libcurl was built with support for character conversions, as provided by the CURLOPT_CONV_* callbacks. (Added in 7.15.4)

ssl_version is an ASCII string for the OpenSSL version used. If libcurl has no SSL support, this is NULL.

ssl_version_num is the numerical OpenSSL version value as defined by the OpenSSL project. If libcurl has no SSL support, this is 0.

RETURN VALUE

A pointer to a curl_version_info_data struct.

SEE ALSO

curl_version(3)
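For comparison, the same feature flags surface in the curl command-line tool's --version output as a "Features:" line. A rough, illustrative parser for that text format (not part of libcurl's API; the function name and the sample output string are mine) might look like:

```python
def parse_curl_version(text):
    """Parse `curl --version`-style output into (version, features).

    Illustrative only: assumes the conventional "curl X.Y.Z (...)"
    first line and an optional "Features:" line, as printed by the
    curl command-line tool.
    """
    version = None
    features = []
    for line in text.splitlines():
        if line.startswith("curl "):
            version = line.split()[1]
        elif line.startswith("Features:"):
            features = line.split()[1:]
    return version, features

# Sample output captured from a typical Linux build of curl 7.68.0:
sample = """curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11
Protocols: dict file ftp ftps http https
Features: AsynchDNS IDN IPv6 Largefile NTLM SSL libz
"""

version, features = parse_curl_version(sample)
print(version)             # 7.68.0
print("IPv6" in features)  # True
```

Each feature token corresponds to one of the CURL_VERSION_* bits above (SSL, IPv6, Largefile, and so on).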
http://huge-man-linux.net/man3/curl_version_info.html
This section covers the C++ API that Squish provides to make it possible to achieve even tighter integration with the AUT, and to solve some specific problems that occasionally arise.

Recording hints allow an application to influence Squish's event recorder while a test engineer records a test script. Using a recording hint, an application can insert comments or function calls into the test script at particular points. Recording hints are made possible by the RecordHint class. This class is supplied with Squish and is defined in the file recordhint.h in Squish's include directory. The public API is implemented inline in this file, so the application only needs to include the file itself—there is no need to link against an additional library.

To see how the RecordHint class is used in practice, we will review an example. Let's assume that we have an application which defines a function called myfunc which we have also wrapped so that a test script can access it. After the user clicks a particular button in the application we want the test script to call myfunc. To do this, we add the following C++ code at the point where the button click is handled:

Squish::RecordHint myfunc_comment(Squish::RecordHint::Comment, "Call myfunc");
myfunc_comment.send();
Squish::RecordHint myfunc_caller(Squish::RecordHint::Function, "myfunc");
myfunc_caller.send();

Now when recording a script and clicking on the button, two extra lines in the test script will be generated, as the code snippets below illustrate:

def main():
    ...
    clickButton("....")
    # Call myfunc
    myfunc()

function main() {
    ...
    clickButton("....");
    // Call myfunc
    myfunc();
}

sub main {
    ...
    clickButton("....");
    # Call myfunc
    myfunc();
}

# encoding: UTF-8
require 'squish'
include Squish

def main
  # ...
  clickButton("....")
  # Call myfunc
  myfunc()
  # ...
end

proc main {} {
    ...
    invoke clickButton "...."
    # Call myfunc
    invoke myfunc
}

This small example shows when and how to use record hints.
The complete API is in the recordhint.h file; look for the RecordHint class inside the Squish namespace.

In most cases, Squish hooks into the AUT without requiring any special preparation. However, in some cases (e.g., on AIX) this is not possible due to technical limitations of the operating system. In such cases the built-in hook approach can be used. This requires two tiny changes to the AUT:

- Include the qtbuiltinhook.h header file, which can be found in Squish's include directory, in the application's code where the main function is defined or where the QApplication object is created.
- Call the Squish::installBuiltinHook function as soon as you have created the QApplication object.

Example:

#include <QApplication>
#include "qtbuiltinhook.h"

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    Squish::installBuiltinHook();
    // ...
    return app.exec();
}

This is the only preparation needed to make your program testable on platforms that don't support the preloading mechanism. It does not matter if you leave in this code on other platforms, since the function is smart enough to do nothing if it isn't needed. The Squish::installBuiltinHook function is very lightweight and won't make any difference to the program's performance. Nonetheless, we recommend removing it for publicly released versions of the program. This can easily be done using an #ifdef that includes the header and the function call for testing builds and excludes them for release builds.

The Squish::installBuiltinHook function performs the following actions:

- If the environment variable SQUISH_PREFIX is not set, it does nothing and returns immediately.
- Otherwise it tries to load the shared library squishqtbuiltinhook in the lib (or bin) subdirectory in the directory specified by SQUISH_PREFIX, and tries to resolve the squishqtbuiltinhook_init symbol in that library. If it fails to find the library or finds it but fails to resolve the symbol, it does nothing and returns.
Finally, if it found the library and resolved the symbol, it calls the squishqtbuiltinhook_init function. This function handles the hooking.

The Squish::installBuiltinHook function returns true if the hooking succeeded, that is, the application is executed by Squish; otherwise it returns false.

The built-in hook is meant as a fallback mechanism on platforms where the normal hooking doesn't work. So if you want to use the built-in hook on platforms where Squish supports non-intrusive hooking, Squish will still use the non-intrusive hooking mechanism by default, although the built-in hook is included in the AUT. Nonetheless, it is possible to force the squishserver to use the built-in hook rather than Squish's non-intrusive hooking mechanism. This can be done by setting a squishserver configuration option (see Configuring squishserver (Section 19.4.2.3)):

squishserver --config usesBuiltinHook aut

It is also possible to use the built-in hook mechanism to attach to a running application (see Attaching to Running Applications (Section 19.8) for more details on attaching to a running application). To make an application attachable with the built-in hook, you must call the Squish::allowAttaching function after the QApplication has been created. The argument to this function is a port number that the application should listen on for a squishserver to connect to. The function is declared in qtbuiltinhook.h. Here is the standard pattern for making an application attachable:

#include <QApplication>
#include "qtbuiltinhook.h"

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    Squish::allowAttaching(11233);
    //...
    return app.exec();
}

Rebuild the application with these changes to make it possible for Squish to attach to it.
Now start the AUT using the startaut (Section 19.4.6) program supplied with Squish (in the Squish tool's bin directory):

startaut --uses-builtin-hook aut

This starts the AUT running and listening on the specified port, so you can now attach to it from within a test script. The next step is to register the AUT as an attachable AUT as described in Register the AUT (Section 19.8.3). See Attaching from a Script (Section 19.8.4) for details on how to attach to the application from a test script.
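The lazy, environment-gated loading that Squish::installBuiltinHook performs, as described above, is a generally useful pattern. Here is a loose Python analogue; the names (env var, module, init function) are invented for illustration and this is not Squish code. The function silently does nothing unless an environment variable names a hook module, and reports whether the hook actually went in:

```python
import importlib
import os

def install_builtin_hook(env_var="MYAPP_HOOK_MODULE", init_name="hook_init"):
    """Try to load an optional instrumentation hook; never raise on absence.

    Mirrors the three steps described above: bail out if the environment
    variable is unset, bail out if the module or its init symbol cannot
    be resolved, otherwise hand control to the hook's init function.
    """
    module_name = os.environ.get(env_var)
    if not module_name:
        return False                      # step 1: not being instrumented
    try:
        module = importlib.import_module(module_name)
        init = getattr(module, init_name)
    except (ImportError, AttributeError):
        return False                      # step 2: hook not available
    init()                                # step 3: let the hook do its work
    return True
```

As with installBuiltinHook, the call is cheap enough to leave in a normal run: one environment lookup when no hook is configured.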
https://doc.froglogic.com/squish/5.1/rg-cppapi.html
How do you have a C# program look for a file in a directory and if it isn't there generate it?

I would like it to check for a text file named Username.txt in the same drive (e.g. C:/) as the program, and if it isn't there, generate it. Where in the source would the code go?

Best Answer, 10 years ago:

All done using the System.IO namespace, and it is very easy to do what you are asking. There are a few different ways to do it. I'll show you one. A built-in method called "File.Exists()" will check to see if a given file exists on your system. You could put this in an IF statement kinda like this:

if (File.Exists("C:\\testfile.txt"))
{
    return true;
}
else
{
    WriteToFile();
}

static void WriteToFile()
{
    StreamWriter SW;
    SW = File.CreateText("C:\\testfile.txt");
    SW.WriteLine("This is the first line of text");
    SW.WriteLine("This is second line");
    SW.Close();
    Console.WriteLine("File Created Successfully");
}

Note that you should also build in some simple error checking (using try, catch) to make sure the file is actually created without error. Where you put this code depends on when you want it to run. If you want to provide more detail around what you're trying to do, I could write up a simple example for you to work with.

Reply, 10 years ago:

Dead easy to write it; what do you mean, where in the source would it go? It goes where you need it!
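The check-then-create pattern in the answer above translates directly to other languages. For comparison, a small Python sketch of the same idea (the function name and starter text are mine; errors propagate so the caller can handle them, matching the answer's advice about error checking):

```python
import os

def ensure_file(path, initial_text="This is the first line of text\n"):
    """Create `path` with starter content if it does not already exist.

    Returns True if the file was created by this call, False if it was
    already there. I/O errors (permissions, bad path) propagate to the
    caller, which plays the role of the try/catch the answer recommends.
    """
    if os.path.exists(path):
        return False
    with open(path, "w") as f:
        f.write(initial_text)
    print("File Created Successfully")
    return True
```

Like the C# version, this has a benign race if two processes run it at once; opening with mode "x" instead would make creation atomic.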
https://www.instructables.com/community/How-do-you-have-a-C-program-look-for-a-file-in-a-/
Welcome to the February newsletter! There has been quite a bit of activity in the past couple of months within the Jython community. Jython development has been quite active, Jython 2.2 Beta 1 has been released, and the user base is also growing. As a whole, the Python language and all of its implementations are actively growing more each day. As such, this year's PyCon event is sure to be a worthy one. I would like to encourage all readers who live near Dallas, or have the ability to travel, to attend this event. It will be held Feb 23-25 in Addison, Texas. For more information, please visit the PyCon 2007 home page. Special thanks to those who submitted material for this month's newsletter!

- Josh Juneau

Questions, comments, or suggestions? Please send email to: jython-monthly@mchsi.com or jython-users@lists.sourceforge.net for discussion.

News

Welcome the most recent release of the Jython scripting language, Jython 2.2 Beta 1! This represents a major milestone in the life cycle of the Jython scripting language, as this is the first release since the summer of 2005. The Jython developer and user community is extremely active right now, and I predict that the gap between future releases will be much smaller. Thanks to the developers for working hard on this release! Now it is up to the user community to test and give feedback for Beta 1. Download it now and get started!

Off The Lists

Question From Luca Cassioli:

I need to have a function "waiting in background" for an event to be triggered. Can it be done from the console, or should I write a script? In other words, I want to port such a source to Jython:

import javax.telephony.*;
import javax.telephony.events.*;

/*
 * The MyInCallObserver class implements the CallObserver and
 * receives all Call-related events.
 */
public class MyInCallObserver implements CallObserver { ... }

import MyInCallObserver;
[....]
try {
    Terminal terminal = myprovider.getTerminal("1234567890");
    terminal.addCallObserver(new MyInCallObserver());
} catch (Exception excp) {
    System.out.println("Can't get Terminal: " + excp.toString());
    System.exit(0);
}

Answer From Jeff Emanuel:

Direct translation:

# imports omitted
class MyInCallObserver(CallObserver):
    def callChangedEvent(self, evList):
        # handle event
        ...

try:
    terminal = myprovider.getTerminal('1234567890')
    terminal.addCallObserver(MyInCallObserver())
except java.lang.Exception, ex:
    print "Can't get terminal " + ex.toString()
    System.exit(0)

Or try the bean events technique:

def handleCall(evList):
    # handle event
    ...

try:
    terminal = myprovider.getTerminal('1234567890')
    terminal.callChangedEvent = handleCall
except java.lang.Exception, ex:
    print "Can't get terminal " + ex.toString()
    System.exit(0)

Interested in Developing Jython?

If you are interested in developing Jython, please take a look at the current bug listing and submit patches for items which you can repair.

Tips and Tricks

- Python Cookbook: BaseHTTPServer with Socket Timeout
- Python Cookbook: Generator for an arbitrary number of 'for' loops
- Python Cookbook: Another generator for an arbitrary number of 'for' loops

Jython Blogs

- Jython Roadmap -- Frank Wierzbicki
- Jython 2.2 - Beta 1 Released -- Frank Wierzbicki
- Jython 2.2 Beta 1 -- Josh Juneau
- Improvised AOP with Jython
- GUESS: A cool graph visualization application using Jython
- Jython 2.2 Hits Major Milestone -- Push To Test IDE

Pydev and Pydev Extensions 1.2.6 have been released

Discussions

- Connect to Java RMI Server from Jython
- Extended Repository Service & Jython Scripting Discussion -- Submitted by Thomas Muller -- Second Notice.
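Two of the Tips and Tricks recipes above concern generators for an arbitrary number of 'for' loops. A compact recursive sketch of that idea, essentially what itertools.product later standardized, looks like this (my illustration, not the cookbook recipe itself; note each inner iterable must be re-iterable, so a list or range rather than a one-shot generator):

```python
def nested_loops(*iterables):
    """Yield tuples as if len(iterables) for loops were nested."""
    if not iterables:
        yield ()        # base case: zero loops yield one empty tuple
        return
    first, rest = iterables[0], iterables[1:]
    for item in first:
        for tail in nested_loops(*rest):
            yield (item,) + tail

print(list(nested_loops("ab", range(2))))
# [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
```

The depth of nesting is decided at call time rather than being hard-coded, which is exactly what makes the recipe handy.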
https://wiki.python.org/jython/JythonMonthly/Newsletters/February2007?highlight=Jacl2Jython
Java Posse #277 Feedback: Not a view from an ivory tower

By Darcy-Oracle on Sep 03, 2009

The entry below is a slightly edited copy of a message I used to start a new thread on the Java Posse's Google Group, largely in response to comments made by Dick Wall in the first twenty minutes of episode #277 of the Java Posse podcast.

After listening to episode 277, I'm led to conclude I'm thought of by some as one of the "ivory tower guys" who "just says no" to ideas about changing the Java programming language. I have a rather different perspective.

In November 2006, Sun published javac and related code under the familiar GPLv2 with Classpath exception. [1] Shortly thereafter, in January 2007, no less a Java luminary than James Gosling endorsed the Kitchen Sink Language (KSL) project. [2] In James' words, KSL is "A place where people could throw language features, no matter how absurd, just so that folks could play around" since he has "... never been real happy with debates about language features, I'd much rather implement them and try them out." [3] KSL received no significant community response.

Later in 2007, after the remaining core components of the platform were published as open source software as part of OpenJDK during JavaOne, in November Kijaro was created. [4] Kijaro is similar in spirit to KSL, but does not require contributors to sign the Sun Contributor Agreement (SCA). Before Project Coin, Kijaro saw a modest number of features developed, fewer than ten, which is also not a particularly vigorous community response given the millions of Java developers in the world.

The earliest posts on what would become Project Coin mentioned the utility of prototypes, the Project Coin proposal form included a section to provide a link to an optional prototype, and I repeatedly stated throughout Project Coin the helpfulness of providing a prototype along with a proposal.
Despite the availability of the OpenJDK sources for javac and the repeated suggestions to produce prototypes, only a handful of prototypes were developed for the 70 proposals sent into Project Coin.

Dick asked rhetorically during the podcast whether alternative projects exploring language changes were inevitable as the "only approach given strict control exercised over the JVM [by Sun]." IMO, such approaches are inevitable only if Sun's repeated efforts to collaborate continue to be ignored.

Classes on compilers are a core component of many undergraduate computer science curricula. All the Java sources in the JDK 7 langtools repository add up to about 160,000 lines of code and javac itself is a bit over 70,000 lines currently. These are far from trivial code bases and some portions of them are quite tricky, but implementing certain changes isn't that hard. Really. Try it out.

Dick says toward the end of the opening segment "If people do want to do this stuff, right now they are being told they can't." I certainly do not have the authority to tell others what they can and cannot do. Indeed, I have advocated producing prototypes of language changes as a much more productive outlet than whining and pouting that other people aren't busy implementing the language changes you want, to little avail.

Others have already noted in previous messages to this group the irony of Lombok using the annotation processing facility I added to javac in JDK 6 as an alternate way to explore language changes (together with an agent API to rewrite javac internal classes!). However, way back before JDK 5 shipped in 2004, we at Sun recognized that annotation processors by themselves would be a possible way to implement certain kinds of de facto language changes.
The apt tool and later javac were always designed to be general meta-programming frameworks not directly tied to annotations; for example, an annotation processor can process a type containing no annotations to, say, enforce a chosen extra-linguistic check based on the structure of the program. [Such as the naming convention checker shipped as a sample annotation processor in JDK 6.] As an example of what can be done just using annotation processing, long-time annotation processing hacker Bruce Chapman implemented "multi-line strings" as part of his rapt project [5]; the value of the string is populated from a multi-line comment. After repeatedly outlining how it would be possible to do so on the annotation processing forum [6], I've gotten around to hacking up a little proof-of-concept annotation processor based implementation of Properties. [7] The user writes code like

public class TestProperty extends TestPropertyParent {
    protected TestProperty() {};

    @ProofOfConceptProperty
    protected int property1;

    @ProofOfConceptProperty(readOnly = false)
    protected long property2;

    @ProofOfConceptProperty
    protected double property3;

    public static TestProperty newInstance(int property1, long property2, double property3) {
        return new TestPropertyChild(property1, property2, property3);
    }
}
Additionally, the IcedTea project and the shipping of OpenJDK 6 in Linux distributions have provided an existence proof that folks other than Sun can take the entire OpenJDK code base, add various patches and additional components to it, and ship it as a product. Given the OpenJDK sources Sun has published, subject to the license and trademark terms and conditions, anyone is free to implement and use language changes, as long as they assume the costs and responsibilities for doing so. Experimentation has long been encouraged, and experiences from experiments using language changes on real code bases would certainly better inform language evolution decisions. Unfortunately, others have generally not done these experiments, or if the experiments have been done, the results have not been shared.

I also do not have the power to prevent others from using non-Java languages on the JVM or to force others to run anything on the JVM, nor would I want to exercise such powers even if I had them. Indeed, for some time Sun has endorsed having additional languages for the platform, and the main beneficiary of John Rose's JSR 292 work will not be the Java language, but all the non-Java languages hosted on top of the JVM. I do have the authority to speak on what Sun will and will not spend our resources on in relation to Project Coin, certainly a right any organization reserves over its resources. [8]

Finally, going back to a white paper from 1996, the design of Java quite intentionally said "No!" to various widely-used features from the C/C++ world, including a preprocessor and multiple inheritance. Again since the beginning, Java admittedly borrowed features from many other established languages. [9] Given the vast number of potential ways to change the language that have been proposed, many language changes will continue to be called and few will continue to be chosen. In any endeavor there is a tension to balance stability and progress.
For the Java language, given the vast numbers of programmers and existing code bases, we try to err on the side of being conservative (saying "No." by default), first to do no harm, but also to preserve the value of existing Java sources, class files, and programmer skills. There are many other fine languages which run on the JVM platform and I expect the Java language to continue to adopt changes, big and small, informed both by direct experiments with prototypes and by experiences with other languages.

[1]
[2]
[3]
[4]
[5]; see also Bruce's
[6]
[7]
[8]
[9]

The incredible amount of work put in on the closures prototypes, which have not led to anything being added to Java 7, does rather dissuade one from wanting to spend a lot of effort on an implementation before your good selves have blessed the idea. That said, most language changes will not require as much work as the closures prototypes did.

Posted by Ricky Clarkson on September 03, 2009 at 09:56 AM PDT #

@Ricky,

First, there is far from unanimity in the Java community on the underlying choice of whether or not closures would be an appropriate language change for Java at this time. Second, Neal's work was acknowledged by Sun as the Silver-level winner of the OpenJDK Community Innovators' Challenge.

Posted by Joe Darcy on September 03, 2009 at 10:56 AM PDT #

These Java Posse comments are based on an outdated world and are indicative of problems in the general Java community IMHO. As Joe says, the code has been there for over two years (nearly three for javac), yet there has been incredibly little input or response given the size of the community. If Sun were really the roadblock people make out, then surely someone would have forked the codebase by now. It's possible. It's all there under the GPLv2 + Classpath exception.
I wouldn't want to see it happen, and I know it's not going to happen, because those who can't be bothered to even look at the code and make small patches are hardly going to start maintaining such a huge codebase. If there's one issue with OpenJDK at present, it's not resistance from Sun but people. There's just not enough people there, reviewing patches and improving the infrastructure. Instead, patches sit on mailing lists for weeks and we still don't have things we aimed for months ago, like a fully open bug database. And the reason? The community. Or rather, the lack of one.

If there were more people commenting on patches from outside Sun, if there were more people pushing against some of the arbitrary barriers currently in place, there would be more reason for Sun to exert the time and effort to change things. There isn't. From a worldwide Java community, the mailing lists show basically the IcedTea hackers and a number of other talented individuals external to Sun. Out of millions of Java developers. It's time the community stopped whining and started acting if they really want to be taken seriously.

Posted by Andrew John Hughes on September 03, 2009 at 12:26 PM PDT #

Your frustrations are comprehensible, but so is the disappointment of many seasoned Java developers, witnessing longstanding closure wars without any substantial outcome. James Gosling wrote in September 2006: "There's an entertaining little firestorm going on over closures". Three years is a long time in our business. In the same blog entry, James Gosling also wrote: "I have somewhat mixed feelings about closures: they are pretty complicated.", which exposes a remarkably different assessment of the skills of the Java developer community.
Obviously, James Gosling thinks that the average Java developer might have difficulties mastering closures, while you expect more Java developers to hack the compiler ;-)

Posted by Karl Peterbauer on September 03, 2009 at 06:24 PM PDT #

Joe,

If we expect anything resembling unanimity in the Java community before adding any major language feature, I believe Java will never have any new major language features. The truth is that the process for adding language features to Java is opaque and heavyweight. The latter is sort of expected given the widespread usage of the language, but the former could be rectified. In particular, it would be nice if there was a clear decision maker when it comes to these things. Most languages that evolve well have someone with a unifying vision providing guidance, which is something that Java is lacking as far as I can see.

The closures example is a good one. Things went on for many months and no-one had any idea what the official position was. You mention James Gosling as a luminary in your post; he was in favour of closures, wasn't he? As was Neal, Gilad, Peter and others. Still not enough. What is, then? If people knew, they may be more tempted to contribute. Personally, I've moved on and do all my development in Scala instead of Java whenever I can (which thankfully has been pretty often recently).

Best,
Ismael

Posted by Ismael Juma on September 03, 2009 at 07:59 PM PDT #

Joe,

"First, there is far from unanimity in the Java community on the underlying choice of whether or not closures would be an appropriate language change for Java at this time."

Good. That's a positive reflection on the diversity of the Java community. My point about closures is that Sun did not even appear to engage in the debate except to say no 3 years down the line. That does not encourage anyone. On another note, is an open bug-tracker such an investment of effort?
I submitted a bug report on OpenJDK at bugs.sun.com one month ago and it still does not appear on the public website. I can't really see Sun as being anything other than opaque while that kind of system is in place.

Posted by Ricky Clarkson on September 03, 2009 at 08:40 PM PDT #

I agree that people have to actually do the work. Even "hackers in the trenches" can write code and contribute it, let people play with it, get feedback, etc. The "ivory tower people tell me I don't want this" claim is just weird. I don't understand why people reason that way. It is as if they disqualify themselves beforehand from pushing the community forward. And indeed the world changed drastically when most of Sun's Java software related implementations became Free Software. Everybody now has the ability to fork if they wish, since almost everything is available under the GPL these days. That is enormous power (and with it comes great responsibilities).

That said, I can certainly see reasons for people to feel the freedom and openness could be improved. Nobody wants to fork; the community as a whole is helped by having a canonical Free Java implementation as much as possible. That is also why currently certain issues don't lead to forking to open up the processes more. The damage of forking is just bigger than making the processes more open and transparent. But it still would be really beneficial if we could make things more open and transparent around OpenJDK:

- We want a canonical implementation as much as possible, but Sun is still publishing their Java JRE/SDK products as proprietary software with a license that disallows anybody from inspecting, comparing, publishing and discussing how they changed the OpenJDK code they base their products on. Getting everybody, including Sun, to play by the same rules when publishing products (under the GPL) would increase the trust of the community.

- Although the code is free, the JCP isn't.
The JCP still allows publishing JSR specifications that then cannot be used by the community for working on a free implementation. Sun itself is doing this, putting terms in their JSR click-through licenses that disallow sub- and super-setting, or basing implementations on Sun-derived code... (which was published under the GPL, remember...)

- The JCP workaround process for "small public interface changes" (CCC) still isn't transparent. In fact it isn't documented anywhere as far as I know, but people are still required to get permission from the CCC first.

- Commits require bug numbers that the community is unable to create. The public bug database is often ignored, and the proprietary Sun bug tracker is very hard to use and has lots of issues hidden from the community (even issues mentioned in commits, so it is impossible to judge what a bug really was and why the fix was necessary in the first place).

- Sun demands full control and all rights over any contribution through the SCA, so they are the only party that can "escape" the tit-for-tat GPL share-alike rules of the community. That means some improvements have to stay out of OpenJDK just because of this legal restriction that favours Sun above the community.

- A lot of communication is still done only in private. Although the hg repos show all commits, a lot of the discussion and patch approval process is hidden, because Sun engineers are still allowed to put changes in without discussing them with the rest of the community on the public lists.

- Sun sometimes makes promises about liberating code, like the plugin and webstart for example, that then never happen. Don't promise things you are then not actually doing.

Fixing the above will certainly improve the situation and will make it easier for supporters of Sun and OpenJDK to really stand behind it as the way to push Java forward together. Then saying that "those who do the work, get to decide how the project [Java] evolves" would be really true for everybody.
Posted by Mark on September 03, 2009 at 10:31 PM PDT #

Joe, try to look at this situation from the view of somebody from the community, which you blame for being passive:

1. One of the basic psychological properties of people is to invest time only if we have some level of assurance that the invested time will not be wasted. Also note that professionals care about their own time more than students do. So, it is normal to describe a proposal, then receive feedback, then, if the feedback is positive, implement it; otherwise, not. So, instead of asking "why were only a handful of prototypes developed for the 70 proposals?", the real question is: "why were some prototypes developed at all?" Also note that the fact that there exist people without experience, who contribute nothing but write to Project Coin, does not annul the fact that there exist people who can produce value, if their proposal is accepted.

2. The feedback which an ordinary community member received from Project Coin is also frustrating: naturally people want to receive in feedback some clear vision of the future, not only in the scope of Project Coin, but in a more general perspective. I.e., ideally this would be a classification into 4 cases:
- (1) the proposal is just not valid enough to be reviewed;
- (2) Sun will reject the proposal "at all";
- (3) Sun will reject the proposal now for Project Coin, but in principle it can be accepted later, i.e. under some conditions (such as an implementation, etc.) it is possible to include such a feature in Java at some point during the next year or two;
- (4) accepted now.

Instead we have (1, 2, 3) in one group, so if there exist people who devoted some time to real contribution in Project Coin, but without accepted proposals, then their motivation to devote time to Java language development is now lower than before the project started, because they did not receive any positive feedback.

3. Also note that only 2 of the 7 accepted proposals were not from Sun; one of the non-Sun proposals had been flying around in blogs for nearly a year before Project Coin started; the second looks like a maintenance-level change.
Maybe there are reasons, but from the outside this is viewed as NIH syndrome.

4. Also, one of the well-known characteristics of OSS is that contributors prefer to work in areas which interest them, not those needed by the project owner. So, from the point of view of an external developer, activity in Java language development is a waste of time: if we propose a change, we have no way to receive feedback, and examples from the past (closures and now Project Coin) have shown that with a probability of 99% your work will be dropped. Is it possible to fix such a situation? Maybe, but only by defining a clear vision of the community process. (For now my forecasts are pessimistic.)

Posted by Ruslan Shevchenko on September 03, 2009 at 10:32 PM PDT #

@Ricky: "My point about closures is that Sun did not even appear to engage in the debate except to say no 3 years down the line. That does not encourage anyone." -- Did any other major JCP member engage in the debate? Please refresh my memory.

@Ismael: "The closures example is a good one. Things went on for many months and no-one had any idea what the official position was." -- Whose official position? Do you mean Sun's? I have spoken for Sun on the Java language at JavaOne for three years running and did not use the word "closure" at any point. Evidently closures do not fit into Sun's short and medium-term vision for the Java language. Nor do reified generics. Nor do multimethods. Nor do a hundred other features Sun is not currently pursuing. You should ask other major JCP members for their position on Java language enhancements.

Posted by Alex Buckley on September 04, 2009 at 05:29 AM PDT #

@Ruslan: You endlessly complain that Sun doesn't give you feedback on your proposals. If you send a proposal to a mailing list, and only a few people comment on it, then your proposal is evidently not very compelling. It might be a good idea, but it's less good than other ideas which get a lot of discussion. Send a proposal for C# to Microsoft and see what feedback you receive.
Frankly, your primary concern is not the technical quality of a feature, but whether Sun "blesses" your feature and pushes it through the JCP on your behalf. Sun is not going to do that for most features, from you or anyone else. You will be disappointed 95% of the time if getting recognition from the JCP is your goal.

Posted by Alex Buckley on September 04, 2009 at 05:44 AM PDT #

@Ruslan,

First, everyone who participated in the coin list did so of their own free will; they chose to spend their time following the activity or contributing to it. The expectations going into Project Coin should have been quite clear; the default answer is "no," and a prototype can help your case because it provides more accurate information about the utility of the change. Of the selected proposals, only one was written by a Sun employee (Strings in switch, by me), and the majority of the voluminous traffic on the coin list (around 1300 email messages just during the call for proposals period!) was not penned by Sun employees. So there was significant non-Sun involvement in coin.

To be blunt, many of the submitted proposals were simply not credible attempts by the authors at producing a useful analysis of the new feature. They were closer to a passive-aggressive ploy to have others on the list do the detailed work and hard thinking required to turn a general idea into a meaningful proposal. That is not the arrangement offered by the Coin invitation; it is rude to expect others to spend more time "fixing" a proposal than the original author spent writing it.

Posted by Joe Darcy on September 04, 2009 at 06:05 AM PDT #

> Send a proposal for C# to Microsoft and see what feedback you receive.

Apart from the difference that C# is actually innovating on its own, they just added optional parameters due to strong demand from the community (MVPs). Of course hackers in that camp are more attracted towards Mono, as it provides the best of both worlds - open source as well as innovation.
Posted by Martin S. on September 04, 2009 at 06:10 AM PDT #

@Karl,

There are millions upon millions of Java developers who would have to learn about closures if they were added to the platform. I do not expect the "average Java developer" to have the skills or interest to hack the compiler. But I don't think it is unreasonable for a vanishingly small fraction of Java developers, a dozen, two dozen, to implement and experiment with new language features before they are added to the platform.

Posted by Joe Darcy on September 04, 2009 at 06:20 AM PDT #

"Evidently closures do not fit into Sun's short and medium-term vision for the Java language"

So based on this statement, can we safely assume that closures & reified generics are off the plate for Java 8? I am very much curious who forms "Sun's official" position. Is it a specific team or an individual at Sun... maybe you?

"C# to Microsoft and see what feedback you receive"

Most of the proposals sent to coin have already been implemented in C#.

Posted by joeJava on September 04, 2009 at 06:25 AM PDT #

@Martin S: That's good news if you're an MVP. What is Microsoft's response if you're an unaffiliated individual with a half-formed idea illustrated by confusing example code which raises a new question from every new reader?

@joe java: No, you cannot assume that. You cannot assume anything about JDK8. Planning for JDK8 will start after JDK7 is completed. The major new feature for JDK7 is the module system and the modularized JDK codebase, which will allow structural changes to the language, libraries, and VM which are impossible today.

In terms of who forms Sun's official position, it's a team effort. As spec lead for the language, I have a fairly powerful ability to say "no, this is a bad or unsafe feature" ... but what determines if features actually get pushed forward is a mix of customer demand and internal resourcing.
For reference, a customer is not someone who says "+1" on a mailing list or leaves blog comments saying "How can we get this feature into Java?".

Posted by Alex Buckley on September 04, 2009 at 06:45 AM PDT #

@Alex: [?! - where did I 'endlessly complain'?]. I just tell why spending time participating in Java language evolution is not attractive for an external developer. It does not depend on me or some other person. If you want to draw people to language development, then you must know how the situation looks from the other side. It's just information, nothing more. If you don't want to hear the answer, then don't ask the corresponding question [why is the community passive].

Microsoft does not define a process for external contribution to the language. [But I'm sure that if they define such a process, they will take Sun's errors into account. Also, you can build the next version of the process better if you hear the answers.] Maybe we have a situation where Java really does not need external help from the community. I.e., if the traditional development model is a 'cathedral' and you are happy with this, interaction with the community requires adding elements of the 'bazaar' and extra resources for communication. This could also be an honest answer.

Posted by Ruslan Shevchenko on September 04, 2009 at 06:59 AM PDT #

@Joe

1. [5/7 or 1/7 (?): let's look at your first post: [A] and the final 5 results: [B]
A.1 is B.1 (strings in switch)
A.2 is B.3 (diamond)
A.3 --- is not in the final 5
A.4 is B.7 (JSR 292)
A.5 [Sun] (+ literals [not Sun]) is B.6
What we have in B but not in A:
B.2 (flying around in blogs for nearly a year)
B.4 (maintenance-level change)
B.5 (integer literals: Bruce Chapman [Sun] + Derek Foster [not Sun])]

So, proposals from the external world are in the minority.

2. Coin was a good initiative and you provided great work moderating it. If we see that there exist some objective contradictions (you think that the community is passive; many people from the community think that Java development is too slow), then there is nothing bad in analysing such points.
[More, I think that analysing the pain points of each project is a duty of the PM.] For now it is clear that if you want to achieve some result from the community, then you must think about the result/effort ratio for the ordinary community member. When this ratio is low, the community is passive; when high, active.

Posted by Ruslan Shevchenko on September 04, 2009 at 08:13 AM PDT #

@Ruslan,

Below is the list of accepted features sent to the coin-dev list with the author and affiliation of the proposal:

* Strings in switch, Joe Darcy, Sun
* Automatic Resource Management, Josh Bloch, Google
* Improved Type Inference for Generic Instance Creation, Jeremy Manson, Google (implementation by Maurizio Cimadamore of Sun)
* Simplified Varargs Method Invocation, Bob Lee, Google
* Literal improvements, Bruce Chapman and Derek Foster
* Language support for JSR 292, John Rose, Sun
* Indexing access syntax for Lists and Maps, Shams Mahmood Imam
* Collection Literals, Josh Bloch, Google

The Bruce Chapman who is active on Project Coin is *not* the Bruce Chapman who works for Sun. So as you can see, most of the accepted proposals were not authored by Sun. Evidently other people judged the initial list of proposals to seed the discussion to be worthy of inclusion.

Posted by Joseph Darcy on September 04, 2009 at 08:42 AM PDT #

@Joe. Thanks. You are right, if we do not count the 'mentioned in the first post' form of affiliation with Sun.

Posted by Ruslan Shevchenko on September 04, 2009 at 09:07 AM PDT #

@Ruslan: Sun is not interested in the general "external developer" participating in Java language development. I do not speak for other major JCP members, but I think it's safe to say that most are interested in expert contributions only. An "ordinary community member" does not magically become an expert by filling out the Coin proposal form. If someone filled out the form and sent it to the Coin list with no prototype and it received little discussion, then in all likelihood their proposal was no good.
Maybe it was a good idea, but poorly written. The Coin community does not owe you a detailed explanation.

There are community members on the Coin list who made expert contributions. Sometimes the contribution was to refine a proposal and further support it. Sometimes the contribution was a private prototype that showed up problems with a proposal. Sometimes the contribution was to keep their mouth shut and not draw attention to a proposal they hoped would go away.

Posted by Alex Buckley on September 04, 2009 at 09:42 AM PDT #

@Alex. Sorry, I do not understand you at all here. The word 'ordinary' was used by me not as a synonym for 'non-expert'.

Posted by Ruslan Shevchenko on September 04, 2009 at 10:00 AM PDT #

@Ruslan: In general, you imply that there is a class of ordinary Java developers who are interested in language development but are not language experts. I agree that class exists. You are a member of it. Trouble is, only expert contributions are useful.

Posted by Alex Buckley on September 04, 2009 at 10:28 AM PDT #

@Alex So this is a personal attack ;) Sorry, I did not expect this. Curiously, most of my effort in Project Coin was not in proposals, but in providing statistics about usage patterns for other proposals (which, as I remember, were received positively on the mailing list). // Ahh, your attack is an answer to my defence against your incorrect words about the multiline strings proposal: OK, so now I know why Java is dying: the people who head Java are not adequate enough to live without baseless personal attacks. OK, live in your world if you prefer. Regards!

Posted by Ruslan Shevchenko on September 04, 2009 at 10:57 AM PDT #

@Ruslan: "expert contributions" was meant to refer to proposals. I am a fan of multiline strings, but your proposals were difficult to understand. However, your analyses were quite useful, and sparked good discussion. There was some question about their accuracy, but your contribution was impressive overall.
People have alluded to improvements for "next time". Clearer roles for contributors is one of them, perhaps enforced by separate lists for proposals and analyses. You are right that contributors come in different guises.

Posted by Alex Buckley on September 04, 2009 at 11:18 AM PDT #

@Alex: About the quality of proposals: I'm sure that if we ran a 'blind test', we would be unable to separate the proposals from 'experts' from those of 'non-experts' in your sense. My language is not ideal, but the language of other proposals [some of which were accepted] was of the same quality. Of course this is bad, but this is a problem for both 'experts' and 'non-experts' in your sense. // (And to prevent 'bad' submissions 'next time', a good solution is a qualification round before judgement 'in essence'.) Sorry if my words were too harsh. Thanks for the recognition that 'non-experts' may participate in the game. At least every expert was a non-expert at some point in time, so maybe I have a chance ;)

Posted by Ruslan Shevchenko on September 04, 2009 at 11:54 AM PDT #

> Send a proposal for C# to Microsoft and see what feedback you receive.

Funny you should mention that..

Posted by Jonathan Holland on September 05, 2009 at 03:26 AM PDT #

The problem is that after years of political games in the Java, OpenSolaris, MySQL etc. communities around Sun, we're very skeptical that OpenJDK will endorse and accept our work. We might be wrong... we might be TOTALLY wrong. However, you can't just expect us to drop our prejudice overnight.

I personally have tried to contribute code BACK to Sun over the years. I had patches to java.util.regex which were denied... these were simple patches for small bugs and added features. I was told they were denied as they would be added to future versions of the JDK; they never were. I have lots of examples like this...

Anyway, it's going to take a while for Java to gain the reputation of Python or Ruby in the community. There's just a lot of bad karma in the air...
Posted by Kevin Burton on September 05, 2009 at 05:11 AM PDT #

A whole career of submitting bug fixes to Sun and having them go nowhere has bred nothing but distrust. Now that same organisation wants all the control of the process and expects fully working implementations of major functionality, only to ultimately say no. Nothing changed except the sticker on the box, and with Oracle owning it all you can bet it will get worse. The Java Posse is spot on with their remarks; major and minor features have been rejected offhand without much of a good discussion, even when popular with the community at large.

Posted by Paul Keeble on September 05, 2009 at 09:59 AM PDT #

Joseph, thank you for your opinion. The real problem actually is that, thanks to Java, we have all happily forgotten how to use pointers, and the C/C++ JDK sources are just unbreakable to us ;-)

Posted by Marian on September 06, 2009 at 12:49 AM PDT #

Marian,

As javac is written in Java, that shouldn't be an issue for language changes. In fact, javac is the best-written Java code I've ever seen. (Probably people who actually fix bugs in it have a different opinion.)

Posted by Ricky Clarkson on September 06, 2009 at 12:51 AM PDT #

Joseph, you outlined the implementation of first-class properties using annotation processors and code generation. In fact, JPA 2.0 (Linda DeMichiel, Gavin King et al.) will adopt exactly this strategy for a type-safe, compiler-checked criteria API. This API (and the supporting tool-chain) could be substantially simplified if the Java language supported a type-safe literal syntax for methods and fields. The problem space is well understood; no need for further experiments here. I'm sure that the JPA 2.0 expert group will provide a good, feasible solution, but it will not have the elegance and power of C#'s LINQ (which requires a bunch of other language enhancements). JSR 295 (Beans Binding) also heavily suffered from missing language features.
In essence, this JSR was about keeping properties of different objects in sync, primarily for "data binding" of bean properties to UI widgets such as text fields or check boxes. This is obviously a painful task if properties have to be identified via strings and reflection (in fact, they used JSP's EL syntax). The JSR was finally abandoned, perhaps because of other reasons I don't know. Hans Muller's JSR 296 (Swing Application Framework) had to stick to strings for associating UI widgets like menu items with annotated methods. Not a showstopper issue, but method literals would have spiced up the solution. Uh, this JSR is also abandoned...

Personal story: Most of my work time, I'm slapping a user interface (GWT in my case) around a database, with JPA and a decent data-binding mechanism as essential drivers. Some time ago, JPA 2.0 was pie in the sky, so I had to stick to non-type-safe, non-refactorable JPA query strings, but JSR 295 was still alive, and I really liked many concepts proposed there. I could not use it directly, since JSP EL heavily relies on reflection, which is unfortunately not available in a GWT environment, and I really dislike the idea of using JSP EL anyway. As a workaround, I reimplemented JSR 295's core concepts, this time using apt-generated property objects. I hope that I can soon replace my proprietary poor man's solution with the JPA 2.0 code generator. Even then, I will have only a very clunky solution compared to the built-in capabilities of C#/LINQ.

Last battle area: On the server side, it's quite popular to use an IoC container (e.g., Spring) to wire up the main pieces of an application, and IoC for the client is becoming a hot topic. Annotation-driven auto-wiring works in many cases, but not always, typically requiring the use of XML for defining the relationships.
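The "apt-generated property objects" workaround mentioned above can be approximated by hand. The sketch below is hypothetical (the Property class and its methods are invented, not JSR 295 or JPA 2.0 API, and it leans on Java 8 functional interfaces for brevity, which did not exist in 2009; a period-accurate version would declare its own getter/setter interfaces), but it shows the core idea: a typed, refactorable object standing in for a string-keyed property reference:

```java
import java.util.function.BiConsumer;
import java.util.function.Function;

// A typed property descriptor: replaces "firstName"-style string keys with
// an object carrying the getter and setter, so the compiler and refactoring
// tools can see every reference to the property.
final class Property<B, V> {
    private final String name;            // kept only for diagnostics
    private final Function<B, V> getter;
    private final BiConsumer<B, V> setter;

    Property(String name, Function<B, V> getter, BiConsumer<B, V> setter) {
        this.name = name;
        this.getter = getter;
        this.setter = setter;
    }

    V get(B bean) { return getter.apply(bean); }
    void set(B bean, V value) { setter.accept(bean, value); }
    String name() { return name; }
}

// Example bean plus its property constant; in the workaround described
// above, constants like FIRST_NAME would be emitted by an apt processor.
class Person {
    private String firstName;
    String getFirstName() { return firstName; }
    void setFirstName(String v) { firstName = v; }

    static final Property<Person, String> FIRST_NAME =
        new Property<>("firstName", Person::getFirstName, Person::setFirstName);
}
```

A binding framework can then wire Person.FIRST_NAME to a widget without any reflection or string lookup, which is exactly what a reflection-free environment like GWT needs.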
Such XML wiring introduces yet another non-type-safe, non-refactorable element, prone to a whole class of nasty runtime errors (and requiring massive additional tooling support, for example for convenient auto-completion). Current vanilla Java is way too verbose, but a concise literal syntax for methods/properties might be an enabler for a full-blown Java-only IoC container.

Bottom line: The need for new language features (type-safe literals for methods/fields/properties, closures and the like) does not arise from subjective, nebulous reasons such as preferring a functional programming style over an imperative style. It's about solving practical problems, and it is a matter of fact that expert groups of important JSRs have clearly stated that they can only provide the second-best solution because of missing language features.

Posted by Karl Peterbauer on September 06, 2009 at 04:51 AM PDT #

I believe the proposal call was finished when Gavin suggested this.

Posted by sboulay on September 06, 2009 at 11:20 AM PDT #

Maybe Java doesn't get prototypes because the brightest people have already moved to greener pastures? It's not attractive to work on Java, and Sun is to blame for making it such a terrible experience. And why should I try to move an overturning galley if there are motor boats waiting for me? After declining closures for Java 7, Java *is* an overturning galley, and if you want to stay on the JVM, Scala is the only reasonable way to go.

Posted by Landei on September 06, 2009 at 06:15 PM PDT #

@Landei: I don't disagree. My problem is just that I find Scala too different and academic to be a realistic candidate for next-gen Java. I see a major problem with the elite using Scala for their projects whilst the remaining cubicle workers are destined to legacy Java work. I know what Ted Neward says about this issue (we'll always need people flippin' burgers), but I disagree.

Posted by Casper Bang on September 06, 2009 at 09:35 PM PDT #

@Jonathan Holland: You say: ."
1) You didn't send in a proposal; you blogged an idea with no expectation that anyone in particular, let alone from Microsoft, would read it. It so happens that the right engineer ended up seeing it - perhaps through luck, perhaps through an automatic alert - and had the resources to fix it in a timely fashion. This rare but plausible sequence of events is not unknown at Sun either. 2) I was referring to C# language proposals. Changing APIs is non-trivial but it is much easier than changing a language. So the question stands: How do you formally submit C# language proposals? Posted by Alex Buckley on September 08, 2009 at 04:21 AM PDT # "How do you formally submit C# language proposals?" Same way as people have submitted RFEs to Sun over the years, with the community voting on them. I.e. here's C#'s Mads Torgersen (who worked with Neal Gafter on Java wildcards) responding to a proposal: It's debatable whether that's any less formal than a mailing list. But it certainly is a lower bar of entry, as people are not asked to implement prototypes or to hush if they do not have that capability. That seems to come back to the various definitions of "community" apparently going around. Posted by Casper Bang on September 08, 2009 at 06:49 AM PDT # @Casper: connect.microsoft.com indeed seems to have the same role as bugs.sun.com. A member of the public asks "Please can I have this feature?", other people can vote, and ultimately a Microsoft/Sun employee says yes or no. This kind of mechanism has a low barrier to entry. That's a \*disadvantage\* because your database just fills up with half-formed ideas which go nowhere. Hence, Coin offered an \*additional\* channel for proposing language changes above and beyond those already submitted via bugs.sun.com. Since rough sketches can already be submitted to bugs.sun.com, we felt justified in having a higher bar for the additional channel.
Posted by Alex Buckley on September 08, 2009 at 09:35 AM PDT # @Karl Peterbauer, Taking a glass-half-full perspective, annotations together with annotation processing have allowed the removal of much of the need for non-Java files like XML deployment descriptors and have enabled the development of tool chains with feasible, if not ideal, solutions to some of the problems you cite. JavaFX has extensive language support for binding. Posted by Joe Darcy on September 08, 2009 at 10:02 AM PDT # @Ruslan "Did any other major JCP member engage in the debate? Please refresh my memory." Yes, Google announced an official position. Posted by Ricky Clarkson on September 10, 2009 at 06:37 PM PDT # @Ricky, The text of Google's official position on Java language changes is no longer at nor can I find it in the wayback machine. However, IIRC it was something along the lines of "experimentation should continue." The position was neither "Yes." nor "No." to closures or anything else. Posted by Joe Darcy on September 11, 2009 at 03:46 AM PDT # BIAS DISCLAIMER - features mentioned below are favored by me! The tension between backward compatibility and introducing new features seems to be the problem. There are ways out of this problem, e.g.: 1. Introduce a source statement before the package statement at the start of the file, e.g. "source Java7;". This way a file that is pre-7, or lacks a source statement, can be compiled with the existing compiler, and a file that is 7 can use the new compiler. This way mixed, pre & post 7, projects can be compiled and they can pick up the correct API via the module system (this is effectively how mixed-language projects work at the moment). 2. When a feature is heavily asked for, e.g. better inner classes / closures, then the minimum feature set, i.e. something like CISE plus a nifty collections library, could be implemented. At a later date more features could be added, e.g.
support for even shorter syntax, enclosing return statements, function types (structural typing), and user-defined control structures, if the need was still felt by the community. Some care is needed when introducing the starting proposal to ensure that it can be extended at a later date (in the case of inner classes / closures at least this does not seem impossible). On a more negative point, Project Coin was announced with various suggested ideas: Strings in switch, more concise calls to constructors with type parameters, exception enhancements, ability to call methods with exotic names, and (possibly) bracket notation to access collections. The final list is: Strings in switch, \*Automatic Resource Management\*, more concise calls to constructors with type parameters, \*simplified varargs method invocation\*, \*better integral literals\*, \*language support for Collection literals\*, ability to call methods with exotic names (\*and rest of JSR 292\*), and (possibly) bracket notation to access collections. The differences between the initial and final proposal lists are highlighted above. Half the final list was on the original list. This does not seem like a huge change given the amount of discussion on Project Coin. Back to the positive points. The Project Coin process did provide for open, documented discussion of options, which as far as I can tell is well liked by the community. Cheers, -- Howard. Posted by Howard Lovatt on September 13, 2009 at 05:30 AM PDT # I have written a working example (for Java 5 or later) showing that you could consolidate the various helper methods for arrays (Arrays, Array, ArrayUtils, System, etc.) in an OO way, so they are naturally available to users of an array. E.g.:

double[] values = { 5, 6.5, 2.1 };
values = values.add(3.2);
values.sort();
System.out.println(values); // prints [ 2.1, 3.2, 5.0, 6.5 ]

The compiler, IDEs and other tools handle this as expected.
I don't think there is much expectation that just providing a working prototype will make much difference when it comes to extending Java. Otherwise all the features which have been added to Groovy would have been considered (as it is a superset of Java). Providing a working prototype would have made the difference for how many suggestions... Can anyone name one? Posted by Peter Lawrey on September 25, 2009 at 04:54 AM PDT # It doesn't surprise me that Sun's attempts to open its projects haven't resulted in many contributions. I have worked for Sun, been a Java developer for ten years, have 360 Duke Stars, and if you google "peter lawrey"+java you get 9,140 hits ;) I work as Head of Trading Technology for a hedge fund. I must have had a few good ideas over the years. ;) However, I have no idea whether any of the 30+ bugs/RFEs I have posted over the years have resulted in a change to the JDK, and I have attempted to contribute to the OpenJDK a number of times but haven't had any luck there either. I don't even know how to query the status of reports I have made. By comparison, I have reported 30 issues in the last year on IntelliJ, which is \*not\* an open source product, and 10 have been resolved. So your frustration that there hasn't been more community contribution to the JDK is no surprise at all. ;) Posted by Peter Lawrey on September 25, 2009 at 12:41 PM PDT #
XMLForest

Export and import a hierarchy of Archetypes-based content using IMS content packages and Marshall's XML marshallers.

Current release

No stable release available yet. If you are interested in getting the source code of this project, you can get it from the code repository.

Project Description

0. Introduction

XMLForest is a tool for importing and exporting a bunch of Archetypes-based content.

1. Installation

XMLForest should live in the Products folder of your Zope instance; simply put it there and it should work.

2. Dependencies

I have developed XMLForest on my box with the following components installed:

- Archetypes version >=1.3.7
- Plone version 2.1.2
- CompoundField version 1.0-beta build 241 (only for (doc-)tests)
- Marshall (version 1.1, better gogo-jensens-merge-task branch)
- Relations version 0.5 (UNRELEASED)
- Zope 2.8.6 or 2.9.1

3. Testing

To run the included doctest, start "./bin/zopectl test Products/XMLForest" in your Zope root directory. The doctest is a good source for information on the implementation of XMLForest.

4. What does it do?

XMLForest is a portal tool you can use to export or import Archetypes in XML format. The doctest shows you a quick example. In the doctest we set up a few objects, then we export them as XML and re-import them to check some of their attributes as well as a contentish relation. You can see how contentish and non-contentish references are exported and imported. You can also see how 'ordinary' objects are exported.

5. What about IMS support?

IMS is an XML standard for learning technologies. We thought it would be rather challenging and extremely enriching if we would support a broad standard like IMS. The core documentation on the parts of IMS that are important for XMLForest is found here: As you can see, IMS specifies a so-called 'manifest' tag. This is the root tag of the manifests we use for import.
The manifest would explain what data is to be imported and it will reflect the hierarchical and relational structure of the objects. To accomplish import and export, the manifest has two major parts: organizations and resources. Organizations define the structure of the objects. There are three types of organizations in XMLForest: hierarchical, relational and 'Hephaistos'. The organizations would just describe the structure of the objects, not their content. The content is defined in the resource tags. 'Hierarchical organizations' reflect the containment of objects. One object can contain another, e.g. a folder might contain a file. This containment is reflected in XML as a tag that contains another tag. I give you an example of a hierarchical organization that contains some items:

<organization identifier="HIERARCHICAL" structure="hierarchical">
  <item identifier="ForestTestFolder" identifierref="1">
    <item identifier="item1" identifierref="2">
      <item identifier="subitem1" identifierref="3">
        <item identifier="subitem" identifierref="4" />
      </item>
    </item>
    <item identifier="item2" identifierref="5" />
  </item>
</organization>

When you look at the 'item' tags you see that they are nested. The item with the identifier 'ForestTestFolder' contains an item with the identifier 'item1'. When you look closer you see that 'ForestTestFolder' also contains another item with the name 'item2'. The item 'subitem' shows how a single item can be contained in nested containers: it is within the 'subitem1' tag and 'subitem1' is within 'item1'. This shows how deeper levels of containment are translated into an XML structure. The next type of organization I would like to explain is the 'relational organization'. The items contained in a hierarchical tree can carry references to other items with them. Such references can be called 'relations'. Relations have a certain type, e.g. one relation could be named 'Father_of' while another is called 'Mother_of'.
One object could have two relations, one to their father object, one to their mother object. Those references can also carry information themselves; there are so-called 'container objects'. These container objects are like any other Archetypes object, but they 'belong' to a certain reference and are stored next to it automatically. Relations are a way to introduce very complex structures into data, raising the dimensionality and density of information. Therefore we support them in XMLForest:

<organization identifier="RELATIONS" structure="relations" xmlns="">
  <item PortalType="foresttestitems_other" identifier="83aa4d62f296c28ea2cf2c6bf982ca44">
    <metadata>
      <rnode sourceidentifierref="1" targetidentifierref="2" type="foresttestitems_other" />
    </metadata>
  </item>
  <item PortalType="ForestContentReference" identifier="261a8cde16d4cb9211806bd1458e623e" identifierref="4">
    <metadata>
      <rnode contentidentifierref="4" sourceidentifierref="5" targetidentifierref="6" type="ForestContentReference" />
    </metadata>
  </item>
</organization>

The first item in this example for relational organizations is a simple non-contentish relation. It points out that there is a 'foresttestitems_other' relation from the item with the identifier '1' to another item with the identifier '2'. The reference itself has the identifier '83aa4d62f296c28ea2cf2c6bf982ca44'. That odd identifier is just an automatically generated identifier, since references are always stored automatically by the underlying references/relations system. The second item is a 'contentish' relation: it has a source object and a target object (just like the non-contentish reference before had), but it has also got a 'content' object and a pointer to its resource. There is also a 'hephaistos' organization implemented. This organization is sort of a 'don't care' organization that just delegates object creation to a mechanism called Hephaistos that would do 'the right thing' and knows where all the objects should be stored.
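The two-pass idea behind these relational organizations can be sketched in plain Python: create all objects in a first pass, then wire up relations by resolving the identifier references in each <rnode>. This is only an illustration, not XMLForest's actual code; plain dicts stand in for Archetypes objects and the helper name is made up:

```python
import xml.etree.ElementTree as ET

ORG = """
<organization identifier="RELATIONS" structure="relations">
  <item PortalType="foresttestitems_other" identifier="83aa4d62f296c28ea2cf2c6bf982ca44">
    <metadata>
      <rnode sourceidentifierref="1" targetidentifierref="2"
             type="foresttestitems_other" />
    </metadata>
  </item>
</organization>
"""

# Objects created during the first (hierarchy) pass, keyed by identifier.
# Plain dicts stand in for real Archetypes objects here.
objects = {
    "1": {"id": "item0", "refs": []},
    "2": {"id": "item1", "refs": []},
}

def resolve_relations(org_xml, objects):
    """Second pass: turn each <rnode> into a typed reference between objects."""
    for rnode in ET.fromstring(org_xml).iter("rnode"):
        source = objects[rnode.get("sourceidentifierref")]
        target = objects[rnode.get("targetidentifierref")]
        source["refs"].append((rnode.get("type"), target["id"]))

resolve_relations(ORG, objects)
print(objects["1"]["refs"])  # [('foresttestitems_other', 'item1')]
```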
So the first part of the manifest file is explained now, leaving the other half in obscure foggy distance. No, wait! I will explain it right now. The second part of the manifest file keeps so-called 'resources'. A resource is a definition of the content of an object. As we have seen before, the structural data always had references to resources in the tags. On object creation those references are resolved, and the content of objects is written into the created objects as well. Let us put that mechanism on a glass plate and have a closer look. Whenever an object is being created it will be filled with the content that was stored for that object on export. Those resources can be extra files, or a part of the manifest file that holds the so-called 'metadata'. You can copy/paste the metadata from any exported resource XML file into the manifest, and XMLForest should then be able not to follow a 'file' reference but to take the metadata that has been pasted. I give you an example:

<resources>
  <resource UID="4260799500487c24c9682da3dde073c8" identifier="1" type="file">
    <file href="4260799500487c24c9682da3dde073c8.xml" />
  </resource>
  <resource UID="079275ea242e7b1ec9db18bfc868aeaf" identifier="2" type="file">
    <file href="079275ea242e7b1ec9db18bfc868aeaf.xml" />
  </resource>
  ....
</resources>

This resource definition just uses files. It holds references to files on the file system that contain all the content information or 'metadata' that is needed to restore the object. When an object is found in the organization definitions and it is created, a lookup takes place, matching the resource reference from the structural definitions with one identifier in the resources part of the manifest. XMLForest then knows what to restore into the object after it has been created.
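As a toy illustration of that lookup (standard library only; this is not XMLForest's actual implementation), one could map each resource identifier to the file holding its content data:

```python
import xml.etree.ElementTree as ET

MANIFEST = """
<manifest>
  <resources>
    <resource UID="4260799500487c24c9682da3dde073c8" identifier="1" type="file">
      <file href="4260799500487c24c9682da3dde073c8.xml" />
    </resource>
    <resource UID="079275ea242e7b1ec9db18bfc868aeaf" identifier="2" type="file">
      <file href="079275ea242e7b1ec9db18bfc868aeaf.xml" />
    </resource>
  </resources>
</manifest>
"""

def build_resource_lookup(manifest_xml):
    """Map each resource identifier to the href of its content data file."""
    lookup = {}
    for resource in ET.fromstring(manifest_xml).iter("resource"):
        if resource.get("type") == "file":
            lookup[resource.get("identifier")] = resource.find("file").get("href")
    return lookup

lookup = build_resource_lookup(MANIFEST)
print(lookup["1"])  # 4260799500487c24c9682da3dde073c8.xml
```

On import, the identifierref found in the structural part would be resolved against such a table before filling the freshly created object.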
The other possibility is to put the metadata part into the resource tag like this:

<resource identifier="RESOURCE_ID_01" type="inline">
  <metadata xmlns="" xmlns:
  <dc:creator> portal_owner </dc:creator>
  <xmp:CreateDate> 2005-11-25T11:29:08Z </xmp:CreateDate>
  <xmp:ModifyDate> 2005-11-25T11:29:08Z </xmp:ModifyDate>
  <field id="pattr"> {'y': (1,), 'x': (2,)} </field>
  <field id="allowDiscussion"> 0 </field>
  <field id="id"> item1 </field>
  <field id="tattr">blabla blablabla</field>
  <field id="xpattr"> {'y': (3,), 'x': (4,)} </field>
  <uid> 079275ea242e7b1ec9db18bfc868aeaf </uid>
  <cmf:type> ForestTestItem < 12:29:08"/> </cmf:history> </cmf:workflow> </cmf:workflow_history>
  <cmf:security> <cmf:local_role cmf: </cmf:security>
  </metadata>
</resource>

There is no 'file' attribute in the resource definition; it simply contains the metadata information as a tag. This format is also valid for XMLForest. The metadata structure itself is not part of the XMLForest specification and implementation. It is part of the Marshall product. I have written a primary/binary pluggable namespace for Marshall. If anyone needs to export and import binary data, feel free to contact me. It can handle images/audio and other multimedia files as uu-encoded or base64-encoded binary data.

6. How does XMLForest do that?

To answer that question I will go into detail of the implementation. Let us assume we export a tree of objects using XMLForest. The most important command will be:

tool.export_tree(root_object, directory)

This command tells XMLForest to export the root_object given, and all sub-objects it contains, recursively. The second parameter, "directory", is a valid filesystem path to the folder where all data should be put.
XMLForest will create a number of files; a typical XML import/export directory layout looks like this:

0d670efa20d8dc7cfdf96b27f913283a.xml  a2ccba0cdc80cbf9bf8528fb2595d7a6.xml
0f0a74010887766cd7d1a2c3769af85d.xml  ab8e5b89407f93a61f9b58d8d2503833.xml
15f5eff422d0428a49c95eca9e174603.xml  afcbc1d7d35233de34ef82af069d5e9d.xml
206da6f7f828a332dbf0bbeed187cb29.xml  b321df7ee39c0523131e81420f2fa0b3.xml
22f3b66739440c657c8a627bfa4327f2.xml  b6e934ea42c01acb479a58a52d5bfbac.xml
313413b7538cee2eb7a0c5536f15d905.xml  c7ed17925ac9d5774cdb04b37ab8797a.xml
356e3876310c67f26ec26f87dd05bef9.xml  cc6f124c0c6c5682f55c7f87801c9d53.xml
44a42d89934f8ab04fcb5481b6eaefbd.xml  d19fdaf004d8ae0749d1a179ac098fef.xml
4cf01b0099ec8ebc461653a9e38ed2ed.xml  d4ae5ed5b2dd529fdf66c255e79d1f80.xml
50da5b8a5f173dbbb981b9edafc679a9.xml  e075c90db05cf2079b1ac68beed9acd5.xml
6da66d74f116dc06372cb9df168820da.xml  e5220d1557104e0a00afe26479122aea.xml
7558b88d4bee7c52deb3ec153f96e850.xml  e5fd71c1d33d19f821ad604f2a2b85bf.xml
76154f4d247a1f7a393217e9638162c8.xml  f74bc45b93b295469ccf884a7f5f1c39.xml
7abb80f09eab110f509752fd5b6951c0.xml  fa3aedf97b9a46d1a90a3ce947ea023a.xml
8127af64f9fda189d6719b934c1a6596.xml  fd73770c4fcb35c2f23daeb6767a7d25.xml
97916f68774397d750644744300e9e2f.xml  fddbf9ddd9f49fd53e6823f71b86bbd2.xml
9ddec7f637767cb1ef3756a1ba602687.xml  manifest.xml

Don't panic. As I said, I will go into detail in this very README.txt and sort things out. The most important file in the output directory is "manifest.xml". It contains all meta-information of the export. First I want to explain the structure of this file. The outermost tag in "manifest.xml" is the tag <manifest>. Within that tag all information about the hierarchy and the references of the export/import is declared. The first part of the file contains all hierarchy nodes, called <hnode>. A <hnode> may contain other <hnode> elements, reflecting the structure of the tree.
I give an example:

<hnode">
  <hnode dataref="50da5b8a5f173dbbb981b9edafc679a9.xml" id="50da5b8a5f173dbbb981b9edafc679a9" name="item_nested_0" type="ForestTestItem"/>
</hnode>

That is only a fragment of the hnode section from the doctest. The attributes used explain in which file the content information of the Archetypes object is kept and what type and id the object has. For example, the first hnode given in the fragment above has the following attributes: The "id" attribute is an id that is uniquely used in the "manifest.xml" file. It will be used later, in the part of the file that holds the information for references. The "name" attribute contains the Zope id of the Archetypes object. During the import the object represented by the <hnode> will have the id given here. The "type" attribute reflects the portal_type of the Archetypes object. When XMLForest imports data it will create the object with the given meta_type. The "dataref" attribute tells XMLForest where to look for the content data of the Archetypes object. In my example the hierarchy of the output directory is flat and all parts of the export are in one folder. However, when you want to do a custom import, you might have a different directory layout, and "dataref" would then contain a valid path to the file containing content data. The path will typically be relative to the directory parameter given, but it may also be absolute. When you do a custom import you are not bound to the directory layout the "export_tree" method uses. Just make sure the path is resolvable. XMLForest uses the UID as a part of the filename for content data. You may use any file name, as long as it is unique.
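To make the nesting concrete, here is a small sketch (standard library only, not XMLForest code) that walks an <hnode> tree and collects the four attributes described above. Note that the attribute values of the outer hnode are invented for this illustration, since the fragment quoted above lost them:

```python
import xml.etree.ElementTree as ET

# The outer hnode's attribute values below are invented for illustration.
FRAGMENT = """
<hnode dataref="root.xml" id="rootid" name="folder1" type="ForestTestFolder">
  <hnode dataref="50da5b8a5f173dbbb981b9edafc679a9.xml"
         id="50da5b8a5f173dbbb981b9edafc679a9"
         name="item_nested_0" type="ForestTestItem"/>
</hnode>
"""

def walk_hnodes(node, depth=0, out=None):
    """Collect (depth, name, type, dataref) for every hnode, depth-first."""
    if out is None:
        out = []
    out.append((depth, node.get("name"), node.get("type"), node.get("dataref")))
    for child in node.findall("hnode"):
        walk_hnodes(child, depth + 1, out)
    return out

nodes = walk_hnodes(ET.fromstring(FRAGMENT))
for depth, name, type_, dataref in nodes:
    print("  " * depth + f"{name} ({type_}) -> {dataref}")
```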
I will now show an actual content data file; to keep the context clear I choose "b321df7ee39c0523131e81420f2fa0b3.xml":

<?xml version="1.0" ?>
<metadata xmlns="" xmlns:
<dc:creator> portal_owner </dc:creator>
<xmp:CreateDate> 2005-11-25T15:07:41Z </xmp:CreateDate>
<xmp:ModifyDate> 2005-11-25T15:07:41Z </xmp:ModifyDate>
<field id="allowDiscussion"> 0 </field>
<field id="id"> folder2 </field>
<uid> b321df7ee39c0523131e81420f2fa0b3 </uid>
<cmf:type> ForestTestFolder < 16:07:41"/> </cmf:history> </cmf:workflow> </cmf:workflow_history>
<cmf:security> <cmf:local_role cmf: </cmf:security>
</metadata>

The data in a content file is generated by the "Marshall" product. The <metadata> tag contains information about the different namespaces that are used. You might be missing the "uuns" namespace, since "uuns" is a custom namespace I wrote for exporting and importing binary data. The installation of "uuns" is not explained here, and you might only want to use it if you do have binary data. Namespaces are registered in the "Marshall" product itself (in Marshall/namespaces/__init__.py). Each namespace will put a section of data into the export file. On import the different namespaces are called to deserialize the data into an Archetypes object.

References

We return to the "manifest.xml" file: there is an optional second section for references. This section consists of <rnode> tags. Those nodes represent metadata of references between objects. I have another fragment of data from the doctest "manifest.xml" ready as an example:

<rnode id="ba93aa00fb6e28105b6b7a095f94e961" sourceID="e5fd71c1d33d19f821ad604f2a2b85bf" targetID="97916f68774397d750644744300e9e2f" type="foresttestitems_other"/>

This fragment represents one reference, in this case a reference that has no content object. The attribute "id" is an identifier that is unique within "manifest.xml". The attribute "sourceID" is a reference to another identifier within "manifest.xml".
It is used to declare which of the objects is the source object of a reference. References exist between two objects: one is called the source object, the other one is called the target object. The attribute "targetID" is a reference within "manifest.xml" that refers to the target object. The attribute "type" contains the meta_type of the reference. With the attributes mentioned so far a reference is defined and can be established during an import. In my example here I can show you how to find out which source object is referencing which target object. XMLForest will take the sourceID given to look up the source object in a storage of <hnode> elements it creates during the first (hierarchy) pass. In our case the identifiers used are the UIDs of the Archetypes objects, but when you do a custom import you will not be able to provide that information during your export. To say it in one sentence: "sourceID" refers to an "id" within "manifest.xml", and the identifier will be resolved by performing a lookup. In this example the lookup will take "sourceID", which contains the value "e5fd71c1d33d19f821ad604f2a2b85bf". This value is used to look for an entry in the <hnode> section. It will find the following <hnode> tag:

<hnode dataref="e5fd71c1d33d19f821ad604f2a2b85bf.xml" id="e5fd71c1d33d19f821ad604f2a2b85bf" name="item0" type="ForestTestItem"/>

Since the object has been created in the first pass it will have a new Archetypes UID, and this new UID is kept in the internal storage of XMLForest during the import. The lookup resolves the "sourceID" attribute and finds the corresponding Archetypes object to establish the relation. For "targetID" the same procedure is executed and XMLForest finds the right pair of objects. Once the pair is found, a reference of the "type" given in the <rnode> tag is created. In this example "sourceID" refers to an object with the Zope id "item0", which is an instance of "ForestTestItem".
The attribute "targetID" refers to another object with the Zope id "item1", another "ForestTestItem". XMLForest will create a reference with the type "foresttestitems_other" between those two objects.

Relations

XMLForest also handles contentish references; in this "README.txt" I call them relations. I have an example for an <rnode> element that defines a contentish relation in the doctest, too:

<rnode dataref="cc6f124c0c6c5682f55c7f87801c9d53.xml" id="ec8e3504522190ee905e0c72c5ddfba3" sourceID="8127af64f9fda189d6719b934c1a6596" targetID="d19fdaf004d8ae0749d1a179ac098fef" type="ForestContentReference"/>

As you can see it holds the same information as the <rnode> tag explained before, but now we also find an additional attribute, "dataref". The attribute "dataref" has the same meaning as the "dataref" attributes you find in <hnode> tags. It is used to provide a reference to the data file that contains the information from the content object of the relation. As soon as the relation between the objects is established, XMLForest looks up the content object and fills it with the data from the file defined in "dataref".

7. Further documentation

As mentioned exhaustively in this "README.txt", you will find a lot of hints in the doctest. You find the doctest at "XMLForest/doc/testXMLForestTool.txt".

8. Epilog

During the design of XMLForest I have been discussing the implementation with several people in the Plone community. Kapil was giving me a lot of input; we found that it is necessary to do an import in two passes. We also discussed that it might be useful to provide a "path" attribute, whenever it is possible, to speed up the lookup process for objects. We hope that we have found a stable basis for future XML import/export tasks, such as importing and exporting not just a "tree" of objects, but also a "soup", meaning that we try to provide an XML schema that is also able to hold meta-information of objects organized in a "cloud" and not a hierarchy.
The "path" attribute is reserved for such imports and exports. Further, there are several optional "...UID" attributes that can be used to import e.g. references between Archetypes objects that already live in the ZODB. I hope you now understand the internal behavior of XMLForest, so you are able to create your own custom imports. Feel free to contact me if you have any questions about XMLForest. I would also like to thank Ullrich Eck for his input during the design process of XMLForest. A lot of "thank-yous" go out to Philipp Auersperg, who flattened my path during the implementation of XMLForest. He kept my spirit up! And finally the sponsoring credits go to the ZUCCARO project team of the Bibliotheca Hertziana in Rome, namely Martin Raspe and Georg Schelbert! Regards, Georg Gogo. BERNHARD <gogo@bluedynamics.com>
This article is about using Python in the context of a machine learning or artificial intelligence (AI) system for making real-time predictions, with a Flask REST API. The architecture exposed here can be seen as a way to go from proof of concept (PoC) to minimal viable product (MVP) for machine learning applications. Python is not the first choice one can think of when designing a real-time solution. But as Tensorflow and Scikit-Learn are some of the most used machine learning libraries supported by Python, it is used conveniently in many Jupyter Notebook PoCs. What makes this solution doable is the fact that training takes a lot of time compared to predicting. If you think of training as the process of watching a movie and predicting the answers to questions about it, then it seems quite efficient to not have to re-watch the movie after each new question. Training is a sort of compressed view of that "movie" and predicting is retrieving information from the compressed view. It should be really fast, whether the movie was complex or long. Let's implement that with a quick Flask example in Python!

Generic Machine Learning Architecture

Let's start by outlining a generic training and prediction architecture flow: First, a training pipeline is created to learn about the past data according to an objective function. This should output two key elements:

- Feature engineering functions: the transformations used at training time should be reused at prediction time.
- Model parameters: the algorithm and hyperparameters finally selected should be saved, so they can be reused at prediction time.

Note that feature engineering done during training time should be carefully saved in order to be applicable to prediction. One usual problem among many others that can emerge along the way is feature scaling, which is necessary for many algorithms.
If feature X1 is scaled from value 1 to 1000 and is rescaled to the [0,1] range with a function f(x) = x/max(X1), what would happen if the prediction set has a value of 2000? Some careful adjustments should be thought of in advance so that the mapping function returns consistent outputs that will be correctly computed at prediction time.

Machine Learning Training vs. Predicting

There is a major question to be addressed here. Why are we separating training and prediction to begin with? It is absolutely true that in the context of machine learning examples and courses, where all the data is known in advance (including the data to be predicted), a very simple way to build the predictor is to stack training and prediction data (usually called a test set). Then, it is necessary to train on the "training set" and predict on the "test set" to get the results, while at the same time doing feature engineering on both train and test data, training and predicting in the same and unique pipeline. However, in real-life systems, you usually have training data, and the data to be predicted comes in just as it is being processed. In other words, you watch the movie at one time and you have some questions about it later on, which means answers should be easy and fast. Moreover, it is usually not necessary to re-train the entire model each time new data comes in, since training takes time (it could be weeks for some image sets) and the model should be stable enough over time. That is why training and predicting can be, or even should be, clearly separated on many systems, and this also better reflects how an intelligent system (artificial or not) learns.

The Connection With Overfitting

The separation of training and prediction is also a good way to address the overfitting problem.
In statistics, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may, therefore, fail to fit additional data or predict future observations reliably". Overfitting is particularly seen in datasets with many features, or in datasets with limited training data. In both cases, the data has too much information compared to what can be validated by the predictor, and some of it might not even be linked to the predicted variable. In this case, noise itself could be interpreted as a signal. A good way of controlling overfitting is to train on part of the data and predict on another part on which we have the ground truth. Therefore the expected error on new data is roughly the measured error on that dataset, provided the data we train on is representative of the reality of the system and its future states. So if we design a proper training and prediction pipeline together with a correct split of data, not only do we address the overfitting problem, but also we can reuse that architecture for predicting on new data. The last step would be controlling that the error on new data is the same as expected. There is always a shift (the actual error is always below the expected one), and one should determine what is an acceptable shift, but that's not the topic of this article.

A REST API for Predicting

That's where clearly separating training and prediction comes in handy. If we saved our feature engineering methods and our model parameters, then we can build a simple REST API with these elements. The key here is to load the model and parameters at the API launch. Once launched and stored in memory, each API call triggers the feature engineering calculation and the "predict" method of the ML algorithm. Both are usually fast enough to ensure a real-time response. The API can be designed to accept a unique example to be predicted, or several different ones (batch predictions).
Here is the minimal Python/Flask code that implements this principle, with JSON in and JSON out (question in, answer out):

app = Flask(__name__)

@app.route('/api/makecalc/', methods=['POST'])
def makecalc():
    """
    Function run at each API call
    No need to re-load the model
    """
    # reads the received json
    jsonfile = request.get_json()
    res = dict()
    for key in jsonfile.keys():
        # calculates and predicts
        res[key] = model.predict(doTheCalculation(key))
    # returns a json file
    return jsonify(res)

if __name__ == '__main__':
    # Model is loaded when the API is launched
    model = pickle.load(open('modelfile', 'rb'))
    app.run(debug=True)

Note that the API can be used for predicting from new data, but I don’t recommend using it for training the model. It could be done, but it complicates the model training code and could be more demanding in terms of memory resources.

Implementation Example - Bike Sharing

Let’s take a Kaggle dataset, bike sharing, as an example. Say we are a bike sharing company that wants to forecast the number of bike rentals each day in order to better manage bike maintenance, logistics and other aspects of the business. Rentals mainly depend on the weather conditions, so with a weather forecast, the company could get a better idea of when rentals will peak, and try to avoid maintenance on those days.

First, we train a model and save it as a pickle object, as can be seen in the Jupyter notebook. Model training and performance are not dealt with here; this is just an example for understanding the full process.
Then we write the data transformation that will be done at each API call:

import numpy as np
import pandas as pd
from datetime import date

def doTheCalculation(data):
    data['dayofyear'] = (data['dteday'] -
                         data['dteday'].apply(lambda x: date(x.year, 1, 1))
                         .astype('datetime64[ns]')).apply(lambda x: x.days)
    X = np.array(data[['instant', 'season', 'yr', 'holiday', 'weekday', 'workingday',
                       'weathersit', 'temp', 'atemp', 'hum', 'windspeed', 'dayofyear']])
    return X

This is just the calculation of a variable (day of year) that combines the month and the precise day. There is also a selection of columns, whose respective order has to be kept. We then write the REST API with Flask:

from flask import Flask, request, redirect, url_for, flash, jsonify
from features_calculation import doTheCalculation
import json, pickle
import pandas as pd
import numpy as np

app = Flask(__name__)

@app.route('/api/makecalc/', methods=['POST'])
def makecalc():
    """
    Function run at each API call
    """
    jsonfile = request.get_json()
    data = pd.read_json(json.dumps(jsonfile), orient='index', convert_dates=['dteday'])
    print(data)
    res = dict()
    ypred = model.predict(doTheCalculation(data))
    for i in range(len(ypred)):
        res[i] = ypred[i]
    return jsonify(res)

if __name__ == '__main__':
    modelfile = 'modelfile.pickle'
    model = pickle.load(open(modelfile, 'rb'))
    print("loaded OK")
    app.run(debug=True)

Run this program and it will serve the API on port 5000 by default.
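As a side note, the dayofyear feature computed in doTheCalculation above is just the number of days elapsed since January 1st of the same year. In plain datetime terms (a stand-alone sketch, not part of the service):

```python
from datetime import date

def day_of_year(d):
    # days elapsed since January 1st of the same year
    # (January 1st gives 0, matching the pandas expression above)
    return (d - date(d.year, 1, 1)).days

print(day_of_year(date(2011, 1, 3)))  # 2
```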
If we test a request locally, still with Python:

import requests, json

url = '[]()'
text = json.dumps({
    "0": {"instant": 1, "dteday": "2011-01-01T00:00:00.000Z", "season": 1, "yr": 0, "mnth": 1, "holiday": 0, "weekday": 6, "workingday": 0, "weathersit": 2, "temp": 0.344167, "atemp": 0.363625, "hum": 0.805833, "windspeed": 0.160446},
    "1": {"instant": 2, "dteday": "2011-01-02T00:00:00.000Z", "season": 1, "yr": 0, "mnth": 1, "holiday": 0, "weekday": 3, "workingday": 0, "weathersit": 2, "temp": 0.363478, "atemp": 0.353739, "hum": 0.696087, "windspeed": 0.248539},
    "2": {"instant": 3, "dteday": "2011-01-03T00:00:00.000Z", "season": 1, "yr": 0, "mnth": 1, "holiday": 0, "weekday": 1, "workingday": 1, "weathersit": 1, "temp": 0.196364, "atemp": 0.189405, "hum": 0.437273, "windspeed": 0.248309}})

The request contains all the information that was fed to the model, so our model will respond with a forecast of bike rentals for the specified dates (here we have three of them).

headers = {'content-type': 'application/json', 'Accept-Charset': 'UTF-8'}
r = requests.post(url, data=text, headers=headers)
print(r, r.text)

<Response [200]> {
  "0": 1063,
  "1": 1028,
  "2": 1399
}

That’s it! This service could easily be used in any company’s application, for maintenance planning or for users to be aware of bike traffic, demand, and the availability of rental bikes.

Putting it all Together

The major flaw of many machine learning systems, and especially PoCs, is to mix training and prediction. If they are carefully separated, real-time predictions can be performed quite easily for an MVP, at quite a low development cost and effort with Python/Flask, especially if the PoC was initially developed with Scikit-learn, Tensorflow, or any other Python machine learning library. However, this might not be feasible for all applications, especially applications where feature engineering is heavy, or applications retrieving the closest match, which need to have the latest data available at each call.
In any case, do you need to watch movies over and over to answer questions about them? The same rule applies to machine learning!

Understanding the basics

What is a REST API?
In the context of web services, RESTful APIs are defined by the following aspects: a URL, a media type and an HTTP method (GET, POST, etc.). They can be used as a unified way of exchanging information between applications.

What is machine learning?
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.

What is Tensorflow?
TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks.

What is Scikit-Learn?
Scikit-learn, also sklearn, is a free software machine learning library for the Python programming language.

What is feature engineering?
Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. It augments the available data with additional information relevant to the predicted target.

What is Jupyter?
Jupyter Notebook (formerly IPython Notebooks) is a web-based interactive computational environment supporting the Python programming language.

What is a PoC?
A PoC, or proof of concept, is a first-stage piece of program that demonstrates the feasibility of a project.
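The held-out evaluation described earlier ("train on part of the data and predict on another part on which we have the ground truth") can be sketched in a few lines of plain Python. Everything here is illustrative: a toy dataset and a deliberately crude one-parameter fit, just to show the mechanics of the split.

```python
import random

random.seed(0)
# toy dataset: y is roughly 2 * x plus uniform noise
data = [(x, 2 * x + random.uniform(-1, 1)) for x in range(100)]

random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]   # train on one part of the data...

# crude one-parameter fit: y is approximately slope * x
slope = sum(y for _, y in train) / sum(x for x, _ in train)

# ...and measure the error on the part the model never saw
mae = sum(abs(y - slope * x) for x, y in test) / len(test)
print(round(mae, 3))
```

The mean absolute error measured on test is the kind of number one would then compare against the error observed on genuinely new data.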
https://www.toptal.com/python/python-machine-learning-flask-example
19 10 / 2014 Python global keyword

bar is defined outside the foo scope. After invoking foo, the value of bar has been modified: the assignment inside foo applied to the global variable bar. This may not be obvious at first, until you realize that with the global keyword a function can rebind a module-level variable.

23 5 / 2014 python source fileencoding

Some python source files start with -*- coding: utf-8 -*-. This particular line tells the python interpreter that all the content (byte strings) is utf-8 encoded. Let's see how it affects the code.

uni1.py:

# -*- coding: utf-8 -*-
print("welcome")
print("animé")

output:

➜ code$ python2 uni1.py
welcome
animé

The third line has an accented character which wasn’t explicitly marked as unicode, but thanks to the declared utf-8 encoding it worked. What if the first line was missing?

uni2.py

print("welcome")
print("animé")

output:

code$ python2 uni2.py
  File "uni2.py", line 2
SyntaxError: Non-ASCII character '\xc3' in file uni2.py on line 2, but no encoding declared; see for details

Now python complains that a Non-ASCII character was found, since the default encoding is ASCII. More about source encoding can be found in PEP 263. Always set the encoding in the first or second line of a python file.

13 5 / 2014 How to install externally hosted files using pip

As of writing (12 May 2014) the latest version of pip is 1.5.1. pip doesn’t allow installing packages from non-PyPI-based urls. It is possible to upload a tar or zip or tar.gz file to PyPI, or to specify a download url which points to other sites (example: pyPdf points to). pip considers externally hosted packages insecure. Agreed. This is one of the reasons why I kept using pip 1.4.1. I finally decided to fix this issue. Below is the sample error which pip throws.

(document-converter)➜ document-converter git:(fix_requirements) pip install pyPdf
Downloading/unpacking pyPdf
Could not find any downloads that satisfy the requirement pyPdf
Some externally hosted files were ignored (use --allow-external pyPdf to allow).
Cleaning up...
No distributions at all found for pyPdf
Storing debug log for failure in /Users/kracekumar/.pip/pip.log

(document-converter)➜ document-converter git:(fix_requirements) pip install --allow-external pyPdf
You must give at least one requirement to install (see "pip help install")

(document-converter)➜ document-converter git:(fix_requirements) pip install pyPdf --allow-external pyPdf
Downloading/unpacking pyPdf
Could not find any downloads that satisfy the requirement pyPdf
Some insecure and unverifiable files were ignored (use --allow-unverified pyPdf to allow).
Cleaning up...
No distributions at all found for pyPdf
Storing debug log for failure in /Users/kracekumar/.pip/pip.log

The above method is super confusing and counter-intuitive. The fix is

(document-converter)➜ document-converter git:(fix_requirements) pip install pyPdf --allow-external pyPdf --allow-unverified pyPdf
Downloading/unpacking pyPdf
pyPdf an externally hosted file and may be unreliable
pyPdf is potentially insecure and unverifiable.
Downloading pyPdf-1.13.tar.gz
Running setup.py (path:/Users/kracekumar/Envs/document-converter/build/pyPdf/setup.py) egg_info for package pyPdf
Installing collected packages: pyPdf
Running setup.py install for pyPdf
Successfully installed pyPdf
Cleaning up...

The above method is not used in a production environment. In production it is recommended to do pip install -r requirements.txt.

# requirements.txt
--allow-external pyPdf
--allow-unverified pyPdf
pyPdf
--allow-external xhtml2pdf==0.0.5

pyPdf has two issues, so two flags need to be mentioned in requirements.txt. Since xhtml2pdf requires pyPdf, the --allow-external flag is passed for it. I wish it was possible to pass both switches on the same line; if you do so, pip will ignore it. Now running pip install -r requirements.txt works like a charm (with warnings). Since the current approach is super confusing, there is a discussion. Thanks Ivoz for helping me to resolve this.

07 4 / 2014 How to learn Python ?
Over a period of time, a few people have asked me in meetups and online: "I want to learn python. Suggest me a few ways to learn." Everyone who asked had a different background and different intentions. Before answering the question I try to collect more information about their interest and their previous approaches. Some learnt the basics from codecademy, some attended the beginners session in the Bangpypers meetup. In this post I will cover the general questions asked and my suggested approach.

Q: Suggest some online resources and books to learn python?
A: I suggest three resources: How to think like a computer scientist, Learn Python The Hard Way, and CS101 from Udacity. This is highly subjective because it depends on previous programming experience etc. I have a blog post with a lot of python snippets without explanation (I know, it is like a sea without waves).

Q: It takes too much time to complete the book. I want to learn it soon.
A: I have been programming in python for over 3 years now, and still I don’t know python in depth. You may learn it in six months or in a week. It is the journey which is more interesting than the destination.

Q: How long will it take to learn python?
A: It depends on what you want to learn in python. I learnt python in 3 hours while commuting to college. You should be able to grasp the basic concepts in a few hours. Practice will make you feel confident.

Q: I learnt the basics of Python. Can you give me some problems to solve, and I will get back with solutions?
A: No. I would be glad to help if you are stuck. Learning to solve your own problem is a great way to learn. My first usable python program was to download Tamil songs. I still use the code :-). So find a small problem or project to work on. I would be happy to review the code and give suggestions on it.

Q: I want to contribute to open source python projects. Can you suggest some?
A: Don’t contribute to a project just because you want to; rather, find a library or project which is interesting to you and see if things can be made better.
Your contribution can be as small as fixing a spelling mistake (I have contributed a single-character change). The Linux kernel accepts patches which fix spelling mistakes. Every contribution has its own effect, so contribute if it adds value. In case you are reading this blog post and want to learn python or need help, I would be glad to help.

22 3 / 2014 Stop iteration when condition is met while iterating

We are writing a small utility function called is_valid_mimetype. The function takes a mime_type as an argument and checks if the mime type is one of the allowed types. The code looks like

ALLOWED_MIME_TYPE = ('application/json', 'text/plain', 'text/html')

def is_valid_mimetype(mime_type):
    """Returns True or False.

    :param mime_type string or unicode: HTTP header mime type
    """
    for item in ALLOWED_MIME_TYPE:
        if mime_type.startswith(item):
            return True
    return False

The above code can be refactored into a single line using any.

def is_valid_mimetype(mime_type):
    """Returns True or False.

    :param mime_type string or unicode: HTTP header mime type
    """
    return any([True for item in ALLOWED_MIME_TYPE if mime_type.startswith(item)])

A one-liner. It is awesome, but not performant. How about using next?

def is_valid_mimetype(mime_type):
    """Returns True or False.

    :param mime_type string or unicode: HTTP header mime type
    """
    return next((True for item in ALLOWED_MIME_TYPE if mime_type.startswith(item)), False)

(True for item in ALLOWED_MIME_TYPE if mime_type.startswith(item)) is a generator expression. When ALLOWED_MIME_TYPE is empty or nothing matches, calling next on the exhausted generator would raise an exception. In order to avoid that, False is passed as the default argument to next.

Edit:

def is_valid_mimetype(mime_type):
    """Returns True or False.

    :param mime_type string or unicode: HTTP header mime type
    """
    return any(mime_type.startswith(item) for item in ALLOWED_MIME_TYPE)

Cleaner than the versions above.

07 3 / 2014 Find n largest and smallest numbers in an iterable

Python has a sorted function which sorts an iterable in ascending or descending order.
# Sort descending
In [95]: sorted([1, 2, 3, 4], reverse=True)
Out[95]: [4, 3, 2, 1]

# Sort ascending
In [96]: sorted([1, 2, 3, 4], reverse=False)
Out[96]: [1, 2, 3, 4]

sorted(iterable, reverse=True)[:n] will yield the first n largest numbers. There is an alternate way. Python has heapq which implements the heap data structure. heapq has the functions nlargest and nsmallest, which take n (the number of elements), an iterable (like a list, dict, tuple or generator) and an optional argument key.

In [85]: heapq.nlargest(10, [1, 2, 3, 4,])
Out[85]: [4, 3, 2, 1]

In [88]: heapq.nlargest(10, xrange(1000))
Out[88]: [999, 998, 997, 996, 995, 994, 993, 992, 991, 990]

In [89]: heapq.nlargest(10, [1000]*10)
Out[89]: [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000]

In [99]: heapq.nsmallest(3, [-10, -10.0, 20.34, 0.34, 1])
Out[99]: [-10, -10.0, 0.34]

Let’s say marks is a list of dictionaries containing students’ marks. With heapq it is possible to find the highest and lowest mark in a subject.

In [113]: marks = [{'name': "Ram", 'chemistry': 23}, {'name': 'Kumar', 'chemistry': 50}, {'name': 'Franklin', 'chemistry': 89}]

In [114]: heapq.nlargest(1, marks, key=lambda mark: mark['chemistry'])
Out[114]: [{'chemistry': 89, 'name': 'Franklin'}]

In [115]: heapq.nsmallest(1, marks, key=lambda mark: mark['chemistry'])
Out[115]: [{'chemistry': 23, 'name': 'Ram'}]

heapq can be used for building a priority queue.

Note: IPython is used in the examples, where In [114]: means input line number 114 and Out[114] means output line number 114.

27 2 / 2014 Counting elements with dictionary

08 2 / 2014 Updating

After modifying a single field on a User instance and calling u.save(), connection.queries shows an UPDATE that rewrites every column:

[{u'sql': u'UPDATE "auth_user" SET ..., "is_superuser" = true, "username" = \'kracekumar\', "first_name" = \'kracekumar\', "last_name" = \'\', "email" = \'me@kracekumar.com\', "is_staff" = true, "is_active" = true, "date_joined" = \'2014-01-30 18:41:18.174353+00:00\' WHERE "auth_user"."id" = 1 ', u'time': u'0.001'}]

Not happy. Honestly it should be UPDATE auth_user SET first_name = 'kracekumar' WHERE id = 1.
Django should ideally update only the modified fields. The right way to do it is

In [23]: User.objects.filter(id=u.id).update(first_name="kracekumar")
Out[23]: 1

In [24]: connection.queries
Out[24]: [...
 {u'sql': u'UPDATE "auth_user" SET "first_name" = \'kracekumar\' WHERE "auth_user"."id" = 1 ', u'time': u'0.001'}]

Yay! Though both queries took the same amount of time, the latter is better.

Edit: There is one more clean way to do it.

In [60]: u.save(update_fields=['first_name'])

In [61]: connection.queries
Out[61]: [...
 {u'sql': u'UPDATE "auth_user" SET "first_name" = \'kracekumar\' WHERE "auth_user"."id" = 1 ', u'time': u'0.001'}]

26 12 / 2013 introduction to python

This is the material which I use for teaching python to beginners.

tl;dr: Very minimal explanation, more code.

Python?
- Interpreted language
- Multiparadigm

Introduction

hasgeek@hasgeek-MacBook:~/codes/python/hacknight$ python
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> >>> print "Let's learn Python" Let's learn Python Numbers >>> 23 + 43 66 >>> 23 - 45 -22 >>> 23 * 45 1035 >>> 23 ** 4 279841 >>> 23 / 4 5 >>> 23 / 4.0 5.75 >>> 7 % 2 1 Expressions >>> 3 < 2 False >>> 3 > 2 True >>> 3 > 2 < 1 False >>> (3 > 2) and (2 < 1) False >>> 3 > 2 > 1 > 0 True >>> (3 > 2) and (2 > 1) and (1 > 0) True >>> 1 or 2 1 >>> 2 or 1 2 >>> 1 + 2 + 3 * 4 + 5 20 1 + 2 + 3 * 4 + 5 ↓ 3 + 3 * 4 + 5 ↓ 3 + 12 + 5 ↓ 15 + 5 ↓ 20 >>> "python" > "perl" True >>> "python" > "java" True Variables >>> a = 23 >>> print a 23 >>>>> print a Python Guess the output True = False False = True print True, False print 2 > 3 Parallel Assignment >>> language, version = "Python", 2.7 >>> print language, version Python 2.7 >>> x = 23 >>> x = 23 >>> y = 20 >>> x, y = x, x + y >>> print x, y 23 43 Guess the output z, y = 23, z + 23 a, b = 23, 12, 20 a = 1, 2 Swap Variable >>> x = 12 >>> y = 21 >>> x, y = y, x >>> print x, y 21 12 >>> String >>>>> print language Python >>>>> print language Python >>>>> print language Python >>>. ... """ >>> print. >>> Guess output name = "krace" + "kumar" print name print name[0] name[0] = "K" Guess output print 1 + 2.5 print "kracekumar" + 23 Condition Write a program to find greatest of two numbers. >>> a = 12 >>> b = 23 >>> if a > b: ... print "a is greater than b" ... else: ... print "b is greater than a" ... b is greater than a >>> if a > 0: ... print "a is positive" ... elif a == 0: ... print "a is zero" ... elif a < 0: ... print "a is negative" ... a is positive Data Structure List List is a collection of heterogenous data types like integer, float, string. 
>>> a = [1, 2, 3] >>> b = ["Python", 2.73, 3] >>> len(a) 3 >>> len(b) 3 >>> a[0] 1 >>> a[-1] 3 >>> b[2] 3 >>> [1, 2] + [3, 4] [1, 2, 3, 4] >>> all = [a, b] >>> all[0] [1, 2, 3] >>> all[-1] ['Python', 2.73, 3] >>> all[3] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: list index out of range >>> all.append("Bangalore") >>> all [[1, 2, 3], ['Python', 2.73, 3], 'Bangalore'] >>> del all[-1] >>> all [[1, 2, 3], ['Python', 2.73, 3]] >>> all[1] = "insert" >>> all [[1, 2, 3], 'insert'] >>> all [[1, 2, 3], 'insert'] >>> 'insert' in all True >>> range(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> range(10, 2) [] >>> range(10, 0, -1) [10, 9, 8, 7, 6, 5, 4, 3, 2, 1] >>> range(0, 12, 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] range() -> `range([start,] stop[, step]) -> list of integers` Slicing >>> l = [1, 2, 3, 4, 5, 6, 7] [1, 2, 3, 4, 5, 6, 7] >>> l[:2] #first two elements [1, 2] >>> l[2:] #exclude first two elements [3, 4, 5, 6, 7] >>> l[::2] #every second element [1, 3, 5, 7] >>> l[::1] #every element [1, 2, 3, 4, 5, 6, 7] >>> l[::3] #every third element [1, 4, 7] >>> l[::10] #every tenth element [1] >>> l[::-1] [7, 6, 5, 4, 3, 2, 1] Guess the output l[1:7:2] [][:2] [1][:2] Accessing list elements >>> for item in all: ... print item ... [1, 2, 3] insert >>> for number in range(10): ... print number ... 0 1 2 3 4 5 6 7 8 9 Find all odd numbers from 0 to 9 >>> for number in range(0, 10): ... if number % 2: ... print number ... 
1 3 5 7 9 inbuilt functions >>> help([]) >>> min([1, 2, 3]) 1 >>> max([1, 2, 3]) 3 >>> sum([1, 2, 3]) 6 >>> pow(2, 3) 8 Write program which takes a number as input and if number is divisible by 3, 5 print Fizz, Buzz, FizzBuzz respectively import sys if __name__ == "__main__": if len(sys.argv) == 2: number = int(sys.argv[1]) if number % 15 == 0: print "FizzBuzz" elif number % 3 == 0: print "Fizz" elif number % 5 == 0: print "Buzz" else: print number else: print "python filename.py 23 is the format" Tuples Tuple is a sequence type just like list, but it is immutable. A tuple consists of a number of values separated by commas. >>> t = (1, 2) >>> t (1, 2) >>> t[0] 1 >>> t[0] = 1.1 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment >>> t = 1, 2 >>> t (1, 2) >>> del t[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object doesn't support item deletion >>> for item in t: ... print item ... 1 2 Sets Sets are unordered collection of unique elements. >>> x = set([1, 2, 1]) >>> x set([1, 2]) >>> x.add(3) >>> x set([1, 2, 3]) >>> x = {1, 3, 4, 1} >>> x set([1, 3, 4]) >>> 1 in x True >>> -1 in x False >>> Again Lists >>> even_numbers = [] >>> for number in range(0, 9): ... if number % 2 == 0: ... even_numbers.append(number) ... >>> even_numbers [0, 2, 4, 6, 8] As a programmer your job is write lesser code List Comprehensions >>> [x for x in range(10)] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> [x + 1 for x in range(10)] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> numbers = [] >>> for x in range(10): ... numbers.append(x + 1) ... 
>>> print numbers [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> even_numbers = [x for x in range(10) if x %2 == 0] >>> even_numbers [0, 2, 4, 6, 8] >>> [(x, y) for x in range(5) for y in range(5) if (x+y)%2 == 0] [(0, 0), (0, 2), (0, 4), (1, 1), (1, 3), (2, 0), (2, 2), (2, 4), (3, 1), (3, 3 ), (4, 0), (4, 2), (4, 4)] >>> Dictionaries >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d['a'] 1 >>> d.get('a') 1 >>> d['z'] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'z' >>> d.get('z') >>> >>> d['a'] = 2 >>> d {'a': 2, 'c': 3, 'b': 2} >>> d['z'] = 26 >>> d {'a': 2, 'c': 3, 'b': 2, 'z': 26} >>> d.keys() ['a', 'c', 'b', 'z'] >>> d.values() [2, 3, 2, 26] >>> d.items() [('a', 2), ('c', 3), ('b', 2), ('z', 26)] >>> type(d.items()) <type 'list'> >>> d = {'a': 2, 'b': 2, 'c': 3, 'z': 26} >>> for key in d: ... print key ... a c b z >>> for key, value in d.items(): ... print key, value ... a 2 c 3 b 2 z 26 >>> 'a' in d True >>> d.has_key('a') True Function Just like a value can be associated with a name, a piece of logic can also be associated with a name by defining a function. >>> def square(x): ... return x * x ... >>> square(2) 4 >>> square(2+1) 9 >>> square(x=5) 25 >>> def dont_return(name): ... print "Master %s ordered not to return value" % name ... >>> dont_return("Python") Master Python ordered not to return value >>> def power(base, to_raise=2): ... return base ** to_raise ... >>> power(3) 9 >>> power(3, 3) 27 >>> def power(to_raise=2, base): ... return base ** to_raise ... File "<stdin>", line 1 SyntaxError: non-default argument follows default argument >>> square(3) + square(4) 25 >>> power(base=square(2)) 16 >>> def sum_of_square(x, y): ... return square(x) + square(y) ... >>> sum_of_square(2, 3) 13 >>> s = square >>> s(4) 16 >>> def fxy(f, x, y): ... return f(x) + f(y) ... >>> fxy(square, 3, 4) 25 Methods - Methods are special kind of functions that work on an object. 
>>>>> type(lang) <type 'str'> >>> dir(lang) ['_'] >>> lang.upper() 'PYTHON' >>> help(lang.upper) >>> lang.startswith('P') True >>> help(lang.startswith) >>> lang.startswith('y', 1) True Files >>> f = open('foo.txt', 'w') >>> help(f) >>> f.write("First line") >>> f.close() >>> f = open('foo.txt', 'r') >>> f.readline() 'First line' >>> f.readline() '' >>> f = open('foo.txt', 'a') >>> f.write('Second line') >>> f.close() >>> f = open('foo.txt', 'r') >>> f.readline() 'First lineSecond line' >>> f = open('foo.txt', 'a') >>> f.write("New line\n") >>> f.write("One more new line") >>> f.close() >>> f = open('foo.txt', 'r') >>> f.readline() 'First lineSecond lineNew line\n' >>> f.readline() 'One more new line' >>> f.readline() '' >>> f.close() >>> f = open('foo.txt') >>> f.readlines() ['First lineSecond lineNew line\n', 'One more new line'] >>> f = open('foo.txt', 'w') >>> f.writelines(["1\n", "2\n"]) >>> f.close() >>> f.readlines() >>> f = open('foo.txt') >>> f.readlines() ['1\n', '2\n'] >>> f.close() Exception Handling >>> f = open('a.txt') Traceback (most recent call last): File "<stdin>", line 1, in <module> IOError: [Errno 2] No such file or directory: 'a.txt' >>> try: ... f = open('a.txt') ... except: ... print "Exception occured" ... Exception occured >>> try: ... f = open('a.txt') ... except IOError, e: ... print e.message ... >>> e IOError(2, 'No such file or directory') >>> dir(e) ['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__getslice__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', 'args', 'errno', 'filename', 'message', 'strerror'] >>> e.strerror 'No such file or directory' >>> try: ... print l[4] ... except IndexError, e: ... print e ... 
list index out of range >>> raise Exception("error message") Traceback (most recent call last): File "<stdin>", line 1, in <module> Exception: error message >>> try: ... print "a" ... raise Exception("doom") ... except: ... print "b" ... else: ... print "c" ... finally: ... print "d" ... a b d Object Oriented Programming >>> class BankAccount: def __init__(self): self.balance = 0 def withdraw(self, amount): self.balance -= amount return self.balance def deposit(self, amount): self.balance += amount return self.balance >>> a = BankAccount() >>> b = BankAccount() >>> a.deposit(200) 200 >>> b.deposit(500) 500 >>> a.withdraw(20) 180 >>> b.withdraw(1000) -500 >>> class MinimumBalanceAccount(BankAccount): ... def __init__(self, minimum_balance): ... BankAccount.__init__(self) ... self.minimum_balance = minimum_balance ... ... def withdraw(self, amount): ... if self.balance - amount < self.minimum_balance: ... print "Sorry, you need to maintain minimum balance" ... else: ... return BankAccount.withdraw(self, amount) >>> a = MinimumBalanceAccount(500) >>> a <__main__.MinimumBalanceAccount instance at 0x7fa0bf329878> >>> a.deposit(2000) 2000 >>> a.withdraw(1000) 1000 >>> a.withdraw(1000) Sorry, you need to maintain minimum balance >>> class A: ... def f(self): ... return self.g() ... def g(self): ... return "A" ... >>> a = A() >>> a.f() 'A' >>> a.g() 'A' >>> class A: ... def __init__(self): ... self._protected = 1 ... self.__private = 2 ... >>> a = A() >>> a._protected 1 >>> a.__private Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: A instance has no attribute '__private' Sample Python Program #! 
/usr/bin/env python # -*- coding: utf-8 -*- class BankAccount: def __init__(self): self.balance = 0 def withdraw(self, amount): self.balance -= amount return self.balance def deposit(self, amount): self.balance += amount return self.balance class MinimumBalanceAccount(BankAccount): def __init__(self, minimum_balance): BankAccount.__init__(self) self.minimum_balance = minimum_balance def withdraw(self, amount): if self.balance - amount < self.minimum_balance: print "Sorry, you need to maintain minimum balance" else: return BankAccount.withdraw(self, amount) def __repr__(self): return "MinimuBalanceAccount, Balance: %d" %(self.balance) if __name__ == "__main__": a = MinimumBalanceAccount(500) print a.deposit(5000) print a.withdraw(4500) print a.withdraw(500) Few examples are taken from python practice book. Github repo: 25 12 / 2013 Deploying full fledged flask app in production This article will focus on deploying flask app starting from scratch like creating separate linux user, installating database, web server. Web server will be nginx, database will be postgres, python 2.7 middleware will be uwsgi, server ubuntu 13.10 x64. Flask app name is fido. Demo is carried out in Digital ocean. Step 1 - Installation Python header root@fido:~# apt-get install -y build-essential python-dev Install uwsgi dependencies root@fido:~# apt-get install -y libxml2-dev libxslt1-dev Nginx, uwsgi root@fido:~# apt-get install -y nginx uwsgi uwsgi-plugin-python Start nginx root@fido:~# service nginx start * Starting nginx nginx [ OK ] Postgres root@fido:~# apt-get install -y postgresql postgresql-contrib libpq-dev Step 2 - User Create a new linux user fido root@fido:~# adduser fido Enter all the required details. root@fido:~# ls /home fido Successfully new user is created. Grant fido root privilege. root@fido:~# /usr/sbin/visudo # User privilege specification root ALL=(ALL:ALL) ALL fido ALL=(ALL:ALL) ALL Since fido is not normal user, delete fido’s home directory. 
root@fido:~# rm -rf /home/fido root@fido:~# ls /home root@fido:~# Create a new db user fido root@fido:~# su - postgres postgres@fido:~$ createuser --pwprompt Enter name of role to add: fido Enter password for new role: Enter it again: Shall the new role be a superuser? (y/n) y --pwprompt will prompt for password. release is the password I typed (we need this to connect db from app). Create a new database fido postgres@fido:~$ createdb fido; postgres@fido:~$ psql -U fido -h localhost Password for user fido: psql (9.1.10) SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256) Type "help" for help. fido=# \d No relations found. Done. New database role fido and database is created. We are successfully able to login. Step 3 - Python dependencies Install pip root@fido:# cd /tmp root@fido:/tmp# wget root@fido:/tmp# python ez_setup.py install root@fido:/tmp# easy_install pip # What is easy_install ? Python package manager. # what is pip ? Python package manager. # How to install pip ? easy_install pip. # Shame on python :-( ... Installed /usr/local/lib/python2.7/dist-packages/pip-1.4.1-py2.7.egg Processing dependencies for pip Finished processing dependencies for pip Install virtualenv root@fido:/tmp# pip install virtualenv Step 4 - Install app dependencies Here is the sample app code. The app is just for demo. The app will be placed in /var/www/fido. Normally in production, this will be similar to git clone <url> or hg clone <url> inside directory. Make sure you aren’t sudo while cloning. root@fido:/tmp# cd /var root@fido:/var# mkdir www root@fido:/var# mkdir www/fido Change the owner of the repo to fido. root@fido:/var# chown fido:fido www/fido root@fido:/var# ls -la www/ total 12 drwxr-xr-x 3 root root 4096 Dec 25 03:18 . drwxr-xr-x 14 root root 4096 Dec 25 03:18 .. drwxr-xr-x 2 fido fido 4096 Dec 25 03:18 fido app.py - fido application. 
# /usr/bin/env python from flask import Flask, request from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = "postgres://fido:release@localhost:5432/fido" db = SQLAlchemy(app) class Todo(db.Model): id = db.Column(db.Integer(), nullable=False, primary_key=True) name = db.Column(db.UnicodeText(), nullable=False) status = db.Column(db.Boolean(), default=False, nullable=True) @app.route("/") def index(): return "Index page. Use /new create a new todo" @app.route('/new', methods=['POST']) def new(): form = request.form name, status = form.get('name'), form.get('status') or False todo = Todo(name=name, status=status) db.session.add(todo) db.session.commit() return "Created todo: {}".format(name) if __name__ == "__main__": db.create_all() app.run('0.0.0.0', port=3333, debug=True) Add a wsgi file website.py root@fido:/var/www/fido# cat website.py import sys import os.path sys.path.insert(0, os.path.dirname(__file__)) from app import app as application Files in fido directory. root@fido:/var/www/fido# tree . . ├── app.py ├── __init__.py └── website.wsgi 0 directories, 3 files Step 5 - Virtual env and dependencies root@fido:/var/www/fido# virtualenv --no-site-packages env root@fido:/var/www/fido# . env/bin/activate (env)root@fido:/var/www/fido# pip install flask sqlalchemy flask-sqlalchemy psycopg2 Step 6 - final setup Create uwsgi config file # Add following lines to fido.ini file root@fido:/etc# cat uwsgi/apps-enabled/fido.ini [uwsgi] socket = 127.0.0.1:5000 threads = 2 master = true uid = fido gid = fido chdir = /var/www/fido home = /var/www/fido/env/ pp = .. module = website Check whether uwsgi is booting up properly. root@fido:/var/www/fido# uwsgi --ini /etc/uwsgi/apps-enabled/fido.ini ... ... 
Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1]
Set PythonHome to /var/www/fido/env/
Python main interpreter initialized at 0xb96a70
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 165920 bytes (162 KB) for 2 cores
*** Operational MODE: threaded ***
added ../ to pythonpath.
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xb96a70 pid: 17559 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 17559)
spawned uWSGI worker 1 (pid: 17562, cores: 2)

uwsgi is able to load the app without any issues. Kill the uwsgi using a keyboard interrupt. Now let's create the table.

root@fido:/var/www/fido# . env/bin/activate
(env)root@fido:/var/www/fido# python app.py
 * Running on
 * Restarting with reloader

Exit the program; db.create_all() must have created the table. Normally in a production environment, it is advised to use python manage.py db create or any similar approach.

Configure nginx

root@fido:/var/www/fido# cat /etc/nginx/sites-enabled/fido.in
upstream flask {
    server 127.0.0.1:5000;
}

# configuration of the server
server {
    # the domain name it will serve for
    listen 127.0.0.1;  # This is very important to test the server locally
    server_name fido.in;  # substitute your machine's IP address or FQDN
    charset utf-8;

    location / {
        uwsgi_pass flask;
        include uwsgi_params;
    }
}

Now nginx and uwsgi will be running in the background. Restart them.

root@fido:/var/www/fido# service nginx restart
 * Restarting nginx nginx [ OK ]
root@fido:/var/www/fido# service uwsgi restart
 * Restarting app server(s) uwsgi [ OK ]

Machine name is fido, so let's try curl.

root@fido:/var/www/fido# curl
Index page. Use /new to create a new todo

Create a new task.
root@fido:/var/www/fido# curl --data "name=write blog post about flask deployment"
Created todo: write blog post about flask deployment

We have successfully deployed flask + uwsgi + nginx. Since we installed uwsgi from the ubuntu repo, it is started as an upstart process; that is why we issue commands like service uwsgi restart. To see all upstart services, try service --status-all. If you are running multiple web applications on a single server, create one user per application.
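The curl --data call above sends a urlencoded form body, which is what request.form parses in the /new handler. The target URL was elided in the extracted post, so this sketch only shows the body encoding (it uses Python 3's urllib.parse for illustration, although the post itself targets Python 2.7):

```python
# Build the same form body that curl --data sends to the /new endpoint.
from urllib.parse import urlencode

body = urlencode({"name": "write blog post about flask deployment"})
print(body)  # name=write+blog+post+about+flask+deployment
```

Flask then exposes this as request.form.get('name') on the server side.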
The BPM UI Coach View set

IBM BPM has progressed through many iterations over the many years of its existence, from the original Lombardi Teamworks, through IBM WebSphere Lombardi Edition, to today's IBM BPM offerings. Throughout that time, changes have been made relating to how a developer creates screen definitions (coaches). The single largest change occurred with the arrival of the Coach View technology. Coach Views provided a "composable" set of building blocks that could be used to construct the screens. The Coach Views provided by IBM included simple text inputs, checkboxes, tables and a few others. The secret sauce of the Coach View technology was that customers and other vendors could build their own Coach Views, which then became additional components in the palette of available building blocks used to construct the screens.

Over the years, IBM steadfastly held to the belief that the simple set of Coach Views provided with the product was sufficient and that there were enough additional Coach Views in the public domain and from third party vendors that IBM need not invest additional effort producing richer or more numerous instances. Of the multiple vendors producing Coach View sets, one stood out: the SPARK UI set from Salient Process. IBM examined the available Coach View sets from a variety of vendors and chose SPARK UI for acquisition. After the purchase and a period of harmonizing the SPARK UI set with IBM's core look and feel, the release of 8.6.0 saw the arrival of a Coach View set that IBM calls BPM UI. To all intents and purposes, this is the SPARK UI set of Coach Views, now exclusively owned by IBM and distributed with the base BPM product. The super rich and numerous Coach Views now available with BPM that were previously known as SPARK UI are the core of the remainder of this chapter.

Configuration

Each SPARK UI Coach View will have its own configuration section.
Some aspects of the configuration will be specific to the type of Coach View while others will be common. The configuration features are broken into categories:

- Formula
- Behavior
- Appearance
- Performance
- Responsive
- Events

The Control tree

Within a Coach, controls can be considered to exist in a hierarchical structure just like a file system tree. The top of the page is called the root, and every other control is either a child of root or a child of some other control which will eventually be traceable back up to the root. Every control can be classed as either a container or an element. A container is distinct from an element in that the container may have child controls while an element may not.

An addressing scheme is available which allows us to locate any individual control in the tree. If an address starts with "/" then we are navigating from the root downwards. If the address does not start with a "/" then we are working contextually, relative to some reference control. Since each control is a child of some other control (except for the root), we can think of the notion of a parent of a control. If we are navigating relative to a control, we can reference its parent with the string "..". A convenience is that a control that shares the same parent (i.e. a sibling control) can be referenced with just the name of the control.

To access a specific control using an absolute path, we can use the function:

page.ui.get(<address>)

For example, page.ui.get("/Text1") will return the control called "Text1" that is an immediate child of the root. To determine the path name of a view, we can use view.ui.getAbsoluteName().

Events

The Events category defines event types that the Coach View publishes when something interesting happens, such as a user interaction. When an event fires, JavaScript code that is associated with the event entry is called. This JavaScript can be inline or can be a call to a JavaScript function that is within scope.
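A minimal sketch of how such an event entry might be dispatched. This is an assumed model, not the product implementation: the configured handler is a JavaScript fragment, and the runtime evaluates it with the "me" and "view" variables (described below) placed in scope:

```javascript
// Illustrative only: evaluate an inline event fragment with "me" (the control
// that issued the event) and "view" (its parent) bound, roughly as the Coach
// runtime does for an event entry.
function fireEvent(handlerSource, me) {
  const view = me.parent;
  // Turn the configured source fragment into a callable with me/view in scope.
  const handler = new Function("me", "view", handlerSource);
  return handler(me, view);
}

const button = { name: "Button1", parent: { name: "Panel1" } };
const result = fireEvent("return me.name + ' inside ' + view.name;", button);
console.log(result); // Button1 inside Panel1
```

The same mechanism covers both inline code and a call to a function that is within scope, since either is a valid JavaScript fragment.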
Let us look at an example. Imagine we have a button Coach View that looks as follows on the screen:

In the Events of that Coach View, we can define the following:

Notice that we have coded "On Click". This says that the code will be executed when the button is clicked. This code is inline code. We can also code up a JavaScript function in a Custom HTML Coach View that might be:

<script>
function buttonClicked(cv) {
   alert("Button clicked!");
}
</script>

And then in the "On Click" handler we can define:

as the code to be executed.

Formulas

To use the value of a control we can use:

${<name>}.getValue()

This is so commonly performed that a shorthand is available of the form:

@{Name}

Scripting

When scripting views, we can refer to a view by its identity. We have a couple of techniques for referencing the view:

- page.ui.get({name})
- ${name}
- me

When an event is called, there is a context in which that event executes. It includes the following variables:

- me - The control that issued the event.
- view - The parent of the control that issued the event.

Layout

The layout containers are:

- Well
- Panel
- Collapsible Panel
- Tab Section
- Stack
- Modal Section
- Horizontal
- Vertical
- Table Layout/Row/Cell
- Horizontal Split

Icons

Icons are available in the SPARK UI and are based upon the "fontawesome.io" set that can be found here.

Charts

Making REST Calls from a Coach

The Controls

Alerts

The Alerts control is responsible for showing alerts. To show an alert, a method called appendAlert() must be called on the alert control instance. The signature of appendAlert() is:

appendAlert(title, text, style, timeout, id, data)

See also:

Area Chart

See also:

Badge

The badge can have three shapes:

See also:

Bar Chart

See also:

Breadcrumbs

At runtime, we would normally call "appendItem()" to add a new level to the breadcrumb trail.
A default view looks as follows:

See also:

Button

Here are some examples:

- Default
- Rounded
- Flat
- Default with icon

See also:

Caption Box

The caption box encloses a single additional SPARK UI control and adds a specialized label around it. The position and style of the label can be highly customized.

See also:

Check Box

See also:

Checkbox Group

The isChecked() method returns true or false depending on whether or not the check box is checked.

See also:

Collapsible Panel

The Collapsible Panel is a container which can be collapsed down to a single entry and re-opened again. A group of collapsible panels can be defined that are linked together. When one is opened, all the others close, ensuring there is only one open at a time.

See also:

Configuration

This control provides SPARK UI diagnostics/debugging capabilities.

See also:

Data

The Data control provides a binding between BPM data (teamworks data) and SPARK UI data. It is a non-visual control and hence is used exclusively in JavaScript, most commonly in event handling. The control should be bound to a human service variable. The control exposes setData() and getData() to set and get the value of the variable.

See also:

Data Export

The Data Export control allows one to export data as an Excel spreadsheet or as a CSV document.

See also:

Date Picker

Pick a date (not a time nor a date and time).

See also:

Decimal

See also:

Deferred Section

It takes time for a set of Coach Views to load into a Coach. We may wish them not to be loaded immediately but instead deferred to a point in the future.

See also:

Device Sensor

Determine the nature of the device (browser, platform etc.) that the Coach is running upon and be able to use that information to control the appearance of the Coach.

See also:

Donut Chart

See also:

Dual List

See also:

Event Subscription

See also:

Exit Safeguard

Prompt the user for a confirmation before closing the Human Service browser window/tab.
See also:

Geo Coder

See also:

Geo Location

See also:

Horizontal Layout

See also:

Horizontal Split

See also:

Icon

The icon can be one of the font-awesome icons.

See also:

Image

See also:

Input Group

See also:

Integer

See also:

Line

See also:

Line Chart

See also:

Link

See also:

Map

See also:

Masked Text

This control provides an input field which constrains the format of the data entered. The data that has been entered into the field can be retrieved through "getMaskedText()".

See also:

Modal Alert

See also:

Modal Section

The modal section provides a dialog box that can contain other controls. A Coach user can't interact with controls not in the dialog box until the dialog is disposed. This is what is known as "modal" interaction. When working with the Modal Section in Web PD, it has the following appearance:

Notice that there doesn't appear to be anywhere to drop its contained Coach Views. If we click on the control within Web PD, we will find that it expands:

Now we can place content within it. Clicking again on the "red X" at the top left will once again collapse it.

We should set the visibility of the Modal Section to "None", otherwise it will be visible when the page is initially loaded … and this isn't what one would normally want. However, we then seem to have a new problem. If we set the "Visibility" of the control to be "None", it disappears in the PD editor. This feels like a PD bug. To work around this, instead of explicitly setting the visibility to none, we can specify a script rule that returns "NONE".

A good candidate for a child of the Modal Section is a Panel. By using a Panel we get header and content areas that look very much like a dialog.
If we set the icon of the Panel to be "close" then we get a pleasing close button:

On the Panel control, we can catch a click on the close button via an event, which we can then use to hide the dialog:

Another useful addition is to place a Panel Footer in the panel, and in the panel footer we can place a button. This provides an almost identical look and feel to what users expect from a dialog box. Placing a button in a footer makes it appear on the left of the dialog; if we want it on the right, add a horizontal layout that is tight and right justified and place the button within that.

See also:

Multi Purpose Chart

See also:

Multi Select

The Item List property in the Items section defines a list from which the items will be chosen. To determine when an item selection has changed, we can use the "On Change" event. We can retrieve a list of selected items using the "getSelectedItems()" method. Here is an example of usage.

The data for the select comes from name/value pairs where the name is shown as the label in the list and the value is the corresponding value. The data can come from either a static description, a service call or a configuration option.

See also:

Navigation Event

See also:

Note

Display a visible "note" to the Coach user.

See also:

Notification

See also:

Output Text

This control provides an output text field.

See also:

Panel

The Panel is a container control that provides a header and a content area. The header can also have an icon which appears on the right. A click on the icon can be captured through an event.
See also:
- Panel Footer
- Panel Header

Panel Footer

See also:
- Panel
- Panel Header

Panel Header

See also:
- Panel
- Panel Footer

See also:

Pie Chart

See also:

Places

See also:

Popup Menu

- Command (command (String))
- Item Type (itemType (MenuItemType))
- Icon (icon (String))
- Item Text (itemText (String))
- Badge shape (badgeShape (MenuBadgeShape))
- Badge color (badgeColor (TooltipColorStyle))
- Badge text (badgeText (String))

See also:

Progress Bar

See also:

QR Code

See also:

Radio Button

See also:

Radio Button Group

It is not uncommon to want to have a radio button be part of a group of radio buttons such that when one is selected, any others in the group become unselected. Here is an example:

The data for the group comes from name/value pairs where the name is shown as the label of the radio button and the value is the corresponding value. The data can come from either a static description, a service call or a configuration option.

See also:

Responsive Sensor

See also:

Service Call

We use this view to define a BPM hosted service that can be invoked from the Human Service. An instance of this view has an "execute()" method that, when invoked, will result in the execution of the corresponding bound service. Parameters can be passed as input. For example:

${MyService}.execute(data)

The signature of the target service must have:

Input:
- data (ANY)

Output:
- results (ANY)
- error (AjaxError)

Within Web PD, when we create a new service, what is created is an External Service and not an AJAX service. This seems to work but results in a "not authorized" error when called. We can fix that by visiting the Overview tab of the service and changing its exposed configuration.

In the callback, we can use the getResult() method to retrieve the returned data. In the callback we can code @functionName to invoke a function in our code.
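The execute()/callback flow can be sketched with mock objects. Everything here is invented for illustration (the real control binds its service and configures its callback in the Coach, not through an execute() argument); the point is only the shape of the interaction, where the callback reads the returned data via getResult():

```javascript
// Mock of a Service Call-like object: execute() runs a bound service with the
// input data, then hands the callback a context exposing getResult().
function makeServiceCall(serviceFn) {
  return {
    execute: function (data, callback) {
      const result = serviceFn(data);
      callback({ getResult: function () { return result; } });
    }
  };
}

// A stand-in for the bound BPM service.
const myService = makeServiceCall(function (data) {
  return "echo:" + data;
});

let received = null;
myService.execute("hello", function (ctx) {
  received = ctx.getResult();
});
console.log(received); // echo:hello
```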
See also:

Service Data Table

Very similar to the Table control, this control calls a back-end service and, when it returns, uses the data returned from it to populate the table. We can call refresh(true) to cause the service to be called and the table populated.

See also:

Signature

See also:

Single Select

See also:

Slider

See also:

Spacer

See also:

Stack

See also:

Status Box

Attach a status box to another control.

See also:

Step Chart

See also:

Style

See also:

Switch

We can ask the switch whether it is "on" or "off" using the "isChecked()" method. If the switch is on, the return is true; otherwise, if the switch is off, the return is false.

See also:

Tab Section

See also:

Table

The columns property of the table is an array of TableColumn. We can obtain the selected record using getSelectedRecord().

Imagine we have a button in a row in the table. When the button is clicked, we want to do something related to the row that contained the button. How can we determine which row contained the button that was clicked?

table.getRow(me)

Once we know the row, we can get the data using getData(). Imagine we have obtained (or calculated) a new set of data to be shown in the table. How do we visualize that data?

See also:

Table Layout Cell

See also:

Table Layout Row

See also:

Text

In the event handling:

- On Change provides environmental data as the variables "newText" and "oldText".
- On Input provides environmental data as the variables "current" and "potential". The potential is the complete new text.

See also:

Text Area

See also:

Text Editor

See also:

Text Reader

See also:

Timer

See also:

Tooltip

The tooltip provides a container. Coach Views placed in the container will then have the tooltip shown associated with them because they are contained within the container.

See also:

Type Ahead Text

See also:

Variant

This control is able to stand in for other controls including Text, Masked Text, Single Select, Date, Decimal and Integer.
See also:

Vertical Layout

See also:

Video

See also:

Well

See also:

New Controls

See also:

Internals

To be a developer of SPARK UI, one must understand the architecture. The first thing to realize is that there is no code in the common event/callback functions known as load(), change(), unload() etc. Instead, the code is kept in a single JavaScript file which explicitly defines the functions required by CoachNG. There are several reasons for this. First, it keeps the source integrated and together, and hence easier to edit and work upon. It also allows us to run JavaScript minifiers and obfuscators over the code. In addition, having a single JavaScript source file causes it to be browser cached, which is faster. If this wasn't enough, we will also find that when performing Chrome debugging, we can quickly get to our code through the underlying file. For example, the CoachNG onLoad() function would be defined in the control as:

this.constructor.prototype.load

The source file that contains these functions is called "BPMExt-Control-<name>.js".

The SPARK UI controls adhere to naming conventions. Specifically:

- BPMExt-Control-<name>.js - The source of the JavaScript that constitutes the control.
- BPMExt-Control-<name>.css - CSS needed for this specific control.
- BPMExt-Control-<name>-Palette.png
- BPMExt-Control-<name>-Preview.png
- BPMExt-Control-<name>-Preview.js
- BPMExt-Control-<name>-Preview.html

The typical included scripts are:

- BPMExt-Core.js (BPMExt-Core.zip)
- BPMExt-Control-<name>.js
- BPMExt-Control-<name>.css
- spark-fonts.css
- spark.css
- loader.js

The AMD dependencies are:

- com.salientprocess.bpm.views/bpmext -> bpmext
- com.ibm.bpm.coach.utils/utilities -> utilities
- dojo/dom-class -> domClass
- dojo/dom-attr -> domAttr
- dojo/dom-construct -> domConstruct

A Coach view is initialized with a piece of inline JavaScript coded as follows:

bpmext_control_Init<Name>.call(
   this, utilities, bpmext, domClass, domAttr, domConstruct, fastclick
)

This causes the function called "bpmext_control_Init<Name>" to be invoked with "this" inside the executing function being the Coach View itself. This is a common JavaScript technique. This function is defined in the corresponding BPMExt-Control-<name>.js.

The structure of a Coach View is:

- Create and populate the this._instance
- Create and populate the this.constructor.prototype._proto
- Create and populate the exposed functions at this.constructor.prototype
- Create a load function at this.constructor.prototype.load
- Create a view function at this.constructor.prototype.view
- Create a change function at this.constructor.prototype.change
- Create a validate function at this.constructor.prototype.validate
- Create a collaboration function at this.constructor.prototype.collaboration
- Create an unload function at this.constructor.prototype.unload

The load function must call bpmext.ui.loadView(this) if the control is a view, or it should call bpmext.ui.loadContainer(this) if the control is a container. The unload function must call bpmext.ui.unloadView(this).
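The inline initializer above relies on Function.prototype.call to run the init function with "this" set to the Coach View. A stripped-down illustration of just that technique (the names here are invented for the sketch, not SPARK UI internals):

```javascript
// An init function written to be invoked via .call(): inside it, "this" is
// whatever object was supplied as the first argument to .call().
function bpmext_control_InitXyz(utilities) {
  this._instance = { initializedWith: utilities };
  this.getType = function () { return "xyz.1"; };
}

// Stand-in for a Coach View object being initialized.
const view = { name: "MyView" };
bpmext_control_InitXyz.call(view, "utils");

console.log(view.getType());                 // xyz.1
console.log(view._instance.initializedWith); // utils
```

This is why the real initializer can live in a shared, cached BPMExt-Control-<name>.js file yet still decorate each individual Coach View instance.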
When we execute the loadView() call, a set of pre-defined functions is implicitly added to our view, including:

- addClass()
- getData()
- hide()
- isBound()
- isEnabled()
- isValid()
- isVisible()
- propagateUpValueChange()
- recalculate()
- setData()
- setEnabled()
- setValid()
- setVisible()
- show()
- triggerFormulaUpdates()

The "_instance" view property

We have adopted a convention that each control will have a property called "_instance" that is an object that will hold our private instance state. One of the primary reasons for this is so that we don't pollute the interface of our control with unexpected or non-public data.

The substitute object

If we have no bound object, we can create a substitute one with a call to bpmext.ui.substituteObject().

The substitute config options

If we have no bound configuration option, we can create a substitute one with a call to bpmext.ui.substituteConfigOption(). For example:

bpmext.ui.substituteConfigOption(this, "color", "black");

Event handling

To register an event handler, we can call bpmext.ui.registerEventHandlingFunction(). For example:

bpmext.ui.registerEventHandlingFunction(this, "onclickEvent");

The name of the event must be a configuration option of type string. Looking at the previous example, we would need a configuration option called "onclickEvent". What this will hold at design time is a fragment of JavaScript code which will be evaluated when the event is fired. The way we fire an event is to call bpmext.ui.executeEventHandlingFunction(this, "onclickEvent").

Formula handling

To register for formula handling, we can call bpmext.ui.setupFormulaTriggeredUpdates(). The signature of this function is:

bpmext.ui.setupFormulaTriggeredUpdates(view, updateFn, resultFn)

The updateFn function is invoked when the value of the formula associated with the control needs to be updated. The resultFn is a function that is invoked that must return a value. This value is what is returned when we use the scripting shortcut "@{<name>}".
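The formula mechanism just described is essentially an observer pattern. A toy sketch, with invented names standing in for the SPARK UI functions: controls register an update function, and a broadcast asks every registered control to recompute:

```javascript
// Toy observer model of formula-triggered updates (not the real bpmext.ui
// implementation): registered views recompute when a trigger is broadcast.
const listeners = [];

function setupFormulaUpdates(view, updateFn, resultFn) {
  view._update = updateFn; // recompute this view's formula
  view._result = resultFn; // expose this view's current value (the @{name} role)
  listeners.push(view);
}

function broadcastExpressionTrigger() {
  listeners.forEach(function (v) { v._update(v); });
}

const source = { value: 2 };
const derived = {};
setupFormulaUpdates(
  derived,
  function (v) { v.value = source.value * 10; },
  function (v) { return v.value; }
);

source.value = 5;
broadcastExpressionTrigger();
console.log(derived._result(derived)); // 50
```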
The semantics of this is to return the current value of the control as it can be used in other formulaic expressions. When the value of our control changes, we need to inform other controls that formulas that refer to our value need to be updated. We do this through a call to bpmext.ui.broadcastExpressionTrigger().

Questions:
- It appears that the formula is obtained from a configuration option called "expression". Is that correct?

Documentation generation

The source should contain JsDoc markup for automatic generation of documentation. Each public method should have a JsDoc comment of the following format:

/**
 * @instance
 * @memberof
 * @method
 * @desc
 * @returns
 */

See also:

bpmext.ui

- addEventSubscription(eventName, callback, view, persistent)
- alert(topic)
- eventSubscriptionExists(eventName, view)
- executeEventHandlingFunction(view, eventName)
- forEachViewSibling(view, callback)
- getChildContainers(view, deep)
- getChildViews(view, deep)
- getCoachNGViewPath(view)
- getContainer(viewPath, fromView)
- getContainerPath(view)
- getEffectiveVisibility(view)
- getEventHandlingFunction(view, eventName)
- getFunction(functionName, fromView)
- getFunctionContext(functionName, fromView)
- getInvalidViews(view)
- getNearestValidationContainer()
- getOption(view, optionName, defaultValue)
- getRequiredViews(onlyEmpty, fromView)
- getValidationContainers()
- getView(viewPath, fromView)
- getViewData(viewPath, fromView)
- getViewPath(view)
- getViewValidationErrors()
- isViewExplicitlyLoaded(view)
- loadContainer(view)
- loadView(view)
- makeUniqueId(prefix)
- popValidationContainer()
- publishEvent(eventName, payload, persistent)
- pushValidationContainer()
- registerEventHandlingFunction(view, eventName)
- removeEventSubscription(eventName, view)
- removeViewValidationError()
- setupFormulaTriggeredUpdates()
- setViewData(data, viewPath, fromView)
- setViewEnabled(enabled, required)
- substituteConfigOption(view, propertyName, defVal)
- substituteObject(view, type, propertyName, defVal)
- unloadContainer(view)
- unloadView(view)
- updateViewValidationState()

View Navigation

- forEachViewSibling(view, callback)
- getChildContainers(view, deep)
- getChildViews(view, deep) - Get an array of child views (not containers). If the deep value is true, we recurse; otherwise just the immediate children are returned.
- getView(viewPath, fromView)
- getViewPath(view)
- <view>.context.viewid

See also:
- The Control tree

bpmext.ui.View.*

These methods are added to each view under the "ui" namespace.

- addEventSubscription(eventName, callback)
- get(path, index)
- getAncestor(name, literal)
- getCount(viewPath)
- getIndex()
- getAbsoluteName() - Return a string representation that can be used to address the view.
- getOption(optionName, defVal)
- getParent(literal) - Return the single view that is the parent of the called view. If literal is true then we will return the CoachNG parent, otherwise we return the first non-decorative parent. This may also return undefined if there is no parent.
- getSibling(name)
- invoke(functionName, args …)
- getChild(name, index)
- isSubscribingToEvent(eventName)
- publishEvent(eventName, payload)
- removeEventSubscription(eventName)

Creating a new custom Coach View

Here is the recipe for creating a new custom Coach View that matches the pattern followed by SPARK UI. Imagine our new Coach View is going to be called "Xyz".

Create a JavaScript source file called "Xyz.js". Use the following as a template:

myControl_xyz = function (bpmext) {
   // Store any instance specific data within this object. Data stored in here will
   // be assured to be private and unique to this instance of the coach view and
   // not visible to others.
   this._instance = {};

   if (!this.constructor.prototype._proto) {
      this.constructor.prototype._proto = {
         _handleVisibility : function (view) {
            var visibility = bpmext.ui.getEffectiveVisibility(view);
            view.context.setDisplay(visibility != "NONE");
            view.context.setVisibility(visibility != "HIDDEN");
         },
         _myFunction : function (view) { // internal function
            // My code goes here.
         }
      };

      this.constructor.prototype.myFunction = function () {
         // publicly available method calls internal function
         this._proto._myFunction(this);
      };

      this.constructor.prototype.getType = function () {
         return "xyz.1";
      };

      this.constructor.prototype.load = function () {
         try {
            var opts = this.context.options;
            // default configuration values
            if (!opts.myConfig)
               bpmext.ui.substituteConfigOption(this, "myConfig", <defaultValue>);
         } catch (e) {
            console.log(e);
         }
         // As noted earlier, the load function must call bpmext.ui.loadView(this)
         // (or bpmext.ui.loadContainer(this) for a container).
         bpmext.ui.loadView(this);
      };
   }
}

Next, add this file to your toolkit as a managed Web file. Now we can create the Coach View. In the Included Scripts section, add a reference to the newly added Web file. In the AMD dependencies, add:

com.salientprocess.bpm.views/bpmext -> bpmext

In the "Inline Javascript" add:

myControl_xyz.call(this, bpmext);

SPARK UI References

DOM Programming

While DOM programming is a general programming discipline in its own right, we will cover here some of the notes I have found useful when working in this area.

Creating a new element:

var newElement = document.createElement("<tagName>");

Adding an element to the DOM tree:

element1.appendChild(element2);

Finding the first matching node:

var node = element.querySelector("<CSS Selector>");

Finding all the matching nodes:

var nodeList = element.querySelectorAll("<CSS Selector>");

Getting the parent of a node:

var parent = element.parentNode;
var parent = element.parentElement;

The return is null if we have no parent.

Adding text to a node

We can create a text node using document.createTextNode("<text>") and then add that node as a child of our parent node.
var newTextNode = document.createTextNode("Hello!");
parent.appendChild(newTextNode);

Another setter for text is to assign to the textContent property. For example:

parent.textContent = "Hello!";

Yet another option is innerText.
System ID: UF00028308:01373. Preceded by: Lake City reporter and Columbia gazette.

Packer Delight: Green Bay enjoying Super Bowl win. Sports, 1B
Season Ends: CHS boys basketball loses district opener. Sports, 1B

Lake City Reporter, Tuesday, February 8, 2011. www.lakecityreporter.com. Vol. 137, No. 15. 75 cents.

Scott proposes $5 billion spending cuts
ASSOCIATED PRESS: Florida Gov. Rick Scott announces his new budget during a Tea Party event in Eustis on Monday.

Council OKs reduced rates for natural gas
City manager says new charges take effect on March 1.
By ANTONIA ROBINSON
arobinson@lakecityreporter.com

Natural gas customers will soon see a difference in their bills. The City of Lake City Council approved amending natural gas rates and charges Monday night. New natural gas rates go into effect March 1, said City Manager Wendell Johnson.

A study was completed in 2010 on natural gas rates, he said. Several issues were brought to light in the study. There was an overcharge to residential customers and an undercharge for commercial rates, Johnson said. There was also no recurring capital improvement program or rate stabilization fund. "The ordinance corrects those things to make the system better," he said.

Residential customers will see a reduction in their rates. "I think they will be pleased with that," Johnson said. Commercial customers will only see a slight increase. The base customer charge is now $10 per month for residential and $25 for commercial. The distribution charge for residential customers will decrease from $0.62012 a therm to $0.59099. The distribution charge for commercial customers will increase from $0.32177 a therm to $0.36194.

RATES continued on 3A

Chamber sets spotlight for citizen of year
Top member businesses will also get awards.
From staff reports

The strength and accomplishments of the local business community will be recognized during an expanded Lake City-Columbia County Chamber of Commerce Business of the Year honors luncheon on Wednesday, March 30. The Lake City-Columbia County Chamber of Commerce will expand the traditional award to honor two Chamber member businesses in their respective size categories: fewer than 10 employees and more than 10 employees.

Folsom

This year, a third award, Citizen of the Year, will be revived to honor an individual who is a Chamber member with a distinguished record of business leadership and a history of civic and volunteer service in the community. The Chamber's Business Enhancement Committee, with approval from the Chamber of Commerce's board of directors, proposed the business awards expansion and presentation.

CHAMBER continued on 3A

About half of governor's proposed cuts will hit education budget.
By BRENDAN FARRINGTON
Associated Press

EUSTIS: New Republican Gov. Rick Scott received wild applause from about 1,000 tea party activists when he said the $65.9 billion budget proposal he rolled out Monday would cut government waste and lower taxes. Scott is proposing $5 billion in spending cuts, about half of that in education, in the next budget year beginning July 1 and another $2.6 billion more the following year.

Officials say it contained TNT and Semtex.
By C.J. RISAK
crisak@lakecityreporter.com

A rocket warhead reportedly containing 2.5 pounds of high explosive was delivered to the Lake City Police Department Sunday by a resident who "saw it lying on a lawn." The warhead proved to be active. Police sealed off the area and summoned the Explosive
Rick Scott Governor At the tea party rally in a Baptist church, the new Republican gover- nor compared his look at the current $70.4 billion budget to going up in an attic of an old home. "Over the last three months I spent a lot of time in that attic and I'm clean- ing it out," said Scott. "There's things that we need to dust off and repair and protect, and there's things we need to completely throw away." Scott also has proposed cutting pension benefits for state workers, teachers and some local government employees while making them con- tribute 5 percent of their salaries to SCOTT continued on 3A PATRICK SCOTT/Lake City Reporter Units from the Columbia County Sheriffs Office, LCPD, Alachua County Bomb Squad and the Lake City Fire Department provide an escort for the U.S. Navy's E.O.D. truck that was carrying the rocket warhead, Ordnance Disposal unit not have a safe way to from Alachua County. transport a device of this But the specialists did nature, so an EOD unit from the U.S. Naval Air Station in Jacksonville was called. The rocket was placed in a sealed box and taken to a deserted area, where it was detonated. Traffic was rerouted from the area for more than two hours, accord- ing to a witness. How this device ended up on a lawn at Northwest Brady Circle and Northwest Maitland Terrace in Lake City was unknown. The man picked it up and took it to the LCPD, placing it in the shrubs outside WARHEAD continued on 3A 1 8426I a0002 1 CALL US: (386) 752-1293 SUBSCRIBE TO THE REPORTER: Voice: 755-5445 Fax: 752-9400 57 -. * Mostly sunny WEATHER, 2A O pinion ................ Around Florida........... O bituaries .............. Advice & Comics......... Puzzles ................. i^. TODAY IN SCHOOL FCAT High Achievers A ..*: COMING WEDNESDAY , Teacher of the Year profile. 
Warhead found

PATRICK SCOTT/Lake City Reporter
A bomb specialist from the U.S. Navy's Mayport Naval Station in Jacksonville moves a rocket warhead to a sealed box after picking it up outside the door of the Lake City Police Department headquarters. Traffic in the area was rerouted for more than two hours.

Resident leaves explosive outside LCPD

LAKE CITY REPORTER DAILY BRIEFING
TUESDAY, FEBRUARY 8, 2011

Saturday: 4-5-20-23-38-48
Monday: Afternoon: 5-3-1; Evening: 3-2-7
Monday: Afternoon: 3-7-8-4; Evening: 7-0-2-0
Sunday: 3-22-27-32-33

PEOPLE IN THE NEWS

Ratings rise for Super Bowl

In the measurement of the nation's 56 largest media markets, this year's game had a 3 percent higher rating than last year's. The game also had a 71 share, meaning that more than two-thirds of the televisions being watched in the country at that time were watching the Super Bowl on the Fox network. That's the highest audience share for a Super Bowl since 1982, a time when there were far fewer television networks as competition. NFL Commissioner Roger Goodell said Monday that preliminary TV ratings show the game is "on track to be the most-watched show in television history." Television ratings in general have been super for the NFL this year, with viewership up 13 percent over last year, Nielsen said. Games on CBS, Fox and NBC averaged 20 million viewers, more than twice what networks get for their prime-time programming.

ASSOCIATED PRESS
In a Sunday photo, Packer fans cheer while watching the game at a Super Bowl party in Kenosha, Wis.

Michelle Obama plans visit to 'Regis and Kelly'

NEW YORK - Michelle Obama is paying her first visit to "Live! With Regis and Kelly." The show announced Monday that the first lady will be a guest of Regis Philbin and Kelly Ripa on Feb. 9. Mrs. Obama will be in New York to mark the first anniversary of the Let's Move! campaign to fight childhood obesity. She is expected to discuss the campaign during her "Live!" appearance. "Live!"
is distributed by Disney-ABC Domestic Television.

'The X Factor' winner to get $5 million contract

NEW YORK - The winner of Simon Cowell's upcoming game show on Fox will get a $5 million record deal. Fox said it believes that's the biggest guaranteed prize in television history. "The X Factor" is due to premiere on Fox next fall. Cowell left "American Idol" in part to concentrate on the new show, which has been a successful format in several countries since its debut in 2004. The winner of the inaugural American edition gets a recording contract with Syco, a joint venture between Cowell and Sony Music. Cowell will be an on-air participant in "The X Factor." Other judges have not been announced. Fox said Monday that auditions for contestants begin March 27 in Los Angeles, with other sessions following in the Chicago, Dallas, Miami, Seattle and New York areas.

Associated Press

Celebrity Birthdays
* Composer-conductor John Williams is 79.
* Former ABC News anchor Ted Koppel is 71.
* Actor Nick Nolte is 70.
* Comedian Robert Klein is 69.
* Actress Brooke Adams is 62.
* Actress Mary Steenburgen is 58.
* Author John Grisham is 56.
* Actor Henry Czerny is 52.
* Rock singer Vince Neil (Motley Crue) is 50.
* Rock singer-musician Sammy Llanas (The BoDeans) is 50.
* Actress Mary McCormack is 42.
* Retired NBA player Alonzo Mourning is 41.

Daily Scripture
"Let those who love the Lord hate evil, for he guards the lives of his faithful ones and delivers them from the hand of the wicked." Psalm 97:10

Rivera knew man who got contract

MIAMI - While U.S. Rep. David Rivera was chairman of the Miami-Dade GOP, the Republican party made record payments with little explanation to a political consultant with whom he had close ties, The Miami Herald reported. Parties must keep expense records and generally keep detailed ones for auditors, GOP leaders said. Rivera was a state representative before winning his congressional seat in November.
He is facing a criminal investigation into his finances.

Lawmakers delay insurance bill

TALLAHASSEE - A Senate committee has delayed voting on a proposed property insurance bill designed to crack down on fraudulent or excessive sinkhole claims. It appeared the Banking and Insurance Committee was about to approve the bill (SB 408), until some late public testimony in Monday's sometimes testy meeting persuaded the panel's chairman and the bill's sponsor to postpone a vote. Sen. Garrett Richter (R-Naples) said he wants to allow the committee an opportunity to hear more about the business-backed proposal. The panel had already knocked down a series of amendments offered by Sen. Mike Fasano (R-New Port Richey), who said loosening the reins on insurers would make it easier for them to refuse to provide sinkhole coverage.

LEANNE TYO/Lake City Reporter
Quilts stay on display at library
Jim Evans (right) of Lake City and his daughter, Anna Evans, admire a quilt by Esther Albritton at the Columbia County Public Library Main Branch Thursday. More than 50 quilts will be on display at the library through Feb. 21. Having the quilts on display is for the community and the Olustee Festival weekend, said Loretta Kissner, guild president.

Shark attacks decline in state

GAINESVILLE - Florida typically tops the list in shark attacks. There were 36 shark attacks in the United States last year, including five in North Carolina and four each in California, Hawaii and South Carolina.

Woman died after riding Disney ride

ORLANDO - A 77-year-old woman collapsed and died in December after riding.

THE WEATHER: Mostly sunny today, high 57, low 30; partly sunny, 66/40; chance of showers, 64/38; mostly sunny, 59/30. [Statewide forecast map omitted.]
[Forecast map city temperatures omitted.]

LAKE CITY ALMANAC

TEMPERATURES
High Monday: 58
Low Monday: 50
Normal high: 68
Normal low: 44
Record high: 86 in 1957
Record low: 25 in 1912

PRECIPITATION
Monday: 1.44"
Month total: 2.91"
Year total: 6.59"
Normal month-to-date: 0.82"
Normal year-to-date: 4.33"

SUN
Sunrise today: 7:17 a.m.
Sunset today: 6:13 p.m.
Sunrise tomorrow: 7:16 a.m.
Sunset tomorrow: 6:14 p.m.

MOON
Moonrise today: 9:54 a.m.
Moonset today: 11:21 p.m.
Moonrise tomorrow: 10:27 a.m.

Moon phases: First, Feb. 11; Full, Feb. 18; Last, Feb. 24; New, March 4.

On this date in 1989, sixteen cities in the western U.S. reported record low temperatures for the date. Marysville, Calif., reported an all-time record low reading of 21 degrees above zero.

Today's ultra-violet radiation risk for the area is rated on a scale from 0 to 10+. An exclusive service brought to our readers by The Weather Channel. weather.com

Forecasts, data and graphics (c) 2011 Weather Central LLC, Madison, Wis.

Reproduction in whole or in part is forbidden without the permission of the publisher. U.S. Postal Service No. 310-880. POSTMASTER: Send address changes to Lake City Reporter, P.O. Box 1709, Lake City, FL.

AROUND FLORIDA
REGIONAL FORECAST MAP for Tuesday, February 8. [Two-day city forecast table omitted.]

Page Editor: C.J.
Risak, 754-0427

Page Editor: Roni Toldanes, 754-0424
LAKE CITY REPORTER LOCAL & STATE TUESDAY, FEBRUARY 8, 2011

RATES: Lower
Continued From Page 1A

In other business:
* Jill Milton, City human resource coordinator, was recognized for acquiring 40 hours of continuing education courses in human resource management from Villanova University through online courses;
* Mayor Stephen Witt issued a proclamation for Hazardous Materials Awareness Week, which is Feb. 13-19;
* Council approved a third amendment to the funding and program agreement relating to the grant amendment between the Office of Tourism, Trade and Economic Development and the city, approving, confirming and ratifying its execution by the mayor;
* A total of 22 fleet vehicles owned by the city were declared surplus to its need and authorized to be sold at public auction. The vehicles were taken out of service due to the city utilizing a car leasing program for its fleet.

Councilman George Ward was absent from the meeting due to a family emergency. The next City of Lake City council meeting is 7 p.m. Feb. 21 at City Hall.

SCOTT
Continued From Page 1A

their retirement plan. He also wants to cut the state's work force by 8,645 positions, or nearly 7 percent, next year while reducing pay for prison employees. Scott also wants to spend more on economic development and cut $4.1 billion in fees and taxes as part of a two-year spending plan. His proposal includes a $1.4 billion cut in the state's corporate income tax that he eventually wants to phase out. Businesses also would get a $630.8 million reduction in taxes they pay into the state's unemployment compensation program. Scott also wants to roll back motor vehicle fees that lawmakers increased a couple of years ago by $360 million. He told the crowd he's going through every line of the budget and removing any item that isn't essential government spending.
"Government has to get back to its core functions, but only its core functions," he said. In an unusual move, Scott broke with tradition by making his budget announcement outside of Tallahassee. Tea party activists from around the state came to the church and rallied ahead of Scott's speech. Outside there were "Don't Tread On Me" flags, and inside the crowd sang patriotic songs and listened to speakers criticize President Obama.

CHAMBER: Officials bring back Citizen of the Year honor
Continued From Page 1A

as the focal point of the Chamber's first quarter Better Business Series luncheon. "Traditionally, we've presented an award during the annual business meeting, but this year, we expanded the number of awards and decided to give the winners a show all of their own," said Steve Smith, chairman of the Chamber's Business Enhancement Committee. "It's a major accomplishment for a business to win a Chamber award. We want the spotlight on the winners, to give them the recognition they deserve. Businesses who were worthy of recognition were being omitted because they did not meet the qualifications." The Chamber's Business Enhancement Committee currently is accepting nominations for the three awards. Guidelines for nominations are the following:

Chamber Business of the Year Award: Category 1: 10 employees or fewer; Category 2: More than 10 employees. Criteria: Must be a Chamber member; in business at least three years; supportive and involved in the community.

Chamber Citizen of the Year Award: Criteria: Must be a Chamber member; must have a distinguished record of business leadership; must have a history of civic and volunteer service.

Nominations should be compiled in written form and detail how the business meets the criteria, then offer a detailed explanation as to why the business is an exceptional candidate for the award. Nominations should be delivered to the Chamber office, 162 S. Marion Ave., Lake City, FL 32055, by 5 p.m. Wednesday, Feb. 16, 2011.
For more information, contact Dennille Folsom, executive director, at (386) 752-3690 or dennille@lakecitychamber.com.

PATRICK SCOTT/Lake City Reporter
A rocket warhead lies on the ground after it was taken to the Lake City Police/Fire Department's parking lot Sunday.

WARHEAD: NAS officials checking serial numbers
Continued From Page 1A

their door. "The citizen was driving down the road and said he just saw it lying there," Capt. John Blanchard of the LCPD said. Blanchard could not say how old the device was. It did contain a mixture of Semtex and TNT, both highly explosive. "The NAS (Naval Air Station) got the serial number off the device and was checking to see if there were any records of it," Blanchard said. He added that, although no damage was done in this instance, people are asked not to handle any similar discoveries. "This was a rocket warhead," he said. "What we would ask anyone to do, if they find anything that looks suspicious, anything at all, don't disturb it. Just pick up the phone and give us a call." Blanchard said that "often just changes in elevation will be enough to set it off."
It was the second time in a month that explosive materials were discovered by accident. On Jan. 15, a woman from White Springs found 20 pounds of old railroad detonators, referred to as railroad torpedoes, in a relative's shed she was cleaning out.

OPINION
Tuesday, February 8, 2011

THEIR OPINION
Message to Congress: Just do your work

In a brief glimpse of what the Senate is capable of last week, it voted 81-17 to remove from the health care law an excessively burdensome reporting requirement on business that many realized was a mistake almost as soon as it was enacted. The vote proved that, in a better world, the two parties could work together to improve a needed overhaul of health care. Unfortunately, that vote followed one to repeal the health care law that was doomed from the start and failed on a party-line vote. The Republicans said the vote had symbolic importance and they wanted to get the Democrats on record. That would be fine, except we now face more of the same, and too much symbolism can get in the way of important work. Republicans' best course would be to wait until the challenges to the health care law reach the Supreme Court and hope the justices rule their way. Or they could hope the voters give them control of the Senate with a veto-proof margin in 2012.
Instead the Republicans are talking about nickel-and-diming the law to death by blocking the funds needed to implement it, a time-consuming ... requisite spending bills to run the government, all of which was supposed to have been done by Sept. 30 at the latest. The legislative calendar calls for the House to be in session 123 days over 32 weeks. That doesn't leave a lot of room for doomed "symbolic" votes.

* San Angelo (Texas) Standard-Times

Health care law beats broccoli

For those of you keeping score at home, it's 2-2 in the game of federal judges deciding whether the health care reform legislation is constitutional. This leaves the average American in a quandary, or even a quarry, where it is easy to be left between a rock and a hard place. The situation begs the question: Is the health care bill constitutional or isn't it? Who knows? At the heart of the question is a dispute as old as the republic: the power and reach of the federal government. Fellows in powdered wigs argued about it then and folks with tea bags on their hats argue about it now. People in astronaut helmets will argue about it in the future. That's because the Founding Fathers, dedicated to the liberty of ensuring never a dull moment, produced a Constitution that was both specific and vague. It remains the mother of all challenges to apply 18th century words to 21st century situations impossible for them to have imagined. Consider the health-care law. Four federal judges have now pored over the Constitution and then, amazing coincidence, the two appointed by Republican presidents ruled it wasn't constitutional and the two appointed by Democratic presidents said it was. Yes, this certainly inspires confidence in the judicial system. The Commerce Clause (Article I, Section 8, Clause 3) seems simple: "To regulate Commerce with foreign Nations, and among the several states, and with the Indian Tribes."
Reg Henry
rhenry@post-gazette.com

Unfortunately, simple is a challenge to any self-respecting legal scholar to argue the contrary and render the simple complicated. When it comes to medicine, commerce isn't what it was. The health-care system back in 1787 was a guy named Nathaniel who had a cart offering a fine selection of leeches and a few doctors who employed medical instruments consisting of sharp pointy things that flies liked to perch upon. (Surprisingly, these tools worked like a charm. Patients saw them and ran away, which the doctors took as a sign of a cure. Interestingly, this out-of-sight, out-of-mind view of medicine is held by some Republicans today in considering the plight of the uninsured.) The health system has become Big Business, nay, Huge Business. Go to an emergency room today with your leg falling off and your insurance card will be quickly examined if they first don't X-ray your wallet for symptoms of cash. If Scrooge McDuck were still hanging out with Donald Duck, his money bin would be a nonprofit insurance company because that is where the profit is. If ever a commerce needed to be regulated among the several states, health care is it. Yet 26 of those several states are arguing that the federal government has exceeded its powers chiefly because the law makes most people buy their own insurance if they don't have it. Oh, the constitutional horror of someone being made to act responsibly! The latest naysayer is U.S. District Judge Roger Vinson of Florida, Ronald Reagan's gift to future generations. In his enthusiasm, he went further than previous rulings and junked the whole law because of the compulsory insurance purchase. He also traded in the commerce of strained analogies, comparing the insurance requirement to making people eat healthful food.
"Congress could require that people buy and consume broccoli at regular intervals," he wrote, "not only because the required purchases will positively impact interstate commerce, but also because people who eat healthier tend to be healthier and are thus more productive and put less of a strain on the health care system." Well, no, your honor. Broccoli alone does not make you healthy, and you and I don't have to pay directly for its lack. Broccoli isn't responsible for driving up the costs of a huge business across state lines. Lack of insurance by millions of so-called passive people is one such factor. You and I pay for the uninsured in our premiums. So, it's constitutional for us to have to pay more for the health-care freeloaders, but it is unconstitutional to insist that they pay for themselves? And that's because government can't insist on people buying anything? But governments insist on people buying things all the time at every level. Homeowners must buy snow shovels to keep sidewalks clear, business owners must meet myriad expensive requirements, motorists must replace old tires, parents must feed their children and we must all buy clothes. You'll get in trouble if you don't. Don't believe me? Just try turning up at the Supreme Court naked (this experiment not advised in the winter months). The Constitution is like the Bible: people read into it what they will. But it is a shame that its underlying support for regulated commerce among the states is being chewed up like a half-baked broccoli casserole.

Reg Henry is a columnist for the Pittsburgh Post-Gazette.

OTHER OPINION
No easy fix for illegal immigration

The latest estimates of America's illegal immigrant population should come as a jolt to anyone who thinks bad economic conditions somehow offer a quick fix to the nation's immigration problems.
Figures from the Pew Hispanic Center indicate the illegal immigrant population started growing again even before the nation's economy showed small signs of recovery. The Obama administration, which pledged last year to redouble the effort for comprehensive immigration reform, gave the issue back-burner status as the recession and growing joblessness caused immigrants to leave. The illegal immigrant population peaked at around 12 million in 2007, leveling at around 11.1 million by 2009. But illegal immigration is back on the rise, according to the Pew estimates, just as U.S. unemployment shows signs of dropping and the Dow hovers at 12,000. Last year saw the illegal immigrant population rise by an estimated 100,000. We shudder to imagine what these numbers will look like next year if economic improvement continues. While other states lost illegal immigrants, Texas, Louisiana and Oklahoma saw an increase of about 240,000 from 2007 to 2010, the Pew study said. If anything, the Texas delegation to Congress should be hopping mad for something to be done, now, at the federal level. These dramatic population fluctuations are costly to Americans, particularly at the local level, where school districts, public hospitals and service-oriented businesses must plan their budgets based on immigrant populations that may or may not be there from one year to the next. It's no secret that Texas businesses, especially in the construction and food-service sectors, are heavily reliant on low-cost illegal immigrant labor. The Pew numbers make clear that Texas taxpayers and businesses need a federal response to a federal problem. The worst response is a state legislative package that will drive migrants deeper into the shadows.

* The Dallas Morning News

Star Parker
parker@urbancure.org

Freedom must start at home

Watching the wave of unrest in the Middle East, there are lessons, surprises, information from what's already happened.
But can we turn to them for the entirely new, for the unanticipated, for the inconceivable? Forget it. It should be obvious that 10 years, 25 years, 50 years from now the world will be as different from today as today is from 10 years, 25 years, or 50 years ago. And it should be equally obvious that we have no "experts" that know what those great changes will be and what they will mean. Yet we continue to allow ourselves to be persuaded that we can know what cannot be known and that experts can provide ... environment, or what have you. The fact that they are wrong 100 percent of the time never seems to discourage us from going down the same path again and again. On the other hand, there are things we can do that are far more useful ways to use our brains. We can identify the correct principles by which to live and allow those to guide how we conduct our affairs. Getting back to the Middle East, the most effective thing we could have been doing, and can do now, is set an example. If we want to promote freedom, how about starting at home? If we'd been doing what we should have, we'd set an example for others, we'd have better judgment regarding what is wrong with them, and we'd be more prosperous and therefore stronger and more influential. It's time to get perspective about what we can do, what we can't do, and get our own house in order.

Star Parker is president of CURE, Coalition on Urban Renewal and Education, and author of three books.

4A LAKE CITY REPORTER LOCAL TUESDAY, FEBRUARY 8, 2011

COMMUNITY CALENDAR

* To submit your Community Calendar item, contact Antonia Robinson at 754-0425 or by e-mail at arobinson@lakecityreporter.com.

Wednesday

Dietetic Association meeting
The Gainesville District Dietetic Association is meeting at 5:30 p.m. Wednesday at Haven Hospice in Gainesville. All registered dietitians, dietetic technicians (registered) and students are invited to attend.
The meeting is sponsored by Yakult and Barnes Home Health Care. Ana Rosales, RD, LDN, will be providing a presentation on "Why Probiotics are Important in Nutrition." Attendees can receive 1.5 Continuing Education Units. Visit ... for more information.

Lake City Newcomers Regular Meeting
The regular meeting of the Lake City Newcomers and Friends is 11 a.m. Wednesday at Quail Heights Country Club, on Branford Highway. Luncheon costs $10. The program for this month will be Patriot Music by the Reflections. All members, guests and friends are welcome. For more information, please call 752-4552 or 755-4051.

Thursday

Washington Birthday Celebration
The local Sons of the American Revolution is joining the Edward Rutledge Daughters of the Revolution Chapter along with the North Central Florida Regents Council for a George Washington Birthday Celebration luncheon at 11 a.m. Thursday at Quail Heights Country Club. Registration is at 10:30 a.m. James Montgomery, "Mr. Mont," is the guest speaker. Buffet lunch costs $15 per person. DAR members are asked to stay afterward for a brief meeting to vote on several important business items.

Free Medicaid workshop
Teresa Byrd Morgan of Morgan Law Center for Estate & Legacy Planning is hosting a free Medicaid workshop at 2 p.m. Thursday. The landlords meeting will take place at the Lake Shore conference room.

Garden Club meeting
The Lake City Garden Club is meeting at 10 a.m. Thursday at the Woman's Club. The program will be "Wild about Succulents" by Sandra Plummer. Visitors are welcome.

Friday

HSCT production
The High Springs Community Theater presents "Sherlock's Last Case," a play by Charles Marowitz, at 8 p.m. Friday and Saturday. The theater is located in historic High Springs at 130 NE First Ave. The play centers on a death threat against Sherlock Holmes by the supposed son of his late nemesis, Professor Moriarty.
Tickets are available at The Framery in Lake City on Baya (386-754-2780), at The Coffee Clutch in High Springs (386-454-7593), online at highspringscommunitytheater.com or at the door. Prices are $11 for adults, $8 for youth 12 years old and younger, and $9 for seniors on Sunday.

Saturday

Riding Club meeting
The Columbia County Riding Club meets at 6 p.m. Saturday at the Columbia Resource Rodeo Arena. The club meets the second and fourth Saturday of each month. Free admission for spectators. Bring your horses and families for a night of fun. Fee required for riders. Cook shack on site. For more info go to www.columbiacountyridingclub.com.

Founders Day celebration
The Columbia County Branch NAACP is hosting a Founders Day celebration in honor of Black History Month at 3 p.m. Saturday at New Day Springs Missionary Baptist Church. Individual students or groups interested in participating should call 752-4074. The church is located at 1213 Long St.

Fort Mose trip
A Black History 2011 trip to Fort Mose is leaving at 7 a.m. Saturday from Richardson Community Center. The event is sponsored by It's About My Efforts. The month-long theme is "Self Sufficiency is Key." Visit myefforts.org or call 386-697-6075 for details.

Charity Ball
The 18th Lake City Police Department Charity Ball is 7 p.m. to midnight Saturday at the Lake City Country Club. All proceeds from this year's ball will go toward the purchase of a Firearms Training Simulator. Tickets are $50 a person. The black-tie event will feature finger food, entertainment, music, dancing and door prizes. Contact Destiny Hill at 758-5484 or Samantha Driggers at 758-5483 for ticket information.

'50s Rock-n-Roll and Sock Hop Dance
Mike Mullis' '50s Rock-n-Roll Show and Valentines

OBITUARIES

Devan Allen Bozeman
Devan Allen Bozeman, 16, of Branford, died Saturday, February 5, 2011, from injuries sustained in an automobile accident.
He was born September 16, 1994, in Winter Haven, Florida, and had lived in Branford for most of his life. Devan was a 10th grade student at Branford High School, active in the agriculture program, and had wanted to be a wildlife officer. He loved hunting, running dogs, working on trucks, anything to do with being outdoors. Devan attended Christ Central Ministries of Lake City, where he was a member of the ROC youth group and had played in the band. He is survived by his parents, Mark and Michelle Bozeman; two sisters, Kirsten Alese Bozeman and Megan Michelle Pierce; his brother, Jeremiah Evans Bozeman; and grandparents, Marion & Michael Thrower, Edgar Powell and Jack & Delores Bozeman. Funeral services will be held on Wednesday, February 9, 2011, at 2:00 P.M. in the sanctuary of Christ Central Ministries of Lake City, with Rev. Lonnie Johns officiating. The family will receive friends in the church for two hours prior, beginning at 12 P.M. In lieu of flowers, donations should be made in Devan's name to Partners of Hope International for Honduras, 507 NW Hall of Fame Drive, Lake City, FL 32055. Arrangements are under the care of WILLIAMS-THOMAS FUNERAL HOME WESTAREA, 823 NW 143rd Street. For further information call Williams-Thomas Westarea, (352) 376-7556.

Nichole Marie Cervantez
Mrs. Nichole Marie Cervantez, 25, and her son, Bryson Allen Cervantez, of Lake City were the victims of a homicide that occurred on February 2, 2011. A native of Fort Myers, Florida, Nichole had been a resident of Lake City for the majority of the past twelve years. She had recently returned from Texas, having lived there for a brief time, and had also lived in Illinois and Michigan. She had worked as a waitress. In her spare time Nichole enjoyed four wheeling, mud bogging, reading and horses. Her favorite time was the time that she spent with her daughter. She was excited to be pregnant and was looking forward to the birth of her son, Bryson.
Nichole is survived by her daughter, Ariana Renee Cervantez; her mother, Shelly Lynn Harris, and her father, Jose Cervantez (Renee), all of Lake City; her husband, Larry Fleetwood III of Cleveland, Texas; her sister, Cari Sue Snyder of Lake City; and her maternal grandparents, John & Mary Wheeler of Naples, Florida. Numerous other family members and friends also survive. Graveside funeral services for Nichole ... Memorial donations may be made to the Nichole Cervantez Memorial fund at any First Federal banking location. Arrangements are under the direction of the DEES-PARRISH FAMILY FUNERAL HOME, 458 S. Marion Ave., Lake City, FL 32025, 752-1234. Please sign our on-line family guestbook at www.parrishfamilyfuneralhome.com.

Monica Blanche Webb Hudson
Mrs. Monica Blanche Webb Hudson, 27, of Lake City, was the victim of a homicide that occurred on February 2, 2011. A native of Fort Smith, Arkansas, Monica had been a resident of Lake City for the majority of the past fourteen years. She had lived in Iowa prior to returning to Lake City. Monica was a homemaker who enjoyed playing with her children and reading. Monica is survived by her children, Von Leigh Hudson and Kayla M. Hudson of Montezuma, Iowa, and Arianna Nicole Hudson of Lake City; her husband, Michael Steven Hudson of Montezuma, Iowa; her mother, Mary Jane Bass (Dustin) of Lake City; her father, Paul Webb (Karen) of Montezuma, Iowa; her brothers, Paul Harvey Webb II and River Webb of Montezuma, Iowa, and Ryan Keene Bass of Lake City; her step-brothers, Dustin Winchell and Derek Winchell, and her step-sister, Alyssa Winchell, all of Iowa. Numerous other family members and friends also survive. Graveside funeral services for Monica ... Memorial donations may be made to the Monica Hudson Memorial fund (attn: Patricia Keene) at the D.O.T. Credit Union. Arrangements are under the direction of the DEES-PARRISH FAMILY FUNERAL HOME, 458 S. Marion Ave., Lake City, FL 32025, 752-1234. Please sign our on-line family guestbook at www.parrishfamilyfuneralhome.com.

Cynthia Ann "Cindy" Joye
Ms.
Ms. Cynthia Ann "Cindy" Joye, 54, of Gainesville, died January 30, 2011 at her residence following a brief illness. A native and longtime resident of Columbia County, Ms. Joye had lived at her family's home in Suwannee prior to moving to Gainesville in 2004. Ms. Joye was a member of the 1974 graduating class at Columbia High School and she had attended the Lake City Community College for two years. Ms. Joye worked as the manager of the Eckerd's drug store prior to a disabling injury. Ms. Joye was of the Baptist faith. She was preceded in death by her father, Nax "Mason" Joye, Jr. She is survived by her mother, June Joye; her sisters, Cheryl Shiver (Ken) and Carol McClellan (James); and her beloved dogs, Thomas, Winston and Abigail. Private family services for Ms. Joye were held in Memorial Cemetery. The family requests that in lieu of flowers memorial donations be made to the A.S.P.C.A., 424 East 92nd Street, New York, NY 10128. Arrangements are under the direction of the DEES-PARRISH FAMILY FUNERAL HOME, 458 S. Marion Ave., Lake City, FL 32025, 752-1234; please sign our on-line family guestbook at www.parrishfamilyfuneralhome.com.

Michael "Kevin" Tucker, Jr.

Mr. Michael "Kevin" Tucker, Jr., 32, of Lake City, was the victim of a homicide that occurred on February 2, 2011. A lifelong resident of Lake City, Kevin had worked in the highway bridge construction industry for several years. Kevin enjoyed spending time with his kids, hanging out with friends, listening to rock-n-roll music and watching "Gator" football. He was of the Baptist faith. Kevin is survived by his children, Michael Austin Tucker, Matthew Aaron Tucker and Michelle Adrianna Tucker, all of Lake City; his parents, Cathy and "Red" Ratliff of Lake City; his brothers, Roger German of Jacksonville, Joey German of Macon, Georgia, Robin Ratliff of Valdosta, Georgia, Randy Ratliff of Lake City and J.P.
Ratliff of Fort White, Florida; his sisters, Crystal Tucker and Missy Ratliff, both of Lake City; his biological father, Mike Tucker; and the mother of his children, Carissa Calvarese. Funeral services for Kevin will be conducted at 3:00 P.M. on Thursday, February 10, 2011 in the chapel of the Dees-Parrish Family Funeral Home with Rev. Randy Ogburn officiating. Interment will follow in the Oak Grove Cemetery (which is located on Highway 441 about fifteen miles north of Lake City). The family will receive friends from 5:00-7:00 Wednesday evening in the chapel of the funeral home. Arrangements are under the direction of the DEES-PARRISH FAMILY FUNERAL HOME, 458 S. Marion Ave., Lake City, FL 32025, 752-1234; please sign our on-line family guestbook at www.parrishfamilyfuneralhome.com.

Obituaries are paid advertisements. For details, call the Lake City Reporter's classified department at 752-1293.

Sock Hop Dance
A Sock Hop Dance is 8 p.m. Saturday at the Spirit of the Suwannee Music Hall. The event is a live musical performance. Prizes will be awarded for Best '50s costume, a Hula Hoop challenge, trivia and a Wipeout dance contest. Contact the Music Hall at 386-364-1703. Reservations are highly recommended.

Committee Meeting
Richardson High School Alumni confer in a committee meeting at noon on Saturday. All RHS Alumni are invited to attend the event at the Richardson Center. For more information, contact Ms. Jones at 386-752-0815.

FACS Valentine's Party
All 2011 active Filipino American Cultural Society members and guests are invited to attend the FACS Valentine's Day party and dance from 6 p.m. to 10 p.m. on Saturday. Come and enjoy an evening of dancing, cultural food and more at the Epiphany Catholic Church social hall. Remember to bring a covered dish to share. For more information, contact Bob Gavette at 386-965-5905.

Charity Walk/Run
A walk/run for a cure for juvenile diabetes in memory of the late Lindsi Young will take place from 9 a.m. to 1 p.m.
on Saturday on the Suwannee High School track. Participants are asked to collect donations to support their walk, and bring those donations to the event to help find a cure for juvenile diabetes. Registration for the walk begins at 8:30 a.m. at the track.
LAKE CITY REPORTER SCHOOLS, TUESDAY, FEBRUARY 8, 2011. Page Editor: Roni Toldanes, 754-0424

Bulletin Board

CAMPUS NEWS

Columbia City Elementary
Columbia City's 35 FCAT High Achievers were Sydney Griffin, Cecily Griffis, Colby Odom, Ian Beckman, Colin Broome, Sarah Griffin, Jacob Zecher, Andrew Harding, Cameron Nichols, Axel Ortiz, Tyler Utley, Matthew Dimauro, Caley Edwards, Hannah Hornberger, Megan Grubb, Emily Harrington, Jessica Jewett, Seth Register, Sasha Ellis, Cody Kight, Dakota Lugenbeel, Austin Nash, Olivia Sadlik-Peralta, Nicholas Hall, Matthew Rockafellow, Jasmine Cook, Jonathan Ellis, Austin Jenkins, Carlie Carswell, Dezmund Cothran, Chace Curtis, Winston Kam, Makenzie Kemp, Chelsey Jones and Lindsey Langston. CCE is fortunate to have a wonderful group of fifth-grade students who produce "Good Morning Columbia City," a daily morning news show, under the direction of Mrs. Guetherman and Mrs. Cox. CCE had more than 160 students who had perfect attendance (no tardies or early dismissals) for the second nine weeks.

Fort White Elementary
As fifth-graders work hard to master the math, reading and science Sunshine State Benchmarks, two fifth-grade students, Shelby DuBose and Savana Terry, have done an outstanding job in the district's Spelling Bee and Science Fair. Terry received an Honorable Mention in the Science Fair Jan. 11 for her "Bio-Degradable" project, which proved which toilet paper was most effective. DuBose took third place at the district level for the Scripps National Spelling Bee Jan. 20.

Five Points Elementary
Last week was Celebrate Literacy Week, Florida, "Champions Read, Readers Lead," which was geared toward motivating children and adults alike to become champions and leaders of reading. The "Just Read, Florida!" office challenged schools to participate in a Million Minute Marathon.
Five Points Elementary students kicked off the week during the Morning Wave with a Book Talk segment.

COURTESY PHOTO: FCAT High Achievers enjoy fun-filled day. Columbia City Elementary's 35 FCAT High Achievers pose for a photo at Funworks in Gainesville Jan. 28 to celebrate FCAT High Achiever Day. The fourth- and fifth-grade students, who scored a level five on reading or math or a level five or six on writing when taking their FCAT last spring, enjoyed skating, bowling, miniature golf and games at Funworks and a lunch at the Oaks Mall food court. PTO paid expenses for the fun day.

STUDENT PROFILE
Name: Matthew Hunter
Age: 10
Parents: Alan and Pam Hunter
School and grade: Columbia City Elementary, fifth grade
Achievements: "A" Honor Roll, Fourth Grade Math Bee Team, Honor Roll since second grade, school Spelling Bee winner
Clubs or organizations: CCE Broadcast Team, Fourth Grade Chorus, CYFA Football
What do you like best about school? I like language arts, math and P.E.
What would you like to do when you complete your education? I would like to design video games.
Teacher's comments: Matthew is an all-around amazing student. He excels in everything he puts his mind to. It has been an absolute pleasure to teach him this year.
Principal's comments: Matthew is a wonderful student and a great role model for others. He is always polite and kind. He has a great future ahead of him.
Student's comments concerning honor: It's an honor to represent CCE. I would like to thank my parents and my teacher because they helped me get here.

COURTESY PHOTO: Students celebrate Literacy Week. Mike Millikin, superintendent of schools, reads to Five Points Elementary students on Jan. 25 as part of the school's commemoration of Celebrate Literacy Week, Florida.
LAKE CITY REPORTER SPORTS, Tuesday, February 8, 2011, Section B. Story ideas? Contact Tim Kirby, Sports Editor, 754-0421, tkirby@lakecityreporter.com

BRIEFS

WOMEN'S SOFTBALL: Board meeting set for today
Columbia County Women's Softball has a board meeting at 6:30 p.m. today at the meeting hall next to the playground at Southside Recreation Complex. All those interested in women's slow pitch softball are encouraged to attend. For details, call Casandra Wheeler at 365-216 or e-mail john_casandra@hotmail.com.

YOUTH SOFTBALL: Girls softball registration ends
The Girls Softball Association of Columbia County has final registration (ages 4-17) at the softball complex from 5-7 p.m. today. Registration forms also are available at Brian's Sports, and completed forms can be dropped off there. Coaches are being sought. For details, call 755-4271 or visit information@girlssoftballassociation.org.

OLUSTEE 5K: Sign-up open for Feb. 19 run
The 2011 Olustee 5K Run/Walk is 7 a.m. Feb. 19. Individual or team registration is available at fitnessonline ... Michelle Richards at (386) 208-2447.
* From staff reports

GAMES
Today
Fort White High boys basketball vs. Newberry High in District 5-3A tournament at Williston High, 6 p.m.
Fort White High softball at Gainesville High, 7 p.m. (JV-5)
Fort White High baseball vs.
Union County High in preseason game, 7:30 p.m.
Thursday
Columbia High softball vs. Ridgeview High, 7 p.m. (JV-5)
Columbia High baseball in preseason game at Suwannee High.

Bonds seeks next step for Indians baseball
Fort White made playoffs as district runner-up in '10.
By TIM KIRBY
tkirby@lakecityreporter.com

FORT WHITE - Fort White High baseball made the playoffs as district runner-up last season, and coach Chad Bonds would like to go one step better in 2011. "We beat Williston last year in the tournament and it was kind of an upset," Bonds said on Monday. "We have some talented individuals; a lot of kids that played last year are back. We need to see if we have got some guys willing to take the ball and go out and win. We are looking for someone to take ownership of that big game." Fort White is scheduled to host Union County High in a preseason game at 7:30 p.m. today and P.K. Yonge School on Thursday, but Bonds said the games may have to be moved back because of the rain. Bonds is looking at four prime pitchers: seniors Justin Kortessis and Josh Faulkner, junior Brandon Sharpe and sophomore Kevin Dupree. Kortessis was 3-6 with five starts last year and a 4.86 ERA. He struck out 37. Dupree was 2-0 with two starts and a 2.67 ERA. Sharpe had five appearances with a 2.55 ERA. "I like our pitching a little better than last year going in," Bonds said. "I feel ... (INDIANS continued on 2B)

TIM KIRBY/Lake City Reporter: Fort White High baseball coach Chad Bonds is entering his third season at the helm of the Indians.

ASSOCIATED PRESS: Green Bay Packers' Jordy Nelson (left) catches a touchdown pass in front of Pittsburgh Steelers defender William Gay (22) during the first quarter of Super Bowl XLV in Arlington, Texas, on Sunday.
Packers take advantage of Pittsburgh mistakes for 31-25 Super Bowl win
By JAIME ARON
Associated Press

ARLINGTON, Texas - Aaron Rodgers grew up in Northern California watching Joe Montana and Steve Young have their best games on the biggest stage. Welcome to the club, kid. Rodgers carried a patchwork lineup into the Super Bowl, then kept things calm when Green Bay's depth was tested further. His most accomplished receiver and the heart-and-soul of his defense were knocked out by halftime, yet Rodgers still guided the Packers to a 31-25 victory over the Pittsburgh Steelers on Sunday night. So now the Vince Lombardi Trophy is headed back to Titletown for the first time in 14 years, and Rodgers can lead the championship parade with the shiny hardware riding shotgun in the red convertible he received as Super Bowl MVP.

"I feel like I let the city of Pittsburgh down, the fans, my coaches and my teammates and it's not a good feeling." - Ben Roethlisberger, Steelers quarterback

"It's the top of the mountain in my sport, my profession," Rodgers said. "It's what you dream about as a kid and think about in high school, junior college, D-I: getting this opportunity and what would you do?" Here's what Rodgers did: He put his team ahead on their second drive and made sure they never trailed. He went back to receivers even after they dropped passes, sometimes on the very next snap. He threw three touchdowns and had no turnovers. No matter how many players the Packers lost this season (they put 16 on injured reserve, including a half-dozen starters), someone else was always ready to step in. Holes were still being plugged in this game. Consider these contributions by guys who weren't being counted on when the season began. ... Big Ben only once. The play was made by Frank Zombo, an undrafted rookie linebacker who became a starter only after an injury, then became a reserve again when Erik Walden did a solid job in his place.
But Walden was inactive because of an ankle injury, so Zombo got the start. Jarrett Bush and Pat Lee filled in for Charles Woodson and Sam Shields at cornerback. Bush had an interception and came tantalizingly close to another on the play that sealed the game. Bush only remembered playing two games in the secondary this season, Lee just one. "I don't know if we're just well-coached or what it is," Zombo said. "We just make plays." Green Bay led 21-3 with a few minutes left until halftime, but the championship-steeled guys from Pittsburgh had plenty of resolve of their own. (NFL continued on 3B)

Generals knock Tigers out of tourney
Columbia loses district play-in game, 68-63.
By TIM KIRBY
tkirby@lakecityreporter.com

In 20 years of coaching that includes a state final game, Stephen Jenkins has probably never worked harder. Jenkins led an undersized and outmanned Lee High team to a 68-63 come-from-behind win over host Columbia High in the District 4-5A play-in game on Monday. "I had to do all I could to squeeze this one out," Jenkins said of his 4-16 team. "We've been depleted since Christmas." Treys were raining down for the Tigers as they built a 10-point lead at the end of the first quarter and stretched it to 13 points, 38-25, at intermission. Marquez Marshall and Dray Simmons each had a dozen points in the first two quarters. Henry Jones scored 15 in the first half for the Generals. In the third quarter, William McDuffie and Darius Harper came alive for Lee, scoring nine and eight points, respectively. It was left to Nigel Atkinson to keep the Tigers on top with his seven points, and Columbia held a 53-50 lead at the end of the quarter. Lee scored the first four points of the fourth quarter to move into the lead, then Simmons answered with a 3-pointer. Lee went on a 6-0 run with three baskets from Jones.
Javontae Foster hit a shot to get the Tigers within two points, but McDuffie made both ends of three consecutive one-and-ones to keep Columbia at bay. "We're basically trying to get ready for next year," Jenkins said. "It shows what they can do if they play hard, stay together and listen." Marshall and Simmons scored 15 points each to lead Columbia (7-17). Laremy Tunsil scored 12 points and Atkinson scored 10. Markhem Gaskins scored nine points and Foster scored two. Jones (24 points) and McDuffie (22 points) paced Lee.

TELEVISION
TV sports today
MEN'S COLLEGE BASKETBALL
7 p.m. ESPN: Indiana at Purdue; ESPN2: Cincinnati at DePaul
9 p.m. ESPN: Tennessee at Kentucky
NHL HOCKEY
7:30 p.m. VERSUS: Buffalo at Tampa Bay

FOOTBALL
NFL playoffs
Wild Card: Seattle 41, New Orleans 36; N.Y. Jets 17, Indianapolis 16; Baltimore 30, Kansas City 7; Green Bay 21, Philadelphia 16
Divisional Playoffs: Pittsburgh 31, Baltimore 24; Green Bay 48, Atlanta 21; Chicago 35, Seattle 24; N.Y. Jets 28, New England 21
Conference Championships: Green Bay 21, Chicago 14; Pittsburgh 24, N.Y. Jets 19
Super Bowl: Green Bay 31, Pittsburgh 25

Super Bowl records
ARLINGTON, Texas - Some records set or tied in the 2011 Super Bowl:
TEAM
Game Records Set
Fewest rushing attempts, both teams: 36 (Pittsburgh 23, Green Bay 13); old record: 37, Pittsburgh (25) vs. Arizona (12), 2009, and Indianapolis (19) vs. New Orleans (18), 2010.
Game Records Tied, One Team
Fewest turnovers, game: 0, Green Bay vs. Pittsburgh (held by 17 others).
Most points, first quarter: 14, Green Bay vs. Pittsburgh (held by six others).
Largest lead, first quarter: 14-0, Green Bay vs. Pittsburgh (Miami vs. Minnesota, 1974; Oakland vs. Philadelphia, 1981).
Fewest rush attempts by winning team: 13, Green Bay vs. Pittsburgh (St. Louis vs. Tennessee, 2000).
Most two-point conversions: 1, Pittsburgh vs. Green Bay (held by five others).
Game Records Tied, Both Teams
Fewest first downs by penalty: 0, Green Bay and Pittsburgh (held by five others).
Records Tied
Most games played, team: 8, Pittsburgh (held with Dallas).

BASKETBALL
NBA schedule
Monday's Games
Charlotte 94, Boston 89
L.A. Lakers at Memphis (n)
Minnesota at New Orleans (n)
Cleveland at Dallas (n)
Houston at Denver (n)
Chicago at Portland (n)
Utah at Sacramento (n)
Phoenix at Golden State (n)

Pearl returns for Vols
By BETH RUCKER
Associated Press

KNOXVILLE, Tenn. - Tennessee coach Bruce Pearl is back after his eight-game suspension from Southeastern Conference play. He still feels like more punishment is on the horizon with the Vols playing No. 18 Kentucky and No. 17 Florida this week. Though Pearl has had plenty of success at Florida's O'Connell Center, he hasn't won at Kentucky's Rupp Arena since a 75-67 victory on Feb. 7, 2006, against a Tubby Smith-coached Wildcats team. "When Commissioner (Mike) Slive penalized me the eight games, I think he originally wanted to do 10, but when he looked at the schedule and saw that I have to go to Rupp and the O'Dome, he decided to settle for just eight and make me go to those two places," Pearl joked Monday. Kentucky coach John Calipari has his own issues with the Wildcats, who are .500 halfway through the SEC season for the first time since 1990. Kentucky has lost its last two SEC games. Calipari is especially frustrated about Kentucky's 70-68 loss at Florida on Saturday, when Brandon Knight's 3-pointer at the buzzer came up short.

Today's Games
Philadelphia at Atlanta, 7 p.m.
L.A. ... Washington, 7 p.m.
L.A. Clippers at New York, 7:30 p.m.
Chicago at Utah, 9 p.m.
Dallas at Sacramento, 10 p.m.
Denver at Golden State, 10:30 p.m.

AP Top 25
The top 25 teams in The Associated Press' college basketball poll, with first-place votes, records through Feb. 6, total points and last week's ranking:
Record Pts Pvs
1. Ohio St. (65) 24-0 1,625 1
...
8. Notre Dame 19-4 1,185 9
9. Villanova 19-4 1,047 12
10. Connecticut 18-4 1,040 6
11. Georgetown 18-5 1,009 13
12. Syracuse 20-4 919 17
13. Wisconsin 17-5 790 19
14. Purdue 18-5 754 11
...
22. Texas A&M 17-5 231 16
23. Vanderbilt 16-6 128 23
24. Temple 17-5 110 -
25. West Virginia 15-7 93 25
Others receiving votes: Minnesota 88, Wichita St. 29, Coastal Carolina 26, Cincinnati 22, Saint Mary's, Calif. 22, Alabama 21, George Mason 19, Washington 15, Marquette 12, Xavier 12, Florida St. 11, Belmont 5, Illinois 5, UCLA 5, UNLV 5, Baylor 4, Colorado St. 2, Tennessee 2, UTEP 2, Cleveland St. 1, Duquesne 1, Missouri St. 1.

AP Top 25 schedule
Today's Games
No. 6 San Diego State vs. Utah, 10:30 p.m.
No. 14 Purdue vs. Indiana, 7 p.m.
No. 18 Kentucky vs. Tennessee, 9 p.m.

USA Today/ESPN Top 25
The top 25 teams in the USA Today-ESPN men's college basketball poll, with first-place votes, records through Feb. 6, total points and previous ranking:
Record Pts Pvs
1. Ohio State (31) 24-0 775
2. Kansas 22-1 732
3. Texas 20-3 721
4. Pittsburgh 21-2 678
5. Duke 21-2 642
6. San Diego State 23-1 614
7. Notre Dame 19-4 575
8. Brigham Young 22-2 564
9. Connecticut 18-4 496
10. Villanova 19-4 495 12
11. Georgetown 18-5 447 14
12. Purdue 18-5 401 10
13. Syracuse 20-4 369 17
14. Wisconsin 17-5 361 18
15. Louisville 18-5 350 13
16. Arizona 20-4 273 22
17. Utah State 22-2 257 21
18. Kentucky 16-6 246 11
19. Florida 18-5 243 23
20. Missouri 18-5 234 15
21. North Carolina 17-5 165 -
22. Texas A&M 17-5 128 16
23. Saint Mary's 20-4 64 -
24. Vanderbilt 16-6 39 24
25. Minnesota 16-7 37 20
Others receiving votes: West Virginia 29, Temple 27,
Washington 21, Coastal Carolina 15, George Mason 13, Xavier 13, Wichita State 12, UCLA 9, Alabama 8, Florida State 6, Texas-El Paso 4, Illinois 3, Virginia Commonwealth 3, Marquette 2, UNLV 2, Valparaiso 2.

SEC standings (conference wins)
East: Florida 7, Tennessee 5, Georgia 5, Kentucky 4, Vanderbilt 4, South Carolina 4
West: Alabama 7, Mississippi St. 4, Arkansas 4, Mississippi 3, LSU 2, Auburn 1

ACC standings
Duke, North Carolina, Florida St., Clemson, Virginia Tech, Boston College, Maryland, Miami, Virginia, Georgia Tech, N.C. State, Wake Forest

HOCKEY
NHL schedule
Monday's Games
Toronto 5, Atlanta 4
Detroit 3, N.Y. Rangers 2
Edmonton at Nashville (n)
Chicago at Calgary (n)
Colorado at Phoenix (n)
Ottawa at Vancouver (n)
Today: ... Vancouver, 10 p.m.

Wilson takes Phoenix Open for second win
By JOHN NICHOLSON
Associated Press

SCOTTSDALE, Ariz. - Packers fan Mark Wilson celebrated a big victory of his own on a playing field about as close to frozen tundra as it gets on the PGA Tour. A self-described cheesehead from Menomonee Falls, Wis., Wilson won the frost-delayed Phoenix Open on Monday for his second victory in three starts this year, holing a 9-foot birdie putt. "Luckily, my son, after we played Candy Land in the middle of the fourth quarter, he said, 'OK, the last two minutes we can watch it together.'" Delays for frost and frozen ... "nervous today than I was expecting," Wilson said. "I didn't sleep great last night. It was probably the excitement with the Super Bowl and the uncertainty of today." The Sony Open winner last month in a 36-hole Sunday finish, Wilson made a 4 ... "thankfully, I started it on line and knocked it in."
Martin Laird (65) and Vijay Singh (66) tied for third at 16 under, and Gary Woodland (66), J.B. Holmes (67) and Nick Watney (68) followed at 15 under.

INDIANS: Try to dethrone Bulldogs
Continued From Page 1B
comfortable with four of my arms." Bonds said he also is looking at sophomore Robby Howell (3.71 ERA in three appearances) and junior lefty Jonathan Dupree. "I really liked Kevin at the end of last year and he has gained a couple of mph on his fastball and has matured," Bonds said. "Robby came up at the end of last year and has experience on the mound." Kortessis, who has already signed with St. Johns River State College, hit .436 last year with 29 runs, 18 RBIs, 10 doubles, one triple and three home runs. He stole nine bases, primarily as the Indians' lead-off hitter. Bonds said he may move the shortstop's bat lower in the lineup. Kevin Dupree played third base last year and hit .364 with 22 runs scored, 17 RBIs and seven extra-base hits, including three dingers. Bonds said he will hit No. 3 or cleanup. Bryce Beach returns behind the plate and will play some outfield with the arrival of Columbia High transfer Cody Spin.
Beach hit .373 with 14 runs and 22 RBIs and will likely join Faulkner at the top of the lineup. Both have great speed, as does Kortessis. Bonds said both receivers are more than adequate for high school. He counts on them to block balls in the dirt and get help from his pitchers to prevent steals by holding runners and delivering the ball quickly to the plate. Jonathan Dupree will play first and hit in the middle of the lineup. Taylor Morgan is a middle infielder who was injured last year. Jacob Philman, Nate Reeves, Anthony Gonzalez and Brandon Brooks are working in the outfield. Bonds said defending District 5-3A champion Suwannee High has a lot of players returning, and always-strong Williston had a young team in 2010. Santa Fe High and Newberry High round out the district. Fort White is the fifth baseball job for Bonds, who played at Suwannee. He has also headed up programs at Carrabelle High (two years), Wakulla High (three years), Hamilton County High, and was at Branford High for three years before coming to Fort White. He took Carrabelle to the playoffs in 2001 and Branford in 2006-07, including one district championship. He enters his third year as head coach of the Indians.

Page Editor: Tim Kirby, 754-0421

Rodgers moves out of the shadow with MVP award

ASSOCIATED PRESS: Pittsburgh Steelers' Isaac Redman (33) and Rashard Mendenhall walk off the field after the Steelers' Super Bowl XLV loss on Sunday.

Steelers' big guns silent in Super Bowl
By DENNIS WASZAK Jr.
Associated Press

ARLINGTON, Texas - James Harrison sat on a stool and stared blankly into the Steelers' quiet locker room. ... pretty much summed things up for the Steelers defense, which isn't used to having to explain why it couldn't get the job done. "Bottom line is, we played subpar ball," Harrison said. "And, you see what the turnout is." It was an unexpected letdown for Pittsburgh, which relies on its aggressive defense. ... quarterback hits, but that was pretty much all. As for Polamalu, the game-changing plays never came. "I had some opportunities to make some plays," Polamalu said. "I was just off a step here or there." Such as the early chance he whiffed on, when he delivered only a glancing blow on running back James Starks. Polamalu had his biggest hit the very next play as Greg Jennings caught a 21-yard touchdown pass. "It's incredibly humbling," Polamalu said. "Toughest loss I've ever had in my life." Dick LeBeau's defense was one of the strengths all season for the Steelers, who have a long legacy of punishers.

By HOWARD FENDRICH
Associated Press

ARLINGTON, Texas - Aaron Rodgers celebrated his first Super Bowl scoring pass by simply raising both arms in the familiar signal for "Touchdown!" before briefly embracing an offensive lineman. After his next two touchdown tosses, Rodgers slowly meandered to the end zone to pat his receiver on the shoulder. Quite clearly, Rodgers is no Brett Favre. Didn't pretend to be him. Doesn't need to worry about emulating him. Rodgers does things his way: He's a quarterback who boasts California cool and precision passing, a generally laid-back guy who does not engage in the sort of wild, high-risk throws or leaping, helmet-smacking, post-TD displays Favre made famous. And now Rodgers owns as many Super Bowl victories. "Aaron is Aaron.
Aaron and Brett are two totally different ..." ... established himself as one of the game's best. This was his third full season as a starting QB,

NFL: No comeback
Continued From Page 1B
touchdown pass to Mike Wallace and following with a nifty 2-point conversion on an option. Green Bay's 18-point lead was down to a field goal. ... "I feel like I let the city of Pittsburgh down, the fans, my coaches and my teammates and it's not a good feeling," Roethlisberger said.

Pittsburgh 0 10 7 8 - 25
Green Bay 14 7 0 10 - 31
First Quarter
GB-Nelson 29 pass from Rodgers (Crosby kick), 3:44.
GB-Collins 37 interception return (Crosby kick), 3:20.
Second Quarter
Pit-FG Suisham 33, 11:08.
GB-Jennings 21 pass from Rodgers (Crosby kick), 2:24.
Pit-Ward 8 ...

TEAM STATISTICS (Pit, GB)
First downs: 19, 15
Total Net Yards: 387, 338
Rushes-yards: 23-126, 13-50
Passing: 261, 288
Punt Returns: 4-5, 1-0
Kickoff Returns: 6-111, 3-63
Interceptions Ret.: 0-0, 2-38
Comp-Att-Int: 25-40-2, 24-39-0
Sacked-Yards Lost: 1-2, 3-16
Punts: 3-51.0, 6-40.5
Fumbles-Lost: 1-1, 1-0
Penalties-Yards: 6-55, 7-67
Time of Possession: 33:25, 26:35

INDIVIDUAL STATISTICS
RUSHING - Pittsburgh: Mendenhall 14-63, Roethlisberger 4-31, Redman 2-19, Moore 3-13. Green Bay: Starks 11-52, Rodgers 2-(minus 2).
PASSING - Pittsburgh: Roethlisberger 25-40-2-263. Green Bay: Rodgers 24-39-0-304.
RECEIVING - Pittsburgh: Wallace 9-89, Ward 7-78, Randle El 2-50, Sanders 2-17, Miller 2-12, Spaeth 1-9, Mendenhall 1-7, Brown 1-1. Green Bay: Nelson 9-140, J. Jones 5-50, Jennings 4-64, Driver 2-28, Jackson 1-14, Quarless 1-5, Hall 1-2, Crabtree 1-1.
MISSED FIELD GOALS - Pittsburgh, Suisham 52 (W...).

ASSOCIATED PRESS: Green Bay Packers quarterback Aaron Rodgers celebrates after beating the Pittsburgh Steelers in the Super Bowl on Sunday in Arlington, Texas.

and he was particularly good throughout the playoffs, ... Mike Tomlin said, "and continued to stand in there and throw the football accurately." That's not all.
Rodgers changed plays at the last moment, reading the defense before the snap and adjusting. He overcame a poor start, a couple of key drops and a third-quarter lapse. And he did it all without ... perfect all game.

But perhaps he could be forgiven if he was experiencing some jitters: After all, the guy only played in one playoff game in his career before this season.

"We kind of struggled at times on offense," Rodgers said.

That's true. He began the game by overthrowing receivers and generally being off-kilter, completing only one of his first five passes. But he knows a thing or two about slow starts.

Just look at Rodgers' career arc. Despite record-setting years during high school in Chico, Calif., the slim Rodgers was not seriously recruited by major college football programs. That was OK, though. Didn't let it bother him.

"That guy," Packers receiver Donald Driver said, "is a true leader."

Rodgers put the ball right where he wanted it. ...

HURRY! FINAL DAY!
Today is the last day to place your Love Line! The Lake City Reporter presents: Love Line. Rates are as follows: 35 words or less for $12.00; each additional word 15¢; add a photo for $3.00. Print your message here: Your Name / Phone / Address / City/State/Zip. Mail to: Lake City Reporter, Classified Department, P.O. Box 1709, Lake City, FL 32056. 755-5440. ALL ADS MUST BE PAID AT THE TIME OF PLACEMENT. DEADLINE IS FEB. 8, 2011.

LAKE CITY REPORTER - FOOTBALL - TUESDAY, FEBRUARY 8, 2011 - Page Editor: Tim Kirby, 754-0421

LAKE CITY REPORTER - ADVICE & COMICS - TUESDAY, FEBRUARY 8, 2011

[Comic strips: Dilbert, Baby Blues, Blondie, Beetle Bailey]
[Comic strip: Hagar the Horrible]

DEAR ABBY
Independent woman can't find the right mix in men

DEAR ABBY: I'm an independent, 41-year-old woman who attracts men who are 10 to 13 years younger than I am. I'm not interested in them because I feel they are only after one thing. Another problem is, when I start getting close to a man my own age, he always makes me feel "smothered." It seems I'm either loved too much or not at all. Is there a balance, or am I just afraid of getting close? - AVOIDING GETTING HURT IN MILWAUKEE

DEAR AVOIDING: I suspect that it's the latter. All younger men are not interested in only one thing. Some are, but not all. And men your age who are ready for commitment are not "smothering" you but they do seem to want something you are unwilling or unable to give. Unless you can determine what's holding you back, you will remain single and looking. A psychologist could help you get to the heart of the matter quickly, and that's what I'm recommending so I won't hear from you with this same problem when you're 50.

DEAR ABBY: After nine years of marriage, my husband, "Brett," and I welcomed our first child 10 months ago. We are happy ... does, he or she isn't going to be calling Carol by any multisyllabic appellations. Your child will probably call her a name that's easy to pronounce and entirely original.

DEAR ABBY: I am the youngest of three children. Whenever my mom looks through our family photo albums, she makes comments about "the good old days" while she's looking at the pictures taken before I was born. It offends me when I hear it, because it feels like she's saying the years she remembers most fondly are the ones before she had me. Am I overreacting, or do those comments seem inappropriate to you as well?
- OUT OF THE PICTURE, LEWISTON, IDAHO

DEAR OUT OF THE PICTURE: When your mother looks at the photo albums, she may be reminded of a time when she was younger, experienced less stress and had fewer responsibilities. Not knowing her, I can't tell you if you're overreacting. But I can suggest that you discuss this with her because your feelings may be a mile off target. Please don't wait and let this fester.

Write Dear Abby at ... or P.O. Box 69440, Los Angeles, CA 90069.

HOROSCOPES
THE LAST WORD - Eugenia Last

ARIES (March 21-April 19): Ride out any controversy or negativity. Size up your situation without making a commitment. This is a great time to prove how valuable you are but it's not the time to negotiate or to make demands. ***

TAURUS (April 20-May 20): You can't appease everyone. Offer what you know you can do well and successfully. You will be inclined to underestimate your current situation, so it's very important not to make promises or to think in too broad a spectrum.

GEMINI (May 21-June 20): You've got more going for you than you realize. Don't look back or second-guess yourself. Put your plans into motion and strive for perfection and completion. You have room to grow and advance and that's precisely what your aim should be. ****

CANCER (June 21-July 22): Avoid anyone who wants too much or is putting pressure on you. You will learn a valuable lesson about lifestyle that will help you change your ways, correct poor habits and implement a positive set of rules. **

LEO (July 23-Aug. 22): If you aren't happy with where you are, consider what you can learn or what skills you can pick up to help you get to where you want to be. Discuss your plans with someone you respect. You can create a much more stable environment for yourself. *****

VIRGO (Aug. 23-Sept. 22): You need to play a little harder and strive for a bit more enjoyment in your life.
Get involved in activities that stimulate you mentally or physically and you will feel much better about attacking any professional goals. ***

LIBRA (Sept. 23-Oct. 22): Separate yourself from the bullies and people trying to push you aside or make you feel or look bad. Get involved in groups that will see your potential and allow you to take things in a new direction. ***

SCORPIO (Oct. 23-Nov. 21): Some of the people you have always been able to count on in the past will disappoint you. This time around, voice your opinion loud and clear. You will feel better and will stand a better chance of winning a battle that you have no choice but to fight. ***

SAGITTARIUS (Nov. 22-Dec. 21): Someone close to you will not agree with your decisions. A change in the way you live and do things is expected and, although you won't like all the results, you will be in a better place and position.

CAPRICORN (Dec. 22-Jan. 19): You'll have trouble making up your mind and, when you do, you are likely to discover you made a poor choice. Don't be afraid to slow down and hold off on any decision-making for the time being. Spend less, offer less and do less. **

AQUARIUS (Jan. 20-Feb. 18): Don't let anyone discourage you. There are lots of doors opening and you have the energy, desire and ability to pursue the opportunities. Your discipline will enable you to reach goals you normally would never consider. ****

PISCES (Feb. 19-March 20): You will have difficulty making decisions. Don't let anyone put pressure on you. It may cost you a deal or a partnership initially but, in hindsight, you will realize it is the wrong time to make a move that is binding. ***

[Comic strips: Frank & Ernest, For Better or For Worse]

CELEBRITY CIPHER by Luis Campos
Celebrity Cipher cryptograms are created from quotations by famous people, past and present. Each letter in the cipher stands for another.
Today's clue: K equals V

"X CSXAUOJTWY SJKHN JF XF B I I D C J H Z I B D FY O A X FZ J M A B D I X F'Y SJ KH YWXY UXA A B D ZB F' Y NYXA." - XOYWDO RJSSHO

PREVIOUS SOLUTION: "For my part I know nothing with any certainty, but the sight of the stars makes me dream." - Vincent van Gogh

(c) 2011 by NEA, Inc. 2-8

[Comic strip: Classic Peanuts]

DEAR ABBY - Abigail Van Buren

... except for a problem with Brett's mother, "Carol." Carol and I have had a rocky relationship, although in recent years things seem to have gotten better. My complaint (and Brett's as well) with Carol is that she is intrusive. She always wants to be in the middle of everything and won't ease up on "mothering" Brett. Furthermore, Carol has decided our child should call her "Grandmommy" or "Mommy Smith."

I object to that name because I feel "Mommy" is the one name reserved for me. I don't mind "Grandma," "Grandmother" or "Granny." But Carol won't back down. We tried coming up with another name, but she has ignored our suggestions. Am I being unreasonable? Please advise. - THE ONLY MOMMY HERE

DEAR ONLY MOMMY: You and Brett need to calm down. Your child won't be doing a lot of talking for a while. And when your baby ...

[Comic strips: Snuffy Smith, Zits, B.C.]

Page Editor: Emogene Graham, 754-0415 - LAKE CITY REPORTER - CLASSIFIED - TUESDAY, FEBRUARY 8, 2011

SELL IT! FIND IT!

Legal

IN THE CIRCUIT COURT OF THE 3RD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA
FILE NO.: 11-09-CP
PROBATE DIVISION
IN RE: ESTATE OF LEMMA WYNELLE GOLLY, Deceased.

NOTICE TO CREDITORS
The administration of the estate of LEMMA WYNELLE GOLLY, deceased, whose date of death was September 9, 2010, File Number 11-09-CP, is pending in the Circuit Court for Columbia County, Florida, Probate Division, the address of which is P.O. Box 2068, Lake City, Florida 32056. The names and addresses of the personal representative and the personal representative's attorney are set forth below.
All creditors of the decedent and other persons having claims or demands against decedent's estate, on whom a copy of this notice has been served, must file their claims with this court WITHIN THE LATER OF 3 MONTHS AFTER THE DATE OF THE FIRST PUBLICATION OF THIS NOTICE OR 30 DAYS AFTER THE TIME ... DATE OF DEATH IS BARRED.

The date of first publication of this notice is: February 8, 2011.
By: /s/ Rhett Bullard, Attorney for Petitioner, Florida Bar No.: 175986, 100 South Ohio Avenue, Live Oak, Florida 32064
By: /s/ PHILLIP J. SIMPSON, Petitioner/Personal Representative
04543410, February 8, 15, 2011

Registration of Fictitious Names
We the undersigned, being duly sworn, do hereby declare under oath that the names of all persons interested in the business or profession carried on under the name of NORTH FLORIDA BATTERY & CORE at 895 N Marion Avenue, Lake City, FL 32055 (Contact Phone Number: 386-344-0456), and the extent of the interest of each, is as follows: Name: KW Holmes; Extent of Interest: 100%. by: /s/ KW Holmes
STATE OF FLORIDA, COUNTY OF COLUMBIA
Sworn to and subscribed before me this 7th day of February, A.D. 2011. by: /s/ KATHLEEN A. RIOTTO
05525083, February 8, 2011

09-489-CA
MARK A. COOK, and ELIZABETH COOK; any and all unknown parties claiming by, through, under or against the herein named individual Defendant(s) who are not known to be dead or alive, whether said unknown parties may claim an interest as spouses, heirs, devisees, grantees or other claimants; John Doe and Jane Doe as unknown tenants in possession, and UNITED STATES OF AMERICA, Defendants.

AMENDED NOTICE OF FORECLOSURE SALE
NOTICE is hereby given that P. DEWITT CASON, Clerk of the Circuit Court of Columbia County, Florida, will on the 16th day of FEBRUARY, 2011,
01-5S-16-03397-201 Parcel 1A
Begin at the Northwest corner of Lot 1, Cove at Rose Creek, a subdivision as recorded in Plat Book 8, Pages 107-109 of the Public Records of Columbia County, Florida, and run thence S 00°59'15" W, along the East maintained right of way of SW Walter Avenue, 555.21 feet to the North right of way of SW Emorywood Glen; thence S 47°14'30" E, along said North right of way, 21.85 feet; thence N 89°22'22" E, along said North right of way, 148.68 feet to the point of a curve; thence run Easterly along said North right of way, along the arc of said curve concave to the North having a radius of 470.00 feet, a central angle of 07°10'56", a chord bearing and distance of N 85°46'54" E 58.88 feet, an arc distance of 58.92 feet; thence N 12°43'13" W, 579.16 feet to the North line of aforesaid Lot 1; thence S 89°22'22" W, along said North line, 86.34 feet to the Point of Beginning.

pursuant to the Final Judgment of Foreclosure entered in a case pending in said Court, the style of which is as set out above, and the docket number of which is 09-489. Dated this 13th day of January, 2011.
P. DEWITT CASON, Clerk of the Circuit Court, Columbia County, Florida
By: B. Scippio, Deputy Clerk
05524975, February 1, 8, 2011

IN THE CIRCUIT COURT, THIRD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA. CASE NO. 10-434-CA
BULLARD MANAGEMENT SERVICES, INC., a Florida corporation, Plaintiff, vs. GERALD H. JOHNSON and BRENDA LEE HALL JOHNSON, Defendants.

NOTICE OF PUBLIC SALE
Notice is hereby given that the following described real property: SEE SCHEDULE "A" ATTACHED HERETO
SCHEDULE "A" - NOTICE OF PUBLIC SALE - BULLARD MANAGEMENT SERVICES, INC. vs. JOHNSON, et al
Lot 10 of San-Tucknee Estates, an unrecorded subdivision in Section 30, Township 6 South, Range 16 East of Columbia County, Florida. See below for full legal description:

DESCRIPTION: LOT 10
A PART OF THE SW 1/4 OF SECTION 30, TOWNSHIP 6 SOUTH, RANGE 16 EAST, MORE PARTICULARLY DESCRIBED AS FOLLOWS: COMMENCE AT THE SOUTHEAST CORNER OF SAID SW 1/4 AND RUN N 1°03'14" W, ALONG THE EAST LINE THEREOF, 683.47 FEET; THENCE S 87°03'30" W, 648.40 FEET FOR A POINT OF BEGINNING; THENCE S 87°03'30" W, 656.50 FEET; THENCE N 1°26'00" W, 661.19 FEET; THENCE N 86°29'50" E, 656.70 FEET; THENCE S 1°26'00" E, 667.62 FEET TO THE POINT OF BEGINNING. COLUMBIA COUNTY, FLORIDA. CONTAINING 10.01 ACRES, MORE OR LESS. SUBJECT TO AN INGRESS AND EGRESS EASEMENT OVER AND ACROSS THE NORTH AND EAST 30.00 FEET THEREOF.

DESCRIPTION: INGRESS AND EGRESS EASEMENT
AN INGRESS AND EGRESS EASEMENT IN THE SW 1/4 OF SECTION 30, TOWNSHIP 6 SOUTH, RANGE 16 EAST, OVER AND ACROSS THE FOLLOWING DESCRIBED PARCEL: COMMENCE AT THE NORTHWEST CORNER OF SAID SW 1/4 AND RUN N 88°08'53" E, ALONG THE NORTH LINE THEREOF, 657.24 FEET FOR A POINT OF BEGINNING OF SAID INGRESS AND EGRESS EASEMENT; THENCE CONTINUE N 88°08'53" E, ALONG SAID NORTH LINE, 60.00 FEET; THENCE S 1°26'00" E, 1294.70 FEET; THENCE N 87°53'02" E, 598.43 FEET; THENCE N 86°29'50" E, 687.80 FEET; THENCE S 1°26'00" E, 798.72 FEET; THENCE S 88°34'00" W, 60.00 FEET; THENCE N 1°26'00" W, 736.52 FEET; THENCE S 86°29'50" W, 625.59 FEET; THENCE S 87°53'02" W, 599.15 FEET; THENCE S 1°26'00" E, 732.57 FEET; THENCE S 88°34'00" W, 60.00 FEET; THENCE N 1°26'00" W, 2087.28 FEET TO THE POINT OF BEGINNING. COLUMBIA COUNTY, FLORIDA.

shall be sold by the Clerk of this Court, at public sale, pursuant to the Final Judgment in the above styled action dated January 27, 2011, at the Columbia County Courthouse in Lake City, Columbia County, Florida, at 11:00 A.M., on Wednesday, February 23, 2011, to the highest and best bidder for cash.
Any person claiming an interest in any surplus from the sale, other than the property owner as of the date of the notice of lis pendens, must file a claim within 60 days after the sale.

WITNESS my hand and the official seal in the State and County aforesaid this 27th day of January, 2011.
P. DEWITT CASON, Clerk of Court
By: /s/ J. Harris, Deputy Clerk
04543290, February 1, 8, 2011

PUBLIC NOTICE OF INTENT TO ISSUE AIR PERMIT
Florida Department of Environmental Protection, Air Resource Section, Northeast District Office
Draft Minor Source Air Construction Permit, Project No. 7770017-015-AC
Anderson Columbia Company, Inc., Plant #10, Columbia County, Florida

Applicant: The applicant for this project is Anderson Columbia Company. The applicant's authorized representative and mailing address is: Brian P. Schreiber, Secretary, Anderson Columbia Company, Inc., Plant #10, Post Office Box 1829, Lake City, Florida 32056.

Facility Location: Anderson Columbia Company operates the existing Plant No. 10, which is located in Columbia County 1 mile north of the intersection of I-75 and US Hwy 41, off of US Hwy 41 in Ellisville, Florida.

Project: This permit authorizes the addition of Columbia County as an authorized operating location in the permit.

Permitting Authority: Applications for air construction permits are subject to review in accordance with the provisions of Chapter 403, Florida Statutes (F.S.) and Chapters 62-4, 62-210 and 62-212 of the Florida Administrative Code (F.A.C.). The proposed project is not exempt from air permitting requirements and an air permit is required to perform the proposed work. The Permitting Authority responsible for making a permit determination for this project is the Department of Environmental Protection's Air Resource Section in the Northeast District Office. The Permitting Authority's physical address is: 7825 Baymeadows Way, Suite B200, Jacksonville, Florida 32256-7590.
The Permitting Authority's mailing address is: 7825 Baymeadows Way, Suite B200, Jacksonville, Florida 32256-7590. The Permitting Authority's telephone number is 904/256-1700.

Project File: A complete project file is available for public inspection during the normal business hours of 8:00 a.m. to 5:00 p.m., Monday through Friday (except legal holidays), at the physical address indicated above for the Permitting Authority. The complete project file includes the Draft Permit, the Technical Evaluation and Preliminary Determination, the application and information submitted by the applicant (exclusive of confidential records under Section 403.111, F.S.). Interested persons may contact the Permitting Authority's project engineer for additional information at the address and phone number listed above. In addition, electronic copies of these documents are available on the following web site: ...sion/apds/default.asp.

Notice of Intent to Issue Air Permit: The Permitting Authority gives notice of its intent to issue an air construction permit to the applicant for the project described above. The applicant has provided reasonable assurance that operation of proposed equipment will not adversely impact air quality and that the project will comply with all appropriate provisions of Chapters 62-4, 62-204, 62-210, 62-212, 62-296 and 62-297, F.A.C. The Permitting Authority will issue a Final Permit in accordance with the conditions of the proposed Draft Permit unless a timely petition for an administrative hearing is filed under Sections 120.569 and 120.57, F.S. or unless public comment received in accordance with this notice results in a different decision or a significant change of terms or conditions.

Comments: The Permitting Authority will accept written comments concerning the proposed Draft Permit for a period of 14 days from the date of publication of this Public Notice.
Written comments must be received by the Permitting Authority by close of business (5:00 p.m.) on or before the end of the 14-day period. If written comments received result in a significant change to the Draft Permit, the Permitting Authority shall revise the Draft Permit and require, if applicable, another Public Notice. All comments filed will be made available for public inspection.

Petitions: A person whose substantial interests are affected by the proposed permitting decision may petition for an administrative hearing in accordance with Sections 120.569 and 120.57, F.S. The petition must contain the information set forth below and must be filed with (received by) the Department's Agency Clerk in the Office of General Counsel of the Department of Environmental Protection at 3900 Commonwealth Boulevard, Mail Station #35, Tallahassee, Florida 32399-3000 (Telephone: 850/245-2241). Petitions filed by any persons other than those entitled to written notice under Section 120.60(3), F.S. must be filed within 14 days of publication of this Public Notice or receipt of a written notice, whichever occurs first. Under ..., failure of any person to file a petition within the appropriate time period shall constitute a waiver of that person's right to request an administrative determination (hearing) under Sections 120.569 and 120.57, F.S., or to intervene in this proceeding and participate as a party to it. Any subsequent intervention (in a proceeding initiated by another party) will be only at the approval of the presiding officer upon the filing of a motion in compliance with Rule 28-106.205, F.A.C.
A petition that disputes the material facts on which the Permitting Authority's action is based must contain the following information: (a) The name and address of each agency affected and each agency's file or identification number, if known; (b) The name, address and telephone number of the petitioner; the name, address and telephone number of the petitioner's representative, if any, which shall be the address for service purposes during the course of the proceeding; and an explanation of how the petitioner's substantial rights will be affected by the agency determination; (c) A statement of when and how the petitioner received notice of the agency action or proposed decision; (d) A statement of all disputed issues of material fact. If there are none, the petition must so state; (e) A concise statement of the ultimate facts alleged, including the specific facts the petitioner contends warrant reversal or modification of the agency's proposed action; (f) A statement of the specific rules or statutes the petitioner contends require reversal or modification of the agency's proposed action, including an explanation of how the alleged facts relate to the specific rules or statutes; and, (g) A statement of the relief sought by the petitioner, stating precisely the action the petitioner wishes the agency to take with respect to the agency's proposed action. A petition that does not dispute the material facts upon which the Permitting Authority's action is based shall state that no such facts are in dispute and otherwise shall contain the same information as set forth above, as required by Rule 28-106.301, F.A.C.

Because the administrative hearing process is designed to formulate final agency action, the filing of a petition means that the Permitting Authority's final action may be different from the position taken by it in this Public Notice of Intent to Issue Air Permit.
Persons whose substantial interests will be affected by any such final decision of the Permitting Authority on the application have the right to petition to become a party to the proceeding, in accordance with the requirements set forth above.

Mediation: Mediation is not available for this proceeding.
04543449, February 8, 9, 2011

010 Announcements

020 Lost & Found
LOST DOG: $100 Reward. Missing since week of 01/10 from Branford Hwy/Emerald Forrest. Brown Lab/bulldog mix, answers to Nikkie. 386-288-6786

100 Job Opportunities
04543385 NOW HIRING!!! We are now hiring experienced Class A Drivers ...

04543409 COTTAGE PARENTS
The Florida Sheriffs Boys Ranch is looking for couples to be full-time Cottage Parents. Responsibilities include the direct care and development of 10 boys, ages 8-18. Professional skill-based training & support provided. Help children develop social, academic, and independent living skills. Salary $47,000.00 per couple with housing, utilities, board, and benefits provided. High school diploma or GED required. For more information contact Linda Mather at (386) 842-5555, Imather@youthranches.org, or fax resume to (386) 842-1029. Employment application on line at ... (EOE/DRUG FREE WORKPLACE)

04543447 Join our family of caring professionals! ... hospiceofcitruscounty.org
Hospice of the Nature Coast, P.O. Box 641270, Beverly Hills, FL 34464. Fax: 352-527-9366. DFWP/EOE

05525012 Office Assistant
Full time permanent position in White Springs. Must have solid computer skills; office experience a must. Will train right person in our specialty. Opportunity for advancement.
Please EMAIL resume to hr@speced.org

05525065 THE HEALTH CENTER OF LAKE CITY
Has a full-time opening for Maintenance Director. Excellent salary. EOE/ADA/Drug Free Workplace. Apply in person or send resume to: 560 SW McFarlane Avenue, Lake City, FL 32025. Fax: 386-961-9296. Email: healthcenter@thehealthcenter.comcastbiz.net

A/C SERVICE Tech. Min 5 yrs experience. F/T with benefits. Please call 386-454-4767

Classified Department: 755-5440

2 Temporary Farm Workers needed. Employer: Gaines Gentry Thoroughbreds, Fayette Co, KY. Horses, Straw/Hay, Row Crop & Alternative Work. Employment Dates: 03/20/11 - 12/10/11. 6 months experience required. ...18884.

16 Temporary Farm Workers needed. Employer: John W. Camp, Todd Co, KY. Tobacco, Row Crop, & Alternative Work. Employment Dates: 03/23/11 - 12/24... ...9658.

H2A Employment Ad: Farm workers, planting, maintenance, harvest of fruits and vegs. $9.94/hr. 6 positions. Central Maryland. Temporary employment from mid-March to mid-Dec. There will be work for at least 3/4 of the work period. Tools provided. Housing at no cost, and transportation & subsistence expenses to worksite provided for workers whose permanent residence is out of area and who complete ... 0554484.

Non-emergency Drivers needed. PT, clean driving record. 386-752-2112

... individual for Sales Position. Rountree-Moore Ford Lincoln Mercury. Great benefits, paid vacation. Exp. a plus but not necessary. Call Chris @ 386-755-0630

05525050 Suwannee Medical Personnel
RN's for Med/Surg & Telemetry. Top daily pay, local medical centers. 1-877-630-6988

05525076 Nurse On Call Home Health Agency, Medicare certified, is now hiring RN, LPN, PT & ST. Sign on bonus for F/T. 352-395-6424, Fax 352-395-6519

Homecare LPN's & Homecare CNA's needed for client in Lake City. Call Maxim Healthcare Services 352-291-4888

Internal Medicine of Lake City is looking for N.P. or P.A. Please contact Dr. Bali @ 386-755-1703

PT CNA needed. Send resume to 826 SW Main Blvd Ste 102, Lake City, FL 32025.
ADvantage Tow Behind Grill/Smoker, $1,250 OBO. 386-249-3104 or 386-719-4802

100 Job Opportunities
6 TEMP FARMWORKERS, Mar 15-Nov 30. Webers Farm, Parkville MD 21234. Plant, maintain, harvest fruit/veg crops. Must be able to work outdoors (extreme heat, inclement weather), crouch, bend, sit on ground, reach, lift & carry up to 75 lbs. $9.94/hr, 3/4 guarantee for contract. Tools, supplies provided at no cost. Transportation, subsistence reimbursed if applicable upon 50% contract completion. Housing provided w/o cost to workers who cannot reasonably return to residence at end of work day. Report or send to nearest FL Agency of Workforce Innovation office & ref. job order #MD0853609

AVON!!! EARN up to 50%!!! Only $10 for Starter Kit. 1-800-275-9945 pin #4206

120 Medical Employment

310 Pets & Supplies
AKC GERMAN SHEPHERD puppy. Born 12/13. Parents on site. $400. 386-496-3654 or 352-745-1452

Free young male cat, has bob tail, loving. 386-755-0920

PITBULL PUPPY for sale. 7 weeks old. Parents on site. $250. GRAND CHAMPION BLOODLINES. 386-288-0...

Frost Free Refrigerator, nice, w/top freezer. White. $200 obo. 386-292-3927 or 386-984-0387

GE DISHWASHER, white. $75.00. Works good. 386-292-3927 or 386-984-0387

Kitchen or bathroom floor cabinet. $35. 386-292-3927 or 386-984-0387

630 Mobile Homes for Rent
2br/2 full bath SWMH ready to rent, Ft. White. $600/mo. 386-497-1464 or 365-1705

Nice DWMH, nice area, 3/2. Back porch/carport, country living. $675 month, 1st, last & $300 dep. Call 386-752-6333

640 Mobile Homes

730 Unfurnished Home For Rent
05524832 New Years Dream "Surprise": Why Rent? Lease to own. New model home. Large 3br/2ba house. In town. Fenced yard. $800 mo, 1st, last and security. 386-867-1212

LOVELY 3BR/1BA farm house for rent. Quiet country area. Please call after 5pm. 386-752-0017. Leave message.
Nice, private, quiet 2/1, 4 miles S of Lake City. $500 dep, $550 mo. 386-867-1833 or 386-590-0642

Prime location 2br/1ba. Resid'l or comm'l. Corner of Baya & McFarlane. $600 mo, $500 sec. 386-752-9144 or 755-2235

Rent/Sale: 3/2 on 9 beautiful fenced acres. Garage & other out buildings. $850 mo plus sec. dep. Wellborn area. 386-754-0732

Three Rivers Estates, 2/1, CH/A. 2010 W2 & ref's from current landlord req'd. Access to rivers. $675 mo, $600 sec. 386-497-4699

Turnkey rental, 3/2 split, 2 CG, 1/2 acre, quiet neighborhood close to I-75. $1050 per month, 1st/last/sec. 386-454-2826 or 954-895-1722

05524940 Palm Harbor Homes
Short Sales/Repo's/Used Homes. 3 or 4 Bedroom Doublewides. Won't Last!! $3,500 ... John 800-622-2832 Ext. 210

040 Business & Office Rentals
4/2 DWMH in Retirement Park. 2 porches, shed, extras. Reduced price. 386-752-4258

Owner Fin. 3/2 DWMH, new paint, carpet. Small down, $625/mon. 386-867-1833 or 386-590-0642

Royals Homes is Quality! We treat you like Family. Stop in or Call Catherine 386-754-6737

1800 SQ FT, $1100. Office furniture available and cubicle dividers. Water, sewer and garbage fees included. 386-752-4072

Ready to move: 3BR/2BA, great area, close to town, pool, no pets. Ref. req'd. $900 mo, $600 dep. 386-752-9144 days; 752-2803 or 397-3500 after 5p

780 Condos for Sale
3 bdrm Condo ..., back patio. HOA fees include ext. maintenance of home, lawn & pool. MLS #76797, $110,000. Call Missy Zecher @ Remax 386-623-0237

710 Unfurnished Apt.

805 Lots for Sale

1/1 apts for rent on Madison St. $500 month, $200 sec dep, utilities included (two available). 386-365-2515

2BR/1BA. Close to town. $565/mo plus deposit. Includes water & sewer. 386-965-2922

1Br's from $135/wk. Util. & cable incl. Sec 8 vouchers accepted, monthly rates avail. Call 386-752-2741

Updated apartments w/tile floors & fresh paint. Excellent location. From $525 + sec. Call Michelle 386-752-9626

720 Furnished Apts. For Rent
Rooms for Rent. Hillcrest, Sands, Columbia. All furnished.
Electric, cable, fridge, microwave. Weekly or monthly rates. 1 person $135, 2 persons $150 weekly. 386-752-5808

730 Unfurnished Home For Rent
4/3 Refurbished home w/CH/A for rent or sale, on east side of town. Call 386-294-2494 for details

1 acre lot outside the city limits. Homes-only subdivision. Priced below the assessed value with the county. $16,900. Hallmark Real Estate 386-867-1613

2 ac lot in river-access community. Suwannee River 1 mile away. Owner will finance. $13,500. Hallmark Real Estate 386-867-1613

2br/2ba Eastside Village. Unique floor plan. Lg utility/work room. Screened front porch. $55,000. 386-755-5110. Daniel Crapps Agency, Inc.

3/2 w/front deck and large Florida room, garage and other out bldgs on 9 beautiful fenced acres. $139,900. Neg. 386-754-0732

810 Home for Sale
2BR/2BA home w/1,592 SqFt in Eastside Village w/huge master suite, climatized Fla room, lg kitchen. $61,500. #76753. DANIEL CRAPPS AGENCY, INC.

... bdrm + office, 2 living & dining areas, front & back porch. $279,900. MLS #72831. Call Charlie Sparks @ Westfield Realty 386-755-0808

4/2, 2300-plus sq ft MH, corner lot in Piccadilly Park. Newly painted in/out. New carpet/vinyl. 2 car garage. Inground pool. $133,500. Century 21/The Darby Rogers Co. 386-752-6575

Cute 3/2 nicely remodeled home, 2 acres, partially fenced. $115,888. Call Brittany @ Results Realty 386-397-3473

Derington Properties, LLC: 3/2 MH, large deck and screened porch, 5 ac. Seller financing avail. $46,500. 386-965-4300
Your Own Local Candy Route. 25 Machines and Candy, All for $9995.00.
Help Wanted
17 DRIVERS NEEDED! Top 5% Pay! Excellent Benefits. New Trucks Ordered! Need CDL-A & 3 mos recent OTR. (877)258-8782
Drivers: FOOD TANKER DRIVERS NEEDED. OTR positions available NOW! CDL-A w/Tanker REQ'D. Outstanding pay & Benefits! Call a recruiter TODAY! (877)882-6537
Drivers: Earn Up to 39/mi. HOME SEVERAL NIGHTS & WEEKENDS. 1 yr OTR Flatbed exp. Call (800)572-5489 Susan ext. 227. SUNBELT TRANSPORT, LLC
Drivers / Teams: $1,000.00 SIGN ON BONUS! 100% O/Op Contractor Co. Dedicated Reefer Fleet. Run California, Midwest, East. Call (800)237-8288 or visit cocarriers.com
Driver: $.33/mile to $.42/mile based on length of haul, PLUS $.02/mile safety bonus paid quarterly. Van & Refrigerated. CDL-A w/3 mos current OTR experience. (800)414-9569
830 Commercial Property
Prime Commercial Property across from plaza, frontage on Baya. 3.27 acres, room for building. $398,888. 386-867-1271. Call Nancy @ Results Realty
It's Tax Time
Recreational Vehicles
Homestead Ranger Travel Trailer, 28 ft. One slideout. Fiberglass, Awning, sleeps 8. $11,000. (850)322-7152
Lake City Reporter
Miscellaneous
ATTEND COLLEGE ONLINE from Home. *Medical, *Business, *Paralegal, *Criminal Justice. Job placement assistance. Computer available. Financial Aid if qualified. SCHEV certified. Call (877)206-5165
AIRLINES ARE HIRING: Train for high paying Aviation Maintenance Career. FAA approved program. Financial aid if qualified. Housing available. CALL Aviation Institute of Maintenance (866)314-3769
Out of Area Real Estate
Own 20 Acres, Only $129/mo. $13,900, near growing El Paso, Texas (safest city in America!). Low down, no credit checks, owner financing. Free map/pictures. (866)485-4364
Schools & Education
Heat & Air JOBS. Ready to work? 3 week accelerated program. Hands on environment. Nationwide certifications and Local Job Placement Assistance!
(877)994-9904. Approved for VA education benefits.
Learn to Operate a Crane or Bulldozer. Heavy Equipment Training. National Certification. Georgia School of Construction. Use code "FLCNH". (866)218-2763
ANF: ADVERTISING NETWORK OF FLORIDA, Classified / Metro Daily. Week of February 7, 2011
http://ufdc.ufl.edu/UF00028308/01373
Do these scripts use strict and warnings? If they don't, it would be a good idea to make them do so. This will considerably simplify your maintenance task in future. See my node Thoughts on revisiting old code.

Secondly, a profiler such as Devel::DProf can help with your subroutines and %INC. You might also consider Devel::SmallProf, which gives execution times and counts for each line of code. BTW, you don't want the profiler turned on normally, as it considerably slows down execution of the scripts.

But I was moved to check again since I remember finding (and I think mentioning on PM, somewhere..) someone who had done a cool demonstration of graphing his perl program at runtime. I couldn't find it but think this is probably GraphViz. Answer: Generate UML from Perl code? includes four responses, including autodia. Generate a Graph-ical call tree for your *.pm perl modules uses graphviz at AT&T. There is an article at DDJ about using graphviz. A cpan search for graphviz showed me some other things like GraphViz::ISA and Joe McMahon's site on GraphViz::Data::Structure.

Hope this helps.

Unfortunately, since there's no such thing as named parameters in Perl, there's no real way to find out about a function's parameters either. (Update: That is, there is no introspective data structure where parameters could be read from; you have, as described below, to hook the function calls and dump the passed data to find out what's going on.)

A namespace's globals are stored in a hash that is named after it, with the double colon; for example, %main::. To find out all the globals defined in package main, you could iterate over keys %main::, and query each entry's type using e.g. ref $main::{some_global}.

A very useful module that comes to mind is Hook::WrapSub. You could iterate over the keys in %SomeNamespace::, find which ones are subs, and wrap those in a function that logs debugging information, including the parameters in @_.
You could also tie the global variables of the namespaces to wrapper classes that log their use.

Makeshifts last the longest.

Unfortunately since there's no such thing as named parameters in Perl there's no real way to find out about a function's parameters either.

there are no named params in perl 5 (they're coming in perl 6.) you can find out about a function's parameters, though. use Devel::TraceSubs, my first cpan module (you'll have to install Hook::LexWrap first.) you specify namespaces in which you want to trace the subroutines, and it reports in text or wrapped (read html) format. optionally, you can see the parameters passed to the subs. version 0.01 prints only scalars, so you won't see data inside references. you can see the stack depth for each call, and figure out how each piece of code relates to the rest. i'm working on changes (thanks to Jenda,) which i hope to put in version 0.02 (this weekend?) you'll be able to see params passed through Data::Dumper, and sub entry/exit timestamps. it will also fix some other bugs related to modules it's not allowed to trace (Carp, Data::Dumper, and any others i find.)

oh, and i think you'll be better able to solve your overall problem by reading up on refactoring. Martin Fowler's book "Refactoring: Improving the Design of Existing Code" is a really good one. in case you don't get a chance to read it, i'll give you a few hints: don't refactor and add functionality at the same time, and take small steps. read the book, it's tremendous.

~Particle *accelerates*
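The wrap-every-sub-and-log-its-arguments idea behind Hook::WrapSub and Devel::TraceSubs is not Perl-specific. As a cross-language sketch (written in Python here rather than Perl, purely for illustration; the function names below are invented), the same pattern — walk a namespace's symbol table, replace each sub with a wrapper that logs its arguments and call depth — looks like this:

```python
import functools

_depth = 0  # current call-nesting depth, for indented trace output

def traced(fn):
    """Wrap fn so each call logs its name, arguments, and nesting depth."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _depth
        print("  " * _depth + f"-> {fn.__name__}{args}")
        _depth += 1
        try:
            return fn(*args, **kwargs)
        finally:
            _depth -= 1
    return wrapper

def trace_subs(namespace, names):
    """Replace the named functions in a namespace dict with traced versions,
    analogous to iterating over %SomeNamespace:: in Perl."""
    for name in names:
        namespace[name] = traced(namespace[name])

# A toy call chain to trace.
def outer(x):
    return inner(x) + 1

def inner(x):
    return x * 2

trace_subs(globals(), ["outer", "inner"])
result = outer(3)  # logs outer then inner, indented by call depth; result is 7
```

Because `outer` looks `inner` up in the (now-patched) global namespace at call time, the wrapper sees the nested call too, giving the same stack-depth view the thread describes.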
http://www.perlmonks.org/index.pl?node_id=176463
Hello all! Just started taking an evening course in C (brand new to programming) and am enjoying it immensely. I am however having a couple of problems with understanding what I know are pretty basic situations. Can someone tell me why my program compiles, but then when I try to execute it, it gives a Windows error and crashes? It is supposed to emulate a simple calculator, reading from a disk file and writing to another, with error messages if you try to divide by 0 or use an unknown operator. Any advice is appreciated, and the input file is in the directory with the .c and .in files.

#include <stdio.h>

int main()
{
    FILE* infile;
    FILE* outfile;
    int x, y, answer;
    char op;

    infile = fopen("calc4.in", "r");
    outfile = fopen("calc4.out", "w");

    do
    {
        // Read x op y from infile
        fscanf(infile, "%d %c %d", &x, &op, &y);

        answer = (x + y) || (x - y) || (x * y) || (x / y);

        if (op != ('+' || '-' || '*' || '/'))
            fprintf(outfile, "\n Unknown Operator!");
        else if (y == 0)
            fprintf(outfile, "\n Division by Zero is not allowed!");
        else
            fprintf(outfile, "\n%d %c %d = %d", x, op, y, answer);
    } while (!((x == 0) && (y == 0)));

    return 0;
}
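The crash described above is most likely fopen returning NULL (the program never checks it before calling fscanf), and there are two logic bugs besides: `answer = (x + y) || (x - y) || ...` evaluates to 0 or 1 rather than an arithmetic result, and `op != ('+' || '-' || '*' || '/')` compares op against the boolean value 1, not against each character. A corrected sketch of the core logic (function names and the exact "x op y" input format are assumptions, not from the original post):

```c
#include <stdio.h>

/* Compute x op y; returns 1 on success, 0 for an unknown operator
   or an attempted division by zero. */
int calc(int x, char op, int y, int *result)
{
    switch (op) {              /* pick ONE operation based on op */
    case '+': *result = x + y; return 1;
    case '-': *result = x - y; return 1;
    case '*': *result = x * y; return 1;
    case '/':
        if (y == 0)
            return 0;          /* division by zero is not allowed */
        *result = x / y;
        return 1;
    default:
        return 0;              /* unknown operator */
    }
}

/* Process "x op y" lines from in, writing results to out, stopping at 0 ? 0.
   Returns 0 on success, 1 if either stream is NULL (the unchecked-fopen crash). */
int run_calculator(FILE *in, FILE *out)
{
    int x, y, answer;
    char op;

    if (in == NULL || out == NULL)
        return 1;              /* this missing check is the likely crash */

    while (fscanf(in, "%d %c %d", &x, &op, &y) == 3) {
        if (calc(x, op, y, &answer))
            fprintf(out, "%d %c %d = %d\n", x, op, y, answer);
        else if (op == '/' && y == 0)
            fprintf(out, "Division by Zero is not allowed!\n");
        else
            fprintf(out, "Unknown Operator!\n");
        if (x == 0 && y == 0)
            break;
    }
    return 0;
}
```

A main would then call `run_calculator(fopen("calc4.in", "r"), fopen("calc4.out", "w"))` and report an error instead of crashing when the return value is 1, i.e. when calc4.in was not found.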
http://cboard.cprogramming.com/c-programming/26383-newbie-needs-advice.html
As the title says, my calculator's Display doesn't work as expected; I passed in values for p elements to be displayed on the calculator's screen, so that when the page first loads there's a "0" there, but on page load the display screen is empty. And once I click something, the display element just collapses. I need help figuring out what I did wrong and how I should fix it.

I also need to know if it's okay to attach multiple event listeners in the same useEffect effect, or if I should make a different useEffect call for each event listener I need to add. Also, is it a good idea to use document.addEventListener or should I do it a different way?

Edit: I tried doing console.log(Display.props.currentValue) in Display.js, here:

import React from "react";
import PropTypes from "prop-types";

const Display = props => {
  return (
    <div id="display">
      <p id="stored">{props.storedValue}</p>
      <p id="current">{props.currentValue}</p>
    </div>
  );
};

Display.propTypes = {
  storedValue: PropTypes.string.isRequired,
  currentValue: PropTypes.string.isRequired
};

console.log(Display.props.currentValue);

export default Display;

And it gives me this error:

TypeError: Cannot read property 'currentValue' of undefined

How did Display.props become undefined? I also experimented a bit by putting Display component's definition inside App.js, but that didn't work either. Really. How do I fix this?
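The TypeError here follows from how function components work: Display is just a function, and props only exist as the argument React passes in when it renders the component. The function object itself never gets a .props property, so Display.props at module scope is undefined. A plain-JavaScript sketch (no React or JSX, so the object shape below is a stand-in, not real React output) shows the distinction:

```javascript
// Stand-in for a function component: props arrive as the call argument.
const Display = props => ({
  tag: "div",
  children: [props.storedValue, props.currentValue]
});

// The function object has no `props` property; this is the question's error:
console.log(Display.props);                  // undefined
// console.log(Display.props.currentValue); // TypeError: reading from undefined

// Props are only visible inside a call, i.e. during a render:
const rendered = Display({ storedValue: "", currentValue: "0" });
console.log(rendered.children[1]);
```

So to inspect a prop, log props.currentValue inside the component body (or use React DevTools on a rendered instance), rather than on the component function from module scope.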
https://forum.freecodecamp.org/t/need-help-with-react-calculator-display-component-not-working-correctly/422204
Say the matrix A has R rows and C columns. At each point (r, c), we look at all nine candidate neighbors (nr, nc), counting how many are actually in the matrix (size) and summing their values (value). A neighbor is in the matrix if 0 <= nr < R and 0 <= nc < C. The answer at (r, c) is the total value, divided (with integer division) by the number of in-bounds neighbors.

def imageSmoother(self, A):
    R, C = len(A), len(A[0])
    ans = [[0] * C for _ in A]
    for r in range(R):
        for c in range(C):
            value = size = 0
            for nr in (r - 1, r, r + 1):
                for nc in (c - 1, c, c + 1):
                    if 0 <= nr < R and 0 <= nc < C:
                        value += A[nr][nc]
                        size += 1
            ans[r][c] = value // size
    return ans
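For a quick check, the method above can be dropped into the usual LeetCode Solution class and run on a small sample grid (the input below is just an illustration):

```python
class Solution:
    def imageSmoother(self, A):
        # Same neighbor-averaging approach as described above.
        R, C = len(A), len(A[0])
        ans = [[0] * C for _ in A]
        for r in range(R):
            for c in range(C):
                value = size = 0
                for nr in (r - 1, r, r + 1):
                    for nc in (c - 1, c, c + 1):
                        if 0 <= nr < R and 0 <= nc < C:
                            value += A[nr][nc]
                            size += 1
                ans[r][c] = value // size
        return ans

grid = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(Solution().imageSmoother(grid))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Each corner averages 4 cells, each edge 6, and the center 9; since every sum here falls below the neighbor count, integer division floors everything to 0.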
https://discuss.leetcode.com/topic/100181/python-straightforward-with-explanation
Dfect 0.1.0: Assertion testing library for Ruby

Dfect is an assertion testing library for Ruby that emphasizes a simple assertion vocabulary, instant debuggability of failures, and flexibility in composing tests. Dfect is exciting because:

* It has only 5 methods to remember: D F E C T.
* It lets you debug assertion failures interactively.
* It keeps a detailed report of assertion failures.
* It lets you nest tests and execution hooks.
* It is implemented in a mere 313 lines of code.

Version 0.1.0 (2009-04-28)

This release adds new variations to assertion methods, fixes several bugs, and improves test coverage.

Thank you

* François Beausoleil contributed patches for both code and tests! :-)

New features

* Added negation (m!) and sampling (m?) variations to assertion methods. These new methods implement assertion functionality missing so far (previously we could not assert that a given exception was NOT thrown) and thereby allow us to fully test Dfect using itself.
* Added documentation on how to insulate tests from the global Ruby namespace.

Bug fixes

* The E() method did not consider the case where a block does not raise anything as a failure. --François Beausoleil
* When creating a report about an assertion failure, an exception would be thrown if any local variables pointed to an empty array.
* The Dfect::<() method broke the inheritance-checking behavior of the < class method. Added a bypass to the original behavior so that RCov::XX can properly generate a report about code that uses Dfect.
* Added a workaround for a YAML error when serializing a class object: TypeError: can't dump anonymous class Class

Housekeeping

* Filled the big holes in test coverage. Everything except the runtime debugging logic is now covered by the unit tests.
https://www.ruby-forum.com/t/ann-dfect-0-1-0/166750
SwiftUI Snapshot Testing

Snapshot testing is a technique that has been very popular in the web development world, and it seems like a great way to test SwiftUI user interfaces. I read about snapshot tests in a recent blog post and was intrigued, but I had some difficulty getting it to work, so when I finally succeeded, I decided to share my experiences in the hope that others will find them useful.

What is Snapshot Testing

Unit testing checks that when you call various functions or methods with certain inputs, you get the output you expect. I use unit tests for testing my models and the methods that change them. But this only tests the logic behind the app; it does nothing to test whether the app is displaying what it should, or whether it is responding correctly to the user's actions. UI testing emulates user actions by faking taps, clicks, text entry and so on, and checks that labels, buttons etc. are showing the correct information after these fake interactions.

Snapshot testing is in between these two, as it effectively takes a picture of the interface. The first time you run the test it will store an image, and all subsequent test runs will check that the current interface matches this stored image. If there are any differences, the test will fail so you can decide whether to keep the new version or revert to what you had before.

How did I get started?

I first read about the idea of using snapshot testing for SwiftUI in a blog post by Vadim Bulavin. He made a very good argument for this, but I found his instructions assumed more knowledge of the topic than I had at the time, and so I discarded the idea after an initial attempt. But he was suggesting using a snapshotting library published by Point-Free, and I later discovered a link to one of their videos where they discuss this exact thing: SwiftUI Snapshot Testing. This was enough to get me going with attempt #2.
Setting up an app for snapshot testing

Since the blog post and video were talking about iOS apps, I decided to start there, but you know me, I will get to macOS apps later…

First off, I created a single view iOS app using SwiftUI, making sure to check "Include Unit Tests", but not "Include UI Tests". I created a simple view so I had something to test. If you want to use this on an app that does not already have a unit tests target, go to the project settings, click the + button to add a new target and choose a Unit Testing Bundle.

Next step was to import the snapshot testing library using Swift Package Manager. Go to File > Swift Packages > Add Package Dependency. Paste in the URL below and click Next. I accepted the default versioning suggestions on the next pane. On the final pane, it is important to select the correct target for this package. Select the app's test target, not the app itself and not the UI test target if you have one. I made this mistake on my first try as I assumed that snapshot testing would be part of UI testing, but it is actually part of unit testing.

Writing a Snapshot Test

Now I added a new Unit Test Case Class file to the tests target in my app. I had to import SwiftUI and SnapshotTesting into this test file as well as declaring the app as a testable import. The easiest way to do this is to copy the @testable import heading from the sample test file to make sure it is exactly right. The import needs to match the name of your app module. Finally it's time to write the first snapshot test:

import XCTest
import SnapshotTesting
import SwiftUI

@testable import Snapshots

class SnapshotsTests: XCTestCase {
    func testDefaultAppearance() {
        let contentView = ContentView()
        assertSnapshot(matching: contentView, as: .image)
    }
}

This uses the snapshot library's assertSnapshot method to save the content view as an image. But unfortunately, this doesn't work.
The problem is that the second parameter is a Snapshotting strategy that can convert various UI elements into some form of data or image. But the library doesn't know what a SwiftUI View is, so it needs a way to convert the view into something that can be recognized by the snapshotter. I added this extension to SwiftUI's View that wraps the SwiftUI View in a UIHostingController. It returns this as a UIViewController, which is a valid input for a snapshotter and can be converted to an image.

extension SwiftUI.View {
    func toVC() -> UIViewController {
        let vc = UIHostingController(rootView: self)
        vc.view.frame = UIScreen.main.bounds
        return vc
    }
}

Now my first test became:

func testDefaultAppearance() {
    let contentView = ContentView()
    assertSnapshot(matching: contentView.toVC(), as: .image)
}

And it worked. Or rather it failed as expected, because there was no image to compare it with. Checking the error message, I was able to see where it had created the snapshot image file, which I could look at. And the second time I ran the test, it passed.

If you ever get an error message saying "No such module 'SnapshotTesting'", use Shift-Command-U to re-build for testing. This usually only happens after you have cleaned your build folder.

Testing a Change

Now that I had a passing test, the next thing was to check what happens if the UI changes. This may be due to a deliberate change or because the cat just walked across your keyboard (a not infrequent occurrence around here). Where I originally had a button with the label "Save", I decided to change this to "OK" (rejecting the cat's suggestion of "q2eegrnh"). Running the test again produced this result:

And I was then able to compare the 2 images, using the path to the failing image from the error message. Once I had confirmed that the new image was what I wanted and not a result of error, either feline or human, I set the test to record a new result so that the new version became the official test version.
func testDefaultAppearance() {
    let contentView = ContentView()
    record = true
    assertSnapshot(matching: contentView.toVC(), as: .image)
}

This caused a failing test again as the new version was written to the Snapshots folder, but after removing the record = true line and re-running the test, it passed again, with my new button label now an accepted part of the test.

Using Snapshots with State

In SwiftUI, the UI displayed is a function of state, so changing state properties changes the UI. This is what makes snapshot testing really good for SwiftUI apps, as you can change the state programmatically and confirm that this is reflected in the UI. So having proved that the snapshot tests worked, I decided to move on and test it with my new anagram assistant app. This is quite a simple app that has a single AppState class that holds all the program data. So I was able to write a suite of tests that changed the state in various ways and then snap-shotted the UI with that state. Here are a couple of examples:

func testEmptyContentView() {
    let contentView = ContentView()
    assertSnapshot(matching: contentView.toVC(), as: .image)
}

func testAfterLocking() {
    var contentView = ContentView()
    let appState = AppState.sampleState()
    appState.availableLetters.sort()
    appState.selectedLetterIndex = 1
    appState.placeSelectedLetter(at: 3)
    appState.toggleLockedState()
    appState.availableLetters.sort()
    contentView.appState = appState
    assertSnapshot(matching: contentView.toVC(), as: .image)
}

This worked really well with only one slight problem. As the state arranges the availableLetters array randomly for display, I had to add a sort to make sure they always displayed in the same order, which made the tests repeatable.
And as a bonus, I was able to test a screen in dark mode with this test, which sets the colorScheme:

func testDarkMode() {
    var contentView = ContentView()
    contentView.appState = sampleAppState()
    assertSnapshot(
        matching: contentView.colorScheme(.dark).toVC(),
        as: .image)
}

Accessibility Tests

iOS supports dynamic type, and if your app uses standard font styles, it will adopt these dynamic sizes automatically. I can't find the link right now, but I remember reading an article that said nearly half of all iPhone users change the default text size, setting it either smaller or larger. With snapshot testing, it is quick and easy to get a view of how your app looks with different font sizes. Here is my test function for taking a snapshot of every possible font size variation.

func testDynamicFonts() {
    var contentView = ContentView()
    contentView.appState = sampleAppState()
    for contentSize in ContentSizeCategory.allCases {
        assertSnapshot(
            matching: contentView.environment(\.sizeCategory, contentSize).toVC(),
            as: .image,
            named: "\(contentSize)")
    }
}

For the settings screen, I decided that smaller fonts were not a problem, but I wanted to check the two largest options, so I used this test function:

func testSettingsScreen() {
    let settingsView = SettingsView()
    assertSnapshot(matching: settingsView.toVC(), as: .image)
    assertSnapshot(
        matching: settingsView.environment(
            \.sizeCategory, ContentSizeCategory.accessibilityExtraExtraExtraLarge
        ).toVC(),
        as: .image, named: "AccessibilityXXXL")
    assertSnapshot(
        matching: settingsView.environment(
            \.sizeCategory, ContentSizeCategory.extraExtraExtraLarge
        ).toVC(),
        as: .image, named: "XXXL")
}

This let me quickly see where the problems were and what I needed to adjust.

Snapshot Tests for Mac Apps

You knew you weren't going to get through this without me going on about Mac apps… Snapshot tests for a Mac app work well, with one caveat.
First I had to change the Swift.View extension so that it returned an NSViewController instead of a UIViewController.

extension SwiftUI.View {
    func toVC() -> NSViewController {
        let vc = NSHostingController(rootView: self)
        vc.view.frame = CGRect(x: 0, y: 0, width: 1024, height: 768)
        return vc
    }
}

I chose an arbitrary size for the snapshot; you just need to make sure your UI will fit into whatever size you select.

The real problem was with sand-boxing. The snapshot library was blocked from writing the image files to the project directory if the app was sand-boxed. This seems really peculiar, since Xcode is running the tests and Xcode writes to the project directory all the time! I found two ways around this:

- Turn off sand-box mode temporarily while testing.
- Make a non-sand-boxed target and use it for testing against.

Neither of these is particularly great. Option 1 is tedious, although I think it can work if the snapshots remain the same; it only fails if there is a change that it needs to write to disk. Option 2 is tedious to set up (contact me if you would like more details) but is more seamless after that. The best solution would be for Xcode to allow you to turn off sand-boxing for a test target. Maybe Xcode 12…

Limitations of Snapshot Testing

Ignoring the Mac and concentrating only on iOS apps for the moment, there were a few issues:

You have to run your tests against the same simulator every time, or at least against a simulator with the same screen dimensions. I decided to use the iPhone SE (2nd generation) as it has a small screen and I find smaller screens to be more of a problem than large ones. You also need to make sure it is always using the same appearance, light or dark, unless you want to specify this for every test.
I ended up with this setup function that ran before my snapshot test suite:

static override func setUp() {
    let device = UIDevice.current.name
    if device != "iPhone SE (2nd generation)" {
        fatalError("Switch to using iPhone SE (2nd generation) for these tests.")
    }
    UIView.setAnimationsEnabled(false)
    UIApplication.shared.windows.first?.layer.speed = 100
    record = false
}

This uses a couple of tricks that are supposed to speed up tests, has a record setting that I could set for the entire suite if I wished, and throws a fatalError if I select the wrong device or simulator. It would be neater if Xcode allowed you to select a simulator in the test target build settings, but I think you can only do this if you run tests from the command line.

Snapshot tests confirm that the UI matches the state, but they do not check to see if the state changes in response to user input. That is the missing link that UI testing provides, but even without that, I believe that snapshot testing is a very useful tool and much better than having no form of UI testing at all.

You need to look at your snapshots. This may sound obvious, but the snapshot library creates a set of images, and these images are then set as the goal for future tests. If you don't check that they are correct, then every test could be confirming that the UI is wrong but unchanged. If the tests report a difference, look at both copies and see which one is right. For the same reason, the snapshot images need to be included in your version control repository.

Summary

Will I use snapshot tests for my SwiftUI apps? Yes, definitely. I use unit tests for my model classes but mostly avoid UI tests as they are too clumsy to write and time-consuming to run. Snapshot tests are better for SwiftUI, and very fast. Huge thanks to Vadim Bulavin for the original inspiration for this article. Go and read his blog post for a more detailed look.
And thanks to Brandon Williams & Stephen Celis at Point-Free for getting me going after my initial discarding of the idea. Any mistakes or errors are mine and not theirs.

If you want to learn about UI testing for SwiftUI apps, I recommend watching azamsharp's YouTube video: User Interface Testing for SwiftUI Applications.

As always, if you have any comments, suggestions or ideas, I would love to hear from you. Please contact me using one of the links below or through my Contact page.
https://troz.net/post/2020/swiftui_snapshots/
Automated caching and invalidation for the Django ORM

Project description

Django-cachebot provides automated caching and invalidation for the Django ORM.

Installation

easy_install django-cachebot

or

pip install django-cachebot

Add cachebot to your INSTALLED_APPS.

Set a cache backend to one of the backends in cachebot.backends, for instance:

CACHE_BACKEND = 'cachebot.backends.memcached://127.0.0.1:11211/?timeout=0'

Currently supported backends are:

cachebot.backends.dummy
cachebot.backends.memcached
cachebot.backends.pylibmcd

Cachebot monkey patches the default Django manager and queryset to make CacheBotManager and CachedQuerySet the defaults used by your Django project.

Usage

Suppose you had a query that looked like this and you wanted to cache it:

Photo.objects.filter(user=user, status=2)

Just add .cache() to the queryset chain, like so:

Photo.objects.cache().filter(user=user, status=2)

This query will get invalidated if any of the following conditions are met:

1. One of the objects returned by the query is altered.
2. The user is altered.
3. A Photo is modified and has status = 2.
4. A Photo is modified and has user = user.

This invalidation criteria is probably too cautious, because we don't want to invalidate this cache every time a Photo with status = 2 is saved. To fine-tune the invalidation criteria, we can specify to only invalidate on certain fields. For example:

Photo.objects.cache('user').filter(user=user, status=2)

This query will get invalidated if any of the following conditions are met:

1. One of the objects returned by the query is altered.
2. The user is altered.
3. A Photo is modified and has user = user.
django-cachebot can also handle select_related, forward relations, and reverse relations, e.g.:

Photo.objects.select_related().cache('user').filter(user__username="david", status=2)
Photo.objects.cache('user').filter(user__username="david", status=2)
Photo.objects.cache('message__sender').filter(message__sender=user, status=2)

Settings

CACHEBOT_CACHE_GET

default: False

If CACHEBOT_CACHE_GET = True, all objects.get queries will automatically be cached. This can be overridden at the manager level like so:

class Photos(models.Model):
    ...
    objects = models.Manager(cache_get=True)

CACHEBOT_CACHE_ALL

default: False

If CACHEBOT_CACHE_ALL = True, all queries will automatically be cached. This can be overridden at the manager level like so:

class Photos(models.Model):
    ...
    objects = models.Manager(cache_all=True)

CACHE_PREFIX

default: ''

Suppose you have a development and production server sharing the same memcached server. Normally this is a bad idea because each server might be overwriting the other server's cache keys. If you add CACHE_PREFIX to your settings, all cache keys will have that prefix appended to them so you can avoid this problem.

Caveats (Important!)

django-cachebot requires django 1.2 or greater.

Adding/removing objects with a ManyRelatedManager will not automatically invalidate. This is because signals for these types of operations are not in Django until 1.2. Until then, you'll need to manually invalidate these queries like so:

from cachebot.signals import invalidate_object

user.friends.add(friend)
invalidate_object(user)
invalidate_object(friend)

count() queries will not get cached.

If you're invalidating on a field that is in a range or exclude query, these queries will get invalidated when anything in the table changes.
For example, the following would get invalidated when anything on the User table changed:

Photo.objects.cache('user').filter(user__in=users, status=2)
Photo.objects.cache('user').exclude(user=user, status=2)

You should probably use a tool like django-memcache-status to check on the status of your cache. If memcache overfills and starts dropping keys, it's possible that your queries might not get invalidated.

.values_list() doesn't cache yet. You should do something like this instead:

[photo['id'] for photo in Photo.objects.cache('user').filter(user=user).values('id')]

It's possible that there are edge cases I've missed. django-cachebot is still in its infancy, so you should still double check that your queries are getting cached and invalidated. Please let me know if you notice any weird discrepancies.

Dependencies

- Django 1.2
https://pypi.org/project/django-cachebot/0.2.3/
I have a list of numbers and I would like to remove the LAST odd number from it. This code works well only when the last odd number is not repeated earlier in the list:

numbers = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 7]
odd_numbers = []

def remove_last_odd(numbers):
    for n in numbers:
        if n % 2 != 0:
            odd_numbers.append(n)
    numbers.remove(odd_numbers[-1])
    return numbers

Not the most efficient way, but it will work:

def remove_last_odd(numbers):
    rnumbers = numbers[::-1]
    for n in rnumbers:
        if n % 2 != 0:
            rnumbers.remove(n)
            break
    return rnumbers[::-1]

Basically, do this: reverse the list, remove the first odd number, reverse again, and return.
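The double-reverse works, but you can also walk the indices from the end and delete in place, which avoids copying the list twice and sidesteps the repeated-value problem entirely (a sketch, not from the original thread):

```python
def remove_last_odd(numbers):
    # Scan indices from the end; `del` by index removes exactly that element,
    # so repeated odd values earlier in the list are never touched.
    for i in range(len(numbers) - 1, -1, -1):
        if numbers[i] % 2 != 0:
            del numbers[i]
            break
    return numbers

nums = [1, 7, 2, 34, 8, 7, 2, 5, 14, 22, 93, 48, 76, 15, 7]
print(remove_last_odd(nums))  # the final 7 is gone, the earlier 7s remain
```

Note this mutates the list in place (like the original), and simply returns it unchanged when it contains no odd numbers.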
https://codedump.io/share/uJg5JqPRMSWl/1/how-to-remove-last-odd-number-in-a-list
These values are hard to remember. It is much easier to remember information that, for example, relates to a test score by the name testScore. By declaring a variable, you can refer to the reserved memory by the variable's name, which is much easier to remember and identify with the stored information than is the hexadecimal address. While declaring a variable is relatively simple, requiring only one line of code, much is happening behind the scenes. The program at the end of this section will show you how to determine the address and size of the memory reserved by declaring a variable.

You have to declare a variable before you can use it. Declaring a variable involves the following syntax:

[data type] [variable name];

The data type may be any of the ones discussed in Chapter 2, including int, float, bool, char, or string. The data type tells the computer how much memory to reserve. As you learned in Chapter 2, different data types have different sizes in bytes. If you specify a data type with a size (on your compiler and operating system) of 4 bytes, then the computer will reserve 4 bytes of memory. You choose the variable name; how you name a variable is discussed later in the section Naming the Variable. The name is an alias by which you can refer in code to the area of reserved memory. Thus, when you name a variable that relates to a test score testScore, you can refer in code to the reserved memory by the name testScore instead of by a hexadecimal value such as 0012FED4. Finally, the variable declaration ends with a semicolon. The semicolon tells the compiler that the statement has ended.

You can declare a variable either within a function, such as main, or above all functions, just below any include directives. Since for now our programs have only one function, main, we will declare all variables within main. When our programs involve more than one function, we will revisit the issue of where to declare variables.
The following statement declares in main an integer variable named testScore.

int main(void)
{
    int testScore;
    return 0;
}

You will receive a compiler error if you refer to a variable before declaring it. In the following code, the reference to testScore will cause the compiler error undeclared identifier.

int main(void)
{
    testScore;
    int testScore;
    return 0;
}

This compiler error will occur even though the variable is declared in the very next statement. The reason is that the compiler reads the code from top to bottom, so when it reaches the first reference to testScore, it has not seen the variable declaration. This undeclared identifier compiler error is similar to the one in the Hello World! project in Chapter 1 when we (deliberately) misspelled cout as Cout. Since testScore is not a name built into C++, like main and int, the compiler does not recognize it. Once you declare a variable, the compiler recognizes further references to the variable name as referring to the variable that you declared.

If you have several variables of the same data type, you could declare each variable in a separate statement.

int testScore;
int myWeight;
int myHeight;

However, if the variables are of the same data type, you don't need to declare each variable in a separate statement. Instead, you can declare them all in one statement, separated by commas. The following one statement declares all three integer variables:

int testScore, myWeight, myHeight;

The data type int appears only once, even though three variables are declared. The reason is that the data type qualifies all three variables, since they appear in the same statement as the data type. However, the variables must all be of the same data type to be declared in the same statement. You cannot declare an int variable and a float variable in the same statement. Instead, the int and float variables would have to be declared in separate statements.
int testScore;
float myGPA;

Variables, like people, have names, which are used to identify the variable so you can refer to it in code. There are only a few limitations on how you can name a variable. The variable name cannot begin with any character other than a letter of the alphabet (A-Z or a-z) or an underscore (_). Secret agents may be named 007, but not variables. However, the second and following characters of the variable name may be digits, letters, or underscores. The variable name cannot contain embedded spaces, such as My Variable, or punctuation marks other than the underscore character (_). The variable name cannot be the same as a word reserved by C++, such as main or int. The variable name cannot have the same name as the name of another variable declared in the same scope. Scope is an issue that will be discussed in Chapter 8. For present purposes, this rule means you cannot declare two variables in main with the same name.

Besides these limitations, you can name a variable pretty much whatever you want. However, it is a good idea to give your variables names that are meaningful. If you name your variables var1, var2, var3, and so on, up through var17, you may find it difficult to later remember the difference between var8 and var9. And if you find it difficult, imagine how difficult it would be for a fellow programmer, who didn't even write the code, to figure out the difference. In order to preserve your sanity, or possibly your life in the case of enraged fellow programmers, I recommend you use a variable name that is descriptive of the purpose of the variable. For example, testScore is descriptive of a variable that represents a test score. The variable name testScore is a combination of two names: test and score. You can't have a variable name with embedded spaces such as test score. Therefore, the two words are put together, and differentiated by capitalizing the first letter of the second word.
By the convention I use, the first letter of a variable name is not capitalized. A naming convention is simply a consistent method of naming variables. There are a number of naming conventions. In addition to the one I described earlier, another naming convention is to name a variable with a prefix, usually all lowercase and consisting of three letters, that indicates its data type, followed by a word, with its first letter capitalized, that suggests its purpose. Some examples:

intScore - Integer variable representing a score, such as on a test.
strName - String variable representing a name, such as a person's name.
blnResident - Boolean variable, representing whether or not someone is a resident.

It is not particularly important which naming convention you use. What is important is that you use one and stick to it.

Declaring a variable reserves memory. You can use the address operator (&) to learn the address of this reserved memory. The syntax is

&[variable name]

For example, the following code outputs 0012FED4 on my computer. However, the particular memory address for testScore on your computer may be different than 0012FED4. Indeed, if I run this program again some time later, the particular memory address for testScore on my computer may be different than 0012FED4.

#include <iostream>
using namespace std;

int main(void)
{
    int testScore;
    cout << &testScore;
    return 0;
}

The address 0012FED4 is a hexadecimal (Base 16) number. As discussed in Chapter 2, memory addresses usually are expressed as hexadecimal numbers. The operating system, not the programmer, chooses the address at which to store a variable.

The amount of memory reserved depends on a variable's data type. As you learned in Chapter 2, different data types have different sizes. In Chapter 2, you used the sizeof operator to learn the size (on your compiler and operating system) of different data types.
You also can use the sizeof operator to determine the size (again, on your compiler and operating system) of different variables. The syntax for using the sizeof operator to determine the size of a variable is almost the same as the syntax for using it to determine the size of a data type. The only difference is that the parentheses following the sizeof operator contain a variable name rather than a data type name. The following code outputs the address and size of two variables:

#include <iostream>
using namespace std;

int main(void)
{
    short testScore;
    float myGPA;
    cout << "The address of testScore is " << &testScore << "\n";
    cout << "The size of testScore is " << sizeof(testScore) << "\n";
    cout << "The address of myGPA is " << &myGPA << "\n";
    cout << "The size of myGPA is " << sizeof(myGPA) << "\n";
    return 0;
}

The output when I ran this program (yours may be different) is

The address of testScore is 0012FED4
The size of testScore is 2
The address of myGPA is 0012FEC8
The size of myGPA is 4

Figure 3-1 shows how memory is reserved for the two variables. Due to the different size of the variables, the short variable, testScore, takes up two bytes of memory, and the float variable, myGPA, takes up four bytes of memory. As Figure 3-1 depicts, the addresses of the two variables are near each other. The operating system often attempts to do this. However, this is not always possible, depending on factors such as the size of the variables and memory already reserved. There is no guarantee that two variables will even be near each other in memory. In Figure 3-1, the value for both memory addresses is unknown. That is because we have not yet specified the values to be stored in those memory locations. The next section shows you how to do this.
, or make sure to extend the experiment when it is necessary. In this assignment, we provide you a CloudLab profile called "cs744-fa19-assignment1" under the "UWMadison744-F19" project for you to start your experiment.

Your home directory on the CloudLab machine is relatively small and can only hold 16GB of data. We have also enabled another mount point with more space:

shivaram@node1:~$ df -h | grep data
/dev/xvda4      24G   44M   23G

There are a few configuration files we need to edit. They are originally empty, so users have to set them manually. Add the following contents in the <property> field in hadoop-3.1.2/etc/hadoop/hdfs-site.xml. Make sure you specify the paths yourself (create the folders yourself if needed). You will also need to edit hadoop-3.1.2/etc/hadoop/workers to list the worker machines.

Similar to HDFS, you will need to modify spark-2.4.4-bin-hadoop2.7/conf/slaves to include the IP address of all the slave machines. To start the Spark standalone cluster you can then run the start script from the spark-2.4.4 directory on the master node. See the RDD programming guide.

Resilient Distributed Datasets (RDD) is the main abstraction in Spark: a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. In our case you will be creating an RDD object from the data that you load from HDFS. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations (not necessary for this part of the assignment, but you will need to do it in part 3). Finally, RDDs automatically recover from node failures.

A couple of example commands, if you are using PySpark (the Python API for Spark), that should be handy:

from pyspark import SparkContext, SparkConf
# The first thing that a Spark application does is create a SparkContext object.
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)

# You can read the data from a file into an RDD object as a collection of lines
lines = sc.textFile("data.txt")

After loading data you can apply RDD operations on it. Read more about transformations and actions here.

lineLengths = lines.map(lambda s: len(s))
totalLength = lineLengths.reduce(lambda a, b: a + b)

Every RDD has an optional Partitioner object. Any shuffle operation on an RDD with a Partitioner will respect that Partitioner. Any shuffle operation on two RDDs (e.g., join) will take on the Partitioner of one of them, if one is set. You can control the partitioner by using the partitionBy(...) command. Also pay close attention to which operations preserve the partitioner and which don't (see the comment on flatMap and flatMapValues).

Task 3. Persist the appropriate RDD as in-memory objects and see what changes. Read about RDD persistence.
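Since Spark evaluates transformations lazily, it can help to see what the two lambdas above actually compute. The following stand-in runs the same map/reduce logic in plain Python with no Spark cluster needed; the sample lines are made up for illustration, and a real RDD would of course be partitioned across machines:

```python
from functools import reduce

# Stand-in for an RDD loaded with sc.textFile("data.txt"):
# a plain Python list of lines (hypothetical sample data).
lines = ["spark makes rdds", "rdds are partitioned", "actions trigger work"]

# map(lambda s: len(s)) -> one length per line (a lazy transformation in Spark)
line_lengths = [len(s) for s in lines]

# reduce(lambda a, b: a + b) -> total length (an action, which triggers the work)
total_length = reduce(lambda a, b: a + b, line_lengths)

print(line_lengths)   # [16, 20, 20]
print(total_length)   # 56
```

In Spark the same two calls run per-partition and then combine partial sums, which is why the reduce function must be associative and commutative.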
Chapter 2 Risk and Return: Part I

ANSWERS TO BEGINNING-OF-CHAPTER QUESTIONS

Our students have had an introductory finance course, and many have also taken a course on investments and/or capital markets. Therefore, they have seen the Chapter 2 material previously. However, we use the Beginning of Chapter (BOC) questions to review the chapter because our students need a refresher. With students who have not had as much background, it is best to go through the chapter on a point-by-point basis, using the PowerPoint slides. With our students, this would involve repeating too much of the intro course. Therefore, we just discuss the questions, including the model for Question 6.

Before the class, we tell our students that the chapter is a review and that we will call on them to discuss the BOC questions in class. We expect students to be able to give short, incomplete answers that demonstrate that they have read the chapter, and then we provide more complete answers as necessary to make sure the key points are covered. Our students have mainly taken multiple-choice exams, so they are uncomfortable with essay tests. Also, we cover the chapters they were exposed to in the intro course rather quickly, so our assignments often cover a lot of pages. We explain that much of the material is a review, and that if they can answer the BOC questions (after the class discussion) they will do OK on the exams. We also tell them, partly for motivation and partly to reduce anxiety, that our exams will consist of 5 slightly modified BOC questions, of which they must answer 3. We also tell them that they can use a 4-page "cheat sheet," two sheets of paper, front and back. They can put anything they want on it: formulas, definitions, outlines of answers to the questions, or complete answers. The better students write out answers to the questions before class, and then extend them after class and before the exams. This helps them focus and get better prepared.
Writing out answers is a good way to study, and outlining answers to fit them on the cheat sheet (in really small font!) also helps them learn. We try to get students to think in an integrated manner, relating topics covered in different chapters to one another. Studying all of the BOC questions in a fairly compressed period before the exams helps in this regard. They tell us that they learn a great deal when preparing their cheat sheets. We initially expected really excellent exams, given that the students had the questions and could use cheat sheets. Some of the exams were indeed excellent, but we were surprised and disappointed at the poor quality of many of the midterm exams. Part of the problem is that our students were not used to taking essay exams. Also, they would have done better if they had taken the exam after we covered cases (in the second half of the semester), where we apply the text material to real-world cases. While both points are true, it's also true that some students are just better than others. The students who received low exam grades often asked us what they did wrong. That's often a hard question to answer regarding an essay exam. What we ended up doing was make copies of the best 2 or 3 student answers to each exam question, and then when students came in to see why they did badly, we made them read the good answers before we talked with them. 95% of the time, they told us they understood why their grade was low, and they resolved to do better next time. Finally, since our students are all graduating seniors, we graded rather easily.

Answers

2-1 Stand-alone risk is the risk faced by an investor who holds just one asset, versus the risk inherent in a diversified portfolio. Stand-alone risk is measured by the standard deviation (SD) of expected returns or the coefficient of variation (CV) of returns = SD/expected return.
A portfolio's risk is measured by the SD of its returns, and the risk of the individual stocks in the portfolio is measured by their beta coefficients. Note that unless returns on all stocks in a portfolio are perfectly positively correlated, the portfolio's SD will be less than the average of the SDs of the individual stocks. Diversification reduces risk. In theory, investors should be concerned only with portfolio risk, but in practice many investors are not well diversified, hence are concerned with stand-alone risk. Managers or other employees who have large stockholdings in their companies are an example. They get stock (or options) as incentive compensation or else because they founded the company, and they are often constrained from selling to diversify. Note too that years ago brokerage costs and administrative hassle kept people from diversifying, but today mutual funds enable small investors to diversify efficiently. Also, the Enron and WorldCom debacles and their devastating effects on 401k plans heavily invested in those stocks illustrated the importance of diversification.

2-2 Diversification can eliminate unsystematic risk, but market risk will remain. See Figure 2-8 for a picture of what happens as stocks are added to a portfolio. The graph shows that the risk of the portfolio as measured by its SD declines as more and more stocks are added. This is the situation if randomly selected stocks are added, but if stocks in the same industry are added, the benefits of diversification will be lessened. Conventional wisdom says that 40 to 50 stocks from a number of different industries is sufficient to eliminate most unsystematic risk, but in recent years the markets have become increasingly volatile, so now it takes somewhat more, perhaps 60 or 70. Of course, the more stocks, the closer the portfolio will be to having zero unsystematic risk. Again, this assumes that stocks are randomly selected.
Note, however, that the more stocks the portfolio contains, the greater the administrative costs. Mutual funds can help here.

Different diversified portfolios can have different amounts of risk. First, if the portfolio concentrates on a given industry or sector (as sector mutual funds do), then the portfolio will not be well diversified even if it contains 100 stocks. Second, the betas of the individual stocks affect the risk of the portfolio. If the stock with the highest beta in each industry is selected, then the portfolio may not have much unsystematic risk, but it will have a high beta and thus have a lot of market risk. (Note: The market risk of a portfolio is measured by the beta of the portfolio, and that beta is a weighted average of the betas of the stocks in the portfolio.)

2-3 a. Note: This question is covered in more detail in Chapter 5, but students should remember this material from their first finance course, so it is a review.

Expected: The rate of return someone expects to earn on a stock. It's typically measured as D1/P0 + g for a constant growth stock.

Required: The minimum rate of return that an investor must expect on a stock to induce him or her to buy or hold the stock. It's typically measured as rs = rRF + b(MRP), where MRP is the market risk premium, or the risk premium required for an average stock.

Historical: The average rate of return earned on a stock during some past period. The historical return on an average large stock varied from -3% to +37% during the 1990s, and the average annual return was about 15%. The worm turned after 1999: the average return was negative in 2000, 2001, and 2002, with the S&P 500 down 23.4% in 2002. The Nasdaq average of mostly tech stocks did even worse, falling 31.5% in 2002 alone. The variations for individual stocks were much greater: the best performer on the NYSE in 2000 gained 413% and the worst performer lost 100% of its value.

b. Are the 3 types of return equal?
1) Expected = required? The answer is, "maybe." For the market to be in equilibrium, the expected and required rates of return as seen by "the marginal investor" must be equal for any given stock and therefore for the entire market. If the expected return exceeded the required return, then investors would buy, pushing the price up and the expected return down, and thus produce an equilibrium. Note, though, that any individual investor may believe that a given stock's expected and required returns differ, so individuals may think there are bargains to be bought or dogs to be sold. Also, new information is constantly hitting the market and changing the opinions of marginal investors, and this leads to swings in the market. New technology is causing new information to be disseminated ever more rapidly, and that is leading to more rapid and violent market swings.

2) Historical = expected and/or required? There is no reason whatever to think that the historical rate of return for any given year for either one stock or for all stocks on average will be equal to the expected and/or required rate of return. Rational people don't expect abnormally good or bad performance to continue. On the other hand, people do argue that investors expect to earn returns in the future that approximate average past returns. For example, if stocks returned 9% on average in the past (from 1926 to 2005, which is as far back as good data exist), then they may expect to earn about 9% on stocks in the future. Note, though, that this is a controversial issue: the period 1926-2005 covers a lot of very different economic environments, and investors may not expect the future to replicate the past. Certainly investors didn't expect future returns to equal distant past returns during the height of the 1999 bull market, or to lose money as they did in 2002.

2-4 To be risk averse means to dislike risk. Most investors are risk averse.
Therefore, if Securities A and B both have an expected return of, say, 10%, but Security A has less risk than B, then most investors will prefer A. As a result, A's price will be bid up, and B's price bid down, and in the resulting equilibrium A's expected rate of return will be below that of B. Of course, A's required rate of return will also be less than B's, and in equilibrium the expected and required returns will be equal. One issue here is the type of risk investors are averse to: unsystematic, market, or both? According to CAPM theory, only market risk as measured by beta is relevant and thus only market risk requires a premium. However, empirical tests indicate that investors also require a premium for bearing unsystematic risk as measured by the stock's SD.

2-5 CAPM = Capital Asset Pricing Model. The CAPM establishes a metric for measuring the market risk of a stock (beta), and it specifies the relationship between risk as measured by beta and the required rate of return on a stock. Its principal developers (Sharpe and Markowitz) won the Nobel Prize in 1990 for their work. The key assumptions are spelled out in Chapter 3, but they include the following: (1) all investors focus on a single holding period, (2) investors can lend or borrow unlimited amounts at the risk-free rate, (3) there are no brokerage costs, and (4) there are no taxes. The assumptions are not realistic, so the CAPM may be incorrect. Empirical tests have neither confirmed nor refuted the CAPM with any degree of confidence, so it may or may not provide a valid formula for measuring the required rate of return.

The SML, or Security Market Line (see Figure 2-10), specifies the relationship between risk as measured by beta and the required rate of return: rs = rRF + b(MRP), where MRP = expected rate of return on the market minus the risk-free rate = rM - rRF. The data requirements are beta, the risk-free rate, and the rate of return expected on the market.
Betas are easy to get (by calculating them or from some source such as Value Line or Yahoo!), but a beta shows how volatile a stock was in the past, not how volatile it will be in the future. Therefore, historical betas may not reflect investors' perceptions about a stock's future risk, which is what's relevant. The risk-free rate is based on either T-bonds or T-bills; these rates are easy to get, but it is not clear which should be used, and there can be a big difference between bill and bond rates, depending on the shape of the yield curve. Finally, it is difficult to determine the rate of return investors expect on an average stock. Some argue that investors expect to earn the same average return in the future that they earned in the past, hence use historical MRPs, but as noted above, that may not reflect investors' true expectations. The bottom line is that we cannot be sure that the CAPM-derived estimate of the required rate of return is actually correct.

2-6 a. Given historical returns on X, Y, and the Market, we could calculate betas for X and Y. Then, given rRF and the MRP, we could use the SML equation to calculate X and Y's required rates of return. We could then compare these required returns with the given expected returns to determine if X and Y are bargains, bad deals, or in equilibrium. We assumed a set of data and then used an Excel model to calculate betas for X and Y, and the SML required returns for these stocks. Note that in our Excel model (ch02-M) we also show, for the market, how to calculate the total return based on stock price changes plus dividends.

bX = 0.69; bY = 1.66, and rX = 10.7%; rY = 14.6%. Since Y has the higher beta, it has the higher required return. In our examples, the returns all fall on the trend line. Thus, the two stocks have essentially no diversifiable, unsystematic risk; all of their risk is market risk.
If these were real companies, they might have the indicated trend lines and betas, but the points would be scattered about the trend line. See Figure 3-8 in Chapter 3, where data for General Electric are plotted. Although the situation for our Stocks X and Y would never occur for individual stocks, it would occur (approximately) for index funds, if Stock X were an index fund that held stocks with betas that averaged 0.69 and Stock Y were an index fund holding b = 1.66 stocks.

b. Here we drop Year 1 and add Year 6, then calculate new betas and r's. For Stock X, the beta and required return would be reasonably stable. However, Y's beta would fall, given its sharp decline in a year when the market rose. In our Excel model, Y's beta falls from 1.66 to 0.19, and its required return as calculated with the SML falls to 8.8%.

The results for Y make little sense. The stock fell sharply because investors became worried about its future prospects, which means that it fell because it became riskier. Yet its beta fell. As a riskier stock, its required return should rise, yet the calculated return fell from 14.6% to 8.8%, which is only a little above the riskless rate. The problem is that Y's low return tilted the regression line down; the point for Year 6 is in the lower right quadrant of the Excel graph. The low R-squared and the large standard error as seen in the Excel regression make it clear that the beta, and thus the calculated required return, are not to be trusted. Note that in April 2001, the same month that PG&E declared bankruptcy, its beta as reported by Finance.Yahoo was only 0.05, so our hypothetical Stock Y did what the real PG&E actually did. The moral of the story is that the CAPM, like other cost of capital estimating techniques, can be dangerous if used without care and judgment.

One final point on all this: The utilities are regulated, and regulators estimate their cost of capital and use it as a basis for setting electric rates.
If the estimated cost of capital is low, then the companies are only allowed to earn a low rate of return on their invested capital. At times, utilities like PG&E become more risky, have resulting low betas, and are then in danger of having some squirrelly finance "expert" argue that they should be allowed to earn an improper CAPM rate of return. In the industrial sector, a badly trained financial analyst with a dumb supervisor could make the same mistake, estimate the cost of capital to be below the true cost, and cause the company to make investments that should not be made.

Worksheet for Chapter 2 BOC Questions (ch02boc-model.xls), 3/18/2009

We use BOC Question 2-6 to illustrate some points about the CAPM, the SML, and Excel. For additional information on Excel, see the Tool Kit for the chapter.

The following returns were earned on the market and on Stocks X and Y during the last 5 years (Year 6 is added for part b). For the market, the returns are calculated from the ending prices $100.00, $118.00, $124.94, $140.68, $128.74, $154.35, $180.72 plus dividends.

Year     Market   Stock X   Stock Y
1          20%      16%       28%
2           8%       8%        8%
3          15%      12%       20%
4          -6%      -2%      -15%
5          23%      18%       33%
6          20%      16%      -70%
Avg 1-5  12.0%    10.4%     14.8%

Betas (years 1-5 only): Beta X = 0.69; Beta Y = 1.66. We could get betas by regression, but an easier way is to use the LINEST function: click fx > Statistical > LINEST and follow the menu. You can use the data to find beta Y as an exercise, and also to find the revised betas based on years 2-6. (The model also plots stock return against market return with the trend lines.)

SML analysis, with risk-free rate = 8.0% and market return = 12.0%:
r(X) = r(RF) + bX(r(M) - r(RF)) = 8.0% + 2.7% = 10.7% = predicted return for X.
r(Y) = r(RF) + bY(r(M) - r(RF)) = 8.0% + 6.6% = 14.6% = predicted return for Y.

Dropping Year 1 and adding Year 6 gives new beta Y = 0.19 and new r(Y) = 8.8%.
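The worksheet's LINEST betas are just ordinary least-squares slopes, so they can be reproduced without Excel. Here is a minimal plain-Python sketch (no spreadsheet needed) using the years 1-5 returns from the question's data; the function name and layout are our own:

```python
def beta(stock, market):
    """OLS slope of stock returns on market returns: cov(s, m) / var(m)."""
    n = len(market)
    m_mean = sum(market) / n
    s_mean = sum(stock) / n
    cov = sum((m - m_mean) * (s - s_mean) for m, s in zip(market, stock))
    var = sum((m - m_mean) ** 2 for m in market)
    return cov / var

market  = [20, 8, 15, -6, 23]   # years 1-5, in percent
stock_x = [16, 8, 12, -2, 18]
stock_y = [28, 8, 20, -15, 33]

b_x = beta(stock_x, market)     # ~0.69
b_y = beta(stock_y, market)     # ~1.66

# SML: r = rRF + b*(rM - rRF), with rRF = 8% and rM = 12% as in the model
r_x = 8 + b_x * (12 - 8)        # ~10.7%
r_y = 8 + b_y * (12 - 8)        # ~14.6%
```

Re-running `beta` on years 2-6 (with Year 6's -70% for Y) reproduces the collapse of Y's beta that the text discusses, which is the point of part b.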
We could also use the statistical function RSQ to calculate the R-squared values for the betas. For Y, R-squared dropped from 1.0 to 0.0029. This indicates that the beta, and the CAPM required return, are being measured with a lot of error. So, we cannot trust the accuracy of the new estimated required return.

ANSWERS TO END-OF-CHAPTER QUESTIONS

2-1 a. Stand-alone risk is only a part of total risk and pertains to the risk an investor takes by holding only one asset. Risk is the chance that some unfavorable event will occur. For instance, the risk of an asset is essentially the chance that the asset's cash flows will be unfavorable or less than expected. A probability distribution is a listing, chart, or graph of all possible outcomes, such as expected rates of return, with a probability assigned to each outcome. When in graph form, the tighter the probability distribution, the less uncertain the outcome.

b. The expected rate of return (r̂) is the expected value of a probability distribution of expected returns.

c. A continuous probability distribution contains an infinite number of outcomes and is graphed from -∞ to +∞.

d. The standard deviation (σ) is a statistical measure of the variability of a set of observations. The variance (σ²) of the probability distribution is the probability-weighted sum of the squared deviations about the expected value. The coefficient of variation (CV) is equal to the standard deviation divided by the expected return; it is a standardized risk measure which allows comparisons between investments having different expected returns and standard deviations.

e. A risk-averse investor dislikes risk and requires a higher rate of return as an inducement to buy riskier securities. A realized return is the actual return an investor receives on their investment. It can be quite different than their expected return.

f.
A risk premium is the difference between the expected return on a risky asset (Stock i) and the rate of return on a risk-free asset. The market risk premium is the difference between the expected return on the market and the risk-free rate.

g. CAPM is a model based upon the proposition that any stock's required rate of return is equal to the risk-free rate of return plus a risk premium reflecting only the risk remaining after diversification.

h. The expected return on a portfolio, r̂p, is simply the weighted-average expected return of the individual stocks in the portfolio, with the weights being the fraction of total portfolio value invested in each stock. The market portfolio is a portfolio consisting of all stocks.

i. Correlation is the tendency of two variables to move together. A correlation coefficient (ρ) of +1.0 means that the two variables move up and down in perfect synchronization, while a coefficient of -1.0 means the variables always move in opposite directions. A correlation coefficient of zero suggests that the two variables are not related to one another; that is, they are independent.

j. Market risk is that part of a security's total risk that cannot be eliminated by diversification. It is measured by the beta coefficient. Diversifiable risk, also known as company-specific risk, is that part of a security's total risk associated with random events not affecting the market as a whole. This risk can be eliminated by proper diversification. The relevant risk of a stock is its contribution to the riskiness of a well-diversified portfolio.

k. The beta coefficient is a measure of a stock's market risk, or the extent to which the returns on a given stock move with the stock market. The average stock moves with the market, so it has a beta of 1.0.

l.
The security market line (SML) represents, in graphical form, the relationship between the risk of an asset as measured by its beta and the required rates of return for individual securities. The SML equation is essentially the CAPM: ri = rRF + bi(rM - rRF).

m. The slope of the SML equation is (rM - rRF), the market risk premium. The slope of the SML reflects the degree of risk aversion in the economy. The greater the average investor's aversion to risk, the steeper the slope, the higher the risk premium for all stocks, and the higher the required return.

2-2 a. The probability distribution for complete certainty is a vertical line.

b. The probability distribution for total uncertainty is the X axis from -∞ to +∞.

2-3 Security A is less risky if held in a diversified portfolio because of its lower beta and negative correlation with other stocks. In a single-asset portfolio, Security A would be more risky because σA > σB and CVA > CVB.

2-4 a. No, it is not riskless. The portfolio would be free of default risk and liquidity risk, but inflation could erode the portfolio's purchasing power. If the actual inflation rate is greater than that expected, interest rates in general will rise to incorporate a larger inflation premium (IP), and the value of the portfolio would decline.

b. No, you would be subject to reinvestment rate risk. You might expect to "roll over" the Treasury bills at a constant (or even increasing) rate of interest, but if interest rates fall, your investment income will decrease.

c. A U.S. government-backed bond that provided interest with constant purchasing power (that is, an indexed bond) would be close to riskless.

2-5 The risk premium on a high-beta stock would increase more. RPj = risk premium for Stock j = (rM - rRF)bj. If risk aversion increases, the slope of the SML will increase, and so will the market risk premium (rM - rRF). The product (rM - rRF)bj is the risk premium of the jth stock.
If bj is low (say, 0.5), then the product will be small; RPj will increase by only half the increase in RPM. However, if bj is large (say, 2.0), then its risk premium will rise by twice the increase in RPM.

2-6 According to the Security Market Line (SML) equation, an increase in beta will increase a company's expected return by an amount equal to the market risk premium times the change in beta. For example, assume that the risk-free rate is 6 percent and the market risk premium is 5 percent. If the company's beta doubles from 0.8 to 1.6, its expected return increases from 10 percent to 14 percent. Therefore, in general, a company's expected return will not double when its beta doubles.

2-7 Yes, if the portfolio's beta is equal to zero. In practice, however, it may be impossible to find individual stocks that have a nonpositive beta. In this case it would also be impossible to have a stock portfolio with a zero beta. Even if such a portfolio could be constructed, investors would probably be better off just purchasing Treasury bills or other zero-beta investments.

SOLUTIONS TO END-OF-CHAPTER PROBLEMS

2-1  Investment    Beta
     $35,000       0.8
      40,000       1.4
     Total $75,000

     Portfolio beta = ($35,000/$75,000)(0.8) + ($40,000/$75,000)(1.4) = 1.12.

2-2  rRF = 6%; rM = 13%; b = 0.7; rs = ?
     rs = rRF + (rM - rRF)b = 6% + (13% - 6%)0.7 = 10.9%.

2-3  rRF = 5%; RPM = 6%; rM = ?
     rM = 5% + (6%)1 = 11%.
     rs when b = 1.2 = ?
     rs = 5% + 6%(1.2) = 12.2%.

2-4  CV = σ/r̂ = 26.69%/11.40% = 2.34.

2-5  a. r̂M = (0.3)(15%) + (0.4)(9%) + (0.3)(18%) = 13.5%.
        r̂J = (0.3)(20%) + (0.4)(5%) + (0.3)(12%) = 11.6%.
     b. σM = 3.85%.
        σJ = √38.64 = 6.22%.
     c. CVM = 3.85%/13.5% = 0.29.
        CVJ = 6.22%/11.6% = 0.54.

2-6  a. rA = rRF + (rM - rRF)bA
        12% = 5% + (10% - 5%)bA
        7% = 5%(bA)
        bA = 1.4.
     b. rA = 5% + 5%(bA) = 5% + 5%(2) = 15%.

2-7  a. ri = rRF + (rM - rRF)bi = 9% + (14% - 9%)1.3 = 15.5%.
     b. 1.
rRF increases to 10%: rM increases by 1 percentage point, from 14% to 15%.
ri = rRF + (rM - rRF)bi = 10% + (15% - 10%)1.3 = 16.5%.
2. rRF decreases to 8%: rM decreases by 1 percentage point, from 14% to 13%.
ri = rRF + (rM - rRF)bi = 8% + (13% - 8%)1.3 = 14.5%.

c. 1. rM increases to 16%: ri = rRF + (rM - rRF)bi = 9% + (16% - 9%)1.3 = 18.1%.
2. rM decreases to 13%: ri = rRF + (rM - rRF)bi = 9% + (13% - 9%)1.3 = 14.2%.

2-8  Old portfolio beta = ($142,500/$150,000)(b) + ($7,500/$150,000)(1.00)
     1.12 = 0.95b + 0.05
     1.07 = 0.95b
     b = 1.13.
     New portfolio beta = 0.95(1.13) + 0.05(1.75) = 1.16.

     Alternative solutions:
     1. Old portfolio beta = 1.12 = (0.05)b1 + (0.05)b2 + ... + (0.05)b20
        1.12 = (Σbi)(0.05)
        Σbi = 1.12/0.05 = 22.4.
        New portfolio beta = (22.4 - 1.0 + 1.75)(0.05) = 1.1575 ≈ 1.16.
     2. Σbi excluding the stock with the beta equal to 1.0 is 22.4 - 1.0 = 21.4, so the beta of the portfolio excluding this stock is b = 21.4/19 = 1.1263. The beta of the new portfolio is:
        1.1263(0.95) + 1.75(0.05) = 1.1575 ≈ 1.16.

2-9  Portfolio beta = ($400,000/$4,000,000)(1.50) + ($600,000/$4,000,000)(-0.50)
                      + ($1,000,000/$4,000,000)(1.25) + ($2,000,000/$4,000,000)(0.75)
     = (0.1)(1.5) + (0.15)(-0.50) + (0.25)(1.25) + (0.5)(0.75)
     = 0.15 - 0.075 + 0.3125 + 0.375 = 0.7625.
     rp = rRF + (rM - rRF)(bp) = 6% + (14% - 6%)(0.7625) = 12.1%.

     Alternative solution: first compute the return for each stock using the CAPM equation [rRF + (rM - rRF)b], and then compute the weighted average of these returns. rRF = 6% and rM - rRF = 8%.

     Stock   Investment    Beta    r = rRF + (rM - rRF)b   Weight
     A       $  400,000    1.50    18%                     0.10
     B          600,000   (0.50)    2                      0.15
     C        1,000,000    1.25    16                      0.25
     D        2,000,000    0.75    12                      0.50
     Total   $4,000,000                                    1.00

     rp = 18%(0.10) + 2%(0.15) + 16%(0.25) + 12%(0.50) = 12.1%.

2-10 First, calculate the beta of what remains after selling the stock:
     bp = 1.1 = ($100,000/$2,000,000)0.9 + ($1,900,000/$2,000,000)bR
     1.1 = 0.045 + (0.95)bR
     bR = 1.1105.
     bN = (0.95)1.1105 + (0.05)1.4 = 1.125.
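The CAPM and portfolio-beta arithmetic in problems 2-1 through 2-10 can be checked with a short script (an illustrative sketch added here, not part of the original solutions; the helper name `required_return` is our own, and all figures come from the problems above):

```python
# CAPM: required return r = r_RF + (r_M - r_RF) * b
def required_return(r_rf, r_m, beta):
    return r_rf + (r_m - r_rf) * beta

# Problem 2-2: r_RF = 6%, r_M = 13%, b = 0.7  ->  10.9%
r_2_2 = required_return(0.06, 0.13, 0.7)

# Problem 2-1: portfolio beta is the value-weighted average of the stock betas
investments = [35_000, 40_000]
betas = [0.8, 1.4]
total = sum(investments)
b_p = sum(v / total * b for v, b in zip(investments, betas))   # 1.12

# Problem 2-10: a $100,000 stock (b = 0.9) in a $2,000,000 portfolio (b = 1.1)
# is replaced with a stock whose beta is 1.4
b_rest = (1.1 - (100_000 / 2_000_000) * 0.9) / (1_900_000 / 2_000_000)  # beta of what remains
b_new = 0.95 * b_rest + 0.05 * 1.4                              # ~1.125
```

The weighted-average structure is the same in every case: betas aggregate linearly in portfolio weights, which is why selling one holding and solving for the remainder's beta is just algebra on the weighted sum.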
2-11 We know that bR = 1.50, bS = 0.75, rM = 13%, rRF = 7%.
     ri = rRF + (rM - rRF)bi = 7% + (13% - 7%)bi.
     rR = 7% + 6%(1.50) = 16.0%.
     rS = 7% + 6%(0.75) = 11.5%.
     Difference = 16.0% - 11.5% = 4.5%.

2-12 The answers to a, b, c, and d are given below:

     Year      r̂A         r̂B        Portfolio
     2005    (18.00%)   (14.50%)   (16.25%)
     2006     33.00      21.80      27.40
     2007     15.00      30.50      22.75
     2008     (0.50)     (7.60)     (4.05)
     2009     27.00      26.30      26.65
     Mean     11.30      11.30      11.30
     Std Dev  20.79      20.78      20.13
     CV        1.84       1.84       1.78

     e. A risk-averse investor would choose the portfolio over either Stock A or Stock B alone, since the portfolio offers the same expected return but with less risk. This result occurs because returns on A and B are not perfectly positively correlated (ρAB = 0.88).

2-13 a. bX = 1.3471; bY = 0.6508. These can be calculated with a spreadsheet.
     b. rX = 6% + (5%)1.3471 = 12.7355%. rY = 6% + (5%)0.6508 = 9.2540%.
     c. bp = 0.8(1.3471) + 0.2(0.6508) = 1.2078.
        rp = 6% + (5%)1.2078 = 12.04%.
        Alternatively, rp = 0.8(12.7355%) + 0.2(9.254%) = 12.04%.
     d. Stock X is undervalued, because its expected return exceeds its required rate of return.

SOLUTION TO SPREADSHEET PROBLEM

2-14 The detailed solution for the spreadsheet problem is available in the file Solution for IFM10 Ch 02 P14 Build a Model.xls on the textbook's Web site.

CASE

Because the funds will be needed at the end of one year, you have been instructed to plan for a one-year holding period. Further, your boss has restricted you to the following investment alternatives, shown with their probabilities and associated outcomes. (Disregard for now the items at the bottom of the data; you will fill in the blanks later.)

Returns On Alternative Investments: Estimated Rate Of Return

State of the      Prob.  T-Bills   Alta    Repo     Am.      Market     2-stock
economy                            Inds    Men      Foam     portfolio  portfolio
Recession          0.1    8.0%    -22.0%   28.0%    10.0%*   -13.0%       3.0%
Below avg          0.2    8.0      -2.0    14.7    -10.0       1.0
Average            0.4    8.0      20.0     0.0      7.0      15.0       10.0
Above avg          0.2    8.0      35.0   -10.0     45.0      29.0
Boom               0.1    8.0      50.0   -20.0     30.0      43.0       15.0
r-hat (r̂)                                   1.7%    13.8%     15.0%
Std dev (σ)               0.0              13.4     18.8      15.3
Coef of var (CV)                            7.9      1.4       1.0
beta (b)                                   -0.86     0.68

*Note that the estimated returns of American Foam do not always move in the same direction as the overall economy. For example, when the economy is below average, consumers purchase fewer mattresses than they would if the economy were stronger. However, if the economy is in a flat-out recession, a large number of consumers who were planning to purchase a more expensive inner-spring mattress may instead purchase a cheaper foam mattress. Under these circumstances, we would expect American Foam's stock price to be higher if there is a recession than if the economy was just below average.

Barney Smith's economic forecasting staff has developed probability estimates for the state of the economy, and its security analysts have developed a sophisticated computer program which was used to estimate the rate of return on each alternative under each state of the economy. Alta Industries is an electronics firm; Repo Men collects past-due debts; and American Foam manufactures mattresses and other foam products. Barney Smith also maintains an "index fund" which owns a market-weighted fraction of all publicly traded stocks; you can invest in that fund, and thus obtain average stock market results. Given the situation as described, answer the following questions.

a. What are investment returns? What is the return on an investment that costs $1,000 and is sold after one year for $1,100?

Answer: Investment returns measure the financial results of an investment. They may be expressed in either dollar terms or percentage terms. The dollar return is $1,100 - $1,000 = $100. The percentage return is $100/$1,000 = 0.10 = 10%.

b. 1. Why is the t-bill's return independent of the state of the economy?
Do t-bills promise a completely risk-free return?

Answer: The 8 percent t-bill return does not depend on the state of the economy because the Treasury must (and will) redeem the bills at par regardless of the state of the economy. The t-bills are risk-free in the default-risk sense, because the 8 percent return will be realized in all possible economic states. However, realized inflation may differ from the inflation rate built into the 8 percent, so the real, inflation-adjusted return is uncertain. Thus, in terms of purchasing power, t-bills are not riskless.

b. 2. Why are Alta Inds' returns expected to move with the economy whereas Repo Men's are expected to move counter to the economy?

Answer: Alta Industries' returns move with, hence are positively correlated with, the economy, because the firm's sales, and hence profits, will generally experience the same type of ups and downs as the economy. If the economy is booming, so will Alta. On the other hand, Repo Men is considered by many investors to be a hedge against both bad times and high inflation, so if the stock market crashes, investors in this stock should do relatively well. Stocks such as Repo Men are thus negatively correlated with (move counter to) the economy. (Note: in actuality, it is almost impossible to find stocks that are expected to move counter to the economy. Even Repo Men shares have positive, but low, correlation with the market.)

c. Calculate the expected rate of return on each alternative and fill in the blanks on the row for r̂ in the table above.

Answer: The expected rate of return, r̂, is expressed as follows:

    r̂ = Σ (i = 1 to n) Pi ri.

Here Pi is the probability of occurrence of the ith state, ri is the estimated rate of return for that state, and n is the number of states. Here is the calculation for Alta Inds:

    r̂Alta Inds = 0.1(-22.0%) + 0.2(-2.0%) + 0.4(20.0%) + 0.2(35.0%) + 0.1(50.0%) = 17.4%.

We use the same formula to calculate r̂ for the other alternatives:

    r̂T-bills = 8.0%.
    r̂Repo Men = 1.7%.
    r̂Am Foam = 13.8%.
    r̂M = 15.0%.

d. 1. Calculate the standard deviation of returns for each alternative and fill in the blanks on the row for σ in the table above.

Answer: The standard deviation is calculated as follows:

    σ = [Σ (i = 1 to n) (ri - r̂)² Pi]^0.5.
σAlta = [(-22.0 - 17.4)²(0.1) + (-2.0 - 17.4)²(0.2) + (20.0 - 17.4)²(0.4)
        + (35.0 - 17.4)²(0.2) + (50.0 - 17.4)²(0.1)]^0.5 = √401.4 = 20.0%.

Here are the standard deviations for the other alternatives:

    σT-bills = 0.0%.
    σRepo = 13.4%.
    σAm Foam = 18.8%.
    σM = 15.3%.

d. 2. What type of risk is measured by the standard deviation?

Answer: The standard deviation is a measure of a security's (or a portfolio's) stand-alone risk. The larger the standard deviation, the higher the probability that actual realized returns will fall far below the expected return, and that losses rather than profits will be incurred.

d. 3. Draw a graph which shows roughly the shape of the probability distributions for Alta Inds, Am Foam, and t-bills.

Answer: [Graph: probability-of-occurrence distributions for T-bills, Alta Inds, and Am Foam over rates of return from -60% to 60%; T-bills form a narrow spike, while Alta Inds has the widest distribution.]

Based on these data, Alta Inds is the most risky investment, t-bills the least risky.

e. What is the coefficient of variation, and what does it measure?

Answer: The coefficient of variation (CV) is a standardized measure of dispersion about the expected value; it shows the amount of risk per unit of return.

    CV = σ/r̂.

    CVT-bills = 0.0%/8.0% = 0.0.
    CVAlta Inds = 20.0%/17.4% = 1.1.
    CVRepo Men = 13.4%/1.7% = 7.9.
    CVAm Foam = 18.8%/13.8% = 1.4.
    CVM = 15.3%/15.0% = 1.0.

When we measure risk per unit of return, Repo Men, with its low expected return, becomes the most risky stock. The CV is a better measure of an asset's stand-alone risk than σ because the CV considers both the expected value and the dispersion of a distribution: a security with a low expected return and a low standard deviation could have a higher chance of a loss than one with a high σ but a high r̂.

f. Suppose you created a 2-stock portfolio by investing $50,000 in Alta Inds and $50,000 in Repo Men. 1. Calculate the expected return (r̂p), the standard deviation (σp), and the coefficient of variation (CVp) for this portfolio and fill in the appropriate blanks in the table above.
Answer: The portfolio's return in each state is the weighted average of the two stocks' returns in that state. For the recession state:

    r̂p = 0.5(-22%) + 0.5(28%) = 3%.

We would do similar calculations for the other states of the economy, and get these results:

    State             Portfolio
    Recession           3.0%
    Below average       6.4
    Average            10.0
    Above average      12.5
    Boom               15.0

Now we can multiply probabilities times outcomes in each state to get the expected return on this two-stock portfolio, 9.6%. Alternatively, we could apply this formula,

    r̂p = Σ wi r̂i = 0.5(17.4%) + 0.5(1.7%) = 9.6%,

which finds r̂p as the weighted average of the expected returns of the individual securities in the portfolio.

It is tempting to find the standard deviation of the portfolio as the weighted average of the standard deviations of the individual securities, as follows:

    σp = wi(σi) + wj(σj) = 0.5(20%) + 0.5(13.4%) = 16.7%.

However, this is not correct: it is necessary to use a different formula, the one for σ that we used earlier, applied to the two-stock portfolio's returns. The portfolio's σ depends jointly on (1) each security's σ and (2) the correlation between the securities' returns. The best way to approach the problem is to estimate the portfolio's risk and return in each state of the economy, and then to estimate σp with the σ formula. Given the distribution of returns for the portfolio, we can calculate the portfolio's σ and CV as shown below:

    σp = [(3.0 - 9.6)²(0.1) + (6.4 - 9.6)²(0.2) + (10.0 - 9.6)²(0.4)
          + (12.5 - 9.6)²(0.2) + (15.0 - 9.6)²(0.1)]^0.5 = 3.3%.

    CVp = 3.3%/9.6% = 0.3.

f. 2. How does the riskiness of this 2-stock portfolio compare with the riskiness of the individual stocks if they were held in isolation?

Answer: Using either σ or CV as our stand-alone risk measure, the stand-alone risk of the portfolio is significantly less than the stand-alone risk of the individual stocks. This is because the two stocks are negatively correlated: when Alta Inds is doing poorly, Repo Men is doing well, and vice versa.
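The state-by-state portfolio computation in part f can be sketched as follows (illustrative Python added here, not part of the original solution; the probabilities and portfolio returns are those tabulated above):

```python
# State probabilities and the two-stock (Alta/Repo) portfolio's return in each state
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
port  = [0.03, 0.064, 0.10, 0.125, 0.15]

# Expected return: probability-weighted average over states (~9.6%)
mean = sum(p * r for p, r in zip(probs, port))

# Standard deviation: sqrt of probability-weighted squared deviations (~3.3%)
var = sum(p * (r - mean) ** 2 for p, r in zip(probs, port))
sigma = var ** 0.5

# Coefficient of variation: risk per unit of return
cv = sigma / mean
```

Note that the same `mean`/`sigma` routine applied to each stock alone gives σ of 20.0% and 13.4%, which is exactly why the naive weighted average of standard deviations (16.7%) overstates the portfolio's actual 3.3% risk: the negative correlation between the two return streams cancels most of the dispersion.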
Combining the two stocks diversifies away some of the risk inherent in each stock if it were held in isolation, i.e., in a 1-stock portfolio.

g. Suppose an investor starts with a portfolio consisting of one randomly selected stock. What would happen to the riskiness and to the expected return of the portfolio as more and more randomly selected stocks were added?

Answer: [Graph: density of returns for a one-stock portfolio versus a portfolio of stocks, both centered on r̂p = 16%; the many-stock portfolio's distribution is much tighter.]

The standard deviation gets smaller as more stocks are combined in the portfolio, while r̂p (the portfolio's return) remains constant. Thus, by adding stocks to your portfolio, which initially started as a 1-stock portfolio, risk has been reduced. In the real world, stocks are positively correlated with one another: if the economy does well, so do stocks in general, and vice versa. Correlation coefficients between stocks generally range from +0.5 to +0.7. The average correlation between stocks is about 0.35. As more randomly selected stocks are added, σ falls; in fact, σ stabilizes at about 20 percent. (Rather than hold all the stocks in the index, one can hold a smaller portfolio which is highly correlated with an index such as the S&P 500.) The implication is clear: investors should hold well-diversified portfolios of stocks rather than individual stocks. (In fact, individuals can hold diversified portfolios through mutual fund investments.) By doing so, they can eliminate about half of the riskiness inherent in individual stocks.

h. 1. Should portfolio effects impact the way investors think about the riskiness of individual stocks?

Answer: Portfolio diversification does affect investors' views of risk. A stock's stand-alone risk as measured by its σ or CV may be important to an undiversified investor, but it is not relevant to a well-diversified investor, who is concerned only with the stock's contribution to the risk of the portfolio.

h. 2. If you decided to hold a 1-stock portfolio, and consequently were exposed to more risk than diversified investors, could you expect to be compensated for all of your risk; that is, could you earn a risk premium on that part of your risk that you could have eliminated by diversifying?

Answer: No. Diversifiable risk can be eliminated at little cost, so the market does not reward investors for bearing it; risk premiums are earned only on non-diversifiable, market risk.

i. How is market risk measured for individual securities? How are beta coefficients calculated?
Answer: Market risk, which is relevant for stocks held in well-diversified portfolios, is defined as the contribution of a security to the overall riskiness of the portfolio. It is measured by a stock's beta coefficient, which measures the stock's volatility relative to the market. Run a regression with returns on the stock in question plotted on the y-axis and returns on the market portfolio plotted on the x-axis. The slope of the regression line, which measures relative volatility, is defined as the stock's beta coefficient, or b.

j. Suppose you have the following historical returns for the stock market and for another company, P.Q. Unlimited. Explain how to calculate beta, and use the historical stock returns to calculate the beta for PQU. Interpret your results.

Answer: Betas are calculated as the slope of the "characteristic" line, which is the regression line showing the relationship between a given stock and the general stock market.

[Graph: scatter of PQU returns against market returns, both axes running from -40% to 40%, with the fitted regression line rPQU = 0.83rM + 0.03, R² = 0.36.]

The graph shows the regression results. The beta is the slope coefficient, which is 0.83. An average stock, by definition, moves with the market. Beta coefficients measure the volatility of a given stock relative to the stock market. The average stock's beta is 1.0. Most stocks have betas in the range of 0.5 to 1.5. Theoretically, betas can be negative, but in the real world they are generally positive. In practice, 4 or 5 years of monthly data, with 60 observations, would generally be used. Some analysts use 52 weeks of weekly data. Note that the R² of 0.36 is slightly higher than the typical value of about 0.29. A portfolio would have an R² greater than 0.9.

k. The expected rates of return and the beta coefficients of the alternatives as supplied by Barney Smith's computer program are as follows:

    Security    Return (r̂)   Risk (Beta)
    Alta Inds     17.4%         1.29
    Market        15.0          1.00
    Am. Foam      13.8          0.68
    T-Bills        8.0          0.00
    Repo Men       1.7         (0.86)

(1) Do the expected returns appear to be related to each alternative's market risk? (2) Is it possible to choose among the alternatives on the basis of the information developed thus far?

Answer: The expected returns are related to each alternative's market risk: the higher the alternative's beta, the higher its expected rate of return. Also, note that t-bills have zero risk. We do not yet have enough information to choose among the various alternatives. We need to know the required rates of return on these alternatives and compare them with their expected returns.

l. 1. Write out the security market line (SML) equation, use it to calculate the required rate of return on each alternative, and then graph the relationship between the expected and required rates of return.

Answer: Here is the SML equation: ri = rRF + (rM - rRF)bi. If we use the t-bill yield as a proxy for the risk-free rate, then rRF = 8%. Further, our estimate of r̂M is 15%. Thus, the required rates of return for the alternatives are as follows:

    Alta Inds: 8% + (15% - 8%)1.29 = 17.03% ≈ 17.0%.
    Market:    8% + (15% - 8%)1.00 = 15.0%.
    Am Foam:   8% + (15% - 8%)0.68 = 12.76% ≈ 12.8%.
    T-Bills:   8% + (15% - 8%)0.00 = 8.0%.
    Repo Men:  8% + (15% - 8%)(-0.86) = 1.98% ≈ 2%.

l. 2. How do the expected rates of return compare with the required rates of return?

Answer: We have the following relationships:

    Security    Expected Return (r̂)   Required Return (r)   Condition
    Alta Inds        17.4%                 17.0%             Undervalued: r̂ > r
    Market           15.0                  15.0              Fairly valued (market equilibrium)
    Am Foam          13.8                  12.8              Undervalued: r̂ > r
    T-Bills           8.0                   8.0              Fairly valued
    Repo Men          1.7                   2.0              Overvalued: r > r̂

[Graph: required and expected rates of return plotted against betas from -3 to 3, with the SML ri = rRF + RPM·bi = 8% + 7%(bi); Alta Inds and Am. Foam plot above the line, Repo Men below it, and T-bills and the market portfolio on it.]

(Note: the plot looks somewhat unusual in that the x-axis extends to the left of zero. We have a negative-beta stock, hence a required return that is less than the risk-free rate.) The t-bills and market portfolio plot on the SML, Alta Inds and Am. Foam plot above it, and Repo Men plots below it. Thus, the t-bills and the market portfolio promise a fair return, Alta Inds and Am. Foam are good deals because they have expected returns above their required returns, and Repo Men has an expected return below its required return.

l. 3. Does the fact that Repo Men has an expected return which is less than the t-bill rate make any sense?

Answer: Repo Men is an interesting stock. Its negative beta indicates negative market risk: including it in a portfolio of "normal" stocks will lower the portfolio's risk. Therefore, its required rate of return is below the risk-free rate. Basically, this means that Repo Men is a valuable security to rational, well-diversified investors. To see why, consider this question: would any rational investor ever make an investment which has an expected return below the risk-free rate? Yes, if the investment reduces the risk of the overall portfolio, much as buying insurance (which also has a negative expected return) reduces risk.

l. 4. What would be the market risk and the required return of a 50-50 portfolio of Alta Inds and Repo Men? Of Alta Inds and Am. Foam?

Answer: Note that the beta of a portfolio is simply the weighted average of the betas of the stocks in the portfolio. Thus, the beta of a portfolio with 50 percent Alta Inds and 50 percent Repo Men is:

    bp = Σ wi bi
    bp = 0.5(bAlta) + 0.5(bRepo) = 0.5(1.29) + 0.5(-0.86) = 0.215,
    rp = rRF + (rM - rRF)bp = 8.0% + (15.0% - 8.0%)(0.215) = 8.0% + 7%(0.215) = 9.51% ≈ 9.5%.

For a portfolio consisting of 50% Alta Inds plus 50% Am. Foam, the required return would be 14.9%:

    bp = 0.5(1.29) + 0.5(0.68) = 0.985.
    rp = 8.0% + 7%(0.985) = 14.9%.

m. 1.
Suppose investors raised their inflation expectations by 3 percentage points over current estimates as reflected in the 8 percent t-bill rate. What effect would higher inflation have on the SML and on the returns required on high- and low-risk securities?

Answer: [Graph: required and expected rates of return (0% to 40%) plotted against betas from 0.00 to 2.00, showing the original SML, a parallel upward shift for increased inflation, and a steeper rotated line for increased risk aversion.]

Here we have plotted the SML for betas ranging from 0 to 2.0. The base case SML is based on rRF = 8% and r̂M = 15%. If inflation expectations increase by 3 percentage points, with no change in risk aversion, then the entire SML is shifted upward (parallel to the base case SML) by 3 percentage points. Now, rRF = 11%, r̂M = 18%, and all securities' required returns rise by 3 percentage points. Note that the market risk premium, r̂M - rRF, remains at 7 percentage points.

m. 2. Suppose instead that investors' risk aversion increased enough to cause the market risk premium to increase by 3 percentage points. (Inflation remains constant.) What effect would this have on the SML and on returns of high- and low-risk securities?

Answer: When investors' risk aversion increases, the SML is rotated upward about the y-intercept (rRF). rRF remains at 8 percent, but now r̂M increases to 18 percent, so the market risk premium increases to 10 percent. The required rate of return will rise sharply on high-risk (high-beta) stocks, but not much on low-beta securities.

Web Appendix 2B
Calculating Beta Coefficients With a Financial Calculator

Solutions to Problems

2. Because of a relative scarcity of such stocks and the beneficial net effect on portfolios that include it, its "risk premium" is likely to be very low or even negative. Theoretically, it should be negative.

g. The beta would decline to 0.53.
A decline indicates that the stock has become less risky; however, with the change in the debt ratio the stock has actually become more risky. In periods of transition, when the risk of the firm is changing, the calculated beta can yield conclusions that are exactly opposite to the actual facts. Once the company's risk stabilizes, the calculated beta should rise and should again approximate the true beta.

2B-2 a. The slope of the characteristic line is the stock's beta coefficient:

    Slope = Rise/Run = Δri/ΔrM.
    SlopeA = BetaA = (29.00 - 15.20)/(29.00 - 15.20) = 1.0.
    SlopeB = BetaB = (20.00 - 13.10)/(29.00 - 15.20) = 0.5.

b.  r̂M = 0.1(-14%) + 0.2(0%) + 0.4(15%) + 0.2(25%) + 0.1(44%)
       = -1.4% + 0% + 6% + 5% + 4.4% = 14%.

[Graph: the security market line plotted from the equation below.]

The equation of the SML is thus:

    ri = rRF + (rM – rRF)bi = 9% + (14% – 9%)bi = 9% + (5%)bi.

c.  Required rate of return on Stock A: rA = rRF + (rM – rRF)bA = 9% + (14% – 9%)1.0 = 14%.
    Required rate of return on Stock B: rB = 9% + (14% – 9%)0.5 = 11.50%.

d.  Expected return on Stock C: r̂C = 18%.
    Return on Stock C if it is in equilibrium:
    rC = rRF + (rM – rRF)bC = 9% + (14% – 9%)2 = 19% ≠ 18% = r̂C.
    A stock is in equilibrium when its required return is equal to its expected return. Stock C's required return is greater than its expected return; therefore, Stock C is not in equilibrium. Equilibrium will be restored when the expected return on Stock C is driven up to 19%.
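The characteristic-line slope computed in 2B-2 is just a least-squares regression coefficient, so with only two observations it reduces to rise over run. A small sketch (illustrative, with our own helper name; the return figures are those of 2B-2) reproduces the betas and the SML check:

```python
# Beta as the least-squares slope of the characteristic line:
# beta = cov(r_stock, r_market) / var(r_market)
def beta(stock, market):
    n = len(stock)
    ms, mm = sum(stock) / n, sum(market) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(stock, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

# Two-point case from Problem 2B-2 (market returns of 15.20% and 29.00%)
b_a = beta([15.20, 29.00], [15.20, 29.00])   # Stock A: slope 1.0
b_b = beta([13.10, 20.00], [15.20, 29.00])   # Stock B: slope 0.5

# SML check with r_RF = 9% and r_M = 14%: required return on Stock B
r_b = 9 + (14 - 9) * b_b                     # 11.5%
```

With two points the covariance-over-variance formula collapses to (20.00 - 13.10)/(29.00 - 15.20), the same rise-over-run ratio used in the solution; with 60 monthly observations it is the regression slope an analyst would actually estimate.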
With an expected return of 18% on Stock C, investors should sell it, driving its price down and its yield up.
Prime numbers
From HaskellWiki
Latest revision as of 00:42, 31 May 2017

In mathematics, amongst the natural numbers greater than 1, a prime number (or a prime) is one that has no divisors other than itself (and 1).

The primes can thus be described as the natural numbers above 1 with all the composites removed:

    P = N2 \ ⋃ { {n×m : m ∈ Nn} : n ∈ N2 }
      = N2 \ ⋃ { {n×n, n×n+n, n×n+n+n, ...} : n ∈ N2 }
      = N2 \ ⋃ { {p×p, p×p+p, p×p+p+p, ...} : p ∈ P }

(where Nk = {k, k+1, k+2, ...}). In pseudocode, this can be written as

    primes = [2..] \ [[p*p, p*p+p..] | p <- primes]

Short exposition is here.

3 Sieve of Eratosthenes

Simplest, bounded, very inefficient formulation:

    import Data.List (\\)

    primesTo m = sieve [2..m]          {- (\\) is set-difference for unordered lists -}
      where
        sieve (x:xs) = x : sieve (xs \\ [x,x+x..m])
        sieve []     = []

The (unbounded) sieve of Eratosthenes calculates primes as integers above 1 that are not multiples of primes, i.e. not composite; composites are found as enumeration of multiples of each prime, generated by counting up from the prime's square in constant increments equal to that prime (or twice that much, for odd primes). This is much more efficient and runs at about n^1.2 empirical orders of growth (corresponding to n log n log log n complexity, more or less, in n primes produced):

    import Data.List.Ordered (minus, union, unionAll)

    primes = 2 : 3 : minus [5,7..] (unionAll [[p*p, p*p+2*p..] | p <- tail primes])

    {- Using `under n = takeWhile (<= n)`, with ordered increasing lists,
       `minus`, `union` and `unionAll` satisfy, for any `n` and `m`:

       under n (minus a b)         == nub . sort $ under n a \\ under n b
       under n (union a b)         == nub . sort $ under n a ++ under n b
       under n . unionAll . take m == under n . foldl union [] . take m
       under n . unionAll          == nub . sort . concat
                                        . takeWhile (not.null) . map (under n)  -}

The definition is primed with 2 and 3 as initial primes, to avoid the vicious circle. The "big union" unionAll function could be defined as the folding of (\(x:xs) -> (x:) . union xs); or it could use a Bool array as a sorting and duplicates-removing device. The processing naturally divides up into the segments between successive squares of primes. Stepwise development follows (the fully developed version is here).

3.1 Initial definition

First of all, working with ordered increasing lists, the sieve of Eratosthenes can be genuinely represented by

    -- genuine yet wasteful sieve of Eratosthenes
    -- primes = eratos [2..]          -- unbounded
    primesTo m = eratos [2..m]        -- bounded, up to m
      where
        eratos []     = []
        eratos (p:xs) = p : eratos (xs `minus` [p, p+p..])

3.2 Analysis

So for each newly found prime p … . Bounded variants of the same idea can delay or restrict the removal work:

    -- eulers (p:xs) = p : eulers (xs `minus` map (p*) (under (div m p) (p:xs)))
    -- turner (p:xs) = p : turner [x | x <- xs, x < p*p || rem x p /= 0]

Its empirical complexity is around O(n^1.45). This simple optimization works here because this formulation is bounded (by an upper limit). To start late on a bounded sequence is to stop early (starting past end makes an empty sequence – see warning below 1), thus preventing the creation of all the superfluous multiples streams which start above the upper bound anyway (note that Turner's sieve is unaffected by this).

(The odds-only variants flatly ignore all evens above 2 a priori.) It is now clear that it cannot be made unbounded just by abolishing the upper bound m, because the guard cannot be simply omitted without changing the complexity back for the worst.

3.5 Accumulating Array

So while minus(a,b) takes O(|b|) operations for random-access imperative arrays and about O(|a|) operations here for ordered increasing lists of numbers … . Indeed, for unboxed arrays, with the type signature added explicitly (suggested by Daniel Fischer), the above code runs pretty fast.
[edit]:pt) | q <- p*p , (h,t) <- span (< q) xs = h ++ sieve (t `minus` [q, q+p..]) pt -- h ++ turner [x | x<-t, rem x p>0] pt Inlining and fusing span and (++) we get: primesPE = 2 : ops where ops = sieve [3,5..] 9 ops -- odd primes sieve (x:xs) q ps@ ~(p:pt) | x < q = x : sieve xs q ps | otherwise = sieve (xs `minus` [q, q+2*p..]) (head pt^2) pt Since the removal of a prime's multiples here starts at the right moment, and not just from the right place, the code could now finally be made unbounded. Because no multiples-removal process is started prematurely, there are no extraneous multiples streams, which were the reason for the original formulation's extreme inefficiency. [edit] 3.7 Segmented With work done segment-wise between the successive squares of primes it becomes primesSE = 2 : ops where ops = sieve 3 9 ops [] -- odd primes sieve x q ~(p:pt) fs = foldr (flip minus) [x,x+2..q-2] -- chain of subtractions [[y+s, y+2*s..q] | (s,y) <- fs] -- OR, -- [x,x+2..q-2] `minus` foldl union [] -- subtraction of merged -- [[y+s, y+2*s..q] | (s,y) <- fs] -- lists ++ sieve (q+2) (head pt^2) pt (. [edit] : ops where ops = sieve 3 9 ops [] -- odd primes sieve x q ~(p:pt) fs = ([x,x+2..q-2] `minus` joinST [[y+s, y+2*s..q] | (s,y) <- fs]) ++ sieve (q+2) (head pt^2) pt ((++ [(2*p,q)]) [(s,q-rem (q-y) s) | (s,y) <- fs]) joinST (xs:t) = (union xs . joinST . pairs) t where pairs (xs:ys:t) = union xs ys : pairs t pairs t = t joinST [] = [] [edit] 3.7.2 Segmented merging via an array The removal of composites is easy with arrays. Starting points can be calculated directly: import Data.List (inits, tails) import Data.Array.Unboxed primesSAE = 2 : sieve 2 4 (tail primesSAE) (inits primesSAE) -- (2:) . (sieve 2 4 . 
tail <*> inits) $ primesSAE where sieve r q ps (fs:ft) = [n | (n,True) <- assocs ( accumArray (\ _ _ -> False) True (r+1,q-1) [(m,()) | p <- fs, let s = p * div (r+p) p, m <- [s,s+p..q-1]] :: UArray Int Bool )] ++ sieve q (head ps^2) (tail ps) ft The pattern of iterated calls to tail is captured by a higher-order function tails, which explicitly generates the stream of tails of a stream, making for a bit more readable (even if possibly a bit less efficient) code: psSAGE = 2 : [n | (r:q:_, fs) <- (zip . tails . (2:) . map (^2) <*> inits) psSAGE, (n,True) <- assocs ( accumArray (\_ _ -> False) True (r+1, q-1) [(m,()) | p <- fs, let s = (r+p)`div`p*p, m <- [s,s+p..q-1]] :: UArray Int Bool )] [edit] 3.8 Linear merging But segmentation doesn't add anything substantially, and each multiples stream starts at its prime's square anyway. What does the Postponed code do, operationally? With each prime's square passed by, there emergess where prs = 3 : minus [5,7..] (joinL [[p*p, p*p+2*p..] | p <- prs]) joinL ((x:xs):t) = x : union xs (joinL t) Here, xs stays near the top, and more frequently odds-producing streams of multiples of smaller primes are above those of the bigger primes, that produce less frequently their multiples which have to pass through more union nodes on their way up. Plus, no explicit synchronization is necessary anymore because the produced multiples of a prime start at its square anyway - just some care has to be taken to avoid a runaway access to the indefinitely-defined structure, defining joinL (or foldr's combining function) to produce part of its result before accessing the rest of its input (thus making it productive). Melissa O'Neill introduced double primes feed to prevent unneeded memoization (a memory leak). We can even do multistage. Here's the code, faster still and with radically reduced memory consumption, with empirical orders of growth of around ~ n1.40 (initially better, yet worsening for bigger ranges): primesLME = 2 : _Y ((3:) . 
minus [5,7..] . joinL . map (\p-> [p*p, p*p+2*p..]))

 _Y :: (t -> t) -> t
 _Y g = g (_Y g)               -- multistage, non-sharing,  g (g (g (g ...)))
    -- g (let x = g x in x)    -- two g stages, sharing

_Y is a non-sharing fixpoint combinator, here arranging for a recursive "telescoping" multistage primes production (a tower of producers). This allows the primesLME stream to be discarded immediately as it is being consumed by its consumer. For prs from the primesLME1 definition above this is impossible, as each produced element of prs is needed later as input to the same prs corecursive stream definition. So the prs stream feeds in a loop into itself and is thus retained in memory, being consumed by itself much slower than it is produced. With multistage production, each stage feeds into its consumer above it at the square of its current element, which can be immediately discarded after it's been consumed. (3:) jump-starts the whole thing.

3.9 Tree merging

Moreover, it can be changed into a tree structure. This idea is due to Dave Bayer and Heinrich Apfelmus:

 primesTME = 2 : _Y ((3:) . gaps 5 . joinT . map (\p-> [p*p, p*p+2*p..]))

    -- joinL ((x:xs):t) = x : union xs (joinL t)
 joinT ((x:xs):t) = x : union xs (joinT (pairs t))   -- set union, ~=
    where pairs (xs:ys:t) = union xs ys : pairs t    --   nub.sort.concat

 gaps k s@(x:xs) | k < x = k:gaps (k+2) s            -- ~= [k,k+2..]\\s,
                 | True  =   gaps (k+2) xs           --   when null(s\\[k,k+2..])

This code is pretty fast. Data.List.Ordered.foldt of the data-ordlist package builds the same structure, but in a lazier fashion, consuming its input at the slowest pace possible. Here this sophistication is not needed (evidently).

3.10 Tree merging with Wheel

 primesTMWE = [2,3,5,7] ++ _Y ((11:) . tail . gapsW 11 wheel . joinT .
hitsW 11 wheel)

 gapsW k (d:w) s@(c:cs) | k < c     = k : gapsW (k+d) w s   -- set difference
                        | otherwise =     gapsW (k+d) w cs  --   k==c
 hitsW k (d:w) s@(p:ps) | k < p     =     hitsW (k+d) w s   -- intersection
                        | otherwise = scanl (\c d->c+p*d) (p*p) (d:w)
                                        : hitsW (k+d) w ps  --   k==p

 wheel = 2:4:2:4:6:2:6:4:2:4:6:6:2:6:4:2:6:4:6:8:4:2:4:2:
         4:8:6:4:6:2:4:6:2:6:6:4:2:4:6:2:6:4:2:4:2:10:2:10:wheel
      -- cycle $ zipWith (-) =<< tail $ [i | i <- [11..221], gcd i 210 == 1]

The hitsW function is there to find the starting point for rolling the wheel for each prime, but this can be found directly:

 primesW = [2,3,5,7] ++ _Y ( (11:) . tail . gapsW 11 wheel . joinT .
               map (\p-> map (p*) . dropWhile (< p) $
                           scanl (+) (p - rem (p-11) 210) wheel) )

Seems to run about 1.4x faster, too.

3.10.1 Above Limit - Offset Sieve

Another task is to produce primes above a given value:

 {-# OPTIONS_GHC -O2 -fno-cse #-}
 primesFromTMWE primes m = dropWhile (< m) [2,3,5,7,11]
                           ++ gapsW a wh2 (compositesFrom a)
    where
    (a,wh2)  = rollFrom (snapUp (max 3 m) 3 2)
    (h,p2:t) = span (< z) $ drop 4 primes     -- p < z  => p*p<=a
    z = ceiling $ sqrt $ fromIntegral a + 1   -- p2>=z  => p2*p2>a
    compositesFrom a = joinT (joinST [multsOf p a | p <- h ++ [p2]]
                              : [multsOf p (p*p) | p <- t])

 snapUp v o step = v + (mod (o-v) step)
 rollFrom n = go wheelNums wheel
    where
    m = (n-11) `mod` 210
    go (x:xs) ws@(w:ws2) | x < m = go xs ws2
                         | True  = (n+x-m, ws)     -- (x >= m)
 wheelNums = scanl (+) 0 wheel

A certain preprocessing delay makes it worthwhile when producing more than just a few primes; otherwise it degenerates into simple trial division, which then ought to be used directly:

 primesFrom m = filter isPrime [m..]

3.11 Map-based

The same postponement logic can be expressed with a Data.Map of composites:

 mkPrimes n m ps@ ~(p:pt) q = case (M.null m, M.findMin m) of
     (False, (n2, skips)) | n == n2 ->
          mkPrimes (n+2) (addSkips n (M.deleteMin m) skips) ps q
     _ -> if n < q
          then n : mkPrimes (n+2) m ps q
          else     mkPrimes (n+2) (addSkip n m (2*p)) pt (head pt^2)

 addSkip n m s = M.alter (Just . maybe [s] (s:)) (n+s) m
 addSkips = foldl' .
addSkip

4 Turner's sieve - Trial division

David Turner's (SASL Language Manual, 1983) formulation replaces the non-standard minus in the sieve of Eratosthenes with a stock list comprehension with rem filtering, turning it into a trial division algorithm, for clarity and simplicity:

 -- unbounded sieve, premature filters
 primesT = sieve [2..]
      where
      sieve (p:xs) = p : sieve [x | x <- xs, rem x p > 0]
 -- map head
 --  $ iterate (\(p:xs) -> [x | x <- xs, rem x p > 0]) [2..]

This creates many superfluous implicit filters, because they are created prematurely. To be admitted as prime, each number will be tested for divisibility here by all its preceding primes, while just those not greater than its square root would suffice. To find e.g. the 1001st prime (7927), 1000 filters are used, when in fact just the first 24 are needed (up to 89's filter only). Operational overhead here is huge.

4.1 Guarded Filters

But this really ought to be changed into the bounded and guarded variant, again achieving the "miraculous" complexity improvement from above quadratic to about O(n^1.45) empirically (in n primes produced):

 primesToGT m = sieve [2..m]
      where
      sieve (p:xs) | p*p > m = p : xs
                   | True    = p : sieve [x | x <- xs, rem x p > 0]
 -- (\(a,b:_) -> map head a ++ b) . span ((< m).(^2).head) $
 --     iterate (\(p:xs) -> [x | x <- xs, rem x p > 0]) [2..m]

4.2 Postponed Filters

Or it can remain unbounded, just the filters' creation must be postponed until the right moment:

 primesPT1 = 2 : sieve primesPT1 [3..]
      where
      sieve (p:pt) xs = let (h,t) = span (< p*p) xs
                        in h ++ sieve pt [x | x <- t, rem x p > 0]
 -- fix $ concatMap (fst . fst)
 --     . iterate (\((_,xs), p:pt) -> let (h,t) = span (< p*p) xs in
 --                    ((h, [x | x <- t, rem x p > 0]), pt))
 --     . (,) ([2],[3..])

It can be re-written with span and (++) inlined and fused into the sieve:

 primesPT = 2 : oddprimes
      where
      oddprimes = sieve [3,5..]
9 oddprimes
      sieve (x:xs) q ps@ ~(p:pt)
        | x < q = x : sieve xs q ps
        | True  =     sieve [x | x <- xs, rem x p /= 0] (head pt^2) pt

creating here as well the linear filtering.

4.3 Optimal trial division

The above is equivalent to the traditional formulation of trial division,

 ps = 2 : [i | i <- [3..],
               and [rem i p > 0 | p <- takeWhile (\p -> p^2 <= i) ps]]

or,

 noDivs n fs = foldr (\f r -> f*f > n || (rem n f > 0 && r)) True fs
 -- primes = filter (`noDivs`[2..]) [2..]
 --        = 2 : filter (`noDivs`[3,5..]) [3,5..]

Trial division is used as a simple primality test and prime factorization algorithm.

4.4 Segmented Generate and Test

Next we turn the list of filters into one filter by an explicit list, each one in a progression of prefixes of the primes list. This seems to eliminate most recalculations, explicitly filtering composites out from batches of odds between the consecutive squares of primes.

 import Data.List

 primesST = 2 : ops
      where
      ops = sieve 3 9 ops (inits ops)           -- odd primes
         -- (sieve 3 9 <*> inits) ops           -- [],[3],[3,5],...
      sieve x q ~(_:pt) (fs:ft) =
          filter ((`all` fs) . ((> 0).) . rem) [x,x+2..q-2]
          ++ sieve (q+2) (head pt^2) pt ft

This can also be coded as, arguably more readably,

 primesSGT = 2 : ops
      where
      ops = 3 : [n | (r:q:_, px) <- (zip . tails . (3:) . map (^2) <*> inits) ops,
                     n <- [r+2,r+4..q-2],  all ((> 0).rem n) px]
         -- n <- foldl (>>=) [r+2,r+4..q-2]     -- chain of filters
         --           [filterBy [p] | p <- px]
         -- n <- [r+2,r+4..q-2] >>= filterBy px -- filter by a list

4.4.1 Generate and Test Above Limit

The following will start the segmented Turner sieve at the right place, using any primes list it's supplied with (e.g. TMWE etc.)
or itself, as shown, demand computing it just up to the square root of any prime it'll produce:

 primesFromST m | m <= 2 = 2 : primesFromST 3
 primesFromST m | m > 2 =
     sieve (m`div`2*2+1) (head ps^2) (tail ps) (inits ps)
   where
   (h,ps) = span (<= (floor.sqrt $ fromIntegral m+1)) ops
   sieve x q ps (fs:ft) =
       filter ((`all` (h ++ fs)) . ((> 0).) . rem) [x,x+2..q-2]
       ++ sieve (q+2) (head ps^2) (tail ps) ft
   ops = 3 : primesFromST 5                     -- odd primes

 -- ~> take 3 $ primesFromST 100000001234
 -- [100000001237,100000001239,100000001249]

This is usually faster than testing candidate numbers for divisibility one by one, which has to re-fetch anew the needed prime factors to test by for each candidate. Faster still is the offset sieve of Eratosthenes on odds, and yet faster the one with wheel optimization, on this page.

5 Euler's Sieve

6 Using Immutable Arrays

6.1 Generating Segments of Primes

The sieve of Eratosthenes' removal of multiples on each segment of odds can be done by actually marking them in an array, instead of manipulating ordered lists, and can be further sped up more than twice by working with odds only:

 import Data.Array.Unboxed

 primesSA :: [Int]
 primesSA = 2 : oddprimes ()
   where
   oddprimes = (3 :) . sieve 3 [] . oddprimes
   sieve x fs (p:ps) =
       [i*2 + x | (i,True) <- assocs a]
       ++ sieve (p*p) ((p,0) : [(s, rem (y-q) s) | (s,y) <- fs]) ps
     where
     q = (p*p-x)`div`2
     a :: UArray Int Bool
     a = accumArray (\ b c -> False) True (1,q-1)
                    [(i,()) | (s,y) <- fs, i <- [y+s, y+s+s..q]]

Runs significantly faster than TMWE and with better empirical complexity, of about O(n^1.10..1.05) in producing the first few millions of primes, with constant memory footprint.

6.2 Calculating Primes Upto a Given Value

 primesToNA n = 2 : [i | i <- [3,5..n], ar ! i]
   where
   ar = f 5 $ accumArray (\ a b -> False) True (3,n)
                         [(i,()) | i <- [9,15..n]]
   f p a | q > n = a
         | True  = if null x then a2 else f (head x) a2
     where
     q  = p*p
     a2 :: UArray Int Bool
     a2 = a // [(i,False) | i <- [q, q+2*p..n]]
     x  = [i | i <- [p+2,p+4..n], a2 ! i]

6.3 Calculating Primes in a Given Range

 primesFromToA a b = (if a<3 then [2] else [])
                     ++ [i | i <- [o,o+2..b], ar !
i]
   where
   o  = max (if even a then a+1 else a) 3     -- first odd in the segment
   r  = floor . sqrt $ fromIntegral b + 1
   ar = accumArray (\_ _ -> False) True (o,b) -- initially all True,
         [(i,()) | p <- [3,5..r]
                 , let q  = p*p               -- flip every multiple
                       s  = 2*p               --   of an odd to False
                       (n,x) = quotRem (o - q) s
                       q2 = if o <= q then q
                            else q + (n + signum x)*s
                 , i <- [q2,q2+s..b] ]

Although sieving by odds instead of by primes, the array generation is so fast that it is very much feasible and even preferable for quick generation of some short spans of relatively big primes.

7 Using Mutable Arrays

Using mutable arrays is the fastest but not the most memory efficient way to calculate prime numbers in Haskell.

7.2 Bitwise prime sieve with Template Haskell

8 Implicit Heap

See Implicit Heap.

9 Prime Wheels

See Prime Wheels.

10 Using IntSet for a traditional sieve

See Using IntSet for a traditional sieve.

11 Testing Primality, and Integer Factorization

See Testing primality.

12 One-liners

See primes one-liners.

- Empirical orders of growth on Wikipedia.
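The sieves above all rely on the ordered-list operations minus and union from the data-ordlist package (Data.List.Ordered), which the page uses without restating. As a self-contained sketch, here are simplified versions of both, driving the linear-merging sieve of section 3.8 (the definitions are standard, but this particular standalone arrangement is mine, not the wiki's):

```haskell
-- Ordered-list difference and union, simplified versions of
-- Data.List.Ordered.minus / union from the data-ordlist package.
minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
    LT -> x : minus xs (y:ys)
    EQ ->     minus xs ys
    GT ->     minus (x:xs) ys
minus xs _ = xs

union :: Ord a => [a] -> [a] -> [a]
union (x:xs) (y:ys) = case compare x y of
    LT -> x : union xs (y:ys)
    EQ -> x : union xs ys
    GT -> y : union (x:xs) ys
union xs [] = xs
union [] ys = ys

-- The linear-merging sieve (primesLME1 of section 3.8): odds minus
-- the merged streams of odd multiples of each odd prime, starting
-- at that prime's square.
primes :: [Int]
primes = 2 : 3 : minus [5,7..] (joinL [[p*p, p*p+2*p..] | p <- tail primes])
  where joinL ((x:xs):t) = x : union xs (joinL t)

main :: IO ()
main = print (take 10 primes)
```

Running main prints the first ten primes, [2,3,5,7,11,13,17,19,23,29].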
https://wiki.haskell.org/index.php?title=Prime_numbers&diff=cur&oldid=56311
Passing data to a function

In most cases, you want your function to accept one or more values that you pass to it as data for the function to operate on. For example, the Sqr() function accepts a single argument, which must be a number. To define the arguments that your custom function accepts, use the following syntax, inside the parentheses that follow the function name:

name As Type

where name is just some name that you make up to use as a placeholder for the incoming value, and Type is a valid data type. For example, you might want the custom SalesTax() function to accept a single numeric value as an argument. You need to make up a name for that, so just call it AnyNum. You also have to define that incoming value as some sort of number. Most likely, the passed value is a Currency value anyway, so you can modify the custom SalesTax() function as follows to accept a single number as an argument:

Public Function SalesTax(AnyNum As Currency)

End Function

What the first line really means is "Expect some number to be here when called. Refer to that number as AnyNum and treat it as a Currency number." A function can accept any number of arguments. If you want a function to accept multiple arguments, give each argument a name and data type by using the same preceding syntax. Separate each definition with a comma. The SalesTax() function needs to accept only one argument, so don't modify that one. However, just as a general example, if you want a function to accept two arguments, you define each as in this example:

Public Function funcName(AnyNum As Currency, AnyText As String)

End Function

Returning a value from a function

A function can also return a value - that is, only one value, because a function can't return multiple values. To make your function return a value, you just add As Type, where Type is a valid data type, to the end of the first statement, outside the closing parenthesis of the function name.
You specify only the data type of the returned value - don't give it a name. For example, you might want the SalesTax() function to return a single value that's a Currency number. In that case, modify the SalesTax() function this way:

Public Function SalesTax(AnyNum As Currency) As Currency

End Function

The custom function doesn't return its value until all the code in the procedure has been executed. To define the value returned by the function, use the syntax

functionName = value

where functionName is the same as the name of the function itself, without the parentheses, and value is the value that you want the function to return (although the value can be an expression that calculates a return value). Suppose you want to be able to pass to the SalesTax() function some Currency value, like $100.00 or $65.45 or whatever, and have it return the sales tax for that amount. To pick a number out of a hat, the sales tax rate is 6.75 percent. The following SalesTax() function performs the appropriate calculation (by multiplying the number that's passed to it by 0.0675) and then returns the results of that calculation:

Public Function SalesTax(AnyNum As Currency) As Currency
    'Multiply passed value by 6.75% (0.0675) and
    'return the result of that calculation.
    SalesTax = AnyNum * 0.0675
End Function
https://sourcedaddy.com/ms-access/passing-data-to-function.html
Formulation Inputs CostsVSalas Nov 15, 2013 12:31 AM Hi, Which is the criteria the UI uses to decide which cost to display when there are more than one cost options for a formulation input and the combination of cross reference, cost type and cost set? For instance, the following costs were saved throughout the SaveDWBSpecCost web service: Which cost will be displayed on the UI? Why? How can I control this programatically or from the database to display the specific cost that I want? Thanks. -Victoria Salas 1. Re: Formulation Inputs CostsKellyMayfield-Oracle Nov 15, 2013 6:27 PM (in response to VSalas) You can select the specific cross reference you would like to use using the pack size drop down, the system will then pull the cost for that specific cross ref. If no cross reference is selected I think out of the box we just grab the first equivalent in alphabetical/numeric order. However, we fully intended for customers to extend this logic because we believe most customers will want to apply a unique strategy for their business or even per product category. For example, for Fresh Fruit products pull the most expensive cost if restrictions are set to XYZ, pull the least expensive if restrictions are set to ABC. You can see all of the available cost extension points in the extensibility pack > extensibility guide. We hope to offer a reference implementation in a future extensibility pack release so any information around how you would like the system to behave would be beneficial for that example. Thanks Kelly 2. Re: Formulation Inputs CostsVSalas Nov 21, 2013 8:38 PM (in response to KellyMayfield-Oracle) Hi Kelly, Do you recommend us to use this extension point? Or, should we wait to until a Reference Implementation is officially available? Also, I would like to reframe the question in case I was not very clear. 
Using the pack size drop down, if there are multiple costs stored for a particular material and those multiple costs correspond to the same cross reference, which cost will the P4P show? Is there a rule for picking one when there are several options? Thank you very much. -Victoria 3. Re: Formulation Inputs CostsKellyMayfield-Oracle Nov 21, 2013 10:13 PM (in response to VSalas) You definitely should use the extension point now. If you need help with your extension just post any questions here and I'm sure we can provide guidance. The rule the system follows would be the same anytime the system finds more than one corresponding cost - whether a pack size is selected or not. Right now it just picks the first it finds in alphabetical order. Let us know what logic you would like the system to perform and we can provide guidance. 4. Re: Formulation Inputs CostsVSalas Nov 22, 2013 12:08 AM (in response to KellyMayfield-Oracle) Hi Kelly, What I need to do is to always show the highest cost available per formulation input given a combination of Cost Reference/Cost Type/Cost Set. I would really appreciate if you can provide me guidance on: - How to install the extension point. - How to use the extension point. - Provide me some scenarios on how the extension point will work. Thanks a lot. -Victoria 5. Re: Formulation Inputs CostsRon M-Oracle Nov 26, 2013 7:44 PM (in response to VSalas)1 person found this helpful Hi Victoria, You will have to do this using the Costing extensions. This type of functionality is what the extension point is meant for, which updates the value in the UI directly. This particular extension is not well documented, so I’ll help you along. But I wanted to better understand your requirements. The cost extension is trying to pull in the highest cost. The Pack Size is defined on the material spec’s cross references grid and is directly associated to one cross reference (SystemID and Equivalent). 
If you choose a Pack Size in the formulation inputs grid, you are then choosing which cross reference (system ID & equivalent) should be used to pull the cost. This works today, but it doesn’t address the issue of clients having the multiple valid costs with the same systemID and equivalent. From the data you provided, I am assuming that this isn’t a use case you need to handle. Is that right? I believe the use case you need to handle is when the user does not select a pack size. Is this case, we are just pulling in the first equivalent we find in alphabetical order for the material spec, regardless of the cost. So this is the problem that I assume you need to handle. Is that right? If so, then you will need to write a new FormatPlugin Extension point. These are documented in the Extensibility Pack's PluginExtensions documentation. Your Format Plugin is configured in the CustomPluginExtensions.xml file. There are three different extension points that could be implemented, so I’m not sure which one you need, but I think you’ll need the “FormulationInputCostBookPriceOverride” one. For this extension point, the FormatPlugin will get a FormulationCostContext<IFormulationInputDO> as the context.Context value, which is in the Xeno.Prodika.GSMLib.Formulation.Extension namespace. This class has an Object property, which in this case would be the formulationInput (type of IFormulationInputDO). You should be able to check if a Pack Size has been selected for this input using the formulationInputs’ PackSize property. The Context also has a CostPreferences property, which is a FormulationSpecCostPreferences, giving you the cost preferences of the current formulation spec, such as the Cost Type, Cost Set, Currency, etc. You should then be able to loop through the Material spec's legacy profiles collection (Material property of the formulationInput), finding the cross references with the matching systemID from the cost preferences. 
Using these equivalents, you can call the CostLibraryService (available from GeneralServices.dll) and get all the possible costs, returning the highest value. ... var costLibraryService = CostLibraryService; foreach (string equivalent in matchingEquivalents) { ICostItem costLibraryCostItem = costLibraryService.GetCostItem(legacyProfile.SystemCode, equivalent, costType, supplierNumber); } Your plugin should simply return the highest cost as a string (without the currency) protected static ICostLibraryService CostLibraryService { get { return (ICostLibraryService)AppPlatformHelper.ServiceManager[ typeof( ICostLibraryService ).FullName ]; } } Note that this costLibraryService.GetCostItem method will not handle mutliple costs for the same equivalent. If you need to handle that, you can try using one of the GetCostItems methods instead, though this may take some more work. This should get your started. Let me know when you need more help. Regards, Ron 6. Re: Formulation Inputs CostsVSalas Nov 27, 2013 10:12 PM (in response to Ron M-Oracle) Hi Ron, After I get the highest cost, how can I make the system to show it after the user selects the Identity Preferences and Cost Book Preferences on the Settings Menu? What I need to do is to show the highest cost per formulation input after the user selects the preferences on the Settings Menu, assuming that there will be more than one cost options available. In this way, when the user hits save after selecting the cost preferences, the formulation input is being saved with the highest cost available. Thanks! -Victoria Salas 7. Re: Formulation Inputs CostsRon M-Oracle Nov 27, 2013 10:17 PM (in response to VSalas) Did you try implementing the plugin? If so, what happened? When you select the cost preferences, I believe that it should reload the costs, so your plugin will get called then. 8. 
Re: Formulation Inputs CostsVSalas Dec 2, 2013 10:57 PM (in response to Ron M-Oracle) Hi Ron, I installed the patch for the bug # 17511080 to solve the problem with unrelated equivalents - costs. I already did some testing and the patch didn't make any difference. Also, I opened the files v6.1.1.1.21.xml and v6.1.1.1.21-orcl.xml from the patch files and both have the comment /* EMPTY SCRIPTS */. Is this normal? I'd appreciate your help on this. Thanks. -Victoria 9. Re: Formulation Inputs CostsRon M-Oracle Dec 3, 2013 4:00 PM (in response to VSalas) If you are looking for a highest cost functionality, the patch will not provide that. It was addressing an issue where the user selected a pack size, and it didn't pull in the cost for that pack size's equivalent. To implement a highest cost retrieval, you will need to follow my steps above. Let me know if you have questions. 10. Re: Formulation Inputs CostsVSalas Dec 10, 2013 3:34 PM (in response to Ron M-Oracle).
https://community.oracle.com/message/11277280
There seem to be two major test frameworks in Haskell. The first one, HUnit, is based on the xUnit family. Create test cases which assert various properties of the functions you're testing, bundle them into a test suite and run them.

foo :: (Num a) => a -> a -> a -> a
foo a b c = a * b + c

test1 = TestCase (assertEqual "* has higher precedence" 26 (foo 2 10 6))
tests = TestList [TestLabel "Foo test" test1]

-- From the REPL
*Main> runTestTT tests
Cases: 1  Tried: 1  Errors: 0  Failures: 0
Counts {cases = 1, tried = 1, errors = 0, failures = 0}

Tests like this always feel a bit smelly - the only way to verify the test is to write the computation out a second time. Whilst "measure twice, cut once" works for carpentry, it doesn't feel right for programming...

Enter an alternative testing framework, QuickCheck. The idea is simple: instead of testing arbitrary assertions about your code, specify the invariants associated with your function and let QuickCheck see if it can generate a failing test case. As a simple example, let's say we write a function to add two numbers together:

addNum :: (Num a) => a -> a -> a
addNum a b = a + b

prop_AddNum a b = (addNum a b) >= b && (addNum a b) >= a

We specify the invariant that if we add numbers together the result is bigger than either argument. Running QuickCheck shows that this invariant is (obviously!) wrong, and gives an example set of arguments that fail the test.

*Main> quickCheck prop_AddNum
Falsifiable, after 5 tests:
-1
-2

The convention is that invariants are specified with names beginning with "prop_", in case you're wondering where the weird naming comes from. QuickCheck generates random instances satisfying the types and validates the properties. Generators exist for the basic types and can be extended to your own. Taking an example from the ray tracing functions, we can specify an invariant that the distance between any two points is constant after a linear transform.
square :: (Num a) => a -> a
square x = x * x

distance :: Point -> Point -> Float
distance p1 p2 = sqrt (square ((x p1)-(x p2)) + square ((y p1)-(y p2)))

prop_distance :: Point -> Point -> Float -> Float -> Bool
prop_distance p1 p2 d1 d2 =
    0.001 > abs (distance p1 p2 -
                 distance (Point ((x p1) + d1) ((y p1) + d2))
                          (Point ((x p2) + d1) ((y p2) + d2)))

Note that the abs is just to deal with the rounding errors that occur when the square root produces floating point results. The code won't compile as is, because QuickCheck doesn't know how to generate Point objects. We can solve this problem by creating an instance of Arbitrary specialized (is that the right word?) for Point types.

instance Arbitrary Point where
    arbitrary = do
        x <- choose(1,1000) :: Gen Float
        y <- choose(1,1000) :: Gen Float
        return (Point x y)

do is used to provide sequencing of statements. We can now run quickCheck and verify that the invariant holds.

*Main> quickCheck prop_distance
OK, passed 100 tests.

I'm still not quite understanding some aspects of this (e.g. why can't I write Point choose(1,1000) choose(1,1000) instead of sequencing?), but this is a pretty neat way of writing tests and definitely gives me further reason to try and understand Haskell in more depth.
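On the closing question: Point can't be applied to choose(1,1000) directly because choose returns a Gen Float, not a Float, so the constructor has to be lifted into the Gen context. A sketch of the same instance in applicative style (assuming the Point record type used in the post, and a QuickCheck version where Gen has an Applicative instance, which all modern versions do):

```haskell
import Test.QuickCheck

data Point = Point { x :: Float, y :: Float } deriving Show

-- Equivalent to the do-block version: lift the Point constructor
-- over two Gen Float actions instead of sequencing by hand.
instance Arbitrary Point where
    arbitrary = Point <$> choose (1,1000) <*> choose (1,1000)
```

The do-notation works because Gen is a monad; the applicative spelling just makes the "apply Point to two generated values" shape explicit.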
http://www.fatvat.co.uk/2009/08/testing-times.html
I'm too lazy to talk nonsense. I'm a fool.

C

Title Description

Given a tree with \(n\) points, let \(L(u,v)\) be the number of points on the simple path \((u,v)\). Over all quadruples \((a,b,c,d)\) for which the paths \((a,b)\) and \((c,d)\) share no points, we want to know how many different values the pair \((L(a,b),L(c,d))\) takes.

\(n\leq 5\cdot 10^5\)

Solution

The key \(\tt observation\) is that for any such quadruple there must be an edge so that the two paths lie in the two subtrees on its two sides. So we can enumerate this edge and take the diameter in both subtrees. This process can be realized by rerooting \(dp\); what remains is a rectangular-area count which is, in fact, a suffix maximum.

Summary

This problem has a "no crossing" restriction. We can enumerate one object that "cuts" the two paths apart.

#include <cstdio>
#include <vector>
#include <iostream>
using namespace std;
const int M = 500005;
int read()
{
    int x=0,f=1;char c;
    while((c=getchar())<'0' || c>'9') {if(c=='-') f=-1;}
    while(c>='0' && c<='9') {x=(x<<3)+(x<<1)+(c^48);c=getchar();}
    return x*f;
}
int n,lf[M],b[M];vector<int> g[M];
struct diameter
{
    int x,y,z;
    diameter() {x=y=z=0;}
    void add(int c)
    {
        if(c>x) z=y,y=x,x=c;
        else if(c>y) z=y,y=c;
        else if(c>z) z=c;
    }
    int len(int ban)
    {
        if(x==ban) return y+z+1;
        if(y==ban) return x+z+1;
        if(z==ban) return x+y+1;
        return x+y+1;
    }
}dp[M];
void upd(int &x,int y) {x=max(x,y);}
void work(int u,int fa)
{
    for(auto v:g[u]) if(v^fa)
        work(v,u),dp[u].add(dp[v].x+1);
}
void dfs(int u,int fa,int mx)
{
    dp[u].add(lf[u]);
    for(auto v:g[u]) if(v^fa)
    {
        int ban=dp[v].x+1,wv=dp[v].len(0);
        int wu=max(dp[u].len(ban),mx);
        upd(b[wu],wv);upd(b[wv],wu);
        lf[v]=(dp[u].x==dp[v].x+1)?dp[u].y+1:dp[u].x+1;
        dfs(v,u,wu);
    }
}
signed main()
{
    freopen("tree.in","r",stdin);
    freopen("tree.out","w",stdout);
    n=read();
    for(int i=1;i<n;i++)
    {
        int u=read(),v=read();
        g[u].push_back(v);
        g[v].push_back(u);
    }
    work(1,0);dfs(1,0,0);
    long long ans=0;
    for(int i=n;i>=1;i--)
    {
        b[i]=max(b[i],b[i+1]);
        ans+=b[i];
    }
    printf("%lld\n",ans);
}

D
Title Description

There are \(n+1\) cities on a line, numbered \(0\) to \(n\) from left to right. The distance between city \(i\) and city \(0\) is \(a_i\). You start from city \(0\) and must reach city \(n\), eating one candy per unit of distance traveled. Each city has a candy store with an unlimited supply. In city \(i\) the store sells a candy for \(b_i\) and buys one back for \(s_i\); you can sell excess candy back to the store. You can carry at most \(m\) candies at a time. At the beginning, you have unlimited money. Ask the minimum amount of money needed to reach the end (it can be negative).

\(n\le 200000\)

Solution 1

The common routine for this kind of trading problem: look for seemingly unreasonable but equivalent operations, and the trick is usually to delay decisions. For this problem, we fill our backpack at every store, and whatever candy is left at the end is refunded at its recorded price. With this basic idea, when we visit store \(i\), we act greedily as follows:

- First, consider selling the candy we hold. If a candy's recorded value is \(x\) and the store's buy-back price is \(y\), selling it earns \(y-x\); but selling outright is not good, because we may still need to eat that candy later. Instead we just change its recorded value to \(y\): refunding it later is then equivalent to having sold it here, while eating it is equivalent to not having sold it. This is a clever delaying trick.
- Then fill the backpack: discard (refund) any candy more expensive than the current store's candy, and stuff in the current store's candy.
- Finally, account for the candy consumed on the next leg of the trip; by greed, we always consume the cheapest candy first.

Given the pattern of these operations, we can maintain everything with a double-ended queue.
Each deque element is a (value, quantity) pair, and the operations become:

- Pop elements with value \(x\) from the front of the deque and change their value to \(y\).
- Pop elements priced higher than the current price from the back of the deque and insert a new element.
- Remove elements from the front of the deque one by one as candy is consumed.

By amortization, the time complexity is \(O(n)\).

Solution 2

This problem can also be approached from the \(dp\) angle, but if you insist on reducing the dimension you will end up in a dead end of thinking. Let \(dp[i][j]\) represent the minimum cost of arriving at store \(i\) while holding \(j\) candies. With \(d=a_i-a_{i-1}\), it is not difficult to write the following transfer:

\((1)\ dp[i][j] = dp[i-1][j+d]\)
\((2)\ dp[i][j] = \min(dp[i][j],\ dp[i][j-1]+b_i)\)
\((3)\ dp[i][j] = \min(dp[i][j],\ dp[i][j+1]-s_i)\)

In fact \((2)\) and \((3)\) are combinations with convex functions, and \((1)\) is an overall translation of the function, so this is a variant of slope trick. We can prove that \(dp[i]\) is convex, and the transfer can be translated into operations on the convex hull as follows:

- Pop the points at the front, then translate the whole function.
- For segments with slope greater than \(b_i\), change their slope to \(b_i\).
- For segments at the front with slope less than \(s_i\), change their slope to \(s_i\).

This can be implemented with a double-ended queue, and then you find that it is the same as Solution 1.
The time complexity is \(O(n)\).

#include <cstdio>
#include <iostream>
using namespace std;
const int M = 200005;
int read()
{
    int x=0,f=1;char c;
    while((c=getchar())<'0' || c>'9') {if(c=='-') f=-1;}
    while(c>='0' && c<='9') {x=(x<<3)+(x<<1)+(c^48);c=getchar();}
    return x*f;
}
int n,m,l,r,a[M],b[M],s[M],qv[M<<1],qn[M<<1];
long long ans;
signed main()
{
    freopen("candy.in","r",stdin);
    freopen("candy.out","w",stdout);
    n=read();m=read();l=n;r=n-1;
    for(int i=1;i<=n;i++) a[i]=read();
    for(int i=0;i<n;i++) b[i]=read(),s[i]=read();
    for(int i=0;i<n;i++)
    {
        //sell the candy
        int cnt=0;
        while(l<=r && qv[l]<=s[i]) cnt+=qn[l],l++;
        qn[--l]=cnt;qv[l]=s[i];
        //fulfill the bagpack & abandon the trash
        cnt=(i==0)?m:a[i]-a[i-1];
        while(l<=r && qv[r]>=b[i]) ans-=1ll*qn[r]*qv[r],cnt+=qn[r],r--;
        qn[++r]=cnt;qv[r]=b[i];
        ans+=1ll*cnt*b[i];
        //use the candy for walking
        cnt=a[i+1]-a[i];
        while(cnt)
        {
            int v=min(cnt,qn[l]);
            cnt-=v;qn[l]-=v;
            if(qn[l]==0) l++;
        }
    }
    for(int i=l;i<=r;i++) ans-=1ll*qn[i]*qv[i];
    printf("%lld\n",ans);
}
https://programmer.ink/think/improvement-group-training-2021-round2.html
# Append PropDefs

## Introduction

There are two ways to append PropDefs to a PropObj. Traditionally, `appendPropDefs(PropDefs defs)` is used for creating and appending PropDefs:

```
/**
 * Append PropDefs.
 */
public void appendPropDefs(PropDefs defs) {
    super(..);
    // Create and append PropDefs here...
}
```

The other is the newer (2019) append props syntax:

```
/**
 * Append PropDefs.
 */
public props {
    // What PropDefs to create goes here...
}
```

which actually compiles into overriding exactly these two methods:

```
/**
 * Internal append prop defs.
 * This should never be overridden outside of the props syntax.
 */
public void internalAppendProps(PropDefs defs) {
    // Compiled statements from props syntax...
}

/**
 * Init props.
 * This is where init {} code goes as well as value initializations.
 */
public void internalInitProps(PropDefs defs) {
    // Initialization code from the syntax compiles to here...
}
```

The `appendPropDefs(PropDefs defs)` method can be used in conjunction with the props syntax and is always an option instead.

The syntax for appending PropDefs to PropObjs is quite beneficial: there is no longer a need to deal with the nuances of the traditional `appendPropDefs` method (remembering which PropDef classes to instantiate, the differences between PropDefs vs FieldPropDefs, the caching syntax for utilizing the propDefCache, attributes for copy=(null, shallow, or reference) or for stream=null, which methods to append with, etc.). In general, the syntax reduces the clutter of defining PropDefs.

Below is an outline of the syntax in comparison to the `appendPropDefs` method, assuming there is a boolean field named `boolProp`:

```
/**
 * Append PropDefs.
 */
public props {
    "boolProp";
}
```

```
/**
 * Append PropDefs.
 */
public void appendPropDefs(PropDefs defs) {
    defs.put(FieldPropDef(this.class, "boolProp"));
}
```

## Syntax arguments

The syntax can take special arguments:

```
/**
 * Append PropDef syntax.
 */
public props : debug=true, super=false {
    // What PropDefs to create goes here...
}
```

There are currently only 4 special arguments that can only be placed here:

- `debug=true` -- When specified, a dump of what is being produced in the syntax symbol table is displayed on compile. False by default or if not in developMode.
- `fullDebug=true` -- A complete dump of the symbol table as well as a printout of the statements within each method is displayed on compile. False by default or if not in developMode.
- `help=true` -- Prints out an example of using this syntax as well as a link to this wiki page. False by default or if not in developMode.
- `super=false` -- Keep in mind the syntax is essentially overriding the internalInitProps and internalAppendProps methods; this argument means super will not be called in the overridden methods. Otherwise super is called by default.

Do note the syntax DOES NOT recognize a dynamic field as the boolean argument. The following is not recognized and falls back to the default, which is `super=true`:

```
/**
 * Should I call super?
 */
public bool callSuper = false;

/**
 * Append PropDef syntax.
 */
public props : super=callSuper {
    // What PropDefs to create goes here...
}
```

## PropDef, FieldPropDef, and custom PropDefs

PropDefs help define the key/value pairs inside the PropObj.propData str->Object map. Each PropDef must have both a k:str member and a pType:Type member, where k equates to the key:str within the propData and pType defines what the value type is/can be (see also PropObj). They should never exist outside of a PropObj (except for the cache -- more on that later) and they should only be accessed via a PropObj. Different PropDef classes can be made to suit different needs and generally fall into the base PropDef class, FieldPropDef, and other custom PropDef classes.

Below is an example, assuming there is a boolean field in the class named boolFieldPropDef:

```
/**
 * Boolean field property.
 */
public bool boolFieldPropDef;

/**
 * Append PropDefs.
 */
public props {
    int "basePropDef";
    double "customPropDef" : ExampleCustomPropDef;
    "boolFieldPropDef";
    // OR
    "boolField" : fieldName="boolFieldPropDef";
}
```

If a data type is given, the PropDef is initialized as the base PropDef class, unless a class is stated, like how "customPropDef" passes in the ExampleCustomPropDef class. If the type is not stated and a field of the same name exists, it is automatically initialized as a FieldPropDef. Alternatively, you can assign the PropDef key a different name and still tie it to the actual field name from the class, as with "boolField" above.

This is what the same thing looks like done traditionally:

```
/**
 * Append PropDefs.
 */
public void appendPropDefs(PropDefs defs) {
    defs.put(PropDef("basePropDef", int));
    defs.put(ExampleCustomPropDef("customPropDef", double));
    defs.put(FieldPropDef(this.class, "boolFieldPropDef"));
}
```

## Initial values

We can initialize values associated with a PropDef. This adds a put(..) within internalInitProps(..) that sets a starting value:

```
/**
 * Props.
 */
public props {
    bool "myBool" v=true;
    double "myDouble" v=34 : domain=myDoubleSubSet();
}
```

## Overriding PropDef methods

One special thing the append PropDef syntax can do is override PropDef methods within the syntax itself, without manually creating a subclass of the PropDef:

```
/**
 * PropDef example class.
 */
public class PropExample extends PropObj {
    /**
     * Integer field property.
     */
    public int intFieldPropDef;

    /**
     * Append PropDefs.
     */
    public props {
        "intFieldPropDef" : {
            Object get(PropObj owner, Object env) {
                return 42;
            }
            void put(PropObj z, Object v, Object env) {
                that.printSomething(v);
            }
        }
    }

    /**
     * Print something.
     */
    extend public void printSomething(Object v) {
        pln("Magic! ", v);
    }
}

{
    PropExample exp();
    for (k, def in exp.propDefs.defs) {
        pln(k);
        pln(def);
        pln(exp.get(k));
        exp.put(k, 89);
    }
}
```

Output:

```
intFieldPropDef
@ExpDefintFieldPropDef2509(int intFieldPropDef [])
42
Magic! 89
```

When a method override is placed in the syntax, the PropDef is automatically subclassed as if it were a private class in the same file as the current class (PropExample). Notice the generated name contains the key of the PropDef (@ExpDef + "your PropDef key" + a unique number of the auto-subclassed PropDef):

```
Object -- cm.lang
  PropDef -- cm.props
    FieldPropDef -- cm.props
      @ExpDefintFieldPropDef2509 : private -- profile.configura
```

Notice also the special `that` field within the override. `that` is a reference to the owner of the PropDef (for this example, PropExample). Since this is essentially a method override of a PropDef class, `this` would refer to the overridden PropDef class, so we use `that` to reference the owner. The cast of `that` from PropObj to the actual owner subclass is also done automatically.

The usual case is overriding the put and get methods. These are essentially the getter and setter of a normal field, but for a "field" (key/value pair) in the propData str->Object map. Method overrides in the syntax can also be placed in a block to form a scope (see the Scopes and blocks section of this article).

## PropDef arguments

Within the syntax, we can pass arguments to further customize the initialization of a PropDef:

```
/**
 * Append PropDefs.
 */
public props : cached=false {
    "sample";
    "example" : cached=true, stream=null;
    defs cached=true {
        "exampleInDefsBlock1";
        "exampleInDefsBlock2";
    }
}
```

Observe that any PropDef argument (cached in this example) can be placed on 'props' as if it were a syntax argument, meaning all PropDef initializations are set to false unless stated otherwise. The example here is interpreted as:

- "sample" is not cached.
- "example" is cached and this PropDef cannot be streamed or "saved".
- "exampleInDefsBlock1" and "exampleInDefsBlock2" are cached as a result of being inside a defs block (see the Scopes and blocks section of this article) which states cached=true.

These arguments can be placed on 'props', inside a defs block, or on individual PropDefs.

## Caching PropDefs

PropDefs may be cached so that their creation later is quicker. For PropObjs that are created often, this results in faster construction. In the case of FieldPropDefs, the savings are quite noticeable as field lookups are avoided if the entry already exists in the cache.

It is important to note that PropDefs with changing 'domain', 'setting', or 'default' fields should often be kept out of the cache, as these PropDefs may be pointed to by other PropObjs as well. Quick properties are one such case, as the PropDef 'setting' may be altered differently by individual instantiations of that PropObj.

The appendPropDefs PropDef cache syntax (the long way):

```
/**
 * Append prop defs.
 */
public void appendPropDefs(PropDefs defs) {
    super(..);

    // Put a PropDef into the cache -- if the entry already exists, just append it to defs.
    cacheProp("myProp", bool, defs) {
        result PropDef("myProp", bool, default=true);
    };

    // Put a FieldPropDef into the cache.
    cacheProp("myField", defs) {
        result FieldPropDef(this.class, "myField", fieldName="myFieldName", default=true);
    };
}
```

The append props syntax (results are the same as above -- super is called unless super=false is supplied):

```
// Placed in props args
public props : cached=true, default=true {
    bool "myProp";
    "myField" : fieldName="myFieldName";
}

// Or placed on individual props
public props {
    bool "myProp" : cached=true, default=true;
    "myField" : cached=true, default=true, fieldName="myFieldName";
}

// Or placed on a defs container
public props {
    defs cached=true, default=true {
        bool "myProp";
        "myField" : fieldName="myFieldName";
    }
}
```

## Scopes and blocks

Another neat trick this syntax can do is apply something to a scope of PropDefs by placing them in a block of code.

### defs block

The defs block is meant to simply make it easier to define attributes and types over many PropDef definitions. In practice, nested defs should be avoided as much as possible to keep the code easy to read.

```
/**
 * Append PropDefs.
 */
public props {
    defs {
        // Definitions behave normally..
        "myProp";
    }
    defs MyCorePropDef {
        // Definitions are of type MyCorePropDef unless otherwise specified..
        defs MyOtherCorePropDef {
            // Definitions are of type MyOtherCorePropDef unless otherwise specified..
        }
        defs PropDef core=false {
            // Definitions are of type PropDef unless otherwise specified..
        }
    }
    defs domain=true, setting=true {
        // Definitions are of type 'core' and have domain=true and setting=true..
        defs domain=false {
            // Definitions are of type 'core' and have domain=false and setting=true..
        }
    }
    defs MyCorePropDef cached=true, default=true {
        // Definitions are of type MyCorePropDef, are cached, and have default=true..
    }
}
```

### methods block

The methods block takes all PropDefs defined within the scope and overrides all of their methods. This is very nice for avoiding writing out the same implementation many times for multiple PropDefs that require the same behavior.

```
/**
 * Props.
 */
public props {
    // Normal FieldPropDefs and PropDefs.
    "myFieldPropDef";
    int "myPropDef";

    // Implement methods for myFieldPropDef and myPropDef.
    methods {
        str label(..) {
            return "myFieldPropDef and myPropDef";
        }
    }

    // MyCoreProps.
    defs MyCoreProp default=true {
        bool "myCoreProp0";
        bool "myCoreProp1";
        bool "myCoreProp2";

        // Implement methods for all MyCoreProps.
        methods {
            Object default(..) {
                return env ? default = env : default;
            }
            str label(..) {
                return "MyCoreProp";
            }
        }
    }
}
```

### PropDef key block

This works exactly like the methods block above in overriding PropDef methods, but you can specify which PropDef the override applies to rather than all PropDefs in the current scope.

```
/**
 * Append PropDefs.
 */
public props {
    bool "myProp";
    int "myOtherProp";

    "myProp" {
        Object default(..) {
            return true;
        }
    }

    "myProp", "myOtherProp" {
        str label(..) {
            return "mine";
        }
    }

    // Can also be done without " "
    myProp {
        Object domain(..) {
            return env;
        }
    }
}
```

### construct block

A construct block is a block within the syntax that is executed before any of the PropDefs are created. It compiles to statements executed in internalAppendProps(..).

```
/**
 * Append PropDefs.
 */
public props {
    construct {
        // Anything here will be executed before anything is created (after super(..) if !super=false)..

        // Ex. appending a prop the old way ('defs' is in scope as we are within internalAppendProps(..))
        defs.put(MyMadeUpPropDef("someKey", bool));

        // Fields declared here can be used within the syntax.
        int val9 = 9;
        PropInputSetting generalSetting();
    }

    defs setting=generalSetting {
        "myIntFieldPropDef" : default=val9;
        "myOtherIntFieldPropDef" : default=val13;

        // Just for example -- these can be placed ANYWHERE.
        // The statements are also appended top-down.
        construct {
            int val13 = 13;
        }
    }

    // Create the PropDefs
    bool "myPropDef";

    construct {
        // Anything here or within any other construct block will be appended to the statements to be
        // executed before anything is created (in order of compilation -- top down). This includes containers.
    }
}
```

One can use these blocks to append PropDefs the old way (defs.put(PropDef(..))) or to initialize other objects used within the construction of the PropDefs. An example of this might be 'exposed=true' -- one might want to ensure that the PropObj being exposed exists. Fields declared in this block can be referred to within the syntax.

It is important to note that anything within the block is omitted from props interpretation or manipulation. Anything in the block will not be altered by the syntax (e.g. default=Object specified within containers).

### init block

An init block contains code to be executed after the PropDefs are created (internalInitProps(..)):

```
/**
 * Append PropDefs.
 */
public props {
    init {
        // Any code within this block will be executed after PropDef creation..
    }

    // Create prop defs..

    init {
        // Any code here will be executed after creation and appended below the statements in previous init blocks.

        // Ex. initializing def values (can be done with the v= as well)
        this."myProp" = someValue;
        propDef("otherProp").domain = otherSubSet();

        // Any code that directly relates to these PropDef initializations
    }
}
```

Just like with the construct blocks, anything within init is omitted from props interpretation -- separate from the syntax entirely.

See also a PropDef syntax presentation YouTube video.
# Service Bus (micklpl/avanade-azure-workshop Wiki)

## Available tags

- service-bus-send-message
- service-bus-process-message

## Azure Service Bus: send message and add webjob

```
git checkout service-bus-send-message
```

Populate the values of these keys in the application settings in the Azure Portal. All settings except serviceBusSharedAccessKeyValue should already have been set, as they come from the ARM template. If you want to run your app locally, you have to populate all of them in Web.config:

```xml
<add key="serviceBusScheme" value="sb" />
<add key="serviceBusServiceName" value="NAME_OF_SERVICE_BUS_RESOURCE_ALREADY_HERE" />
<add key="serviceBusSharedAccessKeyName" value="RootManageSharedAccessKey" />
<add key="serviceBusSharedAccessKeyValue" value="PASTE_RootManageSharedAccessKey_HERE" />
```

Get the missing values from the Azure Portal: click on the Service Bus resource, go to "Shared access policies" and copy one of the keys.

Now it is time to add a web job to the project. Right-click on the WebApp project and add a New Azure WebJob Project.

Important! Make sure that the newly created project targets the same .NET Framework version. They should both be, for instance, 4.5.2 as in our example.

It is better to add NuGet packages by managing them in the whole-solution context, so you are sure all packages are at the same version across all projects. Add the Autofac and Microsoft.Azure.WebJobs.ServiceBus NuGet packages to the Topics project.

Important! Check that other packages such as Newtonsoft.Json and WindowsAzure.Storage are at the same versions across the other projects. To do so, right-click on the solution, then go to "Manage NuGet Packages for solution...".

Paste the following code into the Program.cs file:

```csharp
internal class Program
{
    private static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseServiceBus();
        var host = new JobHost(config);

        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
```

And the following code into the Functions.cs file:

```csharp
public class Functions
{
    private const string SubscriptionName = "webjobssubscription";

    public async Task ProcessGameMessage(
        [ServiceBusTrigger(nameof(GameMessageModel), SubscriptionName)] GameMessageModel message,
        TextWriter textWriter)
    {
        await ProcessTopic(message, textWriter);
    }

    private async Task ProcessTopic<TTopic>(TTopic message, TextWriter textWriter) where TTopic : BaseMessageModel
    {
        await WriteMessage($"Processing topic message {typeof(TTopic).Name}. Body: {JsonConvert.SerializeObject(message)}", textWriter);
    }

    private static async Task WriteMessage(string message, TextWriter writer)
    {
        await writer.WriteLineAsync(message);
    }
}
```

Add a reference to Avanade.AzureWorkshop.WebApp, check that everything compiles, and publish the app. At this point you should be able to consume messages.

STOP!!! A task for you: using Storage Explorer, create a games table.

Now you can play a game: simply click the "play a game" link under the group standings. Open the WebJobs logs to check that the message has arrived. You should be able to see which group has played a game.

At this commit you get logic that saves a played game when the message is processed. Play some games, analyze the code and see which teams advance to the next stage of the championships.

## Obstacles

Please check that the option Always On is set to On in application settings.
# How I gave my old laptop second life

*17-19 min read*

Hi y'all, my name is Labertte and I use Arch btw.

Probably like every other Linux user, I'd like to buy a ThinkPad, put some lightweight distribution like Arch or Gentoo on it, and then ~~go to Starbucks, get a soy latte and tell everyone that I use "linux"~~. But I decided to go a slightly different route and give a chance to the old laptop that I was using about five or seven years ago.

Plan of the article:

1. [What to do with an old laptop](#what-to-do-with-an-old-laptop)
2. [How to make the world see your site](#how-to-make-the-world-see-your-site)
3. [Buying an IP camera, recording from the camera to the server](#ip-camera)
4. [Problem with recording video directly to the server](#camera-recording-problem)
5. [Recording solution](#recording-solution)
6. [What I learned from this experiment](#conclusions)

*Disclaimer*

I want to warn you from the start that this post is nothing more than my story. It's not a full-fledged tutorial or a step-by-step guide to follow. However, if you are interested in how this or that technology works, or how to make something better, feel free to write in the comments and I'll try to write a technical article on the topic you're interested in.

1. What to do with an old laptop?
---------------------------------

From the attic I yanked an old laptop running Windows 7 with the following specs:

**CPU**: Intel(R) Celeron(R) CPU B815, 2 cores, 1.6GHz
**RAM**: 2Gb
**GPU**: AMD ATI Radeon HD 6400M/7400M Series
**Monitor resolution**: 1366x768

But what can be done with this marvel of a gift for gamers? In fact, there are a huge number of options; it all depends on your imagination and the time you take to implement them.
Here are just a few examples of what you can turn a dusty and useless laptop/PC into:

* File storage
* Mail server
* Build-automation server
* Web server
* Proxy server
* Database server

The only thing I originally planned to do was to install Arch on it, run ssh and use scp to transfer some music or video files so they would live on the server and not on my active workstation. A stripped-down version of a NAS, you could say.

So, the first thing to do is to install the operating system. Following all the steps in the excellent [article on the ArchWiki](https://wiki.archlinux.org/title/Installation_guide), the bare OS was installed. After connecting the laptop to the local network (via Ethernet or WiFi) I got its internal IP address and MAC address with the command:

`# ip a`

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/7e9/af4/fdf/7e9af4fdf898b9ac5fb587d790f35579.png)

Next I need to enter the settings of my router (192.168.0.1), then go to DHCP Address Reservation, where I bind the MAC address to the IP address. This is necessary so that if my server reconnects to the network, the router will give my laptop the same IP address. This makes the internal IP address "static".

So, once the operating system is completely installed, I can proceed to the first stage of the server configuration.

### Installing and configuring ssh

First of all, I install the necessary package on the server:

`# pacman -S openssh`

And start the service:

`# systemctl enable sshd --now`

I don't want to enter the password every time I connect to the server, so I add ssh keys, which will not only increase login speed, but also enhance security. For this purpose, I generate keys on the local computer:

`$ ssh-keygen`

The program prompts for the name of the file in which the private key will be written, as well as a password for it. I confirm the password and now I have two files: `home_server` and `home_server.pub`. After this it is necessary to make the server trust me.
To do this I copy the key to the server:

`$ ssh-copy-id -i home_server root@`

And test the connection:

`$ ssh root@`

The password is no longer required; however, I can still use it to log in, which means the server is still vulnerable to bruteforce. For this reason I prohibit password login on the server in the configuration file `/etc/ssh/sshd_config`:

```
PubkeyAuthentication yes # Authentication through ssh public key
PasswordAuthentication no # Prohibit login through password
ChallengeResponseAuthentication no # This setting could also ask for a password, so disable it
UsePAM no # Depending on your PAM configuration this may bypass "PermitRootLogin without-password", so disable it
```

Then I restart the ssh daemon to apply the changes from the config:

`# systemctl reload sshd`

Next, I install a simple firewall on which I open only one port, 22 (ssh):

`# pacman -S ufw`

And set up the firewall:

`# ufw default deny incoming # block all incoming connections by default`
`# ufw allow from 192.168.0.0/24 to any port 22`
`# ufw enable`

And after that, start it up:

`# systemctl enable ufw --now`

Now I have ssh configured and can upload files to the server through it. But of course this is not enough: what do I need a server for that stores files only accessible from my active device? I could make some kind of web site and/or just broadcast some files to the whole internet. Thus, I'm smoothly approaching the idea that I want to make a site that will be visible to the entire Internet.

2. How to make the world see your site
--------------------------------------

First it's necessary to think about a domain for the site. After deciding on one, go to any domain registrar and register it. (*Costs ~$8 per year.*) When I bought a domain, I went to the "Set DNS Host Records" tab, put my public IP address next to my domain (it can be found with the command `$ curl ifconfig.co`) and clicked "Save".
![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/208/ee4/9c6/208ee49c6868d93f10ab8fbc4cc38e59.png)

After the changes, run in the terminal:

`$ ping`

If the site is pingable and the IP resolves, everything is fine. Now it's time to create the site itself. For now, a single file will be enough: index.html. The simplest version of the site (the page carries a title, "Welcome to this site!", and a single heading, "Welcome to my site!"):

```
<!DOCTYPE html>
<html>
  <head>
    <title>Welcome to this site!</title>
  </head>
  <body>
    <h1>Welcome to my site!</h1>
  </body>
</html>
```

Great, there is the source code of the simplest site, which so far consists of just one file. However, at the moment the only way to view our website is to save the above code to a file and then open it in a browser.

**Q**: Then how do we view our site from a work computer or phone?

**A**: Set up a local web server!

To do this, I install nginx, a program that will serve our static content, in my case a single .html page.

Note: *nginx has a lot of other interesting features, but let's not go into them now.*

It's done with the command:

`# pacman -S nginx nginx-mod-headers-more certbot-nginx`

The web server is installed, and with it some header modules for it and certbot. In order for the site to have https and encrypted traffic, I run certbot:

`# certbot --nginx`

I entered the email and domain, and also specified that traffic should be redirected from http to https. After that, certbot edits the nginx configuration file.
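Setting up the page itself is scriptable too. Here is a sketch that writes the one-page site from the shell; the markup mirrors the minimal index.html above, and the temporary directory is just a stand-in for nginx's web root:

```shell
# Stand-in for the nginx root directory (in real life this would be the path in the nginx config)
site_root=$(mktemp -d)

cat > "$site_root/index.html" <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <title>Welcome to this site!</title>
  </head>
  <body>
    <h1>Welcome to my site!</h1>
  </body>
</html>
EOF

# Quick check that the page landed where the web server would look for it
grep -o '<h1>.*</h1>' "$site_root/index.html"
```

This prints the `<h1>` line back, confirming the file was written where expected.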
But this is not all the settings, so I'll add a few more rows to the `/etc/nginx/nginx.conf` configuration file:

```
load_module "/usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so"; # Add the dynamic headers module for nginx

user http http; # Specify which user nginx will run the web server as
error_log logs/error.log; # Capture detailed information about errors and request processing in log files
error_log logs/error.log notice;
error_log logs/error.log info;

events {
    worker_connections 1024; # Maximum number of simultaneous connections each worker process can manage
}

http {
    server_tokens off; # Hide nginx version in 404 page
    include /etc/nginx/mime.types; # Include all mime types to correctly show them in the browser
    types_hash_max_size 4096; # Sets the maximum size of the types hash table
    server_names_hash_bucket_size 128; # Sets the bucket size for the server names hash tables
    more_set_headers "Server: serverrr"; # Specify the response header
    add_header Server "Serverrr"; # Describes the software used by the origin server that handled the request
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; # HSTS strengthens your TLS setup by getting the User Agent to enforce the use of HTTPS
    add_header Content-Security-Policy "default-src 'self';" always; # CSP protects against XSS attacks by whitelisting sources of approved content
    add_header X-Frame-Options "SAMEORIGIN"; # Tells the browser whether your site may be framed; defends against attacks like clickjacking
    add_header X-Content-Type-Options "nosniff"; # Stops a browser from MIME-sniffing the content type and forces it to stick with the declared content-type
    add_header 'Referrer-Policy' 'same-origin'; # Controls what origin information the destination site receives when a user follows a link
    add_header Permissions-Policy "geolocation=(),midi=(),sync-xhr=(),microphone=(),camera=(),magnetometer=(),gyroscope=(),fullscreen=(self),payment=()"; # Controls which features and APIs can be used in the browser

    server {
        server_name ;
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live//fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live//privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

        root ; # Specify the root directory that will be used to search for a file

        location / {
            index index.html; # The greetings html file
        }
    }

    server {
        if ($host = ) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        server_name ;
        return 404; # managed by Certbot
    }
}
```

After the config is done, nginx can be started:

`# systemctl enable nginx --now`

And it is necessary to open ports 80 and 443 in the firewall, so that the web server can be reached via http and https:

`# ufw allow 80`
`# ufw allow 443`

Now if I go from a work computer or phone to the address `https://`, a welcome message should be displayed.
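With three services now exposed (ssh, http, https), it is easy to lose track of which ports the firewall should allow. A small shell helper can express that policy in one place; the port list mirrors the ufw rules above, while the function itself is my own sketch, not part of ufw:

```shell
# Ports opened so far: ssh (22), http (80), https (443)
allowed_ports="22 80 443"

# port_status PORT -> "open" if the port is in the allow list, "closed" otherwise
port_status() {
    case " $allowed_ports " in
        *" $1 "*) echo open ;;
        *)        echo closed ;;
    esac
}

port_status 443   # https should be allowed
port_status 21    # ftp has not been opened at this point
```

Keeping the intended allow list in a script like this makes it easy to diff against the actual `ufw status` output after changes.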
An awesome site is ready, but it's local, which means users from another network won't be able to visit it and see my frontend skills. To solve this problem I need to forward ports on the router. To do this I again go to 192.168.0.1 and then to Forwarding -> Virtual Servers. Here I add a new rule specifying that port 80 should be forwarded to the server's local IP. I do the same with port 443.

What happens when we add a rule? From now on, when a user enters the domain of my site in the url bar and presses Enter, the request goes to my public IP address on port 443. The router looks up where this port leads, i.e. which local IP address (in this case the server's local IP), and redirects the request to our home server. Then the home server sends back the result of the request (the index.html file) and the user sees it in his browser. (*Very simplified, but enough to understand the overall structure. If you want to know more about what happens, write a comment and I'll write a separate article where this is described in detail.*)

Now every user on the Internet can visit my site!

3. Buying an IP camera, recording from the camera to the server
---------------------------------------------------------------

Okay, I have a server where I can upload some files, and my website is hosted on it as well. But then what? It's extremely boring to just sit there and look at the only page of the site, even if from another network. Then I saw an IP camera on Ebay, clearly cheaper than similar ads. I met with the camera guy and he told me that he had bought this camera relatively recently, but **"he couldn't make it possible to watch video online from an external network"**. I figured I could do it, ~~I'm a Linux user and have googling skills after all~~, and I bought a D-Link DCS-2132L/B. I brought it home, unpacked it, and connected it via Ethernet to the router, as well as to a power outlet.
Idk if it's on or if it even works yet, but the first thought that came to me was to go to the router settings, look at the list of connected devices to find the IP address of the camera, and then just type that IP into the address bar and see what happens. Once again I go to the router settings -> DHCP -> "DHCP Client List" and look for something resembling a camera. Once I found my DCS-2132L, I copied the IP and navigated to it. A window opens asking for a login and password. I tried the standard `login: admin, password: admin` and... unfortunately, it didn't work. I searched the Internet for how to get into the camera settings and finally found an option where I had to specify `login: admin, password:` I tried it and... it doesn't work either. Still no luck, so what to do? Oh, right, there is a reset button on the camera! I held it for 15 seconds and the camera reset to factory settings. I logged in again and, damn yeah, I'm in. I can see the picture from the camera and a bunch of settings.

The first thing I did was change the password of the admin account and add a new user with fewer rights. Great, now there is an account whose user can view the video but can't make any changes. The picture is very laggy to be honest, so I changed the resolution from 1280x720 to 800x448 and changed the codec from H.264 to JPEG. Now the picture can reach even 25 FPS, and video movements became much smoother. To watch the live broadcast from the local network, just go to:

`http:///video1.mjpg`

Again, watching only from the local network isn't interesting, so I want to add the ability to watch the live stream from an external network. To do this I add the following lines to the `server` block of the nginx configuration file:

```
location = /camera {
    proxy_pass http:///video1.mjpg;
}
```

And of course reload the config:

`# systemctl reload nginx`

Great, now when someone goes to the address `https:///camera`, he'll get a window asking for a username and password.
Without the credentials no one will have access to the video from the camera. And this is good. But you must admit that we don't always have time to watch what is going on on the camera, so it would be good if the recording went directly to the server, so we could access the video files at any time. Looking through the settings of the camera, we can see the section Setup -> Event Setup.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/2fb/87d/84a/2fb87d84a7e1abf2b74a2ab63ffc2b4b.png)

There it is possible to configure separately where and what will be recorded. In the `Server` subsection we can specify where the video/photos/logs from the camera will be recorded. There are 4 options in total:

* Email
* FTP
* Network Storage
* SD Card

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/40b/6e8/3db/40b6e83dbb133f69629f489b15959118.png)

Just below is the next item, `Media`. Here it is possible to specify what to save:

* Snapshots
* Videos
* Logs

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/661/cce/e70/661ccee700ceb3e43e309bc520bf410e.png)

Then it's possible to configure when a recording should be made; this is done in the `Event` item. Recording can be triggered by:

* Video motion event
* Periodic
* Digital input
* System boot
* Network lost
* Passive infrared sensor
* Sound detection

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/3a6/98f/1e8/3a698f1e81f72b51373253327ab11bac.png)

Also in this tab it's possible to configure on which days (and hours) the event will be "listened" for, and which server the recording will go to. And the last item in the tab is `Recording`. Here we choose on which days (and hours) a full recording of what happens on the camera is made, and it's possible to configure where the recording will be stored, plus its size (or duration).
![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/db2/392/2fb/db23922fb5a4304f7be1a23bd3836d98.png)

After getting acquainted with all the available settings, I decided to set up FTP and record video from the camera through it.

### Installing and configuring FTP

First, install the needed package from the repository: `# pacman -S vsftpd`

Then create a new user that we'll use to connect to the server: `# useradd ftpuser`

And set a password for it: `# passwd ftpuser`

Next, open the required port so that we can connect to the server: `# ufw allow 21`

After that, I created a new directory where files from the camera will be saved: `# mkdir /var/www/camera_videos`

And changed the owner of that directory: `# chown ftpuser /var/www/camera_videos`

All that remains is to modify the vsftpd configuration file `/etc/vsftpd.conf`:

```
anonymous_enable=NO        # Disable anonymous access
local_enable=YES           # Allow local users to log in
write_enable=YES           # Allow writing to our server
nopriv_user=ftpuser        # A user the ftp server can use as a totally isolated and unprivileged user
chroot_list_enable=YES     # Use a list of users that will be chroot'ed
chroot_list_file=/etc/vsftpd.chroot_list  # Path to the file with chroot'ed users
allow_writeable_chroot=YES # Allow chroot()'ing a user into a directory writable by that user
local_root=/var/www/camera_videos         # Default directory for ftp connections
```

Now create the user list `/etc/vsftpd.chroot_list` and add the single newly created user to it: `ftpuser`

Finally, the ftp service can be started: `# systemctl enable vsftpd --now`

And we can test the connection to the server: `$ ftp`

After entering the ftp user's login and password, I can view the contents of the directory with `ls`, create a directory with `mkdir`, or transfer local files to the remote server.
Then I went to the camera settings via its IP, opened Setup -> Event Setup -> Server, and added a new ftp server: I gave the server a name and specified the server address, the port to connect on, and the ftp user's name and password. Then I pressed the Test button and saw the message: Test Ok. This means everything is configured correctly and the camera can now record video to the server via ftp.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/47c/2d5/584/47c2d55840583efed5bcffd4b8c99743.png)

*To make sure the test connection to the server was successful, check that the directory specified in local\_root in the vsftpd configuration file contains a test.txt file with the following content: "*`The Result of Server Test of Your IP Camera.`*". If it's present, the camera has established contact with the server.*

Save the server settings and go to the item below, `Media`. There I specified a Media name, chose video, and picked which video profile to use. I also set the video duration, its maximum size in Kbytes, and the prefix that will be added to the name of each recorded video.

Note: *for the ftp server, the maximum video file size is 5 MBytes.*

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/7e7/deb/6d5/7e7deb6d53e1cb8f20a1e5a33eafed00.png)

The next item is setting up `Event`. Again, I gave the event a name, selected the trigger (Video Motion Detection), left the trigger's active time at the default (always on), and specified which server to use for recording and which Media to take.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/496/6ed/3fa/4966ed3fa53a29f9c82c451eb674c3b8.png)

Save the settings and move on to selecting the area that triggers Motion Detection: Setup -> Motion Detection. To use motion detection, first select the checkboxes and then mark the areas you want to monitor for motion with the mouse.
![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/668/5bf/f34/6685bff34c4a5a95007b00cf4b9b7a66.png)

Walk in front of the camera and look at the contents of the `/var/www/camera_videos` directory on the server: a new file should appear whose name starts with the prefix specified in Media plus the date the video was created. With these settings:

> **Frame size**: 800x448
> **Maximum frame rate**: 25
> **Video quality**: Standard
> **Mode**: JPEG
> **Pre-event recording**: 2 seconds
> **Maximum duration**: 100 Seconds
> **Maximum file size**: 5000 Kbytes
>
>

I got a video with the following metadata:

```
{
  "FileName": "motion_20210806_172133.mp4",
  "FileSize": "5.1 MB",
  "FileModifyDate": "2022:08:06 17:21:33+03:00",
  "FileAccessDate": "2022:08:06 17:23:47+03:00",
  "FilePermissions": "-rw-------",
  "FileType": "MP4",
  "Duration": "10.24 s",
  "Encoder": "Lavf54.63.104",
  "TrackDuration": "10.21 s",
  "ImageWidth": 800,
  "ImageHeight": 448,
  "XResolution": 72,
  "YResolution": 72,
  "BitDepth": 24,
  "VideoFrameRate": 25.159,
  "MediaDuration": "10.24 s",
  "AudioChannels": 2,
  "AudioBitsPerSample": 16,
  "AudioSampleRate": 8000,
  "MediaDataSize": 5075314,
  "MediaDataOffset": 60044,
  "ImageSize": "800x448",
  "Megapixels": 0.358,
  "AvgBitrate": "3.97 Mbps"
}
```

As we can see, on average this is about a 10-second video at a reasonably good resolution. So now we have our own budget "security system", but something is still missing... Exactly: continuous, 24/7 recordings of what's happening on the camera.

4. Problem with recording video directly to the server
------------------------------------------------------

Well, let's go to the `Recording` tab and try to start recording via ftp. But I found that I couldn't even select the FTP server. Why? Remember that in the previous section I mentioned the per-file write limit of 5 MBytes, while the maximum limit is 50 MBytes? It turns out the camera supports writing files larger than 5 MBytes, but only to the SD card.
This is extremely disappointing: was everything I did a waste of time? If I can't record video 24/7, what's the point of the camera? Yes, I can watch the live broadcast from an external network and I have Motion Detection, but it's pointless if I can't review the camera's recordings over a period of time. I wrote to D-Link support hoping I had missed some hidden clause and it was still possible to record directly to the server instead of the SD card.

The SD card option is bad because of its limited capacity (usually 16-32 GBytes). And what is 32 GBytes of video? Taking the settings above, a simple calculation gives the volume of one day of recording: 5.1 MBytes / 10.21 sec = 0.4995 MBytes/sec, so 86400 sec (1 day) \* 0.4995 MBytes/sec ≈ 43157.7 MBytes (~43.16 GBytes). Even an ordinary 32 GBytes microSD card wouldn't be enough for a single day!

While waiting for a reply from support, I decided to buy a 32 GBytes card, hoping it would be possible to transfer the data from the SD card to the server automatically. I bought a microSD card and put it in the camera. Then I went to the camera settings, Setup -> Event Setup, and added a new server, choosing SD Card. Next, in the `Recording` item I added a new entry, specifying the profile to record from, the size of a single video, and the threshold at which old video starts being overwritten.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/601/f67/80c/601f6780cb92bf68783315838bac003e.png)

The Setup -> SD Card tab lets you view the contents of the microSD card. The videos are conveniently sorted by day and by hour, so it's easy to find the video for the desired time window.
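The day's-worth-of-recording arithmetic above can be sketched as a small script; the bitrate comes straight from the measured clip (5.1 MB per 10.21 s of video):

```python
# Rough storage estimate for continuous recording, based on the measured clip:
# 5.1 MBytes per 10.21 seconds of video (800x448, JPEG, 25 FPS).
clip_mbytes = 5.1
clip_seconds = 10.21

rate_mb_per_s = clip_mbytes / clip_seconds       # ~0.4995 MB/s
day_mbytes = rate_mb_per_s * 86400               # MB recorded in 24 hours
day_gbytes = day_mbytes / 1000                   # ~43.16 GB per day

# How long would a 32 GB card last at this rate?
card_hours = (32 * 1000) / rate_mb_per_s / 3600  # ~17.8 hours

print(f"{day_gbytes:.2f} GB/day, a 32 GB card lasts {card_hours:.1f} h")
```

The card runs out well before the day does, which is exactly the problem the article runs into.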
![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/ae3/67b/a51/ae367ba51393340f31496ce01cd0dcaa.png)

Everything seems to work: videos are recorded to the SD card, but to view them each video must first be downloaded locally. As you can imagine, even with a pretty good Internet connection, downloading the videos one by one is not an option. It's monotonous, routine work that begs to be automated; downloading each video to a computer and then transferring it to the server is truly Sisyphean labor.

And then the reply from D-Link support arrived. I opened the letter full of hope and read: `"Unfortunately, it looks like the model number of your D-Link device is discontinued, and free technical assistance has already ended"`

~~Thanks for the reply, much appreciated, so useful~~

Unfortunately, D-Link didn't help me, so I'll have to find a solution myself.

5. Recording solution
---------------------

I'm not going to give up. If necessary, I will personally re-flash the camera's circuit board, twist every wire, connect it directly to the server, and run it as a built-in camera. Of course, it's too early for such extremes, but if shit happens, I'm ready. In the meantime, let's see what options I have:

1. Hide in a corner and cry
2. Leave everything as is
3. Search the Internet for how other people have solved this

Let's leave the first two options for later. I started googling how to record video from the camera directly to the server, and in one of the posts I saw that the author could connect to his camera directly via ssh. I figured I'm no worse: maybe I can do this with my camera too. I scanned the ports hoping that 22 was still open and... only the http, https, rtsp and upnp ports were open. I went looking through the camera's documentation and saw that firmware update v1.08.03 had closed the ssh port. So I decided to dig up a firmware version older than v1.08.03 and flash it.
And I even found the needed version, so I tried to install it in Maintenance -> Firmware Upgrade, but here too I failed: no downgrade is possible. At this point I genuinely thought that this was it, the end... Then I remembered that besides the FTP server and the SD card there are a couple more server options. I quickly went back and looked: Email and Network Storage.

The **Email** option was discarded at once: direct recording from the camera to the server would be impossible, it can only send a finished video, and even then the free APIs have their own limitations (e.g., the Google Mail API won't send a file above 35 MBytes). Setting up my own mail server just for this seemed like a poor idea (though in general a personal mail server is very interesting and useful to have, if only for privacy reasons).

So one last option remains: **Network Storage**. And it seems this is it, my last chance at a happy, successful life. I started googling whether this option allows writing video directly to the server. I didn't find a straight answer to my question, but I noticed that a huge number of people had failed to add their server to the camera as a NAS. In one example I saw what needs to be entered in the fields to connect to the server and realized it's about a samba server.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/a85/a23/0e5/a85a230e5fa004ca6ffd58e2e28f35b1.png)

Okay, I'll try to set up a samba server and then connect to it just like with ftp; maybe I can get it to work.
### Installing and configuring samba

Install the package from the repository: `# pacman -S samba`

Edit the config file `/etc/samba/smb.conf`:

```
[global]
server role = standalone server
map to guest = Bad User
usershare allow guests = yes
hosts allow = 192.168.0.0/16
log file = /var/log/samba/%m.log
log level = 1 auth:5 winbind:5
smb ports = 139
idmap config * : backend = tdb
idmap config * : range = 3000-7999
idmap config SAMDOM:backend = ad
idmap config SAMDOM:schema_mode = rfc2307
idmap config SAMDOM:range = 10000-999999
idmap config SAMDOM:unix_nss_info = yes
vfs objects = acl_xattr
map acl inherit = yes
store dos attributes = yes
server min protocol = NT1
client min protocol = NT1

[recordings]
comment = recordings from cam
path = /var/www/recordings
read only = no
guest ok = yes
force user = root
force group = root
```

After the configuration file has been set up, start the samba service: `# systemctl enable smb --now`, and don't forget about `# systemctl enable nmb --now`. Also open the port for the SMB protocol: `# ufw allow from 192.168.0.0/24 to any port 139`

Then the samba server can be added to the camera:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/2b1/299/cb4/2b1299cb4143662367e86d2955b2b7da.png)

Press the Test button and get Test Ok. Now let's try to record directly to our server: go to Setup -> Event setup -> Recording, add a new item and see that besides the SD card there is another target the recording can be written to: `SAMBA`. Select it, adjust the size of the rewritable files, and save.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/cbb/9a0/2dd/cbb9a02dd5737e1535c681d9e6e3c439.png)

Turn the setting on and notice that a `Video` directory has appeared in the directory specified in `smb.conf`. The general structure of the directory looks approximately like this:

```
.
├── test.txt
└── Video
    ├── 20220801
    │   ├── 21
    │   ├── 22
    │   └── 23
    ├── 20220802
    │   ├── 00
    │   ├── 01
    │   ├── 02
    │   └── 03
    └── 20220803
        ├── 17
        ├── 18
        ├── 19
        └── 20
```

Great, now I have both video recording and motion detection set up. Everything works exactly as I wanted.

6. What I learned from this experiment
--------------------------------------

While setting up the server and solving the problems that kept appearing along the way, I figured out how to set up an ftp server and a samba server, and played with the camera and its configuration. I dug deeper into configuring ssh, making the nginx web server more secure, which headers to include, and much more besides. It has been a very interesting and educational experience, but I'd only recommend repeating it to those who genuinely want to understand how each technology works "under the hood".

I wish you all success and patience. Good luck!
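Since the camera writes into `Video/YYYYMMDD/HH` subdirectories, retention is easy to script. A hypothetical sketch (the helper name and the cutoff are my own choices, not from the article) that lists day-directories older than a given date, exploiting the fact that `YYYYMMDD` strings sort chronologically:

```python
import os

def day_dirs_older_than(root, cutoff_yyyymmdd):
    """Return paths of Video/YYYYMMDD day-directories older than the cutoff.

    The camera names day-directories as YYYYMMDD strings, so a plain
    lexicographic comparison is also a chronological one.
    """
    video_root = os.path.join(root, "Video")
    old = []
    for name in sorted(os.listdir(video_root)):
        if len(name) == 8 and name.isdigit() and name < cutoff_yyyymmdd:
            old.append(os.path.join(video_root, name))
    return old

# Example: everything before 2022-08-03 is a candidate for deletion.
# Actual deletion would be shutil.rmtree() on each returned path,
# e.g. from a daily cron job on the server.
```

Run daily from cron, this keeps the share from filling up at the ~43 GB/day rate calculated earlier.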
https://habr.com/ru/post/682148/
.NET Language-Integrated Query for XML Data

Michael Champion

February 2007

Applies to:
Visual Studio Code Name "Orcas"
.NET Framework 3.5

Summary: LINQ to XML was developed with Language-Integrated Query over XML in mind and takes advantage of standard query operators and adds query extensions specific to XML. The samples in most of this document are shown in C# for brevity. (44 printed pages)

Contents

- Introduction
- Sample XML
- Programming XML With LINQ to XML
- LINQ to XML Design Principles
- The LINQ to XML Class Hierarchy
- XML Names
- Creating XML From Scratch
- Traversing XML
- Manipulating XML
- Working With Attributes
- Working With Other Types of XML Nodes
- Annotating Nodes With User-Defined Information
- Outputting XML
- Validating XML
- Querying XML With LINQ to XML
- Querying XML
- Using Query Expressions with XML
- Using XPath and XSLT With LINQ to XML
- Mixing XML and Other Data Models
- Reading From a Database to XML
- Reading XML and Updating a Database
- Layered Technologies Over LINQ to XML
- LINQ to XML in Visual Basic 9.0
- Schema Aware XML Programming
- February 2007 CTP Release Notes
- Changes Since the May 2006 CTP
- Non-Exhaustive List of Planned Features in Future Releases
- References

Introduction

XML has achieved tremendous adoption as a basis for formatting data, whether in Word files, on the wire, in configuration files, or in databases; XML seems to be everywhere. Yet, from a development perspective, XML is still hard to work with. LINQ to XML, a component of the LINQ project, aims to address this issue.

LINQ to XML is a modernized in-memory XML programming API designed to take advantage of the latest .NET Framework language innovations. It provides both DOM and XQuery/XPath like functionality in a consistent programming experience across the different LINQ-enabled data access technologies.

There are two major perspectives for thinking about and understanding LINQ to XML.
From one perspective you can think of LINQ to XML as a member of the LINQ Project family of technologies, with LINQ to XML providing an XML Language-Integrated Query capability along with a consistent query experience for objects, relational databases (LINQ to SQL, LINQ to DataSet, LINQ to Entities), and other data access technologies as they become LINQ-enabled. From another perspective you can think of LINQ to XML as a full-featured in-memory XML programming API, comparable to a modernized, redesigned Document Object Model (DOM) XML programming API plus a few key features from XPath and XSLT.

LINQ to XML was developed with Language-Integrated Query over XML in mind from the beginning. It takes advantage of standard query operators and adds query extensions specific to XML. From an XML perspective, LINQ to XML provides the query and transformation power of XQuery and XPath integrated into .NET Framework languages that implement the LINQ pattern (for example, C#, Visual Basic, and so on). This provides a consistent query experience across LINQ-enabled APIs and allows you to combine XML queries and transforms with queries from other data sources.

LINQ to XML uses modern language features (e.g., generics and nullable types) and diverges from the DOM programming model with a variety of innovations to simplify programming against XML. Even without Language-Integrated Query capabilities, LINQ to XML represents a significant stride forward for XML programming. The next section of this document, "Programming XML", provides more detail on the in-memory XML programming API aspect of LINQ to XML.

LINQ to XML is a language-agnostic component of the LINQ Project. The samples in most of this document are shown in C# for brevity. LINQ to XML can be used just as well with a LINQ-enabled version of the Visual Basic .NET compiler. The section "LINQ to XML in Visual Basic 9.0" discusses Visual Basic-specific programming with LINQ to XML in more detail.
Sample XML

For the purposes of this paper, let's establish a simple XML contact list that we can use throughout our discussion. It describes a single contact, Patrick Hines, with his phone numbers and address:

<contacts>
  <contact>
    <name>Patrick Hines</name>
    <phone type="home">206-555-0144</phone>
    <phone type="work">425-555-0145</phone>
    <address>
      <street1>123 Main St</street1>
      <city>Mercer Island</city>
      <state>WA</state>
      <postal>68042</postal>
    </address>
  </contact>
</contacts>

Programming XML with LINQ to XML

This section details how to program with LINQ to XML independent of Language-Integrated Query. Because LINQ to XML provides a fully featured in-memory XML programming API, you can do all of the things you would expect when reading and manipulating XML. A few examples include the following:

- Load XML into memory in a variety of ways (file, XmlReader, and so on).
- Create an XML tree from scratch.
- Insert new XML elements into an in-memory XML tree.
- Delete XML elements out of an in-memory XML tree.
- Save XML to a variety of output types (file, XmlWriter, and so on).

You should be able to accomplish most XML programming tasks you run into using this technology.

LINQ to XML Design Principles

LINQ to XML is designed to be a lightweight XML programming API. This is true from both a conceptual perspective, emphasizing a straightforward, easy to use programming model, and from a memory and performance perspective. Its public data model is aligned as much as possible with the W3C XML Information Set.

Key concepts

This section outlines some key concepts that differentiate LINQ to XML from other XML programming APIs, in particular the current predominant XML programming API, the W3C DOM.

In object-oriented programming, when you create object graphs, and correspondingly in the W3C DOM, when creating an XML tree, you build up the XML tree in a bottom-up manner. Code written this way against XmlDocument (the DOM implementation from Microsoft) is verbose and provides few clues to the structure of the XML tree. LINQ to XML supports this approach to constructing an XML tree, but also supports an alternative approach referred to as functional construction. With functional construction, by indenting (and squinting a bit), the code that constructs the XML tree shows the structure of the underlying XML.
Functional construction is described further in the section titled "Creating XML From Scratch."

Document "free"

When programming XML, your primary focus is usually on XML elements and perhaps attributes. This makes sense because an XML tree, other than at the leaf level, is composed of XML elements, and your primary goal when working with XML is traversing or manipulating the XML elements that make up the XML tree. In LINQ to XML you can work directly with XML elements in a natural way. For example, you can do the following:

- Create XML elements directly (without an XML document involved at all)
- Load them from XML that exists in a file
- Save (write) them to a writer

Compare this to the W3C DOM, in which the XML document is used as a logical container for the XML tree. In DOM, XML nodes, including elements and attributes, must be created in the context of an XML document; consider, for example, the code needed to create a name element. The XML document is a fundamental concept in DOM: XML nodes are created in the context of the XML document, and if you want to use an element across multiple documents you must import the nodes across documents. This is an unnecessary layer of complexity that LINQ to XML avoids. In LINQ to XML you create XML elements directly; you do not have to create an XML document to hold the XML tree.

The LINQ to XML object model does provide an XML document to use if necessary, for example if you have to add a comment or processing instruction at the top of the document. The following is an example of how to create an XML document with an XML declaration, comment, and processing instruction along with the contacts content.
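The original article's listings for this contrast were lost in this copy; a minimal illustration using the standard System.Xml and System.Xml.Linq APIs (not the article's exact snippets) makes the point:

```csharp
// W3C DOM (XmlDocument): an element can only be created via a document.
XmlDocument doc = new XmlDocument();
XmlElement name = doc.CreateElement("name");
name.InnerText = "Patrick Hines";

// LINQ to XML: the element stands on its own, no document required.
XElement name2 = new XElement("name", "Patrick Hines");
```

In the DOM version the document object exists solely to serve as a factory; in LINQ to XML it simply disappears.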
XDocument contactsDoc = new XDocument(
    new XDeclaration("1.0", "utf-8", "yes"),
    new XComment("LINQ to XML Contacts XML Example"),
    new XProcessingInstruction("MyApp", "123-44-4444"),
    new XElement("contacts",
        new XElement("contact",
            new XElement("name", "Patrick Hines"),
            new XElement("phone", "206-555-0144"),
            new XElement("address",
                new XElement("street1", "123 Main St"),
                new XElement("city", "Mercer Island"),
                new XElement("state", "WA"),
                new XElement("postal", "68042")
            )
        )
    )
);

After this statement contactsDoc contains:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!--LINQ to XML Contacts XML Example-->
<?MyApp 123-44-4444?>
<contacts>
  <contact>
    <name>Patrick Hines</name>
    <phone>206-555-0144</phone>
    <address>
      <street1>123 Main St</street1>
      <city>Mercer Island</city>
      <state>WA</state>
      <postal>68042</postal>
    </address>
  </contact>
</contacts>

XML names

LINQ to XML goes out of its way to make XML names as straightforward as possible. Arguably, the complexity of XML names, which is often considered an advanced topic in XML literature, comes not from namespaces, which developers use regularly in programming, but from XML prefixes. XML prefixes can be useful for reducing the keystrokes required when inputting XML or making XML easier to read; however, prefixes are just a shortcut for using the full XML namespace. On input, LINQ to XML resolves all prefixes to their corresponding XML namespace, and prefixes are not exposed in the programming API. In LINQ to XML, an XName represents a full XML name consisting of an XNamespace object and the local name. Developers will usually find it more convenient to use the XNamespace object rather than the namespace URI string. For example, to create an XElement called contacts within a namespace, you construct an XNamespace from the namespace URI and combine it with the local name.

Conversely, the W3C DOM exposes XML names in a variety of ways across the API. For example, to create an XmlElement, there are three different ways that you can specify the XML name. All of these allow you to specify a prefix. This leads to a confusing API with unclear consequences when mixing prefixes, namespaces, and namespace declarations (xmlns attributes that associate a prefix with an XML namespace).
LINQ to XML treats XML namespace prefixes as serialization options and nothing more. When you read XML, all prefixes are resolved, and each named XML item has a fully expanded name containing the namespace and the local name. On output, the XML namespace declarations (xmlns attributes) are honored and the appropriate prefixes are then displayed. If you need to influence prefixes in the XML output, you can add xmlns attributes in the appropriate places in the XML tree. See the section titled "XML Names" for more information.

Typically, the leaf elements in an XML tree contain values such as strings, integers, and decimals. The same is true for attributes. In LINQ to XML, you can treat elements and attributes that contain values in a natural way: simply cast them to the type that they contain. For example, assuming that name is an XElement that contains a string, you can cast name itself to string to read its value. Usually this shows up in the context of referring to a child element directly, casting the result of an Element() call. Explicit cast operators are provided for string, bool, bool?, int, int?, uint, uint?, long, long?, ulong, ulong?, float, float?, double, double?, decimal, decimal?, DateTime, DateTime?, TimeSpan, TimeSpan?, and Guid, Guid?.

In contrast, the W3C DOM always treats text as an XML node. Consequently, in many DOM implementations the only way to read and manipulate the underlying text of a leaf node is to read the text-node children of the leaf node; just to read the value of the name element, you would need to navigate to its child text node. This has been simplified in some W3C DOM implementations, such as the Microsoft XmlDocument API, by using the InnerText property.
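A quick sketch of the value-casting pattern just described, using a contact element in the spirit of the running sample:

```csharp
XElement contact = new XElement("contact",
    new XElement("name", "Patrick Hines"),
    new XElement("quantity", 55));

// Cast the element itself to read its value; no text nodes involved.
string name = (string)contact.Element("name");   // "Patrick Hines"
int qty = (int)contact.Element("quantity");      // 55

// Nullable casts tolerate a missing element instead of throwing.
int? missing = (int?)contact.Element("missing"); // null
```

The nullable forms are what make the cast operators pleasant in queries: a missing child yields null rather than an exception.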
With LINQ to XML, there is an XText class, but it is used only to let you work with mixed content and CData sections. Developers of applications that do not use these features of XML don't have to worry about text nodes in most cases. You can usually work directly with the basic .NET Framework-based types, reading them and adding them directly to the XML. In general, it is best to ignore the existence of XText nodes unless you are working with mixed content or CData sections. The LINQ to XML Class Hierarchy In Figure 1, you can see the major classes defined in LINQ to XML. Figure 1. LINQ to XML Class Hierarchy Note the following about the LINQ to XML class hierarchy: - Although XElement is low in the class hierarchy, it is the fundamental class in LINQ to XML. XML trees are generally made up of a tree of XElements. XAttributes are name/value pairs associated with an XElement. XDocuments are created only if necessary, such as to hold a DTD or top level XML processing instruction (XProcessingInstruction). All other XNodes can only be leaf nodes under an XElement, or possibly an XDocument (if they exist at the root level). - XAttribute and XNode are peers derived from a common base class XObject. XAttributes are not XNodes because XML attributes are really name value pairs associated with an XML element not nodes in the XML tree. Contrast this with W3C DOM. - XText and XCData are exposed in this version of LINQ to XML, but as discussed above, it is best to think of them as a semi-hidden implementation detail except when exposing text nodes is necessary. As a user, you can get back the value of the text within an element or attribute as a string or other simple value. - The only XNode that can have children is an XContainer, meaning either an XDocument or XElement. An XDocument can contain an XElement (the root element), an XDeclaration, an XDocumentType, or an XProcessingInstruction. 
An XElement can contain another XElement, an XComment, an XProcessingInstruction, and text (which can be passed in a variety of formats, but will be represented in the XML tree as text). XML Names XML names, often a complex subject in XML programming APIs, are represented simply in LINQ to XML. An XML name is represented by an XNamespace object (which encapsulates the XML namespace URI) and a local name. An XML namespace serves the same purpose that a namespace does in your .NET Framework-based programs, allowing you to uniquely qualify the names of your classes. This helps ensure that you don't run into a name conflict with other users or built-in names. When you have identified an XML namespace, you can choose a local name that needs to be unique only within your identified namespace. For example, if you want to create an XML element with the name contacts, you would likely want to create it within an XNamespace with a URI such as. Another aspect of XML names is XML namespace prefixes. XML prefixes cause most of the complexity of XML names. In XML syntax, prefixes allow you to create a shortcut for an XML namespace, which makes the XML document more concise and understandable. XML prefixes depend on their context to have meaning. The XML prefix myPrefix could be associated with one XML namespace in one part of an XML tree, but be associated with a completely different XML namespace in a different part of the XML tree. LINQ to XML simplifies XML names by removing XML prefixes from the XML Programming API and encapsulates them in XNamespace objects. When reading in XML, each XML prefix is resolved to its corresponding XML namespace. Therefore, when developers work with XML names they are working with a fully qualified XML name; an XML namespace, and a local name. In LINQ to XML, the class that represents XML names is XName, consisting of an XNamespace object and the local name. 
To create an XElement called contacts within a namespace, you build its XName from the XNamespace and the local name. XML names appear frequently throughout the LINQ to XML API, and wherever an XML name is required, you will find an XName parameter. However, you seldom work directly with an XName. XName contains an implicit conversion from string. The string representation of an XName is referred to as an expanded name. An expanded name puts the namespace URI in braces, followed by the local name:

{namespaceURI}localname

It is possible to use this expanded-name syntax rather than constructing XNamespace objects any time an XName is required. For example, the constructor for XElement takes an XName as its first argument. You do not have to type the XML namespace every time you use an XML name; you can use the facilities of the language itself to make this easier, for example by storing the XNamespace in a variable and combining it with local names when constructing the tree. The resulting XML will look like:

<contacts xmlns="">
  <contact>
    <name>Patrick Hines</name>
    <phone type="home">206-555-0144</phone>
    <phone type="work">425-555-0145</phone>
    <address>
      <street1>123 Main St</street1>
      <city>Mercer Island</city>
      <state>WA</state>
      <postal>68042</postal>
    </address>
  </contact>
</contacts>

XML prefixes and output

Earlier in this section we mentioned that, when reading in XML, prefixes are resolved to their corresponding XML namespaces. But what happens on output? What if you need or want to influence prefixes when outputting the XML? You can do this by creating xmlns attributes (XML namespace declarations) that associate a prefix with an XML namespace. Therefore, if you have a specific output in mind, you can manipulate the XML to have the XML namespace declarations, with your desired prefixes, exactly where you want them.

Loading existing XML

You can load existing XML into a LINQ to XML tree so that you can read it or manipulate it.
LINQ to XML provides multiple input sources, including a file, an XmlReader, a TextReader, or a string. To input a string, you use the Parse method. Here is an example of the Parse method, using a fragment of the sample contact list:

XElement contacts = XElement.Parse(
    @"<contacts>
        <contact>
          <name>Patrick Hines</name>
          <phone>206-555-0144</phone>
        </contact>
      </contacts>");

To input from any of the other sources, you use the Load method, for example to load XML from a file.

Creating XML from Scratch

LINQ to XML provides a powerful approach to creating XML elements, referred to as functional construction. Functional construction lets you create all or part of your XML tree in a single statement; the contacts tree shown earlier can be created with a single nested XElement constructor call. By indenting, the XElement constructor calls resemble the structure of the underlying XML.

Functional construction is enabled by an XElement constructor that takes a params object[] argument. The contents parameter is extremely flexible, supporting any type of object that is a legitimate child of an XElement. Parameters can be any of the following:

- A string, which is added as text content. This is the recommended pattern to add a string as the value of an element; the LINQ to XML implementation will create the internal XText node.
- An XText, which can have either a string or CData value, added as child content. This is mainly useful for CData values; using a string is simpler for ordinary string values.
- An XElement, which is added as a child element.
- An XAttribute, which is added as an attribute.
- An XProcessingInstruction or XComment, which is added as child content.
- An IEnumerable, which is enumerated, and these rules are applied recursively.
- Anything else, for which ToString() is called and the result is added as text content.
- null, which is ignored.

In the above example showing functional construction, a string ("Patrick Hines") is passed into the name XElement constructor.
This could have been a variable (for example, new XElement("name", custName)), it could have been a different type besides string (for example, new XElement("quantity", 55)), it could have been the result of a function call, or it could even have been an IEnumerable<XElement>. For example, a common scenario is to use a query within a constructor to create the inner XML. The following code reads contacts from an array of Person objects into a new XML element contacts.

class Person
{
    public string Name;
    public string[] PhoneNumbers;
}

var persons = new[] {
    new Person { Name = "Patrick Hines",
                 PhoneNumbers = new[] { "206-555-0144", "425-555-0145" } },
    new Person { Name = "Gretchen Rivas",
                 PhoneNumbers = new[] { "206-555-0163" } }
};

XElement contacts =
    new XElement("contacts",
        from p in persons
        select new XElement("contact",
            new XElement("name", p.Name),
            from ph in p.PhoneNumbers
            select new XElement("phone", ph)
        )
    );

Console.WriteLine(contacts);

This gives the following output:

<contacts>
  <contact>
    <name>Patrick Hines</name>
    <phone>206-555-0144</phone>
    <phone>425-555-0145</phone>
  </contact>
  <contact>
    <name>Gretchen Rivas</name>
    <phone>206-555-0163</phone>
  </contact>
</contacts>

Notice how the inner body of the XML, the repeating contact element, and, for each contact, the repeating phone were generated by queries that return an IEnumerable. When an objective of your program is to create an XML output, functional construction lets you begin with the end in mind. You can use functional construction to shape your goal output document and either create the subtree of XML items inline, or call out to functions to do the work. Functional construction is instrumental in transforms, which are described in more detail in section "XML Transformation." Transformation is a key usage scenario in XML, and functional construction is well-suited for this task.

Traversing XML

When you have XML available to you in-memory, the next step is often to navigate to the XML elements that you want to work on. Language-Integrated Query provides powerful options for doing just this (as described in the section titled "Querying XML With LINQ to XML").
This section describes more traditional approaches to walking through an XML tree.

Getting the children of an XML element

LINQ to XML provides methods for getting the children of an XElement. To get all of the children of an XElement (or XDocument), you can use the Nodes() method. This returns IEnumerable<object> because you could have text mixed with other LINQ to XML types. For example, you might have the following XML loaded into an XElement called contact: Using Nodes(), you could get all of the children and output the results by using this code fragment: The results would show on the console as: The first child was the string, "Met in 2005.", the second child was the XElement name, the third child was the first phone XElement, the fourth child was the second phone XElement, and the fifth child was an XComment with the value "Avoid whenever possible". Notice that ToString() on an XNode (XElement, for example) returns a formatted XML string based on the node type. This is a great convenience, and we will use this many times in this document. If you want to be more specific, you can ask for content nodes of an XElement of a particular type. For example, you might want to get the XElement children for the contact XElement only. In this case, you can specify a parameterized type: And you would only get the element child written to the console: Because XML Elements are prevalent and important in most XML scenarios, there are methods for navigating to XElements directly below a particular XElement in the XML tree. The method Elements() returns IEnumerable<XElement>, and is a shortcut for Nodes().OfType<XElement>(). For example, to get all of the element children of contact, you would do the following: Again, only the XElement children would be output: If you want to get all XElements with a specific name, you can use the Elements(XName) overload that takes an XName as a parameter.
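A minimal sketch of Nodes() and the parameterized-type filter described above, using sample data modeled on the document's contact XML (note that in the shipped API, Nodes() is typed as IEnumerable<XNode>):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class NodesDemo
{
    static void Main()
    {
        XElement contact = XElement.Parse(
            "<contact>Met in 2005." +
            "<name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone>" +
            "<!--Avoid whenever possible--></contact>");

        // All children: the text node, the elements, and the comment.
        foreach (var node in contact.Nodes())
            Console.WriteLine(node);

        // Only the element children, via the parameterized type filter.
        foreach (XElement e in contact.Nodes().OfType<XElement>())
            Console.WriteLine(e.Name);

        // Elements() is a shortcut for Nodes().OfType<XElement>().
        foreach (XElement e in contact.Elements())
            Console.WriteLine(e.Name);
    }
}
```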
For example, to get only the phone XElements, you could do the following: This would write all of the phone XElements to the console. If you know that there is only one child element with a particular name, you can use the Element(XName) (not plural) method, which returns a single XElement. If there is more than one element with this name, you will get the first one. For example, to get the name XElement, you could do the following: Or, you could get the value of name like this: Nodes(), Elements(), Elements(XName), and Element(XName) are the basic methods for simple traversal of XML. If you are familiar with XPath, these methods are analogous to child::node(), child::*, child::name, and child::name[1], respectively. XML Query extensions such as Descendants() and Ancestors(), as discussed in the section titled "Querying XML With LINQ to XML", serve a similar traversal purpose and are often combined with the basic traversal methods.

Getting the parent and document of an XML element

To traverse upwards in the XML tree, you can use the Parent property of XElement. For example, if you had a phone XElement, you could retrieve the associated contact with the following: Note that the Parent property of a root element is null. It is not the associated document as it is in some other XML APIs. In LINQ to XML, the XML document is not considered a part of the XML tree. If you want the document associated with an XElement (or any XNode), you can get to it from the Document property. If you want to associate an XElement as the root element of a document, you can pass the element into the XDocument constructor or you can add the root to the document as a child element. For example, to establish the contacts XElement as the root element of a contactsDoc XDocument, you could do the following:

Manipulating XML

LINQ to XML provides a full set of methods for manipulating XML. You can insert, delete, copy, and update XML content.
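The traversal methods above can be sketched as follows; the sample XML is abbreviated from the document's contact list:

```csharp
using System;
using System.Xml.Linq;

class ElementsDemo
{
    static void Main()
    {
        XElement contact = XElement.Parse(
            "<contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact>");

        // All phone children (analogous to XPath child::phone).
        foreach (XElement phone in contact.Elements("phone"))
            Console.WriteLine(phone);

        // The single name child (the first one, if several existed).
        XElement name = contact.Element("name");

        // Its value, via the explicit string conversion.
        Console.WriteLine((string)name);

        // Upward traversal: the parent of a phone is the contact element.
        XElement parent = contact.Element("phone").Parent;
        Console.WriteLine(parent.Name);
    }
}
```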
Inserting XML

You can easily add content to an existing XML tree. To add another phone XElement by using the Add() method: This code fragment will add the mobilePhone XElement as the last child of contact. If you want to add to the beginning of the children, you can use AddFirst(). If you want to add the child in a specific location, you can navigate to a child before or after your target location by using AddBeforeSelf() or AddAfterSelf(). For example, if you wanted mobilePhone to be the second phone you could do the following: The Add methods work similarly to the XElement and XDocument (actually XContainer) constructors, so you can easily add full XML subtrees using the functional construction style. For example, you might want to add an Address to a contact. Let's look a little deeper at what is happening behind the scenes when adding an element child to a parent element. When you first create an XElement, it is unparented. If you check its Parent property you will get back null. When you use Add to add this child element to the parent, LINQ to XML checks to see if the child element is unparented; if so, LINQ to XML parents the child element by setting the child's Parent property to the XElement that Add was called on. This is a very efficient technique, which is extremely important since this is the most common scenario for constructing XML trees. To add mobilePhone to another contact: Again, LINQ to XML checks to see if the child element is parented. In this case, the child is already parented. If the child is already parented, LINQ to XML clones the child element under subsequent parents. The previous example is the same as doing the following:

Deleting XML

To delete XML, navigate to the content you want to delete and call Remove(). For example, if you want to delete the first phone number for a contact: Remove() also works over an IEnumerable, so you could delete all of the phone numbers for a contact in one call.
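A short sketch of the insertion methods and the parenting/cloning behavior discussed above; the mobile number is invented for illustration:

```csharp
using System;
using System.Xml.Linq;

class AddDemo
{
    static void Main()
    {
        XElement contact = XElement.Parse(
            "<contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact>");

        // Unparented when first created: Parent is null here.
        XElement mobilePhone = new XElement("phone",
            new XAttribute("type", "mobile"), "206-555-0168"); // invented number
        Console.WriteLine(mobilePhone.Parent == null);

        // Append as the last child; the node itself is parented, not copied.
        contact.Add(mobilePhone);

        // Alternatively, make it the second phone:
        // contact.Element("phone").AddAfterSelf(mobilePhone);

        // mobilePhone is now parented, so adding it elsewhere adds a clone.
        XElement otherContact = new XElement("contact");
        otherContact.Add(mobilePhone); // a copy, not the same node
    }
}
```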
You can also remove all of the content from an XElement by using the RemoveNodes() method. For example, you could remove the content of the first contact's first address with this statement: Another way to remove an element is to set it to null using SetElement, which we talk further about in the next section, "Updating XML."

Updating XML

To update XML, you can navigate to the XElement whose contents you want to replace, and then use the ReplaceNodes() method. For example, if you wanted to change the phone number of the first phone XElement of a contact, you could do the following: You can also update an XML subtree using ReplaceContent(). For example, to update an address we could do the following: ReplaceContent() is general purpose. SetElement() is designed to work on simple content. You call ReplaceContent() on the element itself; with SetElement(), you operate on the parent. For example, we could have performed the same update we demonstrated above on the first phone number by using this statement: The results would be identical. If there had been no phone numbers, an XElement named "phone" would have been added under contact. For example, you might want to add a birthday to the contact. If a birthday is already there, you want to update it. If it does not exist, you want to insert it. Also, if you use SetElement() with a value of null, the XElement will be deleted. You can remove the birthday element completely by: Attributes have a symmetric method called SetAttribute(), which is discussed in the section titled "Working With Attributes."

Be careful with deferred query execution

Keep in mind when manipulating XML that in most cases query operators work on a "deferred execution" basis (also called "lazy"), meaning the queries are resolved as requested rather than all at once at the beginning of the query.
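A sketch of the deletion and update operations above. Note that in the shipped LINQ to XML API the preview names SetElement and ReplaceContent became SetElementValue and ReplaceNodes; the sketch uses the shipped names, and the birthday value is invented:

```csharp
using System.Xml.Linq;

class UpdateDemo
{
    static void Main()
    {
        XElement contact = XElement.Parse(
            "<contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact>");

        // Replace the content of the first phone element (called on the element).
        contact.Element("phone").ReplaceNodes("206-555-0168");

        // Add-or-update a simple child (called on the parent).
        contact.SetElementValue("birthday", "1970-01-01"); // invented value

        // A null value deletes the element again.
        contact.SetElementValue("birthday", null);

        // Remove the first phone entirely.
        contact.Element("phone").Remove();

        // Remove all content of the contact element.
        contact.RemoveNodes();
    }
}
```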
For example, take this query, which attempts to remove all of the phone elements in the contacts list: The query will remove only the first "phone" descendant from the tree because the iteration will be cut short. You can resolve this issue by forcing resolution of the entire sequence using ToList() or ToArray(). For example, this approach will work: This will cache up the list of phones so that there will be no problem iterating through them and deleting them. The query extension Remove() is one of the few extension methods that does not use deferred execution and uses exactly this ToList() approach to cache up the items targeted for deletion. We could have written the previous example as: While removal is the most obvious situation where the combination of data manipulation operations and deferred query execution can create problems, it is not the only one. A few words of advice:

- Understand that this complex interaction between lazy evaluation and data manipulation is not a "bug" in LINQ to XML; it is a more fundamental issue in computer science (often referred to as the "Halloween Problem").
- In general, the minimalist design philosophy of LINQ to XML precludes extensive analysis and optimization to keep users from stumbling over these problems. You need to determine, for your own application, the appropriate tradeoff between making a static copy of a region of an XML document before manipulating it without fear of the Halloween Problem, and carefully working around the reality that data manipulation operations can change the definition of the results of a query in ways that are not easy to anticipate.
- Consider using a "functional" transformation approach rather than an in-place updating approach when designing your data manipulation logic. Functional constructors in LINQ to XML make it quite easy to dynamically produce a new document with structures and values defined as transformations of some input document.
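The snapshot workaround can be sketched as follows:

```csharp
using System.Linq;
using System.Xml.Linq;

class DeferredDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact></contacts>");

        // Snapshot the sequence with ToList() before mutating the tree
        // it was produced from; otherwise the iteration is cut short.
        foreach (XElement phone in contacts.Descendants("phone").ToList())
            phone.Remove();

        // Equivalent one-liner: the Remove extension snapshots internally.
        // contacts.Descendants("phone").Remove();
    }
}
```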
You don't need to learn an event-oriented API or XSLT to build an efficient XML transformation pipeline; you can do it all with LINQ to XML.

Working with Attributes

There is substantial symmetry between working with the XElement and XAttribute classes. However, in the LINQ to XML class hierarchy, XElement and XAttribute are quite distinct and do not derive from a common base class. This is because XML attributes are not nodes in the XML tree; they are unordered name/value pairs associated with an XML element. LINQ to XML makes this distinction, but in practice, working with XAttribute is quite similar to working with XElement. Considering the nature of an XML attribute, where they diverge is understandable.

Adding XML attributes

Adding an XAttribute is very similar to adding a simple XElement. In the sample XML, notice that each phone number has a type attribute that states whether this is a home, work, or mobile phone number: You create an XAttribute by using functional construction the same way you would create an XElement with a simple type. To create a contact using functional construction: Just as you use SetElement to update, add, or delete elements with simple types, you can do the same using the SetAttribute(XName, object) method on XElement. If the attribute exists, it will be updated. If the attribute does not exist, it will be added. If the value of the object is null, the attribute will be deleted.

Getting XML attributes

The primary method for accessing an XAttribute is by using the Attribute(XName) method on XElement. For example, to use the type attribute to obtain the contact's home phone number: Notice how Attribute(XName) works similarly to the Element(XName) method. Also, notice that there are identical explicit cast operators, which let you cast an XAttribute to a variety of simple types (see section "Text as value" for a list of the types defined for explicit casting from XElements and XAttributes).
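A minimal sketch of creating and reading attributes as described above (in the shipped API, the preview method SetAttribute is named SetAttributeValue):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class AttributeDemo
{
    static void Main()
    {
        // Create a contact with typed phone numbers via functional construction.
        XElement contact = new XElement("contact",
            new XElement("name", "Patrick Hines"),
            new XElement("phone", new XAttribute("type", "home"), "206-555-0144"),
            new XElement("phone", new XAttribute("type", "work"), "425-555-0145"));

        // Use the type attribute to find the home phone number.
        XElement homePhone = contact.Elements("phone")
            .First(p => (string)p.Attribute("type") == "home");
        Console.WriteLine((string)homePhone);
    }
}
```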
Deleting XML Attributes

If you want to delete an attribute, you can use Remove or SetAttribute(XName, object), passing null as the value of object. For example, to delete the type attribute from the first phone using Remove: Or using SetAttribute:

Working With Other Types of XML Nodes

LINQ to XML provides a full set of the different types of XML nodes that appear in XML. To illustrate this, we can create a document that uses all of the different XML node types: When you output xdoc, you get: LINQ to XML makes it as easy as possible to work with XML elements and attributes, but other XML node types are ready and available if you need them.

Annotating Nodes With User-Defined Information

LINQ to XML gives you the ability to associate application-specific information with a particular node in an XML tree. Examples include the line number range in the source file from which an element was parsed, the post schema validation type of the element, a business object that contains the data structures into which the XML information was copied and the methods for working with it (for example, a real invoice object with data in CLR and application-defined types), and so on. LINQ to XML accommodates this need by defining methods on the XContainer class that can annotate an instance of the class with one or more objects, each of some unique type. Conceptually, the set of annotations on an XContainer object is akin to a dictionary, with the type being the key and the object itself being the value. To add an annotation to an XElement or XDocument object: where LineNumberInfo is an application-defined class for storing line number information. The annotation can be retrieved with: The Annotation() method returns null if the element does not have an annotation of the given type. The annotation is removed with: There are a couple of caveats: annotation lookup is based on type identity; it doesn't know about interfaces, inheritance, and so on.
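A sketch of the annotation mechanism; LineNumberInfo is a hypothetical application-defined class, and the sketch uses the shipped method names AddAnnotation, Annotation&lt;T&gt;, and RemoveAnnotations&lt;T&gt;:

```csharp
using System;
using System.Xml.Linq;

// Hypothetical application-defined annotation type.
class LineNumberInfo
{
    public int Start;
    public int End;
}

class AnnotationDemo
{
    static void Main()
    {
        XElement contact = new XElement("contact");

        // Attach an annotation, keyed by its type.
        contact.AddAnnotation(new LineNumberInfo { Start = 10, End = 14 });

        // Retrieve it by type; returns null if no annotation of that type exists.
        LineNumberInfo info = contact.Annotation<LineNumberInfo>();
        Console.WriteLine(info == null ? "none" : info.Start + "-" + info.End);

        // Remove all annotations of that type.
        contact.RemoveAnnotations<LineNumberInfo>();
    }
}
```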
For example, if you add an annotation with an object of type Customer, which derives from type Person (or implements a Person interface), a call to Annotation<Person>() won't find it. Thus, when you annotate an XElement object, it should be with an instance of a private class of a type that you are sure will be unique.

Outputting XML

After reading in your XML or creating some from scratch, and then manipulating it in various ways, you will probably want to output your XML. To accomplish this, you can use one of the overloaded Save() methods on an XElement or XDocument to output in a variety of ways. You can save to a file, a TextWriter, or an XmlWriter. For example, to save the XElement named contacts to a file:

Validating XML

You can validate an XElement tree against an XML schema via extension methods in the System.Xml.Schema namespace. This is exactly the same functionality that was shipped in .NET 2.0, with only a "bridge" to expose the classes in that namespace to LINQ to XML. To bring it into scope, use: Use the .NET 2.0 classes and methods to populate XmlSchemaObject/XmlSchemaSet objects. There will be methods available to Validate() XElement, XAttribute, or XDocument objects against the schema and optionally populate a post schema validation infoset as annotations on the LINQ to XML tree.

Querying XML with LINQ to XML

The major differentiator between LINQ to XML and other in-memory XML programming APIs is Language-Integrated Query. Language-Integrated Query provides a consistent query experience across different data models as well as the ability to mix and match data models within a single query. This section describes how to use Language-Integrated Query with XML. The following section contains a few examples of using Language-Integrated Query across data models.
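A minimal sketch of saving; the file name is a placeholder:

```csharp
using System.Xml.Linq;

class SaveDemo
{
    static void Main()
    {
        XElement contacts = new XElement("contacts",
            new XElement("contact",
                new XElement("name", "Patrick Hines")));

        // Save to a file; overloads also accept a TextWriter or an XmlWriter.
        contacts.Save("contacts.xml"); // hypothetical output path
    }
}
```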
Standard query operators form a complete query language for IEnumerable<T>. Standard query operators show up as extension methods on any object that implements IEnumerable<T> and can be invoked like any other method. This approach, calling query methods directly, can be referred to as "explicit dot notation." In addition to standard query operators are query expressions for five common query operators:

- Where
- Select
- SelectMany
- OrderBy
- GroupBy

Query expressions provide an ease-of-use layer on top of the underlying explicit dot notation, similar to the way that foreach is an ease-of-use mechanism that consists of a call to GetEnumerator() and a while loop. When working with XML, you will probably find both approaches useful. An orientation to the explicit dot notation will give you the underlying principles behind XML Language-Integrated Query, and help you to understand how query expressions simplify things.

Querying XML

For in-depth information about Language-Integrated Query, we encourage you to review the materials in the References section of this document. This section describes Language-Integrated Query from a usage perspective, focusing on XML querying patterns and providing examples along the way. The LINQ to XML integration with Language-Integrated Query is apparent in three ways:

- Leveraging standard query operators
- Using XML query extensions
- Using XML transformation

The first is common with any other Language-Integrated Query enabled data access technology and contributes to a consistent query experience. The last two provide XML-specific query and transform features.

Standard Query Operators and XML

LINQ to XML fully leverages standard query operators in a consistent manner by exposing collections that implement the IEnumerable interface. Review The .NET Standard Query Operators for details on how to use standard query operators. In this section we will cover two scenarios that occasionally arise when using standard query operators.
Creating multiple peer nodes in a select

Creating a single XElement with the Select standard query operator works as you would expect when doing a transform into XML, but what if you need to create multiple peer elements within the same Select? For example, let's say that we want to flatten out our contact list and list the contact information directly under the root <contacts> element rather than under individual <contact> elements, like this:

<contacts>
  <!-- contact -->
  <name>Patrick Hines</name>
  <phone type="home">206-555-0144</phone>
  <phone type="work">425-555-0145</phone>
  <address>
    <state>WA</state>
  </address>
  <!-- contact -->
  <name>Gretchen Rivas</name>
  <address>
    <state>WA</state>
  </address>
  <!-- contact -->
  <name>Scott MacDonald</name>
  <phone type="home">925-555-0134</phone>
  <phone type="mobile">425-555-0177</phone>
  <address>
    <state>CA</state>
  </address>
</contacts>

To do this, you can use this query: Notice that we used an array initializer to create the sequence of children that will be placed directly under the contacts element.

Handling Null in a transform

When you are writing a transform in XML using functional construction, you sometimes encounter situations where an element is optional, and you do not want to create some part of the target XML if the element is not there. For example, the following is a query that gets names and phone numbers, putting the phone numbers under a wrapping element <phoneNumbers>. If the contact has no phone numbers, the phoneNumbers wrapping element will exist, but there will be no phone child elements. The following example demonstrates how to resolve this situation: Functional construction has no problem with null, so using the ternary operator inline (c.Elements("phone").Any() ? ... : null) lets you suppress the phoneNumbers element if the contact has no phone numbers.
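The flattening query and the null-suppressing ternary can be sketched together as follows (sample data abbreviated to one contact):

```csharp
using System.Linq;
using System.Xml.Linq;

class FlattenDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<address><state>WA</state></address></contact></contacts>");

        // An object[] initializer emits several peer children per contact,
        // and the ternary suppresses <phoneNumbers> when no phones exist.
        XElement flat = new XElement("contacts",
            from c in contacts.Elements("contact")
            select new object[] {
                new XComment(" contact "),
                c.Element("name"),
                c.Elements("phone").Any()
                    ? new XElement("phoneNumbers", c.Elements("phone"))
                    : null,
                c.Element("address")
            });
    }
}
```

The array is enumerated by functional construction, so each of its entries becomes a direct child of the contacts element, and null entries are simply ignored.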
This same result could be achieved without using the ternary operator by calling out to a function from the query:

new XElement("contacts",
    from c in contacts.Elements("contact")
    select new XElement("contact",
        c.Element("name"),
        GetPhoneNumbers(c)
    )
);
...
static XElement GetPhoneNumbers(XElement c)
{
    if (c.Elements("phone").Any())
        return new XElement("phoneNumbers", c.Elements("phone"));
    else
        return null;
}

XML Query Extensions

XML-specific query extensions provide you with the query operations you would expect when working in an XML tree data structure. These XML-specific query extensions are analogous to the XPath axes. For example, the Elements method is equivalent to the XPath * (star) operator. The following sections describe each of the XML-specific query extensions in turn.

Elements and Content

The Elements query operator returns the child elements for each XElement in a sequence of XElements (IEnumerable<XElement>). For example, to get the child elements for every contact in the contact list, you could do the following: Note that the two Elements() methods in this example are different, although they do identical things. The first Elements is calling the XElement method Elements(), which returns an IEnumerable<XElement> containing the child elements of the single XElement contacts. The second Elements() method is defined as an extension method on IEnumerable<XElement>. It returns a sequence containing the child elements of every XElement in the list.
The results of the above query look like this:

<name>Patrick Hines</name>
<phone type="home">206-555-0144</phone>
<phone type="work">425-555-0145</phone>
<address>
  <street1>123 Main St</street1>
  <city>Mercer Island</city>
  <state>WA</state>
  <postal>68042</postal>
</address>
<netWorth>10</netWorth>
<name>Gretchen Rivas</name>
<phone type="mobile">206-232-4444</phone>
<address>
  <street1>123 Main St</street1>
  <city>Mercer Island</city>
  <state>WA</state>
  <postal>68042</postal>
</address>
<netWorth>11</netWorth>
<name>Scott MacDonald</name>
<phone type="home">925-555-0134</phone>
<phone type="mobile">425-555-0177</phone>
<address>
  <street1>345 Stewart St</street1>
  <city>Chatsworth</city>
  <state>CA</state>
  <postal>92345</postal>
</address>
<netWorth>500000</netWorth>

If you want all of the children with a particular name, you can use the Elements(XName) overload. For example: This would return:

Descendants and Ancestors

The Descendants and Ancestors query operators let you query down and up the XML tree, respectively. Descendants with no parameters gives you all the child content of an XElement and, in turn, each child's content down to the leaf nodes (the XML subtree). Optionally, you can specify an XName (Descendants(XName)) and retrieve all of the descendants with a specific name, or specify a type (Descendants<T>) and retrieve all of the descendants of a specified LINQ to XML type (for example, XComment). To get all of the phone numbers in our contact list, you could do the following: Descendants and Ancestors do not include the current node. If you use Descendants() on the root element, you will get the entire XML tree except the root element. If you want to include the current node, use DescendantsAndSelf, which lets you specify an XName or type. Ancestors and AncestorsAndSelf work similarly to Descendants and DescendantsAndSelf; they just go up the XML tree instead of down.
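A sketch of the Descendants and Ancestors axes on abbreviated sample data:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class AxesDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone></contact></contacts>");

        // Every phone anywhere below the root, regardless of depth.
        foreach (XElement phone in contacts.Descendants("phone"))
            Console.WriteLine((string)phone);

        // Walk upward from the first phone: contact, then contacts.
        XElement first = contacts.Descendants("phone").First();
        foreach (XElement ancestor in first.Ancestors())
            Console.WriteLine(ancestor.Name);
    }
}
```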
For example, you can retrieve the first phone number in the contacts XML tree, and then print out its ancestors: The results will show: If you do the same thing with AncestorsAndSelf, the output will also show phone: The results will show: The Descendants and Ancestors XML query extensions can greatly reduce the code needed to traverse an XML tree. You will find that you use them often for quick navigation in an XML tree.

Attributes

The Attributes XML query extension is called on an IEnumerable<XElement> and returns a sequence of attributes (IEnumerable<XAttribute>). Optionally, you can specify an XName to return only attributes with that name. For example, you could get a list of the distinct types of phone numbers that are in the contact list: which will return:

ElementsBeforeSelf, ElementsAfterSelf, NodesBeforeSelf, NodesAfterSelf

If you are positioned on a particular element, you sometimes want to retrieve all of the child elements or content before that particular element, or the child elements or content after that particular element. The ElementsBeforeSelf query extension returns an IEnumerable<XElement> containing the sibling elements that occur before that element. ElementsAfterSelf returns the sibling elements that occur after that element. The NodesBeforeSelf query extension returns the previous siblings of any type (for example, string, XComment, XElement, and so on). Consequently, it returns an IEnumerable<XNode>. Similarly, NodesAfterSelf returns the following siblings of any type.

Technical Note: XML query extensions

The LINQ to XML specific extension methods are found in the XElementSequence class. Just as standard query operators are generally defined as extension methods on IEnumerable<T>, the XML query operators are generally defined as extension methods on IEnumerable<XElement>. XElementSequence is just a container class to hold these extension methods. Most likely you will never call these static methods through XElementSequence—but you could.
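The distinct-phone-types query can be sketched as follows:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class AttributesAxisDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone>" +
            "<phone type='home'>925-555-0134</phone>" +
            "</contact></contacts>");

        // Attributes("type") runs over the whole phone sequence,
        // then Distinct() collapses repeated values.
        var types = contacts.Descendants("phone")
                            .Attributes("type")
                            .Select(a => (string)a)
                            .Distinct();

        foreach (string t in types)
            Console.WriteLine(t);
    }
}
```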
For example, consider the following query to get all of the phone numbers in the contact list. This could be rewritten using the static extension method Elements(this IEnumerable<XElement> source, XName name) in XElementSequence like this: You can learn more about the technical details of query extensions in the C# 3.0 Overview document (see References).

XML Transformation

Transforming XML is a very important XML usage scenario. It is so important that it is a critical feature in two key XML technologies: XQuery and XSLT. While XSLT is accessible in LINQ to XML, the "pure" LINQ way to do transformation, which works for all types of input data, is via functional construction. Most transformations to an XML document can be thought of in terms of functionally constructing your target XML. In other words, you can "begin with the end in mind," shaping your goal XML and filling in chunks of the XML by using combinations of queries and functions as needed. For example, you might want to transform the format of the contact list to a customer list. Beginning with the end in mind, the customer list needs to look something like this: Using functional construction to create this XML would look like this: To transform our contact list to this new format, you would do the following: Notice how the transformation aligns with the structure of our target document. You start by creating the outer, root element of the target XML: You will need to create a Customer XElement that corresponds to every contact in the original XML. To do this, you would retrieve all the contact elements under contacts, because you have to select what you need for each contact. The Select begins another functional construction block that will be executed for each contact. You now construct the <Customer> part of the target XML.
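The whole transformation can be sketched end-to-end as follows; the sample data is abbreviated, and this mirrors the shape described above rather than reproducing the document's exact listing:

```csharp
using System.Linq;
using System.Xml.Linq;

class TransformDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact></contacts>");

        // Shape the target document, filling each level with a query.
        XElement customers = new XElement("Customers",
            from c in contacts.Elements("contact")
            select new XElement("Customer",
                new XElement("Name", (string)c.Element("name")),
                new XElement("PhoneNumbers",
                    from ph in c.Elements("phone")
                    select new XElement("Phone",
                        ph.Attribute("type"),  // parented attribute is cloned
                        (string)ph))));
    }
}
```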
You start by creating a Customer XElement: The <PhoneNumbers> child is more complex because the phone numbers in the contact list are listed directly under the contact: To accomplish this, query the phone numbers for the contact and put them as children under the <PhoneNumbers> element: In this code, you query the contact's phone numbers, c.Elements("phone"), for each phone. You also create a new XElement called Phone with the same type attribute as the original phone, and with the same value. You will often want to simplify your transformations by having functions that do the work for portions of your transformation. For example, you could write the above transformation using more functions to break up the transformation. Whether you decide to do this is completely up to you, just as you might or might not decide to break up a large, complex function based on your own design sensibility. One approach to breaking up a complex function looks like this:

new XElement("Customers", GetCustomers(contacts));

static IEnumerable<XElement> GetCustomers(XElement contacts)
{
    return from c in contacts.Elements("contact")
           select FormatCustomer(c);
}

static XElement FormatCustomer(XElement c)
{
    return new XElement("Customer",
        new XElement("Name", (string) c.Element("name")),
        GetPhoneNumbers(c));
}

static XElement GetPhoneNumbers(XElement c)
{
    return !c.Elements("phone").Any()
        ? null
        : new XElement("PhoneNumbers",
            from ph in c.Elements("phone")
            select new XElement("Phone",
                ph.Attribute("type"),
                (string) ph)
        );
}

This example shows a relatively trivial instance of the power of transformation in .NET Framework Language-Integrated Query. With functional construction and the ability to incorporate function calls, you can create arbitrarily complex documents in a single query/transformation. You can just as easily include data from a variety of data sources, as well as XML.
Using Query Expressions with XML

There is nothing unique in the way that LINQ to XML works with query expressions, so we will not repeat information in the reference documents here. The following shows a few simple examples of using query expressions with LINQ to XML. This query retrieves all of the contacts from Washington, orders them by name, and then returns them as strings (the result of this query is IEnumerable<string>). This query retrieves the contacts from Washington that have an area code of 206, ordered by name. The result of this query is IEnumerable<XElement>. Here is another example retrieving the contacts that have a net worth greater than the average net worth.

Using XPath and XSLT with LINQ to XML

LINQ to XML supports a set of "bridge classes" that allow it to work with existing capabilities in the System.Xml namespace, including XPath and XSLT. Note that System.Xml supports only the 1.0 version of these specifications in "Orcas." Extension methods supporting XPath are enabled by referencing the System.Xml.XPath namespace. This brings into scope CreateNavigator overloads to create XPathNavigator objects, XPathEvaluate overloads to evaluate an XPath expression, and XPathSelectElement[s] overloads that work much like the SelectSingleNode and SelectNodes methods in the System.Xml DOM API. To use namespace-qualified XPath expressions, it is necessary to pass in a NamespaceResolver object, just as with DOM. For example, to display all elements with the name "phone": Likewise, XSLT is enabled by referencing the System.Xml.Xsl namespace. That allows you to create an XPathNavigator using XDocument.CreateNavigator() and pass it to the Transform() method.
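A sketch of the XPath bridge described above, using the shipped System.Xml.XPath extension methods:

```csharp
using System;
using System.Xml.Linq;
using System.Xml.XPath; // brings the bridge extension methods into scope

class XPathDemo
{
    static void Main()
    {
        XElement contacts = XElement.Parse(
            "<contacts><contact><name>Patrick Hines</name>" +
            "<phone type='home'>206-555-0144</phone>" +
            "<phone type='work'>425-555-0145</phone></contact></contacts>");

        // Select with an XPath 1.0 expression instead of query operators.
        foreach (XElement phone in contacts.XPathSelectElements("//phone"))
            Console.WriteLine(phone);
    }
}
```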
Mixing XML and other data models

Language-Integrated Query provides a consistent query experience across different data models via standard query operators and the use of lambda expressions. It also provides the ability to mix and match Language-Integrated Query enabled data models/APIs within a single query. This section provides a simple example of two common scenarios that mix relational data with XML, using the Northwind sample database.

Reading from a database to XML

The following is a simple example of reading from the Northwind database (using LINQ to SQL) to retrieve the customers from London, and then transforming them into XML: The resulting XML output is this:

<Customers>
  <Customer CustomerID="AROUT">
    <Name>Mark Harrington</Name>
    <Phone>(171) 555-0188</Phone>
  </Customer>
  <Customer CustomerID="BSBEV">
    <Name>Michelle Alexander</Name>
    <Phone>(171) 555-0112</Phone>
  </Customer>
  <Customer CustomerID="CONSH">
    <Name>Nicole Holliday</Name>
    <Phone>(171) 555-0182</Phone>
  </Customer>
  <Customer CustomerID="EASTC">
    <Name>Kim Ralls</Name>
    <Phone>(171) 555-0197</Phone>
  </Customer>
  <Customer CustomerID="NORTS">
    <Name>Scott Culp</Name>
    <Phone>(171) 555-0173</Phone>
  </Customer>
  <Customer CustomerID="SEVES">
    <Name>Deepak Kumar</Name>
    <Phone>(171) 555-0117</Phone>
  </Customer>
</Customers>

Reading XML and Updating a Database

You can also read XML and put that information into a database. For this example, assume that you are getting a set of customer updates in XML format. For simplicity, the update records contain only the phone number changes. The following is the sample XML: To accomplish this update, you query for each customerUpdate element and call the database to get the corresponding Customer record. Then, you update the Customer column with the new phone number. These are just a few examples of what you can do with Language-Integrated Query across data models.
For more examples of using LINQ to SQL, see the LINQ to SQL Overview document (see References).

Layered Technologies Over LINQ to XML

The LINQ to XML programming API will be the foundation for a variety of layered technologies. Two of these technologies are discussed below.

LINQ to XML in Visual Basic 9.0

Visual Basic 9.0 will provide deep support for LINQ to XML. Instead of using methods to construct and navigate XML, Visual Basic 9.0 uses XML literals for construction and XML axis properties for navigation. This is an important distinction and is closer to the design center of Visual Basic. XML literals allow Visual Basic developers to construct LINQ to XML objects such as XDocument and XElement directly using familiar XML syntax. Values within these objects can be created with expression evaluation and variable substitution. XML axis properties will allow developers to access XML nodes directly by special syntax that includes the XML axis and the element or attribute name, rather than indirectly using method calls. These two features will provide deep, explicit, easy to use and powerful support for XML and LINQ to XML programming in Visual Basic.

XML Literals

Let us revisit the first example in this paper (see "Functional construction"), but this time written in Visual Basic. The syntax is very similar to the existing C# syntax:

Dim contacts As XElement = _
    New XElement("contacts", _
        New XElement("contact", _
            New XElement("name", "Patrick Hines"), _
            New XElement("phone", "206-555-0144", New XAttribute("type", "home")), _
            New XElement("phone", "425-555-0145", New XAttribute("type", "work")), _
            New XElement("address", _
                New XElement("street1", "123 Main St"), _
                New XElement("city", "Mercer Island"), _
                New XElement("state", "WA"), _
                New XElement("postal", "98040"))))

The above Visual Basic statement initializes the value of the variable contacts to be a new object of type XElement using the traditional API approach.
Visual Basic allows us to go one step further than calling the LINQ to XML APIs to create new objects; it lets us write the XML inline using actual XML syntax:

Dim contacts As XElement = _
    <contacts>
      <contact>
        <name>Patrick Hines</name>
        <phone type="home">206-555-0144</phone>
        <phone type="work">425-555-0145</phone>
        <address>
          <street1>123 Main St</street1>
          <city>Mercer Island</city>
          <state>WA</state>
          <postal>98040</postal>
        </address>
      </contact>
    </contacts>

The XML structure of the result XElement is obvious, which makes the Visual Basic code easy to read and maintain. The Visual Basic compiler translates the XML literals on the right-hand side of the statement into the appropriate calls to the LINQ to XML APIs, producing the exact same code as in the first example. This ensures full interoperability between Visual Basic and other languages that use LINQ to XML. Note that we do not need line continuations in XML literals. This allows developers to copy and paste XML from/to any XML source document.

Let us take another example where we create the same contact object but use variables instead. Visual Basic allows embedding expressions in the XML literals that create the XML values at run time. For example, suppose that the contact name was stored in a variable called MyName. Now we may write as follows: People familiar with ASP.NET will immediately recognize the <%= and %> syntax. This syntax is used to bracket Visual Basic expressions whose values will become the element content. Substituting the value of a variable like MyName is only one example; the expression could just as easily have been a database lookup, an array access, or a library function call that returns a type that is valid element content, such as String, List(Of XElement), and so on. The same embedded expression syntax is used within the angle brackets of XML syntax.
In the following example, the value of the attribute "type" is set from an expression: Similarly, the name of an element can be computed from an expression: Note that it is valid to use </> to close an element. This is a very convenient feature, especially when the element name is computed.

XML Axis Properties

In addition to using XML literals for constructing XML, Visual Basic 9.0 also simplifies accessing and navigating XML structures via XML axis properties that can be used with XElement and XDocument types. That is, instead of calling explicit methods to navigate and locate elements and attributes, we can use XML axis properties as LINQ to XML object properties. For example:

- use the child axis contact.<phone> to get all "phone" elements from the contact element
- use the attribute axis phone.@type to get the string value of the "type" attribute of the phone element
- use the descendants axis contact...<city>, written literally as three dots in the source code, to get all "city" elements below the contact
- use the Value extension property to get the string value of the first object in the IEnumerable that is returned from the XML axis properties
- use the extension indexer on IEnumerable(Of T) to select the first element of the resulting sequence

We put all these innovations together to make the code simpler; for example, printing the phone's type and the contact's city looks as follows: The compiler knows to use XML axis properties over XML when the target expression is of type XElement, XDocument, or a collection of these types.
The compiler translates the XML axis properties as follows:

- the child-axis expression contact.<phone> into the raw LINQ to XML call contact.Elements("phone"), which returns the collection of all child elements named "phone" of the contact element
- the attribute-axis expression phone.@type into phone.Attributes("type").Value, which returns the string value of the attribute named "type"; if no such attribute exists, the result will be Nothing
- the descendant-axis expression contact...<city> into a combination of steps: first it calls the contact.Descendants("city") method, which returns the collection of all elements named city at any depth below contact; then it gets the first one and, if it exists, calls the Value property on that element

The equivalent code after translation into LINQ to XML calls is as below:

Putting it all together

Used together, Language-Integrated Query and the new XML features in Visual Basic 9.0 provide a simple but powerful way to perform many common XML programming tasks. Let us examine the query in section "Creating multiple peer nodes in a select" that creates a flattened contact list and removes the contact element: The following is the C# version: In Visual Basic 9.0 it can be written as follows:

Schema Aware XML Programming

Take the following LINQ to XML code sample that totals orders for a specific zip code: The generic nature of the LINQ to XML API is responsible for the various quotes (..."price" ...) and casts (...(double)i.Element("price") ...). That is, the LINQ to XML API knows nothing about the shape of the XML and the types of attributes and elements; it is not aware that there will be a "price" element under an "item" element and that its type is double. Consequently, you as a developer must know and assert that information (using quotes and casts).
Using a schema-derived object model for orders, it would be possible to write code like the following: Instead of quotes and casts you are working with types such as Orders and Item, and properties such as Price and Quantity. In addition to the static typing benefits, schema-derived object models provide various other capabilities: the classes may be extended by virtual methods, debugging may leverage type information, and the XML-isms are hidden. Hence, you as a programmer may view XML programming as a form of object-oriented (OO) programming. The schema-derived object model does not bypass LINQ to XML. Instead, the schema-derived classes use LINQ to XML underneath to store the XML data in generic XML trees. This design implies that XML fidelity is preserved by the typed programming model. Also, it implies that no draconian choice is necessary. So your application, for some part, may use the generic API, where this is more convenient, while in another part a schema-derived object model may be used. Since the schema-derived classes are "typed views" on LINQ to XML trees, both parts of the application would share the XML trees without any challenges regarding serialization and synchronization. We are investigating schema-derived object models for LINQ in an incubation project, code-named LINQ to XSD as of this writing. There was an Alpha release of LINQ to XSD in November 2006. Plans, timelines, and preview schedules for a potential product based on this incubation effort have not been determined.

February 2007 CTP Release Notes

The LINQ to XML specification is still evolving, and will continue to evolve before it is released. We release previews of this technology to get comments from potential users. The changes in this CTP reflect feedback from the previous CTPs, and subsequent releases will reflect feedback from this CTP.

Changes Since the May 2006 CTP

Bridge classes.

Event model. This allows LINQ to XML trees to be efficiently synchronized with a GUI, for example,
a Windows Presentation Foundation application.

XObject class. There is a new base class for both XElement and XAttribute, introduced largely to support the event model.

XStreamingElement class removed. This was done because there was uncertainty about the design of various features to support efficient production and consumption of large XML documents. The result of the design exercise was to confirm the original design of XStreamingElement, so it will be put back in the next release.

Non-Exhaustive List of Planned Features in Future Releases

- The IXmlSerializable interface will be supported.
- The XStreamingElement class will be added back to allow trees of enumerators to be defined that can be "lazily" serialized as XML in a deferred manner.
- The IXmlSerializable interface will be implemented to facilitate the use of LINQ to XML in conjunction with the Microsoft Web services APIs.

References

These documents can be found online at The LINQ Project website:

- The LINQ Project Overview, .NET Language-Integrated Query
- The .NET Standard Query Operators
- C# Version 3.0 Specification
- Overview of Visual Basic 9.0
- LINQ to SQL: .NET Language-Integrated Query for Relational Data

Other documents, samples, and tutorials are also available.
In the same week .NET 3.5 is released I get round to starting an article on .NET 3.0! So it'll only be another year or so before I manage to get to 3.5. In fact it will be sooner, but back to 3.0 for the moment. Most of the classes in the .NET Framework version 2.0 are unchanged in 3.0/3.5. The key differences for .NET 3.0 are new libraries that offer completely new sets of functionality. There are four main components:

Windows Presentation Foundation (WPF) is an entirely new UI technology based on the DirectX engine which facilitates the creation of vector-based user interfaces rather than the conventional Windows bitmap-based UI. WPF, it is claimed, is the platform for the next generation of interfaces, facilitating UIs with built-in capabilities to utilise this vector basis to deliver scaling, animation, media, styling and 3D visualisation, and thereby deliver true business benefits.

Windows Workflow Foundation (WF) is an engine for building applications with workflow. WF serves as the kernel of workflow, handling threading, persistence and other plumbing tasks. WF brings a consistent, component-oriented philosophy to workflow and is already at the core of Microsoft's business in products like SharePoint.

Windows Communication Foundation (WCF) is a unified framework for machine-to-machine and process-to-process communication. WCF brings together the capabilities of various technologies into a common, integrated programming model. These technologies include Web Services (traditional and the latest standards-compliant versions thereof), enterprise services, remoting, and Microsoft Message Queuing (MSMQ). WCF is designed to be the principal Microsoft platform for systems utilising the increasingly popular Service Oriented Architecture (SOA).
Windows CardSpace is the name for a new technology in the .NET Framework 3.0 that simplifies and improves the safety of accessing resources and sharing personal information on the Internet. In this article I'll look in a little more theoretical detail at the first three of these four new components. In later articles I hope to dig a little deeper into the practicalities of using these new technologies. The main reference for this article is Professional VB 2005 with .NET 3.0 by Evjen et al.

Windows Presentation Foundation

WPF is based more on the HTML declarative model than the Windows Forms model (for example). The idea is that you can use a declarative model to design your UI and then use that model for a desktop or web version of your application. WPF uses XML to declare the UI elements, relying on a standard known as the eXtensible Application Markup Language (XAML). Existing web and Windows Forms controls are based on raster graphics: collections of points on a surface that represent an image. The alternative approach is vector graphics, where a vector is a line with a point of origin that continues forward in space from that point of origin. Rather than being based on a collection of points, vector graphics are based on a series of vectors. Vector graphics allow you to create great-looking UIs, but are more processor hungry, as we've seen with the increasing system requirements (particularly as regards graphics cards) of the Vista operating system with which the WPF libraries were released. WPF is based on Microsoft's DirectX technologies.

What do you need to build WPF Applications?

WPF was released with Windows Vista, so if this is your OS you are ready to go with regard to the Framework. If not, you'll need the .NET 3 (or 3.5) runtime package for starters. Everybody will then need the Windows Vista SDK to start building WPF applications. Unfortunately it doesn't go as far as completely upgrading VS.NET 2005 to enable you to build WPF and other .NET 3.0 applications.
Until you have a copy of VS2008 (just released in RTM form at the time of writing, so you may have a copy already) developers will need to get their hands a little dirty on the command line. Saying that, there are Visual Studio extensions for WPF/WCF, but these are Community Technology Previews and will not be developed beyond that level.

XAML and WPF

XAML is a markup-based protocol. Similar to SOAP and several other XML-based formats, the XAML specification defines a potentially open standard for describing user interface elements. WPF is Microsoft's implementation of this standard across two XML namespaces, one focussed on the definition of XAML and the second focussed on WPF's implementation of the XAML specification (the WPF extensions). It is important to note that the XAML namespace can be, and is, used for things other than WPF, e.g. WF. XAML is defined as a language consisting of a collection of elements, attributes and related objects. These objects are referenced from the XAML namespace, which by convention prefixes each class with an "x:". WPF extends and maps these declarative structures back to .NET, using specific classes that are part of the .NET 3 System.Windows namespace. There are three categories of XML statement to be found within the XAML namespace: attribute, markup extension and XAML directive. Attributes I'm sure you are already familiar with: these are the properties that can be associated with an XML node. Markup extensions correspond to the 'eXtensible' of the XAML acronym. A markup extension can be used to create an XML node or a collection of XML attributes. XAML directives allow placement of procedural code or data within XAML code. Everything else you see in the XAML file belongs to the second namespace, which defines the WPF extensions. We'll very briefly look at some of the core elements of this namespace. First up is the Application object, which is required by all WPF applications and represents the application to the CLR.
It is the primary process and is an instantiation of a class in the System.Windows namespace. In fact VB provides the underlying application object, so you may not even need to deal with this. Behind the scenes, however, the Application definition references Window and Page objects. In your Window definition you would define the interface of your application. As a basic example you might give your Window a title, define its size and define a grid so you could reference specific points in the window, perhaps display a button you could code against. Then you need to write the partial class to react to the button click event raised.

Basic WPF example:

<Application xmlns="" StartupUri="WPFWindow.xaml" />

<Window xmlns="" xmlns:x="">
    <Grid>
        <Button Margin="100,100,100,100" Name="btnHello" Click="btnHelloClick">Press Me</Button>
    </Grid>
</Window>

Partial Public Class MainWindow
    Inherits System.Windows.Window

    Public Sub New()
        InitializeComponent()
    End Sub

    Public Sub btnHelloClick(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs)
        btnHello.Content = "Hello!"
        System.Windows.MessageBox.Show("Hello!")
    End Sub
End Class

Controls, Resources, Styles and Layout

There's a whole new controls library to play with. We've seen the Window. We could also have used a Page if we were targeting a web application. There are also panels for layout, of which the aforementioned Grid is one implementation, but there are more, as layout is key to presentational power, e.g. StackPanel, Canvas, DockPanel, ToolBar and Tab-related controls. You also have the standard UI controls and a host more for you to explore, including data-bound controls. Before long you'll want to include resources in your application, e.g. strings, images or a graphic element. This can be achieved by adding a resource to your application definition, which is then available for use by other objects.
Styles are a sub-element of a resource definition and allow you to apply the resource to an object type, or a sub-set within an object type, as opposed to having to apply the resource to each object explicitly.

Windows Workflow Foundation (WF)

Workflow refers to the steps involved in an application. WF enables developers to graphically build application workflow while keeping it logically distinct from the main application code and providing several of the common services required. Steps in an application workflow may be performed by a human or a computer and may need to integrate with an external application. The workflow files in WF are XML files written in a version of XAML, the same as that used to describe WPF files. These files describe the actions to perform within the workflow and the relationship between those actions. You could use a text editor, but VS enables developers to visually design the workflow, creating the XAML for you. A workflow is composed of a number of rule definitions, with each definition including activities (workflow steps), conditions and expressions. Activities are run if conditions are met, with expressions defining the tests of the conditions. Six main components make up any WF application:

1. Host process – this is the executable that will host the workflow, typically a Windows Forms, ASP.NET or Windows service application. The workflow is hosted and runs within this process.
2. WF runtime services, responsible for:
   - loading, scheduling and actual workflow execution
   - persistence – saving the state of the workflow as required
   - tracking – enabling (health) monitoring of the state of the workflow
3. Workflow runtime engine – responsible for executing each workflow instance
4. Workflow – the actual workflow, which consists of one or more activities and may consist of workflow markup and/or code. Multiple instances of a workflow may be active at any given time in an application.
5.
Activity library – a collection of standard actions used to create workflows.
6. Custom activities – mostly achieved through attributes and inheritance.

WF supports two main styles of workflows: sequential and state machine. Sequential workflows are the classic flowchart style of process (a series of steps with possible branching). State machine workflows are less linear. A good way to identify a candidate for a state machine workflow is determining whether the process is defined in terms of modes/states. The standard WF activities can be divided into five major categories:

1. Communication with external code
2. Flow control
3. Action
4. Scope – group activities into a logical element, typically to participate in transactions
5. State – used exclusively in state machine workflows (one of the machine's states)

WF integrates well into applications, including Windows Forms and ASP.NET applications. It provides a means to modularise the workflow from those applications and to graphically design the workflow in VS.NET for agreement with project stakeholders, and the technology permits the workflow to be changed relatively easily without requiring changes to the core application.

Windows Communication Foundation (WCF)

This is really all about Service Oriented Architecture (SOA): a non-proprietary, interoperable, distributed, message-based architecture. Some principles of SOA are:

• Entity boundaries are explicit, with underlying functionality accessed via the exposed interface.
• Services are autonomous – hence published interfaces must remain unchanged.
• Services are based on contracts, schemas and policies – all services require a contract regarding what is required to consume items from the interface (usually done through a WSDL document). Along with a contract, schemas are required to define the items passed as parameters or delivered through the service (using XSD schemas). Finally, policies define any capabilities or requirements of the service.
• Service compatibility is based upon policy – this enables services to declare policies, determined at runtime, that are required to consume the service. These policies are usually expressed through WS-Policy.

WCF is a framework that works on SOA principles and makes them relatively simple to implement. It is also a means to build distributed applications in a Microsoft environment, though the consumers of the application certainly don't need to be Microsoft clients. WCF allows you to build all kinds of distributed applications, including 'traditional' Web Services, so that your services support SOAP and will therefore be compatible with older .NET (and other) technologies. WCF is not just about pure SOAP over the wire: you can work with an Infoset, and create a binary representation of your SOAP message that can then be sent along with your choice of protocol. This is for those who are particularly concerned about performance and have traditionally turned to .NET Remoting. WCF can also work with a message through its lifecycle, meaning that WCF can deal with transactions like those of Enterprise Services. Along with distributed transactions, WCF can deal with the queuing of messages, and it allows for the intermittently connected nature that an application might have to deal with. WCF is all about messages, but not just Web Service-like messages. For example, WCF can be used to communicate messages to components contained on the same machine on which the WCF service is running. Hence you can use WCF for communication between components contained in different processes on the same machine, as well as with other components on another machine, whether that machine is a Microsoft OS-based machine or not. WCF also enables you to develop a service once and then expose that service via multiple endpoints (which can use entirely different protocols) via simple configuration changes.
WS-*

WCF facilitates the utilisation of a framework of WS-* specifications which can be enabled to allow for defined ways of dealing with security, reliability and transactions (with incomplete notes):

Security
• WS-Security – supports credential exchange, message integrity and message confidentiality
• WS-SecureConversation
• WS-Trust

Reliability
• WS-ReliableMessaging

Transactions
• WS-AtomicTransaction
• WS-Coordination

Messaging
• SOAP
• WS-Addressing
• MTOM – Message Transmission Optimization Mechanism, the replacement for DIME (Direct Internet Message Encapsulation) as a means to transmit binary objects along with a SOAP message

Metadata – allows definition of your interface
• WSDL
• WS-Policy – provides consumers with a specification of what is required to consume the service
• WS-MetadataExchange

These use the SOAP header, enabling messages to be self-contained and not rely on the transport protocol for anything but transmission of the message itself. WCF can therefore make use of these specifications if the developer wishes. If currently working in a .NET 2.0 environment you need to install .NET 3, and to build WCF services directly in VS2005 you need to install the VS2005 extensions for .NET 3 (WCF and WPF). If you are using a VS2008 Beta, or indeed the full product by the time you read this, you'll be ready to go already. You'll then be able to add a WCF project.

WCF Service Constituents

1. The service
2. One or more endpoints
3. The host environment

A service is a .NET class which contains methods exposed through the WCF service. A service can have one or more endpoints which allow a client to communicate with the service. Endpoints are made up of three parts: A – address (where), B – binding (how), C – contract (what). The host environment constitutes an application domain and process of any form suggested by requirements (console, Windows Forms, WPF, Windows services, IIS).
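The address/binding/contract split shows up directly in a service's configuration file. The following is a hypothetical app.config fragment; the service name Example.OrderService, the contract Example.IOrderService and the address are invented purely for illustration:

```xml
<!-- Hypothetical configuration fragment; names and address are illustrative only -->
<system.serviceModel>
  <services>
    <service name="Example.OrderService">
      <!-- A: address  - where the endpoint listens -->
      <!-- B: binding  - how messages are exchanged (basicHttpBinding = interoperable SOAP over HTTP) -->
      <!-- C: contract - what operations the service exposes -->
      <endpoint address="http://localhost:8000/orders"
                binding="basicHttpBinding"
                contract="Example.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

Changing just the binding attribute (for example to netTcpBinding) would expose the same service over a different protocol without touching the service code, which is the configuration-driven multiple-endpoint capability described above.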
To create a WCF service you need to:

• create a service contract (the class with the methods you want to expose), and a
• data contract (a class that specifies the structure you want to expose from the interface)

To use the service you need to create the WSDL document that will be used by the client, then create the client itself.

Summary

I hope this whistlestop tour of three of the four main new elements in .NET 3.0 has been useful. Hopefully there will be more practical articles to follow on DotNetJohn to build on this theoretical information.
open – start a program on a new virtual terminal (VT).

The related C library interface for opening files is declared as follows:

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
int creat(const char *pathname, mode_t mode);

A semantically similar (but deprecated) interface for block devices is described in raw(8). Some of these optional flags can be altered using fcntl(2) after the file has been opened. creat() is equivalent to open() with flags equal to O_CREAT|O_WRONLY|O_TRUNC. The O_CLOEXEC flag is not specified in POSIX.1-2001, but is specified in POSIX.1-2008. O_DIRECT is not specified in POSIX; one has to define _GNU_SOURCE to get its definition. Linux reserves the non-standard access mode 3 (binary 11) in flags to mean: check for read and write permission on the file and return a descriptor that can't be used for reading or writing.

In Perl, open(), readpipe() (aka qx//) and similar operators found within the lexical scope of the open pragma will use the declared defaults. Three-argument opens are not affected by this pragma, since there you (can) explicitly specify the layers and are supposed to know what you are doing.

In Tcl, note that if you are going to be reading or writing binary data from the channel created by the open command, you should use the fconfigure command to change the -translation option of the channel to binary before transferring any binary data. This is in contrast to the "b" character passed as part of the equivalent of the access parameter to some versions of the C library fopen() function. Opening a command pipeline is not supported under Macintosh, since applications do not support the concept of standard input or output.
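To make the open(2) and creat() semantics above concrete, here is a minimal C sketch. The helper name and the file path a caller would pass are arbitrary examples, not part of any API:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch of typical open(2) usage: create (or truncate) a file, write
 * msg to it, then reopen it read-only and read the contents back into
 * buf.  Returns the number of bytes read, or -1 on error. */
static ssize_t write_then_read(const char *path, const char *msg,
                               char *buf, size_t bufsize)
{
    /* creat(path, mode) is equivalent to
     * open(path, O_WRONLY | O_CREAT | O_TRUNC, mode). */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;
    if (write(fd, msg, strlen(msg)) == -1) {
        close(fd);
        return -1;
    }
    close(fd);

    /* The two-argument form is fine when O_CREAT is not used. */
    fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    ssize_t n = read(fd, buf, bufsize - 1);
    close(fd);
    if (n >= 0)
        buf[n] = '\0';  /* NUL-terminate so callers can treat buf as a string */
    return n;
}
```

The mode argument (0644 here) only matters when the file is being created; it is ignored for the plain read-only open.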
Hello and thank you for using Just Answer,

If you pay someone's medical bills for them, that would normally only be allowed as a deduction if the person was your dependent. There is a provision that if the only reason you cannot claim them as a dependent is that they make more than the personal exemption amount, then you are allowed to use the medical bills as a deduction. Unfortunately, in your son's case, if they do not live with you then the exception would not apply. You would have other reasons why you could not claim them, such as that you are not supplying more than half of their support, and that they are married and have income to report.

You can include medical expenses you paid for an individual that would have been your dependent except that:

1. He or she received gross income of $3,800 or more in 2012,
2. He or she filed a joint return for 2012, or
3. You, or your spouse if filing jointly, could be claimed as a dependent on someone else's 2012 return.

If the above are the only reasons you could not claim your son as a dependent, then you would be allowed to use the medical expenses you pay on his behalf on your Schedule A. The amount for #1 above increases to $3,900 for 2013.

My goal is to give you excellent service. If you are satisfied, please rate me. If you have follow-up questions on this same topic, use the reply box below. To start a new conversation with me on a new topic, request me again.

So you are saying this: he does not live with me, he will earn more than $3,900 in 2013, and he and his wife will file a joint return in 2013 as they did in 2012, and I can claim the medical expenses I pay for him in 2013 on my 2013 Schedule A. Is that correct? Thanks!!
I am saying that if you could claim him as a dependent except that he earned too much and he filed a joint return (generally just to get a refund because no tax liability was there), then you could use the medical expenses; but if they do not live with you, then you cannot. You see, sometimes a relative or child lives with the taxpayer and still cannot be a dependent because they earn over the limit, yet the taxpayer can still use the medical expenses they paid. So if they do not live with you, then you cannot. The person for whom the medical is owed (I am guessing your daughter-in-law, if this is for the new baby, and congrats on that) would have needed to meet all the other requirements but earned too much or filed a joint return. You see, the person you pay the medical for needs to meet all the other requirements for a dependent and miss only on the earnings or the joint-return test. I understand your wanting to assist them, and it would be great if you could use the medical expenses on your Schedule A.
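The rule the expert is describing amounts to a small decision procedure, sketched below in Python. This is an illustration only, not tax advice; the function name and parameters are mine, and the dollar threshold is the 2012 figure quoted above.

```python
def medical_deduction_allowed(meets_other_dependency_tests,
                              gross_income=0, filed_joint_return=False):
    """Roughly: may you deduct medical bills you paid for this person?

    The person must pass every dependency test EXCEPT possibly two,
    which are explicitly excused for medical-expense purposes:
      - gross income over the exemption amount ($3,800 for 2012), and
      - having filed a joint return.
    """
    # gross_income and filed_joint_return are deliberately ignored:
    # failing only those two tests does not disqualify the deduction.
    return bool(meets_other_dependency_tests)

# The son in the question fails the residency/support tests themselves:
print(medical_deduction_allowed(False, gross_income=40000, filed_joint_return=True))  # → False

# Someone who only earned too much would still qualify:
print(medical_deduction_allowed(True, gross_income=5000))  # → True
```

The point of the sketch is that the two excused tests never enter the decision; everything hinges on the remaining dependency requirements.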
http://www.justanswer.com/tax/7z2e1-son-wife-just-2nd-child-several.html
I came across this code while looking for exam prep questions. I don't understand what is invoking the superclass constructor in this code. The output is: feline cougar c c

public class Feline {
    public String type = "f ";

    public Feline() {
        System.out.print("feline ");
    }
}

public class Cougar extends Feline {
    public Cougar() {
        System.out.print("cougar ");
    }

    public static void main(String[] args) {
        new Cougar().go();
    }

    void go() {
        type = "c ";
        System.out.print(this.type + super.type);
    }
}

When you have a class that extends some other class, e.g. Cougar extends Feline, there must be a call to a superclass constructor at the top of the constructor. When you don't write one, Java assumes you meant to call the default superclass constructor. So your constructor:

public Cougar() {
    System.out.print("cougar ");
}

is actually interpreted as:

public Cougar() {
    super();
    System.out.print("cougar ");
}

Hence the call to the superclass constructor. It's interesting to note that because all classes are extensions of the class Object, there is a call to a superclass constructor at the beginning of every constructor that you'll ever write: either an explicit one you've included, with or without arguments, or the default superclass constructor if you do not specify one.
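For contrast, here is a hypothetical Python translation of the same program. Python never inserts the superclass-constructor call for you, so the call that Java adds implicitly has to be written out, which makes the chaining visible:

```python
class Feline:
    def __init__(self):
        self.type = "f "
        print("feline", end=" ")

class Cougar(Feline):
    def __init__(self):
        # Java inserts super() here implicitly; Python requires it explicitly.
        super().__init__()
        print("cougar", end=" ")

    def go(self):
        self.type = "c "
        # Like Java's this.type + super.type: fields are not overridden,
        # so both names refer to the same attribute and each prints "c ".
        print(self.type + self.type)

Cougar().go()  # prints: feline cougar c c
```

Removing the super().__init__() line in Python would silently skip the "feline" output, whereas in Java the compiler always chains to some superclass constructor.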
http://databasefaq.com/index.php/answer/181748/java-inheritance-constructor-superclass-super-what-is-invoking-the-super-class-constructor-here
GameFromScratch.com

Gamefromscratch has a long-running series taking an in-depth look at various game engines available. Today we are going to look at the Godot game engine, an open source C++ based game engine with a complete Unity-esque world editor. Godot runs on Windows, Linux and Mac platforms and can target all of those, plus iOS, Android, PS3 and PS Vita, and NaCl, with HTML5 and Windows Phone both being in development. In Godot's own words:

Godot is an advanced, feature packed, multi-platform 2D and 3D game engine. It provides a huge set of common tools, so you can just focus on making your game without reinventing the wheel. Jump right in!

There is a video version of this post available here (and embedded at the bottom of the page) which goes into a bit more detail.

You get started with Godot by downloading the executable from the downloads page. You probably expect this to be an installer, but you would be wrong. Instead, when you run it you get the Project Manager:

Here you can either create a new project or load an existing one. Finally, once you've selected or created one, you are brought to the editor:

This is where the magic happens. The above screenshot is of the Platformer demo being edited. The left-hand window is where your scene is composed. As you can see from the four tabs above, you can work in 2D, 3D or script editing, or browse the built-in help in this window. The top-left icons are actually menus, and a lot of the functionality and tools are tucked away behind Scene and Import. The top-right dialog is your scene graph:

Here you can see (and create/instance) the items that make up your world. Simply click the New icon to add new items to the world, or the + icon to add a new instance instead. The other icons are for wiring scripts and signals (events) up to the objects in your world. Below the scene graph, you've got the Inspector window, which enables you to set properties of objects in your scene.
As you can see from the screenshot, Godot takes a very modular/component approach, which is quite popular these days. This enables you to visually inspect and edit properties of your game objects. It also reveals one of the first flaws of Godot: some of the controls are just awkward to use. For example, if you don't hit enter after editing a text property, the values are lost. Additionally, modifying numeric fields can be a pain in the ass at times. It's all stuff that can be fixed with time (Godot was only open sourced about a year ago, after all), but for now it's clunky and somewhat annoying.

So that's the visual editor... what about code? Well, the majority of your programming is going to be done in GDScript, using the included editor. GDScript is a Python-esque proprietary scripting language. I don't generally like this approach, as it makes all the existing tools and editors worthless and loses the years of bug fixing, performance improvements, etc., while forcing a learning curve on everyone that wants to use the engine. That said, the idea behind a scripting language is that it should be easy to use and learn. The authors explain their decision in the FAQ. That covers the why, anyway; now let's look at the language itself. As I said earlier, it's a Python-like (whitespace-based) scripting language.
Let's look at an example from the included demos:

extends RigidBody2D

const STATE_WALKING = 0
const STATE_DYING = 1

var state = STATE_WALKING
var direction = -1
var anim = ""
var rc_left = null
var rc_right = null
var WALK_SPEED = 50
var bullet_class = preload("res://bullet.gd")

func _die():
    queue_free()

func _pre_explode():
    #stay there
    clear_shapes()
    set_mode(MODE_STATIC)
    get_node("sound").play("explode")

func _integrate_forces(s):
    var lv = s.get_linear_velocity()
    var new_anim = anim

    if (state == STATE_DYING):
        new_anim = "explode"
    elif (state == STATE_WALKING):
        new_anim = "walk"

        var wall_side = 0.0

        for i in range(s.get_contact_count()):
            var cc = s.get_contact_collider_object(i)
            var dp = s.get_contact_local_normal(i)
            if (cc):
                if (cc extends bullet_class and not cc.disabled):
                    set_mode(MODE_RIGID)
                    state = STATE_DYING
                    s.set_angular_velocity(sign(dp.x)*33.0)
                    set_friction(true)
                    cc.disable()
                    get_node("sound").play("hit")
                    break
            if (dp.x > 0.9):
                wall_side = 1.0
            elif (dp.x < -0.9):
                wall_side = -1.0

        if (wall_side != 0 and wall_side != direction):
            direction = -direction
            get_node("sprite").set_scale( Vector2(-direction, 1) )
        if (direction < 0 and not rc_left.is_colliding() and rc_right.is_colliding()):
            direction = -direction
            get_node("sprite").set_scale( Vector2(-direction, 1) )
        elif (direction > 0 and not rc_right.is_colliding() and rc_left.is_colliding()):
            direction = -direction
            get_node("sprite").set_scale( Vector2(-direction, 1) )

        lv.x = direction * WALK_SPEED

    if (anim != new_anim):
        anim = new_anim
        get_node("anim").play(anim)

    s.set_linear_velocity(lv)

func _ready():
    rc_left = get_node("raycast_left")
    rc_right = get_node("raycast_right")

As someone raised on curly braces and semicolons it can take a bit of time to break muscle memory, but for the most part the language is pretty intuitive and easy to use. You can see a quick language primer here.
Remember earlier I said the code editor was built into the engine; let's take a look at that now. As you can see, the editor provides most of the common features you would expect from an IDE, a personal favorite being auto-completion. Features like code intention, find and replace, and auto-indentation are all available, things often missing from built-in editors. As with the Inspector window, though, there can be some annoyances, like the autocomplete window appearing as you are trying to cursor around your code, requiring you to hit Escape to dismiss it. For the most part, though, the editing experience is solid, and hopefully some of the warts disappear with time.

Now perhaps the biggest deal of all: debugging! This is where so many home-made scripting languages really suck: the lack of debugging. Not Godot. You can set breakpoints, step into or over your running code, and most importantly inspect variable values and stack frames. Once again, it's the debugging experience that often makes working in scripting languages a pain in the ass, so this is nice to see!

Of course, one of the big appeals of Godot is going to be the C++ support, so where exactly does that come in? Well, first and most obviously, Godot is written in C++ and fully open source under the MIT license (a very, very liberal license), so you can of course do whatever you want. I pulled the source from GitHub and built without issue in Visual Studio 2013 in just a few minutes. The build process, however, is based around SCons, which means you have to have Python 2.7.x and SCons installed and configured, but neither is a big deal.

What about extending Godot? That is what the majority of people will want to do. Well, fortunately it's quite easy to create C++ extensions, although again you need SCons and have to do a bit of configuration. But once you've done it once, assuming you've got a properly configured development environment, the process should be quick and mostly painless.
From the wiki page, here is a sample C++ module:

Sumator.h

/* sumator.h */
#ifndef SUMATOR_H
#define SUMATOR_H

#include "reference.h"

class Sumator : public Reference {
    OBJ_TYPE(Sumator, Reference);

    int count;

protected:
    static void _bind_methods();

public:
    void add(int value);
    void reset();
    int get_total() const;

    Sumator();
};

#endif

Sumator.cpp

/* sumator.cpp */
#include "sumator.h"

void Sumator::add(int value) {
    count += value;
}

void Sumator::reset() {
    count = 0;
}

int Sumator::get_total() const {
    return count;
}

void Sumator::_bind_methods() {
    ObjectTypeDB::bind_method("add", &Sumator::add);
    ObjectTypeDB::bind_method("reset", &Sumator::reset);
    ObjectTypeDB::bind_method("get_total", &Sumator::get_total);
}

Sumator::Sumator() {
    count = 0;
}

Then in your script you can use it like:

var s = Sumator.new()
s.add(10)
s.add(20)
s.add(30)
print( s.get_total() )
s.reset()

If you inherit from Node2D or a derived class, it will be available in the editor. You can expose properties to the Inspector and otherwise treat your module like any other Node available. Remember though, for productivity's sake, you should really only be dropping to C++ as a last resort. It is, however, quite simple to do.

Nodes, Nodes and more Nodes

At the heart of Godot, the world is essentially a tree of Nodes, so I suppose it's worthwhile looking at some of the nodes available and how they work. From the scene graph window, you add a new node to the world using this icon: Next it's a matter of picking which type, which could of course include modules you created yourself in C++. As you can see from the small portion I've shown above, there are a LOT of built-in nodes already available: from UI controls to physics controllers, pathfinding and AI tools, bounding containers, video players and more. Essentially you create your game by composing scenes, which are in turn composed of nodes. Once you've created a node you can then script it.
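To see what the GDScript usage snippet actually computes, here is a plain-Python mirror of the Sumator module (hypothetical; this is just the accumulator logic, not Godot code):

```python
class Sumator:
    """Plain-Python mirror of the C++ Sumator example above."""

    def __init__(self):
        self.count = 0

    def add(self, value):
        self.count += value

    def reset(self):
        self.count = 0

    def get_total(self):
        return self.count

s = Sumator()
s.add(10)
s.add(20)
s.add(30)
print(s.get_total())  # → 60
s.reset()
print(s.get_total())  # → 0
```

The C++ version adds nothing conceptually; the point of the binding machinery is purely to expose these three methods to GDScript at native speed.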
Simply select your node in the scene graph and then click the script icon. A New Script dialog will be displayed, and your script will be created. As you can see, the script itself inherits from the node type you selected. You can now script any and all logic attached to this particular node. Essentially, your game logic is implemented here. In addition to wiring up scripts to game nodes, you can also wire up Signals. Signals can be thought of as incoming events.

So, what about help, then? Well, this is both a strength and a weakness of Godot. As you saw earlier, there is actually an integrated help tab. You can look things up any time, or press SHIFT + F1 while coding to get context-sensitive help. It's invaluable, but unfortunately the resulting help is often just a listing of methods/parameters and nothing more. I often instead just keep the class reference from the wiki open in a browser window. For an open source project, the reference is pretty good, but it could still certainly use a lot more love. Next are the tutorials; there's a fairly good selection available on the wiki, but I think a good in-depth beginner series is really needed. To someone that just downloaded Godot and is thinking "now what?"... that presents a challenge. That said, I will probably be creating one, so this shouldn't be an issue in time! Finally, and perhaps most valuably, there are several demos included.

So, once you've finished your game, how do you publish it to the various available platforms? Well, for starters you click the Export button. You also need an export template, which is available from the Godot downloads site. Additionally, you need to configure the toolchain for each platform, such as the Android SDK for Android or XCode for iOS (and a Mac! You can't end-run around the requirement of needing a Mac for iOS development, unfortunately). But the process is spelled out for you and they make it pretty easy.
I haven't even really touched upon the plethora of tools tucked away in the editor. Need to import a font? There's a tool for that. There's an improved Blender COLLADA plugin, which just worked when I tried it. There's a tool for mapping controls to commands, and there are tools for compressing and modifying textures on import, for modifying incoming 3D animations, etc. Basically, if you need to do it, there is probably a tool for it shoved in there somewhere. On the other hand, the process itself can be pretty daunting. Figuring out how to get input, a script's life cycle, etc. isn't immediately obvious. It really is an engine you have to sit down with and just play around with. Sometimes you will hit a wall and it can be pretty damned frustrating. However, it's also a hell of a lot of fun, and once it starts to click it is a great engine to work in. I definitely recommend you check out Godot, especially if you are looking for an open source Unity-like experience. That said, calling this a Unity clone would certainly be doing it a disservice; Godot is a great little game engine in its own right that deserves much more exposure.

Programming 2D, 3D, Engine, Review
http://www.gamefromscratch.com/post/2015/01/04/A-Closer-Look-at-the-Godot-Game-Engine.aspx
I am new to JavaFX and I am creating an application; I was in need of something similar to the JDialog that was offered with Swing components. I solved that by creating a new stage, but now I need a way to close the new stage from within itself by clicking a button. (Yes, the X button works too, but I wanted it on a button as well.)

To describe the situation: I have a main class from which I create the main stage with a scene. I use FXML for that.

public void start(Stage stage) throws Exception {
    Parent root = FXMLLoader.load(getClass().getResource("Builder.fxml"));
    stage.setTitle("Ring of Power - Builder");
    stage.setScene(new Scene(root));
    stage.setMinHeight(600.0);
    stage.setMinWidth(800.0);
    stage.setHeight(600);
    stage.setWidth(800);
    stage.centerOnScreen();
    stage.show();
}

Now in the main window that appears I have all the control items and menus and stuff, made through FXML and an appropriate control class. That's the part where I decided to include the About info in the Help menu. So I have an event going on when the menu Help - About is activated, like this:

@FXML
private void menuHelpAbout(ActionEvent event) throws IOException {
    Parent root2 = FXMLLoader.load(getClass().getResource("AboutBox.fxml"));
    Stage aboutBox = new Stage();
    aboutBox.setScene(new Scene(root2));
    aboutBox.centerOnScreen();
    aboutBox.setTitle("About Box");
    aboutBox.setResizable(false);
    aboutBox.initModality(Modality.APPLICATION_MODAL);
    aboutBox.show();
}

As seen, the About Box window is created via FXML with a control class again. There is a picture, a text area and a button, and I want that button to close the new stage (that is, the aboutBox) from within the AboutBox.java class, so to speak. The only way I found to do this was to define a public static Stage aboutBox; inside the Builder.java class and reference that one from within AboutBox.java in the method that handles the action event on the closing button. But somehow it doesn't feel exactly clean and right.
Is there any better way? Thanks in advance for your advice.

You can derive the stage to be closed from the event passed to the event handler.

new EventHandler<ActionEvent>() {
    @Override
    public void handle(ActionEvent actionEvent) {
        // take some action
        ...
        // close the dialog.
        Node source = (Node) actionEvent.getSource();
        Stage stage = (Stage) source.getScene().getWindow();
        stage.close();
    }
}

In JavaFX 2.1 you have a few choices: the way shown in jewelsea's answer, the way you have done it already, or a modified version of it like:

public class AboutBox extends Stage {

    public AboutBox() throws Exception {
        initModality(Modality.APPLICATION_MODAL);
        Button btn = new Button("Close");
        btn.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent arg0) {
                close();
            }
        });

        // Load content via
        // EITHER
        Parent root = FXMLLoader.load(getClass().getResource("AboutPage.fxml"));
        setScene(new Scene(VBoxBuilder.create().children(root, btn).build()));
        // OR, if your about page is not so complex (no need for FXML or its controller class):
        Scene aboutScene = new Scene(VBoxBuilder.create()
                .children(new Text("About me"), btn)
                .alignment(Pos.CENTER)
                .padding(new Insets(10))
                .build());
        setScene(aboutScene);
    }
}

with usage like new AboutBox().show(); in the menu item's action event handler.
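The trick in the accepted answer is that the event source carries enough context to walk up to its window. That lookup chain can be mimicked in plain Python (hypothetical classes, purely to show the idea behind source.getScene().getWindow().close()):

```python
class Window:
    """Stand-in for a JavaFX Stage/Window."""
    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

class Scene:
    """A scene knows which window hosts it."""
    def __init__(self, window):
        self._window = window

    def get_window(self):
        return self._window

class Node:
    """A node (e.g. a button) knows which scene it belongs to."""
    def __init__(self, scene):
        self._scene = scene

    def get_scene(self):
        return self._scene

def on_button_press(source):
    # Walk from the event source up to its window and close it,
    # just like the Java handler above does.
    source.get_scene().get_window().close()

win = Window()
btn = Node(Scene(win))
on_button_press(btn)
print(win.open)  # → False
```

Because the handler only needs the event source, no static reference to the stage is required, which is exactly what makes this cleaner than the public static Stage workaround.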
https://javafxpedia.com/en/knowledge-base/11468800/javafx2---closing-a-stage--substage--from-within-itself
Contents

Among my favorite art styles are pointillism and cubism. By no means am I an art critic or expert, but I think my attraction to these styles stems from the way they (respectively) deconstruct the colors and shapes of their subjects. As a personal exercise, I wanted to write a program to take images and reproduce them in a pseudo-pointillist style. Prior to this, I hadn't had any experience programmatically manipulating images.

The "problem" I set for myself: Given an image, reproduce it using points in such a way that it is still recognizable as its source image.

Pseudo-pointillist Mona Lisa

Approach

In coming up with the solution to this challenge, I had several things in mind that I wanted to use:

- D3 - I use D3 for creating graphs and other data visualizations. I ultimately wanted to turn this into a small web application, so using D3 to render the resulting "pointillist" image seemed like a logical approach.
- Python - The packages available for Python, pandas and flask among them, do a lot for data analysis and web development respectively, so using Python to handle the pixel/color data for the images and as a backend for the web application made perfect sense.

At this point, I needed to find a way to extract the pixel data from the source images. Google and stackoverflow pointed me to Pillow.1

# Open the image and return an image object.
In[0]: im = Image.open("./example_picture.jpg")

# List comprehension that calls im.getpixel()
# -- the method that gets the RGB information for a pixel at location (X, Y) --
# for all of the pixels in the image. Returns a list of (R, G, B) tuples by taking
# the (X) and (Y) values from im.size -- the attribute that holds the dimensions
# of the image -- and looping through all of their combinations.
In[1]: pix = [[im.getpixel((x,y)) for x in range(im.size[0])] \
              for y in range(im.size[1])]

The actual type of pix as returned by the list comprehension is a list of lists of tuples ([[(R, G, B)]]).
Each row of pixels is its own list, so a pixel at (x, y) is addressed as pix[y][x] (row first), which returns the (R, G, B) value for that pixel. Next, I needed to take the RGB values and format them for use with D3. Putting the list-of-lists-of-tuples into a pandas DataFrame made it easier to structure the elements and get them in the format I wanted. To plot the data in D3, I wanted to make each pixel a javascript object containing its x and y coordinates and its color information.

# To reduce the number of points needed to reproduce the image, I set a max height for the results.
In[3]: pixels_high = 80

# I then set a 'skip' value that would take every X pixels from the image based on its height.
In[4]: skip = round(im.size[1]/pixels_high)

# Constructing the DataFrame.
In[5]: pix_frame = pd.DataFrame(pix)

# List comprehension that selects the appropriate pixels based on the skip value and
# formats the data as JSON for plotting in D3.
In[6]: colors = [{"x": x, "y": y, "color": "rgba({0}, {1}, {2}, 0.75)".\
                  format(pix_frame[x][y][0], pix_frame[x][y][1], pix_frame[x][y][2])} \
                 for y in pix_frame.index \
                 for x in pix_frame.columns \
                 if y % skip == 0 and x % skip == 0]

The result of this chunk of code is a list of dictionaries [{"x": X, "y": Y, "color": "rgba(R, G, B, A)"}, ...] where X and Y are the coordinates of the point and color is the RGB value with an added alpha component for a little transparency. Setting the skip value results in some separation between the points when plotting them, so that 1) there are fewer points to plot overall and 2) the points are distinct. Formatting the data like this allows it to be passed to D3 as JSON for plotting.

With the basic code to get and format the pixel data from the images working, I wrote a class to streamline the data-acquisition process. With the class' built-in methods, the flask application can easily get the pixel data from any given image in just a few lines of code.
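The skip/format step can be exercised in isolation on a tiny synthetic pixel grid; this sketch does not need Pillow, and the 4x4 "image" is made up:

```python
# A synthetic 4x4 "image": rows of (R, G, B) tuples, indexed pix[y][x].
pix = [[(x * 10, y * 10, 100) for x in range(4)] for y in range(4)]

target_height = 2
skip = round(len(pix) / target_height)  # take every 2nd pixel here

colors = [{"x": x, "y": y,
           "color": "rgba({0}, {1}, {2}, 0.75)".format(*pix[y][x])}
          for y in range(len(pix))
          for x in range(len(pix[0]))
          if y % skip == 0 and x % skip == 0]

print(len(colors))  # → 4 (a 2x2 sample of the 4x4 grid)
print(colors[0])    # → {'x': 0, 'y': 0, 'color': 'rgba(0, 0, 100, 0.75)'}
```

The same downsampling logic applies to a real image; only the source of the (R, G, B) tuples changes.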
Pseudo-pointillist The Starry Night

The flask application itself is relatively simple. It has a landing page ("/") that renders a template (containing the HTML and javascript that handle the actual visualization), a callable address ("/new_picture") that handles getting a new picture when the "new picture" event handler is triggered, and a couple of convenience methods that pick a random image from the "images" directory and make calls to the PointillismImage class.

The template sets up a basic web page. Flask uses the Jinja2 templating engine, so there is a trick I'm using here worth mentioning. Where things are enclosed in double curly braces ({{}}), they are replaced by variables passed in by the flask application. I'm using this feature here to pass in the dataset and aspect ratio for the initial image.

The main draw of D3, for me, is the ease with which one can model data and adjust the models when the data change. A brief explanation of how it works: you make selections, bind data to those selections, and draw elements based on those data.

// Set height and width of the final image based on the height
// of the window and the aspect ratio of the image.
var aspect = {{ aspect }} // Substituted with the value provided by flask.
var h = window.innerHeight - 100;
var w = aspect * h;

// Select the body and append an svg element with height h and width w.
// Save the selection as 'svg'.
var svg = d3.select("body")
    .append("svg")
    .attr({
        height: h,
        width: w
    });

// Scales map an input domain to an output range. They ensure that
// the (x, y) coordinates are scaled to fit in the svg.
var xScale = d3.scale.linear()
    .domain([0, d3.max(dataset, function(d) {return d.x;})])
    .range([padding, w - padding]);
var yScale = d3.scale.linear()
    .domain([0, d3.max(dataset, function(d) {return d.y;})])
    .range([padding, h - padding]);

// In svg, select all "circle" elements.
// Note that at this point, they don't exist, but they will soon.
var circles = svg.selectAll("circle")
    .data(dataset) // Bind dataset to the selection.
    .enter()       // Call the enter method.
    .append("circle"); // Append circles to the svg for the datapoints in dataset.

// Describes the circles.
circles.attr({
    cx: function(d) {return xScale(d.x);}, // X coordinate is the x property of the object, scaled.
    cy: function(d) {return yScale(d.y);}, // Y coordinate is the y property of the object, scaled.
    r: 4, // Radius is hardcoded to 4 pixels for now.
    fill: function(d) {return d.color} // Fill the circle using its color property.
});

There is additional code (included below) that handles updating the points when the data change, as well as adding new points and removing old points when the number of datapoints changes. There's also a function that initiates an AJAX call to the backend that triggers loading data for a new image.

Results

Examples of the Mona Lisa and The Starry Night are included above. The other test images I used were:

Georges Seurat's Sunday Afternoon on the Island of La Grande Jatte

Alfalfa, St. Denis, also by Seurat:

The Scream by Edvard Munch:

The Park at Carrières-Saint-Denis by Georges Braque

Overall I'm pleased with the way it turned out, and it was a fun way to experiment with working with images. There are a few things I'm planning to add (e.g. making the point radius and the pixel heights of the images variable and changeable by the user, and loading some information about the art beside it) before putting it up as an application. The code itself is included below and is also available on GitHub, where it can be downloaded. The GitHub repo includes the prerequisites needed to get everything up and running as well as the example images that I used.

Footnotes

1 Getting it to work with jpegs on OS X required installing libjpeg (via Homebrew) and setting up the XCode command line tools with xcode-select --install. With those requirements satisfied, pip install pillow got Pillow to install properly.

Code

Pointillism Object

import pandas as pd
from PIL import Image

class PointillismImage(object):
    """
    Opens an image and provides accessors for aspect ratio and
    JSON formatted pixel data.
    """

    def __init__(self, f_name):
        """ Initializes with provided filename. """
        self.f_name = f_name
        self.im = self.open_image()
        self.pixel_matrix = self.set_pixel_data()

    def open_image(self):
        """ Opens the image and returns the Image object. """
        return Image.open(self.f_name)

    def set_pixel_data(self):
        """
        Gets pixel colors (R, G, B) for all (X, Y)'s and returns the
        resulting DataFrame.
        """
        pix = [[self.im.getpixel((x, y)) for x in range(self.im.size[0])] \
               for y in range(self.im.size[1])]
        pix_frame = pd.DataFrame(pix)
        return pix_frame

    def get_pixel_json(self, height):
        """
        Uses height to set skip and determine which pixels to take, then
        formats the points needed to plot the image as a list of dicts
        that will be parseable from D3.
        """
        skip = round(self.im.size[1] / height)
        colors = [{"x": x, "y": y, "color": "rgba({0}, {1}, {2}, 0.75)".\
                   format(self.pixel_matrix[x][y][0], self.pixel_matrix[x][y][1], \
                          self.pixel_matrix[x][y][2])} \
                  for y in self.pixel_matrix.index \
                  for x in self.pixel_matrix.columns \
                  if y % skip == 0 and x % skip == 0]
        return colors

    def get_aspect(self):
        """ Floating point aspect ratio of image. """
        return self.im.size[0] / float(self.im.size[1])

Flask App

from flask import Flask, request, render_template, jsonify
from random import randrange
import os

import pointillism_image as pointillism

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def main():
    """
    Entry point. Selects a random picture from './images', and returns
    the pixel and aspect information to the application on loading.
    """
    image_data = image_handler()
    return render_template("layout.html", aspect=image_data[0],
                           dataset=image_data[1])

@app.route("/new_picture", methods=["GET", "POST"])
def generate_picture_data():
    """
    Called by the click handler in javascript to draw a new image.
    Returns pixel and aspect data as JSON.
""" image_data = image_handler() return jsonify({"aspect": image_data[0], "dataset": image_data[1]}) def image_handler(): """ Selects random picture, opens it and returns pixel and aspect data. """ picture = select_picture("images") image = pointillism.PointillismImage(picture) aspect = image.get_aspect() dataset = image.get_pixel_json(70) return (aspect, dataset) def select_picture(pic_dir): """ Selects random image from images directory. """ pics = os.listdir(pic_dir) return "{0}/{1}".format(pic_dir, pics[randrange(0, len(pics))]) if __name__ == "__main__": app.debug = True app.run(port=8080) Template <!DOCTYPE HTML> <html> <head> <meta charset="utf-8"> <script src="" charset="utf-8"></script> <title>Pointillism!</title> </head> <body> <h4>Random picture time!</h4> <script type="text/javascript"> /* Variables for page layout and data. */ var aspect = {{ aspect }} var h = window.innerHeight - 100; var w = aspect * h; var padding = 10; var dataset = {{ dataset | tojson | e}}; /* Scales for positioning of the points. 
*/]); var svg = d3.select("body") .append("svg") .attr({ height: h, width: w }); var circles = svg.selectAll("circle") .data(dataset) .enter() .append("circle") ; circles.attr({ cx: function(d) {return xScale(d.x);}, cy: function(d) {return yScale(d.y);}, r: 4, //r: function(d) {return rScale(d.x);}, fill: function(d) {return d.color} }); d3.select("body").select("h4") .on("click", function(){ new_image(); }); var update_image = function(newDataset) { h = window.innerHeight - 100; console.log(h); w = aspect * h; xScale = d3.scale.linear() .domain([0, d3.max(newDataset, function(d) {return d.x;})]) .range([padding, w - padding]); yScale = d3.scale.linear() .domain([0, d3.max(newDataset, function(d) {return d.y;})]) .range([padding, h - padding]); rScale = d3.scale.linear() .domain([0, d3.max(newDataset, function(d) {return d.y;})]) .range([4, 4]); var circles = svg.selectAll("circle") .data(newDataset); circles.enter() .append("circle") .transition() //.duration(500) .attr({ cx: function(d) {return xScale(d.x);}, cy: function(d) {return yScale(d.y);}, r: 4, //r: function(d) {return rScale(d.y);}, fill: function(d) { return d.color; } }); circles.transition() .duration(500) .attr({ cx: function(d) {return xScale(d.x);}, cy: function(d) {return yScale(d.y);}, r: 6, fill: function(d) { return d.color; }, }) .each("end", function() { d3.select(this) .transition() .duration(500) .attr({ r: 4 //r: function(d) {return rScale(d.y);} }) }); circles.exit() .transition() .duration(500) .attr("x", w) .remove(); }; /* [{x: _, y: _, color: _}, ...] */ var new_image = function() { d3.json("{{ url_for('generate_picture_data') }} ", function(error, data) { if (data) { var dataset = data.dataset; aspect = data.aspect; update_image(dataset); } else { console.log(error); }; }); }; </script> </body> </html>
https://travispoulsen.com/blog/posts/2014-07-19-Pseudo-Pointillism.html
So you have decided to create a chat bot because everybody is doing it, but you don't want to lose the SAPUI5 look and feel. Time to create a custom control.

I initially did a mock-up and started looking for controls that I could use within my control. I also decided that I would extend the Control object itself and write my own renderer rather than extending another control. So this is my design; it's really composed of a button and a responsive popover containing some other controls.

Getting started...

The first step is extending the control and importing the files that you need:

sap.ui.define([
    "sap/ui/core/Control",
    "sap/m/Button",
    "sap/ui/core/IconPool",
    "sap/m/Dialog",
    "sap/m/List",
    "sap/m/FeedListItem",
    "sap/m/FeedInput",
    "sap/m/ResponsivePopover",
    "sap/m/VBox",
    "sap/m/ScrollContainer",
    "sap/m/Bar",
    "sap/m/Title",
    "sap/ui/core/ResizeHandler"
], function(Control, Button, IconPool, Dialog, List, FeedListItem, FeedInput,
            ResponsivePopover, VBox, ScrollContainer, Bar, Title, ResizeHandler) {

    var ChatDialog = Control.extend("CSID.i027737.custlib.controls.ChatDialog", {

Of course, during the course of design this section was updated and dismantled many times as I tried different controls. Then we need to define the metadata of the control: what properties I would like the application developer to be able to set, and what events the control will trigger in the application.
metadata : {
    properties : {
        title: {type: "string", group: "Appearance", defaultValue: null},
        width: {type: "sap.ui.core.CSSSize", group: "Dimension", defaultValue: null},
        height: {type: "sap.ui.core.CSSSize", group: "Dimension", defaultValue: null},
        buttonIcon: {type: "sap.ui.core.URI", group: "Appearance", defaultValue: null},
        robotIcon: {type: "sap.ui.core.URI", group: "Appearance", defaultValue: null},
        userIcon: {type: "sap.ui.core.URI", group: "Appearance", defaultValue: null},
        initialMessage: {type: "string", group: "Appearance", defaultValue: "Hello, How can I help?"},
        placeHolder: {type: "string", group: "Appearance", defaultValue: "Post something here"}
    },
    aggregations : {
        _chatButton: {type: "sap.m.Button", multiple: false},
        _popover: {type: "sap.m.ResponsivePopover", multiple: false}
    },
    events : {
        send: {
            parameters : {
                text : {type : "string"}
            }
        }
    }
},

I defined some custom properties, internal aggregations (marked with the underscore _), and also an event that the application developer can bind a function to.

The Renderer…

renderer : function(oRm, oControl) {
    var oChatBtn = oControl.getAggregation("_chatButton");
    var oPop = oControl.getAggregation("_popover");
    oRm.write("<div ");
    oRm.addClass("bkChatButton");
    oRm.writeClasses();
    oRm.write(">");
    oRm.renderControl(oChatBtn);
    oRm.renderControl(oPop);
    oRm.write("</div>");
}

When you create a custom control you will need to specify a renderer so the render manager knows what to paint in the DOM for this control. As you can see above, I have two controls stored in aggregations and I am just rendering them within my <div>, which I have also assigned the class bkChatButton so I can add some CSS styles later.

So far so good…

init…

This is where we are doing the majority of our heavy lifting. We need to design the chat bot and all the sub-controls that are in this responsive popover.
To start with, import some CSS and create the button. I bind an internal function to the button press event which will open the chat dialog.

init : function () {
    // initialisation code, in this case, ensure css is imported
    var libraryPath = jQuery.sap.getModulePath("CSID.i027737.custlib");
    jQuery.sap.includeStyleSheet(libraryPath + "/css/bkChat.css");
    var oBtn = new Button(this.getId() + "-bkChatButton", {
        press: this._onOpenChat.bind(this)
    });
    this.setAggregation("_chatButton", oBtn);

The _onOpenChat function looks something like this. I'm using the openBy function of the ResponsivePopover and then setting the content height and width.

_onOpenChat: function(oEvent){
    this.getAggregation("_popover").openBy(this.getAggregation("_chatButton"));
    this.getAggregation("_popover").setContentHeight(this.getProperty("height"));
    this.getAggregation("_popover").setContentWidth(this.getProperty("width"));
},

You will notice that I gave my button an id based on the id of the control: this.getId() + "-bkChatButton". This is a good idea because if the control is used many times, each instance will have a different id, and we won't see any errors relating to duplicate ids. I need to give this button an id to access it, as one of its properties is based on my custom control property.

I want this button icon to be configurable, but in the init function the properties are not available. Therefore, I overwrite the setter for the property buttonIcon and call the setProperty method, which actually lives in ManagedObject. Finally, I set the icon of the button I have given the id to.

setButtonIcon: function(sButtonIcon){
    this.setProperty("buttonIcon", sButtonIcon, true);
    sap.ui.getCore().byId(this.getId() + "-bkChatButton").setIcon(sButtonIcon);
},

The most important thing to take note of here is that when a control is created and the init function is called, the properties that are assigned are not available in the init.
First the init is called, and then as the XML parser encounters properties it calls the setProperty() method to update them one by one. Therefore, accessing the properties within the init method will lead to empty values. To get a good understanding of this, I put some breakpoints in the init and the setters, and I could see them getting called and in what order.

User Input…

The chat dialog is simply a sap.m.List (without separators between the FeedListItems used) and a sap.m.FeedInput where the end user will write.

Let's first talk about the FeedInput. The FeedInput is a UI5 control where the user types text, and to submit it they need to press the button. I want the Enter key to also send the message, but the default behavior of the Enter key is to move to the next line. To get around this, I add an event delegate to the feed input, so now when the Enter key is pressed, the below function is called. First I use preventDefault() to stop the carriage return happening. Then I check if there is text in the feed input. If there is, I fire the event that normally only happens when the button is pressed. Once this is done, I clear the text.

var oFeedIn = new FeedInput(this.getId() + "-bkChatInput", {
    post: this._onPost.bind(this),
    showicon: true
}).addStyleClass("sapUiTinyMargin");
oFeedIn.addEventDelegate({
    onsapenter: function(oEvent) {
        oEvent.preventDefault();
        var sTxt = oFeedIn.getValue();
        if(sTxt.length > 0){
            oFeedIn.fireEvent("post", {
                value: sTxt
            }, true, false);
            oFeedIn.setValue(null);
        }
    }
});

You can see above that the event fired is post, and in the creation of the FeedInput I have bound this event to the _onPost function. Let's take a look at that now. When this is fired we need to do two things: update the list with our new chat text, and send this text to the chatbot to get a response. As we are the control developer and not the application developer, we fire our custom control's event "send" with the text.
Now the application developer can bind to this event to get the server response. We will look at that after.

_onPost: function(oEvent){
    var sText = oEvent.getSource().getValue();
    this.addChatItem(sText, true);
    this.fireEvent("send", {
        text: sText
    }, false, true);
},

You can see above that we have added the chat with the function addChatItem. Time to look at that…

addChatItem…

This function is designed to add the text in chat bubbles to the chat dialog. It takes two parameters: the text, and a boolean that decides whether it is the user or the robot adding the text. Above we can see it being called by the user; later we will see the robot calling it. Depending on the boolean value a class is added, and this styles the chat bubble with the appropriate bubble direction and the correct image. I also scroll to the bottom of my list when a new chat is added.

addChatItem: function(sText, bUser){
    var oFeedListItem = new FeedListItem({
        showicon: true,
        text: sText
    });
    if(bUser){
        oFeedListItem.setIcon(this.getUserIcon());
        oFeedListItem.addStyleClass("bkUserInput");
        sap.ui.getCore().byId(this.getId() + "-bkChatList").addItem(oFeedListItem, 0);
    }else{
        oFeedListItem.setIcon(this.getRobotIcon());
        oFeedListItem.addStyleClass("bkRobotInput");
        sap.ui.getCore().byId(this.getId() + "-bkChatList").addItem(oFeedListItem, 0);
    }
    var oScroll = sap.ui.getCore().byId(this.getId() + "-bkChatScroll");
    setTimeout(function(){
        oScroll.scrollTo(0, 1000, 0);
    }, 0);
}

Using the Control…

Now that the control is developed, the code below is external to the custom control; it is the application developer who writes this. To use the control, they just define the namespace in the view, then they can add the control and bind all the properties we have defined in its metadata.

<mvc:View
    <App id="idAppControl">
        <pages>
            <Page title="{i18n>title}">
                <content>
                    <controls:ChatDialog
                    </controls:ChatDialog>
                </content>
            </Page>
        </pages>
    </App>
</mvc:View>

We also bind to the event send.
We bind the function onSendPressed, which we define in the controller of this view. Let's take a look at that now.

onSendPressed: function(oEvent){
    var chatbot = this.getView().byId("brianchat");
    var question = oEvent.getParameter("text");
    var payload = {content: question};
    jQuery.ajax({
        url: "/chat",
        cache: false,
        type: "POST",
        data: JSON.stringify(payload),
        async: true,
        success: function(sData) {
            chatbot.addChatItem(sData.content, false);
        },
        error: function(sError) {
            chatbot.addChatItem("Sorry im malfunctioning", false);
        }
    });
}

Once this is called, the application developer can include the ajax call to whatever chatbot they are using. Once the response comes back, they use the addChatItem function we spoke of earlier. This time, however, it is called with false, so the response text and bubble will be styled as a robot.

Summary…

There is some more going on in the code that we have not discussed: resizing the dialog, etc. The design of the speech bubbles is done in the CSS, and there are plenty of resources on the web that explain it. I will upload all the code to GitHub when I get a chance. Happy Chatting 🙂

Great post, Brian!! How do I simulate bot typing? Thanks

You can call the function addChatItem with the text and false.

Great post, Brian. The blog covers frontend design; however, how do we build the backend with chatbot algorithms?

In the UI controller I have the ajax call to the server. I created a Node.js application which routed the chats to recast.ai; you can also use wit.ai or Watson for chat applications with the algorithms. In these you can train the bots to identify text and patterns, etc.

Hi Brian, your blog is extremely nice, but it would be helpful if you uploaded it to GitHub, because I found many things were missing when I tried to implement it. Thanks in advance, Mriganka Basak

Hi Brian, I wanted to follow up on the GitHub upload. Do you mind uploading it to GitHub? Thanks in advance, Yigit
https://blogs.sap.com/2018/07/05/implementing-a-chatbot-custom-control/
I have two NumPy arrays, x of shape (m, i) and y of shape (m, j). I want to multiply each column of x with each column of y elementwise, so that the result has shape (m, i*j).

import numpy as np

np.random.seed(1)
x = np.random.randint(0, 2, (10, 3))
y = np.random.randint(0, 2, (10, 2))

x:

array([[1, 1, 0],
       [0, 1, 1],
       [1, 1, 1],
       [0, 0, 1],
       [0, 1, 1],
       [0, 0, 1],
       [0, 0, 0],
       [1, 0, 0],
       [1, 0, 0],
       [0, 1, 0]])

y:

array([[0, 0],
       [1, 1],
       [1, 1],
       [1, 0],
       [0, 0],
       [1, 1],
       [1, 1],
       [1, 1],
       [0, 1],
       [1, 0]])

The desired result:

array([[0, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 1],
       [0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0]])

This loop over the columns of x and y produces the correct result:

def _mult(x, y):
    r = []
    for xc in x.T:
        for yc in y.T:
            r.append(xc * yc)
    return np.array(r).T

Use NumPy broadcasting -

(y[:,None]*x[...,None]).reshape(x.shape[0],-1)

Explanation

As inputs, we have -

y : 10 x 2
x : 10 x 3

With y[:,None], we are introducing a new axis between the existing two dims, thus creating a 3D array version of it. This keeps the first axis as the first one in the 3D version and pushes out the second axis as the third one.

With x[...,None], we are introducing a new axis as the last one by pushing up the two existing dims as the first two dims, resulting in a 3D array version.

To summarize, with the introduction of new axes, we have -

y : 10 x 1 x 2
x : 10 x 3 x 1

With y[:,None]*x[...,None], there would be broadcasting for both y and x, resulting in an output array with a shape of (10,3,2). To get to the final output array of shape (10,6), we just need to merge the last two axes with that reshape.
https://codedump.io/share/zMwPcLMSxLvj/1/multiply-each-column-from-2d-array-with-each-column-from-another-2d-array
# Breaking UC Browser

![](https://habrastorage.org/r/w780q1/webt/ke/e0/rd/kee0rdupth-kalljz9gfxpjndnk.jpeg)

### Introduction

At the end of March we [reported](https://news.drweb.ru/show/?i=13176) on the hidden ability of UC Browser to download and run unverified code. Today we will examine in detail how it happens and how hackers can use it.

Some time ago, UC Browser was promoted and distributed quite aggressively. It was installed on devices by malware, distributed via websites under the guise of video files (i.e., users thought they were downloading pornography or something similar, but instead were getting APK files with this browser), and advertised using worrisome banners about a user's browser being outdated or vulnerable. The official UC Browser VK group had a [topic](https://vk.com/topic-27170131_31448404?offset=0) where users could complain about false advertising, and many users provided examples. In 2016, there was even a [commercial](https://www.youtube.com/watch?v=VRFOl7k5axc) in Russian (yes, a commercial for a browser that blocks commercials).

As we write this article, UC Browser has been installed 500,000,000 times from Google Play. This is impressive, since only Google Chrome has managed to top that. Among the reviews, you can see a lot of user complaints about advertising and being redirected to other applications on Google Play. This was the reason for our study: we wanted to see if UC Browser is doing something wrong. And it is!

The application is able to download and run executable code, which [violates Google Play's policy for app publishing](https://play.google.com/about/privacy-security-deception/malicious-behavior/). And UC Browser doesn't only download executable code; it does this unsafely, which can be used for a MitM attack. Let's see if we can use it this way.
Everything that follows applies to the version of UC Browser that was distributed via Google Play at the time of our study:

```
package: com.UCMobile.intl
versionName: 12.10.8.1172
versionCode: 10598
sha1 APK-file: f5edb2243413c777172f6362876041eb0c3a928c
```

### Attack Vector

UC Browser's manifest contains a service with a telltale name of *com.uc.deployment.UpgradeDeployService*.

```
```

When this service launches, the browser makes a POST request to [puds.ucweb.com/upgrade/index.xhtml](http://puds.ucweb.com/upgrade/index.xhtml) that can be seen in traffic for some time after the launch. In response, the browser may receive a command to download an update or a new module. During our analysis, we never received such commands from the server, but we noticed that when trying to open a PDF file in the browser, it repeats the request to the above address and then downloads a native library.

To simulate an attack, we decided to use this feature of UC Browser: the ability to open PDF files using a native library that is not present in the APK file but is downloaded from the Internet. Technically, UC Browser can download something without a user's permission when given an appropriate response to the request sent upon startup. But for this we would need to study the interaction protocol with the server in more detail, so we thought it was easier to just hook and edit the response, replacing the library needed to open PDF files with something different. So when a user wants to open a PDF file directly in the browser, traffic may contain the following requests:

![](https://habrastorage.org/r/w1560/webt/j0/i1/c1/j0i1c1n_nwf_gnauzbudnqor5oi.png)

First, there is a POST request to [puds.ucweb.com/upgrade/index.xhtml](http://puds.ucweb.com/upgrade/index.xhtml), then the compressed library for viewing PDF files and office documents is downloaded.
Logically, we can assume that the first request sends information about the system (at least the architecture, because the server needs to select an appropriate library), and the server responds with some information about the library that needs to be downloaded, such as its address and maybe something else. The problem is that this request is encrypted.

| Request fragment | Response fragment |
| --- | --- |
|  |  |

The library is compressed in a ZIP file and not encrypted.

![](https://habrastorage.org/r/w1560/webt/6r/lt/ns/6rltnsrtyd-_5ekwis68qn-vydc.png)

### Searching for traffic decryption code

Let's try and decrypt the server's response. Take a look at the code of the *com.uc.deployment.UpgradeDeployService* class: from the *onStartCommand* method, we navigate to *com.uc.deployment.b.x*, and then to *com.uc.browser.core.d.c.f.e*:

```
public final void e(l arg9) { int v4_5; String v3_1; byte[] v3; byte[] v1 = null; if(arg9 == null) { v3 = v1; } else { v3_1 = arg9.iGX.ipR; StringBuilder v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]product:"); v4.append(arg9.iGX.ipR); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]version:"); v4.append(arg9.iGX.iEn); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]upgrade_type:"); v4.append(arg9.iGX.mMode); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]force_flag:"); v4.append(arg9.iGX.iEo); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]silent_mode:"); v4.append(arg9.iGX.iDQ); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]silent_type:"); v4.append(arg9.iGX.iEr); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]silent_state:"); v4.append(arg9.iGX.iEp); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]silent_file:"); v4.append(arg9.iGX.iEq); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apk_md5:"); v4.append(arg9.iGX.iEl); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]download_type:"); v4.append(arg9.mDownloadType); v4 =
new StringBuilder("["); v4.append(v3_1); v4.append("]download_group:"); v4.append(arg9.mDownloadGroup); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]download_path:"); v4.append(arg9.iGH); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apollo_child_version:"); v4.append(arg9.iGX.iEx); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apollo_series:"); v4.append(arg9.iGX.iEw); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apollo_cpu_arch:"); v4.append(arg9.iGX.iEt); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apollo_cpu_vfp3:"); v4.append(arg9.iGX.iEv); v4 = new StringBuilder("["); v4.append(v3_1); v4.append("]apollo_cpu_vfp:"); v4.append(arg9.iGX.iEu); ArrayList v3_2 = arg9.iGX.iEz; if(v3_2 != null && v3_2.size() != 0) { Iterator v3_3 = v3_2.iterator(); while(v3_3.hasNext()) { Object v4_1 = v3_3.next(); StringBuilder v5 = new StringBuilder("["); v5.append(((au)v4_1).getName()); v5.append("]component_name:"); v5.append(((au)v4_1).getName()); v5 = new StringBuilder("["); v5.append(((au)v4_1).getName()); v5.append("]component_ver_name:"); v5.append(((au)v4_1).aDA()); v5 = new StringBuilder("["); v5.append(((au)v4_1).getName()); v5.append("]component_ver_code:"); v5.append(((au)v4_1).gBl); v5 = new StringBuilder("["); v5.append(((au)v4_1).getName()); v5.append("]component_req_type:"); v5.append(((au)v4_1).gBq); } } j v3_4 = new j(); m.b(v3_4); h v4_2 = new h(); m.b(v4_2); ay v5_1 = new ay(); v3_4.hS(""); v3_4.setImsi(""); v3_4.hV(""); v5_1.bPQ = v3_4; v5_1.bPP = v4_2; v5_1.yr(arg9.iGX.ipR); v5_1.gBF = arg9.iGX.mMode; v5_1.gBI = arg9.iGX.iEz; v3_2 = v5_1.gAr; c.aBh(); v3_2.add(g.fs("os_ver", c.getRomInfo())); v3_2.add(g.fs("processor_arch", com.uc.b.a.a.c.getCpuArch())); v3_2.add(g.fs("cpu_arch", com.uc.b.a.a.c.Pb())); String v4_3 = com.uc.b.a.a.c.Pd(); v3_2.add(g.fs("cpu_vfp", v4_3)); v3_2.add(g.fs("net_type", String.valueOf(com.uc.base.system.a.Jo()))); v3_2.add(g.fs("fromhost", arg9.iGX.iEm)); 
v3_2.add(g.fs("plugin_ver", arg9.iGX.iEn)); v3_2.add(g.fs("target_lang", arg9.iGX.iEs)); v3_2.add(g.fs("vitamio_cpu_arch", arg9.iGX.iEt)); v3_2.add(g.fs("vitamio_vfp", arg9.iGX.iEu)); v3_2.add(g.fs("vitamio_vfp3", arg9.iGX.iEv)); v3_2.add(g.fs("plugin_child_ver", arg9.iGX.iEx)); v3_2.add(g.fs("ver_series", arg9.iGX.iEw)); v3_2.add(g.fs("child_ver", r.aVw())); v3_2.add(g.fs("cur_ver_md5", arg9.iGX.iEl)); v3_2.add(g.fs("cur_ver_signature", SystemHelper.getUCMSignature())); v3_2.add(g.fs("upgrade_log", i.bjt())); v3_2.add(g.fs("silent_install", String.valueOf(arg9.iGX.iDQ))); v3_2.add(g.fs("silent_state", String.valueOf(arg9.iGX.iEp))); v3_2.add(g.fs("silent_file", arg9.iGX.iEq)); v3_2.add(g.fs("silent_type", String.valueOf(arg9.iGX.iEr))); v3_2.add(g.fs("cpu_archit", com.uc.b.a.a.c.Pc())); v3_2.add(g.fs("cpu_set", SystemHelper.getCpuInstruction())); boolean v4_4 = v4_3 == null || !v4_3.contains("neon") ? false : true; v3_2.add(g.fs("neon", String.valueOf(v4_4))); v3_2.add(g.fs("cpu_cores", String.valueOf(com.uc.b.a.a.c.Jl()))); v3_2.add(g.fs("ram_1", String.valueOf(com.uc.b.a.a.h.Po()))); v3_2.add(g.fs("totalram", String.valueOf(com.uc.b.a.a.h.OL()))); c.aBh(); v3_2.add(g.fs("rom_1", c.getRomInfo())); v4_5 = e.getScreenWidth(); int v6 = e.getScreenHeight(); StringBuilder v7 = new StringBuilder(); v7.append(v4_5); v7.append("*"); v7.append(v6); v3_2.add(g.fs("ss", v7.toString())); v3_2.add(g.fs("api_level", String.valueOf(Build$VERSION.SDK_INT))); v3_2.add(g.fs("uc_apk_list", SystemHelper.getUCMobileApks())); Iterator v4_6 = arg9.iGX.iEA.entrySet().iterator(); while(v4_6.hasNext()) { Object v6_1 = v4_6.next(); v3_2.add(g.fs(((Map$Entry)v6_1).getKey(), ((Map$Entry)v6_1).getValue())); } v3 = v5_1.toByteArray(); } if(v3 == null) { this.iGY.iGI.a(arg9, "up_encode", "yes", "fail"); return; } v4_5 = this.iGY.iGw ? 
0x1F : 0; if(v3 == null) { } else { v3 = g.i(v4_5, v3); if(v3 == null) { } else { v1 = new byte[v3.length + 16]; byte[] v6_2 = new byte[16]; Arrays.fill(v6_2, 0); v6_2[0] = 0x5F; v6_2[1] = 0; v6_2[2] = ((byte)v4_5); v6_2[3] = -50; System.arraycopy(v6_2, 0, v1, 0, 16); System.arraycopy(v3, 0, v1, 16, v3.length); } } if(v1 == null) { this.iGY.iGI.a(arg9, "up_encrypt", "yes", "fail"); return; } if(TextUtils.isEmpty(this.iGY.mUpgradeUrl)) { this.iGY.iGI.a(arg9, "up_url", "yes", "fail"); return; } StringBuilder v0 = new StringBuilder("["); v0.append(arg9.iGX.ipR); v0.append("]url:"); v0.append(this.iGY.mUpgradeUrl); com.uc.browser.core.d.c.i v0_1 = this.iGY.iGI; v3_1 = this.iGY.mUpgradeUrl; com.uc.base.net.e v0_2 = new com.uc.base.net.e(new com.uc.browser.core.d.c.i$a(v0_1, arg9)); v3_1 = v3_1.contains("?") ? v3_1 + "&dataver=pb" : v3_1 + "?dataver=pb"; n v3_5 = v0_2.uc(v3_1); m.b(v3_5, false); v3_5.setMethod("POST"); v3_5.setBodyProvider(v1); v0_2.b(v3_5); this.iGY.iGI.a(arg9, "up_null", "yes", "success"); this.iGY.iGI.b(arg9); }
```

We can see this is where the POST request is made. Have a look at the 16-byte array that contains: 0x5F, 0, 0x1F, -50 (=0xCE). The values are the same as in the request above. The same class contains a nested class with another interesting method:

```
public final void a(l arg10, byte[] arg11) { f v0 = this.iGQ; StringBuilder v1 = new StringBuilder("["); v1.append(arg10.iGX.ipR); v1.append("]:UpgradeSuccess"); byte[] v1_1 = null; if(arg11 == null) { } else if(arg11.length < 16) { } else { if(arg11[0] != 0x60 && arg11[3] != 0xFFFFFFD0) { goto label_57; } int v3 = 1; int v5 = arg11[1] == 1 ?
1 : 0; if(arg11[2] != 1 && arg11[2] != 11) { if(arg11[2] == 0x1F) { } else { v3 = 0; } } byte[] v7 = new byte[arg11.length - 16]; System.arraycopy(arg11, 16, v7, 0, v7.length); if(v3 != 0) { v7 = g.j(arg11[2], v7); } if(v7 == null) { goto label_57; } if(v5 != 0) { v1_1 = g.P(v7); goto label_57; } v1_1 = v7; } label_57: if(v1_1 == null) { v0.iGY.iGI.a(arg10, "up_decrypt", "yes", "fail"); return; } q v11 = g.b(arg10, v1_1); if(v11 == null) { v0.iGY.iGI.a(arg10, "up_decode", "yes", "fail"); return; } if(v0.iGY.iGt) { v0.d(arg10); } if(v0.iGY.iGo != null) { v0.iGY.iGo.a(0, ((o)v11)); } if(v0.iGY.iGs) { v0.iGY.a(((o)v11)); v0.iGY.iGI.a(v11, "up_silent", "yes", "success"); v0.iGY.iGI.a(v11); return; } v0.iGY.iGI.a(v11, "up_silent", "no", "success"); } }
```

This method receives a byte array as input and checks whether the zero byte is 0x60 or the third byte is 0xD0, and whether the second byte is 1, 11, or 0x1F. Check out the server response: the zero byte is 0x60, the second byte is 0x1F, the third byte is 0x60. It looks like what we need. Judging by the strings ("up_decrypt", for example), a method is supposed to be called here to decrypt the server response.

Now let's look at the g.j method. Note that the first argument is the byte at offset 2 (that is, 0x1F in our case), and the second is the server response without the first 16 bytes.

```
public static byte[] j(int arg1, byte[] arg2) {
    if(arg1 == 1) {
        arg2 = c.c(arg2, c.adu);
    }
    else if(arg1 == 11) {
        arg2 = m.aF(arg2);
    }
    else if(arg1 != 0x1F) {
    }
    else {
        arg2 = EncryptHelper.decrypt(arg2);
    }
    return arg2;
}
```

Obviously, it is selecting the decryption algorithm, and the byte that in our case equals 0x1F indicates one of three possible options.

Let's go back to the code analysis. After a couple of jumps, we get to the method with the telltale name *decryptBytesByKey*. Here, two more bytes are separated from our response and form a string. It's clear this is how the key is selected to decrypt the message.
```
private static byte[] decryptBytesByKey(byte[] bytes) {
    byte[] v0 = null;
    if(bytes != null) {
        try {
            if(bytes.length < EncryptHelper.PREFIX_BYTES_SIZE) {
            }
            else if(bytes.length == EncryptHelper.PREFIX_BYTES_SIZE) {
                return v0;
            }
            else {
                byte[] prefix = new byte[EncryptHelper.PREFIX_BYTES_SIZE]; // 2 bytes
                System.arraycopy(bytes, 0, prefix, 0, prefix.length);
                String keyId = c.ayR().d(ByteBuffer.wrap(prefix).getShort()); // key selection
                if(keyId == null) {
                    return v0;
                }
                else {
                    a v2 = EncryptHelper.ayL();
                    if(v2 == null) {
                        return v0;
                    }
                    else {
                        byte[] enrypted = new byte[bytes.length - EncryptHelper.PREFIX_BYTES_SIZE];
                        System.arraycopy(bytes, EncryptHelper.PREFIX_BYTES_SIZE, enrypted, 0, enrypted.length);
                        return v2.l(keyId, enrypted);
                    }
                }
            }
        }
        catch(SecException v7_1) {
            EncryptHelper.handleDecryptException(((Throwable)v7_1), v7_1.getErrorCode());
            return v0;
        }
        catch(Throwable v7) {
            EncryptHelper.handleDecryptException(v7, 2);
            return v0;
        }
    }
    return v0;
}
```

Jumping ahead a bit, note that at this stage it is only the key identifier, not the key itself. Key selection is going to be a little more complicated.

In the next method, two more parameters are added to the existing ones, so we get a total of four: the magic number 16, the key identifier, the encrypted data, and a string that is added there for some reason (empty in our case).

```
public final byte[] l(String keyId, byte[] encrypted) throws SecException {
    return this.ayJ().staticBinarySafeDecryptNoB64(16, keyId, encrypted, "");
}
```

After a series of jumps, we see the *staticBinarySafeDecryptNoB64* method of the *com.alibaba.wireless.security.open.staticdataencrypt.IStaticDataEncryptComponent* interface. The main application code has no classes that implement this interface. This class is contained in the file ***lib/armeabi-v7a/libsgmain.so***, which is not really a .SO, but rather a .JAR. The method we are interested in is implemented as follows:

```
package com.alibaba.wireless.security.a.i;
// ...
public class a implements IStaticDataEncryptComponent {
    private ISecurityGuardPlugin a;
    // ...
    private byte[] a(int mode, int magicInt, int xzInt, String keyId, byte[] encrypted, String magicString) {
        return this.a.getRouter().doCommand(10601, new Object[]{Integer.valueOf(mode), Integer.valueOf(magicInt), Integer.valueOf(xzInt), keyId, encrypted, magicString});
    }
    // ...
    private byte[] b(int magicInt, String keyId, byte[] encrypted, String magicString) {
        return this.a(2, magicInt, 0, keyId, encrypted, magicString);
    }
    // ...
    public byte[] staticBinarySafeDecryptNoB64(int magicInt, String keyId, byte[] encrypted, String magicString) throws SecException {
        if(keyId != null && keyId.length() > 0 && magicInt >= 0 && magicInt < 19 && encrypted != null && encrypted.length > 0) {
            return this.b(magicInt, keyId, encrypted, magicString);
        }
        throw new SecException("", 301);
    }
    //...
}
```

Here, our parameter list is supplemented with two more integers: 2 and 0. Apparently, 2 means decryption, as in the *doFinal* method of the system class *javax.crypto.Cipher*. Then, this information is transmitted to a certain Router along with the number 10601, which is apparently the command number. After the next chain of jumps, we find a class that implements the *RouterComponent* interface and the *doCommand* method:

```
package com.alibaba.wireless.security.mainplugin;

import com.alibaba.wireless.security.framework.IRouterComponent;
import com.taobao.wireless.security.adapter.JNICLibrary;

public class a implements IRouterComponent {
    public a() {
        super();
    }

    public Object doCommand(int arg2, Object[] arg3) {
        return JNICLibrary.doCommandNative(arg2, arg3);
    }
}
```

There is also the *JNICLibrary* class, where the native *doCommandNative* method is declared:

```
package com.taobao.wireless.security.adapter;

public class JNICLibrary {
    public static native Object doCommandNative(int arg0, Object[] arg1);
}
```

So, we need to find the *doCommandNative* method in the native code.
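Before dropping into the native code, the response framing pieced together so far can be restated as a small parser. This is only a sketch of the decompiled checks above (magic bytes, cipher-id byte, two-byte key-id prefix); the example frame at the bottom is fabricated just to exercise it, and the actual decryption is left out since it lives in libsgmain:

```python
import struct

HEADER_SIZE = 16
PREFIX_BYTES_SIZE = 2  # key-id prefix, as in decryptBytesByKey

def parse_upgrade_response(blob):
    """Split a server response into its framing fields.

    Mirrors the decompiled checks: byte 0 must be 0x60 (or byte 3 0xD0),
    and byte 2 selects the cipher (1, 11, or 0x1F).
    """
    if len(blob) < HEADER_SIZE:
        raise ValueError("response too short")
    if blob[0] != 0x60 and blob[3] != 0xD0:
        raise ValueError("bad magic")
    cipher_id = blob[2]
    if cipher_id not in (1, 11, 0x1F):
        raise ValueError("unknown cipher id")
    body = blob[HEADER_SIZE:]
    # for cipher 0x1F, the first two body bytes select the decryption key
    key_id = struct.unpack_from(">H", body)[0]  # big-endian, like ByteBuffer.getShort
    encrypted = body[PREFIX_BYTES_SIZE:]
    return cipher_id, key_id, encrypted

# fabricated example frame: 16-byte header + 2-byte key id + ciphertext
frame = bytes([0x60, 0x00, 0x1F, 0xD0] + [0] * 12) + bytes([0x01, 0x02]) + b"ciphertext"
cipher_id, key_id, encrypted = parse_upgrade_response(frame)
print(hex(cipher_id), hex(key_id), encrypted)  # 0x1f 0x102 b'ciphertext'
```

The big-endian unpack matches Java's default `ByteBuffer` byte order used by `getShort` in the decompiled code.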
That's where the fun begins.

### Machine code obfuscation

There is one native library in the *libsgmain.so* file (which is actually a .JAR file and, as we said above, implements some encryption-related interfaces): ***libsgmainso-6.4.36.so***. We load it in IDA and get a bunch of dialogs with error messages. The problem is that the section header table is invalid. This is done on purpose to complicate the analysis.

![](https://habrastorage.org/r/w780q1/webt/_k/bz/5c/_kbz5cbedw6g1jnsj047rtqudli.jpeg)

But we don't really need it anyway: the program header table is enough to correctly load the ELF file and analyze it. So we simply delete the section header table, nulling the corresponding fields in the header.

![](https://habrastorage.org/r/w780q1/webt/2y/xh/vk/2yxhvksp30f016gfyhrlfh0pklc.jpeg)

Then we open the file in IDA again.

We have two ways to tell the Java virtual machine exactly where the native library contains the implementation of a method declared as native in the Java code. The first is to give it a name like *Java\_package\_name\_ClassName\_methodName*. The second is to register it when loading the library (in the JNI\_OnLoad function) by calling the RegisterNatives function. In our case, if the first method were used, the name would be *Java\_com\_taobao\_wireless\_security\_adapter\_JNICLibrary\_doCommandNative*. The list of exported functions doesn't contain this name, which means we need to look for RegisterNatives. Thus, we go to the JNI\_OnLoad function and see the following:

![](https://habrastorage.org/r/w780q1/webt/m6/dp/iw/m6dpiwaonfuyri-jg8thppcfgqe.jpeg)

What's going on here? At first glance, the beginning and end of the function are typical of the ARM architecture. The first instruction pushes to the stack the contents of the registers that the function will use (in this case, R0, R1, and R2), as well as the contents of the LR register with the function's return address.
The last instruction restores the saved registers and puts the return address into the PC register, thus returning from the function. But if we take a closer look, we may notice that the penultimate instruction changes the return address stored on the stack. Let's calculate what it will be when the code is executed. The address 0xB130 is loaded into R1, 5 is subtracted from it, then it is moved to R0 and 0x10 is added to it. In the end, it equals 0xB13B. Thus, IDA thinks that the final instruction performs a normal function return, while in fact it performs a transfer to the calculated address 0xB13B.

Now let us remind you that ARM processors have two modes and two instruction sets: ARM and Thumb. The low-order bit of the address determines which instruction set the processor will use. That is, the address is actually 0xB13A, and the set low-order bit indicates Thumb mode.

A similar "adapter" and some semantic garbage are added to the beginning of each function in this library. We won't dwell on them in detail; just remember that the real beginning of almost all functions is a little further on. Since no explicit transition to 0xB13A exists in the code, IDA cannot recognize that there is code there. For the same reason, it does not recognize most of the code in the library as code, which makes analysis a bit trickier. So, we tell IDA that there is code, and this is what happens:

![](https://habrastorage.org/r/w1560/webt/eq/wp/p3/eqwpp3exmqnybi7r_tdvvan4jls.png)

Starting from 0xB144, we clearly have the table. But what about sub\_494C?

![](https://habrastorage.org/r/w1560/webt/bo/9b/rn/bo9brnkfvauy3ntm2dnucnsxqhs.png)

When this function is called, the LR register holds the address of the above-mentioned table (0xB144), and R0 contains the index into this table. That is, we take the value from the table, add it to LR, and obtain the address we need to go to. Let's try to calculate it: 0xB144 + [0xB144 + 8 \* 4] = 0xB144 + 0x120 = 0xB264.
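The two address tricks above can also be checked mechanically. The sketch below recomputes both the patched return address and the jump-table target; the table contents are hypothetical except for the one entry (0x120 at index 8) visible in the dump:

```python
def patched_return_address(loaded=0xB130):
    # R1 = 0xB130; R0 = R1 - 5; R0 += 0x10 -> value written over the saved LR
    return loaded - 5 + 0x10

def split_thumb_address(addr):
    # the low-order bit only selects the instruction set (1 = Thumb)
    return addr & ~1, bool(addr & 1)

def dispatch_target(table_addr, table, index):
    # LR points at the table; the 32-bit entry is added to the table address
    return table_addr + table[index]

ret = patched_return_address()
entry, is_thumb = split_thumb_address(ret)
print(hex(ret), hex(entry), is_thumb)  # 0xb13b 0xb13a True

TABLE_ADDR = 0xB144
table = {8: 0x120}  # only this entry is known from the dump
print(hex(dispatch_target(TABLE_ADDR, table, 8)))  # 0xb264
```

This is just arithmetic bookkeeping, but it is the same calculation the deobfuscation script below has to perform for every dispatcher it patches.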
We navigate to this address and see a couple of useful instructions, then go to 0xB140:

![](https://habrastorage.org/r/w1560/webt/nz/95/4x/nz954xqtjmky6tnpaqeejhvecus.png)

Now there will be a transition by the offset at index 0x20 in the table. Judging by the size of the table, there will be many such transitions in the code, so we want to handle them automatically instead of calculating addresses by hand. Scripts and the ability to patch code in IDA come to our rescue:

```
def put_unconditional_branch(source, destination):
    offset = (destination - source - 4) >> 1
    if offset > 2097151 or offset < -2097152:
        raise RuntimeError("Invalid offset")
    if offset > 1023 or offset < -1024:
        instruction1 = 0xf000 | ((offset >> 11) & 0x7ff)
        instruction2 = 0xb800 | (offset & 0x7ff)
        patch_word(source, instruction1)
        patch_word(source + 2, instruction2)
    else:
        instruction = 0xe000 | (offset & 0x7ff)
        patch_word(source, instruction)

ea = here()
if get_wide_word(ea) == 0xb503:  # PUSH {R0,R1,LR}
    ea1 = ea + 2
    if get_wide_word(ea1) == 0xbf00:  # NOP
        ea1 += 2
    if get_operand_type(ea1, 0) == 1 and get_operand_value(ea1, 0) == 0 and get_operand_type(ea1, 1) == 2:
        index = get_wide_dword(get_operand_value(ea1, 1))
        print "index =", hex(index)
        ea1 += 2
        if get_operand_type(ea1, 0) == 7:
            table = get_operand_value(ea1, 0) + 4
        elif get_operand_type(ea1, 1) == 2:
            table = get_operand_value(ea1, 1) + 4
        else:
            print "Wrong operand type on", hex(ea1), "-", get_operand_type(ea1, 0), get_operand_type(ea1, 1)
            table = None
        if table is None:
            print "Unable to find table"
        else:
            print "table =", hex(table)
            offset = get_wide_dword(table + (index << 2))
            put_unconditional_branch(ea, table + offset)
    else:
        print "Unknown code", get_operand_type(ea1, 0), get_operand_value(ea1, 0), get_operand_type(ea1, 1) == 2
else:
    print "Unable to detect first instruction"
```

We put the cursor on line 0xB26A, run the script, and see the transition to 0xB4B0:
![](https://habrastorage.org/r/w1560/webt/52/rf/fx/52rffxhzmtrfdhfrznsr8dovfim.png)

Again, IDA does not recognize this place as code. We help it and see another structure there:

![](https://habrastorage.org/r/w1560/webt/ct/d9/kv/ctd9kvewm9l1blmw3pqipi8mnqm.png)

The instructions that follow the BLX do not look meaningful; they are more like some kind of offset. We look at sub\_4964:

![](https://habrastorage.org/r/w1560/webt/o7/wi/4-/o7wi4-imrothkr6qpkbrklj40aw.png)

Indeed, it takes the DWORD at the address in LR, adds it to that address, then takes the value at the resulting address and stores it on the stack. It also adds 4 to LR so that this offset is skipped after the function returns. Then the POP {R1} instruction takes the resulting value from the stack. Looking at what is located at the address 0xB4BA + 0xEA = 0xB5A4, we can see something similar to an address table:

![](https://habrastorage.org/r/w1560/webt/ev/hb/a8/evhba8ty8dtnv8niuxyfue0nkfk.png)

To patch this structure, we need to get two parameters from the code: the offset and the number of the register where we want to put the result. We will have to prepare a piece of code in advance for each possible register.
```
patches = {}
patches[0] = (0x00, 0xbf, 0x01, 0x48, 0x00, 0x68, 0x02, 0xe0)
patches[1] = (0x00, 0xbf, 0x01, 0x49, 0x09, 0x68, 0x02, 0xe0)
patches[2] = (0x00, 0xbf, 0x01, 0x4a, 0x12, 0x68, 0x02, 0xe0)
patches[3] = (0x00, 0xbf, 0x01, 0x4b, 0x1b, 0x68, 0x02, 0xe0)
patches[4] = (0x00, 0xbf, 0x01, 0x4c, 0x24, 0x68, 0x02, 0xe0)
patches[5] = (0x00, 0xbf, 0x01, 0x4d, 0x2d, 0x68, 0x02, 0xe0)
patches[8] = (0x00, 0xbf, 0xdf, 0xf8, 0x06, 0x80, 0xd8, 0xf8, 0x00, 0x80, 0x01, 0xe0)
patches[9] = (0x00, 0xbf, 0xdf, 0xf8, 0x06, 0x90, 0xd9, 0xf8, 0x00, 0x90, 0x01, 0xe0)
patches[10] = (0x00, 0xbf, 0xdf, 0xf8, 0x06, 0xa0, 0xda, 0xf8, 0x00, 0xa0, 0x01, 0xe0)
patches[11] = (0x00, 0xbf, 0xdf, 0xf8, 0x06, 0xb0, 0xdb, 0xf8, 0x00, 0xb0, 0x01, 0xe0)

ea = here()
if (get_wide_word(ea) == 0xb082  # SUB SP, SP, #8
        and get_wide_word(ea + 2) == 0xb503):  # PUSH {R0,R1,LR}
    if get_operand_type(ea + 4, 0) == 7:
        pop = get_bytes(ea + 12, 4, 0)
        if pop[1] == '\xbc':
            register = -1
            r = get_wide_byte(ea + 12)
            for i in range(8):
                if r == (1 << i):
                    register = i
                    break
            if register == -1:
                print "Unable to detect register"
            else:
                address = get_wide_dword(ea + 8) + ea + 8
                for b in patches[register]:
                    patch_byte(ea, b)
                    ea += 1
                if ea % 4 != 0:
                    ea += 2
                patch_dword(ea, address)
        elif pop[:3] == '\x5d\xf8\x04':
            register = ord(pop[3]) >> 4
            if register in patches:
                address = get_wide_dword(ea + 8) + ea + 8
                for b in patches[register]:
                    patch_byte(ea, b)
                    ea += 1
                patch_dword(ea, address)
        else:
            print "POP instruction not found"
    else:
        print "Wrong operand type on +4:", get_operand_type(ea + 4, 0)
else:
    print "Unable to detect first instructions"
```

We put the cursor at the beginning of the structure we want to replace (i.e.
0xB4B2) and run the script:

![](https://habrastorage.org/r/w780q1/webt/lv/mp/oo/lvmpoow5agndjnbbl-t3imyaisq.jpeg)

In addition to the already mentioned structures, the code includes the following:

![](https://habrastorage.org/r/w1560/webt/dk/nn/rw/dknnrwaye18zzz0xipny0_gdqfq.png)

As in the previous case, there is an offset after the BLX instruction:

![](https://habrastorage.org/r/w780q1/webt/fn/bw/uz/fnbwuzpo3qw1hp3vd9rvr2xfq1k.jpeg)

We take the offset at the address in LR, add it to LR, and navigate there: 0x72044 + 0xC = 0x72050. The script for this structure is quite simple:

```
def put_unconditional_branch(source, destination):
    offset = (destination - source - 4) >> 1
    if offset > 2097151 or offset < -2097152:
        raise RuntimeError("Invalid offset")
    if offset > 1023 or offset < -1024:
        instruction1 = 0xf000 | ((offset >> 11) & 0x7ff)
        instruction2 = 0xb800 | (offset & 0x7ff)
        patch_word(source, instruction1)
        patch_word(source + 2, instruction2)
    else:
        instruction = 0xe000 | (offset & 0x7ff)
        patch_word(source, instruction)

ea = here()
if get_wide_word(ea) == 0xb503:  # PUSH {R0,R1,LR}
    ea1 = ea + 6
    if get_wide_word(ea + 2) == 0xbf00:  # NOP
        ea1 += 2
    offset = get_wide_dword(ea1)
    put_unconditional_branch(ea, (ea1 + offset) & 0xffffffff)
else:
    print "Unable to detect first instruction"
```

The result of executing the script:

![](https://habrastorage.org/r/w780q1/webt/tm/fq/yo/tmfqyogtdhbje0atjckosbrgide.jpeg)

After we patch everything in this function, we can point IDA to its real beginning. It will then collect the entire function code piece by piece, and we'll be able to decompile it with Hex-Rays.

### Decrypting the strings

We've learned how to deal with the machine code obfuscation in the ***libsgmainso-6.4.36.so*** library from UC Browser and obtained the code of the *JNI\_OnLoad* function.
```
int __fastcall real_JNI_OnLoad(JavaVM *vm)
{
  int result; // r0
  jclass clazz; // r0 MAPDST
  int v4; // r0
  JNIEnv *env; // r4
  int v6; // [sp-40h] [bp-5Ch]
  int v7; // [sp+Ch] [bp-10h]

  v7 = *(_DWORD *)off_8AC00;
  if ( !vm )
    goto LABEL_39;
  sub_7C4F4();
  env = (JNIEnv *)sub_7C5B0(0);
  if ( !env )
    goto LABEL_39;
  v4 = sub_72CCC();
  sub_73634(v4);
  sub_73E24(&unk_83EA6, &v6, 49);
  clazz = (jclass)((int (__fastcall *)(JNIEnv *, int *))(*env)->FindClass)(env, &v6);
  if ( clazz
    && (sub_9EE4(),
        sub_71D68(env),
        sub_E7DC(env) >= 0
     && sub_69D68(env) >= 0
     && sub_197B4(env, clazz) >= 0
     && sub_E240(env, clazz) >= 0
     && sub_B8B0(env, clazz) >= 0
     && sub_5F0F4(env, clazz) >= 0
     && sub_70640(env, clazz) >= 0
     && sub_11F3C(env) >= 0
     && sub_21C3C(env, clazz) >= 0
     && sub_2148C(env, clazz) >= 0
     && sub_210E0(env, clazz) >= 0
     && sub_41B58(env, clazz) >= 0
     && sub_27920(env, clazz) >= 0
     && sub_293E8(env, clazz) >= 0
     && sub_208F4(env, clazz) >= 0) )
  {
    result = (sub_B7B0(env, clazz) >> 31) | 0x10004;
  }
  else
  {
LABEL_39:
    result = -1;
  }
  return result;
}
```

Let's look into the following lines:

```
sub_73E24(&unk_83EA6, &v6, 49);
clazz = (jclass)((int (__fastcall *)(JNIEnv *, int *))(*env)->FindClass)(env, &v6);
```

It's quite clear that the *sub\_73E24* function decrypts the class name. Its parameters are a pointer to what looks like encrypted data, a buffer, and a number. Obviously, the decrypted string will be in the buffer after the call, since the buffer is then handed to the *FindClass* function, which takes the class name as its second parameter. So the number is the size of the buffer or the length of the string. Let's try to decrypt the class name; it should tell us whether we are going in the right direction. Let's take a closer look at what happens in *sub\_73E24*.
```
int __fastcall sub_73E56(unsigned __int8 *in, unsigned __int8 *out, size_t size)
{
  int v4; // r6
  int v7; // r11
  int v8; // r9
  int v9; // r4
  size_t v10; // r5
  int v11; // r0
  struc_1 v13; // [sp+0h] [bp-30h]
  int v14; // [sp+1Ch] [bp-14h]
  int v15; // [sp+20h] [bp-10h]

  v4 = 0;
  v15 = *(_DWORD *)off_8AC00;
  v14 = 0;
  v7 = sub_7AF78(17);
  v8 = sub_7AF78(size);
  if ( !v7 )
  {
    v9 = 0;
    goto LABEL_12;
  }
  (*(void (__fastcall **)(int, const char *, int))(v7 + 12))(v7, "DcO/lcK+h?m3c*q@", 16);
  if ( !v8 )
  {
LABEL_9:
    v4 = 0;
    goto LABEL_10;
  }
  v4 = 0;
  if ( !in )
  {
LABEL_10:
    v9 = 0;
    goto LABEL_11;
  }
  v9 = 0;
  if ( out )
  {
    memset(out, 0, size);
    v10 = size - 1;
    (*(void (__fastcall **)(int, unsigned __int8 *, size_t))(v8 + 12))(v8, in, v10);
    memset(&v13, 0, 0x14u);
    v13.field_4 = 3;
    v13.field_10 = v7;
    v13.field_14 = v8;
    v11 = sub_6115C(&v13, &v14);
    v9 = v11;
    if ( v11 )
    {
      if ( *(_DWORD *)(v11 + 4) == v10 )
      {
        qmemcpy(out, *(const void **)v11, v10);
        v4 = *(_DWORD *)(v9 + 4);
      }
      else
      {
        v4 = 0;
      }
      goto LABEL_11;
    }
    goto LABEL_9;
  }
LABEL_11:
  sub_7B148(v7);
LABEL_12:
  if ( v8 )
    sub_7B148(v8);
  if ( v9 )
    sub_7B148(v9);
  return v4;
}
```

The sub\_7AF78 function creates a container instance for a byte array of the specified size (we will not examine these containers in detail). Two such containers are created here: one contains the string "***DcO/lcK+h?m3c\*q@***" (it is easy to guess that this is the key), the other holds the encrypted data. Both objects are then placed in a certain structure, which is passed to the *sub\_6115C* function. We may also note that this structure contains a field with the value 3. Let's see what happens next.
```
int __fastcall sub_611B4(struc_1 *a1, _DWORD *a2)
{
  int v3; // lr
  unsigned int v4; // r1
  int v5; // r0
  int v6; // r1
  int result; // r0
  int v8; // r0

  *a2 = 820000;
  if ( a1 )
  {
    v3 = a1->field_14;
    if ( v3 )
    {
      v4 = a1->field_4;
      if ( v4 < 0x19 )
      {
        switch ( v4 )
        {
          case 0u:
            v8 = sub_6419C(a1->field_0, a1->field_10, v3);
            goto LABEL_17;
          case 3u:
            v8 = sub_6364C(a1->field_0, a1->field_10, v3);
            goto LABEL_17;
          case 0x10u:
          case 0x11u:
          case 0x12u:
            v8 = sub_612F4(
                   a1->field_0,
                   v4,
                   *(_QWORD *)&a1->field_8,
                   *(_QWORD *)&a1->field_8 >> 32,
                   a1->field_10,
                   v3,
                   a2);
            goto LABEL_17;
          case 0x14u:
            v8 = sub_63A28(a1->field_0, v3);
            goto LABEL_17;
          case 0x15u:
            sub_61A60(a1->field_0, v3, a2);
            return result;
          case 0x16u:
            v8 = sub_62440(a1->field_14);
            goto LABEL_17;
          case 0x17u:
            v8 = sub_6226C(a1->field_10, v3);
            goto LABEL_17;
          case 0x18u:
            v8 = sub_63530(a1->field_14);
LABEL_17:
            v6 = 0;
            if ( v8 )
            {
              *a2 = 0;
              v6 = v8;
            }
            return v6;
          default:
            LOWORD(v5) = 28032;
            goto LABEL_5;
        }
      }
    }
  }
  LOWORD(v5) = -27504;
LABEL_5:
  HIWORD(v5) = 13;
  v6 = 0;
  *a2 = v5;
  return v6;
}
```

The field that was previously assigned the value 3 is used as the switch parameter. Let's take a look at case 3: the parameters that the previous function added to the structure (i.e., the key and the encrypted data) are passed to the sub\_6364C function. If we look closely at sub\_6364C, we can recognize the RC4 algorithm. So we have an algorithm and a key. Let's try to decrypt the name of the class. This is what we've got: com/taobao/wireless/security/adapter/JNICLibrary. Brilliant! We are on the right track.

### Command tree

Now we need to find the call to *RegisterNatives*, which will point us to the *doCommandNative* function.
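Before hunting for the RegisterNatives call, the cipher identified above is worth pinning down. sub\_6364C is plain RC4, keyed with the "DcO/lcK+h?m3c\*q@" string from the container code. A minimal self-contained sketch (the encrypted bytes from the binary are not reproduced here, so the demo simply round-trips the class name, relying on RC4 being its own inverse):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream in
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"DcO/lcK+h?m3c*q@"
name = b"com/taobao/wireless/security/adapter/JNICLibrary"
# Encrypting twice with the same key restores the input.
assert rc4(key, rc4(key, name)) == name
```

With the algorithm and the key confirmed, we can move on to the registration call.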
So we look through the functions called from *JNI\_OnLoad*, and find it in *sub\_B7B0*:

```
int __fastcall sub_B7F6(JNIEnv *env, jclass clazz)
{
  char signature[41]; // [sp+7h] [bp-55h]
  char name[16]; // [sp+30h] [bp-2Ch]
  JNINativeMethod method; // [sp+40h] [bp-1Ch]
  int v8; // [sp+4Ch] [bp-10h]

  v8 = *(_DWORD *)off_8AC00;
  decryptString((unsigned __int8 *)&unk_83ED9, (unsigned __int8 *)name, 0x10u); // doCommandNative
  decryptString((unsigned __int8 *)&unk_83EEA, (unsigned __int8 *)signature, 0x29u); // (I[Ljava/lang/Object;)Ljava/lang/Object;
  method.name = name;
  method.signature = signature;
  method.fnPtr = sub_B69C;
  return ((int (__fastcall *)(JNIEnv *, jclass, JNINativeMethod *, int))(*env)->RegisterNatives)(env, clazz, &method, 1) >> 31;
}
```

And indeed, a native method with the name *doCommandNative* is registered here. Now we know its address. Let's have a look at what it does.

```
int __fastcall doCommandNative(JNIEnv *env, jobject obj, int command, jarray args)
{
  int v5; // r5
  struc_2 *a5; // r6
  int v9; // r1
  int v11; // [sp+Ch] [bp-14h]
  int v12; // [sp+10h] [bp-10h]

  v5 = 0;
  v12 = *(_DWORD *)off_8AC00;
  v11 = 0;
  a5 = (struc_2 *)malloc(0x14u);
  if ( a5 )
  {
    a5->field_0 = 0;
    a5->field_4 = 0;
    a5->field_8 = 0;
    a5->field_C = 0;
    v9 = command % 10000 / 100;
    a5->field_0 = command / 10000;
    a5->field_4 = v9;
    a5->field_8 = command % 100;
    a5->field_C = env;
    a5->field_10 = args;
    v5 = sub_9D60(command / 10000, v9, command % 100, 1, (int)a5, &v11);
  }
  free(a5);
  if ( !v5 && v11 )
    sub_7CF34(env, v11, &byte_83ED7);
  return v5;
}
```

The name suggests that it is the entry point for all the functions the developers moved to the native library. We're specifically interested in function number 10601. From the code, we can see that the command number gives us three numbers: command / 10000, command % 10000 / 100, and command % 100 (in our case, 1, 6, and 1).
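The decomposition of the command number is easy to verify, and, jumping ahead, the three components serve as a path into the dispatch tree described next. A toy model of that lookup (the XOR key and tree layout here are illustrative, not taken from the binary):

```python
def split_command(command: int):
    # Mirrors doCommandNative: N1 = command / 10000,
    # N2 = command % 10000 / 100, N3 = command % 100.
    return command // 10000, command % 10000 // 100, command % 100

assert split_command(10601) == (1, 6, 1)

# Illustrative three-level dispatch: the leaf holds a handler address
# XOR-masked with a key kept in the parent node (values hypothetical;
# 0x5F1AC is the traffic-decryption routine mentioned below).
XOR_KEY = 0xDEADBEEF
tree = {1: {6: {1: 0x5F1AC ^ XOR_KEY}}}

n1, n2, n3 = split_command(10601)
assert tree[n1][n2][n3] ^ XOR_KEY == 0x5F1AC
```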
These three numbers, as well as the pointer to JNIEnv and the arguments passed to the function, make up a structure and are passed on. With these three numbers (we'll denote them N1, N2, and N3), a command tree is constructed. Something like this:

![](https://habrastorage.org/r/w1560/webt/xg/jk/l1/xgjkl1uhzmibo-nhmha3kpivf5a.png)

The tree is created dynamically in JNI\_OnLoad. The three numbers encode a path in the tree. Each leaf of the tree contains the corresponding function's XOR-ed address; the XOR key is in the parent node. It is quite easy to find the place in the code where the function we need is added to the tree, once you understand all the structures involved (we won't spend time describing them in this article).

### More obfuscation

We've got the address of the function that is supposed to decrypt the traffic: 0x5F1AC. But it's still too early to relax: the UC Browser developers have another surprise for us. After receiving the parameters from an array in the Java code, we get to the function at 0x4D070, where another type of code obfuscation is waiting. Two indices are pushed into R7 and R4:

![](https://habrastorage.org/r/w1560/webt/7i/fx/86/7ifx86mr4yph41jsxukpjqvxmfa.png)

The first index is moved to R11:

![](https://habrastorage.org/r/w780q1/webt/fa/62/6j/fa626jaylcaaewtslff_-byaseo.jpeg)

We use this index to obtain an address from the table:

![](https://habrastorage.org/r/w1560/webt/co/rk/s1/corks1o2dp75elfvdsxjimrohuo.png)

After the jump to the first address, the second index, from R4, is used in the same way. The table contains 230 elements. What do we do with it? We can tell IDA that this is a kind of switch: Edit -> Other -> Specify switch idiom.

![](https://habrastorage.org/r/w780q1/webt/36/uu/xn/36uuxnabf3xesptr8qisxzzfutk.jpeg)

The resulting code is horrendous.
However, we can see the call to the familiar *sub\_6115C* function in its tangles:

![](https://habrastorage.org/r/w780q1/webt/-d/pt/3a/-dpt3akb_yckinmdheczkktzlem.jpeg)

This is the function whose switch performed the RC4 decryption in case 3. This time, the structure that is passed to the function is filled with the parameters passed to doCommandNative. We recall that we had magicInt there with the value 16. We look at the corresponding case and, after several jumps, find the code that lets us identify the algorithm.

![](https://habrastorage.org/r/w780q1/webt/rn/_1/i4/rn_1i4avl8oki83hslk5u19ygoe.jpeg)

It's AES! We have the algorithm, and we only need to get its parameters: the mode, the key, and (possibly) the initialization vector (its presence depends on the AES mode of operation). The structure that contains them should be created somewhere before the call to the sub\_6115C function. But since this part of the code is particularly well obfuscated, we decided to patch the code so that all the parameters of the decryption function would be dumped into a file.

### Patch

If you don't want to write all the patch code in assembly by hand, you can run Android Studio, write a function that receives the same parameters as our decryption function and writes them to a file, then copy the code generated by the compiler. Our good friends from the UC Browser team also "ensured" the convenience of adding code: remember that there is garbage code at the beginning of each function, which can easily be replaced with anything else. Very convenient :) However, there is not enough room at the beginning of the target function for code that saves all the parameters to a file. We had to divide it into parts and use the garbage blocks of neighboring functions. We got four parts in total.
Part one:

![](https://habrastorage.org/r/w780q1/webt/hy/lk/qr/hylkqrhqgyei6qjdj6ayvgivpqu.jpeg)

In the ARM architecture, the first four function parameters are passed in registers R0-R3; the rest, if any, are passed via the stack. The LR register holds the return address. We need to save all of this so the function can keep working after we dump its parameters, and we also need to save every register we use in the process, so we use PUSH.W {R0-R10,LR}. In R7, we get the address of the list of parameters passed to the function via the stack.

Using the fopen function, we open the /data/local/tmp/aes file in "ab" mode, i.e. for appending. We load the address of the file name into R0 and the address of the mode string into R1. This is where the garbage code ends, so we continue in the next function. Since we want that function to keep working, we put a jump to its actual code at the beginning, before the garbage, and replace the garbage with the rest of the patch.

![](https://habrastorage.org/r/w1560/webt/68/ss/kf/68sskfcymjorl_flpf8mg9wwej4.png)

Then we call fopen. The first three parameters of the aes function are of type int. Since we pushed the registers to the stack at the beginning, we can simply pass their addresses on the stack to the fwrite function.

![](https://habrastorage.org/r/w780q1/webt/d1/yt/iz/d1ytiz_jx7byflntqxxec2nece0.jpeg)

Next, we have three structures that hold the size of the data and a pointer to the data for the key, the initialization vector, and the encrypted data.

![](https://habrastorage.org/r/w780q1/webt/me/vy/kz/mevykztpkcwdr6xkpnenauxqccs.jpeg)

At the end, we close the file, restore the registers, and give control back to the actual aes function. We compile the APK with the patched library, sign it, install it on a device or emulator, and run it. Now we see that the dump is being created and contains a lot of data.
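For reference, here is a rough Python model of the dump record such a patch produces. The exact layout (three raw little-endian ints followed by three size-prefixed blobs for the key, the IV, and the data) is our assumption for illustration, not a byte-accurate copy of what the assembly writes:

```python
import os
import struct
import tempfile

def dump_call(path, a, b, c, key, iv, data):
    # Append one record, the way the fwrite patches would.
    with open(path, "ab") as f:  # "ab": append, create if missing
        f.write(struct.pack("<iii", a, b, c))
        for blob in (key, iv, data):
            f.write(struct.pack("<I", len(blob)))
            f.write(blob)

def read_calls(path):
    # Parse every record back out of the dump file.
    records = []
    with open(path, "rb") as f:
        raw = f.read()
    off = 0
    while off < len(raw):
        a, b, c = struct.unpack_from("<iii", raw, off)
        off += 12
        blobs = []
        for _ in range(3):
            (n,) = struct.unpack_from("<I", raw, off)
            off += 4
            blobs.append(raw[off:off + n])
            off += n
        records.append((a, b, c, *blobs))
    return records

path = os.path.join(tempfile.mkdtemp(), "aes")
dump_call(path, 16, 0, 0, b"K" * 16, b"I" * 16, b"ciphertext")
assert read_calls(path) == [(16, 0, 0, b"K" * 16, b"I" * 16, b"ciphertext")]
```

A parser like read_calls is what you would run on the pulled /data/local/tmp/aes file afterwards.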
The browser does not just decrypt traffic; it decrypts other data too, and all of that decryption goes through this function. For some reason, though, we don't see the data we need, and the request we are expecting is not visible in the traffic. Rather than wait until UC Browser gets around to making this request, let's take the encrypted response obtained from the server earlier and patch the application again. We'll add the decryption to the onCreate of the main activity.

```
const/16 v1, 0x62
new-array v1, v1, [B
fill-array-data v1, :encrypted_data
const/16 v0, 0x1f
invoke-static {v0, v1}, Lcom/uc/browser/core/d/c/g;->j(I[B)[B
move-result-object v1
array-length v2, v1
invoke-static {v2}, Ljava/lang/String;->valueOf(I)Ljava/lang/String;
move-result-object v2
const-string v0, "ololo"
invoke-static {v0, v2}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
```

We compile, sign, install, and run, and get a NullPointerException: the method returns null. After analyzing the code further, we found a function with rather interesting strings: "META-INF/" and ".RSA". It looks like the app verifies its certificate, or even derives keys from it. We don't really want to dig into what exactly happens with the certificate, so let's just give it the correct one. We patch the encrypted string so that instead of "META-INF/" we get "BLABLINF/", create a folder with that name in the APK file, and put the browser certificate in it. We compile, sign, install, and run. Bingo! We have the key!

### MitM

Now we've got the key and an initialization vector that is equal to the key. Let's try to decrypt the server response in CBC mode.

![](https://habrastorage.org/r/w1560/webt/9k/m7/un/9km7unymy_ybx1lwnflfbqd36ke.png)

We see the archive URL, something like an MD5, "extract\_unzipsize", and a number. Let's check: the MD5 of the archive matches, and so does the size of the unzipped library. Now we'll try to patch this library and deliver it to the browser.
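As an aside, the integrity check implied here (comparing the archive's MD5 and the unpacked size against the fields of the decrypted response) can be modeled in a few lines; the archive below is a stand-in, not the real library:

```python
import hashlib
import zlib

# Toy stand-ins for the real archive: compress some "library" bytes.
library = b"\x7fELF" + b"\x00" * 64
archive = zlib.compress(library)

# Fields the server response carries (computed here from the sample).
expected_md5 = hashlib.md5(archive).hexdigest()
expected_unzipsize = len(library)  # cf. "extract_unzipsize"

# The update is accepted only if both values match.
assert hashlib.md5(archive).hexdigest() == expected_md5
assert len(zlib.decompress(archive)) == expected_unzipsize
```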
To show that our patched library has loaded, we'll build an Intent to display the text message "PWNED!" We'll replace two responses from the server: [puds.ucweb.com/upgrade/index.xhtml](http://puds.ucweb.com/upgrade/index.xhtml) and the one that prompts the archive download. In the first, we substitute the MD5 (the size after unzipping stays the same); in the second, we send the archive with the patched library. The browser makes several attempts to download the archive, each resulting in an error. Apparently, something fishy is happening there. Analyzing this bizarre format, we found that the server also transmits the archive size:

![](https://habrastorage.org/r/w1560/webt/e2/-v/ua/e2-vuahng86h0bt0vm84f3xhqrg.png)

It is LEB128 encoded. Our patch slightly changed the size of the compressed library, so the browser decided that the archive was corrupted during download and displayed an error after several attempts. So we fix the archive size and… voila! :) See the result in the video.

### Consequences and developer's response

In the same way, hackers can use this insecure feature of UC Browser to distribute and launch malicious libraries. These libraries will work in the context of the browser and thus receive the same system privileges the browser has. This gives them free rein to display phishing windows, as well as access to the browser's working files, including the logins, passwords, and cookies stored in its database.

We contacted the UC Browser developers, informed them about the problem we had found, and tried to point out the vulnerability and its danger, but they refused to discuss the matter. Meanwhile, the browser with the dangerous feature remained in plain sight. But once we revealed the details of the vulnerability, it became impossible to ignore as before. A new version, UC Browser 12.10.9.1193, was released on March 27, which accessed the server via HTTPS: [puds.ucweb.com/upgrade/index.xhtml](https://puds.ucweb.com/upgrade/index.xhtml).
In addition, between the "bug fixing" and the time we wrote this article, an attempt to open a PDF in the browser resulted in an error message with the text "Oops, something is wrong." No request to the server was made when opening the PDF file; instead, a request was made at startup, which hints that the ability to download executable code, in violation of Google Play policies, is still present.
https://habr.com/ru/post/452076/
Twitter is a Web-based social media site that lets you communicate with followers, through the Twitter GUI, using short messages known as tweets. Tweets are limited to a maximum of 140 characters, a limitation based on the state of mobile devices at the time Twitter was developed. But it is a welcome constraint, as it prevents unnecessary spam and verbal clutter within a single tweet. Now that you are familiar with Twitter, it's time to move to the next level by familiarizing yourself with Twitter Search. As I said, Twitter is an online community filled with tweets, or brief statements that users make to their followers. With that in mind, wouldn't it be great if you could find a bunch of tweets related to a specific subject? Good news: with Twitter Search, you can. You can search by keywords, topic, author, language, and a variety of other criteria. Head on over to the Twitter Search page to see this in action. Type a keyword that you want to search for (for example, Java™), and then click Search. Voilà! A series of tweets, newest to oldest, appears on the screen.

But how can you search by topic instead of by keyword? Keep in mind that tweets specific to a particular topic contain the topic name preceded by the hash symbol/pound sign ( #). For example, a Star Trek enthusiast might tweet something about the new movie and within that tweet include #startrek to let people know that this particular tweet is about Star Trek. To search for tweets by topic, simply include the topic name (including the hash symbol/pound sign) in your keyword search. Following the previous example, simply go to the Twitter Search page, type #startrek, then click Search. You will see a list of tweets specific to Star Trek.

The Twitter Search API

The Twitter Search API is great for manual searches as a user. But wouldn't it be great if you, as an outstanding software developer, could programmatically search for tweets based on topic or keyword? More fabulous news: You can.
Like many other great Web applications, Twitter Search provides a REST API so that you can search for tweets in an automated fashion. Before delving too fully into the API, however, it is probably best to cover the REST concept first for those unfamiliar with it.

What is REST?

REST, for purposes of this article, enables developers to access information and resources using a simple HTTP invocation. Think of REST this way: You can obtain domain-specific data simply by pointing a URL to a specific location. You can also think of it as a simplified Web service, but if you say that too loudly around the wrong people, you might find yourself in the middle of a debate. So, the Twitter Search API is a REST service that enables users to point to a specific URL and retrieve a variety of tweets that meet the criteria specified in the URL. This enables you, as a developer, to accept input within a Web application and dynamically query Twitter based on that input, using a simple URL that encodes the input into a format that the API understands.

Getting started: A simple example

Consider the example in Listing 1.

Listing 1. A simple example of a Twitter search

http://search.twitter.com/search.atom?q=java

This query is very easy to parse. The domain is intuitive: search.twitter.com. This is where the Search API resides. After the first slash is the service that you are executing, in this case the word search. It might seem peculiar to even require the word search here, as the word is already in the domain name, but this is an attempt by the good folks at Twitter to keep things consistent with the basic Twitter API, which uses numerous functions. Following search is an .atom extension. This simply means that the results of the search will be returned in Atom format. Additional formats that you can use are RSS (.rss) and JavaScript Object Notation (JSON, .json). Next comes the only request parameter, q, which is short for query. And following that is the value of that parameter: in this case, java.
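The anatomy just described maps directly onto the standard URL components, and the percent-encodings that show up in later listings (%3A for a colon, %23 for a hash sign) are exactly what a standard encoder produces. A quick check in Python:

```python
from urllib.parse import parse_qs, quote, urlsplit

# Listing 1 dissected: host, service path with the .atom extension,
# and the single q parameter.
parts = urlsplit("http://search.twitter.com/search.atom?q=java")
assert parts.netloc == "search.twitter.com"
assert parts.path == "/search.atom"
assert parse_qs(parts.query) == {"q": ["java"]}

# Reserved characters in query values get percent-encoded.
assert quote("to:johnqpublic", safe="") == "to%3Ajohnqpublic"
assert quote("#startrek", safe="") == "%23startrek"
```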
To summarize, the URL in the code example above tells the Twitter Search API to search for all recent tweets containing the word java (case is insensitive) and return the results in Atom format.

Parsing the output

Now, point your browser to the address in Listing 1. The actual output returned to your screen will vary depending on which browser you use and which version it is. To keep things consistent when you view the source, right-click the screen, and then click View Source. You should see something similar to Listing 2.

Listing 2. Output from a simple search (partial output)

<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:twitter="http://api.twitter.com/">
  <id>tag:search.twitter.com,2005:search/java</id>
  <link type="text/html" rel="alternate" href=""/>
  <link type="application/atom+xml" rel="self" href=""/>
  <title>java - Twitter Search</title>
  <link type="application/opensearchdescription+xml" rel="search" href=""/>
  <link type="application/atom+xml" rel="refresh" href=""/>
  <twitter:warning>since_id removed for pagination.</twitter:warning>
  <updated>2009-06-01T12:11:26Z</updated>
  <openSearch:itemsPerPage>15</openSearch:itemsPerPage>
  <link type="application/atom+xml" rel="next" href=""/>
  <entry>
    <id>tag:search.twitter.com,2005:1990561514</id>
    <published>2009-06-01T12:11:26Z</published>
    <link type="text/html" rel="alternate" href=""/>
    <title>D/L latest upgrade for Google's Chrome Browser &amp; like it. Faster, esp w Java</title>
    <content type="html">D/L latest upgrade for Google's Chrome Browser &amp; like it. Faster, esp w &lt;b&gt;Java&lt;/b&gt;</content>
    <updated>2009-06-01T12:11:26Z</updated>
    <twitter:source>&lt;a href=""&gt;web&lt;/a&gt;</twitter:source>
    <twitter:lang>en</twitter:lang>
    <author>
      <name>GailR (Gail R)</name>
      <uri></uri>
    </author>
  </entry>
  ...
</feed>

Note: Your output will look totally different in content but identical in structure. This is because I ran my search at a completely different time than you are running yours, so the recent tweets for me will be different from the recent tweets for you.
Recall that the default search sorts tweets from newest to oldest. Here's a breakdown of the code:

- Note that the root element is feed. This is standard according to the Atom specification (see Resources for links to more information about Atom). The namespaces that Twitter uses are specified as attributes in the root element.
- The title element provides a synopsis of the query, which is useful if you are simply parsing the output but might not have been the one who created the query.
- The link elements provide the URL for the query itself. You can plug those into your browser and get the same results.
- The entry stanza represents a tweet. Although for the sake of brevity only one is listed, in reality, there will be many of these in your output. Notice that title and content are the same in actual content. This is because tweets have no titles, so it makes sense that the title is the actual tweet itself. Recall that Atom is designed for article-type documents, which usually have a headline, then a main body. Because that is not the case with tweets, the two elements contain identical content.
- The id element is required by Atom and is a globally unique identifier (GUID) for this particular tweet. All tweets across the universe of Twitter will have unique IDs so they can be referenced individually.
- The published and updated dates and times are also identical. This makes sense, because the tweet was never updated.
- The first link element provides a link to this single tweet. Paste it into a browser, and you'll see that same tweet.
- The source element (specified as twitter:source because it is in the Twitter namespace) indicates how the tweet was posted; in this case, via the web.
- The lang element (specified as twitter:lang because it is in the Twitter namespace) indicates the language of the tweet; here, en is used.
- The author stanza provides information about the Twitter user.

Crafting more complex searches

The example provided so far is fairly rudimentary: It is a search for one word only. However, the Twitter Search API provides a powerful set of criteria parameters and supports complex queries.
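If you would rather extract these elements programmatically than eyeball them, the Python standard library handles the Atom output directly. A minimal sketch (the feed below is a pared-down stand-in for Listing 2, and the namespace URIs are assumptions that only need to match the feed being parsed):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
TWITTER = "http://api.twitter.com/"
feed_xml = f"""<feed xmlns="{ATOM}" xmlns:twitter="{TWITTER}">
  <title>java - Twitter Search</title>
  <entry>
    <title>Faster, esp w Java</title>
    <twitter:lang>en</twitter:lang>
    <author><name>GailR (Gail R)</name></author>
  </entry>
</feed>"""

root = ET.fromstring(feed_xml)
ns = {"a": ATOM, "t": TWITTER}
# Collect (tweet text, language, author) from every entry stanza.
entries = [
    (entry.findtext("a:title", namespaces=ns),
     entry.findtext("t:lang", namespaces=ns),
     entry.findtext("a:author/a:name", namespaces=ns))
    for entry in root.findall("a:entry", ns)
]
assert entries == [("Faster, esp w Java", "en", "GailR (Gail R)")]
```

The same loop works unchanged on a real response, since each tweet is just one more entry stanza.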
Suppose, for example, you want to search for tweets directed to a specific user. In tweet language, users direct their tweets to a specific user by prepending an at sign ( @) to the user's screen name (for example, @johnqpublic). For searching purposes, you can disregard the @ sign and simply search for a tweet directed at a specific user. See Listing 3.

Listing 3. Searching for tweets directed to a specific user

http://search.twitter.com/search.atom?q=to%3Ajohnqpublic

Note the %3A in the middle of the URL, just in front of the user's name: that is the URL encoding for a colon ( :). It follows the to prefix, so you can read it as follows: to:johnqpublic. If you want to search for tweets from a particular user instead of to a particular user, simply substitute the word from for to in Listing 3.

If you want to search for a specific topic, you simply need to encode the hash tag/pound sign for the URL. That code is %23, so an API search for #startrek would look like Listing 4.

Listing 4. Searching for tweets by topic

http://search.twitter.com/search.atom?q=%23startrek

Note that, like so many other search engines, you can use AND and OR within your Twitter searches. This is accomplished by placing +AND+ and +OR+ between the query values, respectively. For an example, see Listing 5, which returns all recent tweets containing either #startrek or #americanidol.

Listing 5. Searching for tweets containing either "#startrek" or "#americanidol"

http://search.twitter.com/search.atom?q=%23startrek+OR+%23americanidol

You also have the option of specifying the lang parameter so that your query only returns results in a specific language. The value of the lang parameter must match one of the language codes included in the ISO 639-1 specification. See an example in Listing 6.

Listing 6. Searching for Star Trek tweets in English

http://search.twitter.com/search.atom?q=%23startrek&lang=en

You can also restrict the search results based on date. Use the parameters since and until to return tweets no older than a certain date or no later than a certain date, respectively. An example is found in Listing 7.

Listing 7.
Searching for Star Trek tweets since 1 May 2009

Conclusion

Twitter is a social networking phenomenon that facilitates microblogging among interested parties. It has skyrocketed in popularity just over the past year as everyone from postal workers to celebrities finds themselves tweeting on a regular basis. In compliance with unwritten rules of the information superhighway, Twitter also provides a Search utility so that people can search for tweets based on a specific set of criteria. The search utility enables searches to be performed either manually using a Web page (the same way many people use Google, for example) or through a REST invocation.

Using the Twitter Search API enables Web application developers to automate Twitter searches. Developers can use it to display up-to-the-minute, content-specific tweets within their own (or their client's) Web applications. It is an outstanding utility for those interested in gleaning even more domain-specific information from the Internet.

Resources

Learn
- Search API documentation: Check out the Twitter Search API documentation.
- RESTful Web services: The basics (Alex Rodriguez, developerWorks, November 2008): Read an excellent overview of REST.
- The Atom specification: Check out Wikipedia's overview of the Atom specification.
- Request for Comments (RFC) 4287: Read the complete Atom specification.
- The Twitter site: Explore the Twitter service. Try it and be connected with friends, family, and co-workers as you exchange short messages about what you do.
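The operators from Listings 3 through 7 can also be produced programmatically. The following Python sketch is my own illustration of the URL encoding the article describes, not code from the article; the base URL and parameter names are assumptions based on the listings' descriptions.

```python
from urllib.parse import urlencode

# Assumed base URL for illustration only; the article's listings carry the real one.
SEARCH_URL = "http://search.twitter.com/search.atom"

def build_search_url(query, lang=None, since=None, until=None):
    """URL-encode a search query: ':' becomes %3A, '#' becomes %23, spaces become '+'."""
    params = {"q": query}
    if lang:
        params["lang"] = lang      # ISO 639-1 language code, e.g. "en"
    if since:
        params["since"] = since    # YYYY-MM-DD
    if until:
        params["until"] = until    # YYYY-MM-DD
    return SEARCH_URL + "?" + urlencode(params)

# "to:johnqpublic" is encoded as q=to%3Ajohnqpublic, matching Listing 3.
print(build_search_url("to:johnqpublic"))
# "#startrek OR #americanidol" becomes q=%23startrek+OR+%23americanidol, as in Listing 5.
print(build_search_url("#startrek OR #americanidol", lang="en"))
```

Because urlencode uses plus-style quoting, the space-separated OR query naturally comes out in the +OR+ form the article shows.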
http://www.ibm.com/developerworks/library/x-twitsrchapi/
Marker (engine/model) @ckeditor/ckeditor5-engine/src/model/markercollection

A marker is a continuous part of the model (like a range), is named, and represents some kind of information about the marked part of the model document. In contrast to nodes, which are the building blocks of the model document tree, markers are not stored directly in the document tree but in the model's markers collection. Still, they are document data: they give additional meaning to the part of a model document between the marker start and the marker end.

In this sense, markers are similar to adding and converting attributes on nodes. The difference is that an attribute is connected with a given node (e.g. a character is bold no matter if it gets moved or the content around it changes). Markers, on the other hand, are continuous ranges and are characterized by their start and end positions. This means that any character in the marker is marked by the marker. For example, if a character is moved outside of a marker, it stops being "special" and the marker shrinks. Similarly, when a character is moved into the marker from another place in the document model, it starts being "special" and the marker is enlarged.

Another upside of markers is that finding the marked part of the document is fast and easy. Using attributes to mark some nodes and then trying to find that part of the document would require traversing the whole document tree. A marker gives instant access to the range which it is marking at the moment.

Markers are built from a name and a range. The range of the marker is updated automatically when the document changes, using the live range mechanism. The name is used to group and identify markers. Names have to be unique, but markers can be grouped by using common prefixes, separated with :, for example: user:john or search:3. That's useful in terms of creating namespaces for custom elements (e.g. comments, highlights). You can use these prefixes in update event listeners to listen for changes in a group of markers.
For instance:

model.markers.on( 'update:user', callback );

will be called whenever any user:* marker changes.

There are two types of markers.

Markers managed directly, without using operations. They are added directly by the Writer to the MarkerCollection without any additional mechanism. They can be used as bookmarks or visual markers. They are great for showing results of a find, or for selecting a link when the focus is in the input.

Markers managed using operations. These markers are also stored in the MarkerCollection, but changes in these markers are managed the same way as all other changes in the model structure: using operations. Therefore, they are handled in the undo stack and synchronized between clients if the collaboration plugin is enabled. This type of marker is useful for solutions like spell checking or comments.

Both types should be added/updated by the addMarker method and removed by the removeMarker method.

model.change( ( writer ) => {
	const marker = writer.addMarker( name, { range, usingOperation: true } );

	// ...

	writer.removeMarker( marker );
} );

See Writer to find more examples.

Since markers need to track changes in the document, for efficiency reasons it is best to create and keep as few markers as possible and to remove them as soon as they are not needed anymore.

Markers can be downcasted and upcasted. Marker downcast happens on the addMarker and removeMarker events. Use downcast converters or attach a custom converter to the mentioned events. For the data pipeline, a marker should be downcasted to an element. Then, it can be upcasted back to a marker. Again, use upcast converters or attach a custom converter to the element event.

Marker instances are created and destroyed only by the MarkerCollection.

Properties

A value indicating if the marker changes the data.

A value indicating if the marker is managed using operations. See the marker class description to learn more about marker types. See addMarker.

The marker's name.
_liveRange : LiveRange protected

The range marked by the marker.

_affectsData : Boolean private

Specifies whether the marker affects the data produced by the data pipeline (is persisted in the editor's data).

_managedUsingOperations : Boolean private

A flag indicating if the marker is managed using operations or not.

Methods

constructor( name, liveRange, managedUsingOperations, affectsData )

Creates a marker instance.

Parameters

name : String
The marker name.

liveRange : LiveRange
The range marked by the marker.

managedUsingOperations : Boolean
Specifies whether the marker is managed using operations.

affectsData : Boolean
Specifies whether the marker affects the data produced by the data pipeline (is persisted in the editor's data).

Returns the current marker end position.

Returns a range that represents the current state of the marker. Keep in mind that the returned value is a Range, not a LiveRange. This means that it is up-to-date and relevant only until the next model document change. Do not store values returned by this method. Instead, store the name and get the Marker instance from the MarkerCollection every time there is a need to read the marker's properties. This will guarantee that the marker has not been removed and that its data is up-to-date.

Returns the current marker start position.

Checks whether this object is of the given type.

marker.is( 'marker' ); // -> true
marker.is( 'model:marker' ); // -> true
marker.is( 'view:element' ); // -> false
marker.is( 'documentSelection' ); // -> false

Check the entire list of model objects which implement the is() method.

Parameters

type : String

Returns

Boolean

_attachLiveRange( liveRange ) → LiveRange protected

Binds a new live range to the marker and detaches the old one, if one is attached.

_detachLiveRange() protected

Unbinds and destroys the currently attached live range.

Events

change:content( eventInfo, oldRange, data )

Fired whenever a change on the Document is done inside the marker range.
This is a delegated LiveRange change:content event.

change:range( eventInfo, oldRange, data )

Fired whenever the marker range is changed due to changes on the Document. This is a delegated LiveRange change:range event.
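As an aside, the prefix-based grouping described at the top of this page (a listener on update:user firing for any user:* marker) boils down to a simple matching rule. The sketch below is a plain Python illustration of that rule, not CKEditor code; the function names are invented for this example.

```python
def marker_group(marker_name):
    """Everything before the first ':' is the marker's group prefix."""
    return marker_name.split(":", 1)[0]

def listener_fires(listener_prefix, marker_name):
    """A listener registered for 'update:<prefix>' fires for markers named '<prefix>:*'."""
    return marker_group(marker_name) == listener_prefix
```

So a listener on update:user would fire for user:john but not for search:3.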
https://ckeditor.com/docs/ckeditor5/latest/api/module_engine_model_markercollection-Marker.html
With the ESP32 we can control an LED light easily using our own web server. Let's have a quick look at how to build an ESP32 web server for PWM-based LED control.

ESP32 Board

The ESP32 board is the advanced generation of the ESP8266. One of the differences is its built-in Bluetooth. It also has a 2.4 GHz Wi-Fi core and built-in Bluetooth, built with 40-nanometer technology from TSMC. This module has the best performance in energy consumption: it brings the best results with the least energy consumption. If we want to take a closer look at this board, we must say that the ESP32 chip is implemented on a development board, which is also called a System on a Chip.

Pulse Width Modulation (PWM)

Pulse Width Modulation, or PWM, is a way to regulate electrical power by changing the cut-off time and the connection of the source in each cycle. In fact, PWM is a square signal that can be 1 (that is, 5 volts) or 0 (which means 0 volts) at any given time. For example, if the duty cycle of a PWM wave is equal to 80%, then in each cycle the voltage is equal to 5 volts for 80% of the time and equal to 0 for the remaining 20%. The PWM is shown in the figure below.

Working

In this project, ESP32 creates a web page for us through which we can change the PWM value from 0 to 255. As a result, the PWM output will change accordingly. This change will help in changing the intensity of the LED light, which is connected to GPIO2. In fact, when we change the value on the web page, an HTTP request is sent from the web server to the ESP32 for the applied change, and then the ESP32 adjusts the PWM value according to the received number.

Arduino IDE Configuration

In this tutorial, we will use the ESP32 board. Follow the steps to install this board in the Arduino software. First, in the Arduino IDE software, go to File -> Preferences and put the following links in the specified section.
Then go to Tools -> Board -> Boards Manager. In the board manager section, search for the word ESP32 and click on Install.

Required Libraries

This tutorial uses the following three libraries, as mentioned in the header file. You can follow the installation steps below.

- WiFi.h
- AsyncTCP.h
- ESPAsyncWebServer.h

Download the library from the following links. First go to Sketch -> Include Library -> Add .zip Library and add the .zip file you downloaded.

Connection

You can connect any color of LED with the ESP32 board. This is just a tutorial.

Code Analysis

I will first describe parts of the code. In this section we add the required libraries, ESPAsyncWebServer and ESPAsyncTCP, as well as Wi-Fi, which are needed to build the ESP32 web server.

#include <WiFi.h>
#include <AsyncTCP.h>
#include <ESPAsyncWebServer.h>

In this section, the user's Internet details are required to establish a connection to the network.

const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

This line of code resets the initial value to zero.

String sliderValue = "0";

This section deals with the PWM settings.

const int freq = 5000;
const int ledChannel = 0;
const int resolution = 8;

Final Code

You can download the code from the download link. Unzip it and open the code using the Arduino IDE. Then upload the code to the ESP32. In case you are not sure about uploading the code, you can refer to the article on Uploading Code in ESP32.

Construction & Testing

After uploading the code to the ESP32 chip, go to the Arduino IDE and open the Serial Monitor. Reset the ESP32 module and it will try to connect to Wi-Fi. After connecting to the network it will show you an IP address. Now put this IP address in a browser and you can access the web page.

Conclusion

This is a simple tutorial for understanding the concepts of a web server and PWM using the ESP32 board. Instead of an LED you can connect the device which you want to control.
Here we have built a web server, and using the ESP32 PWM pin we control the brightness or intensity of the light. Do try it if you are a beginner and want to start off with IoT projects.
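As a sanity check on the duty-cycle arithmetic described in the PWM section, the following Python sketch (an illustration of the math only, not part of the tutorial's Arduino code) maps an 8-bit PWM value to its duty cycle and average output voltage.

```python
def duty_cycle(pwm_value, resolution_bits=8):
    """Fraction of each PWM period the output stays high, for an n-bit value."""
    max_value = (1 << resolution_bits) - 1  # 255 for the tutorial's 8-bit resolution
    if not 0 <= pwm_value <= max_value:
        raise ValueError("PWM value out of range")
    return pwm_value / max_value

def average_voltage(pwm_value, supply_volts=5.0, resolution_bits=8):
    """Average voltage seen by the LED over one PWM period."""
    return supply_volts * duty_cycle(pwm_value, resolution_bits)

# An 80% duty cycle (PWM value 204 of 255) averages 4 of 5 volts, per the example above.
```

This is why sliding the web-page value from 0 to 255 smoothly dims or brightens the LED: the average power delivered scales with the duty cycle.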
https://www.iotstarters.com/esp32-web-server-pwm-based-led-control/
Given arrays X and Y (preferably both as inputs, but otherwise, with one as input and the other hardcoded), how can I use jq to output the array containing all elements common to both? e.g. what is a value of f such that

echo '[1,2,3,4]' | jq 'f([2,4,6,8,10])'
[2,4]

map(select(in([2,4,6,8,10]))) --> outputs [1,2,3,4]
select(map(in([2,4,6,8,10]))) --> outputs [1,2,3,4,5]

A simple and quite fast (but somewhat naive) filter that probably does essentially what you want can be defined as follows:

# x and y are arrays
def intersection(x;y):
  ( (x|unique) + (y|unique) | sort) as $sorted
  | reduce range(1; $sorted|length) as $i
      ([]; if $sorted[$i] == $sorted[$i-1] then . + [$sorted[$i]] else . end) ;

If x is provided as input on STDIN, and y is provided in some other way (e.g. def y: ...), then you could use this as:

intersection(.;y)

Other ways to provide two distinct arrays as input include:
* using the --slurp option
* using "--arg a v" (or "--argjson a v" if available in your jq)

Here's an even shorter def that's slower but often quite fast in practice:

def i(x;y):
  (x|unique) as $x | (y|unique) as $y
  | (($x + $y) | unique) - (($x - $y) + ($y - $x));

Here's a standalone filter for finding the intersection of arbitrarily many arrays:

# Input: an array of arrays
def intersection:
  def i(y):
    ((unique + (y|unique)) | sort) as $sorted
    | reduce range(1; $sorted|length) as $i
        ([]; if $sorted[$i] == $sorted[$i-1] then . + [$sorted[$i]] else . end) ;
  reduce .[1:][] as $a (.[0]; i($a)) ;

Examples:

[ [1,2,4], [2,4,5], [4,5,6]] #=> [4]
[[]] #=> []
[] #=> null

Of course if x and y are already known to be sorted and/or unique, more efficient solutions are possible. See in particular
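For comparison, the sorted-merge idea behind the first filter can be written in any language. The Python translation below is mine, not part of the answer: deduplicate both arrays, sort the concatenation, and keep each value that equals its predecessor.

```python
def intersection(x, y):
    """Intersection of two lists, mirroring the jq sorted-merge filter above."""
    merged = sorted(sorted(set(x)) + sorted(set(y)))
    # A value appears twice in a row exactly when it occurred in both inputs.
    return [merged[i] for i in range(1, len(merged)) if merged[i] == merged[i - 1]]
```

Like the jq version, this deduplicates first, so repeated elements within a single input do not produce false positives.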
https://codedump.io/share/Emro0Dw0jMK9/1/how-to-get-the-intersection-of-two-json-arrays-using-jq
nurbs

NURBS

This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.

Simple NURBS library with support for NURBS, periodic NURBS, knot insertion/removal, and NURBS split/joint.

import Control.Lens
import Linear.NURBS
import Linear.V2
import Test.Hspec

-- | Simple NURBS of degree 3
test₁ ∷ NURBS V2 Double
test₁ = nurbs 3 [ V2 0.0 0.0, V2 10.0 0.0, V2 10.0 10.0, V2 20.0 20.0, V2 0.0 20.0, V2 (-20.0) 0.0]

-- | Another NURBS of degree 3
test₂ ∷ NURBS V2 Double
test₂ = nurbs 3 [ V2 (-20.0) 0.0, V2 (-20.0) (-20.0), V2 0.0 (-40.0), V2 20.0 20.0]

-- | Make test₁ periodic
testₒ ∷ NURBS V2 Double
testₒ = set periodic True test₁

main ∷ IO ()
main = hspec $ do
	describe "evaluate point" $ do
		it "should start from first control point" $
			(test₁ `eval` 0.0) ≃ (test₁ ^?! wpoints . _head . wpoint)
		it "should end at last control point" $
			(test₁ `eval` 1.0) ≃ (test₁ ^?! wpoints . _last . wpoint)
	describe "insert knot" $ do
		it "should not change nurbs curve" $
			insertKnots [(1, 0.1), (2, 0.3)] test₁ ≃ test₁
	describe "remove knots" $ do
		it "should not change nurbs curve" $
			removeKnots [(1, 0.1), (2, 0.3)] (insertKnots [(1, 0.1), (2, 0.3)] test₁) ≃ test₁
	describe "purge knots" $ do
		it "should not change nurbs curve" $
			purgeKnots (insertKnots [(1, 0.1), (2, 0.6)] test₁) ≃ test₁
	describe "split" $ do
		it "should work as cut" $
			snd (split 0.4 test₁) ≃ cut (Span 0.4 1.0) test₁
	describe "normalize" $ do
		it "should not affect curve" $
			cut (Span 0.2 0.8) test₁ ≃ normalizeKnot (cut (Span 0.2 0.8) test₁)
	describe "joint" $ do
		it "should joint cutted nurbs" $
			uncurry joint (split 0.3 test₁) ≃ Just test₁
		it "should cut jointed nurbs" $
			(cut (Span 0.0 1.0) <$> (test₁ ⊕ test₂)) ≃ Just test₁
		it "should cut jointed nurbs" $
			(cut (Span 1.0 2.0) <$> (test₁ ⊕ test₂)) ≃ Just test₂
	describe "periodic" $ do
		it "can be broken into simple nurbs" $
			breakLoop 0.0 testₒ ≃ testₒ
		it "can be broken in any place" $
			uncurry (flip (⊕)) (split 0.5 (breakLoop 0.0 testₒ)) ≃ Just (breakLoop 0.5 testₒ)

Depends on 5 packages. Used by 1 package.
https://www.stackage.org/package/nurbs
Create a Next.js and Contentful Application That Builds and Deploys with Now

How to deploy your Next.js and Contentful application with Now in a serverless environment

In this guide, you will discover how to create a Next.js app that displays links to posts from the ZEIT blog by utilizing the Contentful client, before deploying with just a single command to ZEIT Now.

Next.js from ZEIT is a production-ready framework that helps you create fast React applications. Contentful is a powerful headless CMS that allows you to rapidly create, manage and distribute content to any platform you like.

By following this guide, you will create a clone of the example app, a starting point to get you up and running with your own Next.js + Contentful app in minutes.

Step 1: Create your Contentful Content

From your Contentful Spaces dashboard, create a new Content Model. That's it for creating content!

Creating API keys

Click the Settings tab and choose the API Keys option, then click the Add API Key button. With the keys created, make a note of both the Space ID and the Content Delivery API - access token; these will be used later on.

That's all the setup required for Contentful: within just a few minutes you have managed to create a Content Model, add content and generate a set of API keys. In the next step, you will create your Next.js app.

Step 3: Creating your Next.js Application

Firstly, create a project directory and cd into it like so:

mkdir my-nextjs-contentful-project && cd my-nextjs-contentful-project

Creating and entering into the project directory.

Next, initialize your project, creating a package.json file in the process:

yarn init -y

Initializing your project with a package.json file.

Next, add the project dependencies:

yarn add contentful next react react-dom

Adding contentful, next, react and react-dom as dependencies to your project.
With the project initialized, create a /pages directory with an index.js file inside that uses the following code:

import { useEffect, useState } from 'react'
import Head from 'next/head'
import Post from '../components/post'

const client = require('contentful').createClient({
  space: process.env.SPACE_ID,
  accessToken: process.env.ACCESS_TOKEN
})

function HomePage() {
  async function fetchContentTypes() {
    const types = await client.getContentTypes()
    if (types.items) return types.items
    console.log('Error getting Content Types.')
  }

  async function fetchEntriesForContentType(contentType) {
    const entries = await client.getEntries({
      content_type: contentType.sys.id
    })
    if (entries.items) return entries.items
    console.log(`Error getting Entries for ${contentType.name}.`)
  }

  const [posts, setPosts] = useState([])

  useEffect(() => {
    async function getPosts() {
      const contentTypes = await fetchContentTypes()
      const allPosts = await fetchEntriesForContentType(contentTypes[0])

The next step will show you how to add an environment variable to the project.

Step 4: Adding Environment Variables

Add a now.json file at the root of your project directory with the following code:

{
  "build": {
    "env": {
      "CONTENTFUL_SPACE_ID": "@contentful_space_id",
      "CONTENTFUL_ACCESS_TOKEN": "@contentful_access_token"
    }
  }
}

An example now.json file for your Next.js + Contentful project.

With your now.json file created, you should add a next.config.js file at the root of your project directory with the code below:

module.exports = {
  env: {
    SPACE_ID: process.env.CONTENTFUL_SPACE_ID,
    ACCESS_TOKEN: process.env.CONTENTFUL_ACCESS_TOKEN
  }
}

An example next.config.js file for your Next.js + Contentful project.

The next.config.js file provides access to environment variables inside your Next.js app. Now, add the following build script to your package.json file:

{
  ...
  "scripts": {
    "build": "next build"
  }
}

Adding a build script to your package.json file for your Next.js + Contentful project.
Next, you will make your API keys available to your application during local development by creating a .env.build file. Create a .env.build file at the root of your project directory with the following code, adding your API keys where instructed:

```
CONTENTFUL_SPACE_ID=your-space-id
CONTENTFUL_ACCESS_TOKEN=your-access-token
```

An example .env.build file for your Next.js + Contentful project.

Lastly, to make your API keys available for cloud deployment, create two Now Secrets with the commands below:

```bash
now secrets add CONTENTFUL_SPACE_ID your-space-id
```

Adding the CONTENTFUL_SPACE_ID secret to your project using Now Secrets.

```bash
now secrets add CONTENTFUL_ACCESS_TOKEN your-access-token
```

Adding the CONTENTFUL_ACCESS_TOKEN secret to your project using Now Secrets.

With those steps out of the way, you are now able to run your application. You can develop your application locally using the following command:

```bash
next dev
```

Using the next dev command to run the app locally.
https://zeit.co/guides/deploying-next-and-contentful-with-now
Dear XXXXX070,

This is correct, you would be filing a non-resident income tax return to the U.S. for revenues derived from the U.S. marketplace. These returns are easy for most individuals to fill out. The problem is you need to make payments. Your return will likely include a Schedule C or C-EZ, which are documents to show net profits from operating the business in the United States. Non-resident aliens do not have to pay the self-employment tax. You will use Form 1040-NR.

You would want a professional tax preparer or CPA to file taxes for you the first year so that you can see how it is done. After that you could do it yourself easily enough. You can find a tax preparer in the U.S. to prepare them for you for about 100 to 250 dollars. You could use H&R Block, GHRCI, Jackson Hewitt, etc. Or, skip them altogether and use the TurboTax software. The cost of the software is tax deductible. And TurboTax has an interview process built in to guide you through easily.

The problem with using the software is that it will automatically prepare an SE tax form for you, which you do not have to pay. (SE tax is self-employment tax.) So you would have to delete the SE form, zero that line in the Form 1040-NR, and submit your return manually. If you use TurboTax, you will also have to load the 1040-NR form before you begin the return. Here: have a look at the forms:

Dear XXXXX, thank you for accepting my answer, and for your feedback. Yes, if you earn less than the personal exemption and standard deduction, you do not have to file a return. That changes each year and is dependent on your filing status. Filing status is: single, married filing separate returns, and married filing a joint return. In your case you would not be filing a joint return.

The personal exemption for 2008 is 3,500. The standard deduction is 5,450 for singles or married filing separate returns. So you would be able to earn 8,950 U.S. dollars
before filing a return (as a non-resident alien earning money in the U.S., and not physically present there).

You can set up relationships in the U.S. where withholding of your payments is made from the earnings, but that withholding would be at 30 percent, and you would have to file a return anyway in order to get back the excess taxes, or taxes that you would not have had to pay based on the gross earnings. This is not possible with retail sales. This is only possible if you are retained as a consultant or independent contractor. Day-to-day consumers are not going to be able to do this.

Your follow-on question: May I please clarify... that if I complete and submit a W-8BEN "Certificate of Foreign Status of Beneficial Owner for United States Tax Withholding" instead of a W-8ECI, then 30% tax will be withheld and given to the IRS, with me receiving the balance (remainder)?

ANSWER: Yes, but you only have withholding if you are dealing business to business, or with agents. Individuals making consumer purchases will not be able to withhold. The general public is ignorant of this kind of thing, and they are not really required to do that, because they are making purchases. So you may still be on the hook for reporting your own income and paying the tax at the end of the year.

For example: if you are retained by a company to provide consulting services, the company would withhold backup taxes of 30% and they, the company, would pay the tax to the federal government. However, if a consumer buys a service or product from you, they are not required to issue any tax payment or collect or withhold any tax, and so would not even know how to do that.

Your follow-on question about the portion of 100K or more: normally this would be taxed, as personal income, at more than 30%. However, this is business income, not personal services income, and in any case you would receive the tax treaty rate of 30%. The tax treaty takes precedence. I understand what you are trying to do.
The tax returns are not that hard to do. Your accountant would probably be able to understand them. Even if he were not, you would only need a U.S. tax preparer to do this one time, and after that you or your accountant would be able to do it on your own.

The problem you have is that there is no mechanism for consumers in general to withhold taxes. One big reason is that you pay taxes not on gross revenues, but on net profits. The consumer has no way of determining that. Setting up any kind of system to enable such transactions would be extremely costly. Consider: if you have 1,000 customers, then you would have 1,000 people making income tax payments in your name. You would have to have a system to produce the tax documents required for the customers to complete and fill in, with specific instructions for turning the money in your name over to the IRS. What consumers would do that? Who would want to buy from you? Consumers want to point and click. How would you determine the net profit from a single sale when you have no exact idea of what your variable expenses would be for the business?

Trust me, friend, the best way is to have your accountant figure your net profits using the Schedule C and then for you to pay tax on that. If your net profits are less than the amounts that I mentioned earlier (8,950), you would not even have to file a return.

You can pay a bonus. There should be a bonus button somewhere. Thank you for your remarks.

Fill out the W-8BEN so they only take out the 30%. This is because you are not solely claiming treaty benefits. The treaty will still exempt you from filing if you earn less than the amount I indicated. You can set it up so that the person who is giving you your paycheck is withholding for you. You have to make sure they understand that you are subject to backup withholding at 30%. I checked, and they will issue a 1099 form, which is a tax document reporting income to the IRS, for any earnings of more than 600 U.S. dollars.
Since there is information sharing between Australia and the U.S., you will have to file a return if you earn more than the amount I mentioned before. The company would in this instance be able to withhold taxes for you. BUT, it could end up being too much, based on your actual net profit, and you may have to file a return to get a refund.
http://www.justanswer.com/tax/1a6bw-looking-utilising-linkshare-an.html
TL;DR: In this series, you will use modern technologies like Vue.js, AWS Lambda, Express, MongoDB, and Auth0 to create a production-ready application that acts like a micro-blog engine. The first part of the series (this one) will focus on the setup of the Vue.js client that users will interact with and on the definition of the Express backend app. The second part will show you how to prepare your app for showtime. There, you will start by signing up to AWS and to MongoLabs (where you will deploy the production MongoDB instance), then you will focus on refactoring both your frontend and backend apps to support different environments (like development and production). You can find the final code developed in this part in this GitHub repository.

Stack Overview

Before starting with the hands-on exercises, let's take a brief overview of each piece that will compose your micro-blog engine.

Vue.js

As stated by the official guide, Vue.js is a progressive framework for building user interfaces. One of its focuses is on enabling developers to adopt the framework incrementally. That is, instead of demanding that developers use it to structure the whole application, Vue.js allows them to use it on specific parts to enhance the user experience on legacy apps. With that said, the guide also states that Vue.js is "perfectly capable of powering sophisticated Single-Page Applications (SPAs) when used in combination with modern tooling (like Webpack) and supporting libraries". In this series, you will have the opportunity to see this great framework in action while creating a SPA from the ground up.

AWS Lambda

AWS Lambda is a serverless compute platform, provided by Amazon, that allows developers to run their code without having to spend too much time thinking about the servers needed to run it. Although the main idea of using a serverless platform is to facilitate the deployment process and the scalability of applications, AWS Lambda is not easy for newcomers.
In fact, AWS Lambda on its own is not enough to run REST APIs like the one you will need for your micro-blog engine. Besides this AWS service, you will also need to use AWS API Gateway to define how external services (or, in this case, a Vue.js client) can communicate with your serverless backend app. This last piece is exactly what makes AWS Lambda not straightforward. So, to avoid wasting your time with the intricacies of AWS API Gateway and AWS Lambda, you will take advantage of an open-source tool called Claudia.js. The goal of this tool is to enable you to deploy your Node.js projects to AWS Lambda and API Gateway easily.

Express

Express is the most popular framework for developing web applications on the Node.js landscape. By using it, you will have access to tons of (and great) documentation, a huge community, and a lot of middleware that will help you achieve your goals. In this series, for example, you will use at least four popular middleware packages to define your backend endpoints: bodyParser, cors, helmet, and morgan. If you haven't heard about them, you will find a brief explanation of what they are capable of doing, and you will see that using them is pretty straightforward. You will also note that defining endpoints and communicating with databases (in this case, MongoDB) couldn't be easier.

MongoDB

MongoDB is a popular NoSQL database that treats data as documents. That is, instead of the classic approach of defining data as tables and rows that relate to each other, MongoDB allows developers to persist and query complex data structures. As you will see, using MongoDB to persist JSON data sent by clients and processed by Node.js (or Express in this case) is really simple.

Auth0

Handling identity on modern applications is not easy. For starters, if developers choose to homegrow their own solution, they will have to create everything from the sign-up page, passing through the password-recovery feature, to the handling of sessions and access tokens.
Not to mention that if they want to integrate with social networks like Facebook or Google, or if they want to allow users from companies that rely on Active Directory or SAML, they will face a scenario that is way more complex. To avoid all these challenges and to enable you to focus on what matters the most to your application (its special features), you will take advantage of Auth0. With Auth0, a global leader in Identity-as-a-Service (IDaaS), you only have to write a few lines of code to get a solid identity management solution, single sign-on, support for social identity providers (like Facebook, GitHub, Twitter, etc.), and support for enterprise identity providers (Active Directory, LDAP, SAML, custom, etc.).

What You Will Build

Throughout this series, you will use the stack described above (mainly Vue.js, Lambda, Express, and MongoDB) to create a micro-blog that contains a single public channel of micro-posts. That is, visitors (unauthenticated users) will be able to see all micro-posts, and registered users will be able to express their minds publicly. Although simple, this application will enable you to learn how to create secure, modern, and production-ready applications with Vue.js and AWS Lambda.

Now, regarding this specific part of the series, you will achieve the following objectives:

- You will bootstrap your Vue.js application with vue-cli.
- You will create your backend app with Express.
- You will initialise a MongoDB instance and use it in your Express app.
- You will enable identity management in both your Vue.js application and your Express backend with the help of Auth0.

So, without further ado, it's time to start developing!

Vue.js, Express, and Mongo: Hands-On!

So, now that you know the stack that you will use and what you will build, it's time to start creating your app.
To keep things organised, you will create a directory to keep both your frontend and backend source code:

```bash
# create the root directory
mkdir vuejs-lambda

# move into it
cd vuejs-lambda
```

Then, as a responsible developer, you will make this directory a Git repository so you can keep your code safe and sound:

```bash
# initialize Git in the directory
git init
```

For now, there is nothing to save in your Git repository but, if you want to save some keystrokes in the future, you can define a Git alias to add and commit files:

```bash
git config --global alias.cm '!git add -A && git commit -m'
```

With this alias in place, you can just issue git cm 'some message' when you want to save your work.

Bootstrapping Vue.js

To create your Vue.js application based on best practices, you will take advantage of the vue-cli tool. So, before using this command line interface, you will have to install it on your machine:

```bash
npm install -g vue-cli
```

Then, in the root directory of your project, you will run the following command:

```bash
vue init webpack client
```

This command will make vue-cli ask a bunch of questions. You can answer them as follows:

- Project name: vuejs-micro-blog-engine
- Project description: A Vue.js Micro-Blog Engine
- Author: Your Name
- Vue build: Runtime + Compiler: recommended for most users
- Install vue-router? Yes
- Use ESLint to lint your code? Yes
- Pick an ESLint preset: Standard
- Set up unit tests: No
- Setup e2e tests with Nightwatch? No
- Run npm install? Yes, use NPM

Note: If you are an advanced user, you can tweak the answers at will. For example, you can choose another ESLint preset like Airbnb or choose to use Yarn instead of NPM. However, keep in mind that if you change answers, the coding style shown here might not be the same as your project expects, or you may have to change a few of the commands shown.
After answering the questions appropriately, you can issue the following commands to check if your frontend Vue.js client is really working:

```bash
# move dir into your new client
cd client

# run the development server
npm run dev
```

The last command will start the development server locally. So, if you open a browser and head to the address printed in the terminal, you will see the following screen:

Good, you can now stop your server by hitting Ctrl + C and save your work:

```bash
git cm 'bootstrapping the Vue.js client'
```

Creating an Express Project

Now that you have your Vue.js frontend application skeleton defined, you can focus on scaffolding your backend application. For that, you will create a new directory called backend and install some dependencies:

```bash
# move back to the project root
cd ..

# create the backend dir
mkdir backend

# move into the backend directory
cd backend

# initialise NPM
npm init -y

# install dependencies
npm i express body-parser cors mongodb morgan helmet
```

So, as you can see, the commands above initialised NPM in the backend directory and installed six NPM packages:

- express: the most popular Node.js web application framework;
- body-parser: an Express middleware that parses request bodies so you can access JSON objects sent by clients;
- cors: an Express middleware to make your endpoints accept cross-origin requests;
- mongodb: the MongoDB native driver for Node.js;
- morgan: an HTTP request logger middleware for Node.js web apps;
- helmet: an Express middleware that helps to secure your apps with various HTTP headers.

Note: If you want more information, you can check the official documentation of each package on the links above.

After installing all dependencies, you can create a new directory to hold the JavaScript source code and define the index.js file inside it:

```bash
# define a directory to hold the javascript code
mkdir src

# create the index.js file
touch src/index.js
```

Then, in this new file, you can add the following code:

```js
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const morgan = require('morgan');
const helmet = require('helmet');

const app = express();

app.use(helmet());
app.use(bodyParser.json());
app.use(cors());
app.use(morgan('combined'));

app.listen(8081, () => {
  console.log('listening on port 8081');
});
```

As you can see, the code above is quite simple.
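If it helps to picture what those app.use calls do, here is a toy model of a middleware chain. This is an assumption-laden sketch for illustration only, not Express internals: each function receives the request and a next callback, and the functions run in the order they were registered.

```javascript
// Toy model of an Express-style middleware chain (illustration only, not the real Express implementation).
function runMiddleware (middlewares, req) {
  let i = 0
  function next () {
    const middleware = middlewares[i++]
    if (middleware) middleware(req, next)
  }
  next()
  return req
}

// Each stand-in "middleware" stamps a list, mimicking helmet, bodyParser, cors, and morgan running in order.
const order = []
runMiddleware([
  (req, next) => { order.push('helmet'); next() },
  (req, next) => { order.push('bodyParser'); next() },
  (req, next) => { order.push('cors'); next() },
  (req, next) => { order.push('morgan'); next() }
], {})
console.log(order.join(' -> ')) // 'helmet -> bodyParser -> cors -> morgan'
```

The ordering matters in real Express apps too: a middleware registered earlier sees the request before the ones registered after it.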
From top to bottom, it:

- starts by importing most of the packages that you installed before (the only one missing is mongodb because you will use it elsewhere);
- then, it defines a new Express web application;
- after that, it configures the helmet, bodyParser, cors, and morgan middleware;
- and, finally, it triggers the server on port 8081.

Right now, there is no reason to start your Express application. Why? Because there are no endpoints defined on it and because you have no MongoDB instance to communicate with. In the next section, you will initialise a MongoDB instance and then create a few endpoints to communicate with it. However, before proceeding, you will be better off creating two new files (in the project root) to help you in your development process.

The first one is a .gitignore file to keep you from committing useless files into your Git repository. So, create this file in the project root directory and copy the contents of this file into it. Or, if you prefer, you can also create and populate this file quite easily with the following commands:

```bash
# move to the project root
cd ..

# create and populate .gitignore
curl >> .gitignore
```

After that, you can create the second file. This one, called .editorconfig, will help you keep your indentation style consistent. Again, you can copy and paste the contents from the internet, or you can use the following command:

```bash
curl >> .editorconfig
```

Great! Now, you are ready to save your progress and move to the next section:

```bash
git cm 'Scaffolding the Express web application.'
```

Preparing a MongoDB Instance

After defining the basic structure of both your backend and frontend applications, you will need to initialise a MongoDB instance to persist your users' data. There are many options to create a new MongoDB instance. For example:

- You can install MongoDB on your development machine, but this would make the process of upgrading to newer versions harder.
- You can use a MongoDB service like MLab, but this is a little extreme and slow for the development machine.
- You can use Docker to initialise a container with a MongoDB instance.

Although the last alternative requires you to have Docker installed in your development machine, this is the best option for the development phase because it allows you to have multiple MongoDB instances at the same time in the same machine. So, if you don't have Docker already installed locally, head to the Docker Community download page and follow the instructions for your OS.

Note: In the next article, you will create an MLab account to use a reliable and globally available instance of MongoDB. Don't worry, as with any other service used in this series, MLab provides a free tier that is more than enough for this series.

After installing it, you can trigger a new MongoDB instance with the following command:

```bash
docker run --name mongo \
  -p 27017:27017 \
  -d mongo
```

Yup, that's it. It's as easy as that to initialise a new MongoDB instance in a Docker container. For more information, you can check the instructions on the official Docker image for MongoDB.

Integrating Vue.js, Express, and MongoDB

You now have all the basic building blocks in place and ready to be used. So, it's time to tie them up to see the stack in action.
Consuming MongoDB Collections with Express

For starters, you will create a new file called routes.js in the backend/src directory and add the following code to it:

```js
const express = require('express');
const MongoClient = require('mongodb').MongoClient;

const router = express.Router();

// retrieve latest micro-posts
router.get('/', async (req, res) => {
  const collection = await loadMicroPostsCollection();
  res.send(
    await collection.find({}).toArray()
  );
});

// insert a new micro-post
router.post('/', async (req, res) => {
  const collection = await loadMicroPostsCollection();
  await collection.insertOne({
    text: req.body.text,
    createdAt: new Date(),
  });
  res.status(200).send();
});

async function loadMicroPostsCollection() {
  const client = await MongoClient.connect('mongodb://localhost:27017/');
  return client.db('micro-blog').collection('micro-posts');
}

module.exports = router;
```

As you can see, the code necessary to integrate your Express application with a MongoDB instance is extremely simple (for this basic app, of course). In less than 30 lines, you defined two routes for your app: one for sending micro-posts to users and one to persist new micro-posts in the database.

Now, to use your new routes, you will have to update the index.js file (the one that resides in the backend/src directory) as follows:

```js
// ... other require statements ...
const routes = require('./routes');

// ... express app definition and middleware config ...

app.use('/micro-posts', routes);

app.listen(8081, () => {
  console.log('listening on port 8081');
});
```

Again, pretty straightforward, isn't it? You just had to require the new file and make your Express app aware of the routes defined on it through the use function.
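The POST route above builds the stored document inline. Extracted as a pure function (a hypothetical helper for illustration; the routes file keeps this inline), the shape it persists looks like this:

```javascript
// Hypothetical helper showing the document shape the POST /micro-posts route persists.
function buildMicroPost (requestBody, now = new Date()) {
  return {
    text: requestBody.text,
    createdAt: now
  }
}

const doc = buildMicroPost(
  { text: 'I love coding' },
  new Date('2018-01-15T10:00:00.000Z')
)
console.log(doc.text) // 'I love coding'
```

MongoDB stores this object as-is: there is no schema to define up front, which is why the route can persist whatever JSON shape the client sends plus a server-side timestamp.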
With these changes in place, you can test your backend with the following commands (or with any HTTP client for that matter):

```bash
# move cursor to the backend directory
cd backend

# start up your Express app
node src

# fetches the micro-posts (for now, an empty array)
curl 0:8081/micro-posts

# add the first micro-post
curl -X POST -H 'Content-Type: application/json' -d '{
  "text": "I love coding"
}' 0:8081/micro-posts

# fetches the micro-posts again
curl 0:8081/micro-posts
```

As you now have your Express app integrated with a MongoDB instance, it's time to save your progress:

```bash
git cm 'integrating Express and Mongo'
```

Consuming Express Endpoints with Vue.js

Now, you can focus on upgrading your Vue.js app to communicate with these two new endpoints. So, first, you will need a new service that will handle this communication. To define this service, create a new file called MicroPostsService.js inside the ./client/src/ directory and add the following code to it:

```js
import axios from 'axios'

const url = 'http://localhost:8081/micro-posts'

class MicroPostsService {
  static getMicroPosts () {
    return new Promise(async (resolve, reject) => {
      try {
        const serverResponse = await axios.get(url)
        const unparsedData = serverResponse.data
        resolve(unparsedData.map(microPost => ({
          ...microPost,
          createdAt: new Date(microPost.createdAt)
        })))
      } catch (error) {
        reject(error)
      }
    })
  }

  static insertMicroPost (text) {
    return axios.post(url, {
      text
    })
  }
}

export default MicroPostsService
```

Note: Whenever you get an answer back from the GET endpoint, you iterate over the micro-posts returned to transform the stringified version of the createdAt property into real JavaScript Date objects. You are doing this so you can manipulate this property more easily.

As you can see, this service depends on Axios, a promise-based HTTP client for JavaScript applications.
So, to install Axios, issue the following command from the ./client directory:

```bash
# using npm to install axios
npm i axios
```

After installing this library, you can wrap your head around the HelloWorld component (this is the first component your users will see when they access your application). In this component, you will use the newly created service to show micro-posts to all users. So, open the HelloWorld.vue file and replace the contents of the <script> tag with the following:

```js
import MicroPostService from '../MicroPostsService'

export default {
  name: 'HelloWorld',
  data () {
    return {
      microPosts: [],
      error: ''
    }
  },
  async created () {
    try {
      this.microPosts = await MicroPostService.getMicroPosts()
    } catch (error) {
      this.error = error.message
    }
  }
}
```

In the new version of this code, you are using the created lifecycle hook to fetch micro-posts from the backend (through the MicroPostService.getMicroPosts function). Then, when the request is fulfilled, you are populating the microPosts property so you can render it on the screen. You are also checking for any error (with try ... catch) and populating the error property if anything goes wrong.

Now, to use these two properties (microPosts and error), you can replace the contents of the <template> tag with the following:

```html
<div class="container">
  <h1>Latest Micro-Posts</h1>
  <p class="error" v-if="error">{{ error }}</p>
  <div class="micro-posts-container">
    <div class="micro-post"
      v-for="(microPost, index) in microPosts"
      v-bind:item="microPost"
      v-bind:index="index"
      v-bind:key="microPost._id">
      <div class="created-at">
        {{ `${microPost.createdAt.getDate()}/${microPost.createdAt.getMonth() + 1}/${microPost.createdAt.getFullYear()}` }}
      </div>
      <p class="text">{{ microPost.text }}</p>
      <p class="author">- Unknown</p>
    </div>
  </div>
</div>
```

This will make the component render the error if anything goes wrong (i.e., you are relying on conditional rendering: v-if="error") or render the microPosts returned by the backend.
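The inline template expression that renders the creation date can be checked in isolation. Here it is as a standalone function (formatCreatedAt is a hypothetical extraction for illustration; the component keeps the expression inline):

```javascript
// Hypothetical extraction of the template's date expression:
// day/month/year, with getMonth() shifted by one because it is zero-based.
function formatCreatedAt (createdAt) {
  return `${createdAt.getDate()}/${createdAt.getMonth() + 1}/${createdAt.getFullYear()}`
}

console.log(formatCreatedAt(new Date(2018, 0, 15))) // '15/1/2018'
```

This only works because the service converted createdAt back into a real Date object; calling getDate() on the raw string returned by the backend would fail.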
For each micro-post (v-for="(microPost, index) in microPosts"), you are telling the component to render the date that it was created (inside the div.created-at element), the text inputted by the user (inside the p.text element), and the author's name (unknown for now).

Lastly, to make your application look a little bit better, you can replace the contents of the <style> tag with the following:

```css
div.container {
  max-width: 800px;
  margin: 0 auto;
}

p.error {
  border: 1px solid #ff5b5f;
  background-color: #ffc5c1;
  padding: 10px;
  margin-bottom: 15px;
}

div.micro-post {
  position: relative;
  border: 1px solid #5bd658;
  background-color: #bcffb8;
  padding: 10px;
  margin-bottom: 15px;
}

div.created-at {
  position: absolute;
  top: 0;
  left: 0;
  padding: 5px 15px 5px 15px;
  background-color: darkgreen;
  color: white;
  font-size: 13px;
}

p.text {
  font-size: 22px;
  font-weight: 700;
  margin-bottom: 0;
}

p.author {
  font-style: italic;
  margin-top: 5px;
}
```

Now, to check if everything is working as expected, you can run the following commands:

```bash
# move to the backend directory
cd ../backend/

# run the backend in the background
node src &

# move back to the Vue.js client directory
cd ../client/

# start the development server
npm run dev
```

After running the development server, head back to your browser to check the new version of your application:

As a reminder, if you want to create new micro-posts, you will have to use an HTTP client like curl for now:

```bash
curl -X POST -H 'Content-Type: application/json' -d '{
  "text": "I want pizza"
}' 0:8081/micro-posts
```

Ok, time to save your progress:

```bash
git cm 'Integrating Vue.js, Express, and MongoDB.'
```

Handling Authentication with Auth0

Awesome, you have all the main building blocks of your app (Vue.js, Express, and MongoDB) integrated and communicating properly. Now, it's time to focus on adding a modern identity management tool to your app so you can identify who is accessing it and let users publish their own micro-posts.
So, before getting into the details of how to integrate Auth0 into your Vue.js app and your Express backend, you will need to create a new Auth0 account. If you don't have one already, now is a good time to sign up for a free Auth0 account.

"Check out how to add modern identity management to Vue.js apps."

Integrating Auth0 and Your Vue.js App

After signing up for your free Auth0 account, you will need to create a representation of your Vue.js app on it. So, head to the Applications page inside the Auth0 management dashboard and click on Create Application. Clicking on it will bring up a small form that will ask you two things:

- The name of your application: you can enter something like "Vue.js Micro-Blog".
- The type of your application: here you will have to choose Single Page Web Applications.

After filling in the form, you can click on the Create button. This will redirect you to a page where you can see tabs like Quick Start, Settings, and Connections. To proceed, click on the Settings tab. On this new page, you will see a form where you can tweak your application configuration. For now, you are only interested in adding values to two fields:

- Allowed Callback URLs: here you will need to add your application's callback URL so that Auth0 knows it can redirect users to this URL after the authentication process.
- Allowed Logout URLs: the same idea, but for the logout process. So, add your application's URL to this field.

After inserting these values into these fields, hit the Save Changes button at the bottom of the page. You can now move back to your code, but don't close the Settings page just yet; you will need to copy some info from it soon.

Back in your code, the first thing you will do is install a package called auth0-js in your Vue.js application:

```bash
# make sure you are in the client directory
cd ./client/

# install auth0-js
npm install auth0-js
```

With this package in place, you will create a new component (which you will call Callback) to handle the authentication response from Auth0.
So, create a new file called Callback.vue inside the ./client/src/components/ directory and define its <script> section as follows:

```html
<script>
import auth0Client from '../AuthService'

export default {
  name: 'Callback',
  async created () {
    await auth0Client.handleAuthentication()
    this.$router.push('/')
  }
}
</script>
```

This is configuring the component to parse the hash (through the auth0Client.handleAuthentication function) returned by Auth0 to fetch the access_token and the id_token of the users that are authenticating. After handling the authentication response, this component will forward your users to the HelloWorld component again (this.$router.push('/')).

Note: You probably realized that you don't have any module called AuthService, right? Don't worry, you will create it soon!

Before refactoring the HelloWorld component to make it aware of the authentication status, you can add a nice message and a nice style to your Callback component. To do this, add the following code after the <script>...</script> tag:

```html
<template>
  <div>
    <p>Loading your profile...</p>
  </div>
</template>

<style scoped>
div > p {
  margin: 0 auto;
  text-align: center;
  font-size: 22px;
}
</style>
```

Then, after creating the Callback component, you will add buttons to your app so users can log in and log out. But first, you have to configure the auth0-js package with your Auth0 details. To do so, you will create a new file called AuthService.js inside the ./client/src/ directory. This file will hold a class (and a singleton instance of this class) that will handle the interaction with Auth0.
So, after creating this file, insert the following code into it:

```js
import auth0 from 'auth0-js'

class Auth {
  constructor () {
    this.auth0 = new auth0.WebAuth({
      domain: '<YOUR-AUTH0-DOMAIN>',
      clientID: '<YOUR-AUTH0-CLIENT-ID>',
      responseType: 'token id_token',
      scope: 'openid profile'
    })

    this.getAccessToken = this.getAccessToken.bind(this)
    this.getProfile = this.getProfile.bind(this)
    this.handleAuthentication = this.handleAuthentication.bind(this)
    this.isAuthenticated = this.isAuthenticated.bind(this)
    this.signIn = this.signIn.bind(this)
    this.signOut = this.signOut.bind(this)
  }

  getAccessToken () {
    return this.accessToken
  }

  getProfile () {
    return this.profile
  }

  handleAuthentication () {
    return new Promise((resolve, reject) => {
      this.auth0.parseHash((err, authResult) => {
        if (err) return reject(err)
        if (!authResult || !authResult.accessToken || !authResult.idToken) {
          return reject(err)
        }
        this.accessToken = authResult.accessToken
        this.idToken = authResult.idToken
        this.profile = authResult.idTokenPayload
        // set the time that the id token will expire at
        this.expiresAt = authResult.expiresIn * 1000 + new Date().getTime()
        resolve()
      })
    })
  }

  isAuthenticated () {
    return new Date().getTime() < this.expiresAt
  }

  signIn () {
    this.auth0.authorize()
  }

  signOut () {
    // clear id token, profile, and expiration
    this.auth0.logout({
      clientID: '<YOUR-AUTH0-CLIENT-ID>',
      returnTo: ''
    })
  }
}

const auth0Client = new Auth()

export default auth0Client
```

As you can imagine, you will have to replace both the <YOUR-AUTH0-DOMAIN> and the <YOUR-AUTH0-CLIENT-ID> placeholders with details from your Auth0 account. So, back in the Auth0 management dashboard (hopefully, you have left it open), you can copy the value from the Domain field (e.g. bk-tmp.auth0.com) and use it to replace <YOUR-AUTH0-DOMAIN>, and copy the value from the Client ID field (e.g. KsX...GPy) and use it to replace <YOUR-AUTH0-CLIENT-ID>.
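The expiry bookkeeping in handleAuthentication and isAuthenticated boils down to two small calculations, shown here as pure functions (hypothetical names, for illustration; the real class stores expiresAt on the instance):

```javascript
// Hypothetical pure versions of the expiry logic in AuthService:
// the token expires `expiresIn` seconds after it was received.
function computeExpiresAt (expiresInSeconds, receivedAtMs) {
  return expiresInSeconds * 1000 + receivedAtMs
}

function isAuthenticated (expiresAtMs, nowMs) {
  return nowMs < expiresAtMs
}

// A token received at t=1,000,000 ms with a 2-hour (7200 s) lifetime:
const receivedAt = 1000000
const expiresAt = computeExpiresAt(7200, receivedAt)
console.log(isAuthenticated(expiresAt, receivedAt + 3600 * 1000)) // true: one hour in, still valid
console.log(isAuthenticated(expiresAt, receivedAt + 7201 * 1000)) // false: one second past expiry
```

The `* 1000` is the important detail: Auth0's expiresIn is expressed in seconds, while JavaScript timestamps are in milliseconds.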
After configuring the auth0-js package, you can open the HelloWorld component (its file resides in the ./client/src/components/ directory) and refactor the <template></template> section as follows:

<template>
  <div class="container">
    <h1>Latest Micro-Posts</h1>
    <div class="users">
      <button v-if="!profile" v-on:click="signIn">
        Sign In
      </button>
      <button v-if="profile" v-on:click="signOut">
        Sign Out
      </button>
      <p v-if="profile">
        Hello there, {{ profile.name }}. Why don't you
        <router-link to="/share-your-thoughts">
          share your thoughts?
        </router-link>
      </p>
    </div>
    <!-- ... p.error and div.micro-posts-container elements stay untouched ... -->
  </div>
</template>

The new version of the template adds buttons to allow users to sign in and sign out, and an area that shows their name alongside a link to ShareThoughts. You will create a new component to handle this route in a bit but, before that, you still have to refactor the <script> area of this component to support the new features:

<script>
import auth0Client from '../AuthService'
import MicroPostService from '../MicroPostsService'

export default {
  name: 'HelloWorld',
  data () {
    return {
      microPosts: [],
      error: '',
      profile: null
    }
  },
  async created () {
    try {
      this.microPosts = await MicroPostService.getMicroPosts()
      this.profile = auth0Client.getProfile()
    } catch (error) {
      this.error = error.message
    }
  },
  methods: {
    signIn: auth0Client.signIn,
    signOut: auth0Client.signOut
  }
}
</script>

As you can see, now you have a new property called profile on the component, and you have defined two methods to handle the two buttons added to the template: signIn and signOut. These methods are just pointers to the methods provided by your AuthService singleton instance.

The last missing piece to have a Vue.js application properly integrated with Auth0 is to register the Callback component as the one responsible for the /callback route.
To achieve this, open the index.js file that resides in the ./client/src/router/ directory and replace its contents with this:

import Vue from 'vue'
import Router from 'vue-router'
import HelloWorld from '@/components/HelloWorld'
import Callback from '@/components/Callback'

Vue.use(Router)

export default new Router({
  mode: 'history',
  routes: [
    {
      path: '/',
      name: 'HelloWorld',
      component: HelloWorld
    },
    {
      path: '/callback',
      name: 'Callback',
      component: Callback
    }
  ]
})

Note: The mode: 'history' property is needed because, without it, your Vue.js app won't be able to properly handle the hash present in the callback URL.

If you test your application now, you will see that it is capable of authenticating users through Auth0 and of showing users' names. However, you still haven't created the route where users will be able to express their minds. You can focus on this task now, but not before saving your progress:

git cm 'integrating the Vue.js client with Auth0'

As you will see, after integrating Auth0 into your app, you will be able to easily add secured areas to your Vue.js app. For that, the first thing you will do is to create a new file called ShareThoughts.vue inside the ./client/src/components/ directory. In this file, you will add the following <script> tag:

<script>
import MicroPostsService from '../MicroPostsService'

export default {
  name: 'ShareThoughts',
  data () {
    return {
      text: ''
    }
  },
  methods: {
    async shareThoughts () {
      await MicroPostsService.insertMicroPost(this.text)
      this.$router.push('/')
    }
  }
}
</script>

As you can see, the idea of this component is to let authenticated users share their thoughts through the shareThoughts method. This method simply delegates the communication with the backend to the insertMicroPost function of the MicroPostsService that you defined before.
After this service finishes inserting the new micro-post in your backend, the ShareThoughts component forwards users to the public page where they can see all micro-posts (including what they just shared).

For now, the layout of this component can be as simple as possible. Adding a label, an input, and a button to it will suffice to get the work done. So, still in the ShareThoughts.vue file, add the following template:

<template>
  <div class="share-thoughts">
    <label for="share-thoughts">What do you want to share?</label>
    <input id="share-thoughts" v-model="text">
    <button v-on:click="shareThoughts">Share!</button>
  </div>
</template>

Then, to make this new component available in your app, you will need to register it under your routes. To do so, open the ./client/src/router/index.js file and update it as follows:

// ... other import statements ...
import ShareThoughts from '@/components/ShareThoughts'
import auth0Client from '../AuthService'

Vue.use(Router)

export default new Router({
  mode: 'history',
  routes: [
    // ... other routes ...
    {
      path: '/share-your-thoughts',
      name: 'ShareThoughts',
      component: ShareThoughts,
      beforeEnter: (to, from, next) => {
        if (auth0Client.isAuthenticated()) {
          return next()
        }
        next('/')
      }
    }
  ]
})

If you run both your frontend app (by running npm run dev from the client directory) and your backend app (by running node src from the backend directory), you will be able to authenticate yourself and navigate to the /share-your-thoughts route to share some thoughts.

Note: If you don't authenticate yourself before trying to access this route, you will get redirected to the main route. This happens because you defined a guard condition on the beforeEnter hook that calls next('/') for unauthenticated users.

Ok, then. From the frontend perspective, you have enough features. So, now it's time to focus on refactoring your backend application to integrate it with Auth0.
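The beforeEnter hook above is just a function of the authentication state, which makes it easy to reason about in isolation. The sketch below factors that decision out into a plain function so it can be exercised without a router; the names and shape are mine, Vue Router only cares that next is eventually called:

```javascript
// Returns the path the router should redirect to, or null to proceed
// with the requested route.
function guardDecision (isAuthenticated) {
  return isAuthenticated ? null : '/'
}

// How a beforeEnter hook would use it (auth0Client passed in for testability).
function beforeEnter (to, from, next, auth0Client) {
  const redirect = guardDecision(auth0Client.isAuthenticated())
  if (redirect === null) {
    return next()        // authenticated: continue to the requested route
  }
  next(redirect)         // unauthenticated: send the user home
}

console.log(guardDecision(true))   // null
console.log(guardDecision(false))  // '/'
```

Keeping the decision separate from the hook also makes it trivial to reuse the same guard on any other secured route you add later.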
But, as always, not before saving your progress:

git cm 'adding a secured route to Vue.js'

Securing Your Express App with Auth0

As you have finished integrating your frontend application with Auth0, the only missing part now is adding Auth0 to your backend. For starters, you will need to install three new packages in your backend project:

# make sure you are in the backend directory
cd ../backend

# install dependencies
npm i express-jwt jwks-rsa auth0

Together, these dependencies will allow you to validate access_tokens sent by clients and will allow your backend app to get the profile of your users (like name and picture). If you need more info about these packages, you can check the following resources: express-jwt, jwks-rsa, and auth0.

Now, to use these packages, you will have to update only a single file: ./backend/src/routes.js. Inside this file, update the code as follows:

// ... other require statements ...
const auth0 = require('auth0');
const jwt = require('express-jwt');
const jwksRsa = require('jwks-rsa');

// ... router definition and router.get('/', ...) ...

// this is a middleware to validate access_tokens
const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: `https://<YOUR-AUTH0-DOMAIN>/.well-known/jwks.json`
  }),
  audience: '<YOUR-AUTH0-AUDIENCE>',
  issuer: `https://<YOUR-AUTH0-DOMAIN>/`,
  algorithms: ['RS256']
});

// insert a new micro-post with user details
router.post('/', checkJwt, async (req, res) => {
  const collection = await loadMicroPostsCollection();
  const token = req.headers.authorization
    .replace('bearer ', '')
    .replace('Bearer ', '');
  const authClient = new auth0.AuthenticationClient({
    domain: '<YOUR-AUTH0-DOMAIN>',
    clientId: '<YOUR-AUTH0-CLIENT-ID>',
  });
  authClient.getProfile(token, async (err, userInfo) => {
    if (err) {
      return res.status(500).send(err);
    }
    await collection.insertOne({
      text: req.body.text,
      createdAt: new Date(),
      author: {
        sub: userInfo.sub,
        name: userInfo.name,
        picture: userInfo.picture,
      },
    });
    res.status(200).send();
  });
});

// ... loadMicroPostsCollection and module.exports ...
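One fragile spot in the handler above is the double replace call used to strip the scheme from the Authorization header: it only covers the exact spellings 'bearer ' and 'Bearer '. A case-insensitive regular expression handles every capitalization and any amount of whitespace in one step; this is a suggested hardening, not what the article's code does:

```javascript
// Strip an RFC 6750 "Bearer" prefix from an Authorization header value,
// regardless of how the client capitalized the scheme.
function extractToken (authorizationHeader) {
  return authorizationHeader.replace(/^bearer\s+/i, '')
}

console.log(extractToken('Bearer abc.def.ghi'))  // abc.def.ghi
console.log(extractToken('bearer abc.def.ghi'))  // abc.def.ghi
console.log(extractToken('BEARER abc.def.ghi'))  // abc.def.ghi
```

Anchoring the pattern with ^ also guarantees the replacement never touches the middle of the token itself.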
As you can see, this new version simply defines a middleware called checkJwt to validate any access_token sent in request headers. Then, it uses the new middleware to secure the function responsible for handling POST HTTP requests (router.post('/', checkJwt, async (req, res) => { ... });). After that, inside this function, you are fetching the access_token from req.headers.authorization so you can use it to get profile details about the user calling your API (for this, you are using auth0.AuthenticationClient and authClient.getProfile). Lastly, when your backend finishes fetching users' profiles, it uses them to add more info into the micro-posts saved in your MongoDB instance.

Note: You have to replace the three occurrences of <YOUR-AUTH0-DOMAIN> with your own Auth0 domain, <YOUR-AUTH0-CLIENT-ID> with the client ID created before, and <YOUR-AUTH0-AUDIENCE> with the audience of an Auth0 API. You will create this API now.

To create an Auth0 API to represent your backend, head to the APIs page in your Auth0 management dashboard and hit the Create API button. This will bring up a form where you will set a name for your API (e.g. Micro-Blog API), an identifier (which is also known as the audience), and a signing algorithm (you can leave it as RS256). Then, after filling in this form, you can click on the Create button and replace <YOUR-AUTH0-AUDIENCE> in the file above with the identifier you chose.

Back to the code, you will need to perform two changes in your Vue.js app. First, you will need to update the MicroPostsService.js file (which you can find in the ./client/src/ directory) so it sends users' access_tokens when submitting new micro-posts:

// ... import axios ...
import auth0Client from './AuthService'

// ... const url ...

class MicroPostsService {
  // ... static getMicroPosts () ...

  static insertMicroPost (text) {
    return axios.post(url, {
      text
    }, {
      headers: { 'Authorization': `Bearer ${auth0Client.getAccessToken()}` }
    })
  }
}

// ... export default MicroPostsService ...
Then, you can update the <template> tag inside the HelloWorld.vue file as follows:

<template>
  <div class="container">
    <!-- ... leave h1, div.users, and p.error untouched ... -->
    <div class="micro-posts-container">
      <div class="micro-post" ...>
        <!-- and simply replace p.author with this: -->
        <p class="author">- {{ microPost.author ? microPost.author.name : 'Unknown' }}</p>
      </div>
    </div>
  </div>
</template>

The last thing you will need to do in your app is to replace the audience property in the object passed to auth0.WebAuth in the ./client/src/AuthService.js file. Now, instead of passing <YOUR-AUTH0-DOMAIN>/userinfo to it, you will need to pass the identifier of the Auth0 API that you created before. In the end, the constructor of the AuthService class will look like this:

import auth0 from 'auth0-js'

class Auth {
  constructor () {
    this.auth0 = new auth0.WebAuth({
      // the following three lines MUST be updated
      domain: '<YOUR-AUTH0-DOMAIN>',
      audience: '<AN-AUTH0-API-IDENTIFIER>',
      clientID: '<AN-AUTH0-CLIENT-ID>',
      redirectUri: '',
      responseType: 'token id_token',
      scope: 'openid profile'
    })

    // ... binds ...
  }

  // ... other methods ...
}

const auth0Client = new Auth()

export default auth0Client

Just don't forget to replace the <YOUR-AUTH0-DOMAIN>, <AN-AUTH0-CLIENT-ID>, and <AN-AUTH0-API-IDENTIFIER> placeholders accordingly. After that, for any new micro-post submitted by an authorized user, your Vue.js app will show their name. To see this in action, issue the following commands (you might have to stop the previous version of your backend app that is running in the background):

# go to the backend directory
cd ./backend

# leave Express running in the background
node src &

# go to the client directory
cd ../client

# run the local development server
npm run dev

Then, head to your app in the browser.
Now, after authenticating yourself, you can share a new thought and your name will appear on the micro-post:

That's it, you have completed the first version of your micro-blog engine and you are now ready to deploy it to production on AWS. But this task will be left to the second part of the series. Time to save your progress!

git cm 'vue.js and express fully integrated with auth0'

"Developing Vue.js apps and integrating them with an Express backend is quite easy."

Conclusion and Next Steps

In the first part of this series, you have created a Vue.js application to work as the user interface of a micro-blog engine. You have also created an Express API to persist micro-posts in a MongoDB instance. Besides that, you have installed and configured Auth0 on both your frontend and backend applications to take advantage of a modern identity management system.

With this, you have finished developing the first version of your micro-blog engine and you are ready to move it to production. So, in the next part of this series, you will prepare your source code to deploy your backend API to AWS Lambda and your frontend Vue.js app to an AWS S3 bucket. Then, you will use Claudia.js, a tool that facilitates AWS Lambda management, to make your backend code live, and will use the AWS CLI (Command Line Interface) tool to push your Vue.js app to AWS S3. I hope you enjoy the process. Stay tuned!
https://auth0.com/blog/vue-js-and-lambda-developing-production-ready-apps-part-1/
Weixin for Flask.

Project description

Flask-Weixin is an implementation of the Weixin (WeChat) messaging API with the flavor of Flask. It can be used without Flask too.

Installation

You can install Flask-Weixin with pip:

$ pip install Flask-Weixin

Or, with setuptools' easy_install, in case you don't have pip:

$ easy_install Flask-Weixin

Eager to get started? It is always the Flask way to create a new instance:

from flask_weixin import Weixin

weixin = Weixin(app)

Or pass the app later:

weixin = Weixin()
weixin.init_app(app)

However, you need to configure it before using it. Here is the configuration list:

- WEIXIN_TOKEN: this is required
- WEIXIN_SENDER: a default sender, optional
- WEIXIN_EXPIRES_IN: does not expire by default

For Flask users, it is suggested that you use the default view function:

app.add_url_rule('/', view_func=weixin.view_func)

@weixin.register('*')
def reply(**kwargs):
    username = kwargs.get('sender')
    sender = kwargs.get('receiver')
    content = kwargs.get('content')
    return weixin.reply(
        username, sender=sender, content=content
    )

The example above will reply with whatever the user sent.
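The WEIXIN_TOKEN setting exists because Weixin verifies your endpoint by sending a signature, a timestamp, and a nonce; the server must prove it knows the shared token. The check itself is simple enough to sketch with the standard library alone. This sketch follows Weixin's documented verification scheme and is not Flask-Weixin's actual source:

```python
import hashlib

def weixin_signature_ok(token, signature, timestamp, nonce):
    """Return True if the request plausibly came from Weixin.

    Weixin's scheme: sort [token, timestamp, nonce] lexicographically,
    concatenate them, SHA-1 the result, and compare with the signature.
    """
    raw = ''.join(sorted([token, timestamp, nonce]))
    return hashlib.sha1(raw.encode('utf-8')).hexdigest() == signature

# Build a valid signature the same way Weixin would, then verify it.
token, timestamp, nonce = 'my-secret-token', '1414243737', '123456'
expected = hashlib.sha1(
    ''.join(sorted([token, timestamp, nonce])).encode('utf-8')
).hexdigest()

print(weixin_signature_ok(token, expected, timestamp, nonce))        # True
print(weixin_signature_ok('wrong-token', expected, timestamp, nonce))  # False
```

This is the work the library's view_func performs for you before dispatching to your registered reply handlers.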
https://pypi.org/project/Flask-Weixin/0.2.0/
Introduction

Animations pique a user's interest in applications. Not only that, animations can also be used to improve UX (user experience), since dramatic transitions and movements on the screen can be a way to retain a user's attention while the content of a page loads.

In this article, we'll go over how to use the Animate On Scroll (AOS) library to animate Angular web pages.

Learning Objectives

At the end of this article, you would have learned how to:

- Install and configure the Animate On Scroll library
- Initialize and animate web pages in Angular applications

Prerequisites

To get the most out of this tutorial, a basic understanding of the following is required:

- HTML
- CSS
- Angular
- Typescript

Let's get started by going over the step-by-step process to achieve the learning objectives of this tutorial.

1. Setting up/installing an Angular app.

An Angular app must be running before we begin at all. This can be accomplished by executing the following sequence of commands:

ng new my-app
// ? Would you like to add Angular routing? Yes
// ? Which stylesheet format would you like to use? CSS

All of our routing configuration would need to be defined in our Angular project's app-routing.module.ts file. Angular CLI will add the app-routing.module.ts file to our Angular project if we answer "YES" to the question "Would you like to add Angular routing?".

cd my-app

This would change our directory to the my-app folder.

ng serve --open

This would serve and open our Angular project on http://localhost:4200 by default. This way we can now view our project.

2. Configuring/installing the Animate On Scroll (AOS) library.

We will install and configure the animation library in this step so that it can be accessed and used in the project.
To install the library, run the following command:

npm install aos

The above command will install the animation library. Once it has been successfully installed, it is important to update the styles array in the angular.json file to include the animation library. To do this, open the angular.json file and add the following line to it:

"node_modules/aos/dist/aos.css"

Having done that correctly, we have successfully installed and configured the AOS library, which makes it ready for use in our project. We may need to restart our server in order for our project to be updated with recent changes after installation, but only if our project appears to be out of date.

3. Initializing/animating with the Animate On Scroll library (AOS).

In this step, we would finally bring our animations to life and make them work as we scroll through our web pages. Let's get started and see what happens.

First, we must open our desired component's TS file, for example, "home-component.ts", import the AOS library, and initialize it. This can be accomplished by following the steps outlined below:

- Import the library: Inside the desired component.ts file, add the import:

import AOS from "aos";

- Initialize the functionality: To get the AOS library functioning, it is important to call the init() function in the ngOnInit of our component.ts file. This can simply be done by adding the following line of code:

AOS.init();

By doing this, the AOS library has been initialized and our animations are ready for action. But before we can see the effects, we must open our component.html file (e.g. home-component.html), which must be the HTML counterpart of the TS file we just worked on, and set animations using the data-aos attribute on our desired divs. Example:

<div data-aos="fade-up">
  <!-- our contents -->
</div>

The code above would add a fade-up animation to the div on scroll, but the capability of the AOS library is not limited to this.
To discover more animations, the official Animate On Scroll website showcases the animations and effects provided by the library. You may check it out here and notice how the effects happen as you scroll down and up the page.

Conclusion

So far in this article, we've been able to see how easy it is to set up an Angular app with Animate On Scroll effects on its pages using the AOS library. Questions and suggestions are always welcome in the comments. See you in the next article. Happy Coding!

Thank you for reading this far. I hope you found the tutorial useful. If you have any questions or comments, please leave them in the comments section.

Discussion (1)

Hi! This is an awesome article on AOS with Angular. Your explanation is clear and easy to understand. I managed to use it successfully in my project. But, I am facing an issue using AOS for Angular SSR/universal. Is this something you can help with? Thank you
https://dev.to/this-is-angular/how-to-implement-animate-on-scroll-in-angular-web-apps-using-the-aos-library-28d7
From: Guillaume Melquiond (guillaume.melquiond_at_[hidden]) Date: 2004-09-22 07:20:01 On Wed, 2004-09-22 at 12:19, Jonathan Wakely wrote: > On Wed, Sep 22, 2004 at 11:27:09AM +0200, Guillaume Melquiond wrote: > > > > right. Version of GCC is being used in numeric/interval/detail/bugs.hpp > > > to discover capabilities of runtime, thus problem. > > > > Not to discover capabilities, it is to discover namespaces. At the time > > of GCC 3.4, the developers decided that the inverse hyperbolic functions > > should not be in the std namespace anymore (they were before). It is the > > reason why the GCC version is tested. > > > > You still haven't say how I can detect that it's the mingw runtime that > > will be used. I need to know a preprocessor test so that i can adapt the > > interval library to this runtime. > > Thanks for looking into this. This part of the code needed some cleanup. As I explained, this code was written by Jens Maurer 4 years ago in the first release of the library. And as much as possible I tried to not modify it while it worked (or I believed it did). So it's clearly outdated nowadays. > However, another test in boost/numeric/interval/detail/bugs.hpp seems > wrong to me: > > # if defined(__GNUC__) && (__GNUC__ == 3) && (__GNUC_MINOR__ > 3) > # define BOOST_NUMERIC_INTERVAL_using_ahyp(a) using ::a > # elif defined(BOOST_HAS_INV_HYPERBOLIC) > # define BOOST_NUMERIC_INTERVAL_using_ahyp(a) using std::a > # endif > > This implies that GCC 3.3 (and earlier) puts the inverse hyperbolic > functions in namespace std. According to a simple test (*) I've just run, > none of GCC 3.0.4 on linux, 3.3.3 on FreeBSD or 3.4.3 on FreeBSD declares > acosh in namespace std. FreeBSD is not necessarily a good example since there has been a few problems with respect to supporting C99 mathematical functions. But I verified with Linux, and you are right: the C99 functions did leave the std namespace a lot sooner than what I thought.
> Only GCC 2.x seems to declare those functions in namespace std, but > that's probably because namespace std == global namespace in GCC 2.x > (all std:: qualifiers are effectively ignored) > > Should the first line of the test above be changed as follows? > > # if defined(__GNUC__) && (__GNUC__ == 3) According to your information, even the version number could be ignored since GCC 2.x assumes the root and the std namespaces are equivalent. > I believe the regression tests pass because the top of the file says: [... snipped whole explanation that makes a lot of sense, thanks ...] > Why is BOOST_HAS_INV_HYPERBOLIC defined if you're using Glibc but not > libstdc++? libstdc++ doesn't make the functions unavailable, it just > doesn't put them in namespace std. I don't know; it's also something I was wondering. But since the library works and is used on a lot more platform than what I have access to, I refrained from experimenting. > Why does one test look at the stdlib macro, but the next one looks at > the compiler version? Because one of the test was written by Jens, and the other one by me later on. :) > I assume the stdlib check is to account for GCC > with STLPort, but then the next test ignores that possiblity and only > looks at the compiler version. I won't pretend to understand exactly > what that file is trying to do. Comments might help. The purpose of this file is to provide information so that the rest of the code doesn't have problem with namespaces. For example, at a time there was a macro BOOST_NUMERIC_INTERVAL_using_max that was dealing with the min and max locations. This code has since been factorized outside of the library in Boost.Config (BOOST_USING_STD_MAX). And so only the macros for the mathematical functions still remain here. I wouldn't mind the whole file being taken into the Boost.Config subsystem, so that I don't have to deal with these issues anymore. However, in the meantime, let's see what we can do to improve the current situation. 
I would modify the BOOST_HAS_INV_HYPERBOLIC macro so that it really means these C99 functions are defined. In case they are not, dummy functions would be defined in a detail namespace so that the compilers don't complain about unknown identifier, and ADL could still be used (this is mandatory). And in all cases, the BOOST_NUMERIC_INTERVAL_using_ahyp macro would point to the correct namespace (root, std, or the dummy detail one). How does it sound? Regards, Guillaume
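The scheme proposed in the last paragraph can be illustrated with a toy version of the macro. Everything below (the DEMO_* macro names, the demo_detail namespace) is invented for illustration and is not the Boost.Interval source; it only shows how a using-declaration can point an unqualified call at the right namespace while leaving ADL intact:

```cpp
#include <cmath>
#include <math.h>   // declares ::asinh on C99-capable platforms

// Pretend configuration result: this platform does provide the
// C99 inverse hyperbolic functions.
#define DEMO_HAS_INV_HYPERBOLIC 1

namespace demo_detail {
    // Dummy fallback so the identifier always exists when the platform
    // lacks the real function (never selected in this configuration).
    inline double asinh(double) { return 0.0; }
}

#if defined(DEMO_HAS_INV_HYPERBOLIC)
#  define DEMO_USING_AHYP(a) using ::a
#else
#  define DEMO_USING_AHYP(a) using demo_detail::a
#endif

double interval_asinh(double x) {
    DEMO_USING_AHYP(asinh);  // pick the namespace once, here
    return asinh(x);         // unqualified call resolves accordingly
}
```

Flipping the DEMO_HAS_INV_HYPERBOLIC define makes the same unqualified call bind to the dummy instead, so no other code needs to change, which is the point of centralizing the decision in one macro.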
https://lists.boost.org/Archives/boost/2004/09/72796.php
Code:

import signal

class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    raise Alarm

signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(3)  # seconds
try:
    while True:
        print('in loop')
    #print('proceed after loop')
    #a = 12345678 ** 123456789
    signal.alarm(0)  # reset the alarm
except Alarm:
    print("Oops, taking too long!")
# whatever else

This code will terminate a process (in unix) for an infinite loop, a long-running function, etc., but my goal was to prohibit someone from entering something like:

12345678 ** 123456789

So the preceding code cuts the infinite while loop off after 3 seconds, but if I uncomment the crazy exponent line, it freezes and does not cut it off after 3 seconds. Why?
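What's happening: CPython only runs Python-level signal handlers at interpreter checkpoints between bytecode instructions. A pure-Python while loop reaches such a checkpoint on every iteration, so the SIGALRM handler fires on time; 12345678 ** 123456789, by contrast, is a single bytecode whose work happens inside one long C-level big-integer operation, so the signal stays pending until that operation finishes. The sketch below demonstrates the interruptible case with a short timer (Unix only, like the original code):

```python
import signal

class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    raise Alarm

signal.signal(signal.SIGALRM, alarm_handler)
signal.setitimer(signal.ITIMER_REAL, 0.2)  # fire SIGALRM after 200 ms

interrupted = False
iterations = 0
try:
    while True:            # pure-Python loop: a checkpoint every iteration
        iterations += 1
except Alarm:
    interrupted = True
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending alarm

print(interrupted)  # True: the handler interrupted the loop
```

To guard against a single long-running C-level operation, the usual escape hatch is to run the untrusted expression in a separate process (e.g. with multiprocessing) and terminate it on timeout, since a signal handler cannot preempt it.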
http://www.python-forum.org/viewtopic.php?p=8202
Storage usage quota

- Introduced in GitLab 12.0.
- Moved to GitLab Free.

A project's repository has a free storage quota of 10 GB. When a project's repository reaches the quota, the project is locked.

To help manage storage, a namespace's owner can view:

- Total storage used in the namespace
- Total storage used per project

To view storage usage, from the namespace's page go to Settings > Usage Quotas and select the Storage tab. The Usage Quotas statistics are updated every 90 minutes.

If your namespace shows N/A as the total storage usage, push a commit to any project in that namespace to trigger a recalculation.

A stacked bar graph shows the proportional storage used for the namespace, including a total per storage item. Click on each project's title to see a breakdown per storage item.

Storage usage statistics

- Introduced in GitLab 13.7.
- It's deployed behind a feature flag, enabled by default.
- It's enabled on GitLab SaaS.
- It's recommended for production use.

The following storage usage statistics are available to an owner:

- Total namespace storage used: Total amount of storage used across projects in this namespace.
- Total excess storage used: Total amount of storage used that exceeds their allocated storage.
- Purchased storage available: Total storage that has been purchased but is not yet used.
https://docs.gitlab.com/ee/user/usage_quotas.html
APL Standard Commands (APL 1.0)

(This is not the most recent version of APL. Use the Other Versions option to see the documentation for the most recent version of APL.)

- Common Properties
- AutoPage command
- Idle command
- Parallel command
- Scroll command
- ScrollToIndex command
- SendEvent command
- Sequential command
- SetPage command
- SetState command
- SetValue command
- SpeakItem command
- SpeakList command

Common Properties

when: If when is set to true, run the command. If false, ignore the command.

AutoPage command

{ "type": "AutoPage", "componentId": "myPager", "duration": 1000 }

A command like the one above will page through the component and display each page, including the first one. Any animated transition between pages will add to the total amount of time.

Idle command

The Idle command does nothing. It may be a placeholder or used to insert a calculated delay in a longer series of commands.

ScrollPage command

componentId: The ID of the ScrollView or Sequence. If omitted, the component issuing the ScrollPage command is used.

distance: The scrolling distance, measured in pages. One "page" is the width or height of the ScrollView or Sequence component. Negative numbers scroll backwards. Setting distance to 0 does not scroll.

SendEvent command

Here is an example of a SendEvent command:

{
  "type": "TouchWrapper",
  "id": "buyButton",
  "onPress": {
    "type": "SendEvent",
    "arguments": [ "he bought it" ]
  },
  "item": {
    "type": "Text",
    "text": "Buy it now!"
  }
}

With SendEvent, set the following properties in addition to the regular command properties. The message is packaged into a TemplateRuntime.UserEvent message sent to the skill. The TemplateRuntime.UserEvent message takes the following approximate form.
{
  "namespace": "TemplateRuntime",
  "name": "UserEvent",
  "payload": {
    "skillID": "XYZZY",
    "directiveID": "PDQBACH",
    "arguments": [
      ARGUMENT1,
      ARGUMENT2,
      :
    ],
    "components": {
      ID1: VALUE1,
      ID2: VALUE2,
      :
    },
    "source": {
      "type": "Button",
      "handler": "Press",
      "id": "buyButton",
      "value": false
    }
  }
}

The APL runtime is responsible for passing the argument data in the larger payload structure, along with source information about the type of component that created the event, the handler called, and the id of the component.

Sequential command

The Sequential command runs a series of commands in order, waiting for the previous command to finish before running the next. The Sequential command is finished when all of its child commands have finished. When the Sequential command stops early, the currently running command stops and no further commands run.

The type of the Sequential command is Sequential. The Sequential command has the following properties in addition to the common command properties.

repeatCount: The number of times to repeat this series of commands. Defaults to 0. Negative values will be ignored.

SetState command

Each component has a state used to evaluate styles and display the component. The SetState command changes one of the component's state settings. The SetState command can be used to change the checked, disabled, and focused states. The focused state may only be set; it can't be cleared. The karaoke and pressed states cannot be directly set, but you can use the SpeakItem command to set the karaoke state.

SetValue command

The SetValue command can be used to change the value of various properties, on a per-component basis. The specific properties that are mutable are documented for each component; for some components, the only property that will accept changes is the color property, and other properties are ignored.

SpeakList command

Read the contents of a range of items inside a common container. Each item will scroll into view before speech. Each item should have a speech property.
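As an illustration of how these commands compose (this example is constructed for this page, not taken from the Amazon documentation), a Sequential command can first disable the button that was pressed via SetState and then notify the skill via SendEvent:

```json
{
  "type": "Sequential",
  "commands": [
    {
      "type": "SetState",
      "componentId": "buyButton",
      "state": "disabled",
      "value": true
    },
    {
      "type": "SendEvent",
      "arguments": [ "he bought it" ]
    }
  ]
}
```

Because Sequential waits for each child to finish, the button is guaranteed to be disabled before the UserEvent reaches the skill; running the same two commands under Parallel would give no such ordering.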
https://developer.amazon.com/es-ES/docs/alexa/alexa-presentation-language/apl-standard-commands-v1.html
Opened 9 years ago
Closed 9 years ago

#13672 closed (duplicate)

template cache tag should gracefully handle uninitialized variables

Description

I keep running into this problem. I have a page that uses the template cache tag. One of the variables that gets used as a cache key is sometimes available in the context, sometimes not (it's added to the context via middleware). If the variable is not initialized, the whole page fails to render.

EX:

context: {key1: 'value', key2: 'another value'}

template:
{% if key3 %} no errors are thrown here {% endif %}
{% cache 1800 mycache key1 key2 key3 %} causes a problem {% endcache %}

result:
Caught VariableDoesNotExist while rendering: Failed lookup for key [key3] in u"[{'key1: 'value', key2: 'another value'}]"

I think it would be best to just have the key be "None" if it's uninitialized instead of throwing an error.

Attachments (1)

Change History (3)

Changed 9 years ago by

comment:1 Changed 9 years ago by

I don't know off hand which do and which don't but it seems fairly common for a tag to fail if the template doesn't exist. Presumably this could be resolved by wrapping {% cache %} in an if statement? I think this might be a case of DDN, since there is a fine line where failing gracefully is good and obfuscating errors is bad. I don't like the idea of thinking I'm caching only to realise later that the variable is not quite making it to the template.

comment:2 Changed 9 years ago by

#13167 is open for tracking the general problem of VariableDoesNotExist being raised by non-existent filter/tag arguments. There is at least one tag (if) that suppresses this behavior, but in general VariableDoesNotExist has been raised for these error cases since before 1.0. The proposal to change the general behavior involves checking TEMPLATE_DEBUG to decide whether to raise an exception or not. Since I believe we want to fix the problem in general and not take a tag-by-tag approach, I'm going to close this as a dupe of that.
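The behavior the reporter is asking for can be sketched outside of Django. The names below are stand-ins, not Django's internals; the point is only the difference between raising on a missing key and falling back to None when building the cache key:

```python
class VariableDoesNotExist(Exception):
    pass

def resolve(name, context):
    """Strict lookup, like the failing behavior in the report."""
    try:
        return context[name]
    except KeyError:
        raise VariableDoesNotExist(name)

def resolve_or_none(name, context):
    """The proposed graceful behavior: missing variables become None."""
    try:
        return resolve(name, context)
    except VariableDoesNotExist:
        return None

context = {'key1': 'value', 'key2': 'another value'}

# Strict resolution fails on key3, just like {% cache ... key3 %} does.
try:
    resolve('key3', context)
except VariableDoesNotExist:
    print('lookup failed for key3')

# Graceful resolution builds a usable cache key anyway.
fragment_key = ':'.join(str(resolve_or_none(k, context))
                        for k in ('key1', 'key2', 'key3'))
print(fragment_key)  # value:another value:None
```

The trade-off discussed in the comments shows up directly here: the graceful version silently varies the cache on the literal string 'None', so a typo in a key name is no longer surfaced as an error.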
https://code.djangoproject.com/ticket/13672
Customizing and Managing Your Site's Appearance - Part 3 [con't]

User Controls

The previous section examined the powerful master page mechanism in ASP.NET. A master page allows the developer to define the structural layout for multiple Web Forms in a separate file and then apply this layout across multiple Web Forms. You can thus move common layout elements out of all the individual pages and into a single master page. As a consequence, the master page feature is a powerful mechanism for reducing code-behind and markup duplication across a Web application. Yet, it is still possible, even when using a master page, for presentation-level (i.e., user interface) duplication to exist in a site. For instance, consider the page shown back in Figure 6.16. In this example, the featured book box is displayed only on the home page. But imagine that you want this box to appear on several other pages, but not on every page (if it was to appear on every page, you would place it in the master page). Your inner programmer should balk at the idea of simply copying and pasting the markup and code for implementing this featured book box to multiple pages. Although such a solution is easy in the short term, maintaining this type of approach in the long run is a real headache; every time you have to make a change to this box, you have to change it in multiple places. User controls are the preferred ASP.NET solution to this type of presentation-level duplication. They provide a cleaner approach to user interface reuse than copying and pasting or using the server-side includes of classic ASP. In addition, user controls in ASP.NET are very simple to create and then use. As well, they follow the same familiar development model as regular Web Forms. Creating and Using User Controls User controls are created in a manner similar to Web Forms. As with Web Forms, a user control can contain markup as well as programming logic.
Also like Web Forms, the programming for a user control can be contained within the same file as the markup, or contained within a separate code-behind file. After a user control is defined, it can then be used in Web Forms, master pages, or even other user controls. The markup for a user control is contained in a text file with the .ascx extension. This file can contain any necessary programming logic within an embedded code declaration block (i.e., within <script runat="server"> tags) in the .ascx user control file. Alternately, the programming code can be contained in a code-behind class for the user control. However, the code-behind class for a user control inherits from the UserControl class, rather than the Page class. Walkthroughs 6.3 and 6.4 demonstrate how to create and then use a simple user control.

Walkthrough 6.3 Creating a User Control

Use the Website → Add New Item menu option in Visual Studio, or right-click the project in Solution Explorer and choose Add New Item. From the Add New Item dialog box, choose the Web User Control template and name the control FeatureBookControl.ascx. Click Add. Notice that a blank user control contains no content other than a Control directive. Unlike with a Web Form, there is no HTML skeleton with a user control. It is completely up to the developer to decide what markup should appear in the user control. Change the user control so that it has the following content. Save the control.

Now that you have created a user control, you can use it in a Web Form, master page, or other user control. To make use of a user control, you must follow two steps:

1. Indicate that you want to use a user control via the Register directive.
2. After registering the user control, place the user control tag in the markup just as you would with any Web server control (including the runat="server" and Id properties).

For instance, if you had the user control specified in step 1, the following markup adds this control.
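As an illustration only (the prefix uc, the tag name FeatureBook, and the control ID are assumed names here, not the book's exact listing), the two steps might look like this:

```aspx
<%@ Register TagPrefix="uc" TagName="FeatureBook"
             Src="~/FeatureBookControl.ascx" %>

<!-- ... later, within the page's markup ... -->
<uc:FeatureBook ID="featureBookBox" runat="server" />
```

The Register directive goes at the top of the page, below the Page directive; the control tag can then be placed anywhere ordinary server-control markup is allowed.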
TagPrefix determines a unique namespace for the user control, which is necessary to differentiate multiple user controls with the same name. TagName is the unique name for the user control, whereas Src specifies the virtual path to the user control, such as MyControl.ascx or ~/Controls/MyControl.ascx.

Core Note: As an alternative to manually entering these two steps, you can also simply drag-and-drop the user control onto your page while it is in Design view in Visual Studio.

Walkthrough 6.4 Using a User Control

Create a new Web Form or edit an existing Web Form. For instance, you could use the Default.aspx page from Listing 6.10. Add the following Register directive to the page. In a sensible spot, add the following markup to the page. If you are using the Default.aspx page from Listing 6.10, replace the markup in the first Content control with the following. Save and test the page. The user control should be displayed in the page.

Adding Data and Behaviors to the User Control

In the user control created in Walkthrough 6.3, the information on the featured book was hardcoded. A more realistic version of this user control would interact with some type of business object or database. Thus, like most Web Forms, most user controls also include some type of programming logic. You also may want to customize some aspect of the user control, so that it appears or behaves differently when used in different pages or controls. For instance, in the example Featured Book user control, it might be useful to specify the category of book you want to see in the control. You can easily add this type of customization to a control, by adding public properties to the user control. These public properties can then be manipulated declaratively or programmatically by the containing page. For instance, Listing 6.14 demonstrates the definition of a FeatureCategory property in the code-behind class for the FeatureBookControl user control.
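The book's Listing 6.14 is not reproduced in this excerpt; purely as an illustration, a code-behind class exposing such a property might look like the following sketch (the field name, default value, and labelCategory control are assumptions, not the book's actual code):

```csharp
// Hypothetical sketch only -- not the book's Listing 6.14.
public partial class FeatureBookControl : System.Web.UI.UserControl
{
    private string featureCategory = "ASP.NET";  // assumed default

    // Public property that containing pages can set declaratively
    // or programmatically.
    public string FeatureCategory
    {
        get { return featureCategory; }
        set { featureCategory = value; }
    }

    protected void Page_Load(object sender, System.EventArgs e)
    {
        // Adjust the control's output based on the property value
        // (labelCategory is an assumed Label control in the .ascx markup).
        labelCategory.Text = "Featured book in " + FeatureCategory;
    }
}
```

A containing page could then set the property declaratively, e.g. with a FeatureCategory="Databases" attribute on the user control tag.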
The Page_Load in this simplified example modifies the output of the control based on the value of this property.

Listing 6.14 FeatureBookControl.ascx.cs

This code-behind class assumes that the markup for the FeatureBookControl user control has been modified, as shown in Listing 6.15.

Listing 6.15 FeatureBookControl.ascx

This public property in the user control can now be used wherever you use this user control. For instance, the following example illustrates one way that this property could be used.

Summary

This chapter examined the different ways to customize the visual appearance of your Web application in ASP.NET 2.0. It began with the simplest, namely the common appearance properties that are available for most server controls. These properties allow you to control colors, borders, and text formatting. Although these properties are very useful for customizing the appearance of your Web output, they do not contain the full formatting power of Cascading Style Sheets. Luckily, ASP.NET server controls fully support styles via the CssClass and the Style properties. I recommended that you minimize the amount of appearance formatting in your server controls as much as possible, and instead externalize appearance settings in your CSS files. The benefit to this approach is that your Web Form's markup becomes simpler, easier to modify, and scales better to different devices. The chapter also covered two important features of ASP.NET 2.0: themes and master pages. Themes provide a way to customize the appearance of Web server controls on a site-wide basis. Themes can still be integrated with CSS; doing so allows the developer to completely separate style from content. The master pages mechanism provides a much sought-after templating technique to ASP.NET. With master pages, elements that are common throughout a site, such as headers, footers, navigation elements, and the basic site layout itself, can be removed from Web Forms and placed instead within a master page.
This significantly simplifies the Web Forms within a site, making the site as a whole easier to create and maintain. The brief final section in the chapter covered user controls. These are an essential part of most real-world Web sites. User controls provide a consistent, object-oriented approach to user interface reuse in ASP.NET. The next chapter covers another vital part of your Web site's appearance, its navigation system. It examines how you can programmatically move from page to page as well as the new controls in ASP.NET 2.0 devoted to navigation: the TreeView and Menu controls.

Exercises

The solutions to the following exercises can be found at my Web site. Additional exercises only available for teachers and instructors are also available from this site.

1. Create a page named PropertyTester.aspx. This page should allow the user to dynamically change various appearance properties of some sample controls on the page, as shown in Figure 6.18. Ideally, the code-behind class uses an instance of the Style class as a data member, whereas event handlers simply modify this instance and apply it to the controls.
2. Create a new theme to be used by the ThemeTester.aspx example from Listing 6.3.
3. Create a master page named ThreeColumns.master that contains a header, three content columns, and a footer. The three columns should each contain a Content control. Create a demonstration page that uses this master.
4. Create a user control named ThemeSelector.ascx that allows the user to change the theme used on the page. This control should only display themes that exist. Create a demonstration page that uses this user control.

Key Concepts

- Master pages
- Named collection
- Named skins
- Session state
- Skins
- Themes
- User controls

References

- Allen, Scott. "Master Pages in ASP.NET."
- Allen, Scott. "Themes in ASP.NET."
- Murkoth, Jeevan C. "Master Pages in ASP.NET 2.0." dotnetdevelopersjournal.com (December 2004).
- Onion, Fritz.
"Master Your Site Design with Visual Inheritance and Page Templates." MSDN Magazine (June 2004).
- Winstanley, Phil. "Skin Treatment: Exploiting ASP.NET 2.0 Themes." asp.netPro Magazine (October 2005).

Printed with permission from the book Core Internet Application Development with ASP.NET 2.0 written by Randy Connolly. ISBN 0321419502. Copyright © 2007 Prentice Hall.
bnbc 1.13.0

The bnbc package provides functionality to perform normalization and batch correction across samples on data obtained from Hi-C (Lieberman-Aiden et al. 2009) experiments. In this package we implement tools for general subsetting of and data extraction from sets of Hi-C contact matrices, as well as smoothing of contact matrices, cross-sample normalization and cross-sample batch effect correction methods. Working with bnbc requires:

- a GRanges object representing the genome assayed, with individual ranges having a width equal to the bin size used to partition the genome;
- a list of (square) contact matrices;
- a DataFrame object containing sample-level covariates (i.e. gender, date of processing, etc).

If you use this package, please cite (Fletez-Brant et al. 2017).

It is well appreciated that Hi-C contact matrices exhibit an exponential decay in the observed number of contacts as a function of the distance between the pair of interacting loci. In this work we operate, as has recently been done (e.g. (Yang et al. 2017)), on the set of all loci interacting at a specific distance, one chromosome at a time. For a given distance \(k\), the relevant set of loci are listed in each contact matrix as the entries comprising the \(k\)-th matrix diagonal (with the main diagonal being referred to as the first diagonal). We refer to these diagonals as matrix “bands”.

This document has the following dependencies:

library(bnbc)

bnbc uses the ContactGroup class to represent the set of contact matrices for the interactions of a given set of genomic loci. The class has 3 slots:

- rowData: a GRanges object that has 1 element for each bin of the partitioned genome.
- colData: a DataFrame object that contains information on each sample (i.e. gender).
- contacts: a list of contact matrices.

We expect each ContactGroup to represent data from 1 chromosome. We are thinking about a whole-genome container. An example dataset for chromosome 22 is supplied with the package.
data(cgEx)
cgEx
## Object of class `ContactGroup` representing contact matrices with
## 1282 bins
## 40 kb average width per bin
## 14 samples

Creating a ContactGroup object requires specifying the 3 slots above:

cgEx <- ContactGroup(rowData=rowData(cgEx), contacts=contacts(cgEx), colData=colData(cgEx))

Note that in this example, we used the accessor methods for each of these slots; there are also corresponding ‘setter’ methods, such as rowData(cgEx)<-. Printing a ContactGroup object gives the number of bins represented by the rowData slot, the width of the bin in terms of genomic distances (i.e. 100kb) and the number of samples:

cgEx
## Object of class `ContactGroup` representing contact matrices with
## 1282 bins
## 40 kb average width per bin
## 14 samples

The InteractionSet package contains a class called InteractionSet which is essentially an extension of the ContactGroup class. The internal storage format is different and InteractionSet is not restricted to square contact matrices like the ContactGroup class. We are interested in porting the bnbc() function to using InteractionSet, but bnbc() extensively uses band matrices and we have optimized Rcpp-based routines for getting and setting bands of normal matrices, which means ContactGroup is pretty fast for our purposes.

To get data into bnbc you need a list of contact matrices, one per sample. We assume the contact matrices are square, with no missing values. We do not require that data have been transformed or pre-processed by various bias correction software, and provide methods for some simple pre-processing techniques. There is currently no standard Hi-C data format. Instead, different groups produce custom formats, often in the form of text files. Because contact matrices are square, it is common to only distribute the upper or lower triangular matrix.
In that case, you can use the following trick to make the matrix square:

mat <- matrix(1:9, nrow = 3, ncol = 3)
mat[lower.tri(mat)] <- 0
mat
##      [,1] [,2] [,3]
## [1,]    1    4    7
## [2,]    0    5    8
## [3,]    0    0    9
## Now fill in the lower triangle from the transposed upper triangle
mat[lower.tri(mat)] <- t(mat)[lower.tri(mat)]
mat
##      [,1] [,2] [,3]
## [1,]    1    4    7
## [2,]    4    5    8
## [3,]    7    8    9

Below, we demonstrate the steps needed to convert a set of hypothetical contact matrices into a ContactGroup object. The object upper.mats.list is supposed to be a list of contact matrices, each represented as an upper triangular matrix. We also suppose LociData to be a GenomicRanges object containing the loci of the contact matrices, and SampleData to be a DataFrame of per-sample information (i.e. cell type, sample name etc). We first convert all contact matrices to be symmetric matrices, then use the constructor method ContactGroup() to create the object.

## Example not run
## Convert upper triangular matrices to symmetric matrices;
## note that the anonymous function must return the modified matrix M.
MatsList <- lapply(upper.mats.list, function(M) {
    M[lower.tri(M)] <- t(M)[lower.tri(M)]
    M
})
## Use ContactGroup constructor method
cg <- ContactGroup(rowData = LociData, contacts = MatsList, colData = SampleData)

For this to work, the contacts list has to have the same names as the rownames of colData.

*.cooler files

The .cooler file format is widely adopted and supported by bnbc. We assume a simple cooler file format (see ?getChrCGFromCools for a full description; importantly, we assume the same interactions are observed in all samples, even if some have a value of 0) of one resolution per file, generated by the cooler program. Our point of entry is to catalog which interactions are stored in the cooler file. We do this by generating an index of the positions of the file, using the function getGenomeIdx().
coolerDir <- system.file("cooler", package = "bnbc")
cools <- list.files(coolerDir, pattern="cool$", full.names=TRUE)
step <- 4e4
bin.ixns.list <- bnbc:::getGenomeIdx(cools[1], step)
bin.ixns <- bin.ixns.list$bin.ixns

We have, as output, a list bin.ixns.list, of two elements. The first is the element bin.ixns, which is a data.table object that lists the set of interactions observed in the cooler file. The other element is a data.frame object bins, which lists the set of genomic bins and their coordinates. These are a convenience used in some functions and generally are not of interest to the end user.

With our index, we can proceed to load our data into memory, one chromosome's data at a time (at this time our method does not handle trans-interactions). We emphasize that with all observations from interactions between loci on one chromosome in memory, our algorithm is extremely efficient, with custom routines for matrix updating, and requires only one pass over the data.

data(cgEx)
cool.cg <- bnbc:::getChrCGFromCools(bin.ixns, files = cools, chr = "chr22", colData = colData(cgEx)[1:2,])
all.equal(contacts(cgEx)[[1]], contacts(cool.cg)[[1]])
## [1] TRUE

In this example, we load the ContactGroup object cgEx into memory to compare with the representation of it in cool files generated by the cooler program. We then use the method getChrCGFromCools() to load an entire chromosome's interaction matrices (observed on all subjects) into memory. At this point, users have a valid ContactGroup object, and can proceed with their analyses as described in subsequent sections.

We provide setter and getter methods for manipulating individual matrix bands of contact matrices as well.
First, we have functions for working with bands of individual matrices (not bnbc related):

mat.1 <- contacts(cgEx)[[1]]
mat.1[1000:1005, 1000:1005]
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]    5   30   36   32   53   20
## [2,]   30    7   36   38   50   16
## [3,]   36   36    3   44   51   15
## [4,]   32   38   44    4   89   39
## [5,]   53   50   51   89   36   55
## [6,]   20   16   15   39   55    5

b1 <- band(mat=mat.1, band.no=2)
band(mat=mat.1, band.no=2) <- b1 + 1
mat.1[1000:1005, 1000:1005]
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]    5   31   36   32   53   20
## [2,]   31    7   37   38   50   16
## [3,]   36   37    3   45   51   15
## [4,]   32   38   45    4   90   39
## [5,]   53   50   51   90   36   56
## [6,]   20   16   15   39   56    5

In this example, the main diagonal of the contact matrix is also the main diagonal of the printed example above. Similarly, band number two, which is the first off-diagonal, is also the first off-diagonal of the printed example. As can be seen from the printed example, updating a matrix band is a symmetric operation, and updates the first off-diagonal in both the upper and lower triangles of the matrix.

To utilize this across a list of contact matrices, we have the cgApply() function, which applies the same function to each of the matrices. It supports parallelization using the parallel package.

To adjust for differences in sequencing depth, we first apply the logCPM transform (Law et al. 2014) to each contact matrix. This transformation divides each contact matrix by the sum of the upper triangle of that matrix (adding 0.5 to each matrix cell and 1 to the sum of the upper triangle), scales the resulting matrix by \(10^6\) and finally takes the log of the scaled matrix (a fudge factor is added to both numerator and denominator prior to taking the logarithm).

cgEx.cpm <- logCPM(cgEx)

Additionally, we smooth each contact matrix with a square smoothing kernel to reduce artifacts of the choice of bin width. We support both box and Gaussian smoothers.
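As a reading aid, the transformation just described could be sketched in R roughly as follows. This is a hypothetical re-implementation, not bnbc's code; in particular the base of the logarithm is an assumption here, and bnbc's logCPM() should be used in practice:

```r
## Hypothetical sketch of the logCPM step described above (assumptions:
## log base 2, and the total taken over the upper triangle including
## the diagonal). Use bnbc::logCPM() in real analyses.
logCPM_sketch <- function(M) {
  total <- sum(M[upper.tri(M, diag = TRUE)]) + 1  # fudge factor in denominator
  log2((M + 0.5) / total * 1e6)                   # fudge factor in numerator
}
```

Because the fudge factors keep both numerator and denominator strictly positive, the transform is defined even for cells with zero observed contacts.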
cgEx.smooth <- boxSmoother(cgEx.cpm, h=5)
## or
## cgEx.smooth <- gaussSmoother(cgEx.cpm, radius=3, sigma=4)

BNBC operates on each matrix band separately. For each matrix band \(k\), we extract each sample's observation on that band and form a matrix \(M\) from those bands; if band \(k\) has \(d\) entries, then after logCPM transformation, \(M \in \mathbb{R}^{n \times d}\). For each such matrix, we first apply quantile normalization (Bolstad et al. 2003) to correct for distributional differences, and then ComBat (Johnson, Li, and Rabinovic 2007) to correct for batch effects. Here we will use bnbc to do batch correction on the first 10 matrix bands, beginning with the second matrix band and ending on the eleventh.

cgEx.bnbc <- bnbc(cgEx.smooth, batch=colData(cgEx.smooth)$Batch, threshold=1e7, step=4e4, nbands=11, verbose=FALSE)
## Found 70 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 95 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 103 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 77 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 63 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 76 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 66 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 82 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 79 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## Found 77 genes with uniform expression within a single batch (all zeros); these will not be adjusted for batch.
## [1] bnbc_1.13.0 SummarizedExperiment_1.21.0
## [3] Biobase_2.51.0 GenomicRanges_1.43.0
## [5] GenomeInfoDb_1.27.0 IRanges_2.25.0
## [7] S4Vectors_0.29.0 MatrixGenerics_1.3.0
## [9] matrixStats_0.57.0 BiocGenerics_0.37.0
## [11] BiocStyle_2.19.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.5 locfit_1.5-9.4 lattice_0.20-41
## [4] fftwtools_0.9-9 png_0.1-7 digest_0.6.27
## [7] R6_2.4.1 tiff_0.1-5 RSQLite_2.2.1
## [10] evaluate_0.14 sva_3.39.0 httr_1.4.2
## [13] zlibbioc_1.37.0 rlang_0.4.8 data.table_1.13.2
## [16] annotate_1.69.0 blob_1.2.1 Matrix_1.2-18
## [19] preprocessCore_1.53.0 rmarkdown_2.5 splines_4.1.0
## [22] BiocParallel_1.25.0 stringr_1.4.0 htmlwidgets_1.5.2
## [25] RCurl_1.98-1.2 bit_4.0.4 DelayedArray_0.17.0
## [28] compiler_4.1.0 xfun_0.18 mgcv_1.8-33
## [31] htmltools_0.5.0 GenomeInfoDbData_1.2.4 bookdown_0.21
## [34] edgeR_3.33.0 XML_3.99-0.5 bitops_1.0-6
## [37] rhdf5filters_1.3.0 grid_4.1.0 nlme_3.1-150
## [40] xtable_1.8-4 DBI_1.1.0 magrittr_1.5
## [43] stringi_1.5.3 XVector_0.31.0 genefilter_1.73.0
## [46] limma_3.47.0 vctrs_0.3.4 EBImage_4.33.0
## [49] Rhdf5lib_1.13.0 tools_4.1.0 bit64_4.0.5
## [52] jpeg_0.1-8.1 abind_1.4-5 survival_3.2-7
## [55] yaml_2.2.1 AnnotationDbi_1.53.0 rhdf5_2.35.0
## [58] BiocManager_1.30.10 memoise_1.1.0 knitr_1.30

Bolstad, B M, R A Irizarry, M Astrand, and T P Speed. 2003. “A Comparison of Normalization Methods for High Density Oligonucleotide Array Data Based on Variance and Bias.” Bioinformatics 19: 185–93.

Fletez-Brant, Kipper, Yunjiang Qiu, David U Gorkin, Ming Hu, and Kasper D Hansen. 2017. “Removing Unwanted Variation Between Samples in Hi-C Experiments.” bioRxiv, 214361.

Johnson, W Evan, Cheng Li, and Ariel Rabinovic. 2007. “Adjusting Batch Effects in Microarray Expression Data Using Empirical Bayes Methods.” Biostatistics 8: 118–27.

Law, Charity W, Yunshun Chen, Wei Shi, and Gordon K Smyth. 2014. “Voom: Precision Weights Unlock Linear Model Analysis Tools for RNA-seq Read Counts.” Genome Biology 15: R29.
Lieberman-Aiden, Erez, Nynke L van Berkum, Louise Williams, Maxim Imakaev, Tobias Ragoczy, Agnes Telling, Ido Amit, et al. 2009. “Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome.” Science 326: 289–93.

Yang, Tao, Feipeng Zhang, Galip Gurkan Yardimci, Fan Song, Ross C Hardison, William Stafford Noble, Feng Yue, and Qunhua Li. 2017. “HiCRep: Assessing the Reproducibility of Hi-C Data Using a Stratum-Adjusted Correlation Coefficient.” Genome Research.
brassow

On May 8, 2006, at 11:55 PM, Neil Brown wrote:

Hi,
We have a report of a system oops during pvmove. What appears to be happening is that core_in_sync is being passed a 'region' which is much too large. When this is indexed into the bitset at lc->sync_bits it hits an unmapped page, and results in an oops. I believe the problem is in bio_to_region. See the patch below. If a section of an lv which is *not* at the start of the lv is being moved using dm-raid1, I think the region number is being calculated wrongly, resulting in the inappropriately large index. We really need to subtract ti->begin from bi_sector before shifting. We will try to get this patch tested on the machine that showed the fault, but I would appreciate any feedback about the patch, as I am still not very familiar with this code.

Thanks,
NeilBrown

Signed-off-by: Neil Brown <neilb suse de>

### Diffstat output
 ./drivers/md/dm-raid1.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff ./drivers/md/dm-raid1.c~current~ ./drivers/md/dm-raid1.c
--- ./drivers/md/dm-raid1.c~current~	2006-05-09 14:47:35.000000000 +1000
+++ ./drivers/md/dm-raid1.c	2006-05-09 14:47:35.000000000 +1000
@@ -111,7 +111,7 @@ struct region {
  */
 static inline region_t bio_to_region(struct region_hash *rh, struct bio *bio)
 {
-	return bio->bi_sector >> rh->region_shift;
+	return (bio->bi_sector - rh->ms->ti->begin) >> rh->region_shift;
 }

 static inline sector_t region_to_sector(struct region_hash *rh, region_t region)

--
dm-devel mailing list
dm-devel redhat com